2021-07-26
Fixing the loss.backward() error when training an LSTM in PyTorch
When training an LSTM or RNN written in PyTorch, loss.backward() raises: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time. Do not simply change the call to loss.backward(retain_graph=True): GPU memory will then grow steadily during training until it runs out: RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.73 GiB total capa...
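The full post is truncated here, but with RNN/LSTM training this error usually means the hidden state carried across batches still references the previous iteration's (already freed) computation graph. Below is a minimal sketch of the common fix, detaching the hidden state each step instead of setting retain_graph=True; the model, sizes, and loop are illustrative assumptions, not the post's actual code.

```python
import torch
import torch.nn as nn

# Toy setup (hypothetical names and sizes, for illustration only).
model = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

hidden = None  # (h, c) carried across batches

for step in range(100):
    x = torch.randn(4, 10, 8)        # fake batch: (batch, seq_len, input_size)
    target = torch.randn(4, 10, 16)

    optimizer.zero_grad()
    out, hidden = model(x, hidden)
    loss = criterion(out, target)

    # Without the detach below, this call fails on the second step with
    # "Trying to backward through the graph a second time", because `hidden`
    # still points into the previous iteration's freed graph.
    loss.backward()
    optimizer.step()

    # The fix: cut the graph between iterations instead of retain_graph=True,
    # so memory does not accumulate across steps.
    hidden = tuple(h.detach() for h in hidden)
```

With the detach in place each backward pass only covers the current batch, which avoids both the "backward through the graph a second time" error and the CUDA out-of-memory growth that retain_graph=True causes.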