I am trying to generate perturbed images inside a PyTorch DataLoader, by running a trained network (ANN1) in the dataset's `__getitem__` method to produce inputs for a second network (ANN2).
The problem is that I get a CUDA out-of-memory error after a few batches. After some debugging, I found that GPU memory accumulates at each iteration.
I have tried using `del`, but it only freed a very limited amount of GPU memory.
I am wondering whether there is any special mechanism in the DataLoader that might cause this issue?
By the way, when I run only the dataloader with ANN1 in a for loop, or only run ANN2 with the official dataloader, everything works fine.
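Roughly, my `__getitem__` does something like the following (simplified sketch; a toy convolution stands in for the trained ANN1, and `PerturbedDataset` and the perturbation logic are placeholders, not my actual code):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class PerturbedDataset(Dataset):
    """Applies a trained model (ANN1) to each image inside __getitem__."""

    def __init__(self, images, ann1, device="cpu"):
        self.images = images              # tensor of shape (N, C, H, W)
        self.ann1 = ann1.to(device).eval()
        self.device = device

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        x = self.images[idx].to(self.device)
        # no_grad + detach prevent the autograd graph from being kept
        # alive across iterations, which is one common cause of GPU
        # memory accumulating in loops like this.
        with torch.no_grad():
            perturbation = self.ann1(x.unsqueeze(0)).squeeze(0)
        return (x + perturbation).detach().cpu()

# Toy "trained" model standing in for ANN1
ann1 = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
data = torch.randn(8, 3, 16, 16)
loader = DataLoader(PerturbedDataset(data, ann1), batch_size=4)
batch = next(iter(loader))
print(batch.shape)          # torch.Size([4, 3, 16, 16])
print(batch.requires_grad)  # False
```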
Thanks in advance.
question from:
https://stackoverflow.com/questions/65557441/ann-in-pytorch-dataloader