During GAN training we often run into this error: RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed.
The error means that during back-propagation the intermediate buffers of the graph have already been released. Many answers suggest changing loss.backward() to loss.backward(retain_graph=True). In most cases, however, that is not where the problem actually lies, and retaining the computation graph makes the cached buffers pile up quickly and blow up GPU memory. The real cause is that the discriminator update and the generator update share the same variables (the generator's output); the fix is simply to call detach() the first time those variables are used, i.e. in the discriminator update. For example, this code:
#### Update Discriminator ###
real_preds = netD(real_gt)
fake_preds = netD(fake_coarse)
#### Update Generator #####
real_preds = netD(real_gt)
fake_preds = netD(fake_coarse)
Change to:
#### Update Discriminator ###
real_preds = netD(real_gt.detach())
fake_preds = netD(fake_coarse.detach())
#### Update Generator #####
real_preds = netD(real_gt)
fake_preds = netD(fake_coarse)
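For context, here is a minimal, self-contained sketch of one GAN training step. The names netG, netD, real_gt and fake_coarse come from the snippet above; the stand-in networks, optimizers, loss and tensor shapes are made up for illustration, not taken from the original project. It shows why detaching the generator output in the discriminator update is what prevents the error:

# Minimal sketch (toy networks and shapes, assumed for illustration only)
import torch
import torch.nn as nn

netG = nn.Linear(8, 8)                   # stand-in generator
netD = nn.Linear(8, 1)                   # stand-in discriminator
optD = torch.optim.Adam(netD.parameters(), lr=1e-4)
optG = torch.optim.Adam(netG.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real_gt = torch.randn(4, 8)              # real samples, no grad history
noise = torch.randn(4, 8)
fake_coarse = netG(noise)                # generator output, carries G's graph

#### Update Discriminator ####
optD.zero_grad()
real_preds = netD(real_gt)
fake_preds = netD(fake_coarse.detach())  # detach: D's backward must not free
                                         # the graph that G still needs
d_loss = bce(real_preds, torch.ones_like(real_preds)) + \
         bce(fake_preds, torch.zeros_like(fake_preds))
d_loss.backward()
optD.step()

#### Update Generator ####
optG.zero_grad()
fake_preds = netD(fake_coarse)           # no detach: gradients flow back to G
g_loss = bce(fake_preds, torch.ones_like(fake_preds))
g_loss.backward()                        # works, because G's graph was never freed
optG.step()

Without the detach(), d_loss.backward() would free the buffers of the generator's graph, and g_loss.backward() would then raise exactly the error quoted at the top.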
This fix was not easy to track down; thanks for your support.