Solution: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
Source code
self.priors = Variable(self.priorbox.forward(), volatile=True)
The reason
volatile has been removed in recent versions of torch.
Before PyTorch 0.4.0, input = Variable(input, volatile=True) set volatile to True; as long as any input was volatile, the output was volatile as well, which guaranteed that no intermediate state was kept for backpropagation. From PyTorch 0.4.0 onward, the volatile mechanism was removed and replaced by functions such as torch.no_grad() and torch.set_grad_enabled(mode).
torch.no_grad() is a context manager.
When using PyTorch, not every operation needs to build a computation graph (the recorded structure of the computation that enables gradient backpropagation). Operations on tensors that require gradients build this graph by default; when it is not needed, you can wrap the code in with torch.no_grad(): to keep the enclosed operations from constructing it.
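A minimal sketch of this behavior (the tensor x is purely illustrative):

import torch

x = torch.ones(3, requires_grad=True)

y = x * 2
print(y.requires_grad)  # True: the multiplication was recorded in the graph

with torch.no_grad():
    z = x * 2
print(z.requires_grad)  # False: no graph is built inside no_grad()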
torch.no_grad() affects PyTorch's backpropagation mechanism: at test time, since it is certain that backpropagation will not be run, this mode saves memory. The same is true of torch.set_grad_enabled(mode).
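torch.set_grad_enabled() works the same way but takes a boolean, which is convenient when one code path serves both training and evaluation; a small sketch, with is_train as an illustrative flag:

import torch

x = torch.ones(3, requires_grad=True)
is_train = False

with torch.set_grad_enabled(is_train):
    y = x * 2
print(y.requires_grad)  # False here; it would be True if is_train were True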
Change it to
with torch.no_grad():
    self.priors = Variable(self.priorbox.forward())
or simply drop the volatile=True argument:
self.priors = Variable(self.priorbox.forward())
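Note that Variable itself has been a no-op wrapper since PyTorch 0.4.0 (Variable and Tensor were merged), so assuming self.priorbox.forward() returns an ordinary tensor, the fix can also be written without it:

with torch.no_grad():
    self.priors = self.priorbox.forward()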