Tag Archives: pytorch 0.4.0

[Solved] volatile was removed and now has no effect. Use `with torch.no_grad():` instead.

Solution: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.

Source code

self.priors = Variable(self.priorbox.forward(), volatile=True)

 

The reason

volatile has been removed in current versions of torch.
Before PyTorch 0.4.0, input = Variable(input, volatile=True) set volatile to True; as long as any input was volatile, the output was also volatile, which guaranteed that no intermediate state for backpropagation was kept. After PyTorch 0.4.0 the volatile mechanism was removed and replaced by functions such as torch.no_grad() and torch.set_grad_enabled(grad_mode).
torch.no_grad() is a context manager.
When using PyTorch, not every operation needs to build a computation graph (the record of the computation that makes gradient backpropagation possible). Tensor operations build the graph by default; wrapping code in with torch.no_grad(): forces the enclosed operations not to build one.
torch.no_grad() therefore affects PyTorch's backpropagation mechanism. At test time, where backpropagation is known not to be needed, this mode saves memory. The same is true of torch.set_grad_enabled(grad_mode).
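
As a minimal illustration (the tensor here is made up for the example), the same multiplication tracks gradients by default but not under either grad-disabling mechanism:

import torch

x = torch.ones(3, requires_grad=True)

y = x * 2
print(y.requires_grad)         # True: a graph was built for backpropagation

with torch.no_grad():          # context manager: no graph inside the block
    y = x * 2
print(y.requires_grad)         # False

torch.set_grad_enabled(False)  # same effect, used as a global switch
y = x * 2
print(y.requires_grad)         # False
torch.set_grad_enabled(True)   # restore the default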

Change to

with torch.no_grad():
	self.priors = Variable(self.priorbox.forward())

or

self.priors = Variable(self.priorbox.forward())
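
Note that Tensor and Variable were merged in PyTorch 0.4.0, so the Variable wrapper is no longer required at all; assuming self.priorbox.forward() already returns a tensor, the fully updated form would simply be:

with torch.no_grad():
    self.priors = self.priorbox.forward()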

pytorch RuntimeError: Error(s) in loading state_dict for DataParallel… Import model error solution

When importing model files in pytorch, the following error is reported.

RuntimeError: Error(s) in loading state_dict for DataParallel:
Unexpected running stats buffer(s) “module.norm1.norm_func.running_mean” and “module.norm1.norm_func.running_var” for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.

Unexpected running stats buffer(s) “module.res5.norm1.norm_func.running_mean” and “module.res5.norm1.norm_func.running_var” for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.

Process finished with exit code 0
According to the hint, the imported model was saved before PyTorch 0.4.0, but we are now using PyTorch 1.0, so we checked the load_state_dict function of the module.
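
The hint can be checked directly: with the current default track_running_stats=False, InstanceNorm2d keeps no running_mean/running_var buffers in its state_dict, so a checkpoint that still contains them produces exactly this mismatch (the channel count below is arbitrary):

import torch.nn as nn

old_style = nn.InstanceNorm2d(64, track_running_stats=True)   # pre-0.4.0 behaviour: buffers kept
new_default = nn.InstanceNorm2d(64)                           # default since 0.4.0: no running stats

print(list(old_style.state_dict().keys()))    # includes running_mean and running_var
print(list(new_default.state_dict().keys()))  # empty: nothing to match the old buffers against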

I guess the state_dict keys have changed with the version.
The solution is to change the loading statement to the following:

    # load the checkpoint's state_dict saved by the old PyTorch version
    model_dict = torch.load(args.test_weight_path)

    # iterate over a copy so entries can be deleted from the original dict safely
    model_dict_clone = model_dict.copy()
    for key, value in model_dict_clone.items():
        # drop the InstanceNorm2d running stats that the current model no longer expects
        if key.endswith(('running_mean', 'running_var')):
            del model_dict[key]

    # strict=False tolerates any remaining key mismatches
    Gnet.load_state_dict(model_dict, strict=False)
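
Alternatively, as the error message itself suggests, if the running statistics are actually needed, the InstanceNorm2d layers can be declared with track_running_stats=True when the network is built, so that the buffers exist and the checkpoint loads without deleting keys. A rough sketch (the module and channel count are only illustrative):

import torch.nn as nn

class NormBlock(nn.Module):                 # hypothetical block, for illustration only
    def __init__(self, channels):
        super().__init__()
        # keep running stats so old checkpoints containing these buffers load cleanly
        self.norm_func = nn.InstanceNorm2d(channels, track_running_stats=True)

    def forward(self, x):
        return self.norm_func(x)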