So this is how the error shows up. I have been using PyTorch for years, and this is the first time I have run into this NotImplementedError (and I am not on a nightly build, either).
Traceback (most recent call last):
File "xxxxx\x.py", line 268, in <module>
print(x(y).shape)
File "xxxxx\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "xxxxx\x.py", line 259, in forward
x = self.features(x)
File "xxxxx\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "xxxxx\lib\site-packages\torch\nn\modules\container.py", line 119, in forward
input = module(input)
File "xxxxx\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "xxxxx\lib\site-packages\torch\nn\modules\module.py", line 201, in _forward_unimplemented
raise NotImplementedError
NotImplementedError
The call to self.forward happens in _call_impl:

result = self.forward(*input, **kwargs)

If you inherit from nn.Module but never implement forward, the call falls through to the base class placeholder _forward_unimplemented, which simply raises NotImplementedError.
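This is easy to reproduce with a minimal sketch (the Broken class below is a hypothetical example, not from the original code):

import torch
import torch.nn as nn

class Broken(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)
    # no forward() defined, so nn.Module's placeholder is inherited

x = torch.randn(1, 4)
Broken()(x)  # -> NotImplementedError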
It turns out that when I used this module, there really was no forward method:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Hswish(nn.Module):
    def __init__(self, inplace=True):
        super(Hswish, self).__init__()
        self.inplace = inplace

    def __swish(self, x, beta, inplace=True):
        # This plain swish is not what H-swish actually uses.
        # It is called H-swish because the sigmoid is made "hard",
        # approximated by ReLU6(x + 3) / 6,
        # which reduces computation for embedded deployment.
        return x * torch.sigmoid(beta * x)

    @staticmethod
    def Hsigmoid(x, inplace=True):
        return F.relu6(x + 3, inplace=inplace) / 6

    def foward(self, x):
        return x * self.Hsigmoid(x, self.inplace)
forward was written as foward…
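For completeness, here is the module with the method name corrected; the shape check at the end is only a quick sanity test, not part of the original script:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Hswish(nn.Module):
    def __init__(self, inplace=True):
        super(Hswish, self).__init__()
        self.inplace = inplace

    @staticmethod
    def Hsigmoid(x, inplace=True):
        # hard sigmoid: ReLU6(x + 3) / 6
        return F.relu6(x + 3, inplace=inplace) / 6

    def forward(self, x):  # correctly spelled, so nn.Module's placeholder is never hit
        return x * self.Hsigmoid(x, self.inplace)

x = torch.randn(2, 16, 32, 32)
print(Hswish()(x).shape)  # torch.Size([2, 16, 32, 32])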