PyTorch: customizing convolution kernel weight parameters

In PyTorch, convolution layers are usually built with nn.Conv2d. In some cases, however, we need to customize the convolution kernel weights, and nn.Conv2d does not let us pass in our own weight parameters. In that case we can use torch.nn.functional.conv2d, commonly abbreviated as F.conv2d:

torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

F.conv2d requires the convolution weight (and optionally the bias) to be passed in explicitly. We therefore build the desired kernel parameters ourselves and feed them to F.conv2d. Below is an example of building a convolution layer with F.conv2d inside a network model class:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # Customized weights: 16 output channels, 1 input channel, 5x5 kernel
        self.weight = nn.Parameter(torch.randn(16, 1, 5, 5))
        # Customized bias: one value per output channel
        self.bias = nn.Parameter(torch.randn(16))

    def forward(self, x):
        # x must keep its 4-D shape (N, C, H, W); do not flatten it before conv2d
        out = F.conv2d(x, self.weight, self.bias, stride=1, padding=0)
        return out
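For illustration, a minimal usage sketch follows; the 1x1x28x28 input shape is just an assumed example, not part of the original:

model = CNN()
x = torch.randn(1, 1, 28, 28)   # assumed example input: batch of 1, single channel, 28x28
out = model(x)
print(out.shape)                # torch.Size([1, 16, 24, 24]) with a 5x5 kernel and no padding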

It is worth noting that trainable weights in a PyTorch layer should be created as nn.Parameter rather than as a plain Tensor (or the old Variable). A Parameter's requires_grad defaults to True, whereas an ordinary Tensor's defaults to False.
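A quick sketch of this default behavior:

w = nn.Parameter(torch.randn(3, 3))
print(w.requires_grad)   # True: Parameters track gradients by default
t = torch.randn(3, 3)
print(t.requires_grad)   # False: plain tensors do not require gradients by default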
