Category Archives: Python

[Solved] Pytorch Download CIFAR10 Dataset Error: urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed

urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed …>

Solution:

Add the following two lines at the top of your script, before any download code runs:

import ssl
ssl._create_default_https_context = ssl._create_unverified_context

Complete example:

import torch
import torchvision
import torchvision.transforms as transforms
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# Download the dataset and adjust the images: the torchvision dataset outputs
# PIL Images with values in [0, 1], which we convert into tensors normalized
# to the standard range [-1, 1]
# transform: the data converter
transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
trainset=torchvision.datasets.CIFAR10(root='./data',train=True,download=True,transform=transform)
# The downloaded data is stored in trainset
trainloader=torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
# DataLoader is a data iterator: it wraps the dataset for batched access
# num_workers=2: two worker processes load the data
# batch_size=4: four images per batch

testset=torchvision.datasets.CIFAR10(root='./data',train=False,download=True,transform=transform)
testloader=torch.utils.data.DataLoader(testset,batch_size=4,shuffle=False,num_workers=2)
classes=('airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

Download result

[Solved] Flask Initialize Database Error: KeyError: 'migrate'

from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

def create_app(register_all=True, **kwargs):
    # ... create the Flask app as before, then add the database
    # package and initialize it under this method:
    db = SQLAlchemy()
    db.init_app(app)
    # Import and initialize Migrate, passing in the db,
    # so that app.extensions['migrate'] gets registered
    migrate = Migrate(db=db)
    migrate.init_app(app)
    return app

directory = current_app.extensions['migrate'].directory

This is the line that raises KeyError: 'migrate'. The 'migrate' extension was never registered, so no migrations can be generated.

So you need to add a Migrate instance and initialize it.

The db must be passed in during that initialization.
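For a quick sanity check that the fix took effect, a small sketch (assuming app is the object returned by create_app() above):

from flask import current_app

app = create_app()
with app.app_context():
    # the lookup that previously raised KeyError: 'migrate' now succeeds
    print(current_app.extensions['migrate'].directory)  # defaults to 'migrations'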

[Solved] RuntimeError: function ALSQPlusBackward returned a gradient different than None at position 3, but the corresponding forward input was not a Variable

class ALSQPlus(Function):
    @staticmethod
    def forward(ctx, weight, alpha, g, Qn, Qp, per_channel, beta):
        # assert alpha > 0, "alpha={}".format(alpha)
        ctx.save_for_backward(weight, alpha, beta)
        ctx.other = g, Qn, Qp, per_channel
        if per_channel:
            sizes = weight.size()
            weight = weight.contiguous().view(weight.size()[0], -1)
            weight = torch.transpose(weight, 0, 1)
            alpha = torch.broadcast_to(alpha, weight.size())
            beta = torch.broadcast_to(beta, weight.size())
            w_q = Round.apply(torch.div((weight - beta), alpha)).clamp(Qn, Qp)
            w_q = w_q * alpha + beta
            w_q = torch.transpose(w_q, 0, 1)
            w_q = w_q.contiguous().view(sizes)
        else:
            w_q = Round.apply(torch.div((weight - beta), alpha)).clamp(Qn, Qp)
            w_q = w_q * alpha + beta
        return w_q

    @staticmethod
    def backward(ctx, grad_weight):
        weight, alpha, beta = ctx.saved_tensors
        g, Qn, Qp, per_channel = ctx.other
        if per_channel:
            sizes = weight.size()
            weight = weight.contiguous().view(weight.size()[0], -1)
            weight = torch.transpose(weight, 0, 1)
            alpha = torch.broadcast_to(alpha, weight.size())
            q_w = (weight - beta)/alpha
            q_w = torch.transpose(q_w, 0, 1)
            q_w = q_w.contiguous().view(sizes)
        else:
            q_w = (weight - beta)/alpha
        smaller = (q_w < Qn).float() # bool value to floating point value, 1.0 or 0.0
        bigger = (q_w > Qp).float() # bool value to floating point value, 1.0 or 0.0
        between = 1.0 - smaller - bigger # mask of values inside the quantization interval
        if per_channel:
            grad_alpha = ((smaller * Qn + bigger * Qp + 
                between * Round.apply(q_w) - between * q_w)*grad_weight * g)
            grad_alpha = grad_alpha.contiguous().view(grad_alpha.size()[0], -1).sum(dim=1)
            grad_beta = ((smaller + bigger) * grad_weight * g).sum().unsqueeze(dim=0)
            grad_beta = grad_beta.contiguous().view(grad_beta.size()[0], -1).sum(dim=1)
        else:
            grad_alpha = ((smaller * Qn + bigger * Qp + 
                between * Round.apply(q_w) - between * q_w)*grad_weight * g).sum().unsqueeze(dim=0)
            grad_beta = ((smaller + bigger) * grad_weight * g).sum().unsqueeze(dim=0)
        grad_weight = between * grad_weight
        # Bug: the returned gradients must correspond, in order, to the
        # parameters of forward, but grad_beta sits at position 3 (the slot for g)
        return grad_weight, grad_alpha, grad_beta, None, None, None, None

RuntimeError: function ALSQPlusBackward returned a gradient different than None at position 3, but the corresponding forward input was not a Variable

The gradients returned by a Function's backward must correspond one-to-one, in order, with the parameters of forward.

Since forward takes (weight, alpha, g, Qn, Qp, per_channel, beta), modify the last line to return grad_weight, grad_alpha, None, None, None, None, grad_beta
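In context, the corrected ending of backward:

    @staticmethod
    def backward(ctx, grad_weight):
        ...
        # forward(ctx, weight, alpha, g, Qn, Qp, per_channel, beta):
        # return one gradient per input, in the same order; inputs that
        # need no gradient (g, Qn, Qp, per_channel) get None
        return grad_weight, grad_alpha, None, None, None, None, grad_beta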

Error: output with shape [1, 224, 224] doesn't match the broadcast shape [3, 224, 224]

The original model expects a three-channel RGB image, but the input was a single-channel grayscale image.

# Error: output with shape [1, 224, 224] doesn't match the broadcast shape [3, 224, 224]
# The model expects an RGB three-channel image; the input here is a single-channel grayscale image.
# #-------------------------------------------------------------------------------------
# from torch.utils.data import DataLoader
# dataloader = DataLoader(dataset, shuffle=True, batch_size=16)
# from torchvision.utils import make_grid, save_image
# dataiter = iter(dataloader)
# img = make_grid(next(dataiter)[0], 4) # arrange the images in a grid, 4 per row, expecting 3-channel output
# to_img(img)
# #-------------------------------------------------------------------------------------
# It seems make_grid did not perform the 3-channel conversion here

The solution is as follows:

from torch import nn
from torchvision import datasets
from torchvision import transforms as T
from torch.utils.data import DataLoader
from torchvision.utils import make_grid, save_image
import numpy as np
import matplotlib.pyplot as plt

transform = T.Compose([
    T.ToTensor(), # converts a numpy array with values 0-255 into a float tensor with values 0-1
    T.Normalize((0.5, ), (0.5, )), # normalize with the per-channel mean and standard deviation
])
dataset = datasets.MNIST('data/', download=True, train=False, transform=transform)
dataloader = DataLoader(dataset, shuffle=True, batch_size=100)

print(type(dataset[0][0]),dataset[0][0].size())
# print(dataset[0][0])
# To draw a tensor image, we must convert it back to a numpy array.
# We do this in the function im_convert(), which takes the tensor image as its parameter.
def im_convert(tensor):
    image = tensor.clone().detach().numpy()
    # torch.clone() returns a new tensor that no longer shares memory with the
    # original data but still remains in the computation graph. Clone supports
    # gradient flow without sharing storage, so it is commonly used when a unit
    # of a neural network needs to be reused.
    # Usually, if the original tensor has requires_grad=True, then:
    # the tensor after clone() has requires_grad=True,
    # the tensor after detach() has requires_grad=False.
    image = image.transpose(1, 2, 0)
    # The tensor's first dimension is the color channel; the second and third
    # are the image height and width in pixels. Each MNIST image is grayscale,
    # a single color channel of 28*28 pixels, so the shape is (1, 28, 28).
    # To draw the image, the shape must be (28, 28, 1), hence the axis swap above.
    print(image.shape)
    image = image*np.array((0.5, 0.5, 0.5)) + np.array((0.5, 0.5, 0.5))
    # Undo the normalization: multiply by the standard deviation, then add the
    # mean. Broadcasting the (28, 28, 1) array against the 3-element arrays also
    # expands the image to (28, 28, 3), which is what makes it drawable as RGB.
    print(image.shape)
    # To keep the range between 0 and 1, we apply clip() with 0 and 1 as the
    # minimum and maximum values, and return the image.
    image = image.clip(0, 1)
    print(image.shape, type(image))
    return image

# iter() creates an object that lets us step through the dataloader one batch
# at a time. We access one batch by calling next() on the dataiter.
# next() fetches our first batch of training data, split into images and labels
dataiter = iter(dataloader)
images, labels = next(dataiter)

fig=plt.figure(figsize=(25, 6))
#fig=plt.figure(figsize=(25, 4)) # shorter output figure than above
for idx in np.arange(20):
    ax=fig.add_subplot(2, 10, idx+1)
    plt.imshow(im_convert(images[idx]))
    ax.set_title([labels[idx].item()])
plt.show()

The final results are as follows:

[Solved] Training yolov5 Error: AttributeError: Can't get attribute 'SPPF' on module

Problem Description:

There was a problem running the yolov5 train.py file:

AttributeError: Can't get attribute 'SPPF' on module 'models.common'… (followed by the file path)

Solution:

1. Open the models/common.py file:

2. Add code:

import warnings

class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
    def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            y1 = self.m(x)
            y2 = self.m(y1)
            return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))


Copy and paste it directly into the common.py file, and make sure import warnings goes at the top! The error occurs because the pretrained weights were saved from a newer YOLOv5 release whose models.common defines SPPF; adding the class lets torch.load unpickle the checkpoint.

Pytorch Error: RuntimeError: expected scalar type Double but found Float

The likely cause is a tensor with the wrong dtype. It may be the input X having the wrong type during backpropagation, or the data having the wrong type during training or testing. If the problem is in the network itself, check which layer raises it and add x = x.to(torch.float32) just before that layer; for example, if the error occurs at the first convolution, cast the input there.

If the problem occurs while training or testing the model, cast the offending data to the expected dtype in the same way.

It is also possible that the labels have the wrong type during training or testing; cast them likewise.
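A minimal sketch of the mismatch and the cast that fixes it (the layer and shapes are illustrative, not from the original post):

import torch
import torch.nn as nn

net = nn.Linear(4, 2)                        # parameters are float32 by default
x = torch.randn(8, 4, dtype=torch.float64)   # double-precision input

# net(x) would raise a dtype-mismatch RuntimeError here,
# e.g. "expected scalar type Double but found Float"
x = x.to(torch.float32)   # cast the input to match the model's weights
out = net(x)              # works now

# If the error appears at the loss instead, cast the labels the same way,
# e.g. labels = labels.to(torch.float32) for a float loss such as MSELoss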

[Solved] AttributeError: ‘NoneType‘ object has no attribute ‘append‘

Problem: in Python, when appending an element to a list, an error is reported: AttributeError: 'NoneType' object has no attribute 'append'.
My code at the time was:

loss=[]
loss=loss.append(0.1)

Solution: change the code to the following

loss=[]
loss.append(0.1)

list.append() updates the list in place and returns None, so its result must not be assigned back to the variable.
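A quick demonstration of why the first version fails on a second append:

loss = []
result = loss.append(0.1)  # append mutates the list in place and returns None
print(result)              # None
print(loss)                # [0.1]

loss = loss.append(0.2)    # loss is now None...
# loss.append(0.3)         # ...so this line would raise the AttributeError above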

RuntimeError: Non RGB images are not supported [How to Fix]

Another version problem… I ran into them all. The torchvision version is too low: io.read_image does not support grayscale images and can only read three-channel color images…

segmentation = mask_to_rle(torchvision.io.read_image(os.path.join(self.root, m['mask']))[0] == 255)

Current version:

torchvision==0.8.2

Upgraded version:

torchvision==0.9.0

Problem solved, yep! Of course, it is also possible to solve this without upgrading torchvision: first convert the grayscale image into a three-channel image and try again.
(Also a reminder: pay attention to matching torch and torchvision versions.)
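A hedged sketch of that workaround, expanding a single-channel image tensor to three channels (the tensor here is illustrative, not the dataset from the original snippet):

import torch

img = torch.randint(0, 256, (1, 224, 224), dtype=torch.uint8)  # grayscale [1, H, W]
img_rgb = img.expand(3, -1, -1)  # repeat the single channel three times -> [3, H, W]
print(img_rgb.shape)  # torch.Size([3, 224, 224])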

[pl.LightningModule] spaCy & pytorch-lightning Error

In a pl.LightningModule, spaCy cannot be used for tokenization, or an error will be reported

1. Using spaCy in the forward pass

...
File "spacy/pipeline/trainable_pipe.pyx", line 75, in spacy.pipeline.trainable_pipe.TrainablePipe.pipe
...

It is possible that, within the PL framework, all objects inside the model are automatically converted into trainable objects; if the original pipe is likewise converted into a TrainablePipe, errors are raised, including the one shown above

2. Avoiding problem 1 by using nlp.pipe

The same problem as in forward: the pipe is still converted into a trainable pipe

3. Avoiding problem 1 by moving the spaCy processing outside the model, as a plain function call

An error is still reported, though a different and rather inexplicable one

Solution:

I didn't find a good solution, so I had to re-implement the required functionality by hand, such as stop-word removal.
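For instance, a minimal sketch of hand-rolled stop-word filtering that sidesteps the spaCy pipeline entirely (it assumes spaCy's English stop-word list can be imported on its own, which does not instantiate any pipe):

from spacy.lang.en.stop_words import STOP_WORDS

def remove_stop_words(text: str) -> str:
    # naive whitespace tokenization instead of spaCy's tokenizer
    return " ".join(tok for tok in text.split() if tok.lower() not in STOP_WORDS)

print(remove_stop_words("this is a simple example of stop word removal"))
# -> "simple example stop word removal"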