Category Archives: Python

[Solved] PyCharm Paddle Error: (External) CUDA error(35), CUDA driver version is insufficient for CUDA runtime version

Error content:

UserWarning: You are using GPU version Paddle, but your CUDA device is not set properly. CPU device will be used by default.
  "You are using GPU version Paddle, but your CUDA device is not set properly. CPU device will be used by default."
Traceback (most recent call last):
  File "D:/python-pic/exm4/fruits/01_fruits.py", line 178, in <module>
    place=fluid.CUDAPlace(0)  # Run on GPU
OSError: (External) CUDA error(35), CUDA driver version is insufficient for CUDA runtime version. 
  [Hint: 'cudaErrorInsufficientDriver'. This indicates that the installed NVIDIA CUDA driver is older than the CUDA runtime library. This is not a supported configuration.Users should install an updated NVIDIA display driver to allow the application to run.] (at ..\paddle\fluid\platform\gpu_info.cc:108)

Reason:

The installed NVIDIA CUDA driver is older than the CUDA runtime library, which is not a supported configuration. Install an updated NVIDIA display driver to allow the application to run.

Solution: update the NVIDIA graphics driver to a version at least as new as the CUDA runtime that Paddle was built against.
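As a stop-gap while the driver cannot be updated, a minimal sketch (using the same fluid API as the traceback; not the real fix) falls back to CPU execution when constructing the CUDA place fails:

import paddle.fluid as fluid

# Minimal sketch, a stop-gap only: the real fix is updating the NVIDIA driver.
# Fall back to the CPU when constructing CUDAPlace fails, as it does in the traceback above.
try:
    place = fluid.CUDAPlace(0)  # raises OSError here when the driver is too old
except OSError:
    place = fluid.CPUPlace()
exe = fluid.Executor(place)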

[Solved] RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

1. Problems

While practicing with PyTorch today and preparing to use the GPU, the following error occurred:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

 

2. Code (already adjusted so that it runs correctly)

import torch.optim
import torchvision.datasets

# Preparing the data set
from torch import nn
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from time import time

print(torch.cuda.is_available())

train_data = torchvision.datasets.CIFAR10(root="./dataset",
                                          train=True,
                                          transform=torchvision.transforms.ToTensor(),
                                          download=True)

test_data = torchvision.datasets.CIFAR10(root="./dataset",
                                         train=False,
                                         transform=torchvision.transforms.ToTensor(),
                                         download=True)
print("training_set_data_length: %d" % len(train_data))
print("Test set data length: %d" % len(test_data))

# Load data using DataLoader
train_data_loader = DataLoader(train_data, batch_size=64)
test_data_loader = DataLoader(test_data, batch_size=64)


# Build the neural network (in a separate .py file)
class Net(nn.Module):
    def __init__(self) -> None:
        super(Net, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2, 2),
            nn.Flatten(),
            nn.Linear(1024, 64),
            nn.Linear(64, 10)

        )

    def forward(self, x):
        x = self.model(x)
        return x


# Create a network model
net = Net()
# Only the model, data, and loss function can run on the GPU
# if torch.cuda.is_available():
#     net = net.cuda()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)


# Loss function
loss_fn = nn.CrossEntropyLoss()
# if torch.cuda.is_available():
#     loss_fn = loss_fn.cuda()
loss_fn.to(device)

# Optimizer
learning_rate = 1e-2 # 0.01
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)

# Set the parameters of the training network
# Record the number of training steps
total_train_step = 0
# Record the number of test steps
total_test_step = 0
# Number of training rounds
epoch = 10

start_time = time()
writer = SummaryWriter("./logs/train")
for i in range(epoch):
    print("------Round %d of training ------" % (i + 1))

    # Training steps
    for data in train_data_loader:
        imgs, targets = data
        if torch.cuda.is_available():
            imgs, targets = imgs.cuda(), targets.cuda()
        # imgs.to(device)
        # targets.to(device)
        outputs = net(imgs)
        loss = loss_fn(outputs, targets)

        # Optimizer optimization model
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step += 1
        if total_train_step % 100 == 0:
            end_time = time()
            print(end_time - start_time)
            print("training_step: {}, loss: {}".format(total_train_step, loss.item())) # .item() can convert the tensor type to a number
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # Test Steps
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad(): # Disable gradient tracking during testing; no optimization is needed here
        for data in test_data_loader:
            imgs, targets = data
            if torch.cuda.is_available():
                imgs, targets = imgs.cuda(), targets.cuda()
            # imgs.to(device)
            # targets.to(device)
            outputs = net(imgs)
            loss = loss_fn(outputs, targets)

            total_test_loss += loss.item()
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy += accuracy
    print("loss on the overall test set: {}".format(total_test_loss))
    print("Percent correct on the overall test set: {}".format(total_accuracy/len(test_data)))

    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy/len(test_data), total_test_step)
    total_test_step += 1

    # Save the model
    # torch.save(net.state_dict(), "model_{}.pth".format(i))
    # print("Round {} training model saved".format(i))

writer.close()

3. Solutions

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
for data in train_data_loader:
    imgs, targets = data
    if torch.cuda.is_available():
        imgs, targets = imgs.cuda(), targets.cuda()
    # imgs.to(device)
    # targets.to(device)
    outputs = net(imgs)

In the code above, the bare calls imgs.to(device) and targets.to(device) have no effect, because Tensor.to() returns a new tensor rather than moving the existing one in place. The inputs therefore stay on the CPU as torch.FloatTensor while the model weights are torch.cuda.FloatTensor, which produces the error. Moving the data with .cuda() (and assigning the result) fixes it:

if torch.cuda.is_available():
    imgs, targets = imgs.cuda(), targets.cuda()

This resolves the mismatch between the input and weight types.
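Equivalently, .to(device) also works, as long as its return value is assigned back:

# Tensor.to() is not in-place, so assign the result.
imgs, targets = imgs.to(device), targets.to(device)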

4. Reference

https://stackoverflow.com/questions/59013109/runtimeerror-input-type-torch-floattensor-and-weight-type-torch-cuda-floatte

[Solved] cx_Oracle.DatabaseError: Error while trying to retrieve text for error ORA-01804

Error: 

When connecting to Oracle with cx_Oracle, the following error occurs:

cx_Oracle.DatabaseError: Error while trying to retrieve text for error ORA-01804

Sample code:

import cx_Oracle
conn = cx_Oracle.connect(user, pwd, self.ois_tns)

 

Solution: check the Oracle environment variables in the .bash_profile of the Linux user that runs this code on the server, for example:

export ORACLE_HOME=/test/home/oracle/product/11.2.0.4
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export TNS_ADMIN=$ORACLE_HOME/network/admin
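After editing .bash_profile (and re-sourcing it or logging in again), a quick sanity check from Python (a minimal sketch) confirms the variables are visible to the process that runs cx_Oracle:

import os

# Print the Oracle-related variables as the Python process sees them.
for var in ("ORACLE_HOME", "LD_LIBRARY_PATH", "TNS_ADMIN"):
    print(var, "=", os.environ.get(var, "<not set>"))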

 

Python Error: ImportError: cannot import name ‘logsumexp’ from ‘scipy.misc’(Anaconda3\lib\site-packages\scipy\misc)

How to Solve Error:

ImportError: cannot import name ‘logsumexp’ from ‘scipy.misc’(Anaconda3\lib\site-packages\scipy\misc)
or
ImportError: cannot import name ‘comb’ and ‘logsumexp’

The fix is to change the code inside the gensim package directly (although in general it is best not to modify installed packages indiscriminately). The reason for the error is that logsumexp was removed from scipy.misc in newer SciPy releases, so the old import in gensim no longer works.
Modify the following codes in ldamodel.py
from scipy.misc import logsumexp
to
from scipy.special import logsumexp
The approximate location is shown above; in my case the version was gensim==3.0.1, where the problematic import is around line 56 of ldamodel.py. The exact location depends on the gensim version, and the gensim package should really be updated to a newer release, which already uses the new import.
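If you would rather not edit the installed package, a hedged workaround for your own code is a compatibility import that works on both old and new SciPy versions:

# Try the new location first, fall back to the old one on very old SciPy releases.
try:
    from scipy.special import logsumexp, comb
except ImportError:
    from scipy.misc import logsumexp, comb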

Chromedriver Install Error: File “C:\python37\lib\site-packages\selenium\webdriver\common\service.py”, line 76, in start stdin=PIPE) File…

When starting chromedriver through Selenium, the following error occurs:

File “C:\python37\lib\site-packages\selenium\webdriver\common\service.py”, line 76, in start stdin=PIPE) File “C:\python37\lib\subprocess.py”, line 756, in init restore_signals, start_new_session) File “C:\python37\lib\subprocess.py”, line...

 

Reason: the installation path is the problem. Selenium cannot find chromedriver.exe, so put chromedriver.exe into the Python installation path, i.e. C:\python37\Scripts.
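Alternatively, a hedged sketch (Selenium 3.x style) is to point Selenium at chromedriver.exe explicitly instead of relying on the PATH; the path below reuses the C:\python37 layout from the traceback:

from selenium import webdriver

# Pass the driver location explicitly so it does not have to be on the PATH.
driver = webdriver.Chrome(executable_path=r"C:\python37\Scripts\chromedriver.exe")
driver.get("https://www.example.com")
driver.quit()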

Django CSV file Error: UnicodeDecodeError: ‘utf-8’ codec can’t decode byte 0xb5 in position 0: invalid start byte

UnicodeDecodeError: ‘utf-8’ codec can’t decode byte 0xb5 in position 0: invalid start byte

Opening this CSV file in Jupyter with pd.read_csv works without problems.

But after cleaning the data and saving it as a new CSV, opening that file raises the error above.

Solution: do the cleaning work directly in views.py, store the rows in order in a list, and loop over the list to write into the database. The code is below (iterating row by row is admittedly not very efficient; better approaches are welcome).

df = pd.read_csv(r"xxxxx\xxx.csv", encoding='utf-8')
... # data clean-up
ls = []
for index, row in df.iterrows():
    res = []
    for i in df:
        res.append(row[i])
    ls.append(res)  # append the row once all columns have been collected
for i in range(len(ls)):
    try:
        XXX.objects.create(title=ls[i][0], rating=ls[i][1])
    except Exception as e:
        print(e)
return HttpResponse('Data saved successfully')
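If you still need the intermediate CSV instead of writing straight to the database, a minimal sketch (file names and encodings are assumptions) is to save the cleaned frame with an explicit encoding and read it back with the same one:

import pandas as pd

# Write the cleaned frame with an explicit encoding and read it back with the
# same encoding, so the new CSV no longer triggers the UnicodeDecodeError.
df = pd.read_csv(r"xxxxx\xxx.csv", encoding="utf-8")
# ... data clean-up ...
df.to_csv("cleaned.csv", index=False, encoding="utf-8-sig")
cleaned = pd.read_csv("cleaned.csv", encoding="utf-8-sig")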

pytorch DDP Accelerate Error: [W reducer.cpp:362] Warning: Grad strides do not match bucket view strides.

When using DDP (DistributedDataParallel) to accelerate PyTorch training, the following warning appears:

[W reducer.cpp:362] Warning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed.  This is not an error, but may impair performance.

The reason is that the input tensor is no longer contiguous in memory after it has been transformed by transpose or permute.

The fix is simple: call .contiguous() on the tensor after the transpose or permute so that its memory layout becomes contiguous again.

For example:

# error codes:
input_tensor = ori_tensor.transpose(1, 3)

# Modified codes:
input_tensor = ori_tensor.transpose(1, 3).contiguous()

[Solved] transformers Install Error: error can‘t find rust compiler

After reinstalling my system I had to install transformers again; I am recording the bug I ran into so it can be looked up later.

When reinstalling with the pip install transformers command under Windows, the following error is reported:

error: can't find Rust compiler
      
    If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler.
      
    To update pip, run:
      
        pip install --upgrade pip
      
    and then retry package installation.
      
    If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain.
    [end of output]
  
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects

Following the error message, I first ran pip install --upgrade pip, which did not help, so the next step was to install the Rust compiler as the message suggests. Go to the official Rust website, download the installer that matches your system (the 64-bit installer in my case), run the downloaded .exe file, and accept the default configuration during installation.

According to the official documentation, all of Rust's tools live in the ~/.cargo/bin directory, including the rustc, cargo and rustup commands, so this directory has to be on the PATH. The Windows installer configures the environment variable automatically, but it only takes effect after the computer is restarted. After restarting, run the installation command again:

pip install transformers

The installation then completes successfully.

Python Error: [9880] failed to execute script [How to Solve]

1. When packaging the game, the exe in the dist folder flashes and closes immediately, so the error cannot be read directly.

After recording the screen with a video recorder, the error could be read from the recording.

2. According to the error message, the problem is on line 18 of scoreboard.py, in the font setting.

3. Problem-solving:

The font was set to None, which could not be resolved and caused the error. Change None to a system font; here it was changed to 'SimHei'.
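A hedged sketch of the font fix, assuming the game uses pygame (the post does not name the library):

import pygame

pygame.font.init()
# font = pygame.font.SysFont(None, 48)      # the post reports that None here crashed the packaged exe
font = pygame.font.SysFont('SimHei', 48)    # name an installed system font explicitly instead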

While looking into it I also found that other people hit a different cause: the image path. A relative path may not resolve correctly after packaging, so the absolute path has to be written instead.

PS: At this point the game still only runs on my own computer, again because of the image path problem: the absolute path on my machine does not exist on other people's computers. To let others play as well, the path handling still has to be changed.
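For the image path problem, a common hedged pattern (assuming the exe was built with PyInstaller, which the post implies but does not state) is to resolve resources relative to the bundle instead of hard-coding an absolute path:

import os
import sys

# Resolve resources relative to the PyInstaller bundle (or the script directory
# when running unpackaged), instead of hard-coding an absolute path.
def resource_path(relative_path):
    # sys._MEIPASS only exists inside a PyInstaller one-file bundle
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, relative_path)

image_file = resource_path("images/ship.bmp")  # hypothetical image name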

[Solved] error: the following arguments are required (Default parameters are set)

When using an argparse argument list, even though a parameter has a default value set, the parser still reports: error: the following arguments are required

Solution:

1. Replace required=True with required=False: when an argument is marked required=True, argparse insists that it be supplied on the command line, and the default value is never used.

2. Add -- before the parameter name so that argparse treats it as an optional argument.

For example, this definition still raises the error even though a default is set:

parser.add_argument('--mode', '-M', dest='mode', action='store', required=True,
                    choices=['train', 'test'], default='train',
                    help='Mode in which the script is executed.')
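
A minimal sketch of the corrected definition: with required=False (the default for optional arguments), the default='train' value is actually applied when --mode is omitted:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--mode', '-M', dest='mode', action='store', required=False,
                    choices=['train', 'test'], default='train',
                    help='Mode in which the script is executed.')
args = parser.parse_args([])   # simulate a command line with no --mode
print(args.mode)               # -> 'train'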