Tag Archives: python

[Solved] Pytorch Error: PytorchStreamReader failed reading zip archive failed finding central directory

PyTorch reports an error: PytorchStreamReader failed reading zip archive: failed finding central directory

Error reporting position

The error is reported when the pre-trained model has not been downloaded completely (for example, the download was interrupted, leaving a corrupted weights file).

resnet101 = torchvision.models.resnet101(pretrained=True)

Solution:

Download the weights file from the URL shown in the error/download log and place it at the cache path indicated there, replacing the incompletely downloaded weights.
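If it helps to locate the broken file: torchvision caches pretrained weights under the Torch hub directory (typically ~/.cache/torch/hub/checkpoints on Linux; recent PyTorch versions expose the location via torch.hub.get_dir()). A minimal sketch for finding and removing the corrupted .pth file before re-downloading (the paths here are assumptions; check the URL and path printed in your own log):

import os
import torch
import torchvision

# Pretrained weights are cached under the torch hub checkpoints directory
ckpt_dir = os.path.join(torch.hub.get_dir(), "checkpoints")
print(ckpt_dir, os.listdir(ckpt_dir))

# Delete the partially downloaded .pth file there (or overwrite it with the
# manually downloaded one), then load the model again
resnet101 = torchvision.models.resnet101(pretrained=True)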

[Solved] USB: usb_device_handle_win.cc:1049 Failed to read descriptor from node connection…

USB: usb_device_handle_win.cc:1049 Failed to read descriptor from node connection: The devices connected to the system are not functioning.

When executing automated tests in python + selenium + pytest, I encountered the following error.

[25612:15512:0220/162104.300:ERROR:device_event_log_impl.cc(211)] [16:21:04.299] USB: usb_device_handle_win.cc:1049 Failed to read descriptor from node connection:
 The devices connected to the system are not functioning.(0x1F)

The root cause has not been identified yet, so for now the message can only be suppressed with a brute-force workaround:

Add the following options when starting chrome:

option = webdriver.ChromeOptions()

# Prevent printing some useless logs
option.add_experimental_option("excludeSwitches", ['enable-automation', 'enable-logging'])
driver = webdriver.Chrome(chrome_options=option)

 

Supplement

For this statement

driver = webdriver.Chrome(chrome_options=option)

For Chrome, the chrome_options=option argument is deprecated and is better written as options=option, that is:

driver = webdriver.Chrome(options=option)

Otherwise you will see the following warning in the terminal:

DeprecationWarning: use options instead of chrome_options
  driver = webdriver.Chrome(chrome_options=option)
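Putting the two points together, a minimal working sketch (assuming chromedriver is installed and on the PATH; the URL is just a placeholder):

from selenium import webdriver

# Hide the automation infobar and suppress the noisy USB/device log messages
option = webdriver.ChromeOptions()
option.add_experimental_option("excludeSwitches", ["enable-automation", "enable-logging"])

# Pass options= rather than the deprecated chrome_options=
driver = webdriver.Chrome(options=option)
driver.get("https://example.com")
driver.quit()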

[Solved] celery Startup Error: kombu.exceptions.VersionMismatch: Redis transport requires redis-py versions 3.2.0 or later. You have 2.10.6

Error when starting celery:

kombu.exceptions.VersionMismatch: Redis transport requires redis-py versions 3.2.0 or later. You have 2.10.6

The reason is that my redis-py version (2.10.6) is too old for the installed kombu, but I did not want to touch my Redis setup.
So I uninstalled the current celery and installed celery 4.1.0 (kombu is upgraded during the installation), then started it again and got another error:

pip install Celery==4.1.0
error:
KeyError: 'async'

The problem is that celery 4.1.0 is incompatible with Python 3.6.9, so replace it with celery 4.1.1:

pip install Celery==4.1.1
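To confirm which versions ended up installed, a quick check from a Python shell (assuming all three packages import normally):

import celery, kombu, redis
print(celery.__version__, kombu.__version__, redis.__version__)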

Start celery again:

celery -A celery_task.main worker -l info

Done!

 

[Solved] Python Error: asyncio RuntimeError: This event loop is already running

The error raised is the one in the title: RuntimeError: This event loop is already running

Solution:

# install nest_asyncio
pip3 install nest_asyncio

Add the following two lines at the beginning of the coroutine code (or anywhere before the event loop is used):

import nest_asyncio

nest_asyncio.apply()

After some research, it turns out that in a Jupyter notebook environment the code is connected to an IPython kernel, and the IPython kernel itself already runs on an event loop, while asyncio does not allow its event loops to be nested; hence the error above.

nest_asyncio exists precisely as a patch for this, allowing the event loop to be re-entered.
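A minimal sketch of how this looks (the fetch coroutine here is just an illustrative placeholder):

import asyncio
import nest_asyncio

# Patch the already-running event loop so that it can be re-entered
nest_asyncio.apply()

async def fetch():
    await asyncio.sleep(0.1)
    return "done"

# Inside Jupyter/IPython the kernel's event loop is already running, so without
# nest_asyncio.apply() running a coroutine to completion here raises the error above
print(asyncio.run(fetch()))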

[Solved] awtk scons Error: unsupported pickle protocol: 4

Error Messages:

scons
scons: Reading SConscript files …
scons: done reading SConscript files.
scons: Building targets …
scons: *** [SConstruct] ValueError : unsupported pickle protocol: 4
scons: building terminated because of errors.

The likely cause is a Python version mismatch: the SCons state database (.sconsign.dblite) was written by a newer Python using pickle protocol 4 and is now being read by an older Python that does not support it.
In my case I was working on an awtk project: I built it on my PC, copied the tree to Ubuntu, and compiled it there, which is what triggered the error.

Solution:

Delete the .sconsign.dblite file in the root directory of your project.
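In other words, from the project root on the machine where the build fails (a sketch; SCons will simply regenerate the state database on the next run):

rm .sconsign.dblite
scons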

 

tensorflow2.3 InvalidArgumentError: jpeg::Uncompress failed [How to Solve]

When training on your own dataset, you may often run into this error:

tensorflow2.3 InvalidArgumentError: jpeg::Uncompress failed
[[{{node decode_image/DecodeImage}}]] [Op:IteratorGetNext]

 

Solution:
Check whether any images are damaged before training:

import os
import tensorflow as tf


num_skipped = 0
for folder_name in ("Fruit apples", "Fruit bananas", "Fruit oranges"):
    folder_path = os.path.join("./data/image_data", folder_name)
    for fname in os.listdir(folder_path):

        fpath = os.path.join(folder_path, fname)

        try:
            fobj = open(fpath, mode="rb")
            # Most valid JPEG files carry the "JFIF" marker in their header
            is_jfif = tf.compat.as_bytes("JFIF") in fobj.peek(10)
        finally:
            fobj.close()

        if not is_jfif:
            num_skipped += 1
            # Delete corrupted image
            os.remove(fpath)

print("Deleted %d images" % num_skipped)

Delete the damaged images and train again; this usually solves the problem.
If the error is reported again, use the stricter check below:

# Determine whether a local image file is corrupt
from PIL import Image
import os


def is_valid_image(path):
    '''
    Check whether the file is corrupt.
    '''
    bValid = True
    try:
        with open(path, 'rb') as fileObj:  # open in binary mode
            buf = fileObj.read()
            if not buf.startswith(b'\xff\xd8'):  # JPEG data must start with \xff\xd8
                bValid = False
            elif buf[6:10] in (b'JFIF', b'Exif'):  # ASCII "JFIF"/"Exif" marker in the header
                if not buf.rstrip(b'\0\r\n').endswith(b'\xff\xd9'):  # and must end with \xff\xd9
                    bValid = False
            else:
                try:
                    # Fall back to letting Pillow verify the file
                    Image.open(path).verify()
                except Exception as e:
                    bValid = False
                    print(e)
    except Exception:
        return False
    return bValid


num_skipped = 0
for folder_name in ("fruit-apple", "fruit-banana", "fruit-orange"):
    # os.path.join() joins two or more pathname components
    folder_path = os.path.join("./data/image_data", folder_name)
    # os.listdir(path) lists the entries under this directory
    for fname in os.listdir(folder_path):
        fpath = os.path.join(folder_path, fname)
        if not is_valid_image(fpath):
            num_skipped += 1
            print(fpath)  # print the path and name of the corrupt file

print("Found %d corrupt images" % num_skipped)

Remove or fix the reported files and train again to solve the problem.
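If you would rather repair a suspicious file than delete it, one option is to re-encode it with Pillow (a sketch; re-saving strips metadata and re-compresses the image, and badly corrupted files will still fail; the path below is hypothetical):

from PIL import Image

def reencode_jpeg(path):
    # Re-open and re-save the image; Pillow writes a clean JPEG header/footer
    img = Image.open(path).convert("RGB")
    img.save(path, "JPEG")

reencode_jpeg("./data/image_data/fruit-apple/example.jpg")  # hypothetical path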

python chatterbot [nltk_data] Error loading stopwords: <urlopen error [Errno 11004]

The following error occurred while running the project:

[nltk_data] Error loading stopwords: <urlopen error [Errno 11004]
[nltk_data]     getaddrinfo failed>
[nltk_data] Error loading averaged_perceptron_tagger: <urlopen error
[nltk_data]     [Errno 11004] getaddrinfo failed>

The Solution is as follows:

Go to: https://github.com/nltk/nltk_data

Go to the directory packages/corpora/, find the file stopwords.zip, and put it under the corresponding local directory.

It is recommended to download the entire nltk_data repository (about 695 MB) to avoid further download failures.

Extract everything under

nltk_data-gh-pages.zip\nltk_data-gh-pages\packages

into the following directory:

C:\Users\Administrator\AppData\Roaming\nltk_data

The nltk_data directory may differ from machine to machine; on my machine it is the directory above.
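If NLTK still cannot find the data, you can also point it at the directory explicitly (a sketch; adjust the path to your own nltk_data location):

import nltk

# Tell NLTK where the manually downloaded corpora live
nltk.data.path.append(r"C:\Users\Administrator\AppData\Roaming\nltk_data")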

Modify the corresponding file:

\venv\Lib\site-packages\chatterbot\utils.py under the current project directory

(For some people the virtual environment may not be under the current project; find the corresponding site-packages directory according to your own configuration and then locate the file to modify.)

The corresponding nltk_download_corpus('xxx') code needs to be modified as follows:


def download_nltk_stopwords():
    """
    Download required NLTK stopwords corpus if it has not already been downloaded.
    """
    nltk_download_corpus('corpora/stopwords')


def download_nltk_wordnet():
    """
    Download required NLTK corpora if they have not already been downloaded.
    """
    nltk_download_corpus('corpora/wordnet')


def download_nltk_averaged_perceptron_tagger():
    """
    Download the NLTK averaged perceptron tagger that is required for this algorithm
    to run only if the corpora has not already been downloaded.
    """
    nltk_download_corpus('taggers/averaged_perceptron_tagger')


def download_nltk_vader_lexicon():
    """
    Download the NLTK vader lexicon for sentiment analysis
    that is required for this algorithm to run.
    """
    nltk_download_corpus('sentiment/vader_lexicon')

Done!

[Solved] RuntimeError: cuda runtime error (801) : operation not supported at


Error:
RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\windows\pytorch\torch/csrc/generic/StorageSharing.cpp:245 #85

Reason:
A guess: Windows does not support this kind of multi-process data loading; the error is raised from StorageSharing.cpp when tensors are shared between DataLoader worker processes.

Solution:

    layer_loader = NeighborSampler(data.adj_t, node_idx=None, sizes=[-1], batch_size=4096, shuffle=False, num_workers=12)

Take the code above as an example: simply remove the num_workers argument (equivalent to setting num_workers=0):

layer_loader = NeighborSampler(data.adj_t, node_idx=None, sizes=[-1], batch_size=4096, shuffle=False)

 

[Solved] ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory

ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory

1. Problem

Using the PyTorch DataLoader inside Docker may raise the error shown above.

2. Solution

Check disk usage with df -h inside the container:

You can see that /dev/shm is only 64M, but the DataLoader has num_workers set to a larger value and the worker processes exchange data through shared memory, so shared memory runs out.

Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g. for multithreaded data loaders) the default shared memory segment size that container runs with is not enough, and you should increase shared memory size either with --ipc=host or --shm-size command line options to nvidia-docker run.

Solution:
(1) Set num_workers=0 (note that setting it to 1 does not work), or
(2) give the Docker container more shared memory:

--ipc=host  or --shm-size 8G
where --ipc=host sizes shared memory from the host's own settings, so it is the recommended option (see the example run command below)
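For example, when starting the container (a sketch; the image name and everything after it are placeholders for your own setup):

docker run --ipc=host -it your_image:tag /bin/bash
# or, equivalently, give the container a fixed amount of shared memory
docker run --shm-size=8g -it your_image:tag /bin/bash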

After restarting the container with one of these options, the error no longer appears.