Category Archives: Python

How to Solve Python WARNING: Ignoring invalid distribution -ip (e:\python\python_dowmload\lib\site-packages)

Recently, when using pip to install a package, the following warning message appeared:

WARNING: Ignoring invalid distribution -ip (e:\python\python_dowmload\lib\site-packages)

Solution:

Go to the directory named in the warning message and delete the folder whose name begins with ~. Such folders are left behind when a package installation fails or is interrupted partway, leaving the distribution in a broken state. The warning itself is harmless, but if it bothers you, just delete the folder:

Why did this problem appear in the first place?

A few days ago, while setting up a Robot Framework environment on Python 3.9, I needed to install wxPython, and I installed the latest version. Later, when I installed Robot Framework RIDE,

one of the components it depends on did not support that latest wxPython, and did not support Python 3.9 either, so the installation was aborted partway and left the broken distribution behind.

[Solved] PyTorch Caught RuntimeError in DataLoader worker process 0 and invalid argument 0: Sizes of tensors must match

The error is as follows:

Traceback (most recent call last):
  File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 819, in __next__
    return self._process_data(data)
  File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
    data.reraise()
  File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/torch/_utils.py", line 369, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 75, in default_collate
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 75, in <dictcomp>
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 65, in default_collate
    return default_collate([torch.as_tensor(b) for b in batch])
  File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 8 and 16 in dimension 1 at /pytorch/aten/src/TH/generic/THTensor.cpp:689

The __getitem__ function does fetch the data correctly, so the problem lies in torch.utils.data.DataLoader.

Analysis

There are actually two errors here.

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 8 and 16 in dimension 1 at /pytorch/aten/src/TH/generic/THTensor.cpp:689

The first reports inconsistent tensor dimensions. Jumping to the source at File "/home/jiang/miniconda3/envs/Net/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate (return torch.stack(batch, 0, out=out)) shows:

  if isinstance(elem, torch.Tensor):
   out = None
   if torch.utils.data.get_worker_info() is not None:
       # If we're in a background process, concatenate directly into a
       # shared memory tensor to avoid an extra copy
       numel = sum([x.numel() for x in batch])
       storage = elem.storage()._new_shared(numel)
       out = elem.new(storage)
   return torch.stack(batch, 0, out=out)

So the DataLoader has to stack the samples into one batch at the end. When batch_size is set, this is where the batch is assembled; if the sample dimensions are not uniform, torch.stack raises the error.

The other error comes from multiprocessing being enabled (num_workers != 0): it tells you which worker hit the problem. Because the dimensions of the batch being merged differ, the first worker dies (worker process 0), hence RuntimeError: Caught RuntimeError in DataLoader worker process 0.

Solution:

Since the dimensions are inconsistent, the fix is to make them consistent: pre-allocate an array or tensor large enough for the biggest sample, fill in the real data, and mark the unfilled part. When reading the data back, use the mark to recover the valid portion.
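A minimal sketch of this pad-and-mask idea with NumPy (the function name and fixed length here are illustrative; in practice you would apply it inside __getitem__ or a custom collate_fn):

```python
import numpy as np

def pad_with_mask(sample, max_len):
    """Pad a 1-D sample to a fixed length and return (padded, mask).

    mask is 1 where real data lives and 0 over the padding, so the
    valid values can be recovered after batching.
    """
    padded = np.zeros(max_len, dtype=np.float32)
    mask = np.zeros(max_len, dtype=np.int64)
    n = len(sample)
    padded[:n] = sample
    mask[:n] = 1
    return padded, mask

# Samples of length 8 and 16 (the exact mismatch from the traceback)
# both become fixed-length arrays, so stacking them into a batch works.
a, mask_a = pad_with_mask(np.ones(8), 16)
b, mask_b = pad_with_mask(np.ones(16), 16)
batch = np.stack([a, b])
print(batch.shape)  # (2, 16)
```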

[Solved] Matplotlib ERROR: MatplotlibDeprecationWarning: Adding an axes using the same arguments…

Matplotlib reports the following warning:

MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance.  In a future version, a new instance will always be created and returned.  Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
  self.axes = self.fig.add_subplot(111)  # Create a subgraph

The cause is adding the same subplot repeatedly: for example, self.axes = self.fig.add_subplot(111) has already added a subplot, and adding it again with the same arguments triggers the warning.

Solution:

Clear the figure first and then add the subplot again; clear() is a method of the Figure class. Example (the following lines all live inside a class):

self.fig = plt.figure()
self.axes = self.fig.add_subplot(111)  # Create a subgraph
self.fig.clear() # Clear the subplot first
self.axes = self.fig.add_subplot(111) # Create a subplot
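Alternatively, as the warning text itself suggests, passing a unique label to each call makes Matplotlib create a separate axes instead of reusing the earlier one. A small sketch (the labels are arbitrary; the Agg backend is used only so the snippet runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

fig = plt.figure()
ax1 = fig.add_subplot(111, label="first")
ax2 = fig.add_subplot(111, label="second")  # unique label -> a brand new axes
print(ax1 is ax2)  # False: two separate instances, no reuse warning
```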

How to Solve UserWarning: FixedFormatter should only be used together with FixedLocator (illustrated)


Error message

• When drawing a marginal histogram, the following warning appears if we use the conventional method to convert the scatter plot's x-axis tick labels to floating-point numbers:

UserWarning: FixedFormatter should only be used together with FixedLocator
  ax_main.set_xticklabels(xlabels)

Problem code

xlabels = ax_main.get_xticks().tolist() # get the tick values as a list of floats
ax_main.set_xticklabels(xlabels) # set them back as the tick labels
plt.show()

• When the above code is used to convert the tick values to floating-point numbers, the warning from the title appears, but the x-axis ticks of the displayed scatter plot are still converted successfully, as shown in the figure below.

Problem analysis

• Problem description: this is a UserWarning caused by non-standard usage. It tells us that a FixedFormatter (fixed tick labels) should only be used together with a FixedLocator (fixed tick positions); we must not change the label format by other means.

Solution

• As analyzed above, we should use a FixedLocator to fix the tick positions before setting the FixedFormatter labels, rather than converting the label format directly, which is what triggers the warning.
• First, import the ticker module from the Matplotlib library; the code is as follows:

import matplotlib.ticker as mticker

label_format = '{:,.1f}'  # Create floating point format .1f one decimal
xlabels = ax_main.get_xticks().tolist()
ax_main.xaxis.set_major_locator(mticker.FixedLocator(xlabels)) # first fix the tick positions on the x-axis
ax_main.set_xticklabels([label_format.format(x) for x in xlabels]) # then format the labels with a list comprehension
plt.show()

Image display:

• The complete code for drawing the above image is:

import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import pandas as pd

# Get the data
df = pd.read_csv(r'D:\9\mpg_ggplot2.csv')

# Create a canvas and split it into a grid
fig = plt.figure(figsize=(16, 10), dpi=80, facecolor='white')
grid = plt.GridSpec(4, 4, hspace=0.5, wspace=0.2)

# Add subgraphs
ax_main = fig.add_subplot(grid[:-1, :-1])
ax_right = fig.add_subplot(grid[:-1, -1], xticklabels=[], yticklabels=[])
ax_bottom = fig.add_subplot(grid[-1, :-1], xticklabels=[], yticklabels=[])

# Plot the bubble in the center

ax_main.scatter('displ', 'hwy'
                , s=df.cty * 4
                , data=df
                , c=df.manufacturer.astype('category').cat.codes
                , cmap='tab10'
                , edgecolors='gray'
                , linewidth=.5
                , alpha=.9)
# Plot the bottom histogram
ax_bottom.hist(df.displ, 40, histtype='stepfilled', orientation='vertical', color='deeppink')
ax_bottom.invert_yaxis() # make the y-axis inverse

# Plot the right histogram
ax_right.hist(df.hwy, 40, histtype='stepfilled', orientation='horizontal', color='deeppink')

# decorate the image
plt.rcParams['font.sans-serif'] = ['Simhei']
ax_main.set(title='Edge histogram \n engine displacement vs highway miles/gallon'
            , xlabel='Engine displacement (L)'
            , ylabel='Highway miles/gallon')
ax_main.title.set_fontsize(20)

for item in ([ax_main.xaxis.label, ax_main.yaxis.label] + ax_main.get_xticklabels() + ax_main.get_yticklabels()):
    item.set_fontsize(14)

for item in [ax_bottom, ax_right]:
    item.set_xticks([])
    item.set_yticks([])

label_format = '{:,.1f}'  # Create floating point format .1f one decimal
xlabels = ax_main.get_xticks().tolist()
ax_main.xaxis.set_major_locator(mticker.FixedLocator(xlabels)) # first fix the tick positions on the x-axis
ax_main.set_xticklabels([label_format.format(x) for x in xlabels]) # then format the labels with a list comprehension
plt.show()

[How to Solve] ImportError: No module named typing

Python version: 2.7

Error
This error occurs when running pip:

Traceback (most recent call last):
  File "C:\Python27\Scripts\pip-script.py", line 9, in <module>
    load_entry_point('pip==21.1.3', 'console_scripts', 'pip')()
  File "C:\Python27\lib\site-packages\pkg_resources\__init__.py", line 542, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "C:\Python27\lib\site-packages\pkg_resources\__init__.py", line 2569, in load_entry_point
    return ep.load()
  File "C:\Python27\lib\site-packages\pkg_resources\__init__.py", line 2229, in load
    return self.resolve()
  File "C:\Python27\lib\site-packages\pkg_resources\__init__.py", line 2235, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "C:\Python27\lib\site-packages\pip\__init__.py", line 1, in <module>
    from typing import List, Optional
ImportError: No module named typing

One suggested solution is to upgrade Python to 3, but I want to keep using 2.7, so that method does not work for me.
Solution
I found that the pip version was too new and no longer compatible with Python 2: mine was pip 21.1.3. So pip needs to be rolled back to an older version. The fix is as follows; just run the commands in order:

curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py
python get-pip.py
python -m pip install --upgrade "pip < 21.0"

Perfect: with the pip version rolled back, installing packages works again without errors.

[Solved] Python Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated

Python import warning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
Code below:

from collections import Iterator

print(isinstance(iter([]), Iterator))  # True

# Console output:
# D:\Code_data\pycharm project\first test\08-iterable.py:2: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
#   from collections import Iterator
# True

Although the result comes out, there is always an annoying warning (Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working). The solution is to change
from collections import Iterator
to
from collections.abc import Iterator
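If the code has to run on both new and very old interpreters, a try/except import keeps it working everywhere (a small sketch):

```python
# On Python 3.3+ the ABCs live in collections.abc; the fallback keeps
# ancient interpreters working without the DeprecationWarning on new ones.
try:
    from collections.abc import Iterator
except ImportError:  # only hit on very old Pythons
    from collections import Iterator

print(isinstance(iter([]), Iterator))  # True
```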

‘coroutine‘ object is not iterable [How to Solve]

ValueError: [TypeError("'coroutine' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]

In FastAPI (running on uvloop) I was using an asynchronous function, and this 'coroutine' object is not iterable error appeared. It turned out the asynchronous code was being called from a synchronous function.

The fix: make the outer function async as well and await the call.
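The situation can be reproduced and fixed with plain asyncio, no FastAPI required (the function names here are illustrative):

```python
import asyncio

async def fetch_items():
    # Stand-in for an async call (database query, HTTP request, ...)
    return [1, 2, 3]

# Wrong: calling fetch_items() from a synchronous function only creates a
# coroutine object; trying to iterate over it raises
# "'coroutine' object is not iterable".

# Right: make the caller async as well and await the result.
async def handler():
    items = await fetch_items()
    return list(items)

result = asyncio.run(handler())
print(result)  # [1, 2, 3]
```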

[Solved] RuntimeError : PyTorch was compiled without NumPy support

I was using torch 0.4.1 with numpy 1.20.1, and running my code produced the error PyTorch was compiled without NumPy support.

I later found the following fixes.

1. Downgrade numpy:

pip install numpy==1.15.0

2. Upgrade torch from 0.4.1 to 0.4.1.post2:

pip install torch==0.4.1.post2

After applying fix 1 alone, another error appeared (I forgot to screenshot it), like this: ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject.
So I changed numpy back to 1.20.1 and only updated torch from 0.4.1 to 0.4.1.post2; the first step turned out to be unnecessary.

[Solved] unknown error: Chrome failed to start: exited abnormally (Driver info: chromedriver=2.36.540471

I wrote a crawler in Python. It works fine in PyCharm, but on the command line under Linux it fails with:
Message: unknown error: Chrome failed to start: exited abnormally
(Driver info: chromedriver=2.36.540471 (9c759b81a907e70363c6312294d30b6ccccc2752),platform=Linux 4.14.0-deepin2-amd64 x86_64)
Since you are already using the --headless argument, try adding the --no-sandbox and window-size=1024,768 arguments as well.

chrome.additional.capabilities={"chromeOptions":{"args":["--headless", "window-size=1024,768", "--no-sandbox"], "binary": "/home/ubuntu/software/chromedriver"}}

Solution:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument("window-size=1024,768")
chrome_options.add_argument("--no-sandbox")
driver = webdriver.Chrome(chrome_options=chrome_options)

[Solved] module ‘numpy.random‘ has no attribute ‘default_rng‘ gensim.model

Some bloggers suggest upgrading numpy, but mine is already up to date (numpy 1.21.0) and the error persists. So, getting straight to the point (note: my solution may only apply to gensim):
(1) In Python, from gensim.models import fasttext failed complaining about a file; pip install python-Levenshtein can solve that. However, that installation itself failed with an error saying Microsoft Visual C++ 14.0 is missing or needs to be upgraded.

(2) To avoid this problem, install the .whl file directly: download the wheel matching your Python version and bitness (64-bit or 32-bit), then run pip install followed by the path to the .whl file, and it installs successfully.

(3) Restart the Jupyter Notebook. It must be restarted; I am not sure why, but after restarting the error no longer appears.

[Solved] TypeError: Object of type ‘bytes’ is not JSON serializable

After reading data from a .mat file with Python, I get a dictionary. I want to store this dictionary in a JSON file, so the data must be encoded first; json.dumps is used for the encoding, but it turns out json.dumps raises an error:

TypeError: Object of type 'bytes' is not JSON serializable

After consulting related material, I found that the default encoder cannot handle many data types; the fix is to write a custom encoder that inherits from json.JSONEncoder so those types can be encoded.

Here, json.dumps fails because the dictionary contains bytes values. Define an encoder class and pass it to the encoding function: whenever a bytes value is encountered, it is converted into str.

class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, bytes):
            return str(obj, encoding='utf-8')
        return json.JSONEncoder.default(self, obj)

This solved the problem.

Later, similar problems were found during encoding:

TypeError: Object of type 'ndarray' is not JSON serializable

The handling is the same: when an ndarray is encountered, convert it into a list:

class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        elif isinstance(obj, bytes):
            return str(obj, encoding='utf-8')
        return json.JSONEncoder.default(self, obj)

In this way, the data is encoded.

The final code, for reference:

import scipy.io as sio
import os
import json
import numpy as np
 
load_fn = '2%.mat'
load_data = sio.loadmat(load_fn)
print(load_data.keys())
 
class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        elif isinstance(obj, bytes):
            return str(obj, encoding='utf-8');
        return json.JSONEncoder.default(self, obj)
 
save_fn = os.path.splitext(load_fn)[0] + '.json'
with open(save_fn, 'w', encoding='utf-8') as f:
    f.write(json.dumps(load_data, cls=MyEncoder, indent=4))