Category Archives: Python

bandersnatch Error: No module Named [How to Solve]

ERROR: Unable to load entry point swift_plugin = bandersnatch_storage_plugins.swift:SwiftStorage: No module named 'keystoneauth1'
The keystoneauth1 module was not installed (likely because of the Python version used at installation time). Solution:
pip3 install keystoneauth1
Collecting keystoneauth1
Downloading https://files.pythonhosted.org/packages/7c/2e/dcfd2412941244e8a00a568654ce1687a1cab7be05d634ada0b5a078d0a3/keystoneauth1-4.4.0-py3-none-any.whl (314kB)
100% |████████████████████████████████| 317kB 240kB/s
Collecting pbr!=2.1.0,>=2.0.0 (from keystoneauth1)
Downloading https://files.pythonhosted.org/packages/73/c3/d45171501210b0305f4c93fafe50950f0c2228e87034ceb51744bd03ff08/pbr-5.8.0-py2.py3-none-any.whl (112kB)
100% |████████████████████████████████| 122kB 184kB/s
Requirement already satisfied: requests>=2.14.2 in /usr/lib/python3/dist-packages (from keystoneauth1)
Collecting stevedore>=1.20.0 (from keystoneauth1)
Downloading https://files.pythonhosted.org/packages/7a/bc/fcce9e50da73ea23af6d236e05e15db8a02da1099a5e0a479451bcea3833/stevedore-3.5.0-py3-none-any.whl (49kB)
100% |████████████████████████████████| 51kB 166kB/s
Requirement already satisfied: six>=1.10.0 in /usr/lib/python3/dist-packages (from keystoneauth1)
Collecting iso8601>=0.1.11 (from keystoneauth1)
Downloading https://files.pythonhosted.org/packages/df/e5/589bc81d410139ec4e4f37d9af5a50987566abf6d087b3c4fbed708109a9/iso8601-1.0.2-py3-none-any.whl
Collecting os-service-types>=1.2.0 (from keystoneauth1)
Downloading https://files.pythonhosted.org/packages/10/2d/318b2b631f68e0fc221ba8f45d163bf810cdb795cf242fe85ad3e5d45639/os_service_types-1.7.0-py2.py3-none-any.whl
Collecting importlib-metadata>=1.7.0; python_version < "3.8" (from stevedore>=1.20.0->keystoneauth1)
Downloading https://files.pythonhosted.org/packages/c4/1f/e2238896149df09953efcc53bdcc7d23597d6c53e428c30e572eda5ec6eb/importlib_metadata-4.8.2-py3-none-any.whl
Collecting zipp>=0.5 (from importlib-metadata>=1.7.0; python_version < "3.8"->stevedore>=1.20.0->keystoneauth1)
Using cached https://files.pythonhosted.org/packages/bd/df/d4a4974a3e3957fd1c1fa3082366d7fff6e428ddb55f074bf64876f8e8ad/zipp-3.6.0-py3-none-any.whl
Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=1.7.0; python_version < "3.8"->stevedore>=1.20.0->keystoneauth1)
Installing collected packages: pbr, zipp, importlib-metadata, stevedore, iso8601, os-service-types, keystoneauth1
Successfully installed importlib-metadata-4.8.2 iso8601-1.0.2 keystoneauth1-4.4.0 os-service-types-1.7.0 pbr-5.8.0 stevedore-3.5.0 zipp-3.6.0


ERROR: Unable to load entry point swift_plugin = bandersnatch_storage_plugins.swift:SwiftStorage: No module named 'swiftclient'
This error means the python-swiftclient package is missing.
Solution:
pip3 install python-swiftclient
Collecting python-swiftclient
Downloading https://files.pythonhosted.org/packages/f6/5f/6784a830e618a89272f8efee784930e614be285a6d3e82986916076fe69e/python_swiftclient-3.13.0-py2.py3-none-any.whl (86kB)
100% |████████████████████████████████| 92kB 281kB/s
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from python-swiftclient)
Requirement already satisfied: requests>=1.1.0 in /usr/lib/python3/dist-packages (from python-swiftclient)
Installing collected packages: python-swiftclient
Successfully installed python-swiftclient-3.13.0
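
After both packages are installed, a quick sanity check (a minimal sketch; the module names are simply the ones from the error messages above) is to confirm the imports now resolve in the same Python that runs bandersnatch:

python3 -c "import keystoneauth1, swiftclient; print('swift storage plugin dependencies OK')"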

[Solved] PyTorch C++ Error: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified.

1. Error information

(python37) H:\emd>python setup.py install
running install
running bdist_egg
running egg_info
creating emd.egg-info
writing emd.egg-info\PKG-INFO
writing dependency_links to emd.egg-info\dependency_links.txt
writing top-level names to emd.egg-info\top_level.txt
writing manifest file 'emd.egg-info\SOURCES.txt'
D:\Anaconda_app\envs\python37\lib\site-packages\torch\utils\cpp_extension.py:370: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
  warnings.warn(msg.format('we could not find ninja.'))
reading manifest file 'emd.egg-info\SOURCES.txt'
writing manifest file 'emd.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_ext
D:\Anaconda_app\envs\python37\lib\site-packages\torch\utils\cpp_extension.py:305: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified.
  warnings.warn(f'Error checking compiler version for {compiler}: {error}')
building 'emd' extension
creating build
creating build\temp.win-amd64-3.7
……
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(427): note: see reference to class template instantiation "OptionalBase<at::Tensor>" being compiled
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\ATen/core/TensorBody.h(734): note: see reference to class template instantiation "c10::optional<at::Tensor>" being compiled
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(395): warning C4624: "c10::trivially_copyable_optimization_optional_base<T>": destructor was implicitly defined as deleted
        with
        [
            T=at::Tensor
        ]
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(476): warning C4814: "c10::optional<at::Tensor>::contained_val": in C++14 "constexpr" will not imply "const"; consider explicitly specifying "const"
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(477): error C2556: "at::Tensor &c10::optional<at::Tensor>::contained_val(void) const &": overloaded function differs only by return type from "const at::Tensor &c10::optional<at::Tensor>::contained_val(void) const &"
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(471): note: see declaration of "c10::optional<at::Tensor>::contained_val"
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(477): error C2373: "c10::optional<at::Tensor>::contained_val": redefinition; different type modifiers
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(471): note: see declaration of "c10::optional<at::Tensor>::contained_val"
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(476): warning C4814: "c10::optional<int64_t>::contained_val": in C++14 "constexpr" will not imply "const"; consider explicitly specifying "const"
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\ATen/core/TensorBody.h(774): note: see reference to class template instantiation "c10::optional<int64_t>" being compiled
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(477): error C2556: "int64_t &c10::optional<int64_t>::contained_val(void) const &": overloaded function differs only by return type from "const int64_t &c10::optional<int64_t>::contained_val(void) const &"
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(471): note: see declaration of "c10::optional<int64_t>::contained_val"
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(477): error C2373: "c10::optional<int64_t>::contained_val": redefinition; different type modifiers
D:\Anaconda_app\envs\python37\lib\site-packages\torch\include\c10/util/Optional.h(477): fatal error C1003: error count exceeds 100; stopping compilation
error: command 'D:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\cl.exe' failed with exit status 2


2. Error analysis
Analysis: from the above output, the key messages are:
D:\Anaconda_app\envs\python37\lib\site-packages\torch\utils\cpp_extension.py:370: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
D:\Anaconda_app\envs\python37\lib\site-packages\torch\utils\cpp_extension.py:305: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified.
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
1) The build wants to use ninja, but ninja is not installed, so install it first (pip install ninja).
2) No cl.exe satisfying the version requirement was found. Following the location given in the warning (D:\Anaconda_app\envs\python37\lib\site-packages\torch\utils\cpp_extension.py:305), jump to the source code:

    try:
        if sys.platform.startswith('linux'):
            minimum_required_version = MINIMUM_GCC_VERSION
            versionstr = subprocess.check_output([compiler, '-dumpfullversion', '-dumpversion'])
            version = versionstr.decode().strip().split('.')
        else:
            minimum_required_version = MINIMUM_MSVC_VERSION
            compiler_info = subprocess.check_output(compiler, stderr=subprocess.STDOUT)
            match = re.search(r'(\d+)\.(\d+)\.(\d+)', compiler_info.decode().strip())
            version = (0, 0, 0) if match is None else match.groups()
    except Exception:
        _, error, _ = sys.exc_info()
        warnings.warn(f'Error checking compiler version for {compiler}: {error}')
        return False

I am not on Linux but on Windows, so the relevant requirement is the version of cl.exe:

MINIMUM_MSVC_VERSION = (19, 0, 24215)

MSVC is the compiler behind "cl.exe"; it is the compiler Microsoft develops specifically for Visual Studio.

The statement above says the minimum required MSVC version is 19.0.24215.
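
To check which cl.exe a build would pick up and whether it meets this requirement, the following standalone sketch mirrors the check in cpp_extension.py shown above (an illustration only; it assumes you run it from a command prompt where cl is on PATH, e.g. an "x64 Native Tools Command Prompt for VS"):

import re
import subprocess

MINIMUM_MSVC_VERSION = (19, 0, 24215)  # same value as in torch.utils.cpp_extension

try:
    # cl.exe prints its banner (which contains the version) when run with no arguments
    info = subprocess.check_output('cl', stderr=subprocess.STDOUT)
    match = re.search(r'(\d+)\.(\d+)\.(\d+)', info.decode(errors='ignore').strip())
    version = (0, 0, 0) if match is None else tuple(int(v) for v in match.groups())
    print('cl version:', version,
          'OK' if version >= MINIMUM_MSVC_VERSION else 'older than required')
except (OSError, subprocess.CalledProcessError):
    print('cl.exe could not be run (is it on PATH?)')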

3. Knowledge to be understood

_MSC_VER is a built-in macro of the MSVC compiler that encodes the compiler version:
MS is short for Microsoft;
C refers to Microsoft's C/C++ compiler;
VER is short for version.

So _MSC_VER means: the version of Microsoft's C/C++ compiler.

MSC    1.0   _MSC_VER == 100
MSC    2.0   _MSC_VER == 200
MSC    3.0   _MSC_VER == 300
MSC    4.0   _MSC_VER == 400
MSC    5.0   _MSC_VER == 500
MSC    6.0   _MSC_VER == 600
MSC    7.0   _MSC_VER == 700
MSVC++ 1.0   _MSC_VER == 800
MSVC++ 2.0   _MSC_VER == 900
MSVC++ 4.0   _MSC_VER == 1000 (Developer Studio 4.0)
MSVC++ 4.2   _MSC_VER == 1020 (Developer Studio 4.2)
MSVC++ 5.0   _MSC_VER == 1100 (Visual Studio 97 version 5.0)
MSVC++ 6.0   _MSC_VER == 1200 (Visual Studio 6.0 version 6.0)
MSVC++ 7.0   _MSC_VER == 1300 (Visual Studio .NET 2002 version 7.0)
MSVC++ 7.1   _MSC_VER == 1310 (Visual Studio .NET 2003 version 7.1)
MSVC++ 8.0   _MSC_VER == 1400 (Visual Studio 2005 version 8.0)
MSVC++ 9.0   _MSC_VER == 1500 (Visual Studio 2008 version 9.0)
MSVC++ 10.0  _MSC_VER == 1600 (Visual Studio 2010 version 10.0)
MSVC++ 11.0  _MSC_VER == 1700 (Visual Studio 2012 version 11.0)
MSVC++ 12.0  _MSC_VER == 1800 (Visual Studio 2013 version 12.0)
MSVC++ 14.0  _MSC_VER == 1900 (Visual Studio 2015 version 14.0)
MSVC++ 14.1  _MSC_VER == 1910 (Visual Studio 2017 version 15.0)
MSVC++ 14.11 _MSC_VER == 1911 (Visual Studio 2017 version 15.3)
MSVC++ 14.12 _MSC_VER == 1912 (Visual Studio 2017 version 15.5)
MSVC++ 14.13 _MSC_VER == 1913 (Visual Studio 2017 version 15.6)
MSVC++ 14.14 _MSC_VER == 1914 (Visual Studio 2017 version 15.7)
MSVC++ 14.15 _MSC_VER == 1915 (Visual Studio 2017 version 15.8)
MSVC++ 14.16 _MSC_VER == 1916 (Visual Studio 2017 version 15.9)
MSVC++ 14.2  _MSC_VER == 1920 (Visual Studio 2019 Version 16.0)
MSVC++ 14.21 _MSC_VER == 1921 (Visual Studio 2019 Version 16.1)
MSVC++ 14.22 _MSC_VER == 1922 (Visual Studio 2019 Version 16.2)

For example, MSVC++ 14.0 means the Visual C++ version is 14.0, and "Visual Studio 2015" in parentheses means this VC++ ships with Microsoft's development tool Visual Studio 2015.

4. Review problems and Solutions

1) Ninja has been installed (pip install ninja).
2) Comparing against the table in part 3, I need at least VS2017, but the Visual Studio on this machine is 2015, so uninstall it and install VS2017. (Some people can install the new version directly without uninstalling first; I am used to uninstalling and reinstalling.)
VS2015 complete uninstallation: use TotalUninstaller to completely remove Visual Studio 2013 and 2015
VS2017 installation: VS2017 download address and installation tutorial (illustrated)

The VS2017 download method in the link above is highly recommended; it comes with a Simplified Chinese installer bootstrapper, which is very convenient.

How to Solve the PaddleOCR Curved Text Recognition Error

Downgrade paddlepaddle from 2.2.0 to 2.1.3 and the error is resolved (the pip command is sketched after the two commands below). Note that there are two commands to run:

python  tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Global.checkpoints="./models/sast_r50_vd_total_text/best_accuracy" Global.save_inference_dir="./inference/det_sast_tt"

python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True --use_gpu=False
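
The downgrade itself can be done with pip (a minimal sketch; if you run the GPU build, install the matching paddlepaddle-gpu version instead):

pip3 install paddlepaddle==2.1.3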

For reference, the error before the downgrade looked like this:

/home/PaddleOCR {release/2.3} python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img623.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True --use_gpu=False
grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
/usr/local/lib/python3.8/site-packages/setuptools/depends.py:2: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
---    Fused 0 subgraphs into layer_norm op.
---    fused 0 pairs of fc gru patterns
Traceback (most recent call last):
  File "tools/infer/predict_det.py", line 242, in <module>
    res = text_detector(img)
  File "tools/infer/predict_det.py", line 218, in __call__
    post_result = self.postprocess_op(preds, shape_list)
  File "/home/PaddleOCR/ppocr/postprocess/sast_postprocess.py", line 341, in __call__
    poly_list = self.detect_sast(
  File "/home/PaddleOCR/ppocr/postprocess/sast_postprocess.py", line 237, in detect_sast
    instance_count, instance_label_map = self.cluster_by_quads_tco(
  File "/home/PaddleOCR/ppocr/postprocess/sast_postprocess.py", line 164, in cluster_by_quads_tco
    pred_tc = xy_text - tco
ValueError: operands could not be broadcast together with shapes (43093,2) (43093,8)

[Solved] Python Error: GuessedAtParserWarning: No parser was explicitly specified

f:\py\verification.py:148: GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("lxml"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.

The code that caused this warning is on line 148 of the file f:\py\verification.py. To get rid of this
warning, pass the additional argument 'features="lxml"' to the BeautifulSoup constructor.

The code runs normally, but this warning is printed to the console.

soup = BeautifulSoup(response, 'html')

Fix: pass features="lxml" to the constructor and the warning goes away:

soup = BeautifulSoup(response, features="lxml")
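
A minimal self-contained sketch of the fixed call (the response string and the lxml parser choice are just examples; any HTML content works the same way):

from bs4 import BeautifulSoup  # pip3 install beautifulsoup4 lxml

response = "<html><body><p>hello</p></body></html>"  # e.g. the body of an HTTP response
soup = BeautifulSoup(response, features="lxml")      # explicit parser, so no GuessedAtParserWarning
print(soup.p.text)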

[Solved] RecursionError: maximum recursion depth exceeded in comparison

Python's default recursion limit is quite low (1000 by default), so once the recursion depth exceeds it, this exception is thrown.

Solution:

You can raise the recursion limit, for example:

import sys

sys.setrecursionlimit(100000)  # for example, set it to 100000 here

Note:

This only raises the limit; it does not fix the root cause. The recursive code itself should also be optimized, for example by rewriting the recursion iteratively (see the sketch below).
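
A minimal sketch of the kind of optimization meant here (the function is a made-up example, not code from the original problem): replacing deep recursion with an explicit loop removes the dependence on the recursion limit entirely.

def count_down_recursive(n):
    # raises RecursionError once n exceeds the recursion limit (about 1000 by default)
    if n == 0:
        return 0
    return count_down_recursive(n - 1)

def count_down_iterative(n):
    # the same logic as a loop: no recursion limit involved
    while n > 0:
        n -= 1
    return 0

print(count_down_iterative(10_000_000))  # works without touching sys.setrecursionlimit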

[Solved] PyCharm Error: AttributeError: 'HTMLParser' object has no attribute 'unescape'

PyCharm reported the error AttributeError: 'HTMLParser' object has no attribute 'unescape'.

Here is how to resolve the Python 3.9 exception "AttributeError: 'HTMLParser' object has no attribute 'unescape'".

It is usually an environment problem: when you create a project, PyCharm automatically creates an interpreter environment for it.

As shown in the figure below, a python.exe is generated automatically for the project environment.

In the settings, change the interpreter path to the correct Python environment and the problem is solved (a quick check is sketched below).

It used to work before, so I am not sure whether this is a Python 3.9 problem.
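
To confirm which interpreter a PyCharm run configuration actually uses (a minimal sketch; run it from the run configuration that produced the error):

import sys

print(sys.executable)  # path of the python.exe being used
print(sys.version)     # its Python version, e.g. 3.9.x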

[Solved] appium-doctor Error: bundletool.jar under Win10

Problem: first solve the bundletool.jar issue reported by appium-doctor.

1. Download package

https://github.com/google/bundletool/releases

Create a new bundletool directory under the Android directory, copy the downloaded package into it, and rename the jar (for example, to bundletool.jar), as shown in the figure below.

Add the directory containing the jar to the Path user variable.

In the system variables, add the same content shown in the figure to the Path variable.

Re-run appium-doctor in a new CMD window (a quick verification sketch follows).
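
A quick way to verify the setup from a fresh CMD window (a sketch; the exact directory is whatever you added to Path above):

where bundletool.jar
appium-doctor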