Tag Archives: tensorflow

AttributeError: module 'typing' has no attribute 'NoReturn'

Problem Description:

An error is reported when installing tensorflow-gpu==1.4.1 in a virtual environment with Python 3.6.0:

AttributeError: module 'typing' has no attribute 'NoReturn'

Cause analysis:

The Python version is too old. The typing module changed during the 3.6 series, and typing.NoReturn only exists in Python 3.6.2 and later.

Solution:

Install a newer version of Python (3.6.2 or above) to solve the problem.
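A quick way to confirm which case you are hitting before upgrading (a minimal sketch run in the failing interpreter):

# Minimal check: typing.NoReturn was only added in Python 3.6.2,
# so it is missing on 3.6.0/3.6.1 and the error above follows.
import sys
import typing

print(sys.version_info)                # e.g. (3, 6, 0, ...) on the failing interpreter
print(hasattr(typing, "NoReturn"))     # False on 3.6.0/3.6.1, True on 3.6.2 and later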

TensorFlow Error: ModuleNotFoundError: No module named 'tensorflow.python.types' (Solution)

Running with TensorFlow 1.15, the error ModuleNotFoundError: No module named 'tensorflow.python.types' occurs.

Solution:

It turns out that the tensorflow-estimator package installed automatically alongside TensorFlow 1.15.0 is version 2.5.0.
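One way to confirm the mismatch from Python before reinstalling (a sketch using pkg_resources, which ships with setuptools; change "tensorflow" to "tensorflow-gpu" if that is the package you installed):

# Sketch: print the installed tensorflow and tensorflow-estimator versions;
# a 2.x estimator next to a 1.15.x tensorflow explains the import error.
import pkg_resources

for name in ("tensorflow", "tensorflow-estimator"):
    print(name, pkg_resources.get_distribution(name).version)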

Uninstall version 2.5: pip uninstall tensorflow-estimator

Reinstall a 1.15 version: conda install tensorflow-estimator==1.15.1, or pip install tensorflow-estimator==1.15.1. (I installed it with conda; pip failed with ValueError: check_hostname requires server_hostname. Searching for that error suggested turning off the proxy, but I depend on the proxy for network access, so I could not turn it off.)

Problem solved.

Note: the reason for installing 1.15.1 is that version 1.15.0 could not be found. If you look for a manual download on the official site, PyPI only lists tensorflow-estimator 1.15.1 in the search results.

Reference: ModuleNotFoundError: No module named 'tensorflow.python.types' – Stack Overflow

[Solved] CUDA driver version is insufficient for CUDA runtime version

CUDA driver version is insufficient for CUDA runtime version

Question:

An error is reported when running the insightface OneFlow code in Docker:

 Failed to get cuda runtime version: CUDA driver version is insufficient for CUDA runtime version

Reason:

1. View CUDA runtime version

cat /usr/local/cuda/version.txt

The CUDA version in my docker is 10.0.130

CUDA Version 10.0.130

2. Each CUDA version requires a minimum graphics card driver version; see the following link.
https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html

CUDA Toolkit                                        Linux x86_64 Driver    Windows x86_64 Driver
CUDA 11.0.3 Update 1                                (see link above)       (see link above)
CUDA 11.0.2 GA                                      >= 450.51.05           >= 451.48
CUDA 11.0.1 RC                                      >= 450.36.06           >= 451.22
CUDA 10.2.89                                        >= 440.33              >= 441.22
CUDA 10.1 (10.1.105 general release, and updates)   >= 418.39              >= 418.96
CUDA 10.0.130                                       >= 410.48              >= 411.31
CUDA 9.2 (9.2.148 Update 1)                         >= 396.37              >= 398.26
CUDA 9.2 (9.2.88)                                   >= 396.26              >= 397.44

cat /proc/driver/nvidia/version shows that the server's graphics card driver is 418.67, which corresponds to CUDA 10.1, but the CUDA I installed is 10.0.130.

NVRM version: NVIDIA UNIX x86_64 Kernel Module  418.67  Sat Apr  6 03:07:24 CDT 2019
GCC version:  gcc version 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04)
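The same information can also be read programmatically, without the table (a sketch that calls the CUDA driver API via ctypes; it assumes libcuda.so is on the loader path, on some systems the file is named libcuda.so.1):

# Sketch: ask the NVIDIA driver which CUDA runtime version it supports.
import ctypes

libcuda = ctypes.CDLL("libcuda.so")     # use "libcuda.so.1" if this name is not found
version = ctypes.c_int()
libcuda.cuDriverGetVersion(ctypes.byref(version))
print(f"Driver supports CUDA up to {version.value // 1000}.{(version.value % 1000) // 10}")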

Solution:

Installing CUDA 10.1

(1) First, at https://developer.nvidia.com/cuda-toolkit-archive, download the CUDA 10.1 installer that matches the machine environment. For the installer type, I chose runfile (local); the installation steps are simpler.

wget https://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.243_418.87.00_linux.run

(2) Installation

sudo sh cuda_10.1.243_418.87.00_linux.run

The same error still occurred and remains unresolved; this post will be updated if a solution is found later.

[Solved] 'No space left on device' always appears when using TF's debug tool (tfdbg)

The first time TensorFlow's debug tool works fine, but from the second use onwards it keeps running out of space. This can be solved through the following steps.

  df -h

The root partition turns out to be full, so go to the root directory and check which directories take up the space:

du --max-depth=1 -h

It turns out that /tmp takes up a lot of space.

 

Sure enough, /tmp contains files left behind by tfdbg. Just delete them.
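If this keeps happening, the cleanup can be scripted (a sketch that assumes the leftover dump directories live under /tmp and contain "tfdbg" in their names; check what du/ls actually shows before deleting anything):

# Sketch: remove leftover tfdbg dump directories under /tmp.
import glob
import shutil

for path in glob.glob("/tmp/*tfdbg*"):
    print("removing", path)
    shutil.rmtree(path, ignore_errors=True)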


TypeError: cannot unpack non-iterable NoneType object appears when starting the BERT server on Ubuntu


Question

When entering bert-serving-start -model_dir chinese_L-12_H-768_A-12 -num_worker 1 -max_seq_len 64, TypeError: cannot unpack non-iterable NoneType object is reported.

Solution

    1. Check whether the startup path is correct: -model_dir should be followed by the path of the downloaded and unzipped pretrained model. My path is the folder left after decompression; you can right-click in that folder, choose "Open in Terminal", and re-enter the command. This was the cause of my problem; after re-entering the command there, no error was reported.
    2. Check your TensorFlow version; bert-serving does not support TensorFlow 2.x at present. My TensorFlow version is 1.15.0.

    Finally, it runs without any error.
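For completeness, a minimal client-side check once the server starts cleanly (a sketch; it assumes bert-serving-client is installed and the server runs on the same machine with the default ports):

# Sketch: request embeddings from the running bert-serving server.
from bert_serving.client import BertClient

bc = BertClient()                 # connects to localhost:5555/5556 by default
vecs = bc.encode(["hello world", "the BERT server started correctly"])
print(vecs.shape)                 # (2, 768) for the chinese_L-12_H-768_A-12 model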

TensorFlow model loading error ValueError: Unknown layer: Functional


Problem description

When the model trained on the server is pulled down to my own computer and loaded with TensorFlow, it reports ValueError: Unknown layer: Functional:

import tensorflow as tf

model = tf.keras.models.load_model('test.h5')

The server

python 3.6.13
tensorflow-gpu==2.3.0

Own computer

python 3.6.5
tensorflow-gpu==2.1.0

Solution

pip install tensorflow-gpu==2.3.0
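If upgrading the local TensorFlow is not an option, a workaround often reported on Stack Overflow (not verified here) is to map the unknown Functional class to the plain Keras Model when loading:

# Workaround sketch: treat the unrecognized "Functional" class as tf.keras.Model.
# Matching the TensorFlow version used for training remains the safer fix.
import tensorflow as tf

model = tf.keras.models.load_model(
    'test.h5',
    custom_objects={'Functional': tf.keras.models.Model}
)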


Tensorflow Error polling for event status: failed to query event: CUDA_ERROR_ILLEGAL_ADDRESS

  Server environment:

    Ubuntu 16.04.4
    tensorflow 1.13.1
    cuda 10.0
    cudnn 7.4.5

Recently, while running the point cloud classification demo PointASNL, the following errors appeared during training whenever batch_size was set relatively large:

2020-06-12 00:14:01.824110: E tensorflow/stream_executor/cuda/cuda_event.cc:29] Error polling for event status: failed to query event: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
2020-06-12 00:14:01.824142: F tensorflow/core/common_runtime/gpu/gpu_event_mgr.cc:273] Unexpected Event status: 1

At first I thought there was a bug in the GPU code, but after repeated checking I found no error.

After searching online, I realized it was probably a problem with the environment versions.

After downgrading cuDNN from 7.4.5 to 7.3.1, the problem seems to be solved. I hope it does not come back.
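To confirm which cuDNN version is actually installed before and after the downgrade, the version macros in the header can be read (a sketch assuming the default install path /usr/local/cuda/include/cudnn.h; adjust the path if cuDNN lives elsewhere):

# Sketch: parse CUDNN_MAJOR/MINOR/PATCHLEVEL from cudnn.h.
import re

with open("/usr/local/cuda/include/cudnn.h") as f:
    header = f.read()

parts = [re.search(r"#define CUDNN_%s\s+(\d+)" % name, header).group(1)
         for name in ("MAJOR", "MINOR", "PATCHLEVEL")]
print("cuDNN", ".".join(parts))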

AttributeError: module ‘keras.backend‘ has no attribute ‘eager‘

Project scenario:

Under a Windows environment with Python 3.6, the versions of the conda packages are as follows:

# Name                    Version                   Build  Channel
absl-py                   0.13.0                    <pip>
astor                     0.8.1                     <pip>
cached-property           1.5.2                     <pip>
certifi                   2021.5.30        py36ha15d459_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
cycler                    0.10.0                    <pip>
dataclasses               0.8                       <pip>
gast                      0.2.2                     <pip>
google-pasta              0.2.0                     <pip>
grpcio                    1.38.1                    <pip>
h5py                      3.1.0                     <pip>
importlib-metadata        4.6.0                     <pip>
joblib                    1.0.1                     <pip>
Keras                     2.3.1                     <pip>
Keras-Applications        1.0.8                     <pip>
Keras-Preprocessing       1.1.2                     <pip>
kiwisolver                1.3.1                     <pip>
Markdown                  3.3.4                     <pip>
matplotlib                3.3.4                     <pip>
numpy                     1.19.5                    <pip>
opt-einsum                3.3.0                     <pip>
pandas                    1.1.5                     <pip>
Pillow                    8.2.0                     <pip>
pip                       21.1.3             pyhd8ed1ab_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
protobuf                  3.17.3                    <pip>
pyparsing                 2.4.7                     <pip>
python                    3.6.13          h39d44d4_0_cpython    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
python-dateutil           2.8.1                     <pip>
python_abi                3.6                     2_cp36m    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
pytz                      2021.1                    <pip>
PyYAML                    5.4.1                     <pip>
scikit-learn              0.24.2                    <pip>
scipy                     1.5.4                     <pip>
seaborn                   0.11.1                    <pip>
setuptools                49.6.0           py36ha15d459_3    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
six                       1.16.0                    <pip>
sklearn                   0.0                       <pip>
tensorboard               1.15.0                    <pip>
tensorflow                1.15.0                    <pip>
tensorflow-estimator      1.15.1                    <pip>
termcolor                 1.1.0                     <pip>
threadpoolctl             2.1.0                     <pip>
typing-extensions         3.10.0.0                  <pip>
ucrt                      10.0.20348.0         h57928b3_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
vc                        14.2                 hb210afc_5    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
vs2015_runtime            14.29.30037          h902a5da_5    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
Werkzeug                  2.0.1                     <pip>
wheel                     0.36.2             pyhd3deb0d_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
wincertstore              0.2             py36ha15d459_1006    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
wrapt                     1.12.1                    <pip>
zipp                      3.4.1                     <pip>

Problem Description:

After a long struggle with library versions, an LSTM demo finally ran, but opening a new test file for testing produced the AttributeError above (module 'keras.backend' has no attribute 'eager').


Solution:

The solution from Stack Overflow is to uninstall the current keras and tensorflow libraries, because a higher version of keras may be installed while the code being run targets a lower version. Note that tensorflow needs to be installed before keras.

The specific steps from the most upvoted Stack Overflow answer are as follows:
Uninstall tensorflow

pip uninstall tensorflow

Update pip

pip install --upgrade pip

install keras

pip install Keras

install tensorflow

pip install tensorflow

Apart from this method, my final solution was different: I found that the keras import statements

from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation

cannot appear in two .py files of the same project! Even if you never actually use the imports, the error is still raised. I only discovered this after reinstalling everything, so when testing code, either create a new project or keep these import statements in a single file (see the sketch below).
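A minimal way to follow that rule (an illustrative sketch, not from the original post; the file and function names are made up): keep the Keras imports in a single module and have every test file import the model builder from it.

# models.py - the only file in the project that imports keras directly (illustrative).
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation

def build_lstm(input_shape, units=32):
    # Other .py files call this builder instead of importing keras themselves.
    model = Sequential()
    model.add(LSTM(units, input_shape=input_shape))
    model.add(Dense(1))
    model.add(Activation("sigmoid"))
    return model

A new test file then only needs from models import build_lstm, so the keras import statements never appear in two files.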

TensorFlow Install Error: Could not load dynamic library ‘*****.dll‘; dlerror: ********.dll not found

After TensorFlow 2.x is installed successfully, running the following code:

tf.config.list_physical_devices('GPU')

always produces messages like the following (there are usually several; only two are shown here):

Could not load dynamic library 'cublas64_10.dll'; dlerror: cublas64_10.dll not found

Could not load dynamic library 'cudnn64_7.dll'; dlerror: cudnn64_7.dll not found

Solution:

Download the corresponding DLL files and put them into the folder C:\Windows\System32.

This solved the problem (personally tested and confirmed).

The key question, then, is where to download these DLL files; the original article provides all of the required DLL files.
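A quick way to check from Python which of the libraries Windows can already locate, before and after copying the files (a sketch using the DLL names from the warnings above; add any other names your warnings mention):

# Sketch: try to load each reported CUDA DLL by name on Windows.
import ctypes

for name in ("cublas64_10.dll", "cudnn64_7.dll"):
    try:
        ctypes.WinDLL(name)
        print(name, "found")
    except OSError:
        print(name, "NOT found - copy it into C:\\Windows\\System32 or a directory on PATH")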

[Solved] Tensorflow/Keras Error reading weights: ValueError: axes don‘t match array

Error information:

Traceback (most recent call last):
  File "bs.py", line 149, in <module>
    tcpserver1=MYTCPServer(('192.168.0.109',54321)) 
  File "wserver_bs.py", line 65, in __init__
    self.model.load_weights(weight_filepath)
  File "/home/ps/anaconda3/envs/anomaly/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 162, in load_weights
    return super(Model, self).load_weights(filepath, by_name)
  File "/home/ps/anaconda3/envs/anomaly/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1424, in load_weights
    saving.load_weights_from_hdf5_group(f, self.layers)
  File "/home/ps/anaconda3/envs/anomaly/lib/python3.6/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 749, in load_weights_from_hdf5_group
    layer, weight_values, original_keras_version, original_backend)
  File "/home/ps/anaconda3/envs/anomaly/lib/python3.6/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 456, in preprocess_weights_for_loading
    weights[0] = np.transpose(weights[0], (3, 2, 0, 1))
  File "<__array_function__ internals>", line 6, in transpose
  File "/home/ps/anaconda3/envs/anomaly/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 653, in transpose
    return _wrapfunc(a, 'transpose', axes)
  File "/home/ps/anaconda3/envs/anomaly/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 58, in _wrapfunc
    return bound(*args, **kwds)
ValueError: axes don't match array

After tossing with this for half a night and trying all sorts of methods found online, nothing helped. The failure shows up in the load_weights call, but it is really a matter of how the model is set up!

When loading the model, the default input size is used

model = cnn.CNNLikeModel() 

The actual size of the input tensor is different from the default size, which leads to this error.

Solution

1. Modify the default input size in the model definition, or
2. Fill in the correct input size when constructing the model (see the sketch below).
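A sketch of the second option; the constructor argument name and shape below are hypothetical, since the signature of cnn.CNNLikeModel is specific to this project:

# Hypothetical sketch: build the model with the same input size it was trained with,
# so the layer shapes match the weights stored in the HDF5 file.
model = cnn.CNNLikeModel(input_shape=(128, 128, 3))   # argument name and shape are assumptions
model.load_weights(weight_filepath)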

[Solved] Tensorflow cuda Error: Could not load dynamic library ‘libcudart.so.11.0‘; dlerror: libcudart.so.11.0:

dlerror: libcudart.so.11.0: problem solving

First, check the local CUDA library path:

/usr/local/cuda/lib64

Check your CUDA version there. In my case the locally installed CUDA version is 10.0, while the error shows TensorFlow is looking for the 11.0 runtime, hence the failure. There are two solutions.

Scheme 1

This forcibly uses the local CUDA installation as the runtime CUDA environment; it may cause problems, and I have not tried it.

cd /usr/local/cuda/lib64/
sudo ln -sf libcudart.so.10.0 libcudart.so.11.0

Scheme 2

Install the CUDA runtime libraries inside the conda environment:

conda install tensorflow-gpu cudatoolkit=11.0
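After either scheme, a quick sanity check (a minimal sketch) is to confirm that TensorFlow can now load the CUDA libraries and see the GPU:

# Sketch: if libcudart.so.11.0 loads correctly, the GPU should be listed.
import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))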