Tag Archives: tensorflow

[Solved] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized

When using tensorflow.keras, this error is often reported during model training:

tensorflow/core/kernels/data/generator_dataset_op.cc:107] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
	 [[{{node PyFunc}}]]

In my experience, there are several possible causes for this error:

1. The input image_size and input_shape do not match, or input_shape is not defined when the model is built. Note that input_shape must be defined on the first convolutional layer, e.g.

    model = keras.models.Sequential([
        # Input image [None, 224, 224, 3]
        # Convolution layer 1: 32 filters of size 5x5x3, stride 1, 'same' padding
        # Output [None, 224, 224, 32]
        keras.layers.Conv2D(32, kernel_size=5, strides=1, padding='same', data_format='channels_last',
                            activation='relu', input_shape=(224, 224, 3)),
        # ... remaining layers ...
    ])

2. The parameters of train_generator and validate_generator must also be consistent, such as batch_size, target_size and class_mode (see the sketch after this list).

3. The machine itself may be resource-limited; try reducing batch_size, even down to 1.

4. A previous run may not have exited completely; terminate all running Python processes and try again.
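
As an illustration of point 2, here is a minimal sketch of train and validation generators whose parameters stay consistent with each other and with the model's input_shape. The directory names data/train and data/val, the image size and the batch size are assumptions for illustration, not from the original post:

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    img_size = (224, 224)   # must match input_shape=(224, 224, 3) in the model above
    batch_size = 8          # reduce this (even to 1) if memory is tight, see point 3

    train_datagen = ImageDataGenerator(rescale=1.0 / 255)
    val_datagen = ImageDataGenerator(rescale=1.0 / 255)

    train_generator = train_datagen.flow_from_directory(
        'data/train', target_size=img_size, batch_size=batch_size, class_mode='categorical')
    validate_generator = val_datagen.flow_from_directory(
        'data/val', target_size=img_size, batch_size=batch_size, class_mode='categorical')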

TensorFlow-gpu Error: failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected

Error Messages:

failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected

I also ran into this today while running a TensorFlow-GPU CNN-SVM classifier program. It is not a problem with the program itself; it is an issue with the graphics card setup.

Solution:

import tensorflow as tf

config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
session = tf.compat.v1.InteractiveSession(config=config)

Just add these lines at the top of your script; you do not need the line below:

os.environ['CUDA_VISIBLE_DEVICES'] = '/gpu:0'
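
On TensorFlow 2.x, a minimal sketch of the same idea using the tf.config API (this also gives a quick way to check whether TensorFlow can see any GPU at all):

import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list matches the CUDA_ERROR_NO_DEVICE symptom
gpus = tf.config.list_physical_devices('GPU')
print("GPUs visible to TensorFlow:", gpus)

# Allocate GPU memory on demand, the TF2 equivalent of allow_growth = True
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)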

[Solved] original_keras_version = f.attrs['keras_version'].decode('utf8')

Windows system:

1. Error:

load_weights_from_hdf5_group
    original_keras_version = f.attrs['keras_version'].decode('utf8')

AttributeError: 'str' object has no attribute 'decode'

2. Cause analysis

When TensorFlow is installed, the h5py pulled in by default is 3.1.0; the error is reported because the installed TensorFlow does not support such a new h5py version.

3. Solutions

1. Uninstall h5py 3.1.0, install h5py 2.10.0, and restart the interpreter/IDE:

pip uninstall h5py
pip install h5py==2.10.0
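
After reinstalling, a quick check (a minimal sketch, run in a fresh interpreter) that the downgraded h5py is the one being imported:

import h5py
print(h5py.__version__)  # should print 2.10.0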

[Solved] Error(s) in loading state_dict for GeneratorResNet

Cause of the problem: the model was trained on multiple GPUs with DataParallel, which automatically prefixes every parameter key with 'module.'.
Looking at the error message, you can see that the keys in the saved state_dict carry this extra 'module.' prefix, while the model you are loading into does not.

Solution:
1. Remove the 'module.' prefix from the checkpoint keys:

    from collections import OrderedDict

    gentmps = torch.load("./saved_models/generator_%d.pth" % opt.epoch)
    distmps = torch.load("./saved_models/discriminator_%d.pth" % opt.epoch)
    new_gens = OrderedDict()
    new_diss = OrderedDict()
    for k, v in gentmps.items():
        name = k.replace('module.', '')  # strip the 'module.' prefix
        new_gens[name] = v               # keep the same tensor under the renamed key
    for k, v in distmps.items():
        name = k.replace('module.', '')  # strip the 'module.' prefix
        new_diss[name] = v               # keep the same tensor under the renamed key
    generator.load_state_dict(new_gens)
    discriminator.load_state_dict(new_diss)
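
Alternatively, a hedged sketch of the other common workaround (it assumes the same generator, discriminator and opt objects as above): wrap the models in DataParallel before loading, so that their keys carry the same 'module.' prefix as the checkpoint:

    import torch

    # Wrapping with DataParallel prefixes the in-memory keys with 'module.',
    # so they match the keys saved from multi-GPU training.
    generator = torch.nn.DataParallel(generator)
    discriminator = torch.nn.DataParallel(discriminator)
    generator.load_state_dict(torch.load("./saved_models/generator_%d.pth" % opt.epoch))
    discriminator.load_state_dict(torch.load("./saved_models/discriminator_%d.pth" % opt.epoch))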

[Solved] Jupyter Notebook pyLDAvis Error: ModuleNotFoundError: No module named 'pyLDAvis'

Background of the problem

I am getting started with Python and tried to use pyLDAvis for some simple topic extraction.

Problem and related code

pyLDAvis has already been installed, but an error occurs when running the following statements:

import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
pyLDAvis.sklearn.prepare(lda,tf,tf_vectorizer)

The specific error reported is: ModuleNotFoundError: No module named 'pyLDAvis'.

Solution:

Some bloggers suggested the installation had not actually succeeded and recommended reinstalling as administrator; I tried that, but it did not help.
In fact, the final solution is very simple: the kernel my Jupyter notebook was using was wrong. It was pointing at a virtual environment I had set up earlier. Switching back to the default environment (mine is named python3) fixed the problem.
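
To confirm which environment a notebook kernel is actually running, a minimal check you can execute inside the notebook (pyLDAvis must be installed into this same interpreter):

# Shows the interpreter behind the current Jupyter kernel;
# if it points at the old virtual environment, switch the kernel.
import sys
print(sys.executable)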

How to Solve Keras plot_model Error

1. Error information

When building a neural network model, you can call the plot_model utility in Keras to draw a schematic diagram of the model, which makes it easier to adjust the model structure:

from tensorflow.keras.models import Model
from tensorflow.keras.utils import plot_model

# dense_inputs, sparse_inputs and output_layer come from the model definition (omitted here)
model = Model(dense_inputs + sparse_inputs, output_layer)
plot_model(model, "fm_model.png", show_shapes=True)

As a result, the following error messages appear:

('Failed to import pydot. You must `pip install pydot` and install graphviz (https://graphviz.gitlab.io/download/), ', 'for pydotprint to work.')

Understanding the error message: the pydot and graphviz packages are missing.

2. Solutions

2.1 Install the graphviz package

pip install graphviz

2.2 Download the Graphviz .exe installer and install it

In Windows Environment

Download address: https://graphviz.gitlab.io/download/

2.3 Configure the environment variables for Graphviz (add the Graphviz bin directory to PATH)

2.4 Install the pydot package

pip install pydot-ng

2.5 restart development tools

Restart the IDE or other development tools (e.g. Jupyter notebook) for the change to take effect.
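
To verify the fix, a minimal sketch (the one-layer throwaway model and file name are just for illustration) that exercises plot_model end to end:

# If pydot or the Graphviz 'dot' binary is still missing, this raises the same error as before.
import tensorflow as tf
from tensorflow.keras.utils import plot_model

tiny = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
plot_model(tiny, "tiny_model.png", show_shapes=True)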

3. Summary

1. Installing the pydot and graphviz Python packages alone, as the error message suggests, does not work

2. You also need to download the corresponding .exe (or zip) file from the website and, after installation, set the environment variables

3. Don't forget to restart your IDE or other development tools

[Solved] Python2 Install tensorflow Error: class DescriptorBase(metaclass=DescriptorMetaclass), SyntaxError: invalid syntax

After installing TensorFlow under Python 2, test the installation:

import tensorflow as tf

Will report an error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/zhaokai/.local/lib/python2.7/site-packages/tensorflow/__init__.py", line 28, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/home/zhaokai/.local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 52, in <module>
    from tensorflow.core.framework.graph_pb2 import *
  File "/home/zhaokai/.local/lib/python2.7/site-packages/tensorflow/core/framework/graph_pb2.py", line 7, in <module>
    from google.protobuf import descriptor as _descriptor
  File "/home/zhaokai/.local/lib/python2.7/site-packages/google/protobuf/descriptor.py", line 113
    class DescriptorBase(metaclass=DescriptorMetaclass):
                                  ^
SyntaxError: invalid syntax

The solution is to reinstall protobuf, pinning a version that still supports Python 2:

pip install protobuf==3.17.3

then import tensorflow again.
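
To confirm that the pinned version is the one Python picks up, a minimal check:

# Should print 3.17.3 after the reinstall
import google.protobuf
print(google.protobuf.__version__)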

[Solved] ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE

pip Install tensorflow Error:

ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them. tensorflow<1.14,>=1.13 from https://www.piwheels.org/simple/tensorflow/tensorflow-1.13.1-cp35-none-linux_armv7l.whl#sha256=6c00dd13db0791e83cb08d532f007cc7fd44c8d7b52662a4a0065ac4fe7ca18a (from mycroft-precise==0.3.0): Expected sha256 6c00dd13db0791e83cb08d532f007cc7fd44c8d7b52662a4a0065ac4fe7ca18a Got f679035a7cd96d24f826463bef208cd04f1eee50eb6023a158c05b529e17a71b

The error means that the hash of the downloaded package does not match the expected hash: the package was corrupted during the pip download, possibly because of a network problem or a bad cached/incompatible package file.
Solution: add --no-cache-dir when running pip install, as follows:

pip install tensorflow --no-cache-dir

[Solved] ERROR: Cannot uninstall 'wrapt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it

Problem description:

When installing tensorflow, an error is reported: "ERROR: Cannot uninstall 'wrapt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall."

pip install tensorflow==1.15.0

Solution:

Change the command to:

pip install tensorflow==1.15.0 --ignore-installed wrapt

[Solved] ERROR: pip's dependency resolver does not currently take into account all the packages that are installed

When installing wrapt, the following error is reported:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.7.0 requires h5py>=2.9.0, which is not installed.
tensorflow 2.7.0 requires typing-extensions>=3.6.6, which is not installed.
tensorflow 2.7.0 requires wheel<1.0,>=0.32.0, which is not installed.

Just install the missing packages as prompted:

pip install h5py
pip install typing-extensions
pip install wheel

NXP mx8 Platform tensorflow-lite build error [How to Solve]

Solution provided by NXP:
Compiling L5.4.3_1.0.0 BSP On Ubuntu 18.04 LTS – NXP Community
1. Compile tensorflow-lite with bitbake:

bitbake tensorflow-lite -c do_configure -v -f

The following error occurs; at this point you can see which package failed to download:

FAILED: ruy-populate-prefix/src/ruy-populate-stamp/ruy-populate-download

The specific paths are:

tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/_deps/ruy-subbuild/ruy-populate-prefix/src/ruy-populate-stamp/ruy-populate-download

tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/_deps/ruy-subbuild/ruy-populate-prefix/src/

Check whether the corresponding zip package is present in the above directory, and copy it to the tensorflow pack folder created in the corresponding root directory.

| FAILED: ruy-populate-prefix/src/ruy-populate-stamp/ruy-populate-download
| cd /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build && /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/recipe-sysroot-native/usr/bin/cmake -P /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/_deps/ruy-subbuild/ruy-populate-prefix/src/ruy-populate-stamp/download-ruy-populate.cmake && /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/recipe-sysroot-native/usr/bin/cmake -P /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/_deps/ruy-subbuild/ruy-populate-prefix/src/ruy-populate-stamp/verify-ruy-populate.cmake && /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/recipe-sysroot-native/usr/bin/cmake -P /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/_deps/ruy-subbuild/ruy-populate-prefix/src/ruy-populate-stamp/extract-ruy-populate.cmake && /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/recipe-sysroot-native/usr/bin/cmake -E touch /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/_deps/ruy-subbuild/ruy-populate-prefix/src/ruy-populate-stamp/ruy-populate-download
| ninja: build stopped: subcommand failed.
|
| CMake Error at /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/recipe-sysroot-native/usr/share/cmake-3.19/Modules/FetchContent.cmake:989 (message):
|   Build step for ruy failed: 1
| Call Stack (most recent call first):
|   /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/recipe-sysroot-native/usr/share/cmake-3.19/Modules/FetchContent.cmake:1118:EVAL:2 (__FetchContent_directPopulate)
|   /work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/recipe-sysroot-native/usr/share/cmake-3.19/Modules/FetchContent.cmake:1118 (cmake_language)
|   tools/cmake/modules/OverridableFetchContent.cmake:531 (FetchContent_Populate)
|   tools/cmake/modules/ruy.cmake:30 (OverridableFetchContent_Populate)
|   tools/cmake/modules/Findruy.cmake:16 (include)
|   CMakeLists.txt:197 (find_package)
|
|
| -- Configuring incomplete, errors occurred!
| See also "/work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/CMakeFiles/CMakeOutput.log".
| See also "/work/code/temp/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/CMakeFiles/CMakeError.log".
| + bb_sh_exit_handler
| + ret=1
| + [ 1 != 0 ]
| + echo WARNING: exit code 1 from a shell command.
| WARNING: exit code 1 from a shell command.
| + exit 1

2. The second error:

tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/pthreadpool-download

Check whether there is a zip file in this directory, and also copy it to the tensorflow pack folder.

ninja: build stopped: subcommand failed.
-- Downloading pthreadpool to /work/code/test/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/pthreadpool-source (define PTHREADPOOL_SOURCE_DIR to avoid it)
-- Configuring done
-- Generating done
-- Build files have been written to: /work/code/test/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/pthreadpool-download
[1/9] Creating directories for 'pthreadpool'
[2/9] Performing download step (download, verify and extract) for 'pthreadpool'
-- Downloading...
   dst='/work/code/test/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/pthreadpool-download/pthreadpool-prefix/src/545ebe9f225aec6dca49109516fac02e973a3de2.zip'
   timeout='none'
   inactivity timeout='none'
-- Using src='https://github.com/Maratyszcza/pthreadpool/archive/545ebe9f225aec6dca49109516fac02e973a3de2.zip'
-- [download 100% complete]
-- verifying file...
       file='/work/code/test/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/pthreadpool-download/pthreadpool-prefix/src/545ebe9f225aec6dca49109516fac02e973a3de2.zip'
-- Downloading... done
-- extracting...
     src='/work/code/test/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/pthreadpool-download/pthreadpool-prefix/src/545ebe9f225aec6dca49109516fac02e973a3de2.zip'
     dst='/work/code/test/ver/build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/pthreadpool-source'
-- extracting... [tar xfz]

3. Subsequent errors for other missing downloads appear in turn and are handled in the same way.

4. Run the ./cp.sh script below while executing the bitbake tensorflow-lite -c compile -v -f compilation, and tensorflow-lite will compile successfully.

#!/bin/bash
# Copy the manually downloaded ruy archive into the bitbake download directory
mkdir -p ./../build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/_deps/ruy-subbuild/ruy-populate-prefix/src/
cp 54774a7a2cf85963777289193629d4bd42de4a59.zip ./../build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/_deps/ruy-subbuild/ruy-populate-prefix/src/

# Copy the manually downloaded cpuinfo archive into its download directory
mkdir -p ./../build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/cpuinfo-download/cpuinfo-prefix/src/
cp 5916273f79a21551890fd3d56fc5375a78d1598d.zip ../build-imx-robot/tmp/work/cortexa53-crypto-poky-linux/tensorflow-lite/2.5.0-r0/build/cpuinfo-download/cpuinfo-prefix/src/

# The pthreadpool archive (545ebe9f225aec6dca49109516fac02e973a3de2.zip, see the log above) can be
# copied into build/pthreadpool-download/pthreadpool-prefix/src/ in the same way.