Tag Archives: python

[Solved] Tensorflow-gpu Error: self._traceback = tf_stack.extract_stack()

TensorFlow-GPU reports the error self._traceback = tf_stack.extract_stack()

Reason 1: the video memory is full

You can check the GPU's running status by entering the command nvidia-smi in CMD.
The most likely cause is that the batch_size or the number of hidden layers is too large, so the video memory fills up and the data cannot be loaded completely. The GPU then never starts working (similar to memory and CPU) and its utilization stays at 0%.

Solutions for reason 1:
1. Reduce batch_size and the number of hidden layers, lower the image resolution, close other software that consumes video memory, or use any other method that reduces video-memory usage, then try again. If the card has only 2 GB of video memory, it is better to run on the CPU.
2. Limit TensorFlow's GPU memory usage in code:

import os

import tensorflow as tf

os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # expose only the first GPU (takes device indices, not '/gpu:0')

config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
config.gpu_options.per_process_gpu_memory_fraction = 0.7  # cap this process at 70% of the video memory
tf.compat.v1.keras.backend.set_session(tf.compat.v1.Session(config=config))
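
Alternatively, instead of reserving a fixed fraction up front, you can let TensorFlow grow its video-memory usage on demand. A minimal sketch (not from the original post), using the same compat.v1 API as above:

import tensorflow as tf

config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True  # allocate video memory incrementally as needed
tf.compat.v1.keras.backend.set_session(tf.compat.v1.Session(config=config))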

Reason 2: duplicated code, where the calls overlap
I found this while saving and loading a model: the variable assignments and operations were written twice, once for saving and once for loading, and loading then failed with self._traceback = tf_stack.extract_stack(). (There are many possible causes of this TensorFlow error.)
The faulty code is as follows:

import os

import numpy as np
import tensorflow as tf


a = tf.Variable(5., dtype=tf.float32)
b = tf.Variable(6., dtype=tf.float32)
num = 10
model_save_path = './model/'
model_name = 'model'
saver = tf.train.Saver()

with tf.Session() as sess:
    init_op = tf.compat.v1.global_variables_initializer()
    sess.run(init_op)
    for step in np.arange(num):
        c = sess.run(tf.add(a, b))
        saver.save(sess, os.path.join(model_save_path, model_name), global_step=step)
print("Parameters saved successfully!")
a = tf.Variable(5., dtype=tf.float32)   # Note the repetition here
b = tf.Variable(6., dtype=tf.float32)   # Note the repetition here
num = 10
model_save_path = './model/'
model_name = 'model'
saver = tf.train.Saver()    # Note the repetition here

with tf.Session() as sess:
    init_op = tf.compat.v1.global_variables_initializer()
    sess.run(init_op)
    ckpt = tf.train.get_checkpoint_state(model_save_path)
    if ckpt and ckpt.model_checkpoint_path:
        saver.restore(sess, ckpt.model_checkpoint_path)
    print("load success")

Running this code reports the error: self._traceback = tf_stack.extract_stack()

Solution for reason 2
Comment out or delete the second saver = tf.train.Saver() in the loading code, along with the repeated variable definitions:

a = tf.Variable(5., dtype=tf.float32)  # remove this duplicate
b = tf.Variable(6., dtype=tf.float32)  # remove this duplicate

The model then no longer reports the error. I am not sure of the exact cause, but a plausible explanation is that defining the variables and the Saver a second time adds a duplicate set of nodes to the same default graph, so the variable names no longer match the names stored in the checkpoint.
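
If saving and loading really must rebuild the graph in the same script, another option (a sketch, not from the original post) is to reset the default graph before redefining anything:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # needed on TF 2.x for this graph-mode code
# Clear the default graph so the rebuilt variables do not collide
# with the set created while saving.
tf.compat.v1.reset_default_graph()

a = tf.Variable(5., dtype=tf.float32)
b = tf.Variable(6., dtype=tf.float32)
saver = tf.compat.v1.train.Saver()

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    ckpt = tf.train.get_checkpoint_state('./model/')
    if ckpt and ckpt.model_checkpoint_path:
        saver.restore(sess, ckpt.model_checkpoint_path)
        print("load success")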

[Solved] PyCharm error: AttributeError: 'HTMLParser' object has no attribute 'unescape'

How to resolve the Python 3.9 exception AttributeError: 'HTMLParser' object has no attribute 'unescape' in PyCharm.

It is usually an environment problem: when you create a project, PyCharm automatically creates an interpreter environment for it.

As shown in the figure below, a python.exe for the project environment is generated automatically.

In Settings, point the project at the correct Python environment to fix the problem.

It worked before, though. This is in fact a Python 3.9 change: the long-deprecated HTMLParser.unescape method was removed in Python 3.9.
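
If the error comes from your own code rather than a third-party package that needs updating, the standard-library replacement is html.unescape. A minimal sketch:

from html import unescape

# html.unescape replaces the removed HTMLParser.unescape method
print(unescape("&lt;b&gt;caf&eacute;&lt;/b&gt;"))  # prints: <b>café</b>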

[Solved] appium-doctor Error: bundletool.jar under Win10

Problem: first solve the bundletool.jar issue.

1. Download package

https://github.com/google/bundletool/releases

Create a new bundle-tool directory under the Android directory, copy the downloaded package into it, and rename the jar package, as shown in the figure below.

Add the jar package's path to the Path variable under the user variables.

In the system variables, add the entry shown in the figure to the Path variable as well.

Re-run appium-doctor in a new CMD window.
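
To confirm the new Path entry is actually visible, a quick hypothetical check from Python (bundletool.jar being the renamed file from the step above):

import shutil

# Prints the jar's full path if the Path entry works, otherwise None
print(shutil.which("bundletool.jar"))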

[Solved] KeyError: 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/query\kernel is not a file in the archive'

Recently, I've been working on the application of Transformer to fine-grained images.
and ran into this problem with the ViT source code:
 
KeyError: 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/query\kernel is not a file in the archive'
 
This is a problem that arises when merging paths with os.path.join: on Windows it joins with backslashes, while the keys stored in the pretrained-weight archive use forward slashes, so the lookup fails.
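
A small illustration of the mismatch (assuming a Windows machine, where os.path.join uses backslashes):

import os

# On Windows this prints: Transformer/encoderblock_0\query\kernel
# but the archive stores its keys with forward slashes only.
print(os.path.join("Transformer/encoderblock_0", "query", "kernel"))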
 
Solution:

1. In the modeling.py file

Append '/' to the following path prefixes:
ATTENTION_Q = "MultiHeadDotProductAttention_1/query/"
ATTENTION_K = "MultiHeadDotProductAttention_1/key/"
ATTENTION_V = "MultiHeadDotProductAttention_1/value/"
ATTENTION_OUT = "MultiHeadDotProductAttention_1/out/"
FC_0 = "MlpBlock_3/Dense_0/"
FC_1 = "MlpBlock_3/Dense_1/"
ATTENTION_NORM = "LayerNorm_0/"
MLP_NORM = "LayerNorm_2/"
 
2. In the vit_modeling_resnet.py file
 
In the ResNetV2 class, add '/' after each 'block' and 'unit' name:
 
self.body = nn.Sequential(OrderedDict([
    ('block1/', nn.Sequential(OrderedDict(
        [('unit1/', PreActBottleneck(cin=width, cout=width*4, cmid=width))] +
        [(f'unit{i:d}/', PreActBottleneck(cin=width*4, cout=width*4, cmid=width)) for i in range(2, block_units[0] + 1)],
        ))),
    ('block2/', nn.Sequential(OrderedDict(
        [('unit1/', PreActBottleneck(cin=width*4, cout=width*8, cmid=width*2, stride=2))] +
        [(f'unit{i:d}/', PreActBottleneck(cin=width*8, cout=width*8, cmid=width*2)) for i in range(2, block_units[1] + 1)],
        ))),
    ('block3/', nn.Sequential(OrderedDict(
        [('unit1/', PreActBottleneck(cin=width*8, cout=width*16, cmid=width*4, stride=2))] +
        [(f'unit{i:d}/', PreActBottleneck(cin=width*16, cout=width*16, cmid=width*4)) for i in range(2, block_units[2] + 1)],
        ))),
]))

[Modified] Hive SQL Error: SQL ERROR [10004] [42000]: FAILED: SemanticException [Error 10004]: Invalid table alias or column reference

SQL ERROR [10004] [42000]: Error while compiling statement: FAILED: SemanticException [Error 10004]: Line 64:0 Invalid table alias or column reference ‘T4’: (possible column names are: order_id, order_status, update_time, charge_id, charge_status, station_id, station_name, soc, totalpower, i_a, i_b, i_c, u_a, u_b, u_c, pri_opr_id)


The ORDER BY references the alias 'T4', which Hive cannot resolve at that point; change it to reference the column names directly:

ORDER BY
a,
b

[Solved] Harbor image replicate Error: Fetchartifacts error when collect tags for repos

Solution:

https://github.com/goharbor/harbor/issues/12003


This issue occurs because the Postgres database connection limit is exceeded.
The default max connections is 100.
However, it still does not work even after setting database.max_open_conns in harbor.yaml.
You also have to manually edit postgresql.conf, e.g.:

sudo sed -i 's/max_connections =.*/max_connections = 999/g' /data/database/postgresql.conf

and then restart harbor-db.

track db issue with #12124

[Solved] AttributeError: module 'pandas' has no attribute 'rolling_count'

Problem Description:

While doing automated modeling today, I used the iris dataset to initialize the AutoML framework and passed in the training data. On the final fit call an error was reported: AttributeError: module 'pandas' has no attribute 'rolling_count'. Based on posts online, I first assumed the pandas version was wrong, but reinstalling pandas did not help.

The framework is Microsoft's FLAML automated-modeling framework, installed directly with pip install flaml. The code:

from flaml import AutoML
from sklearn.datasets import load_iris
import pandas as pd


# Build a two-class subset of the iris dataset
iris = load_iris()
iris_data = pd.concat([pd.DataFrame(iris.data), pd.Series(iris.target)], axis=1)
iris_data.columns = ["_".join(feature.split(" ")[:2]) for feature in iris.feature_names] + ["target"]
iris_data = iris_data[(iris_data.target == 0) | (iris_data.target == 1)]


flaml_automl = AutoML()
flaml_automl.fit(pd.DataFrame(iris_data.iloc[:, :-1]), iris_data.iloc[:, -1], time_budget=10, estimator_list=['lgbm', 'xgboost'])

After upgrading dask (pip install --upgrade dask), fit finally ran normally. Strangely, though, the error message never mentioned dask at all. Some bloggers say dask provides interfaces over pandas and numpy, so perhaps the old dask version was still calling an interface that newer pandas has removed.

In short, upgrading dask solved the problem!
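
For reference (not part of the original fix): pd.rolling_count itself was deprecated in pandas 0.18 and removed in later releases; the modern equivalent is the rolling-window API:

import pandas as pd

s = pd.Series([1.0, 2.0, None, 4.0])
# Old, removed API: pd.rolling_count(s, window=2)
# Modern equivalent: count non-NaN values in each rolling window
print(s.rolling(window=2, min_periods=0).count())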

[Solved] allure Error: unrecognized arguments: --alluredir --clean-alluredir

An integrated environment had been built locally and ran smoothly. When the build was moved to an ECS server, allure kept erroring out, complaining the command was wrong. After repeatedly comparing the Python modules and versions, everything was identical, yet it kept reporting this error:

ubuntu@VM-16-9-ubuntu:/var/lib/jenkins/workspace/autotest_daily/pytestdemo$ sudo python3 all.py 
ERROR: usage: all.py [options] [file_or_dir] [file_or_dir] [...]
all.py: error: unrecognized arguments: --alluredir --clean-alluredir
  inifile: /var/lib/jenkins/workspace/autotest_daily/pytestdemo/pytest.ini
  rootdir: /var/lib/jenkins/workspace/autotest_daily/pytestdemo
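
The two flags are registered by the allure-pytest plugin; if the plugin fails to load, pytest rejects them as unrecognized arguments. For context, a minimal sketch of what a runner script like all.py presumably does (all.py itself is not shown in the post):

import pytest

# --alluredir / --clean-alluredir only exist when allure-pytest loads correctly
pytest.main(["--alluredir", "./allure-results", "--clean-alluredir"])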

I searched everywhere but could not find the cause. I suspected the allure-pytest version, but the version numbers were identical; the only difference between the two servers was their package source addresses. Can the same version still behave differently? After being stuck here for an hour, I executed sudo pip3 install --upgrade allure-pytest

Installing collected packages: six, allure-python-commons, allure-pytest
Successfully installed allure-pytest-2.9.45 allure-python-commons-2.9.45 six-1.16.0

The problem turned out to be that the six versions differed: the local build used six 1.14.0 while the server had 1.16.0.

Attempt to downgrade the version of six

sudo pip3 install six==1.14.0

Then I continued the build. pytest executed normally and the build was OK. I have to say, Python has a lot of pits like this.

The question is, what does six do?

I looked it up: six is a compatibility library for writing code that runs on both Python 2 and Python 3. From this it appears that

allure-pytest            2.9.45
allure-python-commons    2.9.45

are incompatible with six 1.16.0, which is what dug this particular pit.