Tag Archives: python

[Solved] Flask Error: AttributeError: 'Blueprint' object has no attribute 'register_blueprint'

Recently, a Flask app was deployed on Alibaba Cloud, and an error was reported during startup:

AttributeError: 'Blueprint' object has no attribute 'register_blueprint'

I checked the location; the error comes from the lines below:

admin_bp = Blueprint('admin',__name__)
admin_bp.register_blueprint(activity_bp,url_prefix='/activity')

First of all, there is no problem running this locally; the error only appears after uploading to the server (CentOS 7, Python 3.9). If the code were simply non-standard, the same error should also appear on Windows, so this was puzzling.

Solution:

Here the Blueprint cannot call register_blueprint(); registration is handed over to the app instead, so move this line to the place where the app object is available:

app.register_blueprint(activity_bp,url_prefix='/activity')

With this change there is no problem.
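For reference, a minimal sketch of the adjusted layout (the '/admin' prefix and the single-module structure are assumptions for illustration; the blueprint names come from this post). Note that Blueprint.register_blueprint (nested blueprints) was only added in Flask 2.0, so the server most likely had an older Flask than the local machine.

from flask import Flask, Blueprint

activity_bp = Blueprint('activity', __name__)
admin_bp = Blueprint('admin', __name__)

app = Flask(__name__)

# Register both blueprints on the app itself instead of nesting them,
# which also works on Flask versions older than 2.0.
app.register_blueprint(admin_bp, url_prefix='/admin')        # '/admin' prefix is assumed
app.register_blueprint(activity_bp, url_prefix='/activity')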

[Solved] Fatal error LNK1169: one or more multiply defined symbols found

Global variables and functions must be defined in a .cpp file. If other files need to refer to a global variable, include the corresponding .h file and declare the variable there with extern; otherwise a duplicate-definition error is likely to occur.

How to put this simply?

For example, a global variable is defined in A.h:

int Global;

and B.h contains:

#include "A.h"
// ...
extern int Global;
// ...

Every .cpp file that includes A.h (directly or through B.h) then carries its own definition of Global, so the linker reports a duplicate definition.

Therefore, global variable definitions and function definitions belong in a .cpp file; the VS linker also reports this error when a function implementation sits in an .h file that is included by multiple .cpp files.
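A minimal sketch of the layout that avoids the duplicate definition, following the file names above (nothing here is from the original project):

// A.h -- only declares the variable
#pragma once
extern int Global;

// A.cpp -- the single definition lives here
#include "A.h"
int Global = 0;

// B.cpp (or any other file) just includes the header and uses Global
#include "A.h"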

Solution:
1. Add inline before the function definition in the .h file, or
2. In Project -> Properties -> Linker -> Command Line, add /FORCE to Additional Options.

The above is quoted from David_H.

I also encountered this error; my project contains too many files to trace the include relationships, but the cause should be similar.

my solution is:

Project -> Properties -> Linker -> Command Line -> add /FORCE to Additional Options

It solved the error, but there were a lot of warnings.

[Solved] flask db migrate Execution Error: ERROR [flask_migrate] Error: Can't locate revision identified by '8d1ad59dc71a'

Recently, a new table was added to the Flask project, but executing flask db migrate reports the error above: Can't locate revision identified by '8d1ad59dc71a'.

After Googling around, the cause finally became clear:
although the migrations directory in the project had been deleted, the version information was still saved in the database, in the alembic_version table.

So all you have to do is delete the version record from this table.

After deleting it, execute flask db migrate again; no error is reported and the table is generated normally.
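A minimal sketch of clearing that record from code, assuming a Flask-SQLAlchemy setup (yourapp is a placeholder module name); running DELETE FROM alembic_version directly in the database client works just as well:

from sqlalchemy import text
from yourapp import app, db   # placeholder names, adjust to your project

with app.app_context():
    # remove the stale revision id left behind by the deleted migrations directory
    db.session.execute(text("DELETE FROM alembic_version"))
    db.session.commit()

After that, flask db migrate and flask db upgrade run cleanly again.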

Win10 Remote Connection Cluster Job Submission Error: sbatch: error: Batch script contains DOS line breaks (\r\n)

Description:

A Windows 10 laptop is connected remotely to a Windows 10 office workstation, and the workstation is used to submit jobs to the cluster server. A bash .sh file edited this way cannot be run normally in the cluster server's Linux environment, and if you open and edit the file again with vim, the following error is prompted:

batch: error: Batch script contains DOS line breaks (\r\n)
sbatch: error: instead of expected UNIX line breaks (\n)

In this case, open the bash .sh file in VS Code and change its line endings from CRLF to LF to solve the problem.
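If VS Code is not at hand, a minimal Python sketch that does the same conversion (job.sh is a placeholder file name):

# rewrite the script with Unix line endings (equivalent to changing CRLF to LF in an editor)
with open("job.sh", "rb") as f:
    data = f.read()
with open("job.sh", "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))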

[Solved] Python Error: An attempt has been made to start a new process before the current process has finished …

This error usually occurs on Windows systems when using multiple processes. For example, execute the following code in PyCharm:

import torch
import torch.utils.data as Data
import numpy as np
from sklearn.datasets import load_iris

iris_x, irisy = load_iris(return_X_y=True)
print("iris_x.dtype:", iris_x.dtype)
print("irisy:", irisy.dtype)

## transform the training set x into a tensor, and the training set y into a tensor
train_xt = torch.from_numpy(iris_x.astype(np.float32))
train_yt = torch.from_numpy(irisy.astype(np.int64))
print("train_xt.dtype:", train_xt.dtype)
print("train_yt.dtype:", train_yt.dtype)

## After converting the training set into a tensor, use TensorDataset to collate X and Y together
train_data = Data.TensorDataset(train_xt, train_yt)
## Define a data loader to batch the training dataset
train_loader = Data.DataLoader(
    dataset=train_data, ## the dataset to use
    batch_size=10, ## Batch sample size
    shuffle=True, # Break up the data before each iteration
    num_workers=2, # [Note: 2 processes are used here]
)

## Check if the dimensionality of the samples of a batch of the training dataset is correct
for step, (b_x, b_y) in enumerate(train_loader):
    if step > 0:
        break
## Output the dimensions of the training image and the labels, and the data type
print("b_x.shape:", b_x.shape)
print("b_y.shape:", b_y.shape)
print("b_x.dtype:", b_x.dtype)
print("b_y.dtype:", b_y.dtype)


## -------- The correct result is as follows --------

# iris_x.dtype: float64
# irisy: int32
# train_xt.dtype: torch.float32
# train_yt.dtype: torch.int64
# b_x.shape: torch.Size([10, 4])
# b_y.shape: torch.Size([10])
# b_x.dtype: torch.float32
# b_y.dtype: torch.int64

The following error will be reported. (No error is reported when running in a Jupyter notebook under the same environment; I don't know why...)

RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

 

Solution 1:

Remove the statement setting up multiple processes. In this example, comment or delete the following line.

num_workers=2,  # [Note: 2 processes are used here]

Solution 2:

Move the code that launches the worker processes under if __name__ == '__main__':.

if __name__ == '__main__':
    ##  Check if the dimensionality of the samples of a batch of the training dataset is correct
    for step, (b_x, b_y) in enumerate(train_loader):
        if step > 0:
            break
    ## Output the dimensions of the training images and labels, and the data types
    print("b_x.shape:", b_x.shape)
    print("b_y.shape:", b_y.shape)
    print("b_x.dtype:", b_x.dtype)
    print("b_y.dtype:", b_y.dtype)

However, in PyCharm, the part before for step, (b_x, b_y) in enumerate(train_loader): will be executed twice, because on Windows the worker process is started with spawn and re-imports the main module.

## ------------------------------ The result of running in PyCharm is as follows ------------------------------
iris_x.dtype: float64
irisy: int32
train_xt.dtype: torch.float32
train_yt.dtype: torch.int64
iris_x.dtype: float64
irisy: int32
train_xt.dtype: torch.float32
train_yt.dtype: torch.int64
b_x.shape: torch.Size([10, 4])
b_y.shape: torch.Size([10])
b_x.dtype: torch.float32
b_y.dtype: torch.int64
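To avoid the duplicated output as well, here is a minimal sketch that moves all executable code under the main guard, so the spawned worker processes only re-run the imports (same logic as above, just reorganized):

import torch
import torch.utils.data as Data
import numpy as np
from sklearn.datasets import load_iris

def main():
    # load the data and convert it to tensors
    iris_x, irisy = load_iris(return_X_y=True)
    train_xt = torch.from_numpy(iris_x.astype(np.float32))
    train_yt = torch.from_numpy(irisy.astype(np.int64))
    train_data = Data.TensorDataset(train_xt, train_yt)
    train_loader = Data.DataLoader(dataset=train_data, batch_size=10,
                                   shuffle=True, num_workers=2)
    # check the shape of one batch
    for step, (b_x, b_y) in enumerate(train_loader):
        if step > 0:
            break
    print("b_x.shape:", b_x.shape)
    print("b_y.shape:", b_y.shape)

if __name__ == '__main__':
    main()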

[Solved] Huawei OBS Python SDK Picture Download Error: NoSuchKey

Background:

Previously, Huawei OBS was used to serve pictures (i.e. pictures were viewed through a browser), and the OBS address was accessed directly.
For example:
endpoint: obs-example-domain.cn
picture name: qcx%2F1%2F20210804%2F2db3c4bb-0c2c-4c3c-84e0-7e131c1e8db61628047890560.jpg

Access address:

http://obs-example-domain.cn/qcx%2F1%2F20210804%2F2db3c4bb-0c2c-4c3c-84e0-7e131c1e8db61628047890560.jpg

wget can download the picture from the above address.

However, if you access it with the Python SDK, an error is reported:


AK = 'PLAU4DD8EYVXSA****UL'
SK = 'MdNZCKgSwt9Qgq6ZXtaF7wtZOd8********xEiv'
server = "http://obs-example-domain.cn"
bucketName = 'qcx'
obsClient = ObsClient(access_key_id=AK, secret_access_key=SK, server=server)
name = "qcx%2F1%2F20210804%2F2db3c4bb-0c2c-4c3c-84e0-7e131c1e8db61628047890560.jpg"
resp = obsClient.getObject(bucketName, name, loadStreamInMemory=True)
print(resp.body)

Output: the specified key does not exist

In the above code, name = qcx%2F1%2F20210804%2F2db3c4bb-0c2c-4c3c-84e0-7e131c1e8db61628047890560.jpg.

Solution:

The picture name above is actually URL-encoded. URL-decoding it recovers the original key:

qcx/1/20210804/2db3c4bb-0c2c-4c3c-84e0-7e131c1e8db61628047890560.jpg

Change name in the above code to the URL-decoded picture name, that is:

name = "qcx/1/20210804/2db3c4bb-0c2c-4c3c-84e0-7e131c1e8db61628047890560.jpg"

You can get the picture correctly.
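A minimal sketch of doing the decoding in code instead of by hand, reusing the obsClient and bucketName from the snippet above (urllib.parse is Python 3; the server's Python 2.7 uses urllib.unquote):

try:
    from urllib.parse import unquote   # Python 3
except ImportError:
    from urllib import unquote         # Python 2.7

encoded = "qcx%2F1%2F20210804%2F2db3c4bb-0c2c-4c3c-84e0-7e131c1e8db61628047890560.jpg"
name = unquote(encoded)   # -> qcx/1/20210804/2db3c4bb-0c2c-4c3c-84e0-7e131c1e8db61628047890560.jpg
resp = obsClient.getObject(bucketName, name, loadStreamInMemory=True)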

Complete code for this function

The full picture-browsing chain:

Browser requests img URL -> nginx -> API (with AK/SK) -> OBS -> response -> API -> nginx -> browser

Python 2.7 (only this version is available on the server and cannot be upgraded to 3.X)

#coding=utf-8
from BaseHTTPServer import BaseHTTPRequestHandler
import urllib
import os
# print urllib.unquote('%E4%B8%89%E7%94%9F%E4%B8%89%E4%B8%96')

from obs import ObsClient

AK = 'PLAU4DD8EYVXSA****UL'
SK = 'MdNZCKgSwt9Qgq6ZXtaF7wtZOd8********xEiv'
server = "http://obs.cn-dchlw-1.digitalgd.com.cn"
bucketName = 'qcxx'
obsClient = ObsClient(access_key_id=AK, secret_access_key=SK, server=server)

cwd = os.getcwd()

class ObsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        path = urllib.unquote(self.path)
        buf = ""
        query_list = path.split("/")  #auto urldecode
        #print(query_list)
        if query_list[1] == "obs":
            name = "/".join(query_list[3:]).replace("?","")   # for online
            #name = urllib.unquote(name)
            resp = obsClient.getObject(bucketName, name, loadStreamInMemory=True)
            if resp.status < 300: 
                self.send_response(200) 
                buf = resp.body.buffer
            else: 
                self.send_response(400)
        self.end_headers()
        self.wfile.write(buf)
       
 
def StartServer():
    from BaseHTTPServer import HTTPServer
    # listen on all interfaces, port 12000
    httpd = HTTPServer(("", 12000), ObsHandler)
    httpd.serve_forever()
  
  
if __name__=='__main__':
    StartServer()

[Solved] Ubuntu using blender script error: Numpy cannot be found

When rendering with a Blender script on Ubuntu 16 using the command blender --background --python *.py, an error is reported saying numpy cannot be found. But numpy was installed in my conda environment, so I was puzzled.

Later, I learned that Blender ships its own Python interpreter. When it runs my .py script, that built-in interpreter does not have the numpy extension library installed, so the error is reported.

Solution:

Find the directory of Blender's own Python interpreter in your environment.
Open Blender and press Shift+F4 to enter Blender's Python console (see the sketch below).
There you can see the version of the bundled Python; then use whereis python for that version to find the interpreter's directory.
Use sudo apt-get install python<version>-numpy to install the third-party library for Blender's own Python interpreter.
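For reference, a minimal sketch of what can be run in that console to check the bundled interpreter (whether sys.executable points at the bundled Python or at the Blender binary depends on the Blender release, so treat the output as a hint):

import sys
print(sys.version)       # version of Blender's bundled Python
print(sys.executable)    # interpreter path (on some Blender releases this is the Blender binary)
# on older Blender releases the bundled interpreter path is also available as:
# import bpy; print(bpy.app.binary_path_python)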

How to Solve pip3 install oct2py Error

Running sudo pip3 install oct2py reports: ERROR: Cannot uninstall 'pexpect'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
Solution: sudo pip3 install oct2py --ignore-installed pexpect
Installed successfully!

pandas.DataFrame() Error When Initializing an Empty DataFrame [How to Solve]

An error occurred while initializing an empty DataFrame:

Traceback (most recent call last):
    result_format = pd.DataFrame(index=index, columns=columns)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\frame.py", line 435, in __init__
    mgr = init_dict(data, index, columns, dtype=dtype)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\internals\construction.py", line 239, in init_dict
    val = construct_1d_arraylike_from_scalar(np.nan, len(index), nan_dtype)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\cast.py", line 1440, in construct_1d_arraylike_from_scalar
    dtype = dtype.dtype
AttributeError: type object 'object' has no attribute 'dtype'

Solution:
result_format = pd.DataFrame(index=index, columns=columns, dtype=object)
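A minimal sketch of the workaround (the index and columns values are placeholders; only dtype=object comes from the fix above):

import pandas as pd

index = ["row1", "row2"]       # placeholder index
columns = ["col1", "col2"]     # placeholder columns
# passing dtype=object avoids the AttributeError on older pandas versions
result_format = pd.DataFrame(index=index, columns=columns, dtype=object)
print(result_format)           # an empty frame filled with NaN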