Tag Archives: artificial intelligence

Maskrcnn-benchmark Error: KeyError "Non-existent config key: MODEL.BACKBONE.OUT_CHANNELS"

When trying to extract visual features using the VQA maskrcnn-benchmark repository (Files · master · Vedanuj Goswami / vqa-maskrcnn-benchmark · GitLab), I compiled maskrcnn-benchmark according to the INSTALL instructions and then ran

python script/extract_features.py ... 

which failed with:

KeyError "Non-existent config key: MODEL.BACKBONE.OUT_CHANNELS"

The fix: do not compile the upstream maskrcnn-benchmark; instead, build the copy that ships inside vqa-maskrcnn-benchmark by running its own setup.py (typically python setup.py build develop from the repository root).

PS: the author made small matching adjustments to the network structure and the code, so the structure in the original maskrcnn-benchmark library does not correspond to this config, which is why the MODEL.BACKBONE.OUT_CHANNELS key is unknown to it.

[Solved] Prophet Error When Installing the R Package in a Kubuntu Virtual Machine

Environment where the error first appeared

Oracle VM VirtualBox 6.1.22
Kubuntu 21.04
Memory: 2048 MB
Storage: 20 GB

Error reporting and Solutions

Installing the package directly with install.packages('prophet') in R under the above environment reports three missing dependencies: libcurl, libv8, and rstan.
First fix the missing libcurl and libv8: exit R with q() and install the two development libraries:

sudo apt-get install libcurl4-openssl-dev
sudo apt-get install libv8-dev

Then re-enter R and reinstall rstan. The key errors are as follows:

g++: internal compiler error: Killed (program cc1plus)
ERROR: compilation failed for package 'rstan'
In install.packages("rstan") :
  installation of package 'rstan' had non-zero exit status

There are many similar reports online, with all sorts of odd remedies. In most cases the real cause is simply insufficient memory: g++ is killed (the internal compiler error above) while building rstan. Raise the virtual machine's allocated memory to 4096 MB; then, with the dependencies above installed, run install.packages('prophet') again and the installation will succeed.

[Solved] RuntimeError: each element in list of batch should be of equal size

RuntimeError: each element in list of batch should be of equal size

Contents: 1. Example code 2. Running result 3. Cause of the error 4. batch_size = 2 5. Cause analysis 6. Complete code

1. Example code

"""
Complete the preparation of the dataset
"""
from torch.utils.data import DataLoader, Dataset
import os
import re

def tokenlize(content):
    content = re.sub('<.*?>', ' ', content, flags=re.S)
    filters = ['!', '"', '#', '$', '%', '&', '\(', '\)', '\*', '\+', ',', '-', '\.', '/', ':', ';', '<', '=', '>', '\?',
               '@', '\[', '\\', '\]', '^', '_', '`', '\{', '\|', '\}', '~', '\t', '\n', '\x97', '\x96', '”', '“', ]

    content = re.sub('|'.join(filters), ' ', content)
    tokens = [i.strip().lower() for i in content.split()]

    return tokens


class ImdbDataset(Dataset):
    def __init__(self, train=True):
        self.train_data_path = r'E:\Python资料\视频\Py5.0\00.8-12课件资料V5.0\阶段9-人工智能NLP项目\第四天\代码\data\aclImdb_v1\aclImdb\train'
        self.test_data_path = r'E:\Python资料\视频\Py5.0\00.8-12课件资料V5.0\阶段9-人工智能NLP项目\第四天\代码\data\aclImdb_v1\aclImdb\test'
        data_path = self.train_data_path if train else self.test_data_path

        temp_data_path = [os.path.join(data_path, 'pos'), os.path.join(data_path, 'neg')]
        self.total_file_path = [] 
        for path in temp_data_path:
            file_name_list = os.listdir(path)
            file_path_list = [os.path.join(path, i) for i in file_name_list if i.endswith('.txt')]
            self.total_file_path.extend(file_path_list)

    def __getitem__(self, idx):
        file_path = self.total_file_path[idx]
        # get the label
        label_str = file_path.split('\\')[-2]
        label = 0 if label_str == 'neg' else 1
        # read the content
        # tokenize
        tokens = tokenlize(open(file_path).read())
        return tokens, label

    def __len__(self):
        return len(self.total_file_path)


def get_dataloader(train=True):
    imdb_dataset = ImdbDataset(train)
    print(imdb_dataset[1])
    data_loader = DataLoader(imdb_dataset, batch_size=2, shuffle=True)
    return data_loader


if __name__ == '__main__':
    for idx, (input, target) in enumerate(get_dataloader()):
        print('idx', idx)
        print('input', input)
        print('target', target)
        break

2. Running result

(The sample prints, then iterating the DataLoader fails with the RuntimeError above.)

3. Cause of the error

dataloader = DataLoader(dataset=dataset, batch_size=2, shuffle=True)

If batch_size=2 is changed to batch_size=1, the error is no longer raised and the loop runs normally.

4. batch_size = 2

But what if you want to keep batch_size=2? How do we solve it?

Solution:

The culprit is the DataLoader parameter collate_fn.

collate_fn defaults to torch's built-in default_collate. collate_fn is the function that assembles each batch, and the default default_collate fails on this data.

Two approaches:

1. Convert the text into numeric sequences first and check whether the results meet the requirements; no such error occurred before the DataLoader stage, so the raw, variable-length samples are the problem.
2. Customize a collate_fn and observe what it receives.

Here we use approach 2, defining a custom collate_fn and then observing the results:

import torch
import config  # assumed: a project module exposing ws (a fitted word-to-index mapper) and max_len

def collate_fn(batch):
    """
    Process one batch of data.
    :param batch: [(tokens, label), (tokens, label), ...] -- one __getitem__ result per sample
    :return: tuple (reviews, labels)
    """
    reviews, labels = zip(*batch)
    # config.ws.transform maps a token list to a fixed-length index sequence,
    # so every element of the batch ends up the same size and can be stacked
    reviews = torch.LongTensor([config.ws.transform(i, max_len=config.max_len) for i in reviews])
    labels = torch.LongTensor(labels)

    return reviews, labels

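Passing the custom function when constructing the loader then looks like this (a short sketch; ImdbDataset and collate_fn are as defined above):

from torch.utils.data import DataLoader

imdb_dataset = ImdbDataset(train=True)
data_loader = DataLoader(imdb_dataset, batch_size=2, shuffle=True, collate_fn=collate_fn)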

5. Cause analysis

Following the error message, you can locate the source in collate.py: the error is raised in the default_collate() function. A quick search shows that default_collate is the DataLoader class's default batch-assembly method: if no function is specified via the collate_fn parameter when the DataLoader is defined, the method in the source below is called. If you see the error above, it comes from the RuntimeError raised near the end of this function.

Source code:

def default_collate(batch):
    r"""Puts each data field into a tensor with outer dimension batch size"""

    elem = batch[0]
    elem_type = type(elem)
    if isinstance(elem, torch.Tensor):
        out = None
        if torch.utils.data.get_worker_info() is not None:
            # If we're in a background process, concatenate directly into a
            # shared memory tensor to avoid an extra copy
            numel = sum([x.numel() for x in batch])
            storage = elem.storage()._new_shared(numel)
            out = elem.new(storage)
        return torch.stack(batch, 0, out=out)
    elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
            and elem_type.__name__ != 'string_':
        if elem_type.__name__ == 'ndarray' or elem_type.__name__ == 'memmap':
            # array of string classes and object
            if np_str_obj_array_pattern.search(elem.dtype.str) is not None:
                raise TypeError(default_collate_err_msg_format.format(elem.dtype))

            return default_collate([torch.as_tensor(b) for b in batch])
        elif elem.shape == ():  # scalars
            return torch.as_tensor(batch)
    elif isinstance(elem, float):
        return torch.tensor(batch, dtype=torch.float64)
    elif isinstance(elem, int_classes):
        return torch.tensor(batch)
    elif isinstance(elem, string_classes):
        return batch
    elif isinstance(elem, container_abcs.Mapping):
        return {key: default_collate([d[key] for d in batch]) for key in elem}
    elif isinstance(elem, tuple) and hasattr(elem, '_fields'):  # namedtuple
        return elem_type(*(default_collate(samples) for samples in zip(*batch)))
    elif isinstance(elem, container_abcs.Sequence):
        # check to make sure that the elements in batch have consistent size
        it = iter(batch)
        elem_size = len(next(it))
        if not all(len(elem) == elem_size for elem in it):
            raise RuntimeError('each element in list of batch should be of equal size')
        transposed = zip(*batch)
        return [default_collate(samples) for samples in transposed]

    raise TypeError(default_collate_err_msg_format.format(elem_type))

This function receives one batch as a tuple. Each element of the tuple is whatever your dataset class's __getitem__() method returned, and the tuple's length is the batch_size you set. What the DataLoader finally yields, however, is an iterable in which each field is the corresponding field of all batch_size samples spliced together.

So when this method runs with its defaults, execution first reaches the statement return [default_collate(samples) for samples in transposed] near the end of the function: zip(*batch) turns the batch tuple into an iterable, each field drawn from that iteration is passed recursively back into default_collate(), and when a field's data type is among the types listed above, the content is returned correctly.

Processed in that order, the batch raises no error. But if, after the second recursion, an element's data is still not one of the listed types, it enters a third recursion; even if the data can then be returned, it no longer matches what we need, and this error usually surfaces after that third recursion. To resolve it, check carefully the data types of the fields returned by your dataset class. You can also print the batch contents before and after processing inside default_collate() to follow the function's exact flow and spot the offending field type.

Friendly tip: do not modify default_collate() in the source file. Copy the code instead, define your own collate_fn() function, and pass it through the collate_fn parameter when instantiating the DataLoader.
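For intuition, here is a minimal sketch that reproduces the check (assuming torch >= 1.11, where default_collate is exported from torch.utils.data; in older versions it lives in torch.utils.data._utils.collate):

from torch.utils.data import default_collate

# two samples whose token lists differ in length, as in the dataset above
batch = [(["a", "movie", "review"], 0), (["short", "review"], 1)]
default_collate(batch)
# -> RuntimeError: each element in list of batch should be of equal size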

6. Complete code

"""
Complete the preparation of the dataset
"""
from torch.utils.data import DataLoader, Dataset
import os
import re
import torch

def tokenlize(content):
    content = re.sub('<.*?>', ' ', content, flags=re.S)
    # filters = ['!', '"', '#', '$', '%', '&', '\(', '\)', '\*', '\+', ',', '-', '\.', '/', ':', ';', '<', '=', '>', '\?',
    #            '@', '\[', '\\', '\]', '^', '_', '`', '\{', '\|', '\}', '~', '\t', '\n', '\x97', '\x96', '”', '“', ]
    filters = ['\.', '\t', '\n', '\x97', '\x96', '#', '$', '%', '&']
    content = re.sub('|'.join(filters), ' ', content)
    tokens = [i.strip().lower() for i in content.split()]
    return tokens


class ImdbDataset(Dataset):
    def __init__(self, train=True):
        self.train_data_path = r'.\aclImdb\train'
        self.test_data_path = r'.\aclImdb\test'
        data_path = self.train_data_path if train else self.test_data_path

        temp_data_path = [os.path.join(data_path, 'pos'), os.path.join(data_path, 'neg')]
        self.total_file_path = []  
        for path in temp_data_path:
            file_name_list = os.listdir(path)
            file_path_list = [os.path.join(path, i) for i in file_name_list if i.endswith('.txt')]
            self.total_file_path.extend(file_path_list)

    def __getitem__(self, idx):
        file_path = self.total_file_path[idx]
        label_str = file_path.split('\\')[-2]
        label = 0 if label_str == 'neg' else 1
        tokens = tokenlize(open(file_path).read().strip())  
        return label, tokens

    def __len__(self):
        return len(self.total_file_path)

def collate_fn(batch):
    batch = list(zip(*batch))
    labels = torch.tensor(batch[0], dtype=torch.int32)
    texts = batch[1]
    del batch
    return labels, texts


def get_dataloader(train=True):
    imdb_dataset = ImdbDataset(train)
    data_loader = DataLoader(imdb_dataset, batch_size=2, shuffle=True, collate_fn=collate_fn)
    return data_loader

if __name__ == '__main__':
    for idx, (input, target) in enumerate(get_dataloader()):
        print('idx', idx)
        print('input', input)
        print('target', target)
        break

Hope you squash the bug and get your model running soon!

LGWR waits for event ‘DLM cross inst call completion’ [How to Solve]


A customer runs an Oracle 19c Data Guard environment in which the standby keeps developing a large archive-log gap at intervals, while the alert log repeatedly shows LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for N secs. The standby serves no external queries, multi-instance redo apply is disabled, operating-system resources are idle, and the number of LMS processes is normal; if the other nodes are shut down so that only the applying instance remains, the problem does not occur. DLM is the Distributed Lock Manager, a core mechanism of the RAC architecture: it coordinates resource sharing across nodes and sends its requests over the interconnect network. Below is a brief record of the case.

db alert log

PR00 (PID:109603): Media Recovery Log +ARCH/anbob1/ARCHIVELOG/2021_07_12/thread_3_seq_13586.1479.1077669291
2021-07-12T20:25:29.643687+08:00
PR00 (PID:109603): Media Recovery Log +ARCH/anbob1/ARCHIVELOG/2021_07_12/thread_2_seq_14361.1072.1077669019
2021-07-12T20:29:38.183656+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 1 secs.
2021-07-12T20:29:48.137737+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 2 secs.
2021-07-12T20:31:21.952345+08:00
 rfs (PID:113884): Selected LNO:26 for T-2.S-14456 dbid 3902007743 branch 1037635587
2021-07-12T20:31:21.987333+08:00
 rfs (PID:114704): Error ORA-235 occurred during an un-locked control file
 rfs (PID:114704): transaction.  This error can be ignored.  The control
 rfs (PID:114704): file transaction will be retried.
2021-07-12T20:31:43.532600+08:00
ARC2 (PID:106404): Archived Log entry 9591 added for T-2.S-14455 ID 0xe894b1bf LAD:1
2021-07-12T20:31:47.151671+08:00
 rfs (PID:113882): Selected LNO:31 for T-3.S-13731 dbid 3902007743 branch 1037635587
2021-07-12T20:31:49.116049+08:00
 rfs (PID:113880): Selected LNO:22 for T-1.S-13006 dbid 3902007743 branch 1037635587
2021-07-12T20:31:53.393547+08:00
ARC3 (PID:106408): Archived Log entry 9592 added for T-1.S-13005 ID 0xe894b1bf LAD:1
2021-07-12T20:32:02.346585+08:00
ARC2 (PID:106404): Archived Log entry 9593 added for T-3.S-13730 ID 0xe894b1bf LAD:1
2021-07-12T20:33:13.805344+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 0 secs.
2021-07-12T20:33:13.805470+08:00
LGWR (ospid: 105521) is hung in an acceptable location (inwait 0x1.ffff).
2021-07-12T20:33:21.196764+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 2 secs.
2021-07-12T20:33:31.310737+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 0 secs.
2021-07-12T20:33:41.223781+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 1 secs.
2021-07-12T20:33:51.205776+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 2 secs.
2021-07-12T20:34:01.307770+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 0 secs.
2021-07-12T20:34:25.440231+08:00
PR00 (PID:109603): Media Recovery Log +ARCH/anbob1/ARCHIVELOG/2021_07_12/thread_2_seq_14362.1867.1077670807
2021-07-12T20:34:44.864009+08:00
PR00 (PID:109603): Media Recovery Log +ARCH/anbob1/ARCHIVELOG/2021_07_12/thread_3_seq_13587.691.1077670845
2021-07-12T20:34:45.204773+08:00
PR00 (PID:109603): Media Recovery Log +ARCH/anbob1/ARCHIVELOG/2021_07_12/thread_1_seq_12934.1156.1077670917
2021-07-12T20:36:09.378685+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 2 secs.
2021-07-12T20:36:19.341635+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 0 secs.
2021-07-12T20:36:28.416573+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 0 secs.
2021-07-12T20:36:38.375742+08:00
LGWR (ospid: 105521) waits for event 'DLM cross inst call completion' for 1 secs.

LGWR trace

*** 2021-07-12T20:33:43.793041+08:00 ((4))
Received ORADEBUG command (#235) 'dump KSTDUMPCURPROC 1' from process '105470'
-------------------------------------------------------------------------------
Trace Bucket Dump Begin: default bucket for process 47 (osid: 105521, LGWR)
CDB_NAME(CON_ID):CON_UID:TIME(*=approx):SEQ:COMPONENT:FILE@LINE:FUNCTION:SECT/DUMP:SID:SERIAL#: [EVENT#:PID] DATA
-------------------------------------------------------------------------------
IRMSDB(4):3247498417:2021-07-12 20:33:42.784 :KJCI:kjci.c@1957:kjci_complete():4466:40278: freeing request 0x20fd651e8 (inst|inc|reqid)=(1|88|823031) with opcode=146 and completion status [DONE]
IRMSDB(4):3247498417:2021-07-12 20:33:42.784 :KJCI:kjci.c@1089:kjci_initreq():4466:40278: request 0x20fd651e8 (inst|inc|reqid)=(1|88|823032) with group (type|id)=(1|1), opcode=146, flags=0x0, msglen=56, where=[kqlmClusterMessage] to target instances=
IRMSDB(4):3247498417:2021-07-12 20:33:42.784 :KJCI:kjci.c@1091:kjci_initreq():4466:40278:    1 2
IRMSDB(4):3247498417:2021-07-12 20:33:42.784 :KJCI:kjci.c@1618:kjci_processcrq():4466:40278: processing reply 0x2cff2d4e8 for request 0x20fd651e8 (inst|inc|reqid)=(1|88|823032) with opcode=146 from callee (inst|pid|psn)=(1|36|1)
IRMSDB(4):3247498417:2021-07-12 20:33:42.784 :KJCI:kjci.c@1618:kjci_processcrq():4466:40278: processing reply 0x2cff2d718 for request 0x20fd651e8 (inst|inc|reqid)=(1|88|823032) with opcode=146 from callee (inst|pid|psn)=(2|36|1)
IRMSDB(4):3247498417:2021-07-12 20:33:42.784 :KJCI:kjci.c@1957:kjci_complete():4466:40278: freeing request 0x20fd651e8 (inst|inc|reqid)=(1|88|823032) with opcode=146 and completion status [DONE]
IRMSDB(4):3247498417:2021-07-12 20:33:42.785 :KJCI:kjci.c@1089:kjci_initreq():4466:40278: request 0x20fd651e8 (inst|inc|reqid)=(1|88|823033) with group (type|id)=(1|1), opcode=146, flags=0x0, msglen=56, where=[kqlmClusterMessage] to target instances=
IRMSDB(4):3247498417:2021-07-12 20:33:42.785 :KJCI:kjci.c@1091:kjci_initreq():4466:40278:    1 2
IRMSDB(4):3247498417:2021-07-12 20:33:42.785 :KJCI:kjci.c@1618:kjci_processcrq():4466:40278: processing reply 0x2cff2d4e8 for request 0x20fd651e8 (inst|inc|reqid)=(1|88|823033) with opcode=146 from callee (inst|pid|psn)=(1|36|1)
IRMSDB(4):3247498417:2021-07-12 20:33:42.785 :KJCI:kjci.c@1618:kjci_processcrq():4466:40278: processing reply 0x2cff2d718 for request 0x20fd651e8 (inst|inc|reqid)=(1|88|823033) with opcode=146 from callee (inst|pid|psn)=(2|36|1)
IRMSDB(4):3247498417:2021-07-12 20:33:42.785 :KJCI:kjci.c@1957:kjci_complete():4466:40278: freeing request 0x20fd651e8 (inst|inc|reqid)=(1|88|823033) with opcode=146 and completion status [DONE]
IRMSDB(4):3247498417:2021-07-12 20:33:42.785 :KJCI:kjci.c@1089:kjci_initreq():4466:40278: request 0x20fd651e8 (inst|inc|reqid)=(1|88|823034) with group (type|id)=(1|1), opcode=146, flags=0x0, msglen=56, where=[kqlmClusterMessage] to target instances=
IRMSDB(4):3247498417:2021-07-12 20:33:42.785 :KJCI:kjci.c@1091:kjci_initreq():4466:40278:    1 2

KJCI ==> kjci_processcrq: Kernel lock management Communication, Cross-Instance call processing

This is cross-node communication with no known bug in MOS, so analyze the network first. You can also take short stack dumps (SSD) of the blocking process, or review the hang manager trace; the AHF framework bundled with Oracle 19c CRS includes OSWatcher.

OSW netstat data

zzz ***Tue Jul 13 00:59:51 CST 2021
...
#kernel
IpInReceives                    1456201695         0.0
IpInHdrErrors                   0                  0.0
IpInAddrErrors                  0                  0.0
IpForwDatagrams                 0                  0.0
IpInUnknownProtos               0                  0.0
IpInDiscards                    0                  0.0
IpInDelivers                    1085210966         0.0
IpOutRequests                   1007206469         0.0
IpOutDiscards                   5280               0.0
IpOutNoRoutes                   8                  0.0
IpReasmTimeout                  6333500            0.0
IpReasmReqds                    408470736          0.0
IpReasmOKs                      37504539           0.0
IpReasmFails                    8651478            0.0
IpFragOKs                       29029579           0.0

Note:
the IP reassembly failure count (IpReasmFails) is high. This counter is cumulative, so look at the per-interval changes below.

Checking IP reassembly failures over time

awk '/zzz/{d=$3"/"$4" "$5}/IpReasmFails/{curr=$2;diff=curr-prev;if(diff>5)print d,diff,prev,curr;prev=curr}' *.dat
Jul/13 00:00:16 8620039  8620039
Jul/13 00:00:46 185 8620039 8620224
Jul/13 00:01:16 242 8620224 8620466
Jul/13 00:01:46 324 8620466 8620790
Jul/13 00:02:16 279 8620790 8621069
Jul/13 00:02:46 325 8621069 8621394
Jul/13 00:03:16 325 8621394 8621719
Jul/13 00:03:46 247 8621719 8621966
Jul/13 00:04:16 246 8621966 8622212
Jul/13 00:04:46 210 8622212 8622422
Jul/13 00:05:16 327 8622422 8622749
Jul/13 00:05:46 247 8622749 8622996
Jul/13 00:06:16 238 8622996 8623234
Jul/13 00:06:46 219 8623234 8623453
Jul/13 00:07:16 262 8623453 8623715
Jul/13 00:07:46 254 8623715 8623969
Jul/13 00:08:16 179 8623969 8624148
Jul/13 00:08:46 294 8624148 8624442

Note:
IP reassembly failures are evidently high even in normal operation. Next, verify the network with ping.

Verifying with ping

— on node1

ping -s 4000 {node2-privateIP}
Note:

The historical output was not preserved, but it showed about 12% packet loss, meaning the current heartbeat (interconnect) network is unhealthy. The interconnect is a bond of two NICs in active-backup mode, so we can try switching to the other card.

Network card switching

cat /proc/net/bonding/bond0

Note:

The current active card is ens9f0; switch to the standby card ens9f1:

ifenslave -c bond0 ens9f1

After switching the active and standby cards, ping is normal, the IP reassembly failures disappear, the 'DLM cross inst call completion' waits no longer occur, and DG synchronization returns to normal. Problem solved.

[Solved] Denseflow Install Error: fatal error: opencv2/cudaarithm.hpp: No such file or directory

When compiling denseflow during installation, the build fails with the following error:

/home/m/src/denseflow/src/denseflow_gpu.cpp:2:10: fatal error: opencv2/cudaarithm.hpp: No such file or directory
#include "opencv2/cudaarithm.hpp"

The keywords are:

/home/m/src/denseflow/src/denseflow_gpu.cpp
cudaarithm.hpp

The solution is as follows.
1. Find the path where cudaarithm.hpp is located:
1、Find the path where cudaarithm.hpp is located

sudo find / -name "cudaarithm.hpp"

Paths similar to the following are returned:

/home/m/src/opencv_contrib/modules/cudaarithm/include/opencv2/cudaarithm.hpp
/home/m/include/opencv4/opencv2/cudaarithm.hpp
...

2. Then replace the relative include path in denseflow_gpu.cpp with the absolute path:

# Fill in the absolute path; your paths may differ from ours, so use your own
sudo vim /home/m/src/denseflow/src/denseflow_gpu.cpp

Before replacement:
#include "opencv2/cudaarithm.hpp"
After replacement:
#include "/home/m/include/opencv4/opencv2/cudaarithm.hpp"
Compile again and the problem is solved.

OpenCV: Notes on Compiling OpenCvSharpExtern

1. Open the OpenCvSharpExtern project (opencvsharp-4.4.0.20200916) in VS2019.

2. Additional include directories:

D:\VS2019\opencvsharp-4.4.0.20200916\opencv_files\opencv440_win_x64\include\3rdparty\ippicv\ippicv_win\icv\include

D:\VS2019\opencvsharp-4.4.0.20200916\opencv_files\opencv440_win_x64\include

3. Additional dependencies (static libraries):

zlibd.lib
opencv_aruco440d.lib
opencv_bgsegm440d.lib
opencv_bioinspired440d.lib
opencv_calib3d440d.lib
...
opencv_fuzzy440d.lib
opencv_hfs440d.lib
opencv_highgui440d.lib
opencv_imgcodecs440d.lib
opencv_imgproc440d.lib
opencv_line_descriptor440d.lib
opencv_photo440d.lib
...
opencv_quality440d.lib
opencv_reg440d.lib
opencv_rgbd440d.lib
opencv_saliency440d.lib
opencv_structured_light440d.lib
opencv_superres440d.lib
...
opencv_ts440d.lib
opencv_video440d.lib
opencv_videoio440d.lib
opencv_videostab440d.lib
opencv_xfeatures2d440d.lib
opencv_ximgproc440d.lib
opencv_xobjdetect440d.lib
opencv_xphoto440d.lib

These are the Debug (d-suffixed) builds of the OpenCV 4.4.0 static libraries.

[Solved] RuntimeError: cuda runtime error: device-side assert trigger

When running Faster R-CNN, we need to change the original model's 21 categories to our own number of categories. After the first modification the program ran without error; after the second modification it failed with:

block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
RuntimeError: cuda runtime error (59) : device-side assert triggered

The main explanations found online are as follows:

The cause of this problem is that the training data contains labels that exceed the number of categories. For example, if I configure 8 classes in total but a label 9 appears in the training data, this error is reported. Here is the trap: if the training labels contain 0, the same error is also reported. That feels strange, since we normally count from 0, but this model rejects category label 0, so if your category labels start from 0, add 1 to every label.

Solution:
The first time I ran the program it had 16 categories (I had deleted 4 categories without noticing they were still counted). After that run I found the four extra categories and removed them, yet the next run reported the error above. The reason: each rerun requires deleting the cache produced by the previous run. Because I had not deleted it, the program still believed there were 16 categories while only 12 were provided. So if you hit this error, delete the cache and run again.
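As a quick guard against this class of error, you can validate the labels before training; a minimal sketch (the function and names are illustrative, not part of the original code):

import torch

def check_labels(labels: torch.Tensor, n_classes: int) -> None:
    # every label must satisfy 0 <= label < n_classes
    # (for models that reserve 0 for background, additionally require labels >= 1)
    bad = (labels < 0) | (labels >= n_classes)
    if bad.any():
        raise ValueError(f"out-of-range labels: {labels[bad].unique().tolist()} "
                         f"(expected 0 <= label < {n_classes})")

check_labels(torch.tensor([1, 3, 7]), n_classes=8)   # passes
# check_labels(torch.tensor([1, 8]), n_classes=8)    # raises ValueError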

Python PIP Fatal error in launcher: Unable to create process using ‘“e:\program files\programdata

Error content

Fatal error in launcher: Unable to create process using '"e:\program 
files\programdata\python3.9\python.exe"  "E:\Program Files\ProgramData
\Python39\Scripts\pip.exe" ': ???????????

Solution

python -m pip install --upgrade pip

Problem explained:
the pip launcher itself was broken (it pointed at a stale interpreter path). Updating pip through Python bypasses the broken launcher, removes the faulty pip, and installs an updated one.

RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future rel

RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.

from torchvision import transforms
import numpy as np

data = np.random.randint(0, 255, size=12)
img = data.reshape(2, 2, 3)
print(img.shape)
img_tensor = transforms.ToTensor()(img)  # Convert to tensor
print(img_tensor)
print(img_tensor.shape)
print("*" * 20)
norm_img = transforms.Normalize((10, 10, 10), (1, 1, 1))(img_tensor)  # normalize -- this line raises the error
print(norm_img)

Running result: the RuntimeError quoted at the top of this section.

Reason:

This code ran fine on PyTorch 1.5.0, but after upgrading to 1.6.0, a tensor with an integer dtype can no longer be divided using '/'. Here ToTensor() keeps the integer dtype of the numpy array, and Normalize() divides by the std values, which triggers the error.

Solution:

Convert the tensor to floating point before normalizing.

Example code:

from torchvision import transforms
import numpy as np

data = np.random.randint(0, 255, size=12)
img = data.reshape(2, 2, 3)
print(img.shape)
img_tensor = transforms.ToTensor()(img)  # convert to tensor
print(img_tensor)
print(img_tensor.shape)
print("*" * 20)
img_tensor = img_tensor.float()  # Add this line
norm_img = transforms.Normalize((10, 10, 10), (1, 1, 1))(img_tensor) # Perform normalization
print(norm_img)

Running result: the normalized tensor is printed correctly.
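As the error message itself suggests, the explicit division ops are the other way out; a minimal sketch:

import torch

a = torch.tensor([5, 7])
b = torch.tensor([2, 2])

print(torch.true_divide(a, b))  # tensor([2.5000, 3.5000]) -- true division
print(a // b)                   # tensor([2, 3]) -- floor division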

Record: Stanford CoreNLP runs forever without reporting an error

Notes on Stanford CoreNLP hanging indefinitely with no error output.

1. Detection method

Code to surface the cause of the hang: use logging to monitor what is happening.

from stanfordcorenlp import StanfordCoreNLP
import logging
# Use logging to monitor why errors occur

nlp = StanfordCoreNLP(r"D:\study\python\Lib\site-packages\stanfordcorenlp\stanford-corenlp-full-2018-02-27", lang='zh', quiet=False, logging_level=logging.DEBUG)
text = "Knowledge background in business management and related industry knowledge."

print('hello')
nlp.close()

2. Cause analysis

"The main problem is that my Java environment is 32-bit. A 32-bit Java environment can use at most 4 GB of memory, and Stanford CoreNLP requires 4 GB. That is why the Java environment refuses to create the JVM, and why the program never gets anywhere." (from a blogger)
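Accordingly, the usual fixes are to install a 64-bit JDK, or to request a heap the 32-bit JVM can actually allocate. A sketch of the latter, assuming the Lynten stanfordcorenlp wrapper (whose constructor accepts a memory argument, default '4g'):

from stanfordcorenlp import StanfordCoreNLP

# request a smaller JVM heap than the 4 GB default
nlp = StanfordCoreNLP(
    r"D:\study\python\Lib\site-packages\stanfordcorenlp\stanford-corenlp-full-2018-02-27",
    lang='zh',
    memory='2g',  # assumption: 2 GB fits within 32-bit JVM limits
)
nlp.close()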

Several Solutions to the HDF5 Version-Mismatch Error in a Python Environment

Several solutions to the HDF5 error in a Python environment (personally tested).
The error reads as follows:

Warning! ***HDF5 library version mismatched error***
The HDF5 header files used to compile this application do not match
the version used by the HDF5 library to which this application is linked.
Data corruption or segmentation faults may occur if the application continues.
This can happen when an application was compiled by one version of HDF5 but
linked with a different version of static or shared HDF5 library.
You should recompile the application or check your shared library related
settings such as 'LD_LIBRARY_PATH'.
You can, at your own risk, disable this warning by setting the environment
variable 'HDF5_DISABLE_VERSION_CHECK' to a value of '1'.
Setting it to 2 or higher will suppress the warning messages totally.
Headers are 1.10.4, library is 1.10.5

There are three candidate solutions.
First note that the problem may be a genuine HDF5 library mismatch, or it may be merely warning-level noise; details below.
Solution 1: uninstall HDF5 and install it again.
Run in the terminal:
conda install hdf5
Many people online report that this works; in my own test it did nothing for me.
Solution 2: check the configured path LD_LIBRARY_PATH.
My test: since I am on Windows 10, I could not find LD_LIBRARY_PATH anywhere; everything I found about it concerned Linux, so I did not use this method.
Solution 3: set HDF5_DISABLE_VERSION_CHECK to a higher level to ignore the warning.
Before importing tensorflow, add the following to your code:

import os
os.environ['HDF5_DISABLE_VERSION_CHECK'] = '2'

My personal test: this method really works!
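To confirm which two versions are actually clashing, if h5py is the package pulling in HDF5 you can print both sides; a small sketch:

import h5py

print(h5py.version.hdf5_version)  # HDF5 library version linked at runtime
print(h5py.version.info)          # full build summary, including header versions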

The Usage of np.random.uniform()

np.random.uniform(low=0.0, high=1.0, size=None)

Function: draw random samples from a uniform distribution over [low, high). Note the interval is half-open: it includes low but excludes high.

low: lower bound of the sampling interval, float, default 0. high: upper bound of the sampling interval, float, default 1. size: number of samples to output, int or tuple of ints; for example, size=(m, n, k) outputs m*n*k samples; the default None returns a single value. Return value: ndarray whose shape matches the description in the size parameter.

uniform() generates real numbers uniformly distributed over the interval: left-closed, right-open.

import numpy as np

np.random.uniform(1, 1.75, 100000000)
#output
array([1.25930467, 1.40160844, 1.53509096, ..., 1.57271193, 1.25317863,
       1.62040797])

Draw a picture to see the distribution

import numpy as np
import matplotlib.pyplot as plt

# Generate uniformly distributed random numbers
x1 = np.random.uniform(-1, 1, 100000000)  # output 100000000 samples

# Plot a histogram to see the distribution
# 1) Create a canvas
plt.figure(figsize=(20, 8), dpi=100)

# 2) Plot the histogram
plt.hist(x1, bins=1000)  # x1 is the data to plot; bins is the number of intervals

# 3) Display the image
plt.show()
