Fixing npm suddenly unable to find D:\nodejs\node_modules\npm\bin\npm-cli.js

Node was installed with nvm.

Everything worked normally until one day npm suddenly errored:

npm -v
npm does not exist

node -v
node does not exist

If you search Baidu, most answers will tell you to reinstall Node.

But if you haven't changed any environment variables or deleted any files, you don't need to.

First, run nvm -v to check whether nvm still exists.

If nvm is there, check whether a Node version is still installed:

nvm list


A Node version is listed, yet node -v still can't find it. Now the problem is clear: no version is currently activated. Just run nvm use 14.16.1, and node -v and npm -v both work again. Done.

Property or field 'Title' cannot be found on object of type

This is a post-detail feature: the post list displays fine, but clicking a post to open its details throws the error above.
First of all, the message means that the 'title' property cannot be found on the returned object: the data is queried successfully, but the title attribute cannot be resolved on it.
After re-checking the code and finding nothing wrong with the page itself, the real problem turned out to be the return type of the DAO-layer method for the POJO. I had declared it as a List, but the page displays a single post. A List is a collection type, so the 'title' property cannot be found on it. Changing the return type from List to the entity type fixes the error.

RuntimeError: each element in list of batch should be of equal size

After defining my own Dataset class and returning the corresponding data from it, I hit the following error:

RuntimeError: each element in list of batch should be of equal size

Search results say the most direct fix is to change batch_size to 1, which does make the error go away. But I'm training a model, not just silencing an error report; you can hardly train with batch_size set to 1, so I decided to study the error instead.

Original Traceback (most recent call last):
  File "/home/cv/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/cv/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/home/cv/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 83, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/cv/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 83, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/cv/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 83, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/cv/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 83, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/cv/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 81, in default_collate
    raise RuntimeError('each element in list of batch should be of equal size')

The traceback points straight at the source of the error: it is raised inside the default_collate() function in collate.py. This function is the DataLoader's default batching method; if you do not pass a collate_fn argument when constructing the DataLoader, it is called by default. If you get the error above, it comes from the last few lines of this function:

def default_collate(batch):
    r"""Puts each data field into a tensor with outer dimension batch size"""

    elem = batch[0]
    elem_type = type(elem)
    if isinstance(elem, torch.Tensor):
        out = None
        if torch.utils.data.get_worker_info() is not None:
            # If we're in a background process, concatenate directly into a
            # shared memory tensor to avoid an extra copy
            numel = sum([x.numel() for x in batch])
            storage = elem.storage()._new_shared(numel)
            out = elem.new(storage)
        return torch.stack(batch, 0, out=out)
    elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
            and elem_type.__name__ != 'string_':
        if elem_type.__name__ == 'ndarray' or elem_type.__name__ == 'memmap':
            # array of string classes and object
            if np_str_obj_array_pattern.search(elem.dtype.str) is not None:
                raise TypeError(default_collate_err_msg_format.format(elem.dtype))

            return default_collate([torch.as_tensor(b) for b in batch])
        elif elem.shape == ():  # scalars
            return torch.as_tensor(batch)
    elif isinstance(elem, float):
        return torch.tensor(batch, dtype=torch.float64)
    elif isinstance(elem, int_classes):
        return torch.tensor(batch)
    elif isinstance(elem, string_classes):
        return batch
    elif isinstance(elem, container_abcs.Mapping):
        return {key: default_collate([d[key] for d in batch]) for key in elem}
    elif isinstance(elem, tuple) and hasattr(elem, '_fields'):  # namedtuple
        return elem_type(*(default_collate(samples) for samples in zip(*batch)))
    elif isinstance(elem, container_abcs.Sequence):
        # check to make sure that the elements in batch have consistent size
        it = iter(batch)
        elem_size = len(next(it))
        if not all(len(elem) == elem_size for elem in it):
            raise RuntimeError('each element in list of batch should be of equal size')
        transposed = zip(*batch)
        return [default_collate(samples) for samples in transposed]

    raise TypeError(default_collate_err_msg_format.format(elem_type))

default_collate() receives one batch: a list whose elements are whatever your Dataset's __getitem__() method returns, with length equal to your batch_size. What the DataLoader must yield, however, is each field with the corresponding fields of the batch_size samples stacked together. So when this method is called, execution first reaches the second-to-last line, return [default_collate(samples) for samples in transposed], where zip(*batch) turns the batch tuple into an iterator that groups matching fields together. Each group of same-position fields is then passed recursively back into default_collate(), where the first element is checked against the types listed above; if it matches, the dataset contents are returned correctly.
If the batch is processed in that order, the error does not occur. But if, after the second recursion, an element's data is still not one of the listed types, it enters a third recursion; even if data can be returned normally at that point, it no longer matches what we want, and this error generally appears at the third recursion. So to fix it, carefully check the data types of the fields returned by your Dataset class. You can also print the batch contents before and after processing inside default_collate() to see exactly how the function handles them and to locate the field with the wrong type. A minimal reproduction is sketched below.
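To make this concrete, here is a minimal sketch that reproduces the error (a hypothetical Dataset of my own, not the one from the original post): __getitem__() returns plain Python lists of unequal length, so default_collate() reaches the Sequence branch above and raises exactly this RuntimeError.

from torch.utils.data import Dataset, DataLoader

class VariableLengthDataset(Dataset):
    # Each sample is a plain Python list; the lengths differ between samples.
    def __init__(self):
        self.samples = [[1, 2, 3], [4, 5]]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

loader = DataLoader(VariableLengthDataset(), batch_size=2)
# next(iter(loader))  # RuntimeError: each element in list of batch should be of equal size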
Tip: don't modify default_collate() in the source file itself. You can copy its code, define your own collate_fn() function, and specify it via the collate_fn parameter when instantiating the DataLoader; a sketch follows.
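For example, a minimal collate_fn sketch, assuming each sample is a variable-length list of numbers (the function name and zero padding value are my own choices, not from the original post): it pads every sample to the longest length in the batch and stacks the result into one tensor.

import torch
from torch.utils.data import DataLoader

def pad_collate_fn(batch):
    # Pad each sample with zeros up to the longest sample in this batch.
    max_len = max(len(sample) for sample in batch)
    padded = [sample + [0] * (max_len - len(sample)) for sample in batch]
    return torch.tensor(padded)

# Usage: pass it when instantiating the DataLoader, e.g.
# loader = DataLoader(my_dataset, batch_size=4, collate_fn=pad_collate_fn)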
I hope you solve this bug as soon as possible and get your model running!

java.nio.charset.MalformedInputException: Input length = 1

Project scenario:

Building a personal blog system with Spring Boot, following a tutorial video.

Problem Description:

Error reported:

20:40:41.091 [restartedMain] ERROR org.springframework.boot.SpringApplication - Application run failed
org.yaml.snakeyaml.error.YAMLException: java.nio.charset.MalformedInputException: Input length = 1
	at org.yaml.snakeyaml.reader.StreamReader.update(StreamReader.java:218)
	at org.yaml.snakeyaml.reader.StreamReader.ensureEnoughData(StreamReader.java:176)
	at org.yaml.snakeyaml.reader.StreamReader.ensureEnoughData(StreamReader.java:171)
	at org.yaml.snakeyaml.reader.StreamReader.peek(StreamReader.java:126)
	at org.yaml.snakeyaml.scanner.ScannerImpl.scanToNextToken(ScannerImpl.java:1177)
	at org.yaml.snakeyaml.scanner.ScannerImpl.fetchMoreTokens(ScannerImpl.java:287)
	at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:227)
	at org.yaml.snakeyaml.parser.ParserImpl$ParseImplicitDocumentStart.produce(ParserImpl.java:195)
	at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
	at org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:148)
	at org.yaml.snakeyaml.composer.Composer.checkNode(Composer.java:82)
	at org.yaml.snakeyaml.constructor.BaseConstructor.checkData(BaseConstructor.java:123)
	at org.yaml.snakeyaml.Yaml$1.hasNext(Yaml.java:507)

Cause analysis:

Following the video, I had converted the properties file to a YML file by directly renaming its suffix. The Chinese comments on the page then became garbled, so I naturally converted the file to GBK encoding, and the error above appeared.

The first possibility is that the encoding of your application.yml file is wrong; change it to UTF-8.
The second possibility is that your application.yml was produced by renaming a file of another type to .yml. In that case, copy all the contents of application.yml, delete the file, create a new application.yml, paste the copied contents back in, and run the project again; the error will be gone.

Solution:

In IDEA's settings, find File Encodings.

Convert the file from GBK to UTF-8, then retype the Chinese comments, and the project runs normally.
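If you prefer to convert outside the IDE, a small Python sketch (assuming the file is application.yml in the current directory) performs the same re-encoding; comments that were already mangled still need to be retyped afterwards:

# Read the file as GBK and write it back as UTF-8.
with open("application.yml", encoding="gbk") as f:
    content = f.read()
with open("application.yml", "w", encoding="utf-8") as f:
    f.write(content)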

Copying a param with shape torch.Size([262, 2048]): parameter size does not match

The checkpoint provides a parameter with shape torch.Size([262]), while the shape in the current model is torch.Size([290]).

The size of fc.weight does not match because the class count changed; the parameter just needs to be corrected.

Open the corresponding config file with Vim and modify the parameter (the class count).

That solved the problem.
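If instead you want to keep the new head size and load the rest of the weights in code, a hedged sketch (the ResNet-50 with 290 classes and the checkpoint path are my assumptions based on the shapes in the error) is to drop the mismatched entries before loading:

import torch
import torchvision

# Assumption: a ResNet-50 whose fc layer now outputs 290 classes,
# while the checkpoint was saved with a different class count.
model = torchvision.models.resnet50(num_classes=290)

state_dict = torch.load("checkpoint.pth", map_location="cpu")  # placeholder path
model_state = model.state_dict()
# Keep only checkpoint entries whose shapes match the current model.
filtered = {k: v for k, v in state_dict.items()
            if k in model_state and v.shape == model_state[k].shape}
model_state.update(filtered)
model.load_state_dict(model_state)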


"Failed to import any qt binding" error when installing evo

Installing evo produces the error: Failed to import any qt binding

We need to evaluate the trajectory accuracy of a UAV, and the evo tool can be installed following the process given on GitHub; I chose to install it from source. During testing, the error "Failed to import any qt binding" appeared. It is caused by the matplotlib library being installed under a different Python version. Because I want to use ROS, and ROS was originally installed under a Python 2 environment, matplotlib should also be installed under Python 2 here: first uninstall matplotlib, then install it again.

pip uninstall matplotlib
pip install matplotlib

If, after the installation completes, testing still produces the error in the title, run this command again in the source folder to reinstall:

pip install --editable . --upgrade --no-binary evo

The COMMIT TRANSACTION request has no corresponding BEGIN

Background

Error thrown when inserting data into a SQL Server database using Python:

Cannot commit transaction: (3902,b'The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
DB-Lib error message 20018, severity 16:\nGeneral SQL Server error:Check messages from the SQL Server\n')

Analysis

Honestly, I could not find the root cause, but a workaround turned up by accident.

Solution

The error was caused by a field type in SQL Server. In my case, one of the fields was set to the date type, and inserting data into it raised the error. Changing the date column to varchar solved the problem, and the data inserts normally; a sketch of the insert follows.
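For reference, a minimal sketch of the insert with pymssql (the driver is my guess from the DB-Lib wording in the error; server, table, and column names are hypothetical), sending the date as a plain string so it fits the varchar column:

import pymssql

conn = pymssql.connect(server="localhost", user="sa",
                       password="your_password", database="test_db")
cursor = conn.cursor()
# created_on is now a varchar column; the date is passed as a plain string.
cursor.execute("INSERT INTO records (name, created_on) VALUES (%s, %s)",
               ("demo", "2021-06-01"))
conn.commit()
conn.close()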

Summary

Using varchar may affect how the data can be used later; get past the blocking problem first and deal with that afterwards. In my personal experience, inserting data as varchar is the most stable approach.


My personal ability is limited; if there are mistakes, corrections are welcome!

Allegro’s solution to “symbol is missing a refdes”

Cause of and solution for a missing component identifier (refdes)

When generating the .psm file, the following prompt appears:

ERROR: ERROR(SPMHCS-1): Symbol is missing a refdes. Symbol is missing a refdes.

When running Create Symbol, the error "symbol is missing a refdes" is reported; it is due to the missing component identifier (refdes).
Solution:
With the package open in Allegro, select Layout -> Label -> RefDes.
Then, in the Options panel on the right, set the active class and subclass to RefDes / Silkscreen_Top.
Click on a blank spot to add the silkscreen refdes automatically; the silkscreen can then be output.

I later found that this error was caused by importing an ASCII file exported from AD (Altium Designer) into Cadence to generate Cadence's component footprint library. After building and verifying, the problem turned out to be that the AD version was too old: when the ASCII file it saved is imported into Cadence, the component identifier is lost.

Arthas: error after selecting a PID, and port 3658 occupancy error

After Arthas starts, selecting a PID reports an error

The error is as follows:

[ERROR] Can not read maven-metadata.xml from: https://maven.aliyun.com/repository/public/com/taobao/arthas/arthas-packaging/maven-metadata.xml
[ERROR] Can not find Arthas under local: /root/.arthas/lib and remote: aliyun

If network access to the repository fails, download the full package directly, extract it, and use that.
GitHub answer: https://github.com/alibaba/arthas/issues/1058

Arthas port 3658 already occupied

The error is as follows:

[ERROR] Target process 19045 is not the process using port 3658, you will connect to an unexpected process.
[ERROR] 1. Try to restart arthas-boot, select process 2452, shutdown it first with running the 'shutdown' command.
[ERROR] 2. Or try to use different telnet port, for example: java -jar arthas-boot.jar --telnet-port 9998 --http-port -1

netstat -anp | grep 3658 shows the ID of the process occupying the port. Just specify a different port, following the method prompted at the end of the log:

java -jar arthas-boot.jar --telnet-port 9998 --http-port -1

Resolve nginx startup failure

Nginx fails to start with the error:

nginx: [error] CreateFile() "C:\web\nginx-1.17.9/logs/nginx.pid" failed (2: The system cannot find the file specified)

Run nginx -t to check the nginx configuration:

C:\web\nginx-1.17.9>nginx.exe -t
nginx: the configuration file C:\web\nginx-1.17.9/conf/nginx.conf syntax is ok
nginx: [emerg] bind() to 0.0.0.0:8090 failed (10013: An attempt was made to access a socket in a way forbidden by its access permissions)
nginx: configuration file C:\web\nginx-1.17.9/conf/nginx.conf test failed

The problem here is much the same.

The issue is that port 80 is occupied:

10013: An attempt was made to access a socket in a way forbidden by its access permissions

Solution:

In cmd, run: netstat -aon | findstr :80

Then run: tasklist | findstr "15616"   (15616 being the PID found in the previous step)

Finally, open Task Manager -> Details, find that process, and close it manually.