Category Archives: Linux

[Solved] An unexpected error has occurred. Conda has prepared the above report.

Problem Description:

Error when using conda to create a virtual environment:

An unexpected error has occurred. Conda has prepared the above report. 

Solution:

Method (1): delete the .condarc file

Method (2): run conda clean -i

Method (3): close your VPN, restart your computer, and then continue installing the virtual environment (a command sketch follows)
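
Putting the three methods together, a rough command sketch (the environment name test_env is just an example):

rm ~/.condarc                     # Method (1): remove the user-level conda config file
conda clean -i                    # Method (2): clear conda's index cache
conda create -n test_env python   # then retry creating the virtual environment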

VMware: vmw_ioctl_command error invalid parameter [How to Solve]

(1) First, run: roscore & rosrun gazebo_ros gazebo, or enter directly in the terminal: gazebo;

(2) The following error is reported: vmw_ioctl_command error invalid parameter

Solution: write export SVGA_VGPU10=0 into the ~/.bashrc file, i.e.

$ echo "export SVGA_VGPU10=0" >> ~/.bashrc

Then close the terminal, reopen it, and repeat step (1); the same error may still appear at first.

In this case, close the terminal, reopen it, and enter gazebo again; it may take several tries before Gazebo starts normally.
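
A short sketch of the retry, assuming the export line above was appended to ~/.bashrc:

source ~/.bashrc     # or simply open a new terminal
echo $SVGA_VGPU10    # should print 0
gazebo               # retry; as noted above, it may take a few attempts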

[Solved] S3fs mount error: s3fs: unable to access MOUNTPOINT…

s3fs mount reports an error, prompting: s3fs: unable to access MOUNTPOINT /backup/fileserver/: Transport endpoint is not connected

s3fs appfiles.v1 /backup/fileserver/ -o passwd_file=/etc/passwd-s3fs -o url=http://192.168.12.30 -o uid=1002,gid=1002 -o use_path_request_style
Error Messages:
s3fs: unable to access MOUNTPOINT /backup/fileserver/: Transport endpoint is not connected

How to Solve this error:
1. Confirm that the network link is OK:
ping 192.168.12.30
The host can be pinged, so the link is fine.
2. Check whether the port is open:
telnet 192.168.12.30 80
The port is reachable, so this is not a network problem.
3. ls /backup/fileserver/
Error message: ls: cannot access fileserver: Transport endpoint is not connected
This error is actually useful information: ls does not complain about an empty directory, so the message means the directory is still mounted while the connection behind it is gone.
Checking the processes shows that the s3fs process has died, but the directory was never unmounted.
4. Manually unmount:
umount /backup/fileserver/
No error is reported, which confirms the inference.
5. Mount again:
s3fs appfiles.v1 /backup/fileserver/ -o passwd_file=/etc/passwd-s3fs -o url=http://192.168.12.30 -o uid=1002,gid=1002 -o use_path_request_style
No error is reported. The troubleshooting is done: the problem was a dead s3fs process whose mount point had never been unmounted.
6. Confirm that it is mounted:

df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs tmpfs 7.8G 819M 7.0G 11% /run
tmpfs tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/vda1 xfs 15G 4.8G 11G 32% /
/dev/vdb xfs 100G 4.0G 97G 4% /backup
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/0
s3fs fuse.s3fs 16E 0 16E 0% /backup/recordfiles
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/1000
s3fs fuse.s3fs 16E 0 16E 0% /backup/fileserver

The mount point is back in the df output, so the mount is OK and the whole process is over.
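
Since the same failure can recur whenever the s3fs process dies, a small helper script can automate the check. A rough sketch, reusing the bucket name, mount point and options from the command above:

#!/bin/bash
# Remount /backup/fileserver/ if it is stuck in the "Transport endpoint is not connected" state
MNT=/backup/fileserver/
if ! ls "$MNT" > /dev/null 2>&1; then
    # The stale FUSE mount has to be removed before s3fs can mount again
    umount "$MNT" 2>/dev/null || fusermount -u "$MNT"
    s3fs appfiles.v1 "$MNT" -o passwd_file=/etc/passwd-s3fs -o url=http://192.168.12.30 \
        -o uid=1002,gid=1002 -o use_path_request_style
fi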

nvm npm Error: segmentation fault [How to Solve]

Background

After switching the node version with nvm use, npm reports a segmentation fault.

My solution

sudo apt autoremove npm

Note: it must be autoremove, not remove; remove does not clear everything.

Suspected cause

There is a globally installed npm (from apt), and its configuration interferes with the npm that comes with the node version installed by nvm.
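
A quick sketch of the full sequence, assuming nvm is already installed with at least one node version:

sudo apt autoremove npm   # remove the system-wide npm completely
hash -r                   # clear the shell's cached command paths
nvm use node              # re-select the nvm-managed node (or: nvm use <version>)
which npm                 # should now point into ~/.nvm/versions/node/...
npm -v                    # should print a version instead of crashing with a segmentation fault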

Memory error: cannot allocate memory [How to Solve]

1. Problem background

Start a process, and the process reports an error: cannot allocate memory

2. Cause of problem

Checking the startup script of this process shows that it requests a large amount of memory from the kernel at start-up, but the kernel currently refuses to commit that much memory.

3. Troubleshooting

1. View the remaining memory of the current physical machine

free -m

2. View the number of processes in the current system

# The maximum number of processes allowed in the system
sysctl kernel.pid_max

# The current number of processes (including threads) on the host
ps -eLf | wc -l

3. View memory application and availability

cat /proc/meminfo | grep Commit

4. Solution

The kernel refuses to commit the memory the process asks for, so allow memory overcommit:

sysctl -w vm.overcommit_memory=1
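
To make the setting survive a reboot, it can also be written to /etc/sysctl.conf; a short sketch:

echo "vm.overcommit_memory=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p                    # reload /etc/sysctl.conf
cat /proc/meminfo | grep Commit   # re-check the commit values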

openEuler 21.09 sudo yum update Error: Errors during downloading metadata for repository 'EPOL'

Environment: openEuler 21.09, installed from openEuler-21.09-everything-x86_64-dvd.iso
Running sudo yum update reports an error for the EPOL repository:

Errors during downloading metadata for repository 'EPOL':
	- Status code: 404 for http://repo.openeuler.org/openEuler-21.09/EPOL/repomd.xml

Edit the repo configuration file:

sudo vi /etc/yum.repos.d/openEuler.repo

The EPOL section originally reads:

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-21.09/EPOL/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler

The correct address is:

name=EPOL
baseurl=http://repo.openeuler.org/openEuler-21.09/EPOL/main/$basearch/

Save, exit, and run sudo yum update again.
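
A sketch of the follow-up commands after editing the repo file (standard yum usage, nothing openEuler-specific):

sudo yum clean all   # drop the cached metadata, including the broken EPOL entry
sudo yum makecache   # rebuild the cache from the corrected baseurl
sudo yum update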

SCP Error When the Path Contains Special Characters [How to Solve]

Problem

To copy a file from the server to the local machine, the following command is used:

scp [email protected]:/home/test/files(202110~202111).xls .

Error message:
bash: -c: line 0: syntax error near unexpected token `('

Solution:

1. Enclose the entire path in single quotation marks;
2. Before the parentheses, add the escape character \, as in the command below:
scp [email protected]:'/home/test/files\(202110~202111\).xls' .

The semget function error: errno is set to 28 [How to Solve]

When calling semget under Linux to create a semaphore set, it returns -1 and the creation fails.

1. semget is a system call, so the actual failure can only be confirmed through errno. Printing the error with strerror(errno) gives "No space left on device". Is disk space really insufficient, and is that why the semaphore cannot be created?

2. Looking up the error code in errno.h shows that it corresponds to ENOSPC. What does this value mean here?

3. Does semget give ENOSPC its own meaning? The man page of semget says ENOSPC is returned when a semaphore set has to be created but doing so would exceed the system limit on the number of semaphore sets (SEMMNI) or the system-wide maximum number of semaphores (SEMMNS). So the semaphore count exceeds a system limit.

It is therefore basically certain that the system semaphore limits are the cause. First, temporarily raise the kernel semaphore parameters and run again to see whether the problem goes away.

4. The following commands are useful for viewing and adjusting semaphores

#1)The sysctl command can view and set system kernel parameters
# The 4 corresponding values from left to right are SEMMSL, SEMMNS, SEMOPM and SEMMNI.
sysctl -a | grep sem #View the setting value of the system semaphore
kernel.sem = 250 32000 32 128


#2) There are three ways to modify: the numbers are for reference only
echo 610 86620 100 142 > /proc/sys/kernel/sem

sysctl -w kernel.sem="610 86620 100 142"

echo "kernel.sem=610 86620 100 142" >> /etc/sysctl.conf`


#3) View the current semaphores of the system together with pid and owner information; see --help for more options
ipcs -s -p -c


#4) Delete the semaphore set with the specified semid; see --help for more usage
ipcrm -s semid


#5) Delete all semaphore sets
ipcrm -a sem

5. While hunting for the semaphore resource leak, it is convenient to watch the semaphore information in real time, so the output is printed in a loop by a small script:

#!/bin/bash
# ipcs.sh: print the semaphore list once per second
echo "ipcs -s loop"

while true
do
	sleep 1
	ipcs -s
done

6. Note: the real task is still to find out why the code creates so many semaphores that the limit is exceeded; under normal conditions the semaphore count should not hit the system limit.
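
One extra step worth noting (not part of the original list): if kernel.sem was appended to /etc/sysctl.conf as in #2) above, reload the file so the permanent setting takes effect without a reboot:

sudo sysctl -p       # re-read /etc/sysctl.conf
sysctl kernel.sem    # confirm the new values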

Apex library Install Error: amp not installed error [How to Solve]

Problem Description:

The apex library is missing and needs to be installed. Note: do not use pip install apex; although that installs successfully, errors still appear when the program runs, so it cannot be used.

The specific installation steps are as follows:

# If the git clone is too slow, you can open https://github.com/NVIDIA/apex manually, download and unzip it, then perform the following steps
git clone https://github.com/NVIDIA/apex
cd apex
python setup.py install

After execution, if the build completes without errors, the installation is successful.
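
If the fused CUDA kernels are also needed, the NVIDIA apex README has documented a pip-based build with the C++/CUDA extensions enabled. A sketch (flags as given in that README; the installed CUDA toolkit must match the one PyTorch was built with):

cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./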


[Solved] Win 10 VS Code Connect to the container of the server error: Cannot connect to the Docker daemon at … Is the docker daemon running

Background

After solving the Docker Desktop installation issue above, I continued to follow the instructions of the VS Code Remote-Containers plug-in, in order to connect from the PC (Windows) and edit files inside a container on the remote server (Linux).

After everything was set up, connecting to the container reported an error. The log says: Cannot connect to the Docker daemon at http://docker.example.com. Is the docker daemon running? However, Docker Desktop on my PC had already started Docker without any problem.

After searching, I found that my account on the server is not in the docker group, which means every docker command needs sudo. The solution is to add the account to the docker group.

Solution:

Follow the instructions on the official website to add the account, after which everything works and the problem is solved. The commands below are a simplified version; see the official documentation linked afterwards for details.

cat /etc/group | grep docker # Print group information and filter with grep to show only the docker group
# If there are no results, run the following command to create a new docker group 
sudo groupadd docker 
# If the cat command yields results, then instead of creating a new group, just run the following command
sudo usermod -aG docker $USER # where $USER is replaced by your account name
newgrp docker # In Linux environment, make the group update take effect; other environments see the official website link
docker run hello-world # Check if docker can be executed without sudo

Post-installation steps for Linux | Docker Documentation

 

To access files in a remote server's container with VS Code:
1. Install VS Code on the PC.
2. Connect to the server remotely: find the Remote-SSH plug-in in the VS Code extensions, configure the ssh config file after installation, and connect to the server (a sample ssh config sketch follows below).
3. Find Remote-Containers in the VS Code extensions, install it, and then install Docker and WSL2 according to the extension's introduction (the link where the problem above occurred).
4. Connect to the server's container according to the introduction of Remote-Containers (the problematic step in this article).
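
For step 2, a minimal ~/.ssh/config entry on the PC looks roughly like this (the host alias, address and user name are placeholders):

Host my-server
    HostName 192.168.1.100
    User your-account
    Port 22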

[Solved] PointSIFT Error: -ltensorflow_framework not found

My environment: Ubuntu 18.04, TensorFlow 2.1.
When reproducing PointSIFT, I followed the README, modified the TensorFlow and lib paths in the .sh file, and ran the .sh build script, which reported an error:
/usr/bin/ld: cannot find -ltensorflow_framework
collect2: error: ld returned 1 exit status

The reason is that the shell script links against the dynamic library libtensorflow_framework.so, but in TensorFlow 2.1 the library is named libtensorflow_framework.so.2, so the linker cannot find it.

Solution: create a symbolic link so that libtensorflow_framework.so points to libtensorflow_framework.so.2.

cd /usr/local/lib/python3.6/dist-packages/tensorflow_core  # My files are in this directory; some installs use the tensorflow directory. Run this wherever the .so.2 file lives
ln -s libtensorflow_framework.so.2 libtensorflow_framework.so
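
A quick way to double-check, assuming TensorFlow imports correctly: ask TensorFlow itself where its libraries live, then confirm the new symlink there:

python3 -c "import tensorflow as tf; print(tf.sysconfig.get_lib())"   # prints the library directory
ls -l libtensorflow_framework.so*   # the .so symlink should now point to .so.2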