Category Archives: How to Fix

[Solved] zsh error when sourcing .zshrc: compinit:503: no such file or directory

When sourcing ~/.zshrc (here with oh-my-zsh), compinit reports:

zsh problem: compinit:503: no such file or directory: /usr/local/share/zsh/site-functions/_brew

Solution:

```zsh
sudo ln -fsv /opt/homebrew/completions/zsh/_brew /usr/local/share/zsh/site-functions/_brew

brew cleanup && source ~/.zshrc
```

After relinking and re-sourcing, the error was gone.
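
If the error persists, it may be worth confirming where Homebrew actually put the completion file; a minimal check, assuming an Apple-silicon install under /opt/homebrew (on Intel Macs the prefix is /usr/local):

```zsh
# Confirm the completion file Homebrew installed:
ls -l /opt/homebrew/completions/zsh/_brew
# Confirm what compinit now finds at the expected location:
ls -l /usr/local/share/zsh/site-functions/_brew
```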

[Solved] RSA decrypt error: the data to be decrypted exceeds the maximum 128 bytes of this module

The following C# code throws this error during RSA decryption:

            RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
            rsa.FromXmlString(privatekey);
            // Bug: content is Base64 ciphertext; UTF-8-encoding it produces the wrong (and too many) bytes
            byte[] cipherbytes = rsa.Decrypt(Encoding.UTF8.GetBytes(content), false);
            var Text = Encoding.UTF8.GetString(cipherbytes);

The fix: content holds Base64-encoded ciphertext, so it must be Base64-decoded rather than UTF-8-encoded. A 1024-bit RSA key decrypts blocks of at most 128 bytes, and Encoding.UTF8.GetBytes applied to the Base64 string of a 128-byte ciphertext yields 172 bytes, hence the error. Change

cipherbytes = rsa.Decrypt(Encoding.UTF8.GetBytes(content), false);

to:

cipherbytes = rsa.Decrypt(Convert.FromBase64String(content), false);

Error when accessing Oracle: connected to an idle instance

When connecting to Oracle as SYSDBA, the following prompt appears:

[oracle@localhost ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Dec 2 20:21:40 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> select * from dual;
select * from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

Solution:

First, make sure the listener is started:

[oracle@localhost ~]$ lsnrctl start

Then start the instance:

SQL> startup;

Querying the instance again now shows it is open:

SQL> select status from v$instance;

STATUS
------------------------
OPEN
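
For reference, the whole sequence in one pass; a sketch assuming you are logged in as the oracle OS user with ORACLE_HOME and ORACLE_SID already set:

```bash
# Start the listener, then start the instance and verify its status:
lsnrctl start
sqlplus / as sysdba <<'EOF'
startup
select status from v$instance;
exit
EOF
```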

 

Error "bc: command not found" when Git Bash runs a shell script


1. Problem description:

On Windows, running a shell script in Git Bash fails with:

bc: command not found

2. Cause:

Git Bash does not ship the bc utility, and it has no package manager to install bc directly.

3. Solution:

Install bc through MSYS2 and copy the binary into Git's tree. Specific steps:
(1) Install MSYS2 (download from https://www.msys2.org/)
(2) After installation, open the MSYS2 shell and install bc with the following command

pacman -S bc

(3) Go to the msys64\usr\bin folder under the MSYS2 installation directory and find bc.exe
(4) Copy bc.exe into the usr\bin folder under the Git installation directory
Re-run the shell script in Git Bash; the "bc: command not found" error is gone.
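
A quick way to confirm the copy worked; the command below should print 6.28:

```bash
# bc -l loads the standard math library and enables decimals:
echo "3.14 * 2" | bc -l
```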

php artisan cache:clear command reports an error

When building projects with Laravel 5.7 or above, the cache-clearing command:

php artisan cache:clear  

sometimes reports the following error:

"Failed to clear cache. Make sure you have the appropriate permissions"

One common cause is that the cache data directory is missing. Create it:

mkdir -p storage/framework/cache/data

You also need to set directory permissions:

chmod 777 /home/www/dir/bootstrap/cache && chmod 777 /home/www/dir/bootstrap/cache/*
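
chmod 777 works but is very permissive. A tighter sketch, assuming the web server runs as www-data and the project lives at /home/www/dir (adjust both to your setup):

```bash
cd /home/www/dir
mkdir -p storage/framework/cache/data
# Hand ownership to the web server user instead of opening the directories to everyone:
chown -R www-data:www-data storage bootstrap/cache
chmod -R 775 storage bootstrap/cache
```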

 

Reference:

"Failed to clear cache. Make sure you have the appropriate permissions" in Laravel 5.7 – CSDN blog

Datanode startup failed with an error: Incompatible clusterIDs

Environment

Hadoop 3.3.1; CentOS 7.4; Java SE 1.8.0_301

Error report summary

java.io.IOException: Incompatible clusterIDs in /opt/module/hadoop-3.3.1/data/dfs/data: namenode clusterID = CID-aa23cfe4-9ad3-4c06-87fc-e862c8f3a722; datanode clusterID = CID-55fa9a51-7777-4ff4-87d6-4df7cf2cb8b9

Problem description

An error is reported when the datanode starts. The log /opt/module/hadoop-3.3.1/logs/hadoop-bordy-datanode-hadoop102.log contains the following:

2021-11-29 21:58:51,350 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2021-11-29 21:58:51,354 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/module/hadoop-3.3.1/data/dfs/data/in_use.lock acquired by nodename 13694@hadoop102
2021-11-29 21:58:51,356 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/opt/module/hadoop-3.3.1/data/dfs/data
java.io.IOException: Incompatible clusterIDs in /opt/module/hadoop-3.3.1/data/dfs/data: namenode clusterID = CID-aa23cfe4-9ad3-4c06-87fc-e862c8f3a722; datanode clusterID = CID-55fa9a51-7777-4ff4-87d6-4df7cf2cb8b9
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:746)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:296)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:389)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:561)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1753)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1689)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:394)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:295)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:854)
        at java.lang.Thread.run(Thread.java:748)
2021-11-29 21:58:51,358 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid a4eeff59-0192-4402-8278-4743158fa405) service to hadoop101/192.168.2.101:8020. Exiting.
java.io.IOException: All specified directories have failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:562)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1753)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1689)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:394)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:295)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:854)
        at java.lang.Thread.run(Thread.java:748)
2021-11-29 21:58:51,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid a4eeff59-0192-4402-8278-4743158fa405) service to hadoop101/192.168.2.101:8020
2021-11-29 21:58:51,363 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid a4eeff59-0192-4402-8278-4743158fa405)
2021-11-29 21:58:53,364 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2021-11-29 21:58:53,424 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop102/192.168.2.102
************************************************************/

Cause of problem

Hadoop's upgrade mechanism requires each datanode to store a permanent clusterID in its VERSION file. When a datanode starts, it checks its clusterID against the namenode's; if the two do not match, the "Incompatible clusterIDs" exception is thrown. See the official JIRA issue HDFS-107.

Analysis steps

1. View the clusterID in the VERSION file under the datanode directory /opt/module/hadoop-3.3.1/data/dfs/data/current.
2. View the clusterID in the VERSION file under the namenode directory /opt/module/hadoop-3.3.1/data/dfs/name/current.

The clusterIDs in the two files do not match. In the HDFS architecture, every datanode must communicate with the namenode, and the clusterID is the unique ID of the cluster the namenode formatted.

Solution

Change the clusterID in the failed datanode's VERSION file to the namenode's clusterID, as sketched below.
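
A minimal sketch of that edit, assuming the directory layout from the log above (adjust the paths to your installation):

```bash
# On the namenode, read its clusterID:
grep clusterID /opt/module/hadoop-3.3.1/data/dfs/name/current/VERSION
# clusterID=CID-aa23cfe4-9ad3-4c06-87fc-e862c8f3a722

# On the failed datanode, overwrite its clusterID with the namenode's value:
sed -i 's/^clusterID=.*/clusterID=CID-aa23cfe4-9ad3-4c06-87fc-e862c8f3a722/' \
    /opt/module/hadoop-3.3.1/data/dfs/data/current/VERSION
```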

Reference

Hadoop failed to start datanode: there is a problem with clusterID – Wang Shen – cnblogs.com

CLion develops STM32: adding files and compiling reports "no such file or directory"

1.1 Adding files

Create folders directly under the project directory and create the source and header files in them.

After adding them, errors occur when .c files include the .h files; CMakeLists.txt needs to be modified.

Add header file

include_directories(Path1/path1 Path2/path2)

Path1/path1 is a header file path; separate different paths with spaces.

Add source file

file(GLOB_RECURSE SOURCES "directory/*.*")

directory is the folder containing the source files; separate entries for different paths with spaces.

After this, compilation reaches 100%, but an error is still reported: no such file or directory

Per the reference: CLion 2020.2.4 CMake error reporting

When new files are added, CLion automatically appends them after LINKER_SCRIPT in CMakeLists.txt.

Delete the header and source files appended after it, leaving only:

set(LINKER_SCRIPT ${CMAKE_SOURCE_DIR}/STM32F103RFTx_FLASH.ld)

Compile again: no error is reported and the firmware can be downloaded normally.

MYSQL:ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

[root@ip-172-31-43-199 ~]# mysql -V
mysql  Ver 8.0.27 for Linux on x86_64 (MySQL Community Server - GPL)

Password policy exception:

ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

Solution:

1. View the current password policy of MySQL with

show variables like 'validate_password%';

2. Lower the validation strength of the password policy by setting the global parameter to LOW:

set global validate_password.policy=LOW;

Parameters related to the MySQL password policy:
1) validate_password_length: the minimum total length of the password
2) validate_password_dictionary_file: the path of the dictionary file used for password validation
3) validate_password_mixed_case_count: the minimum number of upper- and lower-case letters in the password
4) validate_password_number_count: the minimum number of digits in the password
5) validate_password_policy: the strength and validation level of the password; the default is MEDIUM.
   Values of validate_password_policy: 0/LOW: check length only; 1/MEDIUM: check length, digits, case and special characters; 2/STRONG: check length, digits, case, special characters and the dictionary file
6) validate_password_special_char_count: the minimum number of special characters in the password
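
For example, in one MySQL 8.0 session (the component exposes dotted names such as validate_password.policy; the older 5.7 plugin uses the underscore names listed above):

```bash
mysql -u root -p <<'EOF'
SHOW VARIABLES LIKE 'validate_password%';
SET GLOBAL validate_password.policy = LOW;   -- 0/LOW: check length only
SET GLOBAL validate_password.length = 6;     -- minimum password length
EOF
```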

MySQL 5.1-style password modification:

set password for 'root'@'localhost' = password('12121212');

Reference: https://blog.csdn.net/qq_39344689/article/details/89674079

Error in `./a.out': free(): invalid next size (fast): 0x0000000001da8010

*** Error in `./a.out': free(): invalid next size (fast): 0x0000000001da8010 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777f5)[0x7f216399b7f5]
/lib/x86_64-linux-gnu/libc.so.6(+0x8038a)[0x7f21639a438a]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f21639a858c]
./a.out[0x400896]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f2163944840]
./a.out[0x400609]
======= Memory map: ========
00400000-00401000 r-xp 00000000 08:01 1155                               /home/csgec/C++/a.out
00600000-00601000 r--p 00000000 08:01 1155                               /home/csgec/C++/a.out
00601000-00602000 rw-p 00001000 08:01 1155                               /home/csgec/C++/a.out
01da8000-01dc9000 rw-p 00000000 00:00 0                                  [heap]
7f215c000000-7f215c021000 rw-p 00000000 00:00 0 
7f215c021000-7f2160000000 ---p 00000000 00:00 0 
7f216370e000-7f2163724000 r-xp 00000000 08:01 923094                     /lib/x86_64-linux-gnu/libgcc_s.so.1
7f2163724000-7f2163923000 ---p 00016000 08:01 923094                     /lib/x86_64-linux-gnu/libgcc_s.so.1
7f2163923000-7f2163924000 rw-p 00015000 08:01 923094                     /lib/x86_64-linux-gnu/libgcc_s.so.1
7f2163924000-7f2163ae4000 r-xp 00000000 08:01 953576                     /lib/x86_64-linux-gnu/libc-2.23.so
7f2163ae4000-7f2163ce4000 ---p 001c0000 08:01 953576                     /lib/x86_64-linux-gnu/libc-2.23.so
7f2163ce4000-7f2163ce8000 r--p 001c0000 08:01 953576                     /lib/x86_64-linux-gnu/libc-2.23.so
7f2163ce8000-7f2163cea000 rw-p 001c4000 08:01 953576                     /lib/x86_64-linux-gnu/libc-2.23.so
7f2163cea000-7f2163cee000 rw-p 00000000 00:00 0 
7f2163cee000-7f2163d14000 r-xp 00000000 08:01 953568                     /lib/x86_64-linux-gnu/ld-2.23.so
7f2163ef7000-7f2163efa000 rw-p 00000000 00:00 0 
7f2163f12000-7f2163f13000 rw-p 00000000 00:00 0 
7f2163f13000-7f2163f14000 r--p 00025000 08:01 953568                     /lib/x86_64-linux-gnu/ld-2.23.so
7f2163f14000-7f2163f15000 rw-p 00026000 08:01 953568                     /lib/x86_64-linux-gnu/ld-2.23.so
7f2163f15000-7f2163f16000 rw-p 00000000 00:00 0 
7ffcd77e9000-7ffcd780a000 rw-p 00000000 00:00 0                          [stack]
7ffcd798b000-7ffcd798e000 r--p 00000000 00:00 0                          [vvar]
7ffcd798e000-7ffcd7990000 r-xp 00000000 00:00 0                          [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
fish: Job 2, “./a.out” terminated by signal SIGABRT (Abort)

Solution:

unsigned int *a = (unsigned int *)malloc(sizeof(unsigned int) * 2 * n);

malloc allocates memory dynamically, so n must already hold a definite value here. If n is uninitialized when malloc runs, the buffer has the wrong size, and later writes past its end corrupt the heap metadata that free() checks.
① You can assign n before requesting the space:

    int n = 5;
    unsigned int *a = (unsigned int *)malloc(sizeof(unsigned int) * 2 * n);

② Or allocate the space only after the user's input has been read, so that n is known; memory of an unknown size cannot be allocated before the input arrives.

    scanf("%d %d", &n, &m);
    // getchar();
    unsigned int *a = (unsigned int *)malloc(sizeof(unsigned int) * 2 * n);

After either correction, the error disappears.
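
Heap corruption like this is easiest to pinpoint with AddressSanitizer; a sketch assuming gcc or clang, with prog.c standing in for your source file:

```bash
# Rebuild with AddressSanitizer plus debug info, then run normally:
gcc -g -fsanitize=address prog.c -o a.out
./a.out   # ASan reports the exact line of the heap-buffer-overflow or bad free
```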

pytools.prefork.ExecError: error invoking 'nvcc --version': [Errno 2] No such file or directory

Problem description: pycuda's sample code runs without problems locally on Linux, but when debugging the code remotely through PyCharm, the above error occurs.

The fix takes up to two steps; if the first step solves it, the second is not needed.

Step 1:

export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"

Steps:
1. Find your ~/.bashrc file.
2. Add the lines above to it.
3. Run source ~/.bashrc.
4. To test, run nvcc --version.

Some people use the versioned directory cuda-10.1 in these paths; I use cuda because on my machine /usr/local/cuda is a symbolic link (the equivalent of a shortcut) to cuda-10.1, and the leading "l" in "lrwxrwxrwx" marks a symbolic link. Either form therefore works.
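
You can check which case applies on your machine, e.g.:

```bash
# A leading "l" in the mode string marks a symbolic link:
ls -l /usr/local/cuda
# lrwxrwxrwx 1 root root 20 ... /usr/local/cuda -> /usr/local/cuda-10.1
```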

Step 2:

Open compiler.py and add the following line:

nvcc = '/usr/local/cuda/bin/' + nvcc

In context:

    def compile_plain(source, options, keep, nvcc, cache_dir, target="cubin"):
        from os.path import join
    
        assert target in ["cubin", "ptx", "fatbin"]
        nvcc = '/usr/local/cuda/bin/' + nvcc # --> here is the new line
        
        if cache_dir:
            checksum = _new_md5()
            ...

Location of compiler.py:

Because I use conda environments, I found it inside one of them; you can use the locate command to find it.

anaconda3/envs/torch19/lib/python3.7/site-packages/pycuda

Command:

find ./lib/python3.7/site-packages -name compiler.py
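
Alternatively, with locate (assuming its file database is up to date):

```bash
locate compiler.py | grep pycuda
```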