
CentOS Failed to Modify the SSH Port: error: Bind to port 27615 on 0.0.0.0 failed: Permission denied.


Problem cause

SELinux does not allow sshd to bind to the non-default port.

Solution

Add the new port to the SELinux policy for sshd:

# Install the management tool
$ yum -y install policycoreutils-python

# Check which ports SELinux allows for sshd; by default only 22 is listed
$ semanage port -l | grep ssh

# Add the new port
$ semanage port -a -t ssh_port_t -p tcp 27615
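For the status output below to show the new port, the port also has to be changed in sshd_config and sshd restarted. A minimal sketch, assuming the standard CentOS config path and an existing (possibly commented) Port line:

# Set the new port in the sshd configuration
$ sed -i 's/^#\?Port .*/Port 27615/' /etc/ssh/sshd_config

# Restart sshd so it binds to 27615
$ systemctl restart sshd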

$ systemctl status sshd
● sshd.service - OpenSSH server daemon
   Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2022-10-29 10:10:08 CST; 36min ago
     Docs: man:sshd(8)
           man:sshd_config(5)
 Main PID: 8653 (sshd)
   CGroup: /system.slice/sshd.service
           └─8653 /usr/sbin/sshd -D

Oct 29 10:10:08 centos-linux.shared systemd[1]: Starting OpenSSH server daemon...
Oct 29 10:10:08 centos-linux.shared sshd[8653]: Server listening on 0.0.0.0 port 27615.
Oct 29 10:10:08 centos-linux.shared systemd[1]: Started OpenSSH server daemon.
Oct 29 10:31:44 centos-linux.shared sshd[18735]: Accepted password for root from 10.211.55.2 port 50375 ssh2
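To confirm the change took effect, a quick check with standard CentOS tools:

# sshd should now be listening on the new port
$ ss -tlnp | grep sshd

# and the SELinux policy should now list 27615 under ssh_port_t
$ semanage port -l | grep ssh_port_t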

[Solved] Group coordinator lookup failed,The coordinator is not available

Problem

After the Kafka consumer starts, it throws the exception shown in the title.

Some digging showed that the __consumer_offsets topic is in an abnormal state. Either the topic has been deleted and no longer exists, or it has become abnormal because of partition problems or other reasons.

Check in ZooKeeper whether the topic exists:

ls /brokers/topics
[__consumer_offsets, xx, xx]

1) __consumer_offsets does not exist

If the topic does not exist, simply create it:

./kafka-topics.sh --zookeeper master:2181 --partitions 1 --replication-factor 1 --create --topic __consumer_offsets

2) __consumer_offsets exists

This was my case: the topic exists, but it became abnormal during a Kafka migration.

  1. Stop Kafka first.
  2. Delete the topic in ZooKeeper:

# remove node info
deleteall /brokers/topics/__consumer_offsets
# remove node
delete /brokers/topics/__consumer_offsets

  3. Restart Kafka.

In theory, after restarting Kafka the consumer will come back online automatically and a __consumer_offsets topic will be created automatically. If not, follow the previous step and create one manually.
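To confirm the topic came back in a healthy state, a quick check might look like this (a sketch; master:2181 is the ZooKeeper address used above, master:9092 is an assumed broker listener):

# every partition of the internal offsets topic should have a leader
./kafka-topics.sh --zookeeper master:2181 --describe --topic __consumer_offsets

# the consumer group should now be able to find its coordinator
./kafka-consumer-groups.sh --bootstrap-server master:9092 --list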

[Solved] Nginx Request 500 Error: CreateFile() “/temp/client_body_temp/0000000013” failed (5: Access is denied)

In the past few days, a front-end colleague's upload requests were getting a 500 error from Nginx. There was no additional error information, and the back end never received the request. His local Nginx error log showed:

[crit] 28700#21636: *1389 CreateFile() "\nginx-clean/temp/client_body_temp/0000000010" failed (5: Access is denied)

Giving the user that runs Nginx read/write access to the /temp/client_body_temp folder solves the problem.

 

Nginx has several upload-related parameters:

client_max_body_size
client_max_body_size defaults to 1M and is the maximum request body size the server accepts, as declared in the "Content-Length" request header. If the request body is larger than client_max_body_size, Nginx returns 413 Request Entity Too Large.

client_body_buffer_size
The buffer size Nginx allocates for the request body. If the request body is smaller than client_body_buffer_size, the data is kept in memory. If it is larger than client_body_buffer_size but smaller than client_max_body_size, it is first written to a temporary file.

client_body_temp_path

Specifies where the temporary files are stored. By default this is the configured client_body_temp directory (e.g. /tmp/client_body_temp), and the user and group running Nginx must have read and write permission on it. Otherwise, when the request body is larger than client_body_buffer_size and the temporary file cannot be written, an error is reported.

Syntax: client_body_temp_path dir-path [level1 [level2 [level3]]]
If the body is larger than client_body_buffer_size, it is stored under the directory specified by client_body_temp_path in files named with an increasing integer. The optional level1, level2 and level3 parameters create a sub-directory hierarchy so that no single directory accumulates enough temporary files to degrade performance.

 

client_body_in_file_only on|clean|off;

When the value is not off, the HTTP request body is always stored in a disk file; even a zero-byte body is stored as a file.

When the request ends, the file is kept if the value is on (this is generally used for debugging and troubleshooting) and deleted if the value is clean.

Summary:

If the transmitted data is larger than client_max_body_size, the upload cannot succeed and Nginx returns 413 Request Entity Too Large.
If it is smaller than client_body_buffer_size, it is held in memory, which is efficient.
If it is larger than client_body_buffer_size and smaller than client_max_body_size, it is written to a temporary file, and the temporary directory must have the right permissions.
If efficiency matters most, set client_max_body_size and client_body_buffer_size to the same value so that the body stays in memory and no temporary file is written.
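For illustration, the directives above could be tuned together and the temp directory ownership fixed for the worker user. This is only a sketch: the paths, sizes and the nginx user are assumptions, not values from the post.

# Append example settings to an included config file (http-level context)
cat >> /etc/nginx/conf.d/upload.conf <<'EOF'
client_max_body_size    20m;
client_body_buffer_size 1m;
client_body_temp_path   /var/lib/nginx/client_body_temp 1 2;
EOF

# Give the worker user (often nginx or www-data) ownership of the temp path
chown -R nginx:nginx /var/lib/nginx/client_body_temp

# Validate and reload
nginx -t && nginx -s reload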

Nginx Error: failed (13: Permission denied) while reading upstream [How to Solve]

Error Messages:

2022/10/20 18:38:56 [crit] 67121#0: *16996 open() "/app/openresty/nginx/proxy_temp/8/03/0000000038" failed (13: Permission denied) while reading upstream, client: 100.xxx.xxx.92, server: sfimplat.sf-express.com, request: "GET /sfimplat/release/issuesql/query HTTP/1.1", upstream: "http://100.111.xxx.xxx:9080/sfimplat/release/issuesql/query", host: "100.111.136.71", referrer: "http://100.xxx.xxx.xxx/issue/sql/approve"

 

Solution:

Edit the main configuration file:

vi /usr/local/nginx/conf/nginx.conf

and change

#user nobody;

to

user root;

Restart Nginx and it will be OK!
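Applied from the command line, the change and restart might look like this (a sketch; the nginx binary path is an assumption based on the conf path above):

# Switch the worker user in the main config
sed -i 's/^#\?user .*/user root;/' /usr/local/nginx/conf/nginx.conf

# Check the configuration and restart
/usr/local/nginx/sbin/nginx -t
systemctl restart nginx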

ImportError: DLL load failed while importing xxx: The specified program could not be found.

Error Message:

Connected to pydev debugger (build 203.7717.81)
======================================================================
Error when loading pyOpenMS libraries!
Libraries could not be found / could not be loaded.
Note: when using the Spyder IDE, this error may be triggered when
the 'Automatic' backend is used. Please change this in Tools ->
Preferences -> IPython -> Graphics to 'Inline'.
To debug this error, please run ldd (on linux) or dependency walker (on windows) on 
C:\ProgramData\Anaconda3\envs\dgl\lib\site-packages\pyopenms\pyopenms.so
======================================================================
======================================================================
python-BaseException
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "C:\ProgramData\Anaconda3\envs\dgl\lib\site-packages\pyopenms\__init__.py", line 80, in <module>
    raise e
  File "C:\ProgramData\Anaconda3\envs\dgl\lib\site-packages\pyopenms\__init__.py", line 43, in <module>
    from .all_modules import *
  File "C:\ProgramData\Anaconda3\envs\dgl\lib\site-packages\pyopenms\all_modules.py", line 1, in <module>
    from .pyopenms_1 import *
ImportError: DLL load failed while importing pyopenms_1: The specified program could not be found.

 

Cause analysis
https://github.com/OpenMS/OpenMS/issues/4291#issuecomment-1221604911

A library like pyopenms bundles its own copy of PyQt, so importing another GUI library before importing it can trigger this error. That is why some people fix it simply by putting import pyopenms before import matplotlib.pyplot as plt.

With some IDEs (such as PyCharm), the problem is caused by the debugger automatically importing GUI libraries during debugging.

The underlying problem is that different versions of the same library get loaded, so you just need to make sure you do not accidentally provide multiple Qt versions. Unfortunately, this is a bit difficult on Windows, since there is no proper package manager and each Python wheel is bound to its own private copy.

 

Solution:
Find the PyQt compatible option in the settings and uncheck it (https://github.com/OpenMS/OpenMS/issues/4110#issuecomment-578613842):

  • File | Settings | Build execution and deployment | Python debugger | PyQtCompatible = Unchecked

Hive Error: FAILED: RuntimeException Error loading hooks(hive.exec.post.hooks): java.lang.ClassNotFoundException: org.apache.atlas.hive.hook.HiveHook

After entering the Hive client, executing any SQL statement produces the following error:

FAILED: RuntimeException Error loading hooks(hive.exec.post.hooks): java.lang.ClassNotFoundException: org.apache.atlas.hive.hook.HiveHook
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.hadoop.hive.ql.hooks.HookUtils.readHooksFromConf(HookUtils.java:55)
	at org.apache.hadoop.hive.ql.HookRunner.loadHooksFromConf(HookRunner.java:90)
	at org.apache.hadoop.hive.ql.HookRunner.initialize(HookRunner.java:79)
	at org.apache.hadoop.hive.ql.HookRunner.runBeforeParseHook(HookRunner.java:105)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:612)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:232)

2022-10-18 17:07:01,441 ERROR [045aa301-3e86-4efe-927d-cfabf3f29cb6 main] ql.Driver (SessionState.java:printError(1250)) - FAILED: RuntimeException Error loading hooks(hive.exec.post.hooks): java.lang.ClassNotFoundException: org.apache.atlas.hive.hook.HiveHook
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.hadoop.hive.ql.hooks.HookUtils.readHooksFromConf(HookUtils.java:55)
	at org.apache.hadoop.hive.ql.HookRunner.loadHooksFromConf(HookRunner.java:90)
	at org.apache.hadoop.hive.ql.HookRunner.initialize(HookRunner.java:79)
	at org.apache.hadoop.hive.ql.HookRunner.runBeforeParseHook(HookRunner.java:105)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:612)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:232)

java.lang.RuntimeException: Error loading hooks(hive.exec.post.hooks): java.lang.ClassNotFoundException: org.apache.atlas.hive.hook.HiveHook
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.hadoop.hive.ql.hooks.HookUtils.readHooksFromConf(HookUtils.java:55)
	at org.apache.hadoop.hive.ql.HookRunner.loadHooksFromConf(HookRunner.java:90)
	at org.apache.hadoop.hive.ql.HookRunner.initialize(HookRunner.java:79)
	at org.apache.hadoop.hive.ql.HookRunner.runBeforeParseHook(HookRunner.java:105)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:612)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:232)

	at org.apache.hadoop.hive.ql.HookRunner.loadHooksFromConf(HookRunner.java:93)
	at org.apache.hadoop.hive.ql.HookRunner.initialize(HookRunner.java:79)
	at org.apache.hadoop.hive.ql.HookRunner.runBeforeParseHook(HookRunner.java:105)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:612)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:232)

 

Cause and Solution:
After installing Atlas, Hive reports the error above because the Atlas hook class is not on Hive's classpath.
Move all the jar packages from the /opt/module/atlas/apache-atlas-hive-hook-2.1.0/hook/hive directory into Hive's lib directory and the problem is solved.
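A minimal sketch of that fix, assuming Hive is installed under $HIVE_HOME (a path not given in the post) and using cp where the post moves the jars:

# Hook jars shipped with the Atlas Hive hook (path from the post)
ATLAS_HOOK_DIR=/opt/module/atlas/apache-atlas-hive-hook-2.1.0/hook/hive

# Put them on Hive's classpath so org.apache.atlas.hive.hook.HiveHook can be loaded
cp -r "$ATLAS_HOOK_DIR"/* "$HIVE_HOME/lib/"

# Re-open the Hive CLI and run any SQL statement to confirm the hook now loads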

app: ASSERTION FAILED at D:\Cleaver\rf\test\nRF5_SDK_17.1.0_ddde560\external\freertos\source\tasks.c:2012

FreeRTOS creates three tasks.

<error> app: ASSERTION FAILED at D:\Cleaver\rf\test\nRF5_SDK_17.1.0_ddde560\external\freertos\source\tasks.c:2012

 

Sometimes the assertion fires in timers.c instead.

The cause is that there is not enough memory for the task stacks, so allocate a slightly smaller stack when creating the tasks:

xTaskCreate(aw2016_rgb_control, "AW2016_RGB_CONTROL", configMINIMAL_STACK_SIZE + 100, NULL, 1, &aw2016_rgb_task_handle);

 

How to Solve:

The stack depth used to be configMINIMAL_STACK_SIZE + 200; changing it to configMINIMAL_STACK_SIZE + 100 makes the assertion go away.

Linux Virtual Machine Boot Container: Error response from daemon: driver failed programming external connectivity on endpoint

docker: Error response from daemon: driver failed programming external connectivity on endpoint tomcat01 (00028237b8dd7b21dbce757be3bf2df0e0fcfa6c3987cac68c42d2fb6603b42d): 
(iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 49162 -j DNAT --to-destination 172.17.0.2:8080 ! -i docker0: iptables: No chain/target/match by that name.
(exit status 1)).

When starting a Docker container or changing Docker-related configuration, operations such as restarting the firewall clear Docker's iptables rules, so the DOCKER chain no longer appears when you query the firewall rules.

The specific reason is that the DOCKER chain in iptables has been deleted. There are several ways the chain can get removed:

Restarting firewalld clears it. firewalld is used on CentOS 7 and later, iptables on CentOS 6 and earlier, and firewalld is itself built on top of iptables, so when running firewalld or iptables commands, be careful not to remove the chains that Docker depends on.

The solution is to restart the Docker engine:

systemctl restart docker

Then query the Docker chains again with iptables -L,
or with iptables -t nat -nL.
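After the restart, a quick verification (the DOCKER chain is Docker's default name; tomcat01 is the container from the error above):

# the DOCKER chain should be back in both the filter and nat tables
iptables -L DOCKER -n
iptables -t nat -nL DOCKER

# the published port mapping for the container should reappear
docker port tomcat01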

[Solved] The application could not be installed: INSTALL_FAILED_NO_MATCHING_ABIS

When running the project on the Android emulator, installation fails with "The application could not be installed: INSTALL_FAILED_NO_MATCHING_ABIS". The cause is that the build does not include native libraries for the x86 ABI used by the emulator.

How to Solve:

In the module's build.gradle file, inside the android block, add the following to the defaultConfig block:

ndk {
    // Select the .so libraries for the CPU types you need
    abiFilters 'armeabi', 'armeabi-v7a', 'arm64-v8a', 'x86'
    // You can also add 'x86_64', 'mips', 'mips64'
}

If an ndk block already exists, you only need to append the following to the abiFilters list:

, 'x86'

Finally, note that even after adding 'x86', some third-party SDKs still cannot work properly on the emulator (for example, the GPS location feature of the Gaode map SDK, because the emulator has no GPS), so for important features, test on a real phone if you have one.
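To see which ABIs the running emulator actually accepts, a quick check (assuming adb is on the PATH):

# list the ABIs supported by the connected emulator/device
adb shell getprop ro.product.cpu.abilist

# the APK must ship native libraries for at least one of these ABIs,
# otherwise installation fails with INSTALL_FAILED_NO_MATCHING_ABIS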

Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to

Calico was installed with the tigera-operator method and reported errors after startup; all Calico-related pods show CrashLoopBackOff.

Running kubectl -n calico-system describe pod calico-node-2t8w6 shows the following error:

Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: no such file or directory.

Cause of the problem:

This happened during a Kubernetes cluster deployment. By default Calico auto-detects node IP addresses with the first-found method and picked the wrong address, so the detection method has to be specified manually.

1. Remove all the Calico resources

kubectl -n tigera-operator get deployments.apps -o yaml > a.yaml
kubectl -n calico-system get daemonsets.apps calico-node -o yaml > b.yaml
kubectl -n calico-system get deployments.apps calico-kube-controllers -o yaml > c.yaml
kubectl -n calico-system get deployments.apps calico-typha -o yaml > d.yaml
kubectl -n calico-apiserver get deployments.apps calico-apiserver -o yaml > e.yaml
kubectl delete -f a.yaml
kubectl delete -f b.yaml
kubectl delete -f c.yaml
kubectl delete -f d.yaml
kubectl delete -f e.yaml
2. Remove tigera-operator.yaml and custom-resources.yaml
kubectl delete -f tigera-operator.yaml
kubectl delete -f custom-resources.yaml

3. Remove vxlan.calico
ip link delete vxlan.calico

4. Modify the custom-resources.yaml file and add nodeAddressAutodetectionV4:

# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    #bgp: Enabled
    #hostPorts: Enabled
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    #linuxDataplane: Iptables
    #multiInterfaceMode: None
    nodeAddressAutodetectionV4:
      interface: ens.*

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
5. Re-create:

kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

Check:

kubectl -n calico-system get daemonsets.apps calico-node -o yaml | grep -A2 IP_AUTODETECTION_METHOD
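Once the operator has rolled the daemonset out again, a quick sanity check (a sketch; k8s-app=calico-node is the daemonset's default label):

# all calico-system pods should become Ready and stop crash-looping
kubectl -n calico-system get pods -o wide

# the readiness probe that failed before should now pass
kubectl -n calico-system describe pod -l k8s-app=calico-node | grep -A2 Readiness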

[Solved] org.gradle.api.internal.plugins.PluginApplicationException: Failed to apply plugin

This problem sometimes occurs when opening an Android project written by someone else:

Caused by: org.gradle.api.internal.plugins.PluginApplicationException: Failed to apply plugin [id 'com.android.internal.application']

The solution is as follows:

1. In Project view mode, open the gradle.properties file.

2. On the last line (or any line in the file), add the following:

android.overridePathCheck=true

This property means "override the path check".

3. Then click ‘Sync Now’ in the upper right corner and wait a few seconds to solve the problem.

Petalinux Failed to open PetaLinux lib: librdi_commonxillic.so: cannot open shared object file:

Creating a PetaLinux app reported an error:

mrzhang@ubuntu:~/works/MZ702P_FEP$ petalinux-create -t apps -n testapp --enable --force
INFO: Create apps: testapp
WARNING: Component "/home/mrzhang/works/MZ702P_FEP/project-spec/meta-user/recipes-apps/testapp" already exists.
WARNING: --force parameter specified, overwriting
INFO: New apps successfully created in /home/mrzhang/works/MZ702P_FEP/project-spec/meta-user/recipes-apps/testapp
INFO: Enabling created component...
INFO: sourcing bitbake
INFO: oldconfig rootfs
INFO: testapp has been enabled 
Failed to open PetaLinux lib: librdi_commonxillic.so: cannot open shared object file: No such file or directory.

Solution:

# note: "sudo echo ... > file" would fail because the redirection is not run as root
echo "/opt/pkg/petalinux/2018.3/tools/lib" | sudo tee /etc/ld.so.conf.d/petalinux.so.conf
sudo ldconfig
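To check that the library is now resolvable (using the path from the commands above):

# the PetaLinux tools library should now appear in the linker cache
ldconfig -p | grep librdi_commonxillic

# re-running the petalinux-create command should no longer print the error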

Extended content:

General solution to "cannot open shared object file" on Linux:

1. When loading a dynamic library (.so file) on Linux, you get:

cannot open shared object file: No such file or directory

Solution:

1. Run ldd xxx on the binary to see which libraries are missing:

libmysqlcppconn.so.7 => not found
libboost_system.so.1.64.0 => not found

2. Set the LD_LIBRARY_PATH variable:

gedit ~/.bashrc

Add on the last line:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/your/path

Re-open bash and run ldd again; the libraries should now be found.

2. (Recommended) Modify the shared library configuration file /etc/ld.so.conf

1. Edit the file:

sudo gedit /etc/ld.so.conf

2. Add the library path (the include line is usually already in the file):

include /etc/ld.so.conf.d/*.conf
/home/xxx/Documents/core/Linux/Test/src/Test

Save and exit.
3. Make the configuration take effect immediately:

sudo ldconfig
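To verify the fix (a sketch; ./Test stands in for the binary from the example path above, and the library names come from the ldd output earlier):

# the previously missing libraries should now be listed in the linker cache
ldconfig -p | grep -E 'libmysqlcppconn|libboost_system'

# re-run ldd on the binary; no line should say "not found" any more
ldd ./Test | grep "not found"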