Category Archives: Error

OpenCV VideoWriter Failed to Start [How to Solve]

Problem: VideoWriter fails to open the output file: writer.open() returns false, and writer.isOpened() also returns false.

Solution: The runtime environment is missing the opencv_ffmpeg***.dll and opencv_ffmpeg***_64.dll libraries that VideoWriter calls under the hood; just copy them over from the OpenCV installation.

Regarding OpenCV versions: 2.x only supports writing AVI, while 3.x also supports MP4 (this refers to the official prebuilt binaries, not self-compiled versions).
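As a minimal sketch of how the failure shows up in code (assuming OpenCV 3.x and the C++ API; the file name, codec and frame size are placeholders):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoWriter writer;
    // MP4 output goes through the ffmpeg backend, so the opencv_ffmpeg DLLs must be found at runtime.
    bool ok = writer.open("out.mp4",
                          cv::VideoWriter::fourcc('m', 'p', '4', 'v'),
                          25.0, cv::Size(640, 480));
    if (!ok || !writer.isOpened()) {
        std::cerr << "VideoWriter failed to open: check the opencv_ffmpeg*.dll files" << std::endl;
        return 1;
    }
    // ... write frames with writer.write(frame) ...
    return 0;
}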

[Solved] Android Studio Error: Execution failed for task ':app:mergeDebugResources'

Error:

Error: Execution failed for task ':app:mergeDebugResources'

 

Reason:

An image I added does not pass Android Studio's resource check (PNG crunching). Adding two lines of configuration disables the check.

 

Solution:

Add the following lines to the build.gradle of the app module:

android {
    .......
    aaptOptions.cruncherEnabled = false
    aaptOptions.useNewCruncher = false
    .......
}

[Solved] URIError: Failed to decode param ‘/%3C%=%20BASE_URL%20%3Estatic/index.%3C%=%20VUE_APP_INDEX_CSS_HASH%20%3E.css’

Preface: When I ran a vue + uniapp project in the browser, it threw the error above.

The error is reported as follows

URIError: Failed to decode param '/%3C%=%20BASE_URL%20%3Estatic/index.%3C%=%20VUE_APP_INDEX_CSS_HASH%20%3E.css'

 

Reason: uniapp cannot resolve the project's run path. Specifically, it is a path configuration problem: in my case, the [base path to run] under the web configuration section of manifest.json was written incorrectly.

Solution: Change the base path to the correct value and run again.
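For reference, a minimal sketch of the relevant fragment of manifest.json, assuming the standard uni-app h5/router fields (the "base" value here is only an illustration; it corresponds to the [base path to run] option):

"h5" : {
    "router" : {
        "mode" : "hash",
        "base" : "/"
    }
}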

[Solved] Group coordinator lookup failed,The coordinator is not available

Question

After the Kafka consumer starts, it throws the exception in the title.

After some research, I found that the problem is with the __consumer_offsets topic. Either the topic has been deleted and no longer exists, or it is in an abnormal state because of partition or other issues.

Check in ZooKeeper whether this topic exists:

ls /brokers/topics
[__consumer_offsets , xx,xx]

1) __consumer_offsets does not exist

./kafka-topics.sh --zookeeper master:2181 --partitions 1 --replication-factor 1 --create --topic __consumer_offsets

If the topic does not exist, just create it directly with the command above.

2) __consumer_offsets exists

This was my case: the topic exists, but an exception occurred during a Kafka migration.

  1. Stop Kafka first.
  2. Delete the topic in ZooKeeper:
# remove child node info
deleteall /brokers/topics/__consumer_offsets
# remove the node itself
delete /brokers/topics/__consumer_offsets
  3. Restart Kafka.

In theory, after restarting Kafka the consumer will come back online automatically and a __consumer_offsets topic will be created automatically. If not, create one manually as in the previous step.
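To confirm the topic came back healthy after the restart, it can be described (a quick check that reuses the same ZooKeeper address as the commands above):

./kafka-topics.sh --zookeeper master:2181 --describe --topic __consumer_offsets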

[Solved] Nginx Request 500 Error: CreateFile() “/temp/client_body_temp/0000000013” failed (5: Access is denied)

In the past few days, a front-end colleague's upload requests kept getting a 500 from nginx, with no additional error information, and the back end never received the request. His local nginx error log showed:

[crit] 28700#21636: *1389 CreateFile() "\nginx-clean/temp/client_body_temp/0000000010" failed (5: Access is denied)

Giving the user (group) that runs nginx write permission on the temp/client_body_temp folder solves the problem.

 

nginx has several upload-related parameters:
client_max_body_size
client_max_body_size defaults to 1M and sets the maximum request body size the server accepts, as given by the "Content-Length" request header. If the request body is larger than client_max_body_size, nginx returns the error 413 Request Entity Too Large.

client_body_buffer_size
client_body_buffer_size is the size of the buffer nginx allocates for the request body. If the body is smaller than client_body_buffer_size, it is kept directly in memory; if it is larger than client_body_buffer_size but smaller than client_max_body_size, it is first written to a temporary file.

client_body_temp_path 

Specifies where the temporary files are stored. By default this is the client_body_temp directory (for example /tmp/client_body_temp), and the user that nginx runs as must have read and write permission on it. Otherwise, when the transmitted data is larger than client_body_buffer_size, writing the temporary file fails and an error is reported.

Syntax: client_body_temp_path dir-path [level1 [level2 [level3]]]

If the body is larger than client_body_buffer_size, it is stored in the directory specified by client_body_temp_path under a file named with an increasing integer. The optional level1, level2 and level3 parameters create a hierarchy of subdirectories so that a single directory does not accumulate so many temporary files that performance degrades.

 

client_body_in_file_only on|clean|off;

When the value is not off, the HTTP request body is always written to a file on disk; even a zero-byte body is stored as a file.

When the request ends, the file is kept if the setting is on (generally used for debugging and locating problems) and deleted if it is clean.

Summary:

If the transmitted data is larger than client_max_body_size, the upload cannot succeed and nginx returns 413 Request Entity Too Large.
If it is smaller than client_body_buffer_size, it is held directly in memory, which is efficient.
If it is between client_body_buffer_size and client_max_body_size, it is written to a temporary file, so the temporary directory must have the right permissions.
If efficiency is the priority, set client_max_body_size and client_body_buffer_size to the same value so that no temporary file is written and everything stays in memory.
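Putting the directives together, a minimal nginx.conf sketch (the 10m values and the temp path are illustrative, not taken from the original configuration):

http {
    client_max_body_size    10m;                          # requests larger than this get 413
    client_body_buffer_size 10m;                          # bodies up to this size stay in memory, no temp file
    client_body_temp_path   temp/client_body_temp 1 2;    # must be writable by the nginx worker user
}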

Hive Error: FAILED: RuntimeException Error loading hooks(hive.exec.post.hooks): java.lang.ClassNotFoundException: org.apache.atlas.hive.hook.HiveHook

After entering the Hive client, executing any SQL prompts the following error message:

FAILED: RuntimeException Error loading hooks(hive.exec.post.hooks): java.lang.ClassNotFoundException: org.apache.atlas.hive.hook.HiveHook
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.hadoop.hive.ql.hooks.HookUtils.readHooksFromConf(HookUtils.java:55)
	at org.apache.hadoop.hive.ql.HookRunner.loadHooksFromConf(HookRunner.java:90)
	at org.apache.hadoop.hive.ql.HookRunner.initialize(HookRunner.java:79)
	at org.apache.hadoop.hive.ql.HookRunner.runBeforeParseHook(HookRunner.java:105)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:612)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:232)


Cause and Solution:
After installing Atlas, Hive reports the above error message.
Move all the jar packages from the /opt/module/atlas/apache-atlas-hive-hook-2.1.0/hook/hive directory to Hive's lib directory and the problem is solved.
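For example, assuming Hive is installed under /opt/module/hive (adjust the path to your own installation), the jars can be copied over like this:

cp /opt/module/atlas/apache-atlas-hive-hook-2.1.0/hook/hive/*.jar /opt/module/hive/lib/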

[Solved] The application could not be installed: INSTALL_FAILED_NO_MATCHING_ABIS

The application could not be installed: INSTALL_FAILED_NO_MATCHING_ABIS appears when running the project on the Android emulator. It is caused by the app not being configured to include native libraries for the x86 ABI that the emulator uses.

How to Solve:

In the module's build.gradle, inside the android block's defaultConfig, add an ndk block:

ndk {
    // Select the .so libraries of the corresponding CPU types to include.
    abiFilters 'armeabi', 'armeabi-v7a', 'arm64-v8a', 'x86'
    // You can also add 'x86_64', 'mips', 'mips64'.
}
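For clarity, this is roughly where the block sits in the module-level build.gradle (the applicationId is a placeholder):

android {
    ...
    defaultConfig {
        applicationId "com.example.app"
        ...
        ndk {
            abiFilters 'armeabi', 'armeabi-v7a', 'arm64-v8a', 'x86'
        }
    }
}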

If an ndk block already exists, you only need to add 'x86' to its abiFilters list.

Finally, please note: even after adding 'x86', some third-party SDKs still cannot work properly on the emulator (for example, the GPS location feature of the Gaode/AMap SDK, because the emulator has no GPS), so if you have a real phone, use it to test the important features.

Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to

Calico was installed using the tigera-operator method and reported errors after startup; all Calico-related pods showed CrashLoopBackOff.

Running kubectl -n calico-system describe pod calico-node-2t8w6 showed the following error:

Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: no such file or directory.

Cause of the problem:

We hit this issue during a Kubernetes cluster deployment. By default Calico auto-detects node IP addresses using the first-found method and picked the wrong address, so the detection method has to be specified manually.

1. Remove all the Calico components (export their manifests first, then delete them)

kubectl -n tigera-operator get deployments.apps -o yaml > a.yaml
kubectl -n calico-system get daemonsets.apps calico-node -o yaml > b.yaml
kubectl -n calico-system get deployments.apps calico-kube-controllers -o yaml > c.yaml
kubectl -n calico-system get deployments.apps calico-typha -o yaml > d.yaml
kubectl -n calico-apiserver get deployments.apps calico-apiserver -o yaml > e.yaml
kubectl delete -f a.yaml
kubectl delete -f b.yaml
kubectl delete -f c.yaml
kubectl delete -f d.yaml
kubectl delete -f e.yaml
2. Remove the tigera-operator and custom resources
kubectl delete -f tigera-operator.yaml
kubectl delete -f custom-resources.yaml

3. Remove vxlan.calico
ip link delete vxlan.calico

4. Modify the custom-resources.yaml file to add nodeAddressAutodetectionV4:
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    #bgp: Enabled
    #hostPorts: Enabled
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    #linuxDataplane: Iptables
    #multiInterfaceMode: None
    nodeAddressAutodetectionV4:
      interface: ens.*

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
5. Re-create the resources
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

Check that the autodetection setting was applied:
kubectl -n calico-system get daemonsets.apps calico-node -o yaml | grep -A2 IP_AUTODETECTION_METHOD

[Solved] org.gradle.api.internal.plugins.PluginApplicationException: Failed to apply plugin

This problem sometimes occurs when opening an Android project written by someone else:

Caused by: org.gradle.api.internal.plugins.PluginApplicationException: Failed to apply plugin [id 'com.android.internal.application']

The solution is as follows:

1. In Project view mode, select gradle.properties file.

2. On the last line (or any line in the file), enter the following code.

android.overridePathCheck=true

This line overrides (skips) the project path check.

3. Then click ‘Sync Now’ in the upper right corner and wait a few seconds to solve the problem.

[Solved] stm32 Failed to Download: Error: Flash Download failed – “Cortex-M3”

Error:

Error: Flash Download failed – “Cortex-M3”

 

Solution:

1. The correct target device is not selected.

2. The startup file does not match the chip's flash capacity.

In the C/C++ preprocessor defines:

High-density (large-capacity) chips use STM32F10X_HD,USE_STDPERIPH_DRIVER

Medium-density chips use STM32F10X_MD,USE_STDPERIPH_DRIVER

Low-density chips use STM32F10X_LD,USE_STDPERIPH_DRIVER

After changing the defines, also confirm that the startup file for the corresponding capacity is present in the project's CORE group.

3. There is another place where the memory size is set:

Flash Download in the Debug settings.

Add the correct flash capacity (programming algorithm) via Add/Remove.

[Solved] Android-android studio apk Install Error: INSTALL_PARSE_FAILED_MANIFEST_MALFORMED

I wrote a simple program today and found that it would not run at all; installing it on a physical device kept failing with:

INSTALL_PARSE_FAILED_MANIFEST_MALFORMED
Installation failed due to: ‘null’

Searching online, most answers said it was a problem with the test-only flag and suggested simply adding:

android:testOnly="false"

But the problem remained after trying that.

Another answer said the problem is related to the manifest file, i.e. there may be something wrong in the AndroidManifest.xml configuration, so I checked from that angle. I was quite curious how a hello-world program could have a configuration problem, and I saw nothing unusual in the XML, so I went to the APK output directory and tried installing it directly:

adb install -r -d a.apk

This time the real error message appeared:

adb: failed to install app-debug.apk: Failure [INSTALL_PARSE_FAILED_MANIFEST_MALFORMED: Failed parse during installPackageLI: /data/app/vmdl631790294.tmp/base.apk (at Binary XML file line #20): com.leonard.goot.MainActivity: Targeting S+ (version 31 and above) requires that an explicit value for android:exported be defined when intent filters are present]

That was the problem: the activity declares an intent filter but has no android:exported attribute. After adding it, the app installs and debugs normally. Done!
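For reference, a minimal sketch of the fix in AndroidManifest.xml (the activity name is taken from the error message above; adjust it to your own):

<activity
    android:name=".MainActivity"
    android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>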

[Solved] failed to set bridge addr: “cni0“ already has an IP address different from xxxx


Recently, while debugging Kubernetes, after deleting and re-adding a node and then deploying a Pod on that node, a network bridge address error appeared. The troubleshooting steps for this exception are as follows:

Error:

(combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox “745720ffb20646054a167560299b19bb9ae046fe6c677b5d26312b89a26554e1”: failed to set bridge addr: “cni0” already has an IP address different from 172.20.2.1/24

 

Solution:

  1. If the node was deleted without restarting the node server, restart it (this is usually caused by stale state cached on the server, and a restart clears it).
  2. Whether or not the server was restarted, delete the wrongly-addressed bridge interface on the node and wait for the system to recreate it automatically, as follows:
sudo ifconfig cni0 down    
sudo ip link delete cni0
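Once the network plugin has recreated the bridge, a quick check (a suggested verification, not part of the original steps) is to confirm that cni0 now has an address in the expected subnet, e.g. 172.20.2.1/24 from the error above:

ip addr show cni0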