Category Archives: Error

[Solved] Non-fatal Exception: java.lang.UnsatisfiedLinkError: dlopen failed: library "libmmkv.so" not found

Project scenario:

The mmkv version used in the project, 1.0.23, is too old; it also pulls in libc++_shared.so (about 249 KB) plus libmmkv.so (about 40 KB).

Checking GitHub, I found that the latest version is 1.2.14 and that the AAR package has been optimized, so an upgrade was in order.


Problem description

In the project we upgraded mmkv from 1.0.23 to 1.2.14. After resolving a pile of compilation errors (inconsistent Kotlin versions, a required Gradle upgrade, and so on), we thought everything was fine, but the app then reported the following error on startup:

Non-fatal Exception: java.lang.UnsatisfiedLinkError: dlopen failed: library “libmmkv.so” not found

I searched various posts on the Internet and found no answer. Later, someone raised a similar question in a GitHub issue: dlopen failed: library "libmmkv.so" not found · Issue #958 · Tencent/MMKV · GitHub

Inspired by that, we cloned the source code (GitHub – Tencent/MMKV: An efficient, small mobile key-value storage framework by WeChat. Works on Android, iOS, macOS, Windows, and POSIX) and studied it.


Cause analysis:

After building the mmkv module from the cloned source, the build log shows that .so files are generated for only the following four CPU architectures:

armeabi-v7a, arm64-v8a, x86,  x86_64

No armeabi .so is generated.

My own project, however, only supports armeabi.

So the cause is clearly related to the project's CPU architecture settings.

Why isn't an armeabi .so generated any more?

armeabi support was dropped starting with NDK r17. To build it you would have to go back to NDK r16 or lower and downgrade the Android Gradle plugin to 4.1.3 or lower, but the Gradle in this project has already been upgraded to 7.x.


Solution:

Method 1: In the app's build.gradle, check the ndk abiFilters setting under android > buildTypes:

ndk {
    abiFilters "armeabi"
}

Change it to:

ndk {
    abiFilters "armeabi-v7a"
}

Armeabi-v7a is backward compatible with armeabi

Method 2: If the project supports only the armeabi architecture and cannot be upgraded to v7a, you can take the armeabi-v7a .so files out of the mmkv AAR pulled in via Maven, put them into the project's armeabi directory, and keep abiFilters as "armeabi"; see the sketch below.
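A minimal build.gradle sketch for Method 2 (the directory names are assumptions, adjust them to your module layout): copy libmmkv.so and libc++_shared.so out of the armeabi-v7a folder inside the mmkv AAR into src/main/jniLibs/armeabi, and make sure Gradle picks that directory up:

android {
    sourceSets {
        main {
            // .so files copied from the AAR's armeabi-v7a folder live here, under an armeabi subfolder
            jniLibs.srcDirs = ['src/main/jniLibs']
        }
    }
    // the existing ndk { abiFilters "armeabi" } setting stays unchanged
}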

Node Memory Overflow: FATAL ERROR: Reached heap limit Allocation failed – JavaScript heap out of memory

The first time npm run serve was executed there was no error. After a changed file was saved, the project was automatically rebuilt and the error FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory was reported.


Problem: too many resources are referenced, causing Node to run out of memory.

Solution:
1. Install the tool globally: npm install -g increase-memory-limit
2. In the project directory, run: increase-memory-limit
3. Run the project: npm run serve

If the following appears after running it:
'"node --max-old-space-size=10240"' is not recognized as an internal or external command, operable program or batch file

Search the .cmd files under node_modules\.bin for "%_prog%" (including the quotes) and replace every occurrence with %_prog% (without the quotes).

If the search finds no matches, enable searching in ignored/excluded files (the "search ignored files" toggle in your editor), do the replacement, and then run npm run serve again.

It will be OK!
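An alternative that avoids patching the .cmd scripts is to raise Node's heap limit through the NODE_OPTIONS environment variable before starting the dev server (a sketch; the 4096 MB value is an assumption, adjust it to your machine):

:: Windows cmd (use "export NODE_OPTIONS=--max-old-space-size=4096" on Linux/macOS)
set NODE_OPTIONS=--max-old-space-size=4096
npm run serve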

[Solved] Windows Nginx Startup Error: bind() to 0.0.0.0:80 failed (10013: An attempt was made to access a socket

Solution 1:

(1) Check error.log in the nginx-1.19.2\logs directory; the error message is: bind() to 0.0.0.0:80 failed (10013: An attempt was made to access a socket in a way forbidden by its access permissions)

(2) Press Win+R, type cmd, and open a command prompt as administrator

(3) Type netstat -aon | findstr :80 and see that 0.0.0.0:80 is occupied, with a PID of 4

(4) Type tasklist | findstr "4" to find the process name for PID 4, which is System

(5) The System process cannot be terminated manually; the real occupant turns out to be SQL Server Reporting Services. Stop that service and set it to start manually. After starting nginx, restart SQL Server Reporting Services if you still need it

Disadvantage: with this approach you have to stop SQL Server Reporting Services again after every reboot and then start nginx.
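The stop-and-start steps in (5) can be run from an elevated command prompt, for example (a sketch; the exact service name and the nginx path are assumptions, check them in services.msc and in your own installation):

:: stop the service that is holding port 80 (the service name varies by SQL Server version)
net stop "SQL Server Reporting Services (MSSQLSERVER)"
:: start nginx from its installation directory
cd /d C:\nginx-1.19.2
start nginx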

Solution 2:

Modify the default port number in nginx.conf

(1) Open the nginx.conf file in the nginx directory with Notepad

(2) Press Win+R, type cmd, open a command prompt as administrator, and type netstat -aon | findstr :<port> to check whether the port you plan to use is already occupied

(3) Change the listen port in nginx.conf and save the file (see the snippet after this list)

(4) At the command prompt, type nginx -s reload (an important step)

(5) Then type start nginx at the command prompt

(6) Open localhost:81 in the browser; if the nginx welcome page appears, the change was successful
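A minimal nginx.conf sketch of the change in step (3), using 81 as the example port:

server {
    listen       81;        # changed from the default "listen 80;"
    server_name  localhost;

    location / {
        root   html;
        index  index.html index.htm;
    }
}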

[Solved] Error:couldn‘t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: …

Error:couldn‘t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: …

Problem Examples

Do you encounter the following problem when typing mongo at the terminal?

couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: No connection could be made because the target machine actively refused it

Problem analysis

This problem is not complicated: MongoDB simply has not been started. Just start it.

Solution

Go into the bin directory of your MongoDB installation.

Run the following command (a port number can also be specified):

mongod --logpath "E:\professional_software\mongodb\data\log\mongodb.log" --dbpath "E:\professional_software\mongodb\data\db" --logappend

or

mongod --logpath "E:\professional_software\mongodb\data\log\mongodb.log" --dbpath "E:\professional_software\mongodb\data\db" --logappend --port 8888

With that, mongod starts successfully and is ready for the next step. Open another command prompt and run mongo.
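If mongod was started on a non-default port, as in the second command above, point the shell at that port (8888 here just follows the example):

mongo --port 8888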

Fuseki failed with message “Parse error: [line:1, col: 1] Content is not allowed in prolog.“

Fuseki failed with message “Parse error: [line:1, col: 1 ] Content is not allowed in prolog.“

Recently, when uploading an RDF file with Fuseki, the following error occurred:

 failed  with message "Parse error: [line:1, col: 1 ] Content is not allowed in prolog."

After some googling I found a PDF suggesting that the problem can be avoided by changing the file suffix to .ttl, though it did not explain the reason.
Attached is the original content of test.rdf.

@prefix ns1: <http://test.com/> .

ns1:node1 ns1:born ns1:1964 ;
    ns1:labels ns1:Person ;
    ns1:name ns1:Keanu-Reeves .

ns1:node101 ns1:born ns1:1947 ;
    ns1:labels ns1:Person ;
    ns1:name ns1:Takeshi-Kitano .

ns1:node102 ns1:born ns1:1968 ;
    ns1:labels ns1:Person ;
    ns1:name ns1:Dina-Meyer .

ns1:node103 ns1:born ns1:1958 ;
    ns1:labels ns1:Person ;
    ns1:name ns1:Ice-T .

ns1:node104 ns1:born ns1:1953 ;
    ns1:labels ns1:Person ;
    ns1:name ns1:Robert-Longo .

ns1:node106 ns1:born ns1:1966 ;
    ns1:labels ns1:Person ;
    ns1:name ns1:Halle-Berry .

ns1:node107 ns1:born ns1:1949 ;
    ns1:labels ns1:Person ;
    ns1:name ns1:Jim-Broadbent .

ns1:node108 ns1:born ns1:1965 ;
    ns1:labels ns1:Person ;
    ns1:name ns1:Tom-Tykwer .

The likely reason is that the format is inferred from the file extension: the content above is Turtle, but the .rdf suffix makes Fuseki parse it as RDF/XML, and "Content is not allowed in prolog" is the XML parser complaining that the file does not start with an XML document. Renaming the file to .ttl makes Fuseki use the Turtle parser.
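If Apache Jena's command-line tools are available, the file can be checked before uploading (a sketch; it assumes riot is on the PATH and the file has already been renamed to test.ttl):

riot --validate test.ttl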

[Solved] PluginlibFactory: The plugin for class ‘rviz_imu_plugin/IMU‘ failed to load.

Error message:

PluginlibFactory: The plugin for class 'rviz_imu_plugin/IMU' failed to load. Error: According to the loaded plugin descriptions the class rviz_imu_plugin/IMU with base class rviz::Display does not exist. Declared types are rviz/AccelStamped rviz/Axes rviz/Camera rviz/DepthCloud rviz/Effort rviz/FluidPressure rviz/Grid rviz/GridCells rviz/Illuminance rviz/Image rviz/InteractiveMarkers rviz/LaserScan rviz/Map rviz/Marker rviz/MarkerArray rviz/Odometry rviz/Path rviz/PointCloud rviz/PointCloud2 rviz/PointStamped rviz/Polygon rviz/Pose rviz/PoseArray rviz/PoseWithCovariance rviz/Range rviz/RelativeHumidity rviz/RobotModel rviz/TF rviz/Temperature rviz/TwistStamped rviz/WrenchStamped rviz_plugin_tutorials/Imu

Solution:

Install the rviz_imu_plugin library.
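On Ubuntu the plugin ships with the imu_tools packages and can usually be installed with apt (a sketch; replace <distro> with your ROS distribution, e.g. melodic or noetic):

sudo apt-get install ros-<distro>-rviz-imu-plugin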

Tip: when a plugin fails to load like this, check whether the library named in the error is actually installed.

Vivado Error: [Chipscope 16-302] Could not generate core for dbg_hub. Aborting IP Generate operation. The current Vivado temporary directory path.

When synthesizing a design in Vivado, the full compilation reports the following error:

[Chipscope 16-302] Could not generate core for dbg_hub. Aborting IP Generate operation. The current Vivado temporary directory path.

............

 

Error reason:

The project folder name is too long, so the Vivado temporary directory path becomes too long.

Solution:

Shorten the project folder name and recompile it.

ABAP: BAPI_ACC_DOCUMENT_POST Posting Error: FI/CO interface: inconsistent FI/CO document header data to be updated

Problem: when posting automatically with BAPI_ACC_DOCUMENT_POST, the call fails with the error "FI/CO interface: inconsistent FI/CO document header data to be updated".

Reasons:

1. If the company code in the header data and on the line items is the same, check the line items and do not assign the company code (BUKRS) to them; i.e. comment out the assignment below:

“it_item-comp_code = wa_account-bukrs.

2. Check whether any amount is 0; if a line item amount is 0, this error is raised.

3. Check whether the sign of each amount matches its debit/credit side. For example, posting key 40 (debit) should carry a positive amount, while posting key 50 (credit) carries a negative amount; see the sketch after the assignment below.

it_curr-amt_doccur = wa_account-wrbtr.
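A minimal ABAP sketch of the sign handling for point 3 (the field names follow the snippets above; using wa_account-bschl as the posting key is an assumption):

" force the sign of the amount to match the posting key before filling the currency line
IF wa_account-bschl = '40'.        " debit
  it_curr-amt_doccur = abs( wa_account-wrbtr ).
ELSEIF wa_account-bschl = '50'.    " credit
  it_curr-amt_doccur = abs( wa_account-wrbtr ) * -1.
ENDIF.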

Solution:

Go through the causes above, find which one applies to your program, and correct it.

[Solved] Flink Error: Flink Hadoop is not in the classpath/dependencies

Error background:

When setting up Flink on a YARN cluster, the Flink cluster could not be started.

Version:

flink-1.14.6

hadoop-3.2.3

org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:216) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:617) [flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:59) [flink-dist_2.12-1.14.6.jar:1.14.6]
Caused by: java.io.IOException: Could not create FileSystem for highly available storage path (hdfs:/flink/ha/default)
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:92) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:532) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:89) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
	at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:55) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:528) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:89) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	... 2 more

The reason for the error
Flink needs two jar dependencies to access HDFS. The Flink distribution does not ship them, so they have to be added manually.

  1. flink-shaded-hadoop-3-3.1.1.7.2.9.0-173-9.0.jar
  2. commons-cli-1.5.0.jar

Solution:

Search the Maven repository (https://mvnrepository.com/) for these two jars and download them.

Put the two jars into the /flink/lib directory.
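An alternative that works without downloading shaded jars (a sketch; it assumes Hadoop is installed on the machine and uses Flink's documented HADOOP_CLASSPATH mechanism) is to expose the local Hadoop jars to Flink before starting the cluster:

# make the local Hadoop jars visible to Flink, then start the cluster
export HADOOP_CLASSPATH=$(hadoop classpath)
./bin/start-cluster.sh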

[Solved] Mac M1 Debug Error: could not launch process: can not run under Rosetta

Debugging with VS Code or GoLand on an M1 machine reports the following error:

could not launch process: can not run under Rosetta, check that the installed build of Go is right for your CPU architecture

Main cause:

The M1 chip is based on the ARM architecture. If the installed Go SDK is the amd64 (Intel) build, it runs under Rosetta, and the above error is reported during debugging.

Solution:

Re-download the ARM64 (darwin-arm64) build of the Go SDK; the Go installer will automatically overwrite the previous version.

After installation, check with go env, as shown below.
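A quick way to verify the installed SDK (on Apple Silicon the output should be darwin and arm64):

go env GOOS GOARCH
# darwin
# arm64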

Delve (dlv) is required for Go debugging; if it is not installed, install it:

go install github.com/go-delve/delve/cmd/dlv@latest

Debug again; debugging now works normally.

[Solved] Cannot read properties of undefined (reading ‘ajax‘); Cannot read property ‘ajax‘ of undefined

Cannot read properties of undefined (reading ‘ajax‘); Cannot read property ‘ajax‘ of undefined

When sending a request with $.ajax in jQuery, the following error is reported: Cannot read properties of undefined (reading 'ajax'); Cannot read property 'ajax' of undefined

Code that reports the error:

            $.ajax({
                type:"POST",
                url:"pageServlet",
                data:jsonData,
                dataType:"json",
                success:function (data) {
                    alert(data);
                }
            })

Solution: Change $ to jQuery

            jQuery.ajax({
                type:"POST",
                url:"pageServlet",
                data:jsonData,
                dataType:"json",
                success:function (data) {
                    alert(data);
                }
            })
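This usually happens when jQuery is running in noConflict mode, when another library has taken over $, or when the jQuery script is not loaded before this code runs. Beyond replacing $ with jQuery everywhere, you can restore the short alias locally; a small sketch (nothing here is specific to this project beyond the code above):

            // if jQuery.noConflict() has been called (or another library owns $),
            // the global $ is no longer jQuery -- but it can be restored locally:
            jQuery(function ($) {
                // inside this ready callback, $ is bound to jQuery again
                $.ajax({
                    type: "POST",
                    url: "pageServlet",
                    data: jsonData,
                    dataType: "json",
                    success: function (data) {
                        alert(data);
                    }
                });
            });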