Category Archives: How to Fix

Vue project startup error: cannot find module XXX

Cause: a module that the project depends on cannot be found

Solution:
1. Delete the folder where the installed modules are stored: node_modules

2. Clear the npm cache: npm cache clean
If that reports an error, force it with npm cache clean --force
If an error is still reported, delete the package-lock.json file.

3. Reinstall the modules with npm install (the package-lock.json file will be regenerated automatically).

Then restart the dev server with npm run dev.
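For reference, the whole sequence as shell commands, a minimal sketch assuming it is run from the project root:

    rm -rf node_modules          # remove the installed modules
    npm cache clean --force      # clear the npm cache (--force needed on recent npm)
    rm package-lock.json         # only needed if the cache clean still errors
    npm install                  # reinstall; regenerates package-lock.json
    npm run dev                  # restart the dev server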

KAFKA – ERROR Failed to write meta.properties due to (kafka.server.BrokerMetadataCheckpoint)

 
I set up a Kafka source-code reading environment on Windows 10 and ran into the errors below. What is wrong? I am new to Kafka.

[2021-04-29 19:57:42,957] ERROR Failed to write meta.properties due to (kafka.server.BrokerMetadataCheckpoint)
java.nio.file.AccessDeniedException: C:\Users\a\workspace\kafka\logs
    at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
    at sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115)
[2021-04-29 19:57:42,973] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.AccessDeniedException: C:\Users\a\workspace\kafka\logs
    at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
    at sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115)

and another error below:

[2021-04-29 19:57:43,821] ERROR Error while writing to checkpoint file C:\Users\a\workspace\kafka\logs\recovery-point-offset-checkpoint (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: C:\Users\a\workspace\kafka\logs
    at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
    at sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115)
[2021-04-29 19:57:43,823] ERROR Disk error while writing recovery offsets checkpoint in directory C:\Users\a\workspace\kafka\logs: Error while writing to checkpoint file C:\Users\a\workspace\kafka\logs\recovery-point-offset-checkpoint (kafka.log.LogManager)
[2021-04-29 19:57:43,833] ERROR Error while writing to checkpoint file C:\Users\a\workspace\kafka\logs\log-start-offset-checkpoint (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: C:\Users\a\workspace\kafka\logs
    at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
    at sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115)
[2021-04-29 19:57:43,834] ERROR Disk error while writing log start offsets checkpoint in directory C:\Users\a\workspace\kafka\logs: Error while writing to checkpoint file C:\Users\a\workspace\kafka\logs\log-start-offset-checkpoint (kafka.log.LogManager)

I downgraded to Kafka 2.8.1 (from the Apache archive, Index of /kafka/2.8.1), and after that everything works perfectly.

This is a common error when log retention kicks in. Kafka does not have good support for the Windows filesystem; you can use WSL2 or Docker to work around these limitations.

This seems to be a bug affecting some Kafka versions. Try installing the kafka_2.12-2.8.1.tgz build; it should solve the problem.

I tried kafka_2.12-3.0.0.tgz and the same issue occurred; 2.12-3.0.0 seems to have issues on Windows.
Downgrading to kafka_2.12-2.8.1.tgz resolved all issues, and everything works fine.

CompilationFailureException: Compilation failure error: cannot access NotThreadSafe

Error content:

Caused by: org.apache.maven.plugin.compiler.CompilationFailureException: Compilation failure
error: cannot access NotThreadSafe

Cause of the error:

            <dependency>
                <groupId>org.apache.httpcomponents</groupId>
                <artifactId>httpclient</artifactId>
                <version>4.5.2</version>
            </dependency>
            <dependency>
                <groupId>org.apache.httpcomponents</groupId>
                <artifactId>httpcore</artifactId>
                <version>4.4.5</version>
            </dependency>

The versions of the httpclient and httpcore packages do not match.

Be careful! Many online posts say that downgrading httpcore to 4.4.4 solves the problem, because httpclient 4.5.2 and httpcore 4.4.4 match.

However, httpclient 4.5.2 itself has a known security vulnerability.

For httpclient problems, see: https://github.com/advisories/GHSA-7r82-7xv7-xcpj
Apache HttpClient versions prior to version 4.5.13 and 5.0.3 can misinterpret malformed authority component in request URIs passed to the library as java.net.URI object and pick the wrong target host for request execution.

Therefore, upgrade httpclient to 4.5.13 instead, which is compatible with httpcore 4.4.5. That satisfies both concerns.

Correct dependency declaration:

            <dependency>
                <groupId>org.apache.httpcomponents</groupId>
                <artifactId>httpclient</artifactId>
                <version>4.5.13</version>
            </dependency>
            <dependency>
                <groupId>org.apache.httpcomponents</groupId>
                <artifactId>httpcore</artifactId>
                <version>4.4.5</version>
            </dependency>
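To confirm which versions actually end up on the classpath (transitive dependencies can override what you declare), the Maven dependency tree helps:

    mvn dependency:tree -Dincludes=org.apache.httpcomponents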

Spring MVC response.sendError() custom message

When an HTTP request fails, Spring MVC under Spring Boot returns only the status code by default and drops the error message, so the message field in the error body is empty. To get the detailed message, add the following to the configuration file:

server:
  error:
    include-message: always

After adding it, the response body carries the message field, as illustrated below.
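A minimal sketch (the controller, path, and message text are hypothetical, not from the original post):

    import java.io.IOException;
    import javax.servlet.http.HttpServletResponse;
    import org.springframework.http.HttpStatus;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class DemoController {
        @GetMapping("/demo")
        public void demo(HttpServletResponse response) throws IOException {
            // with include-message: always, this text appears in the "message" field
            response.sendError(HttpStatus.BAD_REQUEST.value(), "parameter 'id' is required");
        }
    }

With the configuration above, the error body would look roughly like:

    {"timestamp":"...","status":400,"error":"Bad Request","message":"parameter 'id' is required","path":"/demo"}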

Front-end project runtime error: TypeError: token.type.endsWith is not a function

Reference documents:

https://www.baidu.com/link?url=rxkuHBNVNB0i7GCoDEfgkwDr3AllV9XWRLWwkFQl7p1PjamwyIPupL93spZDTywmxMetZ7yHNtqtgJRPGR0POa&wd=&eqid=d1c30348003392a90000000461721b6e

Problem background:

After pulling a new branch of the project code, running npm install to download the dependencies and then npm run serve fails to start the project, with the message: TypeError: token.type.endsWith is not a function.

Solution:

The project's babel-eslint version was 10.1.0; after downgrading it to 8.2.2 (see the command below) and running the project again, it starts successfully.
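A sketch of the downgrade, assuming babel-eslint is listed as a dev dependency:

    npm install babel-eslint@8.2.2 --save-dev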

Remarks:

The explanation seen on GitHub is that the babel-eslint version is wrong, and downgrading does indeed solve the problem. Strangely, though, the babel-eslint version on trunk is the same as on the branch (both 10.1.0), yet the trunk project runs successfully while the branch project does not.

java.lang.IllegalStateException: Logback configuration error detected

Recently, after integrating Spring Boot with log4j2 in a company project, the project would not start.

Cause of the error:

Logback configuration error detected. The error output (originally shown as a screenshot) makes it obvious that the logback configuration is wrong.

Solution:

The path in the error is C:\Users\administrator\Desktop\log\error\log_error.log. But my current Windows user name is admin, and that administrator path is what logback.xml is configured with.

Because there is no administrator user directory, the log file path cannot be created. Change it in logback.xml to C:/Users/admin/Desktop/log and run again.
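Roughly where the path lives in logback.xml; the appender name and pattern here are illustrative, only the <file> path matters:

    <appender name="ERROR_FILE" class="ch.qos.logback.core.FileAppender">
        <file>C:/Users/admin/Desktop/log/error/log_error.log</file>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>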

 

OK, it runs perfectly.

 

HDFS bug: DataXceiver error processing WRITE_BLOCK operation

The error message is as follows:

calculation112.aggrx:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.1.1.116:36274 dst: /10.1.1.112:50010
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:901)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
    at java.lang.Thread.run(Thread.java:748)
  ......

Reason: the file operation outlived its lease, i.e., the file was deleted while the data stream operation was still in progress.

Solution:

Step 1: raise the maximum number of files a process may open. Check the current settings:

cat /etc/security/limits.conf  | grep -v ^#


*    soft    nofile     1000000
*    hard    nofile     1048576
*    soft    nproc      65536
*    hard    nproc      unlimited
*    soft    memlock    unlimited
*    hard    memlock    unlimited
*    -       nofile     1000000

Step 2: increase the number of data-transfer threads on the DataNode (sketched below).
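The original post does not show the setting itself; the usual way is the dfs.datanode.max.transfer.threads property in hdfs-site.xml (the value below is a common starting point, not a prescription), followed by a DataNode restart:

    <property>
        <!-- maximum number of threads the DataNode uses for data transfer -->
        <name>dfs.datanode.max.transfer.threads</name>
        <value>8192</value>
    </property>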

Done.

 

Solving the PostgreSQL error #22P02 malformed array literal


Background

When using the go-pg framework to update a column, an error occurs.

The column being updated is of type bigint[]. Repeated attempts produced these errors:

missing "=" after array dimensions

ERROR #22P02 malformed array literal

Problem solving

Modify data directly:

update table_name set xxx = '{1,2}' where id = 2;

This direct modification succeeds. So can the code be fixed by converting the bound value into the same literal form? Yes, that's it!

In code, the call has the shape set("column = ?", val).

Convert each element of the []int64 value val into a string, join the strings with commas, and wrap the result in braces: "{%s}".

Expanded, the call becomes set("column = ?", "{1,2}"), as sketched below.
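A minimal Go sketch of the conversion; the helper name and the query shape are illustrative, not from the original post:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // toArrayLiteral builds a Postgres array literal such as "{1,2}" from an []int64.
    func toArrayLiteral(vals []int64) string {
        parts := make([]string, len(vals))
        for i, v := range vals {
            parts[i] = strconv.FormatInt(v, 10)
        }
        return "{" + strings.Join(parts, ",") + "}"
    }

    func main() {
        fmt.Println(toArrayLiteral([]int64{1, 2})) // prints {1,2}
        // With go-pg, pass the literal as the bind value, e.g.:
        //   db.Model(&row).Set("xxx = ?", toArrayLiteral(vals)).Where("id = ?", 2).Update()
    }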

Done!

Xilinx Vitis arm-xilinx-eabi-gcc.exe: error: *.c: Invalid argument

This problem may be a bug in version 2021.1 only; unfortunately, that is exactly the version I am currently using. After Build Project, the arm-xilinx-eabi-gcc.exe: error: *.c: Invalid argument error appears (originally shown as a screenshot).

Find where this Makefile is through the "make: Leaving directory" information. For me, it is located at Zynq_CPU_wrapper_hw_platform_1\zynq_fsbl\zynq_fsbl_bsp\ps7_cortexa9_0\libsrc\rtl_multiplier_v1_0\src\Makefile (we assume Zynq_CPU_wrapper_hw_platform_1 is the platform name, under the Vitis workspace folder). Opening this file, we see the following code:

COMPILER=
ARCHIVER=
CP=cp
COMPILER_FLAGS=
EXTRA_COMPILER_FLAGS=
LIB=libxil.a

RELEASEDIR=../../../lib
INCLUDEDIR=../../../include
INCLUDES=-I./. -I${INCLUDEDIR}

INCLUDEFILES=*.h
LIBSOURCES=*.c
OUTS = *.o

libs:
	echo "Compiling rtl_multiplier..."
	$(COMPILER) $(COMPILER_FLAGS) $(EXTRA_COMPILER_FLAGS) $(INCLUDES) $(LIBSOURCES)
	$(ARCHIVER) -r ${RELEASEDIR}/${LIB} ${OUTS}
	make clean

include:
	${CP} $(INCLUDEFILES) $(INCLUDEDIR)

clean:
	rm -rf ${OUTS}

According to:

Drivers created in Vivado fail in Vitis 2021.1: https://support.xilinx.com/s/question/0D52E00006ihQSXSA2/drivers-created-in-vivado-fail-in-vitis-2021
75527 - Drivers created in Create or Import Wizard in Vivado fail in Vitis: https://support.xilinx.com/s/article/75527
and damiet's answer in Vitis IDE 2021.1 custom AXI IP core compile error: https://support.xilinx.com/s/question/0D52E00006hpYgWSAU/vitis-ide-20211-custom-axi-ip-core-compile-error

Note that if we do not modify LIBSOURCES as instructed there, there will still be an error pointing to the same line, because it requires $(LIBSOURCES):

We modify the Makefile as:

COMPILER=
ARCHIVER=
CP=cp
COMPILER_FLAGS=
EXTRA_COMPILER_FLAGS=
LIB=libxil.a

RELEASEDIR=../../../lib
INCLUDEDIR=../../../include
INCLUDES=-I./. -I${INCLUDEDIR}

INCLUDEFILES=*.h
LIBSOURCES=$(wildcard *.c)
OUTS = *.o
OBJECTS = $(addsuffix .o, $(basename $(wildcard *.c)))
ASSEMBLY_OBJECTS = $(addsuffix .o, $(basename $(wildcard *.S)))

libs:
	echo "Compiling rtl_multiplier..."
	$(COMPILER) $(COMPILER_FLAGS) $(EXTRA_COMPILER_FLAGS) $(INCLUDES) $(LIBSOURCES)
	$(ARCHIVER) -r ${RELEASEDIR}/${LIB} ${OBJECTS} ${ASSEMBLY_OBJECTS}
	make clean

include:
	${CP} $(INCLUDEFILES) $(INCLUDEDIR)

clean:
	rm -rf ${OBJECTS} ${ASSEMBLY_OBJECTS}

We only modify LIBSOURCES, add OBJECTS and ASSEMBLY_OBJECTS, and replace ${OUTS} on the $(ARCHIVER) line and on the rm -rf line with ${OBJECTS} ${ASSEMBLY_OBJECTS}. If you look through the Makefiles in other directories under libsrc, such as Zynq_CPU_wrapper_hw_platform_1\zynq_fsbl\zynq_fsbl_bsp\ps7_cortexa9_0\libsrc\cpu_cortexa9_v2_11\src\Makefile, you will see where this patch comes from.
This time the build goes further, but it still fails, with two identical pop-up error windows (screenshots omitted).

By adding echo statements, we found that this error is no longer caused by the Makefile we modified before. Reading the information given by the Build Console carefully, we found the following:

'Finished building libraries'
make: Leaving directory 'D:/Documents/GitHub/ECE4810J_FA2021/Lab1/Zynq_CPU_wrapper_hw_platform_1/zynq_fsbl/zynq_fsbl_bsp'
arm-none-eabi-gcc -o fsbl.elf sd.o nand.o image_mover.o md5.o fsbl_hooks.o main.o nor.o qspi.o rsa.o ps7_init.o pcap.o fsbl_handoff.o -MMD -MP -mcpu=cortex-a9 -mfpu=vfpv3 -mfloat-abi=hard -mcpu=cortex-a9 -mfpu=vfpv3 -mfloat-abi=hard -Wl,-build-id=none -specs=Xilinx.spec -lrsa -Wl,--start-group,-lxil,-lgcc,-lc,--end-group -Wl,--start-group,-lxilffs,-lxil,-lgcc,-lc,--end-group -Wl,--start-group,-lrsa,-lxil,-lgcc,-lc,--end-group -Wl,--gc-sections -Lzynq_fsbl_bsp/ps7_cortexa9_0/lib -L./ -Tlscript.ld
Building the BSP Library for domain - standalone_ps7_cortexa9_0 on processor ps7_cortexa9_0
make --no-print-directory seq_libs
"Running Make include in ps7_cortexa9_0/libsrc/coresightps_dcc_v1_8/src"
make -C ps7_cortexa9_0/libsrc/coresightps_dcc_v1_8/src -s include "SHELL=CMD" "COMPILER=arm-none-eabi-gcc" "ASSEMBLER=arm-none-eabi-as" "ARCHIVER=arm-none-eabi-ar" "COMPILER_FLAGS=  -O2 -c" "EXTRA_COMPILER_FLAGS=-mcpu=cortex-a9 -mfpu=vfpv3 -mfloat-abi=hard -nostartfiles -g -Wall -Wextra -fno-tree-loop-distribute-patterns"

For me, it is located at Zynq_CPU_wrapper_hw_platform_1\ps7_cortexa9_0\standalone_ps7_cortexa9_0\bsp\ps7_cortexa9_0\libsrc\rtl_multiplier_v1_0\src\Makefile. Its contents are identical to those of the previous Makefile, so the same patch applies.
Interestingly, another identical Makefile, Zynq_CPU_wrapper_hw_platform_1\hw\drivers\rtl_multiplier_v1_0\src\Makefile, does not cause any errors.
After doing that, we run into the qemu_args.txt error mentioned in Programaths's answer to Vitis 2021.1 error Makefile: https://support.xilinx.com/s/question/0D52E00006hpRo8SAE/vitis-20211-error-makefile

10:48:42 **** Incremental Build of configuration Debug for project hello_rtl_multiplier_system ****
make all 
Generating bif file for the system project
generate_system_bif.bat 1453 D:/Documents/GitHub/ECE4810J_FA2021/Lab1/Zynq_CPU_wrapper_hw_platform_1/export/Zynq_CPU_wrapper_hw_platform_1/Zynq_CPU_wrapper_hw_platform_1.xpfm standalone_ps7_cortexa9_0 D:/Documents/GitHub/ECE4810J_FA2021/Lab1/hello_rtl_multiplier_system/Debug/system.bif
sdcard_gen --xpfm D:/Documents/GitHub/ECE4810J_FA2021/Lab1/Zynq_CPU_wrapper_hw_platform_1/export/Zynq_CPU_wrapper_hw_platform_1/Zynq_CPU_wrapper_hw_platform_1.xpfm --sys_config Zynq_CPU_wrapper_hw_platform_1 --bif D:/Documents/GitHub/ECE4810J_FA2021/Lab1/hello_rtl_multiplier_system/Debug/system.bif --bitstream D:/Documents/GitHub/ECE4810J_FA2021/Lab1/hello_rtl_multiplier/_ide/bitstream/Zynq_CPU_wrapper_hw_platform_1.bit --elf D:/Documents/GitHub/ECE4810J_FA2021/Lab1/hello_rtl_multiplier/Debug/hello_rtl_multiplier.elf,ps7_cortexa9_0
creating BOOT.BIN using D:/Documents/GitHub/ECE4810J_FA2021/Lab1/hello_rtl_multiplier/_ide/bitstream/Zynq_CPU_wrapper_hw_platform_1.bit
Error intializing SD boot data : Software platform XML error, sdx:qemuArguments value “Zynq_CPU_wrapper_hw_platform_1/qemu/qemu_args.txt” path does not exist D:/Documents/GitHub/ECE4810J_FA2021/Lab1/Zynq_CPU_wrapper_hw_platform_1/export/Zynq_CPU_wrapper_hw_platform_1/sw/Zynq_CPU_wrapper_hw_platform_1/qemu/qemu_args.txt, platform path D:/Documents/GitHub/ECE4810J_FA2021/Lab1/Zynq_CPU_wrapper_hw_platform_1/export/Zynq_CPU_wrapper_hw_platform_1, sdx:configuration Zynq_CPU_wrapper_hw_platform_1, sdx:image standard
make: *** [makefile:39: package] Error 1
10:48:48 Build Finished (took 5s.544ms)

Under Zynq_CPU_wrapper_hw_platform_1\export\Zynq_CPU_wrapper_hw_platform_1\sw\Zynq_CPU_wrapper_hw_platform_1, create a new folder called qemu, and inside it create a new text file called qemu_args.txt. Just leave it empty (you may find another existing qemu_args.txt under the standalone_ps7_cortexa9_0\qemu folder). A sketch of the commands follows.
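From a Windows command prompt, with the export directory above as the current directory:

    rem create the folder and an empty qemu_args.txt inside it
    mkdir qemu
    type nul > qemu\qemu_args.txt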
Finally, we solved this problem. 

ERROR: GLEW initalization error: Missing GL version

System: Ubuntu 18.04
Since there is no NVIDIA-XXX folder in my /usr/lib directory, the commonly suggested method of adding the following to the .bashrc file:

export PATH=/usr/local/cuda-10.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so:/usr/lib/nvidia-460/libGL.so

does not work.

The final solution:

Manually add the environment variable to the current run configuration (in the IDE: Run -> Edit Configurations -> Environment variables):

Variable name: LD_PRELOAD
Variable value: /usr/lib/x86_64-linux-gnu/libGLEW.so
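Equivalently, outside the IDE the variable can be set for a single run from the shell (the script name here is hypothetical):

    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so python train.py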

rsync error: error allocating core memory buffers

1. Problem description

When using rsync for file transfer, two servers transfer the same files: one succeeds and the other reports the following error. Clearly, memory is insufficient.

[root@sss085080 ~]# rsync -atvu /alauda/new/* /alauda/data/
sending incremental file list
ERROR: out of memory in flist_expand [sender]
rsync error: error allocating core memory buffers (code 22) at util2.c(106) [sender=3.1.2]
ERROR: out of memory in flist_expand [receiver]
rsync error: error allocating core memory buffers (code 22) at util2.c(106) [receiver=3.1.2]

2. Process analysis

Successful server: swap enabled (free output screenshot omitted)

Failed server: swap not enabled (screenshot omitted)

3. Solution

As the server parameters above show, the successful server has swap enabled, so swap is used when physical memory runs short. The failed server has no swap, so it throws an error directly when memory runs out. In this case, two solutions are available (see the sketch after the list):

1. Turn on swap
2. Expand the machine's physical memory
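A minimal sketch for enabling a swap file; the 4G size is an assumption, size it to the workload:

    fallocate -l 4G /swapfile    # reserve space for the swap file
    chmod 600 /swapfile          # permissions required by swapon
    mkswap /swapfile             # format it as swap space
    swapon /swapfile             # enable it immediately
    free -m                      # verify swap is now active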