Category Archives: How to Fix

VS2013 error RC2108: expected numerical dialog constant

When viewing the .rc resource file, I didn't know what went wrong. After closing it and opening it again in the Resource View, this error is reported.

I don't remember changing anything in the .rc file; it simply cannot be opened.

It is said online that you should position the cursor at the error and add the XXX code, but my cursor jumps to the first line, so that method does not work for me:
https://blog.csdn.net/liuyi1207164339/article/details/47131833

Solution: copy the content from an old .rc file: delete everything in the current .rc file and replace it with the old one. This appears to be a bug in VS2013.

RuntimeError: Unable to find a valid cuDNN algorithm to run convolution


Preface

Today, while training a model with YOLOv5 v6.0 and changing the batch size to 32, the following error occurred:

Starting training for 100 epochs...

     Epoch   gpu_mem       box       obj       cls    labels  img_size
  0%|                                                                                                                                                                         | 0/483 [00:04<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 620, in <module>
    main(opt)
  File "train.py", line 517, in main
    train(opt.hyp, opt, device, callbacks)
  File "train.py", line 315, in train
    pred = model(imgs)  # forward
  File "E:\Anaconda3\envs\yolov550\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\liufq\yolov5-6.0\models\yolo.py", line 126, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "D:\liufq\yolov5-6.0\models\yolo.py", line 149, in _forward_once
    x = m(x)  # run
  File "E:\Anaconda3\envs\yolov550\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\liufq\yolov5-6.0\models\common.py", line 137, in forward
    return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
  File "E:\Anaconda3\envs\yolov550\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\liufq\yolov5-6.0\models\common.py", line 45, in forward
    return self.act(self.bn(self.conv(x)))
  File "E:\Anaconda3\envs\yolov550\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Anaconda3\envs\yolov550\lib\site-packages\torch\nn\modules\conv.py", line 443, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "E:\Anaconda3\envs\yolov550\lib\site-packages\torch\nn\modules\conv.py", line 440, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution

Solution

Reduce the batch size.
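For example, when launching YOLOv5 training from the command line, pass a smaller value through train.py's --batch-size argument (16 below is only an illustration; use whatever fits your GPU memory):

python train.py --batch-size 16   # keep your other --data/--weights/--img arguments unchanged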

[MMCV]RuntimeError: CUDA error: no kernel image is available for execution on the device

There are two possible causes of this problem:
first, the GPU's compute capability does not match the installed PyTorch version;
second, the server mixes graphics cards with different compute capabilities.
On the first point, PyTorch builds after 1.3.0 no longer support graphics cards with compute capability lower than 3.7. You can reinstall an older version of PyTorch; the matching versions can be found at the following link:
torch, torchvision historical version download
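For instance, to fall back to a pre-1.3.0 build (the version pair below is only an example; confirm the exact pairing from the link above):

pip install torch==1.2.0 torchvision==0.4.0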
Common graphics card compute capabilities are listed below:

GPU	Compute Capability
NVIDIA TITAN RTX	7.5
Geforce RTX 2080 Ti	7.5
Geforce RTX 2080	7.5
Geforce RTX 2070	7.5
Geforce RTX 2060	7.5
NVIDIA TITAN V	7.0
NVIDIA TITAN Xp	6.1
NVIDIA TITAN X	6.1
GeForce GTX 1080 Ti	6.1
GeForce GTX 1080	6.1
GeForce GTX 1070	6.1
GeForce GTX 1060	6.1
GeForce GTX 1050	6.1
GeForce GTX TITAN X	5.2
GeForce GTX TITAN Z	3.5
GeForce GTX TITAN Black	3.5
GeForce GTX TITAN	3.5
GeForce GTX 980 Ti	5.2
GeForce GTX 980	5.2
GeForce GTX 970	5.2
GeForce GTX 960	5.2
GeForce GTX 950	5.2
GeForce GTX 780 Ti	3.5
GeForce GTX 780	3.5
GeForce GTX 770	3.0
GeForce GTX 760	3.0
GeForce GTX 750 Ti	5.0
GeForce GTX 750	5.0
GeForce GTX 690	3.0
GeForce GTX 680	3.0
GeForce GTX 670	3.0
GeForce GTX 660 Ti	3.0
GeForce GTX 660	3.0
GeForce GTX 650 Ti BOOST	3.0
GeForce GTX 650 Ti	3.0
GeForce GTX 650	3.0
GeForce GTX 560 Ti	2.1
GeForce GTX 550 Ti	2.1
GeForce GTX 460	2.1
GeForce GTS 450	2.1
GeForce GTS 450*	2.1
GeForce GTX 590	2.0
GeForce GTX 580	2.0
GeForce GTX 570	2.0
GeForce GTX 480	2.0
GeForce GTX 470	2.0
GeForce GTX 465	2.0
GeForce GT 740	3.0
GeForce GT 730	3.5
GeForce GT 730 DDR3,128bit	2.1
GeForce GT 720	3.5
GeForce GT 705*	3.5
GeForce GT 640 (GDDR5)	3.5
GeForce GT 640 (GDDR3)	2.1
GeForce GT 630	2.1
GeForce GT 620	2.1
GeForce GT 610	2.1
GeForce GT 520	2.1
GeForce GT 440	2.1
GeForce GT 440*	2.1
GeForce GT 430	2.1
GeForce GT 430*	2.1
GPU	Compute Capability
Tesla K80	3.7
Tesla K40	3.5
Tesla K20	3.5
Tesla C2075	2.0
Tesla C2050/C2070	2.0

On the second point, if the error occurs inside the MMCV framework, recompile MMCV for the compute capabilities of your graphics cards. Taking two cards with compute capabilities 6.1 and 7.5 as an example, the command is as follows:

TORCH_CUDA_ARCH_LIST="6.1;7.5" pip install mmcv-full=={mmcv_version} -f https://download.openmmlab.com/mmcv/dist/{cuda_version}/{torch_version}/index.html

Here {cuda_version} and {torch_version} are replaced with your own versions, e.g. cu101 and torch1.7.0;
for the exact correspondence, refer to the MMCV GitHub repository.
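Filling in the placeholders with the cu101/torch1.7.0 example above gives something like the following (the mmcv-full version 1.3.9 is only an assumed example; use the version your project actually needs):

TORCH_CUDA_ARCH_LIST="6.1;7.5" pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.7.0/index.html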

Error creating bean with name ‘org.springframework.security.oauth2.config.annotation.web.configurati

The following error occurs mainly because resource-server configuration has been added, but the service has not been marked as a resource server.

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.security.oauth2.config.annotation.web.configuration.ResourceServerConfiguration': Post-processing of merged bean definition failed; nested exception is java.lang.TypeNotPresentException: Type javax.servlet.Filter not present
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:579) ~[spring-beans-5.3.5.jar:5.3.5]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:524) ~[spring-beans-5.3.5.jar:5.3.5]
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.5.jar:5.3.5]
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.5.jar:5.3.5]
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.5.jar:5.3.5]
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.5.jar:5.3.5]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:944) ~[spring-beans-5.3.5.jar:5.3.5]
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918) ~[spring-context-5.3.5.jar:5.3.5]
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.5.jar:5.3.5]
	at org.springframework.boot.web.reactive.context.ReactiveWebServerApplicationContext.refresh(ReactiveWebServerApplicationContext.java:63) ~[spring-boot-2.4.4.jar:2.4.4]
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:769) [spring-boot-2.4.4.jar:2.4.4]
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:761) [spring-boot-2.4.4.jar:2.4.4]
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:426) [spring-boot-2.4.4.jar:2.4.4]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:326) [spring-boot-2.4.4.jar:2.4.4]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1313) [spring-boot-2.4.4.jar:2.4.4]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1302) [spring-boot-2.4.4.jar:2.4.4]
	at com.zsh.GatewayApplication.main(GatewayApplication.java:16) [classes/:na]
Caused by: java.lang.TypeNotPresentException: Type javax.servlet.Filter not present
	at sun.reflect.generics.factory.CoreReflectionFactory.makeNamedType(CoreReflectionFactory.java:117) ~[na:1.8.0_261]
	at sun.reflect.generics.visitor.Reifier.visitClassTypeSignature(Reifier.java:125) ~[na:1.8.0_261]
	at sun.reflect.generics.tree.ClassTypeSignature.accept(ClassTypeSignature.java:49) ~[na:1.8.0_261]
	at sun.reflect.generics.visitor.Reifier.reifyTypeArguments(Reifier.java:68) ~[na:1.8.0_261]
	at sun.reflect.generics.visitor.Reifier.visitClassTypeSignature(Reifier.java:138) ~[na:1.8.0_261]
	at sun.reflect.generics.tree.ClassTypeSignature.accept(ClassTypeSignature.java:49) ~[na:1.8.0_261]
	at sun.reflect.generics.repository.ClassRepository.getSuperInterfaces(ClassRepository.java:108) ~[na:1.8.0_261]
	at java.lang.Class.getGenericInterfaces(Class.java:913) ~[na:1.8.0_261]
	at org.springframework.core.ResolvableType.getInterfaces(ResolvableType.java:502) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.ResolvableType.as(ResolvableType.java:450) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.ResolvableType.as(ResolvableType.java:451) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.ResolvableType.as(ResolvableType.java:456) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.ResolvableType.as(ResolvableType.java:456) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.ResolvableType.forMethodParameter(ResolvableType.java:1341) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.ResolvableType.forMethodParameter(ResolvableType.java:1324) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.ResolvableType.forMethodParameter(ResolvableType.java:1291) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.ResolvableType.forMethodParameter(ResolvableType.java:1281) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.BridgeMethodResolver.isResolvedTypeMatch(BridgeMethodResolver.java:157) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.BridgeMethodResolver.isBridgeMethodFor(BridgeMethodResolver.java:141) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.BridgeMethodResolver.searchCandidates(BridgeMethodResolver.java:120) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.core.BridgeMethodResolver.findBridgedMethod(BridgeMethodResolver.java:82) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.lambda$buildResourceMetadata$1(CommonAnnotationBeanPostProcessor.java:390) ~[spring-context-5.3.5.jar:5.3.5]
	at org.springframework.util.ReflectionUtils.doWithLocalMethods(ReflectionUtils.java:324) ~[spring-core-5.3.5.jar:5.3.5]
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.buildResourceMetadata(CommonAnnotationBeanPostProcessor.java:389) ~[spring-context-5.3.5.jar:5.3.5]
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.findResourceMetadata(CommonAnnotationBeanPostProcessor.java:347) ~[spring-context-5.3.5.jar:5.3.5]
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(CommonAnnotationBeanPostProcessor.java:295) ~[spring-context-5.3.5.jar:5.3.5]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:1098) ~[spring-beans-5.3.5.jar:5.3.5]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:576) ~[spring-beans-5.3.5.jar:5.3.5]
	... 16 common frames omitted
Caused by: java.lang.ClassNotFoundException: javax.servlet.Filter
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[na:1.8.0_261]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[na:1.8.0_261]
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) ~[na:1.8.0_261]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[na:1.8.0_261]
	at java.lang.Class.forName0(Native Method) ~[na:1.8.0_261]
	at java.lang.Class.forName(Class.java:348) ~[na:1.8.0_261]
	at sun.reflect.generics.factory.CoreReflectionFactory.makeNamedType(CoreReflectionFactory.java:114) ~[na:1.8.0_261]
	... 43 common frames omitted

2021-10-31 18:29:13.998  WARN 8204 --- [       Thread-8] c.a.n.common.http.HttpClientBeanHolder   : [HttpClientBeanHolder] Start destroying common HttpClient
2021-10-31 18:29:13.998  WARN 8204 --- [       Thread-8] c.a.n.common.http.HttpClientBeanHolder   : [HttpClientBeanHolder] Destruction of the end

Process finished with exit code 1

There are two main solutions to this problem:

First: remove the resource-server configuration.
Second: add the @EnableResourceServer annotation to the startup class (see the sketch below).

Restart the application and it starts successfully.
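A minimal sketch of the second option (assuming the spring-security-oauth2 dependency that provides @EnableResourceServer is on the classpath; the class name is taken from the stack trace above):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;

// Marks this service as an OAuth2 resource server (the second fix described above)
@EnableResourceServer
@SpringBootApplication
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}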

Solution to the resource error prompt when opening DAZ Studio

If you install DAZ Studio through DAZ Central and then open it, you are prompted with a resource error:

A valid PostgreSQL CMS connection could not be established. Several DAZ Studio features that require a valid PostgreSQL CMS connection, such as context aware content views and loading content installed using the Daz Connect service, will not be available. Check your network, anti-virus, and firewall settings for conflicts.

I solved it with the following steps; you can try them too.

1. In DAZ Central, select Uninstall to uninstall DAZ Studio. After the uninstall succeeds, click Install to reinstall it.

2. After reinstalling, click Open to launch DAZ Studio, then click the icon on the left of the viewport. A menu pops up; click its second item, Content DB Maintenance, and a small window opens.

3. In that window, check the Reset Database option and click Accept to reset the database.

4. Close DAZ Studio and reopen it. It now shows the welcome window with the login prompt that did not appear before. After logging in, click Next in the lower right corner of the window, and the resource error prompt no longer appears.

Could not find method causes a VerifyError, which in turn causes a crash

On Android 5.0 and below, a "Could not find method" warning sometimes turns into a VerifyError, which crashes the app. The code was written roughly as follows (the original post showed it as a screenshot):
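A hypothetical reconstruction of that screenshot, based on the description and the log below (not the author's exact code); Glide is declared compileOnly, so com.bumptech.glide.Glide is not packaged into the APK:

import android.content.Context
import android.util.Log
import com.bumptech.glide.Glide

fun test1() {
    Log.e("Test", "test1")   // only test1 is actually called at runtime
}

fun test2(context: Context) {
    Glide.with(context)      // static reference to a class that is missing at runtime
}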

 

As shown above, calling the static method test1 on an Android 4.4 phone reports the following error:

  10-28 16:02:40.913 2792-2792/com.example.myapplication I/dalvikvm: Could not find method com.bumptech.glide.Glide.with, referenced from method com.example.myapplication.TestKt.test2
10-28 16:02:40.913 2792-2792/com.example.myapplication W/dalvikvm: VFY: unable to resolve static method 4: Lcom/bumptech/glide/Glide;. with (Landroid/content/Context;)Lcom/bumptech/glide/RequestManager;

The reason is that before Android 5.0 the runtime is the Dalvik VM, which resolves the other static references in a class when a static method of that class is invoked. So when test1 is called, the Glide reference in test2 is also resolved; at that point the with method cannot be found, and the failed resolution blows up as a VerifyError.

Solution: although compileOnly is used here intentionally, behaviour differs across phone brands, and on some of them the missing dependency causes an error. Therefore, change the places in test2 that need the missing class to dynamic loading, i.e. reflection. test2 is still loaded when test1 is called, but the class itself is only loaded when the reflection call actually executes, so calling test1 no longer triggers the could-not-find-method problem.

An example of the solution is as follows:

import android.content.Context
import android.util.Log
import java.lang.reflect.Method

public fun test2(context: Context) {
    try {
        Log.e("Test", "test2")
        // Load Glide reflectively so the class is only resolved when this code actually runs
        val clazz = Class.forName("com.bumptech.glide.Glide")
        // Look up the overload that takes a Context; context.javaClass would name the concrete
        // subclass and therefore not match any Glide.with overload
        val getMethod: Method = clazz.getMethod("with", Context::class.java)
        getMethod.invoke(null, context)
    } catch (e: Exception) {
        Log.e("Test", "test2 e " + e.message)
    }
}

Note: the above error does not occur on Android 5.0 and above; that should be down to the difference between the Dalvik VM and the ART VM.

npm ERR Error: EPERM:operation not permitted, rename

npm ERR! Error: EPERM: operation not permitted, rename

Problem background: recently there was a problem packaging the project. I thought it was caused by the Node version, so I switched back and forth between several versions, which then broke npm.

    When running npm install, the following error is reported. There are many possible causes; the Node version and the Windows version are so intertwined that I could not pin down the exact one (I hope someone more experienced can).
    In addition, the moment this problem appears also varies: sometimes during installation, sometimes during npm run dev, and sometimes during packaging.
    After hitting the problem, the first step is to search for solutions. The commonly suggested ones are roughly the following:

    Run CMD as administrator

    Because the system says the permission is insufficient and I am currently an ordinary user, running as administrator is a plausible idea.
    see: https://blog.csdn.net/Running_Fe/article/details/81629330

    Delete the .npmrc file in the user directory

    This approach solves the problem in some cases

    Clear the cache and reinstall: npm cache clean --force, then npm install

    see: https://www.cnblogs.com/maycpou/p/12080814.html

    Delete the file mentioned in the error message

    Some people say that this solution can solve the problem, but I can’t find the file
    see: https://blog.csdn.net/LJJONESEED/article/details/119926728

    Close all editors that reference the current project

    Because the error message says "the current file may be open in another editor", close the editor, clear the cache, and then try again

    Finally, here is a Stack Overflow post whose discussion is very lively and offers many approaches; take a look if you are interested

    see: https://stackoverflow.com/questions/39293636/npm-err-error-eperm-operation-not-permitted-rename#

    Note: I still hope someone can explain the real cause of this problem; much appreciated.

error: XML error: target ‘vdb‘ duplicated for disk sources ‘aaa.img‘ and ‘bbb.img‘

On a Sunday morning in the mood to learn something, I tried adding a disk to a KVM virtual machine from the command line.
Create a disk:
# qemu-img create -f qcow2 /home/kvm-fs/sy-b80915disk1.qcow2 10G

Attach the disk to domain sy-b80915:
# virsh attach-disk sy-b80915 /home/kvm-fs/sy-b80915disk1.qcow2 vdb --live --config

Later, when trying to unbind the disk, I accidentally detached the main disk vda:
# virsh detach-disk sy-b80915 vda --live --config

Then I detached vdb, i.e. sy-b80915disk1.qcow2:
# virsh detach-disk sy-b80915 vdb --live --config

The virtual machine could still be restarted and used normally afterwards, but I did not like the name sy-b80915disk1.qcow2, so I deleted it and recreated it as sy-b80915vdb.qcow2:
# rm -rf /home/kvm-fs/sy-b80915disk1.qcow2
# qemu-img create -f qcow2 /home/kvm-fs/sy-b80915vdb.qcow2 10G

Then attach it again:
# virsh attach-disk sy-b80915 /home/kvm-fs/sy-b80915vdb.qcow2 vdb --live --config
The result is:
error: XML error: target 'vdb' duplicated for disk sources 'sy-b80915disk1.img' and 'sy-b80915vdb.img'
The message roughly means the target is bound twice, yet it had already been detached earlier.
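To see what the domain currently has attached, you can list its block devices and inspect its XML, for example:

# virsh domblklist sy-b80915
# virsh dumpxml sy-b80915 | grep -A 5 '<disk'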

The only odd thing is that the main disk vda had been accidentally detached, yet the system still ran. So I checked the XML file and, comparing it with other virtual machines, found that the XML of sy-b80915 was missing the main disk vda: detaching vda had modified the XML. So add the vda definition back to the XML file.
Execute the following command to edit the XML file:
# virsh edit sy-b80915
and repair the XML definition of vda (shown as Figure 1 in the original post; the screenshot is not reproduced here).
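For reference, a typical virtio disk definition for vda looks roughly like the following (the source path here is hypothetical; use the actual path of the original main-disk image):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/home/kvm-fs/sy-b80915.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>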

Then attach sy-b80915vdb.qcow2 again, and this time it succeeds:
# virsh attach-disk sy-b80915 /home/kvm-fs/sy-b80915vdb.qcow2 vdb --live --config

Error response from daemon: OCI runtime create failed: container_linux.go:380


For building your own multi-GPU server, see https://blog.csdn.net/landian0531/article/details/120242839

Cause of the error

An unexpected power failure caused the Ubuntu server to restart, after which the containers in Docker could not be started with the docker ps -aq | xargs -I {} docker start {} command.

The error is as follows:

gpu@gpu-workstation:~$ docker ps -aq | xargs -I {} docker start {}
Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #1:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: nvml error: driver not loaded: unknown
Error: failed to start containers: 485f0e25b37c
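The key part is "nvml error: driver not loaded". A quick way to check this (before deciding what to remove) is to confirm whether the NVIDIA driver is actually loaded under the kernel the machine booted into, for example:

nvidia-smi   # fails if the NVIDIA kernel module is not loaded
uname -r     # shows which kernel version is currently running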

Solution: remove the newly installed kernel

View the installed kernel packages with dpkg --get-selections | grep linux

gpu@gpu-workstation:~$ dpkg --get-selections | grep linux
binutils-x86-64-linux-gnu                       install
console-setup-linux                             install
libnvpair1linux                                 install
libselinux1:amd64                               install
libuutil1linux                                  install
libzfs2linux                                    install
libzpool2linux                                  install
linux-base                                      install
linux-firmware                                  install
linux-generic                                   install
linux-headers-5.4.0-88                          install
linux-headers-5.4.0-88-generic                  hold
linux-headers-5.4.0-89                          install
linux-headers-5.4.0-89-generic                  install
linux-headers-generic                           install
linux-image-5.4.0-88-generic                    hold
linux-image-5.4.0-89-generic                    install
linux-image-generic                             install
linux-libc-dev:amd64                            install
linux-modules-5.4.0-88-generic                  hold
linux-modules-5.4.0-89-generic                  install
linux-modules-extra-5.4.0-88-generic            hold
linux-modules-extra-5.4.0-89-generic            install
util-linux                                      install
zfsutils-linux                                  install

The output shows that kernel 5.4.0-89 was installed automatically; presumably the machine booted into this new kernel after the power failure while the NVIDIA driver module had only been built for 5.4.0-88, hence the "driver not loaded" error. Remove the new kernel with the sudo apt-get purge linux-image-5.4.0-89-generic command.
A prompt appears partway through; I chose Cancel there. (Note: removing a kernel is risky and you need to weigh it yourself.)

After deletion, restart the server

gpu@gpu-workstation:~$ sudo apt-get purge linux-image-5.4.0-89-generic
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  amd64-microcode intel-microcode iucode-tool libdbus-glib-1-2 libevdev2 libimobiledevice6 libplist3 libupower-glib3 libusbmuxd6 linux-headers-generic thermald upower usbmuxd
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  linux-image-unsigned-5.4.0-89-generic
Suggested packages:
  fdutils linux-doc | linux-source-5.4.0 linux-tools
The following packages will be REMOVED:
  linux-generic* linux-image-5.4.0-89-generic* linux-image-generic* linux-modules-extra-5.4.0-89-generic*
The following NEW packages will be installed:
  linux-image-unsigned-5.4.0-89-generic
0 upgraded, 1 newly installed, 4 to remove and 39 not upgraded.
Need to get 9,011 kB of archives.
After this operation, 202 MB disk space will be freed.
Do you want to continue?[Y/n] y
Get:1 http://ca.archive.ubuntu.com/ubuntu focal-updates/main amd64 linux-image-unsigned-5.4.0-89-generic amd64 5.4.0-89.100 [9,011 kB]
Fetched 9,011 kB in 4s (2,522 kB/s)
(Reading database ... 113040 files and directories currently installed.)
Removing linux-generic (5.4.0.89.93) ...
Removing linux-image-generic (5.4.0.89.93) ...
Removing linux-modules-extra-5.4.0-89-generic (5.4.0-89.100) ...
Removing linux-image-5.4.0-89-generic (5.4.0-89.100) ...
W: Removing the running kernel
I: /boot/vmlinuz is now a symlink to vmlinuz-5.4.0-88-generic
I: /boot/initrd.img is now a symlink to initrd.img-5.4.0-88-generic
/etc/kernel/postrm.d/initramfs-tools:
update-initramfs: Deleting /boot/initrd.img-5.4.0-89-generic
/etc/kernel/postrm.d/zz-update-grub:
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.0-88-generic
Found initrd image: /boot/initrd.img-5.4.0-88-generic
Adding boot menu entry for UEFI Firmware Settings
done
Selecting previously unselected package linux-image-unsigned-5.4.0-89-generic.
(Reading database ... 107660 files and directories currently installed.)
Preparing to unpack .../linux-image-unsigned-5.4.0-89-generic_5.4.0-89.100_amd64.deb ...
Unpacking linux-image-unsigned-5.4.0-89-generic (5.4.0-89.100) ...
Setting up linux-image-unsigned-5.4.0-89-generic (5.4.0-89.100) ...
I: /boot/vmlinuz is now a symlink to vmlinuz-5.4.0-89-generic
I: /boot/initrd.img is now a symlink to initrd.img-5.4.0-89-generic
(Reading database ... 107663 files and directories currently installed.)
Purging configuration files for linux-modules-extra-5.4.0-89-generic (5.4.0-89.100) ...
Purging configuration files for linux-image-5.4.0-89-generic (5.4.0-89.100) ...
I: /boot/vmlinuz is now a symlink to vmlinuz-5.4.0-88-generic
I: /boot/initrd.img is now a symlink to initrd.img-5.4.0-88-generic
/var/lib/dpkg/info/linux-image-5.4.0-89-generic.postrm ... removing pending trigger
rmdir: failed to remove '/lib/modules/5.4.0-89-generic': Directory not empty
Processing triggers for linux-image-unsigned-5.4.0-89-generic (5.4.0-89.100) ...
gpu@gpu-workstation:~$

Install additional stage package for StreamSets – error installing the CDH 6.3.0 package: REST API call error: java.io.EOFException

Versions
StreamSets 3.16.1 (core)
CDH 6.3.2

1. Problem

The StreamSets installation package is streamsets-datacollector-core-3.16.1.tgz; after installing it, an error is reported when downloading the CDH 6.3 package.

1. Operation

Installing the CDH 6.3.0 package through the StreamSets UI reports an error.

Click Show Error to see the details.

2. Complete error message

java.io.EOFException
	at org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream.read(GzipCompressorInputStream.java:303)
	at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.read(TarArchiveInputStream.java:608)
	at java.io.InputStream.read(InputStream.java:101)
	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1792)
	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1769)
	at org.apache.commons.io.IOUtils.copy(IOUtils.java:1744)
	at com.streamsets.datacollector.restapi.StageLibraryResource.installLibraries(StageLibraryResource.java:363)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
	at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:160)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
	at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
	at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:760)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
	at com.streamsets.datacollector.http.GroupsInScopeFilter.lambda$doFilter$0(GroupsInScopeFilter.java:82)
	at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:34)
	at com.streamsets.datacollector.http.GroupsInScopeFilter.doFilter(GroupsInScopeFilter.java:81)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
	at com.streamsets.datacollector.restapi.rbean.rest.RestResourceContextFilter.doFilter(RestResourceContextFilter.java:42)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
	at org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:310)
	at org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:264)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
	at com.streamsets.datacollector.http.LocaleDetectorFilter.doFilter(LocaleDetectorFilter.java:39)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
	at org.eclipse.jetty.servlets.HeaderFilter.doFilter(HeaderFilter.java:117)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
	at com.streamsets.pipeline.http.MDCFilter.doFilter(MDCFilter.java:47)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:501)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1592)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1296)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1562)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1211)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
	at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
	at org.eclipse.jetty.server.Server.handle(Server.java:500)
	at com.streamsets.lib.security.http.LimitedMethodServer.handle(LimitedMethodServer.java:41)
	at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:386)
	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:562)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:378)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
	at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:388)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
	at java.lang.Thread.run(Thread.java:748)

3. Key error in sdc.log

2021-11-02 11:02:04,278 [user:admin] [pipeline:] [runner:] [thread:webserver-48] [stage:] INFO  StageLibraryResource - Installing stage library streamsets-datacollector-cdh_6_3-lib from http://archives.streamsets.com/datacollector/3.16.1/tarball/streamsets-datacollector-cdh_6_3-lib-3.16.1.tgz
2021-11-02 11:21:13,324 [user:admin] [pipeline:] [runner:] [thread:webserver-48] [stage:] ERROR ExceptionToHttpErrorProvider - REST API call error: java.io.EOFException
java.io.EOFException
        at org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream.read(GzipCompressorInputStream.java:303)
        at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.read(TarArchiveInputStream.java:608)
        at java.io.InputStream.read(InputStream.java:101)
        at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1792)
        at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1769)

2. Locating the cause

1. Idea A

According to the log, streamsets-datacollector-cdh_6_3-lib is downloaded from http://archives.streamsets.com/datacollector/3.16.1/tarball/streamsets-datacollector-cdh_6_3-lib-3.16.1.tgz.
Visiting this address in a browser shows it can be downloaded, so try downloading the package manually and putting it into the target directory.

Connecting to the server over SSH shows the network is very slow, so I downloaded the package on a PC instead. After unpacking, there are jar files under streamsets-datacollector-3.16.1\streamsets-libs\streamsets-datacollector-cdh_6_3-lib\lib. I copied them into StreamSets' download target directory streamsets-datacollector-3.16.1/streamsets-libs/streamsets-datacollector-cdh_6_3-lib/lib, but the UI still did not list the package as installed.

This approach is not great anyway: besides copying the jars, the installation process probably writes some additional configuration that cannot easily be reproduced by hand. Time for a different idea.

2. Idea B (works)

The installed package is the core edition, so consider reinstalling the full edition instead: download streamsets-datacollector-all-3.22.3.tgz and follow the installation steps in the official documentation (a rough outline is sketched below).
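Assuming the streamsets-datacollector-all-3.22.3.tgz archive has already been downloaded, the tarball install is roughly the following (directory names follow the pattern seen above; treat this as a sketch and defer to the official docs):

tar -xzf streamsets-datacollector-all-3.22.3.tgz -C /opt
/opt/streamsets-datacollector-3.22.3/bin/streamsets dc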

After reinstalling, you can see that the required CDH package is already included.

References

Official documentation

Syntax Error: Error: Node Sass version 6.0.1 is incompatible with ^4.0.0.

Problem: version mismatch between node-sass and what the build expects (^4.0.0).
Solution:

    Uninstall node-sass:

    npm uninstall node-sass

    Install version 4.14.1:

    npm install node-sass@4.14.1

    Network problems may occur during installation; the Taobao mirror can be used. Single use is recommended rather than permanently switching to the Taobao registry:

    npm install --registry=https://registry.npm.taobao.org

    If it still cannot be installed, you can find the corresponding version numbers in the package.json file and edit them directly (see the snippet below). The versions I use here are
    node-sass: 4.14.1
    sass-loader: 7.1.3
    Usually the IDE will offer to update the dependencies in the lower right corner; otherwise delete the node_modules folder and reinstall.
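    If you edit package.json directly, the relevant entries would look roughly like this (versions taken from above; the rest of the file is omitted):

    "devDependencies": {
      "node-sass": "^4.14.1",
      "sass-loader": "^7.1.3"
    }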
