
ubuntu docker dm_task_run failed error [How to Solve]

Problem Description:

[root@yisu-6144357b1dcaa docker]# docker rm 7226093de996
Error response from daemon: container 7226093de99632756b9b35caadb98e0db783aa3db04fc39d7d8323250b24445d: driver "devicemapper" failed to remove root filesystem: failed to remove device be2f5fe82905d8d1550717f86a0e72274e58d238331739da3bd15b86843e9444: devicemapper: Error running DeleteDevice dm_task_run failed

Solution:
1. systemctl stop docker
2. thin_check /var/lib/docker/devicemapper/devicemapper/metadata
3. thin_check --clear-needs-check-flag /var/lib/docker/devicemapper/devicemapper/metadata
4. systemctl start docker
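If the metadata check succeeds, Docker should start cleanly and the stuck container can be removed; a quick hedged check, reusing the container ID from the error above:

docker info | grep -i "storage driver"   # should report devicemapper with no errors
docker rm 7226093de996                   # retry removing the original container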

 

[Solved] Error creating bean with name 'braveHttpServerHandler' defined in class path

Starting the microservice reports an error.

Interim solution: comment out the distributed tracing (Brave) dependency first, and add it back later.

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2021-12-05 10:05:07.355 ERROR 608 --- [  restartedMain] o.s.b.d.LoggingFailureAnalysisReporter   :
Description:
Failed to configure a DataSource: ‘url’ attribute is not specified and no embedded datasource could be configured.
Reason: Failed to determine a suitable driver class

Action:
Consider the following:
If you want an embedded database (H2, HSQL or Derby), please put it on the classpath.
If you have database settings to be loaded from a particular profile you may need to activate it (no profiles are currently active).

Solution: add the datasource configuration in the configuration center, or comment out the database dependency (here the datasource is added directly, as below). If the configuration center is supposed to supply the datasource, check whether that configuration has been commented out there.

spring:
  application:
    name: update-service
  cloud:
    config:
      uri: http://localhost:9007
      profile: default
      label: master
  config:
    import: optional:configserver:http://localhost:9007
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/bill-manager?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=GMT
    username: root
    password: 123456
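If the datasource settings are supposed to come from the config server instead, a quick hedged check that the server actually serves them (service name, profile, and port taken from the YAML above):

curl http://localhost:9007/update-service/default   # the response should include the datasource properties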

com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server

The Eureka registry address was not specified; configure it in the local application.yml or in the one stored in the config repository, as sketched below.
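A hedged sketch of the missing piece in application.yml (the address localhost:8761 is an assumption; use the actual Eureka server address):

eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka/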

Attempting to connect to: [127.0.0.1:5672]

The RabbitMQ broker is not running (5672 is its AMQP port).

Solution: install and start RabbitMQ.
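A hedged sketch for a systemd-based host (package names vary by distribution):

sudo yum install rabbitmq-server              # Debian/Ubuntu: sudo apt-get install rabbitmq-server
sudo systemctl enable --now rabbitmq-server   # start the broker and enable it at boot
sudo systemctl status rabbitmq-server         # verify it is listening on port 5672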

[Solved] Redis Startup Error: FATAL CONFIG FILE ERROR

1. Redis startup error: Reading the configuration file, at line 194 >>> 'always-show-logo yes' Bad directive or wrong number of arguments
Error Messages:

[root@xxx-0001 src]# redis-server /etc/redis-cluster/redis-7001.conf
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 194
>>> 'always-show-logo yes'
Bad directive or wrong number of arguments

Cause analysis:

The error means that the wrong configuration file was specified, or a directive in the configuration file has the wrong number of arguments.
The actual cause here: redis 4.0.8 was installed first, and its redis-server was added to the environment variables (PATH). When redis-server is executed, the shell resolves it through PATH and finds the old 4.0.8 binary, while the configuration file being loaded is written for redis 5.0.
In short: the redis-server on PATH was imported from the previous version, so after changing the redis version, the old binary found via the environment variable cannot parse the new configuration file.

Solution:

From this, the solution is clear:
Method 1: re-import the new version's redis-server into the environment variable (PATH); see the sketch below.
Method 2: invoke the redis-server binary of the new version directly when starting.
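A hedged sketch of method 1 (the path /opt/redis-5.0.7/src is an assumption; use wherever the new build actually lives):

export PATH=/opt/redis-5.0.7/src:$PATH   # put the new redis-server first on PATH
which redis-server                       # should now resolve to the 5.0.7 binary
redis-server --version                   # confirm before restarting the cluster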

Finally, the startup output after applying the fix:

[root@xxx-0001 src]# ./redis-server /etc/redis-cluster/redis-7001.conf
27895:C 06 Dec 2021 13:09:29.818 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
27895:C 06 Dec 2021 13:09:29.818 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=27895, just started
27895:C 06 Dec 2021 13:09:29.818 # Configuration loaded
[root@apm-0003 src]# ./redis-server /etc/redis-cluster/redis-7002.conf
27952:C 06 Dec 2021 13:09:37.218 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
27952:C 06 Dec 2021 13:09:37.218 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=27952, just started
27952:C 06 Dec 2021 13:09:37.218 # Configuration loaded
[root@apm-0003 src]# ./redis-server /etc/redis-cluster/redis-7003.conf
27996:C 06 Dec 2021 13:09:40.829 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
27996:C 06 Dec 2021 13:09:40.829 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=27996, just started
27996:C 06 Dec 2021 13:09:40.829 # Configuration loaded
[root@apm-0003 src]# ./redis-server /etc/redis-cluster/redis-7004.conf
28021:C 06 Dec 2021 13:09:43.651 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
28021:C 06 Dec 2021 13:09:43.651 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=28021, just started
28021:C 06 Dec 2021 13:09:43.651 # Configuration loaded
[root@apm-0003 src]# ./redis-server /etc/redis-cluster/redis-7005.conf
28065:C 06 Dec 2021 13:09:46.736 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
28065:C 06 Dec 2021 13:09:46.737 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=28065, just started
28065:C 06 Dec 2021 13:09:46.737 # Configuration loaded
[root@apm-0003 src]# ./redis-server /etc/redis-cluster/redis-7006.conf
28124:C 06 Dec 2021 13:09:50.963 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
28124:C 06 Dec 2021 13:09:50.963 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=28124, just started
28124:C 06 Dec 2021 13:09:50.963 # Configuration loaded
[root@xxx-0001 src]# ps -ef|grep redis
root      6227     1  0 12:35 ?       00:00:04 redis-server 0.0.0.0:6379
root     27896     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7001 [cluster]
root     27953     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7002 [cluster]
root     27998     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7003 [cluster]
root     28022     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7004 [cluster]
root     28066     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7005 [cluster]
root     28125     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7006 [cluster]
root     28276  4581  0 13:10 pts/4    00:00:00 grep --color=auto redis

[Solved] k8s error retrieving resource lock default/fuseim.pri-ifs: Unauthorized

When installing Prometheus with helm, the nfs-client-provisioner ServiceAccount was deployed in the default namespace and ran into the error in the title.

[hadoop@hadoop03 NFS]$ vim nfs-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  #namespace: nfs-client

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]   ## Deploy to the default namespace to report an error title error
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io


Check the logs of the nfs-client-provisioner pod:

kubectl logs nfs-client-provisioner-764f44f754-wdtqp

E1206 08:52:27.293890       1 leaderelection.go:234] error retrieving resource lock default/fuseim.pri-ifs: endpoints "fuseim.pri-ifs" is forbidden: User "system:serviceaccount:default:nfs-client-provisioner" cannot get resource "endpoints" in API group "" in the namespace "default"

Modify the ClusterRole to grant the needed permissions:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"] ### 把权限修改为这个(default namespace)

error: src refspec master does not match any [How to Solve]

Summary

In short, run the following commands in order.

First enter:

git push -u origin main

Then enter

 git config http.sslVerify "false"

Then enter

git push -u origin main

Problem background

When uploading a project to GitHub, I entered the command:

git push -u origin master

which reported the error:

error: src refspec master does not match any
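As an aside (not from the original post), a quick hedged way to see what the local branch is actually called before trying fixes:

git branch            # lists local branches: main vs master
git log --oneline -1  # confirms there is at least one commit to push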

First attempt

I found an answer online: this error usually means the local branch master does not exist (newer Git versions name the default branch main), so push main instead:

git push -u origin main

After entering it, a different error was reported:

OpenSSL SSL_read: Connection was reset, errno 10054

Try again

Another answer was found:

 git config http.sslVerify "false"

No error was reported.
Then enter again:

git push -u origin main

This time the upload succeeded.

[Solved] xacro: error: expected exactly one input file as argument

xacro: error: expected exactly one input file as argument RLException: Invalid tag: Cannot load command parameter [robot_description]: command ……
Param xml is <param command="$(find xacro)/xacro $(find rot_cararm)/urdf/robot_base
/base .urdf.xacro" name="robot_description"/>
The traceback for the exception was written to the log file

Reason: there is an extra space in the file name ("base .urdf.xacro" should be "base.urdf.xacro").
After removing the space, it runs.
Note that file and directory names with spaces cannot be used anywhere in the workspace path containing the ROS package.
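For reference, a hedged sketch of the corrected <param> tag (package and path names taken from the error message above):

<param name="robot_description"
       command="$(find xacro)/xacro $(find rot_cararm)/urdf/robot_base/base.urdf.xacro"/>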

[Solved] STM32F4 MDK5 Software Simulation Error: error 65: no 'read' permission


Problem description

CPU: stm32f407
The MDK5 software simulator reports missing read/write permissions and the program can only be single-stepped. The messages are as follows:

*** error 65: access violation at 0x40023800 : no 'read' permission
*** error 65: access violation at 0x40023800 : no 'write' permission
*** error 65: access violation at 0x40023808 : no 'write' permission
*** error 65: access violation at 0x40023800 : no 'read' permission
*** error 65: access violation at 0x40023800 : no 'write' permission

The root cause: the memory map permissions are wrong. Some address ranges lack read/write permission in the simulator's memory map, so the program cannot run freely.

Solution:

Add the missing memory map permission mappings.
According to various online resources, there are three methods:
1. Modify the Debug tab in the project configuration options. Verified to work for STM32F1-series chips; it does not work for the F4 series.
2. Modify the memory map permissions directly on the debug page. The program then runs normally, but the mapping is reset whenever you exit debugging, which is inconvenient.
3. Add a proper initialization file in the Debug tab of the project configuration options (recommended).

Method 3:

Create a new "debug.ini" file in the project directory and add the memory map permission mappings to it:

map 0x40000000, 0x40007FFF read write // APB1
map 0x40010000, 0x400157FF read write // APB2
map 0x40020000, 0x4007FFFF read write // AHB1
map 0x50000000, 0x50060BFF read write // AHB2
map 0x60000000, 0x60000FFF read write // AHB3
map 0xE0000000, 0xE00FFFFF read write // CORTEX-M4 internal peripherals
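Then point the Debug tab of the project configuration options at this file as the Initialization File (on the simulator side), so the mappings are applied every time a simulation session starts; the exact field placement may vary slightly between µVision versions.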

The specific map address permission mapping can also be viewed in the memory map option in the debugging interface.

Method 2:

On the debug page, open the Memory Map option under the Debug tab and add the address mappings indicated by the error messages.

Method 1:

Configure the relevant parameters in the Debug tab according to the chip model used in the project.

Mac install lightgbm and xgboost Error [How to Solve]

Environment: macOS + Anaconda3 + Python 3.8.8

The two ensemble-learning packages lightgbm and xgboost installed successfully, but importing them always failed:

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-5-b18b3f8a6dc8> in <module>
----> 1 import lightgbm

dlopen(/Users/kumahiroshi/opt/anaconda3/lib/python3.8/site-packages/lightgbm/lib_lightgbm.so, 6): Library not loaded: /usr/local/opt/libomp/lib/libomp.dylib
  Referenced from: /Users/kumahiroshi/opt/anaconda3/lib/python3.8/site-packages/lightgbm/lib_lightgbm.so
  Reason: image not found

The import always fails with OSError ... Reason: image not found.

After some research, it turns out the fix is simply to run the following in the terminal and then import again:


brew install libomp

Done!
This requires Homebrew; if it is not installed, the current official install command is:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
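After installing libomp, a quick hedged check from the terminal that the import now works:

python -c "import lightgbm, xgboost; print(lightgbm.__version__, xgboost.__version__)"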

[Solved] Hive Query Error: java.net.ConnectException: Call to port 8032 failed on connection exception: Connection refused

The following errors occur when executing query statements in Hive:

ERROR : Job Submission failed with exception 'java.net.ConnectException(Call From ************ failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused)'
java.net.ConnectException: Call From *************** to ****************:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.GeneratedConstructorAccessor34.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
	at org.apache.hadoop.ipc.Client.call(Client.java:1491)
	at org.apache.hadoop.ipc.Client.call(Client.java:1388)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy85.getNewApplication(Unknown Source)
	at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:274)
	at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy86.getNewApplication(Unknown Source)
	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:270)
	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:278)
	at org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:196)
	at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:271)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:157)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:423)
	at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:149)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224)
	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:700)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:804)
	at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:421)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1606)
	at org.apache.hadoop.ipc.Client.call(Client.java:1435)
	... 54 more

ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Call From ****** to hadoop102:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Analyzing the problem: the connection to port 8032 keeps failing. 8032 is the YARN ResourceManager port configured for this cluster, and it turned out that the YARN service had simply not been started. So enter the following command:

start-yarn.sh

After starting YARN, execute the query statement again and it succeeds.
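A quick hedged check that the ResourceManager is actually up before re-running the query:

jps | grep -i resourcemanager   # the YARN ResourceManager process should be listed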

fatal error: libusb.h: No such file or directory [How to Solve]

1. Error Messages:
In file included from /home/joes/jiao/ROS_Project/09_catkin_laser_Project/src/laser_myself/src/sick_tim310_1130000m01.cpp:2:0:
/home/joes/jiao/ROS_Project/09_catkin_laser_Project/src/laser_myself/include/laser_myself/sick_tim_common_mockup.h:42:20: fatal error: libusb.h: No such file or directory
compilation terminated.

 

2. Solution:
The original program header file is: #include <libusb.h>
Modify to: #include <libusb-1.0/libusb.h>
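The renamed include resolves against the libusb-1.0 development package; if the header is still missing, install the package first (a hedged sketch for Debian/Ubuntu; package names differ on other distributions):

sudo apt-get install libusb-1.0-0-dev   # provides /usr/include/libusb-1.0/libusb.h
ls /usr/include/libusb-1.0/libusb.h     # confirm the path used by the new include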

error: resource android:attr/lStar not found? [How to Solve]

Solution

// Modules created earlier default to appcompat:1.3.1, which corresponds to androidx.core:core:1.5.0
implementation 'androidx.appcompat:appcompat:1.3.1'

// A module created today uses the updated IDE template with 1.4.0, corresponding to core:1.7.0
implementation 'androidx.appcompat:appcompat:1.4.0'

Changing appcompat back to 1.3.1 makes the project build and run again (androidx.core:core 1.7.0 requires compileSdk 31, and android:attr/lStar only exists from API 31).
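An alternative (a hedged sketch, assuming the project can target API 31): keep appcompat 1.4.0 and raise the compile SDK instead, in the module's build.gradle (compileSdk replaces compileSdkVersion on recent Android Gradle Plugin versions):

android {
    compileSdk 31   // android:attr/lStar is available from API 31
}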

IIS Web Deploy Website Error: HTTP error 500.19 – Internal Server Error

HTTP error 500.19-Internal Server Error
The requested page cannot be accessed because the relevant configuration data for the page is invalid.

Detailed error information:

Module: IIS Web Core
Notice: BeginRequest
Error code: 0x800700b7
Configuration error: Cannot add duplicate collection entry of type 'add' with unique key attribute 'name' set to 'ScriptHandlerFactory'
Cause: the child site inherits the parent site's web.config, so entries such as ScriptHandlerFactory get registered twice. The fix:

  <location path="." allowOverride="true" inheritInChildApplications="false">
  /* Wrap <system.web> with location in the web.config file
    </system.web>
*/
// If it doesn't work, wrap all configurations with the <add> attribute in location
 
  </location>

For example, system.web, system.webServer, ApplicationConfiguration and other configuration sections that contain <add> items each need to be wrapped only once.

This prevents the child site from inheriting the parent site's web.config.
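A fuller hedged sketch of the final layout (the section contents are illustrative):

<configuration>
  <location path="." allowOverride="true" inheritInChildApplications="false">
    <system.web>
      <!-- handlers such as ScriptHandlerFactory defined here are no longer duplicated into child sites -->
    </system.web>
    <system.webServer>
      <!-- IIS-level modules and handlers likewise stay out of child applications -->
    </system.webServer>
  </location>
</configuration>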