Category Archives: Error

[Solved] error during connect: This error may indicate that the docker daemon is not running

My screenshot tool's shortcut is Ctrl+Q, and Ctrl+Q is also the shortcut that quits Docker Desktop. So when I pressed Ctrl+Q, Docker Desktop exited, and the next docker command I entered in the console

produced this error:

error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json": open //./pipe/docker_engine: The system cannot find the file specified.

Solution:
Reopen Docker Desktop.

When the icon in the lower-left corner shows the same color as in the picture, Docker is running normally.
Then I go back to CMD, enter the docker command again,

and see that there is no error.
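
If you want to confirm that the daemon is reachable before doing anything else, a quick status check also works. These are standard Docker CLI commands, not something specific to this post:

docker version
docker info

While Docker Desktop is stopped, both commands report the same named-pipe error on Windows; once it is running again they print the server details.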

[Solved] Golang Error: fatal error: concurrent map writes

The specific code is as follows:

package main

import (
	"fmt"
	"time"
)

// m is a plain built-in map shared by all goroutines.
var m = make(map[int]int, 10)

// solution computes n! and writes the result into the shared map
// without any synchronization.
func solution(n int) {
	res := 1
	for i := 1; i <= n; i++ {
		res = res * i
	}
	m[n] = res // unsynchronized write from many goroutines
}

func main() {
	// start 200 goroutines that all write to m concurrently
	for i := 1; i <= 200; i++ {
		go solution(i)
	}
	time.Sleep(time.Second * 10)
	for ind, val := range m {
		fmt.Printf("[%d] = %d \n", ind, val)
	}
}

The following error occurred:

fatal error: concurrent map writes
fatal error: concurrent map writes




runtime.mapassign_fast64(0x10b7760, 0xc00001e1b0, 0x12, 0x0)
        /usr/local/go/src/runtime/map_fast64.go:176 +0x325 fp=0xc000106fa0 sp=0xc000106f60 pc=0x1010bc5
main.solution(0x12)
        /Users/lcq/go/src/go_base/gochanneldemo/channeldemo.go:15 +0x65 fp=0xc000106fd8 sp=0xc000106fa0 pc=0x10a88a5
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1374 +0x1 fp=0xc000106fe0 sp=0xc000106fd8 pc=0x1062c41
created by main.main
        /Users/lcq/go/src/go_base/gochanneldemo/channeldemo.go:20 +0x58

The root cause is that Go's built-in map is not safe for concurrent use: writing to the same map from multiple goroutines at once makes the runtime abort with this fatal error.
Solutions:

    1. Protect the map with a lock (sync.Mutex or sync.RWMutex), as sketched below
    2. Use sync.Map
    3. Use a channel (letting a single goroutine receive results from the channel and do all the map writes is safe)
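
A minimal sketch of option 1, adapted from the code above (the sync.Map and channel variants follow the same pattern):

package main

import (
	"fmt"
	"sync"
)

var (
	m  = make(map[int]int, 10)
	mu sync.Mutex // guards m
)

func solution(n int) {
	res := 1
	for i := 1; i <= n; i++ {
		res = res * i
	}
	mu.Lock() // only one goroutine writes to m at a time
	m[n] = res
	mu.Unlock()
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 200; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			solution(n)
		}(i)
	}
	wg.Wait() // wait for all goroutines instead of sleeping
	for ind, val := range m {
		fmt.Printf("[%d] = %d \n", ind, val)
	}
}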

C++ clang Compile Error: error: expected unqualified-id

Problem overview

Today, while learning the vector container, I found that CLion reported the error shown in the following figure.

The problem still exists when compiling with clang directly:

~/Documents/Clion_Project/Learning/L11.cpp:12:1: error: expected unqualified-id
for (int i = 0; i < v.size(); i++)
^
1 error generated.

Reason:

The code is not wrapped in a main function. A statement such as this for loop can only appear inside a function body; at file scope the compiler expects a declaration, which is why clang reports "expected unqualified-id".

Solution:

#include <vector>
using namespace std;

int main()
{
    struct Vertex
    {
        int a;
        float b;
    };

    vector<Vertex> v;
    for (int i = 0; i < v.size(); i++)
    {

    }
}

DB2 detects a syntax error in the DRDA data stream: 0x3 ERRORCODE= -4499, SQLSTATE=58009

DB2 reports an error: a syntax error was detected in the DRDA data stream. Reason: 0x3. ERRORCODE=-4499, SQLSTATE=58009

[16:48:43] RMI TCP Connection(3)-127.0.0.1 ERROR  [] [] [com.alibaba.druid.pool.DruidDataSource] - 
dataSource init errorcom.ibm.db2.jcc.am.DisconnectNonTransientException: 
 [jcc][4][2034]11148][4.26.14] conversation was released due to a distribution protocol error.
thus causing the execution to fail. Cause: 0x3. ERRORCODE= -4499, SQLSTATE=58009
	at com.ibm.db2.jcc.am.b6.a(b6.java:340)
	at com.ibm.db2.jcc.am.b6.a(b6.java:463)
	at com.ibm.db2.jcc.t4.y.j(y.java:1016)
	at com.ibm.db2.jcc.t4.y.c(y.java:472)
	at com.ibm.db2.jcc.t4.y.v(y.java:1219)
	at com.ibm.db2.jcc.t4.z.a(z.java:53)
	at com.ibm.db2.jcc.t4.b.c(b.java:1410)
	at com.ibm.db2.jcc.t4.b.b(b.java:1282)
	at com.ibm.db2.jcc.t4.b.b(b.java:833)
	at com.ibm.db2.jcc.t4.b.a(b.java:804)
	at com.ibm.db2.jcc.t4.b.a(b.java:441)
	at com.ibm.db2.jcc.t4.b.a(b.java:414)
	at com.ibm.db2.jcc.t4.b.<init>(b.java:352)
	at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(DB2SimpleDataSource.java:233)
	at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(DB2SimpleDataSource.java:200)
	at com.ibm.db2.jcc.DB2Driver.connect(DB2Driver.java:471)
	at com.ibm.db2.jcc.DB2Driver.connect(DB2Driver.java:113)

Solution:

The IP in the JDBC URL must not be 127.0.0.1 or localhost; change it to the machine's real IP.

Before: jdbc.url=jdbc:db2://localhost:50000/test
After:  jdbc.url=jdbc:db2://192.168.xxx.xxx:50000/test
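
For context, a typical datasource properties file for this setup might look like the sketch below. Only the jdbc.url key comes from this post; the other key names and placeholder values are illustrative assumptions (the driver class appears in the stack trace above):

# sketch of a datasource properties file (key names other than jdbc.url are assumptions)
jdbc.driverClassName=com.ibm.db2.jcc.DB2Driver
jdbc.url=jdbc:db2://192.168.xxx.xxx:50000/test
jdbc.username=<your user>
jdbc.password=<your password>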

blackduck Error: Request failed authorization [HTTP Error]: XXX, response was 403 Forbidden.

Question:

15:03:23 2022-02-14 15:03:23 CST ERROR [main] --- Failed to upload code location: xxx/bom
15:03:23 2022-02-14 15:03:23 CST ERROR [main] --- Reason: Request failed authorization [HTTP Error]: There was a problem trying to POST https://xxx.com/api/scan/data/, response was 403 Forbidden.
15:03:23 2022-02-14 15:03:23 CST ERROR [main] --- An error occurred uploading a bdio file.
15:03:23 2022-02-14 15:03:23 CST ERROR [main] --- There was a problem: An error occurred uploading a bdio file.
15:03:23 2022-02-14 15:03:23 CST ERROR [main] --- Detect run failed: There was a problem:  An error occurred uploading a bdio file.
15:03:23 2022-02-14 15:03:23 CST ERROR [main] --- There was a problem:  An error occurred uploading a bdio file.

ROOT CAUSE:
The project-owner is not a role in Blackduck, but an external reference to a project.

SOLUTION:

It was found that this is due to the user/project roles assigned to the user.
The user had Project Code Scanner assigned under the specific project, but this role does not allow creating new projects (only new project versions).
The user must have Global Code Scanner assigned in order to create new projects within Black Duck while scanning.
OR

User can have Project Creator role assigned but needs to be assigned to the specific project in order to run scans against the new project.
OR
If the user is already part of a group that has the correct permissions above, ensure they are not also passing the following property, as it will likewise return a 403 because the group already exists: ‘ detect.project.user.groups=blackduck.xxxx ‘. Removing the property will allow the scan to run.

OR

To upload scans using Detect, the Global Code Scanner role (global scope) or the Project Code Scanner role (project scope) needs to be assigned to the user from whom the token used in the Detect CLI was generated.

Please refer to the attached screenshot and the role matrix section of the Black Duck user guide (/doc/Welcome.htm#users_and_groups/rolematrix.htm).


NOTE:
Also ensure the user is the BOM Manager of the project; this will likewise prevent the failure.
Product: Black Duck/Black Duck Hub

[Solved] QT Error: error: undefined reference to `GameModel::~GameModel()’

When compiling a Qt program, the error: undefined reference to `GameModel::~GameModel()’ is reported.
This happens because the destructor is declared but never defined; the compiler will not generate a body for a destructor you have declared yourself, so the linker cannot find GameModel::~GameModel(). We need to write the definition ourselves, even if it is an empty function. After we write GameModel::~GameModel() by hand, the problem disappears when we compile again.

There are two ways to write the destructor:
Method 1:
Define it in the .cpp file, e.g. GameModel::~GameModel() {}

Method 2:
Define it directly in the .h file, giving the destructor an empty body inside the class declaration, e.g. ~GameModel() {}

[Solved] ERROR #8003 More than one page is numbered 1.

Error Messages:

ERROR #8003 More than one page is numbered 1.

 

Solution:

Double-click the title block in the lower-right corner of the schematic and modify its properties, mainly the page count and the page number, so that no two pages are numbered the same.

Kafka executes the script to create a topic and reports an error: ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor larger than available brokers

Question:

To test the integration of Spark Streaming and Kafka in the code, two topics need to be created in Kafka in advance, but the following error is reported when the creation script is executed:

 kafka-topics.sh --zookeeper linux1:2181,linux2:2181,linux3:2181 --create --topic wufabao_topic01 --replication-factor 2 --partitions 3

WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Error while executing topic command : Replication factor: 2 larger than available brokers: 0.
[2022-02-09 17:27:18,432] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 2 larger than available brokers: 0.
 (kafka.admin.TopicCommand$)

Reason:

The ZooKeeper path in the command does not match the metadata path configured for Kafka. In the Kafka configuration the metadata is stored under a chroot:
zookeeper.connect=linux1:2181,linux2:2181,linux3:2181/myKafka

Solution:

Modify the ZooKeeper path in the script so it matches the configured chroot:
kafka-topics.sh --zookeeper linux1:2181,linux2:2181,linux3:2181/myKafka --create --topic wufabao_topic01 --replication-factor 2 --partitions 3
The topic is created successfully.
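
To double-check, you can list or describe the topic against the same chroot, assuming the same ZooKeeper-based kafka-topics.sh that the post already uses:

kafka-topics.sh --zookeeper linux1:2181,linux2:2181,linux3:2181/myKafka --list
kafka-topics.sh --zookeeper linux1:2181,linux2:2181,linux3:2181/myKafka --describe --topic wufabao_topic01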

Redis Error: (error) ERR Errors trying to SHUTDOWN. Check logs.

An error is reported when shutting down Redis:

(error) ERR Errors trying to SHUTDOWN. Check logs.

First, we need to understand that when we run SHUTDOWN, Redis saves the data; whether it writes an RDB or AOF file depends on your settings. Saving can fail if the save path does not exist or is not writable. In the configuration file, the default save path for RDB is ./ (the directory Redis was started from), so our problem here comes down to path permissions.
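
For reference, these are the redis.conf settings involved; the values shown are the common defaults, not necessarily what was configured here:

dir ./                  # directory where RDB/AOF files are written
dbfilename dump.rdb     # RDB snapshot file name
appendonly no           # set to yes if AOF persistence is enabled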

Modify the permissions of that directory (here /usr/local/bin/redis-config/):

[atguigu@hadoop100 bin]$ sudo chown atguigu:atguigu -R /usr/local/bin/redis-config/

Shut down again, and Redis stops successfully.