
Proteus simulation duplicate part reference error

When running a Proteus simulation, a "duplicate part reference" error is reported.

This is caused by duplicate component reference names.

Double-click the error entry to jump straight to the offending component.

Double-click that component and modify its reference name.

After renaming every component that had a duplicate reference, the simulation runs normally.

RabbitMQ reports an error when installing the web management plug-in


Recently, while installing a RabbitMQ plug-in, I hit the following error. I could not figure out why at first; after a lot of searching and trial and error, I have summarized the steps that worked, hoping they help you:

Error:
[root@sa software]# rabbitmq-plugins enable rabbitmq_management
Enabling plugins on node rabbit@sa:
rabbitmq_management
Error:
{:query, :rabbit@sa, {:badrpc, :timeout}}

Solution:

    First run "hostnamectl" to check your hostname.
    Then run "vi /etc/hosts" and map your own IP address to that hostname (note: the line is your own IP address followed by the hostname, for example: 172.12.1.68 admin), as sketched below.
    Finally run again: rabbitmq-plugins enable rabbitmq_management
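
A minimal sketch of the /etc/hosts entry and the re-run, assuming the node's hostname is sa and its address is 172.12.1.68 as in the error above (substitute your own values):

    # /etc/hosts
    127.0.0.1    localhost
    172.12.1.68  sa

    # then re-run the plug-in command
    rabbitmq-plugins enable rabbitmq_management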

Keep at it, and good luck!

PyTorch: error message with chunks of 0 [How to Solve]

File "D:/Codes/code/Python Project/group_reid-master/group_reid-master/main_group_gcn_siamese_part_half_fulltest_sink.py", line 348, in train_gcn
    loss.backward()
  File "D:\Codes\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\tensor.py", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "D:\Codes\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\autograd\__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: chunk expects `chunks` to be greater than 0, got: 0
Exception raised from chunk at ..\aten\src\ATen\native\TensorShape.cpp:496 (most recent call first):

As shown in the traceback above, I kept getting an error about chunks of 0. At first I was puzzled: searching online turned up no similar case. Tracing the error, I found it was raised while back-propagating the loss (the line below), which seemed impossible, since loss.backward() is just a library call. So I debugged it directly on the server. The error message differs between PyTorch versions; we finally located the problem under PyTorch 1.1.

            loss.backward()

It was caused by a dimension mismatch when building the repeated tensors:

env11_junk1 = env11.squeeze().unsqueeze(0).unsqueeze(0).repeat((5-x1_valid.shape[0]), parts, 1)
env22_junk2 = env22.squeeze().unsqueeze(0).unsqueeze(0).repeat((5-x2_valid.shape[0]), parts, 1)
env11 = env11.squeeze().unsqueeze(0).unsqueeze(0).repeat(x1_valid.shape[0], parts, 1)
env22 = env22.squeeze().unsqueeze(0).unsqueeze(0).repeat(x2_valid.shape[0], parts, 1)

  # calculate within graph and inter graph message
h_k1 = torch.cat((self.W_x(x1[i, :sample_size1, :]), self.W_neib(x_neib1), self.W_relative(mu1), self.W_env(env11)), 2).unsqueeze(0)  
h_k_junk1 = torch.cat((self.W_x(x1[i, sample_size1:, :]), self.W_x(x1[i, sample_size1:, :]), self.W_x(x1[i, sample_size1:, :]),self.W_env(env11_junk1)), 2).unsqueeze(0)

h_k2 = torch.cat((self.W_x(x2[i, :sample_size2, :]), self.W_neib(x_neib2), self.W_relative(mu2), self.W_env(env22)), 2).unsqueeze(0)
h_k_junk2 = torch.cat((self.W_x(x2[i, sample_size2:, :]), self.W_x(x2[i, sample_size2:, :]), self.W_x(x2[i, sample_size2:, :]),self.W_env(env22_junk2)), 2).unsqueeze(0)                       

In my code (the squeeze and unsqueeze calls are actually redundant; a permute would do), I intended repeat() to copy the tensor along the first dimension so that it matches the first dimension of x1. But in the actual dataset that first dimension can be 0, and passing 0 as a repeat count is what triggers the error, since the chunk call in the backward pass expects a count greater than 0. Because the reported error gives no hint about this cause, I wasted half a day on the problem, so I am recording it here.
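
A minimal, standalone sketch of the failure mode and a defensive fix (the tensor shapes and names here are illustrative, not the author's actual ones):

import torch

env11 = torch.randn(64, requires_grad=True)   # environment feature vector
x1_valid = torch.randn(0, 64)                 # in some samples no rows are valid
parts = 6

n = x1_valid.shape[0]
# A repeat count of 0 is what ultimately produced
# "chunk expects `chunks` to be greater than 0" during loss.backward()
# in this PyTorch version, so guard against the empty case explicitly.
if n > 0:
    env11_rep = env11.unsqueeze(0).unsqueeze(0).repeat(n, parts, 1)   # (n, parts, 64)
else:
    env11_rep = env11.new_zeros((0, parts, env11.shape[-1]))          # empty but well-formed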

Zeppelin starts successfully, but an error is reported

Error message

The Zeppelin service started successfully and the web UI could be accessed normally, but running code in a notebook reported the following error.

org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_interpret(RemoteInterpreterService.java:241)
    at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.interpret(RemoteInterpreterService.java:225)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:229)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
    at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:229)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:171)
    at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:328)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Cause: this error occurs when Zeppelin fails to connect to the corresponding interpreter process for Spark and related services. If the Spark and Hadoop services themselves are running normally, the cause is version incompatibility.

Solution

Replace Zeppelin with a version that is compatible with your Spark/Hadoop installation.

The Spring Boot test class reports a NullPointerException

The test class should be annotated with @RunWith(SpringRunner.class). The annotation matters whenever the test class uses injected beans, such as fields injected with @Autowired.

With @RunWith(SpringRunner.class), those beans are instantiated in the Spring container and autowiring takes effect.

Otherwise the injected fields stay null and you simply get a NullPointerException.

@SpringBootTest
@RunWith(SpringRunner.class)
public class AppTest
{
    @Autowired
    private Sender sender;

    @Test
    public void Sendtest(){

        System.out.println(Sender.class+""+sender);
        sender.send();


    }
}

You can still run the test without @RunWith inside IDEA, because IDEA recognizes it as a JUnit run configuration and in effect supplies the runner itself. Other IDEs may not do this, so to make your code run normally everywhere it is recommended to add @RunWith(SpringRunner.class).

How to Solve Sqlyog error 2058

SQLyog reports error number 2058 when configuring a new connection. The analysis: the MySQL password authentication plugin has changed (MySQL 8 defaults to caching_sha2_password, which older clients do not support).
Solution:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password'; (note the semicolon)
#password is the root password you set yourself

After reconfiguring the SQLyog connection, it connects successfully.

[Solved] Demjson error: ERROR: Command errored out with exit status 1

    ERROR: Command errored out with exit status 1:
     command: /data/wangzy-p/soft/anaconda3/envs/tf_1.14/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-lkidxq7i/demjson_1adc3de9a48e4e169cf993fa26319b82/setup.py'"'"'; __file__='"'"'/tmp/pip-install-lkidxq7i/demjson_1adc3de9a48e4e169cf993fa26319b82/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-ldwnwp11
         cwd: /tmp/pip-install-lkidxq7i/demjson_1adc3de9a48e4e169cf993fa26319b82/
    Complete output (1 lines):
    error in demjson setup command: use_2to3 is invalid.

Solution

With setuptools 58.0.0 and later, installing demjson's dependencies fails because use_2to3 support was removed. The project has been republished as demjson3 to solve this problem; see https://pypi.org/project/demjson3
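
Two possible workarounds, sketched here (the setuptools pin is an assumption based on use_2to3 having been removed in setuptools 58):

# Option 1: switch to the maintained Python 3 fork
pip install demjson3

# Option 2: temporarily pin setuptools below 58 so the old demjson still builds
pip install "setuptools<58"
pip install demjson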

[Solved] Rocketmq error: service not available now, may disk full, CL: 0.95, CQ: 0.95, index: 0.95

RocketMQ reports the error: service not available now, may disk full, CL: 0.95, CQ: 0.95, index: 0.95.
The error is caused by large log files in the /store/commitlog folder. Use the df -h command to check current disk usage. By default, RocketMQ treats disk usage above 75% as insufficient disk space.

Solution:

1. First delete the unneeded log files in the /store/commitlog folder.

2. Edit the /conf/2m-2s-async/broker-a.properties file and add diskMaxUsedSpaceRatio=98, so the error is only reported once disk usage reaches 98% (see the sketch after this list).

3. Check whether other processes are taking up a lot of disk space. In my case it was the Logstash process of an ELK stack; killing that process was enough.
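
A minimal sketch of the configuration change from step 2 (only the diskMaxUsedSpaceRatio line is the relevant addition; keep the rest of your existing broker-a.properties as is):

# conf/2m-2s-async/broker-a.properties
# refuse writes only when the disk is 98% full instead of the 75% default
diskMaxUsedSpaceRatio=98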

[Solved] java.lang.NoClassDefFoundError when IDEA runs Flink: org/apache/flink/api/common/ExecutionConfig

Solution:
Change the dependency scope from provided to compile, for example:

		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-java</artifactId>
			<version>${flink.version}</version>
			<scope>compile</scope>
		</dependency>
		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
			<version>${flink.version}</version>
			<scope>compile</scope>
		</dependency>
		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-clients_${scala.binary.version}</artifactId>
			<version>${flink.version}</version>
			<scope>compile</scope>
		</dependency>

Then click the refresh button on pom.xml in IDEA to reload the Maven dependencies.

Error creating bean with name 'redisTemplate' defined in class path resource [xx/RedisConfig.class]


1. Problem description

When Redis is used in the project, the following error is reported:

2. Problem analysis

The key information comes from the error message: factory method 'redisConnectionFactory' threw exception; nested exception is java.lang.NoClassDefFoundError: org/apache/commons/pool2/impl/GenericObjectPoolConfig. A NoClassDefFoundError means the class could be found at compile time but the JVM cannot load or find it at runtime.
Common reasons for a NoClassDefFoundError:
1) A static initializer or static variable failed to load.
2) A required jar is missing from the classpath or from the Maven dependencies; check this according to your project setup.

3. Problem handling

The project references the spring-boot-starter-data-redis package. By default, spring-boot-starter-data-redis uses Lettuce as the Redis client, and Lettuce is implemented on top of Netty. Lettuce is a scalable, thread-safe Redis client that supports synchronous, asynchronous, and reactive modes; multiple threads can share a single connection instance without worrying about concurrency. Using Lettuce's connection pooling, however, requires a pool configuration, so you also need to reference the following package:
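
A sketch of the missing commons-pool2 dependency (the version number is illustrative; align it with your own dependency management):

		<dependency>
			<groupId>org.apache.commons</groupId>
			<artifactId>commons-pool2</artifactId>
			<version>2.11.1</version>
		</dependency>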