
[Solved] CDH ipa: ERROR: Host 'cdh-master-v02.yunes.com' does not have corresponding DNS A/AAAA record

1. Error environment

1. System version: CentOS 7.7

2. IPA version: 4.6.8

2. Error occurred when installing Kerberos

1. Error description:

Execute global command: Generate Missing Credentials
/opt/cloudera/cm/bin/gen_credentials_ipa.sh failed with exit code 1 and output of <<
+ CMF_REALM=YUNES.COM
+ export PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
+ PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
+ kinit -k -t /var/run/cloudera-scm-server/cmf8356082187139991822.keytab [email protected]
+ KEYTAB_OUT=/var/run/cloudera-scm-server/cmf1174813005448134353.keytab
+ PRINCIPAL=impala/[email protected]
+ MAX_RENEW_LIFE=432000
+ '[' -z /etc/krb5.conf ']'
+ echo 'Using custom config path '\''/etc/krb5.conf'\'', contents below:'
+ cat /etc/krb5.conf
+ PRINC=impala
++ echo impala/[email protected]
++ cut -d/-f 2
++ cut -d @ -f 1
+ HOST=cdh-master-v02.yunes.com
+ set +e
+ ipa host-find cdh-master-v02.yunes.com
+ ERR=0
+ set -e
+ [[ 0 -eq 0 ]]
+ echo 'Host cdh-master-v02.yunes.com exists'
+ set +e
+ ipa service-find impala/[email protected]
+ ERR=1
+ set -e
+ [[ 1 -eq 0 ]]
+ PRINC_EXISTS=no
+ echo 'Adding new principal: impala/[email protected]'
+ ipa service-add impala/[email protected]
ipa: ERROR: Host 'cdh-master-v02.yunes.com' does not have corresponding DNS A/AAAA record


2. Solution (options may differ between versions; test and choose what fits your environment)
1) Add forward (A) records for each cluster host:

ipa dnsrecord-add yunes.com cdh-cm-v01 --a-rec 192.168.0.200;
ipa dnsrecord-add yunes.com cdh-master-v01 --a-rec 192.168.0.201;
ipa dnsrecord-add yunes.com cdh-master-v02 --a-rec 192.168.0.202;
ipa dnsrecord-add yunes.com cdh-datanode-v01 --a-rec 192.168.0.203;
ipa dnsrecord-add yunes.com cdh-datanode-v02 --a-rec 192.168.0.204;
ipa dnsrecord-add yunes.com cdh-datanode-v03 --a-rec 192.168.0.205;
ipa dnsrecord-add yunes.com cdh-client-v01 --a-rec 192.168.0.206;

[root@cdh-ipa-v01 ~]# ipa dnsrecord-add yunes.com cdh-cm-v01 --a-rec 192.168.0.200;
ipa dnsrecord-add yunes.com cdh-master-v01 --a-rec 192.168.0.201;
ipa dnsrecord-add yunes.com cdh-master-v02 --a-rec 192.168.0.202;
ipa dnsrecord-add yunes.com cdh-datanode-v01 --a-rec 192.168.0.203;
ipa dnsrecord-add yunes.com cdh-datanode-v02 --a-rec 192.168.0.204;
ipa dnsrecord-add yunes.com cdh-datanode-v03 --a-rec 192.168.0.205;
ipa dnsrecord-add yunes.com cdh-client-v01 --a-rec 192.168.0.206;
  Record name: cdh-cm-v01
  A record: 192.168.0.200
  SSHFP record: 1 1 2EE47C060AD498FACE720384A62F5672A24F2B15, 1 2 E0F507FC5983919E80A81397167FE5B0A31247E55B9FE96D1F789534 35250808, 3 1
                182B18E515A1A9D4C7B434BA4775876709F6DF2A, 3 2 76C34C382E5060EF30D0545A82D8BC0DB3D18034849CCE3ECB601A37 08F8F36C, 4 1
                67CE1FEB39325B57790BB046035E53A3AF2B893C, 4 2 CBDE7A7845393E7C60713731DE0F18CA7670FDA37A8232E2F53AE401 527D1248
[root@cdh-ipa-v01 ~]# ipa dnsrecord-add yunes.com cdh-master-v01 --a-rec 192.168.0.201;
  Record name: cdh-master-v01
  A record: 192.168.0.201
  SSHFP record: 1 1 2EE47C060AD498FACE720384A62F5672A24F2B15, 1 2 E0F507FC5983919E80A81397167FE5B0A31247E55B9FE96D1F789534 35250808, 3 1
                182B18E515A1A9D4C7B434BA4775876709F6DF2A, 3 2 76C34C382E5060EF30D0545A82D8BC0DB3D18034849CCE3ECB601A37 08F8F36C, 4 1
                67CE1FEB39325B57790BB046035E53A3AF2B893C, 4 2 CBDE7A7845393E7C60713731DE0F18CA7670FDA37A8232E2F53AE401 527D1248
[root@cdh-ipa-v01 ~]# ipa dnsrecord-add yunes.com cdh-master-v02 --a-rec 192.168.0.202;
  Record name: cdh-master-v02
  A record: 192.168.0.202
  SSHFP record: 1 1 2EE47C060AD498FACE720384A62F5672A24F2B15, 1 2 E0F507FC5983919E80A81397167FE5B0A31247E55B9FE96D1F789534 35250808, 3 1
                182B18E515A1A9D4C7B434BA4775876709F6DF2A, 3 2 76C34C382E5060EF30D0545A82D8BC0DB3D18034849CCE3ECB601A37 08F8F36C, 4 1
                67CE1FEB39325B57790BB046035E53A3AF2B893C, 4 2 CBDE7A7845393E7C60713731DE0F18CA7670FDA37A8232E2F53AE401 527D1248
[root@cdh-ipa-v01 ~]# ipa dnsrecord-add yunes.com cdh-datanode-v01 --a-rec 192.168.0.203;
  Record name: cdh-datanode-v01
  A record: 192.168.0.203
  SSHFP record: 1 1 2EE47C060AD498FACE720384A62F5672A24F2B15, 1 2 E0F507FC5983919E80A81397167FE5B0A31247E55B9FE96D1F789534 35250808, 3 1
                182B18E515A1A9D4C7B434BA4775876709F6DF2A, 3 2 76C34C382E5060EF30D0545A82D8BC0DB3D18034849CCE3ECB601A37 08F8F36C, 4 1
                67CE1FEB39325B57790BB046035E53A3AF2B893C, 4 2 CBDE7A7845393E7C60713731DE0F18CA7670FDA37A8232E2F53AE401 527D1248
[root@cdh-ipa-v01 ~]# ipa dnsrecord-add yunes.com cdh-datanode-v02 --a-rec 192.168.0.204;
  Record name: cdh-datanode-v02
  A record: 192.168.0.204
  SSHFP record: 1 1 2EE47C060AD498FACE720384A62F5672A24F2B15, 1 2 E0F507FC5983919E80A81397167FE5B0A31247E55B9FE96D1F789534 35250808, 3 1
                182B18E515A1A9D4C7B434BA4775876709F6DF2A, 3 2 76C34C382E5060EF30D0545A82D8BC0DB3D18034849CCE3ECB601A37 08F8F36C, 4 1
                67CE1FEB39325B57790BB046035E53A3AF2B893C, 4 2 CBDE7A7845393E7C60713731DE0F18CA7670FDA37A8232E2F53AE401 527D1248
[root@cdh-ipa-v01 ~]# ipa dnsrecord-add yunes.com cdh-datanode-v03 --a-rec 192.168.0.205;
  Record name: cdh-datanode-v03
  A record: 192.168.0.205
  SSHFP record: 1 1 2EE47C060AD498FACE720384A62F5672A24F2B15, 1 2 E0F507FC5983919E80A81397167FE5B0A31247E55B9FE96D1F789534 35250808, 3 1
                182B18E515A1A9D4C7B434BA4775876709F6DF2A, 3 2 76C34C382E5060EF30D0545A82D8BC0DB3D18034849CCE3ECB601A37 08F8F36C, 4 1
                67CE1FEB39325B57790BB046035E53A3AF2B893C, 4 2 CBDE7A7845393E7C60713731DE0F18CA7670FDA37A8232E2F53AE401 527D1248
[root@cdh-ipa-v01 ~]# ipa dnsrecord-add yunes.com cdh-client-v01 --a-rec 192.168.0.206;
  Record name: cdh-client-v01
  A record: 192.168.0.206
  SSHFP record: 1 1 2EE47C060AD498FACE720384A62F5672A24F2B15, 1 2 E0F507FC5983919E80A81397167FE5B0A31247E55B9FE96D1F789534 35250808, 3 1
                182B18E515A1A9D4C7B434BA4775876709F6DF2A, 3 2 76C34C382E5060EF30D0545A82D8BC0DB3D18034849CCE3ECB601A37 08F8F36C, 4 1
                67CE1FEB39325B57790BB046035E53A3AF2B893C, 4 2 CBDE7A7845393E7C60713731DE0F18CA7670FDA37A8232E2F53AE401 527D1248
[root@cdh-ipa-v01 ~]# 


2) Add the corresponding reverse (PTR) records in the reverse zone:


ipa dnsrecord-add 0.168.192.in-addr.arpa 200 --ptr-rec cdh-cm-v01.yunes.com.
ipa dnsrecord-add 0.168.192.in-addr.arpa 201 --ptr-rec cdh-master-v01.yunes.com.
ipa dnsrecord-add 0.168.192.in-addr.arpa 202 --ptr-rec cdh-master-v02.yunes.com.
ipa dnsrecord-add 0.168.192.in-addr.arpa 203 --ptr-rec cdh-datanode-v01.yunes.com.
ipa dnsrecord-add 0.168.192.in-addr.arpa 204 --ptr-rec cdh-datanode-v02.yunes.com.
ipa dnsrecord-add 0.168.192.in-addr.arpa 205 --ptr-rec cdh-datanode-v03.yunes.com.
ipa dnsrecord-add 0.168.192.in-addr.arpa 206 --ptr-rec cdh-client-v01.yunes.com.
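
For reference, a quick way to verify the new records before re-running the Generate Missing Credentials command in Cloudera Manager (the hostname and IP below follow the examples above):

dig +short cdh-master-v02.yunes.com A        # expect 192.168.0.202
dig +short -x 192.168.0.202                  # expect cdh-master-v02.yunes.com.
ipa dnsrecord-find yunes.com cdh-master-v02  # list the records just added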

 

[Solved] Nacos2.1.1 Startup Error: nacos is starting with standalone

Project Scene:

While developing a microservice project, a service registry and configuration center is required, and Nacos was chosen. The local OS is Windows 11, and Nacos 2.1.1 is installed on drive D.


Problem description

Nacos start:

Double-clicking startup.cmd in the bin directory makes the window flash and close immediately, i.e. the startup fails.

Starting from the command line with startup.cmd -m standalone produces an error message:

D:\Program Files (x86)\nacos-server-2.1.1\nacos\bin>startup.cmd -m standalone
"nacos is starting with standalone"
\nacos-server-2.1.1\nacos"\logs\java_heapdump.hprof -XX:-UseLargePages" was unexpected at this time.


Solution:

The first solution: unzip Nacos into a directory whose path contains only English letters and digits (no spaces, parentheses or other special characters); the command-line startup then succeeds.

Second solution: open startup.cmd in an editor and comment out the cluster-mode block shown below:

rem if %MODE% == "cluster" (
rem echo "nacos is starting with cluster"
rem if %EMBEDDED_STORAGE% == "embedded" (
rem set "NACOS_OPTS=-DembeddedStorage=true"
rem )

rem set "NACOS_JVM_OPTS=-server -Xms2g -Xmx2g -Xmn1g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX: HeapDumpPath=%BASE_DIR%\logs\java_heapdump.hprof -XX:-UseLargePages"
rem )

I am not sure whether this change causes other problems, but Nacos starts fine afterwards.
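
For illustration, the first solution sketched as a console session (D:\nacos is a hypothetical target path; any path made up only of ASCII letters, digits and backslashes works):

D:
cd \nacos\bin
startup.cmd -m standalone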

[Solved] Nacos1.3.2 Startup Error: Unable to start embedded Tomcat

Recently I started learning Spring Cloud Alibaba, downloaded the Nacos package from the official GitHub releases, and hit an error on startup.

Error message: Unable to start embedded Tomcat (the built-in Tomcat cannot be started).

Opening the conf folder, I saw a nacos-mysql.sql file. It is a database initialization script, so I created a database named nacos locally and executed the script, which generated the tables.

With the database and tables in place, the configuration must be changed as well. I opened application.properties in an editor, found the db settings, and changed them (see the sketch below).
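
A sketch of the relevant db settings in conf/application.properties for Nacos 1.3.x (the database host, user and password below are assumptions; use your own values):

spring.datasource.platform=mysql
db.num=1
db.url.0=jdbc:mysql://127.0.0.1:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useSSL=false
db.user=nacos
db.password=nacos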

After saving, I entered the bin directory again and double-clicked startup.cmd to start.

It still reported an error:


Caused by: java.net.UnknownHostException: jmenv.tbsite.net

I changed the configuration file again, but it had no effect.

The key point is that during startup I noticed the message:

Nacos has been started in cluster mode. Cluster list is []

I suspected this was the problem, because I only wanted to run a single node and had not configured a Nacos cluster.

So I opened startup.cmd in the bin directory with an editor and saw the key setting: the startup mode.

The startup mode can be configured here, so I changed it to standalone mode (a sketch follows below).
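
A sketch of the relevant line in bin\startup.cmd, assuming the default reads set MODE="cluster" (the exact contents may differ between Nacos versions):

rem set MODE="cluster"
set MODE="standalone"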

Exit after saving. Double click the startup file startup.cmd

It started normally this time.

Open http://localhost:8848/nacos/index.html in a browser.

The Nacos configuration center can now be accessed normally.

[Solved] Spring Boot Error: org.springframework.jdbc.datasource.embedded.EmbeddedData

A record of a Spring Boot error and its solution.

When integrating Druid with Spring Boot, the following error is reported:

org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'dataSource': Unsatisfied dependency expressed through field 'basicProperties'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'spring.datasource-org.springframework.boot.autoconfigure.jdbc.DataSourceProperties': Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.autoconfigure.jdbc.DataSourceProperties]: Constructor threw exception; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/embedded/EmbeddedDatabaseType
	at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:596)
	at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:90)
	at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessProperties(AutowiredAnnotationBeanPostProcessor.java:374)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1411)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:592)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:849)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
	at org.learn.SpringBootLearnApplication.main(SpringBootLearnApplication.java:23)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'spring.datasource-org.springframework.boot.autoconfigure.jdbc.DataSourceProperties': Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.autoconfigure.jdbc.DataSourceProperties]: Constructor threw exception; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/embedded/EmbeddedDatabaseType
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1303)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1197)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
	at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1247)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1167)
	at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:593)
	... 19 common frames omitted
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.autoconfigure.jdbc.DataSourceProperties]: Constructor threw exception; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/embedded/EmbeddedDatabaseType
	at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:184)
	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:87)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1295)
	... 30 common frames omitted
Caused by: java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/embedded/EmbeddedDatabaseType
	at org.springframework.boot.jdbc.EmbeddedDatabaseConnection.<clinit>(EmbeddedDatabaseConnection.java:50)
	at org.springframework.boot.autoconfigure.jdbc.DataSourceProperties.<init>(DataSourceProperties.java:152)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:172)
	... 32 common frames omitted
Caused by: java.lang.ClassNotFoundException: org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 39 common frames omitted

The reason is that the spring-jdbc jar is missing from the classpath.

Solution: add the spring-jdbc dependency:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
</dependency>
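
If the project uses the Spring Boot BOM (which is what manages the missing version above), an alternative sketch is to pull in the JDBC starter instead, which brings spring-jdbc together with a pooled DataSource:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>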

[Solved] java Internal error in the mapping processor java.lang.NullPointerException

Error Messages:

java: Internal error in the mapping processor: java.lang.NullPointerException  	
at org.mapstruct.ap.internal.processor.DefaultVersionInformation.createManifestUrl(DefaultVersionInformation.java:182)  	
at org.mapstruct.ap.internal.processor.DefaultVersionInformation.openManifest(DefaultVersionInformation.java:153)  	
at org.mapstruct.ap.internal.processor.DefaultVersionInformation.getLibraryName(DefaultVersionInformation.java:129)  	
at org.mapstruct.ap.internal.processor.DefaultVersionInformation.getCompiler(DefaultVersionInformation.java:122)  	
at org.mapstruct.ap.internal.processor.DefaultVersionInformation.fromProcessingEnvironment(DefaultVersionInformation.java:95)  	
at org.mapstruct.ap.internal.processor.DefaultModelElementProcessorContext.<init>(DefaultModelElementProcessorContext.java:50)  
at org.mapstruct.ap.MappingProcessor.processMapperElements(MappingProcessor.java:218)  	
at org.mapstruct.ap.MappingProcessor.process(MappingProcessor.java:156)  	
at org.jetbrains.jps.javac.APIWrappers$ProcessorWrapper.process(APIWrappers.java:109)  	
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  	
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)  	
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)  	
at java.lang.reflect.Method.invoke(Method.java:498)  	
at org.jetbrains.jps.javac.APIWrappers$1.invoke(APIWrappers.java:213)  	
at org.mapstruct.ap.MappingProcessor.process(Unknown Source)  	
at com.sun.tools.javac.processing.JavacProcessingEnvironment.callProcessor(JavacProcessingEnvironment.java:794)  	
at com.sun.tools.javac.processing.JavacProcessingEnvironment.discoverAndRunProcs(JavacProcessingEnvironment.java:705)  
at com.sun.tools.javac.processing.JavacProcessingEnvironment.access$1800(JavacProcessingEnvironment.java:91)  	
at com.sun.tools.javac.processing.JavacProcessingEnvironment$Round.run(JavacProcessingEnvironment.java:1035)  	
at com.sun.tools.javac.processing.JavacProcessingEnvironment.doProcessing(JavacProcessingEnvironment.java:1176)  	
at com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1170)  	
at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:856)  	
at com.sun.tools.javac.main.Main.compile(Main.java:523)  	
at com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:129)  	
at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:138)  	
at org.jetbrains.jps.javac.JavacMain.compile(JavacMain.java:231)  	
at org.jetbrains.jps.incremental.java.JavaBuilder.compileJava(JavaBuilder.java:501)  	
at org.jetbrains.jps.incremental.java.JavaBuilder.compile(JavaBuilder.java:353)  	
at org.jetbrains.jps.incremental.java.JavaBuilder.doBuild(JavaBuilder.java:277)  	
at org.jetbrains.jps.incremental.java.JavaBuilder.build(JavaBuilder.java:231)  	
at org.jetbrains.jps.incremental.IncProjectBuilder.runModuleLevelBuilders(IncProjectBuilder.java:1441)  	
at org.jetbrains.jps.incremental.IncProjectBuilder.runBuildersForChunk(IncProjectBuilder.java:1100)  	at org.jetbrains.jps.incremental.IncProjectBuilder.buildTargetsChunk(IncProjectBuilder.java:1224)  	at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunkIfAffected(IncProjectBuilder.java:1066)  	at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunks(IncProjectBuilder.java:832)  	at org.jetbrains.jps.incremental.IncProjectBuilder.runBuild(IncProjectBuilder.java:419)  	at org.jetbrains.jps.incremental.IncProjectBuilder.build(IncProjectBuilder.java:183)  	at org.jetbrains.jps.cmdline.BuildRunner.runBuild(BuildRunner.java:132)  	at org.jetbrains.jps.cmdline.BuildSession.runBuild(BuildSession.java:302)  	at org.jetbrains.jps.cmdline.BuildSession.run(BuildSession.java:132)  	at org.jetbrains.jps.cmdline.BuildMain$MyMessageHandler.lambda$channelRead0$0(BuildMain.java:219)  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)  	at java.lang.Thread.run(Thread.java:748)  

While using MapStruct, IntelliJ IDEA 2020.3 reports this error when building the project: java: Internal error in the mapping processor: java.lang.NullPointerException

Solution:
Settings -> Build, Execution, Deployment -> Compiler -> User-local build process VM options, and add the parameter:
-Djps.track.ap.dependencies=false

[Solved] Kafka Error: kafka.common.InconsistentClusterIdException…

1. Background

The physical machine hosting Kafka went down unexpectedly, and afterwards Kafka failed to start.

2. Details of error report

[2022-08-09 08:20:42,097] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID 123456 doesn't match stored clusterId Some(456789) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
	at kafka.server.KafkaServer.startup(KafkaServer.scala:235)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
	at kafka.Kafka$.main(Kafka.scala:82)
	at kafka.Kafka.main(Kafka.scala)

3. Solution

The error is clear: the cluster id stored in meta.properties does not match the cluster id the broker obtains from ZooKeeper. Fix it by editing the configuration in meta.properties.

# The location of the meta.properties file can be found based on the value of the log.dirs parameter in the server.properties configuration file
vim meta.properties

cluster.id=123456

Kafka can then be started normally (a sketch follows below).
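
A minimal sketch of locating and fixing the file (the paths below are assumptions; take log.dirs from your own server.properties, and set cluster.id to the ID expected in the error message, 123456 in this example):

grep '^log.dirs' /opt/kafka/config/server.properties      # e.g. log.dirs=/data/kafka-logs
cat /data/kafka-logs/meta.properties                      # shows the stored cluster.id
sed -i 's/^cluster.id=.*/cluster.id=123456/' /data/kafka-logs/meta.properties
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties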

[Solved] Hadoop Error: ERROR: Cannot set priority of namenode process

Phenomenon: the NameNode fails to start and the start script prints "ERROR: Cannot set priority of namenode process".

Solution:

1. Look at Hadoop logs:

Check the NameNode log: tail -n 200 hadoop-xinjie-namenode-VM-0-9-centos.log (the log files are in the logs directory under the Hadoop installation directory)

2. The log shows that the required port is already occupied

3. Check the port occupancy: netstat -anp|grep 9866

4. Kill the occupying process using the PID reported by netstat (not the port number): kill -9 <PID>

5. After all the occupying processes have been killed, restart the cluster; the problem is solved (see the sketch after this list)
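
A minimal sketch of the steps above ($HADOOP_HOME and the placeholder <PID> are assumptions; the PID is the number in the last column of the netstat output):

tail -n 200 $HADOOP_HOME/logs/hadoop-xinjie-namenode-VM-0-9-centos.log
netstat -anp | grep 9866          # note the PID in the "PID/Program name" column
kill -9 <PID>                     # replace <PID> with the process id found above
stop-dfs.sh && start-dfs.sh       # restart HDFS once the port is free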

Zookeeper Failed to Start Error: start failed [How to Solve]

=====ZooKeeper prints no error on the console here; look at the logs. There is a logs directory one level above the bin directory; check the logs there=====

Main error log information:

2022-07-28 15:31:50,793 [myid:] - ERROR [main:QuorumPeerMain@98] - Invalid config, exiting abnormally

========Solution information========

cd apache-zookeeper-3.6.2-bin/conf/

Open the zoo.cfg configuration file with vim, check the dataDir directory you defined, and note the server id assigned to this zkServer.

cd into the dataDir directory and write this node's server id (the N in the server.N=... line of zoo.cfg) into the myid file:

echo "<server-id>" > myid

Go back to the bin directory and start again: sh zkServer.sh start
Check the ZooKeeper status: sh zkServer.sh status (if the start succeeds but the status looks wrong, wait until all the cluster machines have finished starting). A full sketch follows below.
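
A minimal sketch, assuming ZooKeeper is unpacked under /opt and dataDir is /data/zookeeper (both paths are assumptions; use the values from your own zoo.cfg):

grep -E '^(dataDir|server\.)' /opt/apache-zookeeper-3.6.2-bin/conf/zoo.cfg
echo 1 > /data/zookeeper/myid      # on the node listed as server.1=...; use 2, 3, ... on the others
sh /opt/apache-zookeeper-3.6.2-bin/bin/zkServer.sh start
sh /opt/apache-zookeeper-3.6.2-bin/bin/zkServer.sh status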

========================End=====================

[Solved] ZooKeeper Configurate Error: Error contacting service. It is probably not running.

After downloading, unpacking and configuring ZooKeeper, running zkServer.sh start reports the following error.

The cause of the error is that the JDK environment variables were not set on nodes 2 and 3, which produces "Error contacting service. It is probably not running."

I had only configured the JDK environment variables on node 1, not on nodes 2 and 3 (three machines are configured here).

Run vim /etc/profile and add the JDK environment variables, for example:
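
A sketch of the lines appended to /etc/profile on every node (the JDK path below is an assumption; point JAVA_HOME at the actual JDK installation directory):

export JAVA_HOME=/usr/local/jdk1.8.0_211
export PATH=$JAVA_HOME/bin:$PATH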

After setting the environment variable, use the command: source /etc/profile to make it effective

After the JDK environment variables of the three machines are set successfully.

Start ZooKeeper and check the status with zkServer.sh status; all three nodes now start successfully.

Successfully resolved.

[Solved] Win 10 Kafka error: failed to construct Kafka consumer

After updating the code, the build succeeds, but an error is reported when the project starts: failed to construct Kafka consumer.

The problem clearly lies in the Kafka configuration; after checking it, it appeared to be an address problem.

Instead of using the IP in the configuration, I mapped a hostname in the hosts file:

Add the entry: 127.0.0.1 kafka-server

Then change the Kafka service address to kafka-server:9092 in the project's application configuration file (see the sketch below).
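
For reference, a minimal sketch assuming the project is a Spring Boot application (property names differ in other stacks); the hostname resolves through the hosts entry added above:

spring:
  kafka:
    bootstrap-servers: kafka-server:9092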

[Solved] RabbitMQ Error: Error creating bean with name 'rabbitConnectionFactory' defined in class path resource

1. Error Messages:

Error creating bean with name 'rabbitConnectionFactory' defined in class path resource [org/springframework/boot/autoconfigure/amqp/RabbitAutoConfiguration$RabbitConnectionFactoryCreator.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.amqp.rabbit.connection.CachingConnectionFactory]: Factory method 'rabbitConnectionFactory' threw exception; nested exception is java.lang.IllegalArgumentException: Address 192.168.74.128:5672:5672 seems to contain an unquoted IPv6 address. Make sure you quote IPv6 addresses like so: [2001:db8:85a3:8d3:1319:8a2e:370:7348]
2. Reason
The host value already contains a port, and the port is appended again, producing 192.168.74.128:5672:5672, which the client parses as an unquoted IPv6 address; hence the message about quoting IPv6 addresses.
3. Solution
Remove the port from the host value in the application.yml file:

 rabbitmq:
  #host: 192.168.74.128:5672  # remove the port from the host value
  host: 192.168.74.128
  username: admin
  password: 123123
  virtual-host: /
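
If a non-default port ever has to be specified, it belongs in the separate port property rather than in host (a sketch extending the block above; 5672 is the default AMQP port, so the line is optional here):

 rabbitmq:
  host: 192.168.74.128
  port: 5672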