Tag Archives: Kafka Error

[Solved] Kafka Error: kafka.common.InconsistentClusterIdException…

1. Background

The physical machine hosting Kafka went down unexpectedly, and Kafka then failed to start.

2. Details of error report

[2022-08-09 08:20:42,097] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID 123456 doesn't match stored clusterId Some(456789) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
	at kafka.server.KafkaServer.startup(KafkaServer.scala:235)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
	at kafka.Kafka$.main(Kafka.scala:82)
	at kafka.Kafka.main(Kafka.scala)

3. Solution

The error is clear: the cluster.id stored in meta.properties does not match the Cluster ID the broker gets from ZooKeeper. Edit meta.properties so that cluster.id matches the Cluster ID reported in the error (here 123456):

# The location of the meta.properties file can be found based on the value of the log.dirs parameter in the server.properties configuration file
vim meta.properties

cluster.id=123456
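
For reference, a minimal sketch of locating and editing the file, assuming a typical installation layout (the paths are illustrative; use your own server.properties location and log.dirs value):

# find the data directory that holds meta.properties
grep '^log.dirs' /opt/kafka/config/server.properties
# e.g. log.dirs=/data/kafka/kafka-logs

# set cluster.id to the Cluster ID reported in the error (here 123456), then restart the broker
vim /data/kafka/kafka-logs/meta.properties
bin/kafka-server-start.sh -daemon config/server.properties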

Kafka can then be started normally.

[Solved] kafka Error: java.net.UnknownHostException: ls-bptysztw

Kafka connection error:

java.net.UnknownHostException: ls-bptysztw

2022-07-20 15:48:28.701  INFO 15924 --- [ntainer#0-0-C-1] org.apache.kafka.clients.Metadata        : [Consumer clientId=consumer-abc-1, groupId=abc] Cluster ID: LFbHxG8qSSu7PyPKXoDD4g
2022-07-20 15:48:28.703  INFO 15924 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-abc-1, groupId=abc] Discovered group coordinator ls-bptysztw:9092 (id: 2147483647 rack: null)
2022-07-20 15:48:30.990  WARN 15924 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-abc-1, groupId=abc] Error connecting to node ls-bptysztw:9092 (id: 2147483647 rack: null)

java.net.UnknownHostException: ls-bptysztw
	at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_144]
	at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[na:1.8.0_144]
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[na:1.8.0_144]
	at java.net.InetAddress.getAllByName0(InetAddress.java:1276) ~[na:1.8.0_144]
	at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[na:1.8.0_144]
	at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[na:1.8.0_144]
	at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27) ~[kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110) ~[kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:511) ~[kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:468) ~[kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:173) ~[kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:988) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:301) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:575) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:854) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:830) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:206) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:169) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:129) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:602) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:412) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:246) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.coordinatorUnknownAndUnready(ConsumerCoordinator.java:459) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:487) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1262) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211) [kafka-clients-3.1.1.jar:na]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollConsumer(KafkaMessageListenerContainer.java:1522) [spring-kafka-2.8.7.jar:2.8.7]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1512) [spring-kafka-2.8.7.jar:2.8.7]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1340) [spring-kafka-2.8.7.jar:2.8.7]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1252) [spring-kafka-2.8.7.jar:2.8.7]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_144]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_144]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]

Analysis

The log shows that ls-bptysztw:9092 is the address of the group coordinator, i.e. the Kafka host. The client machine cannot resolve the hostname ls-bptysztw to an IP address, which causes the UnknownHostException. Mapping the hostname to the correct IP fixes the error.

Solution:

Add the mapping to the C:\Windows\System32\drivers\etc\hosts file (the client here runs on Windows):

123.123.123.123       ls-bptysztw
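
After saving the hosts file, you can confirm that the name now resolves to the configured address (a quick check; 123.123.123.123 is just the placeholder used above):

# should now answer from 123.123.123.123
ping ls-bptysztw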

[Solved] Kafka Error: is/are not present and missingTopicsFatal is true

 

1. Error message

org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is java.lang.IllegalStateException: Topic(s) [350002000000000042] is/are not present and missingTopicsFatal is true
	at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:185)
	at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53)
	at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360)
	at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158)
	at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122)
	at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:893)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:162)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744)
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1204)
	at com.iwhalecloud.oss.res.search.ApplicationMain.main(ApplicationMain.java:21)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:51)
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:52)
Caused by: java.lang.IllegalStateException: Topic(s) [350002000000000042] is/are not present and missingTopicsFatal is true
	at org.springframework.kafka.listener.AbstractMessageListenerContainer.checkTopics(AbstractMessageListenerContainer.java:318)
	at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:136)
	at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:292)
	at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:311)
	at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:255)
	at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:182)
	... 22 common frames omitted

2. Cause analysis

This error occurs because the topic the Kafka listener subscribes to does not exist on the broker.

 

3. Solutions

1. Enable automatic topic creation on the broker (auto.create.topics.enable=true in server.properties).

2. Create the required topic in Kafka manually (see the sketch after this list).

3. Disable the fatal topic-existence check in the Spring Kafka configuration:

spring.kafka.listener.missing-topics-fatal=false
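
For option 2, the topic can be created manually with the kafka-topics.sh tool shipped with Kafka. A minimal sketch, assuming a broker at localhost:9092 and single-partition, single-replica settings (adjust to your cluster; older Kafka versions take --zookeeper instead of --bootstrap-server):

bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic 350002000000000042 --partitions 1 --replication-factor 1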

 

[Solved] Kafka Error: InvalidReplicationFactorException: Replication factor:

Error:

InvalidReplicationFactorException: Replication factor: 1 larger than available brokers

The cause is that the ZooKeeper address passed to the command does not match the zookeeper.connect value (including its chroot path) configured for Kafka, so the tool sees no available brokers.

Solution:

Open server.properties:

vim /opt/module/kafka/config/server.properties

Check the configured connection string:

zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka

Now create the topic using that same address (note the /kafka chroot):

bin/kafka-topics.sh --zookeeper hadoop102:2181/kafka --create --replication-factor 3 --partitions 1 --topic first

The --zookeeper value must match the configured zookeeper.connect, including the chroot path.
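
To confirm that the brokers are really registered under the configured chroot, you can list them in ZooKeeper (a sketch; on some Kafka versions the ls command must be typed at the interactive prompt rather than passed as an argument):

bin/zookeeper-shell.sh hadoop102:2181 ls /kafka/brokers/ids
# expected output: a list of broker ids such as [0, 1, 2]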

[Solved] Kafka Error: Discovered coordinator XXXXX:9092 (id: 2147483647 rack: null) for group itstyle.

Error information:

Discovered coordinator DESKTOP-NRTTBDM:9092 (id: 2147483647 rack: null) for group itstyle.

Reason:

Kafka running on Windows advertises itself by machine name rather than by IP address, so a client that cannot resolve that name fails to connect and reports this error.

DESKTOP-NRTTBDM is the hostname of the server where the Kafka instance is located, and 9092 is the Kafka port; together they form Kafka's connection address.

Solution

Modify the hosts file directly

The Windows hosts file is located at

C:\Windows\System32\drivers\etc\hosts

Open it with administrator privileges and append the mapping between the IP address and the hostname:

172.18.0.52 DESKTOP-NRTTBDM

Restart the service again

Problem solved!
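
An alternative to editing the hosts file on every client is to make the broker advertise an address clients can already resolve, as in the last section of this page. A minimal sketch for the broker's server.properties (the IP is the placeholder used above):

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://172.18.0.52:9092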

Kafka error: ERROR There was an error in one of the threads during logs loading: java.lang.NumberFormatException: For input string: “derby” (kafka.log.LogManager)

1. Notes on a Kafka startup error

 

After Kafka was stopped, the following error occurred on restart:

[2017-10-27 09:43:18,313] INFO Recovering unflushed segment 15000679 in log mytest-0. (kafka.log.Log)
[2017-10-27 09:43:18,972] ERROR There was an error in one of the threads during logs loading: java.lang.NumberFormatException: For input string: "derby" (kafka.log.LogManager)
[2017-10-27 09:43:18,975] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.NumberFormatException: For input string: "derby"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
        at java.lang.Long.parseLong(Long.java:589)
        at java.lang.Long.parseLong(Long.java:631)
        at scala.collection.immutable.StringLike$class.toLong(StringLike.scala:277)
        at scala.collection.immutable.StringOps.toLong(StringOps.scala:29)
        at kafka.log.Log$.offsetFromFilename(Log.scala:1648)
        at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:284)
        at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:272)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)

 

Looking at the error log, the problem is obvious:

ERROR There was an error in one of the threads during logs loading: java.lang.NumberFormatException: For input string: "derby" (kafka.log.LogManager)

In other words, one of the log-loading threads threw a java.lang.NumberFormatException because it could not parse the string "derby" as a number.

What the hell??

First, consider what Kafka does on restart: when the broker starts, it reloads the data of every topic it had before, and under normal circumstances it logs that each topic has been recovered:

INFO Recovering unflushed segment 8790240 in log userlog-2. (kafka.log.Log)

INFO Loading producer state from snapshot file 00000000000008790240.snapshot for partition userlog-2 (kafka.log.ProducerStateManager)

INFO Loading producer state from offset 10464422 for partition userlog-2 with message format version 2 (kafka.log.Log)

INFO Loading producer state from snapshot file 00000000000010464422.snapshot for partition userlog-2 (kafka.log.ProducerStateManager)

INFO Completed load of log userlog-2 with 2 log segments, log start offset 6223445 and log end offset 10464422 in 4460 ms (kafka.log.Log)

 

But when data recovery fails for some topic, the broker shuts down and reports:

ERROR There was an error in one of the threads during logs loading: java.lang.NumberFormatException: For input string: "derby" (kafka.log.LogManager)

Now it is clear that the problem lies in the topic data. But what exactly is wrong?

Go to the directory where Kafka stores topic data; the path is set in server.properties:

log.dirs=/data/kafka/kafka-logs

1) The line of the error log just before the exception shows that the problem occurred while loading the topic mytest-0. Going into that topic's directory, there is an illegal file named derby.log. Delete it and restart the service.

2) Check the whole data directory to make sure no similar files remain:

# cd /data/kafka/kafka-logs
# find /data/kafka/kafka-logs/ -name "derby*"

The derby.log file under the topic directory mytest-0 is illegal because the Kafka broker requires every data file name to parse as a Long offset. Simply delete this file and restart Kafka.
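
A minimal sketch of the cleanup, using the log.dirs value above (adjust the paths to your installation):

# delete the stray non-Kafka file from the topic directory, then restart the broker
rm /data/kafka/kafka-logs/mytest-0/derby.log
bin/kafka-server-start.sh -daemon config/server.properties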

 

2. Notes on a Kafka/ZooKeeper error

Both Kafka and ZooKeeper started normally, but the log shows the connection being dropped shortly after it is established. The error message is as follows:

[2017-10-27 15:06:08,981] INFO Established session 0x15f5c88c014000a with negotiated timeout 240000 for client /127.0.0.1:33494 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-10-27 15:06:08,982] INFO Processed session termination for sessionid: 0x15f5c88c014000a (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-10-27 15:06:08,984] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)
EndOfStreamException: Unable to read additional data from client sessionid 0x15f5c88c014000a, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:239)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
        at java.lang.Thread.run(Thread.java:745)

 

Reading the log literally: the first line says that session 0x15f5c88c014000a was established with a negotiated timeout of 240000 ms; the second says that same session was then terminated; the third warns that no more data can be read from it, most likely because the client closed the socket. So the analysis is that the session is timing out and being dropped, and the fix is to increase the session and connection timeouts.

In other words, the configured timeout is too short: ZooKeeper has not finished reading the consumer's data before the consumer disconnects.

 

Solution:

Modify kafka’s server.properties file:

# Timeout in ms for connecting to zookeeper

zookeeper.connection.timeout.ms=600000

zookeeper.session.timeout.ms=400000

 

Generally that is enough. To be safe, you can also change the ZooKeeper configuration file (zoo.cfg); by default ZooKeeper caps the negotiated session timeout at 20 × tickTime, so a larger tickTime lets the longer timeouts above take effect:

# disable the per-ip limit on the number of connections since this is a non-production config

maxClientCnxns=1000

tickTime=120000
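
To check the values ZooKeeper is actually running with (tickTime, min/max session timeout, maxClientCnxns), you can query it with a four-letter-word command. A sketch, noting that newer ZooKeeper versions require conf to be whitelisted via 4lw.commands.whitelist:

# print ZooKeeper's effective configuration
echo conf | nc localhost 2181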

 

How to Solve Kafka Error: no leader

When sending messages to Kafka as a producer, the following error is reported:

There is no leader for this topic-partition as we are in the middle of a leadership election

The exact root cause is not clear, but a fix was found in a related GitHub issue. Following the answer under that issue, the following changes were made:

Remove the original KAFKA_BROKER_ID: 1 setting, and add --no-recreate at the end of the command when starting docker-compose. The official explanation is that this ensures the container is not recreated, so it keeps its name and ID.

If the problem persists after these changes, delete the Kafka container, re-run docker-compose up --no-recreate, and see #516.
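
A minimal sketch of the restart sequence described above (the container and service names are assumptions; use the ones from your own docker-compose.yml):

# remove the existing Kafka container, then bring the stack up without recreating containers that still exist
docker rm -f kafka
docker-compose up -d --no-recreate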

Kafka Error while fetching metadata with correlation id 1 : {alarmHis=LEADER_NOT_AVAILABLE}

Environment

Spring Boot 2 + Kafka; Kafka runs as a standalone (single-node) instance.

Error report

Error while fetching metadata with correlation id 1 : {alarmHis=LEADER_NOT_AVAILABLE}

Cause of error

The client fails while fetching topic metadata (correlation ID 1); with a standalone broker this typically happens because the broker's listener/advertised address is not set to an address the client can reach, so no available leader is returned for the topic.

Solution

1. Modify config\server.properties as follows:

listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092

Restart Kafka and start the application again.
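
A quick way to confirm the fix is to produce and consume a test message with Kafka's console tools (a sketch; on Windows use the .bat scripts under bin\windows, and older releases take --broker-list/--zookeeper instead of --bootstrap-server):

bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic alarmHis
# in another terminal:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic alarmHis --from-beginning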

Result

The problem is solved.