Tag Archives: Kafka

Kafka Error while fetching metadata with correlation id 1 : {alarmHis=LEADER_NOT_AVAILABLE}


Spring Boot 2 + Kafka, with Kafka running as a stand-alone (single-broker) environment

Error report

Error while fetching metadata with correlation id 1 : {alarmHis=LEADER_NOT_AVAILABLE}

Cause of error

The client fails to fetch metadata for the topic (error fetching metadata with correlation id XX). In a stand-alone setup this usually means the broker's listener address is not configured, so the client cannot reach the partition leader and gets LEADER_NOT_AVAILABLE.

Solution

    1. Modify

config\server.properties as follows:
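The config snippet itself is missing from the original post. A common fix for single-broker LEADER_NOT_AVAILABLE (an assumption, not necessarily the author's exact edit) is to set the listener addresses explicitly so clients can reach the broker:

```properties
# Assumed fix: bind and advertise an address the client can actually reach.
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
```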


Restart Kafka and start the application again; messages are now stored successfully and the error is gone.

Kafka error: Shutdown broker because all log dirs in … have failed

While viewing topics in a Kafka GUI tool, there was one extra topic, so it was deleted. Then the trouble started: the Kafka service began to report an error:

ERROR Shutdown broker because all log dirs in E:\kafka\kafka_2.11-2.4.0\log have failed (kafka.log.LogManager)

Deleting that topic's folder under the log directory named in the error and restarting Kafka still reproduced the error. Deleting Kafka's log directory alone cannot fix this; the dataDir directory configured in ZooKeeper's zoo.cfg has to be deleted as well.


Note: if the data is important, be sure to back it up first!!!
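Before touching the real directories, a backup along these lines is worth running. The paths below are illustrative stand-ins: point DATA_DIR at the real ZooKeeper dataDir from zoo.cfg (and do the same for Kafka's log.dirs).

```shell
# Illustrative backup before clearing a data directory.
# DATA_DIR is a placeholder; substitute your real ZooKeeper dataDir / Kafka log.dirs.
DATA_DIR=/tmp/zk-demo/data
BACKUP_DIR=/tmp/zk-demo/backup
mkdir -p "$DATA_DIR"
echo snapshot > "$DATA_DIR/version-2"   # stand-in for real ZooKeeper files
mkdir -p "$BACKUP_DIR"
cp -r "$DATA_DIR"/. "$BACKUP_DIR"/      # back up everything first
rm -rf "${DATA_DIR:?}"/*                # only now clear the original
ls "$BACKUP_DIR"
```

The `${DATA_DIR:?}` expansion aborts the `rm -rf` if the variable is ever empty, which keeps a typo from wiping `/`.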

Kafka deployed on K8s keeps reporting errors

A record of Kafka crash-looping every time it was deployed on k8s.

 ERROR [KafkaServer id=2] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: requirement failed: Configured end points in advertised listeners are already registered by broker 1
	at kafka.server.KafkaServer.$anonfun$createBrokerInfo$3(KafkaServer.scala:478)
	at kafka.server.KafkaServer.$anonfun$createBrokerInfo$3$adapted(KafkaServer.scala:476)
	at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:553)
	at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:551)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:920)
	at kafka.server.KafkaServer.createBrokerInfo(KafkaServer.scala:476)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:311)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
	at kafka.Kafka$.main(Kafka.scala:82)
	at kafka.Kafka.main(Kafka.scala)


Changing the name of the Deployment/StatefulSet to something other than “kafka” solved it. The likely reason: when a Service named kafka exists, Kubernetes injects service-link environment variables such as KAFKA_PORT into the pods, which can interfere with the broker's startup configuration.
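A minimal sketch of the rename (all names and the image here are illustrative, not from the original manifest; the point is only that metadata.name is not literally "kafka"):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-broker        # anything except "kafka"
spec:
  serviceName: kafka-broker-hs
  replicas: 3
  selector:
    matchLabels:
      app: kafka-broker
  template:
    metadata:
      labels:
        app: kafka-broker
    spec:
      containers:
        - name: broker
          image: wurstmeister/kafka:2.12-2.4.0   # illustrative image
          ports:
            - containerPort: 9092
```

Remember to rename the matching Service as well, since it is the Service name that drives the injected environment variables.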

Kafka prompts no brokers found when trying to rebalance

Kafka prompts when executing the following command:

bin/kafka-console-consumer.sh --zookeeper localhost102:2181 --topic test

WARN [console-consumer-87796_localhost002-1592779486563-9b43649b], no brokers found when trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)

The reason: either the Kafka process has not been started, or there is no Kafka broker information registered in ZooKeeper.

[root@localhost ~]# jps
3667 DataNode
3365 ResourceManager
21446 QuorumPeerMain
23386 Jps
3230 NodeManager

Starting Kafka solved the problem.

Centos7 installing Kafka

Download Kafka
Website: http://kafka.apache.org/
Downloads page: http://kafka.apache.org/downloads
System environment
1. Operating system: CentOS 7, 64-bit
2. JDK version: 1.8.0_271
3. ZooKeeper version: zookeeper-3.4.6
4. Three servers deployed successfully: master, node1, node2

[root@master /]# timedatectl set-timezone Asia/Shanghai
[root@master /]# timedatectl

Run the same two commands on node1 and node2 as well.

[root@master /]# yum -y install ntp
[root@master /]# ntpdate ntp1.aliyun.com
hostnamectl set-hostname master   # on the first server
hostnamectl set-hostname node1    # on the second server
hostnamectl set-hostname node2    # on the third server
/etc/hosts on all three machines:

127.0.0.1   localhost
::1         localhost
192.168.93.150 master
192.168.93.152 node1
192.168.93.155 node2

If you also need to change the hosts file on Windows 10, copy C:\Windows\System32\drivers\etc\hosts to the desktop, add the lines above, and copy it back to replace the original.

props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "node1:9092,node2:9092");
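Fleshing that line out into a minimal, self-contained sketch (the hostnames node1/node2 come from the hosts file above; the group id and the deserializer class names are assumptions for illustration):

```java
import java.util.Properties;

public class ConsumerProps {
    // Build consumer properties addressing brokers by the hostnames from /etc/hosts.
    static Properties build() {
        Properties props = new Properties();
        // Same value that ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG would key.
        props.put("bootstrap.servers", "node1:9092,node2:9092");
        props.put("group.id", "demo-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("bootstrap.servers"));
    }
}
```

If name resolution for node1/node2 fails on the client machine, the consumer will hit exactly the metadata errors described in these posts, which is why the hosts file matters.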
1. Unpack Kafka
tar -zxvf /usr/local/soft/kafka_2.11-2.2.0.tgz -C /usr/local/
2. Configure Kafka
[root@localhost]# vi /usr/local/kafka_2.11-2.2.0/config/server.properties
Kafka's configuration information is all in server.properties. Find the following settings (uncomment them if they are commented out) and edit them.
Modify the directory where the logs are stored:
log.dirs=/usr/local/kafka_2.11-2.2.0/kafka-logs
Add the following three configurations at the bottom of the file (zookeeper.connect lists the addresses of all three ZooKeeper servers):
broker.id=1
zookeeper.connect=master:2181,node1:2181,node2:2181

listeners=PLAINTEXT://master:9092

[root@localhost local]# cd kafka_2.11-2.2.0/
[root@localhost kafka_2.11-2.2.0]# mkdir kafka-logs

Note: for a stand-alone installation the defaults are fine and nothing needs to change. Since we are configuring a cluster, the following parameters must be set:
1) broker.id: must be different on every machine
2) zookeeper.connect: I have 3 ZooKeeper servers, so all 3 addresses must be listed
3) listeners: must be set when configuring a cluster, otherwise later operations will fail to find the leader, e.g.:
WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 40 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
4) zookeeper.connect is configured identically on all servers, but broker.id and listeners must differ per server
5) Copy Kafka to the two other servers
[root@localhost config]# scp -r /usr/local/kafka_2.11-2.2.0 root@node1:/usr/local/
[root@localhost config]# scp -r /usr/local/kafka_2.11-2.2.0 root@node2:/usr/local/
You will be prompted for the target machine's password. Then modify broker.id and listeners on the two servers as follows:
[root@node1]# vi /usr/local/kafka_2.11-2.2.0/config/server.properties
On node1, change broker.id to 2 and listeners to PLAINTEXT://node1:9092; zookeeper.connect stays the same:
broker.id=2
zookeeper.connect=master:2181,node1:2181,node2:2181
listeners=PLAINTEXT://node1:9092
[root@node2]# vi /usr/local/kafka_2.11-2.2.0/config/server.properties
On node2, change broker.id to 3 and listeners to PLAINTEXT://node2:9092; zookeeper.connect stays the same:
broker.id=3
zookeeper.connect=master:2181,node1:2181,node2:2181
listeners=PLAINTEXT://node2:9092
If the firewall is already turned off, this step can be skipped. Port 9092 must be opened on all three machines; Kafka communicates over port 9092 by default, which is the listeners port configured above.
[root@localhost config]# firewall-cmd --zone=public --add-port=9092/tcp --permanent
[root@localhost config]# firewall-cmd --reload

Start ZooKeeper on all three machines:
[root@master /]# /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
[root@node1 /]# /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
[root@node2 /]# /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
10. Start Kafka on all three machines:
[root@localhost /]# /usr/local/kafka_2.11-2.2.0/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.11-2.2.0/config/server.properties

[root@localhost /]# jps
87 Jps
9224 Kafka
[root@localhost /]# cd /usr/local/kafka_2.11-2.2.0/
[root@localhost kafka_2.11-2.2.0]# bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 1 --partitions 1 --topic test
If successful, it outputs: Created topic test.
13. View topics
The topic was created on one machine, but the other two machines can also see it from their clients:
[root@localhost /]# cd /usr/local/kafka_2.11-2.2.0/
[root@localhost kafka_2.11-2.2.0]# bin/kafka-topics.sh --list --zookeeper master:2181
Note: the ZooKeeper address here can be any of the three servers; topics can be seen on all of them.

[root@localhost kafka_2.11-2.2.0]# bin/kafka-console-producer.sh --broker-list master:9092 --topic test
> a
> b
> c
# bin/kafka-console-consumer.sh --bootstrap-server master:9092 --topic test --from-beginning


Kafka connection exception org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

Introduction: many posts online attribute this to the hosts file. After testing, I found that is not the cause here. Here are three fixes that cover most cases:

1. Firewall port not opened or closed

The firewall is usually off for local testing but on in production, so it is recommended to open the port rather than disable the firewall.

add port:

# --permanent makes the rule persist; without this flag the rule is lost after a restart
firewall-cmd --zone=public --add-port=9092/tcp --permanent

delete port:

firewall-cmd --zone=public --remove-port=80/tcp --permanent

view all ports:

 firewall-cmd --zone=public --list-ports

view specified port:

firewall-cmd --zone=public --query-port=9092/tcp

view port status:

netstat -tunlp


reload firewall configuration (required after --permanent changes):

firewall-cmd --reload

firewalld commands (CentOS 7 manages firewalld through systemctl; `service firewalld enable` is not valid):

systemctl start firewalld
systemctl enable firewalld    (permanent: auto-start on boot)
systemctl stop firewalld      (takes effect only until reboot)
systemctl disable firewalld   (permanent)
systemctl restart firewalld
systemctl status firewalld

2. Kafka's service entry address is not specified

Edit server.properties in Kafka's config directory and add the externally reachable address the service should advertise:
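The snippet is missing from the original. A typical setting for external access looks like the following, where the advertised IP is a placeholder you must replace with your server's real, externally reachable address:

```properties
# Bind on all interfaces, but advertise an address that clients outside the host can reach.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<server-public-ip>:9092
```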


3. The kafka version in the project is inconsistent with the kafka version installed on the server

Modify the Maven pom.xml to specify the Kafka client coordinates matching the broker.
I installed kafka_2.12-2.1.0, so the corresponding coordinates are:
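The coordinates were cut off in the original post; for a kafka_2.12-2.1.0 broker the matching Java client dependency is:

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.1.0</version>
</dependency>
```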