Tag Archives: Kafka

Flume reports an error when a single message is too large

org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.

Java client (producer) tuning: MAX_REQUEST_SIZE_CONFIG (max.request.size)

Kafka broker-side adjustment: message.max.bytes = 2147483640, which is close to the maximum int value, because the valid range of this setting is an int

Special note: after the broker parameter is adjusted, it has no effect on topics that already exist.
To adjust an existing topic, set its max.message.bytes (up to the maximum int value, 2147483647):

bin/kafka-configs.sh --zookeeper localhost:2181  --alter --topic topic_name   --add-config  max.message.bytes=2147483647
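On the client side, the producer setting can be sketched as follows. The broker address and the 10 MB figure are illustrative; the string key "max.request.size" is what the Java client's ProducerConfig.MAX_REQUEST_SIZE_CONFIG constant resolves to.

```java
import java.util.Properties;

public class ProducerTuning {
    // Builds producer settings with a raised per-request size cap.
    public static Properties producerProps(int maxRequestBytes) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address
        // Same key as ProducerConfig.MAX_REQUEST_SIZE_CONFIG in the Java client.
        props.put("max.request.size", Integer.toString(maxRequestBytes));
        return props;
    }

    public static void main(String[] args) {
        // Raise the client-side cap to 10 MB; the broker's message.max.bytes
        // (and the topic's max.message.bytes) must be at least as large,
        // or the broker still rejects the record.
        System.out.println(producerProps(10 * 1024 * 1024).getProperty("max.request.size"));
    }
}
```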

Problem solved.

[Solved] Kafka Error: InvalidReplicationFactorException: Replication factor: 1 larger than available brokers

Error:
InvalidReplicationFactorException: Replication factor:
1 larger than available brokers
The reason is that the ZooKeeper path configured in Kafka's server.properties does not match the one passed to the creation command

Solution:
Open server.properties
vim /opt/module/kafka/config/server.properties
View Configuration
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka
Create a topic at this time
bin/kafka-topics.sh --zookeeper hadoop102:2181/kafka --create --replication-factor 3 --partitions 1 --topic first
The --zookeeper parameter here must match the configured zookeeper.connect value, including the /kafka chroot

[Solved] Flink Error: Cannot resolve method addSource

The error looks baffling, but the real cause is that IDEA offers several candidate packages for the import, and the wrong one was chosen.

The required dependencies are as follows:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import java.util.Collections;
import java.util.List;
import java.util.Properties;
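With the correct imports resolved, the wiring is a Properties object handed to FlinkKafkaConsumer. A minimal sketch follows; the broker address, group id, and topic are illustrative, and the Flink calls are left in comments because they need the Flink runtime on the classpath.

```java
import java.util.Properties;

public class FlinkKafkaWiring {
    // Kafka connection settings consumed by FlinkKafkaConsumer's constructor.
    public static Properties kafkaProps(String bootstrap, String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap); // illustrative address
        props.put("group.id", groupId);
        return props;
    }

    public static void main(String[] args) {
        // With the Flink dependencies on the classpath, the source is added as:
        //   StreamExecutionEnvironment env =
        //       StreamExecutionEnvironment.getExecutionEnvironment();
        //   env.addSource(new FlinkKafkaConsumer<>("topic",
        //       new SimpleStringSchema(),
        //       kafkaProps("localhost:9092", "test-group")));
        System.out.println(kafkaProps("localhost:9092", "test-group").getProperty("group.id"));
    }
}
```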

Node error when PM2 starts multiple processes in Docker

2021-09-13T15:41:15: PM2 log: App [kafka:1] starting in -cluster mode-
2021-09-13T15:41:15: PM2 log: App name:kafka id:2 disconnected
2021-09-13T15:41:15: PM2 log: App [kafka:2] exited with code [0] via signal [SIGINT]
2021-09-13T15:41:15: PM2 log: App [kafka:2] starting in -cluster mode-
2021-09-13T15:41:15: PM2 log: App [kafka:1] online
2021-09-13T15:41:15: PM2 log: App [kafka:2] online
/bin/bash:1
ELF
^
SyntaxError: Invalid or unexpected token

The Docker image starts PM2 with CMD ["pm2-runtime","process.json"].

The configuration file looks like this

{
    "apps" : [
        {
            "name": "kafka",
            "script": "node main.js --NODE_ENV=test",
            "log_date_format"  : "YYYY-MM-DD HH:mm:ss",
            "log_file"   : "/home/logs/log.log",
            "error_file" : "/home/logs/err.log",
            "out_file"   : "/home/logs/out.log",
            "instances": 3,
            "exec_mode": "cluster"
        }
    ]
  }

Three Kafka instances are started in Docker, but errors keep appearing. The cause is the "exec_mode" entry in the configuration file: delete it. Also, in Docker, remember to keep the process blocking in the foreground; do not run it in the background, otherwise the container will restart repeatedly and cause errors.
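For reference, a corrected process.json sketch: "exec_mode" is removed, and the script is split into file and arguments, since in cluster mode PM2 would otherwise try to load the node binary itself as a JavaScript file, which likely explains the ELF SyntaxError above. Keeping "instances": 3 assumes the app does not bind a fixed port, otherwise the copies would collide.

```json
{
    "apps": [
        {
            "name": "kafka",
            "script": "main.js",
            "args": "--NODE_ENV=test",
            "log_date_format": "YYYY-MM-DD HH:mm:ss",
            "log_file": "/home/logs/log.log",
            "error_file": "/home/logs/err.log",
            "out_file": "/home/logs/out.log",
            "instances": 3
        }
    ]
}
```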

Kafka reports port-already-in-use errors after enabling the JMX port

After Kafka enables JMX_PORT, using the command-line tools (kafka-topics.sh, kafka-console-consumer.sh, etc.) throws an exception that the port is occupied, such as:

bash-5.1# /opt/kafka_2.13-2.7.0/bin/kafka-topics.sh --create --topic chat --partitions 5 --zookeeper 172.16.5.16:2181 --replication-factor 3
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 9999; nested exception is:
        java.net.BindException: Address in use (Bind failed)
sun.management.AgentConfigurationError: java.rmi.server.ExportException: Port already in use: 9999; nested exception is:
        java.net.BindException: Address in use (Bind failed)
        at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:480)
        at sun.management.Agent.startAgent(Agent.java:262)
        at sun.management.Agent.startAgent(Agent.java:452)
Caused by: java.rmi.server.ExportException: Port already in use: 9999; nested exception is:
        java.net.BindException: Address in use (Bind failed)
        at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:346)
        at sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:254)
        at sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:412)
        at sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147)
        at sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:237)
        at sun.rmi.registry.RegistryImpl.setup(RegistryImpl.java:213)
        at sun.rmi.registry.RegistryImpl.<init>(RegistryImpl.java:173)
        at sun.management.jmxremote.SingleEntryRegistry.<init>(SingleEntryRegistry.java:49)
        at sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:816)
        at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:468)
        ... 2 more
Caused by: java.net.BindException: Address in use (Bind failed)
        at java.net.PlainSocketImpl.socketBind(Native Method)
        at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
        at java.net.ServerSocket.bind(ServerSocket.java:392)
        at java.net.ServerSocket.<init>(ServerSocket.java:254)
        at java.net.ServerSocket.<init>(ServerSocket.java:145)
        at sun.rmi.transport.proxy.RMIDirectSocketFactory.createServerSocket(RMIDirectSocketFactory.java:45)
        at sun.rmi.transport.proxy.RMIMasterSocketFactory.createServerSocket(RMIMasterSocketFactory.java:345)
        at sun.rmi.transport.tcp.TCPEndpoint.newServerSocket(TCPEndpoint.java:670)
        at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:335)
        ... 11 more

Solution:

Modify the bin/kafka-run-class.sh file:

Find this code

if [  $JMX_PORT ]; then
  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT "
fi

Replace with the following:

ISKAFKASERVER="false"
if [[ "$*" =~ "kafka.Kafka" ]]; then
    ISKAFKASERVER="true"
fi
# Pass the JMX port only to the broker process itself; the CLI tools skip it
if [ -n "$JMX_PORT" ] && [ "$ISKAFKASERVER" = "true" ]; then
  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT "
fi

Then create the topic again; the problem is solved.

bash-5.1#
bash-5.1# /opt/kafka_2.13-2.7.0/bin/kafka-topics.sh --create --topic chat --partitions 5 --zookeeper 172.16.5.16:2181 --replication-factor 3
Created topic chat.
bash-5.1#

Grafana Error: 414 Request-URI Too Large [How to Solve]

Grafana reports error 414:
Network error: Request-URI Too Large

 INFO[09-03|06:26:09] Request Completed                        logger=context userId=1 orgId=1 uname=admin method=GET path=/api/datasources/proxy/61/api/v1/query_range status=414 remote_addr=ip time_ms=8 size=169 referer="http://ip:3000/d/qu-QZdfZz/kafka-jmx-overview?orgId=1&refresh=1m&var-env=prod&var-broker_id=All&var-percentile=All&var-topic=All&from=now-15m&to=now"

Solution:

Either add the following to the nginx configuration for the domain:

client_header_buffer_size 512k;
large_client_header_buffers 4 512k;

Or change the data source configuration to use POST requests; the default is GET.

Because the error is returned by the backend, the data source request type is what needs to be modified here.
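If the data source is provisioned from a file rather than the UI, the same switch to POST can be made there. A sketch, with illustrative names and paths:

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml (illustrative path)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090
    jsonData:
      httpMethod: POST   # avoids long query strings that trigger 414
```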

Huawei Kafka authentication error: Server not found in Kerberos database (7) - LOOKING_UP_SERVER

Original error text:

principal is [email protected]
Will use keytab
Commit Succeeded 

21/08/23 10:20:14 INFO NewConsumer: subscribe:bc_test
Exception in thread "main" java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when evaluating SASL token received from the Kafka Broker. Kafka Client will go to AUTHENTICATION_FAILED state.
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
	at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:361)
	at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:359)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:359)
	at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslClientToken(SaslClientAuthenticator.java:269)
	at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:206)
	at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:81)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:486)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:424)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:261)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:209)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:219)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:205)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:279)
	at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1149)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
	at receive.NewConsumer.doWork(NewConsumer.java:102)
	at receive.NewConsumer.main(NewConsumer.java:127)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
Caused by: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:770)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
	... 29 more
Caused by: KrbException: Server not found in Kerberos database (7) - LOOKING_UP_SERVER
	at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
	at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
	... 32 more
Caused by: KrbException: Identifier doesn't match expected value (906)
	at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
	at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
	at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
	at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
	... 38 more

 

Solution:

While troubleshooting this error from connecting to the Huawei big data platform, it turned out that Kerberos authentication itself had passed, since the words "Will use keytab" and "Commit Succeeded" appeared; the exception occurred when creating the consumer. The usual online remedies were all tried: configuring hosts, checking the generated jaas.conf file, comparing krb5 files, and so on.
Finally, replacing Kafka's dependency packages with Huawei's three packages (kafka_2.11-1.1.0.jar, kafka-clients-1.1.0.jar, zookeeper-3.5.1.jar) made it pass!

Maven local package

Maven's command to install a local jar package is:

mvn install:install-file
-Dfile= location of the jar package
-DgroupId= groupId in the POM file
-DartifactId= artifactId in the POM file
-Dversion= version in the POM file
-Dpackaging=jar

mvn install:install-file -Dfile=./kafka_2.11-1.1.0.jar -DgroupId=org.apache.kafka -DartifactId=kafka_2.11 -Dversion=1.1.0-hw -Dpackaging=jar

Maven dependency:

       <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.11</artifactId>
            <version>1.1.0-hw</version>
        </dependency>

[Solved] Kafka Error: Discovered coordinator XXXXX:9092 (id: 2147483647 rack: null) for group itstyle.

Error information:

Discovered coordinator DESKTOP-NRTTBDM:9092 (id: 2147483647 rack: null) for group itstyle.

Reason:

Kafka running on Windows advertises the machine name, not the IP address, which leads to the error.

DESKTOP-NRTTBDM is the host name of the server where the Kafka instance runs,
and 9092 is Kafka's port, i.e. the connection address of Kafka.

Solution

Modify the hosts file directly

The windows hosts file is located in

C:\Windows\System32\drivers\etc\hosts

Open it with administrator permissions and append the mapping between IP and host name:

172.18.0.52 DESKTOP-NRTTBDM

Restart the service again

Problem solved!

How to Solve Kafka Error: no leader

When sending messages to Kafka as a producer, an error is reported:

There is no leader for this topic-partition as we are in the middle of a leadership election

The specific reason is not entirely clear, but a fix was found in a GitHub issue. Following the answer in the issue, the following modifications were made:

The original KAFKA_BROKER_ID: 1 is deleted, and when starting docker compose, --no-recreate is added at the end of the command. The official explanation is that this ensures the container is not recreated, so that its name and ID are retained.

If the problem persists after the above changes, delete the Kafka container, re-run docker compose up --no-recreate, and see #516.
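For reference, the docker-compose change can be sketched as follows. The image and environment variable names follow the common wurstmeister setup and are assumptions; the point is that KAFKA_BROKER_ID is no longer pinned.

```yaml
# docker-compose.yml (sketch)
services:
  kafka:
    image: wurstmeister/kafka            # assumed image
    environment:
      # KAFKA_BROKER_ID: 1              # removed; let the broker pick its id
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

Then start with: docker-compose up --no-recreate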

Kafka Error while fetching metadata with correlation id 1 : {alarmHis=LEADER_NOT_AVAILABLE}

Environment

Spring Boot 2 + Kafka; Kafka runs as a stand-alone (single-node) environment

Error report

Error while fetching metadata with correlation id 1 : {alarmHis=LEADER_NOT_AVAILABLE}

Cause of error

The client fails to fetch metadata for the topic (correlation ID XX), typically because the broker's advertised listener address cannot be resolved or reached by the client.

Problem solving

1. Modify config\server.properties as follows:

listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092

Restart Kafka and start the program again; messages are stored successfully.

Result

Solved smoothly.

Kafka reports error: Shutdown broker because all log dirs in … have failed

While viewing topics with a Kafka tool, a redundant topic was deleted. Then the problem appeared: the Kafka service began to report an error:

ERROR Shutdown broker because all log dirs in E:\kafka\kafka_2.11-2.4.0\log have failed (kafka.log.LogManager)

Deleting the topic's log from the directory named in the error and restarting Kafka still produced the error, so deleting Kafka's log directory alone cannot solve this problem; the dataDir directory configured in ZooKeeper's zoo.cfg has to be deleted as well.

 

Note: if it is important data, be sure to back up!!!

K8s deployment of Kafka keeps reporting errors

A record of Kafka crashing repeatedly when deployed on k8s.

 ERROR [KafkaServer id=2] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: requirement failed: Configured end points 10.43.180.42:9092 in advertised listeners are already registered by broker 1
	at kafka.server.KafkaServer.$anonfun$createBrokerInfo$3(KafkaServer.scala:478)
	at kafka.server.KafkaServer.$anonfun$createBrokerInfo$3$adapted(KafkaServer.scala:476)
	at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:553)
	at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:551)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:920)
	at kafka.server.KafkaServer.createBrokerInfo(KafkaServer.scala:476)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:311)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
	at kafka.Kafka$.main(Kafka.scala:82)
	at kafka.Kafka.main(Kafka.scala)

Solution:

Rename the Deployment or StatefulSet so it is not called "kafka"; this resolved the problem.
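The requirement behind the stack trace is that every broker must advertise a unique endpoint. In a StatefulSet, each pod can advertise its own stable DNS name, for example (a sketch; POD_ORDINAL is substituted by a startup script, and the service and namespace names are illustrative):

```properties
# server.properties per pod (illustrative)
broker.id=${POD_ORDINAL}
advertised.listeners=PLAINTEXT://kafka-${POD_ORDINAL}.kafka-headless.default.svc.cluster.local:9092
```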