Tag Archives: Kafka

Kafka Environment Build and Startup Error: ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown

Solution: Add these two items to the server.properties configuration file:

listeners=PLAINTEXT://xx.xx.xx.xx(server intranet IP address):9092
advertised.listeners=PLAINTEXT://xx.xx.xx.xx(server external IP address):9092
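On a cloud host the public IP is usually not bound to any local network interface, so Kafka must bind to the intranet address (or 0.0.0.0) and only advertise the public address to clients; binding directly to the public IP is what produces the "Cannot assign requested address" error below. A filled-in sketch (the intranet address 172.16.0.10 is a hypothetical placeholder; the public IP is the one from the error log):

    # the address Kafka actually binds to: must exist on a local interface
    listeners=PLAINTEXT://172.16.0.10:9092
    # the address handed out to clients: the server's public IP
    advertised.listeners=PLAINTEXT://47.100.19.248:9092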

Kafka server startup failure log:

[2022-09-29 16:04:58,630] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Socket server failed to bind to 47.100.19.248:9092: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:778)
at kafka.network.Acceptor.<init>(SocketServer.scala:672)
at kafka.network.DataPlaneAcceptor.<init>(SocketServer.scala:531)
at kafka.network.SocketServer.createDataPlaneAcceptor(SocketServer.scala:287)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:267)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:261)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:261)
at kafka.network.SocketServer.startup(SocketServer.scala:135)
at kafka.server.KafkaServer.startup(KafkaServer.scala:309)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:461)
at java.base/sun.nio.ch.Net.bind(Net.java:453)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:80)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:774)
... 13 more
[2022-09-29 16:04:58,632] INFO [KafkaServer id=1] shutting down (kafka.server.KafkaServer)

[Solved] Kafka Restarts error | Cloudera Manager Access Returns 500 | HDFS Startup Error

Hi~ Long time no update

1. Problems to watch out for after restarting Kafka:
While running, Kafka keeps a write file a open at the target storage location. File a stays in write state for a while, usually about an hour (the exact duration depends on each cluster's configuration), before a new write file b is rolled and the previous file a is closed. If you restart during that window, a new write file is created after the restart while the previous file a is left stuck in write state. Reading or writing file a then reports an error, and so do Hive queries that import it (loading into the Hive table does not report an error, but selecting from it does), because the file is permanently in write state and cannot be operated on. This is also called a write lock (which I believe everyone has heard of).
Solution: We need to manually terminate the write state of these files. First determine which files are in write state by executing this on the command line:
hdfs fsck /data/logs/ -openforwrite
(/data/logs/ is the directory containing the files; change it to wherever your files are located.)
Every file the command displays is in write state.


After seeing the open write files, execute the command that stops them all. Why stop all of them? Logically only the previous write file needs to be stopped, but stopping all of them also solves the problem and is simpler and more brute-force: a manual stop automatically rolls a new write file, so it is safe to stop everything. Now execute:
hdfs debug recoverLease -path /logs/common_log/2022-09-16/FlumeData.1663292498820.tmp -retries 3
(the path is one shown in the previous command's output)
Run the command once per file and the problem is solved. One more thing: if the file has already been loaded into Hive, you need to look under /user/warehouse/hive/ for the file in write state.
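If many files are open, a small shell loop can handle them in one go (a sketch only; it assumes the /data/logs path from above and that fsck prints each open file's path as the first field on lines containing OPENFORWRITE):

    # list files open for write, pull out their paths, recover each lease
    hdfs fsck /data/logs/ -openforwrite 2>/dev/null | grep OPENFORWRITE \
      | awk '{print $1}' | while read -r f; do
        hdfs debug recoverLease -path "$f" -retries 3
    done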


2. Cloudera Manager (CDH) returns a 500 error when accessed from the browser:
① First check the /etc/hosts file; keep only these two lines plus the cluster's intranet IP mappings:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

② Also check whether the CM-related ports are blocked by the firewall.

③ Then restart CM. Execute on the NameNode:
systemctl stop cloudera-scm-server
Then execute on each node:
systemctl stop cloudera-scm-agent

Next, on the NameNode:
systemctl start cloudera-scm-server
Then execute on each node:
systemctl start cloudera-scm-agent
Attention!!! The execution order of these commands must not be reversed, otherwise the cluster may fail to start.
Afterwards you can check the result with systemctl status cloudera-scm-server and systemctl status cloudera-scm-agent.

3. If CM starts and can be accessed, but starting HDFS reports error 1 or 2:
1. Unable to retrieve non-local non-loopback IP address. Seeing address: cm/127.0.0.1
2. ERROR ScmActive-0:com.cloudera.server.cmf.components.ScmActive: ScmActive was not able to access CM identity to validate it. 2017-04-18 09:40:29,308 ERROR ScmActive-0

Then congratulations, there is a solution.
First find CM's backing database. It was configured during installation; if you don't know the credentials, ask whoever installed it. It is almost always on the NameNode (don't ask me for the account and password~). Then run show databases; and you will see a cm or scm database.


Use this database, then run show tables;
You will see a table called HOSTS. View its data: select * from HOSTS;


You will find one row that is different: its NAME and IP_ADDRESS do not match the host. Modify them back to the intranet hostname and IP_ADDRESS; I believe everyone can manage the update! Then restart CM, and it's done!
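For reference, the fix is a plain UPDATE on that table. A sketch only: the hostname, IP, and WHERE condition below are hypothetical placeholders for the bad row found in your select output; note the old values before changing them, since CM reads its identity from here.

    -- point the broken row back at the intranet hostname and IP (placeholders)
    UPDATE HOSTS
    SET NAME = 'namenode', IP_ADDRESS = '192.168.1.10'
    WHERE HOST_ID = 1;  -- match the bad row from your select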

[Solved] Kafka startup Error: ERROR Fatal error during KafkaServer startup. Prepare to shutdown

1. Error Message:

ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentBrokerIdException: Configured broker.id 0 doesn’t match stored broker.id Some(1) in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
at kafka.server.KafkaServer.getOrGenerateBrokerId(KafkaServer.scala:793)
at kafka.server.KafkaServer.startup(KafkaServer.scala:221)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)

 

2. Cause
The id value in meta.properties (path: /opt/kafka/logs) does not match the broker.id in server.properties under /opt/kafka/config.

How it happened: files on the Linux machine were deleted by mistake, to the point that even a command like cd no longer worked. Fortunately other nodes were still usable, and after some backtracking ZooKeeper ran successfully. When running Kafka, however, it reported ERROR Fatal error during KafkaServer startup. Prepare to shutdown, with the first line of the error message as shown above.

In my case meta.properties had already been modified; originally it was broker.id=1.

 

3. Full error log

    [2022-06-18 14:32:02,309] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
    kafka.common.InconsistentBrokerIdException: Configured broker.id 0 doesn't match stored broker.id Some(1) in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
        at kafka.server.KafkaServer.getOrGenerateBrokerId(KafkaServer.scala:793)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:221)
        at kafka.Kafka$.main(Kafka.scala:109)
        at kafka.Kafka.main(Kafka.scala)
    [2022-06-18 14:32:02,323] INFO shutting down (kafka.server.KafkaServer)
    [2022-06-18 14:32:02,354] INFO [feature-zk-node-event-process-thread]: Shutting down (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
    [2022-06-18 14:32:02,360] INFO [feature-zk-node-event-process-thread]: Stopped (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
    [2022-06-18 14:32:02,383] INFO [feature-zk-node-event-process-thread]: Shutdown completed (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
    [2022-06-18 14:32:02,406] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
    [2022-06-18 14:32:02,568] INFO Session: 0x1000059c2cd0000 closed (org.apache.zookeeper.ZooKeeper)
    [2022-06-18 14:32:02,583] INFO EventThread shut down for session: 0x1000059c2cd0000 (org.apache.zookeeper.ClientCnxn)
    [2022-06-18 14:32:02,588] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
    [2022-06-18 14:32:02,621] INFO App info kafka.server for 0 unregistered (org.apache.kafka.common.utils.AppInfoParser)
    [2022-06-18 14:32:02,624] INFO shut down completed (kafka.server.KafkaServer)
    [2022-06-18 14:32:02,625] ERROR Exiting Kafka. (kafka.Kafka$)
    [2022-06-18 14:32:02,655] INFO shutting down (kafka.server.KafkaServer)

4. Solution
Having found the cause, change broker.id=0 and broker.id=1 to the same value, then restart; the fatal KafkaServer startup error is gone.
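A quick way to compare the two values (a sketch; the /opt/kafka paths are the ones from this post, so adjust them to your install and log.dirs location):

    # show the stored broker id and the configured one side by side
    grep '^broker.id' /opt/kafka/logs/meta.properties
    grep '^broker.id' /opt/kafka/config/server.properties
    # if they differ, edit one so they match, e.g.:
    # sed -i 's/^broker.id=0$/broker.id=1/' /opt/kafka/config/server.properties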

[Solved] Flink SQL CDC MySQL to Kafka Connect Error: org.apache.flink.table.api.ValidationException…

Error Messages: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'debezium-json' that implements 'org.apache.flink.table.factories.SerializationFormatFactory' in the classpath.
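For context, this exception typically appears when a Kafka table is declared with the debezium-json format, which is provided by the flink-json module. A hypothetical sketch (table name, columns, topic, and address are placeholders):

    CREATE TABLE kafka_sink (
      id INT,
      name STRING
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'mysql_cdc_topic',
      'properties.bootstrap.servers' = 'localhost:9092',
      'format' = 'debezium-json'
    );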


Check whether any dependency was forgotten in the imports.
Here I had not imported the flink-json package:

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-json</artifactId>
            <version>${flink.version}</version>
        </dependency>

The import is successful and Kafka can be connected normally~

Spark SQL Export Data to Kafka Error [How to Solve]

Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide"

This error is caused by the missing spark-sql-kafka-0-10_2.11-2.4.5.jar dependency.

Download the jar package, put it on the server, and add it to the submit command:

--jars spark-sql-kafka-0-10_2.11-2.4.5.jar

The error is still reported, but this time it is:

ommandExec.sideEffectResult(commands.scala:69)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:87)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:177)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:173)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:201)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:198)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:173)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:93)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:91)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:727)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:727)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1$$anonfun$apply$1.apply(SQLExecution.scala:95)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:144)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:86)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:789)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:63)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:727)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:313)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:288)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:694)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.common.serialization.ByteArrayDeserializer
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
	... 45 more

Check the Spark directory: there is no kafka-clients jar package there either.

Just add the kafka-clients dependency jar to the submit command as well:

spark-submit --master yarn --deploy-mode cluster --jars spark-sql-kafka-0-10_2.11-2.4.5.jar,kafka-clients-2.0.0.jar

Resubmit, and the problem is solved.
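As an alternative (not tested here; it assumes the cluster can reach a Maven repository), passing Maven coordinates with --packages resolves the Kafka integration together with its transitive kafka-clients dependency, so neither jar needs to be downloaded by hand (your-app.jar is a placeholder):

    spark-submit --master yarn --deploy-mode cluster \
      --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.5 \
      your-app.jar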

[Solved] Spring Kafka Error Sending to a Specified Partition: Topic radar not present in metadata after 60000 ms

Error Messages:
org.apache.kafka.common.errors.TimeoutException: Topic radar not present in metadata after 60000 ms.

2022-06-27 16:31:22.734 ERROR 11236 --- [  XNIO-1 task-1] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='radar-statistics-data-0' and payload='{"time":"2022-6-27 16:27:56","index":0,"id":16563186227240,"type":"statistics data"}' to topic radar and partition 3:

org.apache.kafka.common.errors.TimeoutException: Topic radar not present in metadata after 60000 ms.

2022-06-27 16:31:22.737 ERROR 11236 --- [  XNIO-1 task-1] c.n.radar.web.rest.RadarKafkaResource    : Exception in testKafka() with cause = 'org.apache.kafka.common.errors.TimeoutException: Topic radar not present in metadata after 60000 ms.' and exception = 'Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic radar not present in metadata after 60000 ms.'

org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic radar not present in metadata after 60000 ms.
	at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:666)
	at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:429)
	at com.newatc.collect.config.KafkaProducer.sendProducerRecord(KafkaProducer.java:66)
	at com.newatc.radar.web.rest.RadarKafkaResource.testKafka(RadarKafkaResource.java:95)
	at com.newatc.radar.web.rest.RadarKafkaResource$$FastClassBySpringCGLIB$$56927135.invoke(<generated>)
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:783)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
	at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
	at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:89)
	at com.newatc.radar.aop.logging.LoggingAspect.logAround(LoggingAspect.java:103)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:634)
	at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:624)
	at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:72)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
	at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:698)
	at com.newatc.radar.web.rest.RadarKafkaResource$$EnhancerBySpringCGLIB$$14f2a630.testKafka(<generated>)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808)
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:497)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:584)
	at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
	at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:327)
	at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:115)
	at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:81)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:122)
	at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:116)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:126)
	at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:81)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.oauth2.client.web.OAuth2AuthorizationCodeGrantFilter.doFilterInternal(OAuth2AuthorizationCodeGrantFilter.java:168)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:109)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:149)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.oauth2.server.resource.web.BearerTokenAuthenticationFilter.doFilterInternal(BearerTokenAuthenticationFilter.java:121)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.oauth2.client.web.OAuth2AuthorizationRequestRedirectFilter.doFilterInternal(OAuth2AuthorizationRequestRedirectFilter.java:178)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:103)
	at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:89)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)
	at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:110)
	at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:80)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:55)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
	at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:211)
	at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:183)
	at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:354)
	at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:267)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:96)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
	at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
	at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
	at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
	at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
	at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:117)
	at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
	at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
	at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
	at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
	at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
	at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
	at io.undertow.servlet.handlers.SendErrorPageHandler.handleRequest(SendErrorPageHandler.java:52)
	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
	at io.undertow.servlet.handlers.SessionRestoringHandler.handleRequest(SessionRestoringHandler.java:119)
	at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:275)
	at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:79)
	at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:134)
	at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:131)
	at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
	at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
	at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:255)
	at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:79)
	at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:100)
	at io.undertow.server.Connectors.executeRootHandler(Connectors.java:387)
	at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:852)
	at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
	at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2019)
	at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1558)
	at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1449)
	at org.xnio.XnioWorker$WorkerThreadFactory$1$1.run(XnioWorker.java:1280)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.common.errors.TimeoutException: Topic radar not present in metadata after 60000 ms.

Solution:

  • First check whether the topic exists. At the beginning there was no such topic in Kafka, but spring.kafka.listener.missing-topics-fatal was set to false in the yml configuration file, so no error is reported for the missing topic and it gets created automatically on first use.
  • Checking Kafka's server.properties shows the number of partitions is set to 1 (num.partitions=1), so a topic created by default gets only one partition.
  • The code sends data to a partition other than partition 0; that partition does not exist, so the error is reported.
  • Partitions can be created manually from the command line: kafka-topics --bootstrap-server localhost:9092 --create --topic radar --partitions 3 --replication-factor 1
  • Or create a NewTopic with @Bean, as in the snippet below.

 

    /**
     * When the project starts, create the topic automatically,
     * specifying the number of partitions and replicas.
     * @return Topic
     */
    @Bean
    public NewTopic topic() {
        // NewTopic(name, numPartitions, replicationFactor)
        return new NewTopic("radar", 3, (short) 1);
    }
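Once the topic has enough partitions, sending to a specific one works again. A minimal sketch, assuming an autowired KafkaTemplate<String, String> field named kafkaTemplate (the names and message body are placeholders; the topic, partition, and key match the error log above):

    String payload = "{\"type\":\"statistics data\"}";  // hypothetical message body
    // send(topic, partition, key, payload) targets partition 3 explicitly
    kafkaTemplate.send("radar", 3, "radar-statistics-data-0", payload);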

 

[Solved] Win 10 Kafka error: failed to construct Kafka consumer

After updating the code, the build succeeds, but an error is reported when the project starts: failed to construct kafka consumer.

Clearly the problem lies with Kafka. After looking at the configuration, it came down to the broker address.

Instead of configuring an IP directly, I mapped a host name in the local hosts file:

Add the entry: 127.0.0.1 kafka-server

Then change Kafka's service address to kafka-server:9092 in the project's application configuration file.
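In a Spring Boot project that typically looks like the following (a sketch; the property path assumes spring-kafka is used):

    # application.yml
    spring:
      kafka:
        bootstrap-servers: kafka-server:9092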

Mac M1 Start Virtual Machine CentOS 8 with PD to Install Kafka Error: Error: VM option 'UseG1GC'

Error message:

Error: VM option 'UseG1GC' is experimental and must be enabled via -XX:+UnlockExperimentalVMOptions.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

Following the error message, open bin/kafka-run-class.sh under the Kafka installation directory with vim, search for the option with /UseG1GC, and delete that piece of configuration.

Then restart Kafka. If similar problems appear, check for similar suspicious options and try deleting them, remembering to back up the file first.
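In a stock kafka-run-class.sh the option sits in KAFKA_JVM_PERFORMANCE_OPTS, so the edit looks roughly like this (a sketch; the surrounding flags vary between Kafka versions):

    # before
    KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 ..."
    # after: drop -XX:+UseG1GC (or instead prepend -XX:+UnlockExperimentalVMOptions)
    KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:MaxGCPauseMillis=20 ..."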

Kafka Topic Creation Script Error: ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replicati

Problem:

To test the integration of Spark Streaming and Kafka in code, two topics had to be created in Kafka in advance, but the creation script reported the following error:

 kafka-topics.sh --zookeeper linux1:2181,linux2:2181,linux3:2181 --create --topic wufabao_topic01 --replication-factor 2 --partitions 3

WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Error while executing topic command : Replication factor: 2 larger than available brokers: 0.
[2022-02-09 17:27:18,432] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 2 larger than available brokers: 0.
 (kafka.admin.TopicCommand$)

Reason:

The ZooKeeper metadata path configured for Kafka is not the one the script used. The metadata path configured in Kafka is:
zookeeper.connect=linux1:2181,linux2:2181,linux3:2181/myKafka

Solution:

Include Kafka's metadata path (the /myKafka chroot) in the script:
kafka-topics.sh --zookeeper linux1:2181,linux2:2181,linux3:2181/myKafka --create --topic wufabao_topic01 --replication-factor 2 --partitions 3
Created successfully.
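The "available brokers: 0" part is the giveaway: without the chroot, the tool looks for broker registrations at the wrong ZooKeeper path. A quick way to confirm (a sketch; zookeeper-shell.sh ships in Kafka's bin directory):

    # brokers registered under the cluster's chroot
    bin/zookeeper-shell.sh linux1:2181 ls /myKafka/brokers/ids
    # the same query without the chroot finds nothing
    bin/zookeeper-shell.sh linux1:2181 ls /brokers/ids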

How to Solve Doris dynamic partition table routineload Error

Error:

Reason: no partition for this tuple.tuple=…

Analysis:

Data arrives from Kafka, but the dynamic partition table has not created the time partition this data falls into.

Solution:

#Adding a partition to a dynamic partition

## Dynamic partition to static partition
ALTER TABLE ods_log_outlog_course_ydyjs_app SET ("dynamic_partition.enable" = "false");

## Add uncreated partitions
ALTER TABLE course_log.ods_log_outlog_course_ydyjs_app
        ADD PARTITION p20220307 VALUES [("2022-03-07"), ("2022-03-08"));
## Add uncreated partitions        
ALTER TABLE course_log.ods_log_outlog_course_ydyjs_app
        ADD PARTITION p20220308 VALUES [("2022-03-08"), ("2022-03-09"));

## Check if the creation is successful        
show partitions from ods_log_outlog_course_ydyjs_app;

## Restore a static partition to a dynamic partition
ALTER TABLE ods_log_outlog_course_ydyjs_app SET ("dynamic_partition.enable" = "true");

## resume routineLoad
resume routine load for ods_log_outlog_course_ydyjs_app_load
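After resuming, the job state can be verified (job name as above):

## Check that the routine load job is back in the RUNNING state
SHOW ROUTINE LOAD FOR ods_log_outlog_course_ydyjs_app_load;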

[Solved] kafka Error: java.net.UnknownHostException: kafkahost

 

Problem phenomenon:

Today I wanted to debug the service on my own computer and found that calling the interface through the gateway reported a Kafka-related error, as follows:

java.net.UnknownHostException: kafkahost


Problem analysis:

The error message shows that the host named kafkahost cannot be resolved.

Looking at the configuration of one Kafka service instance in the cluster configuration on the Linux server, you can find:

listeners=PLAINTEXT://kafkahost:0091

This configuration uses the kafkahost mentioned in the error message: the service instance listens on kafkahost:0091. Looking at the /etc/hosts file of the Linux server shows that:

kafkahost points to the Linux server's IP.

Since my local service accesses the Kafka service on that Linux server, the local machine naturally cannot resolve kafkahost. Therefore the corresponding mapping needs to be added to the local hosts file!


Solution:

Find the hosts file path of this machine:

C:\Windows\System32\drivers\etc

Add the following configuration at the end of the file to identify kafkahost as the Linux server IP:
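(192.168.1.100 below is a placeholder; use the Linux server's actual IP.)

    192.168.1.100 kafkahost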

Restart the local service and call the interface again, and the error is gone.