Tag Archives: Big data

[Solved] Logstash Error: Logstash – java.lang.IllegalStateException: Logstash stopped processing because of an error

I recently tried to use Elasticsearch and the IK analyzer together with Logstash to sync data from MySQL, and testing Logstash produced the following error.

First enter the command: logstash -e 'input {stdin{}} output {stdout{}}'

D:\myworkspace\es\logstash-6.4.3\bin>logstash -e 'input {stdin{}} output {stdout{}}'

The pipeline definition itself looks correct, but the result is:

D:\myworkspace\es\logstash-6.4.3\bin>logstash -e 'input {stdin{}} output {stdout{}}'
ERROR: Unknown command '{stdin{}}'

See: 'bin/logstash --help'
[ERROR] 2022-08-23 09:06:42.875 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

 

Solution:

First verify that Logstash itself starts correctly with an empty pipeline definition:

logstash -e ""

The result was successful:

D:\myworkspace\es\logstash-6.4.3\bin>logstash -e ""
Sending Logstash logs to D:/myworkspace/es/logstash-6.4.3/logs which is now configured via log4j2.properties
[2022-08-23T09:16:16,950][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"D:/myworkspace/es/logstash-6.4.3/data/queue"}
[2022-08-23T09:16:16,958][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"D:/myworkspace/es/logstash-6.4.3/data/dead_letter_queue"}
[2022-08-23T09:16:17,054][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-08-23T09:16:17,164][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"0777ac0f-9efb-463d-8e2c-874bc1dc9feb", :path=>"D:/myworkspace/es/logstash-6.4.3/data/uuid"}
[2022-08-23T09:16:17,592][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}
[2022-08-23T09:16:20,129][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2022-08-23T09:16:20,231][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5fba80a0 run>"}
The stdin plugin is now waiting for input:
[2022-08-23T09:16:20,277][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-08-23T09:16:20,611][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2022-08-23T09:16:43,203][WARN ][logstash.runner          ] SIGINT received. Shutting down.
[2022-08-23T09:16:43,338][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x5fba80a0 run>"}
[2022-08-23T09:16:43,340][FATAL][logstash.runner          ] SIGINT received. Terminating immediately.

Then replace the single quotes with double quotes (the Windows command prompt does not treat single quotes as quoting characters, so the single-quoted pipeline was split into separate arguments):

logstash -e "input { stdin {} }  output {stdout {} }"

This time it runs successfully:

D:\myworkspace\es\logstash-6.4.3\bin>logstash -e "input { stdin {} }  output {stdout {} }"
Sending Logstash logs to D:/myworkspace/es/logstash-6.4.3/logs which is now configured via log4j2.properties
[2022-08-23T09:17:48,125][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-08-23T09:17:48,690][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}
[2022-08-23T09:17:50,871][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2022-08-23T09:17:50,964][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x268e4bb5 run>"}
The stdin plugin is now waiting for input:
[2022-08-23T09:17:51,008][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-08-23T09:17:51,209][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
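
If quoting on the command line remains troublesome, the same pipeline can also be put in a config file and started with -f instead of -e. A minimal sketch, assuming a file named simple.conf with the following contents (the file name is just an example):

input { stdin {} }
output { stdout {} }

D:\myworkspace\es\logstash-6.4.3\bin>logstash -f simple.conf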

[Solved] ANTLR Compile HiveLexer.g File Error: syntax error: antlr: NoViableAltException(@[])

Project scenario:

Programmatically extracting data lineage is not a common requirement for data warehouses; dedicated lineage work only becomes necessary once the data volume, or the number of reports with complex production dependencies, grows significantly.
Until that scale is reached, manual identification and management is more cost-effective.

ANTLR (Another Tool for Language Recognition, formerly PCCTS) is an open-source parser generator that automatically builds syntax trees from input and can display them visually. From a grammar description it constructs a custom lexer (recognizer), parser, and interpreter (translator) framework, generating code for languages including Java, C++ and C#. ANTLR exists in versions v2, v3 and v4; most of the Chinese documentation covers v2, and the Hive 1.1.0 source mentions ANTLR 3.4 in its comments. In a grammar file we define lexical rules for recognizing the character stream and parser rules for interpreting the token stream; ANTLR then generates the corresponding lexer/parser, which can compile the input text and convert it into other forms (e.g. an AST, Abstract Syntax Tree).
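
For reference, a lexer grammar like HiveLexer.g is typically processed by invoking the ANTLR 3 tool class directly; a sketch, assuming the complete 3.5.2 jar has been downloaded into the current directory:

java -cp antlr-3.5.2-complete.jar org.antlr.Tool HiveLexer.g

This generates the HiveLexer.java lexer (and a token file) from the grammar.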

Here ANTLR 3.5.2 is used to process HiveLexer.g, the lexer grammar file from the Hive SQL source code, which contains the following block:

@lexer::header {
package org.apache.hadoop.hive.ql.parse;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;
}

@lexer::members {
  private Configuration hiveConf;
  
  public void setHiveConf(Configuration hiveConf) {
    this.hiveConf = hiveConf;
  }
  
  protected boolean allowQuotedId() {
    if(hiveConf == null){
      return false;
    }
    String supportedQIds = HiveConf.getVar(hiveConf, HiveConf.ConfVars.HIVE_QUOTEDID_SUPPORT);
    return !"none".equals(supportedQIds);
  }
}

This block needs to be commented out so that the Java code generated by ANTLR does not reference this package.


Problem description

After the block is commented out with /* */, compiling the grammar still fails with the following errors:

[10:58:10] error(100): HiveLexer.g:42:1: syntax error: antlr: NoViableAltException(9@[])
[10:58:10] error(100): HiveLexer.g:42:2: syntax error: antlr: NoViableAltException(49@[])
[10:58:10] error(100): HiveLexer.g:42:7: syntax error: antlr: MissingTokenException(inserted [@-1,0:0='<missing ACTION>',<4>,42:6] at :)
[10:58:10] error(100): HiveLexer.g:42:8: syntax error: antlr: NoViableAltException(22@[])
[10:58:10] error(100): HiveLexer.g:42:9: syntax error: antlr: NoViableAltException(80@[])
[10:58:10] error(100): HiveLexer.g:42:9: syntax error: antlr: NoViableAltException(80@[])
[10:58:10] error(100): HiveLexer.g:47:1: syntax error: antlr: MissingTokenException(inserted [@-1,0:0='<missing SEMI>',<82>,47:0] at KW_TRUE)

Cause analysis:

Searching for information revealed that ';' is still treated as a terminator even inside the comment, which is what caused the errors.


Solution:

Delete the commented-out block entirely, or remove the ';' characters from it.

Problem solved.

[Solved] Cause: java.sql.SQLException: TDengine ERROR (8000000b): Unable to establish connection

The FQDN was not configured during installation and setup. Spring Boot starts without errors, but the following error is reported when inserting data:

### Cause: java.sql.SQLException: TDengine ERROR (8000000b): Unable to establish connection

Solution:

Edit /etc/taos/taos.cfg on the server and configure the FQDN; also check the endpoint recorded in /var/lib/taos/dnode/dnodeEps.json:

vi /etc/taos/taos.cfg

The local Windows machine must have the TDengine client (tdengine-client) installed, and the server's FQDN must be added to the Windows hosts file; see the sketch below.
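
A minimal sketch of the two settings involved (the hostname taos-server and the IP 192.168.1.100 are placeholders for your own environment):

# /etc/taos/taos.cfg on the server
fqdn      taos-server

# hosts file on the Windows client (C:\Windows\System32\drivers\etc\hosts)
192.168.1.100   taos-server

After changing taos.cfg, restart the taosd service so that the new FQDN takes effect.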

[Solved] hadoop-2.7.1 Error: Cannot find configuration directory: /etc/hadoop

After configuring hadoop-2.7.1, the following error appears later during startup.

The terminal executes ./start-yarn.sh:
starting yarn daemons
Error: Cannot find configuration directory: /etc/hadoop
Error: Cannot find configuration directory: /etc/hadoop

Hadoop cannot locate its configuration directory; reading the corresponding startup shell scripts shows which variable controls it, which leads to the solution.

Solution:

Configure the directory that holds the Hadoop configuration files in hadoop-env.sh (code as below):

export HADOOP_CONF_DIR=/opt/hadoop-2.7.1/etc/hadoop/

Execute command

source  hadoop-env.sh
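
Optionally, the same export can be added to the user's shell profile so it also survives new sessions (paths assume the install location above):

echo 'export HADOOP_CONF_DIR=/opt/hadoop-2.7.1/etc/hadoop/' >> ~/.bashrc
source ~/.bashrc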

[Solved] java Internal error in the mapping processor java.lang.NullPointerException

Error Messages:

java: Internal error in the mapping processor: java.lang.NullPointerException  	
at org.mapstruct.ap.internal.processor.DefaultVersionInformation.createManifestUrl(DefaultVersionInformation.java:182)  	
at org.mapstruct.ap.internal.processor.DefaultVersionInformation.openManifest(DefaultVersionInformation.java:153)  	
at org.mapstruct.ap.internal.processor.DefaultVersionInformation.getLibraryName(DefaultVersionInformation.java:129)  	
at org.mapstruct.ap.internal.processor.DefaultVersionInformation.getCompiler(DefaultVersionInformation.java:122)  	
at org.mapstruct.ap.internal.processor.DefaultVersionInformation.fromProcessingEnvironment(DefaultVersionInformation.java:95)  	
at org.mapstruct.ap.internal.processor.DefaultModelElementProcessorContext.<init>(DefaultModelElementProcessorContext.java:50)  
at org.mapstruct.ap.MappingProcessor.processMapperElements(MappingProcessor.java:218)  	
at org.mapstruct.ap.MappingProcessor.process(MappingProcessor.java:156)  	
at org.jetbrains.jps.javac.APIWrappers$ProcessorWrapper.process(APIWrappers.java:109)  	
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  	
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)  	
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)  	
at java.lang.reflect.Method.invoke(Method.java:498)  	
at org.jetbrains.jps.javac.APIWrappers$1.invoke(APIWrappers.java:213)  	
at org.mapstruct.ap.MappingProcessor.process(Unknown Source)  	
at com.sun.tools.javac.processing.JavacProcessingEnvironment.callProcessor(JavacProcessingEnvironment.java:794)  	
at com.sun.tools.javac.processing.JavacProcessingEnvironment.discoverAndRunProcs(JavacProcessingEnvironment.java:705)  
at com.sun.tools.javac.processing.JavacProcessingEnvironment.access$1800(JavacProcessingEnvironment.java:91)  	
at com.sun.tools.javac.processing.JavacProcessingEnvironment$Round.run(JavacProcessingEnvironment.java:1035)  	
at com.sun.tools.javac.processing.JavacProcessingEnvironment.doProcessing(JavacProcessingEnvironment.java:1176)  	
at com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1170)  	
at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:856)  	
at com.sun.tools.javac.main.Main.compile(Main.java:523)  	
at com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:129)  	
at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:138)  	
at org.jetbrains.jps.javac.JavacMain.compile(JavacMain.java:231)  	
at org.jetbrains.jps.incremental.java.JavaBuilder.compileJava(JavaBuilder.java:501)  	
at org.jetbrains.jps.incremental.java.JavaBuilder.compile(JavaBuilder.java:353)  	
at org.jetbrains.jps.incremental.java.JavaBuilder.doBuild(JavaBuilder.java:277)  	
at org.jetbrains.jps.incremental.java.JavaBuilder.build(JavaBuilder.java:231)  	
at org.jetbrains.jps.incremental.IncProjectBuilder.runModuleLevelBuilders(IncProjectBuilder.java:1441)  	
at org.jetbrains.jps.incremental.IncProjectBuilder.runBuildersForChunk(IncProjectBuilder.java:1100)  	
at org.jetbrains.jps.incremental.IncProjectBuilder.buildTargetsChunk(IncProjectBuilder.java:1224)  	
at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunkIfAffected(IncProjectBuilder.java:1066)  	
at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunks(IncProjectBuilder.java:832)  	
at org.jetbrains.jps.incremental.IncProjectBuilder.runBuild(IncProjectBuilder.java:419)  	
at org.jetbrains.jps.incremental.IncProjectBuilder.build(IncProjectBuilder.java:183)  	
at org.jetbrains.jps.cmdline.BuildRunner.runBuild(BuildRunner.java:132)  	
at org.jetbrains.jps.cmdline.BuildSession.runBuild(BuildSession.java:302)  	
at org.jetbrains.jps.cmdline.BuildSession.run(BuildSession.java:132)  	
at org.jetbrains.jps.cmdline.BuildMain$MyMessageHandler.lambda$channelRead0$0(BuildMain.java:219)  	
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)  	
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)  	
at java.lang.Thread.run(Thread.java:748)  

This occurs when using MapStruct with IntelliJ IDEA 2020.3: building the project fails with "java: Internal error in the mapping processor: java.lang.NullPointerException".

Solution:
Settings -> Build, Execution, Deployment -> Compiler -> User-local build process VM options, and add the parameter:
-Djps.track.ap.dependencies=false

[Solved] MySQL Startup Error: Job for mysqld.service failed because the control process exited with error code

Question

An error is reported when starting MySQL service, as shown below:

[root@node2 hadoop]# systemctl start mysqld.service
Job for mysqld.service failed because the control process exited with error code.
See "systemctl status mysqld.service" and "journalctl -xe" for details.

Solution

This is because the /var/lib/mysql directory has insufficient permissions (the files are owned by root rather than the mysql user, as the listing below shows):

[root@node2 hadoop]# cd /var/lib/mysql
[root@node2 mysql]# ll
total 167348
-rw-r-----. 1 root  root        56 Jun 19 20:00  auto.cnf
-rw-r-----. 1 mysql mysql        0 Jun 19 20:01  binlog.index
-rw-------. 1 root  root      1676 Jun 19 20:00  ca-key.pem
-rw-r--r--. 1 root  root      1112 Jun 19 20:00  ca.pem
-rw-r--r--. 1 root  root      1112 Jun 19 20:00  client-cert.pem
-rw-------. 1 root  root      1676 Jun 19 20:00  client-key.pem
-rw-r-----. 1 root  root    196608 Jun 19 20:00 '#ib_16384_0.dblwr'
-rw-r-----. 1 root  root   8585216 Jun 19 20:00 '#ib_16384_1.dblwr'
-rw-r-----. 1 root  root      3595 Jun 19 20:00  ib_buffer_pool
-rw-r-----. 1 root  root  12582912 Jun 19 20:00  ibdata1
-rw-r-----. 1 root  root  50331648 Jun 19 20:00  ib_logfile0
-rw-r-----. 1 root  root  50331648 Jun 19 20:00  ib_logfile1
drwxr-x---. 2 root  root         6 Jun 19 20:00 '#innodb_temp'
drwxr-x---. 2 root  root         6 Jun 19 20:00  mysql
-rw-r-----. 1 root  root  15728640 Jun 19 20:00  mysql.ibd
drwxr-x---. 2 root  root      8192 Jun 19 20:00  performance_schema
-rw-------. 1 root  root      1676 Jun 19 20:00  private_key.pem
-rw-r--r--. 1 root  root       452 Jun 19 20:00  public_key.pem
-rw-r--r--. 1 root  root      1112 Jun 19 20:00  server-cert.pem
-rw-------. 1 root  root      1676 Jun 19 20:00  server-key.pem
-rw-r-----. 1 root  root  16777216 Jun 19 20:00  undo_001
-rw-r-----. 1 root  root  16777216 Jun 19 20:00  undo_002

Modify permissions and start MySQL

[root@node2 ~]# setenforce 0 
[root@node2 ~]# chown -R mysql:mysql /var/lib/mysql
[root@node2 ~]# chmod -R 777 /var/lib/mysql
[root@node2 ~]# systemctl start mysqld.service
[root@node2 ~]# ps -ef |grep mysql
mysql      26627       1  4 23:57 ?       00:00:00 /usr/sbin/mysqld
root       26671   10438  0 23:57 pts/0    00:00:00 grep --color=auto mysql
[root@node2 ~]# cd /var/lib/mysql
[root@node2 mysql]# ll
total 190916
-rw-r-----. 1 mysql mysql       56 Jun 19 23:57  auto.cnf
-rw-r-----. 1 mysql mysql      156 Jun 19 23:57  binlog.000001
-rw-r-----. 1 mysql mysql       16 Jun 19 23:57  binlog.index
-rwxrwxrwx. 1 mysql mysql     1680 Jun 19 23:50  ca-key.pem
-rwxrwxrwx. 1 mysql mysql     1112 Jun 19 23:50  ca.pem
-rwxrwxrwx. 1 mysql mysql     1112 Jun 19 23:50  client-cert.pem
-rwxrwxrwx. 1 mysql mysql     1676 Jun 19 23:50  client-key.pem
-rwxrwxrwx. 1 mysql mysql   196608 Jun 19 23:57 '#ib_16384_0.dblwr'
-rwxrwxrwx. 1 mysql mysql  8585216 Jun 19 23:50 '#ib_16384_1.dblwr'
-rwxrwxrwx. 1 mysql mysql     6059 Jun 19 23:50  ib_buffer_pool
-rwxrwxrwx. 1 mysql mysql 12582912 Jun 19 23:57  ibdata1
-rwxrwxrwx. 1 mysql mysql 50331648 Jun 19 23:57  ib_logfile0
-rwxrwxrwx. 1 mysql mysql 50331648 Jun 19 23:50  ib_logfile1
-rw-r-----. 1 mysql mysql 12582912 Jun 19 23:57  ibtmp1
drwxrwxrwx. 2 mysql mysql      187 Jun 19 23:57 '#innodb_temp'
drwxrwxrwx. 2 mysql mysql      143 Jun 19 23:50  mysql
-rwxrwxrwx. 1 mysql mysql 27262976 Jun 19 23:57  mysql.ibd
srwxrwxrwx. 1 mysql mysql        0 Jun 19 23:57  mysql.sock
-rw-------. 1 mysql mysql        6 Jun 19 23:57  mysql.sock.lock
drwxrwxrwx. 2 mysql mysql     8192 Jun 19 23:50  performance_schema
-rwxrwxrwx. 1 mysql mysql     1676 Jun 19 23:50  private_key.pem
-rwxrwxrwx. 1 mysql mysql      452 Jun 19 23:50  public_key.pem
-rwxrwxrwx. 1 mysql mysql     1112 Jun 19 23:50  server-cert.pem
-rwxrwxrwx. 1 mysql mysql     1680 Jun 19 23:50  server-key.pem
drwxrwxrwx. 2 mysql mysql       28 Jun 19 23:50  sys
-rwxrwxrwx. 1 mysql mysql 16777216 Jun 19 23:57  undo_001
-rwxrwxrwx. 1 mysql mysql 16777216 Jun 19 23:57  undo_002
[root@node2 mysql]# 

Note:

  1. setenforce 0 temporarily puts SELinux into permissive mode and resolves the "[InnoDB] Operating system error number 13 in a file operation" error (see the sketch after this list for making the change persistent).
  2. The directory permissions must be set to 777; 755 alone still causes an error.
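
setenforce 0 only lasts until the next reboot. If SELinux is the cause, the change can be made persistent; a sketch, assuming the standard SELinux config file location:

# /etc/selinux/config
# change SELINUX=enforcing to permissive (or disabled), then reboot
SELINUX=permissive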

[Solved] Hadoop Error: ERROR: Cannot set priority of namenode process

Phenomenon: starting the cluster fails with "ERROR: Cannot set priority of namenode process".

Solution:

1. Look at Hadoop logs:

Check the NameNode log: tail -n 200 hadoop-xinjie-namenode-VM-0-9-centos.log (the log files are under the logs directory of the Hadoop installation)

2. The log shows that the required port is already occupied

3. Command to check the port occupancy: netstat -anp|grep 9866

4. Kill the occupying process: kill -9 <PID> (use the process ID shown in the netstat output, not the port number)

5. After killing all the processes occupying the ports, restart the cluster; the problem is solved. A short sketch of steps 3 and 4 follows below.
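
A sketch of steps 3 and 4 (the PID 12345 is a made-up example; use the one shown in your own netstat output):

netstat -anp | grep 9866      # the last column looks like 12345/java
kill -9 12345                 # kill the process ID, not the port number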

[Solved] Failed to re-init queues: Illegal queue capacity setting (abs-capacity=0.6) > (abs-maximum-capacity=0.4)

The following exception was thrown today while refreshing the YARN queue configuration:

llq@hadoop001:/software/hadoop-3.1.3$ yarn rmadmin -refreshQueues
2022-07-30 05:43:14,554 INFO client.RMProxy: Connecting to ResourceManager at hadoop002/192.168.86.102:8033
refreshQueues: java.io.IOException: Failed to re-init queues : Illegal queue capacity setting (abs-capacity=0.6) > (abs-maximum-capacity=0.4) for queue=[root.default],label=[]
	at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
	at org.apache.hadoop.yarn.server.resourcemanager.AdminService.logAndWrapException(AdminService.java:920)
	at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:406)
	at org.apache.hadoop.yarn.server.api.impl.pb.service.ResourceManagerAdministrationProtocolPBServiceImpl.refreshQueues(ResourceManagerAdministrationProtocolPBServiceImpl.java:114)
	at org.apache.hadoop.yarn.proto.ResourceManagerAdministrationProtocol$ResourceManagerAdministrationProtocolService$2.callBlockingMethod(ResourceManagerAdministrationProtocol.java:271)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
Caused by: java.io.IOException: Failed to re-init queues : Illegal queue capacity setting (abs-capacity=0.6) > (abs-maximum-capacity=0.4) for queue=[root.default],label=[]
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:477)
	at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430)
	at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:401)
	... 10 more
Caused by: java.lang.IllegalArgumentException: Illegal queue capacity setting (abs-capacity=0.6) > (abs-maximum-capacity=0.4) for queue=[root.default],label=[]
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.capacitiesSanityCheck(CSQueueUtils.java:75)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.loadUpdateAndCheckCapacities(CSQueueUtils.java:116)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.setupConfigurableCapacities(AbstractCSQueue.java:179)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.setupQueueConfigs(AbstractCSQueue.java:356)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.setupQueueConfigs(LeafQueue.java:177)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.<init>(LeafQueue.java:162)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.<init>(LeafQueue.java:141)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:259)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:283)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.reinitializeQueues(CapacitySchedulerQueueManager.java:171)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitializeQueues(CapacityScheduler.java:726)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:472)

Reason: the configured capacity of the default queue is greater than its maximum capacity.

Solution:

<!-- Lower the default queue capacity to 40% (default is 100%) -->
<property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>40</value>
</property>

<!-- Set the maximum capacity of the default queue to 60% (default is 100%) -->
<property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>60</value>
</property>
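
These properties belong in capacity-scheduler.xml (typically under $HADOOP_HOME/etc/hadoop). After editing the file, re-apply the queue configuration with the same command that failed above:

yarn rmadmin -refreshQueues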

Spark SQL Export Data to Kafka Error [How to Solve]

Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide"

The reason for this error is that the spark-sql-kafka-0-10_2.11-2.4.5.jar dependency is missing.

Download the jar package, put it on the server, and add it to the submit command:

--jars spark-sql-kafka-0-10_2.11-2.4.5.jar

The job still fails, this time with the following error:

ommandExec.sideEffectResult(commands.scala:69)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:87)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:177)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:173)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:201)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:198)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:173)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:93)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:91)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:727)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:727)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1$$anonfun$apply$1.apply(SQLExecution.scala:95)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:144)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:86)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:789)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:63)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:727)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:313)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:288)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:694)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.common.serialization.ByteArrayDeserializer
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
	... 45 more

Checking the Spark jars directory shows that there is no kafka-clients jar package.

Just add the kafka-clients dependency package to the submit command as well:

spark-submit --master yarn --deploy-mode cluster --jars spark-sql-kafka-0-10_2.11-2.4.5.jar,kafka-clients-2.0.0.jar

Resubmit, and the problem is solved.
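
As an alternative to copying jars by hand, spark-submit can resolve them from Maven with --packages, which also pulls in transitive dependencies such as kafka-clients. A sketch, assuming the cluster has network access and the same Spark/Scala versions as above (append your application jar and arguments as in the original command):

spark-submit --master yarn --deploy-mode cluster --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.5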

Hive Operation: Error When Viewing /tmp File Contents [How to Solve]

1. Viewing the contents of the /tmp directory (e.g. from the HDFS web UI) reports an error:

Permission denied: user=dr.who, access=READ_EXECUTE, inode="/tmp":hadoopadmin:supergroup:drwx-wx-wx

2. Cause analysis:

The user has insufficient permissions: the /tmp directory has no read (r) permission for other users, and the default user of the web UI page is dr.who.


3. Solution:

1. Change the permissions of /tmp

Just execute:

hdfs dfs -chmod -R 777 /tmp

2. Change the default static user of the web UI (port 50070)

Add the following to core-site.xml, setting the value to the user that runs Hadoop:

<property>
    <name>hadoop.http.staticuser.user</name>
    <value>username</value>
</property>
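
For the core-site.xml change to take effect, restart the HDFS daemons; a sketch, assuming the Hadoop sbin scripts are on the PATH:

stop-dfs.sh
start-dfs.sh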

ES Startup error: ERROR: [2] bootstrap checks failed

ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low,   increase to at least [65536]

Reason: the open file descriptor limit for the user running Elasticsearch is too low. Check whether the following entries are present in /etc/security/limits.conf on the server:

elsearch soft nofile 65536
elsearch hard nofile 65536

If not, add them, replacing elsearch with the user that runs Elasticsearch on your server.

It is also possible that the error is still reported even though the limits have been configured, because the current login session has not picked up the new limits (for example after a server reboot).
Re-login as the Elasticsearch user with su - <username> to solve the problem.
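
After re-logging in, the effective limit can be verified before starting Elasticsearch again:

ulimit -n
# should now print 65536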

[Solved] Swashbuckle.AspNetCore.SwaggerGen.SwaggerGeneratorException: Failed to generate Operation

Error Messages:
ErrorMessage: Swashbuckle.AspNetCore.SwaggerGen.SwaggerGeneratorException: Failed to generate Operation for action - PMToolkit.API.Controllers.ReportController.GetOrderReportFilesList (PMToolkit.API). See inner exception
 ---> Swashbuckle.AspNetCore.SwaggerGen.SwaggerGeneratorException: Failed to generate schema for type - System.Collections.Generic.IEnumerable`1[PMToolkit.API.Database.Models.OrderReportFile]. See inner exception
 ---> System.InvalidOperationException: Can't use schemaId "$OrderOption" for type "$PMToolkit.API.Database.OrderOption". The same schemaId is already used for type "$PMToolkit.API.Controllers.DelayOrdersController+OrderOption"
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaRepository.RegisterType(Type type, String schemaId)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.GenerateReferencedSchema(DataContract dataContract, SchemaRepository schemaRepository, Func`1 definitionFactory)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.GenerateConcreteSchema(DataContract dataContract, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.GenerateSchemaForMember(Type modelType, SchemaRepository schemaRepository, MemberInfo memberInfo, DataProperty dataProperty)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.CreateObjectSchema(DataContract dataContract, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.<>c__DisplayClass10_0.<GenerateConcreteSchema>b__3()
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.GenerateReferencedSchema(DataContract dataContract, SchemaRepository schemaRepository, Func`1 definitionFactory)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.GenerateConcreteSchema(DataContract dataContract, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.GenerateSchemaForType(Type modelType, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.GenerateSchema(Type modelType, SchemaRepository schemaRepository, MemberInfo memberInfo, ParameterInfo parameterInfo)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.CreateArraySchema(DataContract dataContract, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.<>c__DisplayClass10_0.<GenerateConcreteSchema>b__1()
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.GenerateConcreteSchema(DataContract dataContract, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.GenerateSchemaForType(Type modelType, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SchemaGenerator.GenerateSchema(Type modelType, SchemaRepository schemaRepository, MemberInfo memberInfo, ParameterInfo parameterInfo)
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GenerateSchema(Type type, SchemaRepository schemaRepository, PropertyInfo propertyInfo, ParameterInfo parameterInfo)
   --- End of inner exception stack trace ---
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GenerateSchema(Type type, SchemaRepository schemaRepository, PropertyInfo propertyInfo, ParameterInfo parameterInfo)
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.CreateResponseMediaType(ModelMetadata modelMetadata, SchemaRepository schemaRespository)
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.<>c__DisplayClass19_0.<GenerateResponse>b__2(String contentType)
   at System.Linq.Enumerable.ToDictionary[TSource,TKey,TElement](IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector, IEqualityComparer`1 comparer)
   at System.Linq.Enumerable.ToDictionary[TSource,TKey,TElement](IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector)
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GenerateResponse(ApiDescription apiDescription, SchemaRepository schemaRepository, String statusCode, ApiResponseType apiResponseType)
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GenerateResponses(ApiDescription apiDescription, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GenerateOperation(ApiDescription apiDescription, SchemaRepository schemaRepository)
   --- End of inner exception stack trace ---
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GenerateOperation(ApiDescription apiDescription, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GenerateOperations(IEnumerable`1 apiDescriptions, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GeneratePaths(IEnumerable`1 apiDescriptions, SchemaRepository schemaRepository)
   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GetSwagger(String documentName, String host, String basePath)
   at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider)
   at Steeltoe.Management.Endpoint.CloudFoundry.CloudFoundrySecurityMiddleware.Invoke(HttpContext context)
   at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context).

 

Solution:
The two OrderOption types map to the same Swagger schema ID. Configure SwaggerGen to use fully-qualified type names as schema IDs, as below:

services.ConfigureSwaggerGen(opt =>
{
	opt.CustomSchemaIds(x => x.FullName);
});