Tag Archives: Big data

[Solved] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: …


Problem Examples

Do you encounter the following error when entering mongo at the terminal?

couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: No connection could be made because the target machine actively refused it.

Problem analysis

This problem is not complicated: MongoDB is simply not running. Start it and the error goes away.

Problem-solving

Go to the bin directory of your MongoDB installation.

Run the following command (a port number can also be specified):

mongod --logpath "E:\professional_software\mongodb\data\log\mongodb.log" --dbpath "E:\professional_software\mongodb\data\db" --logappend

or

mongod --logpath "E:\professional_software\mongodb\data\log\mongodb.log" --dbpath "E:\professional_software\mongodb\data\db" --logappend --port 8888

Once this command is running, the server has started successfully. Open another command prompt and enter mongo to connect.
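If the server was started on a custom port (8888 in the second command above), the port has to be passed explicitly. A minimal check from the second prompt:

mongo --host 127.0.0.1 --port 27017

or, for the custom port:

mongo --host 127.0.0.1 --port 8888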

[Solved] Flink Error: Hadoop is not in the classpath/dependencies

Error background:

While setting up a Flink on YARN cluster, the Flink cluster could not be started.

Version:

flink-1.14.6

hadoop-3.2.3

org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:216) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:617) [flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:59) [flink-dist_2.12-1.14.6.jar:1.14.6]
Caused by: java.io.IOException: Could not create FileSystem for highly available storage path (hdfs:/flink/ha/default)
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:92) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:532) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:89) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
	at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:55) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:528) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:89) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	... 2 more

The reason for the error:
Flink needs two additional jar dependencies to access HDFS. They are not shipped with the Flink distribution, so they have to be added manually:

  1. flink-shaded-hadoop-3-3.1.1.7.2.9.0-173-9.0.jar
  2. commons-cli-1.5.0.jar

Solution:

Search the Maven repository (https://mvnrepository.com/) for these two jars and download them.

Put the jars into Flink's lib directory (/flink/lib here).
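If a full Hadoop installation already exists on each node, another commonly used option (the one the Flink documentation recommends for recent Flink versions) is to expose Hadoop's classpath to Flink before starting the cluster, instead of copying shaded jars:

# run on every node that starts Flink processes; 'hadoop classpath' prints Hadoop's jar list
export HADOOP_CLASSPATH=$(hadoop classpath)
./bin/start-cluster.sh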

[Solved] Spark Error: org.apache.spark.SparkException: A master URL must be set in your configuration

Error when running the project to connect to Spark:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
22/10/08 21:02:10 INFO SparkContext: Running Spark version 3.0.0
22/10/08 21:02:10 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: A master URL must be set in your configuration
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:380)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:120)
	at test.wyh.wordcount.TestWordCount$.main(TestWordCount.scala:10)
	at test.wyh.wordcount.TestWordCount.main(TestWordCount.scala)
22/10/08 21:02:10 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: A master URL must be set in your configuration
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:380)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:120)
	at test.wyh.wordcount.TestWordCount$.main(TestWordCount.scala:10)
	at test.wyh.wordcount.TestWordCount.main(TestWordCount.scala)

Process finished with exit code 1

Solution:

Configure the following JVM parameter (in IDEA, add it to the run configuration's VM options):

-Dspark.master=local[*]

Restart IDEA.
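Alternatively, the master URL can be set in code when the SparkContext is created. A minimal Scala sketch (the word-count body is an illustrative placeholder, not the original TestWordCount code):

import org.apache.spark.{SparkConf, SparkContext}

object TestWordCount {
  def main(args: Array[String]): Unit = {
    // local[*] runs Spark in-process with one worker thread per CPU core;
    // when submitting to a real cluster, pass --master to spark-submit instead of hard-coding it.
    val conf = new SparkConf().setAppName("TestWordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // placeholder job, just to exercise the context
    sc.parallelize(Seq("a", "b", "a"))
      .map((_, 1))
      .reduceByKey(_ + _)
      .collect()
      .foreach(println)

    sc.stop()
  }
}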

kafka Environment Build and Startup Error: ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown

Solution: add the following two items to the server.properties configuration file:

listeners=PLAINTEXT://xx.xx.xx.xx(server intranet IP address):9092
advertised.listeners=PLAINTEXT://xx.xx.xx.xx(server external IP address):9092

The Kafka server startup error was:

[2022-09-29 16:04:58,630] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Socket server failed to bind to 47.100.19.248:9092: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:778)
at kafka.network.Acceptor.<init>(SocketServer.scala:672)
at kafka.network.DataPlaneAcceptor.<init>(SocketServer.scala:531)
at kafka.network.SocketServer.createDataPlaneAcceptor(SocketServer.scala:287)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:267)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:261)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:261)
at kafka.network.SocketServer.startup(SocketServer.scala:135)
at kafka.server.KafkaServer.startup(KafkaServer.scala:309)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:461)
at java.base/sun.nio.ch.Net.bind(Net.java:453)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:80)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:774)
… 13 more
[2022-09-29 16:04:58,632] INFO [KafkaServer id=1] shutting down (kafka.server.KafkaServer)
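For reference, the BindException above occurs because the broker tries to bind directly to 47.100.19.248, the server's public IP, which is not assigned to any local network interface (typical for cloud hosts behind NAT). A hedged sketch of the intended split, using the addresses from this example:

# listeners must be an address the host actually owns (the intranet IP, or 0.0.0.0 to bind all interfaces)
listeners=PLAINTEXT://0.0.0.0:9092
# advertised.listeners is the address published to clients (the external/public IP)
advertised.listeners=PLAINTEXT://47.100.19.248:9092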

[Solved] Spark job failed during runtime. Please check stacktrace for the root cause.

Hive on Spark reports an error when executing a Hive statement:

[42000][3] Error while processing statement: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.

[Reason]
Checking the running task on YARN, the error log shows:

Map operator initialization failed: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected column vector type LIST

This is a LIST type issue: LIST is the internal representation of Hive's array type (which corresponds to a Java List).

[Solution]
Temporarily switch the execution engine to MR:

set hive.execution.engine=mr;

Hive on Spark still has many bugs. When an unexplained error occurs, first try switching the underlying execution engine to MR and re-running the SQL statement.
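A hedged alternative (an inference from the error text, not verified in the original post): "Unexpected column vector type LIST" comes from Hive's vectorized execution path, so disabling vectorization for the session may also avoid the error while keeping Spark as the engine:

set hive.vectorized.execution.enabled=false;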

[Subsequent modification]
1. View the current execution engine of hive:

set hive.execution.engine;

2. Manually set hive’s current execution engine to Spark

set hive.execution.engine=spark;

3. Manually set hive’s current execution engine to MR

set hive.execution.engine=mr;

[Solved] Kafka Restarts error | Cloudera Manager Access Returns 500 | HDFS Startup Error

Hi~ Long time no update

1. Problems to watch for after restarting Kafka:
While Kafka data is being written to the target storage location, there is always a current write file a. File a stays in write state for a while (usually about an hour), after which a new write file b is created and the previous file a is closed (how long the close takes depends on each cluster's configuration). Here is the problem: after a restart, the previous write file a is created again alongside the new write file b, so a stays in write state. Reading or writing file a then reports an error, and so does querying it from Hive (loading it into the Hive table does not fail, but selecting from it does), because the file remains in write state and cannot be operated on. This is also called a write lock (which I believe everyone has heard of).
Solution: manually terminate the write state of these files. First determine which files are in write state by running the following on the command line:

hdfs fsck /data/logs/ -openforwrite
(/data/logs/ is the directory where the write files live; change it to wherever your files are)

The files listed in the output are the ones still in write state.


After identifying the write files, run the following command to close them all. Why close all of them? Logically only the previous write file needs to be closed, but closing them all also solves the problem and is simpler and more brute-force, since a new write file is generated automatically after a manual close. So run:

hdfs debug recoverLease -path /logs/common_log/2022-09-16/FlumeData.1663292498820.tmp -retries 3
(the -path argument is a write-file path printed by the previous command)

Run the command once for each listed file. One more note: if the file has already been loaded into Hive, you also need to look under /user/warehouse/hive/ for the corresponding write-state file.


2. Cloudera Manager (CDH) returns a 500 error when accessed from the browser:
① First check the /etc/hosts file: only the two lines below plus the cluster's intranet IP mappings should be kept:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

② Also check whether the ports CM uses are being blocked by the firewall.

③ Then restart CM. On the NameNode host:
systemctl stop cloudera-scm-server
Then on each node:
systemctl stop cloudera-scm-agent

On the NameNode host:
systemctl start cloudera-scm-server
Then on each node:
systemctl start cloudera-scm-agent

Note: the order of these commands must not be reversed, otherwise the cluster may fail to start.
Afterwards you can check the state with systemctl status cloudera-scm-server and systemctl status cloudera-scm-agent.

3. If CM starts and the UI is accessible, but starting HDFS reports error 1 or 2 below:
1.Unable to retrieve non-local non-loopback IP address. Seeing address: cm/127.0.0.1
2.ERROR ScmActive-0:com.cloudera.server.cmf. components.ScmActive: ScmActive was not able to access CM identity  to validate it.2017-04-18 09:40 :29,308 ERROR ScmActive-0

If so, congratulations, there is a solution.
First find CM's backing database. It was configured during installation; if you don't know the details, ask whoever installed the cluster (it is almost always on the NameNode, and don't ask me for the account and password ~). Then run show databases; and you will see a cm or scm database.


Use that database, then run show tables;
You will see a table called HOSTS. View its data with select * from HOSTS;


You will find one row that is different, i.e. its NAME and IP_ADDRESS do not match the host. Change them back to the host's intranet name and IP_ADDRESS (I believe everyone can handle the update). Then restart CM and it's done!

start-all.sh Execution Error: Stopping journal nodes [slave2 slave1 master]…

Stopping journal nodes [slave2 slave1 master]
ERROR: Attempting to operate on hdfs journalnode as root
ERROR: but there is no HDFS_JOURNALNODE_USER defined. Aborting operation.
Stopping ZK Failover Controllers on NN hosts [master slave1 slave2]
ERROR: Attempting to operate on hdfs zkfc as root
ERROR: but there is no HDFS_ZKFC_USER defined. Aborting operation.

 

Error Cause:
The scripts are trying to operate on the HDFS daemons (journalnode, zkfc, namenode, …) as the root user, but the corresponding HDFS_*_USER variables (such as HDFS_JOURNALNODE_USER and HDFS_ZKFC_USER) are not defined, so the operation is aborted.

Solution:
Add the required user variables to the environment.
1. Open the environment variable file:
vi ~/.bash_profile
2. Add the following lines:

#hadoop
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_ZKFC_USER=root

3. Reload the environment variables:
source ~/.bash_profile

[Windows] elasticsearch.exceptions.RequestError: <unprintable RequestError object>


There are several ways to solve this problem.
Run the following two commands in the PyCharm terminal:
$ pip install django-haystack
$ pip install elasticsearch==2.4.1

Note that the Elasticsearch server version must be consistent with the Python client version installed via pip install elasticsearch==2.4.1.
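A quick way to confirm the server version before pinning the client (a minimal check, assuming the server is reachable at localhost:9200):

curl http://localhost:9200

The JSON response contains a version.number field; install the client whose major version matches it (e.g. elasticsearch==2.4.1 for a 2.4.x server).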

How to Solve elasticsearch and logstash Install Error

Starting the logstash service produces: Failed to start logstash.service: Unit not found.

[root@localhost ~]# systemctl start logstash
Failed to start logstash.service: Unit not found.

Issue 1: Failed to start logstash.service: Unit not found.
Solution: generate the logstash.service unit file:

[root@localhost ~]# sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd

Then check whether the service starts normally.

Issue 2: Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME.

[root@localhost ~]# sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME.

Reason: logstash cannot see the JAVA_HOME variable; the profile needs to be sourced in logstash's startup scripts.
Solution:

[root@localhost ~]# vi /etc/profile                #Add the specified version of the JDK directory installed on the local machine
export JAVA_HOME=/usr/local/jdk1.8
export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin

[root@localhost ~]# vi /usr/share/logstash/bin/logstash.lib.sh
Add source /etc/profile in the last line
[root@localhost ~]# vi /usr/share/logstash/bin/logstash
Add source /etc/profile in the last line

Reload the profile, then check whether the service starts normally.

Issue 3: /usr/share/logstash/vendor/jruby/bin/jruby: line 388: /usr/bin/java: No such file or directory
Unable to install system startup script for Logstash.
Reason: the java executable cannot be found at /usr/bin/java.
Solution:

[root@localhost ~]# ln -s /usr/local/jdk1.8/bin/java /usr/bin/java

Then reinstall the logstash package:

[root@localhost ~]# rpm -e logstash
Error: package logstash is not installed
[root@localhost ~]# rpm -ivh /mnt/logstash-5.5.1.rpm
Warning: /mnt/logstash-5.5.1.rpm: Head V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
In preparation…                          ################################# [100%]
Package logstash-1:5.5.1-1.noarch is installed

Generate logstash.service file

[root@localhost ~]# sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
Using provided startup.options file: /etc/logstash/startup.options

Now the service starts successfully:

[root@localhost ~]# systemctl start logstash

[Solved] org.springframework.beans.factory.UnsatisfiedDependencyException

This exception means that dependency injection failed: a required bean could not be created or found.

The reported error is as follows:

org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'pageService': Unsatisfied dependency expressed through field 'pageInfoMepper'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'pageInfoMepper' defined in file [D:projectIDEAProjectdemoPageoutartifactsWEB-INFclassescomligleimapperPageInfoMepper.class]: Cannot resolve reference to bean 'sqlSessionFactory' while setting bean property 'sqlSessionFactory'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sqlSessionFactory' defined in file [D:projectIDEAProjectdemoPageoutartifactsWEB-INFclassesapplicationContext.xml]: Invocation of init method failed; nested exception is org.springframework.core.NestedIOException: Failed to parse config resource: class path resource [mybatis-config.xml]; nested exception is org.apache.ibatis.builder.BuilderException: Error parsing SQL Mapper Configuration. Cause: org.apache.ibatis.builder.BuilderException: Error resolving class. Cause: org.apache.ibatis.type.TypeException: Could not resolve type alias 'com.github.pagehelper.PageInterceptor'.  Cause: java.lang.ClassNotFoundException: Cannot find class: com.github.pagehelper.PageInterceptor
	at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:596)
	at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:90)
	at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessProperties(AutowiredAnnotationBeanPostProcessor.java:374)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1411)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:592)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:843)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
	at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:400)
	at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:291)
	at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:103)
	at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4643)
	at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5109)
	at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:743)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:703)
	at org.apache.catalina.startup.HostConfig.manageApp(HostConfig.java:1737)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:287)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
	at org.apache.catalina.mbeans.MBeanFactory.createStandardContext(MBeanFactory.java:457)
	at org.apache.catalina.mbeans.MBeanFactory.createStandardContext(MBeanFactory.java:406)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:287)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
	at com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
	at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
	at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
    .......
    .......

Solution:

1. First, check the annotation on the Service layer class: it should be @Service, imported from the Spring framework package (not from some other package), and the class should sit under a scanned package path.

2. If the Service layer is split into an interface and an implementation class, check that the implementation class exists and is annotated with @Service.

3. Check the XML configuration file: is component scanning enabled for the service package?

<context:component-scan base-package="com.liglei.service"></context:component-scan>

4. Check that the jar packages were downloaded completely and that the missing jar is actually present (see the sketch after this list).

5. Finally, check in IDEA whether any required libraries are missing from the artifact; if so, follow these steps:

File -> Project Structure -> Artifacts -> right-click demopage -> Put into Output Root -> OK

Make sure the above steps are OK. Restart the project and try again~
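In the stack trace above the root cause is java.lang.ClassNotFoundException: Cannot find class: com.github.pagehelper.PageInterceptor, i.e. the PageHelper jar itself is missing. If the project is built with Maven, a hedged sketch of the dependency (the version shown is only an example, not taken from the original project):

<!-- PageHelper MyBatis paging plugin; contains com.github.pagehelper.PageInterceptor -->
<dependency>
    <groupId>com.github.pagehelper</groupId>
    <artifactId>pagehelper</artifactId>
    <!-- example version; choose one compatible with your MyBatis version -->
    <version>5.3.2</version>
</dependency>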

How to Solve hadoop3.x.x sh start-dfs.sh Startup Error

hadoop3.x.x sh start-dfs.sh startup error

Error information:

/app/module/hadoop310/libexec/hadoop-functions.sh: line 398: syntax error near unexpected token `<'
/app/module/hadoop310/libexec/hadoop-functions.sh: line 398: `  done < <(for text in "${input[@]}"; do'
/app/module/hadoop310/libexec/hadoop-config.sh: line 70: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 87: hadoop_bootstrap: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 104: hadoop_parse_args: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 105: shift: : numeric argument required
/app/module/hadoop310/libexec/hadoop-config.sh: line 110: hadoop_find_confdir: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 111: hadoop_exec_hadoopenv: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 112: hadoop_import_shellprofiles: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 113: hadoop_exec_userfuncs: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 119: hadoop_exec_user_hadoopenv: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 120: hadoop_verify_confdir: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 122: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 123: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 124: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 129: hadoop_os_tricks: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 131: hadoop_java_setup: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 133: hadoop_basic_init: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 36: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 38: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 40: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 42: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 44: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 46: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 48: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 50: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 52: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 54: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 62: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/yarn-config.sh: line 64: hadoop_deprecate_envvar: command not found
/app/module/hadoop310/libexec/hadoop-config.sh: line 140: hadoop_shellprofiles_init: command not found

The exact cause was not tracked down, but the first error line (syntax error near unexpected token `<' at a `< <(...)` construct) points at bash-only process substitution in hadoop-functions.sh, which fails when the script is invoked with sh instead of bash.
Solution: switch to the Hadoop directory and run the script directly, so that its bash shebang takes effect:
cd /app/module/hadoop310
./sbin/start-dfs.sh
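Under the same assumption about the cause, invoking the script explicitly with bash should also work:

bash /app/module/hadoop310/sbin/start-dfs.sh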

[Solved] SpringBoot Integrate ES Error: Elasticsearch health check failed

Recently, a Spring Boot project integrated with Elasticsearch reported an error after starting successfully: Elasticsearch health check failed

There are two methods to solve this error:

1. Disable the Actuator health check for Elasticsearch (I tried this method, the project later failed to start, so it is not recommended):

management:
  health:
    elasticsearch:
      enabled: false

2. Configure the connection via spring.elasticsearch.rest.uris (the problem is solved after a restart):

spring:
  # ES search engine
  data:
    elasticsearch:
      cluster-nodes: 47.103.5.190:9300
      cluster-name: docker-cluster
      repositories:
        enabled: true
  elasticsearch:
    rest:
      uris: ["http://47.103.5.190:9200"]