HDFS connection failed:
Common causes of this error:
1. Hadoop is not started, or not all of its daemons are started. A normal startup includes the NameNode, DataNode, and SecondaryNameNode daemons (plus the MapReduce or YARN daemons). If any of them is missing, check its log file under the Hadoop logs directory.

2. The installation is in pseudo-distributed mode and localhost or 127.0.0.1 is used in the configuration files. These should be replaced with the machine's real IP address in core-site.xml, mapred-site.xml, slaves, and masters. After the IP is changed, the DataNode may fail to start. Check the dfs.data.dir setting in hdfs-site.xml:
<property>
  <name>dfs.data.dir</name>
  <value>/data/hdfs/data</value>
</property>
Delete all files under that data directory and restart Hadoop.
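For example, assuming the machine's real IP is 192.168.1.100 (a placeholder; substitute your own), the fs.default.name entry in core-site.xml would point at that address instead of localhost:

```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.1.100:9000</value>
</property>
```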
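The two checks above can be sketched in the shell as follows; /data/hdfs/data is the assumed value from the hdfs-site.xml snippet, so adjust it to your own dfs.data.dir:

```shell
# List the JVM processes on this node; a healthy pseudo-distributed
# Hadoop normally shows NameNode, DataNode, SecondaryNameNode and the
# MapReduce/YARN daemons. A missing daemon means: go read its log file.
if command -v jps >/dev/null 2>&1; then jps; fi

# If the DataNode will not start after the IP change, wipe the storage
# directory configured as dfs.data.dir (assumed path from hdfs-site.xml)
# and restart Hadoop.
DATA_DIR=${DATA_DIR:-/data/hdfs/data}
rm -rf "${DATA_DIR:?}"/*        # destroys all DataNode block data!
# stop-all.sh && start-all.sh   # then restart the Hadoop daemons
```

Note that wiping dfs.data.dir discards every block replica stored on this DataNode, which is acceptable only on a fresh or disposable pseudo-distributed setup.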