Tag Archives: Hadoop Error

[Solved] Hadoop Error: ERROR: Cannot set priority of namenode process

Phenomenon:

When starting the cluster, the NameNode fails to start and the startup script reports: ERROR: Cannot set priority of namenode process

Solution:

1. Check the Hadoop logs:

Check the NameNode log: tail -n 200 hadoop-xinjie-namenode-VM-0-9-centos.log (the log files are in the logs directory under the Hadoop installation directory)

2. The log shows that the port is already occupied

3. Command to check the port occupancy: netstat -anp|grep 9866

4. Kill the process occupying the port, using the PID shown in the netstat output (not the port number): kill -9 <PID>

5. After killing all the processes occupying the ports, restart the cluster. The problem is solved.
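For reference, a minimal sketch of steps 3-5 (assuming, as above, that port 9866 is the one in conflict; 12345 is only a placeholder for the PID shown by netstat):

netstat -anp | grep 9866        # the last column shows PID/program name, e.g. 12345/java
kill -9 12345                   # kill by the PID, not by the port number
stop-dfs.sh && start-dfs.sh     # restart HDFS once the port is free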

[Solved] Hadoop Error: 9000 failed on connection exception java.net.ConnectException: Connection refused

To view the files on Hadoop, enter:

hadoop fs -ls /

The following occurred:

ls: Call From yx/127.0.1.1 to 0.0.0.0:9000 failed on connection exception: 
java.net.ConnectException: Connection refused
For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

This shows that access to port 9000 was refused.
From the configuration file we know that Hadoop connects to this machine through port 9000, but connections to port 9000 are currently being refused.
To confirm, enter:

telnet localhost 9000

Display:

Trying 127.0.0.1…
telnet: Unable to connect to remote host: Connection refused

Input:

nmap -p 9000 localhost

Display:

Starting Nmap 7.80 ( https://nmap.org ) at 2020-04-25 14:57 CST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000051s latency).
PORT STATE SERVICE
9000/tcp closed cslistener
Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds

 

Use the command:

 lsof -i :9000

to see which application is using the port. If the output is empty (exit status 1), nothing is listening on it.
The commands above only check the status; they do not change anything.

Hadoop's connection address is configured in core-site.xml.
Open $HADOOP_HOME/etc/hadoop/core-site.xml:

<configuration>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/data/tmp/hadoop/tmp</value>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://0.0.0.0:9000</value>
</property>
</configuration>

I noticed that the address was configured as 0.0.0.0, so I changed fs.defaultFS to:
hdfs://localhost:9000
It still did not work.
I tried many methods; the one most commonly suggested on the Internet is to reformat the NameNode, which does work for many people:

cd $HADOOP_HOME/bin
hdfs namenode -format

But it still would not run. I tried several times and got the same result every time. Later I noticed this warning:

WARN common.Util: Path /data/tmp/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.

This is caused by a path that is not specified as a URI. The path is configured in:

$HADOOP_HOME/etc/hadoop/hdfs-site.xml

The entry in the file:

 <property>  
     <name>dfs.datanode.data.dir</name>  
     <value>/data/tmp/hadoop/hdfs/data</value>  
 </property>

Amend to read:

  <property>  
     <name>dfs.datanode.data.dir</name>  
     <value>file:///data/tmp/hadoop/hdfs/data</value>  
 </property>

The warning disappears,
but this was not the decisive fix. In my case the cause was that HDFS safe mode was on; simply leave safe mode:

cd $HADOOP_HOME/bin
hadoop dfsadmin -safemode leave

You can operate safe mode with:

hadoop dfsadmin -safemode <value>

where <value> is one of:
enter: enter safe mode
leave: force leave safe mode
get: return the safe mode status
wait: wait until safe mode ends
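For example, to check the current state first (a minimal illustration; the exact output wording may vary by Hadoop version):

hadoop dfsadmin -safemode get
Safe mode is OFF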
Now it works:

root@yx:/apps/hadoop/bin# hadoop fs -ls /
20/04/25 19:24:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - root supergroup          0 2020-04-25 17:13 /test

[Solved] Hadoop Error: Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Problem Description:

When testing YARN, running the wordcount example fails with the following message:

Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>

For more detailed output, check the application tracking page: http://hadoop103:8088/cluster/app/application_1638539388325_0001 Then click on links to logs of each attempt.
. Failing the application.

Cause analysis:

The MRAppMaster main class cannot be found on the classpath.

Solution:

Follow the prompt and add the classpath.

In yarn-site.xml and mapred-site.xml, add the following:

<property>
	<name>yarn.application.classpath</name>
	<value>
		${HADOOP_HOME}/etc/*,
		${HADOOP_HOME}/etc/hadoop/*,
		${HADOOP_HOME}/lib/*,
		${HADOOP_HOME}/share/hadoop/common/*,
		${HADOOP_HOME}/share/hadoop/common/lib/*,
		${HADOOP_HOME}/share/hadoop/mapreduce/*,
		${HADOOP_HOME}/share/hadoop/mapreduce/lib-examples/*,
		${HADOOP_HOME}/share/hadoop/hdfs/*,
		${HADOOP_HOME}/share/hadoop/hdfs/lib/*,
		${HADOOP_HOME}/share/hadoop/yarn/*,
		${HADOOP_HOME}/share/hadoop/yarn/lib/*,
	</value>
</property>
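As a side note (general Hadoop usage, not part of the original fix): the classpath entries of the local installation can also be printed with the hadoop classpath command, which is a convenient way to verify the ${HADOOP_HOME} paths listed above:

hadoop classpath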

Because ${HADOOP_HOME} is used, the environment variables must be inherited: add the following to yarn-site.xml. HADOOP_HOME is the one we actually need here; include the rest as you see fit. I have listed some commonly used ones.

<!--Inheritance of environment variables-->
<property>
  <name>yarn.nodemanager.env-whitelist</name>
  <value>JAVA_HOME,HADOOP_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>

If the YARN service involves multiple servers, remember to apply this configuration on every node.
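For example, to push the updated configuration to another node and restart YARN (a sketch assuming a worker named hadoop103, as in the tracking URL above, and the same $HADOOP_HOME path on every machine; adjust hostnames and paths to your cluster):

rsync -av $HADOOP_HOME/etc/hadoop/ hadoop103:$HADOOP_HOME/etc/hadoop/   # repeat for each node
stop-yarn.sh && start-yarn.sh                                           # restart YARN to pick up the change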

[Solved] Hadoop Error: Exception in thread “main“ java.io.IOException: Error opening job jar: /usr/local/hadoop-2.

An exception occurred while running a MapReduce job today.
At first I thought it was the JDK version: the JDK on Linux was 1.8 while my Windows JDK was 11.0. I changed the JDK environment variable to 1.8, but the problem remained after running again.

Later, I checked the size of the jar package and found that it was 0 KB. I checked the sizes of the other jar packages and they were fine, so I concluded that the jar package was corrupted.
I transferred it from Windows again, and after that it succeeded.
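A quick way to sanity-check a jar before submitting it again (a sketch; wc.jar is a hypothetical file name, substitute your own jar):

ls -lh wc.jar          # the size should not be 0
jar tf wc.jar | head   # a valid jar lists its entries; a corrupted one reports an error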

I hope this article is helpful to you~

[Solved] Hadoop Error: HADOOP_HOME and hadoop.home.dir are unset.

Contents

Solution steps: 1. Download apache-hadoop-3.1.0-winutils-master; 2. Unzip it to the host; 3. Add the environment variable; 4. Restart IDEA or Eclipse

Error message

java.lang.RuntimeException: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems

	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:737)
	at org.apache.hadoop.util.Shell.getSetPermissionCommand(Shell.java:272)
	at org.apache.hadoop.util.Shell.getSetPermissionCommand(Shell.java:288)
	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:840)
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:239)
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:219)
	at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:318)
	at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:307)
	at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:338)
	at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:401)
	at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:464)
	at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:443)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:414)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:387)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2434)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2403)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2379)
	at cn.itcast.hdfs.HDFSClientTest.getFile2Local(HDFSClientTest.java:71)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
	at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
Caused by: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems
	at org.apache.hadoop.util.Shell.fileNotFoundException(Shell.java:549)
	at org.apache.hadoop.util.Shell.getHadoopHomeDir(Shell.java:570)
	at org.apache.hadoop.util.Shell.getQualifiedBin(Shell.java:593)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:690)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:78)
	at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3482)
	at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3477)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3319)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:227)
	at cn.itcast.hdfs.HDFSClientTest.connect2HDFS(HDFSClientTest.java:31)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	... 18 more
Caused by: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
	at org.apache.hadoop.util.Shell.checkHadoopHomeInner(Shell.java:469)
	at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:440)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:517)
	... 34 more

Solution:

1. Download apache-hadoop-3.1.0-winutils-master

The apache-hadoop-3.1.0-winutils-master package can be downloaded from GitHub.
Other versions are also available there; this is the version I used to solve the problem here.

2. Unzip to the host

I unzipped it to my local Windows machine.
After unzipping, the apache-hadoop-3.1.0-winutils-master folder contains the bin directory.

3. Add environment variables

Add the path of the bin folder's parent directory (the apache-hadoop-3.1.0-winutils-master folder) to the environment variables as HADOOP_HOME.
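For example, in a Windows Command Prompt (a sketch; C:\hadoop\apache-hadoop-3.1.0-winutils-master is a hypothetical extraction path, use the folder you actually unzipped to):

setx HADOOP_HOME "C:\hadoop\apache-hadoop-3.1.0-winutils-master"
:: also append %HADOOP_HOME%\bin to the Path variable, then open a new terminal
%HADOOP_HOME%\bin\winutils.exe
:: if the variable is picked up, winutils prints its usage text instead of "not recognized"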

4. Restart IDEA or Eclipse

Problem solved.

[Solved] Hadoop Error: java.lang.NoSuchMethodError

A record of a Hadoop error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.Job.getArchiveSharedCacheUploadPolicies(Lorg/apache/hadoop/conf/Configuration;)Ljava/util/Map;
	at org.apache.hadoop.mapreduce.v2.util.MRApps.setupDistributedCache(MRApps.java:491)
	at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:93)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:172)
	at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:794)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
	at hadoop.mapjoin.MapJoinDriver.main(MapJoinDriver.java:59)

At this point, the dependencies declared in pom.xml were:

 <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-web</artifactId>
            <version>4.3.16.RELEASE</version>
        </dependency>
        <!--Dependencies used by hbase-->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>2.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>2.0.0</version>
        </dependency>

        <!--hadoop dependencies-->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.2.2</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.16.18</version>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <!--Log information-->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.30</version>
        </dependency>

However, after the HBase dependencies were removed, wordcount ran normally, so this is a version compatibility problem between the two.
The recommended version mapping between HBase and Hadoop can be checked in the official documentation: http://hbase.apache.org/book.html#java
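To see which Hadoop artifacts the HBase dependencies pull in (and which version ends up on the classpath), a standard Maven check helps (general Maven usage, not specific to this project):

mvn dependency:tree -Dincludes=org.apache.hadoop

If hbase-client/hbase-server 2.0.0 bring in an older hadoop-mapreduce-client-core than the hadoop-client 3.2.2 declared above, the older Job class is the one missing getArchiveSharedCacheUploadPolicies; either align the versions according to the compatibility table or exclude the transitive Hadoop artifacts from the HBase dependencies.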

Hadoop Error: hdfs.DFSClient: Exception in createBlockOutputStream

java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:40 INFO hdfs.DFSClient: Abandoning BP-1909118226-192.168.19.234-1427110524238:blk_1073762363_21550
15/03/24 18:26:40 INFO hdfs.DFSClient: Excluding datanode 192.168.21.24:50010
copy from: /root/zenggq/jn2/data2w/t0.head_2000 to /recom1000/t0.head_2000
15/03/24 18:26:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 INFO hdfs.DFSClient: Abandoning BP-1909118226-192.168.19.234-1427110524238:blk_1073762365_21552
15/03/24 18:26:41 INFO hdfs.DFSClient: Excluding datanode 192.168.21.23:50010
15/03/24 18:26:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 192.168.21.24:50010
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1166)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 INFO hdfs.DFSClient: Abandoning BP-1909118226-192.168.19.234-1427110524238:blk_1073762366_21553
15/03/24 18:26:41 INFO hdfs.DFSClient: Excluding datanode 192.168.21.24:50010
15/03/24 18:26:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 INFO hdfs.DFSClient: Abandoning BP-1909118226-192.168.19.234-1427110524238:blk_1073762367_21554
15/03/24 18:26:41 INFO hdfs.DFSClient: Excluding datanode 192.168.19.236:50010
15/03/24 18:26:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 INFO hdfs.DFSClient: Abandoning BP-1909118226-192.168.19.234-1427110524238:blk_1073762368_21555
15/03/24 18:26:41 INFO hdfs.DFSClient: Excluding datanode 192.168.21.30:50010
15/03/24 18:26:41 WARN hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Unable to create new block.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1100)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 WARN hdfs.DFSClient: Could not get block locations. Source file “/recom1000/t1.head_2000” – Aborting…
Exception in thread “main” java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 ERROR hdfs.DFSClient: Failed to close file /recom1000/t1.head_2000
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
[root@master jn2]#

I searched around for answers. Some said the datanode process was not running; others said the firewall had not been turned off. It turned out neither was my problem.

Later, I deleted the data directory under hadoop-dir and then reformatted the NameNode:

hadoop namenode -format

After that, everything worked.
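Putting the steps together, a minimal sketch of the procedure (the data directory path is hypothetical; use whatever your dfs data/name directories under hadoop-dir actually are, and note that reformatting erases all existing HDFS data):

stop-dfs.sh                          # stop HDFS first
rm -rf /path/to/hadoop-dir/data      # delete the data directory on each node (hypothetical path)
hadoop namenode -format              # reformat the NameNode
start-dfs.sh                         # start HDFS again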