The error message is as follows:
calculation112.aggrx:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.1.1.116:36274 dst: /10.1.1.112:50010
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:901)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
        at java.lang.Thread.run(Thread.java:748)
......
Reason: the file operation exceeded its lease period; in other words, the file was deleted while the data stream operation was still in progress.
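Before applying the fix, it can help to confirm how loaded the DataNode already is, since the two steps below raise its open-file and transfer-thread limits. A rough diagnostic sketch, assuming `jps` and `jstack` are on the PATH and are run as the user that owns the DataNode process (the thread-name filter is an assumption based on how DataXceiver threads are usually named):

```bash
# Find the DataNode process id
DN_PID=$(jps | awk '/DataNode/ {print $1}')

# Count currently active DataXceiver threads; if this is close to
# dfs.datanode.max.transfer.threads, the limit raised in Step 2 is too low
jstack "$DN_PID" | grep -c "DataXceiver"
```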
Solution:
Step 1: Increase the maximum number of files a process may open
cat /etc/security/limits.conf | grep -v ^#
*    soft    nofile     1000000
*    hard    nofile     1048576
*    soft    nproc      65536
*    hard    nproc      unlimited
*    soft    memlock    unlimited
*    hard    memlock    unlimited
*    -       nofile     1000000
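These limits only apply to sessions started after the change, so the DataNode process must be restarted to pick them up. A quick verification sketch (the expected value of 1000000 simply mirrors the limits.conf entries above; adjust to whatever you configured):

```bash
# In a fresh login shell, check the soft open-file limit
ulimit -Sn          # should report 1000000 per the limits.conf above

# Check the limit the running DataNode process actually inherited
DN_PID=$(jps | awk '/DataNode/ {print $1}')
grep "open files" /proc/"$DN_PID"/limits
```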
Step 2: Increase the number of DataNode data transfer threads
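The post does not show the exact value used, but the setting in question is `dfs.datanode.max.transfer.threads` in hdfs-site.xml (the older, deprecated name was `dfs.datanode.max.xcievers`); its default is 4096. A sketch with an illustrative value of 8192, to be tuned to your write load:

```xml
<!-- hdfs-site.xml on each DataNode; restart the DataNodes after changing it -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <!-- illustrative value; the Hadoop default is 4096 -->
  <value>8192</value>
</property>
```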
Done.