The error message is as follows:
calculation112.aggrx:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.1.1.116:36274 dst: /10.1.1.112:50010
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:901)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
    at java.lang.Thread.run(Thread.java:748)
    ......
Cause: the file operation exceeded its lease period; in other words, the file was deleted while the data stream operation was still in progress.
Solution:
Step 1: increase the maximum number of files a process may open
cat /etc/security/limits.conf | grep -v ^#
*       soft    nofile     1000000
*       hard    nofile     1048576
*       soft    nproc      65536
*       hard    nproc      unlimited
*       soft    memlock    unlimited
*       hard    memlock    unlimited
*       -       nofile     1000000
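To confirm that the new limit actually applies to the account running the DataNode, re-log in (or restart the service) and check the effective value. The user name hdfs below is an assumption; substitute whatever user your deployment runs the DataNode as.

su - hdfs -c 'ulimit -n'    # should report 1000000 once the new limits.conf is in effect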
Step 2: increase the number of data-transfer threads on the DataNodes (see the example configuration below)
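The original post does not show the exact setting, so the snippet below is only a sketch: on Hadoop 2.x and later the DataNode data-transfer thread cap is controlled by dfs.datanode.max.transfer.threads (the successor of the older dfs.datanode.max.xcievers), set in hdfs-site.xml on each DataNode. The value 8192 is an illustrative choice, not taken from the original.

<!-- hdfs-site.xml on each DataNode: raise the data-transfer thread cap -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
</property>

Restart the DataNodes after changing the property so the new thread limit takes effect.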
Done.