Hadoop cluster: "Could not obtain block" error
When accessing HDFS you may run into the error above. It is usually a node problem, so check three things: whether the firewall has been turned off, whether the DataNodes are running, and whether any data blocks are corrupted.
In my case it turned out to be the second one: a DataNode was down. So I restarted the DataNode on the affected host from the command line (hadoop-daemon.sh start datanode), confirmed with jps that it was running, and then ran the code again to see whether the error was gone.
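A minimal sketch of that check, assuming the down DataNode is on a host reachable as worker2 (the hostname is a placeholder):

```bash
# Log in to the host whose DataNode is down (hostname is a placeholder)
ssh worker2

# Restart only the DataNode daemon
# (Hadoop 2.x script; on Hadoop 3.x you can use: hdfs --daemon start datanode)
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode

# Verify the DataNode process is actually running
jps | grep -i datanode
```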
Even so, the DataNodes kept hanging up on their own shortly afterwards, …
Checking the web UI (http://&lt;namenode-host&gt;:9870), I found that the other nodes did not actually show up as started under Live Nodes.
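The same check can be done without the browser; hdfs dfsadmin -report is a standard HDFS command (not mentioned in the original post) that lists live and dead DataNodes:

```bash
# Web UI (Hadoop 3.x NameNode port): http://<namenode-host>:9870

# Command-line equivalent: summary of live and dead DataNodes
hdfs dfsadmin -report | grep -E "Live datanodes|Dead datanodes"
```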
So the next step was to restart and reformat the cluster.
Find the HDFS data storage path in the configuration file (typically dfs.datanode.data.dir in hdfs-site.xml), delete $HADOOP_HOME/data/dfs/data/current on every node, and then restart the Hadoop cluster (turning off safe mode with $HADOOP_HOME/bin/hdfs dfsadmin -safemode leave).
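A sketch of that clean-up and reformat sequence; the data directory path is the one used in this post (check dfs.datanode.data.dir in your hdfs-site.xml), and reformatting destroys existing HDFS metadata, so only do this on a cluster whose data you can afford to lose:

```bash
# Stop HDFS first
$HADOOP_HOME/sbin/stop-dfs.sh

# On every node, remove the DataNode block storage
# (path as used in this post; yours may differ)
rm -rf $HADOOP_HOME/data/dfs/data/current

# Reformat the NameNode (wipes existing HDFS metadata)
$HADOOP_HOME/bin/hdfs namenode -format

# Restart HDFS and leave safe mode if it is still on
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/bin/hdfs dfsadmin -safemode leave
```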
After that you can see on the web UI that the data has been deleted. However, I found that the previous data directories were still listed even though their contents were lost, so those damaged data blocks need to be deleted as well.
Run hdfs fsck to view the missing/corrupt data blocks; hdfs fsck with the -delete option removes the damaged files.
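A sketch of the fsck usage; the path / checks the entire filesystem (use a narrower path if you prefer), and -delete permanently removes the corrupted files, so run the read-only checks first:

```bash
# Report the overall health of HDFS
hdfs fsck /

# List only the files whose blocks are corrupt or missing
hdfs fsck / -list-corruptfileblocks

# Delete the corrupted files so reads no longer fail on missing blocks
hdfs fsck / -delete
```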
Then upload the data again and re-run the job.