org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/maclaren/data/hadoopTempDir/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
Solution:
Change the hadoop.tmp.dir value in core-site.xml, or make sure to format the NameNode before starting it for the first time; otherwise this error is reported.
So, be sure to clear all contents of the directory specified by hadoop.tmp.dir, and then run
hadoop namenode -format
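For reference, a minimal core-site.xml entry for hadoop.tmp.dir might look like the following, in the same `<property>` style used below; the path is taken from the error message above and should point at a directory the hadoop user can actually write to:

```xml
<property>
<name>hadoop.tmp.dir</name>
<!-- example path from the stack trace above; use your own writable directory -->
<value>/home/maclaren/data/hadoopTempDir</value>
</property>
```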
2. Error message:
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
Solution:
dfs.block.size must be set to an appropriate size. Since I ran this on my laptop, I set it to 1024. Modify hdfs-site.xml:
<property>
<name>dfs.block.size</name>
<value>1024</value>
</property>
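One constraint worth keeping in mind (an assumption based on how HDFS validates this setting, not something the original post states): the block size must be a positive multiple of the checksum chunk size, io.bytes.per.checksum, which defaults to 512 bytes. A tiny value like 1024 only works because it stays aligned. A quick sanity check:

```python
# Sanity-check a proposed dfs.block.size against the checksum chunk size.
# 512 bytes is the Hadoop default for io.bytes.per.checksum.
BYTES_PER_CHECKSUM = 512

def block_size_ok(block_size: int, chunk: int = BYTES_PER_CHECKSUM) -> bool:
    """Return True if block_size is a positive multiple of the checksum chunk."""
    return block_size > 0 and block_size % chunk == 0

print(block_size_ok(1024))  # the value used above -> True
print(block_size_ok(1000))  # not a multiple of 512 -> False
```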
3. Error message:
org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch.
Solution:
Replace hadoop-core-0.20-append-r1056497.jar in $HBASE_HOME/lib with hadoop-0.20.2-core.jar, so that the client and server JAR versions match.
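The exception itself is just a version guard: when the client first calls getProtocolVersion, the server compares the protocol version number compiled into each side's JAR and rejects a mismatch. A simplified sketch of that check (illustrative model only, not actual Hadoop code; the version number is hypothetical):

```python
class VersionMismatch(Exception):
    """Models org.apache.hadoop.ipc.RPC$VersionMismatch."""
    pass

# Hypothetical protocol version constant compiled into the server's JAR.
SERVER_CLIENT_PROTOCOL_VERSION = 61

def get_proxy(client_version: int) -> str:
    """Model of the RPC handshake: refuse clients built against a different version."""
    if client_version != SERVER_CLIENT_PROTOCOL_VERSION:
        raise VersionMismatch(
            "Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol "
            f"version mismatch: client={client_version}, "
            f"server={SERVER_CLIENT_PROTOCOL_VERSION}")
    return "proxy"

print(get_proxy(61))  # matching JARs: handshake succeeds
```

Swapping the JAR as described above simply makes both sides carry the same version constant.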
4. Error message:
Caused by: java.io.IOException: Call to /192.168.1.147:9000 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
at org.apache.hadoop.ipc.Client.call(Client.java:1075)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy8.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.springframework.data.hadoop.fs.HdfsResourceLoader.<init>(HdfsResourceLoader.java:82)
... 21 more
Solution:
The client's Hadoop JAR is not the same version as the server's; make sure both sides use the same hadoop-core JAR.
Another possible cause: because Eclipse submits jobs through the Hadoop plug-in, it writes the job to HDFS as the default DrWho user, under /user/hadoop. Since the DrWho user has no access to that directory, the exception occurs. Grant access with:
hadoop fs -chmod 777 /user/hadoop
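If loosening permissions directory-by-directory is too tedious on a single-user development box, 0.20-era Hadoop also lets you switch off HDFS permission checking entirely via dfs.permissions in hdfs-site.xml (my suggestion, not from the original post; never do this on a shared cluster):

```xml
<property>
<name>dfs.permissions</name>
<!-- development only: disables all HDFS permission checks -->
<value>false</value>
</property>
```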