1. Error message:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/maclaren/data/hadoopTempDir/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
Solution:
Change the hadoop.tmp.dir value in core-site.xml, or make sure to format the NameNode the first time you initialize it; otherwise this error is reported.
So, be sure to clear all contents of the directory specified by hadoop.tmp.dir, and then run:
sh hadoop namenode -format
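If you would rather point hadoop.tmp.dir at a different directory, a minimal core-site.xml entry looks like this (the path is illustrative; use any directory the Hadoop user can write to):
<property>
  <name>hadoop.tmp.dir</name>
  <!-- illustrative path; must be writable by the user running Hadoop -->
  <value>/home/hadoop/tmp</value>
</property>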
2. Error message:
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
Solution:
dfs.block.size must be set to a suitable value. I ran this on my laptop and set it to 1024. Modify hdfs-site.xml:
<property>
  <name>dfs.block.size</name>
  <value>1024</value>
</property>
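Note that dfs.block.size is specified in bytes, and the default in this Hadoop generation is 67108864 (64 MB), so a value as small as 1024 only makes sense for toy files on a laptop. To confirm the block size a file was actually written with, a stat call along these lines should work:
hadoop fs -stat %o /path/to/file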
3. Error message:
org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch.
Solution:
Replace hadoop-core-0.20-append-r1056497.jar in $HBASE_HOME/lib with hadoop-0.20.2-core.jar so the client and server protocol versions match.
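A sketch of the swap, assuming the matching JAR sits at the top of $HADOOP_HOME (file names as given above; keep a backup of the old JAR rather than deleting it):
cd $HBASE_HOME/lib
# move the append-branch JAR aside so it can be restored if needed
mv hadoop-core-0.20-append-r1056497.jar hadoop-core-0.20-append-r1056497.jar.bak
# copy in the core JAR that matches the running Hadoop cluster
cp $HADOOP_HOME/hadoop-0.20.2-core.jar .
Restart HBase afterwards so the new JAR is picked up.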
4. Error message:
Caused by: java.io.IOException: Call to /192.168.1.147:9000 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
at org.apache.hadoop.ipc.Client.call(Client.java:1075)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy8.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.springframework.data.hadoop.fs.HdfsResourceLoader.<init>(HdfsResourceLoader.java:82)
... 21 more
Solution:
The client-side Hadoop JAR is not the same version as the one on the server; make the two consistent.
Also, because Eclipse submits jobs through the Hadoop plug-in, it writes to HDFS as the default DrWho user, which corresponds to /user/hadoop on HDFS. Since DrWho has no permission on that directory, the exception occurs. Open up the permissions with:
$ hadoop fs -chmod 777 /user/hadoop
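chmod 777 is a quick development-time fix. A narrower alternative, assuming jobs really are submitted as DrWho, is to give that user ownership of the directory instead:
$ hadoop fs -chown -R DrWho /user/hadoop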