Common Hadoop startup errors

After deploying Hadoop and YARN on a local virtual machine, I ran the startup command ./sbin/start-dfs.sh and hit several errors. Here I document two common problems.
1. Could not resolve hostname: Name or service not known
Error message:

19/05/17 21:31:18 WARN hdfs.DFSUtil: Namenode for null remains unresolved for ID null.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
Starting namenodes on [jing-hadoop]
jing-hadoop: ssh: Could not resolve hostname jing-hadoop: Name or service not known
......

This happens because the node name jing-hadoop used in the configuration files has no entry in the hosts file, so the hostname cannot be resolved.
Solution:

vim /etc/hosts
127.0.0.1  jing-hadoop
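
Before restarting, you can verify that the mapping works. This is just a quick sanity check; getent is available on most Linux systems, and ping works as a fallback:

getent hosts jing-hadoop
# Expected output: 127.0.0.1    jing-hadoop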

Then run start-dfs.sh again.
Note: jing-hadoop is the hostname configured in hdfs-site.xml, and the corresponding node IP here is 127.0.0.1. Modify these values according to your own environment; do not copy them verbatim.
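
Where exactly the hostname lives depends on your setup: the error above points at hdfs-site.xml, while in many simple single-node setups it is the fs.defaultFS property in core-site.xml. As a minimal illustrative sketch (the port 9000 is an assumption, not taken from the original setup):

<!-- core-site.xml (illustrative single-node example) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://jing-hadoop:9000</value>
  </property>
</configuration>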
2. Unable to load native-hadoop library
Running start-dfs.sh also produced the following warning:

19/05/17 21:39:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
......

If the NameNode process does not appear when you run jps, that is definitely not acceptable and needs to be fixed.
The cause is that the path to Hadoop's native libraries is not configured in the environment variables. Add it as follows:

vim /etc/profile

# Append the following two lines:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

# Reload the profile so the variables take effect:
source /etc/profile
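
You can also check what is on disk and what Hadoop can actually load. The checknative subcommand ships with Hadoop 2.x releases; if your build lacks it, the warning on the next startup is the only indicator:

# Check that the native library actually exists on disk:
ls $HADOOP_HOME/lib/native
# Expect libhadoop.so and related files here.

# Report which native libraries (hadoop, zlib, snappy, ...) Hadoop can load:
hadoop checknative -a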

Then run start-dfs.sh again. The warning may still be printed, but running jps shows that the NameNode and DataNode processes have started normally, so it does not affect normal use.

[root@localhost hadoop-2.4.1]# jps
3854 NameNode
4211 Jps
3967 DataNode
4110 SecondaryNameNode

The number of DataNodes and their IPs are configured in the slaves file under $HADOOP_HOME/etc/hadoop; if multiple IPs are configured there, multiple DataNode processes will be started.
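
For example, on the single-node setup above the slaves file would contain just one entry (the content shown is illustrative, matching the hostname used throughout this post):

[root@localhost hadoop-2.4.1]# cat etc/hadoop/slaves
jing-hadoop

Each line of this file names one host, and start-dfs.sh launches one DataNode per entry.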