Hadoop 3.1 reports an error when starting the cluster after formatting
Error message:
./sbin/start-dfs.sh
Starting namenodes on [note01]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [note01]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
2019-02-06 18:36:04,824 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Reason:
After the installation package was decompressed, the owner and group of the folder were changed to root, i.e. the following command was run:
chown root:root hadoop3.1.1
Starting the cluster as root afterwards then fails, because the Hadoop 3.x start scripts refuse to operate the HDFS daemons as root unless the corresponding HDFS_*_USER variables are defined, which is exactly what the errors above say.
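To confirm this is what happened, you can inspect the ownership of the unpacked tree before deciding how to recover. A minimal check, assuming the folder sits in the current directory as in the chown command above:

```bash
# Show owner and group of the Hadoop folder and its top-level entries.
ls -ld hadoop3.1.1
ls -l hadoop3.1.1
```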
Solution:
Delete the current software folder, decompress the installation package again, modify the relevant configuration files, and then format the file system. This is equivalent to starting over from the beginning.
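In command form, the start-over path described above looks roughly like this (a sketch, assuming $HADOOP_HOME points at the freshly unpacked folder and that the configuration files have already been re-edited):

```bash
# Re-initialize the HDFS metadata, then bring the daemons up.
cd $HADOOP_HOME
bin/hdfs namenode -format
sbin/start-dfs.sh
```

Alternatively, the error text itself points at a lighter fix that is commonly used when the daemons really are meant to run as root: define the *_USER variables the start scripts check for. A sketch, to be added to $HADOOP_HOME/etc/hadoop/hadoop-env.sh (root is assumed here only because it matches the error above; a dedicated non-root user is generally preferable):

```bash
# Tell the Hadoop 3.x start scripts which user each HDFS daemon runs as.
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
```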