Tag Archives: Hadoop

How to Solve the Hadoop Missing hadoop.dll and winutils.exe File Error

The problem encountered today when running MapReduce locally:

Could not locate executable null\bin\winutils.exe in the Hadoop binaries
Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

 

Reason:

  1. Missing winutils.exe file: Could not locate executable null\bin\winutils.exe in the Hadoop binaries
  2. Missing hadoop.dll file: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

 

Solution: download winutils.exe and hadoop.dll (matching your Hadoop version) and put both files into the bin directory under the Hadoop installation directory.
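For example (a hedged sketch: the install path C:\hadoop and the download location are assumptions, and the winutils build must match your Hadoop version):

:: copy the two files into Hadoop's bin directory
copy C:\Downloads\winutils.exe C:\hadoop\bin\
copy C:\Downloads\hadoop.dll C:\hadoop\bin\

:: make sure HADOOP_HOME points at the Hadoop directory and bin is on PATH
setx HADOOP_HOME "C:\hadoop"
setx PATH "%PATH%;C:\hadoop\bin"

Reopen the terminal or IDE afterwards so the new environment variables take effect.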

Ranger Install Error: [E] ranger_core_db_mysql.sql file import failed!

Exception information:

Error executing: CREATE FUNCTION `getXportalUIdByLoginId`(input_val VARCHAR(100)) RETURNS int(11) BEGIN DECLARE myid INT; SELECT x_portal_user.id into myid FROM x_portal_user WHERE x_portal_user.login_id = input_val; RETURN myid; END  
java.sql.SQLException: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)
SQLException : SQL state: HY000 java.sql.SQLException: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable) ErrorCode: 1418
2022-02-24 18:24:06,439  [E] ranger_core_db_mysql.sql file import failed!
2022-02-24 18:24:06,439  [I] Unable to create DB schema, Please drop the database and try again

...
2022-02-24 18:24:08,667  [E] CORE_DB_SCHEMA import failed!


This error is reported when installing Ranger: MySQL has binary logging enabled, and the function created by ranger_core_db_mysql.sql declares none of DETERMINISTIC, NO SQL, or READS SQL DATA.

Solution:

SET GLOBAL log_bin_trust_function_creators = 1;
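For example, the setting can be applied from the shell with the mysql client, after which the Ranger setup can be re-run (a hedged sketch; connection details are assumptions):

# one-off, takes effect immediately (requires SUPER privilege)
mysql -u root -p -e "SET GLOBAL log_bin_trust_function_creators = 1;"

# optional: persist across MySQL restarts by adding this line under [mysqld] in /etc/my.cnf
#   log_bin_trust_function_creators = 1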

[Solved] java.io.IOException: Got error, status=ERROR, status message, ack with firstBadLink as

Today I hit the following error when uploading files to HDFS:

2022-03-17 17:17:11,994 INFO hdfs.DataStreamer: Exception in createBlockOutputStream blk_1073741946_1137
java.io.IOException: Got error, status=ERROR, status message , ack with firstBadLink as 120.78.239.136:9866
	at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
	at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1778)
	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2022-03-17 17:17:11,998 WARN hdfs.DataStreamer: Abandoning BP-1890970308-172.25.12.163-1646541195774:blk_1073741946_1137
2022-03-17 17:17:12,007 WARN hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[120.78.239.136:9866,DS-87287b18-21ac-4314-884e-d78b139945b8,DISK]

The result is that slave1 holds no copy of the block.

Reason and Solution:

slave1 did not have its firewall turned off, so the DataNode port (9866) was unreachable; turning off the firewall fixes it.
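For example, on CentOS 7 with firewalld (a hedged sketch; adapt to whichever firewall your distribution runs):

# run on slave1, the excluded DataNode
systemctl stop firewalld         # stop the firewall immediately
systemctl disable firewalld      # keep it off after reboot
firewall-cmd --state             # should now report "not running"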

 

[Solved] HBase Error: ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

Error reporting

After installing HBase and entering the shell for the first time, you may encounter this problem:

[root@zhiyong2 /]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.1.10, rUnknown, Mon Nov 23 09:56:35 WIB 2020
Took 0.0036 seconds
hbase(main):001:0> list_namespace
NAMESPACE

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
        at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:3003)
        at org.apache.hadoop.hbase.master.HMaster.getNamespaces(HMaster.java:3299)
        at org.apache.hadoop.hbase.master.MasterRpcServices.listNamespaceDescriptors(MasterRpcServices.java:1237)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

For usage try 'help "list_namespace"'

Took 9.8027 seconds

In this case waiting does not help: I waited for quite a while and the master still had not finished initializing…

Solution

Because it is a new cluster with no useful data, you can clear the metadata in the following way to reinitialize. Before clearing the metadata, stop the HBase components in USDP’s Web UI.

Delete ZooKeeper’s metadata

Since part of HBase’s metadata is stored in ZooKeeper, delete the /hbase znode as shown below:

[root@zhiyong2 /]# zkCli.sh
Connecting to localhost:2181
2022-03-03 00:09:47,689 [myid:] - INFO  [main:[email protected]] - Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
2022-03-03 00:09:47,693 [myid:] - INFO  [main:[email protected]] - Client environment:host.name=zhiyong2
2022-03-03 00:09:47,693 [myid:] - INFO  [main:[email protected]] - Client environment:java.version=1.8.0_202
2022-03-03 00:09:47,695 [myid:] - INFO  [main:[email protected]] - Client environment:java.vendor=Oracle Corporation
2022-03-03 00:09:47,695 [myid:] - INFO  [main:[email protected]] - Client environment:java.home=/usr/java/jdk1.8.0_202/jre
2022-03-03 00:09:47,696 [myid:] - INFO  [main:[email protected]] - Client environment:java.class.path=/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-server/target/classes:/srv/udp/2.0.0.0/zookeeper/bin/../build/classes:/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/srv/udp/2.0.0.0/zookeeper/bin/../build/lib/*.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/slf4j-log4j12-1.7.25.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/slf4j-api-1.7.25.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/netty-3.10.6.Final.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/log4j-1.2.17.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/jline-0.9.94.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-3.4.13.jar:/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/srv/udp/2.0.0.0/zookeeper/bin/../conf:.:/usr/java/jdk1.8.0_202/jre/lib/rt.jar:/usr/java/jdk1.8.0_202/lib/dt.jar:/usr/java/jdk1.8.0_202/lib/tools.jar
2022-03-03 00:09:47,696 [myid:] - INFO  [main:[email protected]] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2022-03-03 00:09:47,696 [myid:] - INFO  [main:[email protected]] - Client environment:java.io.tmpdir=/tmp
2022-03-03 00:09:47,696 [myid:] - INFO  [main:[email protected]] - Client environment:java.compiler=<NA>
2022-03-03 00:09:47,696 [myid:] - INFO  [main:[email protected]] - Client environment:os.name=Linux
2022-03-03 00:09:47,696 [myid:] - INFO  [main:[email protected]] - Client environment:os.arch=amd64
2022-03-03 00:09:47,697 [myid:] - INFO  [main:[email protected]] - Client environment:os.version=3.10.0-957.el7.x86_64
2022-03-03 00:09:47,697 [myid:] - INFO  [main:[email protected]] - Client environment:user.name=root
2022-03-03 00:09:47,697 [myid:] - INFO  [main:[email protected]] - Client environment:user.home=/root
2022-03-03 00:09:47,697 [myid:] - INFO  [main:[email protected]] - Client environment:user.dir=/
2022-03-03 00:09:47,698 [myid:] - INFO  [main:[email protected]] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 [email protected]
2022-03-03 00:09:47,721 [myid:] - INFO  [main-SendThread(localhost:2181):[email protected]] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
Welcome to ZooKeeper!
JLine support is enabled
2022-03-03 00:09:47,808 [myid:] - INFO  [main-SendThread(localhost:2181):[email protected]] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2022-03-03 00:09:47,817 [myid:] - INFO  [main-SendThread(localhost:2181):[email protected]] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000009708f000e, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, brokers, zookeeper, yarn-leader-election, hadoop-ha, admin, isr_change_notification, dolphinscheduler, log_dir_event_notification, controller_epoch, rmstore, consumers, latest_producer_id_block, config, hbase]
[zk: localhost:2181(CONNECTED) 1] rmr /hbase
[zk: localhost:2181(CONNECTED) 2] ^C
[root@zhiyong2 /]# ^C
[root@zhiyong2 /]#

HDFS metadata deletion

Since part of HBase’s data is stored in HDFS, the metadata HBase keeps in HDFS must be deleted as well. Because USDP ships with Ranger and has real users and permissions, you cannot use root to delete important HDFS data; you must switch to the hadoop user first.

[root@zhiyong2 /]# hadoop fs -rmr /hbase/data/hbase/meta/*
rmr: DEPRECATED: Please use '-rm -r' instead.
rmr: Failed to move to trash: hdfs://zhiyong-1/hbase/data/hbase/meta/.tabledesc: Permission denied: user=root, access=WRITE, inode="/hbase/data/hbase/meta":hadoop:supergroup:drwxr-xr-x
rmr: Failed to move to trash: hdfs://zhiyong-1/hbase/data/hbase/meta/.tmp: Permission denied: user=root, access=WRITE, inode="/hbase/data/hbase/meta":hadoop:supergroup:drwxr-xr-x
rmr: Failed to move to trash: hdfs://zhiyong-1/hbase/data/hbase/meta/1588230740: Permission denied: user=root, access=WRITE, inode="/hbase/data/hbase/meta":hadoop:supergroup:drwxr-xr-x
[root@zhiyong2 /]# cd /etc/passwd/
-bash: cd: /etc/passwd/: Not a directory
[root@zhiyong2 /]# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
polkitd:x:999:998:User for polkitd:/:/sbin/nologin
libstoragemgmt:x:998:997:daemon account for libstoragemgmt:/var/run/lsm:/sbin/nologin
abrt:x:173:173::/etc/abrt:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
ntp:x:38:38::/etc/ntp:/sbin/nologin
chrony:x:997:995::/var/lib/chrony:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
hadoop:x:1000:1000::/home/hadoop:/bin/bash
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/false
saslauth:x:996:76:Saslauthd user:/run/saslauthd:/sbin/nologin
elastic:x:1001:1001::/home/elastic:/bin/bash
hue:x:1002:1002::/home/hue:/bin/bash
[root@zhiyong2 /]# su - hadoop
Last login: Thu Mar  3 00:24:33 CST 2022
[hadoop@zhiyong2 ~]$ hadoop fs -rmr /hbase/data/hbase/meta/*
rmr: DEPRECATED: Please use '-rm -r' instead.
2022-03-03 00:27:31 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/meta/.tabledesc' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/meta/.tabledesc
2022-03-03 00:27:31 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/meta/.tmp' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/meta/.tmp
2022-03-03 00:27:31 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/meta/1588230740' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/meta/1588230740
[hadoop@zhiyong2 ~]$ hadoop fs -rmr /hbase/data/hbase/namespace/*
rmr: DEPRECATED: Please use '-rm -r' instead.
2022-03-03 00:27:34 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/namespace/.tabledesc' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/namespace/.tabledesc
2022-03-03 00:27:34 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/namespace/.tmp' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/namespace/.tmp
2022-03-03 00:27:34 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/namespace/98fb8a0448305b2f9af4f9a72495b6df' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/namespace/98fb8a0448305b2f9af4f9a72495b6df
[hadoop@zhiyong2 ~]$ hadoop fs -rmr /hbase/MasterProcWALs/*
rmr: DEPRECATED: Please use '-rm -r' instead.
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000009.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000009.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000010.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000010.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000011.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000011.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000012.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000012.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000013.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000013.log
[hadoop@zhiyong2 ~]$ exit
logout
[root@zhiyong2 /]#

Restart HBase

HBase is available again after the restart:

[hadoop@zhiyong2 ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.1.10, rUnknown, Mon Nov 23 09:56:35 WIB 2020
Took 0.0045 seconds
hbase(main):001:0> list_namespace
list_namespace          list_namespace_tables
hbase(main):001:0> list_namespace
NAMESPACE
default
hbase
2 row(s)
Took 0.5645 seconds
hbase(main):002:0> exit
[hadoop@zhiyong2 ~]$ exit
logout
[root@zhiyong2 /]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.1.10, rUnknown, Mon Nov 23 09:56:35 WIB 2020
Took 0.0035 seconds
hbase(main):001:0> list_namespace
list_namespace          list_namespace_tables
hbase(main):001:0> list_namespace
NAMESPACE
default
hbase
2 row(s)
Took 0.6270 seconds
hbase(main):002:0> exit
[root@zhiyong2 /]#

At this point the hbase shell works for both the hadoop user and root. Note that the Linux root user does not automatically hold the highest permissions on HDFS: the HDFS superuser is the account that started the NameNode (here, hadoop).

Hive Run Error: Error: Java heap space [How to Solve]

Solution:

When using the MR engine, increase the map/reduce task memory and heap sizes:

set mapreduce.map.memory.mb=12000;
set mapreduce.reduce.memory.mb=12000;
set mapred.map.child.java.opts=-server -Xmx10000m -Djava.net.preferIPv4Stack=true;
set io.sort.mb=100;
set mapred.reduce.child.java.opts=-server -Xmx10000m -Djava.net.preferIPv4Stack=true;
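These settings can also be passed non-interactively; a hedged sketch with the hive CLI (the script name query.sql is an assumption, and the sizes must fit your cluster’s limits):

hive --hiveconf mapreduce.map.memory.mb=12000 \
     --hiveconf mapreduce.reduce.memory.mb=12000 \
     -f query.sql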

When using the Tez engine:

set hive.execution.engine=tez;
set tez.am.resource.memory.mb=9216;
set hive.exec.orc.split.strategy=BI;

[Solved] eclipse Error: org.apache.hadoop.hbase.NotServingRegionException:

Error 1: org.apache.hadoop.hbase.NotServingRegionException
Error 2: Can’t get master address from ZooKeeper; znode data == null

[[email protected] bin]# sh hbase hbck
2022-01-29 16:48:49,797 INFO  [main] client.HConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 failed; retrying after sleep of 10044, exception=java.io.IOException: Can't get master address from ZooKeeper; znode data == null

Solution:

Go to HBase’s bin directory and stop HBase:

sh stop-hbase.sh

Start the ZooKeeper client and delete the /hbase znode:

[[email protected] bin]# sh zkCli.sh
[zk: localhost:2181(CONNECTED) 1] rmr /hbase

Restart the HBase cluster:

sh start-hbase.sh
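To verify the recovery (a hedged check; piping a command into hbase shell is standard non-interactive usage):

echo "status" | hbase shell      # should report an active master
sh hbase hbck                    # should no longer fail with "znode data == null"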

[Solved] Hive On Spark Error: Remote Spark Driver – HiveServer2 connection has been closed

Error Messages:

Failed to monitor Job[-1] with exception ‘java.lang.IllegalStateException(Connection to remote Spark driver was lost)’ Last known state = SENT
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Unable to send message SyncJobRequest{job=org.apache.hadoop.[email protected]7805478c} because the Remote Spark Driver - HiveServer2 connection has been closed.

To find the real cause of this problem, you need to go to YARN and check the application’s detailed logs.

It turns out that the executor memory is too small and needs to be increased on the Hive configuration page.

Save the changes and restart the relevant components; the problem is solved.
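If you prefer to experiment per-session first, the executor memory can also be raised with the standard Hive-on-Spark properties (a hedged sketch; the values are illustrative and must fit within your YARN container limits):

set spark.executor.memory=4g;
set spark.driver.memory=2g;
set spark.executor.cores=2;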

How to Solve Doris dynamic partition table routineload Error

Error report:

Reason: no partition for this tuple.tuple=…

Analysis:

Data keeps arriving from Kafka, but the dynamic partition table has not created the time partition that this data falls into.

Solution:

# Add the missing partitions to the dynamic partition table

## Dynamic partition to static partition
ALTER TABLE ods_log_outlog_course_ydyjs_app SET ("dynamic_partition.enable" = "false");

## Add uncreated partitions
ALTER TABLE course_log.ods_log_outlog_course_ydyjs_app
        ADD PARTITION p20220307 VALUES [("2022-03-07"), ("2022-03-08"));
## Add uncreated partitions        
ALTER TABLE course_log.ods_log_outlog_course_ydyjs_app
        ADD PARTITION p20220308 VALUES [("2022-03-08"), ("2022-03-09"));

## Check if the creation is successful        
show partitions from ods_log_outlog_course_ydyjs_app;

## Restore a static partition to a dynamic partition
ALTER TABLE ods_log_outlog_course_ydyjs_app SET ("dynamic_partition.enable" = "true");

## Resume the routine load
resume routine load for ods_log_outlog_course_ydyjs_app_load;

[Solved] ERROR: Attempting to operate on hdfs namenode as root ERROR: but there is no HDFS_NAMENODE_USER defined

Question:

ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation

Solution:

Find the start-dfs.sh and stop-dfs.sh files under the Hadoop sbin directory and add the following to the top of both:

#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

Likewise, add the following to the top of the start-yarn.sh and stop-yarn.sh files:

#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
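Alternatively (a hedged equivalent, assuming the default layout where the scripts source hadoop-env.sh), the same variables can be exported once in $HADOOP_HOME/etc/hadoop/hadoop-env.sh instead of patching each script:

export HDFS_DATANODE_USER=root
export HDFS_NAMENODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root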

[Solved] Unable to connect to a as user root: com.jcraft.jsch.JSchException: Auth fail

The specific error encountered while building Hadoop HA is as follows:

com.jcraft.jsch.JSchException: Auth fail
	at com.jcraft.jsch.Session.connect(Session.java:452)
	at org.apache.hadoop.ha.SshFenceByTcpPort.tryFence(SshFenceByTcpPort.java:100)
	at org.apache.hadoop.ha.NodeFencer.fence(NodeFencer.java:97)
	at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:532)
	at org.apache.hadoop.ha.ZKFailoverController.fenceOldActive(ZKFailoverController.java:505)
	at org.apache.hadoop.ha.ZKFailoverController.access$1100(ZKFailoverController.java:61)
	at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.fenceOldActive(ZKFailoverController.java:892)
	at org.apache.hadoop.ha.ActiveStandbyElector.fenceOldActive(ActiveStandbyElector.java:902)
	at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:801)
	at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:416)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
2021-12-27 11:07:20,846 WARN org.apache.hadoop.ha.NodeFencer: Fencing method org.apache.hadoop.ha.SshFenceByTcpPort(null) was unsuccessful.
2021-12-27 11:07:20,846 ERROR org.apache.hadoop.ha.NodeFencer: Unable to fence service by any configured method.
2021-12-27 11:07:20,846 WARN org.apache.hadoop.ha.ActiveStandbyElector: Exception handling the winning of election
java.lang.RuntimeException: Unable to fence NameNode at a/192.168.0.149:8020
	at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:533)
	at org.apache.hadoop.ha.ZKFailoverController.fenceOldActive(ZKFailoverController.java:505)
	at org.apache.hadoop.ha.ZKFailoverController.access$1100(ZKFailoverController.java:61)
	at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.fenceOldActive(ZKFailoverController.java:892)
	at org.apache.hadoop.ha.ActiveStandbyElector.fenceOldActive(ActiveStandbyElector.java:902)
	at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:801)
	at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:416)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
2021-12-27 11:07:20,846 INFO org.apache.hadoop.ha.ActiveStandbyElector: Trying to re-establish ZK session
2021-12-27 11:07:20,851 INFO org.apache.zookeeper.ZooKeeper: Session: 0x37df9b417310059 closed
2021-12-27 11:07:21,852 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=a:2181,b:2181,c:2181 sessionTimeout=5000 watcher[email protected]44a90199
2021-12-27 11:07:21,853 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server b/192.168.0.150:2181. Will not attempt to authenticate using SASL (unknown error)
2021-12-27 11:07:21,854 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to b/192.168.0.150:2181, initiating session
2021-12-27 11:07:21,859 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server b/192.168.0.150:2181, sessionid = 0x27df9b3aaf60068, negotiated timeout = 5000
2021-12-27 11:07:21,860 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2021-12-27 11:07:21,861 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
2021-12-27 11:07:21,862 INFO org.apache.hadoop.ha.ActiveStandbyElector: Checking for any old active which needs to be fenced...
2021-12-27 11:07:21,862 INFO org.apache.hadoop.ha.ActiveStandbyElector: Old node exists: 0a096d79636c757374657212026e311a016120d43e28d33e
2021-12-27 11:07:21,864 INFO org.apache.hadoop.ha.ZKFailoverController: Should fence: NameNode at a/192.168.0.149:8020
2021-12-27 11:07:22,866 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: a/192.168.0.149:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
2021-12-27 11:07:22,867 WARN org.apache.hadoop.ha.FailoverController: Unable to gracefully make NameNode at a/192.168.0.149:8020 standby (unable to connect)
java.net.ConnectException: Call From b/192.168.0.150 to a:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.GeneratedConstructorAccessor26.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1480)
	at org.apache.hadoop.ipc.Client.call(Client.java:1407)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy9.transitionToStandby(Unknown Source)
	at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToStandby(HAServiceProtocolClientSideTranslatorPB.java:112)
	at org.apache.hadoop.ha.FailoverController.tryGracefulFence(FailoverController.java:172)
	at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:514)
	at org.apache.hadoop.ha.ZKFailoverController.fenceOldActive(ZKFailoverController.java:505)
	at org.apache.hadoop.ha.ZKFailoverController.access$1100(ZKFailoverController.java:61)
	at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.fenceOldActive(ZKFailoverController.java:892)
	at org.apache.hadoop.ha.ActiveStandbyElector.fenceOldActive(ActiveStandbyElector.java:902)
	at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:801)
	at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:416)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
	at org.apache.hadoop.ipc.Client.call(Client.java:1446)
	... 14 more

Here are two possible reasons for this error; corrections and discussion are welcome.

The first is that passwordless SSH login is not configured. From the machine reporting the error, try to SSH into the other machines and check whether you can log in without a password.

The second is that the dfs.ha.fencing.methods parameter is set to sshfence, which relies on the fuser command; fuser may not be installed (it is required on every NameNode host).
Installation command: yum -y install psmisc
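A hedged sketch of setting up passwordless SSH between the NameNodes (the hostnames a and b follow the logs above; adjust users and hosts to your cluster):

# on each NameNode, generate a key pair (accept the defaults)
ssh-keygen -t rsa
# copy the public key to the other NameNode(s)
ssh-copy-id root@a
ssh-copy-id root@b
# verify: this should log in without prompting for a password
ssh root@a exit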

[Solved] sqoop Export Data to MySQL Error: ERROR tool.ExportTool: Error during export: Export job failed

An error occurred when exporting data from the Hive warehouse into a MySQL database with the sqoop tool on pseudo-distributed HDFS under Ubuntu.

Workaround:

Sqoop leaves the generated .jar, .java, and .class files in a folder under ~/tmp/sqoop-chen/compile/ (the sqoop-chen directory name comes from the sqoop user name). Copy the generated class file into the /usr/local/sqoop/lib/ folder, then run the export command again.

After running the export command again, the job completes successfully and the exported rows are visible in MySQL (the original command, output, and MySQL query were shown as screenshots).
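Since the original command was only shown as a screenshot, here is a minimal sketch of a typical sqoop export from a Hive warehouse directory into MySQL (database name, table name, credentials, and field delimiter are all assumptions):

sqoop export \
  --connect jdbc:mysql://localhost:3306/testdb \
  --username root -P \
  --table employee \
  --export-dir /user/hive/warehouse/employee \
  --input-fields-terminated-by '\001'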

[Solved] HBase shell command Error: ERROR: connection closed

Problem description

During a big data storage experiment, the HBase shell commands report an error: connection closed.

Checking the logs shows that the service reporting the error is not running.

Final solution

After a lot of troubleshooting, I finally found that it was a JDK version problem: the version I was using, java-17.0.1, is too high. Changing to jdk-8u331-linux-x64.tar.gz solved it.

My versions are

hadoop 3.2.2
hbase 2.3.6
java 1.8.0

The Hadoop/HBase/Java version compatibility matrix (attached as an image in the original) can be found in the official HBase reference guide.


Solution steps

1. Empty the temporary files of Hadoop

First stop the HBase and Hadoop processes:

stop-all.sh

Check hdfs-site.xml for the configured storage locations, delete all the files in the two folders (the data folder as well as the name folder), and then re-format Hadoop.
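A hedged sketch of this step (the name/data directory paths are assumptions; use the dfs.namenode.name.dir and dfs.datanode.data.dir values from your own hdfs-site.xml):

stop-hbase.sh                        # stop HBase first
stop-all.sh                          # then stop Hadoop
rm -rf /opt/hadoop/tmp/dfs/name/*    # NameNode dir (assumed path)
rm -rf /opt/hadoop/tmp/dfs/data/*    # DataNode dir (assumed path)
hdfs namenode -format                # re-format HDFS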

2. Change Java to the specified version (don’t forget to update the Java folder name in the environment variables).

I use 1.8.0_331:

java -version

3. Restart the computer, then start SSH, Hadoop, and HBase:

service ssh start
start-dfs.sh
start-hbase.sh

4. Enter the hbase shell and confirm that everything works.