Tag Archives: Big data

[Solved] Ranger Error Report When Connecting to Hive

After installing the ranger-1.2.0-hive-plugin, the Ranger UI reports the following error when testing the Hive connection:
org.apache.ranger.plugin.client.HadoopException: Unable to execute SQL [show databases like "*"]..
Unable to execute SQL [show databases like "*"]..
Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [root] does not have [USE] privilege on [*].
Permission denied: user [root] does not have [USE] privilege on [*].

We also get the same error when connecting from the backend with beeline -u jdbc:hive2://localhost:10000:
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [anonymous] does not have [USE] privilege on [*] (state=42000,code=40000)
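The cause is that the connecting user is not granted access by any Ranger Hive policy. A minimal sketch of the usual remedy (the user name and privileges below are assumptions, not from the original post): grant the user the needed privileges in the Ranger Hive service policy, then reconnect as that user.

# after adding a Ranger Hive policy that grants user [hive] e.g. select on database *
beeline -u jdbc:hive2://localhost:10000 -n hive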

How to Solve the HMaster Hang-up Issue Due to NameNode Switching in HA Mode


Problem:

When building a big data cluster on virtual machines for learning, the machines often get stuck and nodes hang up inexplicably because the hardware is underpowered.

In a Hadoop high-availability cluster with insufficient machine resources, the two NameNodes keep switching state automatically, which causes the HMaster node of the HBase cluster to hang up.

Cause:

Let’s check the master log of HBase:

# Go to the log file directory
[root@hadoop001 ~]# cd /opt/module/hbase-1.3.1/logs/
[root@hadoop001 logs]# vim hbase-root-master-hadoop001.log 

From the log, it is easy to see that the error is caused by the active/standby switchover of the NameNode.

Solution:

1. Modify the hbase-site.xml configuration file

Modify the hbase.rootdir configuration:

<property>
     <name>hbase.rootdir</name>
     <value>hdfs://hadoop001:9000/hbase</value>
</property>

# change to 
<property>
     <name>hbase.rootdir</name>
     <value>hdfs://ns/hbase</value>
</property>

# Note: ns here is the value of Hadoop's dfs.nameservices (configured in hdfs-site.xml; fill in according to your own configuration)
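If you are unsure of the nameservice name, you can read it straight from the client configuration (a quick check; assumes the Hadoop commands are on PATH):

[root@hadoop001 ~]# hdfs getconf -confKey dfs.nameservices
ns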

2. Create soft links

[root@hadoop001 ~]# ln -s /opt/module/hadoop-2.7.6/etc/hadoop/hdfs-site.xml /opt/module/hbase-1.3.1/conf/hdfs-site.xml
[root@hadoop001 ~]# ln -s /opt/module/hadoop-2.7.6/etc/hadoop/core-site.xml /opt/module/hbase-1.3.1/conf/core-site.xml 

3. Synchronize the HBase configuration files across the cluster

Use the scp command to distribute them to the other nodes, for example:
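(hadoop002 below is a placeholder; repeat for each of your other nodes)

[root@hadoop001 ~]# scp -r /opt/module/hbase-1.3.1/conf/ root@hadoop002:/opt/module/hbase-1.3.1/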

Then restart the cluster, which resolves the HMaster hang-up problem.

[Solved] Flume Error: java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration

Failed to start agent because dependencies were not found in classpath. Error follows.
java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
	at org.apache.flume.sink.hdfs.HDFSEventSink.getCodec(HDFSEventSink.java:324)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
The above error occurs because the Hadoop environment variables are not configured on the server.
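A minimal sketch of the fix, assuming Hadoop is installed under /opt/module/hadoop-2.7.6 (adjust to your install): export the Hadoop environment variables so the flume-ng launcher can find the Hadoop jars on the classpath, then reload the profile.

# in /etc/profile (the path is an assumption)
export HADOOP_HOME=/opt/module/hadoop-2.7.6
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

[root@hadoop001 ~]# source /etc/profile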

[Solved] Failed update hbase:meta table descriptor HBase Startup Error

Over the past two days I deployed the big data stack on new servers. HBase installed successfully, but HMaster failed to start; sometimes jps showed that HMaster hung up after a few tens of seconds (there are seven servers in total; node1 is the Master and node2 is the Backup Master). Checking the log, the error contents are as follows:

2021-08-04 15:32:38,839 INFO  [main] util.FSTableDescriptors: ta', {TABLE_ATTRIBUTES => {IS_META => 'true', REGION_REPLICATION => '1', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2021-08-04 15:32:39,026 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000001
2021-08-04 15:32:39,042 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000002
2021-08-04 15:32:39,052 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000003
2021-08-04 15:32:39,062 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000004
2021-08-04 15:32:39,072 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000005
2021-08-04 15:32:39,082 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000006
2021-08-04 15:32:39,094 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000007
2021-08-04 15:32:39,104 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000008
2021-08-04 15:32:39,115 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000009
2021-08-04 15:32:39,123 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000010
2021-08-04 15:32:39,124 ERROR [main] regionserver.HRegionServer: Failed construction RegionServer
java.io.IOException: Failed update hbase:meta table descriptor
	at org.apache.hadoop.hbase.util.FSTableDescriptors.tryUpdateMetaTableDescriptor(FSTableDescriptors.java:144)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:738)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:635)
	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:528)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3163)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:253)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:149)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3181)
2021-08-04 15:32:39,135 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster. 
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3170)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:253)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:149)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3181)
Caused by: java.io.IOException: Failed update hbase:meta table descriptor
	at org.apache.hadoop.hbase.util.FSTableDescriptors.tryUpdateMetaTableDescriptor(FSTableDescriptors.java:144)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:738)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:635)
	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:528)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3163)
	... 5 more

I checked a lot of material, but none of it solved the problem. Later I found that the initialized HBase folder did not exist on HDFS.

Troubleshooting:

What is going on? After calming down, I kept reading the log. It says very clearly that the hbase:meta table descriptor cannot be updated; the folder does not exist on HDFS, so of course it cannot be updated. Was there a problem when the metadata directory was created?

When I ran hadoop fs -mkdir /user/hbase, it failed: no permission. The root user has no permission to create the folder? I suddenly realized that although I have root privileges on the operating system, HDFS enforces its own permission checks, so root still cannot create the directory. The cause was found; the solution is below.
Solution:

HDFS permissions are controlled in hdfs-site.xml, so:

1. Go to /data/hadoop/hadoop/etc/hadoop and find hdfs-site.xml

2. vim hdfs-site.xml. Sure enough, dfs.permissions.enabled is not set (the default is true)
3. Add:

<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>

If yours is set to true, change it to false.
:wq to save and exit
4. Restart Hadoop: run stop-yarn.sh first, then stop-dfs.sh, then use jps to check that the stop succeeded (check whether the DataNode and NameNode processes are gone). To start, run start-dfs.sh first, then start-yarn.sh.
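The restart sequence as commands (assumes Hadoop's sbin directory is on PATH):

stop-yarn.sh
stop-dfs.sh
jps            # confirm the NameNode/DataNode processes are gone
start-dfs.sh
start-yarn.sh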
5. After the previous step succeeds, start HBase. This time it starts successfully.
Run hbase shell directly (if the environment variable is not configured, enter HBase's bin directory and start it with ./hbase shell).
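One caveat: setting dfs.permissions.enabled to false turns off HDFS permission checking cluster-wide. A less invasive alternative (a sketch, not from the original post) is to keep permissions on and instead create the HBase root directory as the HDFS superuser (the user that started the NameNode) and hand it to the user running HBase:

# run as the HDFS superuser; root is the user running HBase in this post
hadoop fs -mkdir -p /user/hbase
hadoop fs -chown -R root:root /user/hbase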

How to Solve Pytorch DataLoader Loading Error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 1023

The complete error report is:

Traceback (most recent call last):
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_comm.py", line 301, in _on_run
    r = r.decode('utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 1023: unexpected end of data

 

Solution:

This does not eliminate the UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 1023: unexpected end of data message itself; it solves the problem of the model not being able to iterate. The method is as follows:

Replace the data source in tensor format with numpy format, then convert it back to a tensor, and finally put it into the DataLoader.

The UnicodeDecodeError is still printed during the numpy-to-tensor conversion, but it no longer breaks the data loading loop inside the DataLoader, so model training is not affected.

Flink Error: is not serializable. The object probably contains or references non serializable fields.

Today a colleague suddenly reported this error. At first he had no idea how to react: member variables can't be serialized?

Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: java.lang.ref.ReferenceQueue$Lock@11fc564b is not serializable. The object probably contains or references non serializable fields.
	at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:151)
	at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:126)
	at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:126)
	at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:126)
	at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:126)
	at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:126)
	at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:71)
	at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1821)
	at org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:188)
	at org.apache.flink.streaming.api.datastream.KeyedStream.process(KeyedStream.java:398)
	at org.apache.flink.streaming.api.datastream.KeyedStream.process(KeyedStream.java:374)
	at com.xintujing.flinkdemo.text.UserCount_3.main(UserCount_3.java:53)
Caused by: java.io.NotSerializableException: java.lang.ref.ReferenceQueue$Lock
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
	at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
	at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:586)
	at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:133)
	... 11 more

When you first see this error you are baffled: something "cannot be serialized"? What does that mean?

First, analyze the problem. The message tells us that a class cannot be serialized.

What happens if a member of a class does not implement the Serializable interface? This is a basic question about the Java serialization process: if you try to serialize an object whose class implements Serializable, but the object holds a reference to a non-serializable class, a NotSerializableException is thrown at run time.

So if you use an object as a member variable of another object that Flink has to serialize, all of that object's member variables must be serializable as well; if any of them cannot be serialized, this error is reported. In short: do not keep non-serializable things as member variables (for example, mark such a field transient, or create it inside the function's open() method instead of passing it in).

[Solved] Kafka2.3.0 Error: Timeout of 60000ms expired before the position for partition could be determined

Flink consuming Kafka 2.3.0 reported an error: the partition position could not be determined:

Kafka Client Timeout of 60000ms expired before the position for partition could be determined

I searched around online but did not find the cause. Later I found that it was because of Kafka's configuration file, server.properties: the host name needs to be configured there. Add to server.properties:

host.name=192.168.0.30 (the IP address of the current server), and each Kafka node should be configured with its own IP address.
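For reference, the relevant line in config/server.properties (the IP is each broker's own address; the listeners form is the newer equivalent and is an assumption, not from the original post):

# config/server.properties on each broker
host.name=192.168.0.30
# on Kafka 0.10+ the preferred form is:
# listeners=PLAINTEXT://192.168.0.30:9092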

The transaction log for database 'xxxx' is full due to AVAILABILITY_REPLICA error message in SQL Server

Reason:

The log on the primary replica has reached its maximum size, or the disk is full.

Analysis:

A log block on the primary replica can only be reused after it has been hardened and redone on the other replicas.

So if:

1. There is a send delay, due to network latency or limited bandwidth, or

2. Redo on a replica is slow due to delay, blocking, or insufficient resources,

then the log keeps growing and cannot be freed by log backups.

log_send_queue_size: log blocks that have not yet been received by the replica. A large value means send delay.

redo_queue_size: log blocks that have not yet been redone on the replica. A large value means redo delay.

The following query checks these two queues for each database and replica:

SELECT ag.name AS [availability_group_name]
, d.name AS [database_name]
, ar.replica_server_name AS [replica_instance_name]
, drs.truncation_lsn , drs.log_send_queue_size
, drs.redo_queue_size
FROM sys.availability_groups ag
INNER JOIN sys.availability_replicas ar
    ON ar.group_id = ag.group_id
INNER JOIN sys.dm_hadr_database_replica_states drs
    ON drs.replica_id = ar.replica_id
INNER JOIN sys.databases d
    ON d.database_id = drs.database_id
WHERE drs.is_local=0
ORDER BY ag.name ASC, d.name ASC, drs.truncation_lsn ASC, ar.replica_server_name ASC

Solution:

1. Remove the DB from the most-delayed replica and rejoin it later.

2. If the redo thread on the replica is blocked by frequent read operations, set the replica to unreadable and change it back later.

3. If there is still space on the disk, let the log file grow automatically.

4. If the maximum size limit has been reached and the disk still has space, increase the maximum size limit.

5. If the log file has reached the 2 TB system maximum and there are spare disks, add another log file.

Reference:

https://docs.microsoft.com/en-US/troubleshoot/sql/availability-groups/error-9002-transaction-log-large

Exception: LogStash::PluginLoadingError when importing MySQL data into ES on Windows

Error: unable to load D:\work\elasticsearch-7.13.1-windows-x86_64\elasticsearch-7.13.2\logstash-7.13.1-windows-x86_64\logstash-7.13.1\bin\mysql-connector-java-8.0.25\mysql-connector-java-8.0.20.jar from :jdbc_driver_library, file not readable (please check user and group permissions for the path)
  Exception: LogStash::PluginLoadingError
  Stack: D:/work/elasticsearch-7.13.1-windows-x86_64/elasticsearch-7.13.2/logstash-7.13.1-windows-x86_64/logstash-7.13.1/vendor/bundle/jruby/2.5.0/gems/logstash-integration-jdbc-5.0.7/lib/logstash/plugin_mixins/jdbc/common.rb:47:in `block in load_driver_jars'
org/jruby/RubyArray.java:1809:in `each'
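Note the version mismatch in the path above: the directory is mysql-connector-java-8.0.25 but the configured jar name is mysql-connector-java-8.0.20.jar, so jdbc_driver_library points at a file that does not exist. A sketch of the fix (the corrected jar name is an assumption; point at whatever jar actually exists on disk) in the logstash jdbc input configuration:

jdbc_driver_library => "D:/work/elasticsearch-7.13.1-windows-x86_64/elasticsearch-7.13.2/logstash-7.13.1-windows-x86_64/logstash-7.13.1/bin/mysql-connector-java-8.0.25/mysql-connector-java-8.0.25.jar"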

[Solved] MetaStoreClient lost connection. Attempting to reconnect (1 of 1) after 1s. getCurrentNotificationEventId

2021-06-29T16:59:35,856 DEBUG [pool-7-thread-1] ipc.ProtobufRpcEngine: Call: mkdirs took 4ms
2021-06-29T16:59:35,856  INFO [pool-7-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/hive/9b2f06f4-3ed2-4f0d-8f73-c81ec7970bb8/_tmp_space.db
2021-06-29T16:59:35,859  INFO [main] metastore.HiveMetaStoreClient: Trying to connect to metastore with URI thrift://cdh1:9083
2021-06-29T16:59:35,859  INFO [main] metastore.HiveMetaStoreClient: Opened a connection to metastore, current connections: 1
2021-06-29T16:59:35,861  INFO [main] metastore.HiveMetaStoreClient: Connected to metastore.
2021-06-29T16:59:35,861  INFO [main] metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hive (auth:SIMPLE) retries=1 delay=1 lifetime=0
2021-06-29T16:59:35,888  WARN [main] metastore.RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect (1 of 1) after 1s. getCurrentNotificationEventId
org.apache.thrift.TApplicationException: Internal error processing get_current_notificationEventId
	at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_current_notificationEventId(ThriftHiveMetastore.java:5575) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_current_notificationEventId(ThriftHiveMetastore.java:5563) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getCurrentNotificationEventId(HiveMetaStoreClient.java:2723) ~[hive-exec-3.1.2.jar:3.1.2]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy36.getCurrentNotificationEventId(Unknown Source) ~[?:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy36.getCurrentNotificationEventId(Unknown Source) ~[?:?]
	at org.apache.hadoop.hive.metastore.messaging.EventUtils$MSClientNotificationFetcher.getCurrentNotificationEventId(EventUtils.java:73) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.events.NotificationEventPoll.<init>(NotificationEventPoll.java:103) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.events.NotificationEventPoll.initialize(NotificationEventPoll.java:59) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:273) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1036) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2.access$1600(HiveServer2.java:140) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1305) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1149) ~[hive-service-3.1.2.jar:3.1.2]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.util.RunJar.run(RunJar.java:239) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.util.RunJar.main(RunJar.java:153) ~[hadoop-common-3.0.0.jar:?]
2021-06-29T16:59:36,888  INFO [main] metastore.RetryingMetaStoreClient: RetryingMetaStoreClient trying reconnect as hive (auth:SIMPLE)
2021-06-29T16:59:36,889 DEBUG [main] security.UserGroupInformation: PrivilegedAction as:hive (auth:SIMPLE) from:org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:183)
2021-06-29T16:59:36,890  INFO [main] metastore.HiveMetaStoreClient: Closed a connection to metastore, current connections: 0
2021-06-29T16:59:36,891  INFO [main] metastore.HiveMetaStoreClient: Trying to connect to metastore with URI thrift://cdh1:9083
2021-06-29T16:59:36,891  INFO [main] metastore.HiveMetaStoreClient: Opened a connection to metastore, current connections: 1
2021-06-29T16:59:36,893  INFO [main] metastore.HiveMetaStoreClient: Connected to metastore.
2021-06-29T16:59:36,896  INFO [main] server.HiveServer2: Shutting down HiveServer2
2021-06-29T16:59:36,896 DEBUG [main] metadata.Hive: Closing current thread's connection to Hive Metastore.
2021-06-29T16:59:36,970  INFO [main] server.HiveServer2: Stopping/Disconnecting tez sessions.
2021-06-29T16:59:36,969  INFO [main] metastore.HiveMetaStoreClient: Closed a connection to metastore, current connections: 0
2021-06-29T16:59:36,970  WARN [main] server.HiveServer2: Error starting HiveServer2 on attempt 1, will retry in 60000ms
java.lang.RuntimeException: Error initializing notification event poll
	at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:275) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1036) [hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2.access$1600(HiveServer2.java:140) [hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1305) [hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1149) [hive-service-3.1.2.jar:3.1.2]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.util.RunJar.run(RunJar.java:239) [hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.util.RunJar.main(RunJar.java:153) [hadoop-common-3.0.0.jar:?]
Caused by: java.io.IOException: org.apache.thrift.TApplicationException: Internal error processing get_current_notificationEventId
	at org.apache.hadoop.hive.metastore.messaging.EventUtils$MSClientNotificationFetcher.getCurrentNotificationEventId(EventUtils.java:75) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.events.NotificationEventPoll.<init>(NotificationEventPoll.java:103) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.events.NotificationEventPoll.initialize(NotificationEventPoll.java:59) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:273) ~[hive-service-3.1.2.jar:3.1.2]
	... 10 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing get_current_notificationEventId
	at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_current_notificationEventId(ThriftHiveMetastore.java:5575) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_current_notificationEventId(ThriftHiveMetastore.java:5563) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getCurrentNotificationEventId(HiveMetaStoreClient.java:2723) ~[hive-exec-3.1.2.jar:3.1.2]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy36.getCurrentNotificationEventId(Unknown Source) ~[?:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy36.getCurrentNotificationEventId(Unknown Source) ~[?:?]
	at org.apache.hadoop.hive.metastore.messaging.EventUtils$MSClientNotificationFetcher.getCurrentNotificationEventId(EventUtils.java:73) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.events.NotificationEventPoll.<init>(NotificationEventPoll.java:103) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.events.NotificationEventPoll.initialize(NotificationEventPoll.java:59) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:273) ~[hive-service-3.1.2.jar:3.1.2]
	... 10 more
2021-06-29T16:59:36,974 ERROR [pool-7-thread-1] utils.MetaStoreUtils: Got exception: org.apache.thrift.transport.TTransportException Cannot write to null outputStream
org.apache.thrift.transport.TTransportException: Cannot write to null outputStream
	at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:142) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:178) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:106) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:70) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.send_get_tables_by_type(ThriftHiveMetastore.java:1913) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_tables_by_type(ThriftHiveMetastore.java:1903) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTables(HiveMetaStoreClient.java:1676) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTables(HiveMetaStoreClient.java:1665) ~[hive-exec-3.1.2.jar:3.1.2]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy36.getTables(Unknown Source) ~[?:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy36.getTables(Unknown Source) ~[?:?]
	at org.apache.hadoop.hive.ql.metadata.Hive.getTablesByType(Hive.java:1310) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.Hive.getTableObjects(Hive.java:1222) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.Hive.getAllMaterializedViewObjects(Hive.java:1217) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry$Loader.run(HiveMaterializedViewsRegistry.java:166) ~[hive-exec-3.1.2.jar:3.1.2]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_291]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_291]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_291]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_291]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_291]
2021-06-29T16:59:36,974 ERROR [pool-7-thread-1] utils.MetaStoreUtils: Converting exception to MetaException
2021-06-29T16:59:36,976  WARN [pool-7-thread-1] metastore.RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect (1 of 1) after 1s. getTables
org.apache.hadoop.hive.metastore.api.MetaException: Got exception: org.apache.thrift.transport.TTransportException Cannot write to null outputStream
	at org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:168) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTables(HiveMetaStoreClient.java:1667) ~[hive-exec-3.1.2.jar:3.1.2]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy36.getTables(Unknown Source) ~[?:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy36.getTables(Unknown Source) ~[?:?]
	at org.apache.hadoop.hive.ql.metadata.Hive.getTablesByType(Hive.java:1310) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.Hive.getTableObjects(Hive.java:1222) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.Hive.getAllMaterializedViewObjects(Hive.java:1217) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry$Loader.run(HiveMaterializedViewsRegistry.java:166) ~[hive-exec-3.1.2.jar:3.1.2]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_291]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_291]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_291]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_291]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_291]
2021-06-29T16:59:37,976  INFO [pool-7-thread-1] metastore.RetryingMetaStoreClient: RetryingMetaStoreClient trying reconnect as hive (auth:SIMPLE)
2021-06-29T16:59:37,977 DEBUG [pool-7-thread-1] security.UserGroupInformation: PrivilegedAction as:hive (auth:SIMPLE) from:org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:183)
2021-06-29T16:59:37,977 DEBUG [pool-7-thread-1] metastore.HiveMetaStoreClient: Unable to shutdown metastore client. Will try closing transport directly.
org.apache.thrift.transport.TTransportException: Cannot write to null outputStream
	at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:142) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:178) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:106) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:70) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.TServiceClient.sendBaseOneway(TServiceClient.java:66) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.facebook.fb303.FacebookService$Client.send_shutdown(FacebookService.java:436) ~[libfb303-0.9.3.jar:?]
	at com.facebook.fb303.FacebookService$Client.shutdown(FacebookService.java:430) ~[libfb303-0.9.3.jar:?]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:591) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.reconnect(HiveMetaStoreClient.java:366) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient$1.run(RetryingMetaStoreClient.java:187) ~[hive-exec-3.1.2.jar:3.1.2]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_291]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_291]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:183) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy36.getTables(Unknown Source) ~[?:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773) ~[hive-exec-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy36.getTables(Unknown Source) ~[?:?]
	at org.apache.hadoop.hive.ql.metadata.Hive.getTablesByType(Hive.java:1310) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.Hive.getTableObjects(Hive.java:1222) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.Hive.getAllMaterializedViewObjects(Hive.java:1217) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry$Loader.run(HiveMaterializedViewsRegistry.java:166) ~[hive-exec-3.1.2.jar:3.1.2]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_291]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_291]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_291]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_291]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_291]
2021-06-29T16:59:37,977  INFO [pool-7-thread-1] metastore.HiveMetaStoreClient: Trying to connect to metastore with URI thrift://cdh1:9083
2021-06-29T16:59:37,978  INFO [pool-7-thread-1] metastore.HiveMetaStoreClient: Opened a connection to metastore, current connections: 1
2021-06-29T16:59:37,980  INFO [pool-7-thread-1] metastore.HiveMetaStoreClient: Connected to metastore.
2021-06-29T16:59:38,026  INFO [pool-7-thread-1] metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
2021-06-29T16:59:45,853 DEBUG [IPC Client (369333979) connection to cdh1/192.168.30.191:8020 from hive] ipc.Client: IPC Client (369333979) connection to cdh1/192.168.30.191:8020 from hive: closed
2021-06-29T16:59:45,853 DEBUG [IPC Client (369333979) connection to cdh1/192.168.30.191:8020 from hive] ipc.Client: IPC Client (369333979) connection to cdh1/192.168.30.191:8020 from hive: stopped, remaining connections 0

Solution:
Add the following configuration to hive-site.xml:

    <!-- Disable authorization checks for the metastore notification event API -->
    <property>
        <name>hive.metastore.event.db.notification.api.auth</name>
        <value>false</value>
    </property>
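After changing hive-site.xml, restart the metastore and HiveServer2 so the setting takes effect (a sketch; assumes the hive command is on PATH and that the old RunJar processes are stopped first):

nohup hive --service metastore > metastore.log 2>&1 &
nohup hive --service hiveserver2 > hiveserver2.log 2>&1 &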

hive is not allowed to impersonate anonymous

2021-06-29T17:47:55,131 DEBUG [HiveServer2-Handler-Pool: Thread-52] retry.RetryInvocationHandler: Exception while invoking call #36 ClientNamenodeProtocolTranslatorPB.getFileInfo over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException: User: hive is not allowed to impersonate anonymous
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1437) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1347) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-3.0.0.jar:?]
	at com.sun.proxy.$Proxy31.getFileInfo(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:874) ~[hadoop-hdfs-client-3.0.0.jar:?]
	at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) ~[hadoop-common-3.0.0.jar:?]
	at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1697) ~[hadoop-hdfs-client-3.0.0.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1491) ~[hadoop-hdfs-client-3.0.0.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1488) ~[hadoop-hdfs-client-3.0.0.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1503) ~[hadoop-hdfs-client-3.0.0.jar:?]
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1668) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.hive.ql.exec.Utilities.ensurePathIsWritable(Utilities.java:4486) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:760) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:701) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:627) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:586) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:179) ~[hive-service-3.1.2.jar:3.1.2]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-3.1.2.jar:3.1.2]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_291]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_291]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy39.open(Unknown Source) ~[?:?]
	at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:425) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:373) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:195) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:472) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:322) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1497) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1482) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) ~[hive-exec-3.1.2.jar:3.1.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_291]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_291]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_291]
2021-06-29T17:47:55,133  WARN [HiveServer2-Handler-Pool: Thread-52] service.CompositeService: Failed to open session
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hive is not allowed to impersonate anonymous
	at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:89) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-3.1.2.jar:3.1.2]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_291]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_291]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-3.1.2.jar:3.1.2]
	at com.sun.proxy.$Proxy39.open(Unknown Source) ~[?:?]
	at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:425) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:373) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:195) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:472) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:322) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1497) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1482) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) ~[hive-exec-3.1.2.jar:3.1.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_291]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_291]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_291]
Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hive is not allowed to impersonate anonymous
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:651) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:586) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:179) ~[hive-service-3.1.2.jar:3.1.2]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-3.1.2.jar:3.1.2]
	... 21 more
Caused by: org.apache.hadoop.ipc.RemoteException: User: hive is not allowed to impersonate anonymous
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1437) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1347) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-3.0.0.jar:?]
	at com.sun.proxy.$Proxy31.getFileInfo(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:874) ~[hadoop-hdfs-client-3.0.0.jar:?]
	at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) ~[hadoop-common-3.0.0.jar:?]
	at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1697) ~[hadoop-hdfs-client-3.0.0.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1491) ~[hadoop-hdfs-client-3.0.0.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1488) ~[hadoop-hdfs-client-3.0.0.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1503) ~[hadoop-hdfs-client-3.0.0.jar:?]
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1668) ~[hadoop-common-3.0.0.jar:?]
	at org.apache.hadoop.hive.ql.exec.Utilities.ensurePathIsWritable(Utilities.java:4486) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:760) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:701) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:627) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:586) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:179) ~[hive-service-3.1.2.jar:3.1.2]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
	at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-3.1.2.jar:3.1.2]
	... 21 more

Solution: add the following configuration to Hadoop's core-site.xml:

	<property>
		<name>hadoop.proxyuser.hive.groups</name>
		<value>*</value>
	</property>
	<property>
		<name>hadoop.proxyuser.hive.hosts</name>
		<value>*</value>
	</property>
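After editing core-site.xml, either restart HDFS/YARN or refresh the proxy-user settings in place (assumes the commands are run with sufficient privileges on the NameNode and ResourceManager hosts):

hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration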

Reference: https://blog.csdn.net/github_38358734/article/details/77522798