Tag Archives: hbase

[Solved] Hbase …ERROR: Unable to read .tableinfo from file:/hbaseData/data/default/table1/xxxx

Solution:

1. Make sure the table data is complete (nothing missing, nothing extra). Two ways to check:

(1) The simplest: compare the number and size of the files

(2) Compare the md5 checksums of the files and the directory names

Then copy the original table data to a safe place

2. Use the hbase shell to create a new table with the same structure and the same name as the problematic one

3. Go into the data directory under the new table's directory and find .tabledesc

(on the local filesystem it is hidden; on HDFS it is not)

4. Go into this directory and copy out the .tableinfo.xxxx file

5. Delete the data in your table, copy back the data you saved earlier, and put the .tableinfo file into .tabledesc

6. Run hbase hbck -repair to repair the table; if one run does not succeed, you can run it several times
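Steps 2-5 can be sketched with plain filesystem commands, since the error shows the table lives on the local filesystem (file:/hbaseData). The paths below are placeholders, and table1_new is a hypothetical directory standing in for wherever the freshly created table's files live; a throwaway directory is used so the commands can be tried safely:

```shell
# Throwaway stand-in for the real layout under file:/hbaseData/data/default;
# table1 is the broken table from the error, table1_new is a hypothetical
# directory for the freshly created table of the same structure.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/table1/.tabledesc" "$ROOT/table1_new/.tabledesc"
# the table created via the hbase shell has a readable descriptor:
echo 'serialized-descriptor' > "$ROOT/table1_new/.tabledesc/.tableinfo.0000000001"
# steps 4-5: copy the fresh .tableinfo into the broken table's .tabledesc
cp "$ROOT/table1_new/.tabledesc/.tableinfo.0000000001" "$ROOT/table1/.tabledesc/"
ls -A "$ROOT/table1/.tabledesc"
# finally, on the real cluster: hbase hbck -repair (step 6)
```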

[Solved] eclipse Error: org.apache.hadoop.hbase.NotServingRegionException:

Error1: org.apache.hadoop.hbase.NotServingRegionException:
Error 2: Can’t get master address from ZooKeeper; znode data == null

[root@hadoop01 bin]# sh hbase hbck
2022-01-29 16:48:49,797 INFO  [main] client.HConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 failed; retrying after sleep of 10044, exception=java.io.IOException: Can't get master address from ZooKeeper; znode data == null

Solution:

Stop HBase and go to the bin directory of HBase

sh stop-hbase.sh

Start the ZooKeeper client and delete the /hbase znode

[root@hadoop01 bin]# sh zkCli.sh
[zk: localhost:2181(CONNECTED) 1] rmr /hbase

Restart HBase cluster

sh start-hbase.sh

[Solved] Hbase-shell 2.x Error: Unhandled Java exception: java.lang.IncompatibleClassChangeError: Found class jline.Terminal…

I. Error message:

Unhandled Java exception: java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected

II. Solution
In hadoop/share/hadoop/yarn/lib/, replace jline-0.9.94.jar with jline-2.12.jar if it is there, or upload jline-2.12.jar if it is not

Note: How to obtain jline-2.12.jar:
Can be obtained from the hive package
hive package download: https://downloads.apache.org/hive/hive-2.3.9/apache-hive-2.3.9-bin.tar.gz
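The jar swap can be sketched as follows. A throwaway directory stands in for hadoop/share/hadoop/yarn/lib so the sequence can be tried anywhere; in real use, cd into that directory and copy jline-2.12.jar from Hive's lib directory instead of creating an empty file:

```shell
# Throwaway directory standing in for hadoop/share/hadoop/yarn/lib, so the
# sequence can be tried anywhere; in real use, run these in that directory
# and copy the real jline-2.12.jar from Hive's lib directory.
LIB=$(mktemp -d)
touch "$LIB/jline-0.9.94.jar"                            # the offending jar
mv "$LIB/jline-0.9.94.jar" "$LIB/jline-0.9.94.jar.bak"   # keep a backup
touch "$LIB/jline-2.12.jar"                              # stand-in for the real jar
ls "$LIB"
```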

[Solved] JAVA connect HBase program is stuck and does not report an error

Let me explain my situation first:

In the HBase shell everything runs fine with commands, but the Java API cannot connect.

The HBase and ZooKeeper configuration files are all OK.

When operating through the Java API, the client hangs and cannot connect. Looking at the HBase and ZooKeeper logs, there is no useful information.

After the program runs for a long time, it reports an error (one useful line intercepted):

java.net.UnknownHostException: can not resolve hadoop01,16020,164077701361

So it cannot resolve hadoop01, which is my node's hostname.

Here’s how to view it:

1. Run zkServer.sh status to check whether the ZooKeeper ensemble has a leader and followers (to rule out a ZooKeeper configuration problem)

2. Check the ZooKeeper logs in the logs directory. In one of mine the hostname was hadoop01, and in another it was master (the previous hostname).

At this point I roughly knew where the problem was

Possible causes:

1. The hostname conflicts with the configuration

2. HBase version data conflict (I had installed different versions)
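Possible cause 1 can be checked quickly from the client before deleting anything. A minimal resolution-check sketch, assuming a Linux client with getent; localhost is used so it runs anywhere, so substitute hadoop01 (the host from the error above) on your own machine:

```shell
# Client-side DNS check for cause 1: does the RegionServer hostname resolve?
# localhost is used here so the snippet runs anywhere; on your client machine
# substitute hadoop01 (the hostname from the UnknownHostException above).
check_host() {
  if getent hosts "$1" > /dev/null; then
    echo "$1 resolves"
  else
    echo "$1 does not resolve - map it in /etc/hosts on the client"
  fi
}
check_host localhost
```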

Solution:

1. Stop HBase and ZooKeeper

2. Delete ZooKeeper's data files (brute force works wonders). Mine are in the data directory. Be careful not to delete myid (do this on all three nodes)

3. Restart ZooKeeper and HBase

Run the Java code

Done.

[Solved] HBase shell command Error: ERROR: connection closed

Problem description

During a big-data storage lab, HBase shell commands report an error: connection closed

Checking the log shows that the corresponding service does not exist

Final solution

After a lot of troubleshooting, it finally turned out to be a JDK version problem. The java-17.0.1 I used is too high; switching to jdk-8u331-linux-x64.tar.gz solved it

My versions are

hadoop 3.2.2
hbase 2.3.6
java 1.8.0

The matching table of Hadoop, HBase and Java is attached


Solution steps

1. Clear Hadoop's temporary files

Stop the HBase and Hadoop processes first

stop-all.sh

Check hdfs-site.xml for the locations of the data and name directories

Delete all the files in those two folders (both the data and the name folder)

Re-run the Hadoop format

2. Switch Java to the required version (don't forget to update the Java folder name in your environment variables)

I use 1.8.0_331

java -version
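The version check in step 2 can be made explicit. A small hedged sketch (my own, not an official tool) that warns when the running JVM is not Java 8, the version paired with HBase 2.3.x above:

```shell
# Warn when the running JVM is not Java 8 (the version matched with
# HBase 2.3.x in this post). Parses the quoted version string.
ver=$(java -version 2>&1 | awk -F '"' '/version/ {print $2; exit}')
case "$ver" in
  1.8.*) msg="OK: Java 8 ($ver)" ;;
  *)     msg="WARN: Java version '$ver' may not match HBase 2.3.x" ;;
esac
echo "$msg"
```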

3. Restart the machine, then start SSH, Hadoop and HBase

service ssh start
start-dfs.sh
start-hbase.sh

4. Enter the HBase shell and confirm that it works

[Solved] hbase Create Sheet Error: ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

To create a table in the HBase shell:

A few days ago it was fine; today, wanting to review, it suddenly stopped working.
Searching around, I found the same symptom described everywhere: list works, but tables cannot be created.
Most answers pointed at ZooKeeper and suggested clearing HBase's znode, but I had not installed a ZooKeeper client, let alone cleared anything with it.
So the next steps were to change the configuration, synchronize the clocks, and eliminate the strange warnings.

Problem Description:

There is no problem using list in the HBase shell.
Creating a table fails with: ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

Cause analysis:

1. hbase configuration file problem
2. Clocks out of sync between the nodes
3. Strange warnings while hbase runs (it ran normally before, but with hidden problems)
4. HBase simply acting up

Solution:

1. Modify the HBase configuration file

In the hbase configuration file hbase-site.xml, change hbase.rootdir to hbase.root.dir. The configuration file follows:


<configuration>
        <property>
                <name>hbase.root.dir</name>
                <value>hdfs://localhost:9000/hbase</value>
        </property>
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
        <property>
                <name>hbase.unsafe.stream.capability.enforce</name>
                <value>false</value>
        </property>
</configuration>

2. Synchronize the time

Enter the following command directly in the shell:

ntpdate 1.cn.pool.ntp.org

3. Eliminate strange messages from HBase

Add the following line directly to the hbase-env.sh file:

 export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP=true

Reboot

[Solved] hbase Startup Error: ERROR: Can’t get master address from ZooKeeper; znode data == null

HBase starts normally, but the list command reports an error:

ERROR: Can’t get master address from ZooKeeper; znode data == null

Here is some help for this command:
List all user tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:

hbase> list
hbase> list 'abc.*'
hbase> list 'ns:abc.*'
hbase> list 'ns:.*'

Check HBase's log file first; the following error appears:
2021-12-08 23:51:35,101 FATAL [hadoop01:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby

Solution:
① It is related to which node is active and which is standby. In my case there are three highly-available Hadoop nodes; the primary node hadoop01 was standby and hadoop02 was active, yet errors were still reported. I manually intervened in the failover, set hadoop01 to active and hadoop02 to standby, and it succeeded.
Commands to force the transition:

hdfs haadmin -transitionToStandby --forcemanual nn2
hdfs haadmin -transitionToActive --forcemanual nn1

② The high-availability Hadoop core-site.xml configuration

and hbase-site.xml did not match. I had configured HBase before setting up high availability and never updated it afterwards, so it had to be changed.

After modifying it:

copy the two Hadoop cluster configuration files, core-site.xml and hdfs-site.xml, into HBase's conf directory

Restart Hadoop cluster and HBase

Spring integrated HBase error [How to Solve]

Problem 1
ClassNotFoundException:org/springframework/data/hadoop/configuration/ConfigurationFactoryBean
Solution
Replace the jar package with spring-data-hadoop-1.0.0.RELEASE version
Problem 2
ClassNotFoundException:org/apache/hadoop/conf/Configuration
Solution
Introduce hadoop-client-3.1.3.jar and hadoop-common-3.1.3.jar
Problem 3
java.lang.NoClassDefFoundError: org/apache/commons/configuration2/Configuration
Solution
Introduce commons-configuration2-2.3.jar
Problem 4
java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName
Solution
Introduce hadoop-auth-3.1.3.jar
Problem 5
java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf
Solution
Introduce hadoop-mapreduce-client-common-3.1.3.jar, hadoop-mapreduce-client-core-3.1.3.jar and
hadoop-mapreduce-client-jobclient-3.1.3.jar
Problem 6
java.lang.NoClassDefFoundError: com/ctc/wstx/io/SystemId
Solution
Introduce woodstox-core-5.0.3.jar
Problem 7
java.lang.NoClassDefFoundError: com/google/common/collect/Interners
Solution
Introduce guava-30.1.1-jre.jar
Problem 8
java.lang.NoSuchMethodError: com.google.common.collect.MapMaker.keyEquivalence(Lcom/google/common/base/Equivalence;)Lcom/google/common/collect/MapMaker;
Solution
Remove the google-collect-1.0.jar package, guava conflict
Problem 9
java.lang.NoClassDefFoundError: com/fasterxml/jackson/core/JsonGenerator
Solution
Introduce jackson-annotations-2.12.4.jar, jackson-core-2.12.4.jar and jackson-databind-2.12.4.jar
Problem 10
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
Solution
Introduce hbase-common-2.2.4.jar
Problem 11
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/client/HTableInterface
Solution
After searching for a long time, I found that it was caused by this in the configuration file:
<bean id="htemplate" class="org.springframework.data.hadoop.hbase.HbaseTemplate">
<property name="configuration" ref="hbaseConfiguration">
</property>
</bean>
Comment it out.

Summary
Most of these problems are missing jar packages; Spring integration with HBase requires 15 packages.
Among them:
spring-data-hadoop-1.0.0.RELEASE.jar
hadoop-client-3.1.3.jar
hadoop-common-3.1.3.jar
hadoop-auth-3.1.3.jar
hadoop-mapreduce-client-common-3.1.3.jar
hadoop-mapreduce-client-core-3.1.3.jar
hadoop-mapreduce-client-jobclient-3.1.3.jar
commons-configuration2-2.3.jar
guava-30.1.1-jre.jar
jackson-annotations-2.12.4.jar
jackson-core-2.12.4.jar
jackson-databind-2.12.4.jar
These packages are also required when integrating HDFS
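As a sanity check, here is a sketch that verifies the jars above are present in a project's library directory. LIB_DIR points at a throwaway empty directory so the snippet runs anywhere (every jar therefore reports missing); point it at your real lib/ folder in practice. The list combines the 12 jars above with woodstox-core-5.0.3.jar and hbase-common-2.2.4.jar from problems 6 and 10:

```shell
# Report any of the required jars that are absent from LIB_DIR.
LIB_DIR=$(mktemp -d)   # placeholder: point this at your real lib/ directory
missing=0
for j in spring-data-hadoop-1.0.0.RELEASE hadoop-client-3.1.3 \
         hadoop-common-3.1.3 hadoop-auth-3.1.3 \
         hadoop-mapreduce-client-common-3.1.3 hadoop-mapreduce-client-core-3.1.3 \
         hadoop-mapreduce-client-jobclient-3.1.3 commons-configuration2-2.3 \
         guava-30.1.1-jre jackson-annotations-2.12.4 jackson-core-2.12.4 \
         jackson-databind-2.12.4 woodstox-core-5.0.3 hbase-common-2.2.4; do
  if [ ! -f "$LIB_DIR/$j.jar" ]; then
    echo "missing: $j.jar"
    missing=$((missing + 1))
  fi
done
echo "$missing jar(s) missing"
```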

[Solved] Hbase Error: ERROR: KeeperErrorCode = NoNode for /hbase/master

Reason: a power failure (or the machine sleeping, etc.) caused the HMaster to fail to connect, and no master node could be found in ZooKeeper

Solution: delete the hbase node in ZooKeeper; when HBase starts it will recreate the node automatically
1) log in to the zookeeper client: zkCli.sh
2) delete the hbase node: deleteall /hbase

The most critical step: restart ZooKeeper and restart HBase
1) Stop zookeeper: my_zk.sh stop (this is my own script for zk; don't copy it verbatim)
2) Stop hbase: if stop-hbase.sh has no effect, use the jps command on each machine in the cluster to find the PIDs of HMaster and HRegionServer, then kill -9 each one, which is equivalent to shutting HBase down manually
3) Start zookeeper and hbase again: my_zk.sh start, then start-hbase.sh
hdfs does not need to be touched; if it still doesn't work for you, hdfs can also be restarted.
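Step 2 above can be scripted. A minimal sketch, run on each node, that works whether or not jps (from the JDK) is on the PATH:

```shell
# Manual shutdown from step 2: kill leftover HMaster/HRegionServer processes
# on this node. Uses jps (from the JDK) when available, else falls back to ps.
pids=$( { command -v jps >/dev/null 2>&1 && jps || ps -eo pid,comm; } \
        | awk '/HMaster|HRegionServer/ {print $1}')
for pid in $pids; do
  echo "killing $pid"
  kill -9 "$pid"
done
echo "done"
```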

[Solved] HBase shell command Error: PleaseHoldException: Master is initializing


Project scenario:

Ubuntu 20.04, Hadoop 3.2.2, HBase 2.2.2


Problem Description:

The main error is as follows: ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

After starting the HBase shell, when using create, list and other commands, the following error messages appear:

hbase(main):001:0> list
TABLE 
                                                                                                                    
ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
        at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2452)
        at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:915)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58517)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

For usage try 'help "list"'

Took 10.297 seconds

Cause analysis:

Here, my machine only has HBase configured on top of a pseudo-distributed Hadoop cluster, so I don't think the cause can be the HBase/ZooKeeper server time mismatch that others describe online. The main reason should be that the state of Hadoop and HBase became inconsistent, leaving the master node stuck in initialization.


Solution:

Clear HBase's data in HDFS, restart HBase, and let the two resynchronize:

Shut down all HBase services first:

cd /usr/local/hbase
bin/stop-hbase.sh

Then close all Hadoop services:

cd /usr/local/hadoop
sbin/stop-all.sh

Enter JPS to ensure that all Hadoop and HBase processes are closed:

zq@fzqs-Laptop:~$ jps
4673 Jps

Then start the Hadoop service:

cd /usr/local/hadoop
sbin/start-all.sh

To view files in HDFS:

bin/hdfs dfs -ls /

The output should look like the following (it includes /hbase):

zq@fzqs-Laptop:/usr/local/hadoop$ bin/hdfs dfs -ls /
Found 1 items
drwxr-xr-x		- root supergroup 		0 2021-10-28 21:49 /hbase

Delete the /hbase directory:

bin/hdfs dfs -rm -r /hbase

Start HBase service:

cd /usr/local/hbase
bin/start-hbase.sh

Then start the shell and you should be able to use it:

bin/hbase shell

Clickhouse error: XXXX.XXXX_local20211009 (8fdb18e9-bb4c-42d8-8fdb-18e9bb4c02d8): auto…

Error Messages:
XXXX.XXXX_local20211009 (8fdb18e9-bb4c-42d8-8fdb-18e9bb4c02d8): auto DB::StorageReplicatedMergeTree::processQueueEntry(ReplicatedMergeTreeQueue::SelectedEntryPtr)::(anonymous class)::operator()(DB::StorageReplicatedMergeTree::LogEntryPtr &) const: Code: 49, e.displayText() = DB::Exception: Part 20211009_67706_67706_0 is covered by 20211009_67118_67714_12 but should be merged into 20211009_67706_67715_1. This shouldn’t happen often., Stack trace (when copying this message, always include the lines below):
Solution:
1. Try deleting the local table and the distributed table XXXX.XXXX_local20211009