Tag Archives: hbase

HBase hangs up immediately after startup: the web port reports a 500 error and HMaster aborts

[error 1]:

java.lang.RuntimeException: HMaster Aborted
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:261)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:149)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2971)
2021-08-26 12:25:35,269 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x37b80a4f6560008

[attempt 1]: delete the /hbase node in ZooKeeper
This did not solve the problem.
[attempt 2]: reinstall HBase
This did not solve the problem either.
[attempt 3]: leave HDFS safe mode

hadoop dfsadmin -safemode leave

Still did not solve the problem.
[attempt 4]: check ZooKeeper. Spark can connect to it normally and new znodes can be created, so ZooKeeper itself is fine.

Scrolling further up in the log, there is another error:
[error 2]:

master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:108)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2044)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1409)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2961)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1160)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:880)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)

Here is the key point: the error says my master failed to become the active HMaster, and the reason is "Operation category READ is not supported in state standby". In other words, the read operation fails because the NameNode being contacted is in standby state. Check the state of nn1 at this point:

#  hdfs haadmin -getServiceState nn1

Sure enough, standby

[solution 1]:
manual activation

hdfs haadmin -transitionToActive --forcemanual nn1

Kill all HBase processes, restart HBase, check with jps, and access the web port again.

Bravo!!!

It took a few hours of fiddling before it finally worked. The root cause was that I forgot the correct startup sequence:
ZooKeeper -> Hadoop -> HBase
I had been starting Hadoop first every time, which wasted a lot of time. I hope this helps you.

How to Solve the HMaster Hang-Up Issue Caused by NameNode Switching in HA Mode

Solve the problem that HMaster hangs up due to NameNode switching in HA mode

Problem:

When building a big data cluster on our own machines for learning, the virtual machines often get stuck and nodes hang up inexplicably because the hardware is not powerful enough.

In a Hadoop high-availability cluster with insufficient hardware, the two NameNodes keep switching state automatically, which causes the HMaster node of the HBase cluster to hang up.

Cause of the problem:

Let’s check the master log of HBase:

# Go to the log file directory
[root@hadoop001 ~]# cd /opt/module/hbase-1.3.1/logs/
[root@hadoop001 logs]# vim hbase-root-master-hadoop001.log 

From the log, it is easy to find that the error is caused by the active/standby switching of namenode.

Solution:

1. Modify the hbase-site.xml configuration file

Modify the hbase.rootdir configuration

<property>
     <name>hbase.rootdir</name>
     <value>hdfs://hadoop001:9000/hbase</value>
</property>

# change to 
<property>
     <name>hbase.rootdir</name>
     <value>hdfs://ns/hbase</value>
</property>

# Note that ns here is the value of hadoop's dfs.nameservices (configured in hdfs-site.xml; fill in according to your own configuration)

2. Create symbolic links

[root@hadoop001 ~]# ln -s /opt/module/hadoop-2.7.6/etc/hadoop/hdfs-site.xml /opt/module/hbase-1.3.1/conf/hdfs-site.xml
[root@hadoop001 ~]# ln -s /opt/module/hadoop-2.7.6/etc/hadoop/core-site.xml /opt/module/hbase-1.3.1/conf/core-site.xml 

3. Synchronize the HBase configuration files across the cluster

Use scp to distribute them to the other nodes.

Then restart the cluster; the HMaster node should no longer hang up.

Hbase Error: Regions In Transition [How to Solve]

1. Problem Analysis
This usually happens when the system goes down while a region split is in progress, or when the region's files in HDFS have been deleted.
The state of each region is tracked by the master; the possible states are listed below.

Offline: the region is offline
Pending Open: a request to open the region has been sent to the server
Opening: the server has started opening the region
Open: the region is open and fully operational
Pending Close: a request to close the region has been sent to the server
Closing: the server has started closing the region
Closed: the region is closed
Splitting: the server has started splitting the region
Split: the region has been split by the server

Region migration (transition) between these states can be triggered either by the master or by the region server.
2. Solutions
2.1 Use hbase hbck to find out which region is in error
2.2 Remove the failed region with the following command in the HBase shell
deleteall "hbase:meta","TestTable,00000000000000000005850000,1588444012555.89e1c07384a56c77761e490ae3f34a8d."
2.3 Restart HBase

[Solved] Failed update hbase:meta table descriptor HBase Startup Error

In the past two days I was deploying big data components on new servers. HBase installed successfully, but HMaster failed to start; sometimes jps showed that HMaster hung up after a few tens of seconds (there are seven servers in total; node1 is the master and node2 is the Backup Master). Checking the log, the error contents are as follows:

2021-08-04 15:32:38,839 INFO  [main] util.FSTableDescriptors: ta', {TABLE_ATTRIBUTES => {IS_META => 'true', REGION_REPLICATION => '1', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2021-08-04 15:32:39,026 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000001
2021-08-04 15:32:39,042 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000002
2021-08-04 15:32:39,052 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000003
2021-08-04 15:32:39,062 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000004
2021-08-04 15:32:39,072 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000005
2021-08-04 15:32:39,082 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000006
2021-08-04 15:32:39,094 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000007
2021-08-04 15:32:39,104 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000008
2021-08-04 15:32:39,115 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000009
2021-08-04 15:32:39,123 WARN  [main] util.FSTableDescriptors: Failed cleanup of hdfs://node1:9000/user/hbase/data/hbase/meta/.tmp/.tableinfo.0000000010
2021-08-04 15:32:39,124 ERROR [main] regionserver.HRegionServer: Failed construction RegionServer
java.io.IOException: Failed update hbase:meta table descriptor
	at org.apache.hadoop.hbase.util.FSTableDescriptors.tryUpdateMetaTableDescriptor(FSTableDescriptors.java:144)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:738)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:635)
	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:528)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3163)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:253)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:149)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3181)
2021-08-04 15:32:39,135 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster. 
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3170)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:253)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:149)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3181)
Caused by: java.io.IOException: Failed update hbase:meta table descriptor
	at org.apache.hadoop.hbase.util.FSTableDescriptors.tryUpdateMetaTableDescriptor(FSTableDescriptors.java:144)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:738)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:635)
	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:528)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3163)
	... 5 more

I checked a lot of articles, but none of them solved it. Later I found that the HBase directory had never been initialized on HDFS.
Troubleshooting:
What is going on? After calming down and reading the log again, it says very clearly that the hbase:meta table descriptor cannot be updated. The directory does not exist on HDFS, so of course it cannot be updated. Was there a problem when the directory was being created?
Trying hadoop fs -mkdir /user/hbase fails with a permission error. The root user has no permission to create the folder? I suddenly realized that although I have root privileges on Linux, HDFS enforces its own permission checks, so the directory still cannot be created. The cause has been found; the solution is below.
Solution:
HDFS permission checking is controlled in hdfs-site.xml, so:
1. Go to /data/hadoop/hadoop/etc/hadoop and find hdfs-site.xml
2. vim hdfs-site.xml. Sure enough, dfs.permissions.enabled is not set (the default is true)
3. Add the following property

<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>

If yours is set to true, change it to false.
:wq to save and exit.
4. Restart Hadoop: run stop-yarn.sh first, then stop-dfs.sh, and use jps to check that everything has stopped (make sure the DataNode and NameNode processes are gone). To start again, run start-dfs.sh first and then start-yarn.sh.
5. After the previous step succeeds, start HBase. This time it starts successfully.
Run hbase shell directly (if the environment variable is not configured, go to HBase's bin directory and start it with ./hbase shell).

Some problems in the development of HBase MapReduce

Recently, for a course design project, the main workflow was to collect data from CSV files, store it in HBase, and then use MapReduce to do statistical analysis on the data. I ran into some problems along the way, which were eventually solved through various searches. The problems and their solutions are recorded here.

1. HBase HMaster shuts down automatically

Enter the ZooKeeper client, delete the HBase data (use with caution), and restart HBase:

./zkCli.sh
rmr /hbase
stop-hbase.sh 
start-hbase.sh 

2. Dealing with multi-module dependencies when packaging with Maven

The project is a multi-module Maven project with a parent project (root) and etl, statistics and common modules.

The etl and statistics modules both depend on the common module. When they are packaged separately, Maven reports that the common dependency cannot be found and the packaging fails.

Solution steps:

1. First run Maven package and Maven install for the common module (I do this directly from the Maven panel on the right side of IDEA).

2. Run Maven clean and Maven install on the outermost parent project (root).

After completing these two steps, the problem can be solved.

3. Chinese text stored in HBase comes back as something like "\xE5\x8F\x91\xE6\x98\x8E"

This is the classic Chinese encoding problem; it can be solved by passing the value through the following method before use.

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public static String decodeUTF8Str(String xStr) throws UnsupportedEncodingException {
    // Turn the shell-style \xE5... escapes into %E5... and URL-decode them as UTF-8.
    return URLDecoder.decode(xStr.replaceAll("\\\\x", "%"), "utf-8");
}
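
For reference, here is a minimal usage sketch (my own example, not from the original code); the escaped sample string is just an illustration, and in real code the value would come from a Result/Cell:

String escaped = "\\xE5\\x8F\\x91\\xE6\\x98\\x8E";   // value as printed by the HBase shell
System.out.println(decodeUTF8Str(escaped));          // prints the original Chinese text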

4. Error when submitting a MapReduce job

The code was written locally, packaged into a jar, and run on the server. The error is as follows:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.Job.getArchiveSharedCacheUploadPolicies(Lorg/apache/hadoop/conf/Configuration;)Ljava/util/Map;
    at org.apache.hadoop.mapreduce.v2.util.MRApps.setupDistributedCache(MRApps.java:491)
    at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:92)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:172)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:788)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at MapReduce.main(MapReduce.java:49)

Solution: add dependencies

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>3.1.3</version>
</dependency>

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-common</artifactId>
    <version>3.1.3</version>
</dependency>

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.1.3</version>
</dependency>

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
    <version>3.1.3</version>
    <scope>provided</scope>
</dependency>

Among them:

hadoop-mapreduce-client-core.jar supports running on a cluster.

hadoop-mapreduce-client-common.jar supports running locally.

After solving the above problems, my code can run smoothly on the server.

Finally, it should be noted that the output path of MapReduce cannot already exist, otherwise an error will be reported.
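
A small hedged sketch of how to avoid that error (my own example; the job name and output path are placeholders): delete the output directory before submitting the job.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class OutputPathCleanup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "csv-statistics");    // placeholder job name
        Path output = new Path("/user/output");               // placeholder output path

        // FileOutputFormat fails if the output directory already exists,
        // so remove any stale directory left by a previous run.
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(output)) {
            fs.delete(output, true);                          // true = recursive delete
        }
        FileOutputFormat.setOutputPath(job, output);
        // ... set mapper/reducer classes and input path, then job.waitForCompletion(true)
    }
}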

I hope this article can help you with similar problems.

HBase shell reports ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

Problem description

Because of an HBase version issue, after I changed the HBase version, querying a table in the HBase shell reported the following error:

How to solve it

The above problem is generally caused by the HRegionServer node having failed.

1. First, use jps to check whether the HRegionServer process is running normally (it has usually hung up), and check the hbase-site.xml configuration file (my problem was here: because the installation changed versions and had not been integrated with Phoenix yet, all the Phoenix mapping configuration in the file had to be commented out, otherwise HRegionServer would not start normally).

 

2. The following is a correct configuration file:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/export/data/hbase/tmp</value>
   <!--This tmp temporary file should be created in the hbase installation directory-->
  </property>
  <property>
    <name>hbase.master</name>
    <value>node1:16010</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
    <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node1:9000/hbase</value>
  </property>
    <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node1,node2,node3:2181</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
	<value>false</value>
  </property>
  <property>
	<name>hbase.zookeeper.property.dataDir</name>
	<value>/export/servers/zookeeper/data</value>
  </property>
</configuration>

3. Stop HBase, delete /hbase on HDFS, and restart HBase

# stop hbase
stop-hbase.sh
# delete /hbase
hadoop fs -rm -r /hbase
# start hbase
start-hbase.sh

Note: whenever this kind of error occurs, check whether the configuration file is correct and restart HBase (the restart requires deleting /hbase on HDFS first), i.e. repeat step 3.

How to Solve HBase error: region is not online

Error information:

2021-06-15 10:55:33.721 ERROR 531 --- [io-8022-exec-33] c.e.dataapi.biz.hbase.HbaseDataProvider  : Table Get batch error the connection is exception: Failed after attempts=1, exceptions:
Tue Jun 15 10:55:33 CST 2021, RpcRetryingCaller{globalStartTime=1623725733712, pause=200, retries=1}, org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region rpt_ewt_theme_xxxx_compare_1d_b,348788221_122887872,1623446719723.ebb056cf4a332e5efa355bd2619033c5. is not online on hadoop17,60020,1620193917454
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2997)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1069)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2388)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)

Reason:

While HBase is running, a region starts to split once it reaches the configured file size. The split process is as follows:

1) The old region goes offline, which corresponds to "is not online" in the error log.

2) The old region is split.

3) The old region is closed, which corresponds to "is closing" in the error log.

Solution: turn off automatic splitting and split regions manually.

Or increase the maximum file size allowed for a single region (hbase.hregion.max.filesize).
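
As a hedged sketch of both options (my own example, not from the original post): using HBase 1.x-style Admin/HTableDescriptor calls, which are assumed to be available in the cluster's client version, raise the table's maximum region file size, switch to a size-only split policy (or DisabledRegionSplitPolicy to stop automatic splits entirely), and trigger a split manually when needed. The table name and size below are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RegionSplitTuning {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName table = TableName.valueOf("my_table");   // placeholder table name

            // Option 1: raise the region size limit and use a size-only split policy.
            HTableDescriptor desc = admin.getTableDescriptor(table);
            desc.setMaxFileSize(20L * 1024 * 1024 * 1024);     // e.g. 20 GB
            desc.setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy");
            admin.modifyTable(table, desc);

            // Option 2: trigger a split manually at a time of your choosing.
            admin.split(table);
        }
    }
}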

[Solved] The org.apache.hadoop.hbase.HBaseConfiguration class cannot be imported

    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-client</artifactId>
      <version>2.1.0-cdh6.2.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.7.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.7.1</version>
      <exclusions>
        <exclusion>
          <groupId>io.netty</groupId>
          <artifactId>netty</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

After adding the hbase-client dependency, we found that we also need to configure a remote repository, since the CDH version of the artifact is not in Maven Central. Add the following:

  <repositories>
    <repository>
      <id>cloudera</id>
      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
  </repositories>

After that, the problem is solved.
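
Once the dependency resolves, a minimal sanity check (my own sketch; the ZooKeeper quorum value is a placeholder) is to create the configuration with HBaseConfiguration and open a connection:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseConnectTest {
    public static void main(String[] args) throws Exception {
        // Loads hbase-default.xml and any hbase-site.xml found on the classpath.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "node1,node2,node3");   // placeholder quorum
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}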

Solution of ServerNotRunningYetException in HBase

I encountered a ServerNotRunningYetException error while working today. The symptom is that entering the HBase shell works normally, but this error is thrown when executing any command.

There was no error message in the log. At first I thought some HBase process had not started, but the jps command showed that everything was up.

It was a little tricky. The test server had been restarted a few days ago and I had done some other work on it since, so I suspected the problem was a port conflict. It wasn't.

I had no choice, so I pulled out the ultimate solution: reinstalling Hadoop and HBase. After the configuration was finished, HBase miraculously worked again.

For a while I suspected it was my configuration. It wasn't that, either.

I finally found the real explanation online: Hadoop was in safe mode, so HBase operations threw exceptions. I am not sure of the exact mechanism; the solution is to manually leave safe mode:

./hadoop dfsadmin -safemode leave 

Then restart HBase and the problem is solved.

HBase shell input cannot be deleted using backspace

In the virtual machine's own terminal, the backspace key works in the HBase shell, but when connecting to the virtual machine through SecureCRT, the backspace key does not delete characters in the HBase shell (I did not encounter this with hbase-0.90.6-cdh3u5, but did with hbase-1.0.0-cdh5.5.2).

Options -> Session Options -> Emulation -> Terminal -> select Linux

(VT100 by default)

Delete with Ctrl + backspace

Or:

Hold down shift and click Delete to delete.

Or:

Use the ← key to move to the position just before the character you want to delete, then press the backspace key to delete the character after it.

Or:

Options -> Session Options -> Mapped Keys -> check "Backspace sends delete" and "Delete sends backspace"

(you can use the backspace key to delete directly)

HBase checkAndPut() method

The common method for adding (or modifying) data in an HBase table is void put(Put put), which returns nothing.
But if you put data for a cell that already exists, it is simply overwritten, which is not very friendly.
So there is also the checkAndPut() method.

   * Atomically checks if a row/family/qualifier value matches the expected
   * value. If it does, it adds the put.  If the passed value is null, the check
   * is for the lack of column (ie: non-existance)


boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier,
    byte[] value, Put put) throws IOException

When the expected value passed in is null, the check is for the absence of the column: if the cell (row, family, qualifier) already holds data, the put is not performed and false is returned; if it does not exist, the put is performed and true is returned.
Therefore, this method is safer than the put() method.
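
A minimal usage sketch (my own example; the table, row and column names are placeholders) that only writes a cell when it does not exist yet, using the checkAndPut() signature quoted above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndPutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("test_table"))) {
            byte[] row = Bytes.toBytes("row1");
            byte[] cf = Bytes.toBytes("info");
            byte[] col = Bytes.toBytes("name");

            Put put = new Put(row);
            put.addColumn(cf, col, Bytes.toBytes("value1"));

            // Expected value null => only perform the put if info:name does not exist yet.
            boolean applied = table.checkAndPut(row, cf, col, null, put);
            System.out.println("put applied: " + applied);
        }
    }
}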