Environment: CDH 5.4.5, HBase 1.0.0.
Soon after I joined a new company, one of our regionservers went down. The exception it reported was as follows:
2017-05-12 21:15:26,396 FATAL [B.defaultRpcServer.handler=123,queue=6,port=60020] regionserver.RSRpcServices: Run out of memory; RSRpcServices will abort itself immediately
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at org.apache.hadoop.hbase.io.ByteBufferOutputStream.checkSizeAndGrow(ByteBufferOutputStream.java:77)
at org.apache.hadoop.hbase.io.ByteBufferOutputStream.write(ByteBufferOutputStream.java:116)
at org.apache.hadoop.hbase.KeyValue.oswrite(KeyValue.java:2532)
at org.apache.hadoop.hbase.KeyValueUtil.oswrite(KeyValueUtil.java:548)
at org.apache.hadoop.hbase.codec.KeyValueCodec$KeyValueEncoder.write(KeyValueCodec.java:58)
at org.apache.hadoop.hbase.ipc.IPCUtil.buildCellBlock(IPCUtil.java:122)
at org.apache.hadoop.hbase.ipc.RpcServer$Call.setResponse(RpcServer.java:376)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
The key phrase is "Requested array size exceeds VM limit".
In OpenJDK/HotSpot, the maximum length of an array is slightly below Integer.MAX_VALUE (roughly 2^31 - 2 elements). If an allocation requests an array larger than that, the JVM throws this error immediately, no matter how much heap is still free.
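A minimal standalone demonstration (not HBase code) of that limit: requesting an array longer than the VM maximum fails with exactly this error, regardless of the configured heap size.

public class ArrayLimitDemo {
    public static void main(String[] args) {
        try {
            // Integer.MAX_VALUE exceeds HotSpot's maximum array length (~Integer.MAX_VALUE - 2)
            byte[] tooBig = new byte[Integer.MAX_VALUE];
            System.out.println(tooBig.length);
        } catch (OutOfMemoryError e) {
            // Prints: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
            System.out.println(e);
        }
    }
}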
This is clearly a bug in HBase IPC: under certain conditions it tries to build an array longer than the JVM allows. Searching turned up a patch that fixes it.
HBASE-14598 mainly adds a check on the array length: if a response would exceed the limit, an exception is sent back to the client instead of aborting the regionserver. That addresses the direct cause, but from an operations point of view the more important question is: which table's requests are triggering this?
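A rough illustration of the idea behind that patch (a sketch only, not the actual HBASE-14598 code): check the required buffer size against the JVM array limit before growing the response buffer, and fail the single RPC with a descriptive exception instead of letting the allocation kill the server.

import java.io.IOException;
import java.nio.ByteBuffer;

// Sketch of a size-guarded growable output buffer (class and field names are illustrative).
public class BoundedByteBufferOutputStream {
    // Conservative cap just below the typical HotSpot maximum array length (assumption)
    private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    private ByteBuffer buf = ByteBuffer.allocate(4 * 1024);

    public void write(byte[] data) throws IOException {
        checkSizeAndGrow(data.length);
        buf.put(data);
    }

    private void checkSizeAndGrow(int extra) throws IOException {
        long needed = (long) buf.position() + extra;
        if (needed <= buf.capacity()) {
            return;
        }
        if (needed > MAX_ARRAY_SIZE) {
            // Surfaces as a failed call on the client; the server keeps running.
            throw new IOException("Response buffer of " + needed
                + " bytes exceeds the maximum array size " + MAX_ARRAY_SIZE);
        }
        // Double the capacity (capped at the limit) and copy the existing bytes over.
        int newCapacity = (int) Math.min(Math.max((long) buf.capacity() * 2, needed), MAX_ARRAY_SIZE);
        ByteBuffer bigger = ByteBuffer.allocate(newCapacity);
        buf.flip();
        bigger.put(buf);
        buf = bigger;
    }
}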
A second patch, HBASE-16033, adds more detailed logging. After applying it, we found log entries like the following:
[B.defaultRpcServer.handler=90,queue=12,port=60020] ipc.RpcServer: (responseTooLarge): {"processingtimems":2822,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","client":"10.120.69.147:43481","param":"region= ., for 1 actions and 1st row key=A","starttimems":1494609020832,"queuetimems":0,"class":"HRegionServer","responsesize":31697082,"method":"Multi"}
The field to look at is responsesize: a single call here returned about 31 MB in one response. Responses this large are what eventually drive the IPC response buffer past the array limit and cause this problem.
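On the client side, one way to mitigate this while waiting for a server upgrade is to keep each RPC response bounded. The sketch below assumes a table named "my_table" and illustrative size/caching values; tune them to your own row and cell sizes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class BoundedScanExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) {
            Scan scan = new Scan();
            scan.setCaching(100);                      // rows fetched per RPC
            scan.setBatch(50);                         // cells per Result, useful for very wide rows
            scan.setMaxResultSize(2L * 1024 * 1024);   // cap the bytes returned per RPC (~2 MB)
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    // process each row
                }
            }
        }
    }
}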
While searching we also found a report of essentially the same problem as ours. Two further patches are worth noting: HBASE-14946 and HBASE-14978, which limit batch reads and writes so that they cannot exceed the size limit. The patches above only make the failure visible, returning an error instead of killing the regionserver; these two address the root cause.
We will need to find time to upgrade and pick up these fixes. I hope this helps.