Tag Archives: data warehouse

[Solved] QuestDB Exception–ERROR: Cannot insert rows out of order.

I am migrating data and inserting the history into QuestDB. I created the table as:

create table records(
type INT,
interval INT,
timestamp TIMESTAMP,
name STRING) timestamp(timestamp)

and insert the data through JDBC.

1. Question details

I received the error “Cannot insert rows out of order”. I read that QuestDB supports out-of-order inserts, but somehow I can’t make it work.

Caused by: org.postgresql.util.PSQLException: ERROR: Cannot insert rows out of order. Table=/root/.questdb/db/dwd_robot_running_data
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2674)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2364)
	... 19 more

2. Cause of problem

Rows can only be inserted out of order into a partitioned table. Create a new partitioned table and copy the data into it.

3. Solution

Add partitioning when creating the table:

create table records2(
 type INT,
 interval INT,
 timestamp TIMESTAMP,
 name STRING
) 
timestamp(timestamp) partition by DAY

insert into records2
select * from records

drop table records

rename table records2 to records

After that, you can insert rows out of order into the records table.
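For example (a minimal sketch with made-up values), inserts whose timestamps arrive out of order are now accepted:

-- illustrative rows; the second insert has an earlier designated timestamp than the first
insert into records values (1, 60, '2021-06-02T00:00:00.000000Z', 'a');
insert into records values (1, 60, '2021-06-01T00:00:00.000000Z', 'b');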

Hive Run Error: Java heap space [How to Solve]

Error: Java heap space solution

When using the MR engine:

set mapreduce.map.memory.mb=12000;
set mapreduce.reduce.memory.mb=12000;
set mapred.map.child.java.opts=-server -Xmx10000m -Djava.net.preferIPv4Stack=true;
set io.sort.mb=100;
set mapred.reduce.child.java.opts=-server -Xmx10000m -Djava.net.preferIPv4Stack=true;

When using the Tez engine:

set hive.execution.engine=tez;
set tez.am.resource.memory.mb=9216;
set hive.exec.orc.split.strategy=BI;

[Solved] GBase 8a V95 Replace Node Error: single vc mode does not support ‘–freenode’

GBase 8a V95 supports fast replacement of a failed node through a free node (which saves locating the installation package and going through the installation and configuration checks again). However, free nodes only apply when VC (virtual cluster) mode is enabled. In single-VC mode, i.e. the V8-compatible mode, free nodes are not supported, so this replacement method is not available. The solution is to use a new node instead.

Error message
version: 9.5.3.22

install prefix: /opt/gbase/10.0.2.103
execute replace node os user: gbase
replaced nodes: ['10.0.2.104']
freenode: ['10.0.2.106']
Error: replace.py(line 1147) -- single vc mode does not support '--freenode'
[gbase@gbase_rh7_003 gcinstall]$

Reason
The cluster is in single-VC mode (V8-compatible), which does not support free nodes. Looking at the bottom of the gcadmin output, there is indeed no free node information.

[gbase@gbase_rh7_003 gcinstall]$ gcadmin
CLUSTER STATE:         ACTIVE
VIRTUAL CLUSTER MODE:  NORMAL

====================================
| GBASE GCWARE CLUSTER INFORMATION |
====================================
| NodeName |  IpAddress   | gcware |
------------------------------------
| gcware1  |  10.0.2.103  |  OPEN  |
------------------------------------
====================================================
|      GBASE COORDINATOR CLUSTER INFORMATION       |
====================================================
|   NodeName   | IpAddress  | gcluster | DataState |
----------------------------------------------------
| coordinator1 | 10.0.2.103 |   OPEN   |     0     |
----------------------------------------------------
=========================================================================================================
|                                    GBASE DATA CLUSTER INFORMATION                                     |
=========================================================================================================
| NodeName |                IpAddress                 | DistributionId | gnode | syncserver | DataState |
---------------------------------------------------------------------------------------------------------
|  node1   |                10.0.2.103                |       1        | OPEN  |    OPEN    |     0     |
---------------------------------------------------------------------------------------------------------
|  node2   |                10.0.2.104                |       1        | OPEN  |    OPEN    |     0     |
---------------------------------------------------------------------------------------------------------
|  node3   |                10.0.2.105                |       1        | OPEN  |    OPEN    |     0     |
---------------------------------------------------------------------------------------------------------
|  node4   |                10.0.2.106                |                | OPEN  |    OPEN    |     0     |
---------------------------------------------------------------------------------------------------------

[gbase@gbase_rh7_003 gcinstall]$ 

Solution
Use a new machine and replace the node on the original IP.

[Solved] Waterdrop Import Hive to ClickHouse Error: Too many partitions for single INSERT block (more than 100).

1. Problem description

When using Waterdrop to import data into ClickHouse, the log reports an error:

Caused by: ru.yandex.clickhouse.except.ClickHouseException: ClickHouse exception, code: 252, host: 10.252.32.26, port: 8123; Code: 252, e.displayText() = DB::Exception: Too many partitions for single INSERT block (more than 100). The limit is controlled by 'max_partitions_per_insert_block' setting. Large number of partitions is a common misconception. It will lead to severe negative performance impact, including slow server startup, slow INSERT queries and slow SELECT queries. Recommended total number of partitions for a table is under 1000..10000. Please note, that partitioning is not intended to speed up SELECT queries (ORDER BY key is sufficient to make range queries fast). Partitions are intended for data manipulation (DROP PARTITION, etc). (version 20.3.10.75 (official build))

	at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.specify(ClickHouseExceptionSpecifier.java:58)
	at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.specify(ClickHouseExceptionSpecifier.java:28)
	at ru.yandex.clickhouse.ClickHouseStatementImpl.checkForErrorAndThrow(ClickHouseStatementImpl.java:680)
	at ru.yandex.clickhouse.ClickHouseStatementImpl.sendStream(ClickHouseStatementImpl.java:656)
	at ru.yandex.clickhouse.ClickHouseStatementImpl.sendStream(ClickHouseStatementImpl.java:639)
	at ru.yandex.clickhouse.ClickHousePreparedStatementImpl.executeBatch(ClickHousePreparedStatementImpl.java:382)
	at io.github.interestinglab.waterdrop.output.Clickhouse$$anonfun$process$1.apply(Clickhouse.scala:133)
	at io.github.interestinglab.waterdrop.output.Clickhouse$$anonfun$process$1.apply(Clickhouse.scala:115)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Throwable: Code: 252, e.displayText() = DB::Exception: Too many partitions for single INSERT block (more than 100). The limit is controlled by 'max_partitions_per_insert_block' setting. Large number of partitions is a common misconception. It will lead to severe negative performance impact, including slow server startup, slow INSERT queries and slow SELECT queries. Recommended total number of partitions for a table is under 1000..10000. Please note, that partitioning is not intended to speed up SELECT queries (ORDER BY key is sufficient to make range queries fast). Partitions are intended for data manipulation (DROP PARTITION, etc). (version 20.3.10.75 (official build))

2. Cause of problem

ClickHouse limits max_partitions_per_insert_block, i.e. the number of partitions a single inserted block may touch. The solution is to raise this parameter and restart ClickHouse.

3. Solution

1. Modify the users.xml configuration

vi users.xml

Add:

<max_partitions_per_insert_block>5000</max_partitions_per_insert_block>
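A sketch of where the setting typically sits in users.xml, inside a settings profile (the default profile and the <yandex> root element shown here are assumptions; match them to your own configuration):

<yandex>
    <profiles>
        <default>
            <!-- allow a single INSERT block to touch up to 5000 partitions -->
            <max_partitions_per_insert_block>5000</max_partitions_per_insert_block>
        </default>
    </profiles>
</yandex>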

2. Restart

sudo systemctl restart clickhouse-server

[Solved] Kettle Error: GC overhead limit exceeded

An error is reported when running a Kettle script online:

 java.lang.OutOfMemoryError: GC overhead limit exceeded 

The reason is that the memory settings used for local testing differ from the online settings, and the configured memory size can be increased.

The memory set in Spoon.bat (Windows) and spoon.sh (Linux) is too small. It can be set to about 1/4 of the machine's memory; for example, on a machine with 16 GB of RAM, set it to 4 GB.

-Xms: initial heap size.
-Xmx: maximum heap size.

In many cases, -Xms and -Xmx are set to the same value, because resizing the heap when it runs short causes memory jitter that affects the stability of the program.
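For example, on a 16 GB Windows machine the line in Spoon.bat could be adjusted roughly like this (a sketch; the exact default line differs between Kettle versions):

rem Set initial and maximum heap to about 1/4 of physical memory (16 GB machine -> 4 GB heap)
if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xms4096m" "-Xmx4096m" "-XX:MaxPermSize=256m"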

PySpark error: AttributeError: ‘NoneType‘ object has no attribute ‘_jvm‘

Possible reason 1: when you use from pyspark.sql.functions import * to bring in the PySpark functions, Python built-in functions used inside UDFs are shadowed by the Spark functions of the same name; import the module under a prefix (or re-import the built-ins) instead.

Possible reason 2: the user-defined UDF is not placed inside the main function, so it is evaluated before the SparkContext exists, resulting in the error.
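A minimal sketch covering both points (the column names and the UDF here are illustrative):

from pyspark.sql import SparkSession
# Import with a prefix instead of `from pyspark.sql.functions import *`,
# so Python built-ins such as abs/round/max are not shadowed.
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType

def main():
    spark = SparkSession.builder.appName("udf-demo").getOrCreate()

    # Define the UDF after the session exists; the plain Python built-in
    # round() is still available inside the UDF body.
    to_int = F.udf(lambda x: int(round(x)), IntegerType())

    df = spark.createDataFrame([(1.4,), (2.6,)], ["value"])
    df.withColumn("value_int", to_int(F.col("value"))).show()

if __name__ == "__main__":
    main()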

[Solved] Error: java.io.EOFException: Premature EOF from inputStream

How to solve the error: java.io.EOFException: Premature EOF from inputStream

1. Question

1. Problem process

During the log parsing task, an error was suddenly reported. This task has always been very stable, so how could it suddenly fail? A moment of panic.

2. Detailed error type:

Checking the log, the following error was found:

21/11/18 14:36:29 INFO mapreduce.Job: Task Id : attempt_1628497295151_1290365_m_000002_2, Status : FAILED
Error: java.io.EOFException: Premature EOF from inputStream
	at com.hadoop.compression.lzo.LzopInputStream.readFully(LzopInputStream.java:75)
	at com.hadoop.compression.lzo.LzopInputStream.readHeader(LzopInputStream.java:114)
	at com.hadoop.compression.lzo.LzopInputStream.<init>(LzopInputStream.java:54)
	at com.hadoop.compression.lzo.LzopCodec.createInputStream(LzopCodec.java:83)
	at com.hadoop.mapreduce.LzoSplitRecordReader.initialize(LzoSplitRecordReader.java:58)
	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1907)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Searching for this error points to the upper limit of the dfs.datanode.max.transfer.threads parameter, for example:
https://blog.csdn.net/zhoujj303030/article/details/44422415

Checking the cluster configuration, the parameter had already been raised to 8192, so the cause had to be elsewhere.

Later, an empty LZO file was found among the log files. After deleting it, the task was rerun and succeeded.

2. Solution

To prevent the problem from happening again, write a script that deletes empty LZO files before running the parsing task.

1. Traverse the files under the specified path

for file in `hdfs dfs -ls /xxx/xxx/2037-11-05/pageview | sed '1d;s/  */ /g' | cut -d\  -f8`;
do  
	echo $file; 
done

Result output:

/xxx/xxx/2037-11-05/pageview/log.1631668209557.lzo
/xxx/xxx/2037-11-05/pageview/log.1631668211445.lzo

2. Check whether each file is empty

for file in `hdfs dfs -ls /xxx/xxx/2037-11-05/pageview | sed '1d;s/  */ /g' | cut -d\  -f8`;
do  
	echo $file; 
	lzoIsEmpty=$(hdfs dfs -count $file | awk '{print $3}')
	echo $lzoIsEmpty;
	if [[ $lzoIsEmpty -eq 0 ]];then 
		# is empty, delete the file
		hdfs dfs -rm $file;
	else
		echo "Loading data"
	fi
done

3. Final script

for type in webclick error pageview exposure login
do
    isEmpty=$(hdfs dfs -count /xxx/xxx/$do_date/$type | awk '{print $2}')
    if [[ $isEmpty -eq 0 ]];then 
        echo "------ Given Path:/xxx/xxx/$do_date/$type is empty" 
    else 
		for file in `hdfs dfs -ls /xxx/xxx/$do_date/$type | sed '1d;s/  */ /g' | cut -d\  -f8`;
		do  
			echo $file; 
			lzoIsEmpty=$(hdfs dfs -count $file | awk '{print $3}')
			echo $lzoIsEmpty;
			if [[ $lzoIsEmpty -eq 0 ]];then 
				echo Delete Files: $file
				hdfs dfs -rm $file;
			fi
		done
		
		echo ================== Import log data of type $do_date $type into ods layer ==================
		... Handling log parsing logic
   fi
done

[Solved] Hive Error while processing statement: FAILED: Execution Error

SQL Error [1] [08S01]: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:256)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1855)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1839)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1798)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:61)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3101)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1123)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:696)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

 

Solution: switch to the hdfs user with su hdfs. If the hdfs account cannot log in, edit /etc/passwd (vi /etc/passwd) and change the shell at the end of the hdfs entry to a usable one (typically from /sbin/nologin to /bin/bash).

Then execute hadoop fs -chmod 777 /user
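A minimal sketch of the steps, assuming the hdfs account originally had a nologin shell (the UID/GID and home directory in the comment are illustrative):

# /etc/passwd: give the hdfs account a usable shell, e.g.
# hdfs:x:994:991:Hadoop HDFS:/var/lib/hadoop-hdfs:/bin/bash

su hdfs                      # switch to the HDFS superuser
hadoop fs -chmod 777 /user   # open up /user so the Hive job can write under it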

DB::Exception: test: Authentication failed: [How to Solve]

Problem description: to allow a user to access ClickHouse only from a specific IP address, the user was created with the following statements:

drop user test;
CREATE USER test HOST IP '172.18.xxx.xxx'  IDENTIFIED WITH sha256_password BY 'test';
GRANT SHOW, SELECT, INSERT ON test.* TO test;

Client login (the user name and password entered are correct):

clickhouse-client --host 172.18.xxx.xxx  --port 9000  --user test --password test  -m

Error Messages:
Received from 172.18.xxx.xxx:9000. DB::Exception: test: Authentication failed:
password is incorrect or there is no user with such name.

ClickHouse client version 21.4.6.55 (official build).
Connecting to 172.18.52.122:9000 as user test.
Code: 516. DB::Exception:
     Received from 172.18.xxx.xxx:9000. DB::Exception: test: Authentication failed:
     password is incorrect or there is no user with such name.

Problem solving process:

1. Telnet shows the database port is reachable:
telnet 172.18.xxx.xxx 9000

2. The database can be accessed normally when the user is created without an IP restriction:
drop user test;
CREATE USER test IDENTIFIED WITH sha256_password BY 'test';
GRANT SHOW, SELECT, INSERT ON test.* TO test;

The user name and password are correct and the network is reachable, so why does access fail once the IP is specified?
The first thought was to check the ClickHouse server log, but it contained nothing useful. Where else could useful information come from? ClickHouse's query_log.

3. Looking at query_log, initial_address is not the client's IP address but the gateway's address:
select initial_address n from system.query_log QL where user = 'test'
Why the gateway's address? It turns out the client and ClickHouse are not on the same network segment, and access across segments has to go through the gateway.

4. After changing the authorized IP address to the gateway's address, access works normally.
However, this defeats the purpose of restricting which IP addresses may access ClickHouse.
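For reference, a sketch of the re-created user (the gateway address is a placeholder, not a value from the original post):

drop user test;
CREATE USER test HOST IP '<gateway-ip>' IDENTIFIED WITH sha256_password BY 'test';
GRANT SHOW, SELECT, INSERT ON test.* TO test;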

Kettle Crashes on Startup: The graphical interface cannot be opened (an ETL installation problem)

1. Install JDK (version 1.6 or above)

2. Configure the JDK environment variables (the values should point to your own installation directory).

3. For the first error, where the window flashes and exits immediately, find Spoon.bat in the data-integration directory, open it with Notepad, and locate the following code to modify:

if "%SPOON_CONSOLE%"=="1" set PENTAHO_JAVA=java.exe
if not "%SPOON_CONSOLE%"=="1" set PENTAHO_JAVA=javaw.exe

Change it to:

 if "%SPOON_CONSOLE%"=="1" set PENTAHO_JAVA=java.exe
if not "%SPOON_CONSOLE%"=="1" set PENTAHO_JAVA=java.exe

4. If it still crashes after fixing the first error, modify the memory configuration in Spoon.bat:


 if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xms1024m" "-Xmx2048m" "-XX:MaxPermSize=256m"

Change it to:

 if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xms512m" "-Xmx1024m" "-XX:MaxPermSize=256m"

5. If the above errors are corrected and another error still occurs, you need to modify the variable value in Spoon.bat:

pushd "%JAVA_HOME%\jre\bin"
if exist java.exe goto USEJAVAFROMPATH
goto USEJAVAFROMPATH
:USEJAVAFROMPENTAHOJAVAHOME
FOR /F %%a IN ('.\java.exe -version 2^>^&1^|%windir%\system32\find /C "64-Bit"') DO (SET /a IS64BITJAVA=%%a)
GOTO CHECK32VS64BITJAVA
:USEJAVAFROMPATH
FOR /F %%a IN ('java -version 2^>^&1^|%windir%\system32\find /C "64-Bit"') DO (SET /a IS64BITJAVA=%%a)
GOTO CHECK32VS64BITJAVA
:CHECK32VS64BITJAVA
IF %IS64BITJAVA% == 1 GOTO :USE64

Add a line (set IS64BITJAVA=1, so the 64-bit branch is always taken):

pushd "%JAVA_HOME%\jre\bin"
if exist java.exe goto USEJAVAFROMPATH
goto USEJAVAFROMPATH
:USEJAVAFROMPENTAHOJAVAHOME
FOR /F %%a IN ('.\java.exe -version 2^>^&1^|%windir%\system32\find /C "64-Bit"') DO (SET /a IS64BITJAVA=%%a)
GOTO CHECK32VS64BITJAVA
:USEJAVAFROMPATH
FOR /F %%a IN ('java -version 2^>^&1^|%windir%\system32\find /C "64-Bit"') DO (SET /a IS64BITJAVA=%%a)
GOTO CHECK32VS64BITJAVA
:CHECK32VS64BITJAVA
set IS64BITJAVA=1

IF %IS64BITJAVA% == 1 GOTO :USE64

7. After making the changes and saving the file, the graphical interface can be opened.

ERROR server.datanode.DataNode: BlockSender.sendChunks() exception [How to Solve]

View log error messages:

2021-09-13 14:56:08,737 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception: 
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
	at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
	at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
	at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:280)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:619)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:803)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:750)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:606)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
	at java.lang.Thread.run(Thread.java:748

Add the following configuration to yarn-site.xml:

<!-- The minimum memory resource (in MB) requested per container. -->
<property>
	<name>yarn.scheduler.minimum-allocation-mb</name>
	<value>512</value>
</property>
<!-- The maximum memory resource (in MB) requested per container. -->
<property>
	<name>yarn.scheduler.maximum-allocation-mb</name>
	<value>2048</value>
</property>
<!-- The ratio between container virtual memory and physical memory -->
<property>
	<name>yarn.nodemanager.vmem-pmem-ratio</name>
	<value>4</value>
</property>

These configurations are all about memory resources.

Previously the job succeeded with small files, but switching to a large file produced the error above, so it appears to be memory-related.

Tested and confirmed effective.