Tag Archives: database

[Solved] sqoop Error: SQLException in nextKeyValue Caused by: ORA-24920: column size too large for client

Question

When importing Oracle data with sqoop, the following error is reported:

INFO mapreduce.Job: Task Id : attempt_1646802944907_15460_m_000000_1, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
        at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:275)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:568)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
        at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.sql.SQLException: ORA-24920: column size too large for client

Reason

Importing from other databases with sqoop had always worked; the problem only appeared when importing from a newly added database. Comparing the two, the old Oracle server is version 11 while the new one is version 19, which suggested a version mismatch.
Searching for ORA-24920 online, the advice is to upgrade the Oracle client, which further points to the JDBC driver.
Under the sqoop tool's lib directory, the Oracle JDBC driver is ojdbc6.jar, which does not match Oracle version 19.
You can check the Oracle version and the corresponding Oracle JDBC driver version on this page:
https://www.oracle.com/database/technologies/faq-jdbc.html#02_03

The link to the download page is as follows:
https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html

Solution:

According to the compatibility table, ojdbc8.jar was downloaded. After uploading it to sqoop's lib directory, delete the old driver and re-import the data.
The old driver must be deleted or moved out of the lib directory, otherwise the import still fails; presumably when both versions are present, the old one gets loaded.
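The jar swap can be sketched in the shell. Everything below is a stand-in: a throwaway directory plays the role of sqoop's lib directory (in a real installation it would be something like $SQOOP_HOME/lib), and empty files stand in for the jars:

```shell
# Throwaway directory standing in for sqoop's lib directory
WORK=$(mktemp -d)
SQOOP_LIB="$WORK/lib"
mkdir -p "$SQOOP_LIB"
touch "$SQOOP_LIB/ojdbc6.jar"     # the old driver already in lib
touch "$WORK/ojdbc8.jar"          # the newly downloaded driver

# Move the old driver OUT of lib (keeping a backup) before adding the
# new one; if both jars remain in lib, the old one may still be loaded.
mkdir -p "$WORK/backup"
mv "$SQOOP_LIB/ojdbc6.jar" "$WORK/backup/"
cp "$WORK/ojdbc8.jar" "$SQOOP_LIB/"

ls "$SQOOP_LIB"                   # prints: ojdbc8.jar
```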

[Solved] Error 1044: Access denied for user 'syz-remote'@'%' to database 'webapp'

Error 1044: Access denied for user 'syz-remote'@'%' to database 'webapp'

Cause: I was testing Go code that connects to the database. I had just reinstalled MySQL and set up remote login, but this problem still occurred.

The syz-remote account was created specifically for remote login, so unlike the root user it does not have the necessary privileges; we need to grant them in MySQL.

View the MySQL account privileges with:

mysql> SELECT host,user,password_expired,Grant_priv,Super_priv FROM mysql.user;
+-----------+------------------+------------------+------------+------------+
| host      | user             | password_expired | Grant_priv | Super_priv |
+-----------+------------------+------------------+------------+------------+
| localhost | root             | N                | Y          | Y          |
| localhost | mysql.session    | N                | N          | Y          |
| localhost | mysql.sys        | N                | N          | N          |
| localhost | debian-sys-maint | N                | Y          | Y          |
| %         | syz-remote       | N                | N          | N          |
+-----------+------------------+------------------+------------+------------+
5 rows in set (0.00 sec)

You can see that both Grant_priv and Super_priv for syz-remote are N.

Grant the privileges:

update mysql.user set Grant_priv = 'Y', Super_priv = 'Y' where user = 'syz-remote';

Then refresh the privilege tables:

flush privileges;

DM Backup Database Error: [-7169]: bakres failed to communicate with DMAP messages

[Error message]

SQL> backup database backupset 'dm3bak01';
backup database backupset 'dm3bak01';
[-7169]:bakres failed to communicate with DMAP message.
Used time: 00:00:10.033. execution number:0.

[Log content]

2022-04-27 09:08:01 [CMD] database P0000048762 PPID4294967295  backup database backupset 'dm3bak01';


2022-04-27 09:08:01 [CMD] database P0000048762 PPID4294967295  BACKUP DATABASE [dm03]
2022-04-27 09:08:01 [INFO] database P0000048762  PPID4294967295  CMD START....
2022-04-27 09:08:01 [INFO] database P0000048762  PPID4294967295  BACKUP DATABASE [dm03],execute......
2022-04-27 09:08:01 [INFO] database P0000048762  PPID4294967295  check limits of huge data
2022-04-27 09:08:01 [INFO] database P0000048762  PPID4294967295  CMD CHECK LSN ......
2022-04-27 09:08:01 [INFO] database P0000048762  PPID4294967295  adjust checkpoint lsn to maximal apply lsn 0
2022-04-27 09:08:11 [INFO] database P0000048762  PPID4294967295  connect to dmap with portnum[4236], tsk_num: [4], code: [-7169].
2022-04-27 09:08:11 [WARNING] database P0000048762  PPID4294967295  CMD END.CODE:[-7169], DESC:[bakres fails to communicate with DMAP messages], COST:[00:00:10]

[Cause]
DmAPService is not started, or it is running but in a faulty state.

[Solution]

[pwd:/u01/dm8/dmdbms/bin]$ ./DmAPService start
DmAPService (pid 101899) is running.    # I find it is started

[pwd:/u01/dm8/dmdbms/bin]$ ./DmAPService restart    # try to restart
Stopping DmAPService:                                      [ OK ]
Starting DmAPService:                                      [ OK ]

[pwd:/u01/dm8/dmdbms/bin]$ !disql
SQL> backup database backupset 'dm3bak01';      # backup successfully
The operation has been executed
Time used: 00:00:03.660. Execution number: 55700.

[Solved] MongoDB Error: Command failed with error 251 (NoSuchTransaction)

Recently I ran into an online bug: calls to a particular interface occasionally threw exceptions. The log showed that an error occurred while operating MongoDB:

error message: Command failed with error 251 (NoSuchTransaction): 'Given transaction number 115 does not match any in-progress transactions. The active transaction number is 114' on server xx.xx.xx.xx:xxxx. The full response is {"errorLabels": ["TransientTransactionError"], "ok": 0.0, "errmsg": "Given transaction number 115 does not match any in-progress transactions. The active transaction number is 114", "code": 251, "codeName": "NoSuchTransaction"}

After searching and troubleshooting, the problem was traced to two requests in the same transaction being sent to the DB at the same time, which can, with some probability, produce the following scenario:

1. Request 1 and request 2 are sent to Mongo at the same time and start executing.
2. Request 2 completes while request 1 is still executing.
3. Because request 1 has not finished, the transaction has not actually started at the DB level, so request 2 cannot finish normally (this is why the error says transaction number 115 does not match: the transaction is not yet registered in the DB). The transaction is rolled back and an exception is thrown.
4. Request 1 then completes, but the transaction has already been rolled back, so its operation has no effect.

Reviewing the code, the two simultaneous requests came from the zipWith() method used to merge data: zipWith subscribes to both upstream requests at the same time, which triggers the scenario above when they operate on Mongo.

The fix is simple: use the zipWhen() method instead. zipWhen waits for the first request's data to arrive before issuing the second one, which neatly sidesteps the problem.

[Solved] Mysql Build Error: [ERROR] Slave I/O for channel '': error connecting to master

Project scenario:

MySQL 5.7 dual-master replication:
Host A: 192.168.218.62:3306, the production library
Host B: 192.168.218.95:3307, a newly created empty library
Operating system: CentOS 7

Problem description

Start slave on host B:
change master to master_host='192.168.218.62', master_port=3306, master_user='repl', master_password='*****', master_log_file='mysql-bin.000017', master_log_pos=****;
After start slave, replication is normal.

Enable slave on host A:
change master to master_host='192.168.218.95', master_port=3307, master_user='repl', master_password='*****', master_log_file='mysql-bin.1234', master_log_pos=****;
After start slave, show slave status reports the following error:
error connecting to master 'repl@192.168.218.95:3307' - retry-time: 60 retries: 6
There are no other meaningful messages under /var/log/messages.

Cause analysis:

Troubleshooting:
1. From host A, log in to host B's MySQL: mysql -u repl -p'*****' -P 3307 -h 192.168.218.95 works fine, which rules out account and password errors.
2. Check the replication account's privileges on host B: show grants for repl@'%' shows it has the REPLICATION SLAVE and REPLICATION CLIENT privileges, which rules out a permission problem.
I also tried creating a new account and restarting the service, to no effect.
Finally, the cause turned out to be SELinux.
Check the SELinux status:

 [root@localhost ~]# getenforce
Enforcing    (SELinux is enabled and enforcing)

Solution:

Disable SELinux.
I. Temporarily disable:
run the command setenforce 0 (this reverts after the machine restarts)
check SELinux status:

[root@localhost ~]# getenforce
	Permissive    (SELinux is no longer enforcing)

II. Permanently disable:
edit the /etc/selinux/config file and set SELINUX=disabled (the server must be rebooted for this to take effect).
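The permanent change is a one-line edit; it can be sketched with sed against a throwaway copy of the file (on the real server you would edit /etc/selinux/config itself and reboot):

```shell
# Stand-in for /etc/selinux/config
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Flip the SELINUX mode to disabled; SELINUXTYPE is untouched because
# the pattern requires '=' immediately after SELINUX
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"

grep '^SELINUX=' "$cfg"    # prints: SELINUX=disabled
```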

Re-run change master and start slave, and replication now succeeds.

[Solved] kitti2bag Error: Failed to find match for field intensity

Problem analysis: this error usually occurs when a point cloud type with an intensity field (such as PointXYZI) is used to load a point cloud that carries no intensity information. The point cloud data downloaded from the KITTI dataset has the structure (x, y, z, i), which does include intensity, so the problem is most likely that the intensity field is dropped or mis-named when the bin files are converted to a bag file.

Solution:

  1. Modify kitti2bag.
  2. Generate new bag.

Modification steps:

Enter whereis kitti2bag in the terminal to find the path of the script:

whereis kitti2bag

Open the file at that path with vim or gedit and change 'i' to 'intensity':

# After
        # fill pcl msg
        fields = [PointField('x', 0, PointField.FLOAT32, 1),
                  PointField('y', 4, PointField.FLOAT32, 1),
                  PointField('z', 8, PointField.FLOAT32, 1),
                  PointField('intensity', 12, PointField.FLOAT32, 1)]
        pcl_msg = pcl2.create_cloud(header, fields, scan)

# Before
        # fill pcl msg
        fields = [PointField('x', 0, PointField.FLOAT32, 1),
                  PointField('y', 4, PointField.FLOAT32, 1),
                  PointField('z', 8, PointField.FLOAT32, 1),
                  PointField('i', 12, PointField.FLOAT32, 1)]
        pcl_msg = pcl2.create_cloud(header, fields, scan)

Re-convert the bag file.
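If you prefer not to edit by hand, the rename can be scripted. The sketch below patches a stand-in file (hypothetical content; the real path comes from `whereis kitti2bag`):

```shell
# Stand-in for the kitti2bag script located via `whereis kitti2bag`
script=$(mktemp)
echo "PointField('i', 12, PointField.FLOAT32, 1)]" > "$script"

# Rename the field from 'i' to 'intensity' so consumers that expect
# an 'intensity' field can match it
sed -i "s/PointField('i',/PointField('intensity',/" "$script"

grep -o "intensity" "$script"     # prints: intensity
```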

SQL Server Error: Arithmetic overflow error converting expression to data type int.

1. Problem description

SQL Server (SQL DW) reports an error when using count to query the number of rows in a table:

select count(*)  from test.test_t;

Then an error is reported:

SQL ERROR [8115] [S0002]: Arithmetic overflow error converting expression to data type int.

2. Cause of the problem

The table holds a large amount of data: count returns an int, and the row count exceeds the int range.

tinyint: 0 to 255
smallint: -2^15 (-32,768) to 2^15 - 1 (32,767)
int: -2^31 (-2,147,483,648) to 2^31 - 1 (2,147,483,647)
bigint: -2^63 (-9,223,372,036,854,775,808) to 2^63 - 1 (9,223,372,036,854,775,807)
decimal: fixed-precision numeric data from -10^38 + 1 to 10^38 - 1

3. Solution

SQL Server provides the count_big function, which returns a bigint, for exactly this case:

select count_big(*)  from test.test_t;
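For reference, the int ceiling that count(*) runs into can be checked with a quick awk one-liner (plain arithmetic, nothing SQL Server specific):

```shell
# SQL Server's int tops out at 2^31 - 1, which a large table's
# row count can exceed; count_big returns bigint (up to 2^63 - 1)
awk 'BEGIN { printf "%d\n", 2^31 - 1 }'    # prints: 2147483647
```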

How to Solve zuul Forwarding Error (Not Retryable)

Problem

When service A restarts and zuul has no retry mechanism configured, requests routed to service A fail with an error.

Solution: Modify gateway profile

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeout-in-milliseconds: 20000
zuul:
  host:
    connect-timeout-millis: 20000
    socket-timeout-millis: 20000
  routes:
    #userMicroservices
    userService:
      #All requests with /user as the header are forwarded to user
      path: /user/**
      serviceId: user
      #url: http://localhost:8091
    pickupService:
      #All requests with /pickup as the header are forwarded to pickup
      path: /pickup/**
      serviceId: pickup
    orderService:
      #All requests with /order as the header are forwarded to order
      path: /order/**
      serviceId: order
  #Whether retryable is enabled
  retryable: true
ribbon:
  #resolve the timeout problem caused by zuul forwarding requests
  ReadTimeout: 60000 
  SocketTimeout: 60000
  # Number of retries against the current server
  MaxAutoRetries: 2
  # Number of other servers to try next
  MaxAutoRetriesNextServer: 0

How to Solve Gbase 8A cgroup Startup Error on RHEL7

Problem
RHEL7 starts cgroup with an error: Failed to start cgconfig.service: Unit not found.
Steps
Step 1
Check whether the following packages are installed; install any that are missing:

libcgroup-0.41-8.el7.x86_64.rpm
libcgroup-tools-0.41-8.el7.x86_64.rpm

Step 2
After installation, set the cgconfig service to start on boot:

systemctl enable cgconfig.service

Step 3
Start the cgconfig service.

systemctl start cgconfig.service

How to Solve Various StarRocks Errors

Question 1:

Problem:

{"status":"FAILED","msg":"There is no 100-continue header"}

Reason:

This message appears because the CSV file is empty; the load itself is otherwise unaffected.

Solution:

Add a request header to the command

-H "expect:100-continue"

curl --location-trusted -u root: -H "column_separator:|" -H "columns:__op='upsert'" -H "expect:100-continue" -T /home/zw/insert_sr_file/ip_basic.csv http://127.0.0.1:8030/api/zw/ip_basic/_stream_load

Question 2:

Problem

There is data in the CSV file, but the error is still reported.

Reason:

The columns in the file do not correspond to the table fields.

Solution:

The import can map the file's columns to table fields via the fields specified in the columns header.
Usage:

-H "columns:pid,ip_val,area_id......"

Full example:

curl --location-trusted -u root: \
-H "column_separator:|" \
-H "columns:pid,ip_val,area_id,udate,ftime,utime,ip,bip,cip,oui,mac,cname,user,depart,comp,place,os,os_sub,os_version,host_name,detail,m_type,m_sub_type,bip_val,cip_val,is_asset,is_controlled,source,domain,network_type,insert_unixtime,country_code,__op='upsert'" \
-T /home/zw/insert_sr_file/ip_basic.csv \
http://172.16.130.184:8030/api/zw/ip_basic/_stream_load

 

Question 3

Problem

all partitions have no load data

Reason

Caused by an empty file.

Solution

This can be safely ignored: the message only appears when the file is empty, and the load is not affected.
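Since Questions 1 and 3 both trace back to empty CSV files, a small guard before the stream load keeps this noise out of the logs. A sketch (the CSV path is a stand-in; the load step would be the curl command from Question 1):

```shell
# Hypothetical CSV path; an empty temp file stands in for it here
csv=$(mktemp)

if [ -s "$csv" ]; then
  # Non-empty: run the stream load (the curl command from Question 1)
  echo "loading $csv"
else
  # Empty: StarRocks would only emit the harmless "no 100-continue
  # header" / "all partitions have no load data" messages, so skip it
  echo "skip empty file"
fi
```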