Tag Archives: database

[Solved] Spring Boot Connect MongoDB Error: UncategorizedMongoDbException: Command failed with error 13 (Unauthorized)

[phenomenon]

failed; nested exception is org.springframework.data.mongodb.UncategorizedMongoDbException: Command failed with error 13 (Unauthorized): 'command insert requires authentication' on server localhost:27017. The full response is {"ok": 0.0, "errmsg": "command insert requires authentication", "code": 13, "codeName": "Unauthorized"}; nested exception is com.mongodb.MongoCommandException: Command failed with error 13 (Unauthorized): 'command insert requires authentication' on server localhost:27017. The full response is {"ok": 0.0, "errmsg": "command insert requires authentication", "code": 13, "codeName": "Unauthorized"}] with root cause

com.mongodb.MongoCommandException: Command failed with error 13 (Unauthorized): 'command insert requires authentication' on server localhost:27017. The full response is {"ok": 0.0, "errmsg": "command insert requires authentication", "code": 13, "codeName": "Unauthorized"}
	at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:358) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:279) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:100) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:490) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:71) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:253) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:202) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:118) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.operation.MixedBulkWriteOperation.executeCommand(MixedBulkWriteOperation.java:431) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.operation.MixedBulkWriteOperation.executeBulkWriteBatch(MixedBulkWriteOperation.java:251) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.operation.MixedBulkWriteOperation.access$700(MixedBulkWriteOperation.java:76) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:194) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:185) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.operation.OperationHelper.withReleasableConnection(OperationHelper.java:621) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:185) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.internal.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:76) ~[mongodb-driver-core-4.2.3.jar:na]
	at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:187) ~[mongodb-driver-sync-4.2.3.jar:na]
	at com.mongodb.client.internal.MongoCollectionImpl.executeInsertMany(MongoCollectionImpl.java:498) ~[mongodb-driver-sync-4.2.3.jar:na]
	at com.mongodb.client.internal.MongoCollectionImpl.insertMany(MongoCollectionImpl.java:480) ~[mongodb-driver-sync-4.2.3.jar:na]
	at com.mongodb.client.internal.MongoCollectionImpl.insertMany(MongoCollectionImpl.java:475) ~[mongodb-driver-sync-4.2.3.jar:na]
	at org.springframework.data.mongodb.core.MongoTemplate.lambda$insertDocumentList$17(MongoTemplate.java:1490) ~[spring-data-mongodb-3.2.6.jar:3.2.6]
	at org.springframework.data.mongodb.core.MongoTemplate.execute(MongoTemplate.java:553) ~[spring-data-mongodb-3.2.6.jar:3.2.6]
	at org.springframework.data.mongodb.core.MongoTemplate.insertDocumentList(MongoTemplate.java:1483) ~[spring-data-mongodb-3.2.6.jar:3.2.6]
	at org.springframework.data.mongodb.core.MongoTemplate.doInsertBatch(MongoTemplate.java:1346) ~[spring-data-mongodb-3.2.6.jar:3.2.6]
	at org.springframework.data.mongodb.core.MongoTemplate.insert(MongoTemplate.java:1280) ~[spring-data-mongodb-3.2.6.jar:3.2.6]

[solution]

Add the following configuration:

spring.data.mongodb.username=admin
spring.data.mongodb.password=admin

The full application.properties then looks like this:

spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.username=admin
spring.data.mongodb.password=admin
spring.data.mongodb.database=user
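
If the account was created in MongoDB's admin database rather than in the target database, Spring Boot must also be told where to authenticate. A hedged addition, assuming an admin/admin user defined in the admin database:

spring.data.mongodb.authentication-database=admin

If no such user exists yet, a minimal mongo shell sketch for creating one (the readWrite role is an assumption; adjust to your needs):

use admin
db.createUser({ user: "admin", pwd: "admin", roles: [ { role: "readWrite", db: "user" } ] })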

[Solved] ERROR OGG-01232 Receive TCP params error: TCP/IP error 104 (Connection reset by peer), endpoint:

Solution 1:
The error is caused by a port mismatch between the source and the target.
The source side reports: ERROR OGG-01232 Receive TCP params error: TCP/IP error 104 (Connection reset by peer), endpoint: 10.238.83.44:7847

Edit the process parameter file and check the configured port:

edit params 42P3  (42P3 is the process name)

The parameter file turns out to contain rmthost 10.238.83.44 mgrport 7839, compress

This does not match the port in the error message. Change the port to 7847 with edit params 42P3, then restart the process with start 42P3, as in the session sketched below.
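
Putting Solution 1 together, a hedged GGSCI session (assuming 42P3 is the source-side pump process and 7847 is the manager port actually listening on the target):

GGSCI> stop 42P3
GGSCI> edit params 42P3
-- in the editor, change the manager port line to match the target:
rmthost 10.238.83.44 mgrport 7847, compress
GGSCI> start 42P3
GGSCI> info 42P3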

Solution 2:
If the error persists after the port has been corrected (that is, the target port already matches the source), the parameter line itself may be corrupted.

Delete the port line and retype it with edit params 42P3, then restart with start 42P3.

Solution 3:
Roll the source-side delivery (pump) process over to a new trail file:

alter extract 42P3, etrollover

[Solved] Redis Startup Error: QForkMasterInit: system error caught. error code=0x000005af

 

1. Problems

Starting redis-server.exe directly makes the window flash open and close immediately, and starting it with a script plus configuration file behaves the same way. Starting it from the command line reports the error:

[23848] 16 Mar 16:10:32.565 # QForkMasterInit: system error caught. error code=0x000005af, message=VirtualAllocEx failed.: unknown error

2. Solutions

Set the maxmemory and maxheap parameters in Redis’s .conf file:

maxmemory 120MB

maxheap 180MB

The right values for maxmemory and maxheap depend on your machine; a common rule of thumb is maxheap = 1.5 * maxmemory.
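
After editing, start Redis with the configuration file explicitly so the new limits are actually read; a minimal sketch, assuming the redis.windows.conf that ships with the Windows port:

redis-server.exe redis.windows.conf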

[Solved] GBase 8a MPP Database sftp Loading Large File Error

Problem phenomenon
Loading large files via sftp reports an error, while small files load normally.

Cause Analysis
When the cluster's number of concurrent load tasks multiplied by the maximum number of loader machines per task is large, sftp file loading can fail.

Solution
Check whether the parameter gcluster_loader_max_data_processors is set too large; after reducing it, the error no longer occurs. A hedged way to inspect it is sketched below.
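
A hedged way to inspect and lower the parameter, assuming GBase 8a's MySQL-compatible SHOW VARIABLES/SET GLOBAL syntax applies to this cluster variable (the value 2 is only an example):

show variables like 'gcluster_loader_max_data_processors';
set global gcluster_loader_max_data_processors = 2;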

[Solved] Oracle Delete the Archive Error: RMAN-08137

Oracle reports RMAN-08137 when deleting archived logs:

RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938000_1004292720.dbf thread=1 sequence=938000
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938001_1004292720.dbf thread=1 sequence=938001
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938002_1004292720.dbf thread=1 sequence=938002
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938003_1004292720.dbf thread=1 sequence=938003
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938004_1004292720.dbf thread=1 sequence=938004
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938005_1004292720.dbf thread=1 sequence=938005
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938006_1004292720.dbf thread=1 sequence=938006
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938007_1004292720.dbf thread=1 sequence=938007

The message shows that the archived logs cannot be deleted because the standby database still needs them. Check which archive sequence the standby has applied:

SQL> select open_mode,database_role from v$database;

OPEN_MODE                           DATABASE_ROLE
----------------------------------- ------------------------------------------------
READ ONLY WITH APPLY                PHYSICAL STANDBY

SQL> select process,sequence# from v$managed_standby;

PROCESS                      SEQUENCE#
--------------------------- ----------
DGRD                                 0
ARCH                            939246
DGRD                                 0
ARCH                            939252
ARCH                            939248
ARCH                            939253
RFS                                  0
MRP0                            939254
DGRD                                 0
RFS                                  0
RFS                                  0

Deleting sequence 938000 on the primary reports that the standby still needs it, yet the standby shows it has already applied up to sequence 939254.
Next, query the standby's status from the primary:

SQL> select open_mode,database_role from v$database;

OPEN_MODE                                     DATABASE_ROLE
--------------------------------------------- ------------------------------------------------
READ WRITE                                    PRIMARY

SQL> select dest_name,PROTECTION_MODE,GAP_STATUS,APPLIED_THREAD#,APPLIED_SEQ# from gV$ARCHIVE_DEST_STATUS where type='PHYSICAL';

DEST_NAME                 PROTECTION_MODE                                              GAP_STATUS                APPLIED_THREAD# APPLIED_SEQ#
------------------------- ------------------------------------------------------------ ------------------------- --------------- ------------
LOG_ARCHIVE_DEST_2        MAXIMUM PERFORMANCE                                          RESOLVABLE GAP                          1       939258

The primary shows that the standby has a gap, with status RESOLVABLE GAP.
According to the related documentation, the solution is:

  1. Mount the primary database.

    Issue the following SQL statement at the primary database:
    SQL> ALTER SYSTEM FLUSH REDO TO <target_db_name>;
    target_db_name is the db_unique_name of the standby database

    View the standby's db_unique_name:

SQL> select open_mode,database_role from v$database;

OPEN_MODE                           DATABASE_ROLE
----------------------------------- ------------------------------------------------
READ ONLY WITH APPLY                PHYSICAL STANDBY

SQL> show parameter db_unique_name

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
db_unique_name                       string                            standby

After agreeing on a downtime window with the business, restart the production database into MOUNT state:

SQL> select open_mode,database_role from v$database;

OPEN_MODE                                     DATABASE_ROLE
--------------------------------------------- ------------------------------------------------
MOUNTED                                       PRIMARY

SQL> ALTER SYSTEM FLUSH REDO TO standby;

System altered.

SQL> alter database open;
 
Database altered.

Query the standby's gap status again from the primary:

SQL> select open_mode,database_role from v$database;

OPEN_MODE                                     DATABASE_ROLE
--------------------------------------------- ------------------------------------------------
READ WRITE                                    PRIMARY

SQL> select dest_name,PROTECTION_MODE,GAP_STATUS,APPLIED_THREAD#,APPLIED_SEQ# from gV$ARCHIVE_DEST_STATUS where type='PHYSICAL';

DEST_NAME                 PROTECTION_MODE                                              GAP_STATUS                APPLIED_THREAD# APPLIED_SEQ#
------------------------- ------------------------------------------------------------ ------------------------- --------------- ------------
LOG_ARCHIVE_DEST_2        MAXIMUM PERFORMANCE                                          NO GAP                                  1       939258

GAP_STATUS is now NO GAP; rerun the archive deletion command and it completes normally. A hedged example of the command follows.
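
A hedged example of the deletion command, assuming a one-day retention window (adjust to your backup policy):

RMAN> delete archivelog until time 'sysdate - 1';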

[Solved] GBase 8a MPP Database Loading Error: Unsupported version

Problem:
Data loading failed with the error:

[ERROR 2018-07-03 14:32:21.612 c.z.u.c.p.l.DataLoadingService:122 pool-16-thread-1] (GBA-01EX-700) Gbase general error: Task 780837 failed,
193.168.199.14:5050Failed to query in gnode:DETAIL: (GBA-01EX-700) Gbase general error:
(gns_host: 193.168.199.15) Unsupported version (not an attribute), or file does not exist: ./pm/metadata/a_l_cell_mm_stat_n4.GED/C00045.ctl.S

 

Solution:
There are a large number of .ctl.S files under this node's metadata directory. .ctl.S is confirmed to be a temporary file created during disk writes and is normally cleaned up when the write completes; leftover .ctl.S files are dirty data and must be deleted manually.
After deleting the .ctl.S files, loading works normally. A cleanup sketch follows.
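
A hedged cleanup sketch, assuming the metadata path from the error message and that no load task is running on the node at the time:

# list the leftover temporary files first
find ./pm/metadata -name '*.ctl.S' -print
# then remove them
find ./pm/metadata -name '*.ctl.S' -delete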

[Solved] Artisan error: SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 1000 bytes

Problem Description:

php artisan migrate Error:

Illuminate\Database\QueryException 

  SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 1000 bytes (SQL: alter table `users` add unique `users_email_unique`(`email`))

Cause Analysis:
MySQL's utf8 charset stores at most 3 bytes per character, so inserting a 4-byte wide character raises an exception. The largest Unicode code point that 3-byte UTF-8 can encode is 0xFFFF, the end of the Basic Multilingual Plane (BMP). Unicode characters outside the BMP, including Emoji characters, therefore cannot be stored with MySQL's utf8 character set.

This is also one of the reasons Laravel 5.4 switched to the 4-byte utf8mb4 character encoding. Note, however, that utf8mb4 is only supported from MySQL 5.5.3 onwards (check with: select version();). If your MySQL version is older, it needs upgrading.

Solution:
1. Upgrade MySQL to 5.5.3 or higher, and/or cap Laravel's default index length by adding the following to app/Providers/AppServiceProvider.php:

use Illuminate\Support\Facades\Schema;

public function boot()
{
    Schema::defaultStringLength(191);
}

2. Drop the already-created tables in the database and rerun php artisan migrate.
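
Alternatively, the index length can be capped in the migration itself; a minimal sketch, assuming the default users table migration (191 characters x 4 bytes = 764 bytes, under InnoDB's 767-byte key limit):

$table->string('email', 191)->unique();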

Oracle Database Cannot Open mount Mode Error: ORA-01102

Mounting the database fails with ORA-01102: cannot mount database in EXCLUSIVE mode:

SQL> startup nomount;
 
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size		    2932632 bytes
Variable Size		  427819112 bytes
Database Buffers	  629145600 bytes
Redo Buffers		   13844480 bytes
SQL> alter database mount;
 
alter database mount
*
ERROR at line 1:
ORA-01102: cannot mount database in EXCLUSIVE mode

Reason: A sgadef.dbf file exists in the $ORACLE_HOME/dbs directory, or Oracle background processes (pmon, smon, lgwr, dbwr) are still running. Even after the database is shut down, the shared memory segments and semaphores can remain, along with the $ORACLE_HOME/dbs/lk<SID> and sgadef.dbf files that lock the shared memory. So even though no memory is allocated, Oracle believes the instance is still locked.

Solution:
1. Go to the /d01/oracle/PROD/db/tech_st/12.1.0/dbs/ directory.
2. Delete the lkPROD file:

rm -rf  lkPROD

3. Make sure Oracle has no background processes left: ps -ef | grep ora_ | grep PROD

If any background processes remain, kill them:

[oracle@ebs ~]$  kill -9 1912
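
If the instance still refuses to mount, the leftover shared memory segments and semaphores mentioned above may also need removing; a hedged sketch using ipcs/ipcrm (the segment id shown is hypothetical):

# list shared memory segments and semaphores owned by oracle
ipcs -m | grep oracle
ipcs -s | grep oracle
# remove a leftover segment by its shmid (example id)
ipcrm -m 32768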

Log in again and start the instance in mount mode:

[oracle@ebs ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Mar 2 12:52:42 2022

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size		    2932632 bytes
Variable Size		  427819112 bytes
Database Buffers	  629145600 bytes
Redo Buffers		   13844480 bytes
Database mounted.

Successfully resolved.

Oracle 18c Error: ORA-12012: error on auto execute of job

On a newly installed Oracle 18c database, the alert log keeps reporting the error:

ORA-12012: error on auto execute of job

ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_222"
ORA-20001: Statistics Advisor: Invalid task name for the current user
ORA-06512: at "SYS.DBMS_STATS", line 49538
ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 881
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 21631
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 23763
ORA-06512: at "SYS.DBMS_STATS", line 49526
2022-02-28 01:27:20.762000 +08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_104148.trc:
ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_224"
ORA-20001: Statistics Advisor: Invalid task name for the current user
ORA-06512: at "SYS.DBMS_STATS", line 49538
ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 881
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 21631
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 23763
ORA-06512: at "SYS.DBMS_STATS", line 49526
2022-02-28 01:37:21.758000 +08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_104738.trc:
ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_226"
ORA-20001: Statistics Advisor: Invalid task name for the current user
ORA-06512: at "SYS.DBMS_STATS", line 49538
ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 881
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 21631
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 23763
ORA-06512: at "SYS.DBMS_STATS", line 49526

 

Solution:
Call the initialization package manually from SQL*Plus as SYSDBA:
1. sqlplus / as sysdba

2. Check whether the advisor tasks for the current user already exist; if not, initialize the package.

3. EXEC dbms_stats.init_package();

4. Confirm again:

SQL> column name format A35
SQL> set linesize 120
SQL> select name, ctime, how_created from sys.wri$_adv_tasks where owner_name = 'SYS' and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');

no rows selected

SQL> EXEC dbms_stats.init_package();

PL/SQL procedure successfully completed.

SQL> select name, ctime, how_created from sys.wri$_adv_tasks where owner_name = 'SYS' and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');

NAME                                CTIME     HOW_CREATED
----------------------------------- --------- ------------------------------
AUTO_STATS_ADVISOR_TASK             28-FEB-22 CMD
INDIVIDUAL_STATS_ADVISOR_TASK       28-FEB-22 CMD

[Solved] Node.js: Error: connect ECONNREFUSED ::1:3306

Using the Node.js mysql database module, connecting to and querying the database fails with:

Error: connect ECONNREFUSED ::1:3306

reason

I had commented out the localhost mapping in /etc/hosts:

# 127.0.0.1   localhost

The connection configuration used localhost, so the database suddenly became unreachable: without the hosts entry, localhost resolves to the IPv6 loopback ::1, where MySQL is not listening (hence ECONNREFUSED ::1:3306). The configuration was:

{
  host: 'localhost',
  user: 'root',
  password: '123456',
  database: 'data',
};

Solution

Method 1:

Restore the localhost mapping in /etc/hosts:

# 127.0.0.1   localhost
127.0.0.1   localhost

Method 2:

Use 127.0.0.1 instead of localhost:

{
  // host: 'localhost',
  host: '127.0.0.1',
  user: 'root',
  password: '123456',
  database: 'data',
};
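
For reference, a minimal runnable sketch with the mysql npm package, using the IPv4 loopback (credentials are the example values above):

// a minimal sketch, assuming the `mysql` npm package
const mysql = require('mysql');

const connection = mysql.createConnection({
  host: '127.0.0.1', // IPv4 loopback avoids the ::1 resolution problem
  user: 'root',
  password: '123456',
  database: 'data',
});

connection.connect((err) => {
  if (err) throw err;
  console.log('connected');
  connection.end();
});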