Tag Archives: oracle

Tuxedo Background Compilation Common Errors [How to Fix]

Error 1: When compiling a service written with Pro*C, the header file cannot be found

Solution:

  • First run the command find . -name "stddef.h" -print to locate the directory that contains stddef.h;
  • Then open the pcscfg.cfg file under $ORACLE_HOME/precomp/admin and change the include path shown below (originally marked in red) to the stddef.h directory returned by the find command.

sys_include=(/usr/include,/usr/lib/gcc-lib/i486-suse-linux/2.95.3/include,/usr/lib/gcc-lib/i386-redhat-linux/2.96/include)
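
A minimal shell sketch of the two steps (run find wherever your compiler is installed; the directory it returns is what goes into sys_include):

$ find / -name "stddef.h" -print 2>/dev/null
$ vi $ORACLE_HOME/precomp/admin/pcscfg.cfg    # update sys_include with the directory found above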

Error 2: prompt “ORA-01031: insufficient privileges” when compiling background source files

Solution:

  • Log in as the oracle user, check the newsale user's permission on /home/oracle, and run the following commands:

$ cd /home

$ chmod -R 744 oracle

  • Grant write permissions to files such as tnsnames.ora under $ORACLE_HOME/network/admin.
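
For example (a sketch; tnsnames.ora is the file named above, and the exact mode should follow your site's policy):

$ cd $ORACLE_HOME/network/admin
$ ls -l tnsnames.ora
$ chmod 664 tnsnames.ora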

 

Error 3: When compiling background source files, prompt “ORA-01034”

Problem symptoms:

“ORA-01034: ORACLE not available

ORA-27121: unable to determine size of shared memory segment

SVR4 Error: 13: Permission denied”

Solution:

Log in as the oracle user, execute the following command

$cd $ORACLE_HOME/bin

$ ls -altr oracle    # check the current permissions; if they are not as shown below, change them

$ chmod 6751 oracle

 

Error 4: When compiling background source files, prompts "ORA-01034, ORA-27101 and Linux Error: 2: No such file or directory"

Solution:

Start the Oracle instance and the listener.
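
A minimal sketch, run as the oracle user (assumes ORACLE_SID and ORACLE_HOME are already set in the environment):

$ lsnrctl start
$ sqlplus / as sysdba
SQL> startup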

 

Error 5: When compiling background source files, prompt “ORA-12705”

Solution:

Check whether the NLS language settings (NLS_LANG and related variables) in the compiling user's .bash_profile match those in the oracle user's .bash_profile.
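
For example, both profiles should carry the same NLS export (the value below is only an illustration; copy whatever the oracle user's .bash_profile actually contains):

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8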

Error 6: When compiling background source files, prompt buildserver related errors

Solution:

Check whether the Tuxedo-related settings in the .bash_profile file, and their order, are correct (compare them with the actual installation paths).
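
A typical block looks like this (a sketch; the Tuxedo install path below is a placeholder for your actual one):

export TUXDIR=/home/tuxedo/tuxedo8.1
export PATH=$TUXDIR/bin:$PATH
export LD_LIBRARY_PATH=$TUXDIR/lib:$LD_LIBRARY_PATH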

Error 7: Compile background source files, prompt “You do not have a valid SDK license”

Solution:

The Tuxedo license may be incorrect. Check whether lic.txt under /home/tuxedo/Tuxedo 8.1/bealic has been overwritten, and confirm that the license file contains TYPE=SDK.
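
A quick check, using the license path quoted above:

$ grep -i "TYPE=" "/home/tuxedo/Tuxedo 8.1/bealic/lic.txt"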

Error 8: When compiling the ubbwinnt file, prompts "CMDTUX_CAT:868: ERROR: tmloadcf cannot run on a non-master node"

Solution:

Check the machine name in the ubbwinnt file, correct it to the local host name, and recompile.

Error 9: When compiling the ubbwinnt file, prompts "CMDTUX_CAT:868: ERROR: tmloadcf cannot run on an active node"

Solution:

Run tmshutdown -y to stop all services, then recompile.

Error 10: When starting the application, or when shutting it down with tmshutdown -y, reports "CMDTUX_CAT:764: ERROR: can't attach to BB"

Solution:

Clean up the leftover IPC resources with ipcs/ipcrm, or restart the machine.
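
A minimal cleanup sketch, run after the application is fully stopped (check each ID before removing it):

$ ipcs -a              # list shared memory segments, semaphores and message queues
$ ipcrm -m SHMID       # remove a leftover shared memory segment (replace SHMID with the id shown by ipcs)
$ ipcrm -s SEMID       # remove a leftover semaphore set (replace SEMID with the id shown by ipcs)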

Error 11: When the application starts, prompts “CMDTUX_CAT:1685:ERROR:Application initialization failure”

Solution:

  • Check whether the server.ini file in the bin folder exists and whether its configuration is correct;
  • Check whether the related Oracle services are started;
  • Check whether the IP address in ubbwinnt is correct;
  • A missing bdmconfig file reports the same error.

Error 12: prompts "CMDTUX_CAT:816: ERROR: Cannot exec, executable file not found" when the application starts up

Solution:

Check whether the executable generated from the source file is missing from the bin folder. If the tuxconfig file is missing, a "CMDTUX_CAT:1360" error is reported as well.

Error 13: “Application Initialization Error” when starting center or counter

Solution:

  • Check whether the BDE data source is set up and whether the settings are correct;
  • Check whether the configuration in setreg.reg is correct;
  • Try another openfund.exe file.

 

Error 14: Client login failed, prompt "WTUXWS32.DLL not found. This application failed to start for this reason. Reinstalling the application may fix the problem"

Solution:

Check whether the Tuxedo patch has been installed and whether the Tuxedo runtime path has been added to the system PATH environment variable. You can also copy the DLLs reported as missing into "C:\WINDOWS\system32".

Error 15: Using the new makefile in 3.5, compiling the background source file reports the error shown below:

[error screenshot]

Solution:

Check whether the path in the .bash_profile file is the configured fbase path. Also check whether the fbase installation package was uploaded intact; it must be uploaded in binary mode.

Error 16: When using the asar middleware to compile the background source file, the error shown below is reported:

[error screenshot]

Solution:

Check if the fbase path is correctly configured in the .bash_profile file.

Error 17: When using the asar middleware to compile the background source file, the error shown below is reported:

[error screenshot]

Solution:

The json library did not compile successfully. Go into the json directory and check whether the .lib folder was generated.

Error 18: The client login interface reports an error as shown below:

[error screenshot]

Solution:

Add F:\hs\fbase20\Fbase_win32\lib to the system environment variable PATH; also add F:\hs\fbase20\Fbase_win32\lib to the user variable lib.

Error 19: Linux address settings

  1. View the network segment of this machine

[Solved] Oracle 18C RAC Install Error: Error in invoking target ‘irman ioracle idrdactl idrdalsnr idrdaproc‘ of makefile

When installing 18C RAC, the GI installation is completed, and an error occurs when installing RDBMS:

Error in invoking target ‘irman ioracle idrdactl idrdalsnr idrdaproc’ of makefile ‘/u01/app/oracle/product/18.0.0.0/dbhome_1/rdbms/lib/ins_rdbms.mk’.

View the installation log:

18 -lctx18 -lzx18 -lgx18 -lctx18 -lzx18 -lgx18 -lordimt -lclscest18 -loevm -lclsra18 -ldbcfg18 -lhasgen18 -lskgxn2 -lnnzst18 -lzt18 -lxml18 -lgeneric18 -locr18 -locrb18 -locrutl18 -lhasgen18 -lskgxn2 -lnnzst18 -lzt18 -lxml18 -lgeneric18  -lgeneric18 -lorazip -loraz -llzopro5 -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged  -lippsmerged -lippcore  -lippcpemerged -lippcpmerged  -lsnls18 -lnls18  -lcore18 -lsnls18 -lnls18 -lcore18 -lsnls18 -lnls18 -lxml18 -lcore18 -lunls18 -lsnls18 -lnls1
INFO: 
8 -lcore18 -lnls18 -lsnls18 -lunls18  -lsnls18 -lnls18  -lcore18 -lsnls18 -lnls18 -lcore18 -lsnls18 -lnls18 -lxml18 -lcore18 -lunls18 -lsnls18 -lnls18 -lcore18 -lnls18 -lasmclnt18 -lcommon18 -lcore18  -ledtn18 -laio -lons  -lfthread18   `cat /u01/app/oracle/product/18.0.0.0/dbhome_1/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/18.0.0.0/dbhome_1/lib -lm    `cat /u01/app/oracle/product/18.0.0.0/dbhome_1/lib/sysliblist` -ldl -lm   -L/u01/app/oracle/product/18.0.0.0/dbhome_1/lib `test -x /usr/bin/hugeedit
INFO: 
 -a -r /usr/lib64/libhugetlbfs.so && test -r /u01/app/oracle/product/18.0.0.0/dbhome_1/rdbms/lib/shugetlbfs.o && echo -Wl,-zcommon-page-size=2097152 -Wl,-zmax-page-size=2097152 -lhugetlbfs`


Cause of the error:

INFO: 
/usr/bin/ld:/u01/app/oracle/product/18.0.0.0/dbhome_1/lib//libodm18.so: file format not recognized; treating as linker script
/usr/bin/ld:/u01/app/oracle/product/18.0.0.0/dbhome_1/lib//libodm18.so:1: syntax error

INFO: 
make: *** [/u01/app/oracle/product/18.0.0.0/dbhome_1/rdbms/lib/oracle] Error 1

INFO: End output from spawned process.
INFO: ----------------------------------
INFO: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'irman ioracle idrdactl idrdalsnr idrdaproc' of makefile '/u01/app/oracle/product/18.0.0.0/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/u01/app/oraInventory/logs/InstallActions2022-08-11_03-59-03PM/installActions2022-08-11_03-59-03PM.log' for details.
Exception Severity: 1

From the log file we know the problem lies in the /u01/app/oracle/product/18.0.0.0/dbhome_1/lib//libodm18.so file, which is usually caused by a corrupted installation package.
Searching MOS for the keyword "file format not recognized; treating as linker script" turns up related notes:
19c Installation Fails with error “libclntsh.so: file format not recognized; treating as linker script” (Doc ID 2631283.1)

Grid Infrastructure Installation Fail in 12.2.0.1 For Standalone And RAC with libodm12.so: file format not recognized; treating as linker script (Doc ID 2373904.1)

Solution:
Re-download the installation media (to be safe, verify the MD5 checksum of the installation media before installing).
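
For example (the zip name below is only illustrative; compare the output with the checksum published on the download page):

$ md5sum LINUX.X64_180000_db_home.zip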

H2 memory database Oracle mode paging error: org.springframework.dao.InvalidDataAccessResourceUsageException: could not prepare statement

I. Cause analysis:

1: When we use Hibernate's NativeQuery for paging, the underlying SQL uses either limit or rownum, and which paging method is used is determined by the dialect of the database. The following explains how to solve the error that occurs when H2's Oracle mode pages with NativeQuery: org.springframework.dao.InvalidDataAccessResourceUsageException: could not prepare statement; SQL [SELECT * limit ?]; SQLGrammarException: could not prepare statement
We can see that H2's Oracle mode uses the limit method for paging, but paging with limit reports this error
2: H2 paging method
Open H2's dialect class H2Dialect, and we can see that H2 pages with limit

3: Oracle paging method
Open the Oracle dialect class for the Oracle version in use

We can see that at the bottom layer Oracle pages with rownum

II. Problem-solving
1: Since we only need to solve the paging problem here, we create a custom dialect class TestH2Dialect that inherits from H2Dialect

2: Because our custom dialect class inherits from H2Dialect, we do not need to worry about other dialect behavior; we only need to override the paging method to solve the problem above. Here we take the Oracle 12 paging handler as an example.
Create TestH2Dialect as the custom dialect:

import org.hibernate.dialect.H2Dialect;
import org.hibernate.dialect.pagination.LimitHandler;

public class TestH2Dialect extends H2Dialect {

    private static final TestOracle12LimitHandler LIMIT_HANDLER = new TestOracle12LimitHandler() ;

    @Override
    public LimitHandler getLimitHandler() {
        return LIMIT_HANDLER;
    }

}

Create the Oracle paging handler class:

import java.util.Locale;

import org.hibernate.LockMode;
import org.hibernate.LockOptions;
import org.hibernate.dialect.pagination.AbstractLimitHandler;
import org.hibernate.dialect.pagination.LimitHelper;
import org.hibernate.engine.spi.QueryParameters;
import org.hibernate.engine.spi.RowSelection;

public class TestOracle12LimitHandler extends AbstractLimitHandler {
    public boolean bindLimitParametersInReverseOrder;
    public boolean useMaxForLimit;
    public static final TestOracle12LimitHandler INSTANCE = new TestOracle12LimitHandler();

    TestOracle12LimitHandler() {
    }

    @Override
    public String processSql(String sql, RowSelection selection) {
        boolean hasFirstRow = LimitHelper.hasFirstRow(selection);
        boolean hasMaxRows = LimitHelper.hasMaxRows(selection);
        return !hasMaxRows ?sql : this.processSql(sql, this.getForUpdateIndex(sql), hasFirstRow);
    }
    @Override
    public String processSql(String sql, QueryParameters queryParameters) {
        RowSelection selection = queryParameters.getRowSelection();
        boolean hasFirstRow = LimitHelper.hasFirstRow(selection);
        boolean hasMaxRows = LimitHelper.hasMaxRows(selection);
        if (!hasMaxRows) {
            return sql;
        } else {
            sql = sql.trim();
            LockOptions lockOptions = queryParameters.getLockOptions();
            if (lockOptions != null) {
                LockMode lockMode = lockOptions.getLockMode();
                switch(lockMode) {
                    case UPGRADE:
                    case PESSIMISTIC_READ:
                    case PESSIMISTIC_WRITE:
                    case UPGRADE_NOWAIT:
                    case FORCE:
                    case PESSIMISTIC_FORCE_INCREMENT:
                    case UPGRADE_SKIPLOCKED:
                        return this.processSql(sql, selection);
                    default:
                        return this.processSqlOffsetFetch(sql, hasFirstRow);
                }
            } else {
                return this.processSqlOffsetFetch(sql, hasFirstRow);
            }
        }
    }

    private String processSqlOffsetFetch(String sql, boolean hasFirstRow) {
        int forUpdateLastIndex = this.getForUpdateIndex(sql);
        if (forUpdateLastIndex > -1) {
            return this.processSql(sql, forUpdateLastIndex, hasFirstRow);
        } else {
            this.bindLimitParametersInReverseOrder = false;
            this.useMaxForLimit = false;
            String offsetFetchString;
            if (hasFirstRow) {
                offsetFetchString = " offset ? rows fetch next ? rows only";
            } else {
                offsetFetchString = " fetch first ? rows only";
            }

            int offsetFetchLength = sql.length() + offsetFetchString.length();
            return (new StringBuilder(offsetFetchLength)).append(sql).append(offsetFetchString).toString();
        }
    }

    private String processSql(String sql, int forUpdateIndex, boolean hasFirstRow) {
        this.bindLimitParametersInReverseOrder = true;
        this.useMaxForLimit = true;
        String forUpdateClause = null;
        boolean isForUpdate = false;
        if (forUpdateIndex > -1) {
            forUpdateClause = sql.substring(forUpdateIndex);
            sql = sql.substring(0, forUpdateIndex - 1);
            isForUpdate = true;
        }

        int forUpdateClauseLength;
        if (forUpdateClause == null) {
            forUpdateClauseLength = 0;
        } else {
            forUpdateClauseLength = forUpdateClause.length() + 1;
        }

        StringBuilder pagingSelect;
        if (hasFirstRow) {
            pagingSelect = new StringBuilder(sql.length() + forUpdateClauseLength + 98);
            pagingSelect.append("select * from ( select row_.*, rownum rownum_ from ( ");
            pagingSelect.append(sql);
            pagingSelect.append(" ) row_ where rownum <= ?) where rownum_ > ?");
        } else {
            pagingSelect = new StringBuilder(sql.length() + forUpdateClauseLength + 37);
            pagingSelect.append("select * from ( ");
            pagingSelect.append(sql);
            pagingSelect.append(" ) where rownum <= ?");
        }

        if (isForUpdate) {
            pagingSelect.append(" ");
            pagingSelect.append(forUpdateClause);
        }

        return pagingSelect.toString();
    }

    private int getForUpdateIndex(String sql) {
        int forUpdateLastIndex = sql.toLowerCase(Locale.ROOT).lastIndexOf("for update");
        int lastIndexOfQuote = sql.lastIndexOf("'");
        if (forUpdateLastIndex > -1) {
            if (lastIndexOfQuote == -1) {
                return forUpdateLastIndex;
            } else {
                return lastIndexOfQuote > forUpdateLastIndex ?-1 : forUpdateLastIndex;
            }
        } else {
            return forUpdateLastIndex;
        }
    }
    @Override
    public final boolean supportsLimit() {
        return true;
    }
    @Override
    public boolean bindLimitParametersInReverseOrder() {
        return this.bindLimitParametersInReverseOrder;
    }
    @Override
    public boolean useMaxForLimit() {
        return this.useMaxForLimit;
    }
}

3. Modify the dialect class used in the configuration file from the default H2Dialect to the custom TestH2Dialect (point the hibernate.dialect property at the fully qualified class name of TestH2Dialect).

III. Summary
If you encounter other dialect problems later, you can solve them in the same way.

[Solved] Error 4 opening dom ASM/Self in 0x8283c00

While installing Oracle RAC 19.3.0.0 on RHEL 7.9, in the step of the GI installation that runs the root.sh script, the script runs normally on the first node, but "Error 4 opening dom ASM/Self in 0x8283c00" occurs when running root.sh on the second node.

Root.sh script executed successfully in node 1

Problem running root.sh script on node 2

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/momdb2/crsconfig/rootcrs_momdb2_2022-06-19_11-05-10AM.log
2022/06/19 11:05:13 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2022/06/19 11:05:14 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2022/06/19 11:05:14 CLSRSC-363: User ignored prerequisites during installation
2022/06/19 11:05:14 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2022/06/19 11:05:14 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2022/06/19 11:05:14 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2022/06/19 11:05:14 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2022/06/19 11:05:15 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2022/06/19 11:05:16 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2022/06/19 11:05:16 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2022/06/19 11:05:23 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2022/06/19 11:05:23 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2022/06/19 11:05:24 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2022/06/19 11:05:24 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2022/06/19 11:05:35 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2022/06/19 11:06:01 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2022/06/19 11:06:27 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2022/06/19 11:07:02 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2022/06/19 11:07:03 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2022/06/19 11:07:10 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2022/06/19 11:11:03 CLSRSC-343: Successfully started Oracle Clusterware stack
2022/06/19 11:11:03 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2022/06/19 11:11:11 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2022/06/19 11:11:26 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Error 4 opening dom ASM/Self in 0x8283c00
Domain name to open is ASM/Self 
Error 4 opening dom ASM/Self in 0x8283c00

According to the MOS note "19C: While Executing Root.sh on Remote Nodes HIT UNEXPECTED 'ERROR 4 OPENING DOM ASM/SELF IN 0x57f7d60'" (Doc ID 2571719.1), this error has no effect on the installation and can be ignored.

[Solved] ERROR OGG-01028 Detect partial pdata at rba xxxxxx without coinciding crash recovery marker record

ERROR OGG-01028 Detect partial pdata at rba xxxxxx without coinciding crash recovery marker record

Ogg version: 11.2.1.0.13

Fault description:

The OGG extract process abended because the database instance was restarted automatically in the early morning.

After the OGG process went down, check $GGATE_HOME/ggserr.log. The errors are as follows:

ERROR OGG-01028 Detect partial pdata at rba xxxxxx without coinciding crash recovery marker record in log with seqno = xxxxxx

The log error means the following:
OGG-01028 detected partial pdata at RBA xxxxxx that does not coincide with a crash recovery marker record in the log with seqno = xxxxxx

Cause of failure:

When the instance fails while redo is still being written, classic extract reads incomplete log data, which causes the extract process to go down.

Troubleshooting:

1. Try restarting the extract process; it usually starts without problems and continues processing the logs.

2. If restarting the extract process fails, add the following parameter to the extract parameter file to skip the incomplete log data, then restart the extract process (see the sketch after the parameter).

tranlogoptions _skipincompletelogdata
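
A minimal GGSCI sketch of applying the parameter and restarting (EXT1 is a placeholder for your extract group name):

$ cd $GGATE_HOME
$ ./ggsci
GGSCI> stop extract EXT1
GGSCI> edit params EXT1
(add the line: tranlogoptions _skipincompletelogdata, then save and exit)
GGSCI> start extract EXT1
GGSCI> info extract EXT1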

[Solved] sqoop Error: SQLException in nextKeyValue Caused by: ORA-24920: column size too large for client

Question

When importing Oracle data with sqoop, the following errors are reported:

INFO mapreduce.Job: Task Id : attempt_1646802944907_15460_m_000000_1, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
        at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:275)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:568)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
        at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.sql.SQLException: ORA-24920: column size too large for client

Reason

Importing from other databases with Sqoop had worked before; the problem only appeared when importing from the new database. Comparing the two databases, the old Oracle version is 11 and the new one is 19, which may be the cause of the problem.
Searching for the ORA-24920 error suggests upgrading the Oracle client, so the Oracle JDBC driver is the likely culprit.
In the lib directory of the Sqoop tool, the Oracle JDBC driver in use is ojdbc6.jar, which does not match Oracle 19.
You can check the Oracle version and the corresponding Oracle JDBC driver version on this page:
https://www.oracle.com/database/technologies/faq-jdbc.html#02_03

The download page is here:
https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html

Solution:

Based on the version table, ojdbc8.jar was downloaded. After uploading it to Sqoop's lib directory, delete the original driver and re-import the data.
The old driver must be deleted or moved out of the lib directory, otherwise the import still fails; presumably when both versions are present the old one gets picked up.
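
A minimal sketch of the swap (paths are illustrative; $SQOOP_HOME stands for your Sqoop installation directory):

$ cd $SQOOP_HOME/lib
$ mv ojdbc6.jar /tmp/          # move the old driver out of the way
$ cp /path/to/ojdbc8.jar .     # put the new driver in place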

[Solved] cx_Oracle.DatabaseError: Error while trying to retrieve text for error ORA-01804

Error: 

cx_Oracle reports the following error when connecting to Oracle:

cx_Oracle.DatabaseError: Error while trying to retrieve text for error ORA-01804
sample code:
import cx_Oracle
conn = cx_Oracle.connect(user,pwd, self.ois_tns)

 

Solution: Check the Oracle environment variables in the .bash_profile of the Linux user that runs the code on the server, as follows.

export ORACLE_HOME=/test/home/oracle/product/11.2.0.4
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export TNS_ADMIN=$ORACLE_HOME/network/admin

 

[INS-06006] Passwordless SSH connectivity not set up [Solved]

After installing RAC (the GI stack), I encountered [INS-06006] Passwordless SSH connectivity not set up between the following node(s) when installing the Oracle software, for the rac1 and rac2 mutual trust. Setup completed successfully and passwordless login already worked at the OS level, but Test failed, so the following installation steps could not proceed.

Solution: The problem turned out to be the virbr0 virtual NIC that comes with the virtual machine; remove the virtual NIC.
1. ifconfig virbr0 down
2. brctl delbr virbr0
3. systemctl disable libvirtd
4. Restart the virtual machine
5. Remove the original /home/oracle/.ssh directory, set up user equivalence again, and test it (a sketch follows).
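
A minimal sketch of re-establishing user equivalence manually, assuming two nodes named rac1 and rac2 and the oracle user (the installer's own Setup button, or the sshUserSetup.sh script shipped with the installation media, achieves the same thing):

# run as oracle on each node
$ rm -rf ~/.ssh
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# copy each node's public key to both nodes (including itself)
$ ssh-copy-id oracle@rac1
$ ssh-copy-id oracle@rac2
# verify: these should not prompt for a password
$ ssh rac1 date
$ ssh rac2 date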

[Solved] Oracle Delete the Archive Error: RMAN-08137

RMAN-08137 is reported when Oracle deletes archived logs.

RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938000_1004292720.dbf thread=1 sequence=938000
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938001_1004292720.dbf thread=1 sequence=938001
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938002_1004292720.dbf thread=1 sequence=938002
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938003_1004292720.dbf thread=1 sequence=938003
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938004_1004292720.dbf thread=1 sequence=938004
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938005_1004292720.dbf thread=1 sequence=938005
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938006_1004292720.dbf thread=1 sequence=938006
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u02/prod/archivelog/1_938007_1004292720.dbf thread=1 sequence=938007

The error message shows that the archived logs cannot be deleted because the standby database still needs them. Check which archive sequence has been applied on the standby:

SQL> select open_mode,database_role from v$database;

OPEN_MODE                           DATABASE_ROLE
----------------------------------- ------------------------------------------------
READ ONLY WITH APPLY                PHYSICAL STANDBY

SQL> select process,sequence# from v$managed_standby;

PROCESS                      SEQUENCE#
--------------------------- ----------
DGRD                                 0
ARCH                            939246
DGRD                                 0
ARCH                            939252
ARCH                            939248
ARCH                            939253
RFS                                  0
MRP0                            939254
DGRD                                 0
RFS                                  0
RFS                                  0

The primary says sequence 938000 cannot be deleted because the standby still needs it, yet the standby query shows it has already applied archives up to sequence 939254.
Query the status of the standby from the primary database:

SQL> select open_mode,database_role from v$database;

OPEN_MODE                                     DATABASE_ROLE
--------------------------------------------- ------------------------------------------------
READ WRITE                                    PRIMARY

SQL> select dest_name,PROTECTION_MODE,GAP_STATUS,APPLIED_THREAD#,APPLIED_SEQ# from gV$ARCHIVE_DEST_STATUS where type='PHYSICAL';

DEST_NAME                 PROTECTION_MODE                                              GAP_STATUS                APPLIED_THREAD# APPLIED_SEQ#
------------------------- ------------------------------------------------------------ ------------------------- --------------- ------------
LOG_ARCHIVE_DEST_2        MAXIMUM PERFORMANCE                                          RESOLVABLE GAP                          1       939258

The primary shows that the standby has a gap, with status RESOLVABLE GAP.
From the related documentation, the solution is:

  1. Mount the primary database.

    Issue the following SQL statement at the primary database:
    SQL> ALTER SYSTEM FLUSH REDO TO <target_db_name>;
    target_db_name is the db_unique_name of the standby database

    View the standby database's db_unique_name:

SQL> select open_mode,database_role from v$database;

OPEN_MODE                           DATABASE_ROLE
----------------------------------- ------------------------------------------------
READ ONLY WITH APPLY                PHYSICAL STANDBY

SQL> show parameter db_unique_name

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
db_unique_name                       string                            standby

After agreeing on the downtime with the business, restart the production database into mount status:

SQL> select open_mode,database_role from v$database;

OPEN_MODE                                     DATABASE_ROLE
--------------------------------------------- ------------------------------------------------
MOUNTED                                          PRIMARY

SQL> ALTER SYSTEM FLUSH REDO TO standby;

System altered.

SQL> alter database open;
 
 Database altered.

Query the gap status of the standby again on the primary database:

SQL> select open_mode,database_role from v$database;

OPEN_MODE                                     DATABASE_ROLE
--------------------------------------------- ------------------------------------------------
READ WRITE                                    PRIMARY

SQL> select dest_name,PROTECTION_MODE,GAP_STATUS,APPLIED_THREAD#,APPLIED_SEQ# from gV$ARCHIVE_DEST_STATUS where type='PHYSICAL';

DEST_NAME                 PROTECTION_MODE                                              GAP_STATUS                APPLIED_THREAD# APPLIED_SEQ#
------------------------- ------------------------------------------------------------ ------------------------- --------------- ------------
LOG_ARCHIVE_DEST_2        MAXIMUM PERFORMANCE                  NO GAP                          1       939258

GAP_STATUS is now NO GAP; run the archive delete command again and the deletion completes normally.
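
For example, in RMAN (a sketch; use whatever delete command and retention window your cleanup job normally runs):

$ rman target /
RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all completed before 'sysdate-1';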

Oracle Database Cannot Enter Mount Mode, Error: ORA-01102

Error when mounting the database: ORA-01102: cannot mount database in EXCLUSIVE mode

SQL> startup nomount;
 
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size		    2932632 bytes
Variable Size		  427819112 bytes
Database Buffers	  629145600 bytes
Redo Buffers		   13844480 bytes
SQL> alter database mount;
 
alter database mount
*
ERROR at line 1:
ORA-01102: cannot mount database in EXCLUSIVE mode

Reason: A "sgadef.dbf" file exists in the $ORACLE_HOME/dbs directory, or Oracle background processes (pmon, smon, lgwr and dbwr) still exist, or the shared memory segments and semaphores are still present even though the database has been shut down. The "lk" file (lk followed by the instance name) and the "sgadef.dbf" file under $ORACLE_HOME/dbs are used to lock shared memory, so even if no memory is actually allocated, Oracle thinks the memory is still locked.
To view the startup log:


Solution:
1. Go to the /d01/oracle/PROD/db/tech_st/12.1.0/dbs/ directory
2. Delete the lkPROD file

rm -rf  lkPROD

3. Make sure Oracle has no background processes: ps -ef |grep ora_ |grep PROD|grep ora_dbw0_PROD

If a background process remains, kill it:

[oracle@ebs ~]$  kill -9 1912

Log in again using mount mode

[oracle@ebs ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Mar 2 12:52:42 2022

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size		    2932632 bytes
Variable Size		  427819112 bytes
Database Buffers	  629145600 bytes
Redo Buffers		   13844480 bytes
Database mounted.

Successfully resolved.

Oracle18c Error: ORA-12012: error on auto execute of job

On a newly installed Oracle 18c database, the alert log keeps reporting errors:

ORA-12012: error on auto execute of job

ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_222"
ORA-20001: Statistics Advisor: Invalid task name for the current user
ORA-06512: at "SYS.DBMS_STATS", line 49538
ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 881
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 21631
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 23763
ORA-06512: at "SYS.DBMS_STATS", line 49526
2022-02-28 01:27:20.762000 +08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_104148.trc:
ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_224"
ORA-20001: Statistics Advisor: Invalid task name for the current user
ORA-06512: at "SYS.DBMS_STATS", line 49538
ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 881
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 21631
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 23763
ORA-06512: at "SYS.DBMS_STATS", line 49526
2022-02-28 01:37:21.758000 +08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_104738.trc:
ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_226"
ORA-20001: Statistics Advisor: Invalid task name for the current user
ORA-06512: at "SYS.DBMS_STATS", line 49538
ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 881
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 21631
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 23763
ORA-06512: at "SYS.DBMS_STATS", line 49526

 

Solution:
Call the initialization package manually: log in to sqlplus as an administrator.
1. sqlplus / as sysdba

2. Check whether the advisor tasks for the current user already exist; if not, initialize the package

3. EXEC dbms_stats.init_package();

4. Confirm again

SQL> column name format A35
SQL> set linesize 120
SQL> select name, ctime, how_created from sys.wri$_adv_tasks where owner_name = 'SYS' and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');

no rows selected

SQL> EXEC dbms_stats.init_package();

PL/SQL procedure successfully completed.

SQL> select name, ctime, how_created from sys.wri$_adv_tasks where owner_name = 'SYS' and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');

NAME                                CTIME     HOW_CREATED
----------------------------------- --------- ------------------------------
AUTO_STATS_ADVISOR_TASK             28-FEB-22 CMD
INDIVIDUAL_STATS_ADVISOR_TASK       28-FEB-22 CMD

Error in invoking target [How to Solve]

When Oracle 11g is installed on Linux 7, an "Error in invoking target" error is reported when the installation reaches about 86%.

Solution:

[oracle@emrtest ~]$ cd $ORACLE_HOME/sysman/lib/
[oracle@emrtest lib]$ vi ins_emagent.mk

Find:
$(SYSMANBIN)emdctl:
        $(MK_EMAGENT_NMECTL)

Modify to:
$(SYSMANBIN)emdctl:
        $(MK_EMAGENT_NMECTL) -lnnz11
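
The same change can be scripted (a sketch; back up the makefile first and check the result before clicking Retry):

$ cd $ORACLE_HOME/sysman/lib
$ cp ins_emagent.mk ins_emagent.mk.bak
$ sed -i 's/\$(MK_EMAGENT_NMECTL)/$(MK_EMAGENT_NMECTL) -lnnz11/g' ins_emagent.mk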

Click Retry after the change