SYSRAC

SQL> select distinct username, status from gv$session where type <> 'BACKGROUND';

USERNAME             STATUS
-------------------- --------------------
HAFEDA               INACTIVE
SYS                  ACTIVE
SYSRAC               INACTIVE

Separation of Duty for Administering Oracle Real Application Clusters:

Starting with Oracle Database 12c Release 2 (12.2), Oracle Database provides support for separation of duty best practices when administering
Oracle Real Application Clusters (Oracle RAC) by introducing the SYSRAC administrative privilege for the clusterware agent.

This feature removes the need to use the powerful SYSDBA administrative privilege for Oracle RAC.

SYSRAC, like SYSDG, SYSBACKUP and SYSKM, helps enforce separation of duties and reduce reliance on the use of SYSDBA on production systems.
This administrative privilege is the default mode for connecting to the database by the clusterware agent on behalf of the Oracle RAC utilities such as SRVCTL.
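As a quick check (a sketch, not taken from the referenced documentation), you can list what the SYSRAC account has actually been granted:

-- Roles and system privileges granted to the SYSRAC user
select grantee, granted_role from dba_role_privs where grantee = 'SYSRAC';
select grantee, privilege from dba_sys_privs where grantee = 'SYSRAC';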

Ref:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/rilin/changes-in-this-release-for-oracle-rac.html#GUID-360D6CC4-46B1-4A74-A413-C2F3BB80472D

Disabling the Flash Recovery Area

Below are the steps to disable the FRA and to roll the change back.

Before disabling the FRA, take note of the current values of the parameters below; they are needed to roll the operation back.

SQL> show parameter recovery

db_recovery_file_dest_size   big integer 4122M
db_recovery_file_dest        string      /u01/app/oracle/fast_recovery_area
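Optionally (a sketch, not part of the original procedure), check what is currently stored in the recovery area before disabling it:

-- File types occupying the FRA and the space they use
-- (on 10g the view is named V$FLASH_RECOVERY_AREA_USAGE)
select file_type, percent_space_used, number_of_files
from v$recovery_area_usage;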

Disabling the Flash Recovery Area

To disable the flash recovery area, set the DB_RECOVERY_FILE_DEST initialization parameter to a null string.

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='' SCOPE=BOTH SID='*';

The database will no longer provide the space management features of the flash recovery area for the files stored in the old DB_RECOVERY_FILE_DEST location.
The files will still be known to the RMAN repository, however, and available for backup and restore activities.

Examples:

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='' scope=both;

To set a new destination for the archivelogs:

ALTER SYSTEM SET LOG_ARCHIVE_DEST='/u04/archlog' scope=both;

Rollback:

ALTER SYSTEM SET db_recovery_file_dest_size=4122M scope=both;
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/u01/app/oracle/fast_recovery_area' scope=both;

Note:

Always specify the size parameter (DB_RECOVERY_FILE_DEST_SIZE) before specifying the location parameter (DB_RECOVERY_FILE_DEST).
In a RAC database, all instances must have the same values for these parameters.
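A quick way to verify the second point on a RAC database (a sketch):

-- All instances should report identical values
select inst_id, name, value
from gv$parameter
where name in ('db_recovery_file_dest', 'db_recovery_file_dest_size');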

Ref:

How to disable use of Flash Recovery Area for Archivelogs and Backups (Doc ID 297397.1)
https://docs.oracle.com/cd/B19306_01/backup.102/b14192/setup005.htm

ORA-20000: Unable to analyze TABLE "owner"."tablename", insufficient privileges or does not exist

When trying to gather statistics using dbms_stats.gather_table_stats against a partitioned table, the following error is raised:

ORA-20000: Unable to analyze TABLE "owner"."tablename", insufficient privileges or does not exist

Error stack shows:
ORA-20000: Unable to analyze TABLE "owner"."tablename", insufficient
privileges or does not exist
ORA-06512: at “SYS.DBMS_STATS”, line 9634
ORA-06512: at “SYS.DBMS_STATS”, line 9657
ORA-06512: at line 1

The command being run contains a GRANULARITY => 'GLOBAL' clause and also specifies a partition name.

CAUSE

The error is raised when GRANULARITY => 'GLOBAL' is used while gathering statistics on a partition and the partition name is specified.
This is a result of unpublished Bug 5139876: INCORRECT MESSAGE ISSUED FOR GATHER_TABLE_STATS GRANULARITY=GLOBAL.

This is fixed in 11.1, where the partition name is ignored.

SOLUTION

Either upgrade to 11g, or specify GRANULARITY => 'PARTITION' or GRANULARITY => 'DEFAULT' when gathering statistics on a partition.
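For illustration, a minimal sketch (owner, table and partition names are hypothetical):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'SCOTT',      -- hypothetical owner
    tabname     => 'SALES',      -- hypothetical partitioned table
    partname    => 'SALES_Q1',   -- hypothetical partition
    granularity => 'PARTITION');
END;
/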

Also, when GRANULARITY => 'GLOBAL' is used without specifying a partition name, the command works correctly.

When GRANULARITY => 'DEFAULT' is used with a partition name specified, the command also works correctly.

Ref:

'ORA-20000: Unable to analyze TABLE "owner"."tablename" %s, insufficient privileges or does not exist' from SYS.DBMS_STATS.GATHER_TABLE_STATS (Doc ID 333966.1)

ORA-20000: Unable to analyze TABLE "IDB"."DBMS_TABCOMP_TEMP_CMP", insufficient privileges or does not exist

The alert log shows the error messages below:

Sat Aug 01 22:09:40 2020
DBMS_STATS: GATHER_STATS_JOB encountered errors.  Check the trace file.
Errors in file /wat01/app/oracle/diag/rdbms/idbpr01/idbpr01/trace/idbpr01_j000_23625.trc:
ORA-20000: Unable to analyze TABLE "IDB"."DBMS_TABCOMP_TEMP_CMP", insufficient privileges or does not exist

This issue matches Bug 9939773, which was closed as not a bug, as explained by the following description:

The compression advisor creates intermediate tables (it creates and drops tables such as DBMS_TABCOMP_TEMP_UNCMP and DBMS_TABCOMP_TEMP_CMP). If a GATHER_STATS_JOB runs at the same time, these tables will be in its list of tables to analyze. If a table is dropped by the compression advisor job before the GATHER_STATS_JOB reaches it in the list, ORA-20000 is hit. When DBMS_STATS finds the table is no longer available, it writes the error to the alert log. This is no different from the case where a user table is dropped during a statistics gathering run.

You can ignore this error for the following reasons:

The error is non-critical.
Other objects are not affected; the gather stats job continues safely on the remaining user objects and is not terminated because of this error.
The message in the alert log is only informative.
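If you want to confirm that the object was indeed one of the transient compression advisor tables, a simple check (a sketch) is:

-- The advisor creates and drops these tables on the fly;
-- once its run has finished, this normally returns no rows
select owner, object_name, object_type
from dba_objects
where object_name like 'DBMS_TABCOMP_TEMP%';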

Ref:

DBMS_STATS: GATHER_STATS_JOB encountered errors. ORA-20000: Unable to set values for table DBMS_TABCOMP_TEMP_UNCMP: does not exist or insufficient privileges (Doc ID 1463793.1)

Clusterware Network Check

For the Clusterware network check, run and review the output of the following command:

$ <GRID_HOME>/bin/cluvfy comp nodecon -n all -verbose

Example: on CBIO

bash-3.00$ cd /u01/11.2.0/grid/bin
bash-3.00$ ./cluvfy comp nodecon -n all -verbose

Verifying node connectivity

Checking node connectivity…

Checking hosts config file…
  Node Name                             Status
  ————————————  ————————
  bdb2                               passed
  bdb1                               passed

Verification of the hosts config file successful

Interface information for node “bdb2”
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 —— ————— ————— ————— ————— —————– ——
 aggr1  10.30.236.17    10.30.236.0     10.30.236.17    10.172.16.254   00:21:28:D2:F8:CA 9000
 nxge0  10.173.16.199   10.172.16.192   10.173.16.199   10.172.16.254   00:21:28:88:76:10 1500
 nxge0  10.172.16.231   10.172.16.192   10.173.16.199   10.172.16.254   00:21:28:88:76:10 1500
 nxge0  10.172.16.236   10.172.16.192   10.173.16.199   10.172.16.254   00:21:28:88:76:10 1500
 nxge0  10.172.16.234   10.172.16.192   10.173.16.199   10.172.16.254   00:21:28:88:76:10 1500
 nxge2  192.168.1.2     192.168.1.0     192.168.1.2     10.172.16.254   00:21:28:88:76:12 1500
 nxge2  169.254.14.189  169.254.0.0     169.254.14.189  10.172.16.254   00:21:28:88:76:12 1500
 nxge10 192.168.2.2     192.168.2.0     192.168.2.2     10.172.16.254   00:21:28:88:83:92 1500
 nxge10 169.254.207.9   169.254.128.0   169.254.207.9   10.172.16.254   00:21:28:88:83:92 1500

Interface information for node “bdb1”
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 —— ————— ————— ————— ————— —————– ——
 aggr1  10.30.236.16    10.30.236.0     10.30.236.16    10.172.16.254   00:21:28:D2:F8:5E 9000
 nxge0  10.173.16.196   10.172.16.192   10.173.16.196   10.172.16.254   00:21:28:87:22:70 1500
 nxge0  10.172.16.230   10.172.16.192   10.173.16.196   10.172.16.254   00:21:28:87:22:70 1500
 nxge0  10.172.16.235   10.172.16.192   10.173.16.196   10.172.16.254   00:21:28:87:22:70 1500
 nxge2  192.168.1.1     192.168.1.0     192.168.1.1     10.172.16.254   00:21:28:87:22:72 1500
 nxge2  169.254.87.139  169.254.0.0     169.254.87.139  10.172.16.254   00:21:28:87:22:72 1500
 nxge10 192.168.2.1     192.168.2.0     192.168.2.1     10.172.16.254   00:21:28:87:45:92 1500
 nxge10 169.254.232.204 169.254.128.0   169.254.232.204 10.172.16.254   00:21:28:87:45:92 1500

Check: Node connectivity of subnet “10.30.236.0”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb2[10.30.236.17]           bdb1[10.30.236.16]           yes
Result: Node connectivity passed for subnet “10.30.236.0” with node(s) bdb2,bdb1

Check: TCP connectivity of subnet “10.30.236.0”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb1:10.30.236.16            bdb2:10.30.236.17            passed
Result: TCP connectivity check passed for subnet “10.30.236.0”

Check: Node connectivity of subnet “10.172.16.192”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb2[10.173.16.199]          bdb2[10.172.16.231]          yes
  bdb2[10.173.16.199]          bdb2[10.172.16.236]          yes
  bdb2[10.173.16.199]          bdb2[10.172.16.234]          yes
  bdb2[10.173.16.199]          bdb1[10.173.16.196]          yes
  bdb2[10.173.16.199]          bdb1[10.172.16.230]          yes
  bdb2[10.173.16.199]          bdb1[10.172.16.235]          yes
  bdb2[10.172.16.231]          bdb2[10.172.16.236]          yes
  bdb2[10.172.16.231]          bdb2[10.172.16.234]          yes
  bdb2[10.172.16.231]          bdb1[10.173.16.196]          yes
  bdb2[10.172.16.231]          bdb1[10.172.16.230]          yes
  bdb2[10.172.16.231]          bdb1[10.172.16.235]          yes
  bdb2[10.172.16.236]          bdb2[10.172.16.234]          yes
  bdb2[10.172.16.236]          bdb1[10.173.16.196]          yes
  bdb2[10.172.16.236]          bdb1[10.172.16.230]          yes
  bdb2[10.172.16.236]          bdb1[10.172.16.235]          yes
  bdb2[10.172.16.234]          bdb1[10.173.16.196]          yes
  bdb2[10.172.16.234]          bdb1[10.172.16.230]          yes
  bdb2[10.172.16.234]          bdb1[10.172.16.235]          yes
  bdb1[10.173.16.196]          bdb1[10.172.16.230]          yes
  bdb1[10.173.16.196]          bdb1[10.172.16.235]          yes
  bdb1[10.172.16.230]          bdb1[10.172.16.235]          yes
Result: Node connectivity passed for subnet “10.172.16.192” with node(s) bdb2,bdb1

Check: TCP connectivity of subnet “10.172.16.192”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb1:10.173.16.196           bdb2:10.173.16.199           passed
  bdb1:10.173.16.196           bdb2:10.172.16.231           passed
  bdb1:10.173.16.196           bdb2:10.172.16.236           passed
  bdb1:10.173.16.196           bdb2:10.172.16.234           passed
  bdb1:10.173.16.196           bdb1:10.172.16.230           passed
  bdb1:10.173.16.196           bdb1:10.172.16.235           passed
Result: TCP connectivity check passed for subnet “10.172.16.192”

Check: Node connectivity of subnet “192.168.1.0”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb2[192.168.1.2]            bdb1[192.168.1.1]            yes
Result: Node connectivity passed for subnet “192.168.1.0” with node(s) bdb2,bdb1

Check: TCP connectivity of subnet “192.168.1.0”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb1:192.168.1.1             bdb2:192.168.1.2             passed
Result: TCP connectivity check passed for subnet “192.168.1.0”

Check: Node connectivity of subnet “169.254.0.0”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb2[169.254.14.189]         bdb1[169.254.87.139]         yes
Result: Node connectivity passed for subnet “169.254.0.0” with node(s) bdb2,bdb1

Check: TCP connectivity of subnet “169.254.0.0”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb1:169.254.87.139          bdb2:169.254.14.189          passed
Result: TCP connectivity check passed for subnet “169.254.0.0”

Check: Node connectivity of subnet “192.168.2.0”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb2[192.168.2.2]            bdb1[192.168.2.1]            yes
Result: Node connectivity passed for subnet “192.168.2.0” with node(s) bdb2,bdb1

Check: TCP connectivity of subnet “192.168.2.0”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb1:192.168.2.1             bdb2:192.168.2.2             passed
Result: TCP connectivity check passed for subnet “192.168.2.0”

Check: Node connectivity of subnet “169.254.128.0”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb2[169.254.207.9]          bdb1[169.254.232.204]        yes
Result: Node connectivity passed for subnet “169.254.128.0” with node(s) bdb2,bdb1

Check: TCP connectivity of subnet “169.254.128.0”
  Source                          Destination                     Connected?
  ——————————  ——————————  —————-
  bdb1:169.254.232.204         bdb2:169.254.207.9           passed
Result: TCP connectivity check passed for subnet “169.254.128.0”

WARNING:
Could not find a suitable set of interfaces for VIPs

WARNING:
Could not find a suitable set of interfaces for the private interconnect
Checking subnet mask consistency…
Subnet mask consistency check passed for subnet “10.30.236.0”.
Subnet mask consistency check passed for subnet “10.172.16.192”.
Subnet mask consistency check passed for subnet “192.168.1.0”.
Subnet mask consistency check passed for subnet “169.254.0.0”.
Subnet mask consistency check passed for subnet “192.168.2.0”.
Subnet mask consistency check passed for subnet “169.254.128.0”.
Subnet mask consistency check passed.

Result: Node connectivity check passed

Verification of node connectivity was successful.

MVIEWS – ORA-01031: insufficient privileges

When creating a materialized view in a different schema, if the owner of that schema does not have the CREATE TABLE privilege,
creating the materialized view will fail with an ORA-01031 error.

ORA-01031: insufficient privileges

To avoid this error, grant CREATE TABLE to the user/schema in which the materialized view is created.
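For example (the schema name is hypothetical):

-- The MView owner needs CREATE TABLE because the MView's container table
-- is created in that schema
GRANT CREATE TABLE TO app_owner;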

Atomic refresh mechanism

Oracle 10g introduced the atomic refresh mechanism: the materialized view is refreshed as a whole, in a single transaction.

We might expect an atomic refresh to be faster than a manual rebuild or a full refresh, but this is not always the case.

When atomic_refresh is set to TRUE, the whole refresh is done in a single transaction.

exec dbms_mview.refresh('My_MV', atomic_refresh=>TRUE);

If you do not use an atomic refresh (by setting atomic_refresh => FALSE), the materialized view refresh can be optimized with these mechanisms:

- Parallel DML
- Truncate DDL

Setting atomic_refresh => FALSE tells Oracle to truncate the data instead of deleting the rows, which generally performs better than atomic_refresh => TRUE.

When atomic_refresh => TRUE, Oracle deletes the rows from the MView container table.
When atomic_refresh => FALSE, Oracle truncates the MView container table.
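A minimal sketch of a non-atomic complete refresh (the MView name is hypothetical):

-- method => 'C' forces a complete refresh; atomic_refresh => FALSE lets
-- Oracle truncate the container table instead of deleting from it
exec dbms_mview.refresh('MY_MV', method => 'C', atomic_refresh => FALSE);

Keep in mind that with atomic_refresh => FALSE the container table is truncated first, so queries against the MView may return no rows until the reload completes.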

Note that when a materialized view is refreshed in atomic mode, it is eligible for query rewrite if the rewrite integrity mode is set to stale_tolerated.

MVIEW Refresh – wait event "enq: JI - contention"

When launching the refresh of an MVIEW with parallelism, we see the wait event below:

enq: JI - contention

Do not refresh the same MView object from multiple sessions at the same time.

CAUSE

The JI enqueue is used to serialize the refresh of a materialized view. It is acquired in exclusive mode on the MView base (container) table while the refresh is being performed, which ensures that two or more refresh processes do not try to refresh the same object.

SOLUTION

You may want to see which session is holding the JI enqueue with the help of Note 1020008.6 or Note 1020007.6.
If the JI enqueue holder is an intended refresh session, the other session(s) that intend to refresh the same mview may just need to wait until the first one finishes.

If however the JI enqueue holder session is stuck (e.g. was killed without the immediate option), then you may want to take further actions to remove/terminate this session, so that the intended refresh can acquire the JI enqueue on the mview and proceed with the refresh.
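A sketch (not the exact query from the referenced notes) to see which sessions hold or wait for the JI enqueue:

-- LMODE > 0 means the session holds the enqueue, REQUEST > 0 means it is waiting
select l.inst_id, l.sid, s.serial#, s.username, l.lmode, l.request
from gv$lock l
join gv$session s on s.inst_id = l.inst_id and s.sid = l.sid
where l.type = 'JI';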

Ref:

Materialized View Refresh is Hanging With JI Contention (Doc ID 1358453.1)