Channel: Administration – Oracle DBA – Tips and Techniques

Oracle 12c New Feature – Privilege Analysis


In many databases we find that, over the course of time, certain users – particularly application owner schemas and developer accounts – have been granted excessive privileges: more than they need to do their job as developers or than the application requires to perform normally.

Excessive privileges violate the basic security principle of least privilege.

In Oracle 12c we now have a package called DBMS_PRIVILEGE_CAPTURE through which we can identify unnecessary object and system privileges that have been granted, and revoke privileges which have been granted but never used.

The privilege analysis can be at the entire database level, based on a particular role, or context-specific – for example, for a particular user in the database.

These are the main steps involved:

1) Create the Database, Role or Context privilege analysis policy via DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE
2) Start the analysis of used privileges via DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE
3) Stop the analysis when required via DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE
4) Generate the report via DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT
5) Examine views such as DBA_USED_SYSPRIVS, DBA_USED_OBJPRIVS, DBA_USED_PRIVS, DBA_UNUSED_PRIVS etc.

In the example below we perform role-and-context analysis – capturing the privileges the role DBA exercises when the session user is SH.

SQL> alter session set container=sales;

Session altered.

SQL>  grant dba to sh;

Grant succeeded.


SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(-
> name => 'AUDIT_DBA_SH',-
>  type => dbms_privilege_capture.g_role_and_context,-
> roles => role_name_list ('DBA'),-
> condition => 'SYS_CONTEXT (''USERENV'',''SESSION_USER'')=''SH''');

PL/SQL procedure successfully completed.


SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE(-
> name => 'AUDIT_DBA_SH');

PL/SQL procedure successfully completed.

SQL> conn sh/sh@sales
Connected.


SQL> alter user hr identified by hr;

User altered.

SQL> create table myobjects as select * from all_objects;
create table myobjects as select * from all_objects
             *
ERROR at line 1:
ORA-00955: name is already used by an existing object


SQL> drop table myobjects;

Table dropped.

SQL> alter tablespace users offline;

Tablespace altered.

SQL> alter tablespace users online;

Tablespace altered.



SQL> conn / as sysdba
Connected.


SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE(-
>  name => 'AUDIT_DBA_SH');

PL/SQL procedure successfully completed.

SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT(-
>  name => 'AUDIT_DBA_SH');

PL/SQL procedure successfully completed.


SQL> select name,type,enabled,roles,context
  2  from dba_priv_captures;

NAME           TYPE             E ROLES           CONTEXT
-------------- ---------------- - --------------- ------------------------------------------------------------
AUDIT_DBA_SH   ROLE_AND_CONTEXT N ROLE_ID_LIST(4) SYS_CONTEXT ('USERENV','SESSION_USER')='SH'


SQL> select username,sys_priv from dba_used_sysprivs;


USERNAME             SYS_PRIV
-------------------- ----------------------------------------
SH                   CREATE SESSION
SH                   ALTER USER
SH                   CREATE TABLE
SH                   ALTER TABLESPACE
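To complete the workflow we can also query the unused-privilege views to see candidates for revocation, and then drop the capture policy when we are done. A hedged sketch – the revoke shown is illustrative and should be reviewed before running:

```sql
-- System privileges captured for the policy but never exercised
SELECT username, sys_priv
FROM   dba_unused_sysprivs
WHERE  username = 'SH';

-- Illustrative only: revoke a privilege the analysis showed as unused
-- REVOKE ALTER TABLESPACE FROM some_role;

-- Remove the capture policy once the analysis is complete
EXEC DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE(name => 'AUDIT_DBA_SH');
```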

Upgrade Grid Infrastructure 11g (11.2.0.3) to 12c (12.1.0.2)


I have recently tested the upgrade to RAC Grid Infrastructure 12.1.0.2 on my test RAC Oracle Virtualbox Linux 6.5 x86-64 environment.

The upgrade went very smoothly but we have to take a few things into account – some things have changed in 12.1.0.2 as compared to Grid Infrastructure 12.1.0.1.

The most notable change regards the Grid Infrastructure Management Repository (GIMR).

In 12.1.0.1 we had the option of installing the GIMR database – MGMTDB. In 12.1.0.2 it is mandatory, and the MGMTDB database is automatically created as part of the upgrade or initial installation of 12.1.0.2 Grid Infrastructure.

The GIMR primarily stores historical Cluster Health Monitor metric data. It runs as a container database on a single node of the RAC cluster.

The problem I found is that the datafiles for the MGMTDB database are created on the same ASM disk group that holds the OCR and Voting Disk, and there is a prerequisite of at least 4 GB of free space in that disk group – otherwise error INS-43100 is returned, as shown in the figure below.

I had to cancel the upgrade process and add another disk to the +OCR ASM disk group to ensure that at least 4 GB of free space was available; after that the upgrade process went through very smoothly.

On both nodes of the RAC cluster we create the directory structure for the 12.1.0.2 Grid Infrastructure environment, as this is an out-of-place upgrade.

It is also very important to check the health of the RAC cluster before the upgrade (via the crsctl check cluster -all command) and to run the runcluvfy.sh script to verify that all the prerequisites for the 12c GI upgrade are in place.

[oracle@rac1 bin]$ crsctl query crs softwareversion rac1
Oracle Clusterware version on node [rac1] is [11.2.0.3.0]

[oracle@rac1 bin]$ crsctl query crs softwareversion rac2
Oracle Clusterware version on node [rac2] is [11.2.0.3.0]

[oracle@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u02/app/12.1.0/grid -dest_version 12.1.0.2.0

[oracle@rac1 ~]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@rac1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]

[oracle@rac1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [12.1.0.2.0]

[oracle@rac1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].

[oracle@rac1 ~]$ ps -ef |grep pmon
oracle 1278 1 0 14:53 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 16354 1 0 14:22 ? 00:00:00 asm_pmon_+ASM1
oracle 17217 1 0 14:23 ? 00:00:00 ora_pmon_orcl1

[root@rac1 bin]# ./oclumon manage -get reppath

CHM Repository Path = +OCR/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysmgmtdata.269.873212089

[root@rac1 bin]# ./srvctl status mgmtdb -verbose
Database is enabled
Instance -MGMTDB is running on node rac1. Instance status: Open.

[root@rac1 bin]# ./srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home:
Oracle user: oracle
Spfile: +OCR/_MGMTDB/PARAMETERFILE/spfile.268.873211787
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: rac_cluster
PDB service: rac_cluster
Cluster name: rac-cluster
Database instance: -MGMTDB

Oracle 12c New Feature – Real-Time Database Monitoring


Real-Time Database Monitoring, a new feature in Oracle 12c, extends Real-Time SQL Monitoring, which was introduced in Oracle 11g. The main difference is that SQL Monitoring applies only to a single SQL statement.

Very often we run batch jobs which in turn invoke many SQL statements. When batch jobs suddenly run slowly, it becomes very difficult to identify which of the individual SQL statements within the job are contributing to the performance issue. Or perhaps batch jobs have started running slowly only after a database upgrade, and we need to identify which particular statement or statements have suffered performance regressions since the upgrade.

The API used for Real-Time Database Monitoring is the DBMS_SQL_MONITOR package, with its BEGIN_OPERATION and END_OPERATION calls.

So what is a Database Operation?

A database operation is a single SQL statement, or multiple SQL statements and/or PL/SQL blocks, executed between two points in time.

Basically to monitor a database operation it needs to be given a name along with a begin and end point.

The database operation name along with its execution ID will help us identify the operation and we can use several views for this purpose like V$SQL_MONITOR as well as V$ACTIVE_SESSION_HISTORY via the DBOP_NAME and DBOP_EXEC_ID columns.

Let us look at an example of monitoring database operations using Oracle 12c Database Express.

We create a file called mon.sql and will run it in the SH schema while using Database Express to monitor the operation.

The name of the database operation is DBOPS and we are running a number of SQL statements as part of the same database operation.

DECLARE
  n NUMBER;
BEGIN
  n := dbms_sql_monitor.begin_operation('DBOPS');
END;
/

drop table sales_copy;
CREATE TABLE SALES_COPY AS SELECT * FROM SALES;
INSERT INTO SALES_COPY SELECT * FROM SALES;
COMMIT;
DELETE SALES_COPY;
COMMIT;
SELECT * FROM SALES ;
select * from sales where cust_id=1234;

DECLARE
  m NUMBER;
BEGIN
  SELECT dbop_exec_id INTO m FROM v$sql_monitor
  WHERE dbop_name = 'DBOPS'
  AND status = 'EXECUTING';
  dbms_sql_monitor.end_operation('DBOPS', m);
END;
/

From the Database Express 12c Performance menu, choose Performance Hub > Monitored SQL.

In this figure we can see that the DBOPS database operation is still running.

Click the DBOPS link in the ID column.

We can see the various SQL statements running as part of the operation, and that one particular SQL statement is taking much more database time than the other three SQL IDs.

The DELETE SALES_COPY statement is taking over 30 seconds of database time, compared to around a second each for the other statements, and it is consuming close to 2 million buffer gets. So we know which single SQL statement is the most costly for this particular database operation.

We can now see that the database operation is finally complete and it has taken 42 seconds of database time.
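We can also inspect the finished operation from SQL*Plus rather than Database Express – a hedged sketch querying V$SQL_MONITOR (the column selection is illustrative):

```sql
-- Summarize the completed database operation by name
SELECT dbop_name,
       dbop_exec_id,
       status,
       elapsed_time / 1e6 AS elapsed_secs,
       buffer_gets
FROM   v$sql_monitor
WHERE  dbop_name = 'DBOPS';
```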

Oracle 12c Partitioning New Features


Online Move Partition

In Oracle 12c we can now move as well as compress partitions online while DML transactions on the partitioned table are in progress.

In earlier versions we would get an error like the one shown below if we attempted to move a partition while a DML statement on the partitioned table was in progress.

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

This ties in with the 12c Information Lifecycle Management features, where tables (and partitions) can be moved to low-cost storage and/or compressed as part of an ILM policy. We would not want to impact any DML statements in progress while the partitions are being moved or compressed – hence the online capability.

Another feature in 12c is that online partition movement no longer leaves the associated partitioned indexes in an unusable state. The UPDATE INDEXES ONLINE clause maintains the global and local indexes on the table.

SQL> ALTER TABLE sales MOVE PARTITION sales_q2_1998 TABLESPACE users
2  UPDATE INDEXES ONLINE;

Table altered.
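The same online move can be combined with compression as part of an ILM policy – a hedged sketch, where the tablespace name and compression level are illustrative:

```sql
-- Move and compress a partition online; DML against the table can continue
ALTER TABLE sales MOVE PARTITION sales_q2_1998
  TABLESPACE low_cost_ts
  ROW STORE COMPRESS ADVANCED
  UPDATE INDEXES ONLINE;
```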

 

Interval Reference Partitioning

In Oracle 11g, Interval and Reference partitioning methods were introduced. In 12c we take this one step further and combine the two methods: a child table can now be reference-partitioned based on a parent table that uses interval partitioning.

Two things to keep in mind:

A partition created in the child table inherits its partition name from the corresponding partition in the parent table.

Partitions in the child table corresponding to interval partitions in the parent table are created only when rows are inserted into the child table.

Let us look at an example using the classic ORDERS and ORDER_ITEMS tables, which have a parent-child relationship, where the parent ORDERS table has been interval partitioned.

CREATE TABLE "OE"."ORDERS_PART"
 (    
"ORDER_ID" NUMBER(12,0) NOT NULL,
"ORDER_DATE" TIMESTAMP (6)  CONSTRAINT "ORDER_PART_DATE_NN" NOT NULL ENABLE,
"ORDER_MODE" VARCHAR2(8),
"CUSTOMER_ID" NUMBER(6,0) ,
"ORDER_STATUS" NUMBER(2,0),
"ORDER_TOTAL" NUMBER(8,2),
"SALES_REP_ID" NUMBER(6,0),
"PROMOTION_ID" NUMBER(6,0),
CONSTRAINT ORDERS_PART_pk PRIMARY KEY (ORDER_ID)
)
PARTITION BY RANGE (ORDER_DATE)
INTERVAL (NUMTOYMINTERVAL(1,'YEAR'))
(PARTITION P_2006 VALUES LESS THAN (TIMESTAMP'2007-01-01 00:00:00 +00:00'),
PARTITION P_2007 VALUES LESS THAN (TIMESTAMP'2008-01-01 00:00:00 +00:00'),
PARTITION P_2008 VALUES LESS THAN (TIMESTAMP'2009-01-01 00:00:00 +00:00')
)
;

CREATE TABLE "OE"."ORDER_ITEMS_PART"
(    
"ORDER_ID" NUMBER(12,0) NOT NULL,
"LINE_ITEM_ID" NUMBER(3,0) NOT NULL ENABLE,
"PRODUCT_ID" NUMBER(6,0) NOT NULL ENABLE,
"UNIT_PRICE" NUMBER(8,2),
"QUANTITY" NUMBER(8,0),
CONSTRAINT "ORDER_ITEMS_PART_FK" FOREIGN KEY ("ORDER_ID")
REFERENCES "OE"."ORDERS_PART" ("ORDER_ID") ON DELETE CASCADE )
PARTITION BY REFERENCE (ORDER_ITEMS_PART_FK)
;

Note the partitions in the parent table

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDERS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008

We can see that the child table has inherited the same partitions from the parent table

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDER_ITEMS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008

We now insert a new row into the table which leads to the creation of a new partition automatically

SQL> INSERT INTO ORDERS_PART
  2   VALUES
  3   (9999,'17-MAR-15 01.00.00.000000 PM', 'DIRECT',147,5,1000,163,NULL);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDERS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008
SYS_P301

Note at this point the child table still has only 3 partitions and a new partition corresponding to the parent table will only be created when rows are inserted into the child table.

We now insert some rows into the child table – note that the row insertions lead to a new partition being created in the child table corresponding to the parent table.

SQL> INSERT INTO ORDER_ITEMS_PART
  2  VALUES
  3  (9999,1,2289,10,100);

1 row created.

SQL> INSERT INTO ORDER_ITEMS_PART
  2   VALUES
  3  (9999,2,2268,500,1);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDER_ITEMS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008
SYS_P301

TRUNCATE CASCADE

In Oracle 12c we can add the CASCADE option to the TRUNCATE TABLE or ALTER TABLE TRUNCATE PARTITION commands.

The CASCADE option truncates all child tables that reference the parent table, provided the referential constraint was created with the ON DELETE CASCADE option.

The TRUNCATE CASCADE when used at the partition level in a reference partition model will also cascade to the partitions in the child table as shown in the example below.

SQL> alter table orders_part truncate partition SYS_P301 cascade;

Table truncated.


SQL> select count(*) from orders_part partition (SYS_P301);

  COUNT(*)
----------
         0

SQL>  select count(*) from order_items_part partition (SYS_P301);

  COUNT(*)
----------
         0

Multi-Partition Maintenance Operations

In Oracle 12c we can add, truncate or drop multiple partitions as part of a single operation.

In versions prior to 12c, SPLIT and MERGE PARTITION operations could only be carried out on two partitions at a time. If we had a table with, say, 10 partitions that we needed to merge, we had to issue 9 separate DDL statements.

Now with a single command we can split data out into smaller partitions or roll it up into a larger partition.

CREATE TABLE sales
( prod_id       NUMBER(6)
, cust_id       NUMBER
, time_id       DATE
, channel_id    CHAR(1)
, promo_id      NUMBER(6)
, quantity_sold NUMBER(3)
, amount_sold   NUMBER(10,2)
)
PARTITION BY RANGE (time_id)
( PARTITION sales_q1_2014 VALUES LESS THAN (TO_DATE('01-APR-2014','dd-MON-yyyy'))
, PARTITION sales_q2_2014 VALUES LESS THAN (TO_DATE('01-JUL-2014','dd-MON-yyyy'))
, PARTITION sales_q3_2014 VALUES LESS THAN (TO_DATE('01-OCT-2014','dd-MON-yyyy'))
, PARTITION sales_q4_2014 VALUES LESS THAN (TO_DATE('01-JAN-2015','dd-MON-yyyy'))
);


ALTER TABLE sales ADD
PARTITION sales_q1_2015 VALUES LESS THAN (TO_DATE('01-APR-2015','dd-MON-yyyy')),
PARTITION sales_q2_2015 VALUES LESS THAN (TO_DATE('01-JUL-2015','dd-MON-yyyy')),
PARTITION sales_q3_2015 VALUES LESS THAN (TO_DATE('01-OCT-2015','dd-MON-yyyy')),
PARTITION sales_q4_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','dd-MON-yyyy'));


SQL>  ALTER TABLE sales MERGE PARTITIONS sales_q1_2015,sales_q2_2015,sales_q3_2015,sales_q4_2015  INTO PARTITION sales_2015;

Table altered.

SQL>  ALTER TABLE sales SPLIT PARTITION sales_2015 INTO
  2  (PARTITION sales_q1_2015 VALUES LESS THAN (TO_DATE('01-APR-2015','dd-MON-yyyy')),
  3  PARTITION sales_q2_2015 VALUES LESS THAN (TO_DATE('01-JUL-2015','dd-MON-yyyy')),
  4  PARTITION sales_q3_2015 VALUES LESS THAN (TO_DATE('01-OCT-2015','dd-MON-yyyy')),
  5  PARTITION sales_q4_2015);

Table altered.
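Truncating or dropping several partitions also works as a single DDL statement in 12c – a hedged sketch against the SALES table above (syntax per the 12c multi-partition maintenance feature):

```sql
-- Truncate two quarters in one statement
ALTER TABLE sales TRUNCATE PARTITIONS sales_q1_2014, sales_q2_2014;

-- Drop two quarters in one statement
ALTER TABLE sales DROP PARTITIONS sales_q3_2014, sales_q4_2014;
```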

Partial Indexing

In Oracle 12c we can now index only certain partitions of a table while leaving the other partitions unindexed. For example we may want the recent partitions, which are subject to lots of OLTP-type operations, to have no indexes in order to speed up insert activity, while the older partitions are subject to DSS-type queries and would benefit from indexing.

We can turn indexing on or off at the table level and then enable or disable it selectively at the partition level.

Have a look at the example below.

CREATE TABLE "SH"."SALES_12C"
(
"PROD_ID" NUMBER NOT NULL ENABLE,
"CUST_ID" NUMBER NOT NULL ENABLE,
"TIME_ID" DATE NOT NULL ENABLE,
"CHANNEL_ID" NUMBER NOT NULL ENABLE,
"PROMO_ID" NUMBER NOT NULL ENABLE,
"QUANTITY_SOLD" NUMBER(10,2) NOT NULL ENABLE,
"AMOUNT_SOLD" NUMBER(10,2) NOT NULL ENABLE
) 
TABLESPACE "EXAMPLE"
INDEXING OFF
PARTITION BY RANGE ("TIME_ID")
(PARTITION "SALES_1995"  VALUES LESS THAN (TO_DATE(' 1996-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1996"  VALUES LESS THAN (TO_DATE(' 1997-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1997"  VALUES LESS THAN (TO_DATE(' 1998-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1998"  VALUES LESS THAN (TO_DATE(' 1999-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1999"  VALUES LESS THAN (TO_DATE(' 2000-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_2000"  VALUES LESS THAN (TO_DATE(' 2001-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) INDEXING ON,
PARTITION "SALES_2001"  VALUES LESS THAN (TO_DATE(' 2002-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) INDEXING ON,
PARTITION "SALES_2002"  VALUES LESS THAN (TO_DATE(' 2003-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) INDEXING ON
 )
;

Create a local partitioned index on the table and note the size of the local index.

SQL> CREATE INDEX SALES_12C_IND ON SALES_12C (TIME_ID) LOCAL;

Index created.


SQL> SELECT SUM(BYTES)/1048576 FROM USER_SEGMENTS WHERE SEGMENT_NAME='SALES_12C_IND';

SUM(BYTES)/1048576
------------------
                32

We drop the index and create the same index, but this time as a partial index. Since the index has only been created on a few partitions of the table and not the entire table, it is half the size of the original index.

SQL> CREATE INDEX SALES_12C_IND ON SALES_12C (TIME_ID) LOCAL INDEXING PARTIAL;

Index created.

SQL> SELECT SUM(BYTES)/1048576 FROM USER_SEGMENTS WHERE SEGMENT_NAME='SALES_12C_IND';

SUM(BYTES)/1048576
------------------
                16

We can see that for the partitions where indexing is not enabled, the index has been created as UNUSABLE.

SQL> SELECT PARTITION_NAME,STATUS FROM USER_IND_PARTITIONS WHERE INDEX_NAME='SALES_12C_IND';

PARTITION_NAME                 STATUS
------------------------------ --------
SALES_2002                     USABLE
SALES_2001                     USABLE
SALES_2000                     USABLE
SALES_1999                     UNUSABLE
SALES_1998                     UNUSABLE
SALES_1997                     UNUSABLE
SALES_1996                     UNUSABLE
SALES_1995                     UNUSABLE

Note the difference in the EXPLAIN PLAN between two queries which access different partitions of the same table – one benefits from the local partial index while the other performs a full table scan.

SQL>  EXPLAIN PLAN FOR
  2  SELECT SUM(quantity_sold) from sales_12c
  3  where time_id < '01-JAN-97';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2557626605

-------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |           |     1 |    11 |  1925   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE           |           |     1 |    11 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR|           |   472 |  5192 |  1925   (1)| 00:00:01 |     1 |   KEY |
|*  3 |    TABLE ACCESS FULL      | SALES_12C |   472 |  5192 |  1925   (1)| 00:00:01 |     1 |   KEY |





SQL>  EXPLAIN PLAN FOR
  2   SELECT SUM(quantity_sold) from sales_12c
  3  where time_id='01-JAN-97';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 2794067059

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                      | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                               |               |     1 |    22 |     2   (0)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                                |               |     1 |    22 |            |          |       |       |
|   2 |   VIEW                                         | VW_TE_2       |     2 |    26 |     2   (0)| 00:00:01 |       |       |
|   3 |    UNION-ALL                                   |               |       |       |            |          |       |       |
|*  4 |     FILTER                                     |               |       |       |            |          |       |       |
|   5 |      PARTITION RANGE SINGLE                    |               |     1 |    22 |     1   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|   6 |       TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| SALES_12C     |     1 |    22 |     1   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|*  7 |        INDEX RANGE SCAN                        | SALES_12C_IND |     1 |       |     1   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|*  8 |     FILTER                                     |               |       |       |            |          |       |       |
|   9 |      PARTITION RANGE SINGLE                    |               |     1 |    22 |     2   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|* 10 |       TABLE ACCESS FULL                        | SALES_12C     |     1 |    22 |     2   (0)| 00:00:01 |KEY(AP)|KEY(AP)|


--------------------------------------------------------------------------------------------------------------------------------

Note the new columns INDEXING and DEF_INDEXING in the data dictionary views

SQL> select def_indexing from user_part_tables where table_name='SALES_12C';

DEF
---
OFF


SQL> select indexing from user_indexes where index_name='SALES_12C_IND';

INDEXIN
-------
PARTIAL
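Indexing can also be switched per partition after table creation – a hedged sketch; after turning indexing on for a partition, the corresponding partition of a partial local index may need to be rebuilt before it becomes usable:

```sql
-- Enable indexing for a previously unindexed partition
ALTER TABLE sales_12c MODIFY PARTITION sales_1999 INDEXING ON;

-- Rebuild that partition of the partial local index
ALTER INDEX sales_12c_ind REBUILD PARTITION sales_1999;
```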

Asynchronous Global Index Maintenance

In earlier versions, operations like TRUNCATE or DROP PARTITION on even a single partition would render the global indexes unusable, and the indexes had to be rebuilt before the application could use them.

Now when we issue the same DROP or TRUNCATE partition commands we can use the UPDATE INDEXES clause and this maintains the global indexes leaving them in a USABLE state.

Global index maintenance is now deferred and is performed by a DBMS_SCHEDULER job called SYS.PMO_DEFERRED_GIDX_MAINT_JOB, which is scheduled to run daily at 2:00 AM.

We can also use the CLEANUP_GIDX procedure in the DBMS_PART package to clean up the global indexes on demand.

A new column ORPHANED_ENTRIES in the DBA/ALL/USER_INDEXES and *_IND_PARTITIONS views tracks whether a global index (partition) contains stale entries caused by DROP or TRUNCATE PARTITION operations.

Let us look at an example. Note the important point that the global index is left in a USABLE state even after we perform a TRUNCATE operation on the partitioned table.

SQL>  alter table sales_12c truncate partition SALES_2000 UPDATE INDEXES;

Table truncated.

SQL> select distinct status from user_ind_partitions;

STATUS
--------
USABLE


SQL> select partition_name, ORPHANED_ENTRIES from user_ind_partitions
  2  where index_name='SALES_GIDX';

PARTITION_NAME                 ORP
------------------------------ ---
SYS_P348                       YES
SYS_P347                       YES
SYS_P346                       YES
SYS_P345                       YES
SYS_P344                       YES
SYS_P343                       YES
SYS_P342                       YES
SYS_P341                       YES



SQL> exec dbms_part.cleanup_gidx('SH','SALES_12C');

PL/SQL procedure successfully completed.

SQL> select partition_name, ORPHANED_ENTRIES from user_ind_partitions
  2  where index_name='SALES_GIDX';

PARTITION_NAME                 ORP
------------------------------ ---
SYS_P341                       NO
SYS_P342                       NO
SYS_P343                       NO
SYS_P344                       NO
SYS_P345                       NO
SYS_P346                       NO
SYS_P347                       NO
SYS_P348                       NO
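If we do not want to wait for the nightly job, the cleanup can also be triggered per index – a hedged sketch using the index name from the example above:

```sql
-- Run the deferred global index maintenance job immediately
EXEC DBMS_SCHEDULER.RUN_JOB('SYS.PMO_DEFERRED_GIDX_MAINT_JOB');

-- Or clean up the orphaned entries of a single global index
ALTER INDEX sales_gidx COALESCE CLEANUP;
```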

PSU Patch Deployment using EM12c


The Patch Deployment feature in EM12c can greatly help in automating the rollout of patches when we have to deploy a patch on a large number of targets – this significantly reduces both the time and complexity involved in the process.

Let us look at an example of deploying the JAN 2015 PSU patch using EM12c.

We first need to upload the JAN 2015 PSU patch 19769480 as well as OPatch version 12.1.0.1.6 to the EM12c Software Library – as we are operating in Offline Patching mode.

Note that we have to upload the patch metadata file as well, along with the patch itself.

Click on the 19769480 link in the Patch Name column.

We will add the patch to a new patch plan.

We can deploy the PSU patch on all available hosts with 12.1.0.2 Oracle databases – in this example we are deploying the PSU to just a single host.

After the patch plan has been created, we edit it via the Patches & Updates menu.

The patch deployment can be In Place or Out of Place. In this case we will apply the PSU patch to the existing Oracle Home. Provide the required Normal and Privileged Credentials and validate the same.

We then need to run an Analyze of the PSU patch. The patch is staged and checks are performed to determine whether all the patch prerequisites are met.

After the patch has been successfully analyzed, we can see that it is now ready for deployment.

Review the patch plan and then click on the Deploy button.

Since the PSU patch deployment requires a database outage, we can schedule the patch to be deployed at a specific time or have it start immediately.

While the patch deployment is in progress we can view the different actions performed at each step, giving very good visibility of the patch application as it proceeds.

After the PSU patch application we can see that the Post SQL script is applied while the database is started in Upgrade Mode.

A blackout is created automatically as part of the patch deployment and is cleared as one of the last steps in the patch deployment plan.

If we select the relevant 12.1.0.2 Oracle Home in the Targets menu, we can see under the Patches Applied tab that patch 19769480 has been successfully applied, along with the various bugs fixed by this PSU patch.

12.1.0.2 Multitenant Database New Features


Here's a quick look at some of the new features introduced in 12.1.0.2 around Pluggable and Container databases.

PDB CONTAINERS Clause

Using the CONTAINERS clause, from the root container we can issue a query which selects or aggregates data across multiple pluggable databases.

For example, each pluggable database can contain data for a specific geographic region, and we can issue a query from the root container which aggregates the data from all the individual regions.

The requirement is that we create an empty table in the root container with just the structure of the tables contained in the PDBs.

In this example we have a table called MYOBJECTS and the pluggable databases are DEV1 and DEV2.

Each pluggable database has its own copy of the MYOBJECTS table.

We have a common user C##USER who owns the MYOBJECTS table in all the pluggable databases.
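The setup can be sketched roughly as follows – a hedged outline, where the password, the granted privileges and the use of DBA_OBJECTS as the source of the table structure are illustrative:

```sql
-- From the root container: a common user, visible in all containers
CREATE USER c##user IDENTIFIED BY Secret#123 CONTAINER = ALL;
GRANT CREATE SESSION, CREATE TABLE, UNLIMITED TABLESPACE
  TO c##user CONTAINER = ALL;

-- Structure-only (empty) copy of the table in the root container
CREATE TABLE c##user.myobjects AS
  SELECT * FROM dba_objects WHERE 1 = 0;
```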


SQL> alter session set container=dev1;

Session altered.

SQL> select count(*) from myobjects
  2  where object_type='TABLE';

  COUNT(*)
----------
      2387

SQL> alter session set container=dev2;

Session altered.

SQL> select count(*) from myobjects
  2  where object_type='TABLE';

  COUNT(*)
----------
      2350


Now connect to the root container. We are able to issue a query which aggregates data from both Pluggable databases – DEV1 and DEV2.

Note that the root container also has a table called MYOBJECTS – but with no rows.



SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> select con_id, name from v$pdbs;

    CON_ID NAME
---------- ------------------------------
         2 PDB$SEED
         3 DEV1
         4 DEV2


SQL> select count(*) from myobjects;

  COUNT(*)
----------
         0


SQL> select count(*) from containers ( myobjects)
  2  where object_type='TABLE'
  3  and con_id in (3,4);

  COUNT(*)
----------
      4737

PDB Subset Cloning

12.1.0.2 extends database cloning so that we can clone just a subset of a source database. The USER_TABLESPACES clause allows us to specify which tablespaces should be available in the new cloned pluggable database.

In this example the source pluggable database (DEV1) has application data located in two tablespaces – USERS and TEST_DATA.

The requirement is to create a clone of the DEV1 pluggable database, but the target database only requires the tables contained in the TEST_DATA tablespace.

This would be useful in a case where we are migrating data from a non-CDB database which contains multiple schemas and we perform some kind of schema consolidation where each schema is self-contained in its own pluggable database.

Note that the MYOBJECTS table is contained in the USERS tablespace and we are creating a new tablespace TEST_DATA which will contain the MYTABLES table. The cloned database only requires the TEST_DATA tablespace.

SQL> alter session set container=dev1;

Session altered.


SQL> select tablespace_name from dba_tables where table_name='MYOBJECTS';

TABLESPACE_NAME
------------------------------
USERS

SQL> select count(*) from system.myobjects;

  COUNT(*)
----------
     90922


SQL> create tablespace test_data
  2  datafile
  3  '/oradata/cdb1/dev1/dev1_test_data01.dbf'
  4  size 50m;

Tablespace created.

SQL> create table system.mytables
  2  tablespace test_data
  3  as select * from dba_tables;

Table created.

SQL> select file_name, tablespace_name from dba_data_files;

FILE_NAME                                TABLESPACE_NAME
---------------------------------------- ------------------------------
/oradata/cdb1/dev1/system01.dbf          SYSTEM
/oradata/cdb1/dev1/sysaux01.dbf          SYSAUX
/oradata/cdb1/dev1/dev1_users01.dbf      USERS
/oradata/cdb1/dev1/dev1_test_data01.dbf  TEST_DATA

We now will create the clone database – DEV3 using DEV1 as the source. Note the USER_TABLESPACES clause which defines the tablespaces which we want to be part of the cloned pluggable database.


SQL> ! mkdir /oradata/cdb1/dev3/

SQL> conn / as sysdba
Connected.

SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> CREATE PLUGGABLE DATABASE dev3 FROM dev1
FILE_NAME_CONVERT = ('/oradata/cdb1/dev1/', '/oradata/cdb1/dev3/')
USER_TABLESPACES=('TEST_DATA')  ;

Pluggable database created.

SQL> alter pluggable database dev3 open;

Pluggable database altered.

If we connect to the DEV3 database we can see the list of data files that the PDB comprises.

We can see that the data file which belongs to the USERS tablespace has the MISSING keyword included in its name. While we can now select from tables which were contained in the TEST_DATA tablespace on the source (like MYTABLES), we obviously cannot access tables (like MYOBJECTS) which existed in tablespaces that were not part of the USER_TABLESPACES clause of the CREATE PLUGGABLE DATABASE command.

To clean up the database we can now drop the other tablespaces like USERS which are not required in the cloned database.


SQL> alter session set container=dev3;

Session altered.


SQL> select file_name, tablespace_name from dba_data_files;


FILE_NAME                                                              TABLESPACE_NAME
---------------------------------------------------------------------- ------------------------------
/oradata/cdb1/dev3/system01.dbf                                        SYSTEM
/oradata/cdb1/dev3/sysaux01.dbf                                        SYSAUX
/u01/app/oracle/product/12.1.0.2/dbs/MISSING00017                      USERS
/oradata/cdb1/dev3/dev1_test_data01.dbf                                TEST_DATA


SQL> select count(*) from system.mytables;

  COUNT(*)
----------
      2339

SQL> select count(*) from system.myobjects;
select count(*) from system.myobjects
                            *
ERROR at line 1:
ORA-00376: file 21 cannot be read at this time
ORA-01111: name for data file 21 is unknown - rename to correct file
ORA-01110: data file 21: '/u01/app/oracle/product/12.1.0.2/dbs/MISSING00021'



SQL> alter database default tablespace test_data;

Database altered.

SQL> drop tablespace users including contents and datafiles;

Tablespace dropped.

PDB Metadata Clone

There is also an option to create a clone of a pluggable database with just the structure or definition of the source database, but without any user or application data in its tables and indexes.

This feature can help in the rapid provisioning of test or development environments where just the structure of the production database is required and after the pluggable database has been created it will be populated with some test data.

In this example we are creating the DEV4 pluggable database which just has the data dictionary and metadata of the source DEV1 database. Note the use of the NO DATA clause.


SQL> conn / as sysdba
Connected.

SQL> ! mkdir /oradata/cdb1/dev4

SQL> CREATE PLUGGABLE DATABASE dev4 FROM dev1
  2  FILE_NAME_CONVERT = ('/oradata/cdb1/dev1/', '/oradata/cdb1/dev4/')
  3  NO DATA;

Pluggable database created.

SQL> alter pluggable database dev4 open;

Pluggable database altered.

SQL> alter session set container=dev4;

Session altered.

SQL>  select count(*) from system.myobjects;

  COUNT(*)
----------
         0

SQL> select count(*) from system.mytables;

  COUNT(*)
----------
         0


SQL> select file_name, tablespace_name from dba_data_files;

FILE_NAME                                                              TABLESPACE_NAME
---------------------------------------------------------------------- ------------------------------
/oradata/cdb1/dev4/system01.dbf                                        SYSTEM
/oradata/cdb1/dev4/sysaux01.dbf                                        SYSAUX
/oradata/cdb1/dev4/dev1_users01.dbf                                    USERS
/oradata/cdb1/dev4/dev1_test_data01.dbf                                TEST_DATA


PDB State Management Across CDB Restart

In Oracle 12c version 12.1.0.1, when we started a CDB, by default all the PDB's except the seed were left in MOUNTED state and we had to issue an explicit ALTER PLUGGABLE DATABASE ALL OPEN command to open all the PDB's.

SQL> startup;
ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             318770288 bytes
Database Buffers          478150656 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.


SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
DEV1                           MOUNTED
DEV2                           MOUNTED
DEV3                           MOUNTED
DEV4                           MOUNTED

SQL> alter pluggable database all open;

Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
DEV1                           READ WRITE
DEV2                           READ WRITE
DEV3                           READ WRITE
DEV4                           READ WRITE

Now in 12.1.0.2 using the SAVE STATE command we can preserve the open mode of a pluggable database (PDB) across multitenant container database (CDB) restarts.

So if a PDB was open in READ WRITE mode when a CDB was shut down, when we restart the CDB all the PDB’s which were in READ WRITE mode when the CDB was shut down will be opened in the same READ WRITE mode automatically without the DBA having to execute the ALTER PLUGGABLE DATABASE ALL OPEN command which was required in the earlier 12c version.

SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> alter pluggable database all save state;

Pluggable database altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             318770288 bytes
Database Buffers          478150656 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
DEV1                           READ WRITE
DEV2                           READ WRITE
DEV3                           READ WRITE
DEV4                           READ WRITE
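The saved states can be inspected via the DBA_PDB_SAVED_STATES view, and the DISCARD STATE clause reverts a PDB to the old behaviour. A quick sketch (DEV1 here is just the example PDB from above):

```sql
-- Which PDBs currently have a saved open mode
SELECT con_name, state FROM dba_pdb_saved_states;

-- Stop preserving the open mode for a single PDB (DEV1);
-- it will open in MOUNTED state after the next CDB restart
ALTER PLUGGABLE DATABASE dev1 DISCARD STATE;
```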

PDB Remote Clone

In 12.1.0.2 we can now create a PDB from a non-CDB source by cloning it over a database link. This feature further enhances the rapid provisioning of pluggable databases.

In non-CDB:

SQL> grant create pluggable database to system;

Grant succeeded.

In CDB root – create a database link to the non-CDB:

SQL> create database link non_cdb_link
  2  connect to system identified by oracle using 'upgr';

Database link created.

SQL> select * from dual@non_cdb_link;

D
-
X

Now shut down the non-CDB and open it in READ ONLY mode.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area  826277888 bytes
Fixed Size                  2929792 bytes
Variable Size             322964352 bytes
Database Buffers          494927872 bytes
Redo Buffers                5455872 bytes
Database mounted.

SQL> alter database open read only;

Database altered.


Create the pluggable database DEV5 from the non-CDB source using the database link we just created.


CREATE PLUGGABLE DATABASE dev5 FROM dev1@non_cdb_link
FILE_NAME_CONVERT = ('/oradata/cdb1/dev1/', '/oradata/cdb1/dev5/');

After the PDB has been created we will now need to run the noncdb_to_pdb.sql script and then open the PDB.


SQL> alter session set container=dev5;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql


SQL> alter pluggable database open;

Pluggable database altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/oradata/cdb1/undotbs01.dbf
/oradata/cdb1/dev5/system01.dbf
/oradata/cdb1/dev5/sysaux01.dbf
/oradata/cdb1/dev5/users01.dbf
/oradata/cdb1/dev5/aq01.dbf


Oracle 12c Pluggable Database Upgrade


Until very recently I had really believed the marketing hype and sales pitch about how in 12c database upgrades are so much faster and easier than earlier releases – just unplug the PDB from one container and plug it in to another container and bingo you have an upgraded database!

Partly true …. maybe about 20%!

As Mike Dietrich from Oracle Corp. has rightly pointed out on his great blog (http://blogs.oracle.com/upgrade), it is not as straightforward as suggested by the slides which I am sure many of us have seen at various Oracle conferences showcasing Oracle Database 12c.

I tested out the upgrade of a PDB from version 12.1.0.1 to the latest 12c version 12.1.0.2 and here are the steps taken.

Note: If we are upgrading the entire CDB and all the PDB’s the steps would be different.

In this case we are upgrading just one of the pluggable databases to a higher database software version.
 

Run the preupgrd.sql script and pre-upgrade fixup script

 
Connect to the 12.1.0.1 source database and run the preupgrd.sql script.

The source container database is cdb3 and the PDB which we are upgrading is pdb_gavin.

[oracle@edmbr52p5 ~]$ . oraenv
ORACLE_SID = [cdb1] ? cdb3

The Oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1 is /u01/app/oracle
[oracle@edmbr52p5 ~]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Aug 21 10:49:21 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter session set container=pdb_gavin;

Session altered.

SQL> @?/rdbms/admin/preupgrd.sql
Loading Pre-Upgrade Package...
Executing Pre-Upgrade Checks...
Pre-Upgrade Checks Complete.
      ************************************************************

Results of the checks are located at:
 /u01/app/oracle/cfgtoollogs/cdb3/preupgrade/preupgrade.log

Pre-Upgrade Fixup Script (run in source database environment):
 /u01/app/oracle/cfgtoollogs/cdb3/preupgrade/preupgrade_fixups.sql

Post-Upgrade Fixup Script (run shortly after upgrade):
 /u01/app/oracle/cfgtoollogs/cdb3/preupgrade/postupgrade_fixups.sql

      ************************************************************

         Fixup scripts must be reviewed prior to being executed.

      ************************************************************

      ************************************************************
                   ====>> USER ACTION REQUIRED  <<====
      ************************************************************

 The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
                    prior to attempting your upgrade.
            Failure to do so will result in a failed upgrade.


 1) Check Tag:    OLS_SYS_MOVE
    Check Summary: Check if SYSTEM.AUD$ needs to move to SYS.AUD$ before upgrade
    Fixup Summary:
     "Execute olspreupgrade.sql script prior to upgrade."
    +++ Source Database Manual Action Required +++

            You MUST resolve the above error prior to upgrade

      ************************************************************

The execution of the preupgrd.sql script will generate 3 separate files.

1)preupgrade.log
2)preupgrade_fixups.sql
3)postupgrade_fixups.sql

Let us examine the contents of the preupgrade.log file.

Oracle Database Pre-Upgrade Information Tool 08-21-2015 10:50:04
Script Version: 12.1.0.1.0 Build: 006
**********************************************************************
   Database Name:  CDB3
         Version:  12.1.0.1.0
      Compatible:  12.1.0.0.0
       Blocksize:  8192
        Platform:  Linux x86 64-bit
   Timezone file:  V18
**********************************************************************
                          [Renamed Parameters]
                     [No Renamed Parameters in use]
**********************************************************************
**********************************************************************
                    [Obsolete/Deprecated Parameters]
             [No Obsolete or Desupported Parameters in use]
**********************************************************************
                            [Component List]
**********************************************************************
--> Oracle Catalog Views                   [upgrade]  VALID
--> Oracle Packages and Types              [upgrade]  VALID
--> JServer JAVA Virtual Machine           [upgrade]  VALID
--> Oracle XDK for Java                    [upgrade]  VALID
--> Real Application Clusters              [upgrade]  OPTION OFF
--> Oracle Workspace Manager               [upgrade]  VALID
--> OLAP Analytic Workspace                [upgrade]  VALID
--> Oracle Label Security                  [upgrade]  VALID
--> Oracle Database Vault                  [upgrade]  VALID
--> Oracle Text                            [upgrade]  VALID
--> Oracle XML Database                    [upgrade]  VALID
--> Oracle Java Packages                   [upgrade]  VALID
--> Oracle Multimedia                      [upgrade]  VALID
--> Oracle Spatial                         [upgrade]  VALID
--> Oracle Application Express             [upgrade]  VALID
--> Oracle OLAP API                        [upgrade]  VALID
**********************************************************************
           [ Unsupported Upgrade: Tablespace Data Supressed ]
**********************************************************************
**********************************************************************
                          [Pre-Upgrade Checks]
**********************************************************************
ERROR: --> SYSTEM.AUD$ (audit records) Move

    An error occured retrieving a count from SYSTEM.AUD$
    This can happen when the table has already been cleaned up.
    The olspreupgrade.sql script should be re-executed.



WARNING: --> Existing DBMS_LDAP dependent objects

     Database contains schemas with objects dependent on DBMS_LDAP package.
     Refer to the Upgrade Guide for instructions to configure Network ACLs.
     USER APEX_040200 has dependent objects.


**********************************************************************
                      [Pre-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ********* Dictionary Statistics *********
                        *****************************************

Please gather dictionary statistics 24 hours prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
    EXECUTE dbms_stats.gather_dictionary_stats;

^^^ MANUAL ACTION SUGGESTED ^^^

**********************************************************************
                     [Post-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ******** Fixed Object Statistics ********
                        *****************************************

Please create stats on fixed objects two weeks
after the upgrade using the command:
   EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

^^^ MANUAL ACTION SUGGESTED ^^^

**********************************************************************
                   ************  Summary  ************

 1 ERROR exist that must be addressed prior to performing your upgrade.
 2 WARNINGS that Oracle suggests are addressed to improve database performance.
 0 INFORMATIONAL messages messages have been reported.

 After your database is upgraded and open in normal mode you must run
 rdbms/admin/catuppst.sql which executes several required tasks and completes
 the upgrade process.

 You should follow that with the execution of rdbms/admin/utlrp.sql, and a
 comparison of invalid objects before and after the upgrade using
 rdbms/admin/utluiobj.sql

 If needed you may want to upgrade your timezone data using the process
 described in My Oracle Support note 977512.1
                   ***********************************

So as part of the pre-upgrade preparation we execute :

SQL> @?/rdbms/admin/olspreupgrade.sql

and 

SQL>  EXECUTE dbms_stats.gather_dictionary_stats;

Unplug the PDB from the 12.1.0.1 Container Database

SQL>  alter session set container=CDB$ROOT;

Session altered.

SQL> alter pluggable database  pdb_gavin unplug into '/home/oracle/pdb_gavin.xml';

Pluggable database altered

Create the PDB in the 12.1.0.2 Container Database

[oracle@edmbr52p5 ~]$ . oraenv
ORACLE_SID = [cdb2] ? cdb1
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 is /u01/app/oracle

[oracle@edmb]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Aug 21 12:04:10 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT


SQL> create pluggable database pdb_gavin
  2   using '/home/oracle/pdb_gavin.xml'
  3  nocopy
  4  tempfile reuse;

Pluggable database created.

Upgrade the PDB to 12.1.0.2

After the pluggable database has been created in the 12.1.0.2 container, we will open it with the UPGRADE option in order to run the catupgrd.sql database upgrade script.

We can see that we receive some errors, which we can safely ignore as we are in the middle of upgrading the PDB.

SQL> alter pluggable database pdb_gavin open upgrade;

Warning: PDB altered with errors.


SQL> select message, status from pdb_plug_in_violations where type like '%ERR%';

MESSAGE
--------------------------------------------------------------------------------
STATUS
---------
Character set mismatch: PDB character set US7ASCII. CDB character set AL32UTF8.
RESOLVED

PDB's version does not match CDB's version: PDB's version 12.1.0.1.0. CDB's vers
ion 12.1.0.2.0.
PENDING

We now run the catctl.pl Perl script, specifying the PDB name (if we were upgrading multiple PDBs here we would separate each PDB name with a comma). Note that we are also running the upgrade in parallel.

[oracle@edm ~]$ cd $ORACLE_HOME/rdbms/admin
[oracle@edm admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c "PDB_GAVIN" -n 4 -l /tmp catupgrd.sql

Argument list for [catctl.pl]
SQL Process Count     n = 4
SQL PDB Process Count N = 0
Input Directory       d = 0
Phase Logging Table   t = 0
Log Dir               l = /tmp
Script                s = 0
Serial Run            S = 0
Upgrade Mode active   M = 0
Start Phase           p = 0
End Phase             P = 0
Log Id                i = 0
Run in                c = PDB_GAVIN
Do not run in         C = 0
Echo OFF              e = 1
No Post Upgrade       x = 0
Reverse Order         r = 0
Open Mode Normal      o = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0
Display Phases        y = 0
Child Process         I = 0

catctl.pl version: 12.1.0.2.0
Oracle Base           = /u01/app/oracle

Analyzing file catupgrd.sql
Log files in /tmp
catcon: ALL catcon-related output will be written to /tmp/catupgrd_catcon_19456.lst
catcon: See /tmp/catupgrd*.log files for output generated by scripts
catcon: See /tmp/catupgrd_*.lst files for spool files, if any
Number of Cpus        = 8
Parallel PDB Upgrades = 2
SQL PDB Process Count = 2
SQL Process Count     = 4

[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
PDB1_1
PDB_GAVIN
PDB Inclusion:[PDB_GAVIN] Exclusion:[]

Starting
[/u01/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl catctl.pl -c 'PDB_GAVIN' -n 2 -l /tmp -I -i pdb_gavin catupgrd.sql]

Argument list for [catctl.pl]
SQL Process Count     n = 2
SQL PDB Process Count N = 0
Input Directory       d = 0
Phase Logging Table   t = 0
Log Dir               l = /tmp
Script                s = 0
Serial Run            S = 0
Upgrade Mode active   M = 0
Start Phase           p = 0
End Phase             P = 0
Log Id                i = pdb_gavin
Run in                c = PDB_GAVIN
Do not run in         C = 0
Echo OFF              e = 1
No Post Upgrade       x = 0
Reverse Order         r = 0
Open Mode Normal      o = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0
Display Phases        y = 0
Child Process         I = 1

catctl.pl version: 12.1.0.2.0
Oracle Base           = /u01/app/oracle

Analyzing file catupgrd.sql
Log files in /tmp
catcon: ALL catcon-related output will be written to /tmp/catupgrdpdb_gavin_catcon_19562.lst
catcon: See /tmp/catupgrdpdb_gavin*.log files for output generated by scripts
catcon: See /tmp/catupgrdpdb_gavin_*.lst files for spool files, if any
Number of Cpus        = 8
SQL PDB Process Count = 2
SQL Process Count     = 2

[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
PDB1_1
PDB_GAVIN
PDB Inclusion:[PDB_GAVIN] Exclusion:[]

------------------------------------------------------
Phases [0-73]
Container Lists Inclusion:[PDB_GAVIN] Exclusion:[]
Serial   Phase #: 0 Files: 1     Time: 15s   PDB_GAVIN
Serial   Phase #: 1 Files: 5     Time: 107s  PDB_GAVIN
Restart  Phase #: 2 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #: 3 Files: 18    Time: 40s   PDB_GAVIN
Restart  Phase #: 4 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #: 5 Files: 5     Time: 43s   PDB_GAVIN
Serial   Phase #: 6 Files: 1     Time: 18s   PDB_GAVIN
Serial   Phase #: 7 Files: 4     Time: 11s   PDB_GAVIN
Restart  Phase #: 8 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #: 9 Files: 62    Time: 110s  PDB_GAVIN
Restart  Phase #:10 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:11 Files: 1     Time: 28s   PDB_GAVIN
Restart  Phase #:12 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:13 Files: 91    Time: 8s    PDB_GAVIN
Restart  Phase #:14 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:15 Files: 111   Time: 15s   PDB_GAVIN
Restart  Phase #:16 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:17 Files: 3     Time: 2s    PDB_GAVIN
Restart  Phase #:18 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:19 Files: 32    Time: 43s   PDB_GAVIN
Restart  Phase #:20 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:21 Files: 3     Time: 11s   PDB_GAVIN
Restart  Phase #:22 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:23 Files: 23    Time: 75s   PDB_GAVIN
Restart  Phase #:24 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:25 Files: 11    Time: 25s   PDB_GAVIN
Restart  Phase #:26 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:27 Files: 1     Time: 1s    PDB_GAVIN
Restart  Phase #:28 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:30 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:31 Files: 257   Time: 29s   PDB_GAVIN
Serial   Phase #:32 Files: 1     Time: 0s    PDB_GAVIN
Restart  Phase #:33 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:34 Files: 1     Time: 3s    PDB_GAVIN
Restart  Phase #:35 Files: 1     Time: 0s    PDB_GAVIN
Restart  Phase #:36 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:37 Files: 4     Time: 62s   PDB_GAVIN
Restart  Phase #:38 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:39 Files: 13    Time: 33s   PDB_GAVIN
Restart  Phase #:40 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:41 Files: 10    Time: 5s    PDB_GAVIN
Restart  Phase #:42 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:43 Files: 1     Time: 7s    PDB_GAVIN
Restart  Phase #:44 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:45 Files: 1     Time: 1s    PDB_GAVIN
Serial   Phase #:46 Files: 1     Time: 0s    PDB_GAVIN
Restart  Phase #:47 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:48 Files: 1     Time: 71s   PDB_GAVIN
Restart  Phase #:49 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:50 Files: 1     Time: 9s    PDB_GAVIN
Restart  Phase #:51 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:52 Files: 1     Time: 41s   PDB_GAVIN
Restart  Phase #:53 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:54 Files: 1     Time: 51s   PDB_GAVIN
Restart  Phase #:55 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:56 Files: 1     Time: 36s   PDB_GAVIN
Restart  Phase #:57 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:58 Files: 1     Time: 37s   PDB_GAVIN
Restart  Phase #:59 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:60 Files: 1     Time: 48s   PDB_GAVIN
Restart  Phase #:61 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:62 Files: 1     Time: 112s  PDB_GAVIN
Restart  Phase #:63 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:64 Files: 1     Time: 1s    PDB_GAVIN
Serial   Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0/dbhome_1/lib; export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -upgrade_mode_only -pdbs PDB_GAVIN > /tmp/catupgrdpdb_gavin_datapatch_upgrade.log 2> /tmp/catupgrdpdb_gavin_datapatch_upgrade.err
returned from sqlpatch
    Time: 3s    PDB_GAVIN
Serial   Phase #:66 Files: 1     Time: 1s    PDB_GAVIN
Serial   Phase #:68 Files: 1     Time: 12s   PDB_GAVIN
Serial   Phase #:69 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0/dbhome_1/lib; export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -pdbs PDB_GAVIN > /tmp/catupgrdpdb_gavin_datapatch_normal.log 2> /tmp/catupgrdpdb_gavin_datapatch_normal.err
returned from sqlpatch
    Time: 3s    PDB_GAVIN
Serial   Phase #:70 Files: 1     Time: 30s   PDB_GAVIN
Serial   Phase #:71 Files: 1     Time: 4s    PDB_GAVIN
Serial   Phase #:72 Files: 1     Time: 3s    PDB_GAVIN
Serial   Phase #:73 Files: 1     Time: 0s    PDB_GAVIN

Grand Total Time: 1155s PDB_GAVIN

LOG FILES: (catupgrdpdb_gavin*.log)

Upgrade Summary Report Located in:
/u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/cdb1/upgrade/upg_summary.log

Total Upgrade Time:          [0d:0h:19m:15s]

     Time: 1156s For PDB(s)

Grand Total Time: 1156s

LOG FILES: (catupgrd*.log)

Grand Total Upgrade Time:    [0d:0h:19m:16s]
[oracle@edmbr52p5 admin]$


Run the post upgrade steps

We then start the PDB and run the post-upgrade steps which includes recompiling all the invalid objects and also gathering fresh statistics on the fixed dictionary objects.

That completes the PDB upgrade – not quite a simple plug and unplug!!

SQL> startup;
Pluggable Database opened.


SQL> @?/rdbms/admin/utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2015-08-21 12:35:42

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.


TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2015-08-21 12:36:02

DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0


Function created.


PL/SQL procedure successfully completed.


Function dropped.

...Database user "SYS", database schema "APEX_040200", user# "98" 12:36:13
...Compiled 0 out of 3014 objects considered, 0 failed compilation 12:36:13
...271 packages
...263 package bodies
...452 tables
...11 functions
...16 procedures
...3 sequences
...457 triggers
...1320 indexes
...211 views
...0 libraries
...6 types
...0 type bodies
...0 operators
...0 index types
...Begin key object existence check 12:36:13
...Completed key object existence check 12:36:13
...Setting DBMS Registry 12:36:13
...Setting DBMS Registry Complete 12:36:13
...Exiting validate 12:36:13

PL/SQL procedure successfully completed.

SQL>

SQL> EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

PL/SQL procedure successfully completed.



SQL> SELECT NAME,OPEN_MODE FROM V$PDBS;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB_GAVIN                      READ WRITE


Wrong Results On Query With Subquery Using OR EXISTS After upgrade to 12.1.0.2


Recently one of my clients encountered an issue with a SQL query which returned no rows in the database that had been upgraded to 12c, but returned rows in the 11g databases which had not yet been upgraded.

The query was


SELECT *
  FROM STORAGE t0
  WHERE ( ( ( ( ( ( (ROWNUM <= 30) AND (t0.BUSINESS_UNIT_ID = 2))   AND (t0.PLCODE = 1001))
                  AND (t0.SM_SERIALNUM = '5500100000149000994'))
                  AND ( (t0.SM_MODDATE IS NULL) OR (t0.SM_MODDATE <= SYSDATE)))
                AND   ( 
                        (t0.DEALER_ID IS NULL)
                         OR 
                        EXISTS   (SELECT t1.CUSTOMER_ID  FROM CUSTOMER_ALL t1 WHERE ( (t1.CUSTOMER_ID = t0.DEALER_ID) AND (t1.CSTYPE <> 'd')))
                        )
        )
        AND (t0.SM_STATUS <> 'b'));

If we added the hint /*+ OPTIMIZER_FEATURES_ENABLE('11.2.0.4') */ to the query, it worked fine.

After a bit of investigation we found that we were possibly hitting this bug:

Bug 18650065 : WRONG RESULTS ON QUERY WITH SUBQUERY USING OR EXISTS

The solution was either to set the hidden parameter below at the session or database level, or to apply patch 18650065, which is now available for download from MOS.

ALTER SESSION SET "_optimizer_null_accepting_semijoin"=FALSE;
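To set it database-wide instead of per session, the same parameter can be set via ALTER SYSTEM. A sketch, assuming the instance is running from an spfile:

```sql
-- Requires SYSDBA; SCOPE=BOTH changes the running instance and also
-- persists the setting in the spfile across restarts.
ALTER SYSTEM SET "_optimizer_null_accepting_semijoin" = FALSE SCOPE=BOTH;
```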

Patch 18650065 can be applied online in both non-RAC and RAC environments.

For Non-RAC Environments 

$ opatch apply online -connectString orcl:SYS:SYS_PASSWORD

For RAC Environments

2 node RAC example:

$ opatch apply online -connectString orcl1:SYS:SYS_PASSWORD:node1, orcl2:SYS:SYS_PASSWORD:node2



Oracle 12c RMAN DUPLICATE Database


In earlier versions the RMAN DUPLICATE database command used a push-based method. One of the new features in Oracle 12c is that it has been changed to a pull-based method, which has many advantages. Let us note the difference between the two methods.

In the earlier push-based method, the source database transfers the required database files to the auxiliary database as image copies. Say we had a tablespace with a 10 GB data file that contained only about 1 GB of data. Since it is an image copy, the entire 10 GB data file still had to be copied over the network.

Now in Oracle 12c, RMAN performs active database duplication using backup sets rather than image copies. Taking the earlier example of a tablespace with a 10 GB data file but only about 1 GB of occupied data, only roughly 1 GB is now copied over the network as a backup set, not the entire 10 GB data file.

With backup sets there are a number of advantages.

So here is what is new in the Oracle 12c DUPLICATE … FROM ACTIVE DATABASE command, and these new features certainly provide advantages over the earlier pre-12c method.

  • RMAN can employ unused block compression while creating backups, thus reducing the size of backups transported over the network (USING BACKUPSET and USING COMPRESSED BACKUPSET clauses).
  • Using multi-section backups, backup sets can be created in parallel on the source database (SECTION SIZE clause).
  • In addition we can also encrypt backup sets created on the source database via the SET ENCRYPTION command.

Let us look at an example using the pull-based method to create a duplicate database using RMAN backupsets from an active database.

Let us assume the source database name is BSPRD and we are creating a clone of this database.

So what preparation work do we have to do for this RMAN DUPLICATE to work? The same as in 11g – this part has not changed.

The first and most important thing to do is the network part of the work.

Add a static entry in the listener.ora on the target, and add a TNS alias to the tnsnames.ora file on both the source and target database servers.
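As an illustrative sketch only – the host name, Oracle home path and alias below are placeholders, not values taken from this environment:

```text
# listener.ora on the target host: static entry for the auxiliary instance,
# needed because the instance is not yet registered with the listener
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = bsprd)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
    )
  )

# tnsnames.ora on BOTH the source and target hosts
BSPRD_DUP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = bsprd))
  )
```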

Then copy the password file from the source to the target, renaming the file on the target if the ORACLE_SID there differs from the source.

Create any required directories on the destination host if the directory paths on the source and target differ – for example, we may need to create a directory for AUDIT_FILE_DEST on the target.

If the ASM disk group names are different then we may have to connect via asmcmd on the target and create any directories we require.

Also don’t forget the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters in the target database parameter file if the directory structure is different on the target as compared to the source.
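For illustration, assuming the source data files live in a disk group named +DATA_SRC and the target uses +OEM_DATA (the disk group names are made up for this sketch), the auxiliary init.ora entries might look like:

```text
# init.ora of the auxiliary instance: map source paths to target paths
*.db_file_name_convert='+DATA_SRC/BSPRD/','+OEM_DATA/BSPRD/'
*.log_file_name_convert='+DATA_SRC/BSPRD/','+OEM_DATA/BSPRD/'
```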

When using the SECTION SIZE parameter take into account the sizes of the data files and the parallelism we are going to use.

In the example shown, RMAN parallelism has been set to 4 and two of the bigger data files are 2.2 GB and 1.5 GB, so I have used a section size of 500 MB. A 2.2 GB file split into 500 MB sections gives five sections, which matches the 'restoring section n of 5' messages in the log below.

Note also that when we create the duplicate database via RMAN, we cannot just issue the "TARGET /" command; we have to explicitly provide the user name and password as well as the TNS alias for both the target database and the auxiliary database.

Like for example:

rman target sys/sys_passwd@bsprd auxiliary sys/sys_passwd@bsprd_dup

Note the RMAN DUPLICATE DATABASE command – it includes the USING BACKUPSET and SECTION SIZE clauses.

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Aug 27 05:27:22 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: BSPRD (DBID=3581332368)
connected to auxiliary database: BSPRD (not mounted)

RMAN> duplicate target database to bsprd from active database
2> using backupset
3> section size 500m;

Note the 4 auxiliary channels being created because we have configured RMAN with a parallelism of 4.

Starting Duplicate Db at 27-AUG-15
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19714 device type=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: SID=19713 device type=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: SID=6 device type=DISK
allocated channel: ORA_AUX_DISK_4
channel ORA_AUX_DISK_4: SID=2820 device type=DISK
current log archived

The SYSTEM tablespace data file was about 2.2 GB in my case. So we can see that RMAN has split this 2.2 GB based on the section size we allocated which was 500 MB. We have 4 auxiliary channels working on ‘sections’ of the single data file in parallel.

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service bsprd
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to +OEM_DATA/BSPRD/DATAFILE/system.302.888816535
channel ORA_AUX_DISK_1: restoring section 1 of 5

channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00001 to +OEM_DATA/BSPRD/DATAFILE/system.302.888816535
channel ORA_AUX_DISK_3: restoring section 2 of 5
channel ORA_AUX_DISK_4: restore complete, elapsed time: 00:00:04


....

....

channel ORA_AUX_DISK_4: using network backup set from service bsprd
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00001 to +OEM_DATA/BSPRD/DATAFILE/system.302.888816535
channel ORA_AUX_DISK_4: restoring section 5 of 5
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:10


GoldenGate 12c (12.2) New Features


At the recent Oracle Open World 2015 conference I was fortunate to attend a series of very informative presentations on Oracle GoldenGate from senior members of the Product Development team.

Among them was the presentation titled GoldenGate 12.2 New Features Deep Dive which is now available for download via the official OOW15 website.

While no official release date was announced for GoldenGate 12.2, the message being communicated was that the release was going to happen 'very soon'.

So while we eagerly wait for the official product release, here are some of the new 12.2 features which we can look forward to.

 

No more usage of the SOURCEDEFS and ASSUMETARGETDEFS parameters – metadata included as part of the trail file

In earlier versions if the structure of the table between the source and target database was different in terms of column names, data types and even column positions (among other things), we had to create a flat file which contained the table definitions and column mapping via the DEFGEN utility. Then we had to transfer this file to the target system.

If we used the parameter ASSUMETARGETDEFS, the assumption was that the internal structure of the target tables was the same as the source – which was not always the case – and we encountered issues.

Now in 12.2, GoldenGate trail files are self-describing. A metadata record called the Table Definition Record (TDR) is written to the trail file before the first occurrence of DML on a particular table, and this TDR contains the table and column definitions such as the column number, data type and column length.

For new installations using the GoldenGate 12.2 software, metadata is automatically populated in trail files by default. For existing installations we can use the parameter FORMAT RELEASE 12.2, after which any SOURCEDEFS or ASSUMETARGETDEFS parameters are no longer required or are ignored.
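For example, in the Extract parameter file of an upgraded installation the trail format can be stated explicitly – a sketch, with a hypothetical trail path:

```text
-- write the trail in the self-describing 12.2 format, so no DEFGEN
-- definitions file needs to be generated and shipped to the target
EXTTRAIL ./dirdat/lt, FORMAT RELEASE 12.2
```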

 

Automatic Heartbeat Table

In earlier versions, one of the recommendations to monitor lag was to create a heartbeat table.

Now in 12.2, there is a built-in mechanism to monitor replication lag. There is a new GGSCI command called ADD HEARTBEATTABLE .

ADD HEARTBEATTABLE will automatically create the heartbeat tables and views, as well as database jobs which update the heartbeat tables every 60 seconds.

One of the views created is called GG_LAG, and it contains columns like INCOMING_LAG, which shows the period of time between the remote database generating a heartbeat and the local database receiving it.

Similarly, to support an Active-Active bi-directional GoldenGate configuration, there is also a column called OUTGOING_LAG, which is the period of time between the local database generating a heartbeat and the remote database receiving it.

The GG_HEARTBEAT table is one of the main tables on which other heartbeat views are built and it will contain lag information for each component – Extract, Pump as well as Replicat. So we can quite easily identify where the bottleneck is when faced with diagnosing a GoldenGate performance issue.

Historical heartbeat and lag information is also maintained in the GG_LAG_HISTORY and GG_HEARTBEAT_HISTORY tables.
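Putting this together, a minimal sketch of enabling and querying the heartbeat (the credential alias here is hypothetical):

```text
GGSCI> DBLOGIN USERIDALIAS oggadmin
GGSCI> ADD HEARTBEATTABLE

-- then, from SQL*Plus in the GoldenGate schema:
SELECT * FROM gg_lag;            -- current incoming/outgoing lag
SELECT * FROM gg_lag_history;    -- historical lag
```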

 

Parameter Files – checkprm, INFO PARAM, GETPARAMINFO

A new utility is available in 12.2 called checkprm which can be used to validate parameter files before they are deployed.

The INFO PARAM command will give us a lot of information about a particular parameter – like what is the default value and what are valid range of values. It is like accessing the online documentation from the GGSCI command line.

When a process such as a Replicat or Extract is running, we can use the SEND [process] GETPARAMINFO command to identify its runtime parameters – not only the parameters included in the process parameter file, but also any other parameters the process has accessed which are not in the parameter file. We are often unaware of the many default parameters a process uses, and this command shows this information in real time while the Extract, Replicat or Manager is up and running.

 

Transparent Integration with Oracle Clusterware

In earlier releases, when we used the Grid Infrastructure Agent (XAG) to provide high availability for Oracle GoldenGate, we had to use AGCTL to manage the GoldenGate instance, for example to stop and start it. Using GGSCI commands to start or stop the Manager could cause issues, so the recommendation was to use only AGCTL and not GGSCI in that case.

Now in 12.2, once the GoldenGate instance has been registered with Oracle Clusterware using AGCTL, we can then continue to use GGSCI to start and stop GoldenGate without concern of any issues arising because AGCTL was not used. A new parameter for the GLOBALS file is now available called XAG_ENABLE.

 

Integration of GoldenGate with Datapump

In earlier releases, when we added new tables to an existing GoldenGate configuration, we had to obtain the CURRENT_SCN from the V$DATABASE view, pass that SCN value to the FLASHBACK_SCN parameter of expdp, and then when we started the Replicat use the AFTERCSN parameter with the same value.

Now in 12.2, ADD TRANDATA or ADD SCHEMATRANDATA will prepare the tables automatically. Oracle Datapump export (expdp) will automatically generate import actions to set the instantiation CSN when each table is imported. We just have to include the new Replicat parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING, which will then filter out any DML or DDL records based on the instantiation CSN of that table.
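A sketch of the relevant Replicat parameter file entries (the group name, credential alias and schema are made up for illustration):

```text
REPLICAT rtest
USERIDALIAS ogguser_tgt
-- skip DML/DDL records older than each table's instantiation CSN
DBOPTIONS ENABLE_INSTANTIATION_FILTERING
MAP sysadm.*, TARGET sysadm.*;
```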

 

Improved Trail File Recovery

In earlier releases if a trail file was missing or corrupt, the Replicat used to abend.

Now in 12.2, if a trail file is corrupted, we can delete it and have it rebuilt by restarting the Extract pump; a missing trail file is likewise regenerated automatically by bouncing the pump process. Replicat will by default filter out duplicate transactions that were already applied from the regenerated trail files.
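For example, assuming the pump group is called PMP and lt000000123 is the corrupt trail file (both names hypothetical):

```text
GGSCI> STOP EXTRACT PMP
$ rm ./dirdat/lt000000123       # remove the corrupt trail file
GGSCI> START EXTRACT PMP        # the pump regenerates the trail on restart
```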

 

Support for INVISIBLE Columns

The new MAPINVISIBLECOLUMNS parameter in 12.2 enables replication support for tables (Oracle database only) which contain INVISIBLE columns.
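A minimal Replicat sketch (the group, alias and table names are hypothetical):

```text
REPLICAT rinv
USERIDALIAS ogguser_tgt
-- also map data into INVISIBLE columns on the target tables
MAPINVISIBLECOLUMNS
MAP hr.employees, TARGET hr.employees;
```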

 

Extended Metrics and Fine-grained Performance Monitoring

Release 12.2 now provides real-time process and thread-level metrics for Extract, Pump and Replicat which can be accessed through RESTful web services. Real-time database statistics for Extract and Replicat, queues, as well as network statistics for the Extract Pump can be accessed using a URL like:

http://<hostname>:<manager port>/mpointsx

The ENABLEMONITORING parameter needs to be included in the GLOBALS file.

The Java application is also available for free download (and can also be modified and customised) via the URL:

https://java.net/projects/oracledi/downloads/download/GoldenGate/OGGPTRK.jar

 

GoldenGate Studio

New in Release 12.2 is GoldenGate Studio – a GUI tool which will enable us to quickly design and deploy GoldenGate solutions. It separates the logical from the physical design and enables us to create a one-click and drag and drop logical design based on business needs without knowing all the details.

It has a concept of Projects and Solutions where one Project could contain a number of solutions and Solution contains one logical design and possibly many physical deployments. Rapid design is enabled with a number of out of the box Solution templates like Cascading, Bi-Directional, Unidirectional, Consolidation etc.

GoldenGate Studio enables us to design once and deploy it to many environments like Dev,Test, QA and Production with one click deployment.

 

GoldenGate Cloud Service

GoldenGate Cloud Service is the public cloud-based offering on a Subscription or Hourly basis.

The GoldenGate Cloud Service provides the delivery mechanisms to move Oracle as well as non-Oracle databases from On Premise to DBaaS – Oracle Database Cloud Service as well as Exadata Cloud Service delivery via GoldenGate. GoldenGate Cloud Service also provides Big Data Cloud Service delivery to Hadoop and NoSQL.

 

Nine Digit Trail File Sequence Length

In 12.2, the default is to create trail files with 9-digit sequence numbers instead of the earlier 6-digit sequence. This allows 1000 times more files per trail – basically 1 billion files per trail!

We can upgrade existing trail files from 6- to 9-digit sequence numbers using a utility called convchk, and there is also backward compatibility support for existing 6-digit sequences via a GLOBALS parameter called TRAIL_SEQLEN_6D.

GoldenGate 12.2 New Feature – Check and validate parameter files using checkprm


In GoldenGate 12.2 we can now validate parameter files before deployment.

There is a new utility called checkprm which can be used for this purpose.

To run the checkprm utility we provide the name of the parameter file and can optionally indicate which process the parameter file belongs to using the COMPONENT keyword.

Let us look at an example.

 

ors-db-01@oracle:omprd1>./checkprm ./dirprm/eomprd1.prm --COMPONENT EXTRACT

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable ORACLE_HOME=/orasw/app/oracle/product/12.1.0/db_1.

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable ORACLE_SID=omprd2.

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable TNS_ADMIN=/orasw/app/oracle/product/12.1.0/db_1/network/admin.

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable NLS_LANG=AMERICAN_AMERICA.AL32UTF8.

(eomprd1.prm) line 13: Parsing error, [DYNAMICRESOLUTION] is deprecated.

(eomprd1.prm) line 22: Parameter [REPORTDETAIL] is not valid for this configuration.

2016-01-21 21:53:13  INFO    OGG-10139  Parameter file ./dirprm/eomprd1.prm:  Validity check: FAIL.


We can see that this parameter file has failed the validation check because we had used the following line in the parameter file, and REPORTDETAIL is no longer supported in 12.2.

STATOPTIONS REPORTDETAIL, RESETREPORTSTATS

We changed the parameter file to include

STATOPTIONS RESETREPORTSTATS

and now run the checkprm utility again. We now see that the validation of the parameter file has completed successfully.


ors-db-01@oracle:BSSTG1>./checkprm ./dirprm/eomprd1.prm

2015-11-18 19:29:45  INFO    OGG-10139  Parameter file ./dirprm/eomprd1.prm:  Validity check: PASS.

Runtime parameter validation is not reflected in the above check.


GoldenGate 12.2 New Feature – INFO and GETPARAMINFO


New in Oracle GoldenGate 12.2 is the ability to get detailed help about the usage of a particular parameter (INFO PARAM), as well as information about the active parameters associated with a running Extract, Replicat or Manager process (GETPARAMINFO).

 

INFO
 
In this example we see all the information about the use of the parameter PORT

GGSCI (qa008 as oggsuser@BSSTG1) 12> info param port

param name : port
description : TCP IP port number for the Manager process
argument : integer
default : 7809
range : 1 – 65535
options :
component(s): MGR
mode(s) : none
platform(s) : all platforms
versions :
database(s) : all supported databases (on the supported platforms).
status : current
mandatory : false
dynamic : false
relations : none

 
GETPARAMINFO
 
In this example we see both the default values used by a running extract as well as the actual parameters which the process is using.

GGSCI (qa008 as oggsuser@BSSTG1) 19> send extract etest getparaminfo

Sending GETPARAMINFO request to EXTRACT ETEST …

GLOBALS

enablemonitoring :

/orasw/app/ogg12.2/dirprm/etest.prm

extract : etest
useridalias : oggsuser_bsstg
logallsupcols :
updaterecordformat : COMPACT
tranlogoptions :
integratedparams : (max_sga_size 2048, parallelism 2)
excludeuser : OGGSUSER
exttrail : ./dirdat/bsstg/test/lt
discardfile : ./dirrpt/etest.dsc
append :
megabytes : 1000
warnlongtrans : 2 hour(s)
checkinterval : 30 minute(s)
reportcount :
every : 15 minute(s)
rate :
statoptions :
resetreportstats :
report :
AT : 23:59
reportrollover :
AT : 00:01
ON : MONDAY
getupdatebefores :
table : TEST.*

Default Values

deletelogrecs :
fetchoptions :
userowid :
usekey :
missingrow : ALLOW
usesnapshot :
uselatestversion :
maxfetchstatements : 100
usediagnostics :
detaileddiagnostics :
diagnosticsonall :
nosuppressduplicates :
flushsecs : 1
passthrumessages :
ptkcapturecachemgr :
ptkcaptureift :
ptkcapturenetwork :
ptkcapturequeuestats :
ptkspstats :
tcpsourcetimer :
tranlogoptions :
bufsize : 1024000
asynctransprocessing : 300
checkpointretentiontime : 7.000000
failovertargetdestid : 0
getctasdml :
minefromsnapshotstby :
usenativeobjsupport :
retrydelay : 60
allocfiles : 500
allowduptargetmap :
binarychars :
checkpointsecs : 10 second(s)
cmdtrace : OFF
dynamicresolution :
eofdelay : 1
eofdelaycsecs : 100
functionstacksize : 200
numfiles : 1000
ptkcapturetablestats :
ptkmaxtables : 100
ptktablepollfrequency : 1
statoptions :
reportfetch :
varwidthnchar :
enableheartbeat :
ptkcaptureprocstats :
ptkmonitorfrequency : 1
use_traildefs :
.

 
GGSCI (qa008 as oggsuser@BSSTG1) 21> send etest getparaminfo tranlogoptions

Sending getparaminfo request to EXTRACT ETEST …

/orasw/app/ogg12.2/dirprm/etest.prm

tranlogoptions :
integratedparams : (max_sga_size 2048, parallelism 2)
excludeuser : OGGSUSER

Default Values

tranlogoptions :
bufsize : 1024000
asynctransprocessing : 300
checkpointretentiontime : 7.000000
failovertargetdestid : 0
getctasdml :
minefromsnapshotstby :
usenativeobjsupport :

Oracle GoldenGate 12.2 New Feature – Integration with Oracle Datapump


In earlier versions when we had to do an Oracle database table instantiation or initial load, we had to perform a number of steps – basically to handle DML changes which were occurring on the source table while the export was in progress.

So we first had to ensure that there were no open or long-running transactions in progress, then obtain the current SCN of the database and pass it to the FLASHBACK_SCN parameter of the Datapump export. After the import was over, we had to use the HANDLECOLLISIONS parameter initially for the Replicat, and also start the Replicat from a particular position in the trail using the AFTERCSN parameter.

Now with Goldengate 12.2, there is tighter integration with Oracle Datapump Export and Import.

The ADD SCHEMATRANDATA command with the PREPARECSN parameter ensures that the Datapump export carries information about the instantiation CSNs for each table in the export. This populates the system tables and views with the instantiation CSNs on import, and the new Replicat parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING will then filter out DML and DDL records based on each table's instantiation CSN.

Let us look at an example of this new 12.2 feature.

We have a table called TESTME in the SYSADM schema which initially has 266448 rows.

Before running the Datapump export, let us ‘prepare’ the tables via the PREPARECSN parameter of the ADD SCHEMATRANDATA command.

GGSCI (pcu008 as oggsuser@BSDIT1) 12> add schematrandata sysadm preparecsn
2015-12-10 06:38:58 INFO OGG-01788 SCHEMATRANDATA has been added on schema sysadm.
2015-12-10 06:38:58 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema sysadm.
2015-12-10 06:38:59 INFO OGG-10154 Schema level PREPARECSN set to mode NOWAIT on schema sysadm.

GGSCI (pcu008 as oggsuser@omqat41) 3> info schematrandata SYSADM
2015-12-13 07:21:55 INFO OGG-06480 Schema level supplemental logging, excluding non-validated keys, is enabled on schema SYSADM.
2015-12-13 07:21:55 INFO OGG-01980 Schema level supplemental logging is enabled on schema SYSADM for all scheduling columns.
2015-12-13 07:21:55 INFO OGG-10462 Schema SYSADM have 571 prepared tables for instantiation.

We run the Datapump export. Note the line :

“FLASHBACK automatically enabled to preserve database integrity.”

pcu008@oracle:BSSTG1>expdp directory=BACKUP_DUMP_DIR dumpfile=testme.dmp tables=sysadm.testme
Export: Release 12.1.0.2.0 – Production on Mon Jan 25 23:45:27 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Username: sys as sysdba
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bitProduction
With the Partitioning, Real Application Clusters, Automatic Storage Management,OLAP,
Advanced Analytics and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYS"."SYS_EXPORT_TABLE_01": sys/******** AS SYSDBA directory=BACKUP_DUMP_DIR dumpfile=testme.dmp tables=sysadm.testme
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 28 MB
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "SYSADM"."TESTME" 26.86 MB 266448 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/home/oracle/backup/testme.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Jan 25 23:46:45 2016 elapsed 0 00:00:49

While the export of the TESTME table is in progress, we will insert 29622 more rows into the table. The table will now have 296070 rows.

SQL> insert into sysadm.testme select * from dba_objects;
29622 rows created.

SQL> select count(*) from sysadm.testme;
COUNT(*)
———-
296070

SQL> commit;
Commit complete.

We perform the import on the target database next. Note the number of rows imported: the dump does not include the 29622 rows which were inserted into the table while the export was in progress.

qat408@oracle:BSSTG1>impdp directory=BACKUP_DUMP_DIR dumpfile=testme.dmp full=y
Import: Release 12.1.0.2.0 – Production on Mon Jan 25 23:51:42 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Username: sys as sysdba
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bitProduction
With the Partitioning, Real Application Clusters, Automatic Storage Management,OLAP,
Advanced Analytics and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
WARNING: possible data loss in character set conversions
Starting "SYS"."SYS_IMPORT_FULL_01": sys/******** AS SYSDBA directory=BACKUP_DUMP_DIR dumpfile=testme.dmp full=y
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SYSADM"."TESTME" 26.86 MB 266448 rows
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Mon Jan 25 23:52:22 2016 elapsed 0 00:00:25

We start the Replicat process on the target – note that we are not positioning the Replicat like we used to do earlier with the AFTERCSN parameter.


GGSCI (qat408 as oggsuser@BSSTG2) 7> start rbsstg1
Sending START request to MANAGER …
REPLICAT RBSSTG1 starting

After starting the Replicat, if we look at its report file we can see that the Replicat process is aware of the SCN (CSN) that was current in the database while the export was in progress, and it knows that any DML or DDL changes after that SCN need to be applied on the target table.

2016-01-25 23:56:59 INFO OGG-10155 Instantiation CSN filtering is enabled on table SYSADM.TESTME at CSN 402,702,624.

If we query the Replicat statistics a while after the Replicat has started, we can see that it has applied the insert statement (29622 rows) which was run while the export of the table was in progress.

GGSCI (qat408 as oggsuser@BSSTG1) 12> stats rbsstg1 latest

Sending STATS request to REPLICAT RBSSTG1 …

Start of Statistics at 2016-01-26 00:14:55.

Integrated Replicat Statistics:

Total transactions 1.00
Redirected 0.00
DDL operations 0.00
Stored procedures 0.00
Datatype functionality 0.00
Event actions 0.00
Direct transactions ratio 0.00%

Replicating from SYSADM.TESTME to SYSADM.TESTME:

*** Latest statistics since 2016-01-26 00:05:19 ***
Total inserts 29622.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 29622.00

End of Statistics.


Oracle Exadata X5-2 Data Guard Configuration


This note describes the procedure for creating an Oracle 11.2.0.4 Data Guard physical standby database with a two-node Real Application Clusters (RAC) primary and standby database on an Oracle Exadata X5-2 eighth rack.

The procedure will use RMAN for the creation of the Physical Standby database and will use the DUPLICATE FROM ACTIVE DATABASE method which is available in Oracle 11g.

Note – creation of the Standby database is done online while the Primary database is open and being accessed and no physical RMAN backups are utilized for the purpose of creating the standby database.

The note also describes the process of configuring Data Guard Broker to manage the Data Guard environment, and illustrates how to perform a database role reversal via a Data Guard switchover operation.

 Download the full note ….

Installing the Oracle GoldenGate monitoring plug-in (13.2.1.0.0) for Cloud Control 13c Release 2


Oracle 12c GoldenGate Implementation Workshop online training


Oracle 12c GoldenGate Implementation Workshop online training is commencing 23rd January.

 

This 20 hour workshop will comprise topics included in the official Oracle University GoldenGate 12c Essentials, GoldenGate Advanced Configuration and GoldenGate Tuning and Troubleshooting classes.

 

Use the following links to register for the online training classes:

 7.00 to 9.00 PM IST Batch

https://attendee.gotowebinar.com/register/4325373570465792259

 

7.00 to 9.00 PM CST (USA) Batch

https://attendee.gotowebinar.com/register/7493591037135257347

 

The cost is only 499.00 USD and compares very favorably with the official OU course price which is over 3000 USD!

 

Oracle GoldenGate 12c Implementation Workshop

 

Course Topics and Objectives

  • Learn about Oracle GoldenGate 12c (12.2) architecture, topologies and components
  • Installation and deinstallation of GoldenGate using both OUI as well as command-line silent method
  • Configuring the Manager process
  • Prepare the Oracle database for GoldenGate replication
  • Create Classic extracts and replicat process groups
  • Create Integrated extracts and replicat process groups
  • Create Coordinated Replicats
  • Configure and manage DDL replication
  • Configuring security and encryption of trail files and credentials in GoldenGate
  • Column mapping
  • Data filtering and transformation
  • Using the Logdump utility to examine trail files
  • Using OBEY files, macros and tokens
  • Handling errors and exceptions in GoldenGate
  • Configuring Automatic Heartbeat Tables
  • Monitoring Lag
  • Configuring Bi-Directional replication
  • Configuring Conflict Detection and Resolution

 

All the topics listed above include hands-on lab exercises as well.
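To give a flavor of the labs, creating a classic change-capture pair comes down to a handful of GGSCI commands. This is only a minimal sketch; the group names, trail path, credential aliases and checkpoint table below are hypothetical:

```
-- Minimal sketch of a classic Extract/Replicat pair (names and paths are illustrative)
-- On the source system:
DBLOGIN USERIDALIAS gg_src
ADD EXTRACT ext1, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1
START EXTRACT ext1

-- On the target system:
DBLOGIN USERIDALIAS gg_tgt
ADD CHECKPOINTTABLE ggadmin.ggs_checkpoint
ADD REPLICAT rep1, EXTTRAIL ./dirdat/lt, CHECKPOINTTABLE ggadmin.ggs_checkpoint
START REPLICAT rep1
```

The labs build on this skeleton with parameter files, mapping and error-handling clauses.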

GoldenGate Performance Tuning Webinar


The Oracle GoldenGate Performance Tuning Webinar was well received by over 200 attendees across two separate sessions.

The feedback received was very positive, and I am sharing the slide deck, which can be downloaded from the link below:

Download the presentation ….

 

Installing and Configuring Oracle GoldenGate Veridata 12c


This note demonstrates how to install and configure Oracle GoldenGate Veridata 12c, both the server and the agent components.

At a high level the steps include:

  • Install Veridata Server
  • Create the GoldenGate Veridata Repository Schema using RCU
  • Configure WebLogic domain for Oracle GoldenGate Veridata
  • Start Admin and Managed Servers
  • Create the VERIDATA_ADMIN user
  • Launch and test Veridata Web User Interface
  • Install the Veridata Agent on the hosts where you want to run Veridata comparison jobs
  • Configure and start the Veridata agent

Download the note …..
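The repository-creation step can also be scripted rather than run through the RCU wizard. A minimal sketch of a silent RCU invocation, assuming the connect string, schema prefix, and component name shown here are purely illustrative for your environment:

```
# Create the Veridata repository schema silently
# (connect string, prefix and component name are hypothetical for this sketch)
$MW_HOME/oracle_common/bin/rcu -silent -createRepository \
    -connectString dbhost:1521/orcl \
    -dbUser sys -dbRole sysdba \
    -schemaPrefix DEV \
    -component VERIDATA
```

RCU prompts for the passwords on standard input when run this way; check the RCU command-line reference for the exact component name in your release.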

Oracle Database In-Memory 12c Release 2 New Features


The Oracle Database In-Memory 12c Release 2 New Features webinar conducted last week was well received by a global audience, and the feedback was positive. For those who missed the session, the slide deck can be downloaded from the link below. Feedback and questions are welcome!

12.2_InMemory_new_features

Oracle Database 12c Release 2 (12.2.0.1) upgrade using DBUA


Oracle 12c Release 2 (12.2.0.1) was officially released for on-premises deployment yesterday. I tested an upgrade of one of my 12.1.0.2 test databases using the Database Upgrade Assistant (DBUA), and the upgrade went smoothly.

The parallel upgrade command-line utility catctl.pl has a number of changes and enhancements compared with 12c Release 1, and I will discuss those in a later post.
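For reference, the manual (non-DBUA) route drives the same parallel upgrade through catctl.pl. A minimal sketch, with the process count and log directory chosen purely for illustration:

```
# Run the parallel database upgrade manually with catctl.pl
# (-n = number of parallel processes, -l = log directory; values are illustrative)
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -n 8 -l /tmp/upgrade_logs catupgrd.sql
```

12.2 also ships a dbupgrade shell wrapper around catctl.pl; more on that in the follow-up post.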

Here are the screen shots of the database upgrade process.

 

[Screenshots upg1 to upg9]

Note – I converted my database to NOARCHIVELOG mode only because I did not have the recommended free space in the FRA. Do not do this in production: ideally you would either take a backup of the archived logs, take a final incremental level 1 backup, or set a Guaranteed Restore Point, so that you can flash back the database if required.
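Setting the Guaranteed Restore Point takes only a couple of SQL*Plus commands. A minimal sketch, assuming the database is in ARCHIVELOG mode and using a hypothetical restore point name:

```sql
-- Create a guaranteed restore point before starting the upgrade
-- (requires ARCHIVELOG mode; the name is illustrative)
CREATE RESTORE POINT before_12201_upgrade GUARANTEE FLASHBACK DATABASE;

-- If the upgrade must be rolled back:
--   SHUTDOWN IMMEDIATE
--   STARTUP MOUNT
--   FLASHBACK DATABASE TO RESTORE POINT before_12201_upgrade;
--   ALTER DATABASE OPEN RESETLOGS;

-- After validating the upgrade, drop it to release the FRA space:
DROP RESTORE POINT before_12201_upgrade;
```

Remember that a guaranteed restore point pins flashback logs in the FRA until it is dropped, which adds to the space pressure noted below.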

I did notice that the redo generated by the upgrade process seems to be far more than in earlier version upgrades; even the DBUA recommendation was to double the Fast Recovery Area space allocation.

 

[Screenshots upg10 to upg22]
