Saturday, July 14, 2018

Oracle has released a bug fix for "Oracle Exadata Storage Server Patch (12.2.1.1.7) encountered bugs - 'CELLSRV' Restarting On Exadata x4-2 and Exadata x6-2"


Two weeks back we encountered a bug after applying the APR 2018 PSU, which includes Oracle Exadata Storage Server version 12.2.1.1.7.
CELLSRV started restarting frequently across the storage servers. I posted a blog entry about the issue:

Oracle Exadata Storage Server Patch (12.2.1.1.7) encountered bugs - 'CELLSRV' Restarting On Exadata x4-2 and Exadata x6-2
http://yvrk1973.blogspot.com/2018/06/oracle-exadata-storage-server-patch.html

Thanks to the Oracle Exadata Development Team, who recently developed an interim patch with the fix for
Oracle Exadata Storage Server version 12.2.1.1.7.


Patch: p28181789_122117_Linux-x86-64.zip

The zip file contains two rpms and a README.txt.

a.1) cell-12.2.1.1.7_LINUX.X64_180506-1-rpm.bin - 
     This is the base release rpm. This is included in case a rollback is needed.

a.2) cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin - 
     This is the interim patch that contains the fix for the bug listed below.  
     This patch will replace existing storage server software.

a.3) ========== Bug fixes or Diagnostics included in this ONEOFF ===========

Bug Fixes: 28181789    ORA-07445: [0000000000000000+0] AFTER UPGRADING CELL TO 12.2.1.1.7


Non-Rolling
============================
Copy the Cell Oneoff RPM File(s) to Target Cell Nodes

   1.a) If the patch has already been downloaded to a database server and unzipped under /tmp, move to 1.b.
        Otherwise, download and unzip p28181789_122117_Linux-x86-64.zip under /tmp/ on one of the database servers.
       # unzip p28181789_122117_Linux-x86-64.zip -d /tmp

   1.b) Change directory to the location where oneoff cell rpm is located
       # cd /tmp/RPM_patch_12.2.1.1.7

   1.c) Create temporary working directory /var/log/exadatatmp/SAVE_patch_28181789 on cells
       # dcli -l root -g cell_group mkdir -p /var/log/exadatatmp/SAVE_patch_28181789

   1.d) Copy new cell RPM bin file to /var/log/exadatatmp/SAVE_patch_28181789 on cells
       # dcli -l root -g cell_group -f /tmp/RPM_patch_12.2.1.1.7/cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin 
        -d /var/log/exadatatmp/SAVE_patch_28181789
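For environments where dcli is unavailable, the staging steps in 1.c and 1.d can be approximated with plain ssh/scp. This is only a sketch: the cell names are illustrative, and the `run` wrapper echoes each command as a dry run (replace its body with `"$@"` to actually execute).

```shell
# Dry-run sketch of the dcli staging steps above; cell names are illustrative.
run() { echo "+ $*"; }            # echoes instead of executing; use "$@" to run for real
CELLS="cell01 cell02 cell03"
SRC=/tmp/RPM_patch_12.2.1.1.7/cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin
DEST=/var/log/exadatatmp/SAVE_patch_28181789
out=$(for c in $CELLS; do
  run ssh root@"$c" mkdir -p "$DEST"     # step 1.c equivalent
  run scp "$SRC" root@"$c":"$DEST"/      # step 1.d equivalent
done)
echo "$out"
```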

2. Shut Down the Cluster
   ------------------------
   If you want to stop the cluster from one node, execute the following command from that node.
   [root@dbnode]# crsctl stop cluster -all

   If you want to stop CRS on each node, execute the following command on each database node.
   [root@dbnode]# crsctl stop crs

3. Install the RPM on Each Cell
------------------------------

   3.a) Check if the cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin has execute permissions and enable them 
        if needed.
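Step 3.a gives no command, so here is a small demonstration of the permission check using a temporary stand-in file; in practice the target is the staged cell-12.2.1.1.7.28181789V1...rpm.bin under /var/log/exadatatmp/SAVE_patch_28181789.

```shell
# Demonstration of step 3.a with a temporary stand-in for the rpm.bin file.
BIN=$(mktemp)
chmod -x "$BIN"                    # simulate a file without the execute bit
[ -x "$BIN" ] || chmod +x "$BIN"   # add execute permission only if missing
if [ -x "$BIN" ]; then status="executable"; else status="not executable"; fi
echo "$status"
rm -f "$BIN"
```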

   3.b) Run the following command to apply the cell interim patch.
   
        [root@cell ~]# /var/log/exadatatmp/SAVE_patch_28181789/cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin 
                       --doall --force
        ...
        [INFO] Upgrade was successful.
        "Upgrade was successful" is displayed on success.

Note: Refer to section (f) for any known issues during the installation

   3.c) Run the following command to verify the installation.

        [root@cell ~]# rpm -qa | grep ^cell-
        cell-12.2.1.1.7_LINUX.X64_180706-1.x86_64
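That verification can be scripted. A sketch using the sample rpm name from the output above: the build-date suffixes distinguish the interim patch (180706) from the base release (180506).

```shell
# Sketch of the step-3.c verification against a sample `rpm -qa | grep ^cell-` string.
installed="cell-12.2.1.1.7_LINUX.X64_180706-1.x86_64"
case "$installed" in
  *180706*) result="interim patch installed" ;;
  *180506*) result="base release still installed" ;;
  *)        result="unexpected cell rpm: $installed" ;;
esac
echo "$result"
```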

   3.d) Log out and log back in to the shell to pick up the new command path.

       Then, run the following command to verify all services are running.
       [root@cell ~]# cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus

4. Restart the Cluster
----------------------
   If you want to start the cluster from one node, execute the following command from that node.
   [root@dbnode]# crsctl start cluster -all

   If you want to start CRS on each node, execute the following command from each node.
   [root@dbnode]# crsctl start crs


5. Remove the Patch File
   -------------------------
   After successful RPM installation, you can remove the temporary patch staging directory to save disk space.

[root@dbnode ~]# dcli -l root -g cell_group rm -rf /var/log/exadatatmp/SAVE_patch_28181789

==============================================================
e) Rolling Back This Interim Patch on Exadata Storage Servers
==============================================================
   To roll back, follow either the rolling or non-rolling steps above
   to install cell-12.2.1.1.7_LINUX.X64_180506-1-rpm.bin.
   This rpm is included in the zip file for this patch.

Note: The patch can also be applied in a rolling manner. For more details, refer to MOS using the bug number.

Saturday, July 7, 2018

Downgrading Grid Infrastructure from V12.2.0.1.0 to V12.1.0.2.0

Downgrading Grid Infrastructure from V12.2.0.1.0 to V12.1.0.2.0


We recently upgraded Grid Infrastructure from 12.1.0.2.0 to 12.2.0.1.0 on Oracle Exadata X7-2, and for
various reasons we planned to downgrade Grid Infrastructure from 12.2.0.1.0 back to 12.1.0.2.0.
The method is straightforward, and the Oracle documentation covers this process well.

Delete the Oracle Grid Infrastructure 12c Release 2 (12.2) Management Database

==================================================================================
dbca -silent -deleteDatabase -sourceDB -MGMTDB

[oracle@rac2-12c ~]$ ps -ef | grep pmon
oracle    5679     1  0 07:09 ?        00:00:00 asm_pmon_+ASM2
oracle    6478     1  0 07:09 ?        00:00:00 apx_pmon_+APX2
oracle    6755     1  0 07:10 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   16873     1  0 07:14 ?        00:00:00 ora_pmon_contdb2
oracle   26100 25673  0 07:25 pts/1    00:00:00 grep pmon

Run "rootcrs.sh -downgrade" to downgrade Oracle Grid Infrastructure on all nodes except the first node.
========================================================================================================
[oracle@rac2-12c ~]$ su - root
Password:
[root@rac2-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/oracle

[root@rac2-12c ~]# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -downgrade

Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/crsdata/rac2-12c/crsconfig/crsdowngrade_rac2-12c_2018-07-03_07-27-42AM.log
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2-12c'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac2-12c'
CRS-2673: Attempting to stop 'ora.chad' on 'rac1-12c'
CRS-2673: Attempting to stop 'ora.chad' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.acfs_dg.vol1.acfs' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.contdb.serv2.svc' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'rac2-12c'
CRS-2677: Stop of 'ora.contdb.serv2.svc' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.contdb.db' on 'rac2-12c'
CRS-2677: Stop of 'ora.acfs_dg.vol1.acfs' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ACFS_DG.VOL1.advm' on 'rac2-12c'
CRS-2677: Stop of 'ora.ACFS_DG.VOL1.advm' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'rac2-12c'
CRS-2677: Stop of 'ora.contdb.db' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ACFS_DG.dg' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2-12c'
CRS-2673: Attempting to stop 'xag.gg_1-vip.vip' on 'rac2-12c'
CRS-2677: Stop of 'ora.ACFS_DG.dg' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.rac2-12c.vip' on 'rac2-12c'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2-12c'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac2-12c'
CRS-2677: Stop of 'ora.cvu' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac2-12c'
CRS-2677: Stop of 'xag.gg_1-vip.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.chad' on 'rac1-12c' succeeded
CRS-2677: Stop of 'ora.chad' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rac2-12c'
CRS-2677: Stop of 'ora.scan2.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.rac2-12c.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rac2-12c'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.MGMTLSNR' on 'rac2-12c' succeeded
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.qosmserver' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.cvu' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.rac2-12c.vip' on 'rac1-12c'
CRS-2672: Attempting to start 'xag.gg_1-vip.vip' on 'rac1-12c'
CRS-2676: Start of 'ora.cvu' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac1-12c'
CRS-2676: Start of 'ora.scan3.vip' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac1-12c'
CRS-2676: Start of 'ora.rac2-12c.vip' on 'rac1-12c' succeeded
CRS-2676: Start of 'xag.gg_1-vip.vip' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.MGMTLSNR' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.mgmtdb' on 'rac1-12c'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.qosmserver' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.mgmtdb' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.chad' on 'rac1-12c'
CRS-2676: Start of 'ora.chad' on 'rac1-12c' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac2-12c'
CRS-2677: Stop of 'ora.ons' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2-12c'
CRS-2677: Stop of 'ora.net1.network' on 'rac2-12c' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2-12c' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2-12c'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2-12c'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2-12c'
CRS-2677: Stop of 'ora.ctssd' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2-12c'
CRS-2677: Stop of 'ora.cssd' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2-12c'
CRS-2677: Stop of 'ora.gipcd' on 'rac2-12c' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2-12c' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/07/03 07:30:20 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:30:47 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:30:47 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:31:12 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:31:12 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
[root@rac2-12c ~]#

Execute the same command on the other node, i.e. rac1-12c
==================================================
[oracle@rac1-12c ~]$ su - root
Password:
[root@rac1-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@rac1-12c ~]# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -downgrade

Set Oracle Grid Infrastructure 12c Release 1 (12.1) Grid home as the active Oracle Clusterware home
======================================================================================================
$ cd /u01/app/12.1.0.2/grid/oui/bin
$./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true 
ORACLE_HOME=/u01/app/12.1.0.2/grid "CLUSTER_NODES=rac1-12c,rac2-12c"

Start the 12.1 Oracle Clusterware stack on all nodes.
====================================================
[oracle@rac1-12c ~]$ su - root
Password:
[root@rac1-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle

[root@rac1-12c ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@rac1-12c ~]# ps -ef | grep pmon
oracle   10415     1  0 07:49 ?        00:00:00 asm_pmon_+ASM1
root     16414  4942  0 07:52 pts/1    00:00:00 grep pmon

[root@rac2-12c bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@rac2-12c bin]# ps -ef | grep pmon
oracle    5059     1  0 07:50 ?        00:00:00 asm_pmon_+ASM2
root      9921 26564  0 07:51 pts/1    00:00:00 grep pmon

On any node, remove the MGMTDB resource as follows:
===================================================
[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [contdb2] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1-12c ~]$
[oracle@rac1-12c ~]$
[oracle@rac1-12c ~]$ srvctl remove mgmtdb
Remove the database _mgmtdb? (y/[n]) y
[oracle@rac1-12c ~]$

[oracle@rac1-12c templates]$ ls
MGMTSeed_Database.ctl    MGMTSeed_Database.dfb  mgmtseed_pdb.xml  pdbseed.dfb
MGMTSeed_Database.dbc  mgmtseed_pdb.dfb       New_Database.dbt  pdbseed.xml
[oracle@rac1-12c templates]$ pwd
/u01/app/12.1.0.2/grid/assistants/dbca/templates
[oracle@rac1-12c templates]$
[oracle@rac1-12c templates]$ cd ../../..
[oracle@rac1-12c grid]$ cd bin/

Create the MGMTDB in silent mode using the templates
====================================================
[oracle@rac1-12c bin]$ ./dbca -silent -createDatabase -sid -MGMTDB -createAsContainerDatabase true
-templateName MGMTSeed_Database.dbc 
-gdbName _mgmtdb -storageType ASM -diskGroupName +DATA 
-datafileJarLocation /u01/app/12.1.0.2/grid/assistants/dbca/templates 
-characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck

Registering database with Oracle Grid Infrastructure
5% complete
Copying database files
7% complete
9% complete
16% complete
23% complete
30% complete
37% complete
41% complete
Creating and starting Oracle instance
43% complete
48% complete
49% complete
50% complete
55% complete
60% complete
61% complete
64% complete
Completing Database Creation
68% complete
79% complete
89% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb1.log" for further details.
[oracle@rac1-12c bin]$

Check the log File:
=====================
[oracle@rac1-12c ~]$ cat /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb1.log

+DATA has enough space. Required space is 1566 MB , available space is 55751 MB.
File Validations Successful.

Validation of server pool succeeded.
Registering database with Oracle Grid Infrastructure
DBCA_PROGRESS : 5%
Copying database files
DBCA_PROGRESS : 7%
DBCA_PROGRESS : 9%
DBCA_PROGRESS : 16%
DBCA_PROGRESS : 23%
DBCA_PROGRESS : 30%
DBCA_PROGRESS : 37%
DBCA_PROGRESS : 41%
Creating and starting Oracle instance
DBCA_PROGRESS : 43%
DBCA_PROGRESS : 48%
DBCA_PROGRESS : 49%
DBCA_PROGRESS : 50%
DBCA_PROGRESS : 55%
DBCA_PROGRESS : 60%
DBCA_PROGRESS : 61%
DBCA_PROGRESS : 64%
Completing Database Creation
DBCA_PROGRESS : 68%
DBCA_PROGRESS : 79%
DBCA_PROGRESS : 89%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
[oracle@rac1-12c ~]$
[oracle@rac1-12c bin]$ srvctl status MGMTDB
Database is enabled
Instance -MGMTDB is running on node rac1-12c
[oracle@rac1-12c bin]$

Check the MGMTDB instance
==========================
[oracle@rac1-12c bin]$ ps -ef | grep pmon
oracle   10415     1  0 07:49 ?        00:00:00 asm_pmon_+ASM1
oracle   23026     1  0 08:06 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   26089 18664  0 08:14 pts/1    00:00:00 grep pmon
[oracle@rac1-12c bin]$

[oracle@rac1-12c ~]$ su - root
Password:
[root@rac1-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle

[root@rac1-12c ~]# cd /u01/app/12.1.0.2/grid/bin/
[root@rac1-12c bin]# crsctl modify res ora.crf -attr ENABLED=1 -init

[root@rac1-12c bin]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'rac1-12c'
CRS-2676: Start of 'ora.crf' on 'rac1-12c' succeeded
[root@rac1-12c bin]#

[root@rac2-12c ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@rac2-12c ~]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'rac2-12c'
CRS-2676: Start of 'ora.crf' on 'rac2-12c' succeeded
[root@rac2-12c ~]#


Check the GI version 
=====================
[root@rac1-12c bin]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@rac1-12c bin]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[root@rac1-12c bin]#
[root@rac1-12c bin]# exit
logout

[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [contdb] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1-12c ~]$ ps -ef | grep pmon
oracle   10415     1  0 07:49 ?        00:00:00 asm_pmon_+ASM1
oracle   23026     1  0 08:06 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   27904 18664  0 08:19 pts/1    00:00:00 grep pmon

[oracle@rac1-12c ~]$ srvctl status database -d contdb
Instance contdb1 is not running on node rac1-12c
Instance contdb2 is not running on node rac2-12c
[oracle@rac1-12c ~]$

[oracle@rac1-12c ~]$ srvctl start database -d contdb

[oracle@rac1-12c ~]$ srvctl status database -d contdb
Instance contdb1 is running on node rac1-12c
Instance contdb2 is running on node rac2-12c
[oracle@rac1-12c ~]$

[oracle@rac2-12c ~]$ . oraenv
ORACLE_SID = [primdb2] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac2-12c ~]$
[oracle@rac2-12c ~]$ srvctl status database -d contdb
Instance contdb1 is running on node rac1-12c
Instance contdb2 is running on node rac2-12c
[oracle@rac2-12c ~]$

[oracle@rac2-12c ~]$ ps -ef | grep pmon
oracle     547 25673  0 08:56 pts/1    00:00:00 grep pmon
oracle    5059     1  0 07:50 ?        00:00:00 asm_pmon_+ASM2
oracle   21595     1  0 08:20 ?        00:00:00 ora_pmon_contdb2
[oracle@rac2-12c ~]$

Friday, July 6, 2018

Online Operations in Oracle 11g/12c (12.1)/12c (12.2)/18c Databases


Online Operations in Oracle 11g/12c (12.1)/12c (12.2)/18c Databases

Oracle 11g (11.2) & Prior
--------------------------

1. Create index online
2. Rebuild index online
3. Rebuild index partition online
4. Add column
5. Add constraint with ENABLE NOVALIDATE

Oracle 12c (12.1)
------------------

1. Online move partition
2. Drop index online
3. Set unused column online
4. Alter column visible/invisible
5. Alter index unusable online
6. Alter index visible/invisible
7. Alter index parallel/noparallel

Oracle 12c (12.2)
------------------

1. Alter table move online for non-partitioned tables
2. Alter table from non-partitioned to partitioned online
3. Alter table split partition online
4. Create table for exchange (usable for online partition exchange)
5. Move/merge/split partition maintenance operations can now do data filtering

Oracle 18c
-----------

1. Alter table modify partitioned table to a different partitioning method
2. Alter table merge partition/subpartition online

Saturday, June 23, 2018

Oracle has changed the License Documentation: a Standby CDB can be opened Read Only without an Active Data Guard license


Oracle has changed the License Documentation: a Standby CDB can be opened Read Only without an
Active Data Guard (ADG) license

Link to the official Active Data Guard license documentation:

https://docs.oracle.com/en/database/oracle/oracle-database/18/dblic/Licensing-Information.html#GUID-AB56CEE3-955E-4E56-8B44-6075E889C283

Oracle Exadata Storage Server Patch (12.2.1.1.7) encountered bugs - 'CELLSRV' Restarting On Exadata x4-2 and Exadata x6-2


Oracle Exadata Storage Server Patch (12.2.1.1.7) encountered bugs - 'CELLSRV' Restarting

We encountered bugs (the CELLSRV process restarted a couple of times across the Exadata storage servers) after applying
Oracle Exadata Storage Server Patch (12.2.1.1.7) on Oracle Exadata x4-2 (Quarter Rack) and Oracle Exadata x6-2 (Half Rack).

Finding INCIDENT Info in Exadata Storage Server 

Log in to the Oracle Exadata Storage Server and use ADRCI.



Viewing Alertlog file in Exadata Storage Server



Raised SR and confirmed with following BUGs in Oracle Exadata Storage Server (12.2.1.1.7)



Required ACFS Driver Updates

Install updated ACFS drivers before updating Exadata database servers to 18.1.5.0.0 or 12.2.1.1.7. Note the following:


1. Updated ACFS drivers are needed for the most efficient CVE-2017-5715 mitigation to be in place once 18.1.5.0.0 or 
   12.2.1.1.7 is installed.
2. Updated ACFS drivers are needed whether or not ACFS is in use.
3. In an OVM configuration, dom0 may be updated to 18.1.5.0.0 or 12.2.1.1.7 before the ACFS drivers are updated in domU.
4. Updated drivers are delivered via an ACFS patch that is installed in the grid infrastructure home. 
   
GI Home guidelines - The following patches are available at publication time:

1. Grid infrastructure 18.2 and 12.2.0.1 - Patch 27463879
2. Grid infrastructure 12.1.0.2.180417 - Patch 27942938 (merge patch containing fixes for bug 27463879 and bug 23312691)
3. Grid infrastructure 12.1.0.2.180116 - Patch 27942821 (merge patch containing fixes for bug 27463879 and bug 23312691)

How to check the installed ACFS drivers based on the Grid Infrastructure version

1. Log in as the 'grid' user and check the patches


2. Check "ACFS" drivers Information on Oracle Exadata Compute Nodes


Note: For more details, refer to the Oracle Exadata Database Machine Patch Availability Document for CVE-2017-5715,
CVE-2017-5753, and CVE-2017-5754 (Doc ID 2356385.1)

Hope this helps.

Saturday, May 26, 2018

Oracle Exadata x7-2 Eighth Rack Improvements


Oracle Exadata x7-2 Eighth Rack Improvements

1. Uses 10 TB HDDs - 53 TB with High Redundancy (67 TB with Normal Redundancy)
2. Cores Per CPU/Server - 24/48
3. DRAM Per Server - 384 GB
4. InfiniBand (Internal) - 40 Gb/s
5. Ethernet (External) - 25 GigE
6. All-Flash per server - 51.2 TB
7. Flash Cache per server - 25.6 TB
8. Disk capacity per server - 120 TB
9. SQL offload cores per server - 20
10. Two separate internal HDDs for OS and Cell Software (M.2 disks)

Friday, May 4, 2018

AWR Snapshots and Reports - PDB level in Oracle 12c R2 (12.2.0.1.0)


AWR Snapshots and Reports - PDB level in Oracle 12c R2 (12.2.0.1.0)


1. AWR Snapshots and reports can be created only at the container database (CDB) level in 
   Oracle 12c R1 (12.1.0.1.0 / 12.1.0.2.0)
2. AWR Snapshots and reports can be created at the container database (CDB) level as well as pluggable database (PDB) level 
   in Oracle 12c R2 (12.2.0.1.0)
3. By default, AWR Snapshots and reports can be generated only at the container database (CDB) level
4. To generate AWR Snapshots and reports at the PDB level, set the parameter
   "awr_pdb_autoflush_enabled=TRUE" (FALSE by default), as shown below

[oracle@exaserver01 ~]$ sqlplus / as sysdba


SQL*Plus: Release 12.2.0.1.0 Production on Thu May 3 14:29:18 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show con_name

CON_NAME
----------
CDB$ROOT

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCLPDB                        READ WRITE NO

SQL> alter session set container=ORCLPDB;
Session altered.

SQL> show con_name

CON_NAME
--------
ORCLPDB

SQL> show user
USER is "SYS"

SQL> show parameter awr_pdb_autoflush_enabled

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_pdb_autoflush_enabled            boolean     FALSE

SQL> alter system set awr_pdb_autoflush_enabled=TRUE;
System altered.

SQL> show parameter awr_pdb_autoflush_enabled

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_pdb_autoflush_enabled            boolean     TRUE


SQL> set lines 100
SQL> select * from cdb_hist_wr_control;

      DBID SNAP_INTERVAL
---------- ---------------------------------------------------------------------------
RETENTION                                                                   TOPNSQL        CON_ID
--------------------------------------------------------------------------- ---------- ----------
1826807715 +40150 00:01:00.0
+00008 00:00:00.0                                                           DEFAULT             3

SQL> execute dbms_workload_repository.modify_snapshot_settings(interval => 60);
PL/SQL procedure successfully completed.

SQL> select * from cdb_hist_wr_control;

      DBID SNAP_INTERVAL
---------- ---------------------------------------------------------------------------
RETENTION                                                                   TOPNSQL        CON_ID
--------------------------------------------------------------------------- ---------- ----------
1826807715 +00000 01:00:00.0
+00008 00:00:00.0                                                           DEFAULT             3

SQL> show parameter AWR_SNAPSHOT_TIME_OFFSET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_snapshot_time_offset             integer     0

SQL> alter system set AWR_SNAPSHOT_TIME_OFFSET=1000000 scope=both;
alter system set AWR_SNAPSHOT_TIME_OFFSET=1000000 scope=both
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database


SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
CDB$ROOT

SQL> show user
USER is "SYS"

SQL> show parameter AWR_SNAPSHOT_TIME_OFFSET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_snapshot_time_offset             integer     0

SQL> alter system set AWR_SNAPSHOT_TIME_OFFSET=1000000 scope=both;
System altered.

SQL> show parameter AWR_SNAPSHOT_TIME_OFFSET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_snapshot_time_offset             integer     1000000

SQL> select * from awr_pdb_snapshot;


AWR Report at PDB level:






Note: Pluggable database (PDB) snapshot IDs can differ from container database (CDB)
snapshot IDs when you generate PDB snapshots manually.

Friday, March 30, 2018

Virtual IP Addresses In Oracle Exadata x6/x7


Virtual IP Addresses In Oracle Exadata

Note: If you cannot find the VIPs in /etc/hosts, you can still look them up in Oracle Exadata as shown below :)


$ srvctl status vip -n dbserver01
VIP dbserver01-vip.abc.com is enabled
VIP dbserver01-vip.abc.com is running on node: dbserver01
[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]

$ srvctl status vip -n dbserver02
VIP dbserver02-vip.abc.com is enabled
VIP dbserver02-vip.abc.com is running on node: dbserver02
[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]

$ srvctl status vip -n dbserver03
VIP dbserver03-vip.abc.com is enabled
VIP dbserver03-vip.abc.com is running on node: dbserver03
[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]

$ srvctl status vip -n dbserver04
VIP dbserver04-vip.abc.com is enabled
VIP dbserver04-vip.abc.com is running on node: dbserver04
[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]
$

[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]
$ nslookup dbserver01-vip
Server:         17.2.3.20
Address:        17.2.3.20#53
Name:   dbserver01-vip.abc.com
Address: 10.5.3.77

[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]
$ nslookup dbserver02-vip
Server:         17.2.3.20
Address:        17.2.3.20#53
Name:   dbserver02-vip.abc.com
Address: 10.5.3.78

[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]
$ nslookup dbserver03-vip
Server:         17.2.3.20
Address:        17.2.3.20#53
Name:   dbserver03-vip.abc.com
Address: 10.5.3.79

[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]
$ nslookup dbserver04-vip
Server:         17.2.3.20
Address:        17.2.3.20#53
Name:   dbserver04-vip.abc.com
Address: 10.5.3.80
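The per-node checks above can be generated in a loop. The node names are illustrative, and `echo` keeps this a dry run (drop it to actually run srvctl and nslookup).

```shell
# Dry-run sketch of the VIP checks above; node names are illustrative.
cmds=$(for n in dbserver01 dbserver02 dbserver03 dbserver04; do
  echo "srvctl status vip -n $n"
  echo "nslookup ${n}-vip"
done)
echo "$cmds"
```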

RAC Service Does Not Fail Over During Instance Shutdown Using srvctl stop instance with the Abort and Force Options in 11.2.0.4/12.1.0.2/12.2.0.1


RAC Service Does Not Fail Over During Instance Shutdown Using srvctl stop instance with the Abort and
Force Options in 11.2.0.4/12.1.0.2/12.2.0.1

Check the GRID version

[grid@dbserver01 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.2.0.1.0]
[grid@dbserver01 ~]$

[grid@dbserver01 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [dbserver01] is [12.2.0.1.0]
[grid@dbserver01 ~]$

$ su - oracle
[oracle@dbserver01 ~]$ ps -ef | grep pmon
oracle    26358      1  0 Mar29 ?        00:00:04 ora_pmon_dbm011
grid      64858      1  0 Feb28 ?        00:01:54 asm_pmon_+ASM1
oracle   289299 289250  0 12:32 pts/1    00:00:00 grep pmon
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ . oraenv
ORACLE_SID = [oracle] ? dbm011
The Oracle base has been set to /u01/app/oracle
[oracle@dbserver01 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Fri Mar 30 12:32:34 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Creating a service with both instances listed as PREFERRED.

[oracle@dbserver01 ~]$ srvctl add service -d dbm01 -s serv5 -r dbm011,dbm012 -tafpolicy BASIC -failovertype SELECT 
-failovermethod BASIC -failoverretry 2 -failoverdelay 5
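As a client-side illustration (not part of the original post), a tnsnames.ora entry matching the TAF attributes above might look like the following. The alias and host names are hypothetical, and with server-side TAF configured through srvctl (as done here) the client-side FAILOVER_MODE clause is typically unnecessary.

```text
SERV5 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver-scan.abc.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = serv5)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 2)(DELAY = 5)
      )
    )
  )
```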

[oracle@dbserver01 ~]$ srvctl status service -d dbm01
Service serv5 is not running.
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl start service -d dbm01 -s serv5
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv5
Service serv5 is running on instance(s) dbm011,dbm012
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl config service -d dbm01 -s serv5
Service name: serv5
Server pool:
Cardinality: 2
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 2
TAF failover delay: 5
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Pluggable database name:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: dbm011,dbm012
Available instances:
CSS critical: no
[oracle@dbserver01 ~]$
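
On the client side, TAF behaviour for such a service is commonly described by a tnsnames.ora entry whose FAILOVER_MODE mirrors the service attributes shown above (TYPE=SELECT, METHOD=BASIC, RETRIES=2, DELAY=5). A minimal sketch, assuming a hypothetical SCAN name dbm01-scan.abc.com:

```text
SERV5 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbm01-scan.abc.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = serv5)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 2)(DELAY = 5)
      )
    )
  )
```

Because the service itself carries a TAF policy (-tafpolicy BASIC), the server-side settings generally apply even without this clause; the entry above simply makes the intent explicit on the client.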

[oracle@dbserver01 ~]$ srvctl stop instance -d dbm01 -i dbm011 -f
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01
Service serv5 is running on instance(s) dbm012
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl config service -d dbm01 -s serv5
Service name: serv5
Server pool:
Cardinality: 2
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 2
TAF failover delay: 5
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Pluggable database name:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: dbm011,dbm012
Available instances:
CSS critical: no

[oracle@dbserver01 ~]$ srvctl start instance -d dbm01 -i dbm011

[oracle@dbserver01 ~]$ srvctl status service -d dbm01
Service serv5 is running on instance(s) dbm011,dbm012
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv5
Service serv5 is running on instance(s) dbm011,dbm012
[oracle@dbserver01 ~]$
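
Whether connected sessions actually failed over can be observed from GV$SESSION, which exposes FAILOVER_TYPE, FAILOVER_METHOD, and FAILED_OVER per session:

```sql
-- Run as a privileged user while clients are connected through serv5.
-- Sessions that survived an instance failure show FAILED_OVER = 'YES'
-- on the surviving instance.
SELECT inst_id, username, failover_type, failover_method, failed_over
FROM   gv$session
WHERE  service_name = 'serv5';
```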

Creating a service with one PREFERRED instance and one AVAILABLE instance:

[oracle@dbserver01 ~]$ srvctl add service -d dbm01 -s serv6 -r dbm011 -a dbm012 -tafpolicy BASIC -failovertype SELECT -failovermethod BASIC -failoverretry 2 -failoverdelay 5
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv6
Service serv6 is not running.
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl start service -d dbm01 -s serv6
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv6
Service serv6 is running on instance(s) dbm011
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv6
Service serv6 is running on instance(s) dbm011

[oracle@dbserver01 ~]$ srvctl config service -d dbm01 -s serv6
Service name: serv6
Server pool:
Cardinality: 1
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 2
TAF failover delay: 5
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Pluggable database name:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: dbm011
Available instances: dbm012
CSS critical: no
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl stop instance -d dbm01 -i dbm011 -f

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv6
Service serv6 is not running.

Note that serv6 did not fail over to the AVAILABLE instance dbm012; the service stays down until its preferred instance is restarted. This is the bug described in this post.

[oracle@dbserver01 ~]$ srvctl start instance -d dbm01 -i dbm011

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv6
Service serv6 is running on instance(s) dbm011

Please refer to MOS:

1. Services Are Not Failing Over To Node2 Automatically When First Node1 Is Down And Vice Versa (Doc ID 2177143.1)
2. Bug 22472054 is reported as fixed in the GI 12.2 release.
3. Apply the patch for bug 22472054.
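
To check whether the one-off fix is already present in the Grid home before raising an SR, opatch can be queried (run as the grid user, with ORACLE_HOME pointing at the GI home; a sketch, not a definitive procedure):

```shell
# Lists interim patches in the GI home and filters for the fix
$ORACLE_HOME/OPatch/opatch lspatches | grep 22472054
```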

Sunday, February 25, 2018

Moving a Pluggable Database (PDB) Between Container Databases (CDBs) on Different Servers with ASM Storage



==========Source Cluster RAC1 and RAC2==============================>

Source Cluster: RAC1 and RAC2
Source Container Database: CONTDB
Source Pluggable Database: CONPDB3


Target Cluster: RAC3 and RAC4
Target Container Database: CONTDB
Target Pluggable Database: CONPDB3
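
At a high level, the transcript below performs these steps, sketched here as commented SQL/RMAN for orientation (specific file names are elided):

```sql
-- Source CDB (RAC1/RAC2): close the PDB cluster-wide, unplug, back up
ALTER PLUGGABLE DATABASE conpdb3 CLOSE IMMEDIATE INSTANCES=ALL;
ALTER PLUGGABLE DATABASE conpdb3 UNPLUG INTO '/u01/app/oracle/CONPDB3_NEW.xml';
-- RMAN> BACKUP FOR TRANSPORT AS COMPRESSED BACKUPSET
--       PLUGGABLE DATABASE 'CONPDB3' FORMAT '/u01/app/oracle/CONPDB3.dfb';

-- Copy CONPDB3_NEW.xml and CONPDB3.dfb to the target server (scp).

-- Target CDB (RAC3/RAC4): restore the foreign datafiles into ASM, plug in
-- RMAN> RESTORE FOREIGN DATAFILE ... TO NEW FROM BACKUPSET '.../CONPDB3.dfb';
-- CREATE PLUGGABLE DATABASE conpdb3 AS CLONE USING '/u01/app/oracle/CONPDB3_NEW.xml'
--   SOURCE_FILE_NAME_CONVERT = ( ...old OMF names..., ...restored names... )
--   NOCOPY;
ALTER PLUGGABLE DATABASE conpdb3 OPEN INSTANCES=ALL;
-- Then run datapatch to reconcile SQL patch levels between the CDB and the new PDB.
```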



[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [primdb1] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 06:10:56 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
SQL> connect sys/oracle@contdb as sysdba
Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE NO
SQL> alter session set container=CONPDB3;
Session altered.

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.335.968994861
+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.334.968994861
+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.332.968994889

SQL> connect sys/oracle@contdb as sysdba
Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE NO


SQL> alter pluggable database conpdb3 close immediate instances=all;
Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        MOUNTED
SQL> select name ,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
CONPDB1                        READ WRITE
CONPDB2                        READ WRITE
CONPDB3                        MOUNTED

SQL> alter pluggable database conpdb3 unplug into '/u01/app/oracle/CONPDB3_NEW.xml';
Pluggable database altered.

SQL> exit

[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [contdb] ? contdb1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1-12c ~]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Feb 25 06:16:04 2018
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: CONTDB (DBID=1349816764)

RMAN> BACKUP FOR TRANSPORT AS COMPRESSED BACKUPSET FILESPERSET=1024 PLUGGABLE DATABASE 'CONPDB3' FORMAT '/u01/app/oracle/CONPDB3.dfb';

Starting backup at 25-FEB-18
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=82 instance=contdb1 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00019 name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.334.968994861
input datafile file number=00018 name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.335.968994861
input datafile file number=00020 name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.332.968994889
channel ORA_DISK_1: starting piece 1 at 25-FEB-18
channel ORA_DISK_1: finished piece 1 at 25-FEB-18
piece handle=/u01/app/oracle/CONPDB3.dfb tag=TAG20180225T061635 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 25-FEB-18

RMAN> exit


[oracle@rac1-12c ~]$ cd /u01/app/oracle/
[oracle@rac1-12c oracle]$ ls -lrth
total 148M
drwxrwxr-x.  3 oracle oinstall 4.0K Jun 23  2017 product
drwxr-xr-x   4 root   root     4.0K Jun 23  2017 tfa
drwxr-xr-x   3 oracle oinstall 4.0K Jun 23  2017 rac1-12c
drwxr-xr-x   3 oracle oinstall 4.0K Jun 24  2017 log
drwxr-xr-x   2 oracle oinstall 4.0K Jun 24  2017 checkpoints
drwxrwxr-x   5 oracle oinstall 4.0K Jun 24  2017 crsdata
drwxr-x---   7 oracle oinstall 4.0K Jun 29  2017 cfgtoollogs
drwxrwxr-x  19 oracle oinstall 4.0K Oct 25 22:48 diag
drwxr-x---   8 oracle oinstall 4.0K Nov  8 04:53 audit
drwxr-x---   7 oracle oinstall 4.0K Feb 25 05:13 admin
-rw-r--r--   1 oracle vboxsf   5.4K Feb 25 05:31 conpdb3.xml
-rw-r--r--   1 oracle vboxsf   5.4K Feb 25 06:13 CONPDB3_NEW.xml
-rw-r-----   1 oracle vboxsf   148M Feb 25 06:16 CONPDB3.dfb

[oracle@rac1-12c oracle]$ scp CONPDB3_NEW.xml oracle@192.168.2.201:/u01/app/oracle/
oracle@192.168.2.201's password:
CONPDB3_NEW.xml                                   100% 5525     5.4KB/s   00:00
[oracle@rac1-12c oracle]$ scp CONPDB3.dfb oracle@192.168.2.201:/u01/app/oracle/
oracle@192.168.2.201's password:
CONPDB3.dfb                                       100%  148MB  73.9MB/s   00:02
[oracle@rac1-12c oracle]$ cd


[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [contdb1] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 09:43:06 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@contdb as sysdba
Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        MOUNTED

SQL> col PATCH_ID format 999999999
SQL> col PATCH_UID format 99999999
SQL> col VERSION format a10
SQL> col STATUS format a15
SQL> col DESCRIPTION format a80
SQL> set lines 200 pages 1000
SQL> select PATCH_ID, PATCH_UID, VERSION, STATUS, DESCRIPTION from DBA_REGISTRY_SQLPATCH order by BUNDLE_SERIES;
no rows selected

SQL> !
[oracle@rac1-12c ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/OPatch/
[oracle@rac1-12c OPatch]$ ./opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/product/12.1.0.2/db_1".

OPatch succeeded.
[oracle@rac1-12c OPatch]$

===========================Target Cluster RAC3 and RAC4=================>

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [orcldb1] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 06:22:16 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@contdb as sysdba
Connected.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
SQL> exit


[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [+ASM1] ? contdb1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0.2/db_1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Feb 25 06:51:58 2018
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: CONTDB (DBID=1349963590)

RMAN> run
{
set command id  to 'CONPDB3';
RESTORE FOREIGN DATAFILE  18 TO NEW, 19 TO NEW, 
20 TO NEW FROM BACKUPSET '/u01/app/oracle/CONPDB3.dfb';
};
2> 3> 4> 5>
executing command: SET COMMAND ID

Starting restore at 25-FEB-18
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=97 instance=contdb1 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file 00018
channel ORA_DISK_1: restoring foreign file 00019
channel ORA_DISK_1: restoring foreign file 00020
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/CONPDB3.dfb
channel ORA_DISK_1: restoring foreign file 18 to +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.350.969000723
channel ORA_DISK_1: restoring foreign file 19 to +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.349.969000723
channel ORA_DISK_1: restoring foreign file 20 to +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.351.969000723
channel ORA_DISK_1: foreign piece handle=/u01/app/oracle/CONPDB3.dfb
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:26
Finished restore at 25-FEB-18
RMAN> exit

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [contdb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ asmcmd
ASMCMD> cd DATA
ASMCMD> cd CONTDB
ASMCMD> ls
5D900B1A1E953788E053C902A8C0836A/
5D902973DAA94A24E053C902A8C0F831/
5D902BDD14014BFCE053C902A8C0636A/
65FEDAA33CD86078E0536502A8C07E3C/
ARCHIVELOG/
AUTOBACKUP/
CONTROLFILE/
DATAFILE/
FD9AC20F64D244D7E043B6A9E80A2F2F/
ONLINELOG/
PARAMETERFILE/
PASSWORD/
TEMPFILE/
ASMCMD> cd 65FEDAA33CD86078E0536502A8C07E3C/

ASMCMD> ls
DATAFILE/
ASMCMD> cd datafile
ASMCMD> ls
SYSAUX.349.969000723
SYSTEM.350.969000723
USERS.351.969000723
ASMCMD> ls -l
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   FEB 25 06:00:00  Y    SYSAUX.349.969000723
DATAFILE  UNPROT  COARSE   FEB 25 06:00:00  Y    SYSTEM.350.969000723
DATAFILE  UNPROT  COARSE   FEB 25 06:00:00  Y    USERS.351.969000723
ASMCMD> exit

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [+ASM1] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 06:55:00 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@contdb as sysdba
Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [contdb] ? contdb1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0.2/db_1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Feb 25 06:58:45 2018
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: CONTDB (DBID=1349963590)

RMAN> change datafilecopy '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.350.969000723',
2> '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.349.969000723',
3> '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.351.969000723' uncatalog;

using target database control file instead of recovery catalog
uncataloged datafile copy
datafile copy file name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.350.969000723 RECID=7 STAMP=969000740
uncataloged datafile copy
datafile copy file name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.349.969000723 RECID=8 STAMP=969000748
uncataloged datafile copy
datafile copy file name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.351.969000723 RECID=6 STAMP=969000724
Uncataloged 3 objects

RMAN> exit
Recovery Manager complete.

[oracle@rac3-12c ~]$ cd /u01/app/oracle/
[oracle@rac3-12c oracle]$ cat CONPDB3_NEW.xml | grep path
  +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.335.968994861
  +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.334.968994861
  +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/TEMPFILE/temp.333.968997067
  +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.332.968994889
[oracle@rac3-12c oracle]$

[oracle@rac3-12c oracle]$ asmcmd
ASMCMD> pwd
+data/contdb/65FEDAA33CD86078E0536502A8C07E3C/datafile

ASMCMD> exit
[oracle@rac3-12c oracle]$ cd

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [+ASM1] ? contdb1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0.2/db_1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 07:04:54 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@contdb as sysdba
Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO

SQL> create pluggable database CONPDB3 as clone  using '/u01/app/oracle/CONPDB3_NEW.xml'
  2  source_file_name_convert = (
  3  '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.335.968994861',
     '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.350.969000723',
  4  '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.334.968994861',
     '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.349.969000723',
  5  '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.332.968994889',
     '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.351.969000723',
  6  '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/TEMPFILE/temp.333.968997067',
     '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/conpdb3_temp01.dbf')
  7  file_name_convert=NONE
  8  NOCOPY;

Pluggable database created.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
CONPDB1                        READ WRITE
CONPDB2                        READ WRITE
CONPDB3                        MOUNTED

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        MOUNTED
SQL>
SQL> alter pluggable database CONPDB3 open instances=all;

Warning: PDB altered with errors.

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [contdb] ?
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 10:54:01 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@contdb as sysdba
Connected.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE YES

SQL> select cause, type, message from PDB_PLUG_IN_VIOLATIONS where name = 'CONPDB3';

CAUSE                                                            TYPE
---------------------------------------------------------------- ---------
MESSAGE
---
SQL Patch                                                        ERROR
PSU bundle patch 171017 (DATABASE PATCH SET UPDATE 12.1.0.2.171017): Installed in the CDB but not in the PDB.

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [contdb] ? contdb1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0.2/db_1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/OPatch/

[oracle@rac3-12c OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 Production on Sun Feb 25 10:56:36 2018
Copyright (c) 2012, 2016, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_11332_2018_02_25_10_56_36/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series PSU:
  Not installed in the binary registry and ID 171017 in PDB CDB$ROOT, ID 171017 in PDB PDB$SEED, ID 171017 in PDB CONPDB1, ID 171017 in PDB CONPDB2

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED CONPDB1 CONPDB2
    The following patches will be rolled back:
      26713565 (DATABASE PATCH SET UPDATE 12.1.0.2.171017)
    Nothing to apply
  For the following PDBs: CONPDB3
    Nothing to roll back
    Nothing to apply

Installing patches...
Patch installation complete.  Total patches installed: 4

Validating logfiles...
Patch 26713565 rollback (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26713565/21602269/26713565_rollback_CONTDB_CDBROOT_2018Feb25_10_57_32.log (no errors)
Patch 26713565 rollback (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26713565/21602269/26713565_rollback_CONTDB_PDBSEED_2018Feb25_10_58_23.log (no errors)
Patch 26713565 rollback (pdb CONPDB1): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26713565/21602269/26713565_rollback_CONTDB_CONPDB1_2018Feb25_10_58_23.log (no errors)
Patch 26713565 rollback (pdb CONPDB2): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26713565/21602269/26713565_rollback_CONTDB_CONPDB2_2018Feb25_10_58_23.log (no errors)
SQL Patching tool complete on Sun Feb 25 10:59:37 2018
[oracle@rac3-12c OPatch]$
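
After datapatch completes, the plug-in violation can be re-checked; once resolved, its STATUS column changes from PENDING to RESOLVED:

```sql
-- From CDB$ROOT: verify the SQL patch mismatch for the new PDB is resolved
SELECT name, cause, type, status, message
FROM   pdb_plug_in_violations
WHERE  name = 'CONPDB3';
```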

[oracle@rac3-12c OPatch]$ . oraenv
ORACLE_SID = [contdb] ? contdb1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c OPatch]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 11:01:09 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE YES

SQL> select inst_id, name, restricted from gv$containers;

   INST_ID NAME                           RES
---------- ------------------------------ ---
         1 CDB$ROOT                       NO
         1 PDB$SEED                       NO
         1 CONPDB1                        NO
         1 CONPDB2                        NO
         1 CONPDB3                        YES
         2 CDB$ROOT                       NO
         2 PDB$SEED                       NO
         2 CONPDB1                        NO
         2 CONPDB2                        NO
         2 CONPDB3                        YES

10 rows selected.

SQL> alter pluggable database CONPDB3 close immediate instances=ALL;
Pluggable database altered.

SQL> select inst_id, name, restricted from gv$containers;

   INST_ID NAME                           RES
---------- ------------------------------ ---
         1 CDB$ROOT                       NO
         1 PDB$SEED                       NO
         1 CONPDB1                        NO
         1 CONPDB2                        NO
         1 CONPDB3
         2 CDB$ROOT                       NO
         2 PDB$SEED                       NO
         2 CONPDB1                        NO
         2 CONPDB2                        NO
         2 CONPDB3

10 rows selected.

SQL> alter pluggable database CONPDB3 open read write instances=ALL;
Pluggable database altered.

SQL> select inst_id, name, restricted from gv$containers;

   INST_ID NAME                           RES
---------- ------------------------------ ---
         1 CDB$ROOT                       NO
         1 PDB$SEED                       NO
         1 CONPDB1                        NO
         1 CONPDB2                        NO
         1 CONPDB3                        NO
         2 CDB$ROOT                       NO
         2 PDB$SEED                       NO
         2 CONPDB1                        NO
         2 CONPDB2                        NO
         2 CONPDB3                        NO

10 rows selected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE NO

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE NO

SQL> connect sys/oracle@192.168.2.201:1521/conpdb3 as sysdba
Connected.

SQL> show con_name

CON_NAME
------------------------------
CONPDB3

SQL> connect user1/oracle@192.168.2.201:1521/conpdb3
Connected.

SQL> select * from tab1;

        NO
----------
         1

SQL>