Thursday, July 6, 2017

Flash Cache Cards in Oracle Exadata X4 and Oracle Exadata X6


Introduction

Previous Exadata models used 4 Flash "assemblies", each containing 4 individual "Flash Modules" (known as "FMODs") per 
Flash card, for a total of 16 Flash Modules/flash disks seen by the OS. In contrast, the newer Exadata Storage 
Servers used in X5-2, X4-8 (when equipped with X5-L storage nodes), X5-8, X6-2, X6-8 and SuperCluster (with 
Exadata X5-2 storage) use 4 Flash F160 (X5) or Flash F320 (X6) NVMe (Non-Volatile Memory Express) PCIe Flash cards 
whose NAND flash is configured as a single module. Because these cards do not use multiple FMODs, each appears to 
the OS as 1 flash disk per card, for a total of 4 flash disks per server instead of the 16 seen with the previous 
generation of Flash cards. Consequently, the cell configuration layer also sees 1 physical disk and creates 1 LUN on 
the whole card, for a total of 4 per server rather than the 16 seen in previous models of Exadata Storage Servers.

More information: Why does the Exadata X5 and X6 storage nodes have only 4 Flashdisks per node? (Doc ID 2139076.1)
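A quick way to see this difference from CellCLI (the WHERE filter on object attributes is standard CellCLI syntax):

CellCLI> LIST PHYSICALDISK WHERE diskType=FlashDisk

On an X4 cell this returns 16 FLASH_x_y rows (4 FDOMs per card), while on an X5/X6 cell it returns only 4 rows, one 
per card, as the outputs below show.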

Exadata X4 (800 GB * 4 Flash Cache Cards)
===========================================

Oracle Exadata X4 - "Sun Flash Accelerator F80 PCIe Card"

1. Each Flash card is divided into 4 regions (per Exadata Storage Server); a quick arithmetic check follows the list below.

   186.2 GB * 4 - First Card  (FD_00_celladm01 to FD_03_celladm01)
   186.2 GB * 4 - Second Card  (FD_04_celladm01 to FD_07_celladm01)
   186.2 GB * 4 - Third Card  (FD_08_celladm01 to FD_11_celladm01)
   186.2 GB * 4 - Fourth Card  (FD_12_celladm01 to FD_15_celladm01)
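
A quick arithmetic check against the flash cache size reported below (the small gap of roughly 1.5 GB is space the 
cell keeps back, for example for Smart Flash Log and metadata):

   186.26451539993286 GB x 16 FDOMs = 2980.23 GB
   effectiveCacheSize = 2.908935546875 TB x 1024 = 2978.75 GB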

CellCLI> list flashcache
         celladm01_FLASHCACHE         normal

CellCLI> list flashcache detail
         name:                   celladm01_FLASHCACHE
         cellDisk:               FD_15_celladm01,
                                 FD_04_celladm01,
                                 FD_11_celladm01,
                                 FD_00_celladm01,
                                 FD_07_celladm01,
                                 FD_14_celladm01,
                                 FD_06_celladm01,
                                 FD_12_celladm01,
                                 FD_05_celladm01,
                                 FD_09_celladm01,
                                 FD_10_celladm01,
                                 FD_08_celladm01,
                                 FD_02_celladm01,
                                 FD_13_celladm01,
                                 FD_01_celladm01,
                                 FD_03_celladm01
         creationTime:           2016-12-13T19:14:45-05:00
         degradedCelldisks:
         effectiveCacheSize:     2.908935546875T
         id:                     86ecd6a4-501f-4a8d-8c8a-eea9e5b58f77
         size:                   2.908935546875T
         status:                 normal
CellCLI>


CellCLI> list lun detail
         name:                   1_0
         cellDisk:               FD_00_celladm01
         deviceName:             /dev/sdi
         diskType:               FlashDisk
         id:                     1_0
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_1_0
         status:                 normal

         name:                   1_1
         cellDisk:               FD_01_celladm01
         deviceName:             /dev/sdj
         diskType:               FlashDisk
         id:                     1_1
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_1_1
         status:                 normal

         name:                   1_2
         cellDisk:               FD_02_celladm01
         deviceName:             /dev/sdk
         diskType:               FlashDisk
         id:                     1_2
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_1_2
         status:                 normal

         name:                   1_3
         cellDisk:               FD_03_celladm01
         deviceName:             /dev/sdl
         diskType:               FlashDisk
         id:                     1_3
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_1_3
         status:                 normal

         name:                   2_0
         cellDisk:               FD_04_celladm01
         deviceName:             /dev/sdm
         diskType:               FlashDisk
         id:                     2_0
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_2_0
         status:                 normal

         name:                   2_1
         cellDisk:               FD_05_celladm01
         deviceName:             /dev/sdn
         diskType:               FlashDisk
         id:                     2_1
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_2_1
         status:                 normal

         name:                   2_2
         cellDisk:               FD_06_celladm01
         deviceName:             /dev/sdo
         diskType:               FlashDisk
         id:                     2_2
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_2_2
         status:                 normal

         name:                   2_3
         cellDisk:               FD_07_celladm01
         deviceName:             /dev/sdp
         diskType:               FlashDisk
         id:                     2_3
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_2_3
         status:                 normal

         name:                   4_0
         cellDisk:               FD_08_celladm01
         deviceName:             /dev/sde
         diskType:               FlashDisk
         id:                     4_0
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_4_0
         status:                 normal

         name:                   4_1
         cellDisk:               FD_09_celladm01
         deviceName:             /dev/sdf
         diskType:               FlashDisk
         id:                     4_1
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_4_1
         status:                 normal

         name:                   4_2
         cellDisk:               FD_10_celladm01
         deviceName:             /dev/sdg
         diskType:               FlashDisk
         id:                     4_2
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_4_2
         status:                 normal

         name:                   4_3
         cellDisk:               FD_11_celladm01
         deviceName:             /dev/sdh
         diskType:               FlashDisk
         id:                     4_3
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_4_3
         status:                 normal

         name:                   5_0
         cellDisk:               FD_12_celladm01
         deviceName:             /dev/sda
         diskType:               FlashDisk
         id:                     5_0
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_5_0
         status:                 normal

         name:                   5_1
         cellDisk:               FD_13_celladm01
         deviceName:             /dev/sdb
         diskType:               FlashDisk
         id:                     5_1
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_5_1
         status:                 normal

         name:                   5_2
         cellDisk:               FD_14_celladm01
         deviceName:             /dev/sdc
         diskType:               FlashDisk
         id:                     5_2
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_5_2
         status:                 normal

         name:                   5_3
         cellDisk:               FD_15_celladm01
         deviceName:             /dev/sdd
         diskType:               FlashDisk
         id:                     5_3
         isSystemLun:            FALSE
         lunSize:                186.26451539993286G
         physicalDrives:         FLASH_5_3
         status:                 normal
CellCLI>

CellCLI> list physicaldisk
         FLASH_1_0       11000152396     normal
         FLASH_1_1       11000152536     normal
         FLASH_1_2       11000152472     normal
         FLASH_1_3       11000151778     normal
         FLASH_2_0       11000153100     normal
         FLASH_2_1       11000152864     normal
         FLASH_2_2       11000152013     normal
         FLASH_2_3       11000152389     normal
         FLASH_4_0       11000151113     normal
         FLASH_4_1       11000151026     normal
         FLASH_4_2       11000151119     normal
         FLASH_4_3       11000151104     normal
         FLASH_5_0       11000151785     normal
         FLASH_5_1       11000152298     normal
         FLASH_5_2       11000131661     normal
         FLASH_5_3       11000131306     normal


Note: I have truncated physical disk information for clarity


CellCLI> list physicaldisk detail
         name:                   FLASH_1_0
         deviceName:             /dev/sdi
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   1_0
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 1; FDOM: 0"
         status:                 normal

         name:                   FLASH_1_1
         deviceName:             /dev/sdj
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   1_1
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 1; FDOM: 1"
         status:                 normal

         name:                   FLASH_1_2
         deviceName:             /dev/sdk
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   1_2
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 1; FDOM: 2"
         status:                 normal

         name:                   FLASH_1_3
         deviceName:             /dev/sdl
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   1_3
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 1; FDOM: 3"
         status:                 normal

         name:                   FLASH_2_0
         deviceName:             /dev/sdm
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   2_0
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 2; FDOM: 0"
         status:                 normal

         name:                   FLASH_2_1
         deviceName:             /dev/sdn
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   2_1
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 2; FDOM: 1"
         status:                 normal

         name:                   FLASH_2_2
         deviceName:             /dev/sdo
         diskType:               FlashDisk
         flashLifeLeft:          99
         luns:                   2_2
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 2; FDOM: 2"
         status:                 normal

         name:                   FLASH_2_3
         deviceName:             /dev/sdp
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   2_3
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 2; FDOM: 3"
         status:                 normal

         name:                   FLASH_4_0
         deviceName:             /dev/sde
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   4_0
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 4; FDOM: 0"
         status:                 normal

         name:                   FLASH_4_1
         deviceName:             /dev/sdf
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   4_1
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 4; FDOM: 1"
         status:                 normal

         name:                   FLASH_4_2
         deviceName:             /dev/sdg
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   4_2
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 4; FDOM: 2"
         status:                 normal

         name:                   FLASH_4_3
         deviceName:             /dev/sdh
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   4_3
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 4; FDOM: 3"
         status:                 normal

         name:                   FLASH_5_0
         deviceName:             /dev/sda
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   5_0
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 5; FDOM: 0"
         status:                 normal

         name:                   FLASH_5_1
         deviceName:             /dev/sdb
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   5_1
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 5; FDOM: 1"
         status:                 normal

         name:                   FLASH_5_2
         deviceName:             /dev/sdc
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   5_2
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 5; FDOM: 2"
         status:                 normal

         name:                   FLASH_5_3
         deviceName:             /dev/sdd
         diskType:               FlashDisk
         flashLifeLeft:          100
         luns:                   5_3
         makeModel:              "Sun Flash Accelerator F80 PCIe Card"
         physicalFirmware:       UIO6
         physicalInsertTime:     2016-12-13T12:45:11-05:00
         physicalSize:           186.26451539993286G
         slotNumber:             "PCI Slot: 5; FDOM: 3"
         status:                 normal



Exadata X6 (3 TB * 4 Flash Cache Cards)
========================================

Oracle Exadata X6 - "Oracle Flash Accelerator F320 PCIe Card"


1. Each Flash card presents a single region (per Storage Server); an OS-level check follows the list below.

   2.9 TB * 1 - First Card  (FLASH_1_1)
   2.9 TB * 1 - Second Card (FLASH_2_1)
   2.9 TB * 1 - Third Card  (FLASH_4_1)
   2.9 TB * 1 - Fourth Card (FLASH_5_1)
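
Because these are NVMe devices, the same four cards are also visible directly from the OS; the device names below 
match the physicaldisk detail output later in this section:

[root@celadm01 ~]# ls /dev/nvme*n1
/dev/nvme0n1  /dev/nvme1n1  /dev/nvme2n1  /dev/nvme3n1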


CellCLI> list flashcache
         celadm01_FLASHCACHE         normal

CellCLI> list flashcache detail
         name:                   celadm01_FLASHCACHE
         cellDisk:               FD_02_celadm01,FD_01_celadm01,FD_00_celadm01,FD_03_celadm01
         creationTime:           2016-08-02T17:26:28-04:00
         degradedCelldisks:
         effectiveCacheSize:     11.64312744140625T
         id:                     f37efd89-6030-455d-9300-f48657344bf6
         size:                   11.64312744140625T
         status:                 normal
CellCLI>

CellCLI> list physicaldisk
         FLASH_1_1       S2T7NAAH301716  normal
         FLASH_2_1       S2T7NAAH301662  normal
         FLASH_4_1       S2T7NAAH301680  normal
         FLASH_5_1       S2T7NAAH301723  normal


CellCLI> list physicaldisk detail
         name:                   FLASH_1_1
         deviceName:             /dev/nvme3n1
         diskType:               FlashDisk
         luns:                   1_1
         makeModel:              "Oracle Flash Accelerator F320 PCIe Card"
         physicalFirmware:       KPYAGR3Q
         physicalInsertTime:     2016-07-25T12:58:28-04:00
         physicalSize:           2.910957656800747T
         slotNumber:             "PCI Slot: 1; FDOM: 1"
         status:                 normal

         name:                   FLASH_2_1
         deviceName:             /dev/nvme2n1
         diskType:               FlashDisk
         luns:                   2_1
         makeModel:              "Oracle Flash Accelerator F320 PCIe Card"
         physicalFirmware:       KPYAGR3Q
         physicalInsertTime:     2016-07-25T12:58:28-04:00
         physicalSize:           2.910957656800747T
         slotNumber:             "PCI Slot: 2; FDOM: 1"
         status:                 normal

         name:                   FLASH_4_1
         deviceName:             /dev/nvme0n1
         diskType:               FlashDisk
         luns:                   4_1
         makeModel:              "Oracle Flash Accelerator F320 PCIe Card"
         physicalFirmware:       KPYAGR3Q
         physicalInsertTime:     2016-07-25T12:58:28-04:00
         physicalSize:           2.910957656800747T
         slotNumber:             "PCI Slot: 4; FDOM: 1"
         status:                 normal

         name:                   FLASH_5_1
         deviceName:             /dev/nvme1n1
         diskType:               FlashDisk
         luns:                   5_1
         makeModel:              "Oracle Flash Accelerator F320 PCIe Card"
         physicalFirmware:       KPYAGR3Q
         physicalInsertTime:     2016-07-25T12:58:28-04:00
         physicalSize:           2.910957656800747T
         slotNumber:             "PCI Slot: 5; FDOM: 1"
         status:                 normal
CellCLI>

Friday, June 30, 2017

Disk Scrubbing Feature – Oracle Exadata Database Machine




Introduction:

Disk scrubbing is a feature introduced in Oracle Database 11.2.0.4 and Oracle Exadata storage software 11.2.3.3.0. 
Its purpose is to periodically validate the integrity of the mirrored ASM extents and thus eliminate latent 
corruption. On production servers, disk scrubbing should be scheduled when average I/O utilization is minimal, 
because it can cause spikes in disk utilization and latency and adversely affect database performance. 
By default, the hard disk scrub runs every two weeks.

The following parameters control disk scrubbing (examples of setting them follow the lists below):

• hardDiskScrubInterval: 
Sets the interval for proactive resilvering of latent bad sectors. Valid options are daily, weekly, biweekly and none.
• hardDiskScrubStartTime:
Sets the start time for proactive resilvering of latent bad sectors. Valid options are a date/time combination or now.

Schedules available to enable hard disk scrub activity:

• hardDiskScrubInterval=daily
• hardDiskScrubInterval=weekly
• hardDiskScrubInterval=biweekly
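
For example, to run the scrub weekly starting at a chosen quiet time (the ISO-style timestamp shown is the format 
CellCLI accepts; adjust it to your own window and time zone):

CellCLI> ALTER CELL hardDiskScrubInterval=weekly
CellCLI> ALTER CELL hardDiskScrubStartTime='2017-07-08T22:00:00-05:00'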

Ways to check the alert log in an Oracle Exadata Storage Server (a quick grep filter follows the list):

1. ADRCI
2. CELLTRACE
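
To pull only the scrubbing messages out of a large alert log, a simple filter on the CELLTRACE copy works (assuming 
alert.log is readable by your user, as in the listings below):

[celladmin@CellServer01 trace]$ grep -i "scrubbing CellDisk" alert.log | tail -5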

Where to look in Oracle Exadata Storage Servers:

Exadata Storage Server-1: CellServer01

[celladmin@CellServer01 ~]$ cd $CELLTRACE
[celladmin@CellServer01 trace]$ pwd
/opt/oracle/cell12.1.2.3.3_LINUX.X64_161109/log/diag/asm/cell/CellServer01/trace
[celladmin@CellServer01 trace]$ ls -l alert*
-rw-rw---- 1 root celladmin 254890 Mar 11 05:03 alert.log
[celladmin@CellServer01 trace]$

(OR)

[celladmin@CellServer01 ~]$ adrci
ADRCI: Release 12.1.0.2.0 - Production on Mon Mar 13 12:49:12 2017
Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.
ADR base = "/opt/oracle/cell12.1.2.3.3_LINUX.X64_161109/log"

adrci> show alert

Choose the home from which to view the alert log:

1: diag/asm/user_root/host_136421473_80
2: diag/asm/user_root/host_136421473_82
3: diag/asm/cell/CellServer01
4: diag/asm/cell/SYS_121233_161109
5: diag/asm/cell/SYS_112331_151006
Q: to quit

Please select option: 3
Output the results to file: /tmp/alert_35417_1399_CellServer01_1.ado

Begin scrubbing CellDisk:CD_03_CellServer01.
Begin scrubbing CellDisk:CD_04_CellServer01.
Begin scrubbing CellDisk:CD_07_CellServer01.
Begin scrubbing CellDisk:CD_06_CellServer01.
Begin scrubbing CellDisk:CD_10_CellServer01.
Begin scrubbing CellDisk:CD_05_CellServer01.
Begin scrubbing CellDisk:CD_01_CellServer01.
Begin scrubbing CellDisk:CD_09_CellServer01.
Begin scrubbing CellDisk:CD_11_CellServer01.
Begin scrubbing CellDisk:CD_08_CellServer01.
Begin scrubbing CellDisk:CD_00_CellServer01.
Begin scrubbing CellDisk:CD_02_CellServer01.

2017-02-24 10:55:08.976000 -05:00
Finished scrubbing CellDisk:CD_01_CellServer01, scrubbed blocks (1MB):7465024, found bad blocks:0
2017-02-24 12:19:33.389000 -05:00
Finished scrubbing CellDisk:CD_00_CellServer01, scrubbed blocks (1MB):7465024, found bad blocks:0
2017-02-24 17:40:33.013000 -05:00
Finished scrubbing CellDisk:CD_05_CellServer01, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 17:44:36.352000 -05:00
Finished scrubbing CellDisk:CD_08_CellServer01, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 17:50:16.765000 -05:00
Finished scrubbing CellDisk:CD_10_CellServer01, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 17:50:20.052000 -05:00
Finished scrubbing CellDisk:CD_07_CellServer01, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 17:53:45.900000 -05:00
Finished scrubbing CellDisk:CD_06_CellServer01, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 17:57:31.965000 -05:00
Finished scrubbing CellDisk:CD_04_CellServer01, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 18:23:17.292000 -05:00
Finished scrubbing CellDisk:CD_11_CellServer01, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 18:47:43.248000 -05:00
Finished scrubbing CellDisk:CD_09_CellServer01, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 19:12:58.308000 -05:00
Finished scrubbing CellDisk:CD_02_CellServer01, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 22:25:42.408000 -05:00
Finished scrubbing CellDisk:CD_03_CellServer01, scrubbed blocks (1MB):7499632, found bad blocks:0

Exadata Storage Server-2: CellServer02


[celladmin@CellServer02 ~]$ cd $CELLTRACE
[celladmin@CellServer02 trace]$ pwd
/opt/oracle/cell12.1.2.3.3_LINUX.X64_161109/log/diag/asm/cell/CellServer02/trace
[celladmin@CellServer02 trace]$ ls -lrth alert*
-rw-rw---- 1 root celladmin 4.2M Mar 11 02:57 alert.log
[celladmin@CellServer02 trace]$

(OR)

[celladmin@CellServer02 ~]$ adrci
ADRCI: Release 12.1.0.2.0 - Production on Mon Mar 13 13:32:42 2017
Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/opt/oracle/cell12.1.2.3.3_LINUX.X64_161109/log"
adrci> show alert

Choose the home from which to view the alert log:

1: diag/asm/user_root/host_1634209856_80
2: diag/asm/user_root/host_1634209856_82
3: diag/asm/cell/CellServer02
4: diag/asm/cell/SYS_112331_151006
5: diag/asm/cell/SYS_121233_161109
Q: to quit

Please select option: 3
Output the results to file: /tmp/alert_5413_14027_CellServer02_1.ado

Begin scrubbing CellDisk:CD_02_CellServer02.
Begin scrubbing CellDisk:CD_00_CellServer02.
Begin scrubbing CellDisk:CD_11_CellServer02.
Begin scrubbing CellDisk:CD_10_CellServer02.
Begin scrubbing CellDisk:CD_09_CellServer02.
Begin scrubbing CellDisk:CD_06_CellServer02.
Begin scrubbing CellDisk:CD_01_CellServer02.
Begin scrubbing CellDisk:CD_04_CellServer02.
Begin scrubbing CellDisk:CD_05_CellServer02.
Begin scrubbing CellDisk:CD_03_CellServer02.
Begin scrubbing CellDisk:CD_07_CellServer02.
Begin scrubbing CellDisk:CD_08_CellServer02.
2017-02-24 11:32:04.092000 -05:00
Finished scrubbing CellDisk:CD_01_CellServer02, scrubbed blocks (1MB):7465024, found bad blocks:0
2017-02-24 12:47:37.032000 -05:00
Finished scrubbing CellDisk:CD_00_CellServer02, scrubbed blocks (1MB):7465024, found bad blocks:0
2017-02-24 18:33:47.058000 -05:00
Finished scrubbing CellDisk:CD_06_CellServer02, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 18:34:45.791000 -05:00
Finished scrubbing CellDisk:CD_02_CellServer02, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 18:39:05.954000 -05:00
Finished scrubbing CellDisk:CD_03_CellServer02, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 18:45:55.826000 -05:00
Finished scrubbing CellDisk:CD_08_CellServer02, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 18:51:21.961000 -05:00
Finished scrubbing CellDisk:CD_05_CellServer02, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 18:56:44.404000 -05:00
Finished scrubbing CellDisk:CD_07_CellServer02, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 19:07:25.359000 -05:00
Finished scrubbing CellDisk:CD_10_CellServer02, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 19:09:20.616000 -05:00
Finished scrubbing CellDisk:CD_04_CellServer02, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 19:10:06.256000 -05:00
Finished scrubbing CellDisk:CD_09_CellServer02, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 22:56:26.134000 -05:00
Finished scrubbing CellDisk:CD_11_CellServer02, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-03-08 19:00:06.986000 -05:00

Exadata Storage Server-3: CellServer03


[celladmin@CellServer03 ~]$ cd $CELLTRACE
[celladmin@CellServer03 trace]$ pwd
/opt/oracle/cell12.1.2.3.3_LINUX.X64_161109/log/diag/asm/cell/CellServer03/trace
[celladmin@CellServer03 trace]$ ls -l alert*
-rw-rw---- 1 root celladmin 254890 Mar 11 05:03 alert.log
[celladmin@CellServer03 trace]$

OR

[celladmin@CellServer03 ~]$ adrci
ADRCI: Release 12.1.0.2.0 - Production on Tue Mar 14 14:48:47 2017
Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.
ADR base = "/opt/oracle/cell12.1.2.3.3_LINUX.X64_161109/log"

adrci> show alert

Choose the home from which to view the alert log:

1: diag/asm/user_root/host_4214962514_80
2: diag/asm/user_root/host_4214962514_82
3: diag/asm/cell/SYS_112331_151006
4: diag/asm/cell/SYS_121233_161109
5: diag/asm/cell/CellServer03
Q: to quit

Please select option: 5
Output the results to file: /tmp/alert_37829_1402_CellServer03_1.ado

Begin scrubbing CellDisk:CD_02_CellServer03.
Begin scrubbing CellDisk:CD_07_CellServer03.
Begin scrubbing CellDisk:CD_10_CellServer03.
Begin scrubbing CellDisk:CD_11_CellServer03.
Begin scrubbing CellDisk:CD_01_CellServer03.
Begin scrubbing CellDisk:CD_05_CellServer03.
Begin scrubbing CellDisk:CD_06_CellServer03.
Begin scrubbing CellDisk:CD_08_CellServer03.
Begin scrubbing CellDisk:CD_09_CellServer03.
Begin scrubbing CellDisk:CD_04_CellServer03.
Begin scrubbing CellDisk:CD_03_CellServer03.
Begin scrubbing CellDisk:CD_00_CellServer03.
2017-02-24 12:26:46.102000 -05:00
Finished scrubbing CellDisk:CD_00_CellServer03, scrubbed blocks (1MB):7465024, found bad blocks:0
2017-02-24 13:31:16.168000 -05:00
Finished scrubbing CellDisk:CD_01_CellServer03, scrubbed blocks (1MB):7465024, found bad blocks:0
2017-02-24 18:02:35.900000 -05:00
Finished scrubbing CellDisk:CD_03_CellServer03, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 18:35:41.075000 -05:00
Finished scrubbing CellDisk:CD_04_CellServer03, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 19:36:04.680000 -05:00
Finished scrubbing CellDisk:CD_10_CellServer03, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 20:12:15.913000 -05:00
Finished scrubbing CellDisk:CD_11_CellServer03, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 20:12:55.832000 -05:00
Finished scrubbing CellDisk:CD_09_CellServer03, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 20:36:54.813000 -05:00
Finished scrubbing CellDisk:CD_06_CellServer03, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 20:42:25.369000 -05:00
Finished scrubbing CellDisk:CD_05_CellServer03, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-24 20:58:07.648000 -05:00
Finished scrubbing CellDisk:CD_07_CellServer03, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-25 04:16:47.907000 -05:00
Finished scrubbing CellDisk:CD_08_CellServer03, scrubbed blocks (1MB):7499632, found bad blocks:0
2017-02-25 09:07:33.619000 -05:00
Finished scrubbing CellDisk:CD_02_CellServer03, scrubbed blocks (1MB):7499632, found bad blocks:0

Note: The same process can be followed on any Oracle Exadata Database Machine model 
(1/8th Rack, Quarter Rack, Half Rack and Full Rack).


Command to verify hard disk scrub activity enabled on Oracle Exadata:


[celladmin@CellServer01 ~]$ cellcli -e list cell attributes name,hardDiskScrubInterval
         CellServer01    biweekly

[celladmin@CellServer02 ~]$ cellcli -e list cell attributes name,hardDiskScrubInterval
         CellServer02    biweekly

[celladmin@CellServer03 ~]$ cellcli -e list cell attributes name,hardDiskScrubInterval
         CellServer03    biweekly

Command to stop hard disk scrub activity on Oracle Exadata (a dcli variant for all cells follows):


[celladmin@CellServer01 ~]$ cellcli -e alter cell hardDiskScrubInterval=none
[celladmin@CellServer02 ~]$ cellcli -e alter cell hardDiskScrubInterval=none
[celladmin@CellServer03 ~]$ cellcli -e alter cell hardDiskScrubInterval=none
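
Rather than logging in to each cell, the same command can be pushed to every storage server at once with dcli 
(a sketch: it assumes a cell_group file listing the cells and SSH user equivalence for celladmin; the compute-node 
prompt is illustrative):

[oracle@exadb01 ~]$ dcli -g cell_group -l celladmin "cellcli -e alter cell hardDiskScrubInterval=none"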

When to schedule:


Disk scrubbing consumes I/O while it runs on the storage servers, so a small additional load will show up in the 
Oracle database. Before enabling disk scrubbing, identify an idle window for mission-critical production databases. 
Follow the steps below to schedule it at a planned time.

Stop disk scrubbing and reschedule it for non-peak hours.
CellCLI> ALTER CELL hardDiskScrubInterval=none

Decide on a hardDiskScrubStartTime over a weekend or during non-peak hours, and set it accordingly. 
CellCLI> ALTER CELL hardDiskScrubStartTime='' 

Change the interval back to biweekly if the previous step was used to stop the disk scrub.
CellCLI> ALTER CELL hardDiskScrubInterval=biweekly

Summary:
Disk scrubbing is used to periodically validate the integrity of the mirrored ASM extents across the 
Oracle Exadata storage servers and thus eliminate latent corruption. 

Monday, June 19, 2017

Oracle GoldenGate Version 12.2.0.1.1 Does Not Support Oracle Database 12cR2 (12.2.0.1.0)




Oracle GoldenGate version 12.2.0.1.1 does not support Oracle Database 12cR2 (12.2.0.1.0).

Saturday, June 10, 2017

Duplicate a controlfile when ASM is involved with OMF


1. Modify the spfile, specifically the control_files parameter. The second value names only the +DATA disk group, so the restore in step 3 will create an OMF controlfile there.


SQL> alter system set control_files='+RECO/ORCL/CONTROLFILE/current.257.946348789','+DATA' scope=spfile sid='*';
System altered.

2. Start the instance in NOMOUNT mode.


[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [orcl] ? orcl1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Jun 11 03:48:49 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle as sysdba
Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 2499805184 bytes
Fixed Size      2927480 bytes
Variable Size    738198664 bytes
Database Buffers  1744830464 bytes
Redo Buffers     13848576 bytes
SQL> 

3. From RMAN, duplicate the controlfile


[oracle@rac1-12c ~]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Jun 11 03:49:25 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: ORCL (not mounted)

RMAN> restore controlfile from '+RECO/ORCL/CONTROLFILE/current.257.946348789';

Starting restore at 11-JUN-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=13 instance=orcl1 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+RECO/ORCL/CONTROLFILE/current.257.946348789
output file name=+DATA/ORCL/CONTROLFILE/current.425.946352989
Finished restore at 11-JUN-17

RMAN> exit

4. Modify the control_files parameter with the complete path of the new file: 


[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [orcl1] ? orcl1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0.2/db_1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Jun 11 03:50:56 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle as sysdba
Connected.

SQL> select open_mode from v$database;
select open_mode from v$database
                      *
ERROR at line 1:
ORA-01507: database not mounted

SQL> alter system set control_files='+RECO/ORCL/CONTROLFILE/current.257.946348789',
                                    '+DATA/ORCL/CONTROLFILE/current.425.946352989' 
                                     scope=spfile sid='*';
System altered.

SQL> shu immediate;
ORA-01507: database not mounted
ORACLE instance shut down.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

5. Check the control_files parameter


[oracle@rac1-12c ~]$ srvctl start database -d orcl

[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [orcl] ? orcl
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Jun 11 03:54:34 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@orcl as sysdba
Connected.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl2

SQL> show parameter control

NAME         TYPE  VALUE
------------------------------------ ----------- ------------------------------
control_file_record_keep_time      integer  7
control_files        string  +RECO/ORCL/CONTROLFILE/current.257.946348789, 
       +DATA/ORCL/CONTROLFILE/current.425.946352989
control_management_pack_access      string  DIAGNOSTIC+TUNING
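
As a final check, the new OMF controlfile can also be listed from ASM (run asmcmd as the Grid Infrastructure user 
with the ASM environment set):

ASMCMD> ls +DATA/ORCL/CONTROLFILE/

This should list the current.425.946352989 file created by the restore in step 3.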

Wednesday, May 24, 2017

Calculating Network Bandwidth on Exadata Compute Nodes and Checking Port Availability (netcat)


Introduction: To calculate network bandwidth on Exadata compute nodes and to check port availability, we can use "netcat" (nc).


Testing PORT availability:
=============================

Node-1
==================

1. Download and Install

[oracle@rac1-12c ~]$ chmod -R 777 netcat-0.7.1.tar.gz
[oracle@rac1-12c ~]$ tar -xvf netcat-0.7.1.tar.gz
[oracle@rac1-12c ~]$ cd netcat-0.7.1
[oracle@rac1-12c netcat-0.7.1]$ ./configure --prefix=/home/oracle/netcat
[oracle@rac1-12c netcat-0.7.1]$ make
[oracle@rac1-12c netcat-0.7.1]$ find . -name "netcat"
./src/netcat
[oracle@rac1-12c ~]$ cd /home/oracle/netcat-0.7.1/src/
[oracle@rac1-12c src]$ ./netcat -l -t -p 1523

Note: Keep this window open, then go to Node-2 and send a test message.


Node-2
===================
[oracle@rac2-12c ~]$ chmod -R 777 netcat-0.7.1.tar.gz
[oracle@rac2-12c ~]$ tar -xvf netcat-0.7.1.tar.gz
[oracle@rac2-12c ~]$ cd netcat-0.7.1
[oracle@rac2-12c netcat-0.7.1]$
[oracle@rac2-12c netcat-0.7.1]$ ./configure --prefix=/home/oracle/netcat
[oracle@rac2-12c netcat-0.7.1]$ make
[oracle@rac2-12c netcat-0.7.1]$ cd src/
[oracle@rac2-12c src]$ pwd
/home/oracle/netcat-0.7.1/src
[oracle@rac2-12c src]$ echo "hello" | ./netcat rac1-12c 1523

Note: The "hello" message appears in the Node-1 window; the same test can be run in the reverse direction from Node-1, using any free port.
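
For a pure port check without sending any data, the same GNU netcat build supports zero-I/O scan mode:

[oracle@rac2-12c src]$ ./netcat -z -v rac1-12c 1523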


Calculating Network Bandwidth On Exadata Compute Nodes:
==========================================================


Install the nc (netcat) package on both nodes, either with yum or from a downloaded rpm.
# rpm -ivh nc-1.84-10.fc6.x86_64.rpm

Using the nc utility, start a receiver on a given port. This example uses port 6789; ensure the port is not 
already in use.

host01:
# nc -l 6789 > output

On the other compute node, create a 1 GB file and send it to the nc port on the remote node.

host02:
# dd if=/dev/urandom of=input bs=1M count=1000
# ls -lah input
# time nc host01 6789 < input
From the above we can calculate the speed:

file size / real time  = megabytes per second

For example, if the 1000 MB file took 8.5 seconds of "real" time, the throughput is roughly 1000 / 8.5 ≈ 118 MB/s.

Finally, when the transfer completes, the file will exist on the remote node. Remember to remove the file and stop the nc (netcat) listener.

host01:
Press Ctrl+C in the nc shell, then verify the received file:
# ls -lah output
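
To avoid doing the division by hand, a tiny bc helper computes the throughput (SIZE_MB and REAL_SECS are 
placeholders for the file size and the "real" time you observed; with the example values it prints 117.6 MB/s):

# SIZE_MB=1000; REAL_SECS=8.5
# echo "scale=1; $SIZE_MB / $REAL_SECS" | bc
117.6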

Tuesday, May 2, 2017

Published articles in Oracle Technology Network (Spanish and Portuguese)


Article-82: Oracle Exadata Database Machine - Feature: Disk Scrubbing (Spanish)
http://www.oracle.com/technetwork/es/articles/database-performance/exadata-db-machine-disk-scrubbing-3679915-esa.html

Article-83: Flashback Pluggable Database (PDB) in a Multi-Tenant Environment Using Oracle Database 12c R2 (Portuguese)
http://www.oracle.com/technetwork/pt/articles/database-performance/flashback-3703752-ptb.html






Saturday, April 29, 2017

ILOM and BIOS update: Oracle Exadata Database Machine (X6-2)


Introduction:


While applying patch 25031476 (Exadata Storage Server software 12.1.2.3.4), the BIOS and ILOM on the 
Oracle Exadata Storage Servers were not updated properly. Therefore we need to power off and power on the 
Oracle Exadata Storage Servers so they pick up the latest ILOM and BIOS versions.
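
Before restarting, it is worth confirming the cell software version that was just applied; imageinfo on the storage 
server reports the active image, which here should correspond to the 12.1.2.3.4 / OSS_12.1.2.3.4_LINUX.X64_170111 
label seen in the alert log below:

[root@exp2cellser01 ~]# imageinfo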



Sequence of Steps:


Login to Oracle Exadata Storage Server


[celladmin@exp2cellser01 ~]$ cellcli

Check MS, RS and Cellsrv status


CellCLI> list cell detail

Check griddisk status


CellCLI> list griddisk
         DATAC1_CD_00_exp2cellser01        active
         DATAC1_CD_01_exp2cellser01        active
         DATAC1_CD_02_exp2cellser01        active
         DATAC1_CD_03_exp2cellser01        active
         DATAC1_CD_04_exp2cellser01        active
         DATAC1_CD_05_exp2cellser01        active
         DATAC1_CD_06_exp2cellser01        active
         DATAC1_CD_07_exp2cellser01        active
         DATAC1_CD_08_exp2cellser01        active
         DATAC1_CD_09_exp2cellser01        active
         DATAC1_CD_10_exp2cellser01        active
         DATAC1_CD_11_exp2cellser01        active
         DBFS_DG_CD_02_exp2cellser01      active
         DBFS_DG_CD_03_exp2cellser01      active
         DBFS_DG_CD_04_exp2cellser01      active
         DBFS_DG_CD_05_exp2cellser01      active
         DBFS_DG_CD_06_exp2cellser01      active
         DBFS_DG_CD_07_exp2cellser01      active
         DBFS_DG_CD_08_exp2cellser01      active
         DBFS_DG_CD_09_exp2cellser01      active
         DBFS_DG_CD_10_exp2cellser01      active
         DBFS_DG_CD_11_exp2cellser01      active
         RECOC1_CD_00_exp2cellser01        active
         RECOC1_CD_01_exp2cellser01       active
         RECOC1_CD_02_exp2cellser01        active
         RECOC1_CD_03_exp2cellser01        active
         RECOC1_CD_04_exp2cellser01        active
         RECOC1_CD_05_exp2cellser01        active
         RECOC1_CD_06_exp2cellser01        active
         RECOC1_CD_07_exp2cellser01        active
         RECOC1_CD_08_exp2cellser01        active
         RECOC1_CD_09_exp2cellser01        active
         RECOC1_CD_10_exp2cellser01        active
         RECOC1_CD_11_exp2cellser01        active
CellCLI>

Note: Before shutting down an Oracle Exadata Storage Server, asmdeactivationoutcome should be 'Yes' for all the 
grid disks on that Storage Server.


CellCLI> list griddisk attributes name, status, asmdeactivationoutcome
         DATAC1_CD_00_exp2cellser01       active  Yes
         DATAC1_CD_01_exp2cellser01       active  Yes
         DATAC1_CD_02_exp2cellser01       active  Yes
         DATAC1_CD_03_exp2cellser01       active  Yes
         DATAC1_CD_04_exp2cellser01       active  Yes
         DATAC1_CD_05_exp2cellser01       active  Yes
         DATAC1_CD_06_exp2cellser01       active  Yes
         DATAC1_CD_07_exp2cellser01       active  Yes
         DATAC1_CD_08_exp2cellser01       active  Yes
         DATAC1_CD_09_exp2cellser01       active  Yes
         DATAC1_CD_10_exp2cellser01       active  Yes
         DATAC1_CD_11_exp2cellser01       active  Yes
         DBFS_DG_CD_02_exp2cellser01      active  Yes
         DBFS_DG_CD_03_exp2cellser01      active  Yes
         DBFS_DG_CD_04_exp2cellser01      active  Yes
         DBFS_DG_CD_05_exp2cellser01      active  Yes
         DBFS_DG_CD_06_exp2cellser01      active  Yes
         DBFS_DG_CD_07_exp2cellser01      active  Yes
         DBFS_DG_CD_08_exp2cellser01      active  Yes
         DBFS_DG_CD_09_exp2cellser01      active  Yes
         DBFS_DG_CD_10_exp2cellser01      active  Yes
         DBFS_DG_CD_11_exp2cellser01      active  Yes
         RECOC1_CD_00_exp2cellser01       active  Yes
         RECOC1_CD_01_exp2cellser01       active  Yes
         RECOC1_CD_02_exp2cellser01       active  Yes
         RECOC1_CD_03_exp2cellser01       active  Yes
         RECOC1_CD_04_exp2cellser01       active  Yes
         RECOC1_CD_05_exp2cellser01       active  Yes
         RECOC1_CD_06_exp2cellser01       active  Yes
         RECOC1_CD_07_exp2cellser01       active  Yes
         RECOC1_CD_08_exp2cellser01       active  Yes
         RECOC1_CD_09_exp2cellser01       active  Yes
         RECOC1_CD_10_exp2cellser01       active  Yes
         RECOC1_CD_11_exp2cellser01       active  Yes
CellCLI>

Shutdown and Restart of Exadata Storage Servers through the ILOM Interface


SSH to the ILOM, then power the server off:

stop -f /SYS
show /SYS

Wait for 5 minutes after the power is down, then power it back on and verify:

start /SYS
start /SP/console
show /SYS
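
Once the node is back up, the new firmware levels can be checked from both sides ("version" is the standard 
ILOM CLI command for the SP firmware, and dmidecode on the host shows the running BIOS):

-> version
[root@exp2cellser01 ~]# dmidecode -s bios-version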

Check asmmodestatus via CellCLI:

a. Log in to the Storage Server
b. cellcli -e list cell detail


[celladmin@exp2cellser01 ~]$ cellcli -e list griddisk attributes name, asmmodestatus
         DATAC1_CD_00_exp2cellser01       ONLINE
         DATAC1_CD_01_exp2cellser01       ONLINE
         DATAC1_CD_02_exp2cellser01       ONLINE
         DATAC1_CD_03_exp2cellser01       ONLINE
         DATAC1_CD_04_exp2cellser01       ONLINE
         DATAC1_CD_05_exp2cellser01       ONLINE
         DATAC1_CD_06_exp2cellser01       ONLINE
         DATAC1_CD_07_exp2cellser01       ONLINE
         DATAC1_CD_08_exp2cellser01       ONLINE
         DATAC1_CD_09_exp2cellser01       ONLINE
         DATAC1_CD_10_exp2cellser01       ONLINE
         DATAC1_CD_11_exp2cellser01       ONLINE
         DBFS_DG_CD_02_exp2cellser01      ONLINE
         DBFS_DG_CD_03_exp2cellser01      ONLINE
         DBFS_DG_CD_04_exp2cellser01      ONLINE
         DBFS_DG_CD_05_exp2cellser01      ONLINE
         DBFS_DG_CD_06_exp2cellser01      ONLINE
         DBFS_DG_CD_07_exp2cellser01      ONLINE
         DBFS_DG_CD_08_exp2cellser01      ONLINE
         DBFS_DG_CD_09_exp2cellser01      ONLINE
         DBFS_DG_CD_10_exp2cellser01      ONLINE
         DBFS_DG_CD_11_exp2cellser01      ONLINE
         RECOC1_CD_00_exp2cellser01       SYNCING
         RECOC1_CD_01_exp2cellser01       SYNCING
         RECOC1_CD_02_exp2cellser01       SYNCING
         RECOC1_CD_03_exp2cellser01       SYNCING
         RECOC1_CD_04_exp2cellser01       SYNCING
         RECOC1_CD_05_exp2cellser01       SYNCING
         RECOC1_CD_06_exp2cellser01       SYNCING
         RECOC1_CD_07_exp2cellser01       SYNCING
         RECOC1_CD_08_exp2cellser01       SYNCING
         RECOC1_CD_09_exp2cellser01       SYNCING
         RECOC1_CD_10_exp2cellser01       SYNCING
         RECOC1_CD_11_exp2cellser01       SYNCING
[celladmin@exp2cellser01 ~]$

d. cellcli -e list griddisk

Note-1: Wait for the status to change from SYNCING to ONLINE. 
Note-2: Wait for the grid disks to finish syncing before moving on to the next cell server; a polling one-liner follows. 
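
A simple way to wait for the resync is to poll until no grid disk reports anything other than ONLINE (a small 
sketch reusing the same CellCLI attribute query; the 60-second interval is arbitrary):

[celladmin@exp2cellser01 ~]$ watch -n 60 'cellcli -e list griddisk attributes name,asmmodestatus | grep -v ONLINE'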
 
Alertlog:

Published: 12 events ASM ONLINE disk of opcode 3 for diskgroup DATAC1 to:
ClientHostName = exp2dbadm01,  ClientPID = 333592
Published: 10 events ASM ONLINE disk of opcode 3 for diskgroup DBFS_DG to:
ClientHostName = exp2dbadm03,  ClientPID = 344462
Published: 12 events ASM ONLINE disk of opcode 3 for diskgroup RECOC1 to:
ClientHostName = exp2dbadm04,  ClientPID = 291317
2017-04-22 20:30:26.835000 -04:00
Published: 12 events ASM OFFLINE disk due to shutdown of opcode 49 for diskgroup DATAC1 to:
ClientHostName = exp2dbadm03,  ClientPID = 344462
Published: 10 events ASM OFFLINE disk due to shutdown of opcode 49 for diskgroup DBFS_DG to:
ClientHostName = exp2dbadm01,  ClientPID = 333592
Published: 12 events ASM OFFLINE disk due to shutdown of opcode 49 for diskgroup RECOC1 to:
ClientHostName = exp2dbadm04,  ClientPID = 291317
2017-04-22 20:59:29.564000 -04:00
RS version=12.1.2.3.4,label=OSS_12.1.2.3.4_LINUX.X64_170111,Thu_Jan_12_01:51:41_PST_2017
[RS] Started Service RS_MAIN with pid 19853
[RS] Kill previous monitoring processes for RS_BACKUP, MS and CELLSRV
[RS] Started monitoring process /opt/oracle/cell/cellsrv/bin/cellrsbmt with pid 19860
[RS] Started monitoring process /opt/oracle/cell/cellsrv/bin/cellrsmmt with pid 19861
[RS] Started monitoring process /opt/oracle/cell/cellsrv/bin/cellrsomt with pid 19862
RSBK version=12.1.2.3.4,label=OSS_12.1.2.3.4_LINUX.X64_170111,Thu_Jan_12_01:51:41_PST_2017
[RS] Started Service RS_BACKUP with pid 19863
[RS] Kill previous monitoring process for core RS
[RS] Started monitoring process /opt/oracle/cell/cellsrv/bin/cellrssmt with pid 19874
CELL process id=19867
CELL host name=exp2cellser01
CELL version=12.1.2.3.4,label=OSS_12.1.2.3.4_LINUX.X64_170111,Thu_Jan_12_01:51:48_PST_2017
CELLSRV version md5: e4bd28a1a5d53d2d9e1e5e66fe3ca512
OS Stats: Physical memory: 128618 MB. Num cores: 40
CELLSRV configuration parameters:
Memory reserved for cellsrv: 125718 MB Memory for other processes: 2900 MB
_cell_fc_persistence_state=WriteBack
Successfully allocated 6400 MB for Storage Index. Storage Index memory usage can grow up to a maximum of 12571 MB.
CELL communication is configured to use 2 interface(s):
    192.168.10.108
    192.168.10.109
IPC version: Oracle RDS/IP (generic)
IPC Vendor 1 Protocol 3
  Version 4.1
2017-04-22 20:59:30.753000 -04:00
MS_ALERT HUGEPAGE CLEAR
2017-04-22 20:59:32.750000 -04:00
Cellsrv Incarnation is set: 11

CellDisk v0.10 name=FD_02_exp2cellser01   guid=41f081d1-e580-45bc-b058-c9ff1ce31e27 dev=/dev/nvme0n1 status=NORMAL
CellDisk v0.10 name=FD_01_exp2cellser01   guid=52f98ef1-28c6-4be9-ba8a-777f94a6925e dev=/dev/nvme2n1 status=NORMAL
CellDisk v0.10 name=FD_00_exp2cellser01   guid=bf804521-a5e6-4f2e-928e-0ec878e3b551 dev=/dev/nvme3n1 status=NORMAL
CellDisk v0.10 name=FD_03_exp2cellser01   guid=b00c9032-5da6-4377-a50d-cc9229bcdf3a dev=/dev/nvme1n1 status=NORMAL
CellDisk v0.10 name=CD_03_exp2cellser01   guid=d1e09adf-9688-4293-bf38-df2542bf0920 dev=/dev/sdd  status=NORMAL
CellDisk v0.10 name=CD_02_exp2cellser01   guid=792e1944-016f-4491-b3c0-340779fddef3 dev=/dev/sdc  status=NORMAL
GridDisk name=RECOC1_CD_03_exp2cellser01      guid=3fcdbb3a-11a4-4e8d-9012-3a9191dc393a (3382660076) status=GDISK_ACTIVE
GridDisk name=DATAC1_CD_03_exp2cellser01      guid=2e3db2a3-c462-4b7b-84ce-010b03738ea3 (2295850524) status=GDISK_ACTIVE   
cached by FlashCache:   144184636
GridDisk name=DBFS_DG_CD_03_exp2cellser01     guid=038d6e98-bb78-4b65-a338-8f1edfab3983 (3035701804) status=GDISK_ACTIVE   
cached by FlashCache:   144184636
GridDisk name=RECOC1_CD_02_exp2cellser01      guid=1d29cf2f-2073-4c13-be42-4641b9b5144e (2010707748) status=GDISK_ACTIVE
GridDisk name=DATAC1_CD_02_exp2cellser01      guid=63099498-4938-4bef-b0bc-aeb642404bcc (4031275892) status=GDISK_ACTIVE   
cached by FlashCache:  3909900404
GridDisk name=DBFS_DG_CD_02_exp2cellser01     guid=611830f9-8c9c-4c44-acf3-5d024aa672a8 (3860942228) status=GDISK_ACTIVE   
cached by FlashCache:  3909900404
CellDisk v0.10 name=CD_09_exp2cellser01   guid=f58aedf9-fe59-4b07-8517-5f9f8393d267 dev=/dev/sdj  status=NORMAL
CellDisk v0.10 name=CD_06_exp2cellser01   guid=8a11c6a2-4394-43b2-8b8f-8fb8606872ac dev=/dev/sdg  status=NORMAL
CellDisk v0.10 name=CD_04_exp2cellser01   guid=31417d0b-4bb7-4737-b244-dd149bcdcdc1 dev=/dev/sde  status=NORMAL
GridDisk name=RECOC1_CD_09_exp2cellser01      guid=48239cd3-12ce-43dd-8f60-efb4f72235b1 (2254531732) status=GDISK_ACTIVE
CellDisk v0.10 name=CD_07_exp2cellser01   guid=d26dea0e-d7e4-4bf3-a9ad-8db306fd1ff7 dev=/dev/sdh  status=NORMAL
GridDisk name=DATAC1_CD_09_exp2cellser01      guid=29aff2d4-ac2a-4931-979b-1452393e1e7f (1494348060) status=GDISK_ACTIVE   
cached by FlashCache:  1071653524
GridDisk name=DBFS_DG_CD_09_exp2cellser01     guid=f5a277d1-07a2-49b3-a843-29b6a8c6c76b (3693027468) status=GDISK_ACTIVE   
cached by FlashCache:  1071653524
GridDisk name=RECOC1_CD_06_exp2cellser01      guid=f6810b96-1967-4242-93fa-bd8792b3138a (3216602924) status=GDISK_ACTIVE
GridDisk name=DATAC1_CD_06_exp2cellser01      guid=da486922-7051-4e78-84b7-cfd8682468c6 (3872160012) status=GDISK_ACTIVE   
cached by FlashCache:  3909900404
GridDisk name=DBFS_DG_CD_06_exp2cellser01     guid=adebbe43-0715-4c57-acd6-a3dcc5cb06c4 ( 391051324) status=GDISK_ACTIVE   
cached by FlashCache:  3909900404
GridDisk name=RECOC1_CD_04_exp2cellser01      guid=f2d3112b-a8c2-4c2d-b193-b20307500fa2 ( 722897908) status=GDISK_ACTIVE
GridDisk name=DATAC1_CD_04_exp2cellser01      guid=29eff557-2200-459a-9e09-1fb4e5133b5a (1927686076) status=GDISK_ACTIVE   
cached by FlashCache:   144184636
GridDisk name=DBFS_DG_CD_04_exp2cellser01     guid=0653d6c1-6433-4c5e-9a1c-1507365533a1 (2211506908) status=GDISK_ACTIVE   
cached by FlashCache:   144184636
GridDisk name=RECOC1_CD_07_exp2cellser01      guid=185923d6-1062-470d-9fc1-9e7209909dce (1752228692) status=GDISK_ACTIVE
GridDisk name=DATAC1_CD_07_exp2cellser01      guid=18f75204-fea4-40fc-a726-590e4b6cd205 (3985897548) status=GDISK_ACTIVE   
cached by FlashCache:   390767876
GridDisk name=DBFS_DG_CD_07_exp2cellser01     guid=042ff207-1c59-4378-9dc2-b5180ed835d0 (4254859964) status=GDISK_ACTIVE   
cached by FlashCache:   390767876
CellDisk v0.10 name=CD_05_exp2cellser01   guid=dcdad52f-9754-4732-9582-ed6ae1ff6224 dev=/dev/sdf  status=NORMAL
CellDisk v0.10 name=CD_10_exp2cellser01   guid=30f88892-2c3a-48f3-9552-3f5b8a0aa26f dev=/dev/sdk  status=NORMAL
CellDisk v0.10 name=CD_08_exp2cellser01   guid=c4dbf950-e2c8-4d9b-a97f-f8a2fcc47c7b dev=/dev/sdi  status=NORMAL
GridDisk name=RECOC1_CD_05_exp2cellser01      guid=c26bc482-e4a9-4c1b-8136-e69f53bee904 (4074799916) status=GDISK_ACTIVE
CellDisk v0.10 name=CD_01_exp2cellser01   guid=56d81ff3-d44d-4687-ba6d-8a1b97bab34a dev=/dev/sdb3 status=NORMAL
GridDisk name=DATAC1_CD_05_exp2cellser01      guid=151695fb-9564-4137-91f3-d7e06935fdfe (2058527788) status=GDISK_ACTIVE   
cached by FlashCache:   390767876
GridDisk name=DBFS_DG_CD_05_exp2cellser01     guid=58456fb0-9fdc-4583-927f-1193a2bb9f20 (1314698676) status=GDISK_ACTIVE   
cached by FlashCache:   390767876
GridDisk name=RECOC1_CD_10_exp2cellser01      guid=4b14b557-ab26-4174-a2f0-67aeb08c6ddc (3007942876) status=GDISK_ACTIVE
GridDisk name=DATAC1_CD_10_exp2cellser01      guid=5f3a084a-d97a-4c05-82e1-6f1132719ee3 (2963687100) status=GDISK_ACTIVE   
cached by FlashCache:  1071653524
GridDisk name=DBFS_DG_CD_10_exp2cellser01     guid=35d3112b-645b-4e4e-8f88-9baa8ad0e566 (1449712116) status=GDISK_ACTIVE   
cached by FlashCache:  1071653524
GridDisk name=RECOC1_CD_08_exp2cellser01      guid=90b53813-492a-478e-84f6-d690d799dc8c (1467455180) status=GDISK_ACTIVE
GridDisk name=DATAC1_CD_08_exp2cellser01      guid=41c73862-7da8-4d48-9b52-e5a568f3ea43 ( 609209124) status=GDISK_ACTIVE   
cached by FlashCache:   390767876
GridDisk name=DBFS_DG_CD_08_exp2cellser01     guid=8ed02352-6126-46e2-99a5-5e36b078f62d ( 221297292) status=GDISK_ACTIVE   
cached by FlashCache:   390767876
GridDisk name=RECOC1_CD_01_exp2cellser01      guid=80eff78a-dd57-4b47-bd20-ac303a8a0933 ( 771870308) status=GDISK_ACTIVE
GridDisk name=DATAC1_CD_01_exp2cellser01      guid=6537f7da-0693-440e-ab26-e8825381956d (3479777268) status=GDISK_ACTIVE   
cached by FlashCache:   144184636
CellDisk v0.10 name=CD_11_exp2cellser01   guid=90b6f932-7c23-49f4-bf4c-9e08bb65fef2 dev=/dev/sdl  status=NORMAL
GridDisk name=RECOC1_CD_11_exp2cellser01      guid=aed6c99e-a9ec-477d-932e-3a572bda8c9d (3107202132) status=GDISK_ACTIVE
GridDisk name=DATAC1_CD_11_exp2cellser01      guid=a1b471f9-5d6b-44b6-bdfb-00597dcc144a (1588989724) status=GDISK_ACTIVE   
cached by FlashCache:  1071653524
GridDisk name=DBFS_DG_CD_11_exp2cellser01     guid=ac845edb-253f-4312-9e10-79595d9169e2 ( 641647404) status=GDISK_ACTIVE   
cached by FlashCache:  1071653524
CellDisk v0.10 name=CD_00_exp2cellser01   guid=ffbf0f44-1474-4bf6-a6bc-d05df0465979 dev=/dev/sda3 status=NORMAL
GridDisk name=RECOC1_CD_00_exp2cellser01      guid=36190c48-b89e-46c6-b8d7-d21f1c959ac6 (3904486924) status=GDISK_ACTIVE
GridDisk name=DATAC1_CD_00_exp2cellser01      guid=5b50dc4b-9cc2-430c-99c0-505e31be7cc2 (2544290724) status=GDISK_ACTIVE   
cached by FlashCache:  3909900404
2017-04-22 20:59:41.317000 -04:00
Smart Flash Caching enabled  on FlashCache FC-FD_00_exp2cellser01 guid=54ba6a06-852d-4b11-9ce0-82fec68b4a62 (1071653524) size=2980GB cdisk=FD_00_exp2cellser01
Smart Flash Caching enabled  on FlashCache FC-FD_02_exp2cellser01 guid=0d35cbc7-6702-4c1e-8077-83f785a499d5 ( 390767876) size=2980GB cdisk=FD_02_exp2cellser01
Smart Flash Caching enabled  on FlashCache FC-FD_03_exp2cellser01 guid=ec9d49e6-063d-4e24-b6b6-18eb7ff6dfaf ( 144184636) size=2980GB cdisk=FD_03_exp2cellser01
Smart Flash Caching enabled  on FlashCache FC-FD_01_exp2cellser01 guid=de7f8d29-5947-4803-ad4c-d15a5e40ea62 (3909900404) size=2980GB cdisk=FD_01_exp2cellser01
FlashLog FL-FD_00_exp2cellser01 guid=42914cb2-62a9-400f-b50f-395b9d38def0 ( 580904564) cdisk=FD_00_exp2cellser01 is being recovered
Smart Flash Logging ENABLED on FlashLog FL-FD_00_exp2cellser01 guid=42914cb2-62a9-400f-b50f-395b9d38def0 ( 580904564) size=128MB cdisk=FD_00_exp2cellser01
FlashLog FL-FD_01_exp2cellser01 guid=db1d3b31-a934-4cc0-b1fe-45c62d0fcac5 (3406766740) cdisk=FD_01_exp2cellser01 is being recovered
FlashLog FL-FD_02_exp2cellser01 guid=a2feb2a8-622c-46e6-aa65-1e24cc42af90 (2463701044) cdisk=FD_02_exp2cellser01 is being recovered
Smart Flash Logging ENABLED on FlashLog FL-FD_02_exp2cellser01 guid=a2feb2a8-622c-46e6-aa65-1e24cc42af90 (2463701044) size=128MB cdisk=FD_02_exp2cellser01
Smart Flash Logging ENABLED on FlashLog FL-FD_01_exp2cellser01 guid=db1d3b31-a934-4cc0-b1fe-45c62d0fcac5 (3406766740) size=128MB cdisk=FD_01_exp2cellser01
FlashLog FL-FD_03_exp2cellser01 guid=dcf39aec-4914-4e43-9b46-90a7971c303f ( 249645220) cdisk=FD_03_exp2cellser01 is being recovered
Smart Flash Logging ENABLED on FlashLog FL-FD_03_exp2cellser01 guid=dcf39aec-4914-4e43-9b46-90a7971c303f ( 249645220) size=128MB cdisk=FD_03_exp2cellser01
Sat Apr 22 20:59:41 2017
CELLSRV Server startup complete
Heartbeat with diskmon (pid 325328) started on exp2dbadm03
Heartbeat with diskmon (pid 316734) started on exp2dbadm01
Heartbeat with diskmon (pid 206116) started on exp2dbadm04
Heartbeat with diskmon (pid 386039) started on exp2dbadm02
2017-04-22 20:59:45.396000 -04:00
[RS] Started Service CELLSRV with pid 19867
Successfully registered pkg cellofl-12.1.2.3.4_LINUX.X64_170111
2017-04-22 20:59:46.681000 -04:00
[RS] Started Service MS with pid 19959
2017-04-22 20:59:48.313000 -04:00
Successfully registered pkg cellofl-11.2.3.3.1_LINUX.X64_151006
2017-04-22 20:59:49.422000 -04:00
All offload packages have been successfully registered
[RS] Starting offload server with pid 32595 for group SYS_121234_170111, package cellofl-12.1.2.3.4_LINUX.X64_170111
[RS] Starting offload server with pid 32605 for group SYS_112331_151006, package cellofl-11.2.3.3.1_LINUX.X64_151006
2017-04-22 20:59:53.405000 -04:00
[RS] Offload server with pid 32595 for group SYS_121234_170111, package cellofl-12.1.2.3.4_LINUX.X64_170111 
successfully started
MS_ALERT OFFLOADGROUP_STATEFUL CLEAR SYS_121234_170111
2017-04-22 20:59:55.406000 -04:00
[RS] Offload server with pid 32605 for group SYS_112331_151006, package cellofl-11.2.3.3.1_LINUX.X64_151006 
successfully started
MS_ALERT OFFLOADGROUP_STATEFUL CLEAR SYS_112331_151006

Alert from Oracle Exadata Database Machine for BIOS and ILOM

Note: Cell name removed for security reasons. Conclusion: after restarting the Oracle Exadata Storage Servers, the ILOM and BIOS were updated to the latest version.
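
For reference, the flash cache and flash log state can be re-checked from a storage cell after such a restart; a minimal sketch (the dcli group file cell_group is an assumption of this example):

# Verify Smart Flash Cache and Smart Flash Log on one cell
cellcli -e "list flashcache detail"
cellcli -e "list flashlog detail"

# Or across all cells at once, run from a database node
dcli -g cell_group -l root cellcli -e "list flashcache attributes name,status"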

Thursday, December 15, 2016

Oracle GoldenGate - Hands-On Articles


Article-1: Multiple Pluggable Database (PDBs) Replication in Multitenant Database Using Oracle GoldenGate 12c
https://community.oracle.com/docs/DOC-995763

Article-2: Data Replication with Multiple Extracts and Multiple Replicats with Integrated Capture Mode - OGG 12c
https://community.oracle.com/docs/DOC-995764

Article-3: Bi-Directional Replication with Pluggable Database (PDB) in Multitenant Database - OGG 12c
https://community.oracle.com/docs/DOC-995762

Article-4: Bi-Directional Replication with conflict detection and resolution (CDR) - Oracle GoldenGate 12c
http://otechmag.com/magazine/2015/summer/ravikumar-yv.html

Article-5: Oracle 12c (12.1.0.2.0) Standard Edition (SE2) with Multitenant Environment with HA Options
http://www.otechmag.com/magazine/2015/fall/ravikumar-yv.html

Article-6: Integrated DDL and DML with Encrypt using Oracle GoldenGate 12c
http://allthingsoracle.com/integrated-ddl-and-dml-with-encrypt-using-oracle-goldengate-12c/

Article-7: Real-Time Downstream Integrated Capture between Oracle 11g and Oracle 12c using Oracle GoldenGate 12c
http://www.toadworld.com/platforms/oracle/w/wiki/11186.real-time-downstream-integrated-capture-between-oracle-11g-and-oracle-12c-using-oracle-goldengate-12c.aspx

Saturday, November 5, 2016

Applying July 2016 PSU Patches (GI Patch - 23273629 & RDBMS Patch - 23054246) for Oracle 12c (12.1.0.2.0) - 3 Node RAC


1. Take a backup of the Grid Infrastructure and Oracle Database homes
======================================================================

1.
[root@rac1-12c u01]# cd /u01/app/12.1.0.2/grid/
[root@rac1-12c grid]# pwd
/u01/app/12.1.0.2/grid

[root@rac1-12c grid]# tar -zcvf /u01/12c_GRID_Backup .

[root@rac1-12c u01]# ls -lrth
total 5.0G
drwxr-xr-x. 5 root oinstall 4.0K Oct 19 21:20 app
-rw-r--r--  1 root root     5.0G Nov  5 16:53 12c_GRID_Backup

2.
[root@rac1-12c u01]# cd /u01/app/oracle/product/12.1.0.2/db_1/

[root@rac1-12c db_1]# tar -zcvf /u01/12c_ORACLE_DB_HOME_Backup .

[root@rac1-12c u01]# ls -lrth
total 8.1G
drwxr-xr-x. 5 root oinstall 4.0K Oct 19 21:20 app
-rw-r--r--  1 root root     5.0G Nov  5 16:53 12c_GRID_Backup
-rw-r--r--  1 root root     3.1G Nov  5 17:01 12c_ORACLE_DB_HOME_Backup
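
The same backups can also be scripted so both homes get timestamped archives in one pass; a minimal sketch, run as root (the archive names here are illustrative):

#!/bin/bash
# Back up the GI and RDBMS homes (paths match this environment)
GRID_HOME=/u01/app/12.1.0.2/grid
DB_HOME=/u01/app/oracle/product/12.1.0.2/db_1
STAMP=$(date +%Y%m%d_%H%M%S)

# -C changes into the home first, so "." archives only its contents
tar -zcf /u01/12c_GRID_Backup_${STAMP}.tar.gz -C "$GRID_HOME" .
tar -zcf /u01/12c_ORACLE_DB_HOME_Backup_${STAMP}.tar.gz -C "$DB_HOME" .
ls -lrth /u01/*_${STAMP}.tar.gz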

2. Upgrade the OPatch version in the Oracle GI and Oracle RDBMS homes on all 3 nodes
=====================================================================================

Using username "oracle".
oracle@192.168.2.101's password:
Last login: Fri Oct 21 07:56:09 2016 from 192.168.2.1

[oracle@rac1-12c sf_grid]$ unzip p6880880_121010_Linux-x86-64.zip -d /u01/app/12.1.0.2/grid/
Archive:  p6880880_121010_Linux-x86-64.zip
replace /u01/app/12.1.0.2/grid/OPatch/datapatch? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: /u01/app/12.1.0.2/grid/OPatch/datapatch
  inflating: /u01/app/12.1.0.2/grid/OPatch/operr
   creating: /u01/app/12.1.0.2/grid/OPatch/modules/

---->Output Truncated----------------------->

[oracle@rac1-12c sf_grid]$ unzip p6880880_121010_Linux-x86-64.zip -d /u01/app/oracle/product/12.1.0.2/db_1/
Archive:  p6880880_121010_Linux-x86-64.zip
replace /u01/app/oracle/product/12.1.0.2/db_1/OPatch/datapatch? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: /u01/app/oracle/product/12.1.0.2/db_1/OPatch/datapatch
  inflating: /u01/app/oracle/product/12.1.0.2/db_1/OPatch/operr
   creating: /u01/app/oracle/product/12.1.0.2/db_1/OPatch/modules/

---->Output Truncated----------------------->

[oracle@rac1-12c sf_grid]$ scp p6880880_121010_Linux-x86-64.zip oracle@rac2-12c:/tmp
p6880880_121010_Linux-x86-64.zip      100%   76MB  76.1MB/s   00:01

[oracle@rac1-12c sf_grid]$ scp p6880880_121010_Linux-x86-64.zip oracle@rac3-12c:/tmp
p6880880_121010_Linux-x86-64.zip      100%   76MB  76.1MB/s   00:01

[oracle@rac1-12c sf_grid]$ ssh rac2-12c
Last login: Fri Nov  4 13:51:34 2016 from rac1-12c.localdomain

[oracle@rac2-12c ~]$ cd /tmp/
[oracle@rac2-12c tmp]$ unzip p6880880_121010_Linux-x86-64.zip -d /u01/app/12.1.0.2/grid/
Archive:  p6880880_121010_Linux-x86-64.zip
replace /u01/app/12.1.0.2/grid/OPatch/datapatch? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: /u01/app/12.1.0.2/grid/OPatch/datapatch
  inflating: /u01/app/12.1.0.2/grid/OPatch/operr
   creating: /u01/app/12.1.0.2/grid/OPatch/modules/

---->Output Truncated----------------------->

[oracle@rac2-12c tmp]$ unzip p6880880_121010_Linux-x86-64.zip -d /u01/app/oracle/product/12.1.0.2/db_1/
Archive:  p6880880_121010_Linux-x86-64.zip
replace /u01/app/oracle/product/12.1.0.2/db_1/OPatch/datapatch? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: /u01/app/oracle/product/12.1.0.2/db_1/OPatch/datapatch
  inflating: /u01/app/oracle/product/12.1.0.2/db_1/OPatch/operr
   creating: /u01/app/oracle/product/12.1.0.2/db_1/OPatch/modules/

---->Output Truncated----------------------->

[oracle@rac2-12c tmp]$ ssh rac3-12c
Last login: Fri Nov  4 14:04:39 2016 from rac2-12c.localdomain

[oracle@rac3-12c ~]$ cd /tmp/
[oracle@rac3-12c tmp]$

[oracle@rac3-12c tmp]$ unzip p6880880_121010_Linux-x86-64.zip -d /u01/app/12.1.0.2/grid/
Archive:  p6880880_121010_Linux-x86-64.zip
replace /u01/app/12.1.0.2/grid/OPatch/datapatch? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: /u01/app/12.1.0.2/grid/OPatch/datapatch
  inflating: /u01/app/12.1.0.2/grid/OPatch/operr
  creating: /u01/app/12.1.0.2/grid/OPatch/modules/

---->Output Truncated----------------------->

[oracle@rac3-12c tmp]$ unzip p6880880_121010_Linux-x86-64.zip -d /u01/app/oracle/product/12.1.0.2/db_1/
Archive:  p6880880_121010_Linux-x86-64.zip
replace /u01/app/oracle/product/12.1.0.2/db_1/OPatch/datapatch? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: /u01/app/oracle/product/12.1.0.2/db_1/OPatch/datapatch
  inflating: /u01/app/oracle/product/12.1.0.2/db_1/OPatch/operr
   creating: /u01/app/oracle/product/12.1.0.2/db_1/OPatch/modules/
  inflating: /u01/app/oracle/product/12.1.0.2/db_1/OPatch/modules/com.oracle.glcm.patch.opatchauto-wallet_13.9.1.1.jar

---->Output Truncated----------------------->
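
After unzipping on all three nodes, it is worth confirming that both homes now report the new OPatch release; a small sketch run as oracle from rac1-12c (node and home names match this cluster):

# Print the OPatch version in both homes on every node
for node in rac1-12c rac2-12c rac3-12c; do
  for home in /u01/app/12.1.0.2/grid /u01/app/oracle/product/12.1.0.2/db_1; do
    echo "== ${node}:${home} =="
    ssh oracle@${node} "${home}/OPatch/opatch version"
  done
done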

3. Apply the July 2016 PSU patch to both the Oracle GI and Oracle RDBMS homes
==============================================================================

[oracle@rac1-12c sf_grid]$ su - root
Password:

[root@rac1-12c ~]# cd /media/sf_grid/
[root@rac1-12c sf_grid]# ls -lrth
total 1.6G
drwxrwx--- 1 root vboxsf 4.0K Jul  5 10:07 23054246
drwxrwx--- 1 root vboxsf 4.0K Aug  1 02:08 23273629
-rwxrwx--- 1 root vboxsf 148K Aug  1 04:09 PatchSearch.xml
-rwxrwx--- 1 root vboxsf 209M Oct 29 19:44 p23054246_121020_Linux-x86-64.zip
-rwxrwx--- 1 root vboxsf 1.4G Oct 29 19:48 p23273629_121020_Linux-x86-64.zip
-rwxrwx--- 1 root vboxsf  77M Nov  1 15:45 p6880880_121010_Linux-x86-64.zip

[root@rac1-12c sf_grid]# sh /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /media/sf_grid/23273629/

OPatchauto session is initiated at Fri Nov  4 14:20:13 2016

System initialization log file is /u01/app/12.1.0.2/grid/cfgtoollogs/opatchautodb/systemconfig2016-11-04_02-20-24PM.log.

Session log file is /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/opatchauto2016-11-04_02-20-42PM.log
The id for this session is I5AG

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.1.0.2/db_1

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.2/grid
Patch applicablity verified successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Patch applicablity verified successfully on home /u01/app/12.1.0.2/grid

Verifying patch inventory on home /u01/app/oracle/product/12.1.0.2/db_1

Verifying patch inventory on home /u01/app/12.1.0.2/grid
Patch inventory verified successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Patch inventory verified successfully on home /u01/app/12.1.0.2/grid

Verifying SQL patch applicablity on home /u01/app/oracle/product/12.1.0.2/db_1
SQL patch applicablity verified successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Preparing to bring down database service on home /u01/app/oracle/product/12.1.0.2/db_1
Successfully prepared home /u01/app/oracle/product/12.1.0.2/db_1 to bring down database service

Bringing down CRS service on home /u01/app/12.1.0.2/grid
Prepatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/
crspatch_rac1-12c_2016-11-04_02-23-27PM.log

CRS service brought down successfully on home /u01/app/12.1.0.2/grid

Performing prepatch operation on home /u01/app/oracle/product/12.1.0.2/db_1
Perpatch operation completed successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Start applying binary patch on home /u01/app/oracle/product/12.1.0.2/db_1
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Performing postpatch operation on home /u01/app/oracle/product/12.1.0.2/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Start applying binary patch on home /u01/app/12.1.0.2/grid
Binary patch applied successfully on home /u01/app/12.1.0.2/grid

Starting CRS service on home /u01/app/12.1.0.2/grid
Postpatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/
crspatch_rac1-12c_2016-11-04_02-35-17PM.log

CRS service started successfully on home /u01/app/12.1.0.2/grid

Preparing home /u01/app/oracle/product/12.1.0.2/db_1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/product/12.1.0.2/db_1 successfully after database service restarted

Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0.2/db_1
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Verifying patches applied on home /u01/app/12.1.0.2/grid
Patch verification completed with warning on home /u01/app/12.1.0.2/grid

Verifying patches applied on home /u01/app/oracle/product/12.1.0.2/db_1
Patch verification completed with warning on home /u01/app/oracle/product/12.1.0.2/db_1

OPatchAuto successful.
--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:rac1-12c
RAC Home:/u01/app/oracle/product/12.1.0.2/db_1
Summary:

==Following patches were SKIPPED:

Patch: /media/sf_grid/23273629/21436941
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /media/sf_grid/23273629/23054341
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /media/sf_grid/23273629/23054246
Log: /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_14-24-32PM_1.log

Patch: /media/sf_grid/23273629/23054327
Log: /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_14-24-32PM_1.log


Host:rac1-12c
CRS Home:/u01/app/12.1.0.2/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /media/sf_grid/23273629/21436941
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_14-27-20PM_1.log

Patch: /media/sf_grid/23273629/23054246
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_14-27-20PM_1.log

Patch: /media/sf_grid/23273629/23054327
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_14-27-20PM_1.log

Patch: /media/sf_grid/23273629/23054341
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_14-27-20PM_1.log


OPatchauto session completed at Fri Nov  4 14:43:05 2016
Time taken to complete the session 22 minutes, 52 seconds
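
The summary above can be cross-checked against the inventory; a minimal sketch using opatch lspatches (run as the home owner):

# List the patches now registered in each home
/u01/app/12.1.0.2/grid/OPatch/opatch lspatches -oh /u01/app/12.1.0.2/grid
/u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches -oh /u01/app/oracle/product/12.1.0.2/db_1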

4. Copy the July 2016 PSU patches to the other cluster nodes (rac2-12c & rac3-12c)
===================================================================================

[root@rac1-12c sf_grid]# scp p23273629_121020_Linux-x86-64.zip oracle@rac2-12c:/u01/
oracle@rac2-12c's password:
p23273629_121020_Linux-x86-64.zip          100% 1353MB  58.8MB/s   00:23

[root@rac1-12c sf_grid]# scp p23054246_121020_Linux-x86-64.zip oracle@rac2-12c:/u01/
oracle@rac2-12c's password:
p23054246_121020_Linux-x86-64.zip          100%  209MB  69.6MB/s   00:03
You have mail in /var/spool/mail/root

[root@rac1-12c sf_grid]# scp p23273629_121020_Linux-x86-64.zip oracle@rac3-12c:/u01/
The authenticity of host 'rac3-12c (192.168.2.103)' can't be established.
RSA key fingerprint is dd:63:56:3a:97:6b:03:0c:b0:15:ea:2b:cd:a6:59:4b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac3-12c,192.168.2.103' (RSA) to the list of known hosts.
oracle@rac3-12c's password:
p23273629_121020_Linux-x86-64.zip           100% 1353MB  75.1MB/s   00:18

[root@rac1-12c sf_grid]# scp p23054246_121020_Linux-x86-64.zip oracle@rac3-12c:/u01/
oracle@rac3-12c's password:
p23054246_121020_Linux-x86-64.zip           100%  209MB  69.6MB/s   00:03
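
Before unzipping on the remote nodes, the copies can be verified against the source; a quick sketch (the checksum values themselves are not reproduced here):

# Compare checksums of the patch zips on the source and target nodes
md5sum p23273629_121020_Linux-x86-64.zip p23054246_121020_Linux-x86-64.zip
ssh oracle@rac2-12c "cd /u01 && md5sum p23273629_121020_Linux-x86-64.zip p23054246_121020_Linux-x86-64.zip"
ssh oracle@rac3-12c "cd /u01 && md5sum p23273629_121020_Linux-x86-64.zip p23054246_121020_Linux-x86-64.zip"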

5. Unzip the Oracle GI and Oracle RDBMS July 2016 PSU patches on Node-2 (rac2-12c)
===================================================================================

[root@rac1-12c sf_grid]# ssh rac2-12c
root@rac2-12c's password:
Last login: Thu Oct 20 11:18:15 2016 from rac1-12c.localdomain

[root@rac2-12c ~]# su - oracle
[oracle@rac2-12c ~]$ cd /u01/
[oracle@rac2-12c u01]$ ls -lrth
total 1.6G
drwxrwxr-x. 5 oracle oinstall 4.0K Oct 19 21:52 app
-rwxr-x---  1 oracle oinstall 1.4G Nov  4 14:52 p23273629_121020_Linux-x86-64.zip
-rwxr-x---  1 oracle oinstall 209M Nov  4 14:52 p23054246_121020_Linux-x86-64.zip

[oracle@rac2-12c u01]$ unzip p23273629_121020_Linux-x86-64.zip
Archive:  p23273629_121020_Linux-x86-64.zip
   creating: 23273629/
   creating: 23273629/23054327/
   creating: 23273629/23054327/files/
   creating: 23273629/23054327/files/inventory/

---->Output Truncated----------------------->

[oracle@rac2-12c u01]$ unzip p23054246_121020_Linux-x86-64.zip
Archive:  p23054246_121020_Linux-x86-64.zip
   creating: 23054246/
   creating: 23054246/20299023/
   creating: 23054246/20299023/etc/
   creating: 23054246/20299023/etc/config/
  inflating: 23054246/20299023/etc/config/inventory.xml
 extracting: 23054246/README.txt
replace PatchSearch.xml? [y]es, [n]o, [A]ll, [N]one, [r]ename: n
[oracle@rac2-12c u01]$

---->Output Truncated----------------------->

[root@rac2-12c ~]# cd /u01/
[root@rac2-12c u01]# ls -lrth
total 1.6G
drwxrwxr-x  9 oracle oinstall 4.0K Jul  5 10:07 23054246
drwxr-xr-x  7 oracle oinstall 4.0K Aug  1 02:08 23273629
-rw-rw-r--  1 oracle oinstall 148K Aug  1 04:09 PatchSearch.xml
drwxrwxr-x. 5 oracle oinstall 4.0K Oct 19 21:52 app
-rwxr-x---  1 oracle oinstall 1.4G Nov  4 14:52 p23273629_121020_Linux-x86-64.zip
-rwxr-x---  1 oracle oinstall 209M Nov  4 14:52 p23054246_121020_Linux-x86-64.zip

[root@rac2-12c u01]# sh /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /u01/23273629/

OPatchauto session is initiated at Fri Nov  4 15:04:59 2016

System initialization log file is /u01/app/12.1.0.2/grid/cfgtoollogs/opatchautodb/systemconfig2016-11-04_03-05-01PM.log.

Session log file is /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/opatchauto2016-11-04_03-05-32PM.log
The id for this session is 368E

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.1.0.2/db_1

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.2/grid
Patch applicablity verified successfully on home /u01/app/12.1.0.2/grid

Patch applicablity verified successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Verifying patch inventory on home /u01/app/oracle/product/12.1.0.2/db_1

Verifying patch inventory on home /u01/app/12.1.0.2/grid
Patch inventory verified successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Patch inventory verified successfully on home /u01/app/12.1.0.2/grid

Verifying SQL patch applicablity on home /u01/app/oracle/product/12.1.0.2/db_1
SQL patch applicablity verified successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Preparing to bring down database service on home /u01/app/oracle/product/12.1.0.2/db_1
Successfully prepared home /u01/app/oracle/product/12.1.0.2/db_1 to bring down database service

Bringing down CRS service on home /u01/app/12.1.0.2/grid
Prepatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/
crspatch_rac2-12c_2016-11-04_03-10-33PM.log

CRS service brought down successfully on home /u01/app/12.1.0.2/grid

Performing prepatch operation on home /u01/app/oracle/product/12.1.0.2/db_1
Perpatch operation completed successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Start applying binary patch on home /u01/app/oracle/product/12.1.0.2/db_1
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Performing postpatch operation on home /u01/app/oracle/product/12.1.0.2/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Start applying binary patch on home /u01/app/12.1.0.2/grid
Binary patch applied successfully on home /u01/app/12.1.0.2/grid

Starting CRS service on home /u01/app/12.1.0.2/grid
Postpatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/
crspatch_rac2-12c_2016-11-04_03-19-23PM.log

CRS service started successfully on home /u01/app/12.1.0.2/grid

Preparing home /u01/app/oracle/product/12.1.0.2/db_1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/product/12.1.0.2/db_1 successfully after database service restarted

Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0.2/db_1
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Verifying patches applied on home /u01/app/12.1.0.2/grid
Patch verification completed with warning on home /u01/app/12.1.0.2/grid

Verifying patches applied on home /u01/app/oracle/product/12.1.0.2/db_1
Patch verification completed with warning on home /u01/app/oracle/product/12.1.0.2/db_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:rac2-12c
RAC Home:/u01/app/oracle/product/12.1.0.2/db_1
Summary:

==Following patches were SKIPPED:

Patch: /u01/23273629/21436941
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/23273629/23054341
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY applied:

Patch: /u01/23273629/23054246
Log: /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-11-28PM_1.log

Patch: /u01/23273629/23054327
Log: /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-11-28PM_1.log

Host:rac2-12c
CRS Home:/u01/app/12.1.0.2/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/23273629/21436941
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-13-47PM_1.log

Patch: /u01/23273629/23054246
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-13-47PM_1.log

Patch: /u01/23273629/23054327
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-13-47PM_1.log

Patch: /u01/23273629/23054341
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-13-47PM_1.log

OPatchauto session completed at Fri Nov  4 15:24:22 2016
Time taken to complete the session 19 minutes, 23 seconds

[root@rac2-12c u01]# ps -ef | grep pmon
oracle   14443     1  0 15:21 ?        00:00:00 asm_pmon_+ASM2
oracle   14832     1  0 15:21 ?        00:00:00 ora_pmon_orcl2
root     21076 17699  0 15:25 pts/0    00:00:00 grep pmon

[root@rac1-12c sf_grid]# ps -ef | grep pmon
oracle   23966     1  0 14:37 ?        00:00:00 asm_pmon_+ASM1
oracle   24334     1  0 14:37 ?        00:00:00 ora_pmon_orcl1
root     25031   342  0 15:25 pts/0    00:00:00 grep pmon
oracle   25319     1  0 14:38 ?        00:00:00 mdb_pmon_-MGMTDB
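
Beyond grepping for pmon, the stack's own tools give a cluster-wide view; a minimal sketch (assumes the database resource is named orcl, as in this setup):

# Check clusterware health on all nodes and the state of each instance
/u01/app/12.1.0.2/grid/bin/crsctl check cluster -all
/u01/app/12.1.0.2/grid/bin/srvctl status database -d orcl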

6. Log in to cluster node rac3-12c and unzip the Oracle GI and Oracle RDBMS July 2016 PSU patches on Node-3 (rac3-12c)
=======================================================================================================================

[root@rac1-12c sf_grid]# ssh rac3-12c
root@rac3-12c's password:
Last login: Thu Oct 20 11:18:30 2016 from rac2-12c.localdomain

[root@rac3-12c ~]# su - oracle
[oracle@rac3-12c ~]$ cd /u01/

[oracle@rac3-12c u01]$ ls -lrth
total 1.6G
drwxrwxr-x. 5 oracle oinstall 4.0K Oct 19 21:52 app
-rwxr-x---  1 oracle oinstall 1.4G Nov  4 14:57 p23273629_121020_Linux-x86-64.zip
-rwxr-x---  1 oracle oinstall 209M Nov  4 14:59 p23054246_121020_Linux-x86-64.zip

[oracle@rac3-12c u01]$ unzip p23273629_121020_Linux-x86-64.zip
Archive:  p23273629_121020_Linux-x86-64.zip
   creating: 23273629/
   creating: 23273629/23054327/
   creating: 23273629/23054327/files/
   creating: 23273629/23054327/files/inventory/
   creating: 23273629/23054327/files/inventory/Scripts/
   creating: 23273629/23054327/files/inventory/Scripts/ext/
  inflating: 23273629/23054246/23054246/etc/config/actions.xml
  inflating: PatchSearch.xml

---->Output Truncated----------------------->


[oracle@rac3-12c u01]$ unzip p23054246_121020_Linux-x86-64.zip
Archive:  p23054246_121020_Linux-x86-64.zip
   creating: 23054246/
   creating: 23054246/20299023/
   creating: 23054246/20299023/etc/
   creating: 23054246/20299023/etc/config/
  inflating: 23054246/20299023/etc/config/inventory.xml
  inflating: 23054246/20299023/etc/config/actions.xml
   creating: 23054246/20299023/files/
   creating: 23054246/20299023/files/rdbms/

---->Output Truncated----------------------->

[root@rac3-12c u01]# ls -lrth
total 1.6G
drwxrwxr-x  9 oracle oinstall 4.0K Jul  5 10:07 23054246
drwxr-xr-x  7 oracle oinstall 4.0K Aug  1 02:08 23273629
-rw-rw-r--  1 oracle oinstall 148K Aug  1 04:09 PatchSearch.xml
drwxrwxr-x. 5 oracle oinstall 4.0K Oct 19 21:52 app
-rwxr-x---  1 oracle oinstall 1.4G Nov  4 14:57 p23273629_121020_Linux-x86-64.zip
-rwxr-x---  1 oracle oinstall 209M Nov  4 14:59 p23054246_121020_Linux-x86-64.zip

[root@rac3-12c u01]# sh /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /u01/23273629/

OPatchauto session is initiated at Fri Nov  4 15:28:18 2016

System initialization log file is /u01/app/12.1.0.2/grid/cfgtoollogs/opatchautodb/systemconfig2016-11-04_03-28-20PM.log.

Session log file is /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/opatchauto2016-11-04_03-28-38PM.log
The id for this session is CJDU

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.1.0.2/db_1

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.2/grid
Patch applicablity verified successfully on home /u01/app/12.1.0.2/grid

Patch applicablity verified successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Verifying patch inventory on home /u01/app/oracle/product/12.1.0.2/db_1

Verifying patch inventory on home /u01/app/12.1.0.2/grid
Patch inventory verified successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Patch inventory verified successfully on home /u01/app/12.1.0.2/grid

Verifying SQL patch applicablity on home /u01/app/oracle/product/12.1.0.2/db_1
SQL patch applicablity verified successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Preparing to bring down database service on home /u01/app/oracle/product/12.1.0.2/db_1
Successfully prepared home /u01/app/oracle/product/12.1.0.2/db_1 to bring down database service

Bringing down CRS service on home /u01/app/12.1.0.2/grid
Prepatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/
crspatch_rac3-12c_2016-11-04_03-33-05PM.log

CRS service brought down successfully on home /u01/app/12.1.0.2/grid

Performing prepatch operation on home /u01/app/oracle/product/12.1.0.2/db_1
Perpatch operation completed successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Start applying binary patch on home /u01/app/oracle/product/12.1.0.2/db_1
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Performing postpatch operation on home /u01/app/oracle/product/12.1.0.2/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Start applying binary patch on home /u01/app/12.1.0.2/grid
Binary patch applied successfully on home /u01/app/12.1.0.2/grid

Starting CRS service on home /u01/app/12.1.0.2/grid
Postpatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/
crspatch_rac3-12c_2016-11-04_03-42-01PM.log

CRS service started successfully on home /u01/app/12.1.0.2/grid

Preparing home /u01/app/oracle/product/12.1.0.2/db_1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/product/12.1.0.2/db_1 successfully after database service restarted

Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0.2/db_1
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0.2/db_1

Verifying patches applied on home /u01/app/12.1.0.2/grid
Patch verification completed with warning on home /u01/app/12.1.0.2/grid

Verifying patches applied on home /u01/app/oracle/product/12.1.0.2/db_1
Patch verification completed with warning on home /u01/app/oracle/product/12.1.0.2/db_1

OPatchAuto successful.
--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:rac3-12c
RAC Home:/u01/app/oracle/product/12.1.0.2/db_1
Summary:

==Following patches were SKIPPED:

Patch: /u01/23273629/21436941
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/23273629/23054341
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY applied:

Patch: /u01/23273629/23054246
Log: /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-34-01PM_1.log

Patch: /u01/23273629/23054327
Log: /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-34-01PM_1.log

Host:rac3-12c
CRS Home:/u01/app/12.1.0.2/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/23273629/21436941
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-36-23PM_1.log

Patch: /u01/23273629/23054246
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-36-23PM_1.log

Patch: /u01/23273629/23054327
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-36-23PM_1.log

Patch: /u01/23273629/23054341
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-11-04_15-36-23PM_1.log


OPatchauto session completed at Fri Nov  4 15:47:52 2016
Time taken to complete the session 19 minutes, 35 seconds
[root@rac3-12c u01]#

7. Log in to the ORCL database and verify the applied patch (orcl)
==================================================================
[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [orcl] ? orcl
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1-12c ~]$ sqlplus /nolog

SQL*Plus: Release 12.1.0.2.0 Production on Fri Nov 4 16:01:16 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@orcl as sysdba
Connected.

SQL> col action_time format a30
SQL> col description  format a60
SQL> set lines 300

SQL> select action_time, patch_id, patch_uid, version, status, bundle_series, description from dba_registry_sqlpatch;

ACTION_TIME                      PATCH_ID  PATCH_UID VERSION              STATUS          BUNDLE_SERIES                  
------------------------------ ---------- ---------- -------------------- --------------- ------------------------------ 
04-NOV-16 03.47.30.689909 PM     23054246   20213895 12.1.0.2             SUCCESS         PSU                            

DESCRIPTION
------------------------------------------------------
Database Patch Set Update : 12.1.0.2.160719 (23054246)
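
The same registry check can be scripted for repeat use; a minimal sketch using a here-document (run as oracle with the database environment set):

# Query dba_registry_sqlpatch non-interactively
sqlplus -s / as sysdba <<EOF
set lines 300
col action_time format a30
col description format a60
select action_time, patch_id, status, description
  from dba_registry_sqlpatch;
exit
EOF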

Tuesday, November 1, 2016

Oracle GoldenGate - LOG dump utility


Log in to the Source Database (ORCL) as user 'scott'

SQL> connect scott/oracle@orcl
Connected

SQL> insert into dept values (75,'SQL SERVER','NY');
1 row created.

SQL> insert into dept values (76,'IBM DB2','NJ');
1 row created.

SQL> insert into dept values (77,'SYBASE','VA');
1 row created.

SQL> commit;
Commit complete.

SQL> select * from dept;

    DEPTNO DNAME          LOC
---------- -------------- ----------
        75 SQL SERVER     NY
        76 IBM DB2        NJ
        77 SYBASE         VA

3 rows selected.

Launch the Logdump utility from the source GoldenGate home:

[oracle@linux66-ggs-11g-12c ~]$ source 11g.env
[oracle@linux66-ggs-11g-12c ~]$ cd $GG
[oracle@linux66-ggs-11g-12c 11g]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

Logdump 89 >open dirdat/lt000007
Current LogTrail is /u01/app/ogg/11g/dirdat/lt000007

Logdump 90 >pos 0
Reading forward from RBA 0
Logdump 91 >detail data
Logdump 92 >ghdr on
Logdump 93 >filter include filename SCOTT.DEPT;filter string "VA";filter match all
Logdump 94 >n


Search-1: Find one more transaction using the string "NJ"

Logdump 95 >filter include filename SCOTT.DEPT;filter string "NJ";filter match all
Logdump 96 >n

Note: Reset the position to 0 and search again

Logdump 97 >pos 0
Reading forward from RBA 0
Logdump 98 >filter include filename SCOTT.DEPT;filter string "NJ";filter match all
Logdump 99 >n
Search-2: Find the transaction using a hexadecimal value for the "DEPTNO" column (Dept ID 77 converts to hex 4D, e.g. using the calc utility)

Logdump 105 >pos 0
Reading forward from RBA 0
Logdump 106 >filter clear
Logdump 107 >filter filename SCOTT.DEPT; filter HEX "4D"; filter match all
Logdump 108 >n
Logdump 109 >n
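
As an aside, the hex value used in Search-2 can be derived in the shell instead of a calculator utility; a one-line sketch:

# Decimal 77 (the DEPTNO value) rendered as hexadecimal
printf '%X\n' 77
# prints: 4D
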
Logdump 110 >filter clear

Note: Find the transaction across all the trail files

Logdump 115 >ghdr on
Logdump 116 >detail on
Logdump 117 >filter filename SCOTT.DEPT; filter HEX "4D"; filter string "SYBASE"; filter match all
Logdump 118 >count log /u01/app/ogg/11g/dirdat/lt*
Current LogTrail is /u01/app/ogg/11g/dirdat/lt000003
LogTrail /u01/app/ogg/11g/dirdat/lt000003 has 0 records
LogTrail /u01/app/ogg/11g/dirdat/lt000003 closed
Current LogTrail is /u01/app/ogg/11g/dirdat/lt000001
LogTrail /u01/app/ogg/11g/dirdat/lt000001 has 0 records
LogTrail /u01/app/ogg/11g/dirdat/lt000001 closed
Current LogTrail is /u01/app/ogg/11g/dirdat/lt000005
LogTrail /u01/app/ogg/11g/dirdat/lt000005 has 0 records
LogTrail /u01/app/ogg/11g/dirdat/lt000005 closed
Current LogTrail is /u01/app/ogg/11g/dirdat/lt000007
LogTrail /u01/app/ogg/11g/dirdat/lt000007 has 1 records
LogTrail /u01/app/ogg/11g/dirdat/lt000007 closed
Current LogTrail is /u01/app/ogg/11g/dirdat/lt000006
LogTrail /u01/app/ogg/11g/dirdat/lt000006 has 0 records
LogTrail /u01/app/ogg/11g/dirdat/lt000006 closed
Current LogTrail is /u01/app/ogg/11g/dirdat/lt000002
LogTrail /u01/app/ogg/11g/dirdat/lt000002 has 0 records
LogTrail /u01/app/ogg/11g/dirdat/lt000002 closed
Current LogTrail is /u01/app/ogg/11g/dirdat/lt000000
LogTrail /u01/app/ogg/11g/dirdat/lt000000 has 0 records
LogTrail /u01/app/ogg/11g/dirdat/lt000000 closed
Current LogTrail is /u01/app/ogg/11g/dirdat/lt000004
LogTrail /u01/app/ogg/11g/dirdat/lt000004 has 0 records
LogTrail /u01/app/ogg/11g/dirdat/lt000004 closed
LogTrail /u01/app/ogg/11g/dirdat/lt* has 1 records

Total Data Bytes             38
  Avg Bytes/Record           38
Insert                        1
After Images                  1
Filtering matched             1 records
          suppressed         25 records

Average of 1 Transactions
    Bytes/Trans .....        86
    Records/Trans ...         1
    Files/Trans .....         1

SCOTT.DEPT                         Partition 4
Total Data Bytes             38
  Avg Bytes/Record           38
Insert                        1
After Images                  1

Log in to the Target Database (ORCLDB) as user 'scott'

SQL> connect scott/oracle@orcl
Connected

SQL> select * from dept;

    DEPTNO DNAME          LOC
---------- -------------- ----------
        75 SQL SERVER     NY
        76 IBM DB2        NJ
        77 SYBASE         VA

3 rows selected.

Launch the Logdump utility from the target GoldenGate home:

[oracle@linux66-ggs-11g-12c ~]$ source 11g.env
[oracle@linux66-ggs-11g-12c ~]$ cd $GG
[oracle@linux66-ggs-11g-12c 11g]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

Logdump 89 >open dirdat/lt000007
Current LogTrail is /u01/app/ogg/11g/dirdat/lt000007

Logdump 90 >pos 0
Reading forward from RBA 0
Logdump 91 >detail data
Logdump 92 >ghdr on
Logdump 93 >filter include filename SCOTT.DEPT;filter string "VA";filter match all
Logdump 94 >n

Finding one more transaction:

Logdump 95 >filter include filename SCOTT.DEPT;filter string "NJ";filter match all
Logdump 96 >n
Logdump 97 >pos 0
Reading forward from RBA 0
Logdump 98 >filter include filename SCOTT.DEPT;filter string "NJ";filter match all
Logdump 99 >n