Check the "lsinvetory" from Oracle Home
=========================================
[oracle@sharddb3 OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2018, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/18.0.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/18.0.0/dbhome_1/oraInst.loc
OPatch version : 12.2.0.1.14
OUI version : 12.2.0.4.0
Log file location : /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/
opatch2018-07-27_15-17-37PM_1.log
Lsinventory Output file location : /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/lsinv/
lsinventory2018-07-27_15-17-37PM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: sharddb3
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Database 18c 18.0.0.0.0
There are 1 products installed in this Oracle Home.
Interim patches (4) :
Patch 27908644 : applied on Wed Jul 18 13:44:11 EDT 2018
Unique Patch ID: 22153180
Patch description: "UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171"
Created on 4 May 2018, 01:21:02 hrs PST8PDT
Bugs fixed:
27908644
Patch 27923415 : applied on Wed Jul 18 13:41:38 EDT 2018
Unique Patch ID: 22239273
Patch description: "OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)"
Created on 15 Jul 2018, 10:33:22 hrs PST8PDT
Bugs fixed:
27304131, 27539876, 27952586, 27642235, 27636900, 27461740
Patch 28090553 : applied on Wed Jul 18 13:40:01 EDT 2018
Unique Patch ID: 22256940
Patch description: "OCW RELEASE UPDATE 18.3.0.0.0 (28090553)"
Created on 11 Jul 2018, 19:20:31 hrs PST8PDT
Bugs fixed:
12816839, 18701017, 22734786, 23698980, 23840305, 25709124, 25724089
26299684, 26313403, 26433972, 26527054, 26586174, 26587652, 26647619
26827699, 26860285, 26882126, 26882316, 26943660, 26996813, 27012915
27018734, 27032726, 27034318, 27040560, 27080748, 27086406, 27092991
27098733, 27106915, 27114112, 27121566, 27133637, 27144533, 27153755
27166715, 27174938, 27174948, 27177551, 27177852, 27182006, 27182064
27184253, 27204476, 27212837, 27213140, 27220610, 27222423, 27222938
27238077, 27238258, 27249544, 27252023, 27257509, 27263677, 27265816
27267992, 27271876, 27274143, 27285557, 27299455, 27300007, 27302415
27309182, 27314512, 27315159, 27320985, 27334353, 27338838, 27346984
27358232, 27362190, 27370933, 27377219, 27378959, 27379846, 27379956
27393421, 27398223, 27399499, 27399762, 27399985, 27401618, 27403244
27404599, 27426277, 27428790, 27430219, 27430254, 27433163, 27452897
27458829, 27465480, 27475272, 27481406, 27481765, 27492916, 27496806
27503318, 27503413, 27508936, 27508984, 27513114, 27519708, 27526362
27528204, 27532009, 27534289, 27560562, 27560735, 27573154, 27573408
27574335, 27577122, 27579969, 27581484, 27593587, 27595801, 27600706
27609819, 27625010, 27625050, 27627992, 27654039, 27657467, 27657920
27668379, 27682288, 27691717, 27702244, 27703242, 27708711, 27714373
27725967, 27731346, 27734470, 27735534, 27739957, 27740854, 27747407
27748321, 27757979, 27766679, 27768034, 27778433, 27782464, 27783059
27786669, 27786699, 27801774, 27811439, 27839732, 27850736, 27862636
27864737, 27865439, 27889841, 27896388, 27897639, 27906509, 27931506
27935826, 27941514, 27957892, 27978668, 27984314, 27993298, 28023410
28025398, 28032758, 28039471, 28039953, 28045209, 28099592, 28109698
28174926, 28182503, 28204423, 28240153
Patch 28090523 : applied on Wed Jul 18 13:39:24 EDT 2018
Unique Patch ID: 22329768
Patch description: "Database Release Update : 18.3.0.0.180717 (28090523)"
Created on 14 Jul 2018, 00:03:50 hrs PST8PDT
Bugs fixed:
9062315, 13554903, 21547051, 21766220, 21806121, 23003564, 23310101
24489904, 24689376, 24737581, 24925863, 25035594, 25035599, 25287072
25348956, 25634405, 25726981, 25743479, 25824236, 25943740, 26226953
26336101, 26423085, 26427905, 26450454, 26476244, 26598422, 26615291
26646549, 26654411, 26731697, 26785169, 26792891, 26818960, 26822620
26843558, 26843664, 26846077, 26894737, 26898279, 26928317, 26933599
26956033, 26961415, 26966120, 26986173, 26992964, 27005278, 27026401
27028251, 27030974, 27036408, 27038986, 27041253, 27044575, 27047831
27053044, 27058530, 27060167, 27060859, 27061736, 27066451, 27066519
27073066, 27086821, 27090765, 27101527, 27101652, 27110878, 27112686
27119621, 27126666, 27128580, 27135647, 27143756, 27143882, 27147979
27153641, 27155549, 27156355, 27163928, 27169796, 27181521, 27181537
27189611, 27190851, 27193810, 27199245, 27208953, 27210038, 27210872
27214085, 27215007, 27216224, 27221900, 27222121, 27222626, 27224987
27226913, 27232983, 27233563, 27236052, 27236110, 27240246, 27240570
27241221, 27241247, 27244337, 27244785, 27249215, 27250547, 27254851
27258578, 27259386, 27259983, 27262650, 27262945, 27263276, 27263996
27270197, 27274456, 27274536, 27275136, 27275776, 27282707, 27283029
27283960, 27284499, 27285244, 27288230, 27292213, 27294480, 27301308
27301568, 27302594, 27302681, 27302695, 27302711, 27302730, 27302777
27302800, 27302960, 27304410, 27304936, 27305318, 27307868, 27310092
27313687, 27314206, 27314390, 27318869, 27321179, 27321834, 27326204
27329812, 27330158, 27330161, 27333658, 27333664, 27333693, 27334316
27334648, 27335682, 27338912, 27338946, 27339115, 27339396, 27339483
27339495, 27341036, 27345190, 27345231, 27345450, 27345498, 27346329
27346644, 27346709, 27346949, 27347126, 27348081, 27348707, 27349393
27352600, 27354783, 27356373, 27357773, 27358241, 27359178, 27359368
27360126, 27364891, 27364916, 27364947, 27365139, 27365702, 27365993
27367194, 27368850, 27372756, 27375260, 27375542, 27376871, 27378103
27379233, 27381383, 27381656, 27384222, 27389352, 27392187, 27395404
27395416, 27395794, 27396357, 27396365, 27396377, 27396624, 27396666
27396672, 27396813, 27398080, 27398660, 27401637, 27405242, 27405696
27410300, 27410595, 27412805, 27417186, 27420715, 27421101, 27422874
27423251, 27425507, 27425622, 27426363, 27427805, 27430802, 27432338
27432355, 27433870, 27434050, 27434193, 27434486, 27434974, 27435537
27439835, 27441326, 27442041, 27444727, 27445330, 27445462, 27447452
27447687, 27448162, 27450355, 27450400, 27450783, 27451049, 27451182
27451187, 27451531, 27452760, 27453225, 27457666, 27457891, 27458164
27459909, 27460675, 27467543, 27469329, 27471876, 27472969, 27473800
27479358, 27483974, 27484556, 27486253, 27487795, 27489719, 27496224
27496308, 27497950, 27498477, 27501327, 27501413, 27501465, 27502420
27504190, 27505603, 27506774, 27508985, 27511196, 27512439, 27517818
27518227, 27518310, 27520070, 27520900, 27522245, 27523368, 27523800
27525909, 27532375, 27533819, 27534509, 27537472, 27544030, 27545630
27547732, 27550341, 27551855, 27558557, 27558559, 27558861, 27560702
27563629, 27563767, 27570318, 27577758, 27579353, 27580996, 27585755
27585800, 27586810, 27586895, 27587672, 27591842, 27592466, 27593389
27595973, 27599689, 27602091, 27602488, 27603841, 27604293, 27607805
27608669, 27610269, 27613080, 27613247, 27615608, 27616657, 27617522
27625274, 27625620, 27631506, 27634676, 27635508, 27644757, 27649707
27652302, 27663370, 27664702, 27679488, 27679664, 27679806, 27679961
27680162, 27680509, 27682151, 27688099, 27688692, 27690578, 27691809
27692215, 27693713, 27697092, 27701795, 27705761, 27707544, 27709046
27718914, 27719187, 27723002, 27726269, 27726780, 27732323, 27739006
27740844, 27744211, 27745220, 27747869, 27748954, 27751006, 27753336
27757567, 27772815, 27773602, 27774320, 27774539, 27779886, 27780562
27782339, 27783289, 27786772, 27791223, 27797290, 27803665, 27807441
27812560, 27812593, 27813267, 27815347, 27818871, 27832643, 27833369
27834984, 27840386, 27847259, 27851757, 27861909, 27869339, 27873643
27882176, 27892488, 27924147, 27926113, 27930478, 27934468, 27941896
27945870, 27950708, 27952762, 27961746, 27964051, 27970265, 27971575
27984028, 27989849, 27993289, 27994333, 27997875, 27999597, 28021205
28022847, 28033429, 28057267, 28059199, 28072130, 28098865, 28106402
28132287, 28169711, 28174827, 28184554, 28188330, 25929650, 28264172
--------------------------------------------------------------------------------
OPatch succeeded.
Check "lspatches" from Oracle Home
==================================
[oracle@sharddb3 OPatch]$ ./opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
[oracle@sharddb3 OPatch]$ cd
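As an optional cross-check, the patches registered inside the database itself can be listed; a minimal sketch, assuming datapatch has already been run against this database:
[oracle@sharddb3 ~]$ sqlplus / as sysdba
SQL> -- list the patches that datapatch has recorded in this database
SQL> select patch_id, status, action, description from dba_registry_sqlpatch order by action_time;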
Check the "Patch Storage" as a "grid" user
==========================================
[grid@sharddb3 ~]$ du -sh /u01/app/18.0.0/grid/.patch_storage/
2.5G /u01/app/18.0.0/grid/.patch_storage/
[grid@sharddb3 ~]$ ls -lrth /u01/app/18.0.0/grid/.patch_storage/
total 52K
drwxr-xr-x. 4 grid oinstall 4.0K Jul 18 14:07 28090523_Jul_14_2018_00_03_50
drwxr-xr-x. 4 grid oinstall 4.0K Jul 18 14:08 28090553_Jul_11_2018_19_20_31
drwxr-xr-x. 4 grid oinstall 4.0K Jul 18 14:09 28090557_Jun_25_2018_00_35_26
drwxr-xr-x. 4 grid oinstall 4.0K Jul 18 14:09 28090564_Jul_4_2018_23_13_47
drwxr-xr-x. 4 grid oinstall 4.0K Jul 18 14:09 28256701_Jun_29_2018_03_28_30
drwxr-xr-x. 4 grid oinstall 4.0K Jul 18 14:11 27923415_Jul_15_2018_10_33_22
drwxr-xr-x. 14 grid oinstall 4.0K Jul 18 14:11 NApply
-rw-r--r--. 1 grid oinstall 6.1K Jul 18 14:12 record_inventory.txt
-rw-r--r--. 1 grid oinstall 92 Jul 18 14:12 LatestOPatchSession.properties
-rw-r--r--. 1 grid oinstall 6.3K Jul 18 14:12 interim_inventory.txt
drwxr-xr-x. 4 grid oinstall 4.0K Jul 18 14:12 27908644_May_4_2018_01_21_02
[grid@sharddb3 ~]$ exit
Check the "Patch Storage" as a "oracle" user
============================================
[oracle@sharddb3 ~]$ du -sh /u01/app/oracle/product/18.0.0/dbhome_1/.patch_storage/
1.4G /u01/app/oracle/product/18.0.0/dbhome_1/.patch_storage/
[oracle@sharddb3 ~]$ ls -lrth /u01/app/oracle/product/18.0.0/dbhome_1/.patch_storage/
total 40K
drwxr-xr-x. 4 oracle oinstall 4.0K Jul 18 13:39 28090523_Jul_14_2018_00_03_50
drwxr-xr-x. 4 oracle oinstall 4.0K Jul 18 13:40 28090553_Jul_11_2018_19_20_31
drwxr-xr-x. 4 oracle oinstall 4.0K Jul 18 13:41 27923415_Jul_15_2018_10_33_22
drwxr-xr-x. 8 oracle oinstall 4.0K Jul 18 13:43 NApply
-rw-r--r--. 1 oracle oinstall 5.7K Jul 18 13:43 record_inventory.txt
-rw-r--r--. 1 oracle oinstall 92 Jul 18 13:43 LatestOPatchSession.properties
-rw-r--r--. 1 oracle oinstall 5.8K Jul 18 13:43 interim_inventory.txt
drwxr-xr-x. 4 oracle oinstall 4.0K Jul 18 13:44 27908644_May_4_2018_01_21_02
[oracle@sharddb3 ~]$
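The .patch_storage directories above (2.5G under the Grid home and 1.4G under the database home) hold the rollback backups for the applied patches. If space becomes a concern, OPatch ships a cleanup utility; a minimal sketch (run it per home, and only once you are sure you will not need to roll the patches back):
[oracle@sharddb3 ~]$ cd /u01/app/oracle/product/18.0.0/dbhome_1/OPatch
[oracle@sharddb3 OPatch]$ # remove backup files OPatch no longer needs for the installed patches
[oracle@sharddb3 OPatch]$ ./opatch util cleanup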
Check the "software version/release version" using "crsctl"
===========================================================
[root@sharddb3 ~]# crsctl query has softwareversion
Oracle High Availability Services version on the local node is [18.0.0.0.0]
[root@sharddb3 ~]# crsctl query has releasepatch
Oracle Clusterware release patch level is [70732493] and
the complete list of patches [27908644 27923415 28090523 28090553 28090557 28090564 28256701 ] have been
applied on the local node.
The release patch string is [18.3.0.0.0].
[root@sharddb3 ~]# crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
[root@sharddb3 ~]# crsctl query has softwarepatch
Oracle Clusterware patch level on node sharddb3 is [70732493].
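The same patch list can also be cross-checked with OPatch from the Grid home itself; a minimal sketch, assuming the Grid home path shown in the patch-storage listing above (/u01/app/18.0.0/grid):
[grid@sharddb3 ~]$ /u01/app/18.0.0/grid/OPatch/opatch lspatches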
Check the database status using "srvctl"
========================================
[oracle@sharddb3 ~]$ srvctl status database -d orcl
Database is running.
[oracle@sharddb3 ~]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ORCL/PARAMETERFILE/spfile.270.982592731
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA
Services:
OSDBA group:
OSOPER group:
Database instance: orcl
[oracle@sharddb3 ~]$
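Beyond the database, srvctl can report the other Oracle Restart resources on this node as well; a minimal sketch (resource names depend on the local configuration):
[oracle@sharddb3 ~]$ srvctl status listener
[grid@sharddb3 ~]$ srvctl status asm
[grid@sharddb3 ~]$ srvctl status diskgroup -diskgroup DATA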
Check the Databases including Pluggable Databases
==================================================
[oracle@sharddb3 ~]$ sqlplus /nolog
SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jul 27 15:26:39 2018
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle. All rights reserved.
SQL> connect sys/oracle@orcl as sysdba
Connected.
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /u01/app/oracle/product/18.0.0/dbhome_1/dbs/arch
Oldest online log sequence 2
Current log sequence 4
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 ORCLPDB READ WRITE NO
SQL> alter session set container = ORCLPDB;
Session altered.
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
3 ORCLPDB READ WRITE NO
SQL> connect sys/oracle@192.168.2.60:1521/orclpdb as sysdba
Connected.
SQL> connect sys/oracle@orcl as sysdba
Connected.
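Since ORCLPDB has to be open for the service connection tested above, it can be worth preserving the PDB open state across CDB restarts; a minimal sketch:
SQL> -- open all PDBs and remember their open mode for future restarts
SQL> alter pluggable database all open;
SQL> alter pluggable database all save state;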
Oracle 18c includes Oracle Trace File Analyzer (TFA)
====================================================
[root@sharddb3 ~]# sh /u01/app/oracle/product/18.0.0/dbhome_1/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/18.0.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Do you want to setup Oracle Trace File Analyzer (TFA) now ? yes|[no] :
yes
Installing Oracle Trace File Analyzer (TFA).
Log File: /u01/app/oracle/product/18.0.0/dbhome_1/install/
root_sharddb3_2018-07-27_14-06-45-665345917.log
Finished installing Oracle Trace File Analyzer (TFA)
[root@sharddb3 ~]#
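Once root.sh finishes, TFA can be verified with tfactl; a minimal sketch, assuming tfactl was placed under the database home's bin directory by this installation (paths and options can vary by version):
[root@sharddb3 ~]# /u01/app/oracle/product/18.0.0/dbhome_1/bin/tfactl print status
[root@sharddb3 ~]# # collect diagnostics for roughly the last hour if something needs investigating
[root@sharddb3 ~]# /u01/app/oracle/product/18.0.0/dbhome_1/bin/tfactl diagcollect -last 1h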
Friday, July 27, 2018
Installed Oracle 18c with Grid Infrastructure (+ASM) and Multi-Tenant Database
Saturday, July 14, 2018
Oracle has released a bug fix for "Oracle Exadata Storage Server Patch (12.2.1.1.7) encountered bugs - 'CELLSRV' Restarting On Exadata x4-2 and Exadata x6-2"
Two weeks ago we encountered a bug after applying the April 2018 PSU, which included Oracle Exadata Storage Server version 12.2.1.1.7.
CELLSRV started restarting frequently on the cells. I posted a blog entry about the issue:
Oracle Exadata Storage Server Patch (12.2.1.1.7) encountered bugs - 'CELLSRV' Restarting On Exadata x4-2 and Exadata x6-2
http://yvrk1973.blogspot.com/2018/06/oracle-exadata-storage-server-patch.html
Thanks to the Oracle Exadata development team, who recently developed an interim patch for
Oracle Exadata Storage Server version 12.2.1.1.7.
Patch: p28181789_122117_Linux-x86-64.zip
The zip file contains two RPMs and a README.txt.
a.1) cell-12.2.1.1.7_LINUX.X64_180506-1-rpm.bin -
This is the base release rpm. This is included in case a rollback is needed.
a.2) cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin -
This is the interim patch that contains the fix for the bug listed below.
This patch will replace existing storage server software.
a.3) ========== Bug fixes or Diagnostics included in this ONEOFF ===========
Bug Fixes: 28181789 ORA-07445: [0000000000000000+0] AFTER UPGRADING CELL TO 12.2.1.1.7
Non-Rolling
============================
1. Copy the Cell Oneoff RPM File(s) to the Target Cell Nodes
-------------------------------------------------------------
1.a) Download and unzip p28181789_122117_Linux-x86_64.zip under /tmp on one of the database servers.
If the patch has already been downloaded to a database server and unzipped under /tmp, skip to 1.b.
# unzip p28181789_122117_Linux-x86_64.zip -d /tmp
1.b) Change directory to the location where the oneoff cell RPM is located.
# cd /tmp/RPM_patch_12.2.1.1.7
1.c) Create a temporary working directory /var/log/exadatatmp/SAVE_patch_28181789 on the cells.
# dcli -l root -g cell_group mkdir -p /var/log/exadatatmp/SAVE_patch_28181789
1.d) Copy the new cell RPM bin file to /var/log/exadatatmp/SAVE_patch_28181789 on the cells.
# dcli -l root -g cell_group -f /tmp/RPM_patch_12.2.1.1.7/cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin
-d /var/log/exadatatmp/SAVE_patch_28181789
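Before taking anything down, it is worth confirming that the RPM actually landed on every cell; a minimal sketch using the same dcli cell_group file:
# dcli -l root -g cell_group "ls -l /var/log/exadatatmp/SAVE_patch_28181789"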
2. Shut Down the Cluster
------------------------
If you want to stop the cluster from one node, execute the following command from that node.
[root@dbnode]# crsctl stop cluster -all
If you want to stop CRS on each node, execute the following command on each database node.
[root@dbnode]# crsctl stop crs
3. Install the RPM on Each Cell
------------------------------
3.a) Check that cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin has execute permission, and
enable it if needed.
3.b) Run the following command to apply the cell interim patch.
[root@cell ~]# /var/log/exadatatmp/SAVE_patch_28181789/cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin
--doall --force
...
[INFO] Upgrade was successful.
"Upgrade was successful" is displayed on success.
Note: Refer to section (f) of the patch README for any known issues during the installation.
3.c) Run the following command to verify the installation.
[root@cell ~]# rpm -qa | grep ^cell-
cell-12.2.1.1.7_LINUX.X64_180706-1.x86_64
3.d) Log out and log back in so the shell picks up the new command path.
Then run the following command to verify that all services are running.
[root@cell ~]# cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus
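The same status check can be run against all cells at once from a database server; a minimal sketch, again assuming the cell_group file used earlier (all three services should report running on every cell):
# dcli -l root -g cell_group "cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus"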
4. Restart the Cluster
----------------------
If you want to start the cluster from one node, execute the following command from that node.
[root@dbnode]# crsctl start cluster -all
If you want to start CRS on each node, execute the following command on each node.
[root@dbnode]# crsctl start crs
5. Remove the Patch File
-------------------------
After successful RPM installation, you can remove the temporary patch staging directory to save disk space.
[root@dbnode ~]# dcli -l root -g cell_group rm -rf /var/log/exadatatmp/SAVE_patch_28181789
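As a final sweep, the installed cell RPM version can be confirmed on every cell in one pass; a minimal sketch:
# dcli -l root -g cell_group "rpm -qa | grep ^cell-"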
==============================================================
e) Rolling Back This Interim Patch on Exadata Storage Servers
==============================================================
To roll back, follow either the rolling or non-rolling steps above
to install cell-12.2.1.1.7_LINUX.X64_180506-1-rpm.bin.
This rpm is included in the zip file for this patch.
Note: The patch can also be applied in a "ROLLING" manner. For more details, refer to My Oracle Support (MOS) using the bug number above.
Saturday, July 7, 2018
Downgrading Grid Infrastructure from V12.2.0.1.0 to V12.1.0.2.0
We recently upgraded Grid Infrastructure from 12.1.0.2.0 to 12.2.0.1.0 on Oracle Exadata X7-2, and for various reasons we planned to downgrade Grid Infrastructure from 12.2.0.1.0 back to 12.1.0.2.0. The procedure is straightforward, and the Oracle documentation covers it well.
Delete the Oracle Grid Infrastructure 12c Release 2 (12.2) Management Database
==============================================================================
dbca -silent -deleteDatabase -sourceDB -MGMTDB
[oracle@rac2-12c ~]$ ps -ef | grep pmon
oracle 5679 1 0 07:09 ? 00:00:00 asm_pmon_+ASM2
oracle 6478 1 0 07:09 ? 00:00:00 apx_pmon_+APX2
oracle 6755 1 0 07:10 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 16873 1 0 07:14 ? 00:00:00 ora_pmon_contdb2
oracle 26100 25673 0 07:25 pts/1 00:00:00 grep pmon
Run "rootcrs.sh -downgrade" to downgrade Oracle Grid Infrastructure on all nodes except the first node.
=======================================================================================================
[oracle@rac2-12c ~]$ su - root
Password:
[root@rac2-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/oracle
[root@rac2-12c ~]# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/crsdata/rac2-12c/crsconfig/crsdowngrade_rac2-12c_2018-07-03_07-27-42AM.log
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2-12c'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac2-12c'
CRS-2673: Attempting to stop 'ora.chad' on 'rac1-12c'
CRS-2673: Attempting to stop 'ora.chad' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.acfs_dg.vol1.acfs' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.contdb.serv2.svc' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'rac2-12c'
CRS-2677: Stop of 'ora.contdb.serv2.svc' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.contdb.db' on 'rac2-12c'
CRS-2677: Stop of 'ora.acfs_dg.vol1.acfs' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ACFS_DG.VOL1.advm' on 'rac2-12c'
CRS-2677: Stop of 'ora.ACFS_DG.VOL1.advm' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'rac2-12c'
CRS-2677: Stop of 'ora.contdb.db' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ACFS_DG.dg' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2-12c'
CRS-2673: Attempting to stop 'xag.gg_1-vip.vip' on 'rac2-12c'
CRS-2677: Stop of 'ora.ACFS_DG.dg' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.rac2-12c.vip' on 'rac2-12c'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2-12c'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac2-12c'
CRS-2677: Stop of 'ora.cvu' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac2-12c'
CRS-2677: Stop of 'xag.gg_1-vip.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.chad' on 'rac1-12c' succeeded
CRS-2677: Stop of 'ora.chad' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rac2-12c'
CRS-2677: Stop of 'ora.scan2.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.rac2-12c.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rac2-12c'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.MGMTLSNR' on 'rac2-12c' succeeded
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.qosmserver' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.cvu' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.rac2-12c.vip' on 'rac1-12c'
CRS-2672: Attempting to start 'xag.gg_1-vip.vip' on 'rac1-12c'
CRS-2676: Start of 'ora.cvu' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac1-12c'
CRS-2676: Start of 'ora.scan3.vip' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac1-12c'
CRS-2676: Start of 'ora.rac2-12c.vip' on 'rac1-12c' succeeded
CRS-2676: Start of 'xag.gg_1-vip.vip' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.MGMTLSNR' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.mgmtdb' on 'rac1-12c'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.qosmserver' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.mgmtdb' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.chad' on 'rac1-12c'
CRS-2676: Start of 'ora.chad' on 'rac1-12c' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac2-12c'
CRS-2677: Stop of 'ora.ons' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2-12c'
CRS-2677: Stop of 'ora.net1.network' on 'rac2-12c' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2-12c' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2-12c'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2-12c'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2-12c'
CRS-2677: Stop of 'ora.ctssd' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2-12c'
CRS-2677: Stop of 'ora.cssd' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2-12c'
CRS-2677: Stop of 'ora.gipcd' on 'rac2-12c' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2-12c' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/07/03 07:30:20 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:30:47 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:30:47 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:31:12 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:31:12 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
[root@rac2-12c ~]#
Execute the same command on the other nodes, i.e. rac1-12c
==========================================================
[oracle@rac1-12c ~]$ su - root
Password:
[root@rac1-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@rac1-12c ~]# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -downgrade
Set Oracle Grid Infrastructure 12c Release 1 (12.1) Grid home as the active Oracle Clusterware home
====================================================================================================
$ cd /u01/app/12.1.0.2/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/12.1.0.2/grid "CLUSTER_NODES=rac1-12c,rac2-12c"
Start the 12.1 Oracle Clusterware stack on all nodes.
=====================================================
[oracle@rac1-12c ~]$ su - root
Password:
[root@rac1-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@rac1-12c ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac1-12c ~]# ps -ef | grep pmon
oracle 10415 1 0 07:49 ? 00:00:00 asm_pmon_+ASM1
root 16414 4942 0 07:52 pts/1 00:00:00 grep pmon
[root@rac2-12c bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac2-12c bin]# ps -ef | grep pmon
oracle 5059 1 0 07:50 ? 00:00:00 asm_pmon_+ASM2
root 9921 26564 0 07:51 pts/1 00:00:00 grep pmon
On any node, remove the MGMTDB resource as follows:
====================================================
[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [contdb2] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1-12c ~]$ srvctl remove mgmtdb
Remove the database _mgmtdb? (y/[n]) y
[oracle@rac1-12c ~]$
[oracle@rac1-12c templates]$ ls
MGMTSeed_Database.ctl MGMTSeed_Database.dfb mgmtseed_pdb.xml pdbseed.dfb
MGMTSeed_Database.dbc mgmtseed_pdb.dfb New_Database.dbt pdbseed.xml
[oracle@rac1-12c templates]$ pwd
/u01/app/12.1.0.2/grid/assistants/dbca/templates
[oracle@rac1-12c templates]$ cd ../../..
[oracle@rac1-12c grid]$ cd bin/
Create the MGMTDB in silent mode using the templates
====================================================
[oracle@rac1-12c bin]$ ./dbca -silent -createDatabase -sid -MGMTDB -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -gdbName _mgmtdb -storageType ASM -diskGroupName +DATA -datafileJarLocation /u01/app/12.1.0.2/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
Registering database with Oracle Grid Infrastructure
5% complete
Copying database files
7% complete
9% complete
16% complete
23% complete
30% complete
37% complete
41% complete
Creating and starting Oracle instance
43% complete
48% complete
49% complete
50% complete
55% complete
60% complete
61% complete
64% complete
Completing Database Creation
68% complete
79% complete
89% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb1.log" for further details.
[oracle@rac1-12c bin]$
Check the log file:
===================
[oracle@rac1-12c ~]$ cat /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb1.log
+DATA has enough space. Required space is 1566 MB , available space is 55751 MB.
File Validations Successful.
Validation of server pool succeeded.
Registering database with Oracle Grid Infrastructure
DBCA_PROGRESS : 5%
Copying database files
DBCA_PROGRESS : 7%
DBCA_PROGRESS : 9%
DBCA_PROGRESS : 16%
DBCA_PROGRESS : 23%
DBCA_PROGRESS : 30%
DBCA_PROGRESS : 37%
DBCA_PROGRESS : 41%
Creating and starting Oracle instance
DBCA_PROGRESS : 43%
DBCA_PROGRESS : 48%
DBCA_PROGRESS : 49%
DBCA_PROGRESS : 50%
DBCA_PROGRESS : 55%
DBCA_PROGRESS : 60%
DBCA_PROGRESS : 61%
DBCA_PROGRESS : 64%
Completing Database Creation
DBCA_PROGRESS : 68%
DBCA_PROGRESS : 79%
DBCA_PROGRESS : 89%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at: /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
[oracle@rac1-12c ~]$
[oracle@rac1-12c bin]$ srvctl status MGMTDB
Database is enabled
Instance -MGMTDB is running on node rac1-12c
[oracle@rac1-12c bin]$
Check the MGMTDB instance
==========================
[oracle@rac1-12c bin]$ ps -ef | grep pmon
oracle 10415 1 0 07:49 ? 00:00:00 asm_pmon_+ASM1
oracle 23026 1 0 08:06 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 26089 18664 0 08:14 pts/1 00:00:00 grep pmon
[oracle@rac1-12c bin]$
[oracle@rac1-12c ~]$ su - root
Password:
[root@rac1-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@rac1-12c ~]# cd /u01/app/12.1.0.2/grid/bin/
[root@rac1-12c bin]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@rac1-12c bin]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'rac1-12c'
CRS-2676: Start of 'ora.crf' on 'rac1-12c' succeeded
[root@rac1-12c bin]#
[root@rac2-12c ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@rac2-12c ~]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'rac2-12c'
CRS-2676: Start of 'ora.crf' on 'rac2-12c' succeeded
[root@rac2-12c ~]#
Check the GI version
=====================
[root@rac1-12c bin]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[root@rac1-12c bin]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[root@rac1-12c bin]# exit
logout
[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [contdb] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1-12c ~]$ ps -ef | grep pmon
oracle 10415 1 0 07:49 ? 00:00:00 asm_pmon_+ASM1
oracle 23026 1 0 08:06 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 27904 18664 0 08:19 pts/1 00:00:00 grep pmon
[oracle@rac1-12c ~]$ srvctl status database -d contdb
Instance contdb1 is not running on node rac1-12c
Instance contdb2 is not running on node rac2-12c
[oracle@rac1-12c ~]$ srvctl start database -d contdb
[oracle@rac1-12c ~]$ srvctl status database -d contdb
Instance contdb1 is running on node rac1-12c
Instance contdb2 is running on node rac2-12c
[oracle@rac1-12c ~]$
[oracle@rac2-12c ~]$ . oraenv
ORACLE_SID = [primdb2] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac2-12c ~]$ srvctl status database -d contdb
Instance contdb1 is running on node rac1-12c
Instance contdb2 is running on node rac2-12c
[oracle@rac2-12c ~]$ ps -ef | grep pmon
oracle 547 25673 0 08:56 pts/1 00:00:00 grep pmon
oracle 5059 1 0 07:50 ? 00:00:00 asm_pmon_+ASM2
oracle 21595 1 0 08:20 ? 00:00:00 ora_pmon_contdb2
[oracle@rac2-12c ~]$
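One extra sanity check after the -updateNodeList step (my own addition, not part of the original procedure) is to confirm that the central inventory now flags the 12.1 Grid home as the clusterware home; a minimal sketch, assuming the default central inventory location:
[oracle@rac1-12c ~]$ grep 'CRS="true"' /u01/app/oraInventory/ContentsXML/inventory.xml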
Friday, July 6, 2018
Online Operations in Oracle 11g/ 12c (12.1) /12c (12.2) / 18c Databases
Oracle 11g (11.2) & Prior
-------------------------
1. Create index online
2. Rebuild index online
3. Rebuild index partition online
4. Add column
5. Add constraint enable novalidate
Oracle 12c (12.1)
------------------
1. Online move partition
2. Drop index online
3. Set unused column online
4. Alter column visible/invisible
5. Alter index unusable online
6. Alter index visible/invisible
7. Alter index parallel/noparallel
Oracle 12c (12.2)
------------------
1. Alter table move online for non-partitioned tables
2. Alter table from non-partitioned to partitioned online
3. Alter table split partition online
4. Create table for exchange (usable for online partition exchange)
5. Move/merge/split partition maintenance operations can now do data filtering
Oracle 18c
-----------
1. Alter table modify partitioned table to a different partitioning method
2. Alter table merge partition/subpartition online
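A few of these operations written out as SQL, for illustration only (the table and index names are made up):
-- 11g and earlier: create or rebuild an index without blocking DML
SQL> create index sales_dt_idx on sales (sale_date) online;
SQL> alter index sales_dt_idx rebuild online;
-- 12c (12.1): move a single partition online
SQL> alter table sales move partition p2018 online;
-- 12c (12.2): move a non-partitioned table online
SQL> alter table customers move online;
-- 18c: merge two partitions online
SQL> alter table sales merge partitions p2017, p2018 into partition p_hist online;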