An Exadata Database Machine is typically configured with three ASM disk groups:
+DATA for database files
+RECO for online redo log and archived log files
+DBFS_DG for cluster configuration files such as the OCR and voting disks

In a customized environment, customers can choose to have more than three disk groups, but three is the recommended layout. The DATA and RECO disk groups are typically sized at 80%/20% or 40%/60% of the overall storage capacity. If you host several databases, the +DATA disk group can fill up quickly.
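To see how much of each disk group is currently used, a query such as the following can help (a minimal sketch; the percentage is raw capacity and does not account for ASM redundancy):

SQL> select name, total_mb, free_mb,
            round((total_mb-free_mb)/total_mb*100,2) pct_used
     from v$asm_diskgroup;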

In this article we will demonstrate how to move a database from the +DATA disk group to the +RECO disk group.


Steps to move a database from +DATA to +RECO ASM Disk Group:


Step 1: Get the ASM Disk Information

SQL> select state,name from v$asm_diskgroup;

STATE       NAME
———– ——————————
MOUNTED     RECO
MOUNTED     DBFS_DG
MOUNTED     DATA


Step 2: Get the Database files details

SQL> select name, open_mode,database_role from gv$database;

NAME      OPEN_MODE            DATABASE_ROLE
——— ——————– —————-
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY

SQL> select name from v$controlfile;

NAME
——————————————————————————–
+DATA/dbm01/controlfile/current.256.976374731

SQL> select name from v$datafile;

NAME
——————————————————————————–
+DATA/dbm01/datafile/system.259.976374739
+DATA/dbm01/datafile/sysaux.260.976374743
+DATA/dbm01/datafile/undotbs1.261.976374745
+DATA/dbm01/datafile/undotbs2.263.976374753
+DATA/dbm01/datafile/undotbs3.264.976374755
+DATA/dbm01/datafile/undotbs4.265.976374757
+DATA/dbm01/datafile/users.266.976374757

7 rows selected.

SQL> select name from v$tempfile;

NAME
——————————————————————————–
+DATAC1/dbm01/tempfile/temp.262.976375229

SQL>

SQL> select member from v$logfile;

MEMBER
——————————————————————————–
+DATA/dbm01/onlinelog/group_1.257.976374733
+DATA/dbm01/onlinelog/group_2.258.976374735
+DATA/dbm01/onlinelog/group_7.267.976375073
+DATA/dbm01/onlinelog/group_8.268.976375075
+DATA/dbm01/onlinelog/group_5.269.976375079
+DATA/dbm01/onlinelog/group_6.270.976375083
+DATA/dbm01/onlinelog/group_3.271.976375085
+DATA/dbm01/onlinelog/group_4.272.976375087
+DATA/dbm01/onlinelog/group_9.274.976375205
+DATA/dbm01/onlinelog/group_10.275.976375209
+DATA/dbm01/onlinelog/group_11.276.976375211
+DATA/dbm01/onlinelog/group_12.277.976375215
+DATA/dbm01/onlinelog/group_13.278.976375217
+DATA/dbm01/onlinelog/group_14.279.976375219
+DATA/dbm01/onlinelog/group_15.280.976375223
+DATA/dbm01/onlinelog/group_16.281.976375225

16 rows selected.

SQL> select filename from v$block_change_tracking;

FILENAME
——————————————————————–
+DATA/dbm01/changetracking/ctf.282.976375227


Step 3: Back up the database using the RMAN BACKUP AS COPY command as shown below. Here we are copying the database to the +RECO ASM disk group.
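Before running the copy, you may want to confirm that +RECO has enough usable space to hold the database; a rough sketch (USABLE_FILE_MB already accounts for ASM redundancy):

SQL> select round(sum(bytes)/1024/1024/1024,2) db_size_gb from v$datafile;
SQL> select name, round(usable_file_mb/1024,2) usable_gb from v$asm_diskgroup where name='RECO';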

RMAN> run {
allocate channel c1 device type disk;
allocate channel c2 device type disk;
allocate channel c3 device type disk;
allocate channel c4 device type disk;
allocate channel c5 device type disk;
allocate channel c6 device type disk;
allocate channel c7 device type disk;
allocate channel c8 device type disk;
backup as copy database include current controlfile format '+RECO';
release channel c1;
release channel c2;
release channel c3;
release channel c4;
release channel c5;
release channel c6;
release channel c7;
release channel c8;
}

released channel: ORA_DISK_1
allocated channel: c1
channel c1: SID=1189 instance=dbm011 device type=DISK

allocated channel: c2
channel c2: SID=1321 instance=dbm011 device type=DISK

allocated channel: c3
channel c3: SID=1343 instance=dbm011 device type=DISK

allocated channel: c4
channel c4: SID=1387 instance=dbm011 device type=DISK

allocated channel: c5
channel c5: SID=1497 instance=dbm011 device type=DISK

allocated channel: c6
channel c6: SID=1519 instance=dbm011 device type=DISK

allocated channel: c7
channel c7: SID=1541 instance=dbm011 device type=DISK

allocated channel: c8
channel c8: SID=1563 instance=dbm011 device type=DISK

Starting backup at 26-MAY-18
channel c1: starting datafile copy
input datafile file number=00001 name=+DATA/dbm01/datafile/system.259.976374739
channel c2: starting datafile copy
input datafile file number=00002 name=+DATA/dbm01/datafile/sysaux.260.976374743
channel c3: starting datafile copy
input datafile file number=00003 name=+DATA/dbm01/datafile/undotbs1.261.976374745
channel c4: starting datafile copy
input datafile file number=00004 name=+DATA/dbm01/datafile/undotbs2.263.976374753
channel c5: starting datafile copy
input datafile file number=00005 name=+DATA/dbm01/datafile/undotbs3.264.976374755
channel c6: starting datafile copy
input datafile file number=00006 name=+DATA/dbm01/datafile/undotbs4.265.976374757
channel c7: starting datafile copy
input datafile file number=00007 name=+DATA/dbm01/datafile/users.266.976374757
channel c8: starting datafile copy
copying current control file
output file name=+RECO/dbm01/datafile/users.284.977121353 tag=TAG20180526T063551 RECID=16 STAMP=977121353
channel c7: datafile copy complete, elapsed time: 00:00:02
output file name=+RECO/dbm01/controlfile/backup.283.977121353 tag=TAG20180526T063551 RECID=17 STAMP=977121353
channel c8: datafile copy complete, elapsed time: 00:00:01
output file name=+RECO/dbm01/datafile/system.291.977121353 tag=TAG20180526T063551 RECID=18 STAMP=977121389
channel c1: datafile copy complete, elapsed time: 00:00:46
output file name=+RECO/dbm01/datafile/sysaux.290.977121353 tag=TAG20180526T063551 RECID=23 STAMP=977121392
channel c2: datafile copy complete, elapsed time: 00:00:46
output file name=+RECO/dbm01/datafile/undotbs1.289.977121353 tag=TAG20180526T063551 RECID=21 STAMP=977121392
channel c3: datafile copy complete, elapsed time: 00:00:46
output file name=+RECO/dbm01/datafile/undotbs2.288.977121353 tag=TAG20180526T063551 RECID=19 STAMP=977121392
channel c4: datafile copy complete, elapsed time: 00:00:46
output file name=+RECO/dbm01/datafile/undotbs3.287.977121353 tag=TAG20180526T063551 RECID=20 STAMP=977121392
channel c5: datafile copy complete, elapsed time: 00:00:46
output file name=+RECO/dbm01/datafile/undotbs4.286.977121353 tag=TAG20180526T063551 RECID=22 STAMP=977121392
channel c6: datafile copy complete, elapsed time: 00:00:46
Finished backup at 26-MAY-18

Starting Control File and SPFILE Autobackup at 26-MAY-18
piece handle=+RECO/dbm01/autobackup/2018_05_26/s_977121397.282.977121399 comment=NONE
Finished Control File and SPFILE Autobackup at 26-MAY-18

released channel: c1

released channel: c2

released channel: c3

released channel: c4

released channel: c5

released channel: c6

released channel: c7

released channel: c8


Step 4: Verify the database copy using RMAN

RMAN> list copy of database;

List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
——- —- – ————— ———- —————
18      1    A 26-MAY-18       1330853    26-MAY-18
        Name: +RECO/dbm01/datafile/system.291.977121353
        Tag: TAG20180526T063551

9       1    A 26-MAY-18       1330410    26-MAY-18
        Name: +RECO/dbm01/datafile/system.286.977120961
        Tag: TAG20180526T062919

3       1    A 26-MAY-18       1330155    26-MAY-18
        Name: +RECO/dbm01/datafile/system.280.977120795
        Tag: TAG20180526T062633

23      2    A 26-MAY-18       1330856    26-MAY-18
        Name: +RECO/dbm01/datafile/sysaux.290.977121353
        Tag: TAG20180526T063551

12      2    A 26-MAY-18       1330413    26-MAY-18
        Name: +RECO/dbm01/datafile/sysaux.287.977120961
        Tag: TAG20180526T062919

2       2    A 26-MAY-18       1330158    26-MAY-18
        Name: +RECO/dbm01/datafile/sysaux.281.977120795
        Tag: TAG20180526T062633

21      3    A 26-MAY-18       1330859    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs1.289.977121353
        Tag: TAG20180526T063551

11      3    A 26-MAY-18       1330416    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs1.288.977120961
        Tag: TAG20180526T062919

4       3    A 26-MAY-18       1330154    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs1.279.977120795
        Tag: TAG20180526T062633

19      4    A 26-MAY-18       1330862    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs2.288.977121353
        Tag: TAG20180526T063551

10      4    A 26-MAY-18       1330419    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs2.289.977120961
        Tag: TAG20180526T062919

1       4    A 26-MAY-18       1330153    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs2.278.977120795
        Tag: TAG20180526T062633

20      5    A 26-MAY-18       1330865    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs3.287.977121353
        Tag: TAG20180526T063551

13      5    A 26-MAY-18       1330422    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs3.290.977120961
        Tag: TAG20180526T062919

7       5    A 26-MAY-18       1330184    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs3.282.977120829
        Tag: TAG20180526T062633

22      6    A 26-MAY-18       1330868    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs4.286.977121353
        Tag: TAG20180526T063551

15      6    A 26-MAY-18       1330425    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs4.291.977120961
        Tag: TAG20180526T062919

6       6    A 26-MAY-18       1330187    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs4.283.977120829
        Tag: TAG20180526T062633

16      7    A 26-MAY-18       1330871    26-MAY-18
        Name: +RECO/dbm01/datafile/users.284.977121353
        Tag: TAG20180526T063551

8       7    A 26-MAY-18       1330428    26-MAY-18
        Name: +RECO/dbm01/datafile/users.292.977120961
        Tag: TAG20180526T062919

5       7    A 26-MAY-18       1330190    26-MAY-18
        Name: +RECO/dbm01/datafile/users.284.977120829
        Tag: TAG20180526T062633

RMAN> list copy of controlfile;

List of Control File Copies
===========================

Key     S Completion Time Ckp SCN    Ckp Time
——- – ————— ———- —————
17      A 26-MAY-18       1330876    26-MAY-18
        Name: +RECO/dbm01/controlfile/backup.283.977121353
        Tag: TAG20180526T063551

14      A 26-MAY-18       1330434    26-MAY-18
        Name: +RECO/dbm01/controlfile/backup.293.977120965
        Tag: TAG20180526T062919


Step 5: Verify the database copy in ASM

[oracle@dm01db01 ~]$ asmcmd -p
ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH    N         512   4096  4194304  272154624  271558968          6479872        88359698              0             N  DATA/
MOUNTED  HIGH    N         512   4096  4194304    2404640    2402468            68704          777921              0             Y  DBFS_DG/
MOUNTED  NORMAL  N         512   4096  4194304   45389568   45183784           540352        22321716              0             N  RECO/

ASMCMD [+] > cd +RECO

ASMCMD [+RECO] > ls -l
Type  Redund  Striped  Time             Sys  Name
                                        Y    DBM01/
ASMCMD [+RECO] > cd DBM01
ASMCMD [+RECO/DBM01] > ls -l
Type         Redund  Striped  Time             Sys  Name
                                               Y    ARCHIVELOG/
                                               Y    AUTOBACKUP/
                                               Y    CONTROLFILE/
                                               Y    DATAFILE/
                                               N    snapcf_dbm01.f => +RECO/DBM01/CONTROLFILE/Backup.285.977120961
ASMCMD [+RECO/DBM01] > ls -l DATAFILE/
Type      Redund  Striped  Time             Sys  Name
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    SYSAUX.290.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    SYSTEM.291.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    UNDOTBS1.289.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    UNDOTBS2.288.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    UNDOTBS3.287.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    UNDOTBS4.286.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    USERS.284.977121353
ASMCMD [+RECO/DBM01] > ls -l CONTROLFILE/
Type         Redund  Striped  Time             Sys  Name
CONTROLFILE  HIGH    FINE     MAY 26 06:00:00  Y    Backup.283.977121353
CONTROLFILE  HIGH    FINE     MAY 26 06:00:00  Y    Backup.285.977120961
CONTROLFILE  HIGH    FINE     MAY 26 06:00:00  Y    Backup.293.977120965

Step 6: Switch the database to the RMAN copy. This switches the database from the +DATA ASM disk group to the +RECO ASM disk group.

[oracle@dm01db01 ~]$ srvctl stop database -d dbm01

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 26 07:22:06 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 2.5655E+10 bytes
Fixed Size                  2265224 bytes
Variable Size            4160753528 bytes
Database Buffers         2.1341E+10 bytes
Redo Buffers              151113728 bytes
Database mounted.

[oracle@dm01db01 ~]$ rman target /

Recovery Manager: Release 11.2.0.4.0 – Production on Sat May 26 07:23:09 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBM01 (DBID=1180720008, not open)

RMAN> switch database to copy;

using target database control file instead of recovery catalog
datafile 1 switched to datafile copy "+RECO/dbm01/datafile/system.291.977121353"
datafile 2 switched to datafile copy "+RECO/dbm01/datafile/sysaux.290.977121353"
datafile 3 switched to datafile copy "+RECO/dbm01/datafile/undotbs1.289.977121353"
datafile 4 switched to datafile copy "+RECO/dbm01/datafile/undotbs2.288.977121353"
datafile 5 switched to datafile copy "+RECO/dbm01/datafile/undotbs3.287.977121353"
datafile 6 switched to datafile copy "+RECO/dbm01/datafile/undotbs4.286.977121353"
datafile 7 switched to datafile copy "+RECO/dbm01/datafile/users.284.977121353"

RMAN> recover database;

Starting recover at 26-MAY-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=991 instance=dbm011 device type=DISK

starting media recovery
media recovery complete, elapsed time: 00:00:02

Finished recover at 26-MAY-18

RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 05/26/2018 07:25:06
ORA-01139: RESETLOGS option only valid after an incomplete database recovery

RMAN> alter database open;

database opened

[oracle@dm01db01 ~]$ srvctl stop database -d dbm01

[oracle@dm01db01 ~]$ srvctl start database -d dbm01

[oracle@dm01db01 ~]$ srvctl status database -d dbm01
Instance dbm011 is running on node dm01db01
Instance dbm012 is running on node dm01db02
Instance dbm013 is running on node dm01db03
Instance dbm014 is running on node dm01db04

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 26 07:28:11 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select name, open_mode,database_role from gv$database;

NAME      OPEN_MODE            DATABASE_ROLE
——— ——————– —————-
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY


Step 7: Move Temp and online redo log files

SQL> alter database tempfile '+DATAC1/dbm01/tempfile/temp.262.976375229' drop;

Database altered.

SQL> alter tablespace TEMP add tempfile '+RECO' SIZE 1024M;

Tablespace altered.

SQL> alter database add logfile member '+RECO' to group 1;

Database altered.

SQL> alter database add logfile member '+RECO' to group 2;

Database altered.

SQL> alter database add logfile member '+RECO' to group 3;

Database altered.

SQL> alter database add logfile member '+RECO' to group 4;

Database altered.

SQL> alter database add logfile member '+RECO' to group 5;

Database altered.

SQL> alter database add logfile member '+RECO' to group 6;

Database altered.

SQL> alter database add logfile member '+RECO' to group 7;

Database altered.

SQL> alter database add logfile member '+RECO' to group 8;

Database altered.

SQL> alter database add logfile member '+RECO' to group 9;

Database altered.

SQL> alter database add logfile member '+RECO' to group 10;

Database altered.

SQL> alter database add logfile member '+RECO' to group 11;

Database altered.

SQL> alter database add logfile member '+RECO' to group 12;

Database altered.

SQL> alter database add logfile member '+RECO' to group 13;

Database altered.

SQL> alter database add logfile member '+RECO' to group 14;

Database altered.

SQL> alter database add logfile member '+RECO' to group 15;

Database altered.

SQL> alter database add logfile member '+RECO' to group 16;

Database altered.

SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_1.257.976374733';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_2.258.976374735';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_7.267.976375073';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_8.268.976375075';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_5.269.976375079';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_6.270.976375083';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_3.271.976375085';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_4.272.976375087';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_9.274.976375205';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_10.275.976375209';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_11.276.976375211';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_12.277.976375215';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_13.278.976375217';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_14.279.976375219';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_15.280.976375223';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_16.281.976375225';
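Note: a redo log member can normally be dropped only when its group is not the CURRENT or ACTIVE group for any thread. If one of the drops above is rejected for that reason, check v$log, force the logs forward and checkpoint, then retry; a minimal sketch:

SQL> select group#, thread#, status from v$log;
SQL> alter system archive log current;
SQL> alter system checkpoint;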


Step 8: Move control file to +RECO Disk Group

[oracle@dm01db01 ~]$ srvctl stop database -d dbm01
[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 26 08:53:35 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 2.5655E+10 bytes
Fixed Size                  2265224 bytes
Variable Size            4429188984 bytes
Database Buffers         2.1072E+10 bytes
Redo Buffers              151113728 bytes
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[oracle@dm01db01 ~]$ rman target /

Recovery Manager: Release 11.2.0.4.0 – Production on Sat May 26 08:53:59 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBM01 (not mounted)

RMAN> restore controlfile to '+RECO' from '+DATA/dbm01/controlfile/current.256.976374731';

Starting restore at 26-MAY-18
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=969 instance=dbm011 device type=DISK

channel ORA_DISK_1: copied control file copy
Finished restore at 26-MAY-18

RMAN> exit

Recovery Manager complete.

[oracle@dm01db01 ~]$ . oraenv
ORACLE_SID = [dbm011] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@dm01db01 ~]$ asmcmd -p

ASMCMD [+] > cd +RECO

ASMCMD [+RECO] > ls -l
Type  Redund  Striped  Time             Sys  Name
                                        Y    DBM01/
ASMCMD [+RECO] > cd DBM01
ASMCMD [+RECO/DBM01] > ls -l
Type         Redund  Striped  Time             Sys  Name
                                               Y    ARCHIVELOG/
                                               Y    AUTOBACKUP/
                                               Y    CHANGETRACKING/
                                               Y    CONTROLFILE/
                                               Y    DATAFILE/
                                               Y    ONLINELOG/
                                               Y    TEMPFILE/
                                               N    snapcf_dbm01.f => +RECO/DBM01/CONTROLFILE/Backup.285.977120961
ASMCMD [+RECO/DBM01] > cd CONTROLFILE/

ASMCMD [+RECO/DBM01/CONTROLFILE] > ls -l
Type         Redund  Striped  Time             Sys  Name
CONTROLFILE  HIGH    FINE     MAY 26 06:00:00  Y    Backup.283.977121353
CONTROLFILE  HIGH    FINE     MAY 26 08:00:00  Y    Backup.285.977120961
CONTROLFILE  HIGH    FINE     MAY 26 06:00:00  Y    Backup.293.977120965
CONTROLFILE  HIGH    FINE     MAY 26 08:00:00  Y    Backup.321.977128799
CONTROLFILE  HIGH    FINE     MAY 26 08:00:00  Y    current.331.977129649

ASMCMD [+RECO/DBM01/CONTROLFILE] > pwd
+RECO/DBM01/CONTROLFILE

ASMCMD [+RECO/DBM01/CONTROLFILE] > exit

[oracle@dm01db01 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? dbm011
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 26 08:55:09 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> alter system set control_files='+RECO/DBM01/CONTROLFILE/current.331.977129649' scope=spfile sid='*';

System altered.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> exit

[oracle@dm01db01 ~]$ srvctl start database -d dbm01

[oracle@dm01db01 ~]$ srvctl status database -d dbm01
Instance dbm011 is running on node dm01db01
Instance dbm012 is running on node dm01db02
Instance dbm013 is running on node dm01db03
Instance dbm014 is running on node dm01db04


Step 9: Move block change tracking file to +RECO Disk Group

SQL> select filename from v$block_change_tracking;

FILENAME
——————————————————————–
+DATA/dbm01/changetracking/ctf.282.976375227

SQL> alter database disable block change tracking;

Database altered.

SQL> alter database enable block change tracking using file '+RECO';

Database altered.

SQL> select filename from v$block_change_tracking;

FILENAME
——————————————————————–
+RECO/dbm01/changetracking/ctf.319.977128195


Step 10: Move Flash Recovery Area to +RECO Disk Group

SQL> show parameter recover

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_recovery_file_dest                string      +DATA

SQL> alter system set db_recovery_file_dest='+RECO';

System altered.

SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_recovery_file_dest                string      +RECO


Step 11: Update OMF parameter to point to +RECO

SQL> show parameter online

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_create_online_log_dest_1          string      +DATA

SQL> alter system set db_create_online_log_dest_1='+RECO';

System altered.

SQL> show parameter db_create_online_log_dest_1

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_create_online_log_dest_1          string      +RECO
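If db_create_file_dest is also set to +DATA in your environment, you may want to repoint it as well so that any newly created datafiles land in the new disk group; a sketch (the value shown is an example, adjust to your own standards):

SQL> show parameter db_create_file_dest
SQL> alter system set db_create_file_dest='+RECO' sid='*';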


Step 12: Verify the entire database is moved to +RECO ASM Disk Group

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 26 08:57:57 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select name, open_mode,database_role from gv$database;

NAME      OPEN_MODE            DATABASE_ROLE
——— ——————– —————-
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY

SQL> set lines 200
SQL> set pages 200
SQL> select name from v$tempfile;

NAME
——————————————————-
+RECO/dbm01/tempfile/temp.297.977125145

SQL> select name from v$controlfile;

NAME
——————————————————-
+RECO/dbm01/controlfile/current.331.977129649

SQL> select name from v$datafile;

NAME
——————————————————–
+RECO/dbm01/datafile/system.291.977121353
+RECO/dbm01/datafile/sysaux.290.977121353
+RECO/dbm01/datafile/undotbs1.289.977121353
+RECO/dbm01/datafile/undotbs2.288.977121353
+RECO/dbm01/datafile/undotbs3.287.977121353
+RECO/dbm01/datafile/undotbs4.286.977121353
+RECO/dbm01/datafile/users.284.977121353

7 rows selected.

SQL> select member from v$logfile;

MEMBER
———————————————————
+RECO/dbm01/onlinelog/group_1.298.977127719
+RECO/dbm01/onlinelog/group_2.299.977125295
+RECO/dbm01/onlinelog/group_3.300.977125299
+RECO/dbm01/onlinelog/group_4.301.977125309
+RECO/dbm01/onlinelog/group_5.302.977125313
+RECO/dbm01/onlinelog/group_6.303.977125317
+RECO/dbm01/onlinelog/group_7.304.977125321
+RECO/dbm01/onlinelog/group_8.305.977125327
+RECO/dbm01/onlinelog/group_9.306.977125329
+RECO/dbm01/onlinelog/group_10.307.977125333
+RECO/dbm01/onlinelog/group_11.308.977125335
+RECO/dbm01/onlinelog/group_12.309.977125339
+RECO/dbm01/onlinelog/group_13.310.977125343
+RECO/dbm01/onlinelog/group_14.311.977125345
+RECO/dbm01/onlinelog/group_15.312.977125349
+RECO/dbm01/onlinelog/group_16.313.977125351

16 rows selected.


Conclusion

In this article we have learned how to move a database from the +DATA ASM disk group to the +RECO disk group. Using RMAN image copies together with the SWITCH DATABASE TO COPY command makes it easy to move a database from one location to another.


During Oracle Database Appliance deployment you can optionally configure the CloudFS file system. The default mount point is /cloudfs, with a default size of 50GB. Oracle Database Appliance uses Oracle Automatic Storage Management Cluster File System (Oracle ACFS) for database and virtual machine file storage, and ACFS is the only option for configuring a shared file system on ODA. Oracle ACFS gives both server nodes concurrent access to the /cloudfs shared file system. The default size of 50GB may not be sufficient and can be increased when the business requires storing larger files.


In this article we will demonstrate how to resize the /cloudfs file system using the ASMCA GUI interface.


Steps to resize the /cloudfs file system using the ASMCA GUI interface

Step 1: Get the current /cloudfs file system size
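For example, a quick check from either node (a minimal sketch; the volume name and sizes will differ per deployment):

[root@odanode1 ~]# df -h /cloudfs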

Step 2: Start VNC on node 1. Here I am starting VNC as the root user. You can choose to start it as the grid or oracle user instead.

[root@odanode1 ~]# rpm -qa *vnc*
tigervnc-1.1.0-18.el6.x86_64
tigervnc-server-1.1.0-18.el6.x86_64
[root@odanode1 ~]# ps -ef|grep vnc
root     23281 20754  0 13:32 pts/1    00:00:00 grep vnc

[root@odanode1 ~]# vncserver :1

You will require a password to access your desktops.

Password:
Verify:

New ‘odanode1:1 (root)’ desktop is odanode1:1

Creating default startup script /root/.vnc/xstartup
Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/odanode1:1.log

[root@odanode1 ~]# ps -ef|grep vnc
root     23399     1  1 13:32 pts/1    00:00:00 /usr/bin/Xvnc :1 -desktop odanode1:1 (root) -auth /root/.Xauthority -geometry 1024×768 -rfbwait 30000 -rfbauth /root/.vnc/passwd -rfbport 5901 -fp catalogue:/etc/X11/fontpath.d -pn
root     23481 23480  0 13:33 pts/1    00:00:00 vncconfig -iconic
root     23636 20754  0 13:33 pts/1    00:00:00 grep vnc

Step 3: Start a VNC viewer on your desktop and connect to the hostname/IP address of node 1. Enter the root password, since we started the VNC server as the root user.



Step 4: Switch to grid user and verify the Grid Home

Step 5: Set Oracle Home to Grid home and start asmca
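A command-line sketch of Steps 4 and 5, assuming you run it inside the VNC session (so DISPLAY is already set) and that oraenv points at the Grid home:

[root@odanode1 ~]# su - grid
[grid@odanode1 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
[grid@odanode1 ~]$ asmca &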

Step 6: Click on ASM Cluster File System Tab

Step 7: Right Click on /cloudfs and select Resize option

Step 8: Enter the desired new size. Here I am resizing the /cloudfs to 200GB

Step 9: Resize in progress

Step 10: Resize completed

Step 11: Verify the /cloudfs size
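From the command line the new size can also be confirmed with df or acfsutil (a sketch):

[root@odanode1 ~]# df -h /cloudfs
[root@odanode1 ~]# /sbin/acfsutil info fs /cloudfs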



Conclusion

In this article we have learned how to resize/increase the size of the /cloudfs ACFS file system on ODA using the ASMCA GUI utility. The cloudfs file system is configured during ODA deployment with a default size of 50GB, which may not be sufficient for storing large files. Because cloudfs is built on ACFS, it can be resized easily using the ASMCA graphical interface.


Oracle Database Appliance (ODA) is an entry-level, pre-configured, highly available Oracle Database engineered system. An ODA system consists of hardware, software, storage and networking, and the hardware configuration is designed to provide redundancy and protection against single points of failure.

ODA consists of two physical servers (Node 0 and Node 1), a storage shelf, and optionally an additional storage shelf. The two independent servers are interconnected and directly attached to the SAS and SSD storage.

ODA is essentially a two-node RAC database system running the Oracle Linux operating system (OEL), Oracle Database Enterprise Edition, and Oracle Grid Infrastructure (Clusterware and ASM). Together, these provide high availability for the Oracle databases running on ODA.

ODA includes several hardware components such as the motherboard, processors, memory, power supplies, fans, network cards and so on. You can monitor the status of these components using the OAKCLI command on both the Bare Metal and Virtualized platforms.

Note: On the X6-2 S/M/L and X7-2 S/M models, ODACLI is used for hardware monitoring and administrative tasks on the Oracle Database Appliance.
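On those newer models a roughly comparable starting point, shown here only as a pointer (the output format differs from oakcli), is:

[root@oda ~]# odacli describe-system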


In this article we will demonstrate how to monitor different hardware component status on ODA nodes.

Using OAKCLI command to Get Hardware Status


  • Execute the following command to display ODA server details

[root@odanoden1 ~]# oakcli show server

        Power State              : On
        Open Problems            : 0
        Model                    : ODA X4-2
        Type                     : Rack Mount
        Part Number              : 33060862+1+1
        Serial Number            : 1440XXXXXX
        Primary OS               : Not Available
        ILOM Address             : 10.10.20.1
        ILOM MAC Address         : 00:10:E0:62:3F:F6
        Description              : Oracle Database Appliance X4-2 1440XXXXX
        Locator Light            : Off
        Actual Power Consumption : 261 watts
        Ambient Temperature      : 18.500 degree C
        Open Problems Report     : System is healthy

[root@odanoden2 ~]# oakcli show server

        Power State              : On
        Open Problems            : 0
        Model                    : ODA X4-2
        Type                     : Rack Mount
        Part Number              : 33060862+1+1
        Serial Number            : 1440XXXXXX
        Primary OS               : Not Available
        ILOM Address             : 10.10.20.2
        ILOM MAC Address         : 00:10:E0:62:41:D6
        Description              : Oracle Database Appliance X4-2 1440XXXXX
        Locator Light            : Off
        Actual Power Consumption : 269 watts
        Ambient Temperature      : 17.750 degree C
        Open Problems Report     : System is healthy


  • Execute the following command to display ODA model

[root@odanoden1 ~]# oakcli show env_hw
BM ODA X4-2
Public interface : COPPER


  • Execute the following command to display ODA software version details

[root@odanoden1 ~]# oakcli show version -detail
Reading the metadata. It takes a while…
System Version  Component Name            Installed Version         Supported Version
————–  —————           ——————        —————–
12.1.2.10.0
                Controller_INT            11.05.03.00               Up-to-date
                Controller_EXT            11.05.03.00               Up-to-date
                Expander                  0018                      Up-to-date
                SSD_SHARED                944A                      Up-to-date
                HDD_LOCAL                 A72A                      Up-to-date
                HDD_SHARED                A72A                      Up-to-date
                ILOM                      3.2.8.25 r114493          Up-to-date
                BIOS                      25040100                  Up-to-date
                IPMI                      1.8.12.4                  Up-to-date
                HMP                       2.3.5.2.8                 Up-to-date
                OAK                       12.1.2.10.0               Up-to-date
                OL                        6.8                       Up-to-date
                GI_HOME                   12.1.0.2.170117(2473      Up-to-date
                                          2082,24828633)
                DB_HOME                   12.1.0.2.170117(2473      Up-to-date
                                          2082,24828633)


  • Execute the following command to display ‘oakcli show’ help

[root@odanoden1 ~]# oakcli show -h
Usage:
oakcli show {disk|diskgroup|expander|fs|raidsycstatus|controller|server|processor|memory|power|cooling|network|enclosure|storage|core_config_key|version|dbhomes|dbstorage|databases|db_config_params|asr|env_hw} [<options>]
where:
        disk                     – About the disk
        diskgroup                – ASM disk group
        expander                 – Expander
        fs                       – Filesystem
        controller               – Controller
        storage                  – All storage components
        version                  – Running software version
        dbhomes                  – Installed oracle database homes
        dbstorage                – Details of ACFS storage setup for the databases
        databases                – Database names
        db_config_params         – db_config_params file
        asr                      – ASR configuration
        env_hw                   – Environment and Hardware information
        server                   – Details of server sub-system
        processor                – Details of processor sub-system
        memory                   – Details of memory sub-system
        power                    – Details of power supply sub-system
        cooling                  – Details of cooling sub-system
        network                  – Details of network sub-system
        enclosure                – Details of enclosure sub-system
        raidsyncstatus           – RAID sync status information
        core_config_key          – Core configuration
For detailed help on each command and object and its options use:
oakcli <command> <object> -h


  • Execute the following command to monitor the Processor Status

[root@odanoden1 ~]# oakcli show processor

        NAME  HEALTH HEALTH_DETAILS PART_NO. LOCATION   MODEL                         MAX_CLK_SPEED TOTAL_CORES ENABLED_CORES

        CPU_0 OK     –              060E     P0 (CPU 0) Intel(R) Xeon(R) CPU E5-2697  2.700 GHz       12        NA
        CPU_1 OK     –              060E     P1 (CPU 1) Intel(R) Xeon(R) CPU E5-2697  2.700 GHz       12        NA


  • Execute the following command to monitor the Memory Status

[root@odanoden1 ~]# oakcli show memory

        NAME    HEALTH HEALTH_DETAILS PART_NO.         SERIAL_NO.         LOCATION MANUFACTURER MEMORY_SIZE CURR_CLK_SPEED ECC_Errors

        DIMM_0  OK     –              M393B2G70DB0-YK0 00CE03143317593248 P0/D0    Samsung      16 GB       1600 MHz       0
        DIMM_1  OK     –              M393B2G70DB0-YK0 00CE0314331759238B P0/D1    Samsung      16 GB       1600 MHz       0
        DIMM_10 OK     –              M393B2G70DB0-YK0 00CE031433175926CD P1/D2    Samsung      16 GB       1600 MHz       0
        DIMM_11 OK     –              M393B2G70DB0-YK0 00CE031433175927AD P1/D3    Samsung      16 GB       1600 MHz       0
        DIMM_12 OK     –              M393B2G70DB0-YK0 00CE031433175922C3 P1/D4    Samsung      16 GB       1600 MHz       0
        DIMM_13 OK     –              M393B2G70DB0-YK0 00CE03143317593250 P1/D5    Samsung      16 GB       1600 MHz       0
        DIMM_14 OK     –              M393B2G70DB0-YK0 00CE0314331759367A P1/D6    Samsung      16 GB       1600 MHz       0
        DIMM_15 OK     –              M393B2G70DB0-YK0 00CE03143317593319 P1/D7    Samsung      16 GB       1600 MHz       0
        DIMM_2  OK     –              M393B2G70DB0-YK0 00CE031433175927A8 P0/D2    Samsung      16 GB       1600 MHz       0
        DIMM_3  OK     –              M393B2G70DB0-YK0 00CE03143317592B31 P0/D3    Samsung      16 GB       1600 MHz       0
        DIMM_4  OK     –              M393B2G70DB0-YK0 00CE03143317592B35 P0/D4    Samsung      16 GB       1600 MHz       0
        DIMM_5  OK     –              M393B2G70DB0-YK0 00CE03143317591C3C P0/D5    Samsung      16 GB       1600 MHz       0
        DIMM_6  OK     –              M393B2G70DB0-YK0 00CE031433175922C7 P0/D6    Samsung      16 GB       1600 MHz       0
        DIMM_7  OK     –              M393B2G70DB0-YK0 00CE0314331759324E P0/D7    Samsung      16 GB       1600 MHz       0
        DIMM_8  OK     –              M393B2G70DB0-YK0 00CE0314331759324B P1/D0    Samsung      16 GB       1600 MHz       0
        DIMM_9  OK     –              M393B2G70DB0-YK0 00CE0314331759331A P1/D1    Samsung      16 GB       1600 MHz       0


  • Execute the following command to monitor Power Status

[root@odanoden1 ~]# oakcli show power

        NAME            HEALTH HEALTH_DETAILS PART_NO. SERIAL_NO.         LOCATION INPUT_POWER OUTPUT_POWER INLET_TEMP     EXHAUST_TEMP

        Power_Supply_0  OK     –              7079395  476856Z+1435CE00EU PS0      Present     119 watts    32.250 degree C 36.562 degree C
        Power_Supply_1  OK     –              7079395  476856Z+1435CE00F6 PS1      Present     112 watts    37.000 degree C 40.375 degree C


  • Execute the following command to monitor Network Status

[root@odanoden1 ~]# oakcli show network

        NAME           HEALTH HEALTH_DETAILS LOCATION PART_NO MANUFACTURER MAC_ADDRESS        LINK_DETECTED DIE_TEMP

        Ethernet_NIC_0 OK     –              NET0     X540    INTEL        00:10:E0:62:3F:F2  yes (eth2)    46.250 degree C
        Ethernet_NIC_1 OK     –              NET1     X540    INTEL        00:10:E0:62:3F:F3  no (eth3)     46.250 degree C
        Ethernet_NIC_2 OK     –              NET2     X540    INTEL        00:10:E0:62:3F:F4  no (eth4)     51.000 degree C
        Ethernet_NIC_3 OK     –              NET3     X540    INTEL        00:10:E0:62:3F:F5  no (eth5)     51.500 degree C
        Ethernet_NIC_4 –      –              NET4     X540    INTEL        90:E2:BA:81:2B:B4  yes (eth0)    –
        Ethernet_NIC_5 –      –              NET5     X540    INTEL        90:E2:BA:81:2B:B5  yes (eth1)    –


  • Execute the following command to monitor Storage Status

[root@odanoden1 ~]# oakcli show storage
==== BEGIN STORAGE DUMP ========
Host Description: Oracle Corporation:SUN SERVER X4-2
Total number of controllers: 3
        Id         = 1
        Serial Num = 500605b008030030
        Vendor     = LSI Logic
        Model      = SGX-SAS6-EXT-Z
        FwVers     = 11.05.03.00
        strId      = mpt2sas:30:00.0

        Id         = 2
        Serial Num = 500605b00802fbc0
        Vendor     = LSI Logic
        Model      = SGX-SAS6-EXT-Z
        FwVers     = 11.05.03.00
        strId      = mpt2sas:40:00.0

        Id         = 0
        Serial Num = 500605b008071240
        Vendor     = LSI Logic
        Model      = SGX-SAS6-INT-Z
        FwVers     = 11.05.03.00
        strId      = mpt2sas:50:00.0

Total number of expanders: 2
        Id         = 1
        Serial Num = 50800200019f0002
        Vendor     = ORACLE
        Model      = DE2-24P
        FwVers     = 0018
        strId      = Primary
        WWN        = 5080020001a6b97e

        Id         = 0
        Serial Num = 50800200019f0002
        Vendor     = ORACLE
        Model      = DE2-24P
        FwVers     = 0018
        strId      = Secondary
        WWN        = 5080020001a6babe

Total number of PDs: 24
        /dev/sdl        LSI Logic         HDD  900gb slot:  0  exp:  0
        /dev/sdn        LSI Logic         HDD  900gb slot:  1  exp:  0
        /dev/sdah       LSI Logic         HDD  900gb slot:  2  exp:  0
        /dev/sdai       LSI Logic         HDD  900gb slot:  3  exp:  0
        /dev/sdaj       LSI Logic         HDD  900gb slot:  4  exp:  0
        /dev/sdak       LSI Logic         HDD  900gb slot:  5  exp:  0
        /dev/sdal       LSI Logic         HDD  900gb slot:  6  exp:  0
        /dev/sdam       LSI Logic         HDD  900gb slot:  7  exp:  0
        /dev/sdan       LSI Logic         HDD  900gb slot:  8  exp:  0
        /dev/sdao       LSI Logic         HDD  900gb slot:  9  exp:  0
        /dev/sdap       LSI Logic         HDD  900gb slot: 10  exp:  0
        /dev/sdaq       LSI Logic         HDD  900gb slot: 11  exp:  0
        /dev/sdar       LSI Logic         HDD  900gb slot: 12  exp:  0
        /dev/sdaa       LSI Logic         HDD  900gb slot: 13  exp:  0
        /dev/sdab       LSI Logic         HDD  900gb slot: 14  exp:  0
        /dev/sdac       LSI Logic         HDD  900gb slot: 15  exp:  0
        /dev/sdad       LSI Logic         HDD  900gb slot: 16  exp:  0
        /dev/sdae       LSI Logic         HDD  900gb slot: 17  exp:  0
        /dev/sdaf       LSI Logic         HDD  900gb slot: 18  exp:  0
        /dev/sdag       LSI Logic         HDD  900gb slot: 19  exp:  0
        /dev/sda        LSI Logic         SSD  200gb slot: 20  exp:  0
        /dev/sdb        LSI Logic         SSD  200gb slot: 21  exp:  0
        /dev/sdc        LSI Logic         SSD  200gb slot: 22  exp:  0
        /dev/sdd        LSI Logic         SSD  200gb slot: 23  exp:  0
==== END STORAGE DUMP =========


  • Execute the following command to monitor Shared Disk Status

[root@odanoden1 ~]# oakcli show disk
        NAME            PATH            TYPE            STATE           STATE_DETAILS

        e0_pd_00        /dev/sdl        HDD             ONLINE          Good
        e0_pd_01        /dev/sdn        HDD             ONLINE          Good
        e0_pd_02        /dev/sdah       HDD             ONLINE          Good
        e0_pd_03        /dev/sdai       HDD             ONLINE          Good
        e0_pd_04        /dev/sdaj       HDD             ONLINE          Good
        e0_pd_05        /dev/sdak       HDD             ONLINE          Good
        e0_pd_06        /dev/sdal       HDD             ONLINE          Good
        e0_pd_07        /dev/sdam       HDD             ONLINE          Good
        e0_pd_08        /dev/sdan       HDD             ONLINE          Good
        e0_pd_09        /dev/sdao       HDD             ONLINE          Good
        e0_pd_10        /dev/sdap       HDD             ONLINE          Good
        e0_pd_11        /dev/sdaq       HDD             ONLINE          Good
        e0_pd_12        /dev/sdar       HDD             ONLINE          Good
        e0_pd_13        /dev/sdaa       HDD             ONLINE          Good
        e0_pd_14        /dev/sdab       HDD             ONLINE          Good
        e0_pd_15        /dev/sdac       HDD             ONLINE          Good
        e0_pd_16        /dev/sdad       HDD             ONLINE          Good
        e0_pd_17        /dev/sdae       HDD             ONLINE          Good
        e0_pd_18        /dev/sdaf       HDD             ONLINE          Good
        e0_pd_19        /dev/sdag       HDD             ONLINE          Good
        e0_pd_20        /dev/sda        SSD             ONLINE          Good
        e0_pd_21        /dev/sdb        SSD             ONLINE          Good
        e0_pd_22        /dev/sdc        SSD             ONLINE          Good
        e0_pd_23        /dev/sdd        SSD             ONLINE          Good


  • Execute the following command to monitor ODA server enclosure

[root@odanoden1 ~]# oakcli show enclosure

        NAME        SUBSYSTEM         STATUS      METRIC

        E0_FAN0     Cooling           OK          3450 rpm
        E0_FAN1     Cooling           OK          3070 rpm
        E0_FAN2     Cooling           OK          3070 rpm
        E0_FAN3     Cooling           OK          3070 rpm
        E0_IOM0     Encl_Electronics  OK          –
        E0_IOM1     Encl_Electronics  OK          –
        E0_PSU0     Power_Supply      OK          –
        E0_PSU1     Power_Supply      OK          –
        E0_TEMP0    Amb_Temp          OK          19 C
        E0_TEMP1    Midplane_Temp     OK          27 C
        E0_TEMP2    PCM0_Inlet_Temp   OK          32 C
        E0_TEMP3    PCM0_Hotspot_Temp OK          38 C
        E0_TEMP4    PCM1_Inlet_Temp   OK          27 C
        E0_TEMP5    PCM1_Hotspot_Temp OK          36 C
        E0_TEMP6    IOM0_Temp         OK          38 C
        E0_TEMP7    IOM1_Temp         OK          45 C


Using ILOM CLI to Get the Hardware Status


  • Execute the following command to connect to ILOM and monitor Hardware Status

[root@odanoden2 ~]# ssh odanoden2-ilom
Password:

Oracle(R) Integrated Lights Out Manager

Version 3.2.8.25 r114493

Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.

Warning: password is set to factory default.

Warning: HTTPS certificate is set to factory default.

Hostname: odanoden2-ilom

-> show -level all -output table /SP/faultmgmt
Target                          | Property                             | Value
——————————–+————————————–+———————————————————

-> show -l all /SYS type==’Hard Disk’

 /SYS/DBP0/HDD0
    Targets:
        OK2RM
        PRSNT
        SERVICE
        STATE

    Properties:
        type = Hard Disk
        ipmi_name = HDD0

    Commands:
        cd
        show

 /SYS/DBP0/HDD1
    Targets:
        OK2RM
        PRSNT
        SERVICE
        STATE

    Properties:
        type = Hard Disk
        ipmi_name = HDD1

    Commands:
        cd
        show


Using ILOM GUI to Get the Hardware Status




Conclusion

In this article we have learned how to monitor the status of various hardware components on ODA nodes using oakcli and ILOM. The ODA server is made up of many hardware components, and monitoring them is key to ODA availability.


During Oracle Database Appliance deployment you can optionally configure the CloudFS file system. The default mount point is /cloudfs, with a default size of 50GB. Oracle Database Appliance uses Oracle Automatic Storage Management Cluster File System (Oracle ACFS) for database and virtual machine file storage, and ACFS is the only option for configuring a shared file system on ODA. Oracle ACFS gives both server nodes concurrent access to the /cloudfs shared file system. The default size of 50GB may not be sufficient and can be increased when the business requires storing larger files.





In this article we will demonstrate how to resize the /cloudfs file system manually using ACFS commands.


Steps to resize the /cloudfs file system


Step 1: Log in to node 1 as the grid user, the owner of the Grid Infrastructure software

[grid@odanoden1 ~]$ id
uid=1000(grid) gid=1001(oinstall) groups=1001(oinstall),1003(racoper),1004(asmdba),1005(asmoper),1006(asmadmin)

Step 2: Verify the existing size of /cloudfs. In my case /cloudfs is 200GB; it was resized in the past from 50GB to 200GB.

[grid@odanoden1 ~]$ df -h /cloudfs
Filesystem           Size  Used Avail Use% Mounted on
/dev/asm/acfsvol-23  200G  483M  200G   1% /cloudfs

Step 3: Set ORACLE_SID to +ASM1

[grid@odanoden1 ~]$ echo $ORACLE_SID

[grid@odanoden1 ~]$ . oraenv

ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@odanoden1 ~]$ echo $ORACLE_SID

+ASM1

Step 4: List the ACFS Mounts. Here we can see that /cloudfs volume is /dev/asm/acfsvol-23

[grid@odanoden1 ~]$ mount |grep asm
/dev/asm/acfsvol-23 on /cloudfs type acfs (rw)
/dev/asm/datastore-272 on /u01/app/oracle/oradata/datastore type acfs (rw)
/dev/asm/datastore-97 on /u02/app/oracle/oradata/datastore type acfs (rw)
/dev/asm/datastore-23 on /u01/app/oracle/fast_recovery_area/datastore type acfs (rw)

Step 5: Get the size of the volume /dev/asm/acfsvol-23

[grid@odanoden1 ~]$ /sbin/advmutil volinfo /dev/asm/acfsvol-23
Device: /dev/asm/acfsvol-23
Interface Version: 1
Size (MB): 204800
Resize Increment (MB): 64
Redundancy: high
Stripe Columns: 8
Stripe Width (KB): 1024
Disk Group: RECO
Volume: ACFSVOL
Compatible.advm: 12.1.0.2.0

Step 6: Resize the /cloudfs as follows. Here we are increasing /cloudfs by 50GB

[grid@odanoden1 ~]$ /sbin/acfsutil size +50g /cloudfs
acfsutil size: new file system size: 268435456000 (256000MB)

Step 7: Verify the new size of the volume /dev/asm/acfsvol-23

[grid@odanoden1 ~]$ /sbin/advmutil volinfo /dev/asm/acfsvol-23
Device: /dev/asm/acfsvol-23
Interface Version: 1
Size (MB): 256000
Resize Increment (MB): 64
Redundancy: high
Stripe Columns: 8
Stripe Width (KB): 1024
Disk Group: RECO
Volume: ACFSVOL
Compatible.advm: 12.1.0.2.0

Step 8: Verify the new size of /cloudfs file system

[grid@odanoden1 ~]$ df -h /cloudfs
Filesystem           Size  Used Avail Use% Mounted on
/dev/asm/acfsvol-23  250G  585M  250G   1% /cloudfs


Conclusion

In this article we have learned how to resize/increase the size of the /cloudfs shared file system on ODA. The cloudfs file system is configured during ODA deployment with a default size of 50GB, which may not be sufficient for storing large files. Because cloudfs is built on ACFS, it can be resized easily using ACFS commands.


When Oracle ACS builds an Exadata Database Machine, they use the OEDA file that you sent them for the Exadata install. By default, Exadata is built with the ASM disk group names DATAC1, RECOC1 and DBFS_DG. If you want to rename DATAC1 and RECOC1 to something different to match your organization's standards, you can do so using the Oracle renamedg utility. The minimum database version required to rename an ASM disk group is 11.2.

In this article we will demonstrate how to rename ASM disk groups on an Exadata Database Machine running Oracle Database 11.2.

Here we want to change the following ASM Disk Group Names:
DATAC1 to DATA
RECOC1 to RECO

Steps to rename ASM Disk Group


  • Get the Database version

ORACLE_SID = [+ASM1] ? dbm011
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Thu May 24 16:36:34 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from v$version;

BANNER
——————————————————————————–
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
PL/SQL Release 11.2.0.4.0 – Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 – Production
NLSRTL Version 11.2.0.4.0 – Production

  • Connect to asmcmd and make a note of the Disk Group Names

[oracle@dm01db01 ~]$ . oraenv

ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle

[oracle@dm01db01 ~]$ asmcmd -p

ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH    N         512   4096  4194304  272154624  271558968          6479872        88359698              0             N  DATAC1/
MOUNTED  HIGH    N         512   4096  4194304    2404640    2402468            68704          777921              0             Y  DBFS_DG/
MOUNTED  NORMAL  N         512   4096  4194304   45389568   45386648           540352        22423148              0             N  RECOC1/

  • Check the versions

ASMCMD [+] > lsct
DB_Name  Status     Software_Version  Compatible_version  Instance_Name  Disk_Group
+ASM     CONNECTED        11.2.0.4.0          11.2.0.4.0  +ASM1          DBFS_DG
DBM01    CONNECTED        11.2.0.4.0          11.2.0.4.0  dbm011         DATAC1

  • Check the database status

[oracle@dm01db01 ~]$ srvctl status database -d dbm01
Instance dbm011 is running on node dm01db01
Instance dbm012 is running on node dm01db02
Instance dbm013 is running on node dm01db03
Instance dbm014 is running on node dm01db04

  • Make a note of the control files, datafiles and redo log files before stopping the database.

SQL> select name from v$controlfile;
SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> select * from v$block_change_tracking;

  • Stop the database

[oracle@dm01db01 ~]$ srvctl stop database -d dbm01

[oracle@dm01db01 ~]$ srvctl status database -d dbm01
Instance dbm011 is not running on node dm01db01
Instance dbm012 is not running on node dm01db02
Instance dbm013 is not running on node dm01db03
Instance dbm014 is not running on node dm01db04

  • Unmount the ASM disk group(s) that you want to rename. Connect to the ASM command prompt and unmount the disk groups; do this on all nodes.

ASMCMD [+] > umount DATAC1

ASMCMD [+] > umount RECOC1

ASMCMD [+] > lsdg
State    Type  Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512   4096  4194304   2404640  2402468            68704          777921              0             Y  DBFS_DG/

Note: If you do not stop the databases that use the ASM disk group, you will get the following error message:

ASMCMD [+] > umount DATAC1
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup “DATAC1” precludes its dismount (DBD ERROR: OCIStmtExecute)

*** Repeat the above steps on all the remaining nodes in the Cluster***

[oracle@dm01db01 ~]$ ssh dm01db02
Last login: Thu May 17 15:23:31 2018 from dm01db01

[oracle@dm01db02 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM2
The Oracle base has been set to /u01/app/oracle

[oracle@dm01db02 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH    N         512   4096  4194304  272154624  271558968          6479872        88359698              0             N  DATAC1/
MOUNTED  HIGH    N         512   4096  4194304    2404640    2402468            68704          777921              0             Y  DBFS_DG/
MOUNTED  NORMAL  N         512   4096  4194304   45389568   45385040           540352        22422344              0             N  RECOC1/

[oracle@dm01db02 ~]$ asmcmd umount DATAC1

[oracle@dm01db02 ~]$ asmcmd umount RECOC1

[oracle@dm01db02 ~]$ asmcmd lsdg
State    Type  Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512   4096  4194304   2404640  2402468            68704          777921              0             Y  DBFS_DG/

[oracle@dm01db01 ~]$ ssh dm01db03
Last login: Thu May 17 15:23:31 2018 from dm01db01

[oracle@dm01db03 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM3
The Oracle base has been set to /u01/app/oracle

[oracle@dm01db03 ~]$ asmcmd umount DATAC1

[oracle@dm01db03 ~]$ asmcmd umount RECOC1

[oracle@dm01db03 ~]$ asmcmd lsdg
State    Type  Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512   4096  4194304   2404640  2402468            68704          777921              0             Y  DBFS_DG/

[oracle@dm01db01 ~]$ ssh dm01db04
Last login: Thu May 17 15:23:31 2018 from dm01db01

[oracle@dm01db04 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM4
The Oracle base has been set to /u01/app/oracle

[oracle@dm01db04 ~]$ asmcmd umount DATAC1

[oracle@dm01db04 ~]$ asmcmd umount RECOC1

[oracle@dm01db04 ~]$ asmcmd lsdg
State    Type  Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512   4096  4194304   2404640  2402468            68704          777921              0             Y  DBFS_DG/
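
As an alternative to logging in to every compute node, the disk group resources can usually be stopped cluster-wide with srvctl, which dismounts them on all nodes in one step. A sketch, run as the Grid Infrastructure owner:

[oracle@dm01db01 ~]$ srvctl stop diskgroup -g DATAC1
[oracle@dm01db01 ~]$ srvctl stop diskgroup -g RECOC1
[oracle@dm01db01 ~]$ srvctl status diskgroup -g DATAC1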

  • Verify that renamedg is in the PATH

[oracle@dm01db01 ~]$ which renamedg
/u01/app/11.2.0.4/grid/bin/renamedg

  • As the owner of the Grid Infrastructure software, execute the renamedg command. Here the owner of the GI home is the 'oracle' user. First, rename the DATAC1 disk group to DATA.

[oracle@dm01db01 ~]$ renamedg phase=both dgname=DATAC1 newdgname=DATA verbose=true

NOTE: No asm libraries found in the system
Parsing parameters..
Parameters in effect:

         Old DG name       : DATAC1
         New DG name       : DATA
         Phases            :
                 Phase 1
                 Phase 2
         Discovery str      : (null)
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=DATAC1 newdgname=DATA verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_02_dm01cel01 with disk number:74 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_11_dm01cel01 with disk number:83 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_07_dm01cel01 with disk number:79 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_04_dm01cel01 with disk number:76 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_05_dm01cel01 with disk number:77 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_10_dm01cel01 with disk number:82 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_06_dm01cel01 with disk number:78 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_03_dm01cel01 with disk number:75 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_00_dm01cel01 with disk number:72 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_08_dm01cel01 with disk number:80 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_01_dm01cel01 with disk number:73 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_09_dm01cel01 with disk number:81 and timestamp (33068591 612262912)



Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_09_dm01cel07 with disk number:69 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_03_dm01cel07 with disk number:63 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_06_dm01cel07 with disk number:66 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_08_dm01cel07 with disk number:68 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_04_dm01cel07 with disk number:64 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_07_dm01cel07 with disk number:67 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_01_dm01cel07 with disk number:61 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_05_dm01cel07 with disk number:65 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_02_dm01cel07 with disk number:62 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_00_dm01cel07 with disk number:60 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_11_dm01cel07 with disk number:71 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_10_dm01cel07 with disk number:70 and timestamp (33068591 612262912)
Checking for hearbeat…
Re-discovering the group
Performing discovery with string:
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_02_dm01cel01 with disk number:74 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_11_dm01cel01 with disk number:83 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_07_dm01cel01 with disk number:79 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_04_dm01cel01 with disk number:76 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_05_dm01cel01 with disk number:77 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_10_dm01cel01 with disk number:82 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_06_dm01cel01 with disk number:78 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_03_dm01cel01 with disk number:75 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_00_dm01cel01 with disk number:72 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_08_dm01cel01 with disk number:80 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_01_dm01cel01 with disk number:73 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_09_dm01cel01 with disk number:81 and timestamp (33068591 612262912)



Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_09_dm01cel07 with disk number:69 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_03_dm01cel07 with disk number:63 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_06_dm01cel07 with disk number:66 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_08_dm01cel07 with disk number:68 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_04_dm01cel07 with disk number:64 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_07_dm01cel07 with disk number:67 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_01_dm01cel07 with disk number:61 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_05_dm01cel07 with disk number:65 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_02_dm01cel07 with disk number:62 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_00_dm01cel07 with disk number:60 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_11_dm01cel07 with disk number:71 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_10_dm01cel07 with disk number:70 and timestamp (33068591 612262912)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:74
Checking disk number:83
Checking disk number:79
Checking disk number:76
Checking disk number:77
Checking disk number:82
Checking disk number:78
Checking disk number:75
Checking disk number:72
Checking disk number:80
Checking disk number:73
Checking disk number:81
Checking disk number:69
Checking disk number:63
Checking disk number:66
Checking disk number:68
Checking disk number:64
Checking disk number:67
Checking disk number:61
Checking disk number:65
Checking disk number:62
Checking disk number:60


Generating configuration file..
Completed phase 1
Executing phase 2
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_02_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_11_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_07_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_04_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_05_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_10_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_06_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_03_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_00_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_08_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_01_dm01cel01
Modifying the header


Modifying the header
Completed phase 2
Terminating kgfd context 0x7f9d346240a0

  • Now rename the RECOC1 ASM disk group to RECO using the renamedg command

[oracle@dm01db01 ~]$ renamedg phase=both dgname=RECOC1 newdgname=RECO verbose=true

NOTE: No asm libraries found in the system
Parsing parameters..
Parameters in effect:

         Old DG name       : RECOC1
         New DG name       : RECO
         Phases            :
                 Phase 1
                 Phase 2
         Discovery str      : (null)
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=RECOC1 newdgname=RECO verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_03_dm01cel01 with disk number:75 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_04_dm01cel01 with disk number:76 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_05_dm01cel01 with disk number:77 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_00_dm01cel01 with disk number:72 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_10_dm01cel01 with disk number:82 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_07_dm01cel01 with disk number:79 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_02_dm01cel01 with disk number:74 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_01_dm01cel01 with disk number:73 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_11_dm01cel01 with disk number:83 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_08_dm01cel01 with disk number:80 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_09_dm01cel01 with disk number:81 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_06_dm01cel01 with disk number:78 and timestamp (33068591 628813824)


Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_01_dm01cel07 with disk number:61 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_08_dm01cel07 with disk number:68 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_06_dm01cel07 with disk number:66 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_11_dm01cel07 with disk number:71 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_10_dm01cel07 with disk number:70 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_05_dm01cel07 with disk number:65 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_00_dm01cel07 with disk number:60 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_03_dm01cel07 with disk number:63 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_09_dm01cel07 with disk number:69 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_07_dm01cel07 with disk number:67 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_02_dm01cel07 with disk number:62 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_04_dm01cel07 with disk number:64 and timestamp (33068591 628813824)
Checking for hearbeat…
Re-discovering the group
Performing discovery with string:
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_03_dm01cel01 with disk number:75 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_04_dm01cel01 with disk number:76 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_05_dm01cel01 with disk number:77 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_00_dm01cel01 with disk number:72 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_10_dm01cel01 with disk number:82 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_07_dm01cel01 with disk number:79 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_02_dm01cel01 with disk number:74 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_01_dm01cel01 with disk number:73 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_11_dm01cel01 with disk number:83 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_08_dm01cel01 with disk number:80 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_09_dm01cel01 with disk number:81 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_06_dm01cel01 with disk number:78 and timestamp (33068591 628813824)


Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_01_dm01cel07 with disk number:61 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_08_dm01cel07 with disk number:68 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_06_dm01cel07 with disk number:66 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_11_dm01cel07 with disk number:71 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_10_dm01cel07 with disk number:70 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_05_dm01cel07 with disk number:65 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_00_dm01cel07 with disk number:60 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_03_dm01cel07 with disk number:63 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_09_dm01cel07 with disk number:69 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_07_dm01cel07 with disk number:67 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_02_dm01cel07 with disk number:62 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_04_dm01cel07 with disk number:64 and timestamp (33068591 628813824)


Checking if the diskgroup is mounted or used by CSS
Checking disk number:75
Checking disk number:76
Checking disk number:77
Checking disk number:72
Checking disk number:82
Checking disk number:79
Checking disk number:74
Checking disk number:73
Checking disk number:83
Checking disk number:80
Checking disk number:81
Checking disk number:78
Checking disk number:61
Checking disk number:68
Checking disk number:66
Checking disk number:71
Checking disk number:70
Checking disk number:65
Checking disk number:60
Checking disk number:63
Checking disk number:69
Checking disk number:67
Checking disk number:62
Checking disk number:64
Checking disk number:49


Generating configuration file..
Completed phase 1
Executing phase 2
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_03_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_04_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_05_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_00_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_10_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_07_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_02_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_01_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_11_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_08_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_09_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_06_dm01cel01
Modifying the header


Modifying the header
Completed phase 2
Terminating kgfd context 0x7f8d42f6c0a0

  • Mount the DATA and RECO ASM disk groups on all the nodes.

[oracle@dm01db01 ~]$ asmcmd -p

ASMCMD [+] > lsdg
State    Type  Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512   4096  4194304   2404640  2402468            68704          777921              0             Y  DBFS_DG/

ASMCMD [+] > mount DATA

ASMCMD [+] > mount RECO

ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH    N         512   4096  4194304  272154624  271558968          6479872        88359698              0             N  DATA/
MOUNTED  HIGH    N         512   4096  4194304    2404640    2402468            68704          777921              0             Y  DBFS_DG/
MOUNTED  NORMAL  N         512   4096  4194304   45389568   45385040           540352        22422344              0             N  RECO/

*** Repeat the above steps on all the remaining compute nodes in the Cluster***

Note: 

  • The renamedg utility cannot rename the associated ASM/grid disk names
  • The renamedg utility cannot rename or update the control files, datafiles, redo log files or any other files that reference the renamed ASM disk groups; this must be done manually for every database that uses them (see the sketch below for locating these references)
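
A minimal sketch for locating those references before proceeding (run against each database that used the renamed disk groups; the LIKE patterns assume the old names DATAC1 and RECOC1):

SQL> select name from v$datafile where name like '+DATAC1%';
SQL> select member from v$logfile where member like '+DATAC1%';
SQL> select name, value from v$parameter
     where upper(value) like '%DATAC1%' or upper(value) like '%RECOC1%';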


Steps to rename control files, datafiles, redo log files and other database files


  • Update SPFILE location

[oracle@dm01db01 ~]$ cd $ORACLE_HOME/dbs

[oracle@dm01db01 dbs]$ cat initdbm011.ora
SPFILE=’+DATAC1/dbm01/spfiledbm01.ora’

[oracle@dm01db01 dbs]$ vi initdbm011.ora

[oracle@dm01db01 dbs]$ cat initdbm011.ora
SPFILE=’+DATA/dbm01/spfiledbm01.ora’

[oracle@dm01db01 dbs]$ scp initdbm011.ora dm01db02:/u01/app/oracle/product/11.2.0.4/dbhome/dbs/initdbm012.ora
initdbm011.ora                                                                                                                                             100%   42     0.0KB/s   00:00

[oracle@dm01db01 dbs]$ scp initdbm011.ora dm01db03:/u01/app/oracle/product/11.2.0.4/dbhome/dbs/initdbm013.ora
initdbm011.ora                                                                                                                                             100%   42     0.0KB/s   00:00

[oracle@dm01db01 dbs]$ scp initdbm011.ora dm01db04:/u01/app/oracle/product/11.2.0.4/dbhome/dbs/initdbm014.ora
initdbm011.ora                                                                                                                                             100%   42     0.0KB/s   00:00
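
The spfile itself is not moved by the rename; only the disk group portion of its path changes. Once the ASM environment is set, it can be verified under the new name (a sketch using ASMCMD):

[oracle@dm01db01 dbs]$ asmcmd ls -l +DATA/dbm01/spfiledbm01.ora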

  • Update control file location

[oracle@dm01db01 dbs]$ . oraenv
ORACLE_SID = [dbm011] ?
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@dm01db01 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri May 25 10:17:49 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 2.5655E+10 bytes
Fixed Size                  2265224 bytes
Variable Size            4160753528 bytes
Database Buffers         2.1341E+10 bytes
Redo Buffers              151113728 bytes

SQL> show parameter control_files

NAME                                 TYPE        VALUE
———————————— ———– ——————————
control_files                        string      +DATAC1/dbm01/controlfile/current.256.976374731

SQL> alter system set control_files=’+DATA/dbm01/controlfile/current.256.976374731′ scope=spfile;

System altered.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 2.5655E+10 bytes
Fixed Size                  2265224 bytes
Variable Size            4160753528 bytes
Database Buffers         2.1341E+10 bytes
Redo Buffers              151113728 bytes
Database mounted.

SQL> select name from v$controlfile;

NAME
——————————————————————————–
+DATA/dbm01/controlfile/current.256.976374731

  • Update datafile and redo log file locations

SQL> select name from v$datafile;

NAME
——————————————————————————–
+DATAC1/dbm01/datafile/system.259.976374739
+DATAC1/dbm01/datafile/sysaux.260.976374743
+DATAC1/dbm01/datafile/undotbs1.261.976374745
+DATAC1/dbm01/datafile/undotbs2.263.976374753
+DATAC1/dbm01/datafile/undotbs3.264.976374755
+DATAC1/dbm01/datafile/undotbs4.265.976374757
+DATAC1/dbm01/datafile/users.266.976374757

7 rows selected.

SQL> select member from v$logfile;

MEMBER
——————————————————————————–
+DATAC1/dbm01/onlinelog/group_1.257.976374733
+DATAC1/dbm01/onlinelog/group_2.258.976374735
+DATAC1/dbm01/onlinelog/group_7.267.976375073
+DATAC1/dbm01/onlinelog/group_8.268.976375075
+DATAC1/dbm01/onlinelog/group_5.269.976375079
+DATAC1/dbm01/onlinelog/group_6.270.976375083
+DATAC1/dbm01/onlinelog/group_3.271.976375085
+DATAC1/dbm01/onlinelog/group_4.272.976375087
+DATAC1/dbm01/onlinelog/group_9.274.976375205
+DATAC1/dbm01/onlinelog/group_10.275.976375209
+DATAC1/dbm01/onlinelog/group_11.276.976375211
+DATAC1/dbm01/onlinelog/group_12.277.976375215
+DATAC1/dbm01/onlinelog/group_13.278.976375217
+DATAC1/dbm01/onlinelog/group_14.279.976375219
+DATAC1/dbm01/onlinelog/group_15.280.976375223
+DATAC1/dbm01/onlinelog/group_16.281.976375225

16 rows selected.
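
Rather than typing every rename by hand, the statements can be generated from the data dictionary. A sketch, assuming the old and new names +DATAC1 and +DATA (review the generated script before running it):

SQL> set pages 0 lines 200 trimspool on
SQL> spool /tmp/rename_dbm01_files.sql
SQL> select 'alter database rename file '''||name||''' to '''||
            replace(name,'+DATAC1','+DATA')||''';' from v$datafile
     union all
     select 'alter database rename file '''||member||''' to '''||
            replace(member,'+DATAC1','+DATA')||''';' from v$logfile;
SQL> spool off
SQL> @/tmp/rename_dbm01_files.sql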

SQL> alter database rename file ‘+DATAC1/dbm01/datafile/system.259.976374739’ to ‘+DATA/dbm01/datafile/system.259.976374739’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/datafile/sysaux.260.976374743’ to ‘+DATA/dbm01/datafile/sysaux.260.976374743’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/datafile/undotbs1.261.976374745’ to ‘+DATA/dbm01/datafile/undotbs1.261.976374745’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/datafile/undotbs2.263.976374753’ to ‘+DATA/dbm01/datafile/undotbs2.263.976374753’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/datafile/undotbs3.264.976374755’ to ‘+DATA/dbm01/datafile/undotbs3.264.976374755’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/datafile/undotbs4.265.976374757’ to ‘+DATA/dbm01/datafile/undotbs4.265.976374757’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/datafile/users.266.976374757’ to ‘+DATA/dbm01/datafile/users.266.976374757’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_1.257.976374733’  to ‘+DATA/dbm01/onlinelog/group_1.257.976374733’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_2.258.976374735’  to ‘+DATA/dbm01/onlinelog/group_2.258.976374735’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_7.267.976375073’  to ‘+DATA/dbm01/onlinelog/group_7.267.976375073’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_8.268.976375075’  to ‘+DATA/dbm01/onlinelog/group_8.268.976375075’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_5.269.976375079’  to ‘+DATA/dbm01/onlinelog/group_5.269.976375079’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_6.270.976375083’  to ‘+DATA/dbm01/onlinelog/group_6.270.976375083’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_3.271.976375085’  to ‘+DATA/dbm01/onlinelog/group_3.271.976375085’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_4.272.976375087’  to ‘+DATA/dbm01/onlinelog/group_4.272.976375087’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_9.274.976375205’  to ‘+DATA/dbm01/onlinelog/group_9.274.976375205’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_10.275.976375209’ to ‘+DATA/dbm01/onlinelog/group_10.275.976375209’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_11.276.976375211’ to ‘+DATA/dbm01/onlinelog/group_11.276.976375211’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_12.277.976375215’ to ‘+DATA/dbm01/onlinelog/group_12.277.976375215’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_13.278.976375217’ to ‘+DATA/dbm01/onlinelog/group_13.278.976375217’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_14.279.976375219’ to ‘+DATA/dbm01/onlinelog/group_14.279.976375219’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_15.280.976375223’ to ‘+DATA/dbm01/onlinelog/group_15.280.976375223’;

Database altered.

SQL> alter database rename file ‘+DATAC1/dbm01/onlinelog/group_16.281.976375225’ to ‘+DATA/dbm01/onlinelog/group_16.281.976375225’;

Database altered.

  • Verify the datafiles and redo log files names

SQL> select name from v$datafile;

NAME
——————————————————————————–
+DATA/dbm01/datafile/system.259.976374739
+DATA/dbm01/datafile/sysaux.260.976374743
+DATA/dbm01/datafile/undotbs1.261.976374745
+DATA/dbm01/datafile/undotbs2.263.976374753
+DATA/dbm01/datafile/undotbs3.264.976374755
+DATA/dbm01/datafile/undotbs4.265.976374757
+DATA/dbm01/datafile/users.266.976374757

7 rows selected.

SQL> select member from v$logfile;

MEMBER
——————————————————————————–
+DATA/dbm01/onlinelog/group_1.257.976374733
+DATA/dbm01/onlinelog/group_2.258.976374735
+DATA/dbm01/onlinelog/group_7.267.976375073
+DATA/dbm01/onlinelog/group_8.268.976375075
+DATA/dbm01/onlinelog/group_5.269.976375079
+DATA/dbm01/onlinelog/group_6.270.976375083
+DATA/dbm01/onlinelog/group_3.271.976375085
+DATA/dbm01/onlinelog/group_4.272.976375087
+DATA/dbm01/onlinelog/group_9.274.976375205
+DATA/dbm01/onlinelog/group_10.275.976375209
+DATA/dbm01/onlinelog/group_11.276.976375211
+DATA/dbm01/onlinelog/group_12.277.976375215
+DATA/dbm01/onlinelog/group_13.278.976375217
+DATA/dbm01/onlinelog/group_14.279.976375219
+DATA/dbm01/onlinelog/group_15.280.976375223
+DATA/dbm01/onlinelog/group_16.281.976375225

16 rows selected.

  • Update block change tracking file location

SQL> alter database rename file ‘+DATAC1/dbm01/changetracking/ctf.282.976375227’ to ‘+DATA/dbm01/changetracking/ctf.282.976375227’;

Database altered.

SQL> select * from v$block_change_tracking;

STATUS
———-
FILENAME
——————————————————————————–
     BYTES
———-
ENABLED
+DATA/dbm01/changetracking/ctf.282.976375227
  11599872
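
The temporary file can also still reference the old disk group name. While the database is mounted it can be renamed the same way; a sketch that generates the statement instead of hard-coding the file name:

SQL> select name from v$tempfile;

SQL> select 'alter database rename file '''||name||''' to '''||
            replace(name,'+DATAC1','+DATA')||''';' from v$tempfile;

Run the generated statement, then re-check v$tempfile.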

  • Update OMF related parameters

SQL> show parameter db_create_online_log_dest_1

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_create_online_log_dest_1          string      +DATAC1

SQL> alter system set db_create_online_log_dest_1=’+DATA’;

System altered.

SQL> show parameter db_create_online_log_dest_1

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_create_online_log_dest_1          string      +DATA
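
Depending on how the database was created, other file-creation destination parameters may also still point at the old names and are worth checking. A sketch (only parameters that actually show the old disk group need to be changed; db_create_file_dest is used here purely as an example):

SQL> show parameter db_create

SQL> -- example only; run only if this parameter still referenced the old name
SQL> alter system set db_create_file_dest='+DATA';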

  • Update Fast Recovery Area location

SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_recovery_file_dest                string      +RECOC1
db_recovery_file_dest_size           big integer 20425000M

SQL> alter system set db_recovery_file_dest=’+RECO’;

System altered.

SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_recovery_file_dest                string      +RECO
db_recovery_file_dest_size           big integer 20425000M

  • Shutdown the database

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

  • Update the database configuration

[oracle@dm01db01 dbs]$ srvctl config database -d dbm01
Database unique name: dbm01
Database name: dbm01
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome
Oracle user: oracle
Spfile: +DATAC1/dbm01/spfiledbm01.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: dbm01
Database instances: dbm011,dbm012,dbm013,dbm014
Disk Groups: DATAC1,RECOC1,DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed

[oracle@dm01db01 dbs]$ srvctl modify database -p +DATA/dbm01/spfiledbm01.ora -a DATA,RECO -d dbm01

[oracle@dm01db01 dbs]$ srvctl config database -d dbm01
Database unique name: dbm01
Database name: dbm01
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome
Oracle user: oracle
Spfile: +DATA/dbm01/spfiledbm01.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: dbm01
Database instances: dbm011,dbm012,dbm013,dbm014
Disk Groups: DATA,RECO
Mount point paths:
Services:
Type: RAC
Database is administrator managed

  • Start the database and verify

[oracle@dm01db01 dbs]$ srvctl start database -d dbm01

[oracle@dm01db01 dbs]$ srvctl status database -d dbm01
Instance dbm011 is running on node dm01db01
Instance dbm012 is running on node dm01db02
Instance dbm013 is running on node dm01db03
Instance dbm014 is running on node dm01db04

[oracle@dm01db01 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri May 25 10:40:34 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select name, open_mode,database_role from gv$database;

NAME      OPEN_MODE            DATABASE_ROLE
——— ——————– —————-
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY



Conclusion

In this article we have learned how to rename an ASM disk group on Exadata running Oracle Database 11.2. Starting with Oracle Database 11.2 you can use the renamedg command to rename an ASM disk group. The renamedg utility cannot rename the associated ASM/grid disk names, nor can it rename or update the control files, datafiles, redo log files and any other files that reference the renamed disk groups. You must update those database files manually after renaming the ASM disk groups.


We had a failed hard disk on an Exadata X6-2 storage cell, so we scheduled an Oracle Field Engineer to replace it. The engineer came onsite and replaced the faulty hard disk. After the replacement we found that the physical disk and LUN were created successfully, but the cell disk and grid disks were not created automatically. Normally, when a hard disk is replaced, the LUN, cell disk and grid disks are created automatically and the grid disks are added back to the ASM disk groups without any manual intervention. In some odd cases this does not happen, and you must manually create the cell disk, create the grid disks with the proper sizes and add them to the ASM disk groups.

In this article we will demonstrate how to create the cell disk and grid disks manually and add them to the respective ASM disk groups.

Environment

  • Exadata X6-2 Elastic Configuration
  • 4 Compute nodes and 6 Storage cells
  • Hard Disk Size: 8TB
  • 3 ASM Disk Group: DATA, RECO & DBFS_DG
  • Total Number of Grid disks: DATA – 72, RECO – 72 & DBFS_DG – 60

Here the disk in slot 8:5 was bad and was replaced.

Before Replacing Hard Disk:

CellCLI> list physicaldisk
         8:0             PYJZKV                  normal
         8:1             PMU3LV                  normal
         8:2             P1Y2KV                  normal
         8:3             PYH48V                  normal
         8:4             PY7MAV                  normal
         8:5             PPZ47V                  not present
         8:6             PEJKHR                  normal
         8:7             PY4XSV                  normal
         8:8             PYL00V                  normal
         8:9             PV5RGV                  normal
         8:10            PSU26V                  normal
         8:11            PY522V                  normal
         FLASH_1_1       CVMD522500AG1P6NGN      normal
         FLASH_2_1       CVMD522401AC1P6NGN      normal
         FLASH_4_1       CVMD522500AC1P6NGN      normal
         FLASH_5_1       CVMD5230000Y1P6NGN      normal

CellCLI> list lun
         0_0     0_0     normal
         0_1     0_1     normal
         0_2     0_2     normal
         0_3     0_3     normal
         0_4     0_4     normal
         0_5     0_5     not present
         0_6     0_6     normal
         0_7     0_7     normal
         0_8     0_8     normal
         0_9     0_9     normal
         0_10    0_10    normal
         0_11    0_11    normal
         1_1     1_1     normal
         2_1     2_1     normal
         4_1     4_1     normal
         5_1     5_1     normal

After replacing Hard Disk:

CellCLI> list physicaldisk
         8:0             PYJZKV                  normal
         8:1             PMU3LV                  normal
         8:2             P1Y2KV                  normal
         8:3             PYH48V                  normal
         8:4             PY7MAV                  normal
         8:5             PPZ47V                  normal
         8:6             PEJKHR                  normal
         8:7             PY4XSV                  normal
         8:8             PYL00V                  normal
         8:9             PV5RGV                  normal
         8:10            PSU26V                  normal
         8:11            PY522V                  normal
         FLASH_1_1       CVMD522500AG1P6NGN      normal
         FLASH_2_1       CVMD522401AC1P6NGN      normal
         FLASH_4_1       CVMD522500AC1P6NGN      normal
         FLASH_5_1       CVMD5230000Y1P6NGN      normal

CellCLI> list lun
         0_0     0_0     normal
         0_1     0_1     normal
         0_2     0_2     normal
         0_3     0_3     normal
         0_4     0_4     normal
         0_5     0_5     normal
         0_6     0_6     normal
         0_7     0_7     normal
         0_8     0_8     normal
         0_9     0_9     normal
         0_10    0_10    normal
         0_11    0_11    normal
         1_1     1_1     normal
         2_1     2_1     normal
         4_1     4_1     normal
         5_1     5_1     normal

[root@dm01cel03 ~]# cellcli -e list physicaldisk 8:5 detail
         name:                   8:5
         deviceId:               21
         deviceName:             /dev/sdf
         diskType:               HardDisk
         enclosureDeviceId:      8
         errOtherCount:          0
         luns:                   0_5
         makeModel:              “HGST    H7280A520SUN8.0T”
         physicalFirmware:       PD51
         physicalInsertTime:     2018-05-18T10:52:29-05:00
         physicalInterface:      sas
         physicalSerial:         PPZ47V
         physicalSize:           7.1536639072000980377197265625T
         slotNumber:             5
         status:                 normal

[root@dm01cel03 ~]# cellcli -e list celldisk where lun=0_5 detail


[root@dm01cel03 ~]# cellcli -e list griddisk where cellDisk=CD_05_dm01cel03 attributes name,status
DATA_CD_05_dm01cel03 not present
DBFS_DG_CD_05_dm01cel03 not present
RECO_CD_05_dm01cel03 not present

[root@dm01cel03 ~]# cellcli -e list griddisk where celldisk=CD_05_dm01cel03 detail
         name:                   DATA_CD_05_dm01cel03
         availableTo:
         cachingPolicy:          default
         cellDisk:               CD_05_dm01cel03
         comment:                “Cluster dm01-cluster diskgroup DATA”
         creationTime:           2016-03-29T20:25:56-05:00
         diskType:               HardDisk
         errorCount:             0
         id:                     db221d77-25b0-4f9e-af6f-95e1c3134af5
         size:                   5.6953125T
         status:                 not present

         name:                   DBFS_DG_CD_05_dm01cel03
         availableTo:
         cachingPolicy:          default
         cellDisk:               CD_05_dm01cel03
         comment:                “Cluster dm01-cluster diskgroup DBFS_DG”
         creationTime:           2016-03-29T20:25:53-05:00
         diskType:               HardDisk
         errorCount:             0
         id:                     216fbec9-6ed4-4ef6-a0d4-d09517906fd5
         size:                   33.796875G
         status:                 not present

         name:                   RECO_CD_05_dm01cel03
         availableTo:
         cachingPolicy:          none
         cellDisk:               CD_05_dm01cel03
         comment:                “Cluster dm01-cluster diskgroup RECO”
         creationTime:           2016-03-29T20:25:58-05:00
         diskType:               HardDisk
         errorCount:             0
         id:                     e8ca6943-0ddd-48ab-b890-e14bbf4e591c
         size:                   1.42388916015625T
         status:                 not present

We can clearly see that the grid disks are not present, so we have to create them manually.

Steps to create Celldisk, Griddisks and add them to ASM Disk Group


  • List Cell Disks

[root@dm01cel03 ~]# cellcli -e list celldisk
         CD_00_dm01cel03         normal
         CD_01_dm01cel03         normal
         CD_02_dm01cel03         normal
         CD_03_dm01cel03         normal
         CD_04_dm01cel03         normal
         CD_05_dm01cel03         not present
         CD_06_dm01cel03         normal
         CD_07_dm01cel03         normal
         CD_08_dm01cel03         normal
         CD_09_dm01cel03         normal
         CD_10_dm01cel03         normal
         CD_11_dm01cel03         normal
         FD_00_dm01cel03         normal
         FD_01_dm01cel03         normal
         FD_02_dm01cel03         normal
         FD_03_dm01cel03         normal

  • List Grid Disks

[root@dm01cel03 ~]# cellcli -e list griddisk
         DATA_CD_00_dm01cel03       active
         DATA_CD_01_dm01cel03       active
         DATA_CD_02_dm01cel03       active
         DATA_CD_03_dm01cel03       active
         DATA_CD_04_dm01cel03       active
         DATA_CD_05_dm01cel03       not present
         DATA_CD_06_dm01cel03       active
         DATA_CD_07_dm01cel03       active
         DATA_CD_08_dm01cel03       active
         DATA_CD_09_dm01cel03       active
         DATA_CD_10_dm01cel03       active
         DATA_CD_11_dm01cel03       active
         DBFS_DG_CD_02_dm01cel03    active
         DBFS_DG_CD_03_dm01cel03    active
         DBFS_DG_CD_04_dm01cel03    active
         DBFS_DG_CD_05_dm01cel03    not present
         DBFS_DG_CD_06_dm01cel03    active
         DBFS_DG_CD_07_dm01cel03    active
         DBFS_DG_CD_08_dm01cel03    active
         DBFS_DG_CD_09_dm01cel03    active
         DBFS_DG_CD_10_dm01cel03    active
         DBFS_DG_CD_11_dm01cel03    active
         RECO_CD_00_dm01cel03       active
         RECO_CD_01_dm01cel03       active
         RECO_CD_02_dm01cel03       active
         RECO_CD_03_dm01cel03       active
         RECO_CD_04_dm01cel03       active
         RECO_CD_05_dm01cel03       not present
         RECO_CD_06_dm01cel03       active
         RECO_CD_07_dm01cel03       active
         RECO_CD_08_dm01cel03       active
         RECO_CD_09_dm01cel03       active
         RECO_CD_10_dm01cel03       active
         RECO_CD_11_dm01cel03       active

  • List Physical Disk details

[root@dm01cel03 ~]# cellcli -e list physicaldisk where physicalSerial=PPZ47V detail
         name:                   8:5
         deviceId:               21
         deviceName:             /dev/sdf
         diskType:               HardDisk
         enclosureDeviceId:      8
         errOtherCount:          0
         luns:                   0_5
         makeModel:              “HGST    H7280A520SUN8.0T”
         physicalFirmware:       PD51
         physicalInsertTime:     2018-05-18T10:52:29-05:00
         physicalInterface:      sas
         physicalSerial:         PPZ47V
         physicalSize:           7.1536639072000980377197265625T
         slotNumber:             5
         status:                 normal

  • Let’s try to create the Cell Disk

[root@dm01cel03 ~]# cellcli -e create celldisk CD_05_dm01cel03 lun=0_5

CELL-02526: Pre-existing cell disk: CD_05_dm01cel03

It says the Cell Disk already exists.

  • Let’s try to create the Grid Disk. To create the Grid Disk with proper size, get the Grid Disk size from a good Cell Disk as shown below.

[root@dm01cel03 ~]# cellcli -e list griddisk where celldisk=CD_07_dm01cel03 attributes name,size,offset
         DATA_CD_07_dm01cel03       5.6953125T              32M
         DBFS_DG_CD_07_dm01cel03         33.796875G         7.1192474365234375T
         RECO_CD_07_dm01cel03       1.42388916015625T       5.6953582763671875T

  • Now create the Grid Disk

[root@dm01cel03 ~]# cellcli -e create griddisk DATA_CD_05_dm01cel03 celldisk=CD_05_dm01cel03,size=5.6953125T

CELL-02701: Cannot create grid disk on cell disk CD_05_dm01cel03 because its status is not normal.

Looks like we can’t create the Grid Disk. We will now drop the Cell Disk and recreate it.

  • Drop Cell Disk

CellCLI> drop celldisk CD_05_dm01cel03 force
CellDisk CD_05_dm01cel03 successfully dropped

  • Create Cell Disk

CellCLI> create celldisk CD_05_dm01cel03 lun=0_5
CellDisk CD_05_dm01cel03 successfully created

  • Create Grid Disks with proper sizes

CellCLI> create griddisk DATA_CD_05_dm01cel03 celldisk=CD_05_dm01cel03,size=5.6953125T
GridDisk DATA_CD_05_dm01cel03 successfully created

CellCLI> create griddisk RECO_CD_05_dm01cel03 celldisk=CD_05_dm01cel03,size=1.42388916015625T
GridDisk RECO_CD_05_dm01cel03 successfully created

CellCLI> create griddisk DBFS_DG_CD_05_dm01cel03 celldisk=CD_05_dm01cel03,size=33.796875G
GridDisk DBFS_DG_CD_05_dm01cel03 successfully created

  • List Grid Disks

CellCLI> list griddisk where celldisk=CD_05_dm01cel03 attributes name,size,offset
         DATA_CD_05_dm01cel03       5.6953125T              32M
         DBFS_DG_CD_05_dm01cel03         33.796875G              7.1192474365234375T
         RECO_CD_05_dm01cel03       1.42388916015625T       5.6953582763671875T

CellCLI> list griddisk
         DATA_CD_00_dm01cel03       active
         DATA_CD_01_dm01cel03       active
         DATA_CD_02_dm01cel03       active
         DATA_CD_03_dm01cel03       active
         DATA_CD_04_dm01cel03       active
         DATA_CD_05_dm01cel03       active
         DATA_CD_06_dm01cel03       active
         DATA_CD_07_dm01cel03       active
         DATA_CD_08_dm01cel03       active
         DATA_CD_09_dm01cel03       active
         DATA_CD_10_dm01cel03       active
         DATA_CD_11_dm01cel03       active
         DBFS_DG_CD_02_dm01cel03    active
         DBFS_DG_CD_03_dm01cel03    active
         DBFS_DG_CD_04_dm01cel03    active
         DBFS_DG_CD_05_dm01cel03    active
         DBFS_DG_CD_06_dm01cel03    active
         DBFS_DG_CD_07_dm01cel03    active
         DBFS_DG_CD_08_dm01cel03    active
         DBFS_DG_CD_09_dm01cel03    active
         DBFS_DG_CD_10_dm01cel03    active
         DBFS_DG_CD_11_dm01cel03    active
         RECO_CD_00_dm01cel03       active
         RECO_CD_01_dm01cel03       active
         RECO_CD_02_dm01cel03       active
         RECO_CD_03_dm01cel03       active
         RECO_CD_04_dm01cel03       active
         RECO_CD_05_dm01cel03       active
         RECO_CD_06_dm01cel03       active
         RECO_CD_07_dm01cel03       active
         RECO_CD_08_dm01cel03       active
         RECO_CD_09_dm01cel03       active
         RECO_CD_10_dm01cel03       active
         RECO_CD_11_dm01cel03       active

The grid disks now show as active. We can go ahead and add them to the ASM disk groups manually by connecting to the ASM instance.


  • Log in to the +ASM1 instance and add the new disks. Set the rebalance power to a higher value (11) for a faster rebalance operation.

dm01db01-orcldb1 {/home/oracle}:. oraenv
ORACLE_SID = [orcldb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Wed May 23 09:30:13 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup DATA add failgroup dm01CEL03 disk ‘o/192.168.10.1;192.168.10.2/DATA_CD_05_dm01cel03’ name DATA_CD_05_dm01cel03 rebalance power 11;

Diskgroup altered.

SQL> alter diskgroup RECO add failgroup dm01CEL03 disk ‘o/192.168.10.1;192.168.10.2/RECO_CD_05_dm01cel03’ name RECO_CD_05_dm01cel03 rebalance power 11;

Diskgroup altered.

SQL> alter diskgroup DBFS_DG add failgroup dm01CEL03 disk ‘o/192.168.10.1;192.168.10.2/DBFS_DG_CD_05_dm01cel03’ name DBFS_DG_CD_05_dm01cel03 rebalance power 11;

Diskgroup altered.
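
Before or while the rebalance runs, the new disks' membership and state can be confirmed from the ASM instance (a sketch; ASM stores disk names in upper case):

SQL> select group_number, name, path, mount_status, mode_status, state
     from v$asm_disk
     where name like '%CD_05_DM01CEL03';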

SQL> select a.name,a.total_mb,a.free_mb,a.type,
    decode(a.type,’NORMAL’,a.total_mb/2,’HIGH’,a.total_mb/3) avail_mb,
    decode(a.type,’NORMAL’,a.free_mb/2,’HIGH’,a.free_mb/3) usable_mb,
    count(b.path) cell_disks  from v$asm_diskgroup a, v$asm_disk b
    where a.group_number=b.group_number group by a.name,a.total_mb,a.free_mb,a.type,
    decode(a.type,’NORMAL’,a.total_mb/2,’HIGH’,a.total_mb/3) ,
    decode(a.type,’NORMAL’,a.free_mb/2,’HIGH’,a.free_mb/3)
   order by 2,1;

               Total MB    Free MB          Total MB    Free MB
Disk Group          Raw        Raw TYPE       Usable     Usable     CELL_DISKS
———— ———- ———- —— ———- ———- ———-
DBFS_DG    2076480    2074688 NORMAL    1038240    1037344         60
RECO     107500032   57573496 HIGH     35833344   19191165         72
DATA     429981696  282905064 HIGH    143327232   94301688         72

SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
———— —– —- ———- ———- ———- ———- ———- ———– ——————————————–
           1 REBAL RUN          11         11      85992    6697959      11260         587
           3 REBAL WAIT         11


SQL> select * from gv$asm_operation;

no rows selected


Conclusion

In this article we have learned how to create the cell disk and grid disks manually and add the newly created grid disks to the ASM disk groups. When a hard disk is replaced, the LUN, cell disk and grid disks are normally created automatically and the grid disks are added back to the ASM disk groups without any manual intervention. If the cell disk and grid disks are not created automatically, you must create them manually and add them to the ASM disk groups.


Introduction:
In Oracle databases, it is recommended to multiplex your control file to safeguard against failures such as corruption or accidental removal of the control file.

In this article I will demonstrate how to multiplex (duplicate) a control file within Automatic Storage Management (ASM).

Current Setup

Exadata 8-node RAC using ASM.
Current controlfile is stored in ASM.
Database is using SPFILE.
There are different ASM disk groups available, such as DATA, RECO, DBFS_DG and ACFS_DG.

dm01db01-orcldb1 {/home/oracle}:srvctl status database -d orcldb
Instance orcldb1 is running on node dm01db01
Instance orcldb2 is running on node dm01db02
Instance orcldb3 is running on node dm01db04
Instance orcldb4 is running on node dm01db05
Instance orcldb5 is running on node dm01db07
Instance orcldb6 is running on node dm01db06
Instance orcldb7 is running on node dm01db03
Instance orcldb8 is running on node dm01db08

SQL> show parameter spfile

NAME                                 TYPE        VALUE
———————————— ———– ——————————
spfile                               string      +DATA/ORCLDB/PARAMETERFILE/spfile.431.939367673

SQL> select name from v$controlfile;

NAME
——————————————————————————–
+DATA/ORCLDB/CONTROLFILE/current.384.939367517


dm01db01-+ASM1 {/home/oracle}:asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB   Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  Y         512             512   4096  4194304  10092544       424           315392         -157484              1             N  ACFS_DG/
MOUNTED  NORMAL  Y         512             512   4096  4194304   7208960       532           225280         -112374              1             N  DATA/
MOUNTED  HIGH    N         512             512   4096  4194304  12390400  12012736           450560         3854058              0             N  RECO/
MOUNTED  NORMAL  N         512             512   4096  4194304   2106432   2104640            30528         1037056              0             Y  DBFS_DG/


Steps to multiplex controlfile in ASM When Database is using SPFILE

  • Update the control_files parameter to include the location of the second control file. The second control file will be created in a different disk group, RECO.
SQL> alter system set control_files=’+DATA/ORCLDB/CONTROLFILE/current.384.939367517′,‘+RECO’ scope=spfile sid=’*’;

System altered.

  • Stop the database and start the instance on node 1 in NOMOUNT state.
dm01db01-orcldb1 {/home/oracle}:srvctl status database -d orcldb
Instance orcldb1 is running on node dm01db01
Instance orcldb2 is running on node dm01db02
Instance orcldb3 is running on node dm01db04
Instance orcldb4 is running on node dm01db05
Instance orcldb5 is running on node dm01db07
Instance orcldb6 is running on node dm01db06
Instance orcldb7 is running on node dm01db03
Instance orcldb8 is running on node dm01db08


dm01db01-orcldb1 {/home/oracle}:srvctl stop database -d orcldb

dm01db01-orcldb1 {/home/oracle}:srvctl status database -d orcldb
Instance orcldb1 is not running on node dm01db01
Instance orcldb2 is not running on node dm01db02
Instance orcldb3 is not running on node dm01db04
Instance orcldb4 is not running on node dm01db05
Instance orcldb5 is not running on node dm01db07
Instance orcldb6 is not running on node dm01db06
Instance orcldb7 is not running on node dm01db03
Instance orcldb8 is not running on node dm01db08

dm01db01-orcldb1 {/home/oracle}:srvctl start instance -d orcldb -i orcldb1 -o nomount

SQL> set lines 200
SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME                                                        VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS     SHU
————— —————- —————————————————————- —————– ——— ———— — ———- ——- ————— ———- —
DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO     CON_ID INSTANCE_MO EDITION FAMILY                                                                           DATABASE_TYPE
—————– —————— ——— — ———- ———– ——- ——————————————————————————– —————
              1 orcldb1          dm01db01                                  12.2.0.1.0        09-MAY-17 STARTED      YES          0 STOPPED                 ALLOWED    NO
ACTIVE            UNKNOWN            NORMAL    NO           0 REGULAR     EE                                                                                       RAC

  • Connect to RMAN and duplicate the controlfile
dm01db01-orcldb1 {/home/oracle}:rman target /

Recovery Manager: Release 12.2.0.1.0 – Production on Tue May 9 05:07:45 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCLDB (not mounted)

RMAN> restore controlfile from ‘+DATA/ORCLDB/CONTROLFILE/current.384.939367517’;

Starting restore at 09-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=372 instance=orcldb1 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+DATA/ORCLDB/CONTROLFILE/current.384.939367517
output file name=+RECO/ORCLDB/CONTROLFILE/current.1003.943506471
Finished restore at 09-MAY-17

RMAN> exit

Recovery Manager complete.

  • Update the control_files parameter with the full path and name of both control files.
SQL> alter system set control_files=’+DATA/ORCLDB/CONTROLFILE/current.384.939367517′,’+RECO/ORCLDB/CONTROLFILE/current.1003.943506471′ scope=spfile sid=’*’;

System altered.
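
The new copy can also be confirmed directly in ASM before restarting the database (a sketch run from an ASM environment):

ASMCMD [+] > ls -l +RECO/ORCLDB/CONTROLFILE/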

  • Shut down and start the database
SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> exit

dm01db01-orcldb1 {/home/oracle}:srvctl start database -d orcldb

dm01db01-orcldb1 {/home/oracle}:srvctl status database -d orcldb
Instance orcldb1 is running on node dm01db01
Instance orcldb2 is running on node dm01db02
Instance orcldb3 is running on node dm01db04
Instance orcldb4 is running on node dm01db05
Instance orcldb5 is running on node dm01db07
Instance orcldb6 is running on node dm01db06
Instance orcldb7 is running on node dm01db03
Instance orcldb8 is running on node dm01db08

  • Verify that both control files are now in ASM.
SQL> select name from v$controlfile;

NAME
——————————————————————————–
+DATA/ORCLDB/CONTROLFILE/current.384.939367517
+RECO/ORCLDB/CONTROLFILE/current.1003.943506471

SQL> show parameter control_files

NAME                                 TYPE        VALUE
———————————— ———– ——————————
control_files                        string      +DATA/ORCLDB/CONTROLFILE/current.384.939367517, +RECO/ORCLDB/CONTROLFILE/current.1003.943506471


Conclusion
In this article we have learned how to duplicate a control file in ASM. Multiplexing the control file is recommended to safeguard against control file failures.

Overview

In a previous article I demonstrated how to configure ACFS on an Exadata Database Machine running Oracle Database 12.1.0.2 using standard ASM. Recently I had an opportunity to configure Oracle GoldenGate on Exadata running Oracle Database 12.2.0.1, so I decided to configure ACFS on Exadata running Oracle Database 12.2.0.1 using the Flex ASM architecture.

In this article I will demonstrate how to configure Oracle ACFS on an Exadata Database Machine running Oracle Database 12.2.0.1 using Flex ASM.

For details on ACFS and how to configure ACFS on standard ASM, see my previous article at:
http://netsoftmate.blogspot.in/2017/02/configure-acfs-on-exadata-database.html

Prerequisites

  • Verify ASM configuration on Standard Cluster (8 nodes)
Here the owner of the Grid Home is the oracle user.

dm01db01-+ASM1 {/home/oracle}:id oracle
uid=1000(oracle) gid=1001(oinstall) groups=1001(oinstall),101(fuse),1002(dba),1003(oper),1004(asmdba)

  • Determine the ASM architecture
Here it is a Flex ASM configuration using the default cardinality of 3, and the ASM instances are running on nodes 1, 2 and 4 out of 8 nodes:

dm01db01-+ASM1 {/home/oracle}:asmcmd showclustermode
ASM cluster : Flex mode enabled

dm01db01-orp258c1 {/home/oracle}:srvctl config asm -detail
ASM home: <CRS home>
Password file: +DBFS_DG/orapwASM
Backup of Password file:
ASM listener: LISTENER
ASM is enabled.
ASM is individually enabled on nodes:
ASM is individually disabled on nodes:
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

dm01db01-+ASM1 {/home/oracle}:srvctl status asm
ASM is running on dm01db04,dm01db02,dm01db01
ASM is enabled.
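
Note that the number of ASM instances is controlled by the ASM cardinality; it can be changed with srvctl if you want ASM running on more (or all) nodes, although that is not required for ACFS. A minimal sketch:

dm01db01-+ASM1 {/home/oracle}:srvctl modify asm -count ALL
dm01db01-+ASM1 {/home/oracle}:srvctl config asm -detail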

  • Verify that the ACFS/ADVM modules are loaded on every node in a standard cluster or every Hub Node in a Flex Cluster.
[root@dm01db01 ~]# dcli -g dbs_group -l root 'lsmod | grep oracle'
dm01db01: oracleacfs           4609822  3
dm01db01: oracleadvm            803161  17
dm01db01: oracleoks             660948  2 oracleacfs,oracleadvm
dm01db02: oracleacfs           4609822  3
dm01db02: oracleadvm            803161  17
dm01db02: oracleoks             660948  2 oracleacfs,oracleadvm
dm01db03: oracleacfs           4609822  3
dm01db03: oracleadvm            803161  17
dm01db03: oracleoks             660948  2 oracleacfs,oracleadvm
dm01db04: oracleacfs           4609822  3
dm01db04: oracleadvm            803161  17
dm01db04: oracleoks             660948  2 oracleacfs,oracleadvm
dm01db05: oracleacfs           4609822  3
dm01db05: oracleadvm            803161  17
dm01db05: oracleoks             660948  2 oracleacfs,oracleadvm
dm01db06: oracleacfs           4609822  3
dm01db06: oracleadvm            803161  17
dm01db06: oracleoks             660948  2 oracleacfs,oracleadvm
dm01db07: oracleacfs           4609822  3
dm01db07: oracleadvm            803161  17
dm01db07: oracleoks             660948  2 oracleacfs,oracleadvm
dm01db08: oracleacfs           4609822  3
dm01db08: oracleadvm            803161  17
dm01db08: oracleoks             660948  2 oracleacfs,oracleadvm
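
If the modules are not loaded on a node, they can normally be loaded manually as root using acfsload from the Grid Home (path assumed from this environment):

[root@dm01db01 ~]# /u01/app/12.2.0.1/grid/bin/acfsload start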

  • Check if ASM ADVM proxy is running on every node in a standard cluster or every Hub Node in a Flex Cluster.
dm01db01-+ASM1 {/home/oracle}:srvctl status asm -proxy
ADVM proxy is running on node dm01db06,dm01db05,dm01db04,dm01db03,dm01db02,dm01db01,dm01db08,dm01db07


dm01db01-+ASM1 {/home/oracle}:srvctl config asm -proxy -detail
ASM home: <CRS home>
ADVM proxy is enabled
ADVM proxy is individually enabled on nodes:
ADVM proxy is individually disabled on nodes:

dm01db01-orp258c1 {/home/oracle}:/u01/app/12.2.0.1/grid/bin/crsctl stat res ora.proxy_advm -t
——————————————————————————–
Name           Target  State        Server                   State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.proxy_advm
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
——————————————————————————–


Step I – Create ASM Disk Group for ACFS

  • Log in as the Oracle Grid Infrastructure software owner and start asmca.
dm01db01-orcldb1 {/home/oracle}: id oracle
uid=1000(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1003(oper),1004(asmdba)

  • Set ORACLE_HOME and ORACLE_SID to the ASM environment
dm01db01-orcldb1 {/home/oracle}:. oraenv
ORACLE_SID = [orcldb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle


dm01db01-+ASM1 {/home/oracle}:env | grep ORA
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/12.2.0.1/grid

  • Set DISPLAY
dm01db01-+ASM1 {/home/oracle}:export DISPLAY=10.30.20.13:0.0
  • Start asmca utility
dm01db01-+ASM1 {/home/oracle}:which asmca
/u01/app/12.2.0.1/grid/bin/asmca

dm01db01-+ASM1 {/home/oracle}:asmca
 

ASMCA warming up

ASMCA Home page appears

Click on ASM Instances to view the ASM configuration. We can see that it is a Flex ASM configuration

Click on “Disk Groups” and click on the “Create” button to create a new ASM Disk Group to be used for ACFS

On Create Disk Group page, specify:
    – Disk Group Name: ACFS_DG
    – Redundancy: Normal
    – Select Member Disk: Show Eligible
        – Select all candidate disks to be part of ACFS_DG disk group
        – In Exadata each Storage cell is a failure Group
    – Click on Show Advanced Options, specify:
       – Specify the ASM/DB/ADVM Compatibility
    – Click Ok


ACFS_DG ASM Disk Group creation in progress

We can see our ACFS_DG Disk Group created 
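
For reference, the same disk group could also be created from SQL*Plus connected as SYSASM. This is only a sketch; the disk string 'o/*/ACFS_DG*' assumes grid disks created with an ACFS_DG prefix and must be adjusted to your grid disk names:

SQL> create diskgroup ACFS_DG normal redundancy
     disk 'o/*/ACFS_DG*'
     attribute 'compatible.asm'='12.2.0.1.0',
               'compatible.rdbms'='11.2.0.4.0',
               'compatible.advm'='12.2.0.1.0';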
  • Verify newly created ACFS_DG Disk Group as follows:
SQL> select inst_id, name, total_mb, group_number from gv$asm_diskgroup where name like 'ACFS_DG';

   INST_ID NAME                             TOTAL_MB GROUP_NUMBER
———- —————————— ———- ————
         1 ACFS_DG                          10407936            5
         3 ACFS_DG                          10407936            5
         2 ACFS_DG                          10407936            5

SQL> col value for a30
SQL> col name for a30
SQL> select name, value from v$asm_attribute where GROUP_NUMBER=5 and name like 'compatible%';

NAME                           VALUE
—————————— ——————————
compatible.asm                 12.2.0.1.0
compatible.rdbms               11.2.0.4.0
compatible.advm                12.2.0.1.0

dm01db01-+ASM1 {/home/oracle}:/u01/app/12.2.0.1/grid/bin/crsctl stat res ora.ACFS_DG.dg -t
——————————————————————————–
Name           Target  State        Server                   State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.ACFS_DG.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  OFFLINE      dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  OFFLINE      dm01db05                 STABLE
               ONLINE  OFFLINE      dm01db06                 STABLE
               ONLINE  OFFLINE      dm01db07                 STABLE
               ONLINE  OFFLINE      dm01db08                 STABLE
——————————————————————————–


Step II – Create ASM Volume

Click on Volumes

On Volumes page click on “Create”

On Create Volume page, specify:
    – Volume name: acfsvol
    – Disk Group name: ACFS_DG
    – Size: 4927.56G (it can be anything based on your requirement, within the usable space available)
    – Click Ok


acfsvol creation is in progress

We can see our Volume created
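
Alternatively, the volume can be created from the command line with ASMCMD; a minimal sketch using the same disk group and an approximate size:

ASMCMD> volcreate -G ACFS_DG -s 4927G acfsvol
ASMCMD> volinfo -G ACFS_DG acfsvol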
  •  Verify newly created Volume as follows:
ASMCMD> volinfo --all
Diskgroup Name: ACFS_DG

         Volume Name: ACFSVOL
         Volume Device: /dev/asm/acfsvol-223
         State: ENABLED
         Size (MB): 5045824
         Resize Unit (MB): 64
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /acfs_ogg


dm01db01-+ASM1 {/home/oracle}:/u01/app/12.2.0.1/grid/bin/crsctl stat res ora.ACFS_DG.ACFSVOL.advm -t
——————————————————————————–
Name           Target  State        Server                   State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.ACFS_DG.ACFSVOL.advm
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
——————————————————————————–

dm01db01-+ASM1 {/home/oracle}:dcli -g ~/dbs_group -l oracle 'ls -l /dev/asm/*'
dm01db01: brwxrwx— 1 root asmdba 251, 114177 Mar 23 08:53 /dev/asm/acfsvol-223
dm01db02: brwxrwx— 1 root asmdba 251, 114177 Mar 23 08:53 /dev/asm/acfsvol-223
dm01db03: brwxrwx— 1 root asmdba 251, 114177 Mar 23 08:53 /dev/asm/acfsvol-223
dm01db04: brwxrwx— 1 root asmdba 251, 114177 Mar 23 08:53 /dev/asm/acfsvol-223
dm01db05: brwxrwx— 1 root asmdba 251, 114177 Mar 23 08:53 /dev/asm/acfsvol-223
dm01db06: brwxrwx— 1 root asmdba 251, 114177 Mar 23 08:53 /dev/asm/acfsvol-223
dm01db07: brwxrwx— 1 root asmdba 251, 114177 Mar 23 08:53 /dev/asm/acfsvol-223
dm01db08: brwxrwx— 1 root asmdba 251, 114177 Mar 23 08:53 /dev/asm/acfsvol-223


Step III – Create ASM Cluster File System

Click on “ASM Cluster File Systems” on left pane


On ASM Cluster File Systems page click on “Create”

On Create ASM Cluster File System page, specify:
    – Type of ACFS: Cluster File System
    – Mount point: /acfs_ogg
    – Auto Mount: check
    – User Name: oracle
    – Group Name: oinstall
    – Description: acfs for OGG
    – Select Volume: ACFSVOL – /dev/asm/acfsvol-223 – 4927.5625G
    – Click Ok


ACFS creation in progress
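
For reference, asmca formats the ADVM volume with the ACFS file system behind the scenes; done manually, the equivalent command (run as root on one node) would be roughly:

[root@dm01db01 ~]# /sbin/mkfs -t acfs /dev/asm/acfsvol-223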

Run the ACFS script on Node 1 only to register ACFS with Grid Infrastructure and to mount the ACFS file system.

  • Contents of ACFS script looks like this:
[root@dm01db01 ~]# cat /u01/app/grid/cfgtoollogs/asmca/scripts/acfs_script.sh | more
#!/bin/sh
/u01/app/12.2.0.1/grid/bin/srvctl add filesystem -d /dev/asm/acfsvol-223 -m /acfs_ogg -u oracle -fstype ACFS -description 'acfs for OGG' -autostart ALWAYS
if [ $? = "0" -o $? = "2" ]; then
   /u01/app/12.2.0.1/grid/bin/srvctl start filesystem -d /dev/asm/acfsvol-223
   if [ $? = "0" ]; then
      chown oracle:oinstall /acfs_ogg
      chmod 775 /acfs_ogg
      /u01/app/12.2.0.1/grid/bin/srvctl status filesystem -d /dev/asm/acfsvol-223
      exit 0
   else
      exit $?
   fi
   /u01/app/12.2.0.1/grid/bin/srvctl status filesystem -d /dev/asm/acfsvol-223
fi

  • Execute the script
[root@dm01db01 ~]# /u01/app/grid/cfgtoollogs/asmca/scripts/acfs_script.sh
ACFS file system /acfs_ogg is mounted on nodes
dm01db01,dm01db02,dm01db03,dm01db04,dm01db05,dm01db06,dm01db07,dm01db08

Click Close on the Run ACFS script window

We can see our ACFS file system created
  •  Verify the newly created ACFS file system as follows:
dm01db01-+ASM1 {/home/oracle}:/u01/app/12.2.0.1/grid/bin/crsctl stat res ora.acfs_dg.acfsvol.acfs -t
——————————————————————————–
Name           Target  State        Server                   State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.acfs_dg.acfsvol.acfs
               ONLINE  ONLINE       dm01db01                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db02                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db03                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db04                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db05                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db06                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db07                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db08                 mounted on /acfs_ogg
                                                             ,STABLE
——————————————————————————–

dm01db01-+ASM1 {/home/oracle}:dcli -g ~/dbs_group -l oracle 'df -h /acfs_ogg'
dm01db01: Filesystem            Size  Used Avail Use% Mounted on
dm01db01: /dev/asm/acfsvol-223  4.9T   12G  4.9T   1% /acfs_ogg
dm01db02: Filesystem            Size  Used Avail Use% Mounted on
dm01db02: /dev/asm/acfsvol-223  4.9T   12G  4.9T   1% /acfs_ogg
dm01db03: Filesystem            Size  Used Avail Use% Mounted on
dm01db03: /dev/asm/acfsvol-223  4.9T   12G  4.9T   1% /acfs_ogg
dm01db04: Filesystem            Size  Used Avail Use% Mounted on
dm01db04: /dev/asm/acfsvol-223  4.9T   12G  4.9T   1% /acfs_ogg
dm01db05: Filesystem            Size  Used Avail Use% Mounted on
dm01db05: /dev/asm/acfsvol-223  4.9T   12G  4.9T   1% /acfs_ogg
dm01db06: Filesystem            Size  Used Avail Use% Mounted on
dm01db06: /dev/asm/acfsvol-223  4.9T   12G  4.9T   1% /acfs_ogg
dm01db07: Filesystem            Size  Used Avail Use% Mounted on
dm01db07: /dev/asm/acfsvol-223  4.9T   12G  4.9T   1% /acfs_ogg
dm01db08: Filesystem            Size  Used Avail Use% Mounted on
dm01db08: /dev/asm/acfsvol-223  4.9T   12G  4.9T   1% /acfs_ogg


dm01db01-orp258c1 {/home/oracle}:/u01/app/12.2.0.1/grid/bin/crsctl stat res -t
——————————————————————————–
Name           Target  State        Server                   State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.ACFS_DG.ACFSVOL.advm
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.ACFS_DG.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  OFFLINE      dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  OFFLINE      dm01db05                 STABLE
               ONLINE  OFFLINE      dm01db06                 STABLE
               ONLINE  OFFLINE      dm01db07                 STABLE
               ONLINE  OFFLINE      dm01db08                 STABLE
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.DATA.ACFSVOL1.advm
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  OFFLINE      dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  OFFLINE      dm01db05                 STABLE
               ONLINE  OFFLINE      dm01db06                 STABLE
               ONLINE  OFFLINE      dm01db07                 STABLE
               ONLINE  OFFLINE      dm01db08                 STABLE
ora.DATA1.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  OFFLINE      dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  OFFLINE      dm01db05                 STABLE
               ONLINE  OFFLINE      dm01db06                 STABLE
               ONLINE  OFFLINE      dm01db07                 STABLE
               ONLINE  OFFLINE      dm01db08                 STABLE
ora.DBFS_DG.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               OFFLINE OFFLINE      dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               OFFLINE OFFLINE      dm01db05                 STABLE
               OFFLINE OFFLINE      dm01db06                 STABLE
               OFFLINE OFFLINE      dm01db07                 STABLE
               OFFLINE OFFLINE      dm01db08                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.MGMT_DG.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               OFFLINE OFFLINE      dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               OFFLINE OFFLINE      dm01db05                 STABLE
               OFFLINE OFFLINE      dm01db06                 STABLE
               OFFLINE OFFLINE      dm01db07                 STABLE
               OFFLINE OFFLINE      dm01db08                 STABLE
ora.acfs_dg.acfsvol.acfs
               ONLINE  ONLINE       dm01db01                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db02                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db03                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db04                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db05                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db06                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db07                 mounted on /acfs_ogg
                                                             ,STABLE
               ONLINE  ONLINE       dm01db08                 mounted on /acfs_ogg
                                                             ,STABLE
ora.chad
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.data.acfsvol1.acfs
               ONLINE  ONLINE       dm01db01                 mounted on /acfs_ogg
                                                             1,STABLE
               ONLINE  ONLINE       dm01db02                 mounted on /acfs_ogg
                                                             1,STABLE
               ONLINE  ONLINE       dm01db03                 mounted on /acfs_ogg
                                                             1,STABLE
               ONLINE  ONLINE       dm01db04                 mounted on /acfs_ogg
                                                             1,STABLE
               ONLINE  ONLINE       dm01db05                 mounted on /acfs_ogg
                                                             1,STABLE
               ONLINE  ONLINE       dm01db06                 mounted on /acfs_ogg
                                                             1,STABLE
               ONLINE  ONLINE       dm01db07                 mounted on /acfs_ogg
                                                             1,STABLE
               ONLINE  ONLINE       dm01db08                 mounted on /acfs_ogg
                                                             1,STABLE
ora.net1.network
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.ons
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.proxy_advm
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       dm01db04                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       dm01db01                 192.168.2.1,STABLE
ora.asm
      1        ONLINE  ONLINE       dm01db01                 Started,STABLE
      2        ONLINE  ONLINE       dm01db02                 Started,STABLE
      3        ONLINE  ONLINE       dm01db04                 Started,STABLE
ora.dm01db01.vip
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.dm01db02.vip
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.dm01db03.vip
      1        ONLINE  ONLINE       dm01db03                 STABLE
ora.dm01db04.vip
      1        ONLINE  ONLINE       dm01db04                 STABLE
ora.dm01db05.vip
      1        ONLINE  ONLINE       dm01db05                 STABLE
ora.dm01db06.vip
      1        ONLINE  ONLINE       dm01db06                 STABLE
ora.dm01db07.vip
      1        ONLINE  ONLINE       dm01db07                 STABLE
ora.dm01db08.vip
      1        ONLINE  ONLINE       dm01db08                 STABLE
ora.cvu
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       dm01db01                 Open,STABLE
ora.orcldb.db
      1        ONLINE  ONLINE       dm01db01                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      2        ONLINE  ONLINE       dm01db02                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      3        ONLINE  ONLINE       dm01db04                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      4        ONLINE  ONLINE       dm01db05                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      5        ONLINE  ONLINE       dm01db07                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      6        ONLINE  ONLINE       dm01db06                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      7        ONLINE  ONLINE       dm01db03                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      8        ONLINE  ONLINE       dm01db08                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
ora.prod.db
      1        ONLINE  ONLINE       dm01db01                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      2        ONLINE  ONLINE       dm01db02                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      3        ONLINE  ONLINE       dm01db04                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      4        ONLINE  ONLINE       dm01db05                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      5        ONLINE  ONLINE       dm01db07                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      6        ONLINE  ONLINE       dm01db06                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      7        ONLINE  ONLINE       dm01db03                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
      8        ONLINE  ONLINE       dm01db08                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       dm01db04                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       dm01db01                 STABLE
——————————————————————————–


Conclusion:
In this article we have learned how to quickly configure ACFS on an Exadata Database Machine running Oracle Database 12.2.0.1 with the Flex ASM architecture.




Overview

Oracle ASM disk groups are built on a set of Exadata grid disks. Exadata uses ASM disk groups to store database files.


ASM Provides 3 Types of Redundancy:

External: ASM doesn’t provide redundancy. External redundancy is not an option on Exadata; you must use either normal or high redundancy.

Normal: It provides two-way data mirroring. It maintains two copies of data blocks in separate failure groups

High: It provides three-way data mirroring. It maintains three copies of data blocks in separate failure groups


In this article I will demonstrate how to create ASM Disk Groups on an Exadata Database Machine using ASMCA.

Environment

Exadata Database machine X2-2
8 Compute nodes, 14 Storage cells and 2 IB Switches

Steps to create ASM disk Group using ASMCA utility.


Set the environment variable to Grid Home and start asmca


dm01db01-orcldb1 {/home/oracle}:. oraenv

ORACLE_SID = [orcldb1] ? +ASM1

The Oracle base has been changed from /u01/app/oracle to /u01/app/grid

dm01db01-+ASM1 {/home/oracle}:which asmca
/u01/app/12.2.0.1/grid/bin/asmca

dm01db01-+ASM1 {/home/oracle}:asmca

1. Create ASM Disk Group using Normal Redundancy (3 failure groups)

First we will create an ASM disk group using Normal Redundancy using 3 storage cells.

ASMCA starting

Click on ASM Instances on left pane

Here we are running Flex ASM and the ASM instances are running on nodes 1, 2 and 4

Click on Disk Groups. We can see that currently there are 2 disk groups: one for the OCR/Voting disks and another for the MGMT database repository. To create a new disk group, click on the “Create” button.

Click on “Change Disk Discovery Path”

Enter the following path to discover the grid disks.
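
The exact value is entered in the asmca dialog; on Exadata the grid disks are exposed through the storage cells, so the discovery string typically takes a form such as the following (illustrative, adjust to your grid disk prefix):

o/*/*
o/*/DATA*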

Select desired grid disks to create ASM Disk Group. 
Here I am creating DATA disk group by selecting DATA grid disks from 3 storage cells

Click on “Show Advanced Options” and select the ASM/Database/ADVM compatibility. Finally click Ok to create DATA disk group

DATA disk group creation in progress

We can now see that the DATA disk group is created

Let’s verify the newly created DATA disk group. Right click on the DATA disk group and select “view status details”

We can see that the DATA disk group is mounted on node 1, 2 and 4
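
The same check can also be made from SQL*Plus (illustrative):

SQL> select inst_id, name, state, total_mb from gv$asm_diskgroup where name = 'DATA';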

2. Create ASM Disk Group Using High Redundancy (5 failure groups)


Now let’s create another ASM disk group using High Redundancy with grid disks from 5 storage cells.

Click Create button to create new ASM disk group


Enter the Disk Group name, select the desired grid disks and ASM/Database/ADVM attributes and click ok

DATA1 disk group creation is in progress

We can see that the DATA1 disk group is created



3. Add disks to ASM Disk Group (add grid disks from one storage cell)

Now let’s add disks to the DATA1 disk group. I am going to add DATA grid disks from one storage cell to DATA1.


Right click on the DATA1 disk group and select Add Disks


Select the desired grid disks from storage cell and click ok

Disks are being added to DATA1

We can see the size of DATA1 disk group has increased
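
For reference, the equivalent operation from SQL*Plus would be an ALTER DISKGROUP ... ADD DISK; this is only a sketch, and the cell IP address and grid disk name are placeholders that must match your environment:

SQL> alter diskgroup DATA1 add disk 'o/192.168.10.5/DATA_CD_00_dm01cel04' rebalance power 32;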


 4. Drop disks from ASM Disk Group (remove grid disks from one storage cell)



This time let’s drop disks from the DATA1 disk group. I am going to remove the DATA grid disks from one storage cell used by DATA1.


Right click on DATA1 disk group and select Drop Disks

Process started

Select the desired Grid disks to be dropped from DATA1 disk group and click ok

Disks are being dropped from DATA1

We can see that the DATA1 disk group size has decreased
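
The command-line equivalent would be an ALTER DISKGROUP ... DROP DISK using the ASM disk name (the disk name below is a placeholder), followed by monitoring the rebalance:

SQL> alter diskgroup DATA1 drop disk DATA_CD_00_DM01CEL04 rebalance power 32;
SQL> select * from gv$asm_operation;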




Conclusion:


In this article we have learned about the ASM disk group redundancy levels and how to create an ASM disk group on Exadata using a set of Exadata grid disks. We created disk groups with different redundancy levels and performed a few disk operations such as adding and dropping disks.


Overview
In Exadata ASM disk groups are created from ASM disks which are provisioned as grid disks from Exadata storage cells. The grid disks are created from the celldisks. Normally, there is no free space in celldisks, as all space is used for grid disks, as shown below:

[root@dm01cel01 ~]# cellcli -e "list celldisk where name like 'CD.*' attributes name, size, freespace"
         CD_00_dm01cel01         528.734375G     0
         CD_01_dm01cel01         528.734375G     0
         CD_02_dm01cel01         557.859375G     0
         CD_03_dm01cel01         557.859375G     0
         CD_04_dm01cel01         557.859375G     0
         CD_05_dm01cel01         557.859375G     0
         CD_06_dm01cel01         557.859375G     0
         CD_07_dm01cel01         557.859375G     0
         CD_08_dm01cel01         557.859375G     0
         CD_09_dm01cel01         557.859375G     0
         CD_10_dm01cel01         557.859375G     0
         CD_11_dm01cel01         557.859375G     0


In this article I will demonstrate how to free up some space from the grid disks in the RECO ASM disk group, and then reuse that space to increase the size of the DATA disk group. The free space can be anywhere on the cell disks.


Environment
  • Exadata Full Rack X2-2
  • 8 Compute nodes, 14 Storage cells and 3 IB Switches
  • High Performance Disks (600GB per disk)

1. Free up space on celldisks
Let’s say we want to free up 50GB per disk in the RECO disk group. We first need to reduce the disk size in ASM, and then reduce the grid disk size on the Exadata storage cells. Let’s do that for the RECO disk group.

We start with the RECO grid disks with a size of 105.6875G:

[root@dm01cel01 ~]# cellcli -e "list griddisk where name like 'RECO.*' attributes name, size"
         RECO_dm01_CD_00_dm01cel01       105.6875G
         RECO_dm01_CD_01_dm01cel01       105.6875G
         RECO_dm01_CD_02_dm01cel01       105.6875G
         RECO_dm01_CD_03_dm01cel01       105.6875G
         RECO_dm01_CD_04_dm01cel01       105.6875G
         RECO_dm01_CD_05_dm01cel01       105.6875G
         RECO_dm01_CD_06_dm01cel01       105.6875G
         RECO_dm01_CD_07_dm01cel01       105.6875G
         RECO_dm01_CD_08_dm01cel01       105.6875G
         RECO_dm01_CD_09_dm01cel01       105.6875G
         RECO_dm01_CD_10_dm01cel01       105.6875G
         RECO_dm01_CD_11_dm01cel01       105.6875G


To free up 50 GB per disk, the new grid disk size will be 105.6875 GB - 50 GB = 55.6875 GB = 57024 MB.
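
Before shrinking, it is worth confirming that the RECO disk group has at least that much free space; for example:

SQL> select name, total_mb, free_mb from v$asm_diskgroup where name like 'RECO%';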

2. Reduce size of RECO disks in ASM

dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 18 04:16:57 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup RECO_dm01 resize all size 57024M rebalance power 32;

Diskgroup altered.


The command will trigger the rebalance operation for RECO disk group.

3. Monitor the rebalance with the following command:
 

SQL> set lines 200
SQL> set pages 200
SQL> select * from gv$asm_operation;

   INST_ID GROUP_NUMBER OPERA PASS      STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE                                       CON_ID
———- ———— —– ——— —- ———- ———- ———- ———- ———- ———– ——————————————– ———-
         2            2 REBAL RESYNC    DONE         32                                                                                                               0
         2            2 REBAL RESILVER  DONE         32                                                                                                               0
         2            2 REBAL REBALANCE WAIT         32                                                                                                               0
         2            2 REBAL COMPACT   WAIT         32                                                                                                               0
         6            2 REBAL RESYNC    DONE         32                                                                                                               0
         6            2 REBAL RESILVER  DONE         32                                                                                                               0
         6            2 REBAL REBALANCE WAIT         32                                                                                                               0
         6            2 REBAL COMPACT   WAIT         32                                                                                                               0
         4            2 REBAL RESYNC    DONE         32                                                                                                               0
         4            2 REBAL RESILVER  DONE         32                                                                                                               0
         4            2 REBAL REBALANCE WAIT         32                                                                                                               0
         4            2 REBAL COMPACT   WAIT         32                                                                                                               0
         3            2 REBAL RESYNC    DONE         32                                                                                                               0
         3            2 REBAL RESILVER  DONE         32                                                                                                               0
         3            2 REBAL REBALANCE WAIT         32                                                                                                               0
         3            2 REBAL COMPACT   WAIT         32                                                                                                               0
         8            2 REBAL RESYNC    DONE         32                                                                                                               0
         8            2 REBAL RESILVER  DONE         32                                                                                                               0
         8            2 REBAL REBALANCE WAIT         32                                                                                                               0
         8            2 REBAL COMPACT   WAIT         32                                                                                                               0
         5            2 REBAL RESYNC    DONE         32                                                                                                               0
         5            2 REBAL RESILVER  DONE         32                                                                                                               0
         5            2 REBAL REBALANCE WAIT         32                                                                                                               0
         5            2 REBAL COMPACT   WAIT         32                                                                                                               0
         7            2 REBAL RESYNC    DONE         32                                                                                                               0
         7            2 REBAL RESILVER  DONE         32                                                                                                               0
         7            2 REBAL REBALANCE WAIT         32                                                                                                               0
         7            2 REBAL COMPACT   WAIT         32                                                                                                               0
         1            2 REBAL RESYNC    DONE         32         32          0          0          0           0                                                       0
         1            2 REBAL RESILVER  DONE         32         32          0          0          0           0                                                       0
         1            2 REBAL REBALANCE EST          32         32          0          0          0           0                                                       0
         1            2 REBAL COMPACT   WAIT         32         32          0          0          0           0                                                       0

32 rows selected.

SQL> select * from gv$asm_operation;

no rows selected


Once the query returns “no rows selected”, the rebalance has completed and all disks in the RECO disk group should show the new size.

SQL> select name, total_mb from v$asm_disk_stat where name like 'RECO%';

NAME                             TOTAL_MB
—————————— ———-
RECO_dm01_CD_02_dm01CEL01           57024
RECO_dm01_CD_05_dm01CEL01           57024
RECO_dm01_CD_06_dm01CEL01           57024
RECO_dm01_CD_08_dm01CEL01           57024
RECO_dm01_CD_04_dm01CEL01           57024
RECO_dm01_CD_00_dm01CEL01           57024
RECO_dm01_CD_03_dm01CEL01           57024
RECO_dm01_CD_09_dm01CEL01           57024
RECO_dm01_CD_07_dm01CEL01           57024
RECO_dm01_CD_11_dm01CEL01           57024
RECO_dm01_CD_10_dm01CEL01           57024
RECO_dm01_CD_01_dm01CEL01           57024
RECO_dm01_CD_05_dm01CEL02           57024
RECO_dm01_CD_07_dm01CEL02           57024
RECO_dm01_CD_01_dm01CEL02           57024
RECO_dm01_CD_04_dm01CEL02           57024
RECO_dm01_CD_10_dm01CEL02           57024
RECO_dm01_CD_03_dm01CEL02           57024
RECO_dm01_CD_00_dm01CEL02           57024
RECO_dm01_CD_08_dm01CEL02           57024
RECO_dm01_CD_06_dm01CEL02           57024
RECO_dm01_CD_02_dm01CEL02           57024
RECO_dm01_CD_11_dm01CEL02           57024
RECO_dm01_CD_09_dm01CEL02           57024

RECO_dm01_CD_10_dm01CEL14           57024
RECO_dm01_CD_02_dm01CEL14           57024
RECO_dm01_CD_05_dm01CEL14           57024
RECO_dm01_CD_03_dm01CEL14           57024
RECO_dm01_CD_00_dm01CEL14           57024
RECO_dm01_CD_01_dm01CEL14           57024
RECO_dm01_CD_04_dm01CEL14           57024
RECO_dm01_CD_09_dm01CEL14           57024
RECO_dm01_CD_11_dm01CEL14           57024
RECO_dm01_CD_07_dm01CEL14           57024
RECO_dm01_CD_06_dm01CEL14           57024
RECO_dm01_CD_08_dm01CEL14           57024

168 rows selected.


4. Reduce size of RECO disks in storage cells

[root@dm01cel01 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:12:33 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,004

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL01, RECO_dm01_CD_01_dm01CEL01, RECO_dm01_CD_02_dm01CEL01, RECO_dm01_CD_03_dm01CEL01, RECO_dm01_CD_04_dm01CEL01, RECO_dm01_CD_05_dm01CEL01, RECO_dm01_CD_06_dm01CEL01, RECO_dm01_CD_07_dm01CEL01, RECO_dm01_CD_08_dm01CEL01, RECO_dm01_CD_09_dm01CEL01, RECO_dm01_CD_10_dm01CEL01, RECO_dm01_CD_11_dm01CEL01 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel01 successfully altered
grid disk RECO_dm01_CD_01_dm01cel01 successfully altered
grid disk RECO_dm01_CD_02_dm01cel01 successfully altered
grid disk RECO_dm01_CD_03_dm01cel01 successfully altered
grid disk RECO_dm01_CD_04_dm01cel01 successfully altered
grid disk RECO_dm01_CD_05_dm01cel01 successfully altered
grid disk RECO_dm01_CD_06_dm01cel01 successfully altered
grid disk RECO_dm01_CD_07_dm01cel01 successfully altered
grid disk RECO_dm01_CD_08_dm01cel01 successfully altered
grid disk RECO_dm01_CD_09_dm01cel01 successfully altered
grid disk RECO_dm01_CD_10_dm01cel01 successfully altered
grid disk RECO_dm01_CD_11_dm01cel01 successfully altered

[root@dm01cel01 ~]# ssh dm01cel02
Last login: Sun Feb 28 10:22:27 2016 from dm01cel01
[root@dm01cel02 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:22:50 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 6,999

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL02, RECO_dm01_CD_01_dm01CEL02, RECO_dm01_CD_02_dm01CEL02, RECO_dm01_CD_03_dm01CEL02, RECO_dm01_CD_04_dm01CEL02, RECO_dm01_CD_05_dm01CEL02, RECO_dm01_CD_06_dm01CEL02, RECO_dm01_CD_07_dm01CEL02, RECO_dm01_CD_08_dm01CEL02, RECO_dm01_CD_09_dm01CEL02, RECO_dm01_CD_10_dm01CEL02, RECO_dm01_CD_11_dm01CEL02 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel02 successfully altered
grid disk RECO_dm01_CD_01_dm01cel02 successfully altered
grid disk RECO_dm01_CD_02_dm01cel02 successfully altered
grid disk RECO_dm01_CD_03_dm01cel02 successfully altered
grid disk RECO_dm01_CD_04_dm01cel02 successfully altered
grid disk RECO_dm01_CD_05_dm01cel02 successfully altered
grid disk RECO_dm01_CD_06_dm01cel02 successfully altered
grid disk RECO_dm01_CD_07_dm01cel02 successfully altered
grid disk RECO_dm01_CD_08_dm01cel02 successfully altered
grid disk RECO_dm01_CD_09_dm01cel02 successfully altered
grid disk RECO_dm01_CD_10_dm01cel02 successfully altered
grid disk RECO_dm01_CD_11_dm01cel02 successfully altered

[root@dm01cel01 ~]# ssh dm01cel03
Last login: Mon Mar 28 13:24:31 2016 from dm01db01
[root@dm01cel03 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:23:40 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,599

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL03, RECO_dm01_CD_01_dm01CEL03, RECO_dm01_CD_02_dm01CEL03, RECO_dm01_CD_03_dm01CEL03, RECO_dm01_CD_04_dm01CEL03, RECO_dm01_CD_05_dm01CEL03, RECO_dm01_CD_06_dm01CEL03, RECO_dm01_CD_07_dm01CEL03, RECO_dm01_CD_08_dm01CEL03, RECO_dm01_CD_09_dm01CEL03, RECO_dm01_CD_10_dm01CEL03, RECO_dm01_CD_11_dm01CEL03 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel03 successfully altered
grid disk RECO_dm01_CD_01_dm01cel03 successfully altered
grid disk RECO_dm01_CD_02_dm01cel03 successfully altered
grid disk RECO_dm01_CD_03_dm01cel03 successfully altered
grid disk RECO_dm01_CD_04_dm01cel03 successfully altered
grid disk RECO_dm01_CD_05_dm01cel03 successfully altered
grid disk RECO_dm01_CD_06_dm01cel03 successfully altered
grid disk RECO_dm01_CD_07_dm01cel03 successfully altered
grid disk RECO_dm01_CD_08_dm01cel03 successfully altered
grid disk RECO_dm01_CD_09_dm01cel03 successfully altered
grid disk RECO_dm01_CD_10_dm01cel03 successfully altered
grid disk RECO_dm01_CD_11_dm01cel03 successfully altered

[root@dm01cel03 ~]# ssh dm01cel04
Last login: Sun Feb 28 10:23:17 2016 from dm01cel02
[root@dm01cel04 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:24:27 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,140

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL04, RECO_dm01_CD_01_dm01CEL04, RECO_dm01_CD_02_dm01CEL04, RECO_dm01_CD_03_dm01CEL04, RECO_dm01_CD_04_dm01CEL04, RECO_dm01_CD_05_dm01CEL04, RECO_dm01_CD_06_dm01CEL04, RECO_dm01_CD_07_dm01CEL04, RECO_dm01_CD_08_dm01CEL04, RECO_dm01_CD_09_dm01CEL04, RECO_dm01_CD_10_dm01CEL04, RECO_dm01_CD_11_dm01CEL04 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel04 successfully altered
grid disk RECO_dm01_CD_01_dm01cel04 successfully altered
grid disk RECO_dm01_CD_02_dm01cel04 successfully altered
grid disk RECO_dm01_CD_03_dm01cel04 successfully altered
grid disk RECO_dm01_CD_04_dm01cel04 successfully altered
grid disk RECO_dm01_CD_05_dm01cel04 successfully altered
grid disk RECO_dm01_CD_06_dm01cel04 successfully altered
grid disk RECO_dm01_CD_07_dm01cel04 successfully altered
grid disk RECO_dm01_CD_08_dm01cel04 successfully altered
grid disk RECO_dm01_CD_09_dm01cel04 successfully altered
grid disk RECO_dm01_CD_10_dm01cel04 successfully altered
grid disk RECO_dm01_CD_11_dm01cel04 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL05, RECO_dm01_CD_01_dm01CEL05, RECO_dm01_CD_02_dm01CEL05, RECO_dm01_CD_03_dm01CEL05, RECO_dm01_CD_04_dm01CEL05, RECO_dm01_CD_05_dm01CEL05, RECO_dm01_CD_06_dm01CEL05, RECO_dm01_CD_07_dm01CEL05, RECO_dm01_CD_08_dm01CEL05, RECO_dm01_CD_09_dm01CEL05, RECO_dm01_CD_10_dm01CEL05, RECO_dm01_CD_11_dm01CEL05 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel05 successfully altered
grid disk RECO_dm01_CD_01_dm01cel05 successfully altered
grid disk RECO_dm01_CD_02_dm01cel05 successfully altered
grid disk RECO_dm01_CD_03_dm01cel05 successfully altered
grid disk RECO_dm01_CD_04_dm01cel05 successfully altered
grid disk RECO_dm01_CD_05_dm01cel05 successfully altered
grid disk RECO_dm01_CD_06_dm01cel05 successfully altered
grid disk RECO_dm01_CD_07_dm01cel05 successfully altered
grid disk RECO_dm01_CD_08_dm01cel05 successfully altered
grid disk RECO_dm01_CD_09_dm01cel05 successfully altered
grid disk RECO_dm01_CD_10_dm01cel05 successfully altered
grid disk RECO_dm01_CD_11_dm01cel05 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL06, RECO_dm01_CD_01_dm01CEL06, RECO_dm01_CD_02_dm01CEL06, RECO_dm01_CD_03_dm01CEL06, RECO_dm01_CD_04_dm01CEL06, RECO_dm01_CD_05_dm01CEL06, RECO_dm01_CD_06_dm01CEL06, RECO_dm01_CD_07_dm01CEL06, RECO_dm01_CD_08_dm01CEL06, RECO_dm01_CD_09_dm01CEL06, RECO_dm01_CD_10_dm01CEL06, RECO_dm01_CD_11_dm01CEL06 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel06 successfully altered
grid disk RECO_dm01_CD_01_dm01cel06 successfully altered
grid disk RECO_dm01_CD_02_dm01cel06 successfully altered
grid disk RECO_dm01_CD_03_dm01cel06 successfully altered
grid disk RECO_dm01_CD_04_dm01cel06 successfully altered
grid disk RECO_dm01_CD_05_dm01cel06 successfully altered
grid disk RECO_dm01_CD_06_dm01cel06 successfully altered
grid disk RECO_dm01_CD_07_dm01cel06 successfully altered
grid disk RECO_dm01_CD_08_dm01cel06 successfully altered
grid disk RECO_dm01_CD_09_dm01cel06 successfully altered
grid disk RECO_dm01_CD_10_dm01cel06 successfully altered
grid disk RECO_dm01_CD_11_dm01cel06 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL07, RECO_dm01_CD_01_dm01CEL07, RECO_dm01_CD_02_dm01CEL07, RECO_dm01_CD_03_dm01CEL07, RECO_dm01_CD_04_dm01CEL07, RECO_dm01_CD_05_dm01CEL07, RECO_dm01_CD_06_dm01CEL07, RECO_dm01_CD_07_dm01CEL07, RECO_dm01_CD_08_dm01CEL07, RECO_dm01_CD_09_dm01CEL07, RECO_dm01_CD_10_dm01CEL07, RECO_dm01_CD_11_dm01CEL07 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel07 successfully altered
grid disk RECO_dm01_CD_01_dm01cel07 successfully altered
grid disk RECO_dm01_CD_02_dm01cel07 successfully altered
grid disk RECO_dm01_CD_03_dm01cel07 successfully altered
grid disk RECO_dm01_CD_04_dm01cel07 successfully altered
grid disk RECO_dm01_CD_05_dm01cel07 successfully altered
grid disk RECO_dm01_CD_06_dm01cel07 successfully altered
grid disk RECO_dm01_CD_07_dm01cel07 successfully altered
grid disk RECO_dm01_CD_08_dm01cel07 successfully altered
grid disk RECO_dm01_CD_09_dm01cel07 successfully altered
grid disk RECO_dm01_CD_10_dm01cel07 successfully altered
grid disk RECO_dm01_CD_11_dm01cel07 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL08, RECO_dm01_CD_01_dm01CEL08, RECO_dm01_CD_02_dm01CEL08, RECO_dm01_CD_03_dm01CEL08, RECO_dm01_CD_04_dm01CEL08, RECO_dm01_CD_05_dm01CEL08, RECO_dm01_CD_06_dm01CEL08, RECO_dm01_CD_07_dm01CEL08, RECO_dm01_CD_08_dm01CEL08, RECO_dm01_CD_09_dm01CEL08, RECO_dm01_CD_10_dm01CEL08, RECO_dm01_CD_11_dm01CEL08 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel08 successfully altered
grid disk RECO_dm01_CD_01_dm01cel08 successfully altered
grid disk RECO_dm01_CD_02_dm01cel08 successfully altered
grid disk RECO_dm01_CD_03_dm01cel08 successfully altered
grid disk RECO_dm01_CD_04_dm01cel08 successfully altered
grid disk RECO_dm01_CD_05_dm01cel08 successfully altered
grid disk RECO_dm01_CD_06_dm01cel08 successfully altered
grid disk RECO_dm01_CD_07_dm01cel08 successfully altered
grid disk RECO_dm01_CD_08_dm01cel08 successfully altered
grid disk RECO_dm01_CD_09_dm01cel08 successfully altered
grid disk RECO_dm01_CD_10_dm01cel08 successfully altered
grid disk RECO_dm01_CD_11_dm01cel08 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL09, RECO_dm01_CD_01_dm01CEL09, RECO_dm01_CD_02_dm01CEL09, RECO_dm01_CD_03_dm01CEL09, RECO_dm01_CD_04_dm01CEL09, RECO_dm01_CD_05_dm01CEL09, RECO_dm01_CD_06_dm01CEL09, RECO_dm01_CD_07_dm01CEL09, RECO_dm01_CD_08_dm01CEL09, RECO_dm01_CD_09_dm01CEL09, RECO_dm01_CD_10_dm01CEL09, RECO_dm01_CD_11_dm01CEL09 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel09 successfully altered
grid disk RECO_dm01_CD_01_dm01cel09 successfully altered
grid disk RECO_dm01_CD_02_dm01cel09 successfully altered
grid disk RECO_dm01_CD_03_dm01cel09 successfully altered
grid disk RECO_dm01_CD_04_dm01cel09 successfully altered
grid disk RECO_dm01_CD_05_dm01cel09 successfully altered
grid disk RECO_dm01_CD_06_dm01cel09 successfully altered
grid disk RECO_dm01_CD_07_dm01cel09 successfully altered
grid disk RECO_dm01_CD_08_dm01cel09 successfully altered
grid disk RECO_dm01_CD_09_dm01cel09 successfully altered
grid disk RECO_dm01_CD_10_dm01cel09 successfully altered
grid disk RECO_dm01_CD_11_dm01cel09 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL10, RECO_dm01_CD_01_dm01CEL10, RECO_dm01_CD_02_dm01CEL10, RECO_dm01_CD_03_dm01CEL10, RECO_dm01_CD_04_dm01CEL10, RECO_dm01_CD_05_dm01CEL10, RECO_dm01_CD_06_dm01CEL10, RECO_dm01_CD_07_dm01CEL10, RECO_dm01_CD_08_dm01CEL10, RECO_dm01_CD_09_dm01CEL10, RECO_dm01_CD_10_dm01CEL10, RECO_dm01_CD_11_dm01CEL10 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel10 successfully altered
grid disk RECO_dm01_CD_01_dm01cel10 successfully altered
grid disk RECO_dm01_CD_02_dm01cel10 successfully altered
grid disk RECO_dm01_CD_03_dm01cel10 successfully altered
grid disk RECO_dm01_CD_04_dm01cel10 successfully altered
grid disk RECO_dm01_CD_05_dm01cel10 successfully altered
grid disk RECO_dm01_CD_06_dm01cel10 successfully altered
grid disk RECO_dm01_CD_07_dm01cel10 successfully altered
grid disk RECO_dm01_CD_08_dm01cel10 successfully altered
grid disk RECO_dm01_CD_09_dm01cel10 successfully altered
grid disk RECO_dm01_CD_10_dm01cel10 successfully altered
grid disk RECO_dm01_CD_11_dm01cel10 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL11, RECO_dm01_CD_01_dm01CEL11, RECO_dm01_CD_02_dm01CEL11, RECO_dm01_CD_03_dm01CEL11, RECO_dm01_CD_04_dm01CEL11, RECO_dm01_CD_05_dm01CEL11, RECO_dm01_CD_06_dm01CEL11, RECO_dm01_CD_07_dm01CEL11, RECO_dm01_CD_08_dm01CEL11, RECO_dm01_CD_09_dm01CEL11, RECO_dm01_CD_10_dm01CEL11, RECO_dm01_CD_11_dm01CEL11 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel11 successfully altered
grid disk RECO_dm01_CD_01_dm01cel11 successfully altered
grid disk RECO_dm01_CD_02_dm01cel11 successfully altered
grid disk RECO_dm01_CD_03_dm01cel11 successfully altered
grid disk RECO_dm01_CD_04_dm01cel11 successfully altered
grid disk RECO_dm01_CD_05_dm01cel11 successfully altered
grid disk RECO_dm01_CD_06_dm01cel11 successfully altered
grid disk RECO_dm01_CD_07_dm01cel11 successfully altered
grid disk RECO_dm01_CD_08_dm01cel11 successfully altered
grid disk RECO_dm01_CD_09_dm01cel11 successfully altered
grid disk RECO_dm01_CD_10_dm01cel11 successfully altered
grid disk RECO_dm01_CD_11_dm01cel11 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL12, RECO_dm01_CD_01_dm01CEL12, RECO_dm01_CD_02_dm01CEL12, RECO_dm01_CD_03_dm01CEL12, RECO_dm01_CD_04_dm01CEL12, RECO_dm01_CD_05_dm01CEL12, RECO_dm01_CD_06_dm01CEL12, RECO_dm01_CD_07_dm01CEL12, RECO_dm01_CD_08_dm01CEL12, RECO_dm01_CD_09_dm01CEL12, RECO_dm01_CD_10_dm01CEL12, RECO_dm01_CD_11_dm01CEL12 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel12 successfully altered
grid disk RECO_dm01_CD_01_dm01cel12 successfully altered
grid disk RECO_dm01_CD_02_dm01cel12 successfully altered
grid disk RECO_dm01_CD_03_dm01cel12 successfully altered
grid disk RECO_dm01_CD_04_dm01cel12 successfully altered
grid disk RECO_dm01_CD_05_dm01cel12 successfully altered
grid disk RECO_dm01_CD_06_dm01cel12 successfully altered
grid disk RECO_dm01_CD_07_dm01cel12 successfully altered
grid disk RECO_dm01_CD_08_dm01cel12 successfully altered
grid disk RECO_dm01_CD_09_dm01cel12 successfully altered
grid disk RECO_dm01_CD_10_dm01cel12 successfully altered
grid disk RECO_dm01_CD_11_dm01cel12 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL13, RECO_dm01_CD_01_dm01CEL13, RECO_dm01_CD_02_dm01CEL13, RECO_dm01_CD_03_dm01CEL13, RECO_dm01_CD_04_dm01CEL13, RECO_dm01_CD_05_dm01CEL13, RECO_dm01_CD_06_dm01CEL13, RECO_dm01_CD_07_dm01CEL13, RECO_dm01_CD_08_dm01CEL13, RECO_dm01_CD_09_dm01CEL13, RECO_dm01_CD_10_dm01CEL13, RECO_dm01_CD_11_dm01CEL13 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel13 successfully altered
grid disk RECO_dm01_CD_01_dm01cel13 successfully altered
grid disk RECO_dm01_CD_02_dm01cel13 successfully altered
grid disk RECO_dm01_CD_03_dm01cel13 successfully altered
grid disk RECO_dm01_CD_04_dm01cel13 successfully altered
grid disk RECO_dm01_CD_05_dm01cel13 successfully altered
grid disk RECO_dm01_CD_06_dm01cel13 successfully altered
grid disk RECO_dm01_CD_07_dm01cel13 successfully altered
grid disk RECO_dm01_CD_08_dm01cel13 successfully altered
grid disk RECO_dm01_CD_09_dm01cel13 successfully altered
grid disk RECO_dm01_CD_10_dm01cel13 successfully altered
grid disk RECO_dm01_CD_11_dm01cel13 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL14, RECO_dm01_CD_01_dm01CEL14, RECO_dm01_CD_02_dm01CEL14, RECO_dm01_CD_03_dm01CEL14, RECO_dm01_CD_04_dm01CEL14, RECO_dm01_CD_05_dm01CEL14, RECO_dm01_CD_06_dm01CEL14, RECO_dm01_CD_07_dm01CEL14, RECO_dm01_CD_08_dm01CEL14, RECO_dm01_CD_09_dm01CEL14, RECO_dm01_CD_10_dm01CEL14, RECO_dm01_CD_11_dm01CEL14 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel14 successfully altered
grid disk RECO_dm01_CD_01_dm01cel14 successfully altered
grid disk RECO_dm01_CD_02_dm01cel14 successfully altered
grid disk RECO_dm01_CD_03_dm01cel14 successfully altered
grid disk RECO_dm01_CD_04_dm01cel14 successfully altered
grid disk RECO_dm01_CD_05_dm01cel14 successfully altered
grid disk RECO_dm01_CD_06_dm01cel14 successfully altered
grid disk RECO_dm01_CD_07_dm01cel14 successfully altered
grid disk RECO_dm01_CD_08_dm01cel14 successfully altered
grid disk RECO_dm01_CD_09_dm01cel14 successfully altered
grid disk RECO_dm01_CD_10_dm01cel14 successfully altered
grid disk RECO_dm01_CD_11_dm01cel14 successfully altered


Now there is free space available on the cell disks:

[root@dm01cel01 ~]# cellcli -e "list celldisk where name like 'CD.*' attributes name, size, freespace"
         CD_00_dm01cel01         528.734375G     50G
         CD_01_dm01cel01         528.734375G     50G
         CD_02_dm01cel01         557.859375G     50G
         CD_03_dm01cel01         557.859375G     50G
         CD_04_dm01cel01         557.859375G     50G
         CD_05_dm01cel01         557.859375G     50G
         CD_06_dm01cel01         557.859375G     50G
         CD_07_dm01cel01         557.859375G     50G
         CD_08_dm01cel01         557.859375G     50G
         CD_09_dm01cel01         557.859375G     50G
         CD_10_dm01cel01         557.859375G     50G
         CD_11_dm01cel01         557.859375G     50G
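
The same free-space check can be run against every storage cell in one pass from a database node using dcli. This is only a sketch: dm01db01 and the cell_group file (listing dm01cel01 through dm01cel14) are assumed names, and it relies on the passwordless SSH for root that is normally configured at deployment time.

[root@dm01db01 ~]# # assumed: dm01db01 is a database node, cell_group lists all 14 cells
[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list celldisk attributes name, size, freespace"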


5. Increase size of DATA disks in storage cells

We can now increase the size of the DATA grid disks on the storage cells, and then increase the size of all disks of disk group DATA_dm01 in ASM.

The current DATA grid disk size is 423 GB:

[root@dm01cel01 ~]# cellcli -e "list griddisk where name like 'DATA.*' attributes name, size"
         DATA_dm01_CD_00_dm01cel01       423G
         DATA_dm01_CD_01_dm01cel01       423G
         DATA_dm01_CD_02_dm01cel01       423G
         DATA_dm01_CD_03_dm01cel01       423G
         DATA_dm01_CD_04_dm01cel01       423G
         DATA_dm01_CD_05_dm01cel01       423G
         DATA_dm01_CD_06_dm01cel01       423G
         DATA_dm01_CD_07_dm01cel01       423G
         DATA_dm01_CD_08_dm01cel01       423G
         DATA_dm01_CD_09_dm01cel01       423G
         DATA_dm01_CD_10_dm01cel01       423G
         DATA_dm01_CD_11_dm01cel01       423G


The new grid disk size will be 423 GB + 50 GB = 473 GB, i.e. 484,352 MB.
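
As a quick sanity check, the size value used in the ALTER GRIDDISK commands below can be derived with plain shell arithmetic (ordinary bash, not an Exadata command; it simply restates the calculation above):

[root@dm01db01 ~]# # (423 GB current + 50 GB freed) * 1024 = target size in MB
[root@dm01db01 ~]# echo $(( (423 + 50) * 1024 ))
484352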

Resize the DATA grid disks on all storage cells. On storage cell 1, the command is shown below (a one-pass dcli alternative is sketched after the per-cell commands):

[root@dm01cel01 ~]# cellcli
CellCLI: Release 12.1.2.1.1 - Production on Wed Jan 18 05:39:49 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,004

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL01, DATA_dm01_CD_01_dm01CEL01, DATA_dm01_CD_02_dm01CEL01, DATA_dm01_CD_03_dm01CEL01, DATA_dm01_CD_04_dm01CEL01, DATA_dm01_CD_05_dm01CEL01, DATA_dm01_CD_06_dm01CEL01, DATA_dm01_CD_07_dm01CEL01, DATA_dm01_CD_08_dm01CEL01, DATA_dm01_CD_09_dm01CEL01, DATA_dm01_CD_10_dm01CEL01, DATA_dm01_CD_11_dm01CEL01 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel01 successfully altered
grid disk DATA_dm01_CD_01_dm01cel01 successfully altered
grid disk DATA_dm01_CD_02_dm01cel01 successfully altered
grid disk DATA_dm01_CD_03_dm01cel01 successfully altered
grid disk DATA_dm01_CD_04_dm01cel01 successfully altered
grid disk DATA_dm01_CD_05_dm01cel01 successfully altered
grid disk DATA_dm01_CD_06_dm01cel01 successfully altered
grid disk DATA_dm01_CD_07_dm01cel01 successfully altered
grid disk DATA_dm01_CD_08_dm01cel01 successfully altered
grid disk DATA_dm01_CD_09_dm01cel01 successfully altered
grid disk DATA_dm01_CD_10_dm01cel01 successfully altered
grid disk DATA_dm01_CD_11_dm01cel01 successfully altered

[root@dm01cel01 ~]# ssh dm01cel02
Last login: Wed Jan 18 05:22:46 2017 from dm01cel01
[root@dm01cel02 ~]# cellcli
CellCLI: Release 12.1.2.1.1 - Production on Wed Jan 18 05:41:01 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 6,999

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL02, DATA_dm01_CD_01_dm01CEL02, DATA_dm01_CD_02_dm01CEL02, DATA_dm01_CD_03_dm01CEL02, DATA_dm01_CD_04_dm01CEL02, DATA_dm01_CD_05_dm01CEL02, DATA_dm01_CD_06_dm01CEL02, DATA_dm01_CD_07_dm01CEL02, DATA_dm01_CD_08_dm01CEL02, DATA_dm01_CD_09_dm01CEL02, DATA_dm01_CD_10_dm01CEL02, DATA_dm01_CD_11_dm01CEL02 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel02 successfully altered
grid disk DATA_dm01_CD_01_dm01cel02 successfully altered
grid disk DATA_dm01_CD_02_dm01cel02 successfully altered
grid disk DATA_dm01_CD_03_dm01cel02 successfully altered
grid disk DATA_dm01_CD_04_dm01cel02 successfully altered
grid disk DATA_dm01_CD_05_dm01cel02 successfully altered
grid disk DATA_dm01_CD_06_dm01cel02 successfully altered
grid disk DATA_dm01_CD_07_dm01cel02 successfully altered
grid disk DATA_dm01_CD_08_dm01cel02 successfully altered
grid disk DATA_dm01_CD_09_dm01cel02 successfully altered
grid disk DATA_dm01_CD_10_dm01cel02 successfully altered
grid disk DATA_dm01_CD_11_dm01cel02 successfully altered

[root@dm01cel02 ~]# ssh dm01cel03
Last login: Wed Jan 18 05:23:38 2017 from dm01cel01
[root@dm01cel03 ~]# cellcli
CellCLI: Release 12.1.2.1.1 - Production on Wed Jan 18 05:41:49 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,599

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL03, DATA_dm01_CD_01_dm01CEL03, DATA_dm01_CD_02_dm01CEL03, DATA_dm01_CD_03_dm01CEL03, DATA_dm01_CD_04_dm01CEL03, DATA_dm01_CD_05_dm01CEL03, DATA_dm01_CD_06_dm01CEL03, DATA_dm01_CD_07_dm01CEL03, DATA_dm01_CD_08_dm01CEL03, DATA_dm01_CD_09_dm01CEL03, DATA_dm01_CD_10_dm01CEL03, DATA_dm01_CD_11_dm01CEL03 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel03 successfully altered
grid disk DATA_dm01_CD_01_dm01cel03 successfully altered
grid disk DATA_dm01_CD_02_dm01cel03 successfully altered
grid disk DATA_dm01_CD_03_dm01cel03 successfully altered
grid disk DATA_dm01_CD_04_dm01cel03 successfully altered
grid disk DATA_dm01_CD_05_dm01cel03 successfully altered
grid disk DATA_dm01_CD_06_dm01cel03 successfully altered
grid disk DATA_dm01_CD_07_dm01cel03 successfully altered
grid disk DATA_dm01_CD_08_dm01cel03 successfully altered
grid disk DATA_dm01_CD_09_dm01cel03 successfully altered
grid disk DATA_dm01_CD_10_dm01cel03 successfully altered
grid disk DATA_dm01_CD_11_dm01cel03 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL04, DATA_dm01_CD_01_dm01CEL04, DATA_dm01_CD_02_dm01CEL04, DATA_dm01_CD_03_dm01CEL04, DATA_dm01_CD_04_dm01CEL04, DATA_dm01_CD_05_dm01CEL04, DATA_dm01_CD_06_dm01CEL04, DATA_dm01_CD_07_dm01CEL04, DATA_dm01_CD_08_dm01CEL04, DATA_dm01_CD_09_dm01CEL04, DATA_dm01_CD_10_dm01CEL04, DATA_dm01_CD_11_dm01CEL04 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel04 successfully altered
grid disk DATA_dm01_CD_01_dm01cel04 successfully altered
grid disk DATA_dm01_CD_02_dm01cel04 successfully altered
grid disk DATA_dm01_CD_03_dm01cel04 successfully altered
grid disk DATA_dm01_CD_04_dm01cel04 successfully altered
grid disk DATA_dm01_CD_05_dm01cel04 successfully altered
grid disk DATA_dm01_CD_06_dm01cel04 successfully altered
grid disk DATA_dm01_CD_07_dm01cel04 successfully altered
grid disk DATA_dm01_CD_08_dm01cel04 successfully altered
grid disk DATA_dm01_CD_09_dm01cel04 successfully altered
grid disk DATA_dm01_CD_10_dm01cel04 successfully altered
grid disk DATA_dm01_CD_11_dm01cel04 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL05, DATA_dm01_CD_01_dm01CEL05, DATA_dm01_CD_02_dm01CEL05, DATA_dm01_CD_03_dm01CEL05, DATA_dm01_CD_04_dm01CEL05, DATA_dm01_CD_05_dm01CEL05, DATA_dm01_CD_06_dm01CEL05, DATA_dm01_CD_07_dm01CEL05, DATA_dm01_CD_08_dm01CEL05, DATA_dm01_CD_09_dm01CEL05, DATA_dm01_CD_10_dm01CEL05, DATA_dm01_CD_11_dm01CEL05 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel05 successfully altered
grid disk DATA_dm01_CD_01_dm01cel05 successfully altered
grid disk DATA_dm01_CD_02_dm01cel05 successfully altered
grid disk DATA_dm01_CD_03_dm01cel05 successfully altered
grid disk DATA_dm01_CD_04_dm01cel05 successfully altered
grid disk DATA_dm01_CD_05_dm01cel05 successfully altered
grid disk DATA_dm01_CD_06_dm01cel05 successfully altered
grid disk DATA_dm01_CD_07_dm01cel05 successfully altered
grid disk DATA_dm01_CD_08_dm01cel05 successfully altered
grid disk DATA_dm01_CD_09_dm01cel05 successfully altered
grid disk DATA_dm01_CD_10_dm01cel05 successfully altered
grid disk DATA_dm01_CD_11_dm01cel05 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL06, DATA_dm01_CD_01_dm01CEL06, DATA_dm01_CD_02_dm01CEL06, DATA_dm01_CD_03_dm01CEL06, DATA_dm01_CD_04_dm01CEL06, DATA_dm01_CD_05_dm01CEL06, DATA_dm01_CD_06_dm01CEL06, DATA_dm01_CD_07_dm01CEL06, DATA_dm01_CD_08_dm01CEL06, DATA_dm01_CD_09_dm01CEL06, DATA_dm01_CD_10_dm01CEL06, DATA_dm01_CD_11_dm01CEL06 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel06 successfully altered
grid disk DATA_dm01_CD_01_dm01cel06 successfully altered
grid disk DATA_dm01_CD_02_dm01cel06 successfully altered
grid disk DATA_dm01_CD_03_dm01cel06 successfully altered
grid disk DATA_dm01_CD_04_dm01cel06 successfully altered
grid disk DATA_dm01_CD_05_dm01cel06 successfully altered
grid disk DATA_dm01_CD_06_dm01cel06 successfully altered
grid disk DATA_dm01_CD_07_dm01cel06 successfully altered
grid disk DATA_dm01_CD_08_dm01cel06 successfully altered
grid disk DATA_dm01_CD_09_dm01cel06 successfully altered
grid disk DATA_dm01_CD_10_dm01cel06 successfully altered
grid disk DATA_dm01_CD_11_dm01cel06 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL07, DATA_dm01_CD_01_dm01CEL07, DATA_dm01_CD_02_dm01CEL07, DATA_dm01_CD_03_dm01CEL07, DATA_dm01_CD_04_dm01CEL07, DATA_dm01_CD_05_dm01CEL07, DATA_dm01_CD_06_dm01CEL07, DATA_dm01_CD_07_dm01CEL07, DATA_dm01_CD_08_dm01CEL07, DATA_dm01_CD_09_dm01CEL07, DATA_dm01_CD_10_dm01CEL07, DATA_dm01_CD_11_dm01CEL07 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel07 successfully altered
grid disk DATA_dm01_CD_01_dm01cel07 successfully altered
grid disk DATA_dm01_CD_02_dm01cel07 successfully altered
grid disk DATA_dm01_CD_03_dm01cel07 successfully altered
grid disk DATA_dm01_CD_04_dm01cel07 successfully altered
grid disk DATA_dm01_CD_05_dm01cel07 successfully altered
grid disk DATA_dm01_CD_06_dm01cel07 successfully altered
grid disk DATA_dm01_CD_07_dm01cel07 successfully altered
grid disk DATA_dm01_CD_08_dm01cel07 successfully altered
grid disk DATA_dm01_CD_09_dm01cel07 successfully altered
grid disk DATA_dm01_CD_10_dm01cel07 successfully altered
grid disk DATA_dm01_CD_11_dm01cel07 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL08, DATA_dm01_CD_01_dm01CEL08, DATA_dm01_CD_02_dm01CEL08, DATA_dm01_CD_03_dm01CEL08, DATA_dm01_CD_04_dm01CEL08, DATA_dm01_CD_05_dm01CEL08, DATA_dm01_CD_06_dm01CEL08, DATA_dm01_CD_07_dm01CEL08, DATA_dm01_CD_08_dm01CEL08, DATA_dm01_CD_09_dm01CEL08, DATA_dm01_CD_10_dm01CEL08, DATA_dm01_CD_11_dm01CEL08 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel08 successfully altered
grid disk DATA_dm01_CD_01_dm01cel08 successfully altered
grid disk DATA_dm01_CD_02_dm01cel08 successfully altered
grid disk DATA_dm01_CD_03_dm01cel08 successfully altered
grid disk DATA_dm01_CD_04_dm01cel08 successfully altered
grid disk DATA_dm01_CD_05_dm01cel08 successfully altered
grid disk DATA_dm01_CD_06_dm01cel08 successfully altered
grid disk DATA_dm01_CD_07_dm01cel08 successfully altered
grid disk DATA_dm01_CD_08_dm01cel08 successfully altered
grid disk DATA_dm01_CD_09_dm01cel08 successfully altered
grid disk DATA_dm01_CD_10_dm01cel08 successfully altered
grid disk DATA_dm01_CD_11_dm01cel08 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL09, DATA_dm01_CD_01_dm01CEL09, DATA_dm01_CD_02_dm01CEL09, DATA_dm01_CD_03_dm01CEL09, DATA_dm01_CD_04_dm01CEL09, DATA_dm01_CD_05_dm01CEL09, DATA_dm01_CD_06_dm01CEL09, DATA_dm01_CD_07_dm01CEL09, DATA_dm01_CD_08_dm01CEL09, DATA_dm01_CD_09_dm01CEL09, DATA_dm01_CD_10_dm01CEL09, DATA_dm01_CD_11_dm01CEL09 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel09 successfully altered
grid disk DATA_dm01_CD_01_dm01cel09 successfully altered
grid disk DATA_dm01_CD_02_dm01cel09 successfully altered
grid disk DATA_dm01_CD_03_dm01cel09 successfully altered
grid disk DATA_dm01_CD_04_dm01cel09 successfully altered
grid disk DATA_dm01_CD_05_dm01cel09 successfully altered
grid disk DATA_dm01_CD_06_dm01cel09 successfully altered
grid disk DATA_dm01_CD_07_dm01cel09 successfully altered
grid disk DATA_dm01_CD_08_dm01cel09 successfully altered
grid disk DATA_dm01_CD_09_dm01cel09 successfully altered
grid disk DATA_dm01_CD_10_dm01cel09 successfully altered
grid disk DATA_dm01_CD_11_dm01cel09 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL10, DATA_dm01_CD_01_dm01CEL10, DATA_dm01_CD_02_dm01CEL10, DATA_dm01_CD_03_dm01CEL10, DATA_dm01_CD_04_dm01CEL10, DATA_dm01_CD_05_dm01CEL10, DATA_dm01_CD_06_dm01CEL10, DATA_dm01_CD_07_dm01CEL10, DATA_dm01_CD_08_dm01CEL10, DATA_dm01_CD_09_dm01CEL10, DATA_dm01_CD_10_dm01CEL10, DATA_dm01_CD_11_dm01CEL10 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel10 successfully altered
grid disk DATA_dm01_CD_01_dm01cel10 successfully altered
grid disk DATA_dm01_CD_02_dm01cel10 successfully altered
grid disk DATA_dm01_CD_03_dm01cel10 successfully altered
grid disk DATA_dm01_CD_04_dm01cel10 successfully altered
grid disk DATA_dm01_CD_05_dm01cel10 successfully altered
grid disk DATA_dm01_CD_06_dm01cel10 successfully altered
grid disk DATA_dm01_CD_07_dm01cel10 successfully altered
grid disk DATA_dm01_CD_08_dm01cel10 successfully altered
grid disk DATA_dm01_CD_09_dm01cel10 successfully altered
grid disk DATA_dm01_CD_10_dm01cel10 successfully altered
grid disk DATA_dm01_CD_11_dm01cel10 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL11, DATA_dm01_CD_01_dm01CEL11, DATA_dm01_CD_02_dm01CEL11, DATA_dm01_CD_03_dm01CEL11, DATA_dm01_CD_04_dm01CEL11, DATA_dm01_CD_05_dm01CEL11, DATA_dm01_CD_06_dm01CEL11, DATA_dm01_CD_07_dm01CEL11, DATA_dm01_CD_08_dm01CEL11, DATA_dm01_CD_09_dm01CEL11, DATA_dm01_CD_10_dm01CEL11, DATA_dm01_CD_11_dm01CEL11 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel11 successfully altered
grid disk DATA_dm01_CD_01_dm01cel11 successfully altered
grid disk DATA_dm01_CD_02_dm01cel11 successfully altered
grid disk DATA_dm01_CD_03_dm01cel11 successfully altered
grid disk DATA_dm01_CD_04_dm01cel11 successfully altered
grid disk DATA_dm01_CD_05_dm01cel11 successfully altered
grid disk DATA_dm01_CD_06_dm01cel11 successfully altered
grid disk DATA_dm01_CD_07_dm01cel11 successfully altered
grid disk DATA_dm01_CD_08_dm01cel11 successfully altered
grid disk DATA_dm01_CD_09_dm01cel11 successfully altered
grid disk DATA_dm01_CD_10_dm01cel11 successfully altered
grid disk DATA_dm01_CD_11_dm01cel11 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL12, DATA_dm01_CD_01_dm01CEL12, DATA_dm01_CD_02_dm01CEL12, DATA_dm01_CD_03_dm01CEL12, DATA_dm01_CD_04_dm01CEL12, DATA_dm01_CD_05_dm01CEL12, DATA_dm01_CD_06_dm01CEL12, DATA_dm01_CD_07_dm01CEL12, DATA_dm01_CD_08_dm01CEL12, DATA_dm01_CD_09_dm01CEL12, DATA_dm01_CD_10_dm01CEL12, DATA_dm01_CD_11_dm01CEL12 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel12 successfully altered
grid disk DATA_dm01_CD_01_dm01cel12 successfully altered
grid disk DATA_dm01_CD_02_dm01cel12 successfully altered
grid disk DATA_dm01_CD_03_dm01cel12 successfully altered
grid disk DATA_dm01_CD_04_dm01cel12 successfully altered
grid disk DATA_dm01_CD_05_dm01cel12 successfully altered
grid disk DATA_dm01_CD_06_dm01cel12 successfully altered
grid disk DATA_dm01_CD_07_dm01cel12 successfully altered
grid disk DATA_dm01_CD_08_dm01cel12 successfully altered
grid disk DATA_dm01_CD_09_dm01cel12 successfully altered
grid disk DATA_dm01_CD_10_dm01cel12 successfully altered
grid disk DATA_dm01_CD_11_dm01cel12 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL13, DATA_dm01_CD_01_dm01CEL13, DATA_dm01_CD_02_dm01CEL13, DATA_dm01_CD_03_dm01CEL13, DATA_dm01_CD_04_dm01CEL13, DATA_dm01_CD_05_dm01CEL13, DATA_dm01_CD_06_dm01CEL13, DATA_dm01_CD_07_dm01CEL13, DATA_dm01_CD_08_dm01CEL13, DATA_dm01_CD_09_dm01CEL13, DATA_dm01_CD_10_dm01CEL13, DATA_dm01_CD_11_dm01CEL13 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel13 successfully altered
grid disk DATA_dm01_CD_01_dm01cel13 successfully altered
grid disk DATA_dm01_CD_02_dm01cel13 successfully altered
grid disk DATA_dm01_CD_03_dm01cel13 successfully altered
grid disk DATA_dm01_CD_04_dm01cel13 successfully altered
grid disk DATA_dm01_CD_05_dm01cel13 successfully altered
grid disk DATA_dm01_CD_06_dm01cel13 successfully altered
grid disk DATA_dm01_CD_07_dm01cel13 successfully altered
grid disk DATA_dm01_CD_08_dm01cel13 successfully altered
grid disk DATA_dm01_CD_09_dm01cel13 successfully altered
grid disk DATA_dm01_CD_10_dm01cel13 successfully altered
grid disk DATA_dm01_CD_11_dm01cel13 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL14, DATA_dm01_CD_01_dm01CEL14, DATA_dm01_CD_02_dm01CEL14, DATA_dm01_CD_03_dm01CEL14, DATA_dm01_CD_04_dm01CEL14, DATA_dm01_CD_05_dm01CEL14, DATA_dm01_CD_06_dm01CEL14, DATA_dm01_CD_07_dm01CEL14, DATA_dm01_CD_08_dm01CEL14, DATA_dm01_CD_09_dm01CEL14, DATA_dm01_CD_10_dm01CEL14, DATA_dm01_CD_11_dm01CEL14 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel14 successfully altered
grid disk DATA_dm01_CD_01_dm01cel14 successfully altered
grid disk DATA_dm01_CD_02_dm01cel14 successfully altered
grid disk DATA_dm01_CD_03_dm01cel14 successfully altered
grid disk DATA_dm01_CD_04_dm01cel14 successfully altered
grid disk DATA_dm01_CD_05_dm01cel14 successfully altered
grid disk DATA_dm01_CD_06_dm01cel14 successfully altered
grid disk DATA_dm01_CD_07_dm01cel14 successfully altered
grid disk DATA_dm01_CD_08_dm01cel14 successfully altered
grid disk DATA_dm01_CD_09_dm01cel14 successfully altered
grid disk DATA_dm01_CD_10_dm01cel14 successfully altered
grid disk DATA_dm01_CD_11_dm01cel14 successfully altered
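
Instead of logging in to each cell and naming all twelve grid disks, the resize can also be pushed to every cell in a single pass with dcli. The sketch below is an alternative to the per-cell commands above, not what was run here; it assumes the standard cell_group file and that every DATA grid disk carries the DATA_dm01 prefix (verify the ALTER GRIDDISK ALL PREFIX syntax against your cell software version before relying on it):

[root@dm01db01 ~]# # assumed: cell_group lists all 14 cells; prefix matches the grid disk names above
[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e alter griddisk all prefix=DATA_dm01 size=484352M"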

6. Verify the new size

CellCLI> list griddisk where name like 'DATA.*' attributes name, size
         DATA_dm01_CD_00_dm01cel14       473G
         DATA_dm01_CD_01_dm01cel14       473G
         DATA_dm01_CD_02_dm01cel14       473G
         DATA_dm01_CD_03_dm01cel14       473G
         DATA_dm01_CD_04_dm01cel14       473G
         DATA_dm01_CD_05_dm01cel14       473G
         DATA_dm01_CD_06_dm01cel14       473G
         DATA_dm01_CD_07_dm01cel14       473G
         DATA_dm01_CD_08_dm01cel14       473G
         DATA_dm01_CD_09_dm01cel14       473G
         DATA_dm01_CD_10_dm01cel14       473G
         DATA_dm01_CD_11_dm01cel14       473G


7. Increase size of DATA disks in ASM

SQL> alter diskgroup DATA_dm01 resize all rebalance power 32;

Diskgroup altered.


Note that there is no need to specify the new disk size: ASM picks it up from the underlying grid disks. The REBALANCE POWER clause is optional.

The command triggers a rebalance operation for disk group DATA_dm01.

Monitor the rebalance with the following command:

SQL> set lines 200
SQL> set pages 200
SQL> select * from gv$asm_operation;

no rows selected
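
While the rebalance is still running, gv$asm_operation returns one row per instance performing work. A more targeted query over the standard columns of that view shows the progress and the estimated minutes remaining:

SQL> select inst_id, operation, state, power, sofar, est_work, est_minutes from gv$asm_operation;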


Once the query returns "no rows selected", the rebalance has completed and all disks in disk group DATA_dm01 should show the new size:

SQL> select name, total_mb/1024 "GB" from v$asm_disk_stat where name like 'DATA%';

NAME                                   GB
—————————— ———-
DATA_dm01_CD_08_dm01CEL01             473
DATA_dm01_CD_01_dm01CEL01             473
DATA_dm01_CD_07_dm01CEL01             473
DATA_dm01_CD_09_dm01CEL01             473
DATA_dm01_CD_04_dm01CEL01             473
DATA_dm01_CD_05_dm01CEL01             473
DATA_dm01_CD_10_dm01CEL01             473
DATA_dm01_CD_03_dm01CEL01             473
DATA_dm01_CD_02_dm01CEL01             423
DATA_dm01_CD_11_dm01CEL01             473
DATA_dm01_CD_06_dm01CEL01             473
DATA_dm01_CD_00_dm01CEL01             473

DATA_dm01_CD_03_dm01CEL14             473
DATA_dm01_CD_08_dm01CEL14             473
DATA_dm01_CD_00_dm01CEL14             473
DATA_dm01_CD_05_dm01CEL14             473
DATA_dm01_CD_09_dm01CEL14             473
DATA_dm01_CD_02_dm01CEL14             473
DATA_dm01_CD_07_dm01CEL14             473
DATA_dm01_CD_10_dm01CEL14             473
DATA_dm01_CD_01_dm01CEL14             473
DATA_dm01_CD_11_dm01CEL14             473
DATA_dm01_CD_04_dm01CEL14             473
DATA_dm01_CD_06_dm01CEL14             473

168 rows selected.
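
The overall capacity increase can also be confirmed at the disk group level. The query below assumes the disk group is named DATA_dm01, as in the resize command above (ASM stores the name in upper case):

SQL> select name, total_mb/1024 "TOTAL_GB", free_mb/1024 "FREE_GB" from v$asm_diskgroup where name = 'DATA_DM01';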


Conclusion

In this article we have learned how to resize ASM disks on an Exadata Database Machine. If there is free space on the Exadata cell disks, growing a disk group takes two steps: increase the grid disk size on all storage cells, then increase the disk size in ASM, which requires only a single ASM rebalance operation. If there is no free space on the cell disks, space must first be freed by shrinking another disk group. To shrink, reduce the ASM disk size first and then reduce the grid disk size; to grow, increase the grid disk size first and then increase the ASM disk size.