
If you are looking for the highest levels of performance for your Oracle database, Oracle Exadata is an outstanding solution. It delivers first-class performance for mixed workloads: data warehousing (DW), analytics, and online transaction processing (OLTP). With a variety of deployment options, it lets you run your Oracle Database and other data workloads anywhere you need, whether that is on-premises or in the Oracle Cloud. Oracle Exadata storage is cutting-edge technology that is simple to use and manage, and it provides mission-critical availability and reliability. Here are five reasons why you should run Oracle Database on Oracle Exadata.




1. Bespoke for Oracle Database


Following a one-size-fits-all approach to building your database infrastructure will hamper your business growth. Over time, databases grow, which means your business needs more servers, more storage, and more labor to manage them. As a result, management costs go up and the risk of errors rises, ultimately slowing your business down. That's why every business, big or small, needs an approach that is engineered to cater to critical database workloads.
Oracle Exadata is purpose-built to handle these critical database workloads. It provides high storage bandwidth to seamlessly run Oracle Database and other data workloads. As part of the Oracle Engineered Systems family, Oracle Exadata offers a highly integrated platform that delivers more power with less hardware. It eliminates IT complexity while providing greater performance, scalability, security, and data protection.




2. Increase Employee Productivity


Timely delivery of the data that supports business operations, and a shorter time to deliver new business applications, translate directly into revenue. Oracle Exadata's consolidated and optimized infrastructure platform for database workloads lets IT staff spend less time on everyday operations and more on other IT development efforts. Accidental outages also have less effect on employees and business operations when there are fewer database-related failures.
The consolidated Oracle Exadata platform provides an economical base for Oracle Database operations. It increases employee productivity and helps grow revenue with less cost and complexity. With performance gains of up to 100X, accessing data becomes easy and you can engage with customers quickly. With the same power, you can consolidate your databases onto a single platform and deliver more than four times the density.




3. Achieve Operational Benefits


Businesses that rely on multiple vendors may struggle to manage a complex database infrastructure. Maintaining and managing each database and server overstrains IT staff, and deploying new applications can take longer than it should. You may also need IT specialists to look after each different component. As the number of applications and their associated databases increases, your administration costs go up, and so does your data center footprint.
Oracle Exadata delivers greater database and application performance with less hardware and fewer licenses. As an Oracle Engineered System, it means easier upgrades, tuning, patching, monitoring, and support, so you can keep costs under control. Exadata systems process transactions faster, complete queries in less time, and reduce load, backup, and recovery times.




4. Maximize Availability


Strong data security and database uptime are critical factors that directly impact business operations and revenue growth. Database sprawl makes it tough to establish dependable security plans and policies for sensitive data: there are too many points of control to monitor and maintain, a larger footprint is more vulnerable to attack, and there is rarely enough budget for the specialist skills needed to manage it all.
That is why businesses use Oracle Exadata to run their most important Oracle Database and other data workloads. With software and hardware engineered to work together, Oracle Exadata minimizes system downtime through built-in resiliency and redundancy. With Oracle Maximum Availability Architecture (MAA), you get the highest achievable uptime. The benefits include less business impact from outages, less IT effort spent managing downtime, and more reliable application and developer productivity.




5. Invest in the Cloud


Businesses want a simple and comprehensive cloud strategy. Ideally, they invest in an architecture that offers a clear pathway to a cloud consumption model for the future: a plan flexible enough to mix and match on-premises deployments with a well-matched public cloud option, whether that is for development and testing or for ensuring business continuity.
Oracle Exadata offers the best of both worlds for the database and the business. Businesses can either purchase and manage Oracle Exadata on-premises or choose the Oracle Database Exadata Cloud Service. The cloud service is equivalent to an on-premises Oracle Exadata, just with a different consumption model. That is why the Oracle Engineered Systems portfolio is such a powerful set of options: all are designed on the same architecture, with the same benefits. All you need to do is choose which consumption model works best for you.




About Netsoftmate Technologies Inc.

Netsoftmate is an Oracle Gold Partner and a boutique IT services company specializing in installation, implementation, and 24/7 support for Oracle Engineered Systems such as Oracle Exadata, Oracle Database Appliance, Oracle ZDLRA, Oracle ZFS Storage, and Oracle Private Cloud Appliance. Apart from OES, we have specialized teams of experts providing round-the-clock remote database administration support for any type of database, as well as cyber security compliance and auditing services.

 

Feel free to get in touch with us by signing up via the link below:


Priority Support for Oracle Engineered Systems | Netsoftmate


 

Backing up the file systems on Oracle Exadata compute nodes can be a daunting task if you are unaware of the prerequisites and best practices. To help you back up your file systems effectively and with the fewest discrepancies, we bring you this step-by-step guide on how to back up your file systems using an Oracle Exadata snapshot-based backup of a compute node to an NFS share.

 

It is very important to take a file system backup on Oracle Exadata compute nodes before making any major changes to the operating system or critical software. On Oracle Exadata compute nodes, the / (root) and /u01 file systems contain the operating system and the GI/DB software respectively. These are the most critical file systems on Exadata compute nodes.

 

By default, the / (root) and /u01 file systems are sized 30GB and 100GB respectively.
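On a live system the actual sizes are often larger than these defaults; the node in this article shows 59GB and 197GB in the df output further down. A quick way to confirm the logical volume sizes, assuming the standard VGExaDb volume group with the LVDbSys1/LVDbOra1 volumes, is:

[root@dm01db01 ~]# lvs -o lv_name,lv_size VGExaDb   # list LV names and sizes in the default Exadata volume group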

 

Scenarios in which we must take a file system backup include:

 

  • Operating System patching or upgrade
  • Grid Infrastructure patching or upgrade
  • Database patching or upgrade
  • Operating System configuration changes
  • Increasing or decreasing file system size

 

In this article, we will demonstrate how to back up the file systems on Oracle Exadata compute nodes running the Linux operating system to an NFS share on external storage.


Environment Details

 

Exadata Model            : X4-2 Half Rack HP 4 TB
Exadata Components       : 4 compute nodes, 7 storage cells, 2 IB switches
Exadata Storage cells    : DBM01CEL01 – DBM01CEL07
Exadata Compute nodes    : DBM01DB01 – DBM01DB04
Exadata Software Version : 19.2.3.0
Exadata DB Version       : 11.2.0.4.180717

 

 

Prerequisites

 

  • Root user access on the compute nodes
  • An NFS mount with sufficient space for storing the file system backups
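Before mounting anything, it is also worth confirming that the NFS server exports the share to this node. A minimal check, assuming the 10.10.10.1 server used later in this article and that the showmount utility is installed:

[root@dm01db01 ~]# showmount -e 10.10.10.1   # the export list should include /nfs/backup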




Current root and /u01 file system sizes

 

[root@dm01db01 ~]# df -h / /u01

Filesystem Size  Used Avail Use% Mounted on

/dev/mapper/VGExaDb-LVDbSys1   59G   39G   18G  70% /

/dev/mapper/VGExaDb-LVDbOra1  197G  171G   17G  92% /u01



NFS share details

 

10.10.10.1:/nfs/backup/

 

1. As the root user, log in to the Exadata compute node you wish to back up

 

[root@dm01db01 ~]# id root

uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)


 

2. Create a mount point and mount the NFS share

 

[root@dm01db01 ~]# mkdir -p /mnt/backup

[root@dm01db01 ~]# mount -t nfs -o rw,intr,soft,proto=tcp,nolock 10.10.10.1:/nfs/backup/ /mnt/backup
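A quick sanity check that the share is mounted and writable before taking snapshots (the test file name here is just an example):

[root@dm01db01 ~]# df -h /mnt/backup
[root@dm01db01 ~]# touch /mnt/backup/.write_test && rm -f /mnt/backup/.write_test   # verify write access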

 


3. Determine the file system type of the root and /u01 file systems

 

[root@dm01db01 ~]# mount -l

sysfs on /sys type sysfs (rw,relatime)

proc on /proc type proc (rw,relatime)

devtmpfs on /dev type devtmpfs (rw,nosuid,size=131804372k,nr_inodes=32951093,mode=755)

securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)

tmpfs on /dev/shm type tmpfs (rw,size=264225792k)

/dev/mapper/VGExaDb-LVDbSys1 on / type ext3 (rw,relatime,errors=continue,barrier=1,data=ordered) [DBSYS]

/dev/mapper/VGExaDb-LVDbOra1 on /u01 type ext3 (rw,nodev,relatime,errors=continue,barrier=1,data=ordered) [DBORA]

/dev/sda1 on /boot type ext3 (rw,nodev,relatime,errors=continue,barrier=1,data=ordered) [BOOT]

 

 

4. Note down the file system type for root and /u01 (here, ext3). Take a snapshot of the root and /u01 logical volumes, label the snapshots, and mount them

 

[root@dm01db01 ~]# lvcreate -L5G -s -n root_snap /dev/VGExaDb/LVDbSys1

  Logical volume “root_snap” created.

 

[root@dm01db01 ~]# lvcreate -L5G -s -n u01_snap /dev/VGExaDb/LVDbOra1

  Logical volume “u01_snap” created.



[root@dm01db01 ~]# e2label /dev/VGExaDb/root_snap DBSYS_SNAP

[root@dm01db01 ~]# e2label /dev/VGExaDb/u01_snap DBORA_SNAP

 

[root@dm01db01 ~]# mkdir -p /mnt/snap/root

[root@dm01db01 ~]# mkdir -p /mnt/snap/u01

 

 

[root@dm01db01 ~]# mount /dev/VGExaDb/root_snap /mnt/snap/root -t ext3

[root@dm01db01 ~]# mount /dev/VGExaDb/u01_snap /mnt/snap/u01 -t ext3

 

 

[root@dm01db01 ~]# df -h /mnt/snap/root

[root@dm01db01 ~]# df -h /mnt/snap/u01

 

 

5. Change to the snapshot directory and create the backup file

 

[root@dm01db01 ~]# cd /mnt/snap

 

[root@dm01db01 ~]# tar -pjcvf /mnt/backup/mybackup.tar.bz2 * /boot --exclude /mnt/backup/mybackup.tar.bz2 > /tmp/backup_tar.stdout 2> /tmp/backup_tar.stderr

 

6. Monitor the /tmp/backup_tar.stderr file for errors. Errors such as failing to tar open sockets can be ignored; see the sketch below
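For example, one way to filter the log down to anything other than the ignorable socket warnings (the exact message text can vary between tar versions):

[root@dm01db01 ~]# grep -v 'socket ignored' /tmp/backup_tar.stderr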

 

7. Unmount the snapshots and remove the snapshots for the root and /u01 directories

 

[root@dm01db01 ~]# cd /

 

[root@dm01db01 /]# umount /mnt/snap/u01

 

[root@dm01db01 /]# umount /mnt/snap/root

 

[root@dm01db01 /]# df -h /mnt/snap/u01

[root@dm01db01 /]# df -h /mnt/snap/root

 

[root@dm01db01 /]# ls -l /mnt/snap

 

[root@dm01db01 /]# rm -rf /mnt/snap

 

 

[root@dm01db01 /]# lvremove /dev/VGExaDb/u01_snap

Do you really want to remove active logical volume u01_snap? [y/n]: y

  Logical volume “u01_snap” successfully removed


 

[root@dm01db01 /]# lvremove /dev/VGExaDb/root_snap

Do you really want to remove active logical volume root_snap? [y/n]: y

  Logical volume “root_snap” successfully removed


 

8. Unmount the NFS share

 

[root@dm01db01 /]# umount /mnt/backup


 

Repeat the above steps on the remaining compute nodes to back up their root and /u01 file systems; a sketch of how to drive this from one node follows below
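As a convenience, the NFS mount step can be run from node 1 over ssh. A minimal sketch, assuming root ssh equivalence between the compute nodes and the host names used in this article (the snapshot, tar, and cleanup commands from the steps above would then be run on each node the same way):

[root@dm01db01 ~]# for h in dm01db02 dm01db03 dm01db04; do
>   ssh $h 'mkdir -p /mnt/backup && mount -t nfs -o rw,intr,soft,proto=tcp,nolock 10.10.10.1:/nfs/backup/ /mnt/backup'
> done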



We hope this article helps you smoothly back up the file systems on your Oracle Exadata compute nodes running Oracle Linux using an NFS share. Stay tuned for more step-by-step guides on implementing and using Oracle database systems, exclusively on Netsoftmate.




Netsoftmate provides best-in-class services for Oracle database management, covering all complex database products. Sign up for a free 30-minute call by clicking the link below:



Click here and fill in the contact us form for a free 30-minute consultation

 


Exadata Database Machine comes with three ASM disk groups:
+DATA for database files
+RECO for online redo log and archive log files
+DBFS_DG for cluster configuration files such as the OCR and voting disks

In a customized environment, customers can choose to have more than three disk groups, but three is the recommended layout. The DATA and RECO disk groups are typically sized at 80%/20% or 40%/60% of the overall storage capacity, depending on the backup strategy. If you host several databases, the +DATA disk group can fill up very quickly.
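A quick way to check how full each disk group is before deciding to move a database (an illustrative query; run it from any instance):

SQL> select name, total_mb, free_mb, round((total_mb-free_mb)/total_mb*100,1) pct_used from v$asm_diskgroup;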

In this article we will demonstrate how to move a database from the +DATA disk group to the +RECO disk group.


Steps to move a database from +DATA to +RECO ASM Disk Group:


Step 1: Get the ASM Disk Information

SQL> select state,name from v$asm_diskgroup;

STATE       NAME
———– ——————————
MOUNTED     RECO
MOUNTED     DBFS_DG
MOUNTED     DATA


Step 2: Get the Database files details

SQL> select name, open_mode,database_role from gv$database;

NAME      OPEN_MODE            DATABASE_ROLE
——— ——————– —————-
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY

SQL> select name from v$controlfile;

NAME
——————————————————————————–
+DATA/dbm01/controlfile/current.256.976374731

SQL> select name from v$datafile;

NAME
——————————————————————————–
+DATA/dbm01/datafile/system.259.976374739
+DATA/dbm01/datafile/sysaux.260.976374743
+DATA/dbm01/datafile/undotbs1.261.976374745
+DATA/dbm01/datafile/undotbs2.263.976374753
+DATA/dbm01/datafile/undotbs3.264.976374755
+DATA/dbm01/datafile/undotbs4.265.976374757
+DATA/dbm01/datafile/users.266.976374757

7 rows selected.

SQL> select name from v$tempfile;

NAME
——————————————————————————–
+DATAC1/dbm01/tempfile/temp.262.976375229

SQL>

SQL> select member from v$logfile;

MEMBER
——————————————————————————–
+DATA/dbm01/onlinelog/group_1.257.976374733
+DATA/dbm01/onlinelog/group_2.258.976374735
+DATA/dbm01/onlinelog/group_7.267.976375073
+DATA/dbm01/onlinelog/group_8.268.976375075
+DATA/dbm01/onlinelog/group_5.269.976375079
+DATA/dbm01/onlinelog/group_6.270.976375083
+DATA/dbm01/onlinelog/group_3.271.976375085
+DATA/dbm01/onlinelog/group_4.272.976375087
+DATA/dbm01/onlinelog/group_9.274.976375205
+DATA/dbm01/onlinelog/group_10.275.976375209
+DATA/dbm01/onlinelog/group_11.276.976375211
+DATA/dbm01/onlinelog/group_12.277.976375215
+DATA/dbm01/onlinelog/group_13.278.976375217
+DATA/dbm01/onlinelog/group_14.279.976375219
+DATA/dbm01/onlinelog/group_15.280.976375223
+DATA/dbm01/onlinelog/group_16.281.976375225

16 rows selected.

SQL> select filename from v$block_change_tracking;

FILENAME
——————————————————————–
+DATA/dbm01/changetracking/ctf.282.976375227


Step 3: Back up the database using the RMAN backup-as-copy command as shown below. Here we are copying the database to the +RECO ASM Disk Group.

RMAN> run {
allocate channel c1 device type disk;
allocate channel c2 device type disk;
allocate channel c3 device type disk;
allocate channel c4 device type disk;
allocate channel c5 device type disk;
allocate channel c6 device type disk;
allocate channel c7 device type disk;
allocate channel c8 device type disk;
backup as copy database include current controlfile format '+RECO';
release channel c1;
release channel c2;
release channel c3;
release channel c4;
release channel c5;
release channel c6;
release channel c7;
release channel c8;
}

released channel: ORA_DISK_1
allocated channel: c1
channel c1: SID=1189 instance=dbm011 device type=DISK

allocated channel: c2
channel c2: SID=1321 instance=dbm011 device type=DISK

allocated channel: c3
channel c3: SID=1343 instance=dbm011 device type=DISK

allocated channel: c4
channel c4: SID=1387 instance=dbm011 device type=DISK

allocated channel: c5
channel c5: SID=1497 instance=dbm011 device type=DISK

allocated channel: c6
channel c6: SID=1519 instance=dbm011 device type=DISK

allocated channel: c7
channel c7: SID=1541 instance=dbm011 device type=DISK

allocated channel: c8
channel c8: SID=1563 instance=dbm011 device type=DISK

Starting backup at 26-MAY-18
channel c1: starting datafile copy
input datafile file number=00001 name=+DATA/dbm01/datafile/system.259.976374739
channel c2: starting datafile copy
input datafile file number=00002 name=+DATA/dbm01/datafile/sysaux.260.976374743
channel c3: starting datafile copy
input datafile file number=00003 name=+DATA/dbm01/datafile/undotbs1.261.976374745
channel c4: starting datafile copy
input datafile file number=00004 name=+DATA/dbm01/datafile/undotbs2.263.976374753
channel c5: starting datafile copy
input datafile file number=00005 name=+DATA/dbm01/datafile/undotbs3.264.976374755
channel c6: starting datafile copy
input datafile file number=00006 name=+DATA/dbm01/datafile/undotbs4.265.976374757
channel c7: starting datafile copy
input datafile file number=00007 name=+DATA/dbm01/datafile/users.266.976374757
channel c8: starting datafile copy
copying current control file
output file name=+RECO/dbm01/datafile/users.284.977121353 tag=TAG20180526T063551 RECID=16 STAMP=977121353
channel c7: datafile copy complete, elapsed time: 00:00:02
output file name=+RECO/dbm01/controlfile/backup.283.977121353 tag=TAG20180526T063551 RECID=17 STAMP=977121353
channel c8: datafile copy complete, elapsed time: 00:00:01
output file name=+RECO/dbm01/datafile/system.291.977121353 tag=TAG20180526T063551 RECID=18 STAMP=977121389
channel c1: datafile copy complete, elapsed time: 00:00:46
output file name=+RECO/dbm01/datafile/sysaux.290.977121353 tag=TAG20180526T063551 RECID=23 STAMP=977121392
channel c2: datafile copy complete, elapsed time: 00:00:46
output file name=+RECO/dbm01/datafile/undotbs1.289.977121353 tag=TAG20180526T063551 RECID=21 STAMP=977121392
channel c3: datafile copy complete, elapsed time: 00:00:46
output file name=+RECO/dbm01/datafile/undotbs2.288.977121353 tag=TAG20180526T063551 RECID=19 STAMP=977121392
channel c4: datafile copy complete, elapsed time: 00:00:46
output file name=+RECO/dbm01/datafile/undotbs3.287.977121353 tag=TAG20180526T063551 RECID=20 STAMP=977121392
channel c5: datafile copy complete, elapsed time: 00:00:46
output file name=+RECO/dbm01/datafile/undotbs4.286.977121353 tag=TAG20180526T063551 RECID=22 STAMP=977121392
channel c6: datafile copy complete, elapsed time: 00:00:46
Finished backup at 26-MAY-18

Starting Control File and SPFILE Autobackup at 26-MAY-18
piece handle=+RECO/dbm01/autobackup/2018_05_26/s_977121397.282.977121399 comment=NONE
Finished Control File and SPFILE Autobackup at 26-MAY-18

released channel: c1

released channel: c2

released channel: c3

released channel: c4

released channel: c5

released channel: c6

released channel: c7

released channel: c8


Step 4: Verify the RMAN Database Copy using RMAN

RMAN> list copy of database;

List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
——- —- – ————— ———- —————
18      1    A 26-MAY-18       1330853    26-MAY-18
        Name: +RECO/dbm01/datafile/system.291.977121353
        Tag: TAG20180526T063551

9       1    A 26-MAY-18       1330410    26-MAY-18
        Name: +RECO/dbm01/datafile/system.286.977120961
        Tag: TAG20180526T062919

3       1    A 26-MAY-18       1330155    26-MAY-18
        Name: +RECO/dbm01/datafile/system.280.977120795
        Tag: TAG20180526T062633

23      2    A 26-MAY-18       1330856    26-MAY-18
        Name: +RECO/dbm01/datafile/sysaux.290.977121353
        Tag: TAG20180526T063551

12      2    A 26-MAY-18       1330413    26-MAY-18
        Name: +RECO/dbm01/datafile/sysaux.287.977120961
        Tag: TAG20180526T062919

2       2    A 26-MAY-18       1330158    26-MAY-18
        Name: +RECO/dbm01/datafile/sysaux.281.977120795
        Tag: TAG20180526T062633

21      3    A 26-MAY-18       1330859    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs1.289.977121353
        Tag: TAG20180526T063551

11      3    A 26-MAY-18       1330416    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs1.288.977120961
        Tag: TAG20180526T062919

4       3    A 26-MAY-18       1330154    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs1.279.977120795
        Tag: TAG20180526T062633

19      4    A 26-MAY-18       1330862    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs2.288.977121353
        Tag: TAG20180526T063551

10      4    A 26-MAY-18       1330419    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs2.289.977120961
        Tag: TAG20180526T062919

1       4    A 26-MAY-18       1330153    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs2.278.977120795
        Tag: TAG20180526T062633

20      5    A 26-MAY-18       1330865    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs3.287.977121353
        Tag: TAG20180526T063551

13      5    A 26-MAY-18       1330422    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs3.290.977120961
        Tag: TAG20180526T062919

7       5    A 26-MAY-18       1330184    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs3.282.977120829
        Tag: TAG20180526T062633

22      6    A 26-MAY-18       1330868    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs4.286.977121353
        Tag: TAG20180526T063551

15      6    A 26-MAY-18       1330425    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs4.291.977120961
        Tag: TAG20180526T062919

6       6    A 26-MAY-18       1330187    26-MAY-18
        Name: +RECO/dbm01/datafile/undotbs4.283.977120829
        Tag: TAG20180526T062633

16      7    A 26-MAY-18       1330871    26-MAY-18
        Name: +RECO/dbm01/datafile/users.284.977121353
        Tag: TAG20180526T063551

8       7    A 26-MAY-18       1330428    26-MAY-18
        Name: +RECO/dbm01/datafile/users.292.977120961
        Tag: TAG20180526T062919

5       7    A 26-MAY-18       1330190    26-MAY-18
        Name: +RECO/dbm01/datafile/users.284.977120829
        Tag: TAG20180526T062633

RMAN> list copy of controlfile;

List of Control File Copies
===========================

Key     S Completion Time Ckp SCN    Ckp Time
——- – ————— ———- —————
17      A 26-MAY-18       1330876    26-MAY-18
        Name: +RECO/dbm01/controlfile/backup.283.977121353
        Tag: TAG20180526T063551

14      A 26-MAY-18       1330434    26-MAY-18
        Name: +RECO/dbm01/controlfile/backup.293.977120965
        Tag: TAG20180526T062919


Step 5: Verify the RMAN Database Copy backup in ASM

[oracle@dm01db01 ~]$ asmcmd -p
ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH    N         512   4096  4194304  272154624  271558968          6479872        88359698              0             N  DATA/
MOUNTED  HIGH    N         512   4096  4194304    2404640    2402468            68704          777921              0             Y  DBFS_DG/
MOUNTED  NORMAL  N         512   4096  4194304   45389568   45183784           540352        22321716              0             N  RECO/

ASMCMD [+] > cd +RECO

ASMCMD [+RECO] > ls -l
Type  Redund  Striped  Time             Sys  Name
                                        Y    DBM01/
ASMCMD [+RECO] > cd DBM01
ASMCMD [+RECO/DBM01] > ls -l
Type         Redund  Striped  Time             Sys  Name
                                               Y    ARCHIVELOG/
                                               Y    AUTOBACKUP/
                                               Y    CONTROLFILE/
                                               Y    DATAFILE/
                                               N    snapcf_dbm01.f => +RECO/DBM01/CONTROLFILE/Backup.285.977120961
ASMCMD [+RECO/DBM01] > ls -l DATAFILE/
Type      Redund  Striped  Time             Sys  Name
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    SYSAUX.290.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    SYSTEM.291.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    UNDOTBS1.289.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    UNDOTBS2.288.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    UNDOTBS3.287.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    UNDOTBS4.286.977121353
DATAFILE  MIRROR  COARSE   MAY 26 06:00:00  Y    USERS.284.977121353
ASMCMD [+RECO/DBM01] > ls -l CONTROLFILE/
Type         Redund  Striped  Time             Sys  Name
CONTROLFILE  HIGH    FINE     MAY 26 06:00:00  Y    Backup.283.977121353
CONTROLFILE  HIGH    FINE     MAY 26 06:00:00  Y    Backup.285.977120961
CONTROLFILE  HIGH    FINE     MAY 26 06:00:00  Y    Backup.293.977120965

Step 6: Switch the database to the RMAN backup copy. This command switches the database from the +DATA to the +RECO ASM Disk Group.

[oracle@dm01db01 ~]$ srvctl stop database -d dbm01

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 26 07:22:06 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 2.5655E+10 bytes
Fixed Size                  2265224 bytes
Variable Size            4160753528 bytes
Database Buffers         2.1341E+10 bytes
Redo Buffers              151113728 bytes
Database mounted.

[oracle@dm01db01 ~]$ rman target /

Recovery Manager: Release 11.2.0.4.0 – Production on Sat May 26 07:23:09 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBM01 (DBID=1180720008, not open)

RMAN> switch database to copy;

using target database control file instead of recovery catalog
datafile 1 switched to datafile copy “+RECO/dbm01/datafile/system.291.977121353”
datafile 2 switched to datafile copy “+RECO/dbm01/datafile/sysaux.290.977121353”
datafile 3 switched to datafile copy “+RECO/dbm01/datafile/undotbs1.289.977121353”
datafile 4 switched to datafile copy “+RECO/dbm01/datafile/undotbs2.288.977121353”
datafile 5 switched to datafile copy “+RECO/dbm01/datafile/undotbs3.287.977121353”
datafile 6 switched to datafile copy “+RECO/dbm01/datafile/undotbs4.286.977121353”
datafile 7 switched to datafile copy “+RECO/dbm01/datafile/users.284.977121353”

RMAN> recover database;

Starting recover at 26-MAY-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=991 instance=dbm011 device type=DISK

starting media recovery
media recovery complete, elapsed time: 00:00:02

Finished recover at 26-MAY-18

RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 05/26/2018 07:25:06
ORA-01139: RESETLOGS option only valid after an incomplete database recovery

RMAN> alter database open;

database opened

[oracle@dm01db01 ~]$ srvctl stop database -d dbm01

[oracle@dm01db01 ~]$ srvctl start database -d dbm01

[oracle@dm01db01 ~]$ srvctl status database -d dbm01
Instance dbm011 is running on node dm01db01
Instance dbm012 is running on node dm01db02
Instance dbm013 is running on node dm01db03
Instance dbm014 is running on node dm01db04

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 26 07:28:11 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select name, open_mode,database_role from gv$database;

NAME      OPEN_MODE            DATABASE_ROLE
——— ——————– —————-
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY


Step 7: Move Temp and online redo log files

SQL> alter database tempfile '+DATAC1/dbm01/tempfile/temp.262.976375229' drop;

Database altered.

SQL> alter tablespace TEMP add tempfile '+RECO' SIZE 1024M;

Tablespace altered.

SQL> alter database add logfile member '+RECO' to group 1;

Database altered.

SQL> alter database add logfile member '+RECO' to group 2;

Database altered.

SQL> alter database add logfile member '+RECO' to group 3;

Database altered.

SQL> alter database add logfile member '+RECO' to group 4;

Database altered.

SQL> alter database add logfile member '+RECO' to group 5;

Database altered.

SQL> alter database add logfile member '+RECO' to group 6;

Database altered.

SQL> alter database add logfile member '+RECO' to group 7;

Database altered.

SQL> alter database add logfile member '+RECO' to group 8;

Database altered.

SQL> alter database add logfile member '+RECO' to group 9;

Database altered.

SQL> alter database add logfile member '+RECO' to group 10;

Database altered.

SQL> alter database add logfile member '+RECO' to group 11;

Database altered.

SQL> alter database add logfile member '+RECO' to group 12;

Database altered.

SQL> alter database add logfile member '+RECO' to group 13;

Database altered.

SQL> alter database add logfile member '+RECO' to group 14;

Database altered.

SQL> alter database add logfile member '+RECO' to group 15;

Database altered.

SQL> alter database add logfile member '+RECO' to group 16;

Database altered.

SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_1.257.976374733';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_2.258.976374735';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_7.267.976375073';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_8.268.976375075';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_5.269.976375079';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_6.270.976375083';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_3.271.976375085';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_4.272.976375087';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_9.274.976375205';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_10.275.976375209';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_11.276.976375211';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_12.277.976375215';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_13.278.976375217';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_14.279.976375219';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_15.280.976375223';
SQL> alter database drop logfile member '+DATA/dbm01/onlinelog/group_16.281.976375225';
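One caveat worth noting: a redo log member belonging to the CURRENT or ACTIVE group cannot be dropped. If a drop fails for that reason, check the group status and force a log switch (on RAC, archiving the current log switches all threads), then retry:

SQL> select group#, thread#, status from v$log;
SQL> alter system archive log current;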


Step 8: Move control file to +RECO Disk Group

[oracle@dm01db01 ~]$ srvctl stop database -d dbm01
[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 26 08:53:35 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 2.5655E+10 bytes
Fixed Size                  2265224 bytes
Variable Size            4429188984 bytes
Database Buffers         2.1072E+10 bytes
Redo Buffers              151113728 bytes
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[oracle@dm01db01 ~]$ rman target /

Recovery Manager: Release 11.2.0.4.0 – Production on Sat May 26 08:53:59 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBM01 (not mounted)

RMAN> restore controlfile to '+RECO' from '+DATA/dbm01/controlfile/current.256.976374731';

Starting restore at 26-MAY-18
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=969 instance=dbm011 device type=DISK

channel ORA_DISK_1: copied control file copy
Finished restore at 26-MAY-18

RMAN> exit

Recovery Manager complete.

[oracle@dm01db01 ~]$ . oraenv
ORACLE_SID = [dbm011] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@dm01db01 ~]$ asmcmd -p

ASMCMD [+] > cd +RECO

ASMCMD [+RECO] > ls -l
Type  Redund  Striped  Time             Sys  Name
                                        Y    DBM01/
ASMCMD [+RECO] > cd DBM01
ASMCMD [+RECO/DBM01] > ls -l
Type         Redund  Striped  Time             Sys  Name
                                               Y    ARCHIVELOG/
                                               Y    AUTOBACKUP/
                                               Y    CHANGETRACKING/
                                               Y    CONTROLFILE/
                                               Y    DATAFILE/
                                               Y    ONLINELOG/
                                               Y    TEMPFILE/
                                               N    snapcf_dbm01.f => +RECO/DBM01/CONTROLFILE/Backup.285.977120961
ASMCMD [+RECO/DBM01] > cd CONTROLFILE/

ASMCMD [+RECO/DBM01/CONTROLFILE] > ls -l
Type         Redund  Striped  Time             Sys  Name
CONTROLFILE  HIGH    FINE     MAY 26 06:00:00  Y    Backup.283.977121353
CONTROLFILE  HIGH    FINE     MAY 26 08:00:00  Y    Backup.285.977120961
CONTROLFILE  HIGH    FINE     MAY 26 06:00:00  Y    Backup.293.977120965
CONTROLFILE  HIGH    FINE     MAY 26 08:00:00  Y    Backup.321.977128799
CONTROLFILE  HIGH    FINE     MAY 26 08:00:00  Y    current.331.977129649

ASMCMD [+RECO/DBM01/CONTROLFILE] > pwd
+RECO/DBM01/CONTROLFILE

ASMCMD [+RECO/DBM01/CONTROLFILE] > exit

[oracle@dm01db01 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? dbm011
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 26 08:55:09 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> alter system set control_files='+RECO/DBM01/CONTROLFILE/current.331.977129649' scope=spfile sid='*';

System altered.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> exit

[oracle@dm01db01 ~]$ srvctl start database -d dbm01

[oracle@dm01db01 ~]$ srvctl status database -d dbm01
Instance dbm011 is running on node dm01db01
Instance dbm012 is running on node dm01db02
Instance dbm013 is running on node dm01db03
Instance dbm014 is running on node dm01db04


Step 9: Move block change tracking file to +RECO Disk Group

SQL> select filename from v$block_change_tracking;

FILENAME
——————————————————————–
+DATA/dbm01/changetracking/ctf.282.976375227

SQL> alter database disable block change tracking;

Database altered.

SQL> alter database enable block change tracking using file '+RECO';

Database altered.

SQL> select filename from v$block_change_tracking;

FILENAME
——————————————————————–
+RECO/dbm01/changetracking/ctf.319.977128195


Step 10: Move the Fast Recovery Area (FRA) to the +RECO Disk Group

SQL> show parameter recover

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_recovery_file_dest                string      +DATA

SQL> alter system set db_recovery_file_dest='+RECO';

System altered.

SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_recovery_file_dest                string      +RECO


Step 11: Update OMF parameter to point to +RECO

SQL> show parameter online

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_create_online_log_dest_1          string      +DATA

SQL> alter system set db_create_online_log_dest_1='+RECO';

System altered.

SQL> show parameter db_create_online_log_dest_1

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_create_online_log_dest_1          string      +RECO


Step 12: Verify the entire database is moved to +RECO ASM Disk Group

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 26 08:57:57 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select name, open_mode,database_role from gv$database;

NAME      OPEN_MODE            DATABASE_ROLE
——— ——————– —————-
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY

SQL> set lines 200
SQL> set pages 200
SQL> select name from v$tempfile;

NAME
——————————————————-
+RECO/dbm01/tempfile/temp.297.977125145

SQL> select name from v$controlfile;

NAME
——————————————————-
+RECO/dbm01/controlfile/current.331.977129649

SQL> select name from v$datafile;

NAME
——————————————————–
+RECO/dbm01/datafile/system.291.977121353
+RECO/dbm01/datafile/sysaux.290.977121353
+RECO/dbm01/datafile/undotbs1.289.977121353
+RECO/dbm01/datafile/undotbs2.288.977121353
+RECO/dbm01/datafile/undotbs3.287.977121353
+RECO/dbm01/datafile/undotbs4.286.977121353
+RECO/dbm01/datafile/users.284.977121353

7 rows selected.

SQL> select member from v$logfile;

MEMBER
———————————————————
+RECO/dbm01/onlinelog/group_1.298.977127719
+RECO/dbm01/onlinelog/group_2.299.977125295
+RECO/dbm01/onlinelog/group_3.300.977125299
+RECO/dbm01/onlinelog/group_4.301.977125309
+RECO/dbm01/onlinelog/group_5.302.977125313
+RECO/dbm01/onlinelog/group_6.303.977125317
+RECO/dbm01/onlinelog/group_7.304.977125321
+RECO/dbm01/onlinelog/group_8.305.977125327
+RECO/dbm01/onlinelog/group_9.306.977125329
+RECO/dbm01/onlinelog/group_10.307.977125333
+RECO/dbm01/onlinelog/group_11.308.977125335
+RECO/dbm01/onlinelog/group_12.309.977125339
+RECO/dbm01/onlinelog/group_13.310.977125343
+RECO/dbm01/onlinelog/group_14.311.977125345
+RECO/dbm01/onlinelog/group_15.312.977125349
+RECO/dbm01/onlinelog/group_16.313.977125351

16 rows selected.


Conclusion

In this article we have learned how to move a database from the +DATA ASM disk group to the +RECO disk group. Using RMAN backup-as-copy along with the FRA and OMF destination parameters makes it easy to move a database from one location to another.


While working on an Exadata Database Machine X4-2 half rack, we found that the Smart Flash Cache on one storage cell was missing. We ran Exachk, which confirmed that the Smart Flash Cache was indeed missing on storage cell 05.

Here is the message printed in the Exachk report.

FAIL Storage Server Check Storage Server Flash Memory is not configured as Exadata Smart Flash Cache dm01cel05 

Exadata Smart Flash Cache:

Oracle first introduced the Exadata Smart Flash Cache with Exadata V2. It caches data on flash-based storage, using a caching algorithm to decide intelligently what to keep on the flash cards in the storage cells, which improves performance for OLTP workloads. The flash hardware backs both the Smart Flash Cache and the Smart Flash Log features.
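Before recreating the cache, it can also help to confirm that the flash cell disks on the affected cell are present and in a normal state; a minimal CellCLI check:

CellCLI> list celldisk attributes name,status where disktype=flashdisk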



In this article we will demonstrate how to create a missing Smart Flash Cache on Exadata Storage cell.

Steps to create/configure the Smart Flash Cache


  • List the Flash cache

From the output below we can clearly see that the flash cache on storage cell 05 is missing.

[root@dm01db01 ~]# dcli -g ~/cell_group -l root cellcli -e list flashcache attributes name,size,status
dm01cel01: dm01cel01_FLASHCACHE  2978.75G        normal
dm01cel02: dm01cel02_FLASHCACHE  2978.75G        normal
dm01cel03: dm01cel03_FLASHCACHE  2978.75G        normal
dm01cel04: dm01cel04_FLASHCACHE  2978.75G        normal
dm01cel06: dm01cel06_FLASHCACHE  2978.75G        normal
dm01cel07: dm01cel07_FLASHCACHE  2978.75G        normal


  • List the flash log. The flash log is intact on all cells, including cell 05

[root@dm01db01 ~]# dcli -g ~/cell_group -l root cellcli -e list flashlog attributes name,size
dm01cel01: dm01cel01_FLASHLOG    512M
dm01cel02: dm01cel02_FLASHLOG    512M
dm01cel03: dm01cel03_FLASHLOG    512M
dm01cel04: dm01cel04_FLASHLOG    512M
dm01cel05: dm01cel05_FLASHLOG    512M
dm01cel06: dm01cel06_FLASHLOG    512M
dm01cel07: dm01cel07_FLASHLOG    512M


  • Connect to Storage cell 05 and list the flash cache

[root@dm01cel05 ~]# cellcli
CellCLI: Release 12.2.1.1.6 – Production on Tue Jun 16 03:57:24 CDT 2018

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 1,656


  • List the flash cache detail. The command returns no output, confirming that no flash cache exists on this cell

CellCLI> list flashcache detail


  • Create Flash Cache as shown below

CellCLI> create flashcache all
Flash cache dm01cel05_FLASHCACHE successfully created


  • List flash cache

CellCLI>  list flashcache detail
         name:                   dm01cel05_FLASHCACHE
         cellDisk:               FD_04_dm01cel05,FD_05_dm01cel05,FD_10_dm01cel05,FD_06_dm01cel05,FD_11_dm01cel05,FD_13_dm01cel05,FD_08_dm01cel05,FD_15_dm01cel05,FD_12_dm01cel05,FD_01_dm01cel05,FD_00_dm01cel05,FD_07_dm01cel05,FD_09_dm01cel05,FD_14_dm01cel05,FD_02_dm01cel05,FD_03_dm01cel05
         creationTime:           2015-06-16T04:03:21-05:00
         degradedCelldisks:
         effectiveCacheSize:     2978.75G
         id:                     ce4589c7-183c-4346-965d-3f43a4e47de5
         size:                   2978.75G
         status:                 normal


  • List the flash cache for all cells now. From compute node 1, execute the following command

[root@dm01db01 ~]# dcli -g ~/cell_group -l root cellcli -e list flashcache attributes name,size,status
dm01cel01: dm01cel01_FLASHCACHE  2978.75G        normal
dm01cel02: dm01cel02_FLASHCACHE  2978.75G        normal
dm01cel03: dm01cel03_FLASHCACHE  2978.75G        normal
dm01cel04: dm01cel04_FLASHCACHE  2978.75G        normal
dm01cel05: dm01cel05_FLASHCACHE  2978.75G        normal
dm01cel06: dm01cel06_FLASHCACHE  2978.75G        normal
dm01cel07: dm01cel07_FLASHCACHE  2978.75G        normal
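Optionally, confirm that the flash cache mode on the rebuilt cell matches the rest of the rack (WriteThrough vs WriteBack); one way, assuming the same cell_group file:

[root@dm01db01 ~]# dcli -g ~/cell_group -l root cellcli -e "list cell attributes name,flashCacheMode"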


Conclusion

In this article we have learned how to create a missing flash cache on an Exadata storage cell. Exadata Smart Flash Cache caches data on flash-based storage and backs the Smart Flash Cache and Smart Flash Log features. The Exachk utility helps you diagnose and report hardware problems, and it also recommends solutions.


When Oracle ACS builds an Exadata Database Machine, they use the OEDA configuration file that you sent them for the Exadata install. The machine is built with the default ASM disk group names DATAC1, RECOC1 and DBFS_DG. If you want to rename DATAC1 and RECOC1 to something different to match your organization's standards, you can do so using the Oracle renamedg utility. The minimum database version required to rename an ASM disk group is 11.2.

In this article we will demonstrate how to rename ASM disk groups on an Exadata Database Machine running Oracle Database 11.2.

Here we want to change the following ASM Disk Group Names:
DATAC1 to DATA
RECOC1 to RECO

Steps to rename ASM Disk Group


  • Get the Database version

[oracle@dm01db01 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? dbm011
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Thu May 24 16:36:34 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from v$version;

BANNER
——————————————————————————–
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
PL/SQL Release 11.2.0.4.0 – Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 – Production
NLSRTL Version 11.2.0.4.0 – Production

  • Connect to asmcmd and make a note of the Disk Group Names

[oracle@dm01db01 ~]$ . oraenv

ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle

[oracle@dm01db01 ~]$ asmcmd -p

ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH    N         512   4096  4194304  272154624  271558968          6479872        88359698              0             N  DATAC1/
MOUNTED  HIGH    N         512   4096  4194304    2404640    2402468            68704          777921              0             Y  DBFS_DG/
MOUNTED  NORMAL  N         512   4096  4194304   45389568   45386648           540352        22423148              0             N  RECOC1/

  • Check the versions

ASMCMD [+] > lsct
DB_Name  Status     Software_Version  Compatible_version  Instance_Name  Disk_Group
+ASM     CONNECTED        11.2.0.4.0          11.2.0.4.0  +ASM1          DBFS_DG
DBM01    CONNECTED        11.2.0.4.0          11.2.0.4.0  dbm011         DATAC1

  • Check the database status

[oracle@dm01db01 ~]$ srvctl status database -d dbm01
Instance dbm011 is running on node dm01db01
Instance dbm012 is running on node dm01db02
Instance dbm013 is running on node dm01db03
Instance dbm014 is running on node dm01db04

  • Make a note of the control files, data files, redo log files, and the block change tracking file before stopping the database.

SQL> select name from v$controlfile;
SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> select * from v$block_change_tracking;

  • Stop the database

[oracle@dm01db01 ~]$ srvctl stop database -d dbm01

[oracle@dm01db01 ~]$ srvctl status database -d dbm01
Instance dbm011 is not running on node dm01db01
Instance dbm012 is not running on node dm01db02
Instance dbm013 is not running on node dm01db03
Instance dbm014 is not running on node dm01db04

  • Unmount the ASM disk group(s) that you want to rename. Connect to the ASM command prompt and unmount the disk groups; do this on all nodes.

ASMCMD [+] > umount DATAC1

ASMCMD [+] > umount RECOC1

ASMCMD [+] > lsdg
State    Type  Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512   4096  4194304   2404640  2402468            68704          777921              0             Y  DBFS_DG/

Note: If you don't stop the databases that use the ASM disk group, you will get the following error message:

ASMCMD [+] > umount DATAC1
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup “DATAC1” precludes its dismount (DBD ERROR: OCIStmtExecute)
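If you are unsure which instances are still using the disk group, you can query v$asm_client from the ASM instance before retrying the unmount:

SQL> select db_name, instance_name, status from v$asm_client;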

*** Repeat the above steps on all the remaining nodes in the Cluster***

[oracle@dm01db01 ~]$ ssh dm01db02
Last login: Thu May 17 15:23:31 2018 from dm01db01

[oracle@dm01db02 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM2
The Oracle base has been set to /u01/app/oracle

[oracle@dm01db02 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH    N         512   4096  4194304  272154624  271558968          6479872        88359698              0             N  DATAC1/
MOUNTED  HIGH    N         512   4096  4194304    2404640    2402468            68704          777921              0             Y  DBFS_DG/
MOUNTED  NORMAL  N         512   4096  4194304   45389568   45385040           540352        22422344              0             N  RECOC1/

[oracle@dm01db02 ~]$ asmcmd umount DATAC1

[oracle@dm01db02 ~]$ asmcmd umount RECOC1

[oracle@dm01db02 ~]$ asmcmd lsdg
State    Type  Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512   4096  4194304   2404640  2402468            68704          777921              0             Y  DBFS_DG/

[oracle@dm01db01 ~]$ ssh dm01db03
Last login: Thu May 17 15:23:31 2018 from dm01db01

[oracle@dm01db03 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM3
The Oracle base has been set to /u01/app/oracle

[oracle@dm01db03 ~]$ asmcmd umount DATAC1

[oracle@dm01db03 ~]$ asmcmd umount RECOC1

[oracle@dm01db03 ~]$ asmcmd lsdg
State    Type  Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512   4096  4194304   2404640  2402468            68704          777921              0             Y  DBFS_DG/

[oracle@dm01db01 ~]$ ssh dm01db04
Last login: Thu May 17 15:23:31 2018 from dm01db01

[oracle@dm01db04 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM4
The Oracle base has been set to /u01/app/oracle

[oracle@dm01db04 ~]$ asmcmd umount DATAC1

[oracle@dm01db04 ~]$ asmcmd umount RECOC1

[oracle@dm01db04 ~]$ asmcmd lsdg
State    Type  Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512   4096  4194304   2404640  2402468            68704          777921              0             Y  DBFS_DG/

  • Verify that renamedg is in PATH

[oracle@dm01db01 ~]$ which renamedg
/u01/app/11.2.0.4/grid/bin/renamedg

  • As the owner of the Grid Infrastructure software, execute the renamedg command. Here the owner of the GI home is the oracle user. First we rename the DATAC1 disk group to DATA

[oracle@dm01db01 ~]$ renamedg phase=both dgname=DATAC1 newdgname=DATA verbose=true

NOTE: No asm libraries found in the system
Parsing parameters..
Parameters in effect:

         Old DG name       : DATAC1
         New DG name       : DATA
         Phases            :
                 Phase 1
                 Phase 2
         Discovery str      : (null)
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=DATAC1 newdgname=DATA verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_02_dm01cel01 with disk number:74 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_11_dm01cel01 with disk number:83 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_07_dm01cel01 with disk number:79 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_04_dm01cel01 with disk number:76 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_05_dm01cel01 with disk number:77 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_10_dm01cel01 with disk number:82 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_06_dm01cel01 with disk number:78 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_03_dm01cel01 with disk number:75 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_00_dm01cel01 with disk number:72 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_08_dm01cel01 with disk number:80 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_01_dm01cel01 with disk number:73 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_09_dm01cel01 with disk number:81 and timestamp (33068591 612262912)



Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_09_dm01cel07 with disk number:69 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_03_dm01cel07 with disk number:63 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_06_dm01cel07 with disk number:66 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_08_dm01cel07 with disk number:68 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_04_dm01cel07 with disk number:64 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_07_dm01cel07 with disk number:67 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_01_dm01cel07 with disk number:61 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_05_dm01cel07 with disk number:65 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_02_dm01cel07 with disk number:62 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_00_dm01cel07 with disk number:60 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_11_dm01cel07 with disk number:71 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_10_dm01cel07 with disk number:70 and timestamp (33068591 612262912)
Checking for hearbeat…
Re-discovering the group
Performing discovery with string:
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_02_dm01cel01 with disk number:74 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_11_dm01cel01 with disk number:83 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_07_dm01cel01 with disk number:79 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_04_dm01cel01 with disk number:76 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_05_dm01cel01 with disk number:77 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_10_dm01cel01 with disk number:82 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_06_dm01cel01 with disk number:78 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_03_dm01cel01 with disk number:75 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_00_dm01cel01 with disk number:72 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_08_dm01cel01 with disk number:80 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_01_dm01cel01 with disk number:73 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.9;192.168.10.10/DATAC1_CD_09_dm01cel01 with disk number:81 and timestamp (33068591 612262912)



Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_09_dm01cel07 with disk number:69 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_03_dm01cel07 with disk number:63 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_06_dm01cel07 with disk number:66 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_08_dm01cel07 with disk number:68 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_04_dm01cel07 with disk number:64 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_07_dm01cel07 with disk number:67 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_01_dm01cel07 with disk number:61 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_05_dm01cel07 with disk number:65 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_02_dm01cel07 with disk number:62 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_00_dm01cel07 with disk number:60 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_11_dm01cel07 with disk number:71 and timestamp (33068591 612262912)
Identified disk OSS::o/192.168.10.21;192.168.10.22/DATAC1_CD_10_dm01cel07 with disk number:70 and timestamp (33068591 612262912)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:74
Checking disk number:83
Checking disk number:79
Checking disk number:76
Checking disk number:77
Checking disk number:82
Checking disk number:78
Checking disk number:75
Checking disk number:72
Checking disk number:80
Checking disk number:73
Checking disk number:81
Checking disk number:69
Checking disk number:63
Checking disk number:66
Checking disk number:68
Checking disk number:64
Checking disk number:67
Checking disk number:61
Checking disk number:65
Checking disk number:62
Checking disk number:60


Generating configuration file..
Completed phase 1
Executing phase 2
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_02_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_11_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_07_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_04_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_05_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_10_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_06_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_03_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_00_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_08_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/DATAC1_CD_01_dm01cel01
Modifying the header


Modifying the header
Completed phase 2
Terminating kgfd context 0x7f9d346240a0

  • Now rename the RECOC1 ASM disk group to RECO using the renamedg command

[oracle@dm01db01 ~]$ renamedg phase=both dgname=RECOC1 newdgname=RECO verbose=true

NOTE: No asm libraries found in the system
Parsing parameters..
Parameters in effect:

         Old DG name       : RECOC1
         New DG name       : RECO
         Phases            :
                 Phase 1
                 Phase 2
         Discovery str      : (null)
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=RECOC1 newdgname=RECO verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_03_dm01cel01 with disk number:75 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_04_dm01cel01 with disk number:76 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_05_dm01cel01 with disk number:77 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_00_dm01cel01 with disk number:72 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_10_dm01cel01 with disk number:82 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_07_dm01cel01 with disk number:79 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_02_dm01cel01 with disk number:74 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_01_dm01cel01 with disk number:73 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_11_dm01cel01 with disk number:83 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_08_dm01cel01 with disk number:80 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_09_dm01cel01 with disk number:81 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_06_dm01cel01 with disk number:78 and timestamp (33068591 628813824)


Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_01_dm01cel07 with disk number:61 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_08_dm01cel07 with disk number:68 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_06_dm01cel07 with disk number:66 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_11_dm01cel07 with disk number:71 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_10_dm01cel07 with disk number:70 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_05_dm01cel07 with disk number:65 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_00_dm01cel07 with disk number:60 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_03_dm01cel07 with disk number:63 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_09_dm01cel07 with disk number:69 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_07_dm01cel07 with disk number:67 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_02_dm01cel07 with disk number:62 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_04_dm01cel07 with disk number:64 and timestamp (33068591 628813824)
Checking for hearbeat…
Re-discovering the group
Performing discovery with string:
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_03_dm01cel01 with disk number:75 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_04_dm01cel01 with disk number:76 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_05_dm01cel01 with disk number:77 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_00_dm01cel01 with disk number:72 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_10_dm01cel01 with disk number:82 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_07_dm01cel01 with disk number:79 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_02_dm01cel01 with disk number:74 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_01_dm01cel01 with disk number:73 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_11_dm01cel01 with disk number:83 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_08_dm01cel01 with disk number:80 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_09_dm01cel01 with disk number:81 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.9;192.168.10.10/RECOC1_CD_06_dm01cel01 with disk number:78 and timestamp (33068591 628813824)


Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_01_dm01cel07 with disk number:61 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_08_dm01cel07 with disk number:68 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_06_dm01cel07 with disk number:66 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_11_dm01cel07 with disk number:71 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_10_dm01cel07 with disk number:70 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_05_dm01cel07 with disk number:65 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_00_dm01cel07 with disk number:60 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_03_dm01cel07 with disk number:63 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_09_dm01cel07 with disk number:69 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_07_dm01cel07 with disk number:67 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_02_dm01cel07 with disk number:62 and timestamp (33068591 628813824)
Identified disk OSS::o/192.168.10.21;192.168.10.22/RECOC1_CD_04_dm01cel07 with disk number:64 and timestamp (33068591 628813824)


Checking if the diskgroup is mounted or used by CSS
Checking disk number:75
Checking disk number:76
Checking disk number:77
Checking disk number:72
Checking disk number:82
Checking disk number:79
Checking disk number:74
Checking disk number:73
Checking disk number:83
Checking disk number:80
Checking disk number:81
Checking disk number:78
Checking disk number:61
Checking disk number:68
Checking disk number:66
Checking disk number:71
Checking disk number:70
Checking disk number:65
Checking disk number:60
Checking disk number:63
Checking disk number:69
Checking disk number:67
Checking disk number:62
Checking disk number:64
Checking disk number:49


Generating configuration file..
Completed phase 1
Executing phase 2
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_03_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_04_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_05_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_00_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_10_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_07_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_02_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_01_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_11_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_08_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_09_dm01cel01
Modifying the header
Looking for o/192.168.10.9;192.168.10.10/RECOC1_CD_06_dm01cel01
Modifying the header


Modifying the header
Completed phase 2
Terminating kgfd context 0x7f8d42f6c0a0

  • Mount the DATA and RECO ASM disk groups on all the nodes.

[oracle@dm01db01 ~]$ asmcmd -p

ASMCMD [+] > lsdg
State    Type  Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512   4096  4194304   2404640  2402468            68704          777921              0             Y  DBFS_DG/

ASMCMD [+] > mount DATA

ASMCMD [+] > mount RECO

ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH    N         512   4096  4194304  272154624  271558968          6479872        88359698              0             N  DATA/
MOUNTED  HIGH    N         512   4096  4194304    2404640    2402468            68704          777921              0             Y  DBFS_DG/
MOUNTED  NORMAL  N         512   4096  4194304   45389568   45385040           540352        22422344              0             N  RECO/

*** Repeat the above steps on all the remaining compute nodes in the cluster ***
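
If the disk group resources are already registered with Oracle Clusterware, an alternative is to mount the renamed disk groups on the remaining nodes in one step with srvctl, for example:

[oracle@dm01db01 ~]$ srvctl start diskgroup -g DATA -n dm01db02,dm01db03,dm01db04
[oracle@dm01db01 ~]$ srvctl start diskgroup -g RECO -n dm01db02,dm01db03,dm01db04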

Note: 

  • The renamedg utility cannot rename the associated ASM/grid disk names
  • The renamedg utility cannot rename or update the control files, datafiles, redo log files, or any other files that reference the old ASM disk group name; this must be done for every database that uses the disk group


Steps to rename control files, datafiles, redo log files and other database files


  • Update SPFILE location

[oracle@dm01db01 ~]$ cd $ORACLE_HOME/dbs

[oracle@dm01db01 dbs]$ cat initdbm011.ora
SPFILE='+DATAC1/dbm01/spfiledbm01.ora'

[oracle@dm01db01 dbs]$ vi initdbm011.ora

[oracle@dm01db01 dbs]$ cat initdbm011.ora
SPFILE='+DATA/dbm01/spfiledbm01.ora'

[oracle@dm01db01 dbs]$ scp initdbm011.ora dm01db02:/u01/app/oracle/product/11.2.0.4/dbhome/dbs/initdbm012.ora
initdbm011.ora                                                                                                                                             100%   42     0.0KB/s   00:00

[oracle@dm01db01 dbs]$ scp initdbm011.ora dm01db03:/u01/app/oracle/product/11.2.0.4/dbhome/dbs/initdbm013.ora
initdbm011.ora                                                                                                                                             100%   42     0.0KB/s   00:00

[oracle@dm01db01 dbs]$ scp initdbm011.ora dm01db04:/u01/app/oracle/product/11.2.0.4/dbhome/dbs/initdbm014.ora
initdbm011.ora                                                                                                                                             100%   42     0.0KB/s   00:00
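
With more nodes, a small shell loop avoids the repetition; this is a sketch that assumes the node and instance numbering used above:

[oracle@dm01db01 dbs]$ for i in 2 3 4; do
>   scp initdbm011.ora dm01db0${i}:/u01/app/oracle/product/11.2.0.4/dbhome/dbs/initdbm01${i}.ora
> done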

  • Update control file location

[oracle@dm01db01 dbs]$ . oraenv
ORACLE_SID = [dbm011] ?
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@dm01db01 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri May 25 10:17:49 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 2.5655E+10 bytes
Fixed Size                  2265224 bytes
Variable Size            4160753528 bytes
Database Buffers         2.1341E+10 bytes
Redo Buffers              151113728 bytes

SQL> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +DATAC1/dbm01/controlfile/current.256.976374731

SQL> alter system set control_files='+DATA/dbm01/controlfile/current.256.976374731' scope=spfile;

System altered.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 2.5655E+10 bytes
Fixed Size                  2265224 bytes
Variable Size            4160753528 bytes
Database Buffers         2.1341E+10 bytes
Redo Buffers              151113728 bytes
Database mounted.

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
+DATA/dbm01/controlfile/current.256.976374731

  • Update datafile and redo log file locations

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATAC1/dbm01/datafile/system.259.976374739
+DATAC1/dbm01/datafile/sysaux.260.976374743
+DATAC1/dbm01/datafile/undotbs1.261.976374745
+DATAC1/dbm01/datafile/undotbs2.263.976374753
+DATAC1/dbm01/datafile/undotbs3.264.976374755
+DATAC1/dbm01/datafile/undotbs4.265.976374757
+DATAC1/dbm01/datafile/users.266.976374757

7 rows selected.

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+DATAC1/dbm01/onlinelog/group_1.257.976374733
+DATAC1/dbm01/onlinelog/group_2.258.976374735
+DATAC1/dbm01/onlinelog/group_7.267.976375073
+DATAC1/dbm01/onlinelog/group_8.268.976375075
+DATAC1/dbm01/onlinelog/group_5.269.976375079
+DATAC1/dbm01/onlinelog/group_6.270.976375083
+DATAC1/dbm01/onlinelog/group_3.271.976375085
+DATAC1/dbm01/onlinelog/group_4.272.976375087
+DATAC1/dbm01/onlinelog/group_9.274.976375205
+DATAC1/dbm01/onlinelog/group_10.275.976375209
+DATAC1/dbm01/onlinelog/group_11.276.976375211
+DATAC1/dbm01/onlinelog/group_12.277.976375215
+DATAC1/dbm01/onlinelog/group_13.278.976375217
+DATAC1/dbm01/onlinelog/group_14.279.976375219
+DATAC1/dbm01/onlinelog/group_15.280.976375223
+DATAC1/dbm01/onlinelog/group_16.281.976375225

16 rows selected.
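
Typing each rename by hand is error-prone. The statements below can be generated with a query along these lines (a sketch using the disk group names from this article; spool the output and run it as a script):

SQL> select 'alter database rename file ''' || name ||
  2         ''' to ''' || replace(name, '+DATAC1', '+DATA') || ''';'
  3    from v$datafile
  4  union all
  5  select 'alter database rename file ''' || member ||
  6         ''' to ''' || replace(member, '+DATAC1', '+DATA') || ''';'
  7    from v$logfile;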

SQL> alter database rename file '+DATAC1/dbm01/datafile/system.259.976374739' to '+DATA/dbm01/datafile/system.259.976374739';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/datafile/sysaux.260.976374743' to '+DATA/dbm01/datafile/sysaux.260.976374743';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/datafile/undotbs1.261.976374745' to '+DATA/dbm01/datafile/undotbs1.261.976374745';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/datafile/undotbs2.263.976374753' to '+DATA/dbm01/datafile/undotbs2.263.976374753';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/datafile/undotbs3.264.976374755' to '+DATA/dbm01/datafile/undotbs3.264.976374755';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/datafile/undotbs4.265.976374757' to '+DATA/dbm01/datafile/undotbs4.265.976374757';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/datafile/users.266.976374757' to '+DATA/dbm01/datafile/users.266.976374757';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_1.257.976374733' to '+DATA/dbm01/onlinelog/group_1.257.976374733';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_2.258.976374735' to '+DATA/dbm01/onlinelog/group_2.258.976374735';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_7.267.976375073' to '+DATA/dbm01/onlinelog/group_7.267.976375073';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_8.268.976375075' to '+DATA/dbm01/onlinelog/group_8.268.976375075';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_5.269.976375079' to '+DATA/dbm01/onlinelog/group_5.269.976375079';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_6.270.976375083' to '+DATA/dbm01/onlinelog/group_6.270.976375083';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_3.271.976375085' to '+DATA/dbm01/onlinelog/group_3.271.976375085';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_4.272.976375087' to '+DATA/dbm01/onlinelog/group_4.272.976375087';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_9.274.976375205' to '+DATA/dbm01/onlinelog/group_9.274.976375205';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_10.275.976375209' to '+DATA/dbm01/onlinelog/group_10.275.976375209';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_11.276.976375211' to '+DATA/dbm01/onlinelog/group_11.276.976375211';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_12.277.976375215' to '+DATA/dbm01/onlinelog/group_12.277.976375215';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_13.278.976375217' to '+DATA/dbm01/onlinelog/group_13.278.976375217';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_14.279.976375219' to '+DATA/dbm01/onlinelog/group_14.279.976375219';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_15.280.976375223' to '+DATA/dbm01/onlinelog/group_15.280.976375223';

Database altered.

SQL> alter database rename file '+DATAC1/dbm01/onlinelog/group_16.281.976375225' to '+DATA/dbm01/onlinelog/group_16.281.976375225';

Database altered.

  • Verify the datafiles and redo log files names

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/dbm01/datafile/system.259.976374739
+DATA/dbm01/datafile/sysaux.260.976374743
+DATA/dbm01/datafile/undotbs1.261.976374745
+DATA/dbm01/datafile/undotbs2.263.976374753
+DATA/dbm01/datafile/undotbs3.264.976374755
+DATA/dbm01/datafile/undotbs4.265.976374757
+DATA/dbm01/datafile/users.266.976374757

7 rows selected.

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+DATA/dbm01/onlinelog/group_1.257.976374733
+DATA/dbm01/onlinelog/group_2.258.976374735
+DATA/dbm01/onlinelog/group_7.267.976375073
+DATA/dbm01/onlinelog/group_8.268.976375075
+DATA/dbm01/onlinelog/group_5.269.976375079
+DATA/dbm01/onlinelog/group_6.270.976375083
+DATA/dbm01/onlinelog/group_3.271.976375085
+DATA/dbm01/onlinelog/group_4.272.976375087
+DATA/dbm01/onlinelog/group_9.274.976375205
+DATA/dbm01/onlinelog/group_10.275.976375209
+DATA/dbm01/onlinelog/group_11.276.976375211
+DATA/dbm01/onlinelog/group_12.277.976375215
+DATA/dbm01/onlinelog/group_13.278.976375217
+DATA/dbm01/onlinelog/group_14.279.976375219
+DATA/dbm01/onlinelog/group_15.280.976375223
+DATA/dbm01/onlinelog/group_16.281.976375225

16 rows selected.

  • Update block change tracking file location

SQL> alter database rename file '+DATAC1/dbm01/changetracking/ctf.282.976375227' to '+DATA/dbm01/changetracking/ctf.282.976375227';

Database altered.

SQL> select * from v$block_change_tracking;

STATUS
----------
FILENAME
--------------------------------------------------------------------------------
     BYTES
----------
ENABLED
+DATA/dbm01/changetracking/ctf.282.976375227
  11599872

  • Update OMF related parameters

SQL> show parameter db_create_online_log_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_1          string      +DATAC1

SQL> alter system set db_create_online_log_dest_1='+DATA';

System altered.

SQL> show parameter db_create_online_log_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_1          string      +DATA
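
If other OMF destinations are set (for example db_create_file_dest or additional db_create_online_log_dest_n parameters), they need the same change. A quick way to review them all:

SQL> show parameter db_create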

  • Update Fast Recovery Area location

SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +RECOC1
db_recovery_file_dest_size           big integer 20425000M

SQL> alter system set db_recovery_file_dest='+RECO';

System altered.

SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +RECO
db_recovery_file_dest_size           big integer 20425000M

  • Shutdown the database

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

  • Update the database configuration

[oracle@dm01db01 dbs]$ srvctl config database -d dbm01
Database unique name: dbm01
Database name: dbm01
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome
Oracle user: oracle
Spfile: +DATAC1/dbm01/spfiledbm01.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: dbm01
Database instances: dbm011,dbm012,dbm013,dbm014
Disk Groups: DATAC1,RECOC1,DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed

[oracle@dm01db01 dbs]$ srvctl modify database -p +DATA/dbm01/spfiledbm01.ora -a DATA,RECO -d dbm01

[oracle@dm01db01 dbs]$ srvctl config database -d dbm01
Database unique name: dbm01
Database name: dbm01
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome
Oracle user: oracle
Spfile: +DATA/dbm01/spfiledbm01.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: dbm01
Database instances: dbm011,dbm012,dbm013,dbm014
Disk Groups: DATA,RECO
Mount point paths:
Services:
Type: RAC
Database is administrator managed

  • Start the database and verify

[oracle@dm01db01 dbs]$ srvctl start database -d dbm01

[oracle@dm01db01 dbs]$ srvctl status database -d dbm01
Instance dbm011 is running on node dm01db01
Instance dbm012 is running on node dm01db02
Instance dbm013 is running on node dm01db03
Instance dbm014 is running on node dm01db04

[oracle@dm01db01 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri May 25 10:40:34 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select name, open_mode,database_role from gv$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY
DBM01     READ WRITE           PRIMARY

For comparison, here is the environment captured before the rename, still showing the original DATAC1 and RECOC1 disk groups:

[oracle@dm01db01 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? dbm011
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Thu May 24 16:36:34 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
PL/SQL Release 11.2.0.4.0 – Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 – Production
NLSRTL Version 11.2.0.4.0 – Production


[oracle@dm01db01 ~]$ . oraenv

ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle

[oracle@dm01db01 ~]$ asmcmd -p

ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH    N         512   4096  4194304  272154624  271558968          6479872        88359698              0             N  DATAC1/
MOUNTED  HIGH    N         512   4096  4194304    2404640    2402468            68704          777921              0             Y  DBFS_DG/
MOUNTED  NORMAL  N         512   4096  4194304   45389568   45386648           540352        22423148              0             N  RECOC1/

ASMCMD [+] > lsct
DB_Name  Status     Software_Version  Compatible_version  Instance_Name  Disk_Group
+ASM     CONNECTED        11.2.0.4.0          11.2.0.4.0  +ASM1          DBFS_DG
DBM01    CONNECTED        11.2.0.4.0          11.2.0.4.0  dbm011         DATAC1

[oracle@dm01db01 ~]$ srvctl status database -d dbm01
Instance dbm011 is running on node dm01db01
Instance dbm012 is running on node dm01db02
Instance dbm013 is running on node dm01db03
Instance dbm014 is running on node dm01db04


Conclusion

In this article we have learned how to rename an ASM disk group on Exadata running Oracle Database 11.2. Starting with Oracle Database 11.2 you can use the renamedg command to rename an ASM disk group. The renamedg utility cannot rename the associated ASM/grid disk names, nor can it update the control files, datafiles, redo log files, or any other files that reference the old disk group name. You must update the database files manually after renaming ASM disk groups.


We had a failed hard disk on an Exadata Storage Cell X6-2, so we scheduled an Oracle Field Engineer to replace the bad disk. The Field Engineer came onsite and replaced the faulty hard disk. After the replacement we found that the physical disk and LUN were created successfully, but the cell disk and grid disks were not created automatically. Normally, when a hard disk is replaced, the LUN, cell disk and grid disks are created automatically and the grid disks are added to the ASM disk groups without any manual intervention. In some odd cases the cell disk and grid disks are not created automatically; in those cases you must manually create the cell disk, create the grid disks with the proper sizes and add them to the ASM disk groups.

In this article we will demonstrate how to create the cell disk and grid disks manually and add them to the respective ASM disk groups.

Environment

  • Exadata X6-2 Elastic Configuration
  • 4 Compute nodes and 6 Storage cells
  • Hard Disk Size: 8TB
  • 3 ASM Disk Groups: DATA, RECO & DBFS_DG
  • Total number of grid disks: DATA - 72, RECO - 72 & DBFS_DG - 60

Here the disk in slot 8:5 was bad and was replaced.

Before Replacing Hard Disk:

CellCLI> list physicaldisk
         8:0             PYJZKV                  normal
         8:1             PMU3LV                  normal
         8:2             P1Y2KV                  normal
         8:3             PYH48V                  normal
         8:4             PY7MAV                  normal
         8:5             PPZ47V                  not present
         8:6             PEJKHR                  normal
         8:7             PY4XSV                  normal
         8:8             PYL00V                  normal
         8:9             PV5RGV                  normal
         8:10            PSU26V                  normal
         8:11            PY522V                  normal
         FLASH_1_1       CVMD522500AG1P6NGN      normal
         FLASH_2_1       CVMD522401AC1P6NGN      normal
         FLASH_4_1       CVMD522500AC1P6NGN      normal
         FLASH_5_1       CVMD5230000Y1P6NGN      normal

CellCLI> list lun
         0_0     0_0     normal
         0_1     0_1     normal
         0_2     0_2     normal
         0_3     0_3     normal
         0_4     0_4     normal
         0_5     0_5     not present
         0_6     0_6     normal
         0_7     0_7     normal
         0_8     0_8     normal
         0_9     0_9     normal
         0_10    0_10    normal
         0_11    0_11    normal
         1_1     1_1     normal
         2_1     2_1     normal
         4_1     4_1     normal
         5_1     5_1     normal

After replacing Hard Disk:

CellCLI> list physicaldisk
         8:0             PYJZKV                  normal
         8:1             PMU3LV                  normal
         8:2             P1Y2KV                  normal
         8:3             PYH48V                  normal
         8:4             PY7MAV                  normal
         8:5             PPZ47V                  normal
         8:6             PEJKHR                  normal
         8:7             PY4XSV                  normal
         8:8             PYL00V                  normal
         8:9             PV5RGV                  normal
         8:10            PSU26V                  normal
         8:11            PY522V                  normal
         FLASH_1_1       CVMD522500AG1P6NGN      normal
         FLASH_2_1       CVMD522401AC1P6NGN      normal
         FLASH_4_1       CVMD522500AC1P6NGN      normal
         FLASH_5_1       CVMD5230000Y1P6NGN      normal

CellCLI> list lun
         0_0     0_0     normal
         0_1     0_1     normal
         0_2     0_2     normal
         0_3     0_3     normal
         0_4     0_4     normal
         0_5     0_5     normal
         0_6     0_6     normal
         0_7     0_7     normal
         0_8     0_8     normal
         0_9     0_9     normal
         0_10    0_10    normal
         0_11    0_11    normal
         1_1     1_1     normal
         2_1     2_1     normal
         4_1     4_1     normal
         5_1     5_1     normal

[root@dm01cel03 ~]# cellcli -e list physicaldisk 8:5 detail
         name:                   8:5
         deviceId:               21
         deviceName:             /dev/sdf
         diskType:               HardDisk
         enclosureDeviceId:      8
         errOtherCount:          0
         luns:                   0_5
         makeModel:              "HGST    H7280A520SUN8.0T"
         physicalFirmware:       PD51
         physicalInsertTime:     2018-05-18T10:52:29-05:00
         physicalInterface:      sas
         physicalSerial:         PPZ47V
         physicalSize:           7.1536639072000980377197265625T
         slotNumber:             5
         status:                 normal

[root@dm01cel03 ~]# cellcli -e list celldisk where lun=0_5 detail


[root@dm01cel03 ~]# cellcli -e list griddisk where cellDisk=CD_05_dm01cel03 attributes name,status
DATA_CD_05_dm01cel03 not present
DBFS_DG_CD_05_dm01cel03 not present
RECO_CD_05_dm01cel03 not present

[root@dm01cel03 ~]# cellcli -e list griddisk where celldisk=CD_05_dm01cel03 detail
         name:                   DATA_CD_05_dm01cel03
         availableTo:
         cachingPolicy:          default
         cellDisk:               CD_05_dm01cel03
         comment:                "Cluster dm01-cluster diskgroup DATA"
         creationTime:           2016-03-29T20:25:56-05:00
         diskType:               HardDisk
         errorCount:             0
         id:                     db221d77-25b0-4f9e-af6f-95e1c3134af5
         size:                   5.6953125T
         status:                 not present

         name:                   DBFS_DG_CD_05_dm01cel03
         availableTo:
         cachingPolicy:          default
         cellDisk:               CD_05_dm01cel03
         comment:                "Cluster dm01-cluster diskgroup DBFS_DG"
         creationTime:           2016-03-29T20:25:53-05:00
         diskType:               HardDisk
         errorCount:             0
         id:                     216fbec9-6ed4-4ef6-a0d4-d09517906fd5
         size:                   33.796875G
         status:                 not present

         name:                   RECO_CD_05_dm01cel03
         availableTo:
         cachingPolicy:          none
         cellDisk:               CD_05_dm01cel03
         comment:                "Cluster dm01-cluster diskgroup RECO"
         creationTime:           2016-03-29T20:25:58-05:00
         diskType:               HardDisk
         errorCount:             0
         id:                     e8ca6943-0ddd-48ab-b890-e14bbf4e591c
         size:                   1.42388916015625T
         status:                 not present

We can clearly see that the grid disks are not present, so we have to create them manually.

Steps to create Celldisk, Griddisks and add them to ASM Disk Group


  • List Cell Disks

[root@dm01cel03 ~]# cellcli -e list celldisk
         CD_00_dm01cel03         normal
         CD_01_dm01cel03         normal
         CD_02_dm01cel03         normal
         CD_03_dm01cel03         normal
         CD_04_dm01cel03         normal
         CD_05_dm01cel03         not present
         CD_06_dm01cel03         normal
         CD_07_dm01cel03         normal
         CD_08_dm01cel03         normal
         CD_09_dm01cel03         normal
         CD_10_dm01cel03         normal
         CD_11_dm01cel03         normal
         FD_00_dm01cel03         normal
         FD_01_dm01cel03         normal
         FD_02_dm01cel03         normal
         FD_03_dm01cel03         normal

  • List Grid Disks

[root@dm01cel03 ~]# cellcli -e list griddisk
         DATA_CD_00_dm01cel03       active
         DATA_CD_01_dm01cel03       active
         DATA_CD_02_dm01cel03       active
         DATA_CD_03_dm01cel03       active
         DATA_CD_04_dm01cel03       active
         DATA_CD_05_dm01cel03       not present
         DATA_CD_06_dm01cel03       active
         DATA_CD_07_dm01cel03       active
         DATA_CD_08_dm01cel03       active
         DATA_CD_09_dm01cel03       active
         DATA_CD_10_dm01cel03       active
         DATA_CD_11_dm01cel03       active
         DBFS_DG_CD_02_dm01cel03    active
         DBFS_DG_CD_03_dm01cel03    active
         DBFS_DG_CD_04_dm01cel03    active
         DBFS_DG_CD_05_dm01cel03    not present
         DBFS_DG_CD_06_dm01cel03    active
         DBFS_DG_CD_07_dm01cel03    active
         DBFS_DG_CD_08_dm01cel03    active
         DBFS_DG_CD_09_dm01cel03    active
         DBFS_DG_CD_10_dm01cel03    active
         DBFS_DG_CD_11_dm01cel03    active
         RECO_CD_00_dm01cel03       active
         RECO_CD_01_dm01cel03       active
         RECO_CD_02_dm01cel03       active
         RECO_CD_03_dm01cel03       active
         RECO_CD_04_dm01cel03       active
         RECO_CD_05_dm01cel03       not present
         RECO_CD_06_dm01cel03       active
         RECO_CD_07_dm01cel03       active
         RECO_CD_08_dm01cel03       active
         RECO_CD_09_dm01cel03       active
         RECO_CD_10_dm01cel03       active
         RECO_CD_11_dm01cel03       active

  • List Physical Disk details

[root@dm01cel03 ~]# cellcli -e list physicaldisk where physicalSerial=PPZ47V detail
         name:                   8:5
         deviceId:               21
         deviceName:             /dev/sdf
         diskType:               HardDisk
         enclosureDeviceId:      8
         errOtherCount:          0
         luns:                   0_5
         makeModel:              "HGST    H7280A520SUN8.0T"
         physicalFirmware:       PD51
         physicalInsertTime:     2018-05-18T10:52:29-05:00
         physicalInterface:      sas
         physicalSerial:         PPZ47V
         physicalSize:           7.1536639072000980377197265625T
         slotNumber:             5
         status:                 normal

  • Let’s try to create the Cell Disk

[root@dm01cel03 ~]# cellcli -e create celldisk CD_05_dm01cel03 lun=0_5

CELL-02526: Pre-existing cell disk: CD_05_dm01cel03

It says the Cell Disk already exists.

  • Let’s try to create the Grid Disk. To create the Grid Disk with proper size, get the Grid Disk size from a good Cell Disk as shown below.

[root@dm01cel03 ~]# cellcli -e list griddisk where celldisk=CD_07_dm01cel03 attributes name,size,offset
         DATA_CD_07_dm01cel03       5.6953125T              32M
         DBFS_DG_CD_07_dm01cel03         33.796875G         7.1192474365234375T
         RECO_CD_07_dm01cel03       1.42388916015625T       5.6953582763671875T

  • Now create the Grid Disk

[root@dm01cel03 ~]# cellcli -e create griddisk DATA_CD_05_dm01cel03 celldisk=CD_05_dm01cel03,size=5.6953125T

CELL-02701: Cannot create grid disk on cell disk CD_05_dm01cel03 because its status is not normal.

Looks like we can’t create the Grid Disk. We will now drop the Cell Disk and recreate it.

  • Drop Cell Disk

CellCLI> drop celldisk CD_05_dm01cel03 force
CellDisk CD_05_dm01cel03 successfully dropped

  • Create Cell Disk

CellCLI> create celldisk CD_05_dm01cel03 lun=0_5
CellDisk CD_05_dm01cel03 successfully created

  • Create the Grid Disks with the proper sizes. Note the creation order used below (DATA, then RECO, then DBFS_DG): grid disk space is allocated from the outermost, fastest part of the disk first, so this order reproduces the offsets seen on the good cell disks.

CellCLI> create griddisk DATA_CD_05_dm01cel03 celldisk=CD_05_dm01cel03,size=5.6953125T
GridDisk DATA_CD_05_dm01cel03 successfully created

CellCLI> create griddisk RECO_CD_05_dm01cel03 celldisk=CD_05_dm01cel03,size=1.42388916015625T
GridDisk RECO_CD_05_dm01cel03 successfully created

CellCLI> create griddisk DBFS_DG_CD_05_dm01cel03 celldisk=CD_05_dm01cel03,size=33.796875G
GridDisk DBFS_DG_CD_05_dm01cel03 successfully created

  • List Grid Disks

CellCLI> list griddisk where celldisk=CD_05_dm01cel03 attributes name,size,offset
         DATA_CD_05_dm01cel03       5.6953125T              32M
         DBFS_DG_CD_05_dm01cel03         33.796875G              7.1192474365234375T
         RECO_CD_05_dm01cel03       1.42388916015625T       5.6953582763671875T

CellCLI> list griddisk
         DATA_CD_00_dm01cel03       active
         DATA_CD_01_dm01cel03       active
         DATA_CD_02_dm01cel03       active
         DATA_CD_03_dm01cel03       active
         DATA_CD_04_dm01cel03       active
         DATA_CD_05_dm01cel03       active
         DATA_CD_06_dm01cel03       active
         DATA_CD_07_dm01cel03       active
         DATA_CD_08_dm01cel03       active
         DATA_CD_09_dm01cel03       active
         DATA_CD_10_dm01cel03       active
         DATA_CD_11_dm01cel03       active
         DBFS_DG_CD_02_dm01cel03    active
         DBFS_DG_CD_03_dm01cel03    active
         DBFS_DG_CD_04_dm01cel03    active
         DBFS_DG_CD_05_dm01cel03    active
         DBFS_DG_CD_06_dm01cel03    active
         DBFS_DG_CD_07_dm01cel03    active
         DBFS_DG_CD_08_dm01cel03    active
         DBFS_DG_CD_09_dm01cel03    active
         DBFS_DG_CD_10_dm01cel03    active
         DBFS_DG_CD_11_dm01cel03    active
         RECO_CD_00_dm01cel03       active
         RECO_CD_01_dm01cel03       active
         RECO_CD_02_dm01cel03       active
         RECO_CD_03_dm01cel03       active
         RECO_CD_04_dm01cel03       active
         RECO_CD_05_dm01cel03       active
         RECO_CD_06_dm01cel03       active
         RECO_CD_07_dm01cel03       active
         RECO_CD_08_dm01cel03       active
         RECO_CD_09_dm01cel03       active
         RECO_CD_10_dm01cel03       active
         RECO_CD_11_dm01cel03       active

The grid disks now show as active. We can go ahead and add them to the ASM disk groups manually by connecting to the ASM instance.


  • Log into the +ASM1 instance and add the new disks. Set the rebalance power higher (here 11) to perform a faster rebalance operation.

dm01db01-orcldb1 {/home/oracle}:. oraenv
ORACLE_SID = [orcldb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Wed May 23 09:30:13 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup DATA add failgroup dm01CEL03 disk 'o/192.168.10.1;192.168.10.2/DATA_CD_05_dm01cel03' name DATA_CD_05_dm01cel03 rebalance power 11;

Diskgroup altered.

SQL> alter diskgroup RECO add failgroup dm01CEL03 disk 'o/192.168.10.1;192.168.10.2/RECO_CD_05_dm01cel03' name RECO_CD_05_dm01cel03 rebalance power 11;

Diskgroup altered.

SQL> alter diskgroup DBFS_DG add failgroup dm01CEL03 disk 'o/192.168.10.1;192.168.10.2/DBFS_DG_CD_05_dm01cel03' name DBFS_DG_CD_05_dm01cel03 rebalance power 11;

Diskgroup altered.

SQL> select a.name,a.total_mb,a.free_mb,a.type,
    decode(a.type,'NORMAL',a.total_mb/2,'HIGH',a.total_mb/3) avail_mb,
    decode(a.type,'NORMAL',a.free_mb/2,'HIGH',a.free_mb/3) usable_mb,
    count(b.path) cell_disks  from v$asm_diskgroup a, v$asm_disk b
    where a.group_number=b.group_number group by a.name,a.total_mb,a.free_mb,a.type,
    decode(a.type,'NORMAL',a.total_mb/2,'HIGH',a.total_mb/3) ,
    decode(a.type,'NORMAL',a.free_mb/2,'HIGH',a.free_mb/3)
   order by 2,1;

               Total MB    Free MB          Total MB    Free MB
Disk Group          Raw        Raw TYPE       Usable     Usable     CELL_DISKS
------------ ---------- ---------- ------ ---------- ---------- ----------
DBFS_DG    2076480    2074688 NORMAL    1038240    1037344         60
RECO     107500032   57573496 HIGH     35833344   19191165         72
DATA     429981696  282905064 HIGH    143327232   94301688         72

SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
------------ ----- ---- ---------- ---------- ---------- ---------- ---------- ----------- --------------------------------------------
           1 REBAL RUN          11         11      85992    6697959      11260         587
           3 REBAL WAIT         11


SQL> select * from gv$asm_operation;

no rows selected
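
To double-check that the new disks have joined their disk groups and are online, a query such as the following can be used (the LIKE pattern assumes the disk names created above; ASM stores disk names in uppercase):

SQL> select group_number, name, mode_status, header_status
  2    from v$asm_disk
  3   where name like '%CD_05_DM01CEL03';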


Conclusion

In this article we have learned how to create the cell disk and grid disks and add the newly created grid disks to the ASM disk groups. When a hard disk is replaced, the LUN, cell disk and grid disks are normally created automatically and the grid disks are added to the ASM disk groups without any manual intervention. In the odd cases where the cell disk and grid disks are not created automatically, you must create them manually and add them to the ASM disk groups.


Another Exadata Storage Software 18c update released! Oracle released Exadata Storage Software version 18.1.1.0.0 on October 26th, 2017; the earlier release was ESS version 18.1.0.0.0 on October 2nd, 2017. Patch 26875767 covers the storage server software and InfiniBand switch software. For more details, review the MOS note "Exadata 18.1.1.0.0 release and patch (26875767) (Doc ID 2311786.1)".

ESS version 18.1.1.0.0 supports the following Oracle Database software releases:

  • 12.2.0.1.0
  • 12.1.0.2.0
  • 12.1.0.1.0
  • 11.2.0.4.0
  • 11.2.0.3.0

The Exadata 18.1.1.0.0 software and image files for upgrade are:

  • Patch 26875767 - Storage server software (18.1.1.0.0.171018) and InfiniBand switch software (2.2.7-1)
  • Patch 26923500 - Database server bare metal / domU ULN exadata_dbserver_18.1.1.0.0_x86_64_base OL6 channel ISO image (18.1.1.0.0.171018)
  • Patch 26923501 - Database server dom0 ULN exadata_dbserver_dom0_18.1.1.0.0_x86_64_base OVM channel ISO image (18.1.1.0.0.171018)


Overview

Oracle ASM disk groups are built on a set of Exadata grid disks. Exadata uses ASM disk groups to store database files.


ASM Provides 3 Types of Redundancy:

External: ASM doesn’t provide redundancy. External redundancy is not an option on Exadata; you must use either normal or high redundancy.

Normal: It provides two-way data mirroring. ASM maintains two copies of each data block, in separate failure groups.

High: It provides three-way data mirroring. ASM maintains three copies of each data block, in separate failure groups.
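
You can confirm the redundancy of existing disk groups from any ASM instance; the TYPE column reports EXTERN, NORMAL or HIGH:

SQL> select name, type from v$asm_diskgroup;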


In this article I will demonstrate how to create ASM disk groups on an Exadata Database Machine using ASMCA.

Environment

Exadata Database machine X2-2
8 Compute nodes, 14 Storage cells and 2 IB Switches

Steps to create ASM disk Group using ASMCA utility.


Set the environment variable to Grid Home and start asmca


dm01db01-orcldb1 {/home/oracle}:. oraenv

ORACLE_SID = [orcldb1] ? +ASM1

The Oracle base has been changed from /u01/app/oracle to /u01/app/grid

dm01db01-+ASM1 {/home/oracle}:which asmca
/u01/app/12.2.0.1/grid/bin/asmca

dm01db01-+ASM1 {/home/oracle}:asmca

1. Create ASM Disk Group using Normal Redundancy (3 failure groups)

First we will create an ASM disk group with normal redundancy, using grid disks from 3 storage cells.

ASMCA starting

Click on ASM Instances on left pane

Here we are running Flex ASM, and ASM instances are currently running on nodes 1, 2 and 4

Click on Disk Groups. We can see there are currently 2 disk groups: one for the OCR/voting disks and another for the MGMT database repository. To create a new disk group, click the “Create” button.

Click on “Change Disk Discovery Path”

Enter the disk discovery path so ASM can see the grid disks. On Exadata this is typically of the form o/*/* (all grid disks from all cells) or a narrower pattern such as o/*/DATA*; use the value appropriate for your environment.

Select the desired grid disks to create the ASM disk group.
Here I am creating the DATA disk group by selecting the DATA grid disks from 3 storage cells

Click on “Show Advanced Options” and select the ASM/Database/ADVM compatibility. Finally click Ok to create DATA disk group

DATA disk group creation in progress

We can now see that the DATA disk group is created

Let’s verify the newly created DATA disk group. Right click on the DATA disk group and select “view status details”

We can see that the DATA disk group is mounted on node 1, 2 and 4
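
For reference, the same disk group can also be created from SQL*Plus instead of the GUI. This is only a sketch: the cell IP addresses and attribute values are illustrative and must be adapted to your environment.

SQL> create diskgroup DATA normal redundancy
  2  disk 'o/192.168.10.1;192.168.10.2/DATA*',
  3       'o/192.168.10.3;192.168.10.4/DATA*',
  4       'o/192.168.10.5;192.168.10.6/DATA*'
  5  attribute 'compatible.asm'          = '12.2.0.1',
  6            'compatible.rdbms'        = '12.2.0.1',
  7            'cell.smart_scan_capable' = 'TRUE',
  8            'au_size'                 = '4M';

With one DISK clause per storage cell, ASM automatically places each cell in its own failure group, which gives the 3 failure groups mentioned above.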

2. Create ASM Disk Group Using High Redundancy (5 failure groups)


Now let’s create another ASM disk group with high redundancy, using grid disks from 5 storage cells.

Click Create button to create new ASM disk group


Enter the Disk Group name, select the desired grid disks and ASM/Database/ADVM attributes and click ok

DATA1 disk group creation is in progress

We can see that DATA1 disk group created



3. Add disks to ASM Disk Group (add grid disks from one storage cell)

Now let’s add disks to the DATA1 disk group. I am going to add the DATA grid disks from one storage cell to DATA1.


Right click on the DATA1 disk group and select Add Disks


Select the desired grid disks from storage cell and click ok

Disks are being added to DATA1

We can see the size of the DATA1 disk group has increased
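
The GUI action corresponds to a SQL command of this form (the discovery path shown is illustrative):

SQL> alter diskgroup DATA1 add disk 'o/192.168.10.7;192.168.10.8/DATA*' rebalance power 11;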


 4. Drop disks from ASM Disk Group (remove grid disks from one storage cell)



This time let’s drop disks from the DATA1 disk group. I am going to remove the DATA grid disks of one storage cell that are used by DATA1.


Right click on DATA1 disk group and select Drop Disks

Process started

Select the desired Grid disks to be dropped from DATA1 disk group and click ok

Disks are being dropped from DATA1

We can see that the DATA1 disk group size has decreased
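
The SQL equivalent drops disks by their ASM disk names (the name shown is illustrative):

SQL> alter diskgroup DATA1 drop disk DATA_CD_00_DM01CEL04 rebalance power 11;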




Conclusion:


In this article we have learned about the different ASM disk group redundancy levels and how to create an ASM disk group on Exadata from a set of Exadata grid disks. We created disk groups with different redundancy levels and performed a few disk operations, such as adding and dropping disks.


Overview
In Exadata, ASM disk groups are created from ASM disks, which are provisioned as grid disks from the Exadata storage cells. The grid disks in turn are created from the cell disks. Normally there is no free space left on the cell disks, as all of the space is used by grid disks, as shown below:

[root@dm01cel01 ~]# cellcli -e "list celldisk where name like 'CD.*' attributes name, size, freespace"
         CD_00_dm01cel01         528.734375G     0
         CD_01_dm01cel01         528.734375G     0
         CD_02_dm01cel01         557.859375G     0
         CD_03_dm01cel01         557.859375G     0
         CD_04_dm01cel01         557.859375G     0
         CD_05_dm01cel01         557.859375G     0
         CD_06_dm01cel01         557.859375G     0
         CD_07_dm01cel01         557.859375G     0
         CD_08_dm01cel01         557.859375G     0
         CD_09_dm01cel01         557.859375G     0
         CD_10_dm01cel01         557.859375G     0
         CD_11_dm01cel01         557.859375G     0


In this article I will demonstrate how to free up some space from the grid disks in the RECO ASM disk group, and then reuse that space to increase the size of the DATA disk group. The freed space can be anywhere on the cell disks.


Environment
  • Exadata Full Rack X2-2
  • 8 Compute nodes, 14 Storage cells and 3 IB Switches
  • High Performance Disks (600GB per disk)

1. Free up space on celldisks
Let’s say we want to free up 50GB per disk in the RECO disk group. We first need to reduce the disk size in ASM, and then reduce the grid disk size on the Exadata storage cells. Let’s do that for the RECO disk group.

We start with the RECO grid disks at their current size of 105.6875G:

[root@dm01cel01 ~]# cellcli -e "list griddisk where name like 'RECO.*' attributes name, size"
         RECO_dm01_CD_00_dm01cel01       105.6875G
         RECO_dm01_CD_01_dm01cel01       105.6875G
         RECO_dm01_CD_02_dm01cel01       105.6875G
         RECO_dm01_CD_03_dm01cel01       105.6875G
         RECO_dm01_CD_04_dm01cel01       105.6875G
         RECO_dm01_CD_05_dm01cel01       105.6875G
         RECO_dm01_CD_06_dm01cel01       105.6875G
         RECO_dm01_CD_07_dm01cel01       105.6875G
         RECO_dm01_CD_08_dm01cel01       105.6875G
         RECO_dm01_CD_09_dm01cel01       105.6875G
         RECO_dm01_CD_10_dm01cel01       105.6875G
         RECO_dm01_CD_11_dm01cel01       105.6875G


To free up 50GB, the new grid disk size will be 105.6875G - 50G = 55.6875G, which is 55.6875 x 1024 = 57024M.

2. Reduce size of RECO disks in ASM

dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 18 04:16:57 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup RECO_dm01 resize all size 57024M rebalance power 32;

Diskgroup altered.


This command triggers a rebalance operation on the RECO disk group.

3. Monitor the rebalance with the following command:
 

SQL> set lines 200
SQL> set pages 200
SQL> select * from gv$asm_operation;

   INST_ID GROUP_NUMBER OPERA PASS      STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE                                       CON_ID
---------- ------------ ----- --------- ---- ---------- ---------- ---------- ---------- ---------- ----------- -------------------------------------------- ----------
         2            2 REBAL RESYNC    DONE         32                                                                                                               0
         2            2 REBAL RESILVER  DONE         32                                                                                                               0
         2            2 REBAL REBALANCE WAIT         32                                                                                                               0
         2            2 REBAL COMPACT   WAIT         32                                                                                                               0
         6            2 REBAL RESYNC    DONE         32                                                                                                               0
         6            2 REBAL RESILVER  DONE         32                                                                                                               0
         6            2 REBAL REBALANCE WAIT         32                                                                                                               0
         6            2 REBAL COMPACT   WAIT         32                                                                                                               0
         4            2 REBAL RESYNC    DONE         32                                                                                                               0
         4            2 REBAL RESILVER  DONE         32                                                                                                               0
         4            2 REBAL REBALANCE WAIT         32                                                                                                               0
         4            2 REBAL COMPACT   WAIT         32                                                                                                               0
         3            2 REBAL RESYNC    DONE         32                                                                                                               0
         3            2 REBAL RESILVER  DONE         32                                                                                                               0
         3            2 REBAL REBALANCE WAIT         32                                                                                                               0
         3            2 REBAL COMPACT   WAIT         32                                                                                                               0
         8            2 REBAL RESYNC    DONE         32                                                                                                               0
         8            2 REBAL RESILVER  DONE         32                                                                                                               0
         8            2 REBAL REBALANCE WAIT         32                                                                                                               0
         8            2 REBAL COMPACT   WAIT         32                                                                                                               0
         5            2 REBAL RESYNC    DONE         32                                                                                                               0
         5            2 REBAL RESILVER  DONE         32                                                                                                               0
         5            2 REBAL REBALANCE WAIT         32                                                                                                               0
         5            2 REBAL COMPACT   WAIT         32                                                                                                               0
         7            2 REBAL RESYNC    DONE         32                                                                                                               0
         7            2 REBAL RESILVER  DONE         32                                                                                                               0
         7            2 REBAL REBALANCE WAIT         32                                                                                                               0
         7            2 REBAL COMPACT   WAIT         32                                                                                                               0
         1            2 REBAL RESYNC    DONE         32         32          0          0          0           0                                                       0
         1            2 REBAL RESILVER  DONE         32         32          0          0          0           0                                                       0
         1            2 REBAL REBALANCE EST          32         32          0          0          0           0                                                       0
         1            2 REBAL COMPACT   WAIT         32         32          0          0          0           0                                                       0

32 rows selected.

SQL> select * from gv$asm_operation;

no rows selected


Once the query returns “no rows selected”, the rebalance has completed and all disks in the RECO disk group should show the new size.

SQL> select name, total_mb from v$asm_disk_stat where name like 'RECO%';

NAME                             TOTAL_MB
------------------------------ ----------
RECO_dm01_CD_02_dm01CEL01           57024
RECO_dm01_CD_05_dm01CEL01           57024
RECO_dm01_CD_06_dm01CEL01           57024
RECO_dm01_CD_08_dm01CEL01           57024
RECO_dm01_CD_04_dm01CEL01           57024
RECO_dm01_CD_00_dm01CEL01           57024
RECO_dm01_CD_03_dm01CEL01           57024
RECO_dm01_CD_09_dm01CEL01           57024
RECO_dm01_CD_07_dm01CEL01           57024
RECO_dm01_CD_11_dm01CEL01           57024
RECO_dm01_CD_10_dm01CEL01           57024
RECO_dm01_CD_01_dm01CEL01           57024
RECO_dm01_CD_05_dm01CEL02           57024
RECO_dm01_CD_07_dm01CEL02           57024
RECO_dm01_CD_01_dm01CEL02           57024
RECO_dm01_CD_04_dm01CEL02           57024
RECO_dm01_CD_10_dm01CEL02           57024
RECO_dm01_CD_03_dm01CEL02           57024
RECO_dm01_CD_00_dm01CEL02           57024
RECO_dm01_CD_08_dm01CEL02           57024
RECO_dm01_CD_06_dm01CEL02           57024
RECO_dm01_CD_02_dm01CEL02           57024
RECO_dm01_CD_11_dm01CEL02           57024
RECO_dm01_CD_09_dm01CEL02           57024

…..
RECO_dm01_CD_10_dm01CEL14           57024
RECO_dm01_CD_02_dm01CEL14           57024
RECO_dm01_CD_05_dm01CEL14           57024
RECO_dm01_CD_03_dm01CEL14           57024
RECO_dm01_CD_00_dm01CEL14           57024
RECO_dm01_CD_01_dm01CEL14           57024
RECO_dm01_CD_04_dm01CEL14           57024
RECO_dm01_CD_09_dm01CEL14           57024
RECO_dm01_CD_11_dm01CEL14           57024
RECO_dm01_CD_07_dm01CEL14           57024
RECO_dm01_CD_06_dm01CEL14           57024
RECO_dm01_CD_08_dm01CEL14           57024

168 rows selected.
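
A quicker sanity check than scanning all 168 rows is to look at the distinct sizes; once the rebalance has completed, a query like the following should return the single value 57024:

SQL> select distinct total_mb from v$asm_disk_stat where name like 'RECO%';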


4. Reduce size of RECO disks in storage cells

[root@dm01cel01 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:12:33 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,004

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL01, RECO_dm01_CD_01_dm01CEL01, RECO_dm01_CD_02_dm01CEL01, RECO_dm01_CD_03_dm01CEL01, RECO_dm01_CD_04_dm01CEL01, RECO_dm01_CD_05_dm01CEL01, RECO_dm01_CD_06_dm01CEL01, RECO_dm01_CD_07_dm01CEL01, RECO_dm01_CD_08_dm01CEL01, RECO_dm01_CD_09_dm01CEL01, RECO_dm01_CD_10_dm01CEL01, RECO_dm01_CD_11_dm01CEL01 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel01 successfully altered
grid disk RECO_dm01_CD_01_dm01cel01 successfully altered
grid disk RECO_dm01_CD_02_dm01cel01 successfully altered
grid disk RECO_dm01_CD_03_dm01cel01 successfully altered
grid disk RECO_dm01_CD_04_dm01cel01 successfully altered
grid disk RECO_dm01_CD_05_dm01cel01 successfully altered
grid disk RECO_dm01_CD_06_dm01cel01 successfully altered
grid disk RECO_dm01_CD_07_dm01cel01 successfully altered
grid disk RECO_dm01_CD_08_dm01cel01 successfully altered
grid disk RECO_dm01_CD_09_dm01cel01 successfully altered
grid disk RECO_dm01_CD_10_dm01cel01 successfully altered
grid disk RECO_dm01_CD_11_dm01cel01 successfully altered

[root@dm01cel01 ~]# ssh dm01cel02
Last login: Sun Feb 28 10:22:27 2016 from dm01cel01
[root@dm01cel02 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:22:50 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 6,999

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL02, RECO_dm01_CD_01_dm01CEL02, RECO_dm01_CD_02_dm01CEL02, RECO_dm01_CD_03_dm01CEL02, RECO_dm01_CD_04_dm01CEL02, RECO_dm01_CD_05_dm01CEL02, RECO_dm01_CD_06_dm01CEL02, RECO_dm01_CD_07_dm01CEL02, RECO_dm01_CD_08_dm01CEL02, RECO_dm01_CD_09_dm01CEL02, RECO_dm01_CD_10_dm01CEL02, RECO_dm01_CD_11_dm01CEL02 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel02 successfully altered
grid disk RECO_dm01_CD_01_dm01cel02 successfully altered
grid disk RECO_dm01_CD_02_dm01cel02 successfully altered
grid disk RECO_dm01_CD_03_dm01cel02 successfully altered
grid disk RECO_dm01_CD_04_dm01cel02 successfully altered
grid disk RECO_dm01_CD_05_dm01cel02 successfully altered
grid disk RECO_dm01_CD_06_dm01cel02 successfully altered
grid disk RECO_dm01_CD_07_dm01cel02 successfully altered
grid disk RECO_dm01_CD_08_dm01cel02 successfully altered
grid disk RECO_dm01_CD_09_dm01cel02 successfully altered
grid disk RECO_dm01_CD_10_dm01cel02 successfully altered
grid disk RECO_dm01_CD_11_dm01cel02 successfully altered

[root@dm01cel01 ~]# ssh dm01cel03
Last login: Mon Mar 28 13:24:31 2016 from dm01db01
[root@dm01cel03 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:23:40 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,599

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL03, RECO_dm01_CD_01_dm01CEL03, RECO_dm01_CD_02_dm01CEL03, RECO_dm01_CD_03_dm01CEL03, RECO_dm01_CD_04_dm01CEL03, RECO_dm01_CD_05_dm01CEL03, RECO_dm01_CD_06_dm01CEL03, RECO_dm01_CD_07_dm01CEL03, RECO_dm01_CD_08_dm01CEL03, RECO_dm01_CD_09_dm01CEL03, RECO_dm01_CD_10_dm01CEL03, RECO_dm01_CD_11_dm01CEL03 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel03 successfully altered
grid disk RECO_dm01_CD_01_dm01cel03 successfully altered
grid disk RECO_dm01_CD_02_dm01cel03 successfully altered
grid disk RECO_dm01_CD_03_dm01cel03 successfully altered
grid disk RECO_dm01_CD_04_dm01cel03 successfully altered
grid disk RECO_dm01_CD_05_dm01cel03 successfully altered
grid disk RECO_dm01_CD_06_dm01cel03 successfully altered
grid disk RECO_dm01_CD_07_dm01cel03 successfully altered
grid disk RECO_dm01_CD_08_dm01cel03 successfully altered
grid disk RECO_dm01_CD_09_dm01cel03 successfully altered
grid disk RECO_dm01_CD_10_dm01cel03 successfully altered
grid disk RECO_dm01_CD_11_dm01cel03 successfully altered

[root@dm01cel03 ~]# ssh dm01cel04
Last login: Sun Feb 28 10:23:17 2016 from dm01cel02
[root@dm01cel04 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:24:27 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,140

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL04, RECO_dm01_CD_01_dm01CEL04, RECO_dm01_CD_02_dm01CEL04, RECO_dm01_CD_03_dm01CEL04, RECO_dm01_CD_04_dm01CEL04, RECO_dm01_CD_05_dm01CEL04, RECO_dm01_CD_06_dm01CEL04, RECO_dm01_CD_07_dm01CEL04, RECO_dm01_CD_08_dm01CEL04, RECO_dm01_CD_09_dm01CEL04, RECO_dm01_CD_10_dm01CEL04, RECO_dm01_CD_11_dm01CEL04 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel04 successfully altered
grid disk RECO_dm01_CD_01_dm01cel04 successfully altered
grid disk RECO_dm01_CD_02_dm01cel04 successfully altered
grid disk RECO_dm01_CD_03_dm01cel04 successfully altered
grid disk RECO_dm01_CD_04_dm01cel04 successfully altered
grid disk RECO_dm01_CD_05_dm01cel04 successfully altered
grid disk RECO_dm01_CD_06_dm01cel04 successfully altered
grid disk RECO_dm01_CD_07_dm01cel04 successfully altered
grid disk RECO_dm01_CD_08_dm01cel04 successfully altered
grid disk RECO_dm01_CD_09_dm01cel04 successfully altered
grid disk RECO_dm01_CD_10_dm01cel04 successfully altered
grid disk RECO_dm01_CD_11_dm01cel04 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL05, RECO_dm01_CD_01_dm01CEL05, RECO_dm01_CD_02_dm01CEL05, RECO_dm01_CD_03_dm01CEL05, RECO_dm01_CD_04_dm01CEL05, RECO_dm01_CD_05_dm01CEL05, RECO_dm01_CD_06_dm01CEL05, RECO_dm01_CD_07_dm01CEL05, RECO_dm01_CD_08_dm01CEL05, RECO_dm01_CD_09_dm01CEL05, RECO_dm01_CD_10_dm01CEL05, RECO_dm01_CD_11_dm01CEL05 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel05 successfully altered
grid disk RECO_dm01_CD_01_dm01cel05 successfully altered
grid disk RECO_dm01_CD_02_dm01cel05 successfully altered
grid disk RECO_dm01_CD_03_dm01cel05 successfully altered
grid disk RECO_dm01_CD_04_dm01cel05 successfully altered
grid disk RECO_dm01_CD_05_dm01cel05 successfully altered
grid disk RECO_dm01_CD_06_dm01cel05 successfully altered
grid disk RECO_dm01_CD_07_dm01cel05 successfully altered
grid disk RECO_dm01_CD_08_dm01cel05 successfully altered
grid disk RECO_dm01_CD_09_dm01cel05 successfully altered
grid disk RECO_dm01_CD_10_dm01cel05 successfully altered
grid disk RECO_dm01_CD_11_dm01cel05 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL06, RECO_dm01_CD_01_dm01CEL06, RECO_dm01_CD_02_dm01CEL06, RECO_dm01_CD_03_dm01CEL06, RECO_dm01_CD_04_dm01CEL06, RECO_dm01_CD_05_dm01CEL06, RECO_dm01_CD_06_dm01CEL06, RECO_dm01_CD_07_dm01CEL06, RECO_dm01_CD_08_dm01CEL06, RECO_dm01_CD_09_dm01CEL06, RECO_dm01_CD_10_dm01CEL06, RECO_dm01_CD_11_dm01CEL06 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel06 successfully altered
grid disk RECO_dm01_CD_01_dm01cel06 successfully altered
grid disk RECO_dm01_CD_02_dm01cel06 successfully altered
grid disk RECO_dm01_CD_03_dm01cel06 successfully altered
grid disk RECO_dm01_CD_04_dm01cel06 successfully altered
grid disk RECO_dm01_CD_05_dm01cel06 successfully altered
grid disk RECO_dm01_CD_06_dm01cel06 successfully altered
grid disk RECO_dm01_CD_07_dm01cel06 successfully altered
grid disk RECO_dm01_CD_08_dm01cel06 successfully altered
grid disk RECO_dm01_CD_09_dm01cel06 successfully altered
grid disk RECO_dm01_CD_10_dm01cel06 successfully altered
grid disk RECO_dm01_CD_11_dm01cel06 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL07, RECO_dm01_CD_01_dm01CEL07, RECO_dm01_CD_02_dm01CEL07, RECO_dm01_CD_03_dm01CEL07, RECO_dm01_CD_04_dm01CEL07, RECO_dm01_CD_05_dm01CEL07, RECO_dm01_CD_06_dm01CEL07, RECO_dm01_CD_07_dm01CEL07, RECO_dm01_CD_08_dm01CEL07, RECO_dm01_CD_09_dm01CEL07, RECO_dm01_CD_10_dm01CEL07, RECO_dm01_CD_11_dm01CEL07 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel07 successfully altered
grid disk RECO_dm01_CD_01_dm01cel07 successfully altered
grid disk RECO_dm01_CD_02_dm01cel07 successfully altered
grid disk RECO_dm01_CD_03_dm01cel07 successfully altered
grid disk RECO_dm01_CD_04_dm01cel07 successfully altered
grid disk RECO_dm01_CD_05_dm01cel07 successfully altered
grid disk RECO_dm01_CD_06_dm01cel07 successfully altered
grid disk RECO_dm01_CD_07_dm01cel07 successfully altered
grid disk RECO_dm01_CD_08_dm01cel07 successfully altered
grid disk RECO_dm01_CD_09_dm01cel07 successfully altered
grid disk RECO_dm01_CD_10_dm01cel07 successfully altered
grid disk RECO_dm01_CD_11_dm01cel07 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL08, RECO_dm01_CD_01_dm01CEL08, RECO_dm01_CD_02_dm01CEL08, RECO_dm01_CD_03_dm01CEL08, RECO_dm01_CD_04_dm01CEL08, RECO_dm01_CD_05_dm01CEL08, RECO_dm01_CD_06_dm01CEL08, RECO_dm01_CD_07_dm01CEL08, RECO_dm01_CD_08_dm01CEL08, RECO_dm01_CD_09_dm01CEL08, RECO_dm01_CD_10_dm01CEL08, RECO_dm01_CD_11_dm01CEL08 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel08 successfully altered
grid disk RECO_dm01_CD_01_dm01cel08 successfully altered
grid disk RECO_dm01_CD_02_dm01cel08 successfully altered
grid disk RECO_dm01_CD_03_dm01cel08 successfully altered
grid disk RECO_dm01_CD_04_dm01cel08 successfully altered
grid disk RECO_dm01_CD_05_dm01cel08 successfully altered
grid disk RECO_dm01_CD_06_dm01cel08 successfully altered
grid disk RECO_dm01_CD_07_dm01cel08 successfully altered
grid disk RECO_dm01_CD_08_dm01cel08 successfully altered
grid disk RECO_dm01_CD_09_dm01cel08 successfully altered
grid disk RECO_dm01_CD_10_dm01cel08 successfully altered
grid disk RECO_dm01_CD_11_dm01cel08 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL09, RECO_dm01_CD_01_dm01CEL09, RECO_dm01_CD_02_dm01CEL09, RECO_dm01_CD_03_dm01CEL09, RECO_dm01_CD_04_dm01CEL09, RECO_dm01_CD_05_dm01CEL09, RECO_dm01_CD_06_dm01CEL09, RECO_dm01_CD_07_dm01CEL09, RECO_dm01_CD_08_dm01CEL09, RECO_dm01_CD_09_dm01CEL09, RECO_dm01_CD_10_dm01CEL09, RECO_dm01_CD_11_dm01CEL09 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel09 successfully altered
grid disk RECO_dm01_CD_01_dm01cel09 successfully altered
grid disk RECO_dm01_CD_02_dm01cel09 successfully altered
grid disk RECO_dm01_CD_03_dm01cel09 successfully altered
grid disk RECO_dm01_CD_04_dm01cel09 successfully altered
grid disk RECO_dm01_CD_05_dm01cel09 successfully altered
grid disk RECO_dm01_CD_06_dm01cel09 successfully altered
grid disk RECO_dm01_CD_07_dm01cel09 successfully altered
grid disk RECO_dm01_CD_08_dm01cel09 successfully altered
grid disk RECO_dm01_CD_09_dm01cel09 successfully altered
grid disk RECO_dm01_CD_10_dm01cel09 successfully altered
grid disk RECO_dm01_CD_11_dm01cel09 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL10, RECO_dm01_CD_01_dm01CEL10, RECO_dm01_CD_02_dm01CEL10, RECO_dm01_CD_03_dm01CEL10, RECO_dm01_CD_04_dm01CEL10, RECO_dm01_CD_05_dm01CEL10, RECO_dm01_CD_06_dm01CEL10, RECO_dm01_CD_07_dm01CEL10, RECO_dm01_CD_08_dm01CEL10, RECO_dm01_CD_09_dm01CEL10, RECO_dm01_CD_10_dm01CEL10, RECO_dm01_CD_11_dm01CEL10 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel10 successfully altered
grid disk RECO_dm01_CD_01_dm01cel10 successfully altered
grid disk RECO_dm01_CD_02_dm01cel10 successfully altered
grid disk RECO_dm01_CD_03_dm01cel10 successfully altered
grid disk RECO_dm01_CD_04_dm01cel10 successfully altered
grid disk RECO_dm01_CD_05_dm01cel10 successfully altered
grid disk RECO_dm01_CD_06_dm01cel10 successfully altered
grid disk RECO_dm01_CD_07_dm01cel10 successfully altered
grid disk RECO_dm01_CD_08_dm01cel10 successfully altered
grid disk RECO_dm01_CD_09_dm01cel10 successfully altered
grid disk RECO_dm01_CD_10_dm01cel10 successfully altered
grid disk RECO_dm01_CD_11_dm01cel10 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL11, RECO_dm01_CD_01_dm01CEL11, RECO_dm01_CD_02_dm01CEL11, RECO_dm01_CD_03_dm01CEL11, RECO_dm01_CD_04_dm01CEL11, RECO_dm01_CD_05_dm01CEL11, RECO_dm01_CD_06_dm01CEL11, RECO_dm01_CD_07_dm01CEL11, RECO_dm01_CD_08_dm01CEL11, RECO_dm01_CD_09_dm01CEL11, RECO_dm01_CD_10_dm01CEL11, RECO_dm01_CD_11_dm01CEL11 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel11 successfully altered
grid disk RECO_dm01_CD_01_dm01cel11 successfully altered
grid disk RECO_dm01_CD_02_dm01cel11 successfully altered
grid disk RECO_dm01_CD_03_dm01cel11 successfully altered
grid disk RECO_dm01_CD_04_dm01cel11 successfully altered
grid disk RECO_dm01_CD_05_dm01cel11 successfully altered
grid disk RECO_dm01_CD_06_dm01cel11 successfully altered
grid disk RECO_dm01_CD_07_dm01cel11 successfully altered
grid disk RECO_dm01_CD_08_dm01cel11 successfully altered
grid disk RECO_dm01_CD_09_dm01cel11 successfully altered
grid disk RECO_dm01_CD_10_dm01cel11 successfully altered
grid disk RECO_dm01_CD_11_dm01cel11 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL12, RECO_dm01_CD_01_dm01CEL12, RECO_dm01_CD_02_dm01CEL12, RECO_dm01_CD_03_dm01CEL12, RECO_dm01_CD_04_dm01CEL12, RECO_dm01_CD_05_dm01CEL12, RECO_dm01_CD_06_dm01CEL12, RECO_dm01_CD_07_dm01CEL12, RECO_dm01_CD_08_dm01CEL12, RECO_dm01_CD_09_dm01CEL12, RECO_dm01_CD_10_dm01CEL12, RECO_dm01_CD_11_dm01CEL12 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel12 successfully altered
grid disk RECO_dm01_CD_01_dm01cel12 successfully altered
grid disk RECO_dm01_CD_02_dm01cel12 successfully altered
grid disk RECO_dm01_CD_03_dm01cel12 successfully altered
grid disk RECO_dm01_CD_04_dm01cel12 successfully altered
grid disk RECO_dm01_CD_05_dm01cel12 successfully altered
grid disk RECO_dm01_CD_06_dm01cel12 successfully altered
grid disk RECO_dm01_CD_07_dm01cel12 successfully altered
grid disk RECO_dm01_CD_08_dm01cel12 successfully altered
grid disk RECO_dm01_CD_09_dm01cel12 successfully altered
grid disk RECO_dm01_CD_10_dm01cel12 successfully altered
grid disk RECO_dm01_CD_11_dm01cel12 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL13, RECO_dm01_CD_01_dm01CEL13, RECO_dm01_CD_02_dm01CEL13, RECO_dm01_CD_03_dm01CEL13, RECO_dm01_CD_04_dm01CEL13, RECO_dm01_CD_05_dm01CEL13, RECO_dm01_CD_06_dm01CEL13, RECO_dm01_CD_07_dm01CEL13, RECO_dm01_CD_08_dm01CEL13, RECO_dm01_CD_09_dm01CEL13, RECO_dm01_CD_10_dm01CEL13, RECO_dm01_CD_11_dm01CEL13 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel13 successfully altered
grid disk RECO_dm01_CD_01_dm01cel13 successfully altered
grid disk RECO_dm01_CD_02_dm01cel13 successfully altered
grid disk RECO_dm01_CD_03_dm01cel13 successfully altered
grid disk RECO_dm01_CD_04_dm01cel13 successfully altered
grid disk RECO_dm01_CD_05_dm01cel13 successfully altered
grid disk RECO_dm01_CD_06_dm01cel13 successfully altered
grid disk RECO_dm01_CD_07_dm01cel13 successfully altered
grid disk RECO_dm01_CD_08_dm01cel13 successfully altered
grid disk RECO_dm01_CD_09_dm01cel13 successfully altered
grid disk RECO_dm01_CD_10_dm01cel13 successfully altered
grid disk RECO_dm01_CD_11_dm01cel13 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL14, RECO_dm01_CD_01_dm01CEL14, RECO_dm01_CD_02_dm01CEL14, RECO_dm01_CD_03_dm01CEL14, RECO_dm01_CD_04_dm01CEL14, RECO_dm01_CD_05_dm01CEL14, RECO_dm01_CD_06_dm01CEL14, RECO_dm01_CD_07_dm01CEL14, RECO_dm01_CD_08_dm01CEL14, RECO_dm01_CD_09_dm01CEL14, RECO_dm01_CD_10_dm01CEL14, RECO_dm01_CD_11_dm01CEL14 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel14 successfully altered
grid disk RECO_dm01_CD_01_dm01cel14 successfully altered
grid disk RECO_dm01_CD_02_dm01cel14 successfully altered
grid disk RECO_dm01_CD_03_dm01cel14 successfully altered
grid disk RECO_dm01_CD_04_dm01cel14 successfully altered
grid disk RECO_dm01_CD_05_dm01cel14 successfully altered
grid disk RECO_dm01_CD_06_dm01cel14 successfully altered
grid disk RECO_dm01_CD_07_dm01cel14 successfully altered
grid disk RECO_dm01_CD_08_dm01cel14 successfully altered
grid disk RECO_dm01_CD_09_dm01cel14 successfully altered
grid disk RECO_dm01_CD_10_dm01cel14 successfully altered
grid disk RECO_dm01_CD_11_dm01cel14 successfully altered


Now we have some free space on the cell disks:

[root@dm01cel01 ~]# cellcli -e "list celldisk where name like 'CD.*' attributes name, size, freespace"
         CD_00_dm01cel01         528.734375G     50G
         CD_01_dm01cel01         528.734375G     50G
         CD_02_dm01cel01         557.859375G     50G
         CD_03_dm01cel01         557.859375G     50G
         CD_04_dm01cel01         557.859375G     50G
         CD_05_dm01cel01         557.859375G     50G
         CD_06_dm01cel01         557.859375G     50G
         CD_07_dm01cel01         557.859375G     50G
         CD_08_dm01cel01         557.859375G     50G
         CD_09_dm01cel01         557.859375G     50G
         CD_10_dm01cel01         557.859375G     50G
         CD_11_dm01cel01         557.859375G     50G
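
The listing above covers a single cell. To confirm the freed space on every storage server in one pass, the same CellCLI command can be run through dcli (a sketch, assuming a cell_group file that lists all storage cells):

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list celldisk where name like 'CD.*' attributes name, size, freespace"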


5. Increase size of DATA disks in storage cells

We can now increase the size of the DATA grid disks, and then increase the size of all disks in disk group DATA in ASM.

The current DATA grid disk size is 423 GB:

[root@dm01cel01 ~]# cellcli -e "list griddisk where name like 'DATA.*' attributes name, size"
         DATA_dm01_CD_00_dm01cel01       423G
         DATA_dm01_CD_01_dm01cel01       423G
         DATA_dm01_CD_02_dm01cel01       423G
         DATA_dm01_CD_03_dm01cel01       423G
         DATA_dm01_CD_04_dm01cel01       423G
         DATA_dm01_CD_05_dm01cel01       423G
         DATA_dm01_CD_06_dm01cel01       423G
         DATA_dm01_CD_07_dm01cel01       423G
         DATA_dm01_CD_08_dm01cel01       423G
         DATA_dm01_CD_09_dm01cel01       423G
         DATA_dm01_CD_10_dm01cel01       423G
         DATA_dm01_CD_11_dm01cel01       423G


The new grid disk size will be 423 GB + 50 GB = 473 GB.
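
The commands below pass this size to CellCLI in megabytes; the value is simply 473 * 1024:

$ echo $((473 * 1024))
484352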

Resize the DATA grid disks on all storage cells. On storage cell 1, the command would be:

[root@dm01cel01 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:39:49 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,004

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL01, DATA_dm01_CD_01_dm01CEL01, DATA_dm01_CD_02_dm01CEL01, DATA_dm01_CD_03_dm01CEL01, DATA_dm01_CD_04_dm01CEL01, DATA_dm01_CD_05_dm01CEL01, DATA_dm01_CD_06_dm01CEL01, DATA_dm01_CD_07_dm01CEL01, DATA_dm01_CD_08_dm01CEL01, DATA_dm01_CD_09_dm01CEL01, DATA_dm01_CD_10_dm01CEL01, DATA_dm01_CD_11_dm01CEL01 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel01 successfully altered
grid disk DATA_dm01_CD_01_dm01cel01 successfully altered
grid disk DATA_dm01_CD_02_dm01cel01 successfully altered
grid disk DATA_dm01_CD_03_dm01cel01 successfully altered
grid disk DATA_dm01_CD_04_dm01cel01 successfully altered
grid disk DATA_dm01_CD_05_dm01cel01 successfully altered
grid disk DATA_dm01_CD_06_dm01cel01 successfully altered
grid disk DATA_dm01_CD_07_dm01cel01 successfully altered
grid disk DATA_dm01_CD_08_dm01cel01 successfully altered
grid disk DATA_dm01_CD_09_dm01cel01 successfully altered
grid disk DATA_dm01_CD_10_dm01cel01 successfully altered
grid disk DATA_dm01_CD_11_dm01cel01 successfully altered

[root@dm01cel01 ~]# ssh dm01cel02
Last login: Wed Jan 18 05:22:46 2017 from dm01cel01
[root@dm01cel02 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:41:01 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 6,999

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL02, DATA_dm01_CD_01_dm01CEL02, DATA_dm01_CD_02_dm01CEL02, DATA_dm01_CD_03_dm01CEL02, DATA_dm01_CD_04_dm01CEL02, DATA_dm01_CD_05_dm01CEL02, DATA_dm01_CD_06_dm01CEL02, DATA_dm01_CD_07_dm01CEL02, DATA_dm01_CD_08_dm01CEL02, DATA_dm01_CD_09_dm01CEL02, DATA_dm01_CD_10_dm01CEL02, DATA_dm01_CD_11_dm01CEL02 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel02 successfully altered
grid disk DATA_dm01_CD_01_dm01cel02 successfully altered
grid disk DATA_dm01_CD_02_dm01cel02 successfully altered
grid disk DATA_dm01_CD_03_dm01cel02 successfully altered
grid disk DATA_dm01_CD_04_dm01cel02 successfully altered
grid disk DATA_dm01_CD_05_dm01cel02 successfully altered
grid disk DATA_dm01_CD_06_dm01cel02 successfully altered
grid disk DATA_dm01_CD_07_dm01cel02 successfully altered
grid disk DATA_dm01_CD_08_dm01cel02 successfully altered
grid disk DATA_dm01_CD_09_dm01cel02 successfully altered
grid disk DATA_dm01_CD_10_dm01cel02 successfully altered
grid disk DATA_dm01_CD_11_dm01cel02 successfully altered

[root@dm01cel02 ~]# ssh dm01cel03
Last login: Wed Jan 18 05:23:38 2017 from dm01cel01
[root@dm01cel03 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:41:49 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,599

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL03, DATA_dm01_CD_01_dm01CEL03, DATA_dm01_CD_02_dm01CEL03, DATA_dm01_CD_03_dm01CEL03, DATA_dm01_CD_04_dm01CEL03, DATA_dm01_CD_05_dm01CEL03, DATA_dm01_CD_06_dm01CEL03, DATA_dm01_CD_07_dm01CEL03, DATA_dm01_CD_08_dm01CEL03, DATA_dm01_CD_09_dm01CEL03, DATA_dm01_CD_10_dm01CEL03, DATA_dm01_CD_11_dm01CEL03 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel03 successfully altered
grid disk DATA_dm01_CD_01_dm01cel03 successfully altered
grid disk DATA_dm01_CD_02_dm01cel03 successfully altered
grid disk DATA_dm01_CD_03_dm01cel03 successfully altered
grid disk DATA_dm01_CD_04_dm01cel03 successfully altered
grid disk DATA_dm01_CD_05_dm01cel03 successfully altered
grid disk DATA_dm01_CD_06_dm01cel03 successfully altered
grid disk DATA_dm01_CD_07_dm01cel03 successfully altered
grid disk DATA_dm01_CD_08_dm01cel03 successfully altered
grid disk DATA_dm01_CD_09_dm01cel03 successfully altered
grid disk DATA_dm01_CD_10_dm01cel03 successfully altered
grid disk DATA_dm01_CD_11_dm01cel03 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL04, DATA_dm01_CD_01_dm01CEL04, DATA_dm01_CD_02_dm01CEL04, DATA_dm01_CD_03_dm01CEL04, DATA_dm01_CD_04_dm01CEL04, DATA_dm01_CD_05_dm01CEL04, DATA_dm01_CD_06_dm01CEL04, DATA_dm01_CD_07_dm01CEL04, DATA_dm01_CD_08_dm01CEL04, DATA_dm01_CD_09_dm01CEL04, DATA_dm01_CD_10_dm01CEL04, DATA_dm01_CD_11_dm01CEL04 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel04 successfully altered
grid disk DATA_dm01_CD_01_dm01cel04 successfully altered
grid disk DATA_dm01_CD_02_dm01cel04 successfully altered
grid disk DATA_dm01_CD_03_dm01cel04 successfully altered
grid disk DATA_dm01_CD_04_dm01cel04 successfully altered
grid disk DATA_dm01_CD_05_dm01cel04 successfully altered
grid disk DATA_dm01_CD_06_dm01cel04 successfully altered
grid disk DATA_dm01_CD_07_dm01cel04 successfully altered
grid disk DATA_dm01_CD_08_dm01cel04 successfully altered
grid disk DATA_dm01_CD_09_dm01cel04 successfully altered
grid disk DATA_dm01_CD_10_dm01cel04 successfully altered
grid disk DATA_dm01_CD_11_dm01cel04 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL05, DATA_dm01_CD_01_dm01CEL05, DATA_dm01_CD_02_dm01CEL05, DATA_dm01_CD_03_dm01CEL05, DATA_dm01_CD_04_dm01CEL05, DATA_dm01_CD_05_dm01CEL05, DATA_dm01_CD_06_dm01CEL05, DATA_dm01_CD_07_dm01CEL05, DATA_dm01_CD_08_dm01CEL05, DATA_dm01_CD_09_dm01CEL05, DATA_dm01_CD_10_dm01CEL05, DATA_dm01_CD_11_dm01CEL05 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel05 successfully altered
grid disk DATA_dm01_CD_01_dm01cel05 successfully altered
grid disk DATA_dm01_CD_02_dm01cel05 successfully altered
grid disk DATA_dm01_CD_03_dm01cel05 successfully altered
grid disk DATA_dm01_CD_04_dm01cel05 successfully altered
grid disk DATA_dm01_CD_05_dm01cel05 successfully altered
grid disk DATA_dm01_CD_06_dm01cel05 successfully altered
grid disk DATA_dm01_CD_07_dm01cel05 successfully altered
grid disk DATA_dm01_CD_08_dm01cel05 successfully altered
grid disk DATA_dm01_CD_09_dm01cel05 successfully altered
grid disk DATA_dm01_CD_10_dm01cel05 successfully altered
grid disk DATA_dm01_CD_11_dm01cel05 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL06, DATA_dm01_CD_01_dm01CEL06, DATA_dm01_CD_02_dm01CEL06, DATA_dm01_CD_03_dm01CEL06, DATA_dm01_CD_04_dm01CEL06, DATA_dm01_CD_05_dm01CEL06, DATA_dm01_CD_06_dm01CEL06, DATA_dm01_CD_07_dm01CEL06, DATA_dm01_CD_08_dm01CEL06, DATA_dm01_CD_09_dm01CEL06, DATA_dm01_CD_10_dm01CEL06, DATA_dm01_CD_11_dm01CEL06 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel06 successfully altered
grid disk DATA_dm01_CD_01_dm01cel06 successfully altered
grid disk DATA_dm01_CD_02_dm01cel06 successfully altered
grid disk DATA_dm01_CD_03_dm01cel06 successfully altered
grid disk DATA_dm01_CD_04_dm01cel06 successfully altered
grid disk DATA_dm01_CD_05_dm01cel06 successfully altered
grid disk DATA_dm01_CD_06_dm01cel06 successfully altered
grid disk DATA_dm01_CD_07_dm01cel06 successfully altered
grid disk DATA_dm01_CD_08_dm01cel06 successfully altered
grid disk DATA_dm01_CD_09_dm01cel06 successfully altered
grid disk DATA_dm01_CD_10_dm01cel06 successfully altered
grid disk DATA_dm01_CD_11_dm01cel06 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL07, DATA_dm01_CD_01_dm01CEL07, DATA_dm01_CD_02_dm01CEL07, DATA_dm01_CD_03_dm01CEL07, DATA_dm01_CD_04_dm01CEL07, DATA_dm01_CD_05_dm01CEL07, DATA_dm01_CD_06_dm01CEL07, DATA_dm01_CD_07_dm01CEL07, DATA_dm01_CD_08_dm01CEL07, DATA_dm01_CD_09_dm01CEL07, DATA_dm01_CD_10_dm01CEL07, DATA_dm01_CD_11_dm01CEL07 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel07 successfully altered
grid disk DATA_dm01_CD_01_dm01cel07 successfully altered
grid disk DATA_dm01_CD_02_dm01cel07 successfully altered
grid disk DATA_dm01_CD_03_dm01cel07 successfully altered
grid disk DATA_dm01_CD_04_dm01cel07 successfully altered
grid disk DATA_dm01_CD_05_dm01cel07 successfully altered
grid disk DATA_dm01_CD_06_dm01cel07 successfully altered
grid disk DATA_dm01_CD_07_dm01cel07 successfully altered
grid disk DATA_dm01_CD_08_dm01cel07 successfully altered
grid disk DATA_dm01_CD_09_dm01cel07 successfully altered
grid disk DATA_dm01_CD_10_dm01cel07 successfully altered
grid disk DATA_dm01_CD_11_dm01cel07 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL08, DATA_dm01_CD_01_dm01CEL08, DATA_dm01_CD_02_dm01CEL08, DATA_dm01_CD_03_dm01CEL08, DATA_dm01_CD_04_dm01CEL08, DATA_dm01_CD_05_dm01CEL08, DATA_dm01_CD_06_dm01CEL08, DATA_dm01_CD_07_dm01CEL08, DATA_dm01_CD_08_dm01CEL08, DATA_dm01_CD_09_dm01CEL08, DATA_dm01_CD_10_dm01CEL08, DATA_dm01_CD_11_dm01CEL08 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel08 successfully altered
grid disk DATA_dm01_CD_01_dm01cel08 successfully altered
grid disk DATA_dm01_CD_02_dm01cel08 successfully altered
grid disk DATA_dm01_CD_03_dm01cel08 successfully altered
grid disk DATA_dm01_CD_04_dm01cel08 successfully altered
grid disk DATA_dm01_CD_05_dm01cel08 successfully altered
grid disk DATA_dm01_CD_06_dm01cel08 successfully altered
grid disk DATA_dm01_CD_07_dm01cel08 successfully altered
grid disk DATA_dm01_CD_08_dm01cel08 successfully altered
grid disk DATA_dm01_CD_09_dm01cel08 successfully altered
grid disk DATA_dm01_CD_10_dm01cel08 successfully altered
grid disk DATA_dm01_CD_11_dm01cel08 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL09, DATA_dm01_CD_01_dm01CEL09, DATA_dm01_CD_02_dm01CEL09, DATA_dm01_CD_03_dm01CEL09, DATA_dm01_CD_04_dm01CEL09, DATA_dm01_CD_05_dm01CEL09, DATA_dm01_CD_06_dm01CEL09, DATA_dm01_CD_07_dm01CEL09, DATA_dm01_CD_08_dm01CEL09, DATA_dm01_CD_09_dm01CEL09, DATA_dm01_CD_10_dm01CEL09, DATA_dm01_CD_11_dm01CEL09 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel09 successfully altered
grid disk DATA_dm01_CD_01_dm01cel09 successfully altered
grid disk DATA_dm01_CD_02_dm01cel09 successfully altered
grid disk DATA_dm01_CD_03_dm01cel09 successfully altered
grid disk DATA_dm01_CD_04_dm01cel09 successfully altered
grid disk DATA_dm01_CD_05_dm01cel09 successfully altered
grid disk DATA_dm01_CD_06_dm01cel09 successfully altered
grid disk DATA_dm01_CD_07_dm01cel09 successfully altered
grid disk DATA_dm01_CD_08_dm01cel09 successfully altered
grid disk DATA_dm01_CD_09_dm01cel09 successfully altered
grid disk DATA_dm01_CD_10_dm01cel09 successfully altered
grid disk DATA_dm01_CD_11_dm01cel09 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL10, DATA_dm01_CD_01_dm01CEL10, DATA_dm01_CD_02_dm01CEL10, DATA_dm01_CD_03_dm01CEL10, DATA_dm01_CD_04_dm01CEL10, DATA_dm01_CD_05_dm01CEL10, DATA_dm01_CD_06_dm01CEL10, DATA_dm01_CD_07_dm01CEL10, DATA_dm01_CD_08_dm01CEL10, DATA_dm01_CD_09_dm01CEL10, DATA_dm01_CD_10_dm01CEL10, DATA_dm01_CD_11_dm01CEL10 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel10 successfully altered
grid disk DATA_dm01_CD_01_dm01cel10 successfully altered
grid disk DATA_dm01_CD_02_dm01cel10 successfully altered
grid disk DATA_dm01_CD_03_dm01cel10 successfully altered
grid disk DATA_dm01_CD_04_dm01cel10 successfully altered
grid disk DATA_dm01_CD_05_dm01cel10 successfully altered
grid disk DATA_dm01_CD_06_dm01cel10 successfully altered
grid disk DATA_dm01_CD_07_dm01cel10 successfully altered
grid disk DATA_dm01_CD_08_dm01cel10 successfully altered
grid disk DATA_dm01_CD_09_dm01cel10 successfully altered
grid disk DATA_dm01_CD_10_dm01cel10 successfully altered
grid disk DATA_dm01_CD_11_dm01cel10 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL11, DATA_dm01_CD_01_dm01CEL11, DATA_dm01_CD_02_dm01CEL11, DATA_dm01_CD_03_dm01CEL11, DATA_dm01_CD_04_dm01CEL11, DATA_dm01_CD_05_dm01CEL11, DATA_dm01_CD_06_dm01CEL11, DATA_dm01_CD_07_dm01CEL11, DATA_dm01_CD_08_dm01CEL11, DATA_dm01_CD_09_dm01CEL11, DATA_dm01_CD_10_dm01CEL11, DATA_dm01_CD_11_dm01CEL11 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel11 successfully altered
grid disk DATA_dm01_CD_01_dm01cel11 successfully altered
grid disk DATA_dm01_CD_02_dm01cel11 successfully altered
grid disk DATA_dm01_CD_03_dm01cel11 successfully altered
grid disk DATA_dm01_CD_04_dm01cel11 successfully altered
grid disk DATA_dm01_CD_05_dm01cel11 successfully altered
grid disk DATA_dm01_CD_06_dm01cel11 successfully altered
grid disk DATA_dm01_CD_07_dm01cel11 successfully altered
grid disk DATA_dm01_CD_08_dm01cel11 successfully altered
grid disk DATA_dm01_CD_09_dm01cel11 successfully altered
grid disk DATA_dm01_CD_10_dm01cel11 successfully altered
grid disk DATA_dm01_CD_11_dm01cel11 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL12, DATA_dm01_CD_01_dm01CEL12, DATA_dm01_CD_02_dm01CEL12, DATA_dm01_CD_03_dm01CEL12, DATA_dm01_CD_04_dm01CEL12, DATA_dm01_CD_05_dm01CEL12, DATA_dm01_CD_06_dm01CEL12, DATA_dm01_CD_07_dm01CEL12, DATA_dm01_CD_08_dm01CEL12, DATA_dm01_CD_09_dm01CEL12, DATA_dm01_CD_10_dm01CEL12, DATA_dm01_CD_11_dm01CEL12 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel12 successfully altered
grid disk DATA_dm01_CD_01_dm01cel12 successfully altered
grid disk DATA_dm01_CD_02_dm01cel12 successfully altered
grid disk DATA_dm01_CD_03_dm01cel12 successfully altered
grid disk DATA_dm01_CD_04_dm01cel12 successfully altered
grid disk DATA_dm01_CD_05_dm01cel12 successfully altered
grid disk DATA_dm01_CD_06_dm01cel12 successfully altered
grid disk DATA_dm01_CD_07_dm01cel12 successfully altered
grid disk DATA_dm01_CD_08_dm01cel12 successfully altered
grid disk DATA_dm01_CD_09_dm01cel12 successfully altered
grid disk DATA_dm01_CD_10_dm01cel12 successfully altered
grid disk DATA_dm01_CD_11_dm01cel12 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL13, DATA_dm01_CD_01_dm01CEL13, DATA_dm01_CD_02_dm01CEL13, DATA_dm01_CD_03_dm01CEL13, DATA_dm01_CD_04_dm01CEL13, DATA_dm01_CD_05_dm01CEL13, DATA_dm01_CD_06_dm01CEL13, DATA_dm01_CD_07_dm01CEL13, DATA_dm01_CD_08_dm01CEL13, DATA_dm01_CD_09_dm01CEL13, DATA_dm01_CD_10_dm01CEL13, DATA_dm01_CD_11_dm01CEL13 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel13 successfully altered
grid disk DATA_dm01_CD_01_dm01cel13 successfully altered
grid disk DATA_dm01_CD_02_dm01cel13 successfully altered
grid disk DATA_dm01_CD_03_dm01cel13 successfully altered
grid disk DATA_dm01_CD_04_dm01cel13 successfully altered
grid disk DATA_dm01_CD_05_dm01cel13 successfully altered
grid disk DATA_dm01_CD_06_dm01cel13 successfully altered
grid disk DATA_dm01_CD_07_dm01cel13 successfully altered
grid disk DATA_dm01_CD_08_dm01cel13 successfully altered
grid disk DATA_dm01_CD_09_dm01cel13 successfully altered
grid disk DATA_dm01_CD_10_dm01cel13 successfully altered
grid disk DATA_dm01_CD_11_dm01cel13 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL14, DATA_dm01_CD_01_dm01CEL14, DATA_dm01_CD_02_dm01CEL14, DATA_dm01_CD_03_dm01CEL14, DATA_dm01_CD_04_dm01CEL14, DATA_dm01_CD_05_dm01CEL14, DATA_dm01_CD_06_dm01CEL14, DATA_dm01_CD_07_dm01CEL14, DATA_dm01_CD_08_dm01CEL14, DATA_dm01_CD_09_dm01CEL14, DATA_dm01_CD_10_dm01CEL14, DATA_dm01_CD_11_dm01CEL14 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel14 successfully altered
grid disk DATA_dm01_CD_01_dm01cel14 successfully altered
grid disk DATA_dm01_CD_02_dm01cel14 successfully altered
grid disk DATA_dm01_CD_03_dm01cel14 successfully altered
grid disk DATA_dm01_CD_04_dm01cel14 successfully altered
grid disk DATA_dm01_CD_05_dm01cel14 successfully altered
grid disk DATA_dm01_CD_06_dm01cel14 successfully altered
grid disk DATA_dm01_CD_07_dm01cel14 successfully altered
grid disk DATA_dm01_CD_08_dm01cel14 successfully altered
grid disk DATA_dm01_CD_09_dm01cel14 successfully altered
grid disk DATA_dm01_CD_10_dm01cel14 successfully altered
grid disk DATA_dm01_CD_11_dm01cel14 successfully altered

6. Verify the new size

CellCLI> list griddisk where name like 'DATA.*' attributes name, size
         DATA_dm01_CD_00_dm01cel14       473G
         DATA_dm01_CD_01_dm01cel14       473G
         DATA_dm01_CD_02_dm01cel14       473G
         DATA_dm01_CD_03_dm01cel14       473G
         DATA_dm01_CD_04_dm01cel14       473G
         DATA_dm01_CD_05_dm01cel14       473G
         DATA_dm01_CD_06_dm01cel14       473G
         DATA_dm01_CD_07_dm01cel14       473G
         DATA_dm01_CD_08_dm01cel14       473G
         DATA_dm01_CD_09_dm01cel14       473G
         DATA_dm01_CD_10_dm01cel14       473G
         DATA_dm01_CD_11_dm01cel14       473G


7. Increase size of DATA disks in ASM

SQL> alter diskgroup DATA_dm01 resize all rebalance power 32;

Diskgroup altered.


Note that there was no need to specify the new disk size, as ASM obtains it from the grid disks. The rebalance clause is optional.

The command will trigger the rebalance operation for disk group DATA.
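
If the rebalance clause is omitted, ASM uses the default power set by the ASM_POWER_LIMIT parameter. The power of an ongoing rebalance can also be changed afterwards with a statement of this form:

SQL> alter diskgroup DATA_dm01 rebalance power 32;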

Monitor the rebalance with the following command:

SQL> set lines 200
SQL> set pages 200
SQL> select * from gv$asm_operation;

no rows selected


Once the query returns "no rows selected", the rebalance has completed and all disks in disk group DATA should show the new size:

SQL> select name, total_mb/1024 "GB" from v$asm_disk_stat where name like 'DATA%';

NAME                                   GB
------------------------------ ----------
DATA_dm01_CD_08_dm01CEL01             473
DATA_dm01_CD_01_dm01CEL01             473
DATA_dm01_CD_07_dm01CEL01             473
DATA_dm01_CD_09_dm01CEL01             473
DATA_dm01_CD_04_dm01CEL01             473
DATA_dm01_CD_05_dm01CEL01             473
DATA_dm01_CD_10_dm01CEL01             473
DATA_dm01_CD_03_dm01CEL01             473
DATA_dm01_CD_02_dm01CEL01             473
DATA_dm01_CD_11_dm01CEL01             473
DATA_dm01_CD_06_dm01CEL01             473
DATA_dm01_CD_00_dm01CEL01             473

…..
DATA_dm01_CD_03_dm01CEL14             473
DATA_dm01_CD_08_dm01CEL14             473
DATA_dm01_CD_00_dm01CEL14             473
DATA_dm01_CD_05_dm01CEL14             473
DATA_dm01_CD_09_dm01CEL14             473
DATA_dm01_CD_02_dm01CEL14             473
DATA_dm01_CD_07_dm01CEL14             473
DATA_dm01_CD_10_dm01CEL14             473
DATA_dm01_CD_01_dm01CEL14             473
DATA_dm01_CD_11_dm01CEL14             473
DATA_dm01_CD_04_dm01CEL14             473
DATA_dm01_CD_06_dm01CEL14             473

168 rows selected.


Conclusion

In this article we have learned how to resize ASM disks in an Exadata Database Machine. If there is free space on the Exadata cell disks, increasing a disk group's size can be accomplished in two steps: increase the grid disk size on all storage cells, then increase the disk size in ASM. This requires a single ASM rebalance operation. If there is no free space on the cell disks, space may first have to be freed up from another disk group. To reduce a disk group's size, first reduce the ASM disk size and then reduce the grid disk size; to increase it, first increase the grid disk size and then increase the ASM disk size.

Overview

You purchased a new storage cell for expansion, or removed storage from an existing cluster and want to add it to a new cluster due to space demand.

In this article I will demonstrate how to add a storage cell to an existing Exadata Database Machine.

Here we are adding a new storage cell with 600 GB disks to an existing Exadata Database Machine that also uses 600 GB disks.


Environment

  • Exadata Database Machine X2-2 Full Rack
  • Exadata Storage Software version 12.1.2.2.0
  • Oracle Grid/Database Version 11.2.0.4


Pre Checks:

  • Create a new file named "new_cell_group" containing only the hostnames for the new cells.

[root@dm01db01 ~]# vi new_cell_group

[root@dm01db01 ~]# cat new_cell_group
dm01cel05
  • It is assumed that the file "cell_group" contains only the original cells, not the new ones. If you have already added the new cells to "cell_group", comment them out or remove them from the file.

[root@dm01db01 ~]# cat cell_group
dm01cel01
dm01cel02
dm01cel03
dm01cel04
dm01cel05
dm01cel06
dm01cel07
dm01cel08
dm01cel09
dm01cel10
dm01cel11
dm01cel12
dm01cel13
dm01cel14

[root@dm01db01 ~]# vi cell_group

[root@dm01db01 ~]# cat cell_group
dm01cel01
dm01cel02
dm01cel03
dm01cel04
#dm01cel05
dm01cel06
dm01cel07
dm01cel08
dm01cel09
dm01cel10
dm01cel11
dm01cel12
dm01cel13
dm01cel14
  • Here we use the root user to run the cellcli commands; ensure that SSH user equivalency is set up for the root user between the storage cells and the compute nodes.
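
If equivalency is not yet in place for the new cell, dcli's -k option can push the root SSH key to it (a sketch; it prompts once for the cell's root password):

[root@dm01db01 ~]# dcli -g new_cell_group -l root -k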

[root@dm01db01 ~]# dcli -g dbs_group -l root 'uptime'
dm01db01: 09:55:05 up 425 days, 23:16,  1 user,  load average: 3.26, 3.37, 3.39
dm01db02: 09:55:05 up 117 days, 18:20,  0 users,  load average: 1.19, 1.42, 1.56
dm01db03: 09:55:05 up 85 days, 10:07,  0 users,  load average: 6.25, 6.20, 6.22
dm01db04: 09:55:05 up 519 days, 11:33,  0 users,  load average: 1.53, 1.48, 1.47
dm01db05: 09:55:05 up 519 days, 12:45,  0 users,  load average: 1.36, 1.35, 1.47
dm01db06: 09:55:05 up 515 days, 21:40,  0 users,  load average: 1.47, 1.36, 1.36
dm01db07: 09:55:05 up 519 days, 12:03,  0 users,  load average: 1.44, 1.64, 1.71
dm01db08: 09:55:05 up 519 days, 11:29,  0 users,  load average: 1.78, 1.90, 1.78

[root@dm01db01 ~]# dcli -g cell_group -l root 'uptime'
dm01cel01: 09:55:15 up 466 days, 20:25,  0 users,  load average: 1.44, 1.35, 1.26
dm01cel02: 09:55:15 up 519 days, 14:48,  0 users,  load average: 1.49, 1.44, 1.51
dm01cel03: 09:55:15 up 519 days, 14:47,  0 users,  load average: 1.01, 0.96, 1.10
dm01cel04: 09:55:15 up 519 days, 14:47,  0 users,  load average: 1.40, 1.32, 1.24
dm01cel06: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.10, 1.25, 1.30
dm01cel07: 09:55:15 up 519 days, 14:46,  0 users,  load average: 0.99, 1.14, 1.18
dm01cel08: 09:55:15 up 466 days, 19:37,  0 users,  load average: 1.52, 1.24, 1.17
dm01cel09: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.00, 1.40, 1.56
dm01cel10: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.09, 1.22, 1.22
dm01cel11: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.24, 1.27, 1.22
dm01cel12: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.11, 1.14, 1.14
dm01cel13: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.13, 1.27, 1.16
dm01cel14: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.20, 1.12, 1.15

Ensure Cell Disks are Ready

  • First verify if any cell disks already exist:

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list celldisk attributes name,status,freespace where disktype=harddisk"
dm01cel05: CD_00_dm01cel05       normal  0
dm01cel05: CD_01_dm01cel05       normal  0
dm01cel05: CD_02_dm01cel05       normal  0
dm01cel05: CD_03_dm01cel05       normal  0
dm01cel05: CD_04_dm01cel05       normal  0
dm01cel05: CD_05_dm01cel05       normal  0
dm01cel05: CD_06_dm01cel05       normal  0
dm01cel05: CD_07_dm01cel05       normal  0
dm01cel05: CD_08_dm01cel05       normal  0
dm01cel05: CD_09_dm01cel05       normal  0
dm01cel05: CD_10_dm01cel05       normal  0
dm01cel05: CD_11_dm01cel05       normal  0

In my case the cell disks exist.
  • If no cell disks have been created, there will be no output:

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list celldisk attributes name,status,freespace where disktype=harddisk"
[root@dm01db01 ~]#
  • If this command returns a list of cell disks, then you should verify if any grid disks exist:

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk where disktype=harddisk"
dm01cel05: DATA_CD_00_dm01cel05          active
dm01cel05: DATA_CD_01_dm01cel05          active
dm01cel05: DATA_CD_02_dm01cel05          active
dm01cel05: DATA_CD_03_dm01cel05          active
dm01cel05: DATA_CD_04_dm01cel05          active
dm01cel05: DATA_CD_05_dm01cel05          active
dm01cel05: DATA_CD_06_dm01cel05          active
dm01cel05: DATA_CD_07_dm01cel05          active
dm01cel05: DATA_CD_08_dm01cel05          active
dm01cel05: DATA_CD_09_dm01cel05          active
dm01cel05: DATA_CD_10_dm01cel05          active
dm01cel05: DATA_CD_11_dm01cel05          active
dm01cel05: RECO_CD_00_dm01cel05          active
dm01cel05: RECO_CD_01_dm01cel05          active
dm01cel05: RECO_CD_02_dm01cel05          active
dm01cel05: RECO_CD_03_dm01cel05          active
dm01cel05: RECO_CD_04_dm01cel05          active
dm01cel05: RECO_CD_05_dm01cel05          active
dm01cel05: RECO_CD_06_dm01cel05          active
dm01cel05: RECO_CD_07_dm01cel05          active
dm01cel05: RECO_CD_08_dm01cel05          active
dm01cel05: RECO_CD_09_dm01cel05          active
dm01cel05: RECO_CD_10_dm01cel05          active
dm01cel05: RECO_CD_11_dm01cel05          active
dm01cel05: DBFS_DG_CD_02_dm01cel05      active
dm01cel05: DBFS_DG_CD_03_dm01cel05      active
dm01cel05: DBFS_DG_CD_04_dm01cel05      active
dm01cel05: DBFS_DG_CD_05_dm01cel05      active
dm01cel05: DBFS_DG_CD_06_dm01cel05      active
dm01cel05: DBFS_DG_CD_07_dm01cel05      active
dm01cel05: DBFS_DG_CD_08_dm01cel05      active
dm01cel05: DBFS_DG_CD_09_dm01cel05      active
dm01cel05: DBFS_DG_CD_10_dm01cel05      active
dm01cel05: DBFS_DG_CD_11_dm01cel05      active

In my case grid disks exist.
  • If no grid disks have been created, there will be no output:

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk where disktype=harddisk"
[root@dm01db01 ~]#

  • If any grid disks are present, then you must ensure they are not in use by looking in V$ASM_DISK and verifying that the mount status is "closed" and the header status is "former" or "candidate":

dm01db01-orcldb1 {/home/oracle}:. oraenv
ORACLE_SID = [orcldb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Oct 17 10:01:15 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select name, path, state, mode_status, header_status, mount_status from v$asm_disk where header_status <> 'MEMBER' order by path;

no rows selected
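
To focus on just the new cell's disks, the same check can be narrowed by path (a sketch; adjust the filter to match your cell's name):

SQL> select path, mount_status, header_status from v$asm_disk where path like '%dm01cel05%';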
  • If it is OK to drop the grid disks, you can use the following command to drop a set of grid disks by prefix (for example, RECO):

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e drop griddisk all harddisk prefix=RECO"
dm01cel05: GridDisk RECO_CD_00_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_01_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_02_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_03_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_04_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_05_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_06_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_07_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_08_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_09_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_10_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_11_dm01cel05 successfully dropped

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e drop griddisk all harddisk prefix=DATA"
dm01cel05: GridDisk DATA_CD_00_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_01_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_02_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_03_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_04_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_05_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_06_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_07_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_08_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_09_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_10_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_11_dm01cel05 successfully dropped

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e drop griddisk all harddisk prefix=DBFS_DG"
dm01cel05: GridDisk DBFS_DG_CD_02_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_03_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_04_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_05_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_06_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_07_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_08_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_09_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_10_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_11_dm01cel05 successfully dropped
  • Verify that the grid disks have been dropped

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk where disktype=harddisk"
[root@dm01db01 ~]#
  • Drop cell disks

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e drop celldisk all harddisk force"
dm01cel05: CellDisk CD_00_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_01_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_02_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_03_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_04_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_05_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_06_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_07_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_08_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_09_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_10_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_11_dm01cel05 successfully dropped
  • If no cell disks exist, create cell disks on all hard disks (output will be similar to the following): 

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e create celldisk all harddisk"
dm01cel05: CellDisk CD_00_dm01cel05 successfully created
dm01cel05: CellDisk CD_01_dm01cel05 successfully created
dm01cel05: CellDisk CD_02_dm01cel05 successfully created
dm01cel05: CellDisk CD_03_dm01cel05 successfully created
dm01cel05: CellDisk CD_04_dm01cel05 successfully created
dm01cel05: CellDisk CD_05_dm01cel05 successfully created
dm01cel05: CellDisk CD_06_dm01cel05 successfully created
dm01cel05: CellDisk CD_07_dm01cel05 successfully created
dm01cel05: CellDisk CD_08_dm01cel05 successfully created
dm01cel05: CellDisk CD_09_dm01cel05 successfully created
dm01cel05: CellDisk CD_10_dm01cel05 successfully created
dm01cel05: CellDisk CD_11_dm01cel05 successfully created
  • Verify that celldisks were created (output should be similar to the following)

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list celldisk attributes name,status,freespace where disktype=harddisk"
dm01cel05: CD_00_dm01cel05       normal  528.6875G
dm01cel05: CD_01_dm01cel05       normal  528.6875G
dm01cel05: CD_02_dm01cel05       normal  557.8125G
dm01cel05: CD_03_dm01cel05       normal  557.8125G
dm01cel05: CD_04_dm01cel05       normal  557.8125G
dm01cel05: CD_05_dm01cel05       normal  557.8125G
dm01cel05: CD_06_dm01cel05       normal  557.8125G
dm01cel05: CD_07_dm01cel05       normal  557.8125G
dm01cel05: CD_08_dm01cel05       normal  557.8125G
dm01cel05: CD_09_dm01cel05       normal  557.8125G
dm01cel05: CD_10_dm01cel05       normal  557.8125G
dm01cel05: CD_11_dm01cel05       normal  557.8125G

Create DATA and RECO grid disks at the same offsets found on the existing disks (outer tracks)

  • Verify the current sizes and offsets for existing grid disks (on the existing cells):

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size | grep 'CD_03' | grep -i DBFS_DG"
dm01cel01: DBFS_DG_CD_03_dm01cel01      29.125G
dm01cel02: DBFS_DG_CD_03_dm01cel02      29.125G
dm01cel03: DBFS_DG_CD_03_dm01cel03      29.125G
dm01cel04: DBFS_DG_CD_03_dm01cel04      29.125G
dm01cel06: DBFS_DG_CD_03_dm01cel06      29.125G
dm01cel07: DBFS_DG_CD_03_dm01cel07      29.125G
dm01cel08: DBFS_DG_CD_03_dm01cel08      29.125G
dm01cel09: DBFS_DG_CD_03_dm01cel09      29.125G
dm01cel10: DBFS_DG_CD_03_dm01cel10      29.125G
dm01cel11: DBFS_DG_CD_03_dm01cel11      29.125G
dm01cel12: DBFS_DG_CD_03_dm01cel12      29.125G
dm01cel13: DBFS_DG_CD_03_dm01cel13      29.125G
dm01cel14: DBFS_DG_CD_03_dm01cel14      29.125G

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep 'CD_03' | grep -i data"
dm01cel01: DATA_CD_03_dm01cel01          260G            32M
dm01cel02: DATA_CD_03_dm01cel02          260G            32M
dm01cel03: DATA_CD_03_dm01cel03          260G            32M
dm01cel04: DATA_CD_03_dm01cel04          260G            32M
dm01cel06: DATA_CD_03_dm01cel06          260G            32M
dm01cel07: DATA_CD_03_dm01cel07          260G            32M
dm01cel08: DATA_CD_03_dm01cel08          260G            32M
dm01cel09: DATA_CD_03_dm01cel09          260G            32M
dm01cel10: DATA_CD_03_dm01cel10          260G            32M
dm01cel11: DATA_CD_03_dm01cel11          260G            32M
dm01cel12: DATA_CD_03_dm01cel12          260G            32M
dm01cel13: DATA_CD_03_dm01cel13          260G            32M
dm01cel14: DATA_CD_03_dm01cel14          260G            32M

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep 'CD_03' | grep -i reco"
dm01cel01: RECO_CD_03_dm01cel01          268.6875G       260.046875G
dm01cel02: RECO_CD_03_dm01cel02          268.6875G       260.046875G
dm01cel03: RECO_CD_03_dm01cel03          268.6875G       260.046875G
dm01cel04: RECO_CD_03_dm01cel04          268.6875G       260.046875G
dm01cel06: RECO_CD_03_dm01cel06          268.6875G       260.046875G
dm01cel07: RECO_CD_03_dm01cel07          268.6875G       260.046875G
dm01cel08: RECO_CD_03_dm01cel08          268.6875G       260.046875G
dm01cel09: RECO_CD_03_dm01cel09          268.6875G       260.046875G
dm01cel10: RECO_CD_03_dm01cel10          268.6875G       260.046875G
dm01cel11: RECO_CD_03_dm01cel11          268.6875G       260.046875G
dm01cel12: RECO_CD_03_dm01cel12          268.6875G       260.046875G
dm01cel13: RECO_CD_03_dm01cel13          268.6875G       260.046875G
dm01cel14: RECO_CD_03_dm01cel14          268.6875G       260.046875G

  • Create similar grid disks on the new cells with the same size; be sure to create them in order of offset position, from smallest to largest offset. 

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX='DATA', size=260G"
dm01cel05: GridDisk DATA_CD_00_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_01_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_02_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_03_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_04_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_05_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_06_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_07_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_08_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_09_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_10_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_11_dm01cel05 successfully created

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX='RECO', size=268.6875G"
dm01cel05: GridDisk RECO_CD_00_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_01_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_02_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_03_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_04_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_05_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_06_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_07_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_08_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_09_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_10_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_11_dm01cel05 successfully created

  • Validate that grid disks were created properly on the new cells (size and offset matches the original cells)

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep -i data"
dm01cel05: DATA_CD_00_dm01cel05  260G            32M
dm01cel05: DATA_CD_01_dm01cel05  260G            32M
dm01cel05: DATA_CD_02_dm01cel05  260G            32M
dm01cel05: DATA_CD_03_dm01cel05  260G            32M
dm01cel05: DATA_CD_04_dm01cel05  260G            32M
dm01cel05: DATA_CD_05_dm01cel05  260G            32M
dm01cel05: DATA_CD_06_dm01cel05  260G            32M
dm01cel05: DATA_CD_07_dm01cel05  260G            32M
dm01cel05: DATA_CD_08_dm01cel05  260G            32M
dm01cel05: DATA_CD_09_dm01cel05  260G            32M
dm01cel05: DATA_CD_10_dm01cel05  260G            32M
dm01cel05: DATA_CD_11_dm01cel05  260G            32M

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep -i reco"
dm01cel05: RECO_CD_00_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_01_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_02_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_03_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_04_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_05_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_06_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_07_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_08_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_09_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_10_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_11_dm01cel05  268.6875G       260.046875G

Create the new DBFS_DG grid disks only on disks 3 through 12 of the new storage servers

  • List the size of the DBFS_DG grid disks on the original cells (they should all be the same size); we'll call this size "dbfs_dg_size":

[root@dm01db01 ~]# dcli -c dm01cel01 -l root "cellcli -e list griddisk attributes name,size | grep 'DBFS_DG'"
dm01cel01: DBFS_DG_CD_02_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_03_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_04_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_05_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_06_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_07_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_08_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_09_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_10_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_11_dm01cel01      29.125G
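
As noted above, the listing should collapse to a single value for "dbfs_dg_size". A minimal sketch (the same command reduced with awk; a single output line confirms the size is uniform):

[root@dm01db01 ~]# dcli -c dm01cel01 -l root "cellcli -e list griddisk attributes name,size | grep 'DBFS_DG'" | awk '{print $3}' | sort -u
29.125G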

  • The DBFS_DG grid disks will be automatically placed just after the RECO grid disks on the innermost tracks.

NOTE: This command will "fail" on the first and second disks of each cell, but that is expected: those disks have no free space left because they carry the system partitions. The "error" message is similar to: "Cell disks were skipped because they had no freespace for grid disks: CD_00_dm01cel05, CD_01_dm01cel05." A quick count check follows the output below.

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX='DBFS_DG', size=29.125G"
dm01cel05: Cell disks were skipped because they had no freespace for grid disks: CD_00_dm01cel05, CD_01_dm01cel05.
dm01cel05: GridDisk DBFS_DG_CD_02_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_03_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_04_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_05_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_06_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_07_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_08_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_09_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_10_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_11_dm01cel05 successfully created
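
To confirm that exactly ten DBFS_DG grid disks were created on each new cell (twelve cell disks minus the two that carry the system partitions), here is a quick sketch that counts them per cell:

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk attributes name where name like 'DBFS_DG.*'" | awk -F: '{print $1}' | sort | uniq -c
     10 dm01cel05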

Validate that Grid Disks on all Cells are the Same Size

  • All Cells at once (a compact summary check follows the raw listings below):

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep -i data"
dm01cel01: DATA_CD_00_dm01cel01          260G            32M
dm01cel01: DATA_CD_01_dm01cel01          260G            32M
dm01cel01: DATA_CD_02_dm01cel01          260G            32M
dm01cel01: DATA_CD_03_dm01cel01          260G            32M
dm01cel01: DATA_CD_04_dm01cel01          260G            32M
dm01cel01: DATA_CD_05_dm01cel01          260G            32M
dm01cel01: DATA_CD_06_dm01cel01          260G            32M
dm01cel01: DATA_CD_07_dm01cel01          260G            32M
dm01cel01: DATA_CD_08_dm01cel01          260G            32M
dm01cel01: DATA_CD_09_dm01cel01          260G            32M
dm01cel01: DATA_CD_10_dm01cel01          260G            32M
dm01cel01: DATA_CD_11_dm01cel01          260G            32M
…..
dm01cel14: DATA_CD_00_dm01cel14          260G            32M
dm01cel14: DATA_CD_01_dm01cel14          260G            32M
dm01cel14: DATA_CD_02_dm01cel14          260G            32M
dm01cel14: DATA_CD_03_dm01cel14          260G            32M
dm01cel14: DATA_CD_04_dm01cel14          260G            32M
dm01cel14: DATA_CD_05_dm01cel14          260G            32M
dm01cel14: DATA_CD_06_dm01cel14          260G            32M
dm01cel14: DATA_CD_07_dm01cel14          260G            32M
dm01cel14: DATA_CD_08_dm01cel14          260G            32M
dm01cel14: DATA_CD_09_dm01cel14          260G            32M
dm01cel14: DATA_CD_10_dm01cel14          260G            32M
dm01cel14: DATA_CD_11_dm01cel14          260G            32M

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep -i reco"
dm01cel01: RECO_CD_00_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_01_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_02_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_03_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_04_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_05_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_06_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_07_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_08_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_09_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_10_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_11_dm01cel01          268.6875G       260.046875G
……
dm01cel13: RECO_CD_11_dm01cel13          268.6875G       260.046875G
dm01cel14: RECO_CD_00_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_01_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_02_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_03_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_04_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_05_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_06_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_07_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_08_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_09_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_10_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_11_dm01cel14          268.6875G       260.046875G

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep -i DBFS_DG"
dm01cel01: DBFS_DG_CD_02_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_03_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_04_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_05_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_06_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_07_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_08_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_09_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_10_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_11_dm01cel01      29.125G         528.734375G
….
dm01cel14: DBFS_DG_CD_02_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_03_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_04_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_05_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_06_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_07_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_08_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_09_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_10_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_11_dm01cel14      29.125G         528.734375G
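
As mentioned above, instead of scanning hundreds of lines you can reduce the full listing to its distinct prefix/size/offset combinations. With all 14 cells healthy you would expect counts of 168 (14 x 12) for DATA and RECO and 140 (14 x 10) for DBFS_DG; a minimal sketch:

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset" | awk '{split($2,a,"_CD_"); print a[1], $3, $4}' | sort | uniq -c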

  • One cell at a time:

[root@dm01db01 ~]# dcli -c dm01cel01 -l root "cellcli -e list griddisk attributes name,size,offset | grep -i data"
dm01cel01: DATA_CD_00_dm01cel01          260G            32M
dm01cel01: DATA_CD_01_dm01cel01          260G            32M
dm01cel01: DATA_CD_02_dm01cel01          260G            32M
dm01cel01: DATA_CD_03_dm01cel01          260G            32M
dm01cel01: DATA_CD_04_dm01cel01          260G            32M
dm01cel01: DATA_CD_05_dm01cel01          260G            32M
dm01cel01: DATA_CD_06_dm01cel01          260G            32M
dm01cel01: DATA_CD_07_dm01cel01          260G            32M
dm01cel01: DATA_CD_08_dm01cel01          260G            32M
dm01cel01: DATA_CD_09_dm01cel01          260G            32M
dm01cel01: DATA_CD_10_dm01cel01          260G            32M
dm01cel01: DATA_CD_11_dm01cel01          260G            32M

[root@dm01db01 ~]# dcli -c dm01cel01 -l root "cellcli -e list griddisk attributes name,size,offset | grep -i reco"
dm01cel01: RECO_CD_00_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_01_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_02_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_03_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_04_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_05_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_06_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_07_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_08_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_09_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_10_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_11_dm01cel01          268.6875G       260.046875G

[root@dm01db01 ~]# dcli -c dm01cel01 -l root "cellcli -e list griddisk attributes name,size,offset | grep -i DBFS_DG"
dm01cel01: DBFS_DG_CD_02_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_03_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_04_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_05_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_06_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_07_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_08_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_09_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_10_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_11_dm01cel01      29.125G         528.734375G

Add the DBFS_DG grid disks to the existing DBFS_DG disk group

  • Update the cellip.ora file to add the IP addresses of the new cell(s), then distribute it to all database nodes (an optional checksum verification follows below)

[root@dm01db01 ~]# vi /etc/oracle/cell/network-config/cellip.ora

[root@dm01db01 network-config]# cat /etc/oracle/cell/network-config/cellip.ora
cell="192.168.2.9"
cell="192.168.2.10"
cell="192.168.2.11"
cell="192.168.2.12"
cell="192.168.2.13"
cell="192.168.2.14"
cell="192.168.2.15"
cell="192.168.2.16"
cell="192.168.2.17"
cell="192.168.2.18"
cell="192.168.2.19"
cell="192.168.2.20"
cell="192.168.2.21"
cell="192.168.2.22"

[root@dm01db01 ~]# cd /etc/oracle/cell/network-config/

[root@dm01db01 network-config]# dcli -g ~/dbs_group -l root -d /etc/oracle/cell/network-config -f cellip.ora

[root@dm01db01 network-config]# dcli -g ~/dbs_group -l root ls -l  /etc/oracle/cell/network-config/cellip.ora
dm01db01: -rwxr-xr-x 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db02: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db03: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db04: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db05: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db06: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db07: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db08: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
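
As an optional extra check referenced above, confirm that every database node received an identical copy of the file; all nodes should report the same checksum:

[root@dm01db01 network-config]# dcli -g ~/dbs_group -l root "md5sum /etc/oracle/cell/network-config/cellip.ora"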

dm01db01-orcldb1 {/home/oracle}:. oraenv
ORACLE_SID = [orcldb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Oct 17 11:18:34 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup DBFS_DG add disk 'o/*/DBFS_DG*dm01cel05' rebalance power 32 NOWAIT;

Diskgroup altered.
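
Before moving on, it can be worth confirming that the new cell's DBFS_DG grid disks are now visible and healthy in ASM. This is an illustrative query using standard v$asm_disk columns; once the add completes, the ten new disks should show a MOUNT_STATUS of CACHED and a MODE_STATUS of ONLINE:

SQL> select path, mount_status, mode_status from v$asm_disk where path like 'o/%/DBFS_DG%dm01cel05';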

Add the DATA and RECO grid disks to the existing DATA and RECO disk groups

  • From one DB node, add the new DATA grid disks to the DATA disk group (adjust the disk path pattern 'o/*/DATA*dm01cel05' below to match your actual new cells):

SQL> alter diskgroup DATA add disk 'o/*/DATA*dm01cel05' rebalance power 32 NOWAIT;

Diskgroup altered.

  • From another DB node, add the new RECO grid disks to the RECO disk group (adjust the disk path pattern 'o/*/RECO*dm01cel05' below to match your actual new cells):

SQL> alter diskgroup RECO add disk 'o/*/RECO*dm01cel05' rebalance power 32 NOWAIT;

Diskgroup altered.

  • Monitor the progress of the rebalance operations using this query; you can proceed with the rest of the steps while the rebalance continues (a hands-off polling sketch follows the sample output below):

SQL> select * from gv$asm_operation;

   INST_ID GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
---------- ------------ ----- ---- ---------- ---------- ---------- ---------- ---------- ----------- --------------------------------------------
         7            1 REBAL WAIT         32
         3            1 REBAL WAIT         32
         2            1 REBAL WAIT         32
         1            1 REBAL WAIT         32
         6            1 REBAL WAIT         32
         8            1 REBAL WAIT         32
         4            1 REBAL WAIT         32
         5            1 REBAL RUN          32         32      41666      45014      17263           0

8 rows selected.

SQL> select * from gv$asm_operation;

no rows selected
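
If you would rather not re-run the query by hand, here is a minimal polling sketch from the shell (it assumes the +ASM1 environment set earlier; an empty result means the rebalance has finished):

dm01db01-+ASM1 {/home/oracle}:while true; do
  echo "select inst_id, operation, state, sofar, est_work, est_minutes from gv\$asm_operation;" | sqlplus -s "/ as sysasm"
  sleep 60
done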

Validate that DATA, RECO, and DBFS_DG contain disks from new cells

Before:
dm01db01-orcldb1 {/home/oracle}:. oraenv
ORACLE_SID = [orcldb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Oct 17 10:43:16 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> set lines 200
SQL> set pages 200
SQL> select d.failgroup,dg.name, sum(d.total_mb) cell_total_mb, count(1) cell_disk_count from v$asm_disk d, v$asm_diskgroup dg where d.group_number = dg.group_number group by d.failgroup, dg.name order by d.failgroup, dg.name;

FAILGROUP                      NAME                           CELL_TOTAL_MB CELL_DISK_COUNT
------------------------------ ------------------------------ ------------- ---------------
DM01CEL01                      DATA                                 3194880              12
DM01CEL01                      RECO                                 3301632              12
DM01CEL01                      DBFS_DG                              298240              10
DM01CEL02                      DATA                                 2928640              12
DM01CEL02                      RECO                                 3026496              12
DM01CEL02                      DBFS_DG                              268416              10
DM01CEL03                      DATA                                 3194880              12
DM01CEL03                      RECO                                 3301632              12
DM01CEL03                      DBFS_DG                              298240              10
DM01CEL04                      DATA                                 3194880              12
DM01CEL04                      RECO                                 3301632              12
DM01CEL04                      DBFS_DG                              298240              10
DM01CEL06                      DATA                                 3194880              12
DM01CEL06                      RECO                                 3301632              12
DM01CEL06                      DBFS_DG                              298240              10
DM01CEL07                      DATA                                 2928640              11
DM01CEL07                      RECO                                 3026496              11
DM01CEL07                      DBFS_DG                              268416              10
DM01CEL08                      DATA                                 3194880              12
DM01CEL08                      RECO                                 3301632              12
DM01CEL08                      DBFS_DG                              298240              10
DM01CEL09                      DATA                                 2396160              12
DM01CEL09                      RECO                                 2476224              12
DM01CEL09                      DBFS_DG                              238592              10
DM01CEL10                      DATA                                 3194880              12
DM01CEL10                      RECO                                 3301632              12
DM01CEL10                      DBFS_DG                              298240              10
DM01CEL11                      DATA                                 2928640              12
DM01CEL11                      RECO                                 3026496              12
DM01CEL11                      DBFS_DG                              298240              10
DM01CEL12                      DATA                                 3194880              12
DM01CEL12                      RECO                                 3301632              12
DM01CEL12                      DBFS_DG                              298240              10
DM01CEL13                      DATA                                 3194880              12
DM01CEL13                      RECO                                 3301632              12
DM01CEL13                      DBFS_DG                              298240              10
DM01CEL14                      DATA                                 3194880              12
DM01CEL14                      RECO                                 3301632              12
DM01CEL14                      DBFS_DG                              298240              10

39 rows selected.

After:
SQL> set lines 200
SQL> set pages 200
SQL> select d.failgroup,dg.name, sum(d.total_mb) cell_total_mb, count(1) cell_disk_count from v$asm_disk d, v$asm_diskgroup dg where d.group_number = dg.group_number group by d.failgroup, dg.name order by d.failgroup, dg.name;

FAILGROUP                      NAME                           CELL_TOTAL_MB CELL_DISK_COUNT
------------------------------ ------------------------------ ------------- ---------------
DM01CEL01                      DATA                                 3194880              12
DM01CEL01                      RECO                                 3301632              12
DM01CEL01                      DBFS_DG                              298240              10
DM01CEL02                      DATA                                 2928640              12
DM01CEL02                      RECO                                 3026496              12
DM01CEL02                      DBFS_DG                              268416              10
DM01CEL03                      DATA                                 3194880              12
DM01CEL03                      RECO                                 3301632              12
DM01CEL03                      DBFS_DG                              298240              10
DM01CEL04                      DATA                                 2928640              12
DM01CEL04                      RECO                                 3026496              12
DM01CEL04                      DBFS_DG                              268416              10
DM01CEL05                      DATA                                 3194880              12
DM01CEL05                      RECO                                 3301632              12
DM01CEL05                      DBFS_DG                              298240              10
DM01CEL06                      DATA                                 3194880              12
DM01CEL06                      RECO                                 3301632              12
DM01CEL06                      DBFS_DG                              298240              10
DM01CEL07                      DATA                                 2928640              12
DM01CEL07                      RECO                                 3026496              12
DM01CEL07                      DBFS_DG                              268416              10
DM01CEL08                      DATA                                 3194880              12
DM01CEL08                      RECO                                 3301632              12
DM01CEL08                      DBFS_DG                              298240              10
DM01CEL09                      DATA                                 2396160              12
DM01CEL09                      RECO                                 2476224              12
DM01CEL09                      DBFS_DG                              238592              10
DM01CEL10                      DATA                                 3194880              12
DM01CEL10                      RECO                                 3301632              12
DM01CEL10                      DBFS_DG                              298240              10
DM01CEL11                      DATA                                 2928640              12
DM01CEL11                      RECO                                 3026496              12
DM01CEL11                      DBFS_DG                              298240              10
DM01CEL12                      DATA                                 3194880              12
DM01CEL12                      RECO                                 3301632              12
DM01CEL12                      DBFS_DG                              298240              10
DM01CEL13                      DATA                                 3194880              12
DM01CEL13                      RECO                                 3301632              12
DM01CEL13                      DBFS_DG                              298240              10
DM01CEL14                      DATA                                 3194880              12
DM01CEL14                      RECO                                 3301632              12
DM01CEL14                      DBFS_DG                              298240              10

42 rows selected.

Ensure that there are no offline disks in any disk group:

SQL> select name,total_mb,free_mb,offline_disks from v$asm_diskgroup;

NAME                             TOTAL_MB    FREE_MB OFFLINE_DISKS
------------------------------ ---------- ---------- -------------
DATA                             42864640   41713708             0
RECO                             44296896   44293768             0
DBFS_DG                           4026240    4023232             0
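
It can also be worth confirming that each disk group is mounted and reports the expected usable capacity after the rebalance; this is an illustrative query using standard v$asm_diskgroup columns:

SQL> select name, state, type, total_mb, usable_file_mb, offline_disks from v$asm_diskgroup;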

Conclusion
In this article, we learned how to add a storage cell of the same disk size to an existing Exadata Database Machine: create the grid disks to match the existing cells, update cellip.ora on the database nodes, and extend the ASM disk groups onto the new storage.
