
Overview

Oracle ASM disk groups are built on a set of Exadata grid disks. Exadata uses ASM disk groups to store database files.


ASM provides three types of redundancy:

External: ASM does not provide redundancy; it relies on the storage hardware. External redundancy is not an option on Exadata; you must use either normal or high redundancy.

Normal: Provides two-way data mirroring by maintaining two copies of each data block in separate failure groups.

High: Provides three-way data mirroring by maintaining three copies of each data block in separate failure groups.
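The same redundancy choices can also be expressed in SQL from an ASM instance; a hedged sketch (the disk discovery strings and compatibility values below are illustrative, not taken from this environment):

```sql
-- Illustrative sketch: manual equivalents of the two supported redundancy
-- levels on Exadata. Discovery strings and attribute values are examples.
CREATE DISKGROUP DATA NORMAL REDUNDANCY          -- two-way mirroring
  DISK 'o/*/DATA*'
  ATTRIBUTE 'compatible.asm'   = '12.2.0.1',
            'compatible.rdbms' = '12.2.0.1',
            'cell.smart_scan_capable' = 'TRUE';

CREATE DISKGROUP RECO HIGH REDUNDANCY            -- three-way mirroring
  DISK 'o/*/RECO*'
  ATTRIBUTE 'compatible.asm'   = '12.2.0.1',
            'compatible.rdbms' = '12.2.0.1',
            'cell.smart_scan_capable' = 'TRUE';
```

On Exadata, each storage cell automatically forms its own failure group, so normal redundancy tolerates the loss of one cell and high redundancy the loss of two.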


In this article I will demonstrate how to create ASM disk groups on an Exadata Database Machine using ASMCA.

Environment

Exadata Database Machine X2-2
8 compute nodes, 14 storage cells and 2 IB switches

Steps to create an ASM disk group using the ASMCA utility


Set the environment variable to Grid Home and start asmca


dm01db01-orcldb1 {/home/oracle}:. oraenv

ORACLE_SID = [orcldb1] ? +ASM1

The Oracle base has been changed from /u01/app/oracle to /u01/app/grid

dm01db01-+ASM1 {/home/oracle}:which asmca
/u01/app/12.2.0.1/grid/bin/asmca

dm01db01-+ASM1 {/home/oracle}:asmca

1. Create ASM Disk Group using Normal Redundancy (3 failure groups)

First we will create an ASM disk group with normal redundancy using grid disks from 3 storage cells.

ASMCA starting

Click on ASM Instances on left pane

Here we are running Flex ASM, and the ASM instances are currently running on nodes 1, 2 and 4

Click on Disk Groups. Currently there are two disk groups: one for the OCR/voting disks and another for the MGMT database repository. To create a new disk group, click the “Create” button.

Click on “Change Disk Discovery Path”

Enter the following path to discover the grid disks.

Select desired grid disks to create ASM Disk Group. 
Here I am creating DATA disk group by selecting DATA grid disks from 3 storage cells
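On Exadata, grid disks are discovered over the storage network with an `o/<cell>/<griddisk>` style path. A hedged sketch of what the discovery path and the resulting disk list look like (the string and paths are typical examples, not copied from this environment):

```sql
-- Illustrative: restrict discovery to the DATA grid disks on all cells.
-- This is what "Change Disk Discovery Path" sets; it can also be done in SQL
-- on an ASM instance.
ALTER SYSTEM SET asm_diskstring = 'o/*/DATA*' SCOPE=BOTH;

-- Disks visible after discovery (paths shown are examples of the
-- DATA_CD_<nn>_<cellname> naming convention)
SELECT path, header_status
FROM   v$asm_disk
WHERE  path LIKE 'o/%';
```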

Click on “Show Advanced Options” and select the ASM/Database/ADVM compatibility. Finally click Ok to create DATA disk group

DATA disk group creation in progress

We can now see that the DATA disk group is created

Let’s verify the newly created DATA disk group. Right click on the DATA disk group and select “view status details”

We can see that the DATA disk group is mounted on node 1, 2 and 4
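The same status check can be done from SQL*Plus instead of ASMCA; a minimal sketch (with Flex ASM, the gv$ view shows which instances have the group mounted):

```sql
-- Verify the new disk group and where it is mounted
SELECT inst_id, name, type, state, total_mb, free_mb
FROM   gv$asm_diskgroup
WHERE  name = 'DATA'
ORDER  BY inst_id;
```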

2. Create ASM Disk Group Using High Redundancy (5 failure groups)


Now let’s create another ASM disk group with high redundancy, using grid disks from 5 storage cells.

Click Create button to create new ASM disk group


Enter the Disk Group name, select the desired grid disks and ASM/Database/ADVM attributes and click ok

DATA1 disk group creation is in progress

We can see that the DATA1 disk group is created



3. Add disks to ASM Disk Group (add grid disks from one storage cell)

Now let’s add disks to the DATA1 disk group. I am going to add DATA grid disks from one storage cell to DATA1.


Right click on the DATA1 disk group and select Add Disks


Select the desired grid disks from storage cell and click ok

Disks are being added to DATA1

We can see the size of DATA1 disk group has increased
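The add operation can also be done in SQL; a hedged sketch (the cell name in the discovery string is illustrative):

```sql
-- Illustrative: add one cell's DATA grid disks to DATA1 and set the
-- rebalance power for the resulting data movement
ALTER DISKGROUP DATA1 ADD DISK 'o/*/DATA*dm01cel04*' REBALANCE POWER 4;

-- Monitor the rebalance triggered by the add
SELECT group_number, operation, state, power, est_minutes
FROM   gv$asm_operation;
```

The size increase becomes fully usable once the rebalance completes.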


4. Drop disks from ASM Disk Group (remove grid disks from one storage cell)



This time let’s drop disks from the DATA1 disk group. I am going to remove the DATA grid disks of one storage cell used by DATA1.


Right click on DATA1 disk group and select Drop Disks

Process started

Select the desired Grid disks to be dropped from DATA1 disk group and click ok

Disks are being dropped from DATA1

We can see that the DATA1 disk group size has decreased
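The drop operation has a SQL equivalent as well; a hedged sketch (disk names are illustrative; the real names come from v$asm_disk.name):

```sql
-- Illustrative: drop one cell's disks from DATA1. ASM first rebalances the
-- data off the disks; they are only released when the rebalance finishes.
ALTER DISKGROUP DATA1
  DROP DISK DATA_CD_00_DM01CEL04, DATA_CD_01_DM01CEL04
  REBALANCE POWER 4;

-- The drop is complete when no rebalance operation remains
SELECT operation, state, est_minutes FROM gv$asm_operation;
```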




Conclusion:


In this article we have learned about the different ASM disk group redundancy levels and how to create an ASM disk group on Exadata using a set of Exadata grid disks. We created disk groups with different redundancy levels and performed a few disk operations, such as adding and dropping disks.


Overview
You can create a database on Exadata using DBCA or manually with the CREATE DATABASE command. Creating a database using DBCA is easy and straightforward, and on Exadata it is no different from creating a database on RAC: you select the nodes on which you want to create the database. Since ASM is the only storage option on Exadata, you must select ASM for the database files.
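The interactive walkthrough below can also be scripted with DBCA in silent mode; a hedged sketch (the database name, PDB name, node list and disk groups are illustrative, and the exact flag set varies by release, so verify with `dbca -createDatabase -help`):

```shell
# Illustrative silent-mode equivalent of the DBCA walkthrough below.
# Names, node list and disk groups are examples only.
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName prod -sid prod \
  -createAsContainerDatabase true -numberOfPDBs 1 -pdbName prodpdb1 \
  -databaseConfigType RAC \
  -nodelist dm01db01,dm01db02,dm01db03,dm01db04 \
  -storageType ASM -diskGroupName +DATA -recoveryGroupName +DATA1 \
  -enableArchive true
```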

Environment

  • Exadata database machine Full Rack X5-2
  • 8 compute nodes, 14 storage cells and 2 IB switches

Steps to create Database on Exadata database machine

dm01db01-prod1 {/home/oracle}:which dbca
/u01/app/oracle/product/12.2.0.1/dbhome/bin/dbca

dm01db01-prod1 {/home/oracle}:dbca


DBCA starting

 Select “Create a database” and click Next

Select “Advanced configuration” and click Next

Select “General Purpose”  and click Next

Select all nodes and click Next

Enter the Database name and select if you want to create it as container database. Specify the number of PDBs you want to create and the PDB name. Click Next

Select ASM for storage type and use OMF. Click Next

Select ASM and +DATA1 for the FRA. Enable archive log mode. Click Next

Click Next

On the Memory tab, select ASMM and provide values, as AMM is not compatible with HugePages.
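For reference, choosing ASMM here amounts to setting explicit SGA and PGA targets rather than memory_target; a hedged sketch (the values are illustrative):

```sql
-- Illustrative ASMM settings; size to your workload.
ALTER SYSTEM SET sga_target = 8G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE=SPFILE;

-- Optional: refuse to start unless the whole SGA fits in HugePages
ALTER SYSTEM SET use_large_pages = 'ONLY' SCOPE=SPFILE;
```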

On the Sizing tab, set the processes value

On the Character set tab, select the character set and national character set

On the Connection mode tab, select the connection mode

On Sample Schema tab, place check mark if you want to create sample schemas. Click Next

Select EM if you want to  configure EM to monitor and manage your database. Click Next

Enter the password for all accounts and click Next

Click Yes to ignore warning and continue

Check “Create database” and, if desired, the option to generate database creation scripts. Click Next

Prerequisite check begins

Review Summary Page. Click Finish to start database creation process

Database creation in progress

Click close to complete the Database creation process

Post database creation steps
dm01db01- {/home/oracle}:export ORACLE_SID=prod1
dm01db01-prod1 {/home/oracle}:sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Sat Mar 18 04:50:24 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select con_id, cdb, name, open_mode,database_role,log_mode from v$database;

    CON_ID CDB NAME      OPEN_MODE            DATABASE_ROLE    LOG_MODE
---------- --- --------- -------------------- ---------------- ------------
         0 YES PROD      READ WRITE           PRIMARY          ARCHIVELOG


SQL> col name for a30
SQL> select con_id,open_mode, name from v$pdbs;

    CON_ID OPEN_MODE  NAME
---------- ---------- ------------------------------
         2 READ ONLY  PDB$SEED
         3 READ WRITE PRODPDP1


SQL> select count(*) from cdb_data_files where con_id=3;

  COUNT(*)
----------
        11


Conclusion
In this article we have learned how to create a database on Exadata database machine X5-2 using DBCA.




Overview
Upgrading Grid Infrastructure and the database is a lengthy process, and you may well encounter issues due to missing prerequisites or bugs in the upgrade process. If you are lucky, or if you meet all the prerequisites before starting, you may not run into any issues at all.

When I recently upgraded Grid Infrastructure from 11.2.0.4 to 12.1.0.2, I encountered an issue that prevented me from upgrading the database as a cluster database. The entire GI upgrade process went smoothly, so I can only say that it was a bug.

Error
  • crsctl query crs activeversion shows an error for node 1.

dm01db01-orcldb1 {/home/oracle}:dcli -g dbs_group -l oracle '/u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion'
dm01db01: kgfnGetFacility: facility=0x22761d8
dm01db01: kgfnInitDiag: diagctx=0x216a100
dm01db01: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db02: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db03: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db04: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db05: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db06: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db07: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db08: Oracle Clusterware active version on the cluster is [12.1.0.2.0]

  • dbua failed with “The database orcldb is a cluster database and cannot be upgraded as the target Oracle Home is for single instance database only”. This is due to the GI issue above.
  • Review rootupgrade.sh output
I went back and checked the rootupgrade.sh output on node 1. It contains two lines (the kgfnGetFacility/kgfnInitDiag diagnostics) that are not expected as part of the output.

[root@bs01db01 ~]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file “dbhome” already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin …
The file “oraenv” already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin …
The file “coraenv” already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin …

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/01/10 06:52:50 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2017/01/10 06:52:51 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2017/01/10 06:52:53 CLSRSC-464: Starting retrieval of the cluster configuration data

2017/01/10 06:53:02 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2017/01/10 06:53:02 CLSRSC-363: User ignored prerequisites during installation

2017/01/10 06:53:13 CLSRSC-515: Starting OCR manual backup.

2017/01/10 06:53:15 CLSRSC-516: OCR manual backup successful.

2017/01/10 06:53:19 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2017/01/10 06:53:19 CLSRSC-482: Running command: ‘/u01/app/12.1.0.2/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/11.2.0.4/grid -oldCRSVersion 11.2.0.4.0 -nodeNumber 1 -firstNode true -startRolling true’

ASM configuration upgraded in local node successfully.

2017/01/10 06:53:27 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2017/01/10 06:53:27 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2017/01/10 06:53:53 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization – successful
2017/01/10 06:58:48 CLSRSC-329: Replacing Clusterware entries in file ‘oracle-ohasd.conf’

CRS-4133: Oracle High Availability Services has been stopped.
kgfnGetFacility: facility=0x1564768
kgfnInitDiag: diagctx=0x13e4160

CRS-4123: Oracle High Availability Services has been started.
2017/01/10 07:03:18 CLSRSC-472: Attempting to export the OCR

2017/01/10 07:03:18 CLSRSC-482: Running command: ‘ocrconfig -upgrade oracle oinstall’

2017/01/10 07:03:29 CLSRSC-473: Successfully exported the OCR

2017/01/10 07:03:33 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2017/01/10 07:03:33 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.

2017/01/10 07:03:33 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2017/01/10 07:03:33 CLSRSC-543:
 3. The downgrade command must be run on the node bs01db04 with the ‘-lastnode’ option to restore global configuration data.

2017/01/10 07:04:04 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
2017/01/10 07:04:13 CLSRSC-474: Initiating upgrade of resource types

2017/01/10 07:04:38 CLSRSC-482: Running command: ‘upgrade model  -s 11.2.0.4.0 -d 12.1.0.2.0 -p first’

2017/01/10 07:04:38 CLSRSC-475: Upgrade of resource types successfully initiated.

2017/01/10 07:04:43 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster … succeeded

So here is the problem. How do we resolve it?

Follow the steps below to resolve the issue and proceed with the database upgrade.

Solution

The workaround is simple: modify sqlnet.ora in the Grid Infrastructure home to either set the parameter DIAG_ADR_ENABLED to ON (the default value), or set DIAG_ADR_ENABLED to OFF and set the environment variable ORA_CLIENTTRACE_DIR to a valid directory.
dm01db01-+ASM1 {/home/oracle}:cd /u01/app/12.1.0.2/grid/network/admin/
 

dm01db01-+ASM1 {/u01/app/12.1.0.2/grid/network/admin}:ls -ltr sqlnet.ora
-rw-r--r-- 1 oracle oinstall 434 Jan 10 06:53 sqlnet.ora
 

dm01db01-+ASM1 {/u01/app/12.1.0.2/grid/network/admin}:vi sqlnet.ora
 

dm01db01-+ASM1 {/u01/app/12.1.0.2/grid/network/admin}:cat sqlnet.ora
# sqlnet.ora.dm01db01 Network Configuration File: /u01/app/11.2.0/grid/network/admin/sqlnet.ora.dm01db01
# Generated by Oracle configuration tools.

ADR_BASE = /u01/app/oracle
#SQLNET.AUTHENTICATION_SERVICES = (ALL)
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

SQLNET.DEFAULT_SDU_SIZE=32767

SQLNET.INBOUND_CONNECT_TIMEOUT=90
DIAG_ADR_ENABLED=ON
ORA_CLIENTTRACE_DIR=/u01/trace

TRACE_TIMESTAMP_CLIENT = ON
TRACE_DIRECTORY_SERVER = /u01/trace


Now verify the CRS active version

dm01db01-+ASM1 {/home/oracle}:dcli -g dbs_group -l oracle '/u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion'
dm01db01: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db02: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db03: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db04: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db05: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db06: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db07: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db08: Oracle Clusterware active version on the cluster is [12.1.0.2.0]


Conclusion
It is possible that your GI and database upgrade processes will fail due to prerequisite failures or upgrade bugs. In my case rootupgrade.sh was successful overall, but a couple of unexpected lines in its output went unnoticed and later caused the database upgrade problem. We have seen how to fix that issue and proceed with the database upgrade.


Overview
In my previous article I demonstrated how to install and upgrade Oracle Grid Infrastructure to 12.1.0.2 on an Exadata Database Machine. Now it is time to upgrade the database to 12.1.0.2.

Take a look at the link below for the steps required to Install and Upgrade Oracle GI to 12.1.0.2:
http://netsoftmate.blogspot.in/2017/03/exadata-upgrade-oracle-grid.html

This article covers the following steps:

  • Install Database 12.1.0.2 Software
  • Upgrade Database to 12.1.0.2 using DBUA
  • Post Upgrade Steps

Install Oracle Database 12.1.0.2 Software

Oracle Database 12.1.0.2 software will be installed into a new Oracle Home without any downtime. The databases currently running will continue to run without any issues.

  • Unzip the 12.1.0.2 database software if not already done earlier.
dm01db01-orcldb1 {/u01/app/oracle/software}: unzip -q /u01/app/oracle/software/p21419221_121020_Linux-x86-64_1of10.zip -d /u01/app/oracle/software

dm01db01-orcldb1 {/u01/app/oracle/software}: unzip -q /u01/app/oracle/software/p21419221_121020_Linux-x86-64_2of10.zip -d /u01/app/oracle/software

  • Create the new Oracle Home directory on all Compute nodes
dm01db01-orcldb1 {/u01/app/oracle/software}:dcli -g ~/dbs_group -l oracle mkdir -p /u01/app/oracle/product/12.1.0.2/dbhome
  • Install Oracle Database 12.1.0.2
dm01db01-orcldb1 {/home/oracle}:unset ORACLE_HOME ORACLE_BASE ORACLE_SID

dm01db01- {/home/oracle}:echo $ORACLE_HOME

dm01db01- {/home/oracle}:echo $ORACLE_SID

dm01db01- {/home/oracle}:echo $ORACLE_BASE

dm01db01- {/home/oracle}:export DISPLAY=10.30.20.1:0.0

dm01db01- {/home/oracle}:cd /u01/app/oracle/software/database/

dm01db01- {/u01/app/oracle/software/database}:./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 500 MB.   Actual 7567 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 16378 MB    Passed
Checking monitor: must be configured to display at least 256 colors

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-01-11_04-55-53AM. Please wait …

dm01db01- {/u01/app/oracle/software/database}:  
        


Uncheck Security Update and click Next

Click Yes to ignore the warning and continue with installation

Select “Install database software only” and click Next

Select “Oracle RAC database installation” and click Next

Select all nodes and click Next

Select Language and click Next

Select “Enterprise Edition” and click Next

Specify the Oracle Base and Oracle Home location and click Next

Select OS group for OS authentication and click Next

Prerequisite checks start

Review the summary page and click the Install button to begin the installation

Oracle Software Installation started
 
Software Installation  in progress

The Execute Configuration script page appears. Open a new terminal and execute the configuration script on all the compute nodes as the root user.

Execute the configuration script on node 1 first; it can then be executed on all the remaining nodes in parallel.

[root@dm01db01 ~]# /u01/app/oracle/product/12.1.0.2/dbhome/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/dbhome

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.


[root@dm01db01 ~]# dcli -g dbs_group -l root '/u01/app/oracle/product/12.1.0.2/dbhome/root.sh'
dm01db01: Performing root user operation.
dm01db01:
dm01db01: The following environment variables are set as:
dm01db01: ORACLE_OWNER= oracle
dm01db01: ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/dbhome
dm01db01:
dm01db01: Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of “dbhome” have

not changed. No need to overwrite.
dm01db01: The contents of “oraenv” have not changed. No need to overwrite.
dm01db01: The contents of “coraenv” have not changed. No need to overwrite.
dm01db01:
dm01db01: Entries will be added to the /etc/oratab file as needed by
dm01db01: Database Configuration Assistant when a database is created
dm01db01: Finished running generic part of root script.
dm01db01: Now product-specific root actions will be performed.
dm01db02: Performing root user operation.
dm01db02:
dm01db02: The following environment variables are set as:
dm01db02: ORACLE_OWNER= oracle
dm01db02: ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/dbhome
dm01db02:
dm01db02: Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of “dbhome” have

not changed. No need to overwrite.
dm01db02: The contents of “oraenv” have not changed. No need to overwrite.
dm01db02: The contents of “coraenv” have not changed. No need to overwrite.
dm01db02:
dm01db02: Entries will be added to the /etc/oratab file as needed by
dm01db02: Database Configuration Assistant when a database is created
dm01db02: Finished running generic part of root script.
dm01db02: Now product-specific root actions will be performed.
dm01db03: Performing root user operation.
dm01db03:
dm01db03: The following environment variables are set as:
dm01db03: ORACLE_OWNER= oracle
dm01db03: ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/dbhome
dm01db03:
dm01db03: Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of “dbhome” have

not changed. No need to overwrite.
dm01db03: The contents of “oraenv” have not changed. No need to overwrite.
dm01db03: The contents of “coraenv” have not changed. No need to overwrite.
dm01db03:
dm01db03: Entries will be added to the /etc/oratab file as needed by
dm01db03: Database Configuration Assistant when a database is created
dm01db03: Finished running generic part of root script.
dm01db03: Now product-specific root actions will be performed.
dm01db04: Performing root user operation.
dm01db04:
dm01db04: The following environment variables are set as:
dm01db04: ORACLE_OWNER= oracle
dm01db04: ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/dbhome
dm01db04:
dm01db04: Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of “dbhome” have

not changed. No need to overwrite.
dm01db04: The contents of “oraenv” have not changed. No need to overwrite.
dm01db04: The contents of “coraenv” have not changed. No need to overwrite.
dm01db04:
dm01db04: Entries will be added to the /etc/oratab file as needed by
dm01db04: Database Configuration Assistant when a database is created
dm01db04: Finished running generic part of root script.
dm01db04: Now product-specific root actions will be performed.
dm01db05: Performing root user operation.
dm01db05:
dm01db05: The following environment variables are set as:
dm01db05: ORACLE_OWNER= oracle
dm01db05: ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/dbhome
dm01db05:
dm01db05: Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of “dbhome” have

not changed. No need to overwrite.
dm01db05: The contents of “oraenv” have not changed. No need to overwrite.
dm01db05: The contents of “coraenv” have not changed. No need to overwrite.
dm01db05:
dm01db05: Entries will be added to the /etc/oratab file as needed by
dm01db05: Database Configuration Assistant when a database is created
dm01db05: Finished running generic part of root script.
dm01db05: Now product-specific root actions will be performed.
dm01db06: Performing root user operation.
dm01db06:
dm01db06: The following environment variables are set as:
dm01db06: ORACLE_OWNER= oracle
dm01db06: ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/dbhome
dm01db06:
dm01db06: Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of “dbhome” have

not changed. No need to overwrite.
dm01db06: The contents of “oraenv” have not changed. No need to overwrite.
dm01db06: The contents of “coraenv” have not changed. No need to overwrite.
dm01db06:
dm01db06: Entries will be added to the /etc/oratab file as needed by
dm01db06: Database Configuration Assistant when a database is created
dm01db06: Finished running generic part of root script.
dm01db06: Now product-specific root actions will be performed.
dm01db07: Performing root user operation.
dm01db07:
dm01db07: The following environment variables are set as:
dm01db07: ORACLE_OWNER= oracle
dm01db07: ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/dbhome
dm01db07:
dm01db07: Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of “dbhome” have

not changed. No need to overwrite.
dm01db07: The contents of “oraenv” have not changed. No need to overwrite.
dm01db07: The contents of “coraenv” have not changed. No need to overwrite.
dm01db07:
dm01db07: Entries will be added to the /etc/oratab file as needed by
dm01db07: Database Configuration Assistant when a database is created
dm01db07: Finished running generic part of root script.
dm01db07: Now product-specific root actions will be performed.
dm01db08: Performing root user operation.
dm01db08:
dm01db08: The following environment variables are set as:
dm01db08: ORACLE_OWNER= oracle
dm01db08: ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/dbhome
dm01db08:
dm01db08: Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of “dbhome” have

not changed. No need to overwrite.
dm01db08: The contents of “oraenv” have not changed. No need to overwrite.
dm01db08: The contents of “coraenv” have not changed. No need to overwrite.
dm01db08:
dm01db08: Entries will be added to the /etc/oratab file as needed by
dm01db08: Database Configuration Assistant when a database is created
dm01db08: Finished running generic part of root script.
dm01db08: Now product-specific root actions will be performed.


Go back to the Installation screen and click ok

Click Close to complete installation.

Post Oracle Database 12.1.0.2 Installation
  • Re-link Oracle Executable in new Oracle Home with RDS if required
dm01db01-orcldb1 {/home/oracle}:dcli -g ~/dbs_group -l oracle '/u01/app/oracle/product/12.1.0.2/dbhome/bin/skgxpinfo'
dm01db01: rds
dm01db02: rds
dm01db03: rds
dm01db04: rds
dm01db05: rds
dm01db06: rds
dm01db07: rds
dm01db08: rds


If the command does not return rds, relink as follows:

dm01db01-orcldb1 {/home/oracle}:dcli -g ~/dbs_group -l oracle 'ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome make -C /u01/app/oracle/product/12.1.0.2/dbhome/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle'

  • Install Latest OPatch in the new Oracle Home on all compute nodes
dm01db01-orcldb1 {/u01/app/oracle/software}:dcli -g ~/dbs_group -l oracle unzip -oq -d /u01/app/oracle/product/12.1.0.2/dbhome /u01/app/oracle/software/p6880880_112000_Linux-x86-64.zip

dm01db01-orcldb1 {/u01/app/oracle/software}:dcli -g ~/dbs_group -l oracle /u01/app/oracle/product/12.1.0.2/dbhome/OPatch/opatch version
dm01db01: OPatch Version: 11.2.0.3.15
dm01db01:
dm01db01: OPatch succeeded.
dm01db02: OPatch Version: 11.2.0.3.15
dm01db02:
dm01db02: OPatch succeeded.
dm01db03: OPatch Version: 11.2.0.3.15
dm01db03:
dm01db03: OPatch succeeded.
dm01db04: OPatch Version: 11.2.0.3.15
dm01db04:
dm01db04: OPatch succeeded.
dm01db05: OPatch Version: 11.2.0.3.15
dm01db05:
dm01db05: OPatch succeeded.
dm01db06: OPatch Version: 11.2.0.3.15
dm01db06:
dm01db06: OPatch succeeded.
dm01db07: OPatch Version: 11.2.0.3.15
dm01db07:
dm01db07: OPatch succeeded.
dm01db08: OPatch Version: 11.2.0.3.15
dm01db08:
dm01db08: OPatch succeeded.


Upgrade Database to 12.1.0.2

You can upgrade the database using DBUA or manually. Here I am using DBUA to upgrade my database orcldb.

Prerequisites
  • Back up the database
RMAN> backup database plus archivelog;
  • Create a Guaranteed Restore Point
SQL> CREATE RESTORE POINT before_upgrade_12102 GUARANTEE FLASHBACK DATABASE;

Restore point created.
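A guaranteed restore point pins flashback logs in the FRA, so it should be verified before the upgrade and dropped once the upgrade is confirmed good; a short sketch:

```sql
-- Verify the restore point exists before starting the upgrade
SELECT name, guarantee_flashback_database, time FROM v$restore_point;

-- After the upgrade is validated, drop it to release the flashback logs
DROP RESTORE POINT before_upgrade_12102;
```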

  • Analyze the Database to Upgrade with the Pre-Upgrade Information Tool (if not done earlier)
SQL> @/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/admin/preupgrd.sql

Loading Pre-Upgrade Package…

***************************************************************************
Executing Pre-Upgrade Checks in ORCLDB…
***************************************************************************

      ************************************************************
                  ====>> ERRORS FOUND for ORCLDB <<====

 The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
                    prior to attempting your upgrade.
            Failure to do so will result in a failed upgrade.

 1) Check Tag:    PURGE_RECYCLEBIN
    Check Summary: Check that recycle bin is empty prior to upgrade
    Fixup Summary:
     “The recycle bin will be purged.”
            You MUST resolve the above error prior to upgrade
      ************************************************************
      ************************************************************
              ====>> PRE-UPGRADE RESULTS for ORCLDB <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:
 /u01/app/oracle/cfgtoollogs/orcldb/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:
 /u01/app/oracle/cfgtoollogs/orcldb/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:
 /u01/app/oracle/cfgtoollogs/orcldb/preupgrade/postupgrade_fixups.sql

      ************************************************************
***************************************************************************
Pre-Upgrade Checks in ORCLDB Completed.
***************************************************************************
***************************************************************************
***************************************************************************

  • Review the preupgrade.log file for errors/issues:
dm01db01-orcldb1 {/u01/app/oracle/software}:vi /u01/app/oracle/cfgtoollogs/orcldb/preupgrade/preupgrade.log
  • Run the preupgrade_fixups.sql produced by pre-upgrade utility above:
dm01db01-orcldb1 {/u01/app/oracle/software}:sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jan 9 06:29:50 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @/u01/app/oracle/cfgtoollogs/orcldb/preupgrade/preupgrade_fixups.sql
Pre-Upgrade Fixup Script Generated on 2017-01-09 06:26:48  Version: 12.1.0.2 Build: 014
Beginning Pre-Upgrade Fixups…
Executing in container ORCLDB

**********************************************************************
Check Tag:     DEFAULT_PROCESS_COUNT
Check Summary: Verify min process count is not too low
Fix Summary:   Review and increase if needed, your PROCESSES value.
**********************************************************************
Fixup Returned Information:
WARNING: --> Process Count may be too low

     Database has a maximum process count of 150 which is lower than the
     default value of 300 for this release.
     You should update your processes value prior to the upgrade
     to a value of at least 300.
     For example:
        ALTER SYSTEM SET PROCESSES=300 SCOPE=SPFILE
     or update your init.ora file.
**********************************************************************

**********************************************************************
Check Tag:     EM_PRESENT
Check Summary: Check if Enterprise Manager is present
Fix Summary:   Execute emremove.sql prior to upgrade.
**********************************************************************
Fixup Returned Information:
WARNING: --> Enterprise Manager Database Control repository found in the database

     In Oracle Database 12c, Database Control is removed during
     the upgrade. To save time during the Upgrade, this action
     can be done prior to upgrading using the following steps after
     copying rdbms/admin/emremove.sql from the new Oracle home
   - Stop EM Database Control:
    $> emctl stop dbconsole
   - Connect to the Database using the SYS account AS SYSDBA:
   SET ECHO ON;
   SET SERVEROUTPUT ON;
   @emremove.sql
     Without the set echo and serveroutput commands you will not
     be able to follow the progress of the script.
**********************************************************************

**********************************************************************
Check Tag:     AMD_EXISTS
Check Summary: Check to see if AMD is present in the database
Fix Summary:   Manually execute ORACLE_HOME/oraolap/admin/catnoamd.sql script to remove OLAP.
**********************************************************************
Fixup Returned Information:
INFORMATION: --> OLAP Catalog(AMD) exists in database

     Starting with Oracle Database 12c, OLAP Catalog component is desupported.
     If you are not using the OLAP Catalog component and want
     to remove it, then execute the
     ORACLE_HOME/olap/admin/catnoamd.sql script before or
     after the upgrade.
**********************************************************************

**********************************************************************
Check Tag:     APEX_UPGRADE_MSG
Check Summary: Check that APEX will need to be upgraded.
Fix Summary:   Oracle Application Express can be manually upgraded prior to database upgrade.
**********************************************************************
Fixup Returned Information:
INFORMATION: --> Oracle Application Express (APEX) can be
     manually upgraded prior to database upgrade

     APEX is currently at version 3.2.1.00.12 and will need to be
     upgraded to APEX version 4.2.5 in the new release.
     Note 1: To reduce database upgrade time, APEX can be manually
             upgraded outside of and prior to database upgrade.
     Note 2: See MOS Note 1088970.1 for information on APEX
             installation upgrades.
**********************************************************************

**********************************************************************
                      [Pre-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ********* Dictionary Statistics *********
                        *****************************************

Please gather dictionary statistics 24 hours prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
    EXECUTE dbms_stats.gather_dictionary_stats;

^^^ MANUAL ACTION SUGGESTED ^^^

           **************************************************
                ************* Fixup Summary ************

 4 fixup routines generated INFORMATIONAL messages that should be reviewed.

**************** Pre-Upgrade Fixup Script Complete *********************

PL/SQL procedure successfully completed.

  • Address the pre-upgrade recommendations stated above:
    • Increase processes parameter to 300
SQL> ALTER SYSTEM SET PROCESSES=300 SCOPE=SPFILE;

System altered.

    • Drop OLAP Catalog
SQL> @$ORACLE_HOME/olap/admin/catnoamd.sql

Synonym dropped.

Synonym dropped.

Synonym dropped.

Synonym dropped.

Synonym dropped.

Synonym dropped.

Type dropped.

Type dropped.

PL/SQL procedure successfully completed.

Role dropped.

PL/SQL procedure successfully completed.

1 row deleted.

    • Gather dictionary statistics
SQL> EXECUTE dbms_stats.gather_dictionary_stats;

PL/SQL procedure successfully completed.

  • Before starting the Database Upgrade Assistant, change the preference for 'concurrent statistics gathering' on the current release if it is not already set to 'FALSE'.
First, while still on the 11.2.0.4 release, obtain the current setting:

SQL> SELECT dbms_stats.get_prefs('CONCURRENT') from dual;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
FALSE


If 'concurrent statistics gathering' is not set to 'FALSE', change the value to 'FALSE' before the upgrade.

SQL> BEGIN
DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','FALSE');
END;
/

SQL> SELECT dbms_stats.get_prefs(‘CONCURRENT’) from dual;

  • For each database being upgraded use the srvctl command to determine if a ‘TAF policy specification’ with ‘PRECONNECT’ is defined.
dm01db01-orcldb1 {/home/oracle}:srvctl config service -d orcldb | grep -i preconnect | wc -l
0

  • Oracle recommends removing the value for the init.ora parameter ‘listener_networks’ before starting DBUA.
SQL> col name for a30
SQL> col value for a30
SQL> select name, value from v$parameter where name='listener_networks';

NAME                           VALUE
------------------------------ ------------------------------
listener_networks

 
Run DBUA from the new 12.1.0.2 ORACLE_HOME as follows:
dm01db01-orcldb1 {/home/oracle}:export DISPLAY=10.30.204.35:0.0
dm01db01-orcldb1 {/home/oracle}:/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/dbua


Select Upgrade Oracle Database and click Next

Select the source database Oracle Home, verify the database details, and click Next.

Prerequisite checks begin

Click Next

Increase the upgrade and recompilation parallelism based on the number of CPUs available.
As part of the upgrade I am also upgrading the timezone data, gathering statistics, and setting user tablespaces to read-only.
Click Next

Choose whether to configure EM Express for this database. Click Next

Optionally, take an RMAN backup if you have not already done so. Click Next

Review the summary page and click Next

Database Upgrade process begins

Upgrade process continues

While the upgrade process is running, you can monitor the database log as follows:
dm01db01-orcldb1 {/u01/app/oracle/diag/rdbms/orcldb/orcldb1/trace}:tail -f alert_orcldb1.log
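During a long upgrade it is easier to filter the scrolling log for errors than to eyeball it. A minimal sketch (the log path is the one tailed above; demonstrated here against a few sample alert-log lines):

```shell
# Sketch: surface only ORA- errors from the alert log during the upgrade.
# For a live run you would use:
#   tail -f alert_orcldb1.log | grep --line-buffered 'ORA-'
# Demonstrated against a small sample of typical alert-log lines:
sample_log='Completed: ALTER DATABASE OPEN MIGRATE
ORA-00600: internal error code
Thread 1 advanced to log sequence 42'
grep -n 'ORA-' <<< "$sample_log"   # prints the matching line with its number
```

The `--line-buffered` flag matters in the live pipeline, since it stops grep from buffering matches while tail keeps the pipe open.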
 
Upgrade process continues

Upgrade process continues

Finally, the upgrade results appear. Take note of the results.

Scroll down to review the upgrade results. Once complete, click Close to finish the upgrade process.

Post upgrade steps
  • Validate databases
dm01db01-dbm011 {/u01/app/11.2.0.4/grid/crs/script}:srvctl status database -d orcldb
Instance orcldb1 is running on node dm01db01
Instance orcldb2 is running on node dm01db02
Instance orcldb3 is running on node dm01db03
Instance orcldb4 is running on node dm01db04
Instance orcldb5 is running on node dm01db05
Instance orcldb6 is running on node dm01db06
Instance orcldb7 is running on node dm01db07
Instance orcldb8 is running on node dm01db08

SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production              0
PL/SQL Release 12.1.0.2.0 - Production                                                    0
CORE    12.1.0.2.0      Production                                                        0
TNS for Linux: Version 12.1.0.2.0 - Production                                            0
NLSRTL Version 12.1.0.2.0 - Production
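The `srvctl status database` output above can also be checked mechanically rather than by eye. A minimal sketch (assuming the 8-node full rack shown in this article; the sample text below is the captured output):

```shell
# Count "is running" lines in the srvctl status output; expect one per node.
status='Instance orcldb1 is running on node dm01db01
Instance orcldb2 is running on node dm01db02
Instance orcldb3 is running on node dm01db03
Instance orcldb4 is running on node dm01db04
Instance orcldb5 is running on node dm01db05
Instance orcldb6 is running on node dm01db06
Instance orcldb7 is running on node dm01db07
Instance orcldb8 is running on node dm01db08'
running=$(grep -c 'is running' <<< "$status")
echo "$running of 8 instances running"
```

In practice you would pipe `srvctl status database -d orcldb` straight into the `grep -c`, and alert if the count is below the expected instance count.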

  • Remove restore point
SQL> DROP RESTORE POINT before_upgrade_12102;
Restore point dropped.

  • Run Exachk report
See MOS note 1070954.1
  • Deinstall Old Oracle Home (11.2.0.4)
See the article: http://netsoftmate.blogspot.in/2016/12/deinstall-oracle-homes-on-exadata.html
  • Perform a database backup
RMAN> backup database plus archivelog;
  • Modify ASM compatibility attribute
SQL> ALTER DISKGROUP RECO SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';

Diskgroup altered.

SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';

Diskgroup altered.

SQL> ALTER DISKGROUP SYSTEMDG SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';

Diskgroup altered.

SQL> select a.group_number, b.name dgname, a.name, a.value from v$asm_attribute a, v$asm_diskgroup b where a.name in ('au_size','disk_repair_time','compatible.rdbms','compatible.asm') and a.group_number=b.group_number order by b.name, a.name;

GROUP_NUMBER DGNAME                         NAME                           VALUE
------------ ------------------------------ ------------------------------ ------------------------------
           1 DATA                           au_size                        4194304
           1 DATA                           compatible.asm                 12.1.0.2.0
           1 DATA                           compatible.rdbms               11.2.0.4.0
           1 DATA                           disk_repair_time               3.6h
           2 RECO                           au_size                        4194304
           2 RECO                           compatible.asm                 12.1.0.2.0
           2 RECO                           compatible.rdbms               11.2.0.4.0
           2 RECO                           disk_repair_time               3.6h
           3 SYSTEMDG                       au_size                        4194304
           3 SYSTEMDG                       compatible.asm                 12.1.0.2.0
           3 SYSTEMDG                       compatible.rdbms               11.2.0.4.0
           3 SYSTEMDG                       disk_repair_time               3.6h

12 rows selected.


Conclusion
In this article we upgraded an Oracle Database from 11.2.0.4 to 12.1.0.2. First we installed the 12.1.0.2 software into a new Oracle Home, and then upgraded the database to 12.1.0.2. Using DBUA to upgrade an Oracle Database is straightforward and easy.



Overview
It's time to upgrade Oracle Database 11g (11.2.0.4 in this article) to Oracle Database 12.1.0.2. In this article I will demonstrate, step by step, how to upgrade Oracle Grid Infrastructure and Database from 11.2.0.4 to 12.1.0.2 on an Exadata Database Machine X5-2 Full Rack.

Since the upgrade task is lengthy, I will split the article into two parts:

  • Part I:
  • Prepare the Current Exadata Environment &
  • Install and Upgrade Grid Infrastructure to 12.1.0.2
  • Part II:
  • Install Database 12.1.0.2 Software &
  • Upgrade Database to 12.1.0.2 using DBUA and Post Upgrade Steps

Current Environment Details:
  • Exadata Database Machine X5-2 Full Rack
  • Exadata Storage Software version 12.1.2.1.1
  • Oracle Grid/Database Version 11.2.0.4

Download Grid/Database Software and Copy it to Exadata Compute Node 1:

Follow the step-by-step instructions below to download the Grid/Database software for the upgrade.

dm01db01-orcldb1 {/u01/app/oracle/software}:ls -l
-rw-r--r-- 1 oracle oinstall 1673519571 Jan  9 00:30 p21419221_121020_Linux-x86-64_1of10.zip
-rw-r--r-- 1 oracle oinstall 1014527110 Jan  9 00:30 p21419221_121020_Linux-x86-64_2of10.zip
-rw-r--r-- 1 oracle oinstall  646969279 Jan  9 01:25 p21419221_121020_Linux-x86-64_6of10.zip
-rw-r--r-- 1 oracle oinstall 1747021273 Jan  9 01:38 p21419221_121020_Linux-x86-64_5of10.zip


Note: There are 2 zip files each for Database and Grid Infrastructure.

  • zip files 1of10 and 2of10 are Database specific files
  • zip files 5of10 and 6of10 are Grid specific files
Unzip both sets of zip files in the same directory as follows:
  • Unzip database zip files
dm01db01-orcldb1 {/u01/app/oracle/software}:unzip p21419221_121020_Linux-x86-64_1of10.zip
Archive:  p21419221_121020_Linux-x86-64_1of10.zip
   creating: database/
  inflating: database/runInstaller
   creating: database/rpm/
  inflating: database/rpm/cvuqdisk-1.0.9-1.rpm
   creating: database/install/
  inflating: database/install/attachHome.sh
  inflating: database/install/oraparam.ini.deinstall
   creating: database/install/images/
  inflating: database/install/images/billboards.gif
   creating: database/install/resource/
  inflating: database/install/resource/cons.nls

  inflating: database/stage/Actions/launchPadActions/10.1.0.2.0/1/launchpadaction.jar
   creating: database/response/
  inflating: database/response/netca.rsp
  inflating: database/response/dbca.rsp
  inflating: database/response/db_install.rsp
  inflating: PatchSearch.xml

dm01db01-orcldb1 {/u01/app/oracle/software}:unzip p21419221_121020_Linux-x86-64_2of10.zip
Archive:  p21419221_121020_Linux-x86-64_2of10.zip
   creating: database/stage/Components/oracle.ctx/
   creating: database/stage/Components/oracle.ctx/12.1.0.2.0/
   creating: database/stage/Components/oracle.ctx/12.1.0.2.0/1/
   creating: database/stage/Components/oracle.ctx/12.1.0.2.0/1/DataFiles/
  inflating: database/stage/Components/oracle.ctx/12.1.0.2.0/1/DataFiles/filegroup15.21.1.jar
  inflating: database/stage/Components/oracle.ctx/12.1.0.2.0/1/DataFiles/filegroup15.9.1.jar
  inflating: database/stage/Components/oracle.ctx/12.1.0.2.0/1/DataFiles/filegroup3.jar
  inflating: database/stage/Components/oracle.ctx/12.1.0.2.0/1/DataFiles/filegroup15.16.1.jar
  inflating: database/stage/Components/oracle.ctx/12.1.0.2.0/1/DataFiles/filegroup4.jar

  inflating: database/stage/Components/oracle.rdbms/12.1.0.2.0/1/DataFiles/filegroup40.jar
   creating: database/stage/Components/oracle.javavm.containers/
   creating: database/stage/Components/oracle.javavm.containers/12.1.0.2.0/
   creating: database/stage/Components/oracle.javavm.containers/12.1.0.2.0/1/
   creating: database/stage/Components/oracle.javavm.containers/12.1.0.2.0/1/DataFiles/
  inflating: database/stage/Components/oracle.javavm.containers/12.1.0.2.0/1/DataFiles/filegroup1.jar
  inflating: database/stage/Components/oracle.javavm.containers/12.1.0.2.0/1/DataFiles/filegroup2.jar
  inflating: database/install/.oui

  • Unzip Grid Infrastructure zip files
dm01db01-orcldb1 {/u01/app/oracle/software}:unzip p21419221_121020_Linux-x86-64_5of10.zip
Archive:  p21419221_121020_Linux-x86-64_5of10.zip
   creating: grid/
  inflating: grid/runcluvfy.sh
  inflating: grid/welcome.html
   creating: grid/install/
  inflating: grid/install/oraparam.ini
  inflating: grid/install/clusterparam.ini
  inflating: grid/install/detachHome.sh
  inflating: grid/install/cvu.ini
  inflating: grid/install/lsnodes
   creating: grid/install/resource/
  inflating: grid/install/resource/cons_pt_BR.nls
  inflating: grid/install/resource/cons_es.nls
  inflating: grid/install/resource/cons_it.nls
  inflating: grid/install/resource/cons.nls
  inflating: grid/install/resource/cons_ko.nls
  inflating: grid/install/resource/cons_zh_TW.nls

 extracting: grid/stage/sizes/oracle.crs.Complete.sizes.properties
  inflating: grid/stage/sizes/oracle.crs12.1.0.2.0Complete.sizes.properties
   creating: grid/response/
  inflating: grid/response/grid_install.rsp
   creating: grid/sshsetup/
  inflating: grid/sshsetup/sshUserSetup.sh
  inflating: grid/runInstaller
  inflating: grid/readme.html
   creating: grid/rpm/
  inflating: grid/rpm/cvuqdisk-1.0.9-1.rpm

dm01db01-orcldb1 {/u01/app/oracle/software}:unzip p21419221_121020_Linux-x86-64_6of10.zip
Archive:  p21419221_121020_Linux-x86-64_6of10.zip
   creating: grid/stage/Components/oracle.has.crs/
   creating: grid/stage/Components/oracle.has.crs/12.1.0.2.0/
   creating: grid/stage/Components/oracle.has.crs/12.1.0.2.0/1/
   creating: grid/stage/Components/oracle.has.crs/12.1.0.2.0/1/DataFiles/
  inflating: grid/stage/Components/oracle.has.crs/12.1.0.2.0/1/DataFiles/filegroup44.jar
  inflating: grid/stage/Components/oracle.has.crs/12.1.0.2.0/1/DataFiles/filegroup8.jar
  inflating: grid/stage/Components/oracle.has.crs/12.1.0.2.0/1/DataFiles/filegroup38.jar
  inflating: grid/stage/Components/oracle.has.crs/12.1.0.2.0/1/DataFiles/filegroup43.jar
  inflating: grid/stage/Components/oracle.has.crs/12.1.0.2.0/1/DataFiles/filegroup31.jar

  inflating: grid/stage/Components/oracle.rdbms/12.1.0.2.0/1/DataFiles/filegroup19.4.1.jar
  inflating: grid/stage/Components/oracle.rdbms/12.1.0.2.0/1/DataFiles/filegroup73.jar
  inflating: grid/stage/Components/oracle.rdbms/12.1.0.2.0/1/DataFiles/filegroup19.8.1.jar
  inflating: grid/stage/Components/oracle.rdbms/12.1.0.2.0/1/DataFiles/filegroup4.jar
  inflating: grid/stage/Components/oracle.rdbms/12.1.0.2.0/1/DataFiles/filegroup3.jar
  inflating: grid/stage/Components/oracle.rdbms/12.1.0.2.0/1/DataFiles/filegroup19.22.1.jar
  inflating: grid/stage/Components/oracle.rdbms/12.1.0.2.0/1/DataFiles/filegroup74.jar
  inflating: grid/install/.oui

dm01db01-orcldb1 {/u01/app/oracle/software}:ls -ltr
total 4967820
drwxr-xr-x 7 oracle oinstall       4096 Jul 11  2014 database
drwxr-xr-x 7 oracle oinstall       4096 Jul 11  2014 grid
-rw-rw-r-- 1 oracle oinstall       6584 Aug 13  2015 PatchSearch.xml
-rw-r--r-- 1 oracle oinstall 1673519571 Jan  9 00:30 p21419221_121020_Linux-x86-64_1of10.zip
-rw-r--r-- 1 oracle oinstall 1014527110 Jan  9 00:30 p21419221_121020_Linux-x86-64_2of10.zip
-rw-r--r-- 1 oracle oinstall  646969279 Jan  9 01:25 p21419221_121020_Linux-x86-64_6of10.zip
-rw-r--r-- 1 oracle oinstall 1747021273 Jan  9 01:38 p21419221_121020_Linux-x86-64_5of10.zip

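Before unzipping, it is worth confirming the archives were not corrupted in transit by comparing their digests against the checksums published on the MOS download page for patch 21419221. The mechanics are sketched below on a throwaway file, since the real digest values belong to MOS:

```shell
# Sketch: compare a file's SHA-1 digest against an expected value.
# In practice 'expected' comes from the MOS patch download page and the
# file is one of the p21419221 zip archives listed above.
workdir=$(mktemp -d)
printf 'demo payload' > "$workdir/archive.zip"
expected=$(sha1sum "$workdir/archive.zip" | awk '{print $1}')   # stand-in for the MOS value
actual=$(sha1sum "$workdir/archive.zip" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH - re-download" >&2
fi
```

A mismatched digest almost always means a truncated download; re-download that one archive before extracting anything.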

Things to consider before upgrade to Oracle Grid Infrastructure 12c and Oracle Database 12c:

  • Current Oracle Database and Grid Infrastructure version must be 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1.
dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jan 9 00:33:25 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

dm01db01-orcldb1 {/home/oracle}:sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jan 9 00:32:24 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

  • Oracle Database 12c and Exadata
For full Exadata functionality, including:
  • Smart Scan offloaded filtering
  • Storage indexes
  • I/O Resource Management (IORM)
Exadata Storage Server version 12.1.1.1.0 is required and 12.1.1.1.1 is recommended for Oracle Database Release 12.1.0.2.

[root@dm01db01 ~]# imageinfo
Kernel version: 2.6.39-400.248.3.el6uek.x86_64 #1 SMP Wed Mar 11 18:04:34 PDT 2015 x86_64
Image version: 12.1.2.1.1.150316.2
Image activated: 2015-04-11 15:17:40 -0500
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1


Note: Customers unable to upgrade to Exadata 12.1.1.1.1 require a minimum version of 11.2.3.3.1 or later on Exadata Storage Servers and Database Servers. See document 1537407.1 for restrictions when running Oracle Database 12c in combination with Exadata releases earlier than 12.1.1.1.0.

  • Sun Datacenter InfiniBand Switch 36 is running software release 1.3.3-2 or later, 2.1.3-4 recommended
[root@dm01db01 ~]# ssh dm01sw-ib1 version
root@dm01sw-ib1's password:
SUN DCS 36p version: 2.1.5-1
Build time: Oct  6 2014 10:35:15
SP board info:
Manufacturing Date: 2012.09.12
Serial Number: "NCDA70501"
Hardware Revision: 0x0007
Firmware Revision: 0x0000
BIOS version: SUN0R100
BIOS date: 06/22/2010


In my case I am running ESS version 12.1.2.1.1 and IB Switch software version 2.1.5-1.

  • Fix for bug 12539000 is required to successfully upgrade.
11.2.0.2 BP12 and later, 11.2.0.3, 11.2.0.4 and 12.1.0.1 already contain this fix. An interim patch must be installed for 11.2.0.2 Grid Infrastructure and Database installations running BP11 or earlier.
  • Fix for bug 14639430 is required to properly rollback Grid Infrastructure upgrade, if necessary (not required when Grid Infrastructure already is on 11.2.0.4 or 12.1.0.1).
  • Fix for bug 13460353 is required to create a new 11g database after Grid Infrastructure is upgraded to 12c.(not required when database already is on 11.2.0.4 or 12.1.0.1).
  • When available: GI PSU 12.1.0.2.1 or later (which includes DB PSU 12.1.0.2.1). To be applied:
    • during the upgrade process, before running rootupgrade.sh on the Grid Infrastructure home, or
    • after installing the new Database home, before upgrading the database.
  • Grid Infrastructure upgrades on nodes with different length hostnames in the same cluster require fix for bug 19453778 – CTSSD FAILED TO START WHILE RUNNING ROOTUPGRADE.SH. Contact Oracle Support to obtain the patch.
In my case I am upgrading from 11.2.0.4 to 12.1.0.2, where these bugs are already fixed, so I do not need to apply any of these bug fixes.

High level steps involved in Oracle Grid/Database Upgrade process

  • Pre-Upgrade steps: Preparing the current environment
  • Install and Upgrade Oracle Grid Infrastructure to 12.1.0.2
  • Install Oracle Database 12.1.0.2 Software
  • Upgrade Oracle Database(s) to 12.1.0.2
  • Post-upgrade steps
  • Troubleshooting steps

Assumptions
  • The Oracle Grid/Database software owner is oracle.
dm01db01-orcldb1 {/u01/app/oracle/software}:id oracle
uid=1000(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1003(oper),1004(asmdba)

  • The Oracle inventory group is oinstall.
dm01db01-orcldb1 {/u01/app/oracle/software}:ls -ld /u01/app/oraInventory/
drwxrwx— 8 oracle oinstall 4096 Jan  8 04:44 /u01/app/oraInventory/

  • Create a dbs_group file in the oracle and root users' home directories listing all the compute nodes, one per line.
dm01db01-orcldb1 {/u01/app/oracle/software}:cat ~/dbs_group
dm01db01
dm01db02
dm01db03
dm01db04
dm01db05
dm01db06
dm01db07
dm01db08
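For a full rack the file can be generated rather than typed by hand. A sketch, assuming the `dm01db` node-name prefix shown above (written to a temporary path here instead of `~/dbs_group`):

```shell
# Generate dbs_group entries dm01db01..dm01db08, one hostname per line.
dbs_group=$(mktemp)
seq -f 'dm01db%02g' 1 8 > "$dbs_group"   # %02g zero-pads to two digits
cat "$dbs_group"
wc -l < "$dbs_group"
```

Adjust the prefix and node count to match your rack, and copy the result into both home directories.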

  • Current database home. In my case database version is 11.2.0.4
dm01db01-orcldb1 {/u01/app/oracle/software}:echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0.4/dbhome

  • Current Grid Infrastructure home. In my case grid version is 11.2.0.4
dm01db01-+ASM1 {/u01/app/oracle/software}:echo $ORACLE_HOME
/u01/app/11.2.0.4/grid

  • The primary database to be upgraded is named “orcldb”.
  • Run Exadata Health Check (Exachk) and make sure no major hardware/software issues are reported.

Important MOS notes:
Read them carefully before starting the upgrade process:

  • Document 888828.1 – Database Machine and Exadata Storage Server Supported Releases
  • Document 1537407.1 – Requirements and restrictions when using Oracle Database 12c on Exadata Database Machine
  • Document 1270094.1 – Exadata Critical Issues
  • Document 1070954.1 – Oracle Exadata Database Machine exachk or HealthCheck

Pre-Upgrade steps: Preparing the current environment
  • Test the Upgrade on non-production environment first
  • Ensure Oracle Grid/Database Homes are backed up before upgrade
As root user:
 
Database Home backup:
[oracle@pr04db01 ~]# cd /u01/app/oracle/product/11.2.0.4/dbhome_1
[oracle@pr04db01 ~]# tar -zcvf /u01/app/oracle/product/11.2.0.4/db11204.tgz .


Grid Home backup:
[oracle@pr04db01 ~]# cd /u01/app/11.2.0.4/grid
[oracle@pr04db01 ~]# tar -zcvf /u01/app/11.2.0.4/gi11204.tgz .
  • Optionally take a snapshot backup of /u01 file system
  • Ensure the databases are backed up before the upgrade. An RMAN backup to the FRA or ZFS is recommended.
RMAN> backup database plus archivelog;
  • In addition to the database backup, it is recommended to create a Guaranteed Restore Point
SQL> CREATE RESTORE POINT before_upgrade_12102 GUARANTEE FLASHBACK DATABASE;
  • Perform a one-off patch assessment
Verify that the one-off patches currently installed on top of 11.2.0.4 are fixed in 12.1.0.2. Review the README. Contact Oracle Support if you are unable to determine whether a one-off patch is still required on top of 12.1.0.2.
  • Do not place the new ORACLE_HOME under /opt/oracle
  • Download 12.1.0.2 Grid/Database software – 21419221
  • Download latest Opatch utility software – 6880880
  • Download one-off patch required for your existing software version if any.
  • Apply patches where required before upgrading proceeds
  • Run Exachk before upgrade
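A tarball that cannot be listed back is no backup at all, so it is worth verifying the Grid/Database home archives created earlier in this checklist before proceeding. A sketch on a throwaway directory (for the real homes, point it at db11204.tgz / gi11204.tgz):

```shell
# Sketch: create a backup tarball of a directory and verify it is listable.
home=$(mktemp -d)
mkdir -p "$home/bin"
echo 'stub binary' > "$home/bin/oracle"
backup="$home.tgz"
tar -zcf "$backup" -C "$home" .
# 'tar -ztf' exits non-zero if the archive is corrupt; counting the
# entries it lists gives a quick sanity check on completeness.
entries=$(tar -ztf "$backup" | wc -l)
echo "archive lists $entries entries"
```

For the real homes, also compare the entry count against `find $ORACLE_HOME | wc -l` to catch a partially written archive.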

Validate Readiness for Oracle Clusterware upgrade using CVU and Exachk
  • Navigate to the grid software directory and run the Cluster Verification Utility as follows:
dm01db01-+ASM1 {/u01/app/oracle/software}:cd grid/
dm01db01-+ASM1 {/u01/app/oracle/software/grid}:ls -ltr
total 80
-rwxr-xr-x  1 oracle oinstall   500 Feb  6  2013 welcome.html
-rwxr-xr-x  1 oracle oinstall  5085 Dec 20  2013 runcluvfy.sh
-rwxr-xr-x  1 oracle oinstall  8534 Jul  7  2014 runInstaller
drwxr-xr-x  2 oracle oinstall  4096 Jul  7  2014 rpm
drwxrwxr-x  2 oracle oinstall  4096 Jul  7  2014 sshsetup
drwxrwxr-x  2 oracle oinstall  4096 Jul  7  2014 response
drwxr-xr-x 14 oracle oinstall  4096 Jul  7  2014 stage
-rwxr-xr-x  1 oracle oinstall 33934 Aug  7  2015 readme.html
drwxr-xr-x  4 oracle oinstall  4096 Jan  9 04:43 install

dm01db01-+ASM1 {/u01/app/oracle/software/grid}:./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0.4/grid -dest_crshome /u01/app/12.1.0.2/grid -dest_version 12.1.0.2.0 -fixup -verbose

Performing pre-checks for cluster services setup

Checking node reachability…

Check: Node reachability from node “dm01db01”
  Destination Node                      Reachable?
  ————————————  ————————
  dm01db01                              yes
  dm01db02                              yes
  dm01db03                              yes
  dm01db04                              yes
  dm01db05                              yes
  dm01db06                              yes
  dm01db07                              yes
  dm01db08                              yes
Result: Node reachability check passed from node “dm01db01”

Checking user equivalence…

Check: User equivalence for user “oracle”
  Node Name                             Status
  ————————————  ————————
  dm01db08                              passed
  dm01db07                              passed
  dm01db06                              passed
  dm01db05                              passed
  dm01db04                              passed
  dm01db03                              passed
  dm01db02                              passed
  dm01db01                              passed
Result: User equivalence check passed for user “oracle”

Checking CRS user consistency
Result: CRS user consistency check successful
Checking network configuration consistency.
Result: Check for network configuration consistency passed.
Checking ASM disk size consistency
All ASM disks are correctly sized
Checking if ASM parameter file is in use by an ASM instance on the local node
Result: ASM instance is using parameter file "+SYSTEMDG/dm01-cluster/asmparameterfile/registry.253.726236669" on node "dm01db01" on which upgrade is requested.

Checking OLR integrity…
Check of existence of OLR configuration file “/etc/oracle/olr.loc” passed
Check of attributes of OLR configuration file “/etc/oracle/olr.loc” passed

WARNING:
This check does not verify the integrity of the OLR contents. Execute ‘ocrcheck -local’ as a privileged user to verify the contents of OLR.

OLR integrity check passed

Checking node connectivity…

Checking hosts config file…
  Node Name                             Status
  ————————————  ————————
  dm01db01                              passed
  dm01db08                              passed
  dm01db07                              passed
  dm01db06                              passed
  dm01db05                              passed
  dm01db04                              passed
  dm01db03                              passed
  dm01db02                              passed

Verification of the hosts config file successful

…………….

Check for Reverse path filter setting passed

Starting check for Network interface bonding status of private interconnect network interfaces …

Check for Network interface bonding status of private interconnect network interfaces passed

Starting check for /dev/shm mounted as temporary file system …

Check for /dev/shm mounted as temporary file system passed

Starting check for /boot mount …

Check for /boot mount passed

Starting check for zeroconf check …

Check for zeroconf check passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

NOTE:
No fixable verification failures to fix

 
– OR – 

You can also redirect the output to a file and review it more easily:

$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0.4/grid -dest_crshome /u01/app/12.1.0.2/grid -dest_version 12.1.0.2.0 -fixup -verbose > cluvfy_pre_upgrade.log
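With the output in a file, the lines that actually need attention can be pulled out in one pass. A sketch against a few representative cluvfy lines (in practice, run it on the cluvfy_pre_upgrade.log produced above):

```shell
# Sketch: extract WARNING and failed checks from a saved cluvfy log.
log=$(mktemp)
cat > "$log" <<'EOF'
Result: Node reachability check passed from node "dm01db01"
Check for /boot mount passed
WARNING: This check does not verify the integrity of the OLR contents.
Result: User equivalence check failed for user "oracle"
EOF
# -n prefixes line numbers so each hit is easy to locate in the full log.
grep -nE 'WARNING|failed' "$log"
```

Anything the filter surfaces should be resolved (or consciously accepted, as with the OLR warning above) before starting the Grid Infrastructure upgrade.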

Pre-upgrade Information Tool
  • Analyze the databases to be upgraded with the Pre-Upgrade Information Tool. Download and copy the pre-upgrade utility. See the MOS note below:
How to Download and Run Oracle's Database Pre-Upgrade Utility (Doc ID 884522.1)
  • Run the pre-upgrade utility as follows:
dm01db01-+ASM1 {/u01/app/oracle/software}:ls -ltr preupgrade_12.1.0.2.0_14_lf.zip
-rw-r--r-- 1 oracle oinstall 101162 Jan  9 06:19 preupgrade_12.1.0.2.0_14_lf.zip


dm01db01-+ASM1 {/u01/app/oracle/software}:unzip preupgrade_12.1.0.2.0_14_lf.zip
Archive:  preupgrade_12.1.0.2.0_14_lf.zip
  inflating: preupgrd.sql
  inflating: utluppkg.sql


dm01db01-+ASM1 {/u01/app/oracle/software}:. oraenv
ORACLE_SID = [+ASM1] ? orcldb1
The Oracle base remains unchanged with value /u01/app/oracle
dm01db01-orcldb1 {/u01/app/oracle/software}:sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jan 9 06:25:51 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @preupgrd.sql

Loading Pre-Upgrade Package…

***************************************************************************
Executing Pre-Upgrade Checks in ORCLDB...
***************************************************************************

      ************************************************************
                  ====>> ERRORS FOUND for ORCLDB <<====

 The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
                    prior to attempting your upgrade.
            Failure to do so will result in a failed upgrade.

           You MUST resolve the above errors prior to upgrade

      ************************************************************
      ************************************************************
              ====>> PRE-UPGRADE RESULTS for ORCLDB <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:
 /u01/app/oracle/cfgtoollogs/orcldb/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:
 /u01/app/oracle/cfgtoollogs/orcldb/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:
 /u01/app/oracle/cfgtoollogs/orcldb/preupgrade/postupgrade_fixups.sql
    ************************************************************

***************************************************************************
Pre-Upgrade Checks in ORCLDB Completed.
***************************************************************************

***************************************************************************
***************************************************************************
  • Review the preupgrade.log file for errors/issues:
dm01db01-orcldb1 {/u01/app/oracle/software}:vi /u01/app/oracle/cfgtoollogs/orcldb/preupgrade/preupgrade.log
  • Run the preupgrade_fixups.sql produced by pre-upgrade utility above:
dm01db01-orcldb1 {/u01/app/oracle/software}:sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jan 9 06:29:50 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @/u01/app/oracle/cfgtoollogs/orcldb/preupgrade/preupgrade_fixups.sql
Pre-Upgrade Fixup Script Generated on 2017-01-09 06:26:48  Version: 12.1.0.2 Build: 014
Beginning Pre-Upgrade Fixups…
Executing in container ORCLDB

**********************************************************************
Check Tag:     DEFAULT_PROCESS_COUNT
Check Summary: Verify min process count is not too low
Fix Summary:   Review and increase if needed, your PROCESSES value.
**********************************************************************
Fixup Returned Information:
WARNING: --> Process Count may be too low

     Database has a maximum process count of 150 which is lower than the
     default value of 300 for this release.
     You should update your processes value prior to the upgrade
     to a value of at least 300.
     For example:
        ALTER SYSTEM SET PROCESSES=300 SCOPE=SPFILE
     or update your init.ora file.
**********************************************************************

**********************************************************************
Check Tag:     EM_PRESENT
Check Summary: Check if Enterprise Manager is present
Fix Summary:   Execute emremove.sql prior to upgrade.
**********************************************************************
Fixup Returned Information:
WARNING: --> Enterprise Manager Database Control repository found in the database

     In Oracle Database 12c, Database Control is removed during
     the upgrade. To save time during the Upgrade, this action
     can be done prior to upgrading using the following steps after
     copying rdbms/admin/emremove.sql from the new Oracle home
   – Stop EM Database Control:
    $> emctl stop dbconsole

   – Connect to the Database using the SYS account AS SYSDBA:

   SET ECHO ON;
   SET SERVEROUTPUT ON;
   @emremove.sql
     Without the set echo and serveroutput commands you will not
     be able to follow the progress of the script.
**********************************************************************

**********************************************************************
Check Tag:     AMD_EXISTS
Check Summary: Check to see if AMD is present in the database
Fix Summary:   Manually execute ORACLE_HOME/oraolap/admin/catnoamd.sql script to remove OLAP.
**********************************************************************
Fixup Returned Information:
INFORMATION: --> OLAP Catalog(AMD) exists in database

     Starting with Oracle Database 12c, OLAP Catalog component is desupported.
     If you are not using the OLAP Catalog component and want
     to remove it, then execute the
     ORACLE_HOME/olap/admin/catnoamd.sql script before or
     after the upgrade.
**********************************************************************

**********************************************************************
Check Tag:     APEX_UPGRADE_MSG
Check Summary: Check that APEX will need to be upgraded.
Fix Summary:   Oracle Application Express can be manually upgraded prior to database upgrade.
**********************************************************************
Fixup Returned Information:
INFORMATION: --> Oracle Application Express (APEX) can be
     manually upgraded prior to database upgrade

     APEX is currently at version 3.2.1.00.12 and will need to be
     upgraded to APEX version 4.2.5 in the new release.
     Note 1: To reduce database upgrade time, APEX can be manually
             upgraded outside of and prior to database upgrade.
     Note 2: See MOS Note 1088970.1 for information on APEX
             installation upgrades.
**********************************************************************

**********************************************************************
                      [Pre-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ********* Dictionary Statistics *********
                        *****************************************

Please gather dictionary statistics 24 hours prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
    EXECUTE dbms_stats.gather_dictionary_stats;

^^^ MANUAL ACTION SUGGESTED ^^^

           **************************************************
                ************* Fixup Summary ************

 4 fixup routines generated INFORMATIONAL messages that should be reviewed.

**************** Pre-Upgrade Fixup Script Complete *********************

PL/SQL procedure successfully completed.

  • Address the pre-upgrade recommendations stated above:
    • Increase processes parameter to 300
SQL> ALTER SYSTEM SET PROCESSES=300 SCOPE=SPFILE;

System altered.
    • Drop OLAP Catalog
SQL> @$ORACLE_HOME/olap/admin/catnoamd.sql

Synonym dropped.

Synonym dropped.

Synonym dropped.

Synonym dropped.

Synonym dropped.

Synonym dropped.

….

Type dropped.

Type dropped.

PL/SQL procedure successfully completed.

Role dropped.

PL/SQL procedure successfully completed.

1 row deleted.
    • Gather dictionary statistics
SQL> EXECUTE dbms_stats.gather_dictionary_stats;

PL/SQL procedure successfully completed.


Install and Upgrade Grid Infrastructure to 12.1.0.2
  • Create the new Grid Infrastructure (GI_HOME) directory where 12.1.0.2 will be installed
[root@dm01db01 ~]# dcli -g ~/dbs_group -l root mkdir -p /u01/app/12.1.0.2/grid/
[root@dm01db01 ~]# dcli -g ~/dbs_group -l root chown oracle /u01/app/12.1.0.2/grid
[root@dm01db01 ~]# dcli -g ~/dbs_group -l root chgrp -R oinstall /u01/app/12.1.0.2/grid
[root@dm01db01 ~]# dcli -g ~/dbs_group -l root ls -l  /u01/app/12.1.0.2/
dm01db01: total 4
dm01db01: drwxr-xr-x 2 oracle oinstall 4096 Jan 10 03:44 grid
dm01db02: total 4
dm01db02: drwxr-xr-x 2 oracle oinstall 4096 Jan 10 03:44 grid
dm01db03: total 4
dm01db03: drwxr-xr-x 2 oracle oinstall 4096 Jan 10 03:44 grid
dm01db04: total 4
dm01db04: drwxr-xr-x 2 oracle oinstall 4096 Jan 10 03:44 grid
dm01db05: total 4
dm01db05: drwxr-xr-x 2 oracle oinstall 4096 Jan 10 03:44 grid
dm01db06: total 4
dm01db06: drwxr-xr-x 2 oracle oinstall 4096 Jan 10 03:44 grid
dm01db07: total 4
dm01db07: drwxr-xr-x 2 oracle oinstall 4096 Jan 10 03:44 grid
dm01db08: total 4
dm01db08: drwxr-xr-x 2 oracle oinstall 4096 Jan 10 03:44 grid
  • Unzip the software if not already done earlier.
dm01db01-orcldb1 {/u01/app/oracle/software}: unzip -q /u01/app/oracle/software/p21419221_121020_Linux-x86-64_5of10.zip -d /u01/app/oracle/software

dm01db01-orcldb1 {/u01/app/oracle/software}: unzip -q /u01/app/oracle/software/p21419221_121020_Linux-x86-64_6of10.zip -d /u01/app/oracle/software
  • Unset the environment and run the installer
dm01db01-orcldb1 {/u01/app/oracle/software}:unset ORACLE_HOME ORACLE_BASE ORACLE_SID
dm01db01- {/u01/app/oracle/software}:echo $ORACLE_HOME

dm01db01- {/u01/app/oracle/software}:echo $ORACLE_BASE

dm01db01- {/u01/app/oracle/software}:echo $ORACLE_SID

dm01db01- {/home/oracle}:export SRVM_USE_RACTRANS=true
dm01db01- {/home/oracle}:export DISPLAY=10.10.20.1:0.0
dm01db01- {/home/oracle}:cd /u01/app/oracle/software/grid/
dm01db01- {/u01/app/oracle/software/grid}:./runInstaller -J-Doracle.install.mgmtDB=false
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 415 MB.   Actual 7663 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 16378 MB    Passed
Checking monitor: must be configured to display at least 256 colors
   
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-01-10_03-50-56AM. Please wait …dm01db01- {/u01/app/oracle/software/grid}:

Select “Upgrade Oracle GI or ASM” and click Next

Select Language and click Next

Select all the nodes you want to upgrade and click Next

Click Next

Select the OS group for OS authentication and click Next

Enter the Oracle Base and Grid Home and click Next

Click Next as we want to run the root script manually.

Prerequisites check in progress

Click Install to begin Installation process

 Installation in progress

Follow the steps below before executing rootupgrade.sh script

  • Update OPatch and, when available, apply the latest Bundle Patch on top of the 12.1.0.2 Grid Infrastructure installation
dm01db01-orcldb1 {/u01/app/oracle/software}:dcli -g ~/dbs_group -l oracle -d /u01/app/oracle/software -f p6880880_112000_Linux-x86-64.zip

dm01db01-orcldb1 {/u01/app/oracle/software}:dcli -g ~/dbs_group -l oracle unzip -oq -d /u01/app/12.1.0.2/grid  /u01/app/oracle/software/p6880880_112000_Linux-x86-64.zip

dm01db01-orcldb1 {/u01/app/oracle/software}:dcli -g ~/dbs_group -l oracle /u01/app/12.1.0.2/grid/OPatch/opatch version
dm01db01: OPatch Version: 11.2.0.3.15
dm01db01:
dm01db01: OPatch succeeded.
dm01db02: OPatch Version: 11.2.0.3.15
dm01db02:
dm01db02: OPatch succeeded.
dm01db03: OPatch Version: 11.2.0.3.15
dm01db03:
dm01db03: OPatch succeeded.
dm01db04: OPatch Version: 11.2.0.3.15
dm01db04:
dm01db04: OPatch succeeded.
dm01db05: OPatch Version: 11.2.0.3.15
dm01db05:
dm01db05: OPatch succeeded.
dm01db06: OPatch Version: 11.2.0.3.15
dm01db06:
dm01db06: OPatch succeeded.
dm01db07: OPatch Version: 11.2.0.3.15
dm01db07:
dm01db07: OPatch succeeded.
dm01db08: OPatch Version: 11.2.0.3.15
dm01db08:
dm01db08: OPatch succeeded.
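Before moving on, it is worth confirming that every node reports the same OPatch version. The helper below is a hypothetical sketch (not part of the original steps); its parsing assumes dcli's `hostname: OPatch Version: x.y.z` line format shown above.

```shell
#!/bin/sh
# Illustrative helper: read dcli output on stdin and succeed only when
# exactly one distinct OPatch version string appears across all nodes.
consistent_opatch_versions() {
  awk -F'Version: ' '/OPatch Version/ && !seen[$2]++ {n++} END {exit (n == 1 ? 0 : 1)}'
}
```

For example: `dcli -g ~/dbs_group -l oracle /u01/app/12.1.0.2/grid/OPatch/opatch version | consistent_opatch_versions || echo "OPatch versions differ"`.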
  • If required, relink the 12.1.0.2 Grid Infrastructure oracle binary with RDS
dm01db01-orcldb1 {/u01/app/oracle/software}:dcli -g ~/dbs_group -l oracle /u01/app/12.1.0.2/grid/bin/skgxpinfo
dm01db01: rds
dm01db02: rds
dm01db03: rds
dm01db04: rds
dm01db05: rds
dm01db06: rds
dm01db07: rds
dm01db08: rds


If the command does not return 'rds', relink as follows:
 
dm01db01-orcldb1 {/u01/app/oracle/software}: dcli -g ~/dbs_group -l oracle ORACLE_HOME=/u01/app/12.1.0.2/grid make -C /u01/app/12.1.0.2/grid/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle
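The check-then-relink logic can be sketched as a small guarded script. This is an illustrative sketch, not an Oracle-supplied tool: the paths come from this environment, and the `needs_relink` helper is an assumption.

```shell
#!/bin/sh
# Sketch: relink the grid oracle binary only when skgxpinfo does not
# already report RDS as the IPC transport (run per node).
GRID_HOME=${GRID_HOME:-/u01/app/12.1.0.2/grid}

needs_relink() {
  # skgxpinfo prints the IPC transport in use; anything other than
  # 'rds' means the binary should be relinked for InfiniBand RDS.
  [ "$1" != "rds" ]
}

if [ -x "$GRID_HOME/bin/skgxpinfo" ]; then
  proto=$("$GRID_HOME/bin/skgxpinfo")
  if needs_relink "$proto"; then
    ORACLE_HOME=$GRID_HOME make -C "$GRID_HOME/rdbms/lib" -f ins_rdbms.mk ipc_rds ioracle
  fi
fi
```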
  • Change SGA memory settings for ASM
dm01db01-+ASM1 {/u01/app/oracle/software}:sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Tue Jan 10 06:35:32 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter system set sga_max_size = 2G scope=spfile sid='*';

System altered.

SQL> alter system set sga_target = 2G scope=spfile sid='*';

System altered.
  • Verify values for memory_target, memory_max_target and use_large_pages
SQL> col sid format a4
SQL> col name format a25
SQL> col value format a20
SQL> set lines 200
SQL> set pages 200
SQL> select sid, name, value from v$spparameter where name in ('memory_target','memory_max_target','use_large_pages');

SID  NAME                      VALUE
---- ------------------------- --------------------
*    use_large_pages           TRUE
*    memory_target             0
*    memory_max_target
  • Verify no active rebalance is running
SQL> select count(*) from gv$asm_operation;

  COUNT(*)
----------
         0
  • Now execute rootupgrade.sh on each database server
Note: Run rootupgrade.sh on node 1 first. After rootupgrade.sh completes successfully on the local node, you can run the script in parallel on other nodes except for the last node. When the script has completed successfully on all the nodes except the last node, run the script on the last node.  Do not run rootupgrade.sh on the last node until the script has run successfully on all other nodes.
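The required ordering can be sketched as a three-phase driver. This is a hypothetical sketch only (the walkthrough below runs the script manually): the node names and script path come from this environment, and the ssh-based `run_on` helper is an assumption.

```shell
#!/bin/sh
# Sketch of the three-phase rootupgrade.sh ordering described in the note.
NODES="dm01db01 dm01db02 dm01db03 dm01db04 dm01db05 dm01db06 dm01db07 dm01db08"
SCRIPT=/u01/app/12.1.0.2/grid/rootupgrade.sh

first_node()   { echo "$1"; }
last_node()    { for n in "$@"; do :; done; echo "$n"; }
middle_nodes() { shift; while [ $# -gt 1 ]; do echo "$1"; shift; done; }

run_on() { ssh root@"$1" "$SCRIPT"; }   # assumed remote-execution helper

upgrade_all() {
  run_on "$(first_node $NODES)"                          # phase 1: node 1 alone
  for n in $(middle_nodes $NODES); do run_on "$n" & done
  wait                                                   # phase 2: nodes 2-7 in parallel
  run_on "$(last_node $NODES)"                           # phase 3: last node only
}
```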
 
On Node 1

[root@dm01db01 ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10
[root@dm01db01 ~]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/01/10 06:52:50 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2017/01/10 06:52:51 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2017/01/10 06:52:53 CLSRSC-464: Starting retrieval of the cluster configuration data

2017/01/10 06:53:02 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2017/01/10 06:53:02 CLSRSC-363: User ignored prerequisites during installation

2017/01/10 06:53:13 CLSRSC-515: Starting OCR manual backup.

2017/01/10 06:53:15 CLSRSC-516: OCR manual backup successful.

2017/01/10 06:53:19 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2017/01/10 06:53:19 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/11.2.0.4/grid -oldCRSVersion 11.2.0.4.0 -nodeNumber 1 -firstNode true -startRolling true'

ASM configuration upgraded in local node successfully.

2017/01/10 06:53:27 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2017/01/10 06:53:27 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2017/01/10 06:53:53 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization – successful
2017/01/10 06:58:48 CLSRSC-329: Replacing Clusterware entries in file ‘oracle-ohasd.conf’

CRS-4133: Oracle High Availability Services has been stopped.
kgfnGetFacility: facility=0x1564768
kgfnInitDiag: diagctx=0x13e4160
CRS-4123: Oracle High Availability Services has been started.
2017/01/10 07:03:18 CLSRSC-472: Attempting to export the OCR

2017/01/10 07:03:18 CLSRSC-482: Running command: 'ocrconfig -upgrade oracle oinstall'

2017/01/10 07:03:29 CLSRSC-473: Successfully exported the OCR

2017/01/10 07:03:33 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2017/01/10 07:03:33 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.

2017/01/10 07:03:33 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2017/01/10 07:03:33 CLSRSC-543:
 3. The downgrade command must be run on the node dm01db04 with the ‘-lastnode’ option to restore global configuration data.

2017/01/10 07:04:04 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2017/01/10 07:04:13 CLSRSC-474: Initiating upgrade of resource types

2017/01/10 07:04:38 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.4.0 -d 12.1.0.2.0 -p first'

2017/01/10 07:04:38 CLSRSC-475: Upgrade of resource types successfully initiated.

2017/01/10 07:04:43 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster … succeeded

Now execute rootupgrade.sh on all the other nodes in the cluster.

Go back to the installation screen and click Ok

Click Close to complete the upgrade process.

Verify cluster status

dm01db01-+ASM1 {/home/oracle}:dcli -g dbs_group -l oracle '/u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion'
dm01db01: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db02: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db03: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db04: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db05: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db06: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db07: Oracle Clusterware active version on the cluster is [12.1.0.2.0]
dm01db08: Oracle Clusterware active version on the cluster is [12.1.0.2.0]

dm01db01-+ASM1 {/home/oracle}:dcli -g dbs_group -l oracle '/u01/app/12.1.0.2/grid/bin/crsctl query crs softwareversion'
dm01db01: Oracle Clusterware version on node [dm01db01] is [12.1.0.2.0]
dm01db02: Oracle Clusterware version on node [dm01db02] is [12.1.0.2.0]
dm01db03: Oracle Clusterware version on node [dm01db03] is [12.1.0.2.0]
dm01db04: Oracle Clusterware version on node [dm01db04] is [12.1.0.2.0]
dm01db05: Oracle Clusterware version on node [dm01db05] is [12.1.0.2.0]
dm01db06: Oracle Clusterware version on node [dm01db06] is [12.1.0.2.0]
dm01db07: Oracle Clusterware version on node [dm01db07] is [12.1.0.2.0]
dm01db08: Oracle Clusterware version on node [dm01db08] is [12.1.0.2.0]

dm01db01-+ASM1 {/home/oracle}:/u01/app/12.1.0.2/grid/bin/crsctl check cluster -all
**************************************************************
dm01db01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dm01db02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dm01db03:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dm01db04:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dm01db05:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dm01db06:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dm01db07:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dm01db08:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
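The cluster-wide status output above can also be scanned programmatically. The helper below is illustrative (not an Oracle tool) and assumes the CRS-45xx message format shown above.

```shell
#!/bin/sh
# Illustrative check: fail if any CRS-45xx status line in
# 'crsctl check cluster -all' output is not reported online.
all_services_online() {
  ! grep -E '^CRS-45[0-9]+' | grep -qv 'is online'
}
```

For example: `/u01/app/12.1.0.2/grid/bin/crsctl check cluster -all | all_services_online || echo "a cluster service is not online"`.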
 
Change custom scripts and environment variables to reference the 12.1.0.2 Grid Home

dm01db01-+ASM1 {/home/oracle}:vi .bash_profile
dm01db01-+ASM1 {/home/oracle}:vi .exadata_profile
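The edits above were done by hand in vi; the same repointing can be sketched mechanically. The helper below is illustrative (GNU sed assumed), with the old and new home paths taken from this environment, and would be run per node (for example via dcli).

```shell
#!/bin/sh
# Sketch: replace every old Grid Home reference in a profile file with
# the new one, keeping a .bak copy alongside it.
OLD_HOME=/u01/app/11.2.0.4/grid
NEW_HOME=/u01/app/12.1.0.2/grid

repoint() {
  sed -i.bak "s|$OLD_HOME|$NEW_HOME|g" "$1"
}
```

For example: `repoint ~/.bash_profile; repoint ~/.exadata_profile`.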

SQL> set lines 200
SQL> set pages 200
SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production              0
PL/SQL Release 12.1.0.2.0 - Production                                                    0
CORE    12.1.0.2.0      Production                                                        0
TNS for Linux: Version 12.1.0.2.0 - Production                                            0
NLSRTL Version 12.1.0.2.0 - Production                                                    0

ASMCMD> lsct -g DATA
Instance_ID  DB_Name  Status     Software_Version  Compatible_version  Instance_Name  Disk_Group
          1  +ASM     CONNECTED        12.1.0.2.0          12.1.0.2.0  +ASM1          DATA
          2  +ASM     CONNECTED        12.1.0.2.0          12.1.0.2.0  +ASM2          DATA
          3  +ASM     CONNECTED        12.1.0.2.0          12.1.0.2.0  +ASM3          DATA
          4  +ASM     CONNECTED        12.1.0.2.0          12.1.0.2.0  +ASM4          DATA
          5  +ASM     CONNECTED        12.1.0.2.0          12.1.0.2.0  +ASM5          DATA
          6  +ASM     CONNECTED        12.1.0.2.0          12.1.0.2.0  +ASM6          DATA
          7  +ASM     CONNECTED        12.1.0.2.0          12.1.0.2.0  +ASM7          DATA
          8  +ASM     CONNECTED        12.1.0.2.0          12.1.0.2.0  +ASM8          DATA
          1  orcldb   CONNECTED        11.2.0.4.0          11.2.0.4.0  orcldb1        DATA
          2  orcldb   CONNECTED        11.2.0.4.0          11.2.0.4.0  orcldb2        DATA
          3  orcldb   CONNECTED        11.2.0.4.0          11.2.0.4.0  orcldb3        DATA
          4  orcldb   CONNECTED        11.2.0.4.0          11.2.0.4.0  orcldb4        DATA
          5  orcldb   CONNECTED        11.2.0.4.0          11.2.0.4.0  orcldb5        DATA
          6  orcldb   CONNECTED        11.2.0.4.0          11.2.0.4.0  orcldb6        DATA
          7  orcldb   CONNECTED        11.2.0.4.0          11.2.0.4.0  orcldb7        DATA
          8  orcldb   CONNECTED        11.2.0.4.0          11.2.0.4.0  orcldb8        DATA

[root@dm01db01 ~]# /u01/app/12.1.0.2/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :      10592
         Available space (kbytes) :     398976
         ID                       : 1119121028
         Device/File Name         :  +SYSTEMDG
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded


dm01db01-+ASM1 {/home/oracle}:crsctl query css votedisk
##  STATE    File Universal Id                File Name                                 Disk group
--  -----    -----------------                ---------                                 ----------
 1. ONLINE   6938ea097e324f0fbf48da02892564cb (o/192.168.1.9/SYSTEMDG_CD_02_dm01cel01) [SYSTEMDG]
 2. ONLINE   42561e3fd68a4f7dbfe0687fb6bb6a1f (o/192.168.1.22/SYSTEMDG_CD_08_dm01cel14) [SYSTEMDG]
 3. ONLINE   2cc2afc24d504f99bfb05a60620e01ef (o/192.168.1.13/SYSTEMDG_CD_02_dm01cel05) [SYSTEMDG]
Located 3 voting disk(s).

Conclusion
In this article we successfully upgraded Oracle Grid Infrastructure from 11.2.0.4 to 12.1.0.2. Oracle 12.1.0.2 includes support for Smart Scan, Storage Indexes and IORM, so customers can take full advantage of running 12.1.0.2 on the Exadata Database Machine. The upgrade from 11.2.0.4 to 12.1.0.2 works seamlessly if all the prerequisites are met.


Overview
Earlier I shared an article on how to install Oracle Grid Infrastructure 12.2.0.1 on an Exadata X2 Full Rack. Now it is time to install Oracle Database 12.2.0.1 on the same cluster.

In this article I will demonstrate how to install the Oracle Database 12.2.0.1 software on a Full Rack Oracle Exadata X2-2 Database Machine.

Environment

  • Exadata X2-2 Full Rack
  • 8 Compute nodes, 14 Storage cells and 3 IB switches
  • Exadata Storage Software version 12.1.2.1.1

Steps to Install Oracle Database 12.2.0.1 on OEL 6:
  • Download Oracle Grid and Database software from OTN.
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

 
  • Create the new Oracle Database Home (ORACLE_HOME) directory where 12.2.0.1 will be installed
[root@dm01db01 ~]# dcli -g ~/dbs_group -l root mkdir -p /u01/app/oracle

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root chown -R oracle:oinstall /u01/app/oracle

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root chmod -R 775 /u01/app/oracle

[oracle@dm01db01 oracle]$ dcli -g ~/dbs_group -l oracle mkdir -p /u01/app/oracle/product/12.2.0.1/dbhome

[oracle@dm01db01 oracle]$ dcli -g ~/dbs_group -l oracle ls -l  /u01/app/oracle/product/12.2.0.1
dm01db01: total 4
dm01db01: drwxr-xr-x 2 oracle oinstall 4096 Mar 10 07:48 dbhome
dm01db02: total 4
dm01db02: drwxr-xr-x 2 oracle oinstall 4096 Mar 10 07:48 dbhome
dm01db03: total 4
dm01db03: drwxr-xr-x 2 oracle oinstall 4096 Mar 10 07:48 dbhome
dm01db04: total 4
dm01db04: drwxr-xr-x 2 oracle oinstall 4096 Mar 10 07:48 dbhome
dm01db05: total 4
dm01db05: drwxr-xr-x 2 oracle oinstall 4096 Mar 10 07:48 dbhome
dm01db06: total 4
dm01db06: drwxr-xr-x 2 oracle oinstall 4096 Mar 10 07:48 dbhome
dm01db07: total 4
dm01db07: drwxr-xr-x 2 oracle oinstall 4096 Mar 10 07:48 dbhome
dm01db08: total 4
dm01db08: drwxr-xr-x 2 oracle oinstall 4096 Mar 10 07:48 dbhome
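The per-node listing above can be turned into a simple assertion. The helper below is illustrative (GNU stat assumed, not part of the original steps) and would be run on each node, for example via dcli.

```shell
#!/bin/sh
# Illustrative check: succeed only when the directory exists and has the
# expected owner and group, matching the ls -l output above.
dir_owned_by() {
  # $1=directory  $2=expected owner  $3=expected group
  [ -d "$1" ] && [ "$(stat -c '%U %G' "$1")" = "$2 $3" ]
}
```

For example: `dir_owned_by /u01/app/oracle/product/12.2.0.1/dbhome oracle oinstall || echo "ownership mismatch"`.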

  • Unzip installation software
Database Software

dm01db01-orp258c1 {/home/oracle}:cd /u01/app/oracle/software/
dm01db01-orp258c1 {/u01/app/oracle/software}:unzip linuxx64_12201_database.zip
Archive:  linuxx64_12201_database.zip
   creating: database/
   creating: database/sshsetup/
  inflating: database/sshsetup/sshUserSetup.sh
   creating: database/rpm/
  inflating: database/rpm/cvuqdisk-1.0.10-1.rpm
   creating: database/response/
  inflating: database/response/dbca.rsp
  inflating: database/response/netca.rsp
  inflating: database/response/db_install.rsp
   creating: database/install/
  inflating: database/install/detachHome.sh
 extracting: database/install/addLangs.sh
  inflating: database/stage/OuiConfigVariables.xml
   creating: database/stage/Actions/
   creating: database/stage/Actions/jarActions/
   creating: database/stage/Actions/jarActions/10.2.0.0.0/
   creating: database/stage/Actions/jarActions/10.2.0.0.0/1/
  inflating: database/stage/Actions/jarActions/10.2.0.0.0/1/jarActionLib.jar
   creating: database/stage/Actions/customFileActions/
   creating: database/stage/Actions/customFileActions/1.2.1/
   creating: database/stage/Actions/customFileActions/1.2.1/1/
  inflating: database/stage/Actions/customFileActions/1.2.1/1/customFileActions.jar
   creating: database/stage/Actions/ntGrpActionLib/
   creating: database/stage/Actions/ntGrpActionLib/10.2.0.1.0/
   creating: database/stage/Actions/ntGrpActionLib/10.2.0.1.0/1/
  inflating: database/stage/Actions/ntGrpActionLib/10.2.0.1.0/1/ntGrpActionLib.jar
   creating: database/stage/Actions/ServiceProcessActions/
   creating: database/stage/Actions/ServiceProcessActions/1.0/
   creating: database/stage/Actions/ServiceProcessActions/1.0/1/
  inflating: database/stage/Actions/ServiceProcessActions/1.0/1/ServiceProcessActions.jar
   creating: database/stage/Actions/fileActions/
   creating: database/stage/Actions/fileActions/12.2.0.1.1/
   creating: database/stage/Actions/fileActions/12.2.0.1.1/1/
  inflating: database/stage/Actions/fileActions/12.2.0.1.1/1/fileActionLib.jar
   creating: database/stage/Actions/unixActions/
   creating: database/stage/Actions/unixActions/10.2.0.3.0/
   creating: database/stage/Actions/unixActions/10.2.0.3.0/1/
  inflating: database/stage/Actions/unixActions/10.2.0.3.0/1/unixActions.jar
….


dm01db01-orp258c1 {/u01/app/oracle/software}:ls -l database/
total 36
drwxr-xr-x  4 oracle oinstall 4096 Jan 26 08:39 install
drwxrwxr-x  2 oracle oinstall 4096 Jan 26 09:53 response
drwxr-xr-x  2 oracle oinstall 4096 Jan 26 08:39 rpm
-rwxr-xr-x  1 oracle oinstall 8771 Jan 26 08:39 runInstaller
drwxrwxr-x  2 oracle oinstall 4096 Jan 26 09:53 sshsetup
drwxr-xr-x 14 oracle oinstall 4096 Jan 26 09:55 stage
-rwxr-xr-x  1 oracle oinstall  500 Feb  6  2013 welcome.html

  • Set up user equivalence
dm01db01-orcldb1 {/home/oracle}: dcli -g dbs_group -l oracle -k
  • Create the user's profile
dm01db01-orcldb1 {/u01/app/oracle/software}: vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/12.2.0.1/grid
export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome
export ORACLE_TERM=xterm
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
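A quick sanity check (illustrative, with paths from the profile above) can confirm that a home named in the profile actually exists before launching the installer.

```shell
#!/bin/sh
# Illustrative check: a usable Oracle/Grid home should exist and contain
# a bin/ directory. Note that a freshly created, still-empty DB home will
# only pass this after the software is installed.
check_home() {
  [ -d "$1/bin" ]
}
```

For example: `check_home "$GRID_HOME" || echo "GRID_HOME not usable"`.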

  • Validate readiness for Oracle Database installation using CVU
dm01db01-orp258c1 {/home/oracle}: /u01/app/12.2.0.1/grid/runcluvfy.sh stage -pre dbinst -n dm01db01,dm01db02,dm01db03,dm01db04,dm01db05,dm01db06,dm01db07,dm01db08 -r 12.2 -fixupnoexec

Verifying Physical Memory …PASSED
Verifying Available Physical Memory …PASSED
Verifying Swap Size …PASSED
Verifying Free Space: dm01db05:/tmp …PASSED
Verifying Free Space: dm01db04:/tmp …PASSED
Verifying Free Space: dm01db03:/tmp …PASSED
Verifying Free Space: dm01db02:/tmp …PASSED
Verifying Free Space: dm01db01:/tmp …PASSED
Verifying Free Space: dm01db08:/tmp …PASSED
Verifying Free Space: dm01db07:/tmp …PASSED
Verifying Free Space: dm01db06:/tmp …PASSED
Verifying User Existence: oracle …
  Verifying Users With Same UID: 1000 …PASSED
Verifying User Existence: oracle …PASSED
Verifying Group Existence: dba …PASSED
Verifying Group Existence: oinstall …PASSED
Verifying Group Membership: oinstall(Primary) …PASSED
Verifying Group Membership: dba …PASSED
Verifying Run Level …PASSED
Verifying Hard Limit: maximum open file descriptors …PASSED
Verifying Soft Limit: maximum open file descriptors …PASSED
Verifying Hard Limit: maximum user processes …PASSED
Verifying Soft Limit: maximum user processes …PASSED
Verifying Soft Limit: maximum stack size …PASSED
Verifying Architecture …PASSED
Verifying OS Kernel Version …PASSED
Verifying OS Kernel Parameter: semmsl …PASSED
Verifying OS Kernel Parameter: semmns …PASSED
Verifying OS Kernel Parameter: semopm …PASSED
Verifying OS Kernel Parameter: semmni …PASSED
Verifying OS Kernel Parameter: shmmax …PASSED
Verifying OS Kernel Parameter: shmmni …PASSED
Verifying OS Kernel Parameter: shmall …PASSED
Verifying OS Kernel Parameter: file-max …PASSED
Verifying OS Kernel Parameter: ip_local_port_range …PASSED
Verifying OS Kernel Parameter: rmem_default …PASSED
Verifying OS Kernel Parameter: rmem_max …PASSED
Verifying OS Kernel Parameter: wmem_default …PASSED
Verifying OS Kernel Parameter: wmem_max …PASSED
Verifying OS Kernel Parameter: aio-max-nr …PASSED
Verifying Package: binutils-2.20.51.0.2 …PASSED
Verifying Package: compat-libcap1-1.10 …PASSED
Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) …PASSED
Verifying Package: libgcc-4.4.7 (x86_64) …PASSED
Verifying Package: libstdc++-4.4.7 (x86_64) …PASSED
Verifying Package: libstdc++-devel-4.4.7 (x86_64) …PASSED
Verifying Package: sysstat-9.0.4 …PASSED
Verifying Package: ksh …PASSED
Verifying Package: make-3.81 …PASSED
Verifying Package: glibc-2.12 (x86_64) …PASSED
Verifying Package: glibc-devel-2.12 (x86_64) …PASSED
Verifying Package: libaio-0.3.107 (x86_64) …PASSED
Verifying Package: libaio-devel-0.3.107 (x86_64) …PASSED
Verifying Package: smartmontools-5.43-1 …PASSED
Verifying Package: net-tools-1.60-110 …PASSED
Verifying Users With Same UID: 0 …PASSED
Verifying Current Group ID …PASSED
Verifying Root user consistency …PASSED
Verifying Node Connectivity …
  Verifying Hosts File …PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet …PASSED
  Verifying subnet mask consistency for subnet "192.168.2.0" …PASSED
  Verifying subnet mask consistency for subnet "10.220.30.128" …PASSED
Verifying Node Connectivity …PASSED
Verifying Multicast check …PASSED
Verifying User Mask …PASSED
Verifying CRS Integrity …
  Verifying Clusterware Version Consistency …PASSED
Verifying CRS Integrity …PASSED
Verifying Cluster Manager Integrity …PASSED
Verifying Node Application Existence …PASSED
Verifying Clock Synchronization …
CTSS is in Observer state. Switching over to clock synchronization checks using NTP

  Verifying Network Time Protocol (NTP) …
    Verifying '/etc/ntp.conf' …PASSED
    Verifying '/var/run/ntpd.pid' …PASSED
    Verifying Daemon 'ntpd' …PASSED
    Verifying NTP daemon command line for slewing option "-x" …PASSED
    Verifying NTP daemon's boot time configuration, in file "/etc/sysconfig/ntpd", for slewing option "-x" …PASSED
    Verifying NTP daemon or service using UDP port 123 …PASSED
    Verifying NTP daemon is synchronized with at least one external time source …PASSED
  Verifying Network Time Protocol (NTP) …PASSED
Verifying Clock Synchronization …PASSED
Verifying resolv.conf Integrity …
  Verifying (Linux) resolv.conf Integrity …PASSED
Verifying resolv.conf Integrity …PASSED
Verifying Time zone consistency …PASSED
Verifying Single Client Access Name (SCAN) …
  Verifying DNS/NIS name service 'dm01-scan' …
    Verifying Name Service Switch Configuration File Integrity …PASSED
  Verifying DNS/NIS name service 'dm01-scan' …PASSED
Verifying Single Client Access Name (SCAN) …PASSED
Verifying VIP Subnet configuration check …PASSED
Verifying Database Clusterware Version Compatibility …PASSED
Verifying ASM storage privileges for the user: oracle …
  Verifying Group Membership: asmdba …PASSED
Verifying ASM storage privileges for the user: oracle …PASSED
Verifying Daemon “proxyt” not configured and running …PASSED
Verifying ACFS device special file …PASSED
Verifying /dev/shm mounted as temporary file system …PASSED
Verifying Maximum locked memory check …PASSED

Pre-check for database installation was successful.

CVU operation performed:      stage -pre dbinst
Date:                         Mar 11, 2017 4:29:40 AM
CVU home:                     /u01/app/12.2.0.1/grid/
User:                         oracle
dm01db01-orp258c1 {/home/oracle}:
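The report above was produced by a CVU pre-database-install check. A minimal sketch of the invocation, assuming the eight compute nodes and the grid home path shown in the report (the existence check makes the sketch a no-op off the cluster):

```shell
# Build the comma-separated node list dm01db01..dm01db08
NODES=$(printf 'dm01db%02d,' $(seq 1 8)); NODES=${NODES%,}
echo "$NODES"

# Run the pre-database-install stage check; grid home path taken from the
# report above. Guarded so the sketch does nothing where cluvfy is absent.
CLUVFY=/u01/app/12.2.0.1/grid/bin/cluvfy
if [ -x "$CLUVFY" ]; then
  "$CLUVFY" stage -pre dbinst -n "$NODES" -verbose
fi
```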

  • Unset the environment variables
dm01db01-orcldb1 {/u01/app/oracle/software}:unset ORACLE_HOME ORACLE_BASE ORACLE_SID
  • Set the display and execute runInstaller
dm01db01- {/u01/app/oracle/software/database}:export DISPLAY=10.30.21.30:0.0
dm01db01- {/u01/app/oracle/software}:cd /u01/app/oracle/software/database
[oracle@dm01db01 database]$ ./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 500 MB.   Actual 27165 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 16350 MB    Passed
Checking monitor: must be configured to display at least 256 colors

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-03-10_07-58-20AM. Please wait …
[oracle@dm01db01 database]$


You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2017-03-10_07-58-20AM.log
[oracle@dm01db01 database]$ 


The installer opens on the first page. Uncheck the security updates option.

Click Yes to continue

Select Install database software only

Select Oracle Real Application Clusters database installation

Click Select All, then click Next

Select Enterprise Edition

Specify the Oracle Base and Oracle Home where you want to install Oracle Database software

Select OS groups for OS authentication

Prerequisites checks are in progress

The prerequisite check failed for the stack size soft limit. Check the Ignore All box and click Next

Click Yes to continue.

Review the summary page and click Install

Software installation in progress

Execute the root.sh script on all the nodes

Open a new terminal and execute the root.sh script on all the nodes as follows.

[root@dm01db01 ~]# /u01/app/oracle/product/12.2.0.1/dbhome/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.2.0.1/dbhome

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.


Repeat the above step on all other nodes in the cluster.
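Rather than logging in to each node, the same root.sh can be pushed to all compute nodes in one pass with dcli; a sketch, assuming ~/dbs_group lists the node names as in the earlier dcli commands (guarded so it is a no-op where dcli is unavailable):

```shell
# Run the database-home root.sh on every compute node via dcli (as root).
# ~/dbs_group is assumed to list the compute node names, one per line.
if command -v dcli >/dev/null 2>&1; then
  dcli -g ~/dbs_group -l root /u01/app/oracle/product/12.2.0.1/dbhome/root.sh
fi
```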

Go back to the installer screen and click ok

Click close to complete the Installation
Conclusion
In this article we have learned how to install Oracle Database 12.2.0.1 on an Exadata X2 Full Rack. We did not observe any notable differences from installing Oracle Database on a non-Exadata cluster.
 

Overview
Oracle released Oracle Database 12.2 for on-premises deployment just a few days ago. It is time to install the brand-new Oracle 12.2 Grid Infrastructure and database software on Exadata.

With Oracle 12.2, the way Grid Infrastructure is installed has changed a little. Here are the changes:

  • Unzip Grid Infrastructure software in the GI Home directory
  • Use gridSetup.sh to install and configure GI.
Note: the GI software installation no longer uses runInstaller.

In this article I will demonstrate how to install the Oracle 12.2 GI and database software on a Full Rack Oracle Exadata X2-2 Database Machine.

Environment

  • Exadata X2-2 Full Rack
  • 8 compute nodes, 14 storage cells and 3 IB switches
  • Exadata Storage Software version 12.1.2.1.1

Steps to install Oracle Grid Infrastructure 12.2.0.1 on Exadata X2 Full Rack (OEL 6.8)
  • Download Oracle Grid and Database software from OTN.
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
  •  Create the new Grid Infrastructure (GI_HOME) directory where 12.2.0.1 will be installed
[root@dm01db01 ~]# dcli -g ~/dbs_group -l root mkdir -p /u01/app/grid

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root chown oracle:oinstall /u01/app/grid

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root mkdir -p /u01/app/12.2.0.1/grid

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root chown oracle:oinstall /u01/app/12.2.0.1/grid

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root ls -l  /u01/app/12.2.0.1
dm01db01: total 4
dm01db01: drwxr-xr-x 2 oracle oinstall 4096 Mar  4 08:13 grid
dm01db02: total 4
dm01db02: drwxr-xr-x 2 oracle oinstall 4096 Mar  4 08:13 grid
dm01db03: total 4
dm01db03: drwxr-xr-x 2 oracle oinstall 4096 Mar  4 08:13 grid
dm01db04: total 4
dm01db04: drwxr-xr-x 2 oracle oinstall 4096 Mar  4 08:13 grid
dm01db05: total 4
dm01db05: drwxr-xr-x 2 oracle oinstall 4096 Mar  4 08:13 grid
dm01db06: total 4
dm01db06: drwxr-xr-x 2 oracle oinstall 4096 Mar  4 08:13 grid
dm01db07: total 4
dm01db07: drwxr-xr-x 2 oracle oinstall 4096 Mar  4 08:13 grid
dm01db08: total 4
dm01db08: drwxr-xr-x 2 oracle oinstall 4096 Mar  4 08:13 grid

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root ls -ld /u01/app/grid

dm01db01: drwxr-xr-x 2 oracle oinstall 4096 Mar  9 03:02 /u01/app/grid
dm01db02: drwxr-xr-x 2 oracle oinstall 4096 Mar  9 03:02 /u01/app/grid
dm01db03: drwxr-xr-x 2 oracle oinstall 4096 Mar  9 03:02 /u01/app/grid
dm01db04: drwxr-xr-x 2 oracle oinstall 4096 Mar  9 03:02 /u01/app/grid
dm01db05: drwxr-xr-x 2 oracle oinstall 4096 Mar  9 03:02 /u01/app/grid
dm01db06: drwxr-xr-x 2 oracle oinstall 4096 Mar  9 03:02 /u01/app/grid
dm01db07: drwxr-xr-x 2 oracle oinstall 4096 Mar  9 03:02 /u01/app/grid
dm01db08: drwxr-xr-x 2 oracle oinstall 4096 Mar  9 03:02 /u01/app/grid

  • Unzip the Oracle installation software
Grid Infrastructure software

dm01db01-orcldb1 {/home/oracle}:cd /u01/app/oracle/software/

dm01db01-orcldb1 {/u01/app/oracle/software}:unzip -q /u01/app/oracle/software/linuxx64_12201_grid_home.zip -d /u01/app/12.2.0.1/grid

dm01db01-orcldb1 {/u01/app/oracle/software}:ls -l /u01/app/12.2.0.1/grid
total 296
drwxr-xr-x  2 oracle oinstall 4096 Jan 26 10:12 addnode
drwxr-xr-x 11 oracle oinstall 4096 Jan 26 10:10 assistants
drwxr-xr-x  2 oracle oinstall 8192 Jan 26 10:12 bin
drwxr-xr-x  3 oracle oinstall 4096 Jan 26 10:12 cdata
drwxr-xr-x  3 oracle oinstall 4096 Jan 26 10:10 cha
drwxr-xr-x  4 oracle oinstall 4096 Jan 26 10:12 clone
drwxr-xr-x 16 oracle oinstall 4096 Jan 26 10:12 crs
drwxr-xr-x  6 oracle oinstall 4096 Jan 26 10:12 css
drwxr-xr-x  7 oracle oinstall 4096 Jan 26 10:10 cv
drwxr-xr-x  3 oracle oinstall 4096 Jan 26 10:10 dbjava
drwxr-xr-x  2 oracle oinstall 4096 Jan 26 10:11 dbs
drwxr-xr-x  2 oracle oinstall 4096 Jan 26 10:12 dc_ocm
drwxr-xr-x  5 oracle oinstall 4096 Jan 26 10:12 deinstall
drwxr-xr-x  3 oracle oinstall 4096 Jan 26 10:10 demo
drwxr-xr-x  3 oracle oinstall 4096 Jan 26 10:10 diagnostics
drwxr-xr-x  8 oracle oinstall 4096 Jan 26 10:11 dmu
-rw-r--r--  1 oracle oinstall  852 Aug 19  2015 env.ora
drwxr-xr-x  7 oracle oinstall 4096 Jan 26 10:12 evm
drwxr-xr-x  5 oracle oinstall 4096 Jan 26 10:10 gpnp
-rwxr-x---  1 oracle oinstall 5395 Jul 21  2016 gridSetup.sh
drwxr-xr-x  4 oracle oinstall 4096 Jan 26 10:10 has
drwxr-xr-x  3 oracle oinstall 4096 Jan 26 10:10 hs
….

  • Set up user equivalence
dm01db01-orcldb1 {/home/oracle}: dcli -g dbs_group -l oracle -k
  • Automatically configure prerequisites using Oracle RPMs
Oracle recommends that you install Oracle Linux 6 or Oracle Linux 7 and use Oracle RPMs to configure your operating systems for Oracle Database and Oracle Grid Infrastructure installations.

[root@dm01db01 ~]# yum install oracle-database-server-12cR2-preinstall
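A quick way to confirm the preinstall RPM applied cluster-wide is to query it on every node; a sketch assuming the same dbs_group file used earlier (guarded so it is a no-op where dcli is unavailable):

```shell
# Confirm the preinstall RPM is present on every node (name as installed above)
if command -v dcli >/dev/null 2>&1; then
  dcli -g ~/dbs_group -l root rpm -q oracle-database-server-12cR2-preinstall
fi
```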

  • Manual prerequisite steps
Perform the following steps if you are not planning to use the Oracle RPM to configure the prerequisites.

Set kernel parameters
These are the minimum requirements for GI installation. Edit the /etc/sysctl.conf file and add or edit the following lines.

[root@dm01db01 ~]# vi /etc/sysctl.conf

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576


Update the current values of the kernel parameters:

[root@dm01db01 ~]# /sbin/sysctl -p

Confirm that the values are set correctly:

[root@dm01db01 ~]# /sbin/sysctl -a

Install required OS packages
The following packages (or later versions) are required for OEL 6:

binutils-2.20.51.0.2-5.36.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (i686)
e2fsprogs-1.41.12-14.el6 (x86_64)
e2fsprogs-libs-1.41.12-14.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (i686)
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libs-1.42.8-1.0.2.el6.x86_64
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (i686)
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6 (i686)
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6 (i686)
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6 (i686)
libXtst-1.0.99.2 (x86_64)
libXtst-1.0.99.2 (i686)
libX11-1.5.0-4.el6 (i686)
libX11-1.5.0-4.el6 (x86_64)
libXau-1.0.6-4.el6 (i686)
libXau-1.0.6-4.el6 (x86_64)
libxcb-1.8.1-1.el6 (i686)
libxcb-1.8.1-1.el6 (x86_64)
libXi-1.3 (x86_64)
libXi-1.3 (i686)
make-3.81-19.el6
net-tools-1.60-110.el6_2.x86_64 (for Oracle RAC and Oracle Clusterware)
nfs-utils-1.2.3-15.0.1 (for Oracle ACFS)
sysstat-9.0.4-11.el6 (x86_64)
smartmontools-5.43-1.el6.x86_64
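A sketch to spot missing packages on a node, using the base package names from the list above (version and architecture qualifiers omitted; guarded so it does nothing where rpm is unavailable):

```shell
# Report any required package that rpm cannot find on this node
if command -v rpm >/dev/null 2>&1; then
  for pkg in binutils compat-libcap1 compat-libstdc++-33 e2fsprogs e2fsprogs-libs \
             glibc glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio \
             libaio-devel libXtst libX11 libXau libxcb libXi make net-tools \
             nfs-utils sysstat smartmontools; do
    rpm -q "$pkg" >/dev/null 2>&1 || echo "MISSING: $pkg"
  done
fi
```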


Create OS groups and users
Create the following groups and the oracle user:


[root@dm01db01 ~]# groupadd -g 1001 oinstall
[root@dm01db01 ~]# groupadd -g 1002 dba
[root@dm01db01 ~]# groupadd -g 1003 oper

[root@dm01db01 ~]# groupadd -g 1004 asmdba
[root@dm01db01 ~]# groupadd -g 1005 asmadmin
[root@dm01db01 ~]# groupadd -g 1006 asmoper

[root@dm01db01 ~]# groupadd -g 1007 backupdba
[root@dm01db01 ~]# groupadd -g 1008 dgdba
[root@dm01db01 ~]# groupadd -g 1009 kmdba
[root@dm01db01 ~]# groupadd -g 1010 racdba

[root@dm01db01 ~]# useradd -u 1000 -g oinstall -G dba,oper,asmdba,asmoper,asmadmin,backupdba,dgdba,kmdba,racdba oracle
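On a cluster, the groups and the oracle user must exist with identical IDs on every node. A sketch of replicating them with dcli, assuming the same ~/dbs_group file as earlier (guarded so it is a no-op where dcli is unavailable):

```shell
# Replicate the groups/user cluster-wide with the same GIDs/UID chosen above
if command -v dcli >/dev/null 2>&1; then
  dcli -g ~/dbs_group -l root groupadd -g 1001 oinstall
  # ...repeat for the remaining groups created above...
  dcli -g ~/dbs_group -l root useradd -u 1000 -g oinstall \
    -G dba,oper,asmdba,asmoper,asmadmin,backupdba,dgdba,kmdba,racdba oracle
fi
```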


Ensure the IPs are allocated
The following IPs are required for the Oracle GI installation. These are taken care of by the Exadata DBA or network engineer at the time of Exadata installation.

  • Management IP
  • Public IP
  • Private IP
  • Virtual IP
  • Scan IP
Register hostname/IPs in DNS
This is done as part of the Exadata installation by the Exadata DBA or network engineer.

Check hostname resolution
[root@dm01db01 ~]# ping dm01db01
[root@dm01db01 ~]# ping dm01db01.nsm.com
[root@dm01db01 ~]# nslookup dm01db01


Disable firewall
[root@dm01db01 ~]# service iptables status
[root@dm01db01 ~]# service iptables stop
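Note that `service iptables stop` only stops the firewall until the next reboot; on EL6 the service should also be disabled at boot. A sketch (guarded so it does nothing where chkconfig is unavailable):

```shell
# Disable iptables at boot as well, then verify all runlevels show "off" (EL6)
if command -v chkconfig >/dev/null 2>&1; then
  chkconfig iptables off
  chkconfig --list iptables
fi
```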


Create user’s profile
dm01db01-orcldb1 {/u01/app/oracle/software}: vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/12.2.0.1/grid
export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome
export ORACLE_TERM=xterm
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
  • Validate readiness for Oracle Clusterware installation using CVU
dm01db01-orcldb1 {/u01/app/oracle/software}:/u01/app/12.2.0.1/grid/runcluvfy.sh stage -pre crsinst -n dm01db01,dm01db02,dm01db03,dm01db04,dm01db05,dm01db06,dm01db07,dm01db08 -r 12.2 -orainv oinstall -fixupnoexec -verbose


Verifying Physical Memory …
  Node Name     Available                 Required                  Status
  ————  ————————  ————————  ———-
  dm01db05      70.5587GB (7.3986188E7KB)  8GB (8388608.0KB)         passed
  dm01db04      70.5587GB (7.3986188E7KB)  8GB (8388608.0KB)         passed
  dm01db03      62.6681GB (6.5712308E7KB)  8GB (8388608.0KB)         passed
  dm01db02      70.5587GB (7.3986188E7KB)  8GB (8388608.0KB)         passed
  dm01db01      70.5587GB (7.3986188E7KB)  8GB (8388608.0KB)         passed
  dm01db08      70.5587GB (7.3986188E7KB)  8GB (8388608.0KB)         passed
  dm01db07      70.5587GB (7.3986188E7KB)  8GB (8388608.0KB)         passed
  dm01db06      70.5587GB (7.3986188E7KB)  8GB (8388608.0KB)         passed
Verifying Physical Memory …PASSED
Verifying Available Physical Memory …
  Node Name     Available                 Required                  Status
  ————  ————————  ————————  ———-
  dm01db05      64.6698GB (6.7811208E7KB)  50MB (51200.0KB)          passed
  dm01db04      65.1885GB (6.8355116E7KB)  50MB (51200.0KB)          passed
  dm01db03      59.677GB (6.257588E7KB)   50MB (51200.0KB)           passed
  dm01db02      66.6203GB (6.9856476E7KB)  50MB (51200.0KB)          passed
  dm01db01      65.2446GB (6.8413932E7KB)  50MB (51200.0KB)          passed
  dm01db08      64.9811GB (6.8137596E7KB)  50MB (51200.0KB)          passed
  dm01db07      65.0432GB (6.820278E7KB)  50MB (51200.0KB)           passed
  dm01db06      65.0759GB (6.823698E7KB)  50MB (51200.0KB)           passed
Verifying Available Physical Memory …PASSED
Verifying Swap Size …
  Node Name     Available                 Required                  Status
  ————  ————————  ————————  ———-
  dm01db05      15.9949GB (1.6771856E7KB)  16GB (1.6777216E7KB)      passed
  dm01db04      15.9949GB (1.6771856E7KB)  16GB (1.6777216E7KB)      passed
  dm01db03      15.9949GB (1.6771856E7KB)  15.9689GB (1.6744628E7KB)  passed
  dm01db02      15.9949GB (1.6771856E7KB)  16GB (1.6777216E7KB)      passed
  dm01db01      15.9949GB (1.6771856E7KB)  16GB (1.6777216E7KB)      passed
  dm01db08      15.9949GB (1.6771856E7KB)  16GB (1.6777216E7KB)      passed
  dm01db07      15.9949GB (1.6771856E7KB)  16GB (1.6777216E7KB)      passed
  dm01db06      15.9949GB (1.6771856E7KB)  16GB (1.6777216E7KB)      passed
Verifying Swap Size …PASSED
Verifying Free Space: dm01db05:/usr,dm01db05:/var,dm01db05:/etc,dm01db05:/sbin,dm01db05:/tmp …
  Path              Node Name     Mount point   Available     Required      Status
  —————-  ————  ————  ————  ————  ————
  /usr              dm01db05      /             31.1182GB     25MB          passed
  /var              dm01db05      /             31.1182GB     5MB           passed
  /etc              dm01db05      /             31.1182GB     25MB          passed
  /sbin             dm01db05      /             31.1182GB     10MB          passed
  /tmp              dm01db05      /             31.1182GB     1GB           passed
Verifying Free Space: dm01db05:/usr,dm01db05:/var,dm01db05:/etc,dm01db05:/sbin,dm01db05:/tmp …PASSED
Verifying Free Space: dm01db04:/usr,dm01db04:/var,dm01db04:/etc,dm01db04:/sbin,dm01db04:/tmp …
  Path              Node Name     Mount point   Available     Required      Status
  —————-  ————  ————  ————  ————  ————
  /usr              dm01db04      /             30.8779GB     25MB          passed
  /var              dm01db04      /             30.8779GB     5MB           passed
  /etc              dm01db04      /             30.8779GB     25MB          passed
  /sbin             dm01db04      /             30.8779GB     10MB          passed
  /tmp              dm01db04      /             30.8779GB     1GB           passed
Verifying Free Space: dm01db04:/usr,dm01db04:/var,dm01db04:/etc,dm01db04:/sbin,dm01db04:/tmp …PASSED
Verifying Free Space: dm01db03:/usr,dm01db03:/var,dm01db03:/etc,dm01db03:/sbin,dm01db03:/tmp …
  Path              Node Name     Mount point   Available     Required      Status
  —————-  ————  ————  ————  ————  ————
  /usr              dm01db03      /             32.7207GB     25MB          passed
  /var              dm01db03      /             32.7207GB     5MB           passed
  /etc              dm01db03      /             32.7207GB     25MB          passed
  /sbin             dm01db03      /             32.7207GB     10MB          passed
  /tmp              dm01db03      /             32.7207GB     1GB           passed
Verifying Free Space: dm01db03:/usr,dm01db03:/var,dm01db03:/etc,dm01db03:/sbin,dm01db03:/tmp …PASSED
Verifying Free Space: dm01db02:/usr,dm01db02:/var,dm01db02:/etc,dm01db02:/sbin,dm01db02:/tmp …
  Path              Node Name     Mount point   Available     Required      Status
  —————-  ————  ————  ————  ————  ————
  /usr              dm01db02      /             24.959GB      25MB          passed
  /var              dm01db02      /             24.959GB      5MB           passed
  /etc              dm01db02      /             24.959GB      25MB          passed
  /sbin             dm01db02      /             24.959GB      10MB          passed
  /tmp              dm01db02      /             24.959GB      1GB           passed
Verifying Free Space: dm01db02:/usr,dm01db02:/var,dm01db02:/etc,dm01db02:/sbin,dm01db02:/tmp …PASSED
Verifying Free Space: dm01db01:/usr,dm01db01:/var,dm01db01:/etc,dm01db01:/sbin,dm01db01:/tmp …
  Path              Node Name     Mount point   Available     Required      Status
  —————-  ————  ————  ————  ————  ————
  /usr              dm01db01      /             27.3477GB     25MB          passed
  /var              dm01db01      /             27.3477GB     5MB           passed
  /etc              dm01db01      /             27.3477GB     25MB          passed
  /sbin             dm01db01      /             27.3477GB     10MB          passed
  /tmp              dm01db01      /             27.3477GB     1GB           passed
Verifying Free Space: dm01db01:/usr,dm01db01:/var,dm01db01:/etc,dm01db01:/sbin,dm01db01:/tmp …PASSED
Verifying Free Space: dm01db08:/usr,dm01db08:/var,dm01db08:/etc,dm01db08:/sbin,dm01db08:/tmp …
  Path              Node Name     Mount point   Available     Required      Status
  —————-  ————  ————  ————  ————  ————
  /usr              dm01db08      /             36.6592GB     25MB          passed
  /var              dm01db08      /             36.6592GB     5MB           passed
  /etc              dm01db08      /             36.6592GB     25MB          passed
  /sbin             dm01db08      /             36.6592GB     10MB          passed
  /tmp              dm01db08      /             36.6592GB     1GB           passed
Verifying Free Space: dm01db08:/usr,dm01db08:/var,dm01db08:/etc,dm01db08:/sbin,dm01db08:/tmp …PASSED
Verifying Free Space: dm01db07:/usr,dm01db07:/var,dm01db07:/etc,dm01db07:/sbin,dm01db07:/tmp …
  Path              Node Name     Mount point   Available     Required      Status
  —————-  ————  ————  ————  ————  ————
  /usr              dm01db07      /             38.7031GB     25MB          passed
  /var              dm01db07      /             38.7031GB     5MB           passed
  /etc              dm01db07      /             38.7031GB     25MB          passed
  /sbin             dm01db07      /             38.7031GB     10MB          passed
  /tmp              dm01db07      /             38.7031GB     1GB           passed
Verifying Free Space: dm01db07:/usr,dm01db07:/var,dm01db07:/etc,dm01db07:/sbin,dm01db07:/tmp …PASSED
Verifying Free Space: dm01db06:/usr,dm01db06:/var,dm01db06:/etc,dm01db06:/sbin,dm01db06:/tmp …
  Path              Node Name     Mount point   Available     Required      Status
  —————-  ————  ————  ————  ————  ————
  /usr              dm01db06      /             38.2168GB     25MB          passed
  /var              dm01db06      /             38.2168GB     5MB           passed
  /etc              dm01db06      /             38.2168GB     25MB          passed
  /sbin             dm01db06      /             38.2168GB     10MB          passed
  /tmp              dm01db06      /             38.2168GB     1GB           passed
Verifying Free Space: dm01db06:/usr,dm01db06:/var,dm01db06:/etc,dm01db06:/sbin,dm01db06:/tmp …PASSED
Verifying User Existence: oracle …
  Node Name     Status                    Comment
  ————  ————————  ————————
  dm01db05      passed                    exists(1000)
  dm01db04      passed                    exists(1000)
  dm01db03      passed                    exists(1000)
  dm01db02      passed                    exists(1000)
  dm01db01      passed                    exists(1000)
  dm01db08      passed                    exists(1000)
  dm01db07      passed                    exists(1000)
  dm01db06      passed                    exists(1000)

Verifying Users With Same UID: 1000 …PASSED
Verifying User Existence: oracle …PASSED

******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************
————–                —————     —————-
Check failed.                 Failed on nodes     Reboot required?
————–                —————     —————-
Soft Limit: maximum stack     dm01db05,dm01db04,  no
size                          dm01db03,dm01db02,
                              dm01db01,dm01db08,
                              dm01db07,dm01db06
Group Existence: asmadmin     dm01db05,dm01db04,  no
                              dm01db03,dm01db02,
                              dm01db01,dm01db08,
                              dm01db07,dm01db06

Execute “/tmp/CVU_12.2.0.1.0_oracle/runfixup.sh” as root user on nodes “dm01db06,dm01db05,dm01db04,dm01db03,dm01db02,dm01db01,dm01db08,dm01db07” to perform the fix up operations manually
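The fixup script can be run on all listed nodes in one pass with dcli rather than node by node; a sketch using the script path from the CVU message above (guarded so it is a no-op where dcli is unavailable):

```shell
# Run CVU's generated fixup script on every node as root
if command -v dcli >/dev/null 2>&1; then
  dcli -g ~/dbs_group -l root /tmp/CVU_12.2.0.1.0_oracle/runfixup.sh
fi
```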

  • Unset the environment variables
dm01db01-orcldb1 {/u01/app/oracle/software}:unset ORACLE_HOME ORACLE_BASE ORACLE_SID
  • Set the display and execute gridSetup.sh
dm01db01-orcldb1 {/home/oracle}:export DISPLAY=10.30.221.39:0.0

dm01db01-orcldb1 {/home/oracle}:cd /u01/app/12.2.0.1/grid

dm01db01-orcldb1 {/u01/app/12.2.0.1/grid}:ls -l gridSetup.sh
-rwxr-x--- 1 oracle oinstall 5395 Jul 21  2016 gridSetup.sh

dm01db01-orcldb1 {/u01/app/12.2.0.1/grid}:./gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard…


OUI starting …

Select “Configure Oracle Grid Infrastructure for a New Cluster”

Select “Configure an Oracle Standalone Cluster”. With 12.2, Oracle has introduced several new cluster configuration options. For more information, refer to the Oracle Grid Infrastructure Installation and Upgrade Guide.

Specify the Cluster name, SCAN Name and the SCAN port to be used for Installation

Provide the list of nodes. Click Add button

On this page you can add one node at a time or a range of nodes. Here I am specifying one node at a time.

To specify a range of nodes, select “Add a range of nodes” and use the pattern below.

Verify all the nodes and VIPs, then click Next.
Select the correct interfaces for the ASM & private network and the public network.

 Select “Configure ASM using Block Devices”.

Select Yes if you are planning to create a separate ASM disk group for the GIMR; otherwise select No.

 Select “Change Discovery Path” to enter the correct string to discover the candidate disks.

Enter “o/*/SYSTEMDG*” to search for all grid disks with the name prefix “SYSTEMDG”.

We can see all the ASM disks with the prefix SYSTEMDG.


Click Specify Failure Groups to create the ASM failure groups. Here I have 14 storage cells, and each cell acts as a failure group on Exadata.

Select the ASM disks to create the DBFS_DG disk group, which stores the OCR and voting disks. Here I am creating the DBFS_DG disk group using normal redundancy and disks from 7 storage cells.

Create MGMT_DG for storing the GIMR. Here I am creating the MGMT_DG disk group using normal redundancy and disks from 7 storage cells.

Specify the ASM password for SYS and ASMSNMP users

Specify a strong password to avoid this warning. Click YES to continue.

Select “Do not use IPMI” and click Next.

Click next. This step can be performed at a later time.

Specify the OS groups for ASM authentication

Click Yes to continue.

Specify the Oracle base. The GI home is selected by default because we unzipped the GI software into it and are running gridSetup.sh from the GI home.

Click Next. We can run the root script manually at the end of the installation.

Prerequisite checks started

Prerequisite checks in progress

The prerequisite check failed for the stack size soft limit. You can ignore it and move forward, or use “Fix & Check Again”.
Click Fix & Check Again

A pop-up box appears asking you to run the “runfixup.sh” script

Login as root user and run the fixup script

If you don’t want to run the fixup script, click Ignore All and then click Next.
Click Yes to continue.

On Installation summary page verify the details and click Install

Grid Infrastructure Installation progress

The following screen appears to run the root.sh script as root user.

Open a new terminal and run the /u01/app/12.2.0.1/grid/root.sh on all the nodes. Run the script on node 1 first. Once it is completed successfully on node 1, you can run the script on all other nodes in parallel.
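The ordering above can be sketched as follows (node names dm01db02..dm01db08 are assumed; the loop is illustrative, not part of the documented procedure, and the existence check makes it a no-op off the cluster):

```shell
# root.sh must complete on node 1 before the remaining nodes start
ROOTSH=/u01/app/12.2.0.1/grid/root.sh
if [ -x "$ROOTSH" ]; then
  "$ROOTSH"                                   # node 1 first, wait for success
  for i in $(seq 2 8); do                     # then nodes 2-8 in parallel
    ssh "root@$(printf 'dm01db%02d' "$i")" "$ROOTSH" &
  done
  wait
fi
```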

Node 1:
[root@dm01db01 ~]# /u01/app/12.2.0.1/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.2.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The file “oraenv” already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin …
The contents of “coraenv” have not changed. No need to overwrite.

Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/dm01db01/crsconfig/rootcrs_dm01db01_2017-03-09_03-33-02AM.log
2017/03/09 03:33:04 CLSRSC-594: Executing installation step 1 of 19: ‘SetupTFA’.
2017/03/09 03:33:05 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/03/09 03:33:36 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/03/09 03:33:36 CLSRSC-594: Executing installation step 2 of 19: ‘ValidateEnv’.
2017/03/09 03:33:43 CLSRSC-363: User ignored prerequisites during installation
2017/03/09 03:33:43 CLSRSC-594: Executing installation step 3 of 19: ‘CheckFirstNode’.
2017/03/09 03:33:45 CLSRSC-594: Executing installation step 4 of 19: ‘GenSiteGUIDs’.
2017/03/09 03:33:47 CLSRSC-594: Executing installation step 5 of 19: ‘SaveParamFile’.
2017/03/09 03:33:54 CLSRSC-594: Executing installation step 6 of 19: ‘SetupOSD’.
2017/03/09 03:33:54 CLSRSC-594: Executing installation step 7 of 19: ‘CheckCRSConfig’.
2017/03/09 03:33:54 CLSRSC-594: Executing installation step 8 of 19: ‘SetupLocalGPNP’.
2017/03/09 03:34:16 CLSRSC-594: Executing installation step 9 of 19: ‘ConfigOLR’.
2017/03/09 03:34:24 CLSRSC-594: Executing installation step 10 of 19: ‘ConfigCHMOS’.
2017/03/09 03:34:24 CLSRSC-594: Executing installation step 11 of 19: ‘CreateOHASD’.
2017/03/09 03:34:30 CLSRSC-594: Executing installation step 12 of 19: ‘ConfigOHASD’.
2017/03/09 03:34:45 CLSRSC-330: Adding Clusterware entries to file ‘oracle-ohasd.conf’
2017/03/09 03:35:05 CLSRSC-594: Executing installation step 13 of 19: ‘InstallAFD’.
2017/03/09 03:35:11 CLSRSC-594: Executing installation step 14 of 19: ‘InstallACFS’.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘dm01db01’
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘dm01db01’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/03/09 03:35:43 CLSRSC-594: Executing installation step 15 of 19: ‘InstallKA’.
2017/03/09 03:35:56 CLSRSC-594: Executing installation step 16 of 19: ‘InitConfig’.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘dm01db01’
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘dm01db01’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start ‘ora.evmd’ on ‘dm01db01’
CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘dm01db01’
CRS-2676: Start of ‘ora.mdnsd’ on ‘dm01db01’ succeeded
CRS-2676: Start of ‘ora.evmd’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘dm01db01’
CRS-2676: Start of ‘ora.gpnpd’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘dm01db01’
CRS-2672: Attempting to start ‘ora.gipcd’ on ‘dm01db01’
CRS-2676: Start of ‘ora.cssdmonitor’ on ‘dm01db01’ succeeded
CRS-2676: Start of ‘ora.gipcd’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.cssd’ on ‘dm01db01’
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘dm01db01’
CRS-2676: Start of ‘ora.diskmon’ on ‘dm01db01’ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘dm01db01’ succeeded

Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170309AM033635.log for details.

2017/03/09 03:37:26 CLSRSC-482: Running command: ‘/u01/app/12.2.0.1/grid/bin/ocrconfig -upgrade oracle oinstall’
CRS-2672: Attempting to start ‘ora.crf’ on ‘dm01db01’
CRS-2672: Attempting to start ‘ora.storage’ on ‘dm01db01’
CRS-2676: Start of ‘ora.storage’ on ‘dm01db01’ succeeded
CRS-2676: Start of ‘ora.crf’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.crsd’ on ‘dm01db01’
CRS-2676: Start of ‘ora.crsd’ on ‘dm01db01’ succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 78c7111b10e44f2dbfed007a03782cef.
Successful addition of voting disk 42f1ce36d82b4f43bf7802841a0511d4.
Successful addition of voting disk a43a383b20ff4f50bf690962ce8e9bb5.
Successfully replaced voting disk group with +DBFS_DG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
—  —–    —————–                ——— ———
 1. ONLINE   78c7111b10e44f2dbfed007a03782cef (o/192.168.2.11/SYSTEMDG_CD_10_dm01cel03) [DBFS_DG]
 2. ONLINE   42f1ce36d82b4f43bf7802841a0511d4 (o/192.168.2.18/SYSTEMDG_CD_06_dm01cel10) [DBFS_DG]
 3. ONLINE   a43a383b20ff4f50bf690962ce8e9bb5 (o/192.168.2.16/SYSTEMDG_CD_05_dm01cel08) [DBFS_DG]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘dm01db01’
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘dm01db01’
CRS-2677: Stop of ‘ora.crsd’ on ‘dm01db01’ succeeded
CRS-2673: Attempting to stop ‘ora.storage’ on ‘dm01db01’
CRS-2673: Attempting to stop ‘ora.crf’ on ‘dm01db01’
CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘dm01db01’
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘dm01db01’
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘dm01db01’
CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘dm01db01’ succeeded
CRS-2677: Stop of ‘ora.crf’ on ‘dm01db01’ succeeded
CRS-2677: Stop of ‘ora.gpnpd’ on ‘dm01db01’ succeeded
CRS-2677: Stop of ‘ora.storage’ on ‘dm01db01’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘dm01db01’
CRS-2677: Stop of ‘ora.mdnsd’ on ‘dm01db01’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘dm01db01’ succeeded
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘dm01db01’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘dm01db01’
CRS-2677: Stop of ‘ora.ctssd’ on ‘dm01db01’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘dm01db01’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘dm01db01’
CRS-2677: Stop of ‘ora.cssd’ on ‘dm01db01’ succeeded
CRS-2673: Attempting to stop ‘ora.diskmon’ on ‘dm01db01’
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘dm01db01’
CRS-2677: Stop of ‘ora.gipcd’ on ‘dm01db01’ succeeded
CRS-2677: Stop of ‘ora.diskmon’ on ‘dm01db01’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘dm01db01’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/03/09 03:39:09 CLSRSC-594: Executing installation step 17 of 19: ‘StartCluster’.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start ‘ora.evmd’ on ‘dm01db01’
CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘dm01db01’
CRS-2676: Start of ‘ora.mdnsd’ on ‘dm01db01’ succeeded
CRS-2676: Start of ‘ora.evmd’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘dm01db01’
CRS-2676: Start of ‘ora.gpnpd’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.gipcd’ on ‘dm01db01’
CRS-2676: Start of ‘ora.gipcd’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘dm01db01’
CRS-2676: Start of ‘ora.cssdmonitor’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.cssd’ on ‘dm01db01’
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘dm01db01’
CRS-2676: Start of ‘ora.diskmon’ on ‘dm01db01’ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.ctssd’ on ‘dm01db01’
CRS-2676: Start of ‘ora.ctssd’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.asm’ on ‘dm01db01’
CRS-2676: Start of ‘ora.asm’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.storage’ on ‘dm01db01’
CRS-2676: Start of ‘ora.storage’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.crf’ on ‘dm01db01’
CRS-2676: Start of ‘ora.crf’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.crsd’ on ‘dm01db01’
CRS-2676: Start of ‘ora.crsd’ on ‘dm01db01’ succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: dm01db01
CRS-6016: Resource auto-start has completed for server dm01db01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/03/09 03:41:28 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/03/09 03:41:28 CLSRSC-594: Executing installation step 18 of 19: ‘ConfigNode’.
CRS-2672: Attempting to start ‘ora.ASMNET1LSNR_ASM.lsnr’ on ‘dm01db01’
CRS-2676: Start of ‘ora.ASMNET1LSNR_ASM.lsnr’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.asm’ on ‘dm01db01’
CRS-2676: Start of ‘ora.asm’ on ‘dm01db01’ succeeded
CRS-2672: Attempting to start ‘ora.DBFS_DG.dg’ on ‘dm01db01’
CRS-2676: Start of ‘ora.DBFS_DG.dg’ on ‘dm01db01’ succeeded
2017/03/09 03:43:32 CLSRSC-594: Executing installation step 19 of 19: ‘PostConfig’.

2017/03/09 03:44:19 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster … succeeded
 

Now you can run the script on all other nodes in parallel.

Go back to the installation screen and click OK.

Installation continues


Click OK to finish the installation.
  • Verify Installation
[root@dm01db01 ~]# /u01/app/12.2.0.1/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               OFFLINE OFFLINE      dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               OFFLINE OFFLINE      dm01db05                 STABLE
               OFFLINE OFFLINE      dm01db06                 STABLE
               OFFLINE OFFLINE      dm01db07                 STABLE
               OFFLINE OFFLINE      dm01db08                 STABLE
ora.DBFS_DG.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               OFFLINE OFFLINE      dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               OFFLINE OFFLINE      dm01db05                 STABLE
               OFFLINE OFFLINE      dm01db06                 STABLE
               OFFLINE OFFLINE      dm01db07                 STABLE
               OFFLINE OFFLINE      dm01db08                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.MGMT_DG.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               OFFLINE OFFLINE      dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               OFFLINE OFFLINE      dm01db05                 STABLE
               OFFLINE OFFLINE      dm01db06                 STABLE
               OFFLINE OFFLINE      dm01db07                 STABLE
               OFFLINE OFFLINE      dm01db08                 STABLE
ora.chad
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.net1.network
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.ons
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
               ONLINE  ONLINE       dm01db03                 STABLE
               ONLINE  ONLINE       dm01db04                 STABLE
               ONLINE  ONLINE       dm01db05                 STABLE
               ONLINE  ONLINE       dm01db06                 STABLE
               ONLINE  ONLINE       dm01db07                 STABLE
               ONLINE  ONLINE       dm01db08                 STABLE
ora.proxy_advm
               OFFLINE OFFLINE      dm01db01                 STABLE
               OFFLINE OFFLINE      dm01db02                 STABLE
               OFFLINE OFFLINE      dm01db03                 STABLE
               OFFLINE OFFLINE      dm01db04                 STABLE
               OFFLINE OFFLINE      dm01db05                 STABLE
               OFFLINE OFFLINE      dm01db06                 STABLE
               OFFLINE OFFLINE      dm01db07                 STABLE
               OFFLINE OFFLINE      dm01db08                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       dm01db04                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       dm01db01                 192.168.2.1,STABLE
ora.asm
      1        ONLINE  ONLINE       dm01db01                 Started,STABLE
      2        ONLINE  ONLINE       dm01db02                 Started,STABLE
      3        ONLINE  ONLINE       dm01db04                 Started,STABLE
ora.dm01db01.vip
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.dm01db02.vip
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.dm01db03.vip
      1        ONLINE  ONLINE       dm01db03                 STABLE
ora.dm01db04.vip
      1        ONLINE  ONLINE       dm01db04                 STABLE
ora.dm01db05.vip
      1        ONLINE  ONLINE       dm01db05                 STABLE
ora.dm01db06.vip
      1        ONLINE  ONLINE       dm01db06                 STABLE
ora.dm01db07.vip
      1        ONLINE  ONLINE       dm01db07                 STABLE
ora.dm01db08.vip
      1        ONLINE  ONLINE       dm01db08                 STABLE
ora.cvu
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       dm01db01                 Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       dm01db04                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       dm01db01                 STABLE
--------------------------------------------------------------------------------
[root@dm01db01 ~]#
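When verifying the stack across all eight nodes, it helps to scan the `crsctl stat res -t` output for OFFLINE resources rather than eyeballing it. A minimal shell sketch, assuming the output has been saved to a file first (the sample lines and the `/tmp` path are illustrative):

```shell
# Hypothetical sketch: list the servers where a resource shows OFFLINE in
# saved 'crsctl stat res -t' output. The heredoc stands in for real output.
cat > /tmp/crsctl_out.txt <<'EOF'
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       dm01db01                 STABLE
               OFFLINE OFFLINE      dm01db03                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dm01db01                 STABLE
EOF

# Print the server column ($3) for rows whose target column ($1) is OFFLINE
awk '$1 == "OFFLINE" { print $3 }' /tmp/crsctl_out.txt
```

Running this against the sample prints `dm01db03`, the node with an offline listener.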

ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512             512   4096  4194304   2136960  2135156            30528         1052314              0             Y  DBFS_DG/
MOUNTED  NORMAL  N         512             512   4096  4194304    824256   755900            30528          362686              0             N  MGMT_DG/
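For a NORMAL redundancy disk group, every extent is mirrored twice, so `Usable_file_MB` is roughly `(Free_MB - Req_mir_free_MB) / 2`. The arithmetic can be checked against the DBFS_DG figures above:

```shell
# Usable file space in a NORMAL redundancy disk group:
# usable_mb = (free_mb - required_mirror_free_mb) / 2
free_mb=2135156          # Free_MB for DBFS_DG
req_mir_free_mb=30528    # Req_mir_free_MB for DBFS_DG
awk -v f="$free_mb" -v r="$req_mir_free_mb" 'BEGIN { printf "%d\n", (f - r) / 2 }'
```

This prints 1052314, matching the `Usable_file_MB` column reported by `lsdg`.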

ASMCMD> showclustermode
ASM cluster : Flex mode enabled

ASMCMD> showversion
ASM version         : 12.2.0.1.0


Conclusion
In this article we have learned how to install Oracle Grid Infrastructure 12.2.0.1 on an Exadata X2-2 Full Rack. The GI installation has changed a little in 12.2: it now uses gridSetup.sh to install the GI software.


Overview
Earlier versions of the Opatch utility were installed by simply unzipping the files into the Oracle home. From version 13.6.x onwards, Opatch uses the OUI installation method instead. This ensures that the installer both performs the file updates and records the components and file changes in the OUI metadata; with the unzip method, OUI was not aware of these changes.

This procedure is only applicable to Enterprise Manager Cloud Control environment.

In this article I will demonstrate how to upgrade the Opatch utility in the OEM 13c Agent home. The same procedure also applies when upgrading Opatch in the OMS home.

Download the Opatch utility (patch 6880880) from My Oracle Support.

dm01db01-orcldb1 {/u01/app}:cd /u01/app/oracle/software/
dm01db01-orcldb1 {/u01/app/oracle/software}:ls -l p6880880_139000_Generic.zip
-rw-r--r-- 1 oracle oinstall 41188149 Jan  9 05:55 p6880880_139000_Generic.zip

dm01db01-orcldb1 {/u01/app/oracle/software}:unzip p6880880_139000_Generic.zip
Archive:  p6880880_139000_Generic.zip
   creating: 6880880/
  inflating: 6880880/README.txt
  inflating: 6880880/opatch_generic.jar
  inflating: 6880880/version.txt

dm01db01-orcldb1 {/u01/app/oracle/software}: cd 6880880/
dm01db01-orcldb1 {/u01/app/oracle/software/6880880}:ls -ltr
total 40424
-rw-r--r-- 1 oracle oinstall       10 Nov 21 12:17 version.txt
-rw-r--r-- 1 oracle oinstall 41338422 Nov 21 12:17 opatch_generic.jar
-rw-rw-r-- 1 oracle oinstall     3084 Dec  9 17:04 README.txt

dm01db01-orcldb1 {/u01/app/oracle/software/6880880}:view README.txt

dm01db01-orcldb1 {/u01/app/oracle/software/6880880}:export ORACLE_HOME=/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0

dm01db01-orcldb1 {/u01/app/oracle/software/6880880}:echo $ORACLE_HOME
/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0


Here my current Opatch version is 13.8, which is greater than 13.6, so I must use the new OUI-based approach to upgrade the Opatch utility.

dm01db02-orcldb2 {/u01/app/oracle/software/6880880}:cd $ORACLE_HOME/OPatch

dm01db02-orcldb2 {/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0/OPatch}:./opatch version
OPatch Version: 13.8.0.0.0

OPatch succeeded.
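The version check can be scripted so the right upgrade method is picked automatically. A hedged sketch using `sort -V` for the comparison (the 13.8.0.0.0 string is taken from the `opatch version` output above; the 13.6 cutoff is from the note at the start of this article):

```shell
# Decide which OPatch upgrade method applies: versions >= 13.6 use the
# OUI-based installer (opatch_generic.jar); older ones were plain unzips.
current="13.8.0.0.0"   # from './opatch version'
threshold="13.6"

# sort -V orders version strings numerically; if the threshold sorts
# first (or equal), then current >= threshold.
if [ "$(printf '%s\n' "$threshold" "$current" | sort -V | head -1)" = "$threshold" ]; then
    echo "use OUI method"
else
    echo "use unzip method"
fi
```

With the values above this prints `use OUI method`.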

  • Backup your <ORACLE_HOME>
dm01db02-orcldb2 {/home/oracle}:cd $ORACLE_HOME

dm01db02-orcldb2 {/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0}:pwd
/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0

dm01db02-orcldb2 {/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0}:tar -czvf /u01/app/oracle/product/Agent13c/agent13.2.tgz .
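A backup is only useful if the archive is readable, so it is worth listing it back before patching. A minimal sketch of the create-and-verify pattern, using throwaway paths rather than the real agent home:

```shell
# Hypothetical sketch: back up a directory to a .tgz and verify the
# archive contents before relying on it. Paths are illustrative.
src=$(mktemp -d)
echo "demo" > "$src/opatch.txt"

backup=/tmp/home_backup.tgz
tar -czf "$backup" -C "$src" .               # -c creates the archive (-x would extract)
tar -tzf "$backup" | grep -q 'opatch.txt' && echo "backup verified"
```

The `-C` flag changes into the source directory first, so the archive holds relative paths and can be restored into any home.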

  • Verify Java
dm01db02-orcldb2 {/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0/OPatch}:which java
/usr/bin/java

dm01db02-orcldb2 {/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0/OPatch}:java -version
java version "1.7.0_91"
Java(TM) SE Runtime Environment (build 1.7.0_91-b32)
Java HotSpot(TM) 64-Bit Server VM (build 24.91-b03, mixed mode)

  • Install the Opatch software using java:
dm01db02-orcldb2 {/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0/OPatch}:java -jar /u01/app/oracle/software/6880880/opatch_generic.jar -silent oracle_home=/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0
Launcher log file is /tmp/OraInstall2017-02-01_05-15-22AM/launcher2017-02-01_05-15-22AM.log.
Extracting the installer . . . . Done
Checking if CPU speed is above 300 MHz.   Actual 2526.856 MHz    Passed
Checking swap space: must be greater than 512 MB.   Actual 16378 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 37114 MB    Passed

Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2017-02-01_05-15-22AM
Installation Summary

Disk Space : Required 6 MB, Available 87,486 MB
Feature Sets to Install:
        Next Generation Install Core 13.9.1.0.0
        OPatch 13.9.1.0.0
        OPatch Auto OPlan 13.9.1.0.0
Session log file is /tmp/OraInstall2017-02-01_05-15-22AM/install2017-02-01_05-15-22AM.log

Loading products list. Please wait.
 1% … 40%

Loading products. Please wait.
 43% … 99%
Updating Libraries

Starting Installations
 1% … 92%

Install pending
Installation in progress


 Component : oracle.swd.opatch 13.9.1.0.0
Copying files for ‘oracle.swd.opatch 13.9.1.0.0 ‘
 Component : oracle.glcm.osys.core 13.9.1.0.0
Copying files for ‘oracle.glcm.osys.core 13.9.1.0.0 ‘
 Component : oracle.glcm.oplan.core 13.9.1.0.0
Copying files for ‘oracle.glcm.oplan.core 13.9.1.0.0 ‘
 Component : oracle.glcm.opatch.common.api 13.9.1.0.0
Copying files for ‘oracle.glcm.opatch.common.api 13.9.1.0.0 ‘
 Component : oracle.glcm.opatchauto.core 13.9.1.0.0
Copying files for ‘oracle.glcm.opatchauto.core 13.9.1.0.0 ‘
 

Install successful

Post feature install pending
Post Feature installing
 Feature Set : oracle.glcm.osys.core.classpath
 Feature Set : apache_commons_cli_lib
 Feature Set : oracle.glcm.oplan.core.classpath
Post Feature installing ‘apache_commons_cli_lib’
Post Feature installing ‘oracle.glcm.oplan.core.classpath’
 Feature Set : oracle.glcm.opatch.common.api.classpath
 Feature Set : oracle.glcm.opatchauto.core.binary.classpath
 Feature Set : apache_commons_compress_lib
 Feature Set : oracle.glcm.opatchauto.core.wallet.classpath
 Feature Set : oracle.glcm.opatchauto.core.classpath
 

Post Feature installing ‘oracle.glcm.opatchauto.core.wallet.classpath’
Post Feature installing ‘apache_commons_compress_lib’
Post Feature installing ‘oracle.glcm.opatchauto.core.binary.classpath’
 Feature Set : oracle.glcm.opatchauto.core.actions.classpath
Post Feature installing ‘oracle.glcm.opatch.common.api.classpath’
Post Feature installing ‘oracle.glcm.opatchauto.core.actions.classpath’
Post Feature installing ‘oracle.glcm.osys.core.classpath’
Post Feature installing ‘oracle.glcm.opatchauto.core.classpath’
Post feature install complete
String substitutions pending
String substituting
 Component : oracle.swd.opatch 13.9.1.0.0
String substituting ‘oracle.swd.opatch 13.9.1.0.0 ‘
 Component : oracle.glcm.osys.core 13.9.1.0.0
String substituting ‘oracle.glcm.osys.core 13.9.1.0.0 ‘
 Component : oracle.glcm.oplan.core 13.9.1.0.0
String substituting ‘oracle.glcm.oplan.core 13.9.1.0.0 ‘
 Component : oracle.glcm.opatch.common.api 13.9.1.0.0
String substituting ‘oracle.glcm.opatch.common.api 13.9.1.0.0 ‘
 Component : oracle.glcm.opatchauto.core 13.9.1.0.0
String substituting ‘oracle.glcm.opatchauto.core 13.9.1.0.0 ‘
String substitutions complete
Link pending
Linking in progress
 Component : oracle.swd.opatch 13.9.1.0.0
Linking ‘oracle.swd.opatch 13.9.1.0.0 ‘
 Component : oracle.glcm.osys.core 13.9.1.0.0
Linking ‘oracle.glcm.osys.core 13.9.1.0.0 ‘
 Component : oracle.glcm.oplan.core 13.9.1.0.0
Linking ‘oracle.glcm.oplan.core 13.9.1.0.0 ‘
 Component : oracle.glcm.opatch.common.api 13.9.1.0.0
Linking ‘oracle.glcm.opatch.common.api 13.9.1.0.0 ‘
 Component : oracle.glcm.opatchauto.core 13.9.1.0.0
Linking ‘oracle.glcm.opatchauto.core 13.9.1.0.0 ‘
Linking in progress

Link successful

Setup pending
Setup in progress
 Component : oracle.swd.opatch 13.9.1.0.0
Setting up ‘oracle.swd.opatch 13.9.1.0.0 ‘
 Component : oracle.glcm.osys.core 13.9.1.0.0
Setting up ‘oracle.glcm.osys.core 13.9.1.0.0 ‘
 Component : oracle.glcm.oplan.core 13.9.1.0.0
Setting up ‘oracle.glcm.oplan.core 13.9.1.0.0 ‘
 Component : oracle.glcm.opatch.common.api 13.9.1.0.0
Setting up ‘oracle.glcm.opatch.common.api 13.9.1.0.0 ‘
 Component : oracle.glcm.opatchauto.core 13.9.1.0.0
Setting up ‘oracle.glcm.opatchauto.core 13.9.1.0.0 ‘
 

Setup successful

Save inventory pending
Saving inventory
 93%
Saving inventory complete
 94%
Configuration complete
Logs successfully copied to /u01/app/oraInventory/logs.
dm01db02-orcldb2 {/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0/OPatch}:

  • Verify Opatch software is upgraded
dm01db02-orcldb2 {/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0/OPatch}:./opatch version
OPatch Version: 13.9.1.0.0

OPatch succeeded.

  • Test new Opatch software
dm01db02-orcldb2 {/u01/app/oracle/product/Agent13c/agent_13.2.0.0.0/OPatch}:./opatch lspatches
24470104;

OPatch succeeded.




Conclusion
In this article we have learned how to upgrade the Opatch utility in the OEM 13c agent home using OUI. Opatch 13.6 and above uses this new OUI-based method to upgrade itself.

Note: There is no way to revert only OPatch to an older version. To revert OPatch, restore the backup for your ORACLE_HOME.

Overview
In Exadata, ASM disk groups are created from ASM disks, which are provisioned as grid disks from the Exadata storage cells. The grid disks in turn are carved out of celldisks. Normally there is no free space on the celldisks, as all space is allocated to grid disks, as shown below:

[root@dm01cel01 ~]# cellcli -e "list celldisk where name like 'CD.*' attributes name, size, freespace"
         CD_00_dm01cel01         528.734375G     0
         CD_01_dm01cel01         528.734375G     0
         CD_02_dm01cel01         557.859375G     0
         CD_03_dm01cel01         557.859375G     0
         CD_04_dm01cel01         557.859375G     0
         CD_05_dm01cel01         557.859375G     0
         CD_06_dm01cel01         557.859375G     0
         CD_07_dm01cel01         557.859375G     0
         CD_08_dm01cel01         557.859375G     0
         CD_09_dm01cel01         557.859375G     0
         CD_10_dm01cel01         557.859375G     0
         CD_11_dm01cel01         557.859375G     0


In this article I will demonstrate how to free up some space from the grid disks in the RECO ASM disk group and then reuse that space to increase the size of the DATA disk group. The freed space can be anywhere on the celldisks.


Environment
  • Exadata Full Rack X2-2
  • 8 Compute nodes, 14 Storage cells and 3 IB Switches
  • High Performance Disks (600GB per disk)

1. Free up space on celldisks
Let’s say that we want to free up 50GB per disk in the RECO disk group. We first need to reduce the disk size in ASM, and then reduce the grid disk size on the Exadata storage cells. Let’s do that for the RECO disk group.

We start with the RECO grid disks, each with a size of 105.6875G:

[root@dm01cel01 ~]# cellcli -e "list griddisk where name like 'RECO.*' attributes name, size"
         RECO_dm01_CD_00_dm01cel01       105.6875G
         RECO_dm01_CD_01_dm01cel01       105.6875G
         RECO_dm01_CD_02_dm01cel01       105.6875G
         RECO_dm01_CD_03_dm01cel01       105.6875G
         RECO_dm01_CD_04_dm01cel01       105.6875G
         RECO_dm01_CD_05_dm01cel01       105.6875G
         RECO_dm01_CD_06_dm01cel01       105.6875G
         RECO_dm01_CD_07_dm01cel01       105.6875G
         RECO_dm01_CD_08_dm01cel01       105.6875G
         RECO_dm01_CD_09_dm01cel01       105.6875G
         RECO_dm01_CD_10_dm01cel01       105.6875G
         RECO_dm01_CD_11_dm01cel01       105.6875G


To free up 50 GB per disk, the new grid disk size will be 105.6875 GB - 50 GB = 55.6875 GB = 57024 MB.
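The same arithmetic as a one-liner, since ASM and CellCLI both take the new size in MB:

```shell
# New grid disk size after freeing 50 GB per disk:
# (105.6875 GB - 50 GB) * 1024 MB/GB
awk 'BEGIN { printf "%d\n", (105.6875 - 50) * 1024 }'
```

This prints 57024, the value used for both the ASM resize and the CellCLI `size=` attribute below.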

2. Reduce size of RECO disks in ASM

dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 18 04:16:57 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup RECO_dm01 resize all size 57024M rebalance power 32;

Diskgroup altered.


The command triggers a rebalance operation for the RECO disk group.

3. Monitor the rebalance with the following command:
 

SQL> set lines 200
SQL> set pages 200
SQL> select * from gv$asm_operation;

   INST_ID GROUP_NUMBER OPERA PASS      STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE                                       CON_ID
---------- ------------ ----- --------- ---- ---------- ---------- ---------- ---------- ---------- ----------- -------------------------------------------- ----------
         2            2 REBAL RESYNC    DONE         32                                                                                                               0
         2            2 REBAL RESILVER  DONE         32                                                                                                               0
         2            2 REBAL REBALANCE WAIT         32                                                                                                               0
         2            2 REBAL COMPACT   WAIT         32                                                                                                               0
         6            2 REBAL RESYNC    DONE         32                                                                                                               0
         6            2 REBAL RESILVER  DONE         32                                                                                                               0
         6            2 REBAL REBALANCE WAIT         32                                                                                                               0
         6            2 REBAL COMPACT   WAIT         32                                                                                                               0
         4            2 REBAL RESYNC    DONE         32                                                                                                               0
         4            2 REBAL RESILVER  DONE         32                                                                                                               0
         4            2 REBAL REBALANCE WAIT         32                                                                                                               0
         4            2 REBAL COMPACT   WAIT         32                                                                                                               0
         3            2 REBAL RESYNC    DONE         32                                                                                                               0
         3            2 REBAL RESILVER  DONE         32                                                                                                               0
         3            2 REBAL REBALANCE WAIT         32                                                                                                               0
         3            2 REBAL COMPACT   WAIT         32                                                                                                               0
         8            2 REBAL RESYNC    DONE         32                                                                                                               0
         8            2 REBAL RESILVER  DONE         32                                                                                                               0
         8            2 REBAL REBALANCE WAIT         32                                                                                                               0
         8            2 REBAL COMPACT   WAIT         32                                                                                                               0
         5            2 REBAL RESYNC    DONE         32                                                                                                               0
         5            2 REBAL RESILVER  DONE         32                                                                                                               0
         5            2 REBAL REBALANCE WAIT         32                                                                                                               0
         5            2 REBAL COMPACT   WAIT         32                                                                                                               0
         7            2 REBAL RESYNC    DONE         32                                                                                                               0
         7            2 REBAL RESILVER  DONE         32                                                                                                               0
         7            2 REBAL REBALANCE WAIT         32                                                                                                               0
         7            2 REBAL COMPACT   WAIT         32                                                                                                               0
         1            2 REBAL RESYNC    DONE         32         32          0          0          0           0                                                       0
         1            2 REBAL RESILVER  DONE         32         32          0          0          0           0                                                       0
         1            2 REBAL REBALANCE EST          32         32          0          0          0           0                                                       0
         1            2 REBAL COMPACT   WAIT         32         32          0          0          0           0                                                       0

32 rows selected.

SQL> select * from gv$asm_operation;

no rows selected


Once the query returns "no rows selected", the rebalance has completed and all disks in the RECO disk group should show the new size.

SQL> select name, total_mb from v$asm_disk_stat where name like 'RECO%';

NAME                             TOTAL_MB
------------------------------ ----------
RECO_dm01_CD_02_dm01CEL01           57024
RECO_dm01_CD_05_dm01CEL01           57024
RECO_dm01_CD_06_dm01CEL01           57024
RECO_dm01_CD_08_dm01CEL01           57024
RECO_dm01_CD_04_dm01CEL01           57024
RECO_dm01_CD_00_dm01CEL01           57024
RECO_dm01_CD_03_dm01CEL01           57024
RECO_dm01_CD_09_dm01CEL01           57024
RECO_dm01_CD_07_dm01CEL01           57024
RECO_dm01_CD_11_dm01CEL01           57024
RECO_dm01_CD_10_dm01CEL01           57024
RECO_dm01_CD_01_dm01CEL01           57024
RECO_dm01_CD_05_dm01CEL02           57024
RECO_dm01_CD_07_dm01CEL02           57024
RECO_dm01_CD_01_dm01CEL02           57024
RECO_dm01_CD_04_dm01CEL02           57024
RECO_dm01_CD_10_dm01CEL02           57024
RECO_dm01_CD_03_dm01CEL02           57024
RECO_dm01_CD_00_dm01CEL02           57024
RECO_dm01_CD_08_dm01CEL02           57024
RECO_dm01_CD_06_dm01CEL02           57024
RECO_dm01_CD_02_dm01CEL02           57024
RECO_dm01_CD_11_dm01CEL02           57024
RECO_dm01_CD_09_dm01CEL02           57024

RECO_dm01_CD_10_dm01CEL14           57024
RECO_dm01_CD_02_dm01CEL14           57024
RECO_dm01_CD_05_dm01CEL14           57024
RECO_dm01_CD_03_dm01CEL14           57024
RECO_dm01_CD_00_dm01CEL14           57024
RECO_dm01_CD_01_dm01CEL14           57024
RECO_dm01_CD_04_dm01CEL14           57024
RECO_dm01_CD_09_dm01CEL14           57024
RECO_dm01_CD_11_dm01CEL14           57024
RECO_dm01_CD_07_dm01CEL14           57024
RECO_dm01_CD_06_dm01CEL14           57024
RECO_dm01_CD_08_dm01CEL14           57024

168 rows selected.


4. Reduce size of RECO disks in storage cells

[root@dm01cel01 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:12:33 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,004

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL01, RECO_dm01_CD_01_dm01CEL01, RECO_dm01_CD_02_dm01CEL01, RECO_dm01_CD_03_dm01CEL01, RECO_dm01_CD_04_dm01CEL01, RECO_dm01_CD_05_dm01CEL01, RECO_dm01_CD_06_dm01CEL01, RECO_dm01_CD_07_dm01CEL01, RECO_dm01_CD_08_dm01CEL01, RECO_dm01_CD_09_dm01CEL01, RECO_dm01_CD_10_dm01CEL01, RECO_dm01_CD_11_dm01CEL01 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel01 successfully altered
grid disk RECO_dm01_CD_01_dm01cel01 successfully altered
grid disk RECO_dm01_CD_02_dm01cel01 successfully altered
grid disk RECO_dm01_CD_03_dm01cel01 successfully altered
grid disk RECO_dm01_CD_04_dm01cel01 successfully altered
grid disk RECO_dm01_CD_05_dm01cel01 successfully altered
grid disk RECO_dm01_CD_06_dm01cel01 successfully altered
grid disk RECO_dm01_CD_07_dm01cel01 successfully altered
grid disk RECO_dm01_CD_08_dm01cel01 successfully altered
grid disk RECO_dm01_CD_09_dm01cel01 successfully altered
grid disk RECO_dm01_CD_10_dm01cel01 successfully altered
grid disk RECO_dm01_CD_11_dm01cel01 successfully altered
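Typing twelve grid disk names per cell invites typos. The comma-separated list can be generated instead; a sketch assuming the naming pattern shown above (the CellCLI command is only printed here, not executed):

```shell
# Build the comma-separated RECO grid disk list for one cell and print
# the CellCLI command that would resize them. Cell name is illustrative.
cell=dm01cel01
disks=$(seq -f "RECO_dm01_CD_%02g_${cell}" 0 11 | paste -sd, -)
echo "cellcli -e \"alter griddisk ${disks} size=57024M\""
```

Changing `cell` to each storage cell name in turn produces the command to run there.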

[root@dm01cel01 ~]# ssh dm01cel02
Last login: Sun Feb 28 10:22:27 2016 from dm01cel01
[root@dm01cel02 ~]# cellcli
CellCLI: Release 12.1.2.1.1 – Production on Wed Jan 18 05:22:50 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 6,999

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL02, RECO_dm01_CD_01_dm01CEL02, RECO_dm01_CD_02_dm01CEL02, RECO_dm01_CD_03_dm01CEL02, RECO_dm01_CD_04_dm01CEL02, RECO_dm01_CD_05_dm01CEL02, RECO_dm01_CD_06_dm01CEL02, RECO_dm01_CD_07_dm01CEL02, RECO_dm01_CD_08_dm01CEL02, RECO_dm01_CD_09_dm01CEL02, RECO_dm01_CD_10_dm01CEL02, RECO_dm01_CD_11_dm01CEL02 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel02 successfully altered
grid disk RECO_dm01_CD_01_dm01cel02 successfully altered
grid disk RECO_dm01_CD_02_dm01cel02 successfully altered
grid disk RECO_dm01_CD_03_dm01cel02 successfully altered
grid disk RECO_dm01_CD_04_dm01cel02 successfully altered
grid disk RECO_dm01_CD_05_dm01cel02 successfully altered
grid disk RECO_dm01_CD_06_dm01cel02 successfully altered
grid disk RECO_dm01_CD_07_dm01cel02 successfully altered
grid disk RECO_dm01_CD_08_dm01cel02 successfully altered
grid disk RECO_dm01_CD_09_dm01cel02 successfully altered
grid disk RECO_dm01_CD_10_dm01cel02 successfully altered
grid disk RECO_dm01_CD_11_dm01cel02 successfully altered

[root@dm01cel01 ~]# ssh dm01cel03
Last login: Mon Mar 28 13:24:31 2016 from dm01db01
[root@dm01cel03 ~]# cellcli
CellCLI: Release 12.1.2.1.1 - Production on Wed Jan 18 05:23:40 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,599

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL03, RECO_dm01_CD_01_dm01CEL03, RECO_dm01_CD_02_dm01CEL03, RECO_dm01_CD_03_dm01CEL03, RECO_dm01_CD_04_dm01CEL03, RECO_dm01_CD_05_dm01CEL03, RECO_dm01_CD_06_dm01CEL03, RECO_dm01_CD_07_dm01CEL03, RECO_dm01_CD_08_dm01CEL03, RECO_dm01_CD_09_dm01CEL03, RECO_dm01_CD_10_dm01CEL03, RECO_dm01_CD_11_dm01CEL03 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel03 successfully altered
grid disk RECO_dm01_CD_01_dm01cel03 successfully altered
grid disk RECO_dm01_CD_02_dm01cel03 successfully altered
grid disk RECO_dm01_CD_03_dm01cel03 successfully altered
grid disk RECO_dm01_CD_04_dm01cel03 successfully altered
grid disk RECO_dm01_CD_05_dm01cel03 successfully altered
grid disk RECO_dm01_CD_06_dm01cel03 successfully altered
grid disk RECO_dm01_CD_07_dm01cel03 successfully altered
grid disk RECO_dm01_CD_08_dm01cel03 successfully altered
grid disk RECO_dm01_CD_09_dm01cel03 successfully altered
grid disk RECO_dm01_CD_10_dm01cel03 successfully altered
grid disk RECO_dm01_CD_11_dm01cel03 successfully altered

[root@dm01cel03 ~]# ssh dm01cel04
Last login: Sun Feb 28 10:23:17 2016 from dm01cel02
[root@dm01cel04 ~]# cellcli
CellCLI: Release 12.1.2.1.1 - Production on Wed Jan 18 05:24:27 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,140

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL04, RECO_dm01_CD_01_dm01CEL04, RECO_dm01_CD_02_dm01CEL04, RECO_dm01_CD_03_dm01CEL04, RECO_dm01_CD_04_dm01CEL04, RECO_dm01_CD_05_dm01CEL04, RECO_dm01_CD_06_dm01CEL04, RECO_dm01_CD_07_dm01CEL04, RECO_dm01_CD_08_dm01CEL04, RECO_dm01_CD_09_dm01CEL04, RECO_dm01_CD_10_dm01CEL04, RECO_dm01_CD_11_dm01CEL04 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel04 successfully altered
grid disk RECO_dm01_CD_01_dm01cel04 successfully altered
grid disk RECO_dm01_CD_02_dm01cel04 successfully altered
grid disk RECO_dm01_CD_03_dm01cel04 successfully altered
grid disk RECO_dm01_CD_04_dm01cel04 successfully altered
grid disk RECO_dm01_CD_05_dm01cel04 successfully altered
grid disk RECO_dm01_CD_06_dm01cel04 successfully altered
grid disk RECO_dm01_CD_07_dm01cel04 successfully altered
grid disk RECO_dm01_CD_08_dm01cel04 successfully altered
grid disk RECO_dm01_CD_09_dm01cel04 successfully altered
grid disk RECO_dm01_CD_10_dm01cel04 successfully altered
grid disk RECO_dm01_CD_11_dm01cel04 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL05, RECO_dm01_CD_01_dm01CEL05, RECO_dm01_CD_02_dm01CEL05, RECO_dm01_CD_03_dm01CEL05, RECO_dm01_CD_04_dm01CEL05, RECO_dm01_CD_05_dm01CEL05, RECO_dm01_CD_06_dm01CEL05, RECO_dm01_CD_07_dm01CEL05, RECO_dm01_CD_08_dm01CEL05, RECO_dm01_CD_09_dm01CEL05, RECO_dm01_CD_10_dm01CEL05, RECO_dm01_CD_11_dm01CEL05 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel05 successfully altered
grid disk RECO_dm01_CD_01_dm01cel05 successfully altered
grid disk RECO_dm01_CD_02_dm01cel05 successfully altered
grid disk RECO_dm01_CD_03_dm01cel05 successfully altered
grid disk RECO_dm01_CD_04_dm01cel05 successfully altered
grid disk RECO_dm01_CD_05_dm01cel05 successfully altered
grid disk RECO_dm01_CD_06_dm01cel05 successfully altered
grid disk RECO_dm01_CD_07_dm01cel05 successfully altered
grid disk RECO_dm01_CD_08_dm01cel05 successfully altered
grid disk RECO_dm01_CD_09_dm01cel05 successfully altered
grid disk RECO_dm01_CD_10_dm01cel05 successfully altered
grid disk RECO_dm01_CD_11_dm01cel05 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL06, RECO_dm01_CD_01_dm01CEL06, RECO_dm01_CD_02_dm01CEL06, RECO_dm01_CD_03_dm01CEL06, RECO_dm01_CD_04_dm01CEL06, RECO_dm01_CD_05_dm01CEL06, RECO_dm01_CD_06_dm01CEL06, RECO_dm01_CD_07_dm01CEL06, RECO_dm01_CD_08_dm01CEL06, RECO_dm01_CD_09_dm01CEL06, RECO_dm01_CD_10_dm01CEL06, RECO_dm01_CD_11_dm01CEL06 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel06 successfully altered
grid disk RECO_dm01_CD_01_dm01cel06 successfully altered
grid disk RECO_dm01_CD_02_dm01cel06 successfully altered
grid disk RECO_dm01_CD_03_dm01cel06 successfully altered
grid disk RECO_dm01_CD_04_dm01cel06 successfully altered
grid disk RECO_dm01_CD_05_dm01cel06 successfully altered
grid disk RECO_dm01_CD_06_dm01cel06 successfully altered
grid disk RECO_dm01_CD_07_dm01cel06 successfully altered
grid disk RECO_dm01_CD_08_dm01cel06 successfully altered
grid disk RECO_dm01_CD_09_dm01cel06 successfully altered
grid disk RECO_dm01_CD_10_dm01cel06 successfully altered
grid disk RECO_dm01_CD_11_dm01cel06 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL07, RECO_dm01_CD_01_dm01CEL07, RECO_dm01_CD_02_dm01CEL07, RECO_dm01_CD_03_dm01CEL07, RECO_dm01_CD_04_dm01CEL07, RECO_dm01_CD_05_dm01CEL07, RECO_dm01_CD_06_dm01CEL07, RECO_dm01_CD_07_dm01CEL07, RECO_dm01_CD_08_dm01CEL07, RECO_dm01_CD_09_dm01CEL07, RECO_dm01_CD_10_dm01CEL07, RECO_dm01_CD_11_dm01CEL07 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel07 successfully altered
grid disk RECO_dm01_CD_01_dm01cel07 successfully altered
grid disk RECO_dm01_CD_02_dm01cel07 successfully altered
grid disk RECO_dm01_CD_03_dm01cel07 successfully altered
grid disk RECO_dm01_CD_04_dm01cel07 successfully altered
grid disk RECO_dm01_CD_05_dm01cel07 successfully altered
grid disk RECO_dm01_CD_06_dm01cel07 successfully altered
grid disk RECO_dm01_CD_07_dm01cel07 successfully altered
grid disk RECO_dm01_CD_08_dm01cel07 successfully altered
grid disk RECO_dm01_CD_09_dm01cel07 successfully altered
grid disk RECO_dm01_CD_10_dm01cel07 successfully altered
grid disk RECO_dm01_CD_11_dm01cel07 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL08, RECO_dm01_CD_01_dm01CEL08, RECO_dm01_CD_02_dm01CEL08, RECO_dm01_CD_03_dm01CEL08, RECO_dm01_CD_04_dm01CEL08, RECO_dm01_CD_05_dm01CEL08, RECO_dm01_CD_06_dm01CEL08, RECO_dm01_CD_07_dm01CEL08, RECO_dm01_CD_08_dm01CEL08, RECO_dm01_CD_09_dm01CEL08, RECO_dm01_CD_10_dm01CEL08, RECO_dm01_CD_11_dm01CEL08 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel08 successfully altered
grid disk RECO_dm01_CD_01_dm01cel08 successfully altered
grid disk RECO_dm01_CD_02_dm01cel08 successfully altered
grid disk RECO_dm01_CD_03_dm01cel08 successfully altered
grid disk RECO_dm01_CD_04_dm01cel08 successfully altered
grid disk RECO_dm01_CD_05_dm01cel08 successfully altered
grid disk RECO_dm01_CD_06_dm01cel08 successfully altered
grid disk RECO_dm01_CD_07_dm01cel08 successfully altered
grid disk RECO_dm01_CD_08_dm01cel08 successfully altered
grid disk RECO_dm01_CD_09_dm01cel08 successfully altered
grid disk RECO_dm01_CD_10_dm01cel08 successfully altered
grid disk RECO_dm01_CD_11_dm01cel08 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL09, RECO_dm01_CD_01_dm01CEL09, RECO_dm01_CD_02_dm01CEL09, RECO_dm01_CD_03_dm01CEL09, RECO_dm01_CD_04_dm01CEL09, RECO_dm01_CD_05_dm01CEL09, RECO_dm01_CD_06_dm01CEL09, RECO_dm01_CD_07_dm01CEL09, RECO_dm01_CD_08_dm01CEL09, RECO_dm01_CD_09_dm01CEL09, RECO_dm01_CD_10_dm01CEL09, RECO_dm01_CD_11_dm01CEL09 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel09 successfully altered
grid disk RECO_dm01_CD_01_dm01cel09 successfully altered
grid disk RECO_dm01_CD_02_dm01cel09 successfully altered
grid disk RECO_dm01_CD_03_dm01cel09 successfully altered
grid disk RECO_dm01_CD_04_dm01cel09 successfully altered
grid disk RECO_dm01_CD_05_dm01cel09 successfully altered
grid disk RECO_dm01_CD_06_dm01cel09 successfully altered
grid disk RECO_dm01_CD_07_dm01cel09 successfully altered
grid disk RECO_dm01_CD_08_dm01cel09 successfully altered
grid disk RECO_dm01_CD_09_dm01cel09 successfully altered
grid disk RECO_dm01_CD_10_dm01cel09 successfully altered
grid disk RECO_dm01_CD_11_dm01cel09 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL10, RECO_dm01_CD_01_dm01CEL10, RECO_dm01_CD_02_dm01CEL10, RECO_dm01_CD_03_dm01CEL10, RECO_dm01_CD_04_dm01CEL10, RECO_dm01_CD_05_dm01CEL10, RECO_dm01_CD_06_dm01CEL10, RECO_dm01_CD_07_dm01CEL10, RECO_dm01_CD_08_dm01CEL10, RECO_dm01_CD_09_dm01CEL10, RECO_dm01_CD_10_dm01CEL10, RECO_dm01_CD_11_dm01CEL10 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel10 successfully altered
grid disk RECO_dm01_CD_01_dm01cel10 successfully altered
grid disk RECO_dm01_CD_02_dm01cel10 successfully altered
grid disk RECO_dm01_CD_03_dm01cel10 successfully altered
grid disk RECO_dm01_CD_04_dm01cel10 successfully altered
grid disk RECO_dm01_CD_05_dm01cel10 successfully altered
grid disk RECO_dm01_CD_06_dm01cel10 successfully altered
grid disk RECO_dm01_CD_07_dm01cel10 successfully altered
grid disk RECO_dm01_CD_08_dm01cel10 successfully altered
grid disk RECO_dm01_CD_09_dm01cel10 successfully altered
grid disk RECO_dm01_CD_10_dm01cel10 successfully altered
grid disk RECO_dm01_CD_11_dm01cel10 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL11, RECO_dm01_CD_01_dm01CEL11, RECO_dm01_CD_02_dm01CEL11, RECO_dm01_CD_03_dm01CEL11, RECO_dm01_CD_04_dm01CEL11, RECO_dm01_CD_05_dm01CEL11, RECO_dm01_CD_06_dm01CEL11, RECO_dm01_CD_07_dm01CEL11, RECO_dm01_CD_08_dm01CEL11, RECO_dm01_CD_09_dm01CEL11, RECO_dm01_CD_10_dm01CEL11, RECO_dm01_CD_11_dm01CEL11 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel11 successfully altered
grid disk RECO_dm01_CD_01_dm01cel11 successfully altered
grid disk RECO_dm01_CD_02_dm01cel11 successfully altered
grid disk RECO_dm01_CD_03_dm01cel11 successfully altered
grid disk RECO_dm01_CD_04_dm01cel11 successfully altered
grid disk RECO_dm01_CD_05_dm01cel11 successfully altered
grid disk RECO_dm01_CD_06_dm01cel11 successfully altered
grid disk RECO_dm01_CD_07_dm01cel11 successfully altered
grid disk RECO_dm01_CD_08_dm01cel11 successfully altered
grid disk RECO_dm01_CD_09_dm01cel11 successfully altered
grid disk RECO_dm01_CD_10_dm01cel11 successfully altered
grid disk RECO_dm01_CD_11_dm01cel11 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL12, RECO_dm01_CD_01_dm01CEL12, RECO_dm01_CD_02_dm01CEL12, RECO_dm01_CD_03_dm01CEL12, RECO_dm01_CD_04_dm01CEL12, RECO_dm01_CD_05_dm01CEL12, RECO_dm01_CD_06_dm01CEL12, RECO_dm01_CD_07_dm01CEL12, RECO_dm01_CD_08_dm01CEL12, RECO_dm01_CD_09_dm01CEL12, RECO_dm01_CD_10_dm01CEL12, RECO_dm01_CD_11_dm01CEL12 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel12 successfully altered
grid disk RECO_dm01_CD_01_dm01cel12 successfully altered
grid disk RECO_dm01_CD_02_dm01cel12 successfully altered
grid disk RECO_dm01_CD_03_dm01cel12 successfully altered
grid disk RECO_dm01_CD_04_dm01cel12 successfully altered
grid disk RECO_dm01_CD_05_dm01cel12 successfully altered
grid disk RECO_dm01_CD_06_dm01cel12 successfully altered
grid disk RECO_dm01_CD_07_dm01cel12 successfully altered
grid disk RECO_dm01_CD_08_dm01cel12 successfully altered
grid disk RECO_dm01_CD_09_dm01cel12 successfully altered
grid disk RECO_dm01_CD_10_dm01cel12 successfully altered
grid disk RECO_dm01_CD_11_dm01cel12 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL13, RECO_dm01_CD_01_dm01CEL13, RECO_dm01_CD_02_dm01CEL13, RECO_dm01_CD_03_dm01CEL13, RECO_dm01_CD_04_dm01CEL13, RECO_dm01_CD_05_dm01CEL13, RECO_dm01_CD_06_dm01CEL13, RECO_dm01_CD_07_dm01CEL13, RECO_dm01_CD_08_dm01CEL13, RECO_dm01_CD_09_dm01CEL13, RECO_dm01_CD_10_dm01CEL13, RECO_dm01_CD_11_dm01CEL13 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel13 successfully altered
grid disk RECO_dm01_CD_01_dm01cel13 successfully altered
grid disk RECO_dm01_CD_02_dm01cel13 successfully altered
grid disk RECO_dm01_CD_03_dm01cel13 successfully altered
grid disk RECO_dm01_CD_04_dm01cel13 successfully altered
grid disk RECO_dm01_CD_05_dm01cel13 successfully altered
grid disk RECO_dm01_CD_06_dm01cel13 successfully altered
grid disk RECO_dm01_CD_07_dm01cel13 successfully altered
grid disk RECO_dm01_CD_08_dm01cel13 successfully altered
grid disk RECO_dm01_CD_09_dm01cel13 successfully altered
grid disk RECO_dm01_CD_10_dm01cel13 successfully altered
grid disk RECO_dm01_CD_11_dm01cel13 successfully altered

CellCLI> alter griddisk RECO_dm01_CD_00_dm01CEL14, RECO_dm01_CD_01_dm01CEL14, RECO_dm01_CD_02_dm01CEL14, RECO_dm01_CD_03_dm01CEL14, RECO_dm01_CD_04_dm01CEL14, RECO_dm01_CD_05_dm01CEL14, RECO_dm01_CD_06_dm01CEL14, RECO_dm01_CD_07_dm01CEL14, RECO_dm01_CD_08_dm01CEL14, RECO_dm01_CD_09_dm01CEL14, RECO_dm01_CD_10_dm01CEL14, RECO_dm01_CD_11_dm01CEL14 size=57024M;
grid disk RECO_dm01_CD_00_dm01cel14 successfully altered
grid disk RECO_dm01_CD_01_dm01cel14 successfully altered
grid disk RECO_dm01_CD_02_dm01cel14 successfully altered
grid disk RECO_dm01_CD_03_dm01cel14 successfully altered
grid disk RECO_dm01_CD_04_dm01cel14 successfully altered
grid disk RECO_dm01_CD_05_dm01cel14 successfully altered
grid disk RECO_dm01_CD_06_dm01cel14 successfully altered
grid disk RECO_dm01_CD_07_dm01cel14 successfully altered
grid disk RECO_dm01_CD_08_dm01cel14 successfully altered
grid disk RECO_dm01_CD_09_dm01cel14 successfully altered
grid disk RECO_dm01_CD_10_dm01cel14 successfully altered
grid disk RECO_dm01_CD_11_dm01cel14 successfully altered
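Running the same resize by hand on 14 cells is tedious. On Exadata, the dcli utility can broadcast a single CellCLI command to every cell. The sketch below only builds and prints the command so it can be reviewed first; the /root/cell_group file (a hypothetical path listing all cell hostnames) and the use of `alter griddisk all prefix=RECO` are assumptions to adapt to your environment.

```shell
# Sketch: broadcast the RECO resize to all cells with dcli instead of
# ssh-ing to each one. /root/cell_group (hypothetical) lists cell hostnames.
# The command is printed, not executed, so it can be reviewed before running.
CMD='dcli -g /root/cell_group -l root "cellcli -e alter griddisk all prefix=RECO, size=57024M"'
echo "$CMD"
```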


The cell disks now have free space:

[root@dm01cel01 ~]# cellcli -e "list celldisk where name like 'CD.*' attributes name, size, freespace"
         CD_00_dm01cel01         528.734375G     50G
         CD_01_dm01cel01         528.734375G     50G
         CD_02_dm01cel01         557.859375G     50G
         CD_03_dm01cel01         557.859375G     50G
         CD_04_dm01cel01         557.859375G     50G
         CD_05_dm01cel01         557.859375G     50G
         CD_06_dm01cel01         557.859375G     50G
         CD_07_dm01cel01         557.859375G     50G
         CD_08_dm01cel01         557.859375G     50G
         CD_09_dm01cel01         557.859375G     50G
         CD_10_dm01cel01         557.859375G     50G
         CD_11_dm01cel01         557.859375G     50G


5. Increase size of DATA disks in storage cells

We can now increase the size of the DATA grid disks, and then resize all disks of disk group DATA in ASM.

The current DATA grid disk size is 423 GB:

[root@dm01cel01 ~]# cellcli -e "list griddisk where name like 'DATA.*' attributes name, size"
         DATA_dm01_CD_00_dm01cel01       423G
         DATA_dm01_CD_01_dm01cel01       423G
         DATA_dm01_CD_02_dm01cel01       423G
         DATA_dm01_CD_03_dm01cel01       423G
         DATA_dm01_CD_04_dm01cel01       423G
         DATA_dm01_CD_05_dm01cel01       423G
         DATA_dm01_CD_06_dm01cel01       423G
         DATA_dm01_CD_07_dm01cel01       423G
         DATA_dm01_CD_08_dm01cel01       423G
         DATA_dm01_CD_09_dm01cel01       423G
         DATA_dm01_CD_10_dm01cel01       423G
         DATA_dm01_CD_11_dm01cel01       423G


The new grid disk size will be 423 GB + 50 GB = 473 GB (484352 MB).
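The resize commands below take the size in MB, so the 473 GB target has to be converted. A quick sanity check of the arithmetic:

```shell
# CellCLI accepts the size in MB; convert the 473 GB target to the MB
# value used in the alter griddisk commands that follow.
NEW_GB=473
NEW_MB=$(( NEW_GB * 1024 ))
echo "${NEW_MB}M"   # prints 484352M
```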

Resize the DATA grid disks on all storage cells. On storage cell 1, the command would be:

[root@dm01cel01 ~]# cellcli
CellCLI: Release 12.1.2.1.1 - Production on Wed Jan 18 05:39:49 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,004

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL01, DATA_dm01_CD_01_dm01CEL01, DATA_dm01_CD_02_dm01CEL01, DATA_dm01_CD_03_dm01CEL01, DATA_dm01_CD_04_dm01CEL01, DATA_dm01_CD_05_dm01CEL01, DATA_dm01_CD_06_dm01CEL01, DATA_dm01_CD_07_dm01CEL01, DATA_dm01_CD_08_dm01CEL01, DATA_dm01_CD_09_dm01CEL01, DATA_dm01_CD_10_dm01CEL01, DATA_dm01_CD_11_dm01CEL01 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel01 successfully altered
grid disk DATA_dm01_CD_01_dm01cel01 successfully altered
grid disk DATA_dm01_CD_02_dm01cel01 successfully altered
grid disk DATA_dm01_CD_03_dm01cel01 successfully altered
grid disk DATA_dm01_CD_04_dm01cel01 successfully altered
grid disk DATA_dm01_CD_05_dm01cel01 successfully altered
grid disk DATA_dm01_CD_06_dm01cel01 successfully altered
grid disk DATA_dm01_CD_07_dm01cel01 successfully altered
grid disk DATA_dm01_CD_08_dm01cel01 successfully altered
grid disk DATA_dm01_CD_09_dm01cel01 successfully altered
grid disk DATA_dm01_CD_10_dm01cel01 successfully altered
grid disk DATA_dm01_CD_11_dm01cel01 successfully altered

[root@dm01cel01 ~]# ssh dm01cel02
Last login: Wed Jan 18 05:22:46 2017 from dm01cel01
[root@dm01cel02 ~]# cellcli
CellCLI: Release 12.1.2.1.1 - Production on Wed Jan 18 05:41:01 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 6,999

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL02, DATA_dm01_CD_01_dm01CEL02, DATA_dm01_CD_02_dm01CEL02, DATA_dm01_CD_03_dm01CEL02, DATA_dm01_CD_04_dm01CEL02, DATA_dm01_CD_05_dm01CEL02, DATA_dm01_CD_06_dm01CEL02, DATA_dm01_CD_07_dm01CEL02, DATA_dm01_CD_08_dm01CEL02, DATA_dm01_CD_09_dm01CEL02, DATA_dm01_CD_10_dm01CEL02, DATA_dm01_CD_11_dm01CEL02 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel02 successfully altered
grid disk DATA_dm01_CD_01_dm01cel02 successfully altered
grid disk DATA_dm01_CD_02_dm01cel02 successfully altered
grid disk DATA_dm01_CD_03_dm01cel02 successfully altered
grid disk DATA_dm01_CD_04_dm01cel02 successfully altered
grid disk DATA_dm01_CD_05_dm01cel02 successfully altered
grid disk DATA_dm01_CD_06_dm01cel02 successfully altered
grid disk DATA_dm01_CD_07_dm01cel02 successfully altered
grid disk DATA_dm01_CD_08_dm01cel02 successfully altered
grid disk DATA_dm01_CD_09_dm01cel02 successfully altered
grid disk DATA_dm01_CD_10_dm01cel02 successfully altered
grid disk DATA_dm01_CD_11_dm01cel02 successfully altered

[root@dm01cel02 ~]# ssh dm01cel03
Last login: Wed Jan 18 05:23:38 2017 from dm01cel01
[root@dm01cel03 ~]# cellcli
CellCLI: Release 12.1.2.1.1 - Production on Wed Jan 18 05:41:49 CST 2017

Copyright (c) 2007, 2013, Oracle.  All rights reserved.
Cell Efficiency Ratio: 7,599

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL03, DATA_dm01_CD_01_dm01CEL03, DATA_dm01_CD_02_dm01CEL03, DATA_dm01_CD_03_dm01CEL03, DATA_dm01_CD_04_dm01CEL03, DATA_dm01_CD_05_dm01CEL03, DATA_dm01_CD_06_dm01CEL03, DATA_dm01_CD_07_dm01CEL03, DATA_dm01_CD_08_dm01CEL03, DATA_dm01_CD_09_dm01CEL03, DATA_dm01_CD_10_dm01CEL03, DATA_dm01_CD_11_dm01CEL03 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel03 successfully altered
grid disk DATA_dm01_CD_01_dm01cel03 successfully altered
grid disk DATA_dm01_CD_02_dm01cel03 successfully altered
grid disk DATA_dm01_CD_03_dm01cel03 successfully altered
grid disk DATA_dm01_CD_04_dm01cel03 successfully altered
grid disk DATA_dm01_CD_05_dm01cel03 successfully altered
grid disk DATA_dm01_CD_06_dm01cel03 successfully altered
grid disk DATA_dm01_CD_07_dm01cel03 successfully altered
grid disk DATA_dm01_CD_08_dm01cel03 successfully altered
grid disk DATA_dm01_CD_09_dm01cel03 successfully altered
grid disk DATA_dm01_CD_10_dm01cel03 successfully altered
grid disk DATA_dm01_CD_11_dm01cel03 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL04, DATA_dm01_CD_01_dm01CEL04, DATA_dm01_CD_02_dm01CEL04, DATA_dm01_CD_03_dm01CEL04, DATA_dm01_CD_04_dm01CEL04, DATA_dm01_CD_05_dm01CEL04, DATA_dm01_CD_06_dm01CEL04, DATA_dm01_CD_07_dm01CEL04, DATA_dm01_CD_08_dm01CEL04, DATA_dm01_CD_09_dm01CEL04, DATA_dm01_CD_10_dm01CEL04, DATA_dm01_CD_11_dm01CEL04 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel04 successfully altered
grid disk DATA_dm01_CD_01_dm01cel04 successfully altered
grid disk DATA_dm01_CD_02_dm01cel04 successfully altered
grid disk DATA_dm01_CD_03_dm01cel04 successfully altered
grid disk DATA_dm01_CD_04_dm01cel04 successfully altered
grid disk DATA_dm01_CD_05_dm01cel04 successfully altered
grid disk DATA_dm01_CD_06_dm01cel04 successfully altered
grid disk DATA_dm01_CD_07_dm01cel04 successfully altered
grid disk DATA_dm01_CD_08_dm01cel04 successfully altered
grid disk DATA_dm01_CD_09_dm01cel04 successfully altered
grid disk DATA_dm01_CD_10_dm01cel04 successfully altered
grid disk DATA_dm01_CD_11_dm01cel04 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL05, DATA_dm01_CD_01_dm01CEL05, DATA_dm01_CD_02_dm01CEL05, DATA_dm01_CD_03_dm01CEL05, DATA_dm01_CD_04_dm01CEL05, DATA_dm01_CD_05_dm01CEL05, DATA_dm01_CD_06_dm01CEL05, DATA_dm01_CD_07_dm01CEL05, DATA_dm01_CD_08_dm01CEL05, DATA_dm01_CD_09_dm01CEL05, DATA_dm01_CD_10_dm01CEL05, DATA_dm01_CD_11_dm01CEL05 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel05 successfully altered
grid disk DATA_dm01_CD_01_dm01cel05 successfully altered
grid disk DATA_dm01_CD_02_dm01cel05 successfully altered
grid disk DATA_dm01_CD_03_dm01cel05 successfully altered
grid disk DATA_dm01_CD_04_dm01cel05 successfully altered
grid disk DATA_dm01_CD_05_dm01cel05 successfully altered
grid disk DATA_dm01_CD_06_dm01cel05 successfully altered
grid disk DATA_dm01_CD_07_dm01cel05 successfully altered
grid disk DATA_dm01_CD_08_dm01cel05 successfully altered
grid disk DATA_dm01_CD_09_dm01cel05 successfully altered
grid disk DATA_dm01_CD_10_dm01cel05 successfully altered
grid disk DATA_dm01_CD_11_dm01cel05 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL06, DATA_dm01_CD_01_dm01CEL06, DATA_dm01_CD_02_dm01CEL06, DATA_dm01_CD_03_dm01CEL06, DATA_dm01_CD_04_dm01CEL06, DATA_dm01_CD_05_dm01CEL06, DATA_dm01_CD_06_dm01CEL06, DATA_dm01_CD_07_dm01CEL06, DATA_dm01_CD_08_dm01CEL06, DATA_dm01_CD_09_dm01CEL06, DATA_dm01_CD_10_dm01CEL06, DATA_dm01_CD_11_dm01CEL06 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel06 successfully altered
grid disk DATA_dm01_CD_01_dm01cel06 successfully altered
grid disk DATA_dm01_CD_02_dm01cel06 successfully altered
grid disk DATA_dm01_CD_03_dm01cel06 successfully altered
grid disk DATA_dm01_CD_04_dm01cel06 successfully altered
grid disk DATA_dm01_CD_05_dm01cel06 successfully altered
grid disk DATA_dm01_CD_06_dm01cel06 successfully altered
grid disk DATA_dm01_CD_07_dm01cel06 successfully altered
grid disk DATA_dm01_CD_08_dm01cel06 successfully altered
grid disk DATA_dm01_CD_09_dm01cel06 successfully altered
grid disk DATA_dm01_CD_10_dm01cel06 successfully altered
grid disk DATA_dm01_CD_11_dm01cel06 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL07, DATA_dm01_CD_01_dm01CEL07, DATA_dm01_CD_02_dm01CEL07, DATA_dm01_CD_03_dm01CEL07, DATA_dm01_CD_04_dm01CEL07, DATA_dm01_CD_05_dm01CEL07, DATA_dm01_CD_06_dm01CEL07, DATA_dm01_CD_07_dm01CEL07, DATA_dm01_CD_08_dm01CEL07, DATA_dm01_CD_09_dm01CEL07, DATA_dm01_CD_10_dm01CEL07, DATA_dm01_CD_11_dm01CEL07 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel07 successfully altered
grid disk DATA_dm01_CD_01_dm01cel07 successfully altered
grid disk DATA_dm01_CD_02_dm01cel07 successfully altered
grid disk DATA_dm01_CD_03_dm01cel07 successfully altered
grid disk DATA_dm01_CD_04_dm01cel07 successfully altered
grid disk DATA_dm01_CD_05_dm01cel07 successfully altered
grid disk DATA_dm01_CD_06_dm01cel07 successfully altered
grid disk DATA_dm01_CD_07_dm01cel07 successfully altered
grid disk DATA_dm01_CD_08_dm01cel07 successfully altered
grid disk DATA_dm01_CD_09_dm01cel07 successfully altered
grid disk DATA_dm01_CD_10_dm01cel07 successfully altered
grid disk DATA_dm01_CD_11_dm01cel07 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL08, DATA_dm01_CD_01_dm01CEL08, DATA_dm01_CD_02_dm01CEL08, DATA_dm01_CD_03_dm01CEL08, DATA_dm01_CD_04_dm01CEL08, DATA_dm01_CD_05_dm01CEL08, DATA_dm01_CD_06_dm01CEL08, DATA_dm01_CD_07_dm01CEL08, DATA_dm01_CD_08_dm01CEL08, DATA_dm01_CD_09_dm01CEL08, DATA_dm01_CD_10_dm01CEL08, DATA_dm01_CD_11_dm01CEL08 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel08 successfully altered
grid disk DATA_dm01_CD_01_dm01cel08 successfully altered
grid disk DATA_dm01_CD_02_dm01cel08 successfully altered
grid disk DATA_dm01_CD_03_dm01cel08 successfully altered
grid disk DATA_dm01_CD_04_dm01cel08 successfully altered
grid disk DATA_dm01_CD_05_dm01cel08 successfully altered
grid disk DATA_dm01_CD_06_dm01cel08 successfully altered
grid disk DATA_dm01_CD_07_dm01cel08 successfully altered
grid disk DATA_dm01_CD_08_dm01cel08 successfully altered
grid disk DATA_dm01_CD_09_dm01cel08 successfully altered
grid disk DATA_dm01_CD_10_dm01cel08 successfully altered
grid disk DATA_dm01_CD_11_dm01cel08 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL09, DATA_dm01_CD_01_dm01CEL09, DATA_dm01_CD_02_dm01CEL09, DATA_dm01_CD_03_dm01CEL09, DATA_dm01_CD_04_dm01CEL09, DATA_dm01_CD_05_dm01CEL09, DATA_dm01_CD_06_dm01CEL09, DATA_dm01_CD_07_dm01CEL09, DATA_dm01_CD_08_dm01CEL09, DATA_dm01_CD_09_dm01CEL09, DATA_dm01_CD_10_dm01CEL09, DATA_dm01_CD_11_dm01CEL09 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel09 successfully altered
grid disk DATA_dm01_CD_01_dm01cel09 successfully altered
grid disk DATA_dm01_CD_02_dm01cel09 successfully altered
grid disk DATA_dm01_CD_03_dm01cel09 successfully altered
grid disk DATA_dm01_CD_04_dm01cel09 successfully altered
grid disk DATA_dm01_CD_05_dm01cel09 successfully altered
grid disk DATA_dm01_CD_06_dm01cel09 successfully altered
grid disk DATA_dm01_CD_07_dm01cel09 successfully altered
grid disk DATA_dm01_CD_08_dm01cel09 successfully altered
grid disk DATA_dm01_CD_09_dm01cel09 successfully altered
grid disk DATA_dm01_CD_10_dm01cel09 successfully altered
grid disk DATA_dm01_CD_11_dm01cel09 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL10, DATA_dm01_CD_01_dm01CEL10, DATA_dm01_CD_02_dm01CEL10, DATA_dm01_CD_03_dm01CEL10, DATA_dm01_CD_04_dm01CEL10, DATA_dm01_CD_05_dm01CEL10, DATA_dm01_CD_06_dm01CEL10, DATA_dm01_CD_07_dm01CEL10, DATA_dm01_CD_08_dm01CEL10, DATA_dm01_CD_09_dm01CEL10, DATA_dm01_CD_10_dm01CEL10, DATA_dm01_CD_11_dm01CEL10 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel10 successfully altered
grid disk DATA_dm01_CD_01_dm01cel10 successfully altered
grid disk DATA_dm01_CD_02_dm01cel10 successfully altered
grid disk DATA_dm01_CD_03_dm01cel10 successfully altered
grid disk DATA_dm01_CD_04_dm01cel10 successfully altered
grid disk DATA_dm01_CD_05_dm01cel10 successfully altered
grid disk DATA_dm01_CD_06_dm01cel10 successfully altered
grid disk DATA_dm01_CD_07_dm01cel10 successfully altered
grid disk DATA_dm01_CD_08_dm01cel10 successfully altered
grid disk DATA_dm01_CD_09_dm01cel10 successfully altered
grid disk DATA_dm01_CD_10_dm01cel10 successfully altered
grid disk DATA_dm01_CD_11_dm01cel10 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL11, DATA_dm01_CD_01_dm01CEL11, DATA_dm01_CD_02_dm01CEL11, DATA_dm01_CD_03_dm01CEL11, DATA_dm01_CD_04_dm01CEL11, DATA_dm01_CD_05_dm01CEL11, DATA_dm01_CD_06_dm01CEL11, DATA_dm01_CD_07_dm01CEL11, DATA_dm01_CD_08_dm01CEL11, DATA_dm01_CD_09_dm01CEL11, DATA_dm01_CD_10_dm01CEL11, DATA_dm01_CD_11_dm01CEL11 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel11 successfully altered
grid disk DATA_dm01_CD_01_dm01cel11 successfully altered
grid disk DATA_dm01_CD_02_dm01cel11 successfully altered
grid disk DATA_dm01_CD_03_dm01cel11 successfully altered
grid disk DATA_dm01_CD_04_dm01cel11 successfully altered
grid disk DATA_dm01_CD_05_dm01cel11 successfully altered
grid disk DATA_dm01_CD_06_dm01cel11 successfully altered
grid disk DATA_dm01_CD_07_dm01cel11 successfully altered
grid disk DATA_dm01_CD_08_dm01cel11 successfully altered
grid disk DATA_dm01_CD_09_dm01cel11 successfully altered
grid disk DATA_dm01_CD_10_dm01cel11 successfully altered
grid disk DATA_dm01_CD_11_dm01cel11 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL12, DATA_dm01_CD_01_dm01CEL12, DATA_dm01_CD_02_dm01CEL12, DATA_dm01_CD_03_dm01CEL12, DATA_dm01_CD_04_dm01CEL12, DATA_dm01_CD_05_dm01CEL12, DATA_dm01_CD_06_dm01CEL12, DATA_dm01_CD_07_dm01CEL12, DATA_dm01_CD_08_dm01CEL12, DATA_dm01_CD_09_dm01CEL12, DATA_dm01_CD_10_dm01CEL12, DATA_dm01_CD_11_dm01CEL12 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel12 successfully altered
grid disk DATA_dm01_CD_01_dm01cel12 successfully altered
grid disk DATA_dm01_CD_02_dm01cel12 successfully altered
grid disk DATA_dm01_CD_03_dm01cel12 successfully altered
grid disk DATA_dm01_CD_04_dm01cel12 successfully altered
grid disk DATA_dm01_CD_05_dm01cel12 successfully altered
grid disk DATA_dm01_CD_06_dm01cel12 successfully altered
grid disk DATA_dm01_CD_07_dm01cel12 successfully altered
grid disk DATA_dm01_CD_08_dm01cel12 successfully altered
grid disk DATA_dm01_CD_09_dm01cel12 successfully altered
grid disk DATA_dm01_CD_10_dm01cel12 successfully altered
grid disk DATA_dm01_CD_11_dm01cel12 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL13, DATA_dm01_CD_01_dm01CEL13, DATA_dm01_CD_02_dm01CEL13, DATA_dm01_CD_03_dm01CEL13, DATA_dm01_CD_04_dm01CEL13, DATA_dm01_CD_05_dm01CEL13, DATA_dm01_CD_06_dm01CEL13, DATA_dm01_CD_07_dm01CEL13, DATA_dm01_CD_08_dm01CEL13, DATA_dm01_CD_09_dm01CEL13, DATA_dm01_CD_10_dm01CEL13, DATA_dm01_CD_11_dm01CEL13 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel13 successfully altered
grid disk DATA_dm01_CD_01_dm01cel13 successfully altered
grid disk DATA_dm01_CD_02_dm01cel13 successfully altered
grid disk DATA_dm01_CD_03_dm01cel13 successfully altered
grid disk DATA_dm01_CD_04_dm01cel13 successfully altered
grid disk DATA_dm01_CD_05_dm01cel13 successfully altered
grid disk DATA_dm01_CD_06_dm01cel13 successfully altered
grid disk DATA_dm01_CD_07_dm01cel13 successfully altered
grid disk DATA_dm01_CD_08_dm01cel13 successfully altered
grid disk DATA_dm01_CD_09_dm01cel13 successfully altered
grid disk DATA_dm01_CD_10_dm01cel13 successfully altered
grid disk DATA_dm01_CD_11_dm01cel13 successfully altered

CellCLI> alter griddisk DATA_dm01_CD_00_dm01CEL14, DATA_dm01_CD_01_dm01CEL14, DATA_dm01_CD_02_dm01CEL14, DATA_dm01_CD_03_dm01CEL14, DATA_dm01_CD_04_dm01CEL14, DATA_dm01_CD_05_dm01CEL14, DATA_dm01_CD_06_dm01CEL14, DATA_dm01_CD_07_dm01CEL14, DATA_dm01_CD_08_dm01CEL14, DATA_dm01_CD_09_dm01CEL14, DATA_dm01_CD_10_dm01CEL14, DATA_dm01_CD_11_dm01CEL14 size=484352M;
grid disk DATA_dm01_CD_00_dm01cel14 successfully altered
grid disk DATA_dm01_CD_01_dm01cel14 successfully altered
grid disk DATA_dm01_CD_02_dm01cel14 successfully altered
grid disk DATA_dm01_CD_03_dm01cel14 successfully altered
grid disk DATA_dm01_CD_04_dm01cel14 successfully altered
grid disk DATA_dm01_CD_05_dm01cel14 successfully altered
grid disk DATA_dm01_CD_06_dm01cel14 successfully altered
grid disk DATA_dm01_CD_07_dm01cel14 successfully altered
grid disk DATA_dm01_CD_08_dm01cel14 successfully altered
grid disk DATA_dm01_CD_09_dm01cel14 successfully altered
grid disk DATA_dm01_CD_10_dm01cel14 successfully altered
grid disk DATA_dm01_CD_11_dm01cel14 successfully altered

6. Verify the new size

CellCLI> list griddisk where name like 'DATA.*' attributes name, size
         DATA_dm01_CD_00_dm01cel14       473G
         DATA_dm01_CD_01_dm01cel14       473G
         DATA_dm01_CD_02_dm01cel14       473G
         DATA_dm01_CD_03_dm01cel14       473G
         DATA_dm01_CD_04_dm01cel14       473G
         DATA_dm01_CD_05_dm01cel14       473G
         DATA_dm01_CD_06_dm01cel14       473G
         DATA_dm01_CD_07_dm01cel14       473G
         DATA_dm01_CD_08_dm01cel14       473G
         DATA_dm01_CD_09_dm01cel14       473G
         DATA_dm01_CD_10_dm01cel14       473G
         DATA_dm01_CD_11_dm01cel14       473G
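The listing above covers a single cell. To check every cell at once, dcli can again be used; this sketch only prints the command for review and assumes a hypothetical /root/cell_group file naming all cell hosts.

```shell
# Sketch: verify the new DATA grid disk size on all cells in one dcli call
# (assumes /root/cell_group lists the cell hostnames). Printed, not executed.
CHECK="dcli -g /root/cell_group -l root \"cellcli -e list griddisk attributes name, size where name like 'DATA.*'\""
echo "$CHECK"
```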


7. Increase size of DATA disks in ASM

SQL> alter diskgroup DATA_dm01 resize all rebalance power 32;

Diskgroup altered.


Note that there is no need to specify the new disk size; ASM obtains it from the grid disks. The rebalance power clause is optional.

The command will trigger the rebalance operation for disk group DATA.

Monitor the rebalance with the following command:

SQL> set lines 200
SQL> set pages 200
SQL> select * from gv$asm_operation;

no rows selected


Once the query returns "no rows selected", the rebalance has completed and all disks in disk group DATA_dm01 should show the new size:

SQL> select name, total_mb/1024 "GB" from v$asm_disk_stat where name like 'DATA%';

NAME                                   GB
------------------------------ ----------
DATA_dm01_CD_08_dm01CEL01             473
DATA_dm01_CD_01_dm01CEL01             473
DATA_dm01_CD_07_dm01CEL01             473
DATA_dm01_CD_09_dm01CEL01             473
DATA_dm01_CD_04_dm01CEL01             473
DATA_dm01_CD_05_dm01CEL01             473
DATA_dm01_CD_10_dm01CEL01             473
DATA_dm01_CD_03_dm01CEL01             473
DATA_dm01_CD_02_dm01CEL01             473
DATA_dm01_CD_11_dm01CEL01             473
DATA_dm01_CD_06_dm01CEL01             473
DATA_dm01_CD_00_dm01CEL01             473

DATA_dm01_CD_03_dm01CEL14             473
DATA_dm01_CD_08_dm01CEL14             473
DATA_dm01_CD_00_dm01CEL14             473
DATA_dm01_CD_05_dm01CEL14             473
DATA_dm01_CD_09_dm01CEL14             473
DATA_dm01_CD_02_dm01CEL14             473
DATA_dm01_CD_07_dm01CEL14             473
DATA_dm01_CD_10_dm01CEL14             473
DATA_dm01_CD_01_dm01CEL14             473
DATA_dm01_CD_11_dm01CEL14             473
DATA_dm01_CD_04_dm01CEL14             473
DATA_dm01_CD_06_dm01CEL14             473

168 rows selected.


Conclusion

In this article we learned how to resize ASM disks on an Exadata Database Machine. If there is free space on the Exadata cell disks, increasing the disk group size takes two steps: increase the grid disk size on all storage cells, then increase the disk size in ASM. This requires only a single ASM rebalance operation. If there is no free space on the cell disks, space may first have to be freed from another disk group. To shrink, first reduce the ASM disk size and then reduce the grid disk size; to grow, first increase the grid disk size and then increase the ASM disk size.
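The ordering rule above can be sketched as a small Python helper (hypothetical names; the strings are command templates, not runnable SQL):

```python
# Growing: grid disks first, then ASM picks up the new size.
# Shrinking: ASM first, then the grid disks.
def resize_steps(direction: str) -> list:
    if direction == "grow":
        return ["ALTER GRIDDISK ... size=<new>",       # on every storage cell
                "ALTER DISKGROUP ... RESIZE ALL"]      # single ASM rebalance
    if direction == "shrink":
        return ["ALTER DISKGROUP ... RESIZE ALL SIZE <new>",
                "ALTER GRIDDISK ... size=<new>"]
    raise ValueError(direction)

print(resize_steps("grow"))
```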

Overview

You purchased a new storage cell for expansion, or removed one from an existing cluster, and now want to add it to another cluster due to space demand.

In this article I will demonstrate how to add a storage cell to an existing Exadata Database Machine.

Here we are adding a new storage cell with 600 GB disks to an existing Exadata Database Machine that also uses 600 GB disks.


Environment

  • Exadata Database Machine X2-2 Full Rack
  • Exadata Storage Software version 12.1.2.2.0
  • Oracle Grid/Database Version 11.2.0.4


Pre-checks:

  • Create a new file named “new_cell_group” containing only the hostnames for the new cells.

[root@dm01db01 ~]# vi new_cell_group

[root@dm01db01 ~]# cat new_cell_group
dm01cel05
  • It is assumed that the file "cell_group" contains only the original cells, not the new ones. If you have already modified "cell_group" to include the new cells, comment them out or remove them from the file.

[root@dm01db01 ~]# cat cell_group
dm01cel01
dm01cel02
dm01cel03
dm01cel04
dm01cel05
dm01cel06
dm01cel07
dm01cel08
dm01cel09
dm01cel10
dm01cel11
dm01cel12
dm01cel13
dm01cel14

[root@dm01db01 ~]# vi cell_group

[root@dm01db01 ~]# cat cell_group
dm01cel01
dm01cel02
dm01cel03
dm01cel04
#dm01cel05
dm01cel06
dm01cel07
dm01cel08
dm01cel09
dm01cel10
dm01cel11
dm01cel12
dm01cel13
dm01cel14
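The manual edit above is mechanical; a hedged Python sketch of the same transformation (hypothetical helper, shown only to make the rule explicit):

```python
# Comment out every line of cell_group that names one of the new cells,
# mirroring the manual edit shown above.
def exclude_new_cells(cell_group_lines, new_cells):
    return [("#" + line) if line in new_cells else line
            for line in cell_group_lines]

print(exclude_new_cells(["dm01cel04", "dm01cel05", "dm01cel06"],
                        {"dm01cel05"}))
# ['dm01cel04', '#dm01cel05', 'dm01cel06']
```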
  • Here we use the root user to run the cellcli commands; ensure that SSH user equivalency is set up for the root user between the storage cells and compute nodes.

[root@dm01db01 ~]# dcli -g dbs_group -l root 'uptime'
dm01db01: 09:55:05 up 425 days, 23:16,  1 user,  load average: 3.26, 3.37, 3.39
dm01db02: 09:55:05 up 117 days, 18:20,  0 users,  load average: 1.19, 1.42, 1.56
dm01db03: 09:55:05 up 85 days, 10:07,  0 users,  load average: 6.25, 6.20, 6.22
dm01db04: 09:55:05 up 519 days, 11:33,  0 users,  load average: 1.53, 1.48, 1.47
dm01db05: 09:55:05 up 519 days, 12:45,  0 users,  load average: 1.36, 1.35, 1.47
dm01db06: 09:55:05 up 515 days, 21:40,  0 users,  load average: 1.47, 1.36, 1.36
dm01db07: 09:55:05 up 519 days, 12:03,  0 users,  load average: 1.44, 1.64, 1.71
dm01db08: 09:55:05 up 519 days, 11:29,  0 users,  load average: 1.78, 1.90, 1.78

[root@dm01db01 ~]# dcli -g cell_group -l root 'uptime'
dm01cel01: 09:55:15 up 466 days, 20:25,  0 users,  load average: 1.44, 1.35, 1.26
dm01cel02: 09:55:15 up 519 days, 14:48,  0 users,  load average: 1.49, 1.44, 1.51
dm01cel03: 09:55:15 up 519 days, 14:47,  0 users,  load average: 1.01, 0.96, 1.10
dm01cel04: 09:55:15 up 519 days, 14:47,  0 users,  load average: 1.40, 1.32, 1.24
dm01cel06: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.10, 1.25, 1.30
dm01cel07: 09:55:15 up 519 days, 14:46,  0 users,  load average: 0.99, 1.14, 1.18
dm01cel08: 09:55:15 up 466 days, 19:37,  0 users,  load average: 1.52, 1.24, 1.17
dm01cel09: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.00, 1.40, 1.56
dm01cel10: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.09, 1.22, 1.22
dm01cel11: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.24, 1.27, 1.22
dm01cel12: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.11, 1.14, 1.14
dm01cel13: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.13, 1.27, 1.16
dm01cel14: 09:55:15 up 519 days, 14:46,  0 users,  load average: 1.20, 1.12, 1.15

Ensure Cell Disks are Ready

  • First verify if any cell disks already exist:

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list celldisk attributes name,status,freespace where disktype=harddisk"
dm01cel05: CD_00_dm01cel05       normal  0
dm01cel05: CD_01_dm01cel05       normal  0
dm01cel05: CD_02_dm01cel05       normal  0
dm01cel05: CD_03_dm01cel05       normal  0
dm01cel05: CD_04_dm01cel05       normal  0
dm01cel05: CD_05_dm01cel05       normal  0
dm01cel05: CD_06_dm01cel05       normal  0
dm01cel05: CD_07_dm01cel05       normal  0
dm01cel05: CD_08_dm01cel05       normal  0
dm01cel05: CD_09_dm01cel05       normal  0
dm01cel05: CD_10_dm01cel05       normal  0
dm01cel05: CD_11_dm01cel05       normal  0

In my case, cell disks already exist.
  • If no cell disks have been created, there will be no output.

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list celldisk attributes name,status,freespace where disktype=harddisk"
[root@dm01db01 ~]#
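Whether cell disks exist therefore reduces to whether dcli printed anything. A small Python sketch of parsing that output (hypothetical helper):

```python
# dcli prefixes every output line with "<cell>: "; empty output means
# no cell disks exist on any of the listed cells.
def parse_dcli(output: str):
    disks = []
    for line in output.splitlines():
        if ":" in line:
            cell, payload = line.split(":", 1)
            disks.append((cell.strip(), payload.split()[0]))
    return disks

assert parse_dcli("") == []                      # no cell disks
sample = "dm01cel05: CD_00_dm01cel05       normal  0"
assert parse_dcli(sample) == [("dm01cel05", "CD_00_dm01cel05")]
```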
  • If this command returns a list of cell disks, verify whether any grid disks exist:

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk where disktype=harddisk"
dm01cel05: DATA_CD_00_dm01cel05          active
dm01cel05: DATA_CD_01_dm01cel05          active
dm01cel05: DATA_CD_02_dm01cel05          active
dm01cel05: DATA_CD_03_dm01cel05          active
dm01cel05: DATA_CD_04_dm01cel05          active
dm01cel05: DATA_CD_05_dm01cel05          active
dm01cel05: DATA_CD_06_dm01cel05          active
dm01cel05: DATA_CD_07_dm01cel05          active
dm01cel05: DATA_CD_08_dm01cel05          active
dm01cel05: DATA_CD_09_dm01cel05          active
dm01cel05: DATA_CD_10_dm01cel05          active
dm01cel05: DATA_CD_11_dm01cel05          active
dm01cel05: RECO_CD_00_dm01cel05          active
dm01cel05: RECO_CD_01_dm01cel05          active
dm01cel05: RECO_CD_02_dm01cel05          active
dm01cel05: RECO_CD_03_dm01cel05          active
dm01cel05: RECO_CD_04_dm01cel05          active
dm01cel05: RECO_CD_05_dm01cel05          active
dm01cel05: RECO_CD_06_dm01cel05          active
dm01cel05: RECO_CD_07_dm01cel05          active
dm01cel05: RECO_CD_08_dm01cel05          active
dm01cel05: RECO_CD_09_dm01cel05          active
dm01cel05: RECO_CD_10_dm01cel05          active
dm01cel05: RECO_CD_11_dm01cel05          active
dm01cel05: DBFS_DG_CD_02_dm01cel05      active
dm01cel05: DBFS_DG_CD_03_dm01cel05      active
dm01cel05: DBFS_DG_CD_04_dm01cel05      active
dm01cel05: DBFS_DG_CD_05_dm01cel05      active
dm01cel05: DBFS_DG_CD_06_dm01cel05      active
dm01cel05: DBFS_DG_CD_07_dm01cel05      active
dm01cel05: DBFS_DG_CD_08_dm01cel05      active
dm01cel05: DBFS_DG_CD_09_dm01cel05      active
dm01cel05: DBFS_DG_CD_10_dm01cel05      active
dm01cel05: DBFS_DG_CD_11_dm01cel05      active

In my case, grid disks already exist.
  • If no grid disks have been created, there will be no output.

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk where disktype=harddisk"
[root@dm01db01 ~]#

  • If any grid disks are present, you must ensure they are not in use by checking V$ASM_DISK: the mount status should be CLOSED and the header status FORMER or CANDIDATE:

dm01db01-orcldb1 {/home/oracle}:. oraenv
ORACLE_SID = [orcldb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Oct 17 10:01:15 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select name, path, state, mode_status, header_status, mount_status from v$asm_disk where header_status <> 'MEMBER' order by path;

no rows selected
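The safety condition above can be captured in one predicate (hypothetical helper; the status strings are the values V$ASM_DISK reports):

```python
# A grid disk is safe to drop only if ASM is not using it: mount status
# CLOSED and header status FORMER or CANDIDATE.
def safe_to_drop(mount_status: str, header_status: str) -> bool:
    return mount_status == "CLOSED" and header_status in ("FORMER", "CANDIDATE")

assert safe_to_drop("CLOSED", "CANDIDATE")
assert not safe_to_drop("CACHED", "MEMBER")   # disk in use by a disk group
```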
  • If it is OK to drop the grid disks, use the following command to drop a set of grid disks by prefix (e.g., RECO):

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e drop griddisk all harddisk prefix=RECO"
dm01cel05: GridDisk RECO_CD_00_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_01_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_02_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_03_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_04_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_05_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_06_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_07_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_08_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_09_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_10_dm01cel05 successfully dropped
dm01cel05: GridDisk RECO_CD_11_dm01cel05 successfully dropped

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e drop griddisk all harddisk prefix=DATA"
dm01cel05: GridDisk DATA_CD_00_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_01_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_02_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_03_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_04_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_05_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_06_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_07_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_08_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_09_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_10_dm01cel05 successfully dropped
dm01cel05: GridDisk DATA_CD_11_dm01cel05 successfully dropped

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e drop griddisk all harddisk prefix=DBFS_DG"
dm01cel05: GridDisk DBFS_DG_CD_02_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_03_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_04_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_05_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_06_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_07_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_08_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_09_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_10_dm01cel05 successfully dropped
dm01cel05: GridDisk DBFS_DG_CD_11_dm01cel05 successfully dropped
  • Verify Grid disks are dropped

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk where disktype=harddisk"
[root@dm01db01 ~]#
  • Drop cell disks

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e drop celldisk all harddisk force"
dm01cel05: CellDisk CD_00_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_01_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_02_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_03_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_04_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_05_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_06_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_07_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_08_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_09_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_10_dm01cel05 successfully dropped
dm01cel05: CellDisk CD_11_dm01cel05 successfully dropped
  • If no cell disks exist, create cell disks on all hard disks (output will be similar to the following): 

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e create celldisk all harddisk"
dm01cel05: CellDisk CD_00_dm01cel05 successfully created
dm01cel05: CellDisk CD_01_dm01cel05 successfully created
dm01cel05: CellDisk CD_02_dm01cel05 successfully created
dm01cel05: CellDisk CD_03_dm01cel05 successfully created
dm01cel05: CellDisk CD_04_dm01cel05 successfully created
dm01cel05: CellDisk CD_05_dm01cel05 successfully created
dm01cel05: CellDisk CD_06_dm01cel05 successfully created
dm01cel05: CellDisk CD_07_dm01cel05 successfully created
dm01cel05: CellDisk CD_08_dm01cel05 successfully created
dm01cel05: CellDisk CD_09_dm01cel05 successfully created
dm01cel05: CellDisk CD_10_dm01cel05 successfully created
dm01cel05: CellDisk CD_11_dm01cel05 successfully created
  • Verify that celldisks were created (output should be similar to the following)

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list celldisk attributes name,status,freespace where disktype=harddisk"
dm01cel05: CD_00_dm01cel05       normal  528.6875G
dm01cel05: CD_01_dm01cel05       normal  528.6875G
dm01cel05: CD_02_dm01cel05       normal  557.8125G
dm01cel05: CD_03_dm01cel05       normal  557.8125G
dm01cel05: CD_04_dm01cel05       normal  557.8125G
dm01cel05: CD_05_dm01cel05       normal  557.8125G
dm01cel05: CD_06_dm01cel05       normal  557.8125G
dm01cel05: CD_07_dm01cel05       normal  557.8125G
dm01cel05: CD_08_dm01cel05       normal  557.8125G
dm01cel05: CD_09_dm01cel05       normal  557.8125G
dm01cel05: CD_10_dm01cel05       normal  557.8125G
dm01cel05: CD_11_dm01cel05       normal  557.8125G

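The free space above is exactly what the three grid disk slices will consume; this can be checked with plain arithmetic (sizes taken from the listings in this article; all values are exact binary fractions, so float comparison is safe):

```python
DATA, RECO, DBFS_DG = 260.0, 268.6875, 29.125   # grid disk sizes in GB

# CD_02..CD_11 have 557.8125G free: DATA + RECO + DBFS_DG fills it exactly.
assert DATA + RECO + DBFS_DG == 557.8125

# CD_00/CD_01 carry the system partitions, so only DATA + RECO fit there
# (528.6875G free, matching the listing above).
assert DATA + RECO == 528.6875
```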
Create DATA and RECO grid disks at the same offsets found on the existing disks (outer tracks)

  • Verify the current sizes and offsets for existing grid disks (on the existing cells):

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size | grep 'CD_03' | grep -i DBFS_DG"
dm01cel01: DBFS_DG_CD_03_dm01cel01      29.125G
dm01cel02: DBFS_DG_CD_03_dm01cel02      29.125G
dm01cel03: DBFS_DG_CD_03_dm01cel03      29.125G
dm01cel04: DBFS_DG_CD_03_dm01cel04      29.125G
dm01cel06: DBFS_DG_CD_03_dm01cel06      29.125G
dm01cel07: DBFS_DG_CD_03_dm01cel07      29.125G
dm01cel08: DBFS_DG_CD_03_dm01cel08      29.125G
dm01cel09: DBFS_DG_CD_03_dm01cel09      29.125G
dm01cel10: DBFS_DG_CD_03_dm01cel10      29.125G
dm01cel11: DBFS_DG_CD_03_dm01cel11      29.125G
dm01cel12: DBFS_DG_CD_03_dm01cel12      29.125G
dm01cel13: DBFS_DG_CD_03_dm01cel13      29.125G
dm01cel14: DBFS_DG_CD_03_dm01cel14      29.125G

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep 'CD_03' | grep -i data"
dm01cel01: DATA_CD_03_dm01cel01          260G            32M
dm01cel02: DATA_CD_03_dm01cel02          260G            32M
dm01cel03: DATA_CD_03_dm01cel03          260G            32M
dm01cel04: DATA_CD_03_dm01cel04          260G            32M
dm01cel06: DATA_CD_03_dm01cel06          260G            32M
dm01cel07: DATA_CD_03_dm01cel07          260G            32M
dm01cel08: DATA_CD_03_dm01cel08          260G            32M
dm01cel09: DATA_CD_03_dm01cel09          260G            32M
dm01cel10: DATA_CD_03_dm01cel10          260G            32M
dm01cel11: DATA_CD_03_dm01cel11          260G            32M
dm01cel12: DATA_CD_03_dm01cel12          260G            32M
dm01cel13: DATA_CD_03_dm01cel13          260G            32M
dm01cel14: DATA_CD_03_dm01cel14          260G            32M

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep 'CD_03' | grep -i reco"
dm01cel01: RECO_CD_03_dm01cel01          268.6875G       260.046875G
dm01cel02: RECO_CD_03_dm01cel02          268.6875G       260.046875G
dm01cel03: RECO_CD_03_dm01cel03          268.6875G       260.046875G
dm01cel04: RECO_CD_03_dm01cel04          268.6875G       260.046875G
dm01cel06: RECO_CD_03_dm01cel06          268.6875G       260.046875G
dm01cel07: RECO_CD_03_dm01cel07          268.6875G       260.046875G
dm01cel08: RECO_CD_03_dm01cel08          268.6875G       260.046875G
dm01cel09: RECO_CD_03_dm01cel09          268.6875G       260.046875G
dm01cel10: RECO_CD_03_dm01cel10          268.6875G       260.046875G
dm01cel11: RECO_CD_03_dm01cel11          268.6875G       260.046875G
dm01cel12: RECO_CD_03_dm01cel12          268.6875G       260.046875G
dm01cel13: RECO_CD_03_dm01cel13          268.6875G       260.046875G
dm01cel14: RECO_CD_03_dm01cel14          268.6875G       260.046875G

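The offsets above are consistent: RECO starts just past DATA, and DBFS_DG will land immediately after RECO. A quick check (values in GB, taken from the listings in this article):

```python
RECO_OFFSET = 260.046875     # just past DATA (260G at offset 32M)
RECO_SIZE = 268.6875

# The DBFS_DG grid disks start exactly where RECO ends, which matches the
# 528.734375G offset reported later in this article.
assert RECO_OFFSET + RECO_SIZE == 528.734375
```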
  • Create matching grid disks on the new cells with the same sizes; be sure to create them in order of offset, smallest first.

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX='DATA', size=260G"
dm01cel05: GridDisk DATA_CD_00_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_01_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_02_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_03_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_04_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_05_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_06_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_07_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_08_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_09_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_10_dm01cel05 successfully created
dm01cel05: GridDisk DATA_CD_11_dm01cel05 successfully created

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX='RECO', size=268.6875G"
dm01cel05: GridDisk RECO_CD_00_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_01_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_02_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_03_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_04_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_05_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_06_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_07_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_08_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_09_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_10_dm01cel05 successfully created
dm01cel05: GridDisk RECO_CD_11_dm01cel05 successfully created

  • Validate that grid disks were created properly on the new cells (size and offset matches the original cells)

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep -i data"
dm01cel05: DATA_CD_00_dm01cel05  260G            32M
dm01cel05: DATA_CD_01_dm01cel05  260G            32M
dm01cel05: DATA_CD_02_dm01cel05  260G            32M
dm01cel05: DATA_CD_03_dm01cel05  260G            32M
dm01cel05: DATA_CD_04_dm01cel05  260G            32M
dm01cel05: DATA_CD_05_dm01cel05  260G            32M
dm01cel05: DATA_CD_06_dm01cel05  260G            32M
dm01cel05: DATA_CD_07_dm01cel05  260G            32M
dm01cel05: DATA_CD_08_dm01cel05  260G            32M
dm01cel05: DATA_CD_09_dm01cel05  260G            32M
dm01cel05: DATA_CD_10_dm01cel05  260G            32M
dm01cel05: DATA_CD_11_dm01cel05  260G            32M

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep -i reco"
dm01cel05: RECO_CD_00_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_01_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_02_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_03_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_04_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_05_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_06_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_07_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_08_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_09_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_10_dm01cel05  268.6875G       260.046875G
dm01cel05: RECO_CD_11_dm01cel05  268.6875G       260.046875G

Create the new DBFS_DG grid disks only on disks 3 through 12 of the new storage servers

  • List the size of the DBFS_DG grid disks on the original cells (they should all be the same size) – we’ll call this size, “dbfs_dg_size”:

[root@dm01db01 ~]# dcli -c dm01cel01 -l root "cellcli -e list griddisk attributes name,size | grep 'DBFS_DG'"
dm01cel01: DBFS_DG_CD_02_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_03_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_04_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_05_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_06_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_07_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_08_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_09_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_10_dm01cel01      29.125G
dm01cel01: DBFS_DG_CD_11_dm01cel01      29.125G

  • The DBFS_DG grid disks will be automatically placed just after the RECO grid disks on the innermost tracks.

NOTE: This command will “fail” on the first and second disks of each cell, but that is expected since there is no more free space on those disks due to the system partitions. The “error” message is similar to:  “Cell disks were skipped because they had no freespace for grid disks: CD_00_dm01cel05, CD_01_dm01cel05.”

[root@dm01db01 ~]# dcli -g new_cell_group -l root "cellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX='DBFS_DG', size=29.125G"
dm01cel05: Cell disks were skipped because they had no freespace for grid disks: CD_00_dm01cel05, CD_01_dm01cel05.
dm01cel05: GridDisk DBFS_DG_CD_02_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_03_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_04_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_05_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_06_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_07_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_08_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_09_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_10_dm01cel05 successfully created
dm01cel05: GridDisk DBFS_DG_CD_11_dm01cel05 successfully created

Validate that Grid Disks on all Cells are the Same Size

  • All Cells at once:

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep -i data"
dm01cel01: DATA_CD_00_dm01cel01          260G            32M
dm01cel01: DATA_CD_01_dm01cel01          260G            32M
dm01cel01: DATA_CD_02_dm01cel01          260G            32M
dm01cel01: DATA_CD_03_dm01cel01          260G            32M
dm01cel01: DATA_CD_04_dm01cel01          260G            32M
dm01cel01: DATA_CD_05_dm01cel01          260G            32M
dm01cel01: DATA_CD_06_dm01cel01          260G            32M
dm01cel01: DATA_CD_07_dm01cel01          260G            32M
dm01cel01: DATA_CD_08_dm01cel01          260G            32M
dm01cel01: DATA_CD_09_dm01cel01          260G            32M
dm01cel01: DATA_CD_10_dm01cel01          260G            32M
dm01cel01: DATA_CD_11_dm01cel01          260G            32M
…..
dm01cel14: DATA_CD_00_dm01cel14          260G            32M
dm01cel14: DATA_CD_01_dm01cel14          260G            32M
dm01cel14: DATA_CD_02_dm01cel14          260G            32M
dm01cel14: DATA_CD_03_dm01cel14          260G            32M
dm01cel14: DATA_CD_04_dm01cel14          260G            32M
dm01cel14: DATA_CD_05_dm01cel14          260G            32M
dm01cel14: DATA_CD_06_dm01cel14          260G            32M
dm01cel14: DATA_CD_07_dm01cel14          260G            32M
dm01cel14: DATA_CD_08_dm01cel14          260G            32M
dm01cel14: DATA_CD_09_dm01cel14          260G            32M
dm01cel14: DATA_CD_10_dm01cel14          260G            32M
dm01cel14: DATA_CD_11_dm01cel14          260G            32M

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep -i reco"
dm01cel01: RECO_CD_00_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_01_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_02_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_03_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_04_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_05_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_06_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_07_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_08_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_09_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_10_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_11_dm01cel01          268.6875G       260.046875G
……
dm01cel13: RECO_CD_11_dm01cel13          268.6875G       260.046875G
dm01cel14: RECO_CD_00_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_01_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_02_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_03_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_04_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_05_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_06_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_07_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_08_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_09_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_10_dm01cel14          268.6875G       260.046875G
dm01cel14: RECO_CD_11_dm01cel14          268.6875G       260.046875G

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size,offset | grep -i DBFS_DG"
dm01cel01: DBFS_DG_CD_02_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_03_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_04_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_05_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_06_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_07_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_08_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_09_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_10_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_11_dm01cel01      29.125G         528.734375G
….
dm01cel14: DBFS_DG_CD_02_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_03_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_04_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_05_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_06_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_07_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_08_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_09_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_10_dm01cel14      29.125G         528.734375G
dm01cel14: DBFS_DG_CD_11_dm01cel14      29.125G         528.734375G

  • One cell at a time:

[root@dm01db01 ~]# dcli -c dm01cel01 -l root "cellcli -e list griddisk attributes name,size,offset | grep -i data"
dm01cel01: DATA_CD_00_dm01cel01          260G            32M
dm01cel01: DATA_CD_01_dm01cel01          260G            32M
dm01cel01: DATA_CD_02_dm01cel01          260G            32M
dm01cel01: DATA_CD_03_dm01cel01          260G            32M
dm01cel01: DATA_CD_04_dm01cel01          260G            32M
dm01cel01: DATA_CD_05_dm01cel01          260G            32M
dm01cel01: DATA_CD_06_dm01cel01          260G            32M
dm01cel01: DATA_CD_07_dm01cel01          260G            32M
dm01cel01: DATA_CD_08_dm01cel01          260G            32M
dm01cel01: DATA_CD_09_dm01cel01          260G            32M
dm01cel01: DATA_CD_10_dm01cel01          260G            32M
dm01cel01: DATA_CD_11_dm01cel01          260G            32M

[root@dm01db01 ~]# dcli -c dm01cel01 -l root "cellcli -e list griddisk attributes name,size,offset | grep -i reco"
dm01cel01: RECO_CD_00_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_01_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_02_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_03_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_04_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_05_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_06_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_07_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_08_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_09_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_10_dm01cel01          268.6875G       260.046875G
dm01cel01: RECO_CD_11_dm01cel01          268.6875G       260.046875G

[root@dm01db01 ~]# dcli -c dm01cel01 -l root "cellcli -e list griddisk attributes name,size,offset | grep -i DBFS_DG"
dm01cel01: DBFS_DG_CD_02_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_03_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_04_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_05_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_06_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_07_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_08_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_09_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_10_dm01cel01      29.125G         528.734375G
dm01cel01: DBFS_DG_CD_11_dm01cel01      29.125G         528.734375G

Add the DBFS_DG grid disks to the existing DBFS_DG disk group

  • Update the cellip.ora file

[root@dm01db01 ~]# vi /etc/oracle/cell/network-config/cellip.ora

[root@dm01db01 network-config]# cat /etc/oracle/cell/network-config/cellip.ora
cell="192.168.2.9"
cell="192.168.2.10"
cell="192.168.2.11"
cell="192.168.2.12"
cell="192.168.2.13"
cell="192.168.2.14"
cell="192.168.2.15"
cell="192.168.2.16"
cell="192.168.2.17"
cell="192.168.2.18"
cell="192.168.2.19"
cell="192.168.2.20"
cell="192.168.2.21"
cell="192.168.2.22"

[root@dm01db01 ~]# cd /etc/oracle/cell/network-config/

[root@dm01db01 network-config]# dcli -g ~/dbs_group -l root -d /etc/oracle/cell/network-config -f cellip.ora

[root@dm01db01 network-config]# dcli -g ~/dbs_group -l root ls -l  /etc/oracle/cell/network-config/cellip.ora
dm01db01: -rwxr-xr-x 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db02: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db03: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db04: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db05: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db06: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db07: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora
dm01db08: -rwx------ 1 oracle oinstall 279 Oct 17 11:15 /etc/oracle/cell/network-config/cellip.ora

dm01db01-orcldb1 {/home/oracle}:. oraenv
ORACLE_SID = [orcldb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Oct 17 11:18:34 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup DBFS_DG add disk 'o/*/DBFS_DG*dm01cel05' rebalance power 32 NOWAIT;

Diskgroup altered.

Add the DATA and RECO grid disks to the existing DATA and RECO disk groups

  • From one DB node, add the new DATA disks to the DATA disk group (replace the pattern below, 'o/*/DATA_DM01*new_cel01', 'o/*/DATA_DM01*new_cel02', 'o/*/DATA_DM01*new_cel03', with the list of actual new cells):

SQL> alter diskgroup DATA add disk 'o/*/DATA*dm01cel05' rebalance power 32 NOWAIT;

Diskgroup altered.

  • From another DB node, add the new RECO disks to the RECO disk group (replace the pattern below, 'o/*/RECO_DM01*new_cel01', 'o/*/RECO_DM01*new_cel02', 'o/*/RECO_DM01*new_cel03', with the list of actual new cells):

SQL> alter diskgroup RECO add disk 'o/*/RECO*dm01cel05' rebalance power 32 NOWAIT;

Diskgroup altered.
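
When several cells are added in one expansion, the same statement takes one discovery pattern per cell. A sketch with hypothetical cell names (dm01cel15 and dm01cel16 are placeholders, not cells from this environment):

```sql
-- Hypothetical cell names; substitute the actual new cells.
alter diskgroup DATA add disk
  'o/*/DATA*dm01cel15',
  'o/*/DATA*dm01cel16'
  rebalance power 32 NOWAIT;
```

Listing all new cells in a single statement triggers one rebalance instead of one per cell, which is cheaper overall.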

  • Monitor the progress of the rebalance activities using this query (you can proceed with the rest of the steps while this is continuing):

SQL> select * from gv$asm_operation;

   INST_ID GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
---------- ------------ ----- ---- ---------- ---------- ---------- ---------- ---------- ----------- --------------------------------------------
         7            1 REBAL WAIT         32
         3            1 REBAL WAIT         32
         2            1 REBAL WAIT         32
         1            1 REBAL WAIT         32
         6            1 REBAL WAIT         32
         8            1 REBAL WAIT         32
         4            1 REBAL WAIT         32
         5            1 REBAL RUN          32         32      41666      45014      17263           0

8 rows selected.

SQL> select * from gv$asm_operation;

no rows selected
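
Once the query returns no rows, the rebalance is complete. A narrower variant (a sketch against the same gv$asm_operation view) shows only in-flight work with its time estimate:

```sql
-- Only active or queued rebalances, with the remaining-time estimate.
select inst_id, group_number, operation, state, power, est_minutes
from   gv$asm_operation
where  state in ('RUN', 'WAIT');
```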

Validate that the DATA, RECO, and DBFS_DG disk groups contain disks from the new cell

Before:
dm01db01-orcldb1 {/home/oracle}:. oraenv
ORACLE_SID = [orcldb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
dm01db01-+ASM1 {/home/oracle}:sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Oct 17 10:43:16 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> set lines 200
SQL> set pages 200
SQL> select d.failgroup,dg.name, sum(d.total_mb) cell_total_mb, count(1) cell_disk_count from v$asm_disk d, v$asm_diskgroup dg where d.group_number = dg.group_number group by d.failgroup, dg.name order by d.failgroup, dg.name;

FAILGROUP                      NAME                           CELL_TOTAL_MB CELL_DISK_COUNT
------------------------------ ------------------------------ ------------- ---------------
DM01CEL01                      DATA                                 3194880              12
DM01CEL01                      RECO                                 3301632              12
DM01CEL01                      DBFS_DG                              298240              10
DM01CEL02                      DATA                                 2928640              12
DM01CEL02                      RECO                                 3026496              12
DM01CEL02                      DBFS_DG                              268416              10
DM01CEL03                      DATA                                 3194880              12
DM01CEL03                      RECO                                 3301632              12
DM01CEL03                      DBFS_DG                              298240              10
DM01CEL04                      DATA                                 3194880              12
DM01CEL04                      RECO                                 3301632              12
DM01CEL04                      DBFS_DG                              298240              10
DM01CEL06                      DATA                                 3194880              12
DM01CEL06                      RECO                                 3301632              12
DM01CEL06                      DBFS_DG                              298240              10
DM01CEL07                      DATA                                 2928640              11
DM01CEL07                      RECO                                 3026496              11
DM01CEL07                      DBFS_DG                              268416              10
DM01CEL08                      DATA                                 3194880              12
DM01CEL08                      RECO                                 3301632              12
DM01CEL08                      DBFS_DG                              298240              10
DM01CEL09                      DATA                                 2396160              12
DM01CEL09                      RECO                                 2476224              12
DM01CEL09                      DBFS_DG                              238592              10
DM01CEL10                      DATA                                 3194880              12
DM01CEL10                      RECO                                 3301632              12
DM01CEL10                      DBFS_DG                              298240              10
DM01CEL11                      DATA                                 2928640              12
DM01CEL11                      RECO                                 3026496              12
DM01CEL11                      DBFS_DG                              298240              10
DM01CEL12                      DATA                                 3194880              12
DM01CEL12                      RECO                                 3301632              12
DM01CEL12                      DBFS_DG                              298240              10
DM01CEL13                      DATA                                 3194880              12
DM01CEL13                      RECO                                 3301632              12
DM01CEL13                      DBFS_DG                              298240              10
DM01CEL14                      DATA                                 3194880              12
DM01CEL14                      RECO                                 3301632              12
DM01CEL14                      DBFS_DG                              298240              10

39 rows selected.

After:
SQL> set lines 200
SQL> set pages 200
SQL> select d.failgroup,dg.name, sum(d.total_mb) cell_total_mb, count(1) cell_disk_count from v$asm_disk d, v$asm_diskgroup dg where d.group_number = dg.group_number group by d.failgroup, dg.name order by d.failgroup, dg.name;

FAILGROUP                      NAME                           CELL_TOTAL_MB CELL_DISK_COUNT
------------------------------ ------------------------------ ------------- ---------------
DM01CEL01                      DATA                                 3194880              12
DM01CEL01                      RECO                                 3301632              12
DM01CEL01                      DBFS_DG                              298240              10
DM01CEL02                      DATA                                 2928640              12
DM01CEL02                      RECO                                 3026496              12
DM01CEL02                      DBFS_DG                              268416              10
DM01CEL03                      DATA                                 3194880              12
DM01CEL03                      RECO                                 3301632              12
DM01CEL03                      DBFS_DG                              298240              10
DM01CEL04                      DATA                                 2928640              12
DM01CEL04                      RECO                                 3026496              12
DM01CEL04                      DBFS_DG                              268416              10
DM01CEL05                      DATA                                 3194880              12
DM01CEL05                      RECO                                 3301632              12
DM01CEL05                      DBFS_DG                              298240              10
DM01CEL06                      DATA                                 3194880              12
DM01CEL06                      RECO                                 3301632              12
DM01CEL06                      DBFS_DG                              298240              10
DM01CEL07                      DATA                                 2928640              12
DM01CEL07                      RECO                                 3026496              12
DM01CEL07                      DBFS_DG                              268416              10
DM01CEL08                      DATA                                 3194880              12
DM01CEL08                      RECO                                 3301632              12
DM01CEL08                      DBFS_DG                              298240              10
DM01CEL09                      DATA                                 2396160              12
DM01CEL09                      RECO                                 2476224              12
DM01CEL09                      DBFS_DG                              238592              10
DM01CEL10                      DATA                                 3194880              12
DM01CEL10                      RECO                                 3301632              12
DM01CEL10                      DBFS_DG                              298240              10
DM01CEL11                      DATA                                 2928640              12
DM01CEL11                      RECO                                 3026496              12
DM01CEL11                      DBFS_DG                              298240              10
DM01CEL12                      DATA                                 3194880              12
DM01CEL12                      RECO                                 3301632              12
DM01CEL12                      DBFS_DG                              298240              10
DM01CEL13                      DATA                                 3194880              12
DM01CEL13                      RECO                                 3301632              12
DM01CEL13                      DBFS_DG                              298240              10
DM01CEL14                      DATA                                 3194880              12
DM01CEL14                      RECO                                 3301632              12
DM01CEL14                      DBFS_DG                              298240              10

42 rows selected.

Ensure that there are no offline disks in any disk group:

SQL> select name,total_mb,free_mb,offline_disks from v$asm_diskgroup;

NAME                             TOTAL_MB    FREE_MB OFFLINE_DISKS
------------------------------ ---------- ---------- -------------
DATA                             42864640   41713708             0
RECO                             44296896   44293768             0
DBFS_DG                           4026240    4023232             0
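
If OFFLINE_DISKS is non-zero for any group, the disk-level view shows which disks are affected. A sketch using v$asm_disk:

```sql
-- Any disk that is not ONLINE/NORMAL needs attention before the
-- expansion can be considered complete.
select group_number, path, mode_status, state
from   v$asm_disk
where  mode_status <> 'ONLINE' or state <> 'NORMAL';
```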

Conclusion
In this article, we learned how to add a storage cell of the same disk size to an existing Exadata Database Machine.
