
Introduction

By default, Oracle ACS configures the root file system with 30GB of space on Exadata compute nodes X2 and above. In most cases this space is sufficient to store the operating system, Exadata software, log files and diagnostic files. Over time, if you store patches and software there, or if log files are not purged, this space fills up quickly. Exadata X2 and above use a volume group, so it is easy to extend the logical volume on which the root file system is mounted.

The root file system is created on two system partitions, LVDbSys1 and LVDbSys2, and both system partitions must be resized equally at the same time. Only one system partition is active at any time; the other is inactive.
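
A quick way to confirm which system partition each node is currently booted from is to check the imageinfo output. This is a minimal sketch, assuming the same dbs_group file that the dcli examples later in this article use:

[root@exa01db01 ~]# dcli -g dbs_group -l root 'imageinfo | grep "System partition"'

The "System partition on device" line reports whether LVDbSys1 or LVDbSys2 is the active partition on each node.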

In this article, I will demonstrate how you can extend the root file system on Exadata compute nodes online, without any downtime.

Environment

Exadata X5-2 Half Rack
Exadata storage software version 12.1.2.3.4

Current Root File System Allocation

[root@exa01db01 ~]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       30G   25G  3.5G  88% /
 


List Logical Volumes and Their Details

lvm> lvs

lvm> lvs -o lv_name,lv_path,vg_name,lv_size
  LV        Path                   VG      LSize
  LVDbOra1  /dev/VGExaDb/LVDbOra1  VGExaDb 200.00g
  LVDbSwap1 /dev/VGExaDb/LVDbSwap1 VGExaDb  24.00g
  LVDbSys1  /dev/VGExaDb/LVDbSys1  VGExaDb  30.00g
  LVDbSys2  /dev/VGExaDb/LVDbSys2  VGExaDb  30.00g
  perflv    /dev/VGExaDb/perflv    VGExaDb   5.00g



Get the Current Active System Partition

[root@exa01db01 ~]# imageinfo

Kernel version: 2.6.39-400.294.1.el6uek.x86_64 #1 SMP Wed Jan 11 08:46:38 PST 2017 x86_64
Image kernel version: 2.6.39-400.294.1.el6uek
Image version: 12.1.2.3.4.170111
Image activated: 2017-04-08 12:14:23 -0500
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1




Steps to Increase Root File System on Compute Nodes:

  • Get the Current Root File System Utilization

[root@exa01db01 ~]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       30G   25G  3.5G  88% /


  • Get Current Logical Volume Configuration

lvm> lvs -o lv_name,lv_path,vg_name,lv_size
  LV        Path                   VG      LSize
  LVDbOra1  /dev/VGExaDb/LVDbOra1  VGExaDb 200.00g
  LVDbSwap1 /dev/VGExaDb/LVDbSwap1 VGExaDb  24.00g
  LVDbSys1  /dev/VGExaDb/LVDbSys1  VGExaDb  30.00g
  LVDbSys2  /dev/VGExaDb/LVDbSys2  VGExaDb  30.00g
  perflv    /dev/VGExaDb/perflv    VGExaDb   5.00g


  • Ensure Root File System Can be Resized Online

[root@exa01db01 ~]# tune2fs -l /dev/mapper/VGExaDb-LVDbSys1 | grep resize_inode
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize

All Nodes:



[root@exa01db01 ~]# dcli -g dbs_group -l root 'tune2fs -l /dev/mapper/VGExaDb-LVDbSys1 | grep resize_inode'
exa01db01: Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
exa01db02: Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
exa01db03: Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
exa01db04: Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize


  • Get the free space available in the Volume Group (a dcli sketch for checking all nodes follows the output below)

lvm> vgdisplay -s
  "VGExaDb" 1.63 TiB  [295.00 GiB used / 1.34 TiB free]
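
If you plan to extend the root file system on every compute node, it is worth confirming first that each node has enough free space in VGExaDb. A minimal sketch, assuming the same dbs_group file used earlier:

[root@exa01db01 ~]# dcli -g dbs_group -l root 'vgdisplay -s'

Each node should report at least 100GB free in VGExaDb for this example, since both LVDbSys1 and LVDbSys2 are extended by 50GB.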


  • Extend both logical volumes using the lvextend command. Here we are extending the root file system by 50GB, so the file system becomes 80GB in total. A dcli sketch for applying the same change on the remaining nodes follows the output below.

[root@exa01db01 ~]# lvextend -L +50G /dev/VGExaDb/LVDbSys1
  Size of logical volume VGExaDb/LVDbSys1 changed from 30.00 GiB (7680 extents) to 80.00 GiB (20480 extents).
  Logical volume LVDbSys1 successfully resized

[root@exa01db01 ~]# lvextend -L +50G /dev/VGExaDb/LVDbSys2
  Size of logical volume VGExaDb/LVDbSys2 changed from 30.00 GiB (7680 extents) to 80.00 GiB (20480 extents).
  Logical volume LVDbSys2 successfully resized
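
The same extension can be applied to the remaining compute nodes in one pass. This is a hedged dcli sketch, assuming the same dbs_group file and that every node has sufficient free space in VGExaDb:

[root@exa01db01 ~]# dcli -g dbs_group -l root 'lvextend -L +50G /dev/VGExaDb/LVDbSys1; lvextend -L +50G /dev/VGExaDb/LVDbSys2'

The resize2fs and e2fsck steps that follow still have to be executed on each node, because LVDbSys1 is resized online while LVDbSys2 must be checked first.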


  • Now resize the file system using the resize2fs command.

[root@exa01db01 ~]# resize2fs /dev/VGExaDb/LVDbSys1
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/VGExaDb/LVDbSys1 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 4
Performing an on-line resize of /dev/VGExaDb/LVDbSys1 to 15728640 (4k) blocks.
The filesystem on /dev/VGExaDb/LVDbSys1 is now 15728640 blocks long.

Note that e2fsck cannot be run against LVDbSys1, because it is the active, mounted partition:

[root@exa01db02 ~]# e2fsck -f /dev/VGExaDb/LVDbSys1
e2fsck 1.43-WIP (20-Jun-2013)
/dev/VGExaDb/LVDbSys1 is mounted.
e2fsck: Cannot continue, aborting.




The resize2fs command for LVDbSys2 fails because that partition is inactive (not mounted), so we must run e2fsck on it first before resizing.

[root@exa01db01 ~]# resize2fs /dev/VGExaDb/LVDbSys2
resize2fs 1.43-WIP (20-Jun-2013)
Please run 'e2fsck -f /dev/VGExaDb/LVDbSys2' first.

[root@exa01db01 ~]# e2fsck -f /dev/VGExaDb/LVDbSys2
e2fsck 1.43-WIP (20-Jun-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/VGExaDb/LVDbSys2: 122199/3932160 files (0.3% non-contiguous), 5496667/7864320 blocks

Now run resize2fs against LVDbSys2 again:

[root@exa01db01 ~]# resize2fs /dev/VGExaDb/LVDbSys2
resize2fs 1.43-WIP (20-Jun-2013)
Resizing the filesystem on /dev/VGExaDb/LVDbSys2 to 15728640 (4k) blocks.
The filesystem on /dev/VGExaDb/LVDbSys2 is now 15728640 blocks long.


  • Validate the root file system (a dcli sketch for validating all nodes follows the output below)

[root@exa01db01 ~]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       80G   25G   55G  31% /
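
To validate the change across all compute nodes, here is a minimal dcli sketch, again assuming the same dbs_group file used earlier:

[root@exa01db01 ~]# dcli -g dbs_group -l root 'df -h /'
[root@exa01db01 ~]# dcli -g dbs_group -l root 'lvs -o lv_name,lv_size VGExaDb'

The first command shows the mounted root file system size on every node, and the second confirms that LVDbSys1 and LVDbSys2 both report the new 80GB size.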
 


Conclusion

In this article, we demonstrated how to resize the root file system on Exadata compute nodes online, without any outage. It is important to note that the root file system is created on two system partitions for high availability, and both must be kept at the same size.

Introduction

In Oracle databases, it is recommended to multiplex your control file to safeguard against failures such as corruption or accidental removal of the control file.

In this article, I will demonstrate how to multiplex (duplicate) a control file into Automatic Storage Management (ASM).

Current Setup

Exadata 8-node RAC using ASM.
Current controlfile is stored in ASM.
Database is using SPFILE.
There are different ASM disk groups available, such as DATA, RECO, DBFS_DG and ACFS_DG.
 

dm01db01-orcldb1 {/home/oracle}:srvctl status database -d orcldb
Instance orcldb1 is running on node dm01db01
Instance orcldb2 is running on node dm01db02
Instance orcldb3 is running on node dm01db04
Instance orcldb4 is running on node dm01db05
Instance orcldb5 is running on node dm01db07
Instance orcldb6 is running on node dm01db06
Instance orcldb7 is running on node dm01db03
Instance orcldb8 is running on node dm01db08

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DATA/ORCLDB/PARAMETERFILE/spfile.431.939367673

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
+DATA/ORCLDB/CONTROLFILE/current.384.939367517


dm01db01-+ASM1 {/home/oracle}:asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB   Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  Y         512             512   4096  4194304  10092544       424           315392         -157484              1             N  ACFS_DG/
MOUNTED  NORMAL  Y         512             512   4096  4194304   7208960       532           225280         -112374              1             N  DATA/
MOUNTED  HIGH    N         512             512   4096  4194304  12390400  12012736           450560         3854058              0             N  RECO/
MOUNTED  NORMAL  N         512             512   4096  4194304   2106432   2104640            30528         1037056              0             Y  DBFS_DG/


Steps to Multiplex the Control File in ASM When the Database Uses SPFILE

  • Update the control_files parameter to include the location of the second control file. The second control file will be created on a different disk group, RECO.
SQL> alter system set control_files='+DATA/ORCLDB/CONTROLFILE/current.384.939367517','+RECO' scope=spfile sid='*';

System altered.

  • Stop the database, then start the instance on node 1 in NOMOUNT state.
dm01db01-orcldb1 {/home/oracle}:srvctl status database -d orcldb
Instance orcldb1 is running on node dm01db01
Instance orcldb2 is running on node dm01db02
Instance orcldb3 is running on node dm01db04
Instance orcldb4 is running on node dm01db05
Instance orcldb5 is running on node dm01db07
Instance orcldb6 is running on node dm01db06
Instance orcldb7 is running on node dm01db03
Instance orcldb8 is running on node dm01db08


dm01db01-orcldb1 {/home/oracle}:srvctl stop database -d orcldb

dm01db01-orcldb1 {/home/oracle}:srvctl status database -d orcldb
Instance orcldb1 is not running on node dm01db01
Instance orcldb2 is not running on node dm01db02
Instance orcldb3 is not running on node dm01db04
Instance orcldb4 is not running on node dm01db05
Instance orcldb5 is not running on node dm01db07
Instance orcldb6 is not running on node dm01db06
Instance orcldb7 is not running on node dm01db03
Instance orcldb8 is not running on node dm01db08

dm01db01-orcldb1 {/home/oracle}:srvctl start instance -d orcldb -i orcldb1 -o nomount

SQL> set lines 200
SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME                                                        VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS     SHU
--------------- ---------------- ---------------------------------------------------------------- ----------------- --------- ------------ --- ---------- ------- --------------- ---------- ---
DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO     CON_ID INSTANCE_MO EDITION FAMILY                                                                           DATABASE_TYPE
----------------- ------------------ --------- --- ---------- ----------- ------- -------------------------------------------------------------------------------- ---------------
              1 orcldb1          dm01db01                                  12.2.0.1.0        09-MAY-17 STARTED      YES          0 STOPPED                 ALLOWED    NO
ACTIVE            UNKNOWN            NORMAL    NO           0 REGULAR     EE                                                                                       RAC

  • Connect to RMAN and duplicate the control file (an optional asmcmd lookup sketch follows the RMAN session below).
dm01db01-orcldb1 {/home/oracle}:rman target /

Recovery Manager: Release 12.2.0.1.0 - Production on Tue May 9 05:07:45 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCLDB (not mounted)

RMAN> restore controlfile from '+DATA/ORCLDB/CONTROLFILE/current.384.939367517';

Starting restore at 09-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=372 instance=orcldb1 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+DATA/ORCLDB/CONTROLFILE/current.384.939367517
output file name=+RECO/ORCLDB/CONTROLFILE/current.1003.943506471
Finished restore at 09-MAY-17

RMAN> exit

Recovery Manager complete.
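
If the RMAN output is not at hand, the OMF name generated for the new control file can also be looked up directly in ASM. This is a minimal sketch, assuming the environment is set to the ASM instance as it was for the asmcmd lsdg listing earlier:

dm01db01-+ASM1 {/home/oracle}:asmcmd ls +RECO/ORCLDB/CONTROLFILE/

The listing shows the control file copy (current.1003.943506471 in this example) that is referenced in the next step.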

  • Update the control_files parameter with the full path and name of both control files.
SQL> alter system set control_files='+DATA/ORCLDB/CONTROLFILE/current.384.939367517','+RECO/ORCLDB/CONTROLFILE/current.1003.943506471' scope=spfile sid='*';

System altered.

  • Shut down the instance, then start the database with srvctl.
SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> exit

dm01db01-orcldb1 {/home/oracle}:srvctl start database -d orcldb

dm01db01-orcldb1 {/home/oracle}:srvctl status database -d orcldb
Instance orcldb1 is running on node dm01db01
Instance orcldb2 is running on node dm01db02
Instance orcldb3 is running on node dm01db04
Instance orcldb4 is running on node dm01db05
Instance orcldb5 is running on node dm01db07
Instance orcldb6 is running on node dm01db06
Instance orcldb7 is running on node dm01db03
Instance orcldb8 is running on node dm01db08

  • Verify that both control files are now stored in ASM (a RAC-wide check follows the output below).
SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
+DATA/ORCLDB/CONTROLFILE/current.384.939367517
+RECO/ORCLDB/CONTROLFILE/current.1003.943506471

SQL> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +DATA/ORCLDB/CONTROLFILE/current.384.939367517, +RECO/ORCLDB/CONTROLFILE/current.1003.943506471
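
Since this is a RAC database, it is also worth confirming that every instance sees both control files. A minimal sketch using the GV$ view:

SQL> select inst_id, name from gv$controlfile order by inst_id;

All eight instances should return the same two control file paths.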


Conclusion
In this article, we have learned how to duplicate a control file in ASM. Multiplexing the control file is recommended to safeguard against control file failures.