When the Exadata Database Machine is installed by Oracle ACS, you will see the following default file systems created:
  • /
  • /dev/shm
  • /boot
  • /u01
There is plenty of space available in the volume group, which can be used to increase the size of an existing file system or to create a new one.


In this article, we will demonstrate how to create a new file system (named /u02) on an Exadata Compute node.


  • Connect to the compute node as the root user
login as: root
root@dm01db01's password:
Last login: Mon Nov 19 13:11:39 2018 from 10.xx.xxx.xxx



  • List the existing file systems on the Exadata Compute node
[root@dm01db01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       59G   38G   19G  67% /
tmpfs                 252G  6.0M  252G   1% /dev/shm
/dev/sda1             480M   63M  393M  14% /boot
/dev/mapper/VGExaDb-LVDbOra1
                      197G   97G   91G  52% /u01



  • Get the free space available in the volume group
[root@dm01db01 ~]# vgdisplay | grep Free
  Free  PE / Size       337428 / 1.29 TiB



 
  • List the physical volumes and logical volumes
[root@dm01db01 ~]# pvs
  PV         VG      Fmt  Attr PSize   PFree
  /dev/sda2  VGExaDb lvm2 a--u 557.36g 202.36g
  /dev/sda3  VGExaDb lvm2 a--u   1.09t   1.09t


[root@dm01db01 ~]# lvs
  LV                 VG      Attr       LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  LVDbOra1           VGExaDb owi-aos--- 200.00g
  LVDbSwap1          VGExaDb -wi-ao----  24.00g
  LVDbSys1           VGExaDb owi-aos---  60.00g
  LVDbSys2           VGExaDb -wi-a-----  60.00g
  LVDoNotRemoveOrUse VGExaDb -wi-a-----   1.00g
  root_snap          VGExaDb swi-I-s---   5.00g      LVDbSys1 100.00
  u01_snap           VGExaDb swi-I-s---   5.00g      LVDbOra1 100.00



 
  • Create a new logical volume of the desired size. Here we are creating a 100GB logical volume
[root@dm01db01 ~]# lvcreate -L100GB -n LVDbOra2 VGExaDb
  Logical volume "LVDbOra2" created.
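
If you would rather allocate all of the remaining free space to the new volume, lvcreate also accepts a percentage of the free extents instead of a fixed size. A minimal sketch only; note that consuming all free space leaves no room for the LVM snapshots (such as root_snap and u01_snap shown above) or for growing other volumes later, so a fixed size is usually the safer choice:

[root@dm01db01 ~]# lvcreate -l 100%FREE -n LVDbOra2 VGExaDb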



  • List the logical volumes and verify that the new logical volume is displayed
[root@dm01db01 ~]# lvs
  LV                 VG      Attr       LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  LVDbOra1           VGExaDb owi-aos--- 200.00g
  LVDbOra2           VGExaDb -wi-a----- 100.00g
  LVDbSwap1          VGExaDb -wi-ao----  24.00g
  LVDbSys1           VGExaDb owi-aos---  60.00g
  LVDbSys2           VGExaDb -wi-a-----  60.00g
  LVDoNotRemoveOrUse VGExaDb -wi-a-----   1.00g
  root_snap          VGExaDb swi-I-s---   5.00g      LVDbSys1 100.00
  u01_snap           VGExaDb swi-I-s---   5.00g      LVDbOra1 100.00

 


  • Now create the new file system as shown below
[root@dm01db01 ~]# mkfs.ext3 -j -L u02 /dev/VGExaDb/LVDbOra2
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=u02
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done



  • Mount the new file system
[root@dm01db01 ~]# mkdir /u02

[root@dm01db01 ~]# mount -t ext3 /dev/VGExaDb/LVDbOra2 /u02
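
Note that this mount will not survive a reboot on its own. To have the new file system mounted automatically at boot, add an entry for it to /etc/fstab. A minimal sketch, assuming the default ext3 mount options are acceptable in your environment:

[root@dm01db01 ~]# echo "/dev/VGExaDb/LVDbOra2 /u02 ext3 defaults 1 2" >> /etc/fstab

[root@dm01db01 ~]# mount -a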
 


  • Verify that the new file system is mounted and accessible
[root@dm01db01 ~]# df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       59G   38G   19G  67% /
tmpfs                 252G  6.0M  252G   1% /dev/shm
/dev/sda1             480M   63M  393M  14% /boot
/dev/mapper/VGExaDb-LVDbOra1
                      197G   97G   91G  52% /u01
/dev/mapper/VGExaDb-LVDbOra2
                       99G   60M   94G   1% /u02
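
If the new file system will hold Oracle software or database files, you will typically also want to hand it over to the Oracle software owner. A minimal sketch, assuming the standard oracle user and oinstall group exist on the node:

[root@dm01db01 ~]# chown oracle:oinstall /u02

[root@dm01db01 ~]# chmod 775 /u02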



Conclusion
In this article, we have learned how to create a new file system on an Exadata Compute node using the free space available in the volume group.



When the Exadata Database Machine is installed by Oracle ACS, the root file system size is set to 30GB. This space may not be sufficient for storing large files, log files, patches and so on, and it can fill up very quickly. You should therefore consider increasing the root file system to avoid space issues. The root file system is built on a volume group, which makes it easy to resize the logical volume on which the root file system is mounted.

The root file system is created on two system partitions, LVDbSys1 and LVDbSys2, and both system partitions must be sized equally at the same time. Only one system partition is active at any time; the other is inactive.

In this article, we will demonstrate how to extend/increase the root file system size on an Exadata Compute node. This activity can be done online, without any downtime, if the file system supports online resizing.

Steps to extend/increase the root file system on an Exadata Compute node

Step 1: Get the current root file system size and utilization. Here we can see that the root file system was expanded earlier to 60GB and is currently 100% used.

[root@dm01db01 /]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       59G   57G     0 100% /

Step 2: Get the logical volume details

[root@dm01db01 /]# lvs -o lv_name,lv_path,vg_name,lv_size
  LV                 Path                            VG      LSize
  LVDbOra1           /dev/VGExaDb/LVDbOra1           VGExaDb 200.00g
  LVDbSwap1          /dev/VGExaDb/LVDbSwap1          VGExaDb  24.00g
  LVDbSys1           /dev/VGExaDb/LVDbSys1           VGExaDb  60.00g
  LVDbSys2           /dev/VGExaDb/LVDbSys2           VGExaDb  60.00g
  LVDoNotRemoveOrUse /dev/VGExaDb/LVDoNotRemoveOrUse VGExaDb   1.00g

Step 3: Check that the root file system can be resized online. Execute the following command; if the output lists the resize_inode feature, the file system can be resized online.

[root@dm01db01 /]# tune2fs -l /dev/mapper/VGExaDb-LVDbSys1 | grep resize_inode
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize

[root@dm01db01 /]# dcli -g ~/dbs_group -l root 'tune2fs -l /dev/mapper/VGExaDb-LVDbSys1 | grep resize_inode'
dm01db01: Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
dm01db02: Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
dm01db03: Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
dm01db04: Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize

Step 4: Get the current active partition information. Here the current active partition is LVDbSys1.

[root@dm01db01 ~]# imageinfo

Kernel version: 4.1.12-94.7.8.el6uek.x86_64 #2 SMP Thu Jan 11 20:41:01 PST 2018 x86_64
Image kernel version: 4.1.12-94.7.8.el6uek
Image version: 12.2.1.1.6.180125.1
Image activated: 2018-04-13 22:11:49 -0500
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1


Step 5: Get the free space available in the volume group. Currently we have around 1.3TB of free space available, so we can easily increase the root file system.

[root@dm01db01 /]# vgdisplay -s
  "VGExaDb" 1.63 TiB  [345.00 GiB used / 1.30 TiB free]

Step 6: Using the lvextend command, increase both logical volumes. In our case, we are increasing the root file system by 30GB, for a total size of 90GB.

[root@dm01db01 /]# lvextend -L +30G /dev/VGExaDb/LVDbSys1
  Size of logical volume VGExaDb/LVDbSys1 changed from 60.00 GiB (15360 extents) to 90.00 GiB (23040 extents).
  Logical volume LVDbSys1 successfully resized.

[root@dm01db01 /]# lvextend -L +30G /dev/VGExaDb/LVDbSys2
  Size of logical volume VGExaDb/LVDbSys2 changed from 60.00 GiB (15360 extents) to 90.00 GiB (23040 extents).
  Logical volume LVDbSys2 successfully resized.

Step 7: Using the resize2fs command, resize the file system as follows

[root@dm01db01 /]# resize2fs /dev/VGExaDb/LVDbSys1
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/VGExaDb/LVDbSys1 is mounted on /; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 6
The filesystem on /dev/VGExaDb/LVDbSys1 is now 23592960 blocks long.

[root@dm01db01 /]# e2fsck -f /dev/VGExaDb/LVDbSys1
e2fsck 1.43-WIP (20-Jun-2013)
/dev/VGExaDb/LVDbSys1 is mounted.
e2fsck: Cannot continue, aborting.


[root@dm01db01 /]# resize2fs /dev/VGExaDb/LVDbSys2
resize2fs 1.43-WIP (20-Jun-2013)
Please run 'e2fsck -f /dev/VGExaDb/LVDbSys2' first.

[root@dm01db01 /]# e2fsck -f /dev/VGExaDb/LVDbSys2
e2fsck 1.43-WIP (20-Jun-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/VGExaDb/LVDbSys2: 167407/3932160 files (0.1% non-contiguous), 4710754/15728640 blocks

[root@dm01db01 /]# resize2fs /dev/VGExaDb/LVDbSys2
resize2fs 1.43-WIP (20-Jun-2013)
Resizing the filesystem on /dev/VGExaDb/LVDbSys2 to 23592960 (4k) blocks.
The filesystem on /dev/VGExaDb/LVDbSys2 is now 23592960 blocks long.

Note: You cannot run e2fsck on the active partition LVDbSys1. For the inactive partition LVDbSys2, you must run e2fsck first and then resize it.

Step 8: Verify the new file system size

[root@dm01db01 /]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       89G   57G   29G  67% /

Repeat the steps above on all the compute nodes in the Exadata Rack.
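
Once the change has been made on every node, you can verify the new root file system size across the whole rack in one pass with dcli, assuming the same ~/dbs_group file used in Step 3:

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root 'df -h /'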


Conclusion

In this article, we have learned how to extend/increase the root file system on an Exadata Compute node online, without any outage. The root file system is created on two system partitions, LVDbSys1 and LVDbSys2, and both system partitions must be sized equally at the same time. Only one system partition is active at any time; the other is inactive.
