When the Exadata Database Machine is installed by Oracle ACS, you will see the following default file systems created:
  • /
  • /dev/shm
  • /boot
  • /u01
There is plenty of free space available in the volume group, which can be used to extend an existing
file system or to create a new one.


In this article we will demonstrate how to create a new file system (named /u02) on an Exadata compute
node.


  • Connect to the compute node as the root user
login as: root
root@dm01db01's password:
Last login: Mon Nov 19 13:11:39 2018 from 10.xx.xxx.xxx



  • List the existing file systems on the Exadata compute node
[root@dm01db01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       59G   38G   19G  67% /
tmpfs                 252G  6.0M  252G   1% /dev/shm
/dev/sda1             480M   63M  393M  14% /boot
/dev/mapper/VGExaDb-LVDbOra1
                      197G   97G   91G  52% /u01



  • Get the free space available in the volume group
[root@dm01db01 ~]# vgdisplay | grep Free
  Free  PE / Size       337428 / 1.29 TiB



 
  • List the physical volumes and logical volumes
[root@dm01db01 ~]# pvs
 PV         VG      Fmt  Attr PSize   PFree
  /dev/sda2  VGExaDb lvm2 a--u 557.36g 202.36g
  /dev/sda3  VGExaDb lvm2 a--u   1.09t   1.09t


[root@dm01db01 ~]# lvs
  LV                 VG      Attr       LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  LVDbOra1           VGExaDb owi-aos--- 200.00g
  LVDbSwap1          VGExaDb -wi-ao----  24.00g
  LVDbSys1           VGExaDb owi-aos---  60.00g
  LVDbSys2           VGExaDb -wi-a-----  60.00g
  LVDoNotRemoveOrUse VGExaDb -wi-a-----   1.00g
  root_snap          VGExaDb swi-I-s---   5.00g      LVDbSys1 100.00
  u01_snap           VGExaDb swi-I-s---   5.00g      LVDbOra1 100.00



 
  • Create a new logical volume of the desired size. Here we are creating a 100GB logical volume
[root@dm01db01 ~]# lvcreate -L100GB -n LVDbOra2 VGExaDb
  Logical volume "LVDbOra2" created.



  • List the logical volumes and ensure our new logical volume is displayed
[root@dm01db01 ~]# lvs
  LV                 VG      Attr       LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  LVDbOra1           VGExaDb owi-aos--- 200.00g
  LVDbOra2           VGExaDb -wi-a----- 100.00g
  LVDbSwap1          VGExaDb -wi-ao----  24.00g
  LVDbSys1           VGExaDb owi-aos---  60.00g
  LVDbSys2           VGExaDb -wi-a-----  60.00g
  LVDoNotRemoveOrUse VGExaDb -wi-a-----   1.00g
  root_snap          VGExaDb swi-I-s---   5.00g      LVDbSys1 100.00
  u01_snap           VGExaDb swi-I-s---   5.00g      LVDbOra1 100.00

 


  • Now create the new file system as shown below
[root@dm01db01 ~]# mkfs.ext3 -j -L u02 /dev/VGExaDb/LVDbOra2
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=u02
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done



  • Mount the new file system
[root@dm01db01 ~]# mkdir /u02

[root@dm01db01 ~]# mount -t ext3 /dev/VGExaDb/LVDbOra2 /u02
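
Note that a mount performed this way does not survive a reboot. To make the new file system permanent, an entry can be added to /etc/fstab. The sketch below shows one possible entry; verify the device path (for example with blkid or lvs) and choose mount options appropriate for your environment before applying it:

```shell
# Hypothetical /etc/fstab entry for the new /u02 file system.
# Confirm the device path on your own node before appending.
echo "/dev/VGExaDb/LVDbOra2 /u02 ext3 defaults 0 0" >> /etc/fstab

# mount -a re-reads /etc/fstab and reports errors if the entry is malformed
mount -a
```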
 


  • Verify that the new file system is mounted and accessible
[root@dm01db01 ~]# df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       59G   38G   19G  67% /
tmpfs                 252G  6.0M  252G   1% /dev/shm
/dev/sda1             480M   63M  393M  14% /boot
/dev/mapper/VGExaDb-LVDbOra1
                      197G   97G   91G  52% /u01
/dev/mapper/VGExaDb-LVDbOra2
                       99G   60M   94G   1% /u02



Conclusion
In this article we have learned how to create a new file system on an Exadata compute node using the free space available in the volume group.



During Oracle Database Appliance deployment you can optionally configure the CloudFS file system. The default mount point is /cloudfs, with a default size of 50GB. Oracle Database Appliance uses Oracle Automatic Storage Management Cluster File System (Oracle ACFS) for database and virtual machine file storage, and ACFS is the only way to configure a shared storage file system on ODA. Oracle ACFS provides both server nodes with concurrent access to the /cloudfs shared file system. The default size of 50GB may not be sufficient and may need to be increased to store large files for business requirements.


In this article we will demonstrate how to resize the /cloudfs file system using the ASMCA GUI interface.


Steps to resize the /cloudfs file system using the ASMCA GUI interface

Step 1: Get the current /cloudfs file system size
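
The screenshot for this step is not reproduced here; the current size can be checked from a terminal, assuming the default /cloudfs mount point:

```shell
# Show the current size, usage and mount point of /cloudfs
df -h /cloudfs

# Optionally, query the ACFS file system itself for size and free space
/sbin/acfsutil info fs /cloudfs
```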

Step 2: Start VNC on node 1. Here I am starting VNC as root user. You can choose to start it as Grid or Oracle user.

[root@odanode1 ~]# rpm -qa *vnc*
tigervnc-1.1.0-18.el6.x86_64
tigervnc-server-1.1.0-18.el6.x86_64
[root@odanode1 ~]# ps -ef|grep vnc
root     23281 20754  0 13:32 pts/1    00:00:00 grep vnc

[root@odanode1 ~]# vncserver :1

You will require a password to access your desktops.

Password:
Verify:

New 'odanode1:1 (root)' desktop is odanode1:1

Creating default startup script /root/.vnc/xstartup
Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/odanode1:1.log

[root@odanode1 ~]# ps -ef|grep vnc
root     23399     1  1 13:32 pts/1    00:00:00 /usr/bin/Xvnc :1 -desktop odanode1:1 (root) -auth /root/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /root/.vnc/passwd -rfbport 5901 -fp catalogue:/etc/X11/fontpath.d -pn
root     23481 23480  0 13:33 pts/1    00:00:00 vncconfig -iconic
root     23636 20754  0 13:33 pts/1    00:00:00 grep vnc

Step 3: Start the VNC viewer on your desktop and enter the hostname/IP address of node 1. Enter the root password, as we started the VNC server as the root user.



Step 4: Switch to grid user and verify the Grid Home

Step 5: Set Oracle Home to Grid home and start asmca
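
Since the screenshots for Steps 4 and 5 are not reproduced here, the equivalent commands can be sketched as follows. The Grid Home path below is hypothetical; confirm the actual path on your system (for example from /etc/oratab) before using it:

```shell
# Step 4: become the grid user (inside the VNC session)
su - grid

# Step 5: point the environment at the Grid Infrastructure home.
# NOTE: /u01/app/12.1.0.2/grid is a hypothetical path -- check /etc/oratab
export ORACLE_HOME=/u01/app/12.1.0.2/grid
export PATH=$ORACLE_HOME/bin:$PATH

# Launch the ASM Configuration Assistant (requires the X/VNC display)
asmca &
```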

Step 6: Click on ASM Cluster File System Tab

Step 7: Right Click on /cloudfs and select Resize option

Step 8: Enter the desired new size. Here I am resizing the /cloudfs to 200GB

Step 9: Resize in progress

Step 10: Resize completed

Step 11: Verify the /cloudfs size



Conclusion

In this article we have learned how to resize/increase the /cloudfs ACFS file system on ODA using the ASMCA GUI utility. The cloudfs file system is configured during ODA deployment with a default size of 50GB, which may not be sufficient for storing large files. Because cloudfs is built on ACFS, it can be resized easily using the ASMCA graphical interface.


During Oracle Database Appliance deployment you can optionally configure the CloudFS file system. The default mount point is /cloudfs, with a default size of 50GB. Oracle Database Appliance uses Oracle Automatic Storage Management Cluster File System (Oracle ACFS) for database and virtual machine file storage, and ACFS is the only way to configure a shared storage file system on ODA. Oracle ACFS provides both server nodes with concurrent access to the /cloudfs shared file system. The default size of 50GB may not be sufficient and may need to be increased to store large files for business requirements.





In this article we will demonstrate how to resize the /cloudfs file system manually using ACFS commands.


Steps to resize the /cloudfs file system


Step 1: Log in to node 1 as the grid user, the owner of the Grid Infrastructure software

[grid@odanoden1 ~]$ id
uid=1000(grid) gid=1001(oinstall) groups=1001(oinstall),1003(racoper),1004(asmdba),1005(asmoper),1006(asmadmin)

Step 2: Verify the existing size of /cloudfs. In my case /cloudfs is 200GB; it was resized in the past from 50GB to 200GB

[grid@odanoden1 ~]$ df -h /cloudfs
Filesystem           Size  Used Avail Use% Mounted on
/dev/asm/acfsvol-23  200G  483M  200G   1% /cloudfs

Step 3: Set the ORACLE_SID to +ASM1

[grid@odanoden1 ~]$ echo $ORACLE_SID

[grid@odanoden1 ~]$ . oraenv

ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@odanoden1 ~]$ echo $ORACLE_SID

+ASM1

Step 4: List the ACFS mounts. Here we can see that the /cloudfs volume is /dev/asm/acfsvol-23

[grid@odanoden1 ~]$ mount |grep asm
/dev/asm/acfsvol-23 on /cloudfs type acfs (rw)
/dev/asm/datastore-272 on /u01/app/oracle/oradata/datastore type acfs (rw)
/dev/asm/datastore-97 on /u02/app/oracle/oradata/datastore type acfs (rw)
/dev/asm/datastore-23 on /u01/app/oracle/fast_recovery_area/datastore type acfs (rw)

Step 5: Get the size of the volume /dev/asm/acfsvol-23

[grid@odanoden1 ~]$ /sbin/advmutil volinfo /dev/asm/acfsvol-23
Device: /dev/asm/acfsvol-23
Interface Version: 1
Size (MB): 204800
Resize Increment (MB): 64
Redundancy: high
Stripe Columns: 8
Stripe Width (KB): 1024
Disk Group: RECO
Volume: ACFSVOL
Compatible.advm: 12.1.0.2.0

Step 6: Resize the /cloudfs as follows. Here we are increasing /cloudfs by 50GB

[grid@odanoden1 ~]$ /sbin/acfsutil size +50g /cloudfs
acfsutil size: new file system size: 268435456000 (256000MB)
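
The numbers reported by acfsutil are consistent with the earlier volinfo output: the volume was 204800 MB (200 GB), we added 50 GB, and acfsutil reports the new size in bytes and MB. A quick sanity check of the arithmetic:

```shell
# 200 GB existing + 50 GB added, expressed in MB (1 GB = 1024 MB here)
echo $(( (200 + 50) * 1024 ))        # 256000 MB

# the same size in bytes (1 MB = 1024 * 1024 bytes)
echo $(( 256000 * 1024 * 1024 ))     # 268435456000 bytes
```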

Step 7: Verify the new size of the volume /dev/asm/acfsvol-23

[grid@odanoden1 ~]$ /sbin/advmutil volinfo /dev/asm/acfsvol-23
Device: /dev/asm/acfsvol-23
Interface Version: 1
Size (MB): 256000
Resize Increment (MB): 64
Redundancy: high
Stripe Columns: 8
Stripe Width (KB): 1024
Disk Group: RECO
Volume: ACFSVOL
Compatible.advm: 12.1.0.2.0

Step 8: Verify the new size of /cloudfs file system

[grid@odanoden1 ~]$ df -h /cloudfs
Filesystem           Size  Used Avail Use% Mounted on
/dev/asm/acfsvol-23  250G  585M  250G   1% /cloudfs


Conclusion

In this article we have learned how to resize/increase the /cloudfs shared file system on ODA. The cloudfs file system is configured during ODA deployment with a default size of 50GB, which may not be sufficient for storing large files. Because cloudfs is built on ACFS, it can be resized easily using ACFS commands.
