Overview

Data Pump, introduced in Oracle 10g, is a powerful utility for loading and unloading data using dump files. Oracle Data Pump technology consists of two components: the Data Pump Export utility, which unloads data objects from a database, and the Data Pump Import utility, which loads data objects into a database. You access the two utilities through a pair of clients called expdp and impdp. As their names indicate, the first corresponds to Data Pump Export and the second to Data Pump Import. You can control both Export and Import jobs with a number of parameters.

In this article we will demonstrate different Data Pump (expdp/impdp) scenarios.

Prerequisites

  • Creating a Database Directory
  • Create a directory object named expdp_dir that maps to a physical location on disk:
SQL> create directory expdp_dir as '/orahm/app/oracle/admin/db01/dpdump';

  • Granting Access to the Directory
  • Grant permissions on the directory object to the user that will run Data Pump:
SQL> grant read, write on directory expdp_dir to sys;

Table level

Export Source DB

SQL> select * from dba_directories;
OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

$expdp directory=DATA_PUMP_DIR tables=owner.table_name dumpfile=exp-tab.dmp logfile=exp-tab.log

scp /orahm/app/oracle/admin/db01/dpdump/exp-tab.dmp  oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import Target DB

SQL> select * from dba_directories;

OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

$impdp directory=DATA_PUMP_DIR tables=owner.table_name dumpfile=exp-tab.dmp logfile=imp-tab.log


Schema level
The example below performs a schema-level export and import.

Export Source DB

SQL> select * from dba_directories;

OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump
$expdp directory=DATA_PUMP_DIR schemas=ABC dumpfile=exp-schema.dmp logfile=exp-schema.log

scp /orahm/app/oracle/admin/db01/dpdump/exp-schema.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import Target DB

$impdp directory=DATA_PUMP_DIR schemas=ABC dumpfile=exp-schema.dmp logfile=imp-schema.log

Check for invalid objects and compile if required.

SQL> select object_name from dba_objects where owner='ABC' and status='INVALID';

Compile any invalid objects:

SQL> ALTER PROCEDURE <PROCEDURE_NAME> COMPILE;
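Where many objects are invalid, writing each ALTER statement by hand is tedious. A query along these lines (a sketch using the same DBA views as above; spool the output and run it) can generate the compile statements:

```sql
-- Sketch: generate ALTER ... COMPILE statements for invalid objects owned by ABC.
-- Note: PACKAGE BODY and TYPE BODY need "COMPILE BODY" instead, so adjust
-- those rows, or simply run $ORACLE_HOME/rdbms/admin/utlrp.sql to recompile
-- all invalid objects in one pass.
select 'ALTER ' || object_type || ' ' || owner || '.' || object_name || ' COMPILE;'
from   dba_objects
where  owner = 'ABC'
and    status = 'INVALID';
```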

Database level
The example below performs a full database export and import.

Export Source DB

SQL> select * from dba_directories;
OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump
Check the tablespace names and sizes so they can be matched on the target db:

select a.TABLESPACE_NAME,
       round(a.BYTES/1024/1024) "Total (MB)",
       round(b.BYTES/1024/1024) "Free (MB)",
       round(((a.BYTES-b.BYTES)/a.BYTES)*100,2) "% USED",
       100-round(((a.BYTES-b.BYTES)/a.BYTES)*100,2) "% FREE"
from
  (select TABLESPACE_NAME, sum(BYTES) BYTES
   from dba_data_files
   group by TABLESPACE_NAME) a,
  (select TABLESPACE_NAME, sum(BYTES) BYTES
   from dba_free_space
   group by TABLESPACE_NAME) b
where a.TABLESPACE_NAME = b.TABLESPACE_NAME
order by ((a.BYTES-b.BYTES)/a.BYTES) desc;

$expdp directory=DATA_PUMP_DIR full=y dumpfile=exp-full.dmp logfile=exp-full.log

scp /orahm/app/oracle/admin/db01/dpdump/exp-full.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/


Import Target DB

Make sure the target db contains all the tablespaces, with enough free space to accommodate the source objects. Use the same tablespace usage query shown above under Export Source DB.

$impdp directory=DATA_PUMP_DIR full=y dumpfile=exp-full.dmp logfile=imp-full.log

Check for invalid objects and recompile.

SQL> select object_name,owner,status from dba_objects where status='INVALID';
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
SQL> select object_name,owner,status from dba_objects where status='INVALID';

Using Parameter file
The example below runs the export and import using a parameter file.

Export Source DB

SQL> select * from dba_directories;
OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump
Check the tablespace names and sizes so they can be matched on the target db, using the tablespace usage query shown in the Database level section.
Create a parameter file as below under the data pump directory.

$ vi full_db.par
directory=DATA_PUMP_DIR
full=y
dumpfile=exp-full.dmp
logfile=exp-full.log

Take the export using the par file:
$ expdp parfile=full_db.par

scp /orahm/app/oracle/admin/db01/dpdump/exp-full.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/


Import Target DB

Make sure the target db contains all the tablespaces, with enough free space to accommodate the source objects. Use the tablespace usage query shown in the Database level section.
                               
Create a parameter file as below under the data pump directory on the target.

$ vi full_db.par
directory=DATA_PUMP_DIR
full=y
dumpfile=exp-full.dmp
logfile=imp-full.log

Perform the import using the parameter file:
$ impdp parfile=full_db.par

Check for invalid objects and recompile.

SQL> select object_name,owner,status from dba_objects where status='INVALID';
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
SQL> select object_name,owner,status from dba_objects where status='INVALID';


Data Pump Export/Import more examples

  • Using the parallel option
The PARALLEL option can make exports faster. Data Pump generates multiple dump files depending on the parallel degree specified during export.

Export Source DB

SQL> select * from dba_directories;

OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

$expdp directory=DATA_PUMP_DIR schemas=ABC dumpfile=exp-schema_%U.dmp logfile=exp-schema.log parallel=2

scp /orahm/app/oracle/admin/db01/dpdump/exp-schema_*.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import Target DB

$impdp directory=DATA_PUMP_DIR schemas=ABC dumpfile=exp-schema_%U.dmp logfile=imp-schema.log

Check for invalid objects and compile if required.

SQL> select object_name from dba_objects where owner='ABC' and status='INVALID';

Compile any invalid objects:

SQL> ALTER PROCEDURE <PROCEDURE_NAME> COMPILE;
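The same parallel export can also be driven from a parameter file; a minimal sketch (the file name is illustrative, the schema and dump file names follow this section's examples):

```text
# exp_schema_par.par -- hypothetical parameter file for a parallel schema export
directory=DATA_PUMP_DIR
schemas=ABC
dumpfile=exp-schema_%U.dmp
logfile=exp-schema.log
parallel=2
```

Run it with expdp parfile=exp_schema_par.par; the %U substitution lets each parallel worker write its own numbered dump file.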

  • Using the compression option
The COMPRESSION parameter reduces the size of the dump files. Note that COMPRESSION=ALL requires the Advanced Compression option; metadata-only compression is available without it.

Export Source DB

SQL> select * from dba_directories;

OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

$expdp directory=DATA_PUMP_DIR compression=all schemas=ABC dumpfile=exp-schema.dmp logfile=exp-schema.log

scp /orahm/app/oracle/admin/db01/dpdump/exp-schema.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import Target DB

$impdp directory=DATA_PUMP_DIR schemas=ABC dumpfile=exp-schema.dmp logfile=imp-schema.log

Check for invalid objects and compile if required.

SQL> select object_name from dba_objects where owner='ABC' and status='INVALID';

Compile any invalid objects:

SQL> ALTER PROCEDURE <PROCEDURE_NAME> COMPILE;


Creating a DDL file
The SQLFILE option of impdp extracts the DDL from a dump file into a script, without loading any data.

Export Source DB

SQL> select * from dba_directories;

OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

$expdp directory=DATA_PUMP_DIR tables=owner.table_name dumpfile=exp-tab.dmp logfile=exp-tab.log

scp /orahm/app/oracle/admin/db01/dpdump/exp-tab.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import Target DB

Extract only the DDL (no rows are imported; the DDL is written to ddl-script.sql in the directory):
$impdp directory=DATA_PUMP_DIR tables=owner.table_name dumpfile=exp-tab.dmp logfile=imp-tab.log sqlfile=ddl-script.sql

  • Cloning a user (remap_schema)
The REMAP_SCHEMA option imports objects from one schema into another schema.

Export Source DB

SQL> select * from dba_directories;

OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

$expdp directory=DATA_PUMP_DIR compression=all schemas=ABC dumpfile=exp-schema.dmp logfile=exp-schema.log

scp /orahm/app/oracle/admin/db01/dpdump/exp-schema.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import Target DB

$impdp directory=DATA_PUMP_DIR remap_schema=ABC:XYZ dumpfile=exp-schema.dmp logfile=imp-schema.log

Check for invalid objects and compile if required.

SQL> select object_name from dba_objects where owner='XYZ' and status='INVALID';

Compile any invalid objects:

SQL> ALTER PROCEDURE <PROCEDURE_NAME> COMPILE;

  • Cloning a table (remap_table)
The REMAP_TABLE option creates the table under a new name during import, to avoid overwriting an existing table in the target DB.

Export Source DB

SQL> select * from dba_directories;

OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

$expdp directory=DATA_PUMP_DIR tables=ABC.table1 dumpfile=exp-tab.dmp logfile=exp-tab.log

scp /orahm/app/oracle/admin/db01/dpdump/exp-tab.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import Target DB

Remap table to new name.

$impdp directory=DATA_PUMP_DIR tables=ABC.table1 remap_table=ABC.table1:table2 dumpfile=exp-tab.dmp logfile=imp-tab.log

  • Importing when objects already exist
The TABLE_EXISTS_ACTION option controls what happens when an object being imported already exists on the target.

TABLE_EXISTS_ACTION=[SKIP | APPEND | TRUNCATE | REPLACE]
SKIP leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
APPEND loads rows from the source and leaves existing rows unchanged.
TRUNCATE deletes existing rows and then loads rows from the source.
REPLACE drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
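For example, to append into tables that already exist on the target, a parameter file might look like this (a sketch; the file name is illustrative and the schema and dump file names follow this section's examples):

```text
# imp_schema.par -- hypothetical import parameter file
directory=DATA_PUMP_DIR
schemas=ABC
dumpfile=exp-schema.dmp
logfile=imp-schema.log
table_exists_action=APPEND
```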

Export Source DB

SQL> select * from dba_directories;

OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

$expdp directory=DATA_PUMP_DIR schemas=ABC dumpfile=exp-schema.dmp logfile=exp-schema.log

scp /orahm/app/oracle/admin/db01/dpdump/exp-schema.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import

$impdp directory=DATA_PUMP_DIR schemas=ABC dumpfile=exp-schema.dmp logfile=imp-schema.log table_exists_action=<SKIP|APPEND|TRUNCATE|REPLACE>

Check for invalid objects and compile if required.

SQL> select object_name from dba_objects where owner='ABC' and status='INVALID';

Compile any invalid objects:

SQL> ALTER PROCEDURE <PROCEDURE_NAME> COMPILE;

  • Exclude option
When the EXCLUDE parameter is used, all objects except those it specifies are included in the export/import.

Export Source DB

SQL> select * from dba_directories;
OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump
Check the tablespace names and sizes so they can be matched on the target db, using the tablespace usage query shown in the Database level section.

Create a parameter file as below under the data pump directory.

$ vi full_db.par
directory=DATA_PUMP_DIR
full=y
dumpfile=exp-full.dmp
logfile=exp-full.log
exclude=schema:"IN ('SYS','SYSTEM','ANONYMOUS','DBSNMP','DIP','EXFSYS','ORACLE_OCM','OUTLN','WMSYS','XDB')"

Take export using par file.

$ expdp parfile=full_db.par

scp /orahm/app/oracle/admin/db01/dpdump/exp-full.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import Target DB

Make sure the target db contains all the tablespaces, with enough free space to accommodate the source objects. Use the tablespace usage query shown in the Database level section.
                               
Create a parameter file as below under the data pump directory on the target.

$ vi full_db.par
directory=DATA_PUMP_DIR
full=y
dumpfile=exp-full.dmp
logfile=imp-full.log

Perform the import using the parameter file:
$impdp parfile=full_db.par

Check for invalid objects and recompile.

SQL> select object_name,owner,status from dba_objects where status='INVALID';
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
SQL> select object_name,owner,status from dba_objects where status='INVALID';

  • Include option
When the INCLUDE parameter is used, only the objects it specifies are included in the export/import.

Export Source DB

SQL> select * from dba_directories;

OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

$expdp directory=DATA_PUMP_DIR schemas=ABC include=TABLE:"IN('EMP','DEPT')" dumpfile=exp-schema.dmp logfile=exp-schema.log

scp /orahm/app/oracle/admin/db01/dpdump/exp-schema.dmp oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import Target DB

$impdp directory=DATA_PUMP_DIR schemas=ABC dumpfile=exp-schema.dmp logfile=imp-schema.log

Check for invalid objects and compile if required.

SQL> select object_name from dba_objects where owner='ABC' and status='INVALID';

Compile any invalid objects:

SQL> ALTER PROCEDURE <PROCEDURE_NAME> COMPILE;

  • Creating a consistent export
The CONSISTENT=Y parameter (accepted by expdp in legacy mode from 11.2 onward) makes the export consistent to a single point in time.

Export Source DB

SQL> select * from dba_directories;
OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

$expdp directory=DATA_PUMP_DIR tables=owner.table_name dumpfile=exp-tab.dmp logfile=exp-tab.log consistent=y

scp /orahm/app/oracle/admin/db01/dpdump/exp-tab.dmp  oracle@targetdb:/orahm/app/oracle/admin/db01/dpdump/

Import Target DB

$impdp directory=DATA_PUMP_DIR tables=owner.table_name dumpfile=exp-tab.dmp logfile=imp-tab.log
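CONSISTENT=Y is the legacy exp syntax; the native Data Pump equivalent is FLASHBACK_TIME (or FLASHBACK_SCN). A minimal parameter-file sketch, reusing the file names from this section (the par file name is illustrative):

```text
# exp_consistent.par -- hypothetical parameter file for a consistent export
directory=DATA_PUMP_DIR
tables=owner.table_name
dumpfile=exp-tab.dmp
logfile=exp-tab.log
flashback_time=systimestamp
```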

  • Network based import
NETWORK_LINK parameter identifies a database link to be used as the source for a network export/import.

Export Source DB

SQL> select * from dba_directories;
OWNER       DIRECTORY_NAME        DIRECTORY_PATH
————————————————————————-
SYS         DATA_PUMP_DIR        /orahm/app/oracle/admin/db01/dpdump

Create a db link on the local db pointing to the source db for the export task. The export reads from the source db over the link and writes the dump file to the local dump directory. (Note that SYS cannot connect over a database link, so use SYSTEM or another suitably privileged user.)

SQL> CREATE DATABASE LINK remote_expdp CONNECT TO system IDENTIFIED BY sys123 USING 'DB01';

$expdp directory=DATA_PUMP_DIR tables=owner.table_name network_link=remote_expdp dumpfile=exp-tab.dmp logfile=exp-tab.log

Import Target DB

Create a db link on the local DB pointing to the source db. The import pulls the data directly over the network link, without a dump file on the source db.

SQL> CREATE DATABASE LINK remote_impdp CONNECT TO system IDENTIFIED BY sys123 USING 'DB01';

$impdp directory=DATA_PUMP_DIR tables=owner.table_name logfile=imp-tab.log network_link=remote_impdp


Conclusion

In this article we have demonstrated different Data Pump (expdp/impdp) scenarios.


Overview:
In this document we will upgrade Oracle GoldenGate 11g to 12c.

Environment Details

Hostname: Node1 and Node2
Operating system: Linux
Environment: Production
Oracle Home: /u01/app/oracle/product/11.2.0/db
GoldenGate software staging location: /u01/app/oracle/software/OGG12c
GoldenGate 11g Shared Home: /oraggs/app/ggs
GoldenGate 12c Shared Home: /oraggs/app/ggs12c/ggs

Configure GoldenGate 12c:

  • Install the Oracle GoldenGate 12c software in a new home, without creating or starting the Manager process. The steps for installing Oracle GoldenGate 12c are at:

http://netsoftmate.blogspot.in/2017/01/installing-oracle-goldengate-12c-base.html

  • After installing Oracle GoldenGate, the next step is to upgrade the existing 11g installation to 12c.

Create the subdirectories:

cd /oraggs/app/gg12c
GGSCI>create subdirs
Parameter files                /oraggs/app/gg12c/dirprm
Report files                   /oraggs/app/gg12c/dirrpt
Checkpoint files               /oraggs/app/gg12c/dirchk
Process status files           /oraggs/app/gg12c/dirpcs
SQL script files               /oraggs/app/gg12c/dirsql
Database definitions files     /oraggs/app/gg12c/dirdef
Extract data files             /oraggs/app/gg12c/dirdat
Temporary files                /oraggs/app/gg12c/dirtmp
Stdout files                   /oraggs/app/gg12c/dirout

Create softlink

cd /oraggs

ln -s /oraggs/dirtmp /oraggs/app/gg12c
ln -s /oraggs/dirsql /oraggs/app/gg12c
ln -s /oraggs/dirdef /oraggs/app/gg12c
ln -s /oraggs/dirdat /oraggs/app/gg12c
ln -s /oraggs/dirout /oraggs/app/gg12c
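The ln commands above can be wrapped in a small helper so the same links are created identically on both nodes. A sketch (the link_gg_dirs name is hypothetical; the directory list and paths come from this article, so verify them for your environment):

```shell
# Create symlinks for the shared GoldenGate working directories.
# $1 = source base (e.g. /oraggs), $2 = GoldenGate 12c home (e.g. /oraggs/app/gg12c)
link_gg_dirs() {
    src=$1
    home=$2
    for d in dirtmp dirsql dirdef dirdat dirout; do
        # -sfn replaces an existing link instead of failing on re-run
        ln -sfn "$src/$d" "$home/$d"
    done
}

# In this article's environment:
# link_gg_dirs /oraggs /oraggs/app/gg12c
```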

Upgrade Oracle GoldenGate 11g to 12c
  • Take a backup of the existing GoldenGate 11g home.
$ cd /oraggs/app
$ cp -pR ggs ggs_bkp
  • Navigate to GoldenGate 11g home and stop the GoldenGate processes
$ cd /oraggs/app/ggs
$ ./ggsci
GGSCI> send extract <name>, logend
GGSCI> info extract <name>
GGSCI> stop er *
  • Now, stop the Manager process
$ cd /oraggs/app/ggs
$ ./ggsci
GGSCI> stop MGR!
GGSCI> info all
  • Copy the GoldenGate dir* directories from the GoldenGate 11g home (/oraggs/app/ggs) to the GoldenGate 12c home (/oraggs/app/ggs12c/ggs)
$ cd /oraggs/app/ggs
$ ls -l dir*
$ cp -pR dir* /oraggs/app/ggs12c/ggs/
  • Verify the directories are copied successfully to GoldenGate 12c home
$ cd /oraggs/app/ggs12c/ggs
$ ls -l dir*
  • Start GoldenGate process from Oracle GoldenGate 12c home
$ cd /oraggs/app/ggs12c/ggs
$ ./ggsci
GGSCI> info all
GGSCI> start mgr
GGSCI> info all
GGSCI> start er *
GGSCI> info all
ln -s /oraggs/dirrpt /oraggs/app/gg12c
ln -s /oraggs/dirpcs /oraggs/app/gg12c
ln -s /oraggs/dirchk /oraggs/app/gg12c
ln -s /oraggs/dirprm /oraggs/app/gg12c
  • Check the GoldenGate process status and confirm that the Manager, Extract and Replicat processes are running
cd /oraggs/app/gg12c
./ggsci
GGSCI> info all

Conclusion
In this article we have learnt how to Upgrade Oracle GoldenGate 11g to 12c


Overview
In this document we will upgrade Oracle GoldenGate 12c from 12.1.2.1.0 to 12.1.2.1.10.

Environment Details

Hostname: Node1 and Node2
Operating system: Linux
Environment: Production
Oracle Home: /u01/app/oracle/product/11.2.0/db
GoldenGate software staging location: /u01/app/oracle/software/OGG12c
GoldenGate Shared Home: /oraggs/app/ggs12c/ggs


Software requirement:
Download the Oracle GoldenGate 12.1.2.1.10 patch (p21785294_1212110_Linux-x86-64.zip) from My Oracle Support: http://support.oracle.com

Steps to Upgrade Oracle GoldenGate Software
  • Create staging area for GoldenGate software.
$ cd /u01/app/oracle/software/
$ mkdir OGG12c
  • Copy the downloaded Oracle GoldenGate 12c base software to Node 1
  • Unzip the GoldenGate patch in the OGG12c directory
$ cd /u01/app/oracle/software/OGG12c
$ unzip p21785294_1212110_Linux-x86-64.zip
  • Navigate to Oracle GoldenGate 12c home and validate OraInventory
$ cd /oraggs/app/gg12c
$ cd OPatch
$ ./opatch lsinventory
Invoking OPatch 11.2.0.1.7
Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.
Oracle Home       : /oraggs/app/gg12c
Central Inventory : /oraggs/app/oraInventory
from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version       : 11.2.0.3.0
Log file location : /oraggs/app/gg12c/cfgtoollogs/opatch/opatch2016-01-13_13-21-12PM.log
Lsinventory Output file location : /oraggs/app/gg12c/cfgtoollogs/opatch/lsinv/lsinventory2016-01-13_13-21-12PM.txt
——————————————————————————–
Installed Top-level Products (1):
Oracle GoldenGate Core                                               12.1.2.1.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
OPatch succeeded.
  • Verify the Opatch version
$ cd /oraggs/app/gg12c/OPatch

[oracle@Node1 OPatch]$ ./opatch version
Invoking OPatch 11.2.0.1.7
OPatch Version: 11.2.0.1.7
OPatch succeeded.
  • Set Oracle Home variable to Oracle GoldenGate 12c software location
$ export ORACLE_HOME=/oraggs/app/gg12c
  •  Ensure Oracle GoldenGate Processes are offline.
./ggsci
GGSCI> info all
  • Stop Oracle GoldenGate Processes as follows:
./ggsci
GGSCI> stop er *
GGSCI> stop mgr!
    • Navigate to the GoldenGate patch location and apply the patch
    $ cd /u01/app/oracle/software/OGG12c/21785294
    $ /oraggs/app/gg12c/OPatch/opatch apply
    •  Verify the patch is applied to GoldenGate 12c Home
    $ cd /oraggs/app/gg12c/OPatch
    $ ./opatch lsinventory

    Invoking OPatch 11.2.0.1.7
    Oracle Interim Patch Installer version 11.2.0.1.7
    Copyright (c) 2011, Oracle Corporation.  All rights reserved.
    Oracle Home       : /oraggs/app/ggs12c/ggs
    Central Inventory : /oraggs/app/oraInventory
    from           : /etc/oraInst.loc
    OPatch version    : 11.2.0.1.7
    OUI version       : 11.2.0.3.0
    Log file location : /oraggs/app/ggs12c/ggs/cfgtoollogs/opatch/opatch2016-01-13_14-37-10PM.log
    Lsinventory Output file location : /oraggs/app/ggs12c/ggs/cfgtoollogs/opatch/lsinv/lsinventory2016-01-13_14-37-10PM.txt
    ——————————————————————————–
    Installed Top-level Products (1):
     Oracle GoldenGate Core                                               12.1.2.1.0
    There are 1 products installed in this Oracle Home.
    Interim patches (1) :

    Patch  21785294     : applied on Mon Nov 23 21:30:51 PST 2015
    Unique Patch ID:  19375140
       Created on 3 Sep 2015, 10:27:21 hrs PST8PDT
       Bugs fixed:
         21181052, 19241234, 20660883, 21473564, 20898978, 21180187, 19132627
         20643144, 21131159, 18875912, 20543714, 21079205, 20996660, 20367405
         20803745, 20657667, 19889991, 20659379, 20045536, 19987316, 20554464
         19142865, 19681198, 19813884, 20347665, 20148126, 20917331, 19048634
         21182116, 19920244, 19798268, 19602692, 19535319, 19441114, 21110369
         19374174, 20664851, 21045025, 20681946, 19903590, 19724915, 21165823
         19327073, 20531064, 21226965, 19818362, 20021370, 20724906, 21124385
         21328574, 19264441, 18996447, 21345919, 21329486, 21090428, 19414121
         19545128, 21121987, 20648352, 19721652, 20418503, 21209445, 21246170
         20853556, 20756705, 21103736, 17866697, 16084751, 20778054, 21111031
         20853777, 19244349, 20092876, 20782113, 20343181, 20532340, 20951173
         20660006, 19067094, 19560158, 20577891, 18958026, 17423191, 20641896
         19624524, 19516537, 20563015, 20078949, 21026962, 19681035, 20213167
         19782389, 20679687, 19781984, 19594967, 20878216, 20989581
    ——————————————————————————–
    OPatch succeeded.
    • Verify the new Oracle GoldenGate 12c version is 12.1.2.1.10
    $ cd /oraggs/app/gg12c
    $ ./ggsci -v
    $ cd /oraggs/app/ggs12c/ggs
    $ ./ggsci -v
    Oracle GoldenGate Command Interpreter for Oracle
    Version 12.1.2.1.10 21604177 OGGCORE_12.1.2.1.0OGGBP_PLATFORMS_150902.1337_FBO
    Linux, x64, 64bit (optimized), Oracle 11g on Sep  2 2015 20:46:03
    Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved

    Conclusion:
    In this article we have learnt how to upgrade Oracle GoldenGate 12c from 12.1.2.1.0 to 12.1.2.1.10.


    Overview
    The Oracle Database File System (DBFS) creates a file system interface to files stored in the database. DBFS is similar to NFS in that it provides a shared network file system that looks like a local file system. Because the data is stored in the database, the file system inherits all the HA/DR capabilities provided by the database.
     
    With DBFS, the server is the Oracle Database. Files are stored as SecureFiles LOBs. PL/SQL procedures implement file system access primitives such as create, open, read, write, and list directory. The implementation of the file system in the database is called the DBFS SecureFiles Store. The DBFS SecureFiles Store allows users to create file systems that can be mounted by clients. Each file system has its own dedicated tables that hold the file system content.
     
    In this article we will demonstrate the steps to configure Oracle Database Filesystem (DBFS) on Oracle Exadata Database Machine.
     
    Note: On platforms other than Oracle Exadata Database Machine, additional setup steps may be required to install the required fuse RPM packages which are installed by default on Oracle Database Machine database servers.
     
    Assumption
     
    • Exadata DBM running Oracle 11.2 Software
    • User equivalence for root and Oracle user is setup between compute nodes
    • dbs_group file is created in the user's home directory, containing one hostname or IP per line
    oraclouddbadm01-dbm011 {/home/oracle}:cat dbs_group
    oraclouddbadm01
    oraclouddbadm02

    • root and Oracle user password


    Steps to Configure DBFS on Exadata Database Machine

    • Add the oracle user to the fuse group on Linux.  Run these commands as the root user.
    [root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root usermod -a -G fuse oracle
    • Create the /etc/fuse.conf file with the user_allow_other option. Ensure proper privileges are applied to this file.
    [root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root “echo user_allow_other > /etc/fuse.conf”
    [root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf
    [root@oraclouddbadm01 ~]# cat /etc/fuse.conf
    user_allow_other
     
    • On all Compute nodes, create a directory that will be used as the mount point for the DBFS file system.
    [root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root mkdir /dbfs_direct
    [root@oraclouddbadm01 ~]# ls -l /dbfs_direct
    total 0
     
    • Change ownership on the mount point directory so oracle can access it.
    [root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root chown oracle:dba /dbfs_direct
     
    • Restart the Clusterware to pickup additional group (fuse) membership for Oracle user.
    Run the following commands as root:
    [root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root /u01/app/11.2.0.4/grid/bin/crsctl stop crs
     
    [root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root /u01/app/11.2.0.4/grid/bin/crsctl start crs
    oraclouddbadm01: CRS-4123: Oracle High Availability Services has been started.
    oraclouddbadm02: CRS-4123: Oracle High Availability Services has been started.
     
    • Create a database to hold the DBFS repository. Refer to the link below to create the DBFS repository database.
    http://netsoftmate.blogspot.in/2017/01/configure-database-for-dbfs-on-exadata.html
     
    • To create the repository, create a new tablespace to hold the DBFS objects and a database user that will own the objects.
    SQL> create bigfile tablespace dbfsts datafile '+FLASH_DATA01' size 100g autoextend on next 8g maxsize 300g NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
     
    Tablespace created.
     
    • Create a user and assign necessary privileges to create objects.
    SQL> create user dbfs_user identified by oracle default tablespace dbfsts quota unlimited on dbfsts;
     
    User created.
     
    SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;
     
    Grant succeeded.
     
    • Now create the database objects that will hold DBFS. The script takes two arguments: the tablespace name and the file system name.
    oraclouddbadm01-fsdb1 {/home/oracle}:. oraenv
    ORACLE_SID = [fsdb1] ? fsdb1
    The Oracle base remains unchanged with value /u01/app/oracle
     
    oraclouddbadm01-fsdb1 {/home/oracle}:cd $ORACLE_HOME/rdbms/admin
     
    oraclouddbadm01-fsdb1 {/u01/app/oracle/product/11.2.0.4/dbhome_1/rdbms/admin}:sqlplus dbfs_user/oracle
     
    SQL*Plus: Release 11.2.0.4.0 Production on Fri Jul 4 12:46:22 2014
     
    Copyright (c) 1982, 2013, Oracle.  All rights reserved.
     
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
     
    SQL> start dbfs_create_filesystem dbfsts FS1
    No errors.
    --------
    CREATE STORE:
    begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_FS1', tbl_name =>
    'T_FS1', tbl_tbs => 'dbfsts', lob_tbs => 'dbfsts', do_partition => false,
    partition_key => 1, do_compress => false, compression => '', do_dedup => false,
    do_encrypt => false); end;
    --------
    REGISTER STORE:
    begin dbms_dbfs_content.registerStore(store_name=> 'FS_FS1', provider_name =>
    'sample1', provider_package => 'dbms_dbfs_sfs'); end;
    --------
    MOUNT STORE:
    begin dbms_dbfs_content.mountStore(store_name=>'FS_FS1', store_mount=>'FS1');
    end;
    --------
    CHMOD STORE:
    declare m integer; begin m := dbms_fuse.fs_chmod('/FS1', 16895); end;
    No errors.
     
    • Perform the one-time setup steps for mounting the filesystem. You can download the mount-dbfs.sh script attached to the MOS note 1054431.1. It provides the logic and necessary scripting to mount DBFS as a cluster resource.


    • Perform the one-time setup steps required for the chosen mount method. There are two options for mounting the DBFS filesystem, and each results in the filesystem being available at /dbfs_direct. Choose one of the two:
      • The first option is to use the dbfs_client command directly, without an Oracle Wallet. This option requires no additional setup steps.
      • The second option is to use an Oracle Wallet to store the password and mount with the mount command.
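    As a sketch of how the first (walletless) option is invoked, the snippet below only composes a dbfs_client command line from the dbfs_user account, the fsdb.local alias, and the /dbfs_direct mount point used in this article; build_dbfs_cmd is a hypothetical helper, and the mount options match the MOUNT_OPTIONS set later in mount-dbfs.sh.

```shell
# Sketch of the walletless mount (option 1). build_dbfs_cmd is a hypothetical
# helper that only composes the command line; actually running dbfs_client
# requires an Oracle client environment and a reachable DBFS repository
# database, with the dbfs_user password supplied on stdin.
build_dbfs_cmd() {
  local user=$1 db=$2 mnt=$3
  printf 'dbfs_client %s@%s -o allow_other,direct_io %s\n' "$user" "$db" "$mnt"
}

build_dbfs_cmd dbfs_user fsdb.local /dbfs_direct
# On a database node it would be launched along the lines of:
#   echo "$DBFS_PASSWORD" | nohup dbfs_client dbfs_user@fsdb.local \
#       -o allow_other,direct_io /dbfs_direct &
```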
    • On all compute nodes, set the library path using the following commands:
    [root@oraclouddbadm01 ~]# dcli -g dbs_group -l root mkdir -p /usr/local/lib
    [root@oraclouddbadm01 ~]# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0.4/dbhome_1/lib/libnnz11.so /usr/local/lib/libnnz11.so
    [root@oraclouddbadm01 ~]# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0.4/dbhome_1/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1
    [root@oraclouddbadm01 ~]# dcli -g dbs_group -l root ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2
    [root@oraclouddbadm01 ~]# dcli -g dbs_group -l root 'echo /usr/local/lib >> /etc/ld.so.conf.d/usr_local_lib.conf'
    [root@oraclouddbadm01 ~]# dcli -g dbs_group -l root ldconfig
     
    • Create a new TNS_ADMIN directory ($HOME/dbfs/tnsadmin) for exclusive use by the DBFS mount script.
    oraclouddbadm01-dbm011 {/home/oracle}:dcli -g dbs_group -l oracle mkdir -p $HOME/dbfs/tnsadmin
     
    • Create the $HOME/dbfs/tnsadmin/tnsnames.ora file on the first node with the following contents. Here the DBFS repository database is named fsdb, the instance on the first node is fsdb1, and ORACLE_HOME is /u01/app/oracle/product/11.2.0.4/dbhome_1.
    oraclouddbadm01-dbm011 {/home/oracle}:vi $HOME/dbfs/tnsadmin/tnsnames.ora
    oraclouddbadm01-dbm011 {/home/oracle}:cat $HOME/dbfs/tnsadmin/tnsnames.ora
    fsdb.local =
       (DESCRIPTION =
           (ADDRESS =
             (PROTOCOL=BEQ)
             (PROGRAM=/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle)
             (ARGV0=oraclefsdb1)
             (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
             (ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1,ORACLE_SID=fsdb1')
           )
        (CONNECT_DATA=(SID=fsdb1))
       )
     
    • On the other nodes, create similar entries (all using the name "fsdb.local"), changing every occurrence of fsdb1 to the name of the instance running on the node where that tnsnames.ora file resides. The tnsnames.ora file on each node is therefore slightly different, so that each file references the locally running instance.
    oraclouddbadm02-dbm012 {/home/oracle}:cd dbfs/tnsadmin/
    oraclouddbadm02-dbm012 {/home/oracle/dbfs/tnsadmin}:cat tnsnames.ora
    fsdb.local =
       (DESCRIPTION =
           (ADDRESS =
             (PROTOCOL=BEQ)
             (PROGRAM=/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle)
             (ARGV0=oraclefsdb2)
             (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
             (ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1,ORACLE_SID=fsdb2')
           )
        (CONNECT_DATA=(SID=fsdb2))
       )
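    The per-node edit described above can also be scripted. The sketch below uses a hypothetical helper (make_node_tns, not part of the MOS note) that rewrites every occurrence of fsdb1, including inside ARGV0=oraclefsdb1, to the target instance name:

```shell
# Hypothetical helper: derive another node's tnsnames.ora from the first
# node's copy by replacing every occurrence of the instance name (fsdb1),
# which also rewrites ARGV0=oraclefsdb1 to oraclefsdb2.
make_node_tns() {
  local src=$1 inst=$2
  sed "s/fsdb1/${inst}/g" "$src"
}

# Demonstrate on a two-line excerpt of the first node's file:
cat > /tmp/tns_node1_excerpt.ora <<'EOF'
(ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1,ORACLE_SID=fsdb1')
(CONNECT_DATA=(SID=fsdb1))
EOF
make_node_tns /tmp/tns_node1_excerpt.ora fsdb2
```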
    • Create the $HOME/dbfs/tnsadmin/sqlnet.ora file with the following contents; it must be identical on every node.
    oraclouddbadm01-dbm011 {/home/oracle}:vi $HOME/dbfs/tnsadmin/sqlnet.ora
    oraclouddbadm01-dbm011 {/home/oracle}:cat $HOME/dbfs/tnsadmin/sqlnet.ora
    WALLET_LOCATION =
      (SOURCE=(METHOD=FILE)
              (METHOD_DATA=(DIRECTORY=/home/oracle/dbfs/wallet))
      )
    SQLNET.WALLET_OVERRIDE = TRUE
     
    • Copy the file to all nodes using dcli:
    oraclouddbadm01-dbm011 {/home/oracle}:dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/tnsadmin -f $HOME/dbfs/tnsadmin/sqlnet.ora
     
    • Create a wallet directory on one database server as the oracle user. For example:
    oraclouddbadm01-dbm011 {/home/oracle}:mkdir -p $HOME/dbfs/wallet
     
    • Create an empty auto-login wallet:
    oraclouddbadm01-dbm011 {/home/oracle}:mkstore -wrl $HOME/dbfs/wallet -create
    Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
    Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.
     
    Enter password:
    Enter password again:
     
    • Add the necessary credentials to the wallet. The credentials can be specific to the connect string used, as shown here:
    oraclouddbadm01-dbm011 {/home/oracle}:mkstore -wrl $HOME/dbfs/wallet -createCredential fsdb.local dbfs_user oracle
    Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
    Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.
     
    Enter wallet password:
    Create credential oracle.security.client.connect_string1
     
    • Copy the wallet files to all database nodes.
    oraclouddbadm01-dbm011 {/home/oracle}:dcli -g ~/dbs_group -l oracle mkdir -p $HOME/dbfs/wallet
    oraclouddbadm01-dbm011 {/home/oracle}:dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/wallet -f $HOME/dbfs/wallet/ewallet.p12
    oraclouddbadm01-dbm011 {/home/oracle}:dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/wallet -f $HOME/dbfs/wallet/cwallet.sso
     
    • Ensure that the TNS entry specified above (fsdb.local in the example) is defined and working properly (checking with "TNS_ADMIN=/home/oracle/dbfs/tnsadmin tnsping fsdb.local" is a good test).
    oraclouddbadm01-fsdb1 {/home/oracle/dbfs/tnsadmin}:dcli -g ~/dbs_group -l oracle "export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1; TNS_ADMIN=$HOME/dbfs/tnsadmin /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/tnsping fsdb.local | grep OK"
    oraclouddbadm01: OK (20 msec)
    oraclouddbadm02: OK (20 msec)
     
    • Download the mount-dbfs.sh script attached to MOS note 1054431.1 and place it on one database server in a temporary location (like /tmp/mount-dbfs.sh).
    Run the following command to ensure that the file transfer did not introduce DOS line endings into the script:
    [root@oraclouddbadm01 ~]# dos2unix /tmp/mount-dbfs.sh
    dos2unix: converting file /tmp/mount-dbfs.sh to UNIX format …
     
    • Edit the variable settings in the top of the script for your environment. Comments in the script will help you to confirm the values for these variables.
    DBNAME -> fsdb
    MOUNT_POINT -> /dbfs_direct
    DBFS_USER -> dbfs_user
    ORACLE_HOME (should be the RDBMS ORACLE_HOME directory) -> /u01/app/oracle/product/11.2.0.4/dbhome_1
    LOGGER_FACILITY (used by syslog to log the messages/output from this script) -> user
    MOUNT_OPTIONS -> allow_other,direct_io
    DBFS_PASSWD (used only if WALLET=false) -> oracle
    DBFS_PWDFILE_BASE (used only if WALLET=false) -> /tmp/.dbfs-passwd.txt
    WALLET (must be true or false) -> true
    TNS_ADMIN (used only if WALLET=true) -> /home/oracle/dbfs/tnsadmin
    DBFS_LOCAL_TNSALIAS -> fsdb.local
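    Collected into the form they take inside the script, the edited settings would look roughly like this excerpt (values are the ones used in this article; adjust for your environment):

```shell
# Excerpt of the variable section at the top of mount-dbfs.sh after editing
# (values from this article's environment; an illustrative sketch, not the
# full script from MOS note 1054431.1).
DBNAME=fsdb
MOUNT_POINT=/dbfs_direct
DBFS_USER=dbfs_user
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
LOGGER_FACILITY=user
MOUNT_OPTIONS=allow_other,direct_io
DBFS_PASSWD=oracle                        # used only if WALLET=false
DBFS_PWDFILE_BASE=/tmp/.dbfs-passwd.txt   # used only if WALLET=false
WALLET=true
TNS_ADMIN=/home/oracle/dbfs/tnsadmin      # used only if WALLET=true
DBFS_LOCAL_TNSALIAS=fsdb.local
```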
     
    • After editing, copy the script to the proper directory (GI_HOME/crs/script) on database nodes and set proper permissions on it, as the root user:
    [root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root -d /u01/app/11.2.0.4/grid/crs/script -f /tmp/mount-dbfs.sh
    [root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root chown oracle:dba /u01/app/11.2.0.4/grid/crs/script/mount-dbfs.sh
    [root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root chmod 750 /u01/app/11.2.0.4/grid/crs/script/mount-dbfs.sh
     
    • With the one-time setup complete, the Clusterware resource for DBFS mounting can now be registered. Create this short script and run it as the RDBMS owner (oracle) on only one compute node in the cluster.
    oraclouddbadm01-dbm011 {/home/oracle}:vi add-dbfs-resource.sh
    oraclouddbadm01-dbm011 {/home/oracle}:cat add-dbfs-resource.sh
    #!/bin/bash
    ACTION_SCRIPT=/u01/app/11.2.0.4/grid/crs/script/mount-dbfs.sh
    RESNAME=dbfs_mount
    DBNAME=fsdb
    DBNAMEL=`echo $DBNAME | tr A-Z a-z`
    ORACLE_HOME=/u01/app/11.2.0.4/grid
    PATH=$ORACLE_HOME/bin:$PATH
    export PATH ORACLE_HOME
    crsctl add resource $RESNAME \
      -type local_resource \
      -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
             CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
             START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)', \
             STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)', \
             SCRIPT_TIMEOUT=300"
     
    • Then run this as the Grid Infrastructure owner (typically oracle) on one database server only:
    oraclouddbadm01-dbm011 {/home/oracle}:sh ./add-dbfs-resource.sh 
     
    When successful, this command has no output.
     
    • It is not necessary to restart the database resource at this point. However, now that the dependencies have been added, you should review the following example of stopping and starting the database.
    oraclouddbadm01-dbm011 {/home/oracle}:srvctl stop database -d fsdb -f
     
    oraclouddbadm01-dbm011 {/home/oracle}:srvctl status database -d fsdb
    Instance fsdb1 is not running on node oraclouddbadm01
    Instance fsdb2 is not running on node oraclouddbadm02
     
    oraclouddbadm01-dbm011 {/u01/app/11.2.0.4/grid/crs/script}:srvctl start database -d fsdb
     
    oraclouddbadm01-dbm011 {/u01/app/11.2.0.4/grid/crs/script}:srvctl status database -d fsdb
    Instance fsdb1 is running on node oraclouddbadm01
    Instance fsdb2 is running on node oraclouddbadm02
     
    Note: After creating the $RESNAME resource, in order to stop the $DBNAME database when the $RESNAME resource is ONLINE, you will have to specify the force flag when using srvctl.
     
    For example: "srvctl stop database -d fsdb -f". If you do not specify the -f flag, you will receive an error like this:
     
    (oracle)$ srvctl stop database -d fsdb
    PRCD-1124 : Failed to stop database fsdb and its services
    PRCR-1065 : Failed to stop resource (((((NAME STARTS_WITH ora.fsdb.) && (NAME ENDS_WITH .svc)) && (TYPE == ora.service.type)) && ((STATE != OFFLINE) || (TARGET != OFFLINE))) || (((NAME == ora.fsdb.db) && (TYPE == ora.database.type)) && (STATE != OFFLINE)))
    CRS-2529: Unable to act on 'ora.fsdb.db' because that would require stopping or relocating 'dbfs_mount', but the force option was not specified
     
    Using the -f flag allows a successful shutdown and results in no output.
     
    How to Manage the DBFS Mount
     
    • After the resource is created, you should be able to see the dbfs_mount resource by running crsctl stat res dbfs_mount; it should show OFFLINE on all nodes, as below:
    oraclouddbadm01-dbm011 {/home/oracle}:/u01/app/11.2.0.4/grid/bin/crsctl stat res dbfs_mount -t
    ——————————————————————————–
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS
    ——————————————————————————–
    Local Resources
    ——————————————————————————–
    dbfs_mount
                   OFFLINE OFFLINE      oraclouddbadm01
                   OFFLINE OFFLINE      oraclouddbadm02
     
    • To bring dbfs_mount online, run crsctl start resource dbfs_mount from any cluster node; this mounts the filesystem on all nodes. For example:
    oraclouddbadm01-dbm011 {/u01/app/11.2.0.4/grid/crs/script}:/u01/app/11.2.0.4/grid/bin/crsctl start resource dbfs_mount
    CRS-2672: Attempting to start 'dbfs_mount' on 'oraclouddbadm02'
    CRS-2672: Attempting to start 'dbfs_mount' on 'oraclouddbadm01'
    CRS-2676: Start of 'dbfs_mount' on 'oraclouddbadm01' succeeded
    CRS-2676: Start of 'dbfs_mount' on 'oraclouddbadm02' succeeded
     
    oraclouddbadm01-dbm011 {/u01/app/11.2.0.4/grid/crs/script}:/u01/app/11.2.0.4/grid/bin/crsctl stat res dbfs_mount -t
    ——————————————————————————–
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS
    ——————————————————————————–
    Local Resources
    ——————————————————————————–
    dbfs_mount
                   ONLINE  ONLINE      oraclouddbadm01
                   ONLINE  ONLINE      oraclouddbadm02
     
    • Once the dbfs_mount Clusterware resource is online, you should be able to see the mount point with df -h on each node. Also, the default startup mode for this resource is "restore", which means that if it was online before Clusterware stopped, it will attempt to come online again after Clusterware restarts.
    oraclouddbadm01-fsdb1 {/home/oracle}: df -h /dbfs_direct
    Filesystem            Size  Used Avail Use% Mounted on
    dbfs                  1.5M   40K  1.4M   3% /dbfs_direct
    • To unmount DBFS on all nodes, run this as the oracle user:
    oraclouddbadm01-fsdb1 {/home/oracle}:/u01/app/11.2.0.4/grid/bin/crsctl stop res dbfs_mount
     

    Conclusion
    In this article we have learned how to configure Database File System (DBFS) on an Exadata Database Machine running Oracle Database 11.2.0.4. DBFS can be used for many purposes, such as storing large files, ETL staging, and GoldenGate trail and checkpoint files.
     