What are the different Exadata Deployment Options available?

 

– Exadata On-Premises

– Exadata Cloud Service

– Exadata Cloud at Customer

 

What is Oracle Exadata Database Machine?

 

Exadata Database Machine is an Engineered System which consists of Compute nodes, Storage cells, and InfiniBand switches or RoCE switches (starting with X8M).

 

Exadata Database Machine, commonly known simply as Exadata, is:

– An Engineered System

– A preconfigured combination of balanced hardware and unique software

– A unique platform for running Oracle Databases

– A combination of a Compute Grid, a Storage Grid, and a Network Grid

– A fully integrated platform for Oracle Database

– An ideal platform for database consolidation

– A platform that provides high availability and high performance for all types of workloads

 

The Oracle Exadata Database Machine is an Engineered System designed to deliver extreme performance and high availability for all types of Oracle Database workloads (OLTP, OLAP, and mixed workloads).

 

 

Exadata Database Machine Components

1. Compute nodes (Database Server Grid)

2. Exadata Storage Servers (Storage Server Grid)

3. Network (Network Grid)

   – Exadata InfiniBand switches

   – Exadata RoCE switches (from Exadata X8M onwards)

4. Other components

   – Cisco switch, PDUs
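On an existing machine, you can typically confirm these components from the first compute node. The commands below are only a rough sketch; they assume the conventional cell_group file (a list of the storage cell hostnames) exists in root's home directory, so adjust them to your own environment.

Check the Exadata software image on the compute node:

[root@dm01db01 ~]# imageinfo

List the storage cells and their status:

[root@dm01db01 ~]# dcli -g cell_group -l root "cellcli -e list cell attributes name,status"

List the InfiniBand switches on the fabric (InfiniBand-based racks only):

[root@dm01db01 ~]# ibswitches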

 

Oracle Exadata Cloud Service

Oracle Database Exadata Cloud Service delivers the world’s most advanced database cloud by combining the world’s #1 database technology and Exadata, the most powerful database platform, with the simplicity, agility and elasticity of a cloud-based deployment.

 

Oracle Exadata Cloud @ Customer

Exadata Cloud@Customer is ideal for customers who want cloud benefits but cannot move their databases to the public cloud because of sovereignty laws, industry regulations, corporate policies, security requirements, or network latency, or who find it impractical to move databases away from other tightly coupled on-premises IT infrastructure. Oracle Exadata Cloud@Customer delivers the world’s most advanced database cloud to customers who require their databases to remain on premises. It is identical to Oracle’s Exadata Cloud Service, but it is located in the customer’s own data center and managed by Oracle.

 

Oracle Exadata Deployment Comparison

 

Let’s compare the Exadata deployment options in detail so we can choose the right one for our business needs.



Oracle Exadata Deployment Option Chart



Oracle recently introduced the "Autonomous Health Framework" (AHF). Oracle Autonomous Health Framework contains Oracle ORAchk, Oracle EXAchk, and Oracle Trace File Analyzer.

You have access to Oracle Autonomous Health Framework as a value add-on to your existing support contract. There is no additional fee or license required to run Oracle Autonomous Health Framework.

 

In this article, we will learn in detail how to install, set up, and execute AHF on an Oracle Exadata Database Machine.
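Before installing, it can help to confirm whether an older TFA installation (typically shipped with Grid Infrastructure) is already present, since the AHF installer will detect and migrate it, as you will see in the output further below. For example, on this environment the Grid Infrastructure TFA home can be checked as follows (adjust the tfa_home path to your own Grid home):

[root@dm01db01 ~]# /u01/app/11.2.0.4/grid/tfa/dm01db01/tfa_home/bin/tfactl print status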

 

Step 1: Download AHF for the Linux operating system as shown below. Here we are using the wget command to download the file directly to the server. If you don’t have a proxy, you can download the file from My Oracle Support (MOS) to your desktop and copy it to the server using WinSCP.

 

[root@dm01db01 ~]# cd /u01/app/oracle/software/

 

[root@dm01db01 software]# mkdir Exachk

 

[root@dm01db01 software]# cd Exachk/

 

[root@dm01db01 Exachk]# export use_proxy=on

[root@dm01db01 Exachk]# export http_proxy="webproxy.netsoftmate.come:80/"

 

  • Download the AHF zip file

 

[root@dm01db01 Exachk]# wget --http-user=abdul.mohammed@netsoftmate.com --http-password=************ --no-check-certificate --output-document=AHF-LINUX_v20.1.1.zip "https://updates.oracle.com/Orion/Services/download/AHF-LINUX_v20.1.1.zip?aru=23443431&patch_file=AHF-LINUX_v20.1.1.zip"

 

• Download the latest CVU (Cluster Verification Utility) pack. This will be used by exachk to run the cluster verification checks.

 

[root@dm01db01 Exachk]# wget --http-user=abdul.mohammed@netsoftmate.com --http-password=************ --no-check-certificate --output-document=cvupack_Linux_x86_64.zip "https://download.oracle.com/otndocs/products/clustering/cvu/cvupack_Linux_x86_64.zip"

 

[root@dm01db01 Exachk]# ls -ltr

total 356748

-rw-r--r-- 1 root root 365267646 Mar 17 16:02 AHF-LINUX_v20.1.1.zip

-rw-r--r-- 1 root root 293648959 Jul 13  2018 cvupack_Linux_x86_64.zip
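Optionally, verify that the downloaded archives are intact before proceeding, for example:

[root@dm01db01 Exachk]# unzip -t AHF-LINUX_v20.1.1.zip

[root@dm01db01 Exachk]# unzip -t cvupack_Linux_x86_64.zip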

 


Step 2: Unzip the AHF zip file

 

[root@dm01db01 Exachk]# unzip AHF-LINUX_v20.1.1.zip

Archive:  AHF-LINUX_v20.1.1.zip

  inflating: README.txt

  inflating: ahf_setup

 

[root@dm01db01 Exachk]# ./ahf_setup -v

AHF Build ID : 20110020200317092524

AHF Build Platform : Linux

AHF Build Architecture : x86_64

 

Step 3: Execute the AHF setup

 

[root@dm01db01 Exachk]# ./ahf_setup

 

AHF Installer for Platform Linux Architecture x86_64

 

AHF Installation Log : /tmp/ahf_install_344489_2020_04_06-12_20_51.log

 

Starting Autonomous Health Framework (AHF) Installation

 

AHF Version: 20.1.1.0.0 Build Date: 202003170925

 

TFA is already installed at : /u01/app/11.2.0.4/grid/tfa/dm01db01/tfa_home

 

Installed TFA Version : 122111 Build ID : 20170612164756

 

Default AHF Location : /opt/oracle.ahf

 

Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : Y

 

AHF Location : /opt/oracle.ahf

 

AHF Data Directory stores diagnostic collections and metadata.

AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.

 

Choose Data Directory from below options :

 

  1. /u01/app/oracle [Free Space : 50454 MB]
  2. Enter a different Location

 

Choose Option [1 – 2] : 1

 

AHF Data Directory : /u01/app/oracle/oracle.ahf/data

 

exachk scheduler is already running at : /root/Exachk

 

Installed exachk version : EXACHK  VERSION: 19.2.0_20190717

 

Stopping exachk scheduler

 

Copying exachk configuration from /root/Exachk

 

Shutting down TFA : /u01/app/11.2.0.4/grid/tfa/dm01db01/tfa_home

 

Copying TFA Data Files from /u01/app/11.2.0.4/grid/tfa/dm01db01/tfa_home

 

Uninstalling TFA : /u01/app/11.2.0.4/grid/tfa/dm01db01/tfa_home

 

Do you want to add AHF Notification Email IDs ? [Y]|N : Y

 

Enter Email IDs separated by space : abdul.mohammed@netsoftmate.com

 

AHF will also be installed/upgraded on these Cluster Nodes :

 

  1. dm01db02
  2. dm01db03
  3. dm01db04

 

The AHF Location and AHF Data Directory must exist on the above nodes

AHF Location : /opt/oracle.ahf

AHF Data Directory : /u01/app/oracle/oracle.ahf/data

 

Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : Y

 

Extracting AHF to /opt/oracle.ahf

 

Configuring TFA Services

 

Copying TFA Data Files to AHF

 

Discovering Nodes and Oracle Resources

 

 

TFA will configure Storage Cells using SSH Setup:

 

 

.-----------------------------------.

|   | EXADATA CELL | CURRENT STATUS |

+---+--------------+----------------+

| 1 | dm01cel01    | ONLINE         |

| 2 | dm01cel02    | ONLINE         |

| 3 | dm01cel03    | ONLINE         |

| 4 | dm01cel04    | ONLINE         |

| 5 | dm01cel05    | ONLINE         |

| 6 | dm01cel06    | ONLINE         |

| 7 | dm01cel07    | ONLINE         |

'---+--------------+----------------'

 

 

Not generating certificates as GI discovered

 

Starting TFA Services

 

.-------------------------------------------------------------------------------.

| Host      | Status of TFA | PID    | Port | Version    | Build ID             |

+-----------+---------------+--------+------+------------+----------------------+

| dm01db01  | RUNNING       | 365382 | 5000 | 20.1.1.0.0 | 20110020200317092524 |

'-----------+---------------+--------+------+------------+----------------------'

 

Running TFA Inventory…

 

Adding default users to TFA Access list…

 

.------------------------------------------------------------------.

|                   Summary of AHF Configuration                    |

+-----------------+------------------------------------------------+

| Parameter       | Value                                          |

+-----------------+------------------------------------------------+

| AHF Location    | /opt/oracle.ahf                                |

| TFA Location    | /opt/oracle.ahf/tfa                            |

| Exachk Location | /opt/oracle.ahf/exachk                         |

| Data Directory  | /u01/app/oracle/oracle.ahf/data                |

| Repository      | /u01/app/oracle/oracle.ahf/data/repository     |

| Diag Directory  | /u01/app/oracle/oracle.ahf/data/dm01db01/diag  |

'-----------------+------------------------------------------------'

 

Retrieving legacy exachk wallet details …

Storing exachk wallet details into AHF config/wallet …

 

Starting exachk daemon from AHF …

 

AHF install completed on dm01db01

 

Installing AHF on Remote Nodes :

 

AHF will be installed on dm01db02, Please wait.

 

Installing AHF on dm01db02 :

 

[dm01db02] Copying AHF Installer

 

[dm01db02] Running AHF Installer

 

AHF will be installed on dm01db03, Please wait.

 

Installing AHF on dm01db03 :

 

[dm01db03] Copying AHF Installer

 

[dm01db03] Running AHF Installer

 

AHF will be installed on dm01db04, Please wait.

 

Installing AHF on dm01db04 :

 

[dm01db04] Copying AHF Installer

 

[dm01db04] Running AHF Installer

 

AHF binaries are available in /opt/oracle.ahf/bin

 

AHF is successfully installed

 

Moving /tmp/ahf_install_251936_2020_04_06-13_07_32.log to /u01/app/oracle/oracle.ahf/data/dm01db01/diag/ahf/

 

Step 4: Verify AHF setup

 

[root@dm01db01 Exachk]# cd /opt/oracle.ahf/

 

[root@dm01db01 oracle.ahf]# ls -ltr

total 36

drwxr-xr-x 7 root root 4096 Nov 19 02:38 python

drwxr-xr-x 5 root root 4096 Mar 17 11:25 ahf

drwxr-xr-x 6 root root 4096 Mar 17 11:25 common

drwxr-x--x 5 root root 4096 Mar 17 11:25 jre

drwxr-xr-x 8 root root 4096 Apr  6 12:28 exachk

drwxr-x--x 2 root root 4096 Apr  6 12:28 analyzer

-rw-r--r-- 1 root root 1057 Apr  6 12:28 install.properties

drwxr-x--x 9 root root 4096 Apr  6 12:28 tfa

drwxr-x--x 2 root root 4096 Apr  6 12:28 bin

 

 

[root@dm01db01 oracle.ahf]# cd exachk/

 

[root@dm01db01 exachk]# ls -ltr

total 81772

-rw-r--r-- 1 root root   186651 Mar 17 11:20 exachk.pyc

-rw-r--r-- 1 root root 65423079 Mar 17 11:23 collections.dat

-rw-r--r-- 1 root root  9674765 Mar 17 11:23 rules.dat

-rw-r--r-- 1 root root  8341706 Mar 17 11:24 Apex5_CollectionManager_App.sql

-rw-r--r-- 1 root root    43473 Mar 17 11:24 sample_user_defined_checks.xml

-r--r--r-- 1 root root     3217 Mar 17 11:24 user_defined_checks.xsd

drwxr-xr-x 2 root root     4096 Mar 17 11:24 messages

drwxr-xr-x 2 root root     4096 Mar 17 11:25 web

drwxr-xr-x 3 root root     4096 Mar 17 11:25 lib

drwxr-xr-x 2 root root     4096 Mar 17 11:25 build

drwxr-xr-x 2 root root     4096 Apr  6 12:28 bash

-rwxr-xr-x 1 root root    25788 Apr  6 12:28 exachk
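In addition to checking the directory layout, you can confirm the TFA daemon status and the bundled exachk version from the new AHF location, for example:

[root@dm01db01 exachk]# /opt/oracle.ahf/bin/tfactl print status

[root@dm01db01 exachk]# /opt/oracle.ahf/exachk/exachk -v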

 

Step 5: Unzip the CVU zip file under the AHF home as shown below

 

[root@dm01db01 Exachk]# unzip cvupack_Linux_x86_64.zip -d /opt/oracle.ahf/common/cvu

 

[root@dm01db01 Exachk]# ls -ltr /opt/oracle.ahf/common/cvu

total 92

drwxrwxr-x 7 root root 4096 Jun 13  2018 jdk

drwxrwxr-x 3 root root 4096 Jun 13  2018 srvm

drwxrwxr-x 3 root root 4096 Jun 13  2018 has

drwxrwxr-x 3 root root 4096 Jun 13  2018 crs

drwxrwxr-x 3 root root 4096 Jun 13  2018 suptools

drwxrwxr-x 3 root root 4096 Jun 13  2018 oss

drwxrwxr-x 7 root root 4096 Jun 13  2018 cv

drwxrwxr-x 3 root root 4096 Jun 13  2018 xdk

drwxrwxr-x 2 root root 4096 Jun 13  2018 utl

drwxrwxr-x 4 root root 4096 Jun 13  2018 rdbms

drwxrwxr-x 6 root root 4096 Jun 13  2018 install

drwxrwxr-x 4 root root 4096 Jun 13  2018 deinstall

drwxrwxr-x 4 root root 4096 Jun 13  2018 clone

drwxrwxr-x 8 root root 4096 Jun 13  2018 oui

drwxrwxr-x 3 root root 4096 Jun 13  2018 diagnostics

drwxrwxr-x 3 root root 4096 Jun 13  2018 oracore

drwxrwxr-x 3 root root 4096 Jun 13  2018 nls

drwxrwxr-x 3 root root 4096 Jun 13  2018 jdbc

drwxrwxr-x 3 root root 4096 Jun 13  2018 dbjava

drwxrwxr-x 6 root root 4096 Jun 13  2018 network

drwxrwxr-x 2 root root 4096 Jun 13  2018 jlib

drwxrwxr-x 2 root root 4096 Jun 13  2018 lib

drwxrwxr-x 2 root root 4096 Jun 13  2018 bin

 

 

Note: If you do not download and extract the cvupack, you will get the following warning message.

 

"Either Cluster Verification Utility pack (cvupack) does not exist at /opt/oracle.ahf/common/cvu or it is an old or invalid cvupack"

 

 

Step 6: Execute Exachk for Exadata

 

[root@dm01db01 ~]# cd /opt/oracle.ahf/exachk/

[root@dm01db01 exachk]# ./exachk

 

 

Checking ssh user equivalency settings on all nodes in cluster for root

 

Node dm01db02 is configured for ssh user equivalency for root user

 

 

Node dm01db03 is configured for ssh user equivalency for root user

 

 

Node dm01db04 is configured for ssh user equivalency for root user

 

Searching for running databases . . . . .

 

.  .  .  .

List of running databases registered in OCR

 

  1. testdb
  2. orcldb
  3. All of above
  4. None of above

 

Select databases from list for checking best practices. For multiple databases, select 3 for All or comma separated number like 1,2 etc [1-4][3].

 

Searching out ORACLE_HOME for selected databases.

 

.  .  .  .  .  .  .

.

 

Checking Status of Oracle Software Stack – Clusterware, ASM, RDBMS

 

.  .  .  . . . .  .  .  . . . .

.  .  .  . . . .  .  .  .  .  .  .  .  .  .  .  .  .  . . . .  .  .  .  .  .  .  .  .  .  .  .  .  . . . .  .  .  .  .  .  .  .  .  .  .  .  .  . . . .  .  .  .  .  .  .  .  .  .  .  .

-------------------------------------------------------------------------------------------------------

                                                 Oracle Stack Status

-------------------------------------------------------------------------------------------------------

  Host Name       CRS Installed  RDBMS Installed    CRS UP    ASM UP  RDBMS UP    DB Instance Name

-------------------------------------------------------------------------------------------------------

  dm01db01                  Yes          Yes          Yes      Yes      Yes          orcldb1 testdb1

  dm01db02                  Yes          Yes          Yes      Yes      Yes          testdb2 orcldb2

  dm01db03                  Yes          Yes          Yes      Yes      Yes          orcldb3 testdb3

  dm01db04                  Yes          Yes          Yes      Yes      Yes          testdb4 orcldb4

-------------------------------------------------------------------------------------------------------

 

 

Copying plug-ins

 

. .

 

Node dm01cel01-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel02-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel03-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel04-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel05-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel06-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel07-priv2 is configured for ssh user equivalency for root user

 

 

.  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .

dm01sw-ibb01 is configured for ssh user equivalency for root user

.

dm01sw-iba01 is configured for ssh user equivalency for root user

 


 

 

*** Checking Best Practice Recommendations ( Pass / Warning / Fail ) ***

 

.  .

 

Collections and audit checks log file is

/u01/app/oracle/oracle.ahf/data/dm01db01/exachk/exachk_dm01db01_orcldb_040620_12376/log/exachk.log

 

Starting to run exachk in background on dm01db02

 

Starting to run exachk in background on dm01db03

 

 

Starting to run exachk in background on dm01db04

 

 

 

============================================================

              Node name – dm01db01

============================================================

 

 Collecting – ASM Disk Group for Infrastructure Software and Configuration

 Collecting – ASM Diskgroup Attributes

 Collecting – ASM diskgroup usable free space

 Collecting – ASM initialization parameters

 Collecting – Database Parameters for testdb database

 Collecting – Database Parameters for orcldb database

 Collecting – Database Undocumented Parameters for orcldb database

 Collecting – RDBMS Feature Usage for orcldb database

 Collecting – CPU Information

 Collecting – Clusterware and RDBMS software version

 Collecting – Compute node PCI bus slot speed for infiniband HCAs

 Collecting – Kernel parameters

 Collecting – Maximum number of semaphore sets on system

 Collecting – Maximum number of semaphores on system

 Collecting – OS Packages

 Collecting – Patches for Grid Infrastructure

 Collecting – Patches for RDBMS Home

 Collecting – RDBMS patch inventory

 Collecting – Switch Version Information

 Collecting – number of semaphore operations per semop system call

 Collecting – CRS user limits configuration

 Collecting – CRS user time zone check

 Collecting – Check alerthistory for non-test open stateless alerts [Database Server]

 Collecting – Check alerthistory for stateful alerts not cleared [Database Server]

 Collecting – Clusterware patch inventory

 Collecting – Discover switch type(spine or leaf)

 Collecting – Enterprise Manager agent targets

 Collecting – Exadata Critical Issue DB09

 Collecting – Exadata Critical Issue EX30

 Collecting – Exadata Critical Issue EX36

 Collecting – Exadata Critical Issue EX56

 Collecting – Exadata Critical Issue EX57

 Collecting – Exadata Critical Issue EX58

 Collecting – Exadata critical issue EX48

 Collecting – Exadata critical issue EX55

 Collecting – Exadata software version on database server

 Collecting – Exadata system model number

 Collecting – Exadata version on database server

 Collecting – HCA firmware version on database server

 Collecting – HCA transfer rate on database server

 Collecting – Infrastructure Software and Configuration for compute

 Collecting – MaxStartups setting in sshd_config

 Collecting – OFED Software version on database server

 Collecting – Obtain hardware information

 Collecting – Operating system and Kernel version on database server

 Collecting – Oracle monitoring agent and/or OS settings on ADR diagnostic directories

 Collecting – Raid controller bus link speed

 Collecting – Review Non-Exadata components in use on the InfiniBand fabric

 Collecting – System Event Log

 Collecting – Validate key sysctl.conf parameters on database servers

 Collecting – Verify Data Network is Separate from Management Network

 Collecting – Verify Database Server Disk Controller Configuration

 Collecting – Verify Database Server Physical Drive Configuration

 Collecting – Verify Database Server Virtual Drive Configuration

 Collecting – Verify Disk Cache Policy on database server

 Collecting – Verify Hardware and Firmware on Database and Storage Servers (CheckHWnFWProfile) [Database Server]

 Collecting – Verify ILOM Power Up Configuration for HOST_AUTO_POWER_ON

 Collecting – Verify ILOM Power Up Configuration for HOST_LAST_POWER_STATE

 Collecting – Verify IP routing configuration on database servers

 Collecting – Verify InfiniBand Address Resolution Protocol (ARP) Configuration on Database Servers

 Collecting – Verify Master (Rack) Serial Number is Set [Database Server]

 Collecting – Verify Quorum disks configuration

 Collecting – Verify RAID Controller Battery Temperature [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify TCP Segmentation Offload (TSO) is set to off

 Collecting – Verify available ksplice fixes are installed [Database Server]

 Collecting – Verify basic Logical Volume(LVM) system devices configuration

 Collecting – Verify database server InfiniBand network MTU size

 Collecting – Verify database server disk controllers use writeback cache

 Collecting – Verify database server file systems have Check interval = 0

 Collecting – Verify database server file systems have Maximum mount count = -1

 Collecting – Verify imageinfo on database server

 Collecting – Verify imageinfo on database server to compare systemwide

 Collecting – Verify installed rpm(s) kernel type match the active kernel version

 Collecting – Verify key InfiniBand fabric error counters are not present

 Collecting – Verify no database server kernel out of memory errors

 Collecting – Verify proper ACFS drivers are installed for Spectre v2 mitigation

 Collecting – Verify service exachkcfg autostart status on database server

 Collecting – Verify the localhost alias is pingable [Database Server]

 Collecting – Verify the InfiniBand Fabric Topology (verify-topology)

 Collecting – Verify the Master Subnet Manager is running on an InfiniBand switch

 Collecting – Verify the Name Service Cache Daemon (NSCD) configuration

 Collecting – Verify the Subnet Manager is properly disabled [Database Server]

 Collecting – Verify the currently active image status [Database Server]

 Collecting – Verify the ib_sdp module is not loaded into the kernel

 Collecting – Verify the storage servers in use configuration matches across the cluster

 Collecting – Verify the vm.min_free_kbytes configuration

 Collecting – Verify there are no files present that impact normal firmware update procedures [Database Server]

 Collecting – collect time server data [Database Server]

 Collecting – root time zone check

 Collecting – verify asr exadata configuration check via ASREXACHECK on database server

Starting to run root privileged commands in background on storage server dm01cel01 (192.168.1.6)

 

Starting to run root privileged commands in background on storage server dm01cel02 (192.168.1.8)

 

Starting to run root privileged commands in background on storage server dm01cel03 (192.168.1.10)

 

Starting to run root privileged commands in background on storage server dm01cel04 (192.168.1.16)

 

Starting to run root privileged commands in background on storage server dm01cel05 (192.168.1.18)

 

Starting to run root privileged commands in background on storage server dm01cel06 (192.168.1.20)

 

Starting to run root privileged commands in background on storage server dm01cel07 (192.168.1.22)

 

Starting to run root privileged commands in background on infiniband switch (dm01sw-ibb01)

 

Starting to run root privileged commands in background on infiniband switch (dm01sw-iba01)

 

 

Collections from storage server:

————————————————————

 

 

Collections from Infiniband Switch:

————————————————————

 Collecting – Exadata Critical Issue IB5

 Collecting – Exadata Critical Issue IB6

 Collecting – Exadata Critical Issue IB8

 Collecting – Hostname in /etc/hosts

 Collecting – Infiniband Switch NTP configuration

 Collecting – Infiniband subnet manager status

 Collecting – Infiniband switch HCA status

 Collecting – Infiniband switch HOSTNAME configuration

 Collecting – Infiniband switch firmware version

 Collecting – Infiniband switch health

 Collecting – Infiniband switch localtime configuration

 Collecting – Infiniband switch module configuration

 Collecting – Infiniband switch subnet manager configuration

 Collecting – Infiniband switch type(Spine or leaf)

 Collecting – Infrastructure Software and Configuration for switch

 Collecting – Verify average ping times to DNS nameserver [IB Switch]

 Collecting – Verify no IB switch ports disabled due to excessive symbol errors

 Collecting – Verify the localhost alias is pingable [IB Switch]

 Collecting – Verify there are no unhealthy InfiniBand switch sensors

 Collecting – sm_priority configuration on Infiniband switch

 

 

Data collections completed. Checking best practices on dm01db01.

————————————————————

 

 

 

 FAIL =>     Exadata software version on database server does not meet certified platinum configuration

 FAIL =>     Oracle database does not meet certified platinum configuration for /u01/app/oracle/product/11.2.0.4/dbhome

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on testdb1 instance

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on orcldb1 instance

 INFO =>     Oracle GoldenGate failure prevention best practices

 INFO =>     One or more non-default AWR baselines should be created for orcldb

 WARNING =>  Non-default database Services are not configured for orcldb

 WARNING =>  Database parameter processes should be set to recommended value on testdb1 instance

 WARNING =>  Database parameter processes should be set to recommended value on orcldb1 instance

 FAIL =>     _reconnect_to_cell_attempts parameter in cellinit.ora is not set to recommended value

 FAIL =>     Oracle monitoring agent and Operating systems settings on Automatic diagnostic  repository directories are not correct or not all targets have been scanned or not all diagnostic directories found

 FAIL =>     Storage Server user “CELLDIAG” should exist

 FAIL =>     Downdelay attribute is not set to recommended value on bonded client interface

 FAIL =>     One or more of SYSTEM, SYSAUX, USERS, TEMP tablespaces are not of type bigfile for orcldb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for testdb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for orcldb

 WARNING =>  SYS or SYSTEM objects were found to be INVALID for orcldb

 WARNING =>  There are non-Exadata components in use on the InfiniBand fabric

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for testdb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for orcldb

 FAIL =>     Memlock settings do not meet the Oracle best practice recommendations for /u01/app/oracle/product/11.2.0.4/dbhome

 WARNING =>  All disk groups should have compatible.advm attribute set to recommended values

 WARNING =>  All disk groups should have compatible.rdbms attribute set to recommended values

 WARNING =>  Database has one or more dictionary managed tablespace for orcldb

 CRITICAL => System is exposed to Exadata Critical Issue EX58

 CRITICAL => System is exposed to Exadata Critical Issue EX58

 FAIL =>     Some data or temp files are not autoextensible for orcldb

 WARNING =>  Key InfiniBand fabric error counters should not be present

 CRITICAL => One or more log archive destination and alternate log archive destination settings are not as recommended for orcldb

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for testdb

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for orcldb

 CRITICAL => Database parameters log_archive_dest_n with Location attribute are not all set to recommended value for orcldb

 CRITICAL => Database parameter Db_create_online_log_dest_n is not set to recommended value for testdb

 FAIL =>     Flashback on PRIMARY is not configured for orcldb

 FAIL =>     Flashback on STANDBY is not configured for testdb

 INFO =>     Operational Best Practices

 INFO =>     Database Consolidation Best Practices

 INFO =>     Computer failure prevention best practices

 INFO =>     Data corruption prevention best practices

 INFO =>     Logical corruption prevention best practices

 INFO =>     Database/Cluster/Site failure prevention best practices

 INFO =>     Client failover operational best practices

 INFO =>     Verify the percent of available celldisk space used by the griddisks

 WARNING =>  Application objects were found to be invalid for orcldb

 CRITICAL => Database control files are not configured as recommended for testdb

 CRITICAL => Database control files are not configured as recommended for orcldb

 WARNING =>  ASM parameter ASM_POWER_LIMIT is not set to the default value.

 INFO =>     While initialization parameter LOG_ARCHIVE_CONFIG is set it should be verified for your environment on Standby Database for testdb

 WARNING =>  Redo log files should be appropriately sized for testdb

 WARNING =>  Redo log files should be appropriately sized for orcldb

 FAIL =>     Table AUD$[FGA_LOG$] should use Automatic Segment Space Management for orcldb

 INFO =>     Database failure prevention best practices

 WARNING =>  Database has one or more dictionary managed tablespace for orcldb

 FAIL =>     Primary database is not protected with Data Guard (standby database) for real-time data protection and availability for orcldb

 FAIL =>     Database parameter LOG_BUFFER is not set to recommended value on orcldb1 instance

 INFO =>     Storage failures prevention best practices

 INFO =>     Software maintenance best practices

 CRITICAL => The data files should be recoverable for testdb

 CRITICAL => The data files should be recoverable for orcldb

 FAIL =>     FRA space management problem file types are present without an RMAN backup completion within the last 7 days for testdb

 INFO =>     Oracle recovery manager(rman) best practices

 WARNING =>  control_file_record_keep_time should be within recommended range [1-9] for testdb

 INFO =>     Exadata Critical Issues (Doc ID 1270094.1):- DB1-DB4,DB6,DB9-DB44, EX1-EX60 and IB1-IB3,IB5-IB8

Collecting patch inventory on CRS_HOME /u01/app/11.2.0.4/grid

Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/11.2.0.4/dbhome

 

Copying results from dm01db02 and generating report. This might take a while. Be patient.

 

.

============================================================

              Node name – dm01db02

============================================================

 

 Collecting – CPU Information

 Collecting – Clusterware and RDBMS software version

 Collecting – Compute node PCI bus slot speed for infiniband HCAs

 Collecting – Kernel parameters

 Collecting – Maximum number of semaphore sets on system

 Collecting – Maximum number of semaphores on system

 Collecting – OS Packages

 Collecting – Patches for Grid Infrastructure

 Collecting – Patches for RDBMS Home

 Collecting – RDBMS patch inventory

 Collecting – number of semaphore operations per semop system call

 Collecting – CRS user limits configuration

 Collecting – CRS user time zone check

 Collecting – Check alerthistory for non-test open stateless alerts [Database Server]

 Collecting – Check alerthistory for stateful alerts not cleared [Database Server]

 Collecting – Clusterware patch inventory

 Collecting – Exadata Critical Issue DB09

 Collecting – Exadata Critical Issue EX30

 Collecting – Exadata Critical Issue EX36

 Collecting – Exadata Critical Issue EX56

 Collecting – Exadata Critical Issue EX57

 Collecting – Exadata Critical Issue EX58

 Collecting – Exadata critical issue EX48

 Collecting – Exadata critical issue EX55

 Collecting – Exadata software version on database server

 Collecting – Exadata system model number

 Collecting – Exadata version on database server

 Collecting – HCA firmware version on database server

 Collecting – HCA transfer rate on database server

 Collecting – Infrastructure Software and Configuration for compute

 Collecting – MaxStartups setting in sshd_config

 Collecting – OFED Software version on database server

 Collecting – Obtain hardware information

 Collecting – Operating system and Kernel version on database server

 Collecting – Oracle monitoring agent and/or OS settings on ADR diagnostic directories

 Collecting – Raid controller bus link speed

 Collecting – System Event Log

 Collecting – Validate key sysctl.conf parameters on database servers

 Collecting – Verify Data Network is Separate from Management Network

 Collecting – Verify Database Server Disk Controller Configuration

 Collecting – Verify Database Server Physical Drive Configuration

 Collecting – Verify Database Server Virtual Drive Configuration

 Collecting – Verify Disk Cache Policy on database server

 Collecting – Verify Hardware and Firmware on Database and Storage Servers (CheckHWnFWProfile) [Database Server]

 Collecting – Verify ILOM Power Up Configuration for HOST_AUTO_POWER_ON

 Collecting – Verify ILOM Power Up Configuration for HOST_LAST_POWER_STATE

 Collecting – Verify IP routing configuration on database servers

 Collecting – Verify InfiniBand Address Resolution Protocol (ARP) Configuration on Database Servers

 Collecting – Verify Master (Rack) Serial Number is Set [Database Server]

 Collecting – Verify Quorum disks configuration

 Collecting – Verify RAID Controller Battery Temperature [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify TCP Segmentation Offload (TSO) is set to off

 Collecting – Verify available ksplice fixes are installed [Database Server]

 Collecting – Verify basic Logical Volume(LVM) system devices configuration

 Collecting – Verify database server InfiniBand network MTU size

 Collecting – Verify database server disk controllers use writeback cache

 Collecting – Verify database server file systems have Check interval = 0

 Collecting – Verify database server file systems have Maximum mount count = -1

 Collecting – Verify imageinfo on database server

 Collecting – Verify imageinfo on database server to compare systemwide

 Collecting – Verify installed rpm(s) kernel type match the active kernel version

 Collecting – Verify no database server kernel out of memory errors

 Collecting – Verify proper ACFS drivers are installed for Spectre v2 mitigation

 Collecting – Verify service exachkcfg autostart status on database server

 Collecting – Verify the localhost alias is pingable [Database Server]

 Collecting – Verify the InfiniBand Fabric Topology (verify-topology)

 Collecting – Verify the Name Service Cache Daemon (NSCD) configuration

 Collecting – Verify the Subnet Manager is properly disabled [Database Server]

 Collecting – Verify the currently active image status [Database Server]

 Collecting – Verify the ib_sdp module is not loaded into the kernel

 Collecting – Verify the storage servers in use configuration matches across the cluster

 Collecting – Verify the vm.min_free_kbytes configuration

 Collecting – Verify there are no files present that impact normal firmware update procedures [Database Server]

 Collecting – collect time server data [Database Server]

 Collecting – root time zone check

 Collecting – verify asr exadata configuration check via ASREXACHECK on database server

list index out of range

 

Data collections completed. Checking best practices on dm01db02.

————————————————————

 

 FAIL =>     Exadata software version on database server does not meet certified platinum configuration

 FAIL =>     Oracle database does not meet certified platinum configuration for /u01/app/oracle/product/11.2.0.4/dbhome

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on testdb2 instance

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on orcldb2 instance

 INFO =>     Oracle GoldenGate failure prevention best practices

 WARNING =>  Non-default database Services are not configured for orcldb

 WARNING =>  Database parameter processes should be set to recommended value on testdb2 instance

 WARNING =>  Database parameter processes should be set to recommended value on orcldb2 instance

 FAIL =>     _reconnect_to_cell_attempts parameter in cellinit.ora is not set to recommended value

 FAIL =>     Oracle monitoring agent and Operating systems settings on Automatic diagnostic  repository directories are not correct or not all targets have been scanned or not all diagnostic directories found

 FAIL =>     Downdelay attribute is not set to recommended value on bonded client interface

 FAIL =>     One or more of SYSTEM, SYSAUX, USERS, TEMP tablespaces are not of type bigfile for orcldb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for testdb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for orcldb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for testdb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for orcldb

 FAIL =>     Memlock settings do not meet the Oracle best practice recommendations for /u01/app/oracle/product/11.2.0.4/dbhome

 CRITICAL => System is exposed to Exadata Critical Issue EX58

 CRITICAL => One or more log archive destination and alternate log archive destination settings are not as recommended

 CRITICAL => One or more disk groups which contain critical files do not use high redundancy

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for testdb

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for orcldb

 CRITICAL => Database parameters log_archive_dest_n with Location attribute are not all set to recommended value for orcldb

 CRITICAL => Database parameter Db_create_online_log_dest_n is not set to recommended value for testdb

 CRITICAL => Database control files are not configured as recommended

 WARNING =>  ASM parameter ASM_POWER_LIMIT is not set to the default value.

 INFO =>     While initialization parameter LOG_ARCHIVE_CONFIG is set it should be verified for your environment on Standby Database for testdb

 WARNING =>  Redo log files should be appropriately sized for testdb

 WARNING =>  Redo log files should be appropriately sized for orcldb

 FAIL =>     Database parameter LOG_BUFFER is not set to recommended value on orcldb2 instance

Collecting patch inventory on CRS_HOME /u01/app/11.2.0.4/grid

Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/11.2.0.4/dbhome

 

 

Copying results from dm01db03 and generating report. This might take a while. Be patient.

 

.

============================================================

              Node name – dm01db03

============================================================

 

 Collecting – CPU Information

 Collecting – Clusterware and RDBMS software version

 Collecting – Compute node PCI bus slot speed for infiniband HCAs

 Collecting – Kernel parameters

 Collecting – Maximum number of semaphore sets on system

 Collecting – Maximum number of semaphores on system

 Collecting – OS Packages

 Collecting – Patches for Grid Infrastructure

 Collecting – Patches for RDBMS Home

 Collecting – RDBMS patch inventory

 Collecting – number of semaphore operations per semop system call

 Collecting – CRS user limits configuration

 Collecting – CRS user time zone check

 Collecting – Check alerthistory for non-test open stateless alerts [Database Server]

 Collecting – Check alerthistory for stateful alerts not cleared [Database Server]

 Collecting – Clusterware patch inventory

 Collecting – Exadata Critical Issue DB09

 Collecting – Exadata Critical Issue EX30

 Collecting – Exadata Critical Issue EX36

 Collecting – Exadata Critical Issue EX56

 Collecting – Exadata Critical Issue EX57

 Collecting – Exadata Critical Issue EX58

 Collecting – Exadata critical issue EX48

 Collecting – Exadata critical issue EX55

 Collecting – Exadata software version on database server

 Collecting – Exadata system model number

 Collecting – Exadata version on database server

 Collecting – HCA firmware version on database server

 Collecting – HCA transfer rate on database server

 Collecting – Infrastructure Software and Configuration for compute

 Collecting – MaxStartups setting in sshd_config

 Collecting – OFED Software version on database server

 Collecting – Obtain hardware information

 Collecting – Operating system and Kernel version on database server

 Collecting – Oracle monitoring agent and/or OS settings on ADR diagnostic directories

 Collecting – Raid controller bus link speed

 Collecting – System Event Log

 Collecting – Validate key sysctl.conf parameters on database servers

 Collecting – Verify Data Network is Separate from Management Network

 Collecting – Verify Database Server Disk Controller Configuration

 Collecting – Verify Database Server Physical Drive Configuration

 Collecting – Verify Database Server Virtual Drive Configuration

 Collecting – Verify Disk Cache Policy on database server

 Collecting – Verify Hardware and Firmware on Database and Storage Servers (CheckHWnFWProfile) [Database Server]

 Collecting – Verify ILOM Power Up Configuration for HOST_AUTO_POWER_ON

 Collecting – Verify ILOM Power Up Configuration for HOST_LAST_POWER_STATE

 Collecting – Verify IP routing configuration on database servers

 Collecting – Verify InfiniBand Address Resolution Protocol (ARP) Configuration on Database Servers

 Collecting – Verify Master (Rack) Serial Number is Set [Database Server]

 Collecting – Verify Quorum disks configuration

 Collecting – Verify RAID Controller Battery Temperature [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify TCP Segmentation Offload (TSO) is set to off

 Collecting – Verify available ksplice fixes are installed [Database Server]

 Collecting – Verify basic Logical Volume(LVM) system devices configuration

 Collecting – Verify database server InfiniBand network MTU size

 Collecting – Verify database server disk controllers use writeback cache

 Collecting – Verify database server file systems have Check interval = 0

 Collecting – Verify database server file systems have Maximum mount count = -1

 Collecting – Verify imageinfo on database server

 Collecting – Verify imageinfo on database server to compare systemwide

 Collecting – Verify installed rpm(s) kernel type match the active kernel version

 Collecting – Verify no database server kernel out of memory errors

 Collecting – Verify proper ACFS drivers are installed for Spectre v2 mitigation

 Collecting – Verify service exachkcfg autostart status on database server

 Collecting – Verify the localhost alias is pingable [Database Server]

 Collecting – Verify the InfiniBand Fabric Topology (verify-topology)

 Collecting – Verify the Name Service Cache Daemon (NSCD) configuration

 Collecting – Verify the Subnet Manager is properly disabled [Database Server]

 Collecting – Verify the currently active image status [Database Server]

 Collecting – Verify the ib_sdp module is not loaded into the kernel

 Collecting – Verify the storage servers in use configuration matches across the cluster

 Collecting – Verify the vm.min_free_kbytes configuration

 Collecting – Verify there are no files present that impact normal firmware update procedures [Database Server]

 Collecting – collect time server data [Database Server]

 Collecting – root time zone check

 Collecting – verify asr exadata configuration check via ASREXACHECK on database server

list index out of range

 

 

Data collections completed. Checking best practices on dm01db03.

————————————————————

 

 FAIL =>     Exadata software version on database server does not meet certified platinum configuration

 FAIL =>     Oracle database does not meet certified platinum configuration for /u01/app/oracle/product/11.2.0.4/dbhome

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on testdb3 instance

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on orcldb3 instance

 INFO =>     Oracle GoldenGate failure prevention best practices

 WARNING =>  Non-default database Services are not configured for orcldb

 WARNING =>  Database parameter processes should be set to recommended value on testdb3 instance

 WARNING =>  Database parameter processes should be set to recommended value on orcldb3 instance

 FAIL =>     _reconnect_to_cell_attempts parameter in cellinit.ora is not set to recommended value

 FAIL =>     Oracle monitoring agent and Operating systems settings on Automatic diagnostic  repository directories are not correct or not all targets have been scanned or not all diagnostic directories found

 FAIL =>     Downdelay attribute is not set to recommended value on bonded client interface

 WARNING =>  The IP routing configuration is not correct

 FAIL =>     One or more of SYSTEM, SYSAUX, USERS, TEMP tablespaces are not of type bigfile for orcldb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for testdb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for orcldb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for testdb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for orcldb

 FAIL =>     Memlock settings do not meet the Oracle best practice recommendations for /u01/app/oracle/product/11.2.0.4/dbhome

 CRITICAL => System is exposed to Exadata Critical Issue EX58

 FAIL =>     Management network is not separate from data network

 CRITICAL => One or more log archive destination and alternate log archive destination settings are not as recommended

 CRITICAL => One or more disk groups which contain critical files do not use high redundancy

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for testdb

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for orcldb

 CRITICAL => Database parameters log_archive_dest_n with Location attribute are not all set to recommended value for orcldb

 CRITICAL => Database parameter Db_create_online_log_dest_n is not set to recommended value for testdb

 CRITICAL => Database control files are not configured as recommended

 WARNING =>  ASM parameter ASM_POWER_LIMIT is not set to the default value.

 INFO =>     While initialization parameter LOG_ARCHIVE_CONFIG is set it should be verified for your environment on Standby Database for testdb

 WARNING =>  Redo log files should be appropriately sized for testdb

 WARNING =>  Redo log files should be appropriately sized for orcldb

 FAIL =>     Database parameter LOG_BUFFER is not set to recommended value on orcldb3 instance

Collecting patch inventory on CRS_HOME /u01/app/11.2.0.4/grid

Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/11.2.0.4/dbhome

 

 

Copying results from dm01db04 and generating report. This might take a while. Be patient.

 

.

============================================================

              Node name – dm01db04

============================================================

 

 Collecting – CPU Information

 Collecting – Clusterware and RDBMS software version

 Collecting – Compute node PCI bus slot speed for infiniband HCAs

 Collecting – Kernel parameters

 Collecting – Maximum number of semaphore sets on system

 Collecting – Maximum number of semaphores on system

 Collecting – OS Packages

 Collecting – Patches for Grid Infrastructure

 Collecting – Patches for RDBMS Home

 Collecting – RDBMS patch inventory

 Collecting – number of semaphore operations per semop system call

 Collecting – CRS user limits configuration

 Collecting – CRS user time zone check

 Collecting – Check alerthistory for non-test open stateless alerts [Database Server]

 Collecting – Check alerthistory for stateful alerts not cleared [Database Server]

 Collecting – Clusterware patch inventory

 Collecting – Exadata Critical Issue DB09

 Collecting – Exadata Critical Issue EX30

 Collecting – Exadata Critical Issue EX36

 Collecting – Exadata Critical Issue EX56

 Collecting – Exadata Critical Issue EX57

 Collecting – Exadata Critical Issue EX58

 Collecting – Exadata critical issue EX48

 Collecting – Exadata critical issue EX55

 Collecting – Exadata software version on database server

 Collecting – Exadata system model number

 Collecting – Exadata version on database server

 Collecting – HCA firmware version on database server

 Collecting – HCA transfer rate on database server

 Collecting – Infrastructure Software and Configuration for compute

 Collecting – MaxStartups setting in sshd_config

 Collecting – OFED Software version on database server

 Collecting – Obtain hardware information

 Collecting – Operating system and Kernel version on database server

 Collecting – Oracle monitoring agent and/or OS settings on ADR diagnostic directories

 Collecting – Raid controller bus link speed

 Collecting – System Event Log

 Collecting – Validate key sysctl.conf parameters on database servers

 Collecting – Verify Data Network is Separate from Management Network

 Collecting – Verify Database Server Disk Controller Configuration

 Collecting – Verify Database Server Physical Drive Configuration

 Collecting – Verify Database Server Virtual Drive Configuration

 Collecting – Verify Disk Cache Policy on database server

 Collecting – Verify Hardware and Firmware on Database and Storage Servers (CheckHWnFWProfile) [Database Server]

 Collecting – Verify ILOM Power Up Configuration for HOST_AUTO_POWER_ON

 Collecting – Verify ILOM Power Up Configuration for HOST_LAST_POWER_STATE

 Collecting – Verify IP routing configuration on database servers

 Collecting – Verify InfiniBand Address Resolution Protocol (ARP) Configuration on Database Servers

 Collecting – Verify Master (Rack) Serial Number is Set [Database Server]

 Collecting – Verify Quorum disks configuration

 Collecting – Verify RAID Controller Battery Temperature [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify TCP Segmentation Offload (TSO) is set to off

 Collecting – Verify available ksplice fixes are installed [Database Server]

 Collecting – Verify basic Logical Volume(LVM) system devices configuration

 Collecting – Verify database server InfiniBand network MTU size

 Collecting – Verify database server disk controllers use writeback cache

 Collecting – Verify database server file systems have Check interval = 0

 Collecting – Verify database server file systems have Maximum mount count = -1

 Collecting – Verify imageinfo on database server

 Collecting – Verify imageinfo on database server to compare systemwide

 Collecting – Verify installed rpm(s) kernel type match the active kernel version

 Collecting – Verify no database server kernel out of memory errors

 Collecting – Verify proper ACFS drivers are installed for Spectre v2 mitigation

 Collecting – Verify service exachkcfg autostart status on database server

 Collecting – Verify the localhost alias is pingable [Database Server]

 Collecting – Verify the InfiniBand Fabric Topology (verify-topology)

 Collecting – Verify the Name Service Cache Daemon (NSCD) configuration

 Collecting – Verify the Subnet Manager is properly disabled [Database Server]

 Collecting – Verify the currently active image status [Database Server]

 Collecting – Verify the ib_sdp module is not loaded into the kernel

 Collecting – Verify the storage servers in use configuration matches across the cluster

 Collecting – Verify the vm.min_free_kbytes configuration

 Collecting – Verify there are no files present that impact normal firmware update procedures [Database Server]

 Collecting – collect time server data [Database Server]

 Collecting – root time zone check

 Collecting – verify asr exadata configuration check via ASREXACHECK on database server

list index out of range

 

 

Data collections completed. Checking best practices on dm01db04.

————————————————————

 

 FAIL =>     Exadata software version on database server does not meet certified platinum configuration

 FAIL =>     Oracle database does not meet certified platinum configuration for /u01/app/oracle/product/11.2.0.4/dbhome

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on testdb4 instance

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on orcldb4 instance

 INFO =>     Oracle GoldenGate failure prevention best practices

 WARNING =>  Non-default database Services are not configured for orcldb

 WARNING =>  Database parameter processes should be set to recommended value on testdb4 instance

 WARNING =>  Database parameter processes should be set to recommended value on orcldb4 instance

 FAIL =>     _reconnect_to_cell_attempts parameter in cellinit.ora is not set to recommended value

 FAIL =>     Oracle monitoring agent and Operating systems settings on Automatic diagnostic  repository directories are not correct or not all targets have been scanned or not all diagnostic directories found

 FAIL =>     Downdelay attribute is not set to recommended value on bonded client interface

 WARNING =>  The IP routing configuration is not correct

 FAIL =>     One or more of SYSTEM, SYSAUX, USERS, TEMP tablespaces are not of type bigfile for orcldb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for testdb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for orcldb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for testdb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for orcldb

 FAIL =>     Memlock settings do not meet the Oracle best practice recommendations for /u01/app/oracle/product/11.2.0.4/dbhome

 CRITICAL => System is exposed to Exadata Critical Issue EX58

 FAIL =>     Management network is not separate from data network

 CRITICAL => One or more log archive destination and alternate log archive destination settings are not as recommended

 CRITICAL => One or more disk groups which contain critical files do not use high redundancy

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for testdb

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for orcldb

 CRITICAL => Database parameters log_archive_dest_n with Location attribute are not all set to recommended value for orcldb

 CRITICAL => Database parameter Db_create_online_log_dest_n is not set to recommended value for testdb

 CRITICAL => Database control files are not configured as recommended

 WARNING =>  ASM parameter ASM_POWER_LIMIT is not set to the default value.

 INFO =>     While initialization parameter LOG_ARCHIVE_CONFIG is set it should be verified for your environment on Standby Database for testdb

 WARNING =>  Redo log files should be appropriately sized for testdb

 WARNING =>  Redo log files should be appropriately sized for orcldb

 FAIL =>     Database parameter LOG_BUFFER is not set to recommended value on orcldb4 instance

Collecting patch inventory on CRS_HOME /u01/app/11.2.0.4/grid

Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/11.2.0.4/dbhome

 

 

————————————————————

                      CLUSTERWIDE CHECKS

————————————————————

 

————————————————————

Detailed report (html) –  /u01/app/oracle/oracle.ahf/data/dm01db01/exachk/exachk_dm01db01_orcldb_040620_12376/exachk_dm01db01_orcldb_040620_12376.html

 

 

 

UPLOAD [if required] – /u01/app/oracle/oracle.ahf/data/dm01db01/exachk/exachk_dm01db01_orcldb_040620_12376.zip

 

 

Step 7: Review the Exachk report or Upload file to Oracle Support

 

[root@dm01db01 Exachk]# curl -x webproxy.netsoftmate.com:80 -T /u01/app/oracle/oracle.ahf/data/dm01db01/exachk/exachk_dm01db01_orcldb_040620_12376.zip  -u abdul.mohammed@netsoftmate.com   https://transport.oracle.com/upload/issue/3-XXXXXXXX/ -v
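If your server has direct internet access, the same upload works without the proxy option. A sketch based on the command above (the SR number placeholder is unchanged):

[root@dm01db01 Exachk]# curl -T /u01/app/oracle/oracle.ahf/data/dm01db01/exachk/exachk_dm01db01_orcldb_040620_12376.zip -u abdul.mohammed@netsoftmate.com https://transport.oracle.com/upload/issue/3-XXXXXXXX/ -v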

 

 

Sample Exadata Output:

 

Oracle Autonomous Health Check Installation and Execution | Netsoftmate
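Each WARNING/FAIL finding in the output above maps to a specific check in the HTML report. As an illustration, a database parameter finding such as DB_LOST_WRITE_PROTECT can be reviewed and corrected from SQL*Plus. This is only a sketch; always confirm the recommended value for your environment in the exachk HTML report before changing anything:

SQL> show parameter db_lost_write_protect

SQL> alter system set db_lost_write_protect=typical scope=both sid='*';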

 

To Uninstall AHF

 

[root@dm01db01 ~]# cd /opt/oracle.ahf/ahf/bin

 

[root@dm01db01 bin]# ls -ltr

total 88

-r-xr-xr-x 1 root root 19623 Mar 17 11:25 uninstallahf.sh

-r-xr-xr-x 1 root root 14504 Mar 17 11:25 uninstallahf.pl

-rwxr-xr-x 1 root root  3296 Mar 17 11:25 tfactl

-r-xr-xr-x 1 root root 45597 Mar 17 11:25 installAHF.pl

 

[root@dm01db01 bin]# ./uninstallahf.sh -h

 

   Usage for ./uninstallahf.sh

 

   ./uninstallahf.sh [-local] [-silent] [-deleterepo]

 

        -local            –    Uninstall AHF only on the local node

        -silent           –    Do not ask any uninstall questions

        -deleterepo       –    Delete AHF repository

 

 

   Note: If -local is not passed, AHF will be uninstalled from all configured nodes.

 

 

[root@dm01db01 bin]# ./uninstallahf.sh -deleterepo

Starting AHF Uninstall

AHF will be uninstalled on:

dm01db01

dm01db02

 

Do you want to continue with AHF uninstall ? [Y]|N : Y

 

Stopping AHF service on local node dm01db01…

Stopping TFA Support Tools…

 

 

TFA-00002 Oracle Trace File Analyzer (TFA) is not running

Stopping exachk scheduler …

Removing exachk cache discovery….

No exachk cache discovery found.

 

Removed exachk from inittab

 

Stopping and removing AHF in dm01db02…

TFA-00002 Oracle Trace File Analyzer (TFA) is not running

Removing exachk cache discovery….

Successfully completed exachk cache discovery removal.

 

Removed exachk from inittab

 

Successfully uninstalled AHF on node dm01db02

Removing AHF setup on dm01db01:

Removing /etc/rc.d/rc0.d/K17init.tfa

Removing /etc/rc.d/rc1.d/K17init.tfa

Removing /etc/rc.d/rc2.d/K17init.tfa

Removing /etc/rc.d/rc4.d/K17init.tfa

Removing /etc/rc.d/rc6.d/K17init.tfa

Removing /etc/init.d/init.tfa…

Removing /opt/oracle.ahf/jre

Removing /opt/oracle.ahf/common

Removing /opt/oracle.ahf/bin

Removing /opt/oracle.ahf/python

Removing /opt/oracle.ahf/analyzer

Removing /opt/oracle.ahf/tfa

Removing /opt/oracle.ahf/ahf

Removing /opt/oracle.ahf/exachk

Removing /u01/app/oracle/oracle.ahf/data/dm01db01

Removing /opt/oracle.ahf/install.properties

Removing /u01/app/oracle/oracle.ahf/data/repository

Removing /u01/app/oracle/oracle.ahf/data

Removing /u01/app/oracle/oracle.ahf

Removing AHF Home : /opt/oracle.ahf

 

 

Conclusion:

 

In this article, we have learned how to install, set up and run the Autonomous Health Framework (AHF) on Exadata Database Machines. We have also seen how to uninstall the AHF software.



About Netsoftmate: 

Netsoftmate is an Oracle Gold Partner and a boutique IT services company specializing in installation, implementation and 24/7 support for Oracle Engineered Systems like Oracle Exadata, Oracle Database Appliance, Oracle ZDLRA, Oracle ZFS Storage and Oracle Private Cloud Appliance. Apart from OES, we have specialized teams of  experts providing round the clock remote database administration support for any type of database and cyber security compliance and auditing services.


Feel free to get in touch with us by signing up on the link below – 


Priority Support for Oracle Engineered Systems | Netsoftmate


Database Management Services, Oracle Database Management Solution, Oracle Databases, Oracle Exadata, Oracle Exadata X8M

Cloud is an innovative operational model for IT, and it is transforming how businesses operate. Attain superior results today and plan for a better tomorrow with the help of a cloud-ready IT infrastructure. Oracle Engineered Systems are fabricated, integrated, tested, and optimized to work together, and are co-engineered with Oracle software for cloud integration. Oracle Exadata, as part of the Oracle Engineered Systems family, is the only platform that provides optimal database performance and efficiency for analytics, mixed workloads and OLTP.


Here are the top five reasons why Oracle Exadata is important for business continuity.


5 Reasons why Exadata is Important for Your Business Continuity | Netsoftmate

Oracle Exadata for Business

Information is a critical component of a business, and most of it lives in the databases that power the business's growth. Cloud-ready Oracle Exadata makes sure your business gets the most from this valuable information. Exadata provides amplified database performance, improved efficiency and operational flexibility for strategies around IoT, digital transformation, or agile IT. It reduces the complexity of your database infrastructure and prepares you for cloud migration, improving effectiveness and efficiency and helping you show a financial return.


Enhance Business Operations

Database management is a crucial effort and includes appropriate administration of the database infrastructure. However, generic infrastructure can create problems that delay application deployment and query response times, affecting business and revenue growth.

Oracle Exadata enhances business operations and makes application development teams more productive. It also ensures that database administrator teams become more efficient. It helps in providing more value by establishing new business applications quickly and getting data that supports business operations. Oracle Exadata also improves database management, accessibility and dependability to enhance business operations altogether.




Massively Reduce Capital Expenses

With the increasing demand for data storage comes greater complexity, higher cost and lower efficiency. A larger data center requires more power and floor space to pay for, with no assurance of error-free management.

Oracle Exadata is an engineered system that delivers more database and application performance with less hardware and fewer licenses. This means you get increased productivity, better services and massively reduced capital expenses. Oracle Exadata makes this possible without any increase in headcount or IT specialization. It reduces capital expenditure and operational costs, so you get the most from your Oracle Database licenses.




Deliver Greater Business Value

Oracle Exadata increases business value by reducing deployment time, delivering better performance and enabling deeper customer insights. The long-term costs of DIY infrastructure are 53% higher than those of the integrated Oracle Exadata system. Exadata combines exceptional capabilities with operational automation to enable extreme performance and significant cost savings. It empowers a business to innovate, drive digital transformation and deliver greater business value. According to IDC, Exadata delivers 94% less unplanned downtime thanks to its built-in resilience and redundancy. As a single, tightly integrated system, Exadata is also more secure by design, with integral encryption.



Get the Benefits of Cloud


Exadata Cloud Service delivers the most advanced database cloud by combining the world's top database technology with Exadata, the most powerful database platform, and the simplicity, agility and elasticity of a cloud-based deployment. Businesses can access Oracle databases on Oracle Exadata without capital investments in IT infrastructure such as space, power, compute servers, storage, networks and software. Exadata Cloud Service is fully compatible with on-premises Oracle databases and all existing applications. With Exadata Cloud Service, businesses can easily embrace a pure cloud or hybrid cloud strategy: an environment equivalent to an on-premises Exadata, but in cloud form.


About Netsoftmate Technologies Inc.

Netsoftmate is an Oracle Gold Partner and a boutique IT services company specializing in installation, implementation and 24/7 support for Oracle Engineered Systems like Oracle Exadata, Oracle Database Appliance, Oracle ZDLRA, Oracle ZFS Storage and Oracle Private Cloud Appliance. Apart from OES, we have specialized teams of  experts providing round the clock remote database administration support for any type of database and cyber security compliance and auditing services.

 

Feel free to get in touch with us by signing up on the link below – 


Priority Support for Oracle Engineered Systems | Netsoftmate

Database Management Services, Oracle Database Management Solution, Oracle Exadata

Netsoftmate experts are back with another article that will help you set up and restore a compute node from a snapshot backup on Oracle Exadata. In our previous blog we demonstrated a step-by-step process for taking a snapshot-based backup of a compute node to an NFS share.

If you haven’t yet read the previous article, here’s the link for reference – 

Step-by-step guide of Exadata snapshot-based backup of compute node to NFS share

In this article, we will focus on how to set up and restore the compute node from the snapshot backup on a live Oracle Exadata Database Machine.


Introduction


You have an Oracle Exadata compute node snapshot backup, but you don't know the procedure to restore the compute node. How would you restore it?

A snapshot backup is very helpful in case of an OS failure or any other failure that brings down a compute node. With the snapshot backup you can restore the compute node in a few simple steps, without having to go through the complex Oracle Exadata bare metal restore.


Environment Details

 

Exadata Model: X5-2 Full Rack

Exadata Components: 8 Compute nodes, 14 Storage cells & 2 IB switches

Exadata Storage cells: DBM01CEL01 – DBM01CEL14

Exadata Compute nodes: DBM01DB01 – DBM01DB08

Exadata Software Version: 12.1.2.3

Exadata DB Version: 11.2.0.4.180717



Prerequisites

 

  • Root user access on Compute nodes
  • Snapshot backup taken before the failure
  • NFS mount storing the snapshot backup

Note: We cannot use the InfiniBand interface to mount the NFS file system. Only the management interface can be used to mount the NFS file system.


Step 1  

 

Copy snapshot backup to the NFS mount mounted using management interface.

In this example, the NFS share is mounted at the following directory:

 

/nfssa/dm01/os_snapshot
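For reference, a minimal sketch of mounting the NFS share over the management network (the export path and NFS server IP are the same ones used later in Step 3; adjust them for your environment):

[root@dm01db07 ~]# mkdir -p /nfssa/dm01/os_snapshot
[root@dm01db07 ~]# mount -t nfs -o rw,hard,rsize=32768,wsize=32768 10.10.2.21:/export/dm01/os_snapshot /nfssa/dm01/os_snapshot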

 

[root@dm01db07 os_snapshot]# cd /nfssa/dm01/os_snapshot

 

[root@dm01db07 os_snapshot]# ls -lrt|grep Rt_U01

-rw-r--r-- 1 4294967294 4294967294 24268161485 Jun 17 04:36 Rt_U01_20190617_dm01db07_bkp.tar.bz2

 

 

Step 2

Copy diag.iso from MOS or from another good compute node to the NFS mount.

 

[root@dm01db07 os_snapshot]# cd /nfssa/dm01/os_snapshot

 

 [root@dm01db07 os_snapshot]# ls -lrt|grep diag.iso

-r--r----- 1 4294967294 4294967294    78139392 Jul 12  2019 diag.iso
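If you are copying from another good compute node, a minimal sketch (the source path shown is the usual location of diagnostics.iso on Exadata compute nodes, but verify it on your system):

[root@dm01db07 os_snapshot]# scp dm01db06:/opt/oracle.SupportTools/diagnostics.iso /nfssa/dm01/os_snapshot/diag.iso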

 

Step 3:

 

During the restore process you will be prompted to provide the following details. Make a note of these inputs before proceeding to the next step.


i. The full path of the backup: 10.10.2.21:/export/dm01/os_snapshot/Rt_U01_20190617_dm01db07_bkp.tar.bz2

ii. Host IP: 10.2.15

iii. Netmask: 255.255.192

iv. Gateway: 10.2.100

 

Step 4

 

Log in to the ILOM CLI of the node in question, load the diag.iso image and reboot the server as follows:

 

a) Log in to the Oracle ILOM CLI

 

[root@dm01db06 ~]# ssh dm01db06-ilom

Password:

Oracle(R) Integrated Lights Out Manager

Version 3.2.8.24 r114580

Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.

Warning: HTTPS certificate is set to factory default.

Hostname: dm01db06-ilom

 

 

b) Run the following commands in the ILOM CLI to mount the ISO from the NFS server

 

-> cd /SP/services/kvms/host_storage_device/remote/

/SP/services/kvms/host_storage_device/remote

 

-> set server_URI=nfs://10.10.2.21:/export/dm01/os_snapshot/diag.iso

Set 'server_URI' to 'nfs://10.10.2.21:/export/dm01/os_snapshot/diag.iso'

 

-> show server_URI

  /SP/services/kvms/host_storage_device/remote

    Properties:

        server_URI = nfs://10.10.2.21:/export/dm01/os_snapshot/diag.iso

 

c) Enable storage redirection by typing:

 

-> set /SP/services/kvms/host_storage_device/ mode=remote

Set 'mode' to 'remote'

 

 

To view the status of redirection, type the command:

 

-> show /SP/services/kvms/host_storage_device/ status

  /SP/services/kvms/host_storage_device

    Properties:

        status = operational

 

Note – Redirection is active if the status is set to either Operational or Connecting.

 

d) Set the next boot device to cdrom

 

-> set /HOST boot_device=cdrom

Set 'boot_device' to 'cdrom'

 

To verify the next boot device, check:

 

-> show /HOST

 

 /HOST

    Targets:

        console

        diag

        provisioning

 

    Properties:

        boot_device = cdrom

        generate_host_nmi = (Cannot show property)

 

    Commands:

        cd

        set

        show

 

e) Reboot Server

 

-> reset /SYS

Are you sure you want to reset /SYS (y/n)? y

Performing hard reset on /SYS

 

Step 5

 

Start the serial console using the command below:

 

-> start /SP/console
Are you sure you want to start /SP/console (y/n)? y

 

Serial console started. To stop, type ESC (

 

Note: Optionally, you can also start the remote redirection using the web ILOM.


Wait for the server to boot from the diag.iso

On both the Remote Console window and the putty/SSH session window you will see the server going through  BIOS POST, then the kernel boot messages.

At the end of the boot-up sequence, you should see a menu prompt such as the one below:

  • – Input (r) for restore
  • – ‘y’ to continue
  • – Rescue password: sos1Exadata




The next prompt asks for the path of the backup file; provide the value noted in Step 3:

10.10.2.21:/export/dm01/os_snapshot/Rt_U01_20190617_dm01db07_bkp.tar.bz2



The next prompt asks whether to use the existing LVM schema. Type y and hit return.



At the next prompt, enter the interface name and the host IP address, netmask and gateway noted in Step 3.



At the end of this step, the server enters the recovery phase, which may take about 3 hours.




Step 6:

 

When the recovery completes, the login screen appears. Verify the file system.
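A quick way to verify the file systems after logging in; a minimal sketch (VGExaDb is the default Exadata volume group, as seen in the imageinfo output in Step 8):

[root@dm01db07 ~]# df -h
[root@dm01db07 ~]# lvs VGExaDb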


This concludes a successful recovery.

 

Step 7:

 

Disable CD redirection

 

-> set /SP/services/kvms/host_storage_device/ mode=disabled

Set 'mode' to 'disabled'

 

-> show /SP/services/kvms/host_storage_device/ mode

 

  /SP/services/kvms/host_storage_device

    Properties:

        mode = disabled

 

-> set /SP/services/kvms/host_storage_device/remote server_URI=""

Set 'server_URI' to ''

 

-> show /SP/services/kvms/host_storage_device/remote server_URI

 

  /SP/services/kvms/host_storage_device/remote

    Properties:

        server_URI = (none)

 

-> show /HOST

 

/HOST

    Targets:

        console

        diag

        provisioning

 

    Properties:

        boot_device = default

        generate_host_nmi = (Cannot show property)

 

    Commands:

        cd

        set

        show


Reboot the server to boot from the default BIOS boot device

 

-> reset /SYS

Are you sure you want to reset /SYS (y/n)? y

Performing hard reset on /SYS

 

Step 8:

 

Verify server

 

[root@dm01db07 ~]# imageinfo 
Kernel version: 2.6.39-400.294.1.el6uek.x86_64 #1 SMP Wed Jan 11 08:46:38 PST 2017 x86_64 
Image kernel version: 2.6.39-400.294.1.el6uek 
Image version: 12.1.2.3.4.170111 
Image activated: 2017-09-19 13:23:57 -0500 
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1 

If the image status after the restore shows failure, perform the following additional steps to return it to success.

 

Image status: failure

 

  • Run: # /usr/sbin/ubiosconfig export all -x /tmp/bios_current.xml --expert_mode -y
  • If it still fails, reset the SP (see the ILOM sketch after this list) and run the command again.
  • If the command runs successfully without error, reboot the system.
  • After the system comes back up, wait approximately 10 minutes, then check and confirm that the output of the imageinfo command shows "Image status: success".
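If an SP reset is required, it can be issued from the ILOM CLI. A minimal sketch (the ILOM session will drop and become available again after a few minutes):

-> reset /SP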

 

 

Conclusion:

In this article we have learned how to restore an Exadata compute node from a compute node snapshot backup.




Database Management Services, Oracle Database Appliance - ODA, Oracle Database Management Solution, Oracle Databases, Remote Database Management, Technology Consulting Services

In September 2019, Oracle announced Oracle Database Appliance X8-2 (Small, Medium and HA). ODA X8-2 comes with more computing resources compared with the X7-2 models.

Let's take a quick look at a few benefits of ODA, followed by the technical specifications of ODA X8-2 Small/Medium and HA.

Oracle Database Appliance is an Engineered System. Software, server, storage, and networking, all co-engineered and optimized to run Oracle Database and applications.


Benefits of Oracle Database Appliance (ODA):

  1. Software, server, storage, and networking engineered and optimized to run Oracle Database and applications.

  2. Supports Oracle Database Standard Edition, Standard Edition One, Standard Edition 2, and Enterprise Edition. Optimized for Cloud.

  3. Capacity on Demand Licensing – Reduced Cost.

  4. Ease of deployment, patching, management, and support.

  5. Increased performance and reliability with NVMe flash storage.

  6. Reliable hardware architecture with redundant power, cooling, networking, and storage.

  7. Browser User Interface (BUI)


In this article we will compare the technical specifications of the ODA X8-2 model family (Small, Medium and HA). This comparison table comes in handy when you want to quickly look at the resources available for a given model.

 

For more information on the technical specifications, look at the ODA X8-2 HA and Small/Medium data sheets at:

https://www.oracle.com/technetwork/database/database-appliance/oda-x8-2-ha-datasheet-5730739.pdf

https://www.oracle.com/technetwork/database/database-appliance/oda-x8-2sm-datasheet-5730738.pdf



Component | ODA X8-2 Small | ODA X8-2 Medium | ODA X8-2 HA

Database Server | 1 | 1 | 2

Storage Shelf | NA | NA | One 4U DE3-24C storage shelf per system; optional second storage shelf for expansion

Rack Size | One 2RU server | One 2RU server | Two 2RU servers & one 4U storage shelf

Processor | One 16-core Intel Xeon Gold 5218 | Two 16-core Intel Xeon Gold 5218 | Two 16-core Intel Xeon Gold 5218 per server

Physical Memory | 192 GB, expandable to 384 GB | 384 GB, expandable to 768 GB | 384 GB, expandable to 768 GB per server

Storage | Two 6.4 TB NVMe SSDs, 12.8 TB (raw) | Base: two 6.4 TB NVMe SSDs, 12.8 TB (raw) | Base: six 7.68 TB SSDs, 46 TB (raw)

Storage Expansion | Not expandable | Expandable up to 76.8 TB (raw) | Expandable up to 369 TB SSD, or up to 92 TB SSD / 504 TB HDD (raw)

Network | 4 x 10GBase-T ports (RJ45) expandable up to 12, or 2 x 10/25 GbE ports (SFP28) expandable up to 6 | 4 x 10GBase-T ports (RJ45) expandable up to 12, or 2 x 10/25 GbE ports (SFP28) expandable up to 6 | 4 x 10GBase-T ports (RJ45) expandable up to 12, or 2 x 10/25 GbE ports (SFP28) expandable up to 6

Oracle Database | 18c/19c EE & SE2; 12c R1/R2 EE & SE2; 11g R2 EE, SE & SE1 | 18c/19c EE & SE2; 12c R1/R2 EE & SE2; 11g R2 EE, SE & SE1 | 18c/19c EE & SE2; 12c R1/R2 EE & SE2; 11g R2 EE, SE & SE1

Database Deployment | Single Instance | Single Instance | Single Instance, RAC & RAC One Node

Virtualization | Oracle Linux KVM | Oracle Linux KVM | Oracle VM & Oracle Linux KVM

Operating System | Oracle Linux | Oracle Linux | Oracle Linux





Are you and your team considering setting up an Oracle Database Appliance? Let Netsoftmate help you choose the right product, taking into consideration your budget, requirements and usage forecast. Click on the image below to sign up NOW!




Database Management Services, Oracle Database Appliance - ODA, Oracle Database Management Solution, Oracle Databases, Remote Database Management, Technology Consulting Services
In September 2019, Oracle announced Oracle Database Appliance X8-2 (Small, Medium and HA). ODA X8-2 comes with more computing resources compared with X7-2 Models.


Let's take a quick look at a few benefits of ODA, followed by the technical specifications of ODA X8-2 Small/Medium and HA.


Oracle Database Appliance is an Engineered System. Software, server, storage, and networking, all co-engineered and optimized to run Oracle Database and applications.


Benefits of Oracle Database Appliance (ODA):

  1. Software, server, storage, and networking engineered and optimized to run Oracle Database and applications.
  2. Supports Oracle Database Standard Edition, Standard Edition One, Standard Edition 2, and Enterprise Edition.
  3. Optimized for Cloud
  4. Capacity on Demand Licensing – Reduced Cost
  5. Ease of deployment, patching, management, and support
  6. Increased performance and reliability with NVMe flash storage
  7. Reliable hardware architecture with redundant power, cooling, networking, and storage
  8. Browser User Interface (BUI)


Oracle Database Appliance X8-2 HA Benefits & Technical specification


  1. Support mission-critical applications and consolidation of many databases
  2. Built for high availability
  3. Choice of high-performance flash or high-capacity drives
  4. 32 cores per server (64 cores in total for 2 servers)
  5. 384 GB physical memory per server, expandable up to 768 GB (1.5 TB of memory in total for 2 servers)
  6. Storage Shelf
  7. High Capacity: 46 TB SSD and 252 TB HDD raw capacity per shelf
  8. High Performance: 184 TB SSD raw capacity per shelf
  9. Choice of 10GBase-T or 10/25 GbE SFP28 public networking
  10. 25GbE interconnect for cluster communication


For more information on the technical specifications, look at the ODA X8-2 HA data sheet at:
https://www.oracle.com/technetwork/database/database-appliance/oda-x8-2-ha-datasheet-5730739.pdf


 
 


 
Oracle Database Appliance X8-2 Small Technical specification

  1. One server
  2. One Intel Xeon processor, 16 cores
  3. 192 GB physical memory, expandable up to 384 GB
  4. Choice of 10GBase-T or 10/25 GbE SFP28 public networking
  5. 12.8 TB NVMe raw storage



Oracle Database Appliance X8-2 Medium Technical specification

  1. One server
  2. Two Intel Xeon processors, 32 cores
  3. 384 GB physical memory, expandable up to 768 GB
  4. Choice of 10GBase-T or 10/25 GbE SFP28 public networking
  5. 12.8 TB NVMe raw storage capacity, with optional expansion to 76.8 TB NVMe raw storage


For more information on the technical specifications, look at the ODA X8-2 S/M data sheet at:
https://www.oracle.com/technetwork/database/database-appliance/oda-x8-2sm-datasheet-5730738.pdf



Conclusion


In this article we have seen the benefits and technical specifications of the latest Oracle Database Appliance X8 model family. ODA is a good fit for all types of businesses, both as an on-premises solution and as a cloud-ready option.



Are you and your team considering setting up an Oracle Database Appliance? Let Netsoftmate help you choose the right product, taking into consideration your budget, requirements and usage forecast. Click on the image below to sign up NOW!




Cloud Services, Oracle Database Management Solution, Oracle Databases, Oracle Exadata


  • What is Oracle Autonomous Database?


    Oracle Autonomous Database allows you to rapidly and easily create mission-critical databases. It protects data from both external and internal threats, automates all infrastructure and database maintenance, recovers from any failure without downtime, and scales online for the highest performance at a lower cost.



    Components of Oracle Autonomous Database:

    An Oracle Autonomous Database comprises three components:

    Oracle Exadata
    Oracle database
    Automated Data Center Operations and Machine Learning

     

    How Does Oracle Autonomous Database Work?

    An Oracle Autonomous Database is self-driving, self-securing and self-repairing.

  • Self-Driving: Automates all database and infrastructure management, patching, query tuning and monitoring.

  • Self-Securing: Protects the database from both external threats and malicious internal users by automatically encrypting data both at rest and in transit.

  • Self-Repairing: Automatically recovers from any failure and protects from all downtime, including planned maintenance.


    Machine Learning:

    Automation built on a machine learning platform gives customers greater database autonomy and capabilities.

  • Workload Optimization: Automatically adapts to changing workloads and optimizes query execution, so customers don't have to tune queries manually.

  • Monitoring & Diagnostics: Detects anomalies and fixes issues to ensure optimal performance and availability, so customers don't have to install monitoring tools or wait for alert notifications.

  • Security: Protects the database from both external attacks and malicious internal users by automatically encrypting data and applying security updates.

     

    Oracle Autonomous Database Family:

  • Oracle Autonomous Data Warehouse (ADW): Optimized for data warehouses, data marts and data lakes. Easy to provision, connect to, load data into and query.

  • Oracle Autonomous Transaction Processing (ATP): Optimized for transaction processing, batch, reporting, mixed workloads, IoT and application development. Easy to provision, connect to, load data into and query.

    Benefits of Autonomous Database:

  • Fast Provisioning: Create the database in minutes, load data and execute queries

  • Autonomous: Automatically tunes queries without DBA intervention

  • Extreme Performance: Run Oracle workloads up to 13x faster on Oracle Exadata


    Steps to create an Autonomous Database:

    It takes just 4 steps to create an Autonomous Database (data warehouse, data mart or OLTP), and in a few minutes customers can have an Autonomous Database ready to connect to, load data into and start using.
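    As an illustration of how little is required, the same provisioning can also be scripted with the OCI CLI. This is only a sketch under assumptions: the CLI is already configured, the compartment OCID, database name and password are placeholders, and the flags should be confirmed with "oci db autonomous-database create --help":

    [oracle@client ~]$ oci db autonomous-database create \
          --compartment-id ocid1.compartment.oc1..example \
          --db-name NSMADW \
          --db-workload DW \
          --cpu-core-count 1 \
          --data-storage-size-in-tbs 1 \
          --admin-password 'YourStrongPassword#1'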




    Migrating to Oracle Autonomous Database:

  • The Oracle Database is the same in the cloud as on-premises, so you can move to the cloud without having to change application code. Quickly obtain environments for testing and development. Move on-premises data onto cloud storage for fast analysis, backup or archiving. Get an enterprise production-ready database in minutes for fast migration to the cloud, with tuning, patching, backup, disaster recovery and high availability handled automatically.


    Oracle Autonomous database Security Capabilities:

  • Autonomous database automatically applies patches and upgrades eliminating human error, keeping the system protected. Oracle Database Vault protects the database from internal administrator access, allows administrators to perform their job, but not access the data itself. By default, Oracle Autonomous database uses TDE to protect data at rest. It also protects data in transit when the client uses SSL/TLS 1.2.
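  • As a simple illustration of the encrypted client connection, an Autonomous Database is typically accessed through its downloaded credentials wallet; a minimal sketch (the wallet directory and the service name nsmadw_high are placeholders):

    $ export TNS_ADMIN=/home/oracle/adb_wallet
    $ sqlplus admin@nsmadw_high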


    Oracle Autonomous Database Deployment option:

  • Autonomous Database Serverless: Simple and elastic. Oracle automates and manages everything; you just choose the compute, storage and region. Start with a minimum of 1 OCPU and a 1-hour minimum commitment, and instantly grow or shrink online.

  • Autonomous Database Dedicated: Provides a Private database cloud running on a dedicated Exadata cloud infrastructure in the Public cloud. Highly isolated and Customizable operation policies. Available as Cloud at Customer solution.


    Which Autonomous Database Deployment is best for me?

    Regardless of which Autonomous Database deployment you choose, you get the same features, functionality, security and performance you have come to expect from the Oracle Database.

    For users who are simply looking for a database for a specific application or project and don't want to be involved in choosing database details like versions or patching, Serverless is the right choice. For users who want to rethink their IT strategy and care about things like patching schedules, software versions and workload isolation, and want to be involved in choosing these, Dedicated is the right choice.


    Conclusion:

    In this article we have learned about Oracle Autonomous database cloud, its components, benefits and capabilities and different autonomous database deployment options available.




Database Management Services, Oracle Database Management Solution, Oracle Databases, Oracle Exadata
 
You will end up performing storage cell rescue under the following situations:

  • Improper Battery Replacement
  • Improper Card Seating
  • Card Damage During Battery Replacement
  • Corrupted Root File System

In this article we will demonstrate step by step process to Rescue an Exadata Storage Cell or server.
 
Open a browser and enter the ILOM hostname or IP address of the Storage cell you want to rescue
https://dm01cel02-ilom.netsoftmate.com
 
Enter root credentials

 
On the left pane under “Remote Control”, click “Redirection”. Select “Use video redirection” and click “Launch Remote Console” button

 
Click OK
 
 Click OK

 
Click Continue

 
Click Run

 
Click Continue (not recommended)

 
From the ILOM video console we can see that the root file system can’t be mounted due to corruption and it will be rebooted again in 60 seconds

 
On the left pane under “Host Management” click on “Power Control”. From the drop down list Select “Power Cycle”

 
Click Save

 
Click OK

 
Rebooting in progress

 
The server is now rebooting

 
 
Immediately press Ctrl+S on keyboard 

 
Select "CELL_USB_BOOT_CELLBOOT_usb_in_rescue_mode"

 
At this point, we have to continue the rescue process using the serial ILOM

 
As root, ssh to the storage cell ILOM and start the serial console
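For reference, a minimal sketch of the commands involved (hostnames match the example cell used in this post):

[root@dm01db01 ~]# ssh root@dm01cel02-ilom
Password:

-> start /SP/console
Are you sure you want to start /SP/console (y/n)? y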

 
Enter r and hit return

 
Enter y and hit return

 
Enter the rescue password sos1exadata. Enter n and hit return

 
Enter the root user password 

 
We are now in rescue mode. At this point, check to make sure that there are no file system issues, and fix any other issues you may have. Consult Oracle Support if required
 
Reboot the server again to complete the rescue process

 
Hit return

 
The server is powered off

 
Power on the server using web ILOM as shown below
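Alternatively, the server can be powered on from the ILOM CLI; a minimal sketch:

-> start /SYS
Are you sure you want to start /SYS (y/n)? y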

 
The rescue process is complete and we get the root login prompt

 
 
Log in to the server as the root user and perform the post-rescue steps

  
Verify the image version of the storage cell

 
 
Post Storage Cell Rescue steps:
 
[root@dm01cel02 ~]# imageinfo

Kernel version: 4.1.12-94.8.4.el6uek.x86_64 #2 SMP Sat May 5 16:14:51 PDT 2018 x86_64
Cell version: OSS_18.1.7.0.0AUG_LINUX.X64_180821
Cell rpm version: cell-18.1.7.0.0_LINUX.X64_180821-1.x86_64

Active image version: 18.1.7.0.0.180821
Active image kernel version: 4.1.12-94.8.4.el6uek
Active image activated: 2019-03-17 03:27:41 -0500
Active image status: success
Active system partition on device: /dev/md5
Active software partition on device: /dev/md7

Cell boot usb partition: /dev/sdm1
Cell boot usb version: 18.1.7.0.0.180821

Inactive image version: undefined
Rollback to the inactive partitions: Impossible


CellCLI> import celldisk all force
No cell disks qualified for this import operation

CellCLI> list physicaldisk
         12:0            PST0XV          normal
         12:1            PZNDSV          normal
         12:2            PT5Z4V          normal
         12:3            PU3XLV          normal
         12:4            PYAKLV          normal
         12:5            PV828V          normal
         12:6            PZE5NV          normal
         12:7            PYV0YV          normal
         12:8            PZKUXV          normal
         12:9            PYD86V          normal
         12:10           PZL15V          normal
         12:11           PZPLAV          normal
         FLASH_1_1       S2T7NCAHA00958  normal
         FLASH_2_1       S2T7NCAHA00986  normal
         FLASH_4_1       S2T7NCAHA00956  normal
         FLASH_5_1       S2T7NCAHA00947  normal

CellCLI> list celldisk
         CD_00_dm01cel02        normal
         CD_01_dm01cel02        normal
         CD_02_dm01cel02        normal
         CD_03_dm01cel02        normal
         CD_04_dm01cel02        normal
         CD_05_dm01cel02        normal
         CD_06_dm01cel02        normal
         CD_07_dm01cel02        normal
         CD_08_dm01cel02        normal
         CD_09_dm01cel02        normal
         CD_10_dm01cel02        normal
         CD_11_dm01cel02        normal
         FD_00_dm01cel02        normal
         FD_01_dm01cel02        normal
         FD_02_dm01cel02        normal
         FD_03_dm01cel02        normal

CellCLI> list griddisk
         DATA_DM01_CD_00_dm01cel02     active
         DATA_DM01_CD_01_dm01cel02     active
         DATA_DM01_CD_02_dm01cel02     active
         DATA_DM01_CD_03_dm01cel02     active
         DATA_DM01_CD_04_dm01cel02     active
         DATA_DM01_CD_05_dm01cel02     active
         DATA_DM01_CD_06_dm01cel02     active
         DATA_DM01_CD_07_dm01cel02     active
         DATA_DM01_CD_08_dm01cel02     active
         DATA_DM01_CD_09_dm01cel02     active
         DATA_DM01_CD_10_dm01cel02     active
         DATA_DM01_CD_11_dm01cel02     active
         DBFS_DG_CD_02_dm01cel02       active
         DBFS_DG_CD_03_dm01cel02       active
         DBFS_DG_CD_04_dm01cel02       active
         DBFS_DG_CD_05_dm01cel02       active
         DBFS_DG_CD_06_dm01cel02       active
         DBFS_DG_CD_07_dm01cel02       active
         DBFS_DG_CD_08_dm01cel02       active
         DBFS_DG_CD_09_dm01cel02       active
         DBFS_DG_CD_10_dm01cel02       active
         DBFS_DG_CD_11_dm01cel02       active
         RECO_DM01_CD_00_dm01cel02     active
         RECO_DM01_CD_01_dm01cel02     active
         RECO_DM01_CD_02_dm01cel02     active
         RECO_DM01_CD_03_dm01cel02     active
         RECO_DM01_CD_04_dm01cel02     active
         RECO_DM01_CD_05_dm01cel02     active
         RECO_DM01_CD_06_dm01cel02     active
         RECO_DM01_CD_07_dm01cel02     active
         RECO_DM01_CD_08_dm01cel02     active
         RECO_DM01_CD_09_dm01cel02     active
         RECO_DM01_CD_10_dm01cel02     active
         RECO_DM01_CD_11_dm01cel02     active


[root@dm01cel02 ~]# cellcli -e list flashcache detail
         name:                   dm01cel02_FLASHCACHE
         cellDisk:               FD_03_dm01cel02,FD_01_dm01cel02,FD_02_dm01cel02,FD_00_dm01cel02
         creationTime:           2019-03-17T03:19:43-05:00
         degradedCelldisks:
         effectiveCacheSize:     11.64312744140625T
         id:                     574c3bd1-7a35-42ba-a03b-75f3a93edac7
         size:                   11.64312744140625T
         status:                 normal

[root@dm01cel02 ~]# cellcli -e list flashlog detail
         name:                   dm01cel02_FLASHLOG
         cellDisk:               FD_03_dm01cel02,FD_00_dm01cel02,FD_01_dm01cel02,FD_02_dm01cel02
         creationTime:           2019-03-17T03:19:43-05:00
         degradedCelldisks:
         effectiveSize:          512M
         efficiency:             100.0
         id:                     73cd8288-c6d8-42c3-95a1-97ce287cf7d0
         size:                   512M
         status:                 normal
 
SQL> select a.name,b.path,b.state,b.mode_status,b.failgroup
    from v$asm_diskgroup a, v$asm_disk b
    where a.group_number=b.group_number
    and b.failgroup='dm01cel02'
    order by 2,1;

no rows selected


SQL> alter diskgroup DBFS_DG add disk 'o/192.168.1.1;192.168.1.2/DBFS_DG_*_dm01cel02' force;

Diskgroup altered.

 
SQL> alter diskgroup DATA_DM01 add disk 'o/192.168.1.1;192.168.1.2/DATA_DM01_*_dm01cel02' force;

Diskgroup altered.

 
SQL> alter diskgroup RECO_DM01 add disk 'o/192.168.1.1;192.168.1.2/RECO_DM01_*_dm01cel02' force;

Diskgroup altered.

 
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
———— —– —- ———- ———- ———- ———- ———- ———– ——————————————–
           1 REBAL RUN           4          4     204367    3521267      13041         254
           3 REBAL WAIT          4


 
SQL> select * from v$asm_operation;

no rows selected


SQL> col path for a70
SQL> set lines 200
SQL> set pages 200
SQL> select a.name,b.path,b.state,b.mode_status,b.failgroup
    from v$asm_diskgroup a, v$asm_disk b
    where a.group_number=b.group_number
    and b.failgroup='dm01cel02'
    order by 2,1;  2    3    4    5

NAME                           PATH                                                                   STATE    MODE_ST FAILGROUP
—————————— ———————————————————————- ——– ——- ——————————
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_00_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_01_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_02_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_03_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_04_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_05_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_06_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_07_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_08_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_09_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_10_dm01cel02              NORMAL   ONLINE  dm01cel02
DATA_DM01                     o/192.168.1.1;192.168.1.2/DATA_DM01_CD_11_dm01cel02              NORMAL   ONLINE  dm01cel02
DBFS_DG                        o/192.168.1.1;192.168.1.2/DBFS_DG_CD_02_dm01cel02                 NORMAL   ONLINE  dm01cel02
DBFS_DG                        o/192.168.1.1;192.168.1.2/DBFS_DG_CD_03_dm01cel02                 NORMAL   ONLINE  dm01cel02
DBFS_DG                        o/192.168.1.1;192.168.1.2/DBFS_DG_CD_04_dm01cel02                 NORMAL   ONLINE  dm01cel02
DBFS_DG                        o/192.168.1.1;192.168.1.2/DBFS_DG_CD_05_dm01cel02                 NORMAL   ONLINE  dm01cel02
DBFS_DG                        o/192.168.1.1;192.168.1.2/DBFS_DG_CD_06_dm01cel02                 NORMAL   ONLINE  dm01cel02
DBFS_DG                        o/192.168.1.1;192.168.1.2/DBFS_DG_CD_07_dm01cel02                 NORMAL   ONLINE  dm01cel02
DBFS_DG                        o/192.168.1.1;192.168.1.2/DBFS_DG_CD_08_dm01cel02                 NORMAL   ONLINE  dm01cel02
DBFS_DG                        o/192.168.1.1;192.168.1.2/DBFS_DG_CD_09_dm01cel02                 NORMAL   ONLINE  dm01cel02
DBFS_DG                        o/192.168.1.1;192.168.1.2/DBFS_DG_CD_10_dm01cel02                 NORMAL   ONLINE  dm01cel02
DBFS_DG                        o/192.168.1.1;192.168.1.2/DBFS_DG_CD_11_dm01cel02                 NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_00_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_01_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_02_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_03_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_04_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_05_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_06_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_07_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_08_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_09_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_10_dm01cel02              NORMAL   ONLINE  dm01cel02
RECO_DM01                     o/192.168.1.1;192.168.1.2/RECO_DM01_CD_11_dm01cel02              NORMAL   ONLINE  dm01cel02

34 rows selected.

 
 
Conclusion
 
In this article we have demonstrated a step-by-step procedure to perform a storage cell rescue. You may have to perform a storage cell rescue for multiple reasons, such as a corrupted root file system, a kernel panic, or a server that keeps rebooting. With the help of the CELLBOOT USB, the storage cell rescue can be performed very easily.
 

Database Management Services, Oracle Database Appliance - ODA, Oracle Database Management Solution, Oracle Databases, Remote Database Management, Uncategorized

ODA is basically a 2-node RAC cluster database system running the Oracle Linux operating system (OEL), Oracle Database Enterprise Edition or Standard Edition, and Oracle Grid Infrastructure (Clusterware and ASM). Together, these provide high availability for the Oracle Databases running on ODA.

 
In 2016, Oracle added 3 new models to expand Oracle Database Appliance portfolio. These 3 new models are:
  • Oracle Database Appliance X6-2S (single-instance database)
  • Oracle Database Appliance X6-2M (single-instance database)
  • Oracle Database Appliance X6-2L (single-instance database)
 
The highly available ODA X6-2 is now known as the X6-2 HA, which consists of 2 nodes and a storage shelf, with an optional additional storage shelf.

 
In October 2017, Oracle announced Oracle Database Appliance X7-2 (Small, Medium and HA). ODA X7-2 comes with more computing resources compared with X6-2 Models.


  • Oracle Database Appliance X7-2S (single-instance database)
  • Oracle Database Appliance X7-2M (single-instance database)
  • Oracle Database Appliance X7-2 HA
With ODA X7-2, the ODA Large configuration is discontinued.
 
 
With the different model families, there is often confusion about which command line tool should be used for managing, monitoring and administering an Oracle Database Appliance.
 
 
 
 
In this article we will explain the different command line tools that can be used to manage and administer Oracle Database Appliance Small, Medium, Large and HA models, for both Bare Metal and Virtualized Platform environments.
 
 
Let’s look at the different command line tools available:
 
OAKCLI: oakcli stands for Oracle Appliance Kit Command Line Interface. The oakcli utility is used to manage Oracle Database Appliance and carries out management tasks such as deploying, patching, validating, monitoring and troubleshooting, as well as creating databases and database homes, configuring the core key, managing virtual machines, and so on.

 
ODACLI: Used for appliance lifecycle and administrative tasks on the Oracle Database Appliance, for example database and database home creation, patching and upgrades, and job creation and management.

 
ODAADMCLI: Used for day-to-day hardware and storage tasks on the Oracle Database Appliance, for example hardware monitoring and storage configuration.
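For instance, a few commonly used commands; a minimal sketch (output omitted, and the job ID is a placeholder; fuller examples follow later in this post):

[root@odanode1 ~]# odacli list-databases
[root@odanode1 ~]# odacli describe-job -i <job-id>
[root@odanode1 ~]# odaadmcli show disk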

The following table provides a quick reference on when to use oakcli Vs. odacli/odaadmcli
 
  • For Oracle Database Appliance software version 12.2.1.4 or older, use the tools as shown in the following table:

oakcli | odacli/odaadmcli
ODA V1 | ODA X6-2 S, M, L
ODA X3-2 | ODA X7-2 S, M
ODA X4-2 | ODA X7-2 HA (Bare Metal only)
ODA X5-2 |
ODA X6-2 HA |
ODA X7-2 HA (VM only) |

  • For Oracle Database Appliance software version 18.3.0.0 and later, use the tools as shown in the following table:

oakcli | odacli/odaadmcli
All hardware versions running the Virtualized Platform | All hardware versions running Bare Metal (physical)
 


Examples using oakcli, odacli and odaadmcli:
 
[root@odanode1 ~]# odacli describe-appliance
 
Appliance Information
—————————————————————-
                     ID: 9aef262c-xxxx-xxxx-xxxx-0d877c03d762
               Platform: ODA
        Data Disk Count: 2
         CPU Core Count: 10
                Created: May 23, 2017 3:08:03 AM CST
 
System Information
—————————————————————-
                   Name: odanode
            Domain Name: netsoftmate.com
              Time Zone: Asia/Pacific
             DB Edition: EE
            DNS Servers: 10.1.1.1
            NTP Servers: ntp1.netsoftmate.com
 
Disk Group Information
—————————————————————-
DG Name                   Redundancy                Percentage
————————- ————————- ————
Data                      Normal                    80
Reco                      Normal                    20
 
 
[root@odanode1 ~]# odaadmcli show disk
        NAME            PATH            TYPE            STATE           STATE_DETAILS
 
        pd_00           /dev/nvme0n1    NVD             ONLINE          Good
        pd_01           /dev/nvme1n1    NVD             ONLINE          Good
 
 
[root@odanode1 ~]# odaadmcli show diskgroup
DiskGroups
———-
DATA
RECO
 
 
[root@odanode1 ~]# odaadmcli show env_hw
BM ODA X6-2 Small
 
 
[root@odanode1 ~]# odaadmcli show storage
==== BEGIN STORAGE DUMP ========
Host Description: Oracle Corporation:ORACLE SERVER X6-2
Total number of controllers: 2
        Id          = 0
        Pci Slot    = 10
        Serial Num  = xxxxxxxxxx
        Vendor      = Samsung
        Model       = MS1PC2DD3ORA3.2T
        FwVers      = KPYABR3Q
        strId       = nvme:19:00.00
        Pci Address = 19:00.0
 
        Id          = 1
        Pci Slot    = 11
        Serial Num  = xxxxxxxxxxx
        Vendor      = Samsung
        Model       = MS1PC2DD3ORA3.2T
        FwVers      = KPYABR3Q
        strId       = nvme:1b:00.00
        Pci Address = 1b:00.0
 
Total number of expanders: 0
Total number of PDs: 2
        /dev/nvme0n1    Samsung           NVD 3200gb slot:  0  pci : 19
        /dev/nvme1n1    Samsung           NVD 3200gb slot:  1  pci : 1b
==== END STORAGE DUMP =========
 
 
[root@odanode1 ~]# oakcli show env_hw
BM ODA X5-2
Public interface : COPPER
 
 
[root@odanode1 ~]# oakcli show oda_base
ODA base domain
ODA base CPU cores :36
ODA base domain memory :362
ODA base template :/OVS/template.tar.gz
ODA base vlans :[‘priv1’, ‘net1’]
ODA base current status :Running
 
 
[root@odanode1 ~]# oakcli show env_hw
VM-oda_base ODA X7-2 HA
 
 
 
Conclusion

In this article we have learned about the Oracle Database Appliance X6-2 and X7-2 model families. We have also learned when to use the different ODA command line tools (oakcli, odacli and odaadmcli) to manage and administer an Oracle Database Appliance.


eBook - Oracle Exadata X8M Patching Recipes | Netsoftmate
 
 
 

Database Management Services, Oracle Database Management Solution, Oracle Databases, Oracle Exadata



An engineered system comprising Compute Nodes, Storage Cells and InfiniBand switches, all packaged inside a single physical cabinet called the "Exadata Rack".


Exadata Hardware Generation At A Glance

 

Exadata Database Machine X8-2




