
Oracle recently introduced the “Autonomous Health Framework” (AHF). Oracle Autonomous Health Framework bundles Oracle ORAchk, Oracle EXAchk, and Oracle Trace File Analyzer (TFA) into a single installation.

Oracle Autonomous Health Framework is available as a value add-on to your existing support contract; there is no additional fee or license required to run it.

 

In this article we will learn, in detail, how to install, set up, and execute AHF on an Oracle Exadata Database Machine.

 

Step 1: Download AHF for the Linux operating system as shown below. Here we use the wget command to download the file directly to the server. If you don't have a proxy, you can download the file from My Oracle Support (MOS) to your desktop and copy it to the server using WinSCP.

 

[root@dm01db01 ~]# cd /u01/app/oracle/software/

 

[root@dm01db01 software]# mkdir Exachk

 

[root@dm01db01 software]# cd Exachk/

 

[root@dm01db01 Exachk]# export use_proxy=on

[root@dm01db01 Exachk]# export http_proxy="webproxy.netsoftmate.com:80/"
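
If your proxy also handles HTTPS traffic (the MOS download URL below is HTTPS), you may need to export https_proxy as well. A minimal sketch, reusing the same illustrative proxy host as above:

[root@dm01db01 Exachk]# export https_proxy="webproxy.netsoftmate.com:80/"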

 

  • Download the AHF zip file

 

[root@dm01db01 Exachk]# wget --http-user=abdul.mohammed@netsoftmate.com --http-password=************ --no-check-certificate --output-document=AHF-LINUX_v20.1.1.zip "https://updates.oracle.com/Orion/Services/download/AHF-LINUX_v20.1.1.zip?aru=23443431&patch_file=AHF-LINUX_v20.1.1.zip"

 

  • Download the latest CVU (Cluster Verification Utility) pack. Exachk uses it to run cluster verification checks.

 

[root@dm01db01 Exachk]# wget --http-user=abdul.mohammed@netsoftmate.com --http-password=************ --no-check-certificate --output-document=cvupack_Linux_x86_64.zip "https://download.oracle.com/otndocs/products/clustering/cvu/cvupack_Linux_x86_64.zip"

 

[root@dm01db01 Exachk]# ls -ltr

total 356748

-rw-r--r-- 1 root root 365267646 Mar 17 16:02 AHF-LINUX_v20.1.1.zip

-rw-r--r-- 1 root root 293648959 Jul 13  2018 cvupack_Linux_x86_64.zip
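
Before unzipping, it is worth confirming that both archives downloaded completely. A quick sketch using unzip's built-in archive test mode (the -t option), run from the same download directory:

[root@dm01db01 Exachk]# unzip -t AHF-LINUX_v20.1.1.zip | tail -1

[root@dm01db01 Exachk]# unzip -t cvupack_Linux_x86_64.zip | tail -1

If a download is intact, each command ends with "No errors detected in compressed data".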

 


Step 2: Unzip the AHF zip file and check the installer version

 

[root@dm01db01 Exachk]# unzip AHF-LINUX_v20.1.1.zip

Archive:  AHF-LINUX_v20.1.1.zip

  inflating: README.txt

  inflating: ahf_setup

 

[root@dm01db01 Exachk]# ./ahf_setup -v

AHF Build ID : 20110020200317092524

AHF Build Platform : Linux

AHF Build Architecture : x86_64
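
The installer will later check that the AHF data directory has at least 5 GB of free space (10 GB recommended), so it can save a restart to verify free space up front. A minimal sketch, assuming the default /opt install location and the /u01/app/oracle data directory chosen in this article:

[root@dm01db01 Exachk]# df -h /opt /u01/app/oracle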

 

Step 3: Execute the AHF setup

 

[root@dm01db01 Exachk]# ./ahf_setup

 

AHF Installer for Platform Linux Architecture x86_64

 

AHF Installation Log : /tmp/ahf_install_344489_2020_04_06-12_20_51.log

 

Starting Autonomous Health Framework (AHF) Installation

 

AHF Version: 20.1.1.0.0 Build Date: 202003170925

 

TFA is already installed at : /u01/app/11.2.0.4/grid/tfa/dm01db01/tfa_home

 

Installed TFA Version : 122111 Build ID : 20170612164756

 

Default AHF Location : /opt/oracle.ahf

 

Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : Y

 

AHF Location : /opt/oracle.ahf

 

AHF Data Directory stores diagnostic collections and metadata.

AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.

 

Choose Data Directory from below options :

 

  1. /u01/app/oracle [Free Space : 50454 MB]
  2. Enter a different Location

 

Choose Option [1 - 2] : 1

 

AHF Data Directory : /u01/app/oracle/oracle.ahf/data

 

exachk scheduler is already running at : /root/Exachk

 

Installed exachk version : EXACHK  VERSION: 19.2.0_20190717

 

Stopping exachk scheduler

 

Copying exachk configuration from /root/Exachk

 

Shutting down TFA : /u01/app/11.2.0.4/grid/tfa/dm01db01/tfa_home

 

Copying TFA Data Files from /u01/app/11.2.0.4/grid/tfa/dm01db01/tfa_home

 

Uninstalling TFA : /u01/app/11.2.0.4/grid/tfa/dm01db01/tfa_home

 

Do you want to add AHF Notification Email IDs ? [Y]|N : Y

 

Enter Email IDs separated by space : abdul.mohammed@netsoftmate.com

 

AHF will also be installed/upgraded on these Cluster Nodes :

 

  1. dm01db02
  2. dm01db03
  3. dm01db04

 

The AHF Location and AHF Data Directory must exist on the above nodes

AHF Location : /opt/oracle.ahf

AHF Data Directory : /u01/app/oracle/oracle.ahf/data

 

Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : Y

 

Extracting AHF to /opt/oracle.ahf

 

Configuring TFA Services

 

Copying TFA Data Files to AHF

 

Discovering Nodes and Oracle Resources

 

 

TFA will configure Storage Cells using SSH Setup:

 

 

.--------------------------------------.
|   | EXADATA CELL  | CURRENT STATUS   |
+---+---------------+------------------+
| 1 | dm01cel01     | ONLINE           |
| 2 | dm01cel02     | ONLINE           |
| 3 | dm01cel03     | ONLINE           |
| 4 | dm01cel04     | ONLINE           |
| 5 | dm01cel05     | ONLINE           |
| 6 | dm01cel06     | ONLINE           |
| 7 | dm01cel07     | ONLINE           |
'---+---------------+------------------'

 

 

Not generating certificates as GI discovered

 

Starting TFA Services

 

.-------------------------------------------------------------------------------.
| Host      | Status of TFA | PID    | Port | Version    | Build ID             |
+-----------+---------------+--------+------+------------+----------------------+
| dm01db01  | RUNNING       | 365382 | 5000 | 20.1.1.0.0 | 20110020200317092524 |
'-----------+---------------+--------+------+------------+----------------------'

 

Running TFA Inventory…

 

Adding default users to TFA Access list…

 

.-----------------------------------------------------------------.
|                  Summary of AHF Configuration                    |
+-----------------+-------------------------------------------------+
| Parameter       | Value                                           |
+-----------------+-------------------------------------------------+
| AHF Location    | /opt/oracle.ahf                                 |
| TFA Location    | /opt/oracle.ahf/tfa                             |
| Exachk Location | /opt/oracle.ahf/exachk                          |
| Data Directory  | /u01/app/oracle/oracle.ahf/data                 |
| Repository      | /u01/app/oracle/oracle.ahf/data/repository      |
| Diag Directory  | /u01/app/oracle/oracle.ahf/data/dm01db01/diag   |
'-----------------+-------------------------------------------------'

 

Retrieving legacy exachk wallet details …

Storing exachk wallet details into AHF config/wallet …

 

Starting exachk daemon from AHF …

 

AHF install completed on dm01db01

 

Installing AHF on Remote Nodes :

 

AHF will be installed on dm01db02, Please wait.

 

Installing AHF on dm01db02 :

 

[dm01db02] Copying AHF Installer

 

[dm01db02] Running AHF Installer

 

AHF will be installed on dm01db03, Please wait.

 

Installing AHF on dm01db03 :

 

[dm01db03] Copying AHF Installer

 

[dm01db03] Running AHF Installer

 

AHF will be installed on dm01db04, Please wait.

 

Installing AHF on dm01db04 :

 

[dm01db04] Copying AHF Installer

 

[dm01db04] Running AHF Installer

 

AHF binaries are available in /opt/oracle.ahf/bin

 

AHF is successfully installed

 

Moving /tmp/ahf_install_251936_2020_04_06-13_07_32.log to /u01/app/oracle/oracle.ahf/data/dm01db01/diag/ahf/

 

Step 4: Verify AHF setup

 

[root@dm01db01 Exachk]# cd /opt/oracle.ahf/

 

[root@dm01db01 oracle.ahf]# ls -ltr

total 36

drwxr-xr-x 7 root root 4096 Nov 19 02:38 python

drwxr-xr-x 5 root root 4096 Mar 17 11:25 ahf

drwxr-xr-x 6 root root 4096 Mar 17 11:25 common

drwxr-x--x 5 root root 4096 Mar 17 11:25 jre

drwxr-xr-x 8 root root 4096 Apr  6 12:28 exachk

drwxr-x--x 2 root root 4096 Apr  6 12:28 analyzer

-rw-r--r-- 1 root root 1057 Apr  6 12:28 install.properties

drwxr-x--x 9 root root 4096 Apr  6 12:28 tfa

drwxr-x--x 2 root root 4096 Apr  6 12:28 bin

 

 

[root@dm01db01 oracle.ahf]# cd exachk/

 

[root@dm01db01 exachk]# ls -ltr

total 81772

-rw-r--r-- 1 root root   186651 Mar 17 11:20 exachk.pyc

-rw-r--r-- 1 root root 65423079 Mar 17 11:23 collections.dat

-rw-r--r-- 1 root root  9674765 Mar 17 11:23 rules.dat

-rw-r--r-- 1 root root  8341706 Mar 17 11:24 Apex5_CollectionManager_App.sql

-rw-r--r-- 1 root root    43473 Mar 17 11:24 sample_user_defined_checks.xml

-r--r--r-- 1 root root     3217 Mar 17 11:24 user_defined_checks.xsd

drwxr-xr-x 2 root root     4096 Mar 17 11:24 messages

drwxr-xr-x 2 root root     4096 Mar 17 11:25 web

drwxr-xr-x 3 root root     4096 Mar 17 11:25 lib

drwxr-xr-x 2 root root     4096 Mar 17 11:25 build

drwxr-xr-x 2 root root     4096 Apr  6 12:28 bash

-rwxr-xr-x 1 root root    25788 Apr  6 12:28 exachk
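
Beyond listing the directories, you can confirm that the TFA daemon and the exachk scheduler actually came up on all four nodes. A hedged sketch using the wrappers installed under /opt/oracle.ahf/bin (exact output varies by environment and AHF version):

[root@dm01db01 exachk]# /opt/oracle.ahf/bin/tfactl status

[root@dm01db01 exachk]# /opt/oracle.ahf/bin/exachk -v

tfactl status should report TFA as RUNNING on dm01db01 through dm01db04, and exachk -v should print the exachk version bundled with this AHF 20.1.1 release.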

 

Step 5: Unzip the CVU zip file under the AHF home as shown below

 

[root@dm01db01 Exachk]# unzip cvupack_Linux_x86_64.zip -d /opt/oracle.ahf/common/cvu

 

[root@dm01db01 Exachk]# ls -ltr /opt/oracle.ahf/common/cvu

total 92

drwxrwxr-x 7 root root 4096 Jun 13  2018 jdk

drwxrwxr-x 3 root root 4096 Jun 13  2018 srvm

drwxrwxr-x 3 root root 4096 Jun 13  2018 has

drwxrwxr-x 3 root root 4096 Jun 13  2018 crs

drwxrwxr-x 3 root root 4096 Jun 13  2018 suptools

drwxrwxr-x 3 root root 4096 Jun 13  2018 oss

drwxrwxr-x 7 root root 4096 Jun 13  2018 cv

drwxrwxr-x 3 root root 4096 Jun 13  2018 xdk

drwxrwxr-x 2 root root 4096 Jun 13  2018 utl

drwxrwxr-x 4 root root 4096 Jun 13  2018 rdbms

drwxrwxr-x 6 root root 4096 Jun 13  2018 install

drwxrwxr-x 4 root root 4096 Jun 13  2018 deinstall

drwxrwxr-x 4 root root 4096 Jun 13  2018 clone

drwxrwxr-x 8 root root 4096 Jun 13  2018 oui

drwxrwxr-x 3 root root 4096 Jun 13  2018 diagnostics

drwxrwxr-x 3 root root 4096 Jun 13  2018 oracore

drwxrwxr-x 3 root root 4096 Jun 13  2018 nls

drwxrwxr-x 3 root root 4096 Jun 13  2018 jdbc

drwxrwxr-x 3 root root 4096 Jun 13  2018 dbjava

drwxrwxr-x 6 root root 4096 Jun 13  2018 network

drwxrwxr-x 2 root root 4096 Jun 13  2018 jlib

drwxrwxr-x 2 root root 4096 Jun 13  2018 lib

drwxrwxr-x 2 root root 4096 Jun 13  2018 bin

 

 

Note: If you don't download and extract the cvupack, you will get the following warning message when exachk runs.

 

“Either Cluster Verification Utility pack (cvupack) does not exist at /opt/oracle.ahf/common/cvu or it is an old or invalid cvupack”
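
To confirm the pack was extracted where exachk expects it, you can check for the cluvfy binary. A hedged check, assuming the standard cvupack layout that ships cluvfy under the bin directory:

[root@dm01db01 Exachk]# ls -l /opt/oracle.ahf/common/cvu/bin/cluvfy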

 

 

Step 6: Execute Exachk for Exadata

 

[root@dm01db01 ~]# cd /opt/oracle.ahf/exachk/

[root@dm01db01 exachk]# ./exachk

 

 

Checking ssh user equivalency settings on all nodes in cluster for root

 

Node dm01db02 is configured for ssh user equivalency for root user

 

 

Node dm01db03 is configured for ssh user equivalency for root user

 

 

Node dm01db04 is configured for ssh user equivalency for root user

 

Searching for running databases . . . . .

 

.  .  .  .

List of running databases registered in OCR

 

  1. testdb
  2. orcldb
  3. All of above
  4. None of above

 

Select databases from list for checking best practices. For multiple databases, select 3 for All or comma separated number like 1,2 etc [1-4][3].

 

Searching out ORACLE_HOME for selected databases.

 

.  .  .  .  .  .  .

.

 

Checking Status of Oracle Software Stack – Clusterware, ASM, RDBMS

 

.  .  .  . . . .  .  .  . . . .

.  .  .  . . . .  .  .  .  .  .  .  .  .  .  .  .  .  . . . .  .  .  .  .  .  .  .  .  .  .  .  .  . . . .  .  .  .  .  .  .  .  .  .  .  .  .  . . . .  .  .  .  .  .  .  .  .  .  .  .

-------------------------------------------------------------------------------------------------------
                                                 Oracle Stack Status
-------------------------------------------------------------------------------------------------------
  Host Name       CRS Installed  RDBMS Installed    CRS UP    ASM UP  RDBMS UP    DB Instance Name
-------------------------------------------------------------------------------------------------------
  dm01db01                  Yes          Yes          Yes      Yes      Yes          orcldb1 testdb1
  dm01db02                  Yes          Yes          Yes      Yes      Yes          testdb2 orcldb2
  dm01db03                  Yes          Yes          Yes      Yes      Yes          orcldb3 testdb3
  dm01db04                  Yes          Yes          Yes      Yes      Yes          testdb4 orcldb4
-------------------------------------------------------------------------------------------------------

 

 

Copying plug-ins

 

. .

 

Node dm01cel01-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel02-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel03-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel04-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel05-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel06-priv2 is configured for ssh user equivalency for root user

 

 

Node dm01cel07-priv2 is configured for ssh user equivalency for root user

 

 

.  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .

dm01sw-ibb01 is configured for ssh user equivalency for root user

.

dm01sw-iba01 is configured for ssh user equivalency for root user

 

dm01sw-iba01 is configured for ssh user equivalency for root user

 

 

*** Checking Best Practice Recommendations ( Pass / Warning / Fail ) ***

 

.  .

 

Collections and audit checks log file is

/u01/app/oracle/oracle.ahf/data/dm01db01/exachk/exachk_dm01db01_orcldb_040620_12376/log/exachk.log

 

Starting to run exachk in background on dm01db02

 

Starting to run exachk in background on dm01db03

 

 

Starting to run exachk in background on dm01db04

 

 

 

============================================================

              Node name – dm01db01

============================================================

 

 Collecting – ASM Disk Group for Infrastructure Software and Configuration

 Collecting – ASM Diskgroup Attributes

 Collecting – ASM diskgroup usable free space

 Collecting – ASM initialization parameters

 Collecting – Database Parameters for testdb database

 Collecting – Database Parameters for orcldb database

 Collecting – Database Undocumented Parameters for orcldb database

 Collecting – RDBMS Feature Usage for orcldb database

 Collecting – CPU Information

 Collecting – Clusterware and RDBMS software version

 Collecting – Compute node PCI bus slot speed for infiniband HCAs

 Collecting – Kernel parameters

 Collecting – Maximum number of semaphore sets on system

 Collecting – Maximum number of semaphores on system

 Collecting – OS Packages

 Collecting – Patches for Grid Infrastructure

 Collecting – Patches for RDBMS Home

 Collecting – RDBMS patch inventory

 Collecting – Switch Version Information

 Collecting – number of semaphore operations per semop system call

 Collecting – CRS user limits configuration

 Collecting – CRS user time zone check

 Collecting – Check alerthistory for non-test open stateless alerts [Database Server]

 Collecting – Check alerthistory for stateful alerts not cleared [Database Server]

 Collecting – Clusterware patch inventory

 Collecting – Discover switch type(spine or leaf)

 Collecting – Enterprise Manager agent targets

 Collecting – Exadata Critical Issue DB09

 Collecting – Exadata Critical Issue EX30

 Collecting – Exadata Critical Issue EX36

 Collecting – Exadata Critical Issue EX56

 Collecting – Exadata Critical Issue EX57

 Collecting – Exadata Critical Issue EX58

 Collecting – Exadata critical issue EX48

 Collecting – Exadata critical issue EX55

 Collecting – Exadata software version on database server

 Collecting – Exadata system model number

 Collecting – Exadata version on database server

 Collecting – HCA firmware version on database server

 Collecting – HCA transfer rate on database server

 Collecting – Infrastructure Software and Configuration for compute

 Collecting – MaxStartups setting in sshd_config

 Collecting – OFED Software version on database server

 Collecting – Obtain hardware information

 Collecting – Operating system and Kernel version on database server

 Collecting – Oracle monitoring agent and/or OS settings on ADR diagnostic directories

 Collecting – Raid controller bus link speed

 Collecting – Review Non-Exadata components in use on the InfiniBand fabric

 Collecting – System Event Log

 Collecting – Validate key sysctl.conf parameters on database servers

 Collecting – Verify Data Network is Separate from Management Network

 Collecting – Verify Database Server Disk Controller Configuration

 Collecting – Verify Database Server Physical Drive Configuration

 Collecting – Verify Database Server Virtual Drive Configuration

 Collecting – Verify Disk Cache Policy on database server

 Collecting – Verify Hardware and Firmware on Database and Storage Servers (CheckHWnFWProfile) [Database Server]

 Collecting – Verify ILOM Power Up Configuration for HOST_AUTO_POWER_ON

 Collecting – Verify ILOM Power Up Configuration for HOST_LAST_POWER_STATE

 Collecting – Verify IP routing configuration on database servers

 Collecting – Verify InfiniBand Address Resolution Protocol (ARP) Configuration on Database Servers

 Collecting – Verify Master (Rack) Serial Number is Set [Database Server]

 Collecting – Verify Quorum disks configuration

 Collecting – Verify RAID Controller Battery Temperature [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify TCP Segmentation Offload (TSO) is set to off

 Collecting – Verify available ksplice fixes are installed [Database Server]

 Collecting – Verify basic Logical Volume(LVM) system devices configuration

 Collecting – Verify database server InfiniBand network MTU size

 Collecting – Verify database server disk controllers use writeback cache

 Collecting – Verify database server file systems have Check interval = 0

 Collecting – Verify database server file systems have Maximum mount count = -1

 Collecting – Verify imageinfo on database server

 Collecting – Verify imageinfo on database server to compare systemwide

 Collecting – Verify installed rpm(s) kernel type match the active kernel version

 Collecting – Verify key InfiniBand fabric error counters are not present

 Collecting – Verify no database server kernel out of memory errors

 Collecting – Verify proper ACFS drivers are installed for Spectre v2 mitigation

 Collecting – Verify service exachkcfg autostart status on database server

 Collecting – Verify the localhost alias is pingable [Database Server]

 Collecting – Verify the InfiniBand Fabric Topology (verify-topology)

 Collecting – Verify the Master Subnet Manager is running on an InfiniBand switch

 Collecting – Verify the Name Service Cache Daemon (NSCD) configuration

 Collecting – Verify the Subnet Manager is properly disabled [Database Server]

 Collecting – Verify the currently active image status [Database Server]

 Collecting – Verify the ib_sdp module is not loaded into the kernel

 Collecting – Verify the storage servers in use configuration matches across the cluster

 Collecting – Verify the vm.min_free_kbytes configuration

 Collecting – Verify there are no files present that impact normal firmware update procedures [Database Server]

 Collecting – collect time server data [Database Server]

 Collecting – root time zone check

 Collecting – verify asr exadata configuration check via ASREXACHECK on database server

Starting to run root privileged commands in background on storage server dm01cel01 (192.168.1.6)

 

Starting to run root privileged commands in background on storage server dm01cel02 (192.168.1.8)

 

Starting to run root privileged commands in background on storage server dm01cel03 (192.168.1.10)

 

Starting to run root privileged commands in background on storage server dm01cel04 (192.168.1.16)

 

Starting to run root privileged commands in background on storage server dm01cel05 (192.168.1.18)

 

Starting to run root privileged commands in background on storage server dm01cel06 (192.168.1.20)

 

Starting to run root privileged commands in background on storage server dm01cel07 (192.168.1.22)

 

Starting to run root privileged commands in background on infiniband switch (dm01sw-ibb01)

 

Starting to run root privileged commands in background on infiniband switch (dm01sw-iba01)

 

 

Collections from storage server:

————————————————————

 

 

Collections from Infiniband Switch:

————————————————————

 Collecting – Exadata Critical Issue IB5

 Collecting – Exadata Critical Issue IB6

 Collecting – Exadata Critical Issue IB8

 Collecting – Hostname in /etc/hosts

 Collecting – Infiniband Switch NTP configuration

 Collecting – Infiniband subnet manager status

 Collecting – Infiniband switch HCA status

 Collecting – Infiniband switch HOSTNAME configuration

 Collecting – Infiniband switch firmware version

 Collecting – Infiniband switch health

 Collecting – Infiniband switch localtime configuration

 Collecting – Infiniband switch module configuration

 Collecting – Infiniband switch subnet manager configuration

 Collecting – Infiniband switch type(Spine or leaf)

 Collecting – Infrastructure Software and Configuration for switch

 Collecting – Verify average ping times to DNS nameserver [IB Switch]

 Collecting – Verify no IB switch ports disabled due to excessive symbol errors

 Collecting – Verify the localhost alias is pingable [IB Switch]

 Collecting – Verify there are no unhealthy InfiniBand switch sensors

 Collecting – sm_priority configuration on Infiniband switch

 

 

Data collections completed. Checking best practices on dm01db01.

————————————————————

 

 

 

 FAIL =>     Exadata software version on database server does not meet certified platinum configuration

 FAIL =>     Oracle database does not meet certified platinum configuration for /u01/app/oracle/product/11.2.0.4/dbhome

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on testdb1 instance

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on orcldb1 instance

 INFO =>     Oracle GoldenGate failure prevention best practices

 INFO =>     One or more non-default AWR baselines should be created for orcldb

 WARNING =>  Non-default database Services are not configured for orcldb

 WARNING =>  Database parameter processes should be set to recommended value on testdb1 instance

 WARNING =>  Database parameter processes should be set to recommended value on orcldb1 instance

 FAIL =>     _reconnect_to_cell_attempts parameter in cellinit.ora is not set to recommended value

 FAIL =>     Oracle monitoring agent and Operating systems settings on Automatic diagnostic  repository directories are not correct or not all targets have been scanned or not all diagnostic directories found

 FAIL =>     Storage Server user “CELLDIAG” should exist

 FAIL =>     Downdelay attribute is not set to recommended value on bonded client interface

 FAIL =>     One or more of SYSTEM, SYSAUX, USERS, TEMP tablespaces are not of type bigfile for orcldb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for testdb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for orcldb

 WARNING =>  SYS or SYSTEM objects were found to be INVALID for orcldb

 WARNING =>  There are non-Exadata components in use on the InfiniBand fabric

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for testdb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for orcldb

 FAIL =>     Memlock settings do not meet the Oracle best practice recommendations for /u01/app/oracle/product/11.2.0.4/dbhome

 WARNING =>  All disk groups should have compatible.advm attribute set to recommended values

 WARNING =>  All disk groups should have compatible.rdbms attribute set to recommended values

 WARNING =>  Database has one or more dictionary managed tablespace for orcldb

 CRITICAL => System is exposed to Exadata Critical Issue EX58

 CRITICAL => System is exposed to Exadata Critical Issue EX58

 FAIL =>     Some data or temp files are not autoextensible for orcldb

 WARNING =>  Key InfiniBand fabric error counters should not be present

 CRITICAL => One or more log archive destination and alternate log archive destination settings are not as recommended for orcldb

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on testdb1 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on orcldb1 instance

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for testdb

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for orcldb

 CRITICAL => Database parameters log_archive_dest_n with Location attribute are not all set to recommended value for orcldb

 CRITICAL => Database parameter Db_create_online_log_dest_n is not set to recommended value for testdb

 FAIL =>     Flashback on PRIMARY is not configured for orcldb

 FAIL =>     Flashback on STANDBY is not configured for testdb

 INFO =>     Operational Best Practices

 INFO =>     Database Consolidation Best Practices

 INFO =>     Computer failure prevention best practices

 INFO =>     Data corruption prevention best practices

 INFO =>     Logical corruption prevention best practices

 INFO =>     Database/Cluster/Site failure prevention best practices

 INFO =>     Client failover operational best practices

 INFO =>     Verify the percent of available celldisk space used by the griddisks

 WARNING =>  Application objects were found to be invalid for orcldb

 CRITICAL => Database control files are not configured as recommended for testdb

 CRITICAL => Database control files are not configured as recommended for orcldb

 WARNING =>  ASM parameter ASM_POWER_LIMIT is not set to the default value.

 INFO =>     While initialization parameter LOG_ARCHIVE_CONFIG is set it should be verified for your environment on Standby Database for testdb

 WARNING =>  Redo log files should be appropriately sized for testdb

 WARNING =>  Redo log files should be appropriately sized for orcldb

 FAIL =>     Table AUD$[FGA_LOG$] should use Automatic Segment Space Management for orcldb

 INFO =>     Database failure prevention best practices

 WARNING =>  Database has one or more dictionary managed tablespace for orcldb

 FAIL =>     Primary database is not protected with Data Guard (standby database) for real-time data protection and availability for orcldb

 FAIL =>     Database parameter LOG_BUFFER is not set to recommended value on orcldb1 instance

 INFO =>     Storage failures prevention best practices

 INFO =>     Software maintenance best practices

 CRITICAL => The data files should be recoverable for testdb

 CRITICAL => The data files should be recoverable for orcldb

 FAIL =>     FRA space management problem file types are present without an RMAN backup completion within the last 7 days for testdb

 INFO =>     Oracle recovery manager(rman) best practices

 WARNING =>  control_file_record_keep_time should be within recommended range [1-9] for testdb

 INFO =>     Exadata Critical Issues (Doc ID 1270094.1):- DB1-DB4,DB6,DB9-DB44, EX1-EX60 and IB1-IB3,IB5-IB8

Collecting patch inventory on CRS_HOME /u01/app/11.2.0.4/grid

Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/11.2.0.4/dbhome

 

Copying results from dm01db02 and generating report. This might take a while. Be patient.

 

.

============================================================

              Node name – dm01db02

============================================================

 

 Collecting – CPU Information

 Collecting – Clusterware and RDBMS software version

 Collecting – Compute node PCI bus slot speed for infiniband HCAs

 Collecting – Kernel parameters

 Collecting – Maximum number of semaphore sets on system

 Collecting – Maximum number of semaphores on system

 Collecting – OS Packages

 Collecting – Patches for Grid Infrastructure

 Collecting – Patches for RDBMS Home

 Collecting – RDBMS patch inventory

 Collecting – number of semaphore operations per semop system call

 Collecting – CRS user limits configuration

 Collecting – CRS user time zone check

 Collecting – Check alerthistory for non-test open stateless alerts [Database Server]

 Collecting – Check alerthistory for stateful alerts not cleared [Database Server]

 Collecting – Clusterware patch inventory

 Collecting – Exadata Critical Issue DB09

 Collecting – Exadata Critical Issue EX30

 Collecting – Exadata Critical Issue EX36

 Collecting – Exadata Critical Issue EX56

 Collecting – Exadata Critical Issue EX57

 Collecting – Exadata Critical Issue EX58

 Collecting – Exadata critical issue EX48

 Collecting – Exadata critical issue EX55

 Collecting – Exadata software version on database server

 Collecting – Exadata system model number

 Collecting – Exadata version on database server

 Collecting – HCA firmware version on database server

 Collecting – HCA transfer rate on database server

 Collecting – Infrastructure Software and Configuration for compute

 Collecting – MaxStartups setting in sshd_config

 Collecting – OFED Software version on database server

 Collecting – Obtain hardware information

 Collecting – Operating system and Kernel version on database server

 Collecting – Oracle monitoring agent and/or OS settings on ADR diagnostic directories

 Collecting – Raid controller bus link speed

 Collecting – System Event Log

 Collecting – Validate key sysctl.conf parameters on database servers

 Collecting – Verify Data Network is Separate from Management Network

 Collecting – Verify Database Server Disk Controller Configuration

 Collecting – Verify Database Server Physical Drive Configuration

 Collecting – Verify Database Server Virtual Drive Configuration

 Collecting – Verify Disk Cache Policy on database server

 Collecting – Verify Hardware and Firmware on Database and Storage Servers (CheckHWnFWProfile) [Database Server]

 Collecting – Verify ILOM Power Up Configuration for HOST_AUTO_POWER_ON

 Collecting – Verify ILOM Power Up Configuration for HOST_LAST_POWER_STATE

 Collecting – Verify IP routing configuration on database servers

 Collecting – Verify InfiniBand Address Resolution Protocol (ARP) Configuration on Database Servers

 Collecting – Verify Master (Rack) Serial Number is Set [Database Server]

 Collecting – Verify Quorum disks configuration

 Collecting – Verify RAID Controller Battery Temperature [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify TCP Segmentation Offload (TSO) is set to off

 Collecting – Verify available ksplice fixes are installed [Database Server]

 Collecting – Verify basic Logical Volume(LVM) system devices configuration

 Collecting – Verify database server InfiniBand network MTU size

 Collecting – Verify database server disk controllers use writeback cache

 Collecting – Verify database server file systems have Check interval = 0

 Collecting – Verify database server file systems have Maximum mount count = -1

 Collecting – Verify imageinfo on database server

 Collecting – Verify imageinfo on database server to compare systemwide

 Collecting – Verify installed rpm(s) kernel type match the active kernel version

 Collecting – Verify no database server kernel out of memory errors

 Collecting – Verify proper ACFS drivers are installed for Spectre v2 mitigation

 Collecting – Verify service exachkcfg autostart status on database server

 Collecting – Verify the localhost alias is pingable [Database Server]

 Collecting – Verify the InfiniBand Fabric Topology (verify-topology)

 Collecting – Verify the Name Service Cache Daemon (NSCD) configuration

 Collecting – Verify the Subnet Manager is properly disabled [Database Server]

 Collecting – Verify the currently active image status [Database Server]

 Collecting – Verify the ib_sdp module is not loaded into the kernel

 Collecting – Verify the storage servers in use configuration matches across the cluster

 Collecting – Verify the vm.min_free_kbytes configuration

 Collecting – Verify there are no files present that impact normal firmware update procedures [Database Server]

 Collecting – collect time server data [Database Server]

 Collecting – root time zone check

 Collecting – verify asr exadata configuration check via ASREXACHECK on database server

list index out of range

 

Data collections completed. Checking best practices on dm01db02.

————————————————————

 

 FAIL =>     Exadata software version on database server does not meet certified platinum configuration

 FAIL =>     Oracle database does not meet certified platinum configuration for /u01/app/oracle/product/11.2.0.4/dbhome

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on testdb2 instance

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on orcldb2 instance

 INFO =>     Oracle GoldenGate failure prevention best practices

 WARNING =>  Non-default database Services are not configured for orcldb

 WARNING =>  Database parameter processes should be set to recommended value on testdb2 instance

 WARNING =>  Database parameter processes should be set to recommended value on orcldb2 instance

 FAIL =>     _reconnect_to_cell_attempts parameter in cellinit.ora is not set to recommended value

 FAIL =>     Oracle monitoring agent and Operating systems settings on Automatic diagnostic  repository directories are not correct or not all targets have been scanned or not all diagnostic directories found

 FAIL =>     Downdelay attribute is not set to recommended value on bonded client interface

 FAIL =>     One or more of SYSTEM, SYSAUX, USERS, TEMP tablespaces are not of type bigfile for orcldb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for testdb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for orcldb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for testdb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for orcldb

 FAIL =>     Memlock settings do not meet the Oracle best practice recommendations for /u01/app/oracle/product/11.2.0.4/dbhome

 CRITICAL => System is exposed to Exadata Critical Issue EX58

 CRITICAL => One or more log archive destination and alternate log archive destination settings are not as recommended

 CRITICAL => One or more disk groups which contain critical files do not use high redundancy

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on testdb2 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on orcldb2 instance

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for testdb

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for orcldb

 CRITICAL => Database parameters log_archive_dest_n with Location attribute are not all set to recommended value for orcldb

 CRITICAL => Database parameter Db_create_online_log_dest_n is not set to recommended value for testdb

 CRITICAL => Database control files are not configured as recommended

 WARNING =>  ASM parameter ASM_POWER_LIMIT is not set to the default value.

 INFO =>     While initialization parameter LOG_ARCHIVE_CONFIG is set it should be verified for your environment on Standby Database for testdb

 WARNING =>  Redo log files should be appropriately sized for testdb

 WARNING =>  Redo log files should be appropriately sized for orcldb

 FAIL =>     Database parameter LOG_BUFFER is not set to recommended value on orcldb2 instance

Collecting patch inventory on CRS_HOME /u01/app/11.2.0.4/grid

Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/11.2.0.4/dbhome

 

 

Copying results from dm01db03 and generating report. This might take a while. Be patient.

 

.

============================================================

              Node name – dm01db03

============================================================

 

 Collecting – CPU Information

 Collecting – Clusterware and RDBMS software version

 Collecting – Compute node PCI bus slot speed for infiniband HCAs

 Collecting – Kernel parameters

 Collecting – Maximum number of semaphore sets on system

 Collecting – Maximum number of semaphores on system

 Collecting – OS Packages

 Collecting – Patches for Grid Infrastructure

 Collecting – Patches for RDBMS Home

 Collecting – RDBMS patch inventory

 Collecting – number of semaphore operations per semop system call

 Collecting – CRS user limits configuration

 Collecting – CRS user time zone check

 Collecting – Check alerthistory for non-test open stateless alerts [Database Server]

 Collecting – Check alerthistory for stateful alerts not cleared [Database Server]

 Collecting – Clusterware patch inventory

 Collecting – Exadata Critical Issue DB09

 Collecting – Exadata Critical Issue EX30

 Collecting – Exadata Critical Issue EX36

 Collecting – Exadata Critical Issue EX56

 Collecting – Exadata Critical Issue EX57

 Collecting – Exadata Critical Issue EX58

 Collecting – Exadata critical issue EX48

 Collecting – Exadata critical issue EX55

 Collecting – Exadata software version on database server

 Collecting – Exadata system model number

 Collecting – Exadata version on database server

 Collecting – HCA firmware version on database server

 Collecting – HCA transfer rate on database server

 Collecting – Infrastructure Software and Configuration for compute

 Collecting – MaxStartups setting in sshd_config

 Collecting – OFED Software version on database server

 Collecting – Obtain hardware information

 Collecting – Operating system and Kernel version on database server

 Collecting – Oracle monitoring agent and/or OS settings on ADR diagnostic directories

 Collecting – Raid controller bus link speed

 Collecting – System Event Log

 Collecting – Validate key sysctl.conf parameters on database servers

 Collecting – Verify Data Network is Separate from Management Network

 Collecting – Verify Database Server Disk Controller Configuration

 Collecting – Verify Database Server Physical Drive Configuration

 Collecting – Verify Database Server Virtual Drive Configuration

 Collecting – Verify Disk Cache Policy on database server

 Collecting – Verify Hardware and Firmware on Database and Storage Servers (CheckHWnFWProfile) [Database Server]

 Collecting – Verify ILOM Power Up Configuration for HOST_AUTO_POWER_ON

 Collecting – Verify ILOM Power Up Configuration for HOST_LAST_POWER_STATE

 Collecting – Verify IP routing configuration on database servers

 Collecting – Verify InfiniBand Address Resolution Protocol (ARP) Configuration on Database Servers

 Collecting – Verify Master (Rack) Serial Number is Set [Database Server]

 Collecting – Verify Quorum disks configuration

 Collecting – Verify RAID Controller Battery Temperature [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify TCP Segmentation Offload (TSO) is set to off

 Collecting – Verify available ksplice fixes are installed [Database Server]

 Collecting – Verify basic Logical Volume(LVM) system devices configuration

 Collecting – Verify database server InfiniBand network MTU size

 Collecting – Verify database server disk controllers use writeback cache

 Collecting – Verify database server file systems have Check interval = 0

 Collecting – Verify database server file systems have Maximum mount count = -1

 Collecting – Verify imageinfo on database server

 Collecting – Verify imageinfo on database server to compare systemwide

 Collecting – Verify installed rpm(s) kernel type match the active kernel version

 Collecting – Verify no database server kernel out of memory errors

 Collecting – Verify proper ACFS drivers are installed for Spectre v2 mitigation

 Collecting – Verify service exachkcfg autostart status on database server

 Collecting – Verify the localhost alias is pingable [Database Server]

 Collecting – Verify the InfiniBand Fabric Topology (verify-topology)

 Collecting – Verify the Name Service Cache Daemon (NSCD) configuration

 Collecting – Verify the Subnet Manager is properly disabled [Database Server]

 Collecting – Verify the currently active image status [Database Server]

 Collecting – Verify the ib_sdp module is not loaded into the kernel

 Collecting – Verify the storage servers in use configuration matches across the cluster

 Collecting – Verify the vm.min_free_kbytes configuration

 Collecting – Verify there are no files present that impact normal firmware update procedures [Database Server]

 Collecting – collect time server data [Database Server]

 Collecting – root time zone check

 Collecting – verify asr exadata configuration check via ASREXACHECK on database server

list index out of range

 

 

Data collections completed. Checking best practices on dm01db03.

————————————————————

 

 FAIL =>     Exadata software version on database server does not meet certified platinum configuration

 FAIL =>     Oracle database does not meet certified platinum configuration for /u01/app/oracle/product/11.2.0.4/dbhome

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on testdb3 instance

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on orcldb3 instance

 INFO =>     Oracle GoldenGate failure prevention best practices

 WARNING =>  Non-default database Services are not configured for orcldb

 WARNING =>  Database parameter processes should be set to recommended value on testdb3 instance

 WARNING =>  Database parameter processes should be set to recommended value on orcldb3 instance

 FAIL =>     _reconnect_to_cell_attempts parameter in cellinit.ora is not set to recommended value

 FAIL =>     Oracle monitoring agent and Operating systems settings on Automatic diagnostic  repository directories are not correct or not all targets have been scanned or not all diagnostic directories found

 FAIL =>     Downdelay attribute is not set to recommended value on bonded client interface

 WARNING =>  The IP routing configuration is not correct

 FAIL =>     One or more of SYSTEM, SYSAUX, USERS, TEMP tablespaces are not of type bigfile for orcldb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for testdb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for orcldb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for testdb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for orcldb

 FAIL =>     Memlock settings do not meet the Oracle best practice recommendations for /u01/app/oracle/product/11.2.0.4/dbhome

 CRITICAL => System is exposed to Exadata Critical Issue EX58

 FAIL =>     Management network is not separate from data network

 CRITICAL => One or more log archive destination and alternate log archive destination settings are not as recommended

 CRITICAL => One or more disk groups which contain critical files do not use high redundancy

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on testdb3 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on orcldb3 instance

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for testdb

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for orcldb

 CRITICAL => Database parameters log_archive_dest_n with Location attribute are not all set to recommended value for orcldb

 CRITICAL => Database parameter Db_create_online_log_dest_n is not set to recommended value for testdb

 CRITICAL => Database control files are not configured as recommended

 WARNING =>  ASM parameter ASM_POWER_LIMIT is not set to the default value.

 INFO =>     While initialization parameter LOG_ARCHIVE_CONFIG is set it should be verified for your environment on Standby Database for testdb

 WARNING =>  Redo log files should be appropriately sized for testdb

 WARNING =>  Redo log files should be appropriately sized for orcldb

 FAIL =>     Database parameter LOG_BUFFER is not set to recommended value on orcldb3 instance

Collecting patch inventory on CRS_HOME /u01/app/11.2.0.4/grid

Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/11.2.0.4/dbhome

 

 

Copying results from dm01db04 and generating report. This might take a while. Be patient.

 

.

============================================================

              Node name – dm01db04

============================================================

 

 Collecting – CPU Information

 Collecting – Clusterware and RDBMS software version

 Collecting – Compute node PCI bus slot speed for infiniband HCAs

 Collecting – Kernel parameters

 Collecting – Maximum number of semaphore sets on system

 Collecting – Maximum number of semaphores on system

 Collecting – OS Packages

 Collecting – Patches for Grid Infrastructure

 Collecting – Patches for RDBMS Home

 Collecting – RDBMS patch inventory

 Collecting – number of semaphore operations per semop system call

 Collecting – CRS user limits configuration

 Collecting – CRS user time zone check

 Collecting – Check alerthistory for non-test open stateless alerts [Database Server]

 Collecting – Check alerthistory for stateful alerts not cleared [Database Server]

 Collecting – Clusterware patch inventory

 Collecting – Exadata Critical Issue DB09

 Collecting – Exadata Critical Issue EX30

 Collecting – Exadata Critical Issue EX36

 Collecting – Exadata Critical Issue EX56

 Collecting – Exadata Critical Issue EX57

 Collecting – Exadata Critical Issue EX58

 Collecting – Exadata critical issue EX48

 Collecting – Exadata critical issue EX55

 Collecting – Exadata software version on database server

 Collecting – Exadata system model number

 Collecting – Exadata version on database server

 Collecting – HCA firmware version on database server

 Collecting – HCA transfer rate on database server

 Collecting – Infrastructure Software and Configuration for compute

 Collecting – MaxStartups setting in sshd_config

 Collecting – OFED Software version on database server

 Collecting – Obtain hardware information

 Collecting – Operating system and Kernel version on database server

 Collecting – Oracle monitoring agent and/or OS settings on ADR diagnostic directories

 Collecting – Raid controller bus link speed

 Collecting – System Event Log

 Collecting – Validate key sysctl.conf parameters on database servers

 Collecting – Verify Data Network is Separate from Management Network

 Collecting – Verify Database Server Disk Controller Configuration

 Collecting – Verify Database Server Physical Drive Configuration

 Collecting – Verify Database Server Virtual Drive Configuration

 Collecting – Verify Disk Cache Policy on database server

 Collecting – Verify Hardware and Firmware on Database and Storage Servers (CheckHWnFWProfile) [Database Server]

 Collecting – Verify ILOM Power Up Configuration for HOST_AUTO_POWER_ON

 Collecting – Verify ILOM Power Up Configuration for HOST_LAST_POWER_STATE

 Collecting – Verify IP routing configuration on database servers

 Collecting – Verify InfiniBand Address Resolution Protocol (ARP) Configuration on Database Servers

 Collecting – Verify Master (Rack) Serial Number is Set [Database Server]

 Collecting – Verify Quorum disks configuration

 Collecting – Verify RAID Controller Battery Temperature [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify RAID disk controller CacheVault capacitor condition [Database Server]

 Collecting – Verify TCP Segmentation Offload (TSO) is set to off

 Collecting – Verify available ksplice fixes are installed [Database Server]

 Collecting – Verify basic Logical Volume(LVM) system devices configuration

 Collecting – Verify database server InfiniBand network MTU size

 Collecting – Verify database server disk controllers use writeback cache

 Collecting – Verify database server file systems have Check interval = 0

 Collecting – Verify database server file systems have Maximum mount count = -1

 Collecting – Verify imageinfo on database server

 Collecting – Verify imageinfo on database server to compare systemwide

 Collecting – Verify installed rpm(s) kernel type match the active kernel version

 Collecting – Verify no database server kernel out of memory errors

 Collecting – Verify proper ACFS drivers are installed for Spectre v2 mitigation

 Collecting – Verify service exachkcfg autostart status on database server

 Collecting – Verify the localhost alias is pingable [Database Server]

 Collecting – Verify the InfiniBand Fabric Topology (verify-topology)

 Collecting – Verify the Name Service Cache Daemon (NSCD) configuration

 Collecting – Verify the Subnet Manager is properly disabled [Database Server]

 Collecting – Verify the currently active image status [Database Server]

 Collecting – Verify the ib_sdp module is not loaded into the kernel

 Collecting – Verify the storage servers in use configuration matches across the cluster

 Collecting – Verify the vm.min_free_kbytes configuration

 Collecting – Verify there are no files present that impact normal firmware update procedures [Database Server]

 Collecting – collect time server data [Database Server]

 Collecting – root time zone check

 Collecting – verify asr exadata configuration check via ASREXACHECK on database server

list index out of range

 

 

Data collections completed. Checking best practices on dm01db04.

————————————————————

 

 FAIL =>     Exadata software version on database server does not meet certified platinum configuration

 FAIL =>     Oracle database does not meet certified platinum configuration for /u01/app/oracle/product/11.2.0.4/dbhome

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on testdb4 instance

 WARNING =>  Database parameter AUDIT_SYS_OPERATIONS should be set to the recommended value on orcldb4 instance

 INFO =>     Oracle GoldenGate failure prevention best practices

 WARNING =>  Non-default database Services are not configured for orcldb

 WARNING =>  Database parameter processes should be set to recommended value on testdb4 instance

 WARNING =>  Database parameter processes should be set to recommended value on orcldb4 instance

 FAIL =>     _reconnect_to_cell_attempts parameter in cellinit.ora is not set to recommended value

 FAIL =>     Oracle monitoring agent and Operating systems settings on Automatic diagnostic  repository directories are not correct or not all targets have been scanned or not all diagnostic directories found

 FAIL =>     Downdelay attribute is not set to recommended value on bonded client interface

 WARNING =>  The IP routing configuration is not correct

 FAIL =>     One or more of SYSTEM, SYSAUX, USERS, TEMP tablespaces are not of type bigfile for orcldb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for testdb

 FAIL =>     The initialization parameter cluster_database_instances should be at the default value for orcldb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for testdb

 INFO =>     Database parameter AUDIT_TRAIL should be set to the recommended value for orcldb

 FAIL =>     Memlock settings do not meet the Oracle best practice recommendations for /u01/app/oracle/product/11.2.0.4/dbhome

 CRITICAL => System is exposed to Exadata Critical Issue EX58

 FAIL =>     Management network is not separate from data network

 CRITICAL => One or more log archive destination and alternate log archive destination settings are not as recommended

 CRITICAL => One or more disk groups which contain critical files do not use high redundancy

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter GLOBAL_NAMES is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter PARALLEL_ADAPTIVE_MULTI_USER is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter PARALLEL_THREADS_PER_CPU is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter OS_AUTHENT_PREFIX is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on testdb4 instance

 FAIL =>     Database parameter sql92_security is not set to recommended value on orcldb4 instance

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for testdb

 FAIL =>     Database parameter COMPATIBLE should be set to recommended value for orcldb

 CRITICAL => Database parameters log_archive_dest_n with Location attribute are not all set to recommended value for orcldb

 CRITICAL => Database parameter Db_create_online_log_dest_n is not set to recommended value for testdb

 CRITICAL => Database control files are not configured as recommended

 WARNING =>  ASM parameter ASM_POWER_LIMIT is not set to the default value.

 INFO =>     While initialization parameter LOG_ARCHIVE_CONFIG is set it should be verified for your environment on Standby Database for testdb

 WARNING =>  Redo log files should be appropriately sized for testdb

 WARNING =>  Redo log files should be appropriately sized for orcldb

 FAIL =>     Database parameter LOG_BUFFER is not set to recommended value on orcldb4 instance

Collecting patch inventory on CRS_HOME /u01/app/11.2.0.4/grid

Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/11.2.0.4/dbhome

 

 

————————————————————

                      CLUSTERWIDE CHECKS

————————————————————

 

————————————————————

Detailed report (html) –  /u01/app/oracle/oracle.ahf/data/dm01db01/exachk/exachk_dm01db01_orcldb_040620_12376/exachk_dm01db01_orcldb_040620_12376.html

 

 

 

UPLOAD [if required] – /u01/app/oracle/oracle.ahf/data/dm01db01/exachk/exachk_dm01db01_orcldb_040620_12376.zip

 

 

Step 7: Review the Exachk report or upload the file to Oracle Support

 

[root@dm01db01 Exachk]# curl -x webproxy.netsoftmate.com:80 -T /u01/app/oracle/oracle.ahf/data/dm01db01/exachk/exachk_dm01db01_orcldb_040620_12376.zip  -u abdul.mohammed@netsoftmate.com   https://transport.oracle.com/upload/issue/3-XXXXXXXX/ -v

 

 

Sample Exadata Output:

 


 

To Uninstall AHF

 

[root@dm01db01 ~]# cd /opt/oracle.ahf/ahf/bin

 

[root@dm01db01 bin]# ls -ltr

total 88

-r-xr-xr-x 1 root root 19623 Mar 17 11:25 uninstallahf.sh

-r-xr-xr-x 1 root root 14504 Mar 17 11:25 uninstallahf.pl

-rwxr-xr-x 1 root root  3296 Mar 17 11:25 tfactl

-r-xr-xr-x 1 root root 45597 Mar 17 11:25 installAHF.pl

 

[root@dm01db01 bin]# ./uninstallahf.sh -h

 

   Usage for ./uninstallahf.sh

 

   ./uninstallahf.sh [-local] [-silent] [-deleterepo]

 

        -local            -    Uninstall AHF only on the local node

        -silent           -    Do not ask any uninstall questions

        -deleterepo       -    Delete AHF repository

 

 

   Note: If -local is not passed, AHF will be uninstalled from all configured nodes.

 

 

[root@dm01db01 bin]# ./uninstallahf.sh -deleterepo

Starting AHF Uninstall

AHF will be uninstalled on:

dm01db01

dm01db02

 

Do you want to continue with AHF uninstall ? [Y]|N : Y

 

Stopping AHF service on local node dm01db01…

Stopping TFA Support Tools…

 

 

TFA-00002 Oracle Trace File Analyzer (TFA) is not running

Stopping exachk scheduler …

Removing exachk cache discovery….

No exachk cache discovery found.

 

Removed exachk from inittab

 

Stopping and removing AHF in dm01db02…

TFA-00002 Oracle Trace File Analyzer (TFA) is not running

Removing exachk cache discovery….

Successfully completed exachk cache discovery removal.

 

Removed exachk from inittab

 

Successfully uninstalled AHF on node dm01db02

Removing AHF setup on dm01db01:

Removing /etc/rc.d/rc0.d/K17init.tfa

Removing /etc/rc.d/rc1.d/K17init.tfa

Removing /etc/rc.d/rc2.d/K17init.tfa

Removing /etc/rc.d/rc4.d/K17init.tfa

Removing /etc/rc.d/rc6.d/K17init.tfa

Removing /etc/init.d/init.tfa…

Removing /opt/oracle.ahf/jre

Removing /opt/oracle.ahf/common

Removing /opt/oracle.ahf/bin

Removing /opt/oracle.ahf/python

Removing /opt/oracle.ahf/analyzer

Removing /opt/oracle.ahf/tfa

Removing /opt/oracle.ahf/ahf

Removing /opt/oracle.ahf/exachk

Removing /u01/app/oracle/oracle.ahf/data/dm01db01

Removing /opt/oracle.ahf/install.properties

Removing /u01/app/oracle/oracle.ahf/data/repository

Removing /u01/app/oracle/oracle.ahf/data

Removing /u01/app/oracle/oracle.ahf

Removing AHF Home : /opt/oracle.ahf

 

 

Conclusion:

 

In this article we have learned how to install, set up, and execute the Oracle Autonomous Health Framework (AHF) for Exadata Database Machines. We have also seen how to uninstall the AHF software.



About Netsoftmate: 

Netsoftmate is an Oracle Gold Partner and a boutique IT services company specializing in installation, implementation, and 24/7 support for Oracle Engineered Systems such as Oracle Exadata, Oracle Database Appliance, Oracle ZDLRA, Oracle ZFS Storage, and Oracle Private Cloud Appliance. Apart from Oracle Engineered Systems, we have specialized teams of experts providing round-the-clock remote database administration support for any type of database, as well as cyber security compliance and auditing services.


Feel free to get in touch with us by signing up on the link below – 


Priority Support for Oracle Engineered Systems | Netsoftmate


Database Management Services, Oracle Database Appliance - ODA, Oracle Database Management Solution, Oracle Databases, Remote Database Management, Technology Consulting Services
In September 2019, Oracle announced Oracle Database Appliance X8-2 (Small, Medium and HA). ODA X8-2 comes with more computing resources compared with the X7-2 models.


Let’s take a quick look at a few benefits of ODA, followed by the technical specifications of ODA X8-2 Small, Medium and HA.


Oracle Database Appliance is an Engineered System. Software, server, storage, and networking, all co-engineered and optimized to run Oracle Database and applications.


Benefits of Oracle Database Appliance (ODA):

  1. Software, server, storage, and networking engineered and optimized to run Oracle Database and applications.
  2. Supports Oracle Database Standard Edition, Standard Edition One, Standard Edition 2, and Enterprise Edition.
  3. Optimized for Cloud
  4. Capacity on Demand Licensing – Reduced Cost
  5. Ease of deployment, patching, management, and support
  6. Increased performance and reliability with NVMe flash storage
  7. Reliable hardware architecture with redundant power, cooling, networking, and storage
  8. Browser User Interface (BUI)


Oracle Database Appliance X8-2 HA Benefits & Technical specification


  1. Support mission-critical applications and consolidation of many databases
  2. Built for high availability
  3. Choice of high-performance flash or high-capacity drives
  4. 32 cores per server (64 cores in total for 2 servers)
  5. 384 GB physical memory per server, expandable up to 768 GB (up to 1.5 TB in total for 2 servers)
  6. Storage Shelf
  7. High Capacity: 46 TB SSD and 252 TB HDD raw capacity per shelf
  8. High Performance: 184 TB SSD raw capacity per shelf
  9. Choice of 10GBase-T or 10/25 GbE SFP28 public networking
  10. 25GbE interconnect for cluster communication


For more information on the technical specification, look at the ODA X8-2 HA Data Sheet at:
https://www.oracle.com/technetwork/database/database-appliance/oda-x8-2-ha-datasheet-5730739.pdf


 
 


 
Oracle Database Appliance X8-2 Small Technical specification

  1. One server
  2. 1 Intel Xeon processor, 16 Cores
  3. 192 GB physical memory, expandable up to 384 GB
  4. Choice of 10GBase-T or 10/25 GbE SFP28 public networking
  5. 12.8TB NVMe raw storage



Oracle Database Appliance X8-2 Medium Technical specification

  1. One server
  2. 2 Intel Xeon processor, 32 Cores
  3. 384 GB physical memory, expandable up to 768 GB
  4. Choice of 10GBase-T or 10/25 GbE SFP28 public networking
  5. 12.8 TB NVMe raw storage capacity with optional expansion to 76.8 TB NVMe raw storage


For more information on the technical specification, look at the ODA X8-2 S/M Data Sheet at:
https://www.oracle.com/technetwork/database/database-appliance/oda-x8-2sm-datasheet-5730738.pdf



Conclusion


In this article we have seen the benefits and the technical specifications of the latest Oracle Database Appliance X8 model family. ODA is the right choice for all types of businesses as an on-premises solution with a cloud-ready option.



Are you and your team considering setting up Oracle Database Appliance? Let Netsoftmate help you choose the right product, taking into consideration your budget, requirements, and usage forecast. Click on the image below to sign up NOW!





In the previous articles of the Oracle DBCS: Create Virtual Image Database Deployment series, we have learned how to:

Oracle DBCS : Create Virtual Image Database Deployment – Part 1
https://netsoftmate.blogspot.in/2018/02/oracle-dbcs-create-virtual-image-database-deployment.html

Oracle DBCS : Create Virtual Image Database Deployment – Part 2
https://netsoftmate.blogspot.in/2018/03/oracle-dbcs-create-virtual-image-database-deployment-part2.html

Oracle DBCS : Create Virtual Image Database Deployment – Part 3
https://netsoftmate.blogspot.in/2018/03/oracle-dbcs-create-virtual-image-database-deployment-part3.html

In this article we will learn the final step: how to create a database in a Virtual Image Database Deployment.

Prerequisites

  • Create Virtual Image Database Deployment
  • Create Storage Volumes for Oracle Database Software and Database Files
  • Install Oracle Database Software

Steps to Create a Database in a Virtual Image Database Deployment:

  • Get the IP address of the Compute node you want to connect from Oracle Database Cloud Service Console. Here my Deployment Service Name is “NSM-DBaaS-VM” and the IP address is 144.21.72.128

  • On the left pane, expand “SSH” and select “Auth”. On the right pane, click on “Browse” button. Select the Private Key that matches the Public Key for your Deployment. Click “Open”

  • Enter login as “opc” user. You will be connected without prompting for the password. Switch to the “root” user and “oracle user”. Verify no database is running currently.

  • Login as oracle user and set Oracle Home, Oracle Base and PATH variable. Make sure Oracle Executable dbca is set in PATH correctly.

  • Start dbca in silent mode by providing the values on the command line as shown below:

  • Connect to the database and verify the status
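Since the original screenshots are not reproduced here, the following is a minimal sketch of the environment setup, the silent dbca run, and the verification described above, assuming an 11.2 home. The Oracle home path, database name, passwords, and dbca options are placeholders; substitute the values for your own deployment.

[oracle@NSM-DBaaS-VM ~]$ export ORACLE_BASE=/u01/app/oracle
[oracle@NSM-DBaaS-VM ~]$ export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
[oracle@NSM-DBaaS-VM ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@NSM-DBaaS-VM ~]$ which dbca                      # confirm dbca is picked up from the new home

[oracle@NSM-DBaaS-VM ~]$ dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbname nsmdb -sid nsmdb \
    -sysPassword "********" -systemPassword "********" \
    -storageType FS -datafileDestination /u02/app/oracle/oradata \
    -characterSet AL32UTF8 -emConfiguration NONE

[oracle@NSM-DBaaS-VM ~]$ sqlplus / as sysdba
SQL> select name, open_mode from v$database;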



Conclusion

In this article we have learned how to create a database in Virtual Image Database Deployment.


In the previous articles we learned how to create a Virtual Image deployment, scale up storage using the Oracle Database Cloud Service console, and create one storage volume for the Oracle Database software and one storage volume for the database files, and prepare them for use.

Oracle DBCS : Create Virtual Image Database Deployment – Part 1
https://netsoftmate.blogspot.in/2018/02/oracle-dbcs-create-virtual-image-database-deployment.html

Oracle DBCS : Create Virtual Image Database Deployment – Part 2


https://netsoftmate.blogspot.in/2018/03/oracle-dbcs-create-virtual-image-database-deployment-part2.html

In this article we will learn how to install the Oracle Database software.

Prerequisites

  • Create Virtual Image Database Deployment
  • Create Storage Volumes for Oracle Database Software and Database Files

Steps to Install Oracle Database Software in Virtual Image Database Deployment.

  • Get the IP address of the Compute node you want to connect from Oracle Database Cloud Service Console. Here my Deployment Service Name is “NSM-DBaaS-VM” and the IP address is 144.21.72.128

  • On the left pane, expand “SSH” and select “Auth”. On the right pane, click on “Browse” button. Select the Private Key that matches the Public Key for your Deployment. Click “Open”

  • Enter login as “opc” user. You will be connected without prompting for the password. Switch to the “root” user and “oracle user”

  • Login as the oracle user, change directory to /u01 and locate the zip file containing the Oracle Database software by listing the contents of the /scratch/db directory. Extract the zip file into the current directory /u01 (see the example after this list)

  • Ensure that the Oracle software is extracted correctly

  • Switch to root user and execute the set-up scripts as follows. Exit the session
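As a rough illustration of the extract and set-up steps above; the zip file name and the script paths are assumptions, so use the names you actually find under /scratch/db and in the extracted Oracle home.

[oracle@NSM-DBaaS-VM ~]$ cd /u01
[oracle@NSM-DBaaS-VM u01]$ ls /scratch/db                              # locate the database software zip
[oracle@NSM-DBaaS-VM u01]$ unzip -q /scratch/db/db11204_software.zip   # placeholder file name
[oracle@NSM-DBaaS-VM u01]$ ls app/oracle/product                       # confirm the Oracle home was extracted under /u01

# as root, run the set-up scripts and then exit the session
[root@NSM-DBaaS-VM ~]# /u01/app/oraInventory/orainstRoot.sh
[root@NSM-DBaaS-VM ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
[root@NSM-DBaaS-VM ~]# exit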


Conclusion



In this article we have learned how to install the Oracle Database software on a Virtual Image Database Deployment. Installing the Oracle software in a Virtual Image deployment is very easy; you just need to locate the correct Oracle software version and extract it into the correct directory.

In the previous article “Oracle DBCS : Create Virtual Image Database Deployment – Part 1” we learned how to create a Virtual Image Database Deployment using the Create Service wizard.
https://netsoftmate.blogspot.in/2018/02/oracle-dbcs-create-virtual-image-database-deployment.html

In this article we will scale up storage using the Oracle Database Cloud Service console and create one storage volume for the Oracle Database software and one storage volume for all database files, and prepare them for use. This is part 2, a continuation of the previous article.


Prerequisites
Create Virtual Image Database Deployment


Steps to Scale up Storage using Oracle Database Cloud Service Console in Virtual Image Database Deployment.


  • Open a web browser and enter the URL you received in the Welcome email to login to Oracle Cloud Account
https://myservices-xxxxx-xxxxxxxxxxef4b21bb7ee3b2cf4123d1.console.oraclecloud.com/mycloud/faces/dashboard.jspx


  • Enter your username and password


  • On the home page, Click “Menu” under “Database” Cloud Service as shown below


  • Click “Open Service Console”


  • Currently the Storage is 32GB. Let’s Scale Up the Storage. Click on the Instance Name


  • Click on the “Menu” icon and Select “Scale Up/Down”

  • Here I am adding an additional 30 GB of storage. Click “Yes, Scale Up/Down Service”


  • A message is printed on the screen: “Service scale up/down request is accepted”. The instance status changes to “Service Maintenance”


  • After some time we can see that the storage has been scaled up to 62 GB. Click on the instance to add more storage


  • Click on the “Menu” icon and Select “Scale Up/Down”


  • This time I am adding an additional 50 GB of storage. Click “Yes, Scale Up/Down Service”


  • A message is printed on the screen: “Service scale up/down request is accepted”. The instance status changes to “Service Maintenance”


  • After some time we can see that the storage has been scaled up to 112 GB.


  • Get the IP address of the Compute node you want to connect from Oracle Database Cloud Service Console. Here my Deployment Service Name is “NSM-DBaaS-VM” and the IP address is 144.21.72.128


  • On the left pane, expand “SSH” and select “Auth”. On the right pane, click on “Browse” button. Select the Private Key that matches the Public Key for your Deployment. Click “Open”


  • Enter login as “opc” user. You will be connected without prompting for the password. Switch to the “root” user


  • Display the list of block devices; the two volumes created are xvdc and xvdd (see the example after this list)


  • First format the volume for the Oracle Database software and mount it as /u01 as shown below




  • Now format the volume for the database files, and mount it as /u02 as shown below




  • Verify the mount points and display the block devices


  • Update the /etc/fstab file so new mount points get mounted automatically whenever the VM is rebooted
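Putting the storage steps above together, the commands might look roughly like the sketch below. The device names xvdc and xvdd come from the steps above; the ext4 file system type and the fstab mount options are assumptions, so adapt them to your environment.

[root@NSM-DBaaS-VM ~]# lsblk                              # list block devices; the new volumes appear as xvdc and xvdd
[root@NSM-DBaaS-VM ~]# mkfs -t ext4 /dev/xvdc             # format the volume for the Oracle Database software
[root@NSM-DBaaS-VM ~]# mkdir -p /u01
[root@NSM-DBaaS-VM ~]# mount /dev/xvdc /u01
[root@NSM-DBaaS-VM ~]# mkfs -t ext4 /dev/xvdd             # format the volume for the database files
[root@NSM-DBaaS-VM ~]# mkdir -p /u02
[root@NSM-DBaaS-VM ~]# mount /dev/xvdd /u02
[root@NSM-DBaaS-VM ~]# df -h /u01 /u02                    # verify the mount points

# sample /etc/fstab entries so the volumes are mounted automatically after a reboot
/dev/xvdc   /u01   ext4   defaults,nofail   0   2
/dev/xvdd   /u02   ext4   defaults,nofail   0   2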



Conclusion

In this article we have learned how to Scale Up Storage using Oracle Database Cloud Service console and create storage volumes for Oracle Database Software and Database files.


When you create an Oracle Database Cloud Service – Virtual Image Database Deployment, the following tasks are completed for you:
  • Compute Allocated
  • Storage Allocated
  • Virtual Machine Image Installed
  • Included software to create Oracle Database

You are responsible for connecting to the VM, creating the database, and performing maintenance operations such as backup, patching, and upgrade.

Note that Oracle Database 12c Release 2 (12.2) is not available for Oracle Database Cloud Service – Virtual Image service level.


When you create a database deployment on Oracle Database Cloud Service using the Virtual Image service level, Oracle Database software is not automatically installed and no database is created. You must perform these steps manually after the deployment is created.



To create a database on a Virtual Image Database Deployment, you perform these tasks:
  1. Create Virtual Image Database Deployment service level
  2. Create storage volumes for the Oracle Database software and for the database files, and then format and mount them
  3. Stage the Oracle Database software on the mount point you created for it
  4. Create a database and start the database instance
  5. Start the listener for the database instance

In this article we will demonstrate how to Create Virtual Image Database Deployment using Create Service Wizard.


Prerequisites
  • Oracle Account
  • Oracle Cloud Subscription
  • SSH Public/Private Key pair

Steps to Create Virtual Image Database Deployment

  • Open a web browser and enter the URL you received in the Welcome email to login to Oracle Cloud Account
https://myservices-xxxxx-xxxxxxxxxxef4b21bb7ee3b2cf4123d1.console.oraclecloud.com/mycloud/faces/dashboard.jspx

  • Enter your username and password 

  • On the home page, Click “Menu” under “Database” Cloud Service as shown below

  • Click “Open Service Console”

  • Click on “Create Service”

Fill in all the details and click Next
  • Service Name: Enter the service name; only the hyphen (-) is accepted as a special character
  • Description (optional): Enter a description on the service
  • Notification Email: To send the update on Instance creation
  • Service level: Oracle Database Cloud Service
  • Metering Frequency: Monthly or Hourly
  • Software Release: 11gR2 or 12cR1
  • Software Edition: Standard Edition, Enterprise Edition, Enterprise Edition – High Performance, or Enterprise Edition – Extreme Performance
  • Database Type: Single Instance, Single Instance with Data Guard, RAC, RAC with Data Guard, Hybrid DR


  • Select Compute Shape (CPU and Memory for your deployment). Click “Edit” beside “SSH Public Key” box

  • Click on the radio button and click browse

  • Select the Public Key from your desktop/laptop

  • Click Enter button

  • Click Next

  • Review the details for deployment and click “Create”

  • We can see that the deployment is being created. Click on the Instance name “NSM-DBaaS-VM”

  • The status shows “Creating Service…”

  • We can see that the status is “Ready”

  • Virtual Image Database Deployment “NSM-DBaaS-VM” is now ready.

  • You will also receive a confirmation email when the service is created

  • Enter the IP address of the Compute Node

  • On the left pane, expand “SSH” and select “Auth”. On the right pane, click on “Browse” button. Select the Private Key that matches the Public Key for your Deployment. Click “Open”

  • Click Yes

  • Login as the opc user, switch to the root user, then switch to the oracle user and check whether any database instance is running. With a Virtual Image database deployment, no database is created by default

  • Check the file systems. We can see that no user file systems are created; you need to allocate extra storage and create the user file systems, as shown below.
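A minimal sketch of those verification commands (the prompts and output will vary on your deployment):

[opc@NSM-DBaaS-VM ~]$ sudo -s
[root@NSM-DBaaS-VM ~]# su - oracle
[oracle@NSM-DBaaS-VM ~]$ ps -ef | grep pmon        # no ora_pmon_* process, so no database instance is running
[oracle@NSM-DBaaS-VM ~]$ df -h                     # only the OS file systems; /u01 and /u02 do not exist yet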



Conclusion
In this article we have learned how to create a Virtual Image Database Deployment using the Create Service wizard. When you create an Oracle Database Cloud Service – Virtual Image database deployment, compute and storage are allocated, the virtual machine image is installed, and the software needed to create an Oracle database is included. You are responsible for connecting to the VM, creating the database, and performing maintenance operations such as backup, patching, and upgrade.

You made some configuration changes to the Oracle Cloud Compute Node and they require a restart/reboot to take effect.

In this article we will demonstrate how to restart a Database Cloud Service Database Deployment.

Steps to Restart a Database Cloud Service Database Deployment.

Method 1:

  • Open a web browser and enter the URL you received in the Welcome email to login to Oracle Cloud Account


  • Enter your username and password


  • On the home page, Click “Menu” under “Database” Cloud Service as shown below


  • Click “Open Service Console”


  • On this page, click “Menu” as shown


  • Click “Restart” from the list


  • Click “OK” to confirm


  • A message displays that the restart request is accepted. Click on Instance name “NSM-DBaaS”


  • We can see that the status has changed to “Service Maintenance..”


  • We can now see that the status has changed to “Ready”. This completes the restart process

Method 2


  • Open PuTTY session on your desktop and enter Compute Node IP address


  • On the left pane, expand “SSH” and select “Auth”. On the right pane, click on “Browse” button. Select the Private Key that matches the Public Key for your Deployment. Click “Open”


  • Login as opc user, switch to root user, check uptime and issue reboot command.

Wait for a few minutes and execute steps 1 and 2 above.


  • Login as the opc user, switch to the root user, check the uptime and verify the database and listener status


  • Switch to the oracle user and start the database if it is not already started (see the example after this list)


  • Verify the database status. This completes the restart process
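For reference, a minimal sketch of the operating-system side of Method 2; the hostname is a placeholder and the commands assume the oracle user's environment (ORACLE_HOME, ORACLE_SID, PATH) is already set in its profile.

[root@NSM-DBaaS ~]# uptime
[root@NSM-DBaaS ~]# reboot

# reconnect after a few minutes, then verify
[root@NSM-DBaaS ~]# uptime
[root@NSM-DBaaS ~]# su - oracle
[oracle@NSM-DBaaS ~]$ lsnrctl status               # verify the listener
[oracle@NSM-DBaaS ~]$ sqlplus / as sysdba
SQL> startup                                       # start the database if it is not already running
SQL> select instance_name, status from v$instance;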


Conclusion

In this article we have learned how to restart a Database Cloud Service Database Deployment.


You want to start a Database Cloud Service Database Deployment after completing your maintenance work. When you start a database deployment, you can access it again and can perform any management operations.

Starting a database deployment is very simple and it is similar to powering on a computer or laptop. 


When database deployment is started, its CPU and RAM are allocated. As a result, it consumes OCPU and memory resources and so metering and billing of these resources are started.

In this article we will demonstrate how to start a Database Cloud Service Database Deployment.

Steps to Start a Database Cloud Service Database Deployment.


  • Open a web browser and enter the URL you received in the Welcome email to login to Oracle Cloud Account

  • Enter your username and password

  • On the home page, Click “Menu” under “Database” Cloud Service as shown below

  • Click “Open Service Console”

  • On this page, click “Menu” as shown

  • Click “Start” from the list

  • Click “OK” to confirm

  • A message displays that the start request is accepted

  • We can see that the status has changed to “Service Maintenance”. Click on Instance name “NSM-DBaaS”

  • We can see that the status is still under “Service Maintenance..”

  • We can now see that the status has changed to “Ready”

Login to the Oracle Cloud Compute Node to verify that the server is accessible again.

  • Open PuTTY session on your desktop and enter Compute Node IP address

  • On the left pane, expand “SSH” and select “Auth”. On the right pane, click on “Browse” button. Select the Private Key that matches the Public Key for your Deployment. Click “Open”

  • Login as opc user, check uptime and verify the database status



Conclusion

In this article we have learned how to start a Database Cloud Service Database Deployment. Starting a database deployment is very simple and it is similar to powering on a computer or laptop. When database deployment is started, its CPU and RAM are allocated. As a result, it consumes OCPU and memory resources and so metering and billing of these resources are started.


You want to stop a Database Cloud Service Database Deployment as you are not using it or for some maintenance work. When you stop a database deployment, you can’t access it and can’t perform any management operations other than starting or deleting the Database Deployment.



Stopping a database deployment is very simple and it is similar to powering off a computer or laptop. 




It is important to note that when a database deployment is stopped, its CPU and RAM are deallocated. As a result, it consumes no OCPU or memory resources, and metering and billing of these resources stop. However, all the other resources of the database deployment continue to exist and so continue to be metered and billed.




In this article we will demonstrate how to stop a Database Cloud Service Database Deployment.




Steps to Stop a Database Cloud Service Database Deployment



  • Open a web browser and enter the URL you received in the Welcome email to login to Oracle Cloud Account

  • Enter your username and password

  • On the home page, Click “Menu” under “Database” Cloud Service as shown below

  • Click “Open Service Console”

  • On this page, click “Menu” as shown

  • Click “Stop” from the list

  • Click “OK” to confirm

  • A message displays that the stop request is accepted

  • We can see that the status has changed to “Service Maintenance..”. Click on Instance name “NSM-DBaaS”

  • We can see that the status is still under “Service Maintenance..”. Under Overview section it says “Stopping service..”

  • We can now see that the status has changed to “Service Stopped”



Login to the Oracle Cloud Compute Node to verify that the server is not accessible.


  • Open PuTTY session on your desktop and enter Compute Node IP address

  • On the left pane, expand “SSH” and select “Auth”. On the right pane, click on “Browse” button. Select the Private Key that matches the Public Key for your Deployment. Click “Open”


  • Connection timed out. It means that the server is down and not accessible





Conclusion




In this article we have learned how to stop a Database Cloud Service database deployment. When a database deployment is stopped, its CPU and RAM are deallocated. As a result, it consumes no OCPU or memory resources, and metering and billing of these resources stop. However, all the other resources of the database deployment continue to exist and so continue to be metered and billed.



When you create a Database Deployment in Oracle Database Cloud Service, the following tasks are completed for you:
  • Compute Node Allocated
  • Storage Allocated
  • Virtual Machine Image Installed
  • Set Keys and Privileges
  • Install and Configure Database
  • Configure Backup
  • Configure Tools
  • Configure Access

Network access to the Compute Node associated with Oracle Database Cloud Service is primarily provided by SSH connections on port 22. By default SSH port 22 is opened to allow access to the tools, utilities and other resources on the Compute Node associated with the Oracle Database Cloud Services. You can use SSH client software such as PuTTY on Windows to establish a secure connection and log in as “opc” or “oracle” user. You can also connect to Compute node using GUI interface, for this you can use VNC.

In this article we will demonstrate how to connect to Compute Node using VNC.

Prerequisites
  • IP address of Compute Node
  • TigerVNC Viewer client software
  • TigerVNC Server package installed on Compute Node


Steps to connect to Oracle Database Cloud Compute Node using VNC on Windows Operating System


  • Login to the Oracle Cloud Compute Node 


Open PuTTY session on your desktop and enter Compute Node IP address


  • On the left pane, expand “SSH” and select “Auth”. On the right pane, click on “Browse” button. Select the Private Key that matches the Public Key for your Deployment. Click “Open”


  • Enter login as “opc”. This will connect you to the compute node without a password


  • Switch to root by executing “sudo -s” command. Confirm that you are switched to root by executing “id” command


  • Verify your Operating System version. Here the OS is OEL and version is 6 with update 8


  • Navigate to the yum repository directory and open the public yum repository file


  • In the file, look for your operating system version, for example ol6_latest, and make sure “enabled=1” is set


  • Next, look for the operating system base update, for example ol6_u8_base, and make sure “enabled=1” is set


  • Verify the file is updated successfully


  • Install the Tigervnc* package using the yum utility


  • Type y and hit return


  • We can see that the package installation completed successfully


  • Verify that the package is installed using rpm -qa command


  • Verify if vnc server is running or not as root and oracle user. We can see that vnc server is not running


  • Let’s start the VNC server as the oracle user. Enter a password of your choice and verify it. From the ‘ps -ef|grep vnc’ command output, note down the display :1 and port 5901 (see the consolidated example after these steps)


  • Open TigerVNC Viewer on your desktop/laptop and enter the compute node IP address followed by the display :1


  • Connection failed…. This is because the port 5901 is not opened on the Compute Node. We should open the port 5901 and try again
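For convenience, here is the operating-system part of the procedure above condensed into one sketch. The repository file name, repository section names, and package name are based on a typical OEL 6 image and are assumptions; verify them on your own compute node.

[root@compute-node ~]# cd /etc/yum.repos.d
[root@compute-node yum.repos.d]# vi public-yum-ol6.repo      # set enabled=1 under [ol6_latest] and [ol6_u8_base]
[root@compute-node yum.repos.d]# yum install -y tigervnc-server
[root@compute-node yum.repos.d]# rpm -qa | grep tigervnc     # confirm the package is installed
[root@compute-node yum.repos.d]# su - oracle
[oracle@compute-node ~]$ vncserver :1                        # prompts for a VNC password on first start
[oracle@compute-node ~]$ ps -ef | grep vnc                   # note display :1 and listening port 5901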



Follow the procedure below to configure custom Security List and Rules to enable access to specific security applications (VNC application and port range 5901 – 5905) on the compute node.


  • Open a web browser and enter the URL you received in the Welcome email to login to Oracle Cloud Account


  • Enter your username and password


  • On the home page, Click “Menu” under “Compute Classic” Cloud Service as shown below


  • Click “Open Service Console”


  • Click on “Network”


  • Expand “Shared Network”


  • Click “Security Applications” and then “Create Security Application”


  • Enter a Security Application Name, Port Type, Port Range Start, Port Range End and a Description and click Create. In our scenario we are enabling access to VNC application on the ports between 5901 and 5905


  • Make sure the Security Application is created by searching it


  • Click “Security Lists” and then “Create Security List”


  • Enter Security List Name and leave Inbound Policy and Outbound Policy to DEFAULT value and click Create


  • Make sure the Security List is created by searching it


  • Click “Security Rules” and then “Create Security Rule”


  • Enter the details as shown below:
Name: any desired meaningful name
Status: Enabled, to enable the rule
Security Application: the Security Application created above
Source: Security IP List -> public-internet
Destination: select the Security List created above from the drop-down
Click Create


  • Make sure the Security Rule is created by searching it


  • Click “Instances”


  • Select your Instance and scroll down


  • Click “Add Security List”


  • Select the Security List created above from the drop-down list


  • Make sure the Security List is added to your Instance


  • Open VNC on your desktop/Laptop and enter the IP address of your Database Deployment


  • Enter VNC password used at the time of starting VNC server software on the compute node


  • Enter Oracle user password given at the time of configuring VNC Server to connect to the Compute node


  • We are now connected to the compute node in GUI interface using VNC

Enjoy working with Compute Node in GUI mode…


Conclusion
In this article we have learned how to connect to the Oracle Cloud Compute Node using VNC in GUI mode. To accomplish this we needed to install operating system packages and create a custom Security List and Rules to enable access to specific security applications (the VNC application, port range 5901 to 5905) on the compute node. Oracle Compute Cloud Service networking creates the resources that provide network access to the compute node.
