

The content below is also basic material you should know.....

CLUSTERWARE PROCESSES in an 11g R2 RAC Environment

i) Cluster Ready Services (CRS)

$ ps -ef | grep crs | grep -v grep

root 25863 1 1 Oct27 ? 11:37:32 /opt/oracle/grid/product/11.2.0/bin/crsd.bin reboot

crsd.bin => This process is responsible for the start, stop, monitoring, and failover of resources. It maintains the OCR and restarts resources when a failure occurs.

This applies to RAC systems; for Oracle Restart and ASM, OHASD is used instead.
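
A quick way to sanity-check this layer (using the standard crsctl utility shipped in the 11.2 Grid home) is:

$ crsctl check crs

$ crsctl stat res -t          <-- lists the resources crsd manages and their current state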

ii) Cluster Synchronization Services (CSS)

$ ps -ef | grep -v grep | grep css

root 19541 1 0 Oct27 ? 00:05:55 /opt/oracle/grid/product/11.2.0/bin/cssdmonitor

root 19558 1 0 Oct27 ? 00:05:45 /opt/oracle/grid/product/11.2.0/bin/cssdagent

oragrid 19576 1 6 Oct27 ? 2-19:13:56 /opt/oracle/grid/product/11.2.0/bin/ocssd.bin

cssdmonitor => Monitors node hangs (via oprocd functionality), OCSSD process hangs (via oclsomon functionality), and vendor clusterware (via vmon functionality). This is a multi-threaded process that runs with elevated priority.

Startup sequence: INIT --> init.ohasd --> ohasd --> ohasd.bin --> cssdmonitor

cssdagent => Spawned by the OHASD process. Previously (in 10g) this was oprocd, responsible for I/O fencing. Killing this process causes a node reboot. It starts, stops, and checks the status of the ocssd.bin daemon.

Startup sequence: INIT --> init.ohasd --> ohasd --> ohasd.bin --> cssdagent

ocssd.bin => Manages cluster node membership and runs as the oragrid user. Failure of this process results in a node restart.

Startup sequence: INIT --> init.ohasd --> ohasd --> ohasd.bin --> cssdagent --> ocssd --> ocssd.bin
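
CSS itself and the voting disks it uses for membership can also be checked with crsctl (standard 11.2 commands; the voting disk locations are obviously site-specific):

$ crsctl check css

$ crsctl query css votedisk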

iii) Event Management (EVM)

$ ps -ef | grep evm | grep -v grep

oragrid 24623 1 0 Oct27 ? 00:30:25 /opt/oracle/grid/product/11.2.0/bin/evmd.bin

oragrid 25934 24623 0 Oct27 ? 00:00:00 /opt/oracle/grid/product/11.2.0/bin/evmlogger.bin -o /opt/oracle/grid/product/11.2.0/evm/log/evmlogger.info -l /opt/oracle/grid/product/11.2.0/evm/log/evmlogger.log

evmd.bin => Distributes and communicates some cluster events to all of the cluster members so that they are aware of the cluster changes.

evmlogger.bin => Started by evmd.bin; it reads the configuration files, determines which events to subscribe to from EVMD, and runs user-defined actions for those events.
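
EVM can be checked the same way; the Grid home also ships an evmwatch utility for subscribing to events, but check its usage output for the exact options on your version:

$ crsctl check evm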

iv) Oracle Root Agent

$ ps -ef | grep -v grep | grep orarootagent

root 19395 1 0 Oct17 ? 12:06:57 /opt/oracle/grid/product/11.2.0/bin/orarootagent.bin

root 25853 1 1 Oct17 ? 16:30:45 /opt/oracle/grid/product/11.2.0/bin/orarootagent.bin

orarootagent.bin => A specialized oraagent process that helps CRSD manage resources owned by root, such as the network and the Grid virtual IP address.

The above two processes are actually threads that look like processes; this is Linux-specific behavior.

v) Cluster Time Synchronization Service (CTSS)

$ ps -ef | grep ctss | grep -v grep

root 24600 1 0 Oct27 ? 00:38:10 /opt/oracle/grid/product/11.2.0/bin/octssd.bin reboot

octssd.bin => Provides Time Management in a cluster for Oracle Clusterware
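
CTSS runs in observer mode when NTP (or another time sync mechanism) is active on the nodes and in active mode otherwise; this can be verified with:

$ crsctl check ctss

$ cluvfy comp clocksync -n all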

vi) Oracle Agent

$ ps -ef | grep -v grep | grep oraagent

oragrid 5337 1 0 Nov14 ? 00:35:47 /opt/oracle/grid/product/11.2.0/bin/oraagent.bin

oracle 8886 1 1 10:25 ? 00:00:05 /opt/oracle/grid/product/11.2.0/bin/oraagent.bin

oragrid 19481 1 0 Oct27 ? 01:45:19 /opt/oracle/grid/product/11.2.0/bin/oraagent.bin

oraagent.bin => Extends clusterware to support Oracle-specific requirements and complex resources. This process runs server callout scripts when FAN events occur. This process was known as RACG in Oracle Clusterware 11g Release 1 (11.1).
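
As a minimal sketch of the FAN server callouts mentioned above: any executable script placed in the Grid home's racg/usrco directory is invoked with the FAN event text as its arguments. The script name and log path below are just examples:

#!/bin/sh
# Hypothetical callout: $GRID_HOME/racg/usrco/fan_logger.sh (must be executable by the grid owner)
# The FAN event payload arrives as command-line arguments; simply append it to a log file.
echo "`date` FAN event: $*" >> /tmp/fan_events.log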

ORACLE HIGH AVAILABILITY SERVICES STACK

i) Cluster Logger Service

$ ps -ef | grep -v grep | grep ologgerd

root 24856 1 0 Oct27 ? 01:43:48 /opt/oracle/grid/product/11.2.0/bin/ologgerd -m mg5hfmr02a -r -d /opt/oracle/grid/product/11.2.0/crf/db/mg5hfmr01a

ologgerd => Receives information from all the nodes in the cluster and persists it in a CHM repository-based database. This service runs on only two nodes in a cluster.

ii) System Monitor Service (osysmond)

$ ps -ef | grep -v grep | grep osysmond

root 19528 1 0 Oct27 ? 09:42:16 /opt/oracle/grid/product/11.2.0/bin/osysmond

osysmond => The monitoring and operating system metric collection service that sends the data to the cluster logger service. This service runs on every node in a cluster.
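
The data collected by osysmond and stored by ologgerd can be queried with the oclumon utility; the flags below are typical for 11.2 but vary slightly between patch levels, so treat them as examples:

$ oclumon manage -get master                          <-- which node currently hosts the cluster logger

$ oclumon dumpnodeview -allnodes -last "00:05:00"     <-- last 5 minutes of OS metrics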

iii) Grid Plug and Play (GPNPD)

$ ps -ef | grep gpn

oragrid 19502 1 0 Oct27 ? 00:21:13 /opt/oracle/grid/product/11.2.0/bin/gpnpd.bin

gpnpd.bin => Provides access to the Grid Plug and Play profile, and coordinates updates to the profile among the nodes of the cluster to ensure that all of the nodes have the most recent profile.
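
The profile itself can be dumped with the gpnptool utility in the Grid home (run as the grid owner; it prints the GPnP profile XML):

$ gpnptool get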

iv) Grid Interprocess Communication (GIPC)

$ ps -ef | grep -v grep | grep gipc

oragrid 19516 1 0 Oct27 ? 01:51:41 /opt/oracle/grid/product/11.2.0/bin/gipcd.bin

gipcd.bin => A support daemon that enables Redundant Interconnect Usage.
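
The networks registered for the public interface and the cluster interconnect (which gipcd uses for Redundant Interconnect Usage) can be listed with oifcfg; interface names and subnets are site-specific:

$ oifcfg getif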

v) Multicast Domain Name Service (mDNS)

$ ps -ef | grep -v grep | grep dns

oragrid 19493 1 0 Oct27 ? 00:01:18 /opt/oracle/grid/product/11.2.0/bin/mdnsd.bin

mdnsd.bin => Used by Grid Plug and Play to locate profiles in the cluster, as well as by GNS to perform name resolution. The mDNS process is a background process on Linux and UNIX and on Windows.

vi) Oracle Grid Naming Service (GNS)

$ ps -ef | grep -v grep | grep gns

gnsd.bin => Handles requests sent by external DNS servers, performing name resolution for names defined by the cluster.
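
GNS only runs when it was configured at installation time; if it is, it can be checked with srvctl (standard 11.2 commands):

$ srvctl status gns

$ srvctl config gns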


Reference: https://blogs.oracle.com/myoraclediary/entry/clusterware_processes_in_11g_rac



Sigh...

Every time a recovery request comes in, I can't help but test it again..


Even the things I tested back when I was studying -

I go "ah, right, that's how it went," but when someone asks me again I can't answer with confidence..

My fundamentals are still lacking.. or is it just my ability...


Anyway, I ran the test again after a long time.



What happens when Veritas brings the DB down in the middle of a hot backup..


I remember confirming from the logs that when Veritas brings a DB down, it usually does so with an abort.

Also, an OS engineer once verified this for me, so..

naturally I assumed a shutdown abort this time as well.


Keeping firmly in mind that a hot backup is only possible in archivelog mode..


SQL> alter tablespace users begin backup;


Tablespace altered.


[Alert Log]
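
Before killing the instance, it is worth confirming that the users datafiles really are in backup mode - they should show STATUS = 'ACTIVE' in v$backup:

SQL> select file#, status from v$backup;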



In this state, open another session and bring the database down with shutdown abort.


Then try a startup again.


As expected, you can see that an error is thrown.


Just in case, checking the instance status is a must.



So it's in MOUNT state..
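
In MOUNT state, v$backup and v$recover_file show at a glance which files are still in backup mode or are asking for recovery (file numbers will of course differ per database):

SQL> select file#, status from v$backup where status = 'ACTIVE';

SQL> select file#, error from v$recover_file;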

Let's proceed cleanly, just like the book says.


SQL> alter database datafile '/oradata/EAIDB/users01.dbf' end backup;

Database altered.


SQL> alter database open;

Database altered.


As expected, the book never lets you down...
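
For reference, from 10g onward ALTER DATABASE END BACKUP can be issued in MOUNT state to take every datafile out of backup mode in one go - handy when several tablespaces were in backup mode at the time of the crash:

SQL> alter database end backup;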


Another way is to simply run recover database.

SQL> recover database;

Media recovery complete.


SQL> alter database open;

Database altered.




Well, anyway..

It's been a while since I did a recovery without thinking twice...


It feels like nothing special, but I have to give the customer a definite answer, and recovery only goes smoothly when I know it cold myself...

Keep the basics solid!!!


Every now and then I need to check OS patches.

Searching for it every single time is a hassle... and if I memorize it, I keep forgetting anyway since I rarely use it...


Hence the need for a note...


$instfix -i -k "IY68975"

    All filesets for IY68975 were found.

<-- a fix that is currently installed


$instfix -i -k "IZ41855"

    There was no data for IZ41855 in the fix database.

<-- a fix that is not installed
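
For scripting, the -c/-i/-q options described in the usage below can be combined to get colon-separated output including the status field (<, =, +, !), and oslevel -s is handy alongside it to see the current TL/SP level:

$ instfix -ciqk "IY68975"

$ oslevel -s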





Usage: instfix [-R Path] [-T [-M platform]] [-s string] 

          [ -k keyword | -f file ] [-d device] [-S] 

          [-p | [-i [-c] [-q] [-t type] [-v] [-F]]] [-a] 


Function: Installs or queries filesets associated with keywords or fixes.


        -a Display the symptom text (can be combined with -i, -k, or -f).

        -c Colon-separated output for use with -i. Output includes keyword

           name, fileset name, required level, installed level, status, and

           abstract.  Status values are < (down level), = (correct level),

           + (superseded), and ! (not installed).

        -d Input device (not valid with flags -i or -a).

        -F Returns failure unless all filesets associated with the fix

           are installed.

        -f Input file containing keywords or fixes. Use '-' for standard input.

           The -T option produces a suitable input file format for -f.

        -i Use with -k or -f option to display whether specified fixes or 

           keywords are installed.  Installation is not attempted.

           If neither -k nor -f is specified, all known fixes are displayed.

        -k Install filesets for a keyword or fix.

        -M Use with -T option to display information for fixes present

           on the media that have to do with the platform specified.

        -p Use with -k or -f to print filesets associated with keywords.

           Installation is not attempted when -p is used.

        -q Quiet option for use with -i.  If -c is specified, no heading is 

           displayed.  Otherwise, no output is displayed.

        -R User Specified Install Location

        -S Suppress multi-volume processing.

        -s Search for and display fixes on media containing a specified string.

        -T Display fix information for complete fixes present on the media.

        -t Use with -i option to limit search to a given type.  Currently

           valid types are 'f' (fix) and 'p' (preventive maintenance).

        -v Verbose option for use with -i.  Gives information about each

           fileset associated with a fix or keyword.

           to the environment provided.

