Haa...

Whenever a recovery request comes in, I can't help but test it again first..


Even things I tested back when I was studying,

I go "ah, right, that's how it worked", but if someone asks me again I can't answer with confidence..

My fundamentals are still lacking.. or is it just ability...


Anyway, I ran the test again after a long while.



What happens when Veritas brings the DB down in the middle of a hot backup..


I remember confirming in the logs that when Veritas brings a DB down, it usually does so with a shutdown abort.

An OS engineer has confirmed the same thing for me before..

so naturally I assumed it would be an abort shutdown this time as well.


Keeping in mind that a hot backup is only possible in archivelog mode..
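
A quick way to double-check archivelog mode before starting (standard commands, not part of the original test log):

SQL> archive log list
SQL> select log_mode from v$database;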


SQL> alter tablespace users begin backup;


Tablespace altered.


-----------------

Alert Log



In this state, open another session and shut the instance down with abort.


Then try startup again.


As expected, you can see an error is thrown.


Just in case, checking the status is a must.
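
A minimal way to check from SQL*Plus (standard v$ views, not from the original screenshots):

SQL> select status from v$instance;
SQL> select file#, status from v$backup where status = 'ACTIVE';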



So it's in MOUNT state..

Let's proceed cleanly, exactly as the book says.


SQL> alter database datafile '/oradata/EAIDB/users01.dbf' end backup;

Database altered.


SQL> alter database open;

Database altered.


As always, the book doesn't betray you...


Alternatively, you can also run recover database.

SQL> recover database;

Media recovery complete.


SQL> alter database open;

Database altered.




Well, anyway..

It's been a while since I did a recovery without even having to think...


It feels like nothing, but I have to speak to the customer with certainty, and recovery is only easy when I know it cold...

Solid fundamentals!!!


Every now and then I need to check OS patches (APARs) on AIX.

Searching for it every single time is a pain... if I memorize it I keep forgetting, since I rarely use it...


Hence the need for notes...


$instfix -i -k "IY68975"

    All filesets for IY68975 were found.

<-- this patch is currently installed


$instfix -i -k "IZ41855"

    There was no data for IZ41855 in the fix database.

<-- this patch is not currently installed
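
When several APARs need checking at once, the -f option shown in the usage below can take a file of keywords (the file name here is made up, and the two APAR numbers are just reused from above for illustration):

$ cat > /tmp/apars.txt <<EOF
IY68975
IZ41855
EOF
$ instfix -i -f /tmp/apars.txt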





Usage: instfix [-R Path] [-T [-M platform]] [-s string] 

          [ -k keyword | -f file ] [-d device] [-S] 

          [-p | [-i [-c] [-q] [-t type] [-v] [-F]]] [-a] 


Function: Installs or queries filesets associated with keywords or fixes.


        -a Display the symptom text (can be combined with -i, -k, or -f).

        -c Colon-separated output for use with -i. Output includes keyword

           name, fileset name, required level, installed level, status, and

           abstract.  Status values are < (down level), = (correct level),

           + (superseded), and ! (not installed).

        -d Input device (not valid with flags -i or -a).

        -F Returns failure unless all filesets associated with the fix

           are installed.

        -f Input file containing keywords or fixes. Use '-' for standard input.

           The -T option produces a suitable input file format for -f.

        -i Use with -k or -f option to display whether specified fixes or 

           keywords are installed.  Installation is not attempted.

           If neither -k nor -f is specified, all known fixes are displayed.

        -k Install filesets for a keyword or fix.

        -M Use with -T option to display information for fixes present

           on the media that have to do with the platform specified.

        -p Use with -k or -f to print filesets associated with keywords.

           Installation is not attempted when -p is used.

        -q Quiet option for use with -i.  If -c is specified, no heading is 

           displayed.  Otherwise, no output is displayed.

        -R User Specified Install Location

        -S Suppress multi-volume processing.

        -s Search for and display fixes on media containing a specified string.

        -T Display fix information for complete fixes present on the media.

        -t Use with -i option to limit search to a given type.  Currently

           valid types are 'f' (fix) and 'p' (preventive maintenance).

        -v Verbose option for use with -i.  Gives information about each

           fileset associated with a fix or keyword.




This kind of rabbit hole..

once I start one, it drags on for days..

I really wish the OS admin would just set it all up for me, ta-da..


A few days ago we had a case where the interconnect (private network) IP dropped, the CSS daemon detected it, and one node's OS was rebooted to preserve data integrity (usually the slave node gets restarted).


But since the network side says there were no error logs at all on the network..

the only option is to prove it with a test..


I wanted to do this on 10g first.. but I had no 10g RAC built,

so at the time I had no choice but to run the test on 11g.


Anyway.. I've built 10g on a file system a few times on AIX, but most of my actual testing has been on 11g, so this was a good excuse to build one.


After plenty of fumbling I got it done, and I'm sharing the main cause of the struggle.


First, for the 10g OCFS install, I'd really recommend using yum if at all possible....

I've fumbled with that part and given up before too....

And since plenty of other blogs cover the ocfs2 setup, I just want you to pay close attention to the part below.


During the setup, run service o2cb configure.

# service o2cb configure

 

Configuring the O2CB driver.

 

This will configure the on-boot properties of the O2CB driver.

The following questions will determine whether the driver is loaded on

boot.  The current values will be shown in brackets ('[]').  Hitting

ENTER without typing an answer will keep that current value.  Ctrl-C

will abort.

 

Load O2CB driver on boot (y/n) [n]: y

Cluster stack backing O2CB [o2cb]: 

Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfscluster1

Specify heartbeat dead threshold (>=7) [31]: 

Specify network idle timeout in ms (>=5000) [30000]: 

Specify network keepalive delay in ms (>=1000) [2000]: 

Specify network reconnect delay in ms (>=2000) [2000]: 

Writing O2CB configuration: OK

Loading filesystem "configfs": OK

Mounting configfs filesystem at /sys/kernel/config: OK

Loading stack plugin "o2cb": OK

Loading filesystem "ocfs2_dlmfs": OK

Creating directory '/dlm': OK

Mounting ocfs2_dlmfs filesystem at /dlm: OK

Setting cluster stack "o2cb": OK

Checking O2CB cluster configuration : Failed


The value entered above (ocfscluster1) is the cluster name.


# o2cb_ctl -C -n ocfscluster1 -t cluster -a name=ocfscluster1


After registering it, you register each node into the cluster as shown below.

This is where I went wrong.


# o2cb_ctl -C -n ocfsrac1 -t node -a number=0 -a ip_address=192.168.131.100 -a ip_port=7777 -a cluster=ocfscluster1

# o2cb_ctl -C -n ocfsrac2 -t node -a number=1 -a ip_address=192.168.131.110 -a ip_port=7777 -a cluster=ocfscluster1


The node names (ocfsrac1, ocfsrac2) are the parts you have to set to match your own environment.


In other words, getting the other fields wrong throws an error, but a wrong value for ocfsrac1 / ocfsrac2 produces no error at all (well, of course a wrong hostname was never going to throw an error... =_=;;).


I had checked and set the other fields correctly, but I copied the node names straight from a guide and wrote them wrong.. and since there was no error, I assumed everything was done.


But then service o2cb status shows the following.


rac1:/root>service o2cb status

Driver for "configfs": Loaded

Filesystem "configfs": Mounted

Stack glue driver: Loaded

Stack plugin "o2cb": Loaded

Driver for "ocfs2_dlmfs": Loaded

Filesystem "ocfs2_dlmfs": Mounted

Checking O2CB cluster "ocfscluster1": Offline


No matter what I did it kept showing Offline.. everything else was normal...

A few other bits attached for reference..


rac1:/root>service o2cb start ocfscluster1

Setting cluster stack "o2cb": OK

Registering O2CB cluster "ocfscluster1": OK

Setting O2CB cluster timeouts : OK

-------------------------------------------------------
rac1:/root>/etc/init.d/o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
-------------------------------------------------------
rac1:/root>vi /etc/fstab

LABEL=/                 /                       ext3    defaults        1 1
LABEL=/app              /app                    ext3    defaults        1 2
LABEL=/var              /var                    ext3    defaults        1 2
LABEL=/home             /home                   ext3    defaults        1 2
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
LABEL=SWAP-sda5         swap                    swap    defaults        0 0
/dev/sdb1               /oradata01               ocfs2   _netdev,datavolume,nointr      0 0


Everything here looks perfectly fine.

In the end, after fixing the configuration as described below, the volume was shared normally.



In short!! The name field must be set to that node's own hostname!!! (See the corrected commands below.)
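
A sketch of the corrected registration, assuming the actual hostnames are rac1 and rac2 as the shell prompts above suggest (adjust to whatever `hostname` returns on each server):

# o2cb_ctl -C -n rac1 -t node -a number=0 -a ip_address=192.168.131.100 -a ip_port=7777 -a cluster=ocfscluster1
# o2cb_ctl -C -n rac2 -t node -a number=1 -a ip_address=192.168.131.110 -a ip_port=7777 -a cluster=ocfscluster1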


OCFS setup reference:

http://db.necoaki.net/145



On a 10.2.0.5 two-node RAC, one (slave) node kept getting rebooted.

While checking this and that I found the error log below; I looked into it but wasn't sure about it,

so for now I'm forwarding it here to keep a copy.

Below is crsd.log (from the master node).


2015-10-10 12:49:30.874: [ COMMCRS][9526]clsc_receive: (114771870) error 2


2015-10-10 21:54:31.062: [ COMMCRS][9526]clsc_receive: (114771870) error 2


2015-10-13 10:41:30.444: [  CRSEVT][11008]32CAAMonitorHandler :: 0:Could not join /oracle/crs/bin/racgwrap(check)

category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, other: Abnormal termination of the child


2015-10-13 10:41:30.449: [  CRSEVT][11008]32CAAMonitorHandler :: 0:Action Script /oracle/crs/bin/racgwrap(check) timed out for ora.glvndbp05.vip! (timeout=60)

2015-10-13 10:41:30.449: [  CRSAPP][11008]32CheckResource error for ora.glvndbp05.vip error code = -2

2015-10-13 10:43:03.482: [  CRSEVT][11011]32CAAMonitorHandler :: 0:Could not join /oracle/crs/bin/racgwrap(check)

category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, other: Abnormal termination of the child


2015-10-13 10:43:03.482: [  CRSEVT][11011]32CAAMonitorHandler :: 0:Action Script /oracle/crs/bin/racgwrap(check) timed out for ora.glvndbp05.vip! (timeout=60)

2015-10-13 10:43:03.482: [  CRSAPP][11011]32CheckResource error for ora.glvndbp05.vip error code = -2

2015-10-13 10:44:36.509: [  CRSEVT][11017]32CAAMonitorHandler :: 0:Could not join /oracle/crs/bin/racgwrap(check)

category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, other: Abnormal termination of the child


To summarize what shows up above...

[CRSEVT] CAAMonitorHandler :: 0:Could not join .../crs/bin/racgwrap(check)

category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, other: Abnormal termination of the child


[CRSEVT] CAAMonitorHandler :: 0:Action Script ../crs/bin/racgwrap(check) timed out for ora.<hostname>.vip! (timeout=60)


Those are the lines that stand out. Searching around (in Korean...), exactly one person had posted about it,

and since I'm not certain about it, I'm sharing it here for now.


This is copied from elsewhere, so I'll take it down if it's a problem.


It's a bug in 10g RAC environments: the racgmain check daemon keeps getting forked abnormally, memory usage climbs, and eventually the system becomes unusable.

 

 oracle 26024     1  0  Dec  6  ?         0:00 /oracle/crs/bin/racgmain check

 oracle 23218     1  0  Dec  6  ?         0:00 /oracle/crs/bin/racgmain check

 oracle 23179     1  0  Dec  4  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle 27277     1  0  Dec  6  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle  1028     1  0  Dec  5  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle  7991     1  0  Dec  4  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle 15324     1  0  Dec  3  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle 14314     1  0  Dec  4  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle 10895     1  0  Dec  4  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle   404     1  0  Dec  3  ?         0:00 /oracle/ora10/bin/racgmain check


The fix is to apply the CRS bundle #2 patchset as below, or to use the workaround.

=====================================================================================


Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.6

Information in this document applies to any platform.

Oracle Server Enterprise Edition - Version: 10.1.0.2 to 10.2.0.4

Symptoms

System slows down and many "racgmain check" processes may appear in ps output.  CRS log would show the following messages.

oracle@HA5-ZW05:[/home/oracle] ps -ef|grep "racgmain check"|wc -l

1290


~~~~

CAAMonitorHandler :: 0:Action Script /opt/oracle/product/crs/bin/racgwrap(check) timed out for ora.harac1.vip! (timeout=60)

CheckResource error for ora.harac1.vip error code = -2

CAAMonitorHandler :: 0:Could not join /opt/oracle/product/crs/bin/racgwrap(check)

category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, 

other: Abnormal termination of the child

~~~~

Cause

crsd.bin invokes the racgmain to check the status of the resources that are managed by CRS. The racgmain is invoked through the wrapper script racgwrap. 


If the resource action timed out, crsd kills the action script, which is racgwrap, while the racgmain process is not killed. Over time, this can create a lot of orphan racgmain processes on the system, which eventually slows the system down due to resource contention at the OS level.


Internal bug:6196746  addresses this issue.


Solution


This is fixed in the 11.1.0.7 patchset. If you are running into this issue in 10gR2, apply the 10.2.0.4 patchset and the latest CRS bundle patch; the fix is included in the CRS bundle patch from bundle #2 onwards.

Following option could be used as a temporary workaround until the patch is applied.


1.  Make a copy of racgwrap located under $ORACLE_HOME/bin and $CRS_HOME/bin on ALL Nodes


2.  Edit the file racgwrap and modify the last 3 lines from:


~~~

$ORACLE_HOME/bin/racgmain "$@"

status=$?

exit $status


to:


# Line added to fix for Bug 6196746

exec $ORACLE_HOME/bin/racgmain "$@"

~~~


3.  Kill all the orphan racgmain processes running.


$ ps -ef|grep "racgmain check"

oracle 18701 1 0 Aug 1 ? 0:00 /oracle/product/10.2.0/database/bin/racgmain check

oracle 14653 1 0 Aug 1 ? 0:00 /oracle/product/10.2.0/database/bin/racgmain check

oracle 24517 1 0 Aug 1 ? 0:00 /oracle/product/10.2.0/database/bin/racgmain check


$ kill -9 <PID of racgmain>
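
If there are hundreds of them, killing the PIDs one at a time gets old; a rough one-liner along the same lines (assuming everything matching "racgmain check" really is an orphan) would be:

$ ps -ef | grep "racgmain check" | grep -v grep | awk '{print $2}' | xargs kill -9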



Source: http://pat98.tistory.com/376


Sharing a couple more documents I found as well.


10g/11gR1: Many Orphaned Or Hanging "racgmain" Processes Running (Doc ID 732086.1) <-- same as the one above


10g RAC: One node VIP status always shows "UNKNOWN" and "CRS-0223: Resource 'ora.rac-test1.vip' has placement error" when try to startup the VIP. (Doc ID 1993024.1)

Symptoms

Symptom 1:

In 10g RAC on a unix platform, the VIP on one node always shows "UNKNOWN". 


Symptom 2:

When trying to start it up, it reports the following error:

CRS-1028: Dependency analysis failed because of:

CRS-0223: Resource 'ora.rac-test1.vip' has placement error.  


Symptom 3:

In CRSD log, find following error:


2015-03-19 10:51:09.772: [  CRSRES][3737213248][ALERT]0`ora.rac-test1.vip` on member `rac-test1` has experienced an unrecoverable failure.

2015-03-19 10:51:09.772: [  CRSRES][3737213248]0Human intervention required to resume its availability.

2015-03-19 10:51:09.772: [  CRSEVT][3741415744]0CAAMonitorHandler :: 0:Could not execute /u01/app/oracle/product/10.2.0/crs_1/bin/racgwrap(stop) for ora.rac-test2.vip

category: 1234, operation: scls_canexec, loc: , OS error: 0, other: no exe permission, file /u01/app/oracle/product/10.2.0/crs_1/bin/racgwrap    ===> No execute permission for this file.


Solution

1. Shut down all the resources on this node: instance/asm/nodeapps/crs


2. Change the permissions of these 2 files to 751 on the node with the issue


chmod 751 /u01/app/oracle/product/10.2.0/crs_1/racg/admin/racgwrap

chmod 751 /u01/app/oracle/product/10.2.0/crs_1/bin/racgeut


3. Then you can start up all the resources and check whether the VIP is online.




I needed to run some Oracle 10g RAC tests.


The scenario is CSS-related: when that daemon dies the OS gets rebooted,

and when the private IP stops communicating, the OS can also be force-rebooted to protect disk consistency.


I'll post about that part once I've written it up.


Below is a write-up of an error that came up while building the file system with OCFS2 before installing RAC.


rac1:/root>mkfs.ocfs2 -b 4K -C 32K -N 2 -L "OCFS2Filesystem" /dev/sdb1

mkfs.ocfs2 1.8.0

Cluster stack: classic o2cb

/dev/sdb1 is apparently in use by the system; will not make a ocfs2 volume here!

<-- errors out like this - _ -;;;


Check the status as below.

rac1:/root>dmsetup status


rac-vote01: 0 614400 linear 

rac-control02: 0 204800 linear 

rac-undotbs2: 0 1638400 linear 

rac-control01: 0 204800 linear 

rac-undotbs1: 0 1638400 linear 

rac-system: 0 1638400 linear 

rac-users: 0 1638400 linear 

rac-redo06: 0 409600 linear 

rac-spfile: 0 204800 linear 

rac-temp: 0 1638400 linear 

rac-redo05: 0 409600 linear 

rac-redo04: 0 409600 linear 

rac-sysaux: 0 1638400 linear 

rac-redo03: 0 409600 linear 

rac-vote03: 0 614400 linear 

rac-redo02: 0 409600 linear 

rac-ocr02: 0 614400 linear 

rac-vote02: 0 614400 linear 

rac-redo01: 0 409600 linear 

rac-control03: 0 204800 linear 

rac-ocr01: 0 614400 linear 

Damn, my own fault.. I had already set up device-mapper / raw device mappings on this disk before trying this... ㅠㅠ

Let's remove them as below..


rac1:/root>dmsetup remove_all
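
(Side note: if you only want to drop a single mapping instead of wiping them all, dmsetup can remove one by name, e.g. one of the devices listed above:)

rac1:/root>dmsetup remove rac-vote01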

Now let's try again..!!


rac1:/root>mkfs.ocfs2 -b 4K -C 32K -N 2 -L "OCFS2Filesystem" /dev/sdb1


mkfs.ocfs2 1.8.0

Cluster stack: classic o2cb

Label: OCFS2Filesystem

Features: sparse extended-slotmap backup-super unwritten inline-data strict-journal-super xattr indexed-dirs refcount discontig-bg

Block size: 4096 (12 bits)

Cluster size: 32768 (15 bits)

Volume size: 21467922432 (655149 clusters) (5241192 blocks)

Cluster groups: 21 (tail covers 10029 clusters, rest cover 32256 clusters)

Extent allocator size: 12582912 (3 groups)

Journal size: 134184960

Node slots: 2

Creating bitmaps: done

Initializing superblock: done

Writing system files: done

Writing superblock: done

Writing backup superblock: 3 block(s)

Formatting Journals: done

Growing extent allocator: done

Formatting slot map: done

Formatting quota files: done

Writing lost+found: done

mkfs.ocfs2 successful



Done!!!!!

Thank you.


Source: http://blog.helperchoi.com/75



What have I been doing all this time without even knowing TAF and CTF...

You really do need to have the concepts down..

Even when you're solving problems and studying more advanced topics, in the end the basics are what matter..


- TAF

 The failover concept in RAC: when one node fails, sessions fail over to the surviving node.


- How to enable TAF

 Edit the client's $ORACLE_HOME/network/admin/tnsnames.ora file.


ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-linux1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-linux2)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest.idevelopment.info)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)

      )
    )

  )


 - TYPE : one of NONE, SESSION, or SELECT. Set TYPE=SESSION to fail over only the session; set TYPE=SELECT to fail over the session and open cursors (in-flight SELECTs); set TYPE=NONE to disable TAF.


 - METHOD : either BASIC or PRECONNECT. With BASIC, TAF does not try to re-establish a connection until the existing connection fails. With PRECONNECT, TAF can pre-create the memory structures needed for the backup connection, but the backup connection stays inactive until the existing connection fails.


 - BACKUP : specifies the net service name used to establish the backup connection. BACKUP is required with the PRECONNECT method and recommended with BASIC; otherwise the client first retries the failed instance, adding extra delay before reconnecting. Note that BACKUP cannot be specified together with LOAD_BALANCE=ON. (See the sketch just below this list.)


 - DELAY : the number of seconds TAF waits between attempts to connect to the BACKUP after a failure.


 - RETRIES : the number of times TAF tries to connect to the BACKUP after a failure. RETRIES and DELAY together allow time for a cold failover to complete before TAF gives up on the backup connection.


- Must be defined in /etc/hosts:

10.10.100.101 vip-linux1

10.10.100.102 vip-linux2
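
For reference, a minimal sketch of a PRECONNECT-style entry that uses BACKUP (the alias names ORCL_PRECONNECT and ORCLTEST2 are made up here for illustration; everything else reuses the sample above):

ORCL_PRECONNECT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-linux1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-linux2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest.idevelopment.info)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = PRECONNECT)
        (BACKUP = ORCLTEST2)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )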


- TAF test


C:\> sqlplus system/manager@orcltest


COLUMN instance_name    FORMAT a13

COLUMN host_name        FORMAT a9

COLUMN failover_method  FORMAT a15

COLUMN failed_over      FORMAT a11


SELECT

    instance_name

  , host_name

  , NULL AS failover_type

  , NULL AS failover_method

  , NULL AS failed_over

FROM v$instance

UNION

SELECT

    NULL

  , NULL

  , failover_type

  , failover_method

  , failed_over

FROM v$session

WHERE username = 'SYSTEM';



INSTANCE_NAME HOST_NAME FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER

------------- --------- ------------- --------------- -----------

orcl1         linux1

                        SELECT        BASIC           NO


Do not log out of the SQL*Plus session you set up above!


After running the query above, shut down the orcl1 instance on node linux1 using the abort option. Use the srvctl command-line utility as shown below:


# su - oracle

$ srvctl status database -d orcl

Instance orcl1 is running on node linux1

Instance orcl2 is running on node linux2


$ srvctl stop instance -d orcl -i orcl1 -o abort


$ srvctl status database -d orcl

Instance orcl1 is not running on node linux1

Instance orcl2 is running on node linux2


Now go back to the earlier SQL session and re-run the SQL statement stored in the buffer: 

COLUMN instance_name    FORMAT a13

COLUMN host_name        FORMAT a9

COLUMN failover_method  FORMAT a15

COLUMN failed_over      FORMAT a11


SELECT

    instance_name

  , host_name

  , NULL AS failover_type

  , NULL AS failover_method

  , NULL AS failed_over

FROM v$instance

UNION

SELECT

    NULL

  , NULL

  , failover_type

  , failover_method

  , failed_over

FROM v$session

WHERE username = 'SYSTEM';


INSTANCE_NAME HOST_NAME FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER

------------- --------- ------------- --------------- -----------

orcl2         linux2

                        SELECT        BASIC           YES


SQL> exit


From the output above, you can confirm the session has failed over to the orcl2 instance on node linux2.


- Difference between CTF and TAF



CTF  : new connections

- Even if one instance fails, new connections can automatically go to another instance

- Enabled by default when RAC is installed



TAF  : existing connections

- The mechanism that hands existing sessions over to the surviving node

- Must be configured separately before it can be used


Sources : http://aozjffl.tistory.com/323

http://dinggur.tistory.com/207


http://www.oracle.com/technology/global/kr/pub/articles/hunter_rac10gr2_3.html

     http://www.oracle.com/technology/global/kr/deploy/availability/htdocs/taf.html

     http://publib.boulder.ibm.com/infocenter/pim/v6r0m0/index.jsp?topic=/com.ibm.wpc.ins.doc/wpc_tsk_setting_up_oracle_to_use_taf_support.html





Haa... I wasted more than two days because of this one...


A problem comes up partway through installing 11.2.0.3 grid for a raw device RAC..

It's an error that appears during listener configuration, after even that dreaded rootpre had completed.


Unfortunately I didn't capture the error screen, but if I find it again via Google or run into it once more,

I will definitely update this post.


If you open up the relevant log, you can see the following.


* The image would probably be too small to read, so I've pasted similar content below.


INFO: Problem in configuration: PRCN-2061 : Failed to add listener ora.LISTENER.lsnr
INFO: PRCN-2065 : Port(s) 1521 are not available on the nodes given
INFO: PRCN-2067 : Port 1521 is not available across node(s) "hww-poc1-VIP,hww-poc2-VIP"
INFO: Oracle Net Listener Startup:
INFO:     Listener does not exists.
INFO: Check the trace file for details: /home/grid/app/grid/cfgtoollogs/netca/trace_Ora11g_gridinfrahome1-1410287PM2700.log
INFO: Oracle Net Services configuration failed.  The exit code is 1



Check whether tnslsnr is running on both nodes, and whether port 1521 is in use, as follows.

Check both nodes at the same time.

$ ps -ef | grep tns

$ netstat -nltp
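
To narrow it down to port 1521 specifically, something like this works as well:

$ netstat -an | grep 1521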

RAC node 1:


RAC node 2:


In my case it was running on node 2 but not on node 1.

Now check the SCAN listener's status, then stop and disable it.

$ srvctl status scan_listener

$ srvctl stop scan_listener

$ srvctl disable scan_listener



Be sure to do this as the oracle user!


After that, click Retry on the halted installer screen and you should see the installation proceed normally.

Once the install completes, check again, and if the SCAN listener is still stopped like in my case, enable and start it.


$ srvctl enable scan_listener

$ srvctl start scan_listener





[Source] http://www.shishirtekade.com/2014/10/prcn-2065-ports-1521-are-not-available.html







[root@rac1 install]# pwd

/oracle/grid/crs/install

[root@rac1 install]# ./rootcrs.pl -deconfig -force -verbose

Using configuration parameter file: ./crsconfig_params

CRS-4535: Cannot communicate with Cluster Ready Services

CRS-4000: Command Stop failed, or completed with errors.

CRS-4535: Cannot communicate with Cluster Ready Services

CRS-4000: Command Delete failed, or completed with errors.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded

CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'rac1'

CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Successfully deconfigured Oracle Restart stack


[root@rac2 install]# ./rootcrs.pl -deconfig -force -verbose

Using configuration parameter file: ./crsconfig_params

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Delete failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

You must kill ohasd processes or reboot the system to properly 

cleanup the processes started by Oracle clusterware

ACFS-9313: No ADVM/ACFS installation detected.

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall

Successfully deconfigured Oracle Restart stack


[root@rac1 oracle]# $GRID_HOME/root.sh

[root@rac2 oracle]# $GRID_HOME/root.sh
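
Once root.sh has finished on both nodes, something along these lines can confirm the stack came back up (crsctl lives under the Grid home's bin directory; the exact path is assumed here):

[root@rac1 oracle]# $GRID_HOME/bin/crsctl check cluster -all
[root@rac1 oracle]# $GRID_HOME/bin/crsctl stat res -t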


The following is excerpted from another post.

[Note] On both nodes, work from the home directory where Grid is installed, as shown below. First check the current resources (crs, asm, and so on). Even if they are still up, everything will be removed automatically anyway. The steps below must be done on both node 1 and node 2.


[Note] Also remove all the files in the local inventory (if you skip this, errors occur when reinstalling Grid).
[root@rac2 oraInventory]# rm -rf *        <-- run inside the oraInventory directory

[Note] Since the configuration files exist, remove them all as below.
[root@rac2 u01]# rm -rf /etc/ora*

[Note] If the init daemons are configured, you must run rootdeinstall.sh first. After that, remove the files below.
rm -f /etc/init.d/init.cssd 
rm -f /etc/init.d/init.crs 
rm -f /etc/init.d/init.crsd 
rm -f /etc/init.d/init.evmd 
rm -f /etc/rc2.d/K96init.crs
rm -f /etc/rc2.d/S96init.crs
rm -f /etc/rc3.d/K96init.crs
rm -f /etc/rc3.d/S96init.crs
rm -f /etc/rc5.d/K96init.crs
rm -f /etc/rc5.d/S96init.crs
rm -Rf /etc/oracle/scls_scr
rm -f /etc/inittab.crs 
cp /etc/inittab.orig /etc/inittab


[Source] http://estenpark.tistory.com/283



A quick write-up on SCAN, a feature newly added in Oracle 11gR2.. (before I forget --;)

Oracle adds new features with every release, and the one I'm introducing this time is SCAN (Single Client Access Name). As the name says, it lets clients reach the server through a single access name even when there are multiple RAC nodes. This still holds when nodes are added or removed; in fact, that is exactly what it was designed around. 

A single client access name that doesn't care about nodes being added or removed..
Doesn't that bring the phrase "cloud computing" to mind?

The tns alias below is a sample client configuration when the SCAN feature is used.
At a glance.. even on a close look, it is identical to a tns alias for connecting to a single DB.

TEST.ORACLE.COM =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=SCAN-TEST.ORACLE.COM)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=11GR2TEST.ORACLE.COM))
)

In earlier RAC setups, the tns alias was configured like this:

TEST.ORACLE.COM =
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcp)(HOST=TEST1-vip.ORACLE.COM)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=TEST2-vip.ORACLE.COM)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=11GR2TEST.ORACLE.COM))
)



So how can the access name stay the same even as nodes are added or removed?
You put a new set of listeners in front of each node's listener. These front-end (SCAN) listeners then see the RAC listeners behind them.



11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained

SCAN Concepts

  • Single client access name (SCAN) is the virtual hostname to provide for all clients connecting to the cluster (as opposed to the vip hostnames in 10g and 11gR1).  
  • SCAN is a domain name registered to at least one and up to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS).
  • By default, the name used as the SCAN is also the name of the cluster and must be globally unique throughout your enterprise. The default value for the SCAN is based on the local node name. SCAN name must be at least one character long and no more than 15 characters in length, must be alphanumeric - cannot begin with a numeral and may contain hyphens (-). If you require a SCAN that is longer than 15 characters, then select an Advanced installation.
  • For installation to succeed, the SCAN must resolve to at least one address.
  • SCAN VIP addresses must be on the same subnet as virtual IP addresses and public IP addresses.
  • Oracle strongly recommends that you do not configure SCAN VIP addresses in the hosts file. If you use the hosts file to resolve SCANs, then you will only be able to resolve to one IP address and you will have only one SCAN address - be sure to provide a hosts file entry for each SCAN address in hosts file in same order.
  • If hosts file is used to resolve SCAN hostname, you will receive Cluster Verification Utility failure at end of installation (see Note: 887471.1 for more details)
  • For high availability and scalability, Oracle recommends that you configure the SCAN to use DNS Round Robin resolution to three addresses.
  • Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database.
  • Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN. Clients using the SCAN can also access the cluster using EZCONNECT.
  • Grid Infrastructure will start local listener LISTENER on all nodes to listen on local VIP, and SCAN listener LISTENER_SCAN1 (up to three cluster wide) to listen on SCAN VIP(s); 11gR2 database by default will set local_listener to local LISTENER, and remote_listener to SCAN listener.

To sum up the SCAN concept above: it is a virtual hostname for the RAC cluster as a whole. It is registered in DNS, and clients connect to the DB through it. Failover and load balancing are still handled by the listeners on each RAC node.
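
For example, with the sample names used earlier, a client could connect either through the tns alias or directly with EZCONNECT (the host, port, and service name are just the ones from the sample above):

sqlplus system/manager@TEST.ORACLE.COM
sqlplus system/manager@//SCAN-TEST.ORACLE.COM:1521/11GR2TEST.ORACLE.COM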

This is based on "Note: 887522.1 - 11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained".

Linking the related document and video as well. There doesn't seem to be anything introducing it in Korean yet..

[PDF] SINGLE CLIENT ACCESS NAME (SCAN) (29 Mar 2010) - www.oracle.com/technology/products/database/.../scan.pdf


<Source> 에너쓰오라클



[oracle@rac1:/oracle/grid]$ cd deinstall/

[oracle@rac1:/oracle/grid/deinstall]$ ./deinstall

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2015-09-11_09-37-29AM/logs/


############ ORACLE DEINSTALL & DECONFIG TOOL START ############



######################### CHECK OPERATION START #########################

## [START] Install check configuration ##



Checking for existence of the Oracle home location /oracle/grid

Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster

Oracle Base selected for deinstall is: /oracle/app/oracle

Checking for existence of central inventory location /oracle/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /oracle/grid

The following nodes are part of this cluster: rac1,rac2

Checking for sufficient temp space availability on node(s) : 'rac1,rac2'


## [END] Install check configuration ##


Traces log file: /tmp/deinstall2015-09-11_09-37-29AM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "rac1"[rac1-vip]

 > 


The following information can be collected by running "/sbin/ifconfig -a" on node "rac1"

Enter the IP netmask of Virtual IP "192.168.131.120" on node "rac1"[255.255.255.0]

 > 


Enter the network interface name on which the virtual IP address "192.168.131.120" is active

 > 


Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip]

 > 


The following information can be collected by running "/sbin/ifconfig -a" on node "rac2"

Enter the IP netmask of Virtual IP "192.168.131.130" on node "rac2"[255.255.255.0]

 > 


Enter the network interface name on which the virtual IP address "192.168.131.130" is active

 > 


Enter an address or the name of the virtual IP[]

 > 



Network Configuration check config START


Network de-configuration trace file location: /tmp/deinstall2015-09-11_09-37-29AM/logs/netdc_check2015-09-11_10-08-47-AM.log


Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:


Network Configuration check config END


Asm Check Configuration START


ASM de-configuration trace file location: /tmp/deinstall2015-09-11_09-37-29AM/logs/asmcadc_check2015-09-11_10-08-48-AM.log


ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: 

ASM was not detected in the Oracle Home


######################### CHECK OPERATION END #########################



####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /oracle/grid

The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2

Oracle Home selected for deinstall is: /oracle/grid

Inventory Location where the Oracle home registered is: /oracle/app/oraInventory

Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1

ASM was not detected in the Oracle Home

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/tmp/deinstall2015-09-11_09-37-29AM/logs/deinstall_deconfig2015-09-11_09-37-37-AM.out'

Any error messages from this session will be written to: '/tmp/deinstall2015-09-11_09-37-29AM/logs/deinstall_deconfig2015-09-11_09-37-37-AM.err'


######################## CLEAN OPERATION START ########################

ASM de-configuration trace file location: /tmp/deinstall2015-09-11_09-37-29AM/logs/asmcadc_clean2015-09-11_10-08-51-AM.log

ASM Clean Configuration END


Network Configuration clean config START


Network de-configuration trace file location: /tmp/deinstall2015-09-11_09-37-29AM/logs/netdc_clean2015-09-11_10-08-51-AM.log


De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1


De-configuring listener: LISTENER

    Stopping listener: LISTENER

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.


De-configuring listener: LISTENER_SCAN1

    Stopping listener: LISTENER_SCAN1

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.


De-configuring Naming Methods configuration file on all nodes...

Naming Methods configuration file de-configured successfully.


De-configuring Local Net Service Names configuration file on all nodes...

Local Net Service Names configuration file de-configured successfully.


De-configuring Directory Usage configuration file on all nodes...

Directory Usage configuration file de-configured successfully.


De-configuring backup files on all nodes...

Backup files de-configured successfully.


The network configuration has been cleaned up successfully.


Network Configuration clean config END



---------------------------------------->


The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.


Run the following command as the root user or the administrator on node "rac2".


/tmp/deinstall2015-09-11_09-37-29AM/perl/bin/perl -I/tmp/deinstall2015-09-11_09-37-29AM/perl/lib -I/tmp/deinstall2015-09-11_09-37-29AM/crs/install /tmp/deinstall2015-09-11_09-37-29AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-09-11_09-37-29AM/response/deinstall_Ora11g_gridinfrahome1.rsp"


Run the following command as the root user or the administrator on node "rac1".


/tmp/deinstall2015-09-11_09-37-29AM/perl/bin/perl -I/tmp/deinstall2015-09-11_09-37-29AM/perl/lib -I/tmp/deinstall2015-09-11_09-37-29AM/crs/install /tmp/deinstall2015-09-11_09-37-29AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-09-11_09-37-29AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode


Press Enter after you finish running the above commands


<----------------------------------------

** Don't press Enter here; instead, open another console and run the commands shown above as root.


[root@rac2 ~]# /tmp/deinstall2015-09-11_09-37-29AM/perl/bin/perl -I/tmp/deinstall2015-09-11_09-37-29AM/perl/lib -I/tmp/deinstall2015-09-11_09-37-29AM/crs/install /tmp/deinstall2015-09-11_09-37-29AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-09-11_09-37-29AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2015-09-11_09-37-29AM/response/deinstall_Ora11g_gridinfrahome1.rsp


...


** Wait for it to finish with "succeeded" messages (mine ended with failed.. Y Y).


Successfully deconfigured Oracle clusterware stack on this node <-- something like this..


------------------------------------------------------------------

** Do the same on rac1.

[root@rac1 ~]# 

[root@rac1 ~]# root

-bash: root: command not found

[root@rac1 ~]# /tmp/deinstall2015-09-11_09-37-29AM/perl/bin/perl -I/tmp/deinstall2015-09-11_09-37-29AM/perl/lib -I/tmp/deinstall2015-09-11_09-37-29AM/crs/install /tmp/deinstall2015-09-11_09-37-29AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-09-11_09-37-29AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

...

Successfully deconfigured Oracle clusterware stack on this node



Then go back to the deinstall session and press Enter.


<----------------------------------------


Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START


Detach Oracle home '/oracle/grid' from the central inventory on the local node : Done


Delete directory '/oracle/grid' on the local node : Done


Delete directory '/oracle/app/oraInventory' on the local node : Done


Delete directory '/oracle/app/oracle' on the local node : Done


Detach Oracle home '/oracle/grid' from the central inventory on the remote nodes 'rac2' : Done


Delete directory '/oracle/grid' on the remote nodes 'rac2' : Done


Delete directory '/oracle/app/oraInventory' on the remote nodes 'rac2' : Done


Delete directory '/oracle/app/oracle' on the remote nodes 'rac2' : Done


Oracle Universal Installer cleanup was successful.


Oracle Universal Installer clean END



## [START] Oracle install clean ##


Clean install operation removing temporary directory '/tmp/deinstall2015-09-11_09-37-29AM' on node 'rac1'

Clean install operation removing temporary directory '/tmp/deinstall2015-09-11_09-37-29AM' on node 'rac2'


## [END] Oracle install clean ##



######################### CLEAN OPERATION END #########################



####################### CLEAN OPERATION SUMMARY #######################

Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1

Oracle Clusterware is stopped and successfully de-configured on node "rac1"

Oracle Clusterware is stopped and successfully de-configured on node "rac2"

Oracle Clusterware is stopped and de-configured successfully.

Successfully detached Oracle home '/oracle/grid' from the central inventory on the local node.

Successfully deleted directory '/oracle/grid' on the local node.

Successfully deleted directory '/oracle/app/oraInventory' on the local node.

Successfully deleted directory '/oracle/app/oracle' on the local node.

Successfully detached Oracle home '/oracle/grid' from the central inventory on the remote nodes 'rac2'.

Successfully deleted directory '/oracle/grid' on the remote nodes 'rac2'.

Successfully deleted directory '/oracle/app/oraInventory' on the remote nodes 'rac2'.

Successfully deleted directory '/oracle/app/oracle' on the remote nodes 'rac2'.

Oracle Universal Installer cleanup was successful.



Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac1,rac2' at the end of the session.


Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1,rac2' at the end of the session.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################



############# ORACLE DEINSTALL & DECONFIG TOOL END #############


[oracle@rac1:/oracle/grid/deinstall]$ 


Done!! ...well, not quite; finally, run the following as root on rac1 / rac2.

[root@rac1 ~]# rm -rf /etc/oraInst.loc

[root@rac1 ~]# rm -rf /opt/ORCLfmap


----------------------------------------------------------

[root@rac2 ~]# rm -rf /opt/ORCLfmap

[root@rac2 ~]# rm -rf /etc/oraInst.loc


Check the directories and so on to make sure everything was really removed!
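
A quick sanity check along those lines (the paths are the ones used in this deinstall; each should now report "No such file or directory"):

[root@rac1 ~]# ls -ld /oracle/grid /oracle/app/oraInventory /oracle/app/oracle /etc/oraInst.loc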


