Whew...

Every time a recovery request comes in, I can't help but test it myself again..


Even scenarios I tested back when I was studying —

I remember "ah, right, that's how it went," but when someone asks me again I can't answer with confidence..

My fundamentals are still lacking.. or is it just a lack of ability...


Anyway, after a long while I ran the test again.



What happens when Veritas shuts the DB down in the middle of a hot backup..


I remember confirming in the logs that when Veritas brings the DB down, it normally does so with a shutdown abort.

An OS engineer has also verified this for me before..

so this time, too, I assumed the DB comes down with an abort.


Keeping firmly in mind that a hot backup is only possible in ARCHIVELOG mode..
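A quick way to confirm that before starting (a minimal check; the output will differ per system):

SQL> archive log list
SQL> select log_mode from v$database;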


SQL> alter tablespace users begin backup;


Tablespace altered.


-----------------

Alert Log (screenshot)



With the tablespace still in backup mode, open another session and bring the instance down with abort.


Then try a startup.
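Roughly what I ran in the second session (a sketch; the exact error raised on open varies by version — typically ORA-01113 "file ... needs media recovery" or ORA-10873):

SQL> shutdown abort
ORACLE instance shut down.

SQL> startup
(the instance starts and mounts, but the open fails because users01.dbf is still marked as being in backup mode)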


As expected, you can see that an error is thrown.


Just in case, checking the status is a must.
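The checks I mean here are roughly the following (a sketch; v$backup shows ACTIVE for any datafile still in backup mode, and both views can be queried while the database is only mounted):

SQL> select status from v$instance;
SQL> select file#, status from v$backup;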



So it's sitting in MOUNT state..

Let's proceed cleanly, exactly as the book says.


SQL> alter database datafile '/oradata/EAIDB/users01.dbf' end backup;

Database altered.


SQL> alter database open;

Database altered.


As always, the book doesn't betray you...
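Incidentally, on 10g and later you can also take every datafile out of backup mode in one shot while the database is mounted — handy when many tablespaces were in backup mode (a sketch):

SQL> alter database end backup;
SQL> alter database open;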


Alternatively, a recover database works just as well.

SQL> recover database;

Media recovery complete.


SQL> alter database open;

Database altered.




Well, anyway..

It's been a while since I last did a recovery without having to think about it...


It feels like nothing, but I have to state things to the customer with certainty, and recovery is only easy when I know it cold...

Keep the fundamentals solid!!!


On a 10.2.0.5 two-node RAC, one node (the slave) kept getting restarted.

While checking this and that, I found the error log below; I looked into it but couldn't be sure of the cause,

so for now I'm saving a copy of what I found.

Below is crsd.log (from the master node).


2015-10-10 12:49:30.874: [ COMMCRS][9526]clsc_receive: (114771870) error 2


2015-10-10 21:54:31.062: [ COMMCRS][9526]clsc_receive: (114771870) error 2


2015-10-13 10:41:30.444: [  CRSEVT][11008]32CAAMonitorHandler :: 0:Could not join /oracle/crs/bin/racgwrap(check)

category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, other: Abnormal termination of the child


2015-10-13 10:41:30.449: [  CRSEVT][11008]32CAAMonitorHandler :: 0:Action Script /oracle/crs/bin/racgwrap(check) timed out for ora.glvndbp05.vip! (timeout=60)

2015-10-13 10:41:30.449: [  CRSAPP][11008]32CheckResource error for ora.glvndbp05.vip error code = -2

2015-10-13 10:43:03.482: [  CRSEVT][11011]32CAAMonitorHandler :: 0:Could not join /oracle/crs/bin/racgwrap(check)

category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, other: Abnormal termination of the child


2015-10-13 10:43:03.482: [  CRSEVT][11011]32CAAMonitorHandler :: 0:Action Script /oracle/crs/bin/racgwrap(check) timed out for ora.glvndbp05.vip! (timeout=60)

2015-10-13 10:43:03.482: [  CRSAPP][11011]32CheckResource error for ora.glvndbp05.vip error code = -2

2015-10-13 10:44:36.509: [  CRSEVT][11017]32CAAMonitorHandler :: 0:Could not join /oracle/crs/bin/racgwrap(check)

category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, other: Abnormal termination of the child


Summarizing what shows up above...

[CRSEVT] CAAMonitorHandler :: 0:Could not join .../crs/bin/racgwrap(check)

category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, other: Abnormal termination of the child


[CRSEVT] CAAMonitorHandler :: 0:Action Script ../crs/bin/racgwrap(check) timed out for ora.<nodename>.vip! (timeout=60)


These are the lines that stand out. Searching for them (in Korean), exactly one post turned up,

and since I don't know the exact cause, I'm sharing it here for now.


It is copied from elsewhere, so if that is a problem I will take it down.


This is a bug in 10g RAC environments: the racgmain check daemon gets forked abnormally, memory usage keeps climbing, and eventually the system becomes unusable.

 

 oracle 26024     1  0  Dec  6  ?         0:00 /oracle/crs/bin/racgmain check

 oracle 23218     1  0  Dec  6  ?         0:00 /oracle/crs/bin/racgmain check

 oracle 23179     1  0  Dec  4  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle 27277     1  0  Dec  6  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle  1028     1  0  Dec  5  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle  7991     1  0  Dec  4  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle 15324     1  0  Dec  3  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle 14314     1  0  Dec  4  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle 10895     1  0  Dec  4  ?         0:00 /oracle/ora10/bin/racgmain check

 oracle   404     1  0  Dec  3  ?         0:00 /oracle/ora10/bin/racgmain check


The fix is to apply the CRS bundle #2 patchset, or to apply the workaround below.

=====================================================================================


Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.6

Information in this document applies to any platform.

Oracle Server Enterprise Edition - Version: 10.1.0.2 to 10.2.0.4

Symptoms

System slows down and many "racgmain check" processes may appear in ps output.  CRS log would show the following messages.

oracle@HA5-ZW05:[/home/oracle] ps -ef|grep "racgmain check"|wc -l

1290


~~~~

CAAMonitorHandler :: 0:Action Script /opt/oracle/product/crs/bin/racgwrap(check) timed out for ora.harac1.vip! (timeout=60)

CheckResource error for ora.harac1.vip error code = -2

CAAMonitorHandler :: 0:Could not join /opt/oracle/product/crs/bin/racgwrap(check)

category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, 

other: Abnormal termination of the child

~~~~

Cause

crsd.bin invokes the racgmain to check the status of the resources that are managed by CRS. The racgmain is invoked through the wrapper script racgwrap. 


If the resource action times out, crsd kills the action script (racgwrap), but the racgmain process itself is not killed. Over time this can leave a lot of orphan racgmain processes on the system, which eventually slows the system down due to resource contention at the OS level.


Internal bug 6196746 addresses this issue.


Solution


This is fixed in the 11.1.0.7 patchset. If you are running into this issue on 10gR2, apply the 10.2.0.4 patchset and the latest CRS bundle patch; the fix is included in the CRS bundle patches from bundle #2 onwards.

Following option could be used as a temporary workaround until the patch is applied.


1.  Make a copy of racgwrap located under $ORACLE_HOME/bin and $CRS_HOME/bin on ALL Nodes


2.  Edit the file racgwrap and modify the last 3 lines from:


~~~

$ORACLE_HOME/bin/racgmain "$@"

status=$?

exit $status


to:


# Line added to fix for Bug 6196746

exec $ORACLE_HOME/bin/racgmain "$@"

~~~


3.  Kill all the orphan racgmain processes running.


$ ps -ef|grep "racgmain check"

oracle 18701 1 0 Aug 1 ? 0:00 /oracle/product/10.2.0/database/bin/racgmain check

oracle 14653 1 0 Aug 1 ? 0:00 /oracle/product/10.2.0/database/bin/racgmain check

oracle 24517 1 0 Aug 1 ? 0:00 /oracle/product/10.2.0/database/bin/racgmain check


$ kill -9 <PID of racgmain>
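If there are hundreds of orphans, a one-liner along these lines saves typing (my own addition, not from the note — double-check the grep pattern before running it; the bracketed [r] keeps grep from matching itself):

$ ps -ef | grep "[r]acgmain check" | awk '{print $2}' | xargs kill -9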



Source: http://pat98.tistory.com/376


I'm also sharing an additional document I found.


10g/11gR1: Many Orphaned Or Hanging "racgmain" Processes Running (Doc ID 732086.1) <-- same as the above


10g RAC: One node VIP status always shows "UNKNOWN" and "CRS-0223: Resource 'ora.rac-test1.vip' has placement error" when try to startup the VIP. (Doc ID 1993024.1)

Symptoms

Symptom 1:

In 10g RAC on a unix platform, the VIP on one node always shows "UNKNOWN".


Symptom 2:

When trying to start it up, it reports the following error:

CRS-1028: Dependency analysis failed because of:

CRS-0223: Resource 'ora.rac-test1.vip' has placement error.  


Symptom 3:

In the CRSD log, the following error is found:


2015-03-19 10:51:09.772: [  CRSRES][3737213248][ALERT]0`ora.rac-test1.vip` on member `rac-test1` has experienced an unrecoverable failure.

2015-03-19 10:51:09.772: [  CRSRES][3737213248]0Human intervention required to resume its availability.

2015-03-19 10:51:09.772: [  CRSEVT][3741415744]0CAAMonitorHandler :: 0:Could not execute /u01/app/oracle/product/10.2.0/crs_1/bin/racgwrap(stop) for ora.rac-test2.vip

category: 1234, operation: scls_canexec, loc: , OS error: 0, other: no exe permission, file /u01/app/oracle/product/10.2.0/crs_1/bin/racgwrap    ===> No execute permission for this file.


Solution

1.  Shut down all the resources on this node: instance / asm / nodeapps / crs (a command sketch follows after this list)


2.  Change the permissions of these two files to 751 on the affected node


chmod 751 /u01/app/oracle/product/10.2.0/crs_1/racg/admin/racgwrap

chmod 751 /u01/app/oracle/product/10.2.0/crs_1/bin/racgeut


3.  Then start the resources back up and check whether the VIP is online.
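For steps 1 and 3, the commands would look roughly like this on 10gR2 (a sketch, not from the note; anything in <> is a placeholder of mine and must be adjusted to your environment):

$ srvctl stop instance -d <dbname> -i <instance_on_rac-test1>
$ srvctl stop asm -n rac-test1
$ srvctl stop nodeapps -n rac-test1
# crsctl stop crs                     (as root, on the affected node)

(apply the chmod 751 fix here)

# crsctl start crs                    (as root)
$ srvctl start nodeapps -n rac-test1
$ srvctl start asm -n rac-test1
$ srvctl start instance -d <dbname> -i <instance_on_rac-test1>
$ crs_stat -t                         (confirm ora.rac-test1.vip is ONLINE)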




Whew... this one cost me more than two days...


A problem that comes up while installing 11.2.0.3 grid for RAC on raw devices..

The error appears during listener configuration, after getting past that dreaded rootpre step.


Unfortunately I couldn't capture the error screen, but if I dig it up via Google or run into it again,

I will definitely update this post.


Opening the corresponding log, you can see the following.


* Since the image would look too small, I've pasted similar content below instead.


INFO: Problem in configuration: PRCN-2061 : Failed to add listener ora.LISTENER.lsnr
INFO: PRCN-2065 : Port(s) 1521 are not available on the nodes given
INFO: PRCN-2067 : Port 1521 is not available across node(s) "hww-poc1-VIP,hww-poc2-VIP"
INFO: Oracle Net Listener Startup:
INFO:     Listener does not exists.
INFO: Check the trace file for details: /home/grid/app/grid/cfgtoollogs/netca/trace_Ora11g_gridinfrahome1-1410287PM2700.log
INFO: Oracle Net Services configuration failed.  The exit code is 1



Check as follows whether a tns listener is up on both nodes, and whether anything is using port 1521.

Check both nodes at the same time.

$ ps -ef | grep tns

$ netstat -nltp

RAC node 1 (screenshot)


RAC node 2 (screenshot)


In my case it was up on node 2, but nothing was running on node 1.
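To see exactly what is holding port 1521 on a node, something like this helps (a sketch; the PID/program column in netstat only shows up when run as root or for your own processes):

$ netstat -nltp | grep 1521
$ ps -ef | grep "[t]ns"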

Now check the status of the SCAN listener, then stop and disable it.

$ srvctl status scan_listener

$ srvctl stop scan_listener

$ srvctl disable scan_listener



Be sure to run these as the oracle user!


After that, hit Retry on the paused installer screen and you can see the installation proceed normally.

Once the install completes, check again; if the SCAN listener is still stopped, as in my case, enable and start it.


$ srvctl enable scan_listener

$ srvctl start scan_listener





[Source] http://www.shishirtekade.com/2014/10/prcn-2065-ports-1521-are-not-available.html







[root@rac1 install]# pwd

/oracle/grid/crs/install

[root@rac1 install]# ./rootcrs.pl -deconfig -force -verbose

Using configuration parameter file: ./crsconfig_params

CRS-4535: Cannot communicate with Cluster Ready Services

CRS-4000: Command Stop failed, or completed with errors.

CRS-4535: Cannot communicate with Cluster Ready Services

CRS-4000: Command Delete failed, or completed with errors.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded

CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'rac1'

CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Successfully deconfigured Oracle Restart stack


[root@rac2 install]# ./rootcrs.pl -deconfig -force -verbose

Using configuration parameter file: ./crsconfig_params

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Delete failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

You must kill ohasd processes or reboot the system to properly 

cleanup the processes started by Oracle clusterware

ACFS-9313: No ADVM/ACFS installation detected.

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall

Successfully deconfigured Oracle Restart stack


[root@rac1 oracle]# $GRID_HOME/root.sh

[root@rac2 oracle]# $GRID_HOME/root.sh


The content below is excerpted.

[Explanation] On both nodes, work from the home directory where Grid is installed. First check the current resources — crs, asm, and so on; even if they are up, everything will be removed automatically anyway. The steps below must be carried out on both node 1 and node 2.
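Checking the current resource state before tearing things down would look something like this (a sketch; on 11.2 crs_stat is deprecated but still works):

$ crsctl check crs
$ crs_stat -t
$ crsctl stat res -t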


[Explanation] Also remove everything under the local inventory. (If you skip this, an error occurs the next time you install Grid.)
[root@rac2 oraInventory]# rm -rf *        (run from inside the oraInventory directory)

[Explanation] Since configuration files are present, remove them all as below.
[root@rac2 u01]# rm -rf /etc/ora*

[Explanation] If the daemons are configured, you must run rootdeinstall.sh first, and then remove the files below.
rm -f /etc/init.d/init.cssd 
rm -f /etc/init.d/init.crs 
rm -f /etc/init.d/init.crsd 
rm -f /etc/init.d/init.evmd 
rm -f /etc/rc2.d/K96init.crs
rm -f /etc/rc2.d/S96init.crs
rm -f /etc/rc3.d/K96init.crs
rm -f /etc/rc3.d/S96init.crs
rm -f /etc/rc5.d/K96init.crs
rm -f /etc/rc5.d/S96init.crs
rm -Rf /etc/oracle/scls_scr
rm -f /etc/inittab.crs 
cp /etc/inittab.orig /etc/inittab
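After the cleanup, a quick sanity check that nothing clusterware-related is left behind (my own addition, not part of the excerpt):

ps -ef | grep -E 'ohasd|crsd|cssd|evmd' | grep -v grep
ls /etc/init.d/ | grep "init\."
grep -E 'cssd|crs|evmd|ohasd' /etc/inittab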


[Source] http://estenpark.tistory.com/283



A quick note on SCAN, a feature newly added in Oracle 11gR2.. (before I forget --;)

Oracle adds new features with every release, and the one introduced here is SCAN (Single Client Access Name). As the name says, it gives clients a single access name for connecting to the server even when there are multiple RAC nodes. The name keeps working when nodes are added or removed — in fact, that is exactly the scenario it was designed for.

A single client access name that doesn't care whether nodes are added or removed..
Doesn't that make the phrase "cloud computing" pop into your head?

The tns alias below is a sample client configuration when SCAN is used.
At first glance — or even on a close look — it is identical to a tns alias for connecting to a single-instance DB.

TEST.ORACLE.COM =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=SCAN-TEST.ORACLE.COM)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=11GR2TEST.ORACLE.COM))
)

With earlier RAC releases, the tns alias used to be configured like this:

TEST.ORACLE.COM =
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcp)(HOST=TEST1-vip.ORACLE.COM)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=TEST2-vip.ORACLE.COM)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=11GR2TEST.ORACLE.COM))
)



So how can the access name stay the same even as nodes are added or removed?
A new set of listeners is placed in front of each node's local listener. These front-end SCAN listeners see the RAC listeners behind them and hand incoming connections off to them.



11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained

SCAN Concepts

  • Single client access name (SCAN) is the virtual hostname to provide for all clients connecting to the cluster (as opposed to the vip hostnames in 10g and 11gR1).  
  • SCAN is a domain name registered to at least one and up to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS).
  • By default, the name used as the SCAN is also the name of the cluster and must be globally unique throughout your enterprise. The default value for the SCAN is based on the local node name. SCAN name must be at least one character long and no more than 15 characters in length, must be alphanumeric - cannot begin with a numeral and may contain hyphens (-). If you require a SCAN that is longer than 15 characters, then select an Advanced installation.
  • For installation to succeed, the SCAN must resolve to at least one address.
  • SCAN VIP addresses must be on the same subnet as virtual IP addresses and public IP addresses.
  • Oracle strongly recommends that you do not configure SCAN VIP addresses in the hosts file. If you use the hosts file to resolve SCANs, then you will only be able to resolve to one IP address and you will have only one SCAN address - be sure to provide a hosts file entry for each SCAN address in hosts file in same order.
  • If hosts file is used to resolve SCAN hostname, you will receive Cluster Verification Utility failure at end of installation (see Note: 887471.1 for more details)
  • For high availability and scalability, Oracle recommends that you configure the SCAN to use DNS Round Robin resolution to three addresses.
  • Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database.
  • Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN. Clients using the SCAN can also access the cluster using EZCONNECT.
  • Grid Infrastructure will start local listener LISTENER on all nodes to listen on local VIP, and SCAN listener LISTENER_SCAN1 (up to three cluster wide) to listen on SCAN VIP(s); 11gR2 database by default will set local_listener to local LISTENER, and remote_listener to SCAN listener.

To sum up the SCAN concept above: it is a virtual hostname for the RAC cluster as a whole. It is registered in DNS, and clients connect to the DB through it; failover and load balancing are then handled by the listeners on each RAC node.
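To see how this is wired up on an actual 11.2 cluster, checks along these lines are handy (a sketch; the hostname is the one from the sample alias above):

$ nslookup SCAN-TEST.ORACLE.COM        (should return up to three addresses, rotated by DNS round robin)
$ srvctl config scan
$ srvctl config scan_listener
$ srvctl status scan_listener
SQL> show parameter remote_listener    (by default points at the SCAN, e.g. SCAN-TEST.ORACLE.COM:1521)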

This write-up is based on "Note:887522.1 - 11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained".

A related document and video are linked below. There doesn't seem to be any Korean-language material on this yet..

[PDF] SINGLE CLIENT ACCESS NAME (SCAN) - 29 Mar 2010
"Single Client Access Name (SCAN) is a new Oracle Real Application Clusters (RAC) 11g Release 2 feature that provides ..."
www.oracle.com/technology/products/database/.../scan.pdf


<Source> 에너쓰오라클



[oracle@rac1:/oracle/grid]$ cd deinstall/

[oracle@rac1:/oracle/grid/deinstall]$ ./deinstall

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2015-09-11_09-37-29AM/logs/


############ ORACLE DEINSTALL & DECONFIG TOOL START ############



######################### CHECK OPERATION START #########################

## [START] Install check configuration ##



Checking for existence of the Oracle home location /oracle/grid

Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster

Oracle Base selected for deinstall is: /oracle/app/oracle

Checking for existence of central inventory location /oracle/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /oracle/grid

The following nodes are part of this cluster: rac1,rac2

Checking for sufficient temp space availability on node(s) : 'rac1,rac2'


## [END] Install check configuration ##


Traces log file: /tmp/deinstall2015-09-11_09-37-29AM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "rac1"[rac1-vip]

 > 


The following information can be collected by running "/sbin/ifconfig -a" on node "rac1"

Enter the IP netmask of Virtual IP "192.168.131.120" on node "rac1"[255.255.255.0]

 > 


Enter the network interface name on which the virtual IP address "192.168.131.120" is active

 > 


Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip]

 > 


The following information can be collected by running "/sbin/ifconfig -a" on node "rac2"

Enter the IP netmask of Virtual IP "192.168.131.130" on node "rac2"[255.255.255.0]

 > 


Enter the network interface name on which the virtual IP address "192.168.131.130" is active

 > 


Enter an address or the name of the virtual IP[]

 > 



Network Configuration check config START


Network de-configuration trace file location: /tmp/deinstall2015-09-11_09-37-29AM/logs/netdc_check2015-09-11_10-08-47-AM.log


Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:


Network Configuration check config END


Asm Check Configuration START


ASM de-configuration trace file location: /tmp/deinstall2015-09-11_09-37-29AM/logs/asmcadc_check2015-09-11_10-08-48-AM.log


ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: 

ASM was not detected in the Oracle Home


######################### CHECK OPERATION END #########################



####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /oracle/grid

The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2

Oracle Home selected for deinstall is: /oracle/grid

Inventory Location where the Oracle home registered is: /oracle/app/oraInventory

Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1

ASM was not detected in the Oracle Home

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/tmp/deinstall2015-09-11_09-37-29AM/logs/deinstall_deconfig2015-09-11_09-37-37-AM.out'

Any error messages from this session will be written to: '/tmp/deinstall2015-09-11_09-37-29AM/logs/deinstall_deconfig2015-09-11_09-37-37-AM.err'


######################## CLEAN OPERATION START ########################

ASM de-configuration trace file location: /tmp/deinstall2015-09-11_09-37-29AM/logs/asmcadc_clean2015-09-11_10-08-51-AM.log

ASM Clean Configuration END


Network Configuration clean config START


Network de-configuration trace file location: /tmp/deinstall2015-09-11_09-37-29AM/logs/netdc_clean2015-09-11_10-08-51-AM.log


De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1


De-configuring listener: LISTENER

    Stopping listener: LISTENER

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.


De-configuring listener: LISTENER_SCAN1

    Stopping listener: LISTENER_SCAN1

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.


De-configuring Naming Methods configuration file on all nodes...

Naming Methods configuration file de-configured successfully.


De-configuring Local Net Service Names configuration file on all nodes...

Local Net Service Names configuration file de-configured successfully.


De-configuring Directory Usage configuration file on all nodes...

Directory Usage configuration file de-configured successfully.


De-configuring backup files on all nodes...

Backup files de-configured successfully.


The network configuration has been cleaned up successfully.


Network Configuration clean config END



---------------------------------------->


The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.


Run the following command as the root user or the administrator on node "rac2".


/tmp/deinstall2015-09-11_09-37-29AM/perl/bin/perl -I/tmp/deinstall2015-09-11_09-37-29AM/perl/lib -I/tmp/deinstall2015-09-11_09-37-29AM/crs/install /tmp/deinstall2015-09-11_09-37-29AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-09-11_09-37-29AM/response/deinstall_Ora11g_gridinfrahome1.rsp"


Run the following command as the root user or the administrator on node "rac1".


/tmp/deinstall2015-09-11_09-37-29AM/perl/bin/perl -I/tmp/deinstall2015-09-11_09-37-29AM/perl/lib -I/tmp/deinstall2015-09-11_09-37-29AM/crs/install /tmp/deinstall2015-09-11_09-37-29AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-09-11_09-37-29AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode


Press Enter after you finish running the above commands


<----------------------------------------

** Don't press Enter here yet. Open another console and run the commands shown above as root.


[root@rac2 ~]# /tmp/deinstall2015-09-11_09-37-29AM/perl/bin/perl -I/tmp/deinstall2015-09-11_09-37-29AM/perl/lib -I/tmp/deinstall2015-09-11_09-37-29AM/crs/install /tmp/deinstall2015-09-11_09-37-29AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-09-11_09-37-29AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2015-09-11_09-37-29AM/response/deinstall_Ora11g_gridinfrahome1.rsp


...


** Wait until it finishes with messages like "succeeded". (Mine... came back failed.. T_T)


Successfully deconfigured Oracle clusterware stack on this node   <-- something like this..


------------------------------------------------------------------

** Do the same on rac1.

[root@rac1 ~]# 

[root@rac1 ~]# root

-bash: root: command not found

[root@rac1 ~]# /tmp/deinstall2015-09-11_09-37-29AM/perl/bin/perl -I/tmp/deinstall2015-09-11_09-37-29AM/perl/lib -I/tmp/deinstall2015-09-11_09-37-29AM/crs/install /tmp/deinstall2015-09-11_09-37-29AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-09-11_09-37-29AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

...

Successfully deconfigured Oracle clusterware stack on this node



Come back to the deinstall session and press Enter.


<----------------------------------------


Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START


Detach Oracle home '/oracle/grid' from the central inventory on the local node : Done


Delete directory '/oracle/grid' on the local node : Done


Delete directory '/oracle/app/oraInventory' on the local node : Done


Delete directory '/oracle/app/oracle' on the local node : Done


Detach Oracle home '/oracle/grid' from the central inventory on the remote nodes 'rac2' : Done


Delete directory '/oracle/grid' on the remote nodes 'rac2' : Done


Delete directory '/oracle/app/oraInventory' on the remote nodes 'rac2' : Done


Delete directory '/oracle/app/oracle' on the remote nodes 'rac2' : Done


Oracle Universal Installer cleanup was successful.


Oracle Universal Installer clean END



## [START] Oracle install clean ##


Clean install operation removing temporary directory '/tmp/deinstall2015-09-11_09-37-29AM' on node 'rac1'

Clean install operation removing temporary directory '/tmp/deinstall2015-09-11_09-37-29AM' on node 'rac2'


## [END] Oracle install clean ##



######################### CLEAN OPERATION END #########################



####################### CLEAN OPERATION SUMMARY #######################

Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1

Oracle Clusterware is stopped and successfully de-configured on node "rac1"

Oracle Clusterware is stopped and successfully de-configured on node "rac2"

Oracle Clusterware is stopped and de-configured successfully.

Successfully detached Oracle home '/oracle/grid' from the central inventory on the local node.

Successfully deleted directory '/oracle/grid' on the local node.

Successfully deleted directory '/oracle/app/oraInventory' on the local node.

Successfully deleted directory '/oracle/app/oracle' on the local node.

Successfully detached Oracle home '/oracle/grid' from the central inventory on the remote nodes 'rac2'.

Successfully deleted directory '/oracle/grid' on the remote nodes 'rac2'.

Successfully deleted directory '/oracle/app/oraInventory' on the remote nodes 'rac2'.

Successfully deleted directory '/oracle/app/oracle' on the remote nodes 'rac2'.

Oracle Universal Installer cleanup was successful.



Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac1,rac2' at the end of the session.


Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1,rac2' at the end of the session.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################



############# ORACLE DEINSTALL & DECONFIG TOOL END #############


[oracle@rac1:/oracle/grid/deinstall]$ 


Not done yet!! Finally, run the following as root on rac1 / rac2.

[root@rac1 ~]# rm -rf /etc/oraInst.loc

[root@rac1 ~]# rm -rf /opt/ORCLfmap


----------------------------------------------------------

[root@rac2 ~]# rm -rf /opt/ORCLfmap

[root@rac2 ~]# rm -rf /etc/oraInst.loc


Check the directories and so on to confirm everything was removed cleanly!
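A quick way to confirm nothing is left (a sketch; the paths are the ones used above — all of these should come back empty or with "No such file or directory"):

[root@rac1 ~]# ls -ld /oracle/grid /oracle/app/oraInventory /oracle/app/oracle
[root@rac1 ~]# ps -ef | grep -E 'ohasd|crsd|cssd' | grep -v grep
[root@rac1 ~]# ls -l /etc/oraInst.loc /opt/ORCLfmap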



