Saturday, November 29, 2008

Adding a Third Node to a Two-Node RAC Setup in Oracle 10gR2 on Sun Solaris 10

Dear All,

Another challenge for the RAC experts out there:


Oracle® Database 2 Day + Real Application Clusters Guide 10g Release 2 (10.2)
Part Number B28759-01


Backup (Fallback): back up the voting disks and the OCR before starting the node addition

** Backing Up Voting Disks
dd if=voting_disk_name of=backup_file_name
If your voting disk is stored on a raw device, use the device name in place of voting_disk_name. For example:
dd if=/dev/sdd1 of=/tmp/voting.dmp
When you use the dd command for making backups of the voting disk, the backup can be performed while the Cluster Ready Services (CRS) process is active; you do not need to stop the crsd.bin process before taking a backup of the voting disk.
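On Solaris 10 the voting disk is normally a character (raw) device under /dev/rdsk, so the backup would look more like the following; the device name and backup location are only placeholders, substitute your own:
# placeholder raw device and backup file - replace with your actual voting disk and backup path
dd if=/dev/rdsk/c1t2d0s6 of=/backup/votedisk_before_node3_add.dmp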

** Recovering Voting Disks
If a voting disk is damaged, and no longer usable by Oracle Clusterware, you can recover the voting disk if you have a backup file. Run the following command to recover a voting disk where backup_file_name is the name of the voting disk backup file and voting_disk_name is the name of the active voting disk:
dd if=backup_file_name of=voting_disk_name

** Adding and Removing Voting Disks
You can dynamically add and remove voting disks after installing Oracle RAC. Do this using the following commands where path is the fully qualified path for the additional voting disk. Run the following command as the root user to add a voting disk:
/clusterware/crs/oracle/product/10.2.0/bin/crsctl add css votedisk path
Run the following command as the root user to remove a voting disk:
/clusterware/crs/oracle/product/10.2.0/bin/crsctl delete css votedisk path
Note:
If your cluster is down, then you can use the -force option to modify the voting disk configuration when using either of these commands without interacting with active Oracle Clusterware daemons. However, you may corrupt your cluster configuration if you use the -force option while a cluster node is active.
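Before adding or removing anything, it is worth listing the voting disks currently in use; if the cluster is down, the -force variant can be used as described in the note above (the device path below is only a placeholder):
/clusterware/crs/oracle/product/10.2.0/bin/crsctl query css votedisk
/clusterware/crs/oracle/product/10.2.0/bin/crsctl add css votedisk /dev/rdsk/c1t3d0s6 -force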

** Backing Up and Recovering the Oracle Cluster Registry
* Viewing Available OCR Backups
To find the most recent backup of the OCR, on any node in the cluster, use the following command:
/clusterware/crs/oracle/product/10.2.0/bin/ocrconfig -showbackup
* Backing Up the OCR
Because of the importance of OCR information, Oracle recommends that you use the ocrconfig tool to make copies of the automatically created backup files at least once a day.
In addition to using the automatically created OCR backup files, you should also export the OCR contents to a file before and after making significant configuration changes, such as adding or deleting nodes from your environment, modifying Oracle Clusterware resources, or creating a database. Exporting the OCR contents to a file lets you restore the OCR if your configuration changes cause errors. For example, if you have unresolvable configuration problems, or if you are unable to restart your cluster database after such changes, then you can restore your configuration by importing the saved OCR content from the valid configuration.
To export the contents of the OCR to a file, use the following command, where backup_file_name is the name of the OCR backup file you want to create:
/clusterware/crs/oracle/product/10.2.0/bin/ocrconfig -export backup_file_name
Note:
You must be logged in as the root user to run the ocrconfig command.
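For example, before starting the node addition you could export the OCR as follows, and later bring the configuration back with -import if something goes wrong (the file name is only a placeholder; stop Oracle Clusterware on all nodes before importing):
/clusterware/crs/oracle/product/10.2.0/bin/ocrconfig -export /backup/ocr_before_node3_add.dmp
/clusterware/crs/oracle/product/10.2.0/bin/ocrconfig -import /backup/ocr_before_node3_add.dmp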
* Recovering the OCR
This section describes two methods for recovering the OCR. The first method uses automatically generated OCR file copies and the second method uses manually created OCR export files.
In the event of a failure, before you attempt to restore the OCR, ensure that the OCR is unavailable. Run the following command to check the status of the OCR:
ocrcheck
If this command does not display the message 'Device/File integrity check succeeded' for at least one copy of the OCR, then both the primary OCR and the OCR mirror have failed. You must restore the OCR from a backup.
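To restore from an automatically generated backup, stop Oracle Clusterware on all nodes, locate a backup with -showbackup, and restore it. The backup path below assumes the default location under CRS_home/cdata/<cluster_name> and is only an example:
/clusterware/crs/oracle/product/10.2.0/bin/ocrconfig -showbackup
/clusterware/crs/oracle/product/10.2.0/bin/ocrconfig -restore /clusterware/crs/oracle/product/10.2.0/cdata/crs/backup00.ocr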
**************************************************************************************************************
On Node3: (New Node)
Check /etc/hosts on all three nodes for the public, private, and VIP entries of every node (a sample layout is shown below, after the cluster verification command)
ping node1
ping node1-priv
ping node1-vip
ping node2
ping node2-priv
ping node2-vip
On Node1:
ping node3
ping node3-priv
./runcluvfy.sh stage -pre crsinst -n node1,node2,node3 -r 10gR2
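For reference, a sample /etc/hosts layout covering all three nodes (all IP addresses below are made up; substitute your own public, private, and VIP addresses):
192.168.1.11    node1          # public
192.168.1.12    node2          # public
192.168.1.13    node3          # public
10.0.0.11       node1-priv     # private interconnect
10.0.0.12       node2-priv
10.0.0.13       node3-priv
192.168.1.21    node1-vip      # virtual IP
192.168.1.22    node2-vip
192.168.1.23    node3-vip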
** Extending the Oracle Clusterware
1. Verify that the $ORACLE_HOME and $ORA_CRS_HOME environment variables on node1 point to the successfully installed Oracle Clusterware home on that node.
echo $ORACLE_HOME
echo $ORA_CRS_HOME
2. Go to CRS_home/oui/bin and run the addNode.sh script.
cd /clusterware/crs/oracle/product/10.2.0/oui/bin
./addNode.sh
OUI starts and first displays the Welcome window.
3. Click Next.
The Specify Cluster Nodes to Add to Installation window appears.
4. Select the node or nodes that you want to add. After selecting node3, click Next.
5. Verify the entries that OUI displays on the Summary Page and click Next.
6. Run the rootaddNode.sh script from the CRS_home/install/ directory on node1 when prompted to do so.
/clusterware/crs/oracle/product/10.2.0/install/rootaddNode.sh
Basically, this script adds the node applications of the new node to the OCR configuration.
7. Run the orainstRoot.sh script on the node node3 if OUI prompts you to do so.
8. Run the CRS_home/root.sh script on the node node3 to start Oracle Clusterware on the new node.
/clusterware/crs/oracle/product/10.2.0/root.sh
9. Add the new node's Oracle Notification Services (ONS) configuration information to the shared Oracle Cluster Registry (OCR). Obtain the ONS port identifier used by the new node, which you need to know for the next step, by running the following command from the CRS_home/opmn/conf directory on the node1 node:
cat /clusterware/crs/oracle/product/10.2.0/opmn/conf/ons.config
After you locate the ONS port number for the new node, you must make sure that the ONS on node1 can communicate with the ONS on the new node, node3.
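For reference, the ons.config file usually looks like the following; the port numbers here are only illustrative, so use the remoteport value actually reported by this file in the racgons command below:
localport=6113
remoteport=6200
loglevel=3
useocr=on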
10. From the CRS_home/bin directory on the node node1, run the Oracle Notification Services configuration utility as shown in the following example, where remote_port is the port number from step 9, and node3 is the name of the node that you are adding:
/clusterware/crs/oracle/product/10.2.0/bin/racgons add_config node3:remote_port
At the end of the node addition process, you should have Oracle Clusterware running on the new node. To verify the installation of Oracle Clusterware on the new node, you can run the following command as the root user on the newly configured node, node3:
/clusterware/crs/oracle/product/10.2.0/bin/cluvfy stage -post crsinst -n node3 -verbose
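As an additional quick sanity check, you can confirm that the clusterware daemons are healthy and that all three nodes are registered:
/clusterware/crs/oracle/product/10.2.0/bin/crsctl check crs
/clusterware/crs/oracle/product/10.2.0/bin/olsnodes -n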

** Extending the Oracle Automatic Storage Management Home Directory
To extend the ASM installation to include the new node:
1. Ensure that you have successfully installed the ASM software on at least one node in your cluster environment. To use these procedures as shown, your $ASM_HOME environment variable must identify your successfully installed ASM home directory.
2. Go to the $ASM_HOME/oui/bin directory on node1 and run the addNode.sh script.
/oracle/app/oracle/product/10.2.0/sskidb/oui/bin/addNode.sh
3. When OUI displays the Node Selection window, select the node to be added (node3), then click Next.
4. Verify the entries that OUI displays on the Summary window, then click Next.
5. Run the root.sh script on the new node, node3, from the ASM home directory on that node when OUI prompts you to do so.
/oracle/app/oracle/product/10.2.0/sskidb/root.sh
You now have a copy of the ASM software on the new node.

** Extending the Oracle RAC Software Home Directory
To extend the Oracle RAC installation to include the new node:
1. Ensure that you have successfully installed the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown, your $ORACLE_HOME environment variable must identify your successfully installed Oracle RAC home directory.
2. Go to the $ORACLE_HOME/oui/bin directory on node1 and run the addNode.sh script.
/oracle/app/oracle/product/10.2.0/sskidb/oui/bin/addNode.sh
3. When OUI displays the Specify Cluster Nodes to Add to Installation window, select the node to be added (node3), then click Next.
4. Verify the entries that OUI displays in the Cluster Node Addition Summary window, then click Next.
5. Run the root.sh script on the new node, node3, from the $ORACLE_HOME directory on that node when OUI prompts you to do so.
/oracle/app/oracle/product/10.2.0/sskidb/root.sh
After completing these steps, you should have an installed Oracle RAC home on the new node.

** Creating a Listener on the New Node
To create a new Listener on the new node using Oracle Net Configuration Assistant:
1. Start the Oracle Net Configuration Assistant by entering netca at the system prompt from the $ORACLE_HOME/bin directory.
/oracle/app/oracle/product/10.2.0/sskidb/bin/netca
NETCA displays the Welcome window. Click Help on any NETCA window for additional information.
2. Select Listener configuration, and click Next.
NETCA displays the Listener Configuration, Listener window.
3. Select Add to create a new Listener, then click Next.
NETCA displays the Listener Configuration, Listener Name window.
4. Accept the default value of LISTENER for the Listener name by clicking Next.
NETCA displays the Listener Configuration, Select Protocols window.
5. Choose TCP and move it to the Selected Protocols area, then click Next.
NETCA displays the Listener Configuration, TCP/IP Protocol window.
6. Choose Use the standard port number of 1521, then click Next.
NETCA displays the Real Application Clusters window.
7. Select Cluster configuration for the type of configuration to perform, then click Next.
NETCA displays the Real Application Clusters, Active Nodes window.
8. Select the name of the node you are adding, for example node3, then click Next.
NETCA creates a Listener using the configuration information provided. You can now exit NETCA.
You should now have a Listener named LISTENER running on the new node.
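To confirm that the new listener is registered with Oracle Clusterware and running on node3, two quick checks are:
/oracle/app/oracle/product/10.2.0/sskidb/bin/srvctl status nodeapps -n node3
/clusterware/crs/oracle/product/10.2.0/bin/crs_stat -t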

** Adding a New Cluster Instance on the New Node
To create a new cluster instance on the new node using DBCA:
1. Start DBCA by entering dbca at the system prompt from the $ORACLE_HOME/bin directory.
/oracle/app/oracle/product/10.2.0/sskidb/bin/dbca
DBCA displays the Welcome window for Oracle RAC. Click Help on any DBCA page for additional information.
2. Select Oracle Real Application Clusters database, and then click Next.
DBCA displays the Operations window.
3. Select Instance Management, and then click Next.
DBCA displays the Instance Management window.
4. Select Add an Instance, then click Next.
DBCA displays the List of Cluster Databases window, which shows the databases and their current status, such as ACTIVE or INACTIVE.
5. In the List of Cluster Databases window, select the active Oracle RAC database to which you want to add an instance, for example sskidb. Enter the user name and password for a database user that has SYSDBA privileges. Click Next.
DBCA will spend a few minutes performing tasks in the background, then it will display the Instance naming and node selection window.
6. In the Instance naming and node selection window, enter the instance name in the field at the top of this window if the default instance name provided by DBCA does not match your existing instance naming scheme. For example, instead of the SSKIDB3 instance, you might want to create the SSKIDB_03 instance.
Click Next to accept the default instance name of SSKIDB3.
DBCA displays the Instance Storage window.
7. In the Instance Storage window, you have the option of changing the default storage options and file locations for the new database instance. In this example, you accept all the default values and click Finish.
DBCA displays the Summary window.
8. Review the information in the Summary window, then click OK to start the database instance addition operation. DBCA displays a progress dialog box showing DBCA performing the instance addition operation.
9. During the instance addition operation, if you are using ASM for your cluster database storage, DBCA detects the need for a new ASM instance on the new node.
When DBCA displays a dialog box asking if you want ASM to be extended to the new node, click Yes.
After DBCA extends ASM on the new node and completes the instance addition operation, DBCA displays a dialog box asking whether or not you want to perform another operation. Click No to exit DBCA.
You should now have a new cluster database instance and ASM instance running on the new node. After you terminate your DBCA session, you should run the following command to verify the administrative privileges on the new node and obtain detailed information about these privileges:
/clusterware/crs/oracle/product/10.2.0/bin/cluvfy comp admprv -o db_config -d /oracle/app/oracle/product/10.2.0/sskidb/ -n node3 -verbose
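Assuming the cluster database is named sskidb (as the instance names above suggest), you can also confirm that all three instances are up, either with srvctl or from SQL*Plus on any instance:
/oracle/app/oracle/product/10.2.0/sskidb/bin/srvctl status database -d sskidb
select inst_id, instance_name, host_name, status from gv$instance;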
*******************************************************************************************************************

Switchover and Switchback in a CASCADED STANDBY SETUP with Oracle 10gR2

Dear All,

This is the first of the challenges we all might have faced while doing a switchover and switchback with an Oracle Database in a cascaded standby setup:
Primary Database: epprod2
First Standby Database: epstby2
Second Standby Database (cascaded standby DB): epdr2

Steps are given below:

SWITCHOVER:

On Standby (epstby2):
alter system set log_archive_dest_state_2=DEFER;
alter database recover managed standby database cancel;
shutdown immediate;
Keep this database shut down as a backup (fallback) for the duration of the switchover.

On Primary (epprod2):
alter system set log_archive_dest_state_3=DEFER;
alter system set log_archive_dest_2='SERVICE=epdr2 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=epdr2 max_connections=4';
alter system set log_archive_dest_state_2=ENABLE;
alter system switch logfile;
alter system switch logfile;
select switchover_status from v$database;
alter database commit to switchover to standby with session shutdown;
shutdown immediate
startup nomount
alter database mount standby database;
alter system set log_archive_dest_state_2=defer;
alter database recover managed standby database disconnect from session;
select max(sequence#) from v$log_history;

On Cascaded Standby (DR - epdr2):
select switchover_status from v$database;
alter database commit to switchover to primary with session shutdown;
shutdown immediate
startup
alter system set log_archive_dest_state_2=ENABLE;
alter system switch logfile;
alter system switch logfile;
select max(sequence#) from v$archived_log;
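At this point epdr2 should be the primary and epprod2 the standby. A quick way to verify the roles and confirm redo is being applied (run the first query on both databases, the second on the new standby, epprod2):
select db_unique_name, database_role from v$database;
select max(sequence#) from v$archived_log where applied='YES';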


SWITCHBACK:

On Primary (epdr2):
select switchover_status from v$database;
alter database commit to switchover to standby with session shutdown;
shutdown immediate
startup nomount
alter database mount standby database;
alter system set log_archive_dest_state_2=defer;
alter database recover managed standby database disconnect from session;
select max(sequence#) from v$log_history;

On Standby (epprod2):
select switchover_status from v$database;
alter database commit to switchover to primary with session shutdown;
shutdown immediate
startup
alter system set log_archive_dest_state_2=ENABLE;
alter system switch logfile;
alter system switch logfile;
alter system set log_archive_dest_state_2=DEFER;
alter system set log_archive_dest_2='SERVICE=epdr2 VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=epdr2';
alter system set log_archive_dest_state_3=ENABLE;
select max(sequence#) from v$archived_log;

On Standby (epstby2):
startup nomount;
alter database mount standby database;
alter database recover managed standby database disconnect from session; (keep managed recovery running until the archive log gap with the new primary is resolved)
alter system set log_archive_dest_state_2=ENABLE;
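Once the gap is resolved and redo shipping is re-enabled, you can confirm that epprod2 is the primary again and that the standbys are catching up; for example, on each standby:
select db_unique_name, database_role from v$database;
select * from v$archive_gap;
select max(sequence#) from v$archived_log where applied='YES';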

Hope this helps ...
Paresh