Oracle RAC Node Eviction Troubleshooting

Oracle's explanations on MOSC say "a node", but in my testing it is always a particular physical machine and not the secondary instance. What would happen in a 3- or 4-node cluster when you have a RAC node eviction?

Is it safe to assume it will be instance 2 every time, no matter which side of the interconnect goes down? Also, what are some of the most common reasons for a node eviction?

Answer: A RAC node eviction is done on this basis: when a condition that requires a node eviction occurs (see below), the node with the lowest node number will be the node that survives the eviction. When you have a three-node cluster, two nodes will survive a node eviction, and so on.
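To see which node number each member holds, and is therefore favored to survive a two-way split, you can query the cluster directly; a minimal sketch, assuming an 11g+ Grid Infrastructure is up:

    olsnodes -n        # list node names with their node numbers
    olsnodes -n -s     # also show each node's status (Active/Inactive)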

A node eviction is done when a heartbeat indicates that a node is not responding, and the evicted node is restarted so that it can continue to participate in the cluster. There are many reasons for a RAC node eviction, so it's important that each node be properly configured.



There are two RAC processes that essentially decide about node evictions, and that initiate them, on almost all platforms. OCSSD: This process is primarily responsible for inter-node health monitoring and instance endpoint recovery. It runs as the oracle user, and it also provides basic cluster locking and group services.

It can run with or without vendor clusterware. Abnormal termination or killing of this process will cause the node to reboot via init. The ocssd daemon is spawned by the init.cssd script; if that script is killed, the ocssd process survives and the node keeps functioning. Since one ocssd process is already running, a second invocation of the script will fail to start another one.

OPROCD: This process monitors the machine for hangs. On Linux it was not available until 10.2.0.4 (the hangcheck-timer kernel module played this role before); starting from 11gR2 its function is performed by the cssdagent. Killing this process will reboot the node. If a machine hangs for a long time, this process needs to take the node down to stop I/O going to disk, so that the rest of the nodes can remaster the resources.

This executable sets a signal handler and sets the interval time based on a milliseconds parameter. It takes two parameters. Timeout value (-t): the length of time between executions. Margin (-m): the acceptable difference between dispatches.
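A quick way to confirm which of these monitoring daemons exist on a given node, and at what scheduling priority they run; a sketch, since the process names vary by release (oprocd's role moved to cssdagent/cssdmonitor in 11gR2):

    # cls = scheduling class, pri = priority; the CSS daemons run elevated
    ps -eo pid,user,cls,pri,comm | grep -E 'ocssd|oprocd|cssdagent|cssdmonitor'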

Troubleshooting Oracle RAC Node Evictions (Reboots) [11.2 and above]

There are two kinds of heartbeat mechanisms responsible for node reboot and reconfiguration of the remaining clusterware nodes. Network heartbeat: this indicates that a node can participate in cluster activities like group membership changes. If the heartbeat is missed for too long, the node is evicted; the threshold is determined by the CSS misscount parameter, which is 30 seconds on most platforms but can be changed depending on the network configuration of a particular environment.

Disk heartbeat: this means heartbeats to the voting disk file, which has the latest information about node members. Connectivity to a majority of voting files must be maintained for a node to stay alive.

The voting disk file uses kill blocks to notify nodes they have been evicted, after which the remaining nodes can go through reconfiguration; generally, per Oracle's algorithm, the node with the lowest node number becomes the master. By default the disk heartbeat timeout is 200 seconds, which is the CSS disktimeout parameter. Network split resolution: when the network fails and nodes are not able to communicate with each other, one node has to fail to maintain data integrity. The surviving nodes should be an optimal subcluster of the original cluster.

Each node writes its own vote to the voting file, and the reconfiguration manager component reads these votes to calculate an optimal subcluster. Nodes that are not to survive are evicted via communication through the network and disk. The current thresholds can be read directly from CSS, as sketched below.
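A minimal check of the two heartbeat thresholds discussed above (run as the grid owner or root; changing them is rarely advisable without Oracle Support guidance):

    crsctl get css misscount      # network heartbeat timeout, typically 30s
    crsctl get css disktimeout    # voting disk I/O timeout, typically 200s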


Network failure: 30 consecutive missed check-ins will reboot a node, where heartbeats are issued once per second. Warnings about the missed check-ins appear in the ocssd log. If the node eviction time in the messages log file is less than the missed check-in time, then the node eviction is likely not due to missed check-ins.

If the node eviction time in the messages log file is greater than the missed check-in time, then the node eviction is likely due to missed check-ins; a sketch of this timestamp comparison follows this paragraph. Problems writing to the voting disk file show up as some kind of hang in accessing the voting disk.
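One way to do the comparison, as a hedged sketch; the log paths assume an 11.2-style Grid home at /u01/app/11.2.0/grid and differ across versions:

    GRID_HOME=/u01/app/11.2.0/grid     # assumption: adjust to your install
    HOST=$(hostname -s)
    # missed-heartbeat warnings leading up to the eviction:
    grep -i 'heartbeat fatal' $GRID_HOME/log/$HOST/cssd/ocssd.log | tail
    # reboot time as recorded by the OS, for comparison:
    last reboot | head -3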

This post provides a reference for troubleshooting Clusterware node evictions in versions 11.2 and above. The Oracle Clusterware is designed to perform a node eviction by removing one or more nodes from the cluster if some critical problem is detected. A critical problem could be a node not responding via a network heartbeat, a node not responding via a disk heartbeat, a hung or severely degraded machine, or a hung ocssd.

The purpose of this node eviction is to maintain the overall health of the cluster by removing bad members. Starting in 11.2.0.2, an eviction does not always reboot the node; this is called a rebootless restart. In this case, we restart most of the Clusterware stack to see if that fixes the unhealthy node. The ocssd process that does this health monitoring runs in both vendor clusterware and non-vendor clusterware environments.


The health monitoring includes a network heartbeat and a disk heartbeat to the voting files. The ocssd process is multi-threaded, runs at an elevated priority, and runs as the oracle user; the cssdagent and cssdmonitor processes are likewise multi-threaded, run at an elevated priority, and run as the root user. The reboot advisory message in the clusterware alert log can be used to determine which process is responsible for the reboot. Example message from a clusterware alert log: "[ohasd] CRS... reboot advisory message from host: sta..., component: cssagent, with timestamp: L...". This particular eviction happened when we had hit the network timeout: CSSD exited, and the cssdagent took action to evict.

The cssdagent knows the information in the error message from local heartbeats made from CSSD. More data may be required to pin down the root cause.
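To look for such advisory messages on your own cluster, a minimal sketch; the 11.2-style alert log path is an assumption, and from 12.2 onward the clusterware alert log lives in the ADR under $ORACLE_BASE/diag/crs:

    HOST=$(hostname -s)
    grep -i 'reboot advisory' /u01/app/11.2.0/grid/log/$HOST/alert$HOST.log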

How to Troubleshoot Clusterware Node Evictions (Reboots)

OS logs from around the eviction time can point directly at the culprit. In this case, every path to a storage volume died, which would explain lost access to the voting disks:

Mar 27 choldbrp kernel: Error:Mpx:All paths to Symm vol 0c71 are dead.
Mar 27 choldbrp kernel: Error:Mpx:Symm vol 0c71 is dead.
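When the OS log points at dead storage paths like this, confirm at the multipath layer; a sketch covering the two common stacks (the Mpx/Symm lines above are EMC PowerPath-style messages):

    multipath -ll               # native device-mapper multipath: path states per LUN
    powermt display dev=all     # EMC PowerPath: path states for every device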







Troubleshooting: look at the ocssd log, and also take a look at the crsd log. The evicted node will have a core dump file generated and system reboot info.

Find out if there was a node reboot, whether it was caused by CRS or something else, and check the system reboot time. After identifying whether it was a network or disk issue, start going in depth. If the network was the issue, check whether any NIC cards were down or link switching happened, and check that the private interconnect is working between the nodes. Ask what got changed recently. Ask a coworker to open a ticket with Oracle and upload the logs. Check the health of the clusterware, database instances, ASM instances, the uptime of all hosts, and all the logs: ASM logs, Grid logs, CRS and ocssd logs. A quick sweep is sketched below.
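A minimal health sweep using standard clusterware commands (these verbs exist in 11.2+; run from any node as the grid owner or root):

    crsctl check cluster -all    # CSS/CRS/EVM daemon status on every node
    crsctl stat res -t           # state of all clusterware-managed resources
    olsnodes -n -s               # node names, node numbers, Active/Inactive status
    uptime                       # has this host recently rebooted?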

Check the health of the interconnect if the error logs point you in that direction; likewise check the storage error logs. One real case: node eviction because iptables had been enabled. After iptables was turned off, everything went back to normal. Oracle advises avoiding firewalls between the nodes, and that appears to be sound advice: an ACL can open the ports on the interconnect, as we did, but we still experienced all kinds of issues. A quick firewall check is sketched below.
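A sketch for checking and disabling the firewall on cluster nodes (RHEL/OL 5-6 style service commands; systems running firewalld use systemctl instead):

    service iptables status      # is a firewall active on this node?
    service iptables stop        # stop it now
    chkconfig iptables off       # keep it off across reboots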

Verify user equivalence between cluster nodes, and verify the switch is used only for the interconnect. Verify all nodes have 100 percent the same configuration; sometimes there are network or config diffs that are not obvious. A major reason for node evictions in our cluster was the patch levels not being equal across the two nodes: nodes sometimes completely died, without any error whatsoever.
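Comparing patch levels across nodes is easy to script; a sketch, where the Grid home path and output file names are assumptions:

    GRID_HOME=/u01/app/11.2.0/grid   # assumption: adjust to your install
    # run on each node:
    $GRID_HOME/OPatch/opatch lsinventory -oh $GRID_HOME > /tmp/patches_$(hostname -s).txt
    # copy the files to one node and compare:
    diff /tmp/patches_node1.txt /tmp/patches_node2.txt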

It turned out to be a bug in the installer; after we patched to a later release, the problem did not recur. High CPU consumption is another trigger worth ruling out.

This appendix introduces monitoring the Oracle Clusterware environment and explains how you can enable dynamic debugging to troubleshoot Oracle Clusterware processing, and enable debugging and tracing for specific components and specific Oracle Clusterware resources to focus your troubleshooting efforts. When you log in to Oracle Enterprise Manager using a client browser, the Cluster Database Home page appears, where you can monitor the status of the Oracle Clusterware environment.

Monitoring can include such things as: the status of the Oracle Clusterware on each node of the cluster, using information obtained through the Cluster Verification Utility (cluvfy), and notification of issues in the Oracle Clusterware alert log for the Oracle Cluster Registry, voting disk issues (if any), and node evictions. This includes a summary of alert messages and job activity, and links to all the database and Automatic Storage Management (Oracle ASM) instances. Some cluvfy checks relevant to evictions are sketched below.
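A few cluvfy component checks that map onto the eviction causes discussed in this post (real cluvfy components; run as the grid owner):

    cluvfy comp crs -n all -verbose   # clusterware integrity on all nodes
    cluvfy comp nodecon -n all        # node connectivity, including the interconnect
    cluvfy comp clocksync -n all      # clock sync; an NTP jump can trigger reboots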

For example, you can track problems with services on the cluster, including when a service is not running on all of the preferred instances or when a service response time threshold is not being met. The Interconnects page shows the public and private interfaces on the cluster, the overall throughput on the private interconnect, individual throughput on each of the network interfaces, error rates (if any), and the load contributed by database instances on the interconnect.

All of this information also is available as collections that have a historic view. This is useful with cluster cache coherency, such as when diagnosing problems related to cluster wait events. You can access the Interconnects page by clicking the Interconnect tab on the Cluster Database home page. Also, the Oracle Enterprise Manager Cluster Database Performance page provides a quick glimpse of the performance statistics for a database.

Statistics are rolled up across all the instances in the cluster database in charts. Using the links next to the charts, you can get more specific information and perform related tasks.


The chart shows maximum, average, and minimum load values for available nodes in the cluster for the previous hour. Using Cache Fusion, Oracle RAC environments logically combine each instance's buffer cache to enable the database instances to process data as if the data resided on a logically combined, single cache.

Comparing CPU time to wait time helps to determine how much of the response time is consumed with useful work rather than waiting for resources that are potentially held by other processes. Chart for Database Throughput: The Database Throughput charts summarize any resource contention that appears in the Average Active Sessions chart, and also show how much work the database is performing on behalf of the users or applications.

The Per Second view shows the number of transactions compared to the number of logons, and the amount of physical reads compared to the redo size for each second. The Per Transaction view shows the amount of physical reads compared to the redo size for each transaction.

Logons is the number of users that are logged on to the database. In addition, the Top Activity drilldown menu on the Cluster Database Performance page enables you to see the activity by wait events, services, and instances. Oracle Database uses a unified log directory structure to consolidate the Oracle Clusterware component log files.


This consolidated structure simplifies diagnostic information collection and assists during data retrieval and problem analysis. The consolidated directories include logs for components such as the Oracle Cluster Registry.

They also include Event Manager (EVM) information generated by evmd. Core files are in subdirectories of the log directory. Each RACG executable has a subdirectory assigned exclusively for that executable; the name of the RACG executable subdirectory is the same as the name of the executable. Additionally, you can find logging information for the VIP and database in their respective locations. The typical layout is sketched below.
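A sketch of what the unified log directory looks like on an 11.2 system (the path is an assumption; from 12.2 onward most of this moved into the ADR under $ORACLE_BASE/diag/crs):

    ls /u01/app/11.2.0/grid/log/$(hostname -s)
    # typical entries: alert<host>.log  crsd/  cssd/  evmd/  agent/  racg/ ...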

Every time an Oracle Clusterware error occurs, run the diagcollection.pl script to gather diagnostic information; the diagnostics provide additional information so Oracle Support can resolve problems. A sketch of invoking it follows.
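A minimal invocation, assuming an 11.2-style Grid home (run as root; the --collect flag is the usual one for this script, but verify against your version's --help):

    GRID_HOME=/u01/app/11.2.0/grid      # assumption: adjust to your install
    $GRID_HOME/bin/diagcollection.pl --collect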

Oracle Clusterware posts alert messages when important events occur. A reader adds: in one project where I had no access to the servers, I learned that node reboots happened because of an NTP server reboot, and in another case because of a firmware upgrade.

Node reboot is performed by CRS to maintain consistency in the cluster environment by removing a node that is facing some critical issue. A critical problem could be a node not responding via a network heartbeat, a node not responding via a disk heartbeat, a hung or severely degraded machine, or a hung ocssd. There could be many more reasons for a node eviction, but some of them are common and repetitive.

Here, I am listing the common ones. High load and swap exhaustion: from a message seen on one system, there was only 4 kB of free swap out of 24 GB of swap space, and the same picture was clear from the system's OS Watcher data. Database Resource Manager (DBRM) helps resolve this by giving the database more control over hardware resources and their allocation; the DBA should set up resource consumer groups and a resource plan and use them as requirements dictate. A quick memory check is sketched below.
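A minimal sketch of the OS-level checks for this condition:

    free -m                                     # near-zero free swap is a red flag
    vmstat 5 3                                  # si/so columns show active swapping
    grep -i 'out of memory' /var/log/messages   # OOM-killer activity, if any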

Voting disk not reachable: another reason for node reboot is the clusterware not being able to access a minimum number of the voting files. When the node aborts for this reason, the node alert log will show a CRS error.


Use the command "crsctl query css votedisk" on a node where clusterware is up to get a list of all the voting files, then check that each node can access the devices underlying each voting file, as sketched below.
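A hedged sketch of the access check; the device path is a placeholder, and the test must be read-only, never write to a voting disk:

    crsctl query css votedisk                                     # list voting files and states
    dd if=/dev/mapper/voting_disk1 of=/dev/null bs=1M count=10    # read test only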

Apply the relevant fix if only one voting disk is in use; the underlying bug is fixed in later releases. Missed network heartbeat: whenever there is a communication gap, or no communication at all, between nodes on the private interconnect due to a network outage or some other reason, a node aborts itself to avoid a "split brain" situation. The most common, but not exclusive, cause of a missed NHB is network problems communicating over the private interconnect.

Check OS statistics from the evicted node from the time of the eviction, and validate the interconnect network setup with the help of the network administrator; a quick sketch follows.
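A minimal interconnect sanity check (the private hostname and NIC name below are placeholders):

    oifcfg getif                          # which interface is the cluster_interconnect
    ping -c 5 -s 1500 node2-priv          # placeholder private hostname; tests large frames
    ethtool eth1 | grep -Ei 'speed|link'  # placeholder NIC name; link speed and state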

Instance hang: in these cases the database instance hangs and is terminated afterwards, which causes either a cluster reboot or a node eviction; the alert log typically shows messages such as "Initiating system state dump." See the related MOS note on Real Application Cluster log files. In a few cases, bugs could be the reason for a node reboot; the bug may be at the database level, the ASM level, or the Real Application Cluster level.


Please share in the comments if you know any other common reasons for node eviction in a RAC environment.


