
Twenty-five years ago, Larry Ellison saw an opportunity other companies missed when he came across a description of a working prototype for a relational database and discovered that no company had committed to commercializing the technology. Ellison and his co-founders, Bob Miner and Ed Oates, realized there was tremendous business potential in the relational database model, but they may not have realized that they would change the face of business computing forever.
Today Oracle (Nasdaq: ORCL) is still at the head of the pack. Oracle technology can be found in nearly every industry around the world and in the offices of 98 of the Fortune 100 companies. Oracle is the first software company to develop and deploy 100% internet-enabled enterprise software across its entire product line: database, business applications, and application development and decision support tools. Oracle is the world’s leading supplier of software for information management, and the world’s second largest independent software company.
Oracle has always been an innovative company. It was one of the first companies to make its business applications available through the internet; today, that idea is pervasive. Now Oracle is committed to making sure that all of its software is designed to work together (the suite approach), and other companies, analysts, and the press are beginning to acknowledge that Oracle is right. What's in store for tomorrow? We will continue to innovate and to lead the industry, while always making sure that we're focused on solving the problems of the customers who rely on our software (http://www.oracle.com/corporate/index.html?story.html).
We will first look at the hardware and software platforms to see how they measure up, what special requirements, if any, they impose, and what costs are associated with them. Next, we will look at deployment of the application and what it takes to roll out a production system. Once the application is up and running, performance becomes the top priority of the data center. Note first that the hardware and software requirements are similar, and that the abstraction presented to the application is the same with both solutions: each gives the illusion of a single database, and there is no need to modify the SQL code. The differences lie in ease of deployment, performance, and manageability.
Both databases have similar cluster hardware requirements. A cluster is a group of independent servers that collaborate as a single system. The primary cluster components are processor nodes, a cluster interconnect (private network), and a disk subsystem. The clusters share disk access and the resources that manage the data, but the individual hardware nodes do not share memory. Each node has its own dedicated system memory, its own operating system, and its own database instance.


Clusters can provide improved fault resilience and modular incremental system growth over single symmetric multiprocessor (SMP) systems. In the event of subsystem failures, clustering ensures high availability. Redundant hardware components, such as additional nodes, interconnects, and shared disks, provide higher availability. Such redundant hardware architectures avoid single points of failure and provide exceptional fault resilience. From a conceptual view, the cluster requirements for RAC and DB2 are similar, especially when high availability (HA) is a basic requirement. In a cluster, the CPU and memory requirements for each node are similar to those of single systems and leave no differentiation for this discussion. However, there has been a lot of debate about the other cluster component requirements, especially around performance and cost. The two relevant requirements are:
- Cluster Interconnect
- Shared disk vs. shared-nothing storage
Cluster Interconnect
Each node in a cluster needs to keep the other nodes in that cluster informed of its health and configuration. This is done periodically by broadcasting a network message, called a heartbeat, across a network. The heartbeat signal is usually sent over a private network, the cluster interconnect, which is used for internode communications. This cluster interconnect is built by installing network cards in each node, connecting them with an appropriate network cable, and configuring a software protocol to run across the wire. Depending on where you want to be on the price-performance curve, the interconnect can be a low-cost Ethernet card running TCP/IP or UDP, or a high-speed proprietary interconnect such as Compaq's Memory Channel running Reliable DataGram (RDG) or Hewlett-Packard's Hyperfabric/2 with Hyper Messaging Protocol (HMP). A low-latency, high-speed interconnect is best for RAC performance because of the cache transfer requirements in OLTP applications and the parallel query communication in DSS applications. DB2 also needs a fast interconnect for the best performance of its coordinator-to-worker process communication in OLTP and DSS applications.
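The heartbeat mechanism described above can be sketched in a few lines. This is a hypothetical illustration only: real cluster managers use proprietary wire protocols (such as RDG or HMP), not the JSON-over-UDP format shown here, and the node name and loopback addressing are assumptions made for demonstration.

```python
import json
import socket
import time

def make_heartbeat(node_id: str) -> bytes:
    # Hypothetical message format; real heartbeat payloads are
    # vendor-specific. JSON over UDP is used purely for illustration.
    return json.dumps({"node": node_id, "ts": time.time(),
                       "status": "alive"}).encode()

def send_heartbeat(sock: socket.socket, addr, node_id: str) -> None:
    # Heartbeats are datagrams: fire-and-forget, sent periodically
    # over the private cluster interconnect.
    sock.sendto(make_heartbeat(node_id), addr)

# Demo: one node sends a heartbeat to a listener on the loopback
# interface, standing in for a peer node on the interconnect.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))      # OS picks a free port
addr = listener.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_heartbeat(sender, addr, "node1")

data, _ = listener.recvfrom(4096)
msg = json.loads(data)
print(msg["node"], msg["status"])    # node1 alive

sender.close()
listener.close()
```

In a real cluster each node would run the sender on a timer and declare a peer dead after missing several consecutive heartbeats, which is why interconnect latency and reliability matter.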

Shared disk vs. shared-nothing storage
This is probably one of the biggest differences between RAC and DB2 EEE. Many papers have been written about these two database architectures, and yet there still seems to be a lot of confusion. It is important to understand both the concepts and how they have been implemented, because this will strongly influence the Total Cost of Ownership (TCO) when you deploy your application. RAC and DB2 EEE rely on a cluster manager to provide cluster membership services. On some platforms (Windows and Linux), Oracle provides the cluster manager software, so it is not tied to any limitations in the vendor's cluster software on these platforms. For example, on Windows 2000, DB2 EEE is limited to four or eight nodes (depending on the Windows 2000 edition) in a single cluster and cannot use raw devices with Microsoft Cluster Service (MSCS) for database files. It is possible, however, for a single DB2 EEE database to run across more than eight machines even on Windows 2000; the limitation is that a partition cannot be failed over across machines in different groups. RAC, on the other hand, can run on more than 64 nodes today. This allows you to use more of the smaller, less expensive nodes to do the same work as fewer but larger, more expensive nodes, and it raises the ceiling for scaling out to larger, more powerful clusters.
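To see why shared-nothing deployment shifts work onto the user, consider how such a system must route each row to exactly one node using a partitioning hash, and what happens to that mapping when the cluster grows. The sketch below is a hypothetical illustration, not DB2 EEE's actual hashing scheme; the node names, key format, and MD5-based hash are all assumptions made for demonstration.

```python
import hashlib

NODES = ["node0", "node1", "node2", "node3"]  # hypothetical 4-node cluster

def partition_for(key: str, nodes) -> str:
    # Shared-nothing: each row lives on exactly one node, chosen by
    # hashing its partitioning key; queries must be routed accordingly.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

# With naive modulo hashing, adding a node changes the home partition
# of most keys, so shared-nothing growth requires redistributing data.
moved = sum(
    partition_for(k, NODES) != partition_for(k, NODES + ["node4"])
    for k in (f"cust{i}" for i in range(1000))
)
print(f"{moved} of 1000 keys relocate when a fifth node is added")
```

A shared-disk architecture avoids this mapping entirely, because every node can reach every block; that is the repartitioning burden the surrounding text attributes to the shared-nothing approach.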


This document has shown that there are numerous drawbacks in deployment and performance when using DB2 EEE as compared to RAC. It is clear that RAC is usable in a broader range of applications and scenarios. Although the hardware and system software requirements are similar, DB2 EEE shifts much of the burden onto its users through its shared-nothing implementation. RAC's shared-cache architecture provides many key benefits to enterprise e-business application deployments:
- Flexible and effortless scalability for all types of applications; application users can log onto a single virtual high-performance cluster server. Adding nodes to the database is easy, and manual intervention is not required to partition data when processor nodes are added or when business requirements change. Cluster scalability applies to all applications out of the box, without modification.

- A higher-availability solution than traditional cluster database architectures; this architecture provides customers near-continuous access to data, with minimal interruption from hardware and software component failures. The system is resilient to multiple node failures, and component failures are masked from end users.

- A single management entity; a single system image is preserved across the cluster for all management operations. DBAs perform installation, configuration, backup, upgrade, and monitoring functions once. Oracle then automatically distributes the management functions to the appropriate nodes. This means the DBA manages one virtual server.

- A lower Total Cost of Ownership; unlike its shared-nothing competitor, which requires a great deal of expensive manual intervention and creates significant difficulties in capacity planning, performance tuning, and availability, RAC's shared-cache architecture enables smooth, virtually unlimited growth with near-linear scalability to handle the most demanding real-world workloads.