Disaster Recovery: Weighing Data Replication Alternatives

Enterprises with short disaster recovery time objectives use data replication technologies. We provide a framework for understanding the myriad of available options.

Key Issue: How will technologies for business continuity and resumption evolve?
Inextricable business process dependence on IT has increased the economic vulnerabilities associated with downtime. No longer can enterprises wait the traditional two, three or more days for recovery of critical applications in the event of a disaster. In fact, the most-critical applications, such as those used in e-commerce and customer service, often require either continuous availability (no interruption in service for any reason) or recovery within minutes to a few hours. In addition, many enterprises require no loss of transactions in the event of a disaster.

Achieving short recovery times requires that enterprises build data replication architectures into their disaster recovery plans, replicating from a primary processing site to an alternate site that can be used if the primary site becomes unavailable. Evaluating data replication technologies can be a daunting task because of the large number of products on the market with differing implementation options and features. To add to this complexity, data replication solutions are specific to a database, file system, OS or disk subsystem; thus, enterprises often must use multiple solutions to protect their critical data, as well as multiple methods to protect a single application environment. Solutions can be implemented at a secondary internal data center, at service providers such as hot-site providers Comdisco, IBM BRS and SunGard, or at Web-hosting facilities/ISPs.

Figure 1 and Figure 2 present data replication options with some differentiating features (defined in this Research Note). The most popular solutions are disk-to-disk remote copy (with EMC SRDF leading in installations). These solutions operate at the disk volume level and are significantly less complex to set up and administer than host-based replication. They also offer the benefit of capturing all application environment changes (e.g., DBMS and file system).
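As a rough illustration of why volume-level remote copy captures all application environment changes without application awareness, the following Python sketch (hypothetical classes, not any vendor's API) forwards every block write on the primary volume to a replica at the recovery site:

```python
# Minimal sketch of volume-level remote copy. Because replication sits below
# the file system and DBMS, every block write (data pages, logs, metadata)
# reaches the replica with no application changes. Illustrative names only.

class Volume:
    def __init__(self):
        self.blocks = {}          # block number -> bytes

    def write(self, block_no, data):
        self.blocks[block_no] = data

class MirroredVolume(Volume):
    def __init__(self, replica):
        super().__init__()
        self.replica = replica    # Volume at the disaster recovery site

    def write(self, block_no, data):
        super().write(block_no, data)       # local write
        self.replica.write(block_no, data)  # forwarded write (synchronous here)

dr_site = Volume()
primary = MirroredVolume(dr_site)
primary.write(7, b"journal record")         # a DBMS log write, file write, etc.
assert dr_site.blocks[7] == b"journal record"
```

Any software layered above the mirrored volume is protected identically, which is the administrative simplicity the disk-based approach trades against its lack of transaction knowledge.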
A drawback, however, is their lack of transaction knowledge and the potential for data corruption in the unlikely event of a disaster (enterprises should plan for an alternative recovery approach should corruption occur). That said, enterprises that use solutions guaranteeing the block write sequence (data consistency) between the primary and disaster recovery sites should expect their DBMSs to restart and perform normal recovery at the disaster site, including rollback to the last committed transaction. Most disk-to-disk remote-copy solutions operate in synchronous mode, which degrades the performance of production applications unless the solution can be deployed over a fiber link to the recovery site. The distance can generally range from a few kilometers up to about 60 km, depending on the solution; for some, distances can be extended by channel extenders.

Figure 1 Data Replication Options for Disaster Recovery
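The value of a guaranteed block-write sequence can be sketched in a few lines: if the secondary applies writes strictly in primary order, then at any failure point its state equals some past state of the primary (crash-consistent), and normal DBMS restart recovery applies. The class and method names below are illustrative assumptions, not any product's interface:

```python
# Hedged sketch of write-order fidelity at the recovery site. Writes may
# arrive out of order over the link, but are applied only in contiguous
# sequence-number order, so the replica never reflects a write without all
# writes that preceded it on the primary.
import heapq

class OrderedApplier:
    def __init__(self):
        self.next_seq = 0
        self.pending = []         # min-heap of (seq, block_no, data)
        self.blocks = {}

    def receive(self, seq, block_no, data):
        heapq.heappush(self.pending, (seq, block_no, data))
        # Apply only the next expected sequence numbers; never reorder writes.
        while self.pending and self.pending[0][0] == self.next_seq:
            _, block_no, data = heapq.heappop(self.pending)
            self.blocks[block_no] = data
            self.next_seq += 1

applier = OrderedApplier()
applier.receive(1, 200, b"data page")   # arrives first, held back
applier.receive(0, 100, b"log record")  # fills the gap; both now apply
assert applier.next_seq == 2
```

Without this guarantee, a data page could land on the replica before the log record describing it, which is the corruption scenario the alternative recovery approach must cover.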
Source: Gartner Research

Figure 2 Data Replication Options for Disaster Recovery (Continued)
Source: Gartner Research

Host-based disk block replication alternatives, such as IBM's Geographic Remote Mirror for AIX and Veritas Software's Volume Replicator, are fairly new to the market and have not yet made significant inroads. However, as they mature, they offer the potential for a less expensive way to replicate all enterprise data, regardless of the disk platform chosen. For Windows environments, file-based replication solutions from NSI Software and Legato Systems have garnered a significant number of installations, and Veritas has recently entered this market. Also popular, and generally less costly, are log-based database replication solutions, including those from DataMirror, Lakeview Technology, Microsoft, Oracle, Quest Software and Vision Solutions. Note that traditional trigger-based native DBMS replication is generally not appropriate for disaster recovery because of its high system and administrative overhead on the primary site.

When making replication decisions, enterprises must first evaluate recovery point objectives. If losing some number of minutes of transactions (typically between five and 60 minutes) is acceptable, an asynchronous solution will probably be more cost-effective while still offering fast recovery. If only a small number of transactions can be lost (e.g., those uncommitted at the primary and secondary systems), synchronous mirroring must be deployed. Other product evaluation criteria should include platform support, integration with complementary products such as clustering technology, cost, speed of deployment, performance impact, product completeness and manageability. Finally, enterprises must keep in mind that replication solutions are just one part of the disaster recovery plan.
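The recovery point trade-off of asynchronous replication can be illustrated with a small sketch (hypothetical names, not a product API): committed changes are buffered on the primary and shipped periodically, so anything not yet shipped when disaster strikes is lost. The RPO is therefore roughly the ship interval:

```python
# Illustrative sketch of asynchronous, log-based replication. The buffered
# (unshipped) changes represent the transactions an enterprise accepts
# losing; with a 5-60 minute ship interval, that is the effective RPO.

class AsyncLogShipper:
    def __init__(self):
        self.unshipped = []       # committed changes not yet at the DR site
        self.replica_log = []     # changes already applied at the DR site

    def commit(self, change):
        self.unshipped.append(change)

    def ship(self):               # e.g., scheduled every 5-60 minutes
        self.replica_log.extend(self.unshipped)
        self.unshipped = []

s = AsyncLogShipper()
s.commit("INSERT order 1")
s.ship()
s.commit("INSERT order 2")        # disaster strikes before the next ship...
assert s.replica_log == ["INSERT order 1"]   # ...so order 2 never arrives
```

Synchronous mirroring removes this window by acknowledging each change only after the secondary has it, at the cost of the bandwidth and latency demands discussed below.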
To ensure confidence in the recovery capability of the enterprise, frequent testing of these plans is vital (one to four times a year, depending on the amount of change to the applications and infrastructure). This includes testing of replication solutions under various failure/disaster scenarios to ensure appropriate recovery behavior of databases, file systems and applications.

Definition of Terms

Transaction-Aware Replication: Transaction-aware replication offers transaction-level replication, typically by electronically transmitting database or file changes (e.g., through logs) to the secondary site and applying them to a replica image. The primary advantage of this approach is that the replication method understands units of work (i.e., transactions) and has a greater potential for data integrity (via transaction roll-forward/rollback), although data integrity is not guaranteed.

Mirroring or Shadowing: Shadowing maintains a replica of databases and/or file systems, typically by continuously capturing changes and applying them at the recovery site. Shadowing is an asynchronous process, thus requiring less network bandwidth than synchronous mirroring. Recovery time objectives (RTOs) are significantly reduced (generally between one and eight hours, depending on the lag time for applying logs), while recovery point objectives (RPOs) are as up-to-date as the last receipt and apply of the logs. Mirroring maintains a replica of databases and/or file systems by applying changes at the secondary site in lock step with (synchronous to) changes at the primary site. As a result, RTOs can be reduced to 20 minutes to several hours, while RPOs are reduced to the loss of only uncommitted work. Because it is synchronous, mirroring requires significantly greater network bandwidth than shadowing; too little bandwidth and/or high latencies will degrade the performance of the production system.

Deployed to 100 or More Sites? The number of production deployments helps enterprises gauge the maturity of a solution. In general, the fewer the production deployments, the higher the risk associated with implementing the solution.

Use Replica for Reporting? Most data replication solutions supporting disaster recovery do not enable inquiry/reporting against the replica at the secondary site (most require a third copy for reporting). Using the replica for reporting offloads production workloads for horizontal scalability and achieves better resource utilization of the disaster recovery configuration (e.g., it reduces the total cost of ownership of the disaster recovery configuration).
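The latency sensitivity of synchronous mirroring lends itself to back-of-the-envelope arithmetic: each write must wait for a round trip to the recovery site, so link distance caps per-stream write throughput. The figures below are illustrative assumptions, not measurements:

```python
# Why distance limits synchronous mirroring: every acknowledged write pays
# the round-trip latency to the recovery site. Propagation in fiber is
# roughly 200 km per millisecond (about two-thirds of c).

LIGHT_IN_FIBER_KM_PER_MS = 200.0

def max_sync_writes_per_sec(distance_km, service_time_ms=0.1):
    """Upper bound on serialized write rate for one stream (assumed values)."""
    round_trip_ms = 2 * distance_km / LIGHT_IN_FIBER_KM_PER_MS
    return 1000.0 / (round_trip_ms + service_time_ms)

for km in (1, 60, 1000):
    print(f"{km:>5} km -> ~{max_sync_writes_per_sec(km):,.0f} writes/s per stream")
```

At a few kilometers the round trip is negligible, near the roughly 60 km fiber limit it already dominates the write time, and at continental distances synchronous operation becomes impractical, which is why longer distances push enterprises toward channel extenders or asynchronous shadowing.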
Bottom Line

Using data replication for disaster recovery has moved into the mainstream, as critical business processes are increasingly dependent on IT services. Selecting an approach, however, is not simple, because each solution is tied to specific IT infrastructure components: disk subsystems, OSs, databases or file systems. Consequently, most enterprises will employ multiple tools to protect their critical data and applications. IT management should weigh all the factors discussed in this Research Note in making these decisions.

This research is part of a broader article consisting of a number of contemporaneously produced pieces. See COM-13-6392 on www.gartner.com for an overview of the article.
Acronym Key

CPCS: Check Processing Control System
DBMS: Database management system
DG: Data General
DRM: Data Replication Manager
FSC: Fujitsu Siemens Computers
HACMP: High-Availability Clustered Multiprocessing
HAGEO: Geographic High Availability
HOARC: Hitachi Open Asynchronous Remote Copy
HP: Hewlett-Packard
HRC: Hitachi Remote Copy
HXRC: Hitachi Extended Remote Copy
IDMS: Integrated Database Management System
ISP: Internet service provider
MSCS: Microsoft Cluster Server
NSK: NonStop Kernel
OS: Operating system
PPRC: Peer-to-Peer Remote Copy
RDF: Remote Database Facility
RRDF: Remote Recovery Data Facility
SRDF: Symmetrix Remote Data Facility
VCS: Veritas Cluster Server
XRC: Extended Remote Copy
This research is part of a set of related research pieces. See AV-14-5238 for an overview.
Entire contents © 2001 Gartner, Inc. All rights reserved. Reproduction of this publication in any form without prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. Gartner shall have no liability for errors, omissions or inadequacies in the information contained herein or for interpretations thereof. The opinions expressed herein are subject to change without notice.

Resource ID: 333199