5 Best Practices in Automated Disaster Recovery
The banking industry is as attuned to the threat of disaster as any other market -- perhaps more so. With an acute need for high availability and business continuity, financial institutions cannot afford to fail in their response to disasters large or small. Whether data centers are affected by technical issues, weather-related incidents or human error, these organizations need comprehensive disaster recovery (DR) plans that cover the full range of resources and logistics. In terms of IT operations, banks and financial firms are turning to automated solutions to minimize their downtime and speed recovery. The following are several best practices being embraced by the industry:
1. Draw on the benefits of virtualization to simplify deployments. Business continuity is the ability to failover operations to alternate or standby systems in case of primary systems failure, and disaster recovery is the full restoration of operational primary systems and infrastructure. In the past, deploying capabilities for both actions has been a complicated undertaking. However, today’s technologies are simplifying these rollouts.
Server, desktop and storage virtualization impact more than just IT infrastructure; these technologies enable the full mobility of systems and data across distance, creating more agile businesses. Workforce mobility can be easily addressed with desktop virtualization, allowing some organizations to have productive employees working from the comfort of their homes in the event that access to company facilities is compromised. Virtualization certainly simplifies the challenges of business continuity and disaster recovery (BCDR) planning.
2. Select the right tool for the whole job. Replicating application data to a remote site is merely one component of DR. A comprehensive DR solution will duplicate entire IT services at the DR site and keep them up to date via frequent data replication; in addition, it will automate failover and failback of entire business services, such as email, as a single unit of protection and recovery.
Look for BCDR options that map to the way IT operations model business applications and that allow you to define the interdependencies between the different systems and components of any given service. This is critical to reducing recovery time and ensuring a successful failover or failback operation. Such technology should also allow DR capabilities to be tested without disrupting business operations.
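To make the dependency idea concrete, here is a minimal sketch in Python (component names and orchestration are hypothetical placeholders, not any particular vendor's API) that models an email service as a single unit of protection and recovers its components in dependency order:

```python
# Minimal sketch: model a business service (email) as one unit of protection.
# Component names and the "orchestration" are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    depends_on: list = field(default_factory=list)  # prerequisite component names

@dataclass
class BusinessService:
    name: str
    components: list

    def recovery_order(self):
        """Order components so that prerequisites are recovered first."""
        ordered, seen = [], set()
        def visit(comp):
            if comp.name in seen:
                return
            seen.add(comp.name)
            for dep in comp.depends_on:
                visit(next(c for c in self.components if c.name == dep))
            ordered.append(comp)
        for comp in self.components:
            visit(comp)
        return ordered

    def failover(self):
        """Bring the entire service up at the DR site as a single unit."""
        for comp in self.recovery_order():
            print(f"Starting {comp.name} at DR site")  # stand-in for real orchestration

email = BusinessService("email", [
    Component("mail-db"),
    Component("mail-server", depends_on=["mail-db"]),
    Component("webmail-frontend", depends_on=["mail-server"]),
])
email.failover()  # starts mail-db, then mail-server, then webmail-frontend
```

A real BCDR tool would replace the print statements with storage promotion, server boot and network reconfiguration, but the ordering logic is what keeps a failover from stalling on a missing dependency.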
3. Recognize your true tolerance for downtime. The Enterprise Strategy Group (ESG) recently conducted a survey on downtime tolerance and found that 74 percent of respondents could tolerate no more than three hours of downtime before suffering revenue loss; 53 percent of those surveyed said that even one hour of downtime was unacceptable. Given such stringent expectations for BCDR, organizations must take a critical look at their legacy systems.
For more than 20 years, tape backup was the backbone of DR, but tape recovery can take hours or even days. Virtual tape libraries (VTLs) and other disk-based deduplication solutions have accelerated the backup and recovery process, but data restoration with traditional backup software remains a complex and lengthy process. For those with high expectations for uptime, newer technologies are required.
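As a rough illustration of what that expectation means in dollars, the following back-of-the-envelope calculation (every figure is hypothetical) compares revenue at risk for several recovery approaches against a three-hour downtime tolerance:

```python
# Illustrative only: every figure below is an assumption, not survey data.
HOURLY_REVENUE = 250_000   # assumed revenue per hour for one business service
TOLERANCE_HOURS = 3        # point at which losses begin, per the 74 percent figure

recovery_times_h = {       # assumed recovery time objectives, in hours
    "tape restore": 18,
    "VTL / disk-based backup": 6,
    "snapshot and replication failover": 0.5,
}

for approach, rto in recovery_times_h.items():
    hours_over = max(0, rto - TOLERANCE_HOURS)
    print(f"{approach}: ~${hours_over * HOURLY_REVENUE:,.0f} of revenue at risk")
```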
4. Reduce the tape infrastructure. Start with the local recovery infrastructure and reduce dependency on tape by leveraging newer technologies such as disk-based snapshots and continuous data protection. This improves recovery time, eliminates the backup window, and reduces tape production from daily to monthly backups. The result is a dramatically smaller tape infrastructure and much lower operating costs for both backup and recovery operations.
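As one possible implementation, the sketch below assumes a ZFS filesystem with a hypothetical dataset name and rotates frequent local snapshots so that tape can fall back to a monthly archival role:

```python
# Sketch of an hourly snapshot rotation; the dataset name is hypothetical and
# the approach assumes ZFS, though any disk-based snapshot mechanism would do.
import subprocess
from datetime import datetime, timedelta

DATASET = "bankdata/core"      # hypothetical ZFS dataset
KEEP = timedelta(days=30)      # retain a month of local recovery points

def take_snapshot():
    stamp = datetime.utcnow().strftime("%Y%m%d-%H%M")
    subprocess.run(["zfs", "snapshot", f"{DATASET}@auto-{stamp}"], check=True)

def prune_snapshots():
    names = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    cutoff = datetime.utcnow() - KEEP
    for name in names:
        if not name.startswith(f"{DATASET}@auto-"):
            continue
        stamp = name.split("@auto-", 1)[1]
        if datetime.strptime(stamp, "%Y%m%d-%H%M") < cutoff:
            subprocess.run(["zfs", "destroy", name], check=True)

if __name__ == "__main__":
    take_snapshot()     # run hourly from cron; there is no nightly backup window
    prune_snapshots()
```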
5. Lower costs by evaluating remote copy practices. Evaluate the actual requirements for the remote infrastructure. Often, an infrastructure that ensures basic essential services is sufficient for DR, and performance-level requirements may be relaxed in a disaster scenario, allowing for a lower-cost infrastructure at the remote site. A WAN-optimized replication solution can also push DR costs down, since a significant part of the price of data replication is the monthly cost of bandwidth.
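To see where those bandwidth savings come from, here is an illustrative estimate (all figures and ratios are hypothetical) of the WAN capacity, and therefore the monthly cost, needed to replicate a day's worth of changed data with and without WAN optimization:

```python
# Illustrative only: change rate, replication window, price and ratio are assumptions.
DAILY_CHANGED_GB = 500         # assumed data changed per day at the primary site
REPLICATION_WINDOW_H = 8       # hours available to ship those changes offsite
COST_PER_MBPS_MONTH = 30.0     # assumed monthly price per Mbps of WAN capacity
OPTIMIZATION_RATIO = 5         # assumed reduction from deduplication/compression

def required_mbps(gb_per_day, window_hours):
    bits = gb_per_day * 8 * 1000**3              # decimal GB to bits
    return bits / (window_hours * 3600) / 1e6    # to megabits per second

raw = required_mbps(DAILY_CHANGED_GB, REPLICATION_WINDOW_H)
optimized = raw / OPTIMIZATION_RATIO
print(f"Unoptimized: {raw:,.0f} Mbps, about ${raw * COST_PER_MBPS_MONTH:,.0f}/month")
print(f"WAN-optimized: {optimized:,.0f} Mbps, about ${optimized * COST_PER_MBPS_MONTH:,.0f}/month")
```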
The key to BCDR planning is automation. As financial institutions strive for less downtime and faster recoveries, the biggest challenges they face are cost and complexity. In terms of complexity, the number of IT infrastructure components involved in the BCDR process can make it almost impossible to manage manually. On the cost front, quicker recovery times mean revenue preservation, since downtime cuts directly into revenue. Automation addresses both of these concerns and provides the financial industry with BCDR that lives up to its high expectations.
Bobby Crouch is a product marketing manager at FalconStor Software. He is a 20-year technology industry veteran, with roles ranging from development engineering to sales and marketing. Bobby’s expertise covers microprocessor architecture, servers, networking, enterprise software and storage.