

Failover Plan Considerations, and Distinguishing Data Loss from Downtime
By Michael Mullin, President
Integrated Business Systems
Totowa, N.J.

In today’s technology-dependent business world, when access to data and software is interrupted – whether by a temporary power outage or a full-scale disaster – the ability to conduct business is interrupted, too. In an age where data is king, the idea that it can be lost so easily should be enough to encourage businesses to take steps to protect it.

The good news? Advancements like cloud computing are changing the way mission-critical information is stored and accessed, providing a real “win” for business continuity in terms of both efficiency and cost. Now companies of all sizes can enjoy uninterrupted (or nearly uninterrupted) technology-reliant functions – provided they have a well-crafted business continuity plan in place.

Designing a failover plan
Business continuity solutions are not one-size-fits-all. At the outset, it is important to look at two key metrics. The first is the recovery point objective (RPO), which determines how frequently backups should be taken based on how much data a company can afford to lose. What would happen if a day’s email correspondence were lost? What systems, software applications, key documents and user clearances must be kept absolutely current in order to run the business?

The second is the recovery time objective (RTO), which determines how long a company can afford to be offline during and after a disaster. Totally recreating a company’s IT environment is not as simple as buying a new server and feeding in information from a backup source. Complex configurations, hardware availability and other business issues can make the process a slow one.
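The interplay between these two metrics can be sketched in a few lines of code. This is purely an illustration – the function names, intervals and thresholds below are hypothetical, not part of any vendor’s tooling:

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    """If a failure strikes just before the next backup runs, everything
    written since the last backup is lost -- up to one full interval."""
    return backup_interval

def plan_meets_objectives(backup_interval: timedelta, rpo: timedelta,
                          estimated_restore: timedelta, rto: timedelta) -> bool:
    """A plan is viable only if the worst-case data loss fits within the
    RPO *and* the estimated restore time fits within the RTO."""
    return worst_case_data_loss(backup_interval) <= rpo and estimated_restore <= rto

# Hourly backups against a 2-hour RPO, with a 4-hour restore vs. an 8-hour RTO:
print(plan_meets_objectives(timedelta(hours=1), timedelta(hours=2),
                            timedelta(hours=4), timedelta(hours=8)))  # True
```

Note that the two checks are independent: daily backups might satisfy a generous RPO while a slow, manual rebuild still blows the RTO – which is exactly why both metrics must be set before designing the system.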

These metrics vary from organization to organization, yet in every case five basic building blocks are necessary for successful business continuity.

  • Effective hardware engineering. Properly designing a failover system ties directly to RPO and RTO. Hardware must be engineered to deliver a fast (enough) return to operations, and it should be sized to the individual company’s situation. An organization with five production servers does not necessarily require five replicated servers, but it needs enough capacity to run the business. Additionally, any offsite backup storage and servers must truly be remote – another location within the same building is not adequate.
  • Functioning and up-to-date software. This one may seem obvious, but it is important. If data backup and storage software applications are not properly installed and maintained, even the best failover plan will fail.
  • Proper planning and communication. Failover systems only work if people know when and how to access them. Who will authorize the switch to the replicated servers? Do employees know how to log into the system? From the outset, companies need to put a clear and well-communicated plan in place. It is imperative that all failover procedures are well documented and accessible.
  • Regular process review and testing. No organization wants to be faced with testing their recovery capabilities for the first time when disaster strikes. Business continuity plans should be put through their paces regularly with both planned and unplanned drills. Additionally, a designated team should review the plan regularly to be sure it accommodates any operational and staffing changes.
  • Ongoing monitoring. Are backups completing successfully? Are failures being caught and corrected promptly? Hardware and software should be checked frequently, on a set schedule.
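The monitoring point above reduces to a simple scheduled check: is the most recent successful backup still young enough to satisfy the RPO? A minimal sketch, assuming a hypothetical `last_successful_backup` timestamp rather than any specific backup product’s API:

```python
from datetime import datetime, timedelta

def backup_is_healthy(last_successful_backup: datetime,
                      now: datetime,
                      rpo: timedelta) -> bool:
    """Flag the backup chain as unhealthy when the most recent successful
    run is older than the recovery point objective allows."""
    return (now - last_successful_backup) <= rpo

now = datetime(2024, 1, 1, 12, 0)
# A backup from 3 hours ago satisfies a 4-hour RPO:
print(backup_is_healthy(datetime(2024, 1, 1, 9, 0), now, timedelta(hours=4)))   # True
# A backup from yesterday does not:
print(backup_is_healthy(datetime(2023, 12, 31, 12, 0), now, timedelta(hours=4)))  # False
```

In practice a check like this would run on a schedule and page someone on a `False` result, rather than waiting for a disaster to reveal that backups quietly stopped completing months earlier.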

A note on data loss, downtime and the cloud
Every company understands the importance of backing up its data. The most cautious organizations traditionally run two sets of in-house servers – one for production and one for replication. They also conduct full backups to tapes at regular intervals, and store those tapes off-site as an important last resort for data recovery in the event that a fire, flood or other disaster destroys the in-house equipment.
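Off-site tape rotation of the kind described above is commonly organized as a grandfather-father-son scheme. The article does not prescribe a particular rotation, so the tier names and 28-day cycle below are assumptions used only to show the idea:

```python
def backup_tier(day: int) -> str:
    """Toy grandfather-father-son rotation, indexed by day number:
    a monthly full every 28 days, a weekly full every 7, dailies otherwise."""
    if day % 28 == 0:
        return "monthly (grandfather) - rotated off-site, retained long term"
    if day % 7 == 0:
        return "weekly (father) - rotated off-site"
    return "daily (son) - kept on-site, reused next cycle"

for day in (3, 7, 28):
    print(day, backup_tier(day))
```

The point of the tiering is the same one the tapes themselves serve: the oldest, least frequently cycled copies live farthest from the production site, as the last resort when everything closer has been destroyed.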

While this process protects against data loss, having a current copy of a company’s information does nothing by itself to prevent downtime. This is the key distinction of business continuity, which ensures that data is not only available but instantly recoverable – enabling an organization to keep functioning during a disaster vs. simply recovering from it after the fact. Until now, only companies with big budgets could afford to maintain the remote backup servers and workstations needed to ensure truly seamless operation.

Now, replicated cloud servers provide a comprehensive platform for data storage as well as access. In the event of an interruption, the cloud server and everything on it are immediately available. Cloud solutions also provide built-in cost savings: they are relatively inexpensive, and users can cut expenses significantly by eliminating redundant in-house and external servers, as well as the pricey “jukebox” devices designed to store and manage backup tapes. And in terms of access, employees can work from anywhere they have an Internet connection.

Moving the plan forward
Orchestrating an effective business continuity solution requires understanding an organization’s distinctive needs and designing a system to accommodate them. With so many moving parts, this process can be challenging. However, it is well worth the effort.

Many companies choose to work with technology partners who can help them map out, implement and monitor their failover plan. A partner eases the decision-making burdens of choosing the right backup model, identifying critical data and functionality, and ensuring successful backups.

The bottom line is that proper planning can protect against devastating losses and provide peace of mind that operations can continue uninterrupted in nearly any situation.