How much is your data worth? A quick look at Disaster Recovery as a Service
An organization’s data becomes more critical to day-to-day operations with each passing day. With so many companies storing critical information (financial, medical, personal, etc.), the ability to rapidly recover that data after a disaster matters more than ever. None of us would be happy to learn that a company had lost our personal data. The company’s stakeholders would not be happy either, and CxOs could end up on the front page. As you can imagine, data that is not protected against disaster can cost enormous amounts of money and time, severely damage the company’s reputation, and send customer loyalty plummeting, none of which any company that I know would want.
We have had the ability to provide disaster recovery for many years, whether through periodic onsite backups to physical tapes that could be restored when needed, replication of data across multiple company-owned data centers, or, now more commonly, through cloud providers such as Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure, or a solution such as VMware Cloud on AWS. Options typically range from low-cost “cold” storage with manual restoration to “hot standby” solutions that use automated, rapid failover so applications continue to work seamlessly in the event of an onsite disaster.
Deciding which option is best for your company requires asking, and answering, some questions, as there is no “silver bullet” that covers every case. The questions below should be answered before you can make the most informed decision about protection and business continuity for your company. Some will be common sense to most, but I think most can agree that some (often critical) questions are frequently overlooked.
Below are only a few of many to consider.
- How critical are the company’s business applications and data, and what is the impact if the company loses access to them?
- How long could the company go without access to that data: a week, a day, an hour, a minute, a second? What level of access does the company require to remain at an acceptable operational state and stay competitive in the market? These questions map to two key metrics: the RTO, or “recovery time objective”, and the RPO, or “recovery point objective”. The RTO is the maximum amount of time that an application or its data can be offline, and the RPO is the maximum acceptable window of data loss, measured as time, in a disaster. Both are extremely important metrics to consider and to obtain answers for.
- Are there any compliance requirements that must be taken into consideration? This could be something as simple as how long data/records must be retained before being discarded, or regulations that dictate how the data must be secured at rest and/or in transit.
- Is the solution elastic? Cloud providers can take in as much data as your company is willing to pay for, and do it extremely quickly.
- How much is the company’s data worth, and what risks is the company willing to take?
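To make the RTO/RPO questions above concrete, here is a minimal sketch, with hypothetical numbers and plan names, of how a team might sanity-check whether a backup schedule can meet its stated objectives:

```python
from dataclasses import dataclass

@dataclass
class DrPlan:
    """Hypothetical disaster-recovery parameters, all in minutes."""
    backup_interval: float   # time between backups
    restore_time: float      # worst-case time to restore service
    rpo: float               # max acceptable data loss, expressed as time
    rto: float               # max acceptable downtime

def meets_objectives(plan: DrPlan) -> bool:
    # Worst case, disaster strikes just before the next backup runs,
    # so up to one full backup interval of data is lost.
    worst_case_data_loss = plan.backup_interval
    return worst_case_data_loss <= plan.rpo and plan.restore_time <= plan.rto

# Nightly tape backups with a 4-hour restore cannot satisfy a 1-hour RPO...
nightly = DrPlan(backup_interval=24 * 60, restore_time=4 * 60, rpo=60, rto=240)
# ...while 15-minute replication with automated failover can.
replicated = DrPlan(backup_interval=15, restore_time=10, rpo=60, rto=240)

print(meets_objectives(nightly))     # False
print(meets_objectives(replicated))  # True
```

The point of the sketch is that the backup interval bounds the RPO and the restore process bounds the RTO, which is why “cold” and “hot standby” options land at such different price points.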
Benefits of DRaaS
Let’s focus on some of the benefits that DRaaS can offer.
- It can be cost-effective, with lower CapEx than on-premises solutions when planned properly. It reduces the upfront cost typically associated with on-premises automated disaster recovery, such as building out a secondary data center, and most cloud providers offer a “pay for what you use” model.
- Data can be recovered faster, which can help avoid costly fines for missing compliance deadlines.
- It can be secure – providers are able to keep data secure in transit and at rest using industry-leading security practices.
- It can provide guaranteed backups. Many cloud providers offer SLAs (Service Level Agreements) guaranteeing that data is backed up efficiently and securely, and that access to that data will be there when it is needed most.
- It can be custom tailored to meet the needs of each business, and even get as granular as individual applications. For example, a company might replicate its most critical applications and data globally across multiple regions, or even across multiple cloud providers, with automatic detection of on-premises failure triggering seamless failover to cloud-based instances. That design provides a tremendous amount of uptime, availability, optimized performance, and resiliency through provider redundancy as well as geo-redundancy. Other data, meanwhile, can be replicated to a lower-cost storage class, such as Coldline storage in GCP, which still offers the durability needed for important but infrequently accessed data.
- It can provide a great deal of flexibility and also lead to new practices for handling data down the road.
- It gives companies the ability to leverage cloud providers’ massive, highly available, highly scalable infrastructures. For example, many cloud providers operate data centers around the world, which again lets companies choose exactly where their data will be stored – whether in one zone, in one region, or distributed and load-balanced globally.
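As one concrete example of the tiering idea above, GCS buckets accept a JSON lifecycle configuration that can move objects into Coldline automatically. Below is a minimal sketch that builds such a configuration in Python; the 30-day and 365-day thresholds are hypothetical examples, not recommendations:

```python
import json

# Hypothetical policy: after 30 days, move objects to the lower-cost
# Coldline class; after 365 days, delete them (e.g., a retention limit).
lifecycle_config = {
    "rule": [
        {
            "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
            "condition": {"age": 30},
        },
        {
            "action": {"type": "Delete"},
            "condition": {"age": 365},
        },
    ]
}

# Written to a file, this can be applied to a bucket with:
#   gsutil lifecycle set lifecycle.json gs://your-backup-bucket
with open("lifecycle.json", "w") as f:
    json.dump(lifecycle_config, f, indent=2)
```

A policy like this lets older backups slide into cheaper storage without anyone touching them, which is much of the “pay for what you use” appeal.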
Bottom line: companies must be able to back up and quickly recover their data in case of a disaster. With so many options, which solution will your organization choose?