Cloud Lessons Learned from the OVH Datacenter Fire

March 24, 2021

Recently the world was shaken by the news of a huge fire at one of OVH's datacenters in Strasbourg. This cloud computing company is considered the largest hosting provider in Europe and the third-largest in the world. Yet all that experience did not save it from losing one datacenter completely, having another damaged, and shutting down two more as a precautionary measure.

A fire in a building of this class looks daunting. Nothing like this had happened anywhere for many years, and the story looks almost apocalyptic. The Strasbourg accident clearly shows that anything can catch fire - no matter how reliable the provider, where the data center is located, what measures are taken, or who owns the infrastructure.

Customers were immediately advised to activate their disaster recovery plans, but practice showed that many companies and organizations simply did not have one in place. As a result, even some government bodies and banks were offline for hours. And although a disaster recovery plan may exist on paper, most companies never test its effectiveness in practice.

The reality is that no service provider in the world guarantees 100% protection and recovery of lost customer data, so "don't put all your eggs in one basket." The OVH datacenter disaster shows why multi-region or multi-cloud distribution of business-critical workloads is essential for any serious company or organization.

“94% of companies suffering from a catastrophic data loss do not survive – 43% never reopen and 51% close within two years.” – University of Texas

Even if the vendor is proven, no service is immune to common technical failures or to natural disasters. One of the most appropriate solutions is to keep at least two active, synchronized replicas of the application in different data centers. Distributing services across multiple vendors is a classic approach to mitigating the risk of application downtime, outages, and data loss, because the application no longer has a single point of failure.
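
As a rough illustration of what "active-active" means in practice, here is a minimal Python sketch that polls two independent replicas and routes traffic to whichever one is healthy. The health-check URLs are hypothetical, and in production this role is usually played by DNS failover or a global load balancer rather than an ad-hoc script.

```python
import urllib.request

# Hypothetical health endpoints of two synchronized replicas
# hosted in independent data centers / providers.
REPLICAS = {
    "eu-west (provider A)": "https://app.eu-west.example.com/health",
    "eu-central (provider B)": "https://app.eu-central.example.com/health",
}

def is_healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the replica answers its health check in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def pick_active_replica() -> str:
    """Prefer the first healthy replica; fail over to the next one."""
    for name, url in REPLICAS.items():
        if is_healthy(url):
            return name
    raise RuntimeError("No healthy replica available - trigger disaster recovery")

if __name__ == "__main__":
    print("Routing traffic to:", pick_active_replica())
```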

Considering the promise of data availability, a multi-cloud strategy looks very attractive, but the integration can be extremely complex and involve tasks that are difficult even for an experienced operations team. Running projects across several clouds requires skills, human effort, and time. Even seemingly simple tasks like resource provisioning can be confusing when vendors offer different methods and measures. Every provider has its own portals, APIs, and processes that need to be integrated and managed, as illustrated below.
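
For example, even describing the same virtual machine differs per vendor: one cloud sizes machines by named instance types, another by raw CPU and memory values. The snippet below is purely hypothetical (the field names are invented, not real SDK calls) and only shows how one logical request has to be translated twice.

```python
# One logical request: a VM with 4 vCPUs and 8 GB of RAM.
workload = {"vcpu": 4, "ram_gb": 8}

# The same request, expressed in each vendor's own (hypothetical) vocabulary.
provider_a_request = {
    "instance_type": "a1.large",        # vendor A: predefined size names
    "region": "eu-west-3",
}

provider_b_request = {
    "cores": workload["vcpu"],          # vendor B: explicit core count
    "memory_mb": workload["ram_gb"] * 1024,
    "zone": "europe-central-b",
}

# Without an abstraction layer, the operations team has to maintain both
# request formats, both APIs, and both billing/measurement models.
```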

That is why it is vital to build in-house expertise or engage third-party professional services when initiating a multi-cloud integration. As an alternative, the solution can be choosing the right cloud computing software that provides a level of automation, with unified management of different clouds within a single panel. This can simplify application deployment and lifecycle management across vendors, as well as ease the migration itself.

Companies need a governance layer that provides complete abstraction from the proprietary functionality of different cloud vendors and enables a cloud-agnostic implementation without extra complexity. This mediator between the company and the cloud infrastructure should take every cloud's specifics into consideration, combine standardized services from the requested vendors, and provide missing functionality on top, based on the company's individual needs.
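
A minimal sketch of such a governance layer is shown below, assuming hypothetical vendor adapters (no real cloud SDKs are called): a single cloud-agnostic contract, with each adapter translating it into its vendor's proprietary API, and a mediator that fans one deployment request out to every configured cloud.

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Cloud-agnostic contract that every vendor adapter must fulfill."""

    @abstractmethod
    def deploy(self, app_image: str, replicas: int) -> str:
        """Deploy an application image and return a deployment identifier."""

class VendorAAdapter(CloudAdapter):
    def deploy(self, app_image: str, replicas: int) -> str:
        # Here the adapter would call vendor A's proprietary API (omitted).
        return f"vendor-a::{app_image}::{replicas}"

class VendorBAdapter(CloudAdapter):
    def deploy(self, app_image: str, replicas: int) -> str:
        # Here the adapter would call vendor B's proprietary API (omitted).
        return f"vendor-b::{app_image}::{replicas}"

class GovernanceLayer:
    """Mediator that fans one deployment request out to every configured cloud."""

    def __init__(self, adapters: list[CloudAdapter]):
        self.adapters = adapters

    def deploy_everywhere(self, app_image: str, replicas: int) -> list[str]:
        return [a.deploy(app_image, replicas) for a in self.adapters]

if __name__ == "__main__":
    layer = GovernanceLayer([VendorAAdapter(), VendorBAdapter()])
    print(layer.deploy_everywhere("shop-frontend:1.4.2", replicas=2))
```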

Multi-datacenter management platforms like Jelastic offer the required level of interoperability to lower the entry barrier and remove complexity throughout the project lifecycle. The solution can be used as a multi-cloud PaaS (Platform-as-a-Service) by developers from companies across different industries. It also works as an orchestration platform designed for cloud hosting providers to advance their product offering and ease multi-region infrastructure management.