
Creating a Comprehensive Data Recovery Policy with OpenShift

March 29, 2021

Featured article by Edward Roesch, Independent Technology Author


Increasingly, enterprises are turning to Red Hat’s OpenShift to manage their application containers. With this shift, increasingly complex applications are moving to the platform, which is built on Kubernetes 1.20. The move underlines the need for a comprehensive backup and data recovery policy, since the imperative to prevent widespread outages in the event of a disaster has not lessened. For enterprises that run mission-critical applications on OpenShift, the ability to recover from any disaster should be a priority.

OpenShift Requires a Non-Traditional System

OpenShift’s popularity is being compared to that of Red Hat’s earlier groundbreaking software, OpenStack, and it promises to be just as revolutionary. To meet its data security demands, traditional data recovery applications and methods need to evolve. This is due in part to OpenShift blurring the lines between virtual machines and containers: when an application runs on a single virtual machine (VM), the two can be backed up as a single synchronous entity. This is not the case with OpenShift.

OpenShift’s containerization uses an architecture that spans a cluster of servers. With a traditional backup system, an attempt to back up data from one specific application will either sweep in data from unrelated applications or miss some of the targeted application’s data. This does not mean that data within the OpenShift architecture cannot be backed up, but an enterprise-level backup solution, the recommended approach on OpenShift, needs to meet several requirements. It must be aware of OpenShift and Kubernetes namespaces and be capable of backing up data from persistent volumes.
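As an illustration of what a namespace-aware backup looks like in practice, the open-source tool Velero (one example of such a solution; the article does not name a specific product) lets a backup be scoped to a single application's namespace with its persistent volume data included. A minimal sketch, with all resource names hypothetical:

```yaml
# Illustrative Velero Backup resource scoped to one application's namespace.
# Velero is an assumed example tool; "myapp" and the backup name are hypothetical.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: myapp-backup
  namespace: velero          # Velero's own installation namespace
spec:
  includedNamespaces:
    - myapp                  # back up only this application's namespace
  snapshotVolumes: true      # include persistent volume data via snapshots
  ttl: 720h                  # retain the backup for 30 days
```

Scoping the backup to a namespace is what avoids the traditional-tool dilemma described above: only the targeted application's objects and volumes are captured.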

Data Resilience

A key facet of any data recovery policy is data resilience. Alongside support for external OpenShift backup tools, OpenShift has several built-in data resilience features. These features enable IT teams to create point-in-time snapshots and clones of persistent data volumes, and they can be accessed via the Container Storage Interface (CSI). They are also available to users who interact with OpenShift through Red Hat’s provided API.
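The CSI snapshot capability is exposed through the standard Kubernetes `VolumeSnapshot` resource. A minimal sketch, assuming a CSI driver is installed and a snapshot class exists; the class and claim names here are hypothetical:

```yaml
# Point-in-time snapshot of a persistent volume claim via CSI.
# "csi-snapclass" and "db-data" are assumed, illustrative names.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-data-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass   # CSI snapshot class to use
  source:
    persistentVolumeClaimName: db-data     # the PVC to snapshot
```

A clone can later be created by provisioning a new PVC whose `dataSource` points at this snapshot, giving a writable copy of the volume at that point in time.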

Asynchronous or Synchronous

The next question IT teams will ask is whether an OpenShift backup solution requires an asynchronous or a synchronous data protection policy. Many solutions allow for either, and policies can even be developed that adopt a hybrid of the two approaches. It is an important question to ask, but the answer depends on the environment the IT team works in, and that environment needs to be carefully taken stock of. Doing so is a critical step in shaping a data protection policy.
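As one illustration of the asynchronous approach, point-in-time backups can be taken on a recurring schedule rather than continuously. A sketch using a Velero `Schedule` resource (Velero is an assumed example tool, not named in the article; all names are hypothetical):

```yaml
# Asynchronous protection as scheduled, recurring point-in-time backups.
# Velero is an assumed tool; "myapp" and the schedule name are hypothetical.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: myapp-every-6h
  namespace: velero
spec:
  schedule: "0 */6 * * *"    # cron syntax: run every six hours
  template:                  # same fields as a one-off Backup spec
    includedNamespaces:
      - myapp
    snapshotVolumes: true
    ttl: 168h                # keep each backup for seven days
```

The schedule interval effectively sets the recovery point objective (here, up to six hours of data loss is tolerated); a synchronous policy would instead replicate writes as they happen, at higher cost.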

Conclusion

When Kubernetes was initially released, many viewed it as something for developers to play around with. That view can no longer be supported: enterprises have adopted the platform and now require improved tools such as OpenShift to better manage the apps and data stored within a Kubernetes architecture. In turn, new and innovative features and tools have had to be developed to ensure that stored data adheres to data resiliency standards.

The need for data resiliency and data protection policies is never placed on hold when new technology comes to market. Increasingly, the focus of new technology is not just on making tools that allow companies to flourish, but equally on protecting the data so many rely on.
