
How Kubernetes Can Help Scale Your Dynamic Environment

July 29, 2020

Featured article by David Bisson

Containers bring many benefits to organizations in their digital transformation. Kubernetes notes that containers improve the ease and efficiency with which organizations can create applications, making the software development process more agile. Containers also support more continuous deployment through quick rollbacks and reliable container image builds, and by breaking applications into smaller pieces, they make deploying code itself more dynamic.

As organizations continue to grow their digital presence, they might be inclined to scale their containers by using clusters. Red Hat defines a cluster as “a set of node machines for running containerized applications.” Consisting of a control plane and at least one compute machine (or “node”), the cluster forms the heart of Kubernetes. It enables organizations to run containers across a group of machines regardless of where they’re deployed.

But using clusters to scale your container environment isn’t as easy as it sounds. Indeed, TechBeacon noted that many developers and application architects don’t understand certain processes and procedures involving container clusters. The publication also made the point that containers and cluster managers are still maturing, as evidenced by the following observations:

– Lacking security capabilities: Many cluster managers lack basic services that organizations rely on to trust containers and to manage the sensitive data stored in them. Additionally, many cluster managers integrate with identity and access management services only through third-party tools.

– Missing plugins: Administrators can extend the capabilities of Docker Engine using third-party plugins. But despite the availability of volume plugins and network plugins for providing data volumes and connecting to container networks, there’s still a dearth of middleware plugins that could help connect to messaging systems, for example.

– Absent open standards: Docker vendors are moving in different directions. In particular, their container cluster manager offerings don’t share the same standards, which limits the portability and scalability of container cluster managers in general: organizations can’t swap one cluster manager solution for another and expect to derive the same value.

Fortunately, organizations can use Kubernetes as a container orchestrator to manage and respond to their dynamic environments by autoscaling their resources. StackRox notes that there are three primary forms of autoscaling available: the Horizontal Pod Autoscaler, the Cluster Autoscaler, and the Vertical Pod Autoscaler.

Horizontal Pod Autoscaler

Some organizations have applications whose usage varies over time. In those situations, administrators might want to add or remove pod replicas to keep pace with demand. Fortunately, they can use the Horizontal Pod Autoscaler (HPA) to scale those workloads.

HPA is well suited to scaling stateless applications: it can reduce the number of pod replicas managed by a Deployment or replication controller as demand falls and add replicas as demand grows. As noted in the Kubernetes documentation, HPA is implemented as a Kubernetes API resource and a controller, with the controller adjusting the number of replicas based on how much CPU the specified target is using. It functions as a control loop whose period is set by the controller manager; on each pass, HPA compares the target’s resource consumption against the specified metrics and implements an appropriate change based on what it observes.
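To make that control loop concrete, here is a minimal sketch (in Python, purely for illustration) of the replica calculation described in the Kubernetes HPA documentation: the controller scales the current replica count by the ratio of observed to target metric value and rounds up.

```python
import math

def desired_replicas(current_replicas: int,
                     current_cpu_utilization: float,
                     target_cpu_utilization: float) -> int:
    """Replica rule from the Kubernetes HPA documentation:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    ratio = current_cpu_utilization / target_cpu_utilization
    return math.ceil(current_replicas * ratio)

# Example: 4 replicas averaging 90% CPU against a 50% target scale out to 8;
# the same replicas averaging 20% CPU scale back in to 2.
print(desired_replicas(4, 90, 50))  # 8
print(desired_replicas(4, 20, 50))  # 2
```

The real controller layers tolerances and stabilization behavior on top of this rule, so small fluctuations in CPU usage don’t trigger constant rescaling.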

Cluster Autoscaler

HPA is responsible for scaling the number of pods that are running in a cluster. By contrast, the Cluster Autoscaler dynamically changes the number of nodes in a cluster. As noted in the project’s documentation on GitHub, it adds nodes when pods fail to run in the cluster because of insufficient resources, and it removes nodes that have been underutilized for an extended period and whose pods can easily be shifted to other nodes.

By default, the Cluster Autoscaler runs on a Kubernetes master node. Administrators can instead run a customized deployment of the Cluster Autoscaler on worker nodes; in that case, however, they need to take care to ensure that the Cluster Autoscaler itself remains up and running.
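The two triggers described above can be summarized in a short sketch. The thresholds and field names below are illustrative assumptions, not the Cluster Autoscaler’s actual configuration; the real component simulates scheduling against node groups and cloud-provider APIs before acting.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    utilization: float           # fraction of allocatable resources requested
    pods_movable: bool           # could all of its pods be rescheduled elsewhere?
    underutilized_minutes: int   # how long it has stayed below the threshold

@dataclass
class ClusterState:
    unschedulable_pods: int      # pods stuck Pending for lack of resources
    nodes: List[Node] = field(default_factory=list)

# Illustrative values; the Cluster Autoscaler exposes comparable settings
# such as --scale-down-utilization-threshold and --scale-down-unneeded-time.
SCALE_DOWN_UTILIZATION = 0.5
SCALE_DOWN_DELAY_MINUTES = 10

def plan(state: ClusterState) -> str:
    # Trigger 1: pods cannot be scheduled anywhere, so add a node.
    if state.unschedulable_pods > 0:
        return "scale up: provision an additional node"
    # Trigger 2: a node has been persistently underutilized and is drainable,
    # so its pods can be moved and the node removed.
    for node in state.nodes:
        if (node.utilization < SCALE_DOWN_UTILIZATION
                and node.pods_movable
                and node.underutilized_minutes >= SCALE_DOWN_DELAY_MINUTES):
            return f"scale down: drain and remove {node.name}"
    return "no change"
```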

Vertical Pod Autoscaler

Last but not least, there’s the Vertical Pod Autoscaler (VPA). This tool responds to the tendency of the default Kubernetes scheduler to overcommit CPU and memory reservations on a node. Traditionally, those CPU and memory estimates have come from human guesswork or rarely run benchmarks.

The benefit of the Vertical Pod Autoscaler lies in its ability to set container resource requests and limits using live usage data. It uses that data to inform scheduling on a node so that every pod running on the node receives an appropriate amount of resources. VPA also works continuously, which means it can increase or decrease the resources allotted to individual pods based on their CPU consumption over time.
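As a rough illustration of sizing requests from live data, the sketch below derives a CPU request from observed usage samples by taking a high percentile and adding a safety margin. This is a simplification offered only as an example: the actual VPA recommender maintains decaying histograms of usage and honors per-container policies, so the function, percentile, and margin here are assumptions.

```python
from typing import List

def recommend_cpu_request(usage_millicores: List[float],
                          percentile: float = 0.90,
                          safety_margin: float = 0.15) -> int:
    """Size a CPU request from observed usage: take a high percentile of the
    samples and add headroom. Illustrative only; not VPA's real algorithm."""
    if not usage_millicores:
        raise ValueError("need at least one usage sample")
    ordered = sorted(usage_millicores)
    index = int(percentile * (len(ordered) - 1))
    return int(ordered[index] * (1 + safety_margin))

# Usage samples (millicores) gathered from the metrics pipeline over time.
samples = [120, 140, 150, 160, 170, 180, 190, 210, 220, 300]
print(recommend_cpu_request(samples))  # 253 -> request roughly 253m CPU
```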

Autoscaling Your Kubernetes

Administrators can use HPA, the Cluster Autoscaler, and VPA to automatically scale their Kubernetes environments, but other autoscaling resources are also available to them. For more information, click here.
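As a practical starting point, the example below creates an HPA for an existing Deployment using the official Kubernetes Python client. The Deployment name, namespace, and thresholds are placeholders; the sketch assumes a reachable cluster, a configured kubeconfig, and a metrics pipeline (such as metrics-server) so CPU metrics are available.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod

# Target an existing Deployment named "web" (placeholder) in the "default" namespace.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out when average CPU exceeds 70%
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

The same object can just as well be declared as YAML and applied with kubectl; the call above corresponds to running kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70.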

About the Author: David Bisson is an information security writer and security junkie. He’s a contributing editor to IBM’s Security Intelligence and Tripwire’s The State of Security Blog, and a contributing writer to Bora. He also regularly produces written content for Zix and a number of other companies in the digital security space.

 
