Kubernetes has revolutionized application deployment during the last few years. Thousands of businesses have migrated to the cloud within a short period in order to leverage the power of Kubernetes.
However, adopting Kubernetes is not a walk in the park. There are many complexities related to setting up Kubernetes in a manner that works for your organization. Many of them are technical in nature, but you will also need to deal with the reluctance many people display when being introduced to new technologies.
Once you overcome these challenges, you will arrive at a point where your applications run smoothly on Kubernetes. As adoption increases over time, many organizations end up with separate clusters for each of their environments and/or applications. As your applications scale with the growth of your business, you will observe that the costs involved are also growing at an alarming rate.
Is there a way to avoid the costs that come with running an increasing number of clusters? The short answer is yes – but for more on this topic, read on:
What part do Kubernetes clusters play?
A Kubernetes cluster is a group of nodes used to deploy containerized applications, so if you use Kubernetes for your application, you have at least one cluster. It’s quite common for organizations to dedicate a cluster to a single application or to a particular environment such as staging or production.
A Kubernetes cluster usually contains at least one master node and one or more worker nodes. The master node manages the state of the cluster, while the worker nodes run the application workloads.
Is a separate cluster required for each environment?
While it seems quite logical to give each environment and/or application its own cluster, it is not required, and it’s not the only way. Kubernetes makes the multi-cluster route tempting by making it possible to quickly roll out multiple nodes with the same configuration, but that convenience comes at a price.
Namespaces are one of the most useful features of Kubernetes clusters. They allow you to segregate resources within a cluster, so you can deploy multiple applications or environments side by side. That means, with careful planning, you can deploy all your environments and applications within a single cluster.
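As a minimal sketch, environment namespaces can be declared as plain manifests; the names `staging` and `production` here are illustrative, not prescribed:

```yaml
# Illustrative namespaces carving one cluster into environments
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
```

Applying this file with `kubectl apply -f namespaces.yaml` gives each environment its own logical partition within the same cluster.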
The ease of managing a single cluster is one of the most compelling reasons to deploy all your applications within the same cluster. You can use namespaces to control the amount of resources allocated to each application and/or environment. Similarly, you can run long-running services and batch jobs without affecting workloads in other namespaces.
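For example, a ResourceQuota caps what a single namespace may consume; the namespace name and the figures below are placeholders, not recommendations:

```yaml
# Cap the total resources one environment can claim from the shared cluster
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging        # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU the namespace may request
    requests.memory: 8Gi    # total memory the namespace may request
    limits.cpu: "8"         # total CPU limit across all pods
    limits.memory: 16Gi     # total memory limit across all pods
    pods: "50"              # cap on the number of pods
```

With quotas like this in place, a noisy batch job in one namespace cannot starve the services running in another.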
The argument that many experts use to discourage the use of a single cluster is the risk of failure and downtime. However, Kubernetes introduced support for running a single cluster across multiple zones as far back as version 1.12. This feature allows you to deploy nodes across zones in order to ensure continuity and high availability.
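On a reasonably recent cluster whose nodes carry the standard zone label, one way to exploit this is a topology spread constraint in a Deployment’s pod template, asking the scheduler to balance replicas across zones. A sketch, with the `app: web` label being illustrative:

```yaml
# Fragment of a pod template spec: spread replicas evenly across zones
topologySpreadConstraints:
- maxSkew: 1                                   # zones may differ by at most one pod
  topologyKey: topology.kubernetes.io/zone     # standard well-known node label
  whenUnsatisfiable: DoNotSchedule             # hold pods rather than skew the spread
  labelSelector:
    matchLabels:
      app: web                                 # hypothetical app label
```

If one zone fails, the remaining zones still hold a share of the replicas, so the application stays available.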
Another argument is that a single cluster cannot handle large numbers of nodes and pods. As of version 1.18, Kubernetes allows a cluster to have up to 5,000 nodes, 150,000 total pods, 300,000 total containers, and 100 pods per node. These limits are generous, and generally sufficient for most production applications.
There are other benefits to using one Kubernetes cluster. You can read more about them on the official Kubernetes blog.
So how does this translate into cost savings? Let’s look at both the direct and indirect forms of cost savings:
Savings on per-cluster costs
Each of your clusters requires a master node, which adds to the total number of nodes your application requires. Naturally, as the number of clusters grows, so does the cost of the additional computing resources dedicated to master nodes.
In addition to the direct cost of extra master nodes, you may face additional charges depending on your service provider. Let’s consider the three main managed Kubernetes services. Azure Kubernetes Service (AKS) does not charge additionally for cluster management. Google Kubernetes Engine (GKE) provides one free zonal cluster per billing account, making it more cost-effective to have a single cluster.
However, Amazon Elastic Kubernetes Service (EKS) charges $0.10 per hour for each cluster’s managed control plane. That works out to roughly $72 per month per cluster, which can have a significant impact on overall costs if you require a large number of clusters.
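The arithmetic is simple enough to sanity-check yourself. This sketch assumes the $0.10-per-cluster-hour EKS control-plane rate quoted above and approximates a month as 720 hours:

```python
HOURS_PER_MONTH = 24 * 30  # approximation used for the estimate


def monthly_control_plane_cost(num_clusters, hourly_rate=0.10):
    """Estimated monthly managed control-plane spend in USD.

    hourly_rate defaults to the EKS per-cluster fee cited in the text;
    substitute your provider's actual rate.
    """
    return num_clusters * hourly_rate * HOURS_PER_MONTH


print(monthly_control_plane_cost(1))   # one cluster: about $72/month
print(monthly_control_plane_cost(10))  # ten clusters multiply the fee tenfold
```

Consolidating ten clusters into one removes nine of those fixed monthly fees before any worker-node savings are counted.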
Savings on idle resource costs
This aspect of cost savings becomes prominent if you host multiple environments, such as dev, staging, and production, on the same cluster. Resources from namespaces receiving less traffic can be reallocated to more important ones when needed. This is in contrast to purchasing additional worker nodes, which increases running costs.
Because some environments do not always need their full resource allocation, capacity can be shifted to namespaces that are experiencing a spike in user activity. This type of saving can be even more significant during seasonal periods when some applications see peak activity.
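One common mechanism behind this elasticity is the HorizontalPodAutoscaler, which grows and shrinks a workload within its namespace so that quiet environments release capacity busy ones can use. A sketch, assuming a cluster recent enough for the `autoscaling/v2` API and a hypothetical Deployment named `web` in a `production` namespace:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
  namespace: production     # hypothetical namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # hypothetical Deployment to scale
  minReplicas: 2            # floor during quiet periods
  maxReplicas: 10           # ceiling during seasonal peaks
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```

During off-peak hours the deployment shrinks toward its floor, freeing node capacity for whichever namespace needs it.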
Managing the resources of a single cluster is simpler than juggling several. Whether it is shutting down idle nodes or scaling other resources, having a single cluster makes the process much easier.
While your Kubernetes provider would take care of most of the maintenance of your nodes and clusters, there will be some activities that require human intervention – for example, testing resource allocations of namespaces and ensuring that they are optimized.
You will also incur fixed costs for the teams that manage your clusters, and the need for dedicated personnel grows with the number of clusters. Each cluster typically carries its own configuration, such as its Kubernetes version and third-party monitoring tools. All of this adds to the overhead of managing multiple Kubernetes clusters, making a single cluster the best option for cost savings.
Kubernetes is an innovative and exciting platform for teams to deploy their applications and experience the power of the cloud, containers, and microservices. Resources such as computing, storage, and networking are virtually unlimited and can cater even to the most demanding apps.
However, there is a delicate balance between cost-effectiveness and efficiency. While a single cluster can lead to cost savings in many ways, it becomes inefficient once your workloads approach the per-cluster resource limits. At that point, the additional cost of maintaining multiple clusters becomes immaterial.
Please visit our Blue Sentry Blog if you enjoyed this article and want to learn more about topics like Kubernetes, Cloud Computing, and DevOps.