Organizations running their applications on Kubernetes have traditionally dedicated a separate cluster to each environment. This approach, however, has proven costly in maintenance and overhead. A practical alternative is deploying and managing a single shared cluster, which is easier to administer and makes more efficient use of resources.
Running Multiple Clusters Can Be Costly
Many teams automatically dedicate a separate cluster to each phase of application development: one cluster for production, a second for QA, and yet another for staging. Development environments, too, are often split across several individual clusters.
With this approach, however, the number of clusters quickly becomes overwhelming and financially unsustainable to manage. Running 30 master nodes instead of three to serve the same workloads drastically increases the monthly bill. Administering each cluster becomes cumbersome: establishing authentication and authorization for each is a time-consuming task in itself, and on top of that, periodic version upgrades for every cluster prove more tedious still.
Sharing Environments in a Single Cluster
There are practical solutions for managing and configuring software within a single Kubernetes cluster.
As an example, rather than using a separate cluster for each stage of software development as described above, the namespaces feature in Kubernetes allows additional testing and staging environments within the same cluster. Namespaces scope resource names: resources such as pods, services, and replication controllers can share the same name as long as they live in separate namespaces, and each sees only the resources in its own namespace. Namespaces are easy to create and delete and can even be subdivided for multiple team tasks. This saves substantially on server costs and provides a platform for integration testing before production deployment. Additionally, a variety of server, batch, and other jobs can be executed in the same cluster without interfering with one another.
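As a minimal sketch (the namespace and service names here are illustrative), two environments can be defined as namespaces in one cluster, and a service can carry the same name in both without conflict:

```yaml
# Two environments in one cluster (illustrative names)
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
# The same service name can exist in both namespaces
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: staging
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: qa
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
```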
When working in teams, role-based access control (RBAC) can be implemented in the cluster to assign read and write permissions. This adds an extra layer of security by preventing unauthorized access. For further protection, Kubernetes provides a network policy feature for restricting traffic between services in the cluster. Network policies are enforced by the cluster's network plugin; Weave Net, for example, automatically watches Kubernetes for namespace network policy rules and allows or blocks traffic as directed.
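A hedged sketch of both mechanisms, assuming a team namespace called staging (the role, binding, and group names are hypothetical): a Role granting read-only access to pods, bound to a group, plus a NetworkPolicy admitting ingress only from pods in the same namespace:

```yaml
# Read-only access to pods in the staging namespace (illustrative names)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: staging
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: qa-team-pod-reader
  namespace: staging
subjects:
  - kind: Group
    name: qa-team        # hypothetical group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
# Allow ingress only from pods within the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: staging
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector: {}
```

An empty podSelector under "from" matches all pods in the policy's own namespace, which is what keeps traffic from other namespaces out.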
Controlling Resource Allocation In a Single Cluster
The main caveat of the shared-cluster configuration is that limits must be established carefully to avoid depleting CPU, memory, or storage resources. The following mechanisms are used to establish resource limits.
- Persistent volumes (PVs) are implemented as plugins and provisioned by an administrator to manage durable storage in the cluster. A PersistentVolumeClaim (PVC) can be created to instruct Kubernetes to provision a persistent disk automatically.
- Limit ranges are applied at the individual pod or container level to ensure they do not consume more than the resources allocated to a specific namespace.
- Network policies contain options to specify communication rules for ingress and egress traffic on pods.
- API object monitoring is another essential measure to control cluster resource allocation. Unused API objects can slow down performance, and those that are mapped to idle infrastructure add unnecessary costs. Monitoring all resources used by the cluster helps in understanding how they are used to avoid running close to capacity.
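The first two mechanisms above can be sketched as manifests (the namespace, storage class, and sizes are illustrative): a PersistentVolumeClaim that asks Kubernetes to provision storage, and a LimitRange that caps per-container CPU and memory within a namespace:

```yaml
# Ask Kubernetes to provision 10Gi of durable storage (class name is illustrative)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
  namespace: staging
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
---
# Default and maximum CPU/memory per container in this namespace
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: staging
spec:
  limits:
    - type: Container
      default:
        cpu: "500m"
        memory: 512Mi
      defaultRequest:
        cpu: "250m"
        memory: 256Mi
      max:
        cpu: "1"
        memory: 1Gi
```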
Determining Single Cluster Size
Before planning and implementing a single Kubernetes cluster, there are several considerations to deciding on the size, scale, and performance.
- Number of nodes needed
Nodes are the essential building blocks of the cluster: the more nodes available, the more workload the cluster can handle and the higher its availability. A common guideline is to provision about 20% more resource capacity than current workloads require.
- The purpose of the cluster
From learning and testing how a cluster functions to deploying a robust production-level cluster, the purpose will determine the size needed. Additionally, consider whether the cluster will be managed by a cloud provider or run on premises.
- The number and type of applications that will run on the cluster
A cluster running just a few applications will not require the same size as one running big data and artificial intelligence workloads. Determine how CPU- and memory-intensive the applications are before deciding on cluster size.
- Expected traffic amount
Steady, heavy, and bursty traffic patterns can all drive the choice of cluster size.
- Budget considerations
The costs of establishing hardware and virtualization infrastructure will vary with the cluster size.
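The 20% headroom guideline above reduces to simple arithmetic. A minimal sketch in Python (the workload and node figures are invented for illustration) that estimates node count from aggregate CPU and memory demand:

```python
import math

def nodes_needed(total_cpu_cores: float, total_mem_gib: float,
                 node_cpu_cores: float, node_mem_gib: float,
                 headroom: float = 0.20) -> int:
    """Estimate node count with a capacity headroom (default 20%)."""
    cpu_nodes = math.ceil(total_cpu_cores * (1 + headroom) / node_cpu_cores)
    mem_nodes = math.ceil(total_mem_gib * (1 + headroom) / node_mem_gib)
    # Whichever resource is tighter dictates the node count.
    return max(cpu_nodes, mem_nodes)

# Hypothetical workload: 40 cores and 150 GiB of memory,
# on nodes offering 8 cores and 32 GiB each.
print(nodes_needed(40, 150, 8, 32))  # -> 6
```

Note that memory is the binding constraint here (180 GiB with headroom needs 6 nodes of 32 GiB), which is why sizing should check every resource dimension, not just CPU.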
Single Cluster Solution Case Study
iSentium, a market leader in real-time sentiment mining, was confined to a single instance of MongoDB for capturing and analyzing social media channel data. This clearly prevented the company from scaling up its resources to process increasingly voluminous amounts of data. To remedy the situation, Blue Sentry created and deployed a single reproducible MongoDB cluster using Amazon Kinesis, Kinesis Data Firehose, EC2, VPC, and CloudFormer.
The result was the successful deployment of a single cluster with a capacity of 200 record PUTs per second, burstable up to 6,000. The deployment gives iSentium a reproducible infrastructure with templates that can be deployed on demand as client needs dictate.
Deploying and managing multiple environments on a single Kubernetes cluster provides many advantages over traditional multiple cluster strategies. The use of namespaces allows the partitioning of apps within the cluster to make this configuration possible. As long as resources are carefully monitored and are scalable, single cluster configurations can be implemented successfully for many situations.
Blue Sentry is a leader in cloud-native deployments with extensive experience in Kubernetes cluster implementations. Contact us for more information on how we can build a scalable, containerized solution that can cut operating costs for your company.