Adopting Effective Kubernetes Configuration Management
There is no doubt that Kubernetes has quickly become the de facto standard for deploying and managing modern applications across a wide array of underlying infrastructures. This go-to industry standard is the fastest-growing CNCF project, and close to three out of four cloud deployments now rely on the technology.
Kubernetes is an open-source platform for orchestrating containers. CloudOps teams embrace Kubernetes because it orchestrates containers across public cloud, private cloud, and on-premises servers, promising a level of portability that mitigates lock-in concerns.
Kubernetes’ widespread adoption is creating challenges for developers who must manage application configuration settings (ConfigMaps and Secrets), a need that a unified cloud configuration management solution addresses.
A Kubernetes deployment includes two types of configuration management:
- Configuring the Kubernetes platform itself to run on any of the supported infrastructures. Think of this as “outside” Kubernetes.
- Configuring the applications that run inside a container managed by the Kubernetes platform. Think of this as “inside” Kubernetes. This is where ConfigMaps are used.
But the Kubernetes application configuration framework is not a panacea. Application configuration settings (as opposed to overall Kubernetes platform settings) have to be fine-tuned for every application deployment, instance, and update. Compounding the work, each deployment across development, staging, and production environments requires touching numerous YAML files, with the attendant risk of inadvertent typos or copy-and-paste errors, misconfigurations that can lead to unplanned downtime or security incidents.
ConfigMaps Decouple Application Code from Hard-wired Configuration Settings
Developers configure applications that run inside a Kubernetes-managed container via ConfigMaps: key-value files that supply the configuration information an application needs to run in a specific environment (i.e., development, staging, or production).
This decoupling lets configuration settings change dynamically at runtime and across multiple environments, with no more hard-coding of configuration settings in application code.
ConfigMaps pass configuration values to the application running in the container, such as the following (a minimal sketch appears after this list):
- The database endpoint URL.
- The IAM role the application will use for permissions.
- The VPC ID for secure operation.
- The log level and location.
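For illustration, here is a minimal sketch of a ConfigMap carrying settings like those above. All names and values (the app name, database URL, IAM role, VPC ID, and log settings) are hypothetical placeholders, not a prescribed layout:

```yaml
# Hypothetical ConfigMap for one environment (staging); names and values are illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
  namespace: staging
data:
  DATABASE_URL: "postgres://db.staging.internal:5432/myapp"
  IAM_ROLE_ARN: "arn:aws:iam::123456789012:role/my-app-staging"
  VPC_ID: "vpc-0abc1234def567890"
  LOG_LEVEL: "debug"
  LOG_PATH: "/var/log/my-app"
```

Each environment typically carries its own copy of a file like this, which is exactly where the drift and copy-and-paste errors described below creep in.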
The Kubernetes platform itself presents many configuration challenges. Tuning performance, implementing high-availability architectures, and applying updates require expertise that many teams lack. This is why the managed Kubernetes offerings from AWS, GCP, Azure, and Red Hat are so popular.
There is no managed-service counterpart for Kubernetes application configuration.
What’s missing is a better way to manage ConfigMaps and Secrets. Developers often resort to complicated Git repo hierarchies to manage ConfigMaps for multiple environments. While a GitOps workflow is a great way to enforce “everything as code” principles, it is neither the most efficient nor the most reliable way to keep track of application settings across multiple environments.
Copy-and-paste errors between the ConfigMap files for development, staging, and production environments often result in unplanned downtime or a security incident. These errors can slip past +1 peer reviews and ultimately destabilize production environments.
When troubleshooting the resulting outages, developers waste time trying to answer two fundamental questions (a sketch of the kind of drift behind them follows this list):
- The production environment worked yesterday but doesn’t work today. “What changed most recently?”
- The new feature works in staging but not in production. “What’s the difference between the environments?”
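As a hedged sketch of that drift, consider two copies of the same hypothetical ConfigMap, one per environment, that differ by a single easy-to-miss line:

```yaml
# staging/configmap.yaml (hypothetical values)
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
  namespace: staging
data:
  DATABASE_URL: "postgres://db.staging.internal:5432/myapp"
  FEATURE_NEW_CHECKOUT: "true"
---
# production/configmap.yaml (hypothetical values): the feature flag was never
# flipped here, so the new feature works in staging but not in production.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
  namespace: production
data:
  DATABASE_URL: "postgres://db.prod.internal:5432/myapp"
  FEATURE_NEW_CHECKOUT: "false"
```

Spotting that one-line difference by eye across dozens of YAML files is exactly the kind of toil a single source of truth removes.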
As a result, DevOps teams spend a lot of time tinkering with configuration settings rather than working on higher-value tasks.
Keeping Secrets
Another Kubernetes configuration challenge is giving applications access to secrets. Secrets are similar to ConfigMaps but their purpose is to hold confidential data, such as passwords, access keys, and tokens.
But Kubernetes does not have a robust built-in secret store; by default, Secrets are stored unencrypted (merely base64-encoded) in the Kubernetes API server’s underlying data store, etcd.
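To make that concrete, here is a minimal Secret manifest with a hypothetical value. The data is only base64-encoded, which is an encoding, not encryption; anyone with read access to the Secret object or to etcd can recover the plaintext:

```yaml
# Hypothetical Secret; the value below is base64-encoded "password123", not encrypted.
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets
  namespace: production
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQxMjM=
```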
So how do developers securely pass a secret into a Kubernetes pod? One approach is to use an open-source tool such as KubeTruth, which syncs parameter updates between Kubernetes ConfigMaps (and Secrets) and the CloudTruth external datastore. The mechanism is hands-off: naming conventions automate the delivery of configuration data to a pod.
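However the ConfigMap or Secret gets populated, whether by hand, through a GitOps pipeline, or by a sync tool such as KubeTruth, the pod consumes it the same standard way. A minimal sketch, reusing the hypothetical names from the earlier examples:

```yaml
# Hypothetical Pod that consumes the ConfigMap and Secret above as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: production
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0.0
      envFrom:
        - configMapRef:
            name: my-app-config   # every key in the ConfigMap becomes an env var
        - secretRef:
            name: my-app-secrets  # DB_PASSWORD becomes an env var
```

Because the pod references the ConfigMap and Secret only by name, a sync tool can update their contents without touching the pod spec (environment variables are re-read only when the pod restarts).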
Learn more about the challenges of managing secrets across multiple environments.
The Benefits of Centralized Configuration Management
In sum, companies need a better way to manage Kubernetes ConfigMaps and Secrets. One approach is a centralized configuration data platform that automatically synchronizes ConfigMaps and Secrets with a single source of truth. Developers then spend less time manually editing, copying, and pasting YAML files and more time building new features.
Teams are adopting Kubernetes in record numbers. Along with Kube’s many benefits come configuration management inefficiencies, especially in multi-environment and multi-cloud scenarios. With the right unified configuration management solution, developers gain a single source of truth about how their Kubernetes applications are configured. The results are greater efficiency, increased uptime, a stronger security posture, and better team cohesion.
Tame your cloud configuration complexity with a no-obligation free trial.