
Michael Iatrou
on 5 December 2018


The appeal of Kubernetes is universal. Application development, operations and infrastructure teams recognise diverse reasons for its immediate utility and growing potential, a testament to Kubernetes’ empathetic design. Web apps, galvanised by the 12-factor pattern, as well as microservice-structured applications, find a native habitat in Kubernetes. Moreover, a growing list of analytics and data streaming applications, Function-as-a-Service platforms and deep/machine learning frameworks benefit from Kubernetes’ functionality. Add to the mix a deep desire to decouple applications from VMs, increased portability for hybrid cloud operations, and a voracious appetite from the business for continuous innovation. This intrinsic diversity of goals and expectations makes choosing the most appropriate Kubernetes solution challenging. Here, we will explore what constitutes a minimum viable Kubernetes environment from a developer and operations perspective.

We have learned much from the rise and fall of the “move fast and break things” development mantra. To implement and test ideas quickly, unverified approximations and assumptions might be employed. Conversely, achieving consistent and reliable behaviour in any engineering endeavour requires in-depth understanding and hypothesis validation. Developers need to write and debug code in the comfort of their IDE, complete unit tests on their laptop, and collaborate with their DevOps peers for integration testing and production lifecycle management. Tacit knowledge of Kubernetes drastically improves a developer’s efficiency. Using a production-grade Kubernetes cluster for such experimentation is a cumbersome experience; a self-contained, isolated and disposable cluster is preferable. With MicroK8s, anyone can install such a cluster on a laptop or a VM (local or in the cloud) in a matter of minutes. MicroK8s is an entire Kubernetes cluster in a snap, and it can be easily installed on the most common Linux distributions, Windows and macOS.
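As a rough sketch of how quickly such a disposable cluster comes up, the commands below install MicroK8s on an Ubuntu machine and enable a couple of common add-ons; snap channel defaults and add-on names vary between MicroK8s releases, so treat them as illustrative.

    # Install MicroK8s from the snap store
    sudo snap install microk8s --classic

    # Wait until all cluster services report ready
    sudo microk8s status --wait-ready

    # Enable commonly used add-ons (names may differ between releases)
    sudo microk8s enable dns storage

    # Verify that the single-node cluster is up
    sudo microk8s kubectl get nodes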

The definition of a minimal production environment for Kubernetes comes with a broader set of prerequisites. The target environments for a production cluster range from the data centre to the cloud, and on to the elusive edge. Furthermore, a production deployment reflects governance needs and possibly regulatory mandates. Characterising scale, elasticity, availability, portability, security, and compliance should drive design decisions. Nevertheless, there is a minimal subset of properties and attributes that needs to be carefully considered.

Automation: There are myriad ways to deploy a Kubernetes cluster. The best-in-class tooling performs across multiple dimensions. It can deploy Kubernetes on a variety of substrates, including bare metal, virtualised infrastructure, private cloud, and public cloud. It enables repeatable and predictable operations, such as updating and upgrading a cluster, scaling out (adding more worker nodes), scaling up (adding higher-capacity nodes) and scaling back, as well as simplifying recovery from physical server or virtual machine failure. Finally, automation needs to facilitate extensibility of the core feature set, allowing integration with third-party components from the Kubernetes ecosystem.
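As an illustrative sketch of such tooling, Canonical’s Charmed Kubernetes drives these lifecycle operations through Juju; the bundle and application names below follow the publicly documented defaults and may differ in a given environment.

    # Deploy a complete Kubernetes cluster from a published bundle
    juju deploy charmed-kubernetes

    # Scale out: add two more worker nodes
    juju add-unit kubernetes-worker -n 2

    # Inspect the state of every unit in the model
    juju status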

Observability: In-depth understanding of a cluster’s state is essential for a production environment. Two main types of data streams provide insight into the control plane’s health: logs and monitoring metrics. Diagnostic logging records an event timeline and describes state changes. Metrics provide lightweight instrumentation of system resources and are used for a variety of tasks: capacity planning, triggering alarms, and initial triangulation during root cause analysis of a pathological state. Logs provide detailed, context-specific descriptions, which are utilised to troubleshoot and remediate erroneous cluster conditions. The logging and monitoring solution needs to scale horizontally alongside the worker nodes.
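As a minimal sketch of collecting both kinds of signal, assuming a metrics-server is running and the MicroK8s monitoring add-on is available under its historical name, the following commands surface metrics and logs from a live cluster.

    # Enable a bundled monitoring stack on MicroK8s (add-on name varies between releases)
    microk8s enable prometheus

    # Lightweight resource metrics, useful for capacity planning (requires metrics-server)
    kubectl top nodes

    # Context-specific diagnostic logs from a cluster component, e.g. CoreDNS
    kubectl logs -n kube-system deployment/coredns --tail=50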

Artifact management: Kubernetes uses text-based manifests to deploy and manage binary containers, which are typically built from source. Two types of tools are needed: source version control and binary repository management. Version control systems store and track predominantly source code, configuration files, and documentation. Binary repositories provide analogous functionality for containers, OS packages, and built executable binaries. A binary repository can integrate directly with Kubernetes as a container registry. Most public clouds offer registries as a service, which can be utilised for both cloud-based and on-prem Kubernetes deployments. Private registries can be used when compliance requirements do not allow for hosted solutions.
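As a sketch of the private registry path, MicroK8s ships a built-in registry add-on that is exposed on localhost:32000 by default; the image name and tag below are purely illustrative.

    # Enable the built-in container registry
    microk8s enable registry

    # Tag and push a locally built image to the private registry
    docker tag myapp:latest localhost:32000/myapp:v1
    docker push localhost:32000/myapp:v1

    # Deploy directly from the private registry
    kubectl create deployment myapp --image=localhost:32000/myapp:v1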

High availability (HA): Production-grade is virtually synonymous with a highly available control plane. Kubernetes includes components that accommodate active/passive, active/active, and clustered HA configurations. A minimum of three nodes (typically physical servers or properly sized virtual machines) is necessary to host the associated services. Isolation of the respective services, fine-grained control of resource allocation, and streamlined updates and upgrades can be accommodated by additionally utilising machine containers for each cluster service.
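As a hedged sketch, recent MicroK8s releases form a highly available control plane automatically once three or more nodes have joined; the address and token placeholders below stand in for the values printed by add-node.

    # On the first node: generate a join invitation
    microk8s add-node
    # ...prints a join command of the form:
    #   microk8s join <node-ip>:25000/<token>

    # On each of two additional nodes: join the cluster
    microk8s join <node-ip>:25000/<token>

    # With three or more nodes, status reports a highly available control plane
    microk8s status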

At the current stage of the Kubernetes wave, a minimalistic cluster is a nimble cluster. As Kubernetes and its ecosystem evolve quickly, it is crucial to cherry-pick additional components progressively and maintain agility for alternative options in the future. MicroK8s is an ideal solution in this context, as it comes with a very simple and fast setup and a great set of add-ons out of the box.

Contact us about your Kubernetes challenges and use cases.
