
Jorge O. Castro
on 17 November 2016


If you’re interested in running Kubernetes, you’ve probably heard of Kelsey Hightower’s Kubernetes the Hard Way. Exercises like these are important: they highlight the coordination needed between the components in a modern stack, and they show how far the world has come when it comes to software automation. Could you imagine if you had to set everything up the hard way every time?

Learning is fun

Doing things the hard way is fun, once. After that, I’ve got work to do, and before long I’m looking around to see who else has worked on this problem and how I can make the most of what open source has to offer.

It reminds me of the 1990s, when I was learning Linux. Sure, as a professional you need to know systems and how they work, down to the kernel level if needed. But having to do those things without a working keyboard or network makes the process much harder. Give me a working computer, and then I can begin. There’s value in learning how the components fit together and understanding the architecture of Kubernetes, so I encourage everyone to try the hard way at least once. If nothing else, it will make you appreciate the work people are putting into automating all of this for you in a composable and reusable way.

The easy way

I am starting a new series of videos on how we’re making the Canonical Distribution of Kubernetes easy for anyone to deploy on any cloud. All our code is open source and we love pull requests. Our goal is to help people get Kubernetes running in as many places as quickly and easily as possible. We’ve incorporated lots of the things people tell us they’re looking for in a production-grade Kubernetes, and we’re always looking to codify those best practices.

Following the steps in the how-to above will get you a working cluster; in this example I’m deploying to us-east-2, the shiny new AWS region. Subsequent videos will cover how to interact with the cluster and do more with it.
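For reference, here is a minimal sketch of what that deployment looks like from the command line using Juju, which is how the Canonical Distribution of Kubernetes is driven. The bundle and unit names below (canonical-kubernetes, kubernetes-master/0) are assumptions based on the charm store naming at the time, so check the current documentation before copying these verbatim.

    # Bootstrap a Juju controller in the new AWS region
    juju bootstrap aws/us-east-2

    # Deploy the Canonical Distribution of Kubernetes bundle
    juju deploy canonical-kubernetes

    # Watch the units come up; wait until everything reports "active"
    juju status

    # Assumed step: copy the cluster credentials from the master
    # so that kubectl can reach the new cluster
    juju scp kubernetes-master/0:config ~/.kube/config
    kubectl cluster-info

Once kubectl can see the cluster, the rest of the videos pick up from there: interacting with the cluster and deploying workloads onto it.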

