
James Donner
on 21 December 2017


Image courtesy of VMware

Containers are one of the most exciting technologies in the cloud right now. But when it comes to your IT strategy, where is the best place to start? With so many options and configurations available, it’s critical to choose the approach that best fits your software stack.

To answer these questions, Canonical’s VP of Product Development Dustin Kirkland and VMware Staff Engineer Sabari Murugesan presented at the SF Bay Area OpenStack User Group Meeting. You can watch the full talk here!

Watch this keynote to learn:

  • The high-level concepts and principles behind containers (a minimal sketch of one such principle follows this list)
  • How Ubuntu provides a first-class container experience
  • How to determine the best container use case
  • Container case studies: How are enterprises using containers in production?
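
One of the core principles behind containers is kernel namespacing: each container gets its own isolated view of system resources such as hostnames, process IDs and network interfaces. The short Python sketch below is a hypothetical illustration of that idea, not something taken from the talk; it assumes Linux, Python 3.12 or later (for os.unshare) and root privileges, and simply gives the current process a private hostname the way a container runtime would.

```python
import os
import socket

# Minimal namespace demo: give this process a "containerized" hostname.
# Assumes Linux, Python 3.12+ (for os.unshare) and root / CAP_SYS_ADMIN.

os.unshare(os.CLONE_NEWUTS)            # detach into a private UTS namespace
socket.sethostname("demo-container")   # change is scoped to this namespace only

print(socket.gethostname())            # -> "demo-container"
# The host (and every other process) still sees its original hostname.
```

Container runtimes build on this same primitive, combining UTS, PID, mount and network namespaces with cgroups to give each container a fully isolated environment.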
