How to deploy on AKS

Azure Kubernetes Service (AKS) allows you to quickly deploy a production-ready Kubernetes cluster in Azure. To access the AKS web interface, go to https://portal.azure.com/.

Install AKS and Juju tooling

Install Juju and the Azure CLI tools:

sudo snap install juju
sudo apt install --yes azure-cli
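
If you need to match the Juju version shown in this guide, the snap can optionally be installed from (or refreshed to) a specific channel, for example:

sudo snap install juju --channel=3.4/stable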

If needed, follow the official installation guide for:

  • az - the Azure CLI

To check that both tools are installed correctly, run the commands below and compare them with the sample outputs:

~$ juju version
3.4.2-genericlinux-amd64

~$ az --version
azure-cli                         2.61.0

core                              2.61.0
telemetry                          1.1.0

Dependencies:
msal                              1.28.0
azure-mgmt-resource               23.1.1
...
Your CLI is up-to-date.

Authenticate

Log in to your Azure account:

az login
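
If you are working on a machine without a browser (for example, over SSH), the device code flow can be used instead:

az login --use-device-code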

Create a new AKS cluster

Export the deployment name for later use:

export JUJU_NAME=aks-$USER-$RANDOM

The following examples in this guide use a single-node AKS cluster in the eastus location. Feel free to change these values for your own deployment.

Create a new Azure Resource Group:

az group create --name aks --location eastus
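
Optionally, confirm that the resource group was created:

az group show --name aks --output table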

Bootstrap AKS with the following command (increase the node count or VM size if necessary):

az aks create -g aks -n ${JUJU_NAME} --enable-managed-identity --node-count 1 --node-vm-size=Standard_D4s_v4 --generate-ssh-keys

Sample output:

{
  "aadProfile": null,
  "addonProfiles": null,
  "agentPoolProfiles": [
    {
      "availabilityZones": null,
      "capacityReservationGroupId": null,
      "count": 1,
      "creationData": null,
      "currentOrchestratorVersion": "1.28.9",
      "enableAutoScaling": false,
      "enableEncryptionAtHost": false,
      "enableFips": false,
      "enableNodePublicIp": false,
...
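
Cluster creation can take several minutes. To check whether the cluster is ready before continuing, you can query its provisioning state, which should report Succeeded:

az aks show --resource-group aks --name ${JUJU_NAME} --query provisioningState --output tsv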

Dump the credentials of the newly bootstrapped AKS cluster:

az aks get-credentials --resource-group aks --name ${JUJU_NAME} --context aks

Sample output:

...
Merged "aks" as current context in ~/.kube/config

Bootstrap Juju on AKS

Bootstrap Juju controller:

juju bootstrap aks aks

Sample output:

Creating Juju controller "aks" on aks/eastus
Bootstrap to Kubernetes cluster identified as azure/eastus
Creating k8s resources for controller "controller-aks"
Downloading images
Starting controller pod
Bootstrap agent now started
Contacting Juju controller at 20.231.233.33 to verify accessibility...

Bootstrap complete, controller "aks" is now available in namespace "controller-aks"

Now you can run
	juju add-model <model-name>
to create a new model to deploy k8s workloads.
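
To confirm that the new controller is registered and currently selected, list the known controllers:

juju controllers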

Create a new Juju model (k8s namespace)

juju add-model welcome aks
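
To confirm that the model was created and is now the active one, list the models on the controller:

juju models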

[Optional] Increase the logging level to DEBUG if you are troubleshooting charms:

juju model-config logging-config='<root>=INFO;unit=DEBUG'
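
To verify the new value, run the same command with only the key:

juju model-config logging-config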

Deploy charms

The following command deploys PostgreSQL K8s:

juju deploy postgresql-k8s --trust -n 3 --channel 14/stable

Sample output:

Deployed "postgresql-k8s" from charm-hub charm "postgresql-k8s", revision 247 in channel 14/stable on ubuntu@22.04/stable

Check the status:

juju status --watch 1s

Sample output:

Model    Controller  Cloud/Region  Version  SLA          Timestamp
welcome  aks         aks/eastus    3.4.2    unsupported  17:53:35+02:00

App             Version  Status  Scale  Charm           Channel       Rev  Address       Exposed  Message
postgresql-k8s  14.11    active      3  postgresql-k8s  14/stable     247  10.0.237.223  no       Primary

Unit               Workload  Agent  Address      Ports  Message
postgresql-k8s/0*  active    idle   10.244.0.19         Primary
postgresql-k8s/1   active    idle   10.244.0.18         
postgresql-k8s/2   active    idle   10.244.0.17  
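
Since each Juju model corresponds to a Kubernetes namespace (named after the model in this setup), the workload pods can also be inspected directly:

kubectl get pods --namespace=welcome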

Display deployment information

Display information about the current deployments with the following commands:

~$ kubectl cluster-info 
Kubernetes control plane is running at https://aks-user-aks-aaaaa-bbbbb.hcp.eastus.azmk8s.io:443
CoreDNS is running at https://aks-user-aks-aaaaa-bbbbb.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://aks-user-aks-aaaaa-bbbbb.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

~$ az aks list
...
        "count": 1,
        "currentOrchestratorVersion": "1.28.9",
        "enableAutoScaling": false,
...

~$ kubectl get node
NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-31246187-vmss000000   Ready    agent   11m   v1.28.9

Clean up

Always clean up AKS resources that are no longer necessary, as they can be costly!

To clean up the AKS cluster, its resources, and the Juju cloud, run the following commands:

juju destroy-controller aks --destroy-all-models --destroy-storage --force

List all services and then delete those that have an associated EXTERNAL-IP value (load balancers, …):

kubectl get svc --all-namespaces
kubectl delete svc <service-name> 
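
A quick way to spot services holding external addresses is to filter for the LoadBalancer type, for example:

kubectl get svc --all-namespaces | grep LoadBalancer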

Next, delete the AKS resources (source: Deleting all Azure VMs):

az aks delete -g aks -n ${JUJU_NAME}
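
If the aks resource group was created solely for this guide, you can also remove the group itself once the cluster is deleted:

az group delete --name aks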

Finally, log out from Azure to remove the local credentials (to avoid forgetting and leaking them):

az logout
