Wrapping Up

This section concludes the tutorial by cleaning up the resources used so far.

Cleanup

First of all, let’s destroy the Juju controller we bootstrapped for the tutorial.

juju destroy-controller --destroy-all-models --destroy-storage --force spark-tutorial
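
If you want to confirm the controller has been removed, you can optionally list the controllers known to the Juju client; spark-tutorial should no longer appear in the output:

juju controllers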

The spark-streaming, history-server, and cos namespaces are deleted automatically when the corresponding Juju models are destroyed. Let's also delete the spark K8s namespace; deleting a namespace automatically cleans up all Kubernetes resources within it.

kubectl delete namespace cos
kubectl delete namespace history-server
kubectl delete namespace spark-streaming
kubectl delete namespace spark
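
Namespace deletion can take a few moments to complete. To confirm the namespaces are gone, you can list the remaining namespaces:

kubectl get namespaces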

Finally, the S3 bucket created for this tutorial can be removed using the AWS CLI:

aws s3 rb s3://spark-tutorial --force
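
To verify the bucket was removed, you can list your remaining S3 buckets; spark-tutorial should no longer be present:

aws s3 ls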

Going Further

Parts of this tutorial were originally covered in a talk at the Ubuntu Summit 2023; the recording is available on YouTube.

This tutorial covered running Charmed Apache Spark locally on MicroK8s, where the deployment is constrained by the resources available on the local machine. For a more robust deployment, Charmed Apache Spark can also be run on AWS EKS. Refer to the how-to guide on deploying and configuring an AWS EKS cluster for Charmed Apache Spark. A video demonstration of Charmed Apache Spark running on top of AWS EKS is also available from the 2023 Operator Day demo.
