Application orchestration is the process of integrating applications together to automate and synchronise processes. In robotics, this is essential, especially on complex systems that involve many different processes working together. Yet, ROS applications are usually launched all at once from a single top-level launch file.
With orchestration, smaller launch files could be launched and synchronised to start one after the other to make sure everything is in the right state. Orchestration can also hold processes and insert some process logic. This is what ROS orchestration should be about.
This way, for instance, you could make your localisation node start only once your map_server made the map available.
Snaps offer orchestration features that might come in handy for your ROS orchestration.
In this post, we will demonstrate how to start a snap automatically at boot and how to monitor it. Then, through some examples, we will explore the different orchestration features that snaps offer. We thus assume that you are familiar with snaps for ROS; if you aren’t, or need a refresher, head over to the documentation page.
Let’s get started
Let us first build and install the snap we will use in this step-by-step tutorial:
```
git clone https://github.com/ubuntu-robotics/ros-snaps-examples.git -b ros_orchestration_with_snaps_blog
cd ros-snaps-examples/orchestration_humble_core22
SNAPCRAFT_ENABLE_EXPERIMENTAL_EXTENSIONS=1 snapcraft
sudo snap install talker-listener_0.1_amd64.snap --dangerous
```
Note that all the steps described hereafter are already implemented in this git repository. However, they are commented out so that you can easily follow along.
Start a ROS application automatically at boot
Once you have tested and snapped your robotics software, you can start it from the shell. For an autonomous robot, starting your applications automatically at boot is preferable to starting them manually every single time. It obviously saves time and, most importantly, makes your robot truly autonomous.
Snaps offer a simple way to turn your snap command into services and daemons, so that they will either start automatically at boot time and end when the machine is shut down, or start and stop on demand through socket activation.
Here, we will work with a simple ROS 2 Humble talker-listener that is already snapped (strictly confined). If you want to know how the talker-listener was snapped, you can visit the How to build a snap using ROS 2 Humble blog post.
Turn your snap command into a daemon
Once you have snapped your application, you can not only expose commands, but also create daemons. Daemons are commands that are started automatically at boot, which is a must-have for your robot software.
For now, our snap exposes two commands – talker and listener. They respectively start the node publishing messages and the node subscribing and listening to the messages.
You can test the snap by launching each of the following commands in their own terminal:
```
$ talker-listener.talker
$ talker-listener.listener
```
In order to start them both automatically in the background, we must turn them into daemons. Snap daemons can be of different types, but the most common one is “simple”. It will simply run as long as the service is enabled.
To turn our applications into daemons, we only have to add ‘daemon: simple’ to both our snap apps:
```
apps:
  listener:
    command: opt/ros/humble/bin/ros2 run demo_nodes_cpp listener
+   daemon: simple
    plugs: [network, network-bind]
    extensions: [ros2-humble]
  talker:
    command: opt/ros/humble/bin/ros2 run demo_nodes_cpp talker
+   daemon: simple
    plugs: [network, network-bind]
    extensions: [ros2-humble]
```
All that’s left to do is to rebuild and reinstall the snap. Upon installation, both daemons will automatically be started, in no particular order.
Check your daemons
Now, both our talker and listener are running in the background. Snaps offer a way to monitor and interact with your snap daemons.
Snap daemons are actually plain systemd services, so if you are familiar with systemd commands and tools (systemctl, journalctl, etc.), you can use them for snap daemons too.
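For instance, snap services map to systemd units following the naming scheme snap.&lt;snap&gt;.&lt;app&gt;.service, so the usual systemd tooling applies directly. A quick sketch:

```shell
# Snap services are plain systemd units named snap.<snap>.<app>.service.
snap_name="talker-listener"
app_name="talker"
unit="snap.${snap_name}.${app_name}.service"
echo "$unit"   # snap.talker-listener.talker.service

# So the usual systemd tooling applies to them, e.g.:
#   sudo systemctl status "$unit"
#   sudo journalctl -u "$unit" -f
```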
For this post, we are going to focus on snap commands to interact with our daemons and monitor them.
Check our service status
The very first thing to do is to verify the status of our daemons, making sure they are running. The snap info command gives us a summary of their status:
```
$ snap info talker-listener
name:      talker-listener
summary:   ROS 2 Talker/Listener Example
publisher: –
license:   unset
description: |
  This example launches a ROS 2 talker and listener.
services:
  talker-listener.listener: simple, enabled, active
  talker-listener.talker:   simple, enabled, active
refresh-date: today at 18:00 CEST
installed:    0.1 (x35) 69MB -
```
Here we see our two services listed. They are both simple, enabled and active:
- simple is the type of daemon we specified.
- enabled means that our service is meant to start automatically (at boot, upon snap install, etc.).
- active means that our service is currently running.
So here, both our talker and listener services are up and running. Let’s browse the logs.
Browsing the logs
The snap command also offers a way to browse our service logs.
Since our services are already running in the background, we can type:
```
$ sudo snap logs talker-listener
2022-08-23T11:13:08+02:00 talker-listener.talker: [INFO] [1661245988.120676423] [talker]: Publishing: 'Hello World: 123'
[...]
2022-08-23T11:13:12+02:00 talker-listener.listener: [INFO] [1661245992.121411564] [listener]: I heard: [Hello World: 123]
```
This command fetches the logs of our services and displays the last 10 lines by default. In case you want the command to continuously run and print new logs as they come in, you can use the “-f” (follow) flag:

```
sudo snap logs talker-listener -f
```
Note that so far we have been fetching the logs of our whole snap (both services). We can also get the logs of a specific service. To continuously fetch the listener logs, type:

```
sudo snap logs talker-listener.listener -f
```
Interact with snap daemons
The snap command also offers ways to control services. As we saw, our services are currently enabled. Enabled means our service will start automatically at boot. We can change this by “disabling” it, so it won’t start automatically any more:
```
sudo snap disable talker-listener.talker
```
Note that disabling the service won’t stop the current running process.
We can also stop the current process altogether with:
```
sudo snap stop talker-listener.talker
```
Conversely, we can enable/start a service with:
```
sudo snap enable talker-listener.talker
sudo snap start talker-listener.talker
```
Make sure to re-enable everything to keep following this post along:
```
sudo snap enable talker-listener
```
So far, our talker and listener start up without any specific orchestration; or in layman’s terms, in no specific order. Fortunately, snaps offer different ways to orchestrate services.
To spice up our experience, let’s add some scripts to our snap to showcase the orchestration features:
```
parts:
  [...]
+ # copy local scripts to the snap usr/bin
+ local-files:
+   plugin: dump
+   source: snap/local/
+   organize:
+     '*.sh': usr/bin/
```
This is a collection of bash scripts that have been conveniently prepared to demonstrate orchestration hereafter.
We will also add another app:
```
apps:
  [...]
+ listener-waiter:
+   command: usr/bin/listener-waiter.sh
+   daemon: oneshot
+   plugs: [network, network-bind]
+   extensions: [ros2-humble]
+   # Necessary for python3 ROS app
+   environment:
+     "LD_LIBRARY_PATH": "$LD_LIBRARY_PATH:$SNAP/usr/lib/$SNAPCRAFT_ARCH_TRIPLET/blas:$SNAP/usr/lib/$SNAPCRAFT_ARCH_TRIPLET/lapack"
```
This app simply waits for the node /listener to be present. This daemon is created as a “oneshot”, another type of daemon that is meant to run only once at start and then exit upon completion.
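The actual script ships in the example repository; here is a minimal sketch of how such a waiter could work (the function name and the stub are illustrative assumptions, not the repository’s code):

```shell
#!/bin/sh
# Hypothetical sketch of listener-waiter.sh: poll a node-listing command
# until the wanted node shows up, then exit 0 so the oneshot daemon is
# considered finished and dependent services may start.

wait_for_node() {
    node="$1"        # node name to wait for, e.g. /listener
    list_cmd="$2"    # command printing running nodes, one per line
    until $list_cmd | grep -qx "$node"; do
        sleep 1
    done
}

# In the real snap, this would be something like:
#   wait_for_node /listener "ros2 node list"
```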
After and before for ROS orchestration
The very first thing we can do is to change the start order. The after/before keywords are valid only for daemons and let us specify whether a given daemon should be started after or before one (or several) other service(s). Note that for oneshot daemons, the before/after keywords wait for the completion of the oneshot daemon.
The scenario here goes as follows: start the listener, make sure it’s properly started, and then, and only then, start the talker. To make sure our listener is properly started, we will use the listener-waiter app we introduced in the previous section. Remember, it waits for a node to be listed.
Here, we define the orchestration only at the listener-waiter application level to keep it simple. We want it to start after the listener and before the talker, so that the talker will start only once the listener is ready.
To do so, let’s add the after and before keywords to the listener-waiter app:

```
  listener-waiter:
    command: usr/bin/listener-waiter.sh
    daemon: oneshot
+   after: [listener]
+   before: [talker]
    plugs: [network, network-bind]
    extensions: [ros2-humble]
```
This is rather explicit: the listener-waiter must be started after the listener and before the talker. After rebuilding the snap, we can reinstall it and look at the logs again. Here is a shortened version of the output logs:
```
systemd: Started Service for snap application talker-listener.listener.
systemd: Starting Service for snap application talker-listener.listener-waiter...
talker-listener.listener-waiter: Making sure the listener is started
systemd: snap.talker-listener.listener-waiter.service: Succeeded.
systemd: Finished Service for snap application talker-listener.listener-waiter.
systemd: Started Service for snap application talker-listener.talker.
[talker]: Publishing: 'Hello World: 1'
talker-listener.listener: [INFO] [1661266809.685248681] [listener]: I heard: [Hello World: 1]
```
We can see in this log that everything went as expected. The talker was started only once the listener was available.
In this example, we specified the before/after fields within the listener-waiter app for the sake of simplicity. However, any daemon can specify before/after, as long as the specified applications are from the same snap, allowing for pretty complex orchestration.
Stop-command for ROS orchestration
Another interesting feature for snap orchestration is the stop-command. It allows one to specify a script, or a command, to be called right before the stop signal is sent to a program when running snap stop. With this, we could make sure, for instance, that everything is synchronised or saved before exiting. Let’s look at a quick example: running an echo of a string.
A script called stop-command.sh has already been added to the snap.
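Judging from the log message it produces later in this post, the script is essentially a one-liner; a plausible sketch (the exact file contents are an assumption):

```shell
#!/bin/sh
# Hypothetical sketch of stop-command.sh: log a message right before
# snapd sends the stop signal to the service. A real robot might flush
# data or save state here instead.
echo "About to stop the service"
```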
All we need to do here is to specify the path to the said script as a stop-command entry:

```
  talker:
    command: opt/ros/humble/bin/ros2 run demo_nodes_cpp talker
    plugs: [network, network-bind]
    daemon: simple
+   stop-command: usr/bin/stop-command.sh
    extensions: [ros2-humble]
```
After rebuilding and reinstalling the snap, we can trigger a stop manually with the snap stop command:
```
sudo snap stop talker-listener      # stop all the services of this snap
sudo snap logs talker-listener -f   # visualize the logs
```
We should see an output similar to:
```
systemd: Stopping Service for snap application talker-listener.listener...
systemd: snap.talker-listener.listener.service: Succeeded.
systemd: Stopped Service for snap application talker-listener.listener.
systemd: Stopping Service for snap application talker-listener.talker...
talker-listener.talker: About to stop the service
systemd: snap.talker-listener.talker.service: Succeeded.
2022-08-23T17:23:57+02:00 systemd: Stopped Service for snap application talker-listener.talker.
```
From the logs, we can see that before exiting, the service’s stop-command script was executed and printed the message: “About to stop the service”. Then, only after the stop-command script finished, was the service actually stopped.
Post-stop-command for ROS orchestration
Similarly to the stop-command entry, the post-stop-command also calls a command, but this time only after the service has stopped. The use case could be to run some data clean-up or even to notify a server that your system just stopped. Again, let us try this feature with a conveniently pre-baked script logging a message:
```
  talker:
    command: opt/ros/humble/bin/ros2 run demo_nodes_cpp talker
    plugs: [network, network-bind]
    daemon: simple
    stop-command: usr/bin/stop-command.sh
+   post-stop-command: usr/bin/post-stop-command.sh
    extensions: [ros2-humble]
```
Rebuild, re-install, and without much surprise, we get the following output:
```
systemd: Stopping Service for snap application talker-listener.talker...
talker-listener.talker: About to stop the service
talker-listener.talker: [INFO] [1661269138.094854527] [rclcpp]: signal_handler(signum=15)
talker-listener.talker: Goodbye from the post-stop-command!
systemd: snap.talker-listener.talker.service: Succeeded.
systemd: Stopped Service for snap application talker-listener.talker.
```
From the logs, we can see that our talker application executed the stop-command script, then received the termination signal, and only after that did our post-stop-command script log the message: “Goodbye from the post-stop-command!”.
Command-chain for ROS orchestration

So far, we have seen how to call additional commands around the moment we stop our service. The command-chain keyword allows us to list commands to be executed before our main command. The characteristic use case is to set up your environment. The ros2-humble extension that we are using in our snap example actually uses this mechanism; thanks to it, we don’t have to worry about sourcing the ROS environment in the snap. If you are curious, here is the said command-chain script. The best part is that the command-chain entry is not only available for daemons, but also for services and regular commands.
The scripts listed in the command-chain are not called one by one automatically. Instead, they are called as arguments of each other, resulting in a final command similar to:

```
./command-chain-script1.sh command-chain-script2.sh main-script.sh
```
So you must make sure that your command-chain scripts call the passed arguments as executables. For example, here, command-chain-script1.sh is responsible for calling command-chain-script2.sh.

Let’s see what our command-chain-talker.sh script looks like:
```
#!/usr/bin/sh
echo "Hello from the talker command-chain!"

# Necessary to start the main command
exec "$@"
```
The only thing to pay attention to is the exec "$@", which simply calls the next command. If we don’t specify this, our main snap command won’t be called.
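The chaining mechanism can be reproduced outside snaps; here is a small standalone demo (the file path is illustrative) where a link script prints a line and then exec’s its arguments as the next command:

```shell
#!/bin/sh
# Standalone demo of the command-chain mechanism: the link script prints
# a message, then replaces itself with whatever command it was given.
cat > /tmp/chain-link.sh <<'EOF'
#!/bin/sh
echo "hello from the chain link"
exec "$@"
EOF
chmod +x /tmp/chain-link.sh

# The "main command" here is just an echo:
/tmp/chain-link.sh echo "hello from main"
# prints "hello from the chain link", then "hello from main"
```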
Let’s add yet another script, this time as a command-chain entry of our talker app:

```
  talker:
+   command-chain: [usr/bin/command-chain-talker.sh]
    command: opt/ros/humble/bin/ros2 run demo_nodes_cpp talker
    plugs: [network, network-bind]
    daemon: simple
    stop-command: usr/bin/stop-command.sh
    post-stop-command: usr/bin/post-stop-command.sh
    extensions: [ros2-humble]
```
After building and installing, we can see that now the logs are:
```
systemd: Starting Service for snap application talker-listener.listener-waiter...
talker-listener.listener-waiter: Making sure the listener is started
systemd: snap.talker-listener.listener-waiter.service: Succeeded.
systemd: Finished Service for snap application talker-listener.listener-waiter.
systemd: Started Service for snap application talker-listener.talker.
talker-listener.talker: Hello from the talker command-chain!
talker-listener.talker: [INFO] [1661271361.139378609] [talker]: Publishing: 'Hello World: 1'
```
We can see from the logs that once our listener was available, the talker service was started. The command-chain-talker.sh script was called and printed the message: “Hello from the talker command-chain!”, and only after that did our talker start publishing.
I hope that reading this article helps you understand snap daemon features a bit more and inspires you to use them for ROS orchestration. For now, orchestration can only be done within the same snap, since strictly confined snaps are not allowed to launch applications outside their sandbox. Of course, you could also combine the snap orchestration features with other orchestration software. Most notably, the ROS 2 node lifecycle allows you to control the state of your nodes, so that you can orchestrate your nodes’ initialisation, for instance.
If you have any feedback, questions or ideas regarding ROS orchestration with snaps, please join our forum and let us know what you think. Furthermore, have a look at the snap documentation if you want to learn more about snaps for robotics applications.