Building application containers - AWS RoboMaker

End of support notice: On September 10, 2025, AWS will discontinue support for AWS RoboMaker. After September 10, 2025, you will no longer be able to access the AWS RoboMaker console or AWS RoboMaker resources. For more information on transitioning to AWS Batch to help run containerized simulations, visit this blog post.

Building application containers

There are three steps to submitting a simulation job in AWS RoboMaker: build the application containers, link the containers to AWS RoboMaker applications, and use those applications to submit a simulation job. This section covers how to build application containers using Docker for AWS RoboMaker. We use the hello-world sample application to demonstrate the steps required to build sample robot and simulation application containers for a ROS-based example. This page also demonstrates how to test your containers locally.

If you are not using ROS, see the blog post which describes how to run any high-fidelity simulation in AWS RoboMaker with GPU and container support.

Prerequisites

Before getting started, make sure your development environment has the necessary dependencies. You must have Docker, the AWS CLI, and the VCS Import Tool installed on your machine. You can install the VCS Import Tool with pip:

sudo pip3 install vcstool
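
To confirm the remaining tools are available, you can optionally check them from your terminal (these commands only verify your environment; any recent versions of Docker and the AWS CLI should work):

docker --version
aws --version
pip3 show vcstool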

You must also have an AWS account with an IAM role that grants the following permissions (an illustrative policy sketch follows this list):

  • Create an IAM role

  • Create AWS RoboMaker resources (simulation job, robot, and simulation applications)

  • Create and upload Amazon ECR repositories
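
The following sketch shows one way such permissions might be granted with the AWS CLI. It is illustrative only: the policy name is hypothetical, the action list is an assumption based on the bullets above rather than an exhaustive or least-privilege policy, and you should scope the resources to your own account.

# Illustrative sketch only: write a policy document covering the permissions
# listed above, then create it. Adjust actions and resources for your account.
cat > robomaker-build-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:PassRole",
        "robomaker:CreateRobotApplication",
        "robomaker:CreateSimulationApplication",
        "robomaker:CreateSimulationJob",
        "ecr:CreateRepository",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# "RoboMakerContainerBuildPolicy" is a hypothetical name; choose your own.
aws iam create-policy \
  --policy-name RoboMakerContainerBuildPolicy \
  --policy-document file://robomaker-build-policy.json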

Finally, you must know your account number, and you must select a Region in which to run the simulation. AWS RoboMaker is supported in the Regions listed in AWS RoboMaker endpoints and quotas.
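
If needed, you can look up your account number and your currently configured Region with the AWS CLI (this assumes your CLI credentials and default Region are already configured):

# Print the 12-digit account ID for the configured credentials
aws sts get-caller-identity --query Account --output text

# Show the default Region configured for the AWS CLI
aws configure get region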

Building application containers from a ROS workspace

AWS RoboMaker simulations consist of a simulation application and an optional robot application. Each of these applications is defined by a name and a container image. This section demonstrates how to build the container images for both a simulation application and a robot application. In the following example, both applications are built within a single workspace. The approach that follows generalizes easily to any ROS project.

To start, clone the hello world repository and import the source.

git clone https://github.com/aws-robotics/aws-robomaker-sample-application-helloworld.git helloworld
cd helloworld
vcs import robot_ws < robot_ws/.rosinstall
vcs import simulation_ws < simulation_ws/.rosinstall
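
As an optional sanity check (not part of the original steps), you can confirm that the sources were imported before building. This assumes the workspaces follow the sample's robot_ws/src and simulation_ws/src layout:

# List the packages now present in each workspace's src directory
ls robot_ws/src
ls simulation_ws/src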

Next, create a new text file in the helloworld directory and name it Dockerfile. Copy and paste the following contents:

# ======== ROS/Colcon Dockerfile ========
# This sample Dockerfile will build a Docker image for AWS RoboMaker
# in any ROS workspace where all of the dependencies are managed by rosdep.
#
# Adapt the file below to include your additional dependencies/configuration
# outside of rosdep.
# =======================================

# ==== Arguments ====
# Override the below arguments to match your application configuration.
# ===================

# ROS Distribution (ex: melodic, foxy, etc.)
ARG ROS_DISTRO=melodic
# Application Name (ex: helloworld)
ARG APP_NAME=robomaker_app
# Path to workspace directory on the host (ex: ./robot_ws)
ARG LOCAL_WS_DIR=workspace
# User to create and use (default: robomaker)
ARG USERNAME=robomaker
# The gazebo version to use if applicable (ex: gazebo-9, gazebo-11)
ARG GAZEBO_VERSION=gazebo-9
# Where to store the built application in the runtime image.
ARG IMAGE_WS_DIR=/home/$USERNAME/workspace

# ======== ROS Build Stages ========
# ${ROS_DISTRO}-ros-base
#   -> ros-robomaker-base
#     -> ros-robomaker-application-base
#       -> ros-robomaker-build-stage
#       -> ros-robomaker-app-runtime-image
# ==================================

# ==== ROS Base Image ============
# If running in production, you may choose to build the ROS base image
# from the source instruction-set to prevent impact from upstream changes.
# ARG UBUNTU_DISTRO=focal
# FROM public.ecr.aws/lts/ubuntu:${UBUNTU_DISTRO} as ros-base
# Instruction for each ROS release maintained by OSRF can be found here:
# https://github.com/osrf/docker_images
# ==================================

# ==== Build Stage with AWS RoboMaker Dependencies ====
# This stage creates the robomaker user and installs dependencies required
# to run applications in RoboMaker.
# ==================================
FROM public.ecr.aws/docker/library/ros:${ROS_DISTRO}-ros-base AS ros-robomaker-base
ARG USERNAME
ARG IMAGE_WS_DIR

RUN apt-get clean
RUN apt-get update && apt-get install -y \
    lsb \
    unzip \
    wget \
    curl \
    xterm \
    python3-colcon-common-extensions \
    devilspie \
    xfce4-terminal

RUN groupadd $USERNAME && \
    useradd -ms /bin/bash -g $USERNAME $USERNAME && \
    sh -c 'echo "$USERNAME ALL=(root) NOPASSWD:ALL" >> /etc/sudoers'

USER $USERNAME
WORKDIR /home/$USERNAME

RUN mkdir -p $IMAGE_WS_DIR

# ==== ROS Application Base ====
# This section installs exec dependencies for your ROS application.
# Note: Make sure you have defined 'exec' and 'build' dependencies correctly
# in your package.xml files.
# ========================================
FROM ros-robomaker-base as ros-robomaker-application-base
ARG LOCAL_WS_DIR
ARG IMAGE_WS_DIR
ARG ROS_DISTRO
ARG USERNAME

WORKDIR $IMAGE_WS_DIR
COPY --chown=$USERNAME:$USERNAME $LOCAL_WS_DIR/src $IMAGE_WS_DIR/src

RUN sudo apt update && \
    rosdep update && \
    rosdep fix-permissions

# Note: This will install all dependencies.
# You could further optimize this by only defining the exec dependencies.
# Then, install the build dependencies in the build image.
RUN rosdep install --from-paths src --ignore-src -r -y

# ==== ROS Workspace Build Stage ====
# In this stage, we will copy the source files, install build dependencies,
# and run a build.
# ===================================
FROM ros-robomaker-application-base AS ros-robomaker-build-stage
LABEL build_step="${APP_NAME}Workspace_Build"

ARG APP_NAME
ARG LOCAL_WS_DIR
ARG IMAGE_WS_DIR

RUN . /opt/ros/$ROS_DISTRO/setup.sh && \
    colcon build \
    --install-base $IMAGE_WS_DIR/$APP_NAME

# ==== ROS Robot Runtime Image ====
# In the final stage, we will copy the staged install directory to the runtime
# image.
# =================================
FROM ros-robomaker-application-base AS ros-robomaker-app-runtime-image
ARG APP_NAME
ARG USERNAME
ARG GAZEBO_VERSION

ENV USERNAME=$USERNAME
ENV APP_NAME=$APP_NAME
ENV GAZEBO_VERSION=$GAZEBO_VERSION

RUN rm -rf $IMAGE_WS_DIR/src
COPY --from=ros-robomaker-build-stage $IMAGE_WS_DIR/$APP_NAME $IMAGE_WS_DIR/$APP_NAME

# Add the application source file to the entrypoint.
WORKDIR /
COPY entrypoint.sh /entrypoint.sh
RUN sudo chmod +x /entrypoint.sh && \
    sudo chown -R $USERNAME /entrypoint.sh && \
    sudo chown -R $USERNAME $IMAGE_WS_DIR/$APP_NAME

ENTRYPOINT ["/entrypoint.sh"]

The Dockerfile you just created is an instruction set used to build the Docker images. Read through the comments in the Dockerfile to get a sense of what is being built, and adapt it as necessary for your needs. For ease of development, the Dockerfile is based on the official ROS Docker images maintained by the Open Source Robotics Foundation (OSRF). However, when running in production, you may choose to build the ROS base image from the OSRF source instruction set on GitHub to prevent impact from upstream changes.

Next, create a new file in the helloworld directory called entrypoint.sh and paste in the following contents:

#!/bin/bash
set -e

source "/home/$USERNAME/workspace/$APP_NAME/setup.bash"

if [[ -f "/usr/share/$GAZEBO_VERSION/setup.sh" ]]
then
  source /usr/share/$GAZEBO_VERSION/setup.sh
fi

printenv
exec "${@:1}"

An ENTRYPOINT file is an executable that runs when the Docker container is spawned. We are using an entrypoint to source the ROS workspace, so we can easily run roslaunch commands in AWS RoboMaker. You may want to add your own environment configuration steps to this ENTRYPOINT file.
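For example, if your application expects additional environment variables, you could export them in entrypoint.sh before the exec line. The variable below is hypothetical and only illustrates where such a step would go:

# Hypothetical extra configuration step in entrypoint.sh; replace with any
# environment variables your application actually needs before launch.
export MY_APP_LOG_LEVEL=debug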

Our Dockerfile uses a multi-stage build and integrated caching with Docker BuildKit. Multi-stage builds allow workflows with separate build steps, so the build dependencies and source code are not copied into the runtime image. This reduces the size of the Docker image and improves performance. The caching operations speed up future builds by storing previously built files.
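
To see the multi-stage layout in action, you can stop a build at a named stage with Docker's --target flag. This is an optional debugging technique rather than a required step; the stage name comes from the Dockerfile above, and the image tag is arbitrary:

# Build only through the workspace build stage of the robot application,
# for example to inspect compile errors without producing the runtime image.
DOCKER_BUILDKIT=1 docker build . \
  --target ros-robomaker-build-stage \
  --build-arg ROS_DISTRO=melodic \
  --build-arg LOCAL_WS_DIR=./robot_ws \
  --build-arg APP_NAME=helloworld-robot-app \
  -t robomaker-helloworld-robot-app:build-stage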

Build the robot application with the following command:

DOCKER_BUILDKIT=1 docker build . \
  --build-arg ROS_DISTRO=melodic \
  --build-arg LOCAL_WS_DIR=./robot_ws \
  --build-arg APP_NAME=helloworld-robot-app \
  -t robomaker-helloworld-robot-app

After the robot application has been built, you can build the simulation application as follows:

DOCKER_BUILDKIT=1 docker build . \
  --build-arg GAZEBO_VERSION=gazebo-9 \
  --build-arg ROS_DISTRO=melodic \
  --build-arg LOCAL_WS_DIR=./simulation_ws \
  --build-arg APP_NAME=helloworld-sim-app \
  -t robomaker-helloworld-sim-app

Run the command docker images to confirm the Docker images have been successfully built. The output should resemble the following:

Administrator:~/environment/helloworld (ros1) $ docker images
REPOSITORY                       TAG       IMAGE ID       CREATED          SIZE
robomaker-helloworld-sim-app     latest    5cb08816b6b3   6 minutes ago    2.8GB
robomaker-helloworld-robot-app   latest    b5f6f755feec   10 minutes ago   2.79GB

At this point, you have successfully built your Docker images. It is a good idea to test these locally before uploading them for use with AWS RoboMaker. The next section describes how to do this.

Testing your containers

The following commands let you run the applications in your local development environment.

Launch the robot application:

docker run -it -v /tmp/.X11-unix/:/tmp/.X11-unix/ \
  -u robomaker -e ROBOMAKER_GAZEBO_MASTER_URI=http://localhost:5555 \
  -e ROBOMAKER_ROS_MASTER_URI=http://localhost:11311 \
  robomaker-helloworld-robot-app:latest roslaunch hello_world_robot rotate.launch

Launch the simulation application:

docker run -it -v /tmp/.X11-unix/:/tmp/.X11-unix/ \
  -u robomaker -e ROBOMAKER_GAZEBO_MASTER_URI=http://localhost:5555 \
  -e ROBOMAKER_ROS_MASTER_URI=http://localhost:11311 \
  robomaker-helloworld-sim-app:latest roslaunch hello_world_simulation empty_world.launch
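
If a launch does not behave as expected, one optional way to investigate (not part of the original steps) is to override the entrypoint and open an interactive shell inside the image:

# Start a bash shell in the robot application image instead of running
# entrypoint.sh, so you can inspect the workspace and source it manually.
docker run -it --entrypoint /bin/bash robomaker-helloworld-robot-app:latest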

Once you have confirmed that your containers are functioning properly, you can Publish application containers to Amazon ECR and then Submit a simulation job.