Running Gazebo on a remote machine using Docker

Hi all,
our project is the following:

We want to run automated tests of several ROS2 features on a simulated robot. For this, we need to launch Ignition Gazebo, headless (no GUI), inside of a docker container on a remote machine (which is equipped with a dedicated Nvidia GPU).

First point: although I have not found it explicitly mentioned as a hardware requirement for Ignition Gazebo, it runs significantly worse on integrated graphics, even without the GUI.

Because of that, we need the Docker container, launched remotely over ssh, to use the Nvidia GPU. That is where the problem arises: the container can only access the GPU when it is given the display of a logged-in user, which is highly impractical on a server-like machine.
We also tried creating a virtual display using Xvfb, without success (the GPU was not used by the ign gazebo process).
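For reference, our Xvfb attempt looked roughly like this (the display number :99 and the screen geometry are arbitrary choices, not anything prescribed):

```shell
# Start a virtual framebuffer X server in the background
# (display :99 and the 1280x1024x24 geometry are arbitrary).
Xvfb :99 -screen 0 1280x1024x24 &
XVFB_PID=$!

# Point Ignition at the virtual display and run the simulation server.
# Xvfb renders purely in software, so there is no GPU-backed GL context,
# which is presumably why the Nvidia GPU was never used.
export DISPLAY=:99
ign gazebo -s -v 4 -r visualize_lidar.sdf

# Clean up the virtual display afterwards.
kill "$XVFB_PID" 2>/dev/null || true
```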

Does anybody have experience with a similar issue, or any ideas on how to resolve this? I believe remote headless robotic simulation could be a strong application of Gazebo (and it used to work with the “classic” version, which did not require a GPU).

Check out the headless rendering option: EGL support is available in Ignition Fortress. Also see my comment in that thread for workarounds if you need to run a version of Gazebo that does not support headless rendering.

Running speed on an integrated GPU really depends on the complexity of the world you use and the number, resolution and FPS of the rendering sensors in your simulation. iGPUs usually have very little dedicated memory, which causes constant copying of data between CPU and GPU. If the BIOS of your computer allows dedicating a larger piece of RAM to the GPU, set it to the maximum.

Thank you for your reply.

I believe headless rendering is enabled automatically: even if I do not use the --headless-rendering parameter, I get the log
[Wrn] [] Unable to open display: . Trying to run in headless mode.
and the behavior is the same.

OK, so this way we could achieve more reasonable execution on the integrated GPU; we will definitely try that. However, I am still interested in finding out how to properly utilize the dedicated GPU :)

Just to be sure: you are launching it in nvidia-docker, right?

If you’re not strictly bound to Docker, you can try converting your Docker image to Singularity and running the image via singularity run --nv ....
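For completeness, a rough sketch of that conversion, assuming the image is available to the local Docker daemon under the placeholder name my_gazebo_image:

```shell
# Build a Singularity image from a local Docker image
# ("my_gazebo_image:latest" is a placeholder name).
singularity build gazebo.sif docker-daemon://my_gazebo_image:latest

# Run it with --nv, which binds the host's Nvidia driver libraries
# and device nodes into the container.
singularity run --nv gazebo.sif
```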

I actually wasn’t, but it also does not seem to make a difference. Now I launch (through ssh)

nvidia-docker run -it --net=host --gpus all --env="NVIDIA_DRIVER_CAPABILITIES=all" -v <workspace> <dockerfile>

(the Dockerfile installs ign gazebo, ROS 2, etc.)
and inside of it a simple world

ign gazebo -s --headless-rendering -v 4 -r visualize_lidar.sdf

and nvidia-smi in another terminal does not show Gazebo running on the GPU. It does appear there with the logged-in user/display workaround I mentioned in the original post, even when running plain docker run.

And we are bound to use Docker specifically due to other benefits.

If you install mesa-utils-extra, does eglinfo work inside the container? What does it show? Can you post the output of ls -la /dev/dri from inside the container?
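To make the check concrete, something like this from a shell inside the container (assuming an Ubuntu-based image; on newer releases eglinfo may be packaged differently):

```shell
# Install the EGL diagnostic tool (Ubuntu-based image assumed).
apt-get update && apt-get install -y mesa-utils-extra

# Does EGL initialize, and which device/driver does it report?
eglinfo

# Are the GPU device nodes visible inside the container?
ls -la /dev/dri
```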

You can also look into the contents of ~/.ignition/rendering/ogre2.log to find some info about the graphical system initialization (inside the container).

Hi @ToRy,

Not sure it would fit your use case, but have you considered using the Gazebo snap? Being a self-contained, containerized package, it is easy to install. It would have to be installed outside the Docker container, though, hence my doubt about your use case.