Gazebo faster-than-realtime operation?

Hi all, I know this has been asked in various ways over the years, but I couldn’t find any great answers. I’m testing a few Velodyne LIDAR sensors in Gazebo11 with ROS2 and trying to understand the behavior I’m seeing when I set Gazebo to run as fast as possible.

When running normally (real-time factor, or RTF, at 1.0), I get LIDAR data from my three sensors at ~30Hz, checked with ‘ros2 topic hz’ in a terminal. When I set Gazebo’s Physics ‘real time update rate’ from 1000.0 to 0 (which should make it run as fast as possible), the RTF goes up to ~16, yet the data I see in RViz is clearly not 16x as fast. Additionally, ‘ros2 topic hz’ shows the output rate at only roughly 2x, and the message count received in RViz doesn’t seem to agree with either 16x or 2x. Please see this short video for a demonstration, and let me know if you have issues viewing it.

What’s going on here? What does an RTF of 16x actually mean, and why is the LIDAR data not truly coming in 16 times as fast?

Thank you. I also asked this on the other forum but it seems like that one gets no attention these days, based on the recent posts :frowning: Is there a Gazebo discord or anything with more active users?

The RTF measures the physics update rate, but that may not be the same update rate as the sensors. You can use the new Lockstep of physics and sensors feature to ensure that sensors and physics run in sync, which will essentially slow down physics.

You may also be interested in the new Performance metrics tools to check the sensor update rate directly on Gazebo without the need to check it with ROS.

Thank you very much for the links, I will take a closer look into the lockstep option and how it affects performance. I did check the performance metrics within the Gazebo GUI and have a few follow-up questions:

  1. Apologies if this is basic and I’m overlooking it, but are these messages internal to Gazebo (like /gazebo/performance_metrics) also accessible outside the GUI via CLI?
  2. With the nominal Physics update rate of 1000.0 Hz, at RTF = 1.0 the three LIDARs I’m simulating each output data at 10 Hz. This is consistent between the performance metrics in Gazebo and checking with ‘ros2 topic hz’.
  3. With the Physics update rate again set to zero, my RTF is ~17. The performance metrics show a “real_update_rate” of ~75 Hz on average, while the “sim_update_rate” is only ~4 Hz on average. ‘ros2 topic hz’ also shows ~75 Hz, but judging by the message count in rviz2 (unless that’s not a reliable way to check frequency?), the point cloud is clearly being updated faster than 10 Hz but nowhere near 75 Hz. What actually controls the ROS2 output rate, and why does ‘ros2 topic hz’ appear to be inconsistent with rviz2?

Any insight would be appreciated, as we’d really like to run faster than real time; compared to similar simulators (e.g. Webots), faster-than-realtime operation is a bit confusing in Gazebo. I will now test with lockstep, and if that answers my questions I’ll edit this post accordingly.

Ha, we should have mentioned it in the tutorial. You can echo those messages with the gz topic CLI. You can also view them in the GUI via Window → Topic Visualization.

I’m not sure what the question is. What are the LIDAR’s update rates as set on SDF?

I don’t have any immediate insights for you here, I’d have to research a bit on how the ros2 CLI and rviz2 are calculating the frequency.

Thanks! Very useful. I was trying ‘ign topic’ but I guess that’s specific to Ignition Gazebo which I was also testing recently.

Sorry, the second “question” was actually just setting up for my final question re: the difference in RTF/data output rates when running with the Physics update rate set to zero. I’m using this ROS2 port of the velodyne_simulator package with the update rates set to their default 10Hz.

Thanks, maybe this is more a question about ROS2 than Gazebo in that case. Perhaps I’m wrong in stating that the sensor output rate differs between ROS2 CLI/rviz2 and Gazebo; I will check this more rigorously. I’m pretty sure the CLI tool (ros2 topic hz) is correct, and rviz2’s message count is wrong.

I guess I’m still a little confused about how the real_time_factor, real_update_rate, and sim_update_rate are all related based on the docs you linked. Is the following summary correct?

  • sim_update_rate is the sensor update rate in simulation time, ie how many times the sensor data is updated per simulated second
  • real_time_factor is the rate of simulated time versus real (system clock) time, which is also the rate at which the physics is updated
  • real_update_rate is the sensor update rate in real time, calculated as RTF * sim_update_rate. This is what we should see when measuring the data output rate from Gazebo with the ROS2 CLI, since ‘ros2 topic hz’ appears to measure in real time
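If that summary is right, the relationship can be sketched in a few lines (the names here are illustrative, not part of Gazebo or ROS2):

```python
# Hypothetical sketch of the summary above; function and variable names
# are illustrative, not a Gazebo or ROS2 API.

def real_update_rate(sim_update_rate_hz: float, rtf: float) -> float:
    """Sensor output rate as seen on the wall clock (e.g. by 'ros2 topic hz')."""
    return rtf * sim_update_rate_hz

# Nominal case: a 10 Hz sensor at RTF = 1.0 arrives at 10 Hz in real time.
print(real_update_rate(10.0, 1.0))  # 10.0

# Unlocked physics: RTF ~17 but sim_update_rate collapsed to ~4 Hz,
# giving ~68 Hz in real time -- in the ballpark of the ~75 Hz observed.
print(real_update_rate(4.0, 17.0))  # 68.0
```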

If this sounds correct, then my goal is to keep sim_update_rate fixed to the desired rate (which lockstep should accomplish) while also increasing RTF as high as possible. I think this is what users would expect by default when trying to run faster-than-realtime; it’s pretty confusing to see RTF>1 but see the output rate has actually dropped below the nominal value, in my opinion.

Does this all make sense? Am I missing anything which could help for faster-than-realtime operation?

Yes, it all sounds reasonable to me.

There are many factors that affect simulation performance, so it’s hard to tell without knowing the exact use case. In general, it helps to limit your sensor rates to just what you need, tune the physics update rate to be as low as possible while keeping the simulation stable, and optimize collisions, contacts, etc.

Thank you for the confirmation! I think I have a good understanding of these topics now.

@chapulina I’m doing some further experiments with Gazebo in which I’ve written a subscriber to the ~/diagnostics message in order to use the RTF to set the system clock speed with libfaketime. However, I just noticed that the RTF reported in this message is often very different from the one displayed in the bottom toolbar in Gazebo.

For example, see this image of a simulation in which I added some random objects and lowered the physics update rate to 500 Hz; the RTF displayed in Gazebo is 0.5, but the RTF published on ~/diagnostics is very slowly dropping from 1.0 toward this value. Is there some kind of smoothing of the published RTF, or am I misinterpreting something here?

Thanks again for all the help.

RTF is calculated differently in a few places inside Gazebo.

The one from diagnostics is calculated here, and as you can see it’s just an instantaneous accumulated simTime / realTime.

The one on the GUI is calculated on the client here using the last 20 messages received in the ~/world_stats topic.

For completeness, I should also mention that the RTF displayed with the gz stats tool is calculated somewhere else, but in a similar manner to the GUI.
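As a rough illustration of the difference (a sketch, not Gazebo’s actual code): the accumulated style divides total sim time by total real time since startup, while the windowed style averages the ratio over the last N stats messages:

```python
from collections import deque


def accumulated_rtf(total_sim_s: float, total_real_s: float) -> float:
    # Diagnostics-style: total sim time / total real time since the
    # simulation started, so old history dominates recent changes.
    return total_sim_s / total_real_s


class WindowedRTF:
    # GUI-style: average RTF over the last `window` samples, e.g. the
    # last 20 ~/world_stats messages, so it tracks recent changes quickly.
    def __init__(self, window: int = 20):
        self.samples = deque(maxlen=window)

    def add(self, sim_dt_s: float, real_dt_s: float) -> float:
        self.samples.append(sim_dt_s / real_dt_s)
        return sum(self.samples) / len(self.samples)
```

This is why a sudden slowdown shows up almost immediately in the GUI but only gradually in the diagnostics value.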


I can see how this is all opaque to end-users. Hopefully this summary will be helpful to other people :slightly_smiling_face:

Thanks for the insights! So it sounds like the RTF computed from ~/world_stats is smoothed with a moving average? This is a little strange, because I see the opposite effect: the diagnostics RTF (which is instantaneous) lags the world_stats RTF (which is smoothed). For example, when I set the physics update rate from 1000 Hz to 500 Hz, the world_stats (GUI) RTF drops very quickly to 0.5 as expected, yet the diagnostics RTF slowly creeps down. I even collected both sim/real time from ~/diagnostics and sim/real time from ~/world_stats and used them to compute instantaneous RTFs, and both agree with the diagnostics RTF; see here, ignoring the jaggedness of the calculated RTFs, which is due to me not using the nanoseconds fields.
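For what it’s worth, the jaggedness from dropping the nanoseconds fields goes away if both fields are combined before differencing; a minimal sketch (assuming the usual sec/nsec layout of the time fields):

```python
def to_seconds(sec: int, nsec: int) -> float:
    # Combine a sec/nsec pair into fractional seconds; truncating to whole
    # seconds is what makes a computed RTF look jagged at these timescales.
    return sec + nsec * 1e-9

def instantaneous_rtf(prev, curr) -> float:
    # prev and curr are (sim_sec, sim_nsec, real_sec, real_nsec) samples
    # taken from consecutive ~/diagnostics or ~/world_stats messages.
    sim_dt = to_seconds(curr[0], curr[1]) - to_seconds(prev[0], prev[1])
    real_dt = to_seconds(curr[2], curr[3]) - to_seconds(prev[2], prev[3])
    return sim_dt / real_dt

# 0.25 s of sim time over 0.5 s of real time -> RTF of 0.5
print(instantaneous_rtf((10, 0, 20, 0), (10, 250_000_000, 20, 500_000_000)))
```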

Maybe I’m missing something here (and it’s not a huge deal for me to understand what’s going wrong), but I just wanted to note that the behavior I observe for these different RTFs is not consistent with what I’d expect based on the code.

Sorry, I may have misled you by using the word “instantaneous” up there. I should have said “accumulated” or something like that. I think it’s expected that the diagnostics RTF drops slowly, because it’s dividing the total sim time by the total real time, counted from the beginning of the simulation.

So if during the first 10 mins you had a perfect 1 RTF, then the update rate drops by half, by 12 mins real time, you’ll have run 11 mins sim time and your accumulated RTF is 11/12.
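Checking that arithmetic in a couple of lines:

```python
# Worked example from above: 10 min of real time at RTF 1.0, then the
# update rate halves (RTF 0.5) for 2 more real minutes.
sim_minutes = 10.0 + 2.0 * 0.5   # 11 minutes of sim time accumulated
real_minutes = 10.0 + 2.0        # 12 minutes of real time elapsed

accumulated = sim_minutes / real_minutes
print(accumulated)               # 11/12 ~= 0.917

# The windowed/GUI RTF would already read 0.5, but the accumulated value
# only creeps toward 0.5 as more real time passes at the lower rate.
```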

Ah I see, that makes perfect sense now, sorry for my misunderstanding. The ~/world_stats computation is more an “instantaneous” RTF in practice, and the ~/diagnostics RTF is indeed “accumulated.” It would actually be useful to publish the “instantaneous” RTF in one of these messages, but as long as one understands the difference, I suppose it can be easily computed. Thanks again.