Massive Real-Time Scene Generation

Hello!

I have recently been trying to develop a plugin for Gazebo that achieves real-time generation of scenes such as those depicted in the following image:

Essentially, I require a large number of these images (on the order of tens of thousands). Each scene involves spawning and removing models with custom textures and acquiring image data, and so far I can only generate about one scene per second, which is too slow.

Furthermore, the simulation seems to slow down over time, even though the objects themselves appear to have been removed from the World.
I understand Gazebo is not the ideal platform to achieve this, yet this research is a preliminary step in what I hope develops into a useful open-source tool for dataset generation for all kinds of robots.
Right now we are attempting to improve our program, but I fear Gazebo is not suitable for such an application, and performance may be underwhelming.

So, I’d like to hear some feedback regarding whether this seems feasible at all, or if I should drop it and experiment with an engine such as Unity.

I’m an assistant researcher at Vislab-ISR, a systems and robotics lab in Portugal, and am currently working on my master’s thesis.
Thanks in advance.

Those look really trippy :smile:

It’s a bit difficult to tell without looking at some code, but 1 second sounds like a lot of time to load a few textured shapes.

Thanks for the quick reply!

The code is on GitHub but not yet public. Regardless, I can show you what we have so far if you are interested.

In any case, I am using a World Plugin, and it's quite fast to generate the SDF on the fly and spawn these objects. However, it is generally taking over a second to generate the full scene, which involves moving a virtual camera and light source around, saving images, and so on (plus I have to ensure each object is in the scene before acquiring images, etc.). My main concern is that there is a clear performance hit after a few spawn/despawn loops: the program becomes unresponsive for a few seconds between scenes, and overall synchronisation issues start to arise.
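For context, the spawning side currently looks roughly like this (a heavily simplified sketch; the class and helper names are placeholders, not the actual code):

```cpp
#include <sstream>
#include <string>

#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>
#include <ignition/math/Pose3.hh>

namespace gazebo
{
  class SceneSpawner : public WorldPlugin
  {
    public: void Load(physics::WorldPtr _world, sdf::ElementPtr /*_sdf*/) override
    {
      this->world = _world;
    }

    // Build an SDF string for a simple box and insert it into the world.
    public: void SpawnBox(const std::string &_name,
        const ignition::math::Pose3d &_pose)
    {
      std::ostringstream sdf;
      sdf << "<sdf version='1.6'><model name='" << _name << "'>"
          << "<static>true</static>"
          << "<pose>" << _pose << "</pose>"
          << "<link name='link'><visual name='visual'>"
          << "<geometry><box><size>1 1 1</size></box></geometry>"
          << "</visual></link></model></sdf>";
      this->world->InsertModelString(sdf.str());
    }

    private: physics::WorldPtr world;
  };

  GZ_REGISTER_WORLD_PLUGIN(SceneSpawner)
}
```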

Do you by any chance have some insight into how the memory management works behind the scenes, and whether there is some design aspect that limits performance for this application?

You seem to have some experience regarding dynamic object spawning, so your input is very much appreciated.

Do you need physics at all? If you're just spawning shapes in a static scene and taking screenshots, I'd recommend you keep the world paused and use a VisualPlugin so everything stays on the rendering side.
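A bare-bones visual plugin would look roughly like this (just a skeleton, names are made up):

```cpp
#include <gazebo/gazebo.hh>
#include <gazebo/rendering/rendering.hh>

namespace gazebo
{
  // Everything happens on the rendering side, so physics can stay paused.
  class SceneVisualPlugin : public VisualPlugin
  {
    public: void Load(rendering::VisualPtr _visual, sdf::ElementPtr /*_sdf*/) override
    {
      this->visual = _visual;
      this->scene = _visual->GetScene();
      // Spawn / move / retexture visuals through this->scene from here on.
    }

    private: rendering::VisualPtr visual;
    private: rendering::ScenePtr scene;
  };

  GZ_REGISTER_VISUAL_PLUGIN(SceneVisualPlugin)
}
```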

Another thing to try is not to generate SDF strings and then parse them every time; you may be able to create shapes using the C++ API directly.
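Something along these lines might work (untested sketch; I'm assuming the built-in `unit_box` mesh and a stock Gazebo material here):

```cpp
#include <string>

#include <gazebo/rendering/rendering.hh>
#include <ignition/math/Pose3.hh>

// Hypothetical helper: create a box visual directly through the rendering
// API instead of building and parsing an SDF string.
gazebo::rendering::VisualPtr SpawnBoxVisual(gazebo::rendering::ScenePtr _scene,
    const std::string &_name, const ignition::math::Pose3d &_pose)
{
  gazebo::rendering::VisualPtr vis(
      new gazebo::rendering::Visual(_name, _scene));
  vis->Load();
  vis->AttachMesh("unit_box");      // one of Gazebo's built-in unit meshes
  vis->SetMaterial("Gazebo/Grey");  // any OGRE material script name
  vis->SetWorldPose(_pose);
  return vis;
}
```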

It sounds like some things may not be properly cleaned up every cycle. Maybe moving to rendering-only could improve that. Also make sure you release any pointers you may be holding so they can be properly destroyed.
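For example, at the end of each cycle, something like this (rough sketch, assuming you keep your visuals in a container):

```cpp
#include <vector>

#include <gazebo/rendering/rendering.hh>

// Detach the visuals from the scene and drop our own references
// so they can actually be freed.
void ClearSceneVisuals(gazebo::rendering::ScenePtr _scene,
    std::vector<gazebo::rendering::VisualPtr> &_visuals)
{
  for (auto &vis : _visuals)
    _scene->RemoveVisual(vis);  // detach from the scene graph
  _visuals.clear();             // release our shared_ptr references
}
```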

Hope that helps!


The fastest would IMO be to load a bunch of objects once and then just move them around and change their scale/material/orientation.
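Roughly like this (a sketch, assuming you keep VisualPtr handles to the pre-loaded objects):

```cpp
#include <string>

#include <gazebo/rendering/rendering.hh>
#include <ignition/math/Pose3.hh>
#include <ignition/math/Vector3.hh>

// Reuse a pre-loaded visual between scenes instead of respawning it:
// just re-pose, re-scale and re-texture it.
void RandomizeVisual(gazebo::rendering::VisualPtr _vis,
    const ignition::math::Pose3d &_pose,
    const ignition::math::Vector3d &_scale,
    const std::string &_material)
{
  _vis->SetWorldPose(_pose);
  _vis->SetScale(_scale);
  _vis->SetMaterial(_material);
}
```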

Thank you, I think this will indeed help a lot!
I think it will take some time for me to implement these changes, but I will start working on it and I’ll post the results once it’s done.

Also, @peci1, I considered this approach too, but it was unclear to me how we could change an object once it was spawned. Is that also done with a VisualPlugin, or with a ModelPlugin?
Thanks!

A ModelPlugin should be ok if you go with models and not just visuals.
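A minimal sketch of that idea (names are placeholders):

```cpp
#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>
#include <ignition/math/Pose3.hh>

namespace gazebo
{
  // Keep the model loaded and just re-pose it between scenes
  // instead of deleting and respawning it.
  class ReposePlugin : public ModelPlugin
  {
    public: void Load(physics::ModelPtr _model, sdf::ElementPtr /*_sdf*/) override
    {
      this->model = _model;
    }

    public: void MoveTo(const ignition::math::Pose3d &_pose)
    {
      this->model->SetWorldPose(_pose);
    }

    private: physics::ModelPtr model;
  };

  GZ_REGISTER_MODEL_PLUGIN(ReposePlugin)
}
```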


It has been a while, but the code is finally public on GitHub :smiley:!

I have gone with the visual plugin approach and overall performance has increased quite a lot!
Thanks once more, @chapulina and @peci1, you have already helped a great deal.

I have improved synchronization by binding callback functions to an update Event, as seen in the provided plugin examples (e.g. the Visual plugin connects to the PreRender event).
However, I still run into some synchronization issues.
For instance, in my application I send a command to a plugin instance via a Proto message and only reply after the action has been performed in the plugin itself, in the safe environment of the callback function.
This is basically how I ensure that the objects have moved to a new position, so that I know I really have a new scene to capture.
Yet sometimes I request a frame capture from a camera plugin and find that the scene has not actually changed yet.
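For reference, the pattern I'm using is roughly the following (heavily simplified; the topic names and message type are placeholders, not my actual interface):

```cpp
#include <functional>
#include <mutex>

#include <gazebo/gazebo.hh>
#include <gazebo/common/Events.hh>
#include <gazebo/msgs/msgs.hh>
#include <gazebo/rendering/rendering.hh>
#include <gazebo/transport/transport.hh>
#include <ignition/math/Pose3.hh>

namespace gazebo
{
  // Request -> apply-in-callback -> acknowledge, so the client only proceeds
  // once the scene has actually been updated.
  class MoveOnCommandPlugin : public VisualPlugin
  {
    public: void Load(rendering::VisualPtr _visual, sdf::ElementPtr /*_sdf*/) override
    {
      this->visual = _visual;

      this->node = transport::NodePtr(new transport::Node());
      this->node->Init();
      this->sub = this->node->Subscribe("~/scene_cmd",
          &MoveOnCommandPlugin::OnRequest, this);
      this->pub = this->node->Advertise<msgs::Pose>("~/scene_cmd_done");

      this->connection = event::Events::ConnectPreRender(
          std::bind(&MoveOnCommandPlugin::OnPreRender, this));
    }

    // Store the request; don't touch the scene from the transport thread.
    private: void OnRequest(ConstPosePtr &_msg)
    {
      std::lock_guard<std::mutex> lock(this->mutex);
      this->pendingPose = msgs::ConvertIgn(*_msg);
      this->pending = true;
    }

    // Apply the change inside the render callback, then acknowledge it.
    private: void OnPreRender()
    {
      std::lock_guard<std::mutex> lock(this->mutex);
      if (!this->pending)
        return;
      this->visual->SetWorldPose(this->pendingPose);
      this->pending = false;
      this->pub->Publish(msgs::Convert(this->pendingPose));
    }

    private: rendering::VisualPtr visual;
    private: transport::NodePtr node;
    private: transport::SubscriberPtr sub;
    private: transport::PublisherPtr pub;
    private: event::ConnectionPtr connection;
    private: std::mutex mutex;
    private: ignition::math::Pose3d pendingPose;
    private: bool pending{false};
  };

  GZ_REGISTER_VISUAL_PLUGIN(MoveOnCommandPlugin)
}
```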

Is my assumption wrong that the pose has actually been updated by the time a call like Visual::SetWorldPose returns?
Or could it just be that the camera is holding a buffered frame from between scenes?
If so, is it possible to explicitly synchronize cameras to the World update Event on top of waiting for the OnNewFrame callback?

Thanks once more to this awesome community!