Plugin for event cameras

Hi all,

I would love to have an event camera plugin for Gazebo. In event cameras, each pixel operates independently and fires an event once a pre-set brightness-change threshold is exceeded. This implies that there is no concept of conventional frames anymore; absolute brightness information is not available. For more info, please see Wikipedia or this amazing survey.
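For context, the per-pixel model most simulators use is a contrast threshold on log intensity: a pixel emits an event whenever its log brightness has changed by more than a threshold C since the last event at that pixel. Here is a minimal, Gazebo-independent sketch of that idea; the threshold, frame layout and function names are my own choices for illustration, and the caller would need to initialise the per-pixel state from the first frame:

```cpp
#include <cmath>
#include <vector>

// One event: pixel location, timestamp, and polarity (+1 brighter, -1 darker).
struct Event {
  int x, y;
  double t;
  int polarity;
};

// Per-pixel contrast-threshold model: compare the current log intensity
// against the log intensity stored at the last event for that pixel and
// emit events while the difference exceeds the threshold C.
std::vector<Event> updatePixels(const std::vector<double>& frame,   // linear intensities, row-major
                                std::vector<double>& lastLogI,      // per-pixel state (log intensity at last event)
                                int width, int height, double t, double C) {
  std::vector<Event> events;
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      const int idx = y * width + x;
      const double logI = std::log(frame[idx] + 1e-6);  // small offset avoids log(0)
      double diff = logI - lastLogI[idx];
      // A single large brightness change can produce several events.
      while (std::abs(diff) >= C) {
        const int pol = diff > 0 ? +1 : -1;
        events.push_back({x, y, t, pol});
        lastLogI[idx] += pol * C;
        diff = logI - lastLogI[idx];
      }
    }
  }
  return events;
}
```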

There have been some attempts to simulate event cameras, including rpg_esim and v2e (video2events) [apologies, I’m only allowed two links …]. These tools typically take a video as input, upsample it temporally with a slow-motion (frame interpolation) approach, and then simulate the events from the interpolated frames.

I think it would be amazing to instead have a plugin for Gazebo that simulates event cameras directly without the intermediate step of using a video.

Unfortunately, I am not well versed in Gazebo / writing plugins for Gazebo. However, I am willing to learn, have good knowledge of ROS (as one of the RoboStack maintainers) and potentially could get a student to work on it, if I get some pointers on where to start / what would be needed for this project. I’d also be more than happy to collaborate with someone who has similar interests.

Best wishes,
Tobi

Hi Tobi, I can see three possible ways to start:

  1. Render a frame each simulation step (usually at 250 Hz or 1000 Hz) and compute diffs between consecutive frames. This would probably be the best approximation of a real event camera, but it would be super slow (real-time factor probably well below 1%). A rough sketch of this frame-diff logic follows after this list.

  2. Start your work off of LogicalCameraSensor. It maintains a frustum and checks for collisions between models and this frustum. So if each pixel of your camera created such a frustum, you would have something like a binary event camera. Unfortunately, the current logical camera only checks whether the center point of each model is inside the frustum, which is probably not what you want.

  3. Start your work off of PerformerDetector. It maintains an axis-aligned box and reports whenever this box intersects with some performer. Performers are special kinds of models “tagged” as performers. If you’d change the AABB to a frustum, you might get something interesting.

I’m not sure whether options 2 and 3 would actually be faster than the image-based approach, but they’re a way to try…
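For option 1, the inner loop could look roughly like the sketch below: keep the previous rendered frame, and when a new frame arrives, assume the log intensity changed linearly between the two frames so each event can be given a sub-frame timestamp (this softens the error of a limited render rate). The variable names, frame layout and linear-in-time assumption are mine, and the actual Gazebo sensor/plugin hookup for obtaining rendered frames is deliberately omitted:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Event { int x, y; double t; int polarity; };

// Frame-diff event generation with sub-frame timestamps.
// prevLog/currLog hold per-pixel log intensities of the previous and current
// rendered frames (at times tPrev < tCurr); lastLogI is the per-pixel level at
// which the last event fired. Event timestamps are linearly interpolated
// between the two frames.
std::vector<Event> framesToEvents(const std::vector<double>& prevLog,
                                  const std::vector<double>& currLog,
                                  std::vector<double>& lastLogI,
                                  int width, int height,
                                  double tPrev, double tCurr, double C) {
  std::vector<Event> events;
  for (int i = 0; i < width * height; ++i) {
    const double slope = (currLog[i] - prevLog[i]) / (tCurr - tPrev);
    double diff = currLog[i] - lastLogI[i];
    while (std::abs(diff) >= C) {
      const int pol = diff > 0 ? +1 : -1;
      lastLogI[i] += pol * C;
      // Time at which the (assumed linear) log intensity crossed this level.
      double tEvent = tCurr;
      if (std::abs(slope) > 1e-12)
        tEvent = std::clamp(tPrev + (lastLogI[i] - prevLog[i]) / slope, tPrev, tCurr);
      events.push_back({i % width, i / width, tEvent, pol});
      diff = currLog[i] - lastLogI[i];
    }
  }
  return events;
}
```

This is only a sketch of the diffing logic; how cheaply the frames themselves can be rendered at high rates inside Gazebo is the real bottleneck mentioned above.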

Hi @peci1,

Apologies for my late reply! I really appreciate these starting points, that’s great. I have written up a student project to investigate this; hopefully someone will pick it up :slight_smile:.

Regarding the first suggestion, I realised that this is actually what the Neurorobotics Platform people do (see Bitbucket); however, the update rate there is set to 60 Hz to keep it computationally feasible, so it’s probably not a great approximation.