Working for Open Robotics as a Google Summer of Code 2021 intern was an amazing experience, full of learning and commitment in a great working environment. It is my second time at Open Robotics, as I was a GSoC intern last year and worked on the Plotting Tool project.
I would like to thank my mentor @adlarkin for guiding and helping me throughout the project.
The project is about developing a computer vision dataset generation feature for the Ignition simulator, so that users can generate datasets to train machine learning / deep learning models for computer vision applications. The project includes developing sensors for semantic & instance segmentation datasets and 2D / 3D object detection datasets.
Segmentation Camera Sensor
The segmentation sensor provides semantic and panoptic / instance segmentation images from the simulation world data, and publishes them via ign-transport messages. It provides both a colored map (for visualization) and a labels map (for ML training usage).
Semantic Segmentation
In the semantic image, objects with the same label have the same color in the colored map and the same label ID in the labels map.
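The relationship between the two maps can be sketched with a small Python snippet: any deterministic label-to-color mapping turns a labels map into a colored map where equal labels always get equal colors. The mapping below is purely illustrative; the actual palette used by the sensor may differ.

```python
def label_to_color(label_id):
    """Map a label ID to a deterministic RGB color.

    Illustrative only: the sensor's real colored map may use a
    different palette, but the invariant is the same -- the same
    label always maps to the same color.
    """
    return ((label_id * 53) % 256,
            (label_id * 97) % 256,
            (label_id * 151) % 256)

def colorize(labels_map):
    """Turn a 2D semantic labels map into an RGB colored map."""
    return [[label_to_color(label) for label in row] for row in labels_map]

# Two pixels with label 1 share a color; labels 2 and 0 get different ones.
colored = colorize([[1, 1],
                    [2, 0]])
```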
Panoptic / Instance Segmentation
In the panoptic image, each pixel has 3 values: one for the label ID and two for the instance count of that label. So each object has a unique color in the colored map and a unique triplet of values in the labels map.
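A short sketch of how such a 3-value pixel can be decoded, assuming the label ID sits in the first value and the two remaining values form a 16-bit instance count (an assumption for illustration; check the sensor's documentation for the exact channel layout):

```python
def decode_panoptic_pixel(pixel):
    """Decode one panoptic labels-map pixel into (label_id, instance_id).

    Assumed layout (illustrative): pixel = (label, instance_hi, instance_lo),
    where the two instance bytes form a 16-bit per-label instance count.
    """
    label_id, inst_hi, inst_lo = pixel
    instance_id = (inst_hi << 8) | inst_lo
    return label_id, instance_id

# Label 3, instance count 260 (0x0104) of that label.
print(decode_panoptic_pixel((3, 1, 4)))  # (3, 260)
```

Splitting the instance count over two 8-bit channels is what allows more than 256 instances of the same label in one image.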
Bounding Box Camera Sensor
There are 2 types of 2D bounding boxes: a visible box, which encloses only the visible part of the object, and a full box, which encloses the whole object even if part of it is invisible.
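One piece of the difference is easy to sketch: a full box may extend beyond the image frame, while anything reported as visible must lie inside it. The snippet below only clips a box to the image bounds; the sensor's visible box also accounts for occlusion by other objects, which this illustration does not model.

```python
def clip_box_to_image(box, width, height):
    """Clip a full 2D box (x_min, y_min, x_max, y_max) to the image frame.

    Illustrative sketch only: the real visible box also excludes parts
    occluded by other objects, not just parts that fall off-screen.
    """
    x_min, y_min, x_max, y_max = box
    x_min, y_min = max(x_min, 0), max(y_min, 0)
    x_max, y_max = min(x_max, width), min(y_max, height)
    if x_min >= x_max or y_min >= y_max:
        return None  # box lies entirely outside the image
    return (x_min, y_min, x_max, y_max)

# A box hanging off the top-left of a 640x480 image gets clipped.
print(clip_box_to_image((-10, 20, 50, 700), 640, 480))  # (0, 20, 50, 480)
```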
3D Boxes
Oriented 3D bounding boxes in camera coordinates.
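As a sketch of what "oriented" means here: an oriented box is defined by a center, extents, and a rotation, from which the 8 corner points follow. The snippet below uses a yaw-only rotation to keep it short (the sensor's boxes carry a full 3D orientation, typically as a quaternion); all names are illustrative.

```python
import math

def oriented_box_corners(center, size, yaw):
    """Return the 8 corners of an oriented 3D box (illustrative sketch).

    center: (x, y, z) box center in camera coordinates
    size:   (sx, sy, sz) full extents along the box's local axes
    yaw:    rotation about the z axis in radians; a full orientation
            would use a quaternion, yaw-only keeps the sketch short
    """
    cx, cy, cz = center
    sx, sy, sz = size
    c, s = math.cos(yaw), math.sin(yaw)
    corners = []
    for dx in (-sx / 2, sx / 2):
        for dy in (-sy / 2, sy / 2):
            for dz in (-sz / 2, sz / 2):
                # Rotate the local corner offset by yaw, then translate
                # to the box center.
                corners.append((cx + c * dx - s * dy,
                                cy + s * dx + c * dy,
                                cz + dz))
    return corners

# An axis-aligned 2x2x2 box at the origin has corners at (+/-1, +/-1, +/-1).
corners = oriented_box_corners((0, 0, 0), (2, 2, 2), 0.0)
```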
SDF Models Annotation
Users should annotate the models (give them a label) in the SDF world so that the sensors can see them; unlabeled objects are considered background.
Example of the SDF format used to label a model included from Fuel:
```xml
<include>
  <pose>...</pose>
  <uri>...</uri>
  <plugin filename="ignition-gazebo-label-system"
          name="ignition::gazebo::systems::Label">
    <label>3</label>
  </plugin>
</include>
```
Models annotation / labeling is explained briefly in the tutorials.
To generate dataset samples from these sensors, we add the <save> tag to the sensor's <camera> tag, and it will save the dataset samples in the given path.
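A minimal sketch of what that looks like, based on the `<save>` element of the SDF camera specification; the sensor name, type string, and output path here are illustrative placeholders:

```xml
<sensor name="segmentation_camera" type="segmentation">
  <camera>
    <!-- Save each generated dataset sample under the given path. -->
    <save enabled="true">
      <path>segmentation_data</path>
    </save>
  </camera>
</sensor>
```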
There are tutorials that explain how to use these sensors, how to annotate the models, and how to generate the datasets.
- BoundingBox Camera rendering in review
- Segmentation Camera rendering in review
- SDF sensors types merged
- New ign-msgs for bounding boxes merged
- Mesh vertices to oriented box math in review
- BoundingBox Sensor in review
- Segmentation Sensor in review
- Label System in Gazebo in review
- OGRE rendering engine
- gtest for unit / integration testing
- APIs to generate samples automatically by positioning objects and sensors in random positions
- Depth map datasets generation
Conclusion
The machine learning extension to Ignition provides users with segmentation and object detection datasets from the simulation for computer vision applications in robotics.