GSoC 2021: Machine Learning Extensions to Ignition Gazebo

Hello there :grinning_face_with_smiling_eyes: :wave:, I am Amr Elsersy, a computer engineering student at ASU in Egypt. I was an intern at Open Robotics last year through GSoC 2020, where I developed the Plotting Tool for the Ignition project. It was a great experience to work in such a great environment, so I am very excited to do it again! :zap:

About my skills: I have experience in software development with C++, JavaScript, Qt, and QML, and I also have skills in computer vision, deep learning, and computer graphics basics using WebGL.

I am very interested in working on the “Machine Learning Extensions to Ignition” project, as it combines many topics I am interested in and would be a huge benefit to the community. It will also improve my skills in software development and computer graphics, since I will learn new and interesting things such as OGRE.

I am also very interested in 3D computer vision through some projects I’ve done, and I am willing to extend the project to 3D datasets, such as 3D bounding boxes for 3D object detection models and depth map datasets using sensors::DepthCamera for depth estimation models.

I want to familiarize myself with how the project will be done, so I started with the ignition-rendering tutorials and reading the code in ign-rendering and ign-sensors, and I will start playing with the code to see how the project could be done (I will start after a week, as I have some exams now :sweat_smile:). So if you know of any code or tutorials that could help me, please mention them :grinning_face_with_smiling_eyes:.

Thanks

My GitHub | LinkedIn

I have a problem with declaring a new sensor in the ign-sensors repo.

What structure should I follow to configure the sensor?

I just added .hh and .cc files with a structure similar to the other sensors, but the sensor is not configured in CMake as a library (the CMake build succeeds, but it configures all the sensors except my new one, even though the header file does show up in the include directory of the build folder).

I can also build successfully with colcon.
This is what I wrote:
MySensor.hh
MySensor.cc

We don’t currently support custom sensors declared outside of ign-sensors. See this issue and this PR.

Thanks Louise :smiley:, but I ran into some issues. I followed the PRs, and it works, but not for all sensors (the camera and thermal camera are not working). When I run sensors_demo.sdf, all the sensors work except those two, giving me that error.

Also, I need to implement a rendering sensor, unlike the example.
But anyway, I don’t necessarily need a “custom sensor”; I want to try to implement one of the project’s sensors, so it can be installed in ign-sensors.

I tried to build a small prototype of the bounding box sensor. I put the code temporarily in the CameraSensor class until I am able to create it as one of the ign-sensors :smile:

The idea I used is to get the 3D boxes of the visuals from the scene graph, use the camera parameters (from rendering::Camera) to check whether those 3D boxes are inside the camera frustum, and then project the vertices of each box to convert them into a 2D bounding box.
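To make that concrete, here is a minimal, self-contained sketch of the projection step. The Vec3/Mat4 types and the ProjectBox helper are illustrative names made up for this post; in the real code they would correspond to ignition::math types and the view-projection matrix exposed by rendering::Camera (the sketch also ignores proper near-plane clipping, it just skips corners behind the camera):

```cpp
#include <algorithm>
#include <optional>

struct Vec3 { double x, y, z; };
struct Vec4 { double x, y, z, w; };
struct Mat4 { double m[4][4]; };  // row-major view-projection matrix

// Multiply a point (w = 1) by the view-projection matrix.
Vec4 Transform(const Mat4 &_vp, const Vec3 &_p)
{
  Vec4 r;
  r.x = _vp.m[0][0]*_p.x + _vp.m[0][1]*_p.y + _vp.m[0][2]*_p.z + _vp.m[0][3];
  r.y = _vp.m[1][0]*_p.x + _vp.m[1][1]*_p.y + _vp.m[1][2]*_p.z + _vp.m[1][3];
  r.z = _vp.m[2][0]*_p.x + _vp.m[2][1]*_p.y + _vp.m[2][2]*_p.z + _vp.m[2][3];
  r.w = _vp.m[3][0]*_p.x + _vp.m[3][1]*_p.y + _vp.m[3][2]*_p.z + _vp.m[3][3];
  return r;
}

struct Box2D { double minX, minY, maxX, maxY; };

// Project the 8 corners of an axis-aligned 3D box (min/max corners in the
// world frame) and take the 2D extent in pixel coordinates. Returns
// std::nullopt if the box is not visible at all.
std::optional<Box2D> ProjectBox(const Vec3 &_min, const Vec3 &_max,
    const Mat4 &_viewProj, int _width, int _height)
{
  Box2D box{1e9, 1e9, -1e9, -1e9};
  bool anyVisible = false;

  for (int i = 0; i < 8; ++i)
  {
    Vec3 corner{(i & 1) ? _max.x : _min.x,
                (i & 2) ? _max.y : _min.y,
                (i & 4) ? _max.z : _min.z};
    Vec4 clip = Transform(_viewProj, corner);
    if (clip.w <= 0)  // corner behind the camera plane
      continue;
    anyVisible = true;

    // normalized device coordinates -> pixel coordinates
    double px = (clip.x / clip.w * 0.5 + 0.5) * _width;
    double py = (1.0 - (clip.y / clip.w * 0.5 + 0.5)) * _height;
    box.minX = std::min(box.minX, px);
    box.minY = std::min(box.minY, py);
    box.maxX = std::max(box.maxX, px);
    box.maxY = std::max(box.maxY, py);
  }
  if (!anyVisible)
    return std::nullopt;

  // clamp to the image so boxes extending past the frustum are cut off
  box.minX = std::clamp(box.minX, 0.0, double(_width));
  box.maxX = std::clamp(box.maxX, 0.0, double(_width));
  box.minY = std::clamp(box.minY, 0.0, double(_height));
  box.maxY = std::clamp(box.maxY, 0.0, double(_height));

  // a degenerate box means the visual was entirely off-screen
  if (box.maxX <= box.minX || box.maxY <= box.minY)
    return std::nullopt;
  return box;
}
```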

I’m not sure this is the best approach, as I didn’t use OGRE to get the boxes; all the logic is done via the Scene. What do you think? :smile:

Here is the code I implemented:
Github_Code

The code is in the CameraSensor class, but what I want is a BoundingBox sensor that inherits from CameraSensor (to use its camera parameters and matrices). I spent a lot of time trying to add it inside ign-sensors but couldn’t :sweat_smile:.
So what exactly do I need to modify in SensorFactory/Sensor/SensorTypes/CMakeLists to add it inside ign-sensors?

Yeah, there are still known issues; that’s why we didn’t merge it into Edifice. We may be able to fix it by Fortress :crossed_fingers:

Cool :sunglasses:

I recommend following the pattern of one of the existing rendering sensors.

Another thing to keep in mind is that you’ll probably need to add a custom SDF element to load your new sensor, see Custom elements and attributes.

Hey Amr,

Thanks for expressing interest in GSoC at Open Robotics this year. I’m glad to hear that last year’s program was a positive experience for you! Here is a list of things that I believe would be useful for preparing and familiarizing yourself with the tools and concepts that may be relevant to the Machine Learning Extensions project (I also hope this list will be helpful to other candidates who are interested in learning more about it):

  1. Familiarity with ign-rendering, since we will probably need to produce a semantic image (colored image) with a mapping of pixel to label type (take a look at point #5 below for more information/resources regarding semantic segmentation).
    • Going through the ign-rendering tutorials would be useful. You can work through the examples as well if you’d like to gain greater understanding of how the code works.
    • Most of the rendering work will be done with the Ogre rendering engine (specifically, version 2.1). So, it would also be good to go through some Ogre tutorials, and also take a look at how Ogre2 is used in ign-rendering.
    • Taking a look at the Ogre2SelectionBuffer may provide some ideas for how to produce a semantic image with colored pixels for each label.
  2. Familiarity with ign-gazebo. At a minimum, I’d recommend going through the following tutorials (if you’d like to go through more, go for it!):
    • Terminology
    • Create System Plugins
    • Rendering Plugins
    • GUI Configuration
    • Server Configuration
  3. Familiarity with ign-sensors - in particular, how the current camera sensors work there.
  4. Knowledge of data-related concepts for machine learning:
    • Training data vs validation data vs test data
    • Features and labels
  5. Knowledge of other machine learning concepts:
    • semantic segmentation (the Cityscapes Dataset is a good example of this), including object recognition vs scene understanding
    • bounding boxes
  6. Knowledge of how data is formatted for common/popular machine learning libraries (for example: PyTorch, TensorFlow, scikit-learn)
  7. Familiarity with ign-gui and Qt for building a GUI in Ignition Gazebo that customizes how data is generated (there may not be time for this)

I hope this is helpful. If you have not officially applied to the Open Robotics GSoC program yet, please do so; applications are now open, and we will be reviewing them over the next few weeks.

Cool, I will try it. Thanks Louise :smile:

Thanks Ashton! That is really helpful :smiley:. I will start following these tutorials and then come back :zap:

Hello :smile:, I’ve made a small version of the labels camera :zap:

I’ve spent the time since then learning OGRE, reading the ign-rendering code, and following the rest of the tutorials.

I used an approach similar to the one in Ogre2SelectionBuffer:

  • making a basic compositor workspace to render the scene to a render texture
  • switching the materials through a MaterialSwitcher listener at pre-render, then restoring the original materials at post-render
  • making a class that inherits from MaterialSwitcher but changes the coloring by applying random colors (on each pass I reset the pseudo-random number generator with the same seed, so it generates the same sequence of random numbers every frame; see the sketch right after this list)
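Here is a minimal sketch of that seeding trick (the function name and seed value are made up for illustration; the real logic lives in the MaterialSwitcher subclass). Because the generator is reset with the same seed before every frame, the i-th visual always receives the i-th color in the sequence, so instance colors stay stable across frames as long as the visuals are visited in the same order:

```cpp
#include <cstdint>
#include <random>
#include <vector>

struct Color { uint8_t r, g, b; };

// Generate one color per visual; called once per frame at pre-render.
std::vector<Color> InstanceColors(std::size_t _numVisuals)
{
  // fixed seed -> identical color sequence every frame
  std::mt19937 gen(112358);
  std::uniform_int_distribution<int> dist(0, 255);

  std::vector<Color> colors;
  colors.reserve(_numVisuals);
  for (std::size_t i = 0; i < _numVisuals; ++i)
  {
    colors.push_back({static_cast<uint8_t>(dist(gen)),
                      static_cast<uint8_t>(dist(gen)),
                      static_cast<uint8_t>(dist(gen))});
  }
  return colors;
}
```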

Here is the code: Github
I also added an example of that camera: Github

So that approach covers instance segmentation data. For semantic segmentation, we can have another class derived from MaterialSwitcher that colors the objects based on their type (and this type may be selected by the user, perhaps via the UserData, as with the thermal camera).
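As a rough sketch of that label-based coloring (the palette and label strings below are just illustrative, borrowed from Cityscapes-style classes; in the real sensor the label would come from the visual’s user data):

```cpp
#include <cstdint>
#include <map>
#include <string>

struct Color { uint8_t r, g, b; };

// Every visual carrying the same label gets the same fixed color,
// which is what a semantic segmentation dataset needs.
Color SemanticColor(const std::string &_label)
{
  // fixed palette keyed by label type (Cityscapes-style example values)
  static const std::map<std::string, Color> palette = {
    {"car",        {  0,   0, 142}},
    {"person",     {220,  20,  60}},
    {"building",   { 70,  70,  70}},
    {"vegetation", {107, 142,  35}},
  };
  auto it = palette.find(_label);
  return it != palette.end() ? it->second : Color{0, 0, 0};  // black = unlabeled
}
```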
