Hello there, I am Amr Elsersy, a computer engineering student at ASU from Egypt. I was an intern at Open Robotics last year through GSoC 2020, where I developed the Plotting Tool for the Ignition project. It was a great experience to work in such a great environment, so I am very excited to do it again!
As for my skills, I have experience in software development using C++, JavaScript, Qt, and QML. I also have skills in computer vision, deep learning, and the basics of computer graphics using WebGL.
I am very interested in working on the "Machine Learning Extension to Ignition" project, as it combines many topics I am interested in and has a huge benefit to the community. It will also improve my skills in software development and computer graphics, since I will learn new and interesting things such as OGRE.
I am also very interested in 3D computer vision through some projects I've done, and I am willing to extend the project to 3D datasets, such as 3D bounding boxes for 3D object detection models and depth map datasets (using sensors::DepthCamera) for depth estimation models.
I want to familiarize myself with how the project will be done, so I started with the ignition-rendering tutorials and reading code in ign-rendering and ign-sensors. I will then start playing with the code to see how the project could be done (I will start after a week, as I have some exams now). If you know of any code or tutorials that could help me, please mention them.
Thanks
My Github / Linkedin
I have a problem with declaring a new sensor in the ign-sensors repo.
What structure should I follow to configure the sensor?
I just added .hh and .cc files with a structure similar to the other sensors,
but the sensor is not configured in CMake as a library (the CMake build finishes successfully, but it configures all the sensors except my new one, even though the header file does end up in the include dir of the build folder).
I can also build successfully with colcon.
This is what I wrote:
MySensor.hh
MySensor.cc
We don't currently support custom sensors declared outside of ign-sensors. See this issue and this PR.
Thanks Louise, but I ran into some issues. I followed the PRs and it works, but not for all sensors (the camera and thermal camera are not working): when I run sensors_demo.sdf, all the sensors work except these two, giving me that error.
Also, I need to implement a rendering sensor, unlike the example.
But anyway, I don't necessarily need a "custom sensor"; I want to try to implement one of the project's sensors, so that it can be installed in ign-sensors.
I tried to build a small prototype of the bounding box sensor. I put the code temporarily in the CameraSensor class until I am able to create it as one of the ign-sensors.
The idea I used is to get the 3D boxes of the visuals from the scene graph and convert them to 2D bounding boxes: using the camera parameters (from rendering::Camera), I check whether each 3D box is inside the camera frustum, then project its vertices and take their extent as the 2D box.
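Roughly, the projection step looks like this. This is a minimal self-contained sketch of the idea only (the real code takes the view-projection matrix from rendering::Camera, and this sketch just drops points behind the camera and clamps to the image instead of doing a full frustum test):

```cpp
#include <algorithm>
#include <array>
#include <optional>

struct Vec3 { double x, y, z; };
struct Box2D { double minX, minY, maxX, maxY; };

// Project a world-space point through a row-major 4x4 view-projection
// matrix into pixel coordinates; returns nothing for points behind the
// camera.
std::optional<std::array<double, 2>> Project(
    const std::array<double, 16> &_m, const Vec3 &_p,
    int _width, int _height)
{
  const double in[4] = {_p.x, _p.y, _p.z, 1.0};
  double clip[4];
  for (int r = 0; r < 4; ++r)
  {
    clip[r] = _m[4 * r + 0] * in[0] + _m[4 * r + 1] * in[1] +
              _m[4 * r + 2] * in[2] + _m[4 * r + 3];
  }
  if (clip[3] <= 0.0)
    return std::nullopt;
  const double ndcX = clip[0] / clip[3];
  const double ndcY = clip[1] / clip[3];
  // NDC [-1, 1] -> pixels, with y flipped.
  return std::array<double, 2>{(ndcX + 1.0) * 0.5 * _width,
                               (1.0 - ndcY) * 0.5 * _height};
}

// Project the 8 vertices of a world-space 3D box and take the extent
// of the projected points, clamped to the image bounds.
std::optional<Box2D> BoxTo2D(const std::array<double, 16> &_viewProj,
    const Vec3 &_min, const Vec3 &_max, int _width, int _height)
{
  Box2D box{1e9, 1e9, -1e9, -1e9};
  bool visible = false;
  for (int i = 0; i < 8; ++i)
  {
    const Vec3 v{(i & 1) ? _max.x : _min.x,
                 (i & 2) ? _max.y : _min.y,
                 (i & 4) ? _max.z : _min.z};
    if (auto px = Project(_viewProj, v, _width, _height))
    {
      visible = true;
      box.minX = std::min(box.minX, (*px)[0]);
      box.minY = std::min(box.minY, (*px)[1]);
      box.maxX = std::max(box.maxX, (*px)[0]);
      box.maxY = std::max(box.maxY, (*px)[1]);
    }
  }
  if (!visible)
    return std::nullopt;
  box.minX = std::clamp(box.minX, 0.0, double(_width));
  box.maxX = std::clamp(box.maxX, 0.0, double(_width));
  box.minY = std::clamp(box.minY, 0.0, double(_height));
  box.maxY = std::clamp(box.maxY, 0.0, double(_height));
  return box;
}
```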
I'm not sure this is the best approach, as I didn't use OGRE to get the boxes; all the logic goes through the Scene. What do you think?
Here is the code I implemented:
Github_Code
The code is in the CameraSensor class, but what I want is a BoundingBox sensor that inherits from CameraSensor (to use its camera parameters and matrices). I spent a lot of time trying to add it inside ign-sensors but couldn't.
So what exactly do I need to modify in SensorFactory/Sensor/SensorTypes/CMakeLists to add it inside ign-sensors?
Yeah, there are still known issues; that's why we didn't merge it into Edifice. We may be able to fix it by Fortress.
Cool
I recommend following the pattern of one of the existing rendering sensors.
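If it helps, the rough shape would be something like the sketch below. The class name and method signatures here are only approximate (they vary between ign-sensors versions), so copy the exact overrides and the CMake setup from whichever existing sensor you mirror:

```cpp
// Sketch only: a bounding box sensor built on top of CameraSensor.
// Check CameraSensor.hh for the exact method signatures to override.
#include <chrono>
#include <ignition/sensors/CameraSensor.hh>

namespace ignition::sensors
{
  class BoundingBoxCameraSensor : public CameraSensor
  {
    /// \brief Parse sensor-specific parameters from SDF.
    public: bool Load(const sdf::Sensor &_sdf) override;

    /// \brief Generate and publish data for the current sim time.
    public: bool Update(
        const std::chrono::steady_clock::duration &_now) override;
  };
}
```

The new sources also need to be listed in the library's CMakeLists.txt the same way the existing sensors' sources are.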
Another thing to keep in mind is that you'll probably need to add a custom SDF element to load your new sensor; see Custom elements and attributes.
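For example, inside the sensor's Load, the custom element could be read roughly like this (the <ignition:box_type> element here is made up for illustration; custom elements need a namespace prefix, as described in that tutorial):

```cpp
// Illustrative only: read a hypothetical <ignition:box_type> custom
// element from the sensor's SDF, falling back to a default.
sdf::ElementPtr elem = _sdf.Element();
std::string boxType{"2d"};
if (elem->HasElement("ignition:box_type"))
  boxType = elem->Get<std::string>("ignition:box_type");
```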
Hey Amr,
Thanks for expressing interest in GSoC at Open Robotics this year. I'm glad to hear that last year's program was a positive experience for you! Below is a list of things that I believe would be useful for preparing and familiarizing yourself with the tools and concepts relevant to the Machine Learning Extensions project (I also hope this list will be helpful to other candidates who are interested in learning more about it):
- Familiarity with ign-rendering, since we will probably need to produce a semantic image (colored image) with a mapping of pixel to label type (take a look at point #5 below for more information/resources regarding semantic segmentation; there is also a small sketch of this pixel-to-label idea after the list).
- Going through the ign-rendering tutorials would be useful. You can work through the examples as well if you'd like to gain greater understanding of how the code works.
- Most of the rendering work will be done with the Ogre rendering engine (specifically, version 2.1). So, it would also be good to go through some Ogre tutorials, and also take a look at how Ogre2 is used in ign-rendering.
- Taking a look at the Ogre2SelectionBuffer may provide some ideas for how to produce a semantic image with colored pixels for each label.
- Familiarity with ign-gazebo. At a minimum, I'd recommend going through the following tutorials (if you'd like to go through more, go for it!):
- Terminology
- Create System Plugins
- Rendering Plugins
- GUI Configuration
- Server Configuration
- Familiarity with ign-sensors - in particular, how the current camera sensors work there.
- Knowledge of data-related concepts for machine learning:
- Training data vs validation data vs test data
- Features and labels
- Knowledge of other machine learning concepts:
- semantic segmentation (the Cityscapes Dataset is a good example of this), including object recognition vs scene understanding
- bounding boxes
- Knowledge of how data is formatted for common/popular machine learning libraries (for example: PyTorch, Tensorflow, Scikit-Learn)
- Familiarity with ign-gui and Qt for building a GUI in Ignition Gazebo that customizes how data is generated (there may not be time for this)
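To make the pixel-to-label mapping from the first bullet a bit more concrete, the consuming side of a semantic image could look roughly like this (illustrative types and names only, not an existing Ignition API):

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// Illustrative only: decode a semantic (colored) image back into
// per-pixel labels using a color -> label mapping.
struct Color
{
  uint8_t r, g, b;
  bool operator<(const Color &_o) const
  {
    return std::tie(r, g, b) < std::tie(_o.r, _o.g, _o.b);
  }
};

std::vector<int> DecodeLabels(const std::vector<Color> &_image,
    const std::map<Color, int> &_colorToLabel)
{
  std::vector<int> labels;
  labels.reserve(_image.size());
  for (const auto &px : _image)
  {
    auto it = _colorToLabel.find(px);
    // Unknown colors fall back to label 0 (background).
    labels.push_back(it == _colorToLabel.end() ? 0 : it->second);
  }
  return labels;
}
```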
I hope this is helpful. If you have not officially applied to the Open Robotics GSoC program yet, please do so - applications are now open, so we will be reviewing them over the next few weeks.
Cool, I will try it. Thanks, Louise!
Thanks Ashton! That is really helpful. I will start following these tutorials and then come back.
Hello, I've made a small version of the labels camera.
I spent the last period learning OGRE, reading the ign-rendering code, and following the rest of the tutorials.
I used a similar approach to the one in Ogre2SelectionBuffer:
- making a basic compositor workspace that renders the scene to a render texture
- switching the materials via a MaterialSwitcher listener at pre-render, then restoring the original materials at post-render
- making a class that inherits from MaterialSwitcher but applies random colors instead (each pass I reset the pseudo-random number generator with the same seed, so it generates the same sequence of random numbers every frame; sketched below)
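The seeding trick boils down to something like this (a simplified standalone sketch, not the listener code itself):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Re-seeding with the same fixed value every frame means the i-th
// visual always receives the i-th color of the sequence, so the
// coloring stays stable across frames.
std::vector<std::array<uint8_t, 3>> FrameColors(std::size_t _count)
{
  std::mt19937 gen(42);  // same seed every frame
  std::uniform_int_distribution<int> dist(0, 255);

  std::vector<std::array<uint8_t, 3>> colors;
  colors.reserve(_count);
  for (std::size_t i = 0; i < _count; ++i)
  {
    colors.push_back({static_cast<uint8_t>(dist(gen)),
                      static_cast<uint8_t>(dist(gen)),
                      static_cast<uint8_t>(dist(gen))});
  }
  return colors;
}
```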
Here is the code: Github
I also added an example of that camera: Github
This approach helps with instance segmentation data. For semantic segmentation, we can have another class derived from MaterialSwitcher that colors objects based on their type (and this type could be selected by the user, perhaps via UserData, as the thermal camera does).
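For example, the semantic variant's coloring could be as simple as a fixed per-label palette (the labels and colors below are just illustrative, loosely following Cityscapes):

```cpp
#include <array>
#include <cstdint>
#include <string>
#include <unordered_map>

using Color = std::array<uint8_t, 3>;

// Illustrative fixed palette: every visual labeled "car" gets the same
// color, regardless of the frame or the instance.
Color LabelColor(const std::string &_label)
{
  static const std::unordered_map<std::string, Color> palette{
      {"car",        {0, 0, 142}},
      {"pedestrian", {220, 20, 60}},
      {"building",   {70, 70, 70}},
  };
  auto it = palette.find(_label);
  return it == palette.end() ? Color{0, 0, 0} : it->second;
}
```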