SDFormat data-bindings for Python (all SDF versions)

Essentially what it says in the title. I always wanted Python-native data bindings for SDFormat XML, and over the last two weeks, I finally decided to make it happen. As a side effect of this effort, I also created (versioned) XSD 1.1 schemata for all SDFormat versions (SDFormat can not be efficiently described by XSD 1.0). Both the bindings and the schemata are part of ropy, and I have an open PR upstream with the schema generation script that - at the time of this writing - awaits feedback from the ignition team.

I am aware that Python has xml.etree and lxml.etree, which can parse SDF (since it is XML); however, the bindings come with IntelliSense/auto-complete, validation (statically via mypy or at runtime via pydantic/marshmallow), and automatic population of default values.

Another advantage of being class-based is automatic type conversion. Once another upstream dependency releases its new version in the next couple of days, you can replace the parsing logic of specific SDF elements (classes) with your desired parsing logic. With this, you can, for example, directly turn any vector (mainly thinking about poses) into tuples or numpy arrays. You could split them into a pose.rotation (which you can further convert into a quaternion) and a pose.translation, or turn them into PyTorch/TensorFlow tensors. You write parsing functions for the elements of your choice, inject them into the parser, and it (should) handle the rest … alternatively, you just roll with the default and get the type that SDF promises to give you.
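To make this concrete, here is a standalone sketch of such a converter for pose elements; it is plain Python/numpy and does not depend on the bindings' actual injection API (which may look different):

```python
import numpy as np

def parse_pose(pose: str):
    """Split an SDF pose string 'x y z roll pitch yaw' into a
    translation vector and a rotation vector (numpy arrays)."""
    values = np.array(pose.split(), dtype=float)
    return values[:3], values[3:]

translation, rotation = parse_pose("0 0 2.5 0 0 0")
# translation -> (0, 0, 2.5); rotation -> (0, 0, 0)
```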

Here is a code example (without type conversion):


from ropy.ignition.sdformat.bindings.v18 import sdf
import ropy.ignition as ign
from ropy.ignition.sdformat.bindings import v18
from pathlib import Path

# declarative style
# nesting objects inside the constructor
ground_plane = v18.model.Model(
    static=True,
    name="ground_plane",
    link=[
        v18.Link(
            collision=v18.Collision(
                v18.Geometry(plane=v18.Geometry.Plane(normal="1 0 0", size="1.4 6.3"))
            ),
            visual=v18.Visual(
                v18.Geometry(plane=v18.Geometry.Plane(normal="0 1 0", size="2 4"))
            ),
        )
    ],
)

# procedural style
# also available as v18.ModelModel (due to name collision within SDF spec)
box = v18.model.Model()
box.pose = v18.model.Model.Pose("0 0 2.5 0 0 0", relative_to="ground_plane")
box.link.append(v18.Link())
box.link[0].name = "link"
# set-up collision
box.link[0].collision.append(v18.Collision())
box.link[0].collision[0].name = "collision"
box.link[0].collision[0].geometry = v18.Geometry(box=v18.Geometry.Box(size="1 2 3"))
box.link[0].collision[0].surface = v18.Collision.Surface()
box.link[0].collision[0].surface.contact = v18.Collision.Surface.Contact(
    collide_bitmask=171
)
# set-up visual
box.link[0].visual.append(v18.Visual())
box.link[0].visual[0].name = "box_vis"
box.link[0].visual[0].geometry = v18.Geometry(box=v18.Geometry.Box(size="1 2 3"))

root = v18.Sdf(world=[v18.World(name="shapes_world", model=[ground_plane, box])])

# elements are populated with default values where applicable
print(f"Gravity set to: {root.world[0].gravity}")
# Gravity set to: 0 0 -9.8

# and of course you can serialize to SDF
sdf_string = ign.sdformat.dumps(root, format=True)

# and write the string into a file
Path("my_world.sdf").write_text(sdf_string)

# or (more old-school) via open(...)
with open("my_world.sdf", "w") as sdf_file:
    print(sdf_string, file=sdf_file)


# loading is similarly straightforward
sdf_string = Path("my_world.sdf").read_text()
root = ign.sdformat.loads(sdf_string)

# or again via open(...)
with open("my_world.sdf", "r") as sdf_file:
    sdf_string = sdf_file.read()
root = ign.sdformat.loads(sdf_string)

# Note that ropy will, by default, auto-detect the SDF version from the sdf
# element's version attribute (all versions are supported). Alternatively, you
# can manually override the auto-detection with a specific version.
sdf_reloaded = ign.sdformat.loads(sdf_string, version="1.6")

# you can assign the same object in multiple places to link the properties
box_geometry = v18.Geometry()
box_geometry.box = v18.Geometry.Box("1 2 3")
box.link[0].collision[0].geometry = box_geometry
box.link[0].visual[0].geometry = box_geometry

# and change both properties at the same time
box_geometry.box.size = "2 5 2"
print(ign.sdformat.dumps(root, format=True))

# however this linking will (of course) not survive serialization.

I’ll also lobby briefly for ropy’s other big feature: ropy.transform. ropy.transform is a numpy-based version of ROS TF2. The main differences are that it works with arbitrary directed graphs (they can be circular; it is not restricted to trees), works in N dimensions, and is fully vectorized/tensorized.

You can do 6-dimensional rotations, sure, but at the moment I mainly use it to neatly transform end-effector/gripper positions into camera coordinates (pixels) with a line like cam_pos = gripper_frame.transform(x, "camera_1"). Projections, too, work between arbitrary dimensions, so it should be possible to create some nice joint-space visualizations if anybody has a cool idea.

Being fully vectorized means that you can toss in batches (of points or joint configurations) and you will get full SIMD support (SSE/AVX/…). The same will happen if both points and configurations are batches, and you obviously get the familiar numpy broadcasting support. You can use a multi-threaded BLAS/LAPACK backend for further acceleration, or - it’s numpy based - toss it into dask to scale it to clusters, or use @tf.numpy_function on it to make it part of your ML model (not GPU accelerated). Python is pretty flexible here. Oh, and it is also possible, if you lobby for it, to create a torch/tf backend for ropy.transform, which would allow GPU acceleration and backprop through the graph. (Sorry for the TL;DR; I’m quite nerdy when it comes to HPC.)
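To illustrate what batching and broadcasting buy you (plain numpy here, not ropy’s API):

```python
import numpy as np

# a single rigid transform: 90 degree rotation about z, then a translation
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

# a batch of 10000 points, shape (10000, 3)
points = np.random.rand(10000, 3)

# one vectorized call transforms the entire batch; numpy dispatches this
# to SIMD-accelerated (and, via BLAS, possibly multi-threaded) kernels
transformed = points @ R.T + t
assert transformed.shape == (10000, 3)
```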

The reason I’m mentioning all this is that my next step (after adding <include> resolution with fuel support) is to write a function that converts parsed SDFormat into a ropy.transform graph. Without too much effort, this could mean that you can fetch your ignition simulation as SDF and dump it onto the GPU for some computation (probably RL/ML). Alternatively, you can fetch the simulation state and efficiently do forward kinematics for a couple thousand potential joint configurations in your new IK/planning algorithm. Is this useful? I have no idea … but it sounds pretty cool, so I thought it’s fun to point out.

Note: The original version had many more links to all the related projects. However, I am apparently not allowed to add more than two per post … so they are all gone now :frowning:


That’s great! The team is currently focusing on Python interfaces to Ignition. We’re tackling Ignition Math and Ignition Gazebo though, so it’s great to see others working on SDFormat.

Sweet! Are you going to use the Ignition Fuel-Tools C++ API, the ign fuel command line tool, or something else?

That’s an interesting use case, I’d love to see a project using this.

If efficiency is a concern, I’d expect a C++ solution to be preferred over Python though. Could you elaborate on why someone may prefer using Python for this?

Sweet! Are you going to use the Ignition Fuel-Tools C++ API, the ign fuel command line tool, or something else?

Option 3 (something else). My current codebase does not have a direct C++ dependency, and it seems like overkill to introduce one just to download zip files. The CLI could be an option, but then I need to worry about the location of the executable, the user’s path, etc. Instead, I simply talk directly to the REST API that I found (https://app.ignitionrobotics.org/api). The swagger schema doesn’t validate, but - if I understand correctly - all I need are the endpoints for downloading models and worlds, which can be as simple as

import requests
from urllib.parse import quote


def download_fuel_model(url: str) -> bytes:
    """Download a model from the Fuel Server.

    Parameters
    ----------
    url : str
        The URL of the model. This is the same as the URL used for
        include elements in SDF files.

    Returns
    -------
    blob : bytes
        A gzip compressed blob containing the model files.

    """

    # fuel_model_metadata is a companion helper (not shown here) that queries
    # the REST API for the model's metadata (owner, name, version)
    metadata = fuel_model_metadata(url)
    username = metadata.owner.lower()
    model_name = quote(metadata.name)
    version = metadata.version

    base_url = f"https://fuel.ignitionrobotics.org/1.0/{username}/models/{model_name}/{version}/"
    zip_url = base_url + f"{model_name}.zip"

    result = requests.get(
        url=zip_url,
        stream=True,
        headers={"accept":"application/zip"}
    )
    result.raise_for_status()
    blob = result.content

    return blob

You can then write the blob to disk (it’s a zip file), or directly consume it in-memory via something like model_zip = ZipFile(BytesIO(blob)). I’d like to add a decent caching mechanism to this, but I am aware that fuel does this, too, so I am currently trying to figure out the format so that I can perhaps share it or combine it with Python’s caching abilities.
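For illustration, here is the fully in-memory path end to end, with a synthetic blob standing in for an actual Fuel download:

```python
from io import BytesIO
from zipfile import ZipFile

# build a fake model blob in memory (stands in for the downloaded bytes)
buffer = BytesIO()
with ZipFile(buffer, "w") as archive:
    archive.writestr("model.sdf", "<sdf version='1.8'></sdf>")
blob = buffer.getvalue()

# consume the blob without ever touching the filesystem
model_zip = ZipFile(BytesIO(blob))
sdf_content = model_zip.read("model.sdf").decode()
assert sdf_content == "<sdf version='1.8'></sdf>"
```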

Could you elaborate on why someone may prefer using Python for this?

Sure. In my experience, HPC (in numerics) boils down to applying a sequence of kernel calls from your favorite numerics library of choice to your data. Once your algorithm is expressed as such a sequence, Python vs C++ doesn’t make a large difference, because in both cases the kernels are implemented in C (or Fortran if you use BLAS directly) and are highly, highly optimized.

The memory layout of your data then becomes the primary performance bottleneck. Touching RAM takes eons and allocating new memory is even worse. Long story short, you want to be able to feed your data from one kernel into the next without doing any copies and you want to avoid allocating new memory whenever possible.

The question then becomes which framework gives you access to enough relevant kernels so that you can build your algorithm from these blocks in a zero-copy fashion. Here numpy is pretty untouchable, because the base version is at least as feature-complete as MATLAB (if not more so), and you can always throw in another library (scipy and most of the scikit-X stuff) to get domain-specific kernels. You do get some constant overhead from touching the Python layer after each kernel call (often not critical), but even that can be eliminated by using a JIT compiler like numba or by writing that one function in cython.
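As a small illustration of the zero-copy point (generic numpy, not specific to any of the libraries above): most ufuncs accept an `out=` argument, so a chain of kernel calls can reuse one preallocated buffer instead of allocating a fresh array per step:

```python
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
out = np.empty_like(a)  # allocated once, reused across kernel calls

# each call writes into `out` in place; no intermediate arrays are created
np.multiply(a, b, out=out)
np.add(out, 1.0, out=out)

# `out` now holds a*b + 1 without any temporary allocations
assert np.allclose(out, a * b + 1.0)
```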

For writing your own kernels, C definitely beats plain Python, but that need doesn’t arise very often, and then you still get to write cython if you choose to do so, or if the library demands it. For example, I wrote this kernel for scikit-image a while back, because the out-of-the-box image convolution kernel didn’t scale well to big kernels.

In summary, I don’t think one language can beat the other once your algorithm streams data (i.e., is vectorized) and is zero-copy. What’s more important is getting to that point (fast), which boils down to how familiar you are with your framework and how well it is documented.


In that case, you may be interested in Pit Crew, it’s a Python script that does exactly that, but it’s geared mainly towards Gazebo classic, so you may want to make some tweaks.

Thanks, I didn’t know about them, so I will definitely check them out. I’m sure that, if the licence allows, there are some nuggets that I can … faithfully borrow :wink:

I’ve written the fuel interface today and it is working (and heavily cached), but I still need to write tests and an SDF walker that (perhaps recursively) resolves includes.

That’s maintained by us at Open Robotics, the old URL from the osrf org still redirects there :wink: https://github.com/osrf/traffic_editor/

It’s Apache 2 like most of our stuff, so go for it :grinning_face_with_smiling_eyes:

Adding a brief update: It is now possible to download fuel models.

While there is a simple ign.download_fuel_model(url) function that takes a URL and gives you the corresponding gzipped blob, you may be more interested in the more sophisticated ign.get_fuel_model(url), which returns the content of specific files within the model. You can control which file via the optional kwarg file_path='model.sdf'.

If you turn off all the caches, then you can fetch and consume models without ever touching the filesystem. If you do use caching, then the download itself will be cached in memory (unless you disable that), so repeatedly accessing different files within a model will be very fast after the initial download.

On top of that, you get the familiar file cache (the one ign-fuel-tools uses, located at ~/.ignition/fuel), which is active by default, though you can change its location. You also get another in-memory cache that stores the inflated content of the files you do end up accessing, so that repeated access is lightning-fast, e.g., if you have a world that makes heavy use of the same model, or a model that includes the same fragment many times. Of course, you can turn each of these off individually if undesired.

Finally, if that isn’t enough caching for you, you can supply your own caching mechanism, which will wrap the call. For example, you might want more sophisticated eviction of models: check the fuel server whether a new model version is available and update accordingly, but serve the downloaded/cached one otherwise. Checking can, btw, be done conveniently via ign.get_fuel_model_info(url).


Models can have dependencies (specified in model.config or metadata.pbtxt). Look e.g. at https://app.ignitionrobotics.org/OpenRobotics/fuel/models/CTU_CRAS_NORLAB_ABSOLEM_SENSOR_CONFIG_8 .

Otherwise, we did some robot-terrain interaction machine learning, so it might be interesting to consider using this package instead of a custom-created model for the ML part in a future extension.

Models can have dependencies (specified in model.config or metadata.pbtxt)

Based on what I found here (Gazebo : Tutorial : Model structure and requirements), those would be other models, right? That would still be the “download model zip” endpoint, just called recursively for the dependency tree of the model. It is from the documentation for gazebo classic though, so things may have changed. That said, if something is missing, it can easily become a new feature ;D

we did some robot-terrain interaction machine learning

I must admit that I only read the abstract, so I may misunderstand. It sounds like you are estimating a heightmap from sensor data; what sensors did you use? It sounds a bit like a laser scanner or RGBD camera (given that you are talking about reflectance/absorption issues), which could indeed be an interesting use case.

The transformation of both kinds of sensors is “invertible”. Given the sensor measurement, you can work out an object’s world coordinates relative to the sensor, as opposed to just being able to work out the expected measurement given an object’s relative world coordinates. This means you could write your own link function for the transform module and - assuming you are able to localize the robot within the terrain - express the sensor data in the coordinate frame of the terrain (sensor.transform(array_of_measurements, "ground")).

You can spin this further by adding a custom link/transformation to the ground plane that transforms (x, y, z) data into a heightmap. You can then transform your sensor data directly into a heightmap à la sensor.transform(array_of_measurements, "heightmap").
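That (x, y, z) → heightmap step could, for example, be a simple binning of points into a grid that keeps the maximum height per cell (a standalone numpy sketch, not ropy’s API):

```python
import numpy as np

def points_to_heightmap(points, cell_size=0.5, shape=(4, 4)):
    """Bin (x, y, z) points into a grid, keeping the max z per cell.
    Assumes all points fall inside the grid (non-negative x/y)."""
    heightmap = np.full(shape, -np.inf)
    ix = (points[:, 0] // cell_size).astype(int)
    iy = (points[:, 1] // cell_size).astype(int)
    # unbuffered in-place maximum: handles several points per cell correctly
    np.maximum.at(heightmap, (ix, iy), points[:, 2])
    return heightmap

points = np.array([[0.1, 0.1, 1.0], [0.2, 0.2, 3.0], [1.6, 0.4, 2.0]])
hmap = points_to_heightmap(points)
# cell (0, 0) holds max(1.0, 3.0) = 3.0; cell (3, 0) holds 2.0
```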

I think you got it right. The only difference is IMO that model.config is deprecated and metadata.pbtxt is its replacement (that should be a textual representation of this Protobuf message).

We use a rotating 2D lidar (so that it provides slow, long-duration 3D scans). But the idea of the research is that surfaces like grass can look untraversable in lidar but actually be traversable - and the algorithm should learn to recognize this. I think it could be nicely simulated in Gazebo using terrain that has different visual and collision elements.