Feature request: Carla 0.9.2 rosbridge

See original GitHub issue

Multi Client Support in Carla 0.9.x is an incredibly useful feature, and writing a ros_bridge for it is a huge task; I appreciate the efforts. That said, I believe some things could be improved.

  1. The actor_id in Carla and the child id in the ros_bridge are different, and both change every time I relaunch my python script. This in turn changes my topic names and my frame_id every time.

  2. With the current version of the ros_bridge it is very difficult to rename the frame_id and topic names without hardcoding them, and the naming does not follow the ROS conventions. (We should have args in the launch file to set the names.)

  3. Only in the case of the ego_vehicle does the topic name remain the same. But what if I don’t want to control my vehicle with ROS but with another client, for example in scenario_runner for junction_crossing? I need to control two vehicles through the python client and acquire the sensor data from both vehicles through ROS.

  4. All the TF transforms in the Carla 0.9.2 rosbridge are with respect to the /map frame. If we follow the robot TF setup guide for the navigation stack in ROS (link), transforms for the sensors should actually be with respect to the vehicle (if the sensor is attached to the vehicle while spawning in Carla). Some standard ROS code cannot run because it expects the TF base_link --> lidar, but through carla_ros_bridge we get /map --> /base_link and /map --> lidar. How do we get the transform with respect to the vehicle? (A minimal sketch of the kind of lookup such standard code performs follows this list.)
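
To make point 4 concrete, here is a minimal ROS C++ sketch of the lookup that standard navigation-stack code typically performs (the node name is made up; the frame names follow the tutorial convention):

    #include <ros/ros.h>
    #include <tf/transform_listener.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "base_link_lidar_lookup");  // hypothetical node name
      ros::NodeHandle node;

      tf::TransformListener listener;
      tf::StampedTransform transform;

      // Standard code expects to resolve base_link -> lidar. TF chains and
      // inverts links along the tree, so this only succeeds if both frames
      // are connected somewhere in the same TF tree.
      try
      {
        listener.waitForTransform("base_link", "lidar", ros::Time(0), ros::Duration(5.0));
        listener.lookupTransform("base_link", "lidar", ros::Time(0), transform);
      }
      catch (tf::TransformException& ex)
      {
        ROS_ERROR("%s", ex.what());
        return 1;
      }

      ROS_INFO("lidar at (%.2f, %.2f, %.2f) in base_link",
               transform.getOrigin().x(),
               transform.getOrigin().y(),
               transform.getOrigin().z());
      return 0;
    }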

Issue Analytics

  • State: open
  • Created: 5 years ago
  • Comments: 14 (8 by maintainers)

Top GitHub Comments

9 reactions
jjw2 commented, Mar 19, 2020

I’m experiencing the same issue noted in #4. @berndgassmann, @fabianoboril, I’m hoping I can explain why this is indeed an issue and why it would be helpful for it to be resolved.

First, a general discussion: @anshulpaigwar is correct that robot sensor coordinate frame TFs typically do not relate directly to the /map frame, but instead relate to some fixed frame on the vehicle (ex: base_link, though it could be any fixed frame on the vehicle). This is partly because a robot doesn’t automatically know where it is in the world, and must actually use sensor measurements to determine its location, which is what’s referred to as “localization”. Think about it: when a robot starts up, all that’s known is where the sensors are on the robot structure; it is not known where the sensors are relative to the world (or /map). Just from a philosophical perspective, it doesn’t make sense to give a robot simulated sensor data and specify out of the box how that simulated sensor data relates to the world (or /map), because that is not how a robot receives that information in the real world; in the real world, it receives the data with no indication of exactly how that data relates to the world (obviously, GPS is slightly different, but even with GPS, there can be substantial error). The purpose of localization is to use sensor measurements (GPS data, IMU data, lidar returns with respect to a pre-generated 3D map, wheel speeds, and other info) to actually determine the location of the robot in the world. So, prior to being “localized”, you’d have a world-fixed /map coordinate frame and a “tree” of body-fixed frames on the robot (base_link, sensor frames, etc.) whose relationships you already know; the job of localization is to “fill in the gap” by determining the relationship between the map frame and the body-fixed frames (note: an odometry frame comes into play as well, but I won’t get into the details here). See ROS REP 105 for more detail, and check out the “map” frame, where it states that “[i]n a typical setup, a localization component constantly re-computes the robot pose in the map frame based on sensor observations.”

Now on to some technical issues: ROS TF is implemented such that each TF frame can have only one parent (though it can have multiple children). The standard TF tree for most robots, as mentioned above, has sensors related via TF to some body-fixed frame. Since you’ve already made each sensor a child of /map, they can’t be a child of any other frame, which breaks the TF tree for (I would think) the majority of people who have a pre-existing ROS robot setup and want to use CARLA. For example, in my system, my lidar frame is already a child of my base_link frame, so the fact that CARLA makes the lidar frame a child of /map causes issues for me.

Furthermore, any “lookup” of the relationship between sensors on the ROS side of things would need to pass through a dynamic transform to /map, which is just not a good idea (why is beyond scope and this is already long-winded, but check out ROS REP 105 for an explanation of possible discontinuities in positions in the map frame). The relationship between sensors on the car never changes, and thus these relationships should be represented by static transforms, which typically publish much faster and are always consistent, with no changing error. For example, let’s say I’m fusing lidar and camera data together in my perception stack, meaning I need to know the relationship between the location of the camera and the location of the lidar. The way the CARLA ros-bridge TF tree is implemented, the relationship lookup would be processed as camera -> map -> lidar, which doesn’t really make much sense. The lookup should only have to pass through static TF links, such as, say, camera -> lidar directly, or camera -> base_link -> lidar (this all depends on how one sets up their body-fixed TF tree).
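
To illustrate the setup being argued for, here is a minimal sketch (not taken from the ros-bridge; frame names and mounting offsets are invented for the example) that publishes the body-fixed sensor frames as static transforms under base_link, using tf2_ros::StaticTransformBroadcaster:

    #include <ros/ros.h>
    #include <tf2_ros/static_transform_broadcaster.h>
    #include <geometry_msgs/TransformStamped.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "body_fixed_tf_publisher");  // hypothetical node name

      tf2_ros::StaticTransformBroadcaster broadcaster;

      // base_link -> lidar: fixed mounting position, published once and latched.
      geometry_msgs::TransformStamped lidar_tf;
      lidar_tf.header.stamp = ros::Time::now();
      lidar_tf.header.frame_id = "base_link";
      lidar_tf.child_frame_id = "lidar";
      lidar_tf.transform.translation.z = 1.6;   // hypothetical mounting height
      lidar_tf.transform.rotation.w = 1.0;      // identity orientation

      // base_link -> camera: likewise a fixed, never-changing relationship.
      geometry_msgs::TransformStamped camera_tf = lidar_tf;
      camera_tf.child_frame_id = "camera";
      camera_tf.transform.translation.x = 2.0;  // hypothetical forward offset
      camera_tf.transform.translation.z = 1.2;

      broadcaster.sendTransform(lidar_tf);
      broadcaster.sendTransform(camera_tf);

      // With this tree, a camera -> lidar lookup passes only through static
      // links (camera -> base_link -> lidar) and never touches /map.
      ros::spin();
      return 0;
    }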

The real problem here is that you’re actually publishing the TF data out of the box, with no way to modify it without modifying the ros-bridge code. All of the sensor data has been converted correctly and whatnot, but the TF tree is wrong. To summarize the issues:

  1. The way you’ve related everything back to map breaks any existing TF tree in ROS, and also has the potential to cause issues, because TF lookups between sensors would have to pass through the map frame, which has a dynamic relationship with the body-fixed frames (and just kind of doesn’t make sense).
  2. You’ve already “solved” the localization problem by relating everything back to map out of the box in the TF tree, which means I can’t test my localization system in CARLA.

I’d suggest the following as a solution:

  1. Either determine and publish the TF between the base frame (/ego_vehicle) and the respective sensors, or force users to define the relationships between their sensors in ROS and publish their own TF tree. Many ROS users already use URDF files to define the relationships between all the fixed frames on their vehicle (see ROS URDF). This way, you’d still have exactly the same info, but all of the sensors would be related to /ego_vehicle, and then one just needs to relate /ego_vehicle to /map.
  2. Give users the option of publishing the relationship between /map and /ego_vehicle (again, this is simplified, because there’s the /odom frame to consider), or leave this task up to them. This would allow users to run/test their localization systems in CARLA. (A rough sketch of this option follows the list.)
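
As a rough illustration of point 2, here is a hypothetical sketch (node name and pose values are placeholders) of a node that publishes only the dynamic map -> ego_vehicle transform, leaving the body-fixed sensor tree static and user-defined:

    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>
    #include <tf/transform_datatypes.h>

    // Publish only the dynamic map -> ego_vehicle transform; the body-fixed
    // sensor tree (ego_vehicle -> lidar, etc.) stays static and user-defined.
    void publishEgoPose(tf::TransformBroadcaster& broadcaster,
                        double x, double y, double yaw)
    {
      tf::Transform transform;
      transform.setOrigin(tf::Vector3(x, y, 0.0));
      transform.setRotation(tf::createQuaternionFromYaw(yaw));
      broadcaster.sendTransform(
          tf::StampedTransform(transform, ros::Time::now(), "map", "ego_vehicle"));
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "ego_vehicle_tf_publisher");  // hypothetical node name
      ros::NodeHandle node;
      tf::TransformBroadcaster broadcaster;

      ros::Rate rate(50);
      while (node.ok())
      {
        // In a real setup the pose would come from the simulator ground truth
        // or from the user's own localization stack; zeros are placeholders.
        publishEgoPose(broadcaster, 0.0, 0.0, 0.0);
        rate.sleep();
      }
      return 0;
    }
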
2 reactions
berndgassmann commented, Jan 7, 2019

@4 Let me add some additions here. As you already said correctly, we publish the following frames:

  • map (global frame)
  • map -> ego_vehicle (which compares to the ‘base_link’ in the robot tutorial)
  • map -> {ego_vehicle_sensors} (which compares to the ‘base_laser’ in the robot tutorial)
  • map -> {other_vehicles} (which compares to the ‘base_link’ in the robot tutorial for the other robots 😉)

Because every transformation implicitly also defines the inverse transformation, all required transformations are available. Having said that, ego_vehicle -> map, {ego_vehicle_sensors} -> map and {other_vehicles} -> map are implicitly available within TF as well. So you just use tf::TransformListener as usual: waitForTransform(“ego_vehicle”, “ego_vehicle_sensor”, …) followed by lookupTransform(“ego_vehicle”, “ego_vehicle_sensor”, …). Within ROS you can calculate the transformation from any node of the TF tree to any other, so I don’t see how the current ROS bridge implementation would cause problems.

In our AD-Stack code we e.g. receive the CARLA objects and transform them into the vehicle coordinate frame, to make them look as if they were received by some real object sensor (I believe that’s what you are searching for, right?):

    // Look up the transform from the object's frame into the ego_vehicle frame.
    tf::StampedTransform objectToCarTransform;
    mTfListener.lookupTransform("ego_vehicle", object.header.frame_id,
                                object.header.stamp, objectToCarTransform);

    // Convert the object's pose into a TF transform, chain it with the frame
    // transform, and convert the result back into a pose message.
    tf::Transform objectPoseInObjectCS;
    tf::poseMsgToTF(object.pose, objectPoseInObjectCS);
    tf::Transform const objectPoseInCarCSTransform =
        objectToCarTransform * objectPoseInObjectCS;
    geometry_msgs::Pose objectPoseInCarCS;
    tf::poseTFToMsg(objectPoseInCarCSTransform, objectPoseInCarCS);

@1 And yes, currently the frame_id and the actor_id are not the same, but your code should not rely on that, because the frame_id in ROS is 32-bit and the actor_id is 64-bit (coming from Unreal…). We could change the actor_id_registry so that it tries to keep these two the same for as long as possible, but then there would be some rare cases where it does not work (because the actor_id exceeds the 32-bit range), and at that point the error is hard to detect. Therefore, we decided to use an independent id here to make clear that these two numbers are semantically not the same. I would be interested in the actual use-case.
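
To make the trade-off concrete, here is a hypothetical sketch (not the bridge’s actual actor_id_registry) of what “keep the two ids the same for as long as possible” would look like, and where it silently breaks:

    #include <cstdint>
    #include <limits>
    #include <map>

    // Hypothetical registry: reuse the 64-bit CARLA actor_id as the 32-bit
    // ROS-side id while it fits, and invent a fallback id once it doesn't.
    class ActorIdRegistry
    {
    public:
      uint32_t getRosId(uint64_t actor_id)
      {
        auto it = ids_.find(actor_id);
        if (it != ids_.end())
          return it->second;

        uint32_t ros_id;
        if (actor_id <= std::numeric_limits<uint32_t>::max())
          ros_id = static_cast<uint32_t>(actor_id);   // ids match... for now
        else
          ros_id = next_fallback_id_++;               // ids diverge silently here;
                                                      // a fallback id can even
                                                      // collide with a small actor_id

        ids_[actor_id] = ros_id;
        return ros_id;
      }

    private:
      std::map<uint64_t, uint32_t> ids_;
      uint32_t next_fallback_id_ = 1;
    };

The rare, hard-to-detect failure modes in the fallback branch are exactly why an independent id was chosen instead.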

@2 Which frame_ids and topic names exactly don’t follow the ROS conventions and need to be renamed? All topic names concerning the ego_vehicle should be constant, and the sensors make use of the assigned role_name… so you could define the same sensor type with different role names for your specific vehicle setup.
