Feature request carla 0.9.2 rosbridge
Multi-client support in Carla 0.9.x is an incredibly useful feature, and writing a ros_bridge for it is a huge task; I appreciate the efforts. I believe some things could be improved:
1. The actor_id in Carla and the child id in ros_bridge are different, and both change every time I relaunch my Python script. This in turn changes my topic names and my frame_ids every time.
2. With the current version of ros_bridge it is very difficult to rename the frame_ids and topic names without hardcoding them, and the naming does not follow ROS conventions (we should have args in the launch file to set the names).
3. Only in the case of the ego_vehicle does the topic name remain the same. But what if I don't want to control my vehicle with ROS but with another client, for example in scenario_runner for junction_crossing? I need to control two vehicles through the Python client and acquire the sensor data from both vehicles through ROS.
4. All the TF transforms in the Carla 0.9.2 rosbridge are with respect to the /map frame. If we follow the robot TF setup guide for the navigation stack in ROS (link), the transforms for the sensors should actually be with respect to the vehicle (if the sensor is attached to the vehicle while spawning in Carla). Some standard ROS code cannot run because it expects the TF base_link --> lidar, but through carla_ros_bridge we get /map --> base_link and /map --> lidar. How do we get the transform with respect to the vehicle? (A sketch of the transform a typical setup expects is shown below.)
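For reference, here is a minimal sketch of the kind of static transform such tools expect, using the hypothetical frame names "base_link" and "lidar" and a placeholder mounting offset (not part of the ros_bridge). Note that if the bridge already publishes /map --> lidar, broadcasting a second parent for the same frame would conflict, as discussed in the comments below:

```cpp
// Sketch: publish the static base_link -> lidar transform that a typical
// navigation/perception setup expects.  Frame names and the mounting offset
// are placeholders and must be adapted to the actual vehicle configuration.
#include <ros/ros.h>
#include <tf2_ros/static_transform_broadcaster.h>
#include <geometry_msgs/TransformStamped.h>
#include <tf2/LinearMath/Quaternion.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "lidar_static_tf");

  tf2_ros::StaticTransformBroadcaster broadcaster;

  geometry_msgs::TransformStamped tf_msg;
  tf_msg.header.stamp = ros::Time::now();
  tf_msg.header.frame_id = "base_link";   // vehicle body frame (parent)
  tf_msg.child_frame_id  = "lidar";       // sensor frame (child)

  // Example mounting position: 2.4 m above the vehicle origin, no rotation.
  tf_msg.transform.translation.x = 0.0;
  tf_msg.transform.translation.y = 0.0;
  tf_msg.transform.translation.z = 2.4;
  tf2::Quaternion q;
  q.setRPY(0.0, 0.0, 0.0);
  tf_msg.transform.rotation.x = q.x();
  tf_msg.transform.rotation.y = q.y();
  tf_msg.transform.rotation.z = q.z();
  tf_msg.transform.rotation.w = q.w();

  // Static transforms are latched and only need to be sent once.
  broadcaster.sendTransform(tf_msg);
  ros::spin();
  return 0;
}
```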
Top GitHub Comments
I’m experiencing the same issue noted in #4. @berndgassmann, @fabianoboril, I’m hoping I can explain why this is indeed an issue and why it would be helpful for it to be resolved.
First, a general discussion: @anshulpaigwar is correct that robot sensor coordinate frame TFs typically do not relate directly to the /map frame, but instead relate to some fixed frame on the vehicle (e.g. base_link, though it could be any fixed frame on the vehicle). This is partly because a robot doesn't automatically know where it is in the world, and must actually use sensor measurements to determine its location, which is what's referred to as "localization". Think about it: when a robot starts up, all that's known is where the sensors are on the robot structure; it is not known where the sensors are relative to the world (or /map).

Just from a philosophical perspective, it doesn't make sense to give a robot simulated sensor data and specify out of the box how that simulated sensor data relates to the world (or /map), because that is not how a robot receives that information in the real world; in the real world, it receives the data with no indication of exactly how that data relates to the world (obviously, GPS is slightly different, but even with GPS there can be substantial error). The purpose of localization is to use sensor measurements (GPS data, IMU data, lidar returns with respect to a pre-generated 3D map, wheel speeds, and other info) to actually determine the location of the robot in the world.

So, prior to being "localized" you'd have a world-fixed /map coordinate frame and a "tree" of body-fixed frames on the robot (base_link, sensor frames, etc.) whose relationships you already know; the job of localization is to "fill in the gap" by determining the relationship between the map frame and the body-fixed frames (note: an odometry frame comes into play here as well, but I'm not going to get into the details here). See ROS REP 105 for more detail, and check out the "map" frame, where it states that "[i]n a typical setup, a localization component constantly re-computes the robot pose in the map frame based on sensor observations."
Now on to some technical issues: ROS TF is implemented such that each TF frame can only have one parent (though it can have multiple children). The standard TF tree for most robots, as mentioned above, has sensors related via TF to some body-fixed frame. Since you've already made each sensor a child of /map, it can't be a child of any other frame, which breaks the TF tree for (I would think) the majority of people who have a pre-existing ROS robot setup and want to use CARLA. For example, in my system, my lidar frame is already a child of my base_link frame, so the fact that CARLA makes the lidar frame a child of /map causes issues for me.
Furthermore, any "lookup" of the relationship between sensors on the ROS side of things would need to pass through a dynamic transform to /map, which is just not a good idea (why is beyond scope and this is already long-winded, but check out ROS REP 105 for an explanation of possible discontinuities in positions in the map frame). The relationship between sensors on the car never changes, and thus these relationships should be represented by a static transform, which typically publishes much faster and is always consistent, with no changing error. For example, let's say I'm fusing lidar and camera data together in my perception stack, meaning I need to know the relationship between the location of the camera and the location of the lidar. The way the CARLA ros-bridge TF tree is implemented, the relationship lookup would be processed as camera -> map -> lidar, which doesn't really make much sense. The lookup should only have to pass through static TF links, such as, say, camera -> lidar directly, or camera -> base_link -> lidar (this all depends on how one sets up their body-fixed TF tree).
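As a minimal sketch (not from the ros-bridge; the frame names "camera" and "lidar" are placeholders), the lookup call itself is the same either way; the difference is whether TF has to route it through the dynamic /map transform or through a purely static body-fixed chain:

```cpp
// Sketch: query the camera -> lidar relationship through TF.  With the
// current bridge tree the path is camera -> map -> lidar (via a dynamic
// transform); with a body-fixed tree it would be camera -> base_link -> lidar
// (static links only).  Frame names are placeholders.
#include <ros/ros.h>
#include <tf2_ros/transform_listener.h>
#include <tf2/exceptions.h>
#include <geometry_msgs/TransformStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "camera_lidar_lookup");

  tf2_ros::Buffer buffer;
  tf2_ros::TransformListener listener(buffer);

  ros::Rate rate(10.0);
  while (ros::ok())
  {
    try
    {
      // ros::Time(0) asks for the latest available transform.
      geometry_msgs::TransformStamped t =
          buffer.lookupTransform("camera", "lidar", ros::Time(0));
      ROS_INFO("camera -> lidar: [%.3f, %.3f, %.3f]",
               t.transform.translation.x,
               t.transform.translation.y,
               t.transform.translation.z);
    }
    catch (const tf2::TransformException& ex)
    {
      ROS_WARN("%s", ex.what());
    }
    rate.sleep();
  }
  return 0;
}
```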
The real problem here is that you’re actually publishing the TF data out of the box, with no way to modify it without modifying the ros-bridge code. All of the sensor data has been converted correctly and whatnot, but the TF tree is wrong. To summarize the issues:
I’d suggest the following as a solution:
@4 Let me add some additions here. As you already said correctly, we publish the following frames:
- map (global frame)
- map -> ego_vehicle (which compares to the 'base_link' in the robot tutorial)
- map -> {ego_vehicle_sensors} (which compares to the 'base_laser' in the robot tutorial)
- map -> {other_vehicles} (which compares to the 'base_link' in the robot tutorial for the other robots 😉)

Because every transformation implicitly also defines the inverse transformation, all required transformations are available. Having said that, the following are also implicitly available within TF:
- ego_vehicle -> map
- {ego_vehicle_sensors} -> map
- {other_vehicles} -> map

So you just use tf::TransformListener as usual: waitForTransform("ego_vehicle", "ego_vehicle_sensor", ...) and lookupTransform("ego_vehicle", "ego_vehicle_sensor", ...). Within ROS you can calculate the transformation from any node of the TF tree to any other, so I don't see how the current ROS bridge implementation will cause problems.
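A minimal sketch of that lookup pattern, assuming the frame names "ego_vehicle" and "ego_vehicle_sensor" as used above (the actual sensor frame name depends on the setup):

```cpp
// Sketch: look up a sensor frame relative to the vehicle frame using the
// classic tf API.  TF resolves the inverse of the map transforms internally,
// so the chain ego_vehicle_sensor -> map -> ego_vehicle is handled for us.
#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sensor_to_vehicle_lookup");

  tf::TransformListener listener;
  tf::StampedTransform sensorToVehicle;

  // Wait until both legs of the chain have been received, then query it.
  if (listener.waitForTransform("ego_vehicle", "ego_vehicle_sensor",
                                ros::Time(0), ros::Duration(1.0)))
  {
    listener.lookupTransform("ego_vehicle", "ego_vehicle_sensor",
                             ros::Time(0), sensorToVehicle);
    ROS_INFO("sensor in vehicle frame: [%.3f, %.3f, %.3f]",
             sensorToVehicle.getOrigin().x(),
             sensorToVehicle.getOrigin().y(),
             sensorToVehicle.getOrigin().z());
  }
  return 0;
}
```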
In our AD-Stack code we e.g. receive the CARLA objects and transform these into the vehicle coordinate frame to make them look as if they were received by some real object sensor (I believe that's what you are searching for, right?):

```cpp
// Transform from the object's frame into the ego vehicle frame.
tf::StampedTransform objectToCarTransform;
mTfListener.lookupTransform(
    "ego_vehicle", object.header.frame_id, object.header.stamp, objectToCarTransform);

// Object pose in its own coordinate system ...
tf::Transform objectPoseInObjectCS;
tf::poseMsgToTF(object.pose, objectPoseInObjectCS);
// ... chained with the lookup to obtain the pose in the ego vehicle frame.
tf::Transform const objectPoseInCarCSTransform = objectToCarTransform * objectPoseInObjectCS;
geometry_msgs::Pose objectPoseInCarCS;
tf::poseTFToMsg(objectPoseInCarCSTransform, objectPoseInCarCS);
```
@1 And yes, currently the frame_id and the actor_id are not the same, but your code should not rely on that, because the frame_id in ROS is 32-bit and the actor_id is 64-bit (coming from Unreal…). We could change the actor_id_registry so that it tries to keep these two the same as long as possible, but then there would be some rare cases where it does not work (because the actor_id exceeds the 32-bit range), and it would be hard to detect the error at that point. Therefore, we decided to use an independent id here to make clear that these two numbers are semantically not the same. I would be interested in the actual use-case.
@2 Exactly which frame_ids and topic names don't follow ROS conventions and need renaming? All topic names concerning the ego_vehicle should be constant, and the sensors make use of the assigned role_name, so you could define the same sensor type with different role names for your specific vehicle setup.