AprilTags Support on DepthAI/megaAI
Start with the why:
In unstructured environments, DepthAI provides spatial AI capabilities that previously required multiple components (e.g. a depth camera, a host, and a neural processor), a combination that ruled out spatial AI wherever an embedded solution is required (i.e. low size, weight, power, cost, and fast boot time).
In structured environments (e.g. a warehouse), AprilTags are the go-to solution, as they provide 'cheat code'-level information on the relative location (x, y, z) and pose (yaw, pitch, roll) of each tag. They can be placed on objects (boxes) to locate them, on walls or floors for navigation, or on complicated structures to know their pose and location in real time.
So if DepthAI supports AprilTags, it enables this cheat code (fully offloaded from the host) in conjunction with the unique value-add that other solutions can't provide: unstructured 3D localization of objects of interest (e.g. a human).
So then DepthAI could enable, for example, collaborative robotics, where a human (an unstructured object) is followed through a warehouse while DepthAI provides all the AprilTag results, so that the robot knows exactly where it is in physical space as it follows the person. (And all sorts of other collaborative tasks you could imagine.)
This sort of application is exactly why the evGrandPrix and other 1/5th-scale autonomous-racing competitions put AprilTags on their vehicles: it gives each vehicle a cheat code for knowing the relative location (x, y, z) and orientation (yaw, pitch, roll) of the other vehicles.
And most importantly, these tags are so ubiquitously used that there is probably a whole slew of use cases we can't even think of that folks will apply DepthAI to if AprilTags are cleanly supported, particularly at a high frame rate. For example, Peter Mullen reached out this weekend about his application (see https://april.eecs.umich.edu/software/apriltag), wanting AprilTags support. His application is interesting: he mounts the tags on walls so that the camera can self-localize, and his system uses the pixel-space locations of the corners of the AprilTags to further improve the localization results (via SLAM-like techniques).
So we should include his suggested additional output (the pixel-space locations of the detected corners of the AprilTags), detailed below in 'The what':
The how:
Characterize how many resources AprilTag detection takes.
If AprilTag detection looks too 'heavy' for this, then define which stream(s) it needs to be mutually exclusive with. Given the project's claim that "real-time performance can be achieved even on cell-phone grade processors," it will likely be runnable in parallel with the existing DepthAI functions.
The what:
Implement AprilTag detection on DepthAI as a stream that users can optionally enable, ideally while the other current functions (neural inference, depth, etc.) are in use.
Enable a user-selectable optional metadata output (in addition to the 6DoF result that AprilTags provide): the pixel-space coordinates of the four corners of each detected tag, correlated with the tag ID. This lets folks do their own work off that data, either ignoring our 6DoF AprilTag result or combining their own algorithm on top of it.
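As a sketch of what consumers could do with that corner metadata alone, here is a minimal, host-side example (not DepthAI API code; the function name and argument layout are hypothetical) that estimates range and bearing to a tag from its four pixel-space corners using a pinhole camera model. It assumes a roughly fronto-parallel tag, square pixels, and known focal length and tag size:

```python
import math

def tag_range_and_bearing(corners, tag_size_m, focal_px, principal_point):
    """Rough range/bearing to a tag from its 4 pixel-space corners.

    corners: list of 4 (x, y) pixel coordinates in order TL, TR, BR, BL
    tag_size_m: physical edge length of the tag, in meters
    focal_px: camera focal length in pixels (square pixels assumed)
    principal_point: (cx, cy) of the camera, in pixels
    """
    # Average edge length in pixels approximates the tag's apparent size.
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    edges = [dist(corners[i], corners[(i + 1) % 4]) for i in range(4)]
    apparent_px = sum(edges) / 4.0

    # Pinhole model: range ~= focal * real_size / apparent_size.
    range_m = focal_px * tag_size_m / apparent_px

    # Bearing of the tag center relative to the optical axis.
    center_x = sum(x for x, _ in corners) / 4.0
    center_y = sum(y for _, y in corners) / 4.0
    ppx, ppy = principal_point
    azimuth = math.atan2(center_x - ppx, focal_px)    # radians, + = right
    elevation = math.atan2(ppy - center_y, focal_px)  # radians, + = up
    return range_m, azimuth, elevation
```

A full 6DoF solution would instead feed these corners plus the camera intrinsics into a PnP/homography solver, which is exactly the kind of downstream processing the raw-corner output enables.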
Issue Analytics
- State:
- Created: 3 years ago
- Reactions: 1
- Comments: 24 (3 by maintainers)
Any updates? I kinda got this camera specifically for the april tag support it advertised 😕
As of version 2.15, AprilTags support has been mainlined into the depthai library.