Second PCI device of Dual Edge TPU m.2 card visible in container but not detected in Frigate
Describe the bug
The second PCI device of a Dual Edge TPU M.2 card is visible in the container but not detected by Frigate.
Version of frigate
Output from /api/version: 0.8.4-5043040
Config file
detectors:
  coral0:
    type: edgetpu
    device: pci:0
  coral1:
    type: edgetpu
    device: pci:1
Frigate container logs
* Starting nginx nginx
...done.
frigate.app INFO : Creating directory: /tmp/cache
Starting migrations
peewee_migrate INFO : Starting migrations
There is nothing to migrate
peewee_migrate INFO : There is nothing to migrate
frigate.mqtt INFO : MQTT connected
frigate.app INFO : Camera processor started for camera0: 43
frigate.app INFO : Camera processor started for camera1: 44
frigate.app INFO : Camera processor started for camera2: 47
frigate.app INFO : Capture process started for camera0: 49
frigate.app INFO : Capture process started for camera1: 51
frigate.app INFO : Capture process started for camera2: 53
detector.coral1 INFO : Starting detection process: 38
frigate.edgetpu INFO : Attempting to load TPU as pci:1
frigate.edgetpu INFO : No EdgeTPU detected.
Process detector:coral1:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 152, in load_delegate
delegate = Delegate(library, options)
File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 111, in __init__
raise ValueError(capture.message)
ValueError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/frigate/frigate/edgetpu.py", line 124, in run_detector
object_detector = LocalObjectDetector(tf_device=tf_device, num_threads=num_threads)
File "/opt/frigate/frigate/edgetpu.py", line 63, in __init__
edge_tpu_delegate = load_delegate('libedgetpu.so.1.0', device_config)
File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 154, in load_delegate
raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1.0
detector.coral0 INFO : Starting detection process: 37
frigate.edgetpu INFO : Attempting to load TPU as pci:0
frigate.edgetpu INFO : TPU found
frigate.app INFO : Stopping...
frigate.object_processing INFO : Exiting object processor...
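For reference, the failing delegate load above can be reproduced outside of Frigate with a short tflite_runtime check run inside the container. This is only a sketch based on the call shown in the traceback (frigate/edgetpu.py calls load_delegate('libedgetpu.so.1.0', device_config)); the device strings mirror the detector config:

# Sketch: attempt to load the Edge TPU delegate for each PCIe device string,
# mirroring the call Frigate makes in frigate/edgetpu.py.
from tflite_runtime.interpreter import load_delegate

for device in ("pci:0", "pci:1"):
    try:
        load_delegate("libedgetpu.so.1.0", {"device": device})
        print(f"{device}: delegate loaded")
    except ValueError as err:
        print(f"{device}: failed to load delegate ({err})")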
Frigate stats N/A (app stops during startup)
FFprobe from your camera N/A
Screenshots N/A
Computer Hardware
- OS: Debian 10 amd64
- Install method: Home Assistant addon with supervisor
- Virtualization: KVM/Qemu
- Coral Version: Dual Edge TPU m.2 PCIe
- Network Setup: Wired
Camera Info: N/A
Additional context
Both cores are visible in the Frigate container:
root@hassio:~# docker exec -ti addon_ccab4aaf_frigate sh -c "ls -l /sys/devices/pci0000\:00/0000\:00\:0*/apex/"
'/sys/devices/pci0000:00/0000:00:0c.0/apex/':
total 0
drwxr-xr-x 3 root root 0 Jul 23 19:05 apex_0
'/sys/devices/pci0000:00/0000:00:0d.0/apex/':
total 0
drwxr-xr-x 3 root root 0 Jul 23 19:05 apex_1
root@hassio:~# docker exec -ti addon_ccab4aaf_frigate sh -c "ls -l /dev/apex*"
crw-rw---- 1 root root 120, 0 Jul 23 18:29 /dev/apex_0
crw-rw---- 1 root root 120, 1 Jul 23 18:29 /dev/apex_1
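A device node can be listed under /dev yet still be blocked by the container's device cgroup rules, so a quick follow-up check (a sketch, not from the original report) is whether both nodes can actually be opened from inside the container:

# Sketch: check that both apex device nodes can be opened, not just listed
# (a visible node can still be denied by the container's device cgroup rules).
import os

for path in ("/dev/apex_0", "/dev/apex_1"):
    try:
        fd = os.open(path, os.O_RDWR)
        os.close(fd)
        print(f"{path}: open OK")
    except OSError as err:
        print(f"{path}: open failed ({err})")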
The first core works fine with just:
detectors:
  coral0:
    type: edgetpu
    device: pci:0
  # coral1:
  #   type: edgetpu
  #   device: pci:1
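As a further comparison, the runtime's own view of the hardware can be listed with pycoral. This is a sketch and assumes pycoral is installed in the container, which the Frigate 0.8.x image may not ship:

# Sketch: enumerate Edge TPUs as the runtime sees them (requires pycoral).
from pycoral.utils.edgetpu import list_edge_tpus

for tpu in list_edge_tpus():
    # Each entry is a dict such as {'type': 'pci', 'path': '/dev/apex_0'}.
    print(tpu)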
Note: after commenting out the second core, the Frigate integration repeatedly logs the following error in Home Assistant:
2021-07-23 19:08:59 ERROR (MainThread) [homeassistant] Error doing job: Task exception was never retrieved
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py", line 134, in _handle_refresh_interval
await self._async_refresh(log_failures=True, scheduled=True)
File "/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py", line 265, in _async_refresh
update_callback()
File "/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py", line 325, in _handle_coordinator_update
self.async_write_ha_state()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 419, in async_write_ha_state
self._async_write_ha_state()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 453, in _async_write_ha_state
state = self._stringify_state()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 425, in _stringify_state
state = self.state
File "/config/custom_components/frigate/sensor.py", line 149, in state
self.coordinator.data["detectors"][self.detector_name][
KeyError: 'coral1'
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Thanks for the tip! It works now by adding the apex_0
Disabling the Home Assistant Supervisor’s Protection mode is no longer needed to get two TPU devices working since the following add-on changes:
stable addon: https://github.com/blakeblackshear/frigate-hass-addons/commit/41d653be5041f094713216397309a0c5a35e7d8d#diff-16128a75ca639145b75f8cbf484f154746f3c3b5b65d5802bab6ed2ad4735d5aR28
beta addon: https://github.com/blakeblackshear/frigate-hass-addons/commit/5d629ff8619d431c015d3baacfd34f9fd4de1f54#diff-633ba7fa57bbc8cc5116ddff75350013658cb27c1bfe4f569c735a669d1af0f9R27