Allow audio on JSMPEG stream and add a `camera.audio` flag
Describe what you are trying to accomplish and why in non-technical terms
After several iterations of features and bug fixes in the Frigate Card, I believe JSMPEG is one of the best options for streaming Frigate cameras in HA: it has low latency, works reliably, has a good UI, and so on.
What is missing is audio support. The Card could support audio just fine, but some changes need to be made here first.
Describe the solution you’d like
The JSMPEG stream from Frigate should include audio. I don’t think the ffmpeg args for jsmpeg need to be configurable, but Frigate needs to be aware, somehow, of whether we want to spend CPU cycles performing audio encoding for a camera when streaming with jsmpeg.
For that, I suggest something like a `camera.audio` flag, which could be enabled/disabled. When enabled, the jsmpeg stream would be set to contain audio.
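As a sketch, the proposed flag might sit at the camera level of the Frigate config. Note that the `audio` option below is hypothetical (it is the subject of this feature request, not an existing Frigate option); the rest follows Frigate's usual camera config shape:

```yaml
cameras:
  back:
    ffmpeg:
      inputs:
        - path: rtsp://192.168.1.10:554/stream  # example address
          roles:
            - detect
            - record
    # Hypothetical flag proposed in this issue: when true, the jsmpeg
    # live stream would be re-encoded with an audio track; when false,
    # Frigate skips audio encoding to save CPU cycles and bandwidth.
    audio: true
```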
Another idea would be to expose an API that the Card could use to request a jsmpeg stream with or without audio. This would save both CPU resources and bandwidth; the Card could then reload the stream when the user presses mute/unmute.
But even if the above is feasible, a `camera.audio` flag could still be useful: the HA integration could leverage this information to provide an extra attribute, and this extra attribute (supports audio or not) could be used for many things, such as deciding whether to hide the mute/unmute button by default in the Card, or filtering out `media_player`s which are only capable of audio (speakers) from the list of targets when casting a camera stream that does not support audio. /cc @dermotduffy
The `camera.audio` flag, if combined with the above, would allow simplifying the list of presets. When `camera.audio` is enabled, the `ffmpeg.preset`s can also behave accordingly.
It would also be required to:
- Include some args in the ffmpeg stream for jsmpeg, like `-c:a mp2`
- Fix the issue mentioned by @blakeblackshear at https://github.com/blakeblackshear/frigate/issues/2911#issuecomment-1063998266
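For context on the `-c:a mp2` argument above: JSMPEG decodes MPEG-TS containing MPEG-1 video and MP2 audio, so an audio-enabled jsmpeg restream would need both codecs in its output args. A hypothetical sketch of what that could look like in Frigate's config (the `jsmpeg` key under `output_args` is illustrative, not an existing Frigate option):

```yaml
ffmpeg:
  output_args:
    # Illustrative key name; Frigate does not currently expose
    # jsmpeg output args in its config.
    # -c:v mpeg1video and -c:a mp2 are the codecs JSMPEG can decode;
    # -f mpegts wraps them in the MPEG-TS container it expects.
    jsmpeg: -f mpegts -c:v mpeg1video -c:a mp2 -b:a 128k
```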
Describe alternatives you’ve considered
Using other live providers, like HA’s built-in LL-HLS (which still has quite a lot of latency, is very slow, and buffers a lot). There are also WebRTC implementations, but since Frigate needs audio to be in AAC format, WebRTC cannot decode it either.
Additional context
Issue Analytics
- Created: a year ago
- Reactions: 1
- Comments: 12 (7 by maintainers)
Top GitHub Comments
Some things need to move around architecturally before this can be done. Currently the jsmpeg feed is generated from frame data after the entire processing pipeline (motion detection, object detection, etc). It doesn’t make sense to try and pass audio data along the same pipeline. It has been on my list for a while to send frames directly from the upstream ffmpeg process instead. This would further reduce the delay and any stuttering due to processing. Once that is done, it should be much easier to add audio.
In general, I am good with this approach for inserting or modifying some ffmpeg parameters programmatically.
I see. By the way, that’s a feature request I have in mind for the future: support for multiple streams for playback.