Client-side recording? [Looking for advice]
Background & motivation
I have a use case that is driving me to implement client-side recording on Jitsi Meet (both Web and Mobile clients). The scenario is like this:
- I am working on a podcast with someone remotely. The way we currently do it is to call on Skype (or Jitsi).
- Each of us runs recording software (Audacity and Voicemeeter) on our own computer to record just the microphone audio on that side.
- After the audio call, we collect the recorded files from both sides and merge them into a single audio track.
The reason we do this instead of recording both participants on the same computer is audio quality: the final product has high-fidelity audio without any jitter. In other words, we want every bit of recorded sound to come directly from the microphones, rather than being transmitted over the Internet.
Now this works to some extent, but it is cumbersome. Especially when we want to invite guests to our show, we cannot require them to have the same local-recording setup as we do.
Solution: a rough idea
The idea is to have this distributed recording mechanism built into the audio conference software itself. I initially thought about implementing this from the ground up using WebRTC, but it was too difficult. Jitsi Meet, being open source and built on top of WebRTC, seems an ideal platform for this feature.
So to put the requirements into context:
- Jitsi Meet clients (or my modified version) need to be able to record the audio stream from the audio device while the audio is streaming over the network.
- There needs to be some signalling & synchronisation mechanism that can endure network partitions.
- One of the participants (the owner of the call) will signal the start of the recording, and every participant’s device will start recording individually. Same with stopping the recording.
- The audio call might be interrupted or reconnecting, but the local recording should carry on normally. (Timestamp-based signalling might be the easiest to implement.)
How to collect the audio recording from each participant’s device and merge them would be out of the scope of this issue. (Let’s assume that offline processing part is already implemented.)
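The timestamp-based idea in the last requirement boils down to clock arithmetic: the call owner broadcasts one agreed wall-clock start time, each client schedules its recorder locally, and the offline merge trims each track by the difference between its actual start and the agreed time. A minimal sketch, assuming a rough estimate of each client's clock offset is available (all function names here are hypothetical, not part of any Jitsi API):

```javascript
// Milliseconds to wait before starting the local recorder, given the agreed
// start time (on the moderator's clock) and an estimated offset between this
// client's clock and the moderator's clock. Never negative: if we are late,
// start immediately and trim during the offline merge instead.
function msUntilStart(agreedStartMs, localNowMs, clockOffsetMs = 0) {
  return Math.max(0, agreedStartMs - (localNowMs + clockOffsetMs));
}

// Offline merge step: how much to trim from the head of a track whose
// recorder actually started at actualStartMs, so all tracks line up at the
// agreed start time.
function headTrimMs(actualStartMs, agreedStartMs) {
  return Math.max(0, actualStartMs - agreedStartMs);
}
```

Because the start time is an absolute timestamp rather than a "start now" message, a client that misses the signal due to a network partition can still begin (late) once it reconnects, and the merge step compensates.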
After some initial research, I have found that:
- For the Jitsi Meet web client: I was able to call `getUserMedia()` and create a `MediaRecorder` in the browser console while a Jitsi Meet conference was going on. The `MediaRecorder` worked as expected and did manage to record and save high-quality audio from the microphone. It did not seem to affect the Jitsi call.
- For Android / iOS, it depends. For example, it appears that multiple native Android `MediaRecorder` instances exhibit unexpected behaviour. But I suspect it is possible to reuse the same native `MediaRecorder` for both WebRTC streaming and recording at the same time. I might need to look into https://github.com/oney/react-native-webrtc to figure this out.
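The web-console experiment above can be turned into a small sketch. It assumes nothing about Jitsi's internals: it simply opens a second, unprocessed microphone capture alongside the track WebRTC is already sending, and records it with the browser's `MediaRecorder`. The function names are illustrative, not Jitsi Meet APIs:

```javascript
// Hypothetical local-recording helpers for the web client.
let recorder = null;
const chunks = [];

async function startLocalRecording() {
  // Request a separate, unprocessed capture of the microphone. Disabling the
  // browser's voice processing keeps the recording close to the raw signal;
  // the WebRTC track Jitsi Meet sends is unaffected.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: {
      echoCancellation: false,
      noiseSuppression: false,
      autoGainControl: false,
    },
  });
  recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) chunks.push(e.data);
  };
  // Emit a chunk every second, so a crash loses at most ~1 s of audio.
  recorder.start(1000);
}

function stopLocalRecording() {
  // Resolves with a single Blob containing the whole recording, which can
  // then be uploaded or saved for the offline merge step.
  return new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: 'audio/webm' }));
    recorder.stop();
  });
}
```

One caveat worth noting: `audio/webm` support varies by browser, so a real implementation should probe `MediaRecorder.isTypeSupported()` before choosing a container.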
I currently plan to implement this as a separate application based on Jitsi Meet. I am not aware of any existing feature like this in Jitsi or similar applications. Do you think this use case is common enough that it should be incorporated into Jitsi Meet?
I would like to hear any thoughts or suggestions from you guys.
Thanks!
Issue Analytics
- Created 5 years ago
- Reactions: 1
- Comments: 14 (7 by maintainers)
Top GitHub Comments
Hi, any news on local recording using the Android SDK from Jitsi/Flutter?
Hi anil1712, it’s not code, it’s a trick. If you are still interested, we can set up a Jitsi call and I can demonstrate how I do it. I think that is more instructive than 2 pages of how-to instructions. Contact via rainerbielefeldng@bielefeldundbuss.de