Is there any way for me to implement audio processing?

I’m interested in doing some preprocessing on audio tracks (background noise reduction, speaking detection) before they get sent over WebRTC. It seems like there’s no way to do this with the current react-native-webrtc (unless I’m looking in the wrong place).

I notice, though, that there’s functionality for video processing on Android, and a PR open for doing the same for iOS.

Does anyone have any pointers on how I might do this for audio as well? Happy to implement & contribute this back.


@liamuk Any luck so far? I’m also trying this but couldn’t figure it out. I was thinking of grabbing the raw audio samples, running an ML model on them through a Flask API, and then sending the processed audio on to the other peer, but I can’t get it working. Any workarounds or luck on your end? I’m building a mobile audio-calling app.

We ended up doing the following. It’s a little involved. Also note that it doesn’t work with react-native-webrtc > 106, because the APIs changed in later versions.

Common:

For iOS:

  • swizzle RTCPeerConnectionFactory::audioDeviceModule so that it returns an audio device module wrapped with your custom audioDeviceDataObserver via CreateAudioDeviceWithDataObserver (see the sketch below)
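
A minimal C++ sketch of the wrapping step, assuming a reasonably recent WebRTC checkout. TapAudioObserver and WrapAdm are illustrative names; only webrtc::AudioDeviceDataObserver and webrtc::CreateAudioDeviceWithDataObserver are actual WebRTC APIs, and their exact signatures vary between revisions (part of why this breaks past react-native-webrtc 106). The Objective-C swizzle itself is not shown.

```cpp
#include <memory>
#include <utility>

#include "api/scoped_refptr.h"
#include "modules/audio_device/include/audio_device.h"
#include "modules/audio_device/include/audio_device_data_observer.h"

// Receives raw PCM for both the capture (mic) and render (playout) paths.
// The buffers are read-only: you can copy or analyze them, but writing to
// them has no effect on what WebRTC actually sends or plays.
class TapAudioObserver : public webrtc::AudioDeviceDataObserver {
 public:
  void OnCaptureData(const void* audio_samples,
                     size_t num_samples,
                     size_t bytes_per_sample,
                     size_t num_channels,
                     uint32_t samples_per_sec) override {
    // e.g. copy into a ring buffer and run speech detection on another thread.
  }

  void OnRenderData(const void* audio_samples,
                    size_t num_samples,
                    size_t bytes_per_sample,
                    size_t num_channels,
                    uint32_t samples_per_sec) override {
    // Remote/mixed audio that is about to be played out.
  }
};

// Wraps an existing audio device module so the observer gets called.
// On iOS the swizzled audioDeviceModule getter would return the wrapped
// module; on Android the custom Java ADM builds its native counterpart
// the same way.
rtc::scoped_refptr<webrtc::AudioDeviceModule> WrapAdm(
    rtc::scoped_refptr<webrtc::AudioDeviceModule> adm) {
  return webrtc::CreateAudioDeviceWithDataObserver(
      std::move(adm), std::make_unique<TapAudioObserver>());
}
```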

For Android:

  • patch react-native-webrtc so that it does not initialize the native WebRTC module itself
  • in your app entry point, initialize WebRTC yourself, passing options that install your own custom Java audioDeviceModule, which behind the scenes builds the native audioDeviceModule with CreateAudioDeviceWithDataObserver (the same wrapping sketched above)

Note that this won’t let you intercept the audio and transform it before it is sent onward; it only gives you read access to the audio for analysis (a small example of that is sketched below). Hope this helps!
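
Read access is still enough for the speaking-detection half of the original question (though not for noise reduction, since you can’t change what gets sent). A minimal, illustrative sketch of a crude energy-based check you could run on the 16-bit PCM handed to the capture callback; the function name and threshold here are made up and would need tuning:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>

// Very crude voice-activity heuristic over one callback's worth of audio.
// `pcm` points at interleaved 16-bit samples and `sample_count` is the total
// number of int16 values in the buffer (derive it from the observer
// callback's num_samples / num_channels arguments).
bool LooksLikeSpeech(const int16_t* pcm, std::size_t sample_count) {
  if (sample_count == 0) return false;

  double sum_squares = 0.0;
  for (std::size_t i = 0; i < sample_count; ++i) {
    const double s = pcm[i] / 32768.0;  // normalize to roughly [-1, 1)
    sum_squares += s * s;
  }
  const double rms = std::sqrt(sum_squares / sample_count);

  // Arbitrary threshold; a real implementation would use a proper VAD
  // (smoothing, noise-floor tracking) instead of a fixed cutoff.
  return rms > 0.02;
}
```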
