Help with pitch-shifting / modulating audio

Hey, all!

Working on a fun side project that uses react-native-webrtc to let nearby users call each other (basically a proximity chat: think an audio-only Omegle locked to a small locale). One feature I want to add is the ability to modulate your own audio. I have a little toy UI for selecting a “deep”, “silly”, or “high-pitched” voice, which, at its simplest, would just involve shifting the pitch a few semitones up or down (rough sketch below).
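Just to be concrete about the mapping I have in mind, here's an illustrative sketch. The preset names and offsets are placeholders, and it assumes AVAudioUnitTimePitch, which measures pitch in cents (100 cents per semitone):

```swift
import AVFoundation

// Illustrative only: preset names and offsets are placeholders, not final values.
// AVAudioUnitTimePitch.pitch is measured in cents; 100 cents = 1 semitone.
enum VoicePreset {
    case deep, silly, highPitched

    var pitchInCents: Float {
        switch self {
        case .deep:        return -400   // ~4 semitones down
        case .silly:       return  300   // ~3 semitones up
        case .highPitched: return  700   // ~7 semitones up
        }
    }
}
```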

So far I've gotten a test version working by writing a VoiceModulator.swift and .m file, which lets you speak into the phone and hear a pitch-shifted playback. What I need, though, is for the stream used in the actual WebRTC call to be modulated by my native Swift code (I'm focusing on getting iPhones working for now).
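For context, the test version is conceptually along these lines. This is a minimal AVAudioEngine sketch, not my exact file, and it assumes mic permission and an active AVAudioSession are already handled:

```swift
import AVFoundation

// Minimal sketch of the "speak into the phone, hear it pitch-shifted back" test.
// Assumes microphone permission and an active AVAudioSession are already set up.
final class VoiceModulator {
    private let engine = AVAudioEngine()
    private let pitchNode = AVAudioUnitTimePitch()

    func start(pitchInCents: Float) throws {
        // pitch is in cents, roughly -2400...2400, where 100 cents = 1 semitone
        pitchNode.pitch = pitchInCents

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        engine.attach(pitchNode)
        engine.connect(input, to: pitchNode, format: format)
        engine.connect(pitchNode, to: engine.mainMixerNode, format: format)

        try engine.start()
    }

    func stop() {
        engine.stop()
    }
}
```

The open question is how to apply that same transform to the audio WebRTC actually sends, rather than just local playback.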

I know that I can’t do this without writing native code, since the react-native-webrtc library itself uses native modules to access the device audio stream. What I’ve come to humbly ask are the following few questions:

1. Is the most standard way to do this to write native code and perhaps fork the react-native-webrtc library to change how it interfaces with that native code?

2. Are there any libraries or existing projects the community knows of that could make this easier? I’ll even take hacky fixes for now.

3. I’ve done a rough skim of the relevant files in react-native-webrtc’s native iOS code, but if anyone knows the codebase really well and could point me to some kind of “shortcut”, i.e. a file I could edit directly and avoid writing a bunch of custom native code, I’d appreciate it.