Recently, clients reported that iOS 13.2 broke the calling functionality in our app, which was using react-native-webrtc@1.69.2. I thought this could be fixed by simply upgrading to 1.75.1, but I have been having all kinds of problems since upgrading.
I managed to get iOS working with little to no issue on the new library (other than having to upgrade Xcode to debug iOS 13.2). Android is another story, though; I keep getting crashes such as:
W: java.lang.NullPointerException: Attempt to invoke virtual method 'byte[] java.lang.String.getBytes(java.lang.String)' on a null object reference
W: at org.webrtc.JniHelper.getStringBytes(JniHelper.java:25)
W: at org.webrtc.PeerConnectionFactory.nativeCreateAudioSource(Native Method)
W: at org.webrtc.PeerConnectionFactory.createAudioSource(PeerConnectionFactory.java:452)
W: at com.oney.WebRTCModule.GetUserMediaImpl.createAudioTrack(GetUserMediaImpl.java:67)
W: at com.oney.WebRTCModule.GetUserMediaImpl.getUserMedia(GetUserMediaImpl.java:166)
W: at com.oney.WebRTCModule.WebRTCModule.lambda$getUserMedia$2$WebRTCModule(WebRTCModule.java:485)
W: at com.oney.WebRTCModule.-$$Lambda$WebRTCModule$C_Gitx4KOYySlWXaI1K4M1yoJoA.run(lambda)
W: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
W: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
W: at java.lang.Thread.run(Thread.java:762)
E: #
# Fatal error in: gen/sdk/android/generated_base_jni/jni/../../../../../../../sdk/android/src/jni/jni_generator_helper.h, line 85
# last system error: 11
# Check failed: !env->ExceptionCheck()
#
After fixing that one by setting the audio constraint to just true, I am faced with this one:
E: FATAL EXCEPTION: pool-3-thread-1
Process: PID: 21302
com.facebook.react.bridge.NoSuchKeyException: width
at com.facebook.react.bridge.ReadableNativeMap.getValue(ReadableNativeMap.java:124)
at com.facebook.react.bridge.ReadableNativeMap.getValue(ReadableNativeMap.java:128)
at com.facebook.react.bridge.ReadableNativeMap.getInt(ReadableNativeMap.java:182)
at com.oney.WebRTCModule.VideoCaptureController.<init>(VideoCaptureController.java:50)
at com.oney.WebRTCModule.GetUserMediaImpl.createVideoTrack(GetUserMediaImpl.java:81)
at com.oney.WebRTCModule.GetUserMediaImpl.getUserMedia(GetUserMediaImpl.java:170)
at com.oney.WebRTCModule.WebRTCModule.lambda$getUserMedia$2$WebRTCModule(WebRTCModule.java:485)
at com.oney.WebRTCModule.-$$Lambda$WebRTCModule$C_Gitx4KOYySlWXaI1K4M1yoJoA.run(lambda)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
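For reference, the audio-side fix I applied amounts to collapsing the legacy { optional: [...] } audio shape into a plain boolean. A minimal sketch (buildConstraints is an illustrative helper, not the actual app code):

```javascript
// Sketch of the workaround: pass `audio: true` instead of the legacy
// { optional: [{ goog... }] } shape that the native side crashed on
// while parsing.
function buildConstraints(voiceOnly) {
    return {
        audio: true, // plain boolean, no goog* entries
        video: voiceOnly ? false : { facingMode: "user" },
    };
}
```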
Does anyone have any ideas? Here's the code that builds the constraints and calls getUserMedia:
var audioConstraints = [
    { googAutoGainControl: true },
    { googAutoGainControl2: true },
    { googEchoCancellation: true },
    { googEchoCancellation2: true },
    { googNoiseSuppression: true },
    { googNoiseSuppression2: true },
    { googHighpassFilter: true },
    { googTypingNoiseDetection: true },
    { googAudioMirroring: true }
];
var videoConstraints = [];
var audioSource = options.audioSource;
if (audioSource)
    audioConstraints.push({ sourceId: audioSource });
var isFront = true;
var voiceOnly = false;
var videoSource = options.videoSource;
if (videoSource) {
    videoConstraints.push({ sourceId: videoSource });
    isFront = videoSource.slice(-1) === "1";
}
voiceOnly = (videoSource === "false") || options.voiceOnly;
var screenSize = Dimensions.get('window');
var isHorizontal = screenSize.width > screenSize.height;
var minFrameRate = 1;
var maxFrameRate = 30;
var minWidth = isHorizontal ? 200 : 150;
var minHeight = isHorizontal ? 150 : 200;
var maxWidth = (use_bigger ? 320 : 640);
var maxHeight = (use_bigger ? 240 : 480);
if (isHorizontal) {
    var swap = maxWidth;
    maxWidth = maxHeight;
    maxHeight = swap;
}
if (Platform.OS === "android") {
    minWidth = maxWidth;
    minHeight = maxHeight;
    minFrameRate = maxFrameRate;
}
var constraints = {
    audio: {
        optional: audioConstraints
    },
    video: {
        optional: videoConstraints,
        facingMode: (isFront ? "user" : "environment"),
        mandatory: {
            minWidth: minWidth,
            minHeight: minHeight,
            maxWidth: maxWidth,
            maxHeight: maxHeight,
            minFrameRate: minFrameRate,
            maxFrameRate: maxFrameRate
        }
    }
};
if (Platform.OS === "ios")
    constraints.audio = true;
if (voiceOnly)
    constraints.video = false;
return MediaDevices.getUserMedia(constraints);
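From the stack trace, it looks like the native VideoCaptureController now reads a plain width key out of the video constraints, i.e. the spec-style W3C shape rather than the legacy mandatory/optional one. My guess at what that would look like (a sketch only; the dimension values are taken from the code above, and the exact keys the library accepts would need to be checked against its source):

```javascript
// Sketch of spec-style (W3C MediaTrackConstraints) constraints, using
// plain width/height/frameRate keys instead of mandatory.minWidth etc.
function buildSpecConstraints(opts) {
    var isHorizontal = opts.screenWidth > opts.screenHeight;
    // Same target resolutions as the legacy code, oriented to the screen.
    var width = isHorizontal ? 640 : 480;
    var height = isHorizontal ? 480 : 640;
    return {
        audio: true,
        video: opts.voiceOnly ? false : {
            width: width,
            height: height,
            frameRate: 30,
            facingMode: opts.isFront ? "user" : "environment",
        },
    };
}
```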
react-native-webrtc: 1.75.1
react-native: 0.59.9
Any suggestions? We have been using WebRTC for video/voice calling for a very long time, so the code is likely outdated, but it worked up until this point. Also, is there a better way to be doing constraints now? I know the goog* constraints aren't really used anymore, but finding solid documentation for this library is pretty difficult.