I’m pretty familiar with how to use WebRTC in React, but new to using it with React Native.
To get a reference for how it works with react-native-webrtc, I’ve created the simplest example possible. It uses RN 0.61.2 and functional components.
Hey Jerem, I put it together on my Windows machine, and since Android is my priority I haven’t doubled back to get the iOS side working yet. Hopefully the changes will be somewhat similar to Android’s.
Unfortunately I didn’t think to make a commit before and after getting the Android folder set up
Well, after a long day of hard work, I managed to make your project compatible with iOS (tested on an iPhone X). I’m opening a PR for you; I can also make your other project (the chat project) iOS compatible if you want.
Specs satisfying the react-native-webrtc (from …/node_modules/react-native-webrtc) dependency were found, but they required a higher minimum deployment target.
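That error usually means the Podfile’s iOS platform version is lower than what the react-native-webrtc podspec requires. Raising it in the Podfile typically resolves it; the exact version to use depends on the react-native-webrtc release you’re on (10.0 below is an assumption, not a value from this thread):

```ruby
# ios/Podfile
# Raise the minimum deployment target so the react-native-webrtc podspec
# can be satisfied. Check the library's podspec for the real minimum.
platform :ios, '10.0'
```

After changing it, re-run `pod install` from the `ios` directory.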
Hi @baconcheese113, thanks for your post.
I am working on voice/video calling with react-native-webrtc.
Your repository works fine on my PC.
The problem is that I am using RN 0.59.9 and React 16.8.3.
So it fails at const devices = await mediaDevices.enumerateDevices();
How can I fix this?
Also, I’m guessing this only works on a single device. How can I implement calling another device?
Thanks
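For reference, once `enumerateDevices()` does resolve, picking a camera is just filtering the returned list. A minimal sketch; the `kind`/`facing` fields match what react-native-webrtc has historically returned, but treat the exact device shape as an assumption for your version:

```javascript
// Pick the deviceId of the front-facing camera from an enumerateDevices() result.
// Assumed device shape: { deviceId, kind: 'videoinput' | 'audioinput', facing?: 'front' | 'environment' }
function getFrontCameraId(devices) {
  const cam = devices.find((d) => d.kind === 'videoinput' && d.facing === 'front');
  return cam ? cam.deviceId : null;
}

// Example with a mocked device list (stands in for the real enumerateDevices() result):
const devices = [
  { deviceId: 'audio-1', kind: 'audioinput' },
  { deviceId: 'cam-back', kind: 'videoinput', facing: 'environment' },
  { deviceId: 'cam-front', kind: 'videoinput', facing: 'front' },
];
console.log(getFrontCameraId(devices)); // 'cam-front'
```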
Finally got around to the iOS compatibility tonight. Should all work fine now, but I’ve only tested on the simulator, which doesn’t have camera functionality… I’ll be able to test it more tomorrow.
@baconcheese113 thanks!
I tested it on my real iPhone running iOS 13 and it works!
I just have a question about the installation of react-native-webrtc in your example project:
I looked at the Podfile and can’t see this line:
pod 'react-native-webrtc', :path => '../node_modules/react-native-webrtc'
Did you install the library manually? I also can’t see the project under ‘Libraries’, so how does it actually work?
My understanding is that autolinking, introduced in RN 0.60, removed the need to manually declare dependencies in config/spec files, as long as the library you’re importing has a podspec in its root folder.
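Concretely, the RN 0.60+ Podfile template relies on `use_native_modules!` to pull in every dependency that ships a podspec, so no per-library `pod` line is needed. A trimmed sketch of the relevant part of the template (target name is a placeholder):

```ruby
# ios/Podfile (RN 0.60+ template, trimmed)
require_relative '../node_modules/@react-native-community/cli-platform-ios/native_modules'

target 'MyApp' do
  # Autolinking: scans node_modules for packages that include a podspec
  # (react-native-webrtc among them) and adds their pods automatically.
  use_native_modules!
end
```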
Hi @baconcheese113, I just tried your sample project, and it works perfectly. But I want to ask how to integrate it across different devices (with socket.io). As I understand it, the example project can only be used on the same device.
Answered a similar question for another user here recently.
Basically your backend will manage a ledger of created rooms from talking to all the clients through socket connections. This is where you can manage who users connect with.
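The room ledger itself can be as simple as a map from room name to the set of connected client ids. A hypothetical in-memory sketch (the names here are mine, not from the example project):

```javascript
// Minimal in-memory room ledger: roomName -> Set of client ids.
const rooms = new Map();

// Add a client to a room and return the ids of the peers already there,
// i.e. the clients it should start negotiating with.
function joinRoom(roomName, clientId) {
  if (!rooms.has(roomName)) rooms.set(roomName, new Set());
  const existingPeers = [...rooms.get(roomName)];
  rooms.get(roomName).add(clientId);
  return existingPeers;
}

// Remove a client, and drop the room entirely once it's empty.
function leaveRoom(roomName, clientId) {
  const room = rooms.get(roomName);
  if (!room) return;
  room.delete(clientId);
  if (room.size === 0) rooms.delete(roomName);
}

// First client joins an empty room; the second sees the first as a peer.
console.log(joinRoom('lobby', 'alice')); // []
console.log(joinRoom('lobby', 'bob'));   // ['alice']
```

In a real backend you’d call `joinRoom`/`leaveRoom` from your socket connection and disconnect handlers.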
The main objective is to deliver each client’s session description (SDP offer/answer) and ICE candidates to the other clients in a room, so each side knows who it’s connecting to and how to reach them.
Believe it or not, the example project I made showcases all of that ^ locally. I’m guessing that’s why it’s become so popular; it’s quite a bit easier to digest than a full-stack example, imo. Going from the local example to a networked one involves setting up a backend (like a Node.js + Express + Socket.IO server) and slightly changing the WebRTC flow.
Locally it’s:
Nothing -> (press share video) -> Sharing local video -> (press start call) -> Call immediately negotiated and started
Networked it’s:
Nothing -> (press share video) -> Sharing local video -> (press start call) -> Server puts the client in a Socket.IO room; you broadcast that you’ve joined, start negotiating calls with any existing peers, and wait for others to broadcast when they join
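The networked join step above boils down to a small piece of server-side routing logic: broadcasts go to everyone in the room, while offers/answers/candidates go only to their intended peer. A hypothetical sketch (message shapes are mine, not from the example project; in practice you’d wire this into Socket.IO event handlers):

```javascript
// Given an incoming signaling message and the ids of the other clients
// in the room, decide which clients it should be forwarded to.
function routeSignal(message, peerIds) {
  switch (message.type) {
    case 'offer':
    case 'answer':
    case 'candidate':
      // Targeted negotiation messages go only to the intended peer.
      return peerIds.filter((id) => id === message.to);
    case 'join':
      // A join broadcast goes to everyone already in the room,
      // prompting them to start negotiating with the newcomer.
      return peerIds;
    default:
      return [];
  }
}

console.log(routeSignal({ type: 'join', from: 'carol' }, ['alice', 'bob']));               // ['alice', 'bob']
console.log(routeSignal({ type: 'offer', from: 'carol', to: 'alice' }, ['alice', 'bob'])); // ['alice']
```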