I am developing a voice chat application. After the server receives an RTP packet from a client, it forwards the packet unmodified to all the other clients. Each client has a different SSRC.
The following code works (i.e. all audio streams received from the server are played correctly):
async function join_vc(offer) {
    const rtc_conn = new RTCPeerConnection({
        bundlePolicy: 'max-compat'
    })
    rtc_conn.ontrack = (ev) => {
        const vc_audio = $("#vc_audio")[0]
        vc_audio.srcObject = ev.streams[0]
        vc_audio.play()
    }
    rtc_conn.onicegatheringstatechange = () => {
        if (rtc_conn.iceGatheringState === "complete") {
            const answer = rtc_conn.localDescription
            vc_sock.send(JSON.stringify(answer))
        }
    }
    await rtc_conn.setRemoteDescription(offer)
    const media = await navigator.mediaDevices.getUserMedia({audio: true})
    console.log("tracks", await navigator.mediaDevices.enumerateDevices())
    media.getTracks().forEach(track => rtc_conn.addTrack(track, media))
    const answer = await rtc_conn.createAnswer()
    await rtc_conn.setLocalDescription(answer)
}
However, the streams are played as one, and I couldn't find a way to separate them. The RTCPeerConnection instance has a single RTCRtpReceiver, which in turn has a single RTCDtlsTransport.
Is there a way to separate the multiplexed streams by SSRC using the WebRTC API, so that each participant can be muted or volume-adjusted client-side? Renegotiating every RTCPeerConnection whenever a new participant joins a voice channel seems expensive, and keeping a separate connection per pair of participants is even worse (O(N^2) connections).
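For context, the SSRCs themselves are at least visible in the session description: RFC 5576 source-specific attributes appear as a=ssrc: lines in the SDP. A minimal sketch of listing them from an SDP string (the sample SDP fragment below is made up for illustration):

```javascript
// Extract the distinct SSRCs advertised in an SDP string.
// RFC 5576 source attributes look like: a=ssrc:<ssrc-id> <attribute>[:<value>]
function extractSSRCs(sdp) {
    const ssrcs = new Set()
    for (const line of sdp.split(/\r?\n/)) {
        const m = line.match(/^a=ssrc:(\d+)\s/)
        if (m) ssrcs.add(Number(m[1]))
    }
    return [...ssrcs]
}

// Hypothetical answer fragment with two remote audio sources:
const sampleSdp = [
    'm=audio 9 UDP/TLS/RTP/SAVPF 111',
    'a=ssrc:11111111 cname:user-a',
    'a=ssrc:22222222 cname:user-b'
].join('\r\n')

console.log(extractSSRCs(sampleSdp)) // → [11111111, 22222222]
```

This only shows that the SSRCs are knowable on the client; it does not by itself split the decoded audio, which is what I am asking about.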
I tried using encoded transforms (RTCRtpScriptTransform), but they are not available in Chrome.