How do I rescue my WebRTC connection when my SDP offer is swallowed?

I'm using AWS Kinesis Video Streams signalling channels to set up a WebRTC connection, with help from the Amazon Kinesis Video Streams WebRTC SDK for JavaScript (on whose repo I have cross-posted this question: https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js/issues/175).
In my code, the "viewer" side of the WebRTC call uses the KVSWebRTC.SignalingClient.sendSdpOffer function to send an SDP offer, and the "master" side of the call listens for "sdpOffer" events and reacts to them by calling KVSWebRTC.SignalingClient.sendSdpAnswer.
This usually all works fine. But I've found that the following sequence of events can happen:
1. The viewer user opens the webpage, then sends an SDP offer.
2. The master user opens the webpage and receives the SDP offer.
3. Before the master side can send an SDP answer, the master user refreshes the page.
4. The master user tries to rejoin the call, but no longer receives any "sdpOffer" event - it's as if the offer has been swallowed by the original aborted session. The connection can only be made if the viewer side makes a new SDP offer.
Am I correct in thinking that the SDP offer is "consumed" when the master-side signalling client first fires the "sdpOffer" event?
What's the appropriate way to handle this? Is the viewer side supposed to speculatively make fresh SDP offers every few seconds, in case the above sequence of events has occurred? Is there any way for the viewer side to know that their SDP offer has been lost?
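To make the question concrete, here is a minimal sketch of what that speculative approach might look like on the viewer side. It assumes the usual SDK setup has already produced a connected SignalingClient and a configured RTCPeerConnection; the re-send interval and the retry-until-answered loop are my own workaround, not an SDK feature:

```typescript
import { SignalingClient } from 'amazon-kinesis-video-streams-webrtc';

const RESEND_INTERVAL_MS = 5000; // assumption: tune to your session-setup budget

async function sendOfferUntilAnswered(
  signalingClient: SignalingClient,
  peerConnection: RTCPeerConnection,
): Promise<void> {
  const offer = await peerConnection.createOffer({
    offerToReceiveAudio: true,
    offerToReceiveVideo: true,
  });
  await peerConnection.setLocalDescription(offer);
  signalingClient.sendSdpOffer(peerConnection.localDescription!);

  // Re-send the same offer until an answer arrives, so a master that
  // refreshed mid-handshake sees a live offer instead of the consumed one.
  const timer = setInterval(() => {
    signalingClient.sendSdpOffer(peerConnection.localDescription!);
  }, RESEND_INTERVAL_MS);

  signalingClient.on('sdpAnswer', async (answer: RTCSessionDescriptionInit) => {
    clearInterval(timer);
    await peerConnection.setRemoteDescription(answer);
  });
}
```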

Related

gRPC client streaming

Official gRPC documentation for client streaming states that:
The server sends back a single response, typically but not necessarily after it has received all the client’s requests...
What I'm trying to do is catch the server's response in the middle of the stream so that I can stop sending more data.
In Go I can spin up a new goroutine that listens for the message from the server using RecvMsg, but I can't find a way to do the same in C++. It looks like ClientWriter doesn't offer this kind of functionality.
One solution would be to use a bidirectional stream, but I was wondering whether there is any other way to achieve this in C++.
Once the response and status are sent by the server and received back at the client (i.e., by the client-side gRPC stack), subsequent attempts to Write() will start failing. The first failing Write() is the signal to the client that it should stop writing and Finish the RPC.
So the two options here are:
1. Wait for a Write() to fail, then call Finish() to receive the server's response and status.
2. Switch to bidirectional streaming if the client really wants to read the response from the server before calling Finish().
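The shape of option 1, translated to @grpc/grpc-js for illustration (the question itself is about C++; the stub method name `upload` and the message shape are assumptions):

```typescript
import * as grpc from '@grpc/grpc-js';

async function uploadChunks(
  client: any, // generated client stub (assumed)
  chunks: Buffer[],
): Promise<void> {
  let serverResponded = false;

  // For a client-streaming RPC, the unary callback fires as soon as the
  // server sends its single response and status, which may happen before
  // the client has finished writing.
  const call = client.upload((err: grpc.ServiceError | null, response: unknown) => {
    serverResponded = true;
    if (err) {
      console.error('upload ended with error:', err.message);
    } else {
      console.log('server responded:', response);
    }
  });

  for (const chunk of chunks) {
    if (serverResponded) break; // analogue of the first failing Write() in C++
    call.write({ data: chunk });
    await new Promise((resolve) => setImmediate(resolve)); // let callbacks run
  }
  call.end(); // analogue of Finish(): signals we are done writing
}
```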

How to keep an HTTP long-polling connection open?

I want to implement long polling in a web service. I can set a sufficiently long time-out on the client, but can I give a hint to intermediate networking components to keep the response open? I mean NATs, virus scanners, reverse proxies, or surrounding SSH tunnels that may sit between the client and the server and are not under my control.
A download may last for hours, but an idle connection may be terminated in less than a minute. This is what I want to prevent. Can I inform the intermediate network that the connection is intentionally idle, and not idle because the server has disconnected?
If so, how? I have been searching for around four hours now but can't find information on this.
Should I send 200 OK, maybe some headers, and then nothing?
Do I have to respond with 102 Processing instead of 200 OK, and will everything be fine then?
Should I send 0x16 (synchronous idle) bytes every now and then? If so, before or after the initial HTTP status code, before or after the headers? Do they make it into the transferred file, and might they break it?
The web service / server is in C++ using Boost and the content file being returned is in Turtle syntax.
You can't force proxies to extend their idle timeouts, at least not without having administrative access to them.
The good news is that you can design your long polling solution in such a way that it can recover from a connection being suddenly closed.
One such design would be as follows:
Since long polling is normally used for event notifications (think the Observer pattern), you associate a serial number with each event.
The client makes a GET request carrying the serial number of the last event it has seen, either as part of the URL or in a cookie.
The server maintains a buffer of recent events. Upon receiving a GET request from the client, it checks if any of the buffered events need to be sent to the client, based on their serial numbers and the serial number provided by the client. If so, all such events are sent in one HTTP response. The response finishes at that point, in case there is a proxy that wants to buffer the whole response before relaying it further.
If the client is up to date, that is, it didn't miss any of the buffered events, the server delays its response until another event is generated. When that happens, it is sent as one complete HTTP response.
When the client receives a response, it immediately sends a new request. When it detects that the connection was closed, it opens a new connection and makes a new request.
When using cookies to convey the serial number of the last event seen by the client, the client side implementation becomes really simple. Essentially you just enable cookies on the client side and that's it.
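A minimal sketch of the client side of this design, assuming an `/events` endpoint that takes the last-seen serial number as a query parameter (both names are illustrative, not part of the answer):

```typescript
async function pollEvents(
  handleEvents: (events: { serial: number; payload: unknown }[]) => void,
): Promise<void> {
  let lastSerial = 0;
  for (;;) {
    try {
      // Long-poll: the server answers immediately if it has buffered events
      // newer than `lastSerial`, otherwise it holds the request open.
      const res = await fetch(`/events?after=${lastSerial}`);
      if (!res.ok) {
        await new Promise((r) => setTimeout(r, 1000));
        continue;
      }
      const events: { serial: number; payload: unknown }[] = await res.json();
      for (const ev of events) {
        lastSerial = Math.max(lastSerial, ev.serial);
      }
      if (events.length > 0) handleEvents(events);
    } catch {
      // A proxy killed the idle connection: nothing is lost, because the
      // server replays everything after `lastSerial` on the next request.
      await new Promise((r) => setTimeout(r, 1000));
    }
  }
}
```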

Azure EventHub: offline event buffering/queueing possible?

I can't find any definitive answer here. My IoT service needs to tolerate flaky connections. Currently I manage a local cache myself and retry a cloud-blob transfer as often as required. Could I replace this with an Azure Event Hub service? I.e., will the Event Hub client (on IoT-Core) buffer events until the connection is available? If so, where is the info on this?
It doesn't seem so, according to:
https://azure.microsoft.com/en-us/documentation/articles/event-hubs-programming-guide/
It seems you are responsible for sending and caching:
Send asynchronously and send at scale
You can also send events to an Event Hub asynchronously. Sending asynchronously can increase the rate at which a client is able to send events. Both the Send and SendBatch methods are available in asynchronous versions that return a Task object. While this technique can increase throughput, it can also cause the client to continue to send events even while it is being throttled by the Event Hubs service and can result in the client experiencing failures or lost messages if not properly implemented. In addition, you can use the RetryPolicy property on the client to control client retry options.
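In other words, the buffering stays on your side. A minimal sketch of such a local queue drained with retries, using the @azure/event-hubs JavaScript SDK for illustration (connection string and hub name are placeholders):

```typescript
import { EventHubProducerClient } from '@azure/event-hubs';

const producer = new EventHubProducerClient('<connection-string>', '<event-hub-name>');
const pending: { body: unknown }[] = [];
let draining = false;

function enqueue(body: unknown): void {
  pending.push({ body }); // cache locally first, send later
  void drain();
}

async function drain(): Promise<void> {
  if (draining) return;
  draining = true;
  while (pending.length > 0) {
    try {
      await producer.sendBatch([pending[0]]); // oldest event first
      pending.shift(); // remove only after a confirmed send
    } catch {
      // Offline or throttled: wait and retry; the event stays queued.
      await new Promise((r) => setTimeout(r, 5000));
    }
  }
  draining = false;
}
```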

Available event for connection timeout for streams published over RTSP?

I use Wowza GoCoder to publish video to a custom Wowza live application. In my application I attach an IRTSPActionNotify event listener within the onRTPSessionCreate callback. In the onRecord callback of my IRTSPActionNotify I perform various tasks - start recording the live stream, among other things. In my onTeardown callback I then stop the recording and do some additional processing of the recorded video, like moving the video file to a different storage location.
What I just noticed is that if the encoder times out, due to a lost connection, power failure, or some other sudden event, I won't receive an onTeardown event - not even when the RTSP session times out. This is a big issue for me, since I need to do this additional processing before I make the published stream available for on-demand viewing through another application.
I've been browsing through the documentation looking for an event or a utility class that could help me out, but so far to no avail.
Is there some reliable event, or other reliable way to know that a connection has timed out, so that I can trigger this processing also for streams that don't fire a teardown event?
I first discovered this issue when I lost connection on my mobile device while encoding video using the Wowza GoCoder app for iOS, but I guess the issue would be the same for any encoder.
In my Wowza modules, I have the following pattern, which has proved quite reliable so far:
I have a custom worker thread that iterates over all client types. This allows me to keep track of clients, and I have found that eventually all kinds of disasters lead to clients being removed from those lists, after somewhat unclear timeouts.
Try tracking (adding/removing) clients in your own Set and see if that is more accurate; there is a sketch of the pattern below.
You could also try to see if anything gets called in that case in IMediaStreamActionNotify2.
I have seen onStreamCreate(IMediaStream stream) and onStreamDestroy(IMediaStream stream) being called in ModuleBase in the case of GoCoder on iOS, and I attach an instance of IMediaStreamActionNotify2 to the stream by calling stream.addClientListener(actionNotify).
On the GoCoder platform: I am not sure that it's the same on Android; the GoCoder Android and iOS versions have a fundamental difference, namely the streaming protocol itself, which leads to different API calls and behaviour on the backend side. Don't go live without testing on both platforms. :-)
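Wowza modules themselves are Java, so this TypeScript sketch only illustrates the sweep pattern described above: keep your own set of known clients, compare it periodically against the server's current list, and treat disappearance as an implicit teardown. `listClientIds` stands in for however your server enumerates connected clients:

```typescript
const knownClients = new Set<string>();

function sweep(
  listClientIds: () => string[],           // placeholder server accessor
  onImplicitTeardown: (clientId: string) => void, // recording post-processing goes here
): void {
  const current = new Set(listClientIds());
  for (const id of current) {
    knownClients.add(id); // track newly seen clients
  }
  for (const id of knownClients) {
    if (!current.has(id)) {
      // The client vanished without a TEARDOWN: treat it as a timeout.
      knownClients.delete(id);
      onImplicitTeardown(id);
    }
  }
}

// Run the sweep periodically, like the worker thread described above
// (placeholder arguments for illustration only).
setInterval(() => sweep(() => [], (id) => console.log('lost client:', id)), 10_000);
```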

How to guarantee delivery of data in a Compact Framework Webservice call?

We have a mobile application in a very unsteady WLAN environment. Sending data to a web server can result in a timeout or a lost WLAN connection.
How do we ensure that our data is delivered correctly? Is there any possibility of using Web Services Reliable Messaging (WSRM) on the device?
MSMQ is not an option at the moment.
WSRM isn't supported. A reliable mechanism is to ensure either that the web service responds to the upload with an ack after the data has been received (i.e., a synchronous call), or that when you start the upload you get back a transaction ID that you can send back to the service at a later point to verify delivery before deleting the local copy.
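As a shape for the second option, here is a minimal sketch in TypeScript (the original platform is .NET Compact Framework; the `/upload` and `/status` endpoints and the response fields are hypothetical):

```typescript
async function reliableUpload(
  payload: unknown,
  deleteLocalCopy: () => void, // placeholder for local cache cleanup
): Promise<void> {
  // 1. Upload; the service hands back a transaction ID.
  const res = await fetch('/upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  const { txId } = await res.json();

  // 2. Ask the service later whether that transaction was persisted, and
  //    delete the local copy only once delivery is confirmed.
  const status = await fetch(`/status?tx=${txId}`);
  const { delivered } = await status.json();
  if (delivered) {
    deleteLocalCopy();
  }
}
```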