Regarding HTTPS bandwidth control on the server side - C++

Requirement:
The client supports a 2 Mbps bitrate and will be sending HTTPS requests to the server via cURL.
The server supports a 1000 kbps bitrate, so it has to send the data to the client at 1000 kbps.
In my case, however, the server does not honour the bitrate I have set; instead it sends the data to the client as fast as it can. My requirement is to control the bandwidth on the server side and send the data at the specified rate of 1000 kbps.
I also need to control the bitrate from the code: I need some clue about how to handle the bitrate at the socket level for HTTPS.
Could someone please guide me on achieving this?
Note:
For HTTP this works properly, because the bitrate is handled at the socket level; for HTTPS, however, the data is transferred with the SSL_write library call, and that call does not deal with bitrate at all.
Any guidance on how to deal with this would be very helpful.
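For reference, the kind of thing I mean by "handling it at the socket level" - a minimal sketch assuming OpenSSL on the server, with the chunk size and function name purely illustrative - would be to write in small chunks and pace the SSL_write calls so the average rate stays near the target:

    #include <openssl/ssl.h>
    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <thread>

    // Send `len` bytes at roughly `bits_per_sec` (e.g. 1000 * 1000 for 1000 kbps).
    // Assumes an already-established SSL* handle; error handling is trimmed.
    bool throttled_ssl_send(SSL *ssl, const char *buf, size_t len, long bits_per_sec)
    {
        const size_t chunk = 16 * 1024;                   // at most one TLS record
        const double secs_per_byte = 8.0 / static_cast<double>(bits_per_sec);
        auto next = std::chrono::steady_clock::now();

        for (size_t off = 0; off < len;) {
            const size_t n = std::min(chunk, len - off);
            const int rc = SSL_write(ssl, buf + off, static_cast<int>(n));
            if (rc <= 0)
                return false;                             // inspect SSL_get_error(ssl, rc)
            off += static_cast<size_t>(rc);

            // Sleep until this chunk's time budget has elapsed, keeping the
            // long-run average at the target rate.
            next += std::chrono::duration_cast<std::chrono::steady_clock::duration>(
                std::chrono::duration<double>(rc * secs_per_byte));
            std::this_thread::sleep_until(next);
        }
        return true;
    }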

Related

Easy way to "nudge" a server to keep a connection open?

Okay, so a little context:
I have an app running on an embedded system that sends a few different requests over HTTP (using libcurl in C++) at the following intervals:
5 minutes
15 minutes
1 hour
24 hours
My goal: Reduce data consumption (runs over cellular)
We have both client and server side TLS authentication, so the handshake is costly. The idea is that we use persistent connections (at least for the shorter interval files) to avoid doing the handshake every time.
Unfortunately, after much tinkering I've figured out that the server is closing the connection before the intervals pass. Maybe this is something we can extend? I'll have to talk to the server side guys.
I was under the impression that was the reason the "TCP keep-alive" packets existed, but supposedly those "check the connection" not "keep it open" like the name suggests.
My idea is this:
Have my app send a packet (as small as possible) every 2 minutes or so (however long the timeout is) to "nudge" the connection into staying open.
My questions are:
Does that make any sense?
I don't suppose there is an easy way to do this in libcurl is there?
If so, how small could we get the request?
Is there an even easier way to do it? My only issue here is that all the connection stuff "lives" in libcurl.
Thanks!
It would be easier to give a more precise answer if you gave a little more detail on your application architecture. For example, is it a RESTful API? Is the use of HTTP absolutely mandatory? If so, what HTTP server are you using (nginx, apache, ...)? Could you consider websockets as an alternative to plain HTTP?
If you are at liberty to use something other than regular HTTP or HTTPS - and to use something other than libcurl on the client side - you would have more options.
If, on the other hand, you are constrained to both
use HTTP (rather than a raw TCP connection or websockets), and
use libcurl
then I think your task is a good bit more difficult - but maybe still possible.
One of your first challenges is that the typical timeouts for an HTTP connection are quite low (as low as a few seconds for Apache 2). If you can configure the server, you can increase this.
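For example, with Apache 2.4 the relevant directives would be something like this (assuming you can edit the server configuration; the 300-second value is only illustrative):

    # Allow persistent connections, with no cap on requests per connection
    KeepAlive On
    MaxKeepAliveRequests 0
    # Hold an idle connection open for up to 300 seconds (the default is 5)
    KeepAliveTimeout 300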
I was under the impression that was the reason the "TCP keep-alive" packets existed, but supposedly those "check the connection" not "keep it open" like the name suggests.
Your terminology is ambiguous here. Are you referring to TCP keep-alive packets or persistent HTTP connections? These don't necessarily have anything to do with each other. The former is an optional mechanism in TCP (which is disabled by default). The latter is an application-layer concept which is specific to HTTP - and may be used regardless of whether keep-alive packets are being used at the transport layer.
My only issue here is that all the connection stuff "lives" in libcurl.
The problem with using libcurl is that it is first and foremost a transfer library. I don't think it is tailored for long-running, persistent TCP connections. Nonetheless, according to Daniel Stenberg (the author of libcurl), the library will automatically try to reuse existing connections where possible - as long as you re-use the same easy handle.
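A minimal sketch of that reuse pattern (the URL is a placeholder and error handling is trimmed) - keep one easy handle for the life of the app and perform on it repeatedly:

    #include <curl/curl.h>

    int main()
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();  // one handle for the app's lifetime

        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/report");
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);   // TCP-level keep-alive probes
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, 60L);   // start probing after 60 s idle
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, 30L);  // then probe every 30 s

        // First transfer opens the connection and pays for the TLS handshake.
        curl_easy_perform(curl);

        // ... wait for the next interval ...

        // This transfer re-uses the connection (and skips the handshake) if the
        // server kept it open; otherwise libcurl reconnects transparently.
        curl_easy_perform(curl);

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }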
If so, how small could we get the request?
Assuming you use a 'ping' endpoint on your server - one which accepts no data and returns a 204 (success but no content) response - the overhead in the application layer would be the size of the HTTP request headers plus the size of the HTTP response headers. Maybe you could get it down to 200-300 bytes, or thereabouts.
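To make that concrete, the exchange could be as small as the following (host and path are placeholders, and each line ends with CRLF); realistic headers plus TLS record overhead are what push it toward the 200-300 byte estimate:

    GET /ping HTTP/1.1
    Host: example.com

    HTTP/1.1 204 No Content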
Alternatives to (plain) HTTP
If you are using a RESTful API, this paradigm sort of goes against the idea of a persistent TCP connection - although I cannot think of any reason why it would not work.
You might consider websockets as an alternative, but - again - libcurl is not ideal for this. Although I know very little about websockets, I believe they would offer some advantages.
Compared to plain HTTP, websockets offer:
significantly less overhead per message;
an automatically persistent connection: there is no need to send extra 'keep alive' messages to keep it open.
Compared to a raw TCP connection, the benefits of websockets are that:
you don't have to open a custom port on your server;
it automatically handles the TLS/SSL stuff for you.
(Someone who knows more about websockets is welcome to correct me on some of the above points - particularly regarding TLS/SSL and keep alive messages.)
Alternatives to libcurl
An alternative to libcurl which might be useful here is the Mongoose networking library. It would provide you with a few different alternatives:
use a plain TCP connection (and a custom application layer protocol),
use a TCP connection and handle the HTTP requests yourself,
use websockets - which it has very good support for (both as server and client).
Mongoose allows you to enable SSL for all of these options also.
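As a taste of what that looks like, here is a minimal websocket-client sketch against the Mongoose 7.x API (the event-handler signature has changed between Mongoose releases, and the URL is a placeholder, so treat this as an outline rather than copy-paste code):

    #include "mongoose.h"

    static void handler(struct mg_connection *c, int ev, void *ev_data, void *fn_data) {
      if (ev == MG_EV_WS_OPEN) {
        // Upgraded to websocket: the TLS handshake happened once, right here.
        mg_ws_send(c, "hello", 5, WEBSOCKET_OP_TEXT);
      } else if (ev == MG_EV_WS_MSG) {
        struct mg_ws_message *wm = (struct mg_ws_message *) ev_data;
        // wm->data holds the payload of an incoming frame.
        (void) wm;
      }
      (void) fn_data;
    }

    int main(void) {
      struct mg_mgr mgr;
      mg_mgr_init(&mgr);
      mg_ws_connect(&mgr, "wss://example.com/ws", handler, NULL, NULL);
      for (;;) mg_mgr_poll(&mgr, 1000);  // event loop; the connection stays open
    }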

How can I send bandwidth of client to server?

Is there a way for a server (a web server serving pages, or a file server) to know what bandwidth the client had during its last access or during a page/file request? Can this information be sent via a cookie or together with the page/file request?
I guess this is more of a theoretical question, since I want to know whether a server can provide a lower-resolution image for clients with poor bandwidth.
Yes: use the JavaScript Image onload event to time the download of a small image (like a logo), then do something useful with the result, such as downloading a larger image if the client has the bandwidth.

How to get total bytes count transferred over the connection with libcurl?

libcurl lets you find out how many bytes the application-level protocol (HTTP, FTP, etc.) sent and received. However, is there any way to get the number of bytes that the underlying socket sent and received? I mean all data, including, for example, the bytes the socket used to establish the SSL connection. I am basically searching for a way to get the same information from libcurl that the Apache HTTP client gives in HttpConnectionMetrics.getSentBytesCount() and HttpConnectionMetrics.getReceivedBytesCount().
One idea is to access the socket that libcurl uses directly from C++; but how would I get that socket's total sent/received byte count?
Use CURLOPT_DEBUGFUNCTION and just add up the different parts, as it shows the socket-level amounts. This will give you an exact number for all protocols speaking plain text - those not using SSL.
However, that won't necessarily give you the counters for the SSL-level traffic in the case of HTTPS/FTPS etc., as libcurl doesn't always expose that; it depends on which particular TLS backend it was built to use. The OpenSSL backend is fine, and it will tell you about incoming and outgoing TLS data too (using the same debug callback).
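A sketch of that counting (the callback shape is the documented libcurl one; treating DATA/HEADER plus SSL_DATA as "socket level" is an approximation, since not every TLS backend reports the SSL_DATA parts):

    #include <curl/curl.h>
    #include <cstddef>

    struct Counters { size_t sent = 0, received = 0; };

    static int debug_cb(CURL *, curl_infotype type, char *, size_t size, void *userp)
    {
        Counters *c = static_cast<Counters *>(userp);
        switch (type) {
        case CURLINFO_HEADER_OUT:
        case CURLINFO_DATA_OUT:
        case CURLINFO_SSL_DATA_OUT:
            c->sent += size;
            break;
        case CURLINFO_HEADER_IN:
        case CURLINFO_DATA_IN:
        case CURLINFO_SSL_DATA_IN:
            c->received += size;
            break;
        default:  // CURLINFO_TEXT etc. is libcurl chatter, not wire traffic
            break;
        }
        return 0;
    }

    // Attach to an easy handle before the transfer:
    //   Counters counters;
    //   curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, debug_cb);
    //   curl_easy_setopt(curl, CURLOPT_DEBUGDATA, &counters);
    //   curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);  // callback only fires in verbose mode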

RTP MCU (media projector) design

I'm trying to build an RTP media projector for audio streams only.
A user will create a session with the server and possibly broadcast audio stream.
The server will send to the user audio streams of all the other active users.
Can the server send media from a single port or does it need to be able to use a range of ports for sending? (I know it needs to listen to ports 1024 - 65535).
Does the server need to use ICE or can it just respond to the SDP request right away?
How does RTCP work in this scenario? Does the server send quality-of-service feedback on behalf of the clients, or does it act as a client and send feedback for itself?
What does the server do with quality of service feedback from the clients?
Does the server need to do something with the media packets like changing timestamps or just deliver them as is, assuming all clients are using the G.711 codec?
Thanks
The MCU can use a single port if it is the passive side in the peer-to-peer connection or a separate port for each session if it is the active side.
The MCU can act as a Translator and just forward RTCP packets from the clients, but this might result in high bandwidth usage. A more complicated MCU can parse the RTCP packets and generate its own RTCP reports from that info.
The MCU needs to decrypt and re-encrypt the RTP packets, but as long as all the participants are using the same codec, there is no need for transcoding.
The info can be found in the RTP RFC:
http://www.ietf.org/rfc/rfc3550.txt
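For reference, the fields an MCU might have to rewrite all live in the fixed RTP header (RFC 3550, section 5.1); a sketch of that layout (real code must also deal with byte order and struct packing):

    #include <cstdint>

    // Fixed RTP header, RFC 3550 section 5.1. When forwarding between
    // sessions the MCU may rewrite seq, timestamp and ssrc; the payload can
    // pass through untouched as long as everyone uses the same codec (G.711).
    struct RtpHeader {
        uint8_t  vpxcc;      // version (2 bits), padding, extension, CSRC count
        uint8_t  mpt;        // marker (1 bit) + payload type (7 bits)
        uint16_t seq;        // sequence number (network byte order)
        uint32_t timestamp;  // sampling instant of the first payload octet
        uint32_t ssrc;       // synchronization source identifier
        // followed by 0-15 CSRC entries, then the payload
    };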

How do I get through proxy server environments for non-standard services?

I'm not real hip on exactly what role(s) today's proxy servers can play and I'm learning so go easy on me :-) I have a client/server system I have written using a homegrown protocol and need to enhance the client side to negotiate its way out of a proxy environment.
I have an existing client and server system written in C and C++ for speed, with a small amount of MFC in the client to handle the user interface. I have written both the server and client side of the system on Windows (the people I work for are mainly web developers using Windows everything - not a choice), sticking to Berkeley sockets, as it were, via wsock32 for efficiency. The clients connect to the server through a nonstandard port (using port 80 is an option to get out of some environments, even though the protocol that goes over it isn't HTTP). The TCP connection(s) stay open for the duration of the client's participation in real-time conferences.
Our customer base is expanding to all kinds of networked environments. I have been able to solve a lot of problems by adding the ability to connect securely over port 443 using secure sockets, which allows the protocol to pass through a lot of environments since the internal packets can't be sniffed. But more and more of our customers are behind a proxy server environment and my direct connections don't make it through. My old-school understanding of proxy servers is that they act as a proxy for external HTML content over HTTP, possibly locally caching popular material for faster local access, and also allowing the IT staff to blacklist certain destination sites. Customers are complaining that my software doesn't recognize and easily navigate its way through their proxy environments, but I'm finding it difficult to decide what my "best fit" solution should be. My software doesn't tear down the connection after each client request, and on top of that packets can come from either side at any time - basically your typical custom client/server system for a specific niche.
My first reaction is "why can't they just add my server's addresses to their white list", but if there is a programmatic way to get through without requiring their IT staff's help, it is politically better and arguably a better solution anyway. Plus maybe I'm still not understanding what proxy servers and proxy environments have grown into these days.
My first attempt at a solution was to use WinInet with its various proxy capabilities to establish a connection over port 80 to my non-standard protocol server (which knows enough to recognize a simple HTTP-looking GET request and answer it with a simple HTTP response page, to get around some environments that employ initial packet sniffing (DPI)). I retrieved the actual SOCKET handle behind WinInet's HINTERNET request object and had hoped to use that in place of my software's existing SOCKET connection, hopefully without needing to change much more on the client side. It initially seemed to be my solution, but on further inspection it appears that the OS gets first chance at the received data on this socket: when I get notified of events via the standard select(...) call on the socket and query the size of the available data via ioctlsocket, the call succeeds but reports 0 bytes available, the reads don't work, and it goes downhill from there.
Can someone tell me of a client-side library (commercial is fine) that will let me get past these proxy server environments with as little user and IT staff help as possible? From what I read this has grown past SOCKS, and I figure someone has to have solved this problem before me.
Thanks for reading my long-winded question,
Ripred
If your software can make an SSL connection on port 443, then you are 99% of the way there.
Typically HTTP proxies are set up to proxy SSL-on-443 (for the purposes of HTTPS). You just need to teach your software to use the HTTP proxy. Check the HTTP RFCs for the full details, but the Cliffs Notes version is:
Connect to the HTTP proxy on the proxy port;
Send to the proxy:
    CONNECT your.real.server:443 HTTP/1.1\r\n
    Host: your.real.server:443\r\n
    User-Agent: YourSoftware/1.234\r\n
    \r\n
Then parse the proxy response, which will start with an HTTP status code, followed by HTTP headers, followed by a blank line. You'll then be talking with your destination (if the status code indicated success, anyway), and can start talking SSL.
In many corporate environments you'll have to authenticate with the proxy - this is almost always HTTP Basic Authentication, which is pretty easy - again, see the RFCs.
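Here is a sketch of that handshake over a plain Winsock socket, to match the question's setup (the host names mirror the example above, the buffer handling is simplistic, and the socket is assumed to be already connected to the proxy):

    #include <winsock2.h>   // link with ws2_32.lib
    #include <cstring>

    bool proxy_connect(SOCKET s)  // s: already connected to the proxy port
    {
        const char *req =
            "CONNECT your.real.server:443 HTTP/1.1\r\n"
            "Host: your.real.server:443\r\n"
            "User-Agent: YourSoftware/1.234\r\n"
            // For Basic auth you would add:
            // "Proxy-Authorization: Basic <base64 of user:password>\r\n"
            "\r\n";
        send(s, req, (int) strlen(req), 0);

        // Read until the blank line that ends the proxy's response headers.
        char buf[4096]; int used = 0;
        while (used < (int) sizeof(buf) - 1) {
            int n = recv(s, buf + used, (int) sizeof(buf) - 1 - used, 0);
            if (n <= 0) return false;
            used += n;
            buf[used] = '\0';
            if (strstr(buf, "\r\n\r\n")) break;
        }

        // A 200 status means the tunnel is up; start the TLS handshake now.
        return strncmp(buf, "HTTP/1.1 200", 12) == 0 ||
               strncmp(buf, "HTTP/1.0 200", 12) == 0;
    }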