I am working on a network-related project where communication between client and server is implemented with grpc-cpp. I want to estimate the bandwidth/throughput of data transfer between the server and the client. Currently, the client sends a request containing data and the server replies with a short message. The data is transferred as bytes, 10-100 KB per request.
It's easy to estimate the bandwidth on the client side by measuring the time difference between sending and receiving and then subtracting the execution time on the server, roughly as sketched below. But how can that be done on the server side? It looks like GlobalCallbacks::PreSynchronousRequest is called only after the whole frame has been received, and there is no way to know the duration between two packets (each containing a part of the whole frame).
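For reference, this is roughly what my client-side estimate looks like (a minimal sketch; the Transfer service, the Upload RPC, and the server_time_s reply field are hypothetical placeholders for whatever the proto actually defines):

```cpp
#include <chrono>
#include <string>
#include <grpcpp/grpcpp.h>
// Hypothetical generated header for a service with an Upload RPC.
#include "transfer.grpc.pb.h"

// Estimate client-side throughput for one request: round-trip time minus the
// server's self-reported processing time, divided into the payload size.
double EstimateThroughputBps(transfer::Transfer::Stub& stub, const std::string& payload) {
    transfer::UploadRequest request;
    request.set_data(payload);                       // 10-100 KB of raw bytes
    transfer::UploadReply reply;
    grpc::ClientContext context;

    auto start = std::chrono::steady_clock::now();
    grpc::Status status = stub.Upload(&context, request, &reply);
    auto end = std::chrono::steady_clock::now();
    if (!status.ok()) return 0.0;

    double rtt_s = std::chrono::duration<double>(end - start).count();
    // reply.server_time_s() is a hypothetical field the server fills with its execution time.
    double transfer_s = rtt_s - reply.server_time_s();
    if (transfer_s <= 0) return 0.0;
    return payload.size() / transfer_s;              // bytes per second
}
```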
Is there any other way to roughly estimate the bandwidth between the server and the client on the server side?
Depending on what you want to achieve with this bandwidth measurement, the tools can vary, but I suggest starting with a network throughput measuring tool such as iptraf, iftop, and so on. Then you don't need to implement this yourself within the limitations of the gRPC API, which isn't meant to measure this bandwidth.
Related
I have an interesting IoT use case. For this example, let's say I need to deploy a thousand IoT cellular displays. Each has a single LED that displays information useful to someone out in the world, for example a sign at the start of a hiking trail that indicates whether the conditions are favorable. Each display needs to receive 3 bytes of data every 5-10 minutes.
I have successfully created a computer-based demo of this system using a basic HTTP GET request and Cloud Functions on GCP. The "device" asks for its 3 bytes every 10 minutes and receives the data back. The issue is that the HTTP overhead takes up 200+ bytes, so the bandwidth usage over cellular will be high.
I then decided to try out Google Cloud Pub/Sub, but quickly realized that it is designed for devices transmitting to the cloud rather than receiving. My best guess is that each device would need its own topic, which seems like it would scale horribly.
Does anyone have any advice on a protocol that works with the cloud (hopefully GCP) and could serve low-bandwidth, receive-only devices? Does the pub/sub structure really not work for this case?
I have two C++ applications communicating over TCP/IP. One application acts as a client and the other acts as a server. We have observed some delay in the client receiving data. This delay keeps increasing during the day, from a few seconds to a few minutes.
How we concluded that the delay is in the communication:
We have a debug statement that prints a timestamp when data is ready for writing in the server, and debug statements in the client when we receive that data. After comparing those timestamps, we realized that the client received the data a few minutes after it was written by the server. Each data item has an ID, so it's easy for us to confirm it's the same data whose timestamps are recorded at the server and client.
Send/receive buffer sizes from the netstat command:
The server has a 1 GB send buffer, which fills up to at most 300 MB when this delay is seen.
The client has a 512 MB receive buffer, which always shows 0 whenever the delay is seen. That indicates the client is processing data fast enough that the sender (server) should not need to slow down.
My assumption is that data is somehow accumulating in the server's send buffer and that this is causing the delay.
Is my assumption correct? Is there a solution for this?
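To verify whether data really is piling up in the server's send buffer, this is the kind of check I would run on the server's connected socket (a minimal sketch, assuming Linux; SIOCOUTQ reports the bytes still queued in the socket's send buffer, i.e. not yet sent or acknowledged):

```cpp
#include <cstdio>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>   // SIOCOUTQ

// Print how much data is queued in the send buffer of a connected TCP socket.
// SIOCOUTQ returns the number of bytes not yet acknowledged by the peer, so a
// steadily growing value means data is piling up on the sending side.
void ReportSendQueue(int sockfd) {
    int unsent = 0;
    if (ioctl(sockfd, SIOCOUTQ, &unsent) == 0) {
        std::printf("bytes queued in send buffer: %d\n", unsent);
    }

    int sndbuf = 0;
    socklen_t len = sizeof(sndbuf);
    if (getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == 0) {
        std::printf("configured send buffer size: %d\n", sndbuf);
    }
}
```

If the reported number keeps growing while the client's receive queue stays at 0, the data really is being held up on the sending side.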
Update 1: One important fact that I forgot to mention is that both apps are running on the same machine. They are supposed to run on different machines, which is why they use TCP, but in the current situation they are running on the same machine, so bandwidth should not be a problem.
I am facing a problem with my Django web server.
We are using Python 3, Django 2, django-rest-framework 3.8, and channels 2.x.
The scenario is that we are receiving data over a UDP connection at a very fast rate (~100 messages per second). The data received is in proto format (you can say we are receiving byte data). Some data gets starved in this process, as the rate of production >>> the rate of consumption. We are implementing throttling, but at 100 concurrent users data still starves. Can anyone help us in this scenario?
If anyone has any new architecture idea please share.
This is surely an interesting problem. It is about a stock market feed.
PS: I cannot post any code as it is my company's, but I can provide clarification on any point whenever needed.
In many stock market data applications this exact problem is solved by having Lightstreamer Server take care of throttling on the WebSocket (full disclosure: I am the CEO at Lightstreamer).
You develop a Data Adapter using the Lightstreamer API to consume data from your UDP connection and inject it into the Lightstreamer Server. Then you can specify a maximum update rate for each client and each subscription, as well as a maximum bandwidth. Lightstreamer throttles the data on the fly, taking into consideration not only the client's capacity but also the network status.
When throttling, you can choose between conflating updates (typical for stock market data) and queuing them.
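Just to make the distinction concrete, here is a language-agnostic sketch of what conflation means (plain C++ for illustration only, not Lightstreamer code): the producer overwrites the pending update for each item, so a slow consumer always gets the most recent value instead of an ever-growing backlog.

```cpp
#include <map>
#include <mutex>
#include <optional>
#include <string>
#include <utility>

// Minimal conflating buffer: the producer overwrites the pending update for a
// key, so a slow consumer always pops the most recent value per instrument.
class ConflatingBuffer {
public:
    void Push(const std::string& symbol, const std::string& update) {
        std::lock_guard<std::mutex> lock(mu_);
        latest_[symbol] = update;   // any older pending update for this symbol is dropped
    }

    std::optional<std::pair<std::string, std::string>> Pop() {
        std::lock_guard<std::mutex> lock(mu_);
        if (latest_.empty()) return std::nullopt;
        auto it = latest_.begin();
        auto item = std::make_pair(it->first, it->second);
        latest_.erase(it);
        return item;
    }

private:
    std::mutex mu_;
    std::map<std::string, std::string> latest_;
};
```

With queuing, by contrast, every update is retained and delivered in order, at the cost of a backlog when the consumer is slower than the producer.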
If I have a server running on my machine, and several clients running on other networks, what are some concepts of testing for synchronicity between them? How would I know when a client goes out-of-sync?
I'm particularly interested in how network programmers in the field of game design do this (or just any continuous network exchange application), where realtime synchronicity would be a commonly vital aspect of success.
I can see how this may be easily achieved on a LAN via side-by-side comparisons on separate machines... but once you branch out the scenario to include clients from foreign networks, I'm just not sure how it can be done without clogging up your messaging system with debug information, and thereby changing the way synchronization would have behaved without that debug info being passed over the network.
So what are some ways that people get around this issue?
For example, do they simply induce/simulate latency on the local network before launching to foreign networks, and then hope for the best? I'm hoping there are some more concrete solutions, but this is what I'm doing in the meantime...
When you say synchronized, I believe you are talking about network latency, meaning that a client on a local network may get its gaming information sooner than a client on the other side of the country. Correct?
If so, then I'm sure you can look for books or papers that cover this kind of topic, but I can give you at least one way to detect this latency and provide a way to manage it.
To detect latency, your server can use a type of trace route program to determine how long it takes for data to reach each client. A common Linux program example can be found here http://linux.about.com/library/cmd/blcmdl8_traceroute.htm. While the server is handling client data, it can also continuously collect the latency statistics and provide the data to the clients. For example, the server can update each client on its own network latency and what the longest latency is for the group of clients that are playing each other in a game.
The clients can then use the latency differences to determine when they should process the data they receive from the server. For example, a client is told by the server that its network latency is 50 milliseconds and the maximum latency for its group is 300 milliseconds. The client then knows to wait 250 milliseconds before processing game data from the server. That way, each client processes game data from the server at approximately the same time.
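A minimal sketch of that compensation step (the latency numbers and the way the server distributes them are assumptions; the point is only that each client waits out the difference between its own latency and the group's worst latency):

```cpp
#include <chrono>
#include <thread>

// Each client is told its own latency and the worst latency in its group.
// Waiting out the difference means all clients apply a given server update at
// roughly the same wall-clock moment.
void ApplyUpdateSynchronized(int own_latency_ms, int max_group_latency_ms,
                             void (*apply_update)()) {
    int wait_ms = max_group_latency_ms - own_latency_ms;   // e.g. 300 - 50 = 250 ms
    if (wait_ms > 0) {
        std::this_thread::sleep_for(std::chrono::milliseconds(wait_ms));
    }
    apply_update();   // process the game data after the compensating delay
}
```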
There are many other (and probably better) ways to handle this situation, but that should get you started in the right direction.
So we have a server with a known address, port, and IP. We are developing that server, so we can implement on it whatever we need to help. What are the standard/best practices for data transfer speed management between a C++ Windows client app and a C++ server?
My main question is how to find out how much data can be uploaded/downloaded from/to the client over his low-speed network to my relatively super-fast server. (I need it to set up the bit rate of his live audio/video stream.)
My attempt at explaining point 3:
We do not care how fast our server is; it is always faster than needed. We care about the client trying to stream his media out to our server. He streams live video data, encoded via ffmpeg, to our server. But he has, say, ADSL with 500 kb/s of outgoing traffic. He also uses ICQ or whatever, so he has even less than 500 kb/s available. And he wants to stream live video! So we need to set up our ffmpeg to encode video with respect to the bit rate the user can actually provide. We develop both the server side and the client side, so we need a way of finding out how much the user can currently upload per second (the value can change dynamically over time).
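The simplest thing I can think of, sketched under the assumption of a plain TCP socket (POSIX calls here; the same idea applies with Winsock on the Windows client): the server counts the bytes that actually arrive from the client per time window and reports the result back, and the client sets the ffmpeg bit rate somewhat below that measured value.

```cpp
#include <chrono>
#include <cstdio>
#include <sys/types.h>
#include <sys/socket.h>

// Server side: read the client's upload stream and report the effective
// throughput once per second. The client can use this value (minus a safety
// margin) as the target bit rate for its ffmpeg encoder.
void MeasureUploadRate(int client_fd) {
    char buf[64 * 1024];
    size_t window_bytes = 0;
    auto window_start = std::chrono::steady_clock::now();

    for (;;) {
        ssize_t n = recv(client_fd, buf, sizeof(buf), 0);
        if (n <= 0) break;                 // client disconnected or error
        window_bytes += static_cast<size_t>(n);

        auto now = std::chrono::steady_clock::now();
        double elapsed = std::chrono::duration<double>(now - window_start).count();
        if (elapsed >= 1.0) {
            double kbps = (window_bytes * 8.0 / 1000.0) / elapsed;
            std::printf("client upload: %.1f kb/s\n", kbps);
            // TODO: send this value back to the client so it can adjust the encoder.
            window_bytes = 0;
            window_start = now;
        }
    }
}
```

A windowed byte count like this is crude, but it reflects what the client can actually push through right now, which is what the encoder bit rate needs to track.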
Check this CodeProject article.
It's .NET, but you can try to figure out the technique from there.
I found what I wanted: "thrulay, network capacity tester", a C++ code library for tracking available bandwidth in real time on clients. There is also "Spruce", which is open source as well. It is built on some Linux-specific code, but I use the Boost library, so it will be easy to rewrite.
Off-topic: I want to report that there is a group of people on SO downvoting all questions on this topic. I do not know why they are so angry, but they definitely exist.