What is the best way to figure out the transfer rate of a wxWidgets socket? Is there a built-in way to do this, or would I be better off getting the time before a transfer of data and then after it's done and comparing them?
I ask because I want to be able to limit the transfer rate of my sockets to a user-entered value.
Thanks for any help
No, there does not appear to be any mechanism built into wxWidgets to measure socket transfer rate. The best you can do is measure the rate at which you call wxSocketBase::Write.
Note that this won't be perfectly accurate, however. Just because you write to a socket doesn't mean that the data is already sent, much less already received.
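For example, here is a minimal sketch of that approach (the helper name ThrottledWrite, the chunk size, and the rate parameter are illustrative assumptions, not part of wxWidgets): write in small chunks, count the bytes that went out, and sleep whenever you are ahead of the user's target rate.

```cpp
#include <wx/socket.h>
#include <wx/stopwatch.h>
#include <wx/utils.h>

// Hypothetical helper: pace calls to wxSocketBase::Write so the average
// rate stays at or below maxBytesPerSec.
void ThrottledWrite(wxSocketBase& sock, const char* data, size_t len,
                    long maxBytesPerSec)
{
    wxStopWatch timer;             // elapsed time in milliseconds
    const size_t chunk = 4096;     // small chunks so we can pause often
    size_t sent = 0;

    while (sent < len && !sock.Error())
    {
        sock.Write(data + sent, wxMin(chunk, len - sent));
        sent += sock.LastCount();  // bytes actually accepted by this call

        // If we're ahead of schedule, sleep until we're back on the rate.
        long expectedMs = (long)(sent * 1000.0 / maxBytesPerSec);
        long aheadMs = expectedMs - timer.Time();
        if (aheadMs > 0)
            wxMilliSleep(aheadMs);
    }
}
```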
Related
I'm working on a program using a boost::asio::ip::udp::socket where, at a certain moment, I need to send a buffer to a specific endpoint with high priority (I need a guarantee that my data was delivered to the endpoint, and I should see it in a tcpdump capture).
My question is about the function boost::asio::ip::udp::socket::send: with this kind of function, is it possible that the system caches the data and doesn't deliver it immediately?
Do I have to flush the socket if I need to send high-priority data?
What is the best approach to follow for this use case?
Thanks :)
So I'm working with a camera that is connected to the computer by an Ethernet cable and, apparently, has to be accessed as a TCP/IP stream socket.
Basically, I want something like taking an image every 1 second. I noticed though that input data from the camera keeps coming in, while what I want is just to get the most recent data from the camera and nothing else, i.e. only the most current image at that time.
What I read so far is that I need to read the input data multiple times until I reach the 'most current data'. Is this really the only way to do this? I really don't like the idea of one process being busy all the time just to 'throw away' the incoming data from the stream socket.
Can't I, in theory, decrease the 'input buffer size' for the input from the socket so that I can receive only one picture's worth of data? Then every further incoming piece of data would just be wasted, so when the input buffer is flushed once, it gets filled with the newest data, or something like that. (I mean, there has to be some limit on how much input data from the stream can 'pile up' waiting to be processed/read, right? What happens when that limit is reached? Does the further data get thrown away, or is the 'buffer' overwritten with the new data?)
Is that even possible? I'm a complete beginner at this, so I'm just theorizing. If something like that is possible, can anyone show the outline of how to code that? (I have to use the boost asio library on Ubuntu for this stuff)
That would be very helpful!
Yes, it's the only way to do it.
The whole reason for using TCP is that it is a "reliable" protocol with guaranteed delivery, as opposed to UDP.
TCP's job is to deliver the data to the receiver, in the order it was sent, without losing anything. If the data cannot be delivered, the connection eventually gets broken when TCP gives up. But as long as there's an active connection, the receiver is going to get everything that the sender sends.
If you don't want to receive some of the data that the sender sends, you must make whatever arrangements are appropriate with the sender for that to happen. TCP is not going to discard data just because the receiver doesn't want it.
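To make the "read and discard until current" approach concrete, here is a minimal sketch using boost::asio (which the asker mentioned). It assumes each image arrives as a fixed-size frame of frameSize bytes; that framing is an assumption for illustration, not something a real camera protocol guarantees.

```cpp
#include <boost/asio.hpp>
#include <cstddef>
#include <vector>

using boost::asio::ip::tcp;

// Block for one complete frame, then keep overwriting it with any further
// frames already queued in the kernel buffer, so the frame returned is
// the newest one available.
std::vector<char> readNewestFrame(tcp::socket& sock, std::size_t frameSize)
{
    std::vector<char> frame(frameSize);

    // Wait for at least one full frame.
    boost::asio::read(sock, boost::asio::buffer(frame));

    // While another whole frame is already buffered, read over the old one.
    while (sock.available() >= frameSize)
        boost::asio::read(sock, boost::asio::buffer(frame));

    return frame;
}
```

Note that this still reads (and discards) everything the camera sends; it just avoids blocking on stale data.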
I know how to open a UDP socket in C++, and I also know how to send packets through it. When I send a packet, I correctly receive it on the other end, and everything works fine.
EDIT: I also built a fully working acknowledgement system: packets are numbered, checksummed, and acknowledged, so at any time I know how many of the packets I sent, say, during the last second were actually received by the other endpoint. Now, the data I am sending will be readable only when ALL the packets have been received, so I really don't care about packet ordering: I just need them all to arrive. They can arrive in any sequence and it will still be fine, since having them sequentially ordered would be of no use anyway.
Now, I have to transfer a big, big chunk of data (say 1 GB) and I need it transferred as fast as possible. So I split the data into, say, 512-byte chunks and send them through the UDP socket.
Now, since UDP is connectionless, it obviously doesn't provide any speed or transfer-efficiency diagnostics. So if I just try to push a ton of packets through my socket, the socket will just accept them, they will be sent all at once, and my router will forward the first couple and then start dropping them. So this is NOT the most efficient way to get this done.
What I did then was to make a cycle:
Sleep for a while
Send a bunch of packets
Sleep again and so on
I tried to do some calibration and achieved pretty good transfer rates; however, I have a thread that is continuously sending packets in small bunches, and I have nothing but an experimental idea of what the interval and the bunch size should be. In principle, I can imagine that sleeping for a really small amount of time and then sending just one packet at a time would be the best solution for the router, but it is completely unfeasible in terms of CPU performance (I would probably need to busy-wait, since the time between two consecutive packets would be really small).
So is there any other solution? Any widely accepted solution? I assume that my router has a buffer or something like that, so that it can accept SOME packets all at once, and then it needs some time to process them. How big is that buffer?
I am not an expert in this so any explanation would be great.
Please note, however, that for technical reasons there is no way at all I can use TCP.
As mentioned in some other comments, what you're describing is a flow control system. The Wikipedia article has a good overview of various ways of doing this:
http://en.wikipedia.org/wiki/Flow_control_%28data%29
The solution that you have in place (sleeping for a hard-coded period between packet groups) will work in principle, but to get reasonable performance in a real-world system you need to be able to react to changes in the network. This means implementing some kind of feedback where you automatically adjust both the outgoing data rate and the packet size in response to network characteristics, such as throughput and packet loss.
One simple way of doing this is to use the number of re-transmitted packets as an input into your flow control system. The basic idea would be that when you have a lot of re-transmitted packets, you would reduce the packet size, reduce the data rate, or both. If you have very few re-transmitted packets, you would increase packet size & data rate until you see an increase in re-transmitted packets.
That's something of a gross oversimplification, but I think you get the idea.
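For illustration, a rough sketch of such a feedback loop (all names and constants here are assumptions, not from the answer): increase the rate additively while retransmissions are rare and cut it multiplicatively when they spike, the classic AIMD scheme that TCP itself uses.

```cpp
#include <algorithm>
#include <cstddef>

// Adjust send rate and packet size from retransmission feedback.
struct FlowControl
{
    double bytesPerSec = 64 * 1024;   // current target send rate
    std::size_t packetSize = 512;     // current payload size

    // Call once per feedback interval with counts observed in that interval.
    void update(std::size_t sentPackets, std::size_t retransmitted)
    {
        double lossRatio =
            sentPackets ? double(retransmitted) / sentPackets : 0.0;

        if (lossRatio > 0.05)         // heavy loss: back off hard
        {
            bytesPerSec = std::max(8.0 * 1024, bytesPerSec * 0.5);
            packetSize  = std::max<std::size_t>(256, packetSize / 2);
        }
        else if (lossRatio < 0.01)    // clean interval: probe upward
        {
            bytesPerSec += 8 * 1024;
            packetSize   = std::min<std::size_t>(1400, packetSize + 64);
        }
    }
};
```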
I want to know how I can control the rate of my network interface. In fact, I want to receive at a rate of 32 Kbit/s and send the received data to the network at a rate of 1 Mbit/s. Do you have any ideas on how to control the interface's rate, or do you know any tricks that could help?
Thanks in advance.
There is a difference between the data throughput rate and the baud rate of the connection. Generally, you want the baud rate to be as fast as possible (without errors, of course). Some low-level drivers or the OS may allow you to control this, but it is fundamentally a low-level hardware/driver issue.
For the data throughput rate, throttling sending is easy: just don't call send() as often. This requires that you track how much you are sending per time period and limit it with sleeps.
Receiving can work the same way, but you have to consider that if someone is sending faster than you are receiving, the OS receive buffer will fill up; with UDP further packets are eventually dropped, while with TCP flow control simply slows the sender down.
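A minimal sketch of that send-side throttling over a plain POSIX socket (the chunk size and helper name are illustrative assumptions; the rate comes in as a parameter):

```cpp
#include <sys/socket.h>
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <thread>

// Send len bytes, pausing so the average rate stays near bitsPerSec.
void sendThrottled(int sockfd, const char* data, std::size_t len,
                   double bitsPerSec)
{
    using clock = std::chrono::steady_clock;
    const std::size_t chunk = 1024;
    const auto start = clock::now();
    std::size_t sent = 0;

    while (sent < len)
    {
        ssize_t w = send(sockfd, data + sent,
                         std::min(chunk, len - sent), 0);
        if (w <= 0)
            break;                    // error handling elided
        sent += (std::size_t)w;

        // Sleep until the wall clock catches up with the target rate.
        std::this_thread::sleep_until(
            start + std::chrono::duration<double>(sent * 8.0 / bitsPerSec));
    }
}
```

The 32 Kbit/s receive side from the question would pace its recv() calls the same way.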
You can do this; you just have to keep track of time and take care not to recv() more or less than 32 Kbit per second (you can control how much you read via the function arguments), and apply the same practice to send().
I've done this "the hard way" (dunno if there is an easier way). Specifically, I did it by controlling the rate at which I called send() and/or recv(), and how much data I indicated I was willing to send/receive in each of those calls. It takes a bit of math to do it right, but it's not impossible.
I am making a multiplayer game in C++:
The clients simply take commands from the users, calculate their player's new position, and communicate it to the server. The server accepts such position updates from all clients and broadcasts each client's update to every other client. In such a scenario, what parameters should determine the time gap between consecutive updates (I don't want too many updates choking the network)? I was thinking the maximum ping among the clients should be one of the contributing parameters.
Secondly, how do I determine this ping/latency of the clients? Other threads on this forum suggest using "raw sockets" or using the system's ping command and collecting the output from a file. Do they mean using something like system('ping "client ip address" > file'), or forking and exec'ing a ping command?
This answer is going to depend on what kind of multiplayer game you are talking about. It sounds like you are talking about an MMO-type game.
If this is the case, then it will make sense to use an 'ephemeral channel', which basically means the client can generate multiple movement packets per second, but only the most recent movement packets are sent to the server. If you use a technique like this, then you should base your update rate on the rate at which players move in the game. By doing this you can ensure that players don't slip through walls or run past a trigger too quickly.
For your second question, I would use boost::asio to set up a service that your clients can 'ping' by sending a simple packet; the service would send a message back to the client, and you could determine the time it took for the packet to be returned.
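A minimal sketch of the client side of that idea (the port, address, and payload are arbitrary assumptions; the service on the other end just has to echo the datagram back):

```cpp
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

using boost::asio::ip::udp;

int main()
{
    boost::asio::io_context io;
    udp::socket sock(io, udp::endpoint(udp::v4(), 0));   // any local port
    udp::endpoint server(boost::asio::ip::make_address("127.0.0.1"), 40000);

    char payload[] = "ping";
    auto t0 = std::chrono::steady_clock::now();

    sock.send_to(boost::asio::buffer(payload), server);

    char reply[16];
    udp::endpoint from;
    sock.receive_from(boost::asio::buffer(reply), from); // blocks for echo

    auto rtt = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - t0);
    std::cout << "round trip: " << rtt.count() << " ms\n";
}
```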
If you're going to end up doing raw-packet stuff, you might as well roll your own ICMP packet; the structure is trivial (http://en.wikipedia.org/wiki/Ping).
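For reference, the echo-request header really is that small; a sketch of the layout per RFC 792 (sending it requires a raw socket, and usually root privileges):

```cpp
#include <cstdint>

#pragma pack(push, 1)
struct IcmpEchoHeader
{
    std::uint8_t  type;       // 8 = echo request, 0 = echo reply
    std::uint8_t  code;       // always 0 for echo
    std::uint16_t checksum;   // ones' complement sum over the whole message
    std::uint16_t identifier; // conventionally the process id
    std::uint16_t sequence;   // incremented for each ping sent
};
#pragma pack(pop)
```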
The ENet library does a lot of the networking for you. It calculates latency as well.