I am coding a real-time networked program over UDP using the SFML libraries. The server will handle all of the processing, sending packets to the client and vice versa.
I need a method to synchronize the screen updates, because there will be a real-time user interface on both sides of the network. What I am thinking is that the client and server will each open two ports: one for sending packets and the other as a sort of 'verifying' port.
Once the updates are made and verified, both loops will send a byte over the network indicating that their side is 'ready'. Once a side receives that byte, it will know the screen is ready to update on both ends, and then render the updates.
My question is: how would I do this in C++ using the SFML libraries? Is the logic correct, and will I encounter network errors, etc.?
Comment if anything needs clearing up.
You should read up on the subject to see how it really works, for example: How do the protocols of real-time strategy games such as Starcraft and Age of Empires look?
There is nothing specific to know in terms of SFML, as the theory applies to every programming language that uses sockets. And you haven't provided the code you're struggling with, so I can't help without a Minimal, Complete, and Verifiable example.
But one thing I will say: never wait on the network to render. That will be laggy as hell, and if you want "real-time" you must simulate it without true "real-time", as there will always be latency with networking.
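To make that concrete, here is a minimal sketch of what "don't wait on the network to render" looks like with SFML's sf::UdpSocket; the port number and the update handling are placeholders, not your actual protocol:

    #include <SFML/Network.hpp>

    // Set up once: a non-blocking UDP socket so receive() never stalls a frame.
    bool setupSocket(sf::UdpSocket& socket)
    {
        if (socket.bind(54000) != sf::Socket::Done)   // placeholder port
            return false;
        socket.setBlocking(false);
        return true;
    }

    // Call once per frame, just before rendering.
    void pollNetwork(sf::UdpSocket& socket)
    {
        sf::Packet packet;
        sf::IpAddress sender;
        unsigned short senderPort;
        // Drain everything that has arrived; NotReady simply means "nothing yet".
        while (socket.receive(packet, sender, senderPort) == sf::Socket::Done)
        {
            // applyRemoteUpdate(packet);   // hypothetical: merge into local state
        }
        // Rendering happens every frame regardless of whether anything arrived.
    }

The key point is the non-blocking socket: when nothing has arrived, receive() returns immediately with sf::Socket::NotReady, so the frame still renders on time.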
And if you go with UDP, you have no choice but to open two ports, one on the server and one on the client. But I would argue against using a separate "confirmation" port, as it is useless: you'll be sending packets multiple times a second anyway.
See how to use sockets with SFML. I guess you already know this, but it might be useful for people coming from Google.
See SFML networking rts, which explains what to do far better than I could.
From #muhwu:
Generally, if you are going for a real-time game with timing-critical elements, you should be placing the commands players make slightly in the future to compensate for the latency. This can be a few hundred milliseconds at most.
From #sean-middleditch:
Note also that many articles are just really out of date. TCP for instance is a perfectly good choice for networking in a game like an RTS, especially with lock-step networking, and is even the protocol of choice for many smaller MMOs. Despite this, almost every article around on networking is about building a custom UDP protocol.
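Both quotes point at the same lockstep-style idea: instead of applying a player's command immediately, schedule it for a simulation tick slightly in the future so every peer has time to receive it first. A rough sketch, where the tick rate, the delay, and the commented-out send/apply helpers are all assumptions:

    #include <cstdint>
    #include <map>
    #include <vector>

    struct Command { uint32_t playerId; uint8_t action; };

    // Commands keyed by the simulation tick at which everyone must execute them.
    std::map<uint64_t, std::vector<Command>> scheduled;

    const uint64_t kCommandDelayTicks = 6;   // ~200 ms at 30 ticks/s (assumed rate)

    void onLocalInput(uint64_t currentTick, const Command& cmd)
    {
        // Queue locally *and* send to the other peers with the same target tick,
        // so every machine executes the command on the same frame.
        scheduled[currentTick + kCommandDelayTicks].push_back(cmd);
        // sendToPeers(currentTick + kCommandDelayTicks, cmd);   // hypothetical
    }

    void simulateTick(uint64_t tick)
    {
        for (const Command& cmd : scheduled[tick])
        {
            (void)cmd;
            // applyCommand(cmd);   // hypothetical: mutate game state deterministically
        }
        scheduled.erase(tick);
    }

Because every peer executes the command on the same tick, the simulations stay in step without anyone blocking the render loop on the network.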
Speaking of gamedev.stackexchange.com, you might want to read the questions and answers there, and if that does not help, ask one there.
Asking the same question on SO might not be the way to go, as your question is too broad and has already been answered a number of times (like the first one I linked).
Related
I'm looking for a way to add network emulation to a socket.
The basic solution would be some way to add bandwidth limitation to a connection.
The ideal solution for me would:
Support advanced network properties (latency, packet loss)
Be open source
Have a similar API to standard sockets (or wrap around them)
Work on both Windows and Linux
Support IPv4 and IPv6
I saw a few options that work at the system level, or even as a proxy (Dummynet, WANem, neten, etc.), but those won't work for me, because I want to be able to emulate each socket individually (for example, open one socket with modem emulation and one with 3G emulation). Basically, I want to know how these tools do it.
EDIT: I need to embed this functionality in my own product, so using an extra box or a third-party tool that needs manual configuration is not acceptable. I want to write code that does the same thing those tools do, and my question is how to do it.
Epilogue: In hindsight, my question was a bit misleading. Apparently, there is no way to do what I wanted directly on the socket. There are two options:
Add delays to the send/receive operations (based on #PaulCoccoli's answer):
by adding a delay before sending and receiving, you can get a very crude network simulation: a constant delay for latency, and delayed sends so that you never push more than X bytes per second for bandwidth. A sketch of this approach follows after this list.
Paul's answer and comment were great inspiration for me, so I award him the bounty.
Add the network simulation logic as a proxy (based on #m0she's and others' answers):
Either send the request through the proxy, or use the proxy to intercept the requests, then add the desired simulation. However, it makes more sense to use a ready-made solution instead of writing your own proxy implementation; from what I've seen, Dummynet is probably the best choice (it is what webpagetest.org uses). Other options are in the answers below; I'll also add DonsProxy.
This is the better way to do it, so I'm accepting this answer.
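For the first option, the crude delay approach can be as small as the following sketch; the latency value, the rate, and the commented-out sendRaw() call are placeholders, not a real API:

    #include <chrono>
    #include <cstddef>
    #include <thread>

    // Very crude client-side shaping: sleep before each send so that we add a
    // fixed "latency" and never exceed a target bytes-per-second rate.
    class ThrottledSender
    {
    public:
        ThrottledSender(std::chrono::milliseconds latency, std::size_t bytesPerSecond)
            : latency_(latency), bytesPerSecond_(bytesPerSecond) {}

        void send(const char* data, std::size_t len)
        {
            using namespace std::chrono;
            std::this_thread::sleep_for(latency_);        // constant one-way delay
            auto now = steady_clock::now();
            if (nextSendAllowed_ > now)                   // pace to the byte budget
                std::this_thread::sleep_for(nextSendAllowed_ - now);
            nextSendAllowed_ = steady_clock::now()
                             + milliseconds(1000 * len / bytesPerSecond_);
            (void)data;   // unused until the real send is wired in
            // sendRaw(data, len);   // hypothetical: the real socket send goes here
        }

    private:
        std::chrono::milliseconds latency_;
        std::size_t bytesPerSecond_;
        std::chrono::steady_clock::time_point nextSendAllowed_{};
    };

This gives a constant latency and a rough bandwidth cap, but no loss or reordering, which is why the proxy route ends up being the better answer.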
You can compile a proxy into your software that would do that.
It can be some implementation of a full-fledged SOCKS proxy (like this) or, probably better, something simpler that only serves your purpose (and doesn't require prefixing your communication with the destination and the other SOCKS overhead).
That code could run as a separate process or a thread within your process.
Adding throttling to a proxy shouldn't be too hard. You can:
delay forwarding of data if it passes some bandwidth limit
add latency by adding a timer before read/write operations on the buffers.
If you're working with a connection-based protocol (like TCP), it would be senseless to drop packets, but with a datagram-based protocol (UDP) dropping them would also be simple to implement.
The connection-creation API would be a bit different from normal POSIX/Winsock (unless you do some macro or other magic), but everything else (send/recv/select/close/etc.) stays the same.
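To illustrate the datagram case, the drop logic inside such a proxy can be this small; the forward() call and the loss rate are placeholders:

    #include <cstddef>
    #include <random>

    // Decide whether to forward a UDP datagram, dropping a configurable fraction
    // of them to emulate packet loss. forward() is a hypothetical placeholder for
    // whatever actually writes the datagram to the outgoing socket.
    bool maybeForward(const char* data, std::size_t len, double lossRate)
    {
        static std::mt19937 rng{std::random_device{}()};
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        if (dist(rng) < lossRate)
            return false;              // silently drop, as a lossy network would
        (void)data; (void)len;
        // forward(data, len);         // hypothetical real send
        return true;
    }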
If you're building this into your product, then you should implement a layer of abstraction over the sockets API so you can select your own implementation at run time. Alternatively, you can implement wrappers for each socket function and select whether to call your own version or the system's version.
As for adding latency, you could have your implementation of the sockets API spin off a thread. In that thread, keep a priority queue ordered by time (i.e., this background thread runs a very basic discrete-event simulation). Each "packet" you send or receive gets enqueued along with a delivery time, and each delivery time should have some amount of delay added. I would use a random number generator with a Gaussian distribution.
The background thread would also have to simulate the other side of the connection, though it sounds like you may have already implemented that part?
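A sketch of that background thread; the packet representation, the Gaussian parameters, and the commented-out deliver() call are all assumptions:

    #include <algorithm>
    #include <chrono>
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <random>
    #include <string>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    struct Delayed
    {
        Clock::time_point dueAt;
        std::string payload;                      // the "packet"
        bool operator>(const Delayed& o) const { return dueAt > o.dueAt; }
    };

    std::priority_queue<Delayed, std::vector<Delayed>, std::greater<Delayed>> pending;
    std::mutex pendingMutex;
    std::condition_variable pendingCv;

    // Called by the socket wrapper: enqueue with a Gaussian delay
    // (mean 80 ms, stddev 20 ms here; both numbers are purely illustrative).
    void enqueue(std::string payload)
    {
        static std::mt19937 rng{std::random_device{}()};
        std::normal_distribution<double> delayMs(80.0, 20.0);
        auto delay = std::chrono::milliseconds(
            static_cast<int>(std::max(0.0, delayMs(rng))));
        std::lock_guard<std::mutex> lock(pendingMutex);
        pending.push(Delayed{Clock::now() + delay, std::move(payload)});
        pendingCv.notify_one();
    }

    // The background "discrete event simulation" loop.
    void deliveryLoop()
    {
        std::unique_lock<std::mutex> lock(pendingMutex);
        for (;;)
        {
            if (pending.empty()) { pendingCv.wait(lock); continue; }
            if (Clock::now() < pending.top().dueAt)
            {
                pendingCv.wait_until(lock, pending.top().dueAt);
                continue;
            }
            Delayed d = pending.top();
            pending.pop();
            lock.unlock();
            (void)d;
            // deliver(d.payload);   // hypothetical: hand the packet to the real endpoint
            lock.lock();
        }
    }

enqueue() would be called from the wrapped send/recv path, and deliveryLoop() runs on the background thread, handing packets over only once their randomly delayed delivery time has passed.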
The only one I know of is the Network Link Conditioner for Mac OS X Lion. You need to be a Mac developer to download it, so I cannot put a download link here, only the description from 9to5mac.com: http://9to5mac.com/2011/08/10/new-in-os-x-lion-network-link-conditioner-utility-lets-you-simulate-internet-and-bandwidth-conditions/
This answer might be a partial solution for you when using Linux:
Simulate delayed and dropped packets on Linux. It refers to a kernel module called netem, which can simulate all kinds of network problems.
If you want to work with TCP connections, having "packet loss" could be problematic, since a lot of error handling (like recovering lost packets) is done in the kernel. Simulating this in a cross-platform way could be hard.
You usually add a network device to your network that throttles the bandwidth or latency on a port-by-port basis. You can then achieve what you want just by connecting to the port allocated to the particular type of crappy network you want to test, with no code changes or modifications required.
The easiest way to do this is to add iptables rules to a Linux server acting as a proxy.
If you want it to work without the separate device, try trickle, a software package that throttles the network on your client PC (or the equivalent for Windows).
You may want to check out WANem (http://wanem.sourceforge.net/). WANem is open source and licensed under the GNU General Public License.
WANem allows the application development team to set up a transparent application gateway that can be used to simulate WAN characteristics like network delay, packet loss, packet corruption, disconnections, packet re-ordering, jitter, etc.
I think you could use a tool like Network Simulator. It's free, for Windows.
The only thing to do is to set up your program to use the right ports (and the network settings, of course).
If you want a software only solution that you control, you will have to implement it yourself. I know of no such existing package.
While a wrapper layer over a socket may give you the ability to introduce delay, it won't be sufficient to introduce loss or out-of-order delivery. In order to simulate those, you actually need to intercept the data in transit between the two TCP stacks.
The approach I would recommend is to use a tunneling device (say, tunX). Routes should be set so the client believes the way to the server is through tunX. Additional code (perhaps running in a different thread) would promiscuously intercept traffic on tunX and apply your augmented behavior before forwarding packets over the true physical interface that gets the traffic to your server. The reverse would happen for packets arriving from the server on the physical interface: those packets would be intercepted by the client code, have the behavior applied, and be forwarded through tunX.
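On Linux, creating such a tunX device and reading raw IP packets from it takes only a few lines. A minimal sketch with error handling trimmed; the interception itself is left as a commented-out placeholder:

    #include <cstring>
    #include <fcntl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    // Open a tun device: packets routed to it show up on read(), and anything
    // written to it is injected back into the kernel's network stack.
    int openTun(char* name)   // e.g. "tun0", or "" to let the kernel pick a name
    {
        int fd = open("/dev/net/tun", O_RDWR);
        if (fd < 0)
            return -1;

        ifreq ifr{};
        ifr.ifr_flags = IFF_TUN | IFF_NO_PI;          // raw IP packets, no extra header
        std::strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
        if (ioctl(fd, TUNSETIFF, &ifr) < 0)
        {
            close(fd);
            return -1;
        }
        std::strncpy(name, ifr.ifr_name, IFNAMSIZ);   // report the name actually assigned
        return fd;
    }

    // Interception loop: delay, drop, or reorder buf here before forwarding it
    // over the real physical interface (the forwarding itself is not shown).
    void interceptLoop(int tunFd)
    {
        char buf[2048];
        for (;;)
        {
            ssize_t n = read(tunFd, buf, sizeof(buf));
            if (n <= 0)
                break;
            // augmentAndForward(buf, n);   // hypothetical
        }
    }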
However, since you are testing client software, I am unclear as to why you would want to embed this code in your released software, unless the software itself is a WAN simulating client.
I am developing the protocol for my networked Asteroids game. The first version will just allow each player to have a ship shooting opponents in a scrolling world. Later I will add rocks, many rocks.
It seems non-trivial to implement the protocol on top of UDP; however, UDP is unquestionably the way to go for real-time games. I have studied Gaffer's articles on networking for games, and so I think I will have to implement the following on top of UDP:
Reliability (making sure certain packets arrive by sending acknowledgements; this also implies making sure that the acknowledgements themselves arrive). A header sketch follows after this list.
Fragmentation and reassembly of packets that are larger than the recommended UDP packet size.
Detecting dead clients.
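For the reliability part, I was picturing a small header prepended to every datagram, something like this sketch (the field names and sizes are just my first guess, following Gaffer's acknowledgement scheme):

    #include <cstdint>

    // Prepended to every UDP payload. Packed so the wire format is explicit.
    #pragma pack(push, 1)
    struct PacketHeader
    {
        uint32_t protocolId;   // magic value so stray datagrams can be rejected
        uint16_t sequence;     // this packet's sequence number (wraps around)
        uint16_t ack;          // newest sequence number seen from the remote end
        uint32_t ackBits;      // bitfield acknowledging the 32 packets before `ack`
    };
    #pragma pack(pop)

    static_assert(sizeof(PacketHeader) == 12, "header must be exactly 12 bytes");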
Have I left anything major out?
The issue is: will you simply end up re-inventing TCP / stream-socket functionality? Once you start considering ACK/NAK, packet ordering and re-assembly, and endpoint integrity, you're essentially re-inventing the wheel. Not only that, but you're doing it in user space, without the extensive path optimizations and memory-management strategies provided by the kernel. I'm not saying UDP is the wrong choice, but have a look at the source for the multiplayer/network code in the Q3 engine, etc. A lot depends on payload contents and redundancy, i.e., can you afford to drop frames/packets?
I'm working on an online turn-based card game for the PC. It features a lobby that will automatically update the list of active games, so I'll be sending many updates to many clients, and I will have a game server for this. Lag is not too big a deal for me; if players sometimes have to wait an extra quarter of a second until their card is shown, it doesn't really concern me. What concerns me is reliability and stability. I want to be able to host many four-player games, and I also allow people to watch a specific game.
I will also need to log players in and remember their session, so that if they get disconnected they can get back into the game.
What I am debating is whether I should go with ENet, which is UDP-based with reliability added, or plain old TCP/IP.
I will eventually need to be able to send players additional content, such as extra decks, which will be in the form of a zip file. But for that I'm sure there is a library that could help me fetch these from an HTTP source.
If anyone has experience with either or both of these, I'd appreciate your input.
Thanks!
I'd stick with classic TCP, not some library that has hacked correct ordering and reliable packet delivery into a protocol not designed for it.
ENet's claim is to provide a few of the benefits of TCP while delivering the speed needed for "real-time" games. That phrase makes me think of an FPS, which isn't your app at all.
I haven't worked with ENet, but I had a quick read of its documentation, and it seems like it could be useful in certain circumstances.
First question: do you need a reliable packet-delivery mechanism, or is unreliable OK? ENet can do reliable, but at that point it's getting fairly similar to TCP, and unless you need its multi-stream support, I'm not sure it's worth fooling with.
If unreliable is OK, then the next question is: do you need the packet-fragmentation support that ENet has? If your packets are going to be small, I'd say just use UDP directly.
I'm unclear on how ENet uses UDP sockets and how it manages connections, so I can't say whether you're better off with lots of open TCP sockets or lots of open ENet connections.
Lots of questions, I am sorry!
I am writing a voice-chat (VoIP) application, and I was thinking of doing a custom implementation of the IP and UDP headers, along with a small amount of extra information, mainly a sequence number. That sounds a lot like RTP, yes, but I'm mainly interested in just the sequence number or timestamp, and implementing the whole of RTP myself sounds like a nightmare given all the complexity involved and the data I'm not likely to use.
The target OS for the application is Windows XP and above. I have read http://msdn.microsoft.com/en-us/library/ms740548%28v=vs.85%29.aspx on the topic of raw sockets in Windows, and now I just want some confirmation.
I also have some general networking questions.
Here are the questions:
1) According to MSDN, you cannot send custom IP packets with a source address that is not on the network list. I understand it from a security point of view, but is there any way around this? My idea was to have, for example, two clients open UDP communication to a non-NAT-protected server, and then have the clients spoof the source header to make it look like packets come from the server instead of from each other, thereby eliminating the need for the server to relay the data to get through NAT, which would improve latency.
I have heard of WinPcap, but I don't want each client to have to install any third-party apps. Considering the number of DoS attacks, surely there must be some way around this, like spoofing the network table the OS uses to check whether the source header is legit? Would this trigger anti-virus systems?
I feel it would be really fun to actually toy with IP headers and above instead of just using predefined headers.
2) I've had more issues than I can almost tolerate getting free RTP libraries like JRTPLIB to work (it is probably very good anyway; it just doesn't want to work for me), and I am thinking of just writing my own interpretation on top of UDP. Do application-level protocols like RTP simply build their header directly inside the UDP payload, with the actual data after it? I suspect so, considering the encapsulation process, but I just want to make sure.
If so, one does not need to create a raw socket to implement an application-level protocol, just an ordinary UDP socket and then your own payload interpretation on top?
3) RTP does not give any performance boost compared to plain UDP, since it adds more headers; all it does is make sure packets are played out in a roughly correct manner based on timestamps and sequence numbers, right?
Is it really that useful to use an RTP implementation for basic VoIP needs instead of adding basic sequencing yourself? I realise that for video conferencing you probably really don't want frames to play out of order, but in an audio conversation, would you really notice it?
4) If my solution in #1 is not applicable and I have to use a server as a data relay between clients, would multicast be a good way to reduce server load? Is multicast supported widely enough in routing hardware?
5) This is related to question 1). Why do routers/firewalls allow things like UDP hole punching? For example, two clients first connect to the server, then the server gives each client the port/IP of the other clients, so the clients can talk to each other on those ports.
Why would a firewall allow data to be received from a different IP than the one used when opening the connection on that very port? It sounds like a big security hole that should easily be filtered. I understand that source-IP spoofing would trick it, but this?
6) To set up a UDP session between two parties (a client which is behind NAT, and a server which is not), does the client simply have to send a packet to the server for the session to be allowed through the firewall, meaning the client can then receive from the server too?
This is based on the article at http://en.wikipedia.org/wiki/UDP_hole_punching.
7) Is SIP dependent on RTP? For some reason I got that impression, but I can't find data to back it up. I may add softphone functionality to my VoIP client in the future and want to make sure I have a good foundation (RTP if I really must, otherwise my own UDP interpretation).
Thanks in advance!
1) Raw sockets seem unnecessary for this application.
2) Yes.
3) RTP runs on top of UDP, so of course it adds overhead. In many ways, RTP (ignoring RTCP) is pretty much the bare minimum already; if you implemented a half-way decent alternative, it would save you a few bytes at best, and you wouldn't be able to use any of the many RTP test tools. (A sketch of the header follows this list.)
7) SIP is completely independent of RTP. SIP is used to initiate sessions. SDP is the protocol commonly transported by SIP, and it is SDP that negotiates and controls the RTP video/voice streams.
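To back up points 2) and 3): the fixed RTP header is only 12 bytes, and building it by hand at the start of an ordinary UDP payload looks roughly like this (a sketch assuming no CSRC list and no header extension; the payload type and SSRC values are up to your application):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    #if defined(_WIN32)
    #include <winsock2.h>        // htons/htonl
    #else
    #include <arpa/inet.h>
    #endif

    // Build a minimal 12-byte RTP fixed header (RFC 3550) followed by the audio payload.
    std::vector<uint8_t> buildRtpPacket(uint16_t seq, uint32_t timestamp,
                                        uint32_t ssrc, uint8_t payloadType,
                                        const uint8_t* audio, std::size_t audioLen)
    {
        std::vector<uint8_t> pkt(12 + audioLen);
        pkt[0] = 0x80;                    // version 2, no padding, no extension, CC = 0
        pkt[1] = payloadType & 0x7F;      // marker bit cleared
        uint16_t nseq  = htons(seq);
        uint32_t nts   = htonl(timestamp);
        uint32_t nssrc = htonl(ssrc);
        std::memcpy(&pkt[2], &nseq, 2);
        std::memcpy(&pkt[4], &nts, 4);
        std::memcpy(&pkt[8], &nssrc, 4);
        if (audioLen > 0)
            std::memcpy(&pkt[12], audio, audioLen);
        // The whole buffer is then handed to an ordinary UDP sendto();
        // no raw socket is needed, which is exactly the point of answer 2).
        return pkt;
    }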
I've already developed some online games (chess, checkers, a Risk clone) using server-side programming (PHP and C++) and Flash (for the GUI). Now I'd like to develop some kind of game portal (like www.mytopia.com). In order to do so, I must decide how to structure my server logic.
At first I thought of programming a separate game server for each game. That way, each game would be an isolated program that opens a specific port to the client. I also thought of creating different servers for each game room (each game room allowing 100 clients connected at the same time). Of course, I'd use a database to link everything together (high scores, etc.).
Then I realised that is probably not the best way to structure a game portal server. I've been reading about threaded programming and I think that is the best way to do it. So I thought of something like a connection thread that only listens for new client connections (that way every type of game client connects on a single port), validates the client (login) and then transfers the client to the specific game thread (chess thread, checkers thread, etc.). I'll be using select (or one of its variants) to handle the clients asynchronously (I guess "one thread per client" is not suited this time). This structure seems best, but how do I handle the communication between the threads? I've read about race conditions and global variables, so one solution is to have a global clients container (vector or map) that is locked by the connection thread or a game thread every time it changes (new connection, logout, state change, etc.). Is that right?
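Something like this is what I have in mind for the shared client list (just a sketch; the names and fields are made up):

    #include <map>
    #include <mutex>
    #include <string>

    struct ClientInfo
    {
        int         socketFd;   // the client's socket descriptor
        std::string login;      // validated user name
        int         gameId;     // which game thread currently owns this client
    };

    // Shared between the connection thread and the game threads.
    std::map<int, ClientInfo> g_clients;     // keyed by socket descriptor
    std::mutex                g_clientsMutex;

    // Called by the connection thread after a successful login.
    void addClient(int fd, const std::string& login)
    {
        std::lock_guard<std::mutex> lock(g_clientsMutex);
        g_clients[fd] = ClientInfo{fd, login, /*gameId=*/-1};
    }

    // Called by a game thread when it takes ownership of a client.
    void assignToGame(int fd, int gameId)
    {
        std::lock_guard<std::mutex> lock(g_clientsMutex);
        auto it = g_clients.find(fd);
        if (it != g_clients.end())
            it->second.gameId = gameId;
    }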
Has anyone worked on anything like this? Any recommendations?
Thanks very much
A portal needs to be robust, scalable and extensible so that you can cope with larger audiences, more games/servers being added, etc. A good place to start is to look into the way MMOs and distributed systems are designed. This might help too: http://onlinegametechniques.blogspot.com/
Personally, I'd centralise the users with an authentication server, then run a separate game server for each game that validates users against the authentication server.
If you use threads you might have an easier time sharing data but you'll have to be more careful about security for exactly the same reason. That of course doesn't address MT issues in general.
TBH, I've been working on a VoIP system where the server can send out many streams and the client can listen to many streams. The best architecture I've come up with so far is to bind to a single port and use sendto and recvfrom to handle communications. If I receive a valid connect packet from a client at a new address, I add the client to an internal list and begin sending audio data to it. Packet receive and response management (RRM) all happens in one thread. The audio, as it becomes ready, is then sent to all the clients from the audio thread. The clients respond saying they received the audio, and that gets handled on the RRM thread. If a client fails to respond for longer than 30 seconds, I send a disconnect and remove it from my internal list. I don't need to be particularly fault tolerant.
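A rough sketch of that single-port RRM receive loop, using plain BSD sockets; the connect-packet validation and the timeout sweep are left as commented placeholders:

    #include <cstdint>
    #include <ctime>
    #include <map>
    #include <utility>

    #include <netinet/in.h>
    #include <sys/socket.h>

    struct ClientState
    {
        sockaddr_in addr;       // where to sendto() for this client
        time_t      lastSeen;   // for the 30-second timeout
    };

    // Clients are keyed by (address, port) so each datagram can be matched to its sender.
    using ClientKey = std::pair<uint32_t, uint16_t>;

    void rrmLoop(int sock)      // sock: a UDP socket already bound to the one port
    {
        std::map<ClientKey, ClientState> clients;
        char buf[2048];

        for (;;)
        {
            sockaddr_in from{};
            socklen_t fromLen = sizeof(from);
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                 reinterpret_cast<sockaddr*>(&from), &fromLen);
            if (n <= 0)
                continue;

            ClientKey key{from.sin_addr.s_addr, from.sin_port};
            auto it = clients.find(key);
            if (it == clients.end())
            {
                // if (isValidConnectPacket(buf, n))   // hypothetical validation
                clients[key] = ClientState{from, time(nullptr)};   // new client
            }
            else
            {
                it->second.lastSeen = time(nullptr);   // ack / keep-alive
            }

            // Clients whose lastSeen is older than 30 seconds would be sent a
            // disconnect and erased here (sweep omitted for brevity).
        }
    }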
As for how to do this in a game situation, my main thought was to send a set of impulse vectors (the current one and the 'n' previous ones). That way, if the client moves out of sync, it can check how far out of sync it is by looking at the last few impulses it should have received for a given object. If they don't correspond to what it has, it can either correct itself or, if it is too far out of sync, ask for a game-state reset. The idea is to avoid doing a full game-state reset, as that is going to be quite expensive.
Obviously each packet would be hashed so the client can check the validity of incoming packets, but this also allows the client to ignore an invalid packet and still get the information it needs in the next update, which helps avoid a state reset.
On top of that, it's worth doing things like keeping an eye on where the client is looking. There is no point in sending updates to a client when it is looking in the other direction or there is something in the way (i.e., the client can't see the object it is being told about). This also limits the effectiveness of a wallhack that sniffs the incoming packets. You have to start sending things a tad before the object becomes visible, however, or you will get things popping into existence at inconvenient moments.
Anyway, those are just some random thoughts. I have to add that I've never actually written a multiplayer engine for a game, so I hope my musings help you a bit :)