I am building a 1 vs 1 Tetris game using cocos2d-x (C++) and Node.js (WebSocket).
Currently, I am working on a PVP feature between two players.
The basic logic looks like this:
1. Client1 completes a horizontal line and destroys it.
2. Client1 sends an attack packet to the server.
3. The server sends the attack packet to Client2.
4. Client2 gets an additional line.
The problem is that this feels a little slow, even though I run the update() function at 10 frames per second. The delay must be caused by the long route (Client1 => server => Client2).
I am thinking about sending the packet to the opponent directly.
In that case, Client1 needs to know Client2's IP address.
My questions are:
1) In a real game, isn't it dangerous for Client2 to let Client1 know his IP address?
2) In general, how do game developers handle this kind of latency issue?
1) In a real game, isn't it dangerous for Client2 to let Client1 know his IP address?
It's generally not a good idea to expose your users' IP addresses, at least not without notice. While it may be legal (I'm not a lawyer), if someone misuses the address, the affected user may get upset and complain to you.
2) In general, how do game developers handle this kind of latency issue?
First, what latency did you actually measure? Have you tried hosting the server on your local network to see how it behaves? To deal with this problem, we usually define a required latency budget (e.g. 100 ms, 200 ms, 500 ms) and design the game so that information can be propagated with a delay without hurting the gaming experience. For your attack logic, the usual trick is to add a charge timer before the attack lands, so that both clients agree on the same wall-clock time for the actual attack. So:
1. Client1 completes a horizontal line and destroys it.
2. Client1 sends an attack packet to the server and starts a 1-second charging timer (it may show a charging bar).
3. The server sends the attack packet to Client2.
4. Client2 starts a timer and shows the bar. Note that you need to adjust the time to account for the round trip.
5. Client2 gets an additional line 1 second after step 2. Client1 shows the exact same scene at the same time.
Note that Client2 actually gets attacked later than in your original design, but because the player sees a charging bar, the experience still feels smooth.
You may also want to adjust the "1 second" according to your latency requirement.
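A minimal sketch of this charging-timer idea, assuming cocos2d-x 3.x; GameScene, sendAttackPacket(), addGarbageLine(), showChargeBar() and showAttackEffect() are hypothetical names, and the latency estimate would come from your own ping measurement:

```cpp
#include "cocos2d.h"
#include <algorithm>

// Sketch only: both clients apply the attack roughly kChargeTime after the
// line clear, instead of Client2 applying it as soon as the packet arrives.
class GameScene : public cocos2d::Layer
{
public:
    static constexpr float kChargeTime = 1.0f;   // tune to your latency budget

    // Client1: a line was cleared locally.
    void onLineCleared()
    {
        sendAttackPacket();               // tell the server right away
        showChargeBar(kChargeTime);       // visual feedback while "charging"
        this->scheduleOnce([this](float) {
            showAttackEffect();           // what Client1 shows at the agreed attack time
        }, kChargeTime, "attack_fx");
    }

    // Client2: an attack packet arrived; latencyMs is an estimate of the trip time.
    void onAttackPacket(float latencyMs)
    {
        float remaining = std::max(0.0f, kChargeTime - latencyMs / 1000.0f);
        showChargeBar(remaining);
        this->scheduleOnce([this](float) {
            addGarbageLine();             // the actual penalty line lands here
        }, remaining, "attack_apply");
    }

private:
    void sendAttackPacket();
    void addGarbageLine();
    void showChargeBar(float seconds);
    void showAttackEffect();
};
```

The charge bar hides the network delay: both players see the same countdown, and only its starting point differs by the (compensated) trip time.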
Related
So we have a very large database with around 300,000 URLs. These URLs have to be pinged and data fetched from them. (The URLs are radio stations that are playing songs; the data is their metadata.)
Some of them are sometimes inactive and sometimes active.
At any given time, around 80,000 are active. Some respond slowly, some respond quickly. I have a server and I am thinking of doing this in C++.
My goal is to ping and parse (or crawl) them within 1 minute and keep repeating the process, because the information (the song playing on them) can change over time, mostly every 2-7 minutes. But I am not sure if that is possible.
What should be my approach to do it?
I have thought of creating two programs. One would run twice a day to test whether each URL is active and how long it generally takes to respond (does it usually respond slowly, or is it just responding slower than usual right now).
The other would do the actual crawling, where the fastest URLs are crawled first and some dedicated threads are reserved for URLs which respond faster.
I would love better ideas or solutions for this. Can anyone tell me how to do the math to work out the number of dedicated threads I should allot to each group so as to get the results in the least amount of time?
You don't need raw CPU performance (that's not your bottleneck at the moment), but you do need to avoid network-layer stalls: if the request timeout is 60 seconds, you have 16 threads, and you hit 16 very slow servers (which will eventually time out), you are stalled for 60 seconds and not processing anything else.
So I would start with, say, 500 threads (and a 15-30 s timeout, if you know even the very slow radios can fit within it), keep some statistics about their turnaround, and keep adding more worker threads dynamically for every request which didn't get a response within 2-3 seconds. 80,000/500 = 160, so each "normally quick" worker thread then has to ping around 160 URLs; if each takes 2 seconds, that's still 320 seconds ≈ 5 minutes! So 500 sounds like a minimum.
That said, having 500+ threads will somewhat burden CPU and memory (not sure how much; with a decent thread/memory implementation, 500 doesn't sound like much for a modern x86 CPU with gigabytes of RAM, and even 5000 still sounds reasonable), but I would worry a lot more about the network layer and any firewalls along the way. You need server-grade networking for that volume of requests (if I tried something like that from my home, my own router would filter me out with default settings, detecting it as some kind of DoS attack).
So gather some statistics on how long a request takes on average, then take your target time (2-7 min) and divide the number of URLs by the number of requests one thread can finish in that window; e.g. with an average ping of 5 s and a round time of 3 min, that's 300,000/(3*60/5) = 8333.33 threads needed at least. Then you will have to profile your app to verify that with 8000+ threads it doesn't choke on something else and really handles the task as expected.
(The other option is to fire asynchronous HTTP requests from a single thread, but that sort of thing creates its own threads for each task anyway, so I would rather manage the threads myself and use synchronous HTTP calls.)
And thinking about the dynamic-growth mechanics... you can keep counters of how many new requests were added in the last second and how many finished (either responded or failed); after a few seconds of running, these should start to form a "throughput" statistic, and if the throughput is under the desired threshold, you can add more threads.
About active/inactive... keep the response time, last-seen and last-check timestamps together with the URL, and add some further logic to check a URL only when it makes sense (e.g. not within the next 60 s if it just responded, or check an inactive one only after 6 h from the last test). You also need to avoid checking the same URL in two different threads at the same time, so some central manager code should feed the threads their targets (maybe a thread-safe FIFO queue... actually you can use its size to estimate how well the worker threads are keeping up, so you can add more threads when you see the queue is not emptying fast enough; that avoids adding the statistics code to the threads themselves).
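A minimal sketch of such a central feeder, assuming plain C++11 threads; fetchMetadata() stands in for whatever synchronous HTTP + parsing code you use:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Thread-safe FIFO feeding the worker threads; its size() can double as the
// "is the pool keeping up?" statistic mentioned above.
class UrlQueue {
public:
    void push(std::string url) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(url)); }
        cv_.notify_one();
    }
    std::string pop() {                           // blocks until work is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        std::string url = std::move(q_.front());
        q_.pop();
        return url;
    }
    size_t size() {
        std::lock_guard<std::mutex> lock(m_);
        return q_.size();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> q_;
};

// Hypothetical worker job: synchronous HTTP request + metadata parse.
void fetchMetadata(const std::string& url) { /* ... */ }

int main() {
    UrlQueue queue;
    std::vector<std::thread> workers;
    for (int i = 0; i < 500; ++i)                 // starting pool size; grow it if queue.size() keeps climbing
        workers.emplace_back([&queue] {
            for (;;) fetchMetadata(queue.pop());  // workers never exit in this sketch
        });
    // ... the central manager pushes due URLs into `queue` here ...
    for (auto& w : workers) w.join();
}
```

Because only the manager decides what goes into the queue, the same URL is never handed to two workers at once, and the queue length tells you when to spawn more threads.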
Question: Is it possible to update 100+ objects in the Flash Player over socket connections? More details and my own tries below!
Details
For my internship I got the time to create a multiplayer physics game. I worked on it for a steady three months. My internship is coming to an end and I couldn't finish the game.
My problem is that it's hard to send multiple packets to the server and back each time-step. The packets I send are position updates of the objects and of the other clients' mouse cursors.
I will try to explain the network/game flow.
Client connects to the server using the binary Socket class in AS3
Server asks for verification and the client sends its name and thumbnail.
Server waits until 4 clients are connected (some matchmaking, etc.)
Server picks 4 clients and runs them on a separate thread (combined as a team)
Client sends its performance score (range 1-100) to the server.
Server makes the best client the host machine for the physics and the other 3 the slaves
Host game sets up the level and creates around 1-100 shapes in it (primary shapes and complex shapes like bridges, motors, springs)
Every time-step the host gets all updated properties of the shapes and sends them to the clients (x, y, rotation, sleep)
The client applies all the shape properties to the correct shapes
I tried different time-steps and noticed that down to a time-step of 1/15 second the client (slave) won't notice any lag in the game. I also tried picking a lower time-step and tweening the movement of the shapes, but that gave some strange movement on the client (slave) side.
I will give an example of a single object update packet.
<O|t=s:u|x=201|y=202|f=automaticoo</O
<O|t=m:p|x=100|y=345|f=automaticoo</O
I noticed that the Flash Player can stack a lot of packets in the buffer before sending. For example, if I send a lot of packets at once it stacks them up and sends them together to the server. With faster time-steps you don't get more updates on the client (slave) side, just more updates in the same packet row.
Tries
Use the new RTMFP (UDP & P2P) protocol for updates (a little better in performance but less reliable).
Code my entire socket server in C++ instead of AIR (with the ServerSocket class); better performance, but I noticed the laggy part is not the server but the Flash Player.
Use the ByteArray compress method and the AMF serialized format (performance about the same, except the C++ server can't deserialize the messages).
Do you guys think it is possible for the Flash Player to handle so many update requests each time-step?
Discoveries
There is a multiplayer stick-arena game written in ActionScript 3.0. They used a lot of tricks, and even then I get a ping of about 300 ms, and it only constantly updates the players (4 players in a lobby).
Sorry for the long post.
I believe your problem comes down to the type of game and data.
This again is broken up into:
server speed (CPU needed for the calculations + RAM requirements for world/player data)
connection speed (bandwidth on server)
data size (how much info is needed and how often)
player interaction model (event-driven or FPS-style)
distance from client to server (ping)
MMORPG
E.g. World of Warcraft is, to my knowledge, on the non-PvP worlds using a "client is a viewport", "client sends keystrokes", "server validates, performs, and tells the client what happens" model on a CLIENT-SERVER basis.
This lets the game tolerate a lot of latency, as you only need to transfer commands from the client and then results back to the client. The rest is drawn on the client.
It's very event-driven, and from the moment you click an icon or press a key, it's okay that your "spell" needs some time to fire on the server. Secondly, there is no player collision needed. This also lets the server process less data and keeps the server's CPU requirements smaller.
Counter-Strike / Battlefield etc.
An FPS is fast-paced action with quick responses and needs information about every detail all the time. This puts a higher demand on precision. Collision is a must for both players and weapons.
This sort of game usually doesn't handle more than 32 players on a single map, as they all need to share their positions, bullets, explosions etc. very fast, and all this data has to go through server validation, which again is a bottleneck for any type of online game.
Network latency
In a perfect world this would be 0 ms, but as we all know, all the hardware between the client and the server takes time, both in the network stacks and across the internet connection (switch, router, modem, fiber exchanges, etc.). The way many modern real-time games deal with this is "prediction". They let the server look at your direction and speed, then try to predict (much like a GPS does in a tunnel) that since you were last seen moving forward at a speed of +4, after a given number of timeframes you have moved (timeframes x 4). But what if you had slowed down or sped up? Then they either instantly "hyperjump" you from A to B in a split second, which feels like a lagging game, or they ease you toward the real position so your "hero" slides a little faster or slower into the right place.
This technique is explained in many places on the net, so no need for details here; it takes time and tweaking to get good performance from it, but it works and saves a lot of headaches for the programmers.
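A minimal sketch of that predict-then-ease idea (the struct names and the blend factor are assumptions, not tied to any particular engine):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Last state received from the server for one remote player.
struct RemoteState {
    Vec2  position;   // position at the time the server sampled it
    Vec2  velocity;   // units per second (direction and speed)
    float timestamp;  // game time of the sample
};

// Predict where the remote player "should" be now, GPS-in-a-tunnel style:
// keep moving along the last known velocity.
Vec2 predict(const RemoteState& s, float now)
{
    float dt = now - s.timestamp;
    return { s.position.x + s.velocity.x * dt,
             s.position.y + s.velocity.y * dt };
}

// Instead of "hyperjumping" to the corrected position, ease the displayed
// position toward it so the hero slides into place.
Vec2 smoothCorrect(const Vec2& displayed, const Vec2& predicted, float blend /* 0..1 */)
{
    return { displayed.x + (predicted.x - displayed.x) * blend,
             displayed.y + (predicted.y - displayed.y) * blend };
}
```

A small blend factor per frame gives the "slide" behaviour; blend = 1 is the hard hyperjump.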
What network data is needed?
I read your question and thought: that could be compressed quite a lot. Secondly, I have made a Flash socket chat with a pure byte stream and it worked great. It was hard to get running at first, but once I had it up and running it was fast.
The Flash client/player itself isn't the strongest networking client, so expect a lot of lost speed there too. I would go for 10-15 updates per second for the networking part and then use a more raw approach for the data sent back and forth.
Lastly, try to keep the data as simple as possible.
E.g. use COMMANDS/SHORTCUTS for certain data/events.
For example, a server data byte string could be: 0x99, 0x45, 0x75, 0x14, 0x04, 0x06
Here 0x99 means: BIG EXPLOSION at the following COORDS: (0x45, 0x75)
Then 0x14 means: PLAYER 0x14 (player 20 in decimal) has moved to (0x04, 0x06)
So the starting opcode tells the networking protocol handler in your client and server what to expect next. (This is how a CPU knows how to read memory, by the way.)
For my chat I had a command for each type of data parsed: one for login, one for broadcasting, one for telling the name of a user, etc. So once a client logged in, it received a command + a packet listing the online users. This was only transferred once to the client. After that, each attached client also received a "new user online" command with the name of the new user. Each client maintained its own list of current users and IDs, so I only needed to say which client number said the text. This kept the traffic to a minimum. The same would go for coordinates or commands of what to do: "Player #20 goes north" etc. could be 0x14, 0x41, 0xf0 (0x41 could be MOVE, 0xf0 could be NORTH, 0xf1 EAST, etc.)
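A minimal sketch of parsing such an opcode-driven byte stream; the opcode values come from the examples above, but the exact byte layout (a fixed opcode first, then its operands) and all names are my own simplification:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Opcode values borrowed from the examples above; the dispatch loop itself is
// just one possible sketch, not the author's actual protocol.
enum : uint8_t { OP_EXPLOSION = 0x99, OP_MOVE = 0x41, DIR_NORTH = 0xf0, DIR_EAST = 0xf1 };

void handlePacket(const std::vector<uint8_t>& buf)
{
    size_t i = 0;
    while (i + 3 <= buf.size()) {        // both message types here are 3 bytes long
        switch (buf[i]) {
        case OP_EXPLOSION:               // 0x99, x, y
            std::printf("explosion at (%d, %d)\n", buf[i + 1], buf[i + 2]);
            i += 3;
            break;
        case OP_MOVE:                    // 0x41, player id, direction
            std::printf("player %d moves, dir 0x%02X\n",
                        buf[i + 1], static_cast<unsigned>(buf[i + 2]));
            i += 3;
            break;
        default:                         // unknown opcode: stop parsing this packet
            return;
        }
    }
}

int main()
{
    handlePacket({ OP_EXPLOSION, 0x45, 0x75, OP_MOVE, 0x14, DIR_NORTH });
}
```

The same switch-on-opcode structure works on both ends; the client and server only have to agree on the opcode table and the operand length of each command.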
The physical distance to the game
This one you can't change, but you can put in some restrictions or run servers in multiple locations worldwide, depending on what type of game you want to make. Amazon EC2 is a great platform for such projects, as they have data centers all over the world; you can benchmark the user's network against these and then redirect the user to the nearest data center where you are running a server.
Hacking/cheating
Also remember: if something gets popular and you start earning money on it, sooner or later SOMEONE will try to break the protocols or break into accounts to gain access to servers or information, or cheat to get extra items/points in the game. You could also be hit by a DDoS attack, where they bombard your network with garbage data just to crash everything and render the game unusable.
Don't mind it too much at the start, just remember that once you go online, you NEVER know who in the world, or where in the world, they are. I'm not trying to make you paranoid, but there are sick people who will try to earn money by cheating others.
So design this into your structures: don't include data in network packets that isn't needed, don't assume data from the client is always correct, and validate data server-side.
This also takes time if you have 100 active players at the same time.
But once you do it, you can sleep much better if it turns out to be a big success for you, which I really hope it does.
Those were my thoughts from experience. Hope some of it is useful, even though I didn't quite answer whether 100 players are possible.
In fact I would say: YES, 100 players is possible, but it depends on whether they all move at the same time, whether collision testing is involved, and whether you will accept lag or not.
Question: Is it possible to update 100+ objects in the Flash Player over Socket Connections?
Phosphor 2 seems to pull it off.
Maybe the best option would be to do the physics on the server AND on each client, with synchronization (server object positions overwrite the clients'). This way all clients get equal lag. As long as the discrepancy stays low (as it should), the corrections will not be noticeable. If you use Box2D, you have both AS3 and C++ versions ready. But this is a totally different architecture, worth 3 months of work to implement by itself. What lag do you get on an empty/simple arena? With limited time, simplification may be your only option.
I am making a multiplayer game in C++:
The clients simply take commands from the users, calculate their player's new position and communicate it to the server. The server accepts such position updates from all clients and broadcasts each one to every other client. In such a scenario, what parameters should determine the time gap between consecutive updates (I don't want too many updates choking the network)? I was thinking the maximum ping among the clients should be one of the contributing parameters.
Secondly, how do I determine this ping/latency of the clients? Other threads on this forum suggest using "raw sockets" or using the system's ping command and collecting the output from a file... Do they mean using something like system('ping "client ip add" > file'), or forking and exec'ing a ping command?
This answer is going to depend on what kind of multiplayer game you are talking about. It sounds like you are talking about an MMO-type game.
If that is the case, then it makes sense to use an 'ephemeral channel', which basically means the client can generate multiple movement packets per second, but only the most recent movement packet is sent to the server. If you use a technique like this, you should base your update rate on the rate at which players move in the game. By doing this you can ensure that players don't slip through walls or run past a trigger too quickly.
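A minimal sketch of that "only the latest movement" idea, assuming separate input and network threads (all names here are mine, not from the question):

```cpp
#include <mutex>
#include <optional>

struct Movement { float x, y, heading; };

// "Ephemeral channel": the input code may write many movements per second,
// but the sender only ever transmits the latest one it finds here.
class LatestMovement {
public:
    void set(const Movement& m) {
        std::lock_guard<std::mutex> lock(mutex_);
        latest_ = m;                     // older, unsent movements are simply dropped
    }
    std::optional<Movement> take() {     // called once per network tick
        std::lock_guard<std::mutex> lock(mutex_);
        auto m = latest_;
        latest_.reset();
        return m;
    }
private:
    std::mutex mutex_;
    std::optional<Movement> latest_;
};
```

The network thread calls take() at the chosen update rate and sends whatever it gets; everything the input thread wrote in between is silently superseded.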
For your second question, I would use boost::asio to set up a service that your clients can 'ping' by sending a simple packet; the service then sends a message back to the client, and you can measure the time it took for the packet to come back.
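A rough sketch of such an application-level "ping" on the client side, assuming Boost 1.66+ and a UDP echo service on the game server; the host name and port are placeholders:

```cpp
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main()
{
    using boost::asio::ip::udp;
    boost::asio::io_context io;

    // Placeholder host/port; the server is assumed to echo the payload back.
    udp::resolver resolver(io);
    udp::endpoint server = *resolver.resolve(udp::v4(), "game.example.com", "40000").begin();

    udp::socket socket(io);
    socket.open(udp::v4());

    char payload[] = "PING";
    char reply[64];
    udp::endpoint sender;

    auto start = std::chrono::steady_clock::now();
    socket.send_to(boost::asio::buffer(payload), server);
    socket.receive_from(boost::asio::buffer(reply), sender);   // blocks until the echo arrives
    auto rtt = std::chrono::duration_cast<std::chrono::milliseconds>(
                   std::chrono::steady_clock::now() - start);

    std::cout << "round-trip time: " << rtt.count() << " ms\n";
}
```

In a real client you would add a timeout and average several samples, since single UDP round trips can be lost or jittery.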
If you're going to end up doing raw-packet stuff, you might as well roll your own ICMP packet; the structure is trivial (http://en.wikipedia.org/wiki/Ping).
The enet library does a lot of the networking for you. It calculates latency as well.
I've already developed some online games (chess, checkers, a Risk clone) using server-side programming (PHP and C++) and Flash (for the GUI). Now I'd like to develop some kind of game portal (like www.mytopia.com). In order to do so, I must decide on a good way to structure my server logic.
At first I thought of programming separate game servers for each game. That way, each game would be an isolated program that opens a specific port to the client. I also thought of creating a different server for each game room (each game room allowing 100 clients connected at the same time). Of course I'd use a database to link everything (high scores, etc.).
Then I figured that is probably not the best way to structure a game portal server. I've been reading about thread programming and I think that is the best way to do it. So I thought of having a connection thread that listens only for new client connections (that way every type of game client connects on a single port), validates the client (login) and then hands the client over to the specific game thread (chess thread, checkers thread, etc.). I'll be using select (or variants) to handle the asynchronous clients (I guess "one thread per client" is not suited this time). This structure seems the best, but how do I make the threads communicate? I've read about race conditions and global-scope variables, so one solution is to have a global clients array (vector or map) that must be locked by the connection thread or the game thread every time it is changed (new connection, logout, state change, etc.). Is that right?
Has anyone worked in anything like this? Any recommendations?
Thanks very much
A portal needs to be robust, scalable and extensible so that you can cope with larger audiences, more games/servers being added, etc. A good place to start is to look into the way MMOs and distributed systems are designed. This might help too: http://onlinegametechniques.blogspot.com/
Personally, I'd centralise the users by having an authentication server, then a separate game server for each game that validates users against the authentication server.
If you use threads you might have an easier time sharing data, but you'll have to be more careful about security for exactly the same reason. That of course doesn't address multithreading issues in general.
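For the shared clients structure the question describes, a minimal sketch of the locking approach in C++11; the Client fields and method names are assumptions:

```cpp
#include <map>
#include <mutex>
#include <string>

// Hypothetical per-client state shared between the connection thread and the
// game threads.
struct Client {
    int         socketFd;
    std::string name;
    int         gameId;   // which game thread currently owns this client
};

class ClientRegistry {
public:
    void add(int id, Client c) {
        std::lock_guard<std::mutex> lock(mutex_);
        clients_[id] = std::move(c);
    }
    void remove(int id) {                      // logout / disconnect
        std::lock_guard<std::mutex> lock(mutex_);
        clients_.erase(id);
    }
    bool moveToGame(int id, int gameId) {      // e.g. after login validation
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = clients_.find(id);
        if (it == clients_.end()) return false;
        it->second.gameId = gameId;
        return true;
    }
private:
    std::mutex mutex_;                         // every access goes through this lock
    std::map<int, Client> clients_;
};
```

Keeping all access behind the registry's own methods (rather than exposing the raw map) makes it much harder to forget the lock and reintroduce the race conditions the question worries about.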
TBH, I've been working on a VoIP system where the server can send out many streams and the client can listen to many streams. The best architecture I've come up with so far is just to bind to a single port and use sendto and recvfrom to handle communications. If I receive a valid connect packet from a client on a new address, I add the client to an internal list and begin sending audio data to it. The packet receive and response management (RRM) all happens in one thread. The audio, as it becomes ready, is then sent to all the clients from the audio thread. The clients respond saying they received the audio, and that gets handled on the RRM thread. If a client fails to respond for longer than 30 seconds, I send a disconnect and remove it from my internal list. I don't need to be particularly fault tolerant.
As for how to do this in a game situation, my main thought was to send a set of impulse vectors (the current one and the 'n' previous ones). This way, if the client moves out of sync, it can check how far out of sync it is by looking at the last few impulses it should have received for a given object. If they don't correspond to what it has, it can either correct itself or, if it is too far out of sync, ask for a game state reset. The idea is to try to avoid doing a full game state reset, as that is going to be quite expensive.
Obviously each packet would be hashed so the client can check the validity of incoming packets, but this also allows the client to ignore an invalid packet and still get the info it needs in the next update, which helps prevent the state reset.
On top of that, it's worth doing things like keeping an eye on where the client is. There is no point sending updates to a client when the client is looking in the other direction or there is something in the way (i.e. the client can't see the object it's being told about). This also limits the effectiveness of a wallhack that sniffs the incoming packets. Obviously you have to start sending things a tad before the object becomes visible, however, or you will get things popping into existence at inconvenient moments.
Anyway... those are just some random thoughts. I have to add that I've never actually written a multiplayer engine for a game, so I hope my musings help you a bit :)
I'm coding a simple game which I plan to make multiplayer (over the network) as my university project.
I'm considering two scenarios for client-server communication:
The physics (they're trivial! I should call them "collision tests", in fact :) ) are processed on the server machine only. The communication therefore looks like:
Client1->Server: Pressed "UP"
Server->Clients: here you go, Client1 position is now [X,Y]
Client2->Server: Pressed "fire"
Server->Clients: Client1 hit Client2, make Client2 disappear!
In the second scenario, each client runs the collision tests itself; the server just receives the event and broadcasts it to all the other clients:
Client1->Server: Pressed "UP"
Server->Clients: Client1 pressed "UP", recalculate his position!!
[Client1 receives this one as well!]
Which one is better? Or maybe neither of them?
The usual approach is to send this information:
Where the player is
At what time he is there (using the game's internal concept of time, not necessarily real time)
The player's movement vector (direction and speed)
Then the clients can use dead reckoning to estimate where the other players are, so that network latency disturbs the game less. Updates need to be sent only when a player changes his direction or speed of movement (which the other clients cannot predict), which also saves network bandwidth.
Here are some links on dead reckoning. The same web sites probably also contain more articles on it.
http://www.gamasutra.com/view/feature/3230/dead_reckoning_latency_hiding_for_.php
http://www.gamedev.net/reference/articles/article1370.asp
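A minimal sender-side sketch of that rule, transmitting an update only when the movement vector changes; all names and the tolerance are assumptions:

```cpp
#include <cmath>

struct State {
    float x, y;        // where the player is
    float vx, vy;      // movement vector (direction and speed)
    float gameTime;    // game-internal time of this sample
};

// Hypothetical transport call; in a real client this would write to the socket.
void sendStateToServer(const State& s) { /* ... */ }

// Send an update only when the movement vector changed beyond a small
// tolerance; the other clients dead-reckon everything in between.
void maybeSendUpdate(const State& current, State& lastSent, float tolerance = 0.01f)
{
    if (std::fabs(current.vx - lastSent.vx) > tolerance ||
        std::fabs(current.vy - lastSent.vy) > tolerance) {
        sendStateToServer(current);
        lastSent = current;
    }
}
```

Between two such updates the other clients simply extrapolate along the last known vector, so bandwidth stays proportional to how often players change course rather than to the frame rate.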
I think the first approach is better, so you have identical data on all clients.
When the physics are simple and the results of the calculations are always the same, the second approach is OK too. But if random numbers are involved, you will get different effects on each client.