Let's say I have a chat app with registration that does long-polling to an Apache server. I've done some reading, but I'm still confused and want to be extremely sure. From my understanding, it can be either:
Any number of clients can do long-polling to that server without hitting the limit, because each client only holds one concurrent connection to the server. So if I open the chat app in 7 instances of IE8/Chrome/Firefox, on the same computer OR on a different computer each, and connect to the same URL/domain, the limit won't be affected; it will only be affected if I open the chat in 7 tabs of a single IE8/Chrome/Firefox browser.
Same as the above, but the limit is also affected if I open 7 IE8/Chrome/Firefox browsers on 7 computers with 7 different accounts, which would mean only 6 different users can connect to the chat app at the same time.
I'm leaning heavily toward the first one. Can you help me correct/expand on either or both, or if both are wrong, kindly add a number 3? Thank you!
This limitation is a restriction put in place by each browser vendor. The typical connection limit for a browser instance is 6 socket connections to the same domain. These six connections make up the browser's socket pool. The socket pool is managed by a socket pool manager and is shared across all of that browser's processes. This maximizes the efficiency of TCP by reusing established connections, along with other performance benefits.
According to the HTTP 1.1 specification, the maximum number of connections should be limited to two:
Clients that use persistent connections SHOULD limit the number of
simultaneous connections that they maintain to a given server. A
single-user client SHOULD NOT maintain more than 2 connections with
any server or proxy. These guidelines are intended to improve HTTP
response times and avoid congestion.
However, this spec was approved in June 1999 during the infancy of the internet, and browser vendors like Chrome have since increased this number to six.
Currently these are set to 32 sockets per proxy, 6 sockets per
destination host, and 256 sockets per process (not implemented exactly
correct, but good enough).
With that said, each socket pool is managed per browser, and each browser has its own connection limit (a minimum of two). You should be able to open 8 connections by opening two tabs each in IE, Chrome, Firefox, and Safari. Your maximum number of connections is limited by each browser itself, not shared across browsers. Also keep in mind the server can only handle so many concurrent connections at once; don't accidentally DoS yourself :)
If you absolutely need to go beyond the connection limit, you could look into domain sharding, which basically tricks the browser into opening more connections by serving requests from different host names. I wouldn't advise using it, though, as the browser has set these limits to maximize performance and reuse existing connections. Tread lightly.
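If you do go down that path, here is a minimal sketch of the idea in Python; the shard host names are hypothetical and would all need to resolve to the same backend. The browser counts its per-host limit against the host name, so spreading requests across several names buys extra parallel connections, while the hashing keeps a given resource pinned to one shard so its connection can be reused.

```python
# Minimal domain-sharding sketch. The shard host names are hypothetical and
# must all point at the same server for this to work.
import hashlib

SHARDS = [f"shard{i}.example.com" for i in range(4)]  # hypothetical hosts

def sharded_url(path: str) -> str:
    """Map a resource path to a stable shard host so repeat requests reuse one pool."""
    digest = hashlib.md5(path.encode()).hexdigest()
    host = SHARDS[int(digest, 16) % len(SHARDS)]
    return f"https://{host}{path}"

print(sharded_url("/poll/room/42"))   # e.g. https://shard2.example.com/poll/room/42
```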
Related
I am developing a multiplayer game (client-server model) and I am stuck when it comes to scaling its servers.
I understand that most games never even reach 10 000+ players, and I don't think mine will either.
But if I were lucky enough to get there, I want to build the servers so they don't become a huge obstacle later.
I have searched a lot for a solution to my problem on the internet, checking GDC talks about it and checking other posts on this website, but none of them seems to solve my specific problem.
My current setup is below and all servers are written in C++ using ENet as my network library.
Game server
This server handles the actual gameplay and requires quite a lot of CPU, with many packets being sent between the server and its connected clients.
But this dedicated server is hosted by the players themselves, so I don't have to think about scaling it at all.
Lobby server
This server handles the server list, containing all servers currently up.
All game servers send a UDP packet to this server every 5 seconds to say they are still alive.
This is so the lobby server can keep an updated list of all servers currently online.
All clients send a UDP packet to this server when they want to fetch all servers (which happens only on the server list screen), and the lobby server sends back a list of all servers.
This does not happen that often, and the lobby server is limited to sending 4 servers per second to a client (rather than one huge packet containing all servers).
Login server
This server handles creating accounts, lost passwords, logins, friends and their current game status,
private messages to other logged-in players, and player profiles that specify what in-game items they have.
All clients send a UDP packet to this server every 5 seconds to say they are still alive, while also
sending which game they are currently in. The server then sends back their friends' online/offline/in-game statuses.
This is so their friends can keep an updated list of which friend is online/offline/in-game.
Otherwise it only sends messages for player actions, like creating an account, logging in, changing/resetting a password,
adding/removing/ignoring a friend, private messages to friends, etc.
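For illustration, here is a minimal sketch of that 5-second keep-alive loop using plain Python UDP sockets rather than ENet; the address, message format, and reply handling are assumptions, not the actual protocol described above.

```python
# Sketch of the 5-second keep-alive, using plain UDP sockets instead of ENet.
# The host/port and message format are hypothetical.
import socket, time

LOGIN_SERVER = ("login.example.com", 9000)   # hypothetical address

def heartbeat_loop(player_id: str, current_game: str) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    while True:
        # Tell the login server we are alive and which game we are in.
        sock.sendto(f"ALIVE {player_id} {current_game}".encode(), LOGIN_SERVER)
        try:
            data, _ = sock.recvfrom(4096)    # friend statuses come back in the reply
            print("friend statuses:", data.decode())
        except socket.timeout:
            pass                             # reply lost; just try again next tick
        time.sleep(5)
```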
My questions
What I am worried about is that my lobby and login server might not be scalable and that they would have too much traffic on them.
1. Could they in theory be hosted on just a single computer? Or would it be too much traffic for 10 000+ players?
2. If they can be hosted on a single computer, won't the servers still have latency issues for people who live far away?
Would it be better to have lobby and login servers per region of the world in that case?
The bad thing about that is that the players would not be able to see servers in the US if they live in Europe, and that their account and items would not exist on the other servers.
3. This might be far-fetched, but if I were to rewrite both servers to sit behind a website with a database, and have the client/game server make
web requests instead (such as HTTPS calls or requests to a PHP script with specific headers),
would that help solve my problems somehow?
All of your problems and questions can be addressed by a serverless, cloud-based solution such as AWS Lambda or similar. In that case scalability is not your problem; just develop the logic. This will save you a lot of time.
If you would rather build the servers as a single app hosted on your own machine, consider using something like Go instead of C++. It's designed exactly for this purpose, meaning highly loaded web/network services.
Well, your question is about C++ and I code in Java, but maybe the logic is useful for you anyway, so I'll tell you how I ended up implementing something similar, but for a casino.
In my case I have 2 different sockets in the same server program: one socket is TCP and handles all logins, registrations and payments, while the second socket is UDP and handles the actual game that multiple players are playing. You could then group all those UDP connections internally (probably in arrays of sockets) to form your lobbies. Doing that, your whole server is just one class that can run on a single PC using 2 ports (one for each socket). However, this does not solve the ping problem for people who live far away.
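As a rough sketch of that layout (in Python rather than C++/Java, and with made-up ports and message handling), one process can multiplex a TCP "account" socket and a UDP "game" socket on two ports:

```python
# One process, two ports: a TCP socket for logins/payments and a UDP socket for
# gameplay, multiplexed with selectors. Ports and message handling are illustrative.
import selectors, socket

sel = selectors.DefaultSelector()

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.bind(("0.0.0.0", 9001))
tcp.listen()
tcp.setblocking(False)

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("0.0.0.0", 9002))
udp.setblocking(False)

sel.register(tcp, selectors.EVENT_READ, "tcp-listen")
sel.register(udp, selectors.EVENT_READ, "udp-game")

lobbies = {}   # lobby id -> list of UDP client addresses ("arrays of sockets")

while True:
    for key, _ in sel.select():
        if key.data == "tcp-listen":
            conn, addr = key.fileobj.accept()      # login / register / payment client
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, "tcp-client")
        elif key.data == "tcp-client":
            data = key.fileobj.recv(4096)
            if not data:                           # client hung up
                sel.unregister(key.fileobj)
                key.fileobj.close()
            # ... otherwise handle login / registration / payment here ...
        else:                                      # "udp-game"
            packet, addr = key.fileobj.recvfrom(4096)
            # ... look up addr's lobby in `lobbies` and fan the packet out ...
```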
If ping is a problem (not my case in a casino), you could probably host your servers per region, but remove the login, registration and payments from those servers and replace them with a connection to a central server (this central server should be TCP, and you could also expose an HTTPS endpoint so your webpage can connect to it and let users create accounts or pay you directly from the browser).
Sorry to complicate your life even more, but I hope it helps.
OK, so I created a very simple WAR which serves a simple Hello World .jsp. With all the HTML it's about 200 bytes.
I deployed it on my server running Jetty 7.5.x on JDK 6u27.
On my client computer I created a simple JMeter test plan with: Thread Group, HTTP Request, Response Assertion, and Summary Report. The client is also running JDK 6u27.
I set up the thread group with 5 threads running for 60 seconds and got 5800 requests/sec.
Then I set up 10 threads and got 6800 requests/sec.
The moment I disable Keep-Alive in JMeter on the HTTP Request sampler, I get lots of big pauses, on the client side I suppose, since the server doesn't seem to be receiving anything. I get fewer pauses at 5 threads, or barely any, but at 10 threads it hangs pretty much all the time.
What does this mean exactly?
Keep in mind I'm technically creating a REST service and I was getting the same issue, so I thought maybe I was doing something funky in my service, until I figured out it's a Keep-Alive issue, since it happens even on a static web app. So in reality I will have one client request and one server response; the client will not be keeping the connection open.
My guess is that since Keep-Alive is what allows HTTP Connection (and thereby, socket) reuse, you are running out of available ephemeral port numbers -- there are only 64k port numbers, and since connections must have unique client/server port combos (and server port is fixed), you can quickly go through those. Now, if ports were reusable as soon as connection was closed by one side, it would not matter: however, as per TCP spec, both sides MUST wait for configurable amount of time (default: 2 minutes) until reuse is considered safe.
For more details you can read a TCP book (like the Stevens book); the above is a simplification.
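To put rough numbers on it (typical defaults assumed, not values measured on this setup): with a ~28k ephemeral port range and a 120-second TIME_WAIT, a client can only sustain a couple of hundred new connections per second, far below the 5800-6800 req/s seen with Keep-Alive on.

```python
# Back-of-the-envelope check of how quickly non-keep-alive requests exhaust
# ephemeral ports. The range size and TIME_WAIT value are typical defaults
# (they vary by OS), not measured values.
ephemeral_ports = 28232          # e.g. Linux default range 32768-60999
time_wait_secs = 120             # 2 minutes, as mentioned above

max_sustainable_rps = ephemeral_ports / time_wait_secs
print(f"~{max_sustainable_rps:.0f} new connections/sec before ports run out")

# At the ~6800 req/s seen with keep-alive, a non-reusing client would burn
# through the whole range in about ephemeral_ports / 6800 ≈ 4 seconds.
```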
I have an application server. At a high level, this application server has users and groups. Users are part of one or more groups, and the server keeps all users aware of the state of their groups and other users in their groups. There are three major functions:
Updating and broadcasting meta-data relating to users and their groups; for example, a user logs in and the server updates this user's status and broadcasts it to all online users in this user's groups.
Acting as a proxy between two or more users; the client takes advantage of peer-to-peer transfer, but in the case that two users are unable to directly connect to each other, the server will act as a proxy between them.
Storing data for offline users; if a client needs to send some data to a user who isn't online, the server will store that data for a period of time and then send it when the user next comes online.
I'm trying to modify this application to allow it to be distributed across multiple servers, not necessarily all on the same local network. However, I have a requirement that backwards compatibility with old clients cannot be broken; essentially, the distribution needs to be transparent to the client.
The biggest problem I'm having is handling the case of a user connected to Server A making an update that needs to be broadcast to a user on Server B.
By extension, an even bigger problem is when a user on Server A needs the server to act as a proxy between them and a user on Server B.
My initial idea was to try to assign each user a preferred server, using some algorithm that takes which users they need to communicate with into account. This could reduce the number of users who may need to communicate with users on other servers.
However, this only minimizes how often users on different servers will need to communicate. I still have the problem of achieving the communication between users on different servers.
The only solution I could come up with is having the servers connect to each other when they need to deal with a user connected to a different server.
For example, if I'm connected to Server A and I need a proxy with another user connected to Server B, I would ask Server A for a proxy connection to this user. Server A would see that the other user is connected to Server B, so it would make a 'relay' connection to Server B. This connection would just forward my requests to Server B and the responses to me.
The problem with this is that it would increase bandwidth usage, which is already extremely high. Unfortunately, I don't see any other solution.
Are there any well known or better solutions to this problem? It doesn't seem like it's very common for a distributed system to have the requirement of communication between users on different servers.
I don't know how much flexibility you have in modifying the existing server. The way I did this a long time ago was to have all the servers keep a TCP connection open to each other. I used a UDP broadcast which told the other servers about each other and allowed them to connect to new servers and remove servers that stopped sending the broadcast.
Then every time a user connects to a server, that server unicasts a TCP message to all the servers it is connected to, and all the servers keep a list of users and which server each one is on.
Then, as you suggest, if you get a message from one user to a user on another server, you have to relay it to that other server. The servers really need to be on the same LAN for this to work well.
You can run the server-to-server communication in a thread and effectively simulate the user being on the same server.
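Here's a toy, self-contained sketch of that bookkeeping in Python; the wire format, the PeerLink stub, and all names are invented for illustration, not taken from any real server.

```python
# Toy sketch of "every server knows where every user lives" plus relaying.
class PeerLink:
    """Stands in for an open TCP connection to another server."""
    def __init__(self, server_id: str):
        self.server_id = server_id
    def send(self, line: str) -> None:
        print(f"-> {self.server_id}: {line}")   # real code would write to the socket

THIS_SERVER = "server-A"
local_users: set[str] = set()               # users connected to THIS server
user_location: dict[str, str] = {}          # user id -> server id (cluster-wide view)
peer_links: dict[str, PeerLink] = {"server-B": PeerLink("server-B")}

def on_user_connected(user_id: str) -> None:
    local_users.add(user_id)
    user_location[user_id] = THIS_SERVER
    for link in peer_links.values():        # unicast the new location to every peer
        link.send(f"USER_ON {user_id} {THIS_SERVER}")

def deliver(sender: str, recipient: str, payload: str) -> None:
    if recipient in local_users:
        print(f"local delivery to {recipient}: {payload}")
    elif (home := user_location.get(recipient)) in peer_links:
        peer_links[home].send(f"RELAY {sender} {recipient} {payload}")
    else:
        print(f"{recipient} unknown/offline; store the data for later")

on_user_connected("alice")
user_location["bob"] = "server-B"           # learned from server-B's USER_ON message
deliver("alice", "bob", "hello")            # relayed over the server-to-server link
```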
However, maintaining the user lists and sending messages is prone to race conditions (such as a user dropping off while you are relaying a message from one server to another, etc.).
Maintaining the server code was a nightmare and this is really not the most efficient way to implement scalable servers. But if you have to use the legacy server code base then you really do not have too many options.
If you can, look into using a language that supports remote processes and nodes, like Erlang.
An alternative might be to use a message queue system like RabbitMQ or ActiveMQ and have the servers talk to each other through that. Those systems are designed to be scalable and usually work off a publish/subscribe mechanism.
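As a hedged sketch of what that could look like with RabbitMQ's Python client (pika), each server publishes user events to a fanout exchange and subscribes to everyone else's; the exchange name and message body are assumptions.

```python
# Sketch of server-to-server broadcast over RabbitMQ using pika.
# Exchange name and message body are made up; only the pika calls are standard.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = conn.channel()
channel.exchange_declare(exchange="user_events", exchange_type="fanout")

# Publisher side: announce that a user came online on this server.
channel.basic_publish(exchange="user_events", routing_key="",
                      body="USER_ON alice server-A")

# Subscriber side: each server binds its own throwaway queue to the fanout exchange.
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="user_events", queue=result.method.queue)

def on_event(ch, method, properties, body):
    print("cluster event:", body.decode())

channel.basic_consume(queue=result.method.queue, on_message_callback=on_event,
                      auto_ack=True)
channel.start_consuming()
```

The broker then takes over the "who is connected to which server" fan-out that the hand-rolled TCP mesh had to manage itself.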
I'm reading about ways to implement client-server communication in the most efficient manner, and I bumped into this link:
http://msdn.microsoft.com/en-us/library/ms740550(VS.85).aspx
saying :
"Concurrent connections should not exceed two, except in special purpose applications. Exceeding two concurrent connections results in wasted resources. A good rule is to have up to four short lived connections, or two persistent connections per destination "
I can't quite get what they mean by two... and what do they mean by persistent?
Let's say I have a server that listens to many clients, which are supposed to do some work with the server. How can I keep just two connections open?
What's the best way to implement it anyway? I read a little about completion ports, but couldn't find good code examples, or at least a decent explanation.
Thanks
Did you read the last sentence:
A good rule is to have up to four
short lived connections, or two
persistent connections per
destination.
Hard to say from the article, but by destination I think they mean client. This isn't a very good article.
A persistent connection is where a client connects to the server and then performs all its actions without ever dropping the connection. Even if the client has periods of time when it does not need the server, it maintains its connection to the server ready for when it might need it again.
A short lived connection would be one where the client connects, performs its action and then disconnects. If it needs more help from the server it would re-connect to the server and perform another single action.
As the server implementing the listening end of the connection, you can set options in the listening TCP/IP socket to limit the number of connections that will be held at the socket level and decide how many of those connections you wish to accept - this would allow you to accept 2 persistent connections or 4 short lived connections as required.
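As a minimal illustration (in Python, with an arbitrary port and a limit of two concurrent clients, both assumptions), the listening side can combine a small listen() backlog with its own accept policy:

```python
# Sketch of capping concurrent connections on the listening side.
# The port and the limit of two "persistent" clients are illustrative only.
import socket, threading

MAX_CLIENTS = 2
active = threading.Semaphore(MAX_CLIENTS)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 9100))
srv.listen(4)                       # small backlog: extra clients queue briefly

def serve(conn: socket.socket) -> None:
    try:
        while data := conn.recv(4096):
            conn.sendall(data)      # echo as a stand-in for real work
    finally:
        conn.close()
        active.release()

while True:
    conn, addr = srv.accept()
    if not active.acquire(blocking=False):
        conn.close()                # over the limit: refuse instead of queueing forever
        continue
    threading.Thread(target=serve, args=(conn,), daemon=True).start()
```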
What they mean by "persistent" is a connection that is opened and then held open. It's a pretty common problem to decide whether it's more expensive to tie up resources with an "always on" connection or to suffer the overhead of opening and closing a connection every time you need it.
It may be worth taking a step back, though.
If you have a server that has to listen for requests from a bunch of clients, you may have a perfect use case for a message-based architecture. If you use tightly-coupled connections like those made with TCP/IP, your clients and servers are going to have to know a lot about each other, and you're going to have to write a lot of low-level connection code.
Under a message-based architecture, your clients could place messages on a queue. The server could then monitor that queue. It could take messages off the queue, perform work, and place the responses back on the queue, where the clients could pick them up.
With such a design, the clients and servers wouldn't have to know anything about each other. As long as they could place properly-formed messages on the queue, and connect to the queue, they could be implemented in totally different languages, and run on different OS's.
Message-oriented middleware like Apache ActiveMQ and WebLogic offers APIs you can use from C++ to manage and use queues and other messaging objects. ActiveMQ is open source, and WebLogic is sold by Oracle (who bought BEA). There are many other great messaging servers out there, so use these as examples to get you started, if messaging sounds like it's worth exploring.
I think the key words are "per destination". A single TCP connection tries to ramp up to the available bandwidth, so if you allow more connections to the same destination, they have to share that bandwidth.
This means each transfer will be slower than it could be, and the server has to allocate more resources for a longer time (data structures for each connection).
Because establishing a TCP connection is time-consuming, it makes sense to allow a second connection to be established while you are still serving the first one, so they overlap. For short-lived connections the setup time can be about the same as the time spent serving the connection itself (see the poor-performance example), so more connections are needed to fill the bandwidth effectively.
(sorry I cannot post hyperlinks yet)
Here msdn.microsoft.com/en-us/library/ms738559%28VS.85%29.aspx you can see what poor performance looks like.
Here msdn.microsoft.com/en-us/magazine/cc300760.aspx is an example of a threaded server that performs reasonably well.
You can limit the number of open connections by limiting the number of accept() calls. You can limit the number of connections from the same source simply by closing a connection when you find out that you already have more than two connections from that location (just count them).
For example, SMTP works in a similar way: when there are too many connections, it returns a 4xx code and closes your connection.
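A small sketch of that "just count them" approach; the per-source limit, the port, and the 421-style rejection line are illustrative rather than prescribed by the article:

```python
# Refuse a third connection from the same source address, in the spirit of an
# SMTP 4xx rejection. Port, limit, and rejection text are made up.
import socket, threading
from collections import Counter

MAX_PER_SOURCE = 2
per_source = Counter()
lock = threading.Lock()

def handle(conn: socket.socket, ip: str) -> None:
    try:
        while conn.recv(4096):
            pass                                   # real work would go here
    finally:
        conn.close()
        with lock:
            per_source[ip] -= 1                    # free the slot for this source

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 9200))
srv.listen()

while True:
    conn, (ip, _port) = srv.accept()
    with lock:
        if per_source[ip] >= MAX_PER_SOURCE:
            conn.sendall(b"421 too many connections, try again later\r\n")
            conn.close()
            continue
        per_source[ip] += 1
    threading.Thread(target=handle, args=(conn, ip), daemon=True).start()
```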
Also see this question:
What is the best epoll/kqueue/select equivalent on Windows?
I am trying to create a simple board game (a kind of checkers) where users will be able to play online with each other, using a Flex application as the client.
I am using a Django application to process the game on the server side. And I've come across a problem: if one user makes a move, I can send it to the server, but how do I let the opponent know about it?
The way I am thinking of doing it is to create a timer and send requests to the server asking whether the opponent's move has been made or not. But here we have 2 limitations:
1) Each client would produce a large number of requests (I'm not sure how the server will cope if I have, e.g., 100 such clients).
2) If players choose a game with a time limit, for example 5 minutes per game, it will be very important to show them the situation on the board as soon as it changes (without a pause). But the timer will only send a request on a timer event, so if, for example, I choose a tick interval of 5 seconds, the other side will not be aware of the change for up to 5 seconds.
Think of it this way: if you poll every 1 or 2 seconds, that should be quick enough not to be noticed by either client. A simple REST request checking for changes is bloody quick, and a modern web server should be able to handle 100 such requests without issue.
Implement it with the timer now, run some performance tests and worry about servers after you're done.
If you are worried later, you can always have graduated timers, e.g., check after 100ms, 200ms, 400ms, 800ms, 1600ms, etc., with a cap at 5 seconds or something.
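To make the graduated-timer idea concrete, here's a sketch of the client loop in Python (the real client would be Flex/ActionScript); the URL and the JSON field name are assumptions.

```python
# Graduated (exponential backoff) polling loop: 100 ms, 200 ms, 400 ms, ... capped at 5 s.
# The endpoint and the "opponent_moved" field are hypothetical.
import json, time, urllib.request

def poll_for_opponent_move(game_id: int) -> dict:
    delay, cap = 0.1, 5.0
    while True:
        with urllib.request.urlopen(f"https://example.com/games/{game_id}/state") as r:
            state = json.load(r)
        if state.get("opponent_moved"):
            return state                      # reset the timer once there is real activity
        time.sleep(delay)
        delay = min(delay * 2, cap)           # back off while the board is quiet
```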
Take a look at this code for some ideas maybe, since chat uses similar concepts: http://anantgarg.com/2009/05/13/gmail-facebook-style-jquery-chat/
One way is to use a TCP Socket from the client to connect back to your server. Have the client listen for data, and have the server send updates whenever needed. This may require firewall changes (to allow the port you'll be using) and a server which accepts multiple persistent client connections. This may only work for a fixed smallish number of clients, since if you are keeping multiple connections open it will incur some server overhead.
If you have firewall restrictions and need to use HTTP ports, you can investigate Comet implementations. What I proposed in the first paragraph is more or less the same thing - Comet just does it over HTTP and standardises some aspects of the communication.
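If you end up trying the Comet route on the Django side, a very rough long-poll view might look like the sketch below; the view name, the stubbed move lookup, and the timeout are all assumptions, and each waiting client ties up a worker for the duration.

```python
# Rough Comet-style long-poll view: hold the request open until the opponent
# moves or a deadline passes. get_latest_move_number is a stand-in for a real query.
import time
from django.http import JsonResponse

def get_latest_move_number(game_id) -> int:
    # Hypothetical helper; real code might count Move rows for this game.
    return 0

def wait_for_move(request, game_id, last_seen_move):
    deadline = time.time() + 25            # finish before typical 30 s proxy timeouts
    while time.time() < deadline:
        latest = get_latest_move_number(game_id)
        if latest > int(last_seen_move):
            return JsonResponse({"moved": True, "move": latest})
        time.sleep(0.5)                    # re-check twice a second while waiting
    return JsonResponse({"moved": False})  # client immediately re-issues the request
```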