Connection per session or multiplexing multiple sessions through one connection - C++

When designing a client/server architecture, is there any advantage to multiplexing multiple connections from the same process to the remote server (i.e. sharing one connection) vs opening one connection per thread/session in the client (as is typically done when connecting to memcached or database servers)?
I know there's a bit of overhead associated with each connection (e.g. if a server has 50,000 open connections, that uses up a lot of RAM); this was one major reason why Facebook made a UDP patch for memcached. But I don't expect to have anywhere near that number - maybe 10,000 at the most. There are also savings in establishing a TCP/IP connection and doing authorization, but for now I'd rather leave authorization to firewall software, as memcached does.
Are there any reasons to implement connection multiplexing in a TCP/IP client/server application with fewer than 10K connections?
Edit - Details:
This is for a database server/client I'm working on. I think that Informix and Oracle do actually allow for session multiplexing over one TCP/IP connection. In the Informix documentation they say you may get a performance improvement for non-threaded clients (there is no mention of multi-threaded clients; perhaps the implementation is not thread-safe).

is there any advantage to multiplexing multiple connections vs opening one connection per thread/session
Yes, though it depends on the implementation of the multiplexing. You probably know about the firewall hassle with e.g. FTP, SIP et al., especially when encryption is used partway. This is what influences the decision whether to use multiple connections or just one.
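For illustration only, here is a minimal sketch of what session multiplexing over a single TCP connection usually boils down to: every logical session tags its messages with a session ID and a payload length so the receiving end can demultiplex them. The frame layout and the send_frame helper below are assumptions made up for this sketch, not taken from Informix, Oracle or memcached.

    // Hypothetical framing for multiplexing several sessions over one TCP socket.
    // Frame = 4-byte session ID + 4-byte payload length (network byte order) + payload.
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <cstdint>
    #include <cstring>
    #include <string>
    #include <vector>

    bool send_frame(int fd, uint32_t session_id, const std::string& payload) {
        uint32_t id_be  = htonl(session_id);
        uint32_t len_be = htonl(static_cast<uint32_t>(payload.size()));

        std::vector<char> frame(sizeof id_be + sizeof len_be + payload.size());
        std::memcpy(frame.data(), &id_be, sizeof id_be);
        std::memcpy(frame.data() + sizeof id_be, &len_be, sizeof len_be);
        std::memcpy(frame.data() + sizeof id_be + sizeof len_be,
                    payload.data(), payload.size());

        // A real implementation would loop until all bytes are written and
        // serialize concurrent writers (e.g. with a mutex or a dedicated writer thread).
        return send(fd, frame.data(), frame.size(), 0) == static_cast<ssize_t>(frame.size());
    }

The receiver reads the two header fields, then exactly that many payload bytes, and routes the payload to the session identified by the ID; that routing logic, and the contention on the shared socket, is the overhead you trade for the saved connections.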

Related

How to create multiple connections to one specific address in a C++ gRPC client

I wrote a C++ gRPC client and wanted to create multiple connections by creating multiple channels, just like in the hello world example.
But only one connection to the specific address was created. So how can I create multiple connections to the server?
Honestly, I don't see significant reasons for that (at least for basic use cases). You don't need to create multiple connections to get some kind of connection pooling (as you might want when connecting to an RDBMS like PostgreSQL). The bandwidth of the physical transport (the TCP connection) will be fully utilized by a single network connection.
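That said, if you really do want separate TCP connections, a commonly suggested approach in gRPC C++ is to give each channel distinct channel arguments so the channels are not considered identical and therefore don't end up sharing the same underlying connection. A rough sketch, assuming insecure credentials; the argument key is made up purely to differentiate the channels, so verify the behaviour against your gRPC version:

    #include <grpcpp/grpcpp.h>
    #include <memory>
    #include <string>
    #include <vector>

    // Create several channels to the same target. Giving each one a distinct
    // (otherwise meaningless) channel argument keeps them from being identical,
    // which is a commonly suggested way to avoid sharing one connection.
    std::vector<std::shared_ptr<grpc::Channel>> MakeChannels(const std::string& target,
                                                             int count) {
        std::vector<std::shared_ptr<grpc::Channel>> channels;
        for (int i = 0; i < count; ++i) {
            grpc::ChannelArguments args;
            args.SetInt("my_app.channel_index", i);  // hypothetical key, only used to differentiate
            channels.push_back(grpc::CreateCustomChannel(
                target, grpc::InsecureChannelCredentials(), args));
        }
        return channels;
    }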

Pinging by creating a new socket per peer

I created a small cross-platform app using Qt sockets in C++ (although this is not a C++ or Qt specific question).
The app has a small "ping" feature that tries to connect to a peer and asks for a small challenge (i.e. some custom data sent and some custom data replied) to see if it's alive.
I'm opening one socket per peer, so as soon as the ping starts we have several sockets in SYN_SENT.
Is this a proper way to implement a ping-like protocol with challenge? Am I wasting sockets? Is there a better way I should be doing this?
I'd say your options are:
An actual ping (using ICMP echo packets). This has low overhead, but only tells you whether the host is up. And it requires you to handle lost packets, timeouts, and retransmits.
A UDP-based protocol. This also has lower kernel overhead, but again you'll be responsible for setting up timeouts, handling lost packets, and retransmits. It has the advantage of allowing you to positively affirm that your program is running on the peer. It can be implemented with only a single socket endpoint no matter how many peers you add. (It is also possible that you could send to multiple peers at once with a broadcast if all are on a local network, or a multicast [complicated set-up required for that].)
TCP socket as you're doing now. This is much easier to code, extremely reliable and will automatically provide a timeout (i.e. your connect will eventually fail if the peer doesn't respond). It lets you know positively that your peer is there and running your program. Although there is more kernel overhead to this, and you will use one socket endpoint on your host per peer system, I wouldn't call it a significant issue unless you think you'll be having thousands of peers.
So, in the end, you have to judge: If thousands of hosts will be participating and this pinging is going to happen frequently, you may be better off coding up a UDP solution. If the pinging is rare or you don't expect so many peers, I would go the TCP route. (And I wouldn't consider that a "waste of sockets" -- those advantages are why TCP is so commonly used.)
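If you do go the UDP route, a single socket really is enough for any number of peers. A minimal sketch with Qt (the port, the challenge payload, and the idea that peers echo it back are assumptions for this sketch; timeouts and retries are left out):

    // Single-socket UDP "ping": send a small challenge to each peer and print
    // whoever answers. Assumes peers listen on UDP port 4242 and echo the
    // challenge back; that mini-protocol is made up here.
    #include <QCoreApplication>
    #include <QDebug>
    #include <QHostAddress>
    #include <QStringList>
    #include <QUdpSocket>

    int main(int argc, char* argv[]) {
        QCoreApplication app(argc, argv);

        QUdpSocket sock;                         // one socket, no matter how many peers
        const quint16 peerPort = 4242;
        const QByteArray challenge = "are-you-alive?";

        const QStringList peers = {"192.0.2.10", "192.0.2.11"};  // example addresses
        for (const QString& ip : peers)
            sock.writeDatagram(challenge, QHostAddress(ip), peerPort);

        QObject::connect(&sock, &QUdpSocket::readyRead, [&sock]() {
            while (sock.hasPendingDatagrams()) {
                QByteArray reply;
                reply.resize(int(sock.pendingDatagramSize()));
                QHostAddress sender;
                quint16 senderPort = 0;
                sock.readDatagram(reply.data(), reply.size(), &sender, &senderPort);
                qDebug() << sender.toString() << "is alive, replied:" << reply;
            }
        });

        return app.exec();  // a real tool would also arm a QTimer to detect silent peers
    }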
The technique described in the question doesn't really implement ping for the connection and doesn't test if the connection itself is alive. The technique only checks that the peer is listening for (and is responsive to) new connections...
What you're describing is more of an "is the server up?" test than a "keep-alive" ping.
If we're discussing "keep-alive" pings, then this technique will fail.
For example, if just the read or the write aspect of the connection is closed, you wouldn't know. Also, if the connection was closed improperly (i.e., due to an intermediary dropping the connection), this ping will not expose the issue.
Most importantly, for some network connections and protocols, you wouldn't be resetting the connection's timeout... so if your peer is checking for connection timeouts, this ping won't help.
For a "keep-alive" ping, I would recommend that you implement a protocol specific ping.
Make sure that the ping is performed within the existing (same) connection and never requires you to open a new connection.
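A rough sketch of such an in-protocol keep-alive over an existing QTcpSocket follows; the one-byte PING/PONG opcodes and the 5 s / 15 s timings are arbitrary choices for the sketch, and the readAll()-based parsing is deliberately simplistic:

    // In-band keep-alive on an already connected QTcpSocket: periodically send a
    // 1-byte PING and expect the peer to answer with a 1-byte PONG on the same
    // connection; declare the peer dead after 15 s of silence.
    #include <QDebug>
    #include <QElapsedTimer>
    #include <QTcpSocket>
    #include <QTimer>
    #include <memory>

    namespace {
    constexpr char kPing = 0x01;
    constexpr char kPong = 0x02;
    }

    void installKeepAlive(QTcpSocket* socket, QObject* parent) {
        auto lastSeen = std::make_shared<QElapsedTimer>();
        lastSeen->start();

        QObject::connect(socket, &QTcpSocket::readyRead, parent, [socket, lastSeen]() {
            const QByteArray data = socket->readAll();   // real code would parse proper frames
            lastSeen->restart();                         // any traffic counts as "alive"
            if (data.contains(kPing))
                socket->write(QByteArray(1, kPong));     // answer pings from the peer
        });

        auto* timer = new QTimer(parent);
        QObject::connect(timer, &QTimer::timeout, parent, [socket, lastSeen]() {
            if (lastSeen->elapsed() > 15000) {           // no traffic for 15 s: give up
                qDebug() << "peer unresponsive, closing";
                socket->disconnectFromHost();
                return;
            }
            socket->write(QByteArray(1, kPing));         // keep the connection warm
        });
        timer->start(5000);
    }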

What is really meant by maximum concurrent connections in a browser?

Let's say I have a chat app with registration and it does long-polling to an Apache server. I've done some reading but I'm still confused and want to be extremely sure. From my understanding, it can either be:
Any number of clients can long-poll the server without hitting the limit, because each client only holds 1 concurrent connection to the server. So if I open the chat app in 7 separate IE8/Chrome/Firefox instances on the same computer, OR one on each of 7 different computers, all connecting to the same URL/domain, the limit isn't hit; but if I open the chat in 7 tabs of a single IE8/Chrome/Firefox instance, then it is.
Same as the above, but the limit is also hit if I open 7 IE8/Chrome/Firefox browsers on 7 different computers under 7 different accounts - which would mean only 6 different users can connect to the chat app at the same time.
I'm leaning heavily towards the first one. Can you help me correct/expand on either or both of these, or if both are wrong, kindly add a number 3? Thank you!
This limitation is a restriction put in place by each browser vendor. The typical connection limit for a browser instance is six socket connections to the same domain. These six connections make up the browser's socket pool. This socket pool is managed by the socket pool manager and is used across all browser processes. This maximizes the efficiency of the TCP connection by reusing established connections, among other performance benefits.
According to the HTTP 1.1 specification, the maximum number of connections should be limited to 2:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. These guidelines are intended to improve HTTP response times and avoid congestion.
However, this spec was approved in June 1999 during the infancy of the internet, and browser vendors like Chrome have since increased this number to six.
Currently these are set to 32 sockets per proxy, 6 sockets per destination host, and 256 sockets per process (not implemented exactly correct, but good enough).
With that said, each socket pool is managed by each browser. Depending on the browser's connection limit (a minimum of two), you should be able to open 8 connections by opening two tabs each in IE, Chrome, Firefox, and Safari. Your maximum number of connections is limited by the browser itself. Also keep in mind the server can only handle so many concurrent connections at once. Don't accidentally DoS yourself :)
If you absolutely need to go beyond the connection limit, you could look into domain sharding, which basically tricks the browser into opening more connections by providing a different host name with the request. I wouldn't advise using it, though, as the browser sets these limits to maximize performance and reuse existing connections. Tread lightly.

How to create a multiplayer game with contacts and realtime communication, without creating a server?

I would like some advice about my project. I am currently developing a musical application/game in C++/Qt, with a multiplayer mode, and I have the following requirements (ideally):
I want to be able to have friends as contacts, and be able to chat and play with them
I need to send/receive real-time data (music notes) to/from these contacts
I don't want to create a server application
What would you recommend to do this?
I was thinking of using the XMPP protocol, so I can connect to Google/Jabber, retrieve contacts, and chat with them. That part actually works, but then I don't know how to send/receive the real-time data. For the real-time part, I was thinking of using direct TCP communication, but I need to know the external IP of my contacts, and I have no idea how to do that. I was thinking of automatically sending my external IP and TCP port to all my contacts every time I connect, but I didn't find a way to retrieve the external IP from code. So I'm a bit stuck. Any advice?
Are there alternative solutions? Alternative protocols?
Thank you,
Laurent
You're going to have a really hard time avoiding writing a server, for realistic, practical, and performance reasons:
Many residential internet connections are behind firewalls (at the ISP, local router, or OS level) that limit accepting connections from outside the network. NAT further complicates accepting connections from the internet on a LAN.
There are precious few methods of internet communication that are serverless, and those that are rely on local peer discovery to find peers. Most LPD traffic will not make it off your LAN; the ISP will filter it (otherwise you'd be able to "locally" discover peers on the entire internet).
Bandwidth can be a concern for games. Not everyone has a high speed internet connection yet (though market penetration of fiber optics and fast DSL is pretty high at this point), and you'd end up with problems connecting slower hosts to a large swarm.
Servers facilitate star-like networks, which are very efficient. Other network topologies exist, but many suffer from drawbacks that severely inhibit their ability to scale.
Star networks, for example, require O(connections) = O(1), O(bandwidth) = O(1), and O(latency) = O(1) for each client.
Fully connected networks require every client to be connected to every other client, so O(connections) = O(bandwidth) = O(n) and O(latency) = O(1).
In ring networks, each client connects to 2 neighbours and messages for distant clients are forwarded, so they have O(connections) = O(1), but O(bandwidth) = O(latency) = O(n).
If all you need is a chat system, or want badly enough not to write your own server that you're willing to piggyback the entire online experience over a chat server, you could probably rely on something like an XMPP server.
If you choose to go that route, make sure that proper authentication and encryption are used wherever necessary to protect users' private data (passwords, etc.). I recommend using a cryptographic authentication scheme that allows clients to authenticate other clients (such as a challenge/response scheme, or something else). Alternatively, you could mediate all authentication with a central service.
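As a flavour of what a simple challenge/response check can look like, here is a sketch that assumes a pre-shared key and OpenSSL's HMAC; a real design needs a vetted protocol, nonces that are never reused, and a constant-time comparison:

    // Toy challenge/response: the verifier sends a random challenge, the prover
    // returns HMAC-SHA256(shared_key, challenge), and the verifier recomputes
    // and compares. Key management is deliberately out of scope here.
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <openssl/rand.h>
    #include <string>
    #include <vector>

    std::vector<unsigned char> makeChallenge(size_t len = 32) {
        std::vector<unsigned char> challenge(len);
        RAND_bytes(challenge.data(), static_cast<int>(challenge.size()));
        return challenge;
    }

    std::vector<unsigned char> respond(const std::string& sharedKey,
                                       const std::vector<unsigned char>& challenge) {
        unsigned int macLen = EVP_MAX_MD_SIZE;
        std::vector<unsigned char> mac(EVP_MAX_MD_SIZE);
        HMAC(EVP_sha256(), sharedKey.data(), static_cast<int>(sharedKey.size()),
             challenge.data(), challenge.size(), mac.data(), &macLen);
        mac.resize(macLen);
        return mac;
    }

    bool verify(const std::string& sharedKey,
                const std::vector<unsigned char>& challenge,
                const std::vector<unsigned char>& response) {
        // Production code should use a constant-time comparison (e.g. CRYPTO_memcmp).
        return respond(sharedKey, challenge) == response;
    }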
Keep in mind that many chat services will not want to provide your project with free bandwidth. Even if you do decide to use XMPP as the heart of your multiplayer protocol, expect to be running your own server.

TCP/IP and designing a networking application

I'm reading about ways to implement client-server communication in the most efficient manner, and I bumped into this link:
http://msdn.microsoft.com/en-us/library/ms740550(VS.85).aspx
saying:
"Concurrent connections should not exceed two, except in special purpose applications. Exceeding two concurrent connections results in wasted resources. A good rule is to have up to four short lived connections, or two persistent connections per destination "
I can't quite get what they mean by 2... and what do they mean by persistent?
Let's say I have a server that listens to many clients, which are supposed to do some work with the server. How can I keep just 2 connections open?
What's the best way to implement it anyway? I read a little about completion ports, but couldn't find good code examples, or at least a decent explanation.
Thanks
Did you read the last sentence:
A good rule is to have up to four short lived connections, or two persistent connections per destination.
Hard to say from the article, but by destination I think they mean client. This isn't a very good article.
A persistent connection is where a client connects to the server and then performs all its actions without ever dropping the connection. Even if the client has periods of time when it does not need the server, it maintains its connection to the server ready for when it might need it again.
A short lived connection would be one where the client connects, performs its action and then disconnects. If it needs more help from the server it would re-connect to the server and perform another single action.
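To make the distinction concrete, here is a sketch with plain BSD sockets; the address, port, and request payload are placeholders, and responses and error handling are omitted:

    // Short-lived vs. persistent connections, side by side.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int connectToServer() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5000);                     // placeholder port
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr); // placeholder address
        connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
        return fd;
    }

    void shortLivedRequests(int count) {
        for (int i = 0; i < count; ++i) {
            int fd = connectToServer();    // pay the connection setup cost every time
            send(fd, "request", 7, 0);
            close(fd);                     // drop the connection after a single action
        }
    }

    void persistentRequests(int count) {
        int fd = connectToServer();        // one setup up front...
        for (int i = 0; i < count; ++i)
            send(fd, "request", 7, 0);     // ...then reuse the connection for every action
        close(fd);                         // held open until the client is completely done
    }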
As the server implementing the listening end of the connection, you can set options on the listening TCP/IP socket to limit the number of connections that will be held at the socket level, and decide how many of those connections you wish to accept - this would allow you to accept 2 persistent connections or 4 short-lived connections as required.
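With plain BSD sockets, the backlog argument to listen() plus your own accept() policy are the knobs this refers to. A sketch (the port and the limit of two persistent connections are placeholders; error handling and connection clean-up are omitted):

    // Keep the kernel-level backlog small and stop accepting new clients once
    // the chosen number of persistent connections is in use.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <vector>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);               // placeholder port
        bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof addr);

        listen(listener, 2);                       // small backlog held at the socket level

        const size_t kMaxPersistent = 2;           // "two persistent connections"
        std::vector<int> clients;
        while (true) {
            int client = accept(listener, nullptr, nullptr);
            if (client < 0) continue;
            if (clients.size() >= kMaxPersistent) {
                close(client);                     // over the limit: refuse
                continue;
            }
            clients.push_back(client);             // a real server would hand this to a
                                                   // worker and remove it on disconnect
        }
    }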
What they mean by "persistent" is a connection that is opened and then held open. It's a pretty common problem to determine whether it's more expensive to tie up resources with an "always on" connection, or to suffer the overhead of opening and closing a connection every time you need it.
It may be worth taking a step back, though.
If you have a server that has to listen for requests from a bunch of clients, you may have a perfect use case for a message-based architecture. If you use tightly-coupled connections like those made with TCP/IP, your clients and servers are going to have to know a lot about each other, and you're going to have to write a lot of low-level connection code.
Under a message-based architecture, your clients could place messages on a queue. The server could then monitor that queue. It could take messages off the queue, perform work, and place the responses back on the queue, where the clients could pick them up.
With such a design, the clients and servers wouldn't have to know anything about each other. As long as they could place properly-formed messages on the queue, and connect to the queue, they could be implemented in totally different languages and run on different operating systems.
Message-oriented middleware like Apache ActiveMQ and WebLogic offer APIs you could use from C++ to manage and use queues and other messaging objects. ActiveMQ is open source, and WebLogic is sold by Oracle (who bought BEA). There are many other great messaging servers out there, so use these as examples to get you started, if messaging sounds like it's worth exploring.
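For a flavour of what that looks like from C++, here is a rough sketch of a producer using the ActiveMQ-CPP (CMS) client library; the broker URL and queue name are placeholders and error handling is omitted, so treat it as an outline rather than a reference:

    #include <activemq/library/ActiveMQCPP.h>
    #include <activemq/core/ActiveMQConnectionFactory.h>
    #include <cms/Connection.h>
    #include <cms/Destination.h>
    #include <cms/MessageProducer.h>
    #include <cms/Session.h>
    #include <cms/TextMessage.h>
    #include <memory>

    int main() {
        activemq::library::ActiveMQCPP::initializeLibrary();
        {
            // Connect to the broker and drop a work request on a queue; a server
            // process elsewhere consumes from the same queue and replies on another.
            activemq::core::ActiveMQConnectionFactory factory("tcp://localhost:61616");
            std::unique_ptr<cms::Connection> connection(factory.createConnection());
            connection->start();

            std::unique_ptr<cms::Session> session(
                connection->createSession(cms::Session::AUTO_ACKNOWLEDGE));
            std::unique_ptr<cms::Destination> queue(session->createQueue("work.requests"));
            std::unique_ptr<cms::MessageProducer> producer(session->createProducer(queue.get()));

            std::unique_ptr<cms::TextMessage> msg(session->createTextMessage("do-some-work"));
            producer->send(msg.get());

            connection->close();
        }
        activemq::library::ActiveMQCPP::shutdownLibrary();
    }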
I think the key words are "per destination". A single TCP connection tries to accelerate up to the available bandwidth, so if you allow more connections to the same destination, they have to share the same bandwidth.
This means that each transfer will be slower than it could be, and the server has to allocate more resources for a longer time - data structures for each connection.
Because establishing a TCP connection is time consuming, it makes sense to allow a second connection to be established while you are serving the first one, so they overlap each other. For short connections the setup time can be comparable to the time spent serving the connection itself (see the poor-performance example), so more connections are needed to fill all the bandwidth effectively.
(sorry I cannot post hyperlinks yet)
Here msdn.microsoft.com/en-us/library/ms738559%28VS.85%29.aspx you can see what poor performance looks like.
Here msdn.microsoft.com/en-us/magazine/cc300760.aspx is an example of a threaded server that performs reasonably well.
You can limit the number of open connections by limiting the number of accept() calls. You can limit the number of connections from the same source simply by closing a connection when you find out that you already have more than two connections from that location (just count them).
For example, SMTP works in a similar way: when there are too many connections, it returns a 4xx code and closes your connection.
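A sketch of the counting idea (IPv4 only; the limit of two mirrors the guideline quoted earlier, and the bookkeeping for closed connections is only hinted at in comments):

    // Count accepted connections per source IPv4 address and drop new ones once
    // a source already has two open, roughly the way an SMTP server might refuse
    // further sessions from the same client.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>
    #include <map>

    void acceptLoop(int listener) {
        std::map<uint32_t, int> perSource;          // source address -> open connections

        while (true) {
            sockaddr_in peer{};
            socklen_t len = sizeof peer;
            int client = accept(listener, reinterpret_cast<sockaddr*>(&peer), &len);
            if (client < 0) continue;

            uint32_t src = peer.sin_addr.s_addr;
            if (perSource[src] >= 2) {
                // Already two connections from this source: reject the new one.
                // (A friendlier server would first send an error reply, like SMTP's 4xx.)
                close(client);
                continue;
            }
            ++perSource[src];
            // Hand `client` to a worker; the worker must decrement perSource[src]
            // (with appropriate synchronisation) when the connection closes.
        }
    }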
Also see this question:
What is the best epoll/kqueue/select equvalient on Windows?