How to create multiple TCP connections within 1 gRPC stream - c++

I'm using a gRPC stream to transfer data from server to client, and I am suffering from low throughput. One thing specific to my case: my client only sends control messages (e.g., start, pause, resume), and the server streams messages back until the end.
One thing I think could help is parallelization, to make full use of the bandwidth.
But one consideration is that my messages are ordered, which means that if I open multiple gRPC streams, I don't have a way to tell their order.
My question is: is there a way in gRPC to open multiple TCP connections?

Related

C++ Gaming application UDP vs TCP

I am making a real time application. I can't say much about it, but it's an online real time application that needs as little latency as possible. I am using sockets, no library. Also I need full bandwidth. Should I use TCP or UDP? I don't mind programming a bit more to get UDP working. Thanks in advance.
It depends on the nature of the client connections.
TCP is a stateful session. If you have a lot of clients connected at the same time, you may suffer port exhaustion. If clients connect and disconnect frequently, establishing and tearing down TCP sessions adds latency, CPU load, and bandwidth. If your client connections are more or less permanent and not too many clients are connected at the same time, TCP is only slightly worse than UDP.
UDP is much better suited for low-latency communications. Beware of NAT firewalls however - not all are capable or are set up for UDP address mapping.
Also be aware that TCP is a stream and, as such, does not provide message packetization. Your application has to assemble messages from the TCP stream itself, with additional overhead.
A UDP datagram, by definition, is a complete message, i.e. it arrives as the packet that was sent. Beware that delivery is not guaranteed, and the application may need to provide an acknowledgement and resending layer.
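Since the reassembly point trips people up, here is a minimal sketch of the usual workaround: length-prefixed framing on top of the TCP byte stream. It assumes POSIX sockets, and the helper names are only illustrative, not part of any library.

```cpp
#include <arpa/inet.h>   // htonl / ntohl
#include <sys/socket.h>  // send / recv
#include <cstdint>
#include <string>

// Write exactly n bytes; send() may accept fewer bytes than asked for.
bool write_exact(int fd, const void* buf, size_t n) {
    const char* p = static_cast<const char*>(buf);
    while (n > 0) {
        ssize_t sent = send(fd, p, n, 0);
        if (sent <= 0) return false;
        p += sent;
        n -= static_cast<size_t>(sent);
    }
    return true;
}

// Read exactly n bytes; recv() may return a partial chunk of the stream.
bool read_exact(int fd, void* buf, size_t n) {
    char* p = static_cast<char*>(buf);
    while (n > 0) {
        ssize_t got = recv(fd, p, n, 0);
        if (got <= 0) return false;  // error or peer closed
        p += got;
        n -= static_cast<size_t>(got);
    }
    return true;
}

// Send one length-prefixed message over a connected TCP socket.
bool send_message(int fd, const std::string& payload) {
    uint32_t len = htonl(static_cast<uint32_t>(payload.size()));
    return write_exact(fd, &len, sizeof(len)) &&
           write_exact(fd, payload.data(), payload.size());
}

// Receive one length-prefixed message.
bool recv_message(int fd, std::string& out) {
    uint32_t len_net = 0;
    if (!read_exact(fd, &len_net, sizeof(len_net))) return false;
    out.resize(ntohl(len_net));
    return out.empty() || read_exact(fd, &out[0], out.size());
}
```

With UDP you get this message boundary for free, at the cost of having to handle loss and reordering yourself.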
TCP/IP implements a stream; that is, it wants to deliver to the recipient everything sent through it from the other end. To this end, it adds a lot of protocol that handles situations where a portion of the stream is missing: it will retry sends, keep track of how many bytes it still needs to send (with a window), and so on. The TCP/IP connection is what provides the stream guarantee; if TCP/IP fails to deliver packets and cannot recover, the connection drops.
UDP/IP implements a datagram; that is, it wants to send a particular packet (with a small size limit) from A to B. There is no built-in way to resend a datagram: if it is lost, it is simply dropped.
The UDP lack of guarantees is actually a benefit. Say you're modeling something like "health"--a monster attacks you twice on a server. Your health drops from 90 to 78, then to 60. Using TCP/IP, you must receive 78 first--so if that packet was dropped, it has to be resent, and in order--because there's a stream here. But you really don't care--your health now is 60; you want to try to get 60 across. If 78 was dropped by the time your health reaches 60, who cares--78's old history and not important any more. You need health to say 60.
TCP has its benefits too: you don't want to use UDP for in-game chat. You want everything said to you to arrive, in order.
TCP also adds congestion control; with UDP you'd have to implement it, or somehow take care that you throttle UDP such that it doesn't saturate the unknown network characteristics between the server and the player.
So yes, you want to use "both"; but "importance" isn't quite the criteria you need to aim for. Use TCP/IP for delivering streams, easy congestion control, etc. Use UDP for real time states, and other situations where the stream abstraction interferes with the purpose rather than aligning with it.
Both UDP and TCP have their benefits where latency is concerned. If all of the following are true:
You have a single source of data in your client
You send small messages but are worried about their latency
Your application can deal with losing messages from time to time
Your application can deal with receiving messages out of order
then UDP may be a better option. UDP is also good for sending data to multiple recipients.
On the other hand, if any of the following is true:
Any of the above does not hold
Your application sends as much data as possible, as fast as possible
You have multiple connections to maintain
then you should definitely use TCP.
This post will tell you why UDP is not necessarily the fastest.
If you are planning to transfer a large quantity of data over TCP, you should consider the effect of the bandwidth-delay product. Although you seem to be more worried about latency than throughput, it may be of interest to you.

multi way inter process communication

There are 10 processes in my machine and each should have the capability to communicate with each other.
The scenario is that all 10 processes should be in a listening state so that any process can communicate with any other at any time. When required, a process should also be able to pass a message to any of the other processes.
I am trying to code it in C++ with Unix TCP/UDP sockets, but I don't understand how to structure it. Should I use UDP or TCP; which would be better? How can a process listen and send data simultaneously?
I need help.
The decision of UDP vs TCP depends on your messages, whether or not they need to be reliably delivered, etc.
For pure TCP, each peer would have a TCP socket on which it accepts connections from the other peers (and each accept would result in a new socket). This new socket is bidirectional and can be used for sending/receiving between one peer and another. With this solution, you would need some sort of discovery mechanism.
For UDP, it's much the same except you don't need the accept socket. You still need some form of discovery mechanism.
The discovery mechanism could either be another peer with a well known (via configuration, etc) address, or possibly you could use UDP broadcast for the discovery mechanism.
In terms of ZeroMQ, which is at a slightly higher level than raw sockets, you would have a single ROUTER socket on which you're listening and receiving data, and one DEALER socket per peer on which you're sending data.
No matter the solution, you would likely need a thread for handling the network connections using poll() or something like that, and as messages are received you need another thread (or thread pool) for handling the messages.
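As a concrete (and deliberately simplified) illustration of the pure-TCP variant, here is a rough sketch of one peer's network thread: it listens on its own port, accepts connections from the other nine processes, and polls all sockets for incoming data. Port assignment is assumed to come from configuration (the discovery step mentioned above), and message handling is left to another thread, as suggested.

```cpp
#include <netinet/in.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

// One peer's listening side: accept connections from the other processes
// and poll all sockets for incoming messages.
int run_peer(uint16_t my_port) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(my_port);
    if (bind(listener, (sockaddr*)&addr, sizeof(addr)) < 0) return -1;
    listen(listener, 9);  // at most 9 other peers in this scenario

    std::vector<pollfd> fds{{listener, POLLIN, 0}};
    for (;;) {
        if (poll(fds.data(), fds.size(), -1) < 0) break;
        for (size_t i = 0; i < fds.size(); ++i) {
            if (!(fds[i].revents & POLLIN)) continue;
            if (fds[i].fd == listener) {
                int peer = accept(listener, nullptr, nullptr);
                if (peer >= 0) fds.push_back({peer, POLLIN, 0});
            } else {
                char buf[4096];
                ssize_t n = recv(fds[i].fd, buf, sizeof(buf), 0);
                if (n <= 0) { close(fds[i].fd); fds[i].fd = -1; }
                // else: hand the n bytes to a worker thread / message handler
            }
        }
        // entries with fd == -1 are ignored by poll(); prune them if desired
    }
    return 0;
}
```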
You can run each process as a server and spawn 9 more threads that connect to the other processes as a client.
This question applies to any language, so the answer is not C++ related.
When given a choice, look for a library to have an easier communication (e.g. apache-thrift).
About TCP/UDP: TCP is typically slower but more reliable, so by default, go for TCP, but there might be reasons for choosing UDP, like streaming, multicast/broadcast,... Reliability might not be an issue when all processes are on the same board, but you might want to communicate with external processes later on.
A threaded process can use the same socket for sending and receiving without locks.
Also, you need some kind of scheme to find out which port to send to in order to reach a given process, and with TCP you need to decide whether to use static connections or to connect every time you want to send.
What you want to do seems to be message passing.
Before trying to build it yourself, take a look at Boost.MPI.
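If Boost.MPI fits, a minimal sketch looks roughly like this; it assumes an MPI implementation is installed and the program is launched with mpirun, and the message content is purely illustrative.

```cpp
#include <boost/mpi.hpp>
#include <iostream>
#include <string>

namespace mpi = boost::mpi;

// Run with e.g. `mpirun -n 10 ./peer`: every rank can send to every other rank.
int main() {
    mpi::environment env;
    mpi::communicator world;

    if (world.rank() == 0) {
        std::string msg = "hello from rank 0";
        world.send(1, /*tag=*/0, msg);   // send to rank 1
    } else if (world.rank() == 1) {
        std::string msg;
        world.recv(0, /*tag=*/0, msg);   // blocking receive from rank 0
        std::cout << msg << std::endl;
    }
    return 0;
}
```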

Reusing an Asio connection

I am currently working on a project where I have a web server. For each request, I have to send multiple requests to other servers, get the responses, and send the results back to the original client. These servers are high throughput, so I was getting worried about the number of sockets as well as the speed of setting up new threads/sockets for sending out many requests over many sockets. So I started thinking that having a single connection (or a few connections) open to each client would help solve this problem. I wasn't sure how persistent connections and Boost.Asio work together, though. Some questions I had:
- How can I set keepalive times using Asio TCP sockets?
- Can I send out multiple concurrent requests over the same socket? Would I run into an issue with the order of the results? (Each result has an ID, so I don't mean results arriving out of order, but packet order: if a response is more than one packet, will I have a problem with the order of the packets?)
All requests are HTTP GET/POST requests if that matters too.
Any information in this subject would be appreciated. Thanks.
A TCP socket acts as a data stream: the data you write at one end will be received in the same order at the other end. You can send multiple requests over the same socket if your protocol can handle it.
You mention concurrent requests, so you need to be very careful not to interleave the write calls of two different requests. If you can ensure that each request is written atomically, then I see no problem in using one socket for multiple requests (you can do that with a reply queue).
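For the "don't interleave writes" point, one common pattern is to funnel every outgoing request through a per-connection queue so that only one async_write is in flight at a time. A rough sketch with Boost.Asio follows; the class and member names are mine, not from the question, and it assumes a single-threaded io_context (or that the calls are wrapped in a strand).

```cpp
#include <boost/asio.hpp>
#include <deque>
#include <string>

// Illustrative sketch of the "reply queue" idea: all writes for one socket
// go through a queue so that two requests never interleave on the wire.
class Connection {
public:
    explicit Connection(boost::asio::io_context& io) : socket_(io) {}

    boost::asio::ip::tcp::socket& socket() { return socket_; }

    // Queue a complete request; it is written atomically after the
    // requests already in the queue.
    void send(std::string request) {
        bool idle = queue_.empty();
        queue_.push_back(std::move(request));
        if (idle) write_next();
    }

private:
    void write_next() {
        boost::asio::async_write(
            socket_, boost::asio::buffer(queue_.front()),
            [this](boost::system::error_code ec, std::size_t /*bytes*/) {
                queue_.pop_front();
                if (!ec && !queue_.empty()) write_next();
            });
    }

    boost::asio::ip::tcp::socket socket_;
    std::deque<std::string> queue_;
};
```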
You can set the standard socket keep alive here.
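Regarding the keepalive question: Asio exposes the on/off switch portably, but the keepalive timing values are platform specific, so you have to drop down to the native handle for those. A hedged sketch (the Linux block and the chosen values are just one example):

```cpp
#include <boost/asio.hpp>
#ifdef __linux__
#include <netinet/in.h>
#include <netinet/tcp.h>
#endif

// Turn on TCP keepalive for an Asio socket. The on/off option is portable;
// idle time and probe interval need platform-specific calls.
void enable_keepalive(boost::asio::ip::tcp::socket& socket) {
    socket.set_option(boost::asio::socket_base::keep_alive(true));

#ifdef __linux__
    int fd = socket.native_handle();
    int idle = 60, interval = 10, count = 5;  // example values (seconds / probes)
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
#endif
}
```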

TCP/IP and designing networking application

I'm reading about ways to implement a client-server application in the most efficient manner, and I bumped into this link:
http://msdn.microsoft.com/en-us/library/ms740550(VS.85).aspx
saying :
"Concurrent connections should not exceed two, except in special purpose applications. Exceeding two concurrent connections results in wasted resources. A good rule is to have up to four short lived connections, or two persistent connections per destination "
I can't quite get what they mean by two... and what do they mean by persistent?
Let's say I have a server that listens to many clients, which are supposed to do some work with the server; how can I keep just two connections open?
What's the best way to implement it anyway? I read a little about completion ports, but couldn't find good examples of code, or at least a decent explanation.
thanks
Did you read the last sentence:
A good rule is to have up to four short lived connections, or two persistent connections per destination.
Hard to say from the article, but by destination I think they mean client. This isn't a very good article.
A persistent connection is where a client connects to the server and then performs all its actions without ever dropping the connection. Even if the client has periods of time when it does not need the server, it maintains its connection to the server ready for when it might need it again.
A short lived connection would be one where the client connects, performs its action and then disconnects. If it needs more help from the server it would re-connect to the server and perform another single action.
As the server implementing the listening end of the connection, you can set options in the listening TCP/IP socket to limit the number of connections that will be held at the socket level and decide how many of those connections you wish to accept - this would allow you to accept 2 persistent connections or 4 short lived connections as required.
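A small sketch of the listening side, to make the last paragraph concrete: the backlog passed to listen() caps how many pending connections the OS will queue for you, and how many you actually service is decided by how often you call accept(). POSIX sockets assumed; the numbers are only the article's rule of thumb.

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

// Create a listening TCP socket with a small backlog of pending connections.
int make_listener(uint16_t port, int backlog = 4) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(fd, (sockaddr*)&addr, sizeof(addr)) < 0) { close(fd); return -1; }
    if (listen(fd, backlog) < 0) { close(fd); return -1; }
    return fd;  // accept() as many or as few of the queued connections as you want
}
```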
What they mean by "persistent" is a connection that is opened and then held open. It's a pretty common problem to determine whether it's more expensive to tie up resources with an "always on" connection, or to suffer the overhead of opening and closing a connection every time you need it.
It may be worth taking a step back, though.
If you have a server that has to listen for requests from a bunch of clients, you may have a perfect use case for a message-based architecture. If you use tightly-coupled connections like those made with TCP/IP, your clients and servers are going to have to know a lot about each other, and you're going to have to write a lot of low-level connection code.
Under a message-based architecture, your clients could place messages on a queue. The server could then monitor that queue. It could take messages off the queue, perform work, and place the responses back on the queue, where the clients could pick them up.
With such a design, the clients and servers wouldn't have to know anything about each other. As long as they could place properly-formed messages on the queue, and connect to the queue, they could be implemented in totally different languages, and run on different OS's.
Messaging-oriented-middleware like Apache ActiveMQ and Weblogic offer API's you could use from C++ to manage and use queues, and other messaging objects. ActiveMQ is open source, and Weblogic is sold by Oracle (who bought BEA). There are many other great messaging servers out there, so use these as examples, to get you started, if messaging sounds like it's worth exploring.
I think the key words are "per destination". A single TCP connection tries to ramp up to the available bandwidth, so if you allow more connections to the same destination, they have to share that same bandwidth.
This means that each transfer will be slower than it could be, and the server has to allocate more resources for a longer time: data structures for each connection.
Because establishing a TCP connection is time consuming, it makes sense to establish a second connection while you are serving the first one, so they overlap each other. For short connections, the setup time can be as long as serving the connection itself (see the poor-performance example below), so more connections are needed to fill all the bandwidth effectively.
(sorry I cannot post hyperlinks yet)
Here msdn.microsoft.com/en-us/library/ms738559%28VS.85%29.aspx you can see what poor performance looks like.
Here msdn.microsoft.com/en-us/magazine/cc300760.aspx is an example of a threaded server that performs reasonably well.
You can limit the number of open connections by limiting the number of accept() calls. You can limit the number of connections from the same source simply by closing the connection when you find that you already have more than two connections from that location (just count them).
For example SMTP works in similar way. When there are too many connections, it returns 4xx code and closes your connection.
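To make the counting idea concrete, here is a rough sketch of an accept loop that refuses a third connection from the same source address, in the spirit of the SMTP behaviour above. It assumes `listener` is an already bound, listening TCP socket (see the earlier sketch), and the bookkeeping is deliberately minimal.

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <map>

// Accept loop that allows at most two simultaneous connections per source IPv4 address.
void accept_loop(int listener) {
    std::map<uint32_t, int> per_source;  // source address -> open connections
    for (;;) {
        sockaddr_in peer{};
        socklen_t len = sizeof(peer);
        int fd = accept(listener, (sockaddr*)&peer, &len);
        if (fd < 0) continue;

        uint32_t src = peer.sin_addr.s_addr;
        if (per_source[src] >= 2) {
            close(fd);  // too many connections from this source: refuse, like SMTP's 4xx
            continue;
        }
        ++per_source[src];
        // ... hand `fd` to a worker; decrement per_source[src] when it closes ...
    }
}
```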
Also see this question:
What is the best epoll/kqueue/select equivalent on Windows?

What is the best way to implement a heartbeat in C++ to check for socket connectivity?

Hey gang. I have just written a client and server in C++ using sys/socket. I need to handle a situation where the client is still active but the server is down. One suggested way to do this is to use a heartbeat to periodically assert connectivity, and, if there is none, to try to reconnect every X seconds for a period of Y, and then time out.
Is this "heartbeat" the best way to check for connectivity?
The socket I am using might have information on it; is there a way to check that there is a connection without messing with the buffer?
If you're using TCP sockets over an IP network, you can use the TCP protocol's keepalive feature, which will periodically check the socket to make sure the other end is still there. (This also has the advantage of keeping the forwarding record for your socket valid in any NAT routers between your client and your server.)
Here's a TCP keepalive overview which outlines some of the reasons you might want to use TCP keepalive; this Linux-specific HOWTO describes how to configure your socket to use TCP keepalive at runtime.
It looks like you can enable TCP keepalive in Windows sockets by setting SIO_KEEPALIVE_VALS using the WSAIoctl() function.
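For the Windows case, the call looks roughly like this; the timing values are purely illustrative, and WSAIoctl returns 0 on success.

```cpp
#include <winsock2.h>
#include <mstcpip.h>   // tcp_keepalive, SIO_KEEPALIVE_VALS

// Enable TCP keepalive on a Windows socket and set the probe timings.
bool enable_keepalive(SOCKET s) {
    tcp_keepalive ka{};
    ka.onoff = 1;
    ka.keepalivetime = 60 * 1000;      // ms of idle time before the first probe
    ka.keepaliveinterval = 10 * 1000;  // ms between probes
    DWORD bytes_returned = 0;
    return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
                    nullptr, 0, &bytes_returned, nullptr, nullptr) == 0;
}
```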
If you're using UDP sockets over IP you'll need to build your own heartbeat into your protocol.
Yes, this heartbeat is the best way. You'll have to build it into the protocol the server and client use to communicate.
The simplest solution is to have the client send data periodically and the server close the connection if it hasn't received any data from the client in a particular period of time. This works perfectly for query/response protocols where the client sends queries and the server sends responses.
For example, you can use the following scheme:
The server responds to every query. If the server does not receive a query for two minutes, it closes the connection.
The client sends queries and keeps the connection open after each one.
If the client has not sent a query for one minute, it sends an "are you there" query. The server responds with "yes I am". This resets the server's two-minute timer and confirms to the client that the connection is still available.
It may be simpler to just have the client close the connection if it hasn't needed to send a query for the past minute. Since all operations are initiated by the client, it can always just open a new connection if it needs to perform a new operation. That reduces it to just this:
The server closes the connection if it hasn't received a query in two minutes.
The client closes the connection if it hasn't needed to send a query in one minute.
However, this doesn't assure the client that the server is present and ready to accept a query at all times. If you need this capability, you will have to implement an "are you there" "yes I am" query/response into your protocol.
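Here is a rough sketch of the server side of that scheme, assuming (purely for illustration) that each recv() returns exactly one query; a real protocol would need proper framing.

```cpp
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Answer every query, including the "are you there" probe, and drop the
// connection after two minutes of silence.
void serve_client(int fd) {
    const int kTimeoutMs = 2 * 60 * 1000;  // two minutes
    pollfd pfd{fd, POLLIN, 0};
    char buf[4096];

    for (;;) {
        int ready = poll(&pfd, 1, kTimeoutMs);
        if (ready <= 0) break;  // timeout (0) or error (-1): give up on this client

        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) break;      // peer closed or error

        std::string query(buf, static_cast<size_t>(n));
        if (query == "are you there") {
            const char reply[] = "yes I am";
            send(fd, reply, sizeof(reply) - 1, 0);
        } else {
            // ... handle a real query and send its response ...
        }
    }
    close(fd);
}
```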
If the other side has gone away (i.e. the process has died, the machine has gone down, etc.), attempting to receive data from the socket should result in an error. However if the other side is merely hung, the socket will remain open. In this case, having a heartbeat is useful. Make sure that whatever protocol you are using (on top of TCP) supports some kind of "do-nothing" request or packet - each side can use this to keep track of the last time they received something from the other side, and can then close the connection if too much time elapses between packets.
Note that this is assuming you're using TCP/IP. If you're using UDP, then that's a whole other kettle of fish, since it's connectionless.
Ok, I don't know what your program does or anything, so maybe this isn't feasible, but I suggest that you avoid trying to always keep the socket open. It should only be open when you are using it, and should be closed when you are not.
If you are between reads and writes waiting on user input, close the socket. Design your client/server protocol (assuming you're doing this by hand and not using any standard protocols like http and/or SOAP) to handle this.
Sockets will error if the connection is dropped; write your program such that you don't lose any information in the case of such an error during a write to the socket and that you don't gain any information in the case of an error during a read from the socket. Transactionality and atomicity should be rolled into your client/server protocol (again, assuming you're designing it yourself).
Maybe this will help you: TCP Keepalive HOWTO
Or this: SO_SOCKET