which zmq pattern can I use for router communications? - python-2.7

I want to connect peers over TCP. Which ZMQ pattern can I use to connect them? Do I need a server and a client on each side?

You can use several patterns for p2p.
Here are the socket features in brief:
REQ-REP: a synchronous pair of sockets. Pros: doesn't drop messages when the HWM is reached. Cons: this pair is synchronous and blocking; if a REQ socket sends a message and no reply arrives, it waits forever and can only be used again after being recreated.
DEALER-ROUTER: an asynchronous pair of sockets. Pros: these sockets are non-blocking and you can route your messages. Cons: if the HWM of the ROUTER socket is reached, it drops messages and there is no API to let you know about it.
PUSH-PULL: an asynchronous pair of sockets. Pros: non-blocking, no message drops, async. Cons: no routing, so it's ideal for p2p, but if you have a 1-to-N connection, all messages are distributed round-robin.
If you have N-to-N connections, or your peers come and go and you have no discovery service, you may use any pattern with a broker (but you must implement the broker yourself; it's not very hard to do).
Here is The Guide; you can find a lot of Python examples there.
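As a concrete illustration of the PUSH-PULL case, here is a minimal sketch using the cppzmq C++ binding (the pyzmq calls are analogous). The address and port are just examples, and both peers are shown in one process for brevity; in a real p2p setup each peer would typically hold both a PULL socket it binds and a PUSH socket it connects to the other peer:

    #include <zmq.hpp>
    #include <iostream>
    #include <string>

    int main() {
        zmq::context_t ctx{1};

        // Peer A: a PULL socket bound on a TCP port to receive messages.
        zmq::socket_t pull{ctx, zmq::socket_type::pull};
        pull.bind("tcp://*:5555");                      // example port

        // Peer B: a PUSH socket connected to peer A.
        zmq::socket_t push{ctx, zmq::socket_type::push};
        push.connect("tcp://127.0.0.1:5555");

        const std::string payload = "hello";
        push.send(zmq::buffer(payload), zmq::send_flags::none);

        zmq::message_t msg;
        (void)pull.recv(msg, zmq::recv_flags::none);    // blocks until a message arrives
        std::cout << std::string(static_cast<const char*>(msg.data()), msg.size())
                  << std::endl;
    }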

Related

Boost::Beast Websocket Bidirectional Stream (C++)

I'm looking into using the Boost::Beast websocket library to create an asynchronous bidirectional pipe to pass data between a server and a client. I leveraged some code from the async example (I can post some at a later time if necessary, don't have access to it now). I currently have a class which creates several threads running a SocketListener. When a client connects, it creates a Session shared_ptr to do the async read and write functions. The problem is, this session object will only write out when the client has sent me a message. I'm looking for an implementation that allows my server to write on demand to all the clients connected to it and also listen for incoming data from those connections.
Is this possible? Am I using the wrong technique for this? The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket. The incoming one would allow a client to drop configurations for the server, and the outgoing one would just monitor a message queue and do an async write if a message is available.
Thanks!
Is this possible?
Yes
Am I using the wrong technique for this?
No
The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket.
That is not necessary, a websocket stream is full-duplex. You can read and write at the same time.
the outgoing one would just monitor a message queue and do an async write if a message is available.
This is the correct approach, but you can do that in the same Session object that also handles the reads.
Here's an example that reads continuously and can also write full-duplex: https://github.com/vinniefalco/CppCon2018
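For reference, here is a minimal sketch of that idea (not the code from the linked repo): a single Session object keeps an async_read loop running and drains a write queue on demand. The class and method names are ours, acceptor setup is omitted, and a single-threaded io_context is assumed; with multiple threads you would dispatch send() through a strand.

    #include <boost/beast/core.hpp>
    #include <boost/beast/websocket.hpp>
    #include <boost/asio.hpp>
    #include <deque>
    #include <memory>
    #include <string>

    namespace beast = boost::beast;
    namespace websocket = beast::websocket;
    namespace net = boost::asio;
    using tcp = net::ip::tcp;

    // Sketch of a Session that reads continuously and queues outbound messages,
    // writing them one at a time so async_write calls never overlap.
    class Session : public std::enable_shared_from_this<Session> {
        websocket::stream<tcp::socket> ws_;
        beast::flat_buffer read_buf_;
        std::deque<std::string> write_queue_;

    public:
        explicit Session(tcp::socket socket) : ws_(std::move(socket)) {}

        void run() {
            ws_.async_accept(
                [self = shared_from_this()](beast::error_code ec) {
                    if (!ec) self->do_read();
                });
        }

        // Called by the server (same io_context thread) to push data to this client.
        void send(std::string msg) {
            write_queue_.push_back(std::move(msg));
            if (write_queue_.size() == 1)   // no write in flight, start one
                do_write();
        }

    private:
        void do_read() {
            ws_.async_read(read_buf_,
                [self = shared_from_this()](beast::error_code ec, std::size_t) {
                    if (ec) return;
                    // ... handle the incoming message in self->read_buf_ ...
                    self->read_buf_.consume(self->read_buf_.size());
                    self->do_read();        // keep reading: the stream is full-duplex
                });
        }

        void do_write() {
            ws_.async_write(net::buffer(write_queue_.front()),
                [self = shared_from_this()](beast::error_code ec, std::size_t) {
                    if (ec) return;
                    self->write_queue_.pop_front();
                    if (!self->write_queue_.empty())
                        self->do_write();   // drain the queue
                });
        }
    };

A real server would create each Session with std::make_shared when the acceptor hands it a connected socket, call run(), and keep the shared_ptr somewhere so it can call send() on all connected clients.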

gRPC polling for incoming packets from multiple sockets at once

I am looking into the possibility of listening on different sockets at once. To handle multiple socket connections at the same time, fd_set (with select) can be used on Linux. I have seen that gRPC also supports this functionality by having an epoll-based pollset.
https://github.com/grpc/grpc/blob/18df25228cfa1f97fc5cca9176fbaef64c0e4221/doc/epoll-polling-engine.md
I intend to call different services in async mode and provide a service at the same time. Therefore, I was thinking about having a poll-set consisting of client sockets waiting for async responses and server sockets. It seems to be possible in gRPC, but I haven't been able to find anything in the gRPC API that exposes construction of a poll-set.
Therefore, my question is how to use this capability of gRPC?
Does gRPC manage this automatically? In that case, how can I wait for incoming messages?
The same CompletionQueue should be used for both the client and the server. To wait for incoming messages, Next() needs to be invoked.
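A minimal sketch of that event loop with the gRPC C++ API is below. Service registration and the per-call handler objects are omitted, and the stub call shown in the comment (AsyncSomeRpc) is a hypothetical name, just to show that client calls can target the same queue:

    #include <grpcpp/grpcpp.h>
    #include <memory>

    int main() {
        grpc::ServerBuilder builder;
        builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
        // builder.RegisterService(&service);           // your async service here
        std::unique_ptr<grpc::ServerCompletionQueue> cq = builder.AddCompletionQueue();
        std::unique_ptr<grpc::Server> server = builder.BuildAndStart();

        // Outgoing async client calls can be issued against the same queue, e.g.
        // stub->AsyncSomeRpc(&context, request, cq.get());   // hypothetical RPC

        void* tag;      // identifies which pending operation completed
        bool ok;
        while (cq->Next(&tag, &ok)) {    // blocks until any pending event is ready
            if (!ok) continue;
            // static_cast the tag back to your handler object and advance its
            // state machine (an incoming server request or a client response).
        }
    }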

TCP push-pull socket server design

I am designing a cross-platform messaging service as a learning exercise. I have programmed socket-based servers before, but always a "client-polls-server" design, like a web server. I want to be able to target mobile platforms, and I read that polling is a battery drain, so I would like to do push notification.
The server will be TCP-based, written in C++. What I'm having trouble getting my head around is how to manage the bi-directional nature of the design. I need a client to be able to send packets to the server as normal, but also listen for packets. How do I mitigate situations like, the client is sending data when the server is trying to send to it, or it's blocked listening for data but then needs to send something?
For example, consider the following crude diagram:
So, let's say client A is in the middle of sending a chunk of data (arrow 1). While this is happening, client B sends a message (arrow 2), which causes the server to attempt to send data back to client A (arrow 3), but client A hasn't finished sending arrow 1 yet. What happens in this instance? Should I set up 2 separate ports on each client, one for inbound, one for outbound? Do I need to keep track of the state of each connection?
Or is there a better approach to this altogether?
A single TCP socket connection is inherently bidirectional. To handle both inbound and outbound traffic more or less concurrently, you need to use non-blocking sockets.
I think the solution is pretty simple. The TCP server should keep a list of connected clients. Since a TCP connection is bi-directional, the push mechanism is quite simple.
Another important thing: as long as your server isn't multithreaded, you can only read from or write to one client at a time.
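As an illustration of the non-blocking, single-threaded approach, here is a rough select()-based sketch. The port, buffer size, and the stubbed per-client outbound queues are placeholders, not a complete server:

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <vector>

    // Every client socket is watched for readability, and (in real code) for
    // writability only while that client has queued outbound data.
    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(9000);               // example port
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, SOMAXCONN);

        std::vector<int> clients;
        for (;;) {
            fd_set readfds, writefds;
            FD_ZERO(&readfds);
            FD_ZERO(&writefds);
            FD_SET(listener, &readfds);
            int maxfd = listener;
            for (int fd : clients) {
                FD_SET(fd, &readfds);
                FD_SET(fd, &writefds);             // real code: only if data is queued
                if (fd > maxfd) maxfd = fd;
            }
            select(maxfd + 1, &readfds, &writefds, nullptr, nullptr);

            if (FD_ISSET(listener, &readfds)) {
                int c = accept(listener, nullptr, nullptr);
                fcntl(c, F_SETFL, O_NONBLOCK);     // non-blocking client socket
                clients.push_back(c);
            }
            for (int fd : clients) {
                if (FD_ISSET(fd, &readfds)) {
                    char buf[4096];
                    ssize_t n = recv(fd, buf, sizeof(buf), 0);
                    // n <= 0: close and remove the client; n > 0: parse a message and
                    // maybe queue replies for this or other clients (the "push").
                    (void)n;
                }
                if (FD_ISSET(fd, &writefds)) {
                    // send() as much of this client's queued outbound data as fits.
                }
            }
        }
    }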

Non - blocking RPC invocation, using gSoap

Is this even possible?
I know I can make one-way asynchronous communication, but I want it to be two-way.
In other words, I'm asking about the request/response pattern, but non-blocking, as described here (the 3rd option).
Related to Asynchronous, acknowledged, point-to-point connection using gSoap - I'd like to make the (n)acks async, too
You need a way to associate requests with replies. In normal RPC, they are associated by a timeline: the reply follows the request before another request can occur.
A common solution is to send a key along with the request. The reply references the same key. If you do this, two-way non-blocking RPC becomes a special case of two one-way non-blocking RPC connections. The key is commonly called something like a request-id or nonce.
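A rough sketch of that correlation bookkeeping is below; the Correlator type, its transport hook, and the callback signature are all hypothetical, just to show how a pending-request table works:

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <unordered_map>

    // Each outgoing request carries an id; the matching callback is looked up
    // when a reply tagged with that id arrives later, in any order.
    struct Correlator {
        uint64_t next_id = 0;
        std::unordered_map<uint64_t, std::function<void(const std::string&)>> pending;

        uint64_t send(const std::string& payload,
                      std::function<void(const std::string&)> on_reply) {
            uint64_t id = next_id++;
            pending[id] = std::move(on_reply);
            // transport.send(id, payload);   // hypothetical non-blocking send
            (void)payload;
            return id;
        }

        void handle_reply(uint64_t id, const std::string& body) {
            auto it = pending.find(id);
            if (it == pending.end()) return;  // unknown or already-handled id
            auto cb = std::move(it->second);
            pending.erase(it);
            cb(body);
        }
    };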
I think that is not possible with basic usage; the only built-in way to make it two-way is via the response (the 'results of the call'). But you might want to use a little trick:
1] Create another server (server2) at the client end and call that server2 from the server.
Or, if that is not something you can do over the internet because of NAT / firewalls etc.:
2] Re-architect your API so that the client calls the server again based on the server's first response.
You can have a client and a server on both ends. For example, run a client and a server on system 1 and on system 2 (I refer to the sender as the client and the receiver as the server). You send an async message from the sys1 client to the sys2 server; on receiving it, the sys2 client sends an async response back to the sys1 server. This is how you can make two-way async communication.
I guess you would need to run the blocking invocation in a separate thread, as described here: https://developer.nokia.com/Community/Wiki/Using_gsoap_for_web_services#Multithreading_for_non-blocking_calls
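In spirit, that looks something like the following sketch. Here blocking_rpc stands in for a generated gSoap proxy call; the function name, argument, and return type are made up for illustration:

    #include <future>
    #include <iostream>

    // Hypothetical blocking RPC wrapper; with gSoap this would invoke the
    // generated proxy for your service. Shown generically here.
    int blocking_rpc(int request) {
        // ... blocking network round-trip ...
        return request * 2;
    }

    int main() {
        // Run the blocking invocation on another thread; the caller stays free.
        std::future<int> reply = std::async(std::launch::async, blocking_rpc, 21);

        // ... do other work here while the call is in flight ...

        std::cout << "reply: " << reply.get() << "\n";   // get() blocks only now
    }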

C++ socket design

I am designing a client server socket program using TCP/IP.
The server listens on a certain port, the client program makes 2 connections to the server. One is for command and response and the other is for streaming of data.
For the command and response, I can use the normal blocking socket mode to receive the client command and send the server response.
For the streaming data, the server would wait for the client to send a start stream command and then begin continuously sending data to that client. The issue now is that I need the handler to also listen on this connection for the stop stream command. Hence, I was thinking of making this connection non-blocking so that the receive would not block, followed by a non-blocking send.
Is this method of implementing the server and client handler efficient?
Take a look at Boost::asio socket management layer. It's very well written.
http://www.boost.org/doc/libs/1_49_0/doc/html/boost_asio/tutorial/tutdaytime1.html
Yes it is very efficient.
You can use libraries like libevent.
From an efficiency perspective, the server should always be designed to use non-blocking sockets and an event-driven, asynchronous I/O architecture. Blocking sockets should be avoided on the server side.
Fortunately, there are a few mature open-source frameworks you can use. Among them, libev is the most lightweight.
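For the streaming connection specifically, the non-blocking check can be as simple as a zero-timeout poll() before each send. A rough sketch follows; the "STOP" command string and the chunk payload are placeholders:

    #include <sys/socket.h>
    #include <poll.h>
    #include <string>

    // Each iteration checks (without blocking) whether the client sent a stop
    // command, then sends the next chunk of stream data. Chunk generation and
    // error handling are stubbed out.
    void stream_to_client(int fd) {
        bool streaming = true;
        while (streaming) {
            pollfd p{fd, POLLIN, 0};
            if (poll(&p, 1, 0) > 0 && (p.revents & POLLIN)) {  // 0 ms timeout: no blocking
                char buf[256];
                ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
                if (n <= 0) break;                             // client closed or error
                buf[n] = '\0';
                if (std::string(buf).find("STOP") != std::string::npos)
                    streaming = false;                         // placeholder command
            }
            const char chunk[] = "stream-data...";             // placeholder payload
            if (send(fd, chunk, sizeof(chunk) - 1, 0) < 0)
                break;
        }
    }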