Multi-way inter-process communication - C++

There are 10 processes on my machine, and each should be able to communicate with every other one.
The scenario is that all 10 processes should be in a listening state so that any process can contact any of them at any time, and, when required, each should also be able to pass a message to any of the other processes.
I am trying to code this in C++ with Unix TCP/UDP sockets, but I don't understand how to structure it. Should I use UDP or TCP, and which would be better? How can a process listen and send data simultaneously?
I need help.

Whether to use UDP or TCP depends on your messages: whether or not they need to be delivered reliably, and so on.
For pure TCP, each peer would have a listening TCP socket on which it accepts connections from the other peers (each accept results in a new socket). That new socket is bidirectional and can be used for sending/receiving between the two peers. With this solution, you would also need some sort of discovery mechanism.
For UDP, it's much the same except you don't need the accept socket. You still need some form of discovery mechanism.
The discovery mechanism could either be another peer with a well known (via configuration, etc) address, or possibly you could use UDP broadcast for the discovery mechanism.
In terms of ZeroMQ, which sits at a slightly higher level than raw sockets, you would have a single ROUTER socket on which you listen and receive data, and one DEALER socket per peer on which you send data.
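For illustration, here is a minimal sketch of that ROUTER/DEALER layout using the plain libzmq C API from C++. The port, the remote address and the peer identity are made-up placeholders, and error handling is left out:

    #include <zmq.h>
    #include <string>

    int main() {
        void* ctx = zmq_ctx_new();

        // One ROUTER socket: this process binds it and receives from every peer.
        void* router = zmq_socket(ctx, ZMQ_ROUTER);
        zmq_bind(router, "tcp://*:5555");                 // placeholder port

        // One DEALER socket per remote peer: used for sending to that peer.
        void* to_peer = zmq_socket(ctx, ZMQ_DEALER);
        std::string my_id = "peer-1";                     // identity of *this* process
        zmq_setsockopt(to_peer, ZMQ_IDENTITY, my_id.data(), my_id.size());
        zmq_connect(to_peer, "tcp://192.168.0.2:5555");   // placeholder remote address

        // Send a message to that remote peer.
        const char msg[] = "hello";
        zmq_send(to_peer, msg, sizeof msg, 0);

        // Receive on the ROUTER: the first frame is the sender's identity,
        // the next frame is the payload. Blocks until some remote peer sends.
        char id[256], buf[256];
        zmq_recv(router, id, sizeof id, 0);
        zmq_recv(router, buf, sizeof buf, 0);

        zmq_close(to_peer);
        zmq_close(router);
        zmq_ctx_term(ctx);
    }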
No matter the solution, you would likely need a thread handling the network connections using poll() or something like that, and as messages are received you would need another thread (or thread pool) to handle them.
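As a rough sketch of what that network thread could look like with raw sockets: one listening socket plus one connected socket per peer, all watched by poll(). The port number is a placeholder and error handling is omitted:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <poll.h>
    #include <unistd.h>
    #include <vector>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(5555);              // placeholder port
        bind(listener, (sockaddr*)&addr, sizeof addr);
        listen(listener, 16);

        std::vector<pollfd> fds{{listener, POLLIN, 0}};

        for (;;) {
            poll(fds.data(), fds.size(), -1);     // block until something is readable

            if (fds[0].revents & POLLIN) {        // a new peer is connecting to us
                int peer = accept(listener, nullptr, nullptr);
                fds.push_back({peer, POLLIN, 0});
            }
            for (size_t i = 1; i < fds.size(); ++i) {
                if (fds[i].revents & POLLIN) {    // data from an existing peer
                    char buf[4096];
                    ssize_t n = recv(fds[i].fd, buf, sizeof buf, 0);
                    if (n <= 0) { close(fds[i].fd); fds[i].fd = -1; } // -1 entries are ignored by poll()
                    // otherwise hand buf/n off to the worker thread(s)
                }
            }
        }
    }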

You can run each process as a server and spawn 9 more threads that connect to the other processes as a client.

This question applies to any language, so the answer is not C++ related.
When given a choice, look for a library that makes communication easier (e.g. Apache Thrift).
About TCP/UDP: TCP is typically slower but more reliable, so by default go for TCP, but there might be reasons for choosing UDP, like streaming or multicast/broadcast. Reliability might not be an issue when all processes are on the same board, but you might want to communicate with external processes later on.
A threaded process can use the same socket for sending and receiving without locks.
Also, you need some kind of scheme to find out which port to send to in order to reach a given process, and with TCP you need to decide whether to keep static connections open or to connect every time you want to send.
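To illustrate the point about sharing one socket between threads, here is a tiny sketch with one thread receiving and another sending on the same connected TCP socket; the two directions are independent, so no locks are needed. The sock parameter is assumed to be an already-connected descriptor and error handling is omitted:

    #include <sys/socket.h>
    #include <unistd.h>
    #include <thread>

    void receiver(int sock) {
        char buf[4096];
        ssize_t n;
        while ((n = recv(sock, buf, sizeof buf, 0)) > 0) {
            // handle the received message
        }
    }

    void sender(int sock) {
        const char msg[] = "ping";
        send(sock, msg, sizeof msg, 0);   // can run concurrently with recv()
    }

    void run(int sock) {
        std::thread rx(receiver, sock);   // reads in the background
        sender(sock);                     // this thread keeps sending
        rx.join();
        close(sock);
    }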

What you want to do seems to be message passing.
Before trying to build it yourself, take a look at Boost.MPI.

Related

Pinging by creating a new socket per peer

I created a small cross-platform app using Qt sockets in C++ (although this is not a C++ or Qt specific question).
The app has a small "ping" feature that tries to connect to a peer and asks for a small challenge (i.e. some custom data sent and some custom data replied) to see if it's alive.
I'm opening one socket per each peer so as soon as the ping starts we have several sockets in SYN_SENT.
Is this a proper way to implement a ping-like protocol with challenge? Am I wasting sockets? Is there a better way I should be doing this?
I'd say your options are:
An actual ping (using ICMP echo packets). This has low overhead, but only tells you whether the host is up. And it requires you to handle lost packets, timeouts, and retransmits.
A UDP-based protocol. This also has lower kernel overhead, but again you'll be responsible for setting up timeouts, handling lost packets, and retransmits. It has the advantage of allowing you to positively affirm that your program is running on the peer. It can be implemented with only a single socket endpoint no matter how many peers you add. (It is also possible that you could send to multiple peers at once with a broadcast if all are on a local network, or a multicast [complicated set-up required for that].)
TCP socket as you're doing now. This is much easier to code, extremely reliable and will automatically provide a timeout (i.e. your connect will eventually fail if the peer doesn't respond). It lets you know positively that your peer is there and running your program. Although there is more kernel overhead to this, and you will use one socket endpoint on your host per peer system, I wouldn't call it a significant issue unless you think you'll be having thousands of peers.
So, in the end, you have to judge: If thousands of hosts will be participating and this pinging is going to happen frequently, you may be better off coding up a UDP solution. If the pinging is rare or you don't expect so many peers, I would go the TCP route. (And I wouldn't consider that a "waste of sockets" -- those advantages are why TCP is so commonly used.)
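As a rough sketch of the UDP option described above (not a complete protocol, just one possible shape): send a small challenge from a single socket and wait a bounded time for the reply with poll(). The peer address, port and challenge bytes are placeholders, and retransmission is left out:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <poll.h>
    #include <unistd.h>
    #include <cstdint>

    bool udp_ping(const char* peer_ip, uint16_t port, int timeout_ms) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        sockaddr_in peer{};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(port);
        inet_pton(AF_INET, peer_ip, &peer.sin_addr);

        const char challenge[] = "are-you-alive?";       // placeholder challenge
        sendto(sock, challenge, sizeof challenge, 0, (sockaddr*)&peer, sizeof peer);

        pollfd pfd{sock, POLLIN, 0};
        bool alive = false;
        if (poll(&pfd, 1, timeout_ms) > 0 && (pfd.revents & POLLIN)) {
            char reply[64];
            ssize_t n = recv(sock, reply, sizeof reply, 0);
            alive = (n > 0);              // a real protocol would verify the reply
        }
        close(sock);
        return alive;                     // false == timed out or got nothing useful
    }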
The technique described in the question doesn't really implement ping for the connection and doesn't test if the connection itself is alive. The technique only checks that the peer is listening for (and is responsive to) new connections...
What you're describing is more of an "is the server up?" test than a "keep-alive" ping.
If we're discussing "keep-alive" pings, then this technique will fail.
For example, if just the read or the write aspect of the connection is closed, you wouldn't know. Also, if the connection was closed improperly (i.e., due to an intermediary dropping the connection), this ping will not expose the issue.
Most importantly, for some network connections and protocols, you wouldn't be resetting the connection's timeout... so if your peer is checking for connection timeouts, this ping won't help.
For a "keep-alive" ping, I would recommend that you implement a protocol specific ping.
Make sure that the ping is performed within the existing (same) connection and never requires you to open a new connection.
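A hypothetical sketch of such a protocol-specific keep-alive, sent over the existing connection rather than a new one. The PING/PONG framing here is invented purely for illustration; a real protocol would use its own message type:

    #include <sys/socket.h>
    #include <poll.h>
    #include <cstring>

    bool keep_alive(int connected_sock, int timeout_ms) {
        const char ping[] = "PING\n";                // placeholder message type
        if (send(connected_sock, ping, sizeof ping - 1, 0) < 0)
            return false;                            // the write side is already broken

        pollfd pfd{connected_sock, POLLIN, 0};
        if (poll(&pfd, 1, timeout_ms) <= 0)
            return false;                            // no answer in time

        char buf[16];
        ssize_t n = recv(connected_sock, buf, sizeof buf, 0);
        return n > 0 && std::strncmp(buf, "PONG", 4) == 0;
    }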

P2P open-source library with TCP/UDP multicast support

I have a certain application running on my computer. The same application can run on many computers on a LAN or in different places in the world, and I want them to communicate with each other. So I basically want a P2P system, but I will always know which computers (specific IP addresses) will be peers. I just want peers to have join and leave functionality. The single most important aim is communication speed and the time required. I assume simple UDP multicast (if anything like that exists) between peers would be the fastest possible solution. I don't want to retransmit messages even if they are lost. Should I use an existing P2P library, e.g. libjingle, etc., or just create some basic framework from scratch, since my needs are pretty basic?
I think you're missing the point of UDP. It doesn't save time in the sense that a message gets to the destination faster; it's just that you post the message and don't care whether it arrives safely on the other side. On a WAN it will probably not arrive at the other side. UDP across networks is problematic, as it can be thrown out by any router along the way that is tight on bandwidth; there's no guarantee of delivery.
I wouldn't suggest using UDP outside the topology under your control.
As to P2P vs directed sockets, the question is what it is that you need to move around. Do you need bi-/multidirectional communication between all the peers, or are you talking to a single server from all the nodes?
You mentioned multicast - that would mean you have some centralized source of data that transmits information while all the rest listen. In that case there's no benefit to P2P, and multicast, being a UDP-based protocol, may not work well across multiple networks. But you can use TCP connections to each of the nodes and "multicast" on your own, rather than through IGMP. You can (and should) use threading and non-blocking sockets if you're concerned about sends blocking you, and of course you can use QoS settings to "ask" routers to rush your packets through.
You can use ZeroMQ for all the network communication:
ZeroMQ is a simple library that encapsulates TCP and UDP for higher-level communication.
For P2P you can use the different modes of 0MQ:
mode PGM/EPGM to discover members of the P2P group on your LAN (it uses multicast)
mode REQ/REP to ask a question of one member
mode PULL/PUSH to duplicate a resource across the network
mode Publish/Subscribe to transmit a file to all subscribers (sketched below)
Warning: ZeroMQ is hard to install on Windows...
And for the HMI, use Green Shoes?
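Here is a small sketch of the Publish/Subscribe mode mentioned in the list above, again with the plain libzmq C API from C++. The port is a placeholder, and publisher and subscriber would normally live in different processes:

    #include <zmq.h>
    #include <chrono>
    #include <thread>

    int main() {
        void* ctx = zmq_ctx_new();

        // Publisher side: everyone who subscribes receives each message.
        void* pub = zmq_socket(ctx, ZMQ_PUB);
        zmq_bind(pub, "tcp://*:6000");                    // placeholder port

        // Subscriber side (normally another process): subscribe to everything.
        void* sub = zmq_socket(ctx, ZMQ_SUB);
        zmq_connect(sub, "tcp://localhost:6000");
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);

        // Crude pause so the subscription has time to propagate (the "slow
        // joiner" issue); a real program would not publish right after connecting.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));

        const char msg[] = "chunk-of-data";
        zmq_send(pub, msg, sizeof msg, 0);

        char buf[256];
        zmq_recv(sub, buf, sizeof buf, 0);                // blocks until the chunk arrives

        zmq_close(sub);
        zmq_close(pub);
        zmq_ctx_term(ctx);
    }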
I think you should succeed using multicast.
Unfortunately I do not know of any library, but in case you have to do it from scratch, take a look at this:
http://www.tldp.org/HOWTO/Multicast-HOWTO.html
Good luck :-)
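If you do go from scratch, a bare-bones multicast sender along the lines of that HOWTO could look like this. The group address 239.0.0.1 and the port are placeholders from the administratively scoped range; error handling is omitted:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        unsigned char ttl = 1;                          // stay on the local network
        setsockopt(sock, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof ttl);

        sockaddr_in group{};
        group.sin_family = AF_INET;
        group.sin_port = htons(30000);                  // placeholder port
        inet_pton(AF_INET, "239.0.0.1", &group.sin_addr);

        const char msg[] = "hello, group";
        sendto(sock, msg, sizeof msg, 0, (sockaddr*)&group, sizeof group);
        close(sock);
    }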

How to implement client-server UDP multicast in one program?

I have written a server and a client as two separate apps. They communicate through UDP multicast (because I need everyone who joins the group to be able to read and write messages). Right now I have two windows, but my aim is to create one simple chat program, and I don't know how to listen and send at the same time. Do I need to create two sockets, or can I use just one? I've even tried to merge both apps into one, but I didn't succeed (I know, I know.. but I was kinda desperate).
I've searched Google for a tutorial, but didn't succeed.
I'm using C++.
You can use one or two sockets, it all depends on whether you wish to bind to a particular network adapter and whether you wish to use unicast & broadcast packets. It is often easier to manage one for sending and one for receiving.
To listen to sent multicast packets on the same host check the IP_MULTICAST_LOOP socket option, noting it applies differently on Windows to Unix.
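A sketch of that two-socket layout, using plain POSIX sockets rather than Qt for brevity: one socket joined to the group for receiving, one plain UDP socket for sending, with IP_MULTICAST_LOOP enabled so the program also sees its own messages. The group address and port are placeholders:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <cstdint>

    int main() {
        const char* group_ip = "239.0.0.1";             // placeholder group address
        const uint16_t port = 30000;                    // placeholder port

        // Receiving socket: bind to the group port and join the group.
        int rx = socket(AF_INET, SOCK_DGRAM, 0);
        int yes = 1;
        setsockopt(rx, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes); // allow several instances per host
        sockaddr_in local{};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = INADDR_ANY;
        local.sin_port = htons(port);
        bind(rx, (sockaddr*)&local, sizeof local);

        ip_mreq mreq{};
        inet_pton(AF_INET, group_ip, &mreq.imr_multiaddr);
        mreq.imr_interface.s_addr = INADDR_ANY;
        setsockopt(rx, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

        // Sending socket: loopback enabled so this instance also receives its own messages.
        int tx = socket(AF_INET, SOCK_DGRAM, 0);
        unsigned char loop = 1;
        setsockopt(tx, IPPROTO_IP, IP_MULTICAST_LOOP, &loop, sizeof loop);

        sockaddr_in group{};
        group.sin_family = AF_INET;
        group.sin_port = htons(port);
        inet_pton(AF_INET, group_ip, &group.sin_addr);

        const char msg[] = "hi everyone";
        sendto(tx, msg, sizeof msg, 0, (sockaddr*)&group, sizeof group);

        char buf[1500];
        recvfrom(rx, buf, sizeof buf, 0, nullptr, nullptr);   // will also pick up our own message

        close(tx);
        close(rx);
    }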

Send same packets to multiple clients

I have to develop software that sends the same packets to multiple destinations.
But I must not use a multicast scheme!!!! (because my boss is a stupid man)
So, anyway, the problem is this:
I have the same packets and multiple IP addresses (clients), and I cannot use multicast.
How can I do this in the best way?
I must use C++ as the language and Linux as the platform.
So please help me.
Thanks
If your boss said you can't use multicast, maybe he/she has his/her reason. I guess broadcasting is out of the game too?
If these are the requisites, your only chance is to establish a TCP connection with every remote host you want to send packets to.
EDIT
UDP, conversely, would not provide much benefit over multicast if your application runs over a LAN whose configuration you are in charge of; that's the reason I specified TCP.
Maybe you have to describe your scenario a little better.
This could be done with either TCP or UDP depending on your reliability requirements. Can you tolerate lost or reordered packets? Are you prepared to handle timeouts and retransmission? If both answers are "yes", pick UDP. Otherwise stay with TCP. Then:
TCP case. Instead of single multicast UDP socket you would have a number of TCP sockets, one per destination. You will have to figure out the best scheme for connection establishment. Regular listening and accepting connecting clients works as usual. Then you just iterate over connected sockets and send your data to each one.
UDP case. This could be done with a single UDP socket on the server side. If you know the IPs and ports of the clients (data receivers), use sendto(2) on the same data for each address/port. The clients would have to be recv(2)-ing at that time. If you don't know your clients upfront, you'd need to devise a scheme for clients to request the data, or just register with the server. That's where recvfrom(2) is useful - it gives you the address of the client.
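A minimal sketch of that UDP case: one server socket, sendto(2) of the same buffer to every known client. The client list is hard-coded here for illustration; in practice you would fill it from recvfrom(2) as clients register:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <vector>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        std::vector<sockaddr_in> clients;
        for (const char* ip : {"192.168.0.10", "192.168.0.11"}) {  // placeholder addresses
            sockaddr_in c{};
            c.sin_family = AF_INET;
            c.sin_port = htons(40000);                             // placeholder port
            inet_pton(AF_INET, ip, &c.sin_addr);
            clients.push_back(c);
        }

        const char payload[] = "same packet for everyone";
        for (const auto& c : clients)                              // same data, one client at a time
            sendto(sock, payload, sizeof payload, 0, (const sockaddr*)&c, sizeof c);

        close(sock);
    }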
You have restricted yourself by saying no to multicast. I guess sending packets to multiple clients is just a part of your requirement, and unless you shed more light on it, it will be difficult to provide a complete solution.
Are you expecting two-way communication between the client and the server? In that case choosing multicast may prove complex. Please clarify.
You have to iterate through the clients and send packets one after another. You may want to keep the sessions open if you are expecting responses back from the clients.
The choice of UDP or TCP again depends on the nature of the data being sent. With UDP you would need to handle out-of-sequence packets and also implement retransmission.
You'll have to create a TCP listener on your server running on a particular port, listening for incoming TCP client connections (sockets).
Every time a client connects, you'll have to cache it in some kind of data structure, like a name/value pair (the name being a unique name for the client and the value being the network stream of that client obtained from the TCP socket).
Then, when you are finally ready to transmit the data, you could either iterate through this collection of name/value connections and send the data as a byte array to each client one by one, or spawn off one thread per connected client and have it send the data concurrently.
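A short sketch of that idea: accept clients into a name-to-socket map, then iterate over the map and send the same byte buffer to each one. The port, the fixed client count and the naming scheme are placeholders, and error handling is omitted:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>
    #include <map>
    #include <string>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(7000);                       // placeholder port
        bind(listener, (sockaddr*)&addr, sizeof addr);
        listen(listener, 16);

        std::map<std::string, int> clients;                // unique name -> connected socket
        for (int i = 0; i < 3; ++i) {                      // wait for, say, three clients
            int fd = accept(listener, nullptr, nullptr);
            clients["client-" + std::to_string(i)] = fd;
        }

        const char data[] = "broadcast over unicast";
        for (const auto& [name, fd] : clients)             // same data, one client at a time
            send(fd, data, sizeof data, 0);

        for (const auto& [name, fd] : clients)
            close(fd);
        close(listener);
    }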
TCP is a bulky protocol (due to its connection-oriented nature) and transmission of large data (like videos/images) can be quite slow.
UDP is definitely the choice for streaming large data packets, but you'll have to trade that off against the delivery guarantee.

Multithreaded Server Issue

I am writing a server in linux that is supposed to serve an API.
Initially, I wanted to make it multithreaded on a single port, meaning that I'd have multiple threads working on the various requests received on a single port.
One of my friends told me that this is not the way it is supposed to work. He told me that when a request is received, I first have to follow a handshake procedure, create a thread that listens on some other port dedicated to that request, and then redirect the requesting client to the new port.
Theoretically, it's very interesting but I could not find any information on how to implement the handshake and do the redirection. Can someone help?
If I'm not wrong in interpreting your responses, once I create a multithreaded server whose main thread listens on a port and creates a new thread to handle each request, I'm essentially making it multithreaded on a single port?
Consider the scenario where I get a large number of requests every second. Isn't it true that every request on the port now has to wait for the "current" request to complete? If not, how would the communication still work: say a browser sends a request, so the thread handling it first has to listen on the port, block it, process the request, respond, and then unblock the port.
That way, even though I have "multiple threads", I'm only ever using one thread at a time apart from the main thread, because the port is being blocked.
What your friend told you is similar to passive FTP - a client tells the server that it needs a connection, the server sends back the port number and the client creates a data connection to that port.
But all you wanted to do is a multithreaded server. All you need is one server socket listening and accepting connections on a given port. As soon as the automatic TCP handshake is finished, you'll get a new socket from the accept function - that socket will be used for communication with the client that has just connected. So now you only have to create a new thread, passing that client socket to the thread function. In your server thread, you will then call accept again in order to accept another connection.
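A minimal sketch of that accept-and-spawn loop: one listening socket, and a new std::thread per accepted client. The port and the handler body (a simple echo) are placeholders, and error handling is omitted:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>
    #include <thread>

    void handle_client(int client_sock) {
        char buf[4096];
        ssize_t n;
        while ((n = recv(client_sock, buf, sizeof buf, 0)) > 0) {
            send(client_sock, buf, n, 0);      // placeholder: echo back whatever we got
        }
        close(client_sock);
    }

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(8080);           // placeholder port
        bind(listener, (sockaddr*)&addr, sizeof addr);
        listen(listener, 64);

        for (;;) {
            int client = accept(listener, nullptr, nullptr);  // TCP handshake already done
            std::thread(handle_client, client).detach();      // serve this client in parallel
        }
    }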
TCP/IP does its own handshake; if you can't think of any reason to do an application-level handshake, then your application doesn't need one.
An example of an application specific handshake could be for user authentication.
What your colleague is suggesting sounds like the way FTP works. This is not a good thing to do - the internet these days is more or less built around protocols that use a single port, and having a separate command port is bad. One of the reasons is that stateful firewalls aren't designed for multi-port applications; they have to be extended for each individual application that does things this way.
Look at ASIO's tutorial on async TCP. There, one part accepts TCP connections and spawns handlers that each communicate with a single client. That's how TCP servers usually work (including HTTP/web, the most common TCP protocol).
You may disregard the asynchronous parts of ASIO if you're set on creating a thread per connection; they don't apply to your question. (Going fully async with one worker thread per core is nice, but it might not integrate well with the rest of your environment.)
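For reference, a rough sketch of such an ASIO accept loop, assuming a reasonably recent Boost.Asio (io_context and the move-accepting async_accept overload). The port is a placeholder and the per-client session handling is left as a comment:

    #include <boost/asio.hpp>
    #include <memory>

    using boost::asio::ip::tcp;

    void do_accept(tcp::acceptor& acceptor) {
        acceptor.async_accept(
            [&acceptor](boost::system::error_code ec, tcp::socket socket) {
                if (!ec) {
                    auto client = std::make_shared<tcp::socket>(std::move(socket));
                    // hand `client` off to a session object or a thread here
                }
                do_accept(acceptor);           // keep accepting further clients
            });
    }

    int main() {
        boost::asio::io_context io;
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9000));  // placeholder port
        do_accept(acceptor);
        io.run();                              // runs the async accept loop
    }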