What's the difference between pipe and socket? - c++

Both can be used for communicating between different processes,
what's the difference?

Windows has two kinds of pipes: anonymous pipes and named pipes. Anonymous pipes correspond (fairly) closely to Unix pipes -- typical usage is for a parent process to set them up to be inherited by a child process, often connected to the standard input, output and/or error streams of the child. At one time, anonymous pipes were implemented completely differently from named pipes so they didn't (for one example) support overlapped I/O. Since then, that's changed so an anonymous pipe is basically just a named pipe with a name you don't know, so you can't open it by name, but it still has all the other features of a named pipe (such as the aforementioned overlapped I/O capability).
Windows named pipes are much more like sockets. They originated with OS/2, where they were originally the primary mechanism for creating client/server applications. They were originally built around NetBIOS (i.e., used NetBIOS both for addressing and transport). They're tightly integrated with things like Windows authentication, so you can (for example) have a named pipe server impersonate the client to restrict the server to doing things the client would be able to do if logged in directly. More recently, MS has gone to some trouble to get rid of the dependence on NetBIOS, but even though they can now use IP as their transport (and DNS for addressing, IIRC) they're still used primarily for Windows machines. The primary use on other machines is to imitate Windows, such as by running Samba.

(Shamelessly cribbed from http://www.perlmonks.org/?node_id=180842)
Pipes are fast and reliable, because they are implemented in memory on a single host where both communicating processes run. Sockets are slower and less reliable, but are much more flexible since they allow communication between processes on different hosts.

(Off the top of my head)
Pipe: A tube with a small bowl at one end; used for smoking tobacco
Socket: Receptacle where something (a pipe, probe or end of a bone) is inserted
Anyways:
"A major difference between pipes and
sockets is that pipes require a common
parent process to set up the
communications channel. A connection
between sockets can be set up by two
unrelated processes, possibly residing
on different machines."

Sockets typically use an IP-based protocol such as TCP or UDP, so they would be somewhat slower, but your code will be more portable if you ever need to communicate over a network. There is a third approach, shared memory, and a fourth, Mach ports (though in that case I am not sure whether it would work with Windows).

They both serve the same basic function. The main practical difference is that local pipes tend to be more efficient, because they are a thin layer over the kernel's in-memory buffering. Sockets are a more general abstraction (with addressing, protocols and network transparency), and that extra generality comes with some overhead.

Related

How do I establish communication between multiple applications across languages

I currently have two programs running that are related to the same operation. Program A (written in VB6) is a sort of "controller" that turns a device on and launches Program B (written in C++), which starts collecting the device's data. Now, Program B needs a way to report a few MINOR pieces of the data back to Program A. Program A also needs to monitor certain extras, and if signal 1 fires, it needs to let Program B know. What is the best way to establish this communication? The obvious ways are a text/binary file that each program reads and writes to (nowhere near the best way), and I also thought of using UDP to communicate since the machine will be on a closed network. However, I'm unsure of how I should actually do this.
Named pipes are an elegant solution, but you could even use files for this, as you said.
http://support.microsoft.com/kb/177696
How to work with named pipes (C++ server, C# client)
There are other choices:
Clipboard, COM, Data Copy, DDE, File Mapping, Mailslots, Pipes, RPC, Windows Sockets
Refer here: http://msdn.microsoft.com/en-us/library/windows/desktop/aa365574%28v=vs.85%29.aspx
How are you turning on the device and starting Program B?
I would use the same channel for the extra communication, as that line is already there and working (since Program B has started).
Personally I would choose to establish communication and transmit the data using standard protocols for the task, like FTP or HTTP. This way, when Program A or Program B changes (or the medium of connection is swapped out for another), you still have a well-defined and easy-to-implement communication protocol. Network stacks are available for a variety of CPUs and operating systems, and FTP (or TFTP) is easy to implement.

c++ IPC through streambuf on Windows

I have a message object serialized as a binary data stream (it can be any std::streambuf), and I want to transfer it to another process. The key points: the server application must handle many clients, and the connection has to be asynchronous (because of the multiple clients) and bidirectional (under the hood it may be implemented as two separate connections). Messages have variable length and should be queued. Which IPC method should I pick for this? Is there any simple way to transfer a stream buffer between applications? Speed is not critical, but it would be good not to block the application for too long. Everything happens locally under Windows (I am targeting XP and newer); no network support is required.
I also need the ability to listen for incoming connections. The server should automatically detect new connections, do a handshake, and accept if the client is compatible. I am aware that I will need to write many of the things I mentioned myself. Anyway, it must be possible to achieve, and of course simpler is better.
You can use named pipes in Windows.
See MSDN ref: http://msdn.microsoft.com/en-us/library/aa365150%28v=vs.85%29.aspx
You can also make the pipe full duplex (bi-directional) and asynchronous. If you are familiar with the file I/O APIs on Windows, it should be straightforward to use.

Boost::Signals encapsulation over network

I am currently involved in the development of software that uses distributed computing to detect different events.
The current approach is: a dozen threads run simultaneously on different (physical) computers. Each event is assigned a number; every thread broadcasts the events it detects to the others and filters the relevant events from the incoming stream.
I feel very bad about that, because it looks awful, is hard to maintain, and could lead to performance issues when the system is upgraded.
So I am looking for a flexible and elegant way to handle this IPC, and Boost::Signals seems like a good candidate; but I have never used it, and I would like to know whether it can be made to encapsulate network communication.
Since I don't know of any solution that will do that, other than Open MPI, if I had to do it I would first use Google's Protocol Buffers as my message container. With it, I could create an abstract base message with fields like source, destination, type, id, etc. Then I would use Boost.Asio to distribute those across the network, or a named pipe/loopback for local messages. Perhaps a dedicated process could run on each physical computer just for distribution. Each thread would register with it which message types it is interested in and what its named pipe is called. This process would know the IPs of all the other services.
If you need IPC over the network then boost::signals won't help you, at least not entirely by itself.
You could try using Open MPI.

C++: Most common way to talk to one application from the other one

In bare outlines: I've got an application which scans directories at startup and builds an index of special files; after that it runs like a daemon. The other application creates such 'special' files and places them in some directory. What is the most common, simple way of informing the first application about a new file (so it can index it)? The first application runs continuously, so the mechanism shouldn't slow it down too much, and cross-platform would be best if possible.
I've looked at RPC and IPC frameworks, but they seem too heavyweight (and probably non-cross-platform and slow, needing a lot of machinery to work; I need a simple, lightweight, well-working way).
Pipes would be one option: see Network Programming with Pipes and Remote Procedure Calls (Windows) or Creating Pipes in C (Unix).
I haven't done this in a while but from my experience with RPC, DCOM, COM, .NET Remoting, and socket programming, I think pipes is the most straightforward and efficient option.
For Windows (NTFS) you can get a notification from the OS that a directory has changed. But it is not cross-platform, and it is not really communication between two apps.
"IPC but them are too heavy" - no no, they are not heavy at all. You should look at named pipes - this IPC is fastest and it is in both Win/Unix-like with slight differences. Or sockets!
eisbaw suggested TCP. I'd say, to make it even simpler, use UDP.
Create a listening thread that will receive packets, and handle them from there - in all applications.
Since everything is on the same PC you'll never lose a packet, something UDP can legitimately do over a real network.
Each application instance will need its own port, but that is easy to configure with the configuration files that you (I assume) already have.
Keep it simple (:
Local TCP sockets are guaranteed to work - as Andrey already mentioned.
Shared memory would be another option, take a look at
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2044.html
As Andrey noted, if you agree on the full path ahead of time, you can just have the OS tell you when it's added. All major platforms actually support this in some form. You can use a cross-platform library for this, such as QFileSystemWatcher.
EDIT: I don't think QFileSystemWatcher will cause too much of a performance hit. It definitely relies on the underlying OS for notifications on Linux, FreeBSD, and Mac OS (and I think Windows). See http://qtnode.net/wiki/QFileSystemWatcher
Memory-mapped files, sockets, and named pipes are all highly efficient, cross-platform IPC mechanisms. The APIs for named pipes and memory-mapped files differ between POSIX and Win32, but the basic mechanisms are similar enough that it's easy to write a cross-platform wrapper. Sockets and named pipes tend to be fast because, in inter-process situations, the developers of most common OSs have built in shortcuts that essentially make a socket or named-pipe write a thin wrapper around a shared memory section.

Fast Cross Platform Inter Process Communication in C++

I'm looking for a way to get two programs to efficiently transmit a large amount of data to each other, which needs to work on Linux and Windows, in C++. The context here is a P2P network program that acts as a node on the network and runs continuously, and other applications (which could be games, hence the need for a fast solution) will use it to communicate with other nodes in the network. If there's a better overall design for this, I would be interested as well.
boost::asio is a cross-platform library for asynchronous I/O over sockets. You can combine it with, for instance, Google Protocol Buffers for your actual messages.
Boost also provides boost::interprocess for inter-process communication on the same machine, but asio lets you do your communication asynchronously, and you can easily use the same handlers for both local and remote connections.
I have been using ICE by ZeroC (www.zeroc.com), and it has been fantastic. Super easy to use, and it's not only cross platform, but has support for many languages as well (python, java, etc) and even an embedded version of the library.
Well, if we can assume the two processes are running on the same machine, then the fastest way for them to transfer large quantities of data back and forth is by keeping the data inside a shared memory region; with that setup, the data is never copied at all, since both processes can access it directly. (If you wanted to go even further, you could combine the two programs into one program, with each former 'process' now running as a thread inside the same process space instead. In that case they would be automatically sharing 100% of their memory with each other)
Of course, just having a shared memory area isn't sufficient in most cases: you would also need some sort of synchronization mechanism so that the processes can read and update the shared data safely, without tripping over each other.

The way I would do that would be to create two queues in the shared memory region (one for each process to send with). Either use a lockless FIFO-queue class, or give each queue a semaphore/mutex that you can use to serialize pushing data items into the queue and popping data items out of the queue.

(Note that the data items you'd be putting into the queues would only be pointers to the actual data buffers, not the data itself... otherwise you'd be back to copying large amounts of data around, which you want to avoid. It's a good idea to use shared_ptrs instead of plain C pointers, so that "old" data will be automatically freed when the receiving process is done using it.)

Once you have that, the only other thing you'd need is a way for process A to notify process B when it has just put an item into the queue for B to receive (and vice versa)... I typically do that by writing a byte into a pipe that the other process is select()-ing on, to cause the other process to wake up and check its queue, but there are other ways to do it as well.
This is a hard problem.
The bottlenecks are the Internet itself and the fact that your clients might be behind NAT.
If you are not talking over the Internet, or if you explicitly don't have clients behind carrier-grade evil NATs, you need to say so.
Because otherwise it boils down to: use TCP. Suck it up.
I would strongly suggest Protocol Buffers on top of TCP or UDP sockets.
So, while the other answers cover part of the problem (socket libraries), they're not telling you about the NAT issue. Rather than have your users tinker with their routers, it's better to use some techniques that should get you through a vaguely sane router with no extra configuration. You need to use all of these to get the best compatibility.
First, the ICE library linked here is a NAT traversal technique that works with STUN and/or TURN servers out in the network. You may have to provide some infrastructure for this to work, although there are some public STUN servers.
Second, use both UPnP and NAT-PMP. One library here, for example.
Third, use IPv6. Teredo, which is one way of running IPv6 over IPv4, often works when none of the above do, and who knows, your users may have working IPv6 by some other means. Very little code to implement this, and increasingly important. I find about half of Bittorrent data arrives over IPv6, for example.