I currently have two programs running that are related to the same operation. Program A (written in VB6) is a sort of "controller" that turns a device on and launches Program B (written in C++), which starts collecting the device's data. Now, Program B needs a way to report a few MINOR pieces of the data back to Program A. Program A also needs to monitor certain extras, and if signal 1 fires, it needs to let Program B know. What is the best way to establish this communication? The obvious way is a text/binary file that each program reads and writes (nowhere near the best way), and I also thought of using UDP to communicate since the machine will be on a closed network. However, I'm unsure of how I should actually do this.
Named pipes are an elegant solution, but you can even use files for this, as you said.
http://support.microsoft.com/kb/177696
How to work with named pipes (C++ server , C# client)
There are other choices:
Clipboard, COM, Data Copy, DDE, File Mapping, Mailslots, Pipes, RPC, Windows Sockets
Refer here: http://msdn.microsoft.com/en-us/library/windows/desktop/aa365574%28v=vs.85%29.aspx
How are you turning on the device and starting Program B?
I would use the same channel for the extra communication, as this line is already there and working (since Program B has started).
Personally I would choose to establish communication and transmit the data using standard protocols for the task, like FTP or HTTP. This way, when Program A or Program B changes (or the medium of connection is swapped out for another), you still have a well-defined and easy-to-implement communication protocol. Network stacks are available for a variety of CPUs and operating systems, and FTP (or TFTP) is easy to implement.
Related
I'm trying to implement my idea of a simple yet fairly effective multithreaded server working over UDP. The main goal is gaming(-like) applications, but it would be good if it could be used for other purposes too.
I want to use these APIs/technologies:
std::thread for multithreading: since it is part of the C++ standard, it should be future-proof, and as far as I've seen it is both simple and works well with C++.
BSD sockets (Linux) & WinSock2 (Windows). I would create one abstract class called Socket and, for each platform (Linux: BSD, Windows: WinSock), a derived class implementing the native API. Then I would use the API provided by the base class Socket, not the native/platform API. That would let me use one code base for the whole server/client module, and if I want to change platform I just switch the concrete socket class and that's it.
As for the strategy of server-client communication, I thought of something like this:
Each program has two sockets: one that listens on a specified port and one used to send data to the server/other clients. The two sockets run on different threads so that I can read and send data at the same time (more or less); that way waiting for data won't ruin my performance. There will be one main server, and the other clients will connect directly to it. Clients will send only their own data and receive data directly from the server.
Now I have some questions:
Is it wise to use std::thread? I heard it's good on Linux, but not that good on Windows. Would pthreads be much better?
Any other interesting ideas for keeping one code base across platforms (mainly Linux & Windows)? Or is mine good enough?
Any other ideas or tips about the server/client strategy? I've written some simple network apps, but they didn't need much of a strategy, so I'm not sure mine is the best of the simple approaches.
How often should I send data from client to server (and from server to client)? I don't want to flood the network or drive the server load to 100%.
Also: it should work well with 2-4 players at the same time; I don't plan to use it with more at the moment.
Intuitively, for multithreading purposes Linux + pthreads would be a nice combination; a vast number of mission-critical systems run on it. However, std::thread hides the platform-dependent details, which is a nice feature to have. Certainly, if there are some rough edges in the Windows implementation, Microsoft will likely correct them in the future. But if I were you, I would certainly select the Linux + std::thread combination. Choosing Linux over Windows is a different topic and needs no comment here (from a server-development perspective). std::thread provides a nice set of features, yet retains the power of pthreads.
Regarding UDP, there are both pros and cons. But I'd say that if you are going to open your server to the public, you also have to think about network firewalls. If you can address the inherent transport-layer issues of UDP (packet reordering, lost-packet recovery), a UDP server is lightweight in most cases.
How often you need to send messages depends on your game, so I can't comment on it.
Moreover, take the security of your data communication seriously. Sooner or later your server will be attacked; it is just a matter of time.
I have a message object serialized as a binary data stream (it can be any std::streambuf), and I want to transfer it to another process. The key point is that the server application must handle many clients, and the connection has to be asynchronous (because of the multiple clients) and bidirectional (under the hood it may be implemented as two separate connections). Messages have variable length and should be queued. Which IPC method should I pick for this? Is there any simple way to transfer a stream buffer between applications? Speed is not critical, but it would be good not to block the application for too long. Everything will be done locally under Windows (I'm targeting XP and newer); no network support is required.
I also need the possibility of listening for incoming connections. The server should automatically detect new connections, do some handshaking, and accept them if they are compatible. I am aware that I will need to write many of the things I mentioned myself. Anyway, it must be possible to achieve, though of course simpler is better.
You can use named pipes in Windows.
See MSDN ref: http://msdn.microsoft.com/en-us/library/aa365150%28v=vs.85%29.aspx
You can also set it to be full duplex (bidirectional) and asynchronous. If you are familiar with the file I/O APIs on Windows, it should be straightforward to use.
Both can be used for communicating between different processes, so what's the difference?
Windows has two kinds of pipes: anonymous pipes and named pipes. Anonymous pipes correspond (fairly) closely to Unix pipes -- typical usage is for a parent process to set them up to be inherited by a child process, often connected to the standard input, output and/or error streams of the child. At one time, anonymous pipes were implemented completely differently from named pipes so they didn't (for one example) support overlapped I/O. Since then, that's changed so an anonymous pipe is basically just a named pipe with a name you don't know, so you can't open it by name, but it still has all the other features of a named pipe (such as the aforementioned overlapped I/O capability).
Windows named pipes are much more like sockets. They originated with OS/2, where they were originally the primary mechanism for creating client/server applications. They were originally built around NetBIOS (i.e., used NetBIOS both for addressing and transport). They're tightly integrated with things like Windows authentication, so you can (for example) have a named pipe server impersonate the client to restrict the server to doing things the client would be able to do if logged in directly. More recently, MS has gone to some trouble to get rid of the dependence on NetBIOS, but even though they can now use IP as their transport (and DNS for addressing, IIRC) they're still used primarily for Windows machines. The primary use on other machines is to imitate Windows, such as by running Samba.
(Shamelessly cribbed from http://www.perlmonks.org/?node_id=180842)
Pipes are fast and reliable, because they are implemented in memory on a single host where both communicating processes run. Sockets are slower and less reliable, but are much more flexible since they allow communication between processes on different hosts.
(Off the top of my head)
Pipe: A tube with a small bowl at one end; used for smoking tobacco
Socket: Receptacle where something (a pipe, probe or end of a bone) is inserted
Anyways:
"A major difference between pipes and
sockets is that pipes require a common
parent process to set up the
communications channel. A connection
between sockets can be set up by two
unrelated processes, possibly residing
on different machines."
Sockets would use some IP-based protocol like TCP or UDP, and would thus be slower, but your code would be more portable if you ever need to communicate over a network. There is a third approach, shared memory, and a fourth, Mach ports (though in that case I am not sure whether it would work with Windows).
They both serve the same purpose; the practical difference is that pipes are usually more efficient for local communication, since they skip the network protocol stack entirely, while sockets are a more general abstraction that also works between machines, at the cost of some protocol overhead.
I am currently involved in the development of a software using distributed computing to detect different events.
The current approach is: a dozen threads are running simultaneously on different (physical) computers. Each event is assigned a number, and every thread broadcasts its detected events to the others and filters the relevant events from the incoming stream.
I feel very bad about that, because it looks awful, is hard to maintain, and could lead to performance issues when the system is upgraded.
So I am looking for a flexible and elegant way to handle this IPC, and Boost.Signals seems a good candidate; but I have never used it, and I would like to know whether it can encapsulate network communication.
Since I don't know any solution that will do that other than Open MPI, if I had to do it I would first use Google's Protocol Buffers as my message container. With it, I could create an abstract base message with fields like source, destination, type, id, etc. Then I would use Boost.Asio to distribute those across the network, or over a named pipe/loopback for local messages. Perhaps, on each physical computer, a dedicated process could run just for distribution: each thread registers with it which types of messages it is interested in and what its named pipe is called, and this process knows the IPs of all the other services.
If you need IPC over the network then boost::signals won't help you, at least not entirely by itself.
You could try using Open MPI.
I'm looking for a way to get two programs to efficiently transmit a large amount of data to each other, which needs to work on Linux and Windows, in C++. The context here is a P2P network program that acts as a node on the network and runs continuously, and other applications (which could be games hence the need for a fast solution) will use this to communicate with other nodes in the network. If there's a better solution for this I would be interested.
boost::asio is a cross-platform library handling asynchronous I/O over sockets. You can combine this with, for instance, Google Protocol Buffers for your actual messages.
Boost also provides boost::interprocess for interprocess communication on the same machine, but asio lets you do your communication asynchronously, and you can easily use the same handlers for both local and remote connections.
I have been using ICE by ZeroC (www.zeroc.com), and it has been fantastic. Super easy to use, and it's not only cross platform, but has support for many languages as well (python, java, etc) and even an embedded version of the library.
Well, if we can assume the two processes are running on the same machine, then the fastest way for them to transfer large quantities of data back and forth is by keeping the data inside a shared memory region; with that setup, the data is never copied at all, since both processes can access it directly. (If you wanted to go even further, you could combine the two programs into one program, with each former 'process' now running as a thread inside the same process space instead. In that case they would be automatically sharing 100% of their memory with each other)
Of course, just having a shared memory area isn't sufficient in most cases: you would also need some sort of synchronization mechanism so that the processes can read and update the shared data safely, without tripping over each other.
The way I would do that would be to create two double-ended queues in the shared memory region (one for each process to send with). Either use a lockless FIFO-queue class, or give each double-ended queue a semaphore/mutex that you can use to serialize pushing data items into the queue and popping data items out of the queue. (Note that the data items you'd be putting into the queues would only be pointers to the actual data buffers, not the data itself... otherwise you'd be back to copying large amounts of data around, which you want to avoid. It's a good idea to use shared_ptrs instead of plain C pointers, so that "old" data will be automatically freed when the receiving process is done using it.)
Once you have that, the only other thing you'd need is a way for process A to notify process B when it has just put an item into the queue for B to receive (and vice versa)... I typically do that by writing a byte into a pipe that the other process is select()-ing on, to cause the other process to wake up and check its queue, but there are other ways to do it as well.
This is a hard problem.
The bottlenecks are the internet and the fact that your clients might be behind NAT.
If you are not talking about the internet, or if you explicitly don't have clients behind carrier-grade evil NATs, you need to say so.
Because it boils down to: use TCP. Suck it up.
I would strongly suggest Protocol Buffers on top of TCP or UDP sockets.
So, while the other answers cover part of the problem (socket libraries), they're not telling you about the NAT issue. Rather than have your users tinker with their routers, it's better to use some techniques that should get you through a vaguely sane router with no extra configuration. You need to use all of these to get the best compatibility.
First, ICE library here is a NAT traversal technique that works with STUN and/or TURN servers out in the network. You may have to provide some infrastructure for this to work, although there are some public STUN servers.
Second, use both UPnP and NAT-PMP. One library here, for example.
Third, use IPv6. Teredo, which is one way of running IPv6 over IPv4, often works when none of the above do, and who knows, your users may have working IPv6 by some other means. Very little code to implement this, and increasingly important. I find about half of Bittorrent data arrives over IPv6, for example.