I've got one program that creates 3 worker programs. The preferred method of communication in my situation would be a memory buffer which all four programs can access.
Is there a way to pass a pointer, reference, or any other kind of handle to the child processes?
Update
The three child programs transform vertex data, while the main program primarily deals with UI, system messages, errors, etc.
I'm hoping there is some way to leverage OpenCL such that the four programs can share a context. If this is not possible, it would be nice to have access to the array of vertices across all programs.
I suppose our target platform is Windows right now but we'd like to keep it as cross-platform as possible. If there is no way to implement this utilizing OpenCL we'll probably fall back to wrapping this piece of code for a handful of different platforms.
Your question is platform-dependent, therefore:
For Windows: Named Shared Memory
For Linux: mmap or POSIX shared memory
General case: boost::interprocess
If you explain a bit about what kind of data is shared and the other constraints/goals of the system, it would be easier to answer your question.
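For the boost::interprocess route, here is a minimal sketch of sharing an array of vertices between a parent and its worker processes; the segment name, object name, and sizes are purely illustrative, not taken from any existing code.

```cpp
// Sketch only: the parent creates a named shared segment and places a float array in it.
#include <boost/interprocess/managed_shared_memory.hpp>

namespace bip = boost::interprocess;

int main() {
    // Parent: create (or open) a 64 MB segment visible to all four processes.
    bip::managed_shared_memory segment(bip::open_or_create, "VertexSegment", 64 * 1024 * 1024);

    // Construct a named array of 300,000 floats inside the segment, zero-initialized.
    float* vertices = segment.construct<float>("Vertices")[300000](0.0f);
    vertices[0] = 1.0f;  // any process that maps the segment sees this write

    // Worker processes would open the same segment and look the array up by name:
    //   bip::managed_shared_memory seg(bip::open_only, "VertexSegment");
    //   float* v = seg.find<float>("Vertices").first;

    // bip::shared_memory_object::remove("VertexSegment");  // cleanup when everyone is done
    return 0;
}
```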
I wonder why you think a shared buffer would be good? Is that because you want to pass a pointer in the buffer to the data to be worked on? Then you need shared memory if you want to work across processes.
What about a client-server approach where you send data to clients on request?
More information about your problem would help in giving a better answer.
You should use Named Shared Memory and inter-process synchronization.
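On Windows that typically means CreateFileMapping/MapViewOfFile for the named shared memory plus a named mutex (or event) for the synchronization. A hedged sketch; the object names are made up for illustration:

```cpp
#include <windows.h>

int main() {
    const DWORD size = 1 << 20;  // 1 MB

    // Create (or open) a named, pagefile-backed mapping visible to all four processes.
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
                                     0, size, "Local\\VertexBuffer");
    if (!hMap) return 1;

    void* view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, size);
    if (!view) { CloseHandle(hMap); return 1; }

    // A named mutex provides the inter-process synchronization.
    HANDLE hMutex = CreateMutexA(nullptr, FALSE, "Local\\VertexBufferLock");
    WaitForSingleObject(hMutex, INFINITE);
    static_cast<char*>(view)[0] = 42;  // touch the shared buffer under the lock
    ReleaseMutex(hMutex);

    UnmapViewOfFile(view);
    CloseHandle(hMutex);
    CloseHandle(hMap);
    return 0;
}
```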
This is somewhat wider than the original question on shared memory buffers, but depending on your design, volume of data, and performance requirements you could look into in-memory databases such as Redis or distributed caches, especially if you find yourself in a 'publish-subscribe' situation.
Related
What is the fastest technology to send messages between C++ application processes, on Linux? I am vaguely aware that the following techniques are on the table:
TCP
UDP
Sockets
Pipes
Named pipes
Memory-mapped files
Are there any more ways, and which is the fastest?
Whilst all the above answers are very good, I think we'd have to discuss what "fastest" means [and does it have to be the "fastest", or just "fast enough" for the purpose at hand?].
For LARGE messages, there is no doubt that shared memory is a very good technique, and very useful in many ways.
However, if the messages are small, there is the drawback of having to come up with your own message-passing protocol and a way of informing the other process that there is a message.
Pipes and named pipes are much easier to use in this case - they behave pretty much like a file, you just write data at the sending side, and read the data at the receiving side. If the sender writes something, the receiver side automatically wakes up. If the pipe is full, the sending side gets blocked. If there is no more data from the sender, the receiving side is automatically blocked. Which means that this can be implemented in fairly few lines of code with a pretty good guarantee that it will work at all times, every time.
Shared memory, on the other hand, relies on some other mechanism to inform the other thread that "you have a packet of data to process". Yes, it's very fast if you have LARGE packets of data to copy - but I would be surprised if there is a huge difference compared to a pipe, really. The main benefit would be that the other side doesn't have to copy the data out of the shared memory - but it also relies on there being enough memory to hold all "in flight" messages, or the sender having the ability to hold things back.
I'm not saying "don't use shared memory", I'm just saying that there is no such thing as "one solution that solves all problems 'best'".
To clarify: I would start by implementing a simple method using a pipe or named pipe [depending on which suits the purposes], and measure the performance of that. If a significant time is spent actually copying the data, then I would consider using other methods.
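As a baseline for that first measurement, a minimal POSIX pipe sketch (the payload here is just a placeholder string):

```cpp
#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>

int main() {
    int fds[2];
    if (pipe(fds) != 0) return 1;

    pid_t pid = fork();
    if (pid == 0) {                                      // child: receiver
        close(fds[1]);
        char buf[64] = {0};
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);  // blocks until the parent writes
        if (n > 0) std::printf("child received: %s\n", buf);
        return 0;
    }

    close(fds[0]);                                       // parent: sender
    const char msg[] = "hello from parent";
    write(fds[1], msg, sizeof(msg));
    close(fds[1]);
    waitpid(pid, nullptr, 0);
    return 0;
}
```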
Of course, another consideration should be "are we ever going to use two separate machines [or two virtual machines on the same system] to solve this problem?" In that case, a network solution is a better choice - even if it's not THE fastest. I've run a local TCP stack on my machines at work for benchmark purposes and got some 20-30 Gbit/s (2-3 GB/s) with sustained traffic. A raw memcpy within the same process gets around 50-100 Gbit/s (5-10 GB/s) (unless the block size is REALLY tiny and fits in the L1 cache). I haven't measured a standard pipe, but I expect that's somewhere roughly in the middle of those two numbers. [These numbers are about right for a number of different medium-sized, fairly modern PCs - obviously, on an ARM, MIPS, or other embedded-style controller, expect a lower number for all of these methods.]
I would suggest looking at this also: How to use shared memory with Linux in C.
Basically, I'd drop network protocols such as TCP and UDP when doing IPC on a single machine. These add packetization overhead and tie up even more resources (e.g. ports, the loopback interface).
The NetOS Systems Research Group at Cambridge University, UK, has done some (open-source) IPC benchmarks.
Source code is located at https://github.com/avsm/ipc-bench .
Project page: http://www.cl.cam.ac.uk/research/srg/netos/projects/ipc-bench/ .
Results: http://www.cl.cam.ac.uk/research/srg/netos/projects/ipc-bench/results.html
This research has been published using the results above: http://anil.recoil.org/papers/drafts/2012-usenix-ipc-draft1.pdf
Check CMA and kdbus:
https://lwn.net/Articles/466304/
I think the fastest stuff these days is based on AIO.
http://www.kegel.com/c10k.html
As you tagged this question with C++, I'd recommend Boost.Interprocess:
Shared memory is the fastest interprocess communication mechanism. The operating system maps a memory segment in the address space of several processes, so that several processes can read and write in that memory segment without calling operating system functions. However, we need some kind of synchronization between processes that read and write shared memory.
Source
One caveat I've found is the portability limitation of the synchronization primitives. Neither OS X nor Windows has a native implementation of interprocess condition variables, for example, so Boost emulates them with spin locks.
Now, if you use a *nix that supports POSIX process-shared primitives, there will be no problems.
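To show what those POSIX process-shared primitives look like, here is a plain POSIX sketch (not Boost's internal implementation; the segment name and layout are invented for the example, compile with -pthread and link -lrt on older glibc):

```cpp
// Sketch: a mutex and condition variable placed in a shm_open segment and marked
// PTHREAD_PROCESS_SHARED so that other processes mapping the segment can use them.
#include <pthread.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

struct Shared {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             ready;
};

int main() {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return 1;
    ftruncate(fd, sizeof(Shared));

    void* p = mmap(nullptr, sizeof(Shared), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;
    auto* s = static_cast<Shared*>(p);

    pthread_mutexattr_t ma;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->mutex, &ma);

    pthread_condattr_t ca;
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&s->cond, &ca);

    s->ready = 0;
    // Other processes mmap "/demo_shm" and wait on / signal s->cond as usual.

    munmap(p, sizeof(Shared));
    close(fd);
    return 0;
}
```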
Shared memory with synchronization is a good approach when considerable data is involved.
Well, you could simply have a shared memory segment between your processes, using Linux shared memory, aka SHM.
It's quite easy to use, look at the link for some examples.
POSIX message queues are pretty fast, but they have some limitations.
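A minimal sender-side sketch (Linux; the queue name and sizes are illustrative). The limitations referred to above include system-enforced caps on queue depth and message size, visible under /proc/sys/fs/mqueue on Linux:

```cpp
// Sketch of a POSIX message queue sender; a receiver would mq_open the same
// name with O_RDONLY and call mq_receive(). Link with -lrt on older glibc.
#include <mqueue.h>
#include <fcntl.h>

int main() {
    mq_attr attr{};
    attr.mq_maxmsg  = 10;    // bounded queue depth (subject to system limits)
    attr.mq_msgsize = 256;   // maximum bytes per message

    mqd_t q = mq_open("/demo_queue", O_CREAT | O_WRONLY, 0600, &attr);
    if (q == (mqd_t)-1) return 1;

    const char msg[] = "hello";
    mq_send(q, msg, sizeof(msg), 0);   // 0 = priority
    mq_close(q);
    // mq_unlink("/demo_queue");       // remove the queue when it is no longer needed
    return 0;
}
```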
I completely lack understanding of this. Maybe this is too broad for Stack Overflow, but here it goes:
Suppose I have two programs (written in C/C++) running simultaneously, say A and B, with different PIDs.
What are the options to make them interact with each other? For instance, how do I pass information from one to the other, or have one wait for a signal from the other and respond accordingly?
I know MPI, but MPI normally works for programs that are compiled from the same source (so it is geared more towards parallel computing than towards interaction between completely different programs built to talk to each other).
Thanks
You should look up "IPC" (inter-process communication). There are several types:
pipes
signals
shared memory
message queues
semaphores
files (per the suggestion of @JonathanLeffler :-)
RPC (suggested by @sftrabbit), which is usually geared more towards client/server
CORBA
D-Bus
You use one of the many interprocess communication mechanisms, like pipes (one application writes bytes into a pipe, the other reads from it - think stdin/stdout) or shared memory (a region of memory is mapped into both programs' virtual address spaces and they can communicate through it).
The same source doesn't matter - once your programs are compiled the system doesn't know or care where they came from.
There are different ways to communicate between them depending on how much data, how fast, one-way or bidirectional, predictable rate, etc.
The simplest is possibly just to use the network - note that if you are on the same machine, the network stack will automatically take a faster path for loopback traffic (the data never touches a physical network).
I need to move large amounts of data (~10^6 floats) between multiple C++ threads and a Fortran thread. At the moment we use Windows shared memory to move very small pieces of data, mainly for communication, and then save a file in a proprietary format to move the bulk of the data. I've been asked to look at moving the bulk of the data via shared memory too, but looking at the shared memory techniques in Windows (seemingly a character buffer), this looks like a mess. Another possibility is Boost's interprocess communication, but I'm not sure how to use that from Fortran, or whether it's a good idea. Another idea was to use a database like SQLite.
I'm just wondering if anyone had any experience or would like to comment, as this is a little over my head at the moment.
Thanks very much
Jim
Use pipes. If you can inherit handles between processes, you can use anonymous pipes; if not, you have to use named pipes. Also, threads share the same address space, so you're probably thinking of processes when you say threads.
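A hedged sketch of the anonymous-pipe option on Windows; the child-process launch (CreateProcess with bInheritHandles = TRUE) is only hinted at in the comments:

```cpp
#include <windows.h>

int main() {
    SECURITY_ATTRIBUTES sa{ sizeof(sa), nullptr, TRUE };   // TRUE => handles are inheritable
    HANDLE readEnd = nullptr, writeEnd = nullptr;
    if (!CreatePipe(&readEnd, &writeEnd, &sa, 0)) return 1;

    // Keep the write end private to the parent; only the read end is inherited.
    SetHandleInformation(writeEnd, HANDLE_FLAG_INHERIT, 0);

    // ... CreateProcess(..., bInheritHandles = TRUE, ...) and pass 'readEnd'
    //     to the child (e.g. on its command line or as its redirected stdin) ...

    double payload[256] = {};          // e.g. a small block of doubles
    DWORD written = 0;
    WriteFile(writeEnd, payload, sizeof(payload), &written, nullptr);

    // The child calls ReadFile on the inherited handle to receive the block.
    CloseHandle(writeEnd);
    CloseHandle(readEnd);
    return 0;
}
```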
My unix/windows C++ app is already parallelized using MPI: the job is split across N CPUs, each chunk is executed in parallel, it is quite efficient with very good speed scaling, and the job is done right.
But some of the data is repeated in each process, and for technical reasons this data cannot easily be split over MPI (...).
For example:
5 Gb of static data, exact same thing loaded for each process
4 Gb of data that can be distributed in MPI; the more CPUs are used, the smaller this per-CPU RAM is.
On a 4-CPU job, this would mean at least a 20 Gb RAM load, with most of the memory 'wasted'; this is awful.
I'm thinking of using shared memory to reduce the overall load; the "static" chunk would be loaded only once per computer.
So, main question is:
Is there any standard MPI way to share memory on a node? Some kind of readily available and free library?
If not, I would use boost.interprocess and use MPI calls to distribute local shared memory identifiers.
The shared memory would be read by a "local master" on each node, and shared read-only. No need for any kind of semaphore/synchronization, because it won't change.
Any performance hit or particular issues to be wary of?
(There won't be any "strings" or overly weird data structures; everything can be brought down to arrays and structure pointers.)
The job will be executed in a PBS (or SGE) queuing system; in the case of an unclean process exit, I wonder if those will clean up the node-specific shared memory.
One increasingly common approach in High Performance Computing (HPC) is hybrid MPI/OpenMP programs. I.e. you have N MPI processes, and each MPI process has M threads. This approach maps well to clusters consisting of shared memory multiprocessor nodes.
Changing to such a hierarchical parallelization scheme obviously requires some more or less invasive changes, OTOH if done properly it can increase the performance and scalability of the code in addition to reducing memory consumption for replicated data.
Depending on the MPI implementation, you may or may not be able to make MPI calls from all threads. This is specified by the required and provided arguments to the MPI_Init_Thread() function that you must call instead of MPI_Init(). Possible values are
MPI_THREAD_SINGLE: only one thread will execute.
MPI_THREAD_FUNNELED: the process may be multi-threaded, but only the main thread will make MPI calls (all MPI calls are "funneled" to the main thread).
MPI_THREAD_SERIALIZED: the process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time; MPI calls are not made concurrently from two distinct threads (all MPI calls are "serialized").
MPI_THREAD_MULTIPLE: multiple threads may call MPI, with no restrictions.
In my experience, modern MPI implementations like Open MPI support the most flexible MPI_THREAD_MULTIPLE. If you use older MPI libraries, or some specialized architecture, you might be worse off.
Of course, you don't need to do your threading with OpenMP, that's just the most popular option in HPC. You could use e.g. the Boost threads library, the Intel TBB library, or straight pthreads or windows threads for that matter.
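A minimal sketch of requesting one of the levels listed above (the choice of MPI_THREAD_FUNNELED is only an example):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    int provided = 0;
    // Ask for FUNNELED: worker threads do the computation, and only the
    // main thread makes MPI calls.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    if (provided < MPI_THREAD_FUNNELED)
        std::printf("MPI library does not provide the requested threading level\n");

    // ... spawn OpenMP/pthread workers, funnel all MPI calls through the main thread ...

    MPI_Finalize();
    return 0;
}
```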
I haven't worked with MPI, but if it's like other IPC libraries I've seen that hide whether other threads/processes/whatever are on the same or different machines, then it won't be able to guarantee shared memory. Yes, it could handle shared memory between two nodes on the same machine, if that machine provided shared memory itself. But trying to share memory between nodes on different machines would be very difficult at best, due to the complex coherency issues raised. I'd expect it to simply be unimplemented.
In all practicality, if you need to share memory between nodes, your best bet is to do that outside MPI. I don't think you need to use boost.interprocess-style shared memory, since you aren't describing a situation where the different nodes are making fine-grained changes to the shared memory; it's either read-only or partitioned.
John's and deus's answers cover how to map in a file, which is definitely what you want to do for the 5 Gb (gigabit?) static data. The per-CPU data sounds like the same thing; you just need to send a message to each node telling it what part of the file it should grab. The OS should take care of mapping virtual memory to physical memory and to the files.
As for cleanup... I would assume it doesn't do any cleanup of shared memory, but mmapped files should be cleaned up, since files are closed (which releases their memory mappings) when a process is torn down. I have no idea what caveats CreateFileMapping etc. have.
Actual "shared memory" (i.e. boost.interprocess) is not cleaned up when a process dies. If possible, I'd recommend killing a process and seeing what is left behind.
With MPI-2 you have RMA (remote memory access) via functions such as MPI_Put and MPI_Get. Using these features, if your MPI installation supports them, would certainly help you reduce the total memory consumption of your program. The cost is added complexity in coding but that's part of the fun of parallel programming. Then again, it does keep you in the domain of MPI.
MPI-3 offers shared memory windows (see e.g. MPI_Win_allocate_shared()), which allows usage of on-node shared memory without any additional dependencies.
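A rough sketch of that MPI-3 route, which matches the question's "load the static chunk once per node" goal (requires an MPI-3 implementation; sizes are illustrative):

```cpp
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Split off a communicator containing only the ranks on this node.
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &nodecomm);

    int noderank;
    MPI_Comm_rank(nodecomm, &noderank);

    // Rank 0 on each node allocates the "static" block; the other ranks allocate
    // 0 bytes and then query rank 0's base pointer.
    MPI_Aint size = (noderank == 0) ? 1024 * 1024 * sizeof(double) : 0;
    double* base = nullptr;
    MPI_Win win;
    MPI_Win_allocate_shared(size, sizeof(double), MPI_INFO_NULL, nodecomm, &base, &win);

    if (noderank != 0) {
        MPI_Aint qsize; int disp;
        MPI_Win_shared_query(win, 0, &qsize, &disp, &base);  // base now points at rank 0's data
    }

    // ... every rank on the node can now read the same 'base' array directly ...

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}
```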
I don't know much about unix, and I don't know what MPI is. But in Windows, what you are describing is an exact match for a file mapping object.
If this data is embedded in your .EXE or a .DLL that it loads, then it will automatically be shared between all processes. Teardown of your process, even as a result of a crash, will not cause any leaks or unreleased locks of your data. However, a 9 Gb .dll sounds a bit iffy, so this probably doesn't work for you.
However, you could put your data into a file, then CreateFileMapping and MapViewOfFile on it. The mapping can be read-only, and you can map all or part of the file into memory. All processes will share pages that are mapped from the same underlying CreateFileMapping object. It's good practice to unmap views and close handles, but if you don't, the OS will do it for you on teardown.
Note that unless you are running x64, you won't be able to map a 5Gb file into a single view (or even a 2Gb file, 1Gb might work). But given that you are talking about having this already working, I'm guessing that you are already x64 only.
If you store your static data in a file, you can use mmap on unix to get random access to the data. Data will be paged in as and when you need access to a particular bit of the data. All that you will need to do is overlay any binary structures over the file data. This is the unix equivalent of CreateFileMapping and MapViewOfFile mentioned above.
Incidentally, glibc's malloc uses mmap internally for large allocations (those above its mmap threshold, 128 KiB by default).
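A minimal read-only mmap sketch of the above (the file name is just a placeholder):

```cpp
// Sketch: map a read-only data file so every process on the node shares the same
// physical pages (the unix analogue of CreateFileMapping/MapViewOfFile).
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main() {
    int fd = open("static_data.bin", O_RDONLY);   // file name is illustrative
    if (fd < 0) return 1;

    struct stat st;
    fstat(fd, &st);

    // MAP_SHARED read-only pages are backed by the page cache, so N processes
    // mapping the same file pay for the data only once.
    void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;
    const float* data = static_cast<const float*>(p);
    (void)data;  // ... overlay whatever binary structures you need on 'data' ...

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```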
I had some projects with MPI in SHUT.
As far as I know, there are many ways to distribute a problem using MPI; maybe you can find another solution that does not require shared memory.
My project solved a system of 7,000,000 equations in 7,000,000 variables.
If you can explain your problem, I will try to help you.
I ran into this problem in the small when I used MPI a few years ago.
I am not certain that SGE understands memory-mapped files. If you are distributing against a Beowulf cluster, I suspect you're going to have coherency issues. Could you discuss a little about your multiprocessor architecture?
My draft approach would be to set up an architecture where each part of the data is owned by a defined CPU. There would be two threads: one thread being an MPI two-way talker and one thread for computing the result. Note that MPI and threads don't always play well together.
I have two processes; one will query the other for data. There will be a huge number of queries in a limited time (10,000 per second), and a large amount of data (>100 MB) will be transferred per second. The data will be of basic numeric types (double, int).
My question is: in which way should I connect these processes?
Shared memory, message queues, LPC (Local Procedure Call), or something else?
I also want to ask which library you suggest; by the way, please do not suggest MPI.
Edit: under Windows XP, 32-bit.
One word: Boost.Interprocess. If it really needs to be fast, shared memory is the way to go. You have nearly zero overhead, as the operating system does the usual mapping between virtual and physical addresses and no copy is required for the data. You just have to look out for concurrency issues.
For actually sending commands like shutdown and query, I would use message queues. Before I knew about Boost, I used localhost network programming and manual shared memory allocation to do that. Damn, if I needed to rewrite the app, I would immediately pick Boost. Boost.Interprocess makes this much easier for you. Check it out.
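A hedged sketch of the message-queue half with Boost.Interprocess; the queue name, message type, and sizes are illustrative:

```cpp
#include <boost/interprocess/ipc/message_queue.hpp>
#include <cstddef>

namespace bip = boost::interprocess;

int main() {
    // Server side: create a queue that holds up to 1000 fixed-size query messages.
    bip::message_queue::remove("query_queue");
    bip::message_queue mq(bip::create_only, "query_queue", 1000, sizeof(int));

    // Client side (another process) opens it and sends a query id:
    //   bip::message_queue mq(bip::open_only, "query_queue");
    int query_id = 42;
    mq.send(&query_id, sizeof(query_id), 0 /* priority */);

    // Server side receives and dispatches the query.
    int received = 0;
    std::size_t recvd_size = 0;
    unsigned int priority = 0;
    mq.receive(&received, sizeof(received), recvd_size, priority);

    bip::message_queue::remove("query_queue");
    return 0;
}
```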
I would use shared memory to store the data, and message queues to send the queries.
I'll second Marc's suggestion -- I'd not bother with Boost unless you have a portability concern or want to do cool stuff like mapping standard container types into shared memory (in which case I'd definitely use Boost).
Otherwise, message queues and shared memory are pretty simple to deal with.
If your data consists of multiple types and/or you need things like mutex, use Boost.
Otherwise, use a shared section of memory via #pragma data_seg, or a memory-mapped file.
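For completeness, the #pragma data_seg route looks roughly like this (MSVC-specific; the segment and variable names are made up). Every variable in the section must be explicitly initialized, or the compiler places it in .bss instead:

```cpp
#pragma data_seg(".shared")
volatile long g_query_count = 0;   // one instance shared by every process loading this module
#pragma data_seg()
#pragma comment(linker, "/SECTION:.shared,RWS")   // mark the section read/write/shared
```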
If you do use shared memory, you will have to decide whether or not to spin. I'd expect that if you use a semaphore for synchronization and store the data in shared memory, you will not get much performance benefit compared to using message queues (at a significant cost in clarity), but if you spin on an atomic variable for synchronization, then you have to suffer the consequences of that.