I have two processes, one of which will query the other for data. There will be a huge number of queries in a limited time (10,000 per second), and a large amount of data (>100 MB) will be transferred per second. The data will consist of simple numeric types (double, int).
My question is: which way should I connect these processes?
Shared memory, message queues, LPC (Local Procedure Call), or something else?
I would also like to ask which library you suggest. By the way, please do not suggest MPI.
Edit: this is under Windows XP, 32-bit.
One word: Boost.Interprocess. If it really needs to be fast, shared memory is the way to go. You have nearly zero overhead, since the operating system does the usual mapping between virtual and physical addresses and no copy is required for the data. You just have to look out for concurrency issues.
For actually sending commands like shutdown and query, I would use message queues. I previously used localhost network programming to do that, and manual shared memory allocation, before I knew about Boost. If I had to rewrite the app, I would immediately pick Boost. Boost.Interprocess makes this much easier for you. Check it out.
I would use shared memory to store the data, and message queues to send the queries.
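To give a feel for the API, a sketch of the query channel using a Boost.Interprocess message queue might look like this (the queue name "query_queue" and the sizes are made up, and sender and receiver are collapsed into one program for brevity):

    #include <boost/interprocess/ipc/message_queue.hpp>
    #include <cstdio>

    using namespace boost::interprocess;

    int main() {
        message_queue::remove("query_queue");           // hypothetical name
        message_queue mq(create_only, "query_queue",
                         1000,                          // max queued messages
                         sizeof(double));               // max message size

        double query = 3.14;                            // sender side
        mq.send(&query, sizeof query, 0 /*priority*/);

        double out;                                     // receiver side would
        message_queue::size_type received;              // normally open_only
        unsigned int priority;                          // in another process
        mq.receive(&out, sizeof out, received, priority);
        std::printf("got %f\n", out);

        message_queue::remove("query_queue");
    }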
I'll second Marc's suggestion -- I'd not bother with Boost unless you have a portability concern or want to do cool stuff like map standard container types over shared memory (in which case I'd definitely use Boost).
Otherwise, message queues and shared memory are pretty simple to deal with.
If your data consists of multiple types and/or you need things like mutex, use Boost.
Else use a shared section of memory using #pragma data_seg or a memory mapped file.
If you do use shared memory, you will have to decide whether or not to spin. I'd expect that if you use a semaphore for synchronization and store the data in shared memory, you will not get much performance benefit compared to using message queues (with a significant loss of clarity), but if you spin on an atomic variable for synchronization, then you have to suffer the consequences of that.
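To illustrate the spinning option, a rough sketch follows. It assumes the Slot object lives in an already-mapped shared segment, that std::atomic<unsigned> is lock-free on the platform (otherwise it cannot be used across processes), and that the writer does not reuse the slot before the reader is done:

    #include <atomic>

    struct Slot {
        std::atomic<unsigned> seq{0};  // bumped by the writer when data is ready
        double payload;
    };

    // Writer: store the data, then publish it.
    void publish(Slot* s, double v) {
        s->payload = v;
        s->seq.fetch_add(1, std::memory_order_release);
    }

    // Reader: spin until the sequence number moves on. This is the
    // "consequences" part: the waiting core stays 100% busy.
    double consume(Slot* s, unsigned last_seen) {
        while (s->seq.load(std::memory_order_acquire) == last_seen) {
            // busy-wait
        }
        return s->payload;
    }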
One rule every programmer quickly learns about multithreading is:
If more than one thread has access to a data structure, and at least one of the threads might modify it, then you'd better serialize all accesses to that data structure, or you're in for a world of debugging pain.
Typically this serialization is done via a mutex -- i.e. a thread that wants to read or write the data structure locks the mutex, does whatever it needs to do, and then unlocks the mutex to make it available again to other threads.
Which brings me to the point: the memory-heap of a process is a data structure which is accessible by multiple threads. Does this mean that every call to default/non-overloaded new and delete is serialized by a process-global mutex, and is therefore a potential serialization-bottleneck that can slow down multithreaded programs? Or do modern heap implementations avoid or mitigate that problem somehow, and if so, how do they do it?
(Note: I'm tagging this question linux, to avoid the correct-but-uninformative "it's implementation-dependent" response, but I'd also be interested in hearing about how Windows and MacOS/X do it as well, if there are significant differences across implementations)
new and delete are thread safe
The following functions are required to be thread-safe:
The library versions of operator new and operator delete
User replacement versions of global operator new and operator delete
std::calloc, std::malloc, std::realloc, std::aligned_alloc, std::free
Calls to these functions that allocate or deallocate a particular unit of storage occur in a single total order, and each such deallocation call happens-before the next allocation (if any) in this order.
With gcc, new is implemented by delegating to malloc, and we see that its malloc does indeed use a lock. If you are worried about allocation causing bottlenecks, write your own allocator.
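The usual shape of such an allocator is per-thread caching. Here is a hypothetical sketch of the idea for one fixed block size (not a drop-in operator new replacement; blocks freed by a different thread simply land in that thread's cache):

    #include <cstdlib>

    struct Node {
        Node* next;
        char payload[56];
    };

    thread_local Node* free_list = nullptr;

    Node* alloc_node() {
        if (Node* n = free_list) {   // fast path: thread-local, no lock taken
            free_list = n->next;
            return n;
        }
        // Slow path: fall back to the (locked) global heap.
        return static_cast<Node*>(std::malloc(sizeof(Node)));
    }

    void free_node(Node* n) {
        n->next = free_list;         // cache the block in this thread's list
        free_list = n;
    }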
The answer is yes, but in practice it is usually not a problem.
If it is a problem for you, you may try replacing your implementation of malloc with tcmalloc, which reduces, but does not eliminate, possible contention (since there is still only one heap that needs to be shared among threads and processes).
TCMalloc assigns each thread a thread-local cache. Small allocations are satisfied from the thread-local cache. Objects are moved from central data structures into a thread-local cache as needed, and periodic garbage collections are used to migrate memory back from a thread-local cache into the central data structures.
There are also other options like using custom allocators and/or specialized containers and/or redesigning your application.
Since you tried to avoid the answer "it is architecture/system dependent": the problem of multiple threads having to serialize accesses only happens with heaps that grow or shrink when the program needs to expand them or return part of them to the system.
The first answer has to be: it's simply implementation-dependent, without any system dependencies, because libraries normally acquire large chunks of memory for the heap and administer them internally, which actually makes the problem operating-system and architecture independent.
The second answer is that, of course, if you have one single heap for all the threads, you'll have a possible bottleneck when all of the active threads compete for a single chunk of memory. There are several approaches to this: you can have a pool of heaps to allow parallelism, making different threads use different pools for their requests. The largest potential problem is in requesting memory, as that is where the bottleneck occurs. On returning memory there's no such issue, since you can act more like a garbage collector: queue the returned chunks of memory and have a thread dispatch them and put those chunks in the proper places to preserve the heaps' integrity. Having multiple heaps even allows classifying them by priority, by chunk size, etc., so the risk of collision is lowered by the class of problem you are going to deal with. This is the case for operating system kernels like *BSD, which use several memory heaps, somewhat dedicated to the kind of use they are going to receive (there's one for the io-disk buffers, one for virtual-memory mapped segments, one for process virtual-memory space management, etc.).
I recommend reading The Design and Implementation of the FreeBSD Operating System, which explains very well the approach used in the kernel of BSD systems. It is general enough, and probably a great percentage of other systems follow this or a very similar approach.
What is the fastest technology to send messages between C++ application processes, on Linux? I am vaguely aware that the following techniques are on the table:
TCP
UDP
Sockets
Pipes
Named pipes
Memory-mapped files
Are there any more ways, and which is the fastest?
Whilst all the above answers are very good, I think we'd have to discuss what "fastest" means [and does it have to be "fastest", or just "fast enough"?].
For LARGE messages, there is no doubt that shared memory is a very good technique, and very useful in many ways.
However, if the messages are small, there are drawbacks: you have to come up with your own message-passing protocol and a method of informing the other process that there is a message.
Pipes and named pipes are much easier to use in this case - they behave pretty much like a file, you just write data at the sending side, and read the data at the receiving side. If the sender writes something, the receiver side automatically wakes up. If the pipe is full, the sending side gets blocked. If there is no more data from the sender, the receiving side is automatically blocked. Which means that this can be implemented in fairly few lines of code with a pretty good guarantee that it will work at all times, every time.
Shared memory on the other hand relies on some other mechanism to inform the other thread that "you have a packet of data to process". Yes, it's very fast if you have LARGE packets of data to copy - but I would be surprised if there is a huge difference to a pipe, really. Main benefit would be that the other side doesn't have to copy the data out of the shared memory - but it also relies on there being enough memory to hold all "in flight" messages, or the sender having the ability to hold back things.
I'm not saying "don't use shared memory", I'm just saying that there is no such thing as "one solution that solves all problems 'best'".
To clarify: I would start by implementing a simple method using a pipe or named pipe [depending on which suits the purposes] and measure the performance of that. If a significant amount of time is spent actually copying the data, then I would consider other methods.
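For instance, the pipe version of that first attempt can be as small as this (error handling trimmed):

    #include <unistd.h>
    #include <sys/wait.h>
    #include <cstdio>

    int main() {
        int fds[2];
        if (pipe(fds) != 0) return 1;

        if (fork() == 0) {                        // child: reader
            close(fds[1]);
            double value;
            while (read(fds[0], &value, sizeof value) == (ssize_t)sizeof value)
                std::printf("got %f\n", value);
            close(fds[0]);
            return 0;
        }

        close(fds[0]);                            // parent: writer
        for (int i = 0; i < 10; ++i) {
            double v = i * 1.5;
            write(fds[1], &v, sizeof v);          // blocks if the pipe fills
        }
        close(fds[1]);                            // EOF wakes the reader
        wait(nullptr);
    }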
Of course, another consideration should be: are we ever going to use two separate machines [or two virtual machines on the same system] to solve this problem? In that case, a network solution is a better choice, even if it's not THE fastest. I've run a local TCP stack on my machines at work for benchmarking purposes and got some 20-30 Gbit/s (2-3 GB/s) with sustained traffic. A raw memcpy within the same process gets around 50-100 Gbit/s (5-10 GB/s), unless the block size is REALLY tiny and fits in the L1 cache. I haven't measured a standard pipe, but I expect that's somewhere roughly in the middle of those two numbers. [These numbers are about right for a number of medium-sized, fairly modern PCs; obviously, on an ARM, MIPS or other embedded-style controller, expect lower numbers for all of these methods.]
I would suggest looking at this also: How to use shared memory with Linux in C.
Basically, I'd drop network protocols such as TCP and UDP when doing IPC on a single machine. These have packetization overhead and are bound to even more resources (e.g. ports, the loopback interface).
The NetOS Systems Research Group from Cambridge University, UK has done some (open-source) IPC benchmarks.
Source code is located at https://github.com/avsm/ipc-bench .
Project page: http://www.cl.cam.ac.uk/research/srg/netos/projects/ipc-bench/ .
Results: http://www.cl.cam.ac.uk/research/srg/netos/projects/ipc-bench/results.html
This research has been published using the results above: http://anil.recoil.org/papers/drafts/2012-usenix-ipc-draft1.pdf
Check CMA and kdbus:
https://lwn.net/Articles/466304/
I think the fastest approaches these days are based on AIO.
http://www.kegel.com/c10k.html
As you tagged this question with C++, I'd recommend Boost.Interprocess:
Shared memory is the fastest interprocess communication mechanism. The operating system maps a memory segment in the address space of several processes, so that several processes can read and write in that memory segment without calling operating system functions. However, we need some kind of synchronization between processes that read and write shared memory.
Source
One caveat I've found is the portability limitation of the synchronization primitives: neither OS X nor Windows has a native implementation of interprocess condition variables, for example, so Boost emulates them with spin locks.
If you use a *nix that supports POSIX process-shared primitives, there will be no problems.
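For reference, creating and mapping a segment with Boost.Interprocess takes only a few lines; the segment name "ipc_demo" below is made up:

    #include <boost/interprocess/shared_memory_object.hpp>
    #include <boost/interprocess/mapped_region.hpp>
    #include <cstring>

    using namespace boost::interprocess;

    int main() {
        shared_memory_object::remove("ipc_demo");
        shared_memory_object shm(create_only, "ipc_demo", read_write);
        shm.truncate(4096);                       // size the segment
        mapped_region region(shm, read_write);    // map it into this process

        std::memcpy(region.get_address(), "hello", 6);

        // The other process would open_only the same name, map a
        // read_only region, and read from region.get_address().

        shared_memory_object::remove("ipc_demo"); // cleanup when done
    }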
Shared memory with synchronization is a good approach when considerable data is involved.
Well, you could simply have a shared memory segment between your processes, using Linux shared memory, aka SHM.
It's quite easy to use, look at the link for some examples.
POSIX message queues are pretty fast, but they have some limitations.
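A minimal POSIX queue round-trip looks roughly like this (the queue name and sizes are arbitrary; link with -lrt on Linux):

    #include <mqueue.h>
    #include <fcntl.h>

    int main() {
        struct mq_attr attr{};
        attr.mq_maxmsg  = 10;   // queue depth, bounded by system limits
        attr.mq_msgsize = 64;   // max message size, also bounded

        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);

        const char msg[] = "hello";
        mq_send(q, msg, sizeof msg, 0 /*priority*/);

        char buf[64];           // must be at least mq_msgsize bytes
        unsigned prio;
        mq_receive(q, buf, sizeof buf, &prio);

        mq_close(q);
        mq_unlink("/demo_queue");
    }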
My goal is to send/share data between multiple programs. These are the options I thought of:
I could use a file, but I'd prefer to use RAM because it's generally faster.
I could use a socket, but that would require a lot of address information, which is unnecessary for local communication, and ports too.
I could ask others about an efficient way to do this.
I chose the last one.
So, what would be an efficient way to send data from one program to another? It might use a buffer, for example: write bytes to it and wait for the receiver to mark the first byte as 'read' (basically anything other than the byte written), then write again. But where would I put the buffer, and how would I make it accessible to both programs? Or perhaps something else might work too?
I use Linux.
What about FIFOs and pipes? If you are in a Linux environment, this is the way to let two programs share data.
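The writer side of a FIFO is a handful of lines (the path is arbitrary; the reader opens the same path with O_RDONLY and reads until EOF):

    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        mkfifo("/tmp/demo_fifo", 0600);             // create the named pipe
        int fd = open("/tmp/demo_fifo", O_WRONLY);  // blocks until a reader opens
        const char msg[] = "hello";
        write(fd, msg, sizeof msg);
        close(fd);
        unlink("/tmp/demo_fifo");                   // remove when done
    }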
The fastest IPC for processes running on the same host is shared memory.
In short, several processes can access the same memory segment.
See this tutorial.
You may want to take a look at Boost.Interprocess
Boost.Interprocess simplifies the use of common interprocess communication and synchronization mechanisms and offers a wide range of them:
Shared memory.
Memory-mapped files.
Semaphores, mutexes, condition variables and upgradable mutex types to place them in shared memory and memory-mapped files.
Named versions of those synchronization objects, similar to UNIX/Windows sem_open/CreateSemaphore API.
File locking.
Relative pointers.
Message queues.
To answer your questions:
Using a file is probably not the best way; files are usually not used for passing information between processes. Remember that the OS has to open, read, write, and close them. They are, however, used for locking (http://en.wikipedia.org/wiki/File_locking).
The highest performance comes from using a pipe stream (http://linux.die.net/man/3/popen), but on Linux it's hard to get right: you have to redirect stdin, stdout, and stderr, and this has to be done for each process. It works well for two applications, but go beyond that and it gets very hairy.
My favorite solution: use socketpairs (http://pubs.opengroup.org/onlinepubs/009604499/functions/socketpair.html). They are very robust and easy to set up. But if you use multiple applications, you have to prepare some sort of pool through which to access the applications.
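A bare-bones socketpair setup might look like this sketch, with parent and child each keeping one end:

    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return 1;

        if (fork() == 0) {         // child keeps sv[1]
            close(sv[0]);
            char buf[16];
            ssize_t n = read(sv[1], buf, sizeof buf);
            std::printf("child read %zd bytes\n", n);
            close(sv[1]);
            return 0;
        }

        close(sv[1]);              // parent keeps sv[0]
        write(sv[0], "ping", 4);
        close(sv[0]);
    }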
On Linux, when using files, they are very often in the cache, so you won't read the disk that often, and you could use a "RAM" filesystem like tmpfs (actually, tmpfs uses virtual memory, so RAM + swap; in practice the files are kept in RAM most of the time).
The main issue remains synchronization.
Using sockets (which may be AF_UNIX sockets if all processes are on the same machine; these are faster than TCP/IP ones) has the advantage of making your code easily portable to environments where you prefer to run several processes on several machines.
And you could also use an existing framework for parallel execution, e.g. MPI, CORBA, etc.
You should have a gross idea of the bandwidth and latency expected from your application.
(It is not the same if you need to share dozens of megabytes every millisecond as hundreds of kilobytes every tenth of a second.)
I would suggest learning more about serialization techniques, formats and libraries like XDR, ASN1, JSON, YAML, s11n, jsoncpp etc.
Also, sending and sharing data are not the same thing. When you send (and receive) data, you think in terms of message passing; when you share data, you think in terms of shared memory. The programming styles are very different.
Shared memory is best for sharing data between processes, but it needs lots of synchronization, and if more than two processes share the data, synchronization is like a Cyclops (a single eye for a single shared memory).
But if you use sockets (multicast sockets), the implementation will be a little more difficult, but scalability and maintainability are much better. You don't need to care how many apps will be waiting for the data; you can just multicast, and they will listen and process it. There is no need to wait on a semaphore (the shared-memory synchronization technique) to read the data.
So the time taken to read the data can be reduced.
Shared memory - Wait for the semaphore, read the data and process the data.
Sockets - Receive the data, process the data.
Performance, scalability, and maintainability are the added advantages of sockets.
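For what it's worth, the sending side of such a multicast setup is tiny. This sketch uses a made-up group address and port; each receiver would join the group with setsockopt(IP_ADD_MEMBERSHIP) and simply recv():

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        sockaddr_in group{};
        group.sin_family = AF_INET;
        group.sin_port = htons(5000);                         // arbitrary port
        inet_pton(AF_INET, "239.255.0.1", &group.sin_addr);   // multicast group

        const char update[] = "new data";
        sendto(s, update, sizeof update, 0,
               reinterpret_cast<sockaddr*>(&group), sizeof group);
        close(s);
    }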
I've got one program which creates 3 worker programs. The preferable method of communication in my situation would be through a memory buffer which all four programs may access.
Is there a way to pass a pointer, reference or any kind of handler to the child processes?
Update
The three child programs transform vertex data, while the main program primarily deals with UI, system messages, errors, etc.
I'm hoping there is some way to leverage OpenCL such that the four programs can share a context. If this is not possible, it would be nice to have access to the array of vertices across all programs.
I suppose our target platform is Windows right now but we'd like to keep it as cross-platform as possible. If there is no way to implement this utilizing OpenCL we'll probably fall back to wrapping this piece of code for a handful of different platforms.
Your question is platform dependent, therefore:
for Windows: Named Shared Memory
for Linux: mmap or POSIX shared memory access
general case: boost::interprocess
If you explain a bit what kind of data is shared and other constraints/goal of the system it would be easier to answer your question.
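For example, the POSIX route boils down to shm_open plus mmap. The segment name below is made up, and older glibc needs -lrt:

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        const int kSize = 1 << 20;                  // 1 MiB, arbitrary
        int fd = shm_open("/vertex_buffer", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, kSize);                       // size the segment

        float* verts = static_cast<float*>(
            mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
        verts[0] = 1.0f;   // visible to every process that maps the same name

        munmap(verts, kSize);
        close(fd);
        // shm_unlink("/vertex_buffer") once no process needs it anymore
    }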
I wonder why you think a shared buffer would be good. Is that because you want to pass a pointer into the buffer to the data to be worked on? Then you need shared memory if you want to work across processes.
What about a client-server approach where you send data to clients on request?
More information about your problem helps giving a better answer.
You should use Named Shared Memory and inter-process synchronization.
This is somewhat broader than the original question on shared memory buffers, but depending on your design, volume of data, and performance requirements, you could look into in-memory databases such as Redis or distributed caches, especially if you find yourself in a 'publish-subscribe' situation.
My Unix/Windows C++ app is already parallelized using MPI: the job is split over N CPUs and each chunk is executed in parallel, quite efficiently, with very good speed scaling; the job is done right.
But some of the data is repeated in each process, and for technical reasons this data cannot easily be split over MPI (...).
For example:
5 GB of static data, the exact same thing loaded for each process
4 GB of data that can be distributed over MPI; the more CPUs are used, the smaller this per-CPU RAM is.
On a 4-CPU job, this would mean at least a 20 GB RAM load, with most of the memory 'wasted'; this is awful.
I'm thinking of using shared memory to reduce the overall load; the "static" chunk would be loaded only once per computer.
So, main question is:
Is there any standard MPI way to share memory on a node? Some kind of readily available, free library?
If not, I would use boost.interprocess and use MPI calls to distribute local shared memory identifiers.
The shared memory would be read by a "local master" on each node and shared read-only. There is no need for any kind of semaphore/synchronization, because it won't change.
Any performance hit or particular issues to be wary of?
(There won't be any "strings" or overly weird data structures; everything can be brought down to arrays and structure pointers.)
The job will be executed in a PBS (or SGE) queuing system; in the case of an unclean process exit, I wonder if those will clean up the node-specific shared memory.
One increasingly common approach in High Performance Computing (HPC) is hybrid MPI/OpenMP programs. I.e. you have N MPI processes, and each MPI process has M threads. This approach maps well to clusters consisting of shared memory multiprocessor nodes.
Changing to such a hierarchical parallelization scheme obviously requires some more or less invasive changes, OTOH if done properly it can increase the performance and scalability of the code in addition to reducing memory consumption for replicated data.
Depending on the MPI implementation, you may or may not be able to make MPI calls from all threads. This is specified by the required and provided arguments to the MPI_Init_thread() function, which you must call instead of MPI_Init(). The possible values are:
MPI_THREAD_SINGLE: only one thread will execute.
MPI_THREAD_FUNNELED: the process may be multi-threaded, but only the main thread will make MPI calls (all MPI calls are "funneled" to the main thread).
MPI_THREAD_SERIALIZED: the process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time: MPI calls are not made concurrently from two distinct threads (all MPI calls are "serialized").
MPI_THREAD_MULTIPLE: multiple threads may call MPI, with no restrictions.
In my experience, modern MPI implementations like Open MPI support the most flexible MPI_THREAD_MULTIPLE. If you use older MPI libraries, or some specialized architecture, you might be worse off.
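Requesting a threading level is a one-call change; a minimal sketch:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            std::fprintf(stderr, "only thread level %d provided\n", provided);

        // ... spawn OpenMP/pthread workers here ...

        MPI_Finalize();
    }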
Of course, you don't need to do your threading with OpenMP; that's just the most popular option in HPC. You could use, e.g., the Boost threads library, the Intel TBB library, or straight pthreads or Windows threads for that matter.
I haven't worked with MPI, but if it's like other IPC libraries I've seen that hide whether other threads/processes/whatever are on the same or different machines, then it won't be able to guarantee shared memory. Yes, it could handle shared memory between two nodes on the same machine, if that machine provided shared memory itself. But trying to share memory between nodes on different machines would be very difficult at best, due to the complex coherency issues raised. I'd expect it to simply be unimplemented.
In all practicality, if you need to share memory between nodes, your best bet is to do that outside MPI. I don't think you need boost.interprocess-style shared memory, since you aren't describing a situation where the different nodes are making fine-grained changes to the shared memory; it's either read-only or partitioned.
John's and deus's answers cover how to map in a file, which is definitely what you want to do for the 5 GB of static data. The per-CPU data sounds like the same thing; you just need to send a message to each node telling it what part of the file it should grab. The OS should take care of mapping virtual memory to physical memory to the files.
As for cleanup... I would assume it doesn't do any cleanup of shared memory, but mmapped files should be cleaned up, since the files are closed (which should release their memory mappings) when a process is torn down. I have no idea what caveats CreateFileMapping etc. have.
Actual "shared memory" (i.e. boost.interprocess) is not cleaned up when a process dies. If possible, I'd recommend trying killing a process and seeing what is left behind.
With MPI-2 you have RMA (remote memory access) via functions such as MPI_Put and MPI_Get. Using these features, if your MPI installation supports them, would certainly help you reduce the total memory consumption of your program. The cost is added complexity in coding but that's part of the fun of parallel programming. Then again, it does keep you in the domain of MPI.
MPI-3 offers shared memory windows (see e.g. MPI_Win_allocate_shared()), which allows usage of on-node shared memory without any additional dependencies.
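A rough sketch of that MPI-3 approach, with one allocation per node shared by every rank on the node (the size is arbitrary):

    #include <mpi.h>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        // Group the ranks that live on the same node.
        MPI_Comm node;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node);
        int nrank;
        MPI_Comm_rank(node, &nrank);

        // Rank 0 on each node allocates; the others attach with size 0.
        MPI_Aint bytes = (nrank == 0) ? ((MPI_Aint)1 << 20) : 0;
        double* base;
        MPI_Win win;
        MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                                node, &base, &win);

        // Everyone queries rank 0's segment to get a usable local pointer.
        MPI_Aint qsize;
        int qdisp;
        MPI_Win_shared_query(win, 0, &qsize, &qdisp, &base);
        // 'base' now points at the node-shared block on every rank.

        MPI_Win_free(&win);
        MPI_Finalize();
    }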
I don't know much about Unix, and I don't know what MPI is. But in Windows, what you are describing is an exact match for a file-mapping object.
If this data is embedded in your .EXE or a .DLL that it loads, then it will automatically be shared between all processes. Teardown of your process, even as a result of a crash, will not cause any leaks or unreleased locks of your data. However, a 9 GB .DLL sounds a bit iffy, so this probably doesn't work for you.
However, you could put your data into a file, then CreateFileMapping and MapViewOfFile on it. The mapping can be read-only, and you can map all or part of the file into memory. All processes will share pages that are mapped from the same underlying CreateFileMapping object. It's good practice to unmap views and close handles, but if you don't, the OS will do it for you on teardown.
Note that unless you are running x64, you won't be able to map a 5 GB file into a single view (or even a 2 GB file; 1 GB might work). But given that you are talking about having this already working, I'm guessing that you are already x64-only.
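A read-only mapping along those lines might look like this (file name and mapping name are made up; other processes can map the same file, or OpenFileMapping by name):

    #include <windows.h>

    int main() {
        HANDLE file = CreateFileA("static_data.bin", GENERIC_READ,
                                  FILE_SHARE_READ, NULL, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, NULL);
        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY,
                                            0, 0,             // whole file
                                            "StaticDataMap"); // shared by name
        const double* data = static_cast<const double*>(
            MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0));  // map everything

        // ... read data[...] ...

        UnmapViewOfFile(data);
        CloseHandle(mapping);
        CloseHandle(file);
    }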
If you store your static data in a file, you can use mmap on Unix to get random access to the data. Data will be paged in as and when you need access to a particular bit of it. All you will need to do is overlay any binary structures over the file data. This is the Unix equivalent of the CreateFileMapping and MapViewOfFile mentioned above.
Incidentally, glibc uses mmap when one calls malloc to request more than a page of data.
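The mmap version of the same idea, as a sketch (path made up, error handling omitted):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        int fd = open("static_data.bin", O_RDONLY);
        struct stat st;
        fstat(fd, &st);

        // Every process mapping the same file shares the physical pages.
        const double* data = static_cast<const double*>(
            mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0));

        // ... overlay your structures on 'data' ...

        munmap(const_cast<double*>(data), st.st_size);
        close(fd);
    }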
I had some projects with MPI at SHUT.
As far as I know, there are many ways to distribute a problem using MPI; maybe you can find another solution that does not require shared memory.
My project was solving a system of 7,000,000 equations in 7,000,000 variables.
If you can explain your problem, I will try to help you.
I ran into this problem in the small when I used MPI a few years ago.
I am not certain that SGE understands memory-mapped files. If you are distributing against a Beowulf cluster, I suspect you're going to have coherency issues. Could you say a little about your multiprocessor architecture?
My draft approach would be to set up an architecture where each part of the data is owned by a defined CPU. There would be two threads: one thread being an MPI two-way talker and one thread for computing the result. Note that MPI and threads don't always play well together.