MPI large data processing - C++

My MPI application reads a series of images to build a 3-D dataset. The data is very large (about 4 GB), and I don't want it replicated on every worker, but I don't know how to handle this. Shared memory may be one solution, but how do I use shared memory with MPI? I have searched a lot about this and found nothing useful. Could someone give me suggestions or examples for large data processing with MPI? (BTW, I am using the Open MPI implementation.)
Thank you very much for your help.

What you are looking for are the one-sided communications that were added in MPI-2. They are available in Open MPI. For an introduction, have a look at http://www.linux-mag.com/id/1793/ .
The principle is that you create a window (an area of memory exposed to the other processes), and then you can get or put data from and to that window. MPI should optimize this to use RMA (remote memory access) where available. There are also mechanisms such as fences to ensure synchronization across processes.
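
A minimal sketch of that idea, under the assumption that rank 0 holds the big array (sizes here are illustrative, error checks omitted): rank 0 exposes the array through a window, and each worker pulls its own slice with MPI_Get between fences.

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const MPI_Aint n = 1 << 20;             // illustrative size, not 4 GB
    std::vector<float> data;
    MPI_Win win;

    if (rank == 0) {
        data.assign(n, 1.0f);               // rank 0 owns the big array
        MPI_Win_create(data.data(), n * sizeof(float), sizeof(float),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    } else {
        MPI_Win_create(nullptr, 0, sizeof(float),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    }

    MPI_Win_fence(0, win);                  // open an RMA epoch
    std::vector<float> slice(1024);
    if (rank != 0) {
        // each worker pulls its own slice out of rank 0's window
        MPI_Get(slice.data(), 1024, MPI_FLOAT,
                0, (rank - 1) * 1024, 1024, MPI_FLOAT, win);
    }
    MPI_Win_fence(0, win);                  // complete the epoch

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```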

Related

Kaldi - how to share language model among multiple decoders?

I'm using Kaldi to decode lots of audio samples every day. My plan is to have multiple decoders running in parallel, all decoding against the same language model. For this it would be nice if I could share one language model, loaded into memory once, among multiple decoders. The model I have right now is 1 GB on disk and uses around 3 GB in memory, so it would be great to save that memory by reusing it.
Has anyone ever thought about such a thing? Is it doable?
I have not found anything about it in the Kaldi documentation.
I was thinking of using the boost::interprocess library to manage the fst::VectorFst object returned by fst::ReadFstKaldi(), as this is the biggest object. But this looks like a big issue, as it is a complex custom object and I'm not sure boost::interprocess can handle it. I don't want to get into customizing the Kaldi objects to make them supported by Boost memory sharing.
Any other ideas about this approach?
You do not need multiple processes; just share the fst object across threads. It is constant, so there is no need to protect it. Create a decoder with a pointer to the shared fst in every worker; the decoders are separate for every thread. You can use an io_service (e.g. boost::asio::io_service) for processing the requests.
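
A minimal sketch of that structure, with the Kaldi types replaced by placeholders (Model and Decoder are hypothetical names standing in for fst::VectorFst and the Kaldi decoder classes): the read-only model is loaded once, and each thread owns its own decoder that merely holds a const pointer to it.

```cpp
#include <memory>
#include <thread>
#include <vector>

struct Model {            // placeholder for the large read-only language model
    // ... large immutable tables ...
};

class Decoder {           // placeholder for a per-thread decoder
public:
    explicit Decoder(const Model* model) : model_(model) {}
    void Decode(/* audio chunk */) { /* reads *model_, writes only local state */ }
private:
    const Model* model_;  // shared, never modified -> no locking needed
};

int main() {
    auto model = std::make_unique<Model>();   // loaded once for the whole process

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([m = model.get()] {
            Decoder decoder(m);               // each thread gets its own decoder
            decoder.Decode();
        });
    }
    for (auto& t : workers) t.join();
}
```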

Transfering data between threads in C++ and Fortran

I need to move large amounts of data (~10^6 floats) between multiple C++ threads and a Fortran thread. At the moment we use Windows shared memory to move very small pieces of data, mainly for communication, and then save the data to a file in a proprietary format to move the bulk of it. I've been asked to look at moving the bulk of the data via shared memory too, but the shared memory techniques in Windows (seemingly a raw character buffer) look like a mess. Another possibility is Boost's interprocess library, but I'm not sure how to use that from Fortran, or whether it's a good idea. Another idea was to use a database like SQLite.
I'm just wondering if anyone has any experience with this or would like to comment, as it is a little over my head at the moment.
Thanks very much
Jim
Use pipes. If you can inherit handles between the processes, you can use anonymous pipes; if not, you have to use named pipes. Also, threads share the address space, so you're probably thinking of processes when you say threads.
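
A minimal sketch of the named-pipe route on Windows (the pipe name and buffer sizes are arbitrary assumptions, error handling omitted): the server side creates the pipe and writes a block of floats as raw bytes.

```cpp
#include <windows.h>
#include <vector>

int main() {
    const char* pipeName = "\\\\.\\pipe\\bulk_data";   // hypothetical name
    std::vector<float> data(1'000'000, 1.0f);

    // Server side: create the pipe and wait for one client.
    HANDLE pipe = CreateNamedPipeA(pipeName,
                                   PIPE_ACCESS_OUTBOUND,
                                   PIPE_TYPE_BYTE | PIPE_WAIT,
                                   1,                        // one instance
                                   1 << 20, 1 << 20,         // out/in buffer sizes
                                   0, nullptr);
    if (pipe == INVALID_HANDLE_VALUE) return 1;

    if (ConnectNamedPipe(pipe, nullptr) || GetLastError() == ERROR_PIPE_CONNECTED) {
        DWORD written = 0;
        WriteFile(pipe, data.data(),
                  static_cast<DWORD>(data.size() * sizeof(float)),
                  &written, nullptr);
    }
    CloseHandle(pipe);
}
```

The client side should be able to open the same pipe path like an ordinary file (e.g. via CreateFileA, or from Fortran with stream I/O on that path) and read the raw bytes back.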

MPI how to send and receive unknown datatypes

We have developed an algorithm library in C++ which allows the user to implement his own datatypes for sharing data between individual algorithms (also implemented by the user).
This works fine, but we want to provide parallelization at library level. The individual algorithms should be executed in parallel on different nodes of distributed memory machines.
We decided to use MPI for parallelization, as it can be used for distributed and shared memory machines without code changes.
Unfortunately, we are now struggling with the problem of how to distribute the user-implemented datatypes between the nodes. We have the following problems:
We do not know how big the data might be; it might even change from run to run.
We do not know what data is inside the data structure.
The amount of data can be very big, up to 1 GB (this should be no problem for MPI).
The user should not see any difference when implementing the datatypes or algorithms for parallel execution (for the algorithms there is actually no problem).
Is there a possibility to use MPI to share this data between the nodes, or are there other approaches available that might be better suited for this kind of problem?
We would like a solution that works at least on shared memory machines, but we would love to have one that works without code changes on both shared and distributed memory machines.
Yes, you can do this with MPI, but no, MPI can't do it for you by itself.
Whether you're sending this data to another node or writing it to disk, at some point you need to expressly describe the data structure's layout in memory so that it can be serialized. If you pass MPI (or any other communications library) a pointer, it doesn't know what lies on the other side of that pointer, and so it has no way of traversing the data structure to copy its contents.
You can marshal the arguments into plain old data (manually, or with things like MPI_Pack), or you can create an MPI datatype that describes the layout of the data in memory for that particular instance, and that will copy the data over. In addition, you'll need to redirect any pointers within the data structure. Boost serialization may be able to help you with all of this.
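
As a small illustration of the MPI_Pack route (the Record struct is a made-up example, not the library's actual datatypes): the sender flattens a variable-length record into a byte buffer, and the receiver unpacks it in the same order.

```cpp
#include <mpi.h>
#include <vector>

// Hypothetical user record with a variable-length payload.
struct Record {
    int id;
    std::vector<double> values;
};

void SendRecord(const Record& r, int dest, MPI_Comm comm) {
    int count = static_cast<int>(r.values.size());
    int bytes = 0, tmp = 0, pos = 0;
    MPI_Pack_size(2, MPI_INT, comm, &tmp);            bytes += tmp;
    MPI_Pack_size(count, MPI_DOUBLE, comm, &tmp);     bytes += tmp;

    std::vector<char> buf(bytes);
    MPI_Pack(&r.id, 1, MPI_INT, buf.data(), bytes, &pos, comm);
    MPI_Pack(&count, 1, MPI_INT, buf.data(), bytes, &pos, comm);
    MPI_Pack(r.values.data(), count, MPI_DOUBLE, buf.data(), bytes, &pos, comm);
    MPI_Send(buf.data(), pos, MPI_PACKED, dest, 0, comm);
}

Record RecvRecord(int src, MPI_Comm comm) {
    MPI_Status st;
    MPI_Probe(src, 0, comm, &st);
    int bytes = 0;
    MPI_Get_count(&st, MPI_PACKED, &bytes);

    std::vector<char> buf(bytes);
    MPI_Recv(buf.data(), bytes, MPI_PACKED, src, 0, comm, MPI_STATUS_IGNORE);

    Record r;
    int count = 0, pos = 0;
    MPI_Unpack(buf.data(), bytes, &pos, &r.id, 1, MPI_INT, comm);
    MPI_Unpack(buf.data(), bytes, &pos, &count, 1, MPI_INT, comm);
    r.values.resize(count);
    MPI_Unpack(buf.data(), bytes, &pos, r.values.data(), count, MPI_DOUBLE, comm);
    return r;
}
```

Pointers inside the user's structures still have to be followed and re-established on the receiving side; MPI only copies what you explicitly describe.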

How to implement a shared buffer?

I've got one program which creates 3 worker programs. The preferable method of communication in my situation would be through a memory buffer which all four programs may access.
Is there a way to pass a pointer, reference or any kind of handler to the child processes?
Update
The three child programs transform vertex data while the main program primarily deals with UI, system messages, errors, etc.
I'm hoping there is some way to leverage OpenCL such that the four programs can share a context. If this is not possible, it would be nice to have access to the array of vertices across all programs.
I suppose our target platform is Windows right now but we'd like to keep it as cross-platform as possible. If there is no way to implement this utilizing OpenCL we'll probably fall back to wrapping this piece of code for a handful of different platforms.
Your question is platform-dependent, therefore:
for Windows: Named Shared Memory
for Linux: mmap or POSIX shared memory
general case: boost::interprocess
If you explain a bit what kind of data is shared and the other constraints/goals of the system, it would be easier to answer your question.
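
For the general (cross-platform) case, a minimal boost::interprocess sketch (the segment name and size are arbitrary assumptions): the parent creates a named shared-memory segment holding the vertex array, and each worker opens it by name.

```cpp
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>

namespace bip = boost::interprocess;

int main() {
    const char* kName = "vertex_buffer";          // hypothetical segment name
    const std::size_t kBytes = 1024 * sizeof(float);

    // Parent: create the segment and fill it with vertex data.
    bip::shared_memory_object::remove(kName);
    bip::shared_memory_object shm(bip::create_only, kName, bip::read_write);
    shm.truncate(kBytes);
    bip::mapped_region region(shm, bip::read_write);
    float* vertices = static_cast<float*>(region.get_address());
    std::memset(vertices, 0, kBytes);

    // A worker process would do the equivalent of:
    //   bip::shared_memory_object shm(bip::open_only, kName, bip::read_only);
    //   bip::mapped_region region(shm, bip::read_only);
    //   const float* v = static_cast<const float*>(region.get_address());

    bip::shared_memory_object::remove(kName);     // cleanup when done
}
```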
I wonder why you think a shared buffer would be good. Is it because you want to pass a pointer to the data to be worked on through the buffer? Then you need shared memory if you want to work across processes.
What about a client-server approach where you send data to the clients on request?
More information about your problem would help in giving a better answer.
You should use Named Shared Memory and inter-process synchronization.
This is somewhat wider than the original question on shared memory buffers, but depending on your design, data volume, and performance requirements, you could look into in-memory databases such as Redis or into distributed caches, especially if you find yourself in a 'publish-subscribe' situation.

shared memory, MPI and queuing systems

My Unix/Windows C++ app is already parallelized using MPI: the job is split across N CPUs, each chunk is executed in parallel, it is quite efficient with very good speed scaling, and the job is done right.
But some of the data is repeated in each process, and for technical reasons this data cannot easily be split over MPI (...).
For example:
5 Gb of static data, with exactly the same thing loaded for each process
4 Gb of data that can be distributed over MPI; the more CPUs are used, the smaller this per-CPU RAM is.
On a 4-CPU job, this would mean at least a 20 Gb RAM load, with most of the memory 'wasted'; this is awful.
I'm thinking of using shared memory to reduce the overall load: the 'static' chunk would be loaded only once per computer.
So, main question is:
Is there any standard MPI way to share memory on a node? Some kind of readily available + free library?
If not, I would use boost.interprocess and use MPI calls to distribute the local shared memory identifiers.
The shared memory would be read by a 'local master' on each node and shared read-only. There is no need for any kind of semaphore/synchronization, because it won't change.
Any performance hit or particular issues to be wary of?
(There won't be any 'strings' or overly weird data structures; everything can be brought down to arrays and structure pointers.)
The job will be executed in a PBS (or SGE) queuing system; in the case of an unclean process exit, I wonder whether those will clean up the node-specific shared memory.
One increasingly common approach in high-performance computing (HPC) is hybrid MPI/OpenMP programs, i.e. you have N MPI processes, and each MPI process has M threads. This approach maps well to clusters consisting of shared memory multiprocessor nodes.
Changing to such a hierarchical parallelization scheme obviously requires some more or less invasive changes; OTOH, if done properly, it can increase the performance and scalability of the code in addition to reducing memory consumption for replicated data.
Depending on the MPI implementation, you may or may not be able to make MPI calls from all threads. This is specified by the required and provided arguments to the MPI_Init_thread() function, which you must call instead of MPI_Init(). The possible values are:
MPI_THREAD_SINGLE: only one thread will execute.
MPI_THREAD_FUNNELED: the process may be multi-threaded, but only the main thread will make MPI calls (all MPI calls are "funneled" to the main thread).
MPI_THREAD_SERIALIZED: the process may be multi-threaded and multiple threads may make MPI calls, but only one at a time: MPI calls are not made concurrently from two distinct threads (all MPI calls are "serialized").
MPI_THREAD_MULTIPLE: multiple threads may call MPI, with no restrictions.
In my experience, modern MPI implementations like Open MPI support the most flexible MPI_THREAD_MULTIPLE. If you use older MPI libraries or some specialized architecture, you might be worse off.
Of course, you don't need to do your threading with OpenMP; that's just the most popular option in HPC. You could also use e.g. the Boost threads library, the Intel TBB library, or plain pthreads or Windows threads for that matter.
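
A minimal sketch of the hybrid setup (just the skeleton, with an arbitrary reduction loop standing in for real work): MPI is initialized with MPI_THREAD_FUNNELED, so only the main thread talks to MPI while the OpenMP threads share the process-local data.

```cpp
#include <mpi.h>
#include <omp.h>
#include <vector>

int main(int argc, char** argv) {
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    // 'provided' may be lower than requested; check it before relying on threads.

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // One copy of the replicated data per MPI process (i.e. per node if you
    // run one process per node), shared by all OpenMP threads of that process.
    std::vector<double> static_data(1 << 20, 1.0);

    double local_sum = 0.0;
    #pragma omp parallel for reduction(+ : local_sum)
    for (long i = 0; i < static_cast<long>(static_data.size()); ++i) {
        local_sum += static_data[i];      // threads read the shared array
    }

    // Only the main thread makes MPI calls (MPI_THREAD_FUNNELED).
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```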
I haven't worked with MPI, but if it's like other IPC libraries I've seen that hide whether other threads/processes/whatever are on the same or different machines, then it won't be able to guarantee shared memory. Yes, it could handle shared memory between two nodes on the same machine, if that machine provided shared memory itself. But trying to share memory between nodes on different machines would be very difficult at best, due to the complex coherency issues raised. I'd expect it to simply be unimplemented.
In all practicality, if you need to share memory between nodes, your best bet is to do that outside MPI. I don't think you need boost.interprocess-style shared memory, since you aren't describing a situation where the different nodes are making fine-grained changes to the shared memory; it's either read-only or partitioned.
John's and deus's answers cover how to map in a file, which is definitely what you want to do for the 5 Gb (gigabit?) of static data. The per-CPU data sounds like the same thing; you just need to send a message to each node telling it what part of the file it should grab. The OS should take care of mapping virtual memory to physical memory to the files.
As for cleanup... I would assume it doesn't do any cleanup of shared memory, but mmapped files should be cleaned up, since the files are closed (which should release their memory mappings) when a process is cleaned up. I have no idea what caveats CreateFileMapping etc. have.
Actual "shared memory" (i.e. boost.interprocess) is not cleaned up when a process dies. If possible, I'd recommend killing a process and seeing what is left behind.
With MPI-2 you have RMA (remote memory access) via functions such as MPI_Put and MPI_Get. Using these features, if your MPI installation supports them, would certainly help you reduce the total memory consumption of your program. The cost is added complexity in coding but that's part of the fun of parallel programming. Then again, it does keep you in the domain of MPI.
MPI-3 offers shared memory windows (see e.g. MPI_Win_allocate_shared()), which allow the use of on-node shared memory without any additional dependencies.
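
A minimal sketch of the MPI-3 route (sizes are illustrative, error checks omitted): the ranks on a node are grouped with MPI_Comm_split_type, the node-local rank 0 allocates the shared window, and the other ranks query a direct pointer into it.

```cpp
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Group the ranks that live on the same node.
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    int noderank;
    MPI_Comm_rank(nodecomm, &noderank);

    // Only the node-local rank 0 allocates the shared block.
    const MPI_Aint nfloats = 1 << 20;                 // illustrative size
    MPI_Aint size = (noderank == 0) ? nfloats * sizeof(float) : 0;
    float* base = nullptr;
    MPI_Win win;
    MPI_Win_allocate_shared(size, sizeof(float), MPI_INFO_NULL,
                            nodecomm, &base, &win);

    // The other ranks ask for a pointer to rank 0's block.
    if (noderank != 0) {
        MPI_Aint qsize;
        int disp_unit;
        MPI_Win_shared_query(win, 0, &qsize, &disp_unit, &base);
    }

    if (noderank == 0)
        for (MPI_Aint i = 0; i < nfloats; ++i) base[i] = 1.0f;  // fill once
    MPI_Win_fence(0, win);                                      // make it visible
    // ... every rank on the node can now read base[] directly ...

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}
```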
I don't know much about unix, and I don't know what MPI is. But in Windows, what you are describing is an exact match for a file mapping object.
If this data is embedded in your .EXE or a .DLL that it loads, then it will automatically be shared between all processes. Teardown of your process, even as a result of a crash, will not cause any leaks or unreleased locks on your data. However, a 9 Gb .DLL sounds a bit iffy, so this probably doesn't work for you.
Instead, you could put your data into a file, then use CreateFileMapping and MapViewOfFile on it. The mapping can be read-only, and you can map all or part of the file into memory. All processes will share the pages that are mapped from the same underlying CreateFileMapping object. It's good practice to unmap views and close handles, but if you don't, the OS will do it for you on teardown.
Note that unless you are running x64, you won't be able to map a 5 Gb file into a single view (or even a 2 Gb file; 1 Gb might work). But given that you are talking about having this already working, I'm guessing you are already x64 only.
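
A minimal sketch of that approach (the file name is a made-up example, error handling kept short): open the data file, create a read-only mapping, and map a view of it.

```cpp
#include <windows.h>

int main() {
    // Hypothetical data file prepared in advance.
    HANDLE file = CreateFileA("static_data.bin", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    // Read-only mapping backed by the file; processes mapping the same
    // file share the physical pages.
    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    if (!mapping) return 1;

    // Map the whole file (on x64; on 32-bit you would map a smaller view).
    const float* data = static_cast<const float*>(
        MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0));
    if (!data) return 1;

    // ... read-only access to the static data ...

    UnmapViewOfFile(data);
    CloseHandle(mapping);
    CloseHandle(file);
}
```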
If you store your static data in a file, you can use mmap on Unix to get random access to it. The data will be paged in as and when you need access to a particular bit of it. All you need to do is overlay your binary structures on the file data. This is the Unix equivalent of the CreateFileMapping and MapViewOfFile mentioned above.
Incidentally, glibc's malloc uses mmap when you request a sufficiently large block.
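
A minimal sketch of the mmap route (the file name is a made-up example, error handling kept short): the file is mapped read-only and the binary layout is overlaid directly on the mapped bytes.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main() {
    // Hypothetical file holding the static data.
    int fd = open("static_data.bin", O_RDONLY);
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) return 1;

    // Read-only, shared mapping: every process mapping this file on the node
    // shares the same physical pages, and pages are faulted in on demand.
    void* addr = mmap(nullptr, static_cast<size_t>(st.st_size),
                      PROT_READ, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) return 1;

    // Overlay the binary structures directly on the mapped bytes.
    const float* data = static_cast<const float*>(addr);
    (void)data;  // ... read-only access ...

    munmap(addr, static_cast<size_t>(st.st_size));
    close(fd);
}
```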
I did some projects with MPI at SHUT.
As far as I know, there are many ways to distribute a problem using MPI; maybe you can find another solution that does not require shared memory.
My project was solving a system of 7,000,000 equations in 7,000,000 variables.
If you can explain your problem, I will try to help you.
I ran into this problem in the small when I used MPI a few years ago.
I am not certain that SGE understands memory-mapped files. If you are distributing across a Beowulf cluster, I suspect you're going to have coherency issues. Could you say a little about your multiprocessor architecture?
My draft approach would be to set up an architecture where each part of the data is owned by a defined CPU. There would be two threads: one thread acting as an MPI two-way talker, and one thread computing the result. Note that MPI and threads don't always play well together.