I've been reading up on Python's "multiprocessing", specifically the "Pool" stuff. I'm familiar with threading but not the approach used here. If I were to pass a very large collection (say a dictionary of some sort) to the process pool ("pool.map(myMethod, humungousDictionary)"), are copies of the dictionary made in memory and then handed off to each process, or does there exist only the one dictionary? I'm concerned about memory usage. Thank you in advance.
The short answer is: copies are made. Each worker process runs in its own independent memory space, so the data is effectively duplicated.
If your dictionary is read-only and will not be modified, here are some options you could consider:
Save your data into a database. Each worker will read the data and work independently.
Have the parent process build the dictionary and then spawn the workers with os.fork (on Unix this is what multiprocessing's "fork" start method does). Copy-on-write means that read-only data created before the fork is not physically duplicated.
Use shared memory. Unix systems offer shared memory for interprocess communication; if there is any chance of a race, you will need semaphores as well (see the sketch below).
You may also consider referring here for deeper insight on a possible solution.
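To make the shared-memory option concrete, here is a minimal C++ sketch of the underlying Unix mechanism (POSIX shared memory plus a named semaphore); the names "/big_table" and "/big_table_sem" are made up for the example, and error handling is omitted. Python's multiprocessing.shared_memory module wraps the same facility, so only the raw region is shared, not arbitrary Python objects.

```cpp
#include <cstring>
#include <fcntl.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const std::size_t size = 1 << 20;                    // 1 MiB region

    // One process creates the region; the others open the same name.
    int fd = shm_open("/big_table", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, size);
    void* data = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    // A named semaphore guards writers; readers of truly immutable data can skip it.
    sem_t* sem = sem_open("/big_table_sem", O_CREAT, 0600, 1);

    sem_wait(sem);
    std::memcpy(data, "hello", 6);                       // write under the lock
    sem_post(sem);

    sem_close(sem);
    munmap(data, size);
    close(fd);
    // shm_unlink("/big_table") / sem_unlink("/big_table_sem") once everyone is done.
}
```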
I am trying to find out if there is an inherent speed difference between inter-thread and inter-process communication.
I know that when using threads the threads share the same memory, can use the same global variables, and so on, while processes have to use other tricks, which basically means queues.
But take the following case:
An application is comprised of several completely separate .exe files. When all are run they form a producer/consumer (or publisher/subscriber) architecture, with some processes producing some values and other processes reading and using those values and maybe producing some other values.
This communication is done through conventional IPC mechanisms.
My question is: if I were to move the code around so that it's one process with multiple threads (assuming no conflicts with variable names and the like), but keep the communication methods the same (queues with all the locks and semaphores behind them), will the thread-based application be faster than the process-based one?
The startup costs of processes vs. threads are not important because the application is meant to run for a long time (hours) so a few milliseconds will not be important.
Google has yielded no conclusive answers to this.
To clarify some aspects of the question:
The factor I want to maximize is throughput.
Some external factor (an Arduino sensor, for example) produces an input for one of the nodes, and the entire network takes some time while all the nodes consume and produce values. Then a new input can be processed. I would like to be able to process more inputs per minute/second.
The data being passed back and forth are mostly numbers or small arrays of numbers.
The entire network can have, let's say, between 5 and 25 nodes.
As for platform (if it is relevant) I would like answers for both Linux and Windows.
The specific use-case is too large to be described here so consider the use-case provided above. This is as much, if not more, an educational question for my own knowledge as it is a question about a specific problem.
Please ask for any other relevant information that I have not included here.
If I were to move the code around so that it's one process with multiple threads (assuming no conflicts with variable names and the like), but keep the communication methods the same (queues with all the locks and semaphores behind them), will the thread-based application be faster than the process-based one?
Keeping the communication methods truly "the same" is not possible. The multi-threaded version will take advantage of the shared memory space, and the multi-process version cannot do so. For example, you can take an ordinary object in one thread and access it through a pointer in another thread, and all the referenced sub-objects will "just work". Anything not modified can be accessed just as easily from one thread as from another with no special effort.
This simply won't work across processes at all since they don't share an address space.
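As a minimal sketch of what "just works" means here (hypothetical names): a worker thread can read a structure built by the main thread through a plain pointer, with no copying or serialization, purely because both threads live in one address space. A separate process could not dereference that pointer.

```cpp
#include <iostream>
#include <string>
#include <thread>
#include <vector>

struct Config {
    std::string name;
    std::vector<int> values;
};

int main() {
    Config cfg{"sensor-net", {1, 2, 3}};

    std::thread worker([&cfg] {
        // No copying, no serialization: the thread sees the very same object.
        std::cout << cfg.name << " has " << cfg.values.size() << " values\n";
    });
    worker.join();
}
```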
What do you think about simulating threads with fork() and a shared memory block?
Is it possible?
How reasonable is it to do this in a program? (I mean, will it work well?)
For starters, don't mix threads and fork().
A fork gives you a brand new process, which is a copy of the current process, with the same code segments. As the memory image changes (typically because the two processes behave differently), the memory images separate, but the executable code remains the same. The two processes do not share memory unless they use some Inter-Process Communication (IPC) primitive.
In contrast, a thread is another thread of execution within the same task. One task can have multiple threads, and the task's memory is shared among its threads, so shared data must be accessed through synchronization primitives that let you avoid data corruption.
Yes, it is possible, but I cannot imagine it being a good idea, and it would be a real pain to test.
If you have a shared heap, and you make sure all semaphores etc. are allocated in the heap, and not the stack, then there's no inherent reason you couldn't do something like it. There would be some tricky differences though.
For example, anything you do in a signal handler in a multi-threaded program can change data used by all the threads, while in a forked program you would have to send multiple signals, which would be caught at different times and might lead to unintended effects.
If you want threading behavior, just use a thread.
AFAIK, fork will create a separate process with its own context, stack and so on. It depends on what you mean by "simulating"...
You might want to check this out : http://www.linuxprogrammingblog.com/threads-and-fork-think-twice-before-using-them
A few of the answers here focus on "don't mix fork and threads". But the way I read your question is: "can you use two different processes, and still communicate quickly and conveniently with shared memory between them, just like how threads have access to each other's memory?"
And the answer is: yes you can, but you have to remember to explicitly mark which memory areas you want shared. You cannot just share your variables between the processes. Also, you can communicate this way between processes that are not related to each other at all; it is not limited to processes forked from each other.
Have a look at shared memory, or "shm"; a minimal sketch follows.
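Here is a minimal sketch, assuming Linux/Unix, of the fork-plus-shared-memory idea: create an anonymous shared mapping before the fork so parent and child really see the same pages, and put a process-shared semaphore inside that region to guard it. Error handling is omitted.

```cpp
#include <cstdio>
#include <semaphore.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct Shared {
    sem_t lock;    // process-shared semaphore, lives inside the shared region
    int counter;
};

int main() {
    // MAP_SHARED | MAP_ANONYMOUS: parent and child see the same pages.
    auto* shared = static_cast<Shared*>(
        mmap(nullptr, sizeof(Shared), PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_ANONYMOUS, -1, 0));
    sem_init(&shared->lock, /*pshared=*/1, /*value=*/1);
    shared->counter = 0;

    if (fork() == 0) {               // child
        sem_wait(&shared->lock);
        shared->counter += 1;
        sem_post(&shared->lock);
        _exit(0);
    }
    wait(nullptr);                   // parent
    std::printf("counter = %d\n", shared->counter);   // prints 1
}
```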
I have implemented the Observer pattern in a C++ project.
My subject is an XML file reader which reads tags and publishes their values.
I have some "processing objects" which are my observers. They check the tag that has currently been read; if they have subscribed to that tag they process it, otherwise they ignore it.
I have banks of memory into which the tags and their values are dumped.
My problem now is: how do I synchronise the memory operations?
When my XML reader wants to publish some tag/value, it should get an unused block of memory and "lock" it so that it is unavailable for editing. Once all the "processing objects" are done with the memory, they should be able to "unlock" it for further use.
How can I achieve this? Please help.
Have you checked out Boost shared memory? It has various synchronization mechanisms and examples.
The synchronization mechanisms outlined in the Interprocess library are specifically useful if you want to put the mutexes in the shared memory block itself (a sketch follows).
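A minimal sketch of that approach with Boost.Interprocess, assuming the reader and the processing objects run in separate processes (if they all live in one process, a plain std::mutex per block is enough). The segment name "TagBank", the object name "Block0", and the TagBlock layout are made up for the example.

```cpp
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

namespace bip = boost::interprocess;

// Hypothetical layout: one block of the tag/value bank together with its own
// mutex, all of it living inside the shared memory segment.
struct TagBlock {
    bip::interprocess_mutex mutex;
    char tag[32];
    char value[256];
    bool in_use = false;
};

int main() {
    // The XML reader would create the segment; the processing objects open it.
    bip::managed_shared_memory segment(bip::open_or_create, "TagBank", 65536);
    TagBlock* block = segment.find_or_construct<TagBlock>("Block0")();

    {
        // Hold the lock only while reading or writing this block.
        bip::scoped_lock<bip::interprocess_mutex> lock(block->mutex);
        // ... copy a tag/value pair into or out of the block here ...
    }
    // bip::shared_memory_object::remove("TagBank") when the bank is no longer needed.
}
```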
I assume your main task is not to learn/develop the synchronization mechanism.
You should reuse existing components, and there are many available. RabbitMQ is very good: http://www.rabbitmq.com/getstarted.html
It supports multiple models (including distributed/networked ones). There may be an initial learning period, but once it is integrated you can keep using it for your feature extensions and focus on the meat of the problem instead.
I know there are many ways to handle communication between two processes, but I'm still a bit confused about how to deal with it. Is it possible to share a queue (from the standard library) between two processes in an efficient way?
Thanks
I believe your confusion comes from not understanding the relationship between the memory address spaces of the parent and child process. The two address spaces are effectively unrelated. Yes, immediately after the fork() the two processes contain almost identical copies of memory, but you should think of them as copies. Any change one process makes to memory in its address space has no impact on the other process's memory.
Any "plain old data structures" (such as provided by the C++ standard library) are purely abstractions of memory, so there is no way to use them to communicate between the two processes. To send data from one process to the other, you must use one of several system calls that provide interprocess communication.
But note that shared memory is an exception to this. You can use system calls to set up a section of shared memory, and then create data structures in that shared memory. You'll still need to protect those data structures with a mutex, but the mutex has to be shared-memory aware. With POSIX threads, you'd initialize a pthread_mutexattr_t and set the PTHREAD_PROCESS_SHARED attribute with pthread_mutexattr_setpshared (a minimal sketch follows).
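A minimal sketch of that initialization, assuming the mutex already lives in a region obtained from shm_open()/mmap():

```cpp
#include <pthread.h>

// 'm' is assumed to point into an already-mapped shared-memory region.
void init_shared_mutex(pthread_mutex_t* m) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(m, &attr);        // now lockable from any process mapping the region
    pthread_mutexattr_destroy(&attr);
}
```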
Simple answer: sharing a std::queue between two processes can be done, but it is not trivial.
You can use shared memory to hold the queue together with some synchronization mechanism (usually a mutex). Note that not only must the std::queue object be constructed in the shared memory region, but also the contents of the queue, so you will have to provide your own allocator that manages the creation of memory in the shared region.
If you can, try to look at higher-level libraries that might provide already-packaged solutions to your process communication needs. Consider Boost.Interprocess (a sketch of its message_queue follows) or search your favorite search engine for interprocess communication.
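For example, Boost.Interprocess ships a message_queue that already behaves like a queue shared between processes, so you don't have to place a std::queue and its allocator in shared memory yourself. A rough sketch, with the queue name "demo_queue" made up and error handling omitted:

```cpp
#include <boost/interprocess/ipc/message_queue.hpp>

namespace bip = boost::interprocess;

// Producer process: pushes one int onto the named queue.
void produce() {
    bip::message_queue mq(bip::open_or_create, "demo_queue",
                          /*max_num_msg=*/100, /*max_msg_size=*/sizeof(int));
    int value = 42;
    mq.send(&value, sizeof(value), /*priority=*/0);
}

// Consumer process: pops one int, blocking until a message arrives.
void consume() {
    bip::message_queue mq(bip::open_or_create, "demo_queue", 100, sizeof(int));
    int value = 0;
    bip::message_queue::size_type received = 0;
    unsigned int priority = 0;
    mq.receive(&value, sizeof(value), received, priority);
}
```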
I don't think there are any simple ways to share structures/objects like that between two processes. If you want to implement a queue/list/array/etc. between two processes, you will need to implement some kind of communication between the processes to manage the queues and to retrieve and store entries.
For example, you could implement the queue management in one process and implement some kind of IPC (shared memory, sockets, pipes, etc.) to hand off entries from one process to the other.
There may be other methods outside of the standard C++ libraries that will do this for you. For example, there are likely Boost libraries that already implement this.
I have a count variable that should be incremented by a few processes I forked and read by the mother process.
I tried to create a pointer in the main() function of the mother process and increment the value it points to in the forked children. That does not work! Every child seems to have its own copy, even though the address is the same in every process.
What is the best way to do that?
Each child gets its own copy of the parent process's memory (at least as soon as it tries to modify anything). If you need to share between processes, you need to look at shared memory or some similar IPC mechanism.
Two processes do not share the same memory by default. It is true that a forked child initially shares the same underlying physical pages with its parent, but an attempt to write to them causes the operating system to give the writing process its own private copy (copy-on-write).
Look into another form of IPC to use.
My experience is that if you want to share information between at least two processes, you almost never want to share just some void* pointer into memory. You might want to have a look at Boost.Interprocess, which can give you an idea of how to share structured data (read "classes" and "structs") between processes.
No, use IPC or threads. After a fork only file descriptors are shared (and they refer to the same open file description, so even the file offset is shared); ordinary memory is not.
You might want to check out shared memory.
A pointer is only meaningful inside its own process: it is private to that process's address space. There are different kinds of IPC mechanisms available in every operating system; you can opt for Windows messaging, shared memory, sockets, pipes, etc. Choose one according to your requirements and the size of the data. Another mechanism is to write the data into the target process using the virtual memory APIs (e.g. WriteProcessMemory on Windows) and notify that process of the corresponding address.
One simple but limited form of IPC that would work well for a shared count is a "shared data segment". On Windows this is implemented using the #pragma data_seg directive.
See this article for an example.
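A rough MSVC-specific sketch of that technique; note that a shared data segment is shared only between processes running the same executable or DLL:

```cpp
#include <windows.h>

// Every process running this same module sees one copy of g_count.
#pragma data_seg(".shared")
volatile LONG g_count = 0;     // must be explicitly initialized to land in the segment
#pragma data_seg()
#pragma comment(linker, "/SECTION:.shared,RWS")   // Read, Write, Shared

void bump() {
    InterlockedIncrement(&g_count);   // atomic increment visible to all instances
}
```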