Nim: Parallel loop with mutating state - concurrency

I'm new to the Nim language. I wanted to learn it by implementing a simple Genetic Algorithm to evolve strings (array of integers atm) by distributing the work to the CPU cores:
https://github.com/peheje/nim_genetic
I have successfully parallelized the creation of the "Agents", but I cannot do the same for a call to a function "life" that must mutate the passed-in state:
...
type
  Agent* = ref object
    data*: seq[int]
    fitness*: float
...
var pool: seq[Agent] = newAgentsParallel(POPULATION_SIZE)
# Run generations
for gen in 0..<N_GENERATIONS:
  let wheel = createWheel(pool)
  let partitions: seq[seq[Agent]] = partition(pool, N_THREADS)
  parallel:
    for part in partitions:
      echo "spawning"
      spawn life(part, pool, wheel)
  pool = createPool(pool, wheel)
...
proc life(part: seq[Agent], pool: seq[Agent], wheel: seq[float]) =
  for a in part:
    if random(1.0) < CROSSOVER_PROP:
      a.crossover(pool, wheel)
    if random(1.0) < MUTATE_PROP:
      a.mutate()
    a.calcFitness()
  echo "life done"
The CPU is pinned at 100% and it seems like Nim is copying the data passed to "life", as RAM usage skyrockets after "spawning". The Nim manual says this about the parallel block:
"Every other complex location loc that is used in a spawned proc
(spawn f(loc)) has to be immutable for the duration of the parallel
section. This is called the immutability check. Currently it is not
specified what exactly "complex location" means. We need to make this
an optimization!"
And I'm very much using it as mutable state, so maybe that is why Nim is copying the data? How can I get around this, e.g. by passing only pointers? A few points:
I guess I could avoid mutating, instead returning new instances that are modified, but I still need to pass in the pool and wheel to read from.
If the parallel: statement is not possible to use, how would I implement it using threads?
Is random() thread-safe? If not, how else should I generate random numbers?
Anything else I could do differently? E.g. easier unwrapping of the FlowVar?
Coming from Kotlin with Java8 Streams I feel really spoiled.

Related

When is a Haskell thread joined?

As a former C++ programmer, I find the behaviour of Haskell threads confusing. Refer to the following Haskell code snippet:
import Control.Concurrent
import Control.Concurrent.MVar
import Data.Functor.Compose
import System.Random
randomTill0 :: MVar () -> IO () -- Roll a die until 0 comes
randomTill0 mV = do
    x <- randomRIO (0,65535) :: IO Int
    if 0 == x
        then putMVar mV ()
        else randomTill0 mV

main :: IO ()
main = do
    n <- getNumCapabilities
    mV <- newEmptyMVar
    sequence (replicate n (forkIO (randomTill0 mV)))
    readMVar mV
    putStrLn "Execution complete."
As far as I know, Haskell's forkIO is roughly equivalent to C++'s std::async. In C++, I store the std::future returned by std::async, then call std::future::wait on it, after which the underlying thread is joined (as with std::thread::join).
(Regarding the tiny delay before the message, I don't think any laziness is involved here.)
My question is:
In the above Haskell code snippet, when are the threads resulting from forkIOs joined? Is it upon readMVar mV, or the end of main?
Is there a Haskell equivalent of std::thread::detach?
As far as I understand, threads are never joined, and the program ends when main ends. This is special to main; in general, threads are not associated with each other in any kind of hierarchy. The concept of joining a thread does not exist: a thread runs until its action is finished or until it is explicitly killed with killThread, and then it evaporates (thanks, garbage collector). If you want to wait for a thread to complete, you have to do it yourself, probably with an MVar.
It follows that there is no analogue of detach: all threads are automatically detached.
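For illustration, here is a minimal, hypothetical sketch (not from the question) of that hand-rolled waiting: the forked thread fills an MVar when it is done, and the main thread blocks on takeMVar, which plays the role of a join:
import Control.Concurrent

main :: IO ()
main = do
    done <- newEmptyMVar
    _ <- forkIO $ do
        putStrLn "worker running"   -- the thread's real work would go here
        putMVar done ()             -- signal that the worker has finished
    takeMVar done                   -- the hand-rolled "join": block until signalled
    putStrLn "worker finished"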
Another thing worth mentioning is that there is not a 1:1 correspondence between OS threads and Haskell threads. The Haskell runtime system has its own scheduler that can run multiple Haskell threads on a single OS thread; and in general a Haskell thread will bounce around between different OS threads over the course of its lifetime. There is a concept of bound threads which are tied to an OS thread, but really the only reason to use that is if you are interfacing with code in other languages that distinguishes between OS threads.

python ThreadPool not working (executed sequentially instead of in parallel)

I'm using ThreadPool from multiprocessing.pool, but it does not seem to be working (i.e. the calls are executed sequentially rather than in parallel: res1 takes 14s, res2 takes 11s, and with ThreadPool the total is 27s instead of 14s).
I think the problem is in another_method because it uses a (shared) read_only_resource.
I've tried having a time.sleep(val) in another_method instead of calling another method (works as expected - it takes as long as the maximum value I pass) and I've also tried to pass a deep copy of the read_only_resource (this doesn't work, it takes 27s).
I have run out of things to try to make this work:
def method(text_type, read_only_resource):
    value = some_processing(text_type)
    return another_method(value, read_only_resource)

def main():
    same_read_only_resource = get_read_only_resource()
    pool = ThreadPool(processes=2)
    res1 = pool.apply_async(method, (some_text_type, same_read_only_resource))
    res2 = pool.apply_async(method, (other_text_type, same_read_only_resource))
    results1 = res1.get()
    results2 = res2.get()
It looks like you want to use map instead of apply_async. The apply_async function isn't for parallelizing multiple function calls. Rather, it's to asynchronously call a single instance of the function. Since you call it twice and get the results in order, you get serialized performance.
Calling map will run multiple instances of a function in parallel. It requires packing the inputs into a single object, e.g. a tuple, since it only allows a single argument to be passed to your function. Then all packed inputs can be placed in a list or other iterable and given to map, for example:
work_args = [(some_text, read_only_resource), (other_text, read_only_resource), ...]
results = pool.map(method, work_args)
Additionally, you can use itertools.izip() together with itertools.repeat() to create work_args, e.g.:
work_args = itertools.izip([some_text, other_text, ...], itertools.repeat(read_only_resource))
Note that since you are using ThreadPool, you still might not get much performance increase, depending on the work being done. The topic has been discussed in many places, but Python may not parallelize with threads like you expect, due to the Global Interpreter Lock. See here for a good summary. In short, if your function is going to do I/O, ThreadPool can help.
However if you use multiprocessing.Pool, you will have multiple processes executing simultaneously. Just replace ThreadPool with Pool to use it.
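To make that concrete, here is a small, self-contained sketch of the map-based approach (the worker body and its sleep are invented for illustration); replacing Pool with multiprocessing.pool.ThreadPool changes only whether processes or threads do the work:
import time
from multiprocessing import Pool

def method(args):
    # map passes a single object, so unpack the packed tuple here
    text, read_only_resource = args
    time.sleep(1)  # stand-in for the real processing
    return "%s processed with %d resource entries" % (text, len(read_only_resource))

if __name__ == "__main__":
    read_only_resource = {"key": "value"}
    work_args = [("some_text", read_only_resource), ("other_text", read_only_resource)]
    with Pool(processes=2) as pool:
        results = pool.map(method, work_args)
    print(results)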

How to make something lwt supported?

I am trying to understand the term lwt supported.
So assume I have a piece of code which connects to a database and writes some data: Db.write conn data. It has nothing to do with Lwt yet, and each write takes 10 seconds.
Now, I would like to use lwt. Can I directly code like below?
let write_all data_list = Lwt_list.iter (Db.write conn) data_list
let _ = Lwt_main.run(write_all my_data_list)
Suppose there are 5 data items in my_data_list; will all 5 items be written into the database sequentially or in parallel?
Also, the Lwt manual and http://ocsigen.org/tutorial/application say
Using Lwt is very easy and does not cause troubles, provided you never
use blocking functions (non cooperative functions). Blocking functions
can cause the entire server to hang!
I don't quite understand how to avoid using blocking functions. For my own functions, can I just use Lwt.return to make them Lwt-supported?
Yes, your code is correct. The principle behind "Lwt supported" is that everything in your code that can potentially take time should return an Lwt value.
For Lwt_list.iter, you can choose whether the processing is parallel or sequential by choosing between iter_p and iter_s:
In iter_s f l, iter_s will call f on each element
of l, waiting for completion between each element. On the
contrary, in iter_p f l, iter_p will call f on all
elements of l, then wait for all the threads to terminate.
About the non-blocking functions, the principle of the Light Weight Threads is that they keep running until they reach a "cooperation point", i.e. a point where the thread can be safely interrupted or has nothing to do, like in a sleep.
But you have to declare that you are entering a "cooperation point" before actually doing the sleep. This is why the whole Unix library has been wrapped, so that when you want to do an operation that takes time (e.g. a write), a cooperation point is reached automatically.
For your own functions, if you use I/O operations from Unix, you should use the Lwt versions instead (Lwt_unix.sleep instead of Unix.sleep).
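As a rough sketch of the difference (Lwt_unix.sleep and fake_write stand in for your Db.write, which isn't shown), iter_s finishes one write before starting the next, while iter_p starts them all at once:
open Lwt.Infix

(* A fake "slow write" that cooperates with Lwt instead of blocking. *)
let fake_write item =
  Lwt_unix.sleep 1.0 >>= fun () ->   (* pretend this is Db.write conn item *)
  Lwt_io.printl ("wrote " ^ item)

let data = ["a"; "b"; "c"; "d"; "e"]

(* iter_s: sequential, roughly 5 seconds in total. *)
let () = Lwt_main.run (Lwt_list.iter_s fake_write data)

(* iter_p would start all five writes at once, roughly 1 second in total:
   let () = Lwt_main.run (Lwt_list.iter_p fake_write data) *)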

pthreads - previously created thread uses new value (updated after thread creation)

So here's my scenario. First, I have a structure -
struct interval
{
    double lower;
    double higher;
};
Now my thread function -
void* thread_function(void* i)
{
    interval* in = (interval*)i;
    double a = in->lower;
    cout << a;
    pthread_exit(NULL);
}
In main, let's say I create these 2 threads -
pthread_t one,two;
interval i;
i.lower = 0; i.higher = 5;
pthread_create(&one,NULL,thread_function,&i);
i.lower=10; i.higher = 20;
pthread_create(&two,NULL,thread_function, &i);
pthread_join(one,NULL);
pthread_join(two,NULL);
Here's the problem. Ideally, thread "one" should print out 0 and thread "two" should print out 10. However, this doesn't happen. Occasionally, I end up getting two 10s.
Is this by design? In other words, by the time the thread is created, the value in i.lower has been changed already in main, therefore both threads end up using the same value?
Is this by design?
Yes. It's unspecified when exactly the threads start and when they will access that value. You need to give each one of them their own copy of the data.
Your application is non-deterministic.
There is no telling when a thread will be scheduled to run.
Note: creating a thread does not mean it will start executing immediately (or even first). The second thread created may actually start running before the first (it all depends on the OS and hardware).
To get deterministic behavior each thread must be given its own data (that is not modified by the main thread).
pthread_t one,two;
interval oneData, twoData;
oneData.lower = 0; oneData.higher = 5;
pthread_create(&one,NULL,thread_function,&oneData);
twoData.lower=10; twoData.higher = 20;
pthread_create(&two,NULL,thread_function, &twoData);
pthread_join(one,NULL);
pthread_join(two,NULL);
I would not call it by design.
I would rather refer to it as a side-effect of scheduling policy. But the observed behavior is what I would expect.
This is the classic 'race condition', where the results vary depending on which thread wins the 'race'. You have no way of knowing which thread will 'win' each time.
Your analysis of the problem is correct; you simply don't have any guarantees that the first thread created will be able to read i.lower before the data is changed on the next line of your main function. This is in some sense the heart of why it can be hard to think about multithreaded programming at first.
The straightforward solution to your immediate problem is to keep separate intervals with different data and pass a different one to each thread, i.e.
interval i, j;
i.lower = 0; j.lower = 10;
pthread_create(&one,NULL,thread_function,&i);
pthread_create(&two,NULL,thread_function,&j);
This will of course solve your immediate problem. But soon you'll probably wonder what to do if you want multiple threads actually using the same data. What if thread 1 wants to make changes to i and thread 2 wants to take these into account? There would hardly be any point in doing multithreaded programming if each thread had to keep its memory separate from the others (well, leaving message passing out of the picture for now). Enter mutex locks! I thought I'd give you a heads-up that you'll want to look into this topic sooner rather than later, as it'll also help you understand the basics of threads in general and the required change in mentality that goes along with multithreaded programming.
I seem to recall that this is a decent short introduction to pthreads, including getting started with understanding locking etc.
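As a hedged sketch of that next step (not part of the original answers), here is the same interval struct shared by both threads and protected with a pthread_mutex_t, so the reads and writes no longer race:
#include <pthread.h>
#include <iostream>

struct interval
{
    double lower;
    double higher;
};

// Shared data plus the mutex that protects it.
interval shared = {0, 5};
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void* thread_function(void*)
{
    pthread_mutex_lock(&lock);      // only one thread touches 'shared' at a time
    double a = shared.lower;
    shared.lower += 10;             // mutate the shared state safely
    std::cout << a << std::endl;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main()
{
    pthread_t one, two;
    pthread_create(&one, NULL, thread_function, NULL);
    pthread_create(&two, NULL, thread_function, NULL);
    pthread_join(one, NULL);
    pthread_join(two, NULL);
}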

How/why do functional languages (specifically Erlang) scale well?

I have been watching the growing visibility of functional programming languages and features for a while. I looked into them and didn't see the reason for the appeal.
Then, recently I attended Kevin Smith's "Basics of Erlang" presentation at Codemash.
I enjoyed the presentation and learned that a lot of the attributes of functional programming make it much easier to avoid threading/concurrency issues. I understand the lack of state and mutability makes it impossible for multiple threads to alter the same data, but Kevin said (if I understood correctly) all communication takes place through messages and the messages are processed synchronously (again avoiding concurrency issues).
But I have read that Erlang is used in highly scalable applications (the whole reason Ericsson created it in the first place). How can it be efficient handling thousands of requests per second if everything is handled as a synchronously processed message? Isn't that why we started moving towards asynchronous processing - so we can take advantage of running multiple threads of operation at the same time and achieve scalability? It seems like this architecture, while safer, is a step backwards in terms of scalability. What am I missing?
I understand the creators of Erlang intentionally avoided supporting threading to avoid concurrency problems, but I thought multi-threading was necessary to achieve scalability.
How can functional programming languages be inherently thread-safe, yet still scale?
A functional language doesn't (in general) rely on mutating a variable. Because of this, we don't have to protect the "shared state" of a variable, because the value is fixed. This in turn avoids the majority of the hoop jumping that traditional languages have to go through to implement an algorithm across processors or machines.
Erlang takes this further than traditional functional languages by baking in a message-passing system, so everything operates as an event-based system in which a piece of code worries only about receiving and sending messages, not about the bigger picture.
What this means is that the programmer is (nominally) unconcerned with whether the message will be handled on another processor or machine: simply sending the message is enough to continue. If the code cares about a response, it waits for that response as another message.
The end result of this is that each snippet is independent of every other snippet. No shared code, no shared state, and all interactions come from a message system that can be distributed among many pieces of hardware (or not).
Contrast this with a traditional system: we have to place mutexes and semaphores around "protected" variables and code execution. We have tight binding in a function call via the stack (waiting for the return to occur). All of this creates bottlenecks that are less of a problem in a shared nothing system like Erlang.
EDIT: I should also point out that Erlang is asynchronous. You send your message and maybe/someday another message arrives back. Or not.
Spencer's point about out of order execution is also important and well answered.
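A tiny, illustrative shell-level sketch of that asynchrony (not from the answer above): the send returns immediately, and the "reply" is just another message that the sender explicitly chooses to wait for:
Pid = spawn(fun() ->
                receive
                    {From, ping} -> From ! pong
                end
            end),
Pid ! {self(), ping},     %% asynchronous send: returns immediately
receive pong -> ok end.   %% waiting for the reply is a separate, explicit step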
The message queue system is cool because it effectively produces a "fire-and-wait-for-result" effect which is the synchronous part you're reading about. What makes this incredibly awesome is that it means lines do not need to be executed sequentially. Consider the following code:
r = methodWithALotOfDiskProcessing();
x = r + 1;
y = methodWithALotOfNetworkProcessing();
w = x * y;
Consider for a moment that methodWithALotOfDiskProcessing() takes about 2 seconds to complete and that methodWithALotOfNetworkProcessing() takes about 1 second to complete. In a procedural language this code would take about 3 seconds to run because the lines would be executed sequentially. We're wasting time waiting for one method to complete that could run concurrently with the other without competing for a single resource. In a functional language lines of code don't dictate when the processor will attempt them. A functional language would try something like the following:
Execute line 1 ... wait.
Execute line 2 ... wait for r value.
Execute line 3 ... wait.
Execute line 4 ... wait for x and y value.
Line 3 returned ... y value set, message line 4.
Line 1 returned ... r value set, message line 2.
Line 2 returned ... x value set, message line 4.
Line 4 returned ... done.
How cool is that? By going ahead with the code and only waiting where necessary we've reduced the waiting time to two seconds automagically! :D So yes, while the code is synchronous it tends to have a different meaning than in procedural languages.
EDIT:
Once you grasp this concept in conjunction with Godeke's post it's easy to imagine how simple it becomes to take advantage of multiple processors, server farms, redundant data stores and who knows what else.
It's likely that you're mixing up synchronous with sequential.
The body of a function in Erlang is processed sequentially.
So what Spencer said about this "automagical effect" doesn't hold true for Erlang. You could model this behaviour with Erlang, though.
For example you could spawn a process that calculates the number of words in a line.
As we're having several lines, we spawn one such process for each line and receive the answers to calculate a sum from it.
That way, we spawn processes that do the "heavy" computations (utilizing additional cores if available) and later we collect the results.
-module(countwords).
-export([count_words_in_lines/1]).

count_words_in_lines(Lines) ->
    % For each line in Lines, run spawn_summarizer with the process id (pid)
    % and a line to work on as arguments.
    % This is a list comprehension and spawn_summarizer will return the pid
    % of the process that was created. So the variable Pids will hold a list
    % of process ids.
    Pids = [spawn_summarizer(self(), Line) || Line <- Lines],
    % For each pid, receive the answer. This will happen in the same order in
    % which the processes were created, because we saved [pid1, pid2, ...] in
    % the variable Pids and now we consume this list.
    Results = [receive_result(Pid) || Pid <- Pids],
    % Sum up the results.
    WordCount = lists:sum(Results),
    io:format("We've got ~p words, Sir!~n", [WordCount]).

spawn_summarizer(S, Line) ->
    % Create an anonymous function and save it in the variable F.
    F = fun() ->
        % Split the line into words.
        ListOfWords = string:tokens(Line, " "),
        Length = length(ListOfWords),
        io:format("process ~p calculated ~p words~n", [self(), Length]),
        % Send a tuple containing our pid and Length to S.
        S ! {self(), Length}
    end,
    % There is no return in Erlang; instead, the last value in a function is
    % returned implicitly.
    % Spawn the anonymous function and return the pid of the new process.
    spawn(F).

% The variable Pid gets bound in the function head.
% In Erlang, you can only assign to a variable once.
receive_result(Pid) ->
    receive
        % Pattern matching: the block behind "->" will execute only if we receive
        % a tuple that matches the one below. The variable Pid is already bound,
        % so we are waiting here for the answer of a specific process.
        % N is unbound, so we accept any value.
        {Pid, N} ->
            io:format("Received \"~p\" from process ~p~n", [N, Pid]),
            N
    end.
And this is what it looks like, when we run this in the shell:
Eshell V5.6.5 (abort with ^G)
1> Lines = ["This is a string of text", "and this is another", "and yet another", "it's getting boring now"].
["This is a string of text","and this is another",
"and yet another","it's getting boring now"]
2> c(countwords).
{ok,countwords}
3> countwords:count_words_in_lines(Lines).
process <0.39.0> calculated 6 words
process <0.40.0> calculated 4 words
process <0.41.0> calculated 3 words
process <0.42.0> calculated 4 words
Received "6" from process <0.39.0>
Received "4" from process <0.40.0>
Received "3" from process <0.41.0>
Received "4" from process <0.42.0>
We've got 17 words, Sir!
ok
4>
The key thing that enables Erlang to scale is related to concurrency.
An operating system provides concurrency by two mechanisms:
operating system processes
operating system threads
Processes don't share state – one process can't crash another by design.
Threads share state – one thread can crash another by design – that's your problem.
With Erlang, one operating system process is used by the virtual machine, and the VM provides concurrency to the Erlang programme not by using operating system threads but by providing Erlang processes; that is, Erlang implements its own timeslicer.
These Erlang processes talk to each other by sending messages (handled by the Erlang VM, not the operating system). The Erlang processes address each other using a process ID (PID) which has a three-part address <<N3.N2.N1>>:
process no N1 on
VM N2 on
physical machine N3
Two processes on the same VM, on different VMs on the same machine, or on two different machines communicate in the same way; your scaling is therefore independent of the number of physical machines you deploy your application on (to a first approximation).
Erlang is only thread-safe in a trivial sense: it doesn't have threads. (The language, that is; the SMP/multi-core VM uses one operating system thread per core.)
You may have a misunderstanding of how Erlang works. The Erlang runtime minimizes context-switching on a CPU, but if there are multiple CPUs available, then all are used to process messages. You don't have "threads" in the sense that you do in other languages, but you can have a lot of messages being processed concurrently.
Erlang messages are purely asynchronous; if you want a synchronous reply to your message, you need to code for that explicitly. What was possibly said was that messages in a process's message box are processed sequentially. Any message sent to a process sits in that process's message box, and the process picks one message from that box, processes it, and then moves on to the next one, in the order it sees fit. This is a very sequential act, and the receive block does exactly that.
Looks like you have mixed up synchronous and sequential as chris mentioned.
Referential transparency: See http://en.wikipedia.org/wiki/Referential_transparency_(computer_science)
In a purely functional language, order of evaluation doesn't matter - in a function application fn(arg1, .. argn), the n arguments can be evaluated in parallel. That guarantees a high level of (automatic) parallelism.
Erlang uses a process model where a process can run on the same virtual machine or on a different processor; there is no way to tell. That is only possible because messages are copied between processes and there is no shared (mutable) state. Multi-processor parallelism goes a lot further than multi-threading: since threads depend on shared memory, only 8 threads can run in parallel on an 8-core CPU, while multi-processing can scale to thousands of parallel processes.