MPI Fortran code on top of another one - Fortran

I wrote an MPI Fortran program that I need to run multiple times (for clarity, let's call this program P1). The minimum number of cores I can use to run a job is 512. The problem is that P1 scales best with 128 cores.
What I want to do is create another program (P2) on top of P1 that calls P1 four times simultaneously, each call running on 128 cores.
Basically, I need to run four instances of the call simultaneously, each with a number of processes equal to the total number of processors divided by four.
Do you think this is possible? My problem is that I don't know where to look to do this.
I am currently looking at MPI groups and communicators; am I on the right path to reach my goal?
EDIT:
The system scheduler is LoadLeveler. When I submit a job, I need to specify how many nodes I need. There are 16 cores per node, and the minimum number of nodes I can use is 32. In the batch script we also specify -np NBCORES, but if we do so, e.g. -np 128, the time consumed is accounted as if we were using 512 cores (32 nodes), even though the job ran on only 128 cores.
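For reference, here is roughly what the communicator route I was looking at would do: MPI_COMM_WORLD is split into four sub-communicators, and each group of 128 ranks runs its own instance. A minimal C++ sketch (the same MPI calls exist in Fortran); run_p1 is a hypothetical stand-in for P1 refactored to take a communicator instead of MPI_COMM_WORLD:

#include <mpi.h>
#include <cstdio>

// Hypothetical stand-in for P1's real work, refactored to run on an
// arbitrary communicator instead of MPI_COMM_WORLD.
void run_p1(MPI_Comm comm) {
    int r, n;
    MPI_Comm_rank(comm, &r);
    MPI_Comm_size(comm, &n);
    std::printf("sub-rank %d of %d\n", r, n);
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // With 512 ranks, ranks 0-127 get color 0, 128-255 color 1, and so on,
    // so each color forms one 128-process instance of P1.
    int color = world_rank / 128;

    MPI_Comm subcomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

    run_p1(subcomm);  // every group of 128 ranks runs its own copy

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}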

I was able to do it thanks to your answers.
As I mentioned in the edit (sorry for that), the scheduler is LoadLeveler.
If you have access to the subblock module, follow this: http://www.hpc.cineca.it/content/batch-scheduler-loadleveler-0#sub-block as Hristo Iliev mentioned.
If you don't, you can do a multistep job with no dependencies between the steps, so they will be executed simultaneously. It is a classic multistep job; you just have to remove any #dependency flags (in the case of LoadLeveler). A sketch is below.
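For illustration only, a multistep LoadLeveler command file along these lines might look roughly like the following. This is a hedged sketch: the step sizes, the executable line, and any class/wall-clock keywords are site-specific assumptions. The point is the absence of any # @ dependency lines, which is what leaves the steps free to run at the same time:

# @ job_name       = run_P1_x4
# @ job_type       = parallel
# @ step_name      = instance_1
# @ node           = 8
# @ tasks_per_node = 16
# @ executable     = p1
# @ queue
# @ step_name      = instance_2
# @ node           = 8
# @ tasks_per_node = 16
# @ executable     = p1
# @ queue
# steps instance_3 and instance_4 would follow the same pattern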

Related

Dynamically Evaluate load and create Threads depending on machine performance

Hi, I have started to work on a project where I use parallel computing to split job loads among multiple machines, for things such as hashing and other forms of mathematical calculation. I'm using C++.
It runs on a master/slave (or server/client, if you prefer) model, where every client connects to the server and waits for a job. The server can then take a job and split it depending on the number of clients:
1000 jobs --> 3 clients
e.g.: Client 1 --> calculate(0 to 333)
Client 2 --> calculate(334 to 666)
Client 3 --> calculate(667 to 999)
I want to further enhance the speed by creating multiple threads on every running client. But since the machines are almost certainly not going to have the same hardware, I cannot arbitrarily decide on a number of threads to run on every client.
I would like to know if any of you knows a way to evaluate the load a thread puts on the CPU and to extrapolate the number of threads that can run concurrently on the machine.
There are two ways I can see of doing this:
Start threads one by one, evaluating the CPU load each time, and stop when a preset ceiling (50%, 75%, etc.) is reached. This has the flaw that I will have to stop and re-split the job every time I start a new thread.
(And this is the more complex option.) Run some kind of test thread, calculate its impact on the CPU's base load, extrapolate the number of threads that can run on the machine, and then start the threads and split the jobs accordingly.
Any ideas or pointers are welcome. Thanks in advance!
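One building block worth knowing about, as a small C++ sketch: instead of probing the load at runtime, the standard library can report how many hardware threads each machine supports, and each client could send that number to the server so the server weights the split per machine. This does not measure existing load, so it complements rather than replaces the two ideas above:

#include <iostream>
#include <thread>

int main() {
    // Number of concurrent threads the hardware supports; typically the
    // core count, or twice that with hyper-threading. The standard allows
    // this to return 0 when unknown, so fall back to 1.
    unsigned int n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;

    // A client could report n to the server, which would then hand this
    // machine a share of the 1000 jobs proportional to n.
    std::cout << "this client can run about " << n << " threads\n";
    return 0;
}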

How to make TensorFlow use more available CPU

How can I fully utilize each of my EC2 cores?
I'm using a c4.4xlarge AWS Ubuntu EC2 instance and TensorFlow to build a large convolutional neural network. nproc says that my EC2 instance has 16 cores. When I run my convnet training code, the top utility says that I'm only using 400% CPU. I was expecting it to use 1600% CPU because of the 16 cores. The AWS EC2 monitoring tab confirms that I'm only using 25% of my CPU capacity. This is a huge network; on my new Mac Pro it consumes about 600% CPU and takes a few hours to build, so I don't think the reason is that my network is too small.
I believe the line below ultimately determines CPU usage:
sess = tf.InteractiveSession(config=tf.ConfigProto())
I admit I don't fully understand the relationship between threads and cores, but I tried increasing the number of threads, as below. It had the same effect as the line above: still 400% CPU.
NUM_THREADS = 16
sess = tf.InteractiveSession(config=tf.ConfigProto(intra_op_parallelism_threads=NUM_THREADS))
EDIT:
htop shows that I am actually using all 16 of my EC2 cores, but each core is only at about 25%.
top shows that my total CPU % is around 400%, but occasionally it will shoot up to 1300% and then almost immediately drop back down to ~400%. This makes me think there could be a deadlock problem.
Several things you can try:
Increase the number of threads
You already tried changing intra_op_parallelism_threads. Depending on your network, it can also make sense to increase inter_op_parallelism_threads. From the docs:
inter_op_parallelism_threads: Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. 0 means the system picks an appropriate number.
intra_op_parallelism_threads: The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. 0 means the system picks an appropriate number.
(Side note: the values from the configuration file referenced above are not the actual default values TensorFlow uses, just example values. You can see the actual default configuration by manually inspecting the object returned by tf.ConfigProto().)
TensorFlow uses 0 for both of the above options by default, meaning it tries to choose appropriate values itself. I don't think TensorFlow picked poor values that caused your problem, but you can try out different values for the above options to be on the safe side.
Extract traces to see how well your code parallelizes
Have a look at: tensorflow code optimization strategy
It gives you a timeline trace like the picture in that answer (the image is not reproduced here). In such a trace you can see that the actual computation happens on far fewer threads than are available; this could also be the case for your network. The trace also shows potential synchronization points, where all threads are active for a short moment, which is potentially the reason for the sporadic peaks in CPU utilization that you experience.
Miscellaneous
Make sure you are not running out of memory (htop)
Make sure you are not doing a lot of I/O or something similar

How can I tell if I am using all of my cores to the maximum level

I have a time-critical application that processes a sequence of images coming from a camera. It is written in C++ and uses the Qt, OpenCV, and Boost libraries. It is going to run on a dedicated PC.
Currently, the GUI runs in the main thread and I open a new thread for image processing. I didn't bother to divide the processing section into threads because I think OpenCV already does that. However, I am having trouble staying within the maximum tolerable delay.
My question is: how can I tell whether my application is using all the cores to the maximum level?
When I look at the performance monitor, the pattern I see is really strange. The CPU usage is around 35-40%; all the cores are working, but not at full throttle.
Am I doing something wrong?
You are not doing anything wrong; however, you could change your code to make full use of the CPU cores by doing the following (a sketch of the first two points follows below):
1 - Set the core affinity so that the thread does not migrate from one core to another; this can improve cache usage (L1 and maybe L2).
2 - Set the scheduling policy of the thread to FIFO so it does not get context-switched before finishing its processing.
3 - Run that thread in a higher-priority process (this requires root privileges for the process).
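A minimal Linux sketch of points 1 and 2 using the POSIX APIs; pin_and_make_fifo is just an illustrative helper name, and the SCHED_FIFO call will fail without root privileges or CAP_SYS_NICE:

#define _GNU_SOURCE  // for pthread_setaffinity_np on glibc
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to one core and request FIFO scheduling.
bool pin_and_make_fifo(int core_id) {
    // Point 1: restrict this thread to a single core.
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core_id, &set);
    if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0)
        return false;

    // Point 2: FIFO threads run until they block or yield, so they are
    // not context-switched away mid-computation (needs privileges).
    sched_param sp{};
    sp.sched_priority = sched_get_priority_min(SCHED_FIFO);
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp) == 0;
}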
Cheers

Unbalanced load (v2.0) using MPI

(the problem is embarrassingly parallel)
Consider an array of 12 cells:
|__|__|__|__|__|__|__|__|__|__|__|__|
and four (4) CPUs.
Naively, I would run 4 parallel jobs, feeding 3 cells to each CPU:
|__|__|__|__|__|__|__|__|__|__|__|__|
=========|========|========|========|
  1 CPU     2 CPU    3 CPU    4 CPU
BUT it appears that each cell has a different evaluation time: some cells are evaluated very quickly, and some are not.
So, instead of wasting time on a "relaxed" (idle) CPU, I am thinking of feeding ONE cell to EACH CPU at a time and continuing until the entire job is done.
Namely:
at the beginning:
|____|____|____|____|____|____|____|____|____|____|____|____|
1cpu 2cpu 3cpu 4cpu
if 2cpu finishes its job at cell 2, it can jump to the first untouched cell, 5, and continue working:
|____|done|____|____|____|____|____|____|____|____|____|____|
1cpu      3cpu 4cpu 2cpu
|-------------->
if 1cpu finishes, it can take the sixth cell:
|done|done|____|____|____|____|____|____|____|____|____|____|
          3cpu 4cpu 2cpu 1cpu
|------------------------>
and so on, until the full array is done.
QUESTION:
I do not know a priori which cells are "quick" and which are "slow", so I cannot distribute the CPUs according to the load (more CPUs for the slow cells, fewer for the quick ones).
How can one implement such an algorithm for dynamic load balancing with MPI?
Thanks!!!!!
UPDATE
I use a very simple approach to divide the entire job into chunks, with MPI-IO:
given array[NNN] and nprocs (the number of available working units):
for (int i = 0; i < NNN/nprocs; ++i)
{
    do_what_I_need(start + i);
}
MPI_File_write(...);
where "start" corresponds to particular rank number. In simple words, I divide the entire NNN array into fixed size chunk according to the number of available CPU and each CPU performs its chunk, writes the result to (common) output and relaxes.
IS IT POSSIBLE to change the code (Not to completely re-write in terms of Master/Slave paradigm) in such a way, that each CPU will get only ONE iteration (and not NNN/nprocs) and after it completes its job and writes its part to the file, will Continue to the next cell and not to relax.
Thanks!
There is a well known parallel programming pattern, known under many names, some of which are: bag of tasks, master / worker, task farm, work pool, etc. The idea is to have a single master process, which distributes cells to the other processes (workers). Each worker runs an infinite loop in which it waits for a message from the master, computes something and then returns the result. The loop is terminated by having the master send a message with a special tag. The wildcard tag value MPI_ANY_TAG can be used by the worker to receive messages with different tags.
The master is more complex. It also runs a loop, but until all cells have been processed. Initially it sends each worker a cell and then starts the loop. In this loop it receives a message from any worker using the wildcard source value MPI_ANY_SOURCE and, if there are more cells to be processed, sends one of them to the same worker that has returned the result. Otherwise it sends a message with the tag set to the termination value.
There are many, many readily available implementations of this model on the Internet, and even some on Stack Overflow (for example this one). Mind that this scheme requires one additional MPI process that often does very little work. If this is unacceptable, one can run a worker loop in a separate thread.
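A minimal C++ sketch of this bag-of-tasks loop, under stated assumptions: each task is a single cell index, each result a single double, and do_what_I_need from the update is reduced here to a stand-in that returns its result; tag 0 marks work and tag 1 termination:

#include <mpi.h>

const int TAG_WORK = 0;  // message carries a cell index or a result
const int TAG_STOP = 1;  // termination signal
const int NNN = 12;      // number of cells

double do_what_I_need(int cell) { return cell * 2.0; }  // stand-in for the real work

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0) {  // master: hand out cells as workers become free
        int next = 0, active = 0, stop = -1;
        for (int w = 1; w < nprocs; ++w) {  // seed every worker once
            if (next < NNN) { MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD); ++next; ++active; }
            else            { MPI_Send(&stop, 1, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD); }
        }
        while (active > 0) {  // collect a result, then refill or terminate that worker
            double result;
            MPI_Status st;
            MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_WORK, MPI_COMM_WORLD, &st);
            if (next < NNN) { MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD); ++next; }
            else            { MPI_Send(&stop, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP, MPI_COMM_WORLD); --active; }
        }
    } else {  // worker: receive a cell, compute, send the result back, until stopped
        for (;;) {
            int cell;
            MPI_Status st;
            MPI_Recv(&cell, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            double result = do_what_I_need(cell);
            MPI_Send(&result, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}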
You want to implement a kind of client-server architecture where you have workers asking the server for work whenever they are out of work.
Depending on the size of the chunks and the speed of your communication between workers and server, you may want to adjust the size of the chunks sent to workers.
To answer your updated question:
Under the master/slave (or worker-pool, if that's how you prefer to label it) model, you will basically need a task scheduler. The master should have information about what work has been done and what still needs to be done. The master gives each process some work to do, then sits and waits until a process completes (using nonblocking receives and a wait such as MPI_Waitany). Once a process completes, have it send the data to the master and then wait for the master to respond with more work. Continue this until the work is done.

Open MPI: how to run exactly 1 process per host

Actually I have 3 questions. Any input is appreciated. Thank you!
1) How do I run exactly 1 process on each host? My application uses TBB for multi-threading. Does this mean that I should run exactly 1 process on each host for best performance?
2) My cluster has heterogeneous hosts: some hosts have better CPUs and more memory than others. How do I map process ranks to real hosts for work-distribution purposes? I am thinking of using the hostname. Is there a better way to do it?
3) How are process ranks assigned? Which process gets rank 0?
1) TBB splits loops into several threads of a thread pool to utilize all processors of one machine. So you should only run one process per machine. More processes would fight with each other for processor time. The number of processes per machine is given by options in your hostfile:
# my_hostfile
192.168.0.208 slots=1 max_slots=1
...
2) To give each machine an appropriate amount of work according to its performance is not trivial.
The easiest approach is to split the workload into small pieces of work, send them to the slaves, collect their answers, and give them new pieces of work, until you are done. There is an example on my website (in German). You can also find some references to manuals and tutorials there.
3) Each process gets a number (processID) in your program by
MPI_Comm_rank(MPI_COMM_WORLD, &processID);
The master has processID == 0. The others are perhaps given ranks in the order of the slots in your hostfile; another possibility is that they are assigned in the order in which the connections to the slaves are established. I don't know for sure.
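To check the actual assignment on a given cluster, and to obtain the hostname suggested in question 2, a small C++ sketch like this prints where each rank landed (MPI_Get_processor_name typically returns the hostname):

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // The name of the node this rank is running on (usually the hostname).
    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);

    std::printf("rank %d runs on %s\n", rank, name);

    MPI_Finalize();
    return 0;
}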