How to find finish times of processes in CPLEX - scheduling

I have a machine/batch scheduling problem. The finish time of a batch is the variable Z[b]. There are three machines (f indexes the machines). If machine f starts processing batch b at time t, then X[f][b][t] equals 1.
The parameter P[b] is the processing time of batch b. I need to find the ending times of the batches, so I tried the constraint below, where t is the length of the time horizon, for example 48 hours:
forall(p in B) Z[p] - (sum(n in F) sum(a in 1..t-P[p]+1) (a+P[p])*X[n][p][a]) == 0;
I have 3 machines, but with this constraint only 2 machines are used at time 1, and the Z[p] values are not logical. How can I fix this?

Within CPLEX you have CP Optimizer, which is good at scheduling.
To get the end of an interval variable, endOf(itvs) works fine.


Learning about multithreading. Tried to make a prime number finder

I'm studying for a uni project and one of the requirements is to include multithreading. I decided to make a prime number finder and, while it works, it's rather slow. My best guess is that this has to do with the number of threads I'm creating and destroying.
My approach was to take the range of candidate divisors below N and distribute these evenly across M threads (where M = number of cores, in my case 8); however, these threads are being created and destroyed every time N increases.
Pseudocode looks like this:
for each core
    # new thread
    for i in (range / numberOfCores) * currentCore
        if !possiblePrimeIsntActuallyPrime
            if possiblePrime % i == 0
                possiblePrimeIsntActuallyPrime = true
                return
        else
            return
This does work, but creating 8 threads for every possible prime seems to be slowing the system down.
Any suggestions on how to optimise this further?
Use thread pooling.
Create 8 threads and store them in an array. Feed each one new data when it finishes and start it again, so the threads don't have to be created and destroyed each time (see the sketch below).
Also, when calculating your range of numbers to check, only check divisors up to ceil(sqrt(N)); anything beyond that either does not divide N or pairs with a smaller factor that has already been checked. For example, ceil(sqrt(24)) is 5.
Once you have checked up to 5 you don't need to check anything else, because 6 goes into 24 four times and 4 has already been checked, 8 goes into it three times and 3 has already been checked, and so on.
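To make that concrete, here is a minimal C++ sketch of both ideas (the question names no language, so C++ is my assumption): a fixed pool of 8 long-lived threads that keep pulling new candidates instead of being recreated, and trial division that stops at the square root. The names limit, next_candidate and worker, and the shared-counter way of handing out work, are my own illustration, not the original poster's code.

#include <atomic>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// trial division; the i * i <= n test stops at sqrt(n)
bool is_prime(unsigned long long n) {
    if (n < 2) return false;
    for (unsigned long long i = 2; i * i <= n; ++i)
        if (n % i == 0) return false;
    return true;
}

int main() {
    const unsigned long long limit = 100000;        // assumed upper bound N
    const unsigned num_threads = 8;                 // M = number of cores
    std::atomic<unsigned long long> next_candidate{2};
    std::vector<unsigned long long> primes;
    std::mutex primes_mutex;

    // each worker lives for the whole run and keeps taking new candidates,
    // so no threads are created or destroyed per number
    auto worker = [&] {
        for (;;) {
            unsigned long long n = next_candidate.fetch_add(1);
            if (n >= limit) return;                 // no work left
            if (is_prime(n)) {
                std::lock_guard<std::mutex> lock(primes_mutex);
                primes.push_back(n);
            }
        }
    };

    std::vector<std::thread> pool;                  // the 8 reused threads
    for (unsigned i = 0; i < num_threads; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();

    std::cout << primes.size() << " primes below " << limit << "\n";
}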

How to set internal wall clock in a Fortran program?

I use Fortran to do some scientific computation, and I run my jobs on an HPC system. As we know, when we submit jobs to an HPC job scheduler, we also specify a wall clock time limit. However, when the time is up, if the job is still writing output data it is terminated, which leaves some 'NUL' values in the data and causes trouble for the post-processing.
So, can we set up an internal mechanism so that the job stops itself gracefully some time before the end of the allotted HPC wall clock time?
Related Question: How to skip reading "NUL" value in MATLAB's textscan function?
After realizing what you are asking, I found that I implemented similar functionality in my program very recently (commit https://bitbucket.org/LadaF/elmm/commits/f10a1b3421a3dd14fdcbe165aa70bf5c5001413f), but I still have to set the time limit manually.
The most important part:
time_stepping%clock_time_limit is the time limit in seconds. Count the number of system clock ticks corresponding to that:
! query the tick rate and the maximum value of the system clock counter
call system_clock(count_rate = timer_rate)
call system_clock(count_max = timer_max_count)

! convert the limit in seconds to a number of clock ticks,
! capped just below the maximum representable count
timer_count_time_limit = int( min(time_stepping%clock_time_limit * real(timer_rate, knd), &
                                  real(timer_max_count, knd) * 0.999_dbl), dbl)
Start the timer
call system_clock(count = time_steps_timer_count_start)
Check the timer and exit the main loop with error_exit set to .true. if the time is up
if (mod(time_step, time_stepping%check_period)==0) then
  if (master) then
    ! take the current counter value
    call system_clock(count = time_steps_timer_count_2)
    error_exit = time_steps_timer_count_2 - time_steps_timer_count_start > timer_count_time_limit
    if (error_exit) write(*,*) "Maximum clock time exceeded."
  end if
  ! MPI_Bcast the error_exit flag to the other processes here
  if (error_exit) exit
end if
Now, you may want to get the time limit from your scheduler automatically. That will vary between different job scheduling systems. There will be an environment variable like $PBS_WALLTIME; see "Get walltime in a PBS job script", but check your scheduler's manual.
You can read this variable using GET_ENVIRONMENT_VARIABLE().

Measuring concurrent loop times in Erlang

I create a ring of processes in Erlang and wish to measure both the time it takes the first message to pass through the network and the time for the entire message series; each time the first node gets the message back, it sends another one.
Right now the first node has the following code:
receive
    stop ->
        io:format("all processes stopped!~n"),
        true;
    start ->
        statistics(runtime),
        Son ! {number, 1},
        msg(PID, Son, M, 1);
    {_, M} ->
        {Time1, _} = statistics(runtime),
        io:format("The last message has arrived after ~p! ~n", [Time1*1000]),
        Son ! stop;
Of course, I start the statistics when sending the first message.
As you can see, I use Time_Since_Last_Call for the first message loop and wish to use Total_Run_Time for the entire run; the problem is that Total_Run_Time is cumulative since the first time I started the statistics.
The second thought I had was to use another process with 2 receive loops, getting the times for each, adding them, and printing, but I'm sure Erlang can do better than that.
I guess the best way to solve this is to somehow flush the Total_Run_Time, but I couldn't find how this could be done. Any ideas how this can be tackled?
One way to measure round-trip times would be to send a timestamp along with each message. When the first node receives the message back, it can then measure the round trip by subtracting that timestamp from the current time.
To calculate the total run time, I would remember the first timestamp in the process state (or the process dictionary) and compute the total when stopping the test.
Besides, given that you mention the network, are you sure that CPU time (which is what statistics(runtime) measures) is what you're after? Perhaps wall clock time would be more appropriate.

How can I periodically execute some function if this function takes a long time to run (less than the period)?

I want to run a function, for example func(), exactly once per second. However, the running time of func() is about 500 ms. How can I do that? I know that if the running time of the function were low, I could write a while loop in func() and sleep() for 1 second after each execution. But here the running time is substantial. What should I do to ensure that func() runs exactly once per second? Thanks.
You do:
1. Take the current time in start_time.
2. Perform your job.
3. Take the current time in end_time.
4. Wait for (1 second + start_time - end_time).
That way, you can perform your task every second reliably. If the task takes less time, you will wait longer, and vice versa. Note, however, that this assumes your task always takes less than 1 second to execute. In real code, you want to check for that before the sleep statement.
Implementation details depend on the platform.
Note that using this method still results in a small drift due to the time it takes to compute step 4. A more accurate alternative would be to synchronize on integer multiples of one second, so that you do not drift even over thousands of cycles.
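For example, here is a minimal C++ sketch of the four steps above (the question names no language, so C++ is just one possible choice); the func() body here merely sleeps about 500 ms to stand in for the real work:

#include <chrono>
#include <iostream>
#include <thread>

// placeholder for the real func(); burns roughly 500 ms
void func() { std::this_thread::sleep_for(std::chrono::milliseconds(500)); }

int main() {
    using namespace std::chrono;
    for (int i = 0; i < 5; ++i) {
        auto start_time = steady_clock::now();          // step 1
        func();                                         // step 2
        auto end_time = steady_clock::now();            // step 3
        auto remaining = seconds(1) - (end_time - start_time);
        if (remaining.count() > 0)                      // guard: job must not have overrun
            std::this_thread::sleep_for(remaining);     // step 4
        std::cout << "iteration " << i << " done\n";
    }
}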
It depends on the level of accuracy you need.
If you want a brute-force, easy-to-code solution, you can get the time before the first run of the function and save it in a variable (start_time), and create a repeat counter (repeat_number) that stores the index of the next run. Then you can do something like this:
1) next_run_time = ++repeat_number*1sec + start_time;
2) func();
3) wait_time = next_run_time - current_time;
4) sleep(wait_time)
5) goto 1;
This approach avoids accumulating timing error from one iteration to the next.
But for a real application you should look for an event framework or library.
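As an illustration, here is a rough C++ version of the numbered scheme above, using std::chrono and sleep_until; start_time and repeat_number follow the naming above, and do_work() is a placeholder for the real func():

#include <chrono>
#include <iostream>
#include <thread>

// placeholder job; takes roughly 500 ms
void do_work() { std::this_thread::sleep_for(std::chrono::milliseconds(500)); }

int main() {
    using namespace std::chrono;
    const auto start_time = steady_clock::now();
    for (int repeat_number = 1; repeat_number <= 5; ++repeat_number) {
        // absolute deadline of this run: start_time + repeat_number * 1 s
        const auto next_run_time = start_time + seconds(repeat_number);
        do_work();
        // sleeping until an absolute time keeps errors from accumulating
        std::this_thread::sleep_until(next_run_time);
        std::cout << "run " << repeat_number << " finished\n";
    }
}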

Timing commands in a C++ project (Windows)

I hope someone can help me with this (English is not my native language, so I apologize in advance for any grammar or spelling mistakes).
As part of a project I'm coding, I need to time some commands. More specifically: I have 2 sets of commands (let's call them set A and set B). I need to execute set A, then wait for a specific number of milliseconds (calculated in set A), then execute set B. I did this using the Sleep(time) command between the sets.
Now, I need to incorporate another set of commands (set C) that will run in a loop in the time between sets A and B instead of simply doing nothing. Meaning, instead of the program being idle as before (waiting the specified number of milliseconds), I need it to loop over set C, with the catch that it has to loop C for exactly the same time it would otherwise have spent waiting.
How can I do this without using threads? (And generally keep it as simple as possible)
I assume the "work time" of one pass over the commands in C is known, and that C is a loop which can/shall finish when the wait time has expired.
In this case I'd suggest using a performance counter to count down the wait time. Depending on what is calculated and what overhead is introduced in C, the achievable accuracy can be in the microsecond range.
Pseudo code:
Delay = 1000
Do A
CounterBegin = GetCounter()
// and now the C loop
while ((GetCounter() - CounterBegin) < Delay) {
    Do C
}
Do B
Note: the counter values have to be converted into times using the counter frequency; see the performance counter documentation for the details.
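To make the pseudocode concrete, here is a rough Windows C++ sketch using QueryPerformanceCounter and QueryPerformanceFrequency; DoA(), DoC() and DoB() are placeholders for your actual command sets, and the 1000 ms delay stands in for the value computed in set A:

#include <windows.h>
#include <iostream>

// placeholders for the real command sets
void DoA() {}
void DoC() {}
void DoB() {}

int main() {
    const double delay_ms = 1000.0;              // wait time computed in set A

    LARGE_INTEGER frequency, counter_begin, counter_now;
    QueryPerformanceFrequency(&frequency);       // counts per second

    DoA();

    QueryPerformanceCounter(&counter_begin);
    for (;;) {
        QueryPerformanceCounter(&counter_now);
        double elapsed_ms = (counter_now.QuadPart - counter_begin.QuadPart)
                            * 1000.0 / frequency.QuadPart;
        if (elapsed_ms >= delay_ms) break;       // wait time expired
        DoC();                                   // one more iteration of set C
    }

    DoB();
}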