How can I get these tasks to run concurrently, so that the "Hello World from N" messages get mixed up?
My output always looks like this, except that 1, 2 and 3 may be interchanged.
Hello World from 1!
Hello World from 2!
Hello World from 3!
It does not look like the tasks run concurrently. It looks like they run in a chain, on a first-come, first-served basis.
main.adb
with Ada.Text_IO;

procedure Main is

   type Runnable_Type is access procedure;

   task type Filter (Runnable_Access : Runnable_Type) is
      entry start;
   end Filter;

   task body Filter is
   begin
      accept start;
      Runnable_Access.all;
   end Filter;

   procedure Run_1 is
   begin
      Ada.Text_IO.Put_Line ("Hello World from 1!");
   end Run_1;

   procedure Run_2 is
   begin
      Ada.Text_IO.Put_Line ("Hello World from 2!");
   end Run_2;

   procedure Run_3 is
   begin
      Ada.Text_IO.Put_Line ("Hello World from 3!");
   end Run_3;

   Filter_1 : Filter (Run_1'Access);
   Filter_2 : Filter (Run_2'Access);
   Filter_3 : Filter (Run_3'Access);

begin
   Filter_1.start;
   Filter_2.start;
   Filter_3.start;
end Main;
Using Text_IO.Put_Line will most likely write the entire line in one operation, although it's plausible that it might be two operations (one to output the characters of the string, and one to output the newline). But the OS call that outputs the string (possibly without the newline) is probably a single call, and that OS operation may be uninterruptible, or it may go so fast that it would be very difficult to interrupt with a thread switch. In any event, this probably does not output one character at a time. (I'm assuming you're running on a Linux or Windows system or similar, as opposed to an embedded system with a minimal runtime or the like.)
You could output one character at a time yourself:
procedure Output_String (S : String) is
begin
   for I in S'Range loop
      Ada.Text_IO.Put (S (I));
      --  delay 0.0;
   end loop;
   Ada.Text_IO.New_Line;
end Output_String;
and then have your Run procedures call this instead of Text_IO.Put_Line. If it doesn't work without the delay 0.0, try it with the delay uncommented, as it may cause the program to look for some other ready task of the same priority to run. I'm not guaranteeing anything, though.
The tasks are running concurrently. They're just not doing enough for this concurrency to be visible. Add more work to each task, e.g. repeatedly printing out a line of text, and you will see it.
As Jack notes, they are run concurrently. Putting a busy-loop in different parts of the Run_* procedures shows this.
with Ada.Text_IO;

procedure Main is

   type Runnable_Type is access procedure;

   task type Filter (Runnable_Access : Runnable_Type) is
      entry start;
   end Filter;

   task body Filter is
   begin
      accept start;
      Runnable_Access.all;
   end Filter;

   procedure Run_1 is
      Counter : Integer := 0;
   begin
      for I in 1 .. 1_000_000 loop
         Counter := Counter + 1;
      end loop;
      Ada.Text_IO.Put_Line ("Hello World from 1a!");
      Ada.Text_IO.Put_Line ("Hello World from 1b!");
      Ada.Text_IO.Put_Line ("Hello World from 1c!");
   end Run_1;

   procedure Run_2 is
      Counter : Integer := 0;
   begin
      Ada.Text_IO.Put_Line ("Hello World from 2a!");
      for I in 1 .. 1_000_000 loop
         Counter := Counter + 1;
      end loop;
      Ada.Text_IO.Put_Line ("Hello World from 2b!");
      Ada.Text_IO.Put_Line ("Hello World from 2c!");
   end Run_2;

   procedure Run_3 is
      Counter : Integer := 0;
   begin
      Ada.Text_IO.Put_Line ("Hello World from 3a!");
      Ada.Text_IO.Put_Line ("Hello World from 3b!");
      for I in 1 .. 1_000_000 loop
         Counter := Counter + 1;
      end loop;
      Ada.Text_IO.Put_Line ("Hello World from 3c!");
   end Run_3;

   Filter_1 : Filter (Run_1'Access);
   Filter_2 : Filter (Run_2'Access);
   Filter_3 : Filter (Run_3'Access);

begin
   Filter_1.start;
   Filter_2.start;
   Filter_3.start;
end Main;
Output is:
Hello World from 2a!
Hello World from 3a!
Hello World from 3b!
Hello World from 2b!
Hello World from 2c!
Hello World from 1a!
Hello World from 1b!
Hello World from 1c!
Hello World from 3c!
I am new to Ada.
I have declared my new task type and stored three of them in a pool. Then I want to run every task in a loop.
The expected behavior is that all of them are executed at the same time.
In reality, they are executed one after another: tasks(2) is not executed until tasks(1) has terminated. In fact, tasks(2) may never be executed at all, since by then it has terminated due to the select's delay alternative.
My code:
with Counter;

procedure Main is

   task type CounterTask is
      entry Execute (t : in Counter.Timeout; d : in Duration);
   end CounterTask;

   task body CounterTask is
   begin
      MyLoop : loop
         select
            accept Execute (t : in Counter.Timeout; d : in Duration) do
               Counter.Run (t, d);
            end Execute;
         or
            delay 2.0;
            exit;
         end select;
      end loop MyLoop;
   end CounterTask;

   tasks : array (1 .. 3) of CounterTask;

begin
   for i in Integer range 1 .. 3 loop
      tasks (i).Execute (Counter.Timeout (10 * i), Duration (0.5 * i));
   end loop;
end Main;
Any hints or ideas will be most welcome!
When your main program calls the entry Execute and the task reaches the accept statement

accept Execute (t : in Counter.Timeout; d : in Duration) do
   Counter.Run (t, d);
end Execute;

the caller is blocked until the end Execute. You don't show Counter.Run, but I guess that there's a delay t (or d?) in there.
You need to copy Execute's parameters to local task variables within the accept statement, and only then call Counter.Run; that way, both the main program and the CounterTask are free to proceed.
task body CounterTask is
   Timeout : Counter.Timeout;
   Dur     : Duration;
begin
   MyLoop :
   loop
      select
         accept Execute (t : in Counter.Timeout; d : in Duration) do
            Timeout := t;
            Dur     := d;
         end Execute;
         Counter.Run (Timeout, Dur);
      or
         delay 2.0;
         exit;
      end select;
   end loop MyLoop;
end CounterTask;
Apart from moving Counter.Run out of the accept block (as Simon Wright just explained), you might also want to consider using a synchronization barrier (see also ARM D.10.1):
with Counter;
with Ada.Synchronous_Barriers;

procedure Main is

   use Ada.Synchronous_Barriers;

   Num_Tasks : Positive := 3;
   Sync      : Synchronous_Barrier (Num_Tasks);

   task type Counter_Task is
      entry Execute (T : in Counter.Timeout; D : in Duration);
   end Counter_Task;

   task body Counter_Task is
      Notified     : Boolean;
      The_Timeout  : Counter.Timeout;
      The_Duration : Duration;
   begin
      MyLoop : loop
         select
            accept Execute (T : in Counter.Timeout; D : in Duration) do
               The_Timeout  := T;
               The_Duration := D;
            end Execute;
            --  Synchronize tasks: wait until all 3 tasks have arrived at this point.
            Wait_For_Release (Sync, Notified);
            Counter.Run (The_Timeout, The_Duration);
         or
            delay 2.0;
            exit;
         end select;
      end loop MyLoop;
   end Counter_Task;

   Tasks : array (1 .. Num_Tasks) of Counter_Task;

begin
   for K in Tasks'Range loop
      Tasks (K).Execute
        (Counter.Timeout (K * 10),
         Duration (Duration (0.5) * K));
   end loop;
end Main;
New to Go. I'm attempting to code an "assembly line" where multiple functions act like workers and pass some data structure to each other down the line, each doing something to the data structure.
package main

import (
    "fmt"
    "os"
    "strconv"
)

type orderStruct struct {
    orderNum, capacity int
    orderCode          uint64
    box                [9]int
}

func position0(in chan orderStruct) {
    order := <-in
    if (order.orderCode<<63)>>63 == 1 {
        order.box[order.capacity] = 1
        order.capacity += 1
    }
    fmt.Println(" filling box {", order.orderNum, order.orderCode, order.box, order.capacity, "} at position 0")
}

func startOrder(in chan orderStruct) {
    order := <-in
    fmt.Printf("\nStart an empty box for customer order number %d , request number %d\n", order.orderNum, order.orderCode)
    fmt.Println(" starting box {", order.orderNum, order.orderCode, order.box, order.capacity, "}")
    d := make(chan orderStruct, 1)
    go position0(d)
    d <- order
}

func main() {
    var orders [10]orderStruct
    numOrders := len(os.Args) - 1
    var x int
    for i := 0; i < numOrders; i++ {
        x, _ = strconv.Atoi(os.Args[i+1])
        orders[i].orderCode = uint64(x)
        orders[i].orderNum = i + 1
        orders[i].capacity = 0
        for j := 0; j < 9; j++ {
            orders[i].box[j] = 0
        }
        c := make(chan orderStruct)
        go startOrder(c)
        c <- orders[i]
    }
}
So basically the issue I'm having is that the print statements in startOrder() execute fine, but when I try to pass the struct to position0(), nothing is printed. Am I misunderstanding how channels work?
Pipelines are a great place to start when learning to program concurrently in Go. Nick Craig-Wood's answer provides a working solution to this specific challenge.
There is a whole range of other ways to use concurrency in Go. Broadly, there are three categories divided according to what is being treated as concurrent:
Functional decomposition - Creating pipelines of several functions is a good way to get started - and is your question's topic. It's quite easy to think about and quite productive. However, if it progresses to truly parallel hardware, it's quite hard to balance the load well. Everything goes at the speed of the slowest pipeline stage.
Geometric decomposition - Dividing the data up into separate regions that can be processed independently (or without too much communication). Grid-based systems are popularly used in certain domains of scientific high-performance computing, such as weather-forecasting.
Farming - Identifying how the work to be done can be chopped into (a large number of) tasks and these tasks can be allocated to 'workers' one by one until all are completed. Often, the number of tasks far exceeds the number of workers. This category includes all the so-called 'embarrassingly parallel' problems (embarrassing because if you fail to get your high-performance system to give linear speed-up, you look a bit daft).
I could add a fourth category of hybrids of several of the above.
There is quite a lot of literature about this, including much from the days of Occam programming in the '80s and '90s. Go and Occam both use CSP message passing so the issues are similar. I would single out the helpful book Practical Parallel Processing: An introduction to problem solving in parallel (Chalmers and Tidmus 1996).
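To make the farming category concrete in Go (a sketch of my own, not code from the question, and the function name farm is mine): a fixed pool of workers pulls task numbers from a shared jobs channel until it is closed, and the results are collected on a second channel.

```go
package main

import (
    "fmt"
    "sync"
)

// farm hands n tasks out to the given number of workers, one by one,
// and returns the sum of the results (here each "task" just squares
// its number, a stand-in for real work).
func farm(n, workers int) int {
    jobs := make(chan int)
    results := make(chan int)
    wg := new(sync.WaitGroup)

    // The workers compete for tasks from the jobs channel.
    for w := 0; w < workers; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := range jobs {
                results <- j * j
            }
        }()
    }

    // Feed the tasks in, then close the channel so the workers exit.
    go func() {
        for j := 1; j <= n; j++ {
            jobs <- j
        }
        close(jobs)
    }()

    // Close results once every worker has finished.
    go func() {
        wg.Wait()
        close(results)
    }()

    sum := 0
    for r := range results {
        sum += r
    }
    return sum
}

func main() {
    fmt.Println(farm(9, 3)) // sum of the squares 1..9, i.e. 285
}
```

Because the tasks outnumber the workers and are handed out one by one, a slow worker simply takes fewer tasks; the load balances itself, which is exactly what makes farming attractive compared to a fixed pipeline.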
I've attempted to re-write what you've written so that it works properly. You can run it on the playground.
The main differences are
only two go routines are started - these act as the two workers on the production line - one taking orders and the other filling boxes
use of sync.WaitGroup to find out when they end
use of for x := range channel
use of close(c) to signal end of channel
you could start multiple copies of each worker and the code would still work fine (repeat the wg.Add(1); go startOrders(c, wg) code)
Here is the code
package main

import (
    "fmt"
    "sync"
)

type orderStruct struct {
    orderNum, capacity int
    orderCode          uint64
    box                [9]int
}

func position0s(in chan orderStruct, wg *sync.WaitGroup) {
    defer wg.Done()
    for order := range in {
        if (order.orderCode<<63)>>63 == 1 {
            order.box[order.capacity] = 1
            order.capacity += 1
        }
        fmt.Println(" filling box {", order.orderNum, order.orderCode, order.box, order.capacity, "} at position 0")
    }
}

func startOrders(in chan orderStruct, wg *sync.WaitGroup) {
    defer wg.Done()
    d := make(chan orderStruct)
    wg.Add(1)
    go position0s(d, wg)
    for order := range in {
        fmt.Printf("\nStart an empty box for customer order number %d , request number %d\n", order.orderNum, order.orderCode)
        fmt.Println(" starting box {", order.orderNum, order.orderCode, order.box, order.capacity, "}")
        d <- order
    }
    close(d)
}

func main() {
    var orders [10]orderStruct
    numOrders := 4
    var x int = 10
    wg := new(sync.WaitGroup)
    c := make(chan orderStruct)
    wg.Add(1)
    go startOrders(c, wg)
    for i := 0; i < numOrders; i++ {
        orders[i].orderCode = uint64(x)
        orders[i].orderNum = i + 1
        orders[i].capacity = 0
        for j := 0; j < 9; j++ {
            orders[i].box[j] = 0
        }
        c <- orders[i]
    }
    close(c)
    wg.Wait()
}
I would like to thank everyone in advance for reading this. I have a very specific issue: I need to control three USRPs simultaneously from Matlab. Here is my serial solution:
frameLength = 180;
% transmit
R = zeros(frameLength, 8000);
i = 1; j = 1; k = 1; nStop = 8000;
while (j < nStop)            % three times the length of a frame
    step(hTx1, data1(:,i));  % Tx1 transmits data in interval i
    step(hTx2, data2(:,i));  % Tx2 transmits data in interval i
    [Y, LEN] = step(hRx);    % receiver receives data from Tx1 and Tx2 (but shifted)
    data = Y;                % just reorganized
    if mod(i, nPacket) == 0  % end of column (packet); start a new packet
        i = 0;
    end
    if LEN == frameLength    % just reorganizing
        R(:,k) = data;
        k = k + 1
        if k == nStop
            break;           % end
        end
    end
    i = i + 1;
    j = j + 1
end
This solution has one problem: it is not fully synchronized, because the step functions are executed serially, so there is a small delay between the signals from Tx1 and Tx2 at the receiver.
If I try this with parfor, assuming matlabpool invokes 4 workers (cores), it gives an error right on the first step function, because multiple workers try to execute the same function, which causes a collision. step is the Matlab routine to access a Universal Software Radio Peripheral (USRP). When one core is already executing that command with some USRP as its argument, that USRP is busy, and another call to the command causes an error.
Unfortunately, there is no scheduling for parallel loops that would assign the individual step commands to specific cores.
My question is: is there any way to parallelize at least the three step commands while preventing collisions between cores? If only these three step commands run in parallel, the rest can be done serially; that doesn't matter.
In the worst case it could be done by invoking three Matlab instances, where every instance controls one USRP, and before the step command some external routine (like an x-bit counter in C, for instance) could synchronize the tasks.
I've already tried to use this semaphore routine to create a barrier at which every core stops and waits before the step commands: http://www.mathworks.com/matlabcentral/fileexchange/45504-semaphore-posix-and-windows
This example is shown here:
function init()
    (1) exitThreads = false;       % used to exit func1, func2, func3 threads
    (2) cntMutexKey = 5;           % mutex for doneCnt
    (3) doneCnt = 0;               % func1-3 increment this when they finish
    (4) barrierCnt = 0;            % global
    (5) barrierKey = 7;            % global
    (6) paralellTasksDoneKey = 8;  % global, semaphore to tell main loop when func1-3 are done
    (7) semaphore('create', cntMutexKey, 1);
    (8) semaphore('create', barrierKey, 4);  % created with count 4; we want to bring it down to 0
    (9) semaphore('wait', barrierKey);   % now it has 3
    (10) semaphore('wait', barrierKey);  % now it has 2
    (11) semaphore('wait', barrierKey);  % now it has 1
    (12) semaphore('wait', barrierKey);  % now it has 0
    (13) semaphore('create', paralellTasksDoneKey, 1);
    (14) semaphore('wait', paralellTasksDoneKey);  % set it to 0
    (15) funList = {@func1, @func2, @func3};
    (16) matlabpool
    (17) parfor i = 1:length(funList)  % start 3 threads
        funList{i}();
    (18) end
    (jump to) mycycle();    % now run your code
    exitThreads = true;     % tell func1-3 to exit
end
function func1()
    global exitThreads
    while (~exitThreads)
        barrier();
        step(hTx1, data1(:,l));
        done();
    end
end
function func2()
    global exitThreads
    while (~exitThreads)
        barrier();
        step(hTx2, data2(:,l));
        done();
    end
end

function func3()
    global exitThreads Y LEN
    while (~exitThreads)
        barrier();
        [Y, LEN] = step(hRx);  % need [Y, LEN] global or something, and run "parfor j=1:8000" sequentially, not in parallel
        done();
    end
end
(25) function barrier()
    (26) semaphore('wait', cntMutexKey);  % init to 1; after 3 cores increment it to 4, it proceeds
    (27) barrierCnt = barrierCnt + 1;     % changed from barrierCnt += 1
    (28) if (barrierCnt == 4)  % we now know that func1, func2, func3 and your code are all at the barrier
        (29) barrierCnt = 0;   % reset count
        (30) semaphore('post', cntMutexKey);
        (31) semaphore('post', barrierKey);  % increment barrier count, so a func will run
        (32) semaphore('post', barrierKey);  % increment barrier count, so a func will run
        (33) semaphore('post', barrierKey);  % increment barrier count, so a func will run
    else
        (34) semaphore('post', cntMutexKey);
        (get stuck here) semaphore('wait', barrierKey);  % wait for other threads (the barrier)
    end
end
function done()
    semaphore('wait', doneKey);
    doneCnt = doneCnt + 1;  % changed from doneCnt += 1
    if (doneCnt == 3)
        semaphore('post', paralellTasksDoneKey);
        doneCnt = 0;  % reset counter
    end
    semaphore('post', doneKey);
end
function mycycle()
    (19) global paralellTasksDoneKey Y LEN data
    (21) for j = 1:8000  % three times send and receive a frame with nPackets
        (22) i = 1;      % example is done with this loop handled sequentially
        (23) l = 1;      % more complex to allow this in parallel, but it's not necessary
        (24) k = 1;
        (jump to) barrier();  % want the loop to stop here & allow func1, func2, func3 to do their work
        semaphore('wait', paralellTasksDoneKey);  % wait for func1, func2, func3 to finish
        data = Y;
        if mod(i, nPacket) == 0  % end of frame
            i = 0;
        end
        if LEN == frameLength
            R(:,k) = data;
            k = k + 1;
        end
        i = i + 1;
        l = l + 1;
    end
end
*Note: the numbers and jumps in parentheses indicate the flow of the program, step by step, as seen in the debugger. The program ends up stuck at the point marked "(get stuck here)".
Or it could maybe be done by using the OpenMP library in C to run those commands in parallel, but I have no experience with that; I am not so skilled a programmer. See http://bisqwit.iki.fi/story/howto/openmp/#Sections
Sorry for the somewhat long post, but I wanted to show you my attempted solutions (not fully mine), because they may be helpful for anyone who reads this and is more skilled. I will be thankful for any kind of help or advice. Have a nice day, all of you.
I would suggest using SPMD rather than PARFOR for this sort of problem. Inside SPMD, you can use labBarrier to synchronise the workers.
I'm sure there is a simple explanation for this trivial situation, but I'm new to the Go concurrency model.
When I run this example
package main

import "fmt"

func main() {
    c := make(chan int)
    c <- 1
    fmt.Println(<-c)
}
I get this error:
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan send]:
main.main()
/home/tarrsalah/src/go/src/github.com/tarrsalah/tour.golang.org/65.go:8 +0x52
exit status 2
Why?
Wrapping c <- 1 in a goroutine makes the example run as expected:
package main

import "fmt"

func main() {
    c := make(chan int)
    go func() {
        c <- 1
    }()
    fmt.Println(<-c)
}
Again, why?
Please, I need a deep explanation, not just how to eliminate the deadlock and fix the code.
From the documentation:
If the channel is unbuffered, the sender blocks until the receiver has received the value. If the channel has a buffer, the sender blocks only until the value
has been copied to the buffer; if the buffer is full, this means
waiting until some receiver has retrieved a value.
Said otherwise:
when a channel is full, the sender waits for another goroutine to make some room by receiving;
you can see an unbuffered channel as an always-full one: there must be another goroutine to take what the sender sends.
This line
c <- 1
blocks because the channel is unbuffered. As there's no other goroutine to receive the value, the situation can't resolve itself; this is a deadlock.
You can make it not blocking by changing the channel creation to
c := make(chan int, 1)
so that there's room for one item in the channel before it blocks.
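Put together, that one-line change makes the original program run to completion:

```go
package main

import "fmt"

func main() {
    c := make(chan int, 1) // buffer of one: the send completes without a waiting receiver
    c <- 1
    fmt.Println(<-c) // prints 1
}
```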
But that's not what concurrency is about. Normally, you wouldn't use a channel without other goroutines to handle what you put inside. You could define a receiving goroutine like this :
func main() {
    c := make(chan int)
    go func() {
        fmt.Println("received:", <-c)
    }()
    c <- 1
}
With an unbuffered channel, a write to the channel cannot happen until some receiver is waiting to receive the data, which means that in the example below
func main() {
    ch := make(chan int)
    ch <- 10 // main goroutine is blocked, because there is no goroutine to receive the value
    <-ch
}
Now, in the case where we have another goroutine, the same principle applies:
func main() {
    ch := make(chan int)
    go task(ch)
    ch <- 10
}

func task(ch chan int) {
    <-ch
}
This works because the task goroutine is waiting to consume the data, so the write to the unbuffered channel can happen.
To make it clearer, let's swap the order of the second and third statements in the main function:
func main() {
    ch := make(chan int)
    ch <- 10 // blocked: no goroutine is waiting for the data to be consumed from the channel
    go task(ch)
}
This leads to a deadlock.
So, in short, a write to an unbuffered channel happens only when some goroutine is waiting to read from the channel; otherwise the write operation blocks forever and leads to a deadlock.
NOTE: The same concept applies to buffered channels, except that the sender is not blocked until the buffer is full, which means the receiver does not have to synchronize with every write operation.
So if we have a buffered channel of size 1, the code mentioned above will work:
func main() {
    ch := make(chan int, 1) // channel of size 1
    ch <- 10                // not blocked: can put the value in the channel buffer
    <-ch
}
But if we write more values to the above example, then a deadlock will happen:
func main() {
    ch := make(chan int, 1) // channel buffer size 1
    ch <- 10
    ch <- 20 // blocked: the buffer is already full and no one is waiting to receive data from the channel
    <-ch
    <-ch
}
In this answer, I will try to explain the error message, which lets us peek a little into how Go works in terms of channels and goroutines.
The first example is:
package main

import "fmt"

func main() {
    c := make(chan int)
    c <- 1
    fmt.Println(<-c)
}
The error message is:
fatal error: all goroutines are asleep - deadlock!
In the code there are NO goroutines at all besides the main one (by the way, this error occurs at runtime, not compile time). When Go runs the line c <- 1, it wants to make sure that the message in the channel will be received somewhere (i.e. <-c). Go does NOT know, at this point, whether the channel will be received from or not. So it waits for the running goroutines until one of the following happens:
all of the goroutines are finished (asleep)
one of the goroutines tries to receive from the channel
In case #1, Go errors out with the message above, since it now KNOWS that there is no way a goroutine will receive from the channel, and it needs one.
In case #2, the program continues, since Go now KNOWS that the channel is received from. This explains the successful case in the OP's example.
Buffering removes synchronization.
Buffering makes them more like Erlang's mailboxes.
Buffered channels can be important for some problems, but they are more subtle to reason about.
By default channels are unbuffered, meaning that they will only accept sends (chan <-) if there is a corresponding receive (<-chan) ready to receive the sent value. Buffered channels accept a limited number of values without a corresponding receiver for those values.

messages := make(chan string, 2) // channel of strings buffering up to 2 values
Basic sends and receives on channels are blocking.
However, we can use select with a default clause to implement non-blocking sends, receives, and even non-blocking multi-way selects.
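As a rough sketch of that last point (the helper names trySend and tryRecv are mine, not part of the standard library):

```go
package main

import "fmt"

// trySend attempts a non-blocking send and reports whether the value was sent.
func trySend(c chan string, v string) bool {
    select {
    case c <- v:
        return true
    default: // no receiver ready and no buffer space: give up instead of blocking
        return false
    }
}

// tryRecv attempts a non-blocking receive; ok is false if no value was ready.
func tryRecv(c chan string) (v string, ok bool) {
    select {
    case v = <-c:
        return v, true
    default: // nothing available right now
        return "", false
    }
}

func main() {
    unbuffered := make(chan string)
    fmt.Println(trySend(unbuffered, "hi")) // false: no receiver is ready

    buffered := make(chan string, 1)
    fmt.Println(trySend(buffered, "hi")) // true: the buffer has room
    v, ok := tryRecv(buffered)
    fmt.Println(v, ok) // hi true
}
```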
I'm new to golang (with a Java concurrency background). Consider this piece of code:
package main

import "fmt"

func sendenum(num int, c chan int) {
    c <- num
}

func main() {
    c := make(chan int)
    go sendenum(0, c)
    x, y := <-c, <-c
    fmt.Println(x, y)
}
When I run this code, I get this error:
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan receive]:
main.main()
/home/tarrsalah/src/go/src/github.com/tarrsalah/stackoverflow/chan_dead_lock.go:12 +0x90
exit status 2
I know that adding another go sendenum(0, c) statement fixes the issue, but...
When and where did the deadlock happen?
After it receives the 0, main keeps waiting on the receiving end of c for another value to arrive (to put into the y variable), but it never will, as the goroutine running main is the only one left alive.
When you add another go sendenum(0, c), the second channel receive actually gets a value, puts it into the y variable, prints x and y, and the program finishes successfully.
It's not that "reusing" a channel is a problem. It's just a simple deadlock: the code prescribes two reads but only one write to the same channel, so the second read can never happen.
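For completeness, here is one minimal sketch of the fix the questioner already hints at: a second send, so that both reads can complete. (Which value lands in x and which in y depends on scheduling.)

```go
package main

import "fmt"

func sendenum(num int, c chan int) {
    c <- num
}

func main() {
    c := make(chan int)
    go sendenum(0, c) // first write, pairs with one of the two reads
    go sendenum(1, c) // second write, pairs with the other read
    x, y := <-c, <-c
    fmt.Println(x, y) // "0 1" or "1 0", depending on scheduling
}
```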