I have this code:
if ev, ok := evt.(*ATypeEvent); ok {
    // process ATypeEvent
} else if ev, ok := evt.(*BTypeEvent); ok {
    // process BTypeEvent
} else if ev, ok := evt.(*CTypeEvent); ok {
    // process CTypeEvent
}
It now happens that I have 3 more event types, each of which fits into one of the other 3 - I'd think I need an OR.
But after several tries, I haven't been able to figure how to do it.
This doesn't work:
if ev, ok := evt.(*ATypeEvent) || evt.(*XTypeEvent); ok {
    // process ATypeEvent and X
} else if ev, ok := evt.(*BTypeEvent) || evt.(*YTypeEvent); ok {
    // process BTypeEvent and Y
} else if ev, ok := evt.(*CTypeEvent) || evt.(*ZTypeEvent); ok {
    // process CTypeEvent and Z
}
nor something like
if ev, ok := evt.(*ATypeEvent) || ev, ok := evt.(*XTypeEvent); ok {
nor
if ev, ok := (evt.(*ATypeEvent) || evt.(*XTypeEvent ) ); ok {
How can this be done correctly?
Use a type switch, as explained in Effective Go (a highly recommended resource for understanding many things in Go):
switch v := evt.(type) {
case *ATypeEvent, *XTypeEvent:
    // process ATypeEvent and X
case *BTypeEvent, *YTypeEvent:
    // process BTypeEvent and Y
case *CTypeEvent, *ZTypeEvent:
    // process CTypeEvent and Z
default:
    // should never happen
    log.Fatalf("error: unexpected type %T", v)
}
As for why your approach didn't work, Go's || and && operators require values of type bool and result in a single value of type bool, so assigning to ev, ok won't work as you wanted, nor will using a type assertion as a Boolean value. Without a type switch, you're stuck doing something like this:
if ev, ok := evt.(*ATypeEvent); ok {
    // process ATypeEvent
} else if ev, ok := evt.(*XTypeEvent); ok {
    // process XTypeEvent
} else if ...
Another option is to define the processing as a method on each event type, behind an interface that evt satisfies:
func (a *ATypeEvent) Process(...) ... {
    // process ATypeEvent
}

func (x *XTypeEvent) Process(...) ... {
    // process XTypeEvent
}

func (b *BTypeEvent) Process(...) ... {
    // process BTypeEvent
}
and so on.
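For illustration, here is a minimal, self-contained sketch of that approach. The Event interface, the empty struct definitions, and the handle function are hypothetical names for this sketch, not taken from the question:

package main

import "fmt"

// Event is a hypothetical interface that every event type satisfies.
type Event interface {
    Process()
}

type ATypeEvent struct{}
type XTypeEvent struct{}

func (a *ATypeEvent) Process() { fmt.Println("processing ATypeEvent") }
func (x *XTypeEvent) Process() { fmt.Println("processing XTypeEvent") }

// handle no longer needs a type switch: dynamic dispatch picks
// the Process implementation belonging to the concrete type.
func handle(evt Event) {
    evt.Process()
}

func main() {
    handle(&ATypeEvent{})
    handle(&XTypeEvent{})
}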
Imagine a situation where I'd like to call a function that does some amount of processing, but is time-bound. I could write a function in golang using context.Context and select. I'd imagine something as follows:
package main

import (
    "context"
    "fmt"
    "time"
)

func longRunning(ctx context.Context, msg string) {
    stop := make(chan bool)
    done := make(chan bool)
    go func() {
        for {
            fmt.Printf("long running calculation %v...", msg)
            select {
            case <-stop:
                fmt.Println("time to stop early!")
                return
            default:
            }
        }
        done <- true
    }()
    select {
    case <-done:
        return
    case <-ctx.Done():
        stop <- true
        return
    }
}

func main() {
    ctx := context.Background()
    ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
    defer cancel()
    longRunning(ctx, "wheeee")
}
Is there a pattern I can use to achieve similar results in C++? In the sample above, select is able to listen on a channel in a non-blocking way. Is having an eventfd file descriptor of some kind and listening for events the way to do it?
Any suggestions or tips would be much appreciated.
Something along these lines, perhaps:
#include <atomic>
#include <chrono>
#include <future>

void longRunning(std::atomic<bool>& stop) {
    for (;;) {
        if (stop) return;
        // Do a bit of work
    }
}

int main() {
    const int num_seconds = 3; // not defined in the original snippet; 3 s mirrors the Go example
    std::atomic<bool> stop = false;
    auto future = std::async(std::launch::async, longRunning, std::ref(stop));
    future.wait_for(std::chrono::seconds(num_seconds));
    stop = true;
    future.get(); // wait for the task to finish or exit early.
}
Demo
I currently have the following function in one file:
func pinExported(pin int) bool {
    pinPath := fmt.Sprintf("/sys/class/gpio/gpio%d", pin)
    if file, err := os.Stat(pinPath); err == nil && len(file.Name()) > 0 {
        return true
    }
    return false
}
and another piece of code in the same file that uses the above function and looks like this:
func isGpioPinExported(gpioPin int) bool {
    exported := pinExported(gpioPin)
    for !exported && (timeOut < timeOutForPinExportInMilliseconds) {
        timeOut++
        time.Sleep(1 * time.Millisecond)
        exported = pinExported(gpioPin)
    }
    ...
So now I'm searching for an elegant way to mock/replace the above pinExported function in my unit tests, so that I can test the logic inside isGpioPinExported; pinExported itself is hardware dependent (Raspberry Pi).
One solution could be to make the pinExported function a parameter of isGpioPinExported.
So I define a function type like this:
type pinExported func(int) bool
which means I have to define isGpioPinExported like this:
func isGpioPinExported(pinExported pinExported, gpioPin int) bool {
    exported := pinExported(gpioPin)
    for !exported && (timeOut < timeOutForPinExportInMilliseconds) {
        ...
    }
    ...
}
Now I can write my unit test and define a mock/fake pinExported without a problem. So far so good. But I have about five or six of these functions, which would mean adding five or six extra parameters to a function like isGpioPinExported, which is simply wrong. Apart from that, the question is where to define the default implementations that are used when the code is not running under test.
So based on the suggestion of mkopriva I have created an interface which looks like this (now with three functions to see how this really works):
type Raspberry interface {
    isPinExported(gpioPin int) bool
    valueExist(gpioPin int) bool
    directionExist(gpioPin int) bool
}
Furthermore, I defined a struct that implements it for the real hardware (Raspberry Pi):
type Rasberry3Plus struct {
}

func (raspberry Rasberry3Plus) valueExist(gpioPin int) bool {
    pinPath := fmt.Sprintf("%s%d/value", sysClassGPIOPin, gpioPin)
    if file, err := os.Stat(pinPath); err == nil && len(file.Name()) > 0 {
        return true
    }
    return false
}

func (raspberry Rasberry3Plus) directionExist(gpioPin int) bool {
    pinPath := fmt.Sprintf("%s%d/direction", sysClassGPIOPin, gpioPin)
    if file, err := os.Stat(pinPath); err == nil && len(file.Name()) > 0 {
        return true
    }
    return false
}

func (raspberry Rasberry3Plus) isPinExported(gpioPin int) bool {
    pinPath := fmt.Sprintf("%s%d", sysClassGPIOPin, gpioPin)
    if file, err := os.Stat(pinPath); err == nil && len(file.Name()) > 0 {
        return true
    }
    return false
}
and the function IsGpioPinExported, which uses the above functions, now looks like this (this is just an example implementation to show how the mock-based testing works):
func IsGpioPinExported(raspberry Raspberry, gpioPin int) bool {
    pinExported := raspberry.isPinExported(gpioPin)
    valueExist := raspberry.valueExist(gpioPin)
    directionExist := raspberry.directionExist(gpioPin)
    return valueExist && directionExist && pinExported
}
So now the tests look like this. First I have to define a mock type (btw: I have decided to go with testify's mock package):
import (
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
    "testing"
)

type mockRaspberry struct {
    mock.Mock
}

func (raspMock mockRaspberry) isPinExported(gpioPin int) bool {
    args := raspMock.Called(gpioPin)
    return args.Bool(0)
}

func (raspMock mockRaspberry) valueExist(gpioPin int) bool {
    args := raspMock.Called(gpioPin)
    return args.Bool(0)
}

func (raspMock mockRaspberry) directionExist(gpioPin int) bool {
    args := raspMock.Called(gpioPin)
    return args.Bool(0)
}

func Test_ValueTrue_DirectionExistTrue(t *testing.T) {
    testObj := new(mockRaspberry)
    testObj.On("isPinExported", 5).Return(false)
    testObj.On("valueExist", 5).Return(true)
    testObj.On("directionExist", 5).Return(true)
    exported := IsGpioPinExported(testObj, 5)
    assert.Equal(t, false, exported)
}
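To also cover the happy path, a second, hypothetical test case (not from the original post) could assert the opposite outcome with the same mock:

func Test_AllChecksTrue(t *testing.T) {
    testObj := new(mockRaspberry)
    testObj.On("isPinExported", 5).Return(true)
    testObj.On("valueExist", 5).Return(true)
    testObj.On("directionExist", 5).Return(true)
    exported := IsGpioPinExported(testObj, 5)
    // all three checks succeed, so the pin counts as exported
    assert.Equal(t, true, exported)
}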
And now it is simple to test the logic in IsGpioPinExported by mocking the functions with whatever results are needed. And finally the main program looks like this:
func main() {
    rasberry3Plus := gpio.Rasberry3Plus{}
    gpio.IsGpioPinExported(rasberry3Plus, 23)
}
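As an aside on the earlier question of where default implementations would live if you went the function-parameter route: a common alternative to the interface is a package-level function variable that production code leaves alone and tests overwrite. A minimal sketch with hypothetical layout, assuming the same fmt and os imports as the original file:

// pinExported holds the production default; tests may swap it out.
var pinExported = func(pin int) bool {
    pinPath := fmt.Sprintf("/sys/class/gpio/gpio%d", pin)
    _, err := os.Stat(pinPath)
    return err == nil
}

func isGpioPinExported(gpioPin int) bool {
    return pinExported(gpioPin)
}

// In a test file:
// func TestIsGpioPinExported(t *testing.T) {
//     old := pinExported
//     defer func() { pinExported = old }()
//     pinExported = func(pin int) bool { return true }
//     ...
// }

The interface approach scales better once several hardware calls are involved, but the variable keeps the default implementation right next to the code that uses it.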
I'm doing some stream processing in Go and got stuck trying to figure out how to do this the "Go way" without locks.
This contrived example shows the problem I'm facing.
We get one thing at a time.
There is a goroutine which buffers them into a slice called things.
When things becomes full (len(things) == 100), it is processed somehow and reset.
There are n concurrent goroutines that need to access things before it's full.
Access to the "incomplete" things from other goroutines is not predictable.
Neither doSomethingWithPartial nor doSomethingWithComplete needs to mutate things
Code:
var m sync.Mutex
var count int64

things := make([]int64, 0, 100)

// slices of data are constantly being generated and used
go func() {
    for {
        m.Lock()
        if len(things) == 100 {
            // doSomethingWithComplete does not modify things
            doSomethingWithComplete(things)
            things = make([]int64, 0, 100)
        }
        things = append(things, count)
        m.Unlock()
        count++
    }
}()

// doSomethingWithPartial needs to access the things before they're ready
for {
    m.Lock()
    // doSomethingWithPartial does not modify things
    doSomethingWithPartial(things)
    m.Unlock()
}
I know that slices are immutable, so does that mean I can remove the mutex and expect it to still work? (I assume not.)
How can I refactor this to use channels instead of a mutex?
Edit: Here's the solution I came up with that does not use a mutex
package main

import (
    "fmt"
    "sync"
    "time"
)

func Incrementor() chan int {
    ch := make(chan int)
    go func() {
        count := 0
        for {
            ch <- count
            count++
        }
    }()
    return ch
}

type Foo struct {
    things   []int
    requests chan chan []int
    stream   chan int
    C        chan []int
}

func NewFoo() *Foo {
    foo := &Foo{
        things:   make([]int, 0, 100),
        requests: make(chan chan []int),
        stream:   Incrementor(),
        C:        make(chan []int),
    }
    go foo.Launch()
    return foo
}

func (f *Foo) Launch() {
    for {
        select {
        case ch := <-f.requests:
            ch <- f.things
        case thing := <-f.stream:
            if len(f.things) == 100 {
                f.C <- f.things
                f.things = make([]int, 0, 100)
            }
            f.things = append(f.things, thing)
        }
    }
}

func (f *Foo) Things() []int {
    ch := make(chan []int)
    f.requests <- ch
    return <-ch
}

func main() {
    foo := NewFoo()

    var wg sync.WaitGroup
    wg.Add(10)
    for i := 0; i < 10; i++ {
        go func(i int) {
            time.Sleep(time.Millisecond * time.Duration(i) * 100)
            things := foo.Things()
            fmt.Println("got things:", len(things))
            wg.Done()
        }(i)
    }

    go func() {
        for _ = range foo.C {
            // do something with things
        }
    }()

    wg.Wait()
}
It should be noted that the "Go way" is probably just to use a mutex for this. It's fun to work out how to do it with a channel but a mutex is probably simpler and easier to reason about for this particular problem.
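For comparison, here is a minimal sketch of the mutex-based version of the same idea, using a sync.RWMutex so readers of the partial buffer don't block each other. The buffer type and the stand-in processing functions are illustrative names, not from the original post:

package main

import (
    "fmt"
    "sync"
)

// stand-ins for the question's processing functions
func doSomethingWithComplete(ts []int64) { fmt.Println("complete:", len(ts)) }
func doSomethingWithPartial(ts []int64)  { fmt.Println("partial:", len(ts)) }

type buffer struct {
    mu     sync.RWMutex
    things []int64
}

func (b *buffer) add(v int64) {
    b.mu.Lock()
    defer b.mu.Unlock()
    if len(b.things) == 100 {
        doSomethingWithComplete(b.things) // does not modify things
        b.things = make([]int64, 0, 100)
    }
    b.things = append(b.things, v)
}

func (b *buffer) readPartial() {
    b.mu.RLock()
    defer b.mu.RUnlock()
    doSomethingWithPartial(b.things) // does not modify things
}

func main() {
    b := &buffer{things: make([]int64, 0, 100)}
    go func() {
        for i := int64(0); ; i++ {
            b.add(i)
        }
    }()
    for i := 0; i < 5; i++ {
        b.readPartial()
    }
}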
What is the idiomatic way to cast multiple return values in Go?
Can you do it in a single line, or do you need to use temporary variables such as I've done in my example below?
package main

import "fmt"

func oneRet() interface{} {
    return "Hello"
}

func twoRet() (interface{}, error) {
    return "Hejsan", nil
}

func main() {
    // With one return value, you can simply do this
    str1 := oneRet().(string)
    fmt.Println("String 1: " + str1)

    // It is not as easy with two return values
    //str2, err := twoRet().(string) // Not possible
    // Do I really have to use a temp variable instead?
    temp, err := twoRet()
    str2 := temp.(string)
    fmt.Println("String 2: " + str2)

    if err != nil {
        panic("unreachable")
    }
}
By the way, is it called casting when it comes to interfaces?
i := interface.(int)
You can't do it in a single line.
Your temporary variable approach is the way to go.
By the way, is it called casting when it comes to interfaces?
It is actually called a type assertion.
A type conversion (what other languages call a cast) is different:
var a int
var b int64
a = 5
b = int64(a)
func silly() (interface{}, error) {
    return "silly", nil
}

v, err := silly()
if err != nil {
    // handle error
}
s, ok := v.(string)
if !ok {
    // the assertion failed.
}
but more likely what you actually want is to use a type switch, like-a-this:
switch t := v.(type) {
case string:
    // t is a string
case int:
    // t is an int
default:
    // t is some other type that we didn't name.
}
Go is really more about correctness than it is about terseness.
Or just in a single if:
if v, ok := value.(migrater); ok {
    v.migrate()
}
Go performs the assertion inside the if clause and, within the block, lets you use v with the asserted type's methods.
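For context, a small self-contained sketch of that pattern; the migrater interface and the concrete user type are hypothetical, chosen to match the snippet above:

package main

import "fmt"

// hypothetical interface matching the snippet above
type migrater interface {
    migrate()
}

type user struct{ name string }

func (u user) migrate() { fmt.Println("migrating", u.name) }

func main() {
    var value interface{} = user{name: "alice"}
    if v, ok := value.(migrater); ok {
        v.migrate() // only runs when value implements migrater
    }
}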
template.Must is the standard library's approach to collapsing a (value, error) pair into a single value in one statement, panicking if the error is non-nil. Something similar could be done for your case:
func must(v interface{}, err error) interface{} {
    if err != nil {
        panic(err)
    }
    return v
}

// Usage:
str2 := must(twoRet()).(string)
By using must you basically say that there should never be an error, and if there is, then the program can't (or at least shouldn't) keep operating, and will panic instead.
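If you are on a Go version with generics (1.18+), the same helper can be written generically so the caller keeps the concrete type and can skip the assertion entirely; a minimal sketch, not from the original answer (twoRetTyped is a hypothetical variant of twoRet that returns (string, error)):

func must[T any](v T, err error) T {
    if err != nil {
        panic(err)
    }
    return v
}

// Usage: no .(string) assertion needed afterwards.
// str2 := must(twoRetTyped())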
What I would like to do is have a set of producer goroutines (of which some may or may not complete) and a consumer routine. The issue is with that caveat in parentheses - we don't know the total number that will return an answer.
So what I want to do is this:
package main

import (
    "fmt"
    "math/rand"
)

func producer(c chan int) {
    // May or may not produce.
    success := rand.Float32() > 0.5
    if success {
        c <- rand.Int()
    }
}

func main() {
    c := make(chan int, 10)
    for i := 0; i < 10; i++ {
        go producer(c)
    }

    // If we include a close, then that's WRONG. Chan will be closed
    // but a producer will try to write to it. Runtime error.
    close(c)

    // If we don't close, then that's WRONG. All goroutines will
    // deadlock, since the range keyword will look for a close.
    for num := range c {
        fmt.Printf("Producer produced: %d\n", num)
    }
    fmt.Println("All done.")
}
So the issue is, if I close it's wrong, if I don't close - it's still wrong (see comments in code).
Now, the solution would be an out-of-band signal channel, that ALL producers write to:
package main

import (
    "fmt"
    "math/rand"
)

func producer(c chan int, signal chan bool) {
    success := rand.Float32() > 0.5
    if success {
        c <- rand.Int()
    }
    signal <- true
}

func main() {
    c := make(chan int, 10)
    signal := make(chan bool, 10)
    for i := 0; i < 10; i++ {
        go producer(c, signal)
    }

    // This is basically a 'join'.
    num_done := 0
    for num_done < 10 {
        <-signal
        num_done++
    }

    close(c)
    for num := range c {
        fmt.Printf("Producer produced: %d\n", num)
    }
    fmt.Println("All done.")
}
And that totally does what I want! But to me it seems like a mouthful. My question is: Is there any idiom/trick that lets me do something similar in an easier way?
I had a look here: http://golang.org/doc/codewalk/sharemem/
And it seems like the complete chan (initialised at the start of main) is used in a range but never closed. I do not understand how.
If anyone has any insights, I would greatly appreciate it. Cheers!
Edit: fls0815 has the answer, and has also answered the question of how the close-less channel range works.
My code above, modified to work (done before fls0815 kindly supplied code):
package main

import (
    "fmt"
    "math/rand"
    "sync"
)

var wg_prod sync.WaitGroup
var wg_cons sync.WaitGroup

func producer(c chan int) {
    success := rand.Float32() > 0.5
    if success {
        c <- rand.Int()
    }
    wg_prod.Done()
}

func main() {
    c := make(chan int, 10)

    wg_prod.Add(10)
    for i := 0; i < 10; i++ {
        go producer(c)
    }

    wg_cons.Add(1)
    go func() {
        for num := range c {
            fmt.Printf("Producer produced: %d\n", num)
        }
        wg_cons.Done()
    }()

    wg_prod.Wait()
    close(c)
    wg_cons.Wait()
    fmt.Println("All done.")
}
Only producers should close channels. You can achieve your goal by starting consumer(s) that iterate (range) over the resulting channel once your producers have been started. In your main thread you wait (see sync.WaitGroup) until your producers/consumers have finished their work. After the producers have finished, you close the resulting channel, which forces your consumers to exit (range exits when the channel is closed and no buffered item is left).
Example code:
package main

import (
    "log"
    "math/rand"
    "runtime"
    "sync"
    "time"
)

func consumer() {
    defer consumer_wg.Done()
    for item := range resultingChannel {
        log.Println("Consumed:", item)
    }
}

func producer() {
    defer producer_wg.Done()
    success := rand.Float32() > 0.5
    if success {
        resultingChannel <- rand.Int()
    }
}

var resultingChannel = make(chan int)
var producer_wg sync.WaitGroup
var consumer_wg sync.WaitGroup

func main() {
    rand.Seed(time.Now().Unix())

    for c := 0; c < runtime.NumCPU(); c++ {
        producer_wg.Add(1)
        go producer()
    }

    for c := 0; c < runtime.NumCPU(); c++ {
        consumer_wg.Add(1)
        go consumer()
    }

    producer_wg.Wait()
    close(resultingChannel)
    consumer_wg.Wait()
}
The reason I put the close statement into the main function is that we have more than one producer. Closing the channel in one producer in the example above would lead to the problem you already ran into (writing to a closed channel), because there could be another producer left that still produces data. A channel should only be closed when no producer is left (hence my suggestion that only the producer side close it). This is how channels are designed in Go. Here you'll find some more information on closing channels.
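That "close only when no producer is left" rule is often expressed with a small coordinator goroutine that waits for all producers and then closes the channel on their behalf, so the consumer can simply range. A minimal sketch of that idiom (my own illustration, with hypothetical work in the producers):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var producers sync.WaitGroup
    results := make(chan int)

    for i := 0; i < 4; i++ {
        producers.Add(1)
        go func(i int) {
            defer producers.Done()
            results <- i * i // hypothetical work
        }(i)
    }

    // Close exactly once, only after every producer is done,
    // so the consumer below can simply range over results.
    go func() {
        producers.Wait()
        close(results)
    }()

    for r := range results {
        fmt.Println("consumed:", r)
    }
}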
Related to the sharemem example: AFAICS this example runs endlessly by re-queuing the Resources again and again (from pending -> complete -> pending -> complete... and so on). This is what the iteration at the end of the main func does. It receives the completed Resources and re-queues them to pending using Resource.Sleep(). When there is no completed Resource, it waits and blocks for new Resources to be completed. Therefore there is no need to close the channels, because they are in use all the time.
There are always lots of ways to solve these problems. Here's a solution using the simple synchronous channels that are fundamental in Go. No buffered channels, no closing channels, no WaitGroups.
It's really not that far from your "mouthful" solution and, sorry to disappoint, not that much smaller. It does put the consumer in its own goroutine, so that the consumer can consume numbers as the producer produces them. It also makes the distinction that a production "try" can end in either success or failure. If production fails, the try is done immediately. If it succeeds, the try is not done until the number is consumed.
package main

import (
    "fmt"
    "math/rand"
)

func producer(c chan int, fail chan bool) {
    if success := rand.Float32() > 0.5; success {
        c <- rand.Int()
    } else {
        fail <- true
    }
}

func consumer(c chan int, success chan bool) {
    for {
        num := <-c
        fmt.Printf("Producer produced: %d\n", num)
        success <- true
    }
}

func main() {
    const nTries = 10
    c := make(chan int)
    done := make(chan bool)
    for i := 0; i < nTries; i++ {
        go producer(c, done)
    }
    go consumer(c, done)
    for i := 0; i < nTries; i++ {
        <-done
    }
    fmt.Println("All done.")
}
I'm adding this because the extant answers don't make a couple of things clear. First, the range loop in the codewalk example is just an infinite event loop, there to keep re-checking and updating the same URL list forever.
Next, a channel, all by itself, already is the idiomatic consumer-producer queue in Go. The size of the async buffer backing the channel determines how much the producers can produce before getting backpressure. Set N = 0 below to see lock-step producer/consumer without anyone racing ahead or falling behind. As it is, N = 10 will let the producer produce up to 10 products before blocking.
Last, there are some nice idioms for writing communicating sequential processes in Go (e.g. functions that start goroutines for you, and using the for/select pattern to communicate and accept control commands). I think of WaitGroups as clumsy, and would like to see idiomatic examples instead.
package main

import (
    "fmt"
    "time"
)

type control int

const (
    sleep control = iota
    die   // receiver will close the control chan in response to die, to ack.
)

func (cmd control) String() string {
    switch cmd {
    case sleep:
        return "sleep"
    case die:
        return "die"
    }
    return fmt.Sprintf("%d", cmd)
}

func ProduceTo(writechan chan<- int, ctrl chan control, done chan bool) {
    var product int
    go func() {
        for {
            select {
            case writechan <- product:
                fmt.Printf("Producer produced %v\n", product)
                product++
            case cmd := <-ctrl:
                fmt.Printf("Producer got control cmd: %v\n", cmd)
                switch cmd {
                case sleep:
                    fmt.Printf("Producer sleeping 2 sec.\n")
                    time.Sleep(2000 * time.Millisecond)
                case die:
                    fmt.Printf("Producer dies.\n")
                    close(done)
                    return
                }
            }
        }
    }()
}

func ConsumeFrom(readchan <-chan int, ctrl chan control, done chan bool) {
    go func() {
        var product int
        for {
            select {
            case product = <-readchan:
                fmt.Printf("Consumer consumed %v\n", product)
            case cmd := <-ctrl:
                fmt.Printf("Consumer got control cmd: %v\n", cmd)
                switch cmd {
                case sleep:
                    fmt.Printf("Consumer sleeping 2 sec.\n")
                    time.Sleep(2000 * time.Millisecond)
                case die:
                    fmt.Printf("Consumer dies.\n")
                    close(done)
                    return
                }
            }
        }
    }()
}

func main() {
    N := 10
    q := make(chan int, N)
    prodCtrl := make(chan control)
    consCtrl := make(chan control)
    prodDone := make(chan bool)
    consDone := make(chan bool)

    ProduceTo(q, prodCtrl, prodDone)
    ConsumeFrom(q, consCtrl, consDone)

    // wait for a moment, to let them produce and consume
    timer := time.NewTimer(10 * time.Millisecond)
    <-timer.C

    // tell producer to pause
    fmt.Printf("telling producer to pause\n")
    prodCtrl <- sleep

    // wait for a second
    timer = time.NewTimer(1 * time.Second)
    <-timer.C

    // tell consumer to pause
    fmt.Printf("telling consumer to pause\n")
    consCtrl <- sleep

    // tell them both to finish
    prodCtrl <- die
    consCtrl <- die

    // wait for that to actually happen
    <-prodDone
    <-consDone
}
You can use simple unbuffered channels without wait groups if you use the generator pattern with a fanIn function.
In the generator pattern, each producer returns a channel and is responsible for closing it. A fanIn function then iterates over these channels and forwards the values returned on them down a single channel that it returns.
The problem, of course, is that the fanIn function forwards the zero value of the channel type (int) whenever a channel is closed without having produced anything.
You can work around it by using the zero value of your channel type as a sentinel value and only using the results from the fanIn channel if they are not the zero value.
Here's an example:
package main

import (
    "fmt"
    "math/rand"
)

const offset = 1

func producer() chan int {
    cout := make(chan int)
    go func() {
        defer close(cout)
        // May or may not produce.
        success := rand.Float32() > 0.5
        if success {
            cout <- rand.Int() + offset
        }
    }()
    return cout
}

func fanIn(cin []chan int) chan int {
    cout := make(chan int)
    go func() {
        defer close(cout)
        for _, c := range cin {
            cout <- <-c
        }
    }()
    return cout
}

func main() {
    chans := make([]chan int, 0)
    for i := 0; i < 10; i++ {
        chans = append(chans, producer())
    }

    for num := range fanIn(chans) {
        if num > offset {
            fmt.Printf("Producer produced: %d\n", num)
        }
    }
    fmt.Println("All done.")
}
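A variation worth noting (my sketch, not part of the answer above): if fanIn ranges over each producer channel instead of doing a single receive per channel, a producer that closes without sending simply contributes nothing, so the sentinel offset is no longer needed:

func fanIn(cin []chan int) chan int {
    cout := make(chan int)
    go func() {
        defer close(cout)
        for _, c := range cin {
            // range stops when the producer closes its channel,
            // forwarding zero or more values and never a bogus zero value.
            for v := range c {
                cout <- v
            }
        }
    }()
    return cout
}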
Producer-consumer is such a common pattern that I wrote a library, prosumer, for convenience, to deal with the chan communication carefully. E.g.:
func main() {
    maxLoop := 10
    var wg sync.WaitGroup
    wg.Add(maxLoop)
    defer wg.Wait()

    consumer := func(ls []interface{}) error {
        fmt.Printf("get %+v \n", ls)
        wg.Add(-len(ls))
        return nil
    }

    conf := prosumer.DefaultConfig(prosumer.Consumer(consumer))
    c := prosumer.NewCoordinator(conf)
    c.Start()
    defer c.Close(true)

    for i := 0; i < maxLoop; i++ {
        fmt.Printf("try put %v\n", i)
        discarded, err := c.Put(i)
        if err != nil {
            fmt.Errorf("discarded elements %+v for err %v", discarded, err)
            wg.Add(-len(discarded))
        }
        time.Sleep(time.Second)
    }
}
Close has a param called graceful, which controls whether to drain the underlying chan.