As a silly basic threading exercise, I've been trying to implement the sleeping barber problem in golang. With channels this should be quite easy, but I've run into a heisenbug. That is, when I try to diagnose it, the problem disappears!
Consider the following. The main() function pushes integers (or "customers") onto the shop channel. barber() reads the shop channel to cut "customers'" hair. If I insert a fmt.Print statement into the customer() function, the program runs as expected. Otherwise, barber() never cuts anyone's hair.
package main

import "fmt"

func customer(id int, shop chan<- int) {
    // Enter shop if seats available, otherwise leave
    // fmt.Println("Uncomment this line and the program works")
    if len(shop) < cap(shop) {
        shop <- id
    }
}

func barber(shop <-chan int) {
    // Cut hair of anyone who enters the shop
    for {
        fmt.Println("Barber cuts hair of customer", <-shop)
    }
}

func main() {
    shop := make(chan int, 5) // five seats available
    go barber(shop)
    for i := 0; ; i++ {
        customer(i, shop)
    }
}
Any idea what's afoot?
The problem is the way Go's scheduler is implemented. A running goroutine can yield to other goroutines only when it makes a system call or a blocking channel operation. fmt.Println makes a system call, which gives the goroutine an opportunity to yield; without it, the goroutine never gets one.
In practice this doesn't often matter, but for small problems like this it can sometimes crop up.
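To make that concrete, here is a minimal sketch (assuming everything else in your program stays the same) that adds a single explicit yield point with runtime.Gosched(), which is enough to let the barber goroutine be scheduled even without the Println:

package main

import (
    "fmt"
    "runtime"
)

func customer(id int, shop chan<- int) {
    // Enter shop if seats available, otherwise leave
    if len(shop) < cap(shop) {
        shop <- id
    }
}

func barber(shop <-chan int) {
    // Cut hair of anyone who enters the shop
    for {
        fmt.Println("Barber cuts hair of customer", <-shop)
    }
}

func main() {
    shop := make(chan int, 5) // five seats available
    go barber(shop)
    for i := 0; ; i++ {
        customer(i, shop)
        runtime.Gosched() // explicitly yield so the barber goroutine can run
    }
}

The select-based send shown below also removes the check-then-send race.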
Also, a more idiomatic, less racy way of doing a non-blocking send on a channel is:
func customer(id int, shop chan<- int) {
    // Enter shop if seats available, otherwise leave
    select {
    case shop <- id:
    default:
    }
}
The way you're doing it, a customer could end up waiting outside of the barber shop since by the time you actually do the send, len(shop) may have changed.
Does adding runtime.GOMAXPROCS(2) at the beginning of main solve this?
When writing concurrent code, it's fairly common to want to spin off a separate (green or OS) thread and then ask the code in that thread to react to various thread-safe messages. Raku supports this pattern in a number of ways.
For example, many of the Channel examples in the docs show code that's similar to the code below (which prints one through ten across two threads).
my $channel = Channel.new;
start { react whenever $channel { say $_ }}
for ^10 { $channel.send($_) }
sleep 1;
However, if we switch from the single-consumer world of Channels to the multi-consumer world of live Supplys, the equivalent code no longer works.
my Supplier $supplier .= new;
start { react whenever $supplier { say $_ }}
for ^10 { $supplier.emit($_) }
sleep 1;
This code prints nothing. As I understand it, this is because the react block was not listening when the values were emitted – it doesn't take long to start a thread and react to events, but it takes even less time to emit ten values. And, logically enough, moving the sleep 1 line above the for loop causes the values to print.
And that's all fair enough – after all, the reason to use a live Supply rather than an on-demand one is because you want the live semantics. That is, you want to only react to future events, not to past ones.
But my question is whether there's a way to ask a react block in a thread I've started whether it's ready and/or to wait for it to be ready before sending data. (awaiting the start block waits until the thread is done, rather than until it's ready, so that doesn't help here).
I'm also open to answers saying that I'm approaching this incorrectly/there's an X-Y problem – it's entirely possible that I'm straining against the direction the language is trying to push me or that live Supplys aren't the correct concurrency abstraction here.
For this specific case (which is a relatively common one), the answer would be to use a Supplier::Preserving:
my Supplier::Preserving $supplier .= new;
start { react whenever $supplier { say $_ }}
for ^10 { $supplier.emit($_) }
sleep 1;
Supplier::Preserving retains sent values until $supplier is first tapped, and then emits them.
An alternative, more general, solution is to use a Promise:
my Supplier $supplier .= new;
# A Promise used just for synchronization
my Promise $ready .= new;
start react {
    # Set up the subscriptions...
    whenever $supplier { say $_ }
    # ...and then signal that they are ready.
    $ready.keep;
}
# Wait for the subscriptions to be set up...
await $ready;
# ...and off we go.
for ^10 { $supplier.emit($_) }
sleep 1;
The whenevers in a react block set up subscriptions as they are encountered, so by the time the Promise is kept, all of the subscriptions will have been made. (Further, although not important here, no messages are processed until the body of the react block has finished setting everything up.)
Finally I'll note that while Supplier is often reached for, many times one would be better off writing a supply block that emits the values. The example in the question is (quite reasonably enough) abstracted from a concrete application, but it's almost always worth asking, "can I do what I want by writing a supply block" before reaching for a Supplier or Supplier::Preserving. If you really do need to broadcast values or need to distribute asynchronous inputs to multiple places, there's a solid case for Supplier; if it's just a single stream of values to be produced once tapped, there probably isn't.
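As a minimal sketch of that last suggestion (the ^10 range is just a stand-in for whatever the application really produces), an on-demand supply block only starts running once it is tapped, so the question of whether the react block is ready yet never arises:

my $values = supply {
    # Runs per tap, once a consumer is already listening.
    for ^10 { emit $_ }
};

react whenever $values { say $_ }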
I am very new to Swift and programming. I'm trying to create a pattern of haptic feedback triggered by a UILongPressGestureRecognizer. When the user "long presses" the screen, I want the phone to vibrate three times with a 1 second delay between each vibration. I tried using "sleep" to accomplish the 1 second delays, but this didn't work. What is the best way to do this correctly?
var feedbackGenerator: UIImpactFeedbackGenerator? = nil

func performFeedbackPattern() {
    // create the feedback generator
    feedbackGenerator = UIImpactFeedbackGenerator(style: .heavy)
    feedbackGenerator?.prepare()

    // play the feedback three times with 1 second between each feedback
    feedbackGenerator?.impactOccurred()
    sleep(1)
    feedbackGenerator?.impactOccurred()
    sleep(1)
    feedbackGenerator?.impactOccurred()
}
@IBAction func gestureRecognizer(_ sender: UILongPressGestureRecognizer) {
    switch sender.state {
    case .began:
        performFeedbackPattern()
    default: break
    }
}
Recently I was doing something similar and came up with a small pod you can take a look at.
Here is the link: https://github.com/iSapozhnik/Haptico
The idea is to build an OperationQueue with a bunch of Operations. One operation could be your haptic feedback and another one a pause operation.
You can create an OperationQueue and add operations with haptic feedback. The operation would look like this:
class HapticFeedbackOperation: Operation {
    override func main() {
        // Play the haptic feedback
        UIImpactFeedbackGenerator(style: .heavy).impactOccurred()
    }
}
You might want to add a delay between the operations.
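One possible sketch of that pause (the 1-second interval, the serial queue, and the PauseOperation name are assumptions for illustration, not part of any library): a second Operation subclass that just sleeps, interleaved with the feedback operations on a serial background queue so the main thread is never blocked.

class PauseOperation: Operation {
    let duration: TimeInterval
    init(duration: TimeInterval) { self.duration = duration }

    override func main() {
        // Sleeps the background queue's thread, not the main thread.
        Thread.sleep(forTimeInterval: duration)
    }
}

// A serial queue kept alive for the lifetime of the screen/controller.
let hapticQueue: OperationQueue = {
    let queue = OperationQueue()
    queue.maxConcurrentOperationCount = 1 // run strictly one operation after another
    return queue
}()

func performFeedbackPattern() {
    hapticQueue.addOperations([
        HapticFeedbackOperation(),
        PauseOperation(duration: 1),
        HapticFeedbackOperation(),
        PauseOperation(duration: 1),
        HapticFeedbackOperation()
    ], waitUntilFinished: false)
}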
Check out my open source framework Haptica; it supports haptic feedback, AudioServices, and unique vibration patterns. It works with Swift 4.2 and Xcode 10.
I have got a Play 2.4 (Java-based) application with some background Akka tasks implemented as functions returning Promise.
Task1 downloads bank statements via bank Rest API.
Task2 processes the statements and pairs them with customers.
Task3 does some other processing.
Task2 cannot run before Task1 finishes its work. Task3 cannot run before Task2. I was trying to run them through sequence of Promise.map() like this:
protected F.Promise run() throws WebServiceException {
    return bankAPI.downloadBankStatements().map(
        result -> bankProc.processBankStatements().map(
            _result -> accounting.checkCustomersBalance()));
}
I was under the impression that the first map would wait until Task1 is done and then call Task2, and so on. But when I look into the application (the tasks write some debug info to the log), I can see that the tasks are running in parallel.
I was also trying to use Promise.flatMap() and Promise.sequence() with no luck. Tasks are always running in parallel.
I know that Play is non-blocking by nature, but in this situation I really need to do things in the right order.
Is there any general practice on how to run multiple Promises in selected order?
You're nesting the second call to map, which means what's happening here is
processBankStatements
checkCustomerBalance
downloadBankStatements
Instead, you need to chain them:
protected F.Promise run() throws WebServiceException {
    return bankAPI.downloadBankStatements()
        .map(statements -> bankProc.processBankStatements())
        .map(processedStatements -> accounting.checkCustomersBalance());
}
I notice you're not using result or _result (which I've renamed for clarity) - is that intentional?
Alright, I found a solution. The correct answer is:
If you are chaining multiple Promises the way I do, that is, returning another Promise from each map() call and so on, you should follow these rules:
If you are returning non-futures from mapping, just use map()
If you are returning more futures from mapping, you should use flatMap()
The correct code snippet for my case is then:
return bankAPI.downloadBankStatements().flatMap(result -> {
    return bankProc.processBankStatements().flatMap(_result -> {
        return accounting.checkCustomersBalance().map(__result -> {
            return null;
        });
    });
});
This solution was suggested to me a long time ago, but it was not working at first. The problem was that I had a hidden Promise.map() inside the downloadBankStatements() function, so the chain of flatMaps was broken in that case.
Let's say I'm using a fictional package in my webserver called github.com/john/jupiterDb that I'm using to connect to my database hosted on Jupiter.
When someone makes a request to my server, I want to store the body of the request in my Jupiter DB. So I have some code like this:
http.HandleFunc("/SomeEvent", registerSomeEvent)
And in my registerSomeEvent handler I want to do this:
func registerSomeEvent(w http.ResponseWriter, r *http.Request) {
    jupiterDb.Insert(r.Body) // Takes a while!
    fmt.Fprint(w, "Thanks!")
}
Now obviously I don't want to wait for the round trip to Jupiter to thank my user. So the obvious Go thing to do is to wrap that Insert call in a goroutine.
But oftentimes creators of packages that do lengthy IO will use goroutines inside the package to ensure these functions return immediately and are non-blocking. Does this mean I need to check the source of every package I use to make sure I'm using concurrency correctly?
Should I wrap it in an extra goroutine anyway, or should I trust that the maintainer has already done the work for me? This feels to me like I have less ability to treat a package as a black box, or am I missing something?
I would just read the body and send it to a channel. A group of goroutines will read from the channel and send the payload to Jupiter.
var reqPayloadChannel = make(chan []byte, 100)

func jupiter_worker() {
    for payload := range reqPayloadChannel {
        jupiterDb.Insert(payload) // Takes a while!
    }
}

func registerSomeEvent(w http.ResponseWriter, r *http.Request) {
    payload, err := io.ReadAll(r.Body) // io.ReadAll needs the io package (Go 1.16+)
    if err != nil {
        http.Error(w, "bad request", http.StatusBadRequest)
        return
    }
    reqPayloadChannel <- payload
    fmt.Fprint(w, "Thanks!")
}
Next steps are to set up the worker group and to handle the case where the Jupiter channel is full due to very slow clients.
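A rough sketch of those next steps, reusing the names from the snippet above (the worker count and the 503 response are assumptions, not requirements): start a fixed pool of workers at startup, and use a non-blocking send so a full queue sheds load instead of stalling the handler.

func startJupiterWorkers(n int) {
    for i := 0; i < n; i++ {
        go jupiter_worker()
    }
}

func registerSomeEvent(w http.ResponseWriter, r *http.Request) {
    payload, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, "bad request", http.StatusBadRequest)
        return
    }
    select {
    case reqPayloadChannel <- payload:
        fmt.Fprint(w, "Thanks!")
    default:
        // Queue is full: reject rather than blocking the handler goroutine.
        http.Error(w, "try again later", http.StatusServiceUnavailable)
    }
}

Whether to shed load like this or simply block is a design choice; blocking keeps every request but ties up handler goroutines while the queue drains.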
(I may be using this in a totally incorrect manner, so feel free to challenge the premise of this post.)
I have a small RACTest app (sound familiar?) that I'm trying to unit test. I'd like to test MPSTicker, one of the most ReactiveCocoa-based components. It has a signal that sends a value once per second that accumulates, iff an accumulation flag is set to YES. I added an initializer to take a custom signal for its incrementing signal, rather than being only timer-based.
I wanted to unit test a couple of behaviours of MPSTicker:
Verify that its accumulation signal increments properly (i.e. monotonically increases) when accumulation is enabled and the input incrementing signal sends a new value.
Verify that it sends the same value (and not an incremented value) when the input signal sends a value.
I've added a test that uses the built-in timer to test the first increment, and it works as I expected (though I'm seeking advice on improving the goofy RACSequence initialization I did to get a signal with the @(1) value I wanted.)
I've had a very difficult time figuring out what input signal I can provide to MPSTicker that I can manually send values to. I'm envisioning a test like:
<set up ticker>
<send a tick value>
<verify accumulated value is 1>
<send another value>
<verify accumulated value is 2>
I tried using a RACSubject so I can use sendNext: to push in values as I see fit, but it's not working like I expect. Here's two broken tests:
- (void)testManualTimerTheFirst
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];
    [ticker.accumulateSignal subscribeNext:^(id x) {
        NSLog(@"%s value is %@", __func__, x);
    }];
    [controlledSignal sendNext:@(2)];
}
- (void)testManualTimerTheSecond
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];
    BOOL success = NO;
    NSError *error = nil;
    id value = [ticker.accumulateSignal asynchronousFirstOrDefault:nil success:&success error:&error];
    if (!success) {
        XCTAssertTrue(success, @"Signal failed to return a value. Error: %@", error);
    } else {
        XCTAssertNotNil(value, @"Signal returned a nil value.");
        XCTAssertEqualObjects(@(1), value, @"Signal returned an unexpected value.");
    }
    // Send a value.
    [controlledSignal sendNext:@(1)];
}
In testManualTimerTheFirst, I never see any value from controlledSignal's sendNext: come through to my subscribeNext: block.
In testManualTimerTheSecond, I tried using the asynchronousFirstOrDefault: call to get the first value from the signal, then manually sent a value on my subject, but the value didn't come through, and the test failed when asynchronousFirstOrDefault: timed out.
What am I missing here?
This may not answer your question exactly, but it may give you insights on how to effectively test your signals. I've used 2 approaches myself so far:
XCTestCase and TRVSMonitor
TRVSMonitor is a small utility which will pause the current thread for you while you run your assertions. For example:
TRVSMonitor *monitor = [TRVSMonitor monitor];
[[[self.service searchPodcastsWithTerm:@"security now"] collect] subscribeNext:^(NSArray *results) {
    XCTAssertTrue([results count] > 0, @"Results count should be > 0");
    [monitor signal];
} error:^(NSError *error) {
    XCTFail(@"%@", error);
    [monitor signal];
}];
[monitor wait];
As you can see, I'm telling the monitor to wait right after I subscribe and signal it to stop waiting at the end of subscribeNext and error blocks to make it continue executing (so other tests can run too). This approach has the benefit of not relying on a static timeout, so your code can run as long as it needs to.
Using CocoaPods, you can easily add TRVSMonitor to your project:
pod "TRVSMonitor", "~> 0.0.3"
Specta & Expecta
Specta is a BDD/TDD (behavior driven/test driven) test framework. Expecta is a framework which provides more convenient assertion matchers. It has built-in support for async tests. It enables you to write more descriptive tests with ReactiveCocoa, like so:
it(#"should return a valid image, with cache state 'new'", ^AsyncBlock {
[[cache imageForURL:[NSURL URLWithString:SECURITY_NOW_ARTWORK_URL]] subscribeNext:^(UIImage *image) {
expect(image).notTo.beNil();
expect(image.cacheState).to.equal(JPImageCacheStateNew);
} error:^(NSError *error) {
XCTFail(#"%#", error);
} completed:^{
done();
}];
});
Note the use of ^AsyncBlock {. Using simply ^ { would imply a synchronous test.
Here you call the done() function to signal the end of an asynchronous test. I believe Specta uses a 10 second timeout internally.
Using CocoaPods, you can easily add Expecta & Specta:
pod "Expecta", "~> 0.2.3"
pod "Specta", "~> 0.2.1"
See this question: https://stackoverflow.com/a/19127547/420594
The XCAsyncTestCase has some extra functionality to allow for asynchronous test cases.
Also, I haven't looked at it in depth yet, but could ReactiveCocoaTests be of some interest to you? On a glance, they appear to be using Expecta.