AWS SWF createTimer promise not returning

I want a loop that waits for a while after each iteration. I'm trying to use a timer to do the waiting, but the timer never returns, so the code below only runs once and then waits forever. When I remove the timer condition, the code executes successfully. There are no errors, it just waits...
Any help is appreciated!
@Override
public void doWorkflow() {
    validate("test", 1);
}

@Asynchronous
private void validate(String id, int retries, Promise<?>... waitFor) {
    Promise<Boolean> isValid = activityClient.validate(id);
    doWork(id, retries, isValid);
}

@Asynchronous
private void doWork(String id, int retries, Promise<Boolean> isValid) {
    if (!isValid.get()) {
        return;
    }
    Promise<Void> waitFor = decisionContext.getWorkflowClock().createTimer(5);
    validate(id, retries - 1, waitFor);
}

I was using Spring to set up the application. To get the decision context I was using
@Autowired
DecisionContext decisionContext;
The problem was that I constructed the workflow implementation bean without a scope. The solution was to add the scope="workflow" attribute:
<bean id="myWorkflow" class="com.example.myWorkflowImpl" scope="workflow">
After doing this, the workflow history began showing the TimerStarted event.

I don't see any obvious problems with your code. First I would look at the workflow history (either through the console or by printing it with WorkflowExecutionHistoryPrinter) to see if the timer is scheduled and if it ever fires. If the history looks OK, I would get an "asynchronous thread dump" using WorkflowExecutionFlowThreadDumper to see what exactly your code is waiting for.

Related

Blocking calls in a Node.js Addon

I'm developing a Node.js application that incorporates a Windows DLL. The DLL manages scientific equipment, for context.
My interface from Node to the DLL is going well; however, the DLL has some non-deterministic calls that depend on the network topology and RF signals in the room. These calls can take anywhere from 10 seconds to 10 minutes.
I'd like to get these calls off Node's event loop, and even avoid AsyncWorkers. I'd like to put them in their own C++ threads. I'm worried that I don't know enough Node/V8 to approach the problem correctly, though I've attempted twice now.
Below is my attempt at spawning a thread to call a js callback, though I'm not sure if this is a good approach. I need the result of the call, and what I have so far is a 'daemon' in my node app that checks on a regular interval to retrieve results for completed tasks.
mTp in the snippet below is a thread pool implementation I've written. RunTask takes a C++ lambda as a parameter to be pushed onto my worker thread queue. mThreadStatus is a map from my thread 'handle', which is a string, to a thread_status_t enum. mThreadResults is another map from the thread handle to the v8::Value that gets returned by the callback.
void
MyObj::SpawnThread(functionInput info) {
    MyObj* obj = ObjectWrap::Unwrap<MyObj>(info.Holder());
    obj->mTp.RunTask([&]() {
        v8::Isolate::CreateParams cp;
        v8::Isolate* tpIsolate = v8::Isolate::New(cp);
        v8::Locker locker(tpIsolate);
        v8::Isolate::Scope isolateScope(tpIsolate);
        Nan::HandleScope scope;

        auto global = obj->mContext.Get(tpIsolate)->Global();
        auto handle = std::string(*v8::String::Utf8Value(info[0]->ToString()));
        {
            std::unique_lock<std::shared_mutex> lock(obj->mThreadStatusMutex);
            obj->mThreadStatus[handle] = thread_status_t::running;
        }

        v8::Handle<v8::Function> f = v8::Handle<v8::Function>::Cast(info[1]);
        v8::TryCatch trycatch(tpIsolate);
        v8::Handle<v8::Value> result = f->Call(global, 0, nullptr);
        if (result.IsEmpty()) {
            v8::Local<v8::Value> exception = trycatch.Exception();
            std::unique_lock<std::shared_mutex> lock(obj->mThreadStatusMutex);
            obj->mThreadStatus[handle] = thread_status_t::error;
            return;
        }

        {
            std::unique_lock<std::shared_mutex> resultLock(obj->mThreadResultsMutex);
            obj->mThreadResults[handle] = result;
        }
        std::unique_lock<std::shared_mutex> lock(obj->mThreadStatusMutex);
        obj->mThreadStatus[handle] = thread_status_t::completed;

        tpIsolate->Dispose();
    });
}
I'm envisioning my js looking like this to spawn a thread:
var ctx = this
this.myObj.spawnThread('startMeasurements', () => {
return ctx.myObj.startMeasurements()
})
And like this to get the result, in my 'daemon':
var status = this.myObj.getThreadStatus('startMeasurements')
if ( status === 'complete') {
// Publish returned information to front-end
}
else if (status === 'error') {
// Handle error
}
Has anyone solved this problem before? Does this look like a decent approach? Help with v8 is greatly appreciated. Thank you!
I have not solved a similar problem before, but the general way I would go about it is:
let the JavaScript code be oblivious of the threading
expose a function getMeasurements(callback) to JavaScript, implemented in C++
when the function is called, it gets itself a thread (either newly created, or from the pool) and instructs it to do the blocking external call; when that call is completed the thread signals its result to the main thread, which invokes the callback with it.
that way all communication with JavaScript code (i.e. all interaction with V8) happens on the main thread, and you only use background threads for the blocking calls.
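To make that concrete, here is a minimal sketch of the pattern using Nan plus libuv's uv_async handle. The names GetMeasurements and MeasurementBaton and the stubbed-out blocking call are illustrative, not part of your code: the background thread only performs the blocking work, and uv_async_send hands the result back to the event loop, where the JS callback is invoked.

#include <nan.h>
#include <thread>

// Everything the worker thread and the main thread need to share.
struct MeasurementBaton {
    uv_async_t async;        // wakes the main loop when the work is done
    Nan::Callback callback;  // JS callback; only touched on the main thread
    double result = 0.0;     // whatever the blocking DLL call produced
};

// Runs on the main (V8) thread, because uv_async_send schedules it there.
static void DeliverResult(uv_async_t* handle) {
    Nan::HandleScope scope;
    auto* baton = static_cast<MeasurementBaton*>(handle->data);
    v8::Local<v8::Value> argv[] = { Nan::New(baton->result) };
    Nan::Call(baton->callback, 1, argv);
    uv_close(reinterpret_cast<uv_handle_t*>(handle), [](uv_handle_t* h) {
        delete static_cast<MeasurementBaton*>(h->data);
    });
}

// Exposed to JS as getMeasurements(callback).
NAN_METHOD(GetMeasurements) {
    auto* baton = new MeasurementBaton();
    baton->callback.Reset(info[0].As<v8::Function>());
    baton->async.data = baton;
    uv_async_init(uv_default_loop(), &baton->async, DeliverResult);

    // The background thread never touches V8; it only does the blocking call
    // and then signals the main loop.
    std::thread([baton]() {
        baton->result = 42.0;          // stand-in for the 10s-10min DLL call
        uv_async_send(&baton->async);
    }).detach();
}

NAN_MODULE_INIT(Init) {
    Nan::SetMethod(target, "getMeasurements", GetMeasurements);
}
NODE_MODULE(measurements, Init)

With this shape the per-thread status maps and the polling 'daemon' become unnecessary: the result is delivered to JavaScript as soon as the blocking call returns, and all V8 access stays on the main thread.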
I hope this helps!

Deadlock when dispatching notification in notification handler using Poco::NotificationCenter

I'm using Poco as part of a C++ app and I've run into an issue I don't quite understand. The app was translated from Obj-C and made heavy use of Apple's NSNotificationCenter.
To make the transition as painless as possible, I decided to use Poco's NotificationCenter instead. It works fine but I had some users reporting deadlocks I'm now trying to resolve.
Just a quick heads-up for those not familiar with NotificationCenter. You sign up for a notification like this:
Poco::NotificationCenter& nc = Poco::NotificationCenter::defaultCenter();
nc.addObserver(Poco::NObserver<MyClass, MyNotification>(*this, &MyClass::onNotification));
and post a notification like this:
Poco::NotificationCenter& nc = Poco::NotificationCenter::defaultCenter();
nc.postNotification(new MyNotification());
The postNotification() method is defined like this:
void NotificationCenter::postNotification(Notification::Ptr pNotification)
{
    poco_check_ptr (pNotification);
    ScopedLockWithUnlock<Mutex> lock(_mutex);
    ObserverList observersToNotify(_observers);
    lock.unlock();
    for (ObserverList::iterator it = observersToNotify.begin(); it != observersToNotify.end(); ++it)
    {
        (*it)->notify(pNotification);
    }
}
And NObserver::notify() like this:
void notify(Notification* pNf) const
{
    Poco::Mutex::ScopedLock lock(_mutex);
    if (_pObject)
    {
        N* pCastNf = dynamic_cast<N*>(pNf);
        if (pCastNf)
        {
            NotificationPtr ptr(pCastNf, true);
            (_pObject->*_method)(ptr);
        }
    }
}
This is all really simple and doesn't involve any black magic.
Given that postNotification() always iterates over all observers, and that NObserver::notify() takes its lock BEFORE checking whether the notification type matches via the dynamic cast, I'm assuming this MUST always cause a deadlock when a notification is posted from within a notification handler: the nested post would also try to notify the very observer it was called from and wait forever on the lock in NObserver::notify(). Is that correct?
From the process samples my users sent me it looks like my assumption is correct.
But for some reason this doesn't appear to deadlock in most cases (I have never experienced it myself). I just stepped through with the debugger and couldn't make it lock up. Does anyone have an explanation why this only locks up under certain circumstances?
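For reference, a minimal sketch of the re-entrant scenario described above, i.e. posting a second notification from inside a handler registered with the default center (the class names are made up for illustration; observer removal is omitted for brevity):

#include "Poco/NotificationCenter.h"
#include "Poco/NObserver.h"
#include "Poco/Notification.h"
#include "Poco/AutoPtr.h"

class MyNotification : public Poco::Notification {};
class ChildNotification : public Poco::Notification {};

class MyClass
{
public:
    MyClass()
    {
        Poco::NotificationCenter& nc = Poco::NotificationCenter::defaultCenter();
        nc.addObserver(Poco::NObserver<MyClass, MyNotification>(*this, &MyClass::onNotification));
        nc.addObserver(Poco::NObserver<MyClass, ChildNotification>(*this, &MyClass::onChild));
    }

    void onNotification(const Poco::AutoPtr<MyNotification>& pNf)
    {
        // Posting from inside a handler re-enters postNotification(), which
        // iterates over all observers again, including the NObserver whose
        // _mutex is currently held by this very call.
        Poco::NotificationCenter::defaultCenter().postNotification(new ChildNotification);
    }

    void onChild(const Poco::AutoPtr<ChildNotification>& pNf)
    {
        // Innocent second handler.
    }
};

int main()
{
    MyClass obj;
    Poco::NotificationCenter::defaultCenter().postNotification(new MyNotification);
    return 0;
}

Whether this actually blocks comes down to whether the _mutex in NObserver::notify() tolerates being re-acquired on the same thread, which is exactly the behaviour the process samples should reveal.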

How to lock a long async call in a WebApi action?

I have this scenario where I have a WebApi and an endpoint that when triggered does a lot of work (around 2-5min). It is a POST endpoint with side effects and I would like to limit the execution so that if 2 requests are sent to this endpoint (should not happen, but better safe than sorry), one of them will have to wait in order to avoid race conditions.
I first tried to use a simple static lock inside the controller like this:
lock (_lockObj)
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
This is of course not possible, because you cannot await inside a lock statement.
Another solution I considered was to use a SemaphoreSlim implementation like this:
await semaphore.WaitAsync();
try
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
finally
{
    semaphore.Release();
}
However, according to MSDN:
The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short.
Since in this scenario the wait times may even reach 5 minutes, what should I use for concurrency control?
EDIT (in response to plog17):
I do understand that passing this task onto a service might be the optimal way; however, I do not necessarily want to queue something in the background that still runs after the request is done.
The request involves other requests and integrations that take some time, but I would still like the user to wait for this request to finish and get a response regardless.
This request is expected to be only fired once a day at a specific time by a cron job. However, there is also an option to fire it manually by a developer (mostly in case something goes wrong with the job) and I would like to ensure the API doesn't run into concurrency issues if the developer e.g. double-sends the request accidentally etc.
If only one request of that sort can be processed at a given time, why not implement a queue?
With such a design, there is no longer any need to lock or wait while processing the long-running request.
The flow could be:
The client POSTs /RessourcesToProcess and should quickly receive 202-Accepted
The HttpController simply queues the task (and returns the 202-Accepted)
Another service (a Windows service?) dequeues the next task
Processes the task
Updates the resource status
During this process, the client should easily be able to get the status of requests previously made:
If the task is not found: 404-NotFound. Resource not found for id 123
If the task is processing: 200-OK. 123 is processing.
If the task is done: 200-OK. Process response.
Your controller could look like:
public class TaskController
{
//constructor and private members
[HttpPost, Route("")]
public void QueueTask(RequestBody body)
{
messageQueue.Add(body);
}
[HttpGet, Route("taskId")]
public void QueueTask(string taskId)
{
YourThing thing = tasksRepository.Get(taskId);
if (thing == null)
{
return NotFound("thing does not exist");
}
if (thing.IsProcessing)
{
return Ok("thing is processing");
}
if (!thing.IsProcessing)
{
return Ok("thing is not processing yet");
}
//here we assume thing had been processed
return Ok(thing.ResponseContent);
}
}
This design suggests that you do not handle the long-running process inside your WebApi. Indeed, that may not be the best design choice. If you still want to do so, you may want to read:
Long running task in WebAPI
https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/
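If you do decide to keep the work inside the controller, a minimal sketch of in-process serialization with SemaphoreSlim could look like the following. The controller and service type names and the 409 fail-fast behaviour are illustrative choices; _service and LongRunningWithSideEffects come from the question. Note that WaitAsync does not block a thread while waiting, so the long wait itself does not tie up a thread-pool thread.

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;

public class MaintenanceController : ApiController
{
    // Static, so the same gate is shared by all requests in this app domain.
    private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

    private readonly IMaintenanceService _service; // injected, as in the question

    public MaintenanceController(IMaintenanceService service)
    {
        _service = service;
    }

    [HttpPost]
    public async Task<IHttpActionResult> Run()
    {
        // Zero timeout: if a run is already in progress, fail fast with 409
        // instead of making the second caller wait up to 5 minutes.
        if (!await _gate.WaitAsync(TimeSpan.Zero))
        {
            return Conflict();
        }
        try
        {
            var results = await _service.LongRunningWithSideEffects();
            return Ok(results);
        }
        finally
        {
            _gate.Release();
        }
    }
}

If the second caller should wait rather than fail, drop the TimeSpan.Zero argument and simply await _gate.WaitAsync(), which matches the behaviour described in the question.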

Unit-testing a simple usage of RACSignal with RACSubject

(I may be using this in a totally incorrect manner, so feel free to challenge the premise of this post.)
I have a small RACTest app (sound familiar?) that I'm trying to unit test. I'd like to test MPSTicker, one of the most ReactiveCocoa-based components. It has a signal that sends a value once per second that accumulates, iff an accumulation flag is set to YES. I added an initializer to take a custom signal for its incrementing signal, rather than being only timer-based.
I wanted to unit test a couple of behaviours of MPSTicker:
Verify that its accumulation signal increments properly (i.e. monotonically increases) when accumulation is enabled and the input incrementing signal sends a new value.
Verify that it sends the same value (and not an incremented value) when the input signal sends a value.
I've added a test that uses the built-in timer to test the first increment, and it works as I expected (though I'm seeking advice on improving the goofy RACSequence initialization I did to get a signal with the @(1) value I wanted.)
I've had a very difficult time figuring out what input signal I can provide to MPSTicker that I can manually send values to. I'm envisioning a test like:
<set up ticker>
<send a tick value>
<verify accumulated value is 1>
<send another value>
<verify accumulated value is 2>
I tried using a RACSubject so I can use sendNext: to push in values as I see fit, but it's not working like I expect. Here are two broken tests:
- (void)testManualTimerTheFirst
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];
    [ticker.accumulateSignal subscribeNext:^(id x) {
        NSLog(@"%s value is %@", __func__, x);
    }];
    [controlledSignal sendNext:@(2)];
}

- (void)testManualTimerTheSecond
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];

    BOOL success = NO;
    NSError *error = nil;
    id value = [ticker.accumulateSignal asynchronousFirstOrDefault:nil success:&success error:&error];
    if (!success) {
        XCTAssertTrue(success, @"Signal failed to return a value. Error: %@", error);
    } else {
        XCTAssertNotNil(value, @"Signal returned a nil value.");
        XCTAssertEqualObjects(@(1), value, @"Signal returned an unexpected value.");
    }

    // Send a value.
    [controlledSignal sendNext:@(1)];
}
In testManualTimerTheFirst, I never see any value from controlledSignal's sendNext: come through to my subscribeNext: block.
In testManualTimerTheSecond, I tried using the asynchronousFirstOrDefault: call to get the first value from the signal, then manually sent a value on my subject, but the value didn't come through, and the test failed when asynchronousFirstOrDefault: timed out.
What am I missing here?
This may not answer your question exactly, but it may give you insights on how to effectively test your signals. I've used 2 approaches myself so far:
XCTestCase and TRVSMonitor
TRVSMonitor is a small utility which will pause the current thread for you while you run your assertions. For example:
TRVSMonitor *monitor = [TRVSMonitor monitor];
[[[self.service searchPodcastsWithTerm:@"security now"] collect] subscribeNext:^(NSArray *results) {
    XCTAssertTrue([results count] > 0, @"Results count should be > 0");
    [monitor signal];
} error:^(NSError *error) {
    XCTFail(@"%@", error);
    [monitor signal];
}];
[monitor wait];
As you can see, I'm telling the monitor to wait right after I subscribe and signal it to stop waiting at the end of subscribeNext and error blocks to make it continue executing (so other tests can run too). This approach has the benefit of not relying on a static timeout, so your code can run as long as it needs to.
Using CocoaPods, you can easily add TRVSMonitor to your project:
pod "TRVSMonitor", "~> 0.0.3"
Specta & Expecta
Specta is a BDD/TDD (behavior driven/test driven) test framework. Expecta is a framework which provides more convenient assertion matchers. It has built-in support for async tests. It enables you to write more descriptive tests with ReactiveCocoa, like so:
it(#"should return a valid image, with cache state 'new'", ^AsyncBlock {
[[cache imageForURL:[NSURL URLWithString:SECURITY_NOW_ARTWORK_URL]] subscribeNext:^(UIImage *image) {
expect(image).notTo.beNil();
expect(image.cacheState).to.equal(JPImageCacheStateNew);
} error:^(NSError *error) {
XCTFail(#"%#", error);
} completed:^{
done();
}];
});
Note the use of ^AsyncBlock {. Using simply ^ { would imply a synchronous test.
Here you call the done() function to signal the end of an asynchronous test. I believe Specta uses a 10 second timeout internally.
Using CocoaPods, you can easily add Expecta & Specta:
pod "Expecta", "~> 0.2.3"
pod "Specta", "~> 0.2.1"
See this question: https://stackoverflow.com/a/19127547/420594
The XCAsyncTestCase has some extra functionality to allow for asynchronous test cases.
Also, I haven't looked at it in depth yet, but could ReactiveCocoaTests be of some interest to you? On a glance, they appear to be using Expecta.

Checking if a boost timed thread has completed

I have been reading the boost thread documentation, and cannot find an example of what I need.
I need to run a method in a timed thread, and if it has not completed within a number of milliseconds,
then raise a timeout error.
So I have a method called invokeWithTimeout() that looks like this:
// Method to invoke a request with a timeout.
bool devices::server::CDeviceServer::invokeWithTimeout(CDeviceClientRequest& request,
                                                       CDeviceServerResponse& response)
{
    // Retrieve the timeout from the device.
    int timeout = getTimeout();
    timeout += 100; // Add 100ms to cover invocation time.

    // TODO: insert code here.

    // Invoke the request on the device.
    invoke(request, response);

    // Return success.
    return true;
}
I need to call invoke(request, response), and if it has not completed within timeout, the method needs to return false.
Can someone supply a quick boost::thread example of how to do this, please?
Note: The timeout is in milliseconds. Both getTimeout() and invoke() are pure virtual functions that have been implemented in the device subclasses.
Simplest solution: Launch invoke in a separate thread and use a future to indicate when invoke finishes:
boost::promise<void> p;
boost::future<void> f = p.get_future();
boost::thread t([&]() { invoke(request, response); p.set_value(); });
bool did_finish = (f.wait_for(boost::chrono::milliseconds(timeout)) == boost::future_status::ready);
did_finish will be true if and only if the invoke finished before the timeout.
The interesting question is what to do if that is not the case. You still need to shut down the thread t gracefully, so you will need some mechanism to cancel the pending invoke and do a proper join before destroying the thread. While in theory you could simply detach the thread, that is a very bad idea in practice, as you lose all means of interacting with the thread and could, for example, end up with hundreds of deadlocked threads without noticing.
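Putting the two pieces together, the method from the question might be wired up roughly like this. This is only a sketch: it assumes C++11 lambdas and the Boost.Thread futures used above, and on timeout it still joins the worker before returning, because invoke() offers no cancellation hook, so a timed-out call reports failure without abandoning the thread.

// Assumes <boost/thread.hpp> and <boost/chrono.hpp> are available.
bool devices::server::CDeviceServer::invokeWithTimeout(CDeviceClientRequest& request,
                                                       CDeviceServerResponse& response)
{
    // Retrieve the timeout from the device.
    int timeout = getTimeout();
    timeout += 100; // Add 100ms to cover invocation time.

    boost::promise<void> p;
    boost::future<void> f = p.get_future();
    boost::thread t([&]() {
        invoke(request, response);
        p.set_value();
    });

    const bool did_finish =
        (f.wait_for(boost::chrono::milliseconds(timeout)) == boost::future_status::ready);

    // invoke() cannot be interrupted, so the thread has to be joined either way;
    // on timeout this blocks until invoke() eventually returns, but it avoids
    // detaching a thread that still references request and response on this frame.
    t.join();

    return did_finish;
}

If blocking on that final join is unacceptable, the practical fix is the one described above: give the device's invoke() a cancellation mechanism so the worker can be asked to stop and then joined promptly.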