When should an ember-test clear its timers?

I have an older Ember CLI app that I've just updated to the latest dependencies and file layout (I ran ember init with ember-cli#0.2.0-beta.1), but when I try to write an acceptance test with the visit() helper, the internal wait function never resolves.
The furthest I've been able to trace the problem is into the wait function in the bower_components/ember/ember.js file, at the line
if (run.hasScheduledTimers() || run.currentRunLoop) { return; }
There is a timer on backburner, but time and time again, the loop returns here, and it never seems to have a chance to clear the timer.
I'm pretty sure the timer is supposed to make sure the wait helper waits after an ajax request, but the ajax request has long since resolved. Heck, if there were still pending requests, we would have exited this function.
Any insights into this process would be greatly appreciated!!

I had an instance of Em.run.later in my application in a loop, to recursively check for timeouts. This is not uncommon, it turns out!
My solution was to wrap the run.later block in a check of the current environment, and disable it when the app runs in the test environment.
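Something along these lines (a sketch; the config import, timings, and checkForTimeouts are illustrative, not my actual code):

import config from '../config/environment';

function pollForTimeouts() {
  // Skip the recursive timer entirely while testing, so the test helpers'
  // wait() isn't blocked by a timer that never clears.
  if (config.environment === 'test') { return; }

  Em.run.later(function() {
    checkForTimeouts();  // hypothetical application logic
    pollForTimeouts();   // schedule the next check
  }, 1000);
}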

What happens to running processes on a continuous Azure WebJob when website is redeployed?

I've read about graceful shutdowns here using the WEBJOBS_SHUTDOWN_FILE and here using Cancellation Tokens, so I understand the premise of graceful shutdowns; however, I'm not sure how they will affect WebJobs that are in the middle of processing a queue message.
So here's the scenario:
I have a WebJob with functions listening to queues.
Message is added to Queue and job begins processing.
While processing, someone pushes to develop, triggering a redeploy.
Assuming I have my WebJobs hooked up to deploy on git pushes, this deploy will also trigger the WebJobs to be updated, which (as far as I understand) will kick off some sort of shutdown workflow in the jobs. So I have a few questions stemming from that.
Will jobs in the middle of processing a queue message finish processing the message before the job quits? Or is any shutdown notification essentially treated as "this thing is about to shut down; if you don't have anything to handle it, you're SOL"?
If we are SOL, is our best option for handling shutdowns essentially to wrap anything you're doing in the equivalent of DB transactions and implement your shutdown handler in such a way that all changes are rolled back on shutdown?
If a queue message is in the middle of being processed and the WebJob shuts down, will that message be requeued? If not, does that mean that my shutdown handler needs to handle requeuing that message?
Is it possible for functions listening to queues to grab any more queue messages after the Job has been notified that it needs to shutdown?
Any guidance here is greatly appreciated! Also, if anyone has any other useful links on how to handle job shutdowns besides the ones I mentioned, it would be great if you could share those.
After no small amount of testing, I think I've found the answers to my questions and I hope someone else can gain some insight from my experience.
NOTE: All of these scenarios were tested using .NET console apps and Azure queues, so I'm not sure how blob or table storage triggers, or other WebJob file types, would handle these scenarios.
After a job has been marked to exit, any triggered functions that are still running get a grace period (5 seconds by default, though I believe it can be raised with a settings.job file) to finish before they are killed. If they don't finish within the grace period, the function quits. Main() (or wherever you called host.RunAndBlock()) will, however, keep running any code after host.RunAndBlock() for up to the time remaining in the grace period (I'm not sure how that would work if you used an infinite loop instead of RunAndBlock). As far as handling the quit in your functions goes, you can "listen" to the CancellationToken that can be passed into your triggered functions by checking its IsCancellationRequested property and handling it accordingly. Also, you are not SOL if you don't handle the quits yourself. Huzzah! See point #3.
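On the grace period: my understanding is that you can raise it by dropping a settings.job file in the WebJob's root, something like this (the 60-second value is just an example):

{
  "stopping_wait_time": 60
}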
While you are not SOL if you don't handle the quit (see point #3), I do think it is a good idea to wrap all of your jobs in transactions that you don't commit until you're absolutely sure the job has run its course. This way, if your function exits mid-process, you're less likely to have to worry about corrupted data. I can think of a couple of scenarios where you might want to commit transactions as you go (batch jobs, for instance); however, you would then need to structure your data or logic so that previously processed entities aren't reprocessed after the job restarts.
You are not in trouble if you don't handle job quits yourself. My understanding of what's going on under the covers is virtually non-existent; however, I am quite sure of the results. If a function is in the middle of processing a queue message and is forced to quit before it can finish, HAVE NO FEAR! When the job grabs the message to process, it essentially hides it on the queue for a certain amount of time (the visibility timeout). If your function quits while processing the message, that message will become visible again after that time, and it will be re-grabbed and run against the (potentially updated) code that was just deployed.
I have about 90% confidence in my findings for #4, because testing it involved quickly switching between windows while not being totally sure what was going on with certain pieces. But here's what I found: on the off chance that a queue has a new message added to it during the grace period before a job quits, I think one of two things can happen. If the function doesn't poll that queue before the job quits, the message will stay on the queue and be grabbed when the job restarts. However, if the function does grab the message, it will be treated the same as any other interrupted message: it will become visible on the queue again and be re-run when the job restarts.
That pretty much sums it up. I hope other people will find this useful. Let me know if you want any of this expounded on and I'll be happy to try. Or if I'm full of it and you have lots of corrections, those are probably more welcome!

Why does the Ember UI hang during a long promise call?

I have a long-running call that is encapsulated by a promise.
From my understanding of promises, they allow us to run asynchronous tasks that will be dealt with when they return; until they return, the function should continue.
In my example:
the action is entered
it updates a variable that changes the UI
it executes doSomethingLongRSVP
It should then exit the function and update the UI, but instead it waits for the promise to resolve before updating the UI.
http://emberjs.jsbin.com/kepuki/5/edit?js,output
The console.log loop will hang your UI because it runs synchronously. If you replace the console.log loop with a setTimeout, for example, you will see the button update before the promise resolves.
If you want to emulate a promise that takes a long time to resolve, you should definitely use setTimeout. Otherwise the time is spent synchronously executing the loop (inside doSomethingLongRSVP) and then executing the next statement (setting the variable to clicked). With setTimeout, the moment the promise resolves is delayed without blocking, giving the effect of a long network request.
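Something along these lines (a sketch, not the exact code from the jsbin):

function doSomethingLongRSVP() {
  return new Ember.RSVP.Promise(function(resolve) {
    // setTimeout yields back to the browser, so the UI can repaint
    // (e.g. show the "clicked" state) before the promise resolves.
    setTimeout(function() {
      resolve('done');
    }, 3000); // pretend this is a slow network request
  });
}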

How to invoke CefV8Context::Eval() method in browser process?

I want to invoke the CefV8Context::Eval function and get the returned value on the browser process's UI thread. But the CEF3 C++ API docs state that V8 handles can only be accessed from the thread on which they are created, and that valid threads for creating a V8 handle include the render process main thread (TID_RENDERER) and WebWorker threads. Does that mean I should use inter-process communication (CefProcessMessage) to invoke that method and get the return value? If so, how do I do this synchronously?
Short answer: CefFrame::ExecuteJavaScript for simple requests will work. For more complex ones, you have to give up one level of synchronousness or use a custom message loop.
What I understand you want to do is execute some JavaScript code as part of your native app's UI thread. There are two possibilities:
It's generic JS code that doesn't really access any variables or functions in your JS, and as such has no context. This means CEF can just spin up a new V8 context and execute your code - see CefFrame::ExecuteJavaScript(). To quote the examples on CEF's JS integration page:
CefRefPtr<CefBrowser> browser = ...;
CefRefPtr<CefFrame> frame = browser->GetMainFrame();
frame->ExecuteJavaScript("alert('ExecuteJavaScript works!');",
                         frame->GetURL(), 0);
It's JS code with a context. In this case, read on.
Yes - CEF is designed such that only the render process has access to the V8 engine, so you'll have to use a CefProcessMessage to head over to the render process and do the evaluation there. You sound like you already know how to do that. I'll link an answer of mine for others who don't and may stumble upon this later: Background process on the native function at Chromium Embedded Framework.
The CefProcessMessage from the browser to the render process is one place where the request has to be synchronized.
So after you send your logic over to the render process, you'll need to do the actual execution of the JavaScript code. That, thankfully, is quite easy - the same JS integration link goes on to say:
Native code can execute JS functions by using the ExecuteFunction()
and ExecuteFunctionWithContext() methods
The best part - the execution seems to be synchronous (I say seems to, since I can't find concrete docs on this). The usage in the examples illustrates this:
if (callback_func_->ExecuteFunctionWithContext(callback_context_, NULL, args, retval, exception, false)) {
  if (exception.get()) {
    // Execution threw an exception.
  } else {
    // Execution succeeded.
  }
}
You'll notice that the second line assumes that the first has finished execution and that the results of said execution are available to it. So the CefV8Value::ExecuteFunction() call is by nature synchronous.
So the question boils down to: how do I post a CefProcessMessage from the browser to the renderer process synchronously? Unfortunately, the class itself is not set up to do that. What's more, the IPC wiki page explicitly disallows it:
Some messages should be synchronous from the renderer's perspective.
This happens mostly when there is a WebKit call to us that is supposed
to return something, but that we must do in the browser. Examples of
this type of messages are spell-checking and getting the cookies for
JavaScript. Synchronous browser-to-renderer IPC is disallowed to
prevent blocking the user-interface on a potentially flaky renderer.
Is this such a big deal? Well, I don't really know since I've not come across this need - to me, it's ok since the Browser's message loop will keep spinning away waiting for something to do, and receive nothing till your renderer sends a process message with the results of JS. The only way the browser gets something else to do is when some interaction happens, which can't since the renderer is blocking.
If you really definitely need synchronousness, I'd recommend that you use your custom MessageLoop which calls CefDoMessageLoopWork() on every iteration. That way, you can set a flag to suspend loop work until your message is received from renderer. Note that CefDoMessageLoopWork() and CefRunMessageLoop() are mutually exclusive and cannot work with each other - you either manage the loop yourself, or let CEF do it for you.
That was long, and covers most of what you might want to do - hope it helps!

Kafka's ZookeeperConsumerConnector.createMessageStreams never returns

I'm trying to retrieve data from my Kafka 0.8.1 cluster. I have brought into existence an instance of ZookeeperConsumerConnector and then attempted to call createMessageStreams on it. However, no matter what I do, it seems createMessageStreams just hangs and never returns, even if it is the only thing I have done with Kafka.
Reading mailing lists it seems this can sometimes happen for a few reasons, but as far as I can tell I haven't done any of those things.
Further, I'll point out that I'm actually doing this in Clojure using clj-kafka, but I suspect clj-kafka is not the issue because I have the problem even if I run this code:
(.createMessageStreams
  (clj-kafka.consumer.zk/consumer {"zookeeper.connect"  "127.0.0.1:2181"
                                   "group.id"           "my.consumer"
                                   "auto.offset.reset"  "smallest"
                                   "auto.commit.enable" "false"})
  {"mytopic" (int 1)})
And clj-kafka.consumer.zk/consumer just uses Consumer.createJavaConsumerConnector to create a ZookeeperConsumerConnector without doing anything too fancy.
Also, there are definitely messages in "mytopic" because from the command line I can run the following and get back everything I've already sent to the topic:
% kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic mytopic --from-beginning
So it's also not that the topic is empty.
Feeling stumped at this point. Ideas?
ETA: By "hang" I guess what I really mean is that it seems to spin up a thread and then stay stuck in it never doing anything. If I run this code from the REPL I can get out of it by hitting control-c and then I get this error:
IllegalMonitorStateException java.util.concurrent.locks.ReentrantLock$Sync.tryRelease (ReentrantLock.java:155)
I was experiencing the same issue with the same exception when interrupting the REPL. The reason it hangs is due to the lazy-iterate function in the consumer.zk namespace. The queue from which messages are read is a LinkedBlockingQueue and the call to .hasNext in the lazy-iterate function calls .take on this queue. This creates a read lock on the queue and will block and wait until something is available to take off the queue. This means that the lazy-iterate function will never actually return. lazy-iterate is called by the 'messages' function and if you don't do something like
(take 2 (messages "mytopic" some-consumer))
then the messages function will never return and will hang indefinitely. It's my opinion that this is a bug (or design flaw) in clj-kafka. To illustrate that this is indeed what's happening, try setting "consumer.timeout.ms" "0" in your consumer config. It will throw a TimeoutException and return control to the REPL.
This further creates a problem with the 'with-resource' macro. The macro takes a binding to a consumer, a shutdown function, and a body; it calls the body and then the shutdown fn. If inside the body, you make a call to 'messages', the body will never return and thus the shutdown function will never be called. The messages function WOULD return if shutdown was called, because shutdown puts a message on the queue that signals the consumer to clean up its resources and threads in preparation for GC. This macro puts the application into a state where the only way to get out of the main loop is to kill the application (or a thread calling it) itself. The library certainly has a ways to go before it's ready for a production environment.

What is Ember RunLoop and how does it work?

I am trying to understand how the Ember RunLoop works and what makes it tick. I have looked at the documentation, but still have many questions about it. I am interested in understanding the RunLoop better so I can choose the appropriate method within its namespace when I have to defer execution of some code to a later time.
When does the Ember RunLoop start? Is it dependent on the Router, Views, Controllers, or something else?
How long does it approximately take? (I know this is rather silly to ask and depends on many things, but I am looking for a general idea, or maybe whether there is a minimum or maximum time a run loop may take.)
Is the RunLoop executing at all times, or does it just indicate a period of time from the beginning to the end of execution, and so may not run for some time?
If a view is created from within one RunLoop, is it guaranteed that all its content will make it into the DOM by the time the loop ends?
Forgive me if these are very basic questions, I think understanding these will help noobs like me use Ember better.
Update 10/9/2013: Check out this interactive visualization of the run loop: https://machty.s3.amazonaws.com/ember-run-loop-visual/index.html
Update 5/9/2013: all the basic concepts below are still up to date, but as of this commit, the Ember Run Loop implementation has been split off into a separate library called backburner.js, with some very minor API differences.
First off, read these:
http://blog.sproutcore.com/the-run-loop-part-1/
http://blog.sproutcore.com/the-run-loop-part-2/
They're not 100% accurate to Ember, but the core concepts and motivation behind the RunLoop still generally apply to Ember; only some implementation details differ. But, on to your questions:
When does the Ember RunLoop start? Is it dependent on the Router, Views, Controllers, or something else?
All of the basic user events (e.g. keyboard events, mouse events, etc) will fire up the run loop. This guarantees that whatever changes made to bound properties by the captured (mouse/keyboard/timer/etc) event are fully propagated throughout Ember's data-binding system before returning control back to the system. So, moving your mouse, pressing a key, clicking a button, etc., all launch the run loop.
How long does it approximately take? (I know this is rather silly to ask and depends on many things, but I am looking for a general idea, or maybe whether there is a minimum or maximum time a run loop may take.)
At no point will the RunLoop ever keep track of how much time it's taking to propagate all the changes through the system and then halt the RunLoop after reaching a maximum time limit; rather, the RunLoop will always run to completion, and won't stop until all the expired timers have been called, bindings propagated, and perhaps their bindings propagated, and so on. Obviously, the more changes that need to be propagated from a single event, the longer the RunLoop will take to finish. Here's a (pretty unfair) example of how the RunLoop can get bogged down with propagating changes compared to another framework (Backbone) that doesn't have a run loop: http://jsfiddle.net/jashkenas/CGSd5/ . Moral of the story: the RunLoop's really fast for most things you'd ever want to do in Ember, and it's where much of Ember's power lies, but if you find yourself wanting to animate 30 circles with Javascript at 60 frames per second, there might be better ways to go about it than relying on Ember's RunLoop.
Is the RunLoop executing at all times, or does it just indicate a period of time from the beginning to the end of execution, and so may not run for some time?
It is not executed at all times -- it has to return control back to the system at some point or else your app would hang -- it's different from, say, a run loop on a server that has a while(true) and goes on for infinity until the server gets a signal to shut down... the Ember RunLoop has no such while(true) but is only spun up in response to user/timer events.
If a view is created from within one RunLoop, is it guaranteed that all its content will make it into the DOM by the time the loop ends?
Let's see if we can figure that out. One of the big changes from SproutCore to the Ember RunLoop is that, instead of looping back and forth between invokeOnce and invokeLast (which you see in the diagram in the first link about SproutCore's run loop), Ember provides a list of 'queues' into which, in the course of a run loop, you can schedule actions (functions to be called during the run loop) by specifying which queue the action belongs in (example from the source: Ember.run.scheduleOnce('render', bindView, 'rerender');).
If you look at run_loop.js in the source code, you see Ember.run.queues = ['sync', 'actions', 'destroy', 'timers'];, yet if you open your JavaScript debugger in the browser in an Ember app and evaluate Ember.run.queues, you get a fuller list of queues: ["sync", "actions", "render", "afterRender", "destroy", "timers"]. Ember keeps their codebase pretty modular, and they make it possible for your code, as well as its own code in a separate part of the library, to insert more queues. In this case, the Ember Views library inserts render and afterRender queues, specifically after the actions queue. I'll get to why that might be in a second. First, the RunLoop algorithm:
The RunLoop algorithm is pretty much the same as described in the SC run loop articles above:
You run your code between RunLoop .begin() and .end(), only in Ember you'll want to instead run your code within Ember.run, which will internally call begin and end for you. (Only internal run loop code in the Ember code base still uses begin and end, so you should just stick with Ember.run)
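For instance, here's a minimal sketch of handing a chunk of work to Ember.run from a callback Ember doesn't manage itself (the WebSocket and controller here are purely illustrative):

var socket = new WebSocket('ws://example.com/feed'); // illustrative source of events

socket.onmessage = function(event) {
  // No run loop is active in a raw WebSocket callback, so wrap the work:
  Ember.run(function() {
    // 'controller' stands in for whatever object you're updating;
    // bound properties are flushed when this run() block ends.
    controller.set('lastMessage', event.data);
  });
};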
After end() is called, the RunLoop then kicks into gear to propagate every single change made by the chunk of code passed to the Ember.run function. This includes propagating the values of bound properties, rendering view changes to the DOM, etc etc. The order in which these actions (binding, rendering DOM elements, etc) are performed is determined by the Ember.run.queues array described above:
The run loop will start off on the first queue, which is sync. It'll run all of the actions that were scheduled into the sync queue by the Ember.run code. These actions may themselves also schedule more actions to be performed during this same RunLoop, and it's up to the RunLoop to make sure it performs every action until all the queues are flushed. The way it does this is, at the end of every queue, the RunLoop will look through all the previously flushed queues and see if any new actions have been scheduled. If so, it has to start at the beginning of the earliest queue with unperformed scheduled actions and flush out the queue, continuing to trace its steps and start over when necessary until all of the queues are completely empty.
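To make that concrete, here is a heavily simplified model of the flushing algorithm (illustrative JavaScript, not Ember's actual source):

function flushAllQueues(queues) {
  // queues is an ordered array of arrays of scheduled functions,
  // e.g. [syncActions, actionsActions, renderActions, ...]
  var index = 0;
  while (index < queues.length) {
    // Drain everything currently scheduled in this queue. Running an action
    // may schedule new actions into any queue, including earlier ones.
    while (queues[index].length > 0) {
      var action = queues[index].shift();
      action();
    }
    // Before moving on, check whether an earlier queue picked up new work;
    // if so, start over from the earliest non-empty queue.
    var earliest = earliestNonEmpty(queues, index);
    index = (earliest === -1) ? index + 1 : earliest;
  }
}

function earliestNonEmpty(queues, upTo) {
  for (var i = 0; i <= upTo; i++) {
    if (queues[i].length > 0) { return i; }
  }
  return -1;
}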
That's the essence of the algorithm. That's how bound data gets propagated through the app. You can expect that once a RunLoop runs to completion, all of the bound data will be fully propagated. So, what about DOM elements?
The order of the queues, including the ones added in by the Ember Views library, is important here. Notice that render and afterRender come after sync and actions. The sync queue contains all the actions for propagating bound data. (actions, after that, is only sparsely used in the Ember source.) Based on the above algorithm, it is guaranteed that by the time the RunLoop gets to the render queue, all of the data-bindings will have finished synchronizing. This is by design: you wouldn't want to perform the expensive task of rendering DOM elements before syncing the data-bindings, since that would likely require re-rendering DOM elements with updated data -- obviously a very inefficient and error-prone way of emptying all of the RunLoop queues. So Ember intelligently blasts through all the data-binding work it can before rendering the DOM elements in the render queue.
So, finally, to answer your question, yes, you can expect that any necessary DOM renderings will have taken place by the time Ember.run finishes. Here's a jsFiddle to demonstrate: http://jsfiddle.net/machty/6p6XJ/328/
Other things to know about the RunLoop
Observers vs. Bindings
It's important to note that observers and bindings, while having the similar functionality of responding to changes in a "watched" property, behave totally differently in the context of a RunLoop. Binding propagation, as we've seen, gets scheduled into the sync queue to eventually be executed by the RunLoop. Observers, on the other hand, fire immediately when the watched property changes, without having to be scheduled into a RunLoop queue first. If an observer and a binding both "watch" the same property, the observer will always be called before the binding is updated.
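A quick sketch of the difference (illustrative classic-Ember code, assuming prototype extensions are enabled):

var person = Ember.Object.extend({
  firstName: 'Tom',

  // Observer: fires synchronously, the moment firstName is set.
  firstNameDidChange: function() {
    console.log('observer fired immediately');
  }.observes('firstName')
}).create();

Ember.run(function() {
  person.set('firstName', 'Yehuda'); // the observer logs right here
  // Anything bound to firstName (e.g. a template) only updates after this
  // block ends and the sync/render queues have been flushed.
});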
scheduleOnce and Ember.run.once
One of the big efficiency boosts in Ember's auto-updating templates is based on the fact that, thanks to the RunLoop, multiple identical RunLoop actions can be coalesced ("debounced", if you will) into a single action. If you look into the run_loop.js internals, you'll see the functions that facilitate this behavior are the related functions scheduleOnce and Em.run.once. The difference between them isn't as important as knowing that they exist and how they can discard duplicate actions in a queue to prevent a lot of bloated, wasteful calculation during the run loop.
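For example (a sketch):

function expensiveRecalculation() {
  console.log('recalculated');
}

Ember.run(function() {
  // Scheduled three times, but coalesced into a single call
  // when the current run loop flushes its queues.
  Ember.run.once(expensiveRecalculation);
  Ember.run.once(expensiveRecalculation);
  Ember.run.once(expensiveRecalculation);
});
// => logs "recalculated" exactly once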
What about timers?
Even though 'timers' is one of the default queues listed above, Ember only makes reference to that queue in its RunLoop test cases. It seems such a queue would have been used in the SproutCore days, based on some of the descriptions from the above articles about timers being the last thing to fire. In Ember, the timers queue isn't used. Instead, the RunLoop can be spun up by an internally managed setTimeout event (see the invokeLaterTimers function), which is intelligent enough to loop through all the existing timers, fire all the ones that have expired, determine the earliest future timer, and set an internal setTimeout for that event only, which will spin up the RunLoop again when it fires. This approach is more efficient than having each timer call setTimeout and wake itself up, since in this case only one setTimeout call needs to be made, and the RunLoop is smart enough to fire all the different timers that might be going off at the same time.
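For example (a sketch):

// Both of these are tracked by Ember's internal timer bookkeeping; a single
// setTimeout wakes the run loop, which then fires every expired timer together.
Ember.run.later(function() { console.log('timer A (~100ms)'); }, 100);
Ember.run.later(function() { console.log('timer B (~100ms)'); }, 100);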
Further debouncing with the sync queue
Here's a snippet from the run loop source, in the middle of a loop through all the queues in the run loop. Note the special case for the sync queue: because sync is a particularly volatile queue, in which data is being propagated in every direction, Ember.beginPropertyChanges() is called to prevent any observers from being fired, followed by a call to Ember.endPropertyChanges. This is wise: in the course of flushing the sync queue, it's entirely possible that a property on an object will change multiple times before resting on its final value, and you wouldn't want to waste resources by immediately firing observers on every single change.
if (queueName === 'sync') {
  log = Ember.LOG_BINDINGS;

  if (log) {
    Ember.Logger.log('Begin: Flush Sync Queue');
  }

  Ember.beginPropertyChanges();
  Ember.tryFinally(tryable, Ember.endPropertyChanges);

  if (log) {
    Ember.Logger.log('End: Flush Sync Queue');
  }
} else {
  forEach.call(queue, iter);
}