Then I wait to see "Choose a drink:#0"
# calabash-cucumber-0.18.0/features/step_definitions/calabash_steps.rb:154
execution expired (Calabash::Cucumber::WaitHelpers::WaitError)
While using the predefined steps, it works fine locally. However, it fails in CI. There are no network calls involved; it's just a transition to a new screen.
Any ideas on how to increase the wait timeout for the predefined steps would be appreciated.
The best thing to do would be to write your own step.
If you look at that step's implementation, you can see that setting the WAIT_TIMEOUT environment variable increases the wait time:
$ WAIT_TIMEOUT=120 bundle exec cucumber
Still, I think writing your own step is the better approach.
Scenario
I'm looking for a way to create an instance of a step function that waits for me to start it. Pseudocode would look like this:
StateMachine myStateMachine = new();
string executionArn = myStateMachine.ExecutionArn;
myStateMachine.Start();
Use Case
We need a way to reliably store the Execution ARN of a step function in a database. If we fail to write the Execution ARN to the database, we won't call the Start method and the step function should time out. If starting the step function fails, the database operation would be rolled back.
These are the steps we plan to take:
A local transaction is started
The step function instance is created, but not started
The ExecutionArn of the created step function instance is recorded in a database
The step function is started
The local transaction is committed
Is there a simple way to start a step function like this?
Below is the result of some research I've done on this so far.
Manual Callbacks
Following the information in this article, https://aws.amazon.com/blogs/compute/implementing-serverless-manual-approval-steps-in-aws-step-functions-and-amazon-api-gateway/,
I created an empty activity, used it as the first step in the step function, and added a timeout of 30 seconds to the activity step. The expectation was that if I didn't send a success to that activity task, the step would time out and the workflow would fail, but that isn't happening. Even though I set the timeout to 30 seconds, the step does not time out. I'm guessing the timeout governs how long the step function waits to be able to schedule the activity, not how long it waits before moving on from the activity step.
I've also considered using an SQS SendMessage step with Wait for callback checked and a similar timeout, but that would require creating a throw-away SQS queue just to hold messages I never intend to read, and I'm guessing the timeout would behave the same way there as it does for an activity.
Wait State
There may be something I can do with a Wait state and parallel branches, following the accepted answer to this SO question: Does AWS Step Functions have a timeout feature? But before I go down that route, I want to see if something simpler can be done.
Global Timeout
I have found that step functions have a global timeout, which is useful here if I combine it with a step that pauses until my application explicitly resumes it. However, the global timeout only helps if it can be set reasonably low (say, 20 minutes) while keeping the step function viable for all use cases. For instance, if the maximum time the step function should take is 2 or 3 minutes, all is fine. But if another step in the step function can take longer than 20 minutes, I can no longer use the global timeout, or I have to set it to something very high, which I don't want to do.
Is there anything I can do here easily that I'm overlooking?
Thanks
Two-phase initialization of a step function cannot be done. We've worked around this by:
Our Application: Write a row in our DB to indicate the intent to start a step function
Our Application: Start the step function
Our Application: Record the ExecutionArn of the step function instance in the created row
Step Function: Make the first state an SQS step (with Wait for callback) on which the step function waits indefinitely
Our Application: Poll the SQS queue and either abort the step function or allow it to proceed to the next step by sending a callback to the SQS step. (This is the second phase; a sketch follows below.)
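For illustration, here is a rough sketch of the application side of that flow using the AWS SDK for .NET (AWSSDK.StepFunctions). The RecordIntentAsync/SaveExecutionArnAsync helpers and the intentId input field are placeholders for this sketch, not our actual code:
using System;
using System.Threading.Tasks;
using Amazon.StepFunctions;
using Amazon.StepFunctions.Model;

public class WorkflowStarter
{
    private readonly IAmazonStepFunctions _sfn = new AmazonStepFunctionsClient();

    // Phase 1: record the intent, start the execution, then record its ExecutionArn.
    public async Task<string> StartWithIntentAsync(string stateMachineArn)
    {
        // Write a row indicating the intent to start (placeholder helper).
        var intentId = await RecordIntentAsync();

        // Start the execution; its first state is the SQS "wait for callback" step.
        var start = await _sfn.StartExecutionAsync(new StartExecutionRequest
        {
            StateMachineArn = stateMachineArn,
            Input = $"{{\"intentId\":\"{intentId}\"}}"
        });

        // Record the ExecutionArn against the intent row (placeholder helper).
        await SaveExecutionArnAsync(intentId, start.ExecutionArn);
        return start.ExecutionArn;
    }

    // Phase 2: after polling the SQS queue, release or abort the waiting execution
    // using the task token that the first state placed on the queue.
    public Task ReleaseAsync(string taskToken) =>
        _sfn.SendTaskSuccessAsync(new SendTaskSuccessRequest { TaskToken = taskToken, Output = "{}" });

    public Task AbortAsync(string taskToken) =>
        _sfn.SendTaskFailureAsync(new SendTaskFailureRequest
        {
            TaskToken = taskToken,
            Error = "StartAborted",
            Cause = "The local database transaction was rolled back."
        });

    // Placeholder persistence helpers, stubbed for the sketch.
    private Task<string> RecordIntentAsync() => Task.FromResult(Guid.NewGuid().ToString());
    private Task SaveExecutionArnAsync(string intentId, string executionArn) => Task.CompletedTask;
}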
We are using an Azure WebJob for batch processing; the job is triggered when there is a message in the storage queue.
We have configured the job to process the messages one by one:
JobHostConfiguration config = new JobHostConfiguration();
config.Queues.BatchSize = 1;        // fetch/process one queue message at a time
config.Queues.MaxDequeueCount = 1;  // attempts before a message is moved to the poison queue
Even so, the job is taking multiple messages from the storage queue and executing them in parallel.
Please help.
taking multiple messages from the storage queue and executing them in parallel
How did you judge that it takes multiple messages and executes them in parallel? Do you have multiple instances running?
I tested the code in different situations.
1) The normal situation, without setting the batch size: it pulls all the messages in the queue. I think it still runs them one by one, but from the result it does not appear to wait for the previous run to finish completely. Here is the result.
2) With the batch size set to 1: if you debug the code or refresh the queue frequently, you will find it pulls one message at a time and runs it. Here is the result.
3) With the batch size set to 3, debugging: only the number of messages pulled changes; each time it pulls 3 messages and then runs as in the case without a batch size set. Here is the result. I also found that if you just run (not debug), the order shown in the console is very organized.
So if you don't have another instance running, I think this is working in sequential mode.
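For reference, here is a minimal sketch of the kind of host and function used for the test above (WebJobs SDK 2.x, assuming the AzureWebJobsStorage connection string is configured); the queue name and the artificial delay are placeholders, just to make the sequential behaviour visible in the console:
using System;
using System.Threading;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        config.Queues.BatchSize = 1;        // pull and process one message at a time
        config.Queues.MaxDequeueCount = 1;  // one failed attempt sends the message to the poison queue

        var host = new JobHost(config);
        host.RunAndBlock();
    }
}

public class Functions
{
    // Runs once per message on the (placeholder) "batch-queue" storage queue.
    public static void ProcessQueueMessage([QueueTrigger("batch-queue")] string message)
    {
        Console.WriteLine($"{DateTime.UtcNow:O} start  {message}");
        Thread.Sleep(5000); // simulate slow work so overlapping executions would be obvious
        Console.WriteLine($"{DateTime.UtcNow:O} finish {message}");
    }
}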
If this doesn't match your requirements or you still have questions, please let me know.
In Intel TBB, I'm trying to:
1. Create a set of tasks
2. Let them run
3. When one of them finishes, I get some result out of it and kill the others.
How can I do that? I can only see an API to wait for all of the tasks, not just a single one...
Thanks.
The task that finishes can store its result in a known place and cancel the group with task::self().cancel_group_execution(). The wait_for_all() will then become unblocked and that thread can load the result from the known place.
https://www.threadingbuildingblocks.org/docs/help/tbb_userguide/Cancellation_Without_An_Exception.html shows how to use cancel_group_execution().
I have created a framework in which I have used Set Browser Implicit Wait 30
I have 50 suites containing a total of 700 test cases. A few of the test cases (200 TCs) have steps that check whether an element is present or not present. My objective is to avoid waiting the full 30 seconds for those checks. I tried using Wait Until Element Is Visible ${locator} timeout=10, expecting to wait only 10 seconds for the element, but it waits for 30 seconds.
Question: Can somebody suggest the right approach to deal with such scenarios in my framework? If I accept the 30-second wait, these test cases take much longer to complete; I am currently trying to save 20 * 200 seconds. Please advise.
The simplest solution is to change the implicit wait right before checking that an element does not exist, and then changing it back afterwards. You can do this with the keyword set selenium implicit wait.
For example, your keyword might look something like this:
*** Keywords ***
verify element is not on page
    [Arguments]    ${locator}
    ${old_wait}=    Set Selenium Implicit Wait    10
    Run Keyword And Continue On Failure
    ...    Page Should Not Contain Element    ${locator}
    Set Selenium Implicit Wait    ${old_wait}
You can simply add timeout=${Time} next to the keyword you want to execute (e.g., Wait Until Page Contains Element ${locator} timeout=50).
The problem you're running into stems from the issue of implicit waits vs. explicit waits. Searching the internet will give you a lot of good explanations of why mixing them is not recommended, but I think Jim Evans (creator of the IE WebDriver) explained it nicely in this Stack Overflow answer.
Improving the performance of your test run is typically done by utilizing one or both of these:
Shorten the duration of each individual test
Run tests in parallel.
Shortening the duration of a test typically means being in complete control of the application under test, so the script knows the moment the application has successfully loaded. This means having a low (or no) implicit wait and working exclusively with fluent waits (waiting for a condition to occur). This will let your tests run at the speed your application allows.
This may mean investing time in understanding the application you test on a technical level. By using a custom locator you can still use all the regular SeleniumLibrary keywords and have a centralized waiting function.
Running tests in parallel starts with having tests that run standalone and have no dependencies on other tests. In Robot Framework this means having Test Suite Files that can run independently of each other. Most of us use Pabot to run our suites in parallel and merge the log file afterwards.
Running several browser application tests in parallel means running more than one browser at the same time. If you test in Chrome, this can be done on a single host, though it's not always recommended. When you run IE, you require multiple boxes/sessions, and then you start to need a Selenium Grid type solution to distribute the execution load across multiple machines.
I have set up MRTG with rrdtool and routers2.cgi, and the setup is working fine; I'm happy as a beginner :)
I have set 'ThreshDir:', 'ThreshMinI' and 'ThreshProgI' in my MRTG cfgs. On the first run, the script set in 'ThreshProgI' runs without any issue, but it does not run on the subsequent 5-minute runs.
I see that a file is generated in the 'ThreshDir:' location at the first MRTG run. If I remove that file, my script in 'ThreshProgI' runs again on the next MRTG run.
So far, what I notice is that after the file in 'ThreshDir:' is generated, 'ThreshProgI' stops working in my setup. What could be the reason for this, and how can I make 'ThreshProgI' run every 5 minutes (whenever 'ThreshMinI' is breached)?
This is by design.
MRTG only runs the threshold program the FIRST time the threshold is broken, and not on the following runs, until it recovers. The last status is held in the ThreshDir in order to manage this.
There is a separate directive (ThreshProgOKI) for a threshold program to run on recovery.
The only way to trick MRTG into running the threshold program on every run, regardless of the previous status, is to delete the status history file in the ThreshDir on each pass (as you are doing).