I would like to write tests that work as follows:
start an asynchronous Test
after this test is done, start the next asynchronous Test
do that for an arbitrary number of tests
Setting QUnit.config.reorder to false does not prevent the tests from being started before the previous one has finished.
asyncTest('test1',function(){}); // all tests are started back to back,
asyncTest('test2',function(){}); // but I would like to start them
asyncTest('test3',function(){}); // one after the other
I know that tests should be atomic, but in this case that would lead to one huge test, which itself might become error-prone, so I would like to split it up.
Right now I am »packing« each test into a wrapping function and calling this function after the previous test is done, but that is somewhat awkward, and I would like to know what best practice exists for this.
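For illustration, the workaround looks roughly like this (QUnit 1.x style; the names are made up):
// Each test is wrapped in a function; the next wrapper is only called
// once the current test has finished its asynchronous work.
function runTest1() {
    asyncTest('test1', function () {
        // ... asynchronous work and assertions ...
        start();
        runTest2();   // queue the next test only now
    });
}
function runTest2() {
    asyncTest('test2', function () {
        // ... asynchronous work and assertions ...
        start();
    });
}
runTest1();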
Cheers!
I found a solution to reset these globals by reloading the entire script before each test, using the QUnit.testStart() method. So now I have both atomic tests and freshly initialised variables in each test.
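A rough sketch of that idea (QUnit.testStart is the real hook; myModule.init() is a hypothetical stand-in for re-running the script that defines the globals):
// Runs before each test, so every test starts with freshly initialised globals.
QUnit.testStart(function () {
    myModule.init();   // placeholder for "reload the entire script"
});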
I have created a framework in which I use Set Browser Implicit Wait 30.
I have 50 suites containing a total of 700 test cases. Some of the test cases (about 200) have steps that check whether an element is present or not present. My objective is to avoid waiting the full 30 seconds for those checks. I tried Wait Until Element Is Visible ${locator} timeout=10, expecting to wait only 10 seconds for the element, but it still waits for 30 seconds.
Question: Can somebody suggest the right approach to deal with such scenarios in my framework? If I accept the 30-second wait, these test cases take much longer to complete; I am currently trying to save about 20*200 seconds. Please advise.
The simplest solution is to change the implicit wait right before checking that an element does not exist, and then change it back afterwards. You can do this with the keyword Set Selenium Implicit Wait.
For example, your keyword might look something like this:
*** Keywords ***
Verify Element Is Not On Page
    [Arguments]    ${locator}
    ${old_wait}=    Set Selenium Implicit Wait    10
    Run Keyword And Continue On Failure
    ...    Page Should Not Contain Element    ${locator}
    Set Selenium Implicit Wait    ${old_wait}
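A test case could then use the keyword like this (the locator value is only an example):
*** Test Cases ***
Banner Should Be Gone
    Verify Element Is Not On Page    id=signup-banner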
You can simply add timeout=${Time} next to the keyword you want to execute (e.g., Wait Until Page Contains Element ${locator} timeout=50).
The problem you're running into is the issue of implicit versus explicit waits. Searching the internet will provide you with a lot of good explanations of why mixing them is not recommended, but I think Jim Evans (creator of the IE WebDriver) explained it nicely in this Stack Overflow answer.
Improving the performance of your test run is typically done by utilizing one or both of these:
Shorten the duration of each individual test
Run tests in parallel.
Shortening the duration of a test typically means being in complete control of the application under test, so that the script knows the moment the application has successfully loaded. This means having a low (or no) implicit wait and working exclusively with fluent waits (waiting for a condition to occur). This will result in your tests running at the speed your application allows.
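As a hedged Robot Framework illustration of that idea (URL, locator and timeout are placeholders):
*** Test Cases ***
Login Page Loads At Application Speed
    Set Selenium Implicit Wait    0
    Go To    ${LOGIN_URL}
    # Condition-based (explicit) wait instead of relying on the implicit wait
    Wait Until Element Is Visible    ${LOGIN_BUTTON}    timeout=10
    Click Element    ${LOGIN_BUTTON}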
This may mean investing time understanding the application you test on a technical level. By using a custom locator you can still use all the regular SeleniumLibrary keywords and have a centralized waiting function.
Running tests in parallel starts with having tests that run standalone and have no dependencies on other tests. In Robot Framework this means having Test Suite Files that can run independently of each other. Most of us use Pabot to run our suites in parallel and merge the log file afterwards.
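For example, a Pabot run could look roughly like this (process count and paths are placeholders):
pabot --processes 4 --outputdir results tests/
Pabot executes the suite files in separate processes and merges the result files afterwards, so the final log and report look like those of a single run.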
Running several browser application tests in parallel means running more than one browser at the same time. If you test in Chrome, this can be done on a single host, though it's not always recommended. When you run IE you require multiple boxes/sessions, and then you start to need a Selenium Grid type of solution to distribute the execution load across multiple machines.
I have a class that uses asio::generic::stream_protocol::socket to connect to domain (asio::local::stream_protocol::endpoint) and TCP sockets (asio::ip::tcp::endpoint).
To test that class I have a series of unit tests in a single file using the Catch framework.
I've suddenly come across a problem: when running the tests, they get stuck. By passing -DASIO_ENABLE_HANDLER_TRACKING to the compiler flags I can see that they get stuck on async_connect. This does not happen if I comment out all tests but one, no matter which. If I have two tests, whether they connect to domain or TCP sockets, or one of each, I get a hang.
The output of Asio changes but this is an example:
$ tests/unit_tests
#asio|1478248907.301230|0*1|deadline_timer#0x7f96f1c07ad8.async_wait
#asio|1478248907.301276|0*2|resolver#0x7f96f1c07ac0.async_resolve
#asio|1478248907.301322|>1|ec=system:0
#asio|1478248907.301328|<1|
#asio|1478248907.302052|>2|ec=system:0,...
#asio|1478248907.302186|2*3|socket#0x7f96f1c07a20.async_connect
#asio|1478248907.302302|<2|
#asio|1478248907.302468|>3|ec=system:0
#asio|1478248907.302481|<3|
#asio|1478248907.302551|0*4|socket#0x7f96f1c07a20.async_send
#asio|1478248907.302611|>4|ec=system:0,bytes_transferred=23
#asio|1478248907.302617|<4|
#asio|1478248907.302621|0*5|socket#0x7f96f1c07a20.async_receive(null_buffers)
#asio|1478248907.356478|>5|ec=system:0,bytes_transferred=0
#asio|1478248907.356547|<5|
#asio|1478248907.356622|0|socket#0x7f96f1c07a20.close
#asio|1478248907.372967|0|deadline_timer#0x7f96f1c07ad8.cancel
#asio|1478248907.372981|0|resolver#0x7f96f1c07ac0.cancel
#asio|1478248907.373509|0*6|deadline_timer#0x7f96f1d00468.async_wait
#asio|1478248907.373526|0*7|resolver#0x7f96f1d00450.async_resolve
#asio|1478248907.374910|>7|ec=system:0,...
#asio|1478248907.374946|7*8|socket#0x7f96f1d003b0.async_connect
#asio|1478248907.375014|<7|
#asio|1478248907.375127|>8|ec=system:0
#asio|1478248907.375135|<8|
My question is: what is the problem with running unit tests that open and close connections? If this is a no-no, how do you write unit tests that use async_open?
io_service has run, run_one, poll and poll_one methods, which actually execute the completion handlers. Boost.Asio may have its own threads, but their thread state may not be correct for calling your handlers. Hence, even in a unit test you must figure out which thread is going to call the completion handlers.
Secondly, run runs to completion and then returns. From your description (first test succeeds, second fails) it sounds like you did call run but did not reset and re-run the io_service.
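As a rough illustration of that second point (the connection setup is omitted; only the io_service handling is shown, assuming standalone Asio):
#include <asio.hpp>

// Sketch only: drive the io_service to completion for one test,
// then re-arm it so the next test's handlers also get executed.
void run_test_cycle(asio::io_service& io)
{
    // ... queue async_connect / async_send / async_receive handlers on `io` ...

    io.run();     // executes queued handlers until the io_service runs out of work
    io.reset();   // must be called before run() can be called again on the same object
}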
The problem seemed to be related to the way I was iterating through the output of a tcp::resolver.
Currently I'm coding a network lib based on Boost asio.
I want to automatically test my lib with a kind of loopback echo test.
The problem is that the server runs continuously, so the test never ends.
My idea is to do some EQUAL checks on the response data and to manually stop the unit test with success. If a timeout occurs instead, the test should stop with a failure (I know, it's more of an integration test than a unit test)...
Is there a Boost Unit Test macro to manually stop the test with success?
Thanks!
You can just return from the test function; that will count as success. If you want to explicitly "set" the result, you can use BOOST_CHECK(true) for success or BOOST_CHECK(false) for failure. There are variants of these macros with an additional message to be printed on failure (BOOST_CHECK_MESSAGE).
The test framework itself is single threaded and the individual tests run one after the other. Each test function has to end by either an explicit return statement or execution "falling off the end".
If you call a function that does not return by itself but needs some trigger to do so, you need to somehow schedule this trigger before calling the function. This might require starting a thread, waiting there for a certain time and then sending the trigger to cause the blocking function to return.
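A hedged sketch of that approach for the echo test (the echo setup itself is left out, and all names here are illustrative):
#define BOOST_TEST_MODULE loopback_echo
#include <boost/test/included/unit_test.hpp>
#include <boost/asio.hpp>
#include <chrono>
#include <thread>

BOOST_AUTO_TEST_CASE(echo_round_trip)
{
    boost::asio::io_service io;
    bool got_expected_response = false;

    // ... set up the loopback echo server and client on `io`; in the client's
    //     receive handler compare the reply (e.g. with BOOST_CHECK_EQUAL),
    //     set got_expected_response = true and call io.stop() ...

    // Trigger thread: if the exchange has not finished after 5 seconds,
    // stop the io_service so io.run() below returns instead of blocking forever.
    std::thread watchdog([&io] {
        std::this_thread::sleep_for(std::chrono::seconds(5));
        io.stop();   // safe to call from another thread
    });

    io.run();        // returns once io.stop() is called or all work is done
    watchdog.join(); // in this simplified sketch the join still waits out the
                     // full sleep even when the test succeeds early

    BOOST_CHECK_MESSAGE(got_expected_response,
                        "echo response not received before the timeout");
}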
Hard to write a good title for this question. I am developing a performance test in Gatling for a SOAP Webservice. I'm not very experienced with Gatling so I'm learning things as I go, but this conundrum has me entirely stumped.
One of the scenarios I am implementing a test for is an order-process consisting of several unique consecutive calls to the webservice, one of which is a polling call that returns the current status of the ordering process. Simplified, this call gets a SOAP Response with a status that can be of three types:
PROCESSING - Signifying the order is still processing.
ORDER_OK - Order completed without errors.
EVERYTHING_ELSE - A group of varying error-statuses and other results.
What I want to do, is have Gatling continuously poll the webservice until the processing-status changes - and then check that the status says it completed successfully. Polling continuously is easily implemented, but performing the check after it completes is turning out to be a far greater challenge than it has any business being.
So far, this is what I've done to solve the polling:
exec { session => session.set("status", "PROCESSING") }
  .asLongAs(session => session("status").as[String].equals("PROCESSING")) {
    exec(http("Poll order")
      .post("/MyWebService")
      .body(ELFileBody("bodies/ws/pollOrder.xml"))
      .check(
        status.is(200),
        regex("soapFault").notExists,
        regex("pollResponse").exists,
        xpath("//*[local-name(.)='result']").exists.saveAs("status")
      )
    ).exitHereIfFailed.pause(5 seconds)
  }
This snippet appears to perform the polling correctly: it continues to poll until the order status changes from PROCESSING to something else. However, I still need to check what the status changed to, because only one of the many possible results should allow the scenario to continue for that user.
A potential fix would be to add more checks in that call that go something like this:
.check(regex("EVERYTHING_ELSE_XYZ")).notExists
The service can return a LOT of different "not a happy day" messages, however, and I'm only really interested in the other two, so it would be preferable to check only for the two valid happy-day responses. Checking whether one exact thing exists seems far more sensible than checking that dozens of things don't.
What I thought I would be able to do was perform a check on the status variable in the user's session when the step exits the asLongAs loop, and then continue or abort the scenario for that user. As it's a session variable, I could probably do this in the next step of the overall scenario and break the run for that user there, but that would also mean the error is reported in the wrong place, and the next call's failure percentage would be polluted by errors from the previous call.
Using pseudocode, being able to do something like this immediately after it exits the asLongAs loop would have been perfect:
if (session("status").as[String].equals("ORDER_OK")) ? continueTheScenario : failTheScenario
but I've not been able to do anything similar inside a Gatling chain. It's starting to look impossible, but can anyone see a solution that I'm not seeing?
Instead of "exists", use "in" to check that the result is one of the 2 valid values.
I have some unit tests that all execute things asynchronously. But it turns out that even though I'm calling expectAsync a bunch of times inside my unit tests (sometimes this involves multiple nested asynchronous calls to expectAsync), the unit tests still exit and call the tearDown method, which effectively cuts off the infrastructure my asynchronous tests are running on. What I want is for each test to run and wait until all the expectations, async or not, have completed before continuing on to the next test. Is this possible to achieve? The reason my unit tests have been passing up to now is that the cleanup code in tearDown was also executing asynchronously, but it should ideally work whether it cleans up asynchronously or immediately.
We need to see your code to be able to pinpoint the exact problem.
Most likely you are not calling expectAsync enough. At any time your code is waiting for an asynchronous callback, there must be at least one outstanding expectAsync function waiting to be called.
You can cut it all down to one expectAsync call by creating a "done" function that you call whenever your test is completed:
test("ladadidadida", () {
var done = expectAsync((){});
something.then((a) { return somethingElse(); })
.then((b) { expect(a, b); done(); })
});