I am using Postman for API automation. I created a collection and ran it in the Collection Runner with the iteration count set to 5. I am trying to stop the whole run if any scenario fails.
I tried the options below, but when the current test fails, the runner still moves on to the next iteration. How can I stop all remaining iterations?
postman.setNextRequest(null);
throw new Error('halt')
As you mentioned, you can use postman.setNextRequest(null) and do something like this in your test script:
// I used the 500 status code as the stopping condition
if (pm.response.code === 500) {
    postman.setNextRequest(null);
}
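Note that postman.setNextRequest(null) ends only the current iteration, which matches the behavior described in the question. As noted later on this page, a thrown error in a pre-request script stops the entire suite in Collection Runner mode, so one possible workaround (an assumption, not part of the original answer) is to persist a failure flag and throw on it in the pre-request script of the next iteration. A standalone sketch with pm stubbed (Postman provides the real object in its sandbox):

```javascript
// Standalone sketch; `pm` is stubbed here (Postman provides it in the sandbox).
const store = {};
const pm = {
  environment: { set: (k, v) => { store[k] = v; }, get: (k) => store[k] },
  response: { code: 500 },                 // pretend this request failed
};

// Test script of each request: record the failure.
if (pm.response.code === 500) {
  pm.environment.set('abortRun', 'true');
}

// Pre-request script of each request: abort the whole run on a prior failure.
function preRequest() {
  if (pm.environment.get('abortRun') === 'true') {
    throw new Error('A previous iteration failed; aborting the run');
  }
}

// The next iteration's pre-request script now throws, which the
// Collection Runner treats as a hard stop.
try { preRequest(); } catch (e) { console.log(e.message); }
```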
I am writing a Cypress custom command that fetches a JSON response from an API endpoint, and I am writing some assertions on that response. However, I have a conditional block that needs to be executed first. See below.
cy.getReportJson('84b636f4-c8f0-4aa4-bdeb-15abf811d432', user).then(report => {
  if (services.request_criminal_record_check.include) {
    console.log('inside if')
    cy.wait(30000)
    expect(report.report_summary.rcmp_result.status).equal(data.expected_result.rcmp_result.status)
    expect(report.report_summary.rcmp_result.overall_score).equal(data.expected_result.rcmp_result.overall_score)
    expect(report.report_summary.rcmp_result.result).equal(data.expected_result.rcmp_result.result)
  }
})
When I run this code in a spec file, the output shows the assertions running before the wait command is triggered.
I want Cypress to wait for 30 seconds so the back-end can work its magic and generate a report, and only after those 30 seconds do I want to assert on the report JSON.
Even the console.log is printed after the assertions are executed.
Is this related to the async nature of Cypress?
You need to queue the assertions. expect() runs immediately, but cy.wait() only pauses the Cypress command queue, not the surrounding JavaScript.
cy.wait(30000)
cy.then(() => {
  expect(report.report_summary.rcmp_result.status).equal(data.expected_result.rcmp_result.status)
  expect(report.report_summary.rcmp_result.overall_score).equal(data.expected_result.rcmp_result.overall_score)
  expect(report.report_summary.rcmp_result.result).equal(data.expected_result.rcmp_result.result)
})
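The ordering issue can be illustrated with a plain JavaScript sketch of a command queue (no Cypress involved; the names here are invented for the demo): synchronous calls like expect() run immediately, while cy.* commands are only enqueued and executed later.

```javascript
// Plain JavaScript sketch of a command queue (no Cypress involved).
const queue = [];
const log = [];
const enqueue = (fn) => queue.push(fn);   // stand-in for cy.* commands

enqueue(() => log.push('wait done'));     // cy.wait(30000) only enqueues work
log.push('expect ran');                   // expect() runs immediately
queue.forEach((fn) => fn());              // Cypress drains the queue afterwards

console.log(log);                         // [ 'expect ran', 'wait done' ]
```

Wrapping the assertions in cy.then() enqueues them, so they run in their proper place in the queue, after the wait.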
This is a scenario unit-testing a user entering a password and a password confirmation. When I try to verify the same method being called in a different on() block, I get the following error on the second on() block:
org.mockito.exceptions.verification.TooManyActualInvocations:
activationPasswordView.disableButton();
Wanted 1 time:
But was twice
Here is the code:
given("user set password ") {
    on("password is null") {
        presenterImpl.validatePassword(null, null)
        it("done button should be disabled") {
            verify(view).disableButton()
        }
    }
    on("input only one password") {
        presenterImpl.validatePassword("Password", "")
        it("done button should be disabled") {
            verify(view).disableButton()
        }
    }
}
But if I call a different method, it works correctly. I assume this is not how the Spek framework is intended to be used, as all the examples I have seen use an assert. Is there a way I can write these conditions in Spek without the error? Even a different given() still causes it.
The mocked object counts the number of times each function is invoked on that specific mock.
Since you did not reset the mock between tests, the counter keeps increasing each time you invoke the method.
You should use reset(view) to reset the mock's counter.
This issue is not related to the Spek framework.
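The counting behavior is easy to reproduce with a hand-rolled mock. This JavaScript sketch (invented for illustration; the original code is Kotlin/Mockito) shows why the second verification sees two calls and what resetting does:

```javascript
// Hand-rolled mock: the invocation counter lives on the mock and persists
// across test blocks unless it is explicitly reset.
function makeMock() {
  let calls = 0;
  return {
    disableButton: () => { calls += 1; },
    calledOnce: () => calls === 1,   // stand-in for verify(view).disableButton()
    reset: () => { calls = 0; },     // stand-in for Mockito's reset(view)
  };
}

const view = makeMock();

view.disableButton();                // first on() block
console.log(view.calledOnce());      // true

view.disableButton();                // second on() block
console.log(view.calledOnce());      // false: the counter kept growing

view.reset();                        // reset between tests
view.disableButton();
console.log(view.calledOnce());      // true again
```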
I am writing a new API and want to see how it fares when hit with n requests.
I have tried setting up environment variables and using the Runner tool within Postman, to no avail.
The end goal is to run it n times, passing the value of n into the body so I can audit it (the value of that field is stored in the database).
I have set up two environment variables:
company=Bulk API Test
requestcount=0
My pre-request script is:
let requestCount = +postman.getEnvironmentVariable("requestcount");
if (!requestCount) {
    requestCount = 0;
}
requestCount++;
postman.setEnvironmentVariable("requestcount", requestCount);
This should increment the requestcount environment variable by 1 on each request.
My test script is:
var currentCount = +postman.getEnvironmentVariable("requestcount");
if (currentCount < 5) { // want it to run 5 times
    postman.setNextRequest("https://snipped");
} else {
    postman.setNextRequest(null);
}
When I run it through the Runner, it takes much longer than a single execution, and the result is that the API is hit only once.
If your API call is always the same, try just using the iteration count of the Postman Runner: enter e.g. 5 there, and your collection will be repeated 5 times.
You can access the current iteration through the following property, to find out which iteration you are in:
pm.info.iteration
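For example, a pre-request script along these lines could put the iteration number into a variable for use in the request body (a sketch with pm stubbed so it runs standalone; the variable name is invented):

```javascript
// `pm` is stubbed here; Postman provides the real object in its sandbox.
const vars = {};
const pm = {
  info: { iteration: 0 },                          // set by the Runner, zero-based
  variables: { set: (k, v) => { vars[k] = v; } },
};

// Expose a 1-based request number, usable in the body as {{requestNumber}}.
pm.variables.set('requestNumber', pm.info.iteration + 1);

console.log(vars.requestNumber);                   // 1
```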
If you still need to increment variables, make sure they are parsed as integers:
var currentCount = parseInt(postman.getEnvironmentVariable("requestcount"));
To be honest, the best approach for this kind of benchmarking would be a dedicated load-testing tool, e.g. LoadRunner, instead of Postman.
I am using the pre-request script of the first call to dynamically generate environment variables that are essential for the entire run. I also want users to be notified of failures when running via the Collection Runner, without having to look at the console. Is it possible to surface this information in tests, or by some other means, so that failures are explicit in the Collection Runner results?
For example, if the IP has not been provided in the environment, it makes no sense to run the login call, so I would like to write this in a pre-request script:
if (!environment['IP']) {
    // do not execute any further and do not send the REST call
}
I tried using:
if (!environment["xyz"]) {
    tests["condtion1"] = false
}
but it gives the error:
There was an error in evaluating pre-requisite script: tests is not defined
Is there any workaround? I don't want to move this code to the Tests tab, as I don't want to clutter it with unrelated environment checks.
A throw works just fine. (Updated with an excellent tip from @Joe White.)
if (!environment['X']) {
    throw new Error('No "X" set')
}
This prevents the REST call from going through. Note that in Collection Runner mode a throw stops the entire test suite, but when run with the Newman collection runner it works just fine.
A thrown error works fine with this test:
var value = pm.environment.get('X')
if (value == undefined || value == null || value.length == 0) {
    throw new Error('No "X" set!')
}
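As an aside, newer Postman releases also document pm.execution.skipRequest() for pre-request scripts, which skips just the current request instead of aborting the whole run; treat this as an assumption and check your version's scripting reference. A stubbed sketch:

```javascript
// `pm` is stubbed here; in Postman the sandbox provides it.
// pm.execution.skipRequest() is assumed to exist (newer Postman versions).
let skipped = false;
const pm = {
  environment: { get: () => undefined },                  // 'IP' is not set
  execution: { skipRequest: () => { skipped = true; } },
};

if (!pm.environment.get('IP')) {
  pm.execution.skipRequest();   // skip this request; the run continues
}

console.log(skipped);           // true
```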
I am trying to get some tests to pass for an Ember addon. It was working fine until yesterday, when I added some code that runs later in the run loop using Em.run.next.
Here is what I'm doing in my test:
visit('/').then(function() {
  find('bm-select').click();
  andThen(function() {
    equal(1, 1, 'yay');
  });
});
The problem is that when click is triggered, the deferred function executes after andThen. By that time all my tests are done, and it throws an error. I was under the impression that andThen should wait for all async work to finish.
This is what my code looks like when click is triggered (a focusOut event is triggered on click):
lostFocus: function() {
  if (this.get('isOpen')) {
    Em.run.later(this, function() {
      var focussedElement = document.activeElement;
      var isFocussedOut =
        this.$().has(focussedElement).length === 0 && !this.$().is(focussedElement);
      if (isFocussedOut) {
        this.closeOptions({ focus: false });
      }
    }, 0);
  }
}.on('focusOut'),
You can see that it gives the error Uncaught TypeError: Cannot read property 'has' of undefined, which comes from the focusOut handler. By the time the function executes, the component's _state is 'destroying' and this.$() returns undefined.
I tried the wait helper and still could not get the tests to work. How is this normally done?
I have extracted the tests to run in a bin. Here is the link to it.
After further debugging, the problem is that one of the bm-select tags has its focusOut event triggered in the teardown method of the test. So by the time the run-loop code executes, the component is no longer in the DOM.
I just added a hidden input field in the test app. Once all the tests have run, I set focus to the hidden input field and use the wait test helper. Now all the run-loop code has finished by the time the teardown method executes.
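A defensive variant of the lostFocus callback is also possible (an assumption, not the fix the author used): guard the deferred work against a component that has already been torn down. Sketched here with a plain stub so it runs outside Ember:

```javascript
// Returns a deferred callback that is a no-op once the component is torn down.
function makeLostFocusCallback(component) {
  return function () {
    // this.$() returns undefined while the component is destroying, so bail out.
    if (component.isDestroyed || component.isDestroying || !component.$()) {
      return;
    }
    component.closeOptions({ focus: false });
  };
}

// Stub component that is already being destroyed: the callback does nothing.
let closed = false;
const stub = {
  isDestroyed: false,
  isDestroying: true,
  $: () => undefined,
  closeOptions: () => { closed = true; },
};

makeLostFocusCallback(stub)();
console.log(closed);   // false: closeOptions was never reached
```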