I am using Postman for the first time and need to send multiple requests.
I added them to a collection, and it seems they are executed in parallel, whereas I want to execute them serially.
Basically, depending on the response to the first request, I will decide whether or not to send the second request.
How can I do this?
I'm not sure which version of Postman you are using, but collections allow you to store and group useful requests so that you can easily find them and execute them individually.
Version 3 has a Runner mode that is enabled when you install the Jetpacks premium extension; this adds support for executing whole collections. Please check whether you are in Builder or Runner mode. If you are in Runner mode, switch back to Builder mode so you can execute requests individually again.
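For the conditional, serial flow the question describes, the `postman.setNextRequest()` call (mentioned in another answer below in a different context) can chain requests from a request's Tests script. A minimal sketch of the decision logic, where the request name "Second Request" is a hypothetical example:

```javascript
// Decide which request should run next based on the first response's
// status code. Returning null ends the collection run.
function nextRequestFor(statusCode) {
  return statusCode === 200 ? "Second Request" : null;
}

// In the first request's Tests tab this would be wired up as:
//   postman.setNextRequest(nextRequestFor(pm.response.code));
```

Keeping the decision in a small pure function like this makes the branching easy to reason about, while the actual `setNextRequest()` call stays a one-liner in the Tests tab.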
I have several API collections created using Postman, and I've now integrated the Newman CLI runner.
My question is: is there any way to skip some selected API requests from each collection, or a way to list the skipped requests in a single file so that Newman automatically skips them during execution?
Thanks in advance.
Apparently, there is no way to find out whether a collection is being run via Postman or Newman.
According to this thread, you could manually set a variable at the start of the execution (see the Postman forum).
You could then steer the execution with postman.setNextRequest(), depending on the value of that variable. See: https://learning.postman.com/docs/running-collections/building-workflows/
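Combining the two ideas, the skip logic can live in a small helper that picks the next request in collection order while honouring a skip list. A sketch, where the "skipList" variable name and the request names are assumptions:

```javascript
// Given the collection's request order, the current request, and a list of
// request names to skip, return the next request to run (null ends the run).
function nextAfter(order, current, skipList) {
  for (let i = order.indexOf(current) + 1; i < order.length; i++) {
    if (!skipList.includes(order[i])) return order[i];
  }
  return null;
}

// In each request's Tests tab, assuming a "skipList" variable that is only
// populated for Newman runs (e.g. set via an environment file):
//   const skip = JSON.parse(pm.variables.get("skipList") || "[]");
//   postman.setNextRequest(nextAfter(["Login", "Create", "Delete"], "Login", skip));
```

Because the skip list comes from a variable, the same single file (an environment JSON) can hold the "skipped API requests" for every Newman run without touching the collection itself.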
I have an ever-growing collection of Postman tests for my API that I regularly export and check in to source control, so I can run them as part of CI via Newman.
I'd like to automate the process of exporting the collection when I've added some new tests - perhaps even pull it regularly from Postman's servers and check the updated version in to git.
Is there an API I can use to do this?
I would settle happily for a script I could run to export my collections and environments to named JSON files.
Such a feature is available in Postman Pro when you use the cloud instance feature (I haven't used it yet, but I probably will for continuous integration). I'm also interested, and I came across this information:
FYI, you don't even need to export the collection. You can use Newman to talk to the Postman cloud instance and run collections directly from there. So when a collection is updated in Postman, Newman will automatically run the updated collection on its next run.
You can also add the environment URLs to have Newman automatically swap environments (we use this to run a health-check collection across all our environments: Dev, Test, Stage, and Prod).
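Programmatically, the Newman npm package exposes a run() function that accepts a collection URL and an environment URL directly. A sketch of the environment-swapping setup, with placeholder URLs:

```javascript
// Build the options object passed to newman.run() for one environment.
// Both URLs would point at collections/environments shared via the Postman
// cloud; the values here are placeholders.
function runOptions(collectionUrl, environmentUrl) {
  return {
    collection: collectionUrl,
    environment: environmentUrl,
    reporters: "cli",
  };
}

// Usage, with the newman package installed:
//   const newman = require("newman");
//   ["dev", "test", "stage", "prod"].forEach((env) => {
//     newman.run(runOptions(collectionUrl, environmentUrls[env]), (err) => {
//       if (err) throw err;
//     });
//   });
```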
You should check out this feature; Postman licences are not particularly expensive, and it can be worth it.
Hope this helps,
Alexandre
You have probably solved your problem by now, but for anyone else coming across this: Postman has an API that lets you access information about your collections. You could call /collections to get a list of all your collections, then query for the one(s) you want by their ID. (If the link doesn't work, google "Postman API".)
I am currently using SoapUI (free version).
I am looking to automate the tests so that a value is placed in each test without having to enter them manually.
(At a Basic Level) Example
The test web service I am using is http://www.webservicex.net/mortgage.asmx?WSDL.
http://www.webservicex.net/mortgage.asmx?op=GetMortgagePayment
The service has been set up as SOAP, a test suite has been built and a SOAP request test step has been assigned.
Out of the box, you can enter data manually into the test step. However, I want to achieve three things using the free edition.
Add a randomized value to the test, so that a value within a range is inserted each time the test is run. I am aware you can use Groovy, but I am unsure whether the Groovy script needs to be added as a test step or in the Setup Script tab.
Pull a list of data from a .csv or similar so the test continues to run through all data.
Repeat step 2 but set the assertions for each piece of test data to state whether the response should be valid or not.
If you could help it would be appreciated.
Cheers,
Tim
The problem is that I want to add some constraints when using post-review; for example, if my Python file does not conform to PEP 8, I want the request to be refused automatically. How can I do this?
If the requirement applies only to your machine, you can write a script that checks all the required conditions and then calls post-review. If you want to enforce this for all users, you can distribute the script to all the clients or add these checks on the server.
There is no built-in way to do this with post-review.
I have written a simple test for my website. The test simply searches for a word on my search page and waits for the results.
What I need is to run the same test 40 times simultaneously to mimic a situation where 40 users are searching for the same word at the same time.
Basically I want to know how to run them simultaneously not in a queue.
Thanks.
What you probably need is Selenium RC and Selenium Grid, as Selenium IDE is quite limited for automated testing. RC allows you to run remote Selenium tests (though RC can run locally too), and Grid simplifies access to all the running RCs.
You need 40 clients at once. If you are using Selenium RC, you can start several clients simultaneously by configuring them to run on different ports. After that you have to start your test 40 times at once; that is the tricky part, depending on which framework you are using to launch the tests.
I would suggest JMeter for load-test situations like this. It is quite easy to set up, and you can configure how many simulated users you want on your website at once. JMeter works fine for both manual and automated tests.
I guess you need to do this in JMeter; what you are trying to do is load/stress testing, where a number of users try to perform a certain action simultaneously. Have a look at JMeter.