Using 5 threads to run Postman folders

Is there a way to execute Postman folders in parallel, in a TestNG-like way?
I have 5 collections, each containing hundreds of tests that are not dependent on each other and use their own data, so they can be parallelized.
Running them on 5 threads could cut execution time by a factor of almost five.
I've seen this: Run Postman (or Newman) collection runner iterations in parallel, but it doesn't suit me because it doesn't dynamically know how many folders are in each collection.
Is there a way to do this with Newman? Or even with external tools?

You can use newman and GNU Parallel.
The command would look like: parallel --jobs 5 < newman_commands.txt
Where newman_commands.txt would contain the newman commands to run each folder.
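For example, newman_commands.txt could look like this (a minimal sketch; the collection file names and folder names are hypothetical, with one line per folder you want to run):

newman run collection1.json --folder "Folder 1"
newman run collection1.json --folder "Folder 2"
newman run collection2.json --folder "Folder 1"
newman run collection2.json --folder "Folder 2"

GNU Parallel then keeps at most 5 of these commands running at any one time.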

Related

(Google Test) Automatically retry a test if it failed the first time

Our team uses Google Test for automated testing. Most of our tests pass consistently, but a few seem to fail ~5% of the time due to race conditions, network time-outs, etc.
We would like the ability to mark certain tests as "flaky". A flaky test would be automatically re-run if it fails the first time, and will only fail the test suite if it fails both times.
Is this something Google Test offers out-of-the-box? If not, is it something that can be built on top of Google Test?
You have several options:
Use --gtest_repeat for the test executable:
The --gtest_repeat flag allows you to repeat all (or selected) test methods in a program many times. Hopefully, a flaky test will eventually fail and give you a chance to debug.
You can mimic tagging your tests by adding "flaky" somewhere in their names, and then use the --gtest_filter option to repeat only those. Below are some examples from the Google Test documentation:
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.
$ foo_test --gtest_repeat=-1
A negative count means repeating forever.
$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure. This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.
$ foo_test --gtest_repeat=1000 --gtest_filter=Flaky.*
Repeat the tests whose name matches the filter 1000 times.
See here for more info.
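As far as I know, Google Test has no built-in retry-on-failure, but once you adopt the naming convention above you can rough one out with a small wrapper script around the test binary: run the non-flaky tests once, then give the flaky ones a single automatic retry. A minimal sketch, assuming the foo_test binary from the examples above and test names carrying the "Flaky" marker:

#!/bin/sh
# Non-flaky tests must pass on the first attempt.
./foo_test --gtest_filter='-*Flaky*' || exit 1
# Tests whose names contain "Flaky" get one automatic retry if the first pass fails.
./foo_test --gtest_filter='*Flaky*' || ./foo_test --gtest_filter='*Flaky*'

The retry re-runs every flaky test rather than only the ones that failed, but it gives the "fail only if it fails twice" behaviour described in the question.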
Use Bazel to build and run your tests:
Rather than tagging your tests in the test files, you can tag them in the Bazel BUILD files.
You can tag each test individually using the cc_test rule.
You can also define a set of tests (using test_suite) in the BUILD file and tag them together (e.g. "small", "large", "flaky", etc). See here for an example.
Once you tag your tests, you can use simple commands like this:
% bazel test --test_tag_filters=performance,stress,-flaky //myproject:all
The above command will run all tests in myproject that are tagged performance or stress and are not tagged flaky.
See here for documentation.
Using Bazel is probably cleaner because you don't have to modify your test files, and you can quickly change your tests' tags if things change.
See this repo and this video for examples of running tests using bazel.

Integration point of Postman Clean up

Is there a way to incorporate a clean up script in Postman?
Use case: after the collection run (either success or failure), I need to clear data in some of the databases/data stores,
similar to a try {} finally {} construct.
For example, the collection run contains two APIs:
api1 -> puts the data in Redis.
api2 -> functional verification
The expected clean-up hook would then remove the data that was put in step 1.
Writing the clean-up at the end of api2's test script only works if there are no errors while that test script executes.
The problem gets worse when there is a large number of APIs and multiple data entries. We can handle this with setNextRequest, but that means additional code in each test script.
You could probably achieve this by running the collection file within a script, using Newman. This should give you more flexibility and control over running certain actions at different points before, during and after the run.
More information about the different options can be found here: https://github.com/postmanlabs/newman/blob/develop/README.md#api-reference
If it's just clearing out certain variable values, this can be done within the Tests tab of the last request in your collection.
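If a full Node script feels like too much, the same try/finally shape can also be approximated from the shell: always run a dedicated clean-up collection (or folder) after the main run, whatever its outcome, and only then report the original exit status. A rough sketch, where main_collection.json and cleanup_collection.json are hypothetical file names:

#!/bin/sh
# Run the main collection and remember its exit code instead of stopping on failure.
newman run main_collection.json
status=$?
# "finally": always run the clean-up requests (e.g. the ones that clear Redis).
newman run cleanup_collection.json
# Report the outcome of the main run.
exit $status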

Is there any Newman option to attach a test/pre-request script before each call of a folder in Postman collection

The need here is to run a pre-request script before each call in a folder of a Postman collection, optionally, when running the collection through Newman.
For example, if running a test suite of 10 calls in one folder, the call would usually be:
newman run <collectionPath> --folder <folderPath>
Is there any option of passing something like,
newman run <collectionPath> --folder <folderPath> --pre-request_script someScript.js --test_script someTest.js
?
The reasons why the (obvious) collection-level test / pre-request script is not being used are:
(the main reason) a huge number of collections are already written, and it would be difficult to go into each one of them and add this code; it would be far more convenient to control this behavior via the command line.
the test / pre-request script may vary across different Newman runs, and these parameters would remove the need for complex conditional code within pre-request / test scripts.
Is there any other alternative or solution for the same?
As of the latest version, you can add pre-request scripts, tests and variables directly to the collection, or something different on each sub-folder. These collections can then be used in the normal way via Newman. It might solve your problem.
http://blog.getpostman.com/2017/12/13/keep-it-dry-with-collection-and-folder-elements/

Ways to write performance tests for a Mercurial (hg) extension using Python

I wrote an extension for Mercurial, e.g. hg dosomething --rev 5, and I was wondering what the right approach is to writing performance test cases that monitor the performance of the extension from when it is executed until it ends.
Thanks :)
Mercurial has support for running itself under a Python profiler. Just execute
$ hg --profile dosomething --rev 5
and you'll see the profile output afterwards. See the hgrc man page for a few options you have. If you just want timing data, then use
$ hg --time dosomething --rev 5
instead.
You should also have a look at the perf extension. It runs a command many times (like the timeit module) and reports the best running time. You'll have to extend the extension to run your new command, but it should be simple since all the performance tests follow the same pattern.
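If you just want rough numbers before wiring your command into the perf extension, you can also repeat the --time form from the shell and compare the reported timings. A quick sketch, reusing the hypothetical dosomething command from the question:

# hg prints a "time: ..." summary after each run; repeat a few times and compare.
for i in 1 2 3 4 5; do
    hg --time dosomething --rev 5
done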

How to write higher-level parallel code/scripts?

Newbie here.
I have a C++ program XX (note that XX is an executable binary here). I want this program to do a similar job N times, but with N different sets of input parameters. Say I have N processors; then I could let these N jobs run simultaneously on those N processors.
Is it possible, at the script level, to qsub these kinds of "parallel" jobs? Or can it even be done at the C++ level? Or are there better ideas?
I ask because the XX code I have written is based on a large project and it is not easy for me to change the MPI part of the code. :(
Or do I have to modify the XX code and the project to use a new algorithm that fits my need?
Or is there any other advice, like using Python, that can achieve my goal quickly?
Thanks a lot!
I want to add more to my question, to make it clearer.
What if the results of these N jobs are dependent? What I mean is, how could I do this:
1st cycle: N jobs run simultaneously on N processors; after a certain time they all end and give N results, and I need to do a serial job based on these N results. The result of that is used as the initial condition for the next cycle, and then we move on to the
2nd cycle,
3rd cycle,
and so on....
Are shell scripts able to do this? Or had I better learn to use Python? Or can I still do it in C++?
Thanks :)
You can use GNU Parallel to execute jobs in parallel through a shell script.
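For example (a sketch only; how XX takes its parameters is an assumption, here one parameter file per job, with at most 4 jobs at a time):

parallel --jobs 4 ./XX {} ::: params1.txt params2.txt params3.txt params4.txt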
Good old xargs takes a -P parameter that tells it how many jobs to execute at the same time.
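For example (again a sketch with a made-up parameter layout):

# One parameter file per line; -P 4 keeps up to 4 copies of XX running at once.
printf '%s\n' params1.txt params2.txt params3.txt params4.txt | xargs -n 1 -P 4 ./XX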
The Bourne shell can do what you ask just fine:
#!/bin/sh
# Run XX 3 times in parallel
XX args&
XX other-args&
XX different-args&
wait # Wait for all 3 to finish
...
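The same shell approach extends to the cyclic case added to the question: start the N jobs with &, wait for all of them, run the serial step, then loop. A rough sketch, where the way XX takes its input/output and the combine step are assumptions:

#!/bin/sh
# Each cycle: run N jobs in parallel, wait for all of them, then do a serial combine step.
for cycle in 1 2 3; do
    for i in 1 2 3 4; do
        ./XX --input input_$i --output result_$i &   # hypothetical XX options
    done
    wait    # block until all jobs of this cycle have finished
    # Serial step: combine the N results into the initial condition for the next cycle.
    ./combine result_1 result_2 result_3 result_4 > next_initial_condition
done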
GNU make has a "-j" switch that lets you specify how many jobs to run simultaneously. So if you can convert the whole thing into a GNU Makefile with proper dependencies, you'll probably be able to run the specified number of jobs simultaneously. Alternatively, you could try some other build system/automation tool, or implement it from scratch in a shell script or something like that. Plain C++ is also possible, as long as you have access to a threads library. Or you could generate the Makefile (or another build script) using Python. There are many ways to approach this.