Is there any Newman option to attach a test/pre-request script before each call in a folder of a Postman collection?

The need is to optionally run a pre-request script before each call in a folder of a Postman collection when running the collection from Newman.
For example, when running a test suite of 10 calls in one folder, the command would usually be:
newman run <collectionPath> --folder <folderPath>
Is there any option to pass something like the following?
newman run <collectionPath> --folder <folderPath> --pre-request_script someScript.js --test_script someTest.js
The reason the obvious approach of putting the test / pre-request script in the Postman collection itself is not being used is that:
- (the main reason) a huge number of collections are already written, and it would be difficult to go into each of them and add this code; it would be far more convenient to control this behavior from the command line;
- the test / pre-request scripts may vary across different Newman runs, and such parameters would remove the need for complex conditional code within the pre-request / test scripts.
Is there any other alternative or solution for this?

As of recent versions, you can add pre-request scripts, tests, and variables directly to the collection, or different ones on each sub-folder. These collections can then be run in the normal way via Newman, which might solve your problem.
http://blog.getpostman.com/2017/12/13/keep-it-dry-with-collection-and-folder-elements/
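Newman itself has no such flag, but if you drive Newman as a Node library you can inject the script at run time without touching the stored collections. A minimal sketch, assuming the files are named collection.json and someScript.js and the folder is called myFolder:

const fs = require('fs');
const newman = require('newman');

// Load the collection and the external pre-request script (file names assumed).
const collection = JSON.parse(fs.readFileSync('collection.json', 'utf8'));
const preRequest = fs.readFileSync('someScript.js', 'utf8');

// A collection-level 'prerequest' event runs before every request,
// including the requests inside folders.
collection.event = (collection.event || []).concat({
  listen: 'prerequest',
  script: { exec: preRequest.split('\n') }
});

newman.run({
  collection: collection,
  folder: 'myFolder' // hypothetical folder name
}, function (err) {
  if (err) { throw err; }
});

The same pattern works for a 'test' event, so both scripts could be supplied by whatever command line launches this wrapper.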

Related

Using 5 threads to run Postman folders

Is there a way to execute Postman folders in parallel, in a TestNG-like way?
I have 5 collections, each one containing hundreds of tests that are independent and use their own data, so they can be parallelized.
Running 5 threads could reduce execution time by a factor of almost five.
I've seen Run Postman (or Newman) collection runner iterations in parallel, but it doesn't suit me because it doesn't dynamically know how many folders are in each collection.
Is there a way to do this with Newman, or even with external tools?
You can use Newman together with GNU Parallel. The command would look like:
parallel --jobs 5 < newman_commands.txt
where newman_commands.txt contains the newman command to run each folder.
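To build newman_commands.txt dynamically, one option is to enumerate each collection's top-level folders first. A sketch, assuming jq is installed and that every top-level item in the (v2-format) collections is a folder:

for c in collection1.json collection2.json collection3.json collection4.json collection5.json; do
  jq -r '.item[].name' "$c" | while read -r folder; do
    echo "newman run $c --folder \"$folder\""
  done
done > newman_commands.txt
parallel --jobs 5 < newman_commands.txt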

Making Postman read JSON files from a specific directory

I've been searching and searching, but I did not find anything useful. I would like to implement some automation in Postman.
I don't know if this is even possible, but I would like Postman to automatically read JSON files from a directory on the file system. Do you get me?
Every time I want to execute anything in Postman, I have to open the collection, select the desired collection, click on Runner, then choose the environment, select the data file, and finally click Start Run. I don't want to do it manually anymore.
Take a look at these questions:
Is it possible to schedule a task in Postman?
Is it possible to read files from the file system or something like that?
A friend of mine told me it was possible, but I don't have the details, and I want to do it.
Can you help me? I'm pretty lost.
You can do this using Newman to run the collection. All the usage details and examples can be found here:
https://github.com/postmanlabs/newman
You can use the -d flag to specify the path of the data file for the collection to use. This is the same as running the collection in the UI; it just brings it out to the command line.
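For example (file names assumed):
newman run collection.json -e environment.json -d data.json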

Python Sub-Process Coverage

Situation:
I'm attempting to get coverage reports for a project that uses both C++ and Python. I'm using LCOV/GCOV for the C++, and attempting to use Coverage.py for the Python. The only issue is that most of the Python code being used is simply utility functions called one function at a time: no initialization, no real life-cycle or exit. So there is no real way to use the API to start/stop/save, or to use the coverage command line to measure.
With this, I thought the easiest way to accomplish it would be the sitecustomize.py method like outlined here. I have gotten that to work, and it measures all configured Python code as expected. Now I'm looking at how to accomplish this with compiled Python code (.pyc).
I can get it to work if I keep the source (.py) and compiled (.pyc) files in the same directory when running and then reporting. However, I'm looking for a way to run the .pyc files and generate the measurement data, then at a later time point coverage at the actual source files and run the actual reports. Ideally I wouldn't need the source (.py) files on the target at all, but I haven't found a way to accomplish this.
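For reference, the sitecustomize.py hook in that recipe is tiny; the standard coverage.py subprocess-measurement setup also requires the COVERAGE_PROCESS_START environment variable to point at a coverage config file:

# sitecustomize.py, placed where the target's Python will import it
import coverage

# Starts measurement in every process whose environment has
# COVERAGE_PROCESS_START set to a coverage configuration file.
coverage.process_startup()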
Objective:
In the end I want to be able to compile the Python files (.pyc), install them on the target, and run coverage as stated above. That will generate coverage data files; I will then pull those files to my host machine, which houses the source (.py) files, and do the actual coverage reporting.
Is this possible currently?
[Edit] Thanks to Ned's advice, I looked into the [paths] usage, and it worked exactly how I needed it to.
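For anyone with the same problem, a sketch of such a [paths] section (paths are illustrative); the first entry is the canonical source checkout on the host, and data recorded under the second (target) path is remapped to it when the data files are combined with coverage combine:

# .coveragerc on the host machine
[paths]
source =
    src/mypackage
    /opt/target/lib/mypackage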

Integration point for Postman clean-up

Is there a way to incorporate a clean up script in Postman?
Use case: after the collection run (either success or failure), I need to clear data in some of the databases/data-stores,
with a construct similar to try{} finally{}.
For example, the collection run contains two APIs:
api1 -> puts the data in Redis
api2 -> functional verification
The expected clean-up hook would remove the data that was put in step 1.
Writing the clean-up at the end of api2's test script works only if there are no errors in the execution of that test script.
The problem gets worse when there are a large number of APIs and multiple data entries. We could handle this with setNextRequest, but that means additional code to be written in each test script.
You could probably achieve this by running the collection within a script, using Newman; see the sketch below. This gives you more flexibility and control to run certain actions at different points before, during, and after the run.
More information about the different options can be found here: https://github.com/postmanlabs/newman/blob/develop/README.md#api-reference
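A minimal sketch of that approach (collection path and clean-up function are hypothetical); Newman's 'done' event fires whether the run passed or failed, which gives you the try{} finally{} behavior:

const newman = require('newman');

newman.run({
  collection: require('./collection.json'),
  reporters: 'cli'
}).on('done', function (err, summary) {
  // Fires on success and on failure, like a finally block:
  // e.g. remove the Redis data written by api1.
  cleanUpDataStores(); // hypothetical clean-up helper
});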
If it's just clearing out certain variable values, that can be done within the Tests tab of the last request in your collection.

Referencing information in builds specified in a run parameter [Hudson]

Day 1 of using Hudson for our CI build. Slowly but surely getting up to speed.
My question is about run parameters. I've seen that I can use them to reference a particular run of a particular project - that's all fine.
What I don't understand (and can't find any documentation on - there's nothing at Parameterized Build) is how I refer to anything in the run defined by the run parameter.
Essentially I want to reference the %BUILD_NUMBER% and %SVN_REVISION% of the run that is selected in the run parameter.
How can I do that?
Do you really need to add extra property values or extra parameters to your job?
Since BUILD_NUMBER and SVN_REVISION are already defined as environment variables (see Building a software project), you can use those in your job.
When a Hudson job executes, it sets some environment variables that you may use in your shell script, batch command, or Ant script
This illustrates that you already have those values at your disposal.
You can then use them to define other environment variables/properties within your shell or ant script.
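For example, a build step (Windows batch flavor, matching the %...% syntax above) can read them directly:
echo Building #%BUILD_NUMBER% at SVN revision %SVN_REVISION%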
When it comes to passing a variable value from one job to another, the Parameterized Trigger Plugin should do the trick:
The parameters section can contain a combination of one or more of the following:
a set of predefined properties
properties from a properties file read from the workspace of the triggering build
the parameters of the current build
"Subversion revision": makes sure the triggered projects are built with the same revision(s) of the triggering build.
You still have to make sure those projects are actually configured to check out the right Subversion URLs.
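For example, the trigger's predefined-parameters block could pass the upstream values to the downstream job (parameter names here are illustrative):
UPSTREAM_BUILD_NUMBER=$BUILD_NUMBER
UPSTREAM_SVN_REVISION=$SVN_REVISION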
Note: there might be an issue with the Join Plugin, which might not work when the Parameterized Trigger is in action.