Is there a programmatic way to export Postman Collections?

I have an ever-growing collection of Postman tests for my API that I regularly export and check in to source control, so I can run them as part of CI via Newman.
I'd like to automate the process of exporting the collection when I've added some new tests - perhaps even pull it regularly from Postman's servers and check the updated version in to git.
Is there an API I can use to do this?
I would settle happily for a script I could run to export my collections and environments to named json files.

Such a feature should be available in Postman Pro when you use the cloud instance feature (I haven't used it yet, but I'll probably do so for continuous integration). I'm also interested in this, and I went through the following information:
FYI, you don't even need to export the collection. You can use Newman to talk to the Postman cloud instance and call collections directly from there. Therefore, when a collection is updated in Postman, Newman will automatically run the updated collection on its next run.
You can also add the environment URLs to have Newman swap environments automatically (we use this to run a health-check collection across all our environments: Dev, Test, Stage and Prod).
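If you want to wire that up, here is a minimal sketch using Newman's Node.js API; the COLLECTION_UID, ENVIRONMENT_UID and YOUR_API_KEY placeholders are assumptions you would replace with values from your own Postman account:

const newman = require('newman');

// Run a collection straight from the Postman API instead of a local export.
newman.run({
    collection: 'https://api.getpostman.com/collections/COLLECTION_UID?apikey=YOUR_API_KEY',
    environment: 'https://api.getpostman.com/environments/ENVIRONMENT_UID?apikey=YOUR_API_KEY',
    reporters: 'cli'
}, (err) => {
    if (err) { throw err; }
    console.log('Collection run complete.');
});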
You should check out this feature; Postman licences are not particularly expensive, so it can be worth it.
Hope this helps,
Alexandre

You have probably solved your problem by now, but for anyone else coming across this, Postman has an API that lets you access information about your collections. You could call /collections to get a list of all your collections, then query for the one(s) you want by their id. (If the link doesn't work, google "Postman API".)
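As a concrete illustration, here is a rough sketch of the kind of export script the question asks for, built on the Postman API's /collections endpoints. It assumes Node 18+ (for the global fetch) and a Postman API key in a POSTMAN_API_KEY environment variable; the output file names are my own choice:

const fs = require('fs');

const headers = { 'X-Api-Key': process.env.POSTMAN_API_KEY };

async function exportCollections() {
    // List all collections, then fetch and save each one by its uid.
    const res = await fetch('https://api.getpostman.com/collections', { headers });
    const { collections } = await res.json();

    for (const c of collections) {
        const detail = await (await fetch(`https://api.getpostman.com/collections/${c.uid}`, { headers })).json();
        fs.writeFileSync(`${c.name}.postman_collection.json`, JSON.stringify(detail.collection, null, 2));
        console.log(`Exported ${c.name}`);
    }
}

exportCollections().catch(console.error);

The same pattern works for environments via the /environments endpoints, which covers the "collections and environments to named json files" part of the question.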

Postman Global pre-request script that all collections inherit

Postman is probably the most amazing piece of software I've ever encountered in my software development career.
However, there is something I'm stuck on and I believe there is an answer out there...
I have a huge pre-request script that I copy to each new postman collection that I create. The pre-request script does different things including setting the server to run my request on, generating reference numbers, and many other tasks.
The problem is that I have to copy this code all over the place. Each collection I create gets the same blob of code. And then, as time moves forward, I update my blob of code and forget which collection has the latest updates.
I was told that it's possible to set up a global pre-request script in Postman that all collection will execute. I've spent some time searching the internet and I can't find the answer.
Any help would be greatly appreciated...
I don't think you can do this across multiple "real collections" without a custom shell script.
If it were possible, I think they would mention it here:
https://learning.postman.com/docs/writing-scripts/pre-request-scripts/
Out of the box, Postman supports only one pre-request script per "real collection", but you can mimic "sub-collections" of one upper collection by creating folders under the "real collection".
So the real collection would be my-server-collection; it contains your pre-request script, and every REST API controller becomes a subfolder under it, which gives you the same effect.
I was told that it's possible to set up a global pre-request script in Postman that all collection will execute. I've spent some time searching the internet and I can't find the answer.
Did this come from Postman itself? I'm pretty sure collection webhooks are set per collection, as this is a topic I've explored in depth before. I went to check just in case you could skip naming a collection to force it to * or something, but no luck.
With that out of the way, the only suggestion I have for you is to create a utility collection that traverses all collections following a given naming convention, for example PRS-X, PRS-Y. For each of those collections, your utility would edit the collection to add or update the pre-request script (see the sketch below).
As you probably know, you could run that on demand, schedule it, or trigger the run from other automation (such as an update to your pre-request script).
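A rough standalone sketch of that idea, using the Postman API from Node rather than a Postman collection, with the same Node 18+ and POSTMAN_API_KEY assumptions as above; the PRS- prefix and the shared script contents are also assumptions:

const API = 'https://api.getpostman.com';
const headers = { 'X-Api-Key': process.env.POSTMAN_API_KEY, 'Content-Type': 'application/json' };
// The shared pre-request code you want every matching collection to carry.
const sharedScript = ["console.log('shared pre-request logic');"];

async function updatePreRequestScripts() {
    const { collections } = await (await fetch(`${API}/collections`, { headers })).json();

    for (const summary of collections.filter((c) => c.name.startsWith('PRS-'))) {
        const { collection } = await (await fetch(`${API}/collections/${summary.uid}`, { headers })).json();

        // Drop any existing collection-level pre-request event, then add the shared one.
        collection.event = (collection.event || []).filter((e) => e.listen !== 'prerequest');
        collection.event.push({ listen: 'prerequest', script: { type: 'text/javascript', exec: sharedScript } });

        await fetch(`${API}/collections/${summary.uid}`, {
            method: 'PUT',
            headers,
            body: JSON.stringify({ collection })
        });
        console.log(`Updated ${summary.name}`);
    }
}

updatePreRequestScripts().catch(console.error);

You could run this on demand or from CI whenever the shared script changes.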
Go to the Environments tab and click the Create new Environment icon.
In the environment window, give it a name and add a variable for your pre-request script, e.g. preRequestScript, with its value set to your pre-request code (and save it, of course).
Lastly, go to your collections, edit the one you want, and select the "Global" environment you created from the dropdown.
Once you finish, the global pre-request script will run before each request in your selected collection.
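In practice, this pattern usually still needs a one-line pre-request stub in each collection that evaluates the stored code; a minimal sketch, assuming the variable is named preRequestScript as above:

// Collection-level pre-request stub: pull the shared script text out of the
// environment variable and execute it. Assumes the variable holds valid JavaScript.
eval(pm.environment.get('preRequestScript'));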

How to change the environment in Tests script of last request of the collection

Some of my requests need to run against a few environments, so I want to run several iterations in the Collection Runner and, in the test script of the last request, change the environment so that the next iteration runs against the next environment.
I need your help to change the environment through a script, if that is possible.
This is not possible at the moment, but there is a workaround (copied from the Postman Community):
You should be able to accomplish this by making use of pm.sendRequest and the Postman API. So you'd use pm.sendRequest to fetch the relevant environment, save it to a local variable in your script, and use it accordingly.
More information here:
https://learning.postman.com/docs/writing-scripts/script-references/postman-sandbox-api-reference/#sending-requests-from-scripts
https://learning.postman.com/docs/sending-requests/variables/#defining-local-variables
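A minimal test-script sketch of that workaround; the environment UID placeholder, the postman_api_key variable name and the way the fetched values are applied are all assumptions:

pm.sendRequest({
    url: 'https://api.getpostman.com/environments/NEXT_ENVIRONMENT_UID',
    method: 'GET',
    header: [{ key: 'X-Api-Key', value: pm.environment.get('postman_api_key') }]
}, (err, res) => {
    if (err) { console.error(err); return; }
    // Copy the fetched environment's values into local variables for the next iteration.
    const env = res.json().environment;
    env.values.forEach((v) => pm.variables.set(v.key, v.value));
});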
There's a feature request in the Postman github repository for it: Change environment scope in pre-request or test scripts #9061

Why Environment variable doesn't update in postman flow?

When I call an API normally in Postman, run a test script, and set an environment value, it works; but when I use that API in a Postman Flow, the environment doesn't change.
Script in my test:
const body = pm.response.json(); // assuming the email comes from the parsed response body
pm.environment.set('email', body.email);
Looks like you are looking for this issue from the discussions section of the Postman Flows repository:
https://github.com/postmanlabs/postman-flows/discussions/142. Here are some key points from it:
I want to begin by saying that nothing is wrong with environments or variables. They just work differently in Flows from how they used to work in the Collection Runner or the Request Tab.
Variables are not first-class citizens in Flows.
It was a difficult decision to break the existing pattern, but we firmly believe this is a necessary change as it would simplify problems for both us and users.
Environments work in read-only mode; updates to the environment from scripts are not respected.
Also in this post they suggest:
We encourage using the connection to pipe data from one block to another, rather than using Globals/Environments, etc.
According to this post:
We do not support updating globals and environments using Flows.

Using cloud functions vs cloud run as webhook for dialogflow

I don't know much about web development and cloud computing. From what I've read, when using Cloud Functions as the webhook service for Dialogflow, you are limited to writing code in just one source file. I would like to create a really complex Dialogflow agent, so it would be handy to have an organized code structure to make development easier.
I've recently discovered Cloud run which seems like it can also handle webhook requests and makes it possible to develop a complex code structure.
I don't want to use Cloud Run just because it is inconvenient to write everything in one file, but on the other hand it would be strange to have a cloud function with a single file with thousands of lines of code.
Is it possible to have multiple files in a single cloud function?
Is cloud run suitable for my problem? (create a complex dialogflow agent)
Is it possible to have multiple files in a single cloud function?
Yes. When you deploy to Google Cloud Functions you create a bundle with all your source files or have it pull from a source repository.
But Dialogflow only allows index.js and package.json in the Built-In Editor
For simplicity, the built-in code editor only allows you to edit those two files. But the built-in editor is mostly just meant for basic testing. If you're doing serious coding, you probably already have an environment you prefer to use to code and deploy that code.
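For example, here is a sketch of how a multi-file function might look when deployed with the gcloud CLI instead of the inline editor; the file and function names are my own, not anything Dialogflow prescribes, and intents.js is assumed to export one handler function per intent:

// index.js - entry point of the Cloud Function
const intents = require('./intents'); // a local file in the same deployment bundle

exports.dialogflowWebhook = (req, res) => {
    // Dialogflow v2 webhook requests carry the matched intent name here.
    const intentName = req.body.queryResult.intent.displayName;
    const handler = intents[intentName] || intents.fallback;
    res.json({ fulfillmentText: handler(req.body) });
};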
Is Cloud Run suitable?
Certainly. The biggest thing Cloud Run will get you is complete control over your runtime environment, since you're specifying the details of that environment in addition to the code.
The biggest downside, however, is that you also have to determine the details of that environment. Cloud Functions provide an HTTPS server without you having to worry about those details, as long as the rest of the environment is suitable.
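For comparison, a minimal sketch of what you supply yourself on Cloud Run: an HTTP server listening on the PORT environment variable (Cloud Run terminates HTTPS in front of your container). The Express setup and route name are my own choices:

const express = require('express');
const app = express();

app.use(express.json());

// Dialogflow POSTs its webhook requests to this route.
app.post('/dialogflow-webhook', (req, res) => {
    res.json({ fulfillmentText: 'Hello from Cloud Run' });
});

// Cloud Run injects the port to listen on via the PORT environment variable.
const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`Listening on ${port}`));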
What other options do I have?
Anywhere you want! Dialogflow only requires that your webhook:
Be at a public address (i.e. one that Google can resolve and reach)
Run an HTTPS server at that address with a certificate that is not self-signed
During testing, it is common to run it on your own machine via a tunnel such as ngrok, but this isn't a good idea in production. If you're already familiar with running an HTTPS server in another environment, and you wish to continue using that environment, you should be fine.

How to build a perl web-service infrastructure

I have many scripts that I use to manage a multi-server infrastructure. Some of these scripts require root access, some require access to databases, and most of them are Perl based. I would like to convert all these scripts into very simple web services that can be executed from different applications. These web services would take regular request inputs and would output JSON as a result of being executed. I'm thinking that I should set up a simple Perl dispatcher, call it action, that would do logging, check credentials, and execute these simple scripts. Something like:
http://host/action/update-dns?server=www.google.com&ip=192.168.1.1
This would invoke the action Perl driver, which in turn would call the update-dns script with the appropriate parameters (perhaps cleaned in some way) and return an appropriate JSON response. I would like this infrastructure to have the following attributes:
All scripts reside in a single place. If a new script is dropped there, then it automatically becomes callable.
All scripts need to have some form of manifest that describes who can call it (belonging to some LDAP group), what parameters it takes, what the response is, etc., so that it is self-explanatory.
All scripts are logged in terms of who did what and what was the response.
It would be great if there was a command-line way to do something like # action update-dns --server=www.google.com --ip=192.168.1.1
Do I have to get this going from scratch or is there something already on top of which I can piggy back on?
You might want to check out my framework Sub::Spec. The documentation is still sparse, but I'm already using it for several projects, including for my other modules in CPAN.
The idea is that you write your code in functions, decorate/add enough metadata to these functions (including a summary, specification of arguments, etc.), and there will be toolchains to take care of what you need, e.g. running your functions on the command line (using Sub::Spec::CmdLine) and over HTTP (using Sub::Spec::HTTP::Server and Sub::Spec::HTTP::Client).
There is a sample project in its infancy. Also take a look at http://gudangapi.com/. For example, the function GudangAPI::API::finance::currency::id::bca::get_bca_exchange_rate() will be accessible as an API function via HTTP API.
Contact me if you are interested in deploying something like this.