Why doesn't the environment variable update in a Postman Flow?

When I call an API normally in Postman and run a test script that sets an environment variable, it works; but when I use that API in a Postman Flow, the environment doesn't change.
Script in my test:
const body = pm.response.json();
pm.environment.set('email', body.email);

It looks like you're describing the issue covered in the Discussions section of the Postman Flows repository:
https://github.com/postmanlabs/postman-flows/discussions/142. Here are some key points from it:
I want to begin by saying that nothing is wrong with environments or variables. They just work differently in Flows from how they used to work in the Collection Runner or the Request Tab.
Variables are not first-class citizens in Flows.
It was a difficult decision to break the existing pattern, but we firmly believe this is a necessary change as it would simplify problems for both us and users.
The environment works in read-only mode; updates to the environment from scripts are not respected.
Also in this post they suggest:
We encourage using the connection to pipe data from one block to another, rather than using Globals/Environments, etc.
According to this post:
We do not support updating globals and environments using Flows.

Related

How to set maintenance mode with Django in a stateless application

I've hosted my app on Google Cloud Run: a simple Vue frontend connected to a Django API. The problem arises when I try to put the API into maintenance mode, protecting it from unexpected calls. For this purpose I've used the django-maintenance-mode package, but, as I said, due to the inherently stateless character of Cloud Run, the environment variable that stores the maintenance-mode value is dropped when there isn't any active instance, reverting to OFF.
I'd like to know any other possible solution or fix overriding any of this package's methods to make it work in my project.
Thanks in advance!!
You can use graceful shutdowns, which allow you to capture the environment variable that stores the maintenance-mode value. Once the value is captured, you can store it in a database (Cloud SQL) or in a file on Cloud Storage. At each startup, you read the last value back.
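A minimal sketch of that pattern, assuming a `MAINTENANCE_MODE` environment variable and using a local file as a stand-in for Cloud Storage or Cloud SQL; every name here is illustrative, not part of django-maintenance-mode's API:

```python
# Sketch: persist the maintenance flag on Cloud Run's SIGTERM and
# restore it at startup. The file path stands in for a Cloud Storage
# object or a Cloud SQL row; names are illustrative.
import os
import signal

STATE_FILE = "maintenance_mode.state"  # stand-in for durable storage

def save_state(*_args):
    """SIGTERM handler: persist the current maintenance-mode value."""
    with open(STATE_FILE, "w") as f:
        f.write(os.environ.get("MAINTENANCE_MODE", "off"))

def restore_state():
    """At startup, read the last persisted value back into the environment."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            os.environ["MAINTENANCE_MODE"] = f.read().strip()

# Cloud Run sends SIGTERM before stopping an instance.
signal.signal(signal.SIGTERM, save_state)
restore_state()
```

In a real deployment the handler would write to Cloud Storage or Cloud SQL instead of a local file, since the container filesystem is also ephemeral.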

Why does monitor show no tests found when there are tests in each GET/POST request?

When I run my Postman monitor it says there are no tests available, even though I have created a single test in the request.
Is this referring to some other type of test?
I have attached both images: the Postman monitor error and the developed test.
I have a similar issue.
Monitors are an online feature and are only available in workspaces.
I suspect the workspace runs in Postman's cloud, so it has trouble reaching my company's network, which is protected by a firewall. I tried turning the proxy on and off but still have the issue.
I feel Newman may work instead, though it needs more effort to implement.
There are a few possible reasons, and they are covered well in the official docs:
https://learning.postman.com/docs/monitoring-your-api/troubleshooting-monitors/
In my experience, you should not call pm.response.json() globally; call it inside a test function (pm.test()), because saved global variables are not supported in monitors.

Within a team workspace, can I have local variables that are not saved to the team workspace (cloud)?

I want to know if it is possible to have a set of local variables that I can use in my team workspace but that will not be saved.
For example, I want to use some auth credentials across multiple tests but do not want them saved to Postman's cloud.
I believe I can achieve this in Newman using --global-var "<global-variable-name>=<global-variable-value>" but it would be great to make it possible through the GUI.
I have looked online and read Postman's documentation, in particular the variables page, which talks in detail about:
Global
Environment
Local
Data
None of these seem to do what I require.
If you just set the variable's Current Value, it won't get synced to the Postman servers and will stay local to your machine.
This is part of Postman sessions - More details can be found here https://blog.getpostman.com/2018/08/09/sessions-faq/

Is there a programmatic way to export Postman Collections?

I have an ever-growing collection of Postman tests for my API that I regularly export and check in to source control, so I can run them as part of CI via Newman.
I'd like to automate the process of exporting the collection when I've added some new tests - perhaps even pull it regularly from Postman's servers and check the updated version in to git.
Is there an API I can use to do this?
I would settle happily for a script I could run to export my collections and environments to named json files.
Such a feature is available in Postman Pro when you use the cloud instance feature (I haven't used it yet, but I'll probably do so for continuous integration). I was also interested and went through this information:
FYI, you don't even need to export the collection. You can use Newman to talk to the Postman cloud instance and run collections directly from there. So, when a collection is updated in Postman, Newman will automatically run the updated collection on its next run.
You can also pass in the environment URLs to have Newman swap environments automatically (we use this to run a health-check collection across all our environments: Dev, Test, Stage and Prod).
You should check out this feature; Postman licences are not particularly expensive, and it can be worth it.
Hope this helps,
Alexandre
You have probably solved your problem by now, but for anyone else coming across this, Postman has an API that lets you access information about your collections. You could call /collections to get a list of all your collections, then query for the one(s) you want by their id. (If the link doesn't work, google "Postman API".)
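For the "script I could run to export my collections to named JSON files" option, a minimal sketch against the Postman API might look like this. The API key and collection UID are placeholders you would substitute with your own; the `GET /collections/{uid}` endpoint and `X-Api-Key` header come from Postman's public API:

```python
# Sketch: download one Postman collection as JSON via the Postman API.
# API_KEY and the collection UID are placeholders.
import json
import urllib.request

API_KEY = "PMAK-your-key-here"          # placeholder: your Postman API key
BASE = "https://api.getpostman.com"

def build_export_request(collection_uid: str) -> urllib.request.Request:
    """Prepare an authenticated GET for one collection's JSON."""
    return urllib.request.Request(
        f"{BASE}/collections/{collection_uid}",
        headers={"X-Api-Key": API_KEY},
    )

def save_collection(collection_uid: str, path: str) -> None:
    """Fetch the collection and write it to a named JSON file."""
    with urllib.request.urlopen(build_export_request(collection_uid)) as resp:
        data = json.load(resp)
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```

Run from CI on a schedule, the saved file can then be committed to git and fed straight to Newman.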

How to build a Perl web-service infrastructure

I have many scripts that I use to manage a multi-server infrastructure. Some of these scripts require root access, some require access to databases, and most of them are Perl-based. I would like to convert all these scripts into very simple web services that can be executed from different applications. These web services would take regular request inputs and would output JSON as the result of being executed. I'm thinking that I should set up a simple Perl dispatcher, call it action, that would do logging, check credentials, and execute these simple scripts. Something like:
http://host/action/update-dns?server=www.google.com&ip=192.168.1.1
This would invoke the action Perl driver, which in turn would call the update-dns script with the appropriate parameters (perhaps cleaned in some way) and return an appropriate JSON response. I would like this infrastructure to have the following attributes:
All scripts reside in a single place. If a new script is dropped there, then it automatically becomes callable.
All scripts need to have some form of manifest that describes who can call them (belonging to some LDAP group), what parameters they take, what the response is, etc., so that each script is self-describing.
All scripts are logged in terms of who did what and what was the response.
It would be great if there were a command-line way to do something like # action update-dns --server=www.google.com --ip=192.168.1.1
Do I have to build this from scratch, or is there something existing I can piggyback on?
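The manifest-plus-dispatcher idea described above can be sketched in a few lines. Python is used here purely for illustration (the real scripts are Perl), and every name below is hypothetical:

```python
# Sketch of an "action" dispatcher: each script registers a manifest
# (who may call it, which parameters it takes), and one entry point
# checks credentials, validates parameters, logs the call, and returns
# JSON. All names are illustrative, not a real framework.
import json

REGISTRY = {}
AUDIT_LOG = []

def script(name, allowed_groups, params):
    """Decorator: register a handler together with its manifest."""
    def wrap(fn):
        REGISTRY[name] = {"fn": fn, "groups": set(allowed_groups), "params": set(params)}
        return fn
    return wrap

@script("update-dns", allowed_groups={"dns-admins"}, params={"server", "ip"})
def update_dns(server, ip):
    # A real implementation would shell out to the existing Perl script.
    return {"server": server, "ip": ip, "status": "updated"}

def action(name, user, user_groups, **args):
    """Single dispatch point: credential check, parameter check, audit log."""
    entry = REGISTRY.get(name)
    if entry is None:
        return json.dumps({"error": f"unknown action {name}"})
    if not entry["groups"] & set(user_groups):
        return json.dumps({"error": "forbidden"})
    if set(args) != entry["params"]:
        return json.dumps({"error": "bad parameters"})
    result = entry["fn"](**args)
    AUDIT_LOG.append({"user": user, "action": name, "args": args, "result": result})
    return json.dumps(result)
```

The same registry could back both the HTTP endpoint and the command-line `action` tool, since both reduce to a name plus keyword arguments.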
You might want to check out my framework Sub::Spec. The documentation is still sparse, but I'm already using it for several projects, including for my other modules in CPAN.
The idea is that you write your code as functions and decorate/add enough metadata to these functions (including a summary, a specification of arguments, etc.), and there are toolchains to take care of what you need, e.g. running your functions on the command line (using Sub::Spec::CmdLine) and over HTTP (using Sub::Spec::HTTP::Server and Sub::Spec::HTTP::Client).
There is a sample project in its infancy. Also take a look at http://gudangapi.com/. For example, the function GudangAPI::API::finance::currency::id::bca::get_bca_exchange_rate() will be accessible as an API function via HTTP API.
Contact me if you are interested in deploying something like this.