I am working on several APIs at the same time, with different URLs, endpoints, etc.
I'd like to run Postman in a different environment depending on which API I'm working on.
Ideally, each environment would allow the use of different collections, env variables, etc.
Is this possible?
You can create a collection for each environment:
Add Collection
New Collection
and then use the matching collection with each environment.
Follow these steps
Create a new collection/folder in Postman.
Organize the APIs in the order they should be executed (e.g., API 'A' before API 'B' if you want 'A' to run first).
Run the collection in the Collection Runner.
You will get a result showing which APIs passed and which failed.
Thank you!!
I am working on several APIs at the same time, with different URLs, endpoints, etc.
As I understand your question, you essentially have a couple of requests that you want to run together in one scenario, targeting different URLs, because they logically belong together. For this I would create an environment for that scenario and include one entry for each API URL, so that each request can target its corresponding URL.
This builds on the fact that you can reference environment variables in your URL definitions, which is very handy!
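For illustration, such an environment might contain one URL variable per API (variable names here are hypothetical):

users_api_url = https://users.example.com/v2
orders_api_url = https://orders.example.com/v1

The requests in the scenario would then use {{users_api_url}}/users and {{orders_api_url}}/orders as their URLs, so each request resolves against its own API.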
Related
When I call an API normally in Postman, run a test script, and set an environment value, it works; but when I use that API in a Postman Flow, the environment doesn't change.
Script in my test:
const body = pm.response.json(); // parse the response body first
pm.environment.set('email', body.email);
It looks like you are describing this issue from the discussions section of the Postman Flows repository:
https://github.com/postmanlabs/postman-flows/discussions/142. Here are some key points from it:
I want to begin by saying that nothing is wrong with environments or variables. They just work differently in Flows from how they used to work in the Collection Runner or the Request Tab.
Variables are not first-class citizens in Flows.
It was a difficult decision to break the existing pattern, but we firmly believe this is a necessary change as it would simplify problems for both us and users.
The environment works in read-only mode; updates to the environment from scripts are not respected.
Also in this post they suggest:
We encourage using the connection to pipe data from one block to another, rather than using Globals/Environments, etc.
According to this post:
We do not support updating globals and environments using Flows.
To elaborate: we have one server that we have set up to run Django. The issue is that we need to establish a "public" test server that our end users can try before we push changes to production.
Normally we would have production.domain.com and testing.domain.com and run them separately. However, due to conditions outside our control, we only have access to one domain. We will call it program.domain.com for now.
Is there a way to set up two entirely separate Django instances (i.e., we do not want the admin of the production version to be able to access demo data, and vice versa) such that we have program.domain.com/production and program.domain.com/development environments?
I looked over Django's "sites" framework, but as far as I can see, all it can do is separate domains, not paths, and both "sites" can access the same data.
However, as I stated, we want to keep our testing data and our production data separate, yet give our end-user testers a version they can tinker with, while keeping the production, public test, and local development (runserver command) versions separate.
I would say use the /production or /development path prefix to select which database to use. You can read more about multitenancy here: https://books.agiliq.com/projects/django-multi-tenant/en/latest/
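A minimal sketch of that idea, assuming two aliases named production and development in settings.DATABASES (all names here are hypothetical): a middleware remembers the path prefix of the current request, and a database router picks the database from it.

# routing.py -- illustrative sketch, not production-ready
import threading

_local = threading.local()

class TenantMiddleware:
    """Remember which path prefix the current request came in on."""
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # '/production/...' -> 'production', '/development/...' -> 'development'
        prefix = request.path.lstrip('/').split('/', 1)[0]
        _local.db_alias = prefix if prefix in ('production', 'development') else 'default'
        return self.get_response(request)

class PathDatabaseRouter:
    """Send reads and writes to the database chosen by the middleware."""
    def db_for_read(self, model, **hints):
        return getattr(_local, 'db_alias', 'default')

    def db_for_write(self, model, **hints):
        return getattr(_local, 'db_alias', 'default')

With TenantMiddleware added to MIDDLEWARE and DATABASE_ROUTERS = ['routing.PathDatabaseRouter'] in settings, requests under each prefix hit their own database. Note this only separates the data; code, settings, and admin accounts are still shared, so for the stricter isolation you describe you would still want two separate Django processes behind the same domain.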
I have an ever-growing collection of Postman tests for my API that I regularly export and check in to source control, so I can run them as part of CI via Newman.
I'd like to automate the process of exporting the collection when I've added some new tests - perhaps even pull it regularly from Postman's servers and check the updated version in to git.
Is there an API I can use to do this?
I would settle happily for a script I could run to export my collections and environments to named json files.
Such a feature should be available in Postman Pro when you use the cloud instance feature (I haven't used it yet, but I probably will for continuous integration). I'm also interested, and I came across this information:
FYI, you don't even need to export the collection. You can use Newman to talk to the Postman cloud instance and call collections directly from there. So when a collection is updated in Postman, Newman will automatically run the updated collection on its next run.
You can also pass in environment URLs to have Newman swap environments automatically (we use this to run a health-check collection across all our environments: Dev, Test, Stage & Prod).
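For example, an invocation along these lines, where the collection/environment UIDs and the API key are placeholders:

newman run "https://api.getpostman.com/collections/{collection-uid}?apikey={postman-api-key}" -e "https://api.getpostman.com/environments/{environment-uid}?apikey={postman-api-key}"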
You should check out this feature; Postman licences are not particularly expensive, and it can be worth it.
hope this helps
Alexandre
You have probably solved your problem by now, but for anyone else coming across this, Postman has an API that lets you access information about your collections. You could call /collections to get a list of all your collections, then query for the one(s) you want by their id. (If the link doesn't work, google "Postman API".)
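To get the script the question asks for, something along these lines against that API should work (a minimal sketch, assuming your key is in a POSTMAN_API_KEY environment variable and the requests library is installed):

# export_postman.py -- dump every collection and environment to a named JSON file
import json
import os

import requests

API = "https://api.getpostman.com"
HEADERS = {"X-Api-Key": os.environ["POSTMAN_API_KEY"]}

def export(kind):
    # kind is "collections" or "environments"
    listing = requests.get(f"{API}/{kind}", headers=HEADERS).json()[kind]
    for item in listing:
        detail = requests.get(f"{API}/{kind}/{item['uid']}", headers=HEADERS).json()
        filename = f"{item['name']}.{kind[:-1]}.json"
        with open(filename, "w") as f:
            json.dump(detail, f, indent=2)
        print("wrote", filename)

if __name__ == "__main__":
    export("collections")
    export("environments")

Run it from your repository and commit the resulting .json files to git.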
There are many things that differ between development and production. For example, when using the Facebook API, I need to change the application id (because there are different ids for testing and production) every time I push an update to the app.
I only update the app, so what do Django developers usually do in this case? Possibly saving a variable in settings.py and reading it from there, or creating a separate file in the virtual environment folder, which in my case is also kept separate?
There is no official way of splitting your Django settings for prod and dev -- developers are encouraged to find a way that works for them. The Django wiki lists several good options: https://code.djangoproject.com/wiki/SplitSettings
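One common pattern from that page, as a sketch (module names and the FACEBOOK_APP_ID setting are just examples): keep shared settings in a base module and override the per-environment values in thin wrappers.

# settings/base.py -- everything shared between environments
# INSTALLED_APPS, MIDDLEWARE, TEMPLATES, ...
FACEBOOK_APP_ID = None  # overridden per environment

# settings/dev.py
from .base import *
DEBUG = True
FACEBOOK_APP_ID = "your-test-app-id"

# settings/production.py
from .base import *
DEBUG = False
FACEBOOK_APP_ID = "your-production-app-id"

You then pick the file per machine, e.g. DJANGO_SETTINGS_MODULE=myproject.settings.production on the production server, and never have to touch the code when pushing an update.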
I have many scripts that I use to manage a multi-server infrastructure. Some of these scripts require root access, some require access to databases, and most of them are Perl-based. I would like to convert all these scripts into very simple web services that can be executed from different applications. These web services would take regular request inputs and would output JSON as a result of being executed. I'm thinking that I should set up a simple Perl dispatcher, call it action, that would do logging, check credentials, and execute these simple scripts. Something like:
http://host/action/update-dns?server=www.google.com&ip=192.168.1.1
This would invoke the action Perl driver, which in turn would call the update-dns script with the appropriate parameters (perhaps cleaned in some way) and return an appropriate JSON response. I would like this infrastructure to have the following attributes:
All scripts reside in a single place. If a new script is dropped there, then it automatically becomes callable.
All scripts need some form of manifest describing who can call them (membership in some LDAP group), what parameters they take, what the response is, etc., so that each one is self-explanatory.
All invocations are logged in terms of who did what and what the response was.
It would be great if there was a command-line way to do something like # action update-dns --server=www.google.com --ip=192.168.1.1
Do I have to get this going from scratch or is there something already on top of which I can piggy back on?
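For illustration, the dispatcher described above is only a few lines in any language; a minimal sketch (in Python here, with hypothetical paths, and with the authentication, manifest checks, and logging left out):

# action_server.py -- maps /action/<script>?k=v to SCRIPT_DIR/<script> --k=v
import json
import os
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qsl

SCRIPT_DIR = "/opt/action-scripts"  # hypothetical: drop new scripts here

class ActionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        name = os.path.basename(url.path.split("/action/", 1)[-1])
        script = os.path.join(SCRIPT_DIR, name)  # basename() blocks path escapes
        if not os.path.isfile(script):
            self.send_error(404, "unknown action")
            return
        args = [f"--{k}={v}" for k, v in parse_qsl(url.query)]
        result = subprocess.run([script, *args], capture_output=True, text=True)
        body = json.dumps({"exit": result.returncode, "output": result.stdout}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ActionHandler).serve_forever()

The same wrapper logic, pointed at the same script directory, would also serve as the command-line front end.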
You might want to check out my framework, Sub::Spec. The documentation is still sparse, but I'm already using it for several projects, including my other modules on CPAN.
The idea is that you write your code as functions, decorate/add enough metadata to those functions (including a summary, a specification of arguments, etc.), and a toolchain takes care of what you need, e.g. running your functions from the command line (using Sub::Spec::CmdLine) and over HTTP (using Sub::Spec::HTTP::Server and Sub::Spec::HTTP::Client).
There is a sample project in its infancy. Also take a look at http://gudangapi.com/. For example, the function GudangAPI::API::finance::currency::id::bca::get_bca_exchange_rate() will be accessible as an API function over HTTP.
Contact me if you are interested in deploying something like this.