How to run a collection with a specific environment triggered from a Postman webhook

I am trying to trigger a Postman collection run, via a Postman webhook, when code is deployed successfully.
So, I created a webhook as per the Postman API documentation to run my collection whenever the latest code is deployed on the server. The collection needs to run under a specific Postman environment. However, when I hit the webhook, it runs the collection with no environment. I have googled a lot but haven't found any way to trigger the collection with a specific environment. The results are also getting added to a different workspace, even though I have added the workspace id.
Payload:
{
  "webhook": {
    "name": "Test Webhook",
    "collection": "{{collection id}}",
    "workspace": "{{workspace id}}"
  }
}
I have been stuck on this for the last few days, so I need to find a way as soon as possible.
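For reference, here is a sketch of the documented Create Webhook call written as curl. Note that, per the Postman API docs, the target workspace appears to go in the query string rather than in the request body, and the documented webhook payload only carries name and collection - no environment field seems to be documented:

# Sketch: create the webhook via the Postman API; all {{...}} values are placeholders.
curl -X POST "https://api.getpostman.com/webhooks?workspace={{workspace_id}}" \
  -H "x-api-key: {{postman_api_key}}" \
  -H "Content-Type: application/json" \
  -d '{"webhook": {"name": "Test Webhook", "collection": "{{collection_id}}"}}'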

Related

Variable, relative resource path for binary request body in Postman/Newman

Context
I am developing a web application that avails of a suite of REST tests built for Postman.
The idea is that you can run the tests manually with Postman as a REST client against the application runtime, and the Maven POM configures newman to run them automatically in the CI pipelines as integration tests whenever a build is triggered.
This has run fairly well in the past.
Requirement
However, due to an overhaul in business logic, many of those tests now require a binary body as a file resource in POST requests (mostly zip archives).
I need those tests to work in 3 scenarios:
Manually when running individual tests with Postman locally, against a runtime of the web application
Semi-manually as above, but by triggering a runner in Postman
Automatically, when newman is started by maven during the integration tests phase of our pipeline
In order to make sure the path to the file in each request would work regardless of the way the tests are run, I have added a Postman environment variable in each profile. The variable would then be used by the collection in the relevant requests, e.g.:
"body": {
"mode": "file",
"file": {
"src": "{{postman_resources_path}}/empty.zip"
}
},
The idea would be that:
locally, you manually overwrite the value of postman_resources_path in the profile to point to an absolute path on your machine (e.g. simply where you have the resources in source control) - this would then resolve it both for manual tests and a local runner
for the CI pipelines, the same would apply with a default value pointing to a path relative to the --working-dir value, which would be set in the newman command-line parametrization in the exec-maven-plugin already in use to run newman
Problem
While I haven't had a chance to test the pipeline yet with those assumptions, I can already notice that this isn't working locally.
Looking at a request, I can see the environment variable is not being resolved, even though the value is manually set in the profile I'm running the request against.
TL;DR The request fails, since the resource is not found.
The most relevant literature I've found does not address my use case entirely, but the solution given seems to follow a similar direction: "variabilize" the path - see here.
I could not find anything specific enough in the Postman reference.
I think I'm onto something here, but I won't accept my own answer yet.
TL;DR it may be simpler than it seemed initially.
This Postman doc page states:
When you send a form-data or binary file with a request body, Postman saves a path to the file as part of the collection. The file path is relative to your working directory.
If I modify the raw collection JSON to ensure only the file name (or any relative path) is the value of the "src" key in the file definition, and set up the working directory manually in my Postman client, it seems to resolve the file correctly - so there is no need for (non-working) variables in the file path.
The working directory setting does not seem to be saved in the collection, meaning a manual one-time setup for local clients and the usage of --working-dir with newman should do the trick altogether.
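For the newman side, the invocation would then look roughly like this (a sketch - the collection/environment file names and the resource folder are placeholders, and --working-dir is newman's documented flag):

# Sketch: point newman's working directory at the folder holding the binary
# resources, so a relative "src" such as "empty.zip" resolves against it.
newman run my-collection.json \
  --environment my-environment.json \
  --working-dir src/test/resources/postman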
Will self-accept once I've successfully tested with newman.

Error: Asset 'webhooks/ActionsOnGoogleFulfillment' cannot be deployed

I wanted to build a Google Assistant action with custom actions using the Actions SDK. Since I am new to this, I followed the steps in the tutorial "Build Actions for Google Assistant using Actions SDK (Level 1)" as-is, in order to build a sample assistant. However, in step 5 (Implement fulfillment), when trying to test the fulfillment by running the command
gactions deploy preview
I am getting the below output in the terminal with error
Sending configuration files...
Sending resources...
Waiting for server to respond. It could take up to 1 minute if your cloud function needs to be redeployed.
[ERROR] Server did not return HTTP 200.
{
  "error": {
    "code": 400,
    "message": "Asset 'webhooks/ActionsOnGoogleFulfillment' cannot be deployed. [An operation on function cf-_CcGD8lKs_F_LHmFYfJZsQ-name in region us-central1 in project <my-project-id> is already in progress. Please try again later.]"
  }
}
And when I checked the Google Cloud Platform -> Cloud Functions console for this project, the following is seen:
Image 1 (screenshot): Cloud Platform Cloud Functions Console
There is a failed deployment of the cloud function, marked with an exclamation mark. If I delete that function, a new function is immediately deployed automatically, but with a spinning wheel symbol (loading/still deploying) instead of the exclamation mark. I cannot delete the cloud function while it is still loading/deploying. After 10-15 minutes, the spinning symbol changes to the exclamation symbol, and if I delete it, a new one automatically appears again. It goes on like this.
Image 2 (screenshot): Cloud Platform Cloud Functions Console
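For what it's worth, the same inspection and cleanup can be attempted from the command line instead of the console (a sketch; it assumes the gcloud CLI is authenticated against the project, and the function name is the one from the error message above):

# List the functions in the region to check their deployment state
gcloud functions list --regions=us-central1
# Delete the failed function once it is no longer in a pending state
gcloud functions delete cf-_CcGD8lKs_F_LHmFYfJZsQ-name --region=us-central1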
This problem arises only when implementing the webhook/fulfillment (step 5). For static Action responses, it deploys successfully for testing on entering the command "gactions deploy preview" (steps 1 to 4 are successfully implemented).
I have followed the tutorial as-is, hence the code and directory structure are the same as in the tutorial (only the project-id or Actions Console project name will differ).
Github Repository for Code
Since this is only for the tutorial, I am not using a billing account at present; instead I made the following change in package.json (changed the Node version from 10 to 8):
"engines": {
"node": "8"
},
Due to this continuous cycle of automatic failed deployments, the error mentioned above occurs when I try to explicitly deploy the project:
"An operation on function cf-_CcGD8lKs_F_LHmFYfJZsQ-name in region us-central1 in project <my-project-id> is already in progress. Please try again later".
Can anyone please suggest how to stop this continuous automatic failed deployment of the cloud functions, so that the function I deploy will be successfully deployed? Would really appreciate your help.
(Note: this is the first time I have posted a question on Stack Overflow, so please let me know if there are any mistakes or conventions I might not have followed. I will improve it.)
Posting this as Community Wiki as it's based on the comments.
As clarified, the issue seems to be the billing account: the tutorial mentions that it's necessary to have one set up for the Cloud Functions to be deployed correctly. Beyond that, it's not possible to deploy Cloud Functions (webhooks) without a billing account at all, so yes, even though you are not using Node.js 10, you will need to have a billing account configured for your project.
To summarize, a billing account is needed to avoid any possible deployment failure, even if you are not using Node.js 10, as explained in the tutorial you followed.
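As a sketch of the fix from the CLI (assuming the gcloud beta billing commands are available; the billing account ID below is a placeholder):

# Find the billing account ID, then link it to the project from the error message
gcloud beta billing accounts list
gcloud beta billing projects link <my-project-id> --billing-account 0X0X0X-0X0X0X-0X0X0X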

How to get SonarQube results back to CodeBuild

I've seen many discussions online about Sonar webhooks sending scan results to Jenkins, but as a CodePipeline acolyte, I could use some basic help with the steps to supply Sonar scan results (e.g., quality gate pass/fail status) to the pipeline.
Is the Sonar webhook the right way to go, or is it possible to use Sonar's API to fetch the status of a scan for a given code project?
Our code is in BitBucket. I'm working with the AWS admin who will create the CodePipeline that fires when a push to the repo is attempted. sonar-scanner will be run, and then we'd like the pipeline to stop if the quality does not pass the Quality Gate.
If I were to use a Sonar webhook, I imagine the value for host would be, what, the AWS instance running the CodeBuild?
Any pointers, references, examples welcome.
I created a PowerShell script to use with Azure DevOps that could possibly be migrated to a shell script that runs in the CodeBuild activity:
https://github.com/michaelcostabr/SonarQubeBuildBreaker
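A minimal shell sketch of that idea for a CodeBuild step, assuming the standard SonarQube web API (api/ce/task and api/qualitygates/project_status), that sonar-scanner has already written .scannerwork/report-task.txt, and that jq is available in the build image:

#!/usr/bin/env bash
# Sketch: fail the build step when the SonarQube quality gate fails.
# Assumes SONAR_HOST_URL and SONAR_TOKEN are set in the build environment.
set -euo pipefail

CE_TASK_ID=$(grep '^ceTaskId=' .scannerwork/report-task.txt | cut -d= -f2)

# Poll the compute-engine task until the background analysis has finished
STATUS=PENDING
while [ "$STATUS" = "PENDING" ] || [ "$STATUS" = "IN_PROGRESS" ]; do
  sleep 5
  STATUS=$(curl -su "$SONAR_TOKEN:" "$SONAR_HOST_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.status')
done

ANALYSIS_ID=$(curl -su "$SONAR_TOKEN:" "$SONAR_HOST_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.analysisId')

# Quality gate status for that analysis: OK passes, anything else fails the build
GATE=$(curl -su "$SONAR_TOKEN:" "$SONAR_HOST_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" \
  | jq -r '.projectStatus.status')
echo "Quality gate: $GATE"
[ "$GATE" = "OK" ]

A Sonar webhook would push a similar projectStatus payload to a URL instead, but polling avoids having to expose the build environment as a reachable host.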

Notify to refresh ember.js application after successful deployment on GitLab CI/CD

We are using the awesome GitLab CI/CD workflow and have been satisfied with the process. A lot of Merge Requests can happen every day, and we want to make sure that our application is updated in real time whenever our pipeline jobs are successful.
For instance, our master branch could also be deployed to staging whenever a Merge Request is accepted. Here is our example deploy_staging job in .gitlab-ci.yml:
deploy_staging:
  type: deploy
  script:
    - yarn install
    - node_modules/ember-cli/bin/ember deploy staging --activate
  environment:
    name: staging
  only:
    - master
Since Ember is a Single Page Application, once a new deployment is shipped and available, the running app doesn't recognize the new changes. Hence we need to refresh the page to pick up the update.
The downside to this idea is that we can't afford to refresh the page if the end user is in the middle of a transaction. So my thought is to show a notification prompting a refresh, similar to mobile apps: when updates are available, users just follow the link and apply the update manually.
Now this problem is narrowed down to this:
How can we send a signal to the running Ember application so we can prompt a notification to refresh the page whenever updates are available (after a successful CI/CD delivery)?
For this you'll want service workers :)
Service Workers are usually how most other sites notify about updates.
For Ember, setting them up is fairly simple: we have ember-service-worker to get your caching and manifest going, and then we have ember-service-worker-update-notify for automatic notification of asset updates.
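For context, both addons are a one-line install each (a sketch; the addon names are as published on npm):

ember install ember-service-worker
ember install ember-service-worker-update-notify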
Though, there is a PR here: https://github.com/topaxi/ember-service-worker-update-notify/pull/3 to notify about updates in a more automated way - the current way only notifies about an update upon refresh and load of cached assets.
I recently opened that PR because I think, with #pollingInterval={{5000}}, every 5 seconds we would check whether there is an update - which seems like the ideal interval.
Hope this helps!

Is there a programmatic way to export Postman Collections?

I have an ever-growing collection of Postman tests for my API that I regularly export and check in to source control, so I can run them as part of CI via Newman.
I'd like to automate the process of exporting the collection when I've added some new tests - perhaps even pull it regularly from Postman's servers and check the updated version into git.
Is there an API I can use to do this?
I would settle happily for a script I could run to export my collections and environments to named json files.
Such a feature should be available in Postman Pro when you use the Cloud instance feature (I haven't used it yet, but I'll probably do so for continuous integration). I'm also interested, and I went through this information:
FYI, that you don't even need to export the collection. You can use Newman to talk to the Postman cloud instance and call collections directly from there. Therefore, when a collection is updated in Postman, Newman will automatically run the updated collection on its next run.
You can also add in the environment URLs to automatically have Newman environment swap (we use this to run a healthcheck collection across all our environments [Dev, Test, Stage & Prod])
You should check out this feature; Postman licences are not particularly expensive, and it can be worth it.
Hope this helps,
Alexandre
You have probably solved your problem by now, but for anyone else coming across this, Postman has an API that lets you access information about your collections. You could call /collections to get a list of all your collections, then query for the one(s) you want by their id. (If the link doesn't work, google "Postman API".)
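To illustrate, here is a minimal sketch of such an export script, assuming an API key in POSTMAN_API_KEY, jq installed, and the endpoints as documented in the Postman API:

#!/usr/bin/env bash
# Sketch: dump every collection and environment to a JSON file per uid.
set -euo pipefail
API=https://api.getpostman.com

# Each /collections entry has a uid; fetch the full definition per uid
for uid in $(curl -s -H "x-api-key: $POSTMAN_API_KEY" "$API/collections" | jq -r '.collections[].uid'); do
  curl -s -H "x-api-key: $POSTMAN_API_KEY" "$API/collections/$uid" > "collection-$uid.json"
done

# Same pattern for environments
for uid in $(curl -s -H "x-api-key: $POSTMAN_API_KEY" "$API/environments" | jq -r '.environments[].uid'); do
  curl -s -H "x-api-key: $POSTMAN_API_KEY" "$API/environments/$uid" > "environment-$uid.json"
done

If you'd rather have human-readable file names, the list responses also include a name field that you could use instead of the uid.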