I have an SCM that only allows HTTP push/pull/poll requests. Without modifying my SCM, I would like Jenkins to trigger a build (as soon as possible) when new code is checked in.
Developers usually get notified of new code via an RSS feed.
Is there a recommended Jenkins plugin that can help with this?
If you are using Git, there is the GitHub plugin. GitHub will notify Jenkins via a webhook whenever there is a new commit.
If you are using SVN, use the Poll SCM feature. Specify a schedule so that Jenkins checks for new commits at the configured interval.
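For example, a Poll SCM schedule that checks roughly every five minutes (the H token is Jenkins-specific and spreads polling load across jobs):
H/5 * * * *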
Background
I want to create the following CI/CD flow in AWS and Github, for a react app using Amplify:
A single main branch, with short-lived feature branches and PRs into main.
Each PR triggers its own test environment in Amplify, with its own temporary subdomain, which gets torn down when the PR is merged, as described here.
Merging into main does not automatically trigger a deploy to production.
Instead, there is a separate mechanism (a web page, or amplify command, or even triggers based on git tags) for manually selecting a commit from main to deploy to production.
Questions
It's not clear to me if...
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
There is another AWS tool that solves this.
I'm looking for answers to those questions, or specific references in the docs which address them.
The answers for Amplify are Yes, Yes, Yes, Partially.
(1) A single main branch, with short-lived feature branches and PRs into main.
Yes. Feature branch deployments: you can define which branch patterns, such as feature/*, you wish to auto-deploy.
(2) Each PR triggers its own test environment in Amplify, with its own temporary subdomain,
Yes. Web Previews for PRs. "A web preview deploys every pull request made to your GitHub repository to a unique preview URL which is completely different from the URL your main site uses."
(3) Merging into main does not automatically trigger a deploy to production.
Yes. Disable automatic builds on main.
(4) Instead, there is a separate mechanism ... for manually selecting a commit from main to deploy to production.
Partially (HEAD only?). Call the StartJob API to manually trigger a build from, say, Lambda. The job type RELEASE starts a new job with the latest change from the specified branch. I am not sure whether jobType: MANUAL with a commitId starts a job from an arbitrary commit hash. A CLI sketch for (3) and (4) follows below.
Another workaround for 3+4 is to skip the build for an arbitrary commit. Amplify will skip building if [skip-cd] appears at the end of a commit message.
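A minimal CLI sketch of (3) and (4), assuming an Amplify app ID of d1a2b3c4 and a branch named main (I have not verified the commit-id variant):
# (3) turn off automatic builds for main
aws amplify update-branch --app-id d1a2b3c4 --branch-name main --no-enable-auto-build
# (4) later, manually deploy the latest commit on main
aws amplify start-job --app-id d1a2b3c4 --branch-name main --job-type RELEASE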
In my experience, I don't think there is an easy way to meet your requirements.
If you are using GitLab, you can try GitLab Review Apps to achieve this (I tried it before with some scripts).
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Check the links below, in case they help:
https://www.youtube.com/watch?v=QV2WS535nyI
https://dev.to/rajandmr/deploying-react-app-using-aws-amplify-with-ci-cd-pipeline-setup-3lid
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
For this, you would need to build your own pipeline from scratch. Yes, you can configure it in CodePipeline.
There is another AWS tool that solves this.
If you are okay with Jenkins, then Jenkins can help you achieve this.
You can deploy Jenkins as a Docker container on an AWS EC2 instance and create your pipeline. You can also use a parameterized build for selecting your environment and Git branch.
I am not able to find a way to stop the auto-triggering of the pipeline whenever I push code to Bitbucket.
My assumption is that you want more control over when your pipeline does certain things.
Rather than achieving this through stopping the pipeline from getting triggered, I'd recommend using either stage transitions or manual approvals to achieve this control inside the pipeline.
Stage transitions are better when you want to "turn off" a pipeline and have the latest thing run through when you turn it back on.
Manual approvals are better when you want the version to be locked while waiting for approval so you can run tests without worrying that the version will change.
You mentioned in your comment that you want to run your pipeline only at certain times, so one way to do that is to enable and disable the stage transition after the Source stage on a schedule.
https://docs.aws.amazon.com/codepipeline/latest/userguide/transitions.html
https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals.html
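For example, a hedged sketch with the AWS CLI (pipeline, stage, and action names are placeholders):
# "turn off" the pipeline: block promotion into the Deploy stage
aws codepipeline disable-stage-transition --pipeline-name MyPipeline \
  --stage-name Deploy --transition-type Inbound --reason "Outside deployment window"
# turn it back on; the latest pending change then flows through
aws codepipeline enable-stage-transition --pipeline-name MyPipeline \
  --stage-name Deploy --transition-type Inbound
# or, with a manual approval action, approve the locked version when ready
# (the token comes from get-pipeline-state)
aws codepipeline put-approval-result --pipeline-name MyPipeline \
  --stage-name Deploy --action-name ManualApproval \
  --result summary="Looks good",status=Approved --token <token>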
You can disable the DetectChanges parameter on your Source action, as explained here. Extract with the relevant context:
DetectChanges: Controls automatically starting your pipeline when a new commit is made on the configured repository and branch. If unspecified, the default value is true, and the field does not display by default.
This works for Bitbucket, GitHub, and GitHub Enterprise Server actions. I have a CloudFormation template configured with this option and it works. I am not sure about the same option in the AWS console, because some configurations are only available from CloudFormation or the AWS CLI. As the docs say, "this field does not display by default".
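If you would rather use the AWS CLI than CloudFormation, a sketch along these lines should work (the jq path assumes a single source action in the first stage of your pipeline):
aws codepipeline get-pipeline --name MyPipeline --query pipeline --output json > pipeline.json
# flip DetectChanges on the Source action (configuration values are strings)
jq '.stages[0].actions[0].configuration.DetectChanges = "false"' pipeline.json > pipeline-new.json
aws codepipeline update-pipeline --pipeline file://pipeline-new.json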
I am trying to figure out how to trigger a CI/CD pipeline from a non-source-control trigger.
My plan is to use a Google Web Form to collect all of the variables needed in my scripts, keeping the onboarding process easy enough for non-technical staff. Using the Google Forms API Script Editor, I take the submit response JSON and do a PUT to an S3 bucket/object.
I would like that PUT (write operation) to trigger a CI/CD pipeline.
The CI/CD tool is not important, as it seems all CI/CD tools can only use outgoing webhooks to push to something, like a Slack channel, and cannot ingest events, like an API or a POST/PUT/event trigger.
My Question:
Is it possible to trigger a Pipeline using a PUT or POST?
Tools I would ideally like to use would be GitLab CI, or even Jenkins if it opens up more possibilities.
I have done a lot of reading and am having a hard time coming up with a solution. I would think this is something people would use often, rather than just a simple commit or merge to a source control branch...
From what I have gathered, the API endpoints of CI tools can only process a source control trigger.
Please share if anyone has any input on how to achieve this. I am willing to figure out how to create an API, if that somehow helps.
I would like to focus on AWS at the moment, but the goal would be to also use this solution, or its equivalent, in Azure.
In the job settings, scroll to the Build Triggers section and find the checkbox named "Trigger builds remotely (e.g., from scripts)". You need to provide a token (so that only people who know the token may trigger your job). Once this is done, you can trigger a pipeline using curl:
curl 'myjenkins.mycompany.net/job/MyJobName/build?token=myverysecrettoken&cause=Cause+I+Decided+So'
curl 'myjenkins.mycompany.net/job/MyJobName/buildWithParameters?PARAM1=string1&PARAM2=string2&token=myverysecrettoken'
See also Generic Webhook Trigger Plugin for examples.
For those, like me, who are new to pipelines and looking for similar guidance with GitLab CI:
The same kind of curl request can be made to trigger a pipeline.
However, for my specific question, I was looking to trigger the pipeline by sending a POST to the GitLab CI API directly using an HTTPS endpoint; the curl command did not fit my needs.
To achieve this, you can use the GitLab CI webhook (which can also be triggered from other projects):
Just fill in the ref (branch name) and the GitLab project ID.
Example:
https://gitlab.com/api/v4/projects/19577683/ref/master/trigger/pipeline?token=4785b192773907c280845066093s93
To use the curl command to hit the GitLab project trigger API, similar to Jenkins:
Simply supply the trigger token, which you create in the project's CI/CD settings under Pipeline triggers, and specify the ref, which is a branch name or tag.
Example:
curl -X POST \
-F token=4785b192773907c280845066093s93 \
-F ref=master \
https://gitlab.com/api/v4/projects/19577683/trigger/pipeline
I have created a build pipeline.
I have master, develop and feature/* branches in my Azure repo.
I have created a branch policy to require a build for feature/* branches.
How do I trigger an automatic build on pull request? Or even how do I queue a build manually on the pull request?
I can't see such option on my pull request screen in DevOps.
As far as I know, the build policy should appear above Work Items on the right-hand side. My policy does not appear there, and I don't even have the option to trigger the build manually.
I am not sure what I am doing wrong, or what is missing.
The screenshot you provided shows that the PR is for the develop branch. If you want a PR for develop to trigger a build, then set a policy on the develop branch.
Branch policies apply to the target branch, not the source branch.
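For completeness, a hedged sketch of creating such a policy from the command line with the azure-devops CLI extension (the repository GUID and build definition ID are placeholders):
az repos policy build create \
  --repository-id 11111111-2222-3333-4444-555555555555 \
  --branch develop \
  --build-definition-id 42 \
  --display-name "PR validation" \
  --blocking true \
  --enabled true \
  --manual-queue-only false \
  --queue-on-source-update-only false \
  --valid-duration 0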
I have a job that I want to run every time a commit is made to a repository. I want to avoid pulling this code down; I only want the notification build trigger. So, is there either a way to not pull down certain repositories in your SCM upon a build, or a way to poll things that aren't in the SCM for a build?
You could use a post-commit hook to trigger your Hudson job.
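A minimal sketch of such a hook (hooks/post-commit on the SVN server; the Jenkins URL, job name, and token are placeholders, and the job needs "Trigger builds remotely" enabled with a token):
#!/bin/sh
# SVN passes the repository path and the new revision as $1 and $2
REPOS="$1"
REV="$2"
curl -s "http://jenkins.example.com/job/MyJobName/build?token=myverysecrettoken&cause=SVN+r$REV" >/dev/null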
Since you want to avoid changing SVN, you have to write a job that gets executed every so often (maybe every 5 minutes). This job runs an svn command, using a Windows batch or shell script task, to get the current revision for the branch in question. You can set the status of the job to unstable if there is a change. Don't use failure, because then you can't distinguish between a real failure and a repository change. I think there is a plugin that sets the job status depending on the contents of your output.
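A rough sketch of such a script, assuming SVN 1.9+ for --show-item and a text-finder-style plugin that marks the build unstable when the marker line appears in the output:
#!/bin/sh
REPO_URL=http://svn.example.com/repo/trunk
NEW_REV=$(svn info --show-item last-changed-revision "$REPO_URL")
OLD_REV=$(cat last_rev.txt 2>/dev/null || echo 0)  # archived by the previous run
echo "$NEW_REV" > last_rev.txt
if [ "$NEW_REV" != "$OLD_REV" ]; then
  # a text-finder-style plugin can match this line and set the build to unstable
  echo "REVISION_CHANGED: $OLD_REV -> $NEW_REV"
fi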
You can then use the Email Extension plugin to send an email every time the revision changes. You can get the revision number from the last (or better, the last successful or unstable) run. You can archive a file containing the revision number with the job, or you can set the job description to the revision using the Description Setter plugin. Have a look at Hudson's remote API for ideas on how to get the information from the previous run.
Since you run your job very often during the day, don't forget to delete old job runs. But I would keep at least two days' worth of history, just in case your SVN is down for 24 hours.