Wrong Google Cloud Build trigger is activated from GitLab merge webhook - google-cloud-platform

I'm working on using GitLab webhooks to activate Google Cloud Build triggers. The problem I'm facing is that my webhook for the push trigger works fine, but the merge trigger is never started; instead, the push trigger is activated. On the GitLab side the webhook URLs are different and point to the correct trigger URLs.
While troubleshooting I found the following: a dummy build trigger with an inline default step (an ubuntu image running echo hello world) works fine, but as soon as I add substitution variables I get an error:
Your build failed to run: generic::invalid_argument: generic::invalid_argument: invalid value for 'build.substitutions': key in the template "URL" is not a valid built-in substitution
_URL is bound to $(body.project.git_ssh_url), which according to the docs should be fine for a Merge request event, so this could mean the event type differs from what I expect.
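For reference, the merge trigger was created roughly like this (a sketch from memory; the trigger name, config file, and secret resource are placeholders):
gcloud builds triggers create webhook \
  --name=merge-request-trigger \
  --secret=projects/my-project/secrets/webhook-secret/versions/1 \
  --inline-config=cloudbuild.yaml \
  --substitutions=_URL='$(body.project.git_ssh_url)'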
Please advise or suggest a direction for debugging, such as how to get logs of trigger/build events.
Best,
Alex
P.S. I have already read the docs for webhooks and substitutions, so please don't recommend the general docs again.

Related

Create dependent triggers in GCP cloud build

I need to create a dependent trigger in Cloud Build. Currently I have two triggers as shown in the image below, both of which are created on a push event to the master branch in their respective repos.
'app-engine-test' is triggered on pushing code to a cloud repository, whereas 'seleniumTest' is triggered on pushing code to a Git repository.
However, I want the 'seleniumTest' trigger to run once the 'app-engine-test' build has completed. I could not find any such setting in the GCP UI.
Can anyone please help?
You may be able to do this by using a Pub/Sub message as the trigger for your dependent build.
When a Cloud Build build runs, it publishes messages to the Pub/Sub topic cloud-builds; see https://cloud.google.com/build/docs/subscribe-build-notifications.
So if you have builds app and test, app would be triggered when you push to source control, and test would be triggered when a message is published on the cloud-builds topic.
I haven't tested this myself, but I need something similar, so I will update this answer as I go. If it turns out you can't subscribe to the cloud-builds event, then at the end of the app build you could also publish a message to your own Pub/Sub topic, which you could then use to trigger the second build.
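For example, the last step of the app build could publish the notification itself (an untested sketch; the topic name app-build-done is made up):
# Final build step: announce completion so a trigger subscribed to this topic can start the test build.
gcloud pubsub topics publish app-build-done --message="app build succeeded"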
Another solution in your case might be to merge the two projects and simply run the selenium tests as a final build step once you've successfully deployed the code.

AWS Amplify environment 'dev' not found

I'm working with AWS Amplify, specifically following this tutorial AWS-Hands-On-Tutorial.
I'm getting a build failure when I try to deploy the application.
So far I have tried creating multiple backend environments and connecting them to the frontend, hoping that this would alleviate the issue. The error message leads me to believe that the deploy is not set up to also detect the backend environment, despite my having set it to do so.
I have also tried changing the environment that is set to deploy with the frontend by creating another develop branch to see if that is the issue.
I've had no success with any of these; the build continues to fail. I have also tried running the 'amplify env add' command as the error message suggests. I have not, however, tried "restoring its definition in your team-provider-info.json", as I'm not sure what that entails and can't find any information on it. Regardless, I would think creating a new environment would resolve any potential issues there, and it didn't. Any help is appreciated.
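For reference, the CLI commands I ran look roughly like this (environment names are just examples):
amplify env add    # prompts for a name for the new backend environment
amplify env list   # confirm the environment is known locally
amplify push       # provision the backend resources for it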
Due to the documentation being out of date, I completed the steps below to resolve this issue:
Under Build Settings > Add package version override for Amplify CLI and leave it as 'latest'
Where the tutorial advises to "update your front end branch to point to the backend environment you just created. Under the branch name, choose Edit...", it tells you to use 'dev', but the setup actually created 'staging', so choose that instead.
Lastly, we need to set up a 'Service Role' under General. Select General > Edit > Create New Service Role > select the default options and save the role; it should have a name of amplifyconsole-backend-role. Once the role is saved, go back to General > Edit and select your role from the dropdown; if it doesn't show by default, start typing it in.
After completing these steps, I was able to successfully redeploy my build and get it pushed to prod with authentication working. Hope it helps anyone who is running into this issue on Module 3 of the AWS Amplify Starter Tutorial!

CI CD Pipeline - Non Source Control Triggers

I am trying to figure out how to trigger a CI/CD pipeline from a non-source-control trigger.
My plan is to use a Google Form to collect all of the variables needed in my scripts, keeping the onboarding process easy enough for non-technical staff. Using the Google Forms API Script Editor, I take the submit response JSON and do a PUT to an S3 bucket/object.
I would like that PUT (write operation) to trigger a CI/CD pipeline.
The CI/CD tool is not important; it seems all CI/CD tools can only use outgoing webhooks to push to something like a Slack channel, and cannot ingest anything like an API call, POST/PUT, or event.
My Question:
Is it possible to trigger a pipeline using a PUT or POST?
Tools I would ideally like to use would be GitLab CI, or even Jenkins if it opens up more possibilities.
I have done a lot of reading and am having a hard time coming up with a solution. I would think this is something people would use often, rather than just a simple commit or merge to a source control branch...
From what I have gathered, the API endpoints of CI tools can only process a source control trigger.
If anyone has any input on how to achieve this, please share; I am willing to figure out how to create an API if that somehow helps.
I would like to focus on AWS at the moment, but the goal is to also use this solution, or its equivalent, in Azure.
In the Jenkins job settings, scroll to the Build Triggers section and find the checkbox named "Trigger builds remotely (e.g., from scripts)". You need to provide a token (so that only people who know the token may trigger your job). Once this is done, you can trigger a pipeline using curl:
curl 'myjenkins.mycompany.net/job/MyJobName/build?token=myverysecrettoken&cause=Cause+I+Decided+So'
curl 'myjenkins.mycompany.net/job/MyJobName/buildWithParameters?PARAM1=string1&PARAM2=string2&token=myverysecrettoken'
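If your Jenkins also requires authentication, you may additionally need to pass a user name and API token with basic auth (an untested sketch; user and apitoken are placeholders):
curl -X POST 'https://user:apitoken@myjenkins.mycompany.net/job/MyJobName/build?token=myverysecrettoken'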
See also Generic Webhook Trigger Plugin for examples.
For those new to pipelines like me and looking for similar guidance with GitLab CI:
The same kind of curl request can be made to trigger a pipeline.
However, in my specific case I was looking to trigger the pipeline by sending a POST directly to a GitLab CI HTTPS endpoint, so the curl command did not fit my needs.
To achieve this, you can use the GitLab CI webhook for other projects:
Just fill in the ref (branch name) and the GitLab project ID.
Example:
https://gitlab.com/api/v4/projects/19577683/ref/master/trigger/pipeline?token=4785b192773907c280845066093s93
To use the curl command to hit the GitLab projects trigger API, similar to Jenkins:
Simply supply the trigger token, which you create in the Project / CI CD / Trigger section of GitLab, and specify the ref, which is a branch name or tag.
Example:
curl -X POST \
-F token=4785b192773907c280845066093s93 \
-F ref=master \
https://gitlab.com/api/v4/projects/19577683/trigger/pipeline
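You can also pass CI variables to the triggered pipeline using the variables[] form fields (the variable name DEPLOY_ENV below is just an example):
curl -X POST \
-F token=4785b192773907c280845066093s93 \
-F ref=master \
-F "variables[DEPLOY_ENV]=staging" \
https://gitlab.com/api/v4/projects/19577683/trigger/pipeline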

How to get SonarQube results back to CodeBuild

I've seen many discussions online about Sonar webhooks to send scan results to Jenkins, but as a CodePipeline acolyte, I could use some basic help with the steps to supply Sonar scan results (e.g., quality-gate pass/fail status) to the pipeline.
Is the Sonar webhook the right way to go, or is it possible to use Sonar's API to fetch the status of a scan for a given code project?
Our code is in Bitbucket. I'm working with the AWS admin who will create the CodePipeline that fires when code is pushed into the repo. sonar-scanner will be run, and then we'd like the pipeline to stop if the quality does not pass the Quality Gate.
If I were to use a Sonar webhook, I imagine the value for host would be, what, the AWS instance running the CodeBuild?
Any pointers, references, examples welcome.
I created a PowerShell script to use with Azure DevOps that could possibly be migrated to a shell script that runs in the CodeBuild activity:
https://github.com/michaelcostabr/SonarQubeBuildBreaker
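If you go the shell route, a minimal sketch (assuming jq is available in the build image and that SONAR_HOST and SONAR_TOKEN are set) could poll the SonarQube web API after sonar-scanner finishes and fail the build step when the gate is not OK:
# The scanner writes the compute-engine task id into report-task.txt.
CE_TASK_ID=$(grep ceTaskId .scannerwork/report-task.txt | cut -d= -f2)
# Wait for SonarQube to finish processing the analysis report.
while :; do
  STATUS=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_HOST/api/ce/task?id=$CE_TASK_ID" | jq -r .task.status)
  [ "$STATUS" = "PENDING" ] || [ "$STATUS" = "IN_PROGRESS" ] || break
  sleep 5
done
# Look up the quality gate for this analysis; a non-OK gate fails the step (and the pipeline).
ANALYSIS_ID=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_HOST/api/ce/task?id=$CE_TASK_ID" | jq -r .task.analysisId)
GATE=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_HOST/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r .projectStatus.status)
echo "Quality gate: $GATE"
[ "$GATE" = "OK" ] || exit 1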

Trigger deployment button in Jenkins pipeline

I'm setting up a Continuous Delivery pipeline for my team with Jenkins. As a final step, we want to deploy to AWS.
I came across this while searching:
The last step is a button you can click to trigger the deployment. Very nice! However, I searched through the Jenkins plugins page, but I don't think it is there (or it is under a vague name).
Any ideas what it could be?
I'm not sure about the specific plugin you are looking for, but there is a Jenkins plugin for CodeDeploy, which can automatically create a deployment as a post-build action. See: https://github.com/awslabs/aws-codedeploy-plugin
It really depends on what kind of requirements you have on the actual deployment procedure. One thing to keep in mind if you use infrastructure as code to set up your pipelines automatically (e.g., through JobDSL or Jenkins Job Builder) is that the particular plugins must be supported. For that reason it can sometimes be more convenient to just script your deployments instead of relying on plugins. I've implemented multiple deployment jobs from Jenkins to AWS using plain AWS CLI commands, e.g., triggering CloudFormation creation/updates.
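For example, a scripted deploy stage can be as simple as a single CLI call (the stack and template names below are placeholders):
# Create or update the stack directly from the Jenkins job.
aws cloudformation deploy \
  --stack-name my-app-stack \
  --template-file template.yaml \
  --capabilities CAPABILITY_IAM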
It turns out that there is a button to trigger an operation in the plugin. It was hard to spot because the UI of the plugin was redesigned and the button became smaller.