I'm trying to find the right condition to run a job only after a PR has been merged to master.
I tried the following, but the job still did not run after the merge:
condition: and(succeeded(), variables['System.PullRequest'], 'PullRequest')
There isn't really one. After the merge completes, it's just seen as any other push to master.
You can probably use the REST API against Azure Repos to find whether that commit is associated to a pull request, but you only get the pull request context for the validation builds, not for the build that triggers on the completed merge result.
You can define a variable named, for example, isPullRequest:
variables:
  isPullRequest: ${{ eq(variables['Build.Reason'], 'PullRequest') }}
Then in the condition of the job that you want to skip in Pull Requests:
condition: and(succeeded(), eq(variables.isPullRequest, 'false'))
Use the condition below:
condition: and(succeeded(), startsWith(variables['Build.SourceVersionMessage'], 'Merge pull request'))
I use the condition below to run steps only when merging into the master branch:
condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
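Putting the two together, here is a minimal sketch of a job that runs only when the completed merge lands on master and not for PR validation builds (the job name and step are placeholders):

jobs:
- job: post_merge_job   # hypothetical job name
  # Run only for pushes to master (i.e. the completed merge), not for PR validation builds
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  steps:
  - script: echo "This runs only after a PR has been merged to master"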
I'm new to AWS and transitioning from Azure. I would like to create a pipeline in CodePipeline which asks the user for input (for example: the user needs to input a value for the variable "hello"), and uses that input to run a CodeBuild project. In Azure DevOps this was quite easy to define in the pipeline YML specification, but I can't seem to find a way to easily do this in AWS, or am I missing something?
AWS CodePipeline does not support this feature currently. What you can do is pass this parameter in your commit message (if the pipeline triggers on commits to branches) or in your Git tag (if the pipeline triggers on a Git tag push).
example:
commit message: my commit message [my_var]
git tag: my_var-1.0.0
Then, in your buildspec.yml file, collect the commit message or tag and check whether it contains your required parameter. If so, execute the next commands; otherwise exit the script.
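For example, a rough buildspec.yml sketch along these lines, assuming the build pulls the source directly from Git (so git log is available) and using the [my_var] marker from the example above:

version: 0.2

phases:
  build:
    commands:
      # Read the message of the commit that triggered the build
      - COMMIT_MSG=$(git log -1 --pretty=%B)
      - echo "Commit message: $COMMIT_MSG"
      # Continue only when the marker is present, otherwise fail the build
      - |
        if echo "$COMMIT_MSG" | grep -q "\[my_var\]"; then
          echo "Marker found, continuing"
        else
          echo "Marker not found, stopping" && exit 1
        fi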
I would like to conditionally execute a certain stage in AWS CodePipeline depending on whether I have put a certain file in a repo location. So, if I put "some_file.txt" in a certain location in the repo, I want CodePipeline to check for the existence of this file and, if it's there, continue on to deploy the code to production; otherwise, stop at that stage.
With this I would like to avoid a manual approval action and control the release process by committing a file. Is this possible, and what would be the best practice?
I think you could create a lambda action for that:
Invoke an AWS Lambda function in a pipeline in CodePipeline
The lambda function can access the input artifact, and check if your file of interest is there or not.
Depending on the outcome of the check, the function will call either put_job_success_result or put_job_failure_result to continue or stop the pipeline.
You can use the buildspec file to check whether the needed file is present. If not, you can execute the stop-pipeline-execution command: https://docs.aws.amazon.com/cli/latest/reference/codepipeline/stop-pipeline-execution.html
The required arguments can be fetched from environment variables, and one more thing to note is to give that stage adequate permissions to be able to execute the command.
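A rough buildspec sketch of that idea, assuming the pipeline name and execution ID are passed to the CodeBuild action as environment variables (for example via the pipeline's #{codepipeline.PipelineExecutionId} variable), and that the build role is allowed to call codepipeline:StopPipelineExecution:

version: 0.2

phases:
  build:
    commands:
      - |
        # Continue only if the release marker file exists in the source
        if [ -f "some_file.txt" ]; then
          echo "Release file found, continuing to deploy"
        else
          echo "Release file missing, stopping the pipeline"
          aws codepipeline stop-pipeline-execution \
            --pipeline-name "$PIPELINE_NAME" \
            --pipeline-execution-id "$EXECUTION_ID" \
            --abandon \
            --reason "some_file.txt not present"
          exit 1
        fi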
I'm able to filter by source but unable to filter by branch; it shows only a few branches for all sources.
I used it like
Source:[source_name] OR Branch:[branch_name]
in the history tab.
Can anyone tell me how to filter based on the branch?
I think that is normal; I have the same thing on my side.
My history:
2 branches: Master and Open-source. However, when I filter on the branch, even master, I get nothing:
Why? Because I don't have a trigger specifically on the master or open-source branch; my trigger is on any branch.
Thus the build branch param is not set, and thus you can't search/filter on it. On other projects, where I have a branch filter on the trigger, the history is correct and I can filter on the branch name.
An alternative is to use the gcloud command and the filter param, like this:
gcloud builds list --filter="substitutions.BRANCH_NAME=<YourBranchName>"
More detail on the filter capabilities and expressions is available in the gcloud documentation.
How can I configure CodePipeline to be triggered for Pull Requests being opened, edited or merged?
Here is a Terraform configuration:
resource "aws_codepipeline_webhook" "gh_to_codepipeline_integration" {
name = "gh_to_codepipeline_integration"
authentication = "GITHUB_HMAC"
target_action = "Source"
target_pipeline = aws_codepipeline.mycodepipeline.name
authentication_configuration {
secret_token = var.github_webhook_secret
}
// accept pull requests
// Is there a way to filter on the PR being closed and merged? This isn't it...
filter {
json_path = "$.action"
match_equals = "closed"
}
}
CodePipeline is set up to accept webhook events that satisfy all of the conditions specified in the filters, which in this case correspond to pull request events.
Note that the GitHub documentation states for the action field of a PullRequestEvent (my emphasis in bold):
The action that was performed. Can be one of assigned, unassigned, review_requested, review_request_removed, labeled, unlabeled, opened, edited, closed, ready_for_review, locked, unlocked, or reopened. If the action is closed and the merged key is false, the pull request was closed with unmerged commits. If the action is closed and the merged key is true, the pull request was merged. While webhooks are also triggered when a pull request is synchronized, Events API timelines don't include pull request events with the synchronize action.
It seems like I need to filter on both $.action == closed and $.pull_request.merged == true, but it doesn't look like I can do both. If I just filter on $.action == closed, then my pipeline will rebuild if PRs are closed without merging. Is this an oversight on my part, or is CodePipeline not as flexible in its triggers as CodeBuild projects?
For pull requests being opened/updated, because CodePipeline's Git integrations require a branch name, this is not natively supported as the branch name is variable, unless you open PRs on long running branches like dev, qa etc (e.g. if you are using a Gitflow-based workflow).
The way that we support PRs based from dynamic branches is use CodeBuild for the build/unit test stage of our workflow, and then package up the repository and build artefacts to S3. From there we trigger Deployment pipelines for integration and acceptance environments using S3 artefact as the source. Using CodePipeline for deployments is powerful as it automatically ensures only one stage can execute at a time, meaning only one change for a given application is going through a given environment at any one time.
This approach is however quite complex and requires quite a bit of Lambda magic mixed with SQS FIFO queues to deal with concurrent PRs (this is to overcome the superseding behaviour of CodePipeline), but it's quite a powerful pattern. We also use GitHub reviews to do things like trigger acceptance stage, and auto-approve manual approval steps in CodePipeline.
Once you are ready to merge the PR, we just use normal CodePipeline triggered off master to deploy to production - one thing you also need to do is ensure you use the artefact that was built and tested on the PR.
I'm not sure why you want to trigger the whole pipeline when a pull request is opened. The way I usually set things up is:
CodePipeline watches the master branch and triggers on a push to it
It will run some builds in CodeBuild
If the builds pass it runs a deploy
Then we have CodeBuild which gets triggered by both CodePipeline and also GitHub pull requests:
resource "aws_codebuild_webhook" "dev" {
project_name = aws_codebuild_project.dev.name
filter_group {
filter {
type = "EVENT"
pattern = "PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED"
}
}
}
Then you can use CodeBuild filters to choose when to trigger the build. The Terraform docs are also helpful.
In the deploy dacpac step in VSTS, you can set the step to run only based on custom conditions. The condition examples are based on VSTS build information, and I can't find any documentation on using conditions from a connected Azure subscription or dacpac metadata. On the conditions page, they have a version variable which seems like it might be useful, but I can't find other information about it.
Basically, when the dacpac step is triggered, I want to check metadata against existing data, conditionally run the build step, and update metadata. Is this possible through a VSTS build step?
Yes, it is possible. You can add a user-defined variable (such as a variable result with default value 0) in the VSTS build definition, and use the value 1 to run the dacpac step and the value 0 to skip it.
Detailed steps are below:
Add a PowerShell task with two operations before the dacpac step:
Check if there are new changes to the existing data.
If the metadata is only stored in Azure, you can refer to this way of connecting to Azure in PowerShell. If the metadata is also stored in the repository (such as a Git repo) you build with, you can also check for the update in the repository.
Set the result variable value based on whether the metadata has been updated or not.
If the data is updated, change the result variable to the value 1:
Write-Host ("##vso[task.setvariable variable=result]1")
Otherwise, do not change the value (keep it at 0).
Since the data is managed in Git, you can check whether the data has been updated in the Git repo. If the data has changed, set the variable result to 1. A detailed PowerShell script is below:
# List the files changed by the most recent commit
$files = $(git diff HEAD HEAD~1 --name-only)
echo "changed files as below: $files"
# If the tracked file changed, set the 'result' variable to 1
if ($files -contains 'filename') {
    Write-Host ("##vso[task.setvariable variable=result]1")
}
Set conditions for the dacpac step:
In the task, select Custom conditions for Run this task. If you want to run this task only when the build is succeeding and the result variable is 1, you can use the expression:
and(succeeded(), eq(variables['result'], '1'))
Now, if result has the value 0, the dacpac step will be skipped; if result has the value 1, the dacpac step will be executed.
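If you are using a YAML pipeline instead of the classic editor, a rough sketch of the same pattern looks like this (the checked file name and the dacpac task are placeholders for whatever you actually use):

variables:
  result: '0'   # default: skip the dacpac step

steps:
- powershell: |
    # Hypothetical check: set 'result' to 1 when the metadata file changed in the last commit
    $files = $(git diff HEAD HEAD~1 --name-only)
    if ($files -contains 'data/metadata.json') {
      Write-Host "##vso[task.setvariable variable=result]1"
    }
  displayName: Check metadata changes

# Runs only when the check above set result to 1
- task: SqlAzureDacpacDeployment@1   # or whichever dacpac deployment task you use
  condition: and(succeeded(), eq(variables['result'], '1'))
  # ... task inputs (connection, dacpac path, etc.) ...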