I am developing a user flow for a project wherein a user can replace a file. In practice, this means we rapidly delete and replace S3 objects. I am running into a bug when performing the following operations on a bucket.
Step 1: Delete all S3 objects at a certain path: my_bucket/my_folder/*. The program uses the S3.bucket.objects.delete() method to perform this operation. When this step finishes, my_bucket/my_folder/ is no longer visible in the S3 console.
Step 2: Create a new set of S3 objects at the same path my_bucket/my_folder/*. We use the put_object() operation here.
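For reference, here is a minimal boto3 sketch of the two steps (the bucket name, prefix, and key are illustrative):

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("my_bucket")

# Step 1: delete every object under the prefix.
bucket.objects.filter(Prefix="my_folder/").delete()

# Step 2: immediately recreate objects at the same keys.
bucket.put_object(Key="my_folder/file1.txt", Body=b"new contents")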
However, I am noticing something weird when step 2 is performed. On an inconsistent basis (more often when steps 1 and 2 are done in rapid succession), the creation of new objects at my_bucket/my_folder/* in step 2 triggers the re-population of that "folder" with all of the old files that were originally deleted in step 1.
This does not appear to happen when waiting > 10 minutes between executing step 1 and step 2, which makes me believe it is a caching issue.
Is it possible to get around this behavior somehow?
I'm using CODEBUILD_BUILD_NUMBER in AWS CodeBuild to append the build number to the artifacts that are deployed from the build. After every major version release, we need to reset the build numbers.
For example, after v2.0.0-401 we want to start building v3.0.0-1, but I cannot find a way to reset the build number on the same CodeBuild project.
Any help is appreciated.
not finding a way to reset the build numbers on the same CodeBuild project
This is because you can't reset it; it's managed by AWS.
You can set up a new build project to start the count over if you want, or use a different way of tagging your builds that is not based on CODEBUILD_BUILD_NUMBER.
Yes, this functionality is not available out of the box, but it would definitely be useful.
For now my recommendation would be to keep your CUSTOM_BUILD_NUMBER in the SSM Parameter Store. CodeBuild has native integration with Parameter Store, providing an easy way to look up the value:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.env.parameter-store
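For example, a buildspec along these lines pulls the value into the build (the parameter name /my-app/custom-build-number and the artifact name are illustrative, and the increment step assumes the build role is allowed to call ssm:PutParameter):

env:
  parameter-store:
    CUSTOM_BUILD_NUMBER: /my-app/custom-build-number
phases:
  build:
    commands:
      - echo "Building my-app-$CUSTOM_BUILD_NUMBER.zip"
      # Bump the stored number for the next build.
      - aws ssm put-parameter --name /my-app/custom-build-number --value "$((CUSTOM_BUILD_NUMBER + 1))" --overwrite

Resetting after a major release is then just a matter of setting the parameter back to 1.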
Every time I roll the version number I just track it in my buildspec instead of having to store things anywhere else.
- $buildPrefix = "2.0.0."
- $resetBuildNumber = 8 # set this to the last build number used before the buildPrefix version update
- $currentBuild = $buildPrefix + ([int]$env:CODEBUILD_BUILD_NUMBER - $resetBuildNumber)
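For example, with $resetBuildNumber = 8, build number 409 comes out as 2.0.0.401; after cutting 3.0.0 you would set $buildPrefix = "3.0.0." and $resetBuildNumber to the last build number used.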
We have multiple release branches in our product (currently this is unavoidable). For this question, suppose we have two:
master
release/r-858
Both have classic CI builds. Now I want to replace them with YAML builds. Our requirement is simple - have two distinct build definitions pointing to a YAML script - one for master, one for release/r-858
At first I thought it was a trivial exercise:
Create YAML build script in master. Set the CI trigger to master.
Cherry-pick (never mind why not merge) to release/r-858 - set the CI trigger to release/r-858.
Not ideal, because the two scripts only differ in their CI trigger. But "I am learning, it is good enough for now", I said to myself.
However, this simple scheme does not work! The build I created for release/r-858 is triggered on changes in master!
I double-checked every build setting I know about - all looked correct.
Please, observe:
The master build
The release/r-858 build
Uh oh, look at that. It shows the YAML on the master branch! Well, maybe it is an innocent presentation bug? Let us check the branch I am supposed to build from:
Yup, the file is different - I am playing with the trigger trying to solve the very same problem this question is about. The original code had release/r-858 instead of $(Build.SourceBranch) as the CI trigger, but since it did not help I started playing with all kinds of trigger values.
To remove any doubt, here is the proof the branch corresponds to release/r-858:
C:\xyz\58 [arch/shelve/798914 ≡ +0 ~17 -0 !]> git lg -2
cfdb6a9a86a | (HEAD -> arch/shelve/798914, origin/arch/shelve/798914) Rename azure-pipelines-ci-58.yml to azure-pipelines-ci.yml and sync it with the master version (68 seconds ago) [Kharitonov, Mark] (2020-08-14 09:09:46 -0400)
a931e3bd96b | (origin/release/r-858, release/r-858) Merged PR 90230: 793282 Work Assignments Merge (28 minutes ago) [Mihailichenco, Serghei] (2020-08-14 12:02:20 -0400)
C:\xyz\58 [arch/shelve/798914 ≡ +0 ~17 -0 !]>
Anyway, more build properties:
The problem
So a developer pushed some code to master and now the release/r-858 build is running:
Why is this? One of our guys asked a similar question in the Microsoft Developer Community forum, but that thread does not make sense to me.
What am I doing wrong here?
Edit 1
Imagine a big enterprise monolithic application. It is deployed in production at version 858. At the same time, developers work on the next version and also hot fixes and service packs for the version already deployed in prod.
A change can be made only in master or only in release/r-858 or in both (not at the same time, though). Many teams are working at the same time on many different aspects of the application and hence QA has many pods where the application is deployed. As I have mentioned above - about 150 pods for the bleeding edge (master) and about the same amount for the already released code, because there is active work to test hot fixes and service packs.
I appreciate this arrangement is not ideal. It is this way not because we love it, but because one has to deal with decade-old decisions. We are working to change it, but it takes time.
Anyway, the current process is to have 2 build definitions (in reality there are more, for different reasons). So far we have used classic CI builds; now we want to migrate to YAML (which we already use for microservices, but not for the monolith).
Now I understand that we can have different release pipelines based on the same build definition, but with different branch filters.
And maybe we will. But I do not understand why it is wrong to have different build definitions here, given that each branch is a long living release branch.
Edit 2
You can ignore $(Build.SourceBranch) and imagine release/r-858 instead. The net result is exactly the same. In the scenario I describe above, code is committed to master, not release/r-858.
Edit 3
It is very confusing. Suppose I am creating a new YAML build. The dialog says "select YAML in any branch", but the point is that once selected, this branch becomes the default branch of the build. That is the branch we can see here:
If I have a single YAML file in the master branch, the build with the default branch release/r-858 cannot even use it, unless it is merged to release/r-858. I tried it - I:
created a new YAML build
selected the YAML file from the master branch
ran and right away cancelled the build
then went to edit the build and changed the branch of the build from master to release/r-858 - it allowed me to save the build, even though the YAML does not exist in that branch
But then when I tried to run the build again I got this:
An error occurred while loading the YAML build pipeline. File /Build/azure-pipelines-ci.yml not found in repository bla-bla-bla branch refs/heads/release/r-858 version 5893f559292e56cf6db48687fd910bd2916e3cef.
And indeed, looking at the raw build definition, the process section contains the YAML file path, but not the branch:
"process": {
"yamlFilename": "Build/azure-pipelines-ci.yml",
"type": 2,
"resources": {},
"target": null
},
The branch only appears in the repository section of the definition:
"repository": {
"defaultBranch": "refs/heads/release/r-858",
...
},
It is clear to me that a single build definition can be used to CI build many branches. But the model I need to implement is one build definition per release branch. I cannot have a single build definition for the following reasons:
Different release branches have different agent pools, because of the different development intensity. Remember, this is an on-prem Azure DevOps Server with self-hosted agents. Can we express this requirement with a single build definition?
Different build variable values, which we want to control without sending a Pull Request to the YAML file repository. How do you do it with a single build definition? For example, one of the variables controls the Major.Minor version. The values are different in each release branch.
So, I do not see any way to avoid multiple build definitions in our situation. The root cause is the release branches, but we cannot throw them away in the near future.
So, we have 2 build definitions. That forces us to have 2 YAMLs - one per branch - because a build definition with the default branch release/r-858 expects to find the YAML in that branch; otherwise we cannot trigger the build manually, which is a must even if the build has a CI trigger.
So, 2 build definitions, 2 YAMLs (one per branch). So far my hands were forced. But now I am told that the release branch build will be triggered by the master YAML just because the release branch build is linked to the same YAML file name, ignoring the default branch of the build!
Because this is what happens - a commit is checked in to master and the release branch build is invoked in addition to the master branch build! Both build definitions build exactly the same branch (master) using the master YAML script. But because the release branch build has a different set of variables, the end result is plain wrong.
This is not reasonable. I am going to create a dummy repo to reproduce it cleanly and post here.
Edit 4
As promised - a trivial reproduction. Given:
master branch build test-master-CI
release branch build test-r58-CI
Since having two build definitions necessarily means two YAMLs (one per branch), here they are:
C:\xyz\DevOps\Test [master ≡]> cat .\azure-pipelines.yml
trigger:
  branches:
    include:
    - master
name: $(BuildVersionPrefix).$(DayOfYear)$(Date:HH)
steps:
- script: echo master
C:\xyz\DevOps\Test [master ≡]> git co release/r-858
Switched to branch 'release/r-858'
Your branch is up to date with 'origin/release/r-858'.
C:\xyz\DevOps\Test [release/r-858 ≡]> cat .\azure-pipelines.yml
trigger:
  branches:
    include:
    - release/r-858
name: $(BuildVersionPrefix).$(DayOfYear)$(Date:HH)
steps:
- script: echo release/r-858
C:\xyz\DevOps\Test [release/r-858 ≡]>
Where BuildVersionPrefix = 59.0 for master and 58.3 for release/r-858
When I trigger each build manually I get this:
Now I commit a change to master. Lo and behold - both builds are triggered:
In both cases the YAML from the master branch is used. BUT the release branch definition sets BuildVersionPrefix = 58.3, and so the master build executed by the release branch build definition has a bogus version.
Is this really how the feature is supposed to work? That makes the CI YAML trigger useless for my scenario. Thank you Matt for helping me to realize that.
I think I get where the confusion comes from. When you are configuring the pipeline, you are specifying the branch (notice the description says the file in any branch) and the file name.
What you are doing is just duplicating the monitoring, though. If you were to really inspect it, I think you will see that when you push to the release branch, it isn't triggering the master YAML pipeline ... it is just triggering the release YAML steps a second time. That is because the pipeline is just monitoring changes to the repo and responding based on the YAML configuration. In this case, you pushed to release and it evaluated that there was a YAML that matched that trigger (the release branch's copy) and triggered both build definitions.
I verified this on a mocked-up pipeline. I had selected different branches on creation, but I believe the only thing that really impacts is the default branch it would use for scheduled builds. I created a simple echo statement in both of these, and it was using the release branch's YAML configuration.
I think if you really want to achieve the results you are expecting, you will want to use the override triggers that you define on the definition (in the pipeline settings UI) instead of relying on what is in the YAML trigger.
I had the same issue and Matt helped me solve this.
I'm only writing this because the only way to get this working for me was to create a build YAML file on one branch (with the correct configuration), then create the other YAML file on another branch, and then create the pipelines in the new shiny YAML editor within DevOps.
The key is, when in the "Configure" section of a new pipeline, select:
"Existing Azure Pipelines YAML file" which allows you to select a branch and a YAML file within that branch.
This allowed me to have the SystemOne branch build and test the system one site and the SystemTwo branch build and test the system two site.
I also added triggers inside the SystemOne.yml using a wildcard, e.g.
trigger:
  batch: true
  branches:
    include:
    - SystemOne/*
And the same for the SystemTwo.yml.
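For completeness, the SystemTwo.yml counterpart would look like this:

trigger:
  batch: true
  branches:
    include:
    - SystemTwo/*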
efx/
  ...
  aws_account/
    nonprod/
      account-variables.tf
      dev/
        account-variables.tf
        common.tf
        app1.tf
        app2.tf
        app3.tf
        ...
  modules/
    tf_efxstack_app1
    tf_efxstack_app2
    tf_efxstack_app3
    ...
In a given environment (dev in the example above), we have multiple modules (app1, app2, app3, etc.) which are based on individual applications we are running in the infrastructure.
I am trying to update the state of one module at a time (e.g. app1.tf). I am not sure how I can do this.
Use case: I would like only one module's launch configuration (LC) to be updated to use the latest AMI or security group.
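For concreteness, each appN.tf is essentially a single module call along these lines (the relative path and input names are illustrative):

module "app1" {
  # Points at the shared module for this application.
  source = "../../../modules/tf_efxstack_app1"

  # Inputs such as the AMI used by the launch configuration.
  ami_id = var.app1_ami_id
}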
I tried the -target option in terraform, but this does not seem to work, because it does not check the terraform remote state file.
terraform plan -target=app1.tf
terraform apply -target=app1.tf
Therefore, no changes take place. I believe this is a bug with terraform.
Any ideas how I can accomplish this?
Terraform's -target should be for exceptional use cases only, and you should really know what you're doing when you use it. If you genuinely need to regularly target different parts at a time, then you should separate your applications into different directories so you can easily apply a whole directory at a time.
This might mean you need to use data sources or rethink the structure of things a bit more, but it also means you limit the blast radius of any single Terraform action, which is always useful.
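As an aside, -target expects a resource or module address from the state, not a file name, which would explain why the commands above appeared to do nothing. Assuming app1.tf instantiates a module named app1, the invocation would look like:

terraform plan -target=module.app1
terraform apply -target=module.app1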
The MultiJob plugin is great and I want to use it for my build process, but there is one issue I have to solve first: There are three jobs A, B and C. SVN triggers jobs A and B (parallel execution) and job C starts when A and B have finished. Job C requires the artifacts from jobs A and B as input.
          +-> Job A (with A.zip) --+
Trigger --+                        +-> Job C (use artifacts A.zip and B.zip)
          +-> Job B (with B.zip) --+
Designing the workflow with the MultiJob plugin is easy, but I have no clue how to get the corresponding artifacts from jobs A and B in job C. Can I pass the build numbers to job C (buildNr(A) != buildNr(B))? Or is there a smarter way to solve the issue?
The multijob plugin sets the following environment variables per job (code):
<JOBNAME>_BUILD_NUMBER
<JOBNAME>_BUILD_RESULT
where JOBNAME is derived from the job's name with every character that is not a letter or digit replaced by _. Thus you can pass the build numbers as parameters to Job C:
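For example, assuming the upstream jobs are literally named "Job A" and "Job B" (so the prefixes become JOB_A and JOB_B), the MultiJob phase for Job C could pass predefined parameters such as:

A_BUILD_NUMBER=$JOB_A_BUILD_NUMBER
B_BUILD_NUMBER=$JOB_B_BUILD_NUMBER

Job C can then fetch A.zip and B.zip from those specific builds, e.g. with the Copy Artifact plugin's "Specific build" selector pointed at $A_BUILD_NUMBER and $B_BUILD_NUMBER.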
There's a workaround using EnvInject and a Groovy script:
https://issues.jenkins-ci.org/browse/JENKINS-20241