Accessing username / committer's email from CodeBuild build pipeline - amazon-web-services

As part of my AWS Codebuild pipeline, I am sending a Slack notification that includes the commit ID, which I obtain from the environment variable CODEBUILD_RESOLVED_SOURCE_VERSION as documented here: https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html
This is good, but I also want to access the name or email of the person who made the commit.
How can I obtain that in the same way I obtain CODEBUILD_RESOLVED_SOURCE_VERSION?

CodeBuild webhook-triggered builds include the .git metadata. You should be able to retrieve this using the Git CLI, e.g.:
git log -1 --format="%an <%ae>"
Which gives something like:
John Doe <jdoe@example.com>
The aws/codebuild/standard Docker image comes with Git pre-installed.
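For example, here is a minimal buildspec-style sketch of wiring that into the Slack notification; the SLACK_WEBHOOK_URL variable and the message format are assumptions, not something CodeBuild provides:
# Capture the author of the resolved commit and post it to Slack (sketch)
COMMIT_AUTHOR="$(git log -1 --format='%an <%ae>')"
MESSAGE="Build for commit ${CODEBUILD_RESOLVED_SOURCE_VERSION} by ${COMMIT_AUTHOR}"
# SLACK_WEBHOOK_URL is a hypothetical env var holding your incoming-webhook URL
curl -s -X POST -H 'Content-Type: application/json' -d "{\"text\": \"${MESSAGE}\"}" "${SLACK_WEBHOOK_URL}"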

Related

Getting the last commit from AWS CodeCommit for using the GetDifferences() call

I am trying to get the names of the folders of code that has changed in an AWS CodeCommit repository. I can achieve what I want by using the following Git command; however, I am not allowed to use that for my project:
git diff --name-only HEAD HEAD~1
I am only allowed to use an AWS SDK to perform this functionality. And per the docs, you must explicitly pass the beforeCommitSpecifier and afterCommitSpecifier fields. I have tried passing the following args and they do not work:
response = client.get_differences(
    repositoryName="My-repo-name",
    beforeCommitSpecifier="HEAD~1",
    afterCommitSpecifier="HEAD"
)
The above code does not work because the API does not accept HEAD~1. Is there another way to obtain the beforeCommitSpecifier?
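For reference, one way around this is to resolve the branch tip and its first parent into explicit commit IDs first. Here is a rough CLI sketch of that idea (the repository and branch names are placeholders; the equivalent boto3 calls are get_branch, get_commit and get_differences):
# Resolve the branch tip, then take its first parent as the "before" commit (sketch)
HEAD_COMMIT=$(aws codecommit get-branch --repository-name My-repo-name --branch-name main --query 'branch.commitId' --output text)
BEFORE_COMMIT=$(aws codecommit get-commit --repository-name My-repo-name --commit-id "$HEAD_COMMIT" --query 'commit.parents[0]' --output text)
aws codecommit get-differences --repository-name My-repo-name --before-commit-specifier "$BEFORE_COMMIT" --after-commit-specifier "$HEAD_COMMIT" --query 'differences[].afterBlob.path'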

How to get Commit ID of secondary source in CodePipeline

I have a CodePipeline where, in one of the stages, I use CodeBuild to collect the commit IDs of multiple sources.
I can get the commit ID of the primary source using the CODEBUILD_RESOLVED_SOURCE_VERSION environment variable for the CodeCommit source. How do I get the commit ID of the secondary source? Please help.
You can get the commit IDs for all your source actions using the list_pipeline_executions command. It provides you with a list of CodePipeline execution summaries, where the first result is the latest execution. Each summary has a sourceRevisions attribute which looks like this for GitHub sources:
{
    "actionName": "Action name",
    "revisionId": "Commit ID",
    "revisionSummary": "Commit Message",
    "revisionUrl": "Link to the commit details page"
}
If your pipeline is likely to be running concurrently, make sure you get the current execution ID first so things do not get mixed up. You can do this with a one-liner in CodeBuild as suggested here.
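A rough CLI sketch of that lookup (the pipeline name is a placeholder; the same call is list_pipeline_executions in the SDKs):
# Source revisions of the most recent execution (sketch)
aws codepipeline list-pipeline-executions --pipeline-name my-pipeline --max-items 1 --query 'pipelineExecutionSummaries[0].sourceRevisions'
# For concurrent pipelines, filter on the current execution ID instead:
# --query "pipelineExecutionSummaries[?pipelineExecutionId=='<execution-id>'].sourceRevisions"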

AWS CodeBuild/CodePipeline user input parameters

I'm new to AWS and transitioning from Azure. I would like to create a pipeline in CodePipeline which asks the user for input (for example: the user needs to input a value for the variable "hello"), and uses that input to run a CodeBuild project. In Azure DevOps this was quite easy to define in the pipeline YML specification, but I can't seem to find a way to easily do this in AWS, or am I missing something?
AWS CodePipeline does not currently support this feature. What you can do is pass this parameter in your commit message (if the pipeline triggers on commits to branches) or in your Git tag (if the pipeline triggers on a Git tag push).
example:
commit message: my commit message [my_var]
git tag: my_var-1.0.0
Then, in your buildspec.yml file, collect the commit message or tag and check whether it contains your required parameters. If so, execute the next commands; otherwise, exit the script.
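A minimal sketch of that check for the commit-message case, assuming the build has access to the Git metadata (the [my_var] marker is just the example above):
# In the buildspec build commands (sketch)
COMMIT_MSG="$(git log -1 --pretty=%B)"
if echo "$COMMIT_MSG" | grep -q '\[my_var\]'; then
  echo "Parameter found, running the next commands"
else
  echo "Parameter not found, exiting"
  exit 1   # or exit 0 if the build should succeed without doing anything
fi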

AWS CDK - Multiple Stacks - Parameters for the location of Lambda Code is not found

I'm using CDK to set up a CI/CD pipeline. I currently have code coming from a Git source into the pipeline. There are then two builds: one that pulls out the code for a Lambda and builds an artifact for it, and a second that issues cdk synth to construct the Lambda framework (including a nested bucket and DynamoDB table).
Then it heads to a deploy stage, but fails because it can't find the parameters for the location of the Lambda code.
I've been using this example: https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html
The only differences from this example are that I'm using Python for all of it and, due to known future needs, the Lambdas are in a directory parallel to the stack code:
|-Lambdas
|--Lambda1
|---Lambda1Code
|--Lambda2
|---Lambda2Code
|-CDKStacks
|--LambdaCreationStack
|--PipelineCreationStack
|--app.py
Everything runs up until deploy where it fails with the error "The following CloudFormation Parameters are missing a value:" and then lists the BucketName and ObjectKey
I assigned those as overrides as per the above link:
admin_permissions=True,
parameter_overrides=dict(
    lambda_code.assign(
        bucket_name=lambda_location.bucket_name,
        object_key=lambda_location.object_key,
        object_version=lambda_location.object_version
    )
),
as part of the pipeline's CloudFormationCreateUpdateStackAction, and passed the code from the Lambda stack to the pipeline stack just like in the example. But every time the Lambda stack attempts to deploy, the parameters for the location of the code 'do not exist'.
I've tried overriding the parameters, but since they are created dynamically inside the pipeline I am hesitant to push further (and my attempts didn't work anyway). I've tried a bunch of different stack / nested stack / single stack configurations but haven't had a success yet.
Thoughts?
This basically boils down to the fact that the CodeUri in the CloudFormation template will automatically get the S3 bucket location filled in if your CodeUri starts with ./
So you have 2 options.
In your pipeline, output your artifact as normal; just pass the whole repo from CodeBuild into CodeDeploy. CodeDeploy can pick up the artifact naturally and will automatically fill in the S3 URL for it.
If you're using Python, however, you MUST be aware that packaging from a Lambda directory deeper in the tree means the Python imports expect that directory to be the root directory: if you were in Lambdas/Lambda1 and wanted to import a file that exists in the Lambda1 directory, then for it to work on AWS Lambda the import must be just the file name, ignoring the rest of the path.
This means that coding can be difficult, and running unit tests can be difficult as well. You'll want to add all the individual Lambda folders (and their paths from the root) to the PYTHONPATH env variable of your CodeBuild instance so the unit tests can resolve the imports (and add a .env file to your IDE as well to handle this locally).
You use CDK and you cdk synth the stack you want to deploy. This creates a cdk.out folder with a bunch of asset zips in it plus the stack template (a JSON file). You adjust your artifact output in CodeBuild to output the cdk.out folder, and the asset zips are automatically (thanks to CDK) substituted into the CodeUri locations in the likewise automatically synthesized template. Once you know what the template's name is, it's easy to set CodeDeploy to look for that template name, and it will find the asset zips individually for each Lambda.
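A rough buildspec-style sketch of that flow (the stack/template name is inferred from the directory listing above and may differ in your app):
# Synthesize the app; cdk.out will contain the asset zips plus the stack templates (sketch)
cdk synth
ls cdk.out/   # e.g. LambdaCreationStack.template.json plus asset.*.zip files
# Then set the CodeBuild artifact to the cdk.out folder (base-directory: cdk.out, files: '**/*')
# and point the CloudFormation deploy action at LambdaCreationStack.template.json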

Terraform: How to migrate state between projects?

What is the least painful way to migrate state of resources from one project (i.e., move a module invocation) to another, particularly when using remote state storage? While refactoring is relatively straightforward within the same state file (i.e., take this resource and move it to a submodule or vice-versa), I don't see an alternative to JSON surgery for refactoring into different state files, particularly if we use remote (S3) state (i.e., take this submodule and move it to another project).
The least painful way I’ve found is to pull both remote states local, move the modules/resources between the two, then push back up. Also remember, if you’re moving a module, don’t move the individual resources; move the whole module.
For example:
cd dirA
terraform state pull > ../dirA.tfstate
cd ../dirB
terraform state pull > ../dirB.tfstate
terraform state mv -state=../dirA.tfstate -state-out=../dirB.tfstate module.foo module.foo
terraform state push ../dirB.tfstate
# verify state was moved
terraform state list | grep foo
cd ../dirA
terraform state push ../dirA.tfstate
Unfortunately, the terraform state mv command doesn’t support specifying two remote backends, so this is the easiest way I’ve found to move state between multiple remotes.
Probably the simplest option is to use terraform import on the resource in the new state file location and then terraform state rm in the old location.
Terraform does handle some automatic state migration when copying/moving the .terraform folder around but I've only used that when shifting the whole state file rather than part of it.
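A minimal sketch of that import/remove approach (the resource address and bucket name are placeholders):
# In the new project: adopt the existing bucket into this state without recreating it (sketch)
terraform import aws_s3_bucket.bucket1 my-existing-bucket-name
# In the old project: drop the same resource from state without destroying it
terraform state rm aws_s3_bucket.bucket1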
As mentioned in a related Terraform Q -> Best practices when using Terraform
It is easier and faster to work with a smaller number of resources:
terraform plan and terraform apply both make cloud API calls to verify the status of resources.
If you have your entire infrastructure in a single composition, this can take many minutes (even if you have several files in the same folder).
So if you end up with a mono-dir containing every resource, it's never too late to start segregating them by service, team, client, etc.
Possible procedures to migrate Terraform states between projects / services:
Example Scenario:
Suppose we have a folder named common with all our .tf files for a certain project, and we decide to divide (move) our .tf Terraform resources into a new project folder named security. So we now need to move some resources from the common project folder to security.
Case 1:
If the security folder does not exist yet (which is the best scenario).
Back up the Terraform backend state content stored in the corresponding AWS S3 bucket (since it's versioned, we should be even safer).
With your console placed in the origin folder, in our case common, execute make init to be sure your local .terraform folder is synced with your remote state.
If the security folder does not exist yet (which should be true), clone (copy) the common folder to the destination name security and update the config.tf file inside this new cloned folder to point to the new S3 backend path (consider updating one account at a time, starting with the least critical one, and evaluate the results with terraform state list).
eg:
# Backend Config (partial)
terraform {
  required_version = ">= 0.11.14"
  backend "s3" {
    key = "account-name/security/terraform.tfstate"
  }
}
Inside our newly created security folder, run terraform init (here via make init, without removing the copied local .terraform folder, which was already generated and synced in step 2). This will generate a new copy of the resources' state in the new S3 path, asking interactively for confirmation. This is a safe operation since we haven't removed the resources from the old .tfstate path yet.
$ make init
terraform init -backend-config=../config/backend.config
Initializing modules...
- module.cloudtrail
- module.cloudtrail.cloudtrail_label
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Acquiring state lock. This may take a few moments...
Acquiring state lock. This may take a few moments...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "s3" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
...
Terraform has been successfully initialized!
...
Selectively remove the desired resources from each state (terraform state rm module.foo) in order to keep the right ones in the /common and /security paths. Moreover, you must carry out, in parallel, the necessary updates (add/remove) of the modules/resources in your .tf files in each folder to keep both your local code base declaration and your remote .tfstate in sync. This is a sensitive operation; please start by testing the procedure on the least critical single resource possible.
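For example, a minimal sketch of that step (module.cloudtrail is just the module from the init output above):
# In /common, after confirming the resource now lives in /security's state (sketch)
terraform state list | grep cloudtrail
terraform state rm module.cloudtrail
# ...and remove the matching module block from /common's .tf files (and add it in /security)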
As reference we can consider the following doc and tools:
https://www.terraform.io/docs/commands/state/list.html
https://www.terraform.io/docs/commands/state/rm.html
https://github.com/camptocamp/terraboard (apparently still not compatible with terraform 0.12)
Case 2:
If the security folder already exists and has its associated remote .tfstate in its AWS S3 path, you'll need to use a different sequence of steps and commands, possibly the ones referenced in the links below:
1. https://www.terraform.io/docs/commands/state/list.html
2. https://www.terraform.io/docs/commands/state/pull.html
3. https://www.terraform.io/docs/commands/state/mv.html
4. https://www.terraform.io/docs/commands/state/push.html
Ref links:
https://github.com/camptocamp/terraboard (apparently still not compatible with terraform 0.12)
https://medium.com/@lynnlin827/moving-terraform-resources-states-from-one-remote-state-to-another-c76f8b76a996
I use this script (it does not work from v0.12 onward) to migrate the state while refactoring. Feel free to adapt it to your needs.
src=<source dir>
dst=<target dir>

# Resource addresses to move between the two states
resources=(
  aws_s3_bucket.bucket1
  aws_iam_role.role2
  aws_iam_user.user1
  aws_s3_bucket.bucket2
  aws_iam_policy.policy2
)

# Pull both remote states to local files
cd $src
terraform state pull > /tmp/source.tfstate
cd $dst
terraform state pull > /tmp/target.tfstate

# Move each resource from the source state into the target state
for resource in "${resources[@]}"; do
  terraform state mv -state=/tmp/source.tfstate -state-out=/tmp/target.tfstate "${resource}" "${resource}"
done

# Push the updated states back to their remotes
terraform state push /tmp/target.tfstate
cd $src
terraform state push /tmp/source.tfstate
Note that terraform pull is deprecated from v0.12 (but not removed and still works), and terraform push does not work anymore from v0.12.
Important: The terraform push command is deprecated, and only works with the legacy version of Terraform Enterprise. In the current version of Terraform Cloud, you can upload configurations using the API. See the docs about API-driven runs for more details.
==================
Below are unrelated to the OP:
If you are renaming your resources in the same project.
For version <= 1.0: use terraform state mv ... (a minimal sketch follows below).
For version >= 1.1, use the moved statement described here or here.
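For the <= 1.0 case, a minimal sketch (the resource addresses are placeholders):
# Rename a resource address in state without destroying/recreating the real resource (sketch)
terraform state mv aws_s3_bucket.old_name aws_s3_bucket.new_name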
There are several other useful commands that I listed in my blog