Getting the last commit from AWS CodeCommit for using the GetDifferences() call - amazon-web-services

I am trying to get the names of the folders of code that have changed in an AWS CodeCommit repository. I can achieve what I want by using the following Git command; however, I am not allowed to use that for my project:
git diff --name-only HEAD HEAD~1
I am only allowed to use an AWS SDK to perform this functionality, and per the docs you must explicitly pass the beforeCommitSpecifier and afterCommitSpecifier fields. I have tried passing the following arguments, and they do not work:
response = client.get_differences(
    repositoryName="My-repo-name",
    beforeCommitSpecifier="HEAD~1",
    afterCommitSpecifier="HEAD"
)
The above code does not work because get_differences() does not accept HEAD~1. Is there another way to obtain the beforeCommitSpecifier?
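For illustration, since the API only accepts explicit commit specifiers, one possible approach (an untested sketch; the repository and branch names are placeholders) is to resolve the branch tip and its first parent with get_branch and get_commit, then pass those commit IDs:

# Hedged sketch: resolve HEAD and HEAD~1 to concrete commit IDs first.
import boto3

client = boto3.client("codecommit")

# Tip commit of the branch (the equivalent of HEAD)
head_commit = client.get_branch(
    repositoryName="My-repo-name", branchName="main"
)["branch"]["commitId"]

# First parent of that commit (the equivalent of HEAD~1)
parent_commit = client.get_commit(
    repositoryName="My-repo-name", commitId=head_commit
)["commit"]["parents"][0]

response = client.get_differences(
    repositoryName="My-repo-name",
    beforeCommitSpecifier=parent_commit,
    afterCommitSpecifier=head_commit,
)

# Changed paths, roughly what `git diff --name-only` prints
changed = [d["afterBlob"]["path"] for d in response["differences"] if "afterBlob" in d]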

Related

AWS CodeBuild/CodePipeline user input parameters

I'm new to AWS and transitioning from Azure. I would like to create a pipeline in CodePipeline which asks the user for input (for example: the user needs to input a value for the variable "hello"), and uses that input to run a CodeBuild project. In Azure DevOps this was quite easy to define in the pipeline YML specification, but I can't seem to find a way to easily do this in AWS, or am I missing something?
AWS CodePipeline does not support this feature currently. What you can do is pass this parameter in your commit message (if the pipeline triggers on commits to branches) or in your Git tag (if the pipeline triggers on a Git tag push).
example:
commit message: my commit message [my_var]
git tag: my_var-1.0.0
Then, in your buildspec.yml file, collect the commit message or tag and check whether it contains your required parameters. If so, execute the next commands; otherwise, exit the script.
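For illustration, a minimal untested sketch of that check in Python, assuming the build has access to the .git metadata and that the marker is the literal string [my_var]:

# Hedged sketch: fail the build step unless the commit message contains the marker.
import subprocess
import sys

message = subprocess.check_output(["git", "log", "-1", "--pretty=%B"], text=True)

if "[my_var]" not in message:
    print("Required parameter not found in commit message; exiting.")
    sys.exit(1)

print("Parameter found; continuing with the build.")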

AWS CDK - Multiple Stacks - Parameters for the location of Lambda code are not found

I'm using CDK to set up a CI/CD pipeline. I currently have code being pulled from a Git repository into the pipeline. There are then two builds: one that pulls out the code for a Lambda and builds an artifact for it, and a second that issues cdk synth to construct the Lambda framework (including a nested bucket and DynamoDB table).
It then heads to a deploy stage, but fails because it can't find the parameters for the location of the Lambda code.
I've been using this example: https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html
The only differences from this example are that I'm using Python for all of it and, due to known future needs, the Lambdas are in a parallel directory to the stack code:
|-Lambdas
|--Lambda1
|---Lambda1Code
|--Lambda2
|---Lambda2Code
|-CDKStacks
|--LambdaCreationStack
|--PipelineCreationStack
|--app.py
Everything runs up until deploy, where it fails with the error "The following CloudFormation Parameters are missing a value:" and then lists the BucketName and ObjectKey.
I assigned those as overrides as per the above link:
admin_permissions=True,
parameter_overrides=dict(
    lambda_code.assign(
        bucket_name=lambda_location.bucket_name,
        object_key=lambda_location.object_key,
        object_version=lambda_location.object_version
    )
),
as part of the pipeline's CloudFormationCreateUpdateStackAction, and passed the code just like in the example from the Lambda stack to the pipeline stack. But every time the Lambda stack attempts to deploy, the parameters for the location of the code 'do not exist'.
I've tried overriding the parameters, but since they are created dynamically inside the pipeline I am hesitant to go further (and my attempts didn't work anyway). I've tried a bunch of different stack/nested stack/single stack configurations but haven't had a success yet.
Thoughts?
This basically boils down to the fact that CodeUri in the CloudFormation template will automatically get the S3 bucket appended if your CodeUri starts with ./
So you have two options.
Option 1: In your pipeline, output your artifact as normal; just pass the whole repo from the CodeBuild into the CodeDeploy. The CodeDeploy can pick up the artifact naturally and will automatically append the S3 URL to it.
If you're using Python, however, you MUST be aware that starting from a Lambda directory deeper in the tree means that the Python imports expect that directory to be a root directory. In other words, if you were in Lambdas/Lambda1 and wanted to import a file that exists in the Lambda1 directory, then for it to work on AWS Lambda the import would need to be just the file name, ignoring the rest of the path.
This means that coding can be difficult, and running unit tests can be difficult as well. You'll want to add all the individual Lambda folders (and their paths from the root) to the PYTHONPATH environment variable of your CodeBuild instance so the unit tests know where to look (and add a .env file to your IDE as well to handle this locally).
Option 2: You use CDK and you cdk synth the stack you want to deploy. This creates a cdk.out folder with a bunch of asset zips in it plus the stack template (a JSON file). You adjust your artifact output in the CodeBuild to output the cdk.out folder, and the asset zips are automatically (thanks to CDK) substituted into the CodeUri locations in the template, which is also synthed automatically. Once you know what the template's name is, it's easy to set the CodeDeploy to look for that template name, and it will find the asset zips individually for each Lambda. A rough sketch of this wiring follows.
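For illustration only, here is a hedged sketch of option 2 in CDK v1-style Python (matching the linked example); the artifact, template, and stack names are assumptions:

# Hedged sketch: point the CloudFormation deploy action at the template that
# cdk synth wrote into the cdk.out artifact. Names below are placeholders.
from aws_cdk import aws_codepipeline as codepipeline
from aws_cdk import aws_codepipeline_actions as cpactions

cdk_build_output = codepipeline.Artifact("CdkBuildOutput")  # artifact containing cdk.out

deploy_action = cpactions.CloudFormationCreateUpdateStackAction(
    action_name="Lambda_CFN_Deploy",
    # template synthesized by `cdk synth`; the actual file name depends on your stack id
    template_path=cdk_build_output.at_path("LambdaStack.template.json"),
    stack_name="LambdaDeploymentStack",
    admin_permissions=True,
)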

Accessing the username / committer's email from a CodeBuild build pipeline

As part of my AWS Codebuild pipeline, I am sending a Slack notification that includes the commit ID, which I obtain from the environment variable CODEBUILD_RESOLVED_SOURCE_VERSION as documented here: https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html
This is good, but I also want to access the name or email of the person who made the commit.
How can I possibly obtain that in the same way as I obtain the CODEBUILD_RESOLVED_SOURCE_VERSION?
CodeBuild webhook-triggered builds include the .git metadata. You should be able to retrieve this using the Git CLI, e.g.:
git log -1 --format="%an <%ae>"
Which gives something like:
John Doe <jdoe@example.com>
The aws/codebuild/standard Docker image comes with Git pre-installed.
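If you want that value inside a build script (for example, to include it in the Slack payload), a minimal untested sketch in Python, assuming the .git metadata is present in the build, could be:

# Hedged sketch: capture the author of the most recent commit,
# e.g. "John Doe <jdoe@example.com>", for use in a notification.
import subprocess

author = subprocess.check_output(
    ["git", "log", "-1", "--format=%an <%ae>"], text=True
).strip()
print(author)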

Can I have terraform keep the old versions of objects?

I'm new to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket; this is part of an app deploy. I'm going to be changing the package for each deploy, and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But after running this for a future deploy, I will want to upload version02 and then version03, etc. Terraform replaces the old zip with the new one, which is expected behavior.
But is there a way to have Terraform not destroy the old version? Is this a supported use case, or is this not how I'm supposed to use Terraform? I wouldn't want to force this with an ugly hack if Terraform doesn't have official support for doing something like what I'm trying to do here.
I could of course just call the S3 API via a script, but it would be great to have this defined with the rest of the Terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
Currently, you tell Terraform to manage one aws_s3_bucket_object, and Terraform takes care of its whole life cycle, meaning Terraform will also replace the file if it sees any changes to it.
What you are maybe looking for is the null_resource. You can use it to run a local-exec provisioner to upload the file you need with a script. That way, the old file won't be deleted, as it is not directly managed by Terraform. You'd still be calling the API via a script, but the whole process of uploading to S3 would still be included in your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
depends_on = ["<any resource that should already be created before upload>"]
...
triggers = ["<A resource change that must have happened so terraform starts the upload>"]
provisioner "local-exec" {
command = "<command to upload local package to s3>"
}
}
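As a purely illustrative sketch, the script that local-exec invokes could be a small boto3 upload like the one below (the bucket, key, and file names are placeholders, and an equivalent aws s3 cp command would work just as well):

# upload_to_s3.py - hedged sketch of a script a local-exec provisioner might call.
# Bucket, key, and file names below are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="version02.zip",        # local package produced by the build
    Bucket="mybucket-app-versions",  # bucket from the question
    Key="version02.zip",             # new key, so older versions are left untouched
)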

Updating AWS Lambda with New Zip File?

So I've created an AWS Lambda function using the AWS CLI. I did this by running the command:
aws lambda create-function
with the argument
--zip-file fileb://file-path/zipFile.zip
I then wanted to make a change to the source code, so I made a new zip file, but the Lambda still executes with the old zip file's source code. Thus I tried to run the same command again but got the following error:
Function already exist: FunctionName
So either I just have to abandon that function and create a new one with the new zip file, or there's some way for me to update the existing function to use the new zip file's code.
Is there a way for me to do this update, and if so, how?
create-function, as the name implies, creates functions. It does not update them with new code. For that you need update-function-code. This also takes a --zip-file argument.
While this updates the code, you may also need to publish a new version of the function for the changes to take effect. This can be done by adding the --publish argument to update-function-code, or as a separate step with the publish-version command.
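For completeness, the same update can also be scripted with an SDK; a minimal boto3 sketch, reusing the function name and zip path from the question as placeholders:

# Hedged sketch: upload new code to an existing function and publish a new version.
import boto3

client = boto3.client("lambda")

with open("file-path/zipFile.zip", "rb") as f:
    client.update_function_code(
        FunctionName="FunctionName",  # placeholder
        ZipFile=f.read(),
        Publish=True,  # equivalent of the --publish flag
    )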