AWS CDK - Multiple Stacks - Parameters for the location of Lambda code are not found

I'm using CDK to set up a CI/CD pipeline. Currently the pipeline pulls code from a Git source. There are then two builds: one that pulls out the code for a Lambda and builds an artifact for it, and a second that issues cdk synth to construct the Lambda framework (including a nested bucket and DynamoDB table).
It then heads to a deploy stage, but fails because it can't find the parameters for the location of the Lambda code.
I've been using this example: https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html
The only differences from that example are that I'm using Python for all of it and, due to known future needs, the Lambdas are in a directory parallel to the stack code:
|-Lambdas
|--Lambda1
|---Lambda1Code
|--Lambda2
|---Lambda2Code
|-CDKStacks
|--LambdaCreationStack
|--PipelineCreationStack
|--app.py
Everything runs up until deploy, where it fails with the error "The following CloudFormation Parameters are missing a value:" followed by the BucketName and ObjectKey.
I assigned those as overrides, as per the linked example:
admin_permissions=True,
parameter_overrides=dict(
    lambda_code.assign(
        bucket_name=lambda_location.bucket_name,
        object_key=lambda_location.object_key,
        object_version=lambda_location.object_version
    )
),
as part of the pipeline's CloudFormationCreateUpdateStackAction, and passed the code from the Lambda stack to the pipeline stack just like in the example. But every time the Lambda stack attempts to deploy, the parameters for the location of the code 'do not exist'.
I've tried overriding the parameters, but since they are created dynamically inside the pipeline I am hesitant to take that further (and my attempts didn't work anyway). I've tried a number of different stack / nested stack / single stack configurations but haven't had a success yet.
Thoughts?

This basically boils down to the fact that CodeUri in the CloudFormation template will automatically get the S3 bucket appended when your CodeUri starts with ./
So you have two options.
Option 1: in your pipeline, output your artifact as normal, passing the whole repo from the CodeBuild step into the deploy step. The deploy step can pick up the artifact naturally and will automatically append the S3 URL to it.
If you're using Python, however, you must be aware that starting from a Lambda directory deeper in the tree means the Python imports treat that directory as the root. So if you were in Lambdas/Lambda1 and wanted to import a file that lives in the Lambda1 directory, then for it to work on AWS Lambda the import has to be just the file name, ignoring the rest of the path (see the sketch below).
This means that both coding and running unit tests can be awkward. You'll want to add all the individual Lambda folders (and their paths from the repo root) to the PYTHONPATH environment variable of your CodeBuild instance so the unit tests know where to import from (and add a .env file to your IDE as well to handle the same thing locally).
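As a rough illustration of that import behaviour (the file names handler.py and helpers.py are hypothetical, matching the Lambdas/Lambda1 layout above):

# Lambdas/Lambda1/Lambda1Code/handler.py -- hypothetical example
# When this function is packaged from Lambda1Code, that folder becomes the root
# of the deployment package, so a sibling module is imported by its bare name:
from helpers import do_work   # resolves on AWS Lambda if helpers.py sits next to this file

# From the repo root (e.g. in unit tests) the bare import only resolves if
# Lambdas/Lambda1/Lambda1Code is on PYTHONPATH; the fully qualified form
# "from Lambdas.Lambda1.Lambda1Code.helpers import do_work" would in turn break on Lambda.

def lambda_handler(event, context):
    return do_work(event)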
Option 2: you use CDK and you cdk synth the stack you want to deploy. This creates a cdk.out folder containing a bunch of asset zips plus the synthesized stack template (a JSON file). You adjust the artifact output of your CodeBuild step to ship the cdk.out folder, and the asset zips are automatically (thanks to CDK) substituted into the CodeUri locations in the likewise automatically synthesized template. Once you know the template's name, it's easy to point the deploy action at that template, and it will find the asset zips individually for each Lambda.
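A rough sketch of option 2 in Python, in case it helps (CDK v1 style; the construct IDs, artifact names and the LambdaCreationStack.template.json file name are placeholders I made up, and the snippet is assumed to live inside the pipeline stack's constructor):

from aws_cdk import aws_codebuild as codebuild
from aws_cdk import aws_codepipeline as codepipeline
from aws_cdk import aws_codepipeline_actions as cpactions

source_output = codepipeline.Artifact()
cdk_build_output = codepipeline.Artifact("CdkBuildOutput")

# CodeBuild project that runs cdk synth and ships the whole cdk.out folder
# (synthesized template plus asset zips) as its output artifact.
cdk_build = codebuild.PipelineProject(
    self, "CdkBuild",
    build_spec=codebuild.BuildSpec.from_object({
        "version": "0.2",
        "phases": {
            "install": {"commands": ["npm install -g aws-cdk", "pip install -r requirements.txt"]},
            "build": {"commands": ["cdk synth"]},
        },
        "artifacts": {"base-directory": "cdk.out", "files": ["**/*"]},
    }),
)

build_action = cpactions.CodeBuildAction(
    action_name="CdkSynth",
    project=cdk_build,
    input=source_output,
    outputs=[cdk_build_output],
)

# Deploy action pointed at the synthesized template inside cdk.out; the asset
# locations for each Lambda are already wired into that template.
deploy_action = cpactions.CloudFormationCreateUpdateStackAction(
    action_name="DeployLambdaStack",
    stack_name="LambdaCreationStack",
    template_path=cdk_build_output.at_path("LambdaCreationStack.template.json"),
    admin_permissions=True,
)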

Related

deploy terraform files that are stored locally

I wish to deploy infrastructure that is written as a Terraform module. The module is as follows:
module "my-module" {
count = var.env == "prod" ? 1 : 0
source = "s3::https://s3-us-east-1.amazonaws.com/my-bucket/my-directory/"
env = var.env
deployment = var.deployment
}
Right now this is in a my-module.tf file, and I am deploying it by running the usual terraform init, plan and apply commands (and passing in the relevant variables).
However, for my specific requirements, I wish to be able to deploy this only by running the terraform init, plan and apply commands (passing in the relevant variables), without having to store the module in a file on my own machine. I would rather have the module file stored remotely (e.g. in an S3 bucket) so other teams/users do not need to have the file on their own machines. Is there any way this Terraform could be deployed such that the module file is stored remotely and, for example, passed as an option when running the terraform plan and apply commands?
It's not possible. As explained in the Terraform docs, source must be a literal string, which means it can't be any dynamic variable.
You would have to develop your own wrapper around Terraform that does a simple find-and-replace of source placeholders with the actual values before you run terraform; a sketch of that idea follows.
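A minimal sketch of such a wrapper in Python (the __MODULE_SOURCE__ placeholder token and the my-module.tf.tpl template file are my own convention, not a Terraform feature):

#!/usr/bin/env python3
# tf_wrapper.py -- render my-module.tf from a template, then hand off to terraform.
import subprocess
import sys

TEMPLATE_FILE = "my-module.tf.tpl"   # contains: source = "__MODULE_SOURCE__"
OUTPUT_FILE = "my-module.tf"

def render(module_source: str) -> None:
    # Simple find-and-replace of the placeholder with the real source URL.
    with open(TEMPLATE_FILE) as f:
        rendered = f.read().replace("__MODULE_SOURCE__", module_source)
    with open(OUTPUT_FILE, "w") as f:
        f.write(rendered)

if __name__ == "__main__":
    # usage: python tf_wrapper.py <module-source-url> init|plan|apply [terraform args...]
    module_source, *tf_args = sys.argv[1:]
    render(module_source)
    subprocess.run(["terraform", *tf_args], check=True)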

cloudformation lambda function upload from s3 deployment package structure issues

I am using CloudFormation to create my Lambda function. I have opted to pull the code from S3.
However, it appears to create a nested structure when the Lambda function gets created, and I am unable to import my packages unless I move the Lambda handler and associated library packages up to the root level of the function.
CloudFormation value for the Code section:
Code:
  S3Bucket: youll_never_guess-bucket-12345
  S3Key: python_data_collector.zip
How it appears in the Lambda console (screenshots of the function layout and handler path omitted): the code ends up nested under a python_data_collector folder.
I've tried: python_data_collector/lambda.lambda_handler and python_data_collector.lambda.lambda_handler
Error message:
Unable to import module 'python_data_collector/lambda': No module named 'requests'"
The Python dependencies should reside at the root level of your Lambda deployment package. You can indeed point to a nested file as the entry point of your function, but that does not change how dependencies are resolved.
However, the structure of your Lambda code has nothing to do with where the zip file is located in the S3 bucket. Presumably, when you are creating the zip file, you are adding a folder at the root level which contains the code and dependencies. You should not have that extra folder in the zip: simply put the code (nested or not) and the dependencies (not nested) at the root of your zip package, as sketched below. Lambda will simply unzip the file and place the contents, as is, in your Lambda function.
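For illustration, a small Python sketch of building such a flat package (the build/ staging directory is hypothetical):

# build_package.py -- hypothetical sketch; assumes dependencies were installed with
#   pip install -r requirements.txt -t build/
# and the handler lives at build/python_data_collector/lambda.py
import os
import zipfile

BUILD_DIR = "build"

with zipfile.ZipFile("python_data_collector.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _dirs, files in os.walk(BUILD_DIR):
        for name in files:
            path = os.path.join(root, name)
            # arcname is relative to BUILD_DIR, so requests/ and
            # python_data_collector/lambda.py land at the root of the zip,
            # not under an extra wrapping folder.
            zf.write(path, arcname=os.path.relpath(path, BUILD_DIR))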

Conditionally execute stage in AWS Codepipeline

I would like to conditionally execute a certain stage in AWS CodePipeline depending on whether I put a certain file in a repo location. So, if I put "some_file.txt" in a certain location in the repo, I want CodePipeline to check for the existence of this file and, if it's there, continue on to deploy the code to production; otherwise, stop at that stage.
With this I would like to avoid a manual approval action and control the release process by committing a file. Is this possible, and what would be best practice?
I think you could create a lambda action for that:
Invoke an AWS Lambda function in a pipeline in CodePipeline
The lambda function can access the input artifact, and check if your file of interest is there or not.
Depending on the outcome of the check, the function will call either put_job_success_result or put_job_failure_result to continue or stop the pipeline.
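A rough Python sketch of such a Lambda action (the marker file name is just the example from the question, artifact handling is simplified, and it assumes the function's role can read the pipeline's artifact bucket; CodePipeline also passes temporary artifactCredentials in the event that could be used instead):

import io
import zipfile

import boto3

MARKER_FILE = "some_file.txt"

def lambda_handler(event, context):
    job = event["CodePipeline.job"]
    job_id = job["id"]
    codepipeline = boto3.client("codepipeline")
    try:
        # The input artifact is a zip stored in S3; download it and look for the marker file.
        s3_location = job["data"]["inputArtifacts"][0]["location"]["s3Location"]
        obj = boto3.client("s3").get_object(
            Bucket=s3_location["bucketName"], Key=s3_location["objectKey"]
        )
        with zipfile.ZipFile(io.BytesIO(obj["Body"].read())) as artifact:
            found = MARKER_FILE in artifact.namelist()

        if found:
            codepipeline.put_job_success_result(jobId=job_id)
        else:
            codepipeline.put_job_failure_result(
                jobId=job_id,
                failureDetails={
                    "type": "JobFailed",
                    "message": f"{MARKER_FILE} not found in source artifact, stopping release",
                },
            )
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )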
Alternatively, you can use the buildspec file of a CodeBuild stage to check whether the needed file is present. If it isn't, you can execute the stop-pipeline-execution command: https://docs.aws.amazon.com/cli/latest/reference/codepipeline/stop-pipeline-execution.html
The required arguments can be fetched from environment variables. One more thing to note is to give that stage of yours adequate permission(s) to be able to execute the command.
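If you prefer to make that call from Python instead of the CLI, here is a sketch with boto3 (PIPELINE_NAME and PIPELINE_EXECUTION_ID are placeholder environment variable names you would have to provide to the build yourself, e.g. via CodePipeline action variables):

import os

import boto3

MARKER_FILE = "some_file.txt"   # example file name from the question

def stop_if_marker_missing() -> None:
    # Marker present: let the pipeline continue.
    if os.path.exists(MARKER_FILE):
        return

    # Marker absent: stop the currently running execution of this pipeline.
    boto3.client("codepipeline").stop_pipeline_execution(
        pipelineName=os.environ["PIPELINE_NAME"],
        pipelineExecutionId=os.environ["PIPELINE_EXECUTION_ID"],
        abandon=False,
        reason=f"{MARKER_FILE} not found in the repository",
    )

if __name__ == "__main__":
    stop_if_marker_missing()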

Terraform: How to migrate state between projects?

What is the least painful way to migrate state of resources from one project (i.e., move a module invocation) to another, particularly when using remote state storage? While refactoring is relatively straightforward within the same state file (i.e., take this resource and move it to a submodule or vice-versa), I don't see an alternative to JSON surgery for refactoring into different state files, particularly if we use remote (S3) state (i.e., take this submodule and move it to another project).
The least painful way I’ve found is to pull both remote states local, move the modules/resources between the two, then push back up. Also remember, if you’re moving a module, don’t move the individual resources; move the whole module.
For example:
cd dirA
terraform state pull > ../dirA.tfstate
cd ../dirB
terraform state pull > ../dirB.tfstate
terraform state mv -state=../dirA.tfstate -state-out=../dirB.tfstate module.foo module.foo
terraform state push ../dirB.tfstate
# verify state was moved
terraform state list | grep foo
cd ../dirA
terraform state push ../dirA.tfstate
Unfortunately, the terraform state mv command doesn’t support specifying two remote backends, so this is the easiest way I’ve found to move state between multiple remotes.
Probably the simplest option is to use terraform import on the resource in the new state file location and then terraform state rm in the old location.
Terraform does handle some automatic state migration when copying/moving the .terraform folder around but I've only used that when shifting the whole state file rather than part of it.
As mentioned in a related Terraform Q -> Best practices when using Terraform
It is easier and faster to work with a smaller number of resources:
The commands terraform plan and terraform apply both make cloud API calls to verify the status of resources. If you have your entire infrastructure in a single composition, this can take many minutes (even if you have several files in the same folder).
So if you have ended up with a mono-directory containing every resource, it is never too late to start segregating them by service, team, client, etc.
Possible procedure to migrate Terraform state between projects / services:
Example scenario:
Suppose we have a folder named common with all our .tf files for a certain project, and we decide to divide (move) our .tf Terraform resources into a new project folder named security. We now need to move some resources from the common project folder to security.
Case 1:
If the security folder does not exist yet (which is the best scenario):
Back up the Terraform backend state content stored in the corresponding AWS S3 bucket (since it's versioned we should be even safer).
With your console placed in the origin folder, common in our case, execute make init to be sure your local .terraform folder is synced with your remote state.
Since the security folder does not exist yet (which should be true), clone (copy) the common folder under the destination name security and update the config.tf file inside this new cloned folder to point to the new S3 backend path (consider updating one account at a time, starting with the least critical one, and evaluating the results with terraform state list), e.g.:
# Backend config (partial)
terraform {
  required_version = ">= 0.11.14"

  backend "s3" {
    key = "account-name/security/terraform.tfstate"
  }
}
Inside our newly created security folder, run the init again (make init, without removing the copied local .terraform folder, which was already generated and synced in step 2). As a result it will interactively ask to copy the existing state and generate a new copy of the resource state under the new S3 path. This is a safe operation, since we haven't removed the resources from the old .tfstate path yet.
$ make init
terraform init -backend-config=../config/backend.config
Initializing modules...
- module.cloudtrail
- module.cloudtrail.cloudtrail_label
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Acquiring state lock. This may take a few moments...
Acquiring state lock. This may take a few moments...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "s3" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
...
Terraform has been successfully initialized!
...
Selectively remove the desired resources from each state (terraform state rm module.foo) in order to keep the desired ones under the /common and /security paths. Moreover, you must carry out, in parallel, the necessary updates (add/remove) of the modules/resources in the .tf files of each folder, to keep both your local code base declaration and your remote .tfstate in sync. This is a sensitive operation; please start by testing the procedure on the least critical single resource you can find.
As a reference we can consider the following docs and tools:
https://www.terraform.io/docs/commands/state/list.html
https://www.terraform.io/docs/commands/state/rm.html
https://github.com/camptocamp/terraboard (apparently still not compatible with terraform 0.12)
Case 2:
If the security folder already exists and has its associated remote .tfstate in its AWS S3 path, you'll need to use a different sequence of steps and commands, possibly the ones referenced in the links below:
1. https://www.terraform.io/docs/commands/state/list.html
2. https://www.terraform.io/docs/commands/state/pull.html
3. https://www.terraform.io/docs/commands/state/mv.html
4. https://www.terraform.io/docs/commands/state/push.html
Ref links:
https://medium.com/@lynnlin827/moving-terraform-resources-states-from-one-remote-state-to-another-c76f8b76a996
I use this script (it does not work from v0.12 onwards) to migrate state while refactoring. Feel free to adapt it to your needs.
src=<source dir>
dst=<target dir>

resources=(
  aws_s3_bucket.bucket1
  aws_iam_role.role2
  aws_iam_user.user1
  aws_s3_bucket.bucket2
  aws_iam_policy.policy2
)

cd $src
terraform state pull > /tmp/source.tfstate
cd $dst
terraform state pull > /tmp/target.tfstate

for resource in "${resources[@]}"; do
  terraform state mv -state=/tmp/source.tfstate -state-out=/tmp/target.tfstate "${resource}" "${resource}"
done

terraform state push /tmp/target.tfstate
cd $src
terraform state push /tmp/source.tfstate
Note that terraform pull is deprecated from v0.12 (but not removed and still works), and terraform push does not work anymore from v0.12.
Important: The terraform push command is deprecated, and only works
with the legacy version of Terraform Enterprise. In the current
version of Terraform Cloud, you can upload configurations using the API. See the docs about API-driven runs for more details.
==================
Below are unrelated to the OP:
If you are renaming your resources within the same project:
For version <= 1.0, use terraform state mv ...
For version >= 1.1, use the moved block, described here or here.
There are several other useful commands that I listed in my blog

Can I have terraform keep the old versions of objects?

New to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket; this is part of an app deploy. I'm going to be changing the package for each deploy and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But after running this, for a future deploy I will want to upload version02, then version03, and so on. Terraform replaces the old zip with the new one, which is expected behavior.
But is there a way to have Terraform not destroy the old version? Is this a supported use case, or is this not how I'm supposed to use Terraform? I wouldn't want to force this with an ugly hack if Terraform doesn't have official support for doing something like what I'm trying to do here.
I could of course just call the S3 API via a script, but it would be great to have this defined with the rest of the Terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
Currently you tell Terraform to manage one aws_s3_bucket_object, and Terraform takes care of its whole life cycle, meaning Terraform will also replace the file if it sees any changes to it.
What you are maybe looking for is the null_resource. You can use it to run a local-exec provisioner to upload the file you need with a script. That way the old file won't be deleted, as it is not directly managed by Terraform. You'd still be calling the API via a script, but the whole process of uploading to S3 would still be included in your terraform apply step.
Here is an outline of the null_resource:

resource "null_resource" "upload_to_s3" {
  depends_on = ["<any resource that should already be created before upload>"]

  ...

  triggers = {
    trigger = "<a resource change that must have happened so terraform starts the upload>"
  }

  provisioner "local-exec" {
    command = "<command to upload local package to s3>"
  }
}