I am using AWS CDK and want to use parameters stored in GitLab as variables in the CDK build.
Has anyone tried something like this?
Solutions found:
Create a custom variable in GitLab (in the same repository as the project) in one of two ways:
In the pipeline file stored with the project (for non-sensitive data)
In the project's CI/CD settings in GitLab (the recommended solution if you want to store sensitive parameters such as credentials - those parameters should not be stored in a file with the project)
Declare the custom variable in the main CDK file (e.g. const CUSTOM_ENV = process.env.CUSTOM_ENV)
Call cdk deploy with the created parameter, e.g. cdk deploy $CUSTOM_ENV, and the variable will be delivered automatically by GitLab.
More details on how to declare variables: GitLab docs
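Assuming a typical GitLab job, the flow above can be sketched as a deploy step (CUSTOM_ENV and its fallback value are illustrative; the actual cdk call is left commented out since the CDK CLI is not assumed to be installed here):

```shell
# GitLab CI/CD injects project/pipeline variables into the job's
# environment, so a deploy step sees CUSTOM_ENV as an ordinary shell
# variable; a Node-based CDK app can then read it at synth time via
# process.env.CUSTOM_ENV.
CUSTOM_ENV="${CUSTOM_ENV:-dev}"   # "dev" is only a local fallback
export CUSTOM_ENV
echo "deploying with CUSTOM_ENV=${CUSTOM_ENV}"
# cdk deploy   # uncomment in the real pipeline job
```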
I'm fairly new to AWS and the CDK, but I've been working on a project that deploys to AWS via a pipeline: a YAML cf-template is used for the cdk-toolkit, and a Node script later runs cdk deploy on a set of stack files written in TypeScript.
In the cf-template YAML where the cdk-toolkit is defined, there's a bucket resource named X. After the toolkit has been created/updated in the pipeline, the cdk deploy command is executed to deploy some stacks and workers, which should live in bucket X. They aren't automatically uploaded there, however, so I've tried using the --parameters flag to specify X, as below.
cdk deploy --toolkit-stack-name my-toolkit --parameters uploadBucketName=X --ci --require-approval never
When I do this I get the following error in the pipeline for the first stack that gets deployed:
Parameters: [uploadBucketName] do not exist in the template
I assumed this meant that the MyFirstStack.ts file was missing a parameter definition, as suggested by the AWS documentation, but it's not clear to me why this is necessary or how it's supposed to be used when it's the cdk deploy command that provides the value for this parameter. I tried adding it per the docs:
const uploadBucketName = new CfnParameter(this, "uploadBucketName", {
  type: "String",
  description: "The name of the Amazon S3 bucket where uploaded files will be stored.",
});
but I'm not sure this is really the right thing to do, and it doesn't work in any case - I still get the same error.
Does anyone have any ideas where I'm going wrong?
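Not a definitive answer, but one common cause of this exact error when a single cdk deploy covers several stacks is that an unqualified --parameters value is applied to every stack, including stacks whose templates don't define that parameter. The CDK CLI accepts a stack-qualified form (MyFirstStack is assumed here to be the stack that declares the CfnParameter):

```shell
# Qualify the parameter with the stack that actually declares it, so the
# other stacks in the same deploy are not handed an unknown parameter.
cdk deploy --toolkit-stack-name my-toolkit \
  --parameters MyFirstStack:uploadBucketName=X \
  --ci --require-approval never
```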
I have created my AWS infrastructure using Terraform. The infrastructure includes Elastic Beanstalk apps, an Application Load Balancer, S3, DynamoDB, VPC subnets and VPC endpoints.
The infrastructure is deployed locally using the Terraform commands shown below:
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -auto-approve -var-file="terraform.tfvars"
The terraform.tfvars file contains variables such as the region, instance type, access key, etc.
I want to automate the build and deploy process of this Terraform infrastructure using AWS CodePipeline.
How can I achieve this task? What steps should I follow? Where should I save the terraform.tfvars file? What roles should I specify in the CodeBuild role? What about the manual process of auto-approve?
MY APPROACH: The entire CodeCommit/GitHub, CodeBuild, CodeDeploy (i.e. CodePipeline) process is carried out through the AWS console. I started with GitHub as the source and it is working (the GitHub repo includes my Terraform code for building the AWS infrastructure). Then, for CodeBuild, I need to specify the environment variables and the buildspec.yml file, and this is the problem: locally I had a terraform.tfvars to do the job, but here I need to do it in the buildspec.yml file.
QUESTIONS: I don't know how to specify my terraform.tfvars credentials in the buildspec.yml file - what environment variables should I specify? I also know we need to specify roles in the CodeBuild project, but how do I specify them effectively? And how do I store the Terraform state in S3?
How can I achieve this task?
Use CodeCommit to store your Terraform code, CodeBuild to run terraform plan, terraform apply, etc., and CodePipeline to connect CodeCommit with CodeBuild.
What steps to follow?
There are many tutorials on the internet. Check this as an example:
https://medium.com/faun/terraform-deployments-with-aws-codepipeline-342074248843
Where to save the terraform.tfvars file?
Ideally, you should create one terraform.tfvars for development environment, like terraform.tfvars.dev, and another one for production environment, like terraform.tfvars.prod. And in your CodeBuild environment, choose the file using environment variables.
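That selection step might look like this in the buildspec commands (TF_ENV is an assumed variable name set on each CodeBuild project; this is a sketch, not the only way to do it):

```shell
# TF_ENV would be "dev" or "prod", set in the CodeBuild project's
# environment variables; "dev" here is only a fallback for local runs.
TF_ENV="${TF_ENV:-dev}"
VAR_FILE="terraform.tfvars.${TF_ENV}"
echo "terraform plan -var-file=${VAR_FILE}"
```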
What roles to specify in the specific CodeBuild role?
Your CodeBuild role needs permissions to create, list, delete and update the resources you manage. For each service you use, that's almost every action.
What about the manual process of auto-approve?
Usually, you run terraform plan in one CodeBuild environment to show what the changes to your environment are, and after a manual approval you execute terraform apply -auto-approve in another CodeBuild environment. Check the tutorial above; it shows how to create this.
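A minimal sketch of the two CodeBuild stages, assuming Terraform and its backend are already configured (commands only; the manual-approval action lives in CodePipeline between them):

```shell
# Stage 1 - "plan" CodeBuild project: save the plan to a file and
# publish it as a build artifact, so the apply stage runs exactly
# what was reviewed.
terraform init -input=false
terraform plan -input=false -var-file="terraform.tfvars" -out=tfplan

# (CodePipeline manual-approval action sits between the two stages.)

# Stage 2 - "apply" CodeBuild project: apply the reviewed plan file.
# Applying a saved plan file does not require -auto-approve.
terraform apply -input=false tfplan
```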
I'm planning to convert my existing Terraform (Infrastructure as Code) implementation into CDK. Is it really possible?
You can now do this as of version 0.5 of the CDK.
https://www.hashicorp.com/blog/announcing-cdk-for-terraform-0-5
Create a new directory
Initialize CDK project
cdktf init --template="python" --local
Copy over your main.tf file
Convert!
cat main.tf | cdktf convert --provider hashicorp/aws --language python > imported.py
My biggest concern actually is whether I'll be able to keep the existing infrastructure untouched when rewriting all my terraform modules into aws-cdk. Each construct in aws-cdk has a name which is used to generate a logical id in aws, and I'm not sure what's the equivalent one in terraform.
When running cdk deploy, if a construct's name differs from the one in the cloud, the resource will be destroyed and recreated.
There is currently no way to convert Terraform templates to equivalent CDK code.
I am new to AWS Lambda. I am using serverless deploy to deploy my Lambda function declared in the serverless.yml file.
In that file I wrote a Lambda function, deployed it, and it is working fine, but the problem is that the environment variables I set are not available in the console for that Lambda function. I might be making some minor mistake, or perhaps there is some other syntax for deploying environment variables.
I can go to the Lambda function in the console and add environment variables manually.
But my doubt is: can we do it while deploying through serverless deploy?
You can use the versions and aliases provided by AWS Lambda.
You can create different versions of the same Lambda function and give them an alias. For example, when you push your Lambda code, create a version (say 5) and create an alias for it (say TEST).
When you're sure that it's ready for production, create a version (or choose an existing version) and alias that (say PROD).
Now whenever your Lambda function executes, it receives the Lambda ARN, which contains the alias (in context.invokedFunctionArn), so you can tell which alias was executed - and that can be used as the environment variable. While invoking the function, you can specify which alias to execute from your invocation code.
let thisARN = context.invokedFunctionArn;
// Get the last string in ARN - It's either function name or the alias name
let thisAlias = thisARN.split(":").pop();
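The same last-segment extraction can be sketched in shell (the ARN below is a made-up example):

```shell
# A Lambda ARN invoked through an alias ends in ":<alias>"; otherwise it
# ends in the function name. Take the last ":"-separated field.
THIS_ARN="arn:aws:lambda:us-east-1:123456789012:function:helloworld:PROD"
THIS_ALIAS="${THIS_ARN##*:}"
echo "$THIS_ALIAS"
```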
Now whenever you deploy a new code, just point the alias to that version.
You can use AWS console or CLI for that.
Take a look at this lambda versioning and aliases
For directly deploying to your alias (PROD), you can do this:
aws lambda update-alias \
--region region \
--function-name helloworld \
--function-version 2 \
--name PROD
serverless deploy
serverless deploy works fine for deployment to any stage, and it also deploys the environment variables for the given stage. In my case the environment variables were not deployed because of an indentation problem in the YAML file; serverless deploy was not even throwing an error - it deployed the function, but the environment variables were not deployed.
In the YAML file we can state the stage where we want to deploy, like this (the environment block must be indented under provider, or under an individual function - TABLE_NAME below is just an example variable):
provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: eu-west-2
  # incorrect indentation here is exactly what silently drops the variables
  environment:
    TABLE_NAME: my-table
Hope this helps if someone gets a similar issue.
I'm writing an application which I want to run as an AWS Lambda function but also adhere to the Twelve-Factor app guidelines. In particular Part III. Config which requires the use of environmental variables for configuration.
However, I cannot find a way to set environmental variables for AWS Lambda instances. Can anyone point me in the right direction?
If it isn't possible to use environmental variables can you please recommend a way to use environmental variables for local development and have them transformed to a valid configuration system that can be accessed using the application code in AWS.
Thanks.
As of November 18, 2016, AWS Lambda supports environment variables.
Environment variables can be specified both using AWS console and AWS CLI. This is how you would create a Lambda with an LD_LIBRARY_PATH environment variable using AWS CLI:
aws lambda create-function \
  --region us-east-1 \
  --function-name myTestFunction \
  --zip-file fileb://path/package.zip \
  --role role-arn \
  --environment Variables={LD_LIBRARY_PATH=/usr/bin/test/lib64} \
  --handler index.handler \
  --runtime nodejs4.3 \
  --profile default
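For a function that already exists, the environment can likewise be changed from the CLI with update-function-configuration (the function name and value here mirror the create-function example above):

```shell
aws lambda update-function-configuration \
  --function-name myTestFunction \
  --environment "Variables={LD_LIBRARY_PATH=/usr/bin/test/lib64}"
```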
Perhaps the 'custom environment variables' feature of node-lambda would address your concerns:
https://www.npmjs.com/package/node-lambda
https://github.com/motdotla/node-lambda
"AWS Lambda doesn't let you set environment variables for your function, but in many cases you will need to configure your function with secure values that you don't want to check into version control, for example a DB connection string or encryption key. Use the sample deploy.env file in combination with the --configFile flag to set values which will be prepended to your compiled Lambda function as process.env environment variables before it gets uploaded to S3."
There is no way to configure env variables for lambda execution since each invocation is disjoint and no state information is stored. However there are ways to achieve what you want.
AWS credentials - you can avoid storing those in env variables. Instead, grant the privileges to your Lambda execution (LambdaExec) role. In fact, AWS recommends using roles instead of AWS credentials.
Database details: one suggestion is to store them in a well-known file in a private bucket. Lambda can download that file when it is invoked and read its contents, which can contain database details and other information. Since the bucket is private, others cannot access the file. The LambdaExec role needs IAM privileges to access the private bucket.
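A sketch of that download step (the bucket and key names are made up; the function's execution role needs s3:GetObject on the object):

```shell
# Fetch the private config file into the Lambda's writable /tmp at
# invocation time, then parse it in the handler code.
aws s3 cp s3://my-private-config-bucket/db-config.json /tmp/db-config.json
```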
AWS just added support for configuration of Lambda functions via environment parameters.
Take a look here
We also had this requirement for our Lambda function, and we "solved" it by generating an env file on our CI platform (in our case CircleCI). This file gets included in the archive that gets deployed to Lambda.
Now in your code you can include this file and use the variables.
The script that I use to generate a JSON file from CircleCI environment variables is:
cat >dist/env.json <<EOL
{
  "CLIENT_ID": "$CLIENT_ID",
  "CLIENT_SECRET": "$CLIENT_SECRET",
  "SLACK_VERIFICATION_TOKEN": "$SLACK_VERIFICATION_TOKEN",
  "BRANCH": "$CIRCLE_BRANCH"
}
EOL
I like this approach because this way you don't have to include environment specific variables in your repository.
I know it has been a while, but I didn't see a solution that works from the AWS Lambda console.
STEPS:
In your AWS Lambda Function Code, look for "Environment variables", and click on "Edit";
For the "Key", type "LD_LIBRARY_PATH";
For the "Value", type "/opt/python/lib".
Look at this screenshot for the details.
Step 3 assumes that you are using Python as your runtime environment, and also that your uploaded Layer has its "lib" folder in the following structure:
python/lib
This solution works for the error:
/lib/x86_64-linux-gnu/libz.so.1: version 'ZLIB_1.2.9' not found
assuming the correct library file is put in the "lib" folder and the environment variable is set as above.
PS: If you are unsure about the path in step 3, just look for the error in your console, and you will be able to see where your layer's "lib" folder is at runtime.