Is it possible to convert Terraform into AWS CDK?

I'm planning to convert my existing Terraform (Infrastructure as Code) setup into CDK. Is it really possible?

You can now do this as of version 0.5 of CDK for Terraform (cdktf):
https://www.hashicorp.com/blog/announcing-cdk-for-terraform-0-5
1. Create a new directory.
2. Initialize the CDK project:
cdktf init --template="python" --local
3. Copy over your main.tf file.
4. Convert:
cat main.tf | cdktf convert --provider hashicorp/aws --language python > imported.py
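Once the converted code is wired into the project's main.py (an assumption here: cdktf convert only emits the code, and hooking it into the app is up to you), the usual cdktf workflow applies:
cdktf synth
cdktf deploy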

My biggest concern is whether I'll be able to keep the existing infrastructure untouched while rewriting all my Terraform modules into aws-cdk. Each construct in aws-cdk has an id that is used to generate the resource's logical ID in CloudFormation, and I'm not sure what the equivalent is in Terraform.
When running cdk deploy, if a resource's logical ID differs from the one already recorded in the deployed stack, CloudFormation will destroy and recreate the resource.
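For what it's worth, on the aws-cdk side the generated logical ID can be pinned explicitly; a minimal sketch (assuming aws-cdk-lib v2; the construct id and logical ID names are hypothetical):
import { Stack, StackProps, CfnResource } from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";
import { Construct } from "constructs";

class MigratedStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // By default the logical ID is derived from the construct path plus a hash.
    const bucket = new s3.Bucket(this, "UploadBucket");

    // Pin it to the logical ID already recorded in the deployed stack so
    // CloudFormation updates the resource in place instead of replacing it.
    (bucket.node.defaultChild as CfnResource).overrideLogicalId("ExistingUploadBucket");
  }
}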

There is currently no way to convert Terraform templates to equivalent CDK code.

Related

How to map AWS resource type to Terraform type

I am trying to import existing AWS resources through the Terraform import command.
Programmatically, I am able to get an AWS resource ID through the resource tagging API, but then I cannot find a proper way to map it to a Terraform type.
For example, the EC2 instance i-abcd has to be imported into Terraform with the following command:
terraform import aws_instance.foo i-abcd
Is there any way to determine the Terraform type of i-abcd, knowing that it is an instance in AWS?
Something like a dictionary:
AWS Resource type | Terraform Resource type
instance | aws_instance
Is there any solution like the above out there, or any workaround to create it without too many manual mappings?
Thanks in advance!
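For illustration, the dictionary described above would be a hand-maintained lookup table; a minimal sketch in TypeScript (the entries shown are examples, not a complete mapping):
// Partial, hand-maintained map from AWS resource kinds to Terraform resource types.
const awsToTerraformType: Record<string, string> = {
  instance: "aws_instance",        // EC2 instance
  bucket: "aws_s3_bucket",         // S3 bucket
  function: "aws_lambda_function", // Lambda function
};

// Usage: assemble the import command for an instance ID.
const cmd = `terraform import ${awsToTerraformType["instance"]}.foo i-abcd`;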

AWS CDK: accessing parameters when deploying stacks on the pipeline via YAML, TypeScript and Node.js

I'm fairly new to AWS and the CDK, but I've been working on a project that deploys to AWS via a pipeline, using YAML for the CloudFormation template and later a Node script to run cdk deploy on a set of stack files written in TypeScript.
In the CloudFormation template YAML where the cdk-toolkit is defined, there's a bucket resource named X. After the toolkit has been created/updated in the pipeline, the cdk deploy command is executed to deploy some stacks and workers, which should live in bucket X. They aren't automatically uploaded there, however, so I've tried using the --parameters flag to specify X, as below.
cdk deploy --toolkit-stack-name my-toolkit --parameters uploadBucketName=X --ci --require-approval never
When I do this I get the following error in the pipeline for the first stack that gets deployed:
Parameters: [uploadBucketName] do not exist in the template
I assumed this meant that the MyFirstStack.ts file was missing a parameter definition, as suggested by the AWS documentation, but it's not clear to me why this is necessary, or how it's supposed to be used when it's the cdk deploy command that provides the value for this parameter. I tried adding it per the docs:
const uploadBucketName = new CfnParameter(this, "uploadBucketName", {
  type: "String",
  description: "The name of the Amazon S3 bucket where uploaded files will be stored.",
});
but I'm not sure this is really the right thing to do, and besides, it doesn't work: I still get the same error.
Does anyone have any ideas where I'm going wrong?
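For context, a CfnParameter defined this way is normally consumed inside the same stack through its value accessors; a minimal sketch (the bucket lookup below is an assumption, not taken from the question):
// assumes: import * as s3 from "aws-cdk-lib/aws-s3";
// Read the parameter's deploy-time value and use it to reference the existing bucket.
const uploadBucket = s3.Bucket.fromBucketName(this, "UploadBucket", uploadBucketName.valueAsString);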

Could not find an option to pass parameter `CallerReference` in Terraform resource `aws_cloudfront_origin_access_identity`

We are migrating from API calls to Terraform to spin up resources/accesses/policies in AWS. I got a bit stuck at a point where I could not find an option to pass CallerReference to the Terraform resource aws_cloudfront_origin_access_identity.
We have this option using api: https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateCloudFrontOriginAccessIdentity.html
Do we have any options for passing it some other way?
If it's not directly supported by Terraform, you can always use local-exec with the AWS CLI to create your origin access identity.
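For reference, the underlying CLI call looks like this (the CallerReference and Comment values are placeholders):
aws cloudfront create-cloud-front-origin-access-identity --cloud-front-origin-access-identity-config CallerReference=my-unique-ref,Comment=my-oai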

Terraform infrastructure runs locally; building and deploying it on AWS CodePipeline gives an error

I have created my AWS infrastructure using Terraform. The infrastructure includes Elastic Beanstalk apps, an Application Load Balancer, S3, DynamoDB, VPC subnets, and VPC endpoints.
The infrastructure deploys locally using the Terraform commands shown below:
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -auto-approve -var-file="terraform.tfvars"
The terraform.tfvars file contains variables like region, instance type, access key, etc.
I want to automate the build and deploy process of this Terraform infrastructure using AWS CodePipeline.
How can I achieve this? What steps should I follow? Where should I save the terraform.tfvars file? What roles should I specify in the CodeBuild role? And what about the manual auto-approve step?
MY APPROACH: The entire CodeCommit/GitHub, CodeBuild, CodeDeploy (i.e., CodePipeline) process is carried out through the AWS console. I started with GitHub as the source, and it works (the GitHub repo includes my Terraform code for building the AWS infrastructure). Then, for CodeBuild, I need to specify the env variables and the buildspec.yml file. This is the problem: locally I had terraform.tfvars to do the job, but here I need to do it in the buildspec.yml file.
QUESTIONS: How do I specify my terraform.tfvars credentials in the buildspec.yml file, and which env variables should I set? I also know we need to specify roles in the CodeBuild project, but how do I specify them effectively? And how do I store the Terraform state in S3?
How can I achieve this task?
Use CodeCommit to store your Terraform code, CodeBuild to run terraform plan, terraform apply, etc., and CodePipeline to connect CodeCommit with CodeBuild.
What steps to follow?
There are many tutorials on the internet. Check this as an example:
https://medium.com/faun/terraform-deployments-with-aws-codepipeline-342074248843
Where to save the terraform.tfvars file?
Ideally, you should create one terraform.tfvars for the development environment, like terraform.tfvars.dev, and another for the production environment, like terraform.tfvars.prod. Then, in your CodeBuild environment, choose the file using environment variables.
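For example (TF_ENV is an assumed environment variable name, set per CodeBuild project):
terraform plan -var-file="terraform.tfvars.$TF_ENV"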
What roles to specify in the CodeBuild role?
Your CodeBuild role needs permissions to create, list, delete, and update resources. Basically, within each service you manage, that's almost every permission.
What about the manual process of auto-approve?
Usually, you run terraform plan in one CodeBuild environment to show what will change, and after a manual approval step you execute terraform apply -auto-approve in another CodeBuild environment. Check the tutorial above; it shows how to set this up.
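As for the remaining question, storing the Terraform state in S3: declare an empty backend "s3" {} block in your configuration and supply the details at init time; for example (the bucket, key, and region values are placeholders):
terraform init -backend-config="bucket=my-tf-state" -backend-config="key=prod/terraform.tfstate" -backend-config="region=us-east-1"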

AWS CodePipeline: update Lambda function source using an S3 object

I am using Terraform to create all the infra (CodePipeline, Lambda, buckets) on AWS.
Currently, I've created a pipeline that builds the source zip file and puts it in an S3 bucket, but the Lambda keeps using the older source. So I update the URL manually in the AWS console, and it works.
Now I want to automate the flow, but the available solutions are:
AWS SAM + CFT
CodeBuild stage to update the source using the AWS CLI
Create a Lambda that updates the source
CodeDeploy + AWS SAM + CFT
I am not willing to use CFT at all, since all of our code is in Terraform and CFT requires me to create new Lambdas instead of using the existing ones.
Is there any other, simpler way to update the Lambda source through CodePipeline?
The preferred way to deploy a Lambda via CodePipeline is using a CloudFormation deploy action [1]. Since you are not looking to use CloudFormation, the next option could be to run your terraform plan/apply commands from within a CodeBuild job that is part of the pipeline. You will need to give the CodeBuild role the permissions required for resource creation (or export credentials in environment variables for Terraform to use, via the method in [2]) and install the Terraform binary in the install phase of the buildspec.
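If you instead go with the CodeBuild + AWS CLI option from your list, the actual update is a single call (the function and bucket names below are placeholders):
aws lambda update-function-code --function-name my-function --s3-bucket my-artifact-bucket --s3-key source.zip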
Ref:
[1] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
[2] How to retrieve Secret Manager data in buildspec.yaml