cdk diff does not diff from console changes

I have a stack which creates IAM policies.
It's deployed successfully.
I then change a policy in the AWS Console by removing a few statements.
Then I invoke cdk diff, which does not detect the drift.
Is this expected?

Indeed, cdk diff only compares the specified stack with the template produced by the previous cdk deploy; it does not inspect the live state of the deployed resources.
Thus, if you made changes in the AWS Console, the AWS CDK will not detect the drift.
Since version 1.17.0, you can now do the following to detect and show drifted changes:
cdk deploy --no-execute
From the PR description:
You will be able to see the ChangeSet in AWS CloudFormation Console, validate the resources and discard or execute the ChangeSet.
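Separately, CloudFormation's native drift detection can surface console-made changes from the CLI; a minimal sketch, where MyIamStack is a placeholder stack name:

# Kick off a drift detection run; this returns a StackDriftDetectionId
aws cloudformation detect-stack-drift --stack-name MyIamStack

# Poll until the detection run completes
aws cloudformation describe-stack-drift-detection-status --stack-drift-detection-id <detection-id>

# List the per-resource drift results, e.g. the IAM policy modified in the console
aws cloudformation describe-stack-resource-drifts --stack-name MyIamStack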

Related

Cleaning stacks in target account by pipeline

I am following this tutorial and I have a problem with cleaning up the infra in the target account. The flow in detail:
1. Developer -commit-> GitHub -> DeploymentAccount:Pipeline pulls the code, then deploys the AWS CloudFormation stack to TargetAccount:CloudFormation. The Test CDK Pipeline will deploy the stack in the TestAccount, which is the good part.
2. But when we want to clean up with cdk destroy --all, it only destroys the Test CDK Pipeline; the stacks in the Test Account still remain.
So my question is: how do we destroy all the stacks?
The solution is mentioned in the tutorial you linked:
Clean up
Delete stacks using the command cdk destroy --all. When you see the following text, enter y, and press enter/return.
ProdDataLakeCDKBlogInfrastructurePipeline,
DevDataLakeCDKBlogInfrastructurePipeline (y/n)?
Note: This operation deletes stacks only in central deployment account
To delete stacks in development account, log onto Dev account, go to AWS CloudFormation console and delete the following stacks:
Dev-DevDataLakeCDKBlogInfrastructureVpc
Dev-DevDataLakeCDKBlogInfrastructureS3BucketZones
Dev-DevDataLakeCDKBlogInfrastructureIam
Note:
Deletion of Dev-DevDataLakeCDKBlogInfrastructureS3BucketZones will delete the S3 buckets (raw, conformed, and purpose-built). This behavior can be changed by modifying the retention policy in s3_bucket_zones_stack.py
To delete stacks in test account, log onto ~~Dev~~ Test account, go to AWS CloudFormation console and delete the following stacks:
Test-TestDataLakeCDKBlogInfrastructureVpc
Test-TestDataLakeCDKBlogInfrastructureS3BucketZones
Test-TestDataLakeCDKBlogInfrastructureIam
Note:
The S3 buckets (raw, conformed, and purpose-built) have retention policies attached and must be removed manually when they are no longer needed.
To delete stacks in prod account, log onto ~~Dev~~ Prod account, go to AWS CloudFormation console and delete the following stacks:
Prod-ProdDataLakeCDKBlogInfrastructureVpc
Prod-ProdDataLakeCDKBlogInfrastructureS3BucketZones
Prod-ProdDataLakeCDKBlogInfrastructureIam
Note:
The S3 buckets (raw, conformed, and purpose-built) have retention policies attached and must be removed manually when they are no longer needed.
The tutorial erroneously says that you have to log into the Dev account for Test and Prod; I have corrected it in the quote above.
Alternatively, you can call cdk destroy --all with the --profile flag and set it to the dev/test/prod AWS credentials profile.
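For example (the profile names are placeholders for profiles configured in your AWS credentials file):

cdk destroy --all --profile dev-profile
cdk destroy --all --profile test-profile
cdk destroy --all --profile prod-profile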

AWS CDK accessing parameters when deploying stacks on the pipeline via yaml, typescript and nodejs

I'm fairly new to AWS and the CDK, but I have been working on a project which deploys to AWS via a pipeline, using YAML for the cf-template and later a Node script to run cdk deploy on a set of stack files written in TypeScript.
In the cf-template YAML where the cdk-toolkit is defined, there's a bucket resource with name X. After the toolkit has been created/updated in the pipeline, the cdk deploy command is executed to deploy some stacks and workers, which should live in bucket X. They aren't automatically being uploaded there, however, so I've tried using the --parameters flag to specify X as below.
cdk deploy --toolkit-stack-name my-toolkit --parameters uploadBucketName=X --ci --require-approval never
When I do this I get the following error in the pipeline for the first stack that gets deployed:
Parameters: [uploadBucketName] do not exist in the template
I assumed this meant that the MyFirstStack.ts file was missing a parameter definition, as suggested by the AWS documentation, but it's not clear to me why this is necessary or how it's supposed to be used when it's the cdk deploy command that provides the value for this parameter. I tried adding it per the docs:
const uploadBucketName = new CfnParameter(this, "uploadBucketName", {
  type: "String",
  description: "The name of the Amazon S3 bucket where uploaded files will be stored.",
});
but I'm not sure if this is really the right thing to do, and it doesn't work anyway: I still get the same error.
Does anyone have any ideas where I'm going wrong?
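(Worth noting for context: when cdk deploy targets multiple stacks, an unqualified --parameters value is handed to every stack, and a stack whose template does not define that parameter can fail with exactly this error. The CDK CLI also accepts stack-qualified parameters; a sketch, where MyFirstStack is a placeholder stack name:)

# Scope the parameter to the one stack that actually defines it
cdk deploy --toolkit-stack-name my-toolkit --parameters MyFirstStack:uploadBucketName=X --ci --require-approval never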

How do I update a CloudFormation Template (via CLI or API) if none of the active resources are affected by the update?

If I have an existing CloudFormation stack with some resources that are always active and some that are not (i.e., resources whose Condition evaluates to false), and I attempt to update the template of ONLY those inactive resources without activating them (their Condition still evaluates to false) via the CLI or API, I get a No updates are to be performed. error:
aws cloudformation update-stack --stack-name <name> --template-body "..."
An error occurred (ValidationError) when calling the UpdateStack operation: No updates are to be performed.
If I then check the Stack Template, it has the previous template, not the new one.
However, if I do what is essentially the same thing but from the AWS Console (i.e., Update Stack -> Replace current template -> Upload a template file -> No other changes), the template will be updated.
Is there some way to accomplish such a template update via CLI or API?
Edit: Replicating the console's call doesn't work. When using the console, CloudTrail logs the API call as UpdateStack, but using the same parameters in the CLI command doesn't seem to work.
Instead of aws cloudformation update-stack you can use aws cloudformation deploy --no-fail-on-empty-changeset.
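For example (the stack name and template file are placeholders):

aws cloudformation deploy --stack-name <name> --template-file template.yaml --no-fail-on-empty-changeset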
References:
Documentation for deploy
Difference between deploy and create (or update)

AWS CodePipeline is failing with InternalFailure

I have migrated existing AWS resources from one CloudFormation (CFT) stack to another CFT stack using the guide below.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-new-stack.html
After the migration, my new CFT stack's status was "IMPORT_COMPLETE". I then created an AWS CodePipeline whose source is AWS CodeCommit, and I am trying to deploy to the CloudFormation stack using CodePipeline.
In my CodePipeline I am using the new CFT stack into which I migrated my existing AWS resources; in the same template I updated the code by adding an SQS queue policy and uploaded it to CodeCommit.
When my AWS CodePipeline is triggered, it fails with an "InternalFailure" error, without giving any specific reason for the failure.
I have also checked the CloudTrail logs, where I can see the pipeline failing right after the "UploadArchive" event (which belongs to CodeCommit); it does not move any further. I also tried giving administrator permissions to my pipeline service role as well as the CloudFormation role, but the error stays the same.
One thing I observed later: when I update my new CloudFormation stack through the AWS CloudFormation console, the stack's status changes to "UPDATE_COMPLETE". If I then push code to CodeCommit, the pipeline completes successfully.
So I am not sure why my pipeline fails with "InternalFailure" while my stack's status is "IMPORT_COMPLETE". Could you please help me understand whether I am missing a specific step, given that the pipeline fails with this error whenever my CFT stack's status is "IMPORT_COMPLETE"?
It's a bug in CodePipeline. I'd recommend submitting a ticket to them in the hope that they fix it. I only found this out via support myself.

aws codepipeline update lambda function source using s3 object

I am using Terraform to create all the infra (CodePipeline, Lambda, buckets) on AWS.
Currently, I've created a pipeline that builds the source zip file and puts it in an S3 bucket, but the Lambda still keeps using the older source. So I update the URL manually in the AWS console, and that works.
Now I want to automate the flow, but the available solutions are:
AWS SAM + CFT
CodeBuild stage to update the source using the AWS CLI
Create a Lambda that updates the source
CodeDeploy + AWS SAM + CFT
I am not willing to use CFT at all, since all of our code is in Terraform and CFT requires me to create new Lambdas instead of using the old ones.
Is there any other, simpler way to update the Lambda source through CodePipeline?
The preferred way to deploy a Lambda via CodePipeline is using a CloudFormation deploy action [1]. Since you are not looking to use CloudFormation, the next option could be to run your terraform plan/apply commands from within a CodeBuild job that is part of the pipeline. You will need to give the CodeBuild role the permissions required for resource creation (or export the credentials in environment variables for TF to use via this [2] method) and install the TF binary within the install phase of the buildspec, as sketched below.
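A minimal buildspec along those lines might look like this (the pinned Terraform version is an assumption; adjust it and the apply flags for your project):

version: 0.2
phases:
  install:
    commands:
      # Fetch and install a pinned Terraform binary (version is an assumption)
      - curl -sSLo terraform.zip https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
      - unzip terraform.zip -d /usr/local/bin
  build:
    commands:
      # Re-apply the Terraform config so the Lambda picks up the new S3 object
      - terraform init
      - terraform apply -auto-approve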
Ref:
[1] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
[2] How to retrieve Secret Manager data in buildspec.yaml