CodePipeline Executes Before CodeBuildAction Role is Created - amazon-iam

CDK Version: 1.108.0
Node Version: v12.16.1
I am creating a CodePipeline via new Pipeline() and passing standard props to the CodeCommitSourceAction and CodeBuildAction constructs. I am not providing a role for the Pipeline or the CodeBuildAction. I do, however, add a few permissions to the underlying Pipeline role after instantiation via
myPipeline.role.grantPrincipal.addToPrincipalPolicy(...)
What I'm observing is that the IAM role that CodePipeline is meant to assume to execute StartBuild on the CodeBuild project is created by CloudFormation after the CodePipeline itself. As a result, the pipeline executes and fails on its first run. Re-executing the pipeline succeeds, but this behavior is not ideal when automating a deployment that starts by deploying the pipeline. Is this a known issue, or something I may be seeing due to a misconfiguration?
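For reference, a minimal sketch of the wiring described above, in CDK v1 TypeScript; the construct IDs, repository, and the extra policy statement are illustrative placeholders, not the actual code from this stack:

import * as codecommit from '@aws-cdk/aws-codecommit';
import * as codebuild from '@aws-cdk/aws-codebuild';
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as actions from '@aws-cdk/aws-codepipeline-actions';
import * as iam from '@aws-cdk/aws-iam';

// Inside a Stack: source repository and build project, with no explicit roles passed in.
const repo = new codecommit.Repository(this, 'Repo', { repositoryName: 'my-repo' });
const project = new codebuild.PipelineProject(this, 'Build');

const sourceOutput = new codepipeline.Artifact();
const myPipeline = new codepipeline.Pipeline(this, 'Pipeline', {
  stages: [
    {
      stageName: 'Source',
      actions: [new actions.CodeCommitSourceAction({
        actionName: 'Source',
        repository: repo,
        output: sourceOutput,
      })],
    },
    {
      stageName: 'Build',
      actions: [new actions.CodeBuildAction({
        actionName: 'Build',
        project,
        input: sourceOutput,
      })],
    },
  ],
});

// Extra permissions added to the underlying pipeline role after instantiation,
// as described above (the statement contents here are placeholders).
myPipeline.role.grantPrincipal.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ['s3:GetObject'],
  resources: ['arn:aws:s3:::my-extra-bucket/*'],
}));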

Related

CDK v2 update resulting in deployment error with Circle CI CI/CD pipeline

I have updated my CDK from version 1 to version 2. When I deploy locally using npm run cdk -- deploy --context awsEnv=dev --all --profile=dev, it works flawlessly.
However, when the Circle CI CI/CD pipeline tries to deploy to the same dev environment, it throws an error:
User: arn:aws:sts::xxxxxxxx:assumed-role/*******************************************************/jatinmehrotra is not authorized to perform: ssm:GetParameter on resource: arn:aws:ssm:**************:xxxxx:parameter/cdk-bootstrap/xxxxxxxxx/version because no identity-based policy allows the ssm:GetParameter action
So basically, the Circle CI CI/CD pipeline assumes a role and creates temporary credentials for deployment using the aws sts assume-role command.
Note: after updating to CDK v2 I can see a new role, which has the same name as the bootstrap SSM parameter. Does that have something to do with the error?
As of now, I think the assume-role credentials (even though they have sufficient permissions) are not able to access the bootstrap parameter.
After some troubleshooting and a careful read of the error logs, I manually granted full SSM parameter permissions to the role whose credentials are used to deploy the resources.
This resolved the issue.
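For illustration, a narrower alternative to granting full SSM permissions would be a statement scoped to the CDK bootstrap version parameter named in the error; this is a hedged sketch using the CDK v2 iam module, with region, account and qualifier left as placeholders:

import { aws_iam as iam } from 'aws-cdk-lib';

// Lets the deploy role read the bootstrap version parameter that cdk deploy checks.
// REGION, ACCOUNT_ID and the qualifier wildcard below are placeholders.
const allowBootstrapVersionLookup = new iam.PolicyStatement({
  actions: ['ssm:GetParameter'],
  resources: ['arn:aws:ssm:REGION:ACCOUNT_ID:parameter/cdk-bootstrap/*/version'],
});

The statement would then be attached to the role that Circle CI assumes for deployment.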

AWS CodePipeline is failing with InternalFailure

I have migrated existing AWS resources from one CloudFormation (CFT) stack to another CFT stack using the guide below.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-new-stack.html
After the migration, my new CFT stack's status was "IMPORT_COMPLETE". I then created an AWS CodePipeline whose source is AWS CodeCommit, and I am trying to deploy to the CloudFormation stack using CodePipeline.
In my CodePipeline I am using the new CFT stack into which I migrated my existing AWS resources; in the same template I added an SQS queue policy and pushed the updated code to CodeCommit.
When my AWS CodePipeline is triggered, it fails with an "InternalFailure" error and gives no specific reason for the failure.
I have also checked the CloudTrail logs, where I can see the pipeline failing right after the "UploadArchive" event (which belongs to CodeCommit) and not moving any further. I also tried giving administrator permissions to my pipeline service role as well as to the CloudFormation role, but the error stays the same.
One thing I observed later: when I update my new CloudFormation stack through the AWS CloudFormation console, the stack's status changes to "UPDATE_COMPLETE". If I then push a code change to CodeCommit, the pipeline completes successfully.
So I am not sure why my pipeline fails with "InternalFailure" while my stack's status is "IMPORT_COMPLETE". Could you please help me understand whether I am missing a specific step that causes the pipeline to fail with this error while my CFT stack is in the "IMPORT_COMPLETE" state?
It's a bug in CodePipeline. I'd recommend submitting a ticket to them in the hope that they make a fix. I only found this out via support myself.

cdk diff does not diff from console changes

I have a stack which creates IAM policies.
It's deployed successfully.
I then change one of the policies in the AWS Console by removing a few statements.
Then I invoke cdk diff, which does not detect the drift.
Is this expected?
Indeed, cdk diff only compares the specified stack against the CloudFormation template from the previous cdk deploy, not against the live state of the deployed resources.
Thus, if you made changes directly in the AWS Console, the AWS CDK will not detect the drift.
Since version 1.17.0, you can now do the following to detect and show drifted changes:
cdk deploy --no-execute
From the PR description:
You will be able to see the ChangeSet in AWS CloudFormation Console, validate the resources and discard or execute the ChangeSet.

aws codepipeline update lambda function source using s3 object

I am using Terraform to create all the infrastructure (CodePipeline, Lambda, buckets) on AWS.
Currently, I've created a pipeline that builds the source zip file and puts it in an S3 bucket, but the Lambda still keeps using the older source. So I update the URL manually in the AWS console and it works.
Now I want to automate the flow, but the available solutions are:
AWS SAM + CFT
Codebuild Stage to update the source using AWS CLI
Create a lambda that updates the source
Code Deploy + AWS SAM + CFT
I am not willing to use CFT at all, since all of our code is in Terraform and CFT would require me to create new Lambdas instead of using the existing ones.
Is there any other, simpler way to update the Lambda source through CodePipeline?
The preferred way to deploy a Lambda via CodePipeline is a CloudFormation deploy action [1]. Since you are not looking to use CloudFormation, the next option could be to run your terraform plan/apply commands from within a CodeBuild job that is part of the pipeline. You will need to grant the CodeBuild role the permissions required for resource creation (or export credentials in environment variables for Terraform to use via the method in [2]) and install the Terraform binary in the install phase of the buildspec.
Ref:
[1] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
[2] How to retrieve Secret Manager data in buildspec.yaml
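As an illustration of the "update the source using the AWS CLI" option listed in the question, here is a hedged sketch using the AWS SDK for JavaScript v3, as it might run from a CodeBuild step or a small helper Lambda; the region, function name, bucket and key are placeholders:

import { LambdaClient, UpdateFunctionCodeCommand } from '@aws-sdk/client-lambda';

// Points the existing (Terraform-managed) function at the zip the pipeline just uploaded.
async function updateLambdaFromS3(): Promise<void> {
  const lambda = new LambdaClient({ region: 'eu-west-1' }); // placeholder region
  await lambda.send(new UpdateFunctionCodeCommand({
    FunctionName: 'my-function',    // placeholder function name
    S3Bucket: 'my-artifact-bucket', // placeholder: bucket the pipeline writes the zip to
    S3Key: 'builds/source.zip',     // placeholder: key produced by the build stage
  }));
}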

CodeDeploy step of CodePipeline fails because of insufficient role permissions

I have a 3-stage CodePipeline on AWS.
Source: Checks out a specific branch of a CodeCommit repository upon commit (success)
Build: Runs some tests on a Docker image via CodeBuild (success)
Deploy: Performs a deployment on a deployment group (i.e., some specifically tagged EC2 instances) via CodeDeploy (failure)
Step 3 fails with
Unable to access the artifact with Amazon S3 object key
'someitem-/BuildArtif/5zyjxoZ' located in the Amazon S3
artifact bucket 'codepipeline-eu-west-1-somerandomnumber'. The provided
role does not have sufficient permissions.
Which role is the error referring to?
The service role of CodePipeline or the service role of CodeDeploy?
I am almost certain I have attached the appropriate policies to both though ...
Here is a snippet of my CodePipeline service role
try to give "CodeDeploy" policy with full access, it should work.
This could also be due to the actual BuildArtifact not existing. Check the specified path in your S3 bucket to see whether the object actually exists. CodePipeline just gives CodeDeploy a reference to an artifact it thinks has been built and uploaded, but it doesn't really know.
This issue is not related to the roles assigned to either CodePipeline or CodeBuild. If you investigate, you will find that in the S3 bucket 'codepipeline-eu-west-1-somerandomnumber' there is no folder "BuildArtif" and certainly no file "5zyjxoZ".
The issue is that CodeBuild is not sending any artifact to CodeDeploy. Change the 'Input artifacts' for CodeBuild to the output of the Source stage of the pipeline and the issue will be resolved.
The error message should be referring to the CodeDeploy role. The CodeDeploy action passes the S3 artifact by reference to CodeDeploy, so the CodeDeploy role needs to have read access to the CodePipeline artifact.
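If the role on the CodeDeploy side is the one missing access, the fix is typically read access on the pipeline's artifact bucket. Here is a minimal sketch of such a statement, expressed with the CDK iam module used elsewhere on this page; the bucket name comes from the error message above, and the action list is an assumption of the usual artifact-read set:

import * as iam from '@aws-cdk/aws-iam';

// Read access on the CodePipeline artifact bucket for the role used on the CodeDeploy side.
const artifactReadAccess = new iam.PolicyStatement({
  actions: ['s3:GetObject', 's3:GetObjectVersion', 's3:ListBucket'],
  resources: [
    'arn:aws:s3:::codepipeline-eu-west-1-somerandomnumber',
    'arn:aws:s3:::codepipeline-eu-west-1-somerandomnumber/*',
  ],
});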