I am using CodePipeline to deploy a Lambda function.
After deployment, I would like to run integration tests.
I have a CodePipeline created and deployed my Lambda stack successfully, but running the integration tests fails because of wrong permissions. My pipeline is in account A and the deployment targets account B. That works, but if I add a ShellStep or a CodeBuildStep as a post step with stage.add_post on the StageDeployment object, invoking the Lambda function from my tests.sh is not possible, because the CodeBuildStep/ShellStep runs with different permissions, even though I deployed the function in the previous step. To access the Lambda function ARN in my tests, I set a CfnOutput in my lambdaStackStage and passed it to my CodeBuildStep/ShellStep as env_from_cfn_outputs.
lambdaStackStage = Deploy(
    self,
    id='Deployment',
    env={
        'account': '123456789',
        'region': region
    },
)
stage = pipe.add_stage(
    stage=lambdaStackStage
)
stage.add_post(
    CodeBuildStep(
        "IntegrationTest",
        commands=["./tests.sh"],
        env_from_cfn_outputs={"LAMBDA_FUNCTION": lambdaStackStage.lambda_arn},
    )
)
How can I make the CodeBuildStep use the same role as the deployment step, or why isn't this step using that role?
The lambdaStackStage was created with the account number of account B and the region, but passing this env to my CodeBuildStep did not change the permission problem.
stage.add_post(
    CodeBuildStep(
        "IntegrationTest",
        commands=["./tests.sh"],
        env_from_cfn_outputs={"LAMBDA_FUNCTION": lambdaStackStage.lambda_arn},
        env={
            'account': '123456789',
            'region': region
        }
    )
)
Here is a possible approach: give the CodeBuildStep an assume-role policy and then assume that role with the AWS CLI before running the tests.
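A rough sketch of that idea in Python, building on the snippet above (the role name, account ID, and the aws_iam import are assumptions; a role with that name would have to be created in account B alongside the Lambda stack, with a trust policy allowing the pipeline account A to assume it):
from aws_cdk import aws_iam as iam

# Hypothetical role created in account B next to the Lambda stack,
# trusted by the pipeline account A.
test_role_arn = "arn:aws:iam::123456789:role/IntegrationTestRole"

stage.add_post(
    CodeBuildStep(
        "IntegrationTest",
        commands=[
            # Exchange the CodeBuild project's credentials for the role in account B.
            'export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" '
            f'$(aws sts assume-role --role-arn {test_role_arn} --role-session-name IntegrationTest '
            '--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" --output text))',
            # tests.sh now runs with credentials that are allowed to invoke the Lambda function.
            "./tests.sh",
        ],
        env_from_cfn_outputs={"LAMBDA_FUNCTION": lambdaStackStage.lambda_arn},
        # Allow this CodeBuild project to assume the cross-account test role.
        role_policy_statements=[
            iam.PolicyStatement(
                actions=["sts:AssumeRole"],
                resources=[test_role_arn],
            )
        ],
    )
)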
Related
Is it possible to use a single IAM role (which can access another role) to deploy resources with the environment variables CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION?
For example, below is a piece of code from a Jenkinsfile which uses a role to deploy resources in the account it is part of.
script {
    withCredentials([string(credentialsId: "sample-role-arn", variable: 'ARN'), string(credentialsId: "sample-role-extid", variable: 'EXT_ID')]) {
        withAWS(role: "${ARN}", externalId: "${EXT_ID}", region: "${AWS_REGION}") {
            sh '''
                cdk deploy --all
            '''
        }
    }
}
In this code, sample-role-arn is defined in the account in which cdk deploy --all will deploy the resources. If CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION are set to a different account that sample-role-arn is not part of, cdk deploy --all will throw the error: Could not assume role in target account using current credentials (which are for account xxxxxx) User: arn:aws:sts::xxxx:assumed-role/sample-role-arn/xxx is not authorized to perform: sts:AssumeRole on resource, which is expected.
However, if a role is created in the account set by CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION, with sample-role-arn as a trusted entity, the same error as above is still encountered, despite the fact that sample-role-arn is a trusted entity.
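For illustration, the trust setup described above corresponds roughly to the following CDK sketch in Python (the construct ID and the deploying account ID are placeholders):
from aws_cdk import aws_iam as iam

# A role in the target (CDK_DEFAULT_ACCOUNT) account whose trust policy
# allows sample-role-arn from the deploying account to assume it.
target_role = iam.Role(
    self, "CrossAccountDeployRole",
    assumed_by=iam.ArnPrincipal("arn:aws:iam::<deploy-account-id>:role/sample-role-arn"),
)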
Could someone please advise, if this is possible?
When bootstrapping AWS account A with the CDK, I am using the --trust flag for account B:
CDK_DEFAULT_ACCOUNT=A cdk bootstrap --trust B ...
This should allow B to deploy into the A environment.
However, when a code pipeline job (with no ~/.aws directory and no environment variable credentials) in B is running cdk deploy against A it errors out with
failed: Error: Need to perform AWS calls for account A, but the current credentials are for B
The execution role for the code pipeline action in account B has admin access.
How is a process in the trusted account credentialed to deploy to the bootstrapped account?
There is a similarly titled question which is for a separate topic.
Thank you in advance for your consideration and response.
When the target account A is bootstrapped, an IAM role is created in A with a name like "cdk-...deploy-role...". By passing the --trust B flag when bootstrapping, a trust relationship is added to that deploy role which allows account B to assume it.
In the B code pipeline you need to first assume the A deploy role:
aws sts assume-role --role-arn <deploy_role_from_A_arn>
Then use the supplied credentials when invoking the cdk deploy command in code pipeline.
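For example, one way to wire this up in the pipeline's build commands (a sketch only; the deploy role name depends on your bootstrap qualifier, and account ID and region are placeholders):
export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" $(aws sts assume-role --role-arn arn:aws:iam::<account-A>:role/cdk-hnb659fds-deploy-role-<account-A>-<region> --role-session-name cdk-deploy --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" --output text))
cdk deploy --all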
I have a CDK Pipeline stack that synths and deploys some infrastructure. After the infrastructure is created, I want to build a frontend react app that knows the URL to the newly constructed API Gateway. Once the app is built, I want to move the built files to a newly created S3 bucket.
I have the first two steps working no problem. I use a CfnOutput to get the API URL and the bucket name. I then use envFromCfnOutputs in my shell step to build the react app with the right env variable set up.
I can't figure out how to move my files to an S3 bucket. I've tried for days to figure out something using s3deploy, but run into various permission issues. I thought I could just use the AWS CLI and move the files manually, but I don't know how to give the CLI command permission to add and delete objects. To make things a bit more complicated, my infrastructure is deployed to a separate account from where my pipeline lives.
Any idea how I can use the CLI, or another thought on how I can move the built files to the bucket?
// set up pipeline
const pipeline = new CodePipeline(this, id, {
  crossAccountKeys: true,
  pipelineName: id,
  synth: mySynthStep
});
// add a stage with all my constructs
const pipelineStage = pipeline.addStage(myStage);
// create a shellstep that builds and moves the frontend assets
const frontend = new ShellStep('FrontendBuild', {
  input: source,
  commands: [
    'npm install -g aws-cli',
    'cd frontend',
    'npm ci',
    'VITE_API_BASE_URL="$AWS_API_BASE_URL" npm run build',
    'aws s3 sync ./dist/ s3://$AWS_FRONTEND_BUCKET_NAME/ --delete'
  ],
  envFromCfnOutputs: {
    AWS_API_BASE_URL: myStage.apiURL,
    AWS_FRONTEND_BUCKET_NAME: myStage.bucketName
  }
})
// add my step as a post step to my stage.
pipelineStage.addPost(frontend);
I want to give this a shot and also suggest a solution for cross account pipelines.
You already figured out the first half: building the web app works by passing the CloudFormation outputs into the environment of a shell step, so the app is built with the correct values (e.g. the API endpoint URL).
You could now add permissions to a CodeBuildStep and attach a policy there to allow the step to call certain actions. That works if your pipeline and your bucket are in the same account (and also cross-account, with a lot more fiddling). But there is a problem with scoping those permissions:
The pipeline is created or self-updates before the bucket exists, so at that point you do not know the bucket name or anything else about the deployed resources; only afterwards are the resources deployed to the pipeline's own account or to another account. So you need to assign names that are known beforehand. This is a general problem, and it grows if you, for example, also need to create a CloudFront invalidation and so on.
My approach is the following (in my case for a cross account deployment):
1. Create a role with a predefined name alongside the resources, grant it the required permissions (e.g. read/write the S3 bucket, create CloudFront invalidations, ...), and allow a matching principal to assume that role (in my case an account principal)
Code snippet
const deploymentRole = new IAM.Role(this, "DeploymentRole", {
  roleName: "WebappDeploymentRole",
  assumedBy: new IAM.AccountPrincipal(pipelineAccountId),
});
// Grant permissions
bucket.grantReadWrite(deploymentRole);
2. Create a `CodeBuildStep` which has permissions to assume that role (by a pre-defined name)
Code snippet
new CodeBuildStep("Deploy Webapp", {
  rolePolicyStatements: [
    new PolicyStatement({
      actions: ["sts:AssumeRole"],
      resources: [
        `arn:aws:iam::${devStage.account}:role/${webappDeploymentRoleName}`,
      ],
      effect: Effect.ALLOW,
    }),
  ],
  ...
});
3. In the `commands` I call `aws sts assume-role` with the predefined role name and export the returned credentials to the environment for the following calls to use
Code snippet
envFromCfnOutputs: {
  bucketName: devStage.webappBucketName,
  cloudfrontDistributionID: devStage.webbappCloudfrontDistributionId,
},
commands: [
  "yarn run build-webapp",
  // Assume role, see https://stackoverflow.com/questions/63241009/aws-sts-assume-role-in-one-command
  `export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" $(aws sts assume-role --role-arn arn:aws:iam::${devStage.account}:role/${webappDeploymentRoleName} --role-session-name WebappDeploySession --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" --output text))`,
  `aws s3 sync ${webappPath}/build s3://$bucketName`,
  `aws cloudfront create-invalidation --distribution-id $cloudfrontDistributionID --paths \"/*\"`,
],
4. Other AWS CLI actions like `aws s3 sync ...` are then called with the credentials from step 3, which are now correctly scoped to exactly the actions needed.
The ShellStep is likely running under the IAM permissions/role of the pipeline. Add additional permissions to the pipeline's role and that should trickle down to the AWS CLI call.
You'll probably also need to call buildPipeline before you try to do this:
pipeline.buildPipeline();
pipeline.pipeline.addToRolePolicy(...)
I've created an AWS CDK script which deploys an ECR image to Fargate.
When executing the script from an EC2 VM (running cdk deploy with the AWS CLI tooling), I can add an IAM role to the EC2 instance, thereby granting all the permissions required, and the script deploys successfully.
However, my aim is to cdk synth the script into a CloudFormation template manually, and then deploy it from AWS Service Catalog.
This is where permissions are required, but I'm unsure where exactly to add them?
An example error I get is:
"API: ec2:allocateAddress You are not authorized to perform this operation. Encoded authorization failure message: "
I've looked into the AWS CDK docs (https://docs.aws.amazon.com/cdk/api/v1/docs/aws-iam-readme.html) thinking the CDK script needs to have the permissions embedded, but the resources I'm trying to create don't seem to have options for adding IAM permissions.
Another option would be, as with native CloudFormation templates, to add Parameters which allow attaching roles when provisioning the Product, though I haven't found a way to implement this in CDK either.
It seems like a very obvious solution would be available for this, but I've not found it! Any ideas?
The CDK script used:
from constructs import Construct
from aws_cdk import (
aws_ecs as ecs,
aws_ec2 as ec2,
aws_ecr as ecr,
aws_ecs_patterns as ecs_patterns
)
class MyConstruct(Construct):
    def __init__(self, scope: Construct, id: str, *, repository_name="my-repo"):
        super().__init__(scope, id)
        vpc = ec2.Vpc(self, "my-vpc", max_azs=3)
        cluster = ecs.Cluster(self, "my-ecs-cluster", vpc=vpc)
        repository = ecr.Repository.from_repository_name(self, "my-ecr-repo", repository_name)
        image = ecs.ContainerImage.from_ecr_repository(repository=repository)
        fargate_service = ecs_patterns.ApplicationLoadBalancedFargateService(
            self,
            "my-fargate-instance",
            cluster=cluster,
            cpu=256,
            desired_count=1,
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=image,
                container_port=3000,
            ),
            memory_limit_mib=512,
            public_load_balancer=True
        )
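On the parameter idea mentioned above: CDK can emit plain CloudFormation parameters via CfnParameter, which then appear in the synthesized template and can be filled in when the product is provisioned. A minimal sketch (the parameter name and how its value would be consumed are assumptions):
from aws_cdk import CfnParameter  # CDK v2 import; in v1 this lives in aws_cdk.core

# Hypothetical parameter: it becomes a regular CloudFormation parameter in the
# synthesized template, so a role ARN can be supplied at provisioning time.
task_role_arn = CfnParameter(
    self, "TaskRoleArn",
    type="String",
    description="ARN of an existing IAM role to attach",
)

# task_role_arn.value_as_string can then be referenced wherever a role ARN is expected,
# e.g. iam.Role.from_role_arn(self, "TaskRole", task_role_arn.value_as_string).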
When using AWS CDK to provision resources in a VPC, I am required to specify the AWS account and region through the env environment variables.
I have CLI access to my dev account, but no access to prod account.
I would like to use cdk synth to generate the CloudFormation template for the production account. To do that, I specified the production account ID in the .env file.
But the cdk synth command returns the following error.
[Error at /whitespace-app-fargate/whitespace-app-fargate/FargateStack] Could not assume role in target account using current credentials (which are for account xxxxxxxx) User: arn:aws:iam::xxxxxxxxx:user/myqinjie is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::yyyyyyyyy:role/cdk-hnb659fds-lookup-role-yyyyyyyy-ap-southeast-1 . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
Is there a way to run cdk synth to generate the CloudFormation template without this validation?
It is not possible to run cdk synth against an account that you do not have access to.
You need to use a role or user that has sufficient permissions to execute cdk synth against the production account.
May I ask what your use case is?
If you want to validate which resources will be created, you can run against your own account but use the production stage and production region.
The only thing that differs when actually deploying to production will be the account.
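For example, a sketch of that idea (the FargateStack class and stage name are taken from the question's error output and are assumptions; only the account is borrowed from dev):
import os
import aws_cdk as cdk

app = cdk.App()

# Synthesize the production-shaped stack against the dev account you have access to,
# but with the production region, so only the account differs from a real prod synth.
# FargateStack is assumed to be the stack class from the question, importable elsewhere.
FargateStack(
    app, "whitespace-app-fargate-prodlike",
    env=cdk.Environment(
        account=os.environ["CDK_DEFAULT_ACCOUNT"],  # dev account credentials
        region="ap-southeast-1",                    # production region
    ),
)

app.synth()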