I want to give CodeBuild a policy that allows it to push to an ECR repository.
But what should I attach the policy to?
I can do this manually in the AWS web console,
but it's not clear to me how to do it in CDK.
const buildProject = new codebuild.PipelineProject(this, 'buildproject', {
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_4_0,
    privileged: true,
  },
  buildSpec: codebuild.BuildSpec.fromSourceFilename("./buildspec.yml")
});

buildProject.addToRolePolicy(new iam.PolicyStatement({
  resources: [what should be here?],
  actions: ['ecr:GetAuthorizationToken']
}));
Simply call myRepository.grantPullPush(buildProject).
Reference: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecr.Repository.html#grantwbrpullwbrpushgrantee
This abstracts away the content of the policy for you.
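A minimal sketch, assuming the repository is defined in the same stack (use ecr.Repository.fromRepositoryName to import an existing one):

import * as ecr from 'aws-cdk-lib/aws-ecr';

const myRepository = new ecr.Repository(this, 'MyRepository');

// Adds the pull/push actions (ecr:BatchCheckLayerAvailability, ecr:PutImage,
// ecr:InitiateLayerUpload, ...) scoped to this repository to the project's role.
// Recent CDK versions also include ecr:GetAuthorizationToken; in older versions
// you may still need ecr.AuthorizationToken.grantRead(buildProject) as well.
myRepository.grantPullPush(buildProject);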
I am using AWS CDK to deploy CodePipeline and CodeBuild. What I am currently doing is creating the CodeBuild project in one CloudFormation stack and referencing it from the CodePipeline in a different CloudFormation stack.
Below is my code. I create a CodeBuild action like:
const action = new actions.CodeBuildAction({
  actionName: "MockEventBridge",
  type: actions.CodeBuildActionType.BUILD,
  input: input,
  project: new codebuild.PipelineProject(this, name, {
    projectName: mockName,
    environment: {
      computeType: codebuild.ComputeType.SMALL,
      buildImage,
      privileged: true,
    },
    role,
    buildSpec: codebuild.BuildSpec.fromSourceFilename(
      "cicd/buildspec/mockEventbridge.yaml"
    ),
  }),
  runOrder: 1,
});
...
const stages = [{
  stageName, actions: [action]
}];
Once the action is built, I use the code below to create the pipeline:
new codepipeline.Pipeline(this, name, {
  pipelineName: this.projectName,
  role,
  stages,
  artifactBucket
});
The problem is that both the CodeBuild project and the CodePipeline end up in one stack. If I build the CodeBuild project in a separate CloudFormation stack, how can I reference it from the CodePipeline?
Looking at the API reference https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-codepipeline.Pipeline.html, I can't find a way to reference a CodeBuild ARN in the CodePipeline instance.
Use the codebuild.Project.fromProjectArn static method to import an external Project resource using its ARN. It returns an IProject, which is what your pipeline's actions.CodeBuildAction props expect.
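A minimal sketch, assuming input is the artifact from your source stage; the ARN is a placeholder for the one from your CodeBuild stack:

import * as codebuild from '@aws-cdk/aws-codebuild';

// Placeholder ARN; substitute the ARN of the project from the other stack.
const importedProject = codebuild.Project.fromProjectArn(
  this,
  'ImportedProject',
  'arn:aws:codebuild:us-east-1:123456789012:project/my-build-project'
);

const action = new actions.CodeBuildAction({
  actionName: 'MockEventBridge',
  type: actions.CodeBuildActionType.BUILD,
  input,
  project: importedProject, // an IProject works here
  runOrder: 1,
});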
Alternatively, you can export the CodeBuild project created in one stack as a stack output value.
The exported CodeBuild project from the first stack can then be imported in the new CodePipeline stack.
You can see this page for more info https://lzygo1995.medium.com/how-to-export-and-import-stack-output-values-in-cdk-ff3e066ca6fc
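A sketch of that approach, assuming cdk is the core module alias; the export name build-project-name is an assumption:

// In the CodeBuild stack: export the project name.
new cdk.CfnOutput(this, 'BuildProjectName', {
  value: codeBuildProject.projectName,
  exportName: 'build-project-name',
});

// In the CodePipeline stack: import the value and rehydrate an IProject.
const project = codebuild.Project.fromProjectName(
  this,
  'ImportedProject',
  cdk.Fn.importValue('build-project-name')
);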
I'm using CDK to build our infra on AWS. I create an IAM user for my microservices to talk to AWS services under the defined policies. My issue is that I cannot get the AWS secret key and ID and pass them as environment variables to my container. See below:
First, I create my IAM user which will be used by my microservices.
const user = new User(this, "user", {
  userName: `${myAppEnv}-api-iam-user`,
});
Second, I'm trying to create Access Key.
const accessKey = new CfnAccessKey(this, "ApiUserAccessKey", {
  userName: user.userName,
});

const accessKeyId = new CfnOutput(this, "AccessKeyId", {
  value: accessKey.ref,
});

const accessKeySecret = new CfnOutput(this, "SecretAccessKeyId", {
  value: accessKey.attrSecretAccessKey,
});
Next, I want to pass them as environment variables:
const apiContainer = apiTaskDefinition.addContainer(containerId, {
  image: apiImage,
  environment: {
    APP_ENV: myAppEnv,
    AWS_ACCESS_KEY_ID: accessKeyId.value || ":(",
    AWS_SECRET_ACCESS_KEY: accessKeySecret.value || ":(",
    NOTIFICATIONS_TABLE_ARN: notificationsTableDynamoDBArn,
    NOTIFICATIONS_QUEUE_URL: notificationsQueueUrl,
  },
  cpu: 256,
  memoryLimitMiB: 512,
  logging: new AwsLogDriver({ streamPrefix: `${myAppEnv}-ec-api` }),
});
When my CDK deploy finishes successfully, I see the below printed out :/
Outputs:
staging-ecstack.AccessKeyId = SOMETHING
staging-ecstack.SecretAccessKeyId = SOMETHINGsy12X21xSSOMETHING2X2
Do you have any idea how I can achieve this?
Generally speaking, creating an IAM user isn't the way to go here; you're better off using an IAM role. With the CDK, a taskRole is created for you automatically when you instantiate the taskDefinition construct. You can grant it access to other constructs in your stack using the various grant* methods, as shown here:
const taskDefinition = new ecs.Ec2TaskDefinition(stack, 'TaskDef');

const container = taskDefinition.addContainer('web', {
  image: ecs.ContainerImage.fromRegistry("apps/myapp"),
  memoryLimitMiB: 256,
});
// Grant this task role access to use other resources
myDynamodbTable.grantReadWriteData(taskDefinition.taskRole);
mySnsTopic.grantPublish(taskDefinition.taskRole);
In short, you can find the answer in my blog post here:
https://blog.michaelfecher.com/i-tell-you-a-secret-provide-database-credentials-to-an-ecs-fargate-task-in-aws-cdk
To explain your issue in a bit more detail:
1. Avoid custom-created IAM roles and use the ones generated by CDK. They follow the least-privilege principle. In my blog post, the manual IAM role creation isn't necessary. Yes, I know... I need to update that. ;)
2. Use the grant* method of the corresponding resource, e.g. taskDefinition. This creates a policy statement for the role generated in 1).
3. Don't use environment variables for secrets. There are a lot of resources on the web that tell you why; one is this: https://security.stackexchange.com/questions/197784/is-it-unsafe-to-use-environmental-variables-for-secret-data Especially when working with ECS, make use of the secrets argument in the task definition (see the blog post and the sketch below).
4. It seems that you're passing the token instead of the actual secret value. That doesn't work; the token is resolved during synth / CloudFormation generation.
5. You won't need the CfnOutput. Use direct stack-to-stack passing of fields. A CfnOutput is really something you should avoid when you have all stacks under control in one CDK application; it only makes sense if you want to share values between CDK applications, which are separate deployments and repositories.
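A minimal sketch of point 3, reusing the names from the question and assuming a Secrets Manager secret named my-db-credentials already exists (v2-style imports shown):

import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

const dbSecret = secretsmanager.Secret.fromSecretNameV2(this, 'DbSecret', 'my-db-credentials');

const apiContainer = apiTaskDefinition.addContainer(containerId, {
  image: apiImage,
  environment: {
    APP_ENV: myAppEnv, // non-sensitive values only
  },
  secrets: {
    // Injected by ECS at container start; never rendered into the template.
    DB_CREDENTIALS: ecs.Secret.fromSecretsManager(dbSecret),
  },
});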
If something is unclear, don't hesitate to ask questions.
I'm trying to use AWS CDK to create a new Lambda tied to existing AWS resources that were not created with CDK and are part of a different stack.
Can I trigger my Lambda from an existing user pool using CDK? I've imported the user pool into my new stack using:
const userPool = UserPool.fromUserPoolArn(this, 'poolName', 'arn:aws:cognito-idp:eu-west-1:1234567890:userpool/poolName')
However, this gives me an IUserPool which does not have the addTrigger method. Is there a way to convert this into a UserPool in order to be able to trigger the lambda (since I can see that UserPool has the addTrigger method)?
I have seen that it is possible to e.g. grant permissions for my new lambda to read/write into an existing DynamoDB table using CDK. And I don't really understand the difference here: DynamoDB is an existing AWS resource and I'm importing it to the new stack using CDK and then allowing my new lambda to modify it. The Cognito User Pool is also an existing AWS resource, and I am able to import it in CDK but it seems that I'm not able to modify it? Why?
This was discussed in this issue. You can add triggers to an existing User Pool using a Custom Resource:
import * as CustomResources from '@aws-cdk/custom-resources';
import * as Cognito from '@aws-cdk/aws-cognito';
import * as Iam from '@aws-cdk/aws-iam';
const userPool = Cognito.UserPool.fromUserPoolId(this, "UserPool", userPoolId);
new CustomResources.AwsCustomResource(this, "UpdateUserPool", {
  resourceType: "Custom::UpdateUserPool",
  onCreate: {
    region: this.region,
    service: "CognitoIdentityServiceProvider",
    action: "updateUserPool",
    parameters: {
      UserPoolId: userPool.userPoolId,
      LambdaConfig: {
        PreSignUp: preSignUpHandler.functionArn,
      },
    },
    physicalResourceId: CustomResources.PhysicalResourceId.of(userPool.userPoolId),
  },
  policy: CustomResources.AwsCustomResourcePolicy.fromSdkCalls({
    resources: CustomResources.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

const invokeCognitoTriggerPermission = {
  principal: new Iam.ServicePrincipal('cognito-idp.amazonaws.com'),
  sourceArn: userPool.userPoolArn,
};

preSignUpHandler.addPermission('InvokePreSignUpHandlerPermission', invokeCognitoTriggerPermission);
You can also modify other User Pool settings with this method.
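One refinement worth considering: as written, the SDK call only runs when the custom resource is created. If the handler's ARN can change between deployments, reusing the same call as onUpdate keeps the trigger in sync. A sketch, as a variant of the AwsCustomResource above (replacing it, not adding to it), assuming the same variables are in scope:

const updateUserPoolCall: CustomResources.AwsSdkCall = {
  region: this.region,
  service: "CognitoIdentityServiceProvider",
  action: "updateUserPool",
  parameters: {
    UserPoolId: userPool.userPoolId,
    LambdaConfig: {
      PreSignUp: preSignUpHandler.functionArn,
    },
  },
  physicalResourceId: CustomResources.PhysicalResourceId.of(userPool.userPoolId),
};

new CustomResources.AwsCustomResource(this, "UpdateUserPool", {
  resourceType: "Custom::UpdateUserPool",
  onCreate: updateUserPoolCall,
  onUpdate: updateUserPoolCall, // re-run the call on stack updates
  policy: CustomResources.AwsCustomResourcePolicy.fromSdkCalls({
    resources: CustomResources.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});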
We have an AWS setup with a test account and a production account.
Our CodeCommit repository (Java Lambdas) lives in the test account, and we want to use CodePipeline to deploy code from there to both the test and production accounts.
I was wondering if anyone is aware of any ready-made CloudFormation (or CDK) templates that can perform this work?
Thanks
Damien
I implemented this a few days ago using CDK. The idea is to create an IAM role in the target environment and assume that role when running the CodeBuild project (which runs as part of the pipeline).
In my case, since the CodeBuild project creates CDK stacks, I gave an AdministratorAccess policy to this role.
Then, create a new CodeBuild project and attach the assume-role permission to its role:
// Create the CodeBuild project used by the CodePipeline
const codeBuildProject = new codebuild.PipelineProject(scope, `${props.environment}-${props.pipelineNamePrefix}-codebuild`, {
  projectName: `${props.environment}-${props.pipelineNamePrefix}`,
  buildSpec: codebuild.BuildSpec.fromSourceFilename('buildspec.yml'),
  environment: {
    buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2,
    privileged: true,
    environmentVariables: buildEnvVariables,
    computeType: props.computeType
  },
});

// Attach permissions to the CodeBuild project role
codeBuildProject.addToRolePolicy(new PolicyStatement({
  effect: Effect.ALLOW,
  resources: [props.deploymentRoleArn],
  actions: ['sts:AssumeRole']
}));
Be aware that props.deploymentRoleArn is the ARN of the role you created on the target environment.
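For reference, a sketch of what that role might look like in the target account's stack; the account ID 111111111111 is a placeholder for the account that runs the pipeline:

import * as iam from '@aws-cdk/aws-iam';

const deploymentRole = new iam.Role(this, 'DeploymentRole', {
  // Allow the pipeline/CodeBuild account to assume this role.
  assumedBy: new iam.AccountPrincipal('111111111111'),
});

// AdministratorAccess is used here because the build deploys CDK stacks;
// scope this down if your use case allows it.
deploymentRole.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')
);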
Then, create a new pipeline and pass codeBuildProject to a codepipelineActions.CodeBuildAction as project:
// Create the CodePipeline that deploys CDK changes
const codePipeline = new codepipeline.Pipeline(scope, `${props.environment}-${props.pipelineNamePrefix}-codepipeline`, {
  restartExecutionOnUpdate: false,
  pipelineName: `${props.environment}-${props.pipelineNamePrefix}`,
  stages: [
    {
      stageName: 'Source',
      actions: [
        new codepipelineActions.GitHubSourceAction({
          branch: props.targetBranch,
          oauthToken: gitHubToken,
          owner: props.githubRepositoryOwner,
          repo: props.githubRepositoryName,
          actionName: 'get-sources',
          output: pipelineSourceArtifact,
        }),
      ],
    },
    {
      stageName: 'Deploy',
      actions: [
        new codepipelineActions.CodeBuildAction({
          actionName: 'deploy-cdk',
          input: pipelineSourceArtifact,
          type: codepipelineActions.CodeBuildActionType.BUILD,
          project: codeBuildProject,
        }),
      ],
    },
  ],
});
The relevant part of the snippet above is the Deploy stage. The other stage is only required if you want to get the sources from GitHub; more info here.
This is the full solution. In case you want to implement something else, read more about CodePipeline actions here.
I'm using the Serverless framework to deploy my functions on AWS Lambda.
I'm trying to create a trigger automatically for each published version of my Lambda functions.
When I deploy my serverless app, the Lambda function and the triggers are created (in this case my AWS IoT trigger), as we can see in the following image:
But for the published version of the Lambda function the trigger doesn't exist, only the resources:
I don't want to create new triggers every time I publish a new Lambda version.
So, is there any way to create the triggers for my versioned Lambdas too? And, if possible, to disable the old ones using the Serverless framework?
my serverless.yml file:
service: serverless-lambdas
provider:
name: aws
runtime: nodejs6.10
iamRoleStatements:
- Effect: "Allow"
Action:
- "ses:*"
- "iot:*"
Resource:
- "*"
functions:
function1:
name: "function1"
handler: function1/handler.function1
events:
- iot:
name: "iotEvent1"
sql: "SELECT EXAMPLE"
sqlVersion: "2016-03-23"
enabled: true
UPDATE
I ran into a similar problem when I was trying to create triggers programmatically using my own AWS Lambda function.
I was stuck until I saw that the problem was that my trigger had no permission to invoke the published Lambda function. I needed to add that permission first with the add-permission method. (This is not clearly written in the AWS docs :/.)
So, before adding the trigger on the Lambda, I used the following method (in Node.js):
const AWS = require("aws-sdk");
const lambda = new AWS.Lambda();

const addPermission = (ruleName) => {
  const thingArn = `arn:aws:iot:${IOT_REGION}:${SOURCE_ACCOUNT}:rule/${ruleName}`;
  const params = {
    Action: "lambda:InvokeFunction",
    // Use a qualified name/ARN here to target a specific published version.
    FunctionName: LAMBDA_NAME,
    Principal: "iot.amazonaws.com",
    SourceAccount: SOURCE_ACCOUNT,
    SourceArn: thingArn,
    StatementId: `iot-sd-${Math.random().toString(36).substring(2) + Date.now().toString(36)}`
  };
  // addPermission(...).promise() already returns a Promise; no extra wrapper needed.
  return lambda.addPermission(params).promise();
};
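A hypothetical invocation, using the rule name from the serverless.yml above:

addPermission("iotEvent1")
  .then((result) => console.log("Permission added:", result.Statement))
  .catch((err) => console.error("addPermission failed:", err));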
I tested the same function with the Serverless framework and, shazam!, my triggers were published! We can do something like this for now, while the Serverless code is not updated.
Ultimately, this problem needs to be solved in the Serverless source code, and I will try to do that ASAP.
From what I checked, this is the default behavior of AWS Lambda, so there is no issue with the Serverless framework itself.
Every time you publish a Lambda function version, there is no way to have the trigger events created automatically.
For further information, see the documentation on versioning and aliases.