I'm trying to use AWS CDK to create a new lambda tied to already existing AWS resources which were not created using CDK and that are part of a different stack.
Can I trigger my lambda from an already existing user pool using CDK? I've imported the user pool to my new stack using:
const userPool = UserPool.fromUserPoolArn(this, 'poolName', 'arn:aws:cognito-idp:eu-west-1:1234567890:userpool/poolName')
However, this gives me an IUserPool which does not have the addTrigger method. Is there a way to convert this into a UserPool in order to be able to trigger the lambda (since I can see that UserPool has the addTrigger method)?
I have seen that it is possible to e.g. grant permissions for my new lambda to read/write into an existing DynamoDB table using CDK. And I don't really understand the difference here: DynamoDB is an existing AWS resource and I'm importing it to the new stack using CDK and then allowing my new lambda to modify it. The Cognito User Pool is also an existing AWS resource, and I am able to import it in CDK but it seems that I'm not able to modify it? Why?
This was discussed in this issue. You can add triggers to an existing User Pool using a Custom Resource:
import * as CustomResources from '@aws-cdk/custom-resources';
import * as Cognito from '@aws-cdk/aws-cognito';
import * as Iam from '@aws-cdk/aws-iam';
const userPool = Cognito.UserPool.fromUserPoolId(this, "UserPool", userPoolId);
new CustomResources.AwsCustomResource(this, "UpdateUserPool", {
  resourceType: "Custom::UpdateUserPool",
  onCreate: {
    region: this.region,
    service: "CognitoIdentityServiceProvider",
    action: "updateUserPool",
    parameters: {
      UserPoolId: userPool.userPoolId,
      LambdaConfig: {
        PreSignUp: preSignUpHandler.functionArn,
      },
    },
    physicalResourceId: CustomResources.PhysicalResourceId.of(userPool.userPoolId),
  },
  policy: CustomResources.AwsCustomResourcePolicy.fromSdkCalls({
    resources: CustomResources.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

const invokeCognitoTriggerPermission = {
  principal: new Iam.ServicePrincipal('cognito-idp.amazonaws.com'),
  sourceArn: userPool.userPoolArn,
};

preSignUpHandler.addPermission('InvokePreSignUpHandlerPermission', invokeCognitoTriggerPermission);
You can also modify other User Pool settings with this method.
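For example, assuming the same userPool and CustomResources import as above, a second custom resource could change a different setting (AutoVerifiedAttributes here is only an illustrative choice). One caveat worth knowing: the UpdateUserPool API resets any settings you omit from the call to their defaults, so on a pool with existing configuration you may need to describe the pool first and merge.

```typescript
// Sketch only: same custom-resource pattern, different UpdateUserPool parameters.
new CustomResources.AwsCustomResource(this, "UpdateUserPoolAttributes", {
  resourceType: "Custom::UpdateUserPool",
  onCreate: {
    region: this.region,
    service: "CognitoIdentityServiceProvider",
    action: "updateUserPool",
    parameters: {
      UserPoolId: userPool.userPoolId,
      // Illustrative setting; remember omitted settings are reset to defaults.
      AutoVerifiedAttributes: ["email"],
    },
    physicalResourceId: CustomResources.PhysicalResourceId.of(userPool.userPoolId),
  },
  policy: CustomResources.AwsCustomResourcePolicy.fromSdkCalls({
    resources: CustomResources.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});
```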
Related
Let's say I have a lambda function on AWS running some boto3 code. This boto3 code interacts with a variety of AWS resources, such as CloudWatch, S3, SNS, Lambda, etc. In the execution role, I obviously may need to add certain permissions, such as lambda:CreateFunction as an example.
Now I want to add a permission policy to this function and add all the necessary permissions. Currently the only way to do this seems to be to run the code, read the error about it not having access to a certain permission, and then adding that permission to the permission policy. This can get very tedious and time consuming, especially when the code interacts with a large variety of different AWS resources.
So my question is, are there any ways to just see what permissions the boto3 code will require before running it? Maybe somebody has made a script for this before that reads through the code and prints out the permissions that would be necessary to run it?
If you build your services using the AWS Cloud Development Kit (CDK), then you can simplify the process of granting your Lambda permissions to the correct services.
As a simple example, say you have a Lambda function and a DynamoDB table. You design the infrastructure in CDK with something like this:
const someTable = new dynamodb.Table(this, 'someTable', {
  tableName: 'someTable',
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
  partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
});

const someLambdaFunction = new lambda.Function(this, 'someLambdaFunction', {
  functionName: 'someLambdaFunction',
  runtime: lambda.Runtime.NODEJS_16_X,
  handler: 'index.handler',
  memorySize: 128,
  timeout: cdk.Duration.seconds(10),
  code: lambda.Code.fromAsset(path.join(__dirname, '../yourpath')),
});
And then you can grant permissions in one line:
someTable.grantFullAccess(someLambdaFunction);
CDK then generates the CloudFormation templates with the correct roles and permissions.
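If full access is more than the function actually needs, the table construct also exposes narrower grant methods; a sketch using the names from the example above:

```typescript
// Least-privilege alternatives to grantFullAccess:
someTable.grantReadData(someLambdaFunction);      // read-only: GetItem, Query, Scan, ...
someTable.grantWriteData(someLambdaFunction);     // write-only: PutItem, UpdateItem, DeleteItem, ...
someTable.grantReadWriteData(someLambdaFunction); // both of the above
```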
In our environment there is a dedicated AWS account that contains a registered domain as well as a hosted zone in Route53. An IAM role is also created that allows a specific set of other accounts to create records in that hosted zone.
Using AWS CDK (v2), is there a way to create an API Gateway in one account, with a DNS record (an A record?) created for it in that dedicated one?
This is an example of setup:
export class CdkRoute53ExampleStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const backend = new lambda.Function(this, 'HelloHandler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      code: lambda.Code.fromAsset('src'),
      handler: 'hello.handler'
    });

    const restApi = new apigw.LambdaRestApi(this, 'Endpoint', {
      handler: backend,
      domainName: {
        domainName: `cdk53.${Config.domainName}`,
        certificate: acm.Certificate.fromCertificateArn(
          this,
          "my-cert",
          Config.certificateARN
        ),
      },
      endpointTypes: [apigw.EndpointType.REGIONAL]
    });

    new route53.ARecord(this, "apiDNS", {
      zone: route53.HostedZone.fromLookup(this, "baseZone", {
        domainName: Config.domainName,
      }),
      recordName: "cdk53",
      target: route53.RecordTarget.fromAlias(
        new route53targets.ApiGateway(restApi)
      ),
    });
  }
}
Basically I need that last ARecord construct to be created under the credentials of an assumed role in another account.
As far as I am aware, a CDK stack is built and deployed entirely within the context of a single IAM user (aka identity). I.e. you can't run different bits of the stack as different IAM users. (As an aside, code which uses the regular AWS SDK - such as lambdas - can switch identities using STS.)
The solution therefore is to do as much as you can using the CDK (in account B). Once that is complete, the final step - registering the DNS record - is done using a different identity which operates within account A.
Registering the DNS record could be done using AWS CLI commands, or you could even create another (mini) stack just for this purpose.
Either way you would execute the second step as an identity which is allowed to write records to the hosted zone in account A.
This could be achieved by using a different --profile with your CLI or CDK commands. Or you could use STS to assume a role which is allowed to create the DNS record in account A.
Using STS has the advantage that you don't need to know credentials of account A. But I've found STS to have a steep learning curve and can be a little confusing to get right.
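As a sketch of the CLI route, using STS to assume a hypothetical record-writer role in account A (every ARN, zone ID, and domain name below is a placeholder):

```shell
# Assume the cross-account role in account A (placeholder ARN).
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/Route53RecordWriterRole \
  --role-session-name dns-record-update \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

# Upsert the alias record in account A's hosted zone (placeholder IDs and names).
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "cdk53.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZEXAMPLEAPIGW",
          "DNSName": "d-abc123.execute-api.eu-west-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```

Note that AliasTarget.HostedZoneId is the region-specific hosted zone ID belonging to API Gateway itself, not your own zone's ID.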
EDIT: it seems the CDK stack in account B can actually switch roles when registering a DNS record by virtue of the CrossAccountZoneDelegationRecord construct and the delegationRole attribute - see https://stackoverflow.com/a/72097522/226513 This means that you can keep all your code in the account B CDK stack.
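Based on that answer, a rough sketch of the delegation approach inside the account-B stack (the role ARN and zone names are hypothetical): account B owns a sub-zone, and the construct assumes the account-A role to write the NS delegation records into the parent zone. The ARecord for the API can then be created in account B's own sub-zone.

```typescript
import * as iam from 'aws-cdk-lib/aws-iam';
import * as route53 from 'aws-cdk-lib/aws-route53';

// Sub-zone owned by this (account B) stack; the zone name is hypothetical.
const subZone = new route53.PublicHostedZone(this, 'SubZone', {
  zoneName: 'cdk53.example.com',
});

// Role in account A that is allowed to write to the parent zone (placeholder ARN).
const delegationRole = iam.Role.fromRoleArn(
  this,
  'DelegationRole',
  'arn:aws:iam::111111111111:role/Route53DelegationRole'
);

// Writes the NS delegation records into account A's zone under the assumed role.
new route53.CrossAccountZoneDelegationRecord(this, 'Delegation', {
  delegatedZone: subZone,
  parentHostedZoneName: 'example.com',
  delegationRole,
});
```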
I currently have a DynamoDB table that's been in use for a couple of years, originally created in the console. It contains lots of valuable data. It uses a stream with a lambda trigger to periodically send a snapshot of the table to S3 for analytics. The table itself is heavily used by end users to access their data.
I want to migrate my solution into CDK. The options I want to explore:
1. When you use the Table.fromTableArn construct, you don't get access to the table stream ARN, so it's impossible to attach a lambda trigger. Is there a way around this?
2. Is there a way to clone the DynamoDB table contents in my CDK stack so that my copy starts off in exactly the same state as the original? Then I can add and manage the stream etc. in CDK, no problem.
It's worth checking my assumption that these are the only two options.
Subscribing a Lambda to an existing DynamoDB table:
We don't need the table to be created within the same stack. We can't use addEventSource on the lambda, but we can use addEventSourceMapping and add the necessary policies to the Lambda ourselves, which is what addEventSource does behind the scenes.
const streamsArn =
  "arn:aws:dynamodb:us-east-1:110011001100:table/test/stream/2021-03-18T06:25:21.904";

const myLambda = new lambda.Function(this, "my-lambda", {
  code: new lambda.InlineCode(`
    exports.handler = (event, context, callback) => {
      console.log('event', event);
      callback(null, '10');
    }
  `),
  handler: "index.handler",
  runtime: lambda.Runtime.NODEJS_10_X,
});

const eventSource = myLambda.addEventSourceMapping("test", {
  eventSourceArn: streamsArn,
  batchSize: 5,
  startingPosition: StartingPosition.TRIM_HORIZON,
  bisectBatchOnError: true,
  retryAttempts: 10,
});

const roleUpdates = myLambda.addToRolePolicy(
  new iam.PolicyStatement({
    actions: [
      "dynamodb:DescribeStream",
      "dynamodb:GetRecords",
      "dynamodb:GetShardIterator",
      "dynamodb:ListStreams",
    ],
    resources: [streamsArn],
  })
);
Importing an existing DynamoDB table into CDK:
We re-write the DynamoDB table with the same attributes in CDK, run cdk synth to generate the CloudFormation template, and then use CloudFormation's resource import to bring the existing table into the stack. Here is an SO answer
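As a sketch of that flow with hypothetical stack and table names (the import itself is a CloudFormation feature driven here via the CLI; newer CDK versions also offer a cdk import command in preview):

```shell
# 1. Define the table in CDK with the exact attributes of the live table and a
#    Retain removal policy, then synthesize the template.
cdk synth MyStack > template.yaml

# 2. Create an IMPORT change set mapping the template's logical ID to the live table.
aws cloudformation create-change-set \
  --stack-name MyStack \
  --change-set-name import-table \
  --change-set-type IMPORT \
  --template-body file://template.yaml \
  --resources-to-import '[{
    "ResourceType": "AWS::DynamoDB::Table",
    "LogicalResourceId": "MyTable",
    "ResourceIdentifier": { "TableName": "my-existing-table" }
  }]'

# 3. Review and then execute the change set.
aws cloudformation execute-change-set \
  --stack-name MyStack \
  --change-set-name import-table
```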
I'm using CDK to build our infra on AWS. I create an IAM user for my microservices to talk to AWS services under the defined policies. My issue is that I cannot get the AWS secret access key and key ID and pass them as environment variables to my container. See below:
First, I create my IAM user which will be used by my microservices.
const user = new User(this, "user", {
  userName: `${myAppEnv}-api-iam-user`,
});
Second, I'm trying to create Access Key.
const accessKey = new CfnAccessKey(this, "ApiUserAccessKey", {
  userName: user.userName,
});

const accessKeyId = new CfnOutput(this, "AccessKeyId", {
  value: accessKey.ref,
});

const accessKeySecret = new CfnOutput(this, "SecretAccessKeyId", {
  value: accessKey.attrSecretAccessKey,
});
Next, I want to pass it as an env variable.
const apiContainer = apiTaskDefinition.addContainer(containerId, {
  image: apiImage,
  environment: {
    APP_ENV: myAppEnv,
    AWS_ACCESS_KEY_ID: accessKeyId.value || ":(",
    AWS_SECRET_ACCESS_KEY: accessKeySecret.value || ":(",
    NOTIFICATIONS_TABLE_ARN: notificationsTableDynamoDBArn,
    NOTIFICATIONS_QUEUE_URL: notificationsQueueUrl,
  },
  cpu: 256,
  memoryLimitMiB: 512,
  logging: new AwsLogDriver({ streamPrefix: `${myAppEnv}-ec-api` }),
});
When my CDK deploy finishes successfully, I see the below printed out :/
Outputs:
staging-ecstack.AccessKeyId = SOMETHING
staging-ecstack.SecretAccessKeyId = SOMETHINGsy12X21xSSOMETHING2X2
Do you have any idea how I can achieve this?
Generally speaking, creating an IAM user isn't the way to go here; you're better off using an IAM role. With the CDK, a taskRole is created for you automatically when you instantiate the taskDefinition construct. You can then give that role access to other constructs in your stack using the various grant* methods:
const taskDefinition = new ecs.Ec2TaskDefinition(stack, 'TaskDef');

const container = taskDefinition.addContainer('web', {
  image: ecs.ContainerImage.fromRegistry("apps/myapp"),
  memoryLimitMiB: 256,
});

// Grant this task role access to use other resources
myDynamodbTable.grantReadWriteData(taskDefinition.taskRole);
mySnsTopic.grantPublish(taskDefinition.taskRole);
In short, find the answer in my blogpost here:
https://blog.michaelfecher.com/i-tell-you-a-secret-provide-database-credentials-to-an-ecs-fargate-task-in-aws-cdk
To explain in a bit more detail on your issue:
1. Avoid custom-created IAM roles and use the ones generated by CDK. They are aligned with the least-privilege principle. In my blog post, the manual IAM role creation isn't necessary. Yes, I know... I need to update that. ;)
2. Use the grant* method of the corresponding resource, e.g. taskDefinition. This will create a policy statement for the role generated in 1).
3. Don't use environment variables for secrets. There are a lot of resources on the web that tell you why; one is this: https://security.stackexchange.com/questions/197784/is-it-unsafe-to-use-environmental-variables-for-secret-data Especially when working with ECS, make use of the secrets argument in the task definition (see the blog post).
4. It seems that you're passing the token instead of the actual secret value. That doesn't work: the token is only resolved during synth / CloudFormation generation.
5. You won't need the CfnOutput. Pass fields directly from stack to stack instead. CfnOutput is really something you should avoid when all stacks are under control in one CDK application; it only makes sense if you want to share between CDK applications, which are separate deployments and repositories.
If something is unclear, don't hesitate to ask questions.
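The secrets argument mentioned above can be sketched roughly like this, assuming aws-cdk-lib v2, an existing taskDefinition, and a placeholder secret name in Secrets Manager:

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

// Reference an existing secret (the name is a placeholder).
const dbSecret = secretsmanager.Secret.fromSecretNameV2(this, 'DbSecret', 'my-db-secret');

const container = taskDefinition.addContainer('api', {
  image: ecs.ContainerImage.fromRegistry('apps/myapp'),
  memoryLimitMiB: 512,
  environment: {
    // Non-sensitive configuration can stay in plain environment variables.
    APP_ENV: 'staging',
  },
  secrets: {
    // ECS fetches the value at task start and injects it, so it never appears
    // in the task definition or CloudFormation template in plain text.
    DB_PASSWORD: ecs.Secret.fromSecretsManager(dbSecret, 'password'),
  },
});
```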
I am using the new higher-level GraphqlApi class instead of the lower-level constructs to create my AppSync API and connect it to a table.
this.api = new GraphqlApi(...);
The new GraphqlApi instance allows you to simply add datasources:
this.api.addDynamoDbDataSource('name', tableRef);
If you look at the example code at https://docs.aws.amazon.com/cdk/api/latest/docs/aws-appsync-readme.html, I notice that they do not create a role to grant AppSync permission to access the table, unlike this snippet:
const itemsTableRole = new Role(this, 'ItemsDynamoDBRole', {
  assumedBy: new ServicePrincipal('appsync.amazonaws.com'),
});
This snippet I got from this example: https://github.com/aws-samples/aws-cdk-examples/blob/master/typescript/appsync-graphql-dynamodb/index.ts
In that example they still use the CfnGraphQLApi construct, so there they add the role and attach a policy to it for the table, with permission to perform specific actions. Which makes sense.
So my question is: when using the GraphqlApi class, if I don't add a role, I can't execute my queries. And if I don't add permissions like:
this.appSyncRole.addToPolicy(new PolicyStatement({
  actions: ['dynamodb:*'],
  resources: [`${table.tableArn}/index/*`],
  effect: Effect.ALLOW,
}));
then I am getting an error like:
"message":"User: arn:aws:sts::[ACCOUNT]:assumed-role/[ROLENAME]/APPSYNC_ASSUME_ROLE is not authorized to perform: dynamodb:Query on resource: arn:aws:dynamodb[REGION]:[ACCOUNT]:table/[TABLENAME]/index/byEmail
So is this example for GraphqlApi not complete, or am I missing something else?