AWS CDK Working with Existing DynamoDB and Streams - amazon-web-services

I'm migrating my cloud solution to the CDK. I can see how to add a stream to a new DynamoDB table in the constructor through the TableProps:
const newTable = new dynamodb.Table(this, 'new Table', {
  tableName: 'streaming',
  partitionKey: { name: 'id', type: dynamodb.AttributeType.NUMBER },
  stream: dynamodb.StreamViewType.NEW_AND_OLD_IMAGES,
});
but there is no apparent way to enable a stream on an existing DynamoDB table. I can't seem to access the TableProps of an existing table.
const sandpitTable = dynamodb.Table.fromTableArn(this, 'sandpitTable', 'arn:aws:dynamodb:ap-southeast-2:xxxxxxxxxxxxxx:table/Sandpit');
sandpitTable.grantStreamRead(streamLambda);
// sandpitTable. ??? what to do?
How can this be achieved? And how does the solution take disaster recovery into account and prevent accidental deletion of the DynamoDB table, something that isn't possible when using the console?

Enabling streams is just another attribute of the 'AWS::DynamoDB::Table' resource in CloudFormation, and I don't believe we can make changes to a resource that was created in another stack (or manually) from a different CloudFormation/CDK stack unless we import the resource.
Here is the documentation; I'll try to summarize.
Assume we have an existing CDK project which is deployed without the Metadata resource: cdk --no-version-reporting deploy
Assume we have the DynamoDB table 'streaming' with partition key 'id', as you stated.
Add the CDK code below with the same attributes as the original table (RCU, WCU, keys, etc.). For simplicity I've only set the name and key; the removalPolicy is a must:
const myTable = new dynamodb.Table(this, "dynamo-table", {
  tableName: "streaming",
  partitionKey: { name: "id", type: dynamodb.AttributeType.NUMBER },
  removalPolicy: cdk.RemovalPolicy.RETAIN,
});
We can now synth and generate the CloudFormation template (by default into the cdk.out folder): cdk --no-version-reporting synth
Grab the logical ID from the generated .json file; in my case it is dynamotableF6720B98.
Create a change set with the right table name and logical ID:
aws cloudformation create-change-set --stack-name HelloCdkStack --change-set-name ImportChangeSet --change-set-type IMPORT --resources-to-import "[{\"ResourceType\":\"AWS::DynamoDB::Table\",\"LogicalResourceId\":\"dynamotableF6720B98\",\"ResourceIdentifier\":{\"TableName\":\"streaming\"}}]" --template-body file://cdk.out/HelloCdkStack.template.json
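If the escaped JSON becomes unwieldy, the same resource list can be kept in a local file (import.json here is just an illustrative name) and passed via file://:
[
  {
    "ResourceType": "AWS::DynamoDB::Table",
    "LogicalResourceId": "dynamotableF6720B98",
    "ResourceIdentifier": { "TableName": "streaming" }
  }
]
aws cloudformation create-change-set --stack-name HelloCdkStack --change-set-name ImportChangeSet --change-set-type IMPORT --resources-to-import file://import.json --template-body file://cdk.out/HelloCdkStack.template.json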
Execute the change set:
aws cloudformation execute-change-set --change-set-name ImportChangeSet --stack-name HelloCdkStack
It's best to check for drift and make any necessary changes:
aws cloudformation detect-stack-drift --stack-name HelloCdkStack
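Note that detect-stack-drift only kicks off detection and returns a detection ID; once it finishes, the per-resource results can be read back with:
aws cloudformation describe-stack-resource-drifts --stack-name HelloCdkStack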
To your other question about preventing accidental deletion: we can simply add a removal policy to keep the DynamoDB table from being deleted when the stack or resource is deleted.
removalPolicy: RemovalPolicy.RETAIN
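Putting it together: once the import has succeeded, the stream can be enabled through a normal cdk deploy, since the table is now managed by the stack. A sketch, assuming the same table as above:
const myTable = new dynamodb.Table(this, "dynamo-table", {
  tableName: "streaming",
  partitionKey: { name: "id", type: dynamodb.AttributeType.NUMBER },
  // enable the stream only after the import has completed; this is a regular update
  stream: dynamodb.StreamViewType.NEW_AND_OLD_IMAGES,
  // keeps the table (and its data) if the stack or resource is ever deleted
  removalPolicy: cdk.RemovalPolicy.RETAIN,
});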

Related

CDK Codepipeline CloudFormationCreateUpdateStackAction getting "S3: Access Denied" only with Nested Stacks

I am trying to set up a CDK CodePipeline for updating the CDK project itself. The project is under one stack, with multiple nested stacks created in its constructor. The pipeline is in a second stack, with the service stack passed in to access its name. I am using CloudFormationCreateUpdateStackAction to update the stack after running cdk synth and putting the output in an artifact using CodeBuild.
pipeline.addStage({
  stageName: 'ServiceUpdate',
  actions: [
    new CloudFormationCreateUpdateStackAction({
      actionName: 'Service_Update',
      stackName: props.serviceStack.stackName,
      templatePath: cdkPipelineBuildOutput.atPath(
        `${props.serviceStack.stackName}.template.json`
      ),
      adminPermissions: true,
    }),
  ],
});
This is able to update the stack if it is empty or has some resources directly in it; however, if there is a nested stack inside the service stack I get
S3: AccessDenied
for each of the nested stacks inside of the stack.
If I run "cdk deploy ExampleServiceStackName" from my terminal with admin credentials, the nested stacks are created/updated correctly, leading me to believe that there is something wrong with the IAM roles of CodeBuild or CodePipeline here. But I don't know where to start, as I have set adminPermissions to true in the CloudFormationCreateUpdateStackAction.
I also manually set admin permissions by calling addToDeploymentRolePolicy on the CloudFormationCreateUpdateStackAction and on the CodePipeline, passing
const policy = new PolicyStatement({
  resources: ['*'],
  actions: ['*'],
  effect: Effect.ALLOW,
});
with no change in the access denied error.
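For reference, the attachment itself would look roughly like this (a sketch; updateAction stands in for the CloudFormationCreateUpdateStackAction instance created in the stage above):
updateAction.addToDeploymentRolePolicy(policy);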
I also make sure to specify "cdk synth --all" in my CI script in an attempt to ensure the nested stacks' templates are synthesized.
Other Stack Overflow questions I have read:
S3 error: Access Denied when deploying CFN template with Nested Stacks
This Q was related to a typo in a manually written CloudFormation template. I have looked at the generated templates, and the nested stack name is correctly generated and referenced by CDK. cdk deploy from the local terminal also works, further leading me to believe there is no typo problem. I also pass the service stack as a prop and use its stackName property to avoid a typo when accessing the template.
If you spot a way a typo could still be causing the problem, please let me know, as that would be the best-case scenario.
Codepipeline S3 Bucket access denied in Codebuild
This Q says it was solved by granting permissions to the CMK on the S3 bucket. I have used a CodePipeline Artifact as the source of the "cdk synth -> CloudFormation templates". I'm not aware of any KMS CMK being used in this setup. If there is a way to specify decryption abilities on the artifact, maybe that would help.
If there is a way to get more verbose error messages about the S3: Access Denied status, that would also be appreciated. It doesn't even say which S3 bucket is being denied; I'm just having to assume.
Thanks for any suggestions.

Create Dynamic S3 bucket names through CDK

I'm trying to create an S3 bucket through CDK using the following code:
const myBucket = new Bucket(this, 'mybucket', {
  bucketName: `NewBucket`
});
Since S3 bucket names must be globally unique, stack deployment fails when I try to deploy to another account.
I can change the bucket name manually every time I deploy, but is there a way to add 'NewBucket-${Stack.AWSaccountId}' dynamically so that whenever the stack is deployed to any AWS account the bucket gets created without error?
You can prepend the AWS account ID like:
const myBucket = new Bucket(this, `${id}-bucket`, {
  bucketName: `${this.account}-newbucket`, // bucket names must be lowercase
});
But generally I'd recommend extending the default props, passing them into your stack, and providing a prefix/name if you want something specific for each environment, as the account ID is regarded by AWS as sensitive.
For example:
export interface Config extends cdk.StackProps {
  readonly params: {
    readonly environment: string;
  };
}
Then you can use ${props.params.environment} in your bucket name.
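For example, a minimal sketch using that Config (keeping the name lowercase, as S3 requires):
const myBucket = new Bucket(this, 'mybucket', {
  bucketName: `newbucket-${props.params.environment}`,
});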
If you do not specify the bucket name, it will generate one for you that will be unique among accounts.
Otherwise, generate your own hash and append to the end of your bucket name string.
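A minimal sketch of the hash approach, assuming the stack is constructed with an explicit env so that account and region resolve to concrete strings at synth time (hashing also avoids exposing the raw account ID in the name):
import { createHash } from 'crypto';

// derive a short, stable suffix that differs per account/region
const suffix = createHash('sha256')
  .update(`${this.account}/${this.region}`)
  .digest('hex')
  .slice(0, 8);

const myBucket = new Bucket(this, 'mybucket', {
  bucketName: `newbucket-${suffix}`,
});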
Edit: While you could programmatically pull the account number and feed that into the stack as a variable for your bucket name append, I wouldn't recommend attaching an account number to an S3 bucket name for security reasons.
I name my buckets projectprefix-name-stage and my CloudFormation resources ProjectprefixNameStage (CamelCase), and avoid names with random suffixes.
So the CloudFormation name MyProjectDataBucketProduction becomes my-project-data-bucket-production.

Add existing dynamodb table with stream to CDK

I currently have a DynamoDB table that's been in use for a couple of years, originally created in the console. It contains lots of valuable data. It uses a stream with a Lambda trigger to periodically send a snapshot of the table to S3 for analytics. The table itself is heavily used by end users to access their data.
I want to migrate my solution into CDK. The options I want to explore:
When you use the Table.fromTableArn construct, you don't get access to the table's stream ARN, so it's impossible to attach a Lambda trigger. Is there a way around this?
Is there a way to clone the DynamoDB table contents in my CDK stack so that my copy starts off in exactly the same state as the original? Then I can add and manage the stream etc. in CDK, no problem.
It's worth checking my assumption that these are the only 2 options.
Subscribing a Lambda to an existing DynamoDB table:
We don't need the table to be created within the same stack. We can't use addEventSource on the Lambda, but we can use addEventSourceMapping and add the necessary policies to the Lambda ourselves, which is what addEventSource does behind the scenes.
import * as lambda from "@aws-cdk/aws-lambda";
import * as iam from "@aws-cdk/aws-iam";
import { StartingPosition } from "@aws-cdk/aws-lambda";

const streamsArn =
  "arn:aws:dynamodb:us-east-1:110011001100:table/test/stream/2021-03-18T06:25:21.904";

const myLambda = new lambda.Function(this, "my-lambda", {
  code: new lambda.InlineCode(`
    exports.handler = (event, context, callback) => {
      console.log('event', event)
      callback(null, '10')
    }
  `),
  handler: "index.handler",
  runtime: lambda.Runtime.NODEJS_10_X,
});

// Wire the Lambda to the existing table's stream
const eventSource = myLambda.addEventSourceMapping("test", {
  eventSourceArn: streamsArn,
  batchSize: 5,
  startingPosition: StartingPosition.TRIM_HORIZON,
  bisectBatchOnError: true,
  retryAttempts: 10,
});

// Grant the permissions that addEventSource would otherwise add for us
myLambda.addToRolePolicy(
  new iam.PolicyStatement({
    actions: [
      "dynamodb:DescribeStream",
      "dynamodb:GetRecords",
      "dynamodb:GetShardIterator",
      "dynamodb:ListStreams",
    ],
    resources: [streamsArn],
  })
);
Importing an existing DynamoDB table into CDK:
We re-create the table with the same attributes in CDK, synth to generate the CloudFormation template, and use resource import to bring the existing resource into the stack. Here is an SO answer.

Add Lambda trigger to imported Cognito User Pool with AWS CDK

I'm trying to use the AWS CDK to create a new Lambda tied to already existing AWS resources which were not created using CDK and are part of a different stack.
Can I trigger my lambda from an already existing user pool using CDK? I've imported the user pool to my new stack using:
const userPool = UserPool.fromUserPoolArn(this, 'poolName', 'arn:aws:cognito-idp:eu-west-1:1234567890:userpool/poolName');
However, this gives me an IUserPool which does not have the addTrigger method. Is there a way to convert this into a UserPool in order to be able to trigger the lambda (since I can see that UserPool has the addTrigger method)?
I have seen that it is possible to e.g. grant permissions for my new lambda to read/write into an existing DynamoDB table using CDK. And I don't really understand the difference here: DynamoDB is an existing AWS resource and I'm importing it to the new stack using CDK and then allowing my new lambda to modify it. The Cognito User Pool is also an existing AWS resource, and I am able to import it in CDK but it seems that I'm not able to modify it? Why?
This was discussed in this issue. You can add triggers to an existing User Pool using a Custom Resource:
import * as CustomResources from '@aws-cdk/custom-resources';
import * as Cognito from '@aws-cdk/aws-cognito';
import * as Iam from '@aws-cdk/aws-iam';

// userPoolId is the ID of the existing pool; preSignUpHandler is the trigger Lambda, defined elsewhere
const userPool = Cognito.UserPool.fromUserPoolId(this, "UserPool", userPoolId);

new CustomResources.AwsCustomResource(this, "UpdateUserPool", {
  resourceType: "Custom::UpdateUserPool",
  onCreate: {
    region: this.region,
    service: "CognitoIdentityServiceProvider",
    action: "updateUserPool",
    parameters: {
      UserPoolId: userPool.userPoolId,
      LambdaConfig: {
        PreSignUp: preSignUpHandler.functionArn,
      },
    },
    physicalResourceId: CustomResources.PhysicalResourceId.of(userPool.userPoolId),
  },
  policy: CustomResources.AwsCustomResourcePolicy.fromSdkCalls({
    resources: CustomResources.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

// Allow Cognito to invoke the trigger Lambda
const invokeCognitoTriggerPermission = {
  principal: new Iam.ServicePrincipal('cognito-idp.amazonaws.com'),
  sourceArn: userPool.userPoolArn,
};

preSignUpHandler.addPermission('InvokePreSignUpHandlerPermission', invokeCognitoTriggerPermission);
You can also modify other User Pool settings with this method.
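Note that as written the updateUserPool call only runs when the custom resource is first created. If the trigger configuration may change on later deploys, the same SDK call can also be supplied as onUpdate; the construct above could be rewritten as (a sketch, reusing the call object):

const updateUserPoolCall: CustomResources.AwsSdkCall = {
  region: this.region,
  service: "CognitoIdentityServiceProvider",
  action: "updateUserPool",
  parameters: {
    UserPoolId: userPool.userPoolId,
    LambdaConfig: { PreSignUp: preSignUpHandler.functionArn },
  },
  physicalResourceId: CustomResources.PhysicalResourceId.of(userPool.userPoolId),
};

new CustomResources.AwsCustomResource(this, "UpdateUserPool", {
  resourceType: "Custom::UpdateUserPool",
  onCreate: updateUserPoolCall,
  onUpdate: updateUserPoolCall, // re-run the call on stack updates as well
  policy: CustomResources.AwsCustomResourcePolicy.fromSdkCalls({
    resources: CustomResources.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});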

AWS 'Bucket already exists' - how to "migrate" existing resources to CloudFormation?

We have already created some infrastructure manually and with Terraform, including some S3 buckets. In the future I would like to use pure CloudFormation to define the infrastructure as code.
So I created a CloudFormation yaml definition which references an existing bucket:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  TheBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-existing-bucket-name
When I try to apply it, execution fails with the following CloudFormation stack event:
The following resource(s) failed to update: [TheBucket].
12:33:47 UTC+0200 UPDATE_FAILED AWS::S3::Bucket TheBucket
my-existing-bucket-name already exists
How can I start managing existing resources with CloudFormation without recreating them? Or is it impossible by design?
You will need to create a new bucket and sync the data from the old bucket to the new bucket. I have not seen a way to use and modify an existing S3 bucket.
The Resources section of a CloudFormation template defines which resources should be created by CloudFormation. Try referring to the existing resources by defining them as parameters instead, as sketched below.
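For example (a sketch), the existing bucket's name can be passed in as a parameter and referenced with !Ref instead of being declared as a resource:
Parameters:
  ExistingBucketName:
    Type: String
    Default: my-existing-bucket-name
Anywhere the bucket name is needed, use !Ref ExistingBucketName (or build its ARN with Fn::Sub).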
You should be able to import it by using the "Import resources into stack" option:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-existing-stack.html
As the documentation explains, you should add a "DeletionPolicy": "Retain" attribute to the already existing resources in your stack.
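Applied to the template above, that looks like this (a sketch):
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  TheBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: my-existing-bucket-name
With the DeletionPolicy in place, the "Import resources into stack" flow can adopt the bucket without recreating it.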