I have one DynamoDB table, and there is a secondary index on the table.
But I have faced a duplication problem when querying.
I don't want my Lambda function to track the secondary index...
I looked at IAM policies, but there is no relevant policy.
How can I solve this problem? This is my Lambda function: dynamodb-to-es.py in vladhoncharenko/aws-dynamodb-to-elasticsearch (master branch).
This is probably because you have many Lambda functions or many Lambda function versions in your account for that region.
The relevant limit is "Total size of all the deployment packages that can be uploaded per region: 75 GB".
Looks like this is a pretty common problem for serverless, and someone has developed a plugin to help alleviate this issue: https://github.com/claygregory/serverless-prune-plugin
If you want to deal with this manually, you'll need to use either the console or an SDK/CLI to delete old Lambda versions: https://docs.aws.amazon.com/cli/latest/reference/lambda/delete-function.html
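For instance, a rough boto3 sketch of the manual cleanup (the function name here is hypothetical, and in practice you would want to skip any version still referenced by an alias):

```python
import boto3

lambda_client = boto3.client("lambda")
function_name = "my-function"  # hypothetical name, replace with your own

# Walk all published versions and delete everything except $LATEST.
paginator = lambda_client.get_paginator("list_versions_by_function")
for page in paginator.paginate(FunctionName=function_name):
    for version in page["Versions"]:
        if version["Version"] != "$LATEST":
            # Before doing this for real, skip versions still referenced by an alias.
            lambda_client.delete_function(
                FunctionName=function_name,
                Qualifier=version["Version"],
            )
```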
I am using AWS SAM to define my app, and I am defining a DynamoDB table using this: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-tablename
However, I am worried that in Prod, this will lead to deleting the table and its content.
How do others handle this? Is there a way to keep the table and not drop and recreate it?
- Use change sets and check them carefully to ensure that you are not causing a replacement of the table.
- Use DeletionPolicy and UpdateReplacePolicy attributes to ensure you do not lose data, even if you do replace the table by accident (see the template snippet after this list).
- Use a stack policy to block updates to the resource.
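As a minimal sketch of the second point in a SAM/CloudFormation template (the logical name and key schema here are just placeholders):

```yaml
Resources:
  MyTable:                       # hypothetical logical name
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain       # keep the table if the resource is deleted
    UpdateReplacePolicy: Retain  # keep the old table if an update forces a replacement
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
```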
Here is the thing: I have a Serverless project that creates many AWS resources (Lambdas, API Gateway, etc.). Now I need to change the tags I used a couple of months ago, but when I try to run Serverless I see this message: "A version for this Lambda function exists (6). Modify the function to create a new version.". I have been reading and applying a couple of different workarounds, but I keep hitting the same issue.
Has anybody seen this behavior? Is there a way to retag all resources without deleting the whole stack or doing it manually?
Thanks for your recommendations.
You can use a Serverless plugin (serverless-plugin-resource-tagging). It will tag your Lambda functions, DynamoDB tables, buckets, streams, API Gateway, and CloudFront resources. The way it works is that you provide stackTags containing your tags under the provider section of serverless.yml.
provider:
  stackTags:
    STACK: "${self:service}"
    PRODUCT: "Product Name"
    COPYRIGHT: "Copyright"
You can also update tag values using this plugin.
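Assuming the standard plugin setup, the plugin itself is also registered under the plugins section of serverless.yml:

```yaml
plugins:
  - serverless-plugin-resource-tagging
```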
I'm trying to find out if it's possible to copy a snapshot from one account to another in a different region in one go, without an intermediate step (meaning copy/share it to the other account and then copy it from the new account to the other region), using a Lambda function and boto3.
I have searched the AWS documentation but with no luck.
When you need such "complex" logic, it can be implemented with either CloudFormation or Terraform. The flow will be as the comments suggested: copy to another region and give permission to another account.
This AWS blog speaks of a similar requirement with example CloudFormation templates here.
If you are unfamiliar with CloudFormation, you can get started with their docs, but it isn't something you can pick up when in a hurry. It is just good practice to develop early on.
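If you do want to keep it in a single Lambda function with boto3, a rough sketch of the two steps might look like the below (this assumes an EBS snapshot; the IDs, regions, and account number are placeholders, and encrypted snapshots additionally need the KMS key shared):

```python
import boto3

# Placeholder values; replace with your own.
SOURCE_REGION = "eu-west-1"
DEST_REGION = "us-east-1"
SOURCE_SNAPSHOT_ID = "snap-0123456789abcdef0"
TARGET_ACCOUNT_ID = "123456789012"

# The copy call is made against the destination region.
ec2 = boto3.client("ec2", region_name=DEST_REGION)
copy = ec2.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=SOURCE_SNAPSHOT_ID,
    Description="Cross-region copy for sharing",
)
new_snapshot_id = copy["SnapshotId"]

# Wait until the copy completes before sharing it.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[new_snapshot_id])

# Give the other account permission to use the copied snapshot.
ec2.modify_snapshot_attribute(
    SnapshotId=new_snapshot_id,
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[TARGET_ACCOUNT_ID],
)
```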
Need your help in understanding some concepts. I have a web application that uses Lambda@Edge on CloudFront. This Lambda function accesses DynamoDB, making around 10 independent queries. This generates occasional errors, though it works perfectly when I test the Lambda function standalone. I am not able to make much sense of the CloudFront logs, and Lambda@Edge does not show up in CloudWatch.
I have a feeling that the DynamoDB queries are the culprit (because that is all I am doing in the Lambda function). To make sure, I replicated the data across all regions, but that has not solved the problem. I increased the timeout and memory allocated to the Lambda function, but that has not helped either. However, reducing the number of DB queries seems to help.
Can you please help me understand this? Is it wrong to make DB queries in Lambda@Edge? Is there a way to get detailed logs from Lambda@Edge?
Over a year late, but you never know, someone may benefit from it. Lambda@Edge does not run in a specific region, so if you connect to a DynamoDB table, you need to define the region in which that table can be found.
In Node.js this would look like the below:
// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');
// Set the region
AWS.config.update({region: 'REGION'});
// Create DynamoDB document client
var docClient = new AWS.DynamoDB.DocumentClient({apiVersion: '2012-08-10'});
As F_SO_K mentioned, you can find your CloudWatch logs in the region closest to you. To find out which region that would be (in case you're the only one using that specific Lambda@Edge function), you can have a look at this documentation.
Lambda@Edge logs show up in CloudWatch under the region in which the Lambda was called. I suspect you simply need to go into CloudWatch and change to the correct region to see the logs. If you are making the requests yourself, this will be the region you are in, not the region in which you created the Lambda.
Once you have the log you should have much more information to go on.
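If you would rather find the right region programmatically, a rough boto3 sketch, assuming the function was authored in us-east-1 so its edge log groups are named /aws/lambda/us-east-1.<function name> (the function name below is hypothetical):

```python
import boto3

FUNCTION_NAME = "my-edge-function"  # hypothetical name
LOG_GROUP = f"/aws/lambda/us-east-1.{FUNCTION_NAME}"

# Check every region for the replicated function's log group.
ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    logs = boto3.client("logs", region_name=region)
    groups = logs.describe_log_groups(logGroupNamePrefix=LOG_GROUP)["logGroups"]
    if groups:
        print(region, [g["logGroupName"] for g in groups])
```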
I would like to be able to perform a PITR restore without losing the benefit of infrastructure-as-code with CloudFormation.
Specifically, if I perform the PITR restore manually and then point the application to the new database, won't that result in the new DynamoDB table falling outside the CloudFormation-managed infrastructure? AFAIK, there is no mechanism at the moment to add a resource to CloudFormation after it has already been created.
Has anyone solved this problem?
There is now a way to import existing resources into CloudFormation.
This means that you can do a PITR restore and then import the newly created table into your stack.
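For illustration, a rough boto3 sketch of the import flow (the stack, table, and template names are hypothetical; the template must already describe the restored table, with a DeletionPolicy set, under the same logical ID):

```python
import boto3

cfn = boto3.client("cloudformation")

# Create a change set of type IMPORT that brings the restored table under the stack.
change_set = cfn.create_change_set(
    StackName="my-app-stack",
    ChangeSetName="import-restored-table",
    ChangeSetType="IMPORT",
    ResourcesToImport=[
        {
            "ResourceType": "AWS::DynamoDB::Table",
            "LogicalResourceId": "MyTable",
            "ResourceIdentifier": {"TableName": "my-table-restored"},
        }
    ],
    TemplateURL="https://s3.amazonaws.com/my-bucket/template-with-restored-table.yaml",
)

# Review the change set (console or describe_change_set), then execute it:
# cfn.execute_change_set(ChangeSetName=change_set["Id"])
```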
You are correct, the restored table will be outside CloudFormation's control. The only solution that I know of is to write a script that copies the data from the recovered table back to the original table. Obviously there is cost and time involved in that, and it is less than ideal.
As ever, there is always the option to write a custom resource, but that somewhat undermines the point of using CloudFormation in the first place.