Tables automatically deleted with no action to cause this

I have a few Java Lambda functions that depend on DynamoDB. The issue I am having is that my tables are being automatically deleted from AWS for no apparent reason. Has anyone had this happen to them?

Enable CloudTrail logging and you will be able to see who deleted the tables the next time this happens. There is a tutorial on how to do this here.
By the way, double-check that you're in the right region; sometimes I find all my AWS resources missing, and then I notice I'm in the wrong region.
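As a rough sketch of what that CloudTrail lookup can look like programmatically, here is an example with the AWS SDK for JavaScript; DeleteTable is the CloudTrail event name for DynamoDB table deletion, but the region and result limit below are assumptions to adjust:

```javascript
// Sketch: find recent DeleteTable calls via CloudTrail Event history.
// Assumes CloudTrail is enabled and the AWS SDK for JavaScript (v2) is installed.
const AWS = require('aws-sdk');

const cloudtrail = new AWS.CloudTrail({ region: 'us-east-1' }); // adjust region

async function findTableDeletions() {
  const params = {
    LookupAttributes: [
      // Filter Event history down to DynamoDB DeleteTable calls
      { AttributeKey: 'EventName', AttributeValue: 'DeleteTable' },
    ],
    MaxResults: 50,
  };
  const { Events } = await cloudtrail.lookupEvents(params).promise();
  for (const event of Events) {
    // EventTime and Username tell you when the table was deleted and by whom
    console.log(event.EventTime, event.Username, event.Resources);
  }
}

findTableDeletions().catch(console.error);
```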

Related

Is it possible for CloudFormation to throw an error if it's built in the wrong region

Lambda@Edge needs to be built in us-east-1. Is there a way to get CloudFormation to emit an error message when someone attempts to build the stack in the wrong region? Currently, I use conditions to create the resources only if the region is us-east-1, and not to create them in any other region. Since no resources get created, the Outputs section fails. This has the desired effect, but it doesn't explain to the user that they are being forced into failure because they are in the wrong region. Any ideas?
the Outputs section fails.
You can put conditions in the Outputs section as well, to conditionally create only the outputs you want.
But back to your question: no, CloudFormation will not automatically throw an error when your users use the wrong region, unless the resources simply cannot be created there. You could use a custom resource that checks the region and errors out.
Other than that, you could probably craft IAM policies that allow your users to launch CloudFormation stacks only in the regions you want.
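A minimal sketch of what such a region-checking custom resource handler could look like, written as a Node.js Lambda; the 'region-check' id is hypothetical, while the response fields follow the standard CloudFormation custom resource contract:

```javascript
// Sketch: Lambda handler backing a CloudFormation custom resource that
// fails the stack with an explicit message when deployed outside us-east-1.
const https = require('https');
const url = require('url');

exports.handler = async (event) => {
  const inRightRegion = process.env.AWS_REGION === 'us-east-1';
  const response = {
    Status: inRightRegion ? 'SUCCESS' : 'FAILED',
    Reason: inRightRegion
      ? 'Region check passed'
      : 'This stack must be created in us-east-1 (Lambda@Edge requirement).',
    PhysicalResourceId: 'region-check', // static id; nothing real is created
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
  };
  // Deletes must always succeed, or the stack can get stuck in DELETE_FAILED.
  if (event.RequestType === 'Delete') response.Status = 'SUCCESS';

  // Send the result back to the pre-signed S3 URL CloudFormation provides.
  const body = JSON.stringify(response);
  const { hostname, path } = url.parse(event.ResponseURL);
  await new Promise((resolve, reject) => {
    const req = https.request(
      { hostname, path, method: 'PUT', headers: { 'content-length': body.length } },
      resolve
    );
    req.on('error', reject);
    req.write(body);
    req.end();
  });
};
```

The Reason string is what shows up in the stack events, which addresses the "explain to the user" part of the question.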

AWS JS SDK: Weird behavior GetBucketLifecycleConfiguration after adding rules

I found a really weird bug in the AWS JS SDK.
If I delete/add lifecycle rules on my S3 bucket (using putBucketLifecycleConfiguration) and fetch these rules just after that call using getBucketLifecycleConfiguration, I can receive more or fewer rules than I put with putBucketLifecycleConfiguration.
If I keep calling getBucketLifecycleConfiguration, I continue to receive more or fewer rules than I put; it seems to be random behavior...
Do you know if this is a known bug, or the reason for this behavior?
NOTE: The behavior seems to be the same with aws s3api get-bucket-lifecycle-configuration and also in the AWS Management Console.
Maybe we have to wait a moment for AWS to replicate the data across all its servers?
Thanks!
This seems to be the normal behavior of AWS; look at this link:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html
Propagation delay
When you add an S3 Lifecycle configuration to a bucket, there is usually some lag before a new or updated Lifecycle configuration is fully propagated to all the Amazon S3 systems. Expect a delay of a few minutes before the configuration fully takes effect. This delay can also occur when you delete an S3 Lifecycle configuration.
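Given that propagation delay, a pragmatic workaround is to poll until the read-back matches what you wrote. A rough sketch with the AWS SDK for JavaScript; comparing by rule count and the retry/sleep values are arbitrary assumptions:

```javascript
// Sketch: write lifecycle rules, then poll until the read-back matches,
// to absorb S3's propagation lag on lifecycle configuration.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function putLifecycleAndWait(bucket, rules, maxAttempts = 10) {
  await s3
    .putBucketLifecycleConfiguration({
      Bucket: bucket,
      LifecycleConfiguration: { Rules: rules },
    })
    .promise();

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const { Rules } = await s3
        .getBucketLifecycleConfiguration({ Bucket: bucket })
        .promise();
      // Comparing rule counts is a crude check; compare rule IDs for rigor.
      if (Rules.length === rules.length) return Rules;
    } catch (err) {
      // A brand-new configuration may briefly read back as missing.
      if (err.code !== 'NoSuchLifecycleConfiguration') throw err;
    }
    await sleep(5000); // wait a bit before re-reading
  }
  throw new Error('Lifecycle configuration did not propagate in time');
}
```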

AWS: how to determine who deleted a Lambda trigger

Since last week we have recorded irregular deletions of an AWS Lambda trigger.
We would like to find out exactly when this happened in order to determine the reason/cause of the deletion. We tried looking for entries in CloudTrail, but we are not sure what exactly to look for.
How do we find the root cause and reason for the deletion?
Thanks Marcin and ydaetskcoR. We found the problem. The Lambda trigger is a property of the S3 bucket. We had different Lambda triggers in different projects (with different Terraform states). So every time one Terraform project was applied, the trigger of the other project was overwritten, because that project's Terraform state is not aware of it. We saw PutBucketNotification in CloudTrail, but didn't recognize the connection...
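For anyone hitting the same issue: the trigger really is just part of the bucket's notification configuration, which you can inspect directly. A small sketch with the AWS SDK for JavaScript; the bucket name is a placeholder:

```javascript
// Sketch: show that a Lambda "trigger" on S3 is just the bucket's
// notification configuration -- whoever writes it last wins.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function showTriggers(bucket) {
  const config = await s3
    .getBucketNotificationConfiguration({ Bucket: bucket })
    .promise();
  // Each entry maps S3 events to a Lambda function ARN. A put from another
  // tool (e.g. a second Terraform project) replaces this whole list.
  console.log(config.LambdaFunctionConfigurations);
}

showTriggers('my-example-bucket').catch(console.error);
```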
You can troubleshoot operational and security incidents over the past 90 days in the CloudTrail console by viewing Event history. You can look up events related to the creation, modification, or deletion of resources. To view events in the logs, follow this guide:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/get-and-view-cloudtrail-log-files.html

Describing snapshots whose associated volume has been deleted or is no longer present

I was trying to do cost optimization for my AWS account, and I came across the snapshot count: there are lots of snapshots in my console.
Some of these snapshots were created from volumes that have since been deleted.
How can I describe the snapshots whose volume is no longer present? (I know we can use ec2-describe-snapshots, but I need the filters and a way to get it.)
Thanks in advance. :)
If I were you, I would create a Lambda function with this code and have it executed daily by CloudWatch Events; this way you clean up regularly without having to remember! ;)
I am going to reference the Node.js API here, but the method to the madness is the same for all the SDKs.
Use EC2 describeSnapshots to get your collection for iteration (http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html#describeSnapshots-property).
For each snapshot, call describeVolumes, passing the VolumeId from the snapshot result. If the volume doesn't exist anymore, you will get an error (http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html#describeVolumes-property).
Call deleteSnapshot to delete the snapshots you no longer need (http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html#deleteSnapshot-property).
Should be a fun little project! :)
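A minimal sketch of those three steps with the AWS SDK for JavaScript; the OwnerIds filter and the InvalidVolume.NotFound error-code check are my own assumptions, and the actual deletion is left commented out for safety:

```javascript
// Sketch: find snapshots whose source volume no longer exists.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();

async function findOrphanedSnapshots() {
  // Step 1: list the snapshots you own (paginates via NextToken).
  let snapshots = [];
  let NextToken;
  do {
    const page = await ec2
      .describeSnapshots({ OwnerIds: ['self'], NextToken })
      .promise();
    snapshots = snapshots.concat(page.Snapshots);
    NextToken = page.NextToken;
  } while (NextToken);

  // Step 2: for each snapshot, check whether its volume still exists.
  const orphans = [];
  for (const snap of snapshots) {
    try {
      await ec2.describeVolumes({ VolumeIds: [snap.VolumeId] }).promise();
    } catch (err) {
      // A missing volume surfaces as an InvalidVolume.NotFound error.
      if (err.code === 'InvalidVolume.NotFound') orphans.push(snap);
      else throw err;
    }
  }

  // Step 3: delete the orphans (commented out until you trust the output).
  for (const snap of orphans) {
    console.log('orphaned snapshot:', snap.SnapshotId, 'volume:', snap.VolumeId);
    // await ec2.deleteSnapshot({ SnapshotId: snap.SnapshotId }).promise();
  }
  return orphans;
}

findOrphanedSnapshots().catch(console.error);
```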

AWS S3 folder deletion history

Is there a way to get the deletion history of an AWS S3 bucket?
Problem statement:
Some S3 folders got deleted. Is there a way to figure out when they were deleted?
There are at least two ways to accomplish what you want to do, but both are disabled by default.
The first one is to enable server access logging on your bucket(s), and the second one is to use AWS CloudTrail.
You might be out of luck if this already happened and you had no auditing set up, though.
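If you do set up the first option going forward, enabling server access logging is a single call. A rough sketch with the AWS SDK for JavaScript; the bucket names are placeholders, and note that the target bucket must permit the S3 log delivery service to write to it:

```javascript
// Sketch: enable server access logging so future deletions are recorded.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function enableAccessLogging(sourceBucket, logBucket) {
  await s3
    .putBucketLogging({
      Bucket: sourceBucket,
      BucketLoggingStatus: {
        LoggingEnabled: {
          TargetBucket: logBucket,          // where log files are delivered
          TargetPrefix: `${sourceBucket}/`, // keeps logs grouped per bucket
        },
      },
    })
    .promise();
}

enableAccessLogging('my-data-bucket', 'my-log-bucket').catch(console.error);
```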