I found a really weird bug in the AWS JS SDK.
If I delete/add LifecycleRules on my S3 bucket (using putBucketLifecycleConfiguration) and fetch these rules right after that call using getBucketLifecycleConfiguration, I can receive more or fewer rules than I put with putBucketLifecycleConfiguration.
If I keep calling getBucketLifecycleConfiguration, I continue to receive more or fewer rules than I put; it seems to be random behavior...
Do you know if this is a known bug, or the reason for this behavior?
NOTE: The same behavior occurs with aws s3api get-bucket-lifecycle-configuration AND also in the AWS Management Console.
Maybe we have to wait a moment for AWS to replicate the data across all of its servers?
Thanks!
It seems to be the normal behavior of AWS; look at this link:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html
Propagation delay
When you add an S3 Lifecycle configuration to a bucket, there is usually some lag before a new or updated Lifecycle configuration is fully propagated to all the Amazon S3 systems. Expect a delay of a few minutes before the configuration fully takes effect. This delay can also occur when you delete an S3 Lifecycle configuration.
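Given that, the randomness you're seeing is just eventual consistency. If you need to confirm the update programmatically, one workaround is to poll getBucketLifecycleConfiguration until the returned rules match what you just put. A minimal sketch with the AWS SDK for JavaScript (v2); the bucket name, region, and five-second poll interval are placeholder choices:
// Poll until the lifecycle configuration has propagated.
// Region below is an assumption for illustration.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({region: 'us-east-1'});

async function waitForLifecycleRules(bucket, expectedRuleIds, maxAttempts = 10) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await s3.getBucketLifecycleConfiguration({Bucket: bucket}).promise();
      const ids = (res.Rules || []).map(r => r.ID).sort();
      if (JSON.stringify(ids) === JSON.stringify([...expectedRuleIds].sort())) {
        return res.Rules; // configuration has converged
      }
    } catch (err) {
      // While a delete propagates, the call can fail with NoSuchLifecycleConfiguration
      if (err.code !== 'NoSuchLifecycleConfiguration') throw err;
    }
    await new Promise(resolve => setTimeout(resolve, 5000)); // wait 5s between polls
  }
  throw new Error('Lifecycle configuration did not converge in time');
}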
I am looking for a way to monitor any changes that occur to my production environment, such as security group changes, EC2 create/stop/delete events, database changes, S3 bucket changes, route table changes, subnet changes, etc. I was looking at using CloudTrail for this and monitoring all API calls. However, when testing, my subscribed SNS topic was not receiving any notifications when I made some changes as a test. Curious if anyone else has a workaround for this, or if I am missing something? Maybe Lambda? Just looking for the easiest way to receive email notifications when any changes are made within my prod environment. Thank you.
If you're looking to audit the entire event history of AWS API calls then you would use CloudTrail, remembering to create a trail and to enable the relevant options (data events) if you want to audit S3 or Lambda API calls.
By itself CloudTrail will provide auditing, but it can be combined with CloudWatch/EventBridge to automate actions based on specific API calls, such as triggering a Lambda function or publishing to an SNS topic.
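For example, a minimal sketch of that wiring with the AWS SDK for JavaScript (v2): it creates an EventBridge rule matching security group changes recorded by CloudTrail and points it at an SNS topic. The rule name, region, topic ARN, and event names are assumptions for illustration, and the topic's policy must allow events.amazonaws.com to publish:
// Sketch: route specific CloudTrail-recorded API calls to SNS via EventBridge
var AWS = require('aws-sdk');
var events = new AWS.EventBridge({region: 'us-east-1'}); // assumed region

var ruleName = 'notify-on-sg-changes'; // hypothetical rule name
var eventPattern = {
  source: ['aws.ec2'],
  'detail-type': ['AWS API Call via CloudTrail'],
  detail: {
    eventName: ['AuthorizeSecurityGroupIngress', 'RevokeSecurityGroupIngress']
  }
};

events.putRule({Name: ruleName, EventPattern: JSON.stringify(eventPattern)}).promise()
  .then(function () {
    return events.putTargets({
      Rule: ruleName,
      // Placeholder topic ARN; the topic policy must allow events.amazonaws.com
      Targets: [{Id: 'sns-target', Arn: 'arn:aws:sns:us-east-1:123456789012:prod-changes'}]
    }).promise();
  })
  .then(function () { console.log('Rule and target created'); })
  .catch(console.error);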
Regarding your own implementation so far: when using SNS, always ensure the subscription has been confirmed by the subscriber(s) first.
In addition, you can use AWS Config with many resource types in AWS, which provides two benefits: you can maintain a history of changes to your resources, while also being able to configure compliance and remediation rules for them.
Since last week we have recorded irregular deletions of an AWS Lambda trigger.
We would like to find out exactly when this happened, to determine the reason/cause of the deletion. We tried looking for entries in CloudTrail, but we are not sure what exactly to look for.
How do we find the root cause and reason for the deletion?
Thanks Marcin and ydaetskcoR. We found the problem. The Lambda trigger is a property of the S3 bucket. We had different Lambda triggers in different projects (with different Terraform states). So every time one (Terraform) project was applied, the trigger of the other project got overwritten, because that Terraform state was not aware of it. We saw PutBucketNotification calls in CloudTrail, but didn't recognize the connection...
You can troubleshoot operational and security incidents from the past 90 days in the CloudTrail console by viewing Event history. You can look up events related to the creation, modification, or deletion of resources. To view events in the logs, follow this:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/get-and-view-cloudtrail-log-files.html
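You can also query Event history programmatically. A minimal sketch with the AWS SDK for JavaScript (v2), filtering on the event name the poster eventually found; the region is an assumption:
// Look up recent management events by name in CloudTrail Event history
var AWS = require('aws-sdk');
var cloudtrail = new AWS.CloudTrail({region: 'us-east-1'}); // assumed region

var params = {
  LookupAttributes: [
    {AttributeKey: 'EventName', AttributeValue: 'PutBucketNotification'}
  ],
  MaxResults: 50
};

cloudtrail.lookupEvents(params, function (err, data) {
  if (err) return console.error(err);
  data.Events.forEach(function (e) {
    // Each record includes who made the call and when
    console.log(e.EventTime, e.Username, e.EventName);
  });
});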
Need your help in understanding some concepts. I have a web application that uses Lambda@Edge on CloudFront. This Lambda function accesses DynamoDB, making around 10 independent queries. This generates occasional errors, though it works perfectly when I test the Lambda function standalone. I am not able to make much sense of the CloudFront logs, and Lambda@Edge does not show up in CloudWatch.
I have a feeling that the DynamoDB queries are the culprit (because that is all I am doing in the Lambda function). To make sure, I replicated the data across all regions, but that has not solved the problem. I increased the timeout and the memory allocated to the Lambda function, but that has not helped in any way. However, reducing the number of DB queries does seem to help.
Can you please help me understand this? Is it wrong to make DB queries in Lambda@Edge? Is there a way to get detailed logs from Lambda@Edge?
Over a year late, but you never know, someone might benefit from it. Lambda@Edge does not run in a specific region; hence, if you connect to a DynamoDB table, you need to define the region in which that table can be found.
In Node.js this would result in the below:
// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');
// Set the region
AWS.config.update({region: 'REGION'});
// Create DynamoDB document client
var docClient = new AWS.DynamoDB.DocumentClient({apiVersion: '2012-08-10'});
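A read against the table would then look like the below; the table name and key are placeholders:
// Example read using the client above (placeholder table name and key)
var params = {
  TableName: 'my-table',
  Key: {id: 'some-id'}
};
docClient.get(params, function(err, data) {
  if (err) console.error(err);
  else console.log(data.Item);
});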
As F_SO_K mentioned, you can find your CloudWatch logs in the region closest to you. To find out which region that would be (in case you're the only one using that specific Lambda@Edge function), you can have a look at this documentation.
Lambda@Edge logs show up in CloudWatch under the region in which the Lambda was executed. I suspect you simply need to go into CloudWatch and switch to the correct region to see the logs. If you are triggering the Lambda yourself, this will be the region you are in, not the region in which you created the Lambda.
Once you have the log you should have much more information to go on.
How do you route AWS Web Application Firewall (WAF) logs to an S3 bucket? Is this something I can quickly do through the AWS Console? Or, would I have to use a lambda function (invoked by a CloudWatch timer event) to query the WAF logs every n minutes?
UPDATE:
I'm interested in the ACL logs (Source IP, URI, Matched Rule, Request Headers, Action, Time, etc.).
UPDATE (05/15/2017)
AWS doesn't provide an easy way to view/parse these logs. You can get a "random sample" via the get-sampled-requests command, which isn't acceptable...
Gets detailed information about a specified number of requests--a sample--that AWS WAF randomly selects from among the first 5,000 requests that your AWS resource received during a time range that you choose. You can specify a sample size of up to 500 requests, and you can specify any time range in the previous three hours.
http://docs.aws.amazon.com/cli/latest/reference/waf/get-sampled-requests.html
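For what it's worth, the same sampled data is reachable from the JS SDK as well. A minimal sketch using the classic AWS WAF client in the AWS SDK for JavaScript (v2); the web ACL ID, rule ID, and region are placeholders:
// Fetch a random sample of requests seen by a WAF rule (classic WAF API)
var AWS = require('aws-sdk');
var waf = new AWS.WAF({region: 'us-east-1'}); // assumed region

var now = new Date();
var params = {
  WebAclId: 'YOUR_WEB_ACL_ID',   // placeholder
  RuleId: 'YOUR_RULE_ID',        // placeholder
  MaxItems: 500,                 // sample size cap per the docs
  TimeWindow: {
    StartTime: new Date(now.getTime() - 3 * 60 * 60 * 1000), // previous three hours max
    EndTime: now
  }
};

waf.getSampledRequests(params, function (err, data) {
  if (err) console.error(err);
  else console.log(data.SampledRequests);
});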
Also, I'm not the only one experiencing this issue:
https://forums.aws.amazon.com/thread.jspa?threadID=220202
I was looking for this functionality today and stumbled across the referenced thread. It was, coincidentally, updated today:
Hello,
Thanks for your input. I have submitted a feature request on your
behalf to export WAF events to S3 for long term analysis.
Best Regards, albertpataws
The lack of this feature strikes me as being almost as odd as the fact that I can't change timezones for graphs.
Is there a way to get the deletion history of an AWS S3 bucket?
Problem Statement:
Some S3 folders got deleted. Is there a way to figure out when they were deleted?
There are at least two ways to accomplish what you want to do, but both are disabled by default.
The first one is to enable server access logging on your bucket(s), and the second one is to use AWS CloudTrail.
You might be out of luck if this already happened and you had no auditing set up, though.
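If you're setting this up so you're covered next time, server access logging can be enabled with a single call. A minimal sketch with the AWS SDK for JavaScript (v2); both bucket names and the region are placeholders, and the target bucket needs a policy that allows S3 to deliver logs to it:
// Enable server access logging on a bucket (placeholder names throughout)
var AWS = require('aws-sdk');
var s3 = new AWS.S3({region: 'us-east-1'}); // assumed region

var params = {
  Bucket: 'my-data-bucket',
  BucketLoggingStatus: {
    LoggingEnabled: {
      TargetBucket: 'my-log-bucket',  // must already exist and allow log delivery
      TargetPrefix: 'access-logs/'
    }
  }
};

s3.putBucketLogging(params, function (err) {
  if (err) console.error(err);
  else console.log('Server access logging enabled');
});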