AWS S3 log for DeleteObject?

How can I use AWS services like CloudTrail or CloudWatch to check which user performed a DeleteObject event?
I can use an S3 event notification to send a delete event to SNS and notify an email address that a specific file has been deleted from the S3 bucket, but the message does not contain the username of whoever did it.
I can use CloudTrail to log all events related to an S3 bucket to another bucket, but when I tested it, it logged many details yet only the PutObject event, not DeleteObject.
Is there an easy way to monitor an S3 bucket to find out which user deleted which file?
Update 19 Aug
Following Walt's answer below, I was able to log the DeleteObject event. However, I can only get the file name (requestParameters.key) for PutObject, not for DeleteObjects.
| # | @timestamp | userIdentity.arn | eventName | requestParameters.key |
| - | ---------- | ---------------- | --------- | --------------------- |
| 1 | 2019-08-19T09:21:09.041-04:00 | arn:aws:iam::ID:user/me | DeleteObjects | |
| 2 | 2019-08-19T09:18:35.704-04:00 | arn:aws:iam::ID:user/me | PutObject | test.txt |
It looks like other people have had the same issue and AWS is working on it: https://forums.aws.amazon.com/thread.jspa?messageID=799831

Here is my setup.
Detailed instructions on setting up CloudTrail in the console are in the AWS documentation. When setting up the trail, double-check these two options (a rough CLI equivalent follows them):
That you are logging S3 writes. You can do this for all S3 buckets or just the one you are interested in. You don't need to enable read logging to answer this question.
That you are sending events to CloudWatch Logs.
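If you prefer to script those two settings, here is a rough CLI sketch; the trail name, bucket, log group ARN, and role ARN are placeholders to substitute with your own values.

# Log write-only data events for one bucket (use "arn:aws:s3:::" in Values to cover all buckets).
aws cloudtrail put-event-selectors \
  --trail-name my-trail \
  --event-selectors '[{
    "ReadWriteType": "WriteOnly",
    "IncludeManagementEvents": true,
    "DataResources": [{
      "Type": "AWS::S3::Object",
      "Values": ["arn:aws:s3:::my-bucket/"]
    }]
  }]'

# Point the trail at a CloudWatch Logs group (ARNs here are placeholders).
aws cloudtrail update-trail \
  --name my-trail \
  --cloud-watch-logs-log-group-arn arn:aws:logs:us-east-1:123456789012:log-group:CloudTrail/DefaultLogGroup:* \
  --cloud-watch-logs-role-arn arn:aws:iam::123456789012:role/CloudTrail_CloudWatchLogs_Role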
If you made changes to the S3 write logging, you might have to wait a little while. If you haven't had breakfast, lunch, a snack, or dinner, now would be a good time.
If you're using the same default CloudWatch log group as I have above, this link to the CloudWatch Logs Insights search should work for you.
This is a query that will show you all S3 DeleteObject calls. If the link doesn't work:
Go to the CloudWatch console.
Select Logs -> Insights on the left-hand side.
Enter the value for "Select log group(s)" that you specified above.
Enter this in the query field:
fields @timestamp, userIdentity.arn, eventName, requestParameters.bucketName, requestParameters.key
| filter eventSource = "s3.amazonaws.com"
| filter eventName = "DeleteObject"
| sort @timestamp desc
| limit 20
If you have any CloudTrail S3 DeleteObject calls in the last 30 minutes, the last 20 events will be shown.
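If you would rather run the same search from the CLI instead of the console, something along these lines should work; the log group name is a placeholder for whichever group your trail writes to, and the eventName filter is widened to catch both DeleteObject and DeleteObjects:

LOG_GROUP="CloudTrail/DefaultLogGroup"   # placeholder: the log group your trail writes to

# Kick off a Logs Insights query over the last 30 minutes (GNU date; on macOS use: date -v-30M +%s).
QUERY_ID=$(aws logs start-query \
  --log-group-name "$LOG_GROUP" \
  --start-time $(date -d '30 minutes ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, userIdentity.arn, eventName, requestParameters.key
| filter eventSource = "s3.amazonaws.com" and eventName like /DeleteObject/
| sort @timestamp desc
| limit 20' \
  --output text --query queryId)

# Results are available once the query finishes (re-run if the status is still Running).
aws logs get-query-results --query-id "$QUERY_ID"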

As of 2021/04/12, CloudTrail does not record the object key(s) or path for DeleteObjects calls.
If you delete an object with the S3 console, it always calls DeleteObjects.
If you want the object key recorded, you will need to delete individual files with DeleteObject (minus the s). This can be done with the AWS CLI (aws s3 rm s3://some-bucket/single-filename) or direct API calls, as sketched below.
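For reference, both of the following issue a single-key DeleteObject call (the bucket and key names are the placeholders used above), so the key should then appear in requestParameters.key:

# High-level command; for a single key this issues one DeleteObject call.
aws s3 rm s3://some-bucket/single-filename

# Low-level equivalent that maps one-to-one onto the DeleteObject API.
aws s3api delete-object --bucket some-bucket --key single-filename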

How to use CloudTrail to get who created IAM user

How can I use CloudTrail to find out who created an IAM user? How can I get this from the logs?
If the IAM user was created within the last 90 days, you can find out who created the user using the CloudTrail Event history.
Using the AWS CLI:
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=CreateUser --region us-east-1
Using the console: go to Event history in the CloudTrail service and choose the Event name filter with a value of CreateUser. You have to use the us-east-1 region to view these events.
If the IAM user was created outside the 90-day window, you can still find out who created the user if you have a trail enabled in CloudTrail. You can use Amazon Athena or some other method to search the log files created by CloudTrail in S3.
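For the 90-day case, a JMESPath --query can pull the interesting fields straight out of the lookup-events response; the field names below assume the standard lookup-events output shape, and Resources[0].ResourceName is simply the first resource recorded for the event:

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateUser \
  --region us-east-1 \
  --query 'Events[].{When:EventTime,CreatedBy:Username,NewUser:Resources[0].ResourceName}' \
  --output table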
References:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events-cli.html (Note the disclaimer for global services post November 22, 2021)
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events-console.html
https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html

AWS Put Subscription Filter for Kinesis Firehose using Cloudformation - Check if the given Firehose stream is in ACTIVE state

I am following this guide and creating a Kinesis Firehose stream.
I have followed the guide, and when I get to creating a subscription filter (step 12), I encounter this error when trying to send to S3:
An error occurred (InvalidParameterException) when calling the PutSubscriptionFilter operation: Could not deliver test message to specified Firehose stream. Check if the given Firehose stream is in ACTIVE state.
I can confirm that the stream is active and I can send test data via the console and it arrives in S3 as expected.
This is the command I am running (changed my account id):
aws logs put-subscription-filter --log-group-name "myLogGroup" --filter-name "Destination" --filter-pattern "{$.userIdentity.type = Root}" --destination-arn "arn:aws:firehose:ap-southeast-1:1234567890:deliverystream/my-delivery-stream" --role-arn "arn:aws:iam::1234567890:role/CWLtoKinesisFirehoseRole"
I have checked the trusted entities, and the role has privileges for logs and Firehose. Any ideas?
I also struggled with this for a long time; for me it was these two gotchas (some quick CLI checks follow them):
Step 4 in the guide:
make sure to change the bucket name to your own bucket.
Step 8 (!!!):
make sure to put in your own account ID; it is not highlighted in the guide.
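A couple of quick sanity checks for those gotchas; the stream, region, and role names below are taken from the question and are only placeholders:

# 1. Confirm the delivery stream is ACTIVE in the same region you pass to put-subscription-filter.
aws firehose describe-delivery-stream \
  --delivery-stream-name my-delivery-stream \
  --region ap-southeast-1 \
  --query 'DeliveryStreamDescription.DeliveryStreamStatus'

# 2. Confirm the role's trust policy really contains your own account ID and the logs service principal.
aws iam get-role \
  --role-name CWLtoKinesisFirehoseRole \
  --query 'Role.AssumeRolePolicyDocument'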
I am sure you already know how to configure the logs subscription filter, so I am not adding those steps to my answer.
Go to Firehose and check the logs to see whether your Firehose has access to execute the Lambda; if not, add the required role.
Now start a dummy data stream using the Firehose test and see whether your data makes it through to Lambda or S3.
Check CloudTrail and CloudWatch Logs to see if you find any errors.
Open your IAM role and check that all the required policies are attached to it, then click Trust relationships and add "logs", "IAM", and the component name (in my case it's "EC2").
Hope this is helpful in resolving your issue.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample

How to know which S3 bucket triggers which Lambda?

How can I know which S3 bucket triggers which Lambda without going through all the Lambdas?
You can look at these triggers under the bucket's events. When you open an S3 bucket, navigate to Properties and, under that, Events. You can also delete or edit the trigger configuration from that panel. Hope it helps.
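If you would rather not click through the console, roughly the same information is available per bucket from the CLI; my-bucket is a placeholder, and Lambda triggers show up under LambdaFunctionConfigurations:

# Check one bucket (you may need --region for buckets outside your default region).
aws s3api get-bucket-notification-configuration --bucket my-bucket

# Or sweep every bucket in the account (can be slow if you have many buckets).
aws s3api list-buckets --query 'Buckets[].Name' --output text | tr '\t' '\n' | while read b; do
  echo "BUCKET ${b}:"
  aws s3api get-bucket-notification-configuration --bucket "${b}"
done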
This can be a bit difficult since the command line options for Lambda require that you use aws lambda get-policy in order to find out which resources are allowed to perform the lambda:InvokeFunction action on a given function. These permissions aren't shown as part of the lambda configuration for aws lambda get-function-configuration. Use bash and jq to get a list of functions and spit out their allowed invokers. Like this:
aws lambda list-functions | jq '.Functions[].FunctionName' --raw-output | while read f; do
  # Functions with no resource policy make get-policy return an error; send that to /dev/null.
  policy=$( aws lambda get-policy --function-name "${f}" 2>/dev/null | jq '.Policy | fromjson | .Statement[] | select(.Effect=="Allow") | select(.Action=="lambda:InvokeFunction") | .Condition.ArnLike[]' --raw-output )
  echo "FUNCTION ${f} CAN BE INVOKED FROM:"
  echo "${policy}"
done
This will list the ARNs of the resources that are allowed to use the lambda:InvokeFunction action on all the Lambda functions returned by list-functions.
When you set up triggers on your S3 Bucket, you can select which Lambda function is invoked.
Check out this document for more information: https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html.
Here's a more comprehensive document that deep dives on S3 event notifications: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-event-notifications.html
If you select the Lambda Function destination type, do the following:
In Lambda Function, type or choose the name of the Lambda function that you want to receive notifications from Amazon S3.
If you don't have any Lambda functions in the region that contains your bucket, you'll be prompted to enter a Lambda function ARN. In Lambda Function ARN, type the ARN of the Lambda function that you want to receive notifications from Amazon S3.
(Optional) You can also choose Add Lambda function ARN from the menu and type the ARN of the Lambda function in Lambda function ARN.
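The console steps quoted above roughly correspond to a put-bucket-notification-configuration call; the bucket name and function ARN below are made-up placeholders, and the Lambda function must separately allow s3.amazonaws.com to invoke it (the console adds that permission for you):

aws s3api put-bucket-notification-configuration \
  --bucket my-bucket \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'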

Amazon CloudWatch: How to find ARN of CloudWatch Log group

I am configuring custom access logging for Amazon API Gateway and I need to specify the CloudWatch log group, but when I put in just the name of the log group, in a format like "API-Gateway-Execution-Logs_3j5w5m7kv9/stage-name", I get this error:
Invalid ARN specified in the request. ARNs must start with 'arn:':
API-Gateway-Execution-Logs_3j5w5m7kv9/stage-name
When I open the page for this log group in CloudWatch, I just see the same name there and don't see an ARN value. How can I find it?
Go to CloudWatch Logs, find your log group, and open it; you'll see a list of log streams. There is a settings icon at the top right:
Click it and you'll see an option to show the stream ARN:
Save the settings and you'll see the stream ARNs. The part before ":log-stream:" looks like the log group ARN.
The CloudWatch Group ARN format is arn:aws:logs:{region}:{account-id}:log-group:API-Gateway-Execution-Logs_{rest-api-id}/{stage-name}, cf. https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html
arn:aws:logs:region:account-id:log-group:log_group_name
See this documentation
You can also use the AWS CLI:
aws logs describe-log-groups | grep <log_group_name> | awk '/arn/'
2022 Update
Select your log group, click Log group details, and copy the ARN.
Or you could run aws logs describe-log-groups | grep <name_of_group>
That works too.
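If you want just the ARN in one call, describe-log-groups can also filter server-side and return only that field; the prefix below is the example log group from the question, and note that the returned ARN usually ends in ":*", which you may need to strip off depending on where you paste it:

aws logs describe-log-groups \
  --log-group-name-prefix "API-Gateway-Execution-Logs_3j5w5m7kv9" \
  --query 'logGroups[].arn' \
  --output text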

Get AWS SQS queue URL?

An AWS SQS queue URL looks like this:
sqs.us-east-1.amazonaws.com/1234567890/default_development
And here are the parts broken down:
| Always same | Stored in env var | Always same | ? | Stored in env var |
| ----------- | ----------------- | ----------- | - | ----------------- |
| sqs | us-east-1 | amazonaws.com | 1234567890 | default_development |
So I can reconstruct the queue URL based on things I know except the 1234567890 part.
What is this number and is there a way, if I have my AWS creds in env vars, to get my hands on it without hard-coding another env var?
The 1234567890 should be your AWS account number.
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/ImportantIdentifiers.html
If you don't have access to the queue URL directly (e.g. you can get it from CloudFormation if you create the queue there), you can call the GetQueueUrl API. It takes a QueueName parameter and an optional QueueOwnerAWSAccountId. That would be the preferred method of getting the URL. It is true that the URL is a well-formed URL based on the account and region, and I wouldn't expect that to change at this point, but it could be different in a region like the China regions or the GovCloud regions.
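If your credentials are already in environment variables, both missing pieces can be looked up at runtime; the queue name below is the one from your example:

# The account ID (the 1234567890 part) for the current credentials.
aws sts get-caller-identity --query Account --output text

# Or skip reconstructing the URL entirely and ask SQS for it.
aws sqs get-queue-url --queue-name default_development --query QueueUrl --output text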