Given a failed AWS API request, how can I debug what permissions I need? - amazon-web-services

I'm using Terraform to provision some resources on AWS. Running the "plan" step of Terraform fails with the following vague error (for example):
Error: Error loading state: AccessDenied: Access Denied
status code: 403, request id: ABCDEF12345678, host id: SOMELONGBASE64LOOKINGSTRING===
Given a request id and a host id, is it possible to see in more depth what went wrong?
Setting TF_LOG=DEBUG (or some other level) seems to help, but I was curious if there is a CLI command to get more information from CloudTrail or something.
Thanks!

Terraform won't have any privileged information about the access denial, but AWS does. Because you mentioned S3 was the problem, I based my answer on finding the S3 request id. You have a couple of options to find the request for a given request id in AWS:
1. Create a trail in AWS CloudTrail. CloudTrail logs API calls (including the request id) at the bucket level by default. If the request was for a specific object, you also need to enable S3 data events when you create the trail.
2. Turn on S3 server access logs.
You can manually search for the request id in the log files in S3, or use Athena. With CloudTrail you can also send events to CloudWatch Logs and search the resulting Log Group from the console search bar.
CloudTrail records API calls from all services, not just S3, so it can be a useful tool for diagnosing issues beyond S3. Note that there can be a delay of up to 15 minutes before logs appear in CloudTrail.
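For example, a minimal sketch with the AWS CLI (the trail and bucket names are placeholders, and the trail's S3 bucket needs a bucket policy that allows CloudTrail to write to it):

    # Create a trail that delivers management events to an S3 bucket you own, then start logging.
    aws cloudtrail create-trail --name debug-trail --s3-bucket-name my-cloudtrail-logs
    aws cloudtrail start-logging --name debug-trail

    # After the delivery delay mentioned above, pull the log files down and search them
    # for the request id from the Terraform error (CloudTrail log files are gzipped JSON).
    aws s3 sync s3://my-cloudtrail-logs/AWSLogs/ ./trail-logs/
    zgrep -r "ABCDEF12345678" ./trail-logs/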

Related

How to find out what IAM permissions are needed for a request that failed with a 403

I am using Terraform and I am trying to limit the access as much as possible, but I want to know what those limits are.
Terraform gives me a request ID for the request that failed, but I am not sure where in the AWS console to enter that request ID to see what it was trying to do and which IAM policy it failed on.
In "Given a failed AWS API request, how can I debug what permissions I need?" they are looking for something more specific to S3, but my case is broader: I am using Terraform to create IAM resources and EC2 instances.
To see the request that was made and all of its details, use CloudTrail, which lets you inspect every request made against your account.
Go to https://console.aws.amazon.com/cloudtrail/home?region=us-east-1#/events
In the Event filter dropdown, choose "Event ID" and enter the ID that Terraform reported.
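If you prefer the CLI, the equivalent lookup should be something like this (the ID and region are placeholders; the errorCode and errorMessage fields inside the returned CloudTrailEvent JSON show which call was denied):

    aws cloudtrail lookup-events \
      --region us-east-1 \
      --lookup-attributes AttributeKey=EventId,AttributeValue=ABCDEF12345678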

Can you get the AWS usage report for a subdirectory of a bucket?

Can you get the AWS usage report for a subdirectory of a bucket? I want to know the amount of traffic from all 'GetObject' requests for each subdirectory of S3.
First, remember that there are no "subdirectories" in S3. Everything within a bucket is in a flat index and identified by an object key. However, in the AWS console, objects that contain a shared prefix are represented together in a "folder" named after the shared prefix.
With that in mind, it should be easier to understand why you cannot get an AWS usage report for a specific "subdirectory". The AWS usage report is meant to be an overview of your AWS services and is not meant to be used for more detailed analytics.
Instead, there is another AWS service that gives you more detailed analytics for your other AWS services: AWS CloudWatch. With AWS CloudWatch you can:
Set up daily storage metrics
Set up request (GET) metrics on a bucket
And, for your specific case, you can set up request metrics for specific prefixes (subdirectories) within a bucket.
Using request metrics from AWS CloudWatch is a paid service (and another reason why you cannot get detailed request metrics in the AWS usage report).
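As a rough sketch of the CLI side (the bucket name, filter id, prefix, and dates are placeholders):

    # Enable request metrics only for objects under a given prefix ("subdirectory").
    aws s3api put-bucket-metrics-configuration \
      --bucket my-bucket \
      --id subdir-metrics \
      --metrics-configuration '{"Id":"subdir-metrics","Filter":{"Prefix":"my-subdirectory/"}}'

    # Once metrics are flowing, query the GetRequests metric for that filter from CloudWatch.
    aws cloudwatch get-metric-statistics \
      --namespace AWS/S3 --metric-name GetRequests \
      --dimensions Name=BucketName,Value=my-bucket Name=FilterId,Value=subdir-metrics \
      --start-time 2021-01-01T00:00:00Z --end-time 2021-01-02T00:00:00Z \
      --period 3600 --statistics Sum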

How to check who stopped an EC2 instance?

Is there a way (for example via the CLI) to check which user stopped an instance?
There is some data in the console:
State transition reason: User initiated (2017-07-24 10:15:42 GMT)
State transition reason message: Client.UserInitiatedShutdown: User initiated shutdown
Amazon CloudTrail can be used to create an audit trail of most API requests made to AWS. It records the time, IP address, user, and request details.
However, you will need to configure CloudTrail before it captures this information, because you must specify an Amazon S3 bucket where it can store the data. Therefore, you won't be able to see who stopped your instance this time, but if you configure CloudTrail now you'll be able to find out in the future.
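Once CloudTrail is recording, a sketch like this finds the relevant event (the instance id is a placeholder); the caller appears in the Username field and in the userIdentity section of the event JSON:

    aws cloudtrail lookup-events \
      --lookup-attributes AttributeKey=ResourceName,AttributeValue=i-0123456789abcdef0 \
      --query 'Events[?EventName==`StopInstances`].[EventTime,Username,CloudTrailEvent]' \
      --output text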

I am not able to export logs from CloudWatch to my S3 bucket

I am not able to export logs from CloudWatch to an S3 bucket through the AWS console, as it shows the following error message. Can anyone please help me?
"One or more of the specified parameters are invalid e.g. Time Range etc"
You are probably using an S3 bucket with server-side encryption enabled. This error is shown when the export task to S3 fails because the CloudWatch Logs export task doesn't support that server-side encryption yet.
(I reproduced this.)
In my case, it was wrong access permissions configured in the bucket policy. The export worked with AES-256 encryption enabled in my test run.
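For reference, a sketch of the same export via the CLI (the names, prefix, and timestamps are placeholders; note that --from and --to are epoch milliseconds, so an invalid window is another way to hit the "Time Range" complaint):

    aws logs create-export-task \
      --task-name my-export \
      --log-group-name /my/log-group \
      --from 1609459200000 \
      --to 1612137600000 \
      --destination my-export-bucket \
      --destination-prefix cloudwatch-export

The destination bucket also needs a bucket policy that allows the CloudWatch Logs service principal to call s3:GetBucketAcl and s3:PutObject, otherwise the export fails as well.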

AWS S3 bucket logs vs AWS CloudTrail

What's the difference between the AWS S3 logs and the AWS CloudTrail?
On the doc of CloudTrail I saw this:
CloudTrail adds another dimension to the monitoring capabilities
already offered by AWS. It does not change or replace logging features
you might already be using.
CloudTrail tracks API access for infrastructure-changing events; for S3 this means creating, deleting, and modifying buckets (see the S3 CloudTrail docs). It is very focused on API methods that modify buckets.
S3 server access logging provides web-server-style logging of access to the objects in an S3 bucket. This logging is granular to the object, includes read-only operations, and includes non-API access such as static website browsing.
AWS has added one more feature since this question was asked, namely CloudTrail data events.
Currently there are three features available:
1. CloudTrail, which logs almost all API calls at the bucket level.
2. CloudTrail data events, which log almost all API calls at the object level.
3. S3 server access logs, which log almost all access calls to S3 objects (delivery is best effort).
Now, 2 and 3 look like similar functionality, but they have some differences that may prompt you to use one, the other, or both (as in our case). Below are the differences I could find:
They work at different levels of granularity. For example, CloudTrail data events can be enabled for all S3 buckets in the account or just for a specific prefix within a bucket, whereas S3 server access logs are configured per individual bucket (see the CLI sketch further below).
S3 server access logs give more comprehensive information per log record, such as the bucket owner, HTTP status, and error code.
Information that is not available in CloudTrail logs but is available in server access logs:
Object size, total time, turn-around time, and HTTP referer fields in log records
Lifecycle transitions, expirations, and restores
Logging of keys in a batch delete operation
Authentication failures
CloudTrail does not deliver logs for requests that fail authentication (in which the provided credentials are not valid). However, it does include logs for requests in which authorization fails (AccessDenied) and requests that are made by anonymous users.
If a request is made by a different AWS account, you will see the CloudTrail log in your account only if the bucket owner owns, or has full access to, the object in the request. If that is not the case, the logs will only appear in the requester's account. The same request will, however, be delivered to the server access logs of your account without any additional requirements.
AWS Support recommends basing decisions on CloudTrail logs and, if you also need the additional information that is not available there, using server access logs as well.
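As a sketch of the difference in granularity (the trail, bucket, and prefix names are placeholders): data events are attached to a trail and can be scoped to a prefix, while server access logging is switched on per bucket.

    # 2. CloudTrail data events, scoped to one prefix of one bucket (object-level API logging):
    aws cloudtrail put-event-selectors \
      --trail-name my-trail \
      --event-selectors '[{"ReadWriteType":"All","IncludeManagementEvents":true,"DataResources":[{"Type":"AWS::S3::Object","Values":["arn:aws:s3:::my-bucket/some-prefix/"]}]}]'

    # 3. S3 server access logging, configured for the whole bucket:
    aws s3api put-bucket-logging \
      --bucket my-bucket \
      --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"my-log-bucket","TargetPrefix":"access-logs/"}}'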
There are two reasons to use CloudTrail Logs over S3 Server Access Logs:
You are interested in bucket-level activity logging. CloudTrail has that, S3 logs do not.
You have a log analysis setup that involves CloudWatch log streams. The basic S3 logs just store log events to files on some S3 bucket and from there it's up to you to process them (though most log analytics services can do this for you).
Bottom line: use CloudTrail, which costs extra, if you have a specific scenario that requires it. Otherwise, the "standard" S3 Server Access Logs are good enough.
From the CloudTrail developer guide (https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudtrail-logging.html):
Using CloudTrail Logs with Amazon S3 Server Access Logs and CloudWatch Logs
You can use AWS CloudTrail logs together with server access logs for Amazon S3. CloudTrail logs provide you with detailed API tracking for Amazon S3 bucket-level and object-level operations, while server access logs for Amazon S3 provide you visibility into object-level operations on your data in Amazon S3. For more information about server access logs, see Amazon S3 Server Access Logging.
You can also use CloudTrail logs together with CloudWatch for Amazon S3. CloudTrail integration with CloudWatch logs delivers S3 bucket-level API activity captured by CloudTrail to a CloudWatch log stream in the CloudWatch log group you specify. You can create CloudWatch alarms for monitoring specific API activity and receive email notifications when the specific API activity occurs. For more information about CloudWatch alarms for monitoring specific API activity, see the AWS CloudTrail User Guide. For more information about using CloudWatch with Amazon S3, see Monitoring Metrics with Amazon CloudWatch.
AWS CloudTrail is an AWS service that logs account activity across your AWS resources. It also tracks things like IAM console logins. Once CloudTrail is enabled, you can go to the CloudTrail console, see all the activity, and apply filters. While enabling it, you can also choose to send the data to AWS CloudWatch Logs. In CloudWatch you can apply filters and create alarms that notify you when a certain kind of activity happens.
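A sketch of that filter-plus-alarm pattern (the log group name, metric names, and SNS topic ARN are placeholders):

    # Count ConsoleLogin events in the log group that CloudTrail delivers to...
    aws logs put-metric-filter \
      --log-group-name CloudTrail/DefaultLogGroup \
      --filter-name ConsoleLoginFilter \
      --filter-pattern '{ $.eventName = "ConsoleLogin" }' \
      --metric-transformations metricName=ConsoleLoginCount,metricNamespace=CloudTrailMetrics,metricValue=1

    # ...and notify an SNS topic whenever that count is non-zero in a 5-minute window.
    aws cloudwatch put-metric-alarm \
      --alarm-name console-login-alarm \
      --metric-name ConsoleLoginCount --namespace CloudTrailMetrics \
      --statistic Sum --period 300 --threshold 1 \
      --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts-topic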
S3 logging enables logging of basic activity on your S3 buckets/objects.
CloudTrail logs the API calls made to your AWS account.
These CloudTrail logs are stored in an Amazon S3 bucket.
The two offer different services.
The definition you shared from the CloudTrail documentation:
CloudTrail adds another dimension to the monitoring capabilities already offered by AWS. It does not change or replace logging features you might already be using.
It means you might have already activated some of the other logging features offered by other AWS services, such as ELB logging.
When you enable CloudTrail monitoring, you need not worry about your existing logging functionality, as it will remain active.
You will receive logs from all the services.
So, by enabling CloudTrail logging, you do not change or replace the logging features you might already be using.
Hope it helps :)