S3 Access Denied when getting objects from CloudTrail S3 bucket

I use the CloudTrail bucket to run Athena queries and I keep getting this error:
Your query has the following error(s):
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: A57070510EFFB74B; S3 Extended Request ID: v56qfenqDD8d5oXUlkgfExqShUqlxwRwTQHR1S0PmHpp7WH+cz0x8D2pPLPkRoGz2o428hmOV1U=), S3 Extended Request ID: v56qfenqDD8d5oXUlkgfExqShUqlxwRwTQHR1S0PmHpp7WH+cz0x8D2pPLPkRoGz2o428hmOV1U= (Path: s3://cf-account-foundation-cloudtrailstack-trailbucket-707522222211/AWSLogs/707522222211/CloudTrail/ca-central-1/2019/01/11/707522222211_CloudTrail_ca-central-1_20190111T0015Z_XE4JGGZLQTNS334S.json.gz)
This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 56a188d5-9a10-4c30-a701-42c243c154c6.
The query is:
SELECT * FROM "default"."cloudtrail_table_logs" limit 10;
This is the S3 bucket Policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck20150319",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::cf-account-foundation-cloudtrailstack-trailbucket-707522222211"
        },
        {
            "Sid": "AWSCloudTrailWrite20150319",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::cf-account-foundation-cloudtrailstack-trailbucket-707522222211/AWSLogs/707522222211/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
The S3 bucket is in the eu-central-1 (Frankfurt) region, the same region as the Athena table I query from.
I have administrator permissions on my IAM User.
I get the same error when I manually try to open a file in this bucket:
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>5D49DF767D01F32C</RequestId><HostId>9Vd/MvDy5/AJYExs6BXoZbuMxxjxTCfFqzaMTQDwyrgyVZpdL+AgDihiZu3k17PWEYOJ19I8sbQ=</HostId></Error>
I don't know what is going on here. One more detail: the bucket has SSE-KMS encryption, but that alone should not prevent queries against it.
I get the same error even when I make the bucket public.
Does anyone have a clue?

Related

How to setup terraform state on encrypted s3 bucket

I have set up an S3 backend for Terraform state following this excellent answer by Austin Davis. I followed the suggestion by Matt Lavin to add a policy that enforces encryption on the bucket.
Unfortunately, that bucket policy means that terraform state list now throws:
Failed to load state: AccessDenied: Access Denied status code: 403, request id: XXXXXXXXXXXXXXXX, host id: XXXX...
I suspect I'm missing either a setting on the Terraform side to encrypt the communication, or an additional policy entry that would allow reading the encrypted state.
This is the policy added to the tf-state bucket:
{
    "Version": "2012-10-17",
    "Id": "RequireEncryption",
    "Statement": [
        {
            "Sid": "RequireEncryptedTransport",
            "Effect": "Deny",
            "Action": ["s3:*"],
            "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            },
            "Principal": "*"
        },
        {
            "Sid": "RequireEncryptedStorage",
            "Effect": "Deny",
            "Action": ["s3:PutObject"],
            "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            },
            "Principal": "*"
        }
    ]
}
I would start by removing that bucket policy, and just enable the newer default bucket encryption setting on the S3 bucket. If you still get access denied after doing that, then the IAM role you are using when you run Terraform is missing some permissions.
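As a starting point, here is a minimal sketch of enabling default bucket encryption with boto3 rather than enforcing encryption through a Deny policy; the bucket name is a placeholder:
import boto3

# Sketch: enable default (SSE-S3) encryption on the state bucket so every new
# object is encrypted at rest, without needing a Deny statement in the policy.
# "my-terraform-state-bucket" is a placeholder name.
s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="my-terraform-state-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
Setting encrypt = true in the Terraform S3 backend configuration should also make Terraform write the state object with server-side encryption.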

Terraform not able to init with a user with attached Policy document with full access

My Terraform backend is S3. I get the error below during terraform init:
Error refreshing state: AccessDenied: Access Denied
status code: 403, request id: EYE1JFR2028Y8WA, host id: TSfI4l7i0cSR+tBczEA5nbolYrbBNhieEYItTeN831SUcJ2EZtT91szja0u735Hk9EdWAc=
I run Terraform by exporting a user's AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION. That user has the following policy document attached:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::test-tf-state"
            ]
        }
    ]
}
I have cross-checked the bucket ARN and it is correct, yet I still get the 403 AccessDenied error.
When I add "/*" as a suffix to the ARN, terraform init works successfully; example policy below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::test-tf-state/*"
            ]
        }
    ]
}
I can't understand this behaviour.
When your resource is:
"arn:aws:s3:::test-tf-state"
this refers only to the bucket itself, not to the objects inside it. It lets you perform bucket-level operations, but not read, upload, or delete objects. Terraform, however, must be able to upload and read objects (the state file) in the bucket.
In contrast, when you have:
"arn:aws:s3:::test-tf-state/*"
then your permissions apply to the objects in the bucket. This allows you to put, get, or delete those objects.
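In practice, the fix is usually a single statement that lists both the bucket ARN and the object ARN. A minimal boto3 sketch of attaching such a policy (the user name and policy name are placeholders):
import boto3
import json

# Sketch: one statement covering both the bucket (for ListBucket etc.) and the
# objects inside it (for GetObject/PutObject/DeleteObject, which Terraform needs
# for its state file). "terraform-user" and "tf-state-access" are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::test-tf-state",    # the bucket itself
                "arn:aws:s3:::test-tf-state/*"   # the objects in the bucket
            ]
        }
    ]
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="terraform-user",
    PolicyName="tf-state-access",
    PolicyDocument=json.dumps(policy)
)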

Access Denied while querying S3 files from AWS Athena within Lambda in different account

I am trying to query an Athena view from my Lambda code. I created an Athena table for S3 files that are in a different account. The Athena query editor gives me the error below:
Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
I tried accessing the Athena view from my Lambda code. I created a Lambda execution role and allowed this role in the bucket policy of the other account's S3 bucket as well, like below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::2222222222:role/BAccountRoleFullAccess"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::s3_bucket/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111111111:role/A-Role",
                    "arn:aws:iam::111111111:role/B-Role"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::s3_bucket",
                "arn:aws:s3:::s3_bucket/*"
            ]
        }
    ]
}
From Lambda, I get the error below:
'Status': {'State': 'FAILED', 'StateChangeReason': 'com.amazonaws.services.s3.model.AmazonS3Exception:
Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 3A8953784EC73B17;
S3 Extended Request ID: LfQZdTCj7sSQWcBqVNhtHrDEnJuGxgJQxvillSHznkWIr8t5TVzSaUwNSdSNh+YzDUj+S6aOUyI=),
S3 Extended Request ID: LfQZdTCj7sSQWcBqVNhtHrDEnJuGxgJQxvillSHznkWIr8t5TVzSaUwNSdSNh+YzDUj+S6aOUyI=
(Path: s3://s3_bucket/Input/myTestFile.csv)'
This Lambda function uses the arn:aws:iam::111111111:role/B-Role execution role, which has full access to Athena and S3.
Can someone please guide me?
To reproduce this situation, I did the following:
In Account-A, created an Amazon S3 bucket (Bucket-A) and uploaded a CSV file
In Account-B, created an IAM Role (Role-B) with S3 and Athena permissions
Turned OFF Block Public Access on Bucket-A
Added a bucket policy to Bucket-A that references Role-B:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::[ACCOUNT-B]:role/role-b"
            },
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-a",
                "arn:aws:s3:::bucket-a/*"
            ]
        }
    ]
}
In Account-B, manually defined a table in the Amazon Athena console
Ran a query on the Athena table. As expected, received Access Denied because I was using an IAM User to access the console, not the IAM Role defined in the Bucket Policy on Bucket-A
Created an AWS Lambda function in Account-B that uses Role-B:
import boto3
import time

def lambda_handler(event, context):
    athena_client = boto3.client('athena')
    query1 = athena_client.start_query_execution(
        QueryString='SELECT * FROM foo',
        ResultConfiguration={'OutputLocation': 's3://my-athena-out-bucket/'}
    )
    time.sleep(10)
    query2 = athena_client.get_query_results(QueryExecutionId=query1['QueryExecutionId'])
    print(query2)
Ran the Lambda function. It successfully returned data from the CSV file.
Please compare your configurations against the above steps that I took. Hopefully you will find a difference that will enable your cross-account access by Athena.
Reference: Cross-account Access - Amazon Athena

Getting access denied on Amazon S3 put object for encryption policy

I am trying to upload a file to an Amazon S3 bucket using Java.
I have the policy below, which should reject uploads that are not encrypted.
{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::file-upload-test/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        }
    ]
}
Now, when I try to use the code below, I get an Access Denied exception.
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: F287836ABBF2B486)
The Java code that I am using is:
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(fileSize);
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
objectMetadata.setContentType(contentType);
putObjectRequest = new PutObjectRequest(bucketName, fileName, inputStream, objectMetadata);
s3client.putObject(putObjectRequest);
Is this the correct Java code to achieve this?
If yes, what modifications are required?
If no, should I use encryption on the client side instead?
Please advise.

AWS - Server side encryption Access denied - Change encryption failure for root user

I have read/write/admin access to an S3 bucket I created. I can create objects in there and delete them as expected.
Other folders exist in the bucket that were transferred there from another AWS account. I can't download any items from these folders.
When I click on the files, there is info stating "Server side encryption Access denied". When I attempt to remove this encryption, it fails with the message:
Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 93A26842904FFB2D; S3 Extended Request ID: OGQfxPPcd6OonP/CrCqfCIRQlMmsc8DwmeA4tygTGuEq18RbIx/psLiOfEdZHWbItpsI+M1yksQ=)
I'm confused as to what the issue is. I am the root user/owner of the bucket and would have thought I would be able to change the permissions/encryption of this material?
Thanks
You must ensure that you, and not the other AWS accounts uploading to it, remain the owner of the files in the S3 bucket.
Example S3 bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "allowNewDataToBeUploaded",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::$THE_EXTERNAL_ACCOUNT_NUMBER:root"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::$THE_BUCKET_NAME/*"
        },
        {
            "Sid": "ensureThatWeHaveOwnershipOfAllDataUploaded",
            "Effect": "Deny",
            "Principal": {
                "AWS": "arn:aws:iam::$THE_EXTERNAL_ACCOUNT_NUMBER:root"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::$THE_BUCKET_NAME/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
The external account must also use the x-amz-acl header in their request:
ObjectMetadata metaData = new ObjectMetadata();
metaData.setContentLength(byteArrayLength);
metaData.setHeader("x-amz-acl", "bucket-owner-full-control");
s3Client.putObject(new PutObjectRequest(bucketNameAndFolder, fileKey, fileContentAsInputStream, metaData));
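For uploads done from Python rather than Java, a minimal boto3 sketch of the same requirement (bucket name, object key, and local file path are placeholders):
import boto3

# Sketch: the external account uploads with the bucket-owner-full-control ACL,
# which satisfies the Deny condition in the bucket policy above.
# Bucket name, object key and local file path are placeholders.
s3 = boto3.client("s3")
with open("report.csv", "rb") as f:
    s3.put_object(
        Bucket="THE_BUCKET_NAME",
        Key="uploads/report.csv",
        Body=f,
        ACL="bucket-owner-full-control"
    )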
Additional reading:
https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
AWS S3 Server side encryption Access denied error
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/
https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUTacl.html
This is an interesting problem. I've seen this before when the KMS key required to decrypt the files isn't available or accessible. You can try making the key in the old account accessible to the new account (for example, via its key policy or a grant), or re-encrypting the objects with a key that the new account owns.
https://aws.amazon.com/blogs/security/share-custom-encryption-keys-more-securely-between-accounts-by-using-aws-key-management-service/
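If changing the key policy is not convenient, the old account can also create a grant on the key. A minimal boto3 sketch, run with credentials from the account that owns the key (the key ARN and grantee role ARN are placeholders):
import boto3

# Sketch: in the account that owns the KMS key, allow a principal in the other
# account to decrypt objects encrypted under this key.
# The key ARN and the grantee role ARN below are placeholders.
kms = boto3.client("kms")
kms.create_grant(
    KeyId="arn:aws:kms:eu-west-1:111111111111:key/EXAMPLE-KEY-ID",
    GranteePrincipal="arn:aws:iam::222222222222:role/BucketAdminRole",
    Operations=["Decrypt", "DescribeKey"]
)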