How to set up Terraform state on an encrypted S3 bucket

I have set up an S3 backend for Terraform state, following this excellent answer by Austin Davis. I followed the suggestion by Matt Lavin to add a policy encrypting the bucket.
Unfortunately, that bucket policy means that terraform state list now throws the following error:
Failed to load state: AccessDenied: Access Denied status code: 403, request id: XXXXXXXXXXXXXXXX, host id: XXXX...
I suspect I'm missing something on the Terraform side: either passing or configuring a setting to encrypt the communication, or an additional policy entry needed to read the encrypted state.
This is the policy added to the tf-state bucket:
{
  "Version": "2012-10-17",
  "Id": "RequireEncryption",
  "Statement": [
    {
      "Sid": "RequireEncryptedTransport",
      "Effect": "Deny",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "RequireEncryptedStorage",
      "Effect": "Deny",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      },
      "Principal": "*"
    }
  ]
}

I would start by removing that bucket policy, and just enable the newer default bucket encryption setting on the S3 bucket. If you still get access denied after doing that, then the IAM role you are using when you run Terraform is missing some permissions.
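As a rough sketch of that suggestion (not part of the original answer), assuming the bucket resource keeps the name aws_s3_bucket.terraform_state from the question's policy, and using placeholder values for the backend bucket, key and region, this is roughly what default encryption plus an encrypted backend looks like in Terraform:

# Default server-side encryption on the state bucket (AWS provider v4+),
# replacing the Deny-based bucket policy from the question.
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Backend configuration; "my-tf-state", the key and the region are placeholders.
terraform {
  backend "s3" {
    bucket  = "my-tf-state"
    key     = "global/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true   # ask the backend to store state objects with SSE
  }
}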

Related

AWS S3 Bucket Policy is not valid

I am getting very frustrated with AWS today, as it seems to provide validation errors that have literally no relevance to the actual issues (it's almost like working on Windows 3.1 again), and the frustration keeps on coming with this latest irritation using the policies on S3.
I am trying to extend an existing S3 bucket policy on a bucket that has ACLs disabled, in order to enable server access logs.
I have extended the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MyS3Bucket/*"
    },
    -- NEW PART BELOW ---
    {
      "Sid": "S3ServerAccessLogsPolicy",
      "Effect": "Allow",
      "Principal": {
        "Service": "logging.s3.amazonaws.com"
      },
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::MyS3LogsBucket/*",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:s3:::MyS3Bucket"
        },
        "StringEquals": {
          "aws:SourceAccount": "MyAccountId"
        }
      }
    }
  ]
}
However, it makes no difference whether I follow the documentation at https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-migrating-acls-prerequisites.html#object-ownership-server-access-logs, use the built-in policy generator within S3, or the other policy generator found at https://awspolicygen.s3.us-east-1.amazonaws.com/policygen.html.
I am constantly getting errors such as "Policy has invalid resource".
Please can someone tell me what is wrong with the above? The resource does exist, and the name was copied directly from the resource itself, so there are no typos.
I suspect that you have the Source and Destination buckets switched.
Let's say:
Source bucket is the one that you want to track via Server Access Logging
Destination bucket is where you would like the logs stored
The policy should be placed on the Destination bucket. Here is the policy that was automatically created for me on my Destination bucket when I activated Server Access Logging:
{
  "Version": "2012-10-17",
  "Id": "S3-Console-Auto-Gen-Policy",
  "Statement": [
    {
      "Sid": "S3PolicyStmt-DO-NOT-MODIFY",
      "Effect": "Allow",
      "Principal": {
        "Service": "logging.s3.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::destination-bucket/*"
    }
  ]
}
It would seem that you are placing the policy on the Source bucket, based upon the fact that you have a policy that is making the entire bucket public, and the fact that you said you are 'extending' an existing policy.
Basically, the bucket that is referenced in Resource should be the bucket on which the policy is being placed. In your policy above, two different buckets are being referenced in the Resource fields.
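To make that concrete (this just restates the answer using the names from the question): the logging statement would live in a policy attached to MyS3LogsBucket, the Destination bucket, and every Resource in that policy would point at MyS3LogsBucket itself, roughly:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3ServerAccessLogsPolicy",
      "Effect": "Allow",
      "Principal": {
        "Service": "logging.s3.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MyS3LogsBucket/*",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:s3:::MyS3Bucket"
        },
        "StringEquals": {
          "aws:SourceAccount": "MyAccountId"
        }
      }
    }
  ]
}

The PublicReadGetObject statement, which references MyS3Bucket, would stay on the Source bucket's own policy (MyAccountId here stands in for the numeric account ID, as in the question).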

Resource field error - AWS S3 access point

I want to access my S3 bucket from a specific IP address, using the S3 REST API to reach the bucket.
But I can't set up the access point policy successfully.
AWS shows an error in my policy JSON.
The following is my policy JSON:
{
  "Version": "2012-10-17",
  "Id": <Id>,
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>/*",
        "arn:aws:s3:::<bucket-name>"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "<my ip>"
          ]
        }
      }
    }
  ]
}
I'm pretty sure my bucket ARN is correct, but AWS shows the following error:
Unsupported Resource ARN In Policy: The resource ARN is not supported for the resource-based policy attached to resource type S3 Access Point.
I have tried changing the action value and removing one of the entries from the resource array, but it still does not work.
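No answer is recorded for this question, but the error message itself points at the ARN format: a policy attached to an S3 Access Point has to reference the access point's own ARN (arn:aws:s3:<region>:<account-id>:accesspoint/<name>), not the underlying bucket ARN. A hedged sketch of that shape, with the region, account ID, access point name and IP left as placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:<region>:<account-id>:accesspoint/<access-point-name>",
        "arn:aws:s3:<region>:<account-id>:accesspoint/<access-point-name>/object/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": ["<my ip>"]
        }
      }
    }
  ]
}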

AWS Bucket Policy permissions to Load Balancer denied

So far my S3 bucket policy looks like the following, which I got from the Policy Generator. I included my Account ID as the Principal when generating the policy, but when I go to add this bucket within my Load Balancer attributes it says "Access Denied for bucket: bucket2. Please check S3bucket permission". What is denying access, and how could I fix it?
{
  "Version": "2012-10-17",
  "Id": "Policy1630018580759",
  "Statement": [
    {
      "Sid": "Stmt1630018536294",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::615298492481:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::elb-bkt/logs/AWSLogs/615298492481/*"
    }
  ]
}
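No answer is recorded here either. For context, the load balancer writes the access logs itself, so the bucket policy has to grant s3:PutObject to Elastic Load Balancing rather than to your own account root. A hedged sketch, assuming the us-east-1 region (whose ELB log-delivery account is 127311923021; other regions use different account IDs, and newer regions use the logdelivery.elasticloadbalancing.amazonaws.com service principal instead), reusing the bucket and prefix from the policy above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowELBLogDelivery",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::127311923021:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::elb-bkt/logs/AWSLogs/615298492481/*"
    }
  ]
}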

Codepipeline S3 Bucket access denied in Codebuild

Background:
I'm testing a Codepipeline with a source stage containing a Github source and a test stage containing a Codebuild project. The Github source is authenticated with a Codestar connection.
Problem:
When the Codebuild project is triggered via the pipeline, the project is denied access to the associated Codepipeline S3 artifact bucket. Here's the log from the build:
AccessDenied: Access Denied
status code: 403, request id: 123, host id: 1234
for primary source and source version arn:aws:s3:::my-bucket/foo/master/foo123
Here's the statement of the Codebuild service role policy that's relevant to the problem:
{
  "Sid": "CodePipelineArtifactBucketAccess",
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:ListObjects",
    "s3:ListBucket",
    "s3:GetObjectVersion",
    "s3:GetObject",
    "s3:GetBucketLocation"
  ],
  "Resource": "arn:aws:s3:::my-bucket/*"
}
Attempts:
1. Changing the resource attribute in the policy above from arn:aws:s3:::my-bucket/* to arn:aws:s3:::my-bucket*. (Same Access Denied error)
2. Checking the associated artifact bucket's permissions. Currently, it's set to block all public access and there is no bucket policy attached. The bucket's ACL is set to allow the bucket owner (me) to have read/write access. (Same Access Denied error)
3. Given this is a test pipeline, I've tried giving the Codebuild service role and the Codepipeline service role full S3 access to all resources. (Same Access Denied error)
Adding the Codebuild role ARN to the CMK policy's usage/grant-related permissions did the trick. I guess I mindlessly assumed that the Codebuild service role would inherit the Codepipeline role's permissions, which would enable the Codebuild project to decrypt the CMK associated with the Codepipeline artifact bucket. Here are the relevant statements I changed in the CMK's policy:
{
  "Sid": "GrantPermissions",
  "Effect": "Allow",
  "Principal": {
    "AWS": [
      "arn:aws:iam::111111111111:role/codebuild-role",
      "arn:aws:iam::111111111111:role/codepipeline-role"
    ]
  },
  "Action": [
    "kms:RevokeGrant",
    "kms:ListGrants",
    "kms:CreateGrant"
  ],
  "Resource": "*",
  "Condition": {
    "Bool": {
      "kms:GrantIsForAWSResource": "true"
    }
  }
},
{
  "Sid": "UsagePermissions",
  "Effect": "Allow",
  "Principal": {
    "AWS": [
      "arn:aws:iam::111111111111:role/codebuild-role",
      "arn:aws:iam::111111111111:role/codepipeline-role"
    ]
  },
  "Action": [
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:Encrypt",
    "kms:DescribeKey",
    "kms:Decrypt"
  ],
  "Resource": "*"
}

Access S3 bucket from VPC

I'm running a NodeJS script and using the aws-sdk package to write files to an S3 bucket. This works fine when I run the script locally, but not from an ECS Fargate service; that's when I get Error: AccessDenied: Access Denied.
The service has the allowed VPC vpc-05dd973c0e64f7dbc. I've tried adding an Internet Gateway to this VPC, and also an endpoint (as seen in the attached image), but nothing resolves the Access Denied error. Any ideas what I'm missing here?
SOLVED: the problem was me misunderstanding aws:sourceVpce. It requires the VPC endpoint ID and not the VPC ID.
(Attached screenshots: Endpoint, Internet Gateway)
Bucket policy:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3MKW5OAU5CHLI"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::mywebsite.com/*"
    },
    {
      "Sid": "Stmt1582486025157",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mywebsite.com/*",
      "Principal": "*",
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpc-05dd973c0e64f7dbc"
        }
      }
    }
  ]
}
Please add a bucket policy that allows access from the VPC endpoint.
Update your bucket policy with a condition that allows users to access the S3 bucket when the request comes from the VPC endpoint that you created. To allow those users to download objects, you can use a bucket policy similar to the following:
Note: For the value of aws:sourceVpce, enter the VPC endpoint ID of the endpoint that you created.
{
  "Version": "2012-10-17",
  "Id": "Policy1314555909999",
  "Statement": [
    {
      "Sid": "<<Access-to-specific-VPConly>>",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::awsexamplebucket/*"],
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpce-1c2g3t4e"
        }
      }
    }
  ]
}