While creating a CloudFront distribution through the AWS console, there is an option to choose an origin access identity and also to let it update the bucket policy.
I am trying to find similar options in Terraform so that I don't have to manually manage the S3 bucket read permissions for the CloudFront origin access identity.
I have checked https://www.terraform.io/docs/providers/aws/r/cloudfront_distribution.html but couldn't find any reference to such an option.
Please let me know if I missed something on that page.
I don't think you missed anything on that page, but you also need to look at this one: https://www.terraform.io/docs/providers/aws/r/s3_bucket.html .
That page covers setting up S3 buckets in more detail. Note the policy argument in the Static Website Hosting section. You can add a line like
policy = "${file("policy.json")}"
and then write whatever policy you need into the policy.json file, which will then be included, letting you avoid configuring permissions manually in the console.
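For instance, a minimal sketch of how that fits together (the bucket name and the policy.json file living next to the configuration are placeholder assumptions, not from the question):

# A bucket whose policy is read from a local JSON file; "my-images" and
# "policy.json" are placeholder names.
resource "aws_s3_bucket" "images" {
  bucket = "my-images"

  # The contents of policy.json become the bucket policy, so no manual
  # permission setup is needed in the console.
  policy = "${file("policy.json")}"
}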
After reading the responses here and doing some reading and testing on my end, I found that the following achieves the effect we want, assuming you already have your CloudFront distribution somewhere:
resource "aws_s3_bucket" "my-cdn-s3" {
bucket = "my-cdn"
}
resource "aws_cloudfront_origin_access_identity" "my-oai" {
comment = "my-oai"
}
resource "aws_s3_bucket_policy" "cdn-cf-policy" {
bucket = aws_s3_bucket.my-cdn-s3.id
policy = data.aws_iam_policy_document.my-cdn-cf-policy.json
}
data "aws_iam_policy_document" "my-cdn-cf-policy" {
statement {
sid = "1"
principals {
type = "AWS"
identifiers = [aws_cloudfront_origin_access_identity.my-cdn-oai.iam_arn]
}
actions = [
"s3:GetObject"
]
resources = [
"${aws_s3_bucket.my-cdn-s3arn}/*"
]
}
}
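The answer above assumes the distribution already exists, so for completeness, here is a sketch of how the OAI might be referenced from the distribution's origin block. The cache behavior, restriction, and certificate settings below are assumptions chosen only to make the example complete:

resource "aws_cloudfront_distribution" "my-cdn" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.my-cdn-s3.bucket_regional_domain_name
    origin_id   = "my-cdn-s3"

    # Point the origin at the OAI so CloudFront signs its S3 requests.
    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.my-oai.cloudfront_access_identity_path
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "my-cdn-s3"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}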
We would then get the following bucket policy, which I have copied here from a CloudFront and S3 setup created outside Terraform:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-cdn/*"
        }
    ]
}
Let me know if I left anything out.
I have managed to make my Terraform configuration loop through all of my buckets, creating an IAM user and a bucket:
resource "aws_s3_bucket" "aws_s3_buckets" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
tags = {
Name = "${var.s3_bucket_name[count.index]}"
Environment = "live"
policy = <<POLICY
{
"Id": "Policy1574607242703",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1574607238413",
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": {
"arn:aws:s3:::"."${var.s3_bucket_name[count.index]}"."/*"}
},
"Principal": {
"AWS": [
"${var.s3_bucket_name[count.index]}"}
]}
}
]
}
POLICY
}
}
I'm getting Error setting S3 bucket tags: InvalidTag: The TagValue you have provided is invalid, status code: 400. Is there a way to create policies like this, or have I done something incorrect in my code?
The error is because the policy section is not part of the tags argument; it is a separate argument within the aws_s3_bucket resource. You can also use the aws_s3_bucket_policy resource to create the bucket policy.
Note: there are quite a few issues with the policy itself that you would have to fix before it will go through fine. Some of them are:
"arn:aws:s3:::"."${var.s3_bucket_name[count.index]}"."/*"} -- string concatenation like this is not valid inside JSON; build the ARN in a single interpolation instead.
Some curly braces are not balanced (there are a few extra ones).
The principal should be an IAM entity (an IAM user, an IAM role, an account, or *), not a bucket name.
A corrected version is sketched below.
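As a hedged sketch, assuming Terraform 0.12+ syntax and using an account-root principal as a placeholder (the real principal depends on who should be allowed to put objects):

# Buckets and tags, with the policy moved out of the tags block.
resource "aws_s3_bucket" "aws_s3_buckets" {
  count  = length(var.s3_bucket_name)
  bucket = var.s3_bucket_name[count.index]
  acl    = "private"

  tags = {
    Name        = var.s3_bucket_name[count.index]
    Environment = "live"
  }
}

# One bucket policy per bucket, attached via aws_s3_bucket_policy.
resource "aws_s3_bucket_policy" "aws_s3_bucket_policies" {
  count  = length(var.s3_bucket_name)
  bucket = aws_s3_bucket.aws_s3_buckets[count.index].id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowPutObject"
        Effect = "Allow"
        Action = ["s3:PutObject"]
        # The ARN is built in a single interpolation, not concatenated JSON.
        Resource = "arn:aws:s3:::${var.s3_bucket_name[count.index]}/*"
        # Placeholder principal: an account root, not a bucket name.
        Principal = {
          AWS = "arn:aws:iam::111111111111:root"
        }
      }
    ]
  })
}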
I have an instance which needs to read data from S3 in two different accounts:
a bucket in DataAccount with bucket name "dataaccountlogs"
a bucket in UserAccount with bucket name "userlogs"
I have console access to both accounts, so now I need to configure bucket policies to allow instances to read S3 data from the buckets dataaccountlogs and userlogs; my instance is running in UserAccount.
I need to access these two buckets both from the command line and from a Spark job.
You will need a role in UserAccount which will be used to access the mentioned buckets, say RoleA. The role should have permissions for the required S3 operations.
Then you will be able to configure a bucket policy for each bucket:
For DataAccount:
{
    "Version": "2012-10-17",
    "Id": "Policy1",
    "Statement": [
        {
            "Sid": "test1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::DataAccount:role/RoleA"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::dataaccountlogs",
                "arn:aws:s3:::dataaccountlogs/*"
            ]
        }
    ]
}
For UserAccount:
{
    "Version": "2012-10-17",
    "Id": "Policy1",
    "Statement": [
        {
            "Sid": "test1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::DataAccount:role/RoleA"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::userlogs",
                "arn:aws:s3:::userlogs/*"
            ]
        }
    ]
}
For accessing them from the command line:
You will need to set up the AWS CLI tool first:
https://docs.aws.amazon.com/polly/latest/dg/setup-aws-cli.html
Then you will need to configure a profile for using your role.
First, make a profile for your user to log in:
aws configure --profile YourProfileAlias
and follow the instructions for setting up credentials.
Then you will need to edit ~/.aws/config and add a profile for the role.
Add a block to the end:
[profile YourRoleProfileName]
role_arn = arn:aws:iam::DataAccount:role/RoleA
source_profile = YourProfileAlias
After that you will be able to use aws s3api ... --profile YourRoleProfileName to access both of your buckets on behalf of the created role.
To access them from Spark:
If you run your cluster on EMR, you should use a SecurityConfiguration and fill in the section for S3 role configuration. A different role can be specified for each specific bucket: use the "Prefix" constraint and list all destination prefixes after it, like "s3://dataaccountlogs/,s3://userlogs".
Note: you should strictly use the s3 protocol for this, not s3a. There are also a number of limitations, which you can find here:
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-s3-optimized-committer.html
Another way with Spark is to configure Hadoop to assume your role. Set
spark.hadoop.fs.s3a.aws.credentials.provider = "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider"
and configure the role to be used:
spark.hadoop.fs.s3a.assumed.role.arn = arn:aws:iam::DataAccount:role/RoleA
This way is more general, since the EMR committer has various limitations. You can find more information for configuring this in the Hadoop docs:
https://hadoop.apache.org/docs/r3.1.1/hadoop-aws/tools/hadoop-aws/assumed_roles.html
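As a side note, since other threads on this page manage buckets with Terraform: each of the bucket policies above could be attached with the aws_s3_bucket_policy resource. A sketch for the dataaccountlogs bucket, assuming Terraform 0.12+ and that "DataAccount" stands in for the numeric account ID, as in the policies above:

# Attach the cross-account read policy to the dataaccountlogs bucket.
resource "aws_s3_bucket_policy" "dataaccountlogs" {
  bucket = "dataaccountlogs"

  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "Policy1"
    Statement = [
      {
        Sid    = "test1"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::DataAccount:role/RoleA"
        }
        Action = "s3:*"
        Resource = [
          "arn:aws:s3:::dataaccountlogs",
          "arn:aws:s3:::dataaccountlogs/*"
        ]
      }
    ]
  })
}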
Hello, I have the following policy definition in my Terraform configuration, but it keeps returning as malformed:
resource "aws_iam_role_policy" "task-policy" {
name = "docker-flowcell-restore-task-policy"
role = "${aws_iam_role.task-role.id}"
policy = "${file("${path.module}/policies/role-docker-flowcell-restore-${var.environment}-ecs-policy.json")}"
}
I've been struggling to find the error in this for a while.
Here is the error:
aws_iam_role_policy.task-policy: Error putting IAM role policy docker-flowcell-restore-task-policy: MalformedPolicyDocument: Syntax errors in policy.
Here is the policy that is failing:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowWritesS3",
            "Action": [
                "s3:GetObject",
                "s3:RestoreObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "Temp_name_for_post",
                "Temp_name_for_post"
            ]
        },
        {
            "SID": "Allow for user for upload S3 bucket",
            "Action": [
                "s3:PutObject",
                "s3:AbortMultipartUpload"
            ],
            "Resource": [
                "temp_name_for_post",
                "temp_name_for_post"
            ]
        }
    ]
}
Unfortunately AWS doesn't tell you exactly what the syntax errors are, so you have to find them yourself. Sometimes you can do this by eye; other times you may just want to use the AWS console as Jeffrey suggested, perhaps taking out each statement one by one and re-validating to see where the error lies (it's a lot quicker than waiting for Terraform to finish).
In your case, you need to:
change SID to Sid in your second statement
change the Sid value in that second statement so it has no spaces in it (e.g. "AllowForUserForUploadS3Bucket")
use valid S3 ARNs instead of "temp_name_for_post", such as "arn:aws:s3:::my-bucket/*" to refer to all objects in a bucket named my-bucket
After changing these items, the policy now validates for me via the AWS console.
It looks like your IAM policy is malformed. See the AWS docs on IAM policy syntax and grammar. Another option to see what is wrong with the policy would be to copy the contents of that file into the IAM policy validator in the AWS console. The Sid field is not required, but if present it must be unique within the policy, and it cannot contain spaces.
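As a further option, you can sidestep hand-written JSON entirely by letting Terraform render the policy, so syntax errors surface at plan time. A sketch only: the statement names and bucket ARNs below are placeholders, not the asker's real values.

# Render the policy with a data source instead of a JSON file.
data "aws_iam_policy_document" "task" {
  statement {
    sid     = "AllowReadsS3"
    effect  = "Allow"
    actions = ["s3:GetObject", "s3:RestoreObject"]

    resources = ["arn:aws:s3:::my-bucket/*"]
  }

  statement {
    sid     = "AllowUploadS3"
    effect  = "Allow"
    actions = ["s3:PutObject", "s3:AbortMultipartUpload"]

    resources = ["arn:aws:s3:::my-bucket/*"]
  }
}

resource "aws_iam_role_policy" "task-policy" {
  name   = "docker-flowcell-restore-task-policy"
  role   = "${aws_iam_role.task-role.id}"
  policy = "${data.aws_iam_policy_document.task.json}"
}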
I'm trying to make all of the images I've stored in my S3 bucket publicly readable, using the following bucket policy:
{
    "Id": "Policy1380877762691",
    "Statement": [
        {
            "Sid": "Stmt1380877761162",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::<bucket-name>/*",
            "Principal": {
                "AWS": [
                    "*"
                ]
            }
        }
    ]
}
I have 4 other similar S3 buckets with the same bucket policy, but I keep getting 403 errors.
The images in this bucket were transferred using s3cmd sync, as I'm trying to migrate the contents of the bucket to a new account.
The only differences that I can see are that:
I'm using an IAM user with admin access, instead of the root user
the files don't have a "grantee: everyone open/download file" permission on each of the files, something the files had in the old bucket
If you want everyone to be able to access the S3 objects in your bucket, the principal should be "*", i.e. like this:
{
    "Id": "Policy1380877762691",
    "Statement": [
        {
            "Sid": "Stmt1380877761162",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::<bucket-name>/*",
            "Principal": "*"
        }
    ]
}
Source: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html#Principal
I've managed to solve it by running the s3cmd command again, but adding --acl-public to the end of it. That seems to have fixed my issue.
I know this is an old question, but for whoever is having this issue and working from the AWS console: go to the bucket in the AWS S3 console, then:
Open the Permissions tab.
Open Public Access settings.
Click Edit.
Then, in the editing page:
Uncheck Block new public bucket policies (Recommended)
Uncheck Block public and cross-account access if bucket has public policies (Recommended)
Click Save.
CAUTION
PLEASE NOTE THAT THIS WILL MAKE YOUR BUCKET ACCESSIBLE BY ANYONE ON THE INTERNET. EVEN IF THEY DO NOT HAVE AN AWS ACCOUNT, THEY CAN STILL ACCESS THE BUCKET AND ITS CONTENTS. PLEASE HANDLE WITH CAUTION!
From AWS Documentation
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::examplebucket/*"]
        }
    ]
}
Not sure if the order of attributes matters here. I would give this one a try.
I am trying to give myself permission to download existing files in an S3 bucket. I've modified the bucket policy as follows:
{
    "Sid": "someSID",
    "Action": "s3:*",
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
    "Principal": {
        "AWS": [
            "arn:aws:iam::123123123123:user/myuid"
        ]
    }
}
My understanding is that this addition to the policy should give me full rights to "bucketname" for my account "myuid", including all files that are already in that bucket. However, I'm still getting Access Denied errors when I try to download any of those files via the link that comes up in the console.
Any thoughts?
Step 1
Click on your bucket name, and under the Permissions tab, make sure that Block new public bucket policies is unchecked.
Step 2
Then you can apply your bucket policy
Hope that helps
David, you are right, but I found that, in addition to what bennie said below, you also have to grant view (or whatever access you want) to 'Authenticated Users'.
But a better solution might be to edit the user's policy to just grant access to the bucket:
{
    "Statement": [
        {
            "Sid": "Stmt1350703615347",
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::mybucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": ["arn:aws:s3:::mybucket"],
            "Condition": {}
        }
    ]
}
The first block grants all S3 permissions to all elements within the bucket. The second block grants list permission on the bucket itself.
Change the resource from arn:aws:s3:::bucketname/AWSLogs/123123123123/* to arn:aws:s3:::bucketname/* to have full rights to bucketname.
For showing a static website from S3, this is the bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"]
        }
    ]
}
Use the method below to upload any file in publicly readable form, using TransferUtility in Android.
transferUtility.upload(String bucketName, String key, File file, CannedAccessControlList cannedAcl)
Example
transferUtility.upload("MY_BUCKET_NAME", "FileName", your_file, CannedAccessControlList.PublicRead);
To clarify: it is really not documented well, but you need two access statements.
In addition to your statement that allows actions on resource "arn:aws:s3:::bucketname/AWSLogs/123123123123/*", you also need a second statement that allows s3:ListBucket on "arn:aws:s3:::bucketname", because internally the AWS client will try to list the bucket to determine that it exists before performing its action.
With the second statement, it should look like:
"Statement": [
{
"Sid": "someSID",
"Action": "ActionThatYouMeantToAllow",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
},
{
"Sid": "someOtherSID",
"Action": "ListBucket",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
}
]
Note: if you're using IAM, skip the "Principal" part.
If you have an encrypted bucket, you will also need the relevant KMS permissions allowed (e.g. kms:Decrypt).
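For the encrypted-bucket case, a hedged sketch of what that extra permission might look like, expressed as a Terraform policy-document statement (the region, account ID, and key ID are placeholders):

# Extra statement for reading objects encrypted with a customer KMS key.
data "aws_iam_policy_document" "kms_read" {
  statement {
    effect    = "Allow"
    actions   = ["kms:Decrypt"]
    resources = ["arn:aws:kms:us-east-1:123123123123:key/your-key-id"]
  }
}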
Possible reason: if the files have been put/copied by a user from another AWS account, then you cannot access them, since the file owner is still not you. The AWS account user who placed the files in your directory has to grant access during a put or copy operation.
For a put operation, the object owner can run this command:
aws s3api put-object --bucket destination_awsexamplebucket --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2 --acl bucket-owner-full-control
For a copy operation of a single object, the object owner can run this command:
aws s3api copy-object --bucket destination_awsexamplebucket --copy-source source_awsexamplebucket/myobject --key myobject --acl bucket-owner-full-control
Ref: AWS documentation.
Giving public access to the bucket in order to add a policy is NOT THE RIGHT way; it exposes your bucket to the public, even if only for a short amount of time.
You will face this error even if you have admin access (the root user will not face it).
According to the AWS documentation, you have to add "PutBucketPolicy" to your IAM user.
So simply attach an S3 policy to your IAM user, mentioning your bucket ARN to make it safer, and you won't have to make your bucket public again.
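For instance, expressed as a hedged Terraform sketch (the user name, policy name, and bucket name are placeholders for illustration):

# Grant an IAM user permission to edit a specific bucket's policy,
# without ever opening the bucket to the public.
resource "aws_iam_user_policy" "allow_put_bucket_policy" {
  name = "allow-put-bucket-policy"
  user = "your-iam-user"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutBucketPolicy",
      "Resource": "arn:aws:s3:::your-bucket-name"
    }
  ]
}
POLICY
}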
No one mentioned MFA. For Amazon users who have enabled MFA, please use this:
aws s3 ls s3://bucket-name --profile mfa
And prepare the mfa profile first by running
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration 129600
(replace 123456789012, user-name, and 928371).
This can also happen if the encryption algorithm in the S3 parameters is missing. If the bucket's default encryption is enabled, e.g. with Amazon S3-managed keys (SSE-S3), you need to pass ServerSideEncryption: "AES256"|"aws:kms"|string in your request parameters:
// Assumes an initialized AWS SDK v2 client, e.g.:
// const AWS = require("aws-sdk");
// const S3 = new AWS.S3();
const params = {
  Bucket: BUCKET_NAME,
  Body: content,
  Key: fileKey,
  ContentType: "audio/m4a",
  ServerSideEncryption: "AES256" // Here ..
}

await S3.putObject(params).promise()
Go to the AWS Policy Generator and generate a policy:
In the Principal field, give *
In the Actions, select Get Objects
Give the ARN as arn:aws:s3:::<bucket_name>/*
Then add the statement and generate the policy. You will get a JSON document; just copy it and paste it into the bucket policy.
For more details, see the AWS documentation on bucket policies.