I have an AWS S3 bucket in account A. This bucket was created by AWS Control Tower and is used for collecting logs from all the other accounts in my org. I was trying to understand the bucket policy, which looks something like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSSLRequestsOnly",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::aws-controltower-logs-12345656-us-east-1",
"arn:aws:s3:::aws-controltower-logs-12345656-us-east-1/*"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
},
{
"Sid": "AWSBucketPermissionsCheck",
"Effect": "Allow",
"Principal": {
"Service": [
"config.amazonaws.com",
"cloudtrail.amazonaws.com"
]
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::aws-controltower-logs-12345656-us-east-1"
},
{
"Sid": "AWSConfigBucketExistenceCheck",
"Effect": "Allow",
"Principal": {
"Service": [
"config.amazonaws.com",
"cloudtrail.amazonaws.com"
]
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::aws-controltower-logs-12345656-us-east-1"
},
{
"Sid": "AWSBucketDelivery",
"Effect": "Allow",
"Principal": {
"Service": [
"config.amazonaws.com",
"cloudtrail.amazonaws.com"
]
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::aws-controltower-logs-12345656-us-east-1/o-1234/AWSLogs/*/*"
}
]
}
Now all the other accounts in my org are able to dump their CloudTrail logs into this S3 bucket.
But there is one thing I don't get: I did not specify any particular or individual account number, yet other accounts are still able to write content to this bucket.
I do see the principal, which names the services that are allowed to deliver logs, but shouldn't that apply only to this account itself?
Let's analyze the statements one by one:
The first one only denies access when SSL is not used; it does nothing if the request is made over SSL:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSSLRequestsOnly",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::aws-controltower-logs-12345656-us-east-1",
"arn:aws:s3:::aws-controltower-logs-12345656-us-east-1/*"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
},
The next two statements only allow read access:
{
"Sid": "AWSBucketPermissionsCheck",
"Effect": "Allow",
"Principal": {
"Service": [
"config.amazonaws.com",
"cloudtrail.amazonaws.com"
]
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::aws-controltower-logs-12345656-us-east-1"
},
{
"Sid": "AWSConfigBucketExistenceCheck",
"Effect": "Allow",
"Principal": {
"Service": [
"config.amazonaws.com",
"cloudtrail.amazonaws.com"
]
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::aws-controltower-logs-12345656-us-east-1"
},
So the only statement that allows any writing is this one:
{
"Sid": "AWSBucketDelivery",
"Effect": "Allow",
"Principal": {
"Service": [
"config.amazonaws.com",
"cloudtrail.amazonaws.com"
]
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::aws-controltower-logs-12345656-us-east-1/o-1234/AWSLogs/*/*"
}
]
}
And it says the following: you can put objects under /o-1234/AWSLogs/ as long as you are one of two AWS services, Config or CloudTrail.
Clearly, if knowing the bucket name and the org ID is enough to persuade Config or CloudTrail to use that bucket, I cannot see anything that would stop me from doing so, apart from some internal protection inside those services.
Based on this document:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-set-bucket-policy-for-multiple-accounts.html
It seems that to allow an account 111111111111 to write to that bucket, you should use the following ARN pattern: "arn:aws:s3:::myBucketName/optionalLogFilePrefix/AWSLogs/111111111111/*".
So while the answer provided by #izayoi does not give any explanation, it is still correct. The CloudTrail service should guarantee that it always writes under the delivering account's ID in the key, so you can narrow down the access by listing all your account numbers. Of course, the list must be updated with every new account.
Conclusion: yes, with the current policy, knowing the bucket name and your organization ID should allow any AWS account in the world to use your bucket for CloudTrail logging... interesting. I would probably go with listing your account numbers.
"Resource": "arn:aws:s3:::aws-controltower-logs-12345656-us-east-1/o-1234/AWSLogs/*/*"
The first "*" is what opens it up to all account numbers.
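For illustration, here is a minimal sketch of how the AWSBucketDelivery statement could be narrowed down, assuming two hypothetical member accounts 111111111111 and 222222222222 (placeholder IDs; the bucket name and org ID are taken from the question):
{
  "Sid": "AWSBucketDelivery",
  "Effect": "Allow",
  "Principal": {
    "Service": [
      "config.amazonaws.com",
      "cloudtrail.amazonaws.com"
    ]
  },
  "Action": "s3:PutObject",
  "Resource": [
    "arn:aws:s3:::aws-controltower-logs-12345656-us-east-1/o-1234/AWSLogs/111111111111/*",
    "arn:aws:s3:::aws-controltower-logs-12345656-us-east-1/o-1234/AWSLogs/222222222222/*"
  ]
}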
Related
I am trying to set up an IAM policy on S3 for the AWS Transfer Family (SFTP) service. Here is the requirement:
Users should be able to put files into their own folder, i.e. {transfer:username}/.
While uploading, a Lambda function will add the tag "allowdownload":"no" by default.
We have a third-party integration to carry out approvals; once a file is approved, the new tag "allowdownload":"Yes" is added to it.
Steps 1 through 3 are working, except for the tag-based policy that I have attached to the AWS Transfer Family server. The policy has conditions to allow/deny getting files based on the tag value, but it does not seem to work.
As per the requirement, files with the tag "allowdownload":"Yes" should be allowed:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::mybucket"
],
"Effect": "Allow",
"Sid": "ReadWriteS3"
},
{
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::mybucket/${transfer:UserName}/*"
],
"Effect": "Allow",
"Sid": ""
},
{
"Sid": "DenyMkdir",
"Action": [
"s3:PutObject"
],
"Effect": "Deny",
"Resource": "arn:aws:s3:::mybucket/*/"
}
]
}
Here is the bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Statement1",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/${transfer:UserName}/*",
"Condition": {
"StringEquals": {
"s3:ExistingObjectTag/allowdownload": "yes"
}
}
},
{
"Sid": "Statement1",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/${transfer:UserName}/*",
"Condition": {
"StringEquals": {
"s3:ExistingObjectTag/allowdownload": "no"
}
}
}
]
}
Also, I have Block Public Access turned ON, but the user is still able to get files whose allowdownload tag is "no".
Is there anything wrong with the policy above? Please advise.
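For comparison, a common pattern for this kind of approval gate is a single explicit Deny on s3:GetObject that applies unless the object tag is exactly "Yes"; with a negated operator the deny also catches objects where the tag is missing or cased differently ("no" vs "No"). This is only a rough sketch against the mybucket name from the question, not a verified fix for this Transfer Family setup:
{
  "Sid": "DenyDownloadUnlessApproved",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::mybucket/*",
  "Condition": {
    "StringNotEquals": {
      "s3:ExistingObjectTag/allowdownload": "Yes"
    }
  }
}
Note that a statement like this denies GetObject for every principal, including administrators, until the tag value is set to "Yes".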
I would like to grant AWS Config access to an Amazon S3 bucket. The policy below is what I wrote based on https://docs.aws.amazon.com/config/latest/developerguide/s3-bucket-policy.html:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AWSConfigBucketPermissionsCheck",
"Effect": "Allow",
"Principal": {
"Service": [
"config.amazonaws.com"
]
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::targetBucketName"
},
{
"Sid": "AWSConfigBucketExistenceCheck",
"Effect": "Allow",
"Principal": {
"Service": [
"config.amazonaws.com"
]
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::targetBucketName"
},
{
"Sid": "AWSConfigBucketDelivery",
"Effect": "Allow",
"Principal": {
"Service": [
"config.amazonaws.com"
]
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::targetBucketName/[optional] prefix/AWSLogs/sourceAccountID-WithoutHyphens/Config/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
}
]
}
I want to know about the part provided below:
"Resource": "arn:aws:s3:::targetBucketName/[optional] prefix/AWSLogs/sourceAccountID-WithoutHyphens/Config/*",
What is the meaning of the prefix, and how do I fill in the prefix/AWSLogs/sourceAccountID-WithoutHyphens/Config/* part of the policy?
Thanks.
The prefix is what you define when you configure Config delivery to S3, and it is optional. Config writes its logs to the S3 bucket using a standard key format: "prefix/AWSLogs/sourceAccountID-WithoutHyphens/Config/*".
If you configure Config delivery to S3 from the console, you won't have to worry about the bucket policy, as it will be created automatically; you simply give the bucket name and an optional prefix.
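For example, with a hypothetical bucket my-config-bucket, prefix my-config-logs, and source account 123456789012 (all placeholders), the delivery Resource would look like this:
"Resource": "arn:aws:s3:::my-config-bucket/my-config-logs/AWSLogs/123456789012/Config/*"
If you don't use a prefix, the key simply starts at AWSLogs:
"Resource": "arn:aws:s3:::my-config-bucket/AWSLogs/123456789012/Config/*"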
I want to create an AWS Elasticsearch domain with this policy, to enable specific access for IAM roles, allow admin IPs, and give public read-only access. The ES console keeps returning the error "Error setting policy", and I can't work out why this would not be allowed.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<id>:role/<lambda role 1 name>"
},
"Action": "es:ESHttpPost",
"Resource": "arn:aws:es:eu-west-1:<id>:domain/*/*"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<id>:role/<lambda role 2 name>"
},
"Action": "es:ESHttpDelete",
"Resource": "arn:aws:es:eu-west-1:<id>:domain/*/*"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Resource": "arn:aws:es:eu-west-1:<id>:domain/*/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"<ip1>",
"<ip2>",
"<ip3>"
]
}
}
},
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:ESHttpGet",
"Resource": "arn:aws:es:eu-west-1:<id>:domain/*/*"
}
]
}
It's in eu-west-1 and on version 7.1. I've tried variations like es:* and putting the principals in an array (like in the provided templates), but these are all rejected. I can seemingly only have two statements, with one principal in each ("*" and one of these IAM roles).
Is there a better recommended way, like putting the domain behind API Gateway? I saw a reverse proxy mentioned in the docs, but that seems like ridiculous overkill and $$$.
I'm trying to access one of my S3 buckets from an EC2 instance deployed by Elastic Beanstalk. The EC2 instance uses the aws-elasticbeanstalk-ec2-role role, and I have attached the AmazonS3FullAccess policy to it:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
Then the bucket policy is as follows:
"Version": "2008-10-17",
"Statement": [
{
"Sid": "eb-ad78f54a-f239-4c90-adda-49e5f56cb51e",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::XXXXXX:role/aws-elasticbeanstalk-ec2-role"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::elasticbeanstalk-us-east-XXXXXX/resources/environments/logs/*"
},
{
"Sid": "eb-af163bf3-d27b-4712-b795-d1e33e331ca4",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::XXXXXX:role/aws-elasticbeanstalk-ec2-role"
},
"Action": [
"s3:ListBucket",
"s3:ListBucketVersions",
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::elasticbeanstalk-us-east-2-XXXXXX",
"arn:aws:s3:::elasticbeanstalk-us-east-2-XXXXXX/resources/environments/*"
]
},
{
"Sid": "eb-58950a8c-feb6-11e2-89e0-0800277d041b",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:DeleteBucket",
"Resource": "arn:aws:s3:::elasticbeanstalk-us-east-2-XXXXXX"
}
]
}
When I try to access the bucket from an SSH session or through a script inside .ebextensions, I receive a 403 Access Denied error. I tried making the files public and running the same commands, and it worked perfectly, but the files I need can't be public.
I think I have the correct policies for both the bucket and the EC2 role, though I might be forgetting some detail.
Any help will be welcome. Thank you folks in advance!
Based on my knowledge and previous issues I've experienced, your bucket policy is incorrect.
It isn't valid because the ListBucket and ListBucketVersions actions must be applied to the bucket ARN, not to a prefix.
Here is my corrected policy, which should work:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "eb-ad78f54a-f239-4c90-adda-49e5f56cb51e",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::XXXXXX:role/aws-elasticbeanstalk-ec2-role"
},
"Action": "s3:PutObject",
"Resource": [
"arn:aws:s3:::elasticbeanstalk-us-east-XXXXXX/resources/environments/logs/*",
"arn:aws:s3:::elasticbeanstalk-us-east-XXXXXX/resources/environments/logs"
]
},
{
"Sid": "eb-af163bf3-d27b-4712-b795-d1e33e331ca4",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::XXXXXX:role/aws-elasticbeanstalk-ec2-role"
},
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::elasticbeanstalk-us-east-2-XXXXXX/resources/environments",
"arn:aws:s3:::elasticbeanstalk-us-east-2-XXXXXX/resources/environments/*"
]
},
{
"Sid": "eb-af163bf3-d27b-4712-b795-anything",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::XXXXXX:role/aws-elasticbeanstalk-ec2-role"
},
"Action": [
"s3:ListBucket",
"s3:ListBucketVersions"
],
"Resource": [
"arn:aws:s3:::elasticbeanstalk-us-east-2-XXXXXX"
]
},
{
"Sid": "eb-58950a8c-feb6-11e2-89e0-0800277d041b",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:DeleteBucket",
"Resource": "arn:aws:s3:::elasticbeanstalk-us-east-2-XXXXXX"
}
]
}
Useful docs to reference for the future -> AWS S3 docs
Is it possible to create an S3 bucket policy that allows read and write access from Amazon Machine Learning? I tried the bucket policy below, but it doesn't work.
bucket policy:
{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "machinelearning.amazonaws.com"
},
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::cxra/*"
},
{
"Effect": "Allow",
"Principal": {
"Service": "machinelearning.amazonaws.com"
},
"Action": "s3:PutObjectAcl",
"Resource": "arn:aws:s3:::cxra/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
},
{
"Effect": "Allow",
"Principal": {
"Service": "machinelearning.amazonaws.com"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::cxra"
},
{
"Effect": "Allow",
"Principal": {
"Service": "machinelearning.amazonaws.com"
},
"Action": [
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::cxra"
}
]
}
Error screen
Double check your policy against Granting Amazon ML Permissions to Output Predictions to Amazon S3 and Granting Amazon ML Permissions to Read Your Data from Amazon S3.
Since they are documented as two separate policies, I suggest you follow that structure rather than joining the two policies in one document; it will be easier to troubleshoot.