There is a default limit of 100 buckets per AWS account. My application creates buckets when certain conditions are met. Is there a mechanism to monitor the number of buckets created in my account? I would like to be alerted before I reach the 100-bucket limit.
Edit: The plan is to create a prefix per customer and grant access to that prefix using a resource policy. Customers would upload objects only to the prefix they have access to. We would update the resource policy every time we create a new prefix; a sample policy is shown below. Once we hit the size limit on the bucket's resource policy, we would need to create a new bucket.
"Statement": [
  {
    "Sid": "AllowPutObject",
    "Effect": "Allow",
    "Principal": {
      "AWS": "123456789012"
    },
    "Action": "s3:PutObject",
    "Resource": [
      "arn:aws:s3:::TestBucketName/123456789012/*",
      "arn:aws:s3:::TestBucketName/123456789012"
    ]
  }
]
Unfortunately, there is no AWS-managed solution that performs this kind of monitoring for S3.
You would need to build your own; the below is one suggestion for covering this problem:
Use a Lambda function to call the list-buckets function, counting the total number of buckets in your account. Push the value to CloudWatch as a custom metric.
Create a CloudWatch alarm for this metric based on a specific threshold.
Create a Lambda function and use the list-service-quotas function to get your service quotas for S3 buckets. Use this to update the alarm thresholds.
Set both of these Lambda functions on a scheduled CloudWatch event.
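The threshold logic from the steps above can be sketched as follows. This is a minimal sketch: `shouldAlarm` is an illustrative helper (not an AWS API), and the actual AWS SDK calls are shown only in comments because they require credentials and a deployed function.

```javascript
// Decide whether the bucket count has crossed the alarm threshold.
// `quota` would come from Service Quotas (default 100 for S3 buckets);
// `thresholdPct` is how close to the quota you want to be warned at.
function shouldAlarm(bucketCount, quota, thresholdPct) {
  return bucketCount >= Math.floor(quota * thresholdPct);
}

// Inside the scheduled Lambda you would roughly do (AWS SDK v2):
//   const { Buckets } = await s3.listBuckets({}).promise();
//   await cloudwatch.putMetricData({
//     Namespace: 'Custom/S3',
//     MetricData: [{ MetricName: 'BucketCount', Value: Buckets.length }],
//   }).promise();

console.log(shouldAlarm(85, 100, 0.8)); // true: 85 >= 80
console.log(shouldAlarm(50, 100, 0.8)); // false
```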
For other services' quotas you may be able to take advantage of the Trusted Advisor API if you have a Business or Enterprise support plan; however, it only covers specific quotas for certain services.
If your application is running on node.js, you can get the number of buckets using the following code:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.listBuckets({}, (err, data) => {
  if (err) console.log(err);
  else console.log(data.Buckets.length);
});
It appears that:
You are providing customers with credentials associated with an IAM User (not a good practice because generally IAM User credentials are for your internal staff, not external entities)
You want to allow customers to upload data to Amazon S3
I would recommend:
Use one Amazon S3 bucket
Allow customers to access their own folder (Prefix) within the bucket
This can be done by creating a bucket policy that uses IAM Policy Variables, which can automatically insert the username into the policy. This allows one policy to apply differently for every user.
Here is an example from IAM policy elements: Variables and tags - AWS Identity and Access Management:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket"],
      "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket/${aws:username}/*"]
    }
  ]
}
This way, users can access their own folder, but cannot access other users' folders.
Related
Coming from Google Cloud Platform, I'm struggling to give an external team access to perform actions within their own environment (GCP has the concept of a project; I can't find an equivalent in AWS).
My goal is to let an external team create EC2 instances and S3 buckets, but only view, interact with, and manage the resources they have created themselves.
What I have done so far is that I have created a group and 2 users belonging to this group. In this group I have added full access to EC2 and S3.
I'm now trying to restrict these permissions to their own resources. How can this be achieved?
To restrict users to specific resources owned by the group, you will need to create an IAM policy that restricts access based on resource tags, or, in the case of S3, includes the resource ARN in the policy document. I suggest trying the following.
Note: "*" represents a wildcard. I have added sample actions to the permissions; you can add more as your requirements dictate. You can also use the AWS Policy Generator tool to produce the exact JSON policy document.
AWS Policy Generator
EC2
Create a policy that restricts users to EC2 instances carrying the tag Name=ExternalUser.
You can change the tag as needed; the below is only for reference.
{
  "Sid": "EC2RestrictedAccess",
  "Action": [
    "ec2:Describe*"
  ],
  "Effect": "Allow",
  "Resource": "*",
  "Condition": {
    "StringLike": {
      "aws:ResourceTag/Name": "ExternalUser"
    }
  }
}
S3 bucket
For S3, you can restrict access based on the bucket's ARN, and further restrict it to subfolders (prefixes).
{
  "Sid": "S3BucketRestrictedAccess",
  "Action": [
    "s3:ListBucket",
    "s3:Put*",
    "s3:CreateBucket"
  ],
  "Resource": [
    "arn:aws:s3:::*your_restricted_external_bucket*",
    "arn:aws:s3:::*your_restricted_external_bucket*/*yourfolder*"
  ],
  "Effect": "Allow"
}
We are a humble startup that mines data from across the Internet and puts it in an Amazon S3 bucket to share with the world. For now we have 2TB of data, and we may soon reach the 20TB mark.
Our subscribers will be able to download all the data from our Amazon S3 bucket. We apparently have to opt for Requester Pays for the bandwidth, unless we want to end up with some heartbreaking S3 bills.
Pre-signed URLs are not an option because they don't seem to allow auditing bandwidth usage in real time, and are thus vulnerable to download abuse.
After some research this seems to be the way to grant different AWS accounts the needed permissions to access our bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Permissions to foreign account 1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ForeignAccount-ID-1:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::ourbucket"
      ]
    },
    {
      "Sid": "Permissions to foreign account 2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ForeignAccount-ID-2:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::ourbucket"
      ]
    },
    {
      "Sid": "Permissions to foreign account 3",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ForeignAccount-ID-3:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::ourbucket"
      ]
    },
    ......
  ]
}
Wherein each ForeignAccount-ID-x is a subscriber's account ID, e.g. 222222222222 (account IDs in ARNs contain no hyphens).
However the issue is, we may potentially have tens of thousands or even more subscribers to this bucket.
Is this the right and efficient way to add permissions for them to access this bucket?
Would it pose any performance difficulties to this bucket considering each request would go through this mountainous bucket policy?
Any better solutions for this problem?
Your requirement for Amazon S3 Requester Pays buckets is understandable, but it leads to other limitations.
Users will need their own AWS accounts to authenticate; this will not work with federated logins such as Amazon Cognito. Pre-signed URLs don't help either, because they too are generated from an AWS account.
Additionally, bucket policies are limited to 20KB, and ACLs are limited to 100 grants.
So, this approach seems unlikely to work.
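As a rough illustration of why the 20KB limit bites, the sketch below builds a bucket policy with one Allow statement per subscriber account (mirroring the policy in the question) and counts how many statements fit. The statement shape is an approximation, not an exact measurement of any real policy.

```javascript
// Build one per-account Allow statement, like the policy in the question.
function statementFor(accountId) {
  return {
    Sid: `Permissions to account ${accountId}`,
    Effect: 'Allow',
    Principal: { AWS: `arn:aws:iam::${accountId}:root` },
    Action: ['s3:GetBucketLocation', 's3:ListBucket'],
    Resource: ['arn:aws:s3:::ourbucket'],
  };
}

// Serialized size in bytes (ASCII) of a policy with n such statements.
function policySize(n) {
  const policy = {
    Version: '2012-10-17',
    Statement: Array.from({ length: n }, (_, i) =>
      statementFor(String(100000000000 + i))),
  };
  return JSON.stringify(policy).length;
}

// Find the largest n whose serialized policy stays within 20KB (20480 bytes).
let n = 0;
while (policySize(n + 1) <= 20480) n++;
console.log(n); // a number on the order of 100 — nowhere near tens of thousands
```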
Another option would be to create a mechanism where your system can push content to another user's AWS account. They would need to provide a destination bucket and some form of access (eg an IAM Role that can be assumed) and your application could copy files to their bucket. However, this could be difficult for regularly-published data.
Another option would be to allow access to the content only from within the same AWS Region. Thus, users would be able to read and process the data in AWS using services such as Amazon EMR. They could write applications on EC2 that access the data in Amazon S3. They would be able to copy the data to their own buckets. The only thing they cannot do is access the data from outside AWS. This would eliminate Data Transfer costs. The data could even be provided in multiple regions to serve worldwide users.
A final option would be to propose your dataset to the AWS Public Dataset Program, which will cover the cost of storage and data transfer for "publicly available high-value cloud-optimized datasets".
There are some CSV data files I need to fetch from S3 buckets in a series of AWS accounts belonging to a third party. The owner of those accounts has created a role in each account that grants me access to the files. Using the AWS web console (logged in to my own account), I can switch to each role in turn, get that account's files, then move on to the next account and get those files, and so on.
I'd like to automate this process.
It looks like AWS Glue can do this, but I'm having trouble with the permissions.
What I need it to do is create permissions so that an AWS Glue crawler can switch to the right role (belonging to each of the other AWS accounts) and get the data files from the S3 bucket of those accounts.
Is this possible and if so how can I set it up? (e.g. what IAM roles/permissions are needed?) I'd prefer to limit changes to my own account if possible rather than having to ask the other account owner to make changes on their side.
If it's not possible with Glue, is there some other easy way to do it with a different AWS service?
Thanks!
(I've had a series of tries but I keep getting it wrong - my attempts are so far from being right that there's no point in me posting the details here).
Yes, you can automate your scenario with Glue by following these steps:
Create an IAM role in your AWS account. The role's name must start with AWSGlueServiceRole, but you can append whatever you like. Add a trust relationship for Glue, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "glue.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Attach two IAM policies to your IAM role. The AWS managed policy named AWSGlueServiceRole and a custom policy that provides the access needed to all the target cross account S3 buckets, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket1",
        "arn:aws:s3:::examplebucket2",
        "arn:aws:s3:::examplebucket3"
      ]
    },
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::examplebucket1/*",
        "arn:aws:s3:::examplebucket2/*",
        "arn:aws:s3:::examplebucket3/*"
      ]
    }
  ]
}
Add a bucket policy to each target bucket that allows your IAM role the same S3 access you granted it in your own account, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::your_account_number:role/AWSGlueServiceRoleDefault"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::examplebucket1"
    },
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::your_account_number:role/AWSGlueServiceRoleDefault"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket1/*"
    }
  ]
}
Finally, create Glue crawlers and jobs in your account (in the same regions as the target cross account S3 buckets) that will ETL the data from the cross account S3 buckets to your account.
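If you script that last step rather than using the console, the crawler definition might be built like this sketch. The crawler name, database name, and bucket path are placeholders; the resulting object is what you would pass to the AWS SDK's glue.createCrawler call.

```javascript
// Parameters for a glue.createCrawler call pointing at a cross-account bucket.
// All names and paths below are placeholders.
function crawlerParams(name, roleName, bucketPath) {
  return {
    Name: name,
    Role: roleName, // e.g. the AWSGlueServiceRoleDefault role created earlier
    DatabaseName: 'cross_account_data',
    Targets: { S3Targets: [{ Path: bucketPath }] },
  };
}

const params = crawlerParams('examplebucket1-crawler',
  'AWSGlueServiceRoleDefault', 's3://examplebucket1/');
console.log(params.Targets.S3Targets[0].Path); // s3://examplebucket1/
```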
Using the AWS CLI, you can create named profiles for each of the roles you want to switch to, then refer to them from the CLI. You can then chain these calls, referencing the named profile for each role, and include them in a script to automate the process.
From Switching to an IAM Role (AWS Command Line Interface)
A role specifies a set of permissions that you can use to access AWS resources that you need. In that sense, it is similar to a user in AWS Identity and Access Management (IAM). When you sign in as a user, you get a specific set of permissions. However, you don't sign in to a role, but once signed in as a user you can switch to a role. This temporarily sets aside your original user permissions and instead gives you the permissions assigned to the role. The role can be in your own account or any other AWS account. For more information about roles, their benefits, and how to create and configure them, see IAM Roles, and Creating IAM Roles.
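For example, assuming the third party named their role ThirdPartyDataAccess in each account (the role names and account IDs below are placeholders), the named profiles might look like:

```ini
# ~/.aws/config — one named profile per third-party account
[profile account-a]
role_arn = arn:aws:iam::111111111111:role/ThirdPartyDataAccess
source_profile = default

[profile account-b]
role_arn = arn:aws:iam::222222222222:role/ThirdPartyDataAccess
source_profile = default
```

You could then script the downloads, running e.g. `aws s3 cp s3://their-bucket/data.csv . --profile account-a` for each profile in turn.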
You can achieve this with AWS Lambda and CloudWatch Events rules.
Create a Lambda function with a role attached to it; let's call this role Role A. Depending on the number of accounts, you can either create one function per account, with a single CloudWatch rule triggering all of them, or one function for all accounts (be mindful of AWS Lambda's limits).
Creating Role A
Create an IAM role (Role A) with the following policy, allowing it to assume the roles given to you by the other accounts containing the data.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1509358389000",
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": [
        "<IAM role ARN from data account 1>",
        "<IAM role ARN from data account 2>",
        ....
      ]
    }
  ]
}
(List every IAM role ARN from the accounts containing the data; if you have one function per account, you can instead give each function its own role.)
You will also need to make sure the appropriate trust relationships are in place: Role A must trust the Lambda service, and the roles in the data accounts must trust your account (or Role A specifically).
Attach Role A to the Lambda functions you will be running; you can use the Serverless Framework for development.
Now your Lambda function has Role A attached, and Role A has sts:AssumeRole permission over the roles created in the other accounts.
Assuming you created one function per account: in your Lambda's code, first use STS to assume the other account's role, obtain temporary credentials, and pass these to the S3 client options before fetching the required data.
If you created one function for all accounts, keep the role ARNs in an array and iterate over it; again, be mindful of AWS Lambda's limits.
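The per-account loop might look like the sketch below. The role ARNs are placeholders, and the actual sts.assumeRole / S3 calls are shown only in comments because they need real credentials; only the parameter-building helper is concrete.

```javascript
// Map each data account to the role the Lambda must assume there.
// These ARNs are placeholders.
const roleArns = [
  'arn:aws:iam::111111111111:role/DataAccessRole',
  'arn:aws:iam::222222222222:role/DataAccessRole',
];

// Build the parameters for an sts.assumeRole call.
function assumeRoleParams(roleArn) {
  return { RoleArn: roleArn, RoleSessionName: 'cross-account-fetch' };
}

// In the Lambda body you would iterate roughly like this (AWS SDK v2):
//   for (const arn of roleArns) {
//     const { Credentials } = await sts.assumeRole(assumeRoleParams(arn)).promise();
//     const s3 = new AWS.S3({
//       accessKeyId: Credentials.AccessKeyId,
//       secretAccessKey: Credentials.SecretAccessKey,
//       sessionToken: Credentials.SessionToken,
//     });
//     // ...fetch the required objects with this per-account client
//   }

console.log(assumeRoleParams(roleArns[0]).RoleArn);
```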
For security reasons, we have pre-prod and prod AWS accounts. We're now beginning to use IAM roles for S3 access to js/css files through django-storages / boto.
While this works correctly on a per-account basis, a need has now arisen for the QA instance to access one S3 bucket in the prod account.
Is there a way to have one IAM role that can grant access to both the pre-prod and prod S3 buckets? As I write this it seems impossible, but it never hurts to ask!
Here's the AWS doc on this: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
Essentially, you have to delegate permissions to one account from the other account using the Principal block of your Bucket's IAM policy, and then set up your IAM user in the second account as normal.
Example bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account-ID>:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>"
      ]
    }
  ]
}
This works well for read-only access, but there can be issues with write access. Primarily, the account writing the object will still be the owner of that object. When dealing with Write permissions, you'll usually want to make sure the account owning the bucket still has the ability to access objects written by the other account, which requires the object to be written with a particular header: x-amz-grant-full-control
You can set up your bucket policy so that the bucket will not accept cross-account objects that do not supply this header. There's an example of that at the bottom of this page: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html (under "Granting Cross-Account Permissions to Upload Objects While Ensuring the Bucket Owner Has Full Control")
This makes use of a conditional Deny clause in the bucket policy, like so:
{
  "Sid": "112",
  "Effect": "Deny",
  "Principal": {"AWS": "1111111111"},
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::examplebucket/*",
  "Condition": {
    "StringNotEquals": {"s3:x-amz-grant-full-control": ["emailAddress=xyz@amazon.com"]}
  }
}
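On the uploader's side, satisfying that condition means sending the grant with each write. Here is a sketch of the putObject parameters; the bucket, key, and canonical user ID below are placeholders (the ID shown is the example one from the AWS docs).

```javascript
// Parameters for an s3.putObject call that grants the bucket owner
// full control of the new object. The canonical user ID is a placeholder.
function crossAccountPutParams(bucket, key, body, ownerCanonicalId) {
  return {
    Bucket: bucket,
    Key: key,
    Body: body,
    GrantFullControl: `id=${ownerCanonicalId}`,
  };
}

const params = crossAccountPutParams('examplebucket', 'report.csv', 'data',
  '79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be');
console.log(params.GrantFullControl); // "id=" followed by the canonical user ID
```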
I generally avoid cross-account object writes, myself...they are quite fiddly to set up.
Is it possible to expose an Amazon S3 bucket (shared via ACL settings) to users set up with the new AWS IAM API under a different account?
I'm able to create a working IAM policy when the users and objects belong to a single account. But this no longer seems to work when two different accounts are involved, despite account 2 being able to access account 1's bucket directly.
A sample policy is:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::test1234.doom",
        "arn:aws:s3:::test.doom"
      ],
      "Condition": {}
    }
  ]
}
In this case the IAM user is able to list the test.doom bucket (owned by the same AWS account) but not the test1234.doom bucket (owned by the different AWS account), despite one account having the correct ACL permissions to access the other's bucket.
It looks like this can't be done.
http://aws.amazon.com/iam/faqs/#Will_users_be_able_to_access_data_controlled_by_AWS_Accounts_other_than_the_account_under_which_they_are_defined
Although it looks like in the future they might be allowed to create data under another account.
http://aws.amazon.com/iam/faqs/#Will_users_be_able_to_create_data_under_AWS_Accounts_other_than_the_account_under_which_they_are_defined