Create team/group with access to own resources only - amazon-web-services

Coming from Google Cloud Platform, I'm struggling to give access to an external team to perform some actions within their own environment (in GCP there is the concept of a project; I can't find an equivalent concept in AWS).
My goal is to give access to an external team so they can create EC2 instances and S3 buckets but can only view, interact with, and manage their own resources (the EC2 instances and S3 buckets they have created).
What I have done so far is that I have created a group and 2 users belonging to this group. In this group I have added full access to EC2 and S3.
I'm now trying to restrict these permissions to their own resources. How can this be achieved?

To restrict users to the specific resources that the group owns, you will need to create an IAM policy that restricts access based on tags on the resource or, in the case of S3, lists the resource ARN in the policy document. I suggest trying the following.
Note: "*" represents a wildcard character. I have added sample actions to the permissions; you can add more as per your requirements. You can also refer to the AWS Policy Generator tool to get the exact JSON policy document.
AWS Policy Generator
EC2
Create a policy for EC2 that restricts users to instances carrying the tag Name=ExternalUser.
You can change the tag as per your requirements; the statement below is only for reference.
{
    "Sid": "EC2RestrictedAccess",
    "Action": [
        "ec2:Describe*"
    ],
    "Effect": "Allow",
    "Resource": "*",
    "Condition": {
        "StringLike": {
            "aws:ResourceTag/Name": "ExternalUser"
        }
    }
}
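For this condition to match, the instances the team works with need to actually carry that tag. As a hedged illustration (the AMI ID and instance type below are placeholders, not values from the question), a launch that applies the tag could look like:

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ExternalUser}]'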
S3 bucket
For S3 you can restrict access based upon the ARN of the S3 bucket. You can also further restrict it to subfolders (prefixes).
{
    "Sid": "S3BucketRestrictedAccess",
    "Action": [
        "s3:ListBucket",
        "s3:Put*",
        "s3:CreateBucket"
    ],
    "Resource": [
        "arn:aws:s3:::*your_restricted_external_bucket*",
        "arn:aws:s3:::*your_restricted_external_bucket*/*yourfolder*"
    ],
    "Effect": "Allow"
}
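Assuming the placeholder bucket and folder names above, a user attached to this policy could upload into the permitted folder but nowhere else, for example:

aws s3 cp backup.zip s3://your_restricted_external_bucket/yourfolder/backup.zip   # allowed
aws s3 cp backup.zip s3://some-other-bucket/backup.zip                            # AccessDenied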

Related

Monitor Number of S3 Buckets in account

There is a limit of 100 buckets per AWS account. My application is creating buckets when certain conditions are met. Is there a mechanism to monitor the number of buckets created in my account? I would like to alarm/get notified before I reach the 100 bucket limit.
Edit: The plan is to create a prefix per customer and grant access to each prefix using a resource (bucket) policy. Customers would upload objects only to the prefix they have access to. We would update the resource policy every time we create a new prefix; a sample policy is shown below. Once we hit the size limit on the bucket's resource policy, we would then need to create a new bucket.
"Statement": [
    {
        "Sid": "AllowGetObject",
        "Effect": "Allow",
        "Principal": {
            "AWS": "123456789012"
        },
        "Action": "s3:PutObject",
        "Resource": [
            "arn:aws:s3:::TestBucketName/123456789012/*",
            "arn:aws:s3:::TestBucketName/123456789012"
        ]
    }
]
Unfortunately, for S3 there is no AWS-backed solution that performs all of the actions needed to monitor the bucket count.
To do this you would need to build your own solution; the below is one suggestion for covering this problem:
Use a Lambda function to call the list-buckets function, counting the total number of buckets in your account, and push the value to CloudWatch as a custom metric (a sketch of this step is shown after this list).
Create a CloudWatch alarm for this metric based on a specific threshold.
Create a Lambda function and use the list-service-quotas function to get your service quotas for S3 buckets. Use this to update the alarm thresholds.
Set both of these Lambda functions on a scheduled CloudWatch event.
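As a rough sketch of that first step, assuming the AWS SDK for JavaScript v2 and made-up metric names (namespace Custom/S3, metric BucketCount, neither of which is an AWS-defined value), the counting Lambda could look like:

// Counts S3 buckets and publishes the count as a custom CloudWatch metric.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const cloudwatch = new AWS.CloudWatch();

exports.handler = async () => {
    const { Buckets } = await s3.listBuckets().promise();
    await cloudwatch.putMetricData({
        Namespace: 'Custom/S3',              // placeholder namespace
        MetricData: [{
            MetricName: 'BucketCount',       // placeholder metric name
            Value: Buckets.length,
            Unit: 'Count'
        }]
    }).promise();
    return Buckets.length;
};

A CloudWatch alarm on that custom metric then covers the notification part.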
For other services' quotas you might be able to take advantage of the Trusted Advisor API if you are on a Business or Enterprise support plan; however, this only covers specific quotas for certain services.
If your application is running on node.js, you can get the number of buckets using the following code:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.listBuckets({}, (err, data) => {
    if (err) console.log(err);
    else console.log(data.Buckets.length);
});
It appears that:
You are providing customers with credentials associated with an IAM User (not a good practice because generally IAM User credentials are for your internal staff, not external entities)
You want to allow customers to upload data to Amazon S3
I would recommend:
Use one Amazon S3 bucket
Allow customers to access their own folder (Prefix) within the bucket
This can be done by creating a policy that uses IAM Policy Variables, which automatically insert the username into the policy. This allows one policy to apply differently to every user.
Here is an example from IAM policy elements: Variables and tags - AWS Identity and Access Management:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::mybucket"],
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::mybucket/${aws:username}/*"]
        }
    ]
}
This way, users can access their own folder, but cannot access other users' folders.
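For example, with a hypothetical IAM user named alice, the ${aws:username} variable resolves so that only her own prefix is reachable:

aws s3 cp report.csv s3://mybucket/alice/report.csv   # allowed
aws s3 cp report.csv s3://mybucket/bob/report.csv     # AccessDenied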

Ways to provide a user group access to a S3 bucket

I'm restricting bucket access to my VPC endpoints. I have a bucket, say test-bucket, and I have added the policy below so that access is restricted to traffic coming through the VPC endpoints only:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Access From Dev, QA Account",
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": "arn:aws:iam::x:root"
            },
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::test-bucket",
                "arn:aws:s3:::test-bucket/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpce": [
                        "vpce-1234",
                        "vpce-1235"
                    ]
                }
            }
        }
    ]
}
This policy blocks console and awscli access for all users and allows only instances in the VPC to reach the S3 bucket. I have a user group called D which consists of 40 users. I cannot add the group ARN to the principal because AWS doesn't support it, but it is tedious to add all 40 users to the bucket policy. We are denying all other traffic because we are making our objects public: this bucket is used as a yum repo and has to be available over HTTPS for the instances to download from during a yum install/update. Kindly advise on how to give access using that user group D, or whether there is any other way to provide these users access.
An IAM group is not a principal, which means you would be limited to listing the ARN of each IAM user in this specific condition.
As a workaround you could create an IAM role that can be assumed either through the console or via the CLI. Then ensure that the S3 bucket policy references that IAM role instead. Finally, allow the users in the group to assume the IAM role.
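A minimal sketch of that workaround, assuming a hypothetical role named s3-yum-access in the same account x: attach a policy like the one below to group D so its members can assume the role, and then reference the role in the bucket policy (for example through an aws:userid condition keyed on the role's ID, AROA...:*) instead of listing 40 individual users.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGroupDToAssumeRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::x:role/s3-yum-access"
        }
    ]
}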

Use AWS EC2 tag to determine S3 access policy

I have multiple AWS EC2 instances that have unique name tags of the form ***-manager (three unique characters then -manager).
I have several S3 buckets (and sub-folders in them) with similar 3-character id's in their names that I need to restrict access to depending on which EC2 is asking.
How could I write a single AWS policy to attach to every EC2 that would do the following:
The bucket docker.***.mysite.com should only be accessible by the EC2 whose name tag has value ***-manager. Action is anything, i.e. *.
The folder downloads.mysite.com/***/ should only be accessible by the EC2 whose name tag has value ***-manager. The action is ListBucket and GetObject with a prefix restriction.
The folder downloads.mysite.com/common/ should be accessible by any EC2
No EC2 should have access to the root of downloads.mysite.com/ or know anything about it (i.e., it can't do any S3 action outside of the common folder and its own *** subfolder).
NOTE: If it's not easy/possible to extract the 3-letter id from the EC2 name tag to "pass" to the Resource part of the policy I can easily add a new tag to each EC2 that just has the *** as its value - but still have to "pass" that somehow to the Resource in the policy definition.
I don't think it will be possible to create one policy for multiple situations.
The closest method would be to use IAM Policy Elements: Variables - AWS Identity and Access Management, but that does not allow use of an arbitrary value nor can it be used to retrieve a tag from an EC2 instance.
I think you'll need to create separate Roles for each EC2 instance that refer to the specific S3 buckets.
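A hedged sketch of one such per-instance role policy, using a hypothetical three-character id abc and the bucket/folder names from the question (tighten or broaden the actions as needed):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "FullAccessToOwnDockerBucket",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::docker.abc.mysite.com",
                "arn:aws:s3:::docker.abc.mysite.com/*"
            ]
        },
        {
            "Sid": "ListOnlyOwnAndCommonPrefixes",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::downloads.mysite.com",
            "Condition": {
                "StringLike": {
                    "s3:prefix": ["abc/*", "common/*"]
                }
            }
        },
        {
            "Sid": "ReadOwnAndCommonObjects",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::downloads.mysite.com/abc/*",
                "arn:aws:s3:::downloads.mysite.com/common/*"
            ]
        }
    ]
}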
There seems to be no way to do this, because S3 IAM policies can't look back at the caller's tags.
What can be done, though, is to look at the tags of the IAM role assigned to the EC2 instance. Unfortunately the relevant condition on the GetObject action is s3:ExistingObjectTag/<key>, which works on the S3 object rather than on the S3 bucket itself.
A simplified policy using the above would look like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListBucket",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::<S3_BUCKET_NAME>"
            ]
        },
        {
            "Sid": "AllowROcheckTag",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::<S3_BUCKET_NAME>/*",
            "Condition": {
                "StringEquals": {
                    "s3:ExistingObjectTag/<TAG_ON_S3_OBJECT>": "${aws:PrincipalTag/<TAG_ON_IAM_ROLE_ASSIGNED_TO_EC2>}"
                }
            }
        }
    ]
}
In the above case it would be the IAM role assigned to the EC2 instance and its tags driving the access to the objects in the S3 bucket.
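For the condition above to ever match, the object tag and the role tag have to line up. As an illustration with hypothetical names (tag key team, value abc, role abc-manager-role), the tagging could be set up like this:

# Tag the IAM role attached to the EC2 instance profile
aws iam tag-role --role-name abc-manager-role --tags Key=team,Value=abc

# Tag an object in the bucket with the same key/value
aws s3api put-object-tagging --bucket <S3_BUCKET_NAME> --key some/object.txt --tagging 'TagSet=[{Key=team,Value=abc}]'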
Details about the allowed conditions for each specific S3 action are here.

How to lockdown S3 bucket to specific users and IAM role(s)

In our environment, all IAM user accounts are assigned a customer-managed policy that grants read-only access to a lot of AWS services. Here's what I want to do:
Migrate a sql server 2012 express database from on-prem to a RDS instance
Limit access to the S3 bucket containing the database files
Here are the requirements according to AWS:
A S3 bucket to store the .bak database file
A role with access to the bucket
SQLSERVER_BACKUP_RESTORE option attached to RDS instance
So far, I've done the following:
Created a bucket under the name "test-bucket" (and uploaded the .bak file here)
Created a role under the name "rds-s3-role"
Created a policy under the name "rds-s3-policy" with these settings:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::test-bucket/"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectMetaData",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::test-bucket/*"
        }
    ]
}
Assigned the policy to the role
Gave the AssumeRole permissions to the RDS service to assume the role created above (a sample trust policy is sketched after this list)
Created a new option group in RDS with the SQLSERVER_BACKUP_RESTORE option and linked it to my RDS instance
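For reference, granting the RDS service the ability to assume the role is done through the role's trust relationship; a minimal trust policy would look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rds.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}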
With no restrictions on my S3 bucket, I can perform the restore just fine; however, I can't find a solid way of restricting access to the bucket without hindering the RDS service from doing the restore.
In terms of my attempts to restrict access to the S3 bucket, I found a few posts online recommending using an explicit Deny statement to deny access to all types of principals and grant access based on some conditional statements.
Here's the contents of my bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1486769843194",
    "Statement": [
        {
            "Sid": "Stmt1486769841856",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::test-bucket",
                "arn:aws:s3:::test-bucket/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userid": [
                        "<root_id>",
                        "<user1_userid>",
                        "<user2_userid>",
                        "<user3_userid>",
                        "<role_roleid>:*"
                    ]
                }
            }
        }
    ]
}
I can confirm the bucket policy does restrict access to only the IAM users that I specified, but I am not sure how it treats IAM roles. I used the :* syntax above per a document I found on the AWS forums, where the author stated that ":*" is a catch-all for every principal that assumes the specified role.
The only thing I'm having a problem with is, with this bucket policy in place, when I attempt to do the database restore, I get an access denied error. Has anyone ever done something like this? I've been going at it all day and haven't been able to find a working solution.
The following, admittedly, is guesswork... but reading between the lines of the somewhat difficult to navigate IAM documentation and elsewhere, and taking into account the way I originally interpreted it (incorrectly), I suspect that you are using the role's name rather than the role's ID in the policy.
Role IDs look similar to AWSAccessKeyIds except that they begin with AROA....
For the given role, find RoleId in the output from this:
$ aws iam get-role --role-name ROLE-NAME
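If that is the issue, the fix is to put the RoleId value (not the role name) into the aws:userid condition. For example, with a made-up role ID of AROAEXAMPLE123456789, the last entry in the list above would become:

"AROAEXAMPLE123456789:*"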
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
Use caution when creating a broad Deny policy. You can end up denying s3:PutBucketPolicy to yourself, which leaves you in a situation where your policy prevents you from changing the policy... in which case, your only recourse is presumably to persuade AWS support to remove the bucket policy. A safer configuration would be to use this to deny only the object-level permissions.

Amazon S3 Bucket Policy: How to lock down access to only your EC2 Instances

I am looking to lock down an S3 bucket for security purposes - I'm storing deployment images in the bucket.
What I want to do is create a bucket policy that supports anonymous downloads over http only from EC2 instances in my account.
Is there a way to do this?
An example of a policy that I'm trying to use (it won't allow itself to be applied):
{
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::[my bucket name]",
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:ec2:us-east-1:[my account id]:instance/*"
                }
            }
        }
    ]
}
Just to clarify how this is normally done: you create an IAM policy, attach it to a new or existing role, and decorate the EC2 instance with the role. You can also provide access through bucket policies, but that is less precise.
Details below:
S3 buckets are default deny for everyone except the owner. So you create your bucket and upload the data. You can verify with a browser that the files are not accessible by trying https://s3.amazonaws.com/MyBucketName/file.ext. It should come back with the error code "Access Denied" in the XML. If you get an error code of "NoSuchBucket", you have the URL wrong.
Create an IAM policy based on arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess. It starts out looking like the snip below. Take a look at the "Resource" key, and note that it is set to a wildcard. You just modify this to be the ARN of your bucket. You need one entry for the bucket and one for its contents, so it becomes: "Resource": ["arn:aws:s3:::MyBucketName", "arn:aws:s3:::MyBucketName/*"]
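That managed policy looks roughly like the following (quoted from memory, so the exact action list may differ slightly):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "*"
        }
    ]
}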
Now that you have a policy, what you want to do is decorate your instances with an IAM Role that automatically grants them this policy, all without any authentication keys having to be on the instance. So go to Roles, create a new role, make it an Amazon EC2 role, attach the policy you just created, and your role is ready.
Finally you create your instance and add the IAM role you just created. If the machine already has its own role, you just have to merge the two roles into a new one for the machine. If the machine is already running, it won't get the new role until you restart it.
Now you should be good to go. The machine has the rights to access the S3 share. You can use the following command to copy files to your instance. Note that you have to specify the region:
aws s3 cp --region us-east-1 s3://MyBucketName/MyFileName.tgz /home/ubuntu
Please note: "security through obscurity" is only a thing in the movies. Either something is provably secure, or it is insecure.
I used something like
{
    "Version": "2012-10-17",
    "Id": "Allow only My VPC",
    "Statement": [
        {
            "Sid": "Allow only My VPC",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::{BUCKET_NAME}",
                "arn:aws:s3:::{BUCKET_NAME}/*"
            ],
            "Condition": {
                "StringLike": {
                    "aws:sourceVpc": "{VPC_ID}"
                }
            }
        }
    ]
}
(Use "aws:sourceVpce": "{VPCe_ENDPOINT}" in the condition instead if you want to match a specific VPC endpoint rather than the whole VPC.)