Access S3 bucket from all instances having a particular IAM role [duplicate] - amazon-web-services

I need to fire up an S3 bucket so my EC2 instances have access to store image files to it. The EC2 instances need read/write permissions. I do not want to make the S3 bucket publicly available, I only want the EC2 instances to have access to it.
The other gotcha is my EC2 instances are managed by OpsWorks, and I can have many different instances being fired up depending on load/usage. If I were to restrict it by IP, I may not always know the IPs the EC2 instances have. Can I restrict by VPC?
Do I have to make my S3 bucket enabled for static website hosting?
Do I need to make all files in the bucket public as well for this to work?

You do not need to make the bucket publicly readable, nor the files publicly readable. The bucket and its contents can be kept private.
Don't restrict access to the bucket based on IP address; instead, restrict it based on the IAM role the EC2 instance is using.
Create an IAM EC2 Instance role for your EC2 instances.
Run your EC2 instances using that role.
Give this IAM role a policy to access the S3 bucket.
For example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my_bucket",
        "arn:aws:s3:::my_bucket/*"
      ]
    }
  ]
}
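If you manage policies in code rather than pasting JSON into the console, the document above can be generated with a short script. A minimal sketch in Python; `my_bucket` is the placeholder bucket name from the example:

```python
import json

BUCKET = "my_bucket"  # placeholder bucket name from the example above

# IAM role policy granting full S3 access to one bucket and its objects.
# Note the bucket ARN and the object ARN (with /*) are distinct resources:
# s3:ListBucket acts on the former, s3:GetObject/s3:PutObject on the latter,
# which is why both must appear in Resource.
role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}

print(json.dumps(role_policy, indent=2))
```

The printed document can be attached to the role with the console or `aws iam put-role-policy`.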
If you want to restrict access to the bucket itself, try an S3 bucket policy.
For example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::111122223333:role/my-ec2-role"]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my_bucket",
        "arn:aws:s3:::my_bucket/*"
      ]
    }
  ]
}
Additional information: http://blogs.aws.amazon.com/security/post/TxPOJBY6FE360K/IAM-policies-and-Bucket-Policies-and-ACLs-Oh-My-Controlling-Access-to-S3-Resourc

This can be done very simply. Follow these steps:
Open the EC2 console.
Select the instance and navigate to Actions.
Select Instance Settings, then Attach/Replace IAM Role.
Create a new role and attach the AmazonS3FullAccess policy to that role.
When this is done, connect to the instance; the rest can be done with the following CLI command:
aws s3 cp filelocation/filename s3://bucketname
Please note: filelocation/filename refers to the local path of the file, and bucketname is the name of your bucket.
Also note: this works as described when your instance and S3 bucket are in the same account.
Cheers.

An IAM role is the solution for you.
You need to create a role with S3 access permissions; if the EC2 instance was started without any role, you will have to rebuild it with that role assigned.
Refer: Allowing AWS OpsWorks to Act on Your Behalf

Related

Giving customer AWS access to my AWS's specific s3 bucket?

How do I grant a customer read/write access to a specific S3 bucket in my AWS account without giving them access to any other buckets or resources?
They should be able to access this bucket from a PowerShell script on some EC2 instance of theirs.
I found this policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyForBucketX",
  "Statement": [
    {
      "Sid": "AllowCustomerRWAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:root"
      },
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::bucket-x/*"
    }
  ]
}
Giving customer AWS access to my AWS's specific s3 bucket?
With this, they might be able to access S3 via their access key in PowerShell. However, they might not be using a hardcoded access key for S3; they might be using STS with an instance role on the EC2 instance to access their S3 resources.
Would this still work? Would they then have to add my bucket-x to their instance role's permissions?
Is there a better way? I might or might not have details of their AWS resource IDs.
With a bucket policy and an IAM policy (either for a user or a role) you can restrict users/resources based on the requirement.
I agree with Maurice here, as the extent of the restriction heavily depends on what you specifically want to do.
You can also use CloudFront and restrict access to your bucket objects for users not managed by IAM.
In general, you should think of access as a two-part task. On the side of the resource, you grant permissions to the resource; in this case you are doing that for a specific bucket (resource) for a cross-account principal. You're done.
Now, the identity that will access it also needs permissions given to it by the account administrator (root) in the same way, i.e. grant the user/role the permissions to
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
If they would like to use an instance which has AWS Tools for PowerShell installed, they can create an instance profile/role that has the above permissions, and they will be able to run the commands and access your bucket. That's the right way to do it.
Regardless of how they access the instance, when they make the API call from the instance to your bucket, AWS first checks whether the caller (which could be an instance profile or a role they assumed) has permissions for these actions (the customer's setup). It then checks whether the resource allows these actions (your setup).
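The two-part setup described above can be sketched as a pair of policy documents. This is a sketch, not a definitive implementation: the customer account ID and bucket name are the placeholders from the example policy, and the customer would attach the second document to whatever user, role, or instance profile makes the calls:

```python
import json

CUSTOMER_ACCOUNT = "123456789012"   # placeholder customer account id
BUCKET = "bucket-x"                 # bucket name from the example policy
ACTIONS = ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"]

# Part 1 (your side): bucket policy on bucket-x trusting the customer account.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCustomerRWAccess",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{CUSTOMER_ACCOUNT}:root"},
        "Action": ACTIONS,
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

# Part 2 (customer side): identity policy on their user, role, or EC2
# instance profile granting the same actions on your bucket. Without this
# half, the cross-account call fails even though the bucket policy allows it.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ACTIONS,
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

print(json.dumps({"bucket_policy": bucket_policy,
                  "identity_policy": identity_policy}, indent=2))
```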

Restricting Access to S3 buckets from a VPC only

I have a few buckets in S3 where I want to limit access.
In the process of implementing this I am now confused, and I would appreciate your help in understanding it.
This is my scenario --
Created a VPC, a public subnet, and an EC2 instance.
Created a bucket using an admin user -- aws1234-la
Created a bucket policy and attached it to the bucket, allowing access only if the request comes from my VPC.
{
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::aws1234-la",
        "arn:aws:s3:::aws1234-la/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpc": "vpc-111bbb22"
        }
      }
    }
  ]
}
Next, from the CLI: aws s3 ls
It is displaying the buckets.
Where am I making a mistake?
Ideally step 4 should return an error, as I am not going through my VPC?
Any help will be highly appreciated.
Thanks
From Specifying Conditions in a Policy - Amazon Simple Storage Service:
The new condition keys aws:sourceVpce and aws:sourceVpc are used in bucket policies for VPC endpoints.
Therefore, you need to be accessing the S3 bucket via a VPC Endpoint to be able to restrict access to a VPC. This is because, without the VPC Endpoint, the request received by Amazon S3 simply appears to be coming "from the Internet", so S3 is not able to identify the source VPC. In contrast, a request coming via a VPC Endpoint includes an identifier of the source VPC.
Making it work
Assumption: You already have an IAM Policy on your user(s) that allows access to the bucket, and you want to know how to further restrict the bucket so that it is only accessible from a specific VPC. If this is not the case, then you should be using an Allow policy to grant access to the bucket, since access is denied by default.
To reproduce your situation, I did the following:
Created a new VPC with a public subnet
Added a VPC Endpoint to the VPC
Launched an Amazon EC2 instance in the public subnet, assigning an IAM Role that already has permission to access all of my Amazon S3 buckets
Created an Amazon S3 bucket (my-vpc-only-bucket)
Added a Bucket Policy to the bucket (from Example Bucket Policies for VPC Endpoints for Amazon S3):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::my-vpc-only-bucket",
        "arn:aws:s3:::my-vpc-only-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpc": "vpc-111bbb22"
        }
      }
    }
  ]
}
Please note that this policy assumes that the user(s) already have access to the bucket via an IAM Policy that grants Allow access. This policy is adding a Deny that will override the access they already have to the bucket.
Logged in to the Amazon EC2 instance in the new VPC and then:
Run aws s3 ls s3://my-vpc-only-bucket
It worked!
From my own computer on the Internet:
Run aws s3 ls s3://my-vpc-only-bucket
Received an AccessDenied error (which is what we want!)
By the way, the Deny policy will also prohibit your use of the Amazon S3 management console to manage the bucket because requests are not coming from the VPC. This is a side-effect of using Deny and s3:* on the bucket. You can always remove the bucket policy by using your root credentials (login via email address), then go to the Bucket Policy in the S3 console and click Delete. (You'll see some errors on the screen getting to the Bucket Policy, but it will work.)
Alternate method via Allow
If, on the other hand, the user(s) do not already have access to all Amazon S3 buckets, then by default they will not have access to the new bucket. Thus, you will need to grant Allow access to the bucket, but only from the VPC via the VPC Endpoint.
Setup is the same as above, but the Bucket Policy would be:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Allow",          <-- This changed
      "Resource": [
        "arn:aws:s3:::my-vpc-only-bucket",
        "arn:aws:s3:::my-vpc-only-bucket/*"
      ],
      "Condition": {
        "StringEquals": {         <-- This changed
          "aws:sourceVpc": "vpc-111bbb22"
        }
      }
    }
  ]
}
I then tested it with an IAM Role assigned to the EC2 instance that does not have any permissions to access Amazon S3
Ran aws s3 ls s3://my-vpc-only-bucket
It worked!
Ran from my own computer, using an IAM User that does not have any permissions to access Amazon S3
Received an AccessDenied error (which is what we want!)
Bottom line: You need to add a VPC Endpoint to the VPC.
Since you specified the resource at the bucket level, the policy denies all operations on the bucket and its contents. However, listing your buckets (aws s3 ls with no bucket name) acts on the resource arn:aws:s3:::*, which is not denied, so the bucket name will still be displayed even when you are not inside the VPC.
AFAIK, there is no way to hide just one bucket from the listing without blocking the listing of all buckets.
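The reason is that s3:ListAllMyBuckets only supports `"Resource": "*"` in a policy, so a statement covering it is all-or-nothing. A sketch of the only form such a statement can take (a deny like this would hide the listing, but for every bucket, not just one):

```python
import json

# s3:ListAllMyBuckets accepts only "*" as its resource, so a policy can
# allow or deny listing of ALL buckets -- there is no per-bucket variant
# that would hide a single bucket name from `aws s3 ls`.
list_statement = {
    "Effect": "Deny",
    "Action": "s3:ListAllMyBuckets",
    "Resource": "*",  # the only resource this action supports
}

print(json.dumps(list_statement, indent=2))
```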

How to restrict users to create a public S3 object into a private bucket

I have created an Amazon S3 bucket which is private in nature but I don't want my users to create any public object inside that bucket. How should I do that through an S3 policy of that bucket? I tried but I am getting an error that the policy has an invalid resource.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPublicReadGetObject",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        " arn:aws:s3:::aws-my-bucket-s3-vk/*"
      ]
    }
  ]
}
Users should be able to access the bucket and create new object/get the object from inside an EC2 instance with necessary permissions.
On the S3 console, after the bucket name there is an Access column, which should show "Bucket and objects not public".
You should enable Block Public Access on the S3 bucket. This will prevent anyone making objects public in that bucket. You should also restrict which IAM users/roles/policies have permission to modify the Block Public Access settings.
Amazon S3 block public access settings will override S3 bucket policies and object-level permissions to prevent public access.
All Amazon S3 buckets are private by default. Nobody can access/use the buckets or its contents unless they have been granted permission via an IAM Policy, Bucket Policy or object-level permission.
It appears that you want a particular Amazon EC2 instance to be able to use the bucket. Therefore:
Create an IAM Role
Grant the desired permissions to the IAM Role (eg PutObject and GetObject on that bucket)
Assign the IAM Role to the Amazon EC2 instance
Applications on the EC2 instance will then be able to access the bucket
There is no need to use a Deny policy here. Deny always overrides other policies, so it is better to use Allow policies that grant access only to the desired entities.
Here are some policy examples: User Policy Examples - Amazon Simple Storage Service
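The role policy from the bullets above can be written as a least-privilege document. A sketch; the bucket name is taken from the question, and the action list assumes the application only needs to put and get objects:

```python
import json

BUCKET = "aws-my-bucket-s3-vk"  # bucket name from the question

# Least-privilege identity policy for the EC2 instance role: only the
# object-level actions the application needs, not an s3:* wildcard.
# Note there is no leading space in the ARN -- S3 rejects resources
# that are not exact "arn:aws:s3:::..." strings.
role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

print(json.dumps(role_policy, indent=2))
```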

How to deny other users/roles from creating EC2 instances with an IAM role

I have an S3 bucket with confidential data.
I added a bucket policy to allow only a limited set of roles within the account. This stops other users from accessing the S3 bucket from the console.
One of the allowed roles, say "foo-role" is created for EC2 instances to read the S3 bucket.
Now, even the denied roles can create a VM, assign the "foo-role" to this VM, ssh into this VM and look at the bucket content.
Is there a way that I can prevent other users from assigning the "foo-role" to their EC2 instances?
Add this policy to your IAM Users. This policy will prevent a user from associating or replacing a role to an EC2 instance.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "ec2:AssociateIamInstanceProfile",
        "ec2:ReplaceIamInstanceProfileAssociation",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
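The policy above blocks those users from attaching any role to any instance. If you only need to protect "foo-role", a narrower variant scopes the iam:PassRole deny to that role's ARN, since iam:PassRole is evaluated against the ARN of the role being passed. A sketch; the account ID is a placeholder:

```python
import json

ACCOUNT_ID = "111122223333"  # placeholder account id
ROLE = "foo-role"            # the sensitive role from the question

# Narrower deny: users can still manage instance profiles for other
# roles, but cannot pass the sensitive role to an EC2 instance.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "iam:PassRole",
        "Resource": f"arn:aws:iam::{ACCOUNT_ID}:role/{ROLE}",
    }],
}

print(json.dumps(policy, indent=2))
```

This keeps the broad ec2:* denies out of the picture, at the cost of having to list each sensitive role explicitly.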
