How to give full access to an S3 bucket from an EC2 security group

I want to give full access to an S3 bucket from an EC2 security group, so that all the EC2 instances associated with that security group have full access to the bucket.
Also, am I thinking in the right direction, or is there another method I need to use?

Unfortunately, you cannot add an EC2 security group to a bucket policy.
However, if you want to give your instances access to S3, you would usually do this through instance roles. This way, every instance that needs S3 access gets the role that allows it attached.
Having the role, if you also want to use a bucket policy, you can control access by specifying the role's ARN as the principal in the bucket policy.
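A minimal Terraform sketch of both pieces, assuming hypothetical role and bucket names:

```hcl
# Role that EC2 instances assume via an instance profile.
resource "aws_iam_role" "s3_access" {
  name = "ec2-s3-access" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_instance_profile" "s3_access" {
  name = "ec2-s3-access"
  role = aws_iam_role.s3_access.name
}

# Bucket policy naming the role's ARN as the principal.
resource "aws_s3_bucket_policy" "allow_role" {
  bucket = "my-bucket" # hypothetical bucket
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = aws_iam_role.s3_access.arn }
      Action    = "s3:*"
      Resource  = ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"]
    }]
  })
}
```

Any instance launched with `iam_instance_profile = aws_iam_instance_profile.s3_access.name` then gets the bucket access, regardless of which security group it is in.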

I know it's a bit late, but for others wandering onto this question, see this post on Server Fault.
Tl;dr:
1. Create an access point for the bucket in S3, selecting the VPC that will access S3.
2. Then create an endpoint in that VPC to S3 -- it will ask you to select the route from your VPC routing table that will redirect traffic from the concerned instances to S3.
3. Lastly, create an outbound HTTPS rule in your security group with the VPC endpoint prefix list as your destination, e.g. pl-xxxx.
You can get the endpoint prefix list in the AWS console via VPC --> Managed Prefix Lists --> the prefix list that points to S3.
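If it helps, here is a rough Terraform sketch of steps 2 and 3; the VPC, route table, security group, and region are all assumptions:

```hcl
# Gateway endpoint for S3, attached to the route table used by the instances.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id              # assumed VPC
  service_name      = "com.amazonaws.us-east-1.s3" # adjust to your region
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id] # assumed route table
}

# Outbound HTTPS rule whose destination is the endpoint's pl-xxxx prefix list.
resource "aws_security_group_rule" "https_to_s3" {
  type              = "egress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.app.id # assumed instance SG
  prefix_list_ids   = [aws_vpc_endpoint.s3.prefix_list_id]
}
```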

Related

Lambda security group to S3 and Secrets Manager

I'm new to Terraform (TF) and AWS. I've created TF that creates an RDS cluster, a VPC and security groups, an S3 bucket, secrets in Secrets Manager (SM), as well as a lambda that accesses all of the above. I've attached the RDS VPC and security group to the lambda, so the code in the lambda can successfully access the RDS. My problem is that I need a security group that allows the lambda code to read from Secrets Manager (to get RDS user accounts) and from S3 (to get SQL scripts to execute on the RDS). So, a security group with outbound access to S3 and Secrets Manager.
How do I get Terraform to look up (via data sources) the details for SM and S3, and then use this information to create a security group that allows the lambda code to access SM and S3?
Currently I'm forcing my way through with "All to All on 0.0.0.0/0"; this will not be allowed in the production environment.
So, a security group with outbound access to S3 and Secrets Manager.
The easiest way would be to use an S3 VPC interface endpoint, not an S3 gateway endpoint. That way you have two interface endpoints, one for S3 and one for SM, and both will be associated with security groups that you create in your code (or with the VPC's default security group otherwise).
So, to limit your lambda's access to just S3 and SM, you simply reference the interface endpoints' security groups from your lambda's security group.
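A rough Terraform sketch of that wiring; the region, VPC, and subnet references are assumptions:

```hcl
# One SG for the interface endpoints, one for the lambda.
resource "aws_security_group" "endpoints" {
  name   = "vpce-endpoints"
  vpc_id = aws_vpc.main.id # assumed VPC
}

resource "aws_security_group" "lambda" {
  name   = "lambda"
  vpc_id = aws_vpc.main.id
}

# Interface endpoints for S3 and Secrets Manager.
resource "aws_vpc_endpoint" "s3" {
  vpc_id             = aws_vpc.main.id
  service_name       = "com.amazonaws.eu-west-1.s3" # adjust to your region
  vpc_endpoint_type  = "Interface"
  subnet_ids         = [aws_subnet.private.id] # assumed subnet
  security_group_ids = [aws_security_group.endpoints.id]
}

resource "aws_vpc_endpoint" "secretsmanager" {
  vpc_id              = aws_vpc.main.id
  service_name        = "com.amazonaws.eu-west-1.secretsmanager"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.private.id]
  security_group_ids  = [aws_security_group.endpoints.id]
  private_dns_enabled = true
}

# The lambda may only send HTTPS to the endpoints' SG -- no 0.0.0.0/0.
resource "aws_security_group_rule" "lambda_egress" {
  type                     = "egress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.lambda.id
  source_security_group_id = aws_security_group.endpoints.id
}

# The endpoints accept HTTPS from the lambda's SG.
resource "aws_security_group_rule" "endpoints_ingress" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.endpoints.id
  source_security_group_id = aws_security_group.lambda.id
}
```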

Is it possible to test a security group policy that restricts S3 access only to a single VPC without creating an EC2 instance?

The title is pretty self-explanatory: I want to lock down S3 access so that it is only allowed from a particular VPC. Is there a way of doing this without creating an EC2 instance to test it?
I want to lock down S3 access so that it is only allowed from a particular VPC
The AWS docs have an entire chapter devoted to limiting S3 access to a given VPC or VPC endpoint:
Controlling access from VPC endpoints with bucket policies
But be aware that if you apply these bucket policies, you will actually lock yourself out of the bucket. So you will have to use an EC2 instance or some other way of accessing the bucket from within the VPC; any other access will not be allowed.
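The policy from that chapter has roughly this shape; in Terraform it might look like the following, with the bucket name and endpoint ID as placeholders:

```hcl
# Deny every S3 action unless the request arrives through the given VPC endpoint.
resource "aws_s3_bucket_policy" "vpce_only" {
  bucket = "my-bucket" # placeholder
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyAccessExceptFromVpce"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource  = ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"]
      Condition = {
        StringNotEquals = { "aws:SourceVpce" = "vpce-1a2b3c4d" } # placeholder endpoint ID
      }
    }]
  })
}
```

This matches the lock-out behavior described above: any request that does not come through the named endpoint is denied.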

Restrict IAM users inside a VPC

I have a VPC in which users have access to confidential data and computing resources. For security and intellectual property reasons, this VPC should not allow users to leak information to the outside world (internet).
I need to expose AWS resources on this VPC for users: say allowing users to invoke a lambda function and access an S3 bucket.
To expose the S3 bucket, it was trivial. I created an S3 VPC endpoint and attached to it a policy granting s3:* only on the specific bucket. Then I opened access to this specific endpoint in the firewall. If I had granted access to s3.amazonaws.com as a whole, users could have created a personal bucket in a foreign AWS account and used it to leak data from within the VPC to their bucket.
Now, to expose a specific lambda function, I cannot reuse the same trick, since there is no VPC endpoint for Lambda.
How can I expose a specific lambda function in a VPC, without opening access to lambda.amazonaws.com as a whole, and thus allowing users to potentially use their own lambdas in a foreign account to leak information from the VPC?
In an ideal world, I would like to be able to restrict all IAM/STS inside this VPC to users of my account. That would prevent anyone from using external AWS credentials from within the VPC to leak information.
--- EDIT
Apparently there is some confusion about what I am trying to prevent here, so let me explain in more detail.
Say you are working at company X, which has access to very sensitive data. It should NOT be possible to leak this data to the outside world (internet).
Company X gives you access to a machine in a VPC that contains the confidential files you need for some computations. There is a NAT and an internet gateway in this VPC, but iptables prevents you from accessing non-whitelisted addresses.
Say your computation requires access to an S3 bucket. If access to s3.amazonaws.com is freely opened in iptables, then a data leak is possible: a malicious employee could create a personal AWS account and use it from within company X's VPC to upload sensitive data to his personal bucket. Company X can mitigate this by creating an S3 VPC endpoint in the VPC, with a policy only allowing the permitted S3 bucket, and opening iptables only for this VPC endpoint.
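For reference, such an endpoint policy might be expressed in Terraform roughly as follows; the bucket name, VPC, route table, and region are placeholders:

```hcl
# S3 gateway endpoint whose policy only allows the company bucket.
resource "aws_vpc_endpoint" "s3_restricted" {
  vpc_id            = aws_vpc.secure.id            # placeholder VPC
  service_name      = "com.amazonaws.us-east-1.s3" # adjust to your region
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.secure.id]  # placeholder route table

  # Requests through this endpoint can only touch the permitted bucket.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        "arn:aws:s3:::permitted-bucket", # placeholder bucket
        "arn:aws:s3:::permitted-bucket/*",
      ]
    }]
  })
}
```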
My question is: how about AWS lambda?
Since there is no VPC endpoint for AWS lambda, I would need to allow lambda.amazonaws.com as a whole in iptables, which basically means that anyone within that VPC can freely use any lambda from any AWS account, and thus leak any data they want by uploading it to a personal lambda created in a personal account.
AWS does not maintain a relationship between VPCs and users.
Resources are connected to VPCs. Permissions are granted to users. The two concepts do not really intersect.
If you have some particularly sensitive resources that you wish to only provide to certain users, another approach would be to create another AWS Account and only create a small subset of users for that account, or only allow a subset of users to assume an IAM Role in that account. This way, there is a direct relationship between "VPCs/Resources" and "Users".
You can add a resource-based policy to the Lambda function and only allow specific principals, such as particular users or the role your EC2 instances use, to invoke it.
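A minimal Terraform sketch of such a permission; the function name and role ARN are hypothetical:

```hcl
# Resource-based policy statement: only this role may invoke the function.
resource "aws_lambda_permission" "allow_invoke" {
  statement_id  = "AllowFromApprovedRole"
  action        = "lambda:InvokeFunction"
  function_name = "my-function"                                  # hypothetical function
  principal     = "arn:aws:iam::123456789012:role/approved-role" # hypothetical role ARN
}
```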

AWS Lambda can't reach resources created from MobileHub

I am having an issue accessing resources created in Mobile Hub from Lambda, and that does not make sense to me at all. I have two questions (maybe it is the same question):
Why can't Lambda access the resources created by Mobile Hub when it has full-access permissions to those specific resources? I mean, if I create those resources separately I can access them, but not the ones created from Mobile Hub.
Is there a way to grant access to these resources or am I missing something?
Update
The issue was the VPC. Basically, when I enabled VPC on the lambdas to reach RDS (which has no public access), I couldn't reach any other resources; when I disabled it, RDS was unreachable. The question is: how do I combine the VPC with role policies?
You can find the resources associated with your project in the Mobile Hub console: use the left-side navigation and select "Resources." If you want to enable your AWS Lambda functions to make use of other AWS resources, you'll need to add an appropriate IAM policy to the Lambda execution IAM role. You can find this role in your project on the "Resources" page under "AWS Identity and Access Management Roles." It is the role that has "lambdaexecutionrole" in the name. Select this role, then attach whatever policies you like in the IAM (Identity and Access Management) console.
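If you manage infrastructure as code instead, the same attachment is a single Terraform resource; the role name here is an assumption, and so is the choice of managed policy:

```hcl
# Attach an AWS managed policy to the project's Lambda execution role.
resource "aws_iam_role_policy_attachment" "lambda_s3_read" {
  role       = "myproject_lambdaexecutionrole"                  # assumed role name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess" # AWS managed policy
}
```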
For more information on how to attach policies to roles, see:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_modify.html
And, if you have further problems, you can get help from the AWS community in the forums, here:
https://forums.aws.amazon.com/forum.jspa?forumID=88
**Update - WRT VPC Question**
This question should really go to an expert on the AWS Lambda team. You can reach them in the AWS Forums (link above). However, I'll take a shot at answering (AWS Lambda experts feel free to chime in if I'm wrong here). When you set the VPC on the Lambda function, I expect that any network traffic coming from your Lambda function will have the same routing and domain name resolution behavior as anything else in your VPC. So, if your VPC has firewall rules which prevent traffic from the VPC to, for example, DynamoDB, then you won't be able to reach it. If that is the case, then you would need to update those rules in your VPC's security group(s) to open up out-going traffic. Here's a blurb from a relevant document.
From https://aws.amazon.com/vpc/details/:
*AWS resources such as Elastic Load Balancing, Amazon ElastiCache, Amazon RDS, and Amazon Redshift are provisioned with IP addresses within your VPC. Other AWS resources such as Amazon S3 and Amazon DynamoDB are accessible via your VPC’s Internet Gateway, NAT gateways, VPC Endpoints, or Virtual Private Gateway.*
This doc seems to explain how to configure the gateway approach:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
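To combine the VPC with role policies concretely, the usual shape in Terraform is the one below: the execution role keeps the IAM permissions, the vpc_config block attaches the function to the VPC, and a gateway endpoint keeps services like DynamoDB reachable from the private subnets. All names here are assumptions:

```hcl
resource "aws_lambda_function" "app" {
  function_name = "app"                   # assumed
  role          = aws_iam_role.lambda.arn # execution role carrying your policies
  runtime       = "python3.12"
  handler       = "app.handler"
  filename      = "app.zip"

  # Traffic from the function now routes like anything else in these subnets.
  vpc_config {
    subnet_ids         = [aws_subnet.private.id]     # assumed subnet
    security_group_ids = [aws_security_group.app.id] # assumed SG
  }
}

# Gateway endpoint so the VPC-attached function can still reach DynamoDB.
resource "aws_vpc_endpoint" "dynamodb" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.us-east-1.dynamodb" # adjust to your region
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}
```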

AWS S3 bucket accessible from my Elastic Beanstalk instance only

I have one S3 bucket and one Elastic Beanstalk instance. Currently my S3 bucket is public, hence it's accessible from any domain, even from my localhost.
I want all my S3 bucket resources to be accessible only from the Elastic Beanstalk instance where my app is hosted/running. My app should be able to view these resources and upload new images/resources to this bucket.
I am sure somebody must have done this.
There are several ways to control access to S3. The best practice for making something privately accessible is to not grant any access to your S3 buckets/files in the bucket policy.
Instead, you should create an IAM role which has either full access to S3, or access limited to some actions and some buckets.
You can attach an IAM role to any EC2 instance and also to any Elastic Beanstalk environment. The role's credentials will be automatically served to your instances via instance metadata. This is a safe way to give special rights to your instances.
(Note: this is an AWS security best practice, since AWS deals with the key rotation on your EC2 boxes for you.)
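As a sketch, a scoped policy on the Beanstalk instance role might look like this in Terraform; the bucket name is a placeholder, and the role name assumes the default Beanstalk instance profile:

```hcl
# Scoped S3 policy attached to the Beanstalk instances' role.
resource "aws_iam_role_policy" "eb_s3" {
  name = "app-bucket-access"
  role = "aws-elasticbeanstalk-ec2-role" # default EB instance role; adjust if customized
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = "arn:aws:s3:::my-app-bucket" # placeholder bucket
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "arn:aws:s3:::my-app-bucket/*"
      },
    ]
  })
}
```

With the bucket no longer public, only instances carrying this role (i.e. the Beanstalk environment) can read and upload objects.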