I'm new to Terraform (TF) and AWS. I've written Terraform that creates an RDS cluster, a VPC and security groups, an S3 bucket, secrets in Secrets Manager (SM), as well as a Lambda that accesses all of the above. I've attached the RDS VPC and security group to the Lambda, so the code in the Lambda can successfully reach the RDS. My problem is that I need a security group that allows the Lambda code to read from Secrets Manager (to get RDS user accounts) and from S3 (to get SQL scripts to execute on the RDS). So, a security group with outbound access to S3 and Secrets Manager.
How do I get Terraform to look up (via data sources) the details for SM and S3, then use this information to create the security group that allows the Lambda code to access SM and S3?
Currently I'm forcing my way through with "All to All on 0.0....", which will not be allowed in the production environment.
> So, a security group with outbound to S3 and secrets manager.
The easiest way would be to use an S3 VPC interface endpoint, not an S3 gateway endpoint. That way, if you have two interface endpoints, one for S3 and one for SM, both will be associated with a security group that you create in your code (or with the VPC's default security group if you don't specify one).
So, to limit your Lambda's access to S3 and SM, you just reference the interface endpoints' security groups in your Lambda's security group.
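A minimal Terraform sketch of that setup might look like the following. Resource names such as `aws_vpc.main`, `aws_subnet.private`, and `aws_security_group.lambda` are assumptions here; adapt them to your own configuration:

```hcl
data "aws_region" "current" {}

# SG attached to the interface endpoints: allow HTTPS in
# from the Lambda's security group only.
resource "aws_security_group" "endpoints" {
  name   = "vpce-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.lambda.id]
  }
}

resource "aws_vpc_endpoint" "secretsmanager" {
  vpc_id              = aws_vpc.main.id
  service_name        = "com.amazonaws.${data.aws_region.current.name}.secretsmanager"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = aws_subnet.private[*].id
  security_group_ids  = [aws_security_group.endpoints.id]
  private_dns_enabled = true
}

resource "aws_vpc_endpoint" "s3" {
  vpc_id             = aws_vpc.main.id
  service_name       = "com.amazonaws.${data.aws_region.current.name}.s3"
  vpc_endpoint_type  = "Interface"
  subnet_ids         = aws_subnet.private[*].id
  security_group_ids = [aws_security_group.endpoints.id]
}

# Lambda SG egress: HTTPS only, and only to the endpoints' SG,
# instead of 0.0.0.0/0. (For egress rules, Terraform's
# source_security_group_id is the *destination* SG.)
resource "aws_security_group_rule" "lambda_to_endpoints" {
  type                     = "egress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.lambda.id
  source_security_group_id = aws_security_group.endpoints.id
}
```

Note that an S3 interface endpoint does not give you the automatic private DNS behavior of a gateway endpoint, so your Lambda code may need to use the endpoint-specific DNS names when talking to S3.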
Related
The title is pretty self-explanatory: I want to lock down S3 access so it is only allowed from a particular VPC. Is there a way of doing this without creating an EC2 instance to test it?
> I want to lockdown S3 access to only be allowed from a particular VPC
The AWS docs have an entire chapter devoted to limiting access to S3 to a given VPC or VPC endpoint:
Controlling access from VPC endpoints with bucket policies
But be aware that if you apply these bucket policies, you will lock yourself out of the bucket. You will then have to use an EC2 instance, or some other way of accessing the bucket from within the VPC, because any other access will be denied.
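Expressed in Terraform, a bucket policy of that shape might look like the sketch below (`aws_s3_bucket.example` and `aws_vpc_endpoint.s3` are hypothetical names). It denies every S3 action unless the request arrives through the named VPC endpoint, which is exactly why it locks out console and Internet access:

```hcl
resource "aws_s3_bucket_policy" "vpce_only" {
  bucket = aws_s3_bucket.example.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyUnlessFromVpce"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.example.arn,
        "${aws_s3_bucket.example.arn}/*",
      ]
      # Deny any request that did not come through this VPC endpoint.
      Condition = {
        StringNotEquals = {
          "aws:SourceVpce" = aws_vpc_endpoint.s3.id
        }
      }
    }]
  })
}
```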
I want to give full access to the S3 bucket from the EC2 security group, so that all the EC2 instances associated with that security group have full access to the S3 bucket.
Also, am I thinking in the right direction, or do I need to use another method?
Unfortunately, you cannot add an EC2 security group to a bucket policy.
However, if you want to give your instances access to S3, you would usually do this through instance roles. This way, every instance that needs S3 access has the role attached that allows it.
With the role in place, if you want to use a bucket policy, you can control access by specifying the role's ARN as the principal in the bucket policy.
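As a sketch, such a bucket policy might look like this in Terraform (`aws_iam_role.ec2_role` and `aws_s3_bucket.example` are hypothetical names standing in for your instance role and bucket):

```hcl
resource "aws_s3_bucket_policy" "role_access" {
  bucket = aws_s3_bucket.example.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      # The instance role's ARN, not a security group, is the principal.
      Principal = { AWS = aws_iam_role.ec2_role.arn }
      Action    = ["s3:ListBucket", "s3:GetObject", "s3:PutObject"]
      Resource = [
        aws_s3_bucket.example.arn,
        "${aws_s3_bucket.example.arn}/*",
      ]
    }]
  })
}
```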
I know it's a bit late, but for others wandering onto this question, see this post on Server Fault.
Tl;dr:
1. Create an access point for the bucket in S3, selecting the VPC that will access S3.
2. Then create an endpoint in that VPC to S3 -- it will ask you to select the route from your VPC routing table that will redirect traffic from the concerned instances to S3.
3. Lastly, create an outbound HTTPS rule in your security group with the VPC endpoint prefix list as your destination, e.g. pl-xxxx.
You can get the prefix list ID in the AWS console via VPC --> Managed Prefix Lists --> the prefix list that points to S3.
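That last step can also be sketched in Terraform, which avoids hard-coding the pl-xxxx ID by looking up the regional S3 prefix list with a data source (`aws_security_group.instances` is a hypothetical name for your security group):

```hcl
data "aws_region" "current" {}

# Look up the managed prefix list for S3 in the current region.
data "aws_prefix_list" "s3" {
  name = "com.amazonaws.${data.aws_region.current.name}.s3"
}

# Outbound HTTPS rule with the S3 prefix list as the destination.
resource "aws_security_group_rule" "https_to_s3" {
  type              = "egress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.instances.id
  prefix_list_ids   = [data.aws_prefix_list.s3.id]
}
```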
I was reading Default VPC and Default Subnets - Amazon Virtual Private Cloud about default VPC creation by AWS. Under default VPC components, it states "Amazon creates the above VPC components on behalf of the customers. IAM policies do not apply to those actions because the customers do not perform those actions".
My question is: we need to create an IAM role for an AWS service to call another AWS service, e.g., EC2 invoking S3, so why do IAM policies not apply when AWS builds resources on our behalf?
Thanks in advance for any input.
In your example of Amazon EC2 connecting to Amazon S3, it is actually your program code running on an Amazon EC2 instance that is making calls to Amazon S3. The API calls to S3 need to be authenticated and authorized via IAM credentials.
There are also situations where an AWS service calls another AWS service on your behalf using a service-linked role, such as when Amazon EC2 Auto Scaling launches new Amazon EC2 instances. This requires provision of a Service-Linked Role for Amazon EC2 Auto Scaling, which gives one service permission to call another service.
In the case of creating a Default VPC, this is something that AWS does before an account is given to a customer. This way, customers can launch resources (eg an Amazon EC2 instance) without having to first create a VPC. It is part of the standard account setup.
It appears that AWS has also exposed the CreateDefaultVpc() command to recreate the Default VPC. The documentation says that permission to make this API call is sufficient for creating the resources, without requiring permissions for each underlying call that it presumably generates. I guess it is using the permissions that would normally be associated with a service-linked role, except that there is no service-linked role for VPC actions. If you are concerned about people creating these resources (eg an Internet Gateway), you can deny users permission to call CreateDefaultVpc(), which will prevent them from using the command.
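A deny policy for that call might be sketched in Terraform as follows (the policy name is arbitrary; you would still need to attach it to the relevant users or groups):

```hcl
resource "aws_iam_policy" "deny_default_vpc" {
  name        = "deny-create-default-vpc"
  description = "Prevent recreation of the default VPC"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Deny"
      Action   = "ec2:CreateDefaultVpc"
      Resource = "*"
    }]
  })
}
```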
Think of our AWS account as the "root" and AWS essentially has a "super root" account that they can trigger the initial creation of your account with. This all occurs when your account is initially set up and configured since they have that "super root" level of access just as part of being the product owners.
We are limited (and I assume AWS is limited in a different way) by IAM, which lets us apply the Principle of Least Privilege.
Can I connect to another account's AWS services (S3, DynamoDB) from an EC2 instance in my account using a VPC endpoint?
Amazon S3 and Amazon DynamoDB are accessed on the Internet via API calls.
When a call is made to these services, a set of credentials is provided to identify the account and user.
If you wish to access S3 or DynamoDB resources belonging to a different account, you simply need to use credentials that belong to the target account. The actual request can be made from anywhere on the Internet (eg from Amazon EC2 or from a computer under your desk). The only thing that matters is that you have valid credentials linked to the desired AWS account.
There is no need to manipulate VPC configurations to access resources belonging to a different AWS Account. The source of the request is actually irrelevant.
I am having an issue accessing resources created in Mobile Hub from Lambda, and that does not make sense to me at all. I have two questions (maybe it is the same question):
Why can't Lambda access the resources created by Mobile Hub when it has full-access permissions to those specific resources? I mean, if I create those resources separately I can access them, but not the ones created by Mobile Hub.
Is there a way to grant access to these resources or am I missing something?
Update
The issue was the VPC. Basically, when I enabled VPC on the Lambdas to reach RDS (which has no public access), I couldn't reach any other resources; when I disabled it, RDS was unreachable. The question is: how do I combine a VPC with role policies?
You can find the resources associated with your project using the left-side navigation in the Mobile Hub console and select "Resources." If you want to enable your AWS Lambda functions to be able to make use of any AWS resources, then you'll need to add an appropriate IAM Policy to the Lambda Execute IAM Role. You can find this role in your project on the "Resources" page under "AWS Identity and Access Management Roles." It is the role that has "lambdaexecutionrole" in the name. Select this role then attach whatever policies you like in the IAM (Identity and Access Management) console.
For more information on how to attach policies to roles, see:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_modify.html
And, if you have further problems, you can get help from the AWS community in the forums, here:
https://forums.aws.amazon.com/forum.jspa?forumID=88
**Update - WRT VPC Question**
This question should really go to an expert on the AWS Lambda team. You can reach them in the AWS Forums (link above). However, I'll take a shot at answering (AWS Lambda experts feel free to chime in if I'm wrong here). When you set the VPC on the Lambda function, I expect that any network traffic coming from your Lambda function will have the same routing and domain name resolution behavior as anything else in your VPC. So, if your VPC has firewall rules which prevent traffic from the VPC to, for example, DynamoDB, then you won't be able to reach it. If that is the case, then you would need to update those rules in your VPC's security group(s) to open up out-going traffic. Here's a blurb from a relevant document.
From https://aws.amazon.com/vpc/details/:
*AWS resources such as Elastic Load Balancing, Amazon ElastiCache, Amazon RDS, and Amazon Redshift are provisioned with IP addresses within your VPC. Other AWS resources such as Amazon S3 and Amazon DynamoDB are accessible via your VPC’s Internet Gateway, NAT gateways, VPC Endpoints, or Virtual Private Gateway.*
This doc seems to explain how to configure the gateway approach:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
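For reference, a gateway endpoint of the kind that document describes can be sketched in Terraform like this (`aws_vpc.main` and `aws_route_table.private` are hypothetical names). S3 and DynamoDB are the two services that support gateway endpoints, and the endpoint works by adding a route to the given route tables rather than through a security group:

```hcl
data "aws_region" "current" {}

resource "aws_vpc_endpoint" "s3_gateway" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.${data.aws_region.current.name}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}

resource "aws_vpc_endpoint" "dynamodb_gateway" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.${data.aws_region.current.name}.dynamodb"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}
```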