AWS Lambda can't reach resources created from Mobile Hub

I'm having an issue accessing resources created by Mobile Hub from Lambda, and it doesn't make sense to me at all. I have two questions (maybe they are the same question):
Why can't Lambda access resources created by Mobile Hub when it has full-access permissions to those specific resources? If I create the same resources separately I can access them, but not the ones created by Mobile Hub.
Is there a way to grant access to these resources, or am I missing something?
Update
The issue was the VPC. When I enabled VPC access on my Lambdas so they could reach an RDS instance that has no public access, I couldn't reach any other resources; when I disabled it, RDS was unreachable. The question is how to combine VPC access with role policies.

You can find the resources associated with your project using the left-side navigation in the Mobile Hub console and selecting "Resources." If you want your AWS Lambda functions to be able to use other AWS resources, you'll need to add an appropriate IAM policy to the Lambda execution IAM role. You can find this role in your project on the "Resources" page under "AWS Identity and Access Management Roles"; it is the role with "lambdaexecutionrole" in its name. Select this role, then attach whatever policies you like in the IAM (Identity and Access Management) console.
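If you prefer to script that instead of clicking through the console, the attachment is a single resource. Here is a minimal sketch in Terraform, where the role name and the chosen policy (DynamoDB full access) are just assumptions for illustration:

```hcl
# Hypothetical example: attach an AWS managed policy to the Mobile Hub
# generated Lambda execution role. Substitute the real role name from the
# "Resources" page of your project.
resource "aws_iam_role_policy_attachment" "lambda_extra_access" {
  role       = "myproject_lambdaexecutionrole" # assumed name
  policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}
```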
For more information on how to attach policies to roles, see:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_modify.html
And, if you have further problems, you can get help from the AWS community in the forums, here:
https://forums.aws.amazon.com/forum.jspa?forumID=88
**Update - WRT VPC Question**
This question should really go to an expert on the AWS Lambda team. You can reach them in the AWS Forums (link above). However, I'll take a shot at answering (AWS Lambda experts feel free to chime in if I'm wrong here). When you set the VPC on the Lambda function, I expect that any network traffic coming from your Lambda function will have the same routing and domain name resolution behavior as anything else in your VPC. So, if your VPC has firewall rules which prevent traffic from the VPC to, for example, DynamoDB, then you won't be able to reach it. If that is the case, then you would need to update those rules in your VPC's security group(s) to open up out-going traffic. Here's a blurb from a relevant document.
From https://aws.amazon.com/vpc/details/:
*AWS resources such as Elastic Load Balancing, Amazon ElastiCache, Amazon RDS, and Amazon Redshift are provisioned with IP addresses within your VPC. Other AWS resources such as Amazon S3 and Amazon DynamoDB are accessible via your VPC’s Internet Gateway, NAT gateways, VPC Endpoints, or Virtual Private Gateway.*
This doc seems to explain how to configure the gateway approach:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
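To make the gateway approach concrete, here is a minimal Terraform sketch, assuming a VPC and a private route table already exist in your code under the hypothetical names below. The endpoint gives everything routed through that table (your Lambda functions included) a path to DynamoDB that does not traverse the internet:

```hcl
# Gateway endpoint for DynamoDB (hypothetical VPC/route table names).
# Traffic to DynamoDB from subnets using this route table stays inside AWS.
resource "aws_vpc_endpoint" "dynamodb" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.us-east-1.dynamodb" # adjust the region
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}
```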

Related

Lambda security group to S3 and Secrets Manager

I'm new to Terraform (TF) and AWS. I've written TF that creates an RDS cluster, a VPC and security groups, an S3 bucket, secrets in Secrets Manager (SM), and a Lambda that accesses all of the above. I've attached the RDS VPC and security group to the Lambda, so the code in the Lambda can successfully reach the RDS. My problem is that I need a security group that allows the Lambda code to read from Secrets Manager (to get RDS user accounts) and from S3 (to get SQL scripts to execute on the RDS). So, a security group with outbound access to S3 and Secrets Manager.
How do I get Terraform to look up (via data sources) the details for SM and S3, and then use that information to create a security group allowing the Lambda code to access SM and S3?
Currently I'm forcing my way through with an "all traffic to 0.0.0.0/0" egress rule, which will not be allowed in the production environment.
So, a security group with outbound to S3 and secrets manager.
The easiest way would be to use an S3 VPC interface endpoint, not an S3 gateway endpoint. That way you have two interface endpoints, one for S3 and one for Secrets Manager, and both are associated with a security group that you create in your code (or with the VPC's default security group if you don't).
So, to limit your Lambda's access to S3 and SM, you simply reference the interface endpoints' security group in your Lambda's security group, as sketched below.
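A minimal Terraform sketch of that arrangement, under the assumption that aws_vpc.main, a count-based aws_subnet.private, and the Lambda's own aws_security_group.lambda are defined elsewhere in your code (region hard-coded for brevity):

```hcl
# Security group for the interface endpoints: allow HTTPS in from the Lambda.
resource "aws_security_group" "endpoints" {
  name   = "vpc-endpoint-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.lambda.id]
  }
}

# Interface endpoints for Secrets Manager and S3 inside the VPC.
resource "aws_vpc_endpoint" "secretsmanager" {
  vpc_id              = aws_vpc.main.id
  service_name        = "com.amazonaws.us-east-1.secretsmanager"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = aws_subnet.private[*].id
  security_group_ids  = [aws_security_group.endpoints.id]
  private_dns_enabled = true
}

resource "aws_vpc_endpoint" "s3" {
  vpc_id             = aws_vpc.main.id
  service_name       = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type  = "Interface"
  subnet_ids         = aws_subnet.private[*].id
  security_group_ids = [aws_security_group.endpoints.id]
}

# Lambda egress: HTTPS to the endpoint security group only, not 0.0.0.0/0.
resource "aws_security_group_rule" "lambda_to_endpoints" {
  type                     = "egress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.lambda.id
  source_security_group_id = aws_security_group.endpoints.id
}
```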

Is it possible to test a security group policy that restricts S3 access only to a single VPC without creating an EC2?

The title is pretty self-explanatory: I want to lock down S3 access so that it is only allowed from a particular VPC. Is there a way of doing this without creating an EC2 instance to test it?
I want to lock down S3 access so that it is only allowed from a particular VPC
The AWS docs have an entire chapter devoted to limiting access to S3 to a given VPC or VPC endpoint:
Controlling access from VPC endpoints with bucket policies
But be aware that if you apply these bucket policies, you will actually lock yourself out of the bucket. You will then have to use an EC2 instance, or some other way of accessing the bucket from within the VPC, because any other access will not be allowed.
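For reference, the core pattern from that chapter is a blanket deny with a condition keyed on the endpoint ID. A sketch in Terraform, with the bucket name and endpoint ID as placeholders:

```hcl
# Deny all S3 access to the bucket unless the request comes through a
# specific VPC endpoint. Note: this also locks out console access.
resource "aws_s3_bucket_policy" "vpce_only" {
  bucket = "my-example-bucket"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyUnlessFromVpce"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        "arn:aws:s3:::my-example-bucket",
        "arn:aws:s3:::my-example-bucket/*",
      ]
      Condition = {
        StringNotEquals = { "aws:sourceVpce" = "vpce-1a2b3c4d" }
      }
    }]
  })
}
```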

Restrict IAM users inside a VPC

I have a VPC in which users have access to confidential data and computing resources. For security and intellectual property reasons, this VPC should not allow users to leak information to the outside world (internet).
I need to expose AWS resources in this VPC to users: say, allowing them to invoke a Lambda function and access an S3 bucket.
Exposing the S3 bucket was trivial. I created an S3 VPC endpoint and attached a policy to it granting s3:* only on the specific bucket. Then I opened access to this specific endpoint in the firewall. If I had granted access to s3.amazonaws.com as a whole, users could have created a personal bucket in a foreign AWS account and used it to leak data from within the VPC to their bucket.
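For concreteness, the endpoint setup looks roughly like this (a Terraform sketch; the bucket and resource names are placeholders):

```hcl
# Gateway endpoint whose policy permits S3 actions only on the approved
# bucket, so foreign buckets are unreachable through it.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.restricted.id
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.restricted.id]
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        "arn:aws:s3:::approved-bucket",
        "arn:aws:s3:::approved-bucket/*",
      ]
    }]
  })
}
```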
Now, to expose a specific Lambda function, I cannot reuse the same trick, since there is no VPC endpoint for Lambda.
How can I expose a specific lambda function in a VPC, without opening access to lambda.amazonaws.com as a whole, and thus allowing users to potentially use their own lambdas in a foreign account to leak information from the VPC?
In an ideal world, I would like to be able to restrict all IAM/STS inside this VPC to users of my account. That would prevent anyone from using external AWS credentials from within the VPC to leak information.
--- EDIT
Apparently there is some confusion about what I am trying to prevent here, so let me explain in more detail.
Say you are working at company X, which has access to very sensitive data. It should NOT be possible to leak this data to the outside world (the internet).
Company X gives you access to a machine in a VPC that contains these confidential files, which you need for some computations. There is a NAT and an internet gateway in this VPC, but iptables prevents you from accessing non-whitelisted addresses.
Say your computation requires access to an S3 bucket. If access to s3.amazonaws.com is freely opened in iptables, then a data leak is possible: a malicious employee could create a personal AWS account and use it from within company X's VPC to upload sensitive data to his personal bucket. Company X can mitigate this by creating an S3 VPC endpoint in the VPC with a policy allowing only the approved S3 bucket, and opening iptables only for this VPC endpoint.
My question is: what about AWS Lambda?
Since there is no VPC endpoint for AWS Lambda, I would need to allow lambda.amazonaws.com as a whole in iptables, which basically means that anyone within the VPC can freely use any Lambda function in any AWS account, and thus leak any data they want by uploading it to a personal Lambda created in a personal account.
AWS does not maintain a relationship between VPCs and users.
Resources are connected to VPCs. Permissions are granted to users. The two concepts do not really intersect.
If you have some particularly sensitive resources that you wish to only provide to certain users, another approach would be to create another AWS Account and only create a small subset of users for that account, or only allow a subset of users to assume an IAM Role in that account. This way, there is a direct relationship between "VPCs/Resources" and "Users".
You can add a resource-based policy to the Lambda function and allow only specific EC2 instance roles or users to access the function.
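A minimal sketch of such a permission in Terraform (the function name and role ARN are placeholders):

```hcl
# Resource-based policy statement letting one specific IAM role invoke the
# function; principals without a matching statement are denied by default.
resource "aws_lambda_permission" "allow_specific_role" {
  statement_id  = "AllowInvokeFromApprovedRole"
  action        = "lambda:InvokeFunction"
  function_name = "my-sensitive-function"
  principal     = "arn:aws:iam::123456789012:role/approved-ec2-role"
}
```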

IAM policies related to default VPC creation

I was reading Default VPC and Default Subnets - Amazon Virtual Private Cloud about default VPC creation by AWS. Under default VPC components, it states "Amazon creates the above VPC components on behalf of the customers. IAM policies do not apply to those actions because the customers do not perform those actions".
My question is: we need to create an IAM role for one AWS service to call another AWS service (e.g., EC2 invoking S3), so why do IAM policies not apply when AWS builds resources on our behalf?
Thanks in advance for any input.
In your example of Amazon EC2 connecting to Amazon S3, it is actually your program code running on an Amazon EC2 instance that is making calls to Amazon S3. The API calls to S3 need to be authenticated and authorized via IAM credentials.
There are also situations where an AWS service calls another AWS service on your behalf using a service-linked role, such as when Amazon EC2 Auto Scaling launches new Amazon EC2 instances. This requires provision of a Service-Linked Role for Amazon EC2 Auto Scaling, which gives one service permission to call another service.
In the case of creating a Default VPC, this is something that AWS does before an account is given to a customer. This way, customers can launch resources (eg an Amazon EC2 instance) without having to first create a VPC. It is part of the standard account setup.
It appears that AWS has also exposed the CreateDefaultVpc() API call to recreate the default VPC. The documentation says that permission to make this one API call is sufficient to create the resources, without requiring permissions for each underlying call it presumably generates. I guess it uses the kind of permissions that would normally be associated with a service-linked role, except that there is no service-linked role for VPC actions. If you are concerned about people creating these resources (e.g., an Internet Gateway), you can explicitly deny users permission to call CreateDefaultVpc(), which will prevent them from using the command, as sketched below.
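A sketch of such a deny policy in Terraform; attach it to the users or groups in question:

```hcl
# Explicit deny on recreating the default VPC.
resource "aws_iam_policy" "deny_default_vpc" {
  name = "DenyCreateDefaultVpc"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Deny"
      Action   = "ec2:CreateDefaultVpc"
      Resource = "*"
    }]
  })
}
```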
Think of your AWS account as the "root" account; AWS essentially has a "super root" account that it can use to trigger the initial creation of your account. This all occurs when your account is initially set up and configured, since AWS has that "super root" level of access as part of being the product owner.
We are constrained by IAM (and I assume AWS constrains itself in a different way) so that we can follow the principle of least privilege.

Should I use EC2 or Elastic Beanstalk when I am creating a new role where my EC2 / Beanstalk instances should have access to S3?

This link says
To create the IAM role
Open the IAM console.
In the navigation pane, select Roles, then Create New Role.
Enter a name for the role, then select Next Step. Remember this name, since you'll need it when you launch your Amazon EC2 instance.
On the Select Role Type page, under AWS Service Roles, select Amazon EC2.
On the Set Permissions page, under Select Policy Template, select Amazon S3 Read Only Access, then Next Step.
On the Review page, select Create Role.
But when you click "Create New Role", you are asked to "choose a service that will use this role".
As you launch an app in Elastic Beanstalk, which in turn creates an EC2 instance, should I select the EC2 service or the Elastic Beanstalk service?
You are creating an EC2 instance role, so the service to select is EC2, regardless of whether or not the instances are being spawned and managed by Elastic Beanstalk.
With an instance role, your instance has continuous access to a set of automatically-rotated temporary credentials that it can use to access whatever services the role policies grant access to.
Here, you are granting the EC2 service permission to actually obtain those temporary credentials on behalf of your instance.
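For illustration, here is roughly what the console wizard from the question assembles, sketched in Terraform with hypothetical names. The trust policy on the role is what authorizes the EC2 service to obtain those temporary credentials for your instance:

```hcl
# Role that the EC2 service is trusted to assume on the instance's behalf.
resource "aws_iam_role" "instance" {
  name = "my-s3-reader" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# The S3 read-only permission from the wizard's policy template step.
resource "aws_iam_role_policy_attachment" "s3_read" {
  role       = aws_iam_role.instance.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

# The instance profile is what actually attaches the role to an instance.
resource "aws_iam_instance_profile" "instance" {
  name = "my-s3-reader"
  role = aws_iam_role.instance.name
}
```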
Rule of thumb with AWS: only create the resources you need, because AWS charges you for everything you use. With that said, if you only need an EC2 instance that can communicate with your S3 bucket, then go with EC2 only. EC2 instances are sort of your base servers, and you can always link one to Elastic Beanstalk later if you decide to use that service.
Note: if you eventually begin using your S3 bucket to serve content to your users (e.g., images, videos, etc.), you should use CloudFront as your CDN to control things like caching, speed, and availability across regions.
Hope this helps.
The AWS document is merely an example (applying IAM to EC2). You don't need to follow it mechanically, because your case is different: you are applying IAM to a different type of AWS service.