I have a VPC in which users have access to confidential data and computing resources. For security and intellectual property reasons, this VPC should not allow users to leak information to the outside world (internet).
I need to expose AWS resources in this VPC to users: say, allowing them to invoke a Lambda function and access an S3 bucket.
Exposing the S3 bucket was trivial. I created an S3 VPC endpoint and attached to it a policy granting s3:* only on the specific bucket, then opened access to this specific endpoint in the firewall. If I had granted access to s3.amazonaws.com as a whole, users could have created a personal bucket on a foreign AWS account and used it to leak data from within the VPC.
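For reference, a minimal sketch of the kind of endpoint policy I mean (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyTheApprovedBucket",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::confidential-data-bucket",
        "arn:aws:s3:::confidential-data-bucket/*"
      ]
    }
  ]
}
```

Any S3 request that traverses the endpoint but targets a different bucket is denied, regardless of whose credentials are used.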
Now, to expose a specific Lambda function, I cannot reuse the same trick, since there is no VPC endpoint for Lambda.
How can I expose a specific Lambda function in a VPC without opening access to lambda.amazonaws.com as a whole, which would allow users to invoke their own Lambda functions in a foreign account and leak information from the VPC?
In an ideal world, I would like to be able to restrict all IAM/STS inside this VPC to users of my account. That would prevent anyone from using external AWS credentials from within the VPC to leak information.
--- EDIT
Apparently there is some confusion about what I am trying to prevent here, so let me explain in more detail.
Say you are working at company X, which has access to very sensitive data. It should NOT be possible to leak this data to the outside world (internet).
Company X gives you access to a machine in a VPC that contains the confidential files you need for some computations. There is a NAT and an internet gateway in this VPC, but iptables prevents you from accessing non-whitelisted addresses.
Say your computation requires access to an S3 bucket. If access to s3.amazonaws.com is freely opened in iptables, then data leakage is possible: a malicious employee could create a personal AWS account and use it from within company X's VPC to upload sensitive data to a personal bucket. Company X can mitigate this by creating an S3 VPC endpoint in the VPC with a policy allowing only the approved S3 bucket, and opening iptables only for this VPC endpoint.
My question is: what about AWS Lambda?
Since there is no VPC endpoint for AWS Lambda, I would need to allow lambda.amazonaws.com as a whole in iptables, which basically means that anyone within the VPC can freely use any Lambda function from any AWS account, and can thus leak any data they want by uploading it to a Lambda function created in a personal account.
AWS does not maintain a relationship between VPCs and users.
Resources are connected to VPCs. Permissions are granted to users. The two concepts do not really intersect.
If you have some particularly sensitive resources that you wish to provide only to certain users, another approach is to create a separate AWS account and create only a small subset of users in that account, or allow only a subset of users to assume an IAM role in that account. This way, there is a direct relationship between "VPCs/resources" and "users".
You can add a resource-based policy to the Lambda function and allow only specific principals, such as particular IAM users or the instance-profile role used by your approved EC2 instances, to invoke the function.
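A minimal sketch of such a resource-based policy, with placeholder account ID, role name, region, and function name (in practice you attach it via the Lambda AddPermission API rather than editing the document directly):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowInvokeFromApprovedRoleOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/approved-worker-role"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:my-function"
    }
  ]
}
```

Note this controls who may invoke the function; it does not, on its own, stop network traffic from the VPC to other functions in other accounts, which is the asker's concern.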
Related
The title is pretty self-explanatory: I want to lock down S3 access so it is only allowed from a particular VPC. Is there a way of doing this without creating an EC2 instance to test it?
The AWS docs have an entire chapter devoted to limiting S3 access to a given VPC or VPC endpoint:
Controlling access from VPC endpoints with bucket policies
But note that if you apply these bucket policies, you will lock yourself out of the bucket: you will have to use an EC2 instance, or some other access path inside the VPC, to reach it. Any other access will be denied.
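The pattern from that chapter is a blanket Deny that exempts only requests arriving through the endpoint, keyed on the aws:SourceVpce condition. A minimal sketch, with placeholder bucket name and endpoint ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessUnlessFromVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-locked-bucket",
        "arn:aws:s3:::example-locked-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}
```

Since the Deny also matches console and CLI requests from outside the VPC (including yours), apply it only after you have a working access path through the endpoint.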
I was reading Default VPC and Default Subnets - Amazon Virtual Private Cloud about default VPC creation by AWS. Under default VPC components, it states "Amazon creates the above VPC components on behalf of the customers. IAM policies do not apply to those actions because the customers do not perform those actions".
My question is: we need to create an IAM role for one AWS service to call another AWS service (e.g. EC2 invoking S3), so why do IAM policies not apply when AWS builds resources on our behalf?
Thanks in advance for any input.
In your example of Amazon EC2 connecting to Amazon S3, it is actually your program code running on an Amazon EC2 instance that is making calls to Amazon S3. The API calls to S3 need to be authenticated and authorized via IAM credentials.
There are also situations where an AWS service calls another AWS service on your behalf using a service-linked role, such as when Amazon EC2 Auto Scaling launches new Amazon EC2 instances. This requires provisioning a service-linked role for Amazon EC2 Auto Scaling, which gives one service permission to call another service.
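For illustration, the trust policy on such a service-linked role looks roughly like the following; it allows the Auto Scaling service itself, rather than any user, to assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "autoscaling.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```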
In the case of creating a Default VPC, this is something that AWS does before an account is given to a customer. This way, customers can launch resources (e.g. an Amazon EC2 instance) without having to first create a VPC. It is part of the standard account setup.
It appears that AWS has also exposed the CreateDefaultVpc() API call to recreate the Default VPC. The documentation says that permission to make this one API call is sufficient to create all the associated resources, without requiring permissions for each underlying call it presumably generates. I guess it uses the kind of permissions that would normally be associated with a service-linked role, except that there is no service-linked role for VPC actions. If you are concerned about people creating these resources (e.g. an Internet Gateway), you can deny users permission to call CreateDefaultVpc(), which will prevent them from using the command.
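A minimal sketch of such a deny policy (the IAM action name is ec2:CreateDefaultVpc):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDefaultVpcCreation",
      "Effect": "Deny",
      "Action": "ec2:CreateDefaultVpc",
      "Resource": "*"
    }
  ]
}
```

An explicit Deny overrides any Allow, so attaching this to a user or group blocks the call even if a broader EC2 policy would otherwise permit it.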
Think of your AWS account as the "root", and AWS essentially has a "super root" account with which they can trigger the initial creation of your account. This all occurs when your account is initially set up and configured, since they have that "super root" level of access as part of being the product owners.
We are limited by IAM (and I assume AWS is limited in a different way) so that we can apply the Principle of Least Privilege.
I am having an issue accessing resources created by Mobile Hub from Lambda, and that does not make sense to me at all. I have two questions (maybe they are the same question):
Why can't Lambda access the resources created by Mobile Hub when it has full-access permissions to those specific resources? If I create those resources separately I can access them, but not the ones created by Mobile Hub.
Is there a way to grant access to these resources or am I missing something?
Update
The issue was the VPC. When I enabled VPC access on the Lambda functions so they could reach RDS, which has no public access, I couldn't reach any other resources; when I disabled it, RDS was unreachable. The question is how to combine VPC access with role policies.
You can find the resources associated with your project by using the left-side navigation in the Mobile Hub console and selecting "Resources." If you want to enable your AWS Lambda functions to make use of any AWS resources, you'll need to attach an appropriate IAM policy to the Lambda execution IAM role. You can find this role in your project on the "Resources" page under "AWS Identity and Access Management Roles"; it is the role with "lambdaexecutionrole" in its name. Select this role, then attach whatever policies you like in the IAM (Identity and Access Management) console.
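For example, if your functions need to read a DynamoDB table, you could attach a policy along these lines to that role; the region, account ID, and table name here are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnProjectTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/my-mobilehub-table"
    }
  ]
}
```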
For more information on how to attach policies to roles, see:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_modify.html
And, if you have further problems, you can get help from the AWS community in the forums, here:
https://forums.aws.amazon.com/forum.jspa?forumID=88
**Update - WRT VPC Question**
This question should really go to an expert on the AWS Lambda team. You can reach them in the AWS Forums (link above). However, I'll take a shot at answering (AWS Lambda experts, feel free to chime in if I'm wrong here). When you set the VPC on the Lambda function, I expect that any network traffic coming from your Lambda function will have the same routing and domain-name-resolution behavior as anything else in your VPC. So, if your VPC has firewall rules which prevent traffic from the VPC to, for example, DynamoDB, then you won't be able to reach it. If that is the case, you would need to update those rules in your VPC's security group(s) to open up outgoing traffic. Here's a blurb from a relevant document.
From https://aws.amazon.com/vpc/details/:
*AWS resources such as Elastic Load Balancing, Amazon ElastiCache, Amazon RDS, and Amazon Redshift are provisioned with IP addresses within your VPC. Other AWS resources such as Amazon S3 and Amazon DynamoDB are accessible via your VPC’s Internet Gateway, NAT gateways, VPC Endpoints, or Virtual Private Gateway.*
This doc seems to explain how to configure the gateway approach:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
I want to build a video-on-demand service using AWS S3, and I would like to restrict each of my clients to their own bucket/folder (whichever scheme is best).
I want each client to have access only to their own bucket/folder, but these clients are not going to have AWS accounts.
I have read, and am still reading, about IAM users, roles, and policies, but I have not found anything pointing to what I want to achieve.
If you know the IP address (or CIDR blocks) of each client, you can restrict your bucket with a policy (see the sketch after these links).
http://blogs.aws.amazon.com/security/post/TxPOJBY6FE360K/IAM-policies-and-Bucket-Policies-and-ACLs-Oh-My-Controlling-Access-to-S3-Resourc
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
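As a sketch of the IP-based approach (bucket name, client prefix, and CIDR block are all placeholders), a bucket policy statement could look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowClientAFromKnownCidr",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-vod-bucket/client-a/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}
```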
Alternatively, you could set up IAM users for the clients within your own account and scope their access accordingly. That would let them use a very limited form of the AWS Console. You can even write your IAM policies so that each user automatically has access to something like:
s3://your-bucket/%username%/
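The %username% placeholder above corresponds to the IAM policy variable ${aws:username}. A minimal sketch of such a policy (the actions are chosen for illustration):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOwnPrefixOnly",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket/${aws:username}/*"
    }
  ]
}
```

Because the variable is resolved per request, one policy attached to a group serves every client, each confined to their own prefix.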
I am in the early stages of writing an AWS app for our users that will run our research algorithms using their AWS resources. For example, our code will need to spin up EC2 instances running our 'worker' app, access RDS databases, and create and access SQS queues. The AWS Java SDK examples (we are writing this in Java) use an AwsCredentials.properties file to store the Access Key ID and Secret Access Key, which is fine for examples, but obviously not acceptable for our users, who would in essence be giving us access to all their resources. What is a clean way to run our system on their behalf? I discovered AWS Identity and Access Management (IAM), which seems to be for this purpose (I haven't got my head around it yet), especially cross-account access between AWS accounts. This post makes it sound straightforward:
Use the amazon IAM service to create a set of keys that only has
permission to perform the tasks that you require for your script.
http://aws.amazon.com/iam/
However, other posts (e.g., Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?) suggest there are limitations to using IAM with EC2 in particular.
Any advice would be really helpful!
The key limitation with regard to RDS and EC2 is that, while you can restrict access to certain API actions, there are no resource-level constraints. For example, with an IAM S3 policy you can restrict a user to performing certain actions only on certain buckets. You can write a policy for EC2 that says a user is allowed to stop instances, but not one that says they can only stop certain instances.
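To make the contrast concrete, here is a sketch of a single policy with both kinds of statement; the S3 statement can name a specific bucket (placeholder name), while the EC2 statement can only be scoped to all instances:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3ActionsCanBeScopedToABucket",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::research-input-bucket/*"
    },
    {
      "Sid": "Ec2ActionsApplyToEveryInstance",
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}
```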
Another option is for them to provide you with temporary credentials via the Security Token Service. A variant on that is to use the new IAM roles feature: an instance has a set of policies associated with it, and you don't need to provide an AwsCredentials.properties file because the SDK can fetch credentials from the instance metadata service.
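If you go the STS route, a common pattern is for the customer to create a role in their account whose trust policy names your account, which you then assume via sts:AssumeRole. A minimal sketch of such a trust policy (the account ID is a placeholder for yours):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowVendorAccountToAssumeThisRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::999988887777:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The permissions policy attached to that same role then limits what your code can actually do once it has assumed the role.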
Finally, one last option might be consolidated billing. If the reason you are using their AWS resources is just billing, then set up a new account that is billed from their account. The accounts are isolated from each other, so you can't, for example, delete their instances by accident. Equally, you can't access their RDS snapshots and the like (access to an RDS instance via MySQL, as opposed to the AWS API, would depend on the instance's security group). You can of course combine this with the previous options: they could provide you with credentials that only allow you to perform certain actions within that isolated account.