I have two separate Lambda functions: one reads a file from an S3 bucket, and the other writes to an ElastiCache (Memcached) cluster. They work well individually. However, I am unable to 'merge' them together.
Firstly, the function that reads from S3 works with the 'No VPC' setting, whereas the function that writes to ElastiCache works only when the function and the cluster are in the same VPC.
Secondly, the function that reads from S3 originally had only the AmazonS3FullAccess policy applied. I have now attached the AWSLambdaVPCAccessExecutionRole policy as well, but I am not sure whether this will work, given the VPC difference mentioned above.
Are AWS Step Functions the answer? How do I build a serverless application that reads a file from S3 and writes to an ElastiCache cluster?
You don't need Step Functions for this. Run the function in the same VPC as your ElastiCache cluster, and give that VPC a route to S3 by adding either an S3 gateway endpoint or a NAT gateway; the S3 endpoint is the simpler solution. Your function will then have access to both ElastiCache and S3.
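If you prefer to script the endpoint, a minimal boto3 sketch might look like the following; the region, VPC ID, and route table ID are placeholders you would replace with your own values.

```python
import boto3

# Placeholder region; use the region that holds your VPC and bucket.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint so traffic to S3 stays inside the VPC,
# with no NAT gateway required.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # VPC holding the ElastiCache cluster (placeholder)
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 service name for the region
    RouteTableIds=["rtb-0123456789abcdef0"],   # route table(s) used by the Lambda subnets (placeholder)
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```

Once the endpoint is in place, S3 calls from the function are routed through it automatically; no code changes are needed in the function itself.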
For the IAM role, go into IAM and create a new role that has the permissions of AWSLambdaVPCAccessExecutionRole as well as the necessary S3 permissions; you can attach multiple policies to a single role. Then assign that role to the Lambda function.
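As a rough illustration, creating such a role with boto3 could look like the sketch below; the role name is made up, and AmazonS3ReadOnlyAccess stands in for whatever S3 permissions your function actually needs.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy letting the Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="lambda-s3-elasticache-role",  # hypothetical name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Managed policy that lets Lambda manage the ENIs it needs inside a VPC.
iam.attach_role_policy(
    RoleName="lambda-s3-elasticache-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole",
)

# Broad S3 read access for the sketch; scope this down to your bucket in practice.
iam.attach_role_policy(
    RoleName="lambda-s3-elasticache-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```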
Related
The title is pretty self-explanatory: I want to lock down S3 access so it is only allowed from a particular VPC. Is there a way of doing this without creating an EC2 instance to test it?
> I want to lockdown S3 access to only be allowed from a particular VPC
The AWS docs have an entire chapter devoted to limiting S3 access to a given VPC or VPC endpoint:
Controlling access from VPC endpoints with bucket policies
But be aware that if you apply these bucket policies, you will actually lock yourself out of the bucket: any access from outside the VPC will be denied, so you will have to use an EC2 instance or some other means of accessing the bucket from within the VPC.
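For reference, here is a sketch of such a policy applied with boto3, with a hypothetical bucket name and endpoint ID; remember the lock-out caveat above before running anything like this.

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny every S3 action on the bucket unless the request arrives
# through the named VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::my-locked-down-bucket",    # placeholder bucket
            "arn:aws:s3:::my-locked-down-bucket/*",
        ],
        "Condition": {
            "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}  # placeholder endpoint
        },
    }],
}

s3.put_bucket_policy(
    Bucket="my-locked-down-bucket",
    Policy=json.dumps(policy),
)
```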
AWS Glue is serverless, but there is a way to assign a VPC and subnet to a Glue ETL job when the job works with a DB connection (RDS, JDBC, or Redshift). This part is fine.
The problem we are facing is when the Glue job operates only on S3 buckets and does not use any other DB.
How do we make sure that Glue accesses these S3 buckets through a VPC endpoint?
Even if we define a VPC endpoint for a VPC, how do we ensure the ETL job runs in the same VPC?
When a Glue job works with an S3 source and an S3 destination, it does not ask for VPC details.
Can any of you help resolve this?
It is possible to make sure the traffic does not leave the VPC when the source and destination are S3. Please refer to this guide on configuring an S3 VPC endpoint and adding it to your Glue job.
Also refer to this if you see any issues accessing S3.
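One approach worth mentioning (my own suggestion, not from the linked guides) is a Glue connection of type NETWORK, which pins the job's workers to a subnet in your VPC even when both the source and the destination are S3. A rough boto3 sketch with placeholder IDs:

```python
import boto3

glue = boto3.client("glue")

# A NETWORK connection carries no credentials; it only tells Glue
# which subnet and security groups to run the job's workers in.
glue.create_connection(
    ConnectionInput={
        "Name": "force-jobs-into-vpc",  # hypothetical name
        "ConnectionType": "NETWORK",
        "ConnectionProperties": {},
        "PhysicalConnectionRequirements": {
            "SubnetId": "subnet-0123456789abcdef0",           # placeholder
            "SecurityGroupIdList": ["sg-0123456789abcdef0"],  # placeholder
            "AvailabilityZone": "us-east-1a",                 # placeholder
        },
    }
)
```

Attach this connection to the ETL job (under the job's "Connections"), and Glue provisions the workers inside that subnet, so S3 traffic follows the subnet's route table and hence the S3 VPC endpoint.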
I was reading *Default VPC and Default Subnets - Amazon Virtual Private Cloud* about default VPC creation by AWS. Under default VPC components, it states: "Amazon creates the above VPC components on behalf of the customers. IAM policies do not apply to those actions because the customers do not perform those actions".
My question is: we need to create an IAM role for one AWS service to call another AWS service (e.g., EC2 invoking S3), so why do IAM policies not apply when AWS builds resources on our behalf?
Thanks in advance for any input.
In your example of Amazon EC2 connecting to Amazon S3, it is actually your program code running on an Amazon EC2 instance that is making calls to Amazon S3. The API calls to S3 need to be authenticated and authorized via IAM credentials.
There are also situations where an AWS service calls another AWS service on your behalf, such as when Amazon EC2 Auto Scaling launches new Amazon EC2 instances. This requires provisioning a service-linked role for Amazon EC2 Auto Scaling, which gives the one service permission to call the other.
In the case of creating a Default VPC, this is something that AWS does before an account is given to a customer. This way, customers can launch resources (e.g. an Amazon EC2 instance) without having to first create a VPC. It is part of the standard account setup.
It appears that AWS has also exposed the CreateDefaultVpc() command to recreate the Default VPC. The documentation says that permission to make this API call is sufficient for creating the resources, without requiring permissions for each underlying call that it presumably generates. I guess it uses the permissions that would normally be associated with a service-linked role, except that there is no service-linked role for VPC actions. If you are concerned about people creating these resources (e.g. an Internet Gateway), you can deny users permission to call CreateDefaultVpc(), which will prevent them from using the command.
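For illustration, such a deny could be attached as an inline policy with boto3 roughly as follows; the user name and policy name are made up.

```python
import json
import boto3

iam = boto3.client("iam")

# An explicit deny always wins over any allow the user might otherwise have.
deny_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:CreateDefaultVpc",
        "Resource": "*",
    }],
}

iam.put_user_policy(
    UserName="some-user",                  # hypothetical user
    PolicyName="deny-create-default-vpc",  # hypothetical policy name
    PolicyDocument=json.dumps(deny_policy),
)
```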
Think of your AWS account as the "root": AWS essentially has a "super root" account with which they can trigger the initial creation of your account. This all occurs when your account is first set up and configured, since they have that "super root" level of access as part of being the product owner.
We are limited by IAM (and I assume AWS is limited in a different way) so that we can follow the Principle of Least Privilege.
I am having an issue accessing resources created in Mobile Hub from Lambda, and that does not make sense to me at all. I have two questions (maybe they are the same question):
Why can't Lambda access resources created by Mobile Hub when it has full-access permissions to those specific resources? I mean, if I create those resources separately I can access them, but not the ones created by Mobile Hub.
Is there a way to grant access to these resources, or am I missing something?
**Update**
The issue was the VPC. Basically, when I enabled VPC on the Lambdas to reach RDS (which has no public access), I couldn't reach any other resources; when I disabled it, RDS was unreachable. The question is: how do I combine VPC with role policies?
You can find the resources associated with your project in the Mobile Hub console by selecting "Resources" in the left-side navigation. If you want your AWS Lambda functions to be able to make use of any AWS resources, you'll need to add an appropriate IAM policy to the Lambda execution IAM role. You can find this role on your project's "Resources" page under "AWS Identity and Access Management Roles"; it is the role with "lambdaexecutionrole" in its name. Select this role, then attach whatever policies you like in the IAM (Identity and Access Management) console.
For more information on how to attach policies to roles, see:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_modify.html
And, if you have further problems, you can get help from the AWS community in the forums, here:
https://forums.aws.amazon.com/forum.jspa?forumID=88
**Update - WRT VPC Question**
This question should really go to an expert on the AWS Lambda team; you can reach them in the AWS Forums (link above). However, I'll take a shot at answering (AWS Lambda experts, feel free to chime in if I'm wrong here). When you set the VPC on the Lambda function, I expect that any network traffic coming from your Lambda function will have the same routing and domain name resolution behavior as anything else in your VPC. So, if your VPC has firewall rules which prevent traffic from the VPC to, for example, DynamoDB, then you won't be able to reach it. If that is the case, you would need to update those rules in your VPC's security group(s) to open up outgoing traffic. Here's a blurb from a relevant document.
From https://aws.amazon.com/vpc/details/:
*AWS resources such as Elastic Load Balancing, Amazon ElastiCache, Amazon RDS, and Amazon Redshift are provisioned with IP addresses within your VPC. Other AWS resources such as Amazon S3 and Amazon DynamoDB are accessible via your VPC’s Internet Gateway, NAT gateways, VPC Endpoints, or Virtual Private Gateway.*
This doc seems to explain how to configure the gateway approach:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
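If the security group does turn out to be the blocker, here is a rough boto3 sketch of opening outbound HTTPS from the function's security group; the group ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow outbound HTTPS (port 443), which the AWS service APIs use.
ec2.authorize_security_group_egress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the Lambda function's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "outbound HTTPS"}],
    }],
)
```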
I have one S3 bucket and one Elastic Beanstalk instance. Currently my S3 bucket is public, hence it is accessible from any domain, even from my localhost.
I want all my S3 bucket resources to be accessible only from the Elastic Beanstalk instance where my app is hosted/running. My app should be able to view these resources and upload new images/resources to this bucket.
I am sure somebody might have done this.
There are several ways to control access to S3. The best practice for making something privately accessible is to grant no access to your S3 buckets/files in the bucket policy.
Instead, you should create an IAM role which has either full access to S3 or limited access to certain actions and certain buckets.
You can attach an IAM role to every EC2 instance and to every Elastic Beanstalk environment. This role is automatically served to your instances via instance metadata, which is a safe way to give special rights to your instances.
(Note: this is an AWS security best practice, since AWS handles the key rotation on your EC2 boxes.)
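To illustrate, code running on the instance can create a client with no keys at all; boto3 discovers the role's credentials from instance metadata. The bucket and object names below are placeholders.

```python
import boto3

# No access keys in code or config: credentials come from the
# instance profile attached to the EC2 / Elastic Beanstalk instance.
s3 = boto3.client("s3")

# Upload a new image from the app (placeholder paths)...
s3.upload_file("local/photo.jpg", "my-private-bucket", "uploads/photo.jpg")

# ...and read a resource back.
obj = s3.get_object(Bucket="my-private-bucket", Key="uploads/photo.jpg")
data = obj["Body"].read()
```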