I have a Lambda function in my VPC, and I want to access an S3 bucket.
I think I have set up the S3 VPC endpoint correctly, because I created an EC2 instance in the same subnet and security group as the Lambda function. When I ran a copy of the Lambda function code on that EC2 instance, it correctly showed the S3 file content.
But when I ran the code in Lambda, it failed. So I want to know: what is the difference between "run in EC2" and "run in Lambda"? Why did it fail when I ran it in Lambda?
Here is my Lambda function code:
import boto3

s3 = boto3.client('s3', region_name='ap-northeast-1')

def lambda_handler(event, context):
    bucket = '*xxxxxx*'
    key = 's3-upload.json'
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print('--------------------------------------')
        print(response)
        print('--------------------------------------')
        body = response['Body'].read()
        print(body)
        print('--------------------------------------')
        print("CONTENT TYPE: " + response['ContentType'])
    except Exception as e:
        print('Error getting object.')
        print(e)
        raise e
If you want to allow an AWS Lambda function to access Amazon S3, use one of these methods:
Do not associate the function with a VPC. Access is then automatic.
If the function is attached to a public subnet in the VPC, associate an Elastic IP with the Lambda function's ENI that appears in the VPC (not recommended).
If the function is attached to a private subnet in the VPC, launch a NAT Gateway in the public subnet and update the route tables. Traffic will flow to the Internet via the NAT Gateway.
Add an Amazon S3 VPC endpoint to the VPC and update the route tables. Traffic will flow through that instead of the Internet Gateway.
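If you go the VPC endpoint route, it is worth verifying that the endpoint actually exists and is attached to the route tables of every subnet the function uses. A minimal boto3 sketch, assuming the region from the question and a placeholder VPC ID:

import boto3

ec2 = boto3.client('ec2', region_name='ap-northeast-1')  # region assumed from the question

# Find S3 gateway endpoints in the VPC and show which route tables they are attached to.
endpoints = ec2.describe_vpc_endpoints(
    Filters=[
        {'Name': 'vpc-id', 'Values': ['vpc-xxxxxxxx']},  # placeholder VPC ID
        {'Name': 'service-name', 'Values': ['com.amazonaws.ap-northeast-1.s3']},
    ]
)['VpcEndpoints']

for ep in endpoints:
    print(ep['VpcEndpointId'], ep['State'], 'route tables:', ep['RouteTableIds'])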
Even though they're in the same VPC, EC2 and Lambda are still different environments within AWS. Being able to run your code in one and not the other implies that your code is fine and works, so it's likely to be a configuration issue with AWS.
Have you checked the service/execution role that the Lambda function is using?
You need to ensure that the IAM role that it's using is allowed the correct level of S3 access.
This documentation on execution roles for lambda might provide a useful jumping off point: https://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html#lambda-intro-execution-role
An IAM policy like this would give whatever execution role you use read-only access to all your S3 buckets, and it happens to be one of the AWS managed policies (AmazonS3ReadOnlyAccess).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "*"
        }
    ]
}
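If you would rather attach that managed policy than write an inline one, a short boto3 sketch (the role name is a placeholder for your function's execution role):

import boto3

iam = boto3.client('iam')

# AmazonS3ReadOnlyAccess is the AWS managed policy shown above.
iam.attach_role_policy(
    RoleName='my-lambda-execution-role',  # placeholder: your function's execution role
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess',
)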
Thanks everyone! I found the reason.
My Lambda function is configured with two subnets, private_sn_1 and private_sn_2. private_sn_1 had the VPC endpoint route table set correctly, but private_sn_2 was associated with the wrong route table, and my EC2 instance was created in private_sn_1, so it could reach the VPC endpoint.
Normally Lambda runs in either private_sn_1 or private_sn_2 at random, but in my case it always ran in private_sn_2 (I don't know why), so once I fixed the private_sn_2 route table, everything worked.
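For anyone debugging the same thing, here is a boto3 sketch (the function name is a placeholder) that prints the route table each of the function's subnets resolves to; note that a subnet with no explicit association uses the VPC's main route table, which this filter will not return:

import boto3

lam = boto3.client('lambda', region_name='ap-northeast-1')  # region assumed from the question
ec2 = boto3.client('ec2', region_name='ap-northeast-1')

# Which subnets is the function attached to?
cfg = lam.get_function_configuration(FunctionName='my-function')  # placeholder name
for subnet_id in cfg.get('VpcConfig', {}).get('SubnetIds', []):
    tables = ec2.describe_route_tables(
        Filters=[{'Name': 'association.subnet-id', 'Values': [subnet_id]}]
    )['RouteTables']
    for table in tables:
        # A working S3 gateway endpoint shows up as a route whose GatewayId starts with 'vpce-'.
        vpce_routes = [r for r in table['Routes'] if r.get('GatewayId', '').startswith('vpce-')]
        print(subnet_id, table['RouteTableId'], 'endpoint routes:', vpce_routes)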
In addition to all of the above, it is also possible that the VPC endpoint policy is restrictive and does not allow traffic to/from S3 through. Make sure you allow traffic through the endpoint, for example by using the "Full access" policy.
Edit: here's the relevant bit of documentation:
Your policy must contain a Principal element. For gateway endpoints only, you cannot limit the principal to a specific IAM role or user. Specify "*" to grant access to all IAM roles and users. Additionally, for gateway endpoints only, if you specify the principal in the format "AWS":"AWS-account-ID" or "AWS":"arn:aws:iam::AWS-account-ID:root", access is granted to the AWS account root user only, and not all IAM users and roles for the account.
So, in the general case, for S3 endpoints to work you need to specify "*" as the principal.
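As a sketch, you could reset the endpoint to a full-access policy with boto3 (the endpoint ID is a placeholder; tighten the policy again once things work):

import boto3, json

ec2 = boto3.client('ec2')

# The "Full access" policy: any principal, any action, any resource.
full_access = {
    'Version': '2012-10-17',
    'Statement': [
        {'Effect': 'Allow', 'Principal': '*', 'Action': '*', 'Resource': '*'}
    ],
}
ec2.modify_vpc_endpoint(
    VpcEndpointId='vpce-xxxxxxxx',  # placeholder endpoint ID
    PolicyDocument=json.dumps(full_access),
)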
I'm trying to create a private API using AWS API Gateway. In my understanding, I have two options to implement a private API Gateway: 1) restrict sources with an API Gateway resource policy, or 2) restrict sources to a VPC with a VPC endpoint.
My question is: for option 1, can I set a condition in the resource policy to allow traffic only from a specific VPC and achieve the same result as option 2?
# API Gateway resource policy
{
    ...
    "Condition": {
        "StringEquals": {
            "aws:sourceVpc": "vpc-123abc"
        }
    }
}
If yes, what's the difference between them? What are the advantages of adopting a VPC endpoint to implement a private API Gateway?
Here are the ways you can access a private API Gateway: How to invoke a private API
The condition that works with VPC endpoints in your case is aws:SourceVpce, whose value here is the ID of the execute-api endpoint that you deployed in your AWS account. Here you can find the list of AWS global condition context keys: AWS global condition context keys.
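To make that concrete, here is a boto3 sketch (name, region, and endpoint ID are placeholders) that creates a private REST API whose resource policy allows invocation only through a specific execute-api VPC endpoint:

import boto3, json

apigw = boto3.client('apigateway', region_name='us-east-1')  # placeholder region

policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': '*',
        'Action': 'execute-api:Invoke',
        'Resource': 'execute-api:/*',
        'Condition': {'StringEquals': {'aws:SourceVpce': 'vpce-123abc'}},  # placeholder endpoint ID
    }],
}

api = apigw.create_rest_api(
    name='my-private-api',                         # placeholder name
    endpointConfiguration={'types': ['PRIVATE']},  # a private API is only reachable via a VPC endpoint
    policy=json.dumps(policy),
)
print(api['id'])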
I have installed nginx with the kaltura-nginx-vod module on EC2. I would like to set up private, read-only access to my S3 bucket over HTTP. Example of the desired nginx configuration:
vod_upstream_location /s3;

location /s3/ {
    internal;
    proxy_pass http://my-s3-bucket.s3-eu-east-1.amazonaws.com/;
}
I tried to create an Access Point for my S3 bucket. In its settings I pointed the Access option to my VPC, but cURL from the EC2 instance returned 403 when I tried to get an object from S3 by its HTTP URL.
I had also created an IAM role with read-only S3 access and assigned it to my EC2 instance, but the result was the same: 403.
How do I set up private HTTP access from EC2 to an S3 bucket over Amazon's private network in the same region?
This does not work because you can't access objects by their URL unless they are public. Since you've assigned an IAM role to the EC2 instance, you have to make a signed HTTP request to the object's URL using the EC2 instance role credentials.
So you either have to construct a valid signature yourself, or simply use an AWS SDK, such as boto3 for Python, to do it for you. Judging by the kaltura-nginx-vod description, it does not appear to make signed requests to S3 on your behalf.
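One practical workaround, if the module can be pointed at an arbitrary URL, is to hand it a pre-signed URL instead: boto3 will sign it using the instance role credentials it picks up automatically. A minimal sketch (bucket, key, and region are placeholders):

import boto3

s3 = boto3.client('s3', region_name='eu-west-1')  # placeholder: use your bucket's region
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-s3-bucket', 'Key': 'videos/example.mp4'},  # placeholders
    ExpiresIn=3600,  # seconds the URL stays valid
)
print(url)  # any plain HTTP(S) client can fetch this URL while it is valid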
I've created a Node.js application which connects to DynamoDB. Everything works fine locally. Now I'm trying to set it up on AWS servers.
First I created the DynamoDB tables from the AWS DynamoDB console. That works fine.
I've created a new role from IAM management console > Roles to access DynamoDB, and attached that role to the EC2 instance.
But when I ran any aws dynamodb CLI command, it gave me an error telling me to specify the region.
So I went to IAM management console > Users, and created an access key for my admin-type user.
Then I logged in to the EC2 instance as ec2-user and ran aws configure with the previously generated access key:
AWS Access Key ID [None]: ACCESS KEY
AWS Secret Access Key [None]: SECRET
Default region name [None]: us-east-1
Default output format [None]: json
But when I run aws dynamodb list-tables, it gives no output and no error.
As I commented, the main issue was the outbound rules of the attached security group. Here are the necessary things to do:
Set a security group outbound rule allowing HTTPS.
Setup Credentials
Create Access Key from IAM management console > Users.
SSH to EC2 instance.
Configure the credentials on the EC2 instance using the aws configure command, or directly modify the ~/.aws/credentials file.
Attach Role
Create a role from IAM management console > Roles, with the permissions necessary to perform operations on the AWS service, e.g. AmazonDynamoDBFullAccess.
Open the EC2 console and select the EC2 instance.
Attach the role from the Actions menu.
It is good, though optional, to create a VPC endpoint. If you face an UnauthorizedOperation error while creating the endpoint, assign the AmazonEC2FullAccess permission to the user from the IAM console. Remove it later if you don't need it.
To use the AWS service from your application, find the relevant endpoint from this list.
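Once the outbound HTTPS rule and credentials are in place, a quick sanity check from the instance, as a boto3 sketch (region assumed to be us-east-1, as in the question):

import boto3

# Confirm which identity the instance is actually using...
sts = boto3.client('sts', region_name='us-east-1')
print(sts.get_caller_identity()['Arn'])

# ...and that DynamoDB is reachable with those credentials.
dynamodb = boto3.client('dynamodb', region_name='us-east-1')
print(dynamodb.list_tables()['TableNames'])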
It sounds like you are having problems connecting to DynamoDB because of the way you have configured your VPC.
There are some options but if you would prefer to keep your VPC isolated from the internet then you could enable VPC endpoints for DynamoDB. That way you can access DynamoDB from within your VPC without those connections going over the public internet.
There is a step-by-step guide for how to do that here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
Essentially, it involves the following steps:
you have to get the ID of the VPC where your EC2 instance is located
create a VPC endpoint for DynamoDB, specifying the VPC ID and the regional DynamoDB service name:
aws ec2 create-vpc-endpoint --service-name com.amazonaws.<region>.dynamodb --vpc-id <yourvpcid>
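For reference, the boto3 equivalent of that command, with placeholder IDs:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # placeholder region
ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    ServiceName='com.amazonaws.us-east-1.dynamodb',
    VpcId='vpc-0123456789abcdef0',            # placeholder VPC ID
    RouteTableIds=['rtb-0123456789abcdef0'],  # placeholder route table ID(s)
)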
The goal
I want to programmatically add an item to a table in my DynamoDB from my Elastic Beanstalk application, using code similar to:
Item item = new Item()
    .withPrimaryKey(UserIdAttributeName, userId)
    .withString(UserNameAttributeName, userName);
table.putItem(item);
The unexpected result
Logs show the following error message, with the [bracketed parts] being my edits:
User: arn:aws:sts::[iam id?]:assumed-role/aws-elasticbeanstalk-ec2-role/i-[some number] is not authorized to perform: dynamodb:PutItem on resource: arn:aws:dynamodb:us-west-2:[iam id?]:table/PiggyBanks (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: [the request id])
I am able to get the table just fine, but things go awry when PutItem is called.
The configuration
I created a new Elastic Beanstalk application. According to the documentation, this automatically assigns the application a new role, called:
aws-elasticbeanstalk-service-role
That same documentation indicates that I can add access to my database as follows:
Add permissions for additional services to the default service role in the IAM console.
So, I found the aws-elasticbeanstalk-service-role role and added to it the managed policy, AmazonDynamoDBFullAccess. This policy looks like the following, with additional actions removed for brevity:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"dynamodb:*",
[removed for brevity]
"lambda:DeleteFunction"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
This certainly looks like it should grant the access I need. And, indeed, the policy simulator verifies this. With the following parameters, the action is allowed:
Role: aws-elasticbeanstalk-service-role
Service: DynamoDB
Action: PutItem
Simulation Resource: [Pulled from the above log] arn:aws:dynamodb:us-west-2:[iam id?]:table/PiggyBanks
Update
In answer to the good question by filipebarretto, I instantiate the DynamoDB object as follows:
private static DynamoDB createDynamoDB() {
    AmazonDynamoDBClient client = new AmazonDynamoDBClient();
    client.setRegion(Region.getRegion(Regions.US_WEST_2));
    DynamoDB result = new DynamoDB(client);
    return result;
}
According to this documentation, this should be the way to go about it, because it is using the default credentials provider chain and, in turn, the instance profile credentials, which exist within the instance metadata associated with the IAM role for the EC2 instance.
[This option] in the default provider chain is available only when running your application on an EC2 instance, but provides the greatest ease of use and best security when working with EC2 instances.
Other things I tried
This related Stack Overflow question had an answer that indicated region might be the issue. I've tried tweaking the region with no additional success.
I have tried forcing the usage of the correct credentials using the following:
AmazonDynamoDBClient client = new AmazonDynamoDBClient(new InstanceProfileCredentialsProvider());
I have also tried creating an entirely new environment from within Elastic Beanstalk.
In conclusion
By the error in the log, it certainly looks like my Elastic Beanstalk application is assuming the correct role.
And, by the results of the policy simulator, it looks like the role should have permission to do exactly what I want to do.
So...please help!
Thank you!
Update the aws-elasticbeanstalk-ec2-role role, instead of the aws-elasticbeanstalk-service-role.
This salient documentation contains the key:
When you create an environment, AWS Elastic Beanstalk prompts you to provide two AWS Identity and Access Management (IAM) roles, a service role and an instance profile. The service role is assumed by Elastic Beanstalk to use other AWS services on your behalf. The instance profile is applied to the instances in your environment and allows them to upload logs to Amazon S3 and perform other tasks that vary depending on the environment type and platform.
In other words, one of these roles (-service-role) is used by the Beanstalk service itself, while the other (-ec2-role) is applied to the actual instance.
It's the latter that pertains to any permissions you need from within your application code.
To load your credentials, try:
InstanceProfileCredentialsProvider mInstanceProfileCredentialsProvider = new InstanceProfileCredentialsProvider();
AWSCredentials credentials = mInstanceProfileCredentialsProvider.getCredentials();
AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentials);
or
AmazonDynamoDBClient client = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain());
I'm trying to constrain the images which a specific IAM group can describe. If I have the following policy for my group, users in the group can describe any EC2 image:
{
    "Effect": "Allow",
    "Action": ["ec2:DescribeImages"],
    "Resource": ["*"]
}
I'd like to only allow the group to describe a single image, but when I try setting "Resource": ["arn:aws:ec2:eu-west-1::image/ami-c37474b7"], I get exceptions when trying to describe the image as a member of the group:
AmazonServiceException Status Code: 403,
AWS Service: AmazonEC2,
AWS Request ID: 911a5ed9-37d1-4324-8493-84fba97bf9b6,
AWS Error Code: UnauthorizedOperation,
AWS Error Message: You are not authorized to perform this operation.
I got the ARN format for EC2 images from IAM Policies for EC2, but perhaps something is wrong with my ARN? I have verified that the describe image request works just fine when my resource value is "*".
Unfortunately the error message is misleading; the problem is that Resource-Level Permissions for EC2 and RDS Resources aren't yet available for all API actions. See this note from Amazon Resource Names for Amazon EC2:
Important
Currently, not all API actions support individual ARNs; we'll add support for additional API actions and ARNs for additional Amazon EC2 resources later. For information about which ARNs you can use with which Amazon EC2 API actions, as well as supported condition keys for each ARN, see Supported Resources and Conditions for Amazon EC2 API Actions.
In particular, all ec2:Describe* actions are still absent from Supported Resources and Conditions for Amazon EC2 API Actions at the time of this writing, which implies that you cannot use anything but "Resource": ["*"] for ec2:DescribeImages.
The referenced page on Granting IAM Users Required Permissions for Amazon EC2 Resources also mentions that AWS will add support for additional actions, ARNs, and condition keys in 2014. They have indeed regularly expanded resource-level permission coverage over the last year or so, but so far only for actions which create or modify resources, not for any that require read access only, something many users (including myself) desire and expect for obvious reasons.