I have an S3 bucket policy that whitelists my IP ranges in AWS. An EC2 server running a Packer build job tries to pull an object from the bucket and gets a 403 Forbidden error, even though the IP of the EC2 server running the job is clearly within the whitelisted range. Even when I run wget from another machine within that CIDR range, I get the same error. I am confused about why this is happening; the policy seems fine. Below are my bucket policy, the IP of my server, and the error:
Bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::xxxxxxx",
        "arn:aws:s3:::xxxxxxx/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "10.x.x.x/12"
          ]
        }
      }
    }
  ]
}
Server IP:
10.x.x.x/32
Error:
ui,message, amazon-ebs: "msg": "Error downloading
https://s3.amazonaws.com/xxxxx/yyyy.zip to C:\\temp\\xxx.zip Exception
calling \"DownloadFile\" with \"2\" argument(s): \"The remote server
returned an error: (403) Forbidden.\""
Amazon S3 lives on the Internet.
Therefore, when communicating with S3, your system will be using a Public IP address.
However, your policy only includes private IP addresses. That is why it is not working.
Your options are:
Modify the policy to use the Public IP address of the instance(s), or the Public IP address of a NAT Gateway if your instances are in a private subnet, OR
Create a Gateway VPC Endpoint that connects the VPC directly to Amazon S3. You can then configure a Bucket Policy that only accepts traffic via the VPC Endpoint.
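As a rough sketch of the second option, a bucket policy restricted to a Gateway VPC Endpoint would look something like this (the vpce- ID below is a placeholder for your own endpoint, and the bucket ARN is masked as in the question):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetViaVpcEndpointOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::xxxxxxx/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpce": "vpce-1234567890abcdef0"
        }
      }
    }
  ]
}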
aws:SourceIp expects a public IP address. Private addresses are, by definition, ambiguous, and 10.x.x.x/12 is a private (RFC 1918) address, so it will never match.
If you are not using an S3 VPC endpoint, you could whitelist the public IP address of your NAT Gateway (assuming all the instances with access to the gateway should be able to access the bucket).
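For that non-endpoint case, the condition simply references the NAT Gateway's Elastic IP instead of a private range; a minimal sketch, with a placeholder address:
"Condition": {
  "IpAddress": {
    "aws:SourceIp": "203.0.113.25/32"
  }
}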
If you are using an S3 VPC endpoint, you can't whitelist by IP:
you cannot use the aws:SourceIp condition in your IAM policies for requests to Amazon S3 through a VPC endpoint. This applies to IAM policies for users and roles, and any bucket policies. If a statement includes the aws:SourceIp condition, the value fails to match any provided IP address or range.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html
Also, there's this:
Note: It's a best practice not to use the aws:SourceIp condition key.
https://aws.amazon.com/premiumsupport/knowledge-center/iam-restrict-calls-ip-addresses/
Related
I'm trying to get an EC2 instance to access an S3 bucket. I'd rather allow access to S3 by the instance's IP address than by AssumeRole.
In the bucket policy, I've tried allowing the instance's public AND private IP, but trying to access any resource gives <Code>AccessDenied</Code><Message>Access Denied</Message>.
I'm able to use IAM roles and the AWS CLI to access the S3 bucket, but I need to access the bucket using a plain HTTP address like http://s3.amazonaws.com/somebucket/somefile.txt. I've also tested with non-cloud servers (my own laptop and other servers), and allowing the public IP of those servers successfully lets me access the S3 resources; it only fails when I do the same for EC2 instances.
Looking at the access logs, I see the private IP of the EC2 instance being logged and receiving a 403 Access Denied.
My bucket policy looks like this:
{
  "Sid": "Statement1",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": "arn:aws:s3:::test-bucket1/*",
  "Condition": {
    "IpAddress": {
      "aws:SourceIp": [
        "EC2-public-ip-address/32",
        "EC2-private-ip-address/32"
      ]
    }
  }
},
I see a gateway endpoint associated with the VPC that the EC2 instance is in
So that's why it uses the private IP. An S3 gateway endpoint enables private connections from the VPC to S3 without going over the internet, so only the private IP is used in that case.
You either have to settle for the private IP only, or modify your VPC and S3 gateway settings so that connections to S3 go over the internet. That may be a security issue, as S3 gateway endpoints are more secure (no internet).
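If you keep the gateway endpoint, one possible approach (my suggestion, not something from the question) is to match the private address with the aws:VpcSourceIp condition key, which carries the requester's IP for requests made through a VPC endpoint. A minimal sketch, reusing the placeholder from the question:
{
  "Sid": "AllowFromVpcPrivateIp",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::test-bucket1/*",
  "Condition": {
    "IpAddress": {
      "aws:VpcSourceIp": "EC2-private-ip-address/32"
    }
  }
}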
I've been trying to connect to an S3 bucket from a Lambda function residing in a private subnet. I did the exact same thing for an EC2 instance and it worked like a charm; I'm not sure why it's such an issue with Lambda. My Lambda times out after the configured interval.
Here's my Lambda's VPC configuration:
Here's the security group outbound configuration:
Below are the outbound rules of the subnet associated with the Lambda.
As you can see, I created a VPC endpoint to route my traffic through the VPC, but it doesn't work. I'm not sure what I'm missing here. Below is the VPC endpoint configuration.
I've given the endpoint full access to S3 with a policy like this:
{
  "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*",
      "Principal": "*"
    }
  ]
}
When I run my Lambda code, I get a timeout error.
You can access Amazon S3 objects using VPC endpoint only when the S3 objects are in the same Region as the Amazon S3 gateway VPC endpoint. Confirm that your objects and endpoint are in the same Region.
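A quick way to confirm the bucket's Region is get-bucket-location (the bucket name below is a placeholder); a null LocationConstraint means us-east-1:
aws s3api get-bucket-location --bucket somebucket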
To reproduce your situation, I performed the following steps:
Created an AWS Lambda function that calls ListBuckets(). Tested it without attaching to a VPC. It worked fine.
Created a VPC with just a private subnet
Added an Amazon S3 Endpoint Gateway to the VPC and subnet
Reconfigured the Lambda function to use the VPC and subnet
Tested the Lambda function -- it worked fine
I suspect your problem might lie with the Security Group attached to the Lambda function. I left my Outbound rules as "All Traffic 0.0.0.0/0" rather than restricting it. Give that a try and see if it makes things better.
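If you want to check the group's outbound rules from the CLI first, something like the following should show them (the group ID is a placeholder):
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 --query "SecurityGroups[].IpPermissionsEgress"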
I am receiving "Could not connect to the endpoint URL: https://s3.amazonaws.com/" from inside an EC2 instance running in a private subnet.
Note: We are using our corporate shared AWS account instead of a federated account for this exercise.
Here is the configuration:
Created one VPC with one private subnet (attached to VPC endpoints for S3 and DynamoDB) and one public subnet (attached to an Internet Gateway). There is no NAT gateway or NAT instance.
Launched one EC2 instance (Amazon Linux AMI) inside each subnet.
Attached IAM roles with access to DynamoDB and S3 to both EC2 instances.
Connected to the EC2 instances from a terminal and configured my access keys using aws configure.
Policy for S3 VPC endpoint:
"Statement": [
{
"Action": "*",
"Effect": "Allow",
"Resource": "*",
"Principal": "*"
}
]
}
A route was automatically added to the VPC route table where the destination is pl-xxxxxxxx (com.amazonaws.us-east-1.s3) and the target is the endpoint created earlier.
Opened all traffic in the outbound rules of the Security Group for the private subnet, with the destination set to the S3 endpoint prefix list starting with pl-xxxxxxxx.
Then I entered the following command on the private EC2 instance:
aws s3 ls --debug --region us-west-2
I got the following error:
"ConnectionTimeout: Could not connect to the endpoint URL https://sts.us-west-2.amazonaws.com:443"
I have read almost all the resources I could find on Google, and they follow the same steps I have been following, but it is not working for me.
The only difference is that they are using a federated AWS account whereas I am using a shared AWS account.
The same goes for DynamoDB access.
Similar stackoverflow issue: Connecting to S3 bucket thru S3 VPC Endpoint inside EC2 instance timing out
But it did not help much.
Thanks a lot in advance.
Update: I was able to resolve the STS endpoint issue by creating an STS interface endpoint in the private subnet and then accessing DynamoDB and S3 by assuming the role inside the EC2 instance.
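For reference, creating such an interface endpoint from the CLI would look roughly like this; the VPC, subnet, and security group IDs are placeholders, and I am assuming us-east-1 to match the com.amazonaws.us-east-1.s3 prefix list mentioned above:
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Interface --service-name com.amazonaws.us-east-1.sts --subnet-ids subnet-0123456789abcdef0 --security-group-ids sg-0123456789abcdef0 --private-dns-enabled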
I've created some private documentation for my infra team, uploaded it to an S3 bucket, and would like to keep it private, accessible only from our VPN.
I tried to allow the VPN IP ranges 173.12.0.0/16 and 173.11.0.0/16, but I keep getting 403 Forbidden (while inside the VPN).
Can someone help me debug this or find where I'm messing up?
My bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "vpnOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::calian.io/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "173.12.0.0/16",
            "173.11.0.0/16"
          ]
        }
      }
    }
  ]
}
By default, S3 requests go via the Internet, so the requests would 'appear' to be coming from a public IP address.
Alternatively, you could add a VPC Endpoint for S3, which would make the request come 'from' the private IP addresses.
You might also consider using Amazon S3 Access Points to control the access to the bucket.
Since VPC endpoints are only accessible from Amazon EC2 instances inside a VPC, a local instance must proxy all remote requests before they can utilize a VPC endpoint connection. AWS documents a DNS-based proxy solution that directs the appropriate traffic from a corporate network to a VPC endpoint for Amazon S3.
From one of the machines you are attempting to access the S3 bucket from, go to the AWS check-IP endpoint at https://checkip.amazonaws.com/.
Confirm that the IP address you see is inside the range you have defined in your policy. My guess is it will be different: instead you'll see the public IP address of your VPN or NAT Gateway/instance, as your traffic is likely going over the internet to reach S3.
Once you've identified the IP address you're actually using, you can either update the bucket policy to include it, or look into solutions such as a VPC Endpoint to keep the traffic on your private network.
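A minimal way to check that from a shell, assuming curl is available on the machine:
curl https://checkip.amazonaws.com/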
My infrastructure looks like this:
Two EC2 machines (no public IP). Each EC2 machine is in a separate subnet.
An AWS API Gateway with 4 APIs.
The EC2 machines will access the API Gateway to consume the REST APIs.
Test:
I am trying to control access to the API Gateway using a resource policy based on IP addresses. I wanted only the two EC2 machines to access the API Gateway; my resource policy is below. I am using the subnets' CIDR ranges in the resource policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "exact resource name",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "subnet address 1/CIDR",
            "subnet address 2/CIDR"
          ]
        }
      }
    }
  ]
}
The above policy is not working and blocks all REST calls.
Please help me resolve the issue.
API Gateway is not inside your VPC, so it does not see your traffic as coming from your VPC CIDR blocks. In the policy, you need to provide the public IP addresses of the NAT Gateways your instances use to reach the Internet.
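As a rough sketch of what that change looks like (the Elastic IPs below are placeholders for your NAT Gateways' public addresses):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "exact resource name",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "203.0.113.10/32",
            "203.0.113.11/32"
          ]
        }
      }
    }
  ]
}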