I am receiving "Could not connect to the endpoint URL: https://s3.amazonaws.com/" from inside an EC2 instance running in a private subnet.
Note: We are using our corporate shared AWS account instead of a federated account for this exercise.
Here is the configuration:
Created one VPC with 1 private subnet (attached to VPC endpoints for S3 and DynamoDB) and 1 public subnet (attached to an Internet Gateway). There is no NAT gateway or NAT instance.
Launched one EC2 instance (Amazon Linux AMI) inside each subnet.
Attached IAM roles granting access to DynamoDB and S3 to both EC2 instances.
Connected to the EC2 instances from a terminal and configured my access keys using aws configure.
Policy for the S3 VPC endpoint:
{
    "Statement": [
        {
            "Action": "*",
            "Effect": "Allow",
            "Resource": "*",
            "Principal": "*"
        }
    ]
}
A route was automatically added to the VPC route table with destination pl-xxxxxxxx (com.amazonaws.us-east-1.s3) and target set to the endpoint created above.
Opened all traffic in the outbound rules of the Security Group for the private subnet, with the destination set to the S3 endpoint prefix list pl-xxxxxxxx.
Then I entered the following command on the private EC2 instance:
aws s3 ls --debug --region us-west-2
I got the following error:
"ConnectionTimeout: Could not connect to the endpoint URL https://sts.us-west-2.amazonaws.com:443"
I have read almost all the resources on Google, and they follow the same steps I have been following, but it is not working out for me.
The only difference is that they are using a federated AWS account whereas I am using a shared AWS account.
The same goes for DynamoDB access.
A similar Stack Overflow issue: Connecting to S3 bucket thru S3 VPC Endpoint inside EC2 instance timing out.
But I could not get much benefit from it.
Thanks a lot in advance.
Update: I was able to resolve the issue with the STS endpoint by creating an STS interface endpoint in the private subnet, and then accessing DynamoDB and S3 by assuming a role inside the EC2 instance.
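For reference, creating such an interface endpoint from the CLI looks roughly like this (a minimal sketch; all IDs are hypothetical placeholders, and the Region in the service name must match the Region your CLI calls target):

# All IDs below are hypothetical placeholders.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.sts \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled

With private DNS enabled, calls to sts.us-east-1.amazonaws.com from the private subnet resolve to the endpoint's private IPs, so the assume-role call no longer needs internet access.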
Related
I'm using a gateway endpoint to connect to an S3 bucket from an EC2 instance in the default VPC. However, the connection isn't working.
I have checked the following configurations:
VPC DNS resolution is set to yes.
The VPC route table has a route to Amazon S3 via the gateway VPC endpoint.
The security group outbound rules for the EC2 instance permit all traffic on all ports.
The VPC network ACL permits all traffic.
The bucket policy allows public access.
The EC2 instance has an IAM role with the S3FullAccess policy attached.
Both the bucket and the EC2 instance are in us-east-2.
Error Details:
[ec2-user@ip-172-31-37-114 ~]$ aws s3 ls
Connect timeout on endpoint URL: "https://s3.amazonaws.com/"
[ec2-user@ip-172-31-37-114 ~]$
Can you please explain why it is not working without --region us-east-2?
It was not working because you were using the s3.amazonaws.com endpoint, which is for the us-east-1 region. Gateway VPC endpoints are regional, and your endpoint was created for us-east-2, so you had to explicitly tell aws s3 to use us-east-2 rather than the default us-east-1.
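To avoid passing --region on every call, you can pin the CLI's default region so requests go to the regional endpoint covered by the gateway endpoint (a small sketch using aws configure):

# Set the default region to match the gateway endpoint's region.
aws configure set region us-east-2
aws s3 ls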
I use aws ecr to get a login password and then pull a Docker image from a private ECR registry on the public-subnet EC2 instance. This public subnet already has an internet gateway attached.
I already had a gateway endpoint for S3, so I created an interface endpoint for ECR (com.amazonaws.ap-southeast-1.ecr.dkr) following the official document; its subnet setting is the private subnet, and private DNS is enabled.
https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#test-interface-endpoint-aws
After that, the public EC2 instance can get the password via aws ecr, but docker login fails, and the private EC2 instance cannot get the password via aws ecr at all.
Both EC2 instances allow all outbound traffic and have no NACL restrictions. Their IAM role combines AmazonEC2ContainerRegistryReadOnly with the S3 access permission shown below.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::prod-ap-southeast-1-starport-layer-bucket/*"
        }
    ]
}
On the private EC2 instance, aws ecr get-login-password --region ap-southeast-1 gives this error message:
Connect timeout on endpoint URL: "https://api.ecr.ap-southeast-1.amazonaws.com/"
Using dig on api.ecr.ap-southeast-1.amazonaws.com successfully resolves an IP. I did not change any settings after creating the interface endpoint. I don't know which step is wrong; please give me some suggestions. Thank you very much.
Update
I have a VPC with 1 public subnet and 1 private subnet, each with its own route table; the public subnet's route table has the internet gateway, and the private subnet's route table has the S3 endpoint.
Security groups:
private subnet EC2
Inbound rules: HTTP 80 from source sg-ALB; SSH 22 from source sg-public-EC2
Outbound rule: All traffic
public subnet EC2
Inbound rules: SSH 22 from source All IPv4
Outbound rule: All traffic
Public EC2 error message:
Error response from daemon: Get "https://xxx.dkr.ecr.ap-southeast-1.amazonaws.com/v2/":
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Update 2
I have a private hosted zone in Route 53 for this VPC; I am not sure whether that could be a problem or not.
Your endpoint is using HTTPS, which means you have to allow port 443 in your security groups.
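For an interface endpoint, that means the security group attached to the endpoint's network interfaces must accept TCP 443 from the instances. A sketch with hypothetical IDs:

# sg-0123456789abcdef0 is a hypothetical placeholder for the endpoint's
# security group; 10.0.0.0/16 stands in for the VPC CIDR.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 10.0.0.0/16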
I'm trying to get an EC2 instance to access an S3 bucket. I'd rather use the IP address of the instance to allow access to S3 than use AssumeRole.
In the bucket policy, I've tried allowing both the instance's public AND private IP, but trying to access any resource gives <Code>AccessDenied</Code><Message>Access Denied</Message>.
I'm able to use IAM roles and the AWS CLI to access the S3 bucket, but I need to access the bucket via a plain HTTP address like http://s3.amazonaws.com/somebucket/somefile.txt. I've also tested with non-cloud servers (my own laptop and other servers), and allowing the public IP of those servers successfully gave me access to the S3 resources; it's only not working when I do the same for EC2 instances.
I looked at the access logs and I see the private IP of the EC2 instance being logged and getting a 403 Access Denied.
My bucket policy looks like this:
{
    "Sid": "Statement1",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::test-bucket1/*",
    "Condition": {
        "IpAddress": {
            "aws:SourceIp": [
                "EC2-public-ip-address/32",
                "EC2-private-ip-address/32"
            ]
        }
    }
},
I see a gateway endpoint associated with the VPC that the EC2 instance is in.
So that's why it uses the private IP. An S3 gateway endpoint enables private connections from the VPC to S3 without traversing the internet, so only the private IP is used in that case.
You either have to settle for the private IP only, or modify your VPC and S3 gateway settings so that S3 traffic goes over the internet. The latter may be a security issue, as S3 gateway endpoints are more secure (no internet exposure).
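If you keep the gateway endpoint, a common alternative to IP-based conditions is keying the bucket policy on the endpoint itself with aws:SourceVpce. A sketch (the endpoint ID is a hypothetical placeholder):

# Grants GetObject to any caller whose traffic arrives via the named endpoint.
aws s3api put-bucket-policy --bucket test-bucket1 --policy '{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowFromVpcEndpoint",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::test-bucket1/*",
        "Condition": {"StringEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}}
    }]
}'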
I've been trying to connect to an S3 bucket from a Lambda function residing in a private subnet. I did the exact same thing for an EC2 instance and it worked like a charm; I'm not sure why it's such an issue with Lambda. My Lambda times out after the configured interval.
Here's my Lambda's VPC configuration.
Here's the security group outbound configuration.
Below are the outbound rules of the subnet associated with the Lambda.
As you can see, I created a VPC endpoint to route my traffic through the VPC, but it doesn't work. I'm not sure what I am missing here. Below is the VPC endpoint configuration.
I've given the endpoint full access to S3 in its policy, like this:
{
    "Statement": [
        {
            "Action": "*",
            "Effect": "Allow",
            "Resource": "*",
            "Principal": "*"
        }
    ]
}
When I run my Lambda code, I get a timeout error.
You can access Amazon S3 objects through a gateway VPC endpoint only when the S3 objects are in the same Region as the endpoint. Confirm that your objects and your endpoint are in the same Region.
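A quick way to confirm the bucket's Region (a sketch; my-bucket is a placeholder name):

# Returns the bucket's Region; a null LocationConstraint means us-east-1.
aws s3api get-bucket-location --bucket my-bucket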
To reproduce your situation, I performed the following steps:
Created an AWS Lambda function that calls ListBuckets(). Tested it without attaching to a VPC. It worked fine.
Created a VPC with just a private subnet
Added an Amazon S3 Endpoint Gateway to the VPC and subnet
Reconfigured the Lambda function to use the VPC and subnet
Tested the Lambda function -- it worked fine
I suspect your problem might lie with the Security Group attached to the Lambda function. I left my Outbound rules as "All Traffic 0.0.0.0/0" rather than restricting it. Give that a try and see if it makes things better.
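If the outbound rules were tightened, re-opening them from the CLI looks roughly like this (a sketch; the group ID is a hypothetical placeholder):

# IpProtocol=-1 means all protocols and ports.
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'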
In a VPC, I have two subnets: one is a public subnet with an EC2 instance, and the other is a private subnet with 2 EC2 instances. All 3 EC2 instances have the same IAM role to access S3.
The EC2 instance in the public subnet can access S3 directly if I log in and run aws s3 ls. However, both of the EC2 instances in the private subnet cannot. What can be the reasons?
The EC2 instances in the private subnet use a Security Group that accepts traffic from the whole VPC.
The EC2 instance in the public subnet uses a Security Group that accepts traffic from anywhere.
All 3 EC2 instances use the same routing table, the same NACLs, and the same IAM role, with this policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
If I manually create a credential profile on the EC2 instance in the private subnet, then it can log in and run aws s3 ls.
Update:
The routing table of the private subnets does have a VPC Endpoint. The routing table is:
1. dest=10.1.0.0/16 (myVPC) -> target=local
2. dest=0.0.0.0/0 -> target=iGW
3. dest=pl-68a54001 (com.amazonaws.us-west-2.s3) -> target=vpce-26cf344f
Among them, #3 means the EC2 instances can access S3 via the VPC endpoint. #2 was added because an ELB is in front of the EC2 instances and has to access the internet.
Another observation: if I enable Assign Public IP in the private subnet and then launch a new EC2 instance, this instance can access S3. If I disable Assign Public IP and launch a new EC2 instance, the new instance cannot access S3.
BTW, I already had the region set to us-west-2 before running Terraform:
[ec2-user@ip-XXXXX]$ echo $AWS_DEFAULT_PROFILE
abcdefg
[ec2-user@XXXXX]$ aws configure --profile abcdefg
AWS Access Key ID [****************TRM]:
AWS Secret Access Key [****************nt+]:
Default region name [us-west-2]:
Default output format [None]:
The fact that your solution works when an instance has a Public IP address, but does not work when it does not have a Public IP address, suggests that the instance is actually in a public subnet.
Indeed, looking at "the routing table of the private subnets", you include this line:
2. dest=0.0.0.0/0, target=iGW
This is making the subnet a Public Subnet because it is pointing to an Internet Gateway.
To clarify:
A public subnet is a subnet with a Route Table entry that points to an Internet Gateway
A private subnet is a subnet with a Route Table that does not point to an Internet Gateway
Therefore, you should remove the above entry from the Route Table for your private subnets if you actually want them to be private.
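Removing the route from the CLI would look roughly like this (a sketch; the route table ID is a hypothetical placeholder):

# Deletes the default route that points at the Internet Gateway.
aws ec2 delete-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0

If the ELB shares this route table it will lose its internet path too, so it should live in its own public subnet.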
Next, it appears that your aws s3 ls request is not being sent to the VPC Endpoint. This might be because you are not sending traffic to s3.us-west-2.amazonaws.com as listed in your route table. Try this command:
aws s3 ls --region us-west-2
That will send the request to the regional S3 endpoint, which is routed via the VPC Endpoint. You should direct all of your S3 commands to that region, since the VPC Endpoint only serves the region in which it was created.
When you place the EC2 instance in a private subnet, at the network level it does not have access to S3 (it is not an issue with the IAM policy). To allow outbound access to S3 from EC2 instances in a private subnet, you have the following options:
VPC endpoints for S3
NAT Gateway
Of the two approaches, if you plan to allow access only to S3 from the EC2 instance in the private subnet, configure a VPC endpoint for S3, as in the sketch below.
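Creating the gateway endpoint from the CLI looks roughly like this (a sketch; the VPC and route table IDs are hypothetical placeholders):

# Associates the S3 gateway endpoint with the private subnet's route table.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-west-2.s3 \
    --route-table-ids rtb-0123456789abcdef0

Unlike a NAT gateway, a gateway endpoint carries no hourly charge, but it only routes traffic to the named service.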