AWS Lambda Function Connection with VPC Peering

Context
I have an EC2 instance inside a VPC in AccountID: host that I want to communicate with.
I have a Lambda in AccountID: client that I want to connect to this instance.
I am trying to set up a peering connection between the two, but I'm having issues.
In development, I am able to launch the Lambda from within a VPN that is connected to the instance, and it performs its task (so I am sure the function itself is correct).
Setup
AccountID: host
peering connection: pcx-peer
EC2 Private IPv4: e.e.e.e
Accepter (host) CIDR: e.e.0.0/16
EDIT: Security Group (inbound): {Type: All TCP, Port-range: 0-65535, Source: c.c.c.c/18}
EDIT: Security Group (outbound): All traffic
AccountID: client
VPC: vpc-client
VPC IPv4 CIDR: c.c.c.c/18
peering connection: pcx-peer
Requester (client) CIDR: c.c.c.c/18
Route table: rtb-client
Private subnets (rtb-client): subnet-private1, subnet-private2, subnet-private3
Security Group: sg-lambda
Procedure:
I created and accepted a peering connection (different accounts, same region) according to the documentation.
I edited the route tables in both accounts so that each points to the other (the equivalent CLI calls are sketched after the listing below):
AccountID: host > pcx-peer > Route tables > rtb-host > Routes >:
Destination (e.e.0.0/16) Target (local)
Destination (c.c.c.c/18) Target (pcx-peer)
AccountID: client > pcx-peer > Route tables > rtb-client > Routes >:
Destination (e.e.0.0/16) Target (pcx-peer)
Destination (c.c.c.c/18) Target (local)
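For reference, the equivalent AWS CLI calls would look roughly like this; <vpc-host-id> and <host-account-id> are placeholders, since those values aren't in the post:
# In AccountID: client (requester) - request the peering connection
aws ec2 create-vpc-peering-connection --vpc-id vpc-client --peer-vpc-id <vpc-host-id> --peer-owner-id <host-account-id>
# In AccountID: host (accepter) - accept it
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-peer
# In AccountID: host - send the client CIDR through the peering connection
aws ec2 create-route --route-table-id rtb-host --destination-cidr-block c.c.c.c/18 --vpc-peering-connection-id pcx-peer
# In AccountID: client - send the host CIDR through the peering connection
aws ec2 create-route --route-table-id rtb-client --destination-cidr-block e.e.0.0/16 --vpc-peering-connection-id pcx-peer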
I created the Lambda and, under Configuration > VPC, set:
vpc: vpc-client
subnets: subnet-private1, subnet-private2, subnet-private3
security groups: sg-lambda => Inbound: none, Outbound: All traffic to 0.0.0.0/0
To rule out any permission restrictions, my Lambda has all permissions enabled.
EDIT
I created a security group rule allowing all TCP ports, even though I am sure the instance only requires access on port 5432
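A narrower rule scoped to Postgres would look roughly like this; <sg-ec2> is a placeholder for the instance's security group ID, which isn't named in the post:
# In AccountID: host - allow Postgres from the peered (client) VPC CIDR only
aws ec2 authorize-security-group-ingress --group-id <sg-ec2> --protocol tcp --port 5432 --cidr c.c.c.c/18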
Working
When using the Reachability Analyzer in AccountID: host, the peering connection has access to the EC2 instance on the correct port.
If I launch my Lambda inside the same VPC as the EC2, it executes correctly.
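As a side note, an equivalent check of the same network path without involving Lambda is to launch a temporary instance in one of the Lambda subnets with sg-lambda attached and probe the port directly (e.e.e.e as above):
# From a test instance in subnet-private1 with sg-lambda attached
nc -zv -w 5 e.e.e.e 5432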
Help
Any help is much appreciated,

Related

Unable to connect to S3 using Interface Endpoint: Connection timeout on endpoint URL: "https://s3.amazonaws.com"

I'm trying to configure a VPC environment where the EC2 instance in the private subnet has access to S3, but I've been having trouble getting it to work. If anyone can offer some guidance, it would help a lot. Here's my VPC setup with 1 public and 1 private subnet:
VPC: CIDR 20.0.0.0/24 with DNS hostnames and DNS resolution enabled
Public Subnet: CIDR 20.0.0.0/28 with Route Table
20.0.0.0/24 local
0.0.0.0/0 igw-0e9c938ccf95c7365
Private Subnet: CIDR 20.0.0.32/28 with Route Table
20.0.0.0/24 local
Each subnet has one EC2 instance with a role assigned that grants full S3 access. My endpoint is created with the service "com.amazonaws.us-east-1.s3" in my private subnet, and the security group attached is the default VPC security group.
When I try calling aws s3api list-buckets, I get a connection timeout. It works if I attach a NAT gateway to the route table, so it seems like it's not able to connect to the endpoint. Is my configuration wrong?
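If the intent is the route-table-based Gateway variant of the S3 endpoint (rather than an Interface endpoint), the creation looks roughly like this; <vpc-id> and <private-rtb-id> are placeholders:
# Gateway-type S3 endpoint, associated with the private subnet's route table
aws ec2 create-vpc-endpoint --vpc-id <vpc-id> --vpc-endpoint-type Gateway --service-name com.amazonaws.us-east-1.s3 --route-table-ids <private-rtb-id>
If the endpoint is kept as an Interface endpoint instead, the security group attached to the endpoint has to allow inbound TCP 443 from the instance.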

Cross-account VPC peering connection to RDS

I have two AWS accounts (A and B). Each of them has a VPC with no overlapping CIDR blocks, and both are in the same region. I have successfully created a VPC peering connection between them (which is active). The requester and accepter both allow remote VPC DNS resolution.
I have specified, in each VPC's route tables, the other VPC's CIDR block as a destination with the peering connection as the target.
I have an EC2 instance running in a public subnet inside the VPCA of AccountA, attached to a security group SecurityGroupA. SecurityGroupA enables inbound from all sources in the default security group of VPCA, as well as inbound from AccountBId/SecurityGroupB, and all outbounds.
I have a RDS postgres instance running in the VPCB of AccountB, attached to a security group SecurityGroupB. SecurityGroupB enables inbound TCP on port 5432 (postgres default port) from AccountAId/SecurityGroupAId.
When running aws ec2 describe-security-group-references --group-id SecurityGroupAId, I get
{
    "SecurityGroupReferenceSet": [
        {
            "GroupId": "SecurityGroupAId",
            "ReferencingVpcId": "VPCBId",
            "VpcPeeringConnectionId": "pcx-XXXXXXXXXXXXXXXXX"
        }
    ]
}
This seems to indicate that the security group is correctly referenced. But when trying to connect from the EC2 instance to the RDS instance, I get a connection timed out error.
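For reference, the rule described on SecurityGroupB corresponds roughly to this CLI call (the group IDs and account ID are placeholders):
# In AccountB - allow Postgres from SecurityGroupA, which lives in AccountA
aws ec2 authorize-security-group-ingress --group-id <SecurityGroupBId> --ip-permissions 'IpProtocol=tcp,FromPort=5432,ToPort=5432,UserIdGroupPairs=[{GroupId=<SecurityGroupAId>,UserId=<AccountAId>}]'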

RDS public access lost when adding public subnet with internet gateway and private subnets with NAT

Any help would be much appreciated!
Initially we had 3 subnets in our AWS VPC. The VPC has an IGW and one default route table with 2 routes - 1 for internal and 0.0.0.0/0 to IGW. A standard initial VPC setup.
Within the VPC we have an RDS instance, with an RDS proxy, and the DB is set for public access while we develop the solution. The DB is associated with the default VPC SG along with a specific SG that whitelists IP addresses for DB connectivity via the public endpoint.
Also within the VPC we have a Lambda that is using the default VPC security group and the 3 subnets mentioned above.
The Lambda can connect to the RDS proxy, and we can connect to the RDS public endpoint via a whitelisted IP - This is as expected.
The Issue:
Now we need to provide the Lambda with internet access (it needs to connect with RedisLabs). To do this we've added:
A public subnet (subnet-00245f33edbae3358)
A NAT on the public subnet
Created a route table associated with the existing 3 private subnets (subnet-06d1124e, subnet-ba82bce1, subnet-3344b955) with a route of 0.0.0.0/0 -> NAT
Created a route table associated with the new public subnet (subnet-00245f33edbae3358) with a route of 0.0.0.0/0 -> IGW
With this in place, the Lambda can still access the DB via the RDS proxy (expected) and can now access the internet (expected), BUT we lose the connection to the DB via the public-facing endpoint.
Is there something missing in the configuration that will allow Lambda access to the RDS and internet AND will also allow us access to RDS via the public endpoint? OR do we need an SSH tunnel within the public subnet to do this?
Thanks in advance!
Additional Info:
The RDS currently has the following SG's:
- prod-auth-service-rds - allows TCP 3306 from my whitelisted IP
- sg-11cb746b (default) - All traffic, with a self-referencing source (sg-11cb746b)
The RDS is on subnets:
- subnet-06d1124e - existing private subnet
- subnet-ba82bce1 - existing private subnet
- subnet-3344b955 - existing private subnet
The NAT is on subnet subnet-00245f33edbae3358
EDIT: Re-reading your response: if your RDS DB is on private subnets, then it can't be publicly accessible, regardless of what you set for that option in the DB's settings.
——-
After looking at the additional info, I believe the problem is your security group for the RDS. It only allows traffic from things in your default security group or your personal whitelisted IP.
Even though the Lambda is in your default security group, RDS doesn't see the traffic as coming from your Lambda; it sees it as coming from the NAT Gateway, which doesn't have any security groups.
You can solve this by adding the EIP of your NAT Gateway as an additional whitelisted IP in the inbound rules of the RDS SG.
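A rough sketch of that change (the NAT gateway ID, security group ID and EIP below are placeholders):
# Look up the Elastic IP of the NAT Gateway
aws ec2 describe-nat-gateways --nat-gateway-ids <nat-gw-id> --query 'NatGateways[0].NatGatewayAddresses[0].PublicIp' --output text
# Allow MySQL from that EIP on the RDS security group
aws ec2 authorize-security-group-ingress --group-id <rds-sg-id> --protocol tcp --port 3306 --cidr <nat-eip>/32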
It turns out that all I needed to do was create the Lambda in private subnet(s) separate from the existing RDS subnets. The separate subnet(s) then need a route that forwards 0.0.0.0/0 to the NAT.
The Lambda now has outbound internet access and RDS access, while the RDS instance can still be reached via its existing public endpoint.
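Roughly, the moving parts of that fix in CLI form (the new subnet, route table and function name are placeholders; sg-11cb746b is the default SG mentioned above):
# Route table for the Lambda-only private subnet(s): default route to the NAT
aws ec2 create-route --route-table-id <lambda-rtb-id> --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <nat-gw-id>
aws ec2 associate-route-table --route-table-id <lambda-rtb-id> --subnet-id <lambda-subnet-id>
# Point the function at the new subnet(s), keeping its security group
aws lambda update-function-configuration --function-name <function-name> --vpc-config SubnetIds=<lambda-subnet-id>,SecurityGroupIds=sg-11cb746b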

Connect to RDS in a different region from EC2 instance

So I have a primary RDS in us-east-1 & a replica in us-west-1. Both are inside VPCs in their respective regions. I want to have one of my EC2 instances in us-east-1 connect to the replica instance.
A simple solution is to enable public access for the RDS replica and add the IP of the EC2 to its security group, and that works.
But instead of allowing a static IP, I would like to allow access from the entire CIDR range of my us-east-1 VPC, and I also don't want my instances to be publicly accessible.
To do this, I've set up a VPC peering connection between the two regions and added entries to the route tables of both VPCs to forward traffic for each other's CIDR ranges to the peering connection.
The CIDR range of the EC2 instance's VPC is 172.31.0.0/16 and I have added this to the security group of the RDS replica in the us-west-1 region. But for some reason the RDS is not reachable from my EC2.
Have I missed anything else? Thanks!
To summarize my setup:
US EAST:
VPC CIDR: 172.31.0.0/16
Route Table entry: Destination 10.0.0.0/16 routes to the peering connection of us-west-1 VPC.
EC2 IP: 172.31.5.234
US WEST:
VPC CIDR: 10.0.0.0/16
Route Table entry: Destination 172.31.0.0/16 routes to the peering connection of us-east-1 VPC.
RDS:
Public Accessible: Yes
Security Group: Allow connections from 172.31.0.0/16
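A quick way to narrow down where this fails, run from the EC2 instance (the endpoint hostname is a placeholder, and 3306 assumes MySQL; substitute your engine's port):
# Does the replica endpoint resolve to a 10.0.x.x address (the us-west-1 VPC) or to a public IP?
dig +short <replica-endpoint>.us-west-1.rds.amazonaws.com
# Is the port reachable at all? A hang here points at routing or security groups.
nc -zv -w 5 <replica-endpoint>.us-west-1.rds.amazonaws.com 3306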
To reproduce your situation, I did the following:
In us-east-1:
Created a VPC in us-east-1 with a CIDR of 172.31.0.0/16 using the "VPC with Public and Private Subnets" VPC Wizard
Launched an Amazon EC2 Linux instance in the public subnet
In us-west-1:
Created a VPC in us-west-1 with a CIDR of 10.0.0.0/16 using the "VPC with Public and Private Subnets" VPC Wizard
Added an additional private subnet to allow creation of an Amazon RDS Subnet Group that uses multiple AZs
Created an RDS Subnet Group across the two private subnets
Launched an Amazon RDS MySQL database in the private subnet with Publicly accessible = No
Setup peering:
In us-east-1, created a Peering Connection Request to the VPC in us-west-1
In us-west-1, accepted the Peering Request
Configure routing:
In us-east-1, configured the Public Route Table (used by the EC2 instance) to route 10.0.0.0/16 traffic to the peered VPC
In us-west-1, configured the Private Route Table (used by the RDS instance) to route 172.31.0.0/16 traffic to the peered VPC
Security Groups:
In us-east-1, created a security group (App-SG) that allows inbound port 22 connections from 0.0.0.0/0. Associated it to the EC2 instance.
In us-west-1, created a security group (RDS-SG) that allows inbound port 3306 connections from 172.31.0.0/16 (which is the other side of the peering connection). Associated it to the RDS instance.
Test:
Used ssh to connect to the EC2 instance in us-east-1
Installed mysql client (sudo yum install mysql)
Connected to mysql with:
mysql -u master -p -h xxx.yyy.us-west-1.rds.amazonaws.com
This successfully connected to the RDS database across the peering connection.
FYI, the DNS name of the database resolved to 10.0.2.40 (which is in the CIDR range of the us-west-1 VPC). This DNS resolution worked from both VPCs.
In summary, the important bits are:
Establish a 2-way peering connection
Configure the security group on the RDS instance to permit inbound connections from the CIDR of the peered VPC
No need to make the database publicly accessible
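The same setup expressed as CLI calls, roughly (all IDs are placeholders; note the --peer-region flag, since this is an inter-region peering):
# From us-east-1: request peering to the us-west-1 VPC
aws ec2 create-vpc-peering-connection --region us-east-1 --vpc-id <east-vpc-id> --peer-vpc-id <west-vpc-id> --peer-region us-west-1
# From us-west-1: accept it
aws ec2 accept-vpc-peering-connection --region us-west-1 --vpc-peering-connection-id <pcx-id>
# Routes on both sides
aws ec2 create-route --region us-east-1 --route-table-id <east-public-rtb> --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id <pcx-id>
aws ec2 create-route --region us-west-1 --route-table-id <west-private-rtb> --destination-cidr-block 172.31.0.0/16 --vpc-peering-connection-id <pcx-id>
# RDS security group: allow MySQL from the peered CIDR
aws ec2 authorize-security-group-ingress --region us-west-1 --group-id <RDS-SG-id> --protocol tcp --port 3306 --cidr 172.31.0.0/16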

vpc peering issues between bastion and app server ec2

I have two existing VPCs. One is shared services, and the other hosts the actual application servers. I have created a peering connection between the two VPCs and added routes in each VPC, but I still cannot SSH from the bastion to the app server from the shared services VPC.
Details:
shared services vpc cidr(172.31.0.0/16)
bastion server ip (172.31.5.84)
route added to main route table (10.2.0.0/16 -> vpc-peer-id)
app server vpc cidr (10.2.0.0/16)
ec2 subnet instance ip (10.2.60.4)
route added to main route table (172.17.0.0/16 -> vpc-peer-id)
added sg allow (22 tcp 172.31.0.0/16)
I also added the same route to the app server subnet's route table, but no change.
I am completely stumped atm as to how to set this up, or even how to work out where it is blocking. Any help would be appreciated.
To assist you, I did the following:
Started with an existing VPC-A with CIDR of 172.31.0.0/16
Created a new VPC-B with CIDR of 10.0.0.0/16
Created a subnet in VPC-B with CIDR of 10.0.0.0/24
Launched an Amazon Linux EC2 instance in the new subnet in VPC-B
Inbound Security Group: Allow SSH from 172.31.0.0/16
Created Peering connection:
Requester VPC: VPC-A
Accepter VPC: VPC-B
Accepted peering connection (Did you do this on yours?)
Configured Route Tables:
The public Route Table in VPC-A: Route 10.0.0.0/16 to VPC-B
The private Route Table in VPC-B: Route 172.31.0.0/16 to VPC-A
Opened an SSH connection to an existing instance in VPC-A
From that instance, opened an SSH connection to the private IP address of the new instance (10.0.0.121)
Result: Instantly got a Permission denied (publickey) error because I didn't supply the private key. Getting an instant error message proved network connectivity (as opposed to hanging, which normally indicates a lack of network connectivity).
I then supplied the correct private key and tried to SSH again.
Result: Connected!
The complete flow is:
My laptop -> Instance in public subnet of `VPC-A` -> Instance in `VPC-B`
This had to use the peering connection because VPC-B has no Internet Gateway and I connected via the private IP address of the instance.
So, I recommend that you double-check that you have done each of the above steps to find where your configuration might differ (accepting the peering connection, configuring the security group, etc).
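A few CLI checks that make those differences visible quickly (route table and security group IDs are placeholders):
# Is the peering connection actually active?
aws ec2 describe-vpc-peering-connections --query 'VpcPeeringConnections[].{Id:VpcPeeringConnectionId,Status:Status.Code}'
# Do the route tables used by the bastion subnet and the app subnet both carry the peer route?
aws ec2 describe-route-tables --route-table-ids <rtb-id> --query 'RouteTables[].Routes[].{Dest:DestinationCidrBlock,Target:VpcPeeringConnectionId}'
# Does the app server's security group allow SSH from the shared services CIDR?
aws ec2 describe-security-groups --group-ids <app-sg-id> --query 'SecurityGroups[].IpPermissions'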