I have two AWS accounts, which I will call prod and dev.
prod has an Aurora Serverless cluster (not instance!), perfectly connectable within its own VPC in the prod account. To save time and money, I would like to use this cluster in dev (obviously with read-only permissions, etc) instead of spinning up a new cluster, mostly due to volume of data and the need for parity during development.
I have set up dev exactly the same as prod, minus the Aurora cluster and with a different CIDR block for the VPC. Specifically, I have:
- Set up a VPC peering connection between the dev and prod VPCs, with DNS resolution enabled on both VPCs and on the peering connection itself.
- Added the peering connection to the route tables for each VPC, with the affected subnets explicitly associated.
- Added both the CIDR blocks and the SGs (including the VPC default SG, as recommended in a tangentially related SO answer) to all affected SGs: the Aurora cluster SG, the SG for the Lambda calling out to Aurora, and the default SG for the VPC in both accounts.
- Added the CIDR blocks explicitly to the network ACLs.
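For reference, this is roughly how I have been sanity-checking each of those pieces from the CLI (the IDs below are placeholders for my actual resources):

# Is the peering connection active, and is DNS resolution enabled on it?
aws ec2 describe-vpc-peering-connections --vpc-peering-connection-ids pcx-11111111

# Does each route table send the peer VPC's CIDR to the peering connection?
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-22222222

# Does the Aurora cluster's SG allow port 3306 from the dev VPC's CIDR?
aws ec2 describe-security-groups --group-ids sg-33333333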
I can't get any kind of connection whatsoever. Documentation on whether this is even possible leads me in a recursive loop of not-quite-useful information, though it does seem the intra-region peering should be possible. I am no network expert, so it's entirely possible I have just horribly misconfigured something along the way, but I can't find any guides or useful documentation for this particular situation.
Is this even possible? Is there a checklist somewhere of changes to be made to allow connections like this (outside of the severely lacking AWS docs on VPC peering)? Route tables, subnets, security groups, ACLs, VPC peering connections...it's a lot of pieces and I feel like at this point I have just missed something, despite having torn down/reconfigured this setup via several guides.
Related
So far I have used a single account for blue-green deployments, using an ALB and target groups.
My company decided we should use a multi-account setup for enhanced security (separate staging and prod).
Great, so I'm now migrating our setup, and I noticed I can't send traffic from an ALB (or NLB) to a target group that is not in the local VPC. Yes, I can set up VPC sharing, but that kind of defeats the point of resource separation. How do you guys deal with blue-green deployments with LBs in a multi-account setup?
For now I just set up VPC sharing but I'm not sure this is the way it should work.
In your case you can use a VPC peering connection - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/peer-with-vpc-in-another-account.html
Please note that the two VPCs must use non-overlapping IP address ranges; otherwise you will have problems setting up routing.
On the other hand, if the number of accounts in your company grows, you can look at Transit Gateway (TGW). With TGW you will not have to create a VPC peering connection for each new VPC in another account (also note that VPC peering is not transitive across multiple peering connections - see Multiple VPC peering connections).
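As a minimal sketch of the cross-account flow with the AWS CLI (all account and resource IDs below are placeholders):

# From the requester account: request peering with the VPC in the other account
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-aaaa1111 \
    --peer-vpc-id vpc-bbbb2222 \
    --peer-owner-id 222222222222

# From the accepter account: accept the request
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-cccc3333

# In both accounts: route the other VPC's CIDR through the peering connection
aws ec2 create-route \
    --route-table-id rtb-dddd4444 \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-cccc3333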
I've spent about a full day trying to solve this, with no luck so far. I'm also open to suggestions for alternatives to my current setup.
I have an RDS instance inside of a VPC. I am trying to make CodeBuild be able to access this RDS instance for a testing step.
Currently, I have set up a VPC endpoint for the CodeBuild service, with all 3 subnets of the VPC. I know that if I allow all inbound traffic in the security group on the RDS instance, it works. I don't want to allow all inbound traffic, though, and with anything narrower I have been unsuccessful.
I have tried the following to no avail:
Taking the private IPv4 addresses of the ENIs created by the VPC endpoint and adding them as inbound rules to the security group on the RDS instance.
Creating a separate VPC for CodeBuild and setting up VPC peering (this seemed overly complex, and I'm not sure the peering would even allow CodeBuild traffic to reach the RDS instance; it also complicates things down the road for CodeDeploy).
Putting CodeBuild inside the VPC of the RDS instance. When doing this, I created a new subnet in the VPC and pointed its route table at a NAT gateway (in the same VPC as the RDS instance), but CodeBuild kept telling me it had no internet access.
> setup a VPC endpoint for the CodeBuild service,
VPC endpoints are not used for inbound traffic from CodeBuild into your VPC. They are used by applications in your VPC to interact with the CodeBuild service without going over the internet.
> Putting CodeBuild inside the VPC of the RDS instance.
This is the correct way. Sadly, you haven't provided any details of your VPC, subnets, NAT, route tables, security groups, or NACL setup, so it's difficult to speculate why it does not work.
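As a rough sketch of the usual shape (the group IDs below are placeholders): CodeBuild attached to the VPC with its own security group, and the RDS security group allowing in that group only, instead of all inbound traffic:

# Allow MySQL from CodeBuild's security group only, rather than 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
    --group-id sg-rds11111 \
    --protocol tcp \
    --port 3306 \
    --source-group sg-build2222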
Thanks Marcin for pointing me in the right direction of running CodeBuild in the same VPC. When I was able to focus on that, I saw this post again:
CodeBuild cannot find the 0.0.0.0/0 destination for the target internet gateway
where I had the same issue: my NAT gateway was also in the private subnet. Now it's in the public subnet, and it's working.
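For anyone else who hits this, a minimal sketch of the routing that works (subnet, gateway, and route table IDs are placeholders):

# Public subnet's route table: default route to the internet gateway
aws ec2 create-route --route-table-id rtb-public111 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-11111111

# The NAT gateway itself lives in the public subnet
aws ec2 create-nat-gateway --subnet-id subnet-public111 --allocation-id eipalloc-1111

# Private subnet's route table (where CodeBuild runs): default route to the NAT
aws ec2 create-route --route-table-id rtb-private222 \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-22222222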
Background: I have a kubernetes cluster set up in one AWS account that needs to access data in an RDS MySQL instance in a different account and I can't seem to get the settings correct to allow traffic to flow.
What I've tried so far:
Set up a peering connection between the two VPCs. They are in the same region, us-east-1.
Created route table entries in each account to point traffic for the corresponding subnets at the peering connection.
Created a security group in the RDS VPC to allow traffic from the kubernetes subnets to access MySQL.
Made sure DNS resolution is enabled on both VPCs.
Kubernetes VPC details (Requester)
This contains 3 EC2 instances (it looks like each has its own subnet) that house my kubernetes cluster. I used EKS to set this up.
The route table rules I set up have the 3 subnets associated, and point the RDS VPC CIDR block at the peering connection.
RDS VPC details (Accepter)
This VPC contains the MySQL RDS instance, as well as some other resources. The RDS instance has quite a few VPC security groups assigned to it, for access from our office IPs etc. It has Public Accessibility set to true.
I repeated the route table setup (in reverse), pointing traffic for the K8s VPC subnets at the peering connection.
Testing
To test the connection, I've tried two different ways. The application that needs to access MySQL is written in Node, so I just wrote a test connector with an example query, and it times out.
I also tried netcat from a terminal in the pod running in the kubernetes cluster.
nc -v {{myclustername}}.us-east-1.rds.amazonaws.com 3306
This also times out. It does seem to be trying to hit the correct MySQL instance IP, though, so I'm not sure if that means my routing rules are working right from the k8s VPC side.
DNS fwd/rev mismatch: ec2-XXX.compute-1.amazonaws.com != ip-{{IP OF MY MYSQL}}.ec2.internal
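To rule out DNS, I also checked what the endpoint resolves to from inside the cluster; as I understand it, a private IP means the peering DNS resolution is doing its job, while a public IP means the traffic won't match the peering route at all:

# Does the RDS endpoint resolve to a private IP from the k8s VPC?
nslookup {{myclustername}}.us-east-1.rds.amazonaws.com

# Is port 3306 reachable at all (5 second timeout)?
nc -v -w 5 {{myclustername}}.us-east-1.rds.amazonaws.com 3306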
I'm not sure what steps to take next. Any direction would be greatly appreciated.
Side note: I've read through this: Kubernetes container connection to RDS instance in separate VPC
I think I understand what's going on there. My CIDR blocks do not conflict with the default K8s IPs (10.0...), so my problem seems to be different.
I know this was asked a long time ago, but I just ran into this problem as well.
It turns out I was editing the wrong AWS routing table! When I ran kops to create my cluster, it created a new VPC with its own routing table, but also another routing table! I needed to add the peering connection route to the cluster's routing table instead of the VPC's Main routing table.
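In case it helps, this is roughly how I found the right table and added the route (the filter values and IDs below are placeholders):

# Find the route table actually associated with the cluster's subnets
aws ec2 describe-route-tables \
    --filters Name=association.subnet-id,Values=subnet-k8s11111

# Add the peering route to THAT table, not the VPC's Main routing table
aws ec2 create-route --route-table-id rtb-k8s11111 \
    --destination-cidr-block 172.31.0.0/16 \
    --vpc-peering-connection-id pcx-11111111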
I'm trying to secure a pipeline for analyzing controlled-access genomic data with Amazon Elastic MapReduce (EMR), and it would help to know the minimal set of outbound rules required of the master and slave security groups of an EMR cluster. I'm sure it differs from region to region, and the IP ranges given at http://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html probably subsume them, but it would be great to know exactly which CIDR blocks we should worry about. It looks like EMR pokes just the right holes among the inbound rules for everything to work, but I've found the cluster gets stuck on provisioning if the outbound rules are anything other than "allow all traffic."
We had the identical problem, and we addressed it by doing the following.
From ip-ranges.json, use the EC2 CIDR blocks and the AMAZON service CIDR blocks. You may subtract the CLOUDFRONT and ROUTE53 blocks.
The reason is that you need to be able to talk to the EMR webservice endpoints, which are hosted outside your VPC. EMR uses a subset of EC2 instances to spin up the cluster.
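For example, something along these lines pulls the relevant blocks out of ip-ranges.json (a sketch, not our exact script; adjust the region to yours):

# EC2 ranges for the cluster's region
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
  | jq -r '.prefixes[] | select(.region=="us-east-1" and .service=="EC2") | .ip_prefix'

# AMAZON ranges, from which the CLOUDFRONT and ROUTE53 blocks can be subtracted
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
  | jq -r '.prefixes[] | select(.region=="us-east-1" and .service=="AMAZON") | .ip_prefix'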
If you have a support contract, ask Amazon to provide you with the CIDR block (we paid for a consulting engagement and this was one of the things they did).
Also, as the EMR webservice is on a public DNS endpoint (not 10.*), there should be a route to the internet gateway.
I have one VPC with an RDS instance in it. They are both located in the same region.
I want to use the RDS instance from another VPC, which is in another region, on another AWS account (we have multiple AWS accounts). If that's not complicated enough, the 2nd VPC comes up via CloudFormation (i.e. dynamically). Whenever I bring up a CloudFormation stack, I want to attach the RDS instance automatically.
I have looked at:
exposing the RDS instance on the public internet :(
putting the database instance behind an ELB with TCP transport
VPC peering, but the different regions and the approval workflow in the AWS console make little sense in a setup where we are using CloudFormation
All of these seem suboptimal to me, and I was wondering if somebody has already done this before. If yes, please share what you did and what the thought process behind it was.
Use a VPN tunnel from one VPC to the other. You could build your own or look at Vyatta. Ideally the two VPCs do not have overlapping CIDRs. Note that you cannot use VPC peering inter-region.
For anyone who stumbles around here, it looks like AWS VPC Peering can now be done cross region: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
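A minimal sketch of requesting a cross-region peering connection with the CLI (IDs are placeholders); the --peer-region flag is what makes it cross-region:

aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-aaaa1111 \
    --peer-vpc-id vpc-bbbb2222 \
    --peer-owner-id 222222222222 \
    --peer-region eu-west-1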