I am faced with a chicken-and-egg problem. I currently have a server in EC2-Classic, along with an RDS instance, also in EC2-Classic. The EC2 instances also interact with a Cassandra cluster, which likewise resides in EC2-Classic.
However, I need to move RDS into a VPC. In an ideal world, I'd have all of my stuff in the VPC at this point, but that presents a major migration challenge, and I'd like to minimize the impact on users and keep the steps to a minimum -- mainly because of the Cassandra cluster.
It turns out that I cannot create security group rules between VPC and non-VPC security groups.
So, how can I have RDS in a VPC that my EC2 instances can access without opening up my RDS instance to the entire world?
Any help is greatly appreciated.
UPDATE: One idea I had is to assign Elastic IPs to my EC2 instances and add those IPs explicitly to the security group for RDS within the VPC. Would that work? (Trying it now using https://github.com/skymill/aws-ec2-assign-elastic-ip)
Yes, unfortunately that's the only way to do it. You cannot use DNS in security groups, so you're stuck with IP addresses.
So, I ended up solving it exactly as described -- assigning Elastic IPs to my EC2 instances and adding the IPs explicitly to the security group for RDS within the VPC. It ended up working great.
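For reference, once the Elastic IPs are in place (e.g. via the linked tool), adding them to the VPC security group can be scripted. A minimal boto3 sketch, where the security group ID, the addresses, the region, and the port are hypothetical placeholders:

```python
import boto3

# Hypothetical values -- substitute your own security group ID and the
# Elastic IPs assigned to the EC2-Classic instances.
RDS_SG_ID = "sg-0123456789abcdef0"   # VPC security group attached to the RDS instance
INSTANCE_EIPS = ["203.0.113.10", "203.0.113.11"]

ec2 = boto3.client("ec2", region_name="us-east-1")

for eip in INSTANCE_EIPS:
    ec2.authorize_security_group_ingress(
        GroupId=RDS_SG_ID,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,                       # MySQL; adjust for your engine
            "ToPort": 3306,
            "IpRanges": [{"CidrIp": f"{eip}/32"}],  # one host, not the world
        }],
    )
```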
Is there an alternative to AWS's security groups in the Google Cloud Platform?
Here is the situation I have:
A basic Node.js server running in Cloud Run as a Docker image.
A PostgreSQL database on GCP.
A Redis instance on GCP.
What I want is something like a 'security group', so that my PostgreSQL DB and Redis instance can only be accessed from my Node.js server and nowhere else. I don't want them to be publicly accessible via an IP.
In AWS, only services that are part of the same security group can access each other.
I'm not completely sure, but I guess in GCP I need to make use of firewall rules.
If I'm correct, could someone please guide me as to how to go about this? And if I'm wrong, could someone suggest the correct method?
GCP has VPC firewall rules that work similarly to AWS security groups; more details are in the VPC firewall rules documentation. You can place your PostgreSQL database, Redis instance, and Node.js server inside a GCP VPC.
Make the Node.js server available to the public via DNS.
Set the default-allow-internal rule so that only services inside the VPC can access each other (blocking public access to the DB and Redis).
As an alternative approach, you could keep all three servers public and only allow the Node.js server's IP address to access the DB and Redis servers, but the solution above is recommended.
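If you end up on a custom VPC network (which does not ship with a default-allow-internal rule), you can create an equivalent internal-only rule yourself. A minimal sketch using the Google API client for Python, where the project name, network name, rule name, and source range are all hypothetical:

```python
from googleapiclient import discovery

# Hypothetical names -- replace with your own project and VPC network.
PROJECT = "my-project"
NETWORK = "my-vpc"

compute = discovery.build("compute", "v1")  # uses Application Default Credentials

# Allow PostgreSQL (5432) and Redis (6379) only from inside the VPC's own ranges.
firewall_body = {
    "name": "allow-internal-db",
    "network": f"global/networks/{NETWORK}",
    "sourceRanges": ["10.128.0.0/9"],  # internal range; narrow to your subnets
    "allowed": [{"IPProtocol": "tcp", "ports": ["5432", "6379"]}],
}

compute.firewalls().insert(project=PROJECT, body=firewall_body).execute()
```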
Security groups in AWS are instance-attached, firewall-like components. For example, you can have an SG at the instance level, similar to configuring iptables on a regular Linux host.
Google firewall rules, on the other hand, operate at the network level. To match the instance-level granularity of security groups, your alternatives on the instance itself are one of the following:
firewalld
nftables
iptables
The closest AWS analogue at the subnet level is the network ACL, which attaches to subnets (security groups themselves attach to instances, not subnets). NACLs still provide a bit more granularity, since you can have a different NACL per subnet, while in GCP firewall rules apply per network. At that level, protection should come from the network's firewall rules.
Thanks @amsh for the solution to the problem. But a few more things were required, so I'll list them here in case anyone needs them in the future:
Create a VPC network and add a subnet for a particular region (e.g. us-central1).
Create a VPC connector from the Serverless VPC Access section for the created VPC network in the same region.
In Cloud Run add the created VPC connector in the Connection section.
Create the PostgreSQL and Redis instances in the same region as the created VPC network.
In the Private IP section of these instances, select the created VPC network. This will create a Private IP for the respective instances in the region of the created VPC network.
Use this Private IP in the Node.js server to connect to the instances, and you'll be good to go.
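For the last step, here is a minimal sketch of what connecting over the private IPs looks like from inside the VPC, shown in Python with psycopg2 and redis-py as stand-ins for your Node.js client libraries (all IPs and credentials below are hypothetical):

```python
import psycopg2
import redis

# Hypothetical private IPs allocated on the VPC network (previous step).
PG_PRIVATE_IP = "10.8.0.3"
REDIS_PRIVATE_IP = "10.8.0.4"

# Traffic flows over the Serverless VPC Access connector, so only workloads
# attached to the connector (your Cloud Run service) can reach these hosts.
pg_conn = psycopg2.connect(
    host=PG_PRIVATE_IP,
    port=5432,
    dbname="appdb",     # hypothetical database name
    user="appuser",     # hypothetical credentials
    password="secret",
)

r = redis.Redis(host=REDIS_PRIVATE_IP, port=6379)
r.ping()  # sanity check: returns True when reachable from inside the VPC
```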
Common Problems you might face:
Error while creating the VPC Connector: Ensure the IP range of the VPC connector and the VPC network do not overlap.
Different regions: Ensure all instances are in the same region as the VPC network, or they won't connect via the Private IP.
Avoid changing the firewall rules: leave the default rules in place unless you have a specific reason to change their behavior.
Instances in different regions: If the instances are spread across different regions, use VPC network peering to establish a connection between them.
I have a t2.micro instance running that produces some data which needs to be written to a database, so I created an RDS database with MySQL on it.
The issue I'm facing is, unsurprisingly, getting the EC2 instance to communicate with the RDS database in any way, shape, or form.
I've been battling with it all day, and I'm left with these points of confusion:
I figured I'd just add the public IP of the EC2 instance to the security group of the RDS instance. It turns out RDS doesn't really seem to have a security group, only a VPC. So how do I allow communication from the EC2 instance?
Speaking of security groups, do I need to set the EC2 instance up to allow outbound connections?
The RDS instance has an 'endpoint' rather than a public IP, as far as I can tell, so I can't add it to any security group at all. Is this correct?
Am I going to have to figure out how to use Elastic Beanstalk or some other way to get these components to play together?
These are all the things I'm trying to troubleshoot, but I'm not getting anywhere. There don't seem to be any good blog posts on this; mostly what I'm finding is how to access RDS from your local hardware, not from an EC2 instance.
How should I set this up?
There are two ways to allow inbound connections to an RDS database: by CIDR/IP or by EC2 security group.
Go to the VPC console; in the left panel there is "Security Groups" (yes, RDS does have security groups). Click that, then choose your DB security group (if you have already created the RDS instance) or create a new one.
Under connection type, choose either CIDR/IP or EC2 security group.
If you choose to go with CIDR/IP, you should know your EC2 instance's IP address and enter that address or a range, e.g. "10.11.12.0/24".
If you choose to go with an EC2 security group, you should know the security group name of your EC2 instance and select it from the dropdown provided, e.g. "my security group". A sketch of both options follows below.
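Both options can also be applied programmatically. A minimal boto3 sketch, where both security group IDs and the CIDR range are hypothetical placeholders:

```python
import boto3

# Hypothetical IDs -- replace with your own.
RDS_SG_ID = "sg-0123456789abcdef0"  # security group attached to the RDS instance
EC2_SG_ID = "sg-0fedcba9876543210"  # security group attached to the EC2 instance

ec2 = boto3.client("ec2")

# Option 1: allow a specific CIDR/IP range.
ec2.authorize_security_group_ingress(
    GroupId=RDS_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "IpRanges": [{"CidrIp": "10.11.12.0/24"}],
    }],
)

# Option 2: allow any instance carrying the EC2 security group.
ec2.authorize_security_group_ingress(
    GroupId=RDS_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": EC2_SG_ID}],
    }],
)
```

Option 2 is generally preferable inside a VPC, since it keeps working when instance IPs change.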
Please note that the EC2 instance and the RDS instance need to be able to "see" each other, i.e. be in the same region and VPC, in subnets with proper NACLs (network access control lists), etc.
Speaking of outbound connections: security groups are stateful, so replies to allowed inbound traffic are permitted automatically, and by default a security group allows all outbound traffic anyway, so there is nothing to set up on the EC2 side.
Hope that helps, let me know if I can make my answer clearer.
Been on EC2 Classic for years and we're getting squeezed off. I'm having trouble planning the migration for the following reasons:
EC2-Classic security groups can't reference VPC security groups
routing between the two only seems possible over the public internet
I need to migrate a master/slave DB and a Redis cluster into the VPC, but I can't see a clear path around the two bullet points above. Short of taking the site offline and importing all the data via dumps, I'm unsure how to proceed.
Any advice would be appreciated.
You cannot migrate anything "live" from EC2-Classic to a VPC. You need to take snapshots, create AMIs, etc., and then re-launch the whole thing from scratch inside the VPC. There is no other way out.
As for security groups (SGs): VPC SGs and EC2-Classic SGs do not mix, so you will have to create separate SGs inside the VPC.
You need to figure out which things you want to host in a public subnet of the VPC and which in a private subnet. Only things inside a public subnet can be accessed from the internet.
For example, you can have your web server in the public subnet and the back-end application server in the private subnet.
To make a long story short, you are eventually going to launch everything fresh in the VPC, using EC2 AMIs, snapshots, etc., so that what you launch in the VPC carries your data.
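As a rough illustration of the AMI route, here is a minimal boto3 sketch; all IDs and the instance type are hypothetical placeholders:

```python
import boto3

# Hypothetical IDs -- replace with your own.
CLASSIC_INSTANCE_ID = "i-0123456789abcdef0"   # existing EC2-Classic instance
VPC_SUBNET_ID = "subnet-0123456789abcdef0"
VPC_SG_ID = "sg-0123456789abcdef0"            # the new SG created inside the VPC

ec2 = boto3.client("ec2")

# Bake the Classic instance into an AMI (this reboots the instance by default,
# in exchange for filesystem consistency).
image = ec2.create_image(InstanceId=CLASSIC_INSTANCE_ID, Name="classic-to-vpc-migration")

# Wait until the AMI is ready, then launch a copy inside the VPC subnet.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t2.micro",      # pick whatever matches your workload
    SubnetId=VPC_SUBNET_ID,
    SecurityGroupIds=[VPC_SG_ID],
    MinCount=1,
    MaxCount=1,
)
```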
We have a couple of RDS instances that are not under a VPC, and we need to bring them into one. Please let me know the steps and the expected downtime. Also let me know whether any changes are needed on the web servers so that everything keeps working once RDS is inside the VPC.
You must have a VPC created beforehand that has subnets in at least two different Availability Zones.
After this, create an RDS "subnet group" and add the two existing subnets to it.
Next, take a snapshot of your RDS instance and launch a new RDS instance from that snapshot inside the VPC.
That should be it.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html#USER_VPC.Non-VPC2VPC - official documentation from Amazon.
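A minimal boto3 sketch of the snapshot-and-restore route, where the subnet IDs and all identifiers are hypothetical placeholders:

```python
import boto3

# Hypothetical subnet IDs in two different Availability Zones.
SUBNET_IDS = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]

rds = boto3.client("rds")

# Subnet group spanning at least two Availability Zones.
rds.create_db_subnet_group(
    DBSubnetGroupName="vpc-db-subnets",
    DBSubnetGroupDescription="Subnets for the migrated RDS instance",
    SubnetIds=SUBNET_IDS,
)

# Snapshot the existing (non-VPC) instance...
rds.create_db_snapshot(
    DBInstanceIdentifier="legacy-db",
    DBSnapshotIdentifier="legacy-db-final",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-db-final")

# ...and restore it into the VPC via the subnet group.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="vpc-db",
    DBSnapshotIdentifier="legacy-db-final",
    DBSubnetGroupName="vpc-db-subnets",
)
```

Note that the restored instance gets a new endpoint, so the web servers' connection strings need updating; the main downtime window is the snapshot plus the restore.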
Depending on your configuration, if the RDS instance needs to be accessible from the internet, you will have to check the "Publicly Accessible" option in the creation wizard and (in addition to the subnets mentioned in other answers) ensure that the security group has the DB port properly opened (possibly from 0.0.0.0/0).
When a new security group is added, or an existing one is modified, the effects are not visible. For instance, I have a security group called "mdi-sg-redshift" with two rules:
As you can see, these rules allow inbound traffic from anyone across the globe. When applied to the cluster, they should allow inbound connections on those ports. It does NOT work!
Here is a screenshot of my Redshift cluster:
Here is a screenshot of the port scanner.
The cluster was rebooted several times to no effect.
Also note that the cluster is in the same region as the VPC and the security group, and it belongs to the VPC that has the security group applied.
I have seen similar issues on the EC2 side, but reboots usually fixed them. Not this time.
Anyone with insights? Thanks!
This sounds mostly like a VPC rules issue.
Things I would check:
Do you get the same issue if you create your cluster outside of VPC?
Check the cluster subnet group. It says "default" in your screenshot. Which subnets are added to this default subnet group? Make sure your cluster is running in a subnet that is added to the default subnet group.
Check the VPC security group policy for the Redshift cluster.
Did this setup ever work in the past, or is this the first time you are working with this cluster? If it worked before, what has changed with respect to the VPC, cluster subnet group, or VPC security groups?
Where are you accessing Redshift from?
If you are trying to access Redshift from outside the VPC, check the route table for an internet gateway entry (to verify that the Redshift cluster is publicly reachable over the internet).
If you are trying to access Redshift from within the VPC, then something else might be blocking access.
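A quick way to check the public-accessibility side of this from code is a boto3 sketch like the following; the cluster identifier is hypothetical:

```python
import boto3

CLUSTER_ID = "my-redshift-cluster"  # hypothetical identifier

redshift = boto3.client("redshift")
ec2 = boto3.client("ec2")

cluster = redshift.describe_clusters(ClusterIdentifier=CLUSTER_ID)["Clusters"][0]
print("Publicly accessible:", cluster["PubliclyAccessible"])
print("VPC:", cluster["VpcId"])
print("Security groups:", [g["VpcSecurityGroupId"] for g in cluster["VpcSecurityGroups"]])

# Look for an internet gateway route (e.g. 0.0.0.0/0 -> igw-...) in the VPC's route tables.
tables = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": [cluster["VpcId"]]}]
)["RouteTables"]
for table in tables:
    for route in table["Routes"]:
        if route.get("GatewayId", "").startswith("igw-"):
            print(table["RouteTableId"], "routes", route.get("DestinationCidrBlock"),
                  "via", route["GatewayId"])
```

If "Publicly accessible" is false or no internet gateway route shows up, the security group rules never come into play for outside traffic, which would explain why the port scanner sees nothing.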