Is there an alternative to AWS's security groups in the Google Cloud Platform?
Here is the situation I have:
A basic Node.js server running on Cloud Run as a Docker image.
A PostgreSQL database on GCP.
A Redis instance on GCP.
What I want to do is create a 'security group' of sorts so that my PostgreSQL DB and Redis instance can only be accessed from my Node.js server and nowhere else. I don't want them to be publicly accessible via an IP.
What we do in AWS is that only services that are part of the same security group can access each other.
I'm not sure at all, but I guess in GCP I need to make use of firewall rules.
If I'm correct could someone please guide me as to how to go about this? And if I'm wrong could someone suggest the correct method?
GCP has firewall rules for its VPC that work similarly to AWS Security Groups. More details can be found in the VPC firewall rules documentation. You can place your PostgreSQL database, Redis instance and Node.js server inside a GCP VPC.
Make the Node.js server available to the public via DNS.
Set the default-allow-internal rule, so that only the services present in the VPC can access each other (halting public access to the DB and Redis); a sketch of such a rule is shown below.
As an alternative approach, you may also keep all three servers public and only allow the Node.js server's IP address to access the DB and Redis servers, but the above solution is recommended.
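A minimal sketch of that rule on a custom VPC, with a placeholder network name and internal CIDR (on the default network an equivalent default-allow-internal rule already exists, and ingress from anywhere else is blocked by the implied deny rule):
# Allow traffic between resources inside the VPC's internal range (placeholder network and CIDR)
gcloud compute firewall-rules create allow-internal --network=my-vpc --allow=tcp,udp,icmp --source-ranges=10.0.0.0/16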
Security groups inside AWS are instance-attached firewall-like components. So, for example, you can have a security group at the instance level, similar to configuring iptables on regular Linux.
On the other hand, Google firewall rules work more at a network level. To get that same instance-level granularity, your alternatives are to use one of the following host-based firewalls (an iptables sketch follows below):
firewalld
nftables
iptables
The thing is that in AWS you also get subnet-level protection, but that comes from network ACLs rather than security groups (security groups attach to instances and ENIs, not subnets). At that level, network ACLs are the closer analogue to Google firewall rules, although GCP firewall rules are defined per VPC network and can be narrowed down to specific instances with target tags or service accounts. At this level, protection should come from the firewalls in front of the subnets.
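As an illustration only, a host-level iptables sketch that limits PostgreSQL to a single app server; the IP address and port are placeholders:
# Accept PostgreSQL connections only from the app server (placeholder IP)
iptables -A INPUT -p tcp -s 10.0.0.5 --dport 5432 -j ACCEPT
# Drop PostgreSQL connections from everyone else
iptables -A INPUT -p tcp --dport 5432 -j DROP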
Thanks @amsh for the solution to the problem. There were a few more things that needed to be done, so I'll list them here in case anyone needs them in the future:
Create a VPC network and add a subnet for a particular region (Eg: us-central1).
Create a VPC connector from the Serverless VPC Access section for the created VPC network in the same region.
In Cloud Run add the created VPC connector in the Connection section.
Create the PostgreSQL and Redis instances in the same region as the created VPC network.
In the Private IP section of these instances, select the created VPC network. This will create a Private IP for the respective instances in the region of the created VPC network.
Use this Private IP in the Node.js server to connect to the instance and it'll be good to go (a gcloud sketch of these steps is shown after this list).
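Here is a rough gcloud sketch of the first three steps; all names, regions and IP ranges are placeholders, so adjust them for your project:
# 1. Custom VPC network and a regional subnet
gcloud compute networks create my-vpc --subnet-mode=custom
gcloud compute networks subnets create my-subnet --network=my-vpc --region=us-central1 --range=10.0.0.0/24
# 2. Serverless VPC Access connector (its range must not overlap the subnet's range)
gcloud compute networks vpc-access connectors create my-connector --network=my-vpc --region=us-central1 --range=10.8.0.0/28
# 3. Attach the connector to the Cloud Run service
gcloud run services update my-service --region=us-central1 --vpc-connector=my-connector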
Common Problems you might face:
Error while creating the VPC Connector: Ensure the IP range of the VPC connector and the VPC network do not overlap.
Different regions: Ensure all instances are in the same region of the VPC network, else they won't connect via the Private IP.
Avoid changing the firewall rules: leave the default rules in place unless you specifically need different behaviour.
Instances in different regions: If the instances are spread across different regions, use VPC network peering to establish a connection between them (a sketch is shown below).
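If you do end up with instances in two separate VPC networks, a peering sketch with placeholder network names (the peering has to be created from both sides):
gcloud compute networks peerings create peer-a-to-b --network=vpc-a --peer-network=vpc-b
gcloud compute networks peerings create peer-b-to-a --network=vpc-b --peer-network=vpc-a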
Related
I've followed https://docs.gitlab.com/runner/configuration/runner_autoscale_aws_fargate/ to create a custom runner which has a public IP attached and sits in a VPC alongside "private" resources. The runner is used to apply migrations using GitLab CI/CD.
ALLOW 22 from 0.0.0.0/0 has been applied within the security group, but it's wide open to attacks. What IP range do I need to add to only allow GitLab CI/CD runners access via SSH? I've removed that rule for the moment, so we're getting connection errors, but the IPs connecting on port 22 all come from AWS (assuming GitLab runners are also on AWS).
Is there something I'm missing or not understanding?
I had a look at the tutorial. You should only allow the EC2 instance to SSH into the Fargate tasks.
One way to do that is to define the EC2 instance's security group as the source in the Fargate task's security group, instead of using an IP address (or CIDR block). You don't have to explicitly mention any IP ranges. This is my preferred approach.
When you specify a security group as the source for a rule, traffic is allowed from the network interfaces that are associated with the source security group for the specified protocol and port. Incoming traffic is allowed based on the private IP addresses of the network interfaces that are associated with the source security group (and not the public IP or Elastic IP addresses).
The second approach is, as @samtoddler mentioned, to allow the entire VPC network, or you can restrict it to a subnet.
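A sketch of the first approach with the AWS CLI; sg-FARGATE and sg-RUNNER are placeholders for the Fargate tasks' and the EC2 runner's security group IDs:
# Allow SSH into the Fargate tasks' security group only from the runner's security group
aws ec2 authorize-security-group-ingress --group-id sg-FARGATE --protocol tcp --port 22 --source-group sg-RUNNER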
I had misunderstood; the gitlab-runner talks to GitLab, not the other way round. My understanding was that GitLab talks to the runners over SSH.
My immediate solution was 2 things:
Move the EC2 instance into a private subnet
As per @Aruk K's answer, only allow EC2 to communicate over SSH to the ECS Fargate tasks
This answered my question as well https://forum.gitlab.com/t/gitlab-runner-on-private-ip/19673
If one has a publicly accessible RDS database on AWS and wants to instead use a bastion EC2 instance to access it and perform database functions (anyone on the internet should still be able to use the app and perform database functions through the features the app provides), how should one go about making this shift? I have tried searching the internet, but I often get loads of information with terminology that isn't easy to digest. Any assistance would be greatly appreciated.
Again, I want the general public to be able to use and access the app's provided db functions, but not have them be able to access the database directly.
A typical 3-tier architecture is:
A Load Balancer across public subnets, which sends traffic to...
Multiple Amazon EC2 instances in private subnets, preferably provisioned through Amazon EC2 Auto Scaling, which can scale based on demand and can also replace failed instances, which are all talking to...
A Database in a private subnet, preferably in Multi-AZ mode, which means that a failure in the database or in an Availability Zone will not lose any data
However, your application may not require this much infrastructure. For low-usage applications, you could just use:
An Amazon EC2 instance as your application server running in a public subnet
An Amazon RDS database in a private subnet, with a security group configured to permit access from the Amazon EC2 instance
Users would connect to your application server. The application server would connect to the database. Users would have no direct access to the database.
However, YOU might require access to the database for administration and testing purposes. Since the database is in a private subnet, it is not reachable from the Internet. To provide you with access, you could launch another Amazon EC2 instance in a public subnet, with a security group configured to permit you to access the instance. This instance "sticks out" on the Internet, and is thus called a Bastion server (named after the part of a castle wall that sticks out to allow archers to fire on invaders climbing the castle wall).
You can use port forwarding to connect to the Bastion server and then through to the database. For example:
ssh -i key.pem ec2-user@BASTION-IP -L 3306:DATABASE-DNS-NAME:3306
This configures the SSH connection to forward localhost:3306 to port 3306 on the named database server. This allows your local machine to talk to the database via the Bastion server.
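With the tunnel open, a MySQL client on your local machine can then connect through the forwarded port, for example (placeholder username):
mysql -h 127.0.0.1 -P 3306 -u admin -p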
You will need to create private subnets for this and update the DB subnet group accordingly with private subnets only. Moreover, in the DB security group, add the bastion and app instances' security groups as the source for the DB port.
For example, if you're using the MySQL engine, allow 3306 from the target instances' security group IDs.
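A rough AWS CLI sketch of those two steps, with placeholder names and IDs:
# Point the DB subnet group at private subnets only
aws rds modify-db-subnet-group --db-subnet-group-name my-db-subnets --subnet-ids subnet-PRIVATE-A subnet-PRIVATE-B
# Allow the DB port from the app server's and bastion's security groups
aws ec2 authorize-security-group-ingress --group-id sg-DATABASE --protocol tcp --port 3306 --source-group sg-APP
aws ec2 authorize-security-group-ingress --group-id sg-DATABASE --protocol tcp --port 3306 --source-group sg-BASTION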
I have production stacks inside a Production account and development stacks inside a Development account. The stacks are identical and are set up as follows:
Each stack has its own VPC.
Within the VPC are two public subnets spanning two AZs and two private subnets spanning two AZs.
The private Subnets contain the RDS instance.
The public Subnets contain a Bastion EC2 instance which can access the RDS instance.
To access the RDS instance, I either have to SSH into the Bastion machine and access it from there, or I create an SSH tunnel via the Bastion to access it through a Database client application such as PGAdmin.
Current DMS setup:
I would like to be able to use DMS (Database Migration Service) to replicate an RDS instance from Production into Development. So far I am trying the following but cannot get it to work:
Create a VPC peering connection between Development VPC and Production VPC
Create a replication instance in the private subnet of the Development VPC
Update the private subnet route tables in the development VPC to route traffic to the CIDR of the production VPC through the VPC peering connection
Ensure the Security group for the replication instance can access both RDS instances.
Main Problem:
When creating the source endpoint in DMS, the wizard only shows RDS instances from the same account and the same region, and only allows RDS instances to be configured using server names and ports, however, the RDS instances in my stacks can only be accessed via Bastion machines using tunnelling. Therefore the test endpoint connection always fails.
Any ideas of how to achieve this cross account replication?
Any good step by step blogs that detail how to do this? I have found a few but they don't seem to have RDS instances sitting behind bastion machines and so they all assume the endpoint configuration wizard can be populated using server names and ports.
Many thanks.
Securing the RDS instances via the Bastion host is sound security practice, of course, for developer/operational access.
For the DMS migration service, however, you should expect to open the security groups for both the target and source RDS database instances so that the replication instance can access both.
From Network Security for AWS Database Migration Service:
The replication instance must have access to the source and target endpoints. The security group for the replication instance must have network ACLs or rules that allow egress from the instance out on the database port to the database endpoints.
Database endpoints must include network ACLs and security group rules that allow incoming access from the replication instance. You can achieve this using the replication instance's security group, the private IP address, the public IP address, or the NAT gateway’s public address, depending on your configuration.
See
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.Network.html
For network addressing and to open the RDS private subnet, you'll need a NAT on both source and target. They can be added easily, and then terminated after the migration.
You can now use Network Address Translation (NAT) Gateway, a highly available AWS managed service that makes it easy to connect to the Internet from instances within a private subnet in an AWS Virtual Private Cloud (VPC).
See
https://aws.amazon.com/about-aws/whats-new/2015/12/introducing-amazon-vpc-nat-gateway-a-managed-nat-service/
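A sketch of adding a NAT gateway with the AWS CLI; the subnet, route table, Elastic IP allocation and NAT gateway IDs are all placeholders:
# Allocate an Elastic IP, create the NAT gateway in a public subnet, then route the private subnet through it
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-EXAMPLE
aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE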
I just finished setting up an RDS instance but I couldn't connect to it using sql workbench.
What I did was set 'Publicly Accessible' to 'Yes' and I was able to connect to it.
I'm new to this and would appreciate some guidance with regards to the actions that I did.
Are there any risk of having it available to public? currently, I only have my IP as the only allowed traffic.
Allowing your RDS instance to be publicly accessible by only your own IP is fine.
If you only need part-time access to your server, I would add and remove my IP address from the security group as needed. I use hand-written batch scripts to do this; a rough sketch of such a script is below.
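A rough shell equivalent of such a script, assuming a placeholder security group ID and PostgreSQL's port 5432; adjust the port for your engine:
# Open the DB port to the current public IP, then revoke it again when finished
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress --group-id sg-RDS --protocol tcp --port 5432 --cidr "${MY_IP}/32"
# ... do your work, then ...
aws ec2 revoke-security-group-ingress --group-id sg-RDS --protocol tcp --port 5432 --cidr "${MY_IP}/32"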
If your RDS instance does not need real public access except for SQL Workbench, then I would set up OpenVPN, keep RDS private, and do all my work over a VPN. OpenVPN makes all of this very easy and automatically routes my traffic destined for my VPC over the VPN using private IP addresses.
With this week's announcement of inter-region VPC peering, there is even less need for making non-web servers public.
Announcing Support for Inter-Region VPC Peering
I have created a free-tier PostgreSQL RDS instance and everything appears to looks good on the portal. However, I am unable to get to the instance.
Going through the troubleshooting steps, they mention it could be a firewall issue on my end. However, a quick ping from an external site reveals the same timeout issue.
Is there a step that I've missed?
Ping is typically disabled in AWS Security Groups. It is not recommended as a method of checking network connections.
The best method would be to use an SQL client to connect to the database via JDBC or ODBC.
Things to check:
Your RDS instance was launched as Publicly Accessible
Your RDS instance was launched in a Public Subnet (Definition: The subnet's Route Table points to an Internet Gateway)
The Security Group permits connections on the database port (this is also where you could permit PING access, but no guarantee that it would work with an RDS instance)
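For a quick check from your machine, assuming a PostgreSQL instance on port 5432 and a placeholder endpoint name:
# TCP-level reachability test; it only succeeds if the instance is public and the security group allows your IP
nc -vz mydb.abc123.us-east-1.rds.amazonaws.com 5432
# Or connect directly with psql (it will prompt for the password)
psql "host=mydb.abc123.us-east-1.rds.amazonaws.com port=5432 user=postgres dbname=postgres"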
Check the associated security groups that you have attached. Security groups hold the firewall rules. You may have to tweak the group that you have selected, or try changing/modifying the group that you (or your profile) have access to.