Several firewalls when creating a Google Cloud Platform instance. Which to use? - google-cloud-platform

I created a Google Cloud Platform Ubuntu 16.04 instance. It seems GCP has several places where traffic can be filtered:
The Instances section of the GCP console lets me allow or disallow
HTTP and HTTPS traffic.
In the Networking section I can create additional firewall rules which limit access to the network.
Finally, in the Ubuntu instance itself I can configure UFW to block/allow certain ports.
Should I configure all of these? Would it be better to just configure one and allow all in the others?
As a note, this instance will serve a website, so I would only allow HTTP/HTTPS traffic.

The complete answer is that it depends.
For the first option, the only thing that happens is that the default-allow-http rule gets applied to that instance.
The Networking section is where you define your own rules to be applied to instances. This becomes easier to maintain once you have multiple instances and load balancers in Google Cloud: you can apply a single rule to several machines, and you can compose rules.
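For illustration, a minimal sketch of that tag-based workflow using the gcloud CLI (the rule name, tag, instance name, and zone are all placeholders):

    # Create a rule that allows HTTP/HTTPS only to instances tagged "web"
    gcloud compute firewall-rules create allow-web \
        --network default \
        --allow tcp:80,tcp:443 \
        --target-tags web

    # Attach the tag to a machine so the rule applies to it
    gcloud compute instances add-tags my-instance --tags web --zone us-central1-a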
Finally, I would use ufw/iptables only as a last-resort, per-machine config. For example, if I have some machines behind a load balancer and one of them is doing something weird, I would SSH into it, block port 80, and investigate.
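A minimal sketch of that last-resort move, run on the instance itself (this assumes UFW is already enabled there):

    # Temporarily refuse web traffic while investigating
    sudo ufw deny 80/tcp
    sudo ufw status verbose

    # Put the machine back in rotation when done
    sudo ufw delete deny 80/tcp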

Related

Is it recommended to install and configure UFW for a new Django project on Google Cloud Platform?

I'm deploying a new project on Google Cloud Platform using the Bitnami-certified Django stack, which comes with Debian 9, Apache, MySQL, and Python pre-installed. My end goal is to build a web application, but nothing is close to production yet and I'm still running on an ephemeral external IP address assigned to the VM instance. So my question is: is it recommended to install UFW (Uncomplicated Firewall)?
There's no need to use a separate firewall because your instance is already protected by the GCP firewall:
GCP firewall blocks all incoming traffic to the instances by default unless explicitly allowed by a firewall rule;
Rules allow incoming traffic from an IP range, a list of protocols (ICMP, TCP and UDP) and a list of ports, and they can be restricted to some instances by using Network tags.
You can find more information in the documentation:
Firewall rules overview
Using firewall rules
VPC network overview
You can check current firewall rules at VPC network -> Firewall rules.
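The same information is available from the gcloud CLI, which can be handy for auditing; for example:

    # List every firewall rule in the project
    gcloud compute firewall-rules list

    # Inspect a single rule, e.g. the default HTTP rule
    gcloud compute firewall-rules describe default-allow-http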

How can I configure Google Cloud Platform to allow Cloudflare traffic only?

I recently started using GCP, but there is one thing I can't solve.
I have: 1 VM + 1 DB instance + 1 LB. The DB instance allows connections only from the VM's IP, but the VM allows traffic from all IPs (if I configure the firewall to allow only the Cloudflare and LB IPs, the website crashes and refuses connections).
I was recently under attack. I activated Cloudflare's DDoS mode and restarted everything, but within about 6 hours the attack came back even with Cloudflare active. I saw MySQL connections jump from 20-30 to 254, all coming from the VM's IP, so I think the problem is the public accessibility of the VM, but I don't know how to solve it...
If I activate firewall rules that allow traffic only from the LB and Cloudflare, the site refuses all connections.
Any idea what I can do?
Thanks.
Cloud Support here. Unfortunately, we do not have visibility into what is installed on your instance or what software caused the issue.
Generally speaking, you're responsible for investigating the source of the vulnerability and taking steps to mitigate it.
Here are some hints that may help you:
Keep your firewall rules sensible; e.g., it is not good practice to have a rule that allows all ingress connections on port 22 from all source IPs, for obvious reasons.
Since you've already been rooted, change all your passwords: within the Cloud SQL instance, within the GCE instance, even within the GCP project.
It's also a good idea to check who has access to your service accounts, just in case people that aren't currently working for you or your company still have access to them.
If you're using certificates revoke them, generate new ones and share them in a secure way and with the minimum required number of users.
Securing GCE instances is a shared responsibility; in general, the OWASP hardening guides are really good.
I'm quoting some info here from another StackOverflow thread that might be useful in your case:
General security advice for Google Cloud Platform instances:
Set user permissions at project level.
Connect securely to your instance.
Ensure the project firewall is not open to everyone on the internet.
Use a strong password and store passwords securely.
Ensure that all software is up to date.
Monitor project usage closely via the monitoring API to identify abnormal project usage.
To diagnose trouble with GCE instances, the serial port output from the instance can be useful. You can check it by clicking on the instance name and then on "Serial port 1 (console)". Note that these logs are wiped when instances are shut down and rebooted, and the log is not visible while the instance is stopped.
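The same output is available from the gcloud CLI; the instance name and zone here are placeholders:

    gcloud compute instances get-serial-port-output my-instance \
        --zone us-central1-a --port 1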
Stackdriver monitoring is also helpful to provide an audit trail to diagnose problems.
You can use the Stackdriver Monitoring Console to set up alerting policies matching given conditions (under which a service is considered unhealthy) and have them trigger email/SMS notifications.
This quickstart for Google Compute Engine instances can be completed in ~10 minutes and shows the convenience of monitoring instances.
Here are some more hints you can check on keeping GCP projects secure.
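On the original question of allowing only Cloudflare and the load balancer, a rough sketch with gcloud might look like the following. The first two source ranges are samples from Cloudflare's published list (https://www.cloudflare.com/ips/), which changes over time, so fetch the current list before using it; the last two are the ranges Google's HTTP(S) load balancer and health checks connect from, and leaving them out is a likely reason the site refused connections when the firewall was locked down:

    # Allow web traffic only from Cloudflare and Google's LB/health checks
    gcloud compute firewall-rules create allow-cloudflare-and-lb \
        --network default \
        --allow tcp:80,tcp:443 \
        --source-ranges 173.245.48.0/20,103.21.244.0/22,130.211.0.0/22,35.191.0.0/16 \
        --target-tags web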

Exposing various ports behind a load balancer on Rancher/AWS

I am setting up a Rancher environment.
The Rancher server is behind a classic ELB (since ALBs are not recommended per Rancher guidelines).
I also want to make available Prometheus and Grafana services.
These are offered via Rancher catalogue and will run as container services, being exposed on Rancher host ports 3000 and 9090.
Since the Rancher server (per their recommendations) requires an ELB, I wanted to explore options for making the two services above available with the most minimal setup possible.
If the server is available on say rancher.mydomain.com, ideally I would like to have the other two on grafana.mydomain.com and prometheus.mydomain.com.
Can I at least combine the latter two behind an ALB?
If so, how do I map them?
Do I place <my_rancher_host_public_IP>:3000 and <my_rancher_host_public_IP>:9090 behind an ALB?
You could do this in a couple (maybe more) of ways:
Use an external DNS updater like the Route 53 infra catalog item. That will automatically map DNS directly to the public IP of the host that houses the services. Modify the DNS template so it prepends the service name to the domain.
Register your targets and map the ports, then point a DNS entry at the ALB.
The first way allows DNS to update if the service shifts across hosts in your environment. With the second way, you could force containers onto specific hosts.
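For the second way, a rough sketch with the AWS CLI might look like this (all IDs and ARNs are placeholders; repeat the same pattern for prometheus.mydomain.com on port 9090, then point both CNAMEs at the ALB):

    # One target group per service, pointing at the Rancher host's port
    aws elbv2 create-target-group --name grafana-tg --protocol HTTP \
        --port 3000 --vpc-id vpc-0123456789abcdef0
    aws elbv2 register-targets --target-group-arn <grafana-tg-arn> \
        --targets Id=i-0123456789abcdef0,Port=3000

    # Route by host name on a shared listener
    aws elbv2 create-rule --listener-arn <listener-arn> --priority 10 \
        --conditions Field=host-header,Values=grafana.mydomain.com \
        --actions Type=forward,TargetGroupArn=<grafana-tg-arn>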

AWS ECS and Load Balancing

I see that ECS services can use Application Load Balancers, and the dynamic port stuff works automagically. However, an ALB has a maximum of 10 rules other than default rules. Does that mean that I need a separate ALB for every 10 services unless I wish to access via a different port (in which case the default rules would kick in)? This seems obvious, but for something touted as the solution to load balancing in a microservices environment, this seems incredibly limiting. Am I missing something?
As far as I know and have experienced, this is indeed true: you are limited to 10 rules per ALB. Take into account that this setup (ALB + ECS) is fairly new, so it is possible that Amazon will adjust the limits as people request it.
Take into account as well that a rule typically forwards to a target group with multiple targets; in a microservice architecture this translates to multiple instances of the same service. So while you can route to only 10 different services, you can run, say, 10 instances of each, balancing 100 containers with a single ALB.
Alternatively (to save costs) you could create one listener with multiple rules, but they have to be distinguished by path pattern and they all listen on (not route to) the same port. Each rule can forward to a target group of your choice, e.g. you can route /service1 to container 1 and /service2 to container 2 within one listener.
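For concreteness, creating such a path-pattern rule with the AWS CLI might look like this (the listener and target group ARNs are placeholders):

    aws elbv2 create-rule --listener-arn <listener-arn> --priority 20 \
        --conditions Field=path-pattern,Values='/service1*' \
        --actions Type=forward,TargetGroupArn=<service1-tg-arn>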
Yes, you are correct, and it is a low limit. However, if you are able to use different CNAMEs for your services, then having an ALB with a single target group for each service won't behave differently from having one ALB and multiple target groups, each selected by rules. Dynamic ports are probably the main part of their "microservices solution" argument.

How to setup EC2 Security Group to allow working with Firebase?

I am preparing a system of EC2 workers on AWS that use Firebase as a queue of tasks they should work on.
My node.js app that reads the queue and works on tasks is done and working, and I would like to properly set up a firewall (EC2 Security Group) that allows my machines to connect only to my Firebase.
Each rule of that Security Group contains:
protocol
port range
and destination (IP address with mask, so it supports whole subnets)
My question is: how can I set up this rule for Firebase? I suppose the IP address of my Firebase is dynamic (it resolves to different IPs from different instances). Is there a list of possible addresses, or how would you address this issue? Could some kind of proxy be a solution that would not slow down my Firebase drastically?
Since using node to interact with Firebase is outbound traffic, the default security group should work fine (you don't need to allow any inbound traffic).
If you want to lock it down further for whatever reason, it's a bit tricky. As you noticed, there are a bunch of IP addresses serving Firebase. You could get a list of them all with "dig -t A firebaseio.com" and add all of them to your firewall rules. That would work today, but new servers could be added next week and you'd be broken. To be a bit more general, you could perhaps allow all of 75.126.*.*, but that is probably overly permissive and could still break if new Firebase servers were added in a different data center or something.
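If you did want to go down that road anyway, a sketch might look like the following; the security group ID is a placeholder, the resolved addresses change over time, and this assumes the default allow-all egress rule has already been removed from the group:

    # Resolve the current set of A records
    dig +short -t A firebaseio.com

    # Allow HTTPS egress to one resolved address (repeat per address)
    aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 443 --cidr 75.126.0.10/32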
FWIW, I wouldn't worry about it. Blocking inbound traffic is generally much more important than outbound (since to generate outbound traffic, someone has to have already managed to run software on the box).