How to expose an API that is running in a Pod and limit access? - google-cloud-platform

I have an API running as a service in my GKE cluster, and it needs to be accessible to some other developers on my team. They are using a VPN, so they have a static IP they can provide to me.
My idea was to expose the service using a static external IP and to restrict access with a firewall rule that allows only my colleagues' IP.
Unfortunately, this only seems to be possible for Compute Engine VMs, because only they can have network tags.
Is there a way to simply deny all traffic to my service except traffic from that specific IP?
I appreciate any hints to relevant features, thank you.

Well, you don't need tags; you can create your firewall rule to only allow access from the IP your developers provide you. When you create the firewall rule, select "All instances in the network" for Targets, and for source IP ranges specify the IP with the /32 suffix at the end.
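As a sketch of the approach above, assuming the developers' VPN egress IP is 203.0.113.10 and the service is reachable on NodePort 30080 in the default network (the rule name, IP, port, and network are placeholders):

```shell
# Hypothetical example: allow only the developers' VPN egress IP to reach
# the service's NodePort on every instance in the network.
# Omitting --target-tags/--target-service-accounts makes the rule apply to
# all instances in the network, so no instance tags are needed.
gcloud compute firewall-rules create allow-devs-to-api \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:30080 \
    --source-ranges=203.0.113.10/32
```

Traffic from any other source IP is still blocked by GCP's implied deny-ingress rule, so no explicit deny rule is needed.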

You could grant them RBAC access to the pods in the required namespace and allow them to port-forward, assuming you don't want to set up a public endpoint and try to secure it. This does require kubectl to be installed and cluster access, and it will give access to all pods in the namespace.
https://medium.com/@ManagedKube/kubernetes-rbac-port-forward-4c7eb3951e28
It depends on what level of security and permanence you need, I guess.
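A minimal sketch of the RBAC-plus-port-forward setup, assuming a namespace `dev`, a service `api-svc`, and a developer account `developer@example.com` (all placeholder names):

```shell
# Hypothetical example: grant just enough RBAC to port-forward in one
# namespace. Port-forwarding needs "get" on pods and "create" on the
# pods/portforward subresource.
kubectl create role api-portforward \
    --namespace=dev \
    --verb=get,list,create \
    --resource=pods,pods/portforward

kubectl create rolebinding api-portforward-binding \
    --namespace=dev \
    --role=api-portforward \
    --user=developer@example.com

# A developer with this binding can then reach the service on localhost,
# without the API ever being exposed publicly:
kubectl port-forward -n dev svc/api-svc 8080:80
```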

Related

Restrict access to some endpoints on Google Cloud

I have a k8s cluster that runs my app (GCE as an ingress) and I want to restrict access to some endpoints ("/test/*"), while all other endpoints should be publicly available. I don't want to restrict by specific IPs, so that I keep some flexibility and can reach the restricted endpoints from any device, such as a phone.
I considered IAP, but it restricts access to the whole service, while I need it only for some endpoints, so it's more than I need.
I have thought about a VPN, but I don't understand how to set one up, or whether it would even resolve my issue.
I have heard about proxies, but it seems to me they can't fulfill my requirements (?).
I can't say the solution needs to be super extensible or generic, because only a few people will use this feature.
I want the solution to be light, flexible, and simple while still fulfilling my needs. If the available solutions are complex, I would consider restricting access by IP, but I worry about how viable the IP-restriction approach is in real life: would it be too cumbersome to add my phone's IP every time I change location, and so on?
You can use API Gateway for that. It approximately meets your needs; it's not especially flexible or simple, but it's fully managed and can scale with your traffic.
For a more convenient solution, you have to run a software proxy (or an API gateway) yourself, or break the bank and use Apigee.
I set up OpenVPN.
It was a somewhat tedious process because of various small obstacles, but I encourage you to do the same.
Get a host (machine, cluster, or whatever) with a static IP.
Set up an OpenVPN instance. I used Docker: https://hub.docker.com/r/kylemanna/openvpn/ (follow the instructions, but update the host with -u YOUR_IP).
Ensure that the VPN setup works from your local machine.
On the routes you need to protect, limit IP access to the VPN's IP. Nginx example:
allow x.x.x.x;
deny all;
Make sure that nginx sees the right client IP. I had an issue where nginx was treating the load balancer's IP as the client IP, so I had to mark it as trusted. http://nginx.org/en/docs/http/ngx_http_realip_module.html
Test the setup
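Putting the last two steps together, a hedged nginx sketch might look like this (the VPN IP, the load balancer range, and the protected path are all placeholders to adapt):

```nginx
# Trust the load balancer's range so nginx uses the real client IP
# (from the realip module) for the allow/deny check below.
set_real_ip_from 10.0.0.0/8;      # placeholder: your LB's address range
real_ip_header X-Forwarded-For;

location /test/ {
    allow 203.0.113.10;           # placeholder: the VPN's egress IP
    deny  all;                    # everyone else gets 403
}
```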

To blacklist an IP in AWS do we need to create IPSet for client or for Client Environment

I have a client IP that I need to blacklist. Do I need to create an IP set for the client or for the client environment?
Without knowing how your EC2 instance and network is configured it's difficult to say. However, this answer assumes that you are trying to blacklist an IP address for your entire VPC rather than the EC2 instance only.
Security at the network level can be managed by a Network Access Control List (NACL) or a security group. NACLs support both ALLOW and DENY rules; security groups only have ALLOW rules.
So, to blacklist an IP you can use a NACL inbound rule with the IP range and DENY.
|Rule #|Type |Protocol|Port range|Source |Allow/Deny|
|------|-----------|--------|----------|-------------|----------|
|200 |All traffic|All |All |192.0.1.0/32 |DENY |
For more advanced scenarios you may need to look at running something like AWS WAF.
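The NACL rule from the table above can be sketched with the AWS CLI like this (the ACL ID is a placeholder; the rule number and CIDR match the table):

```shell
# Hypothetical example: deny all inbound traffic from one address at the
# subnet boundary. Lower rule numbers are evaluated first, so 200 must be
# below any ALLOW rule that would otherwise match this source.
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 200 \
    --protocol -1 \
    --cidr-block 192.0.1.0/32 \
    --rule-action deny
```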

GCP forward proxy solution for whitelisting domain names on outbound traffic

I use squid in my vpc as a forward proxy and to sanitize my outgoing traffic to only allow certain domains. Is there a cloud native solution in GCP that accomplishes the same thing? I just want to be able to whitelist certain domain names being requested from some of my instances.
One cloud-native solution is discrimiNAT. It allows plugging domain allowlists straight into GCP Firewall Rules, without the need to configure apps on the VM instances to use an explicit proxy. The association of firewall rules with VM instances via network tags (as is the GCP way) then dictates which egress FQDN rules apply to which VM instances.
It isn't a forward-proxy technically, but an NGFW accomplishing the specific requirement of filtering outbound traffic by FQDNs.
Disclosure: It's a marketplace product and I've written protocol parsers for them in Rust!
There is no native GCP solution for that use case at the moment, but you can check this documentation to help you harden your GCP infrastructure.
You can also file a feature request describing your use case, so they can possibly consider adding it in the future.

How can I set SSH firewall rule on Google VM so that only my office computers can access the VM over SSH?

In the last few days my Google VM has been repeatedly compromised; I received a warning and my VM was suspended by Google, saying "cryptocurrency mining activity was found on the VM". I suspect someone has hacked my VM and is doing this. So now I want to create a new VM with a secure SSH firewall such that only a limited set of computers can access it.
I have tried putting the IPs of my office routers in the firewall's SSH allow rule, but even after setting this rule, SSH connections to the VM still get established from other IP addresses. I just want to specify two IPs in the firewall rule, but it expects IP ranges in CIDR format (which I am not clear on).
I have also seen suggestions that I should change the VM's SSH port.
Can anybody please explain how I can restrict access to my Google VM to only a specific set of computers, when these computers are behind a router and share the same external IP, i.e. the router's?
Thanks
I understand you want to create a new VM with a secure SSH firewall and want to restrict access, allowing only particular IP addresses of your office router.
To do that, you can create firewall rules as explained here [1]. To manage access for a specific instance, I recommend you use network tags for firewall rules [2].
Going back to your concern that SSH connections to the VM still get established from other IP addresses even after you create the firewall rule for the specific IP address, the reason might be this:
Every project you create in GCP comes with the default firewall rules.
So there might be a default-allow-ssh rule which you need to block; I guess that might be causing the issue. Note that the default network includes some additional rules that override this one, allowing certain types of incoming traffic. See the attached links [3][4] for more details.
[3]https://cloud.google.com/vpc/docs/firewalls#default_firewall_rules
[4]https://cloud.google.com/vpc/docs/firewalls#more_rules_default_vpc
You can also add a guest-level firewall rule using, for example, iptables to add another security layer to your VM instance. Note that the GCP project-level firewall inspects network traffic before it reaches your VM instances; the operating-system firewall can then additionally block traffic to port 22 as a second layer.
In order to allow only specific addresses to connect to your VM instance, add them with a /32 suffix in the "IP ranges" value of your default-allow-ssh GCP firewall rule. For example, 45.56.122.7/32 and 208.43.25.31/32.
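Both layers described above can be sketched as follows, using the two office IPs from the example (the rule name default-allow-ssh is GCP's default; adapt if yours differs):

```shell
# Hypothetical example, layer 1: narrow the default SSH rule so only the
# two office addresses match, instead of 0.0.0.0/0.
gcloud compute firewall-rules update default-allow-ssh \
    --source-ranges=45.56.122.7/32,208.43.25.31/32

# Hypothetical example, layer 2: a guest-level iptables backstop on the VM
# itself (run as root). Allow the two offices, then drop everything else
# that reaches port 22.
sudo iptables -A INPUT -p tcp --dport 22 -s 45.56.122.7  -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -s 208.43.25.31 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j DROP
```

The /32 suffix simply means "exactly this one address": a CIDR range whose mask covers all 32 bits of the IPv4 address.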

Qualys Scanner in AWS - opening outbound FW ports

I'm setting up Qualys scanner in Amazon Web Services in an environment that restricts outbound access to the internet from the VPC. It does so completely.
So I'll need to open a ticket to get the outbound access it needs, and I have to specify each IP that the Qualys server will need to connect to.
I'm seeing this message in the logs:
Starting crond:
Preparing scanner personalization
About to test connectivity to qualysguard.qualys.com
Error: No connectivity to qualysguard.qualys.com - please fix.
About to test connectivity to qualysguard.qualys.com
My question is: do I need to open up access to just that one domain, or to more than that one domain? I have to be specific and cannot use wildcards in the request. This environment is extremely locked down for security reasons.
There are several ways you can restrict access in your environment while still allowing certain ports:
AWS firewall rules do not resolve DNS names, so make sure you get the set of IP addresses that need to be allowed.
Use an ELB: allow certain ports and permit access for those ports/IP addresses.
Port address translation: look for appliances that allow particular ports from a set of IP addresses.
Move your application to a public subnet and allow the specific ports/IP addresses.
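Once you have the concrete IP list for your ticket, the outbound opening can be sketched as a security group egress rule. This is a hypothetical example: the group ID and the address 198.51.100.5 are placeholders, not real Qualys addresses; use the list Qualys provides for your platform.

```shell
# Hypothetical example: allow outbound HTTPS only to one scanner address.
# Repeat (or extend the IpRanges list) for each address in the ticket.
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,IpRanges=[{CidrIp=198.51.100.5/32}]'
```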