Google Cloud SCTP

I am trying to test SCTP traffic from the internet to instances within GCP, but it is not working. Checking through the firewall documentation, is it safe to conclude that GCP does not allow SCTP traffic from the internet to instances?
If this is true, what is the rationale behind it? SCTP is a major protocol that is used in telecom.

Google blocks SCTP traffic between VM instances and the internet. This is most probably done for security reasons, and perhaps also for reliability and ease of managing such a vast infrastructure as Google has.
The image you posted (from the GCP firewall documentation) says that direct use of SCTP outside the GCP network is blocked. For the moment you can go to the Issue Tracker and create a feature request for this functionality.
As a workaround you can always try to tunnel it inside other protocols (as @John Hanley suggested) or use a VPN. Nothing else comes to mind.
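If you want to verify for yourself where SCTP does and does not get through, a quick probe between two VMs can help. Below is a minimal sketch, assuming Linux VMs with the kernel SCTP module loaded, a VPC firewall rule allowing sctp on the chosen port between the instances, and placeholder addresses and ports:

```python
# sctp_probe.py - minimal SCTP connectivity check between two VMs in the same VPC.
# Assumes Linux with the SCTP kernel module loaded (modprobe sctp) and a GCP
# firewall rule allowing sctp:36412 between the instances. Host/port are placeholders.
import socket
import sys

PEER = ("10.128.0.3", 36412)  # placeholder internal IP of the other VM

def serve():
    # One-to-one style SCTP socket; behaves much like TCP from the API side.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    s.bind(("0.0.0.0", PEER[1]))
    s.listen(1)
    conn, addr = s.accept()
    print("SCTP association from", addr)
    print("received:", conn.recv(1024))
    conn.close()

def probe():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    s.settimeout(5)
    s.connect(PEER)  # should succeed inside the VPC, time out from the internet
    s.send(b"sctp-ok")
    s.close()
    print("SCTP association established")

if __name__ == "__main__":
    serve() if sys.argv[1:] == ["server"] else probe()
```

Run it with the server argument on one VM and without arguments on the other; inside the VPC the association should establish, while the same probe from a host on the internet should time out, matching the documented block.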

Related

Reaching GCP Cloud Run instance through VPC with "only internal range" egress

The current setup is as follows:
I have a Cloud Run service which acts as the "back-end": it needs to reach external services but should be reached ONLY by a second Cloud Run service, which acts as the "front-end" and needs to reach Auth0 and the back-end while being reachable by any client with a browser.
I recognize that the setup is not optimal, but I've inherited it as is and we cannot migrate to another solution (such as Kubernetes). I'm trying to make this work with the least amount of impact on the infrastructure and, ideally, without having to touch the services themselves.
What I've tried is to restrict the ingress of the back-end service to INTERNAL and place two serverless VPC connectors (one per service), so that the front-end service would be able to reach the back-end but no one else could.
But I've encountered a huge issue: if I route all of the front-end's egress through the VPC, reaching the back-end works, but then the front-end cannot reach Auth0 and therefore users cannot authenticate. If I set the egress to "mixed" (only internal IP ranges go through the VPC), the Cloud Run URL (*.run.app) is resolved outside the VPC, and the back-end returns a big bad 403.
What I tried so far:
Placing a load balancer in front of the back-end service. But the serverless NEG only supports the global HTTP(S) load balancer, and I'd need an internal one if I wanted an internal IP to resolve against.
Trying to see if the VPC connector itself maybe provides an internal (static) IP, but it doesn't seem so.
Someone in another question suggested a "MIG as a proxy", but I haven't managed to figure that out (Can I run Cloud Run applications on a private IP (inside dedicated VPC network)?).
Fooling around with API Gateway, but it seems that I'd have to provide an OpenAPI specification for the back-end, and I'm still under the delusion that this might be resolved with a cheaper (in terms of effort) approach.
So, I get that the Cloud Run instance cannot possibly have an internal IP by itself, but is there any kind of GCP product that can act as a proxy? Can someone elaborate on the "MIG as a proxy" approach (Managed Instance Group? Of what, though?), which might be the solution I'm looking for? (Sadly, I do not have the reputation needed to comment on that question or I would have).
Any kind of pointer is, as always, deeply appreciated.
You are designing this wrong. Use Cloud Run's identity-based access control instead of trying to route traffic. Google IAP (Identity Aware Proxy) will block all traffic that is not authorized.
Authenticating service-to-service
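Concretely, the usual pattern is to grant the front-end's service account the run.invoker role on the back-end and have the front-end attach a Google-signed ID token whose audience is the back-end's URL. A minimal sketch using the google-auth and requests libraries (the *.run.app URL and path are placeholders):

```python
# Front-end calling a private Cloud Run back-end with a Google-signed ID token.
# Assumes the front-end runs as a service account that has roles/run.invoker
# on the back-end service. BACKEND_URL is a placeholder.
import requests
import google.auth.transport.requests
from google.oauth2 import id_token

BACKEND_URL = "https://backend-xyz-uc.a.run.app"  # placeholder *.run.app URL

def call_backend(path="/api/data"):
    auth_req = google.auth.transport.requests.Request()
    # Fetches an ID token for the given audience; on Cloud Run this comes from
    # the metadata server, so no key files are needed.
    token = id_token.fetch_id_token(auth_req, BACKEND_URL)
    resp = requests.get(
        BACKEND_URL + path,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(call_backend())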

GCP forward proxy solution for whitelisting domain names on outbound traffic

I use Squid in my VPC as a forward proxy to sanitize my outgoing traffic and allow only certain domains. Is there a cloud-native solution in GCP that accomplishes the same thing? I just want to be able to whitelist certain domain names requested from some of my instances.
A cloud-native solution is discrimiNAT. It allows plugging domain allowlists straight into GCP firewall rules, without the need to configure apps on the VM instances to use an explicit proxy. The association of firewall rules with VM instances by means of network tags (as is the GCP way) then dictates which egress FQDN rules apply to which VM instances.
Technically it isn't a forward proxy but an NGFW, accomplishing the specific requirement of filtering outbound traffic by FQDN.
Disclosure: It's a marketplace product and I've written protocol parsers for them in Rust!
There is no native solution for that use case in GCP at the moment, but you can check this documentation to help you harden your GCP infrastructure.
You can also file a feature request describing your use case so they can consider adding it in the future.

Is it possible to create a firewall rule accepting calls from a Google Cloud Function [duplicate]

I would like to develop a Google Cloud Function that will subscribe to file changes in a Google Cloud Storage bucket and upload the file to a third party FTP site. This FTP site requires allow-listed IP addresses of clients.
As such, is it possible to get a static IP address for Google Cloud Functions containers?
Update: This feature is now available in GCP https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
First of all, this is not an unreasonable request, don't get gaslighted. AWS Lambdas have supported this feature for a while now. If you're interested in this feature please star this feature request: https://issuetracker.google.com/issues/112629904
Secondly, we arrived at a workaround, which I also posted to that issue; maybe this will work for you too:
Set up a VPC Connector
Create a Cloud NAT on the VPC
Create a Proxy host which does not have a public IP, so the egress traffic is routed through Cloud NAT
Configure a Cloud Function which uses the VPC Connector and which is configured to use the proxy server for all outbound traffic (see the sketch below)
A caveat to this approach:
We wanted to put the proxy in a Managed Instance Group and behind a GCP Internal LB so that it would dynamically scale, but GCP Support has confirmed this is not possible because the GCP ILB basically allow-lists the subnet, and the Cloud Function CIDR is outside that subnet
I hope this is helpful.
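For the last step, the function's outbound calls have to be pointed at the proxy host explicitly. A minimal sketch of the function side, assuming a hypothetical proxy VM listening on 10.8.0.2:3128 and an external endpoint that allow-lists the Cloud NAT IP (both placeholders):

```python
# Cloud Function (Python runtime) sending its outbound traffic via the proxy VM,
# so it leaves GCP through Cloud NAT's static IP. Proxy address and target URL
# are placeholders for this sketch.
import requests

PROXY = "http://10.8.0.2:3128"  # hypothetical internal IP/port of the proxy VM
PROXIES = {"http": PROXY, "https": PROXY}

def handler(request):
    # The third-party service that allow-lists the NAT IP; URL is a placeholder.
    resp = requests.get("https://api.example-partner.com/ping",
                        proxies=PROXIES, timeout=10)
    return f"upstream responded with {resp.status_code}", 200
```

Setting an HTTPS_PROXY environment variable on the function achieves the same thing for libraries that honour it.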
Update: Just the other day, they announced an early-access beta for this exact feature!!
"Cloud Functions PM here. We actually have an early-access preview of this feature if you'd like to test it out.
Please complete this form so we can add you..."
The form can be found in the Issue linked above.
See answer below -- it took a number of years, but this is now supported.
https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
For those wanting to associate Cloud Functions with a static IP address in order to whitelist the IP for an API or something of the sort, I recommend checking out this step-by-step guide, which helped me a lot:
https://dev.to/alvardev/gcp-cloud-functions-with-a-static-ip-3fe9
I also want to point out that this solution works for both Google Cloud Functions and Firebase Functions (as the latter is based on GCP).
This functionality is now natively part of Google Cloud Functions (see here)
It's a two-step process according to the GCF docs:
Associating function egress with a static IP address
In some cases, you might want traffic originating from your function to be associated with a static IP address. For example, this is useful if you are calling an external service that only allows requests from whitelisted IP addresses.
1. Route your function's egress through your VPC network. See the previous section, Routing function egress through your VPC network.
2. Set up Cloud NAT and specify a static IP address. Follow the guides at Specify subnet ranges for NAT and Specify IP addresses for NAT to set up Cloud NAT for the subnet associated with your function's Serverless VPC Access connector.
Refer to the link below:
https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
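Once the connector and Cloud NAT are set up, a quick way to confirm the function really egresses from the reserved static address is to call an IP-echo service from inside the function. A minimal sketch (api.ipify.org is just one example of such a service):

```python
# HTTP Cloud Function (Python runtime) that reports its own egress IP, useful
# for confirming the Serverless VPC Access + Cloud NAT static IP setup.
import requests

def whoami(request):
    # Any IP-echo endpoint works; ipify is used here as an example.
    egress_ip = requests.get("https://api.ipify.org", timeout=10).text
    return f"egress IP: {egress_ip}\n", 200
```

Note that this only shows the NAT address if the function's egress setting routes all traffic through the connector; with the "private IPs only" setting, calls to public endpoints bypass the VPC and will show a regular Google egress IP instead.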
As per Google, the feature has been released; check out the whole thread:
https://issuetracker.google.com/issues/112629904
It's not possible to assign a static IP to Google Cloud Functions, as that is pretty much orthogonal to the nature of a 'serverless' architecture, i.e. servers are allocated and deallocated on demand.
You can, however, leverage an HTTP proxy to achieve a similar effect. Set up a Google Compute Engine instance, assign it a static IP and install a proxy library such as https://www.npmjs.com/package/http-proxy. You can then route all your external API calls etc. through this proxy.
This probably reduces scalability and flexibility, but it might be a workable workaround.

How can I configure Google Cloud Platform to allow Cloudflare traffic only?

I recently started using GCP, but there is one thing I can't solve.
I have: 1 VM + 1 DB instance + 1 LB. The DB instance allows connections only from the VM's IP, but the VM allows traffic from all IPs (if I configure the firewall to allow only the Cloudflare and LB IPs, the website crashes and refuses connections).
Recently I was under attack. I activated Cloudflare's DDoS mode and restarted everything, and after about 6 hours the attack came back even with Cloudflare active. I saw MySQL connections jump from 20-30 to 254, all coming from the VM's IP, so I think the problem is the public accessibility of the VM, but I don't know how to solve it.
If I activate firewall rules to allow traffic only from the LB and Cloudflare, the web server refuses all connections.
Any idea what I can do?
Thanks.
Cloud Support here. Unfortunately, we do not have visibility into what is installed on your instance or what software caused the issue.
Generally speaking, you're responsible for investigating the source of the vulnerability and taking steps to mitigate it.
Here are some hints that will help you:
Keep your firewall rules sensible: for example, it is not good practice to have a rule that allows all ingress connections on port 22 from all source IPs, for obvious reasons (see the sketch after these hints for building a Cloudflare-only source list).
Since you've already been rooted, change all your passwords: within the Cloud SQL instance, within the GCE instance, even within the GCP project.
It's also a good idea to check who has access to your service accounts, just in case people who aren't currently working for you or your company still have access to them.
If you're using certificates, revoke them, generate new ones, and share them securely with the minimum required number of users.
Securing GCE instances is a shared responsibility; in general, the OWASP hardening guides are really good.
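As a concrete example of the firewall hint, locking ingress down to Cloudflare usually means building the rule's source ranges from Cloudflare's published lists (and remembering the Google load balancer and health-check ranges 130.211.0.0/22 and 35.191.0.0/16, whose absence is a common reason the site suddenly refuses connections). A minimal sketch that fetches the published ranges; the firewall rule itself still has to be created in the console or with gcloud:

```python
# Fetch Cloudflare's published IP ranges and print them as source ranges for a
# GCP ingress firewall rule, so the VM only accepts traffic proxied by Cloudflare
# (plus your load balancer / health-check ranges, added separately).
import requests

def cloudflare_ranges():
    ranges = []
    for url in ("https://www.cloudflare.com/ips-v4",
                "https://www.cloudflare.com/ips-v6"):
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        ranges += [line.strip() for line in resp.text.splitlines() if line.strip()]
    return ranges

if __name__ == "__main__":
    # Paste this comma-separated list into the firewall rule's source ranges.
    print(",".join(cloudflare_ranges()))
```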
I'm quoting some info here from another StackOverflow thread that might be useful in your case:
General security advice for Google Cloud Platform instances:
Set user permissions at project level.
Connect securely to your instance.
Ensure the project firewall is not open to everyone on the internet.
Use a strong password and store passwords securely.
Ensure that all software is up to date.
Monitor project usage closely via the monitoring API to identify abnormal project usage.
To diagnose trouble with GCE instances, serial port output from the instance can be useful.
You can check the serial port output by clicking on the instance name and then on "Serial port 1 (console)". Note that these logs are wiped when instances are shut down and rebooted, and the log is not visible while the instance is stopped (a programmatic way to fetch this output is sketched below).
Stackdriver monitoring is also helpful to provide an audit trail for diagnosing problems.
You can use the Stackdriver Monitoring Console to set up alerting policies matching given conditions (under which a service is considered unhealthy) that can be set up to trigger email/SMS notifications.
This quickstart for Google Compute Engine instances can be completed in ~10 minutes and shows the convenience of monitoring instances.
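If you prefer to pull that serial port output programmatically rather than through the console, here is a minimal sketch assuming the google-cloud-compute client library; project, zone and instance name are placeholders:

```python
# Pulling an instance's serial port output with the Compute Engine client,
# instead of clicking through the console. Project/zone/instance are placeholders.
from google.cloud import compute_v1

def dump_serial(project="my-project", zone="us-central1-a", instance="my-vm"):
    client = compute_v1.InstancesClient()
    # Returns the most recent serial console output (port 1 by default).
    output = client.get_serial_port_output(project=project, zone=zone,
                                           instance=instance)
    print(output.contents)

if __name__ == "__main__":
    dump_serial()
```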
Here are some hints you can check for keeping GCP projects secure.

Google Cloud Compute Instance, IPv6

I currently have a Google Cloud Compute Engine instance set up as the back-end for a multiplayer game. Certain publishers and app stores that I'm trying to publish the game on require that the server can be reached by a client using an IPv6 address, which makes perfect sense. So the question is, how do I go about making the compute instance reachable via IPv6?
It's worth noting that the connection between the client and server is done via UDP, so load balancing doesn't appear to work (since Google Cloud load balancers only seem to support TCP for this, from what I can tell).
Has anyone else had this issue, and if so how did you solve it?
Many thanks in advance.
IPv6 Termination for HTTP(S), SSL Proxy, and TCP Proxy Load Balancing is currently in Beta.
https://cloud.google.com/compute/docs/load-balancing/ipv6
Configuring IPv6 termination for your load balancers lets your backend instances appear as IPv6 applications to your IPv6 clients.
Note: The documentation says this feature is not covered by any SLA or deprecation policy and may be subject to backward-incompatible changes.
The definition of Beta from their documentation: Beta is the point at which we are ready to open a release for any customer to use. There are no SLA or technical support obligations in a Beta release, and charges may be waived in some cases. Products will be complete from a feature perspective, but may have some open outstanding issues. Beta releases are suitable for limited production use cases.
https://cloud.google.com/terms/launch-stages
IPv6 Termination for HTTP(S), SSL Proxy, and TCP Proxy Load Balancing became GA on September 20, 2017.
Source: https://cloudplatform.googleblog.com/2017/09/announcing-ipv6-global-load-balancing-ga.html.
See the documentation at https://cloud.google.com/compute/docs/load-balancing/ipv6
Keep in mind that inside the GCP network everything is still IPv4: https://issuetracker.google.com/issues/35904387
Google Cloud now supports external IPv6 on VM instances. Each instance can get a /96 external IPv6 range, which can be used to access the internet (without NAT) or for VM-to-VM traffic.
At the moment (July 2021) it is only supported in a limited set of regions:
asia-east1
asia-south1
europe-west2
us-west2
See more details in:
https://cloud.google.com/compute/docs/ip-addresses/configure-ipv6-address
https://cloud.google.com/vpc/docs/vpc#ipv6-addresses
If your instance happens to be in one of the four regions above, then you should be able to use the VM instance IPv6 feature.
May 2022 update: per https://cloud.google.com/vpc/docs/subnets#limitations, internal and external IPv6 subnets are available in all regions except asia-southeast2 and asia-northeast3.
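Once the VM has an external IPv6 address in one of the supported regions, the game server itself also has to listen on IPv6. A minimal sketch of a dual-stack UDP echo listener, assuming a Linux VM, a placeholder port, and a VPC firewall rule allowing UDP on that port from both IPv4 and IPv6 sources:

```python
# Dual-stack UDP listener: one AF_INET6 socket accepting both IPv6 clients and
# IPv4 clients (as ::ffff:a.b.c.d mapped addresses). Port 7777 is a placeholder
# game port; open udp:7777 in the VPC firewall for IPv4 and IPv6 sources.
import socket

PORT = 7777  # placeholder

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
# 0 = also accept IPv4 clients on the same socket (Linux usually defaults to
# dual-stack, but set it explicitly).
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind(("::", PORT))

while True:
    data, addr = sock.recvfrom(2048)
    print("datagram from", addr)
    sock.sendto(data, addr)  # echo back
```

A single AF_INET6 socket with IPV6_V6ONLY cleared keeps the existing IPv4 player base working while satisfying the IPv6 reachability requirement.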