Can GCP's Cloud Run be used for non-HTTP services? - google-cloud-platform

I'm new to GCP and trying to make heads or tails of it. So far, I've experimented with GKE and Cloud Run.
In GKE, I can create a Workload (deployment) for a service of any kind under any port I like and allocate resources to it. Then I can create a load balancer and open the ports from the pods to the Internet. The load balancer has an IP that I can use to access the underlying pods.
On the other hand, when I create a Cloud Run service, I give it a Docker image and a port, and once the service is up and running, it exposes an HTTPS URL! The port that I specify in Cloud Run is the container's internal port, and if I want to access the URL, I have to do that through port 80.
Does this mean that Cloud Run is designed only for HTTP services under port 80? Or maybe I'm missing something?

Technically "no", Cloud Run cannot be used for non-HTTP services. See Cloud Run's container runtime contract.
But also "sort of":
The URL of a Cloud Run service can be kept "private" (and it is by default); this means that nobody but specific identities is allowed to invoke the Cloud Run service. (See this page to learn more.)
The container must listen for requests on a certain port, and it does not get CPU outside of request processing. However, it is very easy to wrap your binary in a lightweight HTTP server. See, for example, the Shell sample, which uses a very small Go HTTP server to invoke an arbitrary shell script.
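The wrapper itself is tiny. As an illustration only (not the official sample), a minimal Go version might look like this; /app/task.sh is a placeholder for your own script or binary:

package main

import (
    "log"
    "net/http"
    "os"
    "os/exec"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Run the wrapped binary once per request; /app/task.sh is a placeholder.
        out, err := exec.Command("/app/task.sh").CombinedOutput()
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Write(out)
    })
    // Cloud Run tells the container which port to listen on via $PORT.
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080" // local development fallback
    }
    log.Fatal(http.ListenAndServe(":"+port, nil))
}

Each invocation of the service then runs the script once and returns its output over plain HTTP, which satisfies the runtime contract.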

Related

Why is Cloud Run timing out when the url specifies a port number?

I've deployed a container to Cloud Run that is designed to work only if accessed on a specific port. Local access works fine if I access the service at
http://localhost:8080/myendpoint
After deploying to Cloud Run, however, it times out:
https://helloworld-xyzxyzxyz-ew.a.run.app:8080/myendpoint
Is it possible to access services on Cloud Run in this way, with the port specified explicitly in the url?
The public side of Cloud Run only supports ports 80 and 443.
Ports like 8080 are where your application listens for requests from the Google Front End (GFE). That container port can be configured, but the public ports that the GFE listens on cannot be changed.
Google Cloud Run and many other services use the GFE for load balancing, TLS termination, DDoS protection, and more. The GFE determines which ports are exposed to access the underlying services.
Google Front End service
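To make the contract concrete, here is a minimal sketch (not an official sample) of the container side in Go; the handler is a placeholder:

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func main() {
    http.HandleFunc("/myendpoint", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello") // placeholder handler
    })
    // Cloud Run injects the container port via $PORT; never hard-code it.
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080" // local development fallback
    }
    log.Fatal(http.ListenAndServe(":"+port, nil))
}

The client side never specifies a port: requests go to https://helloworld-xyzxyzxyz-ew.a.run.app/myendpoint, arrive at the GFE on 443, and are forwarded to whatever port the container was told to listen on.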
Two excellent documents about Cloud Run that help to understand the service:
Cloud Run FAQ
Cloud Run container runtime contract

Reaching GCP Cloud Run instance through VPC with "only internal range" egress

The current setup is as follows:
I have a Cloud Run service that acts as a "back-end": it needs to reach external services but should be reachable ONLY by a second Cloud Run instance. That second instance acts as the "front-end": it needs to reach Auth0 and the "back-end", and must be reachable by any client with a browser.
I recognize that the setup is not optimal, but I've inherited it as is and we cannot migrate to another solution (maybe k8s). I'm trying to make this work with the least amount of impact on the infrastructure and, ideally, without having to touch the services themselves.
What I've tried is to restrict the ingress of the back-end service to INTERNAL and place two serverless VPC connectors (one per service), so that the front-end service would be able to reach the back-end but no one else could.
But I've encountered a huge issue: if I route all of the front-end's egress through the VPC, it works, but then the front-end cannot reach Auth0 and users cannot authenticate. If I set the egress to "mixed" (only internal IP ranges go through the VPC), the Cloud Run URL (*.run.app) is resolved outside the VPC and therefore returns a big bad 403.
What I tried so far:
Placing a load balancer in front of the back-end service. But the serverless NEG only supports the global HTTP(S) load balancer, and I'd need an internal one if I wanted an internal IP to resolve against.
Checking whether the serverless VPC connector itself MAYBE provides an internal (static) IP, but it doesn't seem so.
Someone in another question suggested a "MIG as a proxy", but I haven't managed to figure that out (Can I run Cloud Run applications on a private IP (inside dedicated VPC network)?)
Fooled around with the Gateway API, but it seems that I'd have to provide an OpenAPI specification for the back-end, and I'm still under the delusion that this might be resolved with a cheaper (in terms of effort) approach.
So, I get that the Cloud Run instance cannot possibly have an internal IP by itself, but is there any kind of GCP product that can act as a proxy? Can someone elaborate on the "MIG as a proxy" approach (Managed Instance Group? Of what, though?), which might be the solution I'm looking for? (Sadly, I do not have the reputation needed to comment on that question or I would have).
Any kind of pointer is, as always, deeply appreciated.
You are designing this wrong. Use Cloud Run's identity-based access control instead of trying to route traffic: Google IAP (Identity-Aware Proxy) will block all traffic that is not authorized.
Authenticating service-to-service
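For concreteness, a sketch of what the calling (front-end) side of service-to-service authentication can look like, using the metadata server available inside Cloud Run. The back-end URL and the /private endpoint are placeholders:

package main

import (
    "io"
    "log"
    "net/http"
)

// backendURL is a placeholder for your private back-end service's URL.
const backendURL = "https://backend-xyzxyzxyz-ew.a.run.app"

// idToken asks the Cloud Run metadata server for an identity token
// minted for the given audience (the receiving service's URL).
func idToken(audience string) (string, error) {
    req, err := http.NewRequest("GET",
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience="+audience, nil)
    if err != nil {
        return "", err
    }
    req.Header.Set("Metadata-Flavor", "Google")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    b, err := io.ReadAll(resp.Body)
    return string(b), err
}

func main() {
    token, err := idToken(backendURL)
    if err != nil {
        log.Fatal(err)
    }
    // Call the private service with the token as a bearer credential.
    req, err := http.NewRequest("GET", backendURL+"/private", nil)
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Authorization", "Bearer "+token)
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println(resp.Status)
}

With the back-end's ingress left requiring authentication and roles/run.invoker granted only to the front-end's service account, unauthorized callers are rejected without any VPC plumbing.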

Private service to service communication for Google Cloud Run

I'd like to have my Google Cloud Run services privately communicate with one another over non-HTTP and/or without having to add bearer authentication in my code.
I'm aware of this documentation from Google which describes how you can do authenticated access between services, although it's obviously only for HTTP.
I think I have a general idea of what's necessary:
Create a custom VPC for my project
Enable the Serverless VPC Connector
What I'm not totally clear on is:
Is any of this necessary? Can Cloud Run services within the same project already see each other?
How do services address one another after this?
Do I gain the ability to use simpler by-convention DNS names? For example, could I have each service in Cloud Run manifest on my VPC as a single first-level DNS name like apione and apitwo, rather than a longer DNS name that I'd then have to pass in through my deployments?
If not, is there any kind of mechanism for services to discover names?
If I put my managed Cloud SQL postgres database on this network, can I control its DNS name?
Finally, are there any other gotchas I might want to be aware of? You can assume my use case is very simple, two or more long lived services on Cloud Run, doing non-HTTP TCP/UDP communications.
I also found a potentially related Google Cloud Run feature request that is worth upvoting if this isn't currently possible.
Cloud Run services are only reachable through HTTP requests; you can't use other network protocols (SSH to log into instances, for example, or raw TCP/UDP communication).
However, Cloud Run can initiate these kinds of connections to external services (for instance, Compute Engine instances deployed in your VPC, thanks to the serverless VPC connector).
The serverless VPC connector lets you build a bridge between the Google Cloud managed environment (where the Cloud Run, Cloud Functions, and App Engine instances live) and your project's VPC, where you have your own instances (Compute Engine, GKE node pools, ...).
Thus you can have a Cloud Run service that reaches Kubernetes pods on GKE through a TCP connection, if that's your requirement.
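As a sketch, a service with a serverless VPC connector attached could open a plain outbound TCP connection to a private address in your VPC like this (10.0.0.5:5432 is a placeholder):

package main

import (
    "log"
    "net"
    "time"
)

func main() {
    // Outbound TCP from Cloud Run, routed through the serverless VPC
    // connector to a private address in your VPC. Address is a placeholder.
    conn, err := net.DialTimeout("tcp", "10.0.0.5:5432", 5*time.Second)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    log.Println("connected to", conn.RemoteAddr())
}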
About service discovery: it's not available yet, but Google is actively working on it, and Ahmet (Google Cloud Developer Advocate on Cloud Run) recently released a tool for it. Nothing is really built in, though.

Google Cloud Run (fully managed) - Can a container redirect to another container?

Background:
Trying to run Vault in Google Cloud Run (fully managed) and trying to decide if setting up HA is possible. Vault requires a single active node (container), and inbound requests to a standby node (container) need to be forwarded or redirected.
Forwarded means a side connection on another port (i.e. clients on tcp/8200 and pod-to-pod on tcp/8201). Is this possible? I don't see anything about this in the docs.
Redirected means that a standby node (container) would need to 307 redirect to the active node's address. This would either be the Cloud Run url or the pod specific url. If it was the Cloud Run url then the load balancer could just send it right back to the standby node (loop); not good. It would need to be the pod url. Would the Cloud Run "proxy" (not sure what to call it) be able to accept the client request but do an internal redirect from pod to pod to reach the active pod?
It seems like you’re new to the programming and traffic serving model of Cloud Run. I recommend checking out documentation and https://github.com/ahmetb/cloud-run-faq for some answers.
Briefly answering some of your points:
only 1 port number can be exposed to the outside world from a container running on Cloud Run
Cloud Run apps are only accessible via HTTPS (includes gRPC) protocol over port :443.
you cannot guarantee that exactly 2 containers run at a time on Cloud Run (that's not what it's designed for; Kubernetes or VMs are more suitable for that).
Cloud Run is, by definition, for running stateless HA apps
there's no such thing as a "pod URL" in Cloud Run; multiple replicas of an app will have the same address.
as you said, Cloud Run cannot distinguish multiple instances of the same app. if a container forwards a request to its own URL, it might end up getting the request again.
Your best bet is to deploy these two containers as separate applications to Cloud Run, so they have different URLs and different lifecycles. You can set "maximum instances" to 1 to ensure that VaultService1 and VaultService2 never get additional replicas.
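If you go the redirect route, the standby deployment could then be as simple as this sketch; the active service's URL is a placeholder:

package main

import (
    "log"
    "net/http"
    "os"
)

// activeURL is a placeholder for the URL of the active Vault service.
const activeURL = "https://vaultservice1-xyzxyzxyz-ew.a.run.app"

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Vault uses 307 so the method and request body are preserved
        // across the redirect.
        http.Redirect(w, r, activeURL+r.URL.Path, http.StatusTemporaryRedirect)
    })
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080" // local development fallback
    }
    log.Fatal(http.ListenAndServe(":"+port, nil))
}

Because the two services have distinct URLs, the redirect can never loop back to the standby the way a single shared Cloud Run URL would.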

Restrict network activity in Google Cloud Run

I'm using Cloud Run containers to run untrusted (user-supplied) code. The container receives a POST request, runs the code, and responds with the result. For security reasons, it's deployed on a locked down service account, but I also want to block all other network activity. How can this be accomplished?
Cloud Run (managed) currently doesn't offer firewall restrictions to selectively block inbound or outbound traffic by IP/host. I'm assuming you're trying to block connections initiated from the container to the outside. In the future, Cloud Run plans to add support for the Google Cloud VPC Service Controls feature, which might help.
However, if you have the option of using Cloud Run for Anthos (on GKE), which has a similar developer experience but runs on Kubernetes clusters, you can easily write Kubernetes NetworkPolicy policies (I have some recipes here) to control what traffic can come and go from the running containers. You can also use GCE firewall rules and VPC Service Controls when using a Kubernetes cluster.
Other than that, your only option in a Cloud Run (fully managed) environment is to use the Linux iptables command while starting your container to block certain network patterns. Importantly, note that Cloud Run (fully managed) runs on a gVisor sandbox, which emulates system calls, and many iptables features are currently not implemented/supported in gVisor. Looking at the issue tracker and patches, I can tell that it's on the roadmap, and some of it may even be working today.
You could couple the Cloud Run (managed) deployment to a VPC network that doesn't have any internet access.
I figured this out for my use case (blocking all egress).
In the first generation of Cloud Run at least, there are two eth interfaces: eth0 and eth2. Blocking traffic on eth2 blocks egress.
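# Drop every outgoing packet routed via eth2, the interface observed to carry egress traffic here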
iptables -I OUTPUT -o eth2 -j DROP
Run this on startup of the container/app, and then make sure the running application is not root (and hence cannot undo this).