Target localhost from GCP Cloud Tasks

I am working with the GCP Cloud Tasks API on localhost at the moment. I am able to enqueue tasks in Google from localhost by injecting my default credentials.
I am passing a localhost URL as the callback to hit when the task executes. Is this possible to do for development?
Or does the callback URL have to be a hosted endpoint?

See Cloud Tasks documentation:
With this release of HTTP Targets, Cloud Tasks handlers can now be run on any HTTP endpoint with a public IP address, such as Cloud Functions, Cloud Run, GKE, Compute Engine, or even an on-prem web server. Your tasks can be executed on any of these services in a reliable, configurable fashion.
Meaning, it's currently not possible to pass a localhost URL as the callback when executing tasks; the handler has to live at a publicly reachable address. For development purposes, Cloud Tasks doesn't have an official emulator yet, but a feature request already exists. Please make sure to "star" it so it can gain traction.
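For context, enqueueing a task with an HTTP target looks roughly like the sketch below, using the Node client library. The project, location, queue, and handler URL are placeholders; the point is that the url must be publicly reachable.

```ts
import { CloudTasksClient } from '@google-cloud/tasks';

const client = new CloudTasksClient();

// Enqueue a task whose handler is a public HTTPS endpoint.
// Project, location, queue, and URL below are placeholders.
await client.createTask({
  parent: client.queuePath('my-project', 'us-central1', 'my-queue'),
  task: {
    httpRequest: {
      httpMethod: 'POST',
      url: 'https://my-handler.example.com/tasks/handle', // must be publicly reachable
      headers: { 'Content-Type': 'application/json' },
      body: Buffer.from(JSON.stringify({ id: 42 })).toString('base64'),
    },
  },
});
```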
As an alternative for your objective, here are a few third-party Cloud Tasks emulators you can use to test locally:
https://gitlab.com/potato-oss/google-cloud/gcloud-tasks-emulator
https://github.com/aertje/cloud-tasks-emulator
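Both work the same way in spirit: you run the emulator locally and point the client library at it instead of the real API. Here is a rough sketch with the Node client over an insecure gRPC channel; the port is an assumption (match it to whatever your emulator listens on), and because the dispatcher runs on your own machine, a localhost callback URL works:

```ts
import { CloudTasksClient } from '@google-cloud/tasks';
import { credentials } from '@grpc/grpc-js';

// Point the client at a locally running Cloud Tasks emulator instead
// of the real API. The port (8123) is an assumption; use whatever
// your emulator listens on.
const client = new CloudTasksClient({
  servicePath: 'localhost',
  port: 8123,
  sslCreds: credentials.createInsecure(),
});

// With the emulator dispatching tasks from your own machine, the
// callback URL can be localhost.
await client.createTask({
  parent: client.queuePath('dev-project', 'us-central1', 'dev-queue'),
  task: {
    httpRequest: { httpMethod: 'POST', url: 'http://localhost:3000/handler' },
  },
});
```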

Related

How to forward traffic to specific url in google cloud run?

I have a live app running on Google Cloud Run.
I have a bug in my app that I want to troubleshoot through VSCODE by adding breakpoints through Debugger.
I am using ngrok.io to expose my localhost. I want all traffic that hits Google Cloud Run to be forwarded to my ngrok.io URL, so that I can receive the request on my local server and debug it easily.
Is this possible in Google Cloud Run?
If you use only Cloud Run, you can't achieve that. If you put a Load Balancer in front of Cloud Run, it's possible to change the backend and route the traffic somewhere other than Cloud Run.
You can also run your container locally and replay the requests you want against it. You can find the requests in the Cloud Run logs, resend them to your local environment, and debug your code there.
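If you go the replay route, it can be as simple as re-issuing the logged request against the local container. A minimal sketch, assuming the container runs locally on port 8080; the path and payload are placeholders to be copied from the log entry:

```ts
// Replay a request captured from the Cloud Run logs against a locally
// running copy of the container (assumed to listen on port 8080).
// Path and payload are placeholders; copy them from the log entry.
const res = await fetch('http://localhost:8080/api/orders', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ orderId: 123 }),
});
console.log(res.status, await res.text());
```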

Private service to service communication for Google Cloud Run

I'd like to have my Google Cloud Run services privately communicate with one another over non-HTTP and/or without having to add bearer authentication in my code.
I'm aware of this documentation from Google which describes how you can do authenticated access between services, although it's obviously only for HTTP.
I think I have a general idea of what's necessary:
Create a custom VPC for my project
Enable the Serverless VPC Connector
What I'm not totally clear on is:
Is any of this necessary? Can Cloud Run services within the same project already see each other?
How do services address one another after this?
Do I gain the ability to use simpler by-convention DNS names? For example, could I have each service in Cloud Run manifest on my VPC as a single first level DNS name like apione and apitwo rather than a larger DNS name that I'd then have to hint in through my deployments?
If not, is there any kind of mechanism for services to discover names?
If I put my managed Cloud SQL postgres database on this network, can I control its DNS name?
Finally, are there any other gotchas I might want to be aware of? You can assume my use case is very simple: two or more long-lived services on Cloud Run doing non-HTTP TCP/UDP communication.
I also found a potentially related Google Cloud Run feature request that is worth upvoting if this isn't currently possible.
Cloud Run services are only reachable through HTTP requests; you can't use other network protocols (SSH to log into instances, for example, or raw TCP/UDP communication).
However, Cloud Run can initiate these kinds of connections to external services (for instance, Compute Engine instances deployed in your VPC, thanks to the Serverless VPC Connector).
The Serverless VPC Connector creates a bridge between the Google-managed environment (where the Cloud Run, Cloud Functions, and App Engine instances live) and your project's VPC, where your own instances (Compute Engine, GKE node pools, ...) run.
Thus you can have a Cloud Run service reach Kubernetes pods on GKE through a TCP connection, if that's your requirement; see the sketch below.
About service discovery: it's not available yet, but Google is actively working on it, and Ahmet (a Google Cloud Developer Advocate on Cloud Run) recently released a tool for it. Nothing is really built in, though.
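To make the TCP path concrete, here is a minimal Node sketch of an outbound connection from a Cloud Run service through a Serverless VPC Connector. The internal IP and port are placeholders for whatever your GKE service or Compute Engine instance exposes on the VPC:

```ts
import * as net from 'net';

// Outbound TCP from Cloud Run, routed through the Serverless VPC
// Connector. 10.0.0.5:5432 is a placeholder internal address on your VPC.
const socket = net.connect({ host: '10.0.0.5', port: 5432 }, () => {
  socket.write('ping\n');
});
socket.on('data', (chunk) => {
  console.log('reply:', chunk.toString());
  socket.end();
});
socket.on('error', (err) => console.error('connection failed:', err.message));
```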

Why are outbound SSH connections from Google CloudRun to EC2 instances unspeakably slow?

I have a Node API deployed to Google Cloud Run and it is responsible for managing external servers (clean, new Amazon EC2 Linux VMs), including over SSH and SFTP. SSH and SFTP actually work eventually, but the connections take 2-5 MINUTES to initiate. Sometimes they time out with handshake timeout errors.
The same service running on my laptop, connecting to the same external servers, has no issues and the connections are as fast as any normal SSH connection.
The deployment on CloudRun is pretty standard. I'm running it with a service account that permits access to secrets, etc. Plenty of memory allocated.
I have a VPC Connector set up, and have routed all traffic through the VPC connector, as per the instructions here: https://cloud.google.com/run/docs/configuring/static-outbound-ip
I also tried setting UseDNS no in the /etc/ssh/sshd_config file on the EC2 instance, as per some online suggestions about slow SSH logins, but that did not make a difference.
I have rebuilt and redeployed the project a few dozen times and all tests are on brand new EC2 instances.
I am attempting these connections using open-source wrappers around the Node ssh2 library: node-ssh and ssh2-sftp-client.
Ideas?
Cloud Run only gives a container CPU while it is handling an active HTTP request.
You probably don't have an active request while these connections are being set up, and outside of an active request the CPU is throttled, which is why the handshakes crawl or time out.
A better fit for this pipeline is Cloud Workflows plus regular Compute Engine instances.
You can set up a workflow that starts a Compute Engine instance for the task and stops it once the steps have finished.
I am the author of the article Run shell commands and orchestrate Compute Engine VMs with Cloud Workflows; it will guide you through the setup.
Executing the workflow can be triggered by Cloud Scheduler or by an HTTP ping.
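If you go with the HTTP-ping option, an execution can also be started programmatically. A minimal sketch with the Node Cloud Workflows client; the project, region, and workflow names are placeholders:

```ts
import { ExecutionsClient } from '@google-cloud/workflows';

// Start an execution of the VM-orchestration workflow. Project,
// region, and workflow name are placeholders.
const client = new ExecutionsClient();
const [execution] = await client.createExecution({
  parent: client.workflowPath('my-project', 'us-central1', 'vm-pipeline'),
});
console.log('Started execution:', execution.name);
```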

Can GCP's Cloud Run be used for non-HTTP services?

I'm new to GCP and trying to make heads or tails of it. So far, I've experimented with GKE and Cloud Run.
In GKE, I can create a Workload (deployment) for a service of any kind under any port I like and allocate resources to it. Then I can create a load balancer and open the ports from the pods to the Internet. The load balancer has an IP that I can use to access the underlying pods.
On the other hand, when I create a Cloud Run service, I give it a Docker image and a port, and once the service is up and running, it exposes an HTTPS URL! The port that I specify in Cloud Run is the container's internal port, and if I want to access the URL, I have to do so through port 80.
Does this mean that Cloud Run is designed only for HTTP services under port 80? Or maybe I'm missing something?
Technically "no", Cloud Run cannot be used for non-HTTP services. See Cloud Run's container runtime contract.
But also "sort of":
The URL of a Cloud Run service can be kept "private" (and it is by default); this means that nobody but specific identities is allowed to invoke the Cloud Run service. (See this page to learn more.)
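For example, a caller that has the run.invoker role can mint an identity token and invoke the private service with it. A minimal sketch using google-auth-library; the service URL is a placeholder:

```ts
import { GoogleAuth } from 'google-auth-library';

// Invoke a private Cloud Run service with an ID token. The calling
// identity needs the run.invoker role; the URL is a placeholder.
const url = 'https://my-service-abc123-uc.a.run.app';
const auth = new GoogleAuth();
const idTokenClient = await auth.getIdTokenClient(url);
const res = await idTokenClient.request({ url });
console.log(res.status);
```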
The container must listen for requests on a certain port, and it does not get CPU outside of request processing. However, it is very easy to wrap your binary in a lightweight HTTP server. See, for example, the shell sample, which uses a very small Go HTTP server to invoke an arbitrary shell script.
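In the same spirit as that Go sample, here is a minimal Node sketch of the wrapper pattern (not the sample itself); the script path is a placeholder:

```ts
import { createServer } from 'http';
import { execFile } from 'child_process';

// Wrap an arbitrary script in a tiny HTTP server so it can run on
// Cloud Run. /app/task.sh is a placeholder for your own binary.
createServer((req, res) => {
  execFile('/bin/sh', ['/app/task.sh'], (err, stdout, stderr) => {
    if (err) {
      res.writeHead(500);
      res.end(stderr || err.message);
      return;
    }
    res.writeHead(200);
    res.end(stdout);
  });
}).listen(Number(process.env.PORT) || 8080); // Cloud Run injects PORT
```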

Google Cloud Run (fully managed) - Can a container redirect to another container?

Background:
Trying to run Vault in Google Cloud Run (fully managed) and trying to decide if setting up HA is possible. Vault requires a single active node (container), and inbound requests to a standby node (container) need to be forwarded or redirected.
Forwarded means a side connection on another port (i.e. clients on tcp/8200 and pod-to-pod on tcp/8201). Is this possible? I don't see anything about it in the docs.
Redirected means that a standby node (container) would need to send a 307 redirect to the active node's address. That would be either the Cloud Run URL or a pod-specific URL. If it were the Cloud Run URL, the load balancer could just send the request right back to a standby node (a loop), which is not good. It would need to be the pod URL. Would the Cloud Run "proxy" (not sure what to call it) be able to accept the client request but do an internal redirect from pod to pod to reach the active pod?
It seems like you’re new to the programming and traffic serving model of Cloud Run. I recommend checking out documentation and https://github.com/ahmetb/cloud-run-faq for some answers.
Briefly answering some of your points:
Only one port number can be exposed to the outside world from a container running on Cloud Run.
Cloud Run apps are only accessible via the HTTPS protocol (which includes gRPC), over port 443.
You cannot ensure two specific containers are running at a time on Cloud Run (that's not what it's designed for; it's something Kubernetes or VMs are more suitable for).
Cloud Run is, by definition, for running stateless HA apps.
There's no such thing as a "pod URL" in Cloud Run; multiple replicas of an app will have the same address.
As you said, Cloud Run cannot distinguish multiple instances of the same app; if a container forwards a request to its own URL, it might end up receiving the request again.
Your best bet is to deploy these two containers as separate applications on Cloud Run, so they have different URLs and different lifecycles. You can set "maximum instances" to 1 to ensure VaultService1 and VaultService2 never get additional replicas.