I have a live app running on Google Cloud Run.
I have a bug in my app that I want to troubleshoot in VS Code by setting breakpoints in the debugger.
I am using ngrok.io to expose my localhost. I want all traffic that hits Google Cloud Run to be forwarded to my ngrok.io URL, so that I can receive each request on my local server and debug it easily.
Is this possible with Google Cloud Run?
I would really appreciate any help here.
Thanks
Regards
Ayyaz
If you use only Cloud Run, you can't achieve that. If you put a Load Balancer in front of Cloud Run, you can change the backend to route the traffic somewhere other than Cloud Run.
You can also run your container locally and replay the requests you want against it. You can find the requests in the Cloud Run logs, resend them to your local environment, and debug your code there.
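For example, a logged request can be replayed against the local container with curl. The values below are placeholders you would copy out of a Cloud Run request log entry (httpRequest.requestMethod and httpRequest.requestUrl); the payload is hypothetical, since request bodies are not logged.

```shell
# Hypothetical values copied from a Cloud Run request log entry
# (httpRequest.requestMethod and httpRequest.requestUrl).
METHOD="POST"
LOGGED_URL="https://my-service-abc123-uc.a.run.app/api/orders"
LOCAL_BASE="http://localhost:8080"   # where the container runs locally

# Keep the path, swap the Cloud Run host for the local one.
REQUEST_PATH="/${LOGGED_URL#*://*/}"

# Request bodies are not logged, so the payload must be reconstructed by hand.
curl -i --max-time 5 -X "$METHOD" "$LOCAL_BASE$REQUEST_PATH" \
     -H "Content-Type: application/json" \
     -d '{"example": "payload"}' \
  || echo "Is the container running on $LOCAL_BASE?"
```

With the container started locally (for example via docker run -p 8080:8080), you can then attach the VS Code debugger to the local process and step through the exact request that failed in production.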
Related
I have a dockerized Node.js Express application that I am migrating to AWS from Google Cloud. I had successfully done this on the same project before deciding Cloud Run was more cost-effective because of its free tier. Now I want to switch back to Fargate, but I am unable to do it again, due to what I'm guessing is a missing crucial step. For a minimal setup, I used the following guide: https://docs.docker.com/cloud/ecs-integration/ Essentially, it uses docker compose up with the aws context and a project name to deploy to ECS and Fargate.
The Load Balancer gives me a public DNS name in the format xxxxx.elb.us-west-2.amazonaws.com, and I have defined port 5002 in my Docker container. I know the issue is not related to exposing port numbers or anything code-related, since I had this running successfully on Google Cloud Run. When I try to hit any of my Express endpoints by sending a POST to xxxxx.elb.us-west-2.amazonaws.com:5002/my_endpoint, I end up with Error: Request Timed Out.
Note: I have already verified that my inbound security rules have been set to all traffic.
I am very new to AWS, so would love guidance if I am missing a critical step.
Thanks!
EDIT (SOLUTION): It turns out everything was deploying correctly, but after checking the CloudWatch logs, I found that Fargate can't read environment variables defined inline in the docker-compose file. Instead, they need to be defined in a .env file that is passed to docker compose through the --env-file flag. My code was trying to listen on a port read from an environment variable that was undefined, which caused the error I was seeing in CloudWatch.
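For reference, a minimal sketch of that fix (the service name and variable are hypothetical): keep the values in a .env file referenced from the compose file, and deploy with docker compose --env-file .env up.

```yaml
# .env -- values that Fargate could not see when they were inlined in compose:
#   PORT=5002

# docker-compose.yml (excerpt, hypothetical service name)
services:
  api:
    build: .
    ports:
      - "5002:5002"
    env_file:
      - .env   # deployed with: docker compose --env-file .env up
```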
I have built a Django backend and deployed it on Cloud Run. I have also built a React frontend that is deployed on Cloud Run, and the frontend calls the Django backend. Everything works while the backend allows all traffic, but when I change it to "Allow internal traffic and traffic from Cloud Load Balancing" I get a 403 error. Both services use a VPC connector, and both Cloud Run services allow unauthenticated invocations.
Focus on your architecture and where the code is running.
Your backend runs on Cloud Run.
Your frontend? It's served by Cloud Run, but executed in your browser.
That's why: your browser doesn't have a Serverless VPC connector or anything like it, so the request to the backend comes from the internet, not from your Cloud Run frontend.
Cloud Composer is Google Cloud's offering of Apache Airflow, the workflow management platform.
Composer deploys the Airflow web server in an App Engine instance, so the URL of the deployed web app is not customizable. As a service deployed on App Engine, the host name of the URL ends in ".appspot.com", but it has an automatically generated prefix and is not easily predictable.
How can I assign a custom, easier to remember host name to point to this service?
In particular, there are firewall rules in place, so a firewall exception for *.appspot.com would be too wide.
You can take inspiration from my article and do a similar thing, not with Cloud Run but with the App Engine URL.
I mean:
Create an internet NEG pointing to appspot.com.
Add a Host header equal to your Cloud Composer appspot URL.
Create your Load Balancer with the domain name that you want.
I didn't test this; let me know if it's suitable and if it works for you.
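The steps above might look like the following gcloud sketch (untested, matching the answer's caveat; the NEG and backend names and the Composer host are placeholders). It is written as a function so nothing runs until you call it with an authenticated gcloud.

```shell
# Placeholders -- substitute your Composer web server host and preferred names.
COMPOSER_HOST="a1b2c3d4e5f6g7-tp.appspot.com"
NEG="composer-neg"
BACKEND="composer-backend"

# Call setup_composer_lb after `gcloud auth login` / project selection.
setup_composer_lb() {
  # 1. Internet NEG whose endpoint is the appspot.com FQDN.
  gcloud compute network-endpoint-groups create "$NEG" --global \
      --network-endpoint-type=internet-fqdn-port
  gcloud compute network-endpoint-groups update "$NEG" --global \
      --add-endpoint="fqdn=$COMPOSER_HOST,port=443"

  # 2. Backend service that rewrites the Host header to the Composer URL.
  gcloud compute backend-services create "$BACKEND" --global --protocol=HTTPS
  gcloud compute backend-services add-backend "$BACKEND" --global \
      --network-endpoint-group="$NEG" --global-network-endpoint-group
  gcloud compute backend-services update "$BACKEND" --global \
      --custom-request-header="Host: $COMPOSER_HOST"

  # 3. Attach "$BACKEND" to an HTTPS Load Balancer with your own domain
  #    (URL map, target proxy, forwarding rule) as usual.
}
```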
I am working with the GCP Cloud Tasks API on localhost at the moment. I am able to enqueue tasks in Cloud Tasks from localhost by injecting my default credentials.
I am passing a localhost URL as the callback to hit when the task executes. Is this possible to do for development?
Or does the callback URL have to be a hosted endpoint?
See Cloud Tasks documentation:
With this release of HTTP Targets, Cloud Tasks handlers can now be run on any HTTP endpoint with a public IP address, such as Cloud Functions, Cloud Run, GKE, Compute Engine, or even an on-prem web server. Your tasks can be executed on any of these services in a reliable, configurable fashion.
Meaning, it's currently not possible to pass a localhost URL as a callback when executing tasks. For development purposes, Cloud Tasks doesn't have an official emulator yet but a feature request already exists. Please make sure to "star" it so it can gain traction.
As an alternative to your objective, here are a few third-party Cloud Tasks emulators so you can test locally. Check out the following links:
https://gitlab.com/potato-oss/google-cloud/gcloud-tasks-emulator
https://github.com/aertje/cloud-tasks-emulator
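Alternatively, since the handler only needs a publicly reachable endpoint, another common development workaround (not an official emulator) is to tunnel localhost through something like ngrok and enqueue a task pointing at the tunnel URL. The queue name, location, and tunnel URL below are placeholders, and the command is wrapped in a function so it only runs once gcloud is authenticated.

```shell
# Placeholders -- substitute your own queue, region, and tunnel URL.
QUEUE="my-queue"
LOCATION="us-central1"
TUNNEL_URL="https://example-tunnel.ngrok.io/task-handler"   # forwards to localhost

# Call enqueue_dev_task after authenticating gcloud against your project.
enqueue_dev_task() {
  gcloud tasks create-http-task \
      --queue="$QUEUE" \
      --location="$LOCATION" \
      --url="$TUNNEL_URL" \
      --method=POST \
      --body-content='{"example": "payload"}'
}
```

When the task fires, Cloud Tasks POSTs to the public tunnel URL, ngrok forwards it to your local server, and you can hit breakpoints in your handler.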
I'm new to GCP and trying to make heads or tails of it. So far, I've experimented with GKE and Cloud Run.
In GKE, I can create a Workload (deployment) for a service of any kind under any port I like and allocate resources to it. Then I can create a load balancer and open the ports from the pods to the Internet. The load balancer has an IP that I can use to access the underlying pods.
On the other hand, when I create a Cloud Run service, I give it a Docker image and a port, and once the service is up and running, it exposes an HTTPS URL. The port that I specify in Cloud Run is the container's internal port, and if I want to access the URL, I have to do so through the default port (443 for HTTPS).
Does this mean that Cloud Run is designed only for HTTP services under port 80? Or maybe I'm missing something?
Technically "no", Cloud Run cannot be used for non-HTTP services. See Cloud Run's container runtime contract.
But also "sort of":
The URL of a Cloud Run service can be kept "private" (and it is by default); this means that nobody but specific identities is allowed to invoke the Cloud Run service. (See this page to learn more.)
The container must listen for requests on a certain port, and it does not have CPU outside of request processing. However, it is very easy to wrap your binary in a lightweight HTTP server. See for example the shell sample, which uses a very small Go HTTP server to invoke an arbitrary shell script.