Is it possible for Google Cloud Workflows to reach a private VPC (perhaps via a serverless VPC connector)? I can't find anything about it in the documentation. We want to use Workflows to trigger certain things via API on the internal network (no outside access).
Worst case we'll have to proxy it through a Cloud Function
Regards,
Niklas
A VPC connector is one of the most requested features for Cloud Workflows, but for now it's not implemented. There is no ETA for this feature.
For now, a proxy through Cloud Run or Cloud Functions with a serverless VPC connector is required.
As stated in the first sentences of the Workflows documentation, Cloud Workflows is meant to:
link series of serverless tasks together
and it
Combine the power of Google Cloud's APIs, serverless products like Cloud Functions and Cloud Run, and calls to external APIs
So, as you proposed, the workaround is to wrap/proxy the call to your internal API in a call to a Cloud Function or Cloud Run service, with proper authentication/authorization.
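For illustration, here is a minimal sketch of such a proxy as an HTTP Cloud Function in Node.js. It assumes the function is deployed with a serverless VPC connector attached and that INTERNAL_API_URL is an environment variable pointing to your internal endpoint; all names here are hypothetical, not an official pattern.

// index.js - hypothetical proxy, deployed with something like:
// gcloud functions deploy internal-api-proxy --runtime nodejs16 --trigger-http \
//   --vpc-connector YOUR_CONNECTOR --no-allow-unauthenticated
const fetch = require('node-fetch'); // any HTTP client works

exports.internalApiProxy = async (req, res) => {
  // The internal endpoint is reachable only because the function egresses through the VPC connector.
  const url = `${process.env.INTERNAL_API_URL}${req.path}`;
  const upstream = await fetch(url, {
    method: req.method,
    headers: { 'content-type': 'application/json' },
    body: ['GET', 'HEAD'].includes(req.method) ? undefined : JSON.stringify(req.body),
  });
  res.status(upstream.status).send(await upstream.text());
};

Workflows can then call this function over HTTPS with OIDC authentication using its own service account, so the internal API is never exposed publicly without a check.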
Google Cloud Workflows calls come from an unknown IP, which is difficult to route. So you're probably looking for Cloud NAT? It can be set up from the console.
Related
How can I orchestrate and call cloud services from different providers: Google Cloud, AWS and Azure?
We did check workflows, but it seems that workflows are limited in each provider's platform, so if I build a workflow in Azure, I can only include services from Azure, but not from Google Cloud or AWS.
Cloud Workflows is an orchestrator that calls APIs. The integration is better for Google Cloud services (and with Google's security model), but you can also call other APIs, such as the service APIs of other cloud providers.
You can also use Cloud Composer, based on Apache Airflow, which has a large library of operators that help you call different services on different platforms. Because it started as (and remains) an open-source project, it's portable and has no cloud preference.
If you want an orchestration tool for multi-cloud usage, Airflow is probably the best approach, because it offers many operators for each cloud provider.
It's open source and well documented, and there is plenty of support and many examples for each cloud provider.
It's also the best tool if you want to stay cloud agnostic and be able to migrate it easily to another cloud provider if needed.
If you host Airflow in Google Cloud, you can use a managed solution with Cloud Composer 2 (on GKE Autopilot).
Cloud Workflows is a good solution if you stay in the GCP world: it's serverless and based on API calls, but it is less well documented than Airflow for usage outside of GCP. It's also not cloud agnostic and is strongly coupled to GCP.
I recommend using Airflow for your needs.
I would like to connect to a Cloud SQL instance from Cloud Run, using a service account. The connection used to be created within the VPC and we would just provide a connection string with a user and a password to our PostgreSQL client. But now we want the authentication to be managed by Google Cloud IAM, with the service account associated with the Cloud Run service.
On my machine, I can use the enable_iam_login argument to use my own service account. The command to run the Cloud SQL proxy would look like this:
./cloud_sql_proxy -dir=/cloudsql -instances=[PROJECT-ID]:[REGION]:[INSTANCE] \
-enable_iam_login -credential_file=${HOME}/.config/gcloud/application_default_credentials.json
The problem is that I can't seem to find a way to use the IAM authentication method when running the Cloud SQL Proxy from Cloud Run; I can only provide an instance name. Has anyone faced this problem before?
Unfortunately, there isn't a way to configure Cloud Run's use of the Cloud SQL proxy to do this for you.
If you are using Java, Python, or Go, there are language-specific connectors you can use from Cloud Run. They all have the option to use IAM DB AuthN.
For anyone using Node.js on Cloud Run:
you can't enable the -enable_iam_login option on Cloud Run (see the accepted answer), which means you would have to use a connector.
AFAIK, there is no Node.js connector right now.
This means you have two options:
write a connector yourself (good luck) or wait for Google to do it
use built-in database authentication for now (see the sketch below).
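As an illustration of the second option, here is a minimal sketch using the pg library over the Unix socket that Cloud Run mounts when the Cloud SQL instance is attached to the service with --add-cloudsql-instances. The environment variable names are assumptions, and this uses a built-in database user/password, not IAM.

// db.js - connects through the /cloudsql Unix socket provided by Cloud Run
const { Pool } = require('pg');

const pool = new Pool({
  host: `/cloudsql/${process.env.INSTANCE_CONNECTION_NAME}`, // e.g. my-project:europe-west1:my-instance
  user: process.env.DB_USER,     // built-in database user (not IAM)
  password: process.env.DB_PASS, // ideally injected from Secret Manager
  database: process.env.DB_NAME,
});

// Example query
async function listUsers() {
  const { rows } = await pool.query('SELECT id, email FROM users LIMIT 10');
  return rows;
}

module.exports = { pool, listUsers };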
I need help and really don't know where to start. I created a web app that accesses data from Firestore in a GCP project.
Now I need to use GCP Cloud Functions to connect to Firestore and perform CRUD operations.
All are in the same GCP project.
Thanks in advance. I really need to know where to start with Cloud Functions and how to define and access the Firestore DB via code (Node.js).
There is a step-by-step guide on how to set up Firestore with Cloud Functions in this documentation, but summing it up, it's pretty simple. All you need to do is import the Firestore SDK, initialize it, and use it in your code.
You can also use the Admin SDK since Cloud Functions is "server side", and this will allow more elevated access to Firestore from your Cloud Functions.
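For example, a minimal Node.js HTTP Cloud Function using the Admin SDK could look like the sketch below; the collection and field names are only placeholders.

// index.js
const admin = require('firebase-admin');

// On Cloud Functions the Admin SDK picks up the project's default credentials,
// so no key file is needed.
admin.initializeApp();
const db = admin.firestore();

// HTTP-triggered function performing simple CRUD-style operations.
exports.manageItems = async (req, res) => {
  const items = db.collection('items'); // hypothetical collection name

  if (req.method === 'POST') {
    const ref = await items.add({ name: req.body.name, createdAt: Date.now() });
    return res.status(201).send({ id: ref.id });
  }

  if (req.method === 'GET') {
    const snapshot = await items.limit(20).get();
    return res.send(snapshot.docs.map((d) => ({ id: d.id, ...d.data() })));
  }

  res.status(405).send('Method not allowed');
};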
I have a few cloud functions. I also have a nodejs server running in AppEngine. I make calls to my cloud functions from there. Currently I can make calls to my cloud function from anywhere!
Is there any way I can restrict access to my cloud functions to only available when called from my server running on Google App Engine?
You have 2 solutions
The first one is to use a service account as described by AndresMijares, but not to create a new one. Indeed, if you create a new service account and want to use it with App Engine, you need to generate a service account key file and deploy this secret with your code. That's not very secure, because you also need to store this secret safely, and so on.
So, the solution is to use the App Engine default service account, which has this email pattern:
<project_ID>@appspot.gserviceaccount.com
Grant this service account the roles/cloudfunctions.invoker role at the project level, so it can invoke all the functions.
The second solution isn't as good as the first one, but it's also possible. You can update your Cloud Functions and set the ingress parameter to internal. That means only traffic coming from the VPCs in your project can reach the Cloud Functions, including the other resources of your project (like Compute Engine). That's why it's not a very good solution, but at least the Cloud Functions can no longer be invoked from anywhere.
So, to allow App Engine to use your VPC to call the Cloud Function, you need a serverless VPC connector that bridges the serverless world with your VPC. In addition to being less secure, this solution involves additional cost for the serverless VPC connector.
The advantage of the second solution is that you don't have to update your application code to perform a secure call to your Cloud Function. You only update the deployment configuration, and the function is callable only internally.
For the first solution, you need to update your code to add a security token to your request's headers, similar to function-to-function authentication. I personally don't like implementing it this way, because you can't test it locally: locally, you don't have a metadata server!
I wrote an article you can take inspiration from, in particular the part "Avoid metadata server".
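For solution 1, the App Engine side boils down to fetching an identity token for the function's URL and sending it in the Authorization header. A minimal sketch with google-auth-library follows; the function URL is a placeholder.

const { GoogleAuth } = require('google-auth-library');

const auth = new GoogleAuth();
const functionUrl = 'https://REGION-PROJECT_ID.cloudfunctions.net/my-function'; // placeholder

async function callFunction(payload) {
  // On App Engine this uses the default service account via the metadata server;
  // locally you would need a service account key or impersonation instead.
  const client = await auth.getIdTokenClient(functionUrl);
  const res = await client.request({ url: functionUrl, method: 'POST', data: payload });
  return res.data;
}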
EDIT 1
After a deep dive into the App Engine serverless VPC connector and this answer, it turns out that an ingress "internal only" Cloud Function (or Cloud Run service) can only be reached from Cloud Functions or Cloud Run. App Engine doesn't route public traffic through the serverless VPC connector, and thus the second solution isn't possible in the App Engine case.
There are a few ways to do this. You can create a service account under IAM & Admin -> Service Accounts.
You need to apply the Cloud Functions Invoker role to this service account; you can use the gcloud CLI for this:
gcloud beta functions add-iam-policy-binding YOUR-CLOUD-FUNCTION-NAME --member serviceAccount:NAME-OF-YOUR-SERVICE-ACCOUNT@project-name.iam.gserviceaccount.com --role roles/cloudfunctions.invoker --region YOUR-REGION
You will get output like this:
bindings:
- members:
- allUsers
- YOUR SERVICE ACCOUNT
Ideally, you should remove the allUsers member:
gcloud beta functions remove-iam-policy-binding YOUFUNCTIONNAME --member allUsers --role roles/cloudfunctions.invoker --region us-central1
Then you need to make sure your App Engine instances have access to the service account you just created; that should do the trick. Be aware that you might need more configuration depending on your case, but this should give you a good starting point.
Assume I want to deploy multiple microservices using Google Cloud Run, and those microservices will be connected to each other. My questions are as follows:
Does each microservice get deployed separately by creating a Google Cloud Run service?
Then how does each microservice call the others (by using public IPs)?
How do I connect different microservices to different DBs such as MongoDB or CassandraDB? Is there a way to create a NoSQL DB on Compute Engine and access it through Google Cloud Run?
Does each microservice get deployed separately by creating a Google Cloud Run service?
Yes, each microservice is individual and has its own HTTP(S) endpoint if you need it.
If you need to deploy services in bulk, you can always use a CI/CD tool.
Then how does each microservice call the others (by using public IPs)?
When you deploy your service for the first time with an HTTP trigger, you are provided with a unique URL (similar to what happens with Cloud Functions). You can then invoke your service via HTTP as usual.
Of course, if you have many services, calling them blindly isn't the best option. I advise you to use a service mesh (Istio) and/or an API gateway (Cloud Endpoints) in order to have better control and flexibility over your APIs.
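For example, one service can call another simply through its generated URL, roughly like the sketch below (this assumes the target service allows unauthenticated invocations; otherwise you would attach an identity token to the request). The URL and path are placeholders.

const fetch = require('node-fetch');

// Hypothetical URL generated when the "orders" service was deployed.
const ORDERS_URL = 'https://orders-abcdefghij-uc.a.run.app';

async function getOrder(id) {
  const res = await fetch(`${ORDERS_URL}/orders/${id}`);
  if (!res.ok) throw new Error(`orders service returned ${res.status}`);
  return res.json();
}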
How do I connect different microservices to different DBs such as MongoDB or CassandraDB? Is there a way to create a NoSQL DB on Compute Engine and access it through Google Cloud Run?
I don't see why not, but please consider the list of known limitations of Cloud Run (managed): here.
Basically, it doesn't support a VPC connector, so you can't reach the database over a private IP. Also consider the many managed DBs GCP offers; maybe Datastore is good enough for your use case?
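If you do host, say, MongoDB on a Compute Engine VM, the connection from Cloud Run (managed) would then go over the VM's public IP, roughly like this sketch (address and credentials are placeholders; lock the firewall down to the traffic you expect and enable TLS/auth on the database).

const { MongoClient } = require('mongodb');

// Public IP of the Compute Engine VM running MongoDB (placeholder).
const uri = 'mongodb://app_user:APP_PASSWORD@203.0.113.10:27017/mydb?tls=true';

async function countOrders() {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    return await client.db('mydb').collection('orders').countDocuments();
  } finally {
    await client.close();
  }
}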