Dataflow - Calling an external API using private IPs

I'm running into an issue calling an external API from a Dataflow job.
Dataflow is running under project A, and the API is hosted in GKE in project B, behind Istio. The service account used to run Dataflow has access to resources (like GCS) in both project A and project B.
The projects don't have a default network, and in order to run Dataflow I needed to set the flag --use_public_ips to false. With that, the job runs, but the API call never reaches the API controller and returns the following error:
I/O error while reading input message; nested exception is org.apache.catalina.connector.ClientAbortException: java.net.SocketTimeoutException
I tested the same job in a separate environment that has a default network, with Dataflow and GKE hosted under the same project. In that environment the API call works with --use_public_ips=true and fails with --use_public_ips=false.
My questions are:
1 - What exactly does the --use_public_ips flag change in terms of external access to resources, and how can we configure our services to work with it?
2 - Is there a way to run Dataflow in a project without a default network (subnetwork specified at runtime) without setting the --use_public_ips flag to false?
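For reference, a minimal sketch of the kind of launch being described, assuming a Python Beam pipeline; the script, project, region, and subnetwork names are placeholders:

python my_pipeline.py \
  --runner DataflowRunner \
  --project project-a \
  --region us-central1 \
  --subnetwork regions/us-central1/subnetworks/my-subnet \
  --no_use_public_ips

# Workers without public IPs can reach Google APIs and services only if
# Private Google Access is enabled on the subnetwork they run in:
gcloud compute networks subnets update my-subnet \
  --region us-central1 \
  --enable-private-ip-google-access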


Serverless VPC access connector is in a bad state

Our project uses a Serverless VPC access connector to allow access to a DB over private IP from Cloud Functions and Cloud Run services. It was working flawlessly for a few months, but today I tried to deploy one of the functions that uses the connector and got this message:
VPC connector
projects/xxxx/locations/us-central1/connectors/vpc-connector is not
ready yet or does not exist. Please visit
https://cloud.google.com/functions/docs/troubleshooting for in-depth
troubleshooting documentation.
I went to the Serverless VPC access view and found that the connector does indeed have a red marking on it. When I hover over it, it says
Connector is in a bad state, manual deletion recommended
but I don't know for what reason; the link to logs doesn't show anything for the past 3 months.
I tried to Google the error but without success. I also searched through the logs but didn't find anything relevant.
I'm looking for any hints:
Why did it happen?
How can I fix it? I don't want to recreate the connector; it is referenced by many functions and Cloud Run services.
As the issue was blocking us from deploying Cloud Functions, I was forced to recreate the connector.
But this time the API returned an error:
Error: Error waiting to create Connector: Error waiting for Creating Connector: Error code 7, message: Operation failed: Google APIs Service Agent (<PROJECT_NUMBER>@cloudservices.gserviceaccount.com) needs editor role in the project.
After adding that permission, the old connector started to work again...
There was no such requirement before, but it changed in the meantime.
Spooky: one time something works, another time it doesn't.
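For reference, the grant that resolved this corresponds roughly to the following gcloud command; PROJECT_ID and PROJECT_NUMBER are placeholders for your own values:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudservices.gserviceaccount.com" \
  --role="roles/editor"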

How to connect to Cloud SQL from a Cloud Function without getting an ENOENT error?

First of all, I find Google's cloud docs lacking and somewhat incorrect a fair bit of the time.
I am attempting to connect from a Cloud Function to a Cloud SQL database and I am having endless issues.
Here is the connection error:
"Internal error looking up Cloud SQL instance "project:region:database/.s.PGS""
Error: connect ENOENT /cloudsql/project:region:database/.s.PGSQL.5432
I am able to connect to said database locally with the public IP address and the code all works fine, but when deployed it doesn't work at all.
What I have...
Project A - This has the database, in the australia-southeast1 region.
Project B - This has all the other logic, also in australia-southeast1
(the database is legacy, hence why it's in a different project).
I have a Cloud Scheduler task that triggers a Pub/Sub topic, which in turn triggers the Cloud Function. This process works and logs what it should; it is also where I am seeing the can't-connect error.
The connection host is /cloudsql/projectId:region:database (copied from the Cloud SQL connection page, so I know that isn't the issue).
I have also enabled the Cloud SQL API and Cloud SQL Admin API on both Project A and Project B, and still no luck.
I have also tried with the default service account by adding the Cloud SQL Client role in Project B and then adding Project B's default service account to Project A with the Cloud SQL Client role.
Failing that, I then created a new service account in Project B and gave it Owner permissions, then added that account to Project A with Owner permissions as well. I am still getting this error.
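For reference, the cross-project grant described above corresponds roughly to this gcloud command; the project IDs are placeholders and Project B's default App Engine service account is assumed:

gcloud projects add-iam-policy-binding PROJECT_A_ID \
  --member="serviceAccount:PROJECT_B_ID@appspot.gserviceaccount.com" \
  --role="roles/cloudsql.client"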
I really have no clue now as to what is going on.
We have App Engine services on Project B connecting to Project A without any issues; I am really confused.
Here is the Stackdriver error and my DB connection details from the .env file (screenshots omitted).
UPDATE:
Changing to a different database instance in Project A seems to connect, so it looks like it is possibly a problem with the database instance.
Database 1 is working and I can connect to it.
Database 2 is the one that I cannot get to work.
Database 2 is a clone of Database 1.
In this case, the docs are absolutely correct, but you are using the wrong filepath. The unix socket is located at /cloudsql/project:region:database/.s.PGSQL.5432, not /cloudsql/project:region:database/.s.PGS/.s.PGSQL.5432.
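As a sketch of what that implies for the connection settings, assuming node-postgres and hypothetical .env variable names: the host should be the socket directory only, since the driver appends /.s.PGSQL.5432 to it by itself.

# hypothetical .env values; project, region, and instance are placeholders
# DB_HOST is the socket directory; node-postgres appends /.s.PGSQL.5432
DB_HOST=/cloudsql/project:region:instance
DB_USER=db-user
DB_PASSWORD=db-password
DB_NAME=db-name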

How to achieve multiple GCS backends in Terraform

Within our team, we all have our own dev project, and then we have a test and a prod environment.
We are currently in the process of migrating from Deployment Manager and the gcloud CLI to Terraform. However, we haven't been able to figure out a way to create isolated backends within the GCS backend. We have noticed that the remote backend supports setting a dedicated workspace, but we haven't been able to set up something similar with GCS.
Is it possible to state that Terraform resource A will have a configurable backend that we can adjust per project, or is the equivalent possible with workspaces?
So that we can use either tfvars and var parameters to switch between projects?
As it stands, every time we attempt to make the backend configurable through variables, terraform init fails with:
Error: Variables not allowed
How does one go about creating isolated backends for each project?
Or, if that isn't possible, how can we guarantee that with multiple projects a shared backend state will not collide, causing the state to be incorrect?
Your backend (I mean your backend bucket) must be known when you run your terraform init command.
If you don't want to use workspaces, you have to customize the backend value before running the init. We use make to achieve this: depending on the environment, make creates a backend.tf file with the correct backend name and then runs the init command.
EDIT 1
We have this piece of shell script which creates the backend file before triggering the terraform command (it's our Makefile that does this):
# Generate the GCS backend config for the current environment
cat > $TF_export_dir/backend.tf << EOF
terraform {
  backend "gcs" {
    bucket = "$TF_subsidiary-$TF_environment-$TF_deployed_application_code-gcs-tfstatebackend"
    prefix = "terraform/state"
  }
}
EOF
Of course, the bucket name pattern is specific to our project. $TF_environment is the most important part: depending on which env value is set, a different bucket is reached.
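An alternative to generating the whole file, assuming Terraform 0.12 or later, is Terraform's partial backend configuration: commit a backend block with the bucket omitted, and supply the bucket at init time. The bucket name below is a placeholder:

# backend.tf, committed with the bucket intentionally left out
terraform {
  backend "gcs" {
    prefix = "terraform/state"
  }
}

# supplied per environment at init time
terraform init -backend-config="bucket=myteam-dev-tfstatebackend"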

Turn off the v0.1 and v1beta1 endpoints via GCP console

I have a Flutter + Firebase app and received an email about "Legacy GAE and GCF Metadata Server endpoints will be turned down on April 30, 2020". I updated it to v1, and at the end of the email it suggests turning off the legacy endpoints completely. I'm using Google Cloud Functions, and the email says:
If you are using App Engine Standard or Cloud Functions, set the following environment variable: DISABLE_LEGACY_METADATA_SERVER_ENDPOINTS=true.
Upon further research, this can be done through the console (https://cloud.google.com/compute/docs/storing-retrieving-metadata#custom). It says to add it as custom metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#disable-legacy-endpoints), but I'm not sure if I'm doing this right.
For additional info, the email was triggered by a few Cloud Functions I have where I use the Firebase Admin SDK to send push notifications (via Cloud Messaging).
The custom metadata feature you mention is meant to be used with Compute Engine; it allows you to pass arbitrary values to your project or instance and to set startup and shutdown scripts. It's a handy way to pass common environment variables to all the GCE VMs in your project. You can also use custom metadata in App Engine Flexible instances, because they are actually Compute Engine VMs in your project running your App Engine code.
Cloud Functions and App Engine Standard are fundamentally different in that they don't run in your project but in a Google-owned project. This makes your project-wide custom metadata unreachable to them.
For this reason, for Cloud Functions you'll need to set a CF-specific environment variable by either:
using the --set-env-vars flag when deploying your function with the gcloud functions deploy command
adding it to the environment variables section of your function when creating it via the Developer Console
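For instance, the first option could look roughly like this; the function name, runtime, and trigger flags are placeholders to be swapped for whatever the function is already deployed with:

gcloud functions deploy FUNCTION_NAME \
  --runtime nodejs10 \
  --trigger-http \
  --set-env-vars DISABLE_LEGACY_METADATA_SERVER_ENDPOINTS=true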

Error - functions: failed to create function dialogflowFirebaseFulfillment

When I'm trying to deploy a Firebase function from my local machine, I'm getting this error:
functions: failed to create function dialogflowFirebaseFulfillment
HTTP Error: 400, Default service account 'project-id@appspot.gserviceaccount.com' doesn't exist. Please recreate this account (for example by disabling and enabling the Cloud Functions API), or specify a different account.
The project that I'm trying to deploy is https://github.com/actions-on-google/codelabs-nodejs/tree/master/level1-complete.
It seems your service account was removed. You may want to check whether your Firebase & Actions on Google projects have been removed or not.
If they are not, check the service accounts on console.cloud.google.com and make sure all the accounts you are deploying with are the same (Firebase, Dialogflow, App Engine, etc.). Also, disabling and re-enabling the Cloud Functions API may help, as mentioned in the error.
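For anyone doing these checks via the CLI, a minimal sketch with gcloud, where PROJECT_ID is a placeholder:

# disable and re-enable the Cloud Functions API to recreate the account
gcloud services disable cloudfunctions.googleapis.com --project PROJECT_ID
gcloud services enable cloudfunctions.googleapis.com --project PROJECT_ID

# confirm the default service account now exists
gcloud iam service-accounts list --project PROJECT_ID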
I notice that your error has 'project-id@appspot.gserviceaccount.com'.
Shouldn't the project-id be your actual {project-id} from the Google Action that you created, and not the literal word project-id?