How to connect to Cloud SQL from a Cloud Function without getting an ENOENT error? - google-cloud-platform

First of all, I find Google's cloud docs lacking and somewhat incorrect a fair bit of the time.
I am attempting to connect from a Cloud Function to a Cloud SQL database and I am having endless issues.
Here is the connection error:
"Internal error looking up Cloud SQL instance "project:region:database/.s.PGS""
Error: connect ENOENT /cloudsql/project:region:database/.s.PGSQL.5432
I am able to connect to said database locally with the public IP address and the code all works fine, but when deployed it doesn't work at all.
What I have...
Project A - This has the database, in the australia-southeast1 region.
Project B - This has all the other logic, also in australia-southeast1
(the database is legacy, hence it's in a different project).
I have a Cloud Scheduler task that triggers a Pub/Sub topic, which in turn triggers the Cloud Function. This process works and is logging what it should; it is also where I am seeing the can't-connect error.
The connection host is /cloudsql/projectId:region:database (copied from the Cloud SQL connection page, so I know that isn't the issue).
I have also enabled the Cloud SQL API and the Cloud SQL Admin API on both Project A and Project B, and still no luck.
I have also tried the default service account: I granted it the Cloud SQL Client role in Project B and then added Project B's default service account to Project A with the Cloud SQL Client role as well.
Failing that, I created a new service account in Project B and gave it Owner permissions, then added that account to Project A with Owner permissions too, and I am still getting this error.
I really have no clue now as to what is going on.
We have App Engine services in Project B connecting to Project A without any issues, so I am really confused.
Here is the Stackdriver error.
And my DB connection details via an .env file.
UPDATE:
Changing to a different database instance in Project A seems to connect, so it looks like it is possibly a problem with the database instance itself.
Database 1 is working and I can connect to it.
Database 2 is the one that I cannot get to work.
Database 2 is a clone of Database 1.

In this case, the docs are absolutely correct, but you are using the wrong file path. The Unix socket is located at /cloudsql/project:region:database/.s.PGSQL.5432, not /cloudsql/project:region:database/.s.PGS/.s.PGSQL.5432. The Postgres client appends .s.PGSQL.5432 to whatever socket directory you give it, so the host you configure should be just /cloudsql/project:region:database, with nothing after the instance connection name.
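For reference, here is a minimal sketch of what that looks like from a Node.js Cloud Function using the pg library. The .env variable names (DB_HOST, DB_USER, DB_PASS, DB_NAME) and the function name are assumptions, since the original post's config was only shown in a screenshot:

import { Pool } from "pg";

const pool = new Pool({
  // Must be the socket *directory* only, e.g. "/cloudsql/project:region:instance".
  // node-postgres appends ".s.PGSQL.5432" itself when the host starts with "/".
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  max: 1, // keep the pool small inside a Cloud Function
});

// Hypothetical Pub/Sub-triggered handler mirroring the setup in the question.
export async function handleMessage(): Promise<void> {
  const { rows } = await pool.query("SELECT NOW() AS now");
  console.log("Connected, server time is", rows[0].now);
}

With this in place, appending anything after the instance connection name (such as the stray /.s.PGS in the error above) is exactly what produces the ENOENT.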

Related

Serverless VPC access connector is in a bad shape

Our project uses a Serverless VPC Access connector to allow access to the DB over private IP from Cloud Functions and Cloud Run services. It worked flawlessly for a few months, but today I tried to deploy one of the functions that uses the connector and got this message:
VPC connector projects/xxxx/locations/us-central1/connectors/vpc-connector is not ready yet or does not exist. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation.
I went to the Serverless VPC Access view and found that the connector does indeed have a red marking on it. Hovering over it says
Connector is in a bad state, manual deletion recommended
but I don't know for what reason; the link to the logs doesn't show anything for the past 3 months.
I tried to google the error, but without success.
I also tried searching through the logs, but didn't find anything relevant there either.
I'm looking for any hints:
Why did it happen?
How do I fix it? I don't want to recreate the connector; it is tied to many functions and Cloud Run services.
As the issue was blocking us from deploying Cloud Functions, I was forced to recreate the connector.
But this time the API returned an error:
Error: Error waiting to create Connector: Error waiting for Creating Connector: Error code 7, message: Operation failed: Google APIs Service Agent (<PROJECT_NUMBER>@cloudservices.gserviceaccount.com) needs editor role in the project.
After adding that permission, the old connector started to work again...
There was no such requirement before, but it changed in the meantime.
Spooky; one time something works, the next it doesn't.
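For anyone hitting the same error: the fix is the IAM binding the message asks for, roles/editor for the Google APIs Service Agent (the same thing gcloud projects add-iam-policy-binding would do). Below is a rough TypeScript sketch of that read-modify-write using the googleapis Node.js client; the helper name is hypothetical, and granting the role once from the console or CLI is simpler in practice.

import { google } from "googleapis";

// Hypothetical helper: grants roles/editor to the Google APIs Service Agent
// (<PROJECT_NUMBER>@cloudservices.gserviceaccount.com), per the error above.
async function grantEditorToServiceAgent(projectId: string, projectNumber: string): Promise<void> {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/cloud-platform"],
  });
  const crm = google.cloudresourcemanager({ version: "v1", auth });

  // IAM policies are updated read-modify-write: fetch, edit, write back.
  const { data: policy } = await crm.projects.getIamPolicy({
    resource: projectId,
    requestBody: {},
  });

  const member = "serviceAccount:" + projectNumber + "@cloudservices.gserviceaccount.com";
  const bindings = policy.bindings ?? [];
  const editor = bindings.find((b) => b.role === "roles/editor");
  if (editor) {
    editor.members = Array.from(new Set([...(editor.members ?? []), member]));
  } else {
    bindings.push({ role: "roles/editor", members: [member] });
  }

  await crm.projects.setIamPolicy({
    resource: projectId,
    requestBody: { policy: { ...policy, bindings } },
  });
}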

Informatica Job encounters "Actual File does not have execute permission" error

I'm trying to create a simple mapping task job in Informatica Cloud that copies a text file from a subdirectory to its parent directory. Even if I give both folders 777 permissions on the secure agent where the process runs, I get the following error when I run the process:
"[ERROR]
com.informatica.cloud.api.adapter.runtime.exception.FatalRuntimeException:
Actual File does not have execute permission!!"
How do I resolve this issue?
We found the issue. Salesforce automatically started enforcing "enhanced domains" in sandboxes even though our org isn't ready to use that feature yet. I learned from my client that this was only happening in our sandbox, and the issue started happening when this change was implemented. We temporarily disabled the feature in the Salesforce sandbox and will reactivate it once our third party vendor has our org ready to use enhanced domains.

AWS Glue Development Endpoint Not Working properly

I am trying to use a development endpoint to interactively run and edit ETL scripts, but there seem to be issues with the development endpoint right after creating it: I am getting errors in the Scala/Python REPL and am also unable to SSH tunnel to the remote interpreter.
Let me explain exactly what I did: I created a development endpoint in the AWS console with all the default configurations. While creating it I provided only three things, the 'Development endpoint name', the 'IAM Role', and my public SSH key. This is how it looks after creation.
Right after creating the endpoint, I connect to the Spark/Python REPL. I am able to connect successfully, but within a couple of minutes of connecting, the REPL starts throwing errors before I have written a single line of code. This happens in every REPL on the development endpoint.
Also, when I try to SSH tunnel to the remote interpreter to connect my local Zeppelin notebook, it throws "bind: Cannot assign requested address".
A couple of things are working, though:
I am able to SSH to the endpoint.
I created a SageMaker notebook in AWS Glue attached to this development endpoint, and the notebook seems to work fine, although it surely adds extra cost and I don't want to keep using it.
Can anyone please help with what I am doing wrong? Am I missing any important steps that need to be done on the machine right after creating the development endpoint?
Thanks in advance!
Not very sure about this error, but if you are working with smaller datasets then you may want to use the Docker implementation instead, as it adds no extra cost and you can carry on with your development.
You can refer to this blog on how to set it up:
https://towardsdatascience.com/develop-glue-jobs-locally-using-docker-containers-bffc9d95bd1

Error when trying to connect to a Cloud SQL instance using the Cloud Shell

I've had a Cloud SQL instance for about a year now.
I always accessed it the same way:
I would go to my project on the Cloud Console.
Click on the Cloud Shell icon at the top right (a small right pointing arrow).
A black shell screen would pop up where I would type
gcloud sql connect <my instance> --user=root.
Enter my password.
Now, all of a sudden, I am getting an error message saying:
There was no instance found at projects//instances/ or you are not authorized to connect to it.
I am the owner of the project, and also have Admin rights to the Cloud SQL instance. The project and instance are still there, and my app that accesses the data stored in the instance's database is working fine - therefore I know the database is also present, otherwise my app wouldn't work.
I didn't touch or change anything in the Cloud SQL instance. Suddenly, I simply can't access my database using the exact same procedure I have been using almost every day over the past year now.
I am able to access the database using a local Python script on my laptop and the Cloud SQL Proxy, but I would like to access it from the Cloud Shell again.
Any ideas on what could the problem be?
gcloud components update - updates all of your installed components to the latest version.
gcloud init - reinitializes the gcloud CLI. It performs the following setup steps:
Authorizes gcloud and other SDK tools to access Google Cloud Platform using your user account credentials, or from an account of your choosing whose credentials are already available.
It seems like there was a problem with the GCP Cloud Shell (even though there was no mention of it on the GCP error tracking page). When I logged back in today and followed the same process as above, everything worked well.
It looks like the GCP Cloud Shell can occasionally go rogue and start producing errors. Word of advice: don't panic when this happens (like I did) and start resetting, rebooting, and messing things up. Just wait a day and check back again.

Error establishing connection with local DB and google cloud SQL using data fusion

I need to create a pipeline to export data from a local PostgreSQL DB to Google Cloud SQL using Google Cloud Data Fusion. I am using Wrangler to first test the connections with the local DB and Cloud SQL.
While trying to establish a connection with the local DB, I am getting a connection failed exception. The hostname, port, username, and password are correct.
For establishing a connection with Google Cloud SQL (PostgreSQL), I used this reference to build the JAR, but got a SocketFactory instantiation error.
Steps followed for both:
In Wrangler UI, click add connection
Click databases
Then add the respective jar (JDBC driver)
Add connection details
Kindly help with how to resolve these issues.
Can you provide the full stack trace for the exception you saw? If the UI did not show the exception, you can go to the "SYSTEM ADMIN" link at the top right-hand corner and click "View Logs" for the Wrangler service.