How can I deploy and connect to a PostgreSQL instance in AlloyDB without using a VM? - google-cloud-platform

Currently, I have followed Google's quickstart docs for deploying a simple Cloud Run web server that is connected to AlloyDB. However, the docs all seem to point towards having to use a VM for a PostgreSQL client, which then connects to my AlloyDB cluster instance. I believe a connection can only be made from within the same VPC and/or through a proxy service running on the VM (please correct me if I'm wrong).
I was wondering: if I only want to give access to services within the same VPC, is having a VM a must, or is there another way?

You're correct. AlloyDB currently only allows connections via private IP, so the only way to talk directly to the instances is from within the same VPC. The reason all the tutorials (e.g. https://cloud.google.com/alloydb/docs/quickstart/integrate-cloud-run, which is likely the quickstart you mention) talk about a VM is that in order to create the databases themselves within the AlloyDB cluster, set user grants, etc., you need to be able to talk to it from inside the VPC. Another option would be to set up Cloud VPN to connect your LAN to the VPC directly, but that's slow, costly, and kind of a pain.
Cloud Run itself does not require the VM piece. The quickstart linked above walks through setting up the Serverless VPC Access connector, which is the piece required to connect Cloud Run to AlloyDB. The VM in those instructions is only for configuring the Postgres database itself, so once you've done all the configuration you need, you can shut the VM down so it's not costing you anything. If you need to step back in to make configuration changes, you can spin the VM back up, but it does not need to be running for the Cloud Run -> AlloyDB connection.
Providing public IP functionality for AlloyDB is on the roadmap, but I don't have any kind of timeframe for when it will be implemented.
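To make the Cloud Run side concrete, here is a minimal sketch of a service connecting to AlloyDB over its private IP once the connector is attached. The environment variable names and driver choice (psycopg2) are assumptions, not anything AlloyDB requires; any standard Postgres client works.

    import os
    import psycopg2  # assumes psycopg2-binary is installed in the container

    # Hypothetical setup: the AlloyDB instance's private IP and credentials
    # are passed to the Cloud Run service as environment variables.
    conn = psycopg2.connect(
        host=os.environ["ALLOYDB_PRIVATE_IP"],  # e.g. a 10.x.x.x address in the VPC
        port=int(os.environ.get("ALLOYDB_PORT", "5432")),
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
        connect_timeout=10,
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])
    conn.close()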

Related

How do I connect to Google Cloud SQL from Google Cloud Run via TCP?

Based on my current understanding, when I enable a service connection to my Cloud SQL instance in one of my revisions, the path /cloudsql/[instance name]/.s.PGSQL.5432 becomes populated. This is a UNIX socket connection.
Unfortunately, a 3rd party application I'm using doesn't support UNIX socket connections and as such I'm required to connect via TCP.
Does the Google Cloud SQL Proxy also offer a way for me to connect to Cloud SQL via something like localhost:5432, or an equivalent? Some of the documentation I'm reading suggests that I have to do elaborate networking configuration with private IPs just to enable TCP-based Cloud SQL access for my Cloud Run revisions, but I feel like the proxy is already capable of giving me a TCP connection instead of a UNIX socket.
What is the right and most minimal way forward here, obviously assuming I do not have the ability to modify the code I'm running?
I've also cross posted this question to the Google Cloud SQL Proxy repo.
The easiest and most secure way is to use the private IP. It's not that long or hard; there are three steps:
Create a Serverless VPC Access connector in the same region as your Cloud Run service. Note the VPC network that you use (by default it's "default").
Add the connector to your Cloud Run service and route only private IPs through it.
Add a private connection to your Cloud SQL instance, attached to the same VPC network as your connector.
The cloud configuration is now done. Get the private IP of your Cloud SQL instance and add it to your Cloud Run service's parameters so the application can open a plain TCP connection to that IP.
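For illustration, assuming the private IP and credentials are exposed to the service as environment variables (the variable names here are made up), the application then connects over ordinary TCP, with no Unix socket involved:

    import os
    import psycopg2

    # Hypothetical: CLOUD_SQL_PRIVATE_IP is the instance's private IP noted
    # after the steps above; the driver just sees a normal TCP host and port.
    dsn = (
        f"host={os.environ['CLOUD_SQL_PRIVATE_IP']} port=5432 "
        f"dbname={os.environ['DB_NAME']} "
        f"user={os.environ['DB_USER']} password={os.environ['DB_PASS']}"
    )
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT 1;")
        print(cur.fetchone())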

AWS ECS Task can't connect to RDS Database

I'm a newer AWS user, and today I got stuck while working on a sample project. I successfully created a Docker container that runs a simple R script that connects to my AWS RDS MySQL database and creates and writes some basic files to it. I built a public ECR repository, pushed my Docker image there, and built an ECS cluster and task, choosing Fargate and using the container image from my repository. My task ran and I could see the R code being executed when I went through the logs, but it was never able to connect to the SQL database and exited afterwards.
I had to whitelist my own IP address in the security group for the RDS database so that I can connect to it, so I'm aware I probably have to do that for my ECS task to establish that connection too. But won't that IP address constantly change, since I won't have a static IP for the Fargate server that executes my task? I'm trying to stay on the free tier, so I'm not sure I want to set up an Elastic IP address for this server.
These 2 articles seem close if not the same issue I'm having but I can't figure out a solution. I haven't found any other info.
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-task-database-connection/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-static-elastic-ip-address/
The end goal is to get this sample project running successfully on a fixed schedule, and then running actual scripts on there to help automate things and make my life easier, so this sample project is a first step towards that. Any help or info on the questions I'm having would be appreciated!
Yes, your task is ephemeral (whether you launch it manually or as part of an ECS service), and its private/public IP address may change over time if the task gets replaced. The way to make the connectivity rules stick is to assign a security group to the task (with whatever inbound access you need, I assume, and outbound to everything) and assign another security group to the RDS database that allows inbound access on port 3306 from the security group you assigned to the task. This is the trick: the security group will not change, and you are telling RDS to allow ALL traffic coming from that SG. I see the first article you posted doesn't talk about this part (it should).
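As a rough sketch of that last rule (the security group IDs and region below are placeholders), the ingress rule on the RDS security group that references the task's security group could be created with boto3 along these lines:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Placeholder IDs: TASK_SG is attached to the ECS task's network
    # configuration, RDS_SG is attached to the RDS instance.
    TASK_SG = "sg-0123456789abcdef0"
    RDS_SG = "sg-0fedcba9876543210"

    # Allow MySQL (port 3306) into the RDS security group from anything that
    # carries the task's security group, regardless of the task's current IP.
    ec2.authorize_security_group_ingress(
        GroupId=RDS_SG,
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 3306,
                "ToPort": 3306,
                "UserIdGroupPairs": [{"GroupId": TASK_SG}],
            }
        ],
    )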

Private service to service communication for Google Cloud Run

I'd like to have my Google Cloud Run services privately communicate with one another over non-HTTP and/or without having to add bearer authentication in my code.
I'm aware of this documentation from Google which describes how you can do authenticated access between services, although it's obviously only for HTTP.
I think I have a general idea of what's necessary:
Create a custom VPC for my project
Enable the Serverless VPC Connector
What I'm not totally clear on is:
Is any of this necessary? Can Cloud Run services within the same project already see each other?
How do services address one another after this?
Do I gain the ability to use simpler, by-convention DNS names? For example, could I have each service in Cloud Run appear on my VPC under a single first-level DNS name like apione and apitwo, rather than a longer DNS name that I'd then have to pass in through my deployments?
If not, is there any kind of mechanism for services to discover names?
If I put my managed Cloud SQL postgres database on this network, can I control its DNS name?
Finally, are there any other gotchas I might want to be aware of? You can assume my use case is very simple: two or more long-lived services on Cloud Run, doing non-HTTP TCP/UDP communication.
I also found a potentially related Google Cloud Run feature request that is worth upvoting if this isn't currently possible.
Cloud Run services are only reachable through HTTP requests; you can't use other network protocols (SSH to log into instances, for example, or raw TCP/UDP communication).
However, Cloud Run can initiate these kinds of connections to external services (for instance, Compute Engine instances deployed in your VPC, thanks to the Serverless VPC Access connector).
The Serverless VPC Access connector lets you build a bridge between the Google Cloud managed environment (where the Cloud Run, Cloud Functions, and App Engine instances live) and your project's VPC, where you have your own resources (Compute Engine, GKE node pools, ...).
Thus you can have a Cloud Run service reach a Kubernetes pod on GKE through a TCP connection, if that's your requirement.
As for service discovery, it isn't available yet, but Google is actively working on it, and Ahmet (a Google Cloud Developer Advocate for Cloud Run) recently released a tool for it. Nothing is really built in, though.
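To illustrate the outbound direction only (the internal IP and port below are hypothetical), a Cloud Run service with the connector attached can open an ordinary TCP connection to something that lives inside the VPC:

    import socket

    # Hypothetical endpoint inside the VPC (e.g. a GCE VM or a GKE service on
    # an internal load balancer). With the Serverless VPC Access connector
    # attached, this outbound TCP connection works from Cloud Run; inbound
    # non-HTTP traffic to Cloud Run still does not.
    with socket.create_connection(("10.0.0.12", 6379), timeout=5) as sock:
        sock.sendall(b"PING\r\n")
        print(sock.recv(64))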

Solving connectivity issues to AWS with MariaDB on RDS from local machine

I'm currently developing a small Java web application with the following stack: Java 8, Spring Boot, Hibernate, MariaDB, Docker, AWS (RDS, Fargate, etc.). I use AWS to deploy and run my application. The Java web application runs inside a Docker container managed by AWS Fargate; it communicates with Amazon RDS (a MariaDB instance) via injected secrets and doesn't need to go through the public internet for this communication (it uses the VPC instead). My recent problems began after I rolled out a software update that required some manual database changes with MySQL Workbench, and I could not perform them because of local connectivity problems.
Therefore my biggest problem right now is connectivity to the database from my local machine - I simply can't connect to the RDS instance via MySQL Workbench or even from within the IDE (it used to work before without any such problems). MySQL Workbench gave me the following error message as a hint:
Following the hints from MySQL Workbench, I've also checked that:
I'm using valid database credentials, URL and port (the app in Fargate has the same secrets injected)
The public accessibility flag on RDS is (temporarily) set to "yes"
The database security group allows MySQL/Aurora connections from my IP address range (I've also tested the 0.0.0.0/0 range without further luck)
Therefore my question is: what else should I check to find out the reason for my connectivity failure?
After I changed my laptop's network by switching to mobile internet, the connectivity problem was solved. I therefore suspect that my laptop was not able to establish the socket connection from the previous network (possibly the communication port or DNS was blocked).
So also don't forget to check network connectivity by establishing a socket connection, as described in this answer.
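A quick way to run that check from the local machine, independent of MySQL Workbench (the hostname below is a placeholder for your RDS endpoint):

    import socket

    # Placeholder endpoint: replace with your RDS instance's hostname.
    HOST = "mydb.xxxxxxxx.eu-central-1.rds.amazonaws.com"
    PORT = 3306

    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print("TCP connection succeeded - the port is reachable.")
    except OSError as exc:
        # DNS failures, blocked ports and timeouts all end up here.
        print(f"TCP connection failed: {exc}")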

Connecting to Cloud SQL private IP from GCE VM application

I am checking Cloud SQL private IP connections from different types of clients. I could successfully make a connection from an application hosted in a GKE cluster that was created as a VPC-native cluster, as described here. Having already done this, I was expecting it would be easier to connect to the private IP from the same application (a simple Spring Boot application) hosted in a GCE VM. Contrary to my expectations, this does not appear to be so. It is the same Spring Boot application that I am trying to run inside a VM, but it does not seem to be able to connect to the database. I was expecting some connection error, but nothing shows up - no exception is thrown. What is strange is that I am able to connect to the Cloud SQL private IP via the mysql command line from the same VM, but not from within the Spring Boot application. Has anyone out there faced this before?
The issue was not related to Cloud SQL private IP. As mentioned in my earlier comment, I was passing the active profile info via the Kubernetes pod configuration, so the Dockerfile did not have this info. To fix the issue, I had to pass the active profile info when the program was started outside Kubernetes. This has a lot of helpful answers on how to do it. If the program is started via a docker run command, the active profile can be passed as a command-line argument. See here for a useful reference.
So to summarize, Cloud SQL private IP works fine from a GCE VM. No special configuration is required at the GCE VM end to get this working.
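For completeness, a minimal sketch of that kind of connection from a GCE VM, assuming the mysql-connector-python driver and placeholder connection details (the private IP, credentials and database name are made up):

    import mysql.connector  # pip install mysql-connector-python

    # Placeholder values: the Cloud SQL instance's private IP and credentials.
    conn = mysql.connector.connect(
        host="10.20.0.3",
        user="appuser",
        password="change-me",
        database="appdb",
        connection_timeout=10,
    )
    cur = conn.cursor()
    cur.execute("SELECT NOW()")
    print(cur.fetchone())
    cur.close()
    conn.close()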