Viewing requests between Google container and Cloud SQL - google-cloud-platform

We have a GKE application container running Django that connects to a Postgres database on Cloud SQL via private IP. The application is configured to run 10 Django processes per pod. Whenever a new pod is spun up (either when deploying new code or when scaling to meet load), each of the 10 Django processes in that pod encounters exactly one error (the exact error varies, but it is always database-related) when connecting to the database; all subsequent database requests succeed. We suspect that the problem is on the Django side and that the failing requests never even reach Cloud SQL.
How do I view the network requests between the application and Cloud SQL?
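One way to inspect the Django side, at least, is to turn up logging on the django.db.backends logger, which records connection errors with full stack traces (and, when DEBUG = True, every SQL statement) to the pod's stdout. A minimal sketch for settings.py, assuming nothing beyond stock Django (the handler name is illustrative):

# settings.py -- surface database-layer activity so the first failing
# request per process can be inspected via `kubectl logs` (sketch only)
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",  # writes to pod stdout
        },
    },
    "loggers": {
        # SQL statements are only emitted at DEBUG when settings.DEBUG is
        # True; connection errors and their tracebacks surface regardless.
        "django.db.backends": {
            "handlers": ["console"],
            "level": "DEBUG",
        },
    },
}

For the network side of the question, a packet capture on the Postgres port (5432) from inside the pod, combined with the Cloud SQL instance's own logs, would show whether the failing connections ever leave the pod and whether they ever arrive at the instance.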

Related

Docker process exiting when prisma client tries to connect to aurora db

I've got a NestJS application running in Docker on AWS (using Fargate) that I want to connect to an Aurora database using Prisma.
Currently I can run database migrations (from another Fargate task), but whenever my application tries to do anything in the database, the process exits. Nothing shows up in the logs; the container process just exits, and I can't even find an exit code.
The system works locally, and the application runs fine on AWS until it tries to reach the database. It also works if I swap out the Aurora DB for an RDS instance.
The database is set up to allow incoming connections from both the application service security group and the security group used for the migrations task (in the same way).
Any ideas on how I can fix this, or at least debug the issue?
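One low-level check that can help here: a bare TCP reachability test from inside the Fargate task separates networking problems (security groups, routing) from driver-level problems. A sketch in Python, with the endpoint and port as placeholders for your Aurora cluster:

# reachability_check.py -- can this task open a TCP connection to the DB?
# HOST is a hypothetical placeholder for your Aurora cluster endpoint.
import socket

HOST = "my-cluster.cluster-xxxx.eu-west-1.rds.amazonaws.com"  # placeholder
PORT = 5432  # use 3306 for Aurora MySQL

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print("TCP connection succeeded -- security groups/routing look fine")
except OSError as exc:
    print(f"TCP connection failed: {exc} -- check security groups and subnets")

If the TCP connection succeeds but the process still dies silently, the problem is more likely in the client or its TLS/auth settings than in the network path.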

Azure DevOps Pipeline fails on creating database in Django test

I have been trying to build an Azure DevOps pipeline for CI/CD for my Django project. The code is pulled from a GitHub repo (and is in fact already deployed on Azure App Service). However, when the pipeline runs python manage.py test, I get the following error:
Creating test database for alias 'default'...
pyodbc.OperationalError: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')
##[error]Bash exited with code '1'.
I have tried extensively to whitelist Azure DevOps, but the error persists. How can I resolve this so that the pipeline can run tests for CI/CD?
Which agent are you using: a hosted agent or a self-hosted agent?
If you are using a hosted agent, then since the pipeline code runs on the hosted agent, you should add the hosted agent's IP addresses to the whitelist instead of the Azure DevOps Services IPs; what you whitelisted is the Azure DevOps Services IPs. For the hosted agent IPs, we publish a weekly JSON file listing IP ranges for Azure data centers, broken out by region. To obtain the complete list of possible IP ranges for your agent, you must use the IP ranges from all of the regions contained in your geography.
If you are using a self-hosted agent, check your local agent server's IP and add that instead.
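A sketch of pulling the relevant ranges out of that weekly JSON file, assuming the published Service Tags schema (the region names and the local file name are placeholders):

# extract_ranges.py -- print the address prefixes for the regions in your
# geography from the downloaded weekly Azure IP Ranges and Service Tags file.
import json

REGIONS = {"AzureCloud.westeurope", "AzureCloud.northeurope"}  # your geography

with open("ServiceTags_Public.json") as f:
    data = json.load(f)

for value in data["values"]:
    if value["name"] in REGIONS:
        for prefix in value["properties"]["addressPrefixes"]:
            print(prefix)  # add each of these to the SQL server firewall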

memorystore and instances in different regions (GCP)

I'm building a chat app in React Native, and the backend is in Node.js. I'm using GKE to deploy the server code.
I'm using a Cloud SQL PostgreSQL instance, connecting via internal IP. This works. I also use a Memorystore (Redis) instance, and here is the problem.
For autoscaling, I'm planning to run multiple GKE clusters in different regions (for now, europe-west1 and us-central1). I have configured a load balancer with one backend containing all the instance groups. I don't know if this is the correct/ideal solution, but it works. The problem is that you can only connect to a Redis database from an instance within the same region. If I use us-central1 as the region for my Memorystore instance, I cannot connect to it from the VMs in the EU cluster I created.
What is the best solution to overcome this problem? I've created an extra VM in the same region as the Redis instance, with HAProxy configured as a reverse proxy to the Memorystore instance; this way I can connect to the Redis database from all instances, no matter what region they're in. But I don't know if this is the correct solution.
EDIT:
I'm using WebSockets (socket.io) for chat messages. Because I'm planning to use multiple servers, I need a centralized database to store (references to) the socket IDs, so users can send messages to users that are connected to other servers.
I'm thinking Redis is the correct solution, for a number of reasons:
I can use socket.io-redis to store the socket IDs in Redis
fast response times
I don't know the exact size of the data stored, but it's definitely not in the megabytes
I'm using a PostgreSQL database to store other information (like usernames and passwords), but it seems to me that Redis is a far better solution for real-time applications.
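As a sanity check on the HAProxy approach described above, connectivity can be verified from any region by pointing a Redis client at the proxy VM instead of the Memorystore IP. A sketch using redis-py (the host IP is a placeholder for the proxy VM; the backend itself would use socket.io-redis in Node):

# redis_via_proxy.py -- verify cross-region access through the HAProxy VM.
# The proxy VM sits in the same region as the Memorystore instance and
# forwards port 6379 to it; 10.132.0.5 is a placeholder for its address.
import redis

r = redis.Redis(host="10.132.0.5", port=6379, socket_timeout=5)
try:
    print("ping:", r.ping())  # True if the proxy -> Memorystore path works
except redis.ConnectionError as exc:
    print("cannot reach Memorystore via proxy:", exc)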

Google Cloud Composer and Google Cloud SQL Proxy

I have a project with Cloud Composer and Cloud SQL.
I am able to connect to Cloud SQL because I edited the YAML of airflow-sqlproxy-service and added my Cloud SQL instance to the Cloud SQL Proxy used for the airflow-db, mapped to port 3307.
The workers can connect to airflow-sqlproxy-service on port 3307, but I think the webserver can't connect to it.
Do I need to add some firewall rule mapping port 3307 so that the webserver/UI can connect to airflow-sqlproxy-service?
Screenshots: https://i.stack.imgur.com/LwKQK.png https://i.stack.imgur.com/CJf7Q.png https://i.stack.imgur.com/oC2dJ.png
Best regards.
Composer does not currently support configuring additional SQL proxies for the webserver. One workaround for cases like this is to have a separate DAG that loads Airflow Variables with the information needed from the other database (via the workers, which do have access), and then to generate a DAG based on the Variable, which the webserver can access.
https://github.com/apache/incubator-airflow/pull/4170, which defines a Cloud SQL connection type, was recently merged (it is not yet available in Composer). That might serve these use cases in the future.
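A rough sketch of the first half of that workaround, assuming an Airflow 1.10-era environment and a Postgres connection the workers can reach through the proxy (the connection ID, Variable name, and query are all illustrative):

# sync_to_variable.py -- runs on the workers (which can reach the proxy)
# and stashes query results in an Airflow Variable the webserver can read.
import json
from datetime import datetime

from airflow import DAG
from airflow.models import Variable
from airflow.hooks.postgres_hook import PostgresHook
from airflow.operators.python_operator import PythonOperator


def sync_rows():
    hook = PostgresHook(postgres_conn_id="my_cloudsql")  # hypothetical conn ID
    rows = hook.get_records("SELECT id, name FROM things")  # illustrative query
    Variable.set("things_snapshot", json.dumps(rows))


dag = DAG(
    "sync_cloudsql_to_variable",
    start_date=datetime(2019, 1, 1),
    schedule_interval="@hourly",
)

PythonOperator(task_id="sync_rows", python_callable=sync_rows, dag=dag)

A second, generated DAG can then build its tasks from Variable.get("things_snapshot") without ever touching the proxied database, so the webserver can render it.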

Database connections with RDS not working after deploying

I have developed a dynamic web app that leverages Amazon RDS. A servlet talks to RDS, fetches data, and presents it using JSP. That flow works fine. Next, I uploaded the project's WAR file to AWS Elastic Beanstalk, but now the database connections are not working.
Could you please guide me here? Why are my connection variables not working after deployment?
I made a few changes to the earlier deployment, and now the app takes forever to load.
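One thing worth ruling out: if the RDS instance was attached through the Elastic Beanstalk environment, the connection settings are exposed to the application as environment properties (RDS_HOSTNAME and friends) rather than whatever was configured locally. Illustrated in Python for brevity; in a servlet the same values are read via System.getProperty or System.getenv, depending on the platform:

# read_rds_env.py -- Elastic Beanstalk exposes attached-RDS settings as
# environment properties; empty values suggest the RDS instance is not
# actually attached to this environment.
import os

db_config = {
    "host": os.environ.get("RDS_HOSTNAME"),
    "port": os.environ.get("RDS_PORT"),
    "name": os.environ.get("RDS_DB_NAME"),
    "user": os.environ.get("RDS_USERNAME"),
    "password": os.environ.get("RDS_PASSWORD"),
}
print(db_config)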