Cassandra C++ driver keeps attempting reconnection to endpoints that no longer exist - c++

I am using DataStax's C++ driver version 2.8.0 for Apache Cassandra inside a Kubernetes application. Cassandra is deployed as a 3-node cluster via this Helm chart.
The chart leverages Kubernetes headless services to make the Cassandra endpoints available, so there is an entry in the Kubernetes DNS for those endpoints.
I have a C++ app running in a Kubernetes pod that interacts with Cassandra and connects using that DNS entry to resolve the endpoints. The application has a single connection-to-Cassandra object, following the driver usage guidelines. The connection is initialized at the beginning of the program, and a failure to initialize it, or to execute a query later on, causes the program to fail.
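That initialization looks roughly like the following (simplified sketch; the contact-point DNS name is a placeholder for the headless service entry used in my deployment, and error handling is reduced to a message):

#include <cassandra.h>
#include <cstdio>

// Simplified connection setup; the contact point below stands in for the
// headless-service DNS entry that resolves to the Cassandra pods.
bool connect_session(CassCluster** cluster_out, CassSession** session_out) {
  CassCluster* cluster = cass_cluster_new();
  CassSession* session = cass_session_new();
  cass_cluster_set_contact_points(cluster, "cassandra.default.svc.cluster.local");

  CassFuture* connect_future = cass_session_connect(session, cluster);
  CassError rc = cass_future_error_code(connect_future);
  cass_future_free(connect_future);

  if (rc != CASS_OK) {
    // Failing to initialize the connection fails the program.
    fprintf(stderr, "Unable to connect: %s\n", cass_error_desc(rc));
    cass_session_free(session);
    cass_cluster_free(cluster);
    return false;
  }

  *cluster_out = cluster;
  *session_out = session;
  return true;
}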
Everything is working fine, but Cassandra nodes/pods may eventually go down for some reason. When that happens, they are respawned but get reassigned a different IP. It seems like the C++ driver is able to pick up the new endpoints from the DNS without any additional code. However, in such a situation the connection is not closed on the client side, and it looks like the previous endpoints remain in the connection pool at some level. This leads to a series of log events similar to the following:
1531920921.161 [WARN] (src/pool.cpp:420:virtual void cass::Pool::on_close(cass::Connection*)): Connection pool was unable to reconnect to host XXX.XXX.XXX.XX because of the following error: Connection timeout
and
1531920921.894 [WARN] (src/pool.cpp:420:virtual void cass::Pool::on_close(cass::Connection*)): Connection pool was unable to reconnect to host XXX.XXX.XXX.XX because of the following error: Connect error 'host is unreachable'
These pop up every [reconnect timeout]. The more IP reassignments, the more log messages, which, as you can guess, can add up to a very large number for long-lived applications.
Is there some feature of the driver's API that allows dealing with this? Or, more generally, a good/recommended way to handle it client side? One option, external to the driver, could be to reset the connection within the client code, but (although I may be missing something) I fail to see a way to "catch" such events: they only show up in the logs.
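For what it's worth, the only hook I can think of is installing a custom log callback and watching for these warnings, roughly like the sketch below (the string matching is just a placeholder heuristic of mine, not an official driver mechanism), but that feels like scraping the logs rather than a proper API for this:

#include <cassandra.h>
#include <atomic>
#include <cstdio>
#include <cstring>

// Flag set from the driver's logging thread when a reconnect warning is seen.
static std::atomic<bool> saw_reconnect_failure{false};

static void on_driver_log(const CassLogMessage* message, void* /*data*/) {
  if (message->severity == CASS_LOG_WARN &&
      std::strstr(message->message, "unable to reconnect to host") != nullptr) {
    saw_reconnect_failure.store(true);
  }
  std::fprintf(stderr, "%s: %s\n",
               cass_log_level_string(message->severity), message->message);
}

void install_log_hook() {
  // Must be installed before the cluster/session are created.
  cass_log_set_level(CASS_LOG_WARN);
  cass_log_set_callback(on_driver_log, nullptr);
}

The client code could then periodically check saw_reconnect_failure and rebuild the session, but that still feels like a workaround.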

Related

Cloud SQL Proxy connection times out occasionally

We use a single-tenant architecture for our instances. Each instance contains three Django apps, i.e. Django, Celery (worker, beat), and a few other things that don't interact with the database. We deploy cloudsql-proxy as a sidecar for these Django containers, which run as Pods in Google Kubernetes Engine. We are using Cloud SQL (Postgres 9.6) from Google, and it has a public IP address.
The problem is that we are getting OperationalErrors on the Django side, i.e.
OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
and when we check the Pod's logs at the time the OperationalError occurred, we see the following error from the cloudsql-proxy container:
couldn't connect to db_instance: dial tcp our_db_instance_public_ip:3307: connect: connection timed out
It is not that the connection to the database doesn't work. It works most of the time, but sometimes it throws the above errors, which is a pain because we run Celery tasks every other minute and they fail because of this. Sometimes it occurs while an end user is interacting with our application and their request fails.
Our application isn't under very high load. We set the maximum number of connections on our database to 1000, and the peak number of connections is around 35 (the sum of all instances' connections). I checked the database's stats and it seems pretty happy, i.e. CPU utilization almost never goes above 50%, disk is 30% used, and memory usage is around 50%.
I can provide more details if needed. Would appreciate any help!

EKS has an issue connecting to Oracle

I have an Azure DevOps pipeline which deploys to Amazon Kubernetes (EKS); the pipeline works fine, with no issues, and deploys to k8s.
Mine is a Spring Boot application; when starting up, it makes a connection to an Oracle DB, and I get the error below.
[nio-8203-exec-3] com.zaxxer.hikari.util.DriverDataSource : Registered driver with driverClassName=oracle.jdbc.driver.OracleDriver was not found, trying direct instantiation.
Caused by: oracle.net.ns.NetException: Listener refused the connection with the following error:
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
I suspect this is not the issue, because the same code, the same pipeline, and the same k8s cluster work absolutely fine in a different namespace.
The above Oracle issue happens only when the pipeline tries to deploy to the default namespace.
I tried to telnet to the Oracle server and I get a response; it is just not able to connect from the default namespace, and I don't know why.
Also, I have two Oracle databases, one on-prem and the other in RDS; both have the same issue from the default namespace, and both were able to connect with no issue from the other namespace.
I don't know what is different about the default namespace or what to check there.

Cloud Composer worker fails to connect to external database

I am attempting to take my existing Cloud Composer environment and connect to a remote SQL database (Azure SQL). I've been banging my head against this for a few days and I'm hoping someone can point out where my problem lies.
Following the documentation found here, I've spun up a GKE Service and SQL Proxy workload. I then created a new Airflow connection, as shown here, using the full name of the service, azure-sqlproxy-service.
I test run one of my DAG tasks and get the following:
Unable to connect: Adaptive Server is unavailable or does not exist
Not sure of the issue, I decided to remote directly into one of the workers, whitelist that IP on the remote DB firewall, and try to connect to the server. With no command-line MSSQL client installed, I launched Python on the worker and attempted to connect to the database with the following:
connection = pymssql.connect(host='database.url.net',user='sa',password='password',database='database')
From this I get the same error as above, with both the Service name and the remote IP entered as the host. Even ignoring the service/proxy, shouldn't this Airflow worker be able to reach the remote database? I can ping websites, but checking the remote logs, the DB doesn't show any failed logins. With such a generic error and not many ideas on what to do next, I'm stuck. A few Google results have suggested switching libraries, but I'm not quite sure how to do that within Airflow, or whether I even need to.
What troubleshooting steps could I take next to get at least a single worker communicating with the DB before moving on to the service/proxy?
After much pain I've found that Cloud Composer uses Ubuntu 18.04, which currently breaks pymssql, as per:
https://github.com/pymssql/pymssql/issues/687
I tried downgrading to 2.1.4 with no success. Needing to get this done, I've followed the instructions outlined in this post to use pyodbc:
Google Composer- How do I install Microsoft SQL Server ODBC drivers on environments

Got an error reading communication packets in Google Cloud SQL

Since March 31st I've been getting the following error in Google Cloud SQL:
Got an error reading communication packets.
I have been using Google Cloud SQL for 2 years, but have never faced such a problem.
I'm very worried about it.
This is the detailed error message:
textPayload: "2019-04-29T17:21:26.007574Z 203385 [Note] Aborted connection 203385 to db: {db_name} user: {db_username} host: 'cloudsqlproxy~{private ip}' (Got an error reading communication packets)"
While it is true that this error message often occurs after a maintenance period, it isn't necessarily a cause for concern as this is a known behavior by MySQL.
Possible explanations for why this issue is happening are:
- A large increase in connection requests to the instance, with the number of active connections increasing over a short period of time.
- Freezing / unavailability of the instance, which can also occur due to a burst of connections happening in a very short time interval. It is observed that this freezing always happens with an increase of connection requests. This increase in connections causes the instance to be overloaded and hence unavailable to respond to further connection requests until the number of connections decreases or the instance stabilizes.
- The server was too busy to accept new connections.
- There were high rates of previous connections that were not closed correctly.
- The client terminated the connection abnormally.
- The readTimeout setting being set too low in the MySQL driver.
In an excerpt from the documentation, it is stated that:
There are many reasons why a connection attempt might not succeed. Network communication is never guaranteed, and the database might be temporarily unable to respond. Make sure your application handles broken or unsuccessful connections gracefully.
- A low Cloud SQL Proxy version can also be the reason for such incidents; an upgrade to the latest version (v1.23.0) can be a troubleshooting solution.
- The IP from which you are trying to connect may not be added to the Authorized Networks of the Cloud SQL instance.
Some possible workarounds for this issue, depending on your case, could be one of the following:
In the case that the issue is related to high load, you could retry the connection using exponential backoff to avoid sending too many simultaneous connection requests. The best practice here is to exponentially back off your connection requests and add randomized backoffs to avoid throttling and potentially overloading the instance. As a way to mitigate this issue in the future, it is recommended that connection requests be spaced out to prevent overloading. Depending on how you are connecting to Cloud SQL, exponential backoff may already be in use by default with certain ORM packages.
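As an illustration only, a minimal retry loop with exponential backoff and randomized jitter might look like the following (connect_fn stands in for whatever your client's actual connect call is):

#include <algorithm>
#include <chrono>
#include <functional>
#include <random>
#include <thread>

// Retry connect_fn with exponential backoff plus randomized jitter,
// capping both the delay and the number of attempts.
bool connect_with_backoff(const std::function<bool()>& connect_fn,
                          int max_attempts = 8) {
  std::mt19937_64 rng{std::random_device{}()};
  std::chrono::milliseconds delay{250};
  const std::chrono::milliseconds max_delay{30000};

  for (int attempt = 1; attempt <= max_attempts; ++attempt) {
    if (connect_fn()) return true;

    // Up to +/- 50% jitter so many clients do not retry in lockstep.
    std::uniform_int_distribution<long long> jitter(-delay.count() / 2,
                                                    delay.count() / 2);
    std::this_thread::sleep_for(delay + std::chrono::milliseconds(jitter(rng)));
    delay = std::min(delay * 2, max_delay);
  }
  return false;
}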
If the issue could be related to an accumulation of long-running inactive connections, you can confirm this by running SHOW FULL PROCESSLIST on your database and looking for connections with a high Time value, or connections where Command is Sleep.
If this is your case, you have a few possible options:
If you are not using a connection pool, you could update the client application logic to properly close connections immediately at the end of an operation, or use a connection pool to limit the lifetime of your connections. In particular, it is ideal to manage the connection count by using a connection pool: unused connections are recycled, and the number of simultaneous connection requests can be limited through the maximum pool size parameter.
If you are using a connection pool, you could return idle connections to the pool immediately at the end of an operation and set a shorter timeout by adjusting the wait_timeout or interactive_timeout flag values. Setting the Cloud SQL wait_timeout flag to 600 seconds forces connections to be refreshed.
To check the network and port connectivity:
Step 1. Confirm TCP connectivity on port 3306 with tcptraceroute or netcat.
Step 2. If Step 1 succeeded, connect with the mysql client and check for any timeouts or errors.
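If a quick programmatic check is more convenient than netcat, a rough equivalent using plain POSIX sockets could look like this (the IP address is a placeholder for the instance address):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// Minimal TCP connectivity check against MySQL's default port 3306.
// Returns true if the TCP handshake completes (it says nothing about auth).
bool can_reach_mysql(const char* ip, unsigned short port = 3306) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) return false;

  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_port = htons(port);
  if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1) {
    close(fd);
    return false;
  }

  bool ok = connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0;
  close(fd);
  return ok;
}

int main() {
  // Placeholder address; substitute the Cloud SQL instance IP.
  std::printf("port 3306 reachable: %s\n", can_reach_mysql("10.0.0.1") ? "yes" : "no");
  return 0;
}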
When the client might be terminating the connection abruptly, you could check for the following:
- The MySQL client or mysqld server receiving a packet bigger than max_allowed_packet bytes, or the client receiving a "packet too large" message; if so, you could send smaller packets or increase the max_allowed_packet flag value on both client and server.
- Transactions that are not being properly committed using both "begin" and "commit"; in that case, the client application logic needs to be updated to properly commit the transaction.
There are several utilities that I think will be helpful here; if you can, install the mtr and tcpdump utilities to monitor the packets during these connection-increasing events.
It is strongly recommended to enable general_log in the database flags. Another suggestion is to also enable the slow_query database flag and output it to a file. Also have a look at this GitHub issue comment and go through the list of additional solutions proposed for this issue here.
This error message indicates a connection issue, either because your application doesn't terminate connections properly or because of a network issue.
As suggested in these troubleshooting steps for MySQL or PostgreSQL instances from the GCP docs, you can start debugging by checking that you follow best practices for managing database connections.

Auto failover multiple connections to mirror database when principal goes down

I have a principal database (server_A), mirror database (server_B), and a witness database (server_C). The databases are set up for automatic failover, that is, when server_A goes down or fails over, server_B assumes the role of the new principal database. The database quorum is set up correctly to the best of my knowledge.
I have written an application in C++ that connects to the database and gets a value to ensure a true connection. The application detects when a failure occurs on the GetValue call and attempts to reconnect when the error occurs.
The issue is this:
When I have MULTIPLE connections to the database (two threads connected; once connected, each gets a value in a loop) and the failover occurs (stopping SQL Server on server_A so server_B takes over as principal), I detect the connection failure, destroy my connection, and attempt to reconnect using the same connection string:
"Driver={SQL Native Client};Server=tcp:Server_A;Failover_Partner=tcp:Server_B;Database=SomeDatabase;Uid=SomeUser;Pwd=SomePassword;"
** NOTE **
I have verified that the failover has taken place by monitoring the databases.
Even though the connection to the database has been properly disposed of, I cannot reconnect to the database until I restart the application. Alternatively, if I bring server_A back online (now acting as the mirror database) and then fail over server_B (shutting down SQL Server), making server_A the principal database again, the application can reconnect without having to completely close out.
Though I could manipulate the connection string to make server_B the new principal and server_A the new Failover_Partner, this is not an ideal solution as many more connections will be utilized.
Keep in mind, this ONLY happens with multiple connections to the database. If I run the application with only one connection, all is fine and I can reconnect just fine when the failover occurs.
EDIT: If I connect at the beginning with multiple threads, all is fine. When I shut down SQL Server, and therefore a failover occurs, I can reconnect only if I go through and delete ALL objects and re-instantiate new objects. Also, I am using SQL Native Client 11.0 (ODBC). Thoughts?
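To be concrete, by "delete ALL objects and re-instantiate" I mean tearing down every ODBC handle and allocating fresh ones, roughly like the following simplified sketch (error checking omitted; connStr is the connection string shown above):

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// Simplified teardown + reconnect sequence; error checking omitted.
void reconnect(SQLHENV* env, SQLHDBC* dbc, const SQLCHAR* connStr) {
  // Tear down the old connection and its handles completely.
  if (*dbc != SQL_NULL_HDBC) {
    SQLDisconnect(*dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, *dbc);
    *dbc = SQL_NULL_HDBC;
  }
  if (*env != SQL_NULL_HENV) {
    SQLFreeHandle(SQL_HANDLE_ENV, *env);
    *env = SQL_NULL_HENV;
  }

  // Re-allocate everything from scratch and reconnect.
  SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, env);
  SQLSetEnvAttr(*env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
  SQLAllocHandle(SQL_HANDLE_DBC, *env, dbc);
  SQLDriverConnect(*dbc, NULL, const_cast<SQLCHAR*>(connStr), SQL_NTS,
                   NULL, 0, NULL, SQL_DRIVER_NOPROMPT);
}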
A lot of what you're describing is consistent with the issue described in KB 2605597 "Time-out error when a mirrored database connection is created by the .NET Framework data provider for SQLClient."
The KB describes problems when the connection timeout is set to 15 seconds; I have anecdotally heard of similar problems when the connection timeout is set to 0 (which isn't a good idea for other reasons; mentioning it just in case).
This hotfix is applied to the application servers. If you want to rule this out as a possible cause, you could test raising the timeout (like it says in the workaround sections of the post) to make sure it's not the issue.
Later thought: The other thing I notice that is unusual is that you're specifying the TCP protocol in the connection string and the failover partner name. It's not clear to me from the documentation that it's supported in the failover partner name. You might want to try removing that and specifying the network attribute instead. (Recommended here.)
I do understand that you believe the issue isn't these things due to the single / multiple connections issue you've tested out.
However, I think you're better off simplifying the connection string so it's as consistent as possible with the published examples, and first making sure it's not one of the issues that people have commonly hit with this. (The retry issue happens when there is latency, which can make it somewhat sporadic.)
OK, I have found the answer.
I had to modify the hosts file because my application did not reside in the same domain as the databases. Therefore, when trying to fail over, I could not reach the database by its instance name (which is what the failover partner was cached as). I changed the hosts file to resolve the instance name to the IP address of the machine, and it all works now.