MySQL with Qt issue - C++

Can a high Aborted_clients value lead to "Host IP is blocked because of many connection errors"? I want to know because this error blocks my Qt application from accessing the database server.
Error message:
QSqlDatabasePrivate::database: unable to open database: "Host 'IP' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts' QMYSQL: Unable to connect"
Also, can the Aborted_clients value increase the max_connect_errors counter?
Thanks.

They are unrelated. max_connect_errors is one of the server system variables; it sets a per-host error threshold, while Aborted_clients is one of the server status variables, a global counter over all clients/hosts.
Another reason they are not related: when a host's connect-error count has been incrementing due to connect errors but that host then establishes a successful connection, the error count for that host is cleared.
The per-host error count behind max_connect_errors is incremented when a host starts a connection but the handshake with the server is interrupted before completing; reaching the threshold results in blocking the host. If the handshake was not interrupted, the attempt counts as a "success" and resets the host's counter, regardless of whether the end result was a successful connection or not. So it can be considered a network performance counter; note that it does not even strongly stand for security purposes. You can test this by running telnet MyServer 3306 and then pressing CTRL+C instead of proceeding.
This counter can be cleared with mysqladmin flush-hosts, as in this post.
On the other hand, if a client successfully connects but later disconnects improperly or is terminated, the server increments the Aborted_clients counter.
This can be caused by many things: the client exited without calling mysql_close(); the client connection exceeded wait_timeout without interacting with the server; the client connection was cut off abruptly, for example when the PC was turned off.
Server status variables provide information about server operation. They also include Aborted_connects, which is just a statistic for DBAs and is not used by mysqld to determine server behavior.
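For completeness, here is a minimal sketch of how the Qt side might open the connection and surface this error; the host, schema, and credentials are placeholder assumptions:

#include <QSqlDatabase>
#include <QSqlError>
#include <QDebug>

bool openDatabase()
{
    // Placeholder connection details; replace with your own.
    QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL", "mydb");
    db.setHostName("db.example.com");
    db.setDatabaseName("appdb");
    db.setUserName("appuser");
    db.setPassword("secret");

    if (!db.open()) {
        // A blocked host surfaces here as "Host 'IP' is blocked because of
        // many connection errors"; retries from this host keep failing until
        // 'mysqladmin flush-hosts' is run on the server.
        qWarning() << "Connect failed:" << db.lastError().text();
        return false;
    }
    return true;
}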

How to detect a connection failure in Indy TCP Client

I have made a client and a server using Indy TIdTCPClient and TIdTCPServer in C++Builder 11 Alexandria.
I can start the server and connect the client to it correctly, but if I set the server's MaxConnections to a value N and then try to connect an (N+1)th client, the connection apparently does not fail.
For example: I set MaxConnections=1 in the server, the first client connects to it and the server OnConnect event is raised, while in the client OnStatus event I get two messages:
message 1: Connecting to 10.0.0.16.
message 2: Connected.
I try to connect the second client: the server OnConnect event is NOT raised (which is what I expect), but in the client OnStatus event I get the same two messages (which is not what I expect):
message 1: Connecting to 10.0.0.16.
message 2: Connected.
Then, the first client can exchange data with the server, and the second client can't (this seems right).
I don't understand why the second client's connection does not fail explicitly. Am I doing something wrong?
You are not doing anything wrong. This is normal behavior for TIdTCPServer.
There is no cross-platform socket API at the OS level 1 to limit the number of active/accepted connections on a TCP server socket, only to limit the number of pending connections in the server's backlog. That limit is handled by the TIdTCPServer::ListenQueue property, which is 15 by default (but this is more of a suggestion than a hard limit; the underlying socket stack can override it if it wants to).
As such, the TIdTCPServer::MaxConnections property is implemented by simply accepting any client from the backlog that attempts to connect, and then immediately disconnecting that client if the MaxConnections limit has been exceeded.
So, if you try to connect more clients to TIdTCPServer than MaxConnections allows, those extra clients will not see any failure while connecting (unless the backlog fills up), but the server will not fire the OnConnect event for them. From the clients' perspective, they really did connect successfully: they were fully accepted by the server's underlying socket stack (the TCP 3-way handshake completed). However, they will not notice the disconnect until they try to actually communicate with the server; at that point they will detect it, usually in the form of an EIdConnClosedGracefully exception (but that is not guaranteed).
1: on Windows only, there is a WSAAccept() function which has a callback that can reject pending connections before they leave the backlog queue. But Indy does not make use of this callback at this time.
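As a rough illustration (a sketch, not the only way to do it), the client side might detect that disconnect on its first real exchange, assuming a TIdTCPClient named Client that has already called Connect():

#include <System.SysUtils.hpp>
#include <IdTCPClient.hpp>
#include <IdExceptionCore.hpp>   // EIdConnClosedGracefully

// Returns false if the server silently dropped us after accepting the connection.
bool PingServer(TIdTCPClient *Client)
{
    try {
        Client->IOHandler->WriteLn("PING");          // first real I/O after Connect()
        String Reply = Client->IOHandler->ReadLn();  // fails if the server disconnected us
        return true;
    }
    catch (const EIdConnClosedGracefully &) {
        // The server accepted the connection but closed it, e.g. MaxConnections exceeded.
        return false;
    }
    catch (const Exception &) {
        // Other socket errors (resets, timeouts, etc.).
        return false;
    }
}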
Different TCP stacks exhibit different behavior. Your description is consistent with a TCP stack that simply ignores SYNs to a socket that has reached the maximum configured limit of pending and/or accepted connections: the SYN packet is simply dropped on the floor and not acknowledged.
The nature of TCP is that it's supposed to handle network drops. The sender does not immediately bail out, but keeps trying to connect for some period of time; this part is consistent with all TCP implementations.
If you want your client to quickly fail a connection that does not get established within some set period of time, you will need to implement a timeout yourself.
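One hedged sketch of such a timeout on the Indy client side, assuming your Indy version exposes the ConnectTimeout property (otherwise a watchdog thread that calls Disconnect() achieves the same effect):

#include <System.SysUtils.hpp>
#include <IdTCPClient.hpp>

// Give up after a few seconds instead of waiting for the OS-level TCP retry cycle.
bool ConnectWithTimeout(TIdTCPClient *Client, const String &Host, int Port)
{
    Client->Host = Host;
    Client->Port = Port;
    Client->ConnectTimeout = 3000;   // milliseconds; pick what suits your protocol
    try {
        Client->Connect();
        return true;
    }
    catch (const Exception &) {
        // Covers both timeouts and outright refusals.
        return false;
    }
}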

Got an error reading communication packets in Google Cloud SQL

Since March 31st I have been getting the following error in Google Cloud SQL:
Got an error reading communication packets.
I have been using Google Cloud SQL for 2 years, but have never faced such a problem before.
I'm very worried about it.
This is detail error message:
textPayload: "2019-04-29T17:21:26.007574Z 203385 [Note] Aborted connection 203385 to db: {db_name} user: {db_username} host: 'cloudsqlproxy~{private ip}' (Got an error reading communication packets)"
While it is true that this error message often occurs after a maintenance period, it isn't necessarily a cause for concern, as this is known MySQL behavior.
Possible explanations for why this issue is happening are:
- A large increase of connection requests to the instance, with the number of active connections growing over a short period of time. The freezing/unavailability of the instance can also occur due to a burst of connections happening in a very short time interval; this freezing has been observed to always happen alongside an increase of connection requests. The increase in connections overloads the instance, making it unavailable to respond to further connection requests until the number of connections decreases or the instance stabilizes.
- The server was too busy to accept new connections.
- There were high rates of previous connections that were not closed correctly.
- The client terminated the connection abnormally.
- The readTimeout setting in the MySQL driver being set too low.
In an excerpt from the documentation, it is stated that:
"There are many reasons why a connection attempt might not succeed. Network communication is never guaranteed, and the database might be temporarily unable to respond. Make sure your application handles broken or unsuccessful connections gracefully."
- A low Cloud SQL Proxy version can also be the reason for such incidents. Upgrading to the latest version (v1.23.0) can be a troubleshooting solution.
- The IP from which you are trying to connect may not be added to the Authorized Networks of the Cloud SQL instance.
Some possible workarounds for this issue, depending on which is your case, are the following:
In the case that the issue is related to a high load, you could retry the connection using an exponential backoff to prevent sending too many simultaneous connection requests. The best practice here is to exponentially back off your connection requests and add randomized jitter to the backoffs to avoid throttling and potentially overloading the instance (a sketch follows below). As a way to mitigate this issue in the future, connection requests should be spaced out to prevent overloading. Depending on how you are connecting to Cloud SQL, exponential backoff may already be in use by default with certain ORM packages.
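As a minimal sketch of exponential backoff with jitter (illustrative only; tryConnect() is a hypothetical stand-in for your driver's connect call):

#include <algorithm>
#include <chrono>
#include <random>
#include <thread>

bool tryConnect();   // hypothetical: returns true once the connection succeeds

bool connectWithBackoff(int maxAttempts = 6)
{
    std::mt19937 rng{std::random_device{}()};
    int delayMs = 250;                                   // initial delay
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        if (tryConnect())
            return true;
        // Wait for the current delay plus random jitter, then double the delay.
        std::uniform_int_distribution<int> jitter(0, delayMs);
        std::this_thread::sleep_for(std::chrono::milliseconds(delayMs + jitter(rng)));
        delayMs = std::min(delayMs * 2, 8000);           // cap the delay
    }
    return false;
}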
If the issue could be related to an accumulation of long-running inactive connections, you can find out whether that is your case by running SHOW FULL PROCESSLIST on your database and looking for connections with a high Time value or whose Command is Sleep (see the sketch below).
If this is your case, you have a few possible options:
- If you are not using a connection pool, you could update the client application logic to properly close connections immediately at the end of an operation, or use a connection pool to limit your connections' lifetime. In particular, it is ideal to manage the connection count by using a connection pool; this way unused connections are recycled, and the number of simultaneous connection requests can be limited through the maximum pool size parameter.
- If you are using a connection pool, you could return idle connections to the pool immediately at the end of an operation and set a shorter timeout by adjusting the wait_timeout or interactive_timeout flag values; for example, set the Cloud SQL wait_timeout flag to 600 seconds to force refreshing connections.
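For illustration, a small Qt-style sketch of that check; the column names come from SHOW FULL PROCESSLIST, while the connection name and idle threshold are assumptions (and QSqlQuery::value(name) needs Qt 5.11 or later):

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QDebug>

// Reports connections that have been sleeping longer than maxIdleSeconds.
void reportIdleConnections(int maxIdleSeconds = 600)
{
    QSqlDatabase db = QSqlDatabase::database("mydb");    // assumed existing connection
    QSqlQuery query(db);
    if (!query.exec("SHOW FULL PROCESSLIST"))
        return;
    while (query.next()) {
        const QString command = query.value("Command").toString();
        const int idleTime = query.value("Time").toInt();
        if (command == "Sleep" && idleTime > maxIdleSeconds) {
            qDebug() << "Idle connection id" << query.value("Id").toInt()
                     << "user" << query.value("User").toString()
                     << "idle for" << idleTime << "seconds";
        }
    }
}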
To check the network and port connectivity:
Step 1. Confirm TCP connectivity on port 3306 with tcptraceroute or netcat.
Step 2. If Step 1 succeeded, try using the mysql client to check for timeouts/errors.
When the client might be terminating the connection abruptly, you could check for the following:
- If the MySQL client or mysqld server is receiving a packet bigger than max_allowed_packet bytes, or the client is receiving a "packet too large" message, you could send smaller packets or increase the max_allowed_packet flag value on both client and server.
- If there are transactions that are not being properly committed using both "begin" and "commit", the client application logic needs to be updated to properly commit the transaction.
There are several utilities that I think will be helpful here; if you can, install the mtr and tcpdump utilities to monitor the packets during these connection-increasing events. It is strongly recommended to enable the general_log option in the database flags. Another suggestion is to also enable the slow_query database flag and output it to a file. Also have a look at this GitHub issue comment and go through the list of additional solutions proposed for this issue here.
This error message indicates a connection issue, either because your application doesn't terminate connections properly or because of a network issue.
As suggested in these troubleshooting steps for MySQL or PostgreSQL instances from the GCP docs, you can start debugging by checking that you follow best practices for managing database connections.

Detect when Remote Desktop Connection is starting?

Is there any way to detect when a Remote Desktop Connection is starting on a Windows machine?
For example, I'd like to have a C++ application print "WARNING: RDC Connection incoming" as soon as Windows detects that an RDC connection has been initialized.
Is there some sort of system event that is called when RDC connects?
You can create a thread that keeps asking, every 500 ms, whether a remote connection is currently open; you can find how to do that right here.
You still might not catch it in time, so you can also check which TCP ports get opened every small interval of time; you can use GetTcpTable2 for this, see https://msdn.microsoft.com/en-us/library/windows/desktop/bb408406(v=vs.85).aspx.
Specifically, you should check the state of the port.
Since the first thing that happens in a remote connection is the port changing its state, you should catch it in time.
The RDP port is 3389.
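A hedged sketch of the GetTcpTable2 approach: poll the TCP table and report as soon as a connection on port 3389 reaches the ESTABLISHED state (error handling kept minimal):

#include <winsock2.h>
#include <iphlpapi.h>
#include <cstdio>
#include <vector>

#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

// Returns true if any TCP connection on local port 3389 (RDP) is established.
bool RdpConnectionActive()
{
    ULONG size = 0;
    GetTcpTable2(nullptr, &size, TRUE);                  // ask for the required buffer size
    std::vector<char> buffer(size);
    PMIB_TCPTABLE2 table = reinterpret_cast<PMIB_TCPTABLE2>(buffer.data());
    if (GetTcpTable2(table, &size, TRUE) != NO_ERROR)
        return false;

    for (DWORD i = 0; i < table->dwNumEntries; ++i) {
        const MIB_TCPROW2 &row = table->table[i];
        // dwLocalPort holds the port in network byte order.
        if (ntohs(static_cast<u_short>(row.dwLocalPort)) == 3389 &&
            row.dwState == MIB_TCP_STATE_ESTAB) {
            return true;
        }
    }
    return false;
}

int main()
{
    while (!RdpConnectionActive())
        Sleep(500);                                      // poll every 500 ms
    std::printf("WARNING: RDC Connection incoming\n");
    return 0;
}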

Cannot connect to a local port anymore that is still being listened by a process

I have a server application (unimrcpserver.exe) that answers requests from client processes. This server process listens on several ports.
With the netstat -a command I get the following lines for my process:
TCP 192.168.10.65:2544 MERTB-PC:0 LISTENING
TCP 192.168.10.65:2554 MERTB-PC:0 LISTENING
TCP 192.168.10.65:9060 MERTB-PC:0 LISTENING
(netstat output is long I only put relevant lines here)
Normally, when the system works, I make requests to the server on these ports and each of them works fine.
While doing stress tests, I saw a situation where the system no longer responded to the requests I make through port 2554.
netstat -a still gives me the above lines, so the server is somehow still listening on this port. When I run telnet on the same machine, it gives an error:
telnet 192.168.10.65 2554
Connecting To 192.168.10.65...Could not open connection to the host, on port 2554: Connect failed
I also wrote a simple program in C++ to get the exact error message that the system generates for a connect() request. This time I get the following error:
No connection could be made because the target machine actively refused it
Additional info: Everything is on the same Windows machine. The firewall is disabled. This situation occurred only once, while I was doing stress tests that send multiple requests at the same time. Before the situation occurred, the system had handled around 13000 requests, which took around half an hour.
So the question is: How can this situation occur? The port is being reported as "LISTENING" with netstat, but I cannot connect to it. If it can be caused by a programming error, what kind of error can cause this kind of behavior?
A new connection can be "actively refused" under several conditions:
there is no LISTENING socket on the IP:Port being connected to.
there is a LISTENING socket, but its backlog of pending connections is full, so it cannot accept a new connection at that moment.
a firewall is blocking it, though a firewall is more likely to produce a different error, if it sends an error at all.
Since there is a LISTENING socket, #2 is the most likely/common case. If so, it means the server app is not accepting clients from its backlog fast enough, if at all.
A client cannot differentiate between these conditions. All it can do is detect the connect failure - WSAECONNREFUSED or ECONNREFUSED, depending on platform - and try again later.
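As a purely illustrative way to reproduce condition #2, a Winsock server that listens with a tiny backlog but never calls accept() stays in the LISTENING state in netstat while extra clients get refused once the backlog fills up (port number chosen to mirror the question):

#include <winsock2.h>
#include <cstdio>

#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(2554);
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(s, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));

    // Tiny backlog and no accept() calls: netstat shows the socket as LISTENING,
    // but once a couple of pending connections pile up, further connects are
    // refused (or time out, depending on the TCP stack).
    listen(s, 1);
    std::printf("Listening on 2554 but never accepting...\n");
    Sleep(60000);

    closesocket(s);
    WSACleanup();
    return 0;
}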
So the question is: How can this situation occur? The port is being reported as "LISTENING" with netstat but I cannot connect to it. If it can be caused by a programming error, what kind of error can cause this kind of behavior?
Yes, it could be caused by a programming error on the server. I have seen it happen when the server's listening thread is deadlocked: the socket's state is still "listening", but if the listening thread shares some global state and is blocked waiting on a mutex held by another thread, you will encounter this.
Also, as others here stated, if the CPU is loaded due to your stress test, that might cause the server to refuse connections, since the worker threads are busy processing and the listening thread never gets a chance to accept the connection.

Winsock select function returning different values

I am working on a project with a client-server architecture. The select function returns different values in different scenarios. Following are the details.
Scenario 1:
When I install my server on my machine and stop all the corresponding services, my client goes into a disconnected state; the return value of select is 1 and read_mask.fd_count is also 1.
Scenario 2:
When I connect to a remote server (abc.com) and then disconnect my wireless connection, the same function returns 0 and read_mask.fd_count is 0. I tried changing the timeout value from ten ms to 50 sec; I can't figure out the problem.
Any help will be appreciated.
When you shut down the server, you cause the network stack to shut down the connection, and further connection requests are refused. select() indicates that there is something to read, and recv() then returns 0 to indicate the connection was closed.
When you drop the wireless connection, the client gets neither the shutdown nor a reset. You have to wait for some timeout to detect that the server is not available.
In a real-world application you should implement a kind of heartbeat in your protocol that allows detecting the "disconnected state" in the second scenario.
Edit: If your Winsock implementation supports SIO_KEEPALIVE_VALS, you can also configure TCP keepalives to detect the lost connectivity. See also: SO_KEEPALIVE.
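A hedged sketch of configuring that on Windows via WSAIoctl and SIO_KEEPALIVE_VALS (the timing values are only illustrative; the dead link then shows up as an error on a later recv()/select()):

#include <winsock2.h>
#include <mstcpip.h>   // tcp_keepalive, SIO_KEEPALIVE_VALS

#pragma comment(lib, "ws2_32.lib")

// Enables TCP keepalive probes on a connected socket so that a silently dead
// link (e.g. the wireless connection dropped) is detected within seconds.
bool EnableKeepAlive(SOCKET sock)
{
    tcp_keepalive ka;
    ka.onoff = 1;                 // turn keepalive on
    ka.keepalivetime = 10000;     // idle time before the first probe, in ms
    ka.keepaliveinterval = 1000;  // interval between probes, in ms

    DWORD bytesReturned = 0;
    return WSAIoctl(sock, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
                    NULL, 0, &bytesReturned, NULL, NULL) == 0;
}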