MySQL "Too many connections" error will not go away - C++

I'm getting a MySQL "Too many connections" error in a C++ program running on Ubuntu Linux.
This is the code that gets the error (it's inside a method that returns the mysql error, if any):
MYSQL connect;
mysql_init(&connect);
if (!mysql_real_connect(&connect, SERVER, USER, PASSWORD, DATABASE, 0, NULL, 0))
{
    return mysql_error(&connect);
}
This code keeps returning the string "Too many connections."
I'm wondering if this is actually some other error. This program had been working for months before I got this error. When the error first appeared, it was because I had run the program against several thousand updates/reads, so yes, it's highly likely that I used up the available connections. The problem is, I can't find a way to release them, if that's what the issue is.
Here is what I have tried:
FLUSH HOSTS;
FLUSH TABLES;
restarting MySQL
rebooting the machine altogether
It has been over 12 hours since this error first appeared, so if it is the connections then nothing is being reset/released. I would have thought rebooting the machine would have released something.

See the MySQL manual section "C.5.2.7. Too many connections".
View all established MySQL connections:
netstat -apn | grep mysql | grep -i established
Tips
Build and return the connection object only when the connection pointer is null or the connection to the DB is unavailable.
Use one connection pool for the entirety of the session.
Close the connection at the end of each session and release/clean the connection pointer (see the RAII sketch after this list).
Increase max_connections=# in /etc/mysql/my.cnf, or restart MySQL with --max_connections=#.
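As an illustration of the "close and clean up" advice, here is a minimal RAII sketch for the MySQL C API. The class name is made up for this example, and SERVER/USER/PASSWORD/DATABASE would be the question's own macros; the point is that the destructor always calls mysql_close(), so the server-side connection slot is released even on early returns or exceptions:

#include <mysql/mysql.h>
#include <stdexcept>
#include <string>

// Minimal RAII wrapper: the destructor always calls mysql_close(), so the
// server-side connection slot is released even when the caller returns
// early or throws.
class MySqlConnection {
public:
    MySqlConnection(const char* host, const char* user,
                    const char* password, const char* db) {
        conn_ = mysql_init(NULL);
        if (!conn_)
            throw std::runtime_error("mysql_init failed");
        if (!mysql_real_connect(conn_, host, user, password, db, 0, NULL, 0)) {
            std::string err = mysql_error(conn_);
            mysql_close(conn_);                 // free the handle on failure too
            throw std::runtime_error(err);
        }
    }
    ~MySqlConnection() { mysql_close(conn_); }  // always releases the slot

    MySqlConnection(const MySqlConnection&) = delete;            // no copies:
    MySqlConnection& operator=(const MySqlConnection&) = delete; // one close per handle

    MYSQL* get() { return conn_; }
private:
    MYSQL* conn_;
};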

Make sure you close connections when you are done with them.
Consider reusing connections or connection pooling; a toy pool sketch follows.
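A very small fixed-size pool might look like the sketch below (SERVER, USER, PASSWORD and DATABASE are again the question's macros; a real pool would also need health checks and reconnection logic). The process then never holds more than a fixed number of server slots, no matter how many updates/reads it performs:

#include <mysql/mysql.h>
#include <cstddef>
#include <mutex>
#include <stdexcept>
#include <vector>

// Toy fixed-size pool: hands out pre-opened MYSQL* handles and takes them
// back, so the process never holds more than `size` server-side slots.
class MySqlPool {
public:
    explicit MySqlPool(std::size_t size) {
        for (std::size_t i = 0; i < size; ++i)
            idle_.push_back(openConnection());
    }
    ~MySqlPool() {
        for (MYSQL* c : idle_) mysql_close(c);
    }
    MYSQL* acquire() {
        std::lock_guard<std::mutex> lock(mu_);
        if (idle_.empty())
            throw std::runtime_error("pool exhausted");
        MYSQL* c = idle_.back();
        idle_.pop_back();
        return c;
    }
    void release(MYSQL* c) {
        std::lock_guard<std::mutex> lock(mu_);
        idle_.push_back(c);   // reuse the handle instead of reconnecting
    }
private:
    static MYSQL* openConnection() {
        MYSQL* c = mysql_init(NULL);
        if (!c || !mysql_real_connect(c, SERVER, USER, PASSWORD, DATABASE,
                                      0, NULL, 0))
            throw std::runtime_error(c ? mysql_error(c) : "mysql_init failed");
        return c;
    }
    std::mutex mu_;
    std::vector<MYSQL*> idle_;
};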

Related

Where to even begin investigating an issue causing a database crash: remaining connection slots are reserved for non-replication superuser connections

Occasionally our Postgres database crashes, and it can only be fixed by restarting the server. We have tried increasing max_connections and Django's CONN_MAX_AGE. I am also trying to learn how to set up PgBouncer. However, I am convinced the underlying issue must be something else that is fixable.
I am trying to find out what that issue is. The problem is that I don't know where to begin looking. Here are some pieces of information:
The errors are always "OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser connections" and "OperationalError: could not write to hash-join temporary file: No space left on device". I think this is caused by opening too many database connections, but I have never managed to catch this happening live, so I could not inspect pg_stat_activity and see which connections were actually active.
Looking at the error log, the same URL shows up for the most part. I've checked the nginx log and it's listed on many different lines, meaning the request is being made multiple times at once rather than Django logging the same error multiple times. All these requests are answered with 499 Client Closed Request. In addition to this one URL, there are of course sprinkled requests from other users trying to access our site.
I should mention that the logic the server processes when the URL in question is requested is pretty simple and I see nothing suspicious that could cause a database crash. However, for some reason, the page loads slowly in production.
I know this is very vague and very little to work with, but I am not used to sysadmin work; I only studied this kind of thing in college, and so far I've only worked as a developer.
Those two problems are mostly independent.
Running out of connection slots won't crash the database. It is just a sign that you either don't use a connection pool or you have a connection leak, i.e. you forget to close transactions in your code.
Running out of space will crash your database if the condition persists.
I assume that the following happens in your system:
Because someone forgot a couple of join conditions or for some other reason, some of your queries take a very long time.
They also produce a lot of (perhaps intermediate) results that are cached in temporary files, which eventually fill up the disk. The out-of-space condition clears as soon as the query fails, but while it lasts it can crash the database.
Because these queries take a long time and block a database session, your application keeps starting new sessions until it reaches the limit.
Solution:
Find and fix those runaway queries. As a stop-gap, you can set statement_timeout to terminate all statements that take too long (a small example follows this list).
Use a connection pool with an upper limit so that you don't run out of connections.
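As an illustration of the statement_timeout stop-gap, here is a minimal libpq sketch (C++ to match the first thread; the connection string is a placeholder, and the same setting can instead be made globally in postgresql.conf or per role with ALTER ROLE ... SET statement_timeout):

#include <libpq-fe.h>
#include <cstdio>

int main() {
    // Placeholder connection string; use your own host/db/user.
    PGconn* conn = PQconnectdb("host=localhost dbname=mydb user=myuser");
    if (PQstatus(conn) != CONNECTION_OK) {
        std::fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    // Any statement in this session that runs longer than 30 seconds is
    // cancelled with an error instead of holding a slot indefinitely.
    PGresult* res = PQexec(conn, "SET statement_timeout = '30s'");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        std::fprintf(stderr, "SET failed: %s", PQerrorMessage(conn));
    PQclear(res);

    // ... run the application's queries here ...

    PQfinish(conn);   // always release the connection slot
    return 0;
}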

Error: Can't connect to MySQL server on '' (10055)

Sometimes my application gives the following error, and I do not understand why it occurs.
My database server is on a Windows machine. My server works perfectly with INSERT, UPDATE and DELETE against MySQL.
After processing ~29000 files, my application crashes!
The link between your code and the DB is a connection that uses ports on both the client and server sides.
That is why it is strongly recommended to close connections after using them: exhausting the available ports blocks new connections, which is what causes this kind of exception.

PostgreSQL Database Server Unresponsive

How do you diagnose problems with PostgreSQL performance?
I have a Django-based webapp using PostgreSQL as a database backend on Ubuntu 12, and under heavy load, the database seems to just disappear, causing the Django-interface to be unreachable and resulting in errors like:
django.db.utils.DatabaseError: error with no message from the libpq
django.db.utils.DatabaseError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
What's odd is that the logs in /var/log/postgresql show nothing unusual. The only thing /var/log/postgresql/postgresql-9.1-main.log shows is lots of lines like:
2012-09-01 12:24:01 EDT LOG: unexpected EOF on client connection
Running top shows that PostgreSQL doesn't seem to be consuming any CPU, even though service postgresql status indicates it's still running.
Doing a `service postgresql restart` temporarily fixes the problem, but the problem returns as soon as there's a lot of load on the database.
I've checked the dmesg and syslog, but I don't see anything that would explain what's wrong. What other logs should I check? How do I determine what's wrong with my PostgreSQL server?
Edit: My max_connections is set to 100, although I am doing a lot of manual transactions. Reading up on Django's ORM behavior with PostgreSQL in manual transaction mode, it looks like I may have to explicitly call connection.close(), which I'm not doing.
I found this was due to Django's buggy Postgres backend in combination with multi-processing. Essentially, Django doesn't properly close its connections automatically, causing weird behavior like tons of "idle in transaction" connections. I fixed it by adding connection.close() at the end of my multiprocessing-launched functions and before certain queries that were throwing this error.
2012-09-01 12:24:01 EDT LOG: unexpected EOF on client connection
This message shows that the issue is on the client side - maybe an exception from libpq? There can be related issues: when clients hang without logging out correctly, you accumulate a lot of idle connections, and you start hitting other errors earlier.
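To see those idle sessions while the problem is live, you can poll pg_stat_activity. A minimal libpq sketch (C++ to match the other threads here; it assumes PostgreSQL 9.2+ column names - on the 9.1 install from the question the columns are procpid and current_query, and '<IDLE> in transaction' appears in current_query instead of a state column):

#include <libpq-fe.h>
#include <cstdio>

int main() {
    // Placeholder connection string; point it at your own server.
    PGconn* conn = PQconnectdb("host=localhost dbname=postgres user=postgres");
    if (PQstatus(conn) != CONNECTION_OK) {
        std::fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    // Sessions that opened a transaction and then went silent keep holding
    // locks and connection slots until they are closed or killed.
    PGresult* res = PQexec(conn,
        "SELECT pid, usename, now() - xact_start AS open_for "
        "FROM pg_stat_activity WHERE state = 'idle in transaction'");
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        for (int i = 0; i < PQntuples(res); ++i)
            std::printf("pid=%s user=%s open for %s\n",
                        PQgetvalue(res, i, 0),
                        PQgetvalue(res, i, 1),
                        PQgetvalue(res, i, 2));
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}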
The program pg_ctl has some options that might help. (man pg_ctl)
-c
Attempt to allow server crashes to produce core files, on platforms
where this is possible, by lifting any soft resource limit placed
on core files. This is useful in debugging or diagnosing problems
by allowing a stack trace to be obtained from a failed server
process.
-l filename
Append the server log output to filename. If the file does not
exist, it is created. The umask is set to 077, so access to the log
file is disallowed to other users by default.
The program postgres also has some debug options. (man postgres)
-d debug-level
Sets the debug level. The higher this value is set, the more
debugging output is written to the server log. Values are from 1 to
5. It is also possible to pass -d 0 for a specific session, which
will prevent the server log level of the parent postgres process
from being propagated to this session.
In the section "Semi-internal Options" . . .
-n
This option is for debugging problems that cause a server process
to die abnormally. The ordinary strategy in this situation is to
notify all other server processes that they must terminate and then
reinitialize the shared memory and semaphores. This is because an
errant server process could have corrupted some shared state before
terminating. This option specifies that postgres will not
reinitialize shared data structures. A knowledgeable system
programmer can then use a debugger to examine shared memory and
semaphore state.
-T
This option is for debugging problems that cause a server process
to die abnormally. The ordinary strategy in this situation is to
notify all other server processes that they must terminate and then
reinitialize the shared memory and semaphores. This is because an
errant server process could have corrupted some shared state before
terminating. This option specifies that postgres will stop all
other server processes by sending the signal SIGSTOP, but will not
cause them to terminate. This permits system programmers to collect
core dumps from all server processes by hand.

Server closed the connection unexpectedly

I'm sorry if my question has been answered already, but I could not find it.
I'm using C++ and a connection pool to connect to a PostgreSQL database in a Win32 console application. It runs OK at the beginning; however, after a while the program receives the error: "Server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request".
When I open the PostgreSQL log file, it shows message: "unexpected EOF on client connection, could not receive data from client: No connection could be made because the target machine actively refused it."
Thank you for any help.
This really sounds like a network problem. I would look first at firewalls, then at switches. I don't think a cable or a bad network card could cause a problem like this.
What it sounds like is happening is that a connection is getting reset. If you eliminate network issues, then the next thing to suspect is the connection pooling software. Try switching it out and see if the problem persists.

SQL-Server Connection Fails after Network Reconnect

I am working on an update to an application that uses DAO to access an SQL Server. I know, but let's consider DAO a requirement for now.
The application runs all the time in the system tray and periodically performs SQL Server operations. Since it is running all the time, and users of the application will be on laptops transitioning between buildings, I've designed it to quietly switch between active and inactive states. When the database connection succeeds, operations resume.
I have one last issue before I release this update: When a connection is dropped, then reestablished, the SQL operations fail. This occurs only if I have specified the hostname in my connection string. If I use the IP, everything is fine (but I need to be able to use hostname).
Here is the behavior:
1) Everything working. Good network connection, database operations are fine.
2) Lost connection. Little 'x' appears on task bar icon, and nothing else. All ok.
3) Reconnect.
At step 3, I get an 'ODBC--call failed' error when I run the first query. Interestingly, the database is first opened without error.
If I skip step 1, and start the application when the connection is down, everything works fine in step 3, hostname or not.
I expect this is an issue with the DAO engine caching the DNS entry after the first connection, although the destination IP does not change, so I'm not sure about that. I have tried flushing the Windows DNS cache (from a cmd prompt) to no effect. The same behavior occurs even when I'm using my local hostname with a local SQL Server I set up for development; 127.0.0.1 has no problems.
I also tried to CoUninitialize() the DAO interface between active periods, but I had trouble getting this to work. If someone thinks that would help, I will work harder at it.
This behavior is the same in Windows XP or 7.
Thanks for anything you've got!
Edit: I should have mentioned - I am closing the database connection between the attempts, then reopening it with
m_pDb = m_pDaoEngine->OpenDatabase()
I ended up biting the bullet and converting the application to ADO. Everything works nicely now, and database operations are much faster to boot.