Checking SQL connection state in ADO - C++

I am checking my SQL connection state as follows:
_ConnectionPtr m_pConnection;
// Connection is created and working fine...
// Now I disable the network adapter (from the Control Panel)
if (pApp->m_pConnection->GetState() == adStateOpen)
{
    // I get here every time....
}
The problem is that I get adStateOpen every time, even when the connection is really not working!
If I try to execute a query or do anything else, it fails, mostly with
SMux Provider: Physical connection is not usable [xFFFFFFFF].
or
Error number: 80004005 = Unable to open a logical session
Is the value of the State property reliable, or do I need to perform some other check to detect this situation?

The State property does not become 0 (adStateClosed) when the connection is interrupted, so checking the state of the connection will always return 1 (adStateOpen) even after the connection has been interrupted.
No, there is no way to check immediately; the architecture of SQL Server doesn't allow for it.
I advise you to add error handling.
It looks as though the only way to establish that the connection has been lost is to try to use it and handle the error.
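A minimal sketch of that error-handling approach, assuming the question's #import of the ADO type library; IsConnectionAlive is a hypothetical helper, not part of ADO:

#include <comdef.h>  // _com_error
// The #import of msado15.dll (which defines _ConnectionPtr and
// adExecuteNoRecords) is assumed, as in the question's project.

// Probe the connection with a trivial query and treat a COM error
// as "connection lost".
bool IsConnectionAlive(_ConnectionPtr pConnection)
{
    try
    {
        pConnection->Execute(L"SELECT 1", NULL, adExecuteNoRecords);
        return true;
    }
    catch (_com_error&)
    {
        return false;  // the server is unreachable; reconnect here
    }
}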

Related

Mongo C++ Driver - How to Change Timeout Configurations

How can I change the timeout duration for different operations that can fail due to server inaccessibility? (start_session, insert, find, delete, update, ...)
...
auto pool = mongocxx::pool(mongocxx::uri("bad_uri"), pool_options);
auto connection = pool.try_acquire();
auto db = (*(connection.value()))["test_db"];
auto collection = db["test_collection"];
// This does not help
mongocxx::write_concern wc;
wc.timeout(std::chrono::milliseconds(1000));
mongocxx::options::insert insert_options;
insert_options.write_concern(wc);
// takes about 30 seconds to fail
collection.insert_one(from_json(R"({"name": "john doe", "occupation": "_redacted_", "skills" : "a certain set"})"), insert_options);
[Edit]
Here is the exception message:
C++ exception with description "No suitable servers found:
serverSelectionTimeoutMS expired: [connection timeout calling
ismaster on '127.0.0.1:27017']
It would be helpful to see the actual error message from the insert_one() operation, but "takes about 30 seconds to fail" suggests that this may be due to the default server selection timeout. You can configure that via the serverSelectionTimeoutMS connection string option.
If you are connecting to a replica set, I would suggest keeping that timeout a bit above the expected time for a failover to complete. Replica Set Elections states:
The median time before a cluster elects a new primary should not typically exceed 12 seconds
You may find that is shorter in practice. By keeping the server selection timeout above the expected failover time, you'll allow the driver to insulate your application from an error (at the expense of wait time).
If you are not connecting to a replica set, feel free to lower serverSelectionTimeoutMS, albeit keeping it greater than the expected latency to your mongod (standalone) or mongos (sharded cluster) node.
Do note that since server selection occurs within a loop, the connectTimeoutMS connection string option won't affect the delay you're seeing. Lowering the connection timeout will let the driver give up sooner on each attempt to connect to an inaccessible server, but server selection will still block for up to serverSelectionTimeoutMS (and likely retry connections to the server during that loop).
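A minimal sketch of setting both options through the connection string; the host and the timeout values here are illustrative, not recommendations:

#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>

int main()
{
    mongocxx::instance instance{};  // must be created once per process

    // serverSelectionTimeoutMS bounds the overall selection loop;
    // connectTimeoutMS bounds each individual connection attempt.
    mongocxx::uri uri{
        "mongodb://127.0.0.1:27017/"
        "?serverSelectionTimeoutMS=5000&connectTimeoutMS=2000"};
    mongocxx::client client{uri};
}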

Boost Asio SSL not able to receive data for 2nd time onwards (1st time OK)

I'm working with Boost Asio and Boost Beast on a simple RESTful server. With a plain HTTP/TCP socket it works perfectly; I put it under a load test with JMeter and everything works fine.
Then I tried to add an SSL socket. I set up the 'ssl::context' and also called 'async_handshake()' - the additional steps for SSL compared to a plain socket. It works the first time only: the client can connect to me (the server) and I am also able to receive the data via 'boost::beast::http::async_read()'.
Because this is RESTful, the connection is dropped after each request and response. I call 'SSL_Socket.shutdown()' followed by 'SSL_Socket.lowest_layer().close()' to close the SSL socket.
On the next incoming request, the client is able to connect to me (the server). I call 'SSL_Socket.async_handshake()' followed by 'boost::beast::http::async_read()'. But this time I am not able to receive any data, even though the connection is established successfully.
Does anyone have a clue what I missed?
Thank you very much!
If you want to reuse the stream instance, you need to manipulate SSL_Socket.native_handle() with OpenSSL library functions. After the SSL shutdown, use SSL_clear() before starting a new SSL handshake.
Please read the linked SSL_clear(3) documentation (and pay attention to its warnings) for details:
SSL_clear() resets the SSL object to allow for another connection. The reset operation however keeps several settings of the last sessions (some of these settings were made automatically during the last handshake)
[...]
WARNINGS
SSL_clear() resets the SSL object to allow for another connection. The reset operation however keeps several settings of the last sessions (some of these settings were made automatically during the last handshake). It only makes sense for a new connection with the exact same peer that shares these settings, and may fail if that peer changes its settings between connections. Use the sequence SSL_get_session(3); SSL_new(3); SSL_set_session(3); SSL_free(3) instead to avoid such failures (or simply SSL_free(3); SSL_new(3) if session reuse is not desired).
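A minimal sketch of that reset, assuming SSL_Socket names the question's boost::asio::ssl::stream:

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <openssl/ssl.h>

using ssl_stream = boost::asio::ssl::stream<boost::asio::ip::tcp::socket>;

// After the SSL shutdown and socket close, reset the underlying SSL
// object so the next handshake starts from a clean state.
void reset_for_reuse(ssl_stream& SSL_Socket)
{
    ::SSL_clear(SSL_Socket.native_handle());
}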
In regard to the SSL shutdown issue, the linked answer explains how the Boost.Asio SSL shutdown works:
In Boost.Asio, the shutdown() operation is considered complete upon error or if the party has sent and received a close_notify message.
If you look at the Boost.Asio (1.68) source code in boost\asio\ssl\detail\impl\engine.ipp, you can see how Boost.Asio does the SSL shutdown: stream_truncated happens when there is data still to be read or when the SSL shutdown expected from the peer was not received.
int engine::do_shutdown(void*, std::size_t)
{
    int result = ::SSL_shutdown(ssl_);
    if (result == 0)
        result = ::SSL_shutdown(ssl_);
    return result;
}

const boost::system::error_code& engine::map_error_code(
    boost::system::error_code& ec) const
{
    ......
    // If there's data yet to be read, it's an error.
    if (BIO_wpending(ext_bio_))
    {
        ec = boost::asio::ssl::error::stream_truncated;
        return ec;
    }
    ......
    // Otherwise, the peer should have negotiated a proper shutdown.
    if ((::SSL_get_shutdown(ssl_) & SSL_RECEIVED_SHUTDOWN) == 0)
    {
        ec = boost::asio::ssl::error::stream_truncated;
    }
}
You can also see that the Boost.Asio SSL shutdown routine may call OpenSSL's SSL_shutdown() twice if the first call returns 0. The OpenSSL documentation allows this, but advises calling SSL_read() to complete a bidirectional shutdown when the first SSL_shutdown() returns 0.
Read the linked documentation for details.
I had a similar issue: from the second time onward, my asynchronous accept always failed with an uninitialized session id.
I solved the problem by calling SSL_CTX_set_session_id_context on the context, or alternatively by setting the context cache mode to SSL_SESS_CACHE_OFF and adding SSL_OP_NO_TICKET to the context options.
That is my two cents on someone else's problem.
I managed to resolve the problem by wrapping the 'ssl::stream' socket in a 'boost::optional' and then calling 'SSL_Socket.emplace(io_context, oSSLContext)' each time the socket is shut down and closed.
Big credit to sehe at 'Can't implement boost::asio::ssl::stream<boost::asio::ip::tcp::socket> reconnect to server'. His statement "the purest solution would be to not reuse the stream/socket objects" rocks! It saved my time.
Thanks.
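A minimal sketch of that re-emplace pattern; SSL_Socket, io_context, and oSSLContext are the names used above, and everything else here is illustrative:

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/optional.hpp>

namespace asio = boost::asio;
using ssl_stream = asio::ssl::stream<asio::ip::tcp::socket>;

void close_and_recreate(boost::optional<ssl_stream>& SSL_Socket,
                        asio::io_context& io_context,
                        asio::ssl::context& oSSLContext)
{
    boost::system::error_code ec;
    SSL_Socket->shutdown(ec);              // best-effort SSL shutdown
    SSL_Socket->lowest_layer().close(ec);  // close the underlying TCP socket

    // Destroy the used stream and construct a fresh one in place, so the
    // next async_handshake() starts from a brand-new SSL object.
    SSL_Socket.emplace(io_context, oSSLContext);
}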

C++ OpenSSL Fails to perform handshake when accepting in non-blocking mode. What is the proper way?

I'm trying to add OpenSSL to my application, which uses raw C sockets, and the only issue I'm having is with the SSL_accept / SSL_connect part of the code, which starts the key-exchange phase but does not seem to complete it on the server side.
I've had a look at countless websites and Q&As here on StackOverflow to work my way through the OpenSSL API, since this is basically the first time I'm attempting to add SSL to an application, but the one thing I could not find was how to properly manage failed handshakes.
Basically, process A acts as a server and listens for incoming connections. Once I run process B, which acts as a client, it successfully connects to process A, but SSL_accept (on the server) fails, with SSL_get_error() reporting SSL_ERROR_WANT_READ (2).
According to openssl handshake failed, the problem is "easily" worked around by calling SSL_accept within a loop until it finally returns 1 (i.e. it successfully connects and completes the handshake). However, I do not believe this is the proper way of doing things, as it looks like a dirty trick. The reason I believe it is a dirty trick is that I ran a small application I found at https://www.cs.utah.edu/~swalton/listings/articles/ (ssl_client and ssl_server) and, magically, everything worked just fine: there were no multiple calls to SSL_accept, and the handshake completed right away.
Here's some code where I'm accepting the SSL connection on the server:
if (SSL_accept(conn.ssl) == -1)
{
    fprintf(stderr, "Connection failed.\n");
    fprintf(stderr, "SSL State: %s [%d]\n", SSL_state_string_long(conn.ssl), SSL_state(conn.ssl));
    ERR_print_errors_fp(stderr);
    PrintSSLError(conn.ssl, -1, "SSL_accept");
    return -1;
}
else
{
    fprintf(stderr, "Connection accepted.\n");
    fprintf(stderr, "Server -> Client handshake completed\n");
}
This is the output of PrintSSLError:
SSL State: SSLv3 read client hello B [8465]
[DEBUG] SSL_accept : Failed with return -1
[DEBUG] SSL_get_error() returned : 2
[DEBUG] Error string : error:00000002:lib(0):func(0):system lib
[DEBUG] ERR_get_error() returned : 0
[DEBUG] errno returned : Resource temporarily unavailable
And here's the client side snippet which connects to the server:
if (SSL_connect(conn.ssl) == -1)
{
    fprintf(stderr, "Connection failed.\n");
    ERR_print_errors_fp(stderr);
    PrintSSLError(conn.ssl, -1, "SSL_connect");
    return -1;
}
else
{
    fprintf(stderr, "Connection established.\n");
    fprintf(stderr, "Client -> Server handshake completed\n");
    PrintSSLInfo(conn.ssl);
}
The connection is successfully established client-side (SSL_connect does not return -1) and PrintSSLInfo outputs:
Connection established.
Cipher: DHE-RSA-AES256-GCM-SHA384
SSL State: SSL negotiation finished successfully [3]
And this is how I wrap the C Socket into SSL:
SSLConnection conn;
conn.fd = fd;
conn.ctx = sslContext;
conn.ssl = SSL_new(conn.ctx);
SSL_set_fd(conn.ssl, conn.fd);
This code snippet resides within a function that takes the file descriptor of the accepted incoming connection on the raw socket and the SSL context to use.
To initialize the SSL Contexts I use TLSv1_2_server_method() and TLSv1_2_client_method(). Yes, I know that this will prevent clients from connecting if they do not support TLS 1.2 but this is exactly what I want. Whoever connects to my application will have to do it through my client anyway.
Either way, what am I doing wrong? I'd like to avoid loops in the authentication phase, so as to prevent possible hang-ups/slow-downs of the application due to unexpected infinite loops, since OpenSSL does not specify how many attempts it might take.
The workaround that worked, but that I'd like to avoid, is this:
while ((accept = SSL_accept(conn.ssl)) != 1)
And inside the while loop I check for the return code stored inside accept.
Things I've tried to work around the SSL_ERROR_WANT_READ error:
Added usleep(50) inside the while loop (still takes several cycles to complete)
Added SSL_do_handshake(conn.ssl) after SSL_connect and SSL_accept (didn't change anything on the end-result)
Had a look at the code shown on roxlu.com (search Google for "Using OpenSSL with memory BIOs - Roxlu") to guide me through the handshaking phase, but since I'm new to this, and I don't use BIOs directly in my code but simply wrap my native C sockets into SSL, it was kind of confusing. I'm also unable to rewrite the networking part of the application, as that would be too much work for me right now.
I've done some tests with the openssl command line as well to troubleshoot the issue, but it reports no error; the handshake appears to be successful, as no errors such as:
24069864:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:s3_pkt.c:656
appear. Here's the whole output of the command
openssl s_client -connect IP:Port -tls1_2 -prexit -msg
http://pastebin.com/9u1bfuf4
Things to note:
1. I'm using the latest OpenSSL version 1.0.2h
2. Application runs on a Unix system
3. Using self-signed certificates to encrypt the network traffic
Thanks everyone who's going to help me out.
Edit:
I forgot to mention that the server-side sockets are in non-blocking mode, since the application serves multiple clients in one go. Client-side, though, they are in blocking mode.
Edit2:
Leaving this here for future reference: jmarshall.com/stuff/handling-nbio-errors-in-openssl.html
You have clarified that the socket in question is non-blocking.
Well, that's your answer. Obviously, when the socket is in non-blocking mode, the handshake cannot complete immediately. The handshake involves an exchange of protocol packets between the client and the server, with each one having to wait to receive the response from its peer. This works fine when the socket is in its default blocking mode: the library simply read()s and write()s, blocking until the message is successfully read or written. This obviously can't happen when the socket is in non-blocking mode: each read() or write() either immediately succeeds or fails, if there's nothing to read or if the socket's output buffer is full.
The manual pages for SSL_accept() and SSL_connect() explain the procedure you must implement to execute the SSL handshake when the underlying socket is in non-blocking mode. Rather than repeating the whole thing here, you should read the manual pages yourself. The capsule summary is to use SSL_get_error() to determine whether the handshake actually failed or the library merely wants to read from or write to the socket; in the latter case, call poll() or select() accordingly, then call SSL_accept() or SSL_connect() again.
Any other approach, like sprinkling silly sleep() calls here and there, will result in an unreliable house of cards that fails randomly.
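A minimal sketch of the procedure those manual pages describe (this is not the poster's code; accept_nonblocking is a hypothetical helper):

#include <openssl/ssl.h>
#include <sys/select.h>

// Retry SSL_accept() only after select() says the socket is ready for
// the direction SSL_get_error() asked for.
int accept_nonblocking(SSL *ssl, int fd)
{
    for (;;)
    {
        int ret = SSL_accept(ssl);
        if (ret == 1)
            return 1;  /* handshake completed */

        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(fd, &fds);

        switch (SSL_get_error(ssl, ret))
        {
        case SSL_ERROR_WANT_READ:   /* wait until the socket is readable */
            if (select(fd + 1, &fds, NULL, NULL, NULL) <= 0)
                return -1;
            break;
        case SSL_ERROR_WANT_WRITE:  /* wait until the socket is writable */
            if (select(fd + 1, NULL, &fds, NULL, NULL) <= 0)
                return -1;
            break;
        default:
            return -1;              /* the handshake really failed */
        }
    }
}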

Pyro4 does not throw ConnectionClosedError

I'm running Pyro 4.31. I need to be able to catch an exception when the proxy object loses its connection to the remote object (i.e. when the server abruptly shuts down).
So I have code like this:
for ...:
    proxy = Pyro4.async(Pyro4.Proxy(pyro_uri))
    future_result[i] = proxy.run()
... some other code
for ....:
    try:
        future_result[i].wait()
    except ConnectionClosedError:
        ....
At some point this worked and a ConnectionClosedError was thrown when the connection was lost, but now it just keeps hanging on the wait command even when the server is down. I looked in the Pyro4 code, and I must say I don't see how a loss of connection could unblock the wait command: wait() blocks until an Event boolean is set to True, which is quite impossible when the server is down. If the server is still up but I shut down the Pyro daemon and abruptly kill the ongoing process, then a ConnectionClosedError is thrown; but I want it when the entire server goes down.
Not using an async object, this still gives the same problem (it just hangs):
proxy = Pyro4.Proxy(pyro_uri)
try:
    rs = proxy.run(mms)
except ConnectionClosedError:
    print "connection closed"
except TimeoutError:
    print "timeout error"
except CommunicationError:
    print "communication closed"
print "finished"
print str(rs)
So how can I detect when the connection is lost?
Just set Pyro4.config.COMMTIMEOUT to a proper value (the default is 0, which means wait infinitely).
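A minimal sketch in the question's own terms (pyro_uri and mms as above; the 5-second value is illustrative):

import Pyro4
from Pyro4.errors import ConnectionClosedError, TimeoutError

Pyro4.config.COMMTIMEOUT = 5.0  # seconds; 0 (the default) waits forever

proxy = Pyro4.Proxy(pyro_uri)
try:
    rs = proxy.run(mms)
except ConnectionClosedError:
    print "connection closed"
except TimeoutError:
    print "timeout error"

With a non-zero COMMTIMEOUT, a dead server surfaces as a TimeoutError or ConnectionClosedError instead of blocking forever.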

mysql reconnect c++

Right now I have a C++ client application that uses mysql.h to connect to a MySQL database, and I have to perform some logic in case there is a disconnect. I'm wondering whether this is the best way to reconnect to a MySQL database when my client gets disconnected.
bool MYSQL::Reconnect(const char *host, const char *user, const char *passwd, const char *db)
{
    bool out = false;
    pid_t command_pid = fork();
    if (command_pid == 0)
    {
        while (1)
        {
            sleep(1);
            if (mysql_real_connect(&m_mysql, host, user, passwd, db, 0, NULL, 0) == NULL)
            {
                fprintf(stderr, "Failed to connect to database: Error: %s\n",
                        mysql_error(&m_mysql));
            }
            else
            {
                m_connected = true;
                out = true;
                break;
            }
        }
        exit(0);
    }
    if (command_pid < 0)
        fprintf(stderr, "Could not fork process[reconnect]: %s\n", mysql_error(&m_mysql));
    return out;
}
Right now I take in all my parameters and perform a fork. The child process attempts to reconnect every second with a sleep() statement. Is this a good way to do this? Thanks
Sorry, but your code doesn't do what you think it does, Kaiser Wilhelm.
In essence, you're trying to treat a fork like a thread, which it is not.
When you fork a child, the parent process is completely cloned, including file and socket descriptors, which is how your program is connected to the MySQL database server. That is, both the parent and the child end up with their own copy of the same connection to the database server when you fork. I assume the parent only calls this Reconnect() method when it sees the connection drop, and stops using its copy of the now-defunct MySQL connection object, m_mysql. If so, the parent's copy of the connection is just as useless as the child's when you start the reconnect operation.
The thing is, the reverse is not true, either: once the child manages to reconnect to the database server, the parent's connection object remains defunct. Nothing the child does propagates back up to the parent. After the fork, the two processes are completely independent, except insofar as they might try to access some I/O resource they initially shared. For example, if you called this Reconnect() while the connection was up and continued using the connection in the parent, the child's attempts to talk to the DB server on the same connection would confuse either mysqld or libmysqlclient, likely causing data corruption or a crash.
As hinted above, one solution to this is to use threads instead of forking. Beware, however, of the many problems with using threads with the MySQL C API.
Given a choice, I'd rather use asynchronous I/O to do the background connection attempt within the application's main thread, but the MySQL C API doesn't allow that.
It seems you're trying to avoid blocking your main application thread while attempting the DB server reconnection. It may be that you can get away with doing it synchronously anyway by setting the connect timeout to 1 second, which is fine when the MySQL server is on the same machine or the same LAN as the client. If you can tolerate your main thread blocking for up to a second for a connection attempt to fail (the worst case happening when the server is on a separate machine and it's physically disconnected or firewalled), this would probably be a cleaner solution than threads. The connection attempt can fail much more quickly if the server machine is still running and the port isn't firewalled, such as when it is rebooting and the TCP/IP stack is [still] up.
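A sketch of that synchronous approach; TryReconnect is a hypothetical helper, with m_mysql and the connection parameters matching the question's code:

#include <mysql.h>

// Cap the connect timeout so a failed attempt blocks the caller for at
// most about a second instead of the library's default.
bool TryReconnect(MYSQL& m_mysql, const char* host, const char* user,
                  const char* passwd, const char* db)
{
    unsigned int timeout_seconds = 1;
    mysql_options(&m_mysql, MYSQL_OPT_CONNECT_TIMEOUT, &timeout_seconds);
    return mysql_real_connect(&m_mysql, host, user, passwd, db,
                              0, NULL, 0) != NULL;
}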
As far as I can tell, this doesn't do what you intended.
Logical issues
Reconnect doesn't "perform some logic in case there is a disconnect" at all.
It attempts to connect over and over again until it succeeds, then stops. That's it. The state of the connection is never checked again. If the connection drops, this code knows nothing about it.
Technical issues
Also pay close attention to the technical issues that Warren raises.
Sure, it's perfectly OK. You might want to think about replacing the while (1) loop with something like
while (NULL == mysql_real_connect( ... )) {
    sleep(1);
    ...
}
which is the kind of idiom that one learns by practice, but your code works just fine as far as I can see. Don't forget to put a counter inside the while loop.
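A sketch of that idiom with the suggested counter folded in; ReconnectWithRetries is a hypothetical free-function version of the question's method, and max_retries is an illustrative limit:

#include <mysql.h>
#include <unistd.h>

bool ReconnectWithRetries(MYSQL& m_mysql, const char* host, const char* user,
                          const char* passwd, const char* db)
{
    int retries = 0;
    const int max_retries = 30;
    while (NULL == mysql_real_connect(&m_mysql, host, user, passwd, db,
                                      0, NULL, 0))
    {
        if (++retries >= max_retries)
            return false;  // give up instead of looping forever
        sleep(1);
    }
    return true;
}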