Does ODBC support asynchronous calls? If it does, then can you tell me about any reference materials?
My preferred language is C++.
This MSDN article could be a starting point for you: Executing Statements (ODBC): Asynchronous Execution.
From the article:
ODBC 3.8 in the Windows 7 SDK introduced asynchronous execution on connection-related operations ... an application determined that the asynchronous operation was complete using the polling method. Beginning in the Windows 8 SDK, you can determine that an asynchronous operation is complete using the notification method.
I've wanted to know the exact same thing. An obvious workaround is to maintain a pool of threads that each perform synchronous ODBC calls and are signalled (and signal back) asynchronously.
Typically such things are implemented at another abstraction level of the application, or you roll your own. Just about anything that involves a blocking "open" action can spawn a thread whose job is to manage the open and raise a signal, or set a flag somewhere globally, when it completes.
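As an illustration of that pattern, here is a minimal sketch (C++11, names made up) that pushes a blocking call onto a worker thread with std::async and polls for completion; doBlockingOdbcQuery stands in for whatever synchronous ODBC work you actually do:

#include <future>
#include <chrono>
#include <iostream>
#include <string>

// Placeholder for a synchronous ODBC call (SQLConnect/SQLExecDirect/...).
std::string doBlockingOdbcQuery(const std::string& sql)
{
    // ... blocking driver calls happen here ...
    return "42 rows";
}

int main()
{
    // Run the blocking call on a worker thread.
    std::future<std::string> result =
        std::async(std::launch::async, doBlockingOdbcQuery,
                   std::string("SELECT COUNT(*) FROM t"));

    // The calling thread stays responsive and polls for completion.
    while (result.wait_for(std::chrono::milliseconds(50)) != std::future_status::ready) {
        // ... pump the UI / do other work here ...
    }

    std::cout << "query finished: " << result.get() << "\n";
}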
Some frameworks are pretty good about offering both flavors. Flex comes to mind, where it's helpful for it to play the tricks with the single browser/javascript/swf thread.
Asynchronous ODBC functions are a feature provided by the ODBC driver.
Before ODBC 3.8, only statement-related calls could be async-enabled. Starting with ODBC 3.8, connection-related function calls can also be made async-enabled.
Of course we can implement any missing functionality on the application side, but having it implemented in the driver makes things less painful for the application.
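For the statement-level case with the polling method, the usual pattern is to turn on SQL_ATTR_ASYNC_ENABLE on the statement handle and keep re-calling the function while it returns SQL_STILL_EXECUTING. A rough sketch, assuming the driver supports it (environment/connection setup and error handling are omitted, and the table name is made up):

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// hdbc is assumed to be an already-connected SQLHDBC.
void runAsyncStatement(SQLHDBC hdbc)
{
    SQLHSTMT hstmt = SQL_NULL_HSTMT;
    SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

    // Ask the driver to execute statement functions asynchronously.
    SQLSetStmtAttr(hstmt, SQL_ATTR_ASYNC_ENABLE,
                   (SQLPOINTER)SQL_ASYNC_ENABLE_ON, 0);

    SQLRETURN rc;
    do {
        // While the driver is still working, the same call is repeated
        // with the same arguments to poll for completion.
        rc = SQLExecDirect(hstmt, (SQLCHAR*)"SELECT COUNT(*) FROM my_table", SQL_NTS);
        if (rc == SQL_STILL_EXECUTING) {
            // ... do other useful work instead of blocking ...
        }
    } while (rc == SQL_STILL_EXECUTING);

    // ... fetch results with SQLFetch/SQLGetData as usual ...
    SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
}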
I am looking for a comprehensive list of drivers that clearly states whether each driver supports async calls out of the box. Please point me to such a list if anyone is aware of one.
Related
I am looking through the docs and somehow I am blind to, or missing, the important information on whether the C++ classes are thread-safe or not, the Session class specifically. Does anyone have experience with this, or has anyone actually found the info? Otherwise it seems I'll have to dig through the sources...
Thanks!
/ip/
I found the answer in the end. Yes, they are actually thread-safe.
In C++, AMQP Qpid has its own processing mechanism: it creates a small number of threads (I believe comparable to the number of cores), giving it thread-pool-like behavior. Thread safety is ensured by processing on only one thread at any given time, and some locking is done for asynchronous operations performed by code using the given DLL.
With the C++/CLI port it is rather worse: there are locks used in the .NET part, and I believe that some parts of the C++/CLI port are not as performant as they could be...
I spent a lot of time investigating why a multithreaded libcurl application crashes on Linux. I saw in forums that I have to use CURLOPT_NOSIGNAL to bypass this problem. Okay, no problem, but is there any information about what side effects it can create? If CURLOPT_NOSIGNAL = 0 is buggy, why does libcurl need this option at all nowadays, when even mobile devices have multicore processors, which is why so many applications use multiple threads to exploit the hardware's multitasking support?
By default the DNS resolution uses signals to implement the timeout logic, but this is not thread-safe: the signal handler could run on a different thread than the one that started the lookup.
When libcurl is not built with async DNS support (which means threaded resolver or c-ares) you must set the CURLOPT_NOSIGNAL option to 1 in your multi-threaded application.
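In practice that is one call per easy handle, set before the transfer starts. A minimal sketch (curl_global_init is assumed to have been called once, from a single thread, before any worker threads start):

#include <curl/curl.h>

// Called from each worker thread.
void fetch(const char* url)
{
    CURL* handle = curl_easy_init();
    if (!handle)
        return;

    curl_easy_setopt(handle, CURLOPT_URL, url);
    // Stop libcurl from using signals for timeouts, which is not
    // thread-safe when the resolver is not the threaded/c-ares one.
    curl_easy_setopt(handle, CURLOPT_NOSIGNAL, 1L);

    curl_easy_perform(handle);
    curl_easy_cleanup(handle);
}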
You can find more details related to this topic here:
http://curl.haxx.se/libcurl/c/libcurl-tutorial.html#Multi-threading
http://www.redhat.com/archives/libvir-list/2012-September/msg01960.html
http://curl.haxx.se/mail/lib-2013-03/0086.html
Why does libcurl need this option at all nowadays?
Mainly for legacy reasons, but also because not all applications use libcurl in a multi-threaded context.
This is still something that is actively discussed. See this recent discussion:
libcurl has no threads on its own that it needs to protect and it doesn't know what thread library/concept you use so it cannot on its own set the callbacks. This has been the subject for discussion before and there are indeed valid reasons to reconsider what we can and should do, but this is how things have been since forever and still are. [...] but I'm always open for further discussions!
My general idea is that a single-threaded component (the Lua interpreter) will always degrade the performance of a multi-threaded application that depends on it (a generic C++ application).
To circumvent this problem I'm thinking about an asynchronous approach on the interpreter side while keeping the C++ application multi-threaded. Basically, the Lua interpreter should push the entire script/file into a scheduler asynchronously (without waiting for the result), and it's up to a well-designed C++ multi-threaded messaging system to keep everything sequential.
The usual relationship is C/C++ function <-> Lua (with a sequential approach); I would like to have something like C++ messaging system <-> entire Lua script.
I'm also open to any kind of approach that can solve this and really help the mix between Lua and a C++ application designed for multi-threading.
Is this approach made possible by some piece of software?
EDIT
I need something "user-proof", and I need to implement this behaviour right in the C++/Lua API design.
One option is to implement communication to Lua as a coroutine. Messages are sent to C++ via coroutine.yield(messagedata), and results are sent back via lua_resume (see also: lua_newthread). You could even wrap your functions to provide a nicer event API.
function doThing(thing, other, data)
    return coroutine.yield("doThing", thing, other, data)
end
You can still only have one thread running the Lua interpreter at any given time (you will have to do locking), but you can have multiple such coroutines running concurrently.
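To make the host side of that exchange concrete, here is a minimal sketch (Lua 5.1 C API assumed; lua_resume gained a from parameter in 5.2 and an nresults out-parameter in 5.4, and error handling is omitted):

extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}
#include <iostream>
#include <string>

int main()
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    // Create a coroutine and load the script into it.
    lua_State* co = lua_newthread(L);
    luaL_loadstring(co,
        "local msg, a = coroutine.yield('doThing', 42)\n"
        "print('script resumed with', msg, a)");

    // First resume: runs the script until it yields.
    if (lua_resume(co, 0) == LUA_YIELD) {
        // The yielded values ('doThing', 42) are now on co's stack.
        std::string request = lua_tostring(co, 1);
        int arg = (int)lua_tointeger(co, 2);
        std::cout << "script requested: " << request << " " << arg << "\n";
        lua_settop(co, 0);

        // Do the work (possibly on another thread), then resume the
        // script with the results, which become yield's return values.
        lua_pushstring(co, "result");
        lua_pushinteger(co, arg * 2);
        lua_resume(co, 2);
    }

    lua_close(L);
    return 0;
}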
Concurrency in Lua is a topic that has many many solutions. Here is a resource:
http://lua-users.org/wiki/MultiTasking
You can actually make this easy for yourself, since you do not have to run Lua itself multithreaded, which would introduce a number of additional issues.
The obvious solution is to run Lua in a separate thread but provide only a thin API to Lua, in which every single API call immediately either forks a new thread/process, uses some sort of message passing for asynchronous data transfer, or even uses short-duration semaphores to read/write some values. This solution requires some sort of idle loop or event listeners unless you want to do busy waiting...
Another option that I think is still quite easy to implement with a new API is the approach of node.js:
Run Lua in a separate thread.
Make your whole API out of functions that only take callbacks. These callbacks are queued and can be scheduled by your C++ application.
You can even, but do not have to, provide callback wrappers for the standard Lua API.
Example:
local version;
Application.requestVersionNumber(function(val) version = val; end)
Of course this example is ridiculously trivial, but you get the idea.
One thing you should know, though, is that with the callback approach the scripts quickly become deeply nested if you are not careful. While that's not bad for performance, they can get hard to read.
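One way to wire such a callback-only API up on the C++ side (purely illustrative; the function and queue names are made up) is to stash the Lua callback in the registry with luaL_ref and let the C++ scheduler invoke it later with lua_rawgeti/lua_pcall:

extern "C" {
#include <lua.h>
#include <lauxlib.h>
}
#include <functional>
#include <queue>

// Hypothetical queue of deferred jobs that the C++ application drains
// on its own schedule (single-threaded access to the lua_State assumed).
std::queue<std::function<void(lua_State*)>> g_jobs;

// Lua-callable: Application.requestVersionNumber(callback)
static int l_requestVersionNumber(lua_State* L)
{
    luaL_checktype(L, 1, LUA_TFUNCTION);
    // Keep a reference to the callback so it survives until we run it.
    int ref = luaL_ref(L, LUA_REGISTRYINDEX);

    // Queue the work; the C++ side decides when it actually runs.
    g_jobs.push([ref](lua_State* S) {
        lua_rawgeti(S, LUA_REGISTRYINDEX, ref);  // push the callback
        lua_pushinteger(S, 42);                  // the "version number"
        lua_pcall(S, 1, 0, 0);                   // callback(42)
        luaL_unref(S, LUA_REGISTRYINDEX, ref);   // release the reference
    });
    return 0;  // returns immediately; the script continues
}

The scheduler then simply pops g_jobs and calls each job with the interpreter's lua_State whenever it decides the script is allowed to run.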
We have a project whose core functionality is implemented using ACE and architected around its Reactor. We want to add a small web interface using Wt.
So the question is: is it possible to replace the main loop of the Wt interface with the ACE reactor?
The only idea that comes to my mind (admittedly a bad one) is having a fast timer on the Reactor side which somehow invokes the Wt part.
The other way round, the reactor can be run 'tick by tick' using its handle_events method, but I can't find an equivalent on the Wt side.
note:
The main concern behind this question is about threads. We don't have threads, the code is not thread safe, and it would be a lot simpler for us if the HMI could be running on the same thread as the rest of the application. But having 2 blocking calls, one to theReactor->run_reactor_event_loop(), and the other to Wt::WRun() is a problem!
That can work with some modifications to the Wt connector. Wt can be compiled without thread support, so in the connector there must be a select() loop of some kind. What you need is the ability to hook into that loop with a timer.
Are you talking about the http connector? That's implemented with boost.asio, so an asio deadline_timer with an async_wait that executes theReactor->run_reactor_event_loop() may be all you need. You may even find a different idea when you dive into the boost.asio documentation...
It could even work without modifications to the connector. It's not documented, but Server::instance()->service() (in src/http/Server.h) returns you the asio service that you need to implement this.
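A rough, untested sketch of that timer idea (the 10 ms interval is arbitrary, and the deadline_timer would be constructed on the asio service obtained from Server::instance()->service()): a boost::asio timer that re-arms itself and pumps the ACE reactor one non-blocking tick at a time.

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <ace/Reactor.h>
#include <ace/Time_Value.h>

// Handle whatever the reactor has pending without blocking, then re-arm.
void pumpReactor(boost::asio::deadline_timer* timer,
                 ACE_Reactor* reactor,
                 const boost::system::error_code& ec)
{
    if (ec)
        return;

    ACE_Time_Value zero(0, 0);
    reactor->handle_events(zero);   // the 'tick by tick' call mentioned above

    timer->expires_from_now(boost::posix_time::milliseconds(10));
    timer->async_wait(boost::bind(&pumpReactor, timer, reactor,
                                  boost::asio::placeholders::error));
}

You would kick this off once after Wt has started, by calling pumpReactor with a success error code and letting it re-schedule itself from then on.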
More info -> Wt's mailing list?
I'm working on an instant messenger client in C++ (Win32) and I'm experimenting with different asynchronous socket models. So far I've been using WSAAsyncSelect for receiving notifications via my main window. However, I've been experiencing some unexpected results, with Winsock spawning an additional 5-6 threads (on top of the initial thread created when calling WSAAsyncSelect) for one single socket.
I have plans to revamp the client to support additional protocols via DLLs, and I'm afraid that my current solution won't be suitable, based on my experiences with WSAAsyncSelect and my reluctance to mix network code with UI code (in the message loop).
I'm looking for advice on what a suitable asynchronous socket model could be for a multi-protocol IM client which needs to be able to handle roughly 10-20+ connections (depending on the number of protocols, protocol design, etc.), while not using an excessive number of threads -- I am very interested in performance and keeping resource usage down.
I've been looking at IO Completion Ports, but from what I've gathered, they seem like overkill. I'd very much appreciate some input on what a suitable socket solution could be!
Thanks in advance! :-)
There are four basic ways to handle multiple concurrent sockets.
1. Multiplexing, that is, using select() to poll the sockets.
2. AsyncSelect, which is basically what you're doing with WSAAsyncSelect.
3. Worker threads, creating a single thread for each connection.
4. IO Completion Ports, or IOCP. dp mentions them above, but basically they are an OS-specific way to handle asynchronous I/O, which has very good performance, but it is a little more confusing.
Which you choose often depends on where you plan to go. If you plan to port the application to other platforms, you may want to choose #1 or #3, since select is not terribly different from the models used on other OSes, and most other OSes also have the concept of threads (though they may operate differently). IOCP is typically Windows-specific (although Linux now has some async I/O functions as well).
If your app is Windows only, then you basically want to choose the best model for what you're doing. This would likely be either #3 or #4. #4 is the most efficient, as it calls back into your application (similar to WSAAsyncSelect, but with better performance and fewer issues).
The big thing you have to deal with when using threads (either IOCP or worker threads) is marshaling the data back to a thread that can update the UI, since you can't call UI functions from worker threads. Ultimately, this will involve some messaging back and forth in most cases.
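On Windows the usual way to do that marshaling is to post an application-defined message to the UI thread's window and let the window procedure pick up the data; a rough sketch (the message id and payload struct are made up for illustration):

#include <windows.h>
#include <cstring>

// Hypothetical application-defined message and payload.
const UINT WM_APP_SOCKET_DATA = WM_APP + 1;

struct SocketData {
    char buffer[512];
    int  length;
};

// Called on a worker/IOCP thread: hand received data off to the UI thread.
void postToUiThread(HWND uiWindow, const char* data, int length)
{
    SocketData* payload = new SocketData();
    payload->length = (length < (int)sizeof(payload->buffer))
                          ? length : (int)sizeof(payload->buffer);
    std::memcpy(payload->buffer, data, payload->length);

    // PostMessage is safe to call from any thread; the UI thread's window
    // procedure receives WM_APP_SOCKET_DATA, updates the UI from the
    // payload, and is responsible for deleting it.
    PostMessage(uiWindow, WM_APP_SOCKET_DATA, 0,
                reinterpret_cast<LPARAM>(payload));
}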
If you were developing this in managed code, I'd tell you to look at Jeffrey Richter's AsyncEnumerator, but you've chosen C++, which has its pros and cons. Lots of people have written various network libraries for C++; maybe you should spend some time researching some of them.
Consider using the ASIO library you can find in Boost (www.boost.org).
Just use synchronous models. Modern operating systems handle multiple threads quite well. Async IO is really needed in rare situations, mostly on servers.
In some ways IO Completion Ports (IOCP) are overkill but to be honest I find the model for asynchronous sockets easier to use than the alternatives (select, non-blocking sockets, Overlapped IO, etc.).
The IOCP API could be clearer, but once you get past that it's actually easier to use, I think. Back then, the biggest obstacle was platform support (it needed an NT-based OS; i.e., Windows 9x did not support IOCP). With that restriction long gone, I'd consider it.
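For reference, the core of the model is just a completion port plus a small pool of worker threads blocking in GetQueuedCompletionStatus; a bare-bones sketch of that skeleton (error handling and the actual overlapped WSARecv/WSASend calls are left out):

#include <windows.h>

// Worker thread: block until some I/O completes on the port, then dispatch.
DWORD WINAPI workerThread(LPVOID param)
{
    HANDLE port = static_cast<HANDLE>(param);
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;          // per-socket context given at association
        OVERLAPPED* overlapped = 0; // per-operation context
        BOOL ok = GetQueuedCompletionStatus(port, &bytes, &key,
                                            &overlapped, INFINITE);
        if (!ok && overlapped == 0)
            break;                  // port closed or fatal error
        // ... use key/overlapped to find the connection and the
        //     operation (recv/send) that just completed, and act on it ...
    }
    return 0;
}

int main()
{
    // One port for all sockets; 0 lets the OS pick the concurrency level.
    HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    // A small pool of workers is usually enough, even for many connections.
    for (int i = 0; i < 2; ++i)
        CreateThread(NULL, 0, workerThread, port, 0, NULL);

    // Each socket gets associated with the port once, e.g.:
    //   CreateIoCompletionPort((HANDLE)sock, port, (ULONG_PTR)context, 0);
    // after which overlapped completions arrive on the worker threads.

    Sleep(INFINITE); // placeholder for the application's real main loop
    return 0;
}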
If you do decide to use IOCP (which, IMHO, is the best option if you're writing for Windows) then I've got some free code available which takes away a lot of the work that you need to do.
Latest version of the code and links to the original articles are available from here.
And my views on how my framework compares to Boost::ASIO can be found here: http://www.lenholgate.com/blog/2008/09/how-does-the-socket-server-framework-compare-to-boostasio.html.