Asynchronous function calls in C++

This is some code I'm using in Java for making asynchronous function calls:
public class AsyncLogger
{
    public static AsyncLogger asyncLog = null;
    public static ExecutorService executorService = Executors.newSingleThreadExecutor();

    public static AsyncLogger GetAsyncClass()
    {
        if (asyncLog == null)
        {
            asyncLog = new AsyncLogger();
        }
        return asyncLog;
    }

    public void WriteLog(final String logMesg)
    {
        executorService.execute(new Runnable()
        {
            public void run()
            {
                WriteLogDB(logMesg);
            }
        });
    }

    public void ShutDownAsync()
    {
        executorService.shutdown();
    }
}
This is a singleton class with a static ExecutorService; WriteLogDB is called as an asynchronous function, so I can process my code in WriteLogDB asynchronously without affecting the main flow.
Can I get a C++ equivalent of this?

std::thread([logMesg](){ WriteLogDB(logMesg); }).detach();
or if you need to wait for a result:
auto result = std::async(std::launch::async, [logMesg](){ WriteLogDB(logMesg); });
// do stuff while that's happening
result.get();
If you're stuck with a pre-2011 compiler, then there are no standard thread facilities; you'll need to use a third-party library like Boost, or roll your own platform-specific threading code. Boost has a thread class similar to the new standard class:
boost::thread(boost::bind(WriteLogDB, logMesg)).detach();

You can make asynchronous function calls using std::async from C++11.
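If you want something structurally closer to the Java singleton (one worker thread draining a queue, like newSingleThreadExecutor), a minimal sketch could look like the following. WriteLogDB is assumed to exist as in the question; every other name is illustrative.

// Minimal sketch of a single-worker-thread logger, roughly mirroring the Java singleton.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

void WriteLogDB(const std::string& msg); // assumed to exist, as in the question

class AsyncLogger {
public:
    static AsyncLogger& instance() {             // GetAsyncClass() equivalent
        static AsyncLogger logger;               // thread-safe initialization in C++11
        return logger;
    }
    void WriteLog(std::string msg) {             // enqueue and return immediately
        {
            std::lock_guard<std::mutex> lock(m_);
            queue_.push(std::move(msg));
        }
        cv_.notify_one();
    }
    void ShutDown() {                            // ShutDownAsync() equivalent
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_one();
        if (worker_.joinable()) worker_.join();
    }
private:
    AsyncLogger() : worker_([this] { run(); }) {}
    ~AsyncLogger() { ShutDown(); }
    void run() {                                 // single worker thread, like newSingleThreadExecutor
        for (;;) {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
            if (queue_.empty()) return;          // done_ set and nothing left to write
            std::string msg = std::move(queue_.front());
            queue_.pop();
            lock.unlock();
            WriteLogDB(msg);                     // runs off the caller's thread
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
    bool done_ = false;
    std::thread worker_;                         // must be declared last so the other members exist first
};

Usage would then be AsyncLogger::instance().WriteLog("message"); with AsyncLogger::instance().ShutDown(); before exit.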

Related

producer consumer using msdn libraries in C++

I am trying to re-write the classic producer-consumer algorithm using Windows libraries in C++. The snippet below was copied from a Java sample. Does anyone know the equivalent of lock.notify and lock.wait using Windows libraries such as EnterCriticalSection?
private Object lock = new Object();

public void produce() throws InterruptedException {
    int value = 0;
    while (true)
    {
        synchronized (lock)
        {
            while (list.size() == LIMIT)
            {
                lock.wait();
            }
            list.add(value++);
            lock.notify();
        }
    }
}
thx!
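Not an authoritative answer, but as a hedged sketch: on Vista and later, the closest Win32 analogue of lock.wait()/lock.notify() is a CONDITION_VARIABLE used together with a CRITICAL_SECTION (SleepConditionVariableCS / WakeConditionVariable). The list and LIMIT names below mirror the Java snippet and are assumed to exist elsewhere.

// Sketch only: Win32 (Vista+) equivalent of synchronized/wait/notify.
#include <windows.h>
#include <deque>

CRITICAL_SECTION cs;
CONDITION_VARIABLE notFull;   // signaled when space becomes available
CONDITION_VARIABLE notEmpty;  // signaled when an item is added
std::deque<int> list;
const size_t LIMIT = 10;

void produce()
{
    int value = 0;
    for (;;)
    {
        EnterCriticalSection(&cs);                              // like synchronized (lock)
        while (list.size() == LIMIT)
            SleepConditionVariableCS(&notFull, &cs, INFINITE);  // like lock.wait()
        list.push_back(value++);
        WakeConditionVariable(&notEmpty);                       // like lock.notify()
        LeaveCriticalSection(&cs);
    }
}

// Somewhere during startup:
//   InitializeCriticalSection(&cs);
//   InitializeConditionVariable(&notFull);
//   InitializeConditionVariable(&notEmpty);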

Best way to protect a callback function from deconstructed classes

What would be a good/best way to ensure thread safety for callback objects? Specifically, I'm trying to prevent a callback object from being destroyed before all the threads are finished with it.
It is easy to write client code that ensures thread safety, but I'm looking for a way that is a bit more streamlined. For example, using a factory object to generate the callback objects. The trouble then lies in tracking the usage of the callback object.
Below is an example code that I'm trying to improve.
class CHandlerCallback
{
public:
    CHandlerCallback(){ ... };
    virtual ~CHandlerCallback(){ ... };
    virtual void OnBegin(UINT nTotal ){ ... };
    virtual void OnStep (UINT nIncrmt){ ... };
    virtual void OnEnd(UINT nErrCode){ ... };
protected:
    ...
};

static DWORD WINAPI ThreadProc(LPVOID lpParameter)
{
    CHandler* phandler = (CHandler*)lpParameter;
    phandler->ThreadProc();
    return 0;
};

class CHandler
{
public:
    CHandler(CHandlerCallback * sink = NULL) {
        m_pSink = sink;
        // Start the server thread. (ThreadProc)
    };
    ~CHandler(){...};
    VOID ThreadProc() {
        ... do stuff
        if (m_pSink) m_pSink->OnBegin(..)
        while (not exit) {
            ... do stuff
            if (m_pSink) m_pSink->OnStep(..)
            ... do stuff
        }
        if (m_pSink) m_pSink->OnEnd(..);
    };
private:
    CHandlerCallback * m_pSink;
};

class CSpecial1Callback: public CHandlerCallback
{
public:
    CSpecial1Callback(){ ... };
    virtual ~CSpecial1Callback(){ ... };
    virtual void OnStep (UINT nIncrmt){ ... };
};
class CSpecial2Callback: public CHandlerCallback...
Then the code that runs everything in a way similar to the following:
int main() {
    CSpecial2Callback* pCallback = new CSpecial2Callback();
    CHandler handler(pCallback);
    // Right now the client waits for CHandler to finish before deleting
    // pCallback
}
Thanks!
If you're using C++11 you can use smart pointers to keep the object around until the last reference to it disappears; see std::shared_ptr. If you're not on C++11 you could use Boost's version. If you don't want to include that library and aren't on C++11, you can resort to keeping an internal count of the threads using the object and destroying it when that count reaches zero. Note that tracking the counter yourself can be difficult, as you'll need atomic updates to the counter.
shared_ptr<CSpecial2Callback> pCallback(new CSpecial2Callback());
CHandler handler(pCallback); // You'll need to change this to take a shared_ptr
... //Rest of code -- when the last reference to
... //pCallback is used up it will be destroyed.
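A hedged sketch of what that signature change could look like, keeping the names from the question (only the constructor and the sink member are shown):

// Sketch: CHandler keeps its own shared_ptr to the callback, so the callback
// cannot be destroyed while the handler's worker thread is still using it.
#include <memory>   // or <boost/shared_ptr.hpp> before C++11

class CHandler
{
public:
    explicit CHandler(std::shared_ptr<CHandlerCallback> sink = std::shared_ptr<CHandlerCallback>())
        : m_pSink(sink)
    {
        // Start the server thread as before; the thread shares ownership via m_pSink.
    }
private:
    std::shared_ptr<CHandlerCallback> m_pSink;   // replaces the raw CHandlerCallback*
};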

Why do I have the option to *not* call Concurrency::agent::done inside run?

This is in the context of the Microsoft C++ Concurrency API.
There's a class called agent (under the Concurrency namespace); it's basically a state machine: you derive from it and implement the pure virtual agent::run.
Now, it is your responsibility to call agent::start, which will put it in a runnable state. You then call agent::wait*, or any of its variants, to actually execute the agent::run method.
But why do we have to call agent::done within the body? I mean, the obvious answer is that agent::wait* will wait until done is signaled or the timeout has elapsed, but...
What were the designers intending? Why not have the agent enter the done state when agent::run returns? That's what I want to know. Why do I have the option to not call done? The wait methods throw exceptions if the timeout has elapsed.
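For context, the pattern being discussed looks roughly like this (a minimal sketch assuming the ConcRT <agents.h> header; my_agent and the body of run are illustrative, only agent, start, wait and done come from the API):

#include <agents.h>
#include <iostream>

// Minimal sketch of the start/run/done/wait cycle under discussion.
class my_agent : public Concurrency::agent
{
protected:
    void run()                        // the pure virtual you implement
    {
        std::cout << "doing work\n";
        done();                       // without this, agent::wait(&a) never returns
    }
};

int main()
{
    my_agent a;
    a.start();                        // puts the agent in the runnable state
    Concurrency::agent::wait(&a);     // blocks until the agent signals done()
    return 0;
}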
About the only reason I can see is that it would let you state you are done(), then do more work (say, cleanup) that you don't want your consumer to have to wait on.
Now, they could have done this:
private: void agent::do_run() {
    run();
    if (status() != agent_done)
        done();
}
then have their framework call do_run() instead of run() directly (or the equivalent).
However, you'll note that you yourself can do this.
class myagent: public agent {
protected:
    virtual void run() final override { /* see do_run above, except call do_run in it */ }
    virtual void do_run() = 0;
};
and poof, if your do_run() fails to call done(), the wrapping function does it for you. If this second virtual function overhead is too high for you:
template<typename T>
class myagent: public agent {
private:
    void call_do_run()
    {
        static_cast<T*>(this)->do_run();
    }
protected:
    virtual void run() final override { /* see do_run above, but call_do_run() */ }
};
That's the CRTP, which lets you do compile-time dispatch. Use:
class foo: public myagent<foo>
{
public:
    void do_run() { /* code */ }
};
... /shrug

wxHTTP & Threads

I have some problems using wxHTTP inside a thread. I have created the class below, which derives from wxThread, to use wxHTTP.
class Thread : public wxThread {
private:
    wxHTTP get;
public:
    Thread()
    {
    }
    ~Thread()
    {
    }
    virtual ExitCode Entry()
    {
        get.SetHeader(wxT("Content-Type"), wxT("text/html; charset=utf-8"));
        get.Connect(wxT("www.mysite.com"));
        get.SetTimeout(1);
        wxInputStream *httpStream = get.GetInputStream(wxT("/script.php?name=aaa&text=blabla"));
        wxDELETE(httpStream);
        get.Close();
        return 0;
    }
};
I create this thread and run it (threads are created and run, and everything is fine with them). Unfortunately, wxHTTP doesn't seem to work properly with threads (even my firewall doesn't ask me about the connection). Is there any way to create a wxHTTP connection inside a thread?
Here is the answer (as requested by @bluefeet):
wxHTTP inherits from wxSocketBase, and the wxSocketBase documentation has this quote:
When using wxSocket from multiple threads, even implicitly (e.g. by using wxFTP or wxHTTP in another thread) you must initialize the sockets from the main thread by calling Initialize() before creating the other ones.
See here for more explanation
Call
wxSocketBase::Initialize();
in your app's OnInit function, and wxURL/wxHTTP functions should work from threads.
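For illustration, a minimal sketch of where that call goes (MyApp stands in for your existing wxApp subclass):

#include <wx/app.h>
#include <wx/socket.h>

// MyApp is assumed to be your existing wxApp subclass.
class MyApp : public wxApp
{
public:
    virtual bool OnInit()
    {
        wxSocketBase::Initialize();   // must run on the main thread, before any
                                      // worker thread creates a wxHTTP/wxFTP object
        // ... create frames, start worker threads, etc.
        return true;
    }
};

IMPLEMENT_APP(MyApp)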

How to schedule a member function for execution in boost::threadpool

I found a threadpool which doesn't seem to be in boost yet, but I may be able to use it for now (unless there is a better solution).
I have several million small tasks that I want to execute concurrently and I wanted to use a threadpool to schedule the execution of the tasks. The documentation of the threadpool provides (roughly) this example:
#include "threadpool.hpp"
using namespace boost::threadpool;
// A short task
void task()
{
// do some work
}
void execute_with_threadpool(int poolSize, int numTasks)
{
// Create a thread pool.
pool tp(poolSize);
for(int i = 0; i++; i < numTasks)
{
// Add some tasks to the pool.
tp.schedule(&task);
}
// Leave this function and wait until all tasks are finished.
}
However, the example only allows me to schedule non-member functions (or tasks). Is there a way that I can schedule a member function for execution?
Update:
OK, supposedly the library allows you to schedule a Runnable for execution, but I can't figure out where the Runnable class that I'm supposed to inherit from is defined.
template<typename Pool, typename Runnable>
bool schedule(Pool& pool, shared_ptr<Runnable> const & obj);
Update2:
I think I found out what I need to do: I have to make a runnable that takes whatever parameters are necessary (including a pointer to the object whose function will be called), and then I use the static schedule function to schedule the runnable on the given threadpool:
class Runnable
{
private:
    MyClass* _target;
    Data* _data;
public:
    Runnable(MyClass* target, Data* data)
    {
        _target = target;
        _data = data;
    }
    ~Runnable(){}
    void run()
    {
        _target->doWork(_data);
    }
};
Here is how I schedule it within MyClass:
void MyClass::doWork(Data* data)
{
    // do the work
}

void MyClass::produce()
{
    boost::threadpool::schedule(myThreadPool, boost::shared_ptr<Runnable>(new Runnable(myTarget, new Data())));
}
However, the adaptor from the library has a bug in it:
template<typename Pool, typename Runnable>
bool schedule(Pool& pool, shared_ptr<Runnable> const & obj)
{
    return pool->schedule(bind(&Runnable::run, obj));
}
Note that it takes a reference to a Pool but it tries to call it as if it was a pointer to a Pool, so I had to fix that too (just changing the -> to a .).
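For clarity, the corrected adaptor then reads:

template<typename Pool, typename Runnable>
bool schedule(Pool& pool, shared_ptr<Runnable> const & obj)
{
    return pool.schedule(bind(&Runnable::run, obj));  // '.' instead of '->'
}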
To schedule any function or member function, use Boost.Bind or Boost.Lambda (in that order). You can also consider special libraries for your situation. I can recommend Intel Threading Building Blocks or, in case you use VC2010, the Microsoft Parallel Patterns Library.
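As a hedged sketch of the Boost.Bind approach with the threadpool and names from the question (myThreadPool, MyClass::doWork and Data are assumed to exist):

#include <boost/bind.hpp>

// Schedule a member function: bind the object pointer as the implicit 'this'
// argument, producing the nullary functor that the pool's schedule() expects.
void schedule_member_function_example()
{
    MyClass myObject;
    Data* data = new Data();
    myThreadPool.schedule(boost::bind(&MyClass::doWork, &myObject, data));
}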
EDIT:
I've never used this library or heard anything bad about it, but it has been around for a while and still isn't included in Boost. I would check why.
EDIT 2:
Another option is Boost.Asio. It's primarily a networking library, but it has a scheduler that you can use. I would use this multithreading approach; instead of using asynchronous network operations, just schedule your tasks with boost::asio::io_service::post().
However, as it turns out, I can't use that boost thread pool, because I am mixing native C++ (DLL), C++/CLI (DLL) and .NET code: I have a C++/CLI library that wraps a native C++ library which in turn uses boost::thread. Unfortunately, that results in a BadImageFormatException at runtime (which has previously been discussed by other people):
The problem is that the static boost thread library tries to hook the
native win32 PE TLS callbacks in order to ensure that the thread-local
data used by boost thread is cleaned up correctly. This is not
compatible with a C++/CLI executable.
This solution is what I was able to implement using the information at http://think-async.com/Asio/Recipes. I tried implementing this recipe and found that the code worked on Windows but not on Linux. I was unable to figure out the problem, but searching the internet turned up the key, which was to make the work object a smart pointer (a scoped_ptr here) inside the code block. I've included the void task() that the user wanted; for my example I was able to create a convenience function and pass pointers into the function that does the work. For my case, I create a thread pool that uses boost::thread::hardware_concurrency() to get the possible number of threads. I've used the recipe below with as many as 80 tasks on 15 threads.
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/scoped_ptr.hpp>

// A short task
void task()
{
    // do some work
}

void execute_with_threadpool( int numTasks,
                              int poolSize = boost::thread::hardware_concurrency() )
{
    boost::asio::io_service io_service;
    boost::thread_group threads;
    {
        // The work object keeps io_service::run() from returning while tasks are still coming.
        boost::scoped_ptr< boost::asio::io_service::work > work( new boost::asio::io_service::work(io_service) );
        for(int t = 0; t < poolSize; t++)
        {
            threads.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));
        }
        for(int t = 0; t < numTasks; t++)
        {
            // ++_number_of_jobs;  // bookkeeping from the original author's code (not defined in this snippet)
            io_service.post(boost::bind(task));
        }
    }
    // work has gone out of scope, so run() returns once the queued tasks are done.
    threads.join_all();
}
Figured it out: you must have a run() method defined, and this is the easiest way:
class Command
{
public:
    Command() {}
    ~Command() {}
    void run() {}
};
In main(), tp is your threadpool:
shared_ptr<Command> pc(new Command());
tp.schedule(bind(&Command::run, pc));
Done.