Program fails when using my thread implementation? - c++

This is my first post on your forum, and I'm not quite sure if I'm asking in the right place - can I post C++ questions in this section, or is this just a general programming section?
Anyway, enough of my noobish doubts, let's get to my problem :).
In my .h file (thread.h) I have a struct (RUNNABLE) and a class (thread).
'RUNNABLE' is like the interface you implement: you override its pure virtual 'run()' function. You then create a 'thread' instance and call its 'start(void* ptr)' function to start the thread, passing in an object whose base class is RUNNABLE as the parameter.
This all seems great, but my implementation crashes my program.
Here's thread.h:
#include <windows.h>   // for DWORD, HANDLE
#include <process.h>

struct RUNNABLE{
    virtual void run() = 0;
};

class thread{
public:
    void start(void *ptr){
        DWORD thr_id;
        HANDLE thr_handl = (HANDLE)_beginthreadex(NULL, 0, thread_proc, ptr, 0, (unsigned int*)&thr_id);
    }
private:
    static unsigned int __stdcall thread_proc(void *param){
        ((RUNNABLE*)param)->run();
        ExitThread(0);
        return 0;
    }
};
And this is my example implementation:
class test : RUNNABLE{
    virtual void run(){
        while(true){
            dbText(0, 0, "hej");
        }
    }
};
test *obj = new test();
thread th;
th.start(obj);
And the program simply crashes when I open it.
Help is appreciated :).
Best regards,
Benjamin.

test *obj = new test();
This is a memory management problem. When does obj get deleted? It can take a while for the thread to actually start running, so the object needs to stay around long enough. I'm guessing you've got some code that's not in the snippet that is deleting that object again - too soon.
The only code that can safely and accurately delete the object is the thread itself.
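For illustration, here is a minimal sketch (it assumes the object is always allocated with new and is never touched by the creating thread after start()) in which thread_proc takes ownership and deletes the object once run() returns:
static unsigned int __stdcall thread_proc(void *param){
    RUNNABLE *runnable = (RUNNABLE*)param;
    runnable->run();
    delete runnable;   // the thread is the last user, so it cleans up
    return 0;
}
For that delete through a RUNNABLE* to be safe, RUNNABLE would also need a virtual destructor.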

I tested it and it runs just fine; it's probably that dbText thingy of yours that crashes. I replaced it with printf("hejdå").

This runs fine for me:
#include <iostream>
#include <process.h>
#include <windows.h>

struct RUNNABLE{
    virtual void run() = 0;
};

class thread{
public:
    void start(void *ptr){
        DWORD thr_id;
        HANDLE thr_handl = (HANDLE)_beginthreadex(NULL, 0, thread_proc, ptr,
                                                  0, (unsigned int*)&thr_id);
    }
private:
    static unsigned int __stdcall thread_proc(void *param){
        ((RUNNABLE*)param)->run();
        std::cout << "ending thread\n";
        ::ExitThread(0);
        return 0;
    }
};

class test : RUNNABLE{
    virtual void run(){
        for(unsigned int u=0; u<10; ++u){
            std::cout << "thread\n";
        }
    }
};

int main()
{
    test *obj = new test();
    thread th;
    th.start(obj);
    std::cout << "giving thread some time\n";
    ::Sleep(5000);
    std::cout << "ending process\n";
    return 0;
}
However, you should probably call _endthreadex() (instead of ExitThread()) to end the thread.
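For reference, that last suggestion would look roughly like this (the trailing return is never reached once _endthreadex runs; it just keeps the compiler quiet):
static unsigned int __stdcall thread_proc(void *param){
    ((RUNNABLE*)param)->run();
    std::cout << "ending thread\n";
    _endthreadex(0);   // CRT-aware counterpart of _beginthreadex, unlike ExitThread()
    return 0;
}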

Related

segmentation fault while using pthreads in class

I have the following code that gets a core dump error. Each C instance creates its own thread and then runs. I guess there is something wrong with the static function and the class member "count". When I comment out the code that prints it, no fault occurs.
#include <iostream>
#include <pthread.h>
using namespace std;

class C {
public:
    int count;
    C(int c_): count(c_){}
public:
    void *hello(void)
    {
        std::cout << "Hello, world!" << std::endl;
        std::cout << count; // bug here!!!
        return 0;
    }
    static void *hello_helper(void *context)
    {
        return ((C *)context)->hello();
    }
    void run() {
        pthread_t t;
        pthread_create(&t, NULL, &C::hello_helper, NULL);
    }
};

int main() {
    C c(2);
    c.run();
    C c2(4);
    c2.run();
    while(true);
    return 0;
}
Decided to write an answer. You were calling hello_helper with a context of NULL, based on how you were creating your thread. Calling a non-virtual member function through a null pointer is undefined behaviour in C++, but in practice it often appears to work until a member is actually accessed.
In your case, by adding the line that prints count, you are now accessing a member variable through a null pointer, which is a big no-no.
Here's an example of what you were getting away with:
#include <iostream>

class Rebel
{
public:
    void speak()
    {
        std::cout << "I DO WHAT I WANT!" << std::endl;
    }
};

int main()
{
    void * bad_bad_ptr = NULL;
    ((Rebel*)bad_bad_ptr)->speak();
}
Output:
I DO WHAT I WANT!
By modifying your pthread_create call to pass the this pointer (i.e. pthread_create(&t, NULL, &C::hello_helper, this);), you now have a valid instance on which to access member variables.
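For clarity, the corrected run() is the only change needed (a sketch; the rest of the class stays as posted):
void run() {
    pthread_t t;
    pthread_create(&t, NULL, &C::hello_helper, this);   // this instead of NULL
}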
I solved the problem by passing the this pointer instead of NULL when creating the threads. I guess the OS created the same thread twice in the former case?

Destructor called before end of scope

The following program crashes, but I don't really understand why. The boolean my_shared_resource is in real life an asynchronous queue that eventually stops the loop inside the thread via message passing.
However, the following program crashes because the destructor seems to be called multiple times, and the first time it does is long before the sleep in main() finishes. If I remove the delete my_shared_resource; I can see the destructor is called three times...
However, according to my current understanding, the destructor should only be called when main() finishes.
#include <thread>
#include <chrono>
#include <iostream>
using namespace std;

class ThreadedClass {
public:
    ThreadedClass() {
        my_shared_resource = new bool(true);
    }
    virtual ~ThreadedClass() {
        delete my_shared_resource;
        cout << "destructor" << endl;
    }
    void operator()(){
        loop();
    }
    void stop() {
        *my_shared_resource = false;
    }
private:
    void loop() {
        while (*my_shared_resource) {
            // do some work
            this_thread::sleep_for(std::chrono::milliseconds(1000));
        }
    }
    bool* my_shared_resource;
};

int main(int argc, char** argv) {
    ThreadedClass instance;
    std::thread t(instance);
    this_thread::sleep_for(std::chrono::milliseconds(1000));
    cout << "Did some work in main thread." << endl;
    instance.stop();
    t.join();
    return 0;
}
compiled with g++ (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4
compiled as g++ --std=c++0x thread.cpp -pthread
Would someone please enlighten me as to what is wrong with this design?
When ThreadedClass gets copied both copies point to the same my_shared_resource, and both will delete it.
Use a std::shared_ptr<bool> instead:
#include <memory>   // for std::shared_ptr

class ThreadedClass {
public:
    ThreadedClass() : shared_resource(new bool(true)) { }
    virtual ~ThreadedClass() { }
    void operator()() { loop(); }
    void stop() { *shared_resource = false; }
private:
    void loop() {
        while (*shared_resource) {
            // Do some work.
            this_thread::sleep_for(std::chrono::milliseconds(1000));
        }
    }
    std::shared_ptr<bool> shared_resource;
};
According to http://en.cppreference.com/w/cpp/thread/thread/thread
you are calling:
template< class Function, class... Args >
explicit thread( Function&& f, Args&&... args );
which
Creates new std::thread object and associates it with a thread of execution. First the constructor copies/moves all arguments (both the function object f and all args...) to thread-accessible storage
Thus your my_shared_resource pointer gets copied, shared between several copies of the ThreadedClass object, and destroyed in several places. Either define an appropriate copy constructor/assignment operator or use shared pointers.
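A third option, sketched here on the assumption that instance outlives the thread (it does in the posted main(), since t.join() runs before instance is destroyed), is to avoid the copies altogether with std::ref, so the thread invokes the original object:
#include <functional>   // std::ref

int main(int argc, char** argv) {
    ThreadedClass instance;
    std::thread t(std::ref(instance));   // no copies: the thread calls instance.operator()()
    this_thread::sleep_for(std::chrono::milliseconds(1000));
    cout << "Did some work in main thread." << endl;
    instance.stop();
    t.join();
    return 0;
}
With that change the destructor runs exactly once, when instance goes out of scope.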

C++, linux, how to pass a non-static method from a singleton to pthread_create?

I have a problem with threading a non-static method from a singleton class; look at the code:
//g++ main.cc -lpthread
#include <iostream>
#include <unistd.h>
#include <pthread.h>

class SLAYER{
private:
    SLAYER(){};
    ~SLAYER(){};
    static SLAYER *singleton;
    pthread_t t1, t2;
public:
    static void *born(){
        singleton = new SLAYER;
    };
    static void *death(){
        delete singleton;
        singleton = NULL;
    };
    static void *start(){
        pthread_create(&singleton->t1,NULL,singleton->KerryRayKing, NULL);
        pthread_create(&singleton->t2,NULL,singleton->JeffHanneman, NULL);
    };
    void *JeffHanneman(void *arg){
        sleep(1);
        std::cout << "(1964-2013) R.I.P.\n";
        (void) arg;
        pthread_exit(NULL);
    };
    static void *KerryRayKing(void *arg){
        sleep(1);
        std::cout << "(1964-still with us) bald\n";
        (void) arg;
        pthread_exit(NULL);
    };
};
SLAYER *SLAYER::singleton=NULL;

int main(){
    SLAYER::born();
    SLAYER::start();
    std::cout << "thread started\n";
    sleep(5);
    SLAYER::death();
    return 0;
}
As you can see, KerryRayKing() is static, unlike JeffHanneman(). I failed to pass JeffHanneman() to pthread_create(); at compilation time I got:
cannot convert ‘SLAYER::JeffHanneman’ from type ‘void* (SLAYER::)(void*)’ to type ‘void* (*)(void*)’
I tried several casts, but failed... isn't it possible to use a non-static method in this case?
Edit: I forgot to say, I don't want to allow access to JeffHanneman() from outside the class.
Short answer: you can't do that.
There are several workarounds; the simplest is to have a static wrapper function, e.g.
static void *JHWrapper(void *self)
{
    SLAYER *that = static_cast<SLAYER*>(self);
    return that->JeffHanneman();
}

void *JeffHanneman(){ // Note "arg" removed.
    sleep(1);
    std::cout << "(1964-2013) R.I.P.\n";
    pthread_exit(NULL);
};
Now, the pthread create becomes:
pthread_create(&singleton->t1,NULL, SLAYER::JHWrapper, static_cast<void *>(singleton));
[I refrained from the pun of "JHRapper", as I think that would be rather demeaning...]
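If a C++11 compiler is an option, another workaround (only a sketch, and it sidesteps pthread_create rather than answering the cast question) is std::thread, which can invoke a non-static member while it stays private, because the pointer-to-member is taken inside the class (SLAYER2 here is just an illustrative stand-in, not the class above):
#include <thread>
#include <iostream>

class SLAYER2 {
    void JeffHanneman(){ std::cout << "(1964-2013) R.I.P.\n"; }   // non-static, still private
public:
    static void start(SLAYER2 *instance){
        std::thread(&SLAYER2::JeffHanneman, instance).detach();
    }
};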

How to create a thread inside a class function?

I am very new to C++.
I have a class, and I want to create a thread inside one of the class's functions. That thread (function) will need to call and access the class's functions and variables as well.
At the beginning I tried to use pthreads, but that only worked outside a class; when I tried to access a class function/variable I got an out-of-scope error.
I took a look at Boost.Thread, but it is not desirable because I don't want to add any other library to my files (for other reasons).
I did some research and cannot find any useful answers.
Please give some examples to guide me. Thank you so much!
Attempt using pthreads (but I don't know how to deal with the situation I stated above):
#include <pthread.h>
#include <iostream>
#include <string>

void* print(void* data)
{
    std::cout << *((std::string*)data) << "\n";
    return NULL; // We could return data here if we wanted to
}

int main()
{
    std::string message = "Hello, pthreads!";
    pthread_t threadHandle;
    pthread_create(&threadHandle, NULL, &print, &message);
    // Wait for the thread to finish, then exit
    pthread_join(threadHandle, NULL);
    return 0;
}
You can pass a static member function to a pthread, and an instance of an object as its argument. The idiom goes something like this:
class Parallel
{
private:
    pthread_t thread;
    static void * staticEntryPoint(void * c);
    void entryPoint();
public:
    void start();
};

void Parallel::start()
{
    pthread_create(&thread, NULL, Parallel::staticEntryPoint, this);
}

void * Parallel::staticEntryPoint(void * c)
{
    ((Parallel *) c)->entryPoint();
    return NULL;
}

void Parallel::entryPoint()
{
    // thread body
}
This is a pthread example. You can probably adapt it to use a std::thread without much difficulty.
#include <thread>
#include <string>
#include <iostream>

class Class
{
public:
    Class(const std::string& s) : m_data(s) { }
    ~Class() { m_thread.join(); }
    void runThread() { m_thread = std::thread(&Class::print, this); }
private:
    std::string m_data;
    std::thread m_thread;
    void print() const { std::cout << m_data << '\n'; }
};

int main()
{
    Class c("Hello, world!");
    c.runThread();
}

Event / Task Queue Multithreading C++

I would like to create a class whose methods can be called from multiple threads, but instead of executing each method in the thread from which it was called, it should perform them all in its own thread. No result needs to be returned, and it shouldn't block the calling thread.
A first-attempt implementation is included below. The public methods insert a function pointer and data into a job queue, which the worker thread then picks up. However, it's not particularly nice code and adding new methods is cumbersome.
Ideally I would like to use this as a base class to which I can easily add methods (with a variable number of arguments) with minimal hassle and code duplication.
What is a better way to do this? Is there any existing code available which does something similar? Thanks
#include <queue>
using namespace std;

class GThreadObject
{
    class event
    {
    public:
        void (GThreadObject::*funcPtr)(void *);
        void * data;
    };
public:
    void functionOne(char * argOne, int argTwo);
private:
    void workerThread();
    queue<GThreadObject::event*> jobQueue;
    void functionOneProxy(void * buffer);
    void functionOneInternal(char * argOne, int argTwo);
};
#include <iostream>
#include <cstdlib>   // malloc/free
#include <cstring>   // memcpy
#include "GThreadObject.h"
using namespace std;

/* On a continuous loop, reading tasks from queue
 * When a new event is received it executes the attached function pointer
 * It should block on a condition, but Thread code removed to decrease clutter
 */
void GThreadObject::workerThread()
{
    //New Event added, process it
    GThreadObject::event * receivedEvent = jobQueue.front();
    //Execute the function pointer with the attached data
    (*this.*receivedEvent->funcPtr)(receivedEvent->data);
}

/*
 * This is the public interface, Can be called from child threads
 * Instead of executing the event directly it adds it to a job queue
 * Then the workerThread picks it up and executes all tasks on the same thread
 */
void GThreadObject::functionOne(char * argOne, int argTwo)
{
    //Malloc an object the size of the function arguments
    int argumentSize = sizeof(char*)+sizeof(int);
    void * myData = malloc(argumentSize);
    //Copy the data passed to this function into the buffer
    memcpy(myData, &argOne, argumentSize);
    //Create the event and push it on to the queue
    GThreadObject::event * myEvent = new event;
    myEvent->data = myData;
    myEvent->funcPtr = &GThreadObject::functionOneProxy;
    jobQueue.push(myEvent);
    //This would be send a thread condition signal, replaced with a simple call here
    this->workerThread();
}

/*
 * This handles the actual event
 */
void GThreadObject::functionOneInternal(char * argOne, int argTwo)
{
    cout << "We've made it to functionTwo char*:" << argOne << " int:" << argTwo << endl;
    //Now do the work
}

/*
 * This is the function I would like to remove if possible
 * Split the void * buffer into arguments for the internal Function
 */
void GThreadObject::functionOneProxy(void * buffer)
{
    char * cBuff = (char*)buffer;
    functionOneInternal((char*)*((unsigned int*)cBuff), (int)*(cBuff+sizeof(char*)));
};

int main()
{
    GThreadObject myObj;
    myObj.functionOne("My Message", 23);
    return 0;
}
There's a Futures library making its way into Boost and the C++ standard library. There's also something of the same sort in ACE, but I would hate to recommend it to anyone (as #lothar already pointed out, it's Active Object).
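To make the futures idea concrete, here is a sketch of what the queued-call pattern can look like once such futures are available (written against C++11's <future>, which is where that work eventually landed; jobQueue and worker are just placeholder names, and a real version would guard the queue with a mutex/condition):
#include <future>
#include <functional>
#include <memory>
#include <queue>
#include <thread>
#include <iostream>

int main()
{
    std::queue<std::function<void()> > jobQueue;

    // Wrap the call in a packaged_task so the caller can (optionally) wait for the result later.
    std::shared_ptr<std::packaged_task<int()> > task(
        new std::packaged_task<int()>([]{ return 6 * 7; }));
    std::future<int> result = task->get_future();
    jobQueue.push([task]{ (*task)(); });

    // The worker thread drains the queue.
    std::thread worker([&jobQueue]{
        while (!jobQueue.empty()) {
            jobQueue.front()();
            jobQueue.pop();
        }
    });
    worker.join();

    std::cout << result.get() << "\n";   // prints 42, no functionProxy-style marshalling needed
    return 0;
}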
Below is an implementation which doesn't require a "functionProxy" method. Even though it is easier to add new methods, it's still messy.
Boost::Bind and "Futures" do seem like they would tidy a lot of this up. I guess I'll have a look at the boost code and see how it works. Thanks for your suggestions everyone.
GThreadObject.h
#include <queue>
using namespace std;

class GThreadObject
{
    template <int size>
    class VariableSizeContainter
    {
        char data[size];
    };

    class event
    {
    public:
        void (GThreadObject::*funcPtr)(void *);
        int dataSize;
        char * data;
    };
public:
    void functionOne(char * argOne, int argTwo);
    void functionTwo(int argTwo, int arg2);
private:
    void newEvent(void (GThreadObject::*)(void*), unsigned int argStart, int argSize);
    void workerThread();
    queue<GThreadObject::event*> jobQueue;
    void functionTwoInternal(int argTwo, int arg2);
    void functionOneInternal(char * argOne, int argTwo);
};
GThreadObject.cpp
#include <iostream>
#include <cstdlib>   // malloc/free
#include <cstring>   // memcpy
#include "GThreadObject.h"
using namespace std;

/* On a continuous loop, reading tasks from queue
 * When a new event is received it executes the attached function pointer
 * Thread code removed to decrease clutter
 */
void GThreadObject::workerThread()
{
    //New Event added, process it
    GThreadObject::event * receivedEvent = jobQueue.front();
    /* Create an object the size of the stack the function is expecting, then cast the function to accept this object as an argument.
     * This is the bit i would like to remove
     * Only supports 8 byte argument size e.g 2 int's OR pointer + int OR myObject8bytesSize
     * Subsequent data sizes would need to be added with an else if
     * */
    if (receivedEvent->dataSize == 8)
    {
        const int size = 8;
        void (GThreadObject::*newFuncPtr)(VariableSizeContainter<size>);
        newFuncPtr = (void (GThreadObject::*)(VariableSizeContainter<size>))receivedEvent->funcPtr;
        //Execute the function
        (*this.*newFuncPtr)(*((VariableSizeContainter<size>*)receivedEvent->data));
    }
    //Clean up
    free(receivedEvent->data);
    delete receivedEvent;
}

void GThreadObject::newEvent(void (GThreadObject::*funcPtr)(void*), unsigned int argStart, int argSize)
{
    //Malloc an object the size of the function arguments
    void * myData = malloc(argSize);
    //Copy the data passed to this function into the buffer
    memcpy(myData, (char*)argStart, argSize);
    //Create the event and push it on to the queue
    GThreadObject::event * myEvent = new event;
    myEvent->data = (char*)myData;
    myEvent->dataSize = argSize;
    myEvent->funcPtr = funcPtr;
    jobQueue.push(myEvent);
    //This would be send a thread condition signal, replaced with a simple call here
    this->workerThread();
}

/*
 * This is the public interface, Can be called from child threads
 * Instead of executing the event directly it adds it to a job queue
 * Then the workerThread picks it up and executes all tasks on the same thread
 */
void GThreadObject::functionOne(char * argOne, int argTwo)
{
    newEvent((void (GThreadObject::*)(void*))&GThreadObject::functionOneInternal, (unsigned int)&argOne, sizeof(char*)+sizeof(int));
}

/*
 * This handles the actual event
 */
void GThreadObject::functionOneInternal(char * argOne, int argTwo)
{
    cout << "We've made it to functionOne Internal char*:" << argOne << " int:" << argTwo << endl;
    //Now do the work
}

void GThreadObject::functionTwo(int argOne, int argTwo)
{
    newEvent((void (GThreadObject::*)(void*))&GThreadObject::functionTwoInternal, (unsigned int)&argOne, sizeof(int)+sizeof(int));
}

/*
 * This handles the actual event
 */
void GThreadObject::functionTwoInternal(int argOne, int argTwo)
{
    cout << "We've made it to functionTwo Internal arg1:" << argOne << " int:" << argTwo << endl;
}
main.cpp
#include <iostream>
#include "GThreadObject.h"

int main()
{
    GThreadObject myObj;
    myObj.functionOne("My Message", 23);
    myObj.functionTwo(456, 23);
    return 0;
}
Edit: Just for completeness I did an implementation with Boost::bind. Key Differences:
queue<boost::function<void ()> > jobQueue;

void GThreadObjectBoost::functionOne(char * argOne, int argTwo)
{
    jobQueue.push(boost::bind(&GThreadObjectBoost::functionOneInternal, this, argOne, argTwo));
    workerThread();
}

void GThreadObjectBoost::workerThread()
{
    boost::function<void ()> func = jobQueue.front();
    func();
}
Using the boost implementation for 10,000,000 iterations of functionOne(), it took ~19 sec, whereas the non-boost implementation took only ~6.5 sec, so it's approximately 3x slower. I'm guessing that finding a good non-locking queue will be the biggest performance bottleneck here, but it's still quite a big difference.
The POCO library has something along the same lines called ActiveMethod (along with some related functionality e.g. ActiveResult) in the threading section. The source code is readily available and easily understood.
You might be interested in Active Object, one of the patterns of the ACE framework.
As Nikolai pointed out futures are planned for standard C++ some time in the future (pun intended).
For extensibility and maintainability (and other -bilities) you could define an abstract class (or interface) for the "job" the thread is to perform. Then users of your thread pool would implement this interface and pass a reference to the object to the thread pool. This is very similar to the Symbian Active Object design: every AO subclasses CActive and has to implement methods such as Run() and Cancel().
For simplicity your interface (abstract class) might be as simple as:
class IJob
{
public:
    virtual ~IJob() {}          // so jobs can be deleted through an IJob*
    virtual void Run() = 0;
};
Then the thread pool, or single thread accepting requests would have something like:
class CThread
{
    <...>
public:
    void AddJob(IJob* iTask);
    <...>
};
Naturally you would have multiple job classes that can have all kinds of extra setters/getters/attributes and whatever you need in any walk of life. However, the only requirement is to implement the Run() method, which performs the lengthy calculations:
class CDumbLoop : public IJob
{
public:
    CDumbLoop(int iCount) : m_Count(iCount) {};
    ~CDumbLoop() {};
    void Run()
    {
        // Do anything you want here
    }
private:
    int m_Count;
};
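Using it is then just a matter of handing a job to the worker object (a sketch; CThread's internals are elided above, so assume AddJob queues the job and its worker thread eventually calls Run() and deletes it):
CThread worker;
worker.AddJob(new CDumbLoop(1000000));   // executes on the worker's thread, not the caller's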
You can solve this by using Boost's Thread library. Something like this (half-pseudo):
class GThreadObject
{
    ...
public:
    GThreadObject()
        : _done(false)
        , _newJob(false)
        , _thread(boost::bind(&GThreadObject::workerThread, this))
    {
    }
    ~GThreadObject()
    {
        _done = true;
        _thread.join();
    }
    void functionOne(char *argOne, int argTwo)
    {
        ...
        _jobQueue.push(myEvent);
        {
            boost::lock_guard<boost::mutex> l(_mutex);
            _newJob = true;
        }
        _cond.notify_one();
    }
private:
    void workerThread()
    {
        while (!_done) {
            boost::unique_lock<boost::mutex> l(_mutex);
            while (!_newJob) {
                _cond.wait(l);
            }
            Event *receivedEvent = _jobQueue.front();
            ...
        }
    }
private:
    volatile bool _done;
    volatile bool _newJob;
    boost::thread _thread;
    boost::mutex _mutex;
    boost::condition_variable _cond;
    std::queue<Event*> _jobQueue;
};
Also, please note how RAII allows us to keep this code smaller and easier to manage.
Here's a class I wrote for a similar purpose (I use it for event handling but you could of course rename it to ActionQueue -- and rename its methods).
You use it like this:
With the function you want to call: void foo (const int x, const int y) { /*...*/ }
And an EventQueue instance: EventQueue q;
q.AddEvent (boost::bind (foo, 10, 20));
In the worker thread:
q.PlayOutEvents ();
Note: It should be fairly easy to add code to block on condition to avoid using up CPU cycles.
The code (Visual Studio 2003 with boost 1.34.1):
#pragma once

#include <boost/thread/recursive_mutex.hpp>
#include <boost/function.hpp>
#include <boost/signals.hpp>
#include <boost/bind.hpp>
#include <boost/foreach.hpp>
#include <string>
#include <vector>
#include <windows.h>   // for Sleep
using std::string;

// Records & plays out actions (closures) in a thread-safe manner.
class EventQueue
{
    typedef boost::function <void ()> Event;

public:
    const bool PlayOutEvents ()
    {
        // The copy is there to ensure there are no deadlocks.
        const std::vector<Event> eventsCopy = PopEvents ();
        BOOST_FOREACH (const Event& e, eventsCopy)
        {
            e ();
            Sleep (0);
        }
        return eventsCopy.size () > 0;
    }

    void AddEvent (const Event& event)
    {
        Mutex::scoped_lock lock (myMutex);
        myEvents.push_back (event);
    }

protected:
    const std::vector<Event> PopEvents ()
    {
        Mutex::scoped_lock lock (myMutex);
        const std::vector<Event> eventsCopy = myEvents;
        myEvents.clear ();
        return eventsCopy;
    }

private:
    typedef boost::recursive_mutex Mutex;
    Mutex myMutex;
    std::vector <Event> myEvents;
};
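To follow up on the note above about blocking instead of burning CPU cycles: here is a minimal sketch of the same idea with a condition variable added (it assumes boost::condition from <boost/thread/condition.hpp>, and uses a plain mutex instead of the recursive one):
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <boost/function.hpp>
#include <vector>

class BlockingEventQueue
{
    typedef boost::function <void ()> Event;
public:
    void AddEvent (const Event& event)
    {
        boost::mutex::scoped_lock lock (myMutex);
        myEvents.push_back (event);
        myCondition.notify_one ();            // wake the worker if it is waiting
    }
    // Blocks until at least one event is queued, then plays out everything queued so far.
    void WaitAndPlayOutEvents ()
    {
        std::vector<Event> eventsCopy;
        {
            boost::mutex::scoped_lock lock (myMutex);
            while (myEvents.empty ())
                myCondition.wait (lock);      // releases the mutex while sleeping
            eventsCopy.swap (myEvents);
        }
        for (std::vector<Event>::size_type i = 0; i < eventsCopy.size (); ++i)
            eventsCopy[i] ();
    }
private:
    boost::mutex myMutex;
    boost::condition myCondition;
    std::vector <Event> myEvents;
};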
I hope this helps. :)
Martin Bilski
You should take a look at the Boost ASIO library. It is designed to dispatch events asynchronously. It can be paired with the Boost Thread library to build the system that you described.
You would need to instantiate a single boost::asio::io_service object and schedule a series of asynchronous events (boost::asio::io_service::post or boost::asio::io_service::dispatch). Next, you call the run member function from n threads. The io_service object is thread-safe and guarantees that your asynchronous handlers will only be dispatched in a thread from which you called io_service::run.
The boost::asio::strand object is also useful for simple thread synchronization.
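A minimal sketch of that setup (only io_service::post and run, as described above; error handling and the strand are left out):
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <iostream>

void sayHello(int id)
{
    std::cout << "handler " << id << " running in the worker thread\n";
}

void runService(boost::asio::io_service *io)
{
    io->run();   // dispatches every posted handler, then returns once the queue is empty
}

int main()
{
    boost::asio::io_service io;

    // Nothing executes yet; post() only queues the handlers.
    io.post(boost::bind(sayHello, 1));
    io.post(boost::bind(sayHello, 2));

    boost::thread worker(boost::bind(runService, &io));
    worker.join();
    return 0;
}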
For what it is worth, I think that the ASIO library is a very elegant solution to this problem.