I have a commercial application written in C and C++ with Qt on the Linux platform. The app collects data from different sensors and displays them on a GUI. Each protocol for interfacing with a sensor is implemented as a singleton with its own thread derived from Qt's QThread class. All the protocols except one work fine. Each protocol's thread run function has the following structure:
void <ProtocolClassName>::run()
{
while(!mStop) //check whether screen is closed or not
{
mutex.lock();
while(!waitcondition.wait(&mutex,5))
{
if(mStop)
return;
}
//Code for receiving and processing incoming data
mutex.unlock();
} //end while
}
Hierarchy of the GUI:
1. Login screen.
2. Action screen.
When a user logs in from the login screen, we enter the action screen where all the data is displayed and the threads for the different sensors start. They wait on the mStop variable during idle time, and when data arrives they jump to receiving and processing it. Incoming data for the problem protocol is 117 bytes. In the main GUI thread there are timers which, on timeout, grab the running instance of a protocol using the
<ProtocolName>::instance()
function, check the update variable of the singleton class and, if it is true, display the data. When the display is done they reset the update variable in the singleton class to false. The problematic protocol has an update time of 1 second, which is also the frame rate of the protocol. When I comment out the display function, everything runs fine; but when the display is activated, the application hangs consistently after 6-7 hours. I have asked this question on many forums but haven't received any worthwhile suggestions, so I hope I will get some help here. I have also read a lot of literature on singletons and multithreading, and found that people always discourage the use of singletons, especially in C++, but in my application I can think of no other design for the implementation.
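For reference, the timer slot on the GUI side looks roughly like this (a simplified sketch; the names here are illustrative, not the real ones):
// Sketch of a GUI-side timer slot (names are illustrative)
void ActionScreen::onProtocolTimerTimeout()
{
    ProtocolClassName* proto = ProtocolClassName::instance();
    if (proto->isUpdated())               // set by the protocol thread when a frame arrives
    {
        displaySensorData(proto->data()); // show the latest values on the GUI
        proto->setUpdated(false);         // reset the update flag
    }
}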
Thanks in advance
A Hapless programmer
I think a singleton is not really what you are looking for. Consider this:
You have (let's say) two sensors, each with its own protocol (frame rate, for our purposes).
Now create a "server" class for each sensor instead of an explicit singleton. This way you can hide the details of how your sensors work:
class SensorServer {
protected:
    int lastValueSensed;
    QThread* sensorProtocolThread;
public:
    int getSensedValue() { return lastValueSensed; }
};

class Sensor1ProtocolThread : public QThread {
protected:
    int* valueToUpdate;
    static const int TIMEOUT = 1000; // "framerate" of our sensor1, in milliseconds
public:
    Sensor1ProtocolThread( int* vtu ) {
        this->valueToUpdate = vtu;
    }
    void run() {
        while (true) {
            int valueFromSensor = 0;
            // get value from the sensor into 'valueFromSensor'
            *valueToUpdate = valueFromSensor;
            msleep(TIMEOUT); // QThread::msleep takes milliseconds
        }
    }
};

class Sensor1Server : public SensorServer {
public:
    Sensor1Server() {
        sensorProtocolThread = new Sensor1ProtocolThread(&lastValueSensed);
        sensorProtocolThread->start();
    }
};
This way you can do away with having to implement a singleton.
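Usage from the GUI side could then look something like this (a sketch; the periodic timer slot is assumed):
// Somewhere during GUI setup (sketch):
Sensor1Server* sensor1 = new Sensor1Server();   // constructor starts the protocol thread

// Later, e.g. in a periodic timer slot:
int value = sensor1->getSensedValue();
// display 'value' on the GUI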
Cheers,
jrh.
Just a drive-by analysis, but this doesn't smell right.
If the application is "consistently" hanging after 6-7 hours, are you sure it isn't a resource (e.g. memory) leak? Is there anything different about the implementation of the problematic protocol compared with the rest of them? Have you run the app through a memory checker, etc.?
Not sure it's the cause of what you're seeing, but you have a big fat synchronization bug in your code:
void <ProtocolClassName>::run()
{
while(!mStop) //check whether screen is closed or not
{
mutex.lock();
while(!waitcondition.wait(&mutex,5))
{
if(mStop)
return; // BUG: missing mutex.unlock()
}
//Code for receiving and processing incoming data
mutex.unlock();
} //end while
}
better:
void <ProtocolClassName>::run()
{
while(!mStop) //check whether screen is closed or not
{
const QMutexLocker locker( &mutex );
while(!waitcondition.wait(&mutex,5))
{
if(mStop)
return; // OK now
}
//Code for receiving and processing incoming data
} //end while
}
I'm implementing a system that uses 3 threads (one is the GUI, one is a TCP client for data acquisition, and one is an analysis thread for calculations).
I'm having a hard time handling an exception in any of them. The case I'm trying to solve now is what happens if some calculation goes wrong and I need to 'freeze' the system. The problem is that in some scenarios I have data waiting in the analysis thread's event loop. How can I clear this queue safely without handling all the events (as I said, something went wrong, so I don't want any more calculations done)?
Is there a way to clear an event loop for a specific thread? When can I delete the objects safely?
Thanks
Your question is somewhat low on details, but I assume you're using a QThread and embedding a QEventLoop in it?
You can call QEventLoop::exit(-1), which is thread safe.
The value passed to exit is the exit status, and will be the value returned from QEventLoop::exec(). I've chosen -1, which is typically used to denote an error condition.
You can then check the return code from exec(), and act accordingly.
class AnalysisThread : public QThread
{
Q_OBJECT
public:
void run() override
{
int res = _loop.exec();
if (res == -1)
{
// delete objects
}
}
void exit()
{
_loop.exit(-1);
}
private:
QEventLoop _loop;
};
Elsewhere, in your exception handler
try
{
// ...
}
catch(const CalculationError& e)
{
_analysis_thread.exit();
}
In what situation should we adopt the state pattern?
I've been assigned to maintain a project whose state machine was implemented with a switch-case that is 2000+ lines long. It will be hard to extend its functionality, so I would like to refactor it.
I'm looking into the state design pattern, but I have some points of confusion.
A simple example:
1. Initial state "WAIT", wait user send download command
2. While user send download command, move to "CONNECT" state, connect to server
3. After connection is created, move to "DOWNLOADING" state, keep receive data from server
4. While the data download complete, move to "DISCONNECT", disconnect link with server
5. After disconnect, move to "WAIT" state, wait user send download command
A simple state machine pic
Method 1: Before looking into the state pattern, I had a simpler method in mind: wrap each state's behavior in its own function, keep an array of function pointers indexed by state, and change state by calling through the array.
typedef enum {
    WAIT,
    CONNECT,
    DOWNLOADING,
    DISCONNECT,
    MAX_STATE
} state;

/* forward declarations so the table below compiles */
void WAITState(void);
void CONNECTState(void);
void DOWNLOADINGState(void);
void DISCONNECTState(void);

void (*statefunction[MAX_STATE])(void) =
{
    WAITState,
    CONNECTState,
    DOWNLOADINGState,
    DISCONNECTState
};
void WAITState(void)
{
//do wait behavior
//while receive download command
//statefunction[CONNECT]();
}
void CONNECTState(void)
{
//do connect behavior
//while connect complete
//statefunction[DOWNLOADING]();
}
void DOWNLOADINGState(void)
{
//do downloading behavior
//while download complete
//statefunction[DISCONNECT]();
}
void DISCONNECTState(void)
{
//do disconnect behavior
//while disconnect complete
//statefunction[WAIT]();
}
Method 2: The state pattern encapsulates each state and its behavior in its own class (an object-oriented state machine), uses polymorphism to implement the different state behaviors, and defines a common interface for all concrete states.
class Context; // forward declaration

class State
{
public:
    virtual ~State() {}
    virtual void Handle(Context *pContext) = 0;
};

class Context
{
public:
    Context(State *pState) : m_pState(pState){}
    void Request()
    {
        if (m_pState)
        {
            m_pState->Handle(this);
        }
    }
    void ChangeState(State *pState) { m_pState = pState; } // states call this to transition
private:
    State *m_pState;
};
class WAIT : public State
{
public:
virtual void Handle(Context *pContext)
{
//do wait behavior
}
};
class CONNECT : public State
{
public:
virtual void Handle(Context *pContext)
{
//do connect behavior
}
};
class DOWNLOADING : public State
{
public:
virtual void Handle(Context *pContext)
{
//do downloading behavior
}
};
class DISCONNECT : public State
{
public:
virtual void Handle(Context *pContext)
{
//do disconnect behavior
}
};
I'm wondering whether the state pattern is better than function pointers in this case or not...
Using function pointers alone can also improve readability (compared with the switch-case), and it is simpler.
The state pattern creates several classes and is more complex than using function pointers alone.
What's the advantage of using the state pattern?
Thanks for your time!
What's the advantage of using the state pattern?
First, notice that both of the methods you've provided are in fact examples of the very same pattern. One of them is a function-based implementation, while the other takes a more object-oriented approach.
That being said, the pattern itself has a few advantages:
It limits the number of states a program can be in, and thus eliminates undefined states.
It allows for easier expansion of the application by adding new states, instead of refactoring the whole code.
From a company perspective, it is safe even when multiple people work on the same class.
Since you tagged the question as C++, it is best to take into account what the language both gives and requires. While classes offer inheritance, a large number of classes can greatly increase compilation time. Hence, when it comes to the implementation, if your state machine is large, static polymorphism may be the way to go, as sketched below.
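For illustration only, here is a minimal sketch of what a statically polymorphic version could look like with std::variant/std::visit (C++17). The state names come from your example; everything else is an assumption:
#include <variant>

struct Wait {}; struct Connect {}; struct Downloading {}; struct Disconnect {};
using State = std::variant<Wait, Connect, Downloading, Disconnect>;

struct Machine
{
    State state = Wait{};

    // One overload per state; each performs the state's work and returns the next state.
    static State next(Wait)        { /* wait for download command */ return Connect{}; }
    static State next(Connect)     { /* connect to server */         return Downloading{}; }
    static State next(Downloading) { /* receive data */              return Disconnect{}; }
    static State next(Disconnect)  { /* close the link */            return Wait{}; }

    void step()
    {
        state = std::visit([](auto s) -> State { return Machine::next(s); }, state);
    }
};
No virtual dispatch is involved; adding a state means adding one struct and one overload, and a missing overload is a compile-time error rather than a silent fall-through.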
Disclaimer: I asked this question a few days ago on Code Review, but got no answer. Here I change the question format from a review request to specific problems.
I am developing a video player with the following design:
The main thread is the GUI thread (Qt SDK).
The second thread is the player thread, which accepts commands from the GUI thread to play, step forward, step backward, stop, etc. This thread runs in a constant loop and uses mutexes and wait conditions to stay in sync with the main thread's commands.
I have 2 problems with this code:
I don't feel my design is completely correct: I am using both mutex locks and atomic variables. I wonder if I can keep only the atomics and use locks solely for the wait conditions.
I am experiencing inconsistent bugs (probably due to a race condition when a play command tries to lock the mutex which is already held by the thread while the play loop is running) when I issue "play" commands, which activate a loop inside the thread loop. So I suppose it blocks the main thread's access to the shared variables.
I have stripped the code of unneeded stuff, and it generally goes like this:
void PlayerThread::drawThread()//thread method passed into new boost::thread
{
//some init goes here....
while(true)
{
boost::unique_lock<boost::mutex> lock(m_mutex);
m_event.wait(lock); //wait for event
if(!m_threadRun){
break; //exit the thread
}
///if we are in playback mode, play in a loop till interrupted:
if(m_isPlayMode == true){
while(m_frameIndex < m_totalFrames && m_isPlayMode){
//play
m_frameIndex ++;
}
m_isPlayMode = false;
}else{//we are in a single frame play mode:
if(m_cleanMode){ ///just clear the screen with a color
//clear the screen from the last frame
//wait for the new movie to get loaded:
m_event.wait(lock);
//load new movie......
}else{ //render a single frame:
//play single frame....
}
}
}
}
Here are the member functions of the above class which send commands to the thread loop:
void PlayerThread::PlayForwardSlot(){
// boost::unique_lock<boost::mutex> lock(m_mutex);
if(m_cleanMode)return;
m_isPlayMode = false;
m_frameIndex++;
m_event.notify_one();
}
void PlayerThread::PlayBackwardSlot(){
// boost::unique_lock<boost::mutex> lock(m_mutex);
if(m_cleanMode)return;
m_isPlayMode = false;
m_frameIndex-- ;
if(m_frameIndex < 0){
m_frameIndex = 0;
}
m_event.notify_one();
}
void PlayerThread::PlaySlot(){
// boost::unique_lock<boost::mutex> lock(m_mutex);
if(m_cleanMode)return;
m_isPlayMode = true;
m_event.notify_one(); //tell thread to start playing.
}
All the flag members like m_cleanMode, m_isPlayMode and m_frameIndex are atomics:
std::atomic<int32_t> m_frameIndex;
std::atomic<bool> m_isPlayMode;
std::atomic<bool> m_cleanMode;
The questions summarized:
Do I need mutex locks when using atomics?
Do I set waiting in the correct place inside the while loop of the thread?
Any suggestion of a better design?
UPDATE:
Though I got an answer which seems to be in the right direction, I don't really understand it, especially the pseudo-code part which talks about a service; it is completely unclear to me how it would work. I would like to get a more elaborate answer. It is also strange that I received only one constructive answer to such a common problem, so I am resetting the bounty.
The biggest issue with your code is that you wait unconditionally. boost::condition::notify_one only wakes up a thread that is already waiting, which means that a Forward Step/Backward Step followed quickly by Play will have the play command ignored. I don't get clean mode, but you need at least:
if(!m_isPlayMode)
{
m_event.wait(lock);
}
In your code, stop and stepping to a frame are virtually the same thing. You may want to use a tri-state PLAY, STEP, STOP to be able to use the recommended way of waiting on a condition variable:
while(state == STOP)
{
m_event.wait(lock);
}
1. Do I need mutex locks when using atomics?
Technically yes. In this specific case, I don't think so.
Current race conditions (that I noticed):
In playing mode, playforward and playbackward will not result in the same m_frameIndex depending on whether or not drawThread is inside the while(m_frameIndex < m_totalFrames && m_isPlayMode) loop. Indeed, m_frameIndex could be incremented once or twice (playforward).
Entering the playing state in PlaySlot can be ignored if drawThread executes m_isPlayMode = false; before receiving the next event. Right now it is a non-issue, because it will only happen if m_frameIndex < m_totalFrames is false. If PlaySlot were modifying m_frameIndex, you would have cases of pushing play and nothing happening.
2. Do I set waiting in the correct place inside the while loop of the thread?
I would suggest having only one wait in your code, for simplicity, and being explicit about the next thing to do using specific commands:
PLAY, STOP, LOADMOVIE, STEP
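For instance, something along these lines (a sketch reusing your member names; the command enum and m_command member are assumptions, and the GUI slots would set m_command while holding m_mutex before calling m_event.notify_one()):
enum class Command { NONE, PLAY, STOP, LOADMOVIE, STEP };
Command m_command = Command::NONE;   // written by the GUI slots, read by the thread, guarded by m_mutex

void PlayerThread::drawThread()
{
    while (true)
    {
        boost::unique_lock<boost::mutex> lock(m_mutex);
        while (m_command == Command::NONE && m_threadRun)   // the single wait
            m_event.wait(lock);
        if (!m_threadRun)
            break;                       // exit the thread
        Command cmd = m_command;
        m_command = Command::NONE;
        lock.unlock();
        // dispatch on cmd: PLAY / STEP / LOADMOVIE / STOP ...
    }
}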
3. Any suggestion of a better design?
Use an explicit event queue. You can use one which is Qt-based (requires QThread) or Boost-based. The Boost-based one uses a boost::asio::io_service and a boost::thread.
You start the event loop using :
boost::asio::io_service service;
// permanent work object so io_service::run doesn't return immediately
boost::asio::io_service::work work(service);
boost::thread thread([&service] { service.run(); });
Then you send your commands from the GUI using
MYSTATE state;
service.post(boost::bind(&MyObject::changeState,this, state));
Your play method should post another play request, provided that the state hasn't changed, rather than looping. That allows better user preemption.
Your step method should request a stop before displaying the frame.
Pseudocode:
play()
{
if(state != PLAYING)
return;
drawframe(index);
index++;
service.post(boost::bind(&MyObject::play, this));
}
stepforward()
{
stop();
index++;
drawframe(index);
}
stepbackward()
{
stop();
index--;
drawframe(index);
}
Edit:
There is only one player thread; it is created once and executes only one event loop. This is equivalent to QThread::start(). The thread will live as long as the loop doesn't return, which is until the work object is destroyed or you explicitly stop the service. If you let the loop end by destroying the work object, all posted tasks which are still pending are executed first; io_service::stop() makes run() return as soon as possible instead. You can interrupt the thread for a fast exit if necessary.
When there is a call for an action, you post it into the event loop run by the player thread.
Note: You will probably need shared pointers for the service and the thread. You will also need to put interruption points in the play method in order to allow stopping the thread cleanly during playback. You don't need as many atomics as before, and you don't need a condition variable anymore.
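For example, a clean shutdown might look like this (a sketch; it assumes the work object is held through a pointer so it can be destroyed):
// keep the work object in a resettable holder
boost::scoped_ptr<boost::asio::io_service::work> work(
    new boost::asio::io_service::work(service));
// ... later, when shutting down:
work.reset();           // run() returns once the already-posted tasks have finished
// or: service.stop();  // run() returns as soon as possible, skipping pending tasks
thread.interrupt();     // only useful if the play method contains interruption points
thread.join();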
Any suggestion of a better design?
Yes! Since you are using Qt, I would strongly suggest using Qt's event loop (apart from the UI stuff, this is IMO one of the main selling points of the library) and asynchronous signals/slots to do the controlling, instead of your homegrown synchronization, which, as you found out, is a very fragile undertaking.
The main change this brings to your current design is that you will have to do your video logic as part of the Qt event loop or, more easily, just call QEventLoop::processEvents periodically. For that you will need a QThread.
Then it's very straightforward: you create a class that inherits from QObject, let's say PlayerController, which contains signals like play, pause, stop, and a class Player which has slots onPlay, onPause, onStop (or without the "on", your preference). Then create a 'controller' object of the PlayerController class in the GUI thread and the Player object in the 'video' thread (or use QObject::moveToThread). This is important, as Qt uses thread affinity to determine in which thread slots are executed.
Now connect the objects by doing QObject::connect(controller, SIGNAL(play()), player, SLOT(onPlay())). Any call to PlayerController::play on the 'controller' from the GUI thread will result in the onPlay method of the 'player' being executed in the video thread on the next event loop iteration. That's where you can then change your boolean status variables or perform other actions, without the need for explicit synchronization, as the variables are now only changed from the video thread.
So something along those lines:
class PlayerController : public QObject {
    Q_OBJECT
signals:
    void play();
    void pause();
    void stop();
};

class Player : public QObject {
    Q_OBJECT
public slots:
    void play()  { m_isPlayMode = true; }
    void pause() { m_isPlayMode = false; }
    void stop()  { m_isStop = true; }
public:
    bool isStopped() const { return m_isStop; }
    bool isPlaying() const { return m_isPlayMode; }
private:
    bool m_isPlayMode = false;
    bool m_isStop = false;
};

class VideoThread : public QThread {
public:
    VideoThread(PlayerController* controller) {
        m_controller = controller;
    }
protected:
    /* override the run method, normally not advised, but we want our own event loop */
    void run() {
        QEventLoop loop;
        Player* player = new Player;   // created here, so its thread affinity is the video thread
        QObject::connect(m_controller, SIGNAL(play()), player, SLOT(play()));
        QObject::connect(m_controller, SIGNAL(pause()), player, SLOT(pause()));
        QObject::connect(m_controller, SIGNAL(stop()), player, SLOT(stop()));
        while (!player->isStopped()) {
            // DO video related stuff (check player->isPlaying(), draw frames, ...)
            loop.processEvents();      // deliver queued slot calls from the GUI thread
        }
        delete player;
    }
private:
    PlayerController* m_controller;
};

// somewhere in the main (GUI) thread
PlayerController* controller = new PlayerController();
VideoThread* videoThread = new VideoThread(controller);
videoThread->start();
emit controller->play();
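As an alternative to subclassing QThread, the QObject::moveToThread route mentioned above would look roughly like this (a sketch; here Player is driven purely by queued slot calls, so the frame drawing itself would have to be triggered by something like a QTimer rather than a manual loop):
// Sketch: worker-object approach with QObject::moveToThread
QThread* videoThread = new QThread;
Player* player = new Player;
player->moveToThread(videoThread);

QObject::connect(controller, SIGNAL(play()),  player, SLOT(play()));
QObject::connect(controller, SIGNAL(pause()), player, SLOT(pause()));
QObject::connect(controller, SIGNAL(stop()),  player, SLOT(stop()));
QObject::connect(videoThread, SIGNAL(finished()), player, SLOT(deleteLater()));

videoThread->start();   // the default run() spins an event loop that delivers the queued calls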
Any suggestion of a better design?
Instead of using a separate thread, use a QTimer and play on the main thread. No atomics or mutexes are needed. I am not quite following what m_cleanMode does, so I mostly took it out of the code; if you elaborate on what it does, I can add it back.
class Player : public QObject
{
    Q_OBJECT
    int32_t m_frameIndex = 0;
    int32_t m_totalFrames = 0;
    bool m_cleanMode = false;
    QTimer m_timer;
public:
    void init();
    void drawFrame();
public slots:
    void play();
    void pause();
    void playForward();
    void playBackward();
private slots:
    void drawFrameAndAdvance();
};

void Player::init()
{
    // some init goes here ...
    m_timer.setInterval(33); // ~30 fps
    connect(&m_timer, SIGNAL(timeout()), this, SLOT(drawFrameAndAdvance()));
}

void Player::drawFrame()
{
    // play 1 frame
}

void Player::drawFrameAndAdvance()
{
    if(m_frameIndex < m_totalFrames - 1) {
        drawFrame();
        m_frameIndex++;
    }
    else m_timer.stop();
}

void Player::playForward()
{
    if(m_cleanMode) return;
    m_timer.stop(); // stop playback
    if(m_frameIndex < m_totalFrames - 1) {
        m_frameIndex++;
        drawFrame();
    }
}

void Player::playBackward()
{
    if(m_cleanMode) return;
    m_timer.stop(); // stop playback
    if(m_frameIndex > 0) {
        m_frameIndex--;
        drawFrame();
    }
}

void Player::play()
{
    if(m_cleanMode) return;
    m_timer.start(); // start playback
}

void Player::pause()
{
    if(m_cleanMode) return;
    m_timer.stop(); // stop playback
}
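Wiring it up from the GUI could then look like this (a sketch; the button widgets are assumptions):
// Sketch: hooking the Player up to GUI buttons (widget names are assumptions)
Player* player = new Player;
player->init();
QObject::connect(playButton,    SIGNAL(clicked()), player, SLOT(play()));
QObject::connect(pauseButton,   SIGNAL(clicked()), player, SLOT(pause()));
QObject::connect(forwardButton, SIGNAL(clicked()), player, SLOT(playForward()));
QObject::connect(backButton,    SIGNAL(clicked()), player, SLOT(playBackward()));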
I have a graphical application that pulls data from a camera. The camera event loop runs in a thread that is started inside an object, and I use a setter/getter on that object to get the data out and use it. But sometimes the application crashes. I'm not using any synchronization mechanism.
I have this method:
void MyClass::onNewColorSample(ColorNode node, ColorNode::NewSampleReceivedData data)
{
colorData = data;
}
I register it as a callback with an external library:
g_cnode.newSampleReceivedEvent().connect(&onNewColorSample);
The method is called each time a new frame arrives from the camera.
The getter of colorData is:
ColorNode::NewSampleReceivedData MyClass::getColorData()
{
return colorData;
}
Then I use a pthread to run the following:
void* MyClass::runThread(void* na)
{
    g_context.run();
    return NULL;
}
At some point I start the thread:
pthread_create(&pthread, NULL, runThread, NULL);
So the class MyClass receives data from the camera in a thread.
The run method documentation of the library says:
Runs the DepthSense event loop. The connected event handlers are run in the thread that called run().
Now I use MyClass to get data from the camera; in another class I have a method that is called every 1/60 of a second:
static ColorNode::NewSampleReceivedData colorFrame;
depthFrame = dsCam.getDetphData();
...
Sometimes the application crashes in dsCam.getDetphData().
I think the problem occurs because the data is being copied when this method returns, and in the middle of the copy operation new data arrives.
I use a thread because the external library doesn't provide a non-blocking mechanism to get the data out. It just provides an event-based mechanism.
I'm afraid that if I use a mutex lock/unlock mechanism my FPS will drop, but I will try it... please give me some ideas.
Finally, I solved the problem using QMutex:
//RAII class to unlock after method return (when local var out of scope)
class AutoMutex {
public:
AutoMutex(QMutex* _mutex) {
_mutex->lock();
myMutex = _mutex;
}
~AutoMutex() {
myMutex->unlock();
}
private:
QMutex* myMutex;
};
Then I just used this class, passing it a pointer to the mutex (the mutex is a member of my class):
ColorNode::NewSampleReceivedData MyClass::getColorData()
{
AutoMutex autoMut(&mutex); //mutex get locked
return colorData;
} //when method ends, autoMut is destroyed and mutex get unlocked
DepthNode::NewSampleReceivedData MyClass::getDetphData()
{
AutoMutex autoMut(&mutex);
return depthData;
}
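As a side note, Qt already ships an equivalent RAII helper, QMutexLocker, so the same getters can be written with it. Also note that the callback that writes colorData/depthData must lock the same mutex; locking only in the getters does not remove the race. A sketch:
#include <QMutexLocker>

void MyClass::onNewColorSample(ColorNode node, ColorNode::NewSampleReceivedData data)
{
    QMutexLocker locker(&mutex);   // writer side holds the same mutex
    colorData = data;
}

ColorNode::NewSampleReceivedData MyClass::getColorData()
{
    QMutexLocker locker(&mutex);   // unlocked automatically once the copy has been returned
    return colorData;
}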
This question comes from:
C++11 thread doesn't work with virtual member function
As suggested in a comment, my question in the previous post may not be the right one to ask, so here is the original question:
I want to make a capturing system which will query a few sources at a constant/dynamic frequency (it varies by source, say 10 times per second) and push the data into each source's own queue. The sources are not fixed; they may be added or removed at run time.
There is also a monitor which pulls from the queues at a constant frequency and displays the data.
So what is the best design pattern or structure for this problem?
I'm trying to make a list of pullers, one for each source, where each puller holds a thread and a specified pulling function (the pulling function may interact with the puller; for example, if the source is drained, it will ask to stop the pulling process on that thread).
Unless the operation where you query a source is blocking (or you have lots of sources), you don't need to use threads for this. We could start with a Producer which will work with either synchronous or asynchronous (threaded) dispatch:
template <typename OutputType>
class Producer
{
std::list<OutputType> output;
protected:
int poll_interval; // seconds? milliseconds?
virtual OutputType query() = 0;
public:
virtual ~Producer() {}
int next_poll_interval() const { return poll_interval; }
void poll() { output.push_back(this->query()); }
std::size_t size() { return output.size(); }
// whatever accessors you need for the queue here:
// pop_front, swap entire list, etc.
};
Now we can derive from this Producer and just implement the query method in each subtype. You can set poll_interval in the constructor and leave it alone, or change it on every call to query. There's your general producer component, with no dependency on the dispatch mechanism.
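For example, a concrete producer could look like this (read_temperature() is a hypothetical stand-in for whatever call queries your real source):
class TemperatureProducer : public Producer<double>
{
public:
    TemperatureProducer() { poll_interval = 100; } // poll roughly 10 times per second
protected:
    double query() override
    {
        return read_temperature();   // hypothetical query of the actual source
    }
};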
template <typename OutputType>
class ThreadDispatcher
{
Producer<OutputType> *producer;
bool shutdown;
std::thread thread;
static void loop(ThreadDispatcher *self)
{
Producer<OutputType> *producer = self->producer;
while (!self->shutdown)
{
producer->poll();
// some mechanism to pass the produced values back to the owner
auto delay = // assume millis for sake of argument
std::chrono::milliseconds(producer->next_poll_interval());
std::this_thread::sleep_for(delay);
}
}
public:
explicit ThreadDispatcher(Producer<OutputType> *p)
: producer(p), shutdown(false), thread(loop, this)
{
}
~ThreadDispatcher()
{
shutdown = true;
thread.join();
}
// again, the accessors you need for reading produced values go here
// Producer::output isn't synchronised, so you can't expose it directly
// to the calling thread
};
This is a quick sketch of a simple dispatcher that would run your producer in a thread, polling it however often you ask it to. Note that passing produced values back to the owner isn't shown, because I don't know how you want to access them.
Also note I haven't synchronized access to the shutdown flag - it should probably be atomic, but it might be implicitly synchronized by whatever you choose to do with the produced values.
With this organization, it'd also be easy to write a synchronous dispatcher to query multiple producers in a single thread, for example from a select/poll loop, or using something like Boost.Asio and a deadline timer per producer.
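To illustrate the last point, a synchronous dispatcher could be as simple as the following sketch; it polls every producer each round and sleeps for the shortest declared interval, whereas a real version would keep a separate deadline per producer:
#include <algorithm>
#include <chrono>
#include <limits>
#include <thread>
#include <vector>

template <typename OutputType>
void run_synchronously(std::vector<Producer<OutputType>*>& producers, const bool& stop)
{
    while (!stop)
    {
        int next = std::numeric_limits<int>::max();
        for (auto* p : producers)
        {
            p->poll();                                      // query each source in turn
            next = std::min(next, p->next_poll_interval());
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(next));
    }
}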