Multi-thread problem with loop on ESP8266 - if-statement

I'm using a NodeMCU for my IoT project, and I have to use PHP with it. I'm trying to switch functions when I press a button on the website. My func1 has a 1000 ms delay implemented with millis(); func2 has a 360000 ms delay, so I can't switch from func2 to func1 when I want to. I've tried many approaches; how can I do this?
My code looks like this:
void func1() {
  // code for manual GPIO control
  // millis()...
}

void func2() {
  // code for automatic GPIO control
  // millis()...
}

void loop() {
  // millis()...
  if (payload == 1) {
    func1();
  } else if (payload == 0) {
    func2();
  }
}

Arduino does not support multi-threading, but you can use a different programming technique to make your program work without blocking, for example Blink Without Delay.
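A minimal sketch of that technique applied to the question's structure (a sketch only: the payload variable is assumed to be set elsewhere by the web/PHP handler, and the actual GPIO work is left as comments; the intervals are taken from the question):

unsigned long lastManual = 0;
unsigned long lastAuto = 0;
const unsigned long MANUAL_INTERVAL = 1000UL;    // func1 period (ms)
const unsigned long AUTO_INTERVAL   = 360000UL;  // func2 period (ms)

int payload = 0;  // assumed to be updated elsewhere by the web/PHP handler

void func1() {                       // manual GPIO control
  if (millis() - lastManual >= MANUAL_INTERVAL) {
    lastManual = millis();
    // do one manual-control step here
  }
}

void func2() {                       // automatic GPIO control
  if (millis() - lastAuto >= AUTO_INTERVAL) {
    lastAuto = millis();
    // do one automatic-control step here
  }
}

void loop() {
  // payload is re-read on every pass, so the switch takes effect immediately
  if (payload == 1) {
    func1();
  } else {
    func2();
  }
}

Because neither function ever blocks, loop() re-checks payload on every pass and the change between manual and automatic control happens as soon as the button is pressed.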

Related

How to create QThread without task switching

I'm developing an ATmega328P MCU simulator and have a small problem regarding the timer. The timer in the ATmega328P ticks once every 62.5 ns (at 16 MHz), and I have to simulate that. To solve the problem I created this class:
class DLL_EXPORT Timer : public QThread
...
void Timer::run()
{
    CCPU* cpu = dynamic_cast<CCPU*>(parent());
    if (cpu == nullptr)
    {
        exit();
        return;
    }
    QElapsedTimer timer_core;
    timer_core.start();
    while (1)
    {
        if (timer_core.nsecsElapsed() >= 62)
        {
            ++cpu->GetIOPorts()[0x26];
            timer_core.restart();
        }
    }
}
It seems to work fine, but there's a little problem with the task-switching process on the CPU. How can I disable that process for this particular class?
Or maybe there is another approach to the solution?
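Not from the original thread, but one possible alternative, sketched here under the assumption of Qt 5.2+ (for isInterruptionRequested()): instead of restarting the elapsed timer on every tick, credit the counter with however many 62.5 ns periods have elapsed since the last check, so the loop tolerates being scheduled out without losing ticks.

void Timer::run()
{
    CCPU* cpu = dynamic_cast<CCPU*>(parent());
    if (cpu == nullptr)
        return;

    QElapsedTimer timer_core;
    timer_core.start();
    qint64 consumedTicks = 0; // ticks already credited to the counter

    while (!isInterruptionRequested())
    {
        // 62.5 ns per tick at 16 MHz  =>  ticks = elapsed_ns * 2 / 125
        const qint64 totalTicks = timer_core.nsecsElapsed() * 2 / 125;
        const qint64 pending = totalTicks - consumedTicks;
        if (pending > 0)
        {
            cpu->GetIOPorts()[0x26] += pending; // an 8-bit counter wraps naturally
            consumedTicks = totalTicks;
        }
    }
}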

Event Driven SDL2

I'm in the process of wrapping SDL into C++ objects. Basically, I'm just tired of seeing SDL_ in my code. I'd like at least namespaces... SDL::Window. I've done that; it's going more or less fine.
The issue arises with events. I'd like it to be event-driven (callbacks) rather than having to poll an event queue (the propagation routines you have to write to make SDL_Event fit the abstraction I've designed are painful).
Take for example, a Window class. Its constructor calls
SDL_AddEventWatch(window_events, this);
where window_events is a static member of the Window class. It catches anything of type SDL_WINDOWEVENT.
int Window::window_events(void* data, SDL::Events::Event* ev)
{
    if (ev->type == SDL::Events::Window::Any)
    {
        auto win = static_cast<Window *>(data);
        if (ev->window.windowID == SDL_GetWindowID(win->mWindow))
        {
            std::vector<event_callback> callbacks = win->mWindowCallbacks;
            for (const auto cbk : callbacks)
            {
                cbk(*ev);
            }
        }
    }
    return 0;
}
My Window class also contains hook and unhook methods. Each takes a std::function. This is what mWindowCallbacks is a collection of. Any external routine interested in an event gets a copy forwarded to it.
//...
using event_callback = std::function<void(SDL::Events::Event)>;
//...
template<typename T>
bool find_func(const T & object,
               const std::vector<T> & list,
               int * location = nullptr)
{
    int offset = 0;
    for (auto single : list)
    {
        if (single.target<T>() == object.target<T>())
        {
            if (location != nullptr) *location = offset;
            return true;
        }
        offset++;
    }
    return false;
}
void Window::hook(event_callback cbk)
{
    if (!find_func(cbk, mWindowCallbacks))
    {
        mWindowCallbacks.push_back(cbk);
    }
}

void Window::unhook(event_callback cbk)
{
    int offset = 0;
    if (find_func(cbk, mWindowCallbacks, &offset))
    {
        mWindowCallbacks.erase(mWindowCallbacks.begin() + offset);
    }
}
Usage:
///...
void cbk_close(SDL::Events::Event e)
{
    if (e.window.event == SDL::Events::Window::Close)
    {
        window.close();
        quit = true;
    }
}
///...
std::function<void(SDL::Events::Event)> handler = cbk_close;
SDL::Window window;
window.hook(handler);
Close:
void Window::close()
{
    SDL_DelEventWatch(window_events, this);
    SDL_DestroyWindow(mWindow);
    mWindowCallbacks.clear();
}
To me, this doesn't seem like terrible design.
Once you press close on the window, cbk_close is invoked; it calls close, it sets the quit flag... and then it returns to the window_events loop, as expected. However, that function doesn't seem to return control to the program.
This is what I need help with; I don't really understand why. I think it's hijacking the main thread, as the program will exit once that function exits if you have one window, or crash if you have two.
Am I on the right lines with that? I've been stuck on this for a week, and it's really rather infuriating. To anyone willing to have a play about with it, here's the git repo for the full code.
Windows, Visual Studio 2015/VC solution.
https://bitbucket.org/andywm/sdl_oowrapper/
Okay, so I think I more or less understand what's going on here now.
SDL_AddEventWatch(int (SDLCALL *)(void *, SDL_Event*), void *)
If you're using C++, you should set the calling convention.
SDLCALL
In my case:
int SDLCALL
Window::window_events(void* data, SDL::Events::Event* ev)
This seems to stop the SDL event system from nabbing the main thread.
As for why it's crashing with multiple windows... well, if I remove this line
SDL_DelEventWatch(window_events, this);
it doesn't crash. I'm not really sure why yet, but if I figure it out I'll amend my answer - and if anyone more experienced with SDL could fill me in, that'd be great.
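For reference, here's roughly what the declaration plus registration can look like with the calling convention applied on both sides. This is a sketch using SDL_Event directly (the question's SDL::Events::Event is assumed to be an alias for it), not code from the repository:

#include <SDL.h>

class Window
{
public:
    Window()
    {
        mWindow = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                   SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        // 'this' comes back as the userdata pointer on every event
        SDL_AddEventWatch(&Window::window_events, this);
    }

private:
    // SDLCALL on the declaration keeps it compatible with SDL_EventFilter
    static int SDLCALL window_events(void* data, SDL_Event* ev)
    {
        // forward to the per-window callbacks, as in the question
        (void)data; (void)ev;
        return 0;
    }

    SDL_Window* mWindow = nullptr;
};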

Threaded Video Player sync

Disclaimer: I asked this question a few days ago on Code Review, but got no answer. Here I've changed the question format from a review request to specific problems.
I am developing a video player with the following design:
The main thread is the GUI thread (Qt SDK).
The second thread is the player thread, which accepts commands from the GUI thread to play, step forward, step backward, stop, etc. This thread runs in a constant loop and uses mutexes and wait conditions to stay in sync with the main thread's commands.
I have 2 problems with this code:
I don't feel my design is completely correct: I am using both mutex locks and atomic variables. I wonder if I can keep only the atomics and use locks solely for the wait conditions.
I am experiencing intermittent bugs (probably due to a race condition when the play command tries to lock a mutex that is already held by the thread while the play loop is running) when I issue "play" commands, which activate a loop inside the thread loop. So I suppose it blocks the main thread's access to the shared variables.
I have stripped the code of unneeded stuff and it generally goes like this:
void PlayerThread::drawThread() // thread method passed into new boost::thread
{
    // some init goes here...
    while (true)
    {
        boost::unique_lock<boost::mutex> lock(m_mutex);
        m_event.wait(lock); // wait for event
        if (!m_threadRun) {
            break; // exit the thread
        }
        /// if we are in playback mode, play in a loop till interrupted:
        if (m_isPlayMode == true) {
            while (m_frameIndex < m_totalFrames && m_isPlayMode) {
                // play
                m_frameIndex++;
            }
            m_isPlayMode = false;
        } else { // we are in single-frame play mode:
            if (m_cleanMode) { /// just clear the screen with a color
                // clear the screen from the last frame
                // wait for the new movie to get loaded:
                m_event.wait(lock);
                // load new movie...
            } else { // render a single frame:
                // play single frame...
            }
        }
    }
}
Here are the member functions of the class above that send commands to the thread loop:
void PlayerThread::PlayForwardSlot() {
    // boost::unique_lock<boost::mutex> lock(m_mutex);
    if (m_cleanMode) return;
    m_isPlayMode = false;
    m_frameIndex++;
    m_event.notify_one();
}

void PlayerThread::PlayBackwardSlot() {
    // boost::unique_lock<boost::mutex> lock(m_mutex);
    if (m_cleanMode) return;
    m_isPlayMode = false;
    m_frameIndex--;
    if (m_frameIndex < 0) {
        m_frameIndex = 0;
    }
    m_event.notify_one();
}

void PlayerThread::PlaySlot() {
    // boost::unique_lock<boost::mutex> lock(m_mutex);
    if (m_cleanMode) return;
    m_isPlayMode = true;
    m_event.notify_one(); // tell thread to start playing
}
All the flag members like m_cleanMode, m_isPlayMode and m_frameIndex are atomics:
std::atomic<int32_t> m_frameIndex;
std::atomic<bool> m_isPlayMode;
std::atomic<bool> m_cleanMode;
Question summary:
1. Do I need mutex locks when using atomics?
2. Do I set the wait in the correct place inside the thread's while loop?
3. Any suggestions for a better design?
UPDATE:
Though I got an answer that seems to be in the right direction, I don't really understand it, especially the pseudo-code part that talks about the service; it is completely unclear to me how it would work. I would like a more elaborate answer. It is also strange that I received only one constructive answer to such a common problem, so I am resetting the bounty.
The biggest issue with your code is that you wait unconditionally. boost::condition::notify_one only wakes up a thread that is already waiting, which means a Forward Step/Backward Step followed quickly enough by Play will have the play command ignored. I don't get clean mode, but you need at least:
if (!m_isPlayMode)
{
    m_event.wait(lock);
}
In your code, stop and stepping to a frame are virtually the same thing. You may want to use a tristate PLAY, STEP, STOP to be able to use the recommended way of waiting on a condition variable:
while (state == STOP)
{
    m_event.wait(lock);
}
1. Do I need mutex locks when using atomics?
Technically yes. In this specific case I don't think so.
Race conditions I noticed so far:
While in playing mode, PlayForward and PlayBackward will not result in the same m_frameIndex depending on whether or not drawThread is inside the while(m_frameIndex < m_totalFrames && m_isPlayMode) loop. Indeed, m_frameIndex could be incremented once or twice (PlayForward).
Entering the playing state in PlaySlot can be missed if drawThread executes m_isPlayMode = false; before receiving the next event. Right now it is a non-issue because it only happens when m_frameIndex < m_totalFrames is false; but if PlaySlot also modified m_frameIndex, you would have cases where you push play and nothing happens.
2. Do I set waiting in the correct place inside the while loop of the thread?
I would suggest having only one wait in your code, for simplicity, and being explicit about the next thing to do using specific commands:
PLAY, STOP, LOADMOVIE, STEP
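A possible shape for that single-wait loop, as a sketch rather than the poster's code: the Command enum and the m_command member are invented here (m_command would be the atomic replacing m_isPlayMode and m_cleanMode); the other names come from the question.

enum class Command { STOP, PLAY, STEP, LOADMOVIE };
// std::atomic<Command> m_command;  // set from the GUI slots, read here

void PlayerThread::drawThread()
{
    while (m_threadRun)
    {
        {
            boost::unique_lock<boost::mutex> lock(m_mutex);
            // The only wait in the loop: sleep while there is nothing to do.
            while (m_command == Command::STOP && m_threadRun)
                m_event.wait(lock);
        }
        if (!m_threadRun)
            break;

        switch (m_command)
        {
        case Command::PLAY:
            if (m_frameIndex < m_totalFrames)
                ++m_frameIndex;              // render frame m_frameIndex here
            else
                m_command = Command::STOP;   // reached the end, go idle
            break;
        case Command::STEP:
            // render the single frame at m_frameIndex here
            m_command = Command::STOP;
            break;
        case Command::LOADMOVIE:
            // clear the screen / load the new movie here, then go idle
            m_command = Command::STOP;
            break;
        default:
            break;
        }
    }
}

PLAY deliberately renders one frame per pass and then re-checks m_command, so a STOP or STEP issued from the GUI takes effect between frames.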
3. Any suggestion of a better design?
Use an explicit event queue. You can use one that is Qt-based (requires QThread) or Boost-based. The Boost one uses a boost::asio::io_service and a boost::thread.
You start the event loop using :
boost::asio::io_service service;
// permanent work so io_service::run() doesn't terminate immediately
boost::asio::io_service::work work(service);
boost::thread thread(boost::bind(&boost::asio::io_service::run, boost::ref(service)));
Then you send your commands from the GUI using
MYSTATE state;
service.post(boost::bind(&MyObject::changeState,this, state));
Your play method should request another play (by posting itself again) as long as the state hasn't changed, rather than looping; this allows better user preemption.
Your step methods should request a stop before displaying the frame.
Pseudocode:
play()
{
    if (state != PLAYING)
        return;
    drawframe(index);
    index++;
    service.post(boost::bind(&MyObject::play, this));
}

stepforward()
{
    stop();
    index++;
    drawframe(index);
}

stepbackward()
{
    stop();
    index--;
    drawframe(index);
}
Edit:
There is only one player thread, which is created once and executes only one event loop; it is equivalent to QThread::start(). The thread lives as long as the loop doesn't return, which is until the work object is destroyed OR you explicitly stop the service. When you request to stop a service, all posted tasks that are still pending are executed first. You can interrupt the thread for a fast exit if necessary.
Whenever an action is needed, you post it to the event loop run by the player thread.
Note: You will probably need shared pointers for the service and the thread. You will also need to put interruption points in the play method in order to allow stopping the thread cleanly during playback. You don't need as many atomics as before, and you don't need a condition variable anymore.
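A rough sketch of that lifetime (the PlayerLoop wrapper and run_service helper are made up for illustration, and the io_service spelling from the answer is kept): the work object keeps run() from returning, and shutdown amounts to destroying it (or stopping the service) and then joining the thread.

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/scoped_ptr.hpp>
#include <boost/bind.hpp>

// Plain function avoids any ambiguity over io_service::run overloads.
static void run_service(boost::asio::io_service* s) { s->run(); }

struct PlayerLoop
{
    boost::asio::io_service service;
    boost::scoped_ptr<boost::asio::io_service::work> work;
    boost::thread thread;

    PlayerLoop()
        : work(new boost::asio::io_service::work(service)),
          thread(boost::bind(&run_service, &service)) {}

    void stop_and_join()
    {
        work.reset();      // run() returns once the pending tasks are done
        // service.stop(); // alternative: abandon pending tasks and return ASAP
        thread.join();
    }
};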
Any suggestion of a better design?
Yes! Since you are using Qt, I would strongly suggest using Qt's event loop (apart from the UI stuff, this is IMO one of the main selling points of the library) and asynchronous signals/slots to do the controlling instead of your homegrown synchronization, which - as you found out - is a very fragile undertaking.
The main change this brings to your current design is that you will have to do your video logic as part of the Qt event loop, or, more easily, just call QEventLoop::processEvents. For that you will need a QThread.
Then it's very straightforward: create a class that inherits from QObject, let's say PlayerController, which contains signals like play, pause, stop, and a class Player which has slots onPlay, onPause, onStop (or without the "on", your preference). Then create a 'controller' object of the PlayerController class in the GUI thread and the Player object in the 'video' thread (or use QObject::moveToThread). This is important, as Qt uses thread affinity to determine in which thread slots are executed.

Now connect the objects with QObject::connect(controller, SIGNAL(play()), player, SLOT(onPlay())). Any call to PlayerController::play on the 'controller' from the GUI thread will then result in the onPlay method of the 'player' being executed in the video thread on the next event-loop iteration. That's where you can change your boolean status variables or perform other actions without the need for explicit synchronization, as your variables are now only changed from the video thread.
So something along those lines:
class PlayerController : public QObject {
    Q_OBJECT
signals:
    void play();
    void pause();
    void stop();
};

class Player : public QObject {
    Q_OBJECT
public slots:
    void play() { m_isPlayMode = true; }
    void pause() { m_isPlayMode = false; }
    void stop() { m_isStop = true; }
public:
    bool isStopped() const { return m_isStop; }
    bool isPlaying() const { return m_isPlayMode; }
private:
    bool m_isPlayMode = false;
    bool m_isStop = false;
};

class VideoThread : public QThread {
public:
    VideoThread(PlayerController* controller) {
        m_controller = controller;
    }
protected:
    /* override the run method; normally not advised, but we want our own event loop */
    void run() {
        QEventLoop loop;
        Player* player = new Player;
        QObject::connect(m_controller, SIGNAL(play()), player, SLOT(play()));
        QObject::connect(m_controller, SIGNAL(pause()), player, SLOT(pause()));
        QObject::connect(m_controller, SIGNAL(stop()), player, SLOT(stop()));
        while (!player->isStopped()) {
            // do video-related stuff while player->isPlaying()
            loop.processEvents();
        }
        delete player;
    }
private:
    PlayerController* m_controller;
};

// somewhere in the main thread
PlayerController* controller = new PlayerController();
VideoThread* videoThread = new VideoThread(controller);
videoThread->start();
controller->play(); // emits the signal; Player::play runs in the video thread
Any suggestion of a better design?
Instead of using a separate thread, use a QTimer and play on the main thread. No atomics or mutexes needed. I am not quite sure what m_cleanMode does, so I mostly took it out of the code; if you elaborate on what it does, I can add it back.
class Player : public QObject
{
    Q_OBJECT
public:
    void init();
public slots:
    void play();
    void pause();
    void playForward();
    void playBackward();
private slots:
    void drawFrameAndAdvance();
private:
    void drawFrame();
    int32_t m_frameIndex;
    int32_t m_totalFrames;
    bool m_cleanMode;
    QTimer m_timer;
};
void Player::init()
{
    // some init goes here ...
    m_timer.setInterval(33); // ~30 fps
    connect(&m_timer, SIGNAL(timeout()), this, SLOT(drawFrameAndAdvance()));
}

void Player::drawFrame()
{
    // play 1 frame
}

void Player::drawFrameAndAdvance()
{
    if (m_frameIndex < m_totalFrames - 1) {
        drawFrame();
        m_frameIndex++;
    }
    else m_timer.stop();
}
void Player::playForward()
{
    if (m_cleanMode) return;
    m_timer.stop(); // stop playback
    if (m_frameIndex < m_totalFrames - 1) {
        m_frameIndex++;
        drawFrame();
    }
}

void Player::playBackward()
{
    if (m_cleanMode) return;
    m_timer.stop(); // stop playback
    if (m_frameIndex > 0) {
        m_frameIndex--;
        drawFrame();
    }
}

void Player::play()
{
    if (m_cleanMode) return;
    m_timer.start(); // start playback
}

void Player::pause()
{
    if (m_cleanMode) return;
    m_timer.stop(); // stop playback
}

High CSwitch ("context switch") when using Boost interprocess code (on Windows, Win32)

I'm writing a multithreaded app.
I was using the boost::interprocess classes (version 1.36.0).
Essentially, I have worker threads that need to be notified when work is available for them to do.
I tried both the "semaphore" and "condition" approaches.
In both cases, the CSwitch (context switch) count for the worker threads seemed very high, around 600 switches per second.
I had a gander at the code, and it seems like it just checks a flag (atomically, using a mutex) and then yields the timeslice before trying again.
I was expecting the code to use WaitForSingleObject or something.
Ironically, this is exactly how I was doing it before deciding to do it "properly" and use Boost! (i.e. using a mutex to regularly check the status of a flag). The only difference is that in my approach I slept for about 50 ms between checks, so I didn't have the high CSwitch problem (and yes, it's fine for me that work may not start for up to 50 ms).
Several questions:
Does this "high" CSwitch value matter?
Would this occur if the boost library was using CRITICAL_SECTIONS instead of semaphores (I don't care about inter-process syncing - all threads are in same process)?
Would this occur if boost was using WaitForSingleObject?
Is there another approach in the Boost libs that uses the aforementioned Win32 wait methods (WaitForXXX) which I assume won't suffer from this CSwitch issue.
Update: Here is a pseudo-code sample. I can't add the real code because it would be a bit complex, but this is pretty much what I'm doing. It just starts a thread to do a one-off asynchronous activity.
NOTE: These are just illustrations! There is a lot missing from this sample; e.g. if you call injectWork() before the thread has hit the wait, it just won't work. I just wanted to illustrate my use of Boost.
The usage is something like:
int main(int argc, char** args)
{
    MyWorkerThread thread;
    thread.startThread();
    ...
    thread.injectWork("hello world");
}
Here is the example using boost.
class MyWorkerThread
{
public:
    /// Do work asynchronously
    void injectWork(string blah)
    {
        this->blah = blah;
        // Notify semaphore
        this->semaphore->post();
    }

    void startThread()
    {
        // Start the thread (pseudo code)
        CreateThread(threadHelper, this, ...);
    }

private:
    static void threadHelper(void* param)
    {
        ((MyWorkerThread*)param)->thread();
    }

    /// The thread method
    void thread()
    {
        // Wait for semaphore to be invoked
        semaphore->wait();
        cout << blah << endl;
    }

    string blah;
    boost::interprocess::interprocess_semaphore* semaphore;
};
And here was my "naive" polling code:
class MyWorkerThread_NaivePolling
{
public:
    MyWorkerThread_NaivePolling()
    {
        workReady = false;
    }

    /// Do work asynchronously
    void injectWork(string blah)
    {
        section.lock();
        this->blah = blah;
        this->workReady = true;
        section.unlock();
    }

    void startThread()
    {
        // Start the thread (pseudo code)
        CreateThread(threadHelper, this, ...);
    }

private:
    /// Uses Win32 CRITICAL_SECTION
    class MyCriticalSection
    {
    public:
        MyCriticalSection();
        void lock();
        void unlock();
    };

    MyCriticalSection section;

    static void threadHelper(void* param)
    {
        ((MyWorkerThread_NaivePolling*)param)->thread();
    }

    /// The thread method
    void thread()
    {
        while (true)
        {
            bool myWorkReady = false;
            string myBlah;
            // See if work is set
            section.lock();
            if (this->workReady)
            {
                myWorkReady = true;
                myBlah = this->blah;
            }
            section.unlock();

            if (myWorkReady)
            {
                cout << myBlah << endl;
                return;
            }
            else
            {
                // No work, so sleep for a while
                Sleep(50);
            }
        }
    }

    string blah;
    bool workReady;
};
Cheers,
John
On non-POSIX systems, it seems that interprocess_condition is emulated using a spin loop, as you describe in your question. And interprocess_semaphore is emulated with a mutex and an interprocess_condition, so wait()-ing ends up in the same loop.
Since you mention that you don't need the interprocess synchronization, you should look at Boost.Thread, which offers a portable implementation of condition variables. Amusingly, it seems to be implemented on Windows in the "classical" way, using a... semaphore.
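As a sketch of what the worker from the question looks like with Boost.Thread primitives (same structure; only the synchronization types change, and the flag is checked under the lock so a post before the wait isn't lost):

#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <string>
#include <iostream>

class MyWorkerThread_BoostThread
{
public:
    MyWorkerThread_BoostThread() : workReady(false) {}

    /// Do work asynchronously
    void injectWork(const std::string& work)
    {
        boost::lock_guard<boost::mutex> guard(mutex);
        blah = work;
        workReady = true;
        condition.notify_one(); // wakes the worker if it is already waiting
    }

    void startThread()
    {
        worker = boost::thread(boost::bind(&MyWorkerThread_BoostThread::thread, this));
    }

    void join() { worker.join(); }

private:
    /// The thread method: blocks in the kernel instead of spinning/yielding
    void thread()
    {
        boost::unique_lock<boost::mutex> lock(mutex);
        while (!workReady)          // also guards against spurious wakeups
            condition.wait(lock);   // releases the mutex while waiting
        std::cout << blah << std::endl;
    }

    boost::mutex mutex;
    boost::condition_variable condition;
    boost::thread worker;
    bool workReady;
    std::string blah;
};

Because the wait blocks in the kernel until notified, the idle worker should not show the context-switch churn seen with the emulated interprocess primitives.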
If you do not mind being Windows-specific (newer versions of Windows), check the link for lightweight condition variables, CONDITION_VARIABLE (they work like critical sections):

Thread implemented as a Singleton

I have a commercial application built with C/C++ and Qt on the Linux platform. The app collects data from different sensors and displays it on a GUI. Each protocol for interfacing with the sensors is implemented as a singleton with a thread from Qt's QThread class. All the protocols except one work fine. Each protocol's thread run function has the following structure:
void <ProtocolClassName>::run()
{
    while (!mStop) // check whether screen is closed or not
    {
        mutex.lock();
        while (!waitcondition.wait(&mutex, 5))
        {
            if (mStop)
                return;
        }
        // Code for receiving and processing incoming data
        mutex.unlock();
    } // end while
}
Hierarchy of the GUI:
1. Login screen.
2. Action screen.
When a user logs in from the login screen, we enter the action screen, where all data is displayed and the threads for the different sensors start. They wait on the mStop variable when idle, and when data arrives they jump to receiving and processing it. Incoming data for the problem protocol is 117 bytes. In the main GUI thread there are timers which, on timeout, grab the running instance of the protocol using the
<ProtocolName>::instance() function
and check the update variable of the singleton class; if it is true, they display the data. When the data display is done, they reset the update variable in the singleton class to false. The problematic protocol has an update time of 1 second, which is also the frame rate of the protocol. When I comment out the display function it runs fine, but when the display is activated the application hangs consistently after 6-7 hours. I have asked this question on many forums but haven't received any worthwhile suggestions; I hope that here I will get some help. I have also read a lot of literature on singletons and multithreading and found that people always discourage the use of singletons, especially in C++, but in my application I can think of no other design.
Thanks in advance
A Hapless programmer
I think a singleton is not really what you are looking for. Consider this:
You have (let's say) two sensors, each with its own protocol (frame rate, for our purposes).
Now create a "server" class for each sensor instead of an explicit singleton. This way you can hide the details of how your sensors work:
class SensorServer {
protected:
    int lastValueSensed;
    QThread* sensorProtocolThread;
public:
    int getSensedValue() { return lastValueSensed; }
};

class Sensor1Server : public SensorServer {
public:
    Sensor1Server() {
        sensorProtocolThread = new Sensor1ProtocolThread(&lastValueSensed);
        sensorProtocolThread->start();
    }
};

class Sensor1ProtocolThread : public QThread {
protected:
    int* valueToUpdate;
    static const int TIMEOUT = 1000; // update period ("framerate") of our sensor1, in ms
public:
    Sensor1ProtocolThread(int* vtu) {
        this->valueToUpdate = vtu;
    }
    void run() {
        while (true) {
            int valueFromSensor = 0;
            // get value from the sensor into 'valueFromSensor'
            *valueToUpdate = valueFromSensor;
            msleep(TIMEOUT);
        }
    }
};
This way you can do away with having to implement a singleton.
Cheers,
jrh.
Just a drive-by analysis, but this doesn't smell right.
If the application is "consistently" hanging after 6-7 hours, are you sure it isn't a resource (e.g. memory) leak? Is there anything different about the implementation of the problematic protocol compared to the rest of them? Have you run the app through a memory checker, etc.?
Not sure it's the cause of what you're seeing, but you have a big fat synchronization bug in your code:
void <ProtocolClassName>::run()
{
    while (!mStop) // check whether screen is closed or not
    {
        mutex.lock();
        while (!waitcondition.wait(&mutex, 5))
        {
            if (mStop)
                return; // BUG: missing mutex.unlock()
        }
        // Code for receiving and processing incoming data
        mutex.unlock();
    } // end while
}
better:
void <ProtocolClassName>::run()
{
    while (!mStop) // check whether screen is closed or not
    {
        const QMutexLocker locker(&mutex);
        while (!waitcondition.wait(&mutex, 5))
        {
            if (mStop)
                return; // OK now
        }
        // Code for receiving and processing incoming data
    } // end while
}