C++ timer_create() does not create a new thread

What I am trying to achieve is to use a timer which starts a new thread each minute, at exactly the same second.
So far it does the job, but unfortunately if the thread's execution takes more than 1 minute, a new thread is NOT created; instead the timer waits until the previous thread finishes and only then fires the next one, which is not my aim.
How can I instruct the timer to fire a new thread every time, without waiting for the previous one to finish?
What I used:
// declarations implied by the snippet
struct itimerspec itimer;
struct sigevent sigev;
timer_t timer;
struct tm *tm_t;
time_t tt;

itimer.it_value.tv_nsec = 0;
itimer.it_interval.tv_sec = 60;
itimer.it_interval.tv_nsec = 0;

tt = time(NULL);
tt += 60;                      // next minute
tm_t = localtime(&tt);
tm_t->tm_sec = 0;              // round down to :00
time_t vv = mktime(tm_t);
itimer.it_value.tv_sec = vv;   // absolute first expiration

memset(&sigev, 0, sizeof(struct sigevent));
sigev.sigev_value.sival_int = 666;
sigev.sigev_notify = SIGEV_THREAD;
sigev.sigev_notify_attributes = NULL;
sigev.sigev_notify_function = threadFunction;

if (timer_create(CLOCK_REALTIME, &sigev, &timer) < 0) {
    exit(errno);
}
if (timer_settime(timer, TIMER_ABSTIME, &itimer, NULL) < 0) {
    exit(errno);
}
.....

If you read the sigevent(7) manual page you will see that for SIGEV_THREAD the function is called "as if" it were the start function of a new thread. The system may start a new thread, or the system may use a single thread to handle all timer events (which fits your description), or something completely different altogether.
If you want to make sure a new thread is created unconditionally, then you should make a wrapper function that creates a thread, and make sigev_notify_function point to that wrapper function.
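For example, a minimal sketch of such a wrapper (the names timerTrampoline and minuteJob are placeholders, not from the question; assuming C++11 threads are available):

#include <csignal>
#include <thread>

void minuteJob(union sigval val);   // the actual per-minute work

// Runs in whatever thread the implementation chooses, but immediately
// hands the work off to a brand-new detached thread, so a slow job can
// never delay the next timer expiration.
void timerTrampoline(union sigval val)
{
    std::thread(minuteJob, val).detach();
}

// ...
sigev.sigev_notify_function = timerTrampoline;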

Related

detached std::thread on esp32 arduino sometimes blocks, sometimes doesn't

I have some code running on an ESP32 microcontroller with the Arduino core.
In the setup() function I want some code, threadPressureCalib, to run independently in its own thread, so I do the following:
std::unique_ptr<std::thread> sensorCalib;

void setup()
{
    sensorCalib.reset(new std::thread(threadPressureCalib));
    std::thread* pc = sensorCalib.get();
    pc->detach();
}

void loop()
{
    ...
}
Then, I define threadPressureCalib() as follows:
void threadPressureCalib()
{
    float pressure = 0;
    int count;
    for (timestarted = millis(); (millis() - timestarted) < 10000;)
    {   // THIS ONE BLOCKS SETUP() AND LOOP() CODE EXECUTION
        Serial.println("Doing things");
    }
    Serial.println("Doing other things");
    for (count = 1; count <= 5; count++)
    {   // THIS ONE DOES NOT BLOCK SETUP() and LOOP()
        float temp;
        while (!timer2.Delay(2000));   // Not sure if this is blocking anything
        do {
            temp = adc_pressure();
        } while (temp > 104.0 || temp < 70.0);   // Catch errors
        pressure += temp;
    }
    changeSetting(pressure / 5.0);
    return;
}
Problem: during the first for loop, execution of setup() is stopped (as is loop()).
During the second for loop, nothing is stopped and the rest of the code runs in parallel (as expected).
Why does the first half of this code block, while the second half does not?
Sorry if the question is vague or improperly asked, my first q here.
Explanation of timer2, per request in the comments:
timer2 is an instance of a custom timer class. timer2.Delay(TIMEOUT) stores a timestamp the first time it is called and returns false on every subsequent call until TIMEOUT milliseconds have passed; then it returns true and resets itself.
NonBlockDelay timer2;

/**
 * Called with milliseconds to delay.
 * Sets iTimeout to the current millis() plus the milliseconds to wait for.
 * Returns true once the timer has expired.
 */
// Borrowed from someone on StackOverflow...
bool NonBlockDelay::Delay(unsigned long t)
{
    if (TimingActive)
    {
        if ((millis() > iTimeout)) {
            TimingActive = 0;
            return (1);
        }
        return (0);
    }
    iTimeout = millis() + t;
    TimingActive = 1;
    return (0);
};
// Returns true if the timer expired
bool NonBlockDelay::Timeout(void)
{
    if (TimingActive) {
        if ((millis() > iTimeout)) {
            TimingActive = 0;
            iTimeout = 0;
            return (1);
        }
    }
    return (false);
}

// Returns the current timeout value in milliseconds
unsigned long NonBlockDelay::Time(void)
{
    return iTimeout;
}
There is not enough information here to tell you the answer, but it seems that you have no idea what you are doing.
std::unique_ptr<std::thread> sensorCalib;

void setup() {
    sensorCalib.reset(new std::thread(threadPressureCalib));
    std::thread* pc = sensorCalib.get();
    pc->detach();
}
So here you store a new thread that executes threadPressureCalib, then immediately detach it. Once the thread is detached, the std::thread instance no longer manages it. So what's the point of even having std::unique_ptr<std::thread> sensorCalib; in the first place, if it literally does nothing? Do you realize that normally you need to join a thread if you wish to wait for its completion? Could it be that you just start a bunch of instances of threadPressureCalib - as you probably don't verify that they finished execution - and they interfere with each other?
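If you do want to keep ownership of the thread, one possible shape (a sketch, not the asker's code; calibDone is an added flag) is to skip detach() and join from loop() once the worker signals completion:

#include <atomic>
#include <memory>
#include <thread>

std::unique_ptr<std::thread> sensorCalib;
std::atomic<bool> calibDone{false};   // set by the worker when it finishes

void setup()
{
    sensorCalib.reset(new std::thread([] {
        threadPressureCalib();
        calibDone = true;             // announce completion
    }));
    // no detach(): the std::thread object keeps managing the thread
}

void loop()
{
    if (sensorCalib && calibDone) {
        sensorCalib->join();          // reap the thread exactly once
        sensorCalib.reset();
    }
    // ...
}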

ESP-IDF How to check if a task is already running?

I have a job that should be run with a minimum interval of 5 seconds. The trigger that starts this job can fire at any moment and at any frequency.
What is the best way to solve such a case in an RTOS environment?
I want to make a function that creates a task if it does not exist. An existing task should wait for the minimum interval to pass before doing anything. While it is waiting, the function that would create it should skip the creation of a new task.
What is the right way to check whether a task was created but hasn't finished yet?
Should I use tasks at all in this case?
Code example below:
#define CONFIG_MIN_INTERVAL 5000

uint32_t last_execution_timestamp = 0;
TaskHandle_t task_handle = NULL;
bool task_done = true;

static void report_task(void *context)
{
    if (esp_timer_get_time() / 1000 < last_execution_timestamp + CONFIG_MIN_INTERVAL)
    {
        ESP_LOGI(stateTAG, "need to wait for right time");
        int time_to_wait = last_execution_timestamp + CONFIG_MIN_INTERVAL - esp_timer_get_time() / 1000;
        vTaskDelay(time_to_wait / portTICK_PERIOD_MS);
    }
    // do something...
    task_done = true;
    vTaskDelete(NULL);   // delete the calling task
}
void init_report_task(uint32_t context)
{
    if (!task_done)
    {
        ESP_LOGI(stateTAG, "TASK already exists");
    }
    else
    {
        ESP_LOGI(stateTAG, "Creating task");
        xTaskCreate(&report_task, "report_task", 8192, (void *)context, 4, &task_handle);
        task_done = false;
    }
}
eTaskGetState can be used to check if a task is already running, but such a solution can be susceptible to races. For example your task is technically still "running" when it's in fact "finishing", i.e. setting task_done = true; and preparing for exit.
A better solution could be to use a queue (or a semaphore) and have the task run continuously, waiting for the messages to arrive and processing them in a loop.
Using a semaphore, you can do xSemaphoreTake(sem, 5000 / portTICK_PERIOD_MS); to wait for either a wake-up condition or a timeout of 5 seconds, whichever comes first.
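A minimal sketch of that wiring (untested; start_reporting and trigger_report are made-up names):

#include "freertos/FreeRTOS.h"
#include "freertos/semphr.h"
#include "freertos/task.h"

static SemaphoreHandle_t report_sem;

static void report_task(void *context)
{
    for (;;) {
        // Block until a trigger arrives or 5 seconds pass, whichever comes first
        if (xSemaphoreTake(report_sem, 5000 / portTICK_PERIOD_MS) == pdTRUE) {
            // do the job ...
        }
    }
}

void start_reporting(void)
{
    report_sem = xSemaphoreCreateBinary();
    xTaskCreate(report_task, "report_task", 8192, NULL, 4, NULL);
}

void trigger_report(void)   // called by the trigger, any time, any frequency
{
    xSemaphoreGive(report_sem);
}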
== EDIT ==
if there are no events, the task should wait. Only if an event happens should it run the job. It should run it immediately if there was no execution in the past 5 seconds. If there was an execution, it should wait until 5 seconds since the last execution have passed and only then run it
You can achieve that by carefully managing the semaphore's ticks to wait. Something like this (untested):
// sem is assumed to be a semaphore created earlier, e.g. with xSemaphoreCreateBinary()
TickType_t nextDelay = portMAX_DELAY;
TickType_t lastWakeup = 0;
const TickType_t minDelay = 5000 / portTICK_PERIOD_MS;

for (;;) {
    bool signalled = xSemaphoreTake(sem, nextDelay);
    TickType_t now = (TickType_t)(esp_timer_get_time() / (portTICK_PERIOD_MS * 1000));
    if (signalled) {
        TickType_t ticksSinceLastWakeup = now - lastWakeup;
        if (ticksSinceLastWakeup < minDelay) {
            // wakeup too soon - schedule next wakeup and go back to sleep
            nextDelay = minDelay - ticksSinceLastWakeup;
            continue;
        }
    }
    lastWakeup = now;
    nextDelay = portMAX_DELAY;
    // do work ...
}

ResumeThread takes over a minute to resume

I'm using SuspendThread / ResumeThread and, between those calls, modify the RIP register through GetThreadContext / SetThreadContext. This lets me execute arbitrary code in a thread of another process.
This works, but sometimes ResumeThread takes about 60 seconds to actually resume the target thread.
I understand that I'm somewhat abusing the API with this usage, but is there any way to speed this up? Or something I should look at that might indicate bad usage?
The target thread is in a sample program that just loops:
uint64_t blarg = 1;
while (true) {
    Sleep(100);
    std::cout << blarg << std::endl;
    blarg++;
    if (blarg == std::numeric_limits<uint64_t>::max()) {
        blarg = 0;
    }
}
The Suspend / Resume sequence is very simple as well:
void hijackRip(uint64_t targetAddress, DWORD threadId)
{
    HANDLE targetThread = OpenThread(THREAD_ALL_ACCESS, FALSE, threadId);
    DWORD suspendResult = SuspendThread(targetThread);
    CONTEXT threadContext;
    memset(&threadContext, 0, sizeof(threadContext));
    threadContext.ContextFlags = CONTEXT_ALL;
    BOOL getThreadContextResult = GetThreadContext(targetThread, &threadContext);
    threadContext.Rip = targetAddress;
    BOOL setThreadContextResult = SetThreadContext(targetThread, &threadContext);
    DWORD resumeThreadResult = ResumeThread(targetThread);
    CloseHandle(targetThread);
}
Again, this works, I can redirect execution correctly, but only 30 to 60 seconds after executing this function.

How do I interrupt xcb_wait_for_event?

In a separate thread (std::thread), I have an event loop that waits on xcb_wait_for_event. When the program exits, I'd like to shut things down nicely by interrupting (I have a solution that sets a thread-local variable, and checkpoints in the loop throw an exception), and then joining my event thread into the main thread. The issue is xcb_wait_for_event; I need a way to return from it early, or I need an alternative to the function.
Can anyone suggest a solution? Thanks for your help!
I believe I've come up with a suitable solution. I've replaced xcb_wait_for_event with the following function:
xcb_generic_event_t *WaitForEvent(xcb_connection_t *XConnection)
{
    xcb_generic_event_t *Event = nullptr;
    int XCBFileDescriptor = xcb_get_file_descriptor(XConnection);
    fd_set FileDescriptors;
    struct timespec Timeout = { 0, 250000000 };   // Check for interruptions every 0.25 seconds

    while (true)
    {
        interruptible<std::thread>::check();
        FD_ZERO(&FileDescriptors);
        FD_SET(XCBFileDescriptor, &FileDescriptors);
        if (pselect(XCBFileDescriptor + 1, &FileDescriptors, nullptr, nullptr, &Timeout, nullptr) > 0)
        {
            if ((Event = xcb_poll_for_event(XConnection)))
                break;
        }
    }
    interruptible<std::thread>::check();
    return Event;
}
Making use of xcb_get_file_descriptor, I can use pselect to wait until there are new events, or until a specified timeout has occurred. This method incurs negligible additional CPU costs, resting at a flat 0.0% (on this i7). The only "downside" is having to wait a maximum of 0.25 seconds to check for interruptions, and I'm sure that limit could be safely lowered.
A neater way would be to do something like this (the code snippet is extracted from some code I am currently working on):
void QXcbEventQueue::sendCloseConnectionEvent() const
{
    // A hack to close the XCB connection. Apparently XCB does not have any API for this?
    xcb_client_message_event_t event;
    memset(&event, 0, sizeof(event));

    event.response_type = XCB_CLIENT_MESSAGE;
    event.format = 32;
    event.sequence = 0;
    event.window = m_connection->clientLeader();
    event.type = m_connection->atom(QXcbAtom::_QT_CLOSE_CONNECTION);
    event.data.data32[0] = 0;

    xcb_connection_t *c = m_connection->xcb_connection();
    xcb_send_event(c, false, m_connection->clientLeader(),
                   XCB_EVENT_MASK_NO_EVENT, reinterpret_cast<const char *>(&event));
    xcb_flush(c);
}
For _QT_CLOSE_CONNECTION use your own atom to signal an exit; in my case clientLeader() is some invisible window that is always present on my X11 connection. If you don't have any invisible window that could be reused for this purpose, create one :)
With this you can terminate the thread running xcb_wait_for_event when you see this special event arriving.
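The receiving side could then look something like this (a sketch; connection and closeConnectionAtom stand in for your own connection and interned atom):

xcb_generic_event_t *event;
while ((event = xcb_wait_for_event(connection))) {
    uint8_t type = event->response_type & ~0x80;   // strip the "sent by client" bit
    if (type == XCB_CLIENT_MESSAGE) {
        auto *msg = reinterpret_cast<xcb_client_message_event_t *>(event);
        if (msg->type == closeConnectionAtom) {    // our private shutdown atom
            free(event);
            break;                                 // leave the loop so the thread can be joined
        }
    }
    // ... normal event handling ...
    free(event);
}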

Closing a thread with select() system call statement?

I have a thread that monitors a serial port using the select system call; the run function of the thread is as follows:
void <ProtocolClass>::run()
{
    int fd = mPort->GetFileDescriptor();
    fd_set readfs;
    int maxfd = fd + 1;
    int res;
    struct timeval Timeout;
    Timeout.tv_usec = 0;
    Timeout.tv_sec = 3;   // note: Timeout is set but never passed to select() below
    //BYTE ack_message_frame[ACKNOWLEDGE_FRAME_SIZE];

    while (true)
    {
        usleep(10);
        FD_ZERO(&readfs);
        FD_SET(fd, &readfs);
        res = select(maxfd, &readfs, NULL, NULL, NULL);
        if (res < 0)
            perror("\nselect failed");
        else if (res == 0)
            puts("TIMEOUT");
        else if (FD_ISSET(fd, &readfs))
        {   // IF INPUT RECEIVED
            qDebug("************RECEIVED DATA****************");
            FlushBuf();
            qDebug("\nReading data into a read buffer");
            int bytes_read = mPort->ReadPort(mBuf, 1000);
            mFrameReceived = false;
            for (int i = 0; i < bytes_read; i++)
            {
                qDebug("%x", mBuf[i]);
            }
            // if a complete frame has been received, write the acknowledge message frame to the port
            if (bytes_read > 0)
            {
                qDebug("\nAbout to Process Received bytes");
                ProcessReceivedBytes(mBuf, bytes_read);
                qDebug("\nProcessed Received bytes");
                if (mFrameReceived)
                {
                    int no_bytes = mPort->WritePort(mAcknowledgeMessage, ACKNOWLEDGE_FRAME_SIZE);
                }   // if frame received
            }   // if bytes read > 0
        }   // if input received
    }   // end while
}
The problem is that when I exit from this thread, using
delete <protocolclass>::instance();
the program crashes with a glibc error of malloc memory corruption. On checking the core with gdb, it was found that when exiting, the thread was still processing data, hence the error. The destructor of the protocol class looks as follows:
<ProtocolClass>::~<ProtocolClass>()
{
    delete [] mpTrackInfo;   // delete data
    wait();
    mPort->ClosePort();
    s_instance = NULL;       // static instance of singleton
    delete mPort;
}
Is this due to select? Do the semantics for destroying objects change when select is involved? Can someone suggest a clean way to destroy threads that involve a select call?
Thanks
I'm not sure what threading library you use, but you should probably signal the thread in one way or another that it should exit, rather than killing it.
The most simple way would be to keep a boolean that is set true when the thread should exit, and use a timeout on the select() call to check it periodically.
void ProtocolClass::StopThread()
{
    kill_me = true;
    // Wait for thread to die
    Join();
}

void ProtocolClass::run()
{
    struct timeval tv;
    ...
    while (!kill_me) {
        ...
        tv.tv_sec = 1;
        tv.tv_usec = 0;
        res = select(maxfd, &readfds, NULL, NULL, &tv);
        if (res < 0) {
            // Handle error
        }
        else if (res != 0) {
            ...
        }
    }
}
You could also set up a pipe and include it in readfds, and then just write something to it from another thread. That would avoid waking up every second and bring down the thread without delay.
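A sketch of that pipe-based variant (assuming wakePipe is a global set up once with pipe(wakePipe) before the thread starts):

#include <algorithm>
#include <sys/select.h>
#include <unistd.h>

int wakePipe[2];   // [0] read end, watched by the loop; [1] write end, for the stopper

void ProtocolClass::run()
{
    int fd = mPort->GetFileDescriptor();
    fd_set readfs;
    while (true) {
        FD_ZERO(&readfs);
        FD_SET(fd, &readfs);
        FD_SET(wakePipe[0], &readfs);
        int maxfd = std::max(fd, wakePipe[0]) + 1;
        if (select(maxfd, &readfs, NULL, NULL, NULL) < 0)
            break;                      // handle error
        if (FD_ISSET(wakePipe[0], &readfs))
            return;                     // wake-up byte arrived: exit immediately
        if (FD_ISSET(fd, &readfs)) {
            // ... read and process serial data as before ...
        }
    }
}

// From the stopping thread:
//   write(wakePipe[1], "x", 1);   // wakes select() at once
//   wait();                       // then join the thread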
Also, you should of course never use a boolean variable like that without some kind of lock, ...
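In modern C++ the simplest safe option for such a flag is std::atomic<bool> (a sketch, not part of the original answer):

#include <atomic>

std::atomic<bool> kill_me{false};

// Stopping thread:
kill_me.store(true);

// Worker loop:
while (!kill_me.load()) {
    // ... select() with a timeout, as above ...
}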
Are the threads still looking at mpTrackInfo after you delete it?
Without seeing the code it is hard to tell.
But I would think that the first thing the destructor should do is wait for any threads to die (preferably with some form of join() to make sure they are all accounted for). Once they are dead you can start cleaning up the data.
Your thread is more than just memory with some members, so just deleting it and counting on the destructor is not enough. Since I don't know Qt threads, I think this link can put you on your way:
trolltech message
Two possible problems:
What is mpTrackInfo? You delete it before you wait for the thread to exit. Does the thread use this data somewhere, maybe even after it's been deleted?
How does the thread know it's supposed to exit? The loop in run() seems to run forever, which should cause wait() in the destructor to wait forever.