FreeRTOS Shared pointer parameter set to 0 - c++

I am trying to share data between 2 FreeRTOS tasks. My approach for this is to create a struct for the tasks' pvParameters that contains a std::shared_ptr.
My task creation looks like this:
auto shared_value = std::make_shared<int8_t>(-1);

auto paramsA = SharedValParams(shared_value);
xTaskCreate(taskA,
            "Task A",
            4000,
            (void *)&paramsA,
            1,
            NULL);

auto paramsB = SharedValParams(shared_value);
xTaskCreate(taskB,
            "Task B",
            4000,
            (void *)&paramsB,
            1,
            NULL);
I'm currently trying to modify this value:
void taskA(void *params)
{
    for (;;)
    {
        *((SharedValParams *)params)->shared_val = 5;
        Serial.printf("Shared val is now %d\n",
                      *(*(SharedValParams *)params).shared_val.get());
        vTaskDelay(1000 / portTICK_PERIOD_MS);
    }
}
For the first couple of executions of taskA, shared_val is printed as 5. Great! After that, however, shared_val is set to 0, or at least that's what is printed. I've yet to implement anything in taskB, so nothing else accesses or modifies shared_val. I'm unsure why this is happening, especially since the initial value of shared_val is -1.
In the future, once I resolve this issue, I will guard shared_val with a mutex, but for now I cannot reliably set the value. The motivation behind passing this as a param is to keep the value scoped to the relevant tasks.
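One likely cause, assuming the creation snippet above lives inside a function (setup(), say): paramsA and paramsB are local variables, so they are destroyed when that function returns, and both tasks then dereference dangling pointers; reading garbage such as 0 is a typical symptom. Below is a minimal sketch of keeping the parameter blocks alive by heap-allocating them; the SharedValParams definition and the startTasks() wrapper are assumptions, not code from the question:
#include <memory>

// Assumed shape; the question only shows the name SharedValParams.
struct SharedValParams
{
    std::shared_ptr<int8_t> shared_val;
    explicit SharedValParams(std::shared_ptr<int8_t> v) : shared_val(std::move(v)) {}
};

void taskA(void *params); // task functions as in the question
void taskB(void *params);

void startTasks()
{
    auto shared_value = std::make_shared<int8_t>(-1);

    // Heap-allocate so the structs outlive this function; each task should
    // eventually delete the block it receives.
    auto *paramsA = new SharedValParams(shared_value);
    auto *paramsB = new SharedValParams(shared_value);

    xTaskCreate(taskA, "Task A", 4000, paramsA, 1, NULL);
    xTaskCreate(taskB, "Task B", 4000, paramsB, 1, NULL);
}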


Why is the xTimerPeriodInTicks affecting the code's runtime?

So for a report I'm trying to time the number of cycles it takes to execute a couple of separate functions in a program running on an ESP32, using FreeRTOS. The code looks something like this, with init_input_generator() being the first function called; here I'm timing onesecwindow.fifo_iir_filter_datapoint():
float geninputarray[SAMPLING_RATE]; // stores the values of an artificial signal we can use to test the algorithm
fifo_buffer onesecwindow = fifo_buffer(SAMPLING_RATE);

esp_err_t init_input_generator() {
    dsps_tone_gen_f32(geninputarray, SAMPLING_RATE, 4, 0.04, 0); // 10Hz
    TimerHandle_t input_signal_timer =
        xTimerCreate("input_timer", pdMS_TO_TICKS(10), pdTRUE, (void *)0, input_generator_callback);
    xTimerStart(input_signal_timer, 0);
    return ESP_OK;
}
void input_generator_callback(TimerHandle_t xTimer)
{
    xTaskCreatePinnedToCore(
        input_counter,
        "input_generator",
        4096,
        NULL,
        tskIDLE_PRIORITY + 11,
        NULL,
        PRO_CPU_NUM);
}
void input_counter(void *pvParameters) // task entry point, so it takes void *, not TimerHandle_t
{
    stepcounter = (stepcounter + 1) % SAMPLING_RATE; // stepcounter is a global sample index
    insert_new_data(geninputarray[stepcounter]);
    vTaskDelete(nullptr); // to gracefully end the task as returning is not allowed
}
esp_err_t insert_new_data(float datapoint)
{
    onesecwindow.fifo_write(datapoint); // writes the new datapoint into the onesec window
    unsigned int start_time = dsp_get_cpu_cycle_count();
    onesecwindow.fifo_iir_filter_datapoint();
    unsigned int end_time = dsp_get_cpu_cycle_count();
    printf("%u\n", (end_time - start_time));
    return ESP_OK; // declared esp_err_t, so return a status
}
The thing I'm noticing is that whenever I change xTimerPeriodInTicks to a larger number, I get significantly longer time results, which really confuses me and gets in the way of proper timing. The function doesn't scale with SAMPLING_RATE and thus should give quite consistent results on each loop, yet the output in cycles with a 10ms timer period looks something like this (every 5 or 6 "loops", for some reason, it's longer):
1567, 630, 607, 624, 591, 619, 649, 1606, 607
with a 40ms timer period I get this as a typical output:
1904, 600, 1894, 616, 1928, 1928, 607, 1897, 628
As such I'm confused by the output: both cases use the same FreeRTOS task creation call, running with the same priority on the same core, so I don't see why there would be any difference. Perhaps I'm misunderstanding something basic/fundamental here, so any help would be greatly appreciated.
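One way to check whether one-off effects (for example, a cold instruction cache after a longer idle gap) dominate these numbers is to time the same call several times back to back and keep the minimum. A sketch only, reusing dsp_get_cpu_cycle_count() and onesecwindow from the code above:
// Times fifo_iir_filter_datapoint() n times in a row and returns the minimum,
// which discounts one-off costs such as cache refills. Note that this
// re-filters the same sample repeatedly, so it is for timing experiments only.
unsigned int time_filter_min(int n)
{
    unsigned int best = ~0u;
    for (int i = 0; i < n; ++i) {
        unsigned int start_time = dsp_get_cpu_cycle_count();
        onesecwindow.fifo_iir_filter_datapoint();
        unsigned int end_time = dsp_get_cpu_cycle_count();
        unsigned int cycles = end_time - start_time;
        if (cycles < best) best = cycles;
    }
    return best;
}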
Update
So based on the comments by @Tarmo I have restructured the approach to use a recurring task; unfortunately the output still seems to suffer from the same problem. The code now looks like this:
#include <esp_dsp.h> // Official ESP-DSP library

float geninputarray[SAMPLING_RATE]; // stores the values of an artificial signal we can use to test the algorithm
fifo_buffer onesecwindow = fifo_buffer(SAMPLING_RATE);

esp_err_t init_input_generator() {
    dsps_tone_gen_f32(geninputarray, SAMPLING_RATE, 4, 0.04, 0); // 10Hz
    xTaskCreatePinnedToCore(
        input_counter, // the actual function to be called
        "input_generator",
        4096,
        NULL,
        tskIDLE_PRIORITY + 5,
        NULL,
        APP_CPU_NUM);
    return ESP_OK;
}
void input_counter(void *pvParameters) // task entry point, so it takes void *, not TimerHandle_t
{
    while (true)
    {
        stepcounter = (stepcounter + 1) % SAMPLING_RATE;
        insert_new_data(geninputarray[stepcounter]);
        vTaskDelay(4);
    }
}
esp_err_t insert_new_data(float datapoint)
{
    onesecwindow.fifo_write(datapoint); // writes the new datapoint into the onesec window
    unsigned int start_time = dsp_get_cpu_cycle_count();
    onesecwindow.fifo_iir_filter_datapoint();
    unsigned int end_time = dsp_get_cpu_cycle_count();
    printf("%u\n", (end_time - start_time));
    return ESP_OK; // declared esp_err_t, so return a status
}

SetPerTcpConnectionEStats fails and can't get GetPerTcpConnectionEStats multiple times c++

I am following the example in https://learn.microsoft.com/en-gb/windows/win32/api/iphlpapi/nf-iphlpapi-getpertcp6connectionestats?redirectedfrom=MSDN to get the TCP statistics. Although I got it working and can read the statistics, I still want to record them at a regular time interval (which I haven't managed to do), and I have the following questions.
SetPerTcpConnectionEStats() fails with a status != NO_ERROR, equal to 5 (ERROR_ACCESS_DENIED). Even though it fails, I can still get the statistics. Why?
I want to get the statistics every, let's say, 1 second. I have tried two different ways: a) use a while loop with std::this_thread::sleep_for(1s), where I could get the statistics every ~1 sec, but the whole app was stalling (is it because of this? I suppose I am blocking the operation of the main thread); and b) (since a) failed) call TcpStatistics() from another function (in a different class) that is triggered every 1 sec (I store clientConnectRow in a global var). In case (b), however, GetPerTcpConnectionEStats() fails with winStatus = 1214 (ERROR_INVALID_NETNAME) and of course TcpStatistics() cannot get any of the statistics.
a)
ClassB::ClassB()
{
    UINT winStatus = GetTcpRow(localPort, hostPort, MIB_TCP_STATE_ESTAB, (PMIB_TCPROW)clientConnectRow);
    ToggleAllEstats(clientConnectRow, TRUE);
    thread t1(&ClassB::TcpStatistics, this, clientConnectRow);
    t1.join(); // join() blocks here until the infinite loop below returns
}

void ClassB::TcpStatistics()
{
    while (true)
    {
        GetAndOutputEstats(row, TcpConnectionEstatsBandwidth);
        // some more code here
        this_thread::sleep_for(milliseconds(1000));
    }
}
b)
ClassB::ClassB()
{
    MIB_TCPROW client4ConnectRow; // stack local: destroyed when the constructor returns
    void* clientConnectRow = NULL;
    clientConnectRow = &client4ConnectRow;
    UINT winStatus = GetTcpRow(localPort, hostPort, MIB_TCP_STATE_ESTAB, (PMIB_TCPROW)clientConnectRow);
    m_clientConnectRow = clientConnectRow; // keeps a pointer to the local above
    TcpStatistics();
}

void ClassB::TcpStatistics()
{
    ToggleAllEstats(m_clientConnectRow, TRUE);
    void* row = m_clientConnectRow;
    GetAndOutputEstats(row, TcpConnectionEstatsBandwidth);
    // some more code here
}
void ClassB::GetAndOutputEstats(void* row, TCP_ESTATS_TYPE type)
{
    //...
    winStatus = GetPerTcpConnectionEStats((PMIB_TCPROW)row, type, NULL, 0, 0, ros, 0, rosSize, rod, 0, rodSize);
    if (winStatus != NO_ERROR) {
        wprintf(L"\nGetPerTcpConnectionEStats %s failed. status = %d", estatsTypeNames[type], winStatus);
    }
    else { /* ... */ }
}
void ClassA::FunA()
{
    classB_ptr->TcpStatistics();
}
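Regarding the stall in (a): thread t1(...) followed immediately by t1.join() makes the constructor block until TcpStatistics() returns, and since TcpStatistics() loops forever, the constructor never finishes. A minimal sketch of keeping the polling loop on a background thread and joining only at destruction (member names are hypothetical):
#include <atomic>
#include <chrono>
#include <thread>

class ClassB {
public:
    ClassB()
    {
        // Start polling in the background; do NOT join here.
        m_worker = std::thread(&ClassB::PollLoop, this);
    }

    ~ClassB()
    {
        m_running = false;      // ask the loop to stop...
        if (m_worker.joinable())
            m_worker.join();    // ...and block only here, at teardown
    }

private:
    void PollLoop()
    {
        while (m_running) {
            // TcpStatistics(); // collect and print the stats here
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }

    std::atomic<bool> m_running{true};
    std::thread m_worker;
};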
I found a workaround for the second part of my question. I am posting it here in case someone else finds it useful. There might be other, more advanced solutions too, but this is how I did it myself. We first have to obtain the MIB_TCPROW corresponding to the TCP connection, and then enable EStats collection before dumping the current stats. So what I did was to put all of this in a function and call that instead, every time I want to get the stats.
void ClassB::FunSetTcpStats()
{
    MIB_TCPROW client4ConnectRow;
    void* clientConnectRow = NULL;
    clientConnectRow = &client4ConnectRow;

    // this is for the statistics
    UINT winStatus = GetTcpRow(lPort, hPort, MIB_TCP_STATE_ESTAB, (PMIB_TCPROW)clientConnectRow); // lPort & hPort in htons!
    if (winStatus != ERROR_SUCCESS) {
        wprintf(L"\nGetTcpRow failed on the client established connection with %d", winStatus);
        return;
    }

    //
    // Enable Estats collection and dump current stats.
    //
    ToggleAllEstats(clientConnectRow, TRUE);
    TcpStatistics(clientConnectRow); // same as GetAllEstats() in msdn
}
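With that in place, periodic recording reduces to calling FunSetTcpStats() on a timer, for example from a dedicated thread (hypothetical usage; keep_running and classB are stand-ins):
std::atomic<bool> keep_running{true};
std::thread poller([&] {
    while (keep_running) {
        classB.FunSetTcpStats(); // re-obtains the row each time, so no stale pointer
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
});
// ... later: keep_running = false; poller.join();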

How to correctly use grpc asynchronously (ClientAsyncReaderWriter)

I can't find a gRPC example showing how to use ClientAsyncReaderWriter (is there one?). I tried something on my own, but am having trouble with the reference counts. My question comes from tracing through the code.
struct grpc_call has a member of type gpr_refcount called ext_ref. The C++ ClientContext object wraps the grpc_call and holds onto it in a member grpc_call *call_;. Only when ext_ref is 0 can this grpc_call pointer be deleted.
When I use gRPC synchronously with ClientReader:
In its implementation it uses CreateCall() and PerformOps(), which add to ext_ref (ext_ref == 2).
Then I use Pluck(), which subtracts from ext_ref so that ext_ref == 1.
The last use, ~ClientContext(), subtracts from ext_ref, so that ext_ref == 0 and the call is deleted.
But when I use gRPC asynchronously with ClientAsyncReaderWriter:
First I use AsyncXXX(); this API uses CreateCall() and registers a Write() (ext_ref == 2).
Then I use AsyncNext() to get a tag, which must correspond to a Write or Read operation.
So ext_ref stays greater than 1 forever, unless there is an event I don't handle.
I'm calling it like this:
struct Notice
{
    std::unique_ptr<
        grpc::ClientAsyncReaderWriter<ObserveNoticRequest, EventNotice>
    > _rw;
    ClientContext _context;
    EventNotice _rsp;
};
Register Thread
CompletionQueue *cq = new CompletionQueue;
Notice *notice = new Notice;
notice->_rw = stub->AsyncobserverNotice(&notice->_context, cq, notice);
// here notice->_context.call_.ext_ref is 2
Get CompletionQueue Event Thread
void *tag = NULL;
bool ok = false;
CompletionQueue::NextStatus got = CompletionQueue::NextStatus::TIMEOUT;
gpr_timespec deadline;
deadline.clock_type = GPR_TIMESPAN;
deadline.tv_sec = 0;
deadline.tv_nsec = 10000000;
got = cq->AsyncNext<gpr_timespec>(&tag, &ok, deadline);
if (GOT_EVENT == got) {
    if (tag != NULL) {
        Notice *notice = (Notice *)tag;
        notice->_rw->Read(&notice->_rsp, notice);
        // here notice->_context.call_.ext_ref is 2.
        // now I want to stop this CompletionQueue.
        delete notice;
        // ~ClientContext() runs, so ext_ref drops to 1,
        // but call_ is only deleted once ext_ref == 0
    }
}
Take a look at this file, client_async.cc, for good use of the ClientAsyncReaderWriter. If you still have confusion, please create a very clean reproduction of the issue, and we will look into it further.
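As a general pattern (a sketch built on the structures from the question, not code from that example): before deleting an object that outstanding tags still point to, cancel the call and drain the completion queue, so every pending operation surfaces once and releases its reference:
notice->_context.TryCancel();  // cancel the call so pending ops complete quickly
cq->Shutdown();                // no new work may be added; pending tags still drain

void *t = NULL;
bool ok2 = false;
while (cq->Next(&t, &ok2)) {
    // Each outstanding Read/Write surfaces here exactly once (ok2 == false
    // after cancellation); the Notice must stay alive until this loop ends.
}
delete notice;                 // ~ClientContext() can now drop the last reference
delete cq;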

Why is my CFRunLoopTimer not firing?

I have a CFRunLoopTimer created within a C++ class as shown below:
#import <CoreFoundation/CoreFoundation.h>
void cClass::StartTimer()
{
    if (!mActiveSenseTimer)
    {
        CFTimeInterval TIMER_INTERVAL = 5;
        CFRunLoopTimerContext TimerContext = {0, this, NULL, NULL, NULL};
        CFAbsoluteTime FireTime = CFAbsoluteTimeGetCurrent() + TIMER_INTERVAL;
        mActiveSenseTimer = CFRunLoopTimerCreate(kCFAllocatorDefault,
                                                 FireTime,
                                                 0, 0, 0,
                                                 ActiveSenseTimerCallback,
                                                 &TimerContext);
        NSLog(@"RunLoop:0x%x, TimerIsValid:%d, TimeIsNow:%f, TimerWillFireAt:%f",
              CFRunLoopGetCurrent(),
              CFRunLoopTimerIsValid(mActiveSenseTimer),
              CFAbsoluteTimeGetCurrent(),
              FireTime);
    }
}
void ActiveSenseTimerCallback(CFRunLoopTimerRef timer, void *info)
{
    NSLog(@"Timeout");
    CFRunLoopTimerContext TimerContext;
    TimerContext.version = 0;
    CFRunLoopTimerGetContext(timer, &TimerContext);
    ((cClass *)TimerContext.info)->Timeout();
}
Calling cClass::StartTimer() results in the following log output:
RunLoop:0x7655d60, TimerIsValid:1, TimeIsNow:389196910.537962, TimerWillFireAt:389196915.537956
However, my timer never fires. Any ideas why?
Quote from the docs:
A timer needs to be added to a run loop mode before it will fire. To add the timer to a run loop, use CFRunLoopAddTimer. A timer can be registered to only one run loop at a time, although it can be in multiple modes within that run loop.
Also make sure your run loop doesn't die before the timer fires.
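Concretely, the StartTimer() code above creates the timer but never adds it to a run loop. A minimal fix following the quoted docs (the choice of kCFRunLoopCommonModes is an assumption; any mode the loop actually runs in works):
mActiveSenseTimer = CFRunLoopTimerCreate(kCFAllocatorDefault,
                                         FireTime,
                                         0, 0, 0,
                                         ActiveSenseTimerCallback,
                                         &TimerContext);
// The missing step: schedule the timer on the current run loop.
CFRunLoopAddTimer(CFRunLoopGetCurrent(), mActiveSenseTimer, kCFRunLoopCommonModes);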

Creating Timers in C++ using Lua

I was wondering whether the following setup would work for a small game:
Let's assume I have the following functions registered to Lua like so:
lua_register(L, "createTimer", createTimer);
lua_register(L, "getCondition", getCondition);
lua_register(L, "setAction", setAction);
where (leaving out the type checking):
int createTimer(lua_State* L){
    string condition = lua_tostring(L, 1);
    string action = lua_tostring(L, 2);
    double timer = lua_tonumber(L, 3);
    double expiration = lua_tonumber(L, 4);
    addTimer(condition, action, timer, expiration); // adds the "timer" to a vector or something
    return 0; // nothing is pushed onto the Lua stack
}
Calling this function from Lua like so:
createTimer("getCondition=<5", "setAction(7,4,6)", 5, 20);
Can I then do the following?
// this function is called in the game-loop to loop through all timers in the vector
void checkTimers(){
    for(std::vector<T>::iterator it = v.begin(); it != v.end(); ++it) {
        if(luaL_dostring(L, it->condition.c_str())){
            luaL_dostring(L, it->action.c_str());
        }
    }
}
Would this work? Would luaL_dostring pass "getCondition=<5" to the Lua state, where it would call the C++ function getCondition(), then see whether it is =<5, and return true or false? And would the same go for luaL_dostring(L, "setAction(7, 4, 6)");?
Moreover, would this be a suitable way to create timers, accessing Lua only once (to create them) and letting C++ handle the rest, calling the C++ functions through Lua and leaving only the logic to Lua?
Thanks in advance.
You may want to change the condition string to "return getCondition()<=5", otherwise the string chunk will not compile or run. Then check the boolean return value on the stack when luaL_dostring() returns successfully. Something like this:
// this function is called in the game-loop to loop through all timers in the vector
void checkTimers(){
    for(std::vector<T>::iterator it = v.begin(); it != v.end(); ++it) {
        lua_settop(L, 0);
        if(luaL_dostring(L, it->condition.c_str()) == 0 && lua_toboolean(L, 1)){
            luaL_dostring(L, it->action.c_str());
        }
    }
}
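For reference, a minimal sketch of the timer record and registry these loops assume (names hypothetical, matching addTimer() from the question):
#include <string>
#include <vector>

struct Timer {
    std::string condition;  // e.g. "return getCondition() <= 5"
    std::string action;     // e.g. "setAction(7, 4, 6)"
    double interval;        // seconds between checks
    double expiration;      // when the timer should be removed
};

std::vector<Timer> v;       // the vector checkTimers() iterates over

void addTimer(const std::string& condition, const std::string& action,
              double interval, double expiration)
{
    v.push_back(Timer{condition, action, interval, expiration});
}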
You cannot interrupt Lua while it is running. The best you can do is to set a flag and then handle the interruption at a safe time. The standalone interpreter uses this technique to handle user interrupts (control-C). This technique is also used in my lalarm library, which can be used to implement timer callbacks, though not at the high level you want.