Why does creating a thread inside a unique_ptr make the thread destruct? - c++

I wrote a simple function that flushes to a file in a thread, so it does not block the main thread:
void MultiChannelDiskRecordingWav::flush() {
    size_t amountToWrite = mWriteCursorFrames * mChannelCount;
    for (size_t i = 0; i < amountToWrite; i++) {
        tempWriteBuffer[i] = writeBuffer[i];
    }
    auto flush = [this, capturedWriteCursorFrames = mWriteCursorFrames, capturedChannelCount = mChannelCount]() {
        {
            std::unique_lock<std::mutex> lk{recordingFileMutex};
            std::vector<float> floats(tempWriteBuffer, tempWriteBuffer + mWriteCursorFrames * mChannelCount);
            recordingFile.Write(floats);
            bytesWritten += capturedWriteCursorFrames * capturedChannelCount * sizeof(float);
        }
    };
    flushThread = std::make_unique<std::thread>(flush);
}
I'm getting a crash with these functions in the backtrace:
(_ZSt9terminatev+52)
(_ZNSt6__ndk16threadD1Ev+24)
(_ZN28MultiChannelDiskRecordingWav5flushEv+268)
which demangle to
(std::terminate() 52)
(std::__ndk1::thread::~thread() 24)
(MultiChannelDiskRecordingWav::flush() 268)
Why is the thread being destroyed? I'm not moving it, I'm creating it inside a std::unique_ptr.

On this line:
flushThread = std::make_unique<std::thread>(flush);
You are creating a new std::unique_ptr<std::thread> and assigning it to an existing std::unique_ptr<std::thread> named flushThread. When flushThread is assigned to, it will destroy any std::thread that it already holds. It is that existing std::thread that is being destroyed, not the new std::thread that you just created.
When a std::thread object is destroyed, its destructor calls std::terminate() if the thread is still joinable(). So you will have to do something like this to ensure that the existing thread has finished before you create the new thread:
if (flushThread) flushThread->join(); // <-- add this
flushThread = std::make_unique<std::thread>(flush);
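A minimal sketch of how the owning class might tie this together; the member and the destructor shown here are assumptions about code not included in the question. The last flush thread also has to be joined before the object goes away, otherwise the same terminate() fires when the unique_ptr destroys it:
#include <memory>
#include <thread>

class MultiChannelDiskRecordingWav {
    std::unique_ptr<std::thread> flushThread;
public:
    void flush() {
        // Wait for the previous flush before starting a new one, so the
        // assignment below never destroys a joinable thread.
        if (flushThread && flushThread->joinable()) flushThread->join();
        flushThread = std::make_unique<std::thread>([] { /* write data to disk */ });
    }
    ~MultiChannelDiskRecordingWav() {
        // The last flush thread must be joined as well.
        if (flushThread && flushThread->joinable()) flushThread->join();
    }
};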


Why am I getting a "read access violation" exception thrown while accessing value from std::future?

Edit: my question is different from the suggested question because I cannot poll hundreds of future.get() calls and wait before executing the rest of the program in the main() thread. I want each thread to return its value when it is done executing, and only then should the calling function continue.
As per this question:
"in C++ future waits for the results and halts the next line of code"
I have the following code, where I create about 100 async tasks and then retrieve their return values:
void BEVTest::runTests()
{
    CameraToBEV cbevObj(RoiBbox, OutputDim, InputDim);
    // for multithreading
    std::vector<std::future<cv::Mat>> processingThread;
    std::vector<cv::Mat> opMatArr;
    for (int i = 0; i < jsonObjects.size(); i++) {
        processingThread.emplace_back(std::async(std::launch::async, &CameraToBEV::process, &cbevObj, std::ref(jsonObjects[i]), RoiBbox, OutputDim, InputDim));
    }
    for (auto& future : processingThread) {
        opMatArr.emplace_back(future.get());
    }
}
I am getting a run time exception of "read access violation" at the line opMatArr.emplace_back(future.get());
When I inspect the processingThread variable, it shows all the futures as pending. So, if the quote above is correct, shouldn't my code wait until it gets all the future values? Otherwise, this answer provides the following solution to wait for the value from future.get():
auto f = std::async(std::launch::async, my_func)
    .then([] (auto fut) {
        auto result = fut.get();
        /* Do stuff when result is ready. */
    });
But this won't be possible for me, because then I would have to poll all 100 future.get() calls, and that overhead would defeat the purpose of creating threads.
I have two questions:
1. Why am I getting a run-time exception?
2. How do I wait until all 100 future.get() calls return a value?
EDIT: I am providing my process function.
cv::Mat CameraToBEV::process(json& jsonObject, std::vector<Point2f> roibbox, Point2f opDim, Point2f ipDim)
{
    // Parameters
    img_width = imgWidth = opDim.x;
    img_height = imgHeight = opDim.y;
    for (int i = 0; i < roibbox.size(); i++)
    {
        roiPoints[i] = roibbox[i];
    }
    // From Centroids class
    jsonObj = jsonObject;
    initializeCentroidMatrix();
    getBBoxes();
    transformCoordinates();
    // From Plotter class
    cv::Mat resultImg = plotLocation(finalCentroids, violationArr, ipDim, opDim);
    return resultImg;
}
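A side note on the code above: process() assigns to member state of the single shared cbevObj (img_width, roiPoints, jsonObj, finalCentroids, ...) from many threads at once, which is a data race and a plausible source of the access violation. A minimal sketch of one way to avoid sharing that state, assuming a CameraToBEV can simply be constructed per task as it already is in runTests(); the rest is illustrative rather than the asker's actual API:
#include <future>
#include <vector>

void BEVTest::runTests()
{
    std::vector<std::future<cv::Mat>> tasks;
    std::vector<cv::Mat> opMatArr;
    for (size_t i = 0; i < jsonObjects.size(); i++) {
        // Each task constructs its own CameraToBEV, so no member state of a
        // single shared object is written from several threads at once.
        tasks.emplace_back(std::async(std::launch::async,
            [this](json& jo) {
                CameraToBEV local(RoiBbox, OutputDim, InputDim);
                return local.process(jo, RoiBbox, OutputDim, InputDim);
            },
            std::ref(jsonObjects[i])));
    }
    for (auto& t : tasks) {
        opMatArr.emplace_back(t.get()); // get() blocks until that task is done
    }
}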

Pointer given as parameter to member function called in boost::thread_group is null

I'm working with a thread pool in C++ using boost::thread_group, but the method 'widgetProcessorJob' called in the thread gets a null parameter (widget).
I tried it in different ways, and I think I'm using boost::asio badly...
I'm looking for someone who can tell me what I'm doing wrong and what the best approach is.
void MarketingAutomation::processOnWidgets() {
    boost::asio::io_service ioService;
    boost::thread_group threadpool;
    bool available = true; // need infinite loop in my program
    int offset = 0; // Only for batching
    boost::asio::io_service::work work(ioService);
    for (int i = 0; i < _poolSize; i++) {
        threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService));
    }
    while (available) {
        std::shared_ptr<sql::ResultSet> widgets(MyDBConnector::getInstance().getWidgets(_batchSize, offset)); // just getting some data from sql base with mysqlcppconn
        if (!widgets->next()) {
            offset = 0;
            Logger::getInstance().logSTD("Restart widgets iteration !"); // this part is called when i did stuff on all batches
        } else {
            Logger::getInstance().logSTD("Proccess on " + std::to_string((offset / _batchSize) + 1) + " batch");
            // loop through the batch
            while (!widgets->isAfterLast()) {
                ioService.post(boost::bind(&MarketingAutomation::widgetProcessorJob, this, widgets));
                widgets->next();
            }
            threadpool.join_all();
            Logger::getInstance().logSTD("Finish on " + std::to_string((offset / _batchSize) + 1) + " batch");
            offset += _batchSize;
        }
    }
}
// Here is the function called in the thread
void MarketingAutomation::widgetProcessorJob(std::shared_ptr<sql::ResultSet> widget) {
    WidgetProcessor widgetProcessor(widget, _kind); // Here widget is already null, but why ? :'(
    widgetProcessor.processOnWidget();
}
// loop through the batch
while (!widgets->isAfterLast()) {
    ioService.post(boost::bind(&MarketingAutomation::widgetProcessorJob, this, widgets));
    widgets->next();
}
You have only one std::shared_ptr<sql::ResultSet> widgets. By posting it multiple times you are making copies of the smart pointer, but all these smart pointers point to the same underlying sql::ResultSet.
This means that when you call next() you are "nexting" the same recordset you posted to all your handlers.
Depending on the timing of your threads, you might reach the end of the recordset before any handler is even called; and even if that's not the case, you are in a race condition that will give you, at best, only part of what you want.
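A minimal illustration of that aliasing, independent of the sql API:
#include <cassert>
#include <memory>

int main() {
    std::shared_ptr<int> a = std::make_shared<int>(0);
    std::shared_ptr<int> b = a; // copies the pointer, not the int it points to
    *a = 42;                    // visible through b too: there is only one int
    assert(*b == 42);
}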
As I thought, I was using boost::asio badly! After posting, I tried my program without the infinite 'available' loop, and the jobs running on ioService still looped forever because I never called the stop method. To fix it, I moved the thread_group and io_service declarations into the 'available' loop and call stop on each iteration. Here is the working version, incorporating @Drax's answer:
void MarketingAutomation::processOnWidgets() {
    bool available = true;
    int offset = 0;
    while (available) {
        std::shared_ptr<sql::ResultSet> widgets(SlaaskDBConnector::getInstance().getWidgets(_batchSize, offset));
        if (!widgets->next()) {
            offset = 0;
            Logger::getInstance().logSTD("Restart widgets iteration !");
        } else {
            boost::asio::io_service ioService;
            boost::thread_group threadpool;
            boost::asio::io_service::work work(ioService);
            for (int i = 0; i < _poolSize; i++) {
                threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService));
            }
            Logger::getInstance().logSTD("Proccess on " + std::to_string((offset / _batchSize) + 1) + " batch");
            while (!widgets->isAfterLast()) {
                ioService.post(boost::bind(&MarketingAutomation::widgetProcessorJob, this, widgets->getInt("id")));
                widgets->next();
            }
            ioService.stop();
            threadpool.join_all();
            Logger::getInstance().logSTD("Finish on " + std::to_string((offset / _batchSize) + 1) + " batch");
            offset += _batchSize;
        }
    }
}
void MarketingAutomation::widgetProcessorJob(int widgetID) {
    WidgetProcessor widgetProcessor(widgetID, _kind); // the id is now passed by value, so nothing can be null here
    widgetProcessor.processOnWidget();
}
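One caveat about this version, offered as a hedged side note rather than as part of the original answers: io_service::stop() makes run() return as soon as possible, so handlers still sitting in the queue when stop() is called may never execute. A common alternative is to release the work guard and let run() return on its own once the queue drains, for example:
boost::asio::io_service ioService;
boost::thread_group threadpool;
// Hold the work guard in a unique_ptr so it can be released explicitly.
auto work = std::make_unique<boost::asio::io_service::work>(ioService);
for (int i = 0; i < _poolSize; i++) {
    threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService));
}
// ... post all jobs for this batch ...
work.reset();          // no more work coming: run() returns once the queue is empty
threadpool.join_all(); // waits until every posted job has actually executed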

Synchronization between threads without overload

I can't find a good solution for implementing mutual exclusion on a resource shared between different threads.
I've got many methods (in a class) that access a database heavily; this is one of them:
string id = QUERYPHYSICAL + toString(ID);
wait();
mysql_query(connection, id.c_str());
MYSQL_RES *result = mysql_use_result(connection);
while (MYSQL_ROW row = mysql_fetch_row(result)) {
    Physical[ID - 1].ID = atoi(row[0]);
    Physical[ID - 1].NAME = row[1];
    Physical[ID - 1].PEOPLE = atoi(row[2]);
    Physical[ID - 1].PIRSTATUS = atoi(row[3]);
    Physical[ID - 1].LIGHTSTATUS = atoi(row[4]);
}
mysql_free_result(result);
signal();
The methods wait and signal do these things:
void Database::wait(void) {
    while (!this->semaphore);
    this->semaphore = false;
}
void Database::signal(void) {
    this->semaphore = true;
}
But with this, my CPU load goes above 190% (reading from /proc/loadavg). What should I do to reduce the CPU overhead and make the system more efficient? I'm on an 800MHz Raspberry Pi.
You can use a pthread_mutex_t: init it in the constructor, lock it in wait, unlock it in signal, and destroy it in the destructor.
Like this:
class Mutex {
    pthread_mutex_t m;
public:
    Mutex() {
        pthread_mutex_init(&m, NULL);
    }
    ~Mutex() {
        pthread_mutex_destroy(&m);
    }
    void wait() {
        pthread_mutex_lock(&m);
    }
    void signal() {
        pthread_mutex_unlock(&m);
    }
};
You should also check the return values of the pthread_mutex functions: 0 means success, non-zero means an error.
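Since the surrounding code is C++, a std::mutex with a scoped lock does the same job without manual init/destroy and without any risk of forgetting the unlock. A minimal sketch, with illustrative names rather than the asker's actual class layout:
#include <mutex>

class Database {
    std::mutex dbMutex;
public:
    void queryPhysical(int ID) {
        std::lock_guard<std::mutex> lock(dbMutex); // blocks without spinning, released on scope exit
        // ... run the MySQL query and fill Physical[ID - 1] here ...
    }
};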

D parallel loop

First, how does D create a parallel foreach (what is the underlying logic)?
int main(string[] args)
{
    int[] arr;
    arr.length = 100000000;
    /* Why does this work? It's a simple foreach working with a
       reference to an int from arr, but parallel returns ParallelForeach!R
       (ParallelForeach!(int[])), and I don't know what that is.
       parallel is part of the phobos library, not a D built-in, so what
       kind of magic is used for this? */
    foreach (ref e; parallel(arr))
    {
        e = 100;
    }
    foreach (ref e; parallel(arr))
    {
        e *= e;
    }
    return 0;
}
And second, why is it slower than a simple foreach?
Finally, if I create my own TaskPool (and don't use the global taskPool object), the program never ends. Why?
parallel returns a struct (of type ParallelForeach) that implements the opApply(int delegate(...)) foreach overload.
When called, the struct submits a parallel function to the private submitAndExecute, which submits the same task to all threads in the pool.
This then does:
scope(failure)
{
    // If an exception is thrown, all threads should bail.
    atomicStore(shouldContinue, false);
}

while (atomicLoad(shouldContinue))
{
    immutable myUnitIndex = atomicOp!"+="(workUnitIndex, 1);
    immutable start = workUnitSize * myUnitIndex;
    if (start >= len)
    {
        atomicStore(shouldContinue, false);
        break;
    }
    immutable end = min(len, start + workUnitSize);

    foreach (i; start..end)
    {
        static if (withIndex)
        {
            if (dg(i, range[i])) foreachErr();
        }
        else
        {
            if (dg(range[i])) foreachErr();
        }
    }
}
where workUnitIndex and shouldContinue are shared variables and dg is the foreach delegate.
It is slower simply because of the overhead of passing the function to the threads in the pool and of atomically accessing the shared variables.
The reason your custom pool doesn't shut down is most likely that you never shut down the TaskPool with finish.

Creation of a thread with a dynamic argument

I am creating a thread from the main thread with a dynamically allocated object as an argument. But if this dynamically allocated object is deleted in the main thread, how can the created thread find out that the object has been deleted?
Main thread code:
int CLocalReader::Run()
{
    TReaderArgument *readerArg = new TReaderArgument;
    readerArg->iFinished = &theFinishedACE;
    readerArg->iSelf = this;
#ifdef WIN32
    if (ACE_Thread::spawn((ACE_THR_FUNC)LocalReaderFunc, readerArg) == -1)
    {
        ACE_DEBUG((LM_DEBUG, "Could not start reader\n"));
        delete readerArg;
        readerArg = NULL;
    }
#else
    if (ACE_Thread_Manager::instance()->spawn(ACE_THR_FUNC(LocalReaderFunc), readerArg, THR_NEW_LWP | THR_DETACHED) < 0)
    {
        ACE_DEBUG((LM_DEBUG, "Could not start reader\n"));
        delete readerArg;
        readerArg = NULL;
    }
#endif
    return KErrNone;
}

static void *ReaderFunc(void *arg)
{
    ASSERT(arg);
    ACE_Thread::yield();
    ACE_OS::sleep(ACE_Time_Value(0, STARTUP_TIME));
    TReaderArgument *rarg = (TReaderArgument *)arg;
    CLocalReader *self = static_cast<CLocalReader *>(rarg->iSelf);
    int *finished = rarg->iFinished;
    while (!(*finished))
    {
        if (self->GetData() != KErrorNone)
        {
            ACE_DEBUG((LM_DEBUG, "%D LocalReader : Error receiving data\n"));
        }
    }
    return 0;
}
If, in the code above, the object is deleted, how can the thread function check that the self object has been deleted?
Use reference counting, like in COM. When the main thread is done with the object, it sets a "deleted" flag and releases its reference. The object is not deleted yet, because the worker thread still holds a reference. The thread can check the flag and release its own reference once the flag is set; the reference count then drops to 0 and the object destroys itself by calling delete this;.
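In modern C++ the same idea can be expressed with std::shared_ptr for the reference count and an atomic flag for the "done" signal. A minimal sketch, with names invented for illustration rather than taken from the ACE code above:
#include <atomic>
#include <memory>
#include <thread>

struct ReaderArgument {
    std::atomic<bool> finished{false};
    // ... whatever else the reader needs ...
};

int main() {
    auto arg = std::make_shared<ReaderArgument>(); // main thread holds one reference

    std::thread reader([arg] {                     // the lambda copy holds a second reference
        while (!arg->finished.load()) {
            // read data ...
        }
    });                                            // arg stays alive as long as the thread uses it

    arg->finished.store(true);                     // signal the reader to stop
    reader.join();
    return 0;                                      // last reference released here, object freed
}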