I change a global variable in one thread, but the change does not take effect in another thread unless I print it
Here's the code:
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_t thread_test[2];
bool test = true;

void* test1(void*)
{
    while (1)
    {
        //printf("test: %d\n", test);
        if (test)
            continue;
        printf("test test test\n");
        usleep(500000);
    }
}

void* test2(void*)
{
    int i = 0;
    while (i < 5)
    {
        printf("i: %d\n", i++);
        sleep(1);
    }
    test = false;
    return NULL;
}

int main()
{
    pthread_create(&thread_test[0], NULL, &test1, (void *)NULL);
    pthread_create(&thread_test[1], NULL, &test2, (void *)NULL);
    pthread_join(thread_test[0], NULL);
    return 0;
}
If the line printf("test: %d\n",test); is commented out, then the change of test in test2 does not take effect in test1.
The run results are shown below:
[screenshot of the output]
If it is not commented out:
[screenshot of the output]
This has troubled me for a long time. I hope someone can explain it.
The compiler is probably optimizing out the while(1) loop body, assuming the global test is always true.
I got the same results with the QNX compiler's 'release' setup:
-g0 -O3
Got "test test test" (test turned false) with 'debug' setup:
-O0 -g3 -ggdb
p.s.
volatile bool test=true;
with 'release' setup also does the trick (despite what others wrote above)
[screenshot of the output]
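For what it's worth, volatile only happens to defeat this particular optimization; the guarantee the language actually gives for cross-thread visibility comes from atomics (or a mutex). A minimal sketch, assuming C++11 is available and keeping the rest of the question's code unchanged:
#include <atomic>
#include <cstdio>

std::atomic<bool> test{true};      // the store done in test2 is guaranteed to become visible

void* test1(void*)
{
    while (test.load())            // re-read on every iteration; cannot be hoisted out of the loop
        ;                          // (a real program would sleep or yield here)
    printf("test test test\n");
    return NULL;
}
test2 can keep writing test = false; as before, since std::atomic<bool> supports plain assignment.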
I'm trying to do my first bit of threading but no matter what I've tried I can't get this to compile.
I've gone back to trying to compile some demo code and I'm getting the same problem as in my program.
If I run a simple "hello world" print program, it compiles and deploys fine, and I can navigate to it and run it directly on the Pi4.
Threading demo code
#include <stdio.h>
#include <string.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

pthread_t tid[2];

void* doSomeThing(void* arg)
{
    unsigned long i = 0;
    pthread_t id = pthread_self();

    if (pthread_equal(id, tid[0]))
    {
        printf("\n First thread processing\n");
    }
    else
    {
        printf("\n Second thread processing\n");
    }

    for (i = 0; i < (0xFFFFFFFF); i++);

    return NULL;
}

int main(void)
{
    int i = 0;
    int err;

    while (i < 2)
    {
        err = pthread_create(&(tid[i]), NULL, &doSomeThing, NULL);
        if (err != 0)
            printf("\ncan't create thread :[%s]", strerror(err));
        else
            printf("\n Thread created successfully\n");
        i++;
    }

    sleep(5);
    return 0;
}
When I compile I get
Error /home/pi/projects/cpp_raspbian_thread_101/obj/x64/Debug/main.o: in function `main':
Error undefined reference to `pthread_create'
Error ld returned 1 exit status
To resolve this I've tried adding -pthread or -lpthread to
Project > Properties > Configuration Properties > C/C++ > Command Line > Additional Options
That does nothing, and I'm not really sure if this is the correct place to put it.
I'm building in VS2019, so I'm not building from the command line, and I don't know where to add this argument.
I have also tried installing pthreads in NuGet but that doesn't help.
Other software like VSCode seems to have files I could add this to, but I'm lost in VS2019.
Any help is appreciated.
EDIT:
Thanks for responses
OK, so as @Eljay suggested I'm trying to use std::thread (again), but I have the same problem.
// thread example
#include <iostream>
#include <thread>

void foo()
{
    // do stuff...
}

int main()
{
    std::thread first(foo);
    first.join();   // wait for the thread; destroying a joinable thread calls std::terminate
    return 0;
}
Log file
Validating sources
Copying sources remotely to '10.0.0.2'
Validating architecture
Validating architecture
Starting remote build
Compiling sources:
main.cpp
Linking objects
/usr/bin/ld : error : /home/pi/projects/cpp_raspbian_thread_101/obj/ARM/Debug/main.o: in function `std::thread::thread<void (&)(), , void>(void (&)())':
/usr/include/c++/8/thread(135): error : undefined reference to `pthread_create'
collect2 : error : ld returned 1 exit status
So I'm back to the pthread_create problem again
OK both code examples now compile and run.
As I originally thought, I needed to add -pthread somewhere in VS2019 and I was putting it in the wrong section.
Go to
Project Properties > Configuration Properties > Linker > Command Line
Add -pthread to Additional Options box and Apply.
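For reference, the option that box forwards to the remote link step is the same one you would pass when building by hand on the Pi (illustrative invocation; the file name is assumed):
g++ main.cpp -pthread -o thread_demo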
I hope that saves someone else the 3 days it took me to sort it!
I need a gtest that will pass if SIGABRT doesn't happen, but fail (or at least tell me) if it does happen. How would I do that?
I was thinking of this sort of thing:
TEST_F(TestTest, testSigabrtDoesntHappen)
{
    MyObject &myObject = MyObject::instance();
    for (int i = 0; i < 2; i++) {
        myObject.doWork(); // this will SIGABRT on the second try, if at all
        ASSERT_TRUE(myObject);
    }
    ASSERT_TRUE(myObject);
}
So, assuming a SIGABRT would exit the test if it occurs, we would otherwise get three passing assertions. Any other ideas?
Not on Windows:
::testing::KilledBySignal(signal_number) // Not available on Windows.
You should look at the guide.
It would look something like this to me (not tested):
TEST_F(TestTest, testSigabrtDoesntHappen)
{
    MyObject &myObject = MyObject::instance();
    for (int i = 0; i < 2; i++) {
        EXPECT_EXIT(myObject.doWork(), ::testing::KilledBySignal(SIGABRT), "Regex to match error message");
        ASSERT_TRUE(myObject);
    }
    ASSERT_TRUE(myObject);
}
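If the intent is the opposite (pass when the call does not abort), one untested variant is to run the work inside a death test and expect a clean exit instead; MyObject and doWork are just the names from the question, and exit() needs <cstdlib>:
TEST_F(TestTest, testSigabrtDoesntHappen)
{
    MyObject &myObject = MyObject::instance();
    // The statement runs in a child process: if doWork() raises SIGABRT the child is
    // killed by the signal and ExitedWithCode(0) fails; otherwise exit(0) satisfies it.
    EXPECT_EXIT({ myObject.doWork(); myObject.doWork(); exit(0); },
                ::testing::ExitedWithCode(0), ".*");
}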
On Windows:
You'll have to handle the signal yourself with this kind of code:
// crt_signal.c
// compile with: /EHsc /W4
// Use signal to attach a signal handler to the abort routine
#include <stdlib.h>
#include <signal.h>
#include <tchar.h>

void SignalHandler(int signal)
{
    if (signal == SIGABRT) {
        // abort signal handler code
    } else {
        // ...
    }
}

int main()
{
    typedef void (*SignalHandlerPointer)(int);

    SignalHandlerPointer previousHandler;
    previousHandler = signal(SIGABRT, SignalHandler);

    abort(); // raises SIGABRT
}
doc
But seriously, if you have ever gotten a SIGABRT while running your code, there are problems in your code that you have to fix before releasing the software.
But if you really want to debug your code (with googletest), use this with your debugger:
foo_test --gtest_repeat=1000 --gtest_break_on_failure
You can add other options to it; again, check the doc :)
I am trying to understand how heap consistency checking works in the GNU C Library. I referred to http://www.gnu.org/software/libc/manual/html_node/Heap-Consistency-Checking.html#Heap-Consistency-Checking
Here is a sample program I have written. I expected, as suggested in the manual, that if I run it in gdb and call mcheck(0), my custom abortfn would be called. But it isn't getting called.
What am I missing here?
#include <stdio.h>
#include <stdlib.h>
#include <mcheck.h>

void *abortfn(enum mcheck_status status)
{
    switch (status) {
    case MCHECK_DISABLED:
        printf("MEMCHECK DISABLED\n");
        break;
    default:
        printf("MEMCHECK ENABLED\n");
    }
}

int main()
{
    printf("This is mcheck testing code\n");

    int *a = (int *) malloc(sizeof(int));
    *a = 10;
    printf("A: %d\n", *a);
    free(a);

    return 0;
}
Today, compiling with all warnings & debug info (gcc -Wall -Wextra -g) then using valgrind is more convenient.
However, the very documentation that you are linking to says:
it is too late to begin allocation checking once you have allocated anything with malloc
So start your main as
int main() {
    mcheck(abortfn);
However, your abortfn should return void, so code it as:
void abortfn(enum mcheck_status status) {
    switch (status) {
    case MCHECK_DISABLED:
        printf("MEMCHECK DISABLED\n");
        break;
    default:
        printf("MEMCHECK ENABLED\n");
    }
}
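Putting both fixes together, here is a minimal self-contained sketch (assuming glibc) that actually gets the callback invoked: mcheck() is installed before the first malloc, and the deliberate one-byte overrun makes free() detect a clobbered block and call the handler:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mcheck.h>

static void abortfn(enum mcheck_status status)
{
    /* Called by glibc whenever a heap consistency check fails. */
    printf("mcheck status: %d\n", (int) status);
}

int main(void)
{
    if (mcheck(abortfn) != 0) {   /* must run before any allocation */
        fprintf(stderr, "mcheck() failed\n");
        return 1;
    }

    char *p = malloc(4);
    memset(p, 'x', 5);            /* deliberately write one byte past the block */
    free(p);                      /* glibc notices the clobbered trailer and calls abortfn */

    return 0;
}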
I'm wondering how you can create and register a function from the C++ side that returns a table when called from the Lua side.
I've tried a lot of things but nothing really worked. :/
(sorry for the long code)
This, for example, won't work, because Register() expects a lua_CFunction-styled function:
LuaPlus::LuaObject Test( LuaPlus::LuaState* state ) {
    int top = state->GetTop();
    std::string var( state->ToString(1) );

    LuaPlus::LuaObject tableObj(state);
    tableObj.AssignNewTable(state);

    if (var == "aaa")
        tableObj.SetString("x", "ABC");
    else if (var == "bbb")
        tableObj.SetString("x", "DEF");

    tableObj.SetString("y", "XYZ");

    return tableObj;
}
int main()
{
    LuaPlus::LuaState* L = LuaPlus::LuaState::Create(true);
    //without true I can't access the standard libraries like "math.","string."...
    //with true, GetLastError returns 2 though (ERROR_FILE_NOT_FOUND)
    //no side effects noticed though

    LuaPlus::LuaObject globals = L->GetGlobals();
    globals.Register("Test", Test);

    char pPath[MAX_PATH];
    GetCurrentDirectory(MAX_PATH, pPath);
    strcat_s(pPath, MAX_PATH, "\\test.lua");

    if (L->DoFile(pPath)) {
        if (L->GetTop() == 1) // An error occurred
            std::cout << "An error occurred: " << L->CheckString(1) << std::endl;
    }
}
When I try to set it up as a lua_CFunction-styled function, it just crashes (0x3) and says:
Assertion failed: 0, file C:\......\luafunction.h, line 41
int Test( LuaPlus::LuaState* state ) {
    int top = state->GetTop();
    std::string var( state->ToString(1) );

    LuaPlus::LuaObject tableObj(state);
    tableObj.AssignNewTable(state);

    if (var == "aaa")
        tableObj.SetString("x", "ABC");
    else if (var == "bbb")
        tableObj.SetString("x", "DEF");

    tableObj.SetString("y", "XYZ");

    tableObj.Push();
    return state->GetTop() - top;
}
For clarification: from the Lua side I wanted it to be callable like:
myVar = Test("aaa")
Print(myVar) -- output: ABC
EDIT: The Print function comes from here, and was basically the cause of this not working. Print can only print strings, not tables... The C++ code from above works fine if you just return 1.
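In other words, only the tail of the lua_CFunction-styled version above needs to change (sketch, reusing the calls already shown in the question):
    // ... build tableObj exactly as in the question ...
    tableObj.Push();   // leave the table on the Lua stack as the return value
    return 1;          // report exactly one return value to Lua
}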
This is the documentation that came with my LuaPlus version btw: http://luaplus.funpic.de/
I really hope you can help me.. I'm already starting to think that it is not possible. :'(
edit:
I totally forgot to say that using PushStack() led to an error because "the member does not exist"...
After some painstaking probing through the long comment discussion, I'm posting this answer to summarize the situation and hopefully offer some useful advice.
The main issue the OP was running into was that the wrong print function was being called in the Lua test script. Contrary to the original code shown, the real code the OP was testing against was calling Print(myVar), which is a custom provided lua_CFunction and not the builtin print function.
Somehow along the way, this ended up creating an instantiation of template <typename RT> class LuaFunction and calling the overloaded operator()(). From inspecting luafunction.h from LuaPlus, any Lua errors that occur inside this call get swallowed up without any kind of logging (not a good design decision on LuaPlus's part):
if (lua_pcall(L, 0, 1, 0)) {
    const char* errorString = lua_tostring(L, -1); (void)errorString;
    luaplus_assert(0);
}
To help catch future errors like this, I suggest adding a new luaplus_assertlog macro. Specifically, this macro will include the errorString so that the context isn't completely lost, and hopefully help with debugging. This change hopefully won't break existing uses of luaplus_assert from other parts of the API. In the long run, though, it's probably better to modify luaplus_assert so it actually includes something meaningful.
Anyway here's a diff of the changes made:
LuaPlusInternal.h
@@ -81,5 +81,6 @@
 } // namespace LuaPlus
 #if !LUAPLUS_EXCEPTIONS
+#include <stdio.h>
 #include <assert.h>
 #define luaplus_assert(e) if (!(e)) assert(0)
@@ -84,5 +85,6 @@
 #include <assert.h>
 #define luaplus_assert(e) if (!(e)) assert(0)
+#define luaplus_assertlog(e, msg) if (!(e)) { fprintf(stderr, msg); assert(0); }
 //(void)0
 #define luaplus_throw(e) assert(0)
 //(void)0
LuaFunction.h
@@ -21,7 +21,7 @@
 class LuaFunction
 {
 public:
-    LuaFunction(LuaObject& _functionObj)
+    LuaFunction(const LuaObject& _functionObj)
         : functionObj(_functionObj) {
     }
@@ -36,7 +36,7 @@
         if (lua_pcall(L, 0, 1, 0)) {
             const char* errorString = lua_tostring(L, -1); (void)errorString;
-            luaplus_assert(0);
+            luaplus_assertlog(0, errorString);
         }
         return LPCD::Type<RT>::Get(L, -1);
     }
In the change above, I opted not to use std::cerr simply because C++ streams tend to be heavier than plain old C-style I/O functions. This is especially true if you're using MinGW as your toolchain: the ld linker is unable to eliminate unused C++ stream symbols even if your program never uses them.
With that in place, here's an example where an unprotected call is made to a lua function so you can see the errorString printed out prior to the crash:
// snip...
int main(int argc, const char *argv[])
{
    LuaStateAuto L ( LuaState::Create(true) );
    LuaObject globals = L->GetGlobals();
    globals.Register("Test", Test);
    globals.Register("Print", Print);

    if (argc > 1)
    {
        /*
        if (L->DoFile(argv[argc - 1]))
            std::cout << L->CheckString(1) << '\n';
        /*/
        L->LoadFile( argv[argc - 1] );
        LuaFunction<int> f ( LuaObject (L, -1) );
        f();
        //*/
    }
}
Running the above will trigger the crash but will include a semi-helpful error message:
g++ -Wall -pedantic -O0 -g -I ./Src -I ./Src/LuaPlus/lua51-luaplus/src plustest.cpp -o plustest.exe lua51-luaplus.dll
plustest.exe plustest.lua
plustest.lua:2: bad argument #1 to 'Print' (string expected, got table)Assertion failed!
Program: G:\OSS\luaplus51-all\plustest.exe
File: ./Src/LuaPlus/LuaFunction.h, Line 39
Expression: 0
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
First, you may try to register the function using RegisterDirect(); this may avoid the lua_CFunction problem. Check the LuaPlus manual. Like this:
LuaPlus::LuaObject globals = L->GetGlobals();
globals.RegisterDirect("Test",Test);
Second, if I remember correctly, there are two ways to create a table, like this:
//first
LuaObject globalsObj = state->GetGlobals();
LuaObject myArrayOfStuffTableObj = globalsObj.CreateTable("MyArrayOfStuff");
//second
LuaObject aStandaloneTableObj;
aStandaloneTableObj.AssignNewTable(state);
Check whether you have used the right function.
Third, I remember that the Lua stack object is not the LuaObject; there is a conversion between them. Maybe you can try this:
LuaStackObject stack1Obj(state, 1);
LuaObject nonStack1Obj = stack1Obj;
Fourth, in the function Test() you gave above, you push the table tableObj onto the Lua stack; you must remember to clean up the object.
I have a program that gets a KERN_PROTECTION_FAILURE with EXC_BAD_ACCESS in a very strange place when running multithreaded and I haven't the faintest idea how to troubleshoot it further. This is on MacOS 10.6 using GCC.
The very strange place that it gets this is when entering a function. Not on the first line of the function, but the actual jump to the function GetMachineFactors():
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_PROTECTION_FAILURE at address: 0xb00009ec
[Switching to process 28242]
0x00012592 in GetMachineFactors () at ../sysinfo/OSX.cpp:168
168 MachineFactors* GetMachineFactors()
(gdb) bt
#0 0x00012592 in GetMachineFactors () at ../sysinfo/OSX.cpp:168
#1 0x000156d0 in CollectMachineFactorsThreadProc (parameter=0x200280) at Threads.cpp:341
#2 0x952f681d in _pthread_start ()
#3 0x952f66a2 in thread_start ()
(gdb)
If I run this non-threaded, it runs great, no issues:
#include "MachineFactors.h"
int main( int argc, char** argv )
{
MachineFactors* factors = GetMachineFactors();
std::string str = CreateJSONObject(factors);
cout << str;
delete factors;
return 0;
}
If I run this in a pthread, I get the EXC_BAD_ACCESS above.
THREAD_FUNCTION CollectMachineFactorsThreadProc( LPVOID parameter )
{
    Main* client = (Main*) parameter;

    if ( parameter == NULL )
    {
        ERRORLOG( "No data passed to machine identification thread. Aborting." );
        return 0;
    }

    MachineFactors* mfactors = GetMachineFactors(); // This is where it dies.
    // If I don't call GetMachineFactors and do something like mfactors =
    // new MachineFactors(); everything is good and the threads communicate and exit
    // normally.

    if (mfactors == NULL)
    {
        ERRORLOG("Failed to collect machine identification: GetMachineFactors returned NULL." << endl)
        return 0;
    }

    client->machineFactors = CreateJSONObject(mfactors);
    delete mfactors;
    EVENT_RAISE(client->machineFactorsEvent);
    return 0;
}
Here is an excerpt from the GetMachineFactors() code:
MachineFactors* GetMachineFactors() // Dies on this line in multi-threaded.
{
    // printf( "Getting machine factors.\n"); // Tried with and without this, never prints.
    factors = new MachineFactors();
    factors->OSName = "MacOS";
    factors->Manufacturer = "Apple";
    //...
    // gather various machine metrics here.
    //...
    return factors;
}
For reference, I am using a socketpair to wait on the thread to complete:
// From the header file I use for cross-platform defines (this runs on OSX, Windows, and Linux).
struct _waitt
{
    int fds[2];
};

#define THREAD_FUNCTION void*
#define THREAD_REFERENCE pthread_t
#define MUTEX_REFERENCE pthread_mutex_t*
#define MUTEX_LOCK(m) pthread_mutex_lock(m)
#define MUTEX_UNLOCK pthread_mutex_unlock
#define EVENT_REFERENCE struct _waitt
#define EVENT_WAIT(m) do { char lc; if (read(m.fds[0], &lc, 1)) {} } while (0)
#define EVENT_RAISE(m) do { char lc = 'j'; if (write(m.fds[1], &lc, 1)) {} } while (0)
#define EVENT_NULL(m) do { m.fds[0] = -1; m.fds[1] = -1; } while (0)
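For completeness, the macros above assume the descriptor pair was created somewhere else; a minimal sketch of that setup (EVENT_INIT is a hypothetical helper, not part of the header shown) could be:
#include <sys/socket.h>

// Creates the pair that EVENT_WAIT reads from and EVENT_RAISE writes to.
// Evaluates to 0 on success and -1 on failure (errno set by socketpair).
#define EVENT_INIT(m) socketpair(AF_UNIX, SOCK_STREAM, 0, (m).fds)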
Here is the code where I launch the thread.
void Main::CollectMachineFactors()
{
#ifdef WIN32
    machineFactorsThread = CreateThread(NULL, 0, CollectMachineFactorsThreadProc, this, 0, 0);
    if ( machineFactorsThread == NULL )
    {
        ERRORLOG( "Could not create thread for machine id: " << ERROR_NO << endl )
    }
#else
    int retval = pthread_create(&machineFactorsThread, NULL, CollectMachineFactorsThreadProc, this);
    if (retval)
    {
        ERRORLOG( "Return code from machine id pthread_create() is " << retval << endl )
    }
#endif
}
Here's the simple failure case of running this multithreaded. It always fails for this code with the stack trace above:
CollectMachineFactors();
EVENT_WAIT(machineFactorsEvent);
cout << machineFactors;
return 0;
At first I suspected a library problem. Here's my makefile:
# Main executable file
PROGRAM = sysinfo

# Object files
OBJECTS = Version.h Main.o Protocol.o Socket.o SSLConnection.o Stats.o TimeElapsed.o Formatter.o OSX.o Threads.o

# Include directories
INCLUDE = -Itaocrypt/include -IyaSSL/taocrypt/mySTL -IyaSSL/include -isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5

# Library settings
STATICLIBS = libtaocrypt.a libyassl.a -Wl,-rpath,. -ldl -lpthread -lz -lexpat

# Compile settings
RELCXX = g++ -g -ggdb -DDEBUG -Wall $(INCLUDE)

.SUFFIXES: .o .cpp

.cpp.o :
	$(RELCXX) -c -Wall $(INCLUDE) -o $@ $<

all: $(PROGRAM)

$(PROGRAM): $(OBJECTS)
	$(RELCXX) -o $(PROGRAM) $(OBJECTS) $(STATICLIBS)

clean:
	rm -f *.o $(PROGRAM)
I can't for the life of me see anything particularly odd or dangerous and I'm not sure where to look. The same threaded process works fine on any Linux machine I have tried. Any suggestions? Any tools I should try?
I can add more info if it would be helpful.
I can see a problem with your Windows code, but not the OSX code that's crashing on you.
It seems that you're not posting the actual code for GetMachineFactors, since the variable factors is not declared. But regarding debugging, you should not take the non-appearance of printf output as conclusive proof that that statement hasn't been executed. Use debugger facilities such as setting a breakpoint, using special debugger trace output, and so on (not sure what gdb handles, it's a very primitive debugger, but perhaps Apple has better tools?).
For Windows, you should use the runtime library's thread creation instead of the Windows API CreateThread. That's because with CreateThread the runtime library isn't informed. E.g., a new expression or other call that uses the runtime library might fail.
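On Windows that usually means _beginthreadex from the CRT rather than CreateThread; a rough sketch of what the launch could look like under that approach (the thread procedure's signature changes accordingly; the names are the OP's):
#include <process.h>   // _beginthreadex

static unsigned __stdcall CollectMachineFactorsThreadProc(void* parameter)
{
    // ... same body as the pthread version ...
    return 0;
}

// In Main::CollectMachineFactors(), instead of CreateThread:
machineFactorsThread = (HANDLE) _beginthreadex(
    NULL, 0, CollectMachineFactorsThreadProc, this, 0, NULL);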
Sorry I can't help more.
I think it could perhaps have something to do with the GetMachineFactors code that you haven't shown?
It turns out, and I can't explain why, that a fork() call combined with a socketpair() as the IPC mechanism was the workaround to get things going as intended.
I wish I knew why it was failing in the first place (headscratch) but that approach seems to have been a good workaround.
It almost seemed like the kind of "build out of whack" problem that could be caused by failing to run a 'make clean' after changing header files, but that wasn't the case here.