Mutex sharing between DLL and application - c++

I have a multithreaded application that uses a DLL that I created. There is a certain function that will fail if the DLL has not run a certain function yet. How can I make sure that the thread that runs this application function waits for that DLL function to complete before continuing?
Visualization:
O = DLL Function Completes
T = Application Function Starts
App Thread:----------------------T--------------------------
DLL Thread:--------------O----------------------------------

Several approaches:
The first thought would be to put the code into DllMain(), which is executed automatically when the application loads the DLL. Not everything can be done there though, e.g. blocking operations or operations that load other DLLs. Be sure to read and understand the docs if you try this approach.
The second thought is to throw or assert() so that the init function must be called before any other one, like WSAStartup(). You would then call it once in main() before creating any other threads. This is a simple approach; it requires manual work, and you can't create threads inside the constructors of globals (which is always dangerous anyway), but at least it tells you if you got it wrong and, with the assert() approach, has zero overhead in release builds.
A third variant is using Boost.Thread's one-time initialization, which seems to do what you want.
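For illustration, here is a minimal sketch of that pattern using the standard library's std::call_once, which mirrors what Boost.Thread's one-time initialization offers (the names are placeholders):

#include <mutex>

std::once_flag g_initFlag;

void EnsureInitialized()
{
    // The callable runs exactly once, even if several threads get here
    // at the same time; later callers block until it has finished.
    std::call_once(g_initFlag, [] {
        // perform the DLL's one-time initialization here
    });
}

Every exported function that depends on the initialization would call EnsureInitialized() first.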

You could use a named event.
Create an event for both the app and DLL to share first:
HANDLE myEvent = CreateEvent(NULL, false, false, L"MyEvent");
To signal completion, use:
SetEvent(myEvent);
To wait for completion use:
WaitForSingleObject(myEvent, INFINITE);
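Putting it together, a rough sketch of the handshake might look like this (the event name and the manual-reset choice are assumptions; a manual-reset event stays signalled so that late waiters don't block):

// DLL side: signal once initialization has finished.
HANDLE hInitDone = CreateEventW(NULL, TRUE, FALSE, L"MyEvent"); // manual-reset, initially unsignalled
// ... run the DLL initialization ...
SetEvent(hInitDone);

// Application side: create/open the same named event and wait for it.
HANDLE hWait = CreateEventW(NULL, TRUE, FALSE, L"MyEvent");
WaitForSingleObject(hWait, INFINITE);   // returns once the DLL has signalled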

Related

Can two functions in same DLL be called by two threads?

I wrote a DLL, MyDLL.dll, with Visual C++ 2008, as follows:
(1) MFC static linked
(2) Using multi-thread runtime library.
In the DLL there is a global data member m_Data shared by two exported functions, as follows:
ULONGLONG WINAPI MyFun1(LPVOID *lpCallbackFun1)
{
    ...
    // Write m_Data (using a critical section to protect it)
    ...
    return xxx;
}

ULONGLONG WINAPI MyFun2(LPVOID *lpCallbackFun2)
{
    ...
    // Suspend MyThread1 to prevent conflict.
    // Read m_Data (using a critical section to protect it)
    // Resume MyThread1.
    ...
    return xxx;
}
In my main application, I first call LoadLibrary to load MyDLL.dll, then get the addresses of MyFun1 and MyFun2, and then do the following:
(1) Start a new thread MyThread1, which will invoke MyFun1 to do a time-consuming task.
(2) Start a new thread MyThread2, which will invoke MyFun2 several times, as follows:
for (nIndex = 0; nIndex < 20; nIndex++)
{
    nResult2 = MyFun2(lpCallbackFun2);
    NextStatement2;
}
Although MyThread1 and MyThread2 use a critical section to protect the shared data m_Data, I still suspend MyThread1 before accessing the shared data, to prevent any possible conflicts.
The problem is:
(1) On the first invocation of MyFun2, everything is OK, and the return value of MyFun2 (that is, nResult2) is 1, which is expected.
(2) On the second, third and fourth invocations of MyFun2, the operations in MyFun2 execute successfully, but the return value of MyFun2 (that is, nResult2) is a random value instead of the expected value 1. I tried using the debugger to trace into MyFun2 and confirmed that the last return statement really does return a value of 1, but the caller receives a random value instead of 1 when inspecting nResult2.
(3) After the fourth invocation of MyFun2, on returning to the statement following the call, I always get a “buffer overrun detected” error, whatever the next statement is.
I think this looks like stack corruption, so I tried some tests:
I confirmed that the /GS (stack security check) feature in the compiler is ON.
If MyFun2 is invoked after MyFun1 in MyThread1 has completed, then everything is OK.
In debug mode, the line in MyFun2 that reads the shared data m_Data does not cause any errors or exceptions, and neither does the line in MyFun1 that writes it.
So, how can I solve this problem?
Thank you!
I suppose at this line
Suspend MyThread1 to prevent conflict.
you are using the SuspendThread() function. Here is what its documentation says:
This function is primarily designed for use by debuggers. It is not intended to be used for thread synchronization. Calling SuspendThread on a thread that owns a synchronization object, such as a mutex or critical section, can lead to a deadlock if the calling thread tries to obtain a synchronization object owned by a suspended thread. To avoid this situation, a thread within an application that is not a debugger should signal the other thread to suspend itself. The target thread must be designed to watch for this signal and respond appropriately.
So, in short: don't use it. Critical sections and other synchronization objects do their job just fine.
Never use SuspendThread!!! NEVER!
SuspendThread is only meant for debugging purposes.
The reason is simple: you don't know where you are suspending the thread. It may be at just the moment when the thread holds a resource that you want to use. A bunch of CRT functions also use thread synchronisation internally.
Just use critical sections or mutexes.
Just see the simple sample here: http://blog.kalmbachnet.de/?postid=6 and here
http://blog.kalmbachnet.de/?postid=16
Since this is a Windows program, you could use a Windows mutex or semaphore together with WaitForSingleObject when reading or writing the shared data.
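To make that concrete, here is a rough sketch of the two exported functions relying on a single critical section and no SuspendThread/ResumeThread at all (the names follow the question; the critical section is assumed to be initialized once, e.g. in DllMain on DLL_PROCESS_ATTACH):

#include <windows.h>

static CRITICAL_SECTION g_csData;   // InitializeCriticalSection(&g_csData) once at startup

ULONGLONG WINAPI MyFun1(LPVOID *lpCallbackFun1)
{
    EnterCriticalSection(&g_csData);
    // ... write m_Data ...
    LeaveCriticalSection(&g_csData);
    return 1;
}

ULONGLONG WINAPI MyFun2(LPVOID *lpCallbackFun2)
{
    EnterCriticalSection(&g_csData);
    // ... read m_Data ...
    LeaveCriticalSection(&g_csData);
    return 1;
}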

How do I safely share a variable with a thread that is in a different compilation unit?

In the structure of my program I've divided "where it gets called from" and "what gets done" into separate source files. As a matter of practicality, this allows me to compile the program as standalone or include it in a DLL. The code below is not the actual code but a simplified example that makes the same point.
There are three interacting components here: the kernel-mode program that loads my DLL, the DLL and its source files, and the utility program with its source, which is maintained separately.
In the DLL form, the program is loaded as a thread. According to the kernel-mode application vendor's documentation, I lose the ability to call Win32 API functions after the initialization of the kernel program, so I create the thread as an active thread (as opposed to using CREATE_SUSPENDED, since I couldn't wake it later).
I have it monitor a flag variable so that it knows when to do something useful through an inelegant but functional:
while ( pauseThreadFlag ) Sleep(1000);
The up to 1 second lag is acceptable (the overall process is lengthy, and infrequently called) and doesn't seem to impact the system.
In the thread source file I declare the variable as
volatile bool pauseThreadFlag = true;
Within the DLL source file I've declared
extern volatile bool pauseThreadFlag;
and when I am ready to have the thread execute, in the DLL I set
pauseThreadFlag = false;
I've had some difficulty in declaring std::string objects as volatile, so instead I have declared my parameters as global variables within the thread's source file and have the DLL call setters which reside in the thread's source. These strings would have been parameters if I could instantiate the thread at will.
(Missing from all of this is locking the variable for thread safety, which is my next "to do")
This strikes me as a bad design ... it's functional but convoluted. Given the constraints that I've mentioned is there a better way to go about this?
I was thinking that a possible revision would be to use the LPVOID lpParams argument given at thread creation to hold pointers to the string objects (even though the strings will be empty when the thread is created) and access them directly from the thread, eliminating the declarations, setters, etc. in the thread's source altogether. If this works, the pause flag could also be reached the same way and the extern declarations eliminated (though I think it still needs to be declared volatile to hint the optimizer).
If it makes any difference, the environment is Visual Studio 2010, C++, target platform Win32 (XP).
Thanks!
If all components are running in kernel mode you will want to take a look at KeInitializeEvent, KeSetEvent, KeResetEvent and KeWaitForSingleObject. These all work in a similar fashion to their user mode equivalents.
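For example, a rough kernel-mode sketch of such a handshake (assuming the KEVENT lives in non-paged memory, e.g. as a global in the driver, that it is initialized before any thread waits on it, and that the waiter runs at PASSIVE_LEVEL):

#include <ntddk.h>

KEVENT g_InitDone;

VOID InitializerSide(VOID)
{
    KeInitializeEvent(&g_InitDone, NotificationEvent, FALSE);
    // ... perform the initialization ...
    KeSetEvent(&g_InitDone, IO_NO_INCREMENT, FALSE);
}

VOID WaitingSide(VOID)
{
    // Blocks until the initializer signals the event.
    KeWaitForSingleObject(&g_InitDone, Executive, KernelMode, FALSE, NULL);
}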
I ended up removing the struct and replacing it with an object that encapsulates all the data. It's a little hideous, being filled with getters and setters, but in this particular case I'm using the access methods to make sure that locks are properly set/unset.
Using a void cast pointer to this object passed the object correctly and it seems quite stable.
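For what it's worth, a rough sketch of that shape (the names are assumptions; the access methods take an internal critical section so callers on either side don't race):

#include <windows.h>
#include <string>

class ThreadContext
{
public:
    ThreadContext() : paused_(true) { InitializeCriticalSection(&cs_); }
    ~ThreadContext() { DeleteCriticalSection(&cs_); }

    void SetPaused(bool paused)             { Lock l(cs_); paused_ = paused; }
    bool IsPaused()                         { Lock l(cs_); return paused_; }
    void SetParameter(const std::string& p) { Lock l(cs_); param_ = p; }
    std::string GetParameter()              { Lock l(cs_); return param_; }

private:
    // Small RAII helper so every accessor locks and unlocks consistently.
    struct Lock
    {
        explicit Lock(CRITICAL_SECTION& cs) : cs_(cs) { EnterCriticalSection(&cs_); }
        ~Lock() { LeaveCriticalSection(&cs_); }
        CRITICAL_SECTION& cs_;
    };

    CRITICAL_SECTION cs_;
    bool paused_;
    std::string param_;
};

// Thread entry point: the void* passed at thread creation is the ThreadContext.
DWORD WINAPI WorkerThread(LPVOID lpParams)
{
    ThreadContext* ctx = static_cast<ThreadContext*>(lpParams);
    while (ctx->IsPaused())
        Sleep(1000);                        // same one-second poll as in the question
    // ... do the real work using ctx->GetParameter() ...
    return 0;
}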

Injecting thread with codecave

Using the 'codecave' technique to inject code into another process, is it possible to inject code that creates a new thread (and also inject the code for that new thread) and let that thread execute in parallel with the target process's main thread?
I can manage this with DLL injection, but I want to know if it is possible with just pure code injection.
The intention is first of all to learn about different injection techniques, but in the end to create a heartbeat feature for arbitrary processes in order to supervise execution (high availability). Windows is the target OS and the language is C/C++ (with inline ASM when required).
Thanks.
There is the CreateRemoteThread function.
When using a DLL injection loader such as Winject (the one that calls CreateRemoteThread), it is very easy to create threads that remain until the target process closes.
Just create the thread within DllMain:
void run_thread(void* arg)
{
    // do stuff until the process terminates
}

// _beginthread requires <process.h>
BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
{
    if (ul_reason_for_call == DLL_PROCESS_ATTACH)
    {
        HANDLE handle = (HANDLE)_beginthread(run_thread, 0, NULL);
    }
    return TRUE;
}
Sure, but you would have to also inject the code for the remote thread into the process (e.g. a function). Injecting an entire function into a remote process is a pain because there is no clear-cut way to determine the size of a function. This approach would be far more effective if the injected code was small, in which case you would just inject a short assembly stub, then call CreateRemoteThread.
Really though, what would be the benefit of doing this over just straight-up DLL injection? Your 'heartbeat' feature could be implemented just as easily with an injected DLL (unless someone is going to tell me there's significant overhead?).
The problem is that even if you inject your code into the process, it will still not run unless you create a thread at the start of your injected code. Typically, to do code injection you would inject a full DLL. One of the popular ways of injecting a DLL is to:
Get a handle to the target process (EnumProcesses, CreateToolhelp32Snapshot/Process32First/Process32Next, FindWindow/GetWindowThreadProcessId/OpenProcess, etc.)
Allocate memory in the target process that is the same length as a string pointing to the path of your DLL (VirtualAllocEx)
Write a string pointing to the path of your DLL to this allocated memory (WriteProcessMemory)
Create a remote thread at the LoadLibrary routine (get the address by GetModuleHandle/GetProcAddress) and pass the pointer to the allocated memory as a parameter (CreateRemoteThread)
Release the allocated memory (VirtualFreeEx)
Close any opened handles (process handles, snapshot handles, etc. with CloseHandle)
Unless there is a particular reason you want to avoid this method, it is by far preferable to copying in the code yourself (WriteProcessMemory and probably setting up page protections with VirtualProtectEx). Without loading a library you will need to manually map variables, relocate function pointers and do all the other work LoadLibrary does.
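A rough sketch of those steps (error handling omitted; the process ID and DLL path are supplied by the caller and are placeholders here):

#include <windows.h>
#include <cstring>

bool InjectDll(DWORD pid, const char* dllPath)
{
    HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!hProcess)
        return false;

    // Allocate memory in the target and write the DLL path into it.
    SIZE_T len = strlen(dllPath) + 1;
    LPVOID remote = VirtualAllocEx(hProcess, NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    WriteProcessMemory(hProcess, remote, dllPath, len, NULL);

    // LoadLibraryA lives in kernel32.dll, which is normally mapped at the same
    // base address in every process, so the local address is valid remotely.
    LPTHREAD_START_ROUTINE loadLib = (LPTHREAD_START_ROUTINE)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA");

    // The remote thread runs LoadLibraryA(dllPath) inside the target process.
    HANDLE hThread = CreateRemoteThread(hProcess, NULL, 0, loadLib, remote, 0, NULL);
    WaitForSingleObject(hThread, INFINITE);

    // Release the allocated memory and close the handles.
    VirtualFreeEx(hProcess, remote, 0, MEM_RELEASE);
    CloseHandle(hThread);
    CloseHandle(hProcess);
    return true;
}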
You asked earlier about the semantics of CreateRemoteThread. It will create a thread in another process which will keep going until it terminates itself or something else does (someone calls TerminateThread or the process terminates and calls ExitProcess, etc.). The thread will run as parallel in the same way a thread that was legitimately created would (context switching).
You can also use the RtlCreateUserThread function to create the remote thread.

How to solve the DLL function call problem

I have a couple of queries with respect to DLLs:
1) If I load the DLL at run time, I guess the DLL will be in a separate thread, right?
2) If I call a function present in the DLL and that function takes a long time to return a value, how can I make my application thread wait until the DLL's function returns its value?
How can I solve the second problem?
Your assumption is incorrect.
If you load a DLL, and then call one of its functions, the call is made synchronously, just like any other function call.
There is absolutely no reason why the DLL should be loaded in another thread. You may do it, of course, but this is not the default.
1) No. The DLL is just code. The code in the DLL is called in the context of whatever threads you create.
2) As a result, your application will wait for the DLL's function to complete.
A DLL can create worker threads as a result of your application calling into it. However, you cannot call directly into a thread; any call your code makes will always happen synchronously on the current thread.
Are you using qt threads? Otherwise I cannot understand why you would use the "qt" tag.
As for your problem, it seems to me that you have to create another thread which will call the function contained in the DLL. When that thread exits, you can assume you have the function's result.
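If your compiler has C++11 support, here is a minimal sketch of that idea (the function-pointer type and names are assumptions, standing in for whatever you obtained via GetProcAddress):

#include <windows.h>
#include <future>

// Assumed signature of the slow exported function.
typedef ULONGLONG (WINAPI *SlowDllFn)(LPVOID);

ULONGLONG CallAndWait(SlowDllFn fn, LPVOID arg)
{
    // Run the DLL call on a worker thread...
    std::future<ULONGLONG> result =
        std::async(std::launch::async, [fn, arg] { return fn(arg); });

    // ... the calling thread could do other work here ...

    return result.get();   // blocks until the DLL function has returned
}

(With Qt, QtConcurrent::run and a QFuture would serve the same purpose.)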
You can also handle DLL_THREAD_ATTACH in the DllMain switch. You must call the function from the thread you want to slow down, or suspend the thread before the function call and resume it afterwards.

C++ setTimout function?

What's the cheapest way to get a JavaScript-like setTimeout function in C++?
I would need this:
5000 milliseconds from now, start function xy (no parameters, no return value).
The reason for this is that I need to initialize COM for text-to-speech, but when I do it on DLL attach, it crashes.
It works fine however if I do not call CoInitialize from dllmain.
I just need to call CoInitialize and CoCreateInstance, and then use the instance in other functions. I can catch the uninitialized instance by checking for NULL, but I need to initialize COM - without crashing.
Calling CoInitialize() from DllMain() is a bad thing to do; there are LOTS of restrictions on what you can do from DllMain(); see here: http://blogs.msdn.com/larryosterman/archive/2004/04/23/118979.aspx
Even if it DID work reliably, initialising COM from within DllMain() isn't an especially nice thing to do: COM is initialised per thread, and you don't know what the application itself wants to do with regard to COM apartments for the thread you want to initialise COM on... This means that you might initialise COM one way and then the application might need to initialise it another way and fail because of what your DLL had done...
You COULD spin up a thread in DllMain() as long as you are careful (see here http://blogs.msdn.com/oldnewthing/archive/2007/09/04/4731478.aspx) and then initialise COM on that thread and do all your COM related work on that thread. You would need to marshal whatever data you need to use COM with from whatever thread you're called on to your own COM thread and make the COM call from there...
And then there's the question of whether the instance of the COM object that you create (if you could reliably do what you want to do) would be usable from the thread that was calling into your DLL to make the call... Do you understand how you'd have to marshal the interface pointer if required, etc.?
Alternatively you should expose YOUR functionality via COM and then have the application load your DLL as a COM DLL and everything will work just fine. You can specify the apartment type that you need and the app is responsible for setting things up for you correctly.
So, in summary, you don't need the answer to your question.
When it is OK to stop execution for 5 seconds entirely, you can use the Winapi Sleep function. Beware that the documentation of Sleep mentions some possible problems with CoInitialize and messages.
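If you do still want a setTimeout-style helper rather than blocking with Sleep, a minimal sketch (C++11, names assumed) is a detached worker thread that sleeps and then calls the function:

#include <chrono>
#include <functional>
#include <thread>

void setTimeout(std::function<void()> fn, int delayMs)
{
    std::thread([fn, delayMs] {
        std::this_thread::sleep_for(std::chrono::milliseconds(delayMs));
        fn();   // e.g. a function that calls CoInitialize and then CoCreateInstance
    }).detach();
}

Note that, as the answer above explains, COM would then be initialised on that worker thread, so the COM-related work should stay on that thread too.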