The question is in the title: how can I give the callback for SetWinEventHook some context information? Usually, functions that take callbacks also take a context parameter that they pass on to the callback, but this one doesn't.
Using a global (including thread_local) variable, as suggested in SetWinEventHook with custom data for example, is almost certainly unacceptable.
There may be more than one caller to the utility function that sets the WinEvent hook, so a global is out. thread_local may work, but it also may not: what if the callback calls the same utility function and sets yet another hook? Both will get called the next time I pump messages. Using thread_local would require me to define a contract forbidding a single thread from setting more than one WinEvent hook through me, and require my callers to abide by it. That's a pretty arbitrary and artificial limitation, and I'd prefer not to impose it.
I'm reimplementing in C++ a little PoC I had in C#. There I could simply use capturing lambdas, and they would convert to function pointers that I could pass via P/Invoke to SetWinEventHook. C++ unfortunately doesn't have that feature (probably because it requires allocating WX memory and dynamically generating a little code, which some platforms don't allow), but perhaps there is a platform-specific way to do this?
I seem to remember that ATL did something like that in the past, and I guess I can probably implement something like this too, but maybe there's a better way? If not, perhaps somebody has already written something like that?
(Another valid solution would be a clever-enough data structure and algorithm to solve the reentrancy problem.)
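For example, something along these lines might work (a rough sketch, all names mine): the callback receives the HWINEVENTHOOK it was registered with, so that handle can key a per-thread map of contexts, allowing any number of hooks per thread.

#include <windows.h>
#include <functional>
#include <map>

using HookCallback = std::function<void(HWND, DWORD)>;

// one map per thread: with WINEVENT_OUTOFCONTEXT the callback runs on the
// thread that called SetWinEventHook, while that thread pumps messages
static thread_local std::map<HWINEVENTHOOK, HookCallback> g_hooks;

void CALLBACK HookProc(HWINEVENTHOOK hook, DWORD event, HWND hwnd,
                       LONG idObject, LONG idChild,
                       DWORD idEventThread, DWORD dwmsEventTime)
{
    auto it = g_hooks.find(hook);   // the hook handle is the context key
    if (it != g_hooks.end())
        it->second(hwnd, event);
}

HWINEVENTHOOK AddHook(DWORD eventMin, DWORD eventMax, HookCallback cb)
{
    HWINEVENTHOOK hook = SetWinEventHook(eventMin, eventMax, nullptr,
                                         HookProc, 0, 0,
                                         WINEVENT_OUTOFCONTEXT);
    if (hook)
        g_hooks[hook] = std::move(cb);   // entry added before any messages
                                         // are pumped, so no race here
    return hook;
}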
Related
What exactly is an event table and what does it do? I am asking regarding wxWidgets, but maybe it's a general GUI programming concept, so please correct me on that.
To keep it simple, the event table tells which function to call when which event occurs.
However, it is an old way of mapping events to functions.
It is no longer recommended because it isn't very flexible and uses macro tricks to do its job.
Macros themselves are generally discouraged in C++.
Unless you must stick to C++03, you should no longer use event tables.
Instead, you should use the Bind() method for new wxWidgets projects in C++11 or later.
Bind() is more flexible and doesn't use macros.
You will find this recommendation in the wxWidgets tutorials, too.
You must still be able to read and understand old event tables, though, because many samples haven't been updated for ages.
An event table tells wxWidgets to map events to member functions. It is defined in a .cpp file with macros such as wxBEGIN_EVENT_TABLE() and wxEND_EVENT_TABLE().
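A minimal sketch (the class and handler names are hypothetical):

#include <wx/wx.h>

// in the header
class MyFrame : public wxFrame
{
public:
    MyFrame();

private:
    void OnQuit(wxCommandEvent& event);
    wxDECLARE_EVENT_TABLE();
};

// in the .cpp file
wxBEGIN_EVENT_TABLE(MyFrame, wxFrame)
    EVT_MENU(wxID_EXIT, MyFrame::OnQuit)
wxEND_EVENT_TABLE()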
In addition to the other answers, I'd like to say that if you're starting to learn wxWidgets, you should know that event tables are a legacy way of handling events and that using Bind() is the preferred way of doing it in new code.
In particular, Bind() is much less "magic", and doesn't use any macros.
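For comparison, the same hypothetical handler from the sketch above hooked up with Bind() in the constructor; no macros involved:

MyFrame::MyFrame() : wxFrame(nullptr, wxID_ANY, "Demo")
{
    Bind(wxEVT_MENU, &MyFrame::OnQuit, this, wxID_EXIT);
}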
I need to call C# code from a native C/C++ .dll asynchronously.
While searching for how to do it, I found that I could create a C# delegate and get a function pointer from it, which I would use inside my native code.
The problem is that my native code needs to run asynchronously, i.e. in a separate thread created from the native code, which means that the native call made from C# will return to C# before the delegate is invoked.
I read in another SO question that someone had trouble with this, despite what MSDN says, because his delegate was garbage collected before it got called, due to the asynchronous nature of his task.
My question is: is it really possible to call a C# delegate using a function pointer from native code running in a thread that was created inside the native code? Thank you.
No, this is a universal bug and not specific to asynchronous code. It is just a bit more likely to bite in your case, since you never have the machinery behind [DllImport] to keep you out of trouble. I'll explain why this goes wrong, maybe that helps.
A delegate declaration for a callback method is always required; that's how the CLR knows how to make the call to the method. You often declare it explicitly with the delegate keyword, and you might need to apply the [UnmanagedFunctionPointer] attribute if the unmanaged code is 32-bit and assumes the function was written in C or C++. The declaration is important: it's how the CLR knows how the arguments you pass from your native code need to be converted to their managed equivalents. That conversion can be intricate if your native code passes strings, arrays or structures to the callback.
The scenario is heavily optimized in the CLR, important because managed code inevitably runs on an unmanaged operating system. There are a lot of these transitions; you can't see them because most of them happen inside .NET Framework code. This optimization involves a thunk, a sliver of auto-generated machine code that takes care of making the call to the foreign method or function. Thunks are created on-the-fly, whenever you make the interop call that uses the delegate. In your case, when C# code passes the delegate to your C++ code. Your C++ code gets a pointer to the thunk, a function pointer; you store it and make the callback later.
"You store it" is where the problem starts. The CLR is unaware that you stored the pointer to the thunk, the garbage collector cannot see it. Thunks require memory, usually just a few handful of bytes for the machine code. They don't live forever, the CLR automatically releases the memory when the thunk is no longer needed.
"Is no longer needed" is the rub, your C++ code cannot magically tell the CLR that it no longer is going to make a callback. So the simple and obvious rule it uses is that the thunk is destroyed when the delegate object is garbage collected.
Programmers forever get in trouble with that rule. They don't realize that the lifetime of the delegate object is important. Especially tricky in C#, which has a lot of syntax sugar that makes it very easy to create delegate objects. You don't even have to use the new keyword or name the delegate type, just using the target method name is enough. The lifetime of such a delegate object is only the pinvoke call. After the call completes and your C++ code has stored the pointer, the delegate object isn't referenced anywhere anymore, so it is eligible for garbage collection.
Exactly when that happens, and the thunk is destroyed, is unpredictable. The GC runs only when needed. Could be a nanosecond after you made the call, that's unlikely of course, could be seconds. Most tricky, could be never. That happens in a typical unit test that doesn't otherwise call GC.Collect() explicitly; unit tests rarely put enough pressure on the GC heap to induce a collection. It is a bit more likely when you make the callback from another thread: implicit in that is that other code is running on other threads, which makes it more likely that a GC is triggered. You'll discover the problem quicker. Nevertheless, the thunk is going to get destroyed in a real program sooner or later. Kaboom when you make the callback in your C++ code after that.
So, rock-hard rule: you must store a reference to the delegate to avoid the premature-collection problem. Very simple to do, just store it in a variable in your C# program that is declared static. Usually good enough; you might want to set it explicitly back to null when the C# code tells your C++ code to stop making callbacks, unlikely in your case. Very occasionally, you'd want to use GCHandle.Alloc() instead of a static variable.
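To make the failure concrete, here is a rough sketch of what the native side of such a scenario looks like (all names hypothetical). Note that nothing in it keeps the managed delegate alive, which is exactly why the C# side must hold a reference, in a static variable or via GCHandle.Alloc(), for as long as callbacks can arrive:

#include <thread>

typedef void (__stdcall *Callback)(int result);

static Callback g_callback = nullptr;   // a thunk pointer the GC can't see

extern "C" __declspec(dllexport) void RegisterCallback(Callback cb)
{
    g_callback = cb;
}

extern "C" __declspec(dllexport) void StartWork()
{
    std::thread([] {
        // ... long-running work ...
        if (g_callback)
            g_callback(42);   // kaboom here if the delegate was collected
    }).detach();
}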
Standard data types or Windows data types?
I would use Windows data types to make my code consistent with the Win32 API.
On the other hand, I would use standard data types to get more protection from coding errors.
Using nullptr instead of NULL will guard against the bad style of passing NULL for a parameter that doesn't actually take a pointer type.
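A tiny illustration (SetFlags is a made-up function; assuming the usual definition of NULL as 0):

#include <cstddef>

void SetFlags(int flags);   // hypothetical: takes an int, not a pointer

void Demo()
{
    SetFlags(NULL);      // compiles silently: NULL is just the integer 0
    SetFlags(nullptr);   // error: std::nullptr_t doesn't convert to int
}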
Imagine a Win32 API function that for some nonsense reason takes an LPTSTR, but actually treats it as an LPCTSTR. You have a std::string, and do (LPTSTR)(s.c_str()). All is fine until you switch to the W versions of the Win32 API functions. The program still compiles because the cast succeeds, but something bad will probably happen at runtime. If you had done (char*)(s.c_str()), the compiler would have caught this.
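In code (SomeWinApiFunc stands in for the hypothetical function; assume a build where TCHAR is wchar_t):

#include <windows.h>
#include <string>

void SomeWinApiFunc(LPTSTR buffer);   // the hypothetical function above

void Demo()
{
    std::string s = "hello";
    SomeWinApiFunc((LPTSTR)(s.c_str()));   // compiles in ANSI and Unicode
                                           // builds alike; garbage at runtime
    SomeWinApiFunc((char*)(s.c_str()));    // error in a Unicode build:
                                           // char* doesn't convert to LPTSTR
}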
Using standard data types seems safer, but also feels like going to a 'wear-white' party in all black.
What should the deciding factor be here?
This is highly dependent on the project you are working on. Try to follow the already established coding style.
My favourite way of doing things:
for methods I declare, I use normal data types (void*, char*, etc.), and when I call them I also use these.
however, for every interaction I have with the Win32 API, I either declare the variable to be of Win32 API style (LPVOID, LPTSTR, ...) or, if I get the variable from one of my functions (which use "normal" data types) but it needs to go into a Win32 API call, I explicitly typecast it to the required type. Sometimes this also helps to see whether I have some conflicting types; see the sketch below.
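A small sketch of that convention (the function names are made up):

#include <windows.h>

char* GetTitleBuffer();   // one of my own functions: "normal" data types

void UpdateTitle(HWND hwnd)
{
    char* title = GetTitleBuffer();
    // the typecast happens exactly at the Win32 boundary, keeping the
    // conversion (and any conflicting types) visible:
    SetWindowTextA(hwnd, static_cast<LPCSTR>(title));
}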
Hmm, I think the const-correctness of the WinAPI is quite good. Do you have any examples?
But to your question:
I would recommend using the stdint types for your own stuff, but when interfacing with the WinAPI, use its types and convert to and from them, to avoid difficulties with type widths and confusion about what, for example, WORD means.
Windows vs C++ types is a completely orthogonal issue to Unicode-safety.
If you have std::string s, then you can do const_cast<LPSTR>(s.c_str()) just fine without sacrificing the compiler error when you compile as Unicode.
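Concretely (a sketch assuming a Unicode build and some valid HWND hwnd):

#include <windows.h>
#include <string>

void Demo(HWND hwnd)
{
    std::string s = "title";
    SetWindowText(hwnd, const_cast<LPSTR>(s.c_str()));
    // still an error in a Unicode build: SetWindowText resolves to
    // SetWindowTextW, which wants an LPWSTR, so the narrow/wide mismatch
    // is caught even though the const_cast itself is fine
}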
I agree with @fritzone as a practical solution. Mine is slightly different.
I have a strong preference for wrapping external APIs. I will write a layer of code inside which the WinApi is used, and always use Windows types within that layer. This will be the only part of the application that includes Windows.h. Calls into those layer functions will use types that I choose, and (as for @fritzone) I will explicitly cast values to Windows types as needed. Every cast is a mistake waiting to happen.
The other advantage of layering is that it's a great place to put tracing code. The WinApi can be very chatty and hard to step through in a debugger, so I like to have tracing code for every WinApi function, that can be switched on if needed.
That's harder to do with MFC or ATL, which tend to become rather pervasive. It's been my preference across a wide variety of different external APIs.
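A minimal sketch of what such a layer can look like (names are mine; assume this is the only .cpp that includes Windows.h):

#include <windows.h>
#include <string>
#include <cstdio>

static bool g_traceWinApi = false;   // flip on to trace every WinApi call

bool SetTitle(void* windowHandle, const std::string& title)
{
    if (g_traceWinApi)
        std::printf("SetWindowTextA(%p, \"%s\")\n", windowHandle, title.c_str());
    // the only place a cast to a Windows type happens:
    return SetWindowTextA(static_cast<HWND>(windowHandle), title.c_str()) != 0;
}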
DirectX 9 / C++
Do you declare the d3ddevice global to your entire app/game, or do you pass it into the classes that require it?
What is the usual way?
My understanding (and this may be wrong) is that if you declare it globally, every class and function that pulls in the header declaring that global variable gets burdened by it after compiling?
I could be more specific about my question and my application, but I'm just looking for the typical way.
I know how to set up the d3ddevice etc.; it's just a question of what is best.
I would recommend you wrap everything within a class and never put anything in global scope, because global variables can be accessed from anywhere, and that can make it very hard to keep track of a variable and who is and isn't using it.
Little bit late to the party here, but I also just recently stumbled into this same design question. It's a little surprising to me that there isn't more talk about it on the internet. I even tried perusing random github libraries to see if I could glean how others did it. Didn't find much. I also have an example of when you can't declare the d3d device as a global/static object. For my game/engine, I have a dll which serves as my engine framework. This dll is consumed by both an editor, and a game client (what people will use to play my game). This allows both things (and other applications in the future if desired) to access all of the world objects, math code, collections, etc.
With that system, I can't think of a way to make the Device object static and still use it in both the editor and the client. If this were just a game client, or just an editor, then sure, it would be possible. Instead, I've pretty much decided to bite the bullet and pass the Device to whatever needs it. One example is a class that generates vertices at runtime; I need a pointer to the Device for rebuilding it.
I really just wanted to post this here, because I've been thinking about it for most of the day, and it really seems like this is the best way to handle it. Yeah, it sucks to have to pass the Device to nearly everything. But there's not really anything you can do about it.
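For illustration, a sketch of the shape this takes (the class name is made up):

#include <d3d9.h>

class VertexStreamer
{
public:
    // the device is passed in rather than reached for globally
    explicit VertexStreamer(IDirect3DDevice9* device) : m_device(device) {}

    void Rebuild()
    {
        // recreate the vertex buffers at runtime through m_device, e.g.
        // m_device->CreateVertexBuffer(...);
    }

private:
    IDirect3DDevice9* m_device;   // non-owning; the app controls its lifetime
};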
I'm writing an NPAPI plugin in C++ on Windows. When my plugin's instantiated, I want to pass it some private data from my main application (specifically, I want to pass it a pointer to a C++ object). There doesn't seem to be a mechanism to do this. Am I missing something? I can't simply create my object in the plugin instance, since it's meant to exist outside of the scope of the plugin instance and persists even when the plugin instance is destroyed.
Edit:
I'm using an embedded plugin in C++ via CEF. This means that my code is essentially the browser and the plugin. Obviously, this isn't the way standard NPAPI plugins behave, so this is probably not something that's supported by NPAPI itself.
You can't pass a C++ object to javascript; what you can do is pass an NPObject that is also a C++ object and exposes things through the NPRuntime interface.
See http://npapi.com/tutorial3 for more information.
You may also want to look at the FireBreath framework, which greatly simplifies things like this.
Edit: it seems I misunderstood your question. What you want is to be able to store data linked to a plugin instance. What you need is the NPP that is given to you when your plugin is created; the NPP has two members, ndata (netscape data) and pdata (plugin data). The pdata pointer is yours to control -- you can set it to point to any arbitrary value that you want, and then cast it back to the real type whenever you want to use it. Be sure to cast it back and delete it on NPP_Destroy, of course. I usually create a struct to keep a few pieces of information in it. FireBreath uses this and sends all plugin calls into a Plugin object instance so that you can act as though it were a normal object.
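A minimal sketch of that pattern (InstanceData is a made-up struct; error handling omitted):

#include "npapi.h"

struct InstanceData
{
    NPP npp;
    // ... whatever per-instance state you need ...
};

NPError NPP_New(NPMIMEType pluginType, NPP instance, uint16_t mode,
                int16_t argc, char* argn[], char* argv[],
                NPSavedData* saved)
{
    InstanceData* data = new InstanceData();
    data->npp = instance;
    instance->pdata = data;   // stash your context on the instance
    return NPERR_NO_ERROR;
}

NPError NPP_Destroy(NPP instance, NPSavedData** save)
{
    delete static_cast<InstanceData*>(instance->pdata);  // cast back, clean up
    instance->pdata = nullptr;
    return NPERR_NO_ERROR;
}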
Relevant code example from FireBreath:
https://github.com/firebreath/FireBreath/blob/master/src/NpapiCore/NpapiPluginModule_NPP.cpp#L145
Pay particular attention to NPP_New and NPP_Destroy; also pay particular attention to how the pdata member of the NPP is used.
This is also discussed in http://npapi.com/tutorial2
There is no way to do this via NPAPI, since the concept doesn't make sense in NPAPI terms. Even if you hack something up that passes a raw pointer around, that assumes everything is running in one process, so if CEF switches to the multi-process approach Chromium is designed around, the hack would break.
You would be better off pretending they are different processes, and using some non-NPAPI method of sharing what you need to between the main application and the plugin.