Injecting a DLL before Windows executes the target's TLS callbacks - C++

There's an app that uses TLS callbacks to remap its memory (via NtCreateSection/NtUnmapViewOfSection/NtMapViewOfSection) with the SEC_NO_CHANGE flag.
Is there any way to hook NtCreateSection before the target app uses it in its TLS callback?

You could use API Monitor to check whether it really is that function call and, if I understand you correctly, you want to modify its invocation. API Monitor allows you to modify the parameters on the fly. If just "patching" the value when the application accesses the API is enough, you could then use x64dbg to craft a persistent binary patch for your application. But this requires you to at least know, or get familiar with, basic x64/x86 assembly.

I have no idea what you're trying to achieve exactly, but if you're trying to execute setup code before the main() function is called (to set up hooks), you could use the constructor of a static object. You would basically construct an object before your main program starts:
// In a .cpp file (do not put this in a header, as that would create multiple static objects!)
#include <iostream>

class StaticInitializer {
public:
    StaticInitializer() {
        std::cout << "This will run before your main function...\n";
        /* This is where you would set up all your hooks */
    }
};
static StaticInitializer staticInitializer;
Beware, though: objects constructed this way may be constructed in any order, depending on the compiler, the order of translation units, etc. Also, some things might not be initialized yet at that point, so you might not be able to set up everything you want.
That might be a good starting point, but as I said, I'm not sure exactly what you're trying to achieve here, so good luck and I hope it helps a little.
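Building on the static-initializer idea above, a minimal sketch of installing the NtCreateSection hook might look like the following. It assumes the MinHook library for the detour, and it assumes your injected DLL is actually mapped before the target's TLS callbacks fire (which depends entirely on how the injection is done, e.g. creating the process suspended); the SEC_NO_CHANGE stripping is only an example of what the detour could do:

#include <windows.h>
#include <winternl.h>
#include <MinHook.h>   // assumption: MinHook is used for the detour

typedef NTSTATUS (NTAPI *NtCreateSection_t)(PHANDLE, ACCESS_MASK, PVOID,
                                            PLARGE_INTEGER, ULONG, ULONG, HANDLE);
static NtCreateSection_t g_origNtCreateSection = nullptr;

static NTSTATUS NTAPI HookedNtCreateSection(
    PHANDLE SectionHandle, ACCESS_MASK DesiredAccess, PVOID ObjectAttributes,
    PLARGE_INTEGER MaximumSize, ULONG PageProtection,
    ULONG AllocationAttributes, HANDLE FileHandle)
{
    // Illustration only: strip SEC_NO_CHANGE (0x00400000) before forwarding the call.
    AllocationAttributes &= ~0x00400000UL;
    return g_origNtCreateSection(SectionHandle, DesiredAccess, ObjectAttributes,
                                 MaximumSize, PageProtection,
                                 AllocationAttributes, FileHandle);
}

struct HookInstaller {
    HookInstaller() {
        if (MH_Initialize() != MH_OK)
            return;
        LPVOID target = reinterpret_cast<LPVOID>(
            GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtCreateSection"));
        if (target &&
            MH_CreateHook(target, reinterpret_cast<LPVOID>(&HookedNtCreateSection),
                          reinterpret_cast<LPVOID*>(&g_origNtCreateSection)) == MH_OK) {
            MH_EnableHook(MH_ALL_HOOKS);
        }
    }
};
static HookInstaller g_hookInstaller;  // runs when the injected DLL is loaded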

Related

Adding a custom C++ function in Chromium and calling it from the browser

I am trying to write a custom function in bootstrapper.cc under v8/src/init:
int helloworld() {
    return 0;
}
When I try to call it from the Chromium console, it is undefined.
Look around bootstrapper.cc to see how other built-in functions are installed. Examples you could look at include Array and DataView (or any other, really).
There is no way to simply define a C++ function of a given name and have that show up in JavaScript. Instead, you have to define a property on the global object; and the function itself needs to have the right calling convention, and process its parameters / prepare its return value appropriately so that it can be called from JavaScript. You can't just take or return an int.
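For illustration, here is a hedged sketch of the required calling convention using V8's public embedder API; inside bootstrapper.cc the built-ins are installed through V8's internal helpers instead, so treat the installation snippet below as illustrative rather than the exact code you would add there:

#include <v8.h>

// A JS-callable function: parameters arrive as JS values in `args`, and the
// return value must also be a JS value, so a raw int has to be wrapped.
void HelloWorld(const v8::FunctionCallbackInfo<v8::Value>& args) {
    v8::Isolate* isolate = args.GetIsolate();
    args.GetReturnValue().Set(v8::Integer::New(isolate, 0));
}

// Embedder-side installation on the global object template (illustrative):
//   global_template->Set(isolate, "helloworld",
//                        v8::FunctionTemplate::New(isolate, HelloWorld));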
If you find it inconvenient to work with C++, an alternative might be to develop a Chrome extension, which would allow you to use JavaScript for the implementation, and also remove the need to compile/maintain/update your own build (which is a lot of work!). There is no existing guide for how to extend V8 in the way you're asking, because that approach is so much work that we don't recommend doing it like this (though of course it is possible -- you just have to read enough of the existing C++ source to understand how it's done).

Protobuf with C++ plugins

We're working on a relatively large-scale C++ project where we chose at the very beginning to use protobuf as our Lingua Franca for stored and transmitted data.
We hit our first problem with end-of-program memory leaks: the metadata of the protobuf-generated classes is stored as static pointers, allocated during the first call to the constructor and never deallocated. We found a nice function provided by Google to do this clean-up:
google::protobuf::ShutdownProtobufLibrary();
Works fine, except there is no symmetric call, so once it's done you can no longer use anything; you have to do it exactly once in your executable. We did what any lazy developer would have done:
struct LIBPROTOBUF_EXPORT Resource
{
    ~Resource()
    {
        google::protobuf::ShutdownProtobufLibrary();
    }
};

bool registerShutdownAtExit()
{
    static Resource cleaner;
    return true;
}
And we added, in the generated protobuf .cc files, a:
static bool protobufResource = mlv::protobuf::registerShutdownAtExit();
It worked fine for several months.
Then we added support for dynamically loadable plugins (DLLs) in our tool, some of which use protobuf. Unloading the plugins worked fine, but when more than one of them used protobuf, we got a nice little crash when unloading the last one.
The reason: the last one to unload destroys the cleaner instance, which calls google::protobuf::ShutdownProtobufLibrary(), which in turn tries to destroy the metadata of already-unloaded types... CRASH.
Long story short: are we condemned to choose between lots of "normal" memory leaks and a crash when closing our tool? Has anyone experienced the same problem and found a better solution? Is my diagnosis wrong?
Like johnathon suggested in his comment, use a reference counting scheme, or register your destruction routine with atexit. Such a routine is free-standing, but that could work fine for your case.
Relevant documentation:
MSDN
POSIX
Edit: You're right, it's basically the same thing. Didn't think this through.
Another suggestion: use a global resource singleton for all protobuf-using plugins. It has a global destructor, which is only registered when a plugin first uses the protobuf library. Or just set a flag whenever protobuf is used, then call ShutdownProtobufLibrary only if the flag is set.
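For what it's worth, a minimal sketch of the reference-counting idea, assuming the counter lives in the host executable (or any module that outlives the plugins); the names protobuf_guard, AddRef and Release are purely illustrative:

#include <atomic>
#include <google/protobuf/stubs/common.h>   // declares ShutdownProtobufLibrary()

namespace protobuf_guard {

static std::atomic<int> g_refs{0};

// Each protobuf-using plugin calls AddRef() when it loads...
void AddRef()
{
    ++g_refs;
}

// ...and Release() just before it unloads. The shutdown runs while the last
// releasing plugin is still loaded, so the metadata it registered is still valid.
void Release()
{
    if (--g_refs == 0)
        google::protobuf::ShutdownProtobufLibrary();
}

}  // namespace protobuf_guard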

How to use multiple instances of the same lib

I have to extend a C program which controls a single drone (Parrot AR.Drone). The goal is to control a squadron of drones, but the API uses a huge number of global variables (drone IP, ports, drone status...). How can I instantiate the library several times without having "collisions" between instances?
The only solution I've found is to modify the API (which is open source) to call fork() somewhere in the main() function, and I'd like to avoid this...
I would recommend just wrapping the library in a service process. Then you can run one instance of the service process for each drone. Otherwise, fix the library to take a context parameter.
dlmopen can load the same library multiple times into separate link-map namespaces, but glibc only supports a limited number of namespaces (16, including the initial one), so this works for roughly 15 extra copies.
You can also create multiple copies of your library and load each of them.
Use macros to replace all of the global variables like this:
#define global1 ctx->global1
#define global2 ctx->global2
...
Then add a struct context *ctx argument to every function.
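A minimal sketch of that approach, with purely illustrative names (drone_context, drone_ip, seq and send_command are not part of the AR.Drone API):

struct drone_context {
    const char *ip;
    int         seq;   // command sequence number, formerly a global
};

// Defined *after* the struct so the member names above are left untouched.
#define drone_ip ctx->ip
#define seq      ctx->seq

// Every library function gains a "struct drone_context *ctx" parameter; its body
// keeps using the old global names, which now expand to fields of ctx.
int send_command(struct drone_context *ctx)
{
    ++seq;                                   // expands to ++ctx->seq
    return drone_ip != nullptr ? seq : -1;   // expands to ctx->ip / ctx->seq
}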
Alternatively, add _Thread_local (or __thread with old versions of gcc) to each global variable, then run each "instance" in its own thread so it naturally has its own copies of the globals available to it.
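A minimal sketch of the thread-local variant (shown in C++ with thread_local; in C the keyword is _Thread_local, or __thread with older GCC), again with illustrative names:

#include <thread>

thread_local int sequence_number = 0;   // was a plain global in the library

void fly_drone(const char* /*ip*/) {
    ++sequence_number;   // touches this thread's copy only
}

int main() {
    std::thread d1(fly_drone, "192.168.1.1");
    std::thread d2(fly_drone, "192.168.1.2");
    d1.join();
    d2.join();
}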

How can I create objects based on dump file memory in a WinDbg extension?

I work on a large application, and frequently use WinDbg to diagnose issues based on a DMP file from a customer. I have written a few small extensions for WinDbg that have proved very useful for pulling bits of information out of DMP files. In my extension code I find myself dereferencing c++ class objects in the same way, over and over, by hand. For example:
Address = GetExpression("somemodule!somesymbol");
// the symbol holds a pointer, so read it first
ReadMemory(Address, &addressOfObj, sizeof(addressOfObj), &cb);
// dereference once more to get the actual object address
ReadMemory(addressOfObj, &addressOfObj, sizeof(addressOfObj), &cb);
ULONG offset;
ULONG addressOfField;
GetFieldOffset("somemodule!somesymbolclass", "somefield", &offset);
ReadMemory(addressOfObj + offset, &addressOfField, sizeof(addressOfField), &cb);
That works well, but as I have written more extensions with greater functionality (and accessing more complicated objects in our application's DMP files), I have longed for a better solution. I have access to the source of our own application, of course, so I figure there should be a way to copy an object out of a DMP file and use that memory to create an actual object in the debugger extension that I can call functions on (by linking in DLLs from our application). This would save me the trouble of pulling things out of the DMP by hand.
Is this even possible? I tried obvious things like creating a new object in the extension, then overwriting it with a big ReadMemory directly from the DMP file. This seemed to put the data in the right fields, but it blew up when I tried to call a function. I figure I am missing something... maybe C++ pulls some vtable funkiness that I don't know about? My code looks similar to this:
SomeClass* thisClass = SomeClass::New();
ReadMemory(addressOfObj, &(*thisClass), sizeof(*thisClass), &cb);
FOLLOWUP: It looks like ExtRemoteTyped from EngExtCpp is possibly what I want? Has anyone successfully used it? I need to google up some example code, but am not having much luck.
FOLLOWUP 2: I am pursuing two different routes of investigation on this.
1) I am looking into ExtRemoteTyped, but it appears this class is really just a helper for the ReadMemory/GetFieldOffset calls (see the sketch after this list). Yes, it would help speed things up a lot, but it doesn't really help when it comes to recreating an object from a DMP file. The documentation is slim, though, so I might be misunderstanding something.
2) I am also looking into using ReadMemory to overwrite an object created in my extension with data from the DMP file. However, rather than using sizeof(*thisClass) as above, I was thinking I would only copy the data members and leave the vtables untouched.
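For reference, a hedged sketch of what ExtRemoteTyped usage looks like inside an EngExtCpp-based extension command; the module, class and field names are the same placeholders as above, and note that it still only reads raw data from the dump rather than giving you a callable C++ object:

#include <engextcpp.hpp>

EXT_COMMAND(dumpsomeobj,
            "Dump a somemodule!somesymbolclass instance",
            "{;e;address;Object address}")
{
    ULONG64 addr = GetUnnamedArgU64(0);
    // Cast the raw address to the typed object using symbol information.
    ExtRemoteTyped obj("(somemodule!somesymbolclass*)@$extin", addr);
    ULONG64 field = obj.Field("somefield").GetUlongPtr();
    Out("somefield = 0x%I64x\n", field);
}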
Interesting idea, but this would have a hope of working only on the simplest of objects. For example, if the object contains pointers or references to other objects (or vtables), those won't copy very well over to a new address space.
However, you might be able to get a 'proxy' object to work, where calling the proxy's methods makes the appropriate ReadMemory() calls to get the information. This sounds like a fair bit of work, and I'd think it would have to be more or less a custom set of code for each class you wanted to proxy. There's probably a better way to go about this, but that's what came to me off the top of my head.
I ended up just following my initial hunch, and copying over the data from the dmp file into a new object. I made this better by making remote wrapper objects like this:
class SomeClassRemote : public SomeClass
{
protected:
    SomeClassRemote(void);
    SomeClassRemote(ULONG inRemoteAddress);

public:
    static SomeClassRemote* New(ULONG inRemoteAddress);
    virtual ~SomeClassRemote(void);

private:
    ULONG m_Address;
};
And in the implementation:
SomeClassRemote::SomeClassRemote(ULONG inRemoteAddress)
{
    ULONG cb;
    m_Address = inRemoteAddress;
    // copy in all the data to the new object, skipping the virtual function tables
    ReadMemory(inRemoteAddress + 0x4, (PVOID)((ULONG)this + 0x4), sizeof(SomeClass) - 4, &cb);
}

SomeClassRemote::SomeClassRemote(void)
{
}

SomeClassRemote::~SomeClassRemote(void)
{
}

SomeClassRemote* SomeClassRemote::New(ULONG inRemoteAddress)
{
    SomeClassRemote* x = new SomeClassRemote(inRemoteAddress);
    return x;
}
That is the basics, but then I add specific overrides as necessary to grab more information from the DMP file. This technique allows me to pass these new remote objects back into our original source code for processing in various utility functions, because they are derived from the original class.
It sure SEEMS like I should be able to templatize this somehow... but there always seems to be SOME reason that each class is implemented SLIGHTLY differently; for example, some of our more complicated objects have a couple of vtables, both of which have to be skipped.
I know memory dumps have always been the way to get information for diagnosis, but with ETW it's a lot easier, and you get the information along with call stacks covering both system calls and user code. MS has been doing this for all their products, including Windows and VS.NET.
It is a non-intrusive way of debugging. I have done this kind of debugging for a long time, and now with ETW I am able to solve most customer issues without spending a lot of time inside the debugger. These are my two cents.
I approached something similar when hacking together a GDI leak tracer extension for WinDbg. I used an STL container for data storage in the client and needed a way to traverse the data from the extension. I ended up implementing the parts of the hash_map I needed directly on the extension side using ExtRemoteTyped, which was satisfactory but took me a while to figure out ;o)
Here is the source code.

How to mimic the "multiple instances of global variables within the application" behaviour of a static library but using a DLL?

We have an application written in C/C++ which is broken into a single EXE and multiple DLLs. Each of these DLLs makes use of the same static library (utilities.lib).
Any global variable in the utility static library will actually have multiple instances at runtime within the application. There will be one copy of the global variable per module (ie DLL or EXE) that utilities.lib has been linked into.
(This is all known and good, but it's worth going over some background on how static libraries behave in the context of DLLs.)
Now my question: we want to change utilities.lib so that it becomes a DLL. It is becoming very large and complex, and we wish to distribute it in DLL form instead of .lib form. The problem is that, for this one application, we wish to preserve the current behaviour whereby each application DLL has its own copy of the global variables within the utilities library. How would you go about doing this? Actually, we don't need this for all the global variables, only some; but it wouldn't matter if we got it for all.
Our thoughts:
There aren't many global variables within the library that we care about, so we could wrap each of them with an accessor that does some funky trick of trying to figure out which DLL is calling it. Presumably we could walk up the call stack and fish out the HMODULE for each return address until we find one that isn't in utilities.dll. Then we could return a different version depending on the calling DLL.
We could mandate that callers set a particular global variable (maybe also thread local) prior to calling any function in utilities.dll. The utilities DLL could then use this global variable value to determine the calling context.
We could find some way of loading utilities.dll multiple times at runtime. Perhaps we'd need to make multiple renamed copies at build time, so that each application DLL can have its own copy of the utilities DLL. This negates some of the advantages of using a DLL in the first place, but there are other applications for which this "static library" style behaviour isn't needed and which would still benefit from utilities.lib becoming utilities.dll.
You are probably best off simply having utilities.dll export additional functions to allocate and deallocate a structure that contains the variables, and then have each of your worker DLLs call those functions at runtime when needed, such as in the DLL_PROCESS_ATTACH and DLL_PROCESS_DETACH stages of DllMain(). That way, each DLL gets its own local copy of the variables, and can pass the structure back to utilities.dll functions as an additional parameter, as sketched below.
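A minimal sketch of that approach, with purely illustrative names (UtilState, UtilCreateState, UtilDestroyState and UtilDoWork are not an existing API):

// --- exported by utilities.dll ---
struct UtilState { int counter; /* ...the former globals... */ };
extern "C" __declspec(dllexport) UtilState* UtilCreateState() { return new UtilState{}; }
extern "C" __declspec(dllexport) void UtilDestroyState(UtilState* s) { delete s; }
extern "C" __declspec(dllexport) int UtilDoWork(UtilState* s) { return ++s->counter; }

// --- in each worker DLL ---
#include <windows.h>

static UtilState* g_utilState = nullptr;   // one copy per module, as with the .lib

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    // Kept minimal: utilities.dll is a link-time dependency, so it is already
    // loaded when this runs.
    switch (reason) {
    case DLL_PROCESS_ATTACH: g_utilState = UtilCreateState();  break;
    case DLL_PROCESS_DETACH: UtilDestroyState(g_utilState);    break;
    }
    return TRUE;
}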
The alternative is to simply declare the individual variables locally inside each worker DLL directly, and then pass them into utilities.dll as input/output parameters when needed.
Either way, do not have utilities.dll try to figure out context information on its own. It won't work very well.
If I were doing this, I'd factor out all stateful global variables - I would export a COM object or a simple C++ class that contains all the necessary state, and each DLL export would become a method on your class.
Answers to your specific questions:
You can't reliably do a stack walk like that - due to optimizations such as tail calls or frame-pointer omission (FPO), you cannot determine who called you in all cases. You'll find that your program works in debug, works mostly in release, and crashes occasionally.
I think you'll find this difficult to manage, and it also means your library can't be re-entered from other modules in your process - for instance, if you support callbacks or events into other modules.
This is possible, but you've completely negated the point of using DLLs. Rather than renaming, you could copy the DLL into distinct directories and load each copy via its full path.