TL;DR: How do I create a sandboxed AppDomain (configuring CAS) from a C++ app?
Long version:
I'm hosting the .NET CLR in a C++ app and everything is working fine... However, my AppDomain has full trust, and I'd like more granular control over what it can do (e.g. configuring PermissionSets), as I'll be loading unknown assemblies that could potentially cause damage.
This is the gist of it:
// Create instance (CLRCreateInstance)
// Get meta-host, CorRuntimeHost, etc.
// Start the CLR
// ...
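Spelled out, that elided setup is just the standard .NET 4 hosting sequence; roughly this (the runtime version string here is only an example):
#include <metahost.h>
#pragma comment(lib, "mscoree.lib")

ICLRMetaHost* pMetaHost = nullptr;
ICLRRuntimeInfo* pRuntimeInfo = nullptr;
ICorRuntimeHost* pCorRuntimeHost = nullptr;

// Get the meta-host and pick a v4 runtime.
CLRCreateInstance(CLSID_CLRMetaHost, IID_PPV_ARGS(&pMetaHost));
pMetaHost->GetRuntime(L"v4.0.30319", IID_PPV_ARGS(&pRuntimeInfo));

// Get ICorRuntimeHost (the interface that can create AppDomains) and start the CLR.
pRuntimeInfo->GetInterface(CLSID_CorRuntimeHost, IID_PPV_ARGS(&pCorRuntimeHost));
pCorRuntimeHost->Start();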
Eventually I have everything I need to create an AppDomain (please pretend that I'm actually handling exceptions, testing the HRESULTs from each of these calls, etc...):
pCorRuntimeHost->CreateDomainSetup(&spAppDomainSetupThunk);
spAppDomainSetupThunk->QueryInterface(IID_PPV_ARGS(&spAppDomainSetup));
spAppDomainSetup->put_ApplicationBase(_bstr_t(L"C:\\PretendThisIsNotHardCoded"));
spAppDomainSetup->put_ApplicationName(appDomainName);
pCorRuntimeHost->CreateDomainEx(appDomainName, spAppDomainSetupThunk, 0, &spAppDomainThunk);
spAppDomainThunk->QueryInterface(IID_PPV_ARGS(&spAppDomain));
// AppDomain ready to go, and full trust (at least on .NET 4)
Any ideas or code samples appreciated.
Code coverage for UWP is not directly supported. But my views and domain logic are in separate projects, so I migrated my domain logic library from UWP to .NET Standard 2.0 and referenced the Microsoft.Windows.SDK.Contracts NuGet package.
I created a new test project on .NET Core 3.1 that can consume the domain logic library, run tests, and produce code coverage results. So far, so good.
The problem is, whenever I test a unit of code that contains a control like a TextBox, I get the exception
The application called an interface that was marshalled for a different thread.
I have found many posts saying to use the active window's Dispatcher to run code on the UI thread; however, I do not have a UI thread or an active window.
Is it possible to create a UWP UI thread or instantiate a View/Window?
I'm thinking I might have to start wrapping all the UI thread classes I'm consuming so I can substitute a test dummy.
I have a Win32/MFC application that depends on two separate STA COM DLL servers that I created many years ago using C++/ATL. These are large DLL servers with multiple interfaces and are also successfully used in other contexts and client programs. Several years ago, I had to create 64-bit versions of these 32-bit servers, and my 32-bit MFC app needed to be able to use either the 32-bit or 64-bit version of the DLL COM server (chosen with a checkbox).
Because a 32-bit process can't load a 64-bit COM server DLL in-process, I worked around this by having the MFC app create the 64-bit servers in the system surrogate (DLLHOST.EXE) by replacing
CoCreateInstance(..., CLSCTX_INPROC_SERVER, ...)
with
CoCreateInstance(..., CLSCTX_LOCAL_SERVER | CLSCTX_ACTIVATE_64_BIT_SERVER, ...)
Some updates were required, like adding an interface to copy environment variables into the server process and set the server/surrogate's working directory (the surrogate starts in SYSTEM32), but the other interfaces were all remoteable. This all seems to work perfectly and I can now use the 32-bit and 64-bit servers interchangeably from the 32-bit app by flipping a switch.
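For reference, the out-of-process activation boils down to something like this (the CLSID, interface, and method names here are placeholders for my real server):
// Placeholder GUIDs/names standing in for the real server's CLSID and interface.
// The server's AppID has DllSurrogate set so DLLHOST.EXE can host it out-of-process.
CComPtr<IMyServer> spServer;
HRESULT hr = ::CoCreateInstance(CLSID_MyServer,
                                nullptr,
                                CLSCTX_LOCAL_SERVER | CLSCTX_ACTIVATE_64_BIT_SERVER,
                                IID_PPV_ARGS(&spServer));
if (SUCCEEDED(hr))
{
    // From here on, calls go through the normal proxy/stub marshaling.
    spServer->DoWork();
}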
There is, however, one problem that I haven't been able to solve: making the surrogate quickly terminate when the client releases the last interface. The surrogate hangs around for 3-5 seconds after all remote interfaces are released by the MFC client -- presumably an optimization, hoping the client will come back. If the MFC app re-launches the server with CoCreateInstance() during that 3-5 seconds, it reconnects to the same "dirty" surrogate. The server code is not serially re-usable (it packages up many thousands of lines of legacy ANSI "C" code with lots of static variables) so reconnecting to the same instance is just not possible.
I worked around this several years ago by having the startup interface return a COM error code indicating the server is waiting to be recycled (better than a crash). However, the servers are launched when the end user presses a toolbar button in the MFC app, so this means the user gets a message like "wait a few seconds and try again". That works, but the bad part is that every fresh launch attempt resets the 3-5 second counter that keeps the surrogate from exiting. And impatient users are complaining. I'll add this all works perfectly in-process, with CoFreeUnusedLibraries() working as expected.
I tried a number of things already -- everything short of coding an ExitProcess() in the server, which seems inappropriate. There seems to be no way to tell the surrogate that the application is complete and should not wait for more connections. The MS documentation claims omitting the RunAs attribute in the AppID might help (I had it set to "Interactive User"), but it didn't. It also mentions REGCLS_SINGLEUSE, but then says "Do not set REGCLS_SINGLEUSE or REGCLS_MULTIPLEUSE when you register a surrogate for DLL servers" and "REGCLS_SINGLEUSE and REGCLS_MULTIPLEUSE should not be used for DLL servers loaded into surrogates", and in any case I don't have control over the surrogate's class factory as far as I know.
It looks like COM+ might provide some control over recycling, as it seems to have a RecycleActivationLimit option that I might be able to set to 0, but I have no idea what it would take to convert this into a COM+ server.
The other possibility is to write a custom surrogate.
If there's no easy answer, I might just resort to greying out the button until the server vanishes -- but since I can't probe the server without extending its lifetime, I guess I could add a shared mutex and wait for it to vanish. Ugh.
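If I do go that route, the client-side check would be something like the sketch below. The mutex name is made up; the idea is that the server creates the named mutex at startup, and opening (and immediately closing) a handle from the client doesn't extend the surrogate's lifetime the way a COM call would.
// Poll until the surrogate's named mutex no longer exists, i.e. the process has exited.
// L"Global\\MyServerAlive" is a made-up name the server would CreateMutex at startup.
bool WaitForSurrogateExit(DWORD timeoutMs)
{
    const DWORD start = ::GetTickCount();
    for (;;)
    {
        HANDLE hMutex = ::OpenMutexW(SYNCHRONIZE, FALSE, L"Global\\MyServerAlive");
        if (hMutex == nullptr)
            return true;                   // mutex gone => surrogate has exited
        ::CloseHandle(hMutex);             // still alive; don't hold the handle open
        if (::GetTickCount() - start > timeoutMs)
            return false;                  // give up; keep the button greyed out
        ::Sleep(250);
    }
}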
Is RecycleActivationLimit somehow available to regular COM applications? Any other suggestions are most welcome.
I am trying to instrument .NET Core web applications running on .NET Core 3.1 using a CoreCLR profiler on Linux (CentOS 7).
I have set the environment variables CORECLR_PROFILER, CORECLR_ENABLE_PROFILING and CORECLR_PROFILER_PATH; my CoreCLR profiler DLL gets attached to dotnet.exe and it is getting the callbacks.
I am able to get all the callbacks, but when I inject code into the web application's method, the app crashes (dotnet.exe gets killed) because it couldn't find the injected function call.
I have created a helper assembly (.NET Standard 2.0) with the injected functions' bodies, signed it with a strong name, and installed it into the GAC. I also used DefineAssemblyRef(), DefineTypeRefByName() and DefineMemberRef() from the IMetaDataAssemblyEmit/IMetaDataEmit interfaces to reference the assembly and its class methods, and I also tried placing the .NET Standard DLL in the application folder. But the helper assembly is never loaded into the dotnet.exe process.
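For reference, the metadata setup in my profiler looks roughly like this (error handling omitted; the assembly name, version, public key token and the type/method names are placeholders for my real ones):
// pCorProfilerInfo is my ICorProfilerInfo* obtained in Initialize().
IMetaDataEmit* pEmit = nullptr;
IMetaDataAssemblyEmit* pAsmEmit = nullptr;
pCorProfilerInfo->GetModuleMetaData(moduleId, ofRead | ofWrite,
                                    IID_IMetaDataEmit, (IUnknown**)&pEmit);
pEmit->QueryInterface(IID_IMetaDataAssemblyEmit, (void**)&pAsmEmit);

// AssemblyRef to the strong-named helper assembly (placeholder name/version/token).
ASSEMBLYMETADATA asmMeta = {};
asmMeta.usMajorVersion = 1;
asmMeta.usMinorVersion = 0;
BYTE publicKeyToken[] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };
mdAssemblyRef asmRef;
pAsmEmit->DefineAssemblyRef(publicKeyToken, sizeof(publicKeyToken),
                            L"MyHelperAssembly", &asmMeta,
                            nullptr, 0, 0, &asmRef);

// TypeRef + MemberRef for the static void helper method called from the injected IL.
mdTypeRef typeRef;
pEmit->DefineTypeRefByName(asmRef, L"MyHelper.Probe", &typeRef);

COR_SIGNATURE sig[] = { IMAGE_CEE_CS_CALLCONV_DEFAULT, 0, ELEMENT_TYPE_VOID }; // static void ()
mdMemberRef memberRef;
pEmit->DefineMemberRef(typeRef, L"Enter", sig, sizeof(sig), &memberRef);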
Where should my helper assembly be placed, and how can I load the helper assembly into the dotnet process from my native CoreCLR profiler?
Any direction on how to load or use a helper assembly in the dotnet process would be much appreciated.
Thanks in advance.
I played around a lot with loading a managed DLL and calling it from our profiler: many different approaches, and almost all of them had a limitation one way or another.
The problem we saw was that if the method that calls the external DLL is already compiled, it is too late to load the external DLL. Even if the method is compiled and the DLL is loaded as part of the method (before the call is made to the DLL), this is still too late for the CLR.
What you can do is a bit patchy, but it works: instrument a call that hooks AppDomain.AssemblyResolve (https://learn.microsoft.com/en-us/dotnet/api/system.appdomain.assemblyresolve) and add your own handler that looks up assemblies in a specific place. This has to be done as early as possible (before the method calling this assembly is compiled). Note that this supports .NET Framework as well as .NET Core. If you only need support for .NET Core, you can use the approach described here:
https://github.com/richlander/dotnet-core-assembly-loading
This is quite a strange question, but I believe it is on-topic for SO.
Intro:
I have a service, written in C#, which calls my C++ library. The C++ library executes some 3rd-party software via WinExec.
The 3rd-party software injects a DLL via CreateRemoteThread. I don't have the source files for this software.
Main part
I have 2 PCs - Win2008 and Win10.
On Win10, this Frankenstein works flawlessly: the service runs the DLL, the DLL runs the 3rd-party DLL injector, and the injector injects its payload.
On Win2008, things are different. If I run the 3rd-party DLL injector from CMD, it works flawlessly. But if I run it from the service, the injector reports that it got ERROR_NOT_ENOUGH_MEMORY from CreateRemoteThread.
The service runs under the LocalService account, and everything is OK on Windows 10. I am looking for possible ideas/clues as to why there is a problem with the SERVICE (remember, CMD works fine) and ONLY on Windows 2008.
This issue might be related to creating a remote thread across privilege levels, as explained in the following blog article:
Injecting Code Into Privileged Win32 Processes
With XP SP2 and later (2003, Vista) some new security measures prevent the traditional CreateRemoteThread() function from working properly. You should be able to open the process, allocate memory on its heap, and write data to the allocated region, but when trying to invoke the remote thread, it will fail with ERROR_NOT_ENOUGH_MEMORY.
...
For XP SP2 I did a little debugging and found that inside CreateRemoteThread(), there is a call to ZwCreateThread() which is an export from ntdll.dll. The call is made while specifying that the thread should start suspended, which it does properly, however down the road still inside CreateRemoteThread() before ZwResumeThread() is called, there is a call to CsrClientCallServer() which fails and eventually leads to the error message.
The article explains some different ways of injecting remote threads on different version of Windows to avoid the error, ending with this conclusion:
At this point, we can successfully execute remote threads into privileged processes across all target platforms, but as mentioned before, its pretty messy. We're using three different, largely undocumented functions and auto-detecting which one to use based on the OS version.
The better solution is to create a secondary program that adds a service object (your injector program) to the service control manager database on the target system. Since you're administrator, which is required anyway, you'll be able to add these entries and start the service. This will enable the injector program to run with different access rights than normal code, and the traditional CreateRemoteThread() will work properly on Windows 2000, all of XP, and 2003/Vista. The API functions for adding and controlling the service are documented by MSDN and remain consistent across all of the platforms.
So, what is learned is that we can use a number of different functions to inject code into privileged remote processes, including RtlCreateUserThread() on XP SP2, and NtCreateThreadEx() on Vista, but the optimal way is to install a temporary service and allow CreateRemoteThread() to be the single API that accomplishes the task for all platforms.
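For completeness, installing and starting such a temporary service is just the standard Service Control Manager calls; roughly (service name, display name and binary path are placeholders, and the injector binary must be built as a service executable):
// Register the injector as a demand-start service, run it, then remove it.
SC_HANDLE scm = OpenSCManagerW(nullptr, nullptr, SC_MANAGER_CREATE_SERVICE);
SC_HANDLE svc = CreateServiceW(scm, L"TempInjectorSvc", L"Temp Injector",
                               SERVICE_ALL_ACCESS, SERVICE_WIN32_OWN_PROCESS,
                               SERVICE_DEMAND_START, SERVICE_ERROR_NORMAL,
                               L"C:\\path\\to\\injector.exe",
                               nullptr, nullptr, nullptr, nullptr, nullptr);
StartServiceW(svc, 0, nullptr);
// ... wait for the injector service to finish its work ...
DeleteService(svc);
CloseServiceHandle(svc);
CloseServiceHandle(scm);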
Of course, none of this really matters since you don't have the source code for the injector and thus cannot change how it works.
Also, you can't create remote threads across session boundaries. Calling WinExec() in a service will run the injector process in the same session as the service, i.e. session 0. If it is trying to inject into a process that is running in a user session, that will never work. This would also explain why running the injector from CMD works, if CMD is running in the same session as the process being injected into.
I encountered the same issue today, and this seems to be the cause:
Prior to Windows 8, Terminal Services isolates each terminal session by design. Therefore, CreateRemoteThread fails if the target process is in a different session than the calling process.
This explains why your code works on Windows 10 but not on Windows 7/2008.
Source: https://msdn.microsoft.com/en-us/library/windows/desktop/dd405484(v=vs.85).aspx
I'm struggling to find a basic example of how to set up a minimal plugin host with the VST 3.x SDK. The official documentation is cryptic and brief, and I can't get anywhere. I would like to:
understand the minimal setup: required headers, interfaces to implement, ...;
load a VST3 plugin (no fancy GUI, for now);
print out some data (e.g. plugin name, parameters, ...).
That would be a great start :)
Yeah, VST3 is rather mysterious and poorly documented. There are not many good examples partially because not many companies (other than Steinberg) actually care about VST3. But all cynicism aside, your best bet would be to look at the Juce source code to see their implementation of a VST3 host:
https://github.com/julianstorer/JUCE/blob/master/modules/juce_audio_processors/format_types/juce_VST3PluginFormat.cpp
There are a few other VST3-related files in that package which are worth checking out. Anyway, this should at least be enough information to get you started with a VST3 host.
It's worth noting that Juce is GPL (unless you pay for a license), so it's a big no-no to borrow code directly from it unless you are also using the GPL or have a commercial license. Just a friendly reminder to be a responsible programmer when looking at GPL'd code on the net. :)
Simple VST3 hosts already exist in the VST SDK. It is not difficult to augment them, but there are some things to keep in mind.
The samples under public.sdk/vst-hosting in the VST SDK contain an EditorHost and an AudioHost. The first handles the GUI, the second handles the effect (the signal processing). You can combine the two. Neither is a full implementation.
VST objects are COM objects and so you have to make sure to set up the application context correctly, so that your COM objects persist between calls. EditorHost and AudioHost both do that in a couple of lines in a global context variable (look for pluginContext).
If you use separate calls to load and unload effects, process data, and so on, you have to keep the COM object pointers so they are not unloaded. For example, you may be tempted to ignore the Steinberg::Vst::Module module, since you don't need it once the effect is loaded, but you have to keep a pointer to it somewhere, globally or in the main application thread. If you don't, the automatic release of that pointer will unload the plugin as well, and subsequent calls to the plugin will fail.
The construction of VST effects is relatively simple. They consist of a component (the effect) and a controller (the GUI). Both are instantiated when Steinberg::Vst::PlugProvider is loaded (some effects do not have a GUI). Both examples above load a plugprovider. Once you load a plugprovider, you are essentially done.
The following code is sufficient to load a plugprovider (the whole effect). Assume returning -1 means an error:
std::string error;
std::string path = "somepath/someeffect.vst3";

// Load the .vst3 module (bundle/DLL) from disk.
VST3::Hosting::Module::Ptr module = VST3::Hosting::Module::create(path, error);
if (!module)
    return -1;

// 'uid' is an optional class ID if you want one specific effect from the module;
// leave it empty to take the first audio effect class found.
IPtr<PlugProvider> plugProvider;
VST3::Optional<VST3::UID> effectID = std::move(uid);

for (auto& classInfo : module->getFactory().classInfos())
{
    if (classInfo.category() == kVstAudioEffectClass)
    {
        if (effectID && *effectID != classInfo.ID())
            continue;

        plugProvider = owned(new PlugProvider(module->getFactory(), classInfo, true));
        break;
    }
}

if (!plugProvider)
    return -1;
After this, plugProvider->getComponent() and plugProvider->getController() give you the effect and the GUI controller. The controller has to be displayed in a window, of course, which is done in EditorHost. These objects are the plugin's implementations of IComponent, IAudioProcessor and IEditController from the VST SDK.
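For example, dumping some basic information through the controller might look roughly like this (untested sketch):
// Query the effect and its controller from the PlugProvider and walk the parameter list.
Steinberg::Vst::IComponent* component = plugProvider->getComponent();
Steinberg::Vst::IEditController* controller = plugProvider->getController();

if (controller)
{
    Steinberg::int32 paramCount = controller->getParameterCount();
    for (Steinberg::int32 i = 0; i < paramCount; ++i)
    {
        Steinberg::Vst::ParameterInfo info = {};
        if (controller->getParameterInfo(i, info) == Steinberg::kResultOk)
        {
            // info.title is a UTF-16 String128; convert it before printing.
        }
    }
}

// The signal-processing interface is obtained from the component.
Steinberg::FUnknownPtr<Steinberg::Vst::IAudioProcessor> processor(component);
if (processor)
{
    // setupProcessing()/process() calls go here, as shown in AudioHost.
}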
The source/vst/testsuite part of the VST SDK will show you the full functionality of both of these parts (it will essentially give you the functional calls that you can use to do everything you want to do).
Note the module and plugprovider loaded in the code above. As mentioned above, if you don't keep the module pointer, there is no guarantee the plugprovider pointer will survive. It is difficult to keep track of what gets released when in the VST SDK.