A legacy C++ application with notorious memory leak issues has to be called from a .NET server-based Windows application. .NET garbage collection times are not deterministic, and sometimes the C++ objects are destroyed, or not destroyed, "on time", producing unpredictable results and generally crashing the C# web app. What is the best way to push the C++ objects onto the garbage collection stack as frequently as possible, but not so often as to remove the .NET reference to the COM object? Keep in mind that the COM objects can spawn sub-objects, so the .NET reference count of a COM object can change with just a function call and not necessarily an instantiation.
As the memory leaks occur and the COM objects are not cleaned up, performance degrades until it is so slow that IIS trips on itself a few times and then crashes. Restarting IIS fixes the issue until the next time. Periodic restarts help, but on a busy day this can still happen during business hours.
I had to resolve this using .NET 1.1 a couple of years ago, and I'm wondering whether someone else had my solution or a better one. This is NOT ASP.NET; it is a .NET DLL.
The final result was not completely satisfactory, and the web server still crashes every few months.
You ask "What is the best way to push the c++ objects onto the garbage collection stack as frequently as possible", but c++ objects are never garbage-collected. Maybe look at it this way...
You have a process that instantiates a bunch of c++ objects. Some of those c++ objects implement COM objects, and their lifetime is therefore managed via AddRef/Release. Some of those COM objects are imported into the .NET world, and are wrapped with an RCW (runtime-callable wrapper). Only the RCW is a .NET object and goes in the garbage-collected heap.
Without any intervention from you, the RCW will eventually be GC'd and when that happens it will do a Release against its underlying COM object. If you want to Release the COM object immediately, without waiting for GC, you can call...
System.Runtime.InteropServices.Marshal.ReleaseComObject
... or even FinalReleaseComObject if you're sure it's what you want.
To get back to your question: you want to know how to delete the C++ objects without releasing the .NET reference to your COM object. Since the C++ objects don't exist in the .NET heap, there's no way to achieve this directly. You could expose a method from your COM object that deletes all its C++ objects, and simply call that from your .NET code. But I guess if it was possible for your COM object to identify all the leaked C++ objects, you'd be doing that already.
Hopefully I've explained why there's no way to achieve what you suggest in your question, but there are plenty of tools around to help you find and fix your memory leaks. I suggest using a tool such as LeakDiag (search StackOverflow for it) to find out where your C++ code is leaking memory.
The pragmatic solution, if you're using IIS6 or higher, is to configure application pool recycling. You can tweak the numbers so that processes are killed and restarted before they've ever leaked enough memory to be problematic, and it normally works in such a way that users don't notice any downtime.
Create a COM+ application and put the COM classes you use inside that application. This way all COM objects are instantiated in a separate process. You can periodically release all COM objects and just restart the COM+ application process.
I know that modern Windows versions reclaim memory that was previously acquired with malloc, new and the like, after program termination, but what about COM objects? Should I call obj->Release() on them on program's exit, or will the system do this for me?
My guess is: it depends. For out-of-process COM, I should probably always call Release(), but for in-process COM, I think it really doesn't matter, because the COM objects die with the process anyway.
If you're in the process itself, then yes, you should, as you might not know where the server is and it could be out-of-proc. If you're in a DLL it becomes more complicated.
In a DLL you should, UNLESS you receive a DLL_PROCESS_DETACH notification because the process is terminating, in which case you should do absolutely nothing and just let the application close. That notification is delivered during process teardown, so it is too late to clean up at that point; the kernel may have already reclaimed the blocks you call Release on.
Remember as a DLL writer there is nothing you can do if the process exits ungracefully, you can only do what you can within reason to clean up after yourself in a graceful exit.
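For illustration, a minimal sketch of that rule in DllMain. Windows passes a non-NULL lpReserved with DLL_PROCESS_DETACH when the process is terminating (as opposed to a plain FreeLibrary), which is exactly the case where you should do nothing; ReleaseMyComPointers is a hypothetical cleanup routine, and as always anything done in DllMain must respect the loader-lock restrictions:

    #include <windows.h>

    void ReleaseMyComPointers();   // hypothetical: releases the interface pointers this DLL holds

    BOOL WINAPI DllMain(HINSTANCE hinstDll, DWORD reason, LPVOID lpReserved)
    {
        if (reason == DLL_PROCESS_DETACH)
        {
            // lpReserved != NULL: the whole process is terminating. It is too late
            // to clean up safely, so do absolutely nothing and let it die.
            // lpReserved == NULL: the DLL is being unloaded via FreeLibrary while
            // the process keeps running, so releasing our pointers is still sensible.
            if (lpReserved == NULL)
                ReleaseMyComPointers();
        }
        return TRUE;
    }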
One easy solution is to use smart COM pointers everywhere; the ATL and WRL implementations work nicely and mean you don't have to worry about it for the most part. Even if these are stored statically, their destructors will be called before process teardown or DLL unload, releasing the objects at a time when it is still safe to do so.
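A minimal sketch of the smart-pointer approach, assuming an ATL build; the ProgID is a placeholder for whatever COM class you actually use, and CComPtr calls Release automatically when it goes out of scope (or, for a static instance, when the CRT runs its destructor):

    #include <windows.h>
    #include <atlbase.h>   // CComPtr

    HRESULT UseComObject(const wchar_t* progId)   // e.g. L"Some.Hypothetical.ProgID"
    {
        CLSID clsid;
        HRESULT hr = CLSIDFromProgID(progId, &clsid);
        if (FAILED(hr))
            return hr;

        CComPtr<IDispatch> obj;                   // Release() is called in the destructor
        hr = obj.CoCreateInstance(clsid);
        if (FAILED(hr))
            return hr;

        // ... call methods through obj ...

        return S_OK;                              // obj is released here, on every return path
    }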
So the short answer is: you should always call Release if it is safe to do so. However, there are times when it is not, and then you should most definitely NOT do anything.
Depending on the implementation of the underlying object, there may or may not be a penalty. The object may have state that persists beyond process shutdown. A persistent lock in a local database is the easiest example that comes to mind.
With that in mind, I say it's better to call Release just in case.
- An in-process COM object will die with the process.
- An out-of-process reference will be released on timeout.
- Poorly designed servers and clients might remain in a bad state, holding pointers to objects (typically proxies) that are no longer available, unable to see that they are dead and unable to get rid of them.
- It is always a good idea to release the pointers gracefully.
If you don't release COM pointers properly, and the COM activity included marshaling, you are very likely to get exceptions in CoUninitialize, which are annoying and can end up showing a process crash message to the user.
We've inherited a large legacy application which is structured roughly like this:
class Application
{
    Foo* m_foo;
    Bar* m_bar;
    Baz* m_baz;
public:
    Foo* getFoo() { return m_foo; }
    Bar* getBar() { return m_bar; }
    Baz* getBaz() { return m_baz; }

    void Init()
    {
        m_foo = new Foo();
        m_bar = new Bar();
        m_baz = new Baz();
        // all of them are singletons, which can call each other
        // whenever they please
        // may have internal threads, open files, acquire
        // network resources, etc.
        SomeManager.Init(this);
        SomeOtherManager.Init(this);
        AnotherManager.Init(this);
        SomeManagerWrapper.Init(this);
        ManagerWrapperHelper.Init(this);
    }

    void Work()
    {
        SomeManagerWrapperHelperWhateverController.Start();
        // it will never finish
    }

    // no destructor, no cleanup
};
All managers, once created, stay there for the whole application lifetime. The application does not have close or shutdown methods, and the managers don't have them either. So the complex interdependencies are never dealt with.
The question is: if the objects' lifetime is tightly coupled with the application lifetime, is it accepted practice to not have cleanup at all? Will the operating system (Windows in our case) be able to clean up everything (kill threads, close open file handles, sockets, etc.) once the process ends (by ending it in Task Manager or by calling special functions like ExitProcess, Abort, etc.)? What are the possible problems with the above approach?
Or, a more generic question: are destructors absolutely necessary for global objects (declared outside of main)?
Will the operating system (Windows in our case) be able to clean up everything (kill threads, close open file handles, sockets, etc.) once the process ends (by ending it in Task Manager or by calling special functions like ExitProcess, Abort, etc.)? What are the possible problems with the above approach?
As long as your objects aren't initialising any resources not cleaned up by the operating system, then it doesn't make any practical difference whether you explicitly clean up or not, as the OS will mop up for you when your process is terminated.
However, if your objects are creating resources which are not cleaned up by the OS then you've got a problem and need a destructor or some other explicit clean up code somewhere in your app.
Consider if one of those objects creates sessions on some remote service, like a database for example. Of course, the OS doesn't magically know that this has been done or how to clean them up when your process dies, so those sessions would remain open until something kills them (the DBMS itself probably, by enforcing some timeout threshold or other). Perhaps not a problem if your app is a tiny user of resources and you're running on a big infrastructure - but if your app creates and then orphans enough sessions then that resource contention on that remote service might start to become a problem.
if the objects' lifetime is tightly coupled with the application lifetime, is it accepted practice to not have cleanup at all?
That's a matter of subjective debate. My personal preference is to include the explicit cleanup code and make each object I create personally responsible for cleaning up after itself wherever practical. If application-lifetime objects are ever refactored such that they no longer live for the lifetime of the application, I don't have to go back and figure out whether I need to add previously-omitted cleanup. I guess what I'm saying is that for cleanup I generally prefer to lean towards RAII over the more pragmatic YAGNI.
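For illustration, a minimal sketch of what the RAII-leaning version of the Application class from the question might look like (reusing the Foo/Bar/Baz types from the snippet above, and assuming the managers tolerate being destroyed in reverse order of creation):

    #include <memory>

    class Application
    {
        std::unique_ptr<Foo> m_foo;
        std::unique_ptr<Bar> m_bar;
        std::unique_ptr<Baz> m_baz;
    public:
        Foo* getFoo() { return m_foo.get(); }
        Bar* getBar() { return m_bar.get(); }
        Baz* getBaz() { return m_baz.get(); }

        void Init()
        {
            m_foo = std::make_unique<Foo>();
            m_bar = std::make_unique<Bar>();
            m_baz = std::make_unique<Baz>();
        }

        // The implicit destructor now destroys m_baz, m_bar, m_foo in reverse
        // declaration order; each object can close its own files, sockets and
        // threads in its destructor instead of relying on the OS to mop up.
    };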
is it accepted practice to not have cleanup at all
It depends on who you're asking.
Will the operating system (Windows in our case) be able to clean up everything (kill threads, close open file handles, sockets, etc.) once the process ends?
Yes, the OS will take back everything. It will reclaim memory, free handles, etc.
What are possible problems with the above approach
One of the possible problems is that if you use a memory leak detector it will constantly show you have leaks.
In general, modern operating systems clean up all of a process's resources on exit. But in my opinion it's still good manners to clean up after yourself. (But then I was "raised" on the Amiga, where you had to do it.)
Sometimes it's forced on you by a spec or just by the behaviour of 'peripherals'. Perhaps you have a lot of data buffered in your app that should really be flushed to disk, or maybe a DB accumulates 'half-open' connections if they are not explicitly closed.
Other than that, as #cnicutar says, it depends who you ask. I'm firmly in the 'don't bother' camp for the following reasons:
1) It's difficult enough to get apps to work anyway without writing extra shutdown code that is not required.
2) The more code you write, the more bugs there are and the more testing you have to do. You may have to test such code in more than one OS version:(
3) The OS developers have spent a long time ensuring that apps can always be shut down if required, (eg. by Task Manager), without any overall impact on the rest of the system. If some functionality is already there in the OS, why not leverage it?
4) Threads pose a particular problem - they could be in any state. They may be running on a different core than the thread that initiates app close or may be blocked on a system call. While it's very easy for the OS to ensure that all threads are terminated before releasing any memory, closing handles etc, it's very difficult to stop such threads in a safe and reliable manner from user code.
5) Performance-sapping memory-managers are not the only way of detecting leaks. If large objects, (eg. network buffers), are pooled, it's easy to tell if any leak during run-time without relying on 3rd-party memory-managers that issue a leak report on app close. An intensive memory-checker like Valgrind may actually cause system problems by affecting the overall timing.
6) Empirically, every app I've ever written for Windows that has no explicit shutdown code has closed immediately and completely when the user clicks on the 'red cross' border icon. This includes busy, complex IOCP servers running on multicore boxes with thousands of connected clients.
7) Assuming that a reasonable test phase has been done - one that includes load/soak testing - it's not difficult to differentiate an app that is leaking from one that chooses to not free memory that it is using at close time. Colander-apps will show memory/handles/whatever always increasing with run time.
8) Small, occasional leaks that are not obvious are not worth spending a huge amount of time on. Most Windows boxes are restarted every month anyway, (Patch Tuesday).
9) Opaque libraries are often written by developers like me and so will generate spurious 'leak reports' on shutdown anyway.
Designing/writing/debugging/testing shutdown code solely to clean up a memory-report is an expensive luxury I can well do without:)
You should determine that for each object individually. If an object requires special actions to be taken upon cleanup (such as flushing a buffer to disk), this will not happen unless you explicitly take care of it.
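As a concrete illustration of the 'flushing a buffer' case, here is a small hypothetical writer whose data only reaches the disk if its destructor (or some explicit flush) actually runs:

    #include <cstdio>
    #include <string>

    class BufferedLog
    {
        std::FILE*  m_file;
        std::string m_buffer;   // lines accumulate here, not on disk
    public:
        explicit BufferedLog(const char* path) : m_file(std::fopen(path, "a")) {}

        void write(const std::string& line) { m_buffer += line + '\n'; }

        // If the process is terminated without running this destructor, the OS
        // closes the file handle but cannot know about m_buffer, so everything
        // still buffered is silently lost.
        ~BufferedLog()
        {
            if (m_file)
            {
                std::fwrite(m_buffer.data(), 1, m_buffer.size(), m_file);
                std::fclose(m_file);
            }
        }
    };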
I am using a COM dll from a web service.
The COM DLL is added as a reference, and I am declaring the object as static in Global.asax.
I am creating the COM object in the Application_Start.
I have to call the COM dll interface function in each request.
I am getting memory corruption exceptions here. I can see from the logs that it happens when simultaneous requests come in.
Please let me know the best way to do this. How do I make it thread-safe?
Try creating a new instance in each request rather than using application scope for the object.
If you are accessing it at application scope (e.g. through Application_Start) you will need to make sure it is safe for multithreading. I don't know how C++ DLLs handle threading, but you might be able to manage the multithreading at the ASP.NET level.
For example, to manage a simple application-level counter the code is something like:
Application.Lock();
Application["SomeGlobalCounter"] = (int)Application["SomeGlobalCounter"] + 1;
Application.UnLock();
For more information you might want to see the MSDN page on Application State.
If the COM object is apartment-threaded, COM provides the synchronization to ensure that only one method call executes on the object at a time.
Generally, though, COM should be complaining of multiple threads trying to access an instance of an object using the same pointer shared across threads. Having a static variable holding a pointer to the object is probably a bad idea.
Once the COM object shared library is loaded somewhere (in-proc or out-of-proc) by creating an instance, creation of additional instances per thread should be fairly quick. That is, of course, dependent on what types of things that are being done during object construction.
This isn't so much of a problem now as I've implemented my own collection but still a little curious on this one.
I've got a singleton which provides access to various common components; it holds instances of these components keyed by thread ID, so each thread should (and does, I checked) have its own instance of a component such as an Oracle database access library.
When running the system (which is a C++ library being called by a C# application) with multiple incoming requests, everything seems to run fine for a while, but then it crashes out with an AccessViolation exception. Stepping through the debugger, the problem appears to be that when one thread finishes and clears out its session information (held in a std::map object), the session information held in a separate collection instance for the other thread also appears to be cleared out.
Is this something anyone else has encountered or knows about? I've tried having a look around but can't find anything about this kind of problem.
Cheers
Standard C++ containers do not concern themselves with thread safety much. Your code sounds like it is modifying the map instance from two different threads or modifying the map in one thread and reading from it in another. That is obviously wrong. Use some locking primitives to synchronize the access between the threads.
If all you want is a separate object for each thread, you might want to take a look at boost::thread_specific_ptr.
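A minimal sketch of both options, using standard C++ facilities (a mutex-guarded shared map, or per-thread storage via thread_local, which achieves the same effect as boost::thread_specific_ptr); the names are illustrative only:

    #include <map>
    #include <mutex>
    #include <string>

    // Option 1: one shared map, with every access serialized by a mutex.
    class SessionRegistry
    {
        std::mutex m_lock;
        std::map<int, std::string> m_sessions;   // keyed by thread id, as in the question
    public:
        void set(int threadId, const std::string& info)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_sessions[threadId] = info;
        }
        void clear(int threadId)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_sessions.erase(threadId);
        }
    };

    // Option 2: no sharing at all - each thread transparently gets its own map,
    // so one thread clearing its session cannot touch another thread's data.
    thread_local std::map<std::string, std::string> t_sessionInfo;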
How do you manage giving each thread its own session information? Somewhere under there you have classes managing the lifetimes of these objects, and this is where it appears to be going wrong.
From an out-of-process COM object (LocalServer32), can I determine the client process that requested the creation of the object? To be specific, I need to get hold of the client process's command line.
This question arises because (due to poor standardisation, implementation and support) the potential 3rd party clients of the object have a variety of idiosyncrasies which the object needs to work around.
To do this the object needs to be able to identify its current client.
Extending the interface of the COM object so that the client can identify itself is unfortunately not possible ... or to be more precise the interface can be extended but I won't be able to get the clients to call the extension.
Having looked into this further I suspect the answer is going to be "NO", but by all means tell me I'm wrong.
Using Process Explorer I can see that the parent process for my COM object is an instance of "svchost.exe", and not the client application.
Because COM server processes are shared by all clients of the same AppID, it's not possible to actually get the PID of the client application. As #Anders said, you can use CoImpersonateClient (or, better, call CoGetCallContext and interrogate the resulting IServerSecurity) to find the account and login session of the caller, but you cannot get the process itself.
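For illustration, a minimal sketch of finding out which account is calling, from inside one of the object's methods while a client call is in progress (this identifies the caller's account and logon session only, not its PID or command line):

    #include <windows.h>
    #include <combaseapi.h>
    #include <string>

    std::wstring GetCallerAccountName()
    {
        std::wstring result;
        if (SUCCEEDED(CoImpersonateClient()))   // temporarily run on the caller's token
        {
            wchar_t name[256];
            DWORD   len = 256;
            if (GetUserNameW(name, &len))       // account name of the impersonated caller
                result.assign(name, len - 1);   // len includes the terminating null
            CoRevertToSelf();                   // always undo the impersonation
        }
        return result;
    }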
If you are trying to work around bugs in legacy clients, I would recommend you create a new set of CLSIDs (or IIDs, if you can emulate all the bugs the legacy clients rely on with shims) for new (non-legacy) clients with VERY strict input validation, and implement new features only in these new CLSIDs. Legacy clients stick with their older CLSID, in which you can simply use the existing, legacy implementation (or a bug-for-bug compatible clone).
Maybe CoImpersonateClient()