How to launch a process with limited memory? - C++

How does one create and launch a process (i.e. launch an .exe file) with a RAM limitation, using C++ and the Win32 API?
Which error code will be returned if the process goes beyond the limit?

Job Objects are the right way to go.
As for an error code, there really isn't one. You create the process (with CreateProcess) and the job (with CreateJobObject), then associate the process with the job object (with AssignProcessToJobObject).
The parent process won't get an error message if the child allocates more than the allowed amount of memory. In fact, the limit will be enforced even if the parent process exits. If the child process tries to allocate more than the allowed amount of memory, the allocation will simply fail.
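A minimal sketch of that sequence (child.exe and the 64 MB cap are made up for illustration; error handling abbreviated):
#include <windows.h>

int main()
{
    // create the job and cap per-process committed memory at 64 MB
    HANDLE hJob = CreateJobObjectW(nullptr, nullptr);
    if (!hJob) return 1;

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = 64 * 1024 * 1024;
    if (!SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                                 &limits, sizeof(limits)))
        return 1;

    // start the child suspended so it cannot allocate before joining the job
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"child.exe";
    if (!CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                        CREATE_SUSPENDED, nullptr, nullptr, &si, &pi))
        return 1;

    if (!AssignProcessToJobObject(hJob, pi.hProcess)) return 1;
    ResumeThread(pi.hThread);

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    CloseHandle(hJob);
    return 0;
}
Once the cap is exceeded, further commits inside the child simply fail, and its malloc/new see an ordinary allocation failure; the parent is never sent an error code.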

You can use CreateProcess() to spawn a process.
Once you have done that, you can use SetProcessWorkingSetSize() to attempt to control how much physical memory it uses, but this is more of a really strong suggestion to the VMM than some actual edict that will cause malloc() and new to start failing.
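For illustration, a minimal sketch of that call (pi is assumed to be the PROCESS_INFORMATION that CreateProcess filled in):
#include <windows.h>

// a sketch: ask the VMM to keep the child's working set between 1 MB and
// 32 MB of physical memory. This pressures the pager, but it does NOT make
// malloc()/new inside the child start failing at 32 MB.
bool capWorkingSet(const PROCESS_INFORMATION& pi)
{
    return SetProcessWorkingSetSize(pi.hProcess,
                                    1 * 1024 * 1024,         // minimum working set
                                    32 * 1024 * 1024) != 0;  // maximum working set
}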
There is no way to say "this process will take 4 MB of memory and after that all allocations fail". I mean, you're going to link to Win32 DLLs, and you have no idea what sort of memory those things require. If you want your app to take only a certain amount of memory, don't allocate more than that, and avoid operations that allocate behind your back.
Your question regarding the error code makes no sense at all.

NT Job objects (SetInformationJobObject & JOBOBJECT_BASIC_LIMIT_INFORMATION)

To my knowledge there is no such possibility on Windows. It would be very useful to have, though, for testing and other things.
You have this in Java, as the JVM uses only a predefined amount of memory, but there it's not a feature, rather a problem ;-)

If you launch a process, you lose control over it. Only the operating system can control its behavior (its memory footprint, for example), and even then I can't imagine how this could be achieved; as jeffamaphone stated, any limitation is at best a suggestion, not an instruction. A process can call external static libraries, COM instances, etc., so I don't see how such a limit could be verified or enforced.

Why do I get SIGKILL instead of std::bad_alloc? [duplicate]

I'm developing an application for an embedded system with limited memory (Tegra 2) in C++. I'm handling NULL results of new and new[] throughout the code; these sometimes occur, and the application is able to handle them.
The problem is that the system kills the process with SIGKILL if memory runs out completely. Can I somehow arrange for new to just return NULL instead of the process being killed?
I am not sure what kind of OS you are using, but you should check whether it supports opportunistic memory allocation, like Linux does.
If it is enabled, the following may happen (details/solution are specific to the Linux kernel):
Your new or malloc gets a valid address from the kernel, even if there is not enough memory, because...
The kernel does not really allocate the memory until the very moment of first access.
If all of the "overcommitted" memory is used, the operating system has no choice but to kill one of the involved processes. (It is too late to tell the program that there is not enough memory.) On Linux, this is called an Out Of Memory kill (OOM kill). Such kills get logged in the kernel message buffer.
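A small sketch of the effect (assumes a 64-bit Linux machine with overcommit enabled; running it may invoke the OOM killer, so don't try this on a machine you care about):
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <new>

int main()
{
    const std::size_t huge = std::size_t(1) << 40;  // 1 TiB, far beyond RAM
    char* p = new (std::nothrow) char[huge];        // often "succeeds" under overcommit
    std::printf("allocation %s\n", p ? "succeeded" : "failed");
    if (p)
        std::memset(p, 1, huge);  // touching the pages forces real allocation;
                                  // the OOM killer may SIGKILL the process here
    delete[] p;
    return 0;
}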
Solution: Disable overcommitting of memory (mode 2 enables strict commit accounting, so allocations beyond the commit limit fail up front instead of being OOM-killed later):
echo 2 > /proc/sys/vm/overcommit_memory
Two ideas come to mind.
Write your own memory allocation function rather than depending on new directly. You mentioned you're on an embedded system, where special allocators are quite common in applications. Are you running your application directly on the hardware or are you running in a process under an executive/OS layer? If the latter, is there a system API provided for allocating memory?
Check out C++ set_new_handler and see if it can help you. You can request that a special function be invoked when an allocation by new fails. Perhaps in that function you can take action to prevent whatever is killing the process from doing so. Reference: http://www.cplusplus.com/reference/std/new/set_new_handler/

Allocating memory that can be freed by the OS if needed

I'm writing a program that generates thumbnails for every page in a large document. For performance reasons I would like to keep the thumbnails in memory for as long as possible, but I would like the OS to be able to reclaim that memory if it decides there is another more important use for it (e.g. the user has started running a different application.)
I can always regenerate the thumbnail later if the memory has gone away.
Is there any cross-platform method for flagging memory as can-be-removed-if-needed? The program is written in C++.
EDIT: Just to clarify: rather than being notified when memory is low, or regularly monitoring the system's amount of free memory, I'm thinking more along the lines of allocating memory and then "unlocking" it when it's not in use. The OS can then steal unlocked memory if needed (even for disk buffers, if it thinks that would be a better use of it), and all I have to do as a programmer is "lock" the memory again before I intend to use it. If the lock fails, I know the memory has been reused for something else, so I need to regenerate the thumbnail; if the lock succeeds, I can just keep using the data from before.
The reason is I might be displaying maybe 20 pages of a document on the screen, but I may as well keep thumbnails of the other 200 or so pages in case the user scrolls around a bit. But if they go do something else for a while, that memory might be better used as a disk cache or for storing web pages or something, so I'd like to be able to tell the OS that it can reuse some of my memory if it wants to.
Having to monitor the amount of free system-wide memory may not achieve the goal (my memory will never be reclaimed to improve disk caching), and getting low-memory notifications will only help in emergencies. I was hoping that by having a lock/unlock method, this could be achieved in more of a lightweight way and benefit the performance of the system in a non-emergency situation.
Is there any cross-platform method for flagging memory as can-be-removed-if-needed? The program is written in C++
For Windows, at least, you can register for a memory resource notification.
HANDLE WINAPI CreateMemoryResourceNotification(
_In_ MEMORY_RESOURCE_NOTIFICATION_TYPE NotificationType
);
NotificationType:
LowMemoryResourceNotification: available physical memory is running low.
HighMemoryResourceNotification: available physical memory is high.
Just be careful responding to both events. You might create a feedback loop (memory is low, release the thumbnails! and then memory is high, make all the thumbnails!).
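A minimal usage sketch (releaseThumbnailCache is a hypothetical application hook):
#include <windows.h>
#include <iostream>

// hypothetical hook: drop thumbnails that can be regenerated later
void releaseThumbnailCache() { std::cout << "dropping thumbnail cache\n"; }

int main()
{
    // the handle is signaled while available physical memory is low
    HANDLE hLow = CreateMemoryResourceNotification(LowMemoryResourceNotification);
    if (!hLow) return 1;

    for (;;) {
        // poll from a background loop, waiting up to 10 seconds each time
        if (WaitForSingleObject(hLow, 10 * 1000) == WAIT_OBJECT_0)
            releaseThumbnailCache();
    }
}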
In AIX, there is a signal SIGDANGER that is sent to applications when available memory is low. You may handle this signal and free some memory.
There has been discussion among Linux developers about implementing this feature, but AFAIK it is not yet implemented in Linux. Maybe they think that applications should not care about low-level memory management, and that it can be handled transparently by the OS via swapping.
The POSIX standard has a function posix_madvise that can be used to mark an area of memory as less important. The advice POSIX_MADV_DONTNEED specifies that the application expects it will not access the specified range in the near future.
Unfortunately, the current Linux implementation immediately frees the memory range when posix_madvise is called with this advice.
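For reference, the call looks like this (a sketch; addr must be page-aligned, and on Linux the pages may be discarded immediately, per the caveat above):
#include <sys/mman.h>
#include <cstddef>

// hint that a regenerable cache buffer will not be needed soon
void markNotNeeded(void* addr, std::size_t len)
{
    posix_madvise(addr, len, POSIX_MADV_DONTNEED);
}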
So there's no portable solution to your question.
However, on almost every OS you can read the currently available memory via some OS interface, so you can poll that value routinely and manually free memory when available memory is low.
There's nothing special you need to do. The OS will automatically remove things from memory if they haven't been used recently. Some OSes have platform-specific ways to improve on this, but generally nothing special is needed.
This question is very similar and has answers that cover things not covered here.
Allocating "temporary" memory (in Linux)
This shouldn't be too hard to do because this is exactly what the page cache does, using unused memory to cache the hard disk. In theory, someone could write a filesystem such that when you read from a certain file, it calculated something, and the page cache would cache it automatically.
All the basics of automatically freed cache space are already there in any OS with a disk cache, and it's hard to imagine there not being an API for something that would make such a huge difference, especially in things like mobile web browsers.

How far can memory leaks go?

I've run into memory leaks many times. Usually when I'm malloc-ing like there's no tomorrow, or dangling FILE *s like dirty laundry. I generally assume (read: hope desperately) that all memory is cleaned up at least when the program terminates. Are there any situations where leaked memory won't be collected when the program terminates, or crashes?
If the answer varies widely from language-to-language, then let's focus on C(++).
Please note the hyperbolic usage of the phrases 'like there's no tomorrow' and 'dangling ... like dirty laundry'. Unsafe malloc-ing can hurt the ones you love. Also, please use caution with dirty laundry.
No. Operating systems free all resources held by processes when they exit.
This applies to all resources the operating system maintains: memory, open files, network connections, window handles...
That said, if the program is running on an embedded system without an operating system, or with a very simple or buggy operating system, the memory might be unusable until a reboot. But if you were in that situation you probably wouldn't be asking this question.
The operating system may take a long time to free certain resources. For example the TCP port that a network server uses to accept connections may take minutes to become free, even if properly closed by the program. A networked program may also hold remote resources such as database objects. The remote system should free those resources when the network connection is lost, but it may take even longer than the local operating system.
The C Standard does not specify that memory allocated by malloc is released when the program terminates. That is done by the operating system, and not all OSes (usually these are in the embedded world) release the memory when the program terminates.
The other answers have covered most aspects of your question w.r.t. modern OSes, but historically there is one case worth mentioning if you have ever programmed in the DOS world. Terminate and Stay Resident (TSR) programs would return control to the system but remain resident in memory, to be revived by a software or hardware interrupt. It was normal to see messages like "out of memory! try unloading some of your TSRs" when working on these OSes.
So technically the program terminates, but because it still resides in memory, any memory leak would not be released unless you unload the program.
So you can consider this another case, apart from OSes failing to reclaim memory either because of a bug or because the embedded OS is designed that way.
I remember one more example. Customer Information Control System (CICS), a transaction server that runs primarily on IBM mainframes, is pseudo-conversational. When executed, it processes the user-entered data, generates another set of data for the user, transfers it to the user's terminal node, and terminates. On activating the attention key, it revives to process another set of data. Because of the way it behaves, technically again, the OS won't reclaim memory from terminated CICS programs unless you recycle the CICS transaction server.
Like the others have said, most operating systems will reclaim allocated memory upon process termination (and probably other resources like network sockets, file handles, etc).
Having said that, memory may not be the only thing you need to worry about when dealing with new/delete (as opposed to raw malloc/free). The memory allocated by new may get reclaimed, but the destructors of any leaked objects will not run. Perhaps the destructor of some class writes a sentinel value into a file upon destruction. If the process just terminates, the file handle is closed and the memory reclaimed, but that sentinel value would never get written.
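A small sketch of that failure mode (the file name is made up for illustration):
#include <cstdio>
#include <cstdlib>

// ~Sentinel() writes a marker file; if the process terminates abruptly,
// the destructor never runs and the file is never written, even though
// the OS reclaims the heap memory itself.
struct Sentinel {
    ~Sentinel() {
        if (std::FILE* f = std::fopen("sentinel.txt", "w")) {
            std::fputs("done\n", f);
            std::fclose(f);
        }
    }
};

int main() {
    Sentinel* s = new Sentinel;
    std::_Exit(0);  // immediate termination: no destructors, no atexit handlers
    delete s;       // never reached
}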
Moral of the story, always clean up after yourself. Don't let things dangle. Don't rely on the OS cleaning up after you. Clean up after yourself.
This is more likely to depend on the operating system than the language. Ultimately, any program in any language gets its memory from the operating system.
I've never heard of an operating system that doesn't recycle memory when a program exits/crashes. So if your program has an upper bound on the memory it needs to allocate, then just allocating and never freeing is perfectly reasonable.
If the program is ever turned into a dynamic component ("plugin") that is loaded into another program's address space, it will be troublesome, even on an operating system with tidy memory management. We don't even have to think about the code being ported to less capable systems.
On the other hand, releasing all memory can impact the performance of a program's cleanup.
In one program I worked on, a certain test case required 30 seconds or more for the program to exit, because it recursed through the graph of all dynamic memory and released it piece by piece.
A reasonable solution is to have the capability there and cover it with test cases, but turn it off in production code so the application quits fast.
All operating systems deserving the title will clean up the mess your process made after termination. But there are always unforeseen events: what if the OS was somehow denied access to some resource, and some poor programmer did not foresee the possibility, so it doesn't try again a bit later?
Always safer to just clean up yourself IF memory leaks are mission critical - otherwise not really worth the effort IMO if that effort is costly.
Edit:
You do need to clean up memory leaks if they are in a place where they will accumulate, like inside loops. The memory leaks I speak of are ones that stay constant in size throughout the course of the program; a leak of any other sort will most likely become a serious problem sooner or later.
In technical terms, if your leaks are of memory 'complexity' O(1) they are fine in most cases; O(log n) is already unpleasant (and in some cases fatal); and O(n) and above is intolerable.
Shared memory on POSIX compliant systems persists until shm_unlink is called or the system is rebooted.
If you have interprocess communication, this can lead to other processes never completing and consuming resources depending on the protocol.
To give an example: I was once experimenting with printing to a PDF printer in Java. When I terminated the JVM in the middle of a print job, the PDF spooling process remained active, and I had to kill it in the task manager before I could retry printing.
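A minimal sketch of the shared-memory case mentioned above (the segment name is made up; on older Linux systems, link with -lrt):
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main()
{
    // create (or open) a named shared-memory segment
    int fd = shm_open("/demo_segment", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return 1;
    ftruncate(fd, 4096);  // the segment now exists system-wide
    close(fd);
    // if the process exits or crashes here, the segment persists
    // until shm_unlink is called or the system reboots
    shm_unlink("/demo_segment");  // this is what actually removes it
    return 0;
}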

How to add new memory allocator to deal with low memory?

In this answer, which discusses the differences between new and malloc, one stated difference of new from malloc is: "Can add a new memory allocator to deal with low memory (set_new_handler)".
Please give an example of this and explain how it works.
It isn't exactly a new memory allocator, but a function you can register so that it is called when operator new runs out of memory.
If you can then magically fix the out of memory problem, new can try again and see if it works better now. This is most often not very useful, unless your application is holding on to some memory it can release.
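A minimal sketch of that pattern (the reserve-block idea and all names here are illustrative):
#include <iostream>
#include <new>

// keep a "rainy-day" reserve that the handler can release so a failing
// allocation can succeed on retry
char* g_reserve = nullptr;

void outOfMemoryHandler()
{
    if (g_reserve) {
        delete[] g_reserve;   // free the reserve; operator new retries the
        g_reserve = nullptr;  // allocation and may now succeed
        std::cerr << "reserve released, retrying allocation\n";
    } else {
        // nothing left to free: unregister so the next failure throws
        std::set_new_handler(nullptr);  // std::bad_alloc
    }
}

int main()
{
    g_reserve = new char[10 * 1024 * 1024];  // 10 MB emergency reserve
    std::set_new_handler(outOfMemoryHandler);
    // ... allocations that may exhaust memory go here ...
}
operator new calls the installed handler in a loop whenever an allocation fails, so the handler must either free some memory, install a different handler, or throw; passing nullptr restores the default behavior of throwing std::bad_alloc.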
Here are a couple of examples where a new handler might be of use.
Suppose you're on a Unix-like machine on which the sysadmin has, for some reason, set a low soft limit on the heap size. The new handler can raise the soft limit to the hard limit, and voila! new memory might be available.
Suppose you want your application to hog all the memory, but other already-running applications are in the way. So just make your new handler sleep for a little bit. When one of those already-running programs terminates, voila! new memory is available.
There is at least one commercial application that takes option #2. It is almost always a mistake. Typically the application runs out of memory because a user has inadvertently tried to allocate more memory than exists on any computer. The application will happily munch ever more memory as other running applications quit. Eventually no new programs can be started, including those the operating system needs to run. This application is a rather effective tool for bringing the machine to its knees.
I think "low memory" is actually indicating "out of memory" in the answer you're linking too. There are plenty of sample code fragments that install an out-of-memory handler by searching on set_new_handler (e.g. http://www.cplusplus.com/reference/std/new/set_new_handler/ )
One implementation I have seen (in production code for a particularly memory-intensive application) used this hook in conjunction with a "rainy-day" block allocation of ~10MB on application startup. If this handler was ever triggered, it would then delete the memory and attempt to enter a "controlled exit" path.
In practice, I found that this wasn't a very effective technique, as behavior once you are out of memory is already unpredictable.