How to crash a system programmatically [closed] - C++

If I could write a user program that would crash my OS (not just my application), how would I do it?
I was thinking of somehow switching my user-mode program to kernel mode and causing memory corruption. Is that possible?
Note: I am not creating a virus. Just curiosity.

KeBugCheck on Windows is the documented way to get a BSOD.
You can also try deleting the root registry key (\REGISTRY) on Windows XP using the native NT API.

Write and load a kernel module that calls panic(), or implement an equivalent.
Or simply exec the shutdown or halt command, or the syscall that implements it.

If the OS happens to be windows, create a fake driver that dereferences a NULL pointer. Crash!

The whole idea of an operating system is that a user program can't crash it under normal conditions. Of course, you could still do something like exhausting the disk space on a partition that is used for a swap file, which would impair many operating systems, or you could find a known vulnerability, but there is no easy way to reliably crash it.

In Linux, Alt-SysRq-C will crash/restart your kernel.
In Windows, see: https://web.archive.org/web/20110513143420/http://www.dailygyan.com/2008/09/some-methods-to-crash-your-windows.html
[Ed: March 8, 2021 - Switch to Archive.org link due to site going down.]

For Windows, one possibility is to write a kernel-mode driver which locks some memory pages owned by a process and then terminates that process. This results in a "PROCESS_HAS_LOCKED_PAGES" BSOD.

Linux: even though this is not strictly crashing the OS, you can quite easily make it unusable by allocating lots of memory (and reading/writing it, so the allocation actually becomes effective and makes the OS swap heavily) and by forking lots of processes. "Fork bomb" is the keyword; it can even be done in a shell script.

I think the reason why you want to crash the OS is relevant here. Are you trying to simulate a condition for testing, or are you just plain curious?
Here are two options if you wish to recreate and automate crashing for the purpose of fault-tolerance testing.
Run inside a virtual machine (VMware, VirtualBox) and simply kill the VM process. Alternatively, you can give it very low priority, drop devices, or otherwise simulate bad things.
Use servers that have a management console. This will have an API that can simply turn off the device.
The other numerous suggestions are good if you wish to crash from within the OS itself. These software crashes can help reproduce a miscreant process. A similar set of hardware related crashes could also work (such as reducing speed on a programmable fan and overheating the CPU).
The reason behind your request is actually quite important since all the different faults will yield a slightly different result.

Try allocating chunks of memory until you have no free memory:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int alloced = 0;
    for (;;)
    {
        char *alloc = malloc(10 * 1024 * 1024); /* allocate 10 MB */
        if (alloc != NULL)
        {
            alloced += 10;
            /* you have to memset the memory, otherwise the system
               will just give you the same pages back next time */
            memset(alloc, 0xab, 10 * 1024 * 1024);
            printf("alloced %d MB\n", alloced);
        }
    }
}
Edit: I actually tried this just now on 64-bit Linux with 2 GB of RAM and 3.3 GB of swap: the screen froze and I could allocate 4950 MB, but then the process was killed by the system, and Linux landed back on its feet gracefully. So no, this doesn't work. :=)

Crashing an OS from a pure user-mode application means the kernel is vulnerable.
If the OS is well tested, this should not occur.
You can try to BSoD Windows by attacking buggy third-party drivers, sending garbage IOCTLs to them via DeviceIoControl:
DeviceIoControl Function (Windows)
http://msdn.microsoft.com/en-us/library/aa363216(VS.85).aspx

Related

Memory allocation crashes the OS. Who's to blame beside the OS [closed]

This short snippet
#include <new>

int main() {
    while (true) {
        try {
            new char[0x10000000];
        } catch (std::bad_alloc bac) {
        }
    }
}
apparently crashes the entire operating system when compiled as a 64-bit application and run on a 64-bit Windows system.
This is a valid C++ program. How can this happen? Isn't the MSVC compiler at fault here too?
All other compiler/system combinations left the system sluggish at worst.
Don't try this at home. user Christophe tried this on his system and it crashed.
To commenters: I'm not interested in debugging. This is a valid c++ program. My code is not at fault. I'm curious what might induce this behaviour.
One quite plausible scenario for "blue screen when an application is using a lot of memory" is a driver that crashes. There are two common problems here:
A driver that can't allocate memory and doesn't detect the NULL returned by the allocation function, resulting in a NULL [or close-to-NULL] memory access.
A driver that doesn't properly "lock" its memory buffers, leading to pages that the driver "needs" being swapped out by the time it comes to use them. This leads to an "IRQL Not Less or Equal" blue screen, which is caused by the OS detecting that a page-in request happens while the driver is in a mode where the scheduler is "locked". In other words, the driver asked for "no other task must run until I finish this", and then a page fault happens that requires a page to be read in from swap, which in turn requires a different task [the swapper process] to run. You can't have the cake and eat it, so the OS says "no can do": it can't continue at that point, since the driver is not able to access memory and it can't switch to another task, so there is nothing to do other than report an error and stop.
A third alternative is that the driver detects that it can't allocate memory but decides that it cannot continue, and then issues its own blue screen by calling the "I want to bluescreen now" function in Windows. Well-written drivers should not do this, but some driver writers still decide that it is a "good idea".
Sorry, it's been about 11 years since I wrote Windows drivers, so the exact error codes one can expect here have gone missing. I think 0x0A for IRQL_NOT_LESS_OR_EQUAL, and 0xC0000005 for the access of unmapped memory (NULL access etc.).
The fact that several different machines behave the same can easily be explained by many machines having either similar hardware (e.g. same printer, USB [mouse or keyboard?] or CD-ROM drive that is flaky), or by having the same antivirus software - AV software always has a driver component to "hook" into other processes and such.
Given that "really running out of memory" is not so common these days, not so skilled/experienced/conscientious developers may well not test properly with either fake out of memory or real out of memory situations, and thus not detect that their driver fails in this situation.
To give more details, we'd need to know at least the blue-screen "code" (four to five hex-numbers at the top of the screen)
Assuming there is some point to debugging this sort of failure, you can either set up Windows to store a dump (or mini-dump) when it crashes, or use a second PC to connect WinDbg as a remote debugger [or some other remote debugger] to the machine that is crashing. When a machine blue-screens, it will stop in the debugger before restarting, so you can see what the state of the system is, including the call stack of the code that caused the crash, which will typically show what component is actually causing the problem. However, unless you have a good contact with the driver developers (at the very least an email address for the relevant support people), it's unlikely much can be achieved to solve this.

If I get a segfault in Cygwin, what will that affect?

I am learning C++ using Emacs on Cygwin, and I heard that in older Unix operating systems a segfault can completely destroy critical memory. I know one idea behind Cygwin was to make Windows more like Unix, so if I get a segfault in Cygwin, will that damage anything in Windows or Cygwin?
No, it won't damage anything. It will just cause the application that triggered the segfault to crash. And probably (depending upon your Windows version and settings) you'll get an annoying popup message informing you of the crash, and asking if you want to report it.
In modern operating systems (which include Linux, Win2K+ and Mac OS X), every process can only access an area of "virtual memory" which is managed by the OS and cleaned up entirely when the process's lifetime finishes. A memory-access error that causes the process to be terminated simply means that the process tried to access a part of its virtual address space which it had not told the OS about and which the OS did not want the process to access; this does not affect anything outside that one process. There is no direct access to "real" memory for user-space processes, so you cannot really do a huge amount of harm.
(OK, I'm glossing over things, if you accidentally triggered an API call to "kill" and you had admin privileges, I suppose you could cause some harm. But you know what I mean.)

What to do when an out-of-memory error occurs? [duplicate]

Possible Duplicate:
What's the graceful way of handling out of memory situations in C/C++?
Hi,
This seems to be a simple question at first glance, and I don't want to start a huge discussion on what-is-the-best-way-to-do-this....
Context: Windows >= 5, 32 bit, C++, Windows SDK / Win32 API
But after asking a similar question, I read some MSDN and about the Win32 memory management, so now I'm even more confused on what to do if an allocation fails, let's say the C++ new operator.
So I'm very interested now in how you implement (and implicitly, if you do implement) an error handling for OOM in your applications.
If, where (the main function?), for which operations (allocations), and how you handle an OOM error.
(I don't really mean that subjectively, turning this into a question of preference, I just like to see different approaches that account for different conditions or fit different situations. So feel free to offer answers for GUI apps, services - user-mode stuff ....)
Some exemplary reactions to OOM to show what I mean:
GUI app: Message box, exit process
non-GUI app: Log error, exit process
service: try to recover, e.g. kill the thread that raised an exception, but continue execution
critical app: try again until an allocation succeeds (reducing the requested amount of memory)
hands off OOM; let STL / Boost / the OS handle it
Thank you for your answers!
The best-explained way will receive the great honour of being the accepted answer :D - even if it only consists of a MessageBox line, but explains why everything else was useless, wrong or unnecessary.
Edit: I appreciate your answers so far, but I'm still missing an actual answer. What I mean is, most of you say don't mind OOM, since you can't do anything when there's no memory left (the system hangs or performs poorly). But does that mean to avoid any error handling for OOM? Or only to do a simple try-catch in main showing a MessageBox?
On most modern OSes, OOM will occur long after the system has become completely unusable, since before actually running out, the virtual memory system will start paging physical RAM out to make room for allocating additional virtual memory and in all likelihood the hard disk will begin to thrash like crazy as pages have to be swapped in and out at higher and higher frequencies.
In short, you have much more serious concerns to deal with before you go anywhere near OOM conditions.
Side note: At the moment, the above statement isn't as true as it used to be, since 32-bit machines with loads of physical RAM can exhaust their address space before they start to page. But this is still not common and is only temporary, as 64-bit ramps up and approaches mainstream adoption.
Edit: It seems that 64-bit is already mainstream. While perusing the Dell web site, I couldn't find a single 32-bit system on offer.
You do the exact same thing you do when:
you created 10,000 windows
you allocated 10,000 handles
you created 2,000 threads
you exceeded your quota of kernel pool memory
you filled up the hard disk to capacity.
You send your customer a very humble message where you apologize for writing such crappy code and promise a delivery date for the bug fix. Anything else is not nearly good enough. How you want to be notified about it is up to you.
Basically, you should do whatever you can to avoid having the user lose important data. If disk space is available, you might write out recovery files. If you want to be super helpful, you might allocate recovery files while your program is open, to ensure that they will be available in case of emergency.
Simply display a message or dialog box (depending on whether you're in a terminal or a window system) saying "Error: out of memory", possibly with debugging info, and include an option for your user to file a bug report, or a web link to where they can do that.
If you're really out of memory then, in all honesty, there's no point doing anything other than gracefully exiting; trying to handle the error is useless, as there is nothing you can do.
In my case: what happens when you have an app that fragments memory so much that it cannot allocate the contiguous block needed to process a huge number of nodes?
Well, I split the processing up as much as I could.
For OOM, you can do the same thing: chop your processing up into as many pieces as possible and do them sequentially.
Of course, for handling the error until you get to fix it (if you can!), you typically let it crash. Then you determine that those memory allocations are failing (like you never expected) and put an error message directly to the user, along the lines of "oh dear, it's all gone wrong; log a call with the support dept". In all cases, you inform the user however you like. Though it's established practice to use whatever mechanism the app currently uses: if it writes to a log file, do that; if it displays an error dialog, do the same; if it uses the Windows 'send info to Microsoft' dialog, go right ahead and let that be the bearer of bad tidings. Users are expecting it, so don't try to be clever and do something else.
It depends on your app, your skill level, and your time. If it needs to run 24/7 then obviously you must handle it. It depends on the situation: perhaps it may be possible to try a slower algorithm that requires less heap, or maybe you can add functionality so that if OOM does occur, your app is capable of cleaning itself up and you can try again.
So I think the answer is 'ALL OF THE ABOVE!', apart from LET IT CRASH. You take pride in your work, right?
Don't fall into the 'there's loads of memory, so it probably won't happen' trap. If every app writer took that attitude you'd see OOM far more often, and not all apps run on desktop machines. Take a mobile phone, for example: it's highly likely for you to run into OOM on a RAM-starved platform like that, trust me!
If all else fails display a useful message (assuming there's enough memory for a MessageBox!)

How to avoid HDD thrashing

I am developing a large program which uses a lot of memory. The program is quite experimental and I add and remove big chunks of code all the time. Sometimes I will add a routine that is rather too memory-hungry, and the HDD will start thrashing and the program (and the whole system) will slow to a snail's pace. It can easily take 5 minutes to shut it down!
What I would like is a mechanism for avoiding this scenario: either a run-time procedure, or even something done before running the program, which can say something like "if you run this program there is a risk of HDD thrashing - aborting now to avoid slowing to a snail's pace".
Any ideas?
EDIT: Forgot to mention, my program uses multiple threads.
You could consider using SetProcessWorkingSetSize. This would be useful in debugging, because your app will crash with a fatal exception when it runs out of memory instead of dragging your machine into a thrashing situation.
http://msdn.microsoft.com/en-us/library/ms686234%28VS.85%29.aspx
Similar SO question
Set Windows process (or user) memory limit
Windows XP is terrible when there are multiple threads or processes accessing the disk at the same time. This is effectively what you experience when your application begins to swap, as the OS is writing out some pages while reading in others. Windows XP (and Server 2003, for that matter) is utter trash at this. This is a real shame, as it means that swapping is almost synonymous with thrashing on these systems.
Your options:
Microsoft fixed this problem in Vista and Server 2008. So stop using a 9 year old OS. :)
Use unbuffered I/O to read/write data to a file, and implement your own paging inside your application. Implementing your own "swap" like this enables you to avoid thrashing.
See here many more details of this problem: How to obtain good concurrent read performance from disk
I'm not familiar with Windows programming, but under Unix you can limit the amount of memory that a program can use with setrlimit(). Maybe there is something similar. The goal is to get the program to abort once it uses too much memory, rather than thrashing. The limit should be a bit less than the total physical memory on the machine; I would guess somewhere between 75% and 90%, but some experimentation would be necessary to find the optimal setting.
Chances are your program could use some memory management. While there are a few programs that do need to hold everything in memory at once, odds are good that with a little bit of foresight you might be able to rework your program to reuse or discard a lot of the memory you need.
Your program will run much faster too. If you are using that much memory, then basically all of your built-in first and second level caches are likely overflowing, meaning the CPU is mostly waiting on memory loads instead of processing your code's instructions.
I'd rather determine reasonable minimum requirements for the computer your program is supposed to run on, and during installation either warn the user if there isn't enough memory available, or refuse to install.
Telling the user every time they start the program is pointless.

Profiling a C or C++ based application that never exits [closed]

I have a question about profiling applications which never exit until we manually reboot the machine.
I have used tools like Valgrind, which report memory leaks or bloat for an application that exits after some time.
But is there any tool which can report the memory consumption, bloat, and overhead created by the application at various stages, if possible?
NOTE: I am more interested in apps which don't exit... If an app exits, I can use tools like Valgrind.
I'd consider adding a graceful exit from the program.
dtrosset's point is well put but apparently misunderstood. Add a means to terminate the program so you can perform a clean analysis. This can be something as simple as adding a signal handler for SIGUSR1, for example, that terminates the program at a point in time you decide. There are a variety of methods at your disposal depending on your OS.
There's a big difference between an application which never exits (embedded, daemons, etc.) and one that cannot be exited. The former is normal; the latter is bad design.
If nothing else, the application can be forcibly aborted (SIGKILL on *nix, terminate on Win32) and you'd get your analysis. That method doesn't give your application the opportunity to clean up before it's destroyed, so there will very likely be retained memory reported.
Profiling is intrusive, so you don't want to deploy the app with the profiler attached anyway. Therefore, include some #ifdef PROFILE_MODE code that exits the app after an appropriate amount of time. Compile with -DPROFILE_MODE, profile, and deploy without PROFILE_MODE.
Modify your program slightly so that you can request a Valgrind leak check at any point: when the command to do that is received, your program should use VALGRIND_DO_LEAK_CHECK from memcheck.h (this has no effect if the program isn't running under Valgrind).
You can use GNU gprof, but it also has the problem that it requires the program to exit.
You can overcome this by calling internal functions of gprof (see below). It may be a real "dirty" hack, depending on the version of gcc and so on, but it works.
#include "sys/gmon.h"

extern "C" // the internal functions and variables of gprof
{
    void moncontrol(int mode);
    void monstartup(unsigned long lowpc, unsigned long highpc);
    void _mcleanup(void);
    extern void _start(void), etext(void);
    extern int __libc_enable_secure;
}

// call this whenever you want to write profiling information to a file
void WriteProfilingInformation(char* Name)
{
    setenv("GMON_OUT_PREFIX", Name, 1); // set the output file name
    int old = __libc_enable_secure;     // save the old value
    __libc_enable_secure = 0;           // has to be zero to change the profile file name
    _mcleanup();                        // flush the collected profile to the file
    __libc_enable_secure = old;         // reset to the old value
    monstartup((unsigned long) &_start, (unsigned long) &etext); // restart the profiler over the text segment
    moncontrol(1);                      // enable the profiler again
}
Rational Purify can do that, at least on Windows. There seems to be a Linux version, but I don't know if it can do the same.
Some tools allow you to force a memory analysis at any point during the program's execution. This method is not as reliable as checking on exit, but it gives you a starting point.
Here's a Windows example using LeakDiag.
Have you tried GNU Gprof?
Note that in this document, "cc" and "gcc" are interchangeable. ("cc" is assumed as an alias for "gcc.")
http://www.cs.utah.edu/dept/old/texinfo/as/gprof_toc.html
Your question reads as if you were looking for top. It nicely displays (among other things) the current memory consumption of all running processes. (Limited to one page in the terminal.) On Linux, hit “M” to sort by memory usage. The man page shows more options for sorting and filtering.
I have used the Rational Purify APIs to check for incremental leaks. I haven't used the APIs on Linux. I found the VALGRIND_DO_LEAK_CHECK option in the Valgrind user manual; I think it would satisfy your requirement.
For Windows, DebugDiag does that.
It generates a report at the end with probable memory leaks.
It also has memory-pressure analysis.
And it's available for free from Microsoft.
You need stackshots. Either use pstack or lsstack, or just run it under a debugger or IDE and pause it (Ctrl-C) at random. It will not tell you about memory leaks, but it will give you a good idea of how the time is being used and why.
If time is being used because of memory leaks, you will see a good percent of samples ending in memory management routines. If they are in malloc or new, higher up the stack you will see what objects are being allocated and why, and you can consider how to do that less often.
Leak-profiling tools work by detecting memory that ends up being freed by the OS rather than by the program itself.