What are the best methods to reduce a core dump's size?
My problem is that I have very little space dedicated to core dumps on my Linux Mint machine: around 200 MB, which is far from enough for my program to generate a decent dump. I have tried configuring the coredump filter, but only excluding the anonymous private mappings made any significant difference in size, and that also made my dump unreadable. I also tried to limit the size with ulimit, but the only result was that the dump was truncated every single time.
My question is: what is the best way to control a core dump's size?
Is there any way to keep only, say, the top 10-15 frames of a dump? And what is the use of setting the size limit with ulimit if it only produces useless core files?
Consider using Breakpad: https://code.google.com/p/google-breakpad/
From https://code.google.com/p/google-breakpad/wiki/GettingStartedWithBreakpad:
Breakpad is a library and tool suite that allows you to distribute an
application to users with compiler-provided debugging information
removed, record crashes in compact "minidump" files, send them back to
your server, and produce C and C++ stack traces from these minidumps.
Breakpad can also write minidumps on request for programs that have
not crashed.
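For a rough idea of what integration looks like on Linux, here is a minimal sketch based on the GettingStartedWithBreakpad guide; the exact headers and constructor arguments may differ between Breakpad revisions, so treat it as an outline rather than copy/paste code:

    #include "client/linux/handler/exception_handler.h"  // Breakpad Linux client

    // Called after a minidump has been written; keep the work here minimal.
    static bool DumpCallback(const google_breakpad::MinidumpDescriptor& descriptor,
                             void* context, bool succeeded) {
        // descriptor.path() is the freshly written minidump file.
        return succeeded;
    }

    int main() {
        // Write compact minidumps to /tmp instead of relying on full core files.
        google_breakpad::MinidumpDescriptor descriptor("/tmp");
        google_breakpad::ExceptionHandler handler(descriptor, /*filter=*/nullptr,
                                                  DumpCallback, /*callback_context=*/nullptr,
                                                  /*install_handler=*/true, /*server_fd=*/-1);
        // ... run the application; a crash now produces a small .dmp in /tmp ...
        return 0;
    }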
It appears that there are several related answers here on StackOverflow:
Minimal core dump (stack trace + current frame only)
This link suggests setting up a signal handler for SIGSEGV and dumping core in your own way.
and
Linux core dumps are too large!
This link suggests using setrlimit to limit the size of the dump.
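For reference, here is a minimal sketch of the setrlimit() approach from the second link. Note that, as the question above points out, the kernel simply truncates the core once the limit is hit, so this only helps if a truncated dump is acceptable to you:

    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        // Cap any core file written by this process at 64 MB (example value).
        struct rlimit rl;
        rl.rlim_cur = 64UL * 1024 * 1024;  // soft limit
        rl.rlim_max = 64UL * 1024 * 1024;  // hard limit
        if (setrlimit(RLIMIT_CORE, &rl) != 0) {
            std::perror("setrlimit(RLIMIT_CORE)");
            return 1;
        }
        // ... application code; a crash now produces a core of at most 64 MB ...
        return 0;
    }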
I need to read information (code, flags, address, etc.) from a memory.dmp file generated by a Windows BSOD, through C++. The basic idea is that status info can be requested from a remote site, and one of the requested pieces of information is some basic info from the last BSOD that occurred on the machine. So I need to open the kernel/memory dump file through C++ (I'm using MSVC 2005).
Start here, then realize using scripted commands in WinDBG is much easier.
Note: you only need WinDBG on the analysis machine, not the crashing one. You retrieve the minidump and analyse it externally. The only difficulty you will have is getting the right symbols: for Windows itself, Microsoft makes them available via its symbol servers, but the application that caused the crash may not supply the symbols you need. If they are your own applications causing the crash, set up a symbol server and use it.
I would configure Windows to create small memory dumps, which will include the parameters of the bugcheck you are after.
On XP it was 64 KB; on my Win8.1 x64 it is 256 KB. These files compress well: you should be able to get away with a zip file of 10-60 KB, depending on the bitness of the OS. If bandwidth is of utmost importance to you, you can use 7z, which compresses about 50% better than the plain zip algorithm at the expense of much longer compression times (5-6x longer), but for such small files the CPU time difference should be irrelevant.
If you do not want your users to configure dump reporting, you need to programmatically set the CrashDumpEnabled DWORD under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl
to 3 for a small memory dump.
For an explanation of the values see http://technet.microsoft.com/en-us/library/cc976050.aspx
0 Debugging information is not written to a file.
1 Complete crash dump is written to a file.
2 Kernel memory dump is written to a file.
3 Small memory dump is written to a file.
You will then get a small memory dump, by default under %SystemRoot%\Minidump.
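If you want to set it from code, something along these lines should work. This is only a sketch, assuming the standard CrashDumpEnabled value; it needs administrative rights and should be verified against your target Windows version:

    #include <windows.h>
    #include <cstdio>

    int main() {
        HKEY key;
        LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                                "SYSTEM\\CurrentControlSet\\Control\\CrashControl",
                                0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS) {
            std::fprintf(stderr, "RegOpenKeyExA failed: %ld\n", rc);
            return 1;
        }
        DWORD smallDump = 3;  // 3 = small memory dump (see the table above)
        rc = RegSetValueExA(key, "CrashDumpEnabled", 0, REG_DWORD,
                            reinterpret_cast<const BYTE*>(&smallDump),
                            sizeof(smallDump));
        RegCloseKey(key);
        return rc == ERROR_SUCCESS ? 0 : 1;
    }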
Before I ask my question, let me briefly describe how I get backtraces from my clients.
I write a C++ application on Linux (openSUSE).
This application is launched by a script (the launcher), and if the application crashes, a core dump is generated (because of ulimit -c unlimited).
The launcher then generates a backtrace from the core file with gdb and restarts the application, which gives the user the possibility to send a crash report containing the backtrace.
Now, my problem and my question:
The problem: the core dump can be quite big (up to 5 or 10 GB), and copying the core file takes a while (up to 2 minutes). This is a problem for my clients: the gap between the crash and the application's auto-restart is too long.
The question: I generate the backtrace with gdb from 1) my program and 2) the core file.
When the application crashes, a custom script is called via Piping core dumps to a program. Could I, in this program, directly attach gdb to the "dying" program and generate the backtrace, saving the time spent copying the core file to the HDD?
Thanks in advance.
Just a remark:
I have done all I can to minimise the size of the core dump (no debug symbols, and only dumping what is needed for a backtrace; see Controlling which mappings are written to the core dump).
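To make the piping approach concrete, here is a hypothetical sketch of what such a handler could look like: it spools the core into tmpfs rather than the HDD and runs gdb from there, which avoids the slow disk copy. Whether gdb can attach directly to the dying process while the kernel is dumping it is exactly the part I am unsure about; the path and program names below are made up.

    // Hypothetical pipe handler, registered via
    //     echo '|/usr/local/bin/fastbt %p' > /proc/sys/kernel/core_pattern
    // The kernel streams the core image to stdin; %p expands to the crashing PID.
    #include <cstdio>
    #include <cstdlib>
    #include <string>
    #include <unistd.h>

    int main(int argc, char** argv) {
        if (argc < 2) return 1;
        std::string pid = argv[1];

        // Resolve the crashed binary's path while the dying process still exists.
        char exe[4096] = {0};
        std::string exe_link = "/proc/" + pid + "/exe";
        if (readlink(exe_link.c_str(), exe, sizeof(exe) - 1) <= 0) return 1;

        // Copy the core image from stdin into tmpfs (RAM-backed, no slow disk copy).
        std::string core = "/dev/shm/core." + pid;
        std::FILE* out = std::fopen(core.c_str(), "wb");
        if (!out) return 1;
        char buf[1 << 16];
        ssize_t n;
        while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
            std::fwrite(buf, 1, static_cast<size_t>(n), out);
        std::fclose(out);

        // Produce the backtrace, then discard the core.
        std::string cmd = std::string("gdb -batch -ex 'thread apply all bt' ") + exe +
                          " " + core + " > /var/tmp/backtrace." + pid + " 2>&1";
        std::system(cmd.c_str());
        unlink(core.c_str());
        return 0;
    }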
There is an excellent answer, in copy/paste form, on how to obtain a stack trace without generating a core dump here.
It will generate a stacktrace to stderr, but you could easily do something different like post the stacktrace data using HTTP etc.
You do not have to ship a whole gdb just to get a backtrace of a crashing program. Just intercept a signal such as SIGBUS and, when signalled, you can use backtrace(), or simply call gstack with the PID of your program.
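A minimal sketch of that in-process approach follows; the usual caveats about doing work in a signal handler apply, and you typically need to link with -rdynamic to see function names in the output:

    #include <execinfo.h>
    #include <csignal>
    #include <unistd.h>

    // Fatal-signal handler: capture up to 32 frames and write them to stderr.
    // backtrace_symbols_fd() is used because it does not allocate memory.
    static void crash_handler(int sig) {
        void* frames[32];
        int n = backtrace(frames, 32);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);
        _exit(128 + sig);  // do not return into the faulting code
    }

    int main() {
        signal(SIGSEGV, crash_handler);
        signal(SIGBUS,  crash_handler);
        // ... application code ...
        return 0;
    }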
I have an application (I didn't write it) that is producing APPCRASH dumps in C:\Windows\SysWOW64. While dumping, the application is crippled, operating at the bare minimum capacity needed to not lose data. The issue is that these dumps are so large that the system spends most of its time writing them; the application is falling far behind in processing and will start losing data soon.
The plan is to either entirely disable it, or mount it to a RAM drive and purge them as soon as they hit the RAM drive.
Now I've looked into using this key:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb787181%28v=vs.85%29.aspx
But all it does is generate a second dump now, instead of redirecting the original.
The dump is named:
dump-2013_03_31-15_23_55_772.dmp
This is generally the realm of developers on Windows (with stuff like C/C++), so I'd like to hit them up; I don't think ServerFault could get me any answers on this.
Additionally: it's not cycling dump files (they'll fill the 20 GB left on the hard drive), so I'm not sure if this is Windows behavior or custom code in the app (if it is... ick!).
To write a dump file, an app has to call the function MiniDumpWriteDump, so this is not a behavior of the system or something you can control; it is application-driven. If it dumps on crashes, it uses SetUnhandledExceptionFilter to set its own handling routine before(!) the OS takes over. Unfortunately, I haven't found a way to overwrite this handler from another process, so the only hope left is that there is a registry entry for the app that switches the behavior or changes the path (as my own applications have, for exactly the reason you describe).
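For illustration, here is a sketch of the pattern described above: the crashing application itself (not the OS) registers the filter and chooses the dump path and dump size, which is why this behaviour can only really be changed inside the app. The R:\ path below is just a placeholder for a RAM drive:

    #include <windows.h>
    #include <dbghelp.h>
    #pragma comment(lib, "dbghelp.lib")

    static LONG WINAPI WriteMiniDump(EXCEPTION_POINTERS* info) {
        // Write to a configurable location (here a hypothetical RAM drive R:\).
        HANDLE file = CreateFileA("R:\\crash.dmp", GENERIC_WRITE, 0, nullptr,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file != INVALID_HANDLE_VALUE) {
            MINIDUMP_EXCEPTION_INFORMATION mei;
            mei.ThreadId = GetCurrentThreadId();
            mei.ExceptionPointers = info;
            mei.ClientPointers = FALSE;
            // MiniDumpNormal keeps the file small (thread stacks, not full memory).
            MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                              MiniDumpNormal, &mei, nullptr, nullptr);
            CloseHandle(file);
        }
        return EXCEPTION_EXECUTE_HANDLER;
    }

    int main() {
        SetUnhandledExceptionFilter(WriteMiniDump);
        // ... application code ...
        return 0;
    }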
I'm currently working on trying to resolve a crash/exception on an unmanaged C++ application.
The application crashes with some predictability. The program basically processes a high volume of files combined with running a bunch of queries against the Access DB.
It's definitely occurring during a file access. The error message is:
"failed reading. Network name is no longer available."
It always seems to be crashing in the same lower level file access code.
It's doing a lower level library Seek(), then a Read(). The exception occurs
during the read.
To further complicate things, we can only get the errors to occur when we're running a disk balancing utility. The utility essentially examines file access history and moves more frequently/recently used files to faster storage, while files that are used less frequently are moved to a slower retrieval area. I don't fully understand the architecture of this particular storage device, but essentially it's got an area for "fast" retrieval and one for "archived/slower".
The issues are more easily/predictably reproducible when the utility app is started and stopped several times. According to the disk manufacturer, we should be able to run the utility in the background without affecting the behaviour of the client's main application.
Any suggestions on how to proceed here? There are theories floating around that it's somehow related to latency on the storage device. Is there a way to prove or disprove that? We've written a small sample app that basically goes out and accesses/reads a whole mess of files on the drive; we've (so far) been unable to reproduce the issue, even running with SmartPools. My thought for testing the latency theory is to have multiple apps reading volumes of files from disk while the utility application is running.
The memory usage and CPU usage do not look out of line in the Task Manager.
Thoughts? This is turning into a bit of a hairball.
Thanks,
JohnB
Grab your debug binaries.
Set up Application Verifier and add your application to its list.
Hopefully wait for a crash dump.
Put that through WinDBG.
Try the command: !avrf
See what you get....
I am trying to obtain the memory working set value for a given PID in my C++ application running on LINUX. In Windows I can get this info using GetProcessWorkingSetSize function. Is there anything like this function I can call in LINUX?
The only sensible solution that comes to mind is accessing the relevant information via the /proc filesystem. It may seem odd that a process has to read this information out of /proc, but I don't know of any other system call that would make it easier.
The information you're probably most interested in is located in /proc/[pid]/statm, which includes:
total program size,
resident set size,
shared pages,
text (code) size,
library (unused in Linux 2.6),
data and stack size,
dirty pages (unused in Linux 2.6).
Keep in mind that all those measurements are given in the number of pages.
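As a small sketch, the following reads /proc/[pid]/statm and converts the resident set size (the closest analogue to the Windows working set) from pages to bytes:

    #include <fstream>
    #include <iostream>
    #include <string>
    #include <unistd.h>

    int main(int argc, char** argv) {
        // Use the PID given on the command line, or "self" for the current process.
        std::string pid = (argc > 1) ? argv[1] : "self";
        std::ifstream statm("/proc/" + pid + "/statm");

        long size = 0, resident = 0, shared = 0, text = 0, lib = 0, data = 0, dirty = 0;
        statm >> size >> resident >> shared >> text >> lib >> data >> dirty;

        long page_size = sysconf(_SC_PAGESIZE);  // bytes per page on this system
        std::cout << "resident set size: " << resident * page_size << " bytes\n";
        return 0;
    }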