I'm trying to run instrumentation profiling on a fairly big C++ project (around 40,000 source files in the whole solution, though the project under profiling has around 200 source files).
Each time I run the profiling, it creates a huge report of around 34 GB, and then, when it goes to analyze it, it tries (I think) to load the whole file into RAM.
Obviously, that renders the computer unusable, and I have to stop the analyzer before it completes.
Any suggestions?
Hi there, hope this response isn't too late. This is Andre Hamilton from the Visual Studio profiler team. Analyzing such a large report file does take some time. Instrumentation produces that much data because all of your functions are instrumented. By instrumenting only a few functions or a specific binary you might be able to speed things up, if you don't mind profiling via the command line. This will produce a .vsp file which you can then open in VS and use as normal. Let's say that your project requires n binaries to run, and that of these binaries you are interested in the performance of binary ni.
Open up a VisualStudio command prompt
1) Run vsinstr ni.dll to instrument the entire binary, or use the /include or /exclude options of vsinstr to further restrict which functions are instrumented (see the example sequence after these steps). N.B. if your binary was signed, you will need to re-sign it after instrumenting.
2) Start the profiler in instrumentation mode via the given command:
vsperf /start:trace /output:myinstrumentedtrace.vsp
3) Launch your application.
4) When you are ready to stop profiling:
vsperf /shutdown
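For example, assuming the instrumented binary is ni.dll, the host application is MyApp.exe, and you only care about functions in a (hypothetical) Physics class, the whole session might look like this:

vsinstr ni.dll /include:Physics::*
vsperf /start:trace /output:myinstrumentedtrace.vsp
MyApp.exe
vsperf /shutdown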
Hope this helps
(Note: I assume you have a licensed copy of VS to both collect and analyze the data.)
This is a general problem when profiling large or "dense" programs. You need to restrict the profiler to collect data only from certain units of your code base. In Microsoft's profilers this is done by using Include/Exclude switches either at command line or in the IDE.
There is a bug in VS: most of the profiling work is done on the UI thread, which makes VS unusable, as mentioned in http://channel9.msdn.com/Forums/TechOff/260091-Visual-Studio-Performance-Analysis-in-10-minutes
You may give VS 2012 a try to see if the problem is resolved, but there is no doubt that loading a 34 GB file is not a simple task, and that is also what renders the system unusable. So, as John suggested above in the comment section, break your code into smaller components and then profile them. Hope it helps!
I have been doing some profiling of a physics application I wrote, and I've noticed when I profile it, it runs faster and perhaps smoother than without the profiler. Note that I am NOT running the program in the debug configuration or with the debugger attached.
I measured the difference, and I found the program runs ~50% faster under the profiler. I don't consider this a duplicate because the other question doesn't make it clear whether he/she was running it with the debugger attached, and the top answer assumes that's the case (and the 20x speedup strongly indicates it would be the correct answer in most cases).
Another answer suggests a "heisenbug", but that's kind of a catch-all hypothesis (I'm still going to investigate along this line).
Is it possible that Visual Studio does something that prevents other applications from interfering with my application's compute or memory resources (in order to get a "fairer" result)?
Visual Studio's "CPU Usage" profiler appears to disregard laptop power usage settings, so if you run an application on a laptop that is trying to conserve battery power, it will run slower than if you run the profiler on it.
I discovered this when I got home from work: I noticed the speed difference had disappeared. On a hunch, I unplugged my laptop and tried the test several more times. The speed difference returned. What's more, under the profiler, the application runs at about the same speed plugged in or not.
I was not able to find any sources on this, but I'll be happy to edit them in if someone can find some.
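If you want comparable numbers outside the profiler, one workaround is to pin the machine to the High performance power plan while measuring. A sketch, assuming the stock High performance scheme GUID that ships with most Windows installs:

powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c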
If you use threading in your code, this can be caused by the system timer resolution in Windows.
The default Windows timer resolution is 15.6 ms.
When you are running the profiler, this is reduced to 1 ms, so your program runs faster.
Check out this answer.
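You can reproduce the effect yourself by requesting a 1 ms timer resolution at startup, which is (roughly) the side effect the profiler has. A minimal sketch using the winmm timer API:

#include <windows.h>
#include <mmsystem.h>                // timeBeginPeriod / timeEndPeriod
#pragma comment(lib, "winmm.lib")

int main()
{
    timeBeginPeriod(1);   // request 1 ms system timer resolution
    // ... run your timing-sensitive code here; Sleep() and waits now
    // wake up much closer to schedule than with the 15.6 ms default
    timeEndPeriod(1);     // always pair with timeBeginPeriod
    return 0;
}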
I've experienced this with every version of Visual Studio starting from 2012 (2012, 2013, 2015 Preview), on multiple computers and multiple projects, but I haven't figured out how to fix it:
Whenever I'm debugging a 64-bit(?) C++ console program, after a few minutes and seemingly completely randomly (when I'm not clicking or typing anything), the console window for the program spontaneously closes and I can no longer debug or step through the program with Visual Studio. When I press Stop and attempt to restart debugging, I usually get ERROR_NETWORK_UNREACHABLE:
// MessageId: ERROR_NETWORK_UNREACHABLE
// MessageText:
// The network location cannot be reached. For information about network troubleshooting, see Windows Help.
#define ERROR_NETWORK_UNREACHABLE 1231L
If I try to attach to the process manually I get the error:
Unable to attach to the process.
The only fix I've found for this is to restart Visual Studio. I can't find any other way to fix it, and I've tried running Process Monitor but haven't found anything.
What causes this problem and how can I fix it?
(?) Upon further checking it seems that this only happens in 64-bit mode, but I'm not 100% sure.
Ok, this is just so wrong
I also had issues with this bug, and in my case it occurred every other debug session. Which meant debug -> stop -> debug -> bug -> restart Visual Studio -> back to start (repeat every minute during the whole day).
Needless to say, I was driven to find a solution. So yesterday I tried procmon, spent hours looking at API Monitor differences, looked at plugins, netstat, etc., etc. And found nothing. I gave up.
Today
Until today.
To track down a stupid bug in my program today, I launched Application Verifier. For my application, I ran the 'Basics' tests and clicked Save. After a few hours this led me to the bug in my program, which was something like this (extremely simplified version):
void* dst = _aligned_malloc(4096, 32);  // allocates a 4096-byte buffer
memcpy(dst, src, 8192);                 // copies 8192 bytes into it: heap overflow
Obviously this is a bug and obviously it needed fixing. I noticed the error after putting a breakpoint on the memcpy line, which was not executed.
After a stop and 'debug' again I was surprised to find that I could actually debug the program for the second time. And now, several hours later, this annoying bug here hasn't re-emerged.
So what appears to be going on
So... apparently data from my program is bleeding through into the data or execution space of the debugger, which in turn appears to generate the bug.
I see you thinking: No, this shouldn't happen... you're right; but apparently it does.
So how do you fix it? Basically, fixing your program (more specifically: its heap corruption issues) seems to make the VS debugger bug go away. Using appverif.exe (it's in Debugging Tools for Windows) will give you a head start.
Why this works
Since VS2012, VC++ uses a different way to manage the heap. Hans Passant explains it here: Does msvcrt uses a different heap for allocations since (vs2012/2010/2013).
What basically happens is that heap corruption will break your debugger. The AppVerifier basic settings will ensure that a breakpoint is triggered just before the application does something to corrupt the heap.
So, what happens now is that before the process corrupts the heap, a breakpoint triggers instead, which usually means you will terminate the process. The net effect is that the heap is still intact when you terminate your program, which means your debugger will still function.
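If you prefer the command line to Application Verifier's GUI, its documented -enable/-for syntax can switch on the same heap checks; MyApp.exe below is a placeholder:

appverif -enable Heaps -for MyApp.exe
(reproduce and fix the corruption, then clean up)
appverif -disable Heaps -for MyApp.exe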
"Test"
Before using appverifier -- bug triggered every 2 minutes
While using appverifier -- VS debugger has been stable for 5 days (and counting)
This is an environmental problem, of course. Always hard to troubleshoot; SysInternals' utilities like Process Monitor and Process Explorer are your primary weapons of choice. Some non-intuitive ways that a network error can be generated while debugging:
Starting with VS2012, the C runtime library had a pretty drastic modification that can cause very hard to diagnose misbehavior if your program corrupts the heap, much like #atlaste describes. Since time immemorial, the CRT always created its own heap; the underlying call was HeapCreate(). No more: it now uses GetProcessHeap(). This is very convenient, much easier now to deal with DLLs that were built with /MT. But it comes with a pretty sharp edge: you can now easily corrupt memory owned by Microsoft code. Not strongly indicated if you can't reattach a 64-bit program; there you'd have to kill msvsmon.exe to clear up the corruption.
The Microsoft Symbol Server supplies PDBs for Microsoft executables. They normally have their source+line-number info stripped, but not all of them; notably not the CRT, for example. Those PDBs were built on a build server owned by DevDiv in Redmond that had the source code on the F: drive. A few are around that were built from the E: drive; Patterns+Practices uses that (unlikely in a C++ program). Your debugger will go look there to try to find source code. That usually ends well, it gives up quickly, but not if your machine uses those drive letters as well. Diagnose by clearing the symbol cache and disabling the symbol server with Tools + Options, Debugging, Symbols.
The winapi suffers from two nasty viral infections it inherited from another OS that add global state to any process. The PATH environmental variable and the default working directory. Use Control Panel + System + Advanced + Environment to have a look at PATH, copy/paste the content of the intentionally small textboxes into a text editor. Make sure it is squeaky clean, some paralysis at the usual mess is normal btw. Take no prisoners. Having trouble with the default directory is much harder to troubleshoot. Both should pop out when you use Process Monitor.
No slamdunk explanations, it is a tough problem, but dark corners you can look in.
I have the same problem. I thought it was related to 64-bit console apps, where it is very easily triggered with almost any debug session, but it also happens on 64-bit Windows apps, and now I am seeing it on 32-bit Windows apps. I am running Windows 8.1 Pro on a single desktop with the latest version of VS 2013 and no remote debugging. My (added) extensions are Visual Assist, Advanced Installer, ClangFormat, Code Alignment, Code Compare, Duplicate Selection, Productivity Power Tools 2013, and VisualSVN.
I discovered that the "Visual Studio 2013\Settings\CurrentSettings.vssettings" file gets corrupted. You can delete this file and recreate it by restarting VS, or you can try to edit the XML. I keep a copy of a good settings file to swap in when it gets corrupted again.
In my case, the corrupted line begins with
</ToolsOptionsSubCategory><ToolsOptionsSubCategory name="XAML" RegisteredName="XAML"
... and it is extremely long (I think this is why it is prone to corruption).
I just disabled the following in the menu:
Tools > Options
Debugging > Edit and Continue
Native-only options > Enable native Edit and Continue
and now it no longer gives that error which was preventing the debuggee application from starting.
I also had the same problem with VS2015. It was so frustrating that a simple Hello World program gave this error when I ran the debugger for the second time. I tried uninstalling and reinstalling, which didn't work.
Finally, the solution mentioned in https://social.msdn.microsoft.com/Forums/vstudio/en-US/8dce0952-234f-4c18-a71a-1d614b44f256/visual-studios-2012-cannot-findlaunch-project-exe?forum=vsdebug worked: reset all Visual Studio settings using Tools -> Import and Export Settings. Now the issue is not occurring.
I need to read information (code, flags, address, etc.) from a memory.dmp file generated by a Windows BSOD, through C++. The basic idea is that status info can be requested from a remote site, and one of the requested pieces of information is some basic info from the last BSOD that occurred on the machine. Thus I need to open the kernel/memory dump file through C++ (I'm using MSVC 2005).
Start here, then realize using scripted commands in WinDBG is much easier.
Note: you only need WinDBG on the analysis machine, not the crashing one. You retrieve the minidump and analyse it externally. The only difficulty you will have is getting the right symbols; for Windows, Microsoft makes them available via their symbol servers, but applications that caused the crash may not supply the symbols you need. If they are your own applications causing the crash, set up a symbol server and use it.
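For example, the scripted analysis can be as small as a single command line; cdb is the console twin of WinDBG and ships in the same package, and the dump path below is just the default location:

cdb -z C:\Windows\MEMORY.DMP -c "!analyze -v; q" > analysis.txt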
I would configure Windows to create small kernel memory dumps, which will include the bugcheck parameters you are after.
On XP it was 64 KB; on my Win8.1 x64 it is 256 KB. These files compress well. You should be able to get away with a zip file of 10-60 KB depending on the bitness of the OS. If bandwidth is of utmost importance to you, you can use 7z, which compresses about 50% better than the plain zip algorithm at the expense of much longer compression times (5-6x longer), but for such small files the CPU time difference should be irrelevant.
If you do not want your users to configure dump reporting, you need to set the CrashDumpEnabled DWORD under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl
to 3 for a small kernel dump programmatically (a sketch follows the value table below).
For an explanation of the values see http://technet.microsoft.com/en-us/library/cc976050.aspx
0 Debugging information is not written to a file.
1 Complete crash dump is written to a file.
2 Kernel memory dump is written to a file.
3 Small memory dump is written to a file.
You will then get by default a small kernel dump in %SystemRoot%\MEMORY.DMP.
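If you want to flip that registry value from code, a minimal C++ sketch (assuming the process runs elevated; error handling trimmed for brevity):

#include <windows.h>

int main()
{
    HKEY key;
    DWORD value = 3; // 3 = small memory dump
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SYSTEM\\CurrentControlSet\\Control\\CrashControl",
                      0, KEY_SET_VALUE, &key) == ERROR_SUCCESS)
    {
        RegSetValueExA(key, "CrashDumpEnabled", 0, REG_DWORD,
                       reinterpret_cast<const BYTE*>(&value), sizeof(value));
        RegCloseKey(key);
    }
    return 0;
}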
I am trying to create a way to transfer the debug information of a C++ project to a remote location for testing. In the current development cycle, small changes to the code require the entire binary (hundreds of MB in size, mostly debug info) to be transferred.
Currently my approach to addressing this is by splitting the debugging information from the object files (the size of which without the debugging info is manageable on my connection) using -gsplit-dwarf and then diffing the debug files against a copy of the build currently on the remote box.
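For reference, the split itself is just a compiler flag; each object file then gets a side-car .dwo file holding the bulk of the debug info:

g++ -g -gsplit-dwarf -c foo.cpp   # produces a small foo.o plus foo.dwo with the debug info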
The aim is to have a set of patches for the debug files of a project so that new code can be debugged at a remote location. The connection between the remote location and the local machine is slow, so minimizing the size of the patches is paramount, but that should also be balanced against the run time of the tool. I have looked into bsdiff and xdelta as potential solutions and have run into a conundrum: xdelta is fast but produces patches that are too large, while bsdiff is perfect in terms of size but its run time and memory requirements are a little higher than I would like.
Is there a tool or approach I am missing, or am I just going about this the wrong way? Some alternative to bsdiff and xdelta, perhaps? I know that a tool like gdbserver won't work in this situation because of some of the requirements we have with the actual debugging. Could some alteration of bsdiff help the performance? And indeed, if the approach I'm using is sound, what would be a good way to keep a copy of the build on the remote machine to diff against?
The simplest way is to use "strip" to copy the debuginfo into a separate ".debug" file, and then use "strip" again to remove the debug info from the executable that you will deploy. The "strip" manual explains how to do this, look for the "--only-keep-debug" option.
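In concrete terms the recipe looks roughly like this (shown with objcopy, which accepts the same flags; foo is a placeholder for your binary):

objcopy --only-keep-debug foo foo.debug     # copy the debug info out
objcopy --strip-debug foo                   # remove it from the binary you deploy
objcopy --add-gnu-debuglink=foo.debug foo   # let gdb locate foo.debug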
After you do this, you can tell gdb about the separate debug info in various ways. The very best way is to use the "build-id" feature. This is what modern Linux distros do. However there are other ways as well. There's a whole section in the gdb manual about separate debug files.
The key point here is that you can start gdb on the stripped executable and it will find the separate debug info automatically. This data can all be local, so you won't need to deploy the debug info.
If you still care about shrinking debug info even when this is done, you can look at the "dwz" tool. This is a DWARF compressor. However this usually only matters if you plan to ship the debug info somewhere -- distros use it to make it easier to download debug info, but ordinary users won't really see the need.
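Typical dwz usage is a one-liner per file, or a multifile run that factors DWARF shared between binaries into a common file; the names below are placeholders:

dwz foo.debug                          # compress in place
dwz -m common.debug a.debug b.debug    # factor shared DWARF into common.debug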
OK, so there are numerous questions around, asking for a "Visual Studio equivalent on Linux" or a variation of this question. (here, here, here, ...)
I would like to focus on one aspect and ask how the debugging workflow possibly differs between systems, specifically between the fully-integrated-IDE approach used by Visual Studio(-like) systems and a possibly more "separate", toolchain-oriented approach.
To this end, let me present what I consider a short description of the "Visual Studio Debugging Workflow":
Given an existing project
I open up the project (one single step from a user perspective)
I navigate to the code I want to debug (possibly by searching my project files, which is simply done by opening the Find in Files dialog box).
I put a breakpoint at line (a), simply by putting the cursor on the line and hitting F9
I put a "tracepoint" at line (b), by adding a breakpoint there and then changing the breakpoint properties so that the debugger doesn't stop, but instead traces the value of a local variable.
I hit F5, which automatically compiles my executable and starts it under the debugger, and then I wait until the program stops at (a), meanwhile monitoring the trace window for output from (b).
When the debugger finally stops at (a), my screen automatically shows me the following information in (one-time preconfigured windows) side-by-side at the same time:
Current call stack
values of the most recently changed local variables
loaded modules (DLLs)
a list of all active breakpoints with their locations
a watch window with the last watch expressions I entered
A memory window to examine raw memory contents
a small window displaying current register values
Plus/minus some features, this is what I would expect under Eclipse/CDT under Linux also.
How would this workflow and the presented information be achieved when developing with VIM, Emacs, gdb/DDD and the like?
This question isn't really about whether some tool has a given feature or not; it's about the fact that development/debugging work uses a combination of features, with a multitude of options available at your fingertips, and how you access this information when not using a fully integrated IDE.
I think the answer isn't just about which software you use, but also about what methodology you use. I use Emacs and depend on TDD for most of my debugging. When I see something fail, I usually write tests filling in the gap which I (obviously) have missed, and check every expectation that way. So the occasions when I use the debugger are few and far between.
When I do run into problems I have several options. In some cases I use valgrind first; it can tell me right away if there are memory-related problems, eliminating the need for the debugger. It will point straight to the line where I overwrite or delete memory that should be left alone. If I suspect a race condition, valgrind is pretty good at that too.
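For reference, the two valgrind modes I mean (./app is a placeholder for your binary):

valgrind --tool=memcheck ./app    # bad reads/writes, use-after-free, leaks
valgrind --tool=helgrind ./app    # potential data races between threads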
When I use the debugger I often run it right in Emacs, through GUD mode. It gives me a view with the stack, local variables, the source code, breakpoints, and a window where I can command the debugger. It usually involves setting a couple of breakpoints, watching some memory or an evaluation, and stepping through the code. It is pretty much like using the debugger in an IDE. The GDB debugger is a powerful beast, but my problems have never been large enough to need to invoke its full power.
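To map this back to the workflow in the question: the breakpoint at (a) and the "tracepoint" at (b) translate to something like the following gdb session, whether typed into GUD's command window or plain gdb. The file, line numbers, and variable name are made up for illustration; breakpoint 2 gets attached commands that silently print a value and continue, mirroring the VS tracepoint:

(gdb) break physics.cpp:120
(gdb) break physics.cpp:80
(gdb) commands 2
> silent
> printf "velocity = %g\n", velocity
> continue
> end
(gdb) run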