Compilation time between compiler and exe - c++

After testing my program with big chunks of data I noticed that when I run it from my compiler (Code::Blocks) it runs slower than when I launch the exe from its folder. Is that true?
Also, when I click the exe it doesn't tell me the time it took to compile. How can I see the time then?

When you run your application in the IDE, it usually comes with some extra baggage for debugging purposes. In Visual Studio, for instance, it attaches a debugger, and that costs quite a bit of performance.
When you run the executable directly, no debugger gets attached, which improves the speed.
Another way to improve your executable performance is through compiler optimizations. You can read about them here: https://queue.acm.org/detail.cfm?id=3372264
"Also, when I click the exe it doesn't tell me the time it took to compile. How can I see the time then?"
What the compiler does is convert your source code into executables/libraries. Basically, once you have run your source code through the compiler, you don't need to do it again and again (re-translating on every run is what interpreted languages do).
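If the time you are actually after is how long the program takes to run (the figure Code::Blocks typically prints after a run finishes), you can measure it inside the program itself. Below is a minimal sketch using std::chrono; work() is just a placeholder for whatever your own program does:

#include <chrono>
#include <iostream>

// Placeholder for the real work your program does.
static long long work() {
    long long sum = 0;
    for (long long i = 0; i < 100000000; ++i) sum += i;
    return sum;
}

int main() {
    auto start = std::chrono::steady_clock::now();
    long long result = work();
    auto end = std::chrono::steady_clock::now();

    std::chrono::duration<double> elapsed = end - start;
    std::cout << "result: " << result << "\n";
    std::cout << "run time: " << elapsed.count() << " s\n";
}

This way the timing is reported no matter whether you launch the exe from the IDE or from Explorer.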

Related

Recompiling code during program execution

If I have a series of C/C++ programs that I need to build using Make, would it mess up a running program if I made changes to the code and recompiled while one of the executables is executing? Or is all the information from the executable preloaded before runtime?
Thanks.
This depends entirely on whatever operating system you're using.
Linux is perfectly happy continuing to execute a program whose binary has been removed, and replaced with a new binary.
It is my understanding that Microsoft Windows is, on the other hand, rather grumpy in the same situation, and won't be happy if something like this is attempted.
If I am understanding correctly, you can edit and recompile the code while the program runs, and the already-running program will not change while it is running.
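A trivial way to see this behaviour for yourself (a hypothetical demo, not from the original thread): build and start the little loop below, then change the message and rebuild while it runs. On Linux the running copy keeps printing the old text and the rebuild succeeds; on Windows the link step will typically fail instead, because the running exe is locked.

#include <chrono>
#include <iostream>
#include <thread>

int main() {
    // Change this string and rebuild while the program is running;
    // the already-running instance keeps printing the old text.
    const char* version = "build 1";
    for (;;) {
        std::cout << version << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(2));
    }
}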

What VS2010 C Project settings cause exes to require Compatibility Mode

I've been running and compiling a program on my Windows 7 64-bit machine for several months now, but recently I had to change several VC project settings of the static libs that it uses and now the generated executable file requires me to run it in "Windows XP Compatibility Mode".
Compiled on Windows7 64-bit machine with Visual Studio 2010 SP1
The program I am generating is being built in Win32, debug mode.
The static lib projects specify Target Machine /X86.
When I run the program from the debugger, it starts up and runs; however, if I run it via the Windows icon, it requires XP compatibility mode.
When trying to start outside of the debugger the EXE shows up in task manager for a second then goes away.
I've tried using Microsoft Application Verifier on it, however I don't know what to look for in the output.
I've been unable to find any details on how to troubleshoot this issue, so if anyone has a way of finding what could be causing this recent Compatibility Mode requirement, I'd love to hear how it was fixed.
I have the source/projects/solutions for the majority of the static libs that I link against, as well as the exe file generated; however, for some of the external dependencies I only have the .lib, .dll, and .h files. This means I can change (most) of the project settings for the dependencies if necessary, but I need to know which ones to look for.
Thanks
To be honest, don't be afraid to make another project and copy the code files, even if it's 5 projects. You need to cut the problem in half: if it works with the new projects then it's the project files; if not, it's the code. Making projects isn't that hard really, though I'm sure it's a source of much consternation and something people avoid. If it's the projects, you can diff the files and see what happened by process of elimination. If you are really worried, copy the entire solution to another folder; always make backups.
The problem is that you probably won't be able to hoist enough information up to us to get a meaningful answer unless we get lucky, and all the answers will be shots in the dark.
So I'm going to take this question as "this happens, what can I do about it". The strategy above will get you out of it, if this used to work before. The exercise will also arm you for the future and be more productive in the long term. Go look at UAC and manifest files, a Vista+ difference that dramatically changes load and run behaviour (Linker Commands, Vista Migration Guide), if you need one specific thing to look at, but try the above process first.
----
Other generic things to try:
1) another machine
2) another install of VS
3) a simple project with one window that does nothing, to prove that everything else in your tool chain and environment is OK.
4) planting message boxes along the code path with different messages so you know where it's crapping out (see the sketch after this list).
5) turning on PDB generation in Release and running outside the debugger. If it craps out, then try debugging and see if it still craps out; at least you get to see where.
6) assume that your code is unstable and you were getting lucky when it used to work (this one is no fun). Many times things work in Debug and not in Release due to the memory layout being different. If your program is large you can find creative ways to use #ifs or whatever to eliminate code from running while having the whole thing still load. That way you can find the code that causes the bad behaviour.
7) turn off UAC and error reporting if they're on, and see if anything changes.
8) go find the "Start Without Debugging" command in Visual Studio (it's under the Debug menu), so you don't have to go run it with the icon. That's an accident waiting to happen, and this eliminates one more environmental difference. Its button looks like the run-with-debugging one, but it's hollow, a plain green triangle. My opinion is that leaving it off the toolbar by default has done more harm than good, as it has confused many, many people into thinking that launching from VS always means using the debugger.
and so on....
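For tip 4, here is a minimal sketch of the message-box breadcrumb idea (Win32 only; the function names and messages are hypothetical placeholders for your own code). Run the exe outside the debugger; the last box that appears tells you roughly where things died:

#include <windows.h>

// Placeholders standing in for your own initialization and work code.
static void risky_init() { /* ... */ }
static void risky_work() { /* ... */ }

int main() {
    MessageBoxA(NULL, "reached main", "trace", MB_OK);
    risky_init();
    MessageBoxA(NULL, "init done", "trace", MB_OK);
    risky_work();
    MessageBoxA(NULL, "work done", "trace", MB_OK);
    return 0;
}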

Linking error in VC++

I am very new to VC++ and I am running the program on VC++ for the first time.
I strictly followed the instructions given in the Microsoft Programming Visual C++ book and created one project as instructed.
About the ex03a.exe:
I looked in the path "...\Ex03a\Debug\" and no file such as ex03a.exe exists there.
I tested my VC++ setup by building and running a simple 'Test.cpp' file. I was able to run that simple C++ program and got the output, and Test.exe is there in '\Test\Debug\Test.exe'.
My Question:
How can I get rid of the error?
Almost always when VS says it isn't able to open a file, it's about opening it for writing.
And almost always this doesn't work because the file is locked.
And almost always this is because the file is an executable that is currently running :-)
This is a specialty of Windows: an exe is not simply loaded, it is locked for the whole of its run time. This is probably because exe files (actually called portable executables, for whatever reason) contain not only code but usually also an arbitrary number of resources (images, etc.), and changing the file on the fly would make the application crash hard when it tried to read one of those resources at run time.
Therefore, I suggest looking for a way to exit / close / terminate the application, so that it isn't running any longer, the file is no longer locked, and the linker can do its work.
The error message, by the way, isn't that intuitive from my point of view. This problem is so common that the message could at least hint at this likely cause. As far as I know, this hasn't been improved to date, probably because most developers have seen it before, found out why it happened, and therefore have no more trouble with it.
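If you want to see the lock for yourself, here is a minimal sketch (Windows only, a hypothetical demo rather than part of the original answer): while it is running, the program tries to open its own .exe for writing, which fails; that is exactly the wall the linker hits when it tries to overwrite a running executable.

#include <windows.h>
#include <iostream>

int main() {
    // Find the full path of this running executable.
    char path[MAX_PATH];
    GetModuleFileNameA(NULL, path, MAX_PATH);

    // Try to open it for writing, the way a relink would need to.
    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        std::cout << "Cannot open " << path
                  << " for writing, error " << GetLastError() << "\n";
    } else {
        std::cout << "Unexpectedly got write access\n";
        CloseHandle(h);
    }

    std::cin.get();  // keep the process alive so you can also try relinking now
    return 0;
}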
I see in that screenshot that you are running multiple versions of VC6.
You get that error if you rebuild the program without first shutting down the previously compiled exe that is still running.
VC tries to overwrite the exe that is currently running and runs into exactly that error.
Always close the program before you build again.

Why does C++ linking use virtually no CPU?

On a native C++ project, linking right now can take a minute or two. Yet, during this time CPU drops from 100% during compilation to virtually zero. Does this mean linking is primarily a disk activity?
If so, is this the main area where an SSD would make a big difference? But why aren't all my OBJ files (or as many as possible) kept in RAM after compilation to avoid this? With 4 GB of RAM I should be able to save a lot of disk access and make it CPU-bound again, no?
Update: so the obvious follow-up is, can the VC++ compiler and linker talk together better to streamline things and keep OBJ files in memory, similar to how Delphi does it?
Linking is indeed primarily a disk-based activity. Borland Pascal (back in the day) would keep the entire program in memory, which is why it would link so fast.
Your OBJ files aren't kept in RAM because the compiler and linker are separate programs. If your development environment had an integrated compiler and linker (instead of running them as separate processes), it could indeed keep everything in RAM.
But you would lose the ability to separate the development environment from the compilers and/or linkers - you would have to use the same compiler/linker, and you wouldn't be able to run the compiler outside the environment.
You can try installing one of those RAM disk utilities and keeping your obj directory, or even the whole project directory, on the RAM disk. That should speed it up considerably.
Don't forget to make it permanent afterwards :-D
The Visual Studio linker is largely I/O bound, but how much so depends on a few variables.
Incremental linking (common in Debug builds) generally requires a lot less I/O.
Writing a PDB file (for symbols) can consume a lot of the time. It's a specific bottleneck that Microsoft targeted in VS 2010. The PDB writing is now done asynchronously. I haven't tried it, but I've heard it can help link times quite a bit.
If you're using link-time code generation (LTCG) (common in Release builds), you have all the usual I/O initially. Then the linker re-invokes the compiler to re-generate code for sections that can be further optimized. This portion is generally much more CPU-intensive. Offhand, I don't know whether the linker spins up the compiler in a separate process and waits (in which case you'll still see low CPU usage for the linker process), or whether the compilation is done in the linker process (in which case you'll see the linker go through a heavy-I/O phase and then a heavy-CPU phase).
Using an SSD can help with the I/O bound portions. Simply having a second drive can help, too. For example, if your source and objects are all on one drive, and you write your PDB to a separate drive, the linker should spend less time waiting for the PDB writer. Having a second spinning drive has helped my current team's link times dramatically.
In Debug builds in Visual Studio you can use incremental linking, which usually lets you avoid a lot of the time spent on linking. Basically it means that instead of linking the whole EXE (or DLL) file from scratch, the linker builds upon the one you last linked, replacing only the things that changed.
This is, however, not recommended for Release builds, since it adds some runtime overhead and can result in an EXE file that is several times larger than usual.
It's hard to say what exactly is taking the linker so long without knowing how it is interacting with the OS. Thankfully, Microsoft provides Process Monitor so you can do just that.
It's helped me diagnose bugs with the Visual Studio IDE and debugger without access to source.

VS 2008 C++ build output?

Why when I watch the build output from a VC++ project in VS do I see:
1>Compiling...
1>a.cpp
1>b.cpp
1>c.cpp
1>d.cpp
1>e.cpp
[etc...]
1>Generating code...
1>x.cpp
1>y.cpp
[etc...]
The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using pre-compiled headers, I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" nor "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above).
EDIT:
The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps.
EDIT:
I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools -> Options -> Projects and Solutions -> Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed if I cancel the build before the "Generating code..." step, none of the ".obj" files will exist for the most recent set of "compiled" files. This implies that the compiler truly is handling multiple translation units together. Why is this?
Compiler architecture
The compiler is not generating code from the source directly, it first compiles it into an intermediate form (see compiler front-end) and then generates the code from the intermediate form, including any optimizations (see compiler back-end).
Visual Studio compiler process spawning
In a Visual Studio build, the compiler process (cl.exe) is executed to compile multiple source files sharing the same command-line options in one invocation. The compiler first performs "compilation" sequentially for each file (this is most likely the front-end), but "Generating code" (probably the back-end) is done together for all files once the compilation pass has finished for all of them.
You can confirm this by watching cl.exe with Process Explorer.
Why code generation for multiple files at once
My guess is that code generation is done for multiple files at once to make the build process faster, as it includes some work that can be done only once for multiple sources, like instantiating templates: there is no point instantiating the same template multiple times, since all instances but one would be discarded anyway (a small illustration follows below).
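For example (hypothetical file names, not from the original answer), both translation units below need twice<int>. With strictly separate code generation each .obj would carry its own copy and the linker would throw one away; batching code generation across the files lets the compiler emit it once.

// twice.h (hypothetical shared header)
template <typename T>
T twice(T value) { return value + value; }

// a.cpp
#include "twice.h"
int use_a() { return twice(21); }   // needs twice<int>

// b.cpp
#include "twice.h"
int use_b() { return twice(10); }   // needs twice<int> again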
Whole program optimization
In theory it would be possible to perform some cross-compilation-unit optimization at this point as well, but it is not done: no such optimizations are ever done unless enabled with /LTCG, and with LTCG the whole code generation step is done for the entire program at once (hence the name Whole Program Optimization).
Note: it seems as if WPO is done by the linker, as it produces the exe from the obj files, but this is a kind of illusion: the obj files are not real object files, they contain the intermediate representation, and the "linker" is not a real linker, since it is not only linking existing code, it is generating and optimizing code as well.
It is neither parallelization nor code optimization.
The long "Generating Code..." phase for multiple source files goes back to VC6. It occurs independent of optimizations settings or available CPUs, even in debug builds with optimizations disabled.
I haven't analyzed in detail, but my observations are: They occur when switching between units with different compile options, or when certain amounts of code has passed the "file-by-file" part. It's also the stage where most compiler crashes occured in VC6 .
Speculation: I've always assumed that it's the "hard part" that is improved by processing multiple items at once, maybe just the code and data loaded in cache. Another possibility is that the single step phase eats memory like crazy and "Generating code" releases that.
To improve build performance:
Buy the best machine you can afford
It is the fastest, cheapest improvement you can make. (unless you already have one).
Move to Windows 7 x64, buy loads of RAM, and an i7 860 or similar. (Moving from a Core2 dual core gave me a factor of 6..8, building on all CPUs.)
(And don't go cheap on the disks, either.)
Split into separate projects for parallel builds
This is where 8 CPUs (even if that's 4 physical cores + HT) with loads of RAM come into play. You can enable per-project parallelization with the /MP option, but this is incompatible with many other features.
At one time compilation meant parse the source and generate code. Now, though, compilation means parse the source and build up a symbolic database representing the code. The database can then be transformed to resolve references between symbols. Later on, the database is used as the source to generate code.
You haven't got optimizations switched on. That will stop the build process from optimizing the generated code (or at least hint that optimizations shouldn't be done... I wouldn't like to guarantee no optimizations are performed). However, the build process is still optimized. So, multiple .cpp files are being batched together to do this.
I'm not sure how the decision is made as to how many .cpp files get batched together. Maybe the compiler starts processing files until it decides the memory size of the database is large enough such that if it grows any more the system will have to start doing excessive paging of data in and out to disk and the performance gains of batching any more .cpp files would be negated.
Anyway, I don't work for the VC compiler team, so can't answer conclusively, but I always assumed it was doing it for this reason.
There's a new write-up on the Visual C++ Blog that details some undocumented switches that can be used to time/profile various stages of the build process (I'm not sure how much, if any, of the write-up applies to versions of MSVC prior to VS2010). Interesting stuff which should provide at least a little insight into what's going on behind the scenes:
http://blogs.msdn.com/vcblog/archive/2010/04/01/vc-tip-get-detailed-build-throughput-diagnostics-using-msbuild-compiler-and-linker.aspx
If nothing else, it lets you know what processes, dlls, and at least some of the phases of translation/processing correspond to which messages you see in normal build output.
It parallelizes the build (or at least the compile) if you have a multicore CPU.
Edit: I am pretty sure it parallelizes in the same way as "make -j": it compiles multiple .cpp files at the same time (since .cpp files are generally independent), but obviously links them only once.
On my Core 2 machine it shows 2 devenv jobs while compiling a single project.