Compile time is high in C++ [closed] - c++

I am writing code in C++. Compared to my friends', my compile time is high. What could be the reason for this? It takes around 4 seconds, but for my friends it compiles almost immediately.

This is an impossible question to answer definitively; there are many factors. Some things to look out for:
Heavy use of template metaprogramming: are you using something like Boost Spirit?
Are the header files including other headers where a forward declaration would do? (See the sketch after this answer.)
Are there unneeded headers?
Is there just a lot of code?
Is your build system correct? Is it recompiling code that hasn't changed? Look at your makefile if you haven't already.
Is their system better than yours?
Finally, I would love my code to compile in 4 seconds.
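To illustrate the forward-declaration point from the list above, a minimal sketch (the class names are made up):

// widget.h, naive version: pulls in the full definition of Gadget,
// so every change to gadget.h recompiles everything that includes widget.h.
#include "gadget.h"
class Widget {
    Gadget* gadget_;
};

// widget.h, with a forward declaration: a pointer member only needs the name.
class Gadget;   // forward declaration; include gadget.h in widget.cpp instead
class Widget {
    Gadget* gadget_;
};

Only widget.cpp now needs the full #include, so edits to gadget.h no longer ripple through every user of widget.h.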

Do you have optimizations turned on? That will slow things down.
Do you have a Temp directory mapped to a network drive? That will slow things down.
Are you linking from a network drive? That will slow things down.

Talk about an open-ended question, but here are some quick reasons:
Slow computer (CPU/Disk etc)
Too little memory.
Different compiler (they vary greatly in speed).
Precompiled headers vs. non precompiled headers.
Different options (RTTI/optimization/...)
Especially in Visual Studio: plugins slowing your IDE down.
Code structure (are you including unnecessary headers?)
Compiling everything every time vs. using Makefiles or a smart IDE.

Related

How do I speed up C++ execution with inline functions? [closed]

I am porting a FORTRAN Monte Carlo program to C++ and found that, once fully ported, the C++ program runs nearly twice as slowly as the FORTRAN program. I am drafting a second version of the C++ program in which many of the functions are inlined through the use of class structures to speed things up; however, some of the functions are upwards of 40 or 50 lines, and I have read that inlining large functions can slow down the program. What is too large when it comes to inlining functions, and how can I speed up a C++ program (without multiprocessing) so that it executes as fast, or nearly as fast, as the FORTRAN program?
Inlining in C++ is only a suggestion to the compiler. If the function is too complicated, it will not be inlined by most modern compilers. The compiler will do what it can to optimize the calls in any case, even without the "inline" keyword, so long as the implementation is available when it's being compiled. There are also compilers that will inline across compilation units, but this is less common.
In any case, it's unlikely that function calls are dominating your processing time. You probably want to profile your code to figure out where the bottleneck actually is before putting too much effort into micro-optimizations that the compiler is probably doing for you anyway.
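As a small illustration of that "suggestion" point (a sketch; the function is made up):

#include <iostream>

// Trivial function: almost certainly inlined at -O2, keyword or not.
inline double square(double x) { return x * x; }

int main() {
    // Most optimizing compilers replace this call with the multiplication
    // itself; a 40- or 50-line function marked inline may still be emitted
    // as an ordinary call if the compiler judges inlining unprofitable.
    std::cout << square(3.0) << '\n';
}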

Why does the OS allocate more memory than necessary for a small executable? [closed]

OK, so I wrote this minimal C code and compiled it to an executable in release mode:
int main()
{ for (;;); } // this is here to make the app hang
The executable file itself is 6 KB, and I didn't include any headers. Even if the entire EXE file were copied into RAM, it apparently shouldn't occupy more than 7 KB; nevertheless, the OS allocates 320 KB. Why is that? I'm using Windows.
It seems like you're confused and mixing a wide variety of concepts. Let me try to explain:
That program is very clearly an infinite loop, which explains why it doesn't end (or what you call "hang").
The compiler/linker still needs to write a valid executable for your OS, and this involves a bunch of headers and metadata, which could easily account for the 6 KB.
320 KB in a mainstream OS at this point in time is almost nothing, and it is mostly OS overhead: the process still gets page tables, a default stack, a heap, and the system DLLs the loader maps in. It is hard to say more without knowing which OS.
I strongly encourage you to disassemble your code. Another option is to play with your compiler options to optimize for executable size. I think the bottom line is that you're expecting that, since your program doesn't do anything useful, its size should be zero, and that is an unreasonable expectation.
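If you want to see where the memory actually goes, here is a minimal sketch using the Win32 process-memory API (my assumption of what you want to measure; it requires linking against psapi.lib):

#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main() {
    PROCESS_MEMORY_COUNTERS pmc;
    // Ask the OS how much memory this process is really using.
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        std::printf("working set:    %zu KB\n", pmc.WorkingSetSize / 1024);
        std::printf("pagefile usage: %zu KB\n", pmc.PagefileUsage / 1024);
    }
}

Tools like Process Explorer break the same numbers down further and will show the mapped system DLLs and default stack that typically account for most of the total.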

Fast on-demand C++ compilation [closed]

I'm looking at the possibility of building a system where, when a query hits the server, we turn the query into C++ code, compile it into a shared object, and then run the code.
The time for compilation itself needs to be small for this to be worthwhile. My code can generate the corresponding C++ code, but if I have to write it out to disk, invoke gcc to get a .so file, and then run it, it does not seem worth it.
Are there ways to get a small snippet of code compiled and ready as a shared object fast? (There can be a significant start-up time before the queries arrive.) If such a tool has a permissive license, that's a further plus.
Edit: I have a very restrictive query language that users can use, so the security threat is not relevant. My own code translates the query into C++ code. The answer mentioning Clang is perfect.
Running Clang in JIT mode should provide the speed you need; an example can be found here. Safety, on the other hand, is something else...
Ch also had a JIT added, and seeing as it's an interpreter, it might provide an easier sandboxed/controlled environment.
In addition to Necrolis' answer, there's also Cling, a specialized C++ interpreter. It might come in handy.
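For reference, a minimal sketch of the write-to-disk-and-dlopen baseline the question describes, assuming a POSIX system (the file names and the run_query entry point are made up for illustration; the generated code must export it as extern "C"):

#include <cstdio>
#include <cstdlib>
#include <dlfcn.h>

int main() {
    // 1-2. Write the generated C++ to disk (omitted) and compile it;
    // -O0 keeps the compiler's own work, and thus latency, low.
    if (std::system("g++ -O0 -shared -fPIC query.cpp -o query.so") != 0)
        return 1;

    // 3. Load the shared object and look up the entry point.
    void* handle = dlopen("./query.so", RTLD_NOW);
    if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    using QueryFn = int (*)();  // assumed signature of the generated function
    auto run = reinterpret_cast<QueryFn>(dlsym(handle, "run_query"));
    if (run) std::printf("result: %d\n", run());

    dlclose(handle);
}

The JIT approaches above avoid the disk round-trip and compiler start-up entirely, which is where most of this baseline's latency goes.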

Can command-line utilities be faster than C++? [closed]

I have a project where I want to manipulate certain output files.
This can be accomplished using a combination of grep and sed, piping with |.
Alternatively, I can also write a C++ program to do the same thing.
Is there a conclusive answer as to which method will be faster, since grep and sed should already be fairly well optimised?
From a technical standpoint, a well-written self-contained C++ program that does everything you need will be faster than using two (or more) shell commands interconnected with a pipe, simply because there will be no IPC overhead, and they can be tailor-made and optimized for your exact needs.
But unless you're writing a program that will be run 24/7 for years, you'll never notice enough gain to be worth the effort.
And the standard rules about premature optimization apply...
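As an illustration of such a tailor-made, self-contained program, a minimal sketch of a one-pass match-and-substitute filter (the patterns are placeholders for whatever the grep/sed pipeline actually does):

#include <iostream>
#include <regex>
#include <string>

int main() {
    std::regex match_re("ERROR");      // placeholder for the grep pattern
    std::regex sub_re("foo");          // placeholder for the sed pattern

    std::ios::sync_with_stdio(false);  // speed up iostreams for bulk text
    std::string line;
    while (std::getline(std::cin, line)) {
        if (std::regex_search(line, match_re))
            std::cout << std::regex_replace(line, sub_re, "bar") << '\n';
    }
}

Note that std::regex is often slower than grep's highly tuned matcher, which only reinforces the point above: profile before assuming the rewrite wins.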
If I were you, I would use what is already out there, as these tools have likely been around a long time and have been tried and tested. Writing a new program yourself to do the same thing is reinventing the wheel and prone to error.
If you really need faster performance than you'll get with piping, you can download the source for grep and sed and tailor it to your needs in one application (be wary of licenses if you plan on distributing your code). I'd be highly surprised if you'd even notice the overhead of piping (like Flimzy mentioned), so if things are really that slow I'd start profiling your app.
It is likely that, if you are a very good C/C++ programmer and spend a lot of time on it, you will be able to write a program that's faster than the pipeline you're thinking of. But unless performance is so critical in this case that you absolutely must do it this way, you should use the pipeline.

Finding a totally nasty, complex, Schröding-Bohr-Bug [closed]

I have a really nasty bug in my program, which grew quite complex over time. It's probably the worst bug I've ever had.
I think it might be related to the static initialization order fiasco, but how can I confirm that?
When the bug strikes, the program crashes due to heap corruption at a random point after startup, but far inside the main() function.
To be honest, I don't know what to do.
I'm on Windows 7 using Microsoft Visual Studio 2010.
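On the static initialization order point: namespace-scope objects in different translation units are constructed in an unspecified order, and the usual workaround is construct-on-first-use. A minimal sketch (the names are made up):

#include <string>
#include <vector>

// Fiasco-prone pattern: some other translation unit's static initializer
// may run first and touch this object before it is constructed.
//   std::vector<std::string> g_registry;   // namespace-scope global

// Construct-on-first-use: a function-local static is guaranteed to be
// initialized the first time control passes through the declaration.
std::vector<std::string>& registry() {
    static std::vector<std::string> instance;
    return instance;
}

int main() {
    registry().push_back("ok");  // safe even when called during static init
}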
"my program, which grew quite complex over time"
Do you keep backups of previous versions?
Find an older version that worked and continue working based on that version...
There is a famous quote out there:
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian Kernighan
If this program has become more complicated than you can handle then it may be time to think about refactoring.
(This is in no way intended to be demeaning or to be taken as a personal attack...)
Run your program in the debugger and step through the code until you see what's wrong. Place breakpoints liberally wherever you suspect the bug might originate.
Try debugging your program with gdb.