Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
A little question that I know opinions diverge on, but I would like to know, performance- and sanity-wise, what you do:
Which is better: factoring shared code into a function (when the same piece of code is used in multiple places) and paying the function-call cost, or keeping copies of it everywhere and having to update every copy when the logic changes?
Consider that I need my code to be as fast as possible, because it will run on a memory/CPU-restricted device.
Maybe some of you have a rule of thumb you apply, like gathering code into a function once it grows beyond a certain number of lines...
Rule of thumb:
Trust the compiler; in general it has better heuristics than you about whether code should be inlined. Write clean code. Code duplication is your enemy.
Measure the performance or check the generated code, and only try to optimize if you are unhappy with the results.
If there are problems, try to utilize templates to avoid code duplication and generate code at the template instantiation location.
Closed 5 months ago.
I have a tendency to use switch statements when I am creating a menu-driven program, and I tend to use if statements when I only have a few items. I believe this has to do with the way I was taught in school, but I don't know if that is necessarily the way to go.
Are there vast differences between the two? When should you pick one over the other?
Edit: I should specify that I am mainly concerned with optimization (even if one or the other is only marginally more efficient).
If statements look like if statements. Switch statements look like switch statements. Some compilers may be mildly better at optimizing certain types of switch statements than the equivalent set of if statements, though that won't be a significant factor in your situation. In cases where the two are both applicable, there are few practical concerns in choosing one over the other.
Use whichever fits your intended coding style better.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I am new to C++ and I have a general question. When solving the exercises in the book I am learning from, I can usually solve them successfully, but I end up creating a lot of new variables within functions in addition to the ones I have already initialised. For some reason this worries me, because I feel that I am writing inefficient code that might hog resources if I follow this practice in more complex programmes. Am I wrong to think this way? Are there any best practices regarding declaring and initialising new variables?
EDIT: I forgot to add: before solving any question, I tend to translate the solution into plain English and then attempt to draw the program structure.
Normally, compilers perform liveness analysis of variables during compilation. A variable is considered live only from its first assignment to its last use, so optimizing compilers can reduce the amount of local storage on the stack required by sequentially used variables (sometimes they can even eliminate a variable entirely, or keep it in a register for a short period of time).
Closed 7 years ago.
I asked a question about global variables, and one of the answers raised another question: what is the risk of a very large .cpp file?
Is the concern here the maintainability of the program, or something else?
My original question
Only maintainability. There are no compilation issues: it is common for compilers to combine all #include files into a translation unit and then compile that, so each .cpp file winds up being many times larger than the input anyway before the later stages of compilation.
For a single programmer working on his own program, when size becomes an issue is a personal choice. For a team of programmers at a company, splitting an application across a reasonable number of C++ files allows each team member to work on a separate file in parallel. Although there are tool sets that can merge separate edits made to the same source file(s), dealing with potential conflicts (someone has to check and/or fix them) is an issue.
Closed 8 years ago.
It's just a question out of curiosity. Generally, considering the framework and the steps involved in execution, I'd say yes. Still, I would also like to consider factors like memory/disk access and networking, which limit the performance of unmanaged code.
Quoting Herb Sutter
"First, JIT compilation isn’t the main issue. The root cause is much more fundamental: Managed languages made deliberate design tradeoffs to optimize for programmer productivity even when that was fundamentally in tension with, and at the expense of, performance efficiency."
There’s always an inescapable and fundamental difference between “prevention” and “cure” — when it comes to performance optimization, C++ always chooses “prevention,” and managed languages choose “cure” with the above-mentioned heroic efforts and many more. But the old ounce/pound saying is inescapable; you can’t beat prevention (in part because you can always add the cure after first doing the prevention, but not the reverse), and if you care about performance and control primarily then you should use a language that is designed to prioritize that up front, that’s all.
You can refer to this article for more clarity:
http://www.i-programmer.info/professional-programmer/i-programmer/4026-the-war-at-microsoft-managed-v-unmanaged.html
Closed 8 years ago.
I have this big C++, Boost-heavy project that takes ages to build, so I'm trying to set up compilation firewalls. Now, I could sprinkle pimpls or pure interfaces following my intuition, but that doesn't seem very efficient. Usually, if I wanted to optimize a piece of code, I would run it through a profiler to find the bottlenecks, which leads me to the following question: how do I see where the bottlenecks in my compilation time are?
All answers, including trying alternate compilers, are welcome, since the code is cross-platform (crosses fingers!).
Thanks,