Is managed code slower than unmanaged code? [closed] - c++

It's just a question out of curiosity. Considering the framework and the steps involved in execution, I'd generally say yes. Still, I would also like to consider factors like memory/disk access and networking, which limit the performance of unmanaged code too.

Quoting Herb Sutter
"First, JIT compilation isn’t the main issue. The root cause is much more fundamental: Managed languages made deliberate design tradeoffs to optimize for programmer productivity even when that was fundamentally in tension with, and at the expense of, performance efficiency."
There’s always an inescapable and fundamental difference between “prevention” and “cure” — when it comes to performance optimization, C++ always chooses “prevention,” and managed languages choose “cure” with the above-mentioned heroic efforts and many more. But the old ounce/pound saying is inescapable; you can’t beat prevention (in part because you can always add the cure after first doing the prevention, but not the reverse), and if you care about performance and control primarily then you should use a language that is designed to prioritize that up front, that’s all.
You can refer to this article for more clarity:
http://www.i-programmer.info/professional-programmer/i-programmer/4026-the-war-at-microsoft-managed-v-unmanaged.html
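To make Sutter's "prevention" point concrete, here is a minimal sketch (the File wrapper and file name are invented for illustration): with C++'s RAII, a resource's cost and lifetime are decided up front and released deterministically at scope exit, rather than cleaned up later by a garbage collector.

    #include <cstdio>
    #include <stdexcept>

    // RAII: the "prevention" approach. The resource is acquired, used,
    // and released deterministically; no collector runs later.
    class File {
        std::FILE* f_;
    public:
        explicit File(const char* path) : f_(std::fopen(path, "r")) {
            if (!f_) throw std::runtime_error("open failed");
        }
        ~File() { std::fclose(f_); }  // deterministic release
        File(const File&) = delete;
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f_; }
    };

    int main() {
        try {
            File f("example.txt");  // hypothetical file name
            // ... read from f.get() ...
        } catch (const std::exception&) {
            // handle the open failure
        }
    }  // the file is closed here, exactly when the scope ends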

Related

What optimizations can a C++ compiler perform that a C compiler cannot? [closed]

I've somehow implanted in my head the idea that a C++ compiler can perform certain optimizations that a C compiler cannot. I (think I) remember hearing this in a conference talk, perhaps regarding the use of C++ in embedded programming.
If I recall correctly, I think these optimizations had to do with the idea that you can qualify the use of pointers (and other means of indirection, like references) with more information at compile-time.
For instance, "const" is an example of such a compile-time human-supplied tag available both C and C++. Is there similar information that only C++ has?
Some things that spring to mind are the different types of iterators and their requirements, but I'm not sure if that allows for C++ to make some optimizations.
EDIT: I think the talk I had in mind was Dan Saks's cppcon 2016 presentation, but I realize now that (from what I understand) he mainly mentions how C++'s type system allows for better compile-time type-checking.
I think some of the examples I would enjoy hearing more about are things closer to how C++'s std::sort can be optimized more readily than C's qsort (thanks to multiple commenters). Explanations to when this sort of scenario occurs would be greatly appreciated.
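A hedged sketch of the std::sort/qsort contrast mentioned above: std::sort receives its comparator as a template parameter, so the comparison is visible at the call site and can be inlined into the sorting loop, while qsort must call back through an opaque function pointer.

    #include <algorithm>
    #include <cstdlib>

    // qsort's comparator is an opaque function pointer; the compiler
    // generally cannot inline it into the sorting loop.
    static int cmp_int(const void* a, const void* b) {
        int x = *static_cast<const int*>(a);
        int y = *static_cast<const int*>(b);
        return (x > y) - (x < y);
    }

    void sort_c(int* data, std::size_t n) {
        std::qsort(data, n, sizeof(int), cmp_int);
    }

    // std::sort's comparator is part of the instantiated type, so the
    // comparison is typically inlined into the generated loop.
    void sort_cpp(int* data, std::size_t n) {
        std::sort(data, data + n, [](int x, int y) { return x < y; });
    }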

To factor code into functions or not? [closed]

A little question that I know opinions diverge on, but I would like to know, performance- and sanity-wise, which approach you take:
Which is better: factoring code into a function (when the same piece of code is used in multiple places) and paying the function-call cost, or keeping copies of it everywhere and then dealing with changes in several places whenever the logic changes?
Consider that I need my code to be as fast as possible, because it will run on a memory- and CPU-restricted device.
Maybe some of you have a rule of thumb you apply, like gathering code into a function once it grows beyond a certain number of lines...
Rule of thumb:
Trust the compiler; in general it has better heuristics than you about whether code should be inlined. Write clean code. Code duplication is your enemy.
Measure the performance or check the generated code, and only try to optimize if you are unhappy with the results.
If there are problems, try using templates to avoid code duplication; the code is generated at the template instantiation site.
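A minimal sketch of that rule of thumb (the helper and its numbers are invented): factor the logic out once and let the optimizer decide about inlining; at typical optimization levels a small helper like this usually costs nothing over the copy-pasted expression.

    #include <cstdint>

    // A small helper factored out for reuse. The compiler will usually
    // inline calls to it, so factoring costs nothing at run time.
    inline std::uint32_t clamp_add(std::uint32_t a, std::uint32_t b,
                                   std::uint32_t limit) {
        std::uint64_t sum = std::uint64_t(a) + b;  // widen to avoid wraparound
        return sum > limit ? limit : static_cast<std::uint32_t>(sum);
    }

    std::uint32_t process(std::uint32_t x, std::uint32_t y) {
        // The logic lives in one place, so a change to the clamping
        // rule is made once instead of at every duplicated site.
        std::uint32_t first = clamp_add(x, y, 1000);
        return clamp_add(first, 42, 1000);
    }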

Is the statement "C++ is slower than C" referring to compilation or execution? [closed]

I never feel that C++ is slower than C. Did the people who say it mean compile time?
I think in many situations C++ gives the optimizer more to work with than C does; references are one example.
From SO: Is C notably faster than C++
In C++, "you only pay for what you use." So there is nothing that would make it any slower than C. In particular for scientific programs, template expressions make it possible to perform some custom optimization using the template engine to process program semantics.
The reason C is chosen for projects such as Python is that many people understand it (relatively) fully, so a large codebase will not confuse a large pool of contributors.
In almost all cases C is valid C++ (since C is nearly a subset of C++), so there's almost always a way to do things that's at least equally fast in C++ as in C. As mentioned later in the SO answer referenced above, however, C has an edge on C++ in terms of space efficiency.
The people who say this do not mean compile time. They mean execution time, largely due to the possible performance impact of virtual functions.
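A small sketch of "you only pay for what you use" (the types here are made up): the non-virtual call is resolved at compile time and can be inlined, while only the virtual call pays for a vtable lookup, and only the polymorphic object carries a vptr.

    #include <cstdio>

    // Non-virtual: resolved at compile time, freely inlinable.
    struct Point {
        double x, y;
        double norm2() const { return x * x + y * y; }
    };

    // Virtual: calls go through the vtable, and each object carries a
    // vptr, which is the space cost mentioned above.
    struct Shape {
        virtual double area() const = 0;
        virtual ~Shape() = default;
    };

    struct Square : Shape {
        double side;
        explicit Square(double s) : side(s) {}
        double area() const override { return side * side; }
    };

    int main() {
        Point p{3.0, 4.0};
        std::printf("%f\n", p.norm2());  // direct call: C-like speed

        Square sq{2.0};
        const Shape& s = sq;             // dynamic dispatch is opt-in
        std::printf("%f\n", s.area());   // vtable lookup only here
    }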

Considering the Chaos Monkey in Designing and Architecting Embedded Systems [closed]

I work on embedded systems with limited memory and throughput. Making a system more robust requires more memory and more processor time. I like the idea of the Chaos Monkey for figuring out whether your system is fault tolerant, but with limited resources I'm not sure how feasible it is to just keep adding code. Are there certain design considerations, whether in the architecture or otherwise, that would improve the fault-handling capabilities without necessarily adding "more code" to a system?
One technique I have seen for preventing an if statement in C (or C++) from assigning rather than comparing against a constant is to write the constant on the left-hand side of the comparison; that way, if you accidentally write an assignment when testing against, say, the number 5, the compiler complains and you're likely to find the issue right away (sketched after this question).
Are there architectural or design decisions that can be made early on that prevent possible redundancy/reliability issues in a similar way?
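Here is a minimal sketch of that left-hand-constant technique (often called a "Yoda condition"); check_state and its values are made up for illustration.

    void check_state(int state) {
        // Bug: this assigns 5 to state, so the condition is always true.
        // Many compilers only warn here (e.g. -Wparentheses).
        if (state = 5) { /* runs unconditionally */ }

        // Constant on the left: the same typo becomes a hard compile
        // error, because a literal cannot be assigned to.
        if (5 == state) { /* the intended comparison */ }
        // if (5 = state) {}  // does not compile: lvalue required
    }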
Yes, many other techniques can be used. You'd do well to purchase and read "Code Complete".
Code Complete on Amazon

Does every large project include a Lisp interpreter? [closed]

I had the impression that there was a paper or article somewhere that claimed every sufficiently large project (not written in a Lisp variant) contained a poorly implemented Lisp interpreter. Google turns up nothing and a quick search of SO doesn't either. Is this something well known and documented somewhere I have forgotten, or just a figment of my imagination?
An actual document or link to such an article would be appreciated, if it exists. Otherwise, I will remove the question.
What Greenspun meant when he uttered this quip was that Lisp provides a great many foundational technologies for writing good software, and that programs written in other languages informally (and inferiorly) reproduce a number of them as they grow.
Yes, this claim is Greenspun's tenth rule (actually the only rule):
Any sufficiently complicated C or Fortran program contains an ad hoc,
informally-specified, bug-ridden, slow implementation of half of
Common Lisp.
It makes a valid point about the expressiveness of Lisp-style features (particularly its macros). However, it isn't serious enough that anyone would write a paper about it.
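As a loose, hypothetical illustration of the rule, this is the kind of ad-hoc dynamically typed value that large C or C++ codebases tend to grow, informally reproducing a slice of what Lisp provides natively (a C++17 sketch; the Value/List names are invented):

    #include <string>
    #include <variant>
    #include <vector>

    // An ad-hoc "dynamic value": nil, integer, real, string, or a
    // list of further values, much like a Lisp datum.
    struct Value;
    using List = std::vector<Value>;

    struct Value {
        std::variant<std::monostate, long, double, std::string, List> data;
    };

    int main() {
        // Roughly the Lisp list (1 "two" (3.0)), rebuilt by hand.
        Value v{List{
            Value{1L},
            Value{std::string("two")},
            Value{List{Value{3.0}}},
        }};
        (void)v;
    }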