Matlab computation vs. C/C++ computation: which one is efficient? [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 11 years ago.
I have some mathematical algorithms implemented in Matlab, and I have reimplemented them in C++ (using Microsoft VS 2005). When I compare the Matlab output with the C++ output, they match only 98-99%. Shouldn't they match 100%?
Is Matlab's numerical computation more accurate than C/C++'s?

Generally, no: Matlab will not produce more exact results just because it is Matlab. However, there are a number of things that might make a difference:
Different implementations of the same algorithm might have been written with different numerical stability in mind.
C and C++ compilers generally allow you to set compilation flags to do fast math, which changes floating-point math behaviour.
The output options for floating point numbers might just differ, making results look different.
The Matlab and C version might have used different floating point precisions.

Matlab has its own compiler and runtime as well, so in this scenario it is hard to say that Matlab's computation is more accurate than C/C++'s.
If your code is the same in both cases, the output should be the same. If you found differences between them, they are most likely caused by differences between the compilers and their floating-point settings.

Related

Navigating arrays in C (performance)? [closed]

Closed 9 years ago.
Is it better to use a pointer to navigate an array of values, or is it better to use a subscripted array name?
How does the compiler treat both approaches?
Any modern compiler should produce equivalent assembly code for both methods.
I did a short test: I created int arr[10] and set all cells to 10, once using a normal for loop indexed by an int and once using an int*.
What was strange to me (I accept Midhun MP's argument) was that the pointer-indexed loop's assembly code was larger than the normal approach's (one line more). But when I turned on -O3 optimization, the output was exactly the same.
IMO, code should be easy to read and work without errors in the first place. Micro-optimizations should be done only if you really need them; in other cases, readability beats performance.
If you are curious how it works, just do this test yourself: prepare two versions of the code, compile both with gcc -S, and diff the outputs.

How do I speed up C++ execution with in-line functions? [closed]

Closed 9 years ago.
I am porting a FORTRAN Monte Carlo program to C++ and found that, once fully ported, the C++ program runs nearly twice as slow as the FORTRAN program. I am drafting a second version of the C++ program in which many of the functions are inlined through the use of class structures to speed things up; however, some of the functions run upwards of 40 or 50 lines, and I have read that inlining large functions can slow a program down. What counts as too large when inlining functions, and how can I speed up a C++ program (without multiprocessing) so that it executes as fast, or nearly as fast, as a FORTRAN program?
Inlining in C++ is only a suggestion to the compiler. If the function is too complicated, it will not be inlined by most modern compilers. The compiler will do what it can to optimize the calls in any case, even without the "inline" keyword, so long as the implementation is available when it's being compiled. There are also compilers that will inline across compilation units, but this is less common.
In any case, it's unlikely that function calls are dominating your processing time. You probably want to profile your code to figure out where the bottleneck actually is before putting too much effort into micro-optimizations that the compiler is probably doing for you anyway.

What is Bison and why is it useful? [closed]

Closed 10 years ago.
I have been programming for a few years now and have seen the name Bison in passing, but never bothered to ask what it is or why it might be needed. How can Bison affect how I program? Can it make my C/C++ code faster?
Bison is a parser generator. It takes its input in something similar to Backus-Naur Form and outputs code that parses input according to that grammar. It lets you write a parser more easily than you could otherwise: instead of having to do everything manually, you only have to specify the rules of your grammar and what you want to happen when input matches one of the rules.
GNU Bison is the only Bison related to programming I know of. It won't make your code faster, and it's possible that you won't ever need it in your life. However, learning some compiler theory, or even writing a simple compiler yourself, is a terrific learning experience that does affect the way you program and the way you think about computer programming. If you enjoy formal languages and automata, you'll enjoy compiler theory; if you dislike theory in general, it's probably not for you. If you're interested, there are lots of questions about introductory books on Stack Overflow.
Oh, and once in a while a programmer does need some more complicated parsing work, and it's a huge boon to know about parser generators instead of writing everything by hand following a naive approach.

C++: Really large floating point calculation [closed]

Closed 11 years ago.
I want to do really large floating-point calculations, and they should be fast.
How can I make use of graphics processors if one is available? If no GPU is available, I would like to use the main CPU.
Thanks.
Depending on the 'size' of these numbers, you can try MPFR. Although it's not a GPU solution, it can handle big numbers and should be relatively fast; it's used by a few open-source compilers (GCC and LLVM) for static constant folding, so it's designed to preserve accuracy.
To do work on a GPU (really a GPGPU), you'd need to write a kernel using something like OpenCL or DirectCompute, and do your number crunching in that.
You may also be interested in Intel's new AVX extensions.
Floating-point calculations can handle very large numbers, very small numbers, and numbers in between. The problem is that the number of significant digits is limited, and any number that doesn't fit perfectly into a base-two representation (numbers such as 1/3 or 1/7) picks up errors as it is translated into its closest base-2 counterpart.
If a GPU is available, as one is in nearly all computers with video, then a GPGPU library should help you access it without writing tons of assembly language. That said, unless you are sure your computations resemble operations a GPU already performs, you would be better off avoiding it: GPUs are excellent at what they already do and poor at adapting to anything else.

Generating word library - C or C++ [closed]

Closed 11 years ago.
I need to create a simple application, but speed is very important here. The application is pretty simple.
It will generate all available character combinations and save them to a text file. The user will enter the length to be used for generation, so the application will use a recursive function with a loop inside.
Will C be faster than C++ in this matter, or does it not matter?
Speed is very important because the application may need to generate and save 10 million+ words to a file.
It doesn't really matter; chances are your application will be I/O-bound rather than CPU-bound, unless you have enough RAM to hold all of that in memory.
It's much more important that you choose the best algorithm, and the best data structures to back that algorithm up.
Then implement that in the language you're most familiar with. C++ has the advantage of easy-to-use containers in its standard library, but that's about it. You can write slow code in both, and fast code in both.