Really large floating point calculation [closed]

I want to do really large floating point calculations, and they should be fast.
How can I make use of graphics processors if available? If no GPU is available, I would like to fall back to the main CPU.
Thanks.

Depending on the 'size' of these numbers, you can try MPFR. Although it's not a GPU solution, it can handle big numbers and should be relatively fast; it's used by a few open-source compilers (GCC and LLVM) to do static constant folding, so it's designed to preserve accuracy.
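For instance, here is a minimal MPFR sketch (MPFR's C API works fine from C++); the 256-bit precision and the example values are arbitrary choices, and you link with -lmpfr -lgmp:

    #include <mpfr.h>

    int main()
    {
        mpfr_t a, b, product;
        mpfr_init2(a, 256);        // 256 bits of mantissa, roughly 77 decimal digits
        mpfr_init2(b, 256);
        mpfr_init2(product, 256);

        // Parse decimal strings so nothing is lost on input.
        mpfr_set_str(a, "1.23456789e100", 10, MPFR_RNDN);
        mpfr_set_str(b, "9.87654321e200", 10, MPFR_RNDN);

        mpfr_mul(product, a, b, MPFR_RNDN);
        mpfr_printf("product = %.40Re\n", product);

        mpfr_clears(a, b, product, (mpfr_ptr) 0);
        return 0;
    }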
To do work on a GPU (really a GPGPU), you'd need to write a kernel using something like OpenCL or DirectCompute, and do your number crunching in that.
You may also be interested in Intel's new AVX extensions.
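If you do end up on the CPU, AVX intrinsics let a single instruction operate on four doubles at a time. A minimal sketch (compile with -mavx; the values are arbitrary):

    #include <immintrin.h>
    #include <cstdio>

    int main()
    {
        // Two packs of four doubles, added with a single AVX instruction.
        __m256d a = _mm256_set_pd(1.0, 2.0, 3.0, 4.0);
        __m256d b = _mm256_set_pd(10.0, 20.0, 30.0, 40.0);
        __m256d sum = _mm256_add_pd(a, b);

        alignas(32) double out[4];
        _mm256_store_pd(out, sum);
        std::printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
        return 0;
    }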

Floating point calculations can handle very large numbers, very small numbers, and numbers somewhere in between. The problem is that the number of significant digits is limited, and any number that doesn't fit perfectly in a base-two representation (numbers like 1/3 or 1/7) will pick up errors as it is translated into its closest base-2 counterpart.
If a GPU is available, as one is in nearly all computers with video, then a GPGPU library (such as OpenCL or CUDA) should help you access it without writing tons of assembly language. That said, unless you are sure your computations involve operations similar to those a GPU already performs, you are better off avoiding the GPU: GPUs are excellent at doing what they already do, and poor at adapting to anything else.

Related

Arrays and Caching [closed]

If I have a char array that is 8 billion elements long, would breaking it into smaller arrays increase performance by improving caching? Basically, I will iterate over the array and do some comparisons. If not, what is the most optimal way of using an array of such length?
I am reading a file in binary form into an array, and will be performing binary comparisons on different parts of the file.
8 GB of data will inevitably ruin data locality, so one way or another you either have to manage your memory in smaller pieces or your OS will end up swapping virtual memory to disk.
There is, however, an alternative: the so-called mmap. Essentially this allows you to map a file into a virtual memory space; your OS then takes on the task of accessing it and loading the necessary pages into RAM, while your access to the file becomes nothing more than simple memory addressing.
Read more about mmap at http://en.wikipedia.org/wiki/Mmap
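As a rough illustration, here is a minimal POSIX mmap sketch; the file name and the zero-byte count are placeholders for whatever comparison you actually run:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        const char *path = "data.bin";            // placeholder file name
        int fd = open(path, O_RDONLY);
        if (fd < 0) { std::perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { std::perror("fstat"); return 1; }

        // Map the whole file read-only; the OS pages it in on demand.
        void *map = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED) { std::perror("mmap"); return 1; }
        const char *data = static_cast<const char *>(map);

        // Example scan: count zero bytes. Sequential access lets the kernel
        // read ahead, so locality stays reasonable even for huge files.
        long long zeros = 0;
        for (off_t i = 0; i < st.st_size; ++i)
            if (data[i] == 0) ++zeros;
        std::printf("zero bytes: %lld\n", zeros);

        munmap(map, st.st_size);
        close(fd);
        return 0;
    }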
If you are going to do this only once, just run through it; the programming effort may not be worth the time gained.
I am assuming you want to do this again and again, which is why you want to optimize it. It would help to know whether your iteration and comparisons need to be done sequentially, etc. Without some problem-domain input it is difficult to suggest a generic optimization here.
If it can be done in parallel and you have to do it multiple times, I suggest you take a look at MapReduce techniques to solve this.

No longer able to max CPU [closed]

I've been programming normally on an AMD 1100 six-core chip. Previously (using the same code) I was able to max out all CPUs (6 threads).
Now, the same code no longer maxes them out (still 6 threads).
Before, I could get them right to 100% (all six cores would go straight up and draw straight lines right across the top). Now I can't figure out why the application isn't maxing out the CPU cores, even though there are 6 parallel threads and the same code maxed out the CPU just a day ago.
I'm running no extra processes, and am doing nothing generally different.
I'm running an extra fan on the CPU as well, and the CPU fan remains calm (which indicates it's not overheating).
First off, we need to see the code. Without it, I'd be looking for any synchronisation mechanisms (e.g. mutexes), I/O, or use of shared resources that could be giving the CPU some breathing space. If there is any I/O going on, such as disk, network or other external devices, access speed is liable to change from run to run, and the program may be I/O bound rather than CPU bound.
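As an illustration of the kind of synchronisation that keeps cores from reaching 100%, here is a minimal sketch (the names and iteration counts are made up) in which six threads fight over one mutex, so the "parallel" work is largely serialised:

    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex m;
    long long counter = 0;

    void worker()
    {
        for (int i = 0; i < 10000000; ++i) {
            std::lock_guard<std::mutex> lock(m);  // contention point
            ++counter;
        }
    }

    int main()
    {
        std::vector<std::thread> threads;
        for (int i = 0; i < 6; ++i)
            threads.emplace_back(worker);
        for (auto &t : threads)
            t.join();
    }

Watching this under a CPU monitor shows the threads spending much of their time waiting rather than computing, which can look exactly like cores that refuse to stay at 100%.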

Matlab computation vs. C/C++ computation: which one is more efficient? [closed]

I have some mathematical algorithms implemented in Matlab, and I have implemented the same algorithms in C++ (I used Microsoft VS 2005). When I compare the Matlab output with the C++ output, they match only 98 to 99%. Shouldn't they match 100%?
Is Matlab's computational efficiency better than C/C++'s?
Generally, no: Matlab will not produce more exact results just because it is Matlab. However, there are a number of things that might make a difference (a short sketch follows this list):
Different implementations of the same algorithm might have been written with different numerical stability in mind.
C and C++ compilers generally allow you to set compilation flags to do fast math, which changes floating-point math behaviour.
The output options for floating point numbers might just differ, making results look different.
The Matlab and C version might have used different floating point precisions.
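As a small illustration of the second point, merely changing the order of additions changes a double result, which is exactly the kind of reordering that fast-math flags and differing implementations introduce:

    #include <cstdio>

    int main()
    {
        // Summing the same three values in a different order gives a
        // different result, because rounding happens after each addition.
        double a = 1e16, b = -1e16, c = 1.0;

        double left  = (a + b) + c;  // 1.0: a and b cancel exactly first
        double right = a + (b + c);  // 0.0: c is lost in b's rounding
        std::printf("left = %.17g, right = %.17g\n", left, right);
        return 0;
    }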
Matlab also uses an appropriate compiler underneath, so in this scenario it's hard to say that Matlab's computational efficiency is better than C/C++'s.
If your code is the same in both cases, the output should be the same. If you find differences between them, it is probably because of differences between the compilers.

Can command line utilities be faster than C++? [closed]

I have a project where I want to manipulate certain output files.
This can be accomplished using a combination of grep and sed and piping with |
Alternatively, I can also write a C++ program to do the same thing.
Is there a conclusive answer on which method will be faster since grep and sed should already be fairly well optimised?
From a technical standpoint, a well-written self-contained C++ program that does everything you need will be faster than using two (or more) shell commands interconnected with a pipe, simply because there will be no IPC overhead, and they can be tailor-made and optimized for your exact needs.
But unless you're writing a program that will be run 24/7 for years, you'll never notice enough gain to be worth the effort.
And the standard rules about premature optimization apply...
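For illustration, a rough single-process equivalent of a "grep ... | sed ..." pipeline might look like the sketch below; the pattern, the replacement and the file handling are placeholders, not a drop-in for your actual pipeline:

    #include <fstream>
    #include <iostream>
    #include <regex>
    #include <string>

    int main(int argc, char *argv[])
    {
        if (argc < 2) { std::cerr << "usage: filter <file>\n"; return 1; }

        std::ifstream in(argv[1]);
        std::regex match("ERROR");        // the "grep" part (placeholder)
        std::regex from("\\berror\\b");   // the "sed" search pattern (placeholder)

        std::string line;
        while (std::getline(in, line)) {
            if (std::regex_search(line, match))
                std::cout << std::regex_replace(line, from, "warning") << '\n';
        }
        return 0;
    }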
If I were you, I would use what is already out there, as these tools have been around a long time and have been tested and tried. Writing a new program yourself to do the same thing is a reinvent-the-wheel kind of exercise and is prone to error.
If you really need faster performance than you'll get with piping, you can download the source for grep and sed and tailor them to your needs in one application (be wary of licenses if you plan on distributing your code). I'd be highly surprised if you even noticed the overhead of piping (as Flimzy mentioned), so if things are really that slow, I'd start by profiling your app.
It is likely that, if you are a very good C/C++ programmer and spend a lot of time on it, you will be able to write a program that's faster than the pipeline you're thinking of. But unless performance is so critical that you absolutely must do it this way, you should use the pipeline.

Generating word library - C or C++ [closed]

I need to create a simple application, but speed is very important here. The application is pretty simple.
It will generate all possible character combinations and save them to a text file. The user will enter the length to generate, so the application will use a recursive function with a loop inside.
Will C be faster than C++ for this, or does it not matter?
Speed is very important because my application may need to generate and save 10 million+ words to a file.
It doesn't really matter; chances are your application will be I/O bound rather than CPU bound, unless you have enough RAM to hold all of that in memory.
It's much more important that you choose the best algorithm, and the best data structures to back that algorithm up.
Then implement it in the language you're most familiar with. C++ has the advantage of easy-to-use containers in its standard library, but that's about it. You can write slow code in both, and fast code in both.
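For example, here is a minimal sketch of the recursive generation with buffered output; the alphabet, word length and file name are placeholders, and the point is that the run is dominated by disk throughput either way:

    #include <cstddef>
    #include <cstdio>
    #include <string>

    const std::string kAlphabet = "abc";   // placeholder alphabet

    // Recursively emit every word of the requested length.
    void generate(std::string &word, std::size_t length, std::FILE *out)
    {
        if (word.size() == length) {
            std::fwrite(word.data(), 1, word.size(), out);
            std::fputc('\n', out);
            return;
        }
        for (char c : kAlphabet) {
            word.push_back(c);
            generate(word, length, out);
            word.pop_back();
        }
    }

    int main()
    {
        std::FILE *out = std::fopen("words.txt", "w");  // placeholder file name
        if (!out) return 1;

        std::string word;
        generate(word, 4, out);   // all length-4 words over the alphabet

        std::fclose(out);
        return 0;
    }

Stdio's buffering (or an explicit setvbuf) matters far more here than whether the surrounding code is C or C++.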