C++ long division? [closed] - c++

I'm seeking help in how to write an algorithm to divide two very large numbers (about 100 digits each) in C++. I'd like to point out now that I'm not a programmer at all!
I'm only doing this for recreational purposes, so I managed to get a few free division algorithms, but none of them seem to be what I'm looking for, i.e., all of them still only have a precision of 16 digits!
Some people told me to get a bignum library (which I had to look up what that actually meant)
and I got some arbitrary precision package from www.hvks.com, but I don't know how to actually use it!
Any help at all is greatly appreciated as I have no idea what to do!

You are going to handle numbers with hundreds of digits, so you need an arbitrary-precision (bignum) library.
You can look at a library like Boost.Multiprecision.
Depending on the number type you use, the precision can be arbitrarily large (limited only by available memory), fixed at compile time (for example 50 or 100 decimal digits), or controlled at run-time by member functions. The types are expression-template-enabled for better performance than naive user-defined types.
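For example, with Boost.Multiprecision's cpp_int type, dividing two ~100-digit integers looks like this (a minimal sketch; the digit strings below are just placeholders):

    #include <boost/multiprecision/cpp_int.hpp>
    #include <iostream>

    int main() {
        using boost::multiprecision::cpp_int;

        // Arbitrary-precision integers constructed from decimal strings
        // (the digits here are placeholders, not anything meaningful).
        cpp_int a("12345678901234567890123456789012345678901234567890"
                  "12345678901234567890123456789012345678901234567890");
        cpp_int b("98765432109876543210987654321098765432109876543210");

        std::cout << "quotient:  " << a / b << '\n';
        std::cout << "remainder: " << a % b << '\n';
    }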
Another option is the GNU Multiple Precision Arithmetic Library (GMP).
This is a free library for arbitrary-precision arithmetic; there is no practical limit to the precision except the one implied by the available memory on the machine GMP runs on.
Another idea is to write your own data structure to handle such numerical operations: keep the digits in a char array and process them piece by piece for each operation. Go with this method only if no existing library suits your purpose.
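As a rough illustration of that idea, here is a minimal sketch of pencil-and-paper long division on a decimal string, dividing by a small int (a full big-by-big division would extend the same idea with a big-number remainder and trial subtraction; the function name is just for illustration):

    #include <iostream>
    #include <string>

    // Divide a non-negative decimal string by a small positive divisor,
    // exactly like pencil-and-paper long division.
    std::string divide_by_small(const std::string& number, int divisor,
                                int& remainder_out) {
        std::string quotient;
        long long remainder = 0;
        for (char c : number) {
            remainder = remainder * 10 + (c - '0');            // bring down the next digit
            quotient.push_back(char('0' + remainder / divisor));
            remainder %= divisor;
        }
        // Strip leading zeros, but keep at least one digit.
        std::size_t first = quotient.find_first_not_of('0');
        quotient = (first == std::string::npos) ? "0" : quotient.substr(first);
        remainder_out = static_cast<int>(remainder);
        return quotient;
    }

    int main() {
        int r = 0;
        std::cout << divide_by_small("123456789012345678901234567890", 97, r)
                  << " remainder " << r << '\n';
    }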
Hope this will help you.

Yeah, what @EdHeal said
The answer requires a lecture course – Ed Heal
I wouldn't say this is the place to start out programming, given the amount of background information on the topic that is needed. For example, why is there only a precision of 16 digits? To me it seems prudent to know that before attempting this at all. Not to mention all the syntax you'd need to understand to actually write something like this in C++. Now, I'm not trying to discourage you, but the breadth of this question for a non-programmer requires an answer too large to give, or at least to want to even attempt. You need to at least study the basics of programming and how to actually use a library in your own program. Just showing you how to set up the linking process in an IDE could take up a page or more. Also, since you didn't understand the last sentence (or at least probably didn't, given you're new to programming), I think it shows a need to familiarize yourself with the world of programming before this kind of undertaking. Google is your best friend...

Related

Find the number of floating point registers programmatically in C++ [closed]

I am working on parallel algorithm optimization (sparse matrices), specifically register blocking. I want to find the number and type of registers (first of all floating point registers, then others) available on a machine, in order to tune my code to the available registers and make it platform independent. Is there any way to do this in C++?
Thank you.
mjr
In general, compilers do know this sort of stuff (and how to best use it), so I'm slightly surprised that you think that you can outsmart the compiler - unless I have very high domain knowledge, and start writing assembler code, I very rarely outsmart the compiler.
Since writing assembler code is highly unportable, I don't think that counts as a solution for optimising the code using knowledge as to how many registers, etc. It is very difficult to know how the compiler uses registers. If you have int x = y + z; as a simple example, how many registers does it take? Depends on the compiler - it could use none, one, two, three, four, five or six, without being below optimal register usage - it all depends on how the compiler decides to deal with things, machine architecture, where/how variables are being stored, etc. The same principle applies to number of floating point registers if we change int to double. There is no obvious way to tell how many registers are being used in this statement (although I suspect no more than three - however, it could be zero or one, depending on what the compiler decides to do).
It's probably possible to do some clever tricks if you know the processor architecture and how the compiler deals with certain types of code - but that also assumes that the compiler doesn't change its behaviour in the next release. But if you know what processor architecture it is, then you also know the number of registers of various kinds...
I am afraid there is no easy portable solution.
There are many factors that could influence the optimal block size for a given computer. One way to discover a good configuration is by automatically running a series of benchmarks, and using the results to tune your code at runtime.
Another approach is to automatically tweak the source code based on the results of some benchmarks. This is what Automatically Tuned Linear Algebra Software (ATLAS) does.
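A minimal sketch of that benchmark-and-pick approach (the kernel below is a stand-in, not your sparse-matrix code, and the candidate block sizes are arbitrary):

    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Placeholder for the real blocked kernel you want to tune.
    double run_blocked_kernel(std::size_t block, const std::vector<double>& x) {
        double sum = 0.0;
        for (std::size_t i = 0; i < x.size(); i += block)
            for (std::size_t j = i; j < std::min(i + block, x.size()); ++j)
                sum += x[j];
        return sum;
    }

    int main() {
        std::vector<double> data(1 << 20, 1.0);
        std::size_t best_block = 0;
        double best_time = 1e300;
        for (std::size_t block : {8, 16, 32, 64, 128}) {
            auto t0 = std::chrono::steady_clock::now();
            volatile double sink = run_blocked_kernel(block, data);  // keep the call alive
            (void)sink;
            auto t1 = std::chrono::steady_clock::now();
            double dt = std::chrono::duration<double>(t1 - t0).count();
            if (dt < best_time) { best_time = dt; best_block = block; }
        }
        std::cout << "best block size: " << best_block << "\n";
    }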

Fortran vs C++, does Fortran still hold any advantage in numerical analysis these days? [closed]

With the rapid development of C++ compilers, especially the Intel ones, and the ability to apply SIMD functions directly in your C/C++ code, does Fortran still hold any real advantage in the world of numerical computations?
I am from an applied maths background, and my job involves a lot of numerical analysis, computation, optimisation and such, with strictly defined performance requirements.
I hardly know anything about Fortran. I have some experience in C/CUDA/Matlab (if you consider the latter a computer language to begin with), and my daily task involves analysis of very large data (e.g. a 10 GB matrix). It seems the program spends at least 2/3 of its time on memory access (that's why I send some of its work to the GPU). Do you think it may be worth the effort for me to try the Fortran route for at least some performance-critical parts of my code?
Because of the complexity and the work involved, I will only go that route if there is a significant performance benefit. Thanks in advance.
Fortran has stricter aliasing semantics than C++ and has been aggressively tuned for numerical performance for decades. Algorithms that use the CPU to work with arrays of data often have the potential to benefit from a Fortran implementation.
The programming languages shootout should not be taken too seriously, but of the 15 benchmarks, Fortran ranks #1 for speed on four of them (for Intel Q6600, one core), more than any other single language. You can see that the benchmarks where Fortran shines are the heavily numerical ones:
spectral norm 27% faster
fasta 67% faster
mandelbrot 56% faster
pidigits 18% faster
Counterexample:
k-nucleotide 500% slower (this benchmark focuses heavily on more sophisticated data structures and string processing, which is not Fortran's strength)
You can also see a summary page "how many times slower" that shows that out of all implementations, the Fortran code is on average closest to the fastest implementation for each benchmark -- although the quantile bars are much larger than for C++, indicating Fortran is unsuited for some tasks that C++ is good at, but you should know that already.
So the questions you will need to ask yourself are:
Is the speed of this function so critical that reimplementing it in Fortran is worth my time?
Is performance so important that my investment in learning Fortran will pay off?
Is it possible to use a library like ATLAS instead of writing the code myself?
Answering these questions would require detailed knowledge of your code base and business model, so I can't answer those. But yes, Fortran implementations are often faster than C++ implementations.
Another factor in your decision is the amount of sample code and the quantity of reference implementations available. Fortran's strong history means that there is a wealth of numerical code available for download and even with a trip to the library. As always you will need to sift through it to find the good stuff.
The complete and correct answer to your question is, "yes, Fortran does hold some advantages".
C++ also holds some, different, advantages. So do Python, R, etc etc. They're different languages. It's easier and faster to do some things in one language, and some in others. All are widely used in their communities, and for very good reasons.
Anything else, in the absence of more specific questions, is just noise and language-war-bait, which is why I've voted to close the question and hope others will too.
Fortran is just naturally suited for numerical programming. You tend to have a large amount of numbers in such programs, typically arranged in arrays. Arrays are first-class citizens in Fortran, and it is often pretty straightforward to translate numerical kernels from Matlab into Fortran.
Regarding potential performance advantages, see the other answers, which cover this quite nicely. The bottom line is that you can probably create highly efficient numerical applications with most compiled languages today, but you might have to jump through some hoops to get there. Fortran was carefully designed to let the compiler recognize most opportunities for optimization, thanks to the language's features. Of course, you can also write arbitrarily slow code in any compiled language, including Fortran.
In any case you should pick the tool that suits the job. Fortran suits numerical applications; C suits system-related development. As a final remark, learning the basics of Fortran is not hard, and it is always worthwhile to have a look at other languages; it opens a different view on the problems you want to solve.
Also worth mentioning is that Fortran is a lot easier to master than C++. In fact, Fortran has a shorter language spec than plain C, and its syntax is arguably simpler. You can pick it up very quickly.
Meaning that if you are only interested in learning C++ or Fortran to solve a single specific problem you have at the moment (say, to speed up the bottlenecks in something you wrote in a prototyping language), Fortran might give you a better return on investment.
Fortran code is better for matrix- and vector-type operations in general, but you can also get similar performance from C/C++ code by passing hints to the compiler so it produces similar-quality vector instructions. One option that gave me a good boost was telling the compiler not to assume memory aliasing among input variables that are array objects. That way, the compiler can aggressively unroll and pipeline the inner loop for ILP, overlapping loads and stores across loop iterations with the right prefetches.
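A minimal sketch of that no-aliasing hint in C++ (the __restrict__ spelling is a GCC/Clang extension, spelled __restrict on MSVC; the function is just an illustration):

    #include <cstddef>

    // The __restrict__ qualifiers promise the compiler that x and y never
    // overlap, so it can vectorize and software-pipeline the loop much as a
    // Fortran compiler can for dummy array arguments.
    void axpy(std::size_t n, double alpha,
              const double* __restrict__ x,
              double* __restrict__ y) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] += alpha * x[i];
    }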

Why is there no unsized Integer type? [closed]

Why does the STL not contain an unbounded integer data type?
I feel like it's a data type whose purpose is similar to that of a string.
Programmers would not have to worry about overflowing values and could work with much larger numbers.
So I'm curious if there is a specific reason for its absence.
This isn't an issue on how to implement or use one from a 3rd party library,
but just a question as to why the language doesn't already come with one.
Any links on the matter are appreciated.
You probably mean arbitrary precision arithmetic or big numbers.
Perhaps it is not in C++ because it is not relevant to a wide enough audience. Very probably, almost any non-trivial C++ code uses some part of the STL (std::ostream, collections like std::vector, or types like std::string).
But a lot of code doesn't need big numbers.
Likewise, graphical interfaces (like Qt) are not part of the STL, for the same reasons: a lot of people don't care about them (e.g. in server code, or in numerical applications).
And defining a standard library is a big effort. In my opinion, the C++ STL is perhaps too big already; there is no need to add a lot more to it.
You might want to use GMP if you need it.
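For instance, GMP's C++ interface makes big-number arithmetic look like ordinary code (a minimal sketch; build with -lgmpxx -lgmp; the values are placeholders):

    #include <gmpxx.h>
    #include <iostream>

    int main() {
        // mpz_class is GMP's arbitrary-precision integer; it grows as needed,
        // so the product below cannot overflow.
        mpz_class a("123456789012345678901234567890");
        mpz_class b("987654321098765432109876543210");
        mpz_class c = a * b;
        std::cout << c << '\n';
    }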
Because even way back when the STL was designed:
There were already significantly better arbitrary precision integer libraries in C. Sure they weren't officially classes, but the structures they used still did the job. An STL implementation wouldn't really get a great deal of uptake from those who need arbitrary precision integers, which leads me to my second reason:
Not that many people actually need arbitrary precision integers. Those who do pull in a third-party library. For most people, 32-bit longs did the job in those days. For many people they still do. And the performance is significantly better. Even on a system with no native 64-bit operations you could simulate them with a few instructions and still be significantly faster than an arbitrary-precision integer implementation (no matter how thin you make it, the arbitrary part and the likely heap allocations are going to make it more expensive than two lesser integer operations and a manual carry).
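To make the "two lesser integer operations and a manual carry" point concrete, here is a minimal sketch of a 64-bit add simulated with two 32-bit halves (the struct and function names are just for illustration):

    #include <cstdint>

    struct u64_pair { std::uint32_t lo, hi; };

    // Add two 64-bit values stored as (lo, hi) 32-bit halves:
    // two narrow adds plus a manually propagated carry.
    u64_pair add64(u64_pair a, u64_pair b) {
        u64_pair r;
        r.lo = a.lo + b.lo;
        std::uint32_t carry = (r.lo < a.lo) ? 1u : 0u;  // did the low half wrap around?
        r.hi = a.hi + b.hi + carry;
        return r;
    }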
Beyond all that, it simply comes down to the fact that Stroustrup didn't feel it had broad enough appeal to fit into his vision of the STL.
I think a better question would be why no currency or arbitrary precision decimal class in the STL, since I think they are far more commonly an issue, but the answers are the same.

Fastest 128 bit integer library [closed]

I am working on a CPU-heavy numerical computation app. Without going into many details, it's a computational math research project that involves computing a certain function f(x) for large integer x.
Right now everything is implemented in C++ in x64 mode, using native 64-bit ints. That limits me to x < 2^64 ≈ 1.8×10^19. I want to go further; to do that, I need a library that does 128-bit arithmetic. And it has to be very fast. In particular, integer divisions should be fast, otherwise I'll be sitting here waiting for the results till Thanksgiving. And I'd rather not reinvent the wheel.
I found a list of ~20 big integer libraries on Wikipedia, but most of those seem to be targeted at arbitrary-precision numbers, which is overkill for my task, and I don't want the extra costs associated with that.
Does anyone know what library can operate on 128 bit integers fastest?
You didn't mention your platform / portability requirements. If you are willing to use gcc or clang, on 64-bit platforms they have builtin 128-bit types that come for free: __uint128_t and __int128_t. Other platforms may have similar type extensions.
In any case it should be possible to find the corresponding generic code in the gcc sources that assembles two integers of width N into one integer of width 2N. That would probably be a good starting point for a standalone library for this purpose.
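A minimal sketch of those builtin types on gcc/clang (there is no standard stream operator for __int128, so the small print helper below is ours, not part of any library):

    #include <iostream>

    static void print_u128(unsigned __int128 v) {
        if (v == 0) { std::cout << '0'; return; }
        char buf[40];                               // max value has 39 decimal digits
        int i = 40;
        while (v != 0) {
            buf[--i] = static_cast<char>('0' + static_cast<int>(v % 10));
            v /= 10;
        }
        std::cout.write(buf + i, 40 - i);
    }

    int main() {
        unsigned __int128 x = static_cast<unsigned __int128>(1) << 100;  // 2^100, well past 64 bits
        unsigned __int128 q = x / 1000003;  // division works like any other integer type
        print_u128(q);
        std::cout << '\n';
    }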
The ttmath library does what you want.
This might not be for everyone, but what I would do is pick the highest-performance arbitrary-precision integer library with source code that is otherwise suitable for the job, and hack it to use fixed integer sizes: hard-code some variable like "nbits" to 128. It probably allocates memory at runtime, not knowing the number of bytes until then; change it to use a struct with the data in place, saving a pointer dereference every time the data is read. Unroll certain critical loops by hand. Hard-code anything else that might be critical. Then the compiler will probably have an easier time optimizing things. Of course, much of this will be assembly, using fancy SIMD with whatever the technology is in use this week.
It would be fun! But then, as a programmer I started off with machine code and very low level stuff.
But for those not as crazy as I am, perhaps one of the available libraries uses templates or has some means of generating code custom to some size. And, some compilers have a "long long" integer type which might be suitable.

A C++ library for arrays, matrices, vectors, and classical linear algebra operations [closed]

Which library do you use for N-dimensional arrays?
I use Blitz++ at work and I really dislike some aspects of it. Some aspects of it are even dangerous: the need to resize before using operator=, A(Range::all(), Range::all()) throwing for a (0,0) matrix, etc., and the linear algebra operations having to be done via CLAPACK.
I used and loved Eigen. I appreciate its "all-in-header" implementation, the C++ syntactic sugar, and the presence of all the linear algebra operations I need (matrix multiplication, linear system solving, Cholesky...).
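To give an idea of what I mean, a minimal Eigen sketch of those operations (the matrix values are made up):

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        // A small symmetric positive-definite matrix, just as an example.
        Eigen::Matrix3d A;
        A << 4, 1, 0,
             1, 3, 1,
             0, 1, 2;
        Eigen::Vector3d b(1.0, 2.0, 3.0);

        Eigen::Matrix3d B = A * A;                    // matrix multiplication
        Eigen::Vector3d x = A.ldlt().solve(b);        // linear system solve
        Eigen::LLT<Eigen::Matrix3d> chol(A);          // Cholesky factorization

        std::cout << x.transpose() << '\n' << B << '\n'
                  << "Cholesky succeeded: " << (chol.info() == Eigen::Success) << '\n';
    }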
What are you using?
boost::array and also boost::MultiArray. There's also a pretty good linear algebra package in Boost called uBLAS.
There is also armadillo which I am using in some projects. From their website:
Armadillo is a C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions. Various matrix decompositions are provided through optional integration with LAPACK and ATLAS libraries.
A delayed evaluation approach is employed (during compile time) to combine several operations into one and reduce (or eliminate) the need for temporaries. This is accomplished through recursive templates and template meta-programming.
This library is useful if C++ has been decided as the language of choice (due to speed and/or integration capabilities), rather than another language like Matlab® or Octave. It is distributed under a license that is useful in both open-source and commercial contexts.
Armadillo is primarily developed at NICTA (Australia), with contributions from around the world.
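A minimal Armadillo sketch of typical use (assuming LAPACK/BLAS are linked; the sizes and random data are arbitrary):

    #include <armadillo>

    int main() {
        // Random 4x4 system, solved with LAPACK under the hood.
        arma::mat A = arma::randu<arma::mat>(4, 4);
        arma::vec b = arma::randu<arma::vec>(4);

        arma::vec x = arma::solve(A, b);   // solve A*x = b
        arma::mat C = A.t() * A;           // matrix product, combined via delayed evaluation

        x.print("x:");
        C.print("C:");
    }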
We've used TNT successfully for a number of years. There are sufficient issues, however, that we're moving toward an internally developed solution instead. The two biggest sticking points for us are:
The arrays are not thread safe, even for read access, because they use a non-thread safe reference count.
The arrays cause all sorts of problems when you write const-correct code.
If those aren't a problem then they're fairly convenient for a lot of common array tasks.