Boost library for human-readable fractional units [closed] - c++

Is there a Boost library that supports converting doubles to (US-style) fractional units, i.e. that converts from (double) 2.56 to (string) '2 5/9'?
The use case is to display the fractional number to the user whilst keeping the double representation internally. The fractional representation might very well be an approximation of the exact internal value.

Boost appears to have considered the problem and decided not to implement it.
I quote from the documentation for boost::rational:
The library does not offer a conversion function from floating point to rational. A number of requests were received for such a conversion, but extensive discussions on the boost list reached the conclusion that there was no "best solution" to the problem. As there is no reason why a user of the library cannot write their own conversion function which suits their particular requirements, the decision was taken not to pick any one algorithm as "standard"…
All of this implies that we should be looking for some form of "nearest simple fraction". Algorithms to determine this sort of value do exist. However, not all applications want to work like this…
With these conflicting requirements, there is clearly no single solution which will satisfy all users. Furthermore, the algorithms involved are relatively complex and specialised, and are best implemented with a good understanding of the application requirements. All of these factors make such a function unsuitable for a general-purpose library such as this.
PARI/GP does implement a bestappr(X, B) function, which (in one of its incarnations) returns the best rational approximation of X whose denominator is less than B. (Thanks to @SLeske's answer to a similar question for the pointer.)
A Google search for "rational approximation of real numbers" yielded a number of other links, including this non-paywalled paper by Emilie Charrier and Lilian Buzer (and three cheers to Discrete Applied Mathematics for allowing open access).
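In case it helps, here is one possible approach in the spirit of the "nearest simple fraction" discussed above: a brute-force search over bounded denominators. This is a minimal sketch, not taken from any library; the function name toMixedFraction and the default denominator bound are made up for illustration.

    #include <cmath>
    #include <numeric>   // std::gcd (C++17)
    #include <string>
    #include <iostream>

    // Convert a non-negative double to a mixed-number string such as "2 5/9",
    // choosing the closest fraction whose denominator does not exceed maxDen.
    // A linear scan over denominators is O(maxDen), which is fine for display.
    std::string toMixedFraction(double value, int maxDen = 16)
    {
        const long long whole = static_cast<long long>(std::floor(value));
        const double frac = value - static_cast<double>(whole);

        long long bestNum = 0, bestDen = 1;
        double bestErr = frac;                              // error of "0/1"
        for (int den = 1; den <= maxDen; ++den) {
            const long long num = std::llround(frac * den);
            const double err = std::fabs(frac - static_cast<double>(num) / den);
            if (err < bestErr) {
                bestErr = err;
                bestNum = num;
                bestDen = den;
            }
        }

        if (bestNum == 0)       return std::to_string(whole);
        if (bestNum == bestDen) return std::to_string(whole + 1);  // rounded up

        const long long g = std::gcd(bestNum, bestDen);
        return std::to_string(whole) + " " +
               std::to_string(bestNum / g) + "/" + std::to_string(bestDen / g);
    }

    int main()
    {
        std::cout << toMixedFraction(2.56, 9)  << '\n';   // "2 5/9"
        std::cout << toMixedFraction(2.56, 25) << '\n';   // "2 14/25" (exact)
    }

For display-sized denominators the linear scan is trivially correct; for large denominator bounds, the continued-fraction technique behind bestappr is the more efficient route.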

Related

Why are fractional data types generally not implemented? [closed]

I have been working with C/C++ for quite a few years. I have also worked with a few other programming languages, but none of them has a built-in fractional datatype. Here I am not talking about regular decimal datatypes like float. I am referring to regular mathematical fractions having the following form:
Nr/Dr where Dr≠0
What are the challenges that are faced by the developers in implementing such datatypes?
Built-in data types tend to be those that are widely useful and that map more or less efficiently to the underlying architecture. This is not a hard and fast rule, but it's a pretty good rule of thumb.
Fractional data types fail both tests: they might seem widely useful, but limited-precision floating-point types are good enough for 99.9% (or 999/1000, if you prefer) of applications, can be implemented much more efficiently thanks to hardware support, and have far fewer theoretical and practical limitations because they cover a more general domain of numbers (i.e. not just operations on rational numbers).
There are specific applications where floating point data types don’t cut it, and where arbitrary-precision rational numbers are needed. For those cases, special libraries exist. Cluttering standard libraries with these types would be a waste of effort.
So, essentially, the answer is the same as for any other specialised data type that isn’t widely implemented as a built-in type.
What are the challenges that are faced by the developers in implementing such datatypes?
It is not that it is a challenge; it is that they are not that useful in practice.
Unlimited-precision rationals are far more useful, but that implies managing memory and using smart algorithms to get the best possible performance. At that point, you are better served by a full-fledged math library like GNU GMP and friends than by a built-in type.
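For instance, here is a minimal sketch of what that looks like with GMP's C++ bindings; it assumes GMP and gmpxx are installed and the program is linked with -lgmpxx -lgmp.

    #include <gmpxx.h>
    #include <iostream>

    int main()
    {
        // Exact rationals with arbitrary-precision numerator and denominator.
        mpq_class a(1, 3);          // 1/3
        mpq_class b(2, 7);          // 2/7
        a.canonicalize();           // needed after constructing from num/den
        b.canonicalize();

        mpq_class sum = a + b;      // exact: no rounding error
        std::cout << sum << '\n';   // prints "13/21"
    }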
What are the challenges that are faced by the developers in implementing such datatypes?
It's really tedious to duplicate everything you can do with int.
The fraction type also has to be pervasive across all the libraries you wish to use, or you end up switching back and forth between fraction and built-in types.
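To make the tedium concrete, here is a minimal sketch of such a type (a hypothetical illustration with fixed-width integers and no overflow handling); every operator, comparison, and conversion that int gives you for free has to be written out by hand.

    #include <cstdint>
    #include <numeric>     // std::gcd (C++17)
    #include <stdexcept>

    // A deliberately minimal rational type: Nr/Dr with Dr > 0, always kept in
    // lowest terms. A real implementation must also guard against overflow in
    // the intermediate products below.
    struct Fraction {
        std::int64_t num;
        std::int64_t den;

        Fraction(std::int64_t n, std::int64_t d) : num(n), den(d) {
            if (den == 0) throw std::domain_error("zero denominator");
            if (den < 0) { num = -num; den = -den; }   // keep the sign in the numerator
            const std::int64_t g = std::gcd(num < 0 ? -num : num, den);
            num /= g;
            den /= g;
        }
    };

    // Every arithmetic operator, comparison, stream inserter, hash, etc.
    // has to be duplicated by hand -- this is the tedium referred to above.
    Fraction operator+(const Fraction& a, const Fraction& b) {
        return Fraction(a.num * b.den + b.num * a.den, a.den * b.den);
    }
    Fraction operator*(const Fraction& a, const Fraction& b) {
        return Fraction(a.num * b.num, a.den * b.den);
    }
    bool operator==(const Fraction& a, const Fraction& b) {
        return a.num == b.num && a.den == b.den;       // valid: both are normalized
    }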

C/C++ library with map()/mapto() function? [closed]

What I'm searching for is a function that maps a numeric value from one numeric range onto another. I did find a way to write this function manually, using the basic calculation output = output_start + ((output_end - output_start) / (input_end - input_start)) * (input - input_start); however, I find it rather bothersome to recreate it every time, to search through my old projects for it, or to create a header file just for this one function.
I'd imagine it's such a basic function that it should be somewhere in the C/C++ standard libraries or at least in one of the mainstream 3rd-party libraries, but I couldn't find it.
You are looking for a linear transformation: a function that, for every number in the range [input_start, input_end], computes a number in the range [output_start, output_end] using a proportionality factor, i.e. (output_end - output_start) / (input_end - input_start), so that the full output range is covered.
Unfortunately, this simple function doesn't exist in the standard library (it is in neither <cmath> nor <numeric>). In any case, when using such a transformation you usually want to compute the proportionality factor only once.
So the easiest way is to write your own function for this. In C++ you could create a class that computes the factor at construction.
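A minimal sketch of that idea (the class name LinearMap is made up for illustration): the factor is computed once in the constructor and reused on every call.

    #include <iostream>

    // Maps values from [input_start, input_end] onto [output_start, output_end]
    // with a single multiply-add per call; the slope is precomputed once.
    class LinearMap {
    public:
        LinearMap(double input_start, double input_end,
                  double output_start, double output_end)
            : in_start_(input_start),
              out_start_(output_start),
              factor_((output_end - output_start) / (input_end - input_start)) {}

        double operator()(double input) const {
            return out_start_ + factor_ * (input - in_start_);
        }

    private:
        double in_start_;
        double out_start_;
        double factor_;
    };

    int main()
    {
        LinearMap toPercent(0.0, 255.0, 0.0, 100.0);   // e.g. 8-bit value to percent
        std::cout << toPercent(128.0) << '\n';         // ~50.2
    }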

Best performance for basic arbitrary-precision arithmetic with arbitrary base [closed]

What is the best way to perform basic arbitrary-precision arithmetic in an arbitrary base, with the best possible performance?
I was thinking about switching to binary and then working with some inline assembly, but I really need the highest-performance approach and I am not sure that this is it.
EDIT: I do not want to use any library except the standard C++ one.
The problem is that the "best" performance attainable with multiprecision numeric algorithms depends very strongly on the data you're working with (such as the typical size of the numbers you need to calculate with). Consider the discussion of algorithm selection used by GNU GMP as an example:
https://gmplib.org/manual/Algorithms.html
GNU GMP code is also used inside glibc (in particular, in the precise floating-point conversion code), so in a sense it is part of a "standard C" library.
Speaking from personal experience, it is extremely difficult to beat GMP's performance figures (in fact, it is rather difficult even to get within a factor of 2 of GMP's performance in the general case, so if performance is an absolute priority you may want to reconsider your design goals). Performance in multiprecision calculations is not strongly dependent on implementation technique (so you're not going to win anything by using assembly instead of something like Java here, if your numbers are reasonably long): the algorithmic complexity will necessarily dominate. In fact, it makes sense to start with the highest-level language available and optimize from there.
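To make that concrete, here is a schoolbook sketch of addition in an arbitrary base using only the standard C++ library (a naive starting point, not a tuned implementation; multiplication and division follow the same digit-by-digit pattern).

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Digits are stored least-significant first, each in the range [0, base).
    using BigNum = std::vector<std::uint32_t>;

    // Schoolbook addition in an arbitrary base: O(n), carry propagated digit by digit.
    BigNum add(const BigNum& a, const BigNum& b, std::uint32_t base)
    {
        BigNum result;
        result.reserve(std::max(a.size(), b.size()) + 1);

        std::uint64_t carry = 0;
        for (std::size_t i = 0; i < a.size() || i < b.size() || carry != 0; ++i) {
            std::uint64_t sum = carry;
            if (i < a.size()) sum += a[i];
            if (i < b.size()) sum += b[i];
            result.push_back(static_cast<std::uint32_t>(sum % base));
            carry = sum / base;
        }
        return result;
    }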
And just in case, you should definitely go through chapter 4, volume 2 of Knuth's TAoCP if you haven't done so already.
I know this is probably not the answer you're looking for, but it's longer than a comment.

C++ Bessel function for complex numbers [closed]

I want to implement the Bessel functions of the first and second kind (Description of Bessel functions) for complex numbers in C++. Now I am looking for ways to introduce them into my source code. Since math.h only contains Bessel functions for real arguments, I would be interested in any kind of option.
I haven't found that Boost is compatible with complex arguments (though that may be a mistake on my part).
The FORTRAN code developed by D.E. Amos (the code used by MATLAB and others) is in the public domain and can be used by anybody. I have been developing a C++ interface to the library, extending it to the case of negative orders. You can check it out on GitHub.
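If you only need integer orders and small-to-moderate |z|, the defining power series can also be coded directly as a stopgap. This is a naive sketch with a fixed term cutoff and no scaling, so it loses accuracy for large arguments; Amos's routines remain the serious option.

    #include <complex>
    #include <iostream>

    // Bessel function of the first kind, integer order n >= 0, complex argument.
    // Direct evaluation of the power series
    //   J_n(z) = sum_{k>=0} (-1)^k / (k! (k+n)!) * (z/2)^(2k+n),
    // building each term from the previous one to avoid explicit factorials.
    std::complex<double> besselJ(int n, std::complex<double> z)
    {
        const std::complex<double> half = z / 2.0;

        // Term for k = 0: (z/2)^n / n!
        std::complex<double> term = 1.0;
        for (int i = 1; i <= n; ++i) term *= half / static_cast<double>(i);

        std::complex<double> sum = term;
        for (int k = 1; k <= 60; ++k) {            // fixed cutoff, adequate for moderate |z|
            term *= -(half * half) / static_cast<double>(k * (k + n));
            sum += term;
            if (std::abs(term) < 1e-16 * std::abs(sum)) break;
        }
        return sum;
    }

    int main()
    {
        std::cout << besselJ(0, {1.0, 0.0}) << '\n';   // ~(0.7652, 0), matches J0(1)
        std::cout << besselJ(1, {2.0, 1.0}) << '\n';   // complex argument
    }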
The Boost library implements ordinary Bessel functions of the first and second kind and modified Bessel functions of the first and second kind for both real and complex numbers (see documentation about Bessel functions).
Don't try to reinvent the wheel, just use the Boost implementation which is far superior to anything you could write yourself.

Library for Trust-Region Reflective algorithms in C [closed]

I'm trying to rebuild some Matlab code in C that uses their fsolve function. According to the documentation it uses a "trust region reflective" algorithm (I already built it using a Levenberg-Marquardt algorithm and it converges completely differently). Can anyone recommend a library for doing this type of optimization in C/C++?
Not sure what "reflective" adds to the "trust region" definition. However, Knitro is a powerful trust-region interior-point optimizer with a C/C++ interface. Unfortunately, Knitro is only available without cost in a limited edition for students; the full version requires a commercial license.
There is also Ipopt, which is not trust-region but nevertheless a powerful C/C++ based large-scale nonlinear constrained optimization engine with an open-source license.
Have you tried checking whether your function is convex? If LM and some other convex optimization algorithm converge to different results, there is a good chance that the underlying function is not convex. Also, have you checked whether the cost function is at least of order 2? If it is, minimizing the square of the cost function can work better than minimizing the cost function alone.
There are two types of general-purpose algorithms for which there is a global convergence guarantee (under standard assumptions, don't ask :) ): line search methods and trust region methods. If you wish, you can read more on this topic in Nocedal and Wright's book, Numerical Optimization.
I haven't tried Knitro recently.
Ipopt is the most robust solver among those I have tried; I highly recommend it. It implements the line search method and is written in C++.