I have to find the set of integers that minimize this objective function:
The constraints are:
every x must be a non-negative integer
T, A, and B are known double-precision numbers.
I have been looking at the OR-Tools C++ library in order to solve this problem, specifically at the CP-SAT solver.
Is it the right tool for such problems?
If yes, would it be feasible to convert all the doubles to integers in the objective function?
If not, what else do you suggest? (I'm also open to other open source C++ libraries)
It will fit in the CP-SAT solver, but you will need to scale the floating-point coefficients of the constraints to integers.
The objective function, on the other hand, does accept floating-point coefficients.
But (x1 + A1)^2 will propagate better if you keep it in that form instead of expanding it into A1^2 + 2 * A1 * x1 + x1^2; the expanded form does fit CP-SAT's linear-objective-with-double-coefficients limitation, provided you introduce temporary variables sx1 = x1 * x1.
Then make sure to use at least 8 workers for that (parameter num_search_workers: 8).
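For illustration, here is a minimal C++ sketch of that modeling approach, assuming a recent OR-Tools release and an objective of the form sum_i (x_i + A_i)^2; the variable bounds, the scale factor, and the sample coefficients are made-up assumptions:

    #include <cstdint>
    #include <cstdlib>
    #include <vector>
    #include "ortools/sat/cp_model.h"
    #include "ortools/sat/sat_parameters.pb.h"

    void SolveSketch() {
      using namespace operations_research::sat;
      CpModelBuilder model;

      const std::vector<int64_t> A = {1500, -2300};  // A_i scaled by 1000 (assumed)
      const int64_t ub = 10000;                      // assumed upper bound on x_i

      LinearExpr objective;
      for (const int64_t a : A) {
        const IntVar x = model.NewIntVar({0, ub});
        // Keep (x + a)^2 un-expanded, as recommended: y = x + a, sq = y * y.
        const IntVar y = model.NewIntVar({-ub - std::abs(a), ub + std::abs(a)});
        model.AddEquality(y, x + a);
        const int64_t m = ub + std::abs(a);
        const IntVar sq = model.NewIntVar({0, m * m});
        model.AddMultiplicationEquality(sq, {y, y});
        objective += sq;
      }
      model.Minimize(objective);

      SatParameters parameters;
      parameters.set_num_search_workers(8);  // at least 8 workers, as above
      const CpSolverResponse response =
          SolveWithParameters(model.Build(), parameters);
    }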
Now, I believe there are least-squares solvers that are better suited for this.
I'm implementing an arbitrary precision arithmetic library in C++ and I'm pretty much stuck when implementing the gamma function.
By using the recurrences gamma(n) = gamma(n - 1) * (n - 1) and gamma(n) = gamma(n + 1) / n, respectively, I can reduce the problem to evaluating gamma(r) for a value r in the range (1, 2], for any real value x.
However, I don't know how to evaluate gamma(r). For the Lanczos approximation (https://en.wikipedia.org/wiki/Lanczos_approximation), I need the precomputed coefficients p, which involve the factorial of a non-integer value (?!) and which I can't compute dynamically with my current knowledge. Precomputing values for p wouldn't make much sense when implementing an arbitrary-precision library.
Are there any algorithms that compute gamma(r) in a reasonable amount of time with arbitrary precision? Thanks for your help.
Spouge's approximation is similar to Lanczos's approximation, but probably easier to use for arbitrary precision, as you can set the desired error.
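For illustration, here is a double-precision sketch of Spouge's formula, with double standing in for an arbitrary-precision type; the relative error shrinks roughly like (2*pi)^-(a + 1/2), so a can be chosen from the target precision (a = 12 here is an arbitrary choice):

    #include <cmath>

    // Spouge's approximation:
    //   gamma(z + 1) = (z + a)^(z + 1/2) * exp(-(z + a)) *
    //                  (c_0 + sum_{k=1}^{a-1} c_k / (z + k)),
    // with c_0 = sqrt(2*pi) and
    //   c_k = (-1)^(k-1) / (k-1)! * (a - k)^(k - 1/2) * exp(a - k).
    double spouge_gamma(double x, int a = 12) {
        const double pi = std::acos(-1.0);
        const double z = x - 1.0;          // the formula yields gamma(z + 1)
        double sum = std::sqrt(2.0 * pi);  // c_0
        double sf = 1.0;                   // running (-1)^(k-1) / (k-1)!
        for (int k = 1; k < a; ++k) {
            if (k > 1) sf = -sf / (k - 1);
            sum += sf * std::pow(a - k, k - 0.5) * std::exp(double(a - k)) / (z + k);
        }
        return std::pow(z + a, z + 0.5) * std::exp(-(z + a)) * sum;
    }

With a big-float type in place of double, the c_k are computable on the fly from ordinary arithmetic, square roots, and exp, which is exactly what makes Spouge's form friendlier than Lanczos's for this use case.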
The Lanczos approximation doesn't seem too bad. What exactly do you find suspect about it?
The parts of the code that calculate p, C (the Chebyshev polynomial coefficients), and (a + 1/2)! can be implemented as stateful objects so that, for example, p(i) can be calculated from p(i-1), with the Chebyshev coefficients computed once and maintained in a matrix.
I just want to ask how to write this formula in C++:

v = sqrt(2K / M)

(the square root of 2K over M). I don't know how; please help.
I am going to find the mass using the kinetic energy formula, K = (1/2)mv^2.
I don't know how to write a square root in C++.
Use std::sqrt to compute the square root.
Given mass and energy as numeric types, you can use std::sqrt(2.0 * energy / mass) to compute the speed. I take care to write 2.0 to force the floating-point overload of std::sqrt to be used, and to ensure that the division is not an integer one.
Take care that mass is positive: a zero mass yields an infinity from the division, and a negative one yields a NaN, on platforms that use IEEE 754 floating-point types.
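Putting that together, a complete toy program (the sample values for energy and mass are made up):

    #include <cmath>
    #include <iostream>

    int main() {
        double energy = 50.0;  // kinetic energy K, in joules
        double mass = 4.0;     // mass m, in kilograms
        // v = sqrt(2K / m); writing 2.0 keeps the arithmetic in floating point.
        double speed = std::sqrt(2.0 * energy / mass);
        std::cout << "v = " << speed << " m/s\n";  // prints v = 5 m/s
    }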
First include the header file math.h (in C++, <cmath>).
And then write the formula as:

    double a = (2.0 * k) / m;  // use a floating-point type; int would truncate the division
    v = sqrt(a);

Here the sqrt() function is used for the square root; in the example above it is the square root of a.
When I run a solver to solve for x given y and the relationship y = f(x), I have to keep checking whether the previous guess of x and the current guess differ by at least the machine precision. What is the most efficient way of doing this in C++?
If I have x1 and x2, I want to check whether mantissa(x1) - mantissa(x2) < 1e-14, or the equivalent power of 2. Is there a predefined function that does this check for us? I did notice a Stack Overflow response describing an efficient way to get the mantissa of a floating-point number using unions. But is there a machine-independent implementation of this in Boost or GSL, etc.?
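One possibility, assuming Boost is acceptable: Boost.Math ships ULP-distance helpers in <boost/math/special_functions/next.hpp>. A sketch combining a relative-epsilon test with a ULP test follows; the tolerances (4 * epsilon, 4 ULPs) are illustrative choices, not recommendations:

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <boost/math/special_functions/next.hpp>

    // Two convergence tests in the spirit of the question.
    bool converged(double x1, double x2) {
        // Relative test using machine epsilon (needs only <limits>):
        const double tol = 4.0 * std::numeric_limits<double>::epsilon();
        if (std::fabs(x1 - x2) <= tol * std::max(std::fabs(x1), std::fabs(x2)))
            return true;
        // ULP test: float_distance counts the representable doubles
        // lying between x1 and x2, independent of their magnitude.
        return std::fabs(boost::math::float_distance(x1, x2)) <= 4.0;
    }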
I have to implement a function that performs a Cholesky factorization of a positive semi-definite matrix in C++, and I am wondering whether there is a library or anything out there that is already optimized. It should work like, or similarly to, what is described in this paper:
http://www.sciencedirect.com/science/article/pii/S0096300310012713
This is an example for the positive definite case, but it doesn't work for positive semi-definite matrices: http://en.wikipedia.org/wiki/Cholesky_decomposition#The_Cholesky.E2.80.93Banachiewicz_and_Cholesky.E2.80.93Crout_algorithms
The program must be in C++, with no C/Fortran libraries (think pointy-haired boss giving instructions), which means ATLAS, LAPACK, etc. are out. I have looked through MTL and Boost, but theirs only work for positive definite matrices. Are there any libraries that I haven't found, or even single functions that have been written?
The problem with the Cholesky decomposition of a positive semi-definite matrix is that 1) it is not unique and 2) Crout's algorithm fails.
The existence of a decomposition is usually proven non-constructively, via a limiting argument: if M_n -> M and M_n = U_n^T U_n, then ||U_n|| = ||M_n||^(1/2), where ||.|| is the Hilbert-Schmidt norm, so U_n is a bounded sequence; extract a convergent subsequence to find a limit U satisfying U^T U = M, with U triangular.
I have found that, in the cases I was interested in, it was satisfactory to multiply the diagonal elements by 1 + epsilon, with epsilon small (a few thousand times the machine epsilon), to obtain a perfectly acceptable decomposition.
Indeed, if M is positive semi-definite, then for each epsilon > 0, M + epsilon*I is positive definite.
As the scheme converges when epsilon goes to zero, you can contemplate computing the decomposition for multiple values of epsilon and performing a Richardson extrapolation.
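A minimal sketch of that idea, factoring M + eps*I with the Cholesky-Banachiewicz recurrence; the row-major layout, the relative scaling of the jitter, and the names are illustrative assumptions:

    #include <algorithm>
    #include <cmath>
    #include <stdexcept>
    #include <vector>

    // `a` is an n x n symmetric positive semi-definite matrix, row-major.
    // eps is a small multiple of machine epsilon, as suggested above.
    std::vector<double> cholesky_jittered(std::vector<double> a, int n,
                                          double eps) {
        // Scale the jitter by the largest diagonal entry so it is relative.
        double dmax = 0.0;
        for (int i = 0; i < n; ++i) dmax = std::max(dmax, a[i * n + i]);
        for (int i = 0; i < n; ++i) a[i * n + i] += eps * dmax;

        std::vector<double> L(n * n, 0.0);
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j <= i; ++j) {
                double s = a[i * n + j];
                for (int k = 0; k < j; ++k) s -= L[i * n + k] * L[j * n + k];
                if (i == j) {
                    if (s <= 0.0)
                        throw std::runtime_error("pivot not positive; increase eps");
                    L[i * n + i] = std::sqrt(s);
                } else {
                    L[i * n + j] = s / L[j * n + j];
                }
            }
        }
        return L;  // L * L^T approximates the input matrix
    }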
As for the positive definite case, you could implement Crout's algorithm yourself (there is sample code in Numerical Recipes), but I would highly recommend against writing it yourself and advise using LAPACK instead.
This may involve having your boss pay for Intel MKL if he is concerned about potentially poor implementations of LAPACK. Most of the time I have heard such a speech, the rationale was "but we can't control the code, we want to write it ourselves so that we can debug it in case of a problem". Dumb argument. LAPACK is 40 years old and thoroughly tested.
Requiring not to use LAPACK is as silly as requiring not to use the standard library for sine, cosine and logarithms.
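For reference, calling LAPACK through its C interface is only a few lines; this sketch assumes LAPACKE is installed and linked:

    #include <vector>
    #include <lapacke.h>  // C interface to LAPACK

    // Factor a symmetric positive definite n x n matrix in place with dpotrf.
    bool cholesky_lapack(std::vector<double>& a, int n) {
        // 'L' requests the lower triangular factor; a is row-major here.
        lapack_int info = LAPACKE_dpotrf(LAPACK_ROW_MAJOR, 'L', n, a.data(), n);
        return info == 0;  // info > 0 means the matrix is not positive definite
    }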
So we have a function like exp(-a*x) / sqrt(x) (in code, pow(e, -a*x) / sqrt(x), where a and e are const floats), and some float eps = pow(10, -4). We need to find the x starting from which the integral of that function from x to infinity is less than eps. We cannot use special library integration functions, just standard math operators. The point is to achieve maximum evaluation speed.
If you perform the u-substitution u=sqrt(x), your integral will become 2 * integral e^(-au^2) du. With one more substitution you can reduce it to a standard normal. Once you have it in standard normal form, this reduces to calculating erf(x). The substitutions can be done abstractly for any a, and the results hardcoded for simplicity and speed.
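Concretely, carrying the substitution through gives a closed form for the tail (a sketch; std::erfc has been available in <cmath> since C++11):

    #include <cmath>

    // After u = sqrt(t), the tail integral reduces to a Gaussian one:
    //   integral from x to infinity of exp(-a*t)/sqrt(t) dt
    //     = 2 * integral from sqrt(x) to infinity of exp(-a*u^2) du
    //     = sqrt(pi / a) * erfc(sqrt(a * x)).
    double tail(double a, double x) {
        const double pi = std::acos(-1.0);
        return std::sqrt(pi / a) * std::erfc(std::sqrt(a * x));
    }

Finding the smallest x with tail(a, x) < eps is then a one-dimensional search, for instance the doubling-and-bisection scheme sketched at the end of this thread.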
To calculate this integral you need to calculate the error function. If you use gcc you can find the erf(...) function in math.h, but it doesn't take parameters that let you control the precision. However, you can evaluate the error function's value yourself using its Taylor series. With a given eps, it is possible to calculate in advance the exact number of terms of the series required.
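For illustration, a minimal sketch of that series: erf(x) = 2/sqrt(pi) * sum over n >= 0 of (-1)^n * x^(2n+1) / (n! * (2n+1)). Since the series alternates, stopping once a term drops below eps bounds the truncation error; the 1e-12 default here is an arbitrary choice:

    #include <cmath>

    // Taylor series for erf(x), with the terms updated incrementally.
    double erf_taylor(double x, double eps = 1e-12) {
        double term = x;            // (-1)^n x^(2n+1) / n!, starting at n = 0
        double sum = x;             // accumulates term / (2n + 1)
        for (int n = 1; std::fabs(term) > eps; ++n) {
            term *= -x * x / n;     // incremental update of the n-th term
            sum += term / (2 * n + 1);
        }
        const double pi = std::acos(-1.0);
        return 2.0 / std::sqrt(pi) * sum;
    }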
Hmm, no one seems to understand the question. The question is: given some function f, find the smallest x such that the integral of f from x to +infinity is less than eps. That's the question. So basically we try x = 0, then x = 0.1, then x = 0.2, ... until the integral, for all intents and purposes, vanishes.
For example, given the bell curve for IQ of programmers on SO, at what IQ is the cumulative intelligence of programmers with higher IQ vanishingly small? If we pick x = 100 we know at least half the programmers will have a higher IQ than 100, if we pick 120, how many are left? What about 200? If we have 10,000 programmers here and eps = 1/10000 we're basically asking what IQ the top 0.01% of SO contributors have.
The question is: what is the most efficient way to find this number, given that nothing is known about f other than that it decreases fast enough that its integral from x to infinity approaches zero as x approaches infinity?
The general answer is: you must start with a guess of some kind. If the result is too big, double your guess, and keep going until you satisfy the requirement. Then, go back to the last value you had (which didn't satisfy it) and do a binary chop to find the smallest x that does (a sketch follows after the next paragraph).
Making a good guess is hard. One way is to use a Chebyshev approximation of the function, integrate it analytically, solve the problem with the resulting polynomial, and use the solution as your starting guess. The assumption is that all functions look like polynomials of sufficiently high order in any given range.
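A minimal sketch of the doubling-then-binary-chop strategy described above, assuming some routine tail(x) that estimates the integral of f from x to infinity (a closed form like the erfc expression earlier, or numerical quadrature); the name and the iteration cap are illustrative:

    #include <functional>

    // Guess, double until the tail drops below eps, then binary chop.
    double smallest_x(const std::function<double(double)>& tail, double eps) {
        double lo = 0.0, hi = 1.0;             // initial guess
        while (tail(hi) >= eps) {              // doubling phase
            lo = hi;
            hi *= 2.0;
        }
        for (int i = 0; i < 60; ++i) {         // binary chop in [lo, hi]
            const double mid = 0.5 * (lo + hi);
            (tail(mid) < eps ? hi : lo) = mid;
        }
        return hi;                             // smallest x with tail(x) < eps
    }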