I'm trying to solve aX^2 + bX + c = 0, but I can't seem to make it work without using the math header (which I'm not supposed to use).
printf("%E",(-b+(b*b-4*a*c)E0.5)/2a);
Use std::sqrt from header <cmath>. Also, you must write (2 * a), not 2a.
Another thing: don't use the textbook formula for solving quadratic equations directly; it suffers from catastrophic cancellation when b^2 is much larger than 4ac. Use the method described there.
If you can't use the math header, then you have to implement the square root yourself, e.g. as described there:
double my_abs(double x)
{
    return x > 0 ? x : -x;
}

// Newton/Heron iteration: u_{n+1} = (u_n + x / u_n) / 2 converges to sqrt(x).
double my_sqrt(double x)
{
    static const double eps = 1e-12;
    double u = x, uold;
    do { uold = u; u = (u * u + x) / (2 * u); }
    while (my_abs(u - uold) > eps * x);   // iterate until the update is negligible
    return u;
}
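Putting the pieces together, here is a minimal sketch (my own, not from the thread) of a solver that reuses my_sqrt above together with the numerically stable rearrangement of the quadratic formula; it assumes a != 0 and a strictly positive discriminant:

#include <cstdio>

// Stable variant: q = -(b + sign(b) * sqrt(disc)) / 2, and the roots are
// q / a and c / q. This avoids subtracting two nearly equal numbers.
void solve_quadratic(double a, double b, double c)
{
    double disc = b * b - 4 * a * c;
    if (disc < 0) { std::printf("no real roots\n"); return; }
    double s = my_sqrt(disc);                 // assumes disc > 0 (my_sqrt(0) would divide by zero)
    double q = -0.5 * (b + (b >= 0 ? s : -s));
    std::printf("%E %E\n", q / a, c / q);     // assumes q != 0 (i.e. not b == c == 0)
}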
That is not at all how E works.
E is used in floating-point literals to express a number in scientific notation (more or less):
// x will be the same as 0.00104
double x = 1.04e-3;
If you want to take a square root, then you should be using a sqrt function:
(-b + sqrt(b*b - 4*a*c)) / (2 * a)
Of course, since you can't use #include <cmath>, you'd have to roll your own!
You can't use E as pow in C/C++ (see for example "mathematical power operator not working as expected"). And the E in the printf format will print the number in scientific notation (like 3.9265E+2).
E only works when you're typing out a floating-point constant, like 2.2E6. To compute powers at run time, you need std::pow() from <cmath>. In this case, you could use std::sqrt().
I suppose by E you mean the power, but there is no such power operator in C++. Use either the pow function or, more appropriate in your case, the sqrt function. But these are both in <cmath>. If you cannot use <cmath> (homework assignment?), you might have to implement your own square root function.
I think you are confusing scientific notation (3.2E6 = 3.2 x 10^6) with exponentiation (sqrt(5) = 5^(1/2)), where I am using ^ for "raise to the power of". Unfortunately, C++, like C, doesn't have a built-in power operator. So you would normally use either sqrt(x) or pow(x, 0.5) from the math library.
However, if you want to solve this without the math header, you'll have to find a different way to calculate square roots. You could write a subroutine to use the Babylonian or Heron method, for example...
In Python 3.8, there is a built-in function for calculating the number of combinations (nCr(n, k)):
>>> from math import comb
>>> comb(10, 3)
120
Is there any such function in C++?
The beta function from the mathematical library can be used to express binomial coefficients (aka nCr).
double binom(int n, int k) {
    // C(n, k) = 1 / ((n + 1) * B(n - k + 1, k + 1))
    return 1 / ((n + 1) * std::beta(n - k + 1, k + 1));
}
Source.
This function is available either with C++17 or as part of an implementation of the mathematical special functions extensions for C++ (ISO/IEC 29124:2010). In the latter case, your implementation may require you to #define __STDCPP_WANT_MATH_SPEC_FUNCS__ 1 before including the <cmath> header for the function to be available.
Note that, unlike Python, C++ does not have built-in support for big integer numbers, so using floating point arithmetic is probably a good choice here in the first place.
Disclaimer: if using C++17, use the beta function as described by @ComicSansMS. Otherwise, you can use the tgamma or even lgamma functions if using C++11.
Using lgamma:
double comb_l(double n, double k) {
    return std::exp(std::lgamma(n + 1) - std::lgamma(k + 1) - std::lgamma(n - k + 1));
}
Using tgamma:
double comb_t(double n, double k) {
    return std::tgamma(n + 1) / std::tgamma(k + 1) / std::tgamma(n - k + 1);
}
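As a quick sanity check (my addition, not part of the original answer): the gamma-based versions return doubles, so round to the nearest integer when an exact small count is expected. The snippet repeats comb_t so it is self-contained:

#include <cmath>
#include <cstdio>

double comb_t(double n, double k) {
    return std::tgamma(n + 1) / std::tgamma(k + 1) / std::tgamma(n - k + 1);
}

int main()
{
    // Round the floating-point result before treating it as an exact count.
    std::printf("%lld\n", std::llround(comb_t(10, 3)));  // prints 120
}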
I have the following expression:
A = cos(5x),
where x is a letter indicating a generic parameter.
In my program I have to work on A, and after some calculations I must get a result that is still, explicitly, a function of x.
In order to do that, what kind of variable should A (and I guess all the other variables that I use for my calculations) be?
Many thanks to whoever answers.
I'm guessing you need precision. In which case, double is probably what you want.
You can also use float if you need to operate on a lot of floating-point numbers (think in the order of thousands or more) and analysis of the algorithm has shown that the reduced range and accuracy don't pose a problem.
If you need more range or accuracy than double, long double can also be used.
To define the function A(x) = cos(5 * x)
You may do:
Regular function:
double A(double x) { return std::cos(5 * x); }
Lambda:
auto A = [](double x) { return std::cos(5 * x); };
And then just call it like any callable object.
A(4.); // cos(20.)
It sounds like you're trying to do a symbolic calculation, i.e.
A = magic(cos(5 x))
B = acos(A)
print B
> 5 x
If so, there isn't a simple datatype that will do this for you, unless you're programming in Mathematica.
The most general answer is "A will be an Expression in some AST representation for which you have a general algebraic solver."
However, if you really want to end up with a C++ function you can call (instead of a symbolic representation you can print as well as evaluating), you can just use function composition. In that case, A would be a
std::function<double(double)>
or something similar.
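To make the distinction concrete, here is a minimal sketch (my example, not from the answer): composition evaluates numerically, and it will never hand back the expression "5x" the way a symbolic system would:

#include <cmath>
#include <functional>
#include <iostream>

int main()
{
    std::function<double(double)> A = [](double x) { return std::cos(5 * x); };

    // B = acos . A recovers 5*x numerically (for 5*x in [0, pi]),
    // but only as a number, never as a formula.
    auto B = [A](double x) { return std::acos(A(x)); };

    std::cout << B(0.1) << '\n';  // prints 0.5, i.e. 5 * 0.1, within rounding
}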
As I showed in other questions, I'm currently implementing a C++ metaprogramming library which includes, among other things, a set of types and metafunctions for compile-time arithmetic.
My goal now is to implement the trigonometric functions sin and cos for my fixed point type.
My problem is that every paper I have found about trigonometric algorithms talks about CORDIC or some kind of Taylor series. The problem with CORDIC is that it needs a huge set of precomputed values in a lookup table, which I couldn't provide easily with TMP. Also, the point of CORDIC is to compute trigonometric functions on hardware that has no multiplier, and I'm perfectly capable of doing multiplications with my library.
So my question is: Is there any other simple alternative to CORDIC and Taylor Series to compute trigonometric functions?
Finally I have implemented the sin metafunction through a Taylor series, using a series of 10 terms by default (could be configurable).
I have based my implementation on this interesting article.
My library includes an implementation of a TMP for-loop using iterators, and expression templates to allow writing complex expressions in a "clear" way (clear compared to the usual template-metaprogramming syntax add<mul<sub<1,2>>>...). This allows me to literally copy-paste the C implementation provided by the article:
template<typename T , typename TERMS_COUNT = mpl::uinteger<4>>
struct sin_t;
template<typename T , typename TERMS_COUNT = mpl::uinteger<4>>
using sin = typename sin_t<T,TERMS_COUNT>::result;
/*
* sin() function implementation through Taylor series (Check http://www10.informatik.uni-erlangen.de/~pflaum/pflaum/ProSeminar/meta-art.html)
*
* The C equivalent code is:
*
* // Calculate sin(x) using j terms
* float sine(float x, int j)
* {
* float val = 1;
*
* for (int k = j - 1; k >= 0; --k)
* val = 1 - x*x/(2*k+2)/(2*k+3)*val;
*
* return x * val;
* }
*/
template<mpl::fpbits BITS , mpl::fbcount PRECISION , unsigned int TERMS_COUNT>
struct sin_t<mpl::fixed_point<BITS,PRECISION>,mpl::uinteger<TERMS_COUNT>>
{
private:
    using x     = mpl::fixed_point<BITS,PRECISION>;
    using begin = mpl::make_integer_backward_iterator<TERMS_COUNT-1>;
    using end   = mpl::make_integer_backward_iterator<-1>;
    using one   = mpl::decimal<1,0,PRECISION>;
    using two   = mpl::decimal<2,0,PRECISION>;
    using three = mpl::decimal<3,0,PRECISION>;

    // One Horner-style Taylor step: val = 1 - x*x/(2k+2)/(2k+3) * val
    template<typename K , typename VAL>
    struct kernel : public mpl::function<decltype( one() - ( x() * x() )/(two() * K() + two())/(two()*K()+three())*VAL() )> {};

public:
    using result = decltype( x() * mpl::for_loop<begin , end , one , kernel>() );
};
Here is the header of the implementation in the project repo.
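For completeness, a hypothetical usage sketch, inferred purely from the aliases in the snippet above (the real mpl spellings for fixed-point literals and defaults may differ):

// Hypothetical: mpl::decimal<5, -1, PRECISION> is assumed to spell 0.5,
// by analogy with mpl::decimal<1, 0, PRECISION> spelling 1.0 above.
using half     = mpl::decimal<5, -1, 16>;
using sin_half = mpl::sin<half, mpl::uinteger<10>>;  // sin(0.5) using 10 Taylor terms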
huge set of precomputed values through a lookup-table
How many is "huge"? Sounds like a one-time effort that would be fast as hell once you were done. My advice? Get a shovel and fill in that table. You'd have it done by the time you get another answer here.
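In that spirit, a one-off generator (my sketch, not from the answer) that prints the CORDIC angle table atan(2^-i) so the constants can be pasted into the metaprogram as literals:

#include <cmath>
#include <cstdio>

int main()
{
    // 32 entries cover 32 bits of phase; adjust the count to taste.
    for (int i = 0; i < 32; ++i)
        std::printf("atan(2^-%2d) = %.20f\n", i, std::atan(std::ldexp(1.0, -i)));
}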
Are there any details available for the algorithm behind the erf function of Boost? The documentation of the module is not very precise. All I found out is that several methods are mixed. To me it looks like variations of Abramowitz and Stegun.
Which methods are mixed?
How are the methods mixed?
What is the complexity of the erf-function (constant time)?
Sebastian
The docs for the Boost Math Toolkit have a long list of references, among which is Abramowitz and Stegun. The erf function interface contains a policy template parameter that can be used to control the numerical precision (and hence its run-time complexity).
#include <boost/math/special_functions/erf.hpp>
namespace boost{ namespace math{
template <class T>
calculated-result-type erf(T z);
template <class T, class Policy>
calculated-result-type erf(T z, const Policy&);
template <class T>
calculated-result-type erfc(T z);
template <class T, class Policy>
calculated-result-type erfc(T z, const Policy&);
}} // namespaces
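A minimal usage sketch based on this interface (my example; digits10 is a standard Boost.Math policy, and the exact output depends on the library version):

#include <boost/math/special_functions/erf.hpp>
#include <boost/math/policies/policy.hpp>
#include <iostream>

int main()
{
    namespace bm = boost::math;

    std::cout << bm::erf(1.0) << '\n';  // ~0.842701, default precision

    // Trade accuracy for speed by requesting fewer decimal digits:
    bm::policies::policy<bm::policies::digits10<5>> pol;
    std::cout << bm::erf(1.0, pol) << '\n';
}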
UPDATE:
Below is a verbatim copy of the "Implementation" section of the erf reference provided earlier:
Implementation
All versions of these functions first use the usual reflection formulas to make their arguments positive:
erf(-z) = 1 - erf(z);
erfc(-z) = 2 - erfc(z); // preferred when -z < -0.5
erfc(-z) = 1 + erf(z); // preferred when -0.5 <= -z < 0
The generic versions of these functions are implemented in terms of the incomplete gamma function.
When the significand (mantissa) size is recognised (currently for 53, 64 and 113-bit reals, plus single-precision 24-bit handled via promotion to double) then a series of rational approximations devised by JM are used.
For z <= 0.5, a rational approximation to erf is used, based on the observation that erf is an odd function; erf is therefore calculated using:
erf(z) = z * (C + R(z*z));
where the rational approximation R(z*z) is optimised for absolute error: as long as its absolute error is small enough compared to the constant C, then any round-off error incurred during the computation of R(z*z) will effectively disappear from the result. As a result the error for erf and erfc in this region is very low: the last bit is incorrect in only a very small number of cases.
For z > 0.5 we observe that over a small interval [a, b):
erfc(z) * exp(z*z) * z ~ c
for some constant c.
Therefore for z > 0.5 we calculate erfc using:
erfc(z) = exp(-z*z) * (C + R(z - B)) / z;
Again R(z - B) is optimised for absolute error, and the constant C is the average of erfc(z) * exp(z*z) * z taken at the endpoints of the range. Once again, as long as the absolute error in R(z - B) is small compared to c then c + R(z - B) will be correctly rounded, and the error in the result will depend only on the accuracy of the exp function. In practice, in all but a very small number of cases, the error is confined to the last bit of the result. The constant B is chosen so that the left hand end of the range of the rational approximation is 0.
For large z over a range [a, +∞] the above approximation is modified to:
erfc(z) = exp(-z*z) * (C + R(1 / z)) / z;
The rational approximations are explained in excruciating detail. If you need more details, you can always look at the source code.
I've just finished second year at uni doing a games course, and it has always been bugging me how math and game programming are related. Up until now I've been using vectors, matrices, and quaternions in games, and I can understand how these fit into games.
This is a general question about the relationship between maths and programming for real-time graphics; I'm curious how dynamic the maths is. Is it a case where all the formulas and derivatives are predefined (or semi-defined)?
Is it even feasible to calculate derivatives/integrals in realtime?
Here are some examples of things where I don't see how they fit into programming/maths.
Maclaurin/Taylor series: I can see these are useful, but is it the case that you must pass your function and its derivatives, or can you pass it a single function and have it work out the derivatives for you?
MacLaurin(sin(x)); or MacLaurin(sin(x), cos(x), -sin(x));
Derivatives/integrals: this is related to the first point. Is calculating the y' of a function done dynamically at run time, or is it something that is done statically, perhaps with variables inside a set function?
f = derive(x); or f = derivedX;
Bilinear patches: we learned this as a way to possibly generate landscapes in small chunks that could be 'sewn' together. Is this something that happens in games? I've never heard of this (granted, my knowledge is very limited) being used with procedural methods or otherwise. What I've done so far involves arrays of vertex information being processed.
Sorry if this is off topic, but the community here seems spot on with this kind of thing.
Thanks.
Skizz's answer is true when taken literally, but only a small change is required to make it possible to compute the derivative of a C++ function. We modify Skizz's function f to
template<class Float> Float f (Float x)
{
    return x * x + Float(4.0f) * x + Float(6.0f); // f(x) = x^2 + 4x + 6
}
It is now possible to write a C++ function to compute the derivative of f with respect to x. Here is a complete self-contained program to compute the derivative of f. It is exact (to machine precision) as it's not using an inaccurate method like finite differences. I explain how it works in a paper I wrote. It generalises to higher derivatives. Note that much of the work is done statically by the compiler. If you turn up optimization, and your compiler inlines decently, it should be as fast as anything you could write by hand for simple functions. (Sometimes faster! In particular, it's quite good at amortising the cost of computing f and f' simultaneously because it makes common subexpression elimination easier for the compiler to spot than if you write separate functions for f and f'.)
#include <iostream>

using namespace std;

template<class Float>
Float f(Float x)
{
    return x * x + Float(4.0f) * x + Float(6.0f);
}

struct D
{
    D(float x0, float dx0 = 0) : x(x0), dx(dx0) { }
    float x, dx;
};

D operator+(const D &a, const D &b)
{
    // The rule for the sum of two functions.
    return D(a.x + b.x, a.dx + b.dx);
}

D operator*(const D &a, const D &b)
{
    // The usual Leibniz product rule.
    return D(a.x * b.x, a.x * b.dx + a.dx * b.x);
}

// Here's the function Skizz said you couldn't write.
float d(D (*f)(D), float x) {
    return f(D(x, 1.0f)).dx;
}

int main()
{
    cout << f(0) << endl;
    // We can't just take the address of f. We need to say which instance of the
    // template we need. In this case, f<D>.
    cout << d(&f<D>, 0.0f) << endl;
}
It prints the results 6 and 4 as you should expect. Try other functions f. A nice exercise is to try working out the rules to allow subtraction, division, trig functions etc.
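For instance, building on the D struct above, the subtraction, quotient, and sine rules might look like this (my sketch, following the same pattern):

#include <cmath>

D operator-(const D &a, const D &b)
{
    // (f - g)' = f' - g'
    return D(a.x - b.x, a.dx - b.dx);
}

D operator/(const D &a, const D &b)
{
    // Quotient rule: (f / g)' = (f'g - fg') / g^2
    return D(a.x / b.x, (a.dx * b.x - a.x * b.dx) / (b.x * b.x));
}

D sin(const D &a)
{
    // Chain rule: (sin f)' = cos(f) * f'
    return D(std::sin(a.x), std::cos(a.x) * a.dx);
}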
2) Derivatives and integrals are usually not computed on large data sets in real time; it's too expensive. Instead they are precomputed. For example (off the top of my head), to render single-scattering media, Bo Sun et al. use their "airlight model", which consists of a lot of algebraic shortcuts to get a precomputed lookup table.
3) Streaming large data sets is a big topic, especially in terrain.
A lot of the maths you will encounter in games is there to solve very specific problems, and is usually kept simple. Linear algebra is used far more than any calculus. In graphics (I like this the most) a lot of the algorithms come from research done in academia, and are then modified for speed by game programmers: although even academic research makes speed its goal these days.
I recommend the two books Real-Time Collision Detection and Real-Time Rendering, which contain the guts of most of the maths and concepts used in game engine programming.
I think there's a fundamental problem with your understanding of the C++ language itself. Functions in C++ are not the same as mathematical functions. So, in C++, you could define a function (which I will call a method from now on, to avoid confusion) to implement a mathematical function:
float f (float x)
{
    return x * x + 4.0f * x + 6.0f; // f(x) = x^2 + 4x + 6
}
In C++, there is no way to do anything with the method f other than get the value of f(x) for a given x. The mathematical function f(x) can be transformed quite easily, into f'(x) for example, which in the example above is f'(x) = 2x + 4. To do this in C++ you'd need to define a method df(x):
float df (float x)
{
    return 2.0f * x + 4.0f; // f'(x) = 2x + 4
}
You can't do this:
get_derivative (f(x));
and have the method get_derivative transform the method f(x) for you.
Also, you would have to ensure that when you wanted the derivative of f that you call the method df. If you called the method for the derivative of g by accident, your results would be wrong.
We can, however, approximate the derivative of f(x) for a given x:
// Pass a pointer to the method f and the value x.
float d (float (*f) (float x), float x)
{
    const float epsilon = 1e-4f; // some small value
    float dy = f(x + epsilon / 2.0f) - f(x - epsilon / 2.0f);
    return dy / epsilon;         // central difference approximation of f'(x)
}
but this is very unstable and quite inaccurate.
Now, in C++ you can create a class to help here:
class Function
{
public:
    virtual float f (float x) = 0;   // f(x)
    virtual float df (float x) = 0;  // f'(x)
    virtual float ddf (float x) = 0; // f''(x)
    // if you wanted further transformations you'd need to add methods for them
};
and create our specific mathematical function:
class ExampleFunction : public Function
{
public:
    float f (float x) { return x * x + 4.0f * x + 6.0f; } // f(x) = x^2 + 4x + 6
    float df (float x) { return 2.0f * x + 4.0f; }        // f'(x) = 2x + 4
    float ddf (float x) { return 2.0f; }                  // f''(x) = 2
};
and pass an instance of this class to a series expansion routine:
float Series (Function &f, float x)
{
    return f.f (x) + f.df (x) + f.ddf (x); // series = f(x) + f'(x) + f''(x)
}
We're still having to write a method for the function's derivative ourselves, but at least we're not going to accidentally call the wrong one.
Now, as others have stated, games tend to favour speed, so a lot of the maths is simplified: interpolation, pre-computed tables, etc.
Most of the maths in games is designed to be as cheap to calculate as possible, trading accuracy for speed. For example, much of the number crunching uses integers or single-precision floats rather than doubles.
Not sure about your specific examples, but if you can define a cheap (to calculate) formula for a derivative beforehand, then that is preferable to calculating things on the fly.
In games, performance is paramount. You won't find anything that's done dynamically when it could be done statically, unless it leads to a notable increase in visual fidelity.
You might be interested in compile-time symbolic differentiation. This can (in principle) be done with C++ templates. Whether games do this in practice, I have no idea (symbolic differentiation might be too hard to program correctly, and such extensive template use might be too expensive in compile time).
However, I thought that you might find the discussion of this topic interesting. Googling "c++ template symbolic derivative" gives a few articles.
There are many great answers here if you are interested in symbolic calculation and computation of derivatives.
However, just as a sanity check, this kind of symbolic (analytical) calculus isn't practical to do in real time in the context of games.
In my experience (which is more 3D geometry in computer vision than games), most of the calculus and math in 3D geometry comes in by way of computing things offline ahead of time and then coding to implement this math. It's very seldom that you'll need to symbolically compute things on the fly and then get on-the-fly analytical formulae this way.
Can any game programmers verify?
1), 2)
Maclaurin/Taylor series (1) are constructed from derivatives (2) in any case.
Yes, you are unlikely to need to symbolically compute any of these at run-time - but for sure user207442's answer is great if you need it.
What you do find is that you need to perform a mathematical calculation and that you need to do it in reasonable time, or sometimes very fast. To do this, even if you re-use other's solutions, you will need to understand basic analysis.
If you do have to solve the problem yourself, the upside is that you often only need an approximate answer. This means that, for example, a series-type expansion may well allow you to reduce a complex function to a simple linear or quadratic, which will be very fast; see the sketch below.
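As a concrete illustration (my example, not the answerer's): near zero, the cubic Taylor polynomial of sin is already a usable approximation and is far cheaper than a full library call in a hot loop:

#include <cmath>
#include <cstdio>

// Small-angle approximation: sin(x) ~ x - x^3 / 6 for |x| small.
float fast_sin(float x)
{
    return x - (x * x * x) / 6.0f;
}

int main()
{
    std::printf("%f vs %f\n", fast_sin(0.3f), std::sin(0.3f));  // 0.295500 vs 0.295520
}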
For integrals, you can often compute the result numerically, but it will always be much slower than an analytic solution. The difference may well be the difference between being practical or not.
In short: Yes, you need to learn the maths, but in order to write the program rather than have the program do it for you.