Let's say I have a function that returns a random number.
//defines the max value's interval
enum class Type {
INCLUSIVE,
EXCLUSIVE
};
template<typename T = int, Type Interval = Type::EXCLUSIVE>
T rand(const T min, const T max);
This currently works as expected; the usage is as follows:
int result = rand(1,6); //returns an int [1, 6)
double result = rand(1.0, 6.0); //returns a double [1.0, 6.0)
int result = rand<int, Type::INCLUSIVE>(1, 6); // returns an int [1, 6]
double result = rand<double, Type::INCLUSIVE>(1.0, 6.0); //returns a double [1.0, 6.0]
I want the Interval to have a default value depending on what T is passed in. For example, if T is an int, the Interval will be EXCLUSIVE. If T is a double (or floating point), Interval will be INCLUSIVE.
I tried using std::conditional like so:
template<typename T = int, typename Type Interval = std::conditional<std::is_integral<T>::value, Type::EXCLUSIVE, Type::INCLUSIVE>::type>
T RandomNumber(const T min, const T max);
Where the expected behavior is:
int result = rand(1,6); //returns an int [1, 6)
double result = rand(1.0, 6.0); //returns a double [1.0, 6.0]
//with the possibility of still overriding the default behavior e.g.
int result = rand<int, Type::INCLUSIVE>(1,6); //returns an int [1, 6]
I can't get it to work; I get error C2783, "could not deduce template argument for 'Interval'".
Is there another way of doing this, or am I doing it incorrectly?
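For what it's worth, a minimal sketch of one way this can work: std::conditional selects between types, but Type::EXCLUSIVE and Type::INCLUSIVE are values, so a plain conditional expression in the default of the non-type parameter is enough. The function is renamed rand_range here to avoid clashing with ::rand, and its <random>-based body is a stand-in, not the asker's actual implementation:

```cpp
#include <random>
#include <type_traits>

enum class Type { INCLUSIVE, EXCLUSIVE };

// The default for Interval is computed from T with a constant expression:
// integral T defaults to EXCLUSIVE, floating-point T to INCLUSIVE.
template<typename T = int,
         Type Interval = std::is_integral<T>::value ? Type::EXCLUSIVE
                                                    : Type::INCLUSIVE>
T rand_range(const T min, const T max) {
    static std::mt19937 gen{std::random_device{}()};
    if constexpr (std::is_integral<T>::value) {
        // uniform_int_distribution is inclusive on both ends,
        // so shrink max by one for the EXCLUSIVE case
        std::uniform_int_distribution<T> dist(
            min, Interval == Type::INCLUSIVE ? max : max - 1);
        return dist(gen);
    } else {
        // uniform_real_distribution yields [min, max); good enough for a sketch
        std::uniform_real_distribution<T> dist(min, max);
        return dist(gen);
    }
}
```

With this, rand_range(1, 6) yields an int in [1, 6), rand_range(1.0, 6.0) a double with the INCLUSIVE default, and rand_range<int, Type::INCLUSIVE>(1, 6) still overrides the default explicitly.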
I am a C++ noob.
What I am trying to do is sum the values of a vector of doubles (let's call it x) while ignoring any values that are NaN. I tried to look this up, but I couldn't find anything specifically addressing what happens when a vector contains NaN values.
E.g.:
// let's say x = [1.0, 2.0, 3.0, nan, 4.0]
y = sum(x) // y should be equal to 10.0
Would the accumulate function work here, or would it return NaN if x contains a NaN? Would a for loop work, with a condition checking whether each value is NaN? (If yes, how do I check for NaN? In Python, the language I know best, this kind of check is not always straightforward.)
std::isnan returns true if the passed floating-point value is not a number. You have to add this check to any function that aggregates values to avoid including NaNs in your calculations. For example, for sum:
#include <array>
#include <cmath>    // std::isnan, NAN
#include <iostream>

// auto parameters require C++20; constexpr dropped because
// std::isnan is not constexpr before C++23
auto sum(auto list) {
    typename decltype(list)::value_type result = 0;
    for (const auto& i : list) {
        if (!std::isnan(i)) { // <- crucial check here
            result += i;
        }
    }
    return result;
}
Demo:
int main() {
    auto list = std::array{ 1.0f, 2.0f, 3.0f, NAN };
    std::cout << sum(list); // prints 6
}
You could use std::accumulate with a custom summation operation:
const std::vector<double> myVector{1.0, 2.0, 3.0, std::nan("42"), 4.0};
auto nansum = [](const double a, const double b)
{
    return a + (std::isnan(b) ? 0.0 : b);
};
auto mySum = std::accumulate(myVector.begin(), myVector.end(), 0.0, nansum);
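Putting it together, a self-contained sketch of the accumulate approach (the wrapper name nan_safe_sum is made up for illustration):

```cpp
#include <cmath>    // std::isnan, std::nan
#include <numeric>  // std::accumulate
#include <vector>

// Sum a vector of doubles, treating NaN entries as 0.
double nan_safe_sum(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0,
        [](double acc, double x) { return acc + (std::isnan(x) ? 0.0 : x); });
}

// nan_safe_sum({1.0, 2.0, 3.0, std::nan("42"), 4.0}) == 10.0
```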
I have a uniform 1D grid with values {0.1, 0.22, 0.35, 0.5, 0.78, 0.92}. These values are equally spaced from position 0 to 5, as follows:
value 0.1 0.22 0.35 0.5 0.78 0.92
|_________|_________|_________|_________|_________|
position 0 1 2 3 4 5
Now I would like to extract the interpolated value at a position, say, 2.3, which should be
val(2.3) = val(2)*(3-2.3) + val(3)*(2.3-2)
= 0.35*0.7 + 0.5*0.3
= 0.3950
So how should I do this in an optimized way in C++? I am on Visual Studio 2017.
I can think of a binary search, but is there an std method or a better way to do the job? Thanks.
You can take the integer part of the position and use it to index the two values you need to interpolate between. There is no need for a binary search, as you always know which two values you are interpolating between. You only need to watch out for indices that fall outside the values, if that can ever happen.
This only works if the values are always mapped to integer indices starting with zero.
#include <cmath>  // std::lerp (C++20)
#include <vector>

float get( const std::vector<float>& val, float p )
{
    // let's assume p is always valid so it is usable as an index
    const int a = static_cast<int>(p); // round down (for non-negative p)
    const float t = p - a;
    return std::lerp(val[a], val[a+1], t);
}
Edit:
std::lerp is a C++20 feature. If you are on an earlier version, you can use the following implementation, which should be good enough:
float lerp(float a, float b, float t)
{
return a + (b - a) * t;
}
How can I add precision to drand48() in C++?
I am using it in a function like:
double x = drand48()%1000+1;
to generate numbers below 1000.
But then I get this error:
error: invalid operands of types ‘double’ and ‘int’ to binary ‘operator%’
This does not happen when I use:
double x = rand()%1000+1;
Why and what is the difference between rand() and drand48()?
drand48 returns a number from the interval [0.0, 1.0). Are you looking for a number between 1 and 1000? In this case, you need to multiply by 999 and add 1.
Actually, what are you expecting?
drand48() returns a double, whereas rand() returns int.
Furthermore, drand48() returns a value that's distributed between [0.0, 1.0), so your formula needs to change:
double x = drand48() * 1000.0 + 1; // floating-point values from [1, 1001)
or
double x = (int)(drand48() * 1000.0) + 1; // integer values from [1, 1000]
You could either scale the result of drand48() as above, or use lrand48() with your existing formula.
drand48 returns a double in the range of 0.0 to 1.0. You want to multiply that by the range you're looking to generate: double x = drand48() * 1000.0;
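To make the scaling concrete, a small POSIX-only sketch (drand48/srand48 live in <cstdlib> on POSIX systems and are not available on MSVC; the helper names are made up for illustration):

```cpp
#include <cstdlib>  // drand48, srand48 (POSIX)

// drand48() is uniform on [0.0, 1.0); scale it into the range you need.
double random_below(double n) {
    return drand48() * n;                        // double in [0.0, n)
}

int random_int_between_1_and(int n) {
    return static_cast<int>(drand48() * n) + 1;  // int in [1, n]
}
```

srand48(seed) seeds the generator, analogous to srand for rand.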
Is there a better way in C or C++, or just mathematically in general, to map ratios of numbers while preserving or rounding data?
Take the following example
double cdp = 17000.0;
float integ = 30000.0 / 255;
int val = cdp / integ;
color = color + RGB(val, val, val);
Here I want to map a range of numbers [0, 30000] to a value [0, 255]
You can use a simple linear interpolation to do it, if I'm understanding correctly. It would work like this:
x = (inValue - minInRange) / (maxInRange - minInRange);
result = minOutRange + (maxOutRange - minOutRange) * x;
where inValue is the number out of 30,000 in your example. minInRange = 0, maxInRange = 30,000, minOutRange = 0, maxOutRange = 255.
Multiply by 255 then divide by 30000. Use an integer format that can hold the product of your two range limits, 30000*255 or 7.65 million. This avoids the precision issues with intermediate floating point values.
If you want to round to the nearest value rather than truncate any fractional component, then you have to do this:
double prod = cdp * 255; // double is big enough to hold the product without loss of precision
val = (int)(prod / 30000 + 0.5); // adding 0.5 turns truncation into rounding
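A quick check of the pure-integer variant with the numbers from the question (the function name map_to_byte is made up for this sketch; with cdp = 17000, 17000 * 255 / 30000 = 144.5, which rounds to 145):

```cpp
// Map [0, 30000] to [0, 255] in integer math, rounding to nearest.
// long long comfortably holds the intermediate product 30000 * 255.
int map_to_byte(long long cdp) {
    return static_cast<int>((cdp * 255 + 15000) / 30000); // +15000 is half the divisor: rounds
}

// map_to_byte(0) == 0, map_to_byte(17000) == 145, map_to_byte(30000) == 255
```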
In case someone needs a templated version of what #user1118321 proposed:
#include <utility> // std::pair

template<typename T>
struct interpolation {
    T min_in, max_in, min_out, max_out;
    interpolation(T min_in_, T max_in_, T min_out_, T max_out_)
        : min_in(min_in_), max_in(max_in_), min_out(min_out_), max_out(max_out_) {}
};

template<typename T, typename interpolate_type>
T map_range_1d(T value, interpolate_type interp_type) {
    double x = (value - interp_type.min_in) / (interp_type.max_in - interp_type.min_in);
    return interp_type.min_out + (interp_type.max_out - interp_type.min_out) * x;
}

template<typename T, typename interpolate_type>
std::pair<T, T> map_range_2d(T value_x, T value_y, interpolate_type interp_type_x, interpolate_type interp_type_y) {
    return {map_range_1d(value_x, interp_type_x), map_range_1d(value_y, interp_type_y)};
}

typedef interpolation<double> interp1d;
constexpr auto map_range2d = &map_range_2d<double, interp1d>;
constexpr auto map_range1d = &map_range_1d<double, interp1d>;
How to use it
interp1d x_range(min_x_in, max_x_in, min_x_out, max_x_out);
double index_local = map_range1d(456, x_range);
I have following code:
template<typename I,typename O> O convertRatio(I input,
I inpMinLevel = std::numeric_limits<I>::min(),
I inpMaxLevel = std::numeric_limits<I>::max(),
O outMinLevel = std::numeric_limits<O>::min(),
O outMaxLevel = std::numeric_limits<O>::max() )
{
double inpRange = abs(double(inpMaxLevel - inpMinLevel));
double outRange = abs(double(outMaxLevel - outMinLevel));
double level = double(input)/inpRange;
return O(outRange*level);
}
the usage is something like this:
int value = convertRatio<float,int,-1.0f,1.0f>(0.5);
//value is around 1073741823 ( a quarter range of signed int)
The problem occurs for I=int and O=float with the function's default parameters:
float value = convertRatio<int,float>(123456);
The result of double(inpMaxLevel - inpMinLevel) is -1.0, while I expect it to be 4294967295.
Do you have any idea how to do it better? The basic idea is just to convert a value from one range to another, with the possibility of different data types.
Adding to romkyns' answer: besides casting all values to double before operating on them to prevent overflows, your code returns wrong results when the lower bounds are different from 0, because you don't adjust the values appropriately. The idea is to map the range [in_min, in_max] to the range [out_min, out_max], so that:
f(in_min) = out_min
f(in_max) = out_max
Let x be the value to map. The algorithm is something like:
Map the range [in_min, in_max] to [0, in_max - in_min]. To do this, subtract in_min from x.
Map the range [0, in_max - in_min] to [0, 1]. To do this, divide x by (in_max - in_min).
Map the range [0, 1] to [0, out_max - out_min]. To do this, multiply x by (out_max - out_min).
Map the range [0, out_max - out_min] to [out_min, out_max]. To do this, add out_min to x.
The following implementation in C++ does this (I omit the default values to make the code clearer):
template <class I, class O>
O convertRatio(I x, I in_min, I in_max, O out_min, O out_max) {
const double t = ((double)x - (double)in_min) /
((double)in_max - (double)in_min);
const double res = t * ((double)out_max - (double)out_min) + out_min;
return O(res);
}
Notice that I didn't take the absolute value of the range sizes. This allows reverse mappings. For example, it makes it possible to map [-1.0, 1.0] to [3.0, 2.0], giving the following results:
convertRatio(-1.0, -1.0, 1.0, 3.0, 2.0) = 3.0
convertRatio(-0.8, -1.0, 1.0, 3.0, 2.0) = 2.9
convertRatio(0.8, -1.0, 1.0, 3.0, 2.0) = 2.1
convertRatio(1.0, -1.0, 1.0, 3.0, 2.0) = 2.0
The only conditions needed are that in_min != in_max (to prevent division by zero) and out_min != out_max (otherwise all inputs are mapped to the same point). To reduce rounding errors, try not to use small ranges.
Try
(double) inpMaxLevel - (double) inpMinLevel
instead. What you are doing currently is subtracting min from max while the numbers are still of type int, which necessarily overflows; a signed int is fundamentally incapable of representing the difference between its min and max.