I need to write the UNIX/Linux srand48 and drand48 functions in C. I am stuck on setting and using a seed value. I have two functions:
#include <math.h>

long int mfml_win_drandX0; // actual value used for generation

void srand48(long int seedval)
{
    mfml_win_drandX0 = seedval; // setting seed into mfml_win_drandX0
}

double drand48(void)
{
    static const double a = 0x273673163155,
                        c = 0x13,
                        m = 281474976710656;
    /*EDIT: error was here*/
    mfml_win_drandX0 = fmod(a * mfml_win_drandX0 + c, m); // computing the next value
    return mfml_win_drandX0 / m;
}
But when using:
srand48(2);
for (int i = 0; i < 10; i++)
    std::cout << drand48() << std::endl;
I get the same number every time (mfml_win_drandX0 does not change). How do I solve this issue?
You cannot guarantee that mfml_win_drandX0 will be in a range that is representable by a long int, even if you choose long long for its type. AFAIK this is undefined behavior:
C++11 §5/4:
“If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.”
And I guess you should avoid defining m this way, because when you write an integer literal without a suffix, it is treated as an int by default. Using
m = 281474976710656LL;
is better; however, this does not solve your main problem. Also, take care with
mfml_win_drandX0 = a * mfml_win_drandX0 + c;
where you do a conversion from const double to long.
What do you expect to get, though? There is nothing random in your code. a, c and m are all constants; you won't get different results until mfml_win_drandX0 changes.
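For reference, here is a minimal sketch of the same LCG written with exact unsigned 64-bit arithmetic, so the mod-2^48 step never loses precision. The my_ prefixes are only there to avoid colliding with the library's own srand48/drand48; the constants a = 0x5DEECE66D and c = 0xB are the classic drand48 parameters, and the seeding (seed in the high 32 bits, 0x330E in the low 16) follows the traditional srand48 behaviour:

#include <cstdint>
#include <cstdio>

static uint64_t state48; // 48-bit LCG state, kept in a 64-bit integer

void my_srand48(long seedval)
{
    // Traditional srand48 seeding: seed in the high 32 bits, 0x330E below.
    state48 = (static_cast<uint64_t>(static_cast<uint32_t>(seedval)) << 16) | 0x330E;
}

double my_drand48(void)
{
    const uint64_t a = 0x5DEECE66DULL;
    const uint64_t c = 0xBULL;
    // Masking keeps the low 48 bits, so the update is exact;
    // no fmod and no floating-point rounding are involved.
    state48 = (a * state48 + c) & ((1ULL << 48) - 1);
    return static_cast<double>(state48) / static_cast<double>(1ULL << 48);
}

int main()
{
    my_srand48(2);
    for (int i = 0; i < 10; i++)
        std::printf("%f\n", my_drand48()); // ten distinct values
    return 0;
}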
Related
I have a constant integer, steps, which is calculated using the floor function of the quotient of two other constant variables. However, when I attempt to use this as the length of an array, Visual Studio tells me it must be a constant value and that the current value cannot be used as a constant. How do I make this a "true" constant that can be used as an array length? Is the floor function the problem, and is there an alternative I could use?
const int simlength = 3.154*pow(10,7);
const float timestep = 100;
const int steps = floor(simlength / timestep);

struct body bodies[bcount];

struct body {
    string name;
    double mass;
    double position[2];
    double velocity[2];
    double radius;
    double trace[2][steps];
};
It is not possible with the standard library's std::pow and std::floor functions, because they are not constexpr-qualified.
You can probably replace std::pow with a hand-written implementation my_pow that is marked constexpr. Since you are just taking powers of integers, that shouldn't be too hard. If you are only using powers of 10, floating-point literals may also be written in scientific notation, e.g. 1e7, which makes the pow call unnecessary.
The floor call is not needed, since float/double to int conversion already floors implicitly. More precisely it truncates, which for non-negative values is equivalent to flooring.
Then you should also replace the const with constexpr in the variable declarations to make sure that the variables are usable in constant expressions:
constexpr int simlength = 3.154*my_pow(10,7); // or `3.154e7`
constexpr float timestep = 100;
constexpr int steps = simlength / timestep;
Theoretically only float requires this change, since there is a special exception for const integral types, but it seems more consistent this way.
Also, I have a feeling that there is something wrong with the types of your variables. A length and a step count should not be determined by floating-point operations and types, but by integer types and operations alone. Floating-point operations are not exact and introduce errors relative to the mathematically precise calculations on the real numbers. It is easy to get unexpected off-by-one or worse errors this way.
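For completeness, here is what such a hand-written my_pow might look like. This is only a sketch under the answer's assumptions (my_pow is the hypothetical name suggested above, and a loop inside a constexpr function requires C++14 or later):

// Hypothetical constexpr integer power; pure integer arithmetic, no rounding.
constexpr long long my_pow(long long base, unsigned exp)
{
    long long result = 1;
    for (unsigned i = 0; i < exp; ++i)
        result *= base;
    return result;
}

constexpr int simlength = 3.154 * my_pow(10, 7); // 31540000
constexpr float timestep = 100;
constexpr int steps = simlength / timestep;

double trace[2][steps]; // compiles: steps is a constant expression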
You cannot define an array of a class type before defining the class.
Solution: Define body before defining bodies.
Furthermore, you cannot use undefined names.
Solution: Define bcount before using it as the size of the array.
Is the floor function the problem, and is there an alternative I could use?
std::floor is one problem. There's an easy solution: don't use it. Converting a floating-point number to an integer performs a similar operation implicitly (the behaviour differs for negative numbers).
std::pow is another problem. It cannot be replaced as trivially in general, but in this case we can use a floating point literal in scientific notation instead.
Lastly, a non-constexpr floating-point variable isn't a compile-time constant. Solution: use constexpr.
Here is a working solution:
constexpr int simlength = 3.154e7;
constexpr float timestep = 100;
constexpr int steps = simlength / timestep;
P.S. trace is a very large array. I would recommend against such large member variables, because it's easy for the user of the class not to notice this detail, and they are likely to create instances of the class in automatic storage. This is a problem because large objects in automatic storage are prone to cause stack overflow errors. Using std::vector instead of an array is an easy solution. If you do use std::vector, then as a side effect the requirement of a compile-time constant size disappears, and you will no longer have trouble using std::pow etc.
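A rough sketch of that suggestion; the resize-at-construction pattern shown in main is just one way to size the vectors, and the member names are taken from the question:

#include <string>
#include <vector>

struct body {
    std::string name;
    double mass;
    double position[2];
    double velocity[2];
    double radius;
    std::vector<double> trace[2]; // sized at run time, stored on the heap
};

int main()
{
    const int steps = 3.154e7 / 100; // no longer needs to be a constant expression
    body b;
    b.trace[0].resize(steps);
    b.trace[1].resize(steps);
    return 0;
}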
Because simlength is 3.154 times 10 to the 7th and timestep is 10 squared, the value of steps can be written as:
3.154e7 / 1e2 == 3.154e5
And, adding a type-cast, you should be able to write the array as:
double trace[2][(int)(3.154e5)];
Note that this is HIGHLY IRREGULAR, and should have extensive comments describing why you did this.
Try switching to constexpr:
constexpr int simlength = 3.154e7;
constexpr float timestep = 1e2;
constexpr int steps = simlength / timestep;

struct body {
    string name;
    double mass;
    double position[2];
    double velocity[2];
    double radius;
    double trace[2][steps];
};
Had been going through this code:
#include <cstdio>

#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))

int array[] = {1,2,3,4,5,6,7};

int main()
{
    signed int d;
    printf("Total Elements in the array are => %d\n", TOTAL_ELEMENTS);
    for (d = -1; d <= (TOTAL_ELEMENTS-2); d++)
        printf("%d\n", array[d+1]);
    return 0;
}
Now obviously it does not get into the for loop. What's the reason?
The reason is that in C++ you're getting an implicit promotion. Even though d is declared as signed, when you compare it to (TOTAL_ELEMENTS-2) (which is unsigned because sizeof yields an unsigned type), d gets converted to unsigned. C++ has very specific rules which basically state that the converted value of d will be the congruent unsigned value modulo numeric_limits<unsigned>::max() + 1. In this case that is the largest possible unsigned number, which is clearly larger than the size of the array on the other side of the comparison.
Note that some compilers like g++ (with -Wall) can be told to warn about such comparisons so you can make sure that the code looks correct at compile time.
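One way to make the comparison behave as intended is to bring it back into signed arithmetic, for example by putting an (int) cast in the macro; this is just one option (a cast at the comparison site works equally well):

#include <cstdio>

int array[] = {1, 2, 3, 4, 5, 6, 7};
#define TOTAL_ELEMENTS ((int)(sizeof(array) / sizeof(array[0])))

int main()
{
    for (int d = -1; d <= TOTAL_ELEMENTS - 2; d++)
        printf("%d\n", array[d + 1]); // now prints 1 through 7
    return 0;
}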
The program looks like it should throw a compile error, because "array" appears in the macro before its definition. It doesn't, though: a macro is only expanded at the point of use, and by the time TOTAL_ELEMENTS is used inside main, array has already been defined.
We know that -2*4^31 + 1 = -9.223.372.036.854.775.807, which is within the range of a long long, as explained here: What range of values can integer types store in C++.
So I have this operation:
#include <iostream>

unsigned long long pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    long long nr = -pow(4, 31) + 5 - pow(4, 31);
    std::cout << nr << std::endl;
}
Why does it show -9.223.372.036.854.775.808 instead of -9.223.372.036.854.775.803? I'm using Visual Studio 2015.
This is a really nasty little problem which has three(!) causes.
Firstly there is a problem that floating point arithmetic is approximate. If the compiler picks a pow function returning float or double, then 4**31 is so large that 5 is less than 1ULP (unit of least precision), so adding it will do nothing (in other words, 4.0**31+5 == 4.0**31). Multiplying by -2 can be done without loss, and the result can be stored in a long long without loss as the wrong answer: -9.223.372.036.854.775.808.
Secondly, a standard header may include other standard headers, but is not required to. Evidently, Visual Studio's version of <iostream> includes <math.h> (which declares pow in the global namespace), but Code::Blocks' version doesn't.
Thirdly, the OP's pow function is not selected because he passes the arguments 4 and 31, which are both of type int, while the declared function has parameters of type unsigned. Since C++11, there are lots of overloads (or a function template) of std::pow. These all return float or double (unless one of the arguments is of type long double, which doesn't apply here).
Thus an overload of std::pow is a better match ... with a double return value, and we get floating-point rounding.
Moral of the story: Don't write functions with the same name as standard library functions, unless you really know what you are doing!
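To see the "5 is less than 1 ULP" point concretely, a tiny check (this assumes IEEE 754 doubles and a pow implementation that is exact for this case, which is typical):

#include <cmath>
#include <iostream>

int main()
{
    double big = std::pow(4.0, 31); // 2^62, exactly representable as a double
    // Adjacent doubles near 2^62 are 2^10 = 1024 apart, so adding 5
    // rounds straight back to the same value.
    std::cout << (big + 5 == big) << std::endl; // prints 1
    return 0;
}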
Visual Studio has defined pow(double, int), which only requires a conversion of one argument, whereas your pow(unsigned, unsigned) requires conversion of both arguments unless you use pow(4U, 31U). Overloading resolution in C++ is based on the inputs - not the result type.
The lowest long long value can be obtained through numeric_limits. For long long it is:
auto lowest_ll = std::numeric_limits<long long>::lowest();
which results in:
-9223372036854775808
The pow() function that gets called is not yours hence the observed results. Change the name of the function.
The only possible explanation for the -9.223.372.036.854.775.808 result is the use of the pow function from the standard library returning a double value. In that case, the 5 will be below the precision of the double computation, the result will be exactly -2^63, and converting it to a long long gives 0x8000000000000000, or -9.223.372.036.854.775.808.
If you use your function returning an unsigned long long, you get a warning saying that you apply unary minus to an unsigned type and still get an unsigned long long. So the whole operation is executed as unsigned long long and gives, without overflow, 0x8000000000000005 as an unsigned value. When you cast it to a signed value, the result is implementation-defined, but all compilers I know simply use the signed integer with the same representation, which is -9.223.372.036.854.775.803.
But it would be simple to do the computation as signed long long without any warning by just using:
long long nr = -1 * pow(4, 31) + 5 - pow(4, 31);
In addition, you have neither an implementation-defined conversion nor an overflow here, so the result is perfectly defined per the standard, provided unsigned long long is at least 64 bits.
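A small snippet to observe the wrap-around described above; upow is just a renamed copy of the OP's function, and the final cast assumes the usual two's-complement behaviour (MSVC will emit the C4146 warning on the unary minus):

#include <iostream>

unsigned long long upow(unsigned a, unsigned b) { // same loop as the OP's pow
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    unsigned long long u = -upow(4, 31) + 5 - upow(4, 31); // wraps mod 2^64
    std::cout << std::hex << u << std::endl;                         // 8000000000000005
    std::cout << std::dec << static_cast<long long>(u) << std::endl; // -9223372036854775803
    return 0;
}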
Your first call to pow is using the C standard library's function, which operates on floating-point values. Try giving your pow function a unique name:
unsigned long long my_pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    long long nr = -my_pow(4, 31) + 5 - my_pow(4, 31);
    std::cout << nr << std::endl;
}
This code produces a warning: "unary minus operator applied to unsigned type, result still unsigned". So, essentially, your original code called a floating-point function, negated the value, and applied some integer arithmetic to it, for which it did not have enough precision to give the answer you were looking for (at 19 digits of precision!). To get the answer you're looking for, change the signature to:
long long my_pow(unsigned a, unsigned b);
This worked for me in MSVC++ 2013. As stated in other answers, you're getting the floating-point pow because your function expects unsigned, and receives signed integer constants. Adding U to your integers invokes your version of pow.
I just came across an extremely strange problem. The function I have is simply:
int strStr(string haystack, string needle) {
    for (int i = 0; i <= (haystack.length() - needle.length()); i++) {
        cout << "i " << i << endl;
    }
    return 0;
}
Then if I call strStr("", "a"), although haystack.length() - needle.length() should be -1, this does not just return 0; you can try it yourself...
This is because .length() (and .size()) return a size_t, which is an unsigned integer type. You think you get a negative number, when in fact the subtraction wraps around to the maximum value of size_t (on my machine, this is 18446744073709551615). This means your for loop will loop through all the possible values of size_t, instead of just exiting immediately like you expect.
To get the result you want, you can explicitly convert the sizes to ints, rather than unsigned ints (see aslg's answer), although this may fail for strings with sufficient length (enough to overflow or underflow a standard int).
Edit:
Two solutions from the comments below:
(Nir Friedman) Instead of using int as in aslg's answer, include the <cstdint> header and use an int64_t, which will avoid the problem mentioned above.
(rici) Turn your for loop into for (int i = 0; needle.length() + i <= haystack.length(); i++), which avoids the problem altogether by rearranging the comparison so there is no subtraction at all (a sketch follows below).
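A minimal version of rici's rearranged loop; everything stays in unsigned arithmetic, so nothing can wrap:

#include <iostream>
#include <string>
using namespace std;

int strStr(string haystack, string needle) {
    // For non-negative i, needle.length() + i cannot wrap, so the
    // comparison is safe even when needle is longer than haystack.
    for (int i = 0; needle.length() + i <= haystack.length(); i++) {
        cout << "i " << i << endl;
    }
    return 0;
}

int main()
{
    strStr("", "a"); // prints nothing: 1 + 0 <= 0 is false
    return 0;
}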
(haystack.length()-needle.length())
length returns a size_t, in other words an unsigned integer. Given the sizes of your strings, 0 and 1 respectively, the difference wraps around and becomes the maximum possible value for an unsigned integer. (That is approximately 4.2 billion for 4-byte storage, but it could be a different value.)
i<=(haystack.length()-needle.length())
The index i is converted by the compiler into an unsigned int to match the type. So you would have to wait until i is greater than the maximum possible value for an unsigned int. It's not going to stop.
Solution:
You have to convert the result of each method to int, like so,
i <= ( (int)haystack.length() - (int)needle.length() )
Is the following an efficient and problem-free way to convert an unsigned int to an int in C++:
#include <limits.h>

void safeConvert(unsigned int passed)
{
    int variable = static_cast<int>(passed % (INT_MAX+1));
    ...
}
Or is there a better way?
UPDATE
As pointed out by James McNellis, it is not undefined to assign an unsigned int > INT_MAX to an int; rather, this is implementation-defined. As such, the context here is now specifically that my preference is to ensure this integer resets to zero when the unsigned int exceeds INT_MAX.
Original Context
I have a number of unsigned int's used as counters, but want to pass them around as integers in a specific case.
Under normal operation these counts will remain within the bounds of INT_MAX. However, to avoid running into implementation-defined behaviour should the abnormal (but valid) case occur, I want some efficient conversion here.
This should also work:
int variable = passed & INT_MAX;
Under normal operation these counts will remain within the bounds of INT_MAX. However to avoid running into undefined behaviour should the abnormal (but valid) case occur I want some efficient conversion here.
Efficient conversion to what? If all the shared values for int and unsigned int correspond, and you want other unsigned values such as INT_MAX + 1 to each have distinct values, then you can only map them onto the negative integer values. This is done by default, and can be explicitly requested with static_cast<int>(my_unsigned). Otherwise, you could map them all to 0, or -1, or INT_MIN, or throw away the high bit... easiest way is simply: if (my_unsigned > INT_MAX) my_unsigned = XXX, or ...my_unsigned &= INT_MAX to clear the high bit. But will the called functions work properly if the int overflows? Perhaps a better solution would be to use 64-bit ints to begin with?
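To make those options concrete, here is a hedged sketch; the function names are made up for illustration, and the plain static_cast of an out-of-range value is implementation-defined before C++20 (on common two's-complement platforms it keeps the bit pattern):

#include <climits>

int keep_bit_pattern(unsigned int u)
{
    return static_cast<int>(u); // maps values above INT_MAX onto negatives
}

int reset_past_int_max(unsigned int u)
{
    // Equivalent to u % (INT_MAX + 1u), but written with & to avoid
    // the signed overflow that INT_MAX + 1 would cause as an int.
    return static_cast<int>(u & INT_MAX);
}

long long widen(unsigned int u)
{
    return static_cast<long long>(u); // exact whenever long long is wider than int
}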