Apologies if this question has already been answered.
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;
int main () {
    srand( time(NULL) );
    cout << rand();
}
"implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'"
is the error message I'm getting when I execute the code above. I am using Xcode 4.6.1. When I use a different compiler, such as the one from codepad.org, it executes perfectly fine, generating what seem like random numbers, so I am assuming it is an Xcode issue that I need to work around?
I have JUST started programming, so I am a complete beginner when it comes to this. Is there a problem with my code, or is it my compiler?
Any help would be appreciated!
"implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'"
You're losing precision implicitly because time() returns a long, which is larger than an unsigned int on your target. To work around this problem, you should explicitly cast the result (thus removing the "implicit precision loss"):
srand( static_cast<unsigned int>(time(nullptr)));
Given that it's now 2017, I'm editing this answer to suggest that you consider the features provided by std::chrono::*, defined in <chrono>, as part of C++11. Does your favorite compiler provide C++11? If not, it really should!
To get the current time, you should use:
#include <chrono>
void f() {
    const std::chrono::system_clock::time_point current_time = std::chrono::system_clock::now();
}
Why should I bother with this when time() works?
IMO, just one reason is enough: clear, explicit types. When you deal with large programs among big enough teams, knowing whether the values passed around represent time intervals or "absolute" times, and in what units, is critical. With std::chrono you can design interfaces and data structures that are portable and skip out on the is-that-timeout-a-deadline-or-milliseconds-from-now-or-wait-was-it-seconds blues.
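As a minimal sketch of that idea (the function name wait_for_reply and its body are my own illustration, not part of any real API), a duration-typed parameter makes the unit question disappear at the call site:
#include <chrono>
#include <iostream>

// Hypothetical interface: the parameter type documents exactly what the value means.
void wait_for_reply(std::chrono::milliseconds timeout) {
    std::cout << "waiting " << timeout.count() << " ms\n";
}

int main() {
    using namespace std::chrono_literals;    // C++14 duration literals
    wait_for_reply(250ms);                   // unambiguously 250 milliseconds
    wait_for_reply(std::chrono::seconds(2)); // seconds convert to milliseconds implicitly
}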
As mentioned by "nio", a clean workaround would be to explicitly type cast.
Deeper explanation:
srand() requires an unsigned int as its parameter (srand(unsigned int)), but time() returns a time_t (a long int here), which srand() does not accept directly, so the compiler has to typecast (convert) the long int to an unsigned int.
BUT in your case the compiler warns you about it instead (as the designers of the compiler thought you should be made aware of it, that's all).
So a simple
srand( (unsigned int) time(NULL) );
will do the trick!
(Forgive me if I have done something wrong; this is my first answer on Stack Overflow.)
The srand() function takes an unsigned int as its argument, while time_t is a long type. The upper 4 bytes of the long are stripped out, but there's no problem with that: srand() will randomize the rand() algorithm with the 4 lower bytes of time(), so you're simply supplying more data than is needed.
If you get an error, try to just explicitly cast the time_t type to unsigned int:
srand( static_cast<unsigned int>(time(NULL)) );
Another interesting thing is that if you run your program twice in the same second, you'll get the same random number, which can sometimes be undesirable. That's because if you seed the rand() algorithm with the same data, it will generate the same random sequence. Or it can be desirable when you are debugging some piece of code and need to test the same behaviour again... then you simply use a fixed seed, something like srand(123456).
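For instance, a minimal sketch of the fixed-seed idea for reproducible debugging (the seed value is arbitrary):
#include <cstdlib>
#include <iostream>

int main() {
    srand(123456);                    // fixed seed instead of time(NULL)
    for (int i = 0; i < 3; ++i)
        std::cout << rand() << '\n';  // identical sequence on every run
}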
This is not an error. The code is valid and its meaning is well defined; if a compiler refuses to compile it, the compiler does not conform to the language definition. More likely, it's a warning, and it's telling you that the compiler writer thinks that you might have made a mistake. If you insist on eliminating warning messages you could add a cast, as others have suggested. I'm not a big fan of rewriting valid, meaningful code in order to satisfy some compiler writer's notion of good style; I'd turn off the warning. If you do that, though, you might overlook other places where a conversion loses data that you didn't intend.
#include <stdlib.h>   // srand, rand
#include <iostream>
#include <time.h>     // time

// Returns a float in [VarMin, VarMax). Note that reseeding on every call means
// calls within the same second return the same value; seed once in main() if
// that matters.
float randomizer(int VarMin, int VarMax){
    srand((unsigned)time(NULL));
    int range = (VarMax - VarMin);
    float rnd = VarMin + float(range * (rand() / (RAND_MAX + 1.0)));
    return rnd;
}
Related
We have a big C++ project where we rely on compiler warnings and Flexelint to identify potential programming errors. I was curious how they would warn us if we accidentally tried to cast an enum value to a narrower integer type.
As suggested by Lint, we usually perform static casts from the enum to the integer. Lint doesn't like implicit casts. We usually cast to the exact type expected by the method.
I got interesting results, see this experiment:
#include <iostream>
#include <string>
#include <stdint.h>
void foo(uint8_t arg)
{
    std::cout << static_cast<int>(arg) << std::endl;
}

enum Numbers
{
    hundred = 100,
    thousand = 1000,
};

int main()
{
    std::cout << "start" << std::endl;
    foo(static_cast<uint8_t>(hundred));  // 1) no compiler or lint warning
    foo(static_cast<uint8_t>(thousand)); // 2) no compiler or lint warning
    foo(static_cast<int>(thousand));     // 3) compiler and lint warning
    foo(thousand);                       // 4) compiler and lint warning
    std::cout << "end" << std::endl;
}
http://cpp.sh/5hpyz
First case is not a concern, just to mention the good case.
Interestingly, I only got compiler warnings in the latter two cases, where the narrowing conversion is implicit. The explicit cast in case 2) will truncate the value (the output is 232, like the following two), but with no warning. OK, the compiler is probably assuming I know what I'm doing with my explicit cast to uint8_t. Fair enough.
I expected Lint to help me out here. I ran this code in Gimpel's online Lint but didn't get any warnings either. Only in the latter two cases again, with this warning:
warning 569: loss of information (call) in implicit conversion from 'int' 1000 (10 bits) to 'uint8_t' (aka 'unsigned char') (8 bits)
Again, the explicit cast to uint8_t in case 2), that truncates my value, doesn't bother Lint at all.
Consider a case where all values in an enum currently fit into a uint8_t. At some point in the future we add bigger values (say, more than 256 values in total), cast them as before, and, without noticing that this truncates them, get unexpected results.
By default, I always cast to the target variable's size (case 2)). Given this experiment, I wonder whether this is a wise approach. Should I cast to the widest type and rely on implicit conversions instead (case 3))?
What's the right approach to get the expected warnings?
You could also write foo(uint8_t{thousand}); instead of a static_cast. With that, you would get a compiler error/warning if thousand is too large for uint8_t. I don't know what Lint thinks about it, though.
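A sketch of what that looks like, reusing foo() and the Numbers enum from the question (narrowing inside braces is ill-formed, so the bad case is caught at compile time):
foo(uint8_t{hundred});   // fine: 100 fits in uint8_t
foo(uint8_t{thousand});  // compile-time error: narrowing conversion of 1000 to uint8_t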
This is a problem I also encountered. What I found to work best is to write a function which performs the cast for you and would generate an error in case something is wrong based on type traits.
#include <type_traits>
#include <limits>
#include <iostream>

template<class TYPE>
TYPE safe_cast(const Numbers& number)
{
    using FROM_TYPE = std::underlying_type_t<Numbers>;
    const FROM_TYPE value = static_cast<FROM_TYPE>(number);
    // Widen both bounds to the underlying type so the comparison itself
    // cannot truncate. (Signed/unsigned mixes may still need extra care.)
    if (value < static_cast<FROM_TYPE>(std::numeric_limits<TYPE>::min()) ||
        value > static_cast<FROM_TYPE>(std::numeric_limits<TYPE>::max()))
    {
        // Throw an error or assert.
        std::cout << "Error in safe_cast" << std::endl;
    }
    return static_cast<TYPE>(number);
}
Hope this will help.
P.S. If you can rewrite this to run at compile time with constexpr, you could also use static_assert.
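For illustration, a possible call site (again reusing foo() and the enum from the question); note the check here happens at run time:
foo(safe_cast<uint8_t>(hundred));   // passes the range check
foo(safe_cast<uint8_t>(thousand));  // prints "Error in safe_cast", then truncates to 232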
Since ints and longs and other integer types may be different sizes on different systems, why not have stouint8_t(), stoint64_t(), etc. so that portable string to int code could be written?
Because typing that would make me want to chop off my fingers.
Seriously, the basic integer types are int and long, and the std::stoX functions are just very simple wrappers around strtol etc. Note that C doesn't provide strtoi32 or strtoi64, or anything that a std::stouint32_t could wrap.
If you want something more complicated you can write it yourself.
I could just as well ask "why do people use int and long, instead of int32_t and int64_t everywhere, so the code is portable?" and the answer would be because it's not always necessary.
But the actual reason is probably that no one ever proposed it for the standard. Things don't just magically appear in the standard; someone has to write a proposal, justify adding them, and convince the rest of the committee to add them. So the answer to most "why isn't this thing I just thought of in the standard?" questions is that no one proposed it.
Because it's usually not necessary.
stoll and stoull return results of type long long and unsigned long long respectively. If you want to convert a string to int64_t, you can just call stoll() and store the result in your int64_t object; the value will be implicitly converted.
This assumes that long long is the widest signed integer type. Like C (starting with C99), C++ permits extended integer types, some of which might be wider than [unsigned] long long. C provides the conversion functions strtoimax and strtoumax (operating on intmax_t and uintmax_t, respectively) in <inttypes.h>. For whatever reason, C++ doesn't provide wrappers for these functions (the logical names would be stoimax and stoumax).
But that's not going to matter unless you're using a C++ compiler that provides an extended integer type wider than [unsigned] long long, and I'm not aware that any such compilers actually exist. For any types no wider than 64 bits, the existing functions are all you need.
For example:
#include <iostream>
#include <string>
#include <cstdint>
int main() {
    const char *s = "0xdeadbeeffeedface";
    uint64_t u = std::stoull(s, NULL, 0);
    std::cout << u << "\n";
}
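And if you ever do need the full intmax_t range, a minimal sketch of calling the C conversion functions mentioned above directly from C++ (std::strtoimax lives in <cinttypes>):
#include <cinttypes>
#include <cstdint>
#include <iostream>

int main() {
    const char *s = "-123456789012345";
    std::intmax_t v = std::strtoimax(s, nullptr, 10);
    std::cout << v << "\n";
}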
Why should I get this error
C2668: 'abs' : ambiguous call to overloaded function
for a simple piece of code like this:
#include <iostream>
#include <cmath>
int main()
{
    unsigned long long int a = 10000000000000;
    unsigned long long int b = 20000000000000;
    std::cout << std::abs(a-b) << "\n"; // ERROR
    return 0;
}
The error is still present after removing std::. However, if I use the int data type (with smaller values) there is no problem.
The traditional solution is to check that manually
std::cout << ((a < b) ? (b - a) : (a - b)) << "\n";
Is that the only solution?
The check seems the only really good solution. The alternatives require a type bigger than yours and a non-standard extension to use it.
You can go with solutions that cast to signed long long if your range fits, but I would hardly suggest going that way, especially if the implementation is placed in a function that does only that.
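A minimal sketch of the "check first" approach wrapped in a helper (the name absdiff is my own, not from the answers):
#include <iostream>

unsigned long long absdiff(unsigned long long a, unsigned long long b) {
    return (a < b) ? (b - a) : (a - b);   // never underflows, result stays unsigned
}

int main() {
    std::cout << absdiff(10000000000000ULL, 20000000000000ULL) << "\n"; // 10000000000000
}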
You are including <cmath> and thus using the "floating-point abs".
The "integer abs" is declared in <cstdlib>.
However, there is no overload for unsigned long long int (both a and b are, thus a-b is, too), and the overload for long long int only exists since C++11.
First, you need to include the correct header. As pointed out by gx_, <cmath> has a floating-point abs and on my compiler it actually compiles, but the result is probably not the one you expected:
1.84467e+19
Include <cstdlib> instead. Now the error is:
main.cpp:7:30: error: call of overloaded ‘abs(long long unsigned int)’ is ambiguous
main.cpp:7:30: note: candidates are:
/usr/include/stdlib.h:771:12: note: int abs(int)
/usr/include/c++/4.6/cstdlib:139:3: note: long int std::abs(long int)
/usr/include/c++/4.6/cstdlib:173:3: note: long long int __gnu_cxx::abs(long long int)
As you can see, there is no unsigned overload of this function, because computing an absolute value of something which is of type unsigned makes no sense.
I see answers suggesting that you cast the unsigned type to a signed one, but I believe this is dangerous unless you really know what you are doing!
Let me first ask what the expected range of the values a and b you are going to operate on is. If both are below 2^63-1, I would strongly suggest just using long long int. If that is not true, however, let me note that your program, for the values:
a=0, b=1
and
a=2^64-1, b=0
will produce exactly the same result, because you actually need 65 bits to represent any possible outcome of a difference of 2 64-bit values. If you can confirm that this is not going to be a problem, use the cast as suggested. However, if you don't know, you may need to rethink what you are actually trying to achieve.
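For completeness, a sketch of the cast-to-signed alternative, valid only under the assumption just discussed (both values below 2^63 - 1):
#include <cstdlib>
#include <iostream>

int main() {
    unsigned long long a = 10000000000000ULL;
    unsigned long long b = 20000000000000ULL;
    long long d = static_cast<long long>(a) - static_cast<long long>(b);
    std::cout << std::llabs(d) << "\n";   // 10000000000000
}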
Back before C++, in C you had to use abs, fabs, and labs for each different type; C++ allows overloading of abs, and in this case the compiler isn't happy with the overload you're asking for.
Use labs(a-b), seeing as you're using longs; this should solve your problem.
This question already has answers here:
How to generate a random number in C++?
(14 answers)
Closed 5 years ago.
I have two questions.
What other ways are there to seed a pseudo-random number generator in C++ without using srand(time(NULL))?
The reason I asked the first question: I'm currently using time as the seed for my generator, but the number that the generator returns is always the same. I'm pretty sure the reason is that the variable that stores the time is being truncated to some degree. (I have a warning message saying, "Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'".) I'm guessing that this is telling me that, in essence, my seed will not change until next year occurs. For my purposes, using time as my seed would work just fine, but I don't know how to get rid of this warning.
I have never gotten that error message before, so I assume it has something to do with my Mac. It's 64-bit OS X v10.8. I'm also using Xcode to write and compile, but I had no problems on other computers with Xcode.
Edit:
After toying and researching this more, I discovered a bug that 64-bit Macs have. (Please correct me if I am mistaken.) If you try to have your mac select a random number between 1 and 7 using time(NULL) as the seed, you will always get the number four. Always. I ended up using mach_absolute_time() to seed my randomizer. Obviously this eliminates all portability from my program... but I'm just a hobbyist.
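A minimal sketch of the mach_absolute_time() seeding mentioned above (macOS-only; the counter has sub-second resolution, so back-to-back runs get different seeds):
#include <cstdlib>
#include <mach/mach_time.h>

void seed_from_mach_time() {
    srand(static_cast<unsigned int>(mach_absolute_time()));
}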
Edit2:
Source code:
#include <iostream>
#include <cstdlib> // srand, rand
#include <time.h>
using namespace std;
int main(int argc, const char * argv[]) {
    srand(time(NULL));
    cout << rand() % 7 + 1;
    return 0;
}
I ran this code again to test it. Now it's only returning 3. This must be something to do with my computer and not the C++ itself.
Tl;dr but, most likely, you're doing it wrong. You're only supposed to set the seed once, whereas you might have something like:
for ( ... )
{
    srand(time(NULL));
    whatever = rand();
}
when it should be
srand(time(NULL));
for ( ... )
{
    whatever = rand();
}
1. Not really. You can ask the user to input a random seed, for example, or use some other system parameters, but this won't make much difference.
2. To get rid of this warning you have to do an explicit conversion, like:
unsigned int time_ui = (unsigned int)( time(NULL) );
srand( time_ui );
or
unsigned int time_ui = static_cast<unsigned int>( time(NULL) );
or
unsigned int time_ui = static_cast<unsigned int>( time(NULL)%1000 );
To check whether this is really a conversion problem, you can simply output your time on the screen and see for yourself:
std::cout << time(NULL);
You should seed random once at the beginning of your program:
int main()
{
    // When testing you probably want your code to be deterministic.
    // Thus, don't seed the generator and you will get the same set of results each time.
    // This will allow you to use unit tests on code that uses rand().
#if !defined(TESTING)
    srand(time(NULL)); // Never call again
#endif

    // Your code here.
}
For x86, a direct read of the CPU time stamp counter (rdtsc), instead of the library function time(NULL), could be used. The assembly below 1) reads the timestamp and 2) seeds rand():
rdtsc
mov edi, eax
call srand
For C++, the following would do the job with the g++ compiler (the operands are written in Intel syntax, so compile with -masm=intel):
asm("rdtsc\n"
    "mov edi, eax\n"
    "call srand");
NOTE: this may not be recommended if the code is running in a virtual machine.
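As a possible alternative to hand-written assembly (my own suggestion, not from the answer above), g++ and clang expose the same counter through the __rdtsc() intrinsic in <x86intrin.h>:
#include <cstdlib>
#include <x86intrin.h>

void seed_from_tsc() {
    srand(static_cast<unsigned int>(__rdtsc()));
}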
When I use this
#include<time.h>
//...
int n = time(0);
//...
I get a warning about converting time to int. Is there a way to remove this warning?
Yes, change n to be a time_t. If you look at the signature in time.h on most / all systems, you'll see that that's what it returns.
#include<time.h>
//...
time_t n = time(0);
//...
Note that Arak is right: using a 32 bit int is a problem, at a minimum, due to the 2038 bug. However, you should consider that any sort of arithmetic on an integer n (rather than a time_t) only increases the probability that your code will trip over that bug early.
PS: In case I didn't make it clear in the original answer, the best response to a compiler warning is almost always to address the situation that you're being warned about. For example, forcing higher precision data into a lower precision variable loses information - the compiler is trying to warn you that you might have just created a landmine bug that someone won't trip over until much later.
time() returns a time_t, not an integer. Prefer that type, because it may be larger than int.
If you really need int, then typecast it explicitly, for example:
int n = (int)time(0);
I think you are using Visual C++. Unlike with g++, its time(0) returns a 64-bit integer even when you are programming for a 32-bit platform. To remove the warning, just assign time(0) to a 64-bit variable.
You probably want to use a type of time_t instead of an int.
See the example at http://en.wikipedia.org/wiki/Time_t.
The reason is that the time() function returns a time_t, so you might need to do a static cast to an int or uint in this case. Write it this way:
time_t timer;
int n = static_cast<int>(time(&timer)); // the current time as an int; same value as time(NULL)