Another way to create random numbers - c++

An efficient way to create random numbers in C/C++ is the rand() function, but I've seen code like this used to create random values:
int x; x%=100;
Is this a good way to produce a simple random number? If your answer is no, please tell me why?
EDIT: Well, the actual code is here:
int temp1,temp2;
A=(abs(temp1))%11-1;
B=(abs(temp2))%11-1; //Randomize without using rand()
A friend of mine wrote this code. When I tried to compile it, I got an "uninitialized local variable 'temp1' used" error (on MSVS). He wrote this code in 2011, and it worked on his Linux machine with the latest version of GCC.

It's rare to see something worse.
You have undefined behaviour as you are using an uninitialised variable.
And using modulus introduces statistical bias.

Your friend has misunderstood uninitialised variables.
Ignoring for a moment that reading from them is undefined and can thus do just about anything, if you pretend that they will safely yield an arbitrary value then you need to remember that arbitrary does not mean random.
This approach has somewhere between minimal and no random distribution of values. So you won't be able to predict what you get back, but that does not make it usefully "random" in any meaningful sense.
Also, applying % ruins any distribution so don't do that either.
Tell your friend to turn warnings on in his compilation settings. His GCC is trying to tell him all this, but he isn't listening.

Related

Is uninitialized local variable the fastest random number generator?

I know that using an uninitialized local variable is undefined behaviour (UB), and that the value may have trap representations which may affect further operations, but sometimes I want the random number only for visual purposes and will not use it in other parts of the program; for example, setting something to a random color in a visual effect:
void updateEffect(){
    for(int i = 0; i < 1000; i++){
        int r;
        int g;
        int b;
        star[i].setColor(r % 255, g % 255, b % 255);
        bool isVisible;
        star[i].setVisible(isVisible);
    }
}
Is this faster than
void updateEffect(){
    for(int i = 0; i < 1000; i++){
        star[i].setColor(rand() % 255, rand() % 255, rand() % 255);
        star[i].setVisible(rand() % 2 == 0 ? true : false);
    }
}
and also faster than other random number generators?
As others have noted, this is Undefined Behavior (UB).
In practice, it will (probably) actually (kind of) work. Reading from an uninitialized register on x86[-64] architectures will indeed produce garbage results, and probably won't do anything bad (as opposed to e.g. Itanium, where registers can be flagged as invalid, so that reads propagate errors like NaN).
There are two main problems though:
It won't be particularly random. In this case, you're reading from the stack, so you'll get whatever was there previously. Which might be effectively random, completely structured, the password you entered ten minutes ago, or your grandmother's cookie recipe.
It's Bad (capital 'B') practice to let things like this creep into your code. Technically, the compiler could insert reformat_hdd(); every time you read an undefined variable. It won't, but you shouldn't do it anyway. Don't do unsafe things. The fewer exceptions you make, the safer you are from accidental mistakes all the time.
The more pressing issue with UB is that it makes your entire program's behavior undefined. Modern compilers can use this to elide huge swaths of your code or even go back in time. Playing with UB is like a Victorian engineer dismantling a live nuclear reactor. There's a zillion things to go wrong, and you probably won't know half of the underlying principles or implemented technology. It might be okay, but you still shouldn't let it happen. Look at the other nice answers for details.
Also, I'd fire you.
Let me say this clearly: we do not invoke undefined behavior in our programs. It is never ever a good idea, period. There are rare exceptions to this rule; for example, if you are a library implementer implementing offsetof. If your case falls under such an exception you likely know this already. In this case we know using uninitialized automatic variables is undefined behavior.
Compilers have become very aggressive with optimizations around undefined behavior, and we can find many cases where undefined behavior has led to security flaws. The most infamous case is probably the Linux kernel null pointer check removal, which I mention in my answer to C++ compilation bug?, where a compiler optimization around undefined behavior turned a finite loop into an infinite one.
We can read CERT's Dangerous Optimizations and the Loss of Causality (video) which says, amongst other things:
Increasingly, compiler writers are taking advantage of undefined behaviors in the C and C++ programming languages to improve optimizations. Frequently, these optimizations are interfering with the ability of developers to perform cause-effect analysis on their source code, that is, analyzing the dependence of downstream results on prior results. Consequently, these optimizations are eliminating causality in software and are increasing the probability of software faults, defects, and vulnerabilities.
Specifically with respect to indeterminate values, the C standard defect report 451: Instability of uninitialized automatic variables makes for some interesting reading. It has not been resolved yet, but it introduces the concept of wobbly values, which means the indeterminateness of a value may propagate through the program, and the variable can have different indeterminate values at different points in the program.
I don't know of any examples where this happens but at this point we can't rule it out.
Real examples, not the result you expect
You are unlikely to get random values. A compiler could optimize away the loop altogether. For example, with this simplified case:
void updateEffect(int arr[20]){
    for(int i = 0; i < 20; i++){
        int r;
        arr[i] = r;
    }
}
clang optimizes it away (see it live):
updateEffect(int*): # #updateEffect(int*)
retq
or perhaps get all zeros, as with this modified case:
void updateEffect(int arr[20]){
    for(int i = 0; i < 20; i++){
        int r;
        arr[i] = r % 255;
    }
}
see it live:
updateEffect(int*): # #updateEffect(int*)
xorps %xmm0, %xmm0
movups %xmm0, 64(%rdi)
movups %xmm0, 48(%rdi)
movups %xmm0, 32(%rdi)
movups %xmm0, 16(%rdi)
movups %xmm0, (%rdi)
retq
Both of these cases are perfectly acceptable forms of undefined behavior.
Note, if we are on an Itanium we could end up with a trap value:
[...]if the register happens to hold a special not-a-thing value,
reading the register traps except for a few instructions[...]
Other important notes
It is interesting to note the variance between gcc and clang noted in the UB Canaries project over how willing they are to take advantage of undefined behavior with respect to uninitialized memory. The article notes (emphasis mine):
Of course we need to be completely clear with ourselves that any such expectation has nothing to do with the language standard and everything to do with what a particular compiler happens to do, either because the providers of that compiler are unwilling to exploit that UB or just because they have not gotten around to exploiting it yet. When no real guarantee from the compiler provider exists, we like to say that as-yet unexploited UBs are time bombs: they’re waiting to go off next month or next year when the compiler gets a bit more aggressive.
As Matthieu M. points out, What Every C Programmer Should Know About Undefined Behavior #2/3 is also relevant to this question. It says amongst other things (emphasis mine):
The important and scary thing to realize is that just about any optimization based on undefined behavior can start being triggered on buggy code at any time in the future. Inlining, loop unrolling, memory promotion and other optimizations will keep getting better, and a significant part of their reason for existing is to expose secondary optimizations like the ones above.
To me, this is deeply dissatisfying, partially because the compiler inevitably ends up getting blamed, but also because it means that huge bodies of C code are land mines just waiting to explode.
For completeness' sake, I should probably mention that implementations can choose to make undefined behavior well defined; for example, gcc allows type punning through unions, while in C++ this otherwise seems to be undefined behavior. If this is the case, the implementation should document it, and it will usually not be portable.
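As a hedged illustration of that gcc extension (my own sketch, not code from the answer): reading a union member other than the one last written is documented behavior under gcc, but undefined by the letter of ISO C++.

#include <cstdint>

// Sketch only: relies on gcc's documented union type-punning extension.
union FloatBits {
    float f;
    std::uint32_t u;
};

std::uint32_t bitsOf(float x) {
    FloatBits fb;
    fb.f = x;
    return fb.u;   // fine under gcc's documented extension; UB per ISO C++
}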
No, it's terrible.
The behaviour of using an uninitialised variable is undefined in both C and C++, and it's very unlikely that such a scheme would have desirable statistical properties.
If you want a "quick and dirty" random number generator, then rand() is your best bet. In its implementation, all it does is a multiplication, an addition, and a modulus.
The fastest generator I know of requires you to use a uint32_t as the type of the pseudo-random variable I, and use
I = 1664525 * I + 1013904223
to generate successive values. You can choose any initial value of I (called the seed) that takes your fancy. Obviously you can code that inline. The standard-guaranteed wraparound of an unsigned type acts as the modulus. (The numeric constants are hand-picked by that remarkable scientific programmer Donald Knuth.)
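A minimal sketch of that generator (the seed value, function name, and printing loop here are arbitrary choices of mine, not part of the answer):

#include <cstdint>
#include <cstdio>

static std::uint32_t I = 12345u;              // the seed: any starting value you fancy

// One step of the linear congruential generator described above;
// unsigned wraparound supplies the modulus.
inline std::uint32_t nextRandom() {
    I = 1664525u * I + 1013904223u;
    return I;
}

int main() {
    for (int n = 0; n < 5; ++n)
        std::printf("%u\n", static_cast<unsigned>(nextRandom()));
}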
Good question!
Undefined does not mean random. Think about it: the values you'd get in global uninitialized variables were left there by the system or by your own or other applications. Depending on what your system does with no-longer-used memory and/or what kind of values the system and applications generate, you may:
Always get the same value.
Get one of a small set of values.
Get values in one or more small ranges.
See many values divisible by 2/4/8, coming from pointers on 16/32/64-bit systems.
...
The values you'll get depend completely on which non-random values are left behind by the system and/or applications. So, indeed, there will be some noise (unless your system wipes no-longer-used memory), but the value pool from which you'll draw is by no means random.
Things get much worse for local variables, because these come directly from the stack of your own program. There is a very good chance that your program will actually write to these stack locations during the execution of other code. I estimate the chances of getting lucky here as very low, and every 'random' code change you make tries that luck again.
Read about randomness: as you'll see, randomness is a very specific and hard-to-obtain property. It's a common mistake to think that if you just take something that's hard to track (like your suggestion), you'll get a random value.
Many good answers, but allow me to add another and stress the point that in a deterministic computer, nothing is random. This is true both for the numbers produced by a pseudo-RNG and for the seemingly "random" numbers found in areas of memory reserved for C/C++ local variables on the stack.
BUT... there is a crucial difference.
The numbers generated by a good pseudorandom generator have the properties that make them statistically similar to truly random draws. For instance, the distribution is uniform. The cycle length is long: you can get millions of random numbers before the cycle repeats itself. The sequence is not autocorrelated: for instance, you will not begin to see strange patterns emerge if you take every 2nd, 3rd, or 27th number, or if you look at specific digits in the generated numbers.
In contrast, the "random" numbers left behind on the stack have none of these properties. Their values and their apparent randomness depend entirely on how the program is constructed, how it is compiled, and how it is optimized by the compiler. By way of example, here is a variation of your idea as a self-contained program:
#include <stdio.h>

void notrandom(void)
{
    int r, g, b;
    printf("R=%d, G=%d, B=%d", r & 255, g & 255, b & 255);
}

int main(int argc, char *argv[])
{
    int i;
    for (i = 0; i < 10; i++)
    {
        notrandom();
        printf("\n");
    }
    return 0;
}
When I compile this code with GCC on a Linux machine and run it, it turns out to be rather unpleasantly deterministic:
R=0, G=19, B=0
R=130, G=16, B=255
R=130, G=16, B=255
R=130, G=16, B=255
R=130, G=16, B=255
R=130, G=16, B=255
R=130, G=16, B=255
R=130, G=16, B=255
R=130, G=16, B=255
R=130, G=16, B=255
If you looked at the compiled code with a disassembler, you could reconstruct what was going on, in detail. The first call to notrandom() used an area of the stack that was not used by this program previously; who knows what was in there. But after that call to notrandom(), there is a call to printf() (which the GCC compiler actually optimizes to a call to putchar(), but never mind) and that overwrites the stack. So the next and subsequent times, when notrandom() is called, the stack will contain stale data from the execution of putchar(), and since putchar() is always called with the same arguments, this stale data will always be the same, too.
So there is absolutely nothing random about this behavior, nor do the numbers obtained this way have any of the desirable properties of a well-written pseudorandom number generator. In fact, in most real-life scenarios, their values will be repetitive and highly correlated.
Indeed, as others, I would also seriously consider firing someone who tried to pass off this idea as a "high performance RNG".
Undefined behavior means that the authors of compilers are free to ignore the problem because programmers will never have a right to complain whatever happens.
While in theory, when entering UB land, anything can happen (including demons flying out of your nose), what it normally means is that compiler authors just won't care and, for local variables, the value will be whatever is in the stack memory at that point.
This also means that often the content will be "strange" but fixed, or slightly random, or variable but with a clear, evident pattern (e.g. increasing values at each iteration).
You certainly cannot expect it to be a decent random generator.
Undefined behaviour is undefined. It doesn't mean that you get an undefined value; it means that the program can do anything and still meet the language specification.
A good optimizing compiler should take
void updateEffect(){
    for(int i = 0; i < 1000; i++){
        int r;
        int g;
        int b;
        star[i].setColor(r % 255, g % 255, b % 255);
        bool isVisible;
        star[i].setVisible(isVisible);
    }
}
and compile it to a no-op. This is certainly faster than any alternative. It has the downside that it will not do anything, but such is the downside of undefined behaviour.
Not mentioned yet, but code paths that invoke undefined behavior are allowed to do whatever the compiler wants, e.g.
void updateEffect(){}
Which is certainly faster than your correct loop, and because of UB, is perfectly conformant.
For security reasons, new memory assigned to a program has to be cleaned by the operating system; otherwise the information could be used, and passwords could leak from one application into another. Only when memory is reused within your own process do you get values other than 0. And it is very likely that, on the stack, the previous value is simply fixed, because the previous use of that memory is fixed.
Your particular code example would probably not do what you are expecting. While technically each iteration of the loop re-creates the local variables for the r, g, and b values, in practice it's the exact same memory space on the stack. Hence it won't get re-randomized with each iteration, and you will end up assigning the same 3 values for each of the 1000 colors, regardless of how random the r, g, and b are individually and initially.
Indeed, if it did work, I would be very curious as to what's re-randomizing it. The only thing I can think of would be an interleaved interrupt that piggybacked atop that stack, which is highly unlikely. Perhaps internal optimization that kept those as register variables rather than as true memory locations, where the registers get re-used further down in the loop, would do the trick too, especially if the set-visibility function is particularly register-hungry. Still, far from random.
As most people here have mentioned, this is undefined behavior. Undefined also means that you may (if you're lucky) get some valid integer value, and in that case this will be faster (as no call to rand() is made).
But don't actually use it. I am sure this will give terrible results, as luck is not with you all the time.
Really bad! Bad habit, bad result.
Consider:
A_Function_that_use_a_lot_the_Stack();
updateEffect();
If the function A_Function_that_use_a_lot_the_Stack() always performs the same initialization, it leaves the stack filled with the same data. That data is what we get when calling updateEffect(): always the same value!
I performed a very simple test, and it wasn't random at all.
#include <stdio.h>

int main() {
    int a;
    printf("%d\n", a);
    return 0;
}
Every time I ran the program, it printed the same number (32767 in my case) -- you can't get much less random than that. This is presumably whatever the startup code in the runtime library left on the stack. Since it uses the same startup code every time the program runs, and nothing else varies in the program between runs, the results are perfectly consistent.
You need to have a definition of what you mean by 'random'.
A sensible definition is that the values you get should have little correlation. That's something you can measure. It's also not trivial to achieve in a controlled, reproducible manner. So undefined behaviour is certainly not what you are looking for.
There are certain situations in which uninitialized memory may be safely read using type "unsigned char*" [e.g. a buffer returned from malloc]. Code may read such memory without having to worry about the compiler throwing causality out the window, and there are times when it may be more efficient to have code be prepared for anything memory might contain than to ensure that uninitialized data won't be read (a commonplace example of this would be using memcpy on a partially-initialized buffer rather than discretely copying all of the elements that contain meaningful data).
Even in such cases, however, one should always assume that if any combination of bytes will be particularly vexatious, reading it will always yield that pattern of bytes (and if a certain pattern would be vexatious in production, but not in development, such a pattern won't appear until code is in production).
Reading uninitialized memory might be useful as part of a random-generation strategy in an embedded system where one can be sure the memory has never been written with substantially-non-random content since the last time the system was powered on, and if the manufacturing process used for the memory causes its power-on state to vary in semi-random fashion. Code should work even if all devices always yield the same data, but in cases where e.g. a group of nodes each need to select arbitrary unique IDs as quickly as possible, having a "not very random" generator which gives half the nodes the same initial ID might be better than not having any initial source of randomness at all.
As others have said, it will be fast, but not random.
What most compilers will do for local variables is to grab some space for them on the stack, but not bother setting it to anything (the standard says they don't need to, so why slow down the code you're generating?).
In this case, the value you'll get will depend on what was previously on the stack: if you call a function before this one that has a hundred local char variables all set to 'Q' and then call your function after that one returns, then you'll probably find your "random" values behave as if you'd memset() them all to 'Q's.
Importantly for your example function trying to use this, these values won't change each time you read them; they'll be the same every time. So you'll get 100 stars all set to the same colour and visibility.
Also, nothing says that the compiler shouldn't initialize these values, so a future compiler might do so.
In general: bad idea, don't do it.
(like a lot of "clever" code level optimizations really...)
As others have already mentioned, this is undefined behavior (UB), but it may "work".
Apart from the problems already mentioned by others, I see one other problem (disadvantage): it will not work in any language other than C and C++. I know that this question is about C++, but if you can write code which is good C++ and good Java code, and that's not a problem, then why not? Maybe some day someone will have to port it to another language, and searching for bugs caused by "magic trick" UB like this will definitely be a nightmare (especially for an inexperienced C/C++ developer).
Here is a question about another similar UB. Just imagine trying to find a bug like this without knowing about this UB. If you want to read more about such strange things in C/C++, read the answers to the question from the link and see this GREAT slideshow. It will help you understand what's under the hood and how it's working; it's not just another slideshow full of "magic". I'm quite sure that even most experienced C/C++ programmers can learn a lot from it.
It is not a good idea to base any logic on language undefined behaviour. In addition to whatever has been mentioned/discussed in this post, I would like to mention that with a modern C++ approach/style such a program may not even compile.
This was mentioned in my previous post, which describes the advantages of the auto feature and a useful link on the topic:
https://stackoverflow.com/a/26170069/2724703
So, if we change the above code and replace the actual types with auto, the program would not even compile.
void updateEffect(){
    for(int i = 0; i < 1000; i++){
        auto r;
        auto g;
        auto b;
        star[i].setColor(r % 255, g % 255, b % 255);
        auto isVisible;
        star[i].setVisible(isVisible);
    }
}
I like your way of thinking. Really outside the box. However, the tradeoff is really not worth it. A memory-for-runtime tradeoff is a thing; trading in undefined behavior for runtime is not.
It must give you a very unsettling feeling to know you are using such "randomness" as your business logic. I wouldn't do it.
Use 7757 every place you are tempted to use uninitialized variables. I picked it randomly from a list of prime numbers:
it is defined behavior
it is guaranteed to not always be 0
it is prime
it is likely to be as statistically random as uninitialized variables
it is likely to be faster than uninitialized variables, since its value is known at compile time
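A minimal sketch of that suggestion (the constant and function names are illustrative, not from the answer above):

// Hypothetical sketch: a named compile-time constant instead of an uninitialized read.
constexpr int kArbitraryPrime = 7757;

int nextColorComponent() {
    return kArbitraryPrime % 255;   // defined behavior, value known at compile time
}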
There is one more possibility to consider.
Modern compilers (ahem g++) are so intelligent that they go through your code to see what instructions affect state, and what don't, and if an instruction is guaranteed to NOT affect the state, g++ will simply remove that instruction.
So here's what will happen. g++ will definitely see that you are reading, performing arithmetic on, and saving what is essentially a garbage value, which produces more garbage. Since there is no guarantee that the new garbage is any more useful than the old one, it will simply do away with your loop. BLOOP!
This method is useful, but here's what I would do: combine UB (Undefined Behaviour) with rand()'s speed.
Of course, reduce the number of rand() calls executed, but mix them in so the compiler doesn't do anything you don't want it to.
And I won't fire you.
Using uninitialized data for randomness is not necessarily a bad thing if done properly. In fact, OpenSSL does exactly this to seed its PRNG.
Apparently this usage wasn't well documented however, because someone noticed Valgrind complaining about using uninitialized data and "fixed" it, causing a bug in the PRNG.
So you can do it, but you need to know what you're doing and make sure that anyone reading your code understands this.

Fortran : Initialize all variables to a specific default value

I am working on a ~40-year-old Fortran spaghetti code with lots of variables that are implicitly declared, so there is no simple way to even know what variables exist in the code in order to initialize their values. Now, is there a way to tell the compiler (for example Intel Fortran) to initialize all variables in the code to a specific default value (e.g., -999), rather than to zero or a very large number as provided by the Intel compiler?
gfortran provides some options for this. Integers can be initialized with -finit-integer=n, where n is an integer. For real numbers you can use -finit-real=<zero|inf|-inf|nan|snan>. Together with -ffpe-trap=denormal, this can be very helpful for tracking down uninitialized reals.
You probably want:
ifort -check uninit
Note that, per the man page, this only checks scalars.
Also, based on some quick testing, it is a pretty weak check. It doesn't catch this simple case, for example:
program test
    call f(i)
end

subroutine f(j)
    write(*,*) j
end
This returns 0.
I suppose it's better than nothing though.

Use of Literals, yay/nay in C++

I've recently heard that in some cases, programmers believe that you should never use literals in your code. I understand that in some cases, assigning a variable name to a given number can be helpful (especially in terms of maintenance if that number is used elsewhere). However, consider the following case studies:
Case Study 1: Use of Literals for "special" byte codes.
Say you have an if statement that checks for a specific value stored in (for the sake of argument) a uint16_t. Here are the two code samples:
Version 1:
// Descriptive comment as to why I'm using 0xBEEF goes here
if (my_var == 0xBEEF) {
//do something
}
Version 2:
const uint16_t kSuperDescriptiveVarName = 0xBEEF;
if (my_var == kSuperDescriptiveVarName) {
// do something
}
Which is the "preferred" method in terms of good coding practice? I can fully understand why you would prefer version 2 if kSuperDescriptiveVarName is used more than once. Also, does the compiler do any optimizations to make both versions effectively the same executable code? That is, are there any performance implications here?
Case Study 2: Use of sizeof
I fully understand that using sizeof versus a raw literal is preferred for portability and also readability concerns. Take the two code examples into account. The scenario is that you are computing the offset into a packet buffer (an array of uint8_t) where the first part of the packet is stored as my_packet_header, which let's say is a uint32_t.
Version 1:
const int offset = sizeof(my_packet_header);
Version 2:
const int offset = 4; // good comment telling reader where 4 came from
Clearly, version 1 is preferred, but what about for cases where you have multiple data fields to skip over? What if you have the following instead:
Version 1:
const int offset = sizeof(my_packet_header) + sizeof(data_field1) + sizeof(data_field2) + ... + sizeof(data_fieldn);
Version 2:
const int offset = 47;
Which is preferred in this case? Does it still make sense to show all the steps involved in computing the offset, or does the literal usage make sense here?
Thanks for the help in advance as I attempt to better my code practices.
Which is the "preferred" method in terms of good coding practice? I can fully understand why you would prefer version 2 if kSuperDescriptiveVarName is used more than once.
Sounds like you understand the main point... factoring values (and their comments) that are used in multiple places. Further, it can sometimes help to have a group of constants in one place - so their values can be inspected, verified, modified etc. without concern for where they're used in the code. Other times, there are many constants used in proximity and the comments needed to properly explain them would obfuscate the code in which they're used.
Countering that, having a const variable means all the programmers studying the code will be wondering whether it's used anywhere else, keeping it in mind as they inspect the rest of the scope in which it's declared etc. The fewer unnecessary things there are to remember, the surer the understanding of the important parts of the code will be.
Like so many things in programming, it's "an art" balancing the pros and cons of each approach, and best guided by experience and knowledge of the way the code's likely to be studied, maintained, and evolved.
Also, does the compiler do any optimizations to make both versions effectively the same executable code? That is, are there any performance implications here?
There are no performance implications in optimised code.
I fully understand that using sizeof versus a raw literal is preferred for portability and also readability concerns.
And other reasons too. A big factor in good programming is reducing the points of maintenance when changes are done. If you can modify the type of a variable and know that all the places using that variable will adjust accordingly, that's great - saves time and potential errors. Using sizeof helps with that.
Which is preferred [for calculating offsets in a struct]? Does it still make sense to show all the steps involved with computing the offset, or does the literal usage make sense here?
The offsetof macro (#include <cstddef>) is better for this... again reducing maintenance burden. With the this + that approach you illustrate, if the compiler decides to use any padding your offset will be wrong, and further you have to fix it every time you add or remove a field.
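For illustration, a hedged sketch of the offsetof approach; the Packet struct below is hypothetical (the field names echo the question, but the exact types and the payload member are my assumptions):

#include <cstddef>   // offsetof
#include <cstdint>

struct Packet {
    std::uint32_t my_packet_header;
    std::uint16_t data_field1;
    std::uint8_t  data_field2;
    std::uint8_t  payload[32];
};

// Tracks the real layout, including any padding the compiler inserts,
// and adjusts automatically when fields are added or removed.
constexpr std::size_t kPayloadOffset = offsetof(Packet, payload);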
Ignoring the offsetof issues and just considering your this + that example as an illustration of a more complex value to assign, again it's a balancing act. You'd definitely want some explanation/comment/documentation re intent here (are you working out the binary size of earlier fields? calculating the offset of the next field?, deliberately missing some fields that might not be needed for the intended use or was that accidental?...). Still, a named constant might be enough documentation, so it's likely unimportant which way you lean....
In every example you list, I would go with the name.
In your first example, you almost certainly used that special 0xBEEF number at least twice - once to write it and once to do your comparison. If you didn't write it, that number is still part of a contract with someone else (perhaps a file format definition).
In the last example, it is especially useful to show the computation that yielded the value. That way, if you encounter trouble down the line, you can easily see either that the number is trustworthy, or what you missed and fix it.
There are some cases where I prefer literals over named constants though. These are always cases where a name is no more meaningful than the number. For example, you have a game program that plays a dice game (perhaps Yahtzee), where there are specific rules for specific die rolls. You could define constants for One = 1, Two = 2, etc. But why bother?
Generally it is better to use a name instead of a value. After all, if you need to change it later, you can find it more easily. Also, it is not always clear why a particular number is used when you read the code, so having a meaningful name assigned to it makes this immediately clear to a programmer.
Performance-wise there is no difference, because the optimizer should take care of it. And even if an extra instruction were generated, it is rather unlikely that this would cause you trouble. If your code were that tight, you probably shouldn't rely on an optimizer effect anyway.
I can fully understand why you would prefer version 2 if kSuperDescriptiveVarName is used more than once.
I think kSuperDescriptiveVarName will definitely be used more than once: once for the check and at least once for an assignment, maybe in different parts of your program.
There will be no difference in performance, since an optimization called Constant Propagation exists in almost all compilers. Just enable optimization for your compiler.
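As a rough illustration of constant propagation (these two functions are my own sketch, not from the answer): with optimizations enabled, both typically compile to identical machine code, because the constant is folded into the comparison at compile time.

#include <cstdint>

constexpr std::uint16_t kSuperDescriptiveVarName = 0xBEEF;

bool checkLiteral(std::uint16_t my_var)  { return my_var == 0xBEEF; }
bool checkConstant(std::uint16_t my_var) { return my_var == kSuperDescriptiveVarName; }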

Initialize a variable

Is it better to declare and initialize the variable or just declare it?
What's the best and the most efficient way?
For example, I have this code:
#include <stdio.h>

int main()
{
    int number = 0;
    printf("Enter with a number: ");
    scanf("%d", &number);
    if(number < 0)
        number = -number;
    printf("The modulo is: %d\n", number);
    return 0;
}
If I don't initialize number, the code works fine, but I want to know, is it faster, better, more efficient? Is it good to initialize the variable?
scanf can fail, in which case nothing is written to number. So if you want your code to be correct you need to initialize it (or check the return value of scanf).
The speed of incorrect code is usually irrelevant, but for your example code, if there is a difference in speed at all, I doubt you would ever be able to measure it. Setting an int to 0 is much faster than I/O.
Don't attribute speed to a language; that attribute belongs to implementations of the language. There are fast implementations and slow implementations. There are optimisations associated with fast implementations; a compiler that produces well-optimised machine code will optimise the initialisation away if it can deduce that the initialisation isn't needed.
In this case, it actually does need the initialisation. Consider what happens if scanf fails. When scanf fails, its return value reflects this failure. It'll either return:
A value less than zero if there was a read error or EOF (which can be triggered in an implementation-defined way, typically CTRL+Z on Windows and CTRL+d on Linux),
A number less than the number of objects provided to scanf (since you've provided only one object, this failure return value would be 0) when a conversion failure occurs (for example, entering 'a' on stdin when you've told scanf to convert sequences of '0'..'9' into an integer),
The number of objects scanf managed to assign to. This is 1, in your case.
Since you aren't checking for any of these return values (particularly the third), your compiler can't deduce that the initialisation is unnecessary and hence can't optimise it away. When the variable is uninitialised, failure to check these return values results in undefined behaviour; a chicken might appear to be living even when it is missing its head. It would be best to check the return value of scanf. That way, when your variable is uninitialised you can avoid using an uninitialised value, and when it isn't, your compiler can optimise away the initialisation, presuming you handle erroneous return values by producing error messages rather than using the variable.
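A minimal sketch of that approach, assuming the desired policy on bad input is simply to print an error and exit (the policy is my assumption, not something prescribed above):

#include <stdio.h>

int main()
{
    int number = 0;                      /* safe fallback value */
    printf("Enter with a number: ");
    if (scanf("%d", &number) != 1) {     /* one object requested, so success returns 1 */
        fprintf(stderr, "Invalid input.\n");
        return 1;                        /* report the error instead of using number */
    }
    if (number < 0)
        number = -number;
    printf("The modulo is: %d\n", number);
    return 0;
}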
edit: On that topic of undefined behaviour, consider what happens in this code:
if(number < 0)
number= -number;
If number is -32768, and INT_MAX is 32767, then section 6.5, paragraph 5 of the C standard applies because -(-32768) isn't representable as an int.
Section 6.5, paragraph 5 says:
If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.
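A defensive sketch for that corner case (the helper name is made up, and clamping to INT_MAX is just one possible policy, not something the answer prescribes):

#include <limits.h>

int safe_abs(int n)
{
    if (n == INT_MIN)       /* -INT_MIN is not representable as an int */
        return INT_MAX;     /* clamp: an arbitrary policy chosen for this sketch */
    return n < 0 ? -n : n;
}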
Suppose you don't initialize a variable and your code is buggy (e.g. you forgot to read number). Then the uninitialized value of number is garbage, and different runs will output (or behave) differently.
But if you initialize all of your variables, the program will produce a consistent result: an easy-to-trace error.
Yes, initialization adds extra instructions to your code at a low level, for example mov $0, 28(%esp). But it is a one-time task and doesn't kill your code's efficiency.
So, always initializing is a good practice!
With modern compilers, there isn't going to be any difference in efficiency. Coding style is the main consideration. In general, your code is more self-explanatory and less likely to have mistakes if you initialize all variables upon declaring them. In the case you gave, though, since the variable is effectively initialized by the scanf, I'd consider it better not to have a redundant initialization.
First, you need to answer these questions:
1) How many times is this function called? If you call it 10,000,000 times, then it's a good idea to have the best version.
2) If I don't initialize my variable, am I sure that my code is safe and won't throw any exception?
That said, an int initialization doesn't change much in your code, but a string initialization does.
Be sure that you do all the checks, because if you have an uninitialized variable your program is potentially buggy.
I can't tell you how many times I've seen simple errors because a programmer doesn't initialize a variable. Just two days ago there was another question on SO where the end result of the issue being faced was simply that the OP didn't initialize a variable and thus there were problems.
When you talk about "speed" and "efficiency", don't simply consider how much faster the code might compile or run (and in this case it's pretty much irrelevant anyway), but consider your debugging time when there's a simple mistake in the code due to the fact that you didn't initialize a variable that very easily could have been.
Note also, in my experience, when coding for larger corporations they will run your code through tools like Coverity or Klocwork, which will ding you for uninitialized variables because they present a security risk.

GCC: program doesn't work with compilation option -O3

I'm writing a C++ program that doesn't work (I get a segmentation fault) when I compile it with optimizations (options -O1, -O2, -O3, etc.), but it works just fine when I compile it without optimizations.
Is there any chance that the error is in my code? or should I assume that this is a bug in GCC?
My GCC version is 3.4.6.
Is there any known workaround for this kind of problem?
There is a big difference in speed between the optimized and unoptimized version of my program, so I really need to use optimizations.
This is my original functor. The one that works fine with no levels of optimizations and throws a segmentation fault with any level of optimization:
struct distanceToPointSort{
    indexedDocument* point;
    distanceToPointSort(indexedDocument* p): point(p) {}
    bool operator() (indexedDocument* p1, indexedDocument* p2){
        return distance(point, p1) < distance(point, p2);
    }
};
And this one works flawlessly with any level of optimization:
struct distanceToPointSort{
    indexedDocument* point;
    distanceToPointSort(indexedDocument* p): point(p) {}
    bool operator() (indexedDocument* p1, indexedDocument* p2){
        float d1 = distance(point, p1);
        float d2 = distance(point, p2);
        std::cout << ""; //without this line, I get a segmentation fault anyways
        return d1 < d2;
    }
};
Unfortunately, this problem is hard to reproduce because it happens with some specific values. I get the segmentation fault upon sorting just one out of more than a thousand vectors, so it really depends on the specific combination of values each vector has.
Now that you posted the code fragment and a working workaround was found (see Windows programmer's answer), I can say that perhaps what you are looking for is -ffloat-store.
-ffloat-store
Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory.
This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a double is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables.
Source: http://gcc.gnu.org/onlinedocs/gcc-3.4.6/gcc/Optimize-Options.html
I would assume your code is wrong first.
Though it is hard to tell.
Does your code compile with 0 warnings?
g++ -Wall -Wextra -pedantic -ansi
Here's some code that seems to work, until you hit -O3...
#include <stdio.h>

int main()
{
    int i = 0, j = 1, k = 2;
    printf("%d %d %d\n", *(&j-1), *(&j), *(&j+1));
    return 0;
}
Without optimisations, I get "2 1 0"; with optimisations I get "40 1 2293680". Why? Because i and k got optimised out!
But I was taking the address of j and going out of the memory region allocated to j. That's not allowed by the standard. It's most likely that your problem is caused by a similar deviation from the standard.
I find valgrind is often helpful at times like these.
EDIT: Some commenters are under the impression that the standard allows arbitrary pointer arithmetic. It does not. Remember that some architectures have funny addressing schemes, alignment may be important, and you may get problems if you overflow certain registers!
The words of the [draft] standard, on adding/subtracting an integer to/from a pointer (emphasis added):
"If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined."
Seeing as &j doesn't even point to an array object, &j-1 and &j+1 can hardly point to part of the same array object. So simply evaluating &j+1 (let alone dereferencing it) is undefined behaviour.
On x86 we can be pretty confident that adding one to a pointer is fairly safe and just takes us to the next memory location. In the code above, the problem occurs when we make assumptions about what that memory contains, which of course the standard doesn't go near.
As an experiment, try to see if this will force the compiler to round everything consistently.
volatile float d1=distance(point,p1) ;
volatile float d2=distance(point,p2) ;
return d1 < d2 ;
The error is in your code. It's likely you're doing something that invokes undefined behavior according to the C standard which just happens to work with no optimizations, but when GCC makes certain assumptions for performing its optimizations, the code breaks when those assumptions aren't true. Make sure to compile with the -Wall option, and the -Wextra might also be a good idea, and see if you get any warnings. You could also try -ansi or -pedantic, but those are likely to result in false positives.
You may be running into an aliasing problem (or it could be a million other things). Look up the -fstrict-aliasing option.
This kind of question is impossible to answer properly without more information.
It is very seldom the compiler's fault, but compilers do have bugs in them, and they often manifest themselves at different optimization levels (if there is a bug in an optimization pass, for example).
In general when reporting programming problems: provide a minimal code sample to demonstrate the issue, such that people can just save the code to a file, compile and run it. Make it as easy as possible to reproduce your problem.
Also, try different versions of GCC (compiling your own GCC is very easy, especially on Linux). If possible, try another compiler. Intel has a compiler which is more or less GCC-compatible (and free for non-commercial use, I think). This will help pinpoint the problem.
It's almost (almost) never the compiler.
First, make sure you're compiling warning-free, with -Wall.
If that didn't give you a "eureka" moment, attach a debugger to the least optimized version of your executable that crashes and see what it's doing and where it goes.
5 will get you 10 that you've fixed the problem by this point.
I ran into the same problem a few days ago; in my case it was aliasing. And GCC does it differently, but not wrongly, when compared to other compilers. GCC has become what some might call a rules lawyer of the C++ standard, and its implementation is correct, but you also have to be really correct in your C++, or it'll over-optimize some things, which is a pain. But you get speed, so you can't complain.
I expect to get some downvotes here after reading some of the comments, but in the console game programming world, it's rather common knowledge that the higher optimization levels can sometimes generate incorrect code in weird edge cases. It might very well be that edge cases can be fixed with subtle changes to the code, though.
Alright...
This is one of the weirdest problems I've ever had.
I don't think I have enough proof to state it's a GCC bug, but honestly... It really looks like one.
(The original functor, the modified version that works, and the note about how hard this is to reproduce are the ones already shown in the question above.)
Wow, I didn't expect answers so quickly, and so many...
The error occurs upon sorting a std::vector of pointers using std::sort(), to which I provide the strict-weak-ordering functor.
But I know the functor I provide is correct, because I've used it a lot and it works fine.
Plus, the error cannot be some invalid pointer in the vector, because the error occurs only when I sort the vector. If I iterate through the vector without applying std::sort first, the program works fine.
I just used GDB to try to find out what's going on. The error occurs when std::sort invokes my functor. Apparently std::sort is passing an invalid pointer to my functor. (Of course this happens with the optimized version only, at any level of optimization: -O, -O2, -O3.)
As others have pointed out, this is probably strict aliasing. Turn it off at -O3 (-fno-strict-aliasing) and try again. My guess is that you are doing some pointer tricks in your functor (fast float-as-int compare? object type in the lower 2 bits?) that fail across inlined template functions.
Warnings do not help to catch this case ("if the compiler could detect all strict aliasing problems it could just as well avoid them"). Just changing an unrelated line of code may make the problem appear or go away, as it changes register allocation.
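For illustration (my own sketch, not code from the question): the kind of "fast float-as-int compare" trick mentioned above breaks strict aliasing, while copying the bytes is well defined.

#include <cstdint>
#include <cstring>

// Undefined behavior: reads a float object through an int lvalue.
//   std::int32_t bits = *reinterpret_cast<const std::int32_t*>(&someFloat);

// Well-defined alternative: copy the bytes; compilers optimize this to a single move.
inline std::int32_t floatBits(float f) {
    std::int32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    return bits;
}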
As the updated question will show ;) , the problem exists with a std::vector<T*>. One common error with vectors is reserve()ing what should have been resize()d. As a result, you'd be writing outside array bounds. An optimizer may discard those writes.
Post the code of distance! It probably does some pointer magic; see my previous post. Doing an intermediate assignment just hides the bug in your code by changing register allocation. Even more telling is that adding the output changes things!
The true answer is hidden somewhere inside all the comments in this thread. First of all: it is not a bug in the compiler.
The problem has to do with floating point precision. distanceToPointSort is a comparator that should never return true for both the argument orders (a,b) and (b,a), but that is exactly what can happen when the compiler decides to use higher precision for some data paths. The problem is especially likely on, but by no means limited to, x86 without -mfpmath=sse. If the comparator behaves that way, the sort function can become confused, and the segmentation fault is not surprising.
I consider -ffloat-store the best solution here (already suggested by CesarB).