For testing reasons I would like to cause a division by zero in my C++ code. I wrote this code:
int x = 9;
cout << "int x=" << x;
int y = 10/(x-9);
y += 10;
I see "int x=9" printed on the screen, but the application doesn't crash. Is it because of some compiler optimization (I compile with gcc)? What could be the reason?
Make the variables volatile. Reads and writes to volatile variables are considered observable:
volatile int x = 1;
volatile int y = 0;
volatile int z = x / y;
Because the value of y is never read, the whole computation is being optimized away.
Try adding a cout << y at the end.
Alternatively, you can turn off optimization:
g++ -O0 file.cpp
Division by zero is undefined behaviour, and not crashing is just one of the infinitely many possible behaviours in the domain of undefined behaviour.
Typically, an integer divide by zero raises a hardware trap (e.g. SIGFPE on POSIX systems) rather than a C++ exception; but because the behaviour is undefined, the compiler is also free to remove the unused division entirely, which is what happens here.
I want to know if there is any different between std::atomic<int> and int if we are just doing load and store. I am not concerned about the memory ordering. For example consider the below code
#include <thread>
using namespace std;

int x{1};

void f(int myid) {
    while (1) {
        while (x != myid) {}
        //cout << "thread : " << myid << "\n";
        //this_thread::sleep_for(std::chrono::duration(3s));
        x = (x % 3) + 1;
    }
}

int main() {
    thread t[3];
    for (int i = 0; i < 3; i++) {
        t[i] = thread(f, i + 1);
    }
    for (int i = 0; i < 3; i++) {
        t[i].join();
    }
}
Now the output (if you uncomment the cout) will be
thread : 1
thread : 2
thread : 3
...
I want to know if there is any benefit in changing the int x to atomic<int> x?
Consider your code:
void f(int myid) {
    while (1) {
        while (x != myid) {}
        //cout << "thread : " << myid << "\n";
        //this_thread::sleep_for(std::chrono::duration(3s));
        x = (x % 3) + 1;
    }
}
If the program didn't have undefined behaviour, you could expect that when f was called, x would be read from memory at least once. Having done that, though, the compiler has no reason to think that x can change outside the function, or that any changes it makes to x need to be visible outside the function before the function returns. So it is entitled to read x into a CPU register once and keep comparing that register value against myid - which means the inner loop will either pass through instantly or be stuck forever.
Then, compilers are allowed to assume they'll make progress (see Forward Progress in the C++ Standard), so they could conclude that because they'd never progress if x != myid, x can't possibly be equal to myid, and remove the inner while loop. Similarly, an outer loop simplified to while (1) x = (x % 3) + 1; where x might be a register - doesn't make progress and could also be eliminated. Or, the compiler could leave the loop but remove the seemingly pointless operations on x.
Putting your code into the online Godbolt compiler explorer and compiling with GCC trunk at -O3 optimisation, f(int) code is:
f(int):
.L2:
jmp .L2
If you make x atomic, then the compiler can't simply use a register while accessing/modifying it, and assume that there will be a good time to update it before the function returns. It will actually have to modify the variable in memory and propagate that change so other threads can read the updated value.
I want to know if there is any benefit in changing the int x to atomic<int> x?
You could say that. Turning int into atomic<int> in your example will turn your program from incorrect to correct (*).
Accessing the same int from multiple threads at the same time (without any form of access synchronization) is Undefined Behavior.
*) Well, the program might still be incorrect, but at least it avoids this particular problem.
I have exam papers from last year that I have been reviewing, and I am stuck on one point. My teacher said the code below will lead to a compile error, but I tried it in Visual Studio on my computer and it worked, with the output: 4.0. The code is:
float x = 3.0;
float y = 2.0;
int j = 10;
int k = 4;
j = j / k + y;
I will attend the exam tomorrow about it. What should I write as an answer?
There are only two issues I see here:
The x variable is unused, which could be an error if your compiler is set to enable warnings on unused variables, and you have asked your compiler to turn all warnings into errors. However, it will generally compile fine with default compiler settings.
The assignment statement stores a float value in an int variable, which is probably what your teacher is getting at. However, this conversion is automatic and does not cause an error (but may generate a warning).
In other words, your teacher appears to be wrong about this and hasn't actually tried compiling this code.
As others have stated, "the code needs to be wrapped in an int main() function" is also a pretty bulletproof way to get the question right if your teacher is reasonable.
If you get marked wrong for stating that there is no compile-time error, go talk to the teacher and show them. I've successfully argued back points on exams for similar reasons. (I had one question asking why o.ToString; was a compile-time error in a C# program. The professor was looking for "missing parens." The correct answer was "o is not in scope.")
int main() {
    float x = 3.f; // unused-variable warning
    float y = 2.f;
    int j = 10;
    int k = 4;
    j = j / k + y;
}
This will compile with g++. If you compile with
g++ -Wall -pedantic -pedantic-errors test.cpp
you will only get a warning, because x is not used.
Is it safe to write such code?
#include <iostream>
int main()
{
    int x = x - x; // always 0
    int y = (y) ? y/y : --y/y; // always 1
}
I know there is undefined behaviour, but isn't it in this case just a trash value? If it is, then same value minus same is always 0, and same value divided by itself (excluding 0) is always 1. It's a great deal if one doesn't want to use integer literals, isn't it? (to feint the enemy)
Allow me to demonstrate the evil magic of undefined behaviour:
given:
#include <iostream>
int main()
{
    using namespace std;
    int x = x - x; // always 0
    int y = (y) ? y/y : --y/y; // always 1
    cout << x << ", " << y << endl;
    return 0;
}
apple clang, compile with -O3:
output:
1439098744, 0
Undefined is undefined. The comments in the above code are lies which will confound future maintainers of your random number generator ;-)
I know there is undefined behaviour, but isn't it in this case just a trash value? If it is, then same value minus same is always 0, and same value divided by itself (excluding 0) is always 1.
No! No, no, no!
The "trash value" is an "indeterminate value".
Subtracting an indeterminate value from itself does not yield zero: it causes your program to have undefined behaviour ([C++14: 8.5/12]).
You cannot rely on the normal rules of arithmetic to "cancel out" undefined behaviour.
Your program could travel back in time and spoil Game of Thrones/The Force Awakens/Supergirl for everyone. Please don't do this!
Undefined behavior is undefined. Always. Stuff may work or break more or less reliably on certain platforms, but in general, you can not rely on this program not crashing or any variable having a certain value.
Undefined behavior is undefined behavior. There's no "isn't it in this case something specific" (unless you are actually talking about result of a completed compilation, and looking at the generated machine code, but that is no longer C++). Compiler is allowed to do whatever it pleases.
Hi guys, could anyone explain why this program runs correctly, even though it is a bit strange:
#include <iostream>
using namespace std;

int main()
{
    int array[7] = {5, 7, 57, 77, 55, 2, 1};
    for (int i = 0; i < 10; i++)
        cout << i[array] << ", "; //array[i]
    cout << endl;
    return 0;
}
Why does the program compile correctly?
An expression (involving fundamental types) such as this:
x[y]
is converted at compile time to this:
*(x + y)
x + y is the same as y + x
Therefore: *(x + y) is the same as *(y + x)
Therefore: x[y] is the same as y[x]
In your program, you are trying to index the array out of its bounds. This may lead to a Segmentation Violation error, meaning the CPU attempted to access memory that cannot be addressed (think of it as memory not allocated for the array, since the access is out of bounds). This is a runtime error: it is not the compiler's responsibility to check for it; instead, it is raised by the operating system after being notified by the hardware. The compiler's error-checking responsibilities cover lexical and syntactic errors, so that your code can be compiled into machine code and, finally, binary.
For more information about Segmentation Violation error or Segmentation Fault, as commonly known, look here:
http://en.wikipedia.org/wiki/Segmentation_fault
You've come across Undefined Behavior. This means that the compiler is allowed to do whatever it wants with your program -- including compiling it without warnings or errors. Furthermore, it can produce any code it wants to for the case of undefined behavior, including assuming that it does not occur (a common optimization). Accessing an array out-of-bounds is an example of undefined behavior. Signed integer overflow, data races, and invalid pointer creation/use are others.
Theoretically, the compiler could emit code that invoked the shell and performed rm -rf /* (delete every file you have permission to delete)! Of course, no reasonable compiler would do this, but you get the idea.
Simply put, a program with undefined behavior is not a valid C++ program. This is true for the entirety of the program, not just after the undefined behavior. A compiler would have been perfectly free to compile your program to a no-op.
Adding to Benjamin Lindley's answer: compile the code below and you will see how the addresses are calculated:
#include <iostream>
using namespace std;

int main()
{
    int array[7] = {5, 7, 57, 77, 55, 2, 1};
    cout << &(array[0]) << endl;
    cout << &(array[1]) << endl;
    return 0;
}
output (for me ;-)):
0x28ff20
0x28ff24
It's just array + 0 and array + 1.
I'm experimenting with C++0x support and there is a problem, that I guess shouldn't be there. Either I don't understand the subject or gcc has a bug.
I have the following code; initially x and y are equal. Thread 1 always increments x first and then increments y. Both are atomic integers, so there is no problem with the increments themselves. Thread 2 checks whether x is less than y and displays an error message if so.
This code fails sometimes, but why? The issue is presumably memory reordering, but all atomic operations are sequentially consistent by default, and I didn't explicitly relax any of those operations. I'm compiling this code on x86, which as far as I know shouldn't have any ordering issues. Can you please explain what the problem is?
#include <iostream>
#include <atomic>
#include <thread>

std::atomic_int x;
std::atomic_int y;

void f1()
{
    while (true)
    {
        ++x;
        ++y;
    }
}

void f2()
{
    while (true)
    {
        if (x < y)
        {
            std::cout << "error" << std::endl;
        }
    }
}

int main()
{
    x = 0;
    y = 0;
    std::thread t1(f1);
    std::thread t2(f2);
    t1.join();
    t2.join();
}
There is a problem with the comparison:
x < y
The order of evaluation of subexpressions (in this case, of x and y) is unspecified, so y may be evaluated before x or x may be evaluated before y.
If x is read first, you have a problem:
x = 0; y = 0;
t2 reads x (value = 0);
t1 increments x; x = 1;
t1 increments y; y = 1;
t2 reads y (value = 1);
t2 compares x < y as 0 < 1; test succeeds!
If you explicitly ensure that y is read first, you can avoid the problem:
int yval = y;
int xval = x;
if (xval < yval) { /* ... */ }
The problem could be in your test:
if (x < y)
the thread could evaluate x and not get around to evaluating y until much later.
Every now and then, x will wrap around to 0 just before y wraps around to zero. At this point y will legitimately be greater than x.
First, I agree with Michael Burr and James McNellis: your test is not fair, and there is a legitimate possibility of failure. However, even if you rewrite the test the way James McNellis suggests, it may still fail.
The first reason is that you don't use volatile semantics, so the compiler may apply optimizations to your code (optimizations that would be fine in a single-threaded case).
But even with volatile your code is not guaranteed to work.
I think you don't fully understand the concept of memory reordering. Memory read/write reordering can actually occur at two levels:
The compiler may exchange the order of the generated read/write instructions.
The CPU may execute memory read/write instructions in arbitrary order.
Using volatile prevents (1). However, you've done nothing to prevent (2): memory access reordering by the hardware.
To prevent this, you should put special memory fence instructions in the code (they are directed at the CPU, unlike volatile, which is for the compiler only).
In x86/x64 there are many different memory fence instructions. Also, every instruction with lock semantics issues a full memory fence by default.
More information here:
http://en.wikipedia.org/wiki/Memory_barrier