This question already has answers here:
What is the purpose of std::launder?
(3 answers)
Closed 4 years ago.
I am trying to understand what std::launder does, and I hoped that by looking up an example implementation it would be clear.
Where can I find an example implementation of std::launder?
When I looked in libc++ I see code like
template<typename _Tp>
[[nodiscard]] constexpr _Tp*
launder(_Tp* __p) noexcept
{ return __builtin_launder(__p); }
Which makes me think that this is another of those compiler-magic functions.
What is it that this function __builtin_launder can potentially do? Does it simply add a tag to suppress compiler warnings about aliasing?
Is it possible to understand std::launder in terms of __builtin_launder, or is it just more compiler magic (hooks)?
The purpose of std::launder is not to "suppress warnings" but to remove assumptions that the C++ compiler may have.
Aliasing warnings are trying to inform you that you are possibly doing things whose behaviour is not defined by the C++ standard.
The compiler can and does make assumptions that your code is only doing things defined by the standard. For example, it can assume that a pointer to a const value once constructed will not be changed.
Compilers may use that assumption to skip refetching the value from memory (and store it in a register), or even calculate its value at compile time and do dead-code elimination based on it. It can assume this, because any program where it is false is doing undefined behaviour, so any program behaviour is accepted under the C++ standard.
std::launder was crafted to permit you to do things like take a pointer to a truly const value that was legally modified (by creating a new object in its storage, say) and use that pointer after the modification in a defined way, so that it refers to the new object, plus other specific and similar situations (don't assume it just "removes aliasing problems"). __builtin_launder is going to be a "noop" in one sense, but in another sense it changes what kind of assembly code can be generated around it. With it, certain assumptions about what value can be reached from its input cannot be made about its output. And some code that would be UB on the input pointer is not UB on the output pointer.
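As a concrete illustration, here is a minimal sketch of the classic scenario under C++17 rules (the type Wrapper is just for illustration): a new object is created in the storage of an object with a const member, and std::launder is what lets you reach it through the old pointer.

#include <new>  // std::launder, placement new

struct Wrapper { const int value; };

int main()
{
    Wrapper w{1};
    Wrapper* p = new (&w) Wrapper{2};  // reuse w's storage for a new object

    // Reading w.value directly here would be UB under C++17: because the
    // member is const, the old name/pointer does not refer to the new object.
    int a = std::launder(&w)->value;   // OK: refers to the new object, a == 2
    int b = p->value;                  // also OK: p already points to it
    return a + b;
}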
It is an expert tool. I, personally, wouldn't use it without doing a lot of standard delving and double checking that I wasn't using it wrong. It was added because there were certain operations someone proved there was no way to reasonably do in a standard compliant way, and it permits a library writer to do it efficiently now.
Related
As far as I can tell, in c++ references are implemented as constant pointers.
Say you have y, which is a reference to variable x.
Why would it not be more performant and efficient (especially when passing variables to functions) to either:
A. replace every mention of y with x as a pre-processing stage, or
B. have both x and y refer to the same memory address in the compiler's symbol table?
As far as I can tell, in c++ references are implemented as constant pointers.
Might be correct, might not be. I actually don't know for sure. In any case, that's an implementation detail. What matters is how the C++ standard specifies references, and it does not say that they must be implemented as constant pointers.
why would it not be more performant and efficient...
When it is possible and more performant the compiler will do that. The so-called as-if rule allows the compiler to perform any optimization that does not change observable behavior of the program in accordance with the C++ standard. The standard does not specify how references are implemented in detail. It is up to the compiler to implement them in the most efficient way.
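For instance (a sketch; the exact output depends on the compiler), a typical optimizer compiles the following down to returning a constant, with no pointer ever materialized for the reference:

int square(const int& x)
{
    return x * x;  // may or may not involve an actual pointer at runtime
}

int main()
{
    int v = 6;
    int& r = v;        // r is just another name for v
    r += 1;            // modifies v through the reference
    return square(r);  // under the as-if rule this typically folds to "return 49"
}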
Undefined behaviour in C++ can be really hard to debug. Is there a version of C++ and standard library which does not contain any undefined behaviour but rather throws exceptions? I understand that this will be a performance killer, but I only intend to use this version when I am programming, debugging and compiling in debug mode and don't really care about performance. Ideally this version would be portable and you would be able to easily switch on/off the undefined behaviour checks.
For example, you could implement a safe pointer class like so (only check for null pointer, not actually if it points to a valid block of memory):
#include <cassert>

template <typename T>
class MySafePointer {
    T* value;

public:
    auto operator->() {
#ifdef DEBUG_MODE
        assert(value && "Trying to dereference a null pointer");
#endif
        return value;
    }

    /* Other stuff */
};
Here the user only needs to #undef DEBUG_MODE to get the performance back.
Is there a library / safe version of C++ which does this?
EDIT: Changed the code above so that it actually makes more sense and doesn't throw an exception but asserts value is non-null. The question is simply a matter of having a descriptive error message vs a crash...
Is there a version of C++ and standard library which does not contain any undefined behaviour but rather throws exceptions?
No, there is not. As mentioned in a comment, there are Address Sanitizer and Undefined Behavior Sanitizer and many other tools you can use to hunt for bugs, but there is no "C++ without undefined behavior" implementation.
If you want an inherently safe language, choose one. C++ isn't it.
Undefined behavior
Undefined behavior means that your program has ended up in a state the behavior of which is not defined by the standard.
So what you're really asking is whether there's a language whose standard defines every possible scenario.
And I can't think of one, for the simple reason that programs are run by machines, but programming languages and standards are written by humans.
Is it always unintentional?
Per the reason explained above, the standard can have unintentional "holes", i.e. undefined behavior that was not intentionally allowed, and maybe not even noticed during standardization.
However, as all the "is undefined behavior" sentences in the standard prove, many times UB is intentionally allowed.
But why? Because that means giving less guarantees to the programmer, with the benefit of being able to make more optimizations or, equivalently, to not waste time verifying that the user is sticking to a defined contract.
So, even if the standard had no holes, there would still be a lot of cases where the standard states that UB happens, because compilers can take advantage of it to make all sorts of optimizations.²
The impact of preventing it in some trivial case
One trivial case of undefined behavior is when you access an out-of-bound element of a std::vector via operator[]. Exactly like for C-style arrays, v[i] basically gives you back *(v_ + i), where v_ is the pointer wrapped into v. This is fast and not safe.¹
What if you want to access the ith element safely? You would have to change the implementation of std::vector<>::operator[].
So what would the impact be of supporting the DEBUG_MODE flag? Essentially you would have to write two implementations separated by #ifdef/(#else/)#endif. Obviously the two implementations can have a lot in common, so you could #-branch several times in the code. But... yeah, my bottom line is that your request can be fulfilled only by changing the standard in such a way that it forces implementers to support two different implementations (safe and slow / unsafe and fast) for everything.
By the way, for this specific case, the standard does define another function, at, which is required to handle the out-of-bounds case. But that's the point: it's another function.
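A quick sketch of the difference (the v[10] line is commented out, because executing it would be undefined behaviour):

#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};

    // Fast and unsafe: no bounds check, out-of-bounds access is UB.
    // int bad = v[10];

    // Defined alternative: at() checks the index and throws on failure.
    try {
        int ok = v.at(10);
        (void)ok;
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}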
The answer by Nicol Bolas mentioned in note ² below opens with a hypothetical:

Hypothetically, we could rip all undefined behavior out of C++ or even C. We could have everything be a priori well-defined and remove anything from the language whose evaluation could not be definitely determinable from first principles.

which makes me feel nervous about the answer I've given here.
(¹) This and other examples of UB are listed in this excellent article; search for Out of Bounds for the example I made.
(²) I really recommend reading this answer by Nicol Bolas about UB being absent in constexprs.
Is there a safe version of c++ without undefined behaviour?
No.
For example, you could implement a safe pointer class like so
How is throwing an exception safer than just crashing? You're still trying to find the bug so you can fix it statically, right?
What you wrote allows your buggy program to keep running (unless it just calls terminate, in which case you did some work for no result at all), but that doesn't make it correct, and it hides the error rather than helping you fix it.
Is there a library / safe version of C++ which does this?
Undefined behaviour is only one type of error, and it isn't always wrong. Deliberate use of non-portable platform features may also be undefined by the standard.
Anyway, let's say you catch every uninitialized value and null pointer and signed integer overflow - your program can still produce the wrong result.
If you write code that can't produce the wrong result, it won't have UB either.
I am aware that after using std::move the variable is still valid, but in an unspecified state.
Unfortunately, recently I have come across several bugs in our code base where a function was accessing the moved-from variable, and weird things were happening. These issues were extremely hard to track down.
Is there any compiler option (in clang) or any way to throw an error either during runtime or compilation?
Some things that may help:
Use a static analyzer. Xcode has it built-in.
https://clang-analyzer.llvm.org/
Use Address Sanitizer and Undefined Behaviour Sanitizer
http://clang.llvm.org/docs/AddressSanitizer.html
https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
Code changes that can make such bugs easy to track down:
I'm assuming that if you're using std::move on something, it is probably (though not always) a heavy container.
If so, try to use std::unique_ptr<T> to create it. Functions that take ownership must be passed the pointer with an explicit std::move, which is easy to spot. Other, non-owning access functions can just work with .get(). You can also check for null and throw if it's nullptr at any point where you need to access it (see the sketch below).
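A minimal sketch of what I mean (consume and Heavy are made-up names):

#include <memory>
#include <utility>
#include <vector>

using Heavy = std::vector<int>;  // stand-in for a heavy container

// Hypothetical sink function: ownership transfer is visible at the call site.
void consume(std::unique_ptr<Heavy> p) { /* takes ownership of *p */ }

int main()
{
    auto data = std::make_unique<Heavy>(1000, 0);
    consume(std::move(data));  // the explicit std::move is easy to grep for

    // A moved-from unique_ptr is guaranteed to be null, so later access
    // points can guard themselves instead of silently reading stale state:
    if (!data) {
        // data was moved away; dereferencing it here would be a bug,
        // so throw or assert instead of invoking UB
    }
    return 0;
}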
I am aware that after using std::move the variable is still valid, but in an unspecified state.
This is not a universal truth. More generally, the object that was moved from is in whatever state in which the move constructor / assignment operator left it.
The standard library does guarantee at least what you describe. But it is also possible to implement a member function for your own class which doesn't abide by that guarantee and leaves the moved-from object in an invalid state. It is, however, a good design choice to implement move operations in the way you describe.
How to check if variable is still valid or std::move was used on it?
There is no way to do such check in general within the language.
Is there any compiler option (in clang) or any way to throw an error either during runtime or compilation?
Note that using a variable after a move is not necessarily a bug at all, but can instead be an entirely correct thing to do. Some types specify exactly the state of the moved from object (std::unique_ptr for example) and others which have the validity guarantee can be used in ways that have no pre-conditions (such as calling container.size()).
As such, using a moved from object is only a problem if that violates a pre-condition, which would result in undefined behaviour. Clang and other compilers have runtime sanitisers that may be able to catch some undefined behaviour. There are also many warning options and static analysers that diagnose cases where bugs are likely.
Using them is a very good idea, but you should not rely solely on them because they won't be able to find all bugs. The programmer still needs to be careful when writing the program, and needs to compare it with the rules of the language. Following common idioms such as RAII, avoiding bare owning pointers (and other resource handles) goes a long way in avoiding typical bugs.
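To make the distinction concrete, here is a small sketch of both categories (specified state vs. valid-but-unspecified state):

#include <cassert>
#include <memory>
#include <string>
#include <utility>

int main()
{
    auto p = std::make_unique<int>(5);
    auto q = std::move(p);
    assert(p == nullptr);  // specified: a moved-from unique_ptr is null

    std::string s = "hello";
    std::string t = std::move(s);
    // s is valid but in an unspecified state; size() has no pre-conditions,
    // so calling it is fine, but relying on a particular result is a bug.
    auto n = s.size();
    (void)n;
    return 0;
}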
I was hoping that someone could clarify exactly what is meant by undefined behaviour in C++. Given the following class definition:
class Foo
{
public:
    explicit Foo(int Value) : m_Int(Value) { }
    void SetValue(int Value) { m_Int = Value; }

private:
    Foo(const Foo& rhs);
    const Foo& operator=(const Foo& rhs);

private:
    int m_Int;
};
If I've understood correctly, the two const_casts to both a reference and a pointer in the following code will remove the const-ness of the original object of type Foo, but any attempt to modify this object through either the pointer or the reference will result in undefined behaviour.
int main()
{
    const Foo MyConstFoo(0);
    Foo& rFoo = const_cast<Foo&>(MyConstFoo);
    Foo* pFoo = const_cast<Foo*>(&MyConstFoo);

    //MyConstFoo.SetValue(1); // Error as MyConstFoo is const
    rFoo.SetValue(2);         // Undefined behaviour
    pFoo->SetValue(3);        // Undefined behaviour

    return 0;
}
What is puzzling me is why this appears to work and will modify the original const object, but doesn't even prompt me with a warning to notify me that this behaviour is undefined. I know that const_casts are, broadly speaking, frowned upon, but I can imagine a case where lack of awareness that a C-style cast can result in a const_cast could go unnoticed, for example:
Foo& rAnotherFoo = (Foo&)MyConstFoo;
Foo* pAnotherFoo = (Foo*)&MyConstFoo;
rAnotherFoo.SetValue(4);  // rAnotherFoo is a reference, so '.' rather than '->'
pAnotherFoo->SetValue(5);
In what circumstances might this behaviour cause a fatal runtime error? Is there some compiler setting that I can set to warn me of this (potentially) dangerous behaviour?
NB: I use MSVC2008.
I was hoping that someone could clarify exactly what is meant by undefined behaviour in C++.
Technically, "Undefined Behaviour" means that the language defines no semantics for doing such a thing.
In practice, this usually means "don't do it; it can break when your compiler performs optimisations, or for other reasons".
What is puzzling me is why this appears to work and will modify the original const object but doesn't even prompt me with a warning to notify me that this behaviour is undefined.
In this specific example, attempting to modify any non-mutable object may "appear to work", or it may overwrite memory that doesn't belong to the program or that belongs to [part of] some other object, because the non-mutable object might have been optimised away at compile-time, or it may exist in some read-only data segment in memory.
The factors that may lead to these things happening are simply too complex to list. Consider the case of dereferencing an uninitialised pointer (also UB): the "object" you're then working with will have some arbitrary memory address that depends on whatever value happened to be in memory at the pointer's location; that "value" is potentially dependent on previous program invocations, previous work in the same program, storage of user-provided input etc. It's simply not feasible to try to rationalise the possible outcomes of invoking Undefined Behaviour so, again, we usually don't bother and instead just say "don't do it".
What is puzzling me is why this appears to work and will modify the original const object but doesn't even prompt me with a warning to notify me that this behaviour is undefined.
As a further complication, compilers are not required to diagnose (emit warnings/errors for) Undefined Behaviour, because code that invokes Undefined Behaviour is not the same as code that is ill-formed (i.e. explicitly illegal). In many cases it's not even tractable for the compiler to detect UB, so this is an area where it is the programmer's responsibility to write the code properly.
The type system — including the existence and semantics of the const keyword — presents basic protection against writing code that will break; a C++ programmer should always remain aware that subverting this system — e.g. by hacking away constness — is done at your own risk, and is generally A Bad Idea.™
I can imagine a case where lack of awareness that a C-style cast can result in a const_cast could go unnoticed.
Absolutely. With warning levels set high enough, a sane compiler may choose to warn you about this, but it doesn't have to and it may not. In general, this is a good reason why C-style casts are frowned upon, but they are still supported for backwards compatibility with C. It's just one of those unfortunate things.
Undefined behaviour depends on how the object was born; you can see Stephan explaining it at around 00:10:00, but essentially, follow the code below:
void f(int const& arg)
{
    int& danger(const_cast<int&>(arg));
    danger = 23; // When is this UB?
}
Now there are two cases for calling f:

int K(1);
f(K);  // OK

const int AK(1);
f(AK); // the write inside f triggers undefined behaviour

To sum up, K was born non-const, so the cast is OK when calling f, whereas AK was born const, so... UB it is.
Undefined behaviour literally means just that: behaviour which is not defined by the language standard. It typically occurs in situations where the code is doing something wrong, but the error can't be detected by the compiler. The only way to catch the error would be to introduce a run-time test - which would hurt performance. So instead, the language specification tells you that you mustn't do certain things and, if you do, then anything could happen.
In the case of writing to a constant object, using const_cast to subvert the compile-time checks, there are three likely scenarios:
it is treated just like a non-constant object, and writing to it modifies it;
it is placed in write-protected memory, and writing to it causes a protection fault;
it is replaced (during optimisation) by constant values embedded in the compiled code, so after writing to it, it will still have its initial value.
In your test, you ended up in the first scenario - the object was (almost certainly) created on the stack, which is not write protected. You may find that you get the second scenario if the object is static, and the third if you enable more optimisation.
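A minimal sketch of how the first two scenarios might play out (the exact outcome is compiler- and platform-dependent, so treat the comments as typical behaviour, not guarantees):

const int global_const = 1;  // typically placed in a read-only segment

int main()
{
    const int local_const = 1;             // lives on the (writable) stack
    *const_cast<int*>(&local_const) = 2;   // UB: will often "appear to work"
    *const_cast<int*>(&global_const) = 2;  // UB: likely a protection fault
    return 0;
}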
In general, the compiler can't diagnose this error - there is no way to tell (except in very simple examples like yours) whether the target of a reference or pointer is constant or not. It's up to you to make sure that you only use const_cast when you can guarantee that it's safe - either when the object isn't constant, or when you're not actually going to modify it anyway.
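For the second safe case, a sketch (legacy_strlen is a hypothetical C function that takes a non-const pointer but never writes through it):

// Hypothetical legacy C function: declared with a non-const parameter,
// but documented never to modify the string.
extern "C" int legacy_strlen(char* s);

int measure(const char* s)
{
    // Defined behaviour: const is cast away, but the object is not modified.
    return legacy_strlen(const_cast<char*>(s));
}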
What is puzzling me is why this appears to work
That is what undefined behavior means.
It can do anything including appear to work.
If you increase your optimization level to its top value it will probably stop working.
but doesn't even prompt me with a warning to notify me that this behaviour is undefined.
At the point where it does the modification, the object is not const. In the general case the compiler cannot tell that the object was originally const, therefore it is not possible to warn you. Even if it could, each statement is evaluated on its own, without reference to the others (when this kind of warning is generated).
Secondly, by using a cast you are telling the compiler: "I know what I am doing; override all your safety features and just do it".
For example the following works just fine (or will seem to, in the nasal-demon sort of way):

float aFloat;
int& anIntRef = (int&)aFloat; // I know what I am doing; ignore that this makes no sense
int* anIntPtr = (int*)&aFloat;
anIntRef = 12;
*anIntPtr = 13;
I know that const_casts are, broadly speaking, frowned upon
That is the wrong way to look at them. They are a way of documenting in the code that you are doing something strange that needs to be validated by smart people (the compiler will obey the cast without question). The reason you need a smart person to validate it is that it can lead to undefined behaviour; the good thing is that you have now explicitly documented this in your code (and people will definitely look closely at what you have done).
but I can imagine a case where lack of awareness that C-style cast can result in a const_cast being made could occur without being noticed, for example:
In C++ there is no need to use a C-style cast.
In the worst case a C-style cast can be replaced by reinterpret_cast<>, but when porting code you want to see whether you could have used static_cast<>. The point of the C++ casts is to make them stand out, so you can see them and, at a glance, spot the difference between the dangerous casts and the benign ones.
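A short sketch of how the named casts make intent visible at a glance:

void cast_examples()
{
    double d = 3.14;
    int i = static_cast<int>(d);          // benign: well-defined value conversion
    (void)i;

    const int c = 0;
    int* p = const_cast<int*>(&c);        // stands out: writing *p would be UB
    (void)p;

    float f = 1.0f;
    int* q = reinterpret_cast<int*>(&f);  // stands out: reading *q breaks aliasing
    (void)q;
}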
A classic example would be trying to modify a const string literal, which may exist in a protected data segment.
Compilers may place const data in read-only parts of memory for optimization reasons, and attempting to modify this data will result in UB.
Static and const data are often stored in a different part of your program than local variables. For const variables, these areas are often marked read-only to enforce the constness of the variables. Attempting to write to read-only memory results in "undefined behaviour" because the reaction depends on your operating system. "Undefined behaviour" means that the language doesn't specify how this case is to be handled.
If you want a more detailed explanation of memory layout, I suggest you read this. It's an explanation based on UNIX, but similar mechanisms are used on all OSes.
I've looked at the standard but couldn't find any indication that simply writing to memory would be considered observable behaviour. If not, that would mean the compiled code need not actually write to that memory. If a compiler chooses to optimize away such accesses, anything involving mapped memory, or shared memory, may not work.
1.9/8 seems to define a very limited observable behaviour but indicates an implementation may define more. Can one assume that any quality compiler would treat modifying memory as observable behaviour? That is, it may not guarantee atomicity or ordering, but does guarantee that data will eventually be written.
So, have I overlooked something in the standard, or is the writing to memory merely something the compiler decides to do?
Statements from the current or C++0x standard are good. Please note I'm not talking about accessing memory through a function; I mean direct access, such as writing data through a pointer (perhaps retrieved via mmap or another library function).
This kind of thing is what volatile exists for. Otherwise, writing to memory and never apparently reading it back is not observable behaviour. However, in the general case, it would be quite impossible for the optimizer to prove that you never read it back except in relatively trivial examples, so it's not usually an issue.
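The classic case is a memory-mapped register (a sketch; the address is made up):

#include <cstdint>

// Hypothetical memory-mapped status register at a made-up address.
volatile std::uint32_t* const status =
    reinterpret_cast<volatile std::uint32_t*>(0x40000000u);

void wait_until_ready()
{
    // Each iteration performs a real load because the object is volatile.
    // Without volatile, the compiler could hoist the load out of the loop
    // and spin forever on a cached value.
    while ((*status & 0x1u) == 0) {
    }
}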
Can one assume that any quality compiler would treat modifying memory as an observable behaviour?
No. volatile is meant for marking that. However, you cannot fully trust the compiler even after adding the volatile qualifier, at least according to a 2008 paper: http://www.cs.utah.edu/~regehr/papers/emsoft08-preprint.pdf
EDIT:
From the C standard (not C++): http://c0x.coding-guidelines.com/5.1.2.3.html
An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no needed side effects are produced (including any caused by calling a function or accessing a volatile object).
My reading of C99 is that unless you specify volatile, how and when the variable is actually accessed is implementation-defined. If you specify the volatile qualifier, then the code must work according to the rules of the abstract machine.
Relevant parts in the standard are: 6.7.3 Type qualifiers (volatile description) and 5.1.2.3 Program execution (the abstract machine definition).
For some time now I've known that many compilers actually have heuristics to detect when a variable should be re-read and when it is okay to use a cached copy. volatile makes it clear to the compiler that every access to the variable must actually access memory. Without volatile, the compiler seems free to never re-read the variable.
And BTW, wrapping the access in a function doesn't change that, since a function, even without inline, might still be inlined by the compiler within the current compilation unit.
From your question below:
Assume I use an array on the heap (unspecified where it is allocated), and I use that array to perform a calculation (temp space). The optimizer sees that it doesn't actually need any of that space as it can use strictly registers. Does the compiler nonetheless write the temp values to the memory?
Per MSalters below:
It's not guaranteed, and unlikely. Consider a Static Single Assignment (SSA) optimizer. This figures out each possible write/read dependency and then assigns registers to optimize those dependencies. As a side effect, any write that's not followed by a (possible) read creates no dependencies at all, and is eliminated. In your example ("use strictly registers") the optimizer has satisfied all write/read dependencies with registers, so it won't write to memory at all. All reads produce the correct values, so it's a correct optimization.
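A tiny illustration of that effect (a sketch; any mainstream optimizer at -O1 or higher typically behaves this way):

int compute()
{
    int temp[4];  // scratch space that is never read back
    temp[0] = 1;  // dead store: no read depends on it...
    temp[1] = 2;  // ...so an SSA-based optimizer eliminates both writes
    return 3;     // the emitted code is typically just "return 3"
}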