I know that the meaning of the auto keyword changed completely in C++11. But recently I wrote the following simple program, which compiles and runs fine with the -std=c++98 option.
#include <iostream>

void fun(auto int a)
{
    a = 3;
    std::cout << a << '\n';
}

int main()
{
    fun(3);
}
The Orwell Dev-C++ IDE gives me a warning like the following:
[Warning] 'auto' changes meaning in C++11; please remove it [-Wc++0x-compat]
So, is it fine to use auto for function parameters or should I never use auto like this as in above program to maintain compatibility with C++11?
Until C++11, the auto keyword was a "storage class specifier"; with C++11 it becomes a type-deduction specifier.
To answer your question: depending on the C++ standard you use to compile your code, adjust the use of the auto keyword accordingly. It's not portable across the pre/post-C++11 boundary of the standard.
So, is it fine to use auto for function parameters or should I never use auto like this as in above program to maintain compatibility with C++11?
That depends on what you mean by "fine":
If you mean "will it compile?" then YES.
If you mean "is it a good practice?", the answer is that it is not a practice at all; it was possible to do so and the code was perfectly valid (before C++11), but that time has passed, and I do not know of anyone who did this (not even for tricky interview questions).
In conclusion, don't do it.
auto prior to C++11 was a storage class specifier, like register or static or extern.
It, however, was the default storage class specifier. Valid C++03 code with it removed would have the same meaning, which is why C++11 felt comfortable stealing the keyword.
In short:
void fun1(auto int a) {
    std::cout << a << '\n';
}

void fun2(int a) {
    std::cout << a << '\n';
}
have the same meaning in C++03. In C++11, fun1 is ill-formed.
Simply remove it from all of your pre-C++11 codebases. If the code was valid C++03, it will continue to have the same meaning.
There is a (very small) problem: some compilers might implement the K&R-era C rule "by default, a type is int". That is, they might consider auto x; to mean auto int x;. This was not valid C++03, however. Compiling with sufficiently strict flags in C++03 mode should generate errors around that (ab)use of auto.
As an aside, C++11 also introduces a new storage class specifier, thread_local. It steals auto for the use of auto-typed variables, and auto is no longer a storage class specifier.
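To make the shift concrete, here is a small sketch showing both C++11 features side by side: thread_local as the new storage class specifier, and auto in its new type-deduction role (the counter and deduce names are purely illustrative):

```cpp
#include <cassert>
#include <type_traits>

// C++11's new storage class specifier, filling the niche auto vacated.
thread_local int counter = 0;

int deduce() {
    auto x = 40 + 2;  // C++11: auto deduces the type of x from its initializer
    static_assert(std::is_same<decltype(x), int>::value, "x is deduced as int");
    return x;
}
```

Each thread gets its own copy of counter, while x is a plain int deduced at compile time.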
Take note of the following C++ code:
#include <iostream>
using std::cout;

int foo (const int);

int main ()
{
    cout << foo(3);
}

int foo (int a)
{
    a++;
    return a;
}
Notice that the prototype of foo() takes a const int and that the definition takes an int. This compiles without any errors...
Why are there no compilation errors?
Because it doesn't matter to the caller of the foo function whether foo modifies its copy of the variable or not.
Specifically in the C++03 standard, the following 2 snippets explain exactly why:
C++03 Section: 13.2-1
Two function declarations of the same name refer to the same function if they are in the same scope and
have equivalent parameter declarations (13.1).
C++03 Section: 13.1-3
Parameter declarations that differ only in the presence or absence of const and/or volatile are equivalent. Only the const and volatile type-specifiers at the outermost level of the parameter type specification are ignored in this fashion; const and volatile type-specifiers buried within a parameter type specification are significant and can be used to distinguish overloaded function declarations.
Top-level const (i.e., that applies to the value that's passed, not something to which it points or refers) affects only the implementation, not the interface, of a function. The compiler ignores it from the interface viewpoint (i.e., the calling side) and enforces it only on the implementation (i.e., code in the body of the function).
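A minimal sketch illustrating both halves of that rule (the f and g names are illustrative):

```cpp
#include <cassert>

// Top-level const on a by-value parameter is ignored for matching:
// these two lines declare the SAME function.
int f(const int);
int f(int x) { return x + 1; }

// const buried inside the parameter type is significant: these are
// two distinct overloads.
int g(int*)       { return 1; }  // pointer to mutable int
int g(const int*) { return 2; }  // pointer to const int
```

Calling f works through either declaration, while the two g overloads are selected by the const-ness of the pointee.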
As others have explained, the Standard says it's ok, and that the compiler can afford to be lenient about enforcing this because it doesn't affect the caller, but nobody's answered why the compiler should choose to be lenient. It's not particularly lenient in general, and a programmer who's just been looking at the interface then dives into the implementation may have it in the back of their mind that a parameter is const when it's not or vice versa - not a good thing.
This leniency allows the implementation to change without modifying headers, which, with traditional make tools, would trigger recompilation of client code. This can be a serious issue in enterprise-scale development, where a non-substantive change in a low-level header (e.g. logging) can force rebuilding of virtually all objects between it and the applications... wasting thousands of hours of CPU time and delaying everyone and everything waiting on the builds.
So, it's ugly, but a practical concession.
I've also answered another similar question which looks at why overloading of f(const T) and f(T) isn't allowed - may be of interest to anyone reading this - Top-level const doesn't influence a function signature
However, if the parameter is declared const in the definition itself:

int foo (const int a)
{
    a++;
    return a;
}

That will throw an error during compilation, because a is const within the function body.
Introduction
With the C++14 (a.k.a. C++1y) Standard in a state close to final, programmers must ask themselves about backwards compatibility and issues related to it.
The question
In the answers of this question it is stated that the Standard has an Appendix dedicated to information regarding changes between revisions.
It would be helpful if the potential issues in the previously mentioned Appendix could be explained, perhaps with the help of any formal documents related to what is mentioned there.
According to the Standard: What changes introduced in C++14 can potentially break a program written in C++11?
Note: In this post I consider a "breaking change" to be either, or both, of:
1. a change that will make legal C++11 ill-formed when compiled as C++14, and;
2. a change that will change the runtime behavior when compiled as C++14, vs C++11.
C++11 vs C++14, what does the Standard say?
The Standard draft (n3797) has a section dedicated for just this kind of information, where it describes the (potentially breaking) differences between one revision of the standard, and another.
This post has used that section, [diff.cpp11], as a base for a semi-elaborate discussion regarding the changes that could affect code written for C++11, but compiled as C++14.
C.3.1] Digit Separators
The digit separator was introduced so that one could write numeric literals in a more readable manner, splitting them up in a way that feels more natural.
int x = 10000000; // (1)
int y = 10'000'000; // (2), C++14
It's easy to see that (2) is much easier to read than (1) in the above snippet, while both initializers have the same value.
The potential issue regarding this feature is that in C++11 the single-quote always denoted the start/end of a character literal, whereas in C++14 a single-quote can either surround a character literal or serve as a digit separator, as in (2).
Example Snippet, legal in both C++11 and C++14, but with different behavior.
#define M(x, ...) __VA_ARGS__
int a[] = { M(1'2, 3'4, 5) };
// int a[] = { 5 }; <-- C++11
// int a[] = { 3'4, 5 }; <-- C++14
// ^-- semantically equivalent to `{ 34, 5 }`
( Note: More information regarding single-quotes as digit separators can be found in n3781.pdf )
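The equivalence of the two spellings can be checked directly (the names plain and separated are illustrative):

```cpp
#include <cassert>

// The separator is purely lexical; both spellings denote the same value.
constexpr int plain     = 10000000;
constexpr int separated = 10'000'000;  // C++14 digit separator
static_assert(plain == separated, "identical values");
```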
C.3.2] Sized Deallocation
C++14 introduces the opportunity to declare a global overload of operator delete suitable for sized deallocation, something which wasn't possible in C++11.
However, the Standard also mandates that a developer cannot declare just one of the two related functions below; one must declare either none or both, as stated in [new.delete.single]p11.
void operator delete (void*) noexcept;
void operator delete (void*, std::size_t) noexcept; // sized deallocation
Further information regarding the potential problem:
Existing programs that redefine the global unsized version do not also
define the sized version. When an implementation introduces a sized
version, the replacement would be incomplete and it is likely that
programs would call the implementation-provided sized deallocator on
objects allocated with the programmer-provided allocator.
Note: Quote taken from n3536 - C++ Sized Deallocation
( Note: More of interest is available in the paper titled n3536 - C++ Sized Deallocation, written by Lawrence Crowl )
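A minimal sketch of a conforming replacement pair; the malloc-based allocator here is purely for illustration, not a recommended implementation:

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// If you replace the unsized global operator delete, provide the sized
// form as well, so the implementation's sized deallocator is never
// paired with your replacement allocator.
void* operator new(std::size_t n) {
    if (void* p = std::malloc(n ? n : 1))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept {
    std::free(p);
}

void operator delete(void* p, std::size_t) noexcept {
    // The object's size is available here; this sketch simply
    // forwards to the unsized form.
    operator delete(p);
}
```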
C.3.3] constexpr member-functions, no longer implicitly const
There are many changes to constexpr in C++14, but the only one that changes semantics between C++11 and C++14 is the implicit const-ness of a member function marked constexpr.
The rationale behind this change is to allow constexpr member functions to mutate the object to which they belong, something permitted by C++14's relaxation of constexpr.
struct A { constexpr int func (); };
// struct A { constexpr int func () const; }; <-- C++11
// struct A { constexpr int func (); }; <-- C++14
Recommended material on this change, and why it is important enough to introduce potential code-breakage:
Andrzej's C++ blog - “constexpr” function is not “const”
open-std.org - constexpr member functions and implicit const
(open-std.org - Relaxing constraints on constexpr functions)
Example snippet, legal in both C++11 and C++14, but with different behavior
struct Obj {
constexpr int func (int) {
return 1;
}
constexpr int func (float) const {
return 2;
}
};
Obj const a = {};
int const x = a.func (123);
// int const x = 1; <-- C++11
// int const x = 2; <-- C++14
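A small C++14-only sketch of what the relaxation permits; the Counter type and count_to function are hypothetical examples:

```cpp
#include <cassert>

// C++14: a constexpr member function may modify *this, because it is
// no longer implicitly const.
struct Counter {
    int n = 0;
    constexpr void bump() { ++n; }  // ill-formed in C++11
};

constexpr int count_to(int k) {
    Counter c;                      // mutable local in a constexpr context
    for (int i = 0; i < k; ++i)     // loops in constexpr are also C++14
        c.bump();
    return c.n;
}

static_assert(count_to(3) == 3, "evaluated entirely at compile time");
```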
C.3.4] Removal of std::gets
std::gets has been removed from the Standard Library because it is considered dangerous.
The implication of this is, of course, that trying to compile code written for C++11 as C++14 will most likely fail wherever such a function is used.
( Note: there are ways of writing code that doesn't fail to compile, and have different behavior, that depends on the removal of std::gets from the Standard Library )
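A sketch of one common replacement for the removed function, reading into a std::string via std::getline (the read_line helper is illustrative):

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// std::gets had no way to bound the read, so it could overflow its
// destination buffer. std::getline reads into a std::string that
// grows as needed, which is why the former was removed.
std::string read_line(std::istream& in) {
    std::string line;
    std::getline(in, line);
    return line;
}
```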
Is return type deduction allowed for member functions in C++14, or only for free functions?
I ask because I sort of implicitly assumed it would work, but in gcc 4.8.1 I get an internal compiler error ("in gen_type_die_with_usage"). It's the first time I have ever gotten such a cryptic error, so I am a bit skeptical; and I know they have changed the spec since then.
For clarity this works for me:
auto foo() {return 5;}
but this doesn't:
class Bar {
    auto baz() { return 5; }
};
Is this allowed in the draft standard?
Yes, the standard should allow it, according to the paper N3582. Here is an example from the paper.
Allowing non-defining function declarations with auto return type is not strictly necessary, but it is useful for coding styles that prefer to define member functions outside the class:
struct A {
auto f(); // forward declaration
};
auto A::f() { return 42; }
and if we allow it in that situation, it should be valid in other situations as well. Allowing it is also the more orthogonal choice; in general, I believe that if combining two features can work, it should work.
According to the comment by @bamboon, "Return type deduction is only supported as of gcc 4.9," so that would explain why you don't have it.
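Putting the paper's example into a complete, compilable form (requires C++14):

```cpp
#include <cassert>

// C++14: return type deduction works for member functions too.
struct A {
    auto f();  // declaration with a deduced, not-yet-known return type
};

auto A::f() { return 42; }  // the return type is deduced here, at the definition
```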
Can I declare ObjC block with auto?
auto fun = ^(int x) { NSLog(@"%d", x); }
fun(5);
I cannot work out valid syntax for that.
You are missing a ; after the declaration of fun. Otherwise, you got the syntax right, and Clang will accept that in -std=c++11 -fblocks mode, for C++ or Objective-C++ input. (Note that blocks are actually an orthogonal extension which is not part of Objective-C.)
I don't think the auto keyword from C++/Objective-C++ is used in Objective-C.
As for declaring a block variable, for your example the following will work in Objective-C:
void (^fun)(int x) = ^(int x) {
    NSLog(@"%d", x);
};
fun(5);
For more declaration options on blocks, there's a very good answer here
The auto keyword is a C++11 keyword. Objective-C is a superset of C, not C++, and therefore inherits the properties of C, not C++. As for Objective-C++, I do not believe that Clang is up to date on all of the new C++11 features, especially in the compiler that builds Objective-C++. Hope this helps!
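For readers coming from C++: the closest standard-C++11 analogue of the block above is a lambda stored in an auto variable (this is plain C++, not an Objective-C block; the fun name and doubling body are illustrative):

```cpp
#include <cassert>

// A C++11 lambda plays the role the block plays in Objective-C:
// auto deduces the unnamed closure type for us.
auto fun = [](int x) { return x * 2; };
```

Unlike a block, a lambda's type cannot be spelled out by hand, so auto (or std::function) is the natural way to store it.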