What does the term 'equivalent' mean in the C++ standard?

According to the draft of the C++11 standard, page 421, Table 23 (CopyAssignable), the post-condition of an expression t = v of a CopyAssignable type is
t is equivalent to v, the value of v is unchanged
But I'm not sure what the term 'equivalent' means here. Does it mean t == v? Or something like all bytes being 'deeply' equal, in the sense of a deep copy?

As far as I can tell, there is no separate definition of the term "equivalent" that would apply in the standard. It may be interpreted as plain English. Here are a few definitions from dictionaries:
corresponding or virtually identical especially in effect or function
equal to or having the same effect as something
something that is the same amount, price, size, etc. as something else or has the same purpose as something else
Another interpretation is that your quote is the definition of equivalent in this context, i.e. the meaning of equivalence for the type in question is defined by the assignment operator of that type.
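To make that concrete, here is a minimal sketch with a made-up Widget type (the name and members are purely illustrative): for a plain value type like this, "t is equivalent to v" after t = v boils down to member-wise equality, and v is left untouched.

#include <cassert>
#include <string>

// Hypothetical type, used only for illustration. It is CopyAssignable
// because the implicitly generated copy-assignment operator copies
// every member.
struct Widget {
    int id;
    std::string name;
};

bool operator==(const Widget& a, const Widget& b) {
    return a.id == b.id && a.name == b.name;
}

int main() {
    Widget v{42, "answer"};
    Widget t{};

    t = v;           // the CopyAssignable post-condition applies here

    assert(t == v);  // "t is equivalent to v" (here: member-wise equal)
    assert(v.id == 42 && v.name == "answer");  // "the value of v is unchanged"
}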

Related

What happens when we interchange the arrayname and index like index[arrayname] in C++?

What happens when we interchange the array name and index like index[arrayname] in C++? Is arrayname[index] the same as writing index[arrayname]? What will be the value in both?
For builtin types, the definition of E1[E2] "is identical (by definition) to" *((E1) + (E2)). (quotation from [expr.sub]/1) So the answer is simple: interchanging the names has no effect.
For user-defined types, E1 has to be a class type with an overloaded operator[], so, absent some funky stuff, you can't interchange the two expressions.
Interchanging them has no effect, since arrayname[index] is identical to index[arrayname]; both are interpreted as:
*(arrayname + index)
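A quick sanity check (a minimal, self-contained sketch; the names and values are arbitrary):

#include <cassert>

int main() {
    int arrayname[] = {10, 20, 30};
    int index = 1;

    // Both spellings are rewritten to *(arrayname + index),
    // so they designate the same element.
    assert(arrayname[index] == index[arrayname]);
    assert(&arrayname[index] == &index[arrayname]);
}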

C++14 standard Annex A interpretation

What are "superset of valid C++ constructs" from Annex A ?
Also, any guide which will help you read this grammar in Annex A ?
Annex A quote:
This summary of C++ syntax is intended to be an aid to comprehension. It is not an exact statement of the language. In particular, the grammar described here accepts a superset of valid C++ constructs. Disambiguation rules (6.8, 7.1, 10.2) must be applied to distinguish expressions from declarations. Further, access control, ambiguity, and type rules must be used to weed out syntactically valid but meaningless constructs.
Here is one short example that is valid according to the grammar, but not according to the full language rules:
int a[];             // array of unknown bound: accepted by the grammar
struct s;            // forward declaration: s is an incomplete type
void main(foo bar)   // grammar-valid, but 'foo' does not name a type (and main must return int)
{
return (sizeof a) + sizeof (s);  // sizeof applied to incomplete types, and a value returned from a void function
}
The primary issue is that the grammar is expressed using context-free productions, but parsing C++ is highly context-dependent.
If S is a set of elements, a superset is another set X such that each element s in S is also an element of X, but there may be elements x in X that are not elements of S.
As an example, {1,2,3} is a set of 3 numbers. {1,2,3,4} is a superset of the first set -- it contains the elements in {1,2,3}, but also an extra element 4.
So the grammar listed in Annex A will match C++, but will also match things that are not valid C++.
It then goes on to list some issues you have to solve "outside of the grammar" -- the disambiguation rules, access control, ambiguity, and type rules.
The quote implies, lightly, that this is a complete set of things you must consider to distinguish valid C++ from things matched by the grammar, but does not explicitly say so. I am uncertain if this light implication is actually intended or not.
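To make the disambiguation point concrete, here is a minimal sketch of the classic declaration/expression ambiguity (the variable name is arbitrary): the grammar accepts the first statement either as a function-style cast used as an expression statement or as a declaration, and the rule in [stmt.ambig] resolves it in favour of the declaration.

int main() {
    // Grammatically this could be a cast of x to int, discarded; or a
    // declaration of an int named x with redundant parentheses around
    // the declarator. The disambiguation rule says it is a declaration.
    int(x);
    x = 7;          // compiles only because the line above declared x
    return x - 7;   // returns 0
}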

How strict are C/C++ compilers about operator precedence/evaluation? [duplicate]

This question already has answers here:
order of evaluation of || and && in c
This question has been on my mind for a while, so it's time to let it out and see what you have to say about it.
In C/C++, operator precedence is defined by the specification, but as with everything there may be lesser-known behaviours that compilers employ in the name of 'optimization' which could mess up your application in the end.
Take this simple example:
bool CheckStringPtr(const char* textData)
{
return (!textData || textData[0]==(char)0);
}
In this case I test whether the pointer is null, then check whether the first char is zero; essentially this is a test for a zero-length string. Logically the two operations are interchangeable, but if they were swapped it would crash in some cases, since it would be trying to read from an invalid memory address.
So the question is: is there anything that enforces the order in which operators/functions are executed? I know the safest way is to use two nested ifs, but this way should be the same, assuming that the evaluation order of the operators never changes.
Are compilers forced by the C/C++ specifications not to change the order of evaluation, or are they sometimes allowed to change it, e.g. depending on compiler flags, especially optimizations?
First note that precedence and evaluation order are two different (largely unrelated) concepts.
So are compilers forced by the C/C++ specification to not change the order of evaluation?
The compiler must produce behaviour that is consistent with the behaviour guaranteed by the C language standard. It is free to change e.g. the order of evaluation so long as the overall observed behaviour is unchanged.
Logically the 2 operations are exchangeable but if that would happen in some cases it would crash
|| and && are defined to have short-circuit semantics; they may not be interchanged.
The C and C++ standards explicitly support short-circuit evaluation, and thus require the left-hand operand of && and ||, and the condition of ?:, to be evaluated before the other operand(s).
Other "sequence points" include the comma operator (not to be confused with the commas separating function arguments, as in f(a, b)), the end of a statement (;), and the point between the evaluation of a function's arguments and the call to the function.
But for the most part, the order of evaluation (not to be confused with precedence) is unspecified. So, for example, don't depend on f being called first in an expression like f(x) + g(y).
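A small sketch of both points (the helper functions are made up for illustration): the || in the first expression is guaranteed to short-circuit, while the order in which f and g are called in the second is unspecified.

#include <cstdio>

bool isNull(const char* p)  { std::puts("isNull");  return p == nullptr; }
bool isEmpty(const char* p) { std::puts("isEmpty"); return p[0] == '\0'; }

int f(int x) { std::puts("f"); return x; }
int g(int y) { std::puts("g"); return y; }

int main() {
    const char* s = nullptr;

    // Guaranteed: || short-circuits, so isEmpty is never called when
    // isNull returns true -- no null-pointer dereference here.
    if (isNull(s) || isEmpty(s))
        std::puts("null or empty");

    // Not guaranteed: whether f or g runs first is unspecified, so do not
    // rely on the relative order of the "f" and "g" lines of output.
    int sum = f(1) + g(2);
    std::printf("%d\n", sum);
    return 0;
}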

Accessing arrays by index[array] in C and C++

There is this little trick question that some interviewers like to ask for whatever reason:
int arr[] = {1, 2, 3};
2[arr] = 5; // does this line compile?
assert(arr[2] == 5); // does this assertion fail?
From what I can understand, a[b] gets converted to *(a + b), and since addition is commutative, their order doesn't really matter, so 2[a] is really *(2 + a), and that works fine.
Is this guaranteed to work by C and/or C++'s specs?
Yes. 6.5.2.1 paragraph 1 (C99 standard) describes the arguments to the [] operator:
One of the expressions shall have type "pointer to object type", the other expression shall have integer type, and the result has type "type".
6.5.2.1 paragraph 2 (emphasis added):
A postfix expression followed by an expression in square brackets [] is a subscripted designation of an element of an array object. The definition of the subscript operator [] is that E1[E2] is identical to (*((E1)+(E2))). Because of the conversion rules that apply to the binary + operator, if E1 is an array object (equivalently, a pointer to the initial element of an array object) and E2 is an integer, E1[E2] designates the E2-th element of E1 (counting from zero).
It says nothing requiring the order of the arguments to [] to be sane.
In general, 2[a] is identical to a[2], and this is guaranteed in both C and C++ (assuming no operator overloading), because, as you mentioned, it translates into *(2+a) or *(a+2), respectively. Since the plus operator is commutative, the two forms are equivalent.
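To illustrate the "assuming no operator overloading" caveat, here is a small sketch with a made-up wrapper type: for the built-in subscript the two spellings are interchangeable, but for a class with an overloaded operator[] they are not.

#include <cassert>

struct Wrapper {
    int data[3] = {1, 2, 3};
    int& operator[](int i) { return data[i]; }  // user-defined subscript
};

int main() {
    int arr[] = {1, 2, 3};
    2[arr] = 5;      // fine for a built-in array: same as *(2 + arr) = 5
    assert(arr[2] == 5);

    Wrapper w;
    w[2] = 5;        // OK: calls Wrapper::operator[]
    // 2[w] = 5;     // would not compile: the built-in [] needs a pointer
                     // operand, and the int 2 has no operator[]
    assert(w[2] == 5);
}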
Although the forms are equivalent, please for the sake of all that's holy (and future maintenance programmers), prefer the "a[2]" form over the other.
P.S. If you do get asked this in an interview, please exact revenge on behalf of the C/C++ community and make sure you ask the interviewer to list all the trigraph sequences as a precondition to giving your answer. Perhaps this will discourage him/her from asking such questions (worthless with regard to actually programming anything) in the future. In the unlikely event that the interviewer actually knows all nine trigraph sequences, you can always make another attempt to stump them with a question about the destruction order of virtual base classes, a question that is just as mind-bogglingly irrelevant for everyday programming.

Why "**" does not bind more tightly than negation in OCaml?

After this question, I don't know what to think.
In OCaml, if you write something like -1.0 ** 2.0 (because of the typing, you need floats), you obtain 1.00. According to the standard order of operations, the result should be -1.0 (as in Python).
I wasn't able to find the reason or a clear definition of the operator precedence in OCaml...
Is this because of the type system, or because there's a binding underneath to pow?
As the very page you quote says, "The order in which the unary operator − (usually read "minus") acts is often problematical." It cites Excel and bc as giving it the same priority as OCaml does, but also says that "in written or printed mathematics" it works as in Python. So, essentially, there's no universal consensus on this specific issue.
Operator precedence is syntax-directed in OCaml, which means that the first character of the function identifier (and whether it's unary or binary) determines the operator precedence according to a fixed sequence. Contrast this with languages like Haskell, where the operator precedence can be specified at function definition regardless of which characters are used to form the function identifier.