Possible Duplicate: In C arrays why is this true? a[5] == 5[a]
How is it possible that this is valid C++?
void main()
{
int x = 1["WTF?"];
}
On VC++10 this compiles and in debug mode the value of x is 84 after the statement.
What's going on?
The built-in array subscript operator is commutative: the expression is equivalent to int x = "WTF?"[1];. Here, "WTF?" is an array of 5 chars (it includes the null terminator), and [1] gives us the second char, 'T'. Implicitly converted to int, it yields the value 84, its ASCII code.
Off-topic: the code snippet isn't actually valid C++; main must return int.
You can read more in-depth discussion here: In C arrays why is this true? a[5] == 5[a]
int x = 1["WTF?"];
is equivalent to
int x = "WTF?"[1];
84 is the ASCII code of 'T'.
The reason why this works is that when the built-in operator [] is applied to a pointer and an int, a[b] is equivalent to *(a+b), which (addition being commutative) is equivalent to *(b+a), which, by the definition of [], is equivalent to b[a].
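A minimal sketch of that chain of equivalences (the names a and b are just illustrative):
#include <cassert>
int main()
{
    const char* a = "WTF?";  // the string literal decays to a pointer
    int b = 1;
    assert(a[b] == *(a + b));      // definition of the built-in []
    assert(*(a + b) == *(b + a));  // addition commutes
    assert(*(b + a) == b[a]);      // definition of [] again, read the other way
}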
Possible Duplicate: With arrays, why is it the case that a[5] == 5[a]?
I've seen an example like this:
int n = sizeof(0)["abcdefghij"];
cout << n;
What does that thing in square brackets mean? I've read somewhere that (0)["abc"] is equivalent to ("abc")[0]. Meaning the above expression is simply
n = sizeof("abcdefghij")[0];
i.e. the first element.
First, sizeof is not a function but an operator
sizeof(0)["abcdefghij"] can be parsed as either
sizeof( (0)["abcdefghij"] ), or
( sizeof(0) )["abcdefghij"]
Since sizeof has lower precedence than [], the former is the parse that takes place
(0)["abcdefghij"] is equivalent to "abcdefghij"[0], which is just 'a', so the whole thing is the same as sizeof('a'), which is 1 in C++
Demo on GodBolt, ideone
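For what it's worth, a minimal sketch that checks both claims at compile time (assumes a C++11 compiler):
static_assert(sizeof( (0)["abcdefghij"] ) == sizeof(char), "sizeof applies to the char");
static_assert(sizeof(0)["abcdefghij"] == 1, "so the whole expression is 1");
int main() {}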
If you replace sizeof(0) with sizeof(int), the same parse is attempted, but now (int)["abcdefghij"] is invalid, so it should result in a compilation failure. Most compilers report an error as expected, except ICC, so it looks like an ICC bug: it chooses (sizeof(int))["abcdefghij"] over sizeof((int)["abcdefghij"]) just because the latter is invalid
Related: Why does sizeof(my_arr)[0] compile and equal sizeof(my_arr[0])?
Possible Duplicate: bool operator ++ and --
Why, in the case of bool, doesn't overflow happen in a circular fashion? E.g. for a signed 8-bit integer with range -128 to 127, say a = 126: when you increment past 127, a wraps around to -128. Similarly, the range of bool is 0 to 1, so it should cycle 0, 1, 0, 1, and so on. Please clarify.
#include <iostream>
using namespace std;
int main()
{
bool a;
for (a = 1; a <= 5; a++)
cout << a;
return 0;
}
From cppreference
If the operand of the pre-increment operator is of type bool, it is set to true
If the operand of the post-increment operator is of type bool, it is set to true
Note that this behaviour was deprecated from the start and removed in C++17 (probably because it was confusing), so your code won't compile under newer standards.
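A minimal sketch of the pre-C++17 behaviour (compile with -std=c++14 or earlier):
#include <iostream>
int main()
{
    bool b = false;
    b++;                     // deprecated: sets b to true
    b++;                     // still true; there is no wrap-around back to false
    std::cout << b << '\n';  // prints 1
}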
Possible Duplicate: Size of character ('a') in C/C++
#include<stdio.h>
int main()
{
printf("%d", sizeof('a'));
return 0;
}
Why does the above code produce different results when compiling in C and C++ ?
In C, it prints 4, while in C++ it gives the more intuitive answer, i.e. 1.
When I replace the 'a' inside sizeof() with a char variable declared in the main function, the result is 1 in both cases!
Because, and this might be shocking, C and C++ are not the same language.
C defines character literals as having type int, while C++ considers them to have type char.
This is a case where multi-character constants can be useful:
const int foo = 'foo';
That will generate an integer whose value will probably be 6713199 or 7303014, depending on the byte-ordering and the compiler's mood. In other words, multi-character character literals are not "portable"; you cannot depend on the resulting value being easy to predict.
As commenters have pointed out (thanks!), this is valid in both C and C++; C++ simply gives multi-character character literals a different type (int, unlike ordinary character literals). Clever!
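A minimal sketch, compiling as both C and C++ (the printed value is implementation-defined; 6713199, i.e. 0x666f6f, is what GCC's left-to-right packing happens to produce):
#include <stdio.h>
int main(void)
{
    const int foo = 'foo';           /* multi-character constant: type int */
    printf("%d (0x%x)\n", foo, foo); /* e.g. 6713199 (0x666f6f) with GCC */
    return 0;
}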
Also, as a minor note that I like to mention when on topic, note that sizeof is not a function and that values of size_t are not int. Thus:
printf("the size of a character is %zu\n", sizeof 'a');
or, if your compiler is too old to support C99:
printf("the size of a character is %lu\n", (unsigned long) sizeof 'a');
represent the simplest and most correct way to print the sizes you're investigating.
In C, the 'a' is a character constant, which has type int, so you get a size of 4 (on a typical platform where int is 4 bytes), whereas in C++ it has type char.
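A minimal sketch that compiles as both C and C++ and shows the difference (the 4 assumes a platform with 4-byte int):
#include <stdio.h>
int main(void)
{
    printf("%zu\n", sizeof 'a');   /* typically 4 compiled as C, 1 as C++ */
    printf("%zu\n", sizeof(char)); /* 1 by definition, in both languages */
    return 0;
}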
Possible duplicate: Size of character ('a') in C/C++
Your example is one of the cases where C++ is not compatible with C. In C, APIs that read a single character (like getchar) return an int, because this leaves room for an EOF marker (-1). For the same reason, C sees 'c' as an int.
C++ introduces operator and function overloading, which means that we probably want to handle
cout << 'c'
differently from:
cout << 99
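A minimal sketch of what that overload resolution does in practice:
#include <iostream>
int main()
{
    std::cout << 'c' << '\n';  // selects operator<<(ostream&, char): prints the character c
    std::cout << 99 << '\n';   // selects operator<<(ostream&, int): prints the number 99
}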
Possible Duplicate: Size of character ('a') in C/C++
OS: linuxmint 32-bit
Compiler: gcc & g++
I have tried this code:
#include <stdio.h>
int main()
{
printf("%d\n",sizeof('a'));
return 0;
}
I compile it with gcc and the result is 4; when I change to g++, it is 1.
Then I use sizeof(char), and the result is 1.
I use:
char s = 'a';
printf("%d\n", sizeof(s));
and the result is 1
But I searched on the Internet, and some people said they got a result of 1 or 2.
So why are there so many different results?
Character constants have type int in C. When you specify the char type, it's only 1 byte.
Character literals like 'a' have type int in C89, which is the default standard used by gcc. In C++ it is important for overloading that characters and strings have types char and char* respectively (think about std::cout << 'a'). Since sizeof(int) == 4 and sizeof(char) == 1 on x86 and x86_64, you get the results you describe.
sizeof(char) is always 1, both in C and C++. In C, the type of 'c' is int, so sizeof('c') is the same as sizeof(int). In C++, the type of 'c' is char, so sizeof('c') is the same as sizeof(char), i.e., 1.
Possible Duplicate: In C arrays why is this true? a[5] == 5[a]
Is the possibility of writing both array[index] and index[array] a compiler feature or a language feature? How is the second one possible?
The compiler will turn
index[array]
into
*(index + array)
With the normal syntax it would turn
array[index]
into
*(array + index)
and thus you see that both expressions evaluate to the same value. This holds for both C and C++.
From the earliest days of C, the expression a[i] was simply the address of a[0] added to i (scaled up by the size of a[0]) and then de-referenced. In fact, all these were equivalent:
a[i]
i[a]
*(a+i)
====
The only thing I'd be concerned about is the actual de-referencing. Whilst they all produce the same address, de-referencing may be a concern if the types of a and i are different.
For example:
int i = 4;
long a[9];
long x = a[i]; //get the long at memory location X.
long x = i[a]; //get the int at memory location X?
I haven't actually tested that behavior but it's something you may want to watch out for. If it does change what gets de-referenced, it's likely to cause all sorts of problems with arrays of objects as well.
====
Update:
You can probably safely ignore the bit above between the ===== lines. I've tested it under Cygwin with a short and a long and it seems okay, so I guess my fears were unfounded, at least for the basic cases. I still have no idea what happens with more complicated ones because it's not something I'm ever likely to want to do.
As Matthew Wilson discusses in Imperfect C++, this can be used to enforce type safety in C++, by preventing use of DIMENSION_OF()-like macros with instances of types that define the subscript operator, as in:
#define DIMENSION_OF_UNSAFE(x) (sizeof(x) / sizeof((x)[0]))
#define DIMENSION_OF_SAFER(x) (sizeof(x) / sizeof(0[(x)]))
int ints[4];
DIMENSION_OF_UNSAFE(ints); // 4
DIMENSION_OF_SAFER(ints); // 4
std::vector<int> v(4);
DIMENSION_OF_UNSAFE(v); // compiles, but gives an implementation-dependent value that is very likely wrong
DIMENSION_OF_SAFER(v); // does not compile
There's more to this, for dealing with pointers, but that requires some additional template smarts. Check out the implementation of STLSOFT_NUM_ELEMENTS() in the STLSoft libraries, and read about it all in chapter 14 of Imperfect C++.
edit: some of the commenters suggest that the implementation does not reject pointers. It does (as well as user-defined types), as illustrated by the following program; you can verify this by uncommenting the two commented-out printf lines. (I just did this on Mac/GCC 4, and it rejects both forms.)
#include <stlsoft/stlsoft.h>
#include <vector>
#include <stdio.h>
int main()
{
int ar[1];
int* p = ar;
std::vector<int> v(1);
printf("ar: %lu\n", STLSOFT_NUM_ELEMENTS(ar));
// printf("p: %lu\n", STLSOFT_NUM_ELEMENTS(p));
// printf("v: %lu\n", STLSOFT_NUM_ELEMENTS(v));
return 0;
}
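For comparison, here is a minimal sketch of the classic way to get the same compile-time rejection in plain standard C++, using a function template that only matches real arrays (the names dimension_of_helper and DIMENSION_OF are illustrative):
#include <cstdio>
#include <cstddef>
// Declared but never defined: it is only ever used inside sizeof,
// so no definition is required.
template <typename T, std::size_t N>
char (&dimension_of_helper(T (&)[N]))[N];
#define DIMENSION_OF(x) sizeof(dimension_of_helper(x))
int main()
{
    int ints[4];
    std::printf("%zu\n", DIMENSION_OF(ints));  // prints 4
    // int* p = ints;
    // DIMENSION_OF(p);  // error: a pointer does not match T (&)[N]
    return 0;
}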
In C and C++ (with array being a pointer or array) it is a language feature: pointer arithmetic. The operation a[b], where either a or b is a pointer, is converted into pointer arithmetic: *(a + b). With addition being commutative, reordering does not change the meaning.
Now, there are differences for non-pointers. In fact, given a type A with an overloaded operator[], a[4] is a valid method call (it will call A::operator[]), but the opposite will not even compile.
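A minimal sketch of that asymmetry (the type A here is illustrative):
#include <iostream>
struct A
{
    int operator[](int i) const { return i * 10; }  // user-defined subscript
};
int main()
{
    A a;
    std::cout << a[4] << '\n';  // fine: calls A::operator[](4) and prints 40
    // std::cout << 4[a];       // error: the built-in [] needs a pointer operand,
                                // and overloaded operators do not commute
}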