Following this question, Why doesn't gcc allow a const int as a case expression?, which is basically the same as What promoted types are used for switch-case expression comparison? and Is there any way to use a constant array with constant index as switch case label in C?.
From the first link, I tried to replace:
case FOO: // aka 'const int FOO = 10'
with:
case ((int) "toto"[0]): // can't be anything *but* constant
Which gives:
https://ideone.com/n1bmIb -> https://ideone.com/4aOSXR = works in C++
https://ideone.com/n1bmIb -> https://ideone.com/RrnO2R = fails in C
I don't quite understand, since the "toto" string can't be anything but a constant; it isn't even a variable, it just sits somewhere in the compiler's memory. I'm not even playing with the fuzzy logic of const in C (which really means "read-only", not "constant", what did you expect?). The problem is that the array access (or pointer dereference) inside the expression does not evaluate to a constant expression in C, while it does just fine in C++.
I expected to use this "trick" with a HASH_MACRO(str) to generate unique case label values from a key identifier, eventually leaving the compiler to raise an error in case of collisions (duplicate label values).
OK, ok, I was told these restrictions were made to simplify language tooling (preproc, compiler, linker) and C ain't no LISP, but you can have full featured LISP interpreter/compilers for a fraction of the size of a C equivalent, so that's no excuse.
Question is: is there an "extension" to C11 that just allows this "toto" thingy to work in GCC, Clang and... MSVC? I don't want to go the C++ path (typedef forward declarations don't work anymore there), and this is embedded work (hence the compile-time hash computation to save space and time).
Is there an intermediary "C+" language that is more 'permissive' and 'understands' embedded a little better, with, praise the Lords, "enums as bitfield members", among other nice things we cannot have (because of out-of-reality standards evolving like snails under a desert sun)?
#provemewrong, #changemymind, #norustplease
It doesn't matter whether or not it could be known to the compiler at compile time. The case label needs to have a value that is an integer constant expression (C11 6.8.4.2p3).
The expression of each case label shall be an integer constant expression and no two of the case constant expressions in the same switch statement shall have the same value after conversion. There may be at most one default label in a switch statement. (Any enclosed switch statement may have a default label or case constant expressions with values that duplicate case constant expressions in the enclosing switch statement.)
And the definition of an integer constant expression is in C11 6.6p6:
An integer constant expression shall have integer type and shall only have operands that are integer constants, enumeration constants, character constants, sizeof expressions whose results are integer constants, _Alignof expressions, and floating constants that are the immediate operands of casts. Cast operators in an integer constant expression shall only convert arithmetic types to integer types, except as part of an operand to the sizeof or _Alignof operator.
Since "toto"[0] is neither an integer constant, an enumeration constant, a character constant, a sizeof or _Alignof expression, nor a floating constant cast to an integer type, and since that list appears in a constraints section of the standard, the compiler must not pass this silently. (A conforming compiler may still successfully compile the program, but it must diagnose the constraint violation.)
What you can use is chained ? : to resolve the index to a character constant, i.e.
x == 0 ? 't'
: x == 1 ? 'o'
: x == 2 ? 't'
: x == 3 ? 'o'
This can be written into a macro.
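As an illustration, here is a minimal sketch of such a macro (the name CHAR_AT4 and the fixed four-character limit are my own assumptions, not part of the answer):
#include <stdio.h>

/* Maps a constant index to one of four character constants.  Every operand
   is an integer constant or a character constant, so the whole expansion is
   an integer constant expression and is accepted as a case label in C. */
#define CHAR_AT4(i, c0, c1, c2, c3) \
    ((i) == 0 ? (c0) : (i) == 1 ? (c1) : (i) == 2 ? (c2) : (c3))

int main(void)
{
    switch ('t') {
    case CHAR_AT4(0, 't', 'o', 't', 'o'):   /* expands to 't' */
        puts("matched");
        break;
    default:
        puts("no match");
    }
    return 0;
}
The price, of course, is that the characters have to be spelled out as character constants instead of being taken from a string literal.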
"toto[0]" is not an integer constant expression as C defines the term:
6.6 Constant expressions
...
6 An integer constant expression117) shall have integer type and shall only have operands
that are integer constants, enumeration constants, character constants, sizeof
expressions whose results are integer constants, _Alignof expressions, and floating
constants that are the immediate operands of casts. Cast operators in an integer constant
expression shall only convert arithmetic types to integer types, except as part of an
operand to the sizeof or _Alignof operator.
117) An integer constant expression is required in a number of contexts such as the size of a bit-field
member of a structure, the value of an enumeration constant, and the size of a non-variable length
array. Further constraints that apply to the integer constant expressions used in conditional-inclusion
preprocessing directives are discussed in 6.10.1.
C 2011 online draft
The issue that you're running into is that, in C, "toto" is an array of chars. Sure, it's constant in memory, but it's still just an array. The [] operator indexes into an array (via a pointer). If you wanted, you could edit the compiled binary and change the string "toto" to something else. In that sense, it is not compile-time known. It's equivalent to doing:
char * const ___string1 = "toto";
...
case ((int) ___string1[0]):
(This is a little forced and redundant, but it's just for demonstration)
Note that the type of the elements of a string literal is char, not const char.
The case label, however, must be a constant, as it is built into the compiled program's control flow.
Related
For the same program in C and C++, using a constant integer variable as a case label is valid only in C++, not in C. And using a constant integer array member as a case label is not valid in either C or C++. What is the main reason for this behaviour?
//for c
#include <stdio.h>
int main()
{
    const int a = 90;
    switch (90)
    {
    case a: //error : case label does not reduce to an integer constant
        printf("error");
        break;
    }
    const int arr[3] = {88, 89, 90};
    switch (90)
    {
    case arr[2]: //error : case label does not reduce to an integer constant
        printf("Error");
        break;
    }
}
//for c++
#include <stdio.h>
int main()
{
    const int a = 90;
    switch (90)
    {
    case a:
        printf("No error");
        break;
    }
    const int arr[3] = {88, 89, 90};
    switch (90)
    {
    case arr[2]: //error 'arr' cannot appear in a constant-expression
        printf("Error");
        break;
    }
}
What is the main reason for this behaviour?
The main reason for this is the 2018 C standard says of case labels in 6.8.4.2 3 “The expression of each case label shall be an integer constant expression…” and defines an integer constant expression in 6.6 6:
An integer constant expression shall have integer type and shall only have operands that are integer constants, enumeration constants, character constants, sizeof expressions whose results are integer constants, _Alignof expressions, and floating constants that are the immediate operands of casts. Cast operators in an integer constant expression shall only convert arithmetic types to integer types, except as part of an operand to the sizeof or _Alignof operator.
whereas the 2017 C++ standard says of case labels in 9.4.2 “the constant-expression shall be a converted constant expression…” and defines a converted constant expression in 8.20 with four pages of text involving core constant expression, integer constant expression, and more.
In summary, the C++ standard is broader about what it requires a language implementation to evaluate at translation time.
The term “constant expression” is a misnomer. A more apt description is that these are expressions that the language implementation is required to be able to resolve to a specific value when translating a program. The C++ standard requires more of the implementation than the C standard does.
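For what it's worth, the usual C workaround is an enumeration constant, which is an integer constant expression. A minimal sketch of the first example rewritten that way:
#include <stdio.h>

int main()
{
    enum { A = 90 };   /* an enumeration constant is an integer constant expression */

    switch (90)
    {
    case A:            /* accepted in C, unlike 'const int a = 90;' */
        printf("No error");
        break;
    }
}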
Compiling some test code in avr-gcc for an 8-bit microcontroller, with the lines
const uint32_t N = 65537;
uint8_t values[N];
I got the following compilation warning (which, by default, should really be an error):
warning: conversion from 'long unsigned int' to 'unsigned int' changes value from '65537' to '1' [-Woverflow]
uint8_t values[N];
Note that when compiling for this target, sizeof(int) is 2.
So it seems that an array size cannot exceed the range of an unsigned int.
Am I correct? Is this GCC-specific or is it part of some C or C++ standard?
Before somebody remarks that an 8-bit microcontroller generally does not have enough memory for an array so large, let me just anticipate saying that this is beside the point.
size_t is generally considered the type to use here, even though neither the C nor the C++ standard formally mandates it for array bounds.
The rationale for this is that sizeof(values) will have that type (that is mandated by the C and C++ standards), and the number of elements necessarily cannot be greater than that value, since sizeof for an object is at least 1.
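If you want that limit checked rather than silently wrapped, one option (a sketch, assuming the element count is visible to the preprocessor; WANTED_ELEMENTS is an illustrative name) is to compare it against SIZE_MAX from <stdint.h>:
#include <stdint.h>

#define WANTED_ELEMENTS 65537UL   /* the count from the question */

#if WANTED_ELEMENTS > SIZE_MAX
#error "element count does not fit in size_t on this target"
#endif

static uint8_t values[WANTED_ELEMENTS];
On a target with a 16-bit size_t this fails at preprocessing time instead of producing a one-element array.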
So it seems that an array size cannot exceed the range of an unsigned int.
That seems to be the case in your particular C[++] implementation.
Am I correct? Is this gcc-specific or is it part of some C or C++
standard?
It is not a characteristic of GCC in general, nor is it specified by either the C or C++ standard. It is a characteristic of your particular implementation: a version of GCC for your specific computing platform.
The C standard requires the expression designating the number of elements of an array to have an integer type, but it does not specify a particular one. I do think it's strange that your GCC seems to claim it's giving you an array with a different number of elements than you specified. I don't think that conforms to the standard, and I don't think it makes much sense as an extension. I would prefer to see it reject the code instead.
I'll dissect the issue with the rules in the "incorrekt and incomplet" ISO CPP standard draft n4659. Emphasis is added by me.
11.3.4 defines array declarations. Paragraph one contains
If the constant-expression [between the square brackets] (8.20) is present, it shall be a converted constant expression of type std::size_t [...].
std::size_t is from <cstddef> and defined as
[...] an implementation-defined unsigned integer type that is large enough to contain the size in bytes of any object.
Since it is imported via the C standard library headers the C standard is relevant for the properties of size_t. The ISO C draft N2176 prescribes in 7.20.3 the "minimal maximums", if you want, of integer types. For size_t that maximum is 65535. In other words, a 16 bit size_t is entirely conformant.
A "converted constant expression" is defined in 8.20/4:
A converted constant expression of type T is an expression, implicitly converted to type T, where the converted expression is a constant expression and the implicit conversion sequence contains only [any of 10 distinct conversions, one of which concerns integers (7.8):]
— integral conversions (7.8) other than narrowing conversions (11.6.4)
An integral conversion (as opposed to a promotion which changes the type to equivalent or larger types) is defined as follows (7.8/3):
A prvalue of an integer type can be converted to a prvalue of another integer type.
7.8/5 then excludes the integral promotions from the integral conversions. This means that the conversions are usually narrowing type changes.
Narrowing conversions (which, as you'll remember, are excluded from the list of allowed conversions in converted constant expressions used for array sizes) are defined in the context of list-initialization, 11.6.4, par. 7
A narrowing conversion is an implicit conversion
[...]
7.3¹ — from an integer type [...] to an integer type that cannot represent all the values of the original type, except where the source is a constant expression whose value after integral promotions will fit into the target type.
This is effectively saying that the actual array size must be exactly the constant value as written, which is an entirely reasonable requirement for avoiding surprises.
Now let's cobble it all together. The working hypothesis is that std::size_t is a 16 bit unsigned integer type with a value range of 0..65535. The integer literal 65537 is not representable in the system's 16 bit unsigned int and thus has type long. Therefore it will undergo an integral conversion. This will be a narrowing conversion, because the value is not representable in the 16 bit size_t², so the exception condition in 11.6.4/7.3, "value fits anyway", does not apply.
So what does this mean?
11.6.4/3.11 is the catch-all rule for the failure to produce an initializer value from an item in an initializer list. Because the initializer-list rules are used for array sizes, we can assume that the catch-all for conversion failure applies to the array size constant:
(3.11) — Otherwise, the program is ill-formed.
A conformant compiler is required to produce a diagnostic, which it does. Case closed.
1 Yes, they sub-divide paragraphs.
2 Converting an integer value of 65537 (in whatever type can hold the number — here probably a long) to a 16 bit unsigned integer is a defined operation. 7.8/2 details:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2ⁿ where n is the number of bits used to represent the unsigned type). [ Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). —end note ]
The binary representation of 65537 is 1_0000_0000_0000_0001, i.e. of the lower 16 bits only the least significant one is set. The conversion to a 16 bit unsigned value (which circumstantial evidence indicates size_t is) computes the value modulo 2^16, i.e. simply takes the lower 16 bits. This results in the value of 1 mentioned in the compiler diagnostics.
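A minimal illustration of that modular reduction, forcing the conversion to a 16-bit unsigned type so it can be observed on any host:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t n = (uint16_t)65537UL;   /* 65537 mod 2^16 == 1 */
    printf("%u\n", (unsigned)n);      /* prints 1 */
    return 0;
}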
In your implementation, size_t is defined as unsigned int and uint32_t is defined as long unsigned int. When you create a C array, the argument for the array size gets implicitly converted to size_t by the compiler.
This is why you're getting a warning. You're specifying the array size argument with a uint32_t that gets converted to size_t, and these types don't match.
This is probably not what you want. Use size_t instead.
The value returned by sizeof will be of type size_t.
It is generally used for the number of elements in an array, because it is guaranteed to be large enough for that. size_t is always unsigned, but it is implementation-defined which type it actually is. Lastly, it is implementation-defined whether the implementation can support objects of even SIZE_MAX bytes... or even close to it.
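A typical use, as a small sketch:
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    int arr[10];
    size_t count = sizeof arr / sizeof arr[0];   /* sizeof yields a size_t */
    printf("%zu\n", count);                      /* prints 10 */
    return 0;
}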
[This answer was written when the question was tagged with C and C++. I have not yet re-examined it in light of OP’s revelation they are using C++ rather than C.]
size_t is the type the C standard designates for working with object sizes. However, it is not a cure-all for getting sizes correct.
size_t should be defined in the <stddef.h> header (and also in other headers).
The C standard does not require that expressions for array sizes, when specified in declarations, have the type size_t, nor does it require that they fit in a size_t. It is not specified what a C implementation ought to do when it cannot satisfy a request for an array size, especially for variable length arrays.
In your code:
const uint32_t N = 65537;
uint8_t values[N];
values is declared as a variable length array. (Although we can see the value of N could easily be known at compile time, it does not fit C’s definition of a constant expression, so uint8_t values[N]; qualifies as a declaration of a variable length array.) As you observed, GCC warns you that the 32-bit unsigned integer N is narrowed to a 16-bit unsigned integer. This warning is not required by the C standard; it is a courtesy provided by the compiler. More than that, the conversion is not required at all—since the C standard does not specify the type for an array dimension, the compiler could accept any integer expression here. So the fact that it has inserted an implicit conversion to the type it needs for array dimensions and warned you about it is a feature of the compiler, not of the C standard.
Consider what would happen if you wrote:
size_t N = 65537;
uint8_t values[N];
Now there would be no warning in uint8_t values[N];, as a 16-bit integer (the width of size_t in your C implementation) is being used where a 16-bit integer is needed. However, in this case, your compiler likely warns in size_t N = 65537;, since 65537 will have a 32-bit integer type, and a narrowing conversion is performed during the initialization of N.
However, the fact that you are using a variable length array suggests you may be computing array sizes at run-time, and this is only a simplified example. Possibly your actual code does not use constant sizes like this; it may calculate sizes during execution. For example, you might use:
size_t N = NumberOfGroups * ElementsPerGroup + Header;
In this case, there is a possibility that the wrong result will be calculated. If the variables all have type size_t, the result may easily wrap (effectively overflow the limits of the size_t type). In this case, the compiler will not give you any warning, because the values are all the same width; there is no narrowing conversion, just overflow.
Therefore, using size_t is insufficient to guard against errors in array dimensions.
An alternative is to use a type you expect to be wide enough for your calculations, perhaps uint32_t. Given NumberOfGroups and such as uint32_t types, then:
const uint32_t N = NumberOfGroups * ElementsPerGroup + Header;
will produce a correct value for N. Then you can test it at run-time to guard against errors:
if ((size_t) N != N)
Report error…
uint8_t values[(size_t) N];
When I'm in C++, and I call an overloaded function foo, like so:
foo('a' + (char) 5)
it can output "output is a char" or "output is an int" based on the type of the result. I get "output is an int" from my program, like this:
#include <iostream>

void foo(char x)
{
    std::cout << "output is a char" << std::endl;
}

void foo(int x)
{
    std::cout << "output is an int" << std::endl;
}

int main()
{
    foo('a' + (char) 5);
}
My instructor says that in C, the expression above, ('a' + (char) 5), evaluates as a char. I see in the C99 standard that chars are promoted to ints to find the sum, but does C recast them back to chars when it's done? I can't find any references that seem credible saying one way or another what C actually does after the promotion is completed, and the sum is found.
Is the sum left as an int, or given as a char? How can I prove this in C, or is there a reference I'm not understanding/finding?
From the C Standard, 6.3.1.8 Usual arithmetic conversions, emphasis mine:
Many operators that expect operands of arithmetic type cause conversions and yield result
types in a similar way. The purpose is to determine a common real type for the operands
and result. For the specified operands, each operand is converted, without change of type
domain, to a type whose corresponding real type is the common real type. Unless
explicitly stated otherwise, the common real type is also the corresponding real type of
the result, whose type domain is the type domain of the operands if they are the same,
and complex otherwise. This pattern is called the usual arithmetic conversions:
First, if the corresponding real type of either operand is long double...
Otherwise, if the corresponding real type of either operand is double...
Otherwise, if the corresponding real type of either operand is float...
Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands:
If both operands have the same type, then no further conversion is needed.
So you are exactly correct. The type of the expression 'a' + (char) 5 is int. There is no recasting back to char, unless explicitly asked for by the user. Note that 'a' here has type int, so it's only the (char)5 that needs to be promoted. This is stipulated in 6.4.4.4 Character Constants:
An integer character constant is a sequence of one or more multibyte characters enclosed
in single-quotes, as in 'x'.
...
An integer character constant has type int.
There is an example in the standard demonstrating the conversion back to char on assignment:
In executing the fragment
char c1, c2;
/* ... */
c1 = c1 + c2;
the ‘‘integer promotions’’ require that the abstract machine promote the value of each variable to int size and then add the two ints and truncate the sum. Provided the addition of two chars can be done without overflow, or with overflow wrapping silently to produce the correct result, the actual execution need only produce the same result, possibly omitting the promotions.
The truncation here only happens because we assign back to a char.
No, C does not recast them back to chars.
The standard (ISO/IEC 9899:1999) says (6.3.1.8 Usual arithmetic conversions):
Many operators that expect operands of arithmetic type cause conversions and yield result
types in a similar way. The purpose is to determine a common real type for the operands
and result. For the specified operands, each operand is converted, without change of type
domain, to a type whose corresponding real type is the common real type. Unless
explicitly stated otherwise, the common real type is also the corresponding real type of
the result, whose type domain is the type domain of the operands if they are the same, and complex otherwise.
Your instructor seems to be wrong. In addition to your finding in the standard that the arithmetic promotes to int, we can use a simple test program to show the behavior (not a proof from the standard, of course, but the same level of proof as your C++ test):
#include <stdio.h>
int main () {
    printf("%g", 'c' - (char)5);
}
produces
Warning: format specifies type 'double' but argument has type 'int'
with gcc and clang.
You can't determine the type of an expression as easily in C, but you can easily determine the size of an expression:
#include <stdio.h>
int main(void) {
    printf("sizeof(char)==1\n");
    printf("sizeof(int)==%zu\n", sizeof(int));
    printf("sizeof('a' + (char) 5)==%zu\n", sizeof('a' + (char) 5));
    return 0;
}
This gives me:
sizeof(char)==1
sizeof(int)==4
sizeof('a' + (char) 5)==4
which at least proves that 'a' + (char) 5 is not of type char.
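If your compiler supports C11, _Generic can report the type directly as well; a small sketch:
#include <stdio.h>

int main(void)
{
    printf("%s\n", _Generic('a' + (char) 5,
                            char:    "char",
                            int:     "int",
                            default: "something else"));   /* prints "int" */
    return 0;
}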
It's promoted to an int, and there's nothing to tell the compiler it should use anything else. You can convert back to a char like this:
foo((char)('a' + 5));
This tells the compiler to treat the result of the calculation as a char, otherwise it leaves it as an int.
Section 6.5.2.2/6
If the expression that denotes the called function has a type that
does not include a prototype, the integer promotions are performed on
each argument...
So the answer to your question depends on the function prototype. If the function is declared as
void foo(int x)
or
void foo()
then the function argument will be passed as an int.
OTOH, if the function is declared as
void foo( char x )
then the result of the expression will be implicitly converted to char.
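A small sketch of the two cases described above (the function names are mine, for illustration):
#include <stdio.h>

static void takes_int(int x)   { printf("int:  %d\n", x); }
static void takes_char(char x) { printf("char: %d\n", x); }

int main(void)
{
    takes_int('a' + (char) 5);    /* argument passed as the promoted int */
    takes_char('a' + (char) 5);   /* implicitly converted to char at the call */
    return 0;
}
Both print 102 on an ASCII system; the difference is only in the parameter type the promoted value ends up in.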
In C (unlike C++), the character literal 'a' has type int (§6.4.4.4¶10: "An integer character constant has type int.")
Even if that were not the case, the C standard clearly states that prior to the evaluation of the operator +, "[i]f both operands have arithmetic type, the usual arithmetic conversions are performed on them." (C11, §6.5.6 ¶4) In this respect, C and C++ have identical semantics. (See [expr.add] §5.7¶1 of C++)
From the C++ Standard (C++ Working Draft N3797, 5.7 Additive operators)
1 The additive operators + and - group left-to-right. The usual
arithmetic conversions are performed for operands of arithmetic or
enumeration type.
and (5 Expressions)
10 Many binary operators that expect operands of arithmetic or
enumeration type cause conversions and yield result types in a similar
way. The purpose is to yield a common type, which is also the type of
the result. This pattern is called the usual arithmetic conversions,
which are defined as follows:
...
— Otherwise, the integral promotions (4.5) shall be performed on both operands. Then the following rules shall be applied to the promoted operands:
Thus the expression in the function call
foo('a' + (char) 5);
has type int.
To call the overloaded function with parameter of type char you have to write for example
foo( char( 'a' + 5 ) );
or
foo( ( char )( 'a' + 5 ) );
or you can use C++ casting like
foo( static_cast<char>( 'a' + 5 ) );
The above quotes from the C++ Standard are also valid for the C Standard. The visible difference is that in C++ character literals have type char, while in C they have type int.
So in C++ the output of the statement
std::cout << sizeof( 'a' ) << std::endl;
will be equal to 1.
While in C the output of the statement
printf( "%zu\n", sizeof( 'a' ) );
will be equal to sizeof( int ) that is usually equal to 4.
I tried to write a C program as below:
const int x = 5;
int main()
{
    int arr[x] = {1, 2, 3, 4, 5};
}
This gives an error when I try to compile it with gcc, as below.
simple.c:9: error: variable-sized object may not be initialized.
But the same is allowed in C++. When I pass x as the array size, why is x not treated as a constant?
In C const doesn't mean "constant" (i.e., evaluable at compile time). It merely means read-only.
For example, within a function, this:
const int r = rand();
const time_t now = time(NULL);
is perfectly valid.
The name of an object defined as const int is not a constant expression in C. That means that, in C prior to C99 (which introduced variable length arrays), it can't be used to define the length of an array. (C++ has a special rule here, discussed below.)
Although C99 (and, optionally, C11) support variable-length arrays (VLAs), they can't be initialized. In principle, the compiler doesn't know the size of a VLA when it's defined, so it can't check whether an initializer is valid. In your particular case, the compiler quite probably is able to figure it out, but the language rules are designed to cover the more general case.
C++ is nearly the same, but C++ has a special rule that C lacks: if an object is defined as const, and its initializer is a constant expression, then the name of the object is itself a constant expression (at least for integral types).
There's no really good reason that C hasn't adopted this feature. In C, if you want a named constant of an integer type, the usual approach is to use a macro:
#define LEN 5
...
int arr[LEN] = {1, 2, 3, 4, 5};
Note that if you change the value of LEN, you'll have to re-write the initializer.
Another approach is to use an anonymous enum:
enum { LEN = 5 };
...
int arr[LEN] = {1, 2, 3, 4, 5};
The name of an enumeration constant is actually a constant expression. In C, for historical reasons, it's always of type int; in C++ it's of the enumeration type. Unfortunately, this trick only works for constants of type int, so it's restricted to values in the range from INT_MIN to INT_MAX.
When I pass x as the array size, why is x not treated as a constant?
Because in C, constant expressions can't involve the values of any variables, even const ones. (This is one reason why C is so dependent on macro constants, whereas C++ would use const variables for the same purpose.)
On the other hand, in C++, x would certainly be a constant expression if x is declared as const int x = 5;.
If your question is why C++ is so much more liberal than C when it comes to constant expressions, I think it's to support metaprogramming, and allow complex computation to be performed at compile time using templates.
I think almost everyone has misunderstood the error. The error says:
variable-sized object may not be initialized.
which is correct: C99 and C11 support variable length arrays (although they are optional in C11), but they cannot be initialized in their declaration. We can see this from section 6.7.8 Initialization:
The type of the entity to be initialized shall be an array of unknown size or an object type that is not a variable length array type.
It is treated as a VLA because, unlike C++, C expects an integer constant expression:
If the size is an integer constant expression and the element type has a known constant size, the array type is not a variable length array type;
and an integer constant expression has the following restrictions:
shall have integer type and shall only have operands that are integer constants, enumeration constants, character constants, sizeof expressions whose results are integer constants, and floating constants that are the immediate operands of casts. Cast operators in an integer constant expression shall only convert arithmetic types to integer types, except as part of an operand to the sizeof operator.
which x does not satisfy.
In C++ this is not a variable length array, since x is considered a constant expression, and we can see this is valid from the draft C++ standard, section 8.3.4 Arrays under section 8 Declarators, which says:
In a declaration T D where D has the form
D1 [ constant-expression_opt ] attribute-specifier-seq_opt
[...]If the constant-expression (5.19)
is present, it shall be a converted constant expression of type
std::size_t and its value shall be greater than zero. The constant
expression specifies the bound of (number of elements in) the array.
If the value of the constant expression is N, the array has N elements
numbered 0 to N-1[...]
If we removed the const from the declaration of x, it would fail for one of two reasons: either the compiler supports VLAs as an extension and it would fail for the same reason it fails in C, or the compiler does not support VLAs as an extension and therefore the declaration would not be valid.
I will assume you are using a C99 compiler (which supports dynamically sized arrays).
What happens is that the compiler can't know for sure at compile time how your array will behave regarding memory.
Try this:
int arr[x];
memset(arr, 0, x * sizeof(int));  /* needs <string.h> */
and see if it works.
Another thing I think could be causing this is that const does not really mean "compile-time constant" under the hood, so the compiler might not let you do what you are trying to do because of that. You see, there are several ways the value of a const variable can end up being known only at run time, which is part of why other languages separate read-only variables from true constants (C#, for example, has readonly in addition to const).
const is more like an alert for humans than anything else.
Can a float value be used as the index of an array? What will happen if an expression used as an index resulted to a float value?
The float value will be converted to int (it can give a warning or an error depending on the compiler's warning level):
s1 = q[12.2]; // same as q[12]
s2 = q[12.999999]; // same as q[12]
s3 = q[12.1/6.2]; // same as q[1]
Yes. But it's pointless. The float value will be truncated to an integer.
(You can use std::map<float, T> instead, but most of the time you'll miss the intended values because of floating-point inaccuracy.)
A C++ array is a contiguous sequence of memory locations. a[x] means "the xth memory location after one pointed to by a."
What would it mean to access the 12.4th object in a sequence?
It will be converted to int.
This is an error. In [expr.sub]:
A postfix expression followed by an expression in square brackets is a postfix expression. One of the expressions shall have the type “pointer to T” and the other shall have unscoped enumeration or integral type.
I am not aware of a clause in the standard that specifies that conversion should happen here (admittedly, I would not be surprised if such a clause existed), although testing with ideone.com did produce a compilation error.
However, if you're subscripting a class rather than a pointer — e.g. std::vector or std::array — then the overload of operator[] will have the usual semantics of a function call, and floating-point arguments will get converted to the corresponding size_type.