Declaring an enum and setting values manually, not in increasing order - C++

Is defining an enum in C++ and setting its values in arbitrary order valid? Is it used in any well-known code?
e.g.:
enum ESampleType{ STPositiveControl='P', STNegativeControl='N', STSample='S' };
VS2008 compiles without warnings. I wonder whether gcc would.
In my opinion, it at least violates the "principle of least surprise", since iterating over all values using
for(int i=STPositiveControl; i<=STSample; ++i)
would fail.
P.S: The rationale for this approach:
In my DB application I'm defining wrapper methods. Some columns contain "constants" encoded as
single char. I'm trying to make it a little more type safe.
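For context, a minimal sketch of the kind of wrapper I have in mind (the helper name SampleTypeFromChar and the exception choice are just placeholders):
#include <stdexcept>

enum ESampleType { STPositiveControl='P', STNegativeControl='N', STSample='S' };

// Hypothetical wrapper: map the single-char column value onto the enum,
// rejecting anything that is not a known code.
ESampleType SampleTypeFromChar(char c)
{
    switch (c)
    {
    case 'P': return STPositiveControl;
    case 'N': return STNegativeControl;
    case 'S': return STSample;
    default:  throw std::invalid_argument("unknown sample type code");
    }
}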

It's a standard and widely used feature. Most of the time (but not always) it will be used to create bit masks, but it can also be used as you show, to give "printable" values to the enum constants.
It shouldn't surprise anyone who knows C++, as it is widely used.
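A minimal sketch of both uses (the enum and value names here are made up for illustration):
// Flag-style enum: each enumerator is a distinct bit, so values can be combined.
enum FileMode { ModeRead = 1, ModeWrite = 2, ModeAppend = 4 };

// "Printable"-style enum: each enumerator carries the code of its character.
enum TokenChar { TokPlus = '+', TokMinus = '-' };

int flags = ModeRead | ModeWrite;     // bitwise combination of flags
char c = static_cast<char>(TokPlus);  // '+', convenient in a debugger or in output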

Yes it is valid.
There is an example use of it in the book The C++ Programming Language (3rd Edition) by Bjarne Stroustrup, section "6.1 A Desk Calculator [expr.calculator]" (and more precisely "6.1.1 The Parser [expr.parser]"), for the parser code of a simple arithmetic calculator. Here's an excerpt:
The parser uses a function get_token() to get input. The value of the most recent call of get_token() can be found in the global variable curr_tok. The type of curr_tok is the enumeration Token_value:
enum Token_value {
    NAME, NUMBER, END,
    PLUS='+', MINUS='-', MUL='*', DIV='/',
    PRINT=';', ASSIGN='=', LP='(', RP=')'
};
Token_value curr_tok = PRINT;
Representing each token by the integer value of its character is convenient and efficient and can be a help to people using debuggers. This works as long as no character used as input has a value used as an enumerator – and no character set I know of has a printing character with a single-digit integer value. (...)
(That last sentence is specific to this example, because the enumeration mixes "default-value" and "explicit-value" enumerators and wants each one to be unique.)
However, it's only an educational example (and notably it uses a global variable, and ALL-CAPS names for enumerators, which you should reserve for macros (but Stroustrup doesn't like macros :p)).
Now, indeed you can't iterate over it (at least with a plain for loop; but see this question). (And as James Kanze pointed, an enum's values are not always ordered, contiguous, and unique.)
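If you do need to visit every enumerator of such an enum, one common workaround is to list them explicitly, e.g. in an array (a sketch, not the only option):
#include <cstddef>

enum ESampleType { STPositiveControl='P', STNegativeControl='N', STSample='S' };

const ESampleType kAllSampleTypes[] = { STPositiveControl, STNegativeControl, STSample };

void visit_all()
{
    for (std::size_t i = 0; i != sizeof kAllSampleTypes / sizeof kAllSampleTypes[0]; ++i)
    {
        ESampleType t = kAllSampleTypes[i];
        (void)t; // ... use t here ...
    }
}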

Related

Confusion between constants and literals?

I am currently reading about constants in the C++ tutorial from TutorialsPoint, where it says:
Constants refer to fixed values that the program may not alter and they are called literals.
(Source)
I do not really get this. If constants are called literals and literals are data represented directly in the code, how can constants be considered as literals? I mean variables preceded with the const keyword are constants, but they are not literals, so how can you say that constants are literals?
Here:
const int MEANING = 42;
the value MEANING is a constant, 42 is a literal. There is no real relationship between the two terms, as can be seen here:
int n = 42;
where n is not a constant, but 42 is still a literal.
The major difference is that a constant may have an address in memory (if you write some code that needs such an address), whereas a literal never has an address.
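A small sketch of that difference:
const int MEANING = 42;

const int* p = &MEANING;  // fine: the constant is an object and can have an address
// const int* q = &42;    // does not compile: a literal is not an object you can point at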
I disagree with the claim "...There wasn't a thing called const in C originally so this was fine." const is actually one of the 32 C keywords. Google to see.
That said, I think the author missed something at TP. To be fair to them at Tutorials Point, they had an article that explained the difference thus (full quote, verbatim):
https://www.tutorialspoint.com/questions/category/Cplusplus
A literal is a value that is expressed as itself. For example, the number 25 or the string "Hello World" are both literals.
A constant is a data type that substitutes a literal. Constants are used when a specific, unchanging value is used various times during the program. For example, if you have a constant named PI that you'll be using at various places in your program to find the area, circumference, etc of a circle, this is a constant as you'll be reusing its value. But when you'll be declaring it as:
const float PI = 3.141;
The 3.141 is a literal that you're using. It doesn't have any memory address of its own and just sits in the source code.
Please don't disparage those fellows doing what you call "random tutorials". Kids from poorer homes and the less developed world can't afford your "good C++ textbooks", e.g. Scott Meyers' Effective C++. These free online tutorials are what they can have, and most of these tutorials do a better job of explaining than the "good books".
By all means read them, guys. Get confused a little, then come over here to Stack Overflow or Quora to have your confusion cleared up. Happy coding, guys.
The author of the article is confused, and spreading that confusion to others (including you).
In C, literals are "constants". There wasn't a thing called const in C originally so this was fine.
C++ is a different language. In C++, literals are called "literals", and "constant" has a few meanings but generally is a const thing. The two concepts are different (although both kinds of things cannot be mutated after initial creation). We also have compile-time constants via constexpr which is yet another thing.
In general, read a good book rather than random tutorials written by randomers on the internet!
While the first part of the statement makes sense
Constants refer to fixed values that the program may not alter
the continuation
and they are called literals
is not really true.
Neil has already explained the semantic difference between a literal and a constant in his answer. But I would also like to add that the values of constant variables in C++ are not necessarily known at compile time.
#include <iostream>

// x might be obtained at runtime,
// for instance from user input
void print_square(int x)
{
    const int square = x * x;
    std::cout << square << '\n';
}
Literals are values that are known at compile time, which allows the compiler to place them in a separate read-only section of the resulting binary.
You can also force your variables to be known at compile time by applying the constexpr keyword (C++11).
constexpr int meaning = 42;
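To spell out the difference in the earlier example (just a sketch):
void print_square(int x)
{
    const int square = x * x;          // OK: the value is computed at run time
    // constexpr int square2 = x * x;  // error: x is not known at compile time
    (void)square;
}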
P.S. And I also do agree with a comment suggesting to use a good book instead of tutorialspoint.
If constants are called literals and literals are data represented directly in the code, how can constants be considered as literals?
The article from which you drew the quote is defining the word "constant" to be a synonym of "literal". The latter is the C++ standard's term for what it is describing. The former is what the C standard uses for the same concept.
I mean variables preceded with the const keyword are constants, but they are not literals, so how can you say that constants are literals?
And there you are providing an alternative definition for the term "constant", which, you are right, is inconsistent with the other. That's all. TP is using a different definition of the term than the one you are used to.
In truth, although the noun usage of "constant" appears in a couple of places in the C++ standard outside the defined term "null pointer constant", apparently with the meaning you propose here, I do not find an actual definition of that term, and especially not one matching yours. In truth, your definition is less plausible than TutorialPoint's, because an expression having const-qualified type can nevertheless designate an object that is modifiable (via a different expression).
A constant is simply a variable declared constant with the keyword 'const', whose value should not be altered after its declaration for the rest of the program (and trying to alter it will result in an error).
A literal, on the other hand, is simply a value used and represented exactly as it is typed. For example, 25 used in the expression (x+4*y+25) is termed a literal.
Whenever we supply a string value directly in double quotes ("hello"), that value in double quotes is called a literal.
For example, printf("This is literal");
And if you assign a string value to a variable, from then on you refer to the variable (which could be declared constant if desired) and not to the value you stored in it. That is, only at the point where you supply a value (of string or any other type) to the variable is that value referred to as a literal; after that, the variable is what is talked about whenever referring to that value.
Once again, the value 25 in the expression x+4*y+25 is a literal.
The value 4 in the term 4*y is also a literal (since it is exactly as we see it and is known to the compiler beforehand).
(In algebraic terms, the 4 in 4*y is called a numerical coefficient and y is called a literal coefficient.)
Hence, all the explanation I have given above is in computing terms only. The meanings of literals and constants in algebra are somewhat different from how they are used in computing.
"Constants refer to fixed values that the program may not alter and they are called literals. (Source)"
The sentence construction is weird, which is what leads to the confusion.
Here, the "they" refers to the fixed values, not to constants. I would phrase it as "Constants refer to fixed values, that the program may not alter, called literals.", which is less confusing, I hope.
Constants are variables that can't vary, whereas Literals are literally numbers/letters that indicate the value of a variable or constant.
I can explain it this way.
Basically, constants are variables whose value cannot change.
Literals are notations that represent fixed values. These values can be strings, numbers, etc.
Literals can be assigned to variables.
Code :
var a = 10;
var name = "Simba";
const pi = 3.14;
Here a and name are variables. pi is a constant. ( Constants are those variables whose value doesn't change. )
Here 10, "Simba" and 3.14 are literals.

Why do C fundamental types have identifiers with multiple keywords

Apart from the obvious answer ("because the designers made it that way"), why do C/C++ have types whose names consist of multiple keywords, e.g.
long long (int)
short int
signed char
I do have some basic knowledge of parsing and have used the flex/bison tools to make a few parsers, and I think that this brings much more complexity to parsing type names. Looking at the C++ grammar in the standard, everything about types really is complicated.
I know that C++ (and C too, I believe) does not specify much about the sizes of fundamental data types, so making types like int_8, uint_8, etc. would not work (although C++11 gave us fixed-width integers).
So, why did the developers of the standard agree on multi-word type identifiers, when they could have made int, uint and similar?
Speaking in terms of C, why did the developers of the standard agree on multi-word identifiers? It's because that was what the language had at the time of standardisation.
The mandate for the original standard was not to create a new language but to codify existing practice. As per the C89 standard itself:
The Committee evaluated many proposals for additions, deletions, and changes to the base documents during its deliberations. A concerted effort was made to codify existing practice wherever unambiguous and consistent practice could be identified. However, where no consistent practice could be identified, the Committee worked to establish clear rules that were consistent with the overall flavor of the language.
And, from the C99 rationale document:
The original X3J11 charter clearly mandated codifying common existing practice, and the C89 Committee held fast to precedent wherever that was clear and unambiguous. The vast majority of the language defined by C89 was precisely the same as defined in Appendix A of the first edition of The C Programming Language by Brian Kernighan and Dennis Ritchie, and as was implemented in almost all C translators of the time.
Beyond that, each iteration of the standard has valued backward compatibility highly so that code doesn't break. From that same rationale document:
Existing code is important, existing implementations are not. A large body of C code exists of considerable commercial value. Every attempt has been made to ensure that the bulk of this code will be acceptable to any implementation conforming to the Standard. The C89 Committee did not want to force most programmers to modify their C programs just to have them accepted by a conforming translator.
So, while later versions of the standard gave us things like stdint.h with its fixed width integral types, taking away the standard ones like int and long would be a gross violation of that guideline.
In terms of C++, it's almost certainly a holdover from the earliest days of that language where it was put forward as "C plus classes". In fact, the very early cfront C++ compiler was so named because it took C++ source code and turned that into C before giving it to a suitable C compiler (i.e., a front end for C, hence cfront).
This would have allowed the original author Bjarne to minimise the workload in delivering C++ since the bulk of it was already provided by the C compiler itself.
In terms of parsing a language, it's certainly more difficult to have to process unsigned long int x (a) than it is to handle ulong x.
But, given that the compiler already has to handle a large number of optional "modifiers/specifiers" for a variable (e.g., const char * const x), handling a few others is par for the course.
(a) Or int long unsigned x or long unsigned x or any of the other type specifiers that end up becoming the singular unsigned long int type. See here for more details.
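For example, all of these spellings name the same type, which you can check with a type trait (a sketch; static_assert and <type_traits> need C++11):
#include <type_traits>

// The order of the specifiers does not matter: these are all the same type.
static_assert(std::is_same<unsigned long int, long unsigned int>::value, "same type");
static_assert(std::is_same<unsigned long int, int long unsigned>::value, "same type");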
Adding new reserved words to a language will break any code which happens to use such words as identifiers unless those words are of a form which is reserved for future expansion (e.g. contain two leading underscores, or start with an underscore and a capital letter, etc.)
By contrast, if some particular sequence of reserved words has no defined meaning in any existing implementation, there can be no existing code which uses that sequence of reserved words, and thus no danger of breaking existing code by attaching a new meaning to it.

Assigning values to enum

While doing a review of some older code, I notice the following two strange constructions using enum (two different files/classes/namespaces, just putting them together here):
enum FirstEnum
{
    A_CHOICE
    ,ANOTHER_CHOICE=1
    ,YET_SOME_OTHER_CHOICE
};
enum SecondEnum
{
    FIRST_CHOICE
    ,SECOND_CHOICE
    ,THIRD_CHOICE
    ,DEFAULT_CHOICE=SECOND_CHOICE
};
I think both constructions are wrong.
The first one assigns a value to one of the choices, but not to the others, meaning that things might go wrong if new choices are added.
In the second case, we end up with two enumeration elements having the same underlying value.
Is there any reason why the C++ standard allows both constructions?
(using Visual Studio 2010)
The first one assigns a value to one of the choices, but not to the others, meaning that things might go wrong if new choices are added.
I don't know what you mean by "go wrong". It's well-defined that if you don't specify a value for an enumerator, its value is one more than the previous (or zero, if it's the first).
In the second case, we end up with two enumeration elements having the same underlying value.
Yes we do. That would be wrong if enumerations were supposed to be a set of unique values but (in C++) they aren't.
Is there any reason why the C++ standard allows both constructions?
Because, in C++, an enumeration is simply a way to declare a set of related, named, constant values. It doesn't try to restrict what values they can take.
This article from Microsoft should help:
http://msdn.microsoft.com/en-us/library/2dzy4k6e(v=VS.80).aspx
The first one assigns a value to one of the choices, but not to the others
By default, the first enumerator has a value of 0, and each successive enumerator is one larger than the value of the previous one, unless you explicitly specify a value for a particular enumerator.
In the second case, we end up with two enumeration elements having the same underlying value.
Enumerators needn't have unique values within an enumeration. The name of each enumerator is treated as a constant and must be unique within the scope where the enum is defined.
The article includes examples of how these features could be taken advantage of.
I don't have a quote from the standard for you, but enums are specified such that uninitialized values take on a value one larger than the value preceding them.
In the FirstEnum, YET_SOME_OTHER_CHOICE would therefore be 2 (ANOTHER_CHOICE+1). It is also perfectly legal to have multiple equivalent values within an enum.
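Spelled out for the two enums from the question (a quick compile-time check; static_assert needs C++11, otherwise a plain assert would do):
enum FirstEnum  { A_CHOICE, ANOTHER_CHOICE = 1, YET_SOME_OTHER_CHOICE };
enum SecondEnum { FIRST_CHOICE, SECOND_CHOICE, THIRD_CHOICE, DEFAULT_CHOICE = SECOND_CHOICE };

static_assert(A_CHOICE == 0, "first enumerator defaults to 0");
static_assert(YET_SOME_OTHER_CHOICE == 2, "one more than the previous enumerator");
static_assert(DEFAULT_CHOICE == SECOND_CHOICE, "duplicate values are allowed");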
The first one assigns a value to one of the choices, but not to the others, meaning that things might go wrong if new choices are added.
What might go wrong? Sure, if somebody changes the first example to
enum FirstEnum
{
    A_CHOICE               // 0
    ,A_THIRD_CHOICE        // 1
    ,ANOTHER_CHOICE=1      // 1
    ,YET_SOME_OTHER_CHOICE // 2
};
Then yes, they will get problems if they didn't expect two values to be the same in the enum. The same if somebody had #defined these values, and accidentally made two of them the same.
And for your second example, the names of the values in the enumeration give a hint as to why that is useful. You can have a default value for variables of type SecondEnum stored in the definition of SecondEnum, allowing you to do things like
SecondEnum var = DEFAULT_CHOICE;
without the need for #defines or constants that are closely coupled to the enum definition but aren't part of it.
Is there any reason why the C++ standard allows both constructions?
I'm not on the standards committee, but if I were to guess, it's because both constructions are useful for programmers.

Why doesn't this character conversion work?

Visual Studio 2008
Project compiled as multibyte character set
LPWSTR lpName[1] = {(WCHAR*)_T("Setup")};
After this conversion, lpName[0] contains garbage (at least when previewed in VS)
LPWSTR is typedef'd as follows:
typedef __nullterminated WCHAR *NWPSTR, *LPWSTR, *PWSTR;
It's an expanded version of my comment above.
The code shown casts a pointer of type A to a pointer of type B. This is a low-level, machine-dependent operation. It almost never works as a conversion of an object of type A to an object of type B, especially if one type is a regular character type and the other is wide characters.
Imagine that you take a French book, and read it aloud as if it was written in English.
FRENCH* book;
readaloud ((ENGLISH*) book);
You will mostly hear gibberish. The letters used in the two languages are the same (or similar, at any rate), but the rules of the two languages are totally different. The representation is the same for both languages, but the meaning is not.
This is very similar to what we have here. Whatever type you have, bits and bytes are the same, but the rules are totally different. You take bits laid out according to regular character rules, and try to interpret them according to wide character rules. It doesn't work. The representation is the same in both cases, but the meaning is not.
To convert one character flavor to another, you in general need a lookup table or some other means to convert each character from one type to the other — change representation, but keep the meaning. Likewise, to convert a French book into an English book, you need to use a big lookup table a.k.a. dictionary... well, the analogy breaks here, because there's no formal set of conversion rules, you need to be creative! But you get the idea.
The rules of C++ actually prohibit such casts. You can only cast an object-type pointer to void*, and only use the result to cast it back to the original object type. Everything else is a no-no (unless you are willing to venture into the realm of undefined behavior).
So what should you do?
Pick one character variant and stick to it.
If you must convert between flavors, do so with a library function.
Try to avoid pointer casts, they almost always signal trouble.
I think what you're looking for is
LPTSTR lpName[1] = {_T("Setup")};
The various typedefs with a T in them (e.g. TCHAR, LPTSTR) depend on whether you use Unicode or multi-byte or whatever else. By using these, you should be able to write code that works in whatever encoding you are using (i.e., tomorrow you could switch to ASCII, and a large portion of your code should still work).
Edit
If you are in a situation where you really must convert between encodings, then there are various conversion functions available, such as wcstombs and mbstowcs (see Microsoft's documentation). These are declared in <cstdlib>.
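If you really do need to convert at run time, a rough sketch with mbstowcs might look like this (error handling kept minimal):
#include <cstdlib>

int main()
{
    const char* narrow = "Setup";
    wchar_t wide[64];

    // mbstowcs converts a multi-byte string to a wide string using the current
    // locale; it returns (size_t)-1 if it meets an invalid sequence.
    std::size_t n = std::mbstowcs(wide, narrow, sizeof wide / sizeof wide[0]);
    return n == static_cast<std::size_t>(-1) ? 1 : 0;
}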

Why can't variable names start with numbers?

I was working with a new C++ developer a while back when he asked the question: "Why can't variable names start with numbers?"
I couldn't come up with an answer except that some numbers can have text in them (123456L, 123456U), and that wouldn't be possible if the compiler treated everything containing alphabetic characters as a variable name.
Was that the right answer? Are there any more reasons?
string 2BeOrNot2Be = "that is the question"; // Why won't this compile?
Because then a string of digits would be a valid identifier as well as a valid number.
int 17 = 497;
int 42 = 6 * 9;
String 1111 = "Totally text";
Well think about this:
int 2d = 42;
double a = 2d;
What is a? 2.0? or 42?
Hint, if you don't get it: in some languages (Java, for example), a d suffix after a number means the number before it is a double literal.
It's a convention now, but it started out as a technical requirement.
In the old days, parsers of languages such as FORTRAN or BASIC did not require the uses of spaces. So, basically, the following are identical:
10 V1=100
20 PRINT V1
and
10V1=100
20PRINTV1
Now suppose that numeral prefixes were allowed. How would you interpret this?
101V=100
as
10 1V = 100
or as
101 V = 100
or as
1 01V = 100
So, this was made illegal.
Because backtracking is avoided in lexical analysis while compiling. A variable like:
Apple;
the compiler will know it's an identifier right away when it meets the letter 'A'.
However a variable like:
123apple;
the compiler won't be able to decide whether it's a number or an identifier until it hits 'a', and it needs backtracking as a result.
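A highly simplified sketch of that first-character decision (not how a real compiler is implemented, just the idea):
#include <cctype>
#include <string>

enum TokenKind { TokNumber, TokIdentifier, TokOther };

// Decide the kind of a token from its first character alone: a digit must start
// a number, and a letter or '_' must start an identifier, so no backtracking is needed.
TokenKind classify(const std::string& token)
{
    if (token.empty()) return TokOther;
    unsigned char first = static_cast<unsigned char>(token[0]);
    if (std::isdigit(first)) return TokNumber;
    if (std::isalpha(first) || first == '_') return TokIdentifier;
    return TokOther;
}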
Compilers/parsers/lexical analyzers were a long, long time ago for me, but I think I remember there being difficulty in unambiguously determining whether a numeric character in the compilation unit represented a literal or an identifier.
Languages where space is insignificant (like ALGOL and the original FORTRAN if I remember correctly) could not accept numbers to begin identifiers for that reason.
This goes way back - before special notations to denote storage or numeric base.
I agree it would be handy to allow identifiers to begin with a digit. One or two people have mentioned that you can get around this restriction by prepending an underscore to your identifier, but that's really ugly.
I think part of the problem comes from number literals such as 0xdeadbeef, which make it hard to come up with easy-to-remember rules for identifiers that can start with a digit. One way to do it might be to allow anything matching [A-Za-z0-9_]+ that is NOT a keyword or number literal. The problem is that it would lead to weird things like 0xdeadpork being allowed, but not 0xdeadbeef. Ultimately, I think we should be fair to all meats :P.
When I was first learning C, I remember feeling the rules for variable names were arbitrary and restrictive. Worst of all, they were hard to remember, so I gave up trying to learn them. I just did what felt right, and it worked pretty well. Now that I've learned a lot more, it doesn't seem so bad, and I finally got around to learning it right.
It's likely a decision that came about for a few reasons. When you're parsing a token, you only have to look at the first character to determine whether it's an identifier or a literal and then send it to the correct function for processing. So that's a performance optimization.
The other option would be to check that it's not a literal and leave the domain of identifiers to be the universe minus the literals. But to do this you would have to examine every character of every token to know how to classify it.
There are also stylistic implications: identifiers are supposed to be mnemonic, so words are much easier to remember than numbers. When a lot of the original languages were being written, setting the styles for the next few decades, their designers weren't thinking about substituting "2" for "to".
Variable names cannot start with a digit, because it can cause some problems like below:
int a = 2;
int 2 = 5;
int c = 2 * a;
What is the value of c? Is it 4, or is it 10?
another example:
float 5 = 25;
float b = 5.5;
Is the first 5 a number, or is it an object (used with the . operator)?
There is a similar problem with the second 5.
Maybe there are some other reasons too. So, we shouldn't use a digit at the beginning of a variable name.
The restriction is arbitrary. Various Lisps permit symbol names to begin with numerals.
COBOL allows variables to begin with a digit.
Use of a digit to begin a variable name makes error checking during compilation or interpretation a lot more complicated.
Allowing variable names that begin like a number would probably have caused huge problems for the language designers. During source code parsing, whenever a compiler/interpreter encountered a token beginning with a digit where a variable name was expected, it would have to search through a huge, complicated set of rules to determine whether the token was really a variable, or an error. The complexity added to the language parser may not justify this feature.
As far back as I can remember (about 40 years), I don't think that I have ever used a language that allowed use of a digit to begin variable names. I'm sure that this was done at least once. Maybe, someone here has actually seen this somewhere.
As several people have noticed, there is a lot of historical baggage about valid formats for variable names. And language designers are always influenced by what they know when they create new languages.
That said, pretty much all of the time a language doesn't allow variable names to begin with numbers is because those are the rules of the language design. Often it is because such a simple rule makes the parsing and lexing of the language vastly easier. Not all language designers know this is the real reason, though. Modern lexing tools help, because if you tried to define it as permissible, they will give you parsing conflicts.
OTOH, if your language has a uniquely identifiable character to herald variable names, it is possible to set it up so they can begin with a number. Similar rule variations can also be used to allow spaces in variable names. But the resulting language is likely not to resemble any popular conventional language very much, if at all.
For an example of a fairly simple HTML templating language that does permit variables to begin with numbers and have embedded spaces, look at Qompose.
Because if you allowed keywords and identifiers to begin with numeric characters, the lexer (part of the compiler) couldn't readily differentiate between the start of a numeric literal and a keyword without getting a whole lot more complicated (and slower).
C++ can't have it because the language designers made it a rule. If you were to create your own language, you could certainly allow it, but you would probably run into the same problems they did and decide not to allow it. Examples of variable names that would cause problems:
0x, 2d, 5555
One of the key problems about relaxing syntactic conventions is that it introduces cognitive dissonance into the coding process. How you think about your code could be deeply influenced by the lack of clarity this would introduce.
Wasn't it Dijkstra who said that the "most important aspect of any tool is its effect on its user"?
The compiler has seven phases, as follows:
Lexical analysis
Syntax Analysis
Semantic Analysis
Intermediate Code Generation
Code Optimization
Code Generation
Symbol Table
Backtracking is avoided in the lexical analysis phase while compiling the piece of code. For a variable like Apple, the compiler will know it's an identifier right away when it meets the letter 'A' in the lexical analysis phase. However, for a variable like 123apple, the compiler won't be able to decide whether it's a number or an identifier until it hits 'a', and it would need to backtrack within the lexical analysis phase to identify that it is a variable. But that is not supported in the compiler.
When you’re parsing the token you only have to look at the first character to determine if it’s an identifier or literal and then send it to the correct function for processing. So that’s a performance optimization.
Probably because it makes it easier for the human to tell whether it's a number or an identifier, and because of tradition. Having identifiers that could begin with a digit wouldn't complicate the lexical scans all that much.
Not all languages have forbidden identifiers beginning with a digit. In Forth, they could be numbers, and small integers were normally defined as Forth words (essentially identifiers), since it was faster to read "2" as a routine to push a 2 onto the stack than to recognize "2" as a number whose value was 2. (In processing input from the programmer or the disk block, the Forth system would split up the input according to spaces. It would try to look the token up in the dictionary to see if it was a defined word, and if not would attempt to translate it into a number, and if not would flag an error.)
Suppose you did allow symbol names to begin with numbers. Now suppose you want to name a variable 12345foobar. How would you differentiate this from 12345? It's actually not terribly difficult to do with a regular expression. The problem is actually one of performance. I can't really explain why this is in great detail, but it essentially boils down to the fact that differentiating 12345foobar from 12345 requires backtracking. This makes the regular expression non-deterministic.
There's a much better explanation of this here.
It is easier for a compiler to identify a variable by the ASCII characters at a memory location than by a number.
I think the simple answer is that it can, the restriction is language based. In C++ and many others it can't because the language doesn't support it. It's not built into the rules to allow that.
The question is akin to asking why can't the King move four spaces at a time in Chess? It's because in Chess that is an illegal move. Can it in another game sure. It just depends on the rules being played by.
Originally it was simply because variable names are easier to remember (you can give them more meaning) as strings rather than as numbers, although numbers can be included within the string to enhance its meaning or to allow the use of the same variable name while marking it as having a separate but closely related meaning or context. For example, loop1, loop2, etc. would always let you know that you were in a loop and/or that loop2 was a loop within loop1.
Which would you prefer (has more meaning) as a variable: address or 1121298? Which is easier to remember?
However, if the language uses something to denote that it not just text or numbers (such as the $ in $address) it really shouldn't make a difference as that would tell the compiler that what follows is to be treated as a variable (in this case).
In any case it comes down to what the language designers want to use as the rules for their language.
The compiler may also treat the variable as a value at compile time, so the value could end up referring to itself again and again, recursively.
There might be nothing wrong with it when it comes to declaring the variable, but there is some ambiguity when that variable is used somewhere else, like this:
let 1 = "Hello world!"
print(1)
print(1)
print is a generic method that accepts all types of variables, so in that situation the compiler does not know which 1 the programmer is referring to: the 1 with an integer value, or the 1 that stores a string value.
Maybe it would be better for the compiler in this situation to allow such a definition, but then, when the ambiguous name is used, to raise an error along with a suggestion for how to fix it and clear up the ambiguity.