This question already has answers here:
uint8_t can't be printed with cout
(8 answers)
Closed 2 years ago.
Is it possible to get the decimal value of a char? I know I can use short(char), but that would waste memory, which is why I used char in the first place: I only want 8 bits of data, so I use char (similar to byte in C#). In my program, when I print it, it always shows some weird character corresponding to the decimal value; I want it to show the decimal value itself. So is there a C++ equivalent of C#'s char.GetNumericValue()?
You can use one of the integer types described in the link.
If the problem is only the printing, you can use std::cout with this syntax:
char a = 45;
cout << +a; // promotes a to a type printable as a number, regardless of type.
This works as long as the type provides a unary + operator with ordinary semantics. If you are defining a class that represents a number and want to provide a unary + operator with canonical semantics, create an operator+() that simply returns *this, either by value or by reference-to-const.
Further reading in the c++-faq: print-char-or-ptr-as-number
This question already has answers here:
Using an escape sequence that can't fit in its related type
(1 answer)
Unicode encoding for string literals in C++11
(1 answer)
Why C++ returns wrong codes of some characters, and how to fix this?
(1 answer)
Is '\u0B95' a multicharacter literal?
(4 answers)
Why is there no ASCII or UTF-8 character literal in C11 or C++11?
(5 answers)
Closed 5 months ago.
I am learning C++ using the books listed here. In particular, I learnt that
The signedness of char depends on the compiler and the target platform
This means that on one implementation/platform char might be signed and on another it might be unsigned. In other words, we cannot portably write char ch = 228;, because on a system where char is signed, 228 is out of range. For example, if you look at this demo you'll see that we get a warning in clang.
Then I was surprised to learn that the type of '\xe4' is char and not unsigned char. I was surprised because \xe4 corresponds to 228, which is out of range on a system where char is signed. So I expected the type of '\xe4' to be unsigned char.
Thus, my question is: why did the standard choose to define the type of '\xe4' as char instead of unsigned char? I mean, \xe4 is in range for unsigned char but out of range for char (on a system where char is signed). So it seems natural/intuitive to me that unsigned char should have been used as the type of '\xe4', so that it wouldn't be platform/implementation dependent.
Note
Note that I am trying to make sense of what is happening here, and my current understanding might be wrong. I was curious about this, so I've asked this question to clarify my understanding, as I've just started learning C++.
Note also that my question is not about whether we can portably write char ch = 228;, but rather why the type of '\xe4' was chosen to be char instead of unsigned char.
Summary
Why is the type of a character literal char, even when the value of the literal falls outside the range of char? Wouldn't it make more sense to allow the type to be unsigned char where the value fits that range?
By language (C++) definition: an ordinary character literal has type char. Link to cppreference. It can be found in the standard under lex.ccon.
But for char, basic.fundamental #7 states:
Type "char" is a distinct type that has an implementation-defined choice of “signed char” or “unsigned char” as its underlying type.
Which again says that it depends on the implementation. This is also stated in other answers, e.g. Why is 'char' signed by default in C++? (It isn't)
And think about what the impact would be if a character literal in the extended ASCII range (normal ASCII is 7-bit) were automatically promoted to unsigned char... that would cause all sorts of difficult issues, e.g. what should happen when you append an unsigned char to a string of signed char?
But in the end, how important is it what the underlying type is? It's not that char c = '\xe4'; is UB: it's perfectly defined behavior, as the character literal gets converted to the same char type, negative or not. In operations on strings it doesn't matter much that chars can be negative, as stated in this answer. However, when sorting strings it will matter.
This question already has answers here:
Why does sizeof(x++) not increment x?
(10 answers)
Closed 8 months ago.
The value of i is 1, but why is i still 1 after sizeof(i++)? I only know that sizeof is an operator.
#include <iostream>

int main() {
    int i = 1;
    sizeof(i++);
    std::cout << i << std::endl; // 1
}
sizeof does not evaluate its operand. It determines the operand's type and returns that type's size. It is not necessary to evaluate the operand in order to determine its size. This is actually one of the fundamental core principles of C++: the types of all objects -- and by consequence of all expressions -- are known at compile time.
And since the operand does not get evaluated, there are no side effects from its evaluation. If you apply sizeof to an expression containing a function call, the function does not actually get called: sizeof(hello_world()) won't call the function, but just determines the size of whatever it returns.
This question already has answers here:
With arrays, why is it the case that a[5] == 5[a]?
(20 answers)
Closed 6 years ago.
Why does C++ allow the following statement?
int a[10] = { 0 };
std::cout << 1[a] << std::endl;
std::cout << a[1] << std::endl;
Both lines print zero and no compiler warning is generated. Shouldn't 1[a] be illegal, as 1 is not an array and a is not an integer type?
Code example : http://cpp.sh/4tan
It is because of pointer arithmetic:
a[1] == *(a+1) == *(1+a) == 1[a];
Quoting the standard (§8.3.4; point 6):
Except where it has been declared for a class, the subscript operator [] is interpreted in such a way that E1[E2] is identical to *((E1)+(E2)). Because of the conversion rules that apply to +, if E1 is an array and E2 an integer, then E1[E2] refers to the E2-th member of E1. Therefore, despite its asymmetric appearance, subscripting is a commutative operation.
Note that when you write a[1], the compiler interprets it as *(a+1). You are still referring to the same array a when you write 1[a], so the compiler is in fact still doing type checking.
Both are fine, since under the covers it's all just pointer arithmetic. Taking the address of something and adding something else to it (a[1]) is exactly the same as taking something else and adding an address to it (1[a]): the final address of the object you refer to is the same. One notation is just more intuitive to humans than the other.
This question already has answers here:
Can I call memcpy() and memmove() with "number of bytes" set to zero?
(2 answers)
Closed 7 years ago.
Is there a problem in passing 0 to memcpy():
memcpy(dest, src, 0)
Note that in my code I am using a variable instead of 0, which can sometimes evaluate to 0.
As one might expect from a sane interface, zero is a valid size, and results in nothing happening. It's specifically allowed by the specification of the various string handling functions (including memcpy) in C99 7.21.1/2:
Where an argument declared as size_t n specifies the length of the array for a function, n can have the value zero on a call to that function. [...] On such a call, a function that locates a character finds no occurrence, a function that compares two character sequences returns zero, and a function that copies characters copies zero characters.
Yes, it's totally OK. Note that dest and src must still be valid pointers even when the count is zero, and, as with any memcpy call, the memory areas must not overlap.
This question already has answers here:
Why is the data type needed in pointer declarations?
(8 answers)
Closed 8 years ago.
A pointer stores the address of the variable it is pointing to. But why can't a pointer of one type point to the address of a variable of another type?
For example, why does the following code below give me an error?
int main()
{
int *i;
float a;
i=&a; //this statement gives me an error
}
On a 32-bit machine, an address is normally four bytes, regardless of what it points to. The pointer's declared type tells the compiler how to interpret the bytes at that address.
In this case, i is a pointer to int, so dereferencing it would read four bytes and interpret them in integer format. But a is a floating-point variable, so its value is stored in floating-point format (an exponent and a mantissa). If you accessed it through a pointer of a different data type, the bits would be reinterpreted in the wrong format, and you would get a garbage value. That is why the compiler rejects the assignment.
You have declared i as an int and a as a float.
When you point to a variable, the data types must match: you can point to a only if a is an int variable.
Change the data type of a to int and run the program.
you can refer to this link.
A pointer can point to any type if it is a void pointer.
But a void pointer cannot be dereferenced. Only when the compiler knows the data type of the pointed-to object can it dereference the pointer and perform operations.
Well, it is actually very funny: I was writing the code and got this kind of error. At the time I figured out the reason myself, but then forgot, and as soon as I posted the question the answer came back to me.
In the above code, i is a pointer variable of type int. On a typical platform, int and float both occupy 4 bytes (though this depends on the platform), so the problem is not size but representation: a float stores its value as an exponent and mantissa, so if i were allowed to point to a and be dereferenced as an int, the same bits would be interpreted in the wrong format and yield a garbage value. That is why the compiler rejects the assignment without an explicit cast.