I wish to understand the situation regarding uint8_t vs char, portability, bit-manipulation, the best practices, state of affairs, etc. Do you know a good reading on the topic?
I wish to do byte I/O. But of course char has a more complicated and subtle definition than uint8_t, which I assume was one of the reasons for introducing the stdint header.
However, I had problems using uint8_t on multiple occasions. A few months ago, once, because iostreams are not defined for uint8_t. Isn't there a C++ library that does really well-defined byte I/O, i.e. reads and writes uint8_t? If not, I assume there is no demand for it. Why?
My latest headache stems from the failure of this code to compile:
uint8_t read(decltype(cin) & s)
{
char c;
s.get(c);
return reinterpret_cast<uint8_t>(c);
}
error: invalid cast from type 'char' to type 'uint8_t {aka unsigned char}'
Why the error? How to make this work?
The general, portable, roundtrip-correct way would be to:
demand in your API that all byte values can be expressed with at most 8 bits,
use the layout-compatibility of char, signed char and unsigned char for I/O, and
convert unsigned char to uint8_t as needed.
For example:
#include <cstdint>
#include <istream>
#include <ostream>

bool read_one_byte(std::istream & is, uint8_t * out)
{
    unsigned char x;  // a "byte" on your system

    if (is.get(reinterpret_cast<char &>(x)))
    {
        *out = x;
        return true;
    }
    return false;
}

bool write_one_byte(std::ostream & os, uint8_t val)
{
    unsigned char x = val;
    // The stream's operator bool is explicit, hence the static_cast in the return.
    return static_cast<bool>(os.write(reinterpret_cast<char const *>(&x), 1));
}
Some explanation: Rule 1 guarantees that values can be round-trip converted between uint8_t and unsigned char without losing information. Rule 2 means that we can use the iostream I/O operations on unsigned char variables, even though they're expressed in terms of chars.
We could also have used is.read(reinterpret_cast<char *>(&x), 1) instead of is.get() for symmetry. (Using read in general, for stream counts larger than 1, also requires the use of gcount() on error, but that doesn't apply here.)
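As a hedged illustration of that last remark (my sketch, not part of the answer; it assumes <cstddef> is included alongside the headers above): read up to n bytes and report how many actually arrived, which is exactly what gcount() is for.

std::size_t read_bytes(std::istream & is, uint8_t * buf, std::size_t n)
{
    // read() may stop short at end of stream; gcount() says how much we got.
    is.read(reinterpret_cast<char *>(buf), static_cast<std::streamsize>(n));
    return static_cast<std::size_t>(is.gcount());
}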
As always, you must never ignore the return value of I/O operations. Doing so is always a bug in your program.
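For completeness, a minimal usage sketch (again mine, not the answer's): round-trip one byte through a std::stringstream and check every I/O result.

#include <cassert>
#include <sstream>

void roundtrip_demo()
{
    std::stringstream ss;
    if (!write_one_byte(ss, 0xAB))
        return;                        // write failed: bail out (or report the error)
    uint8_t back = 0;
    if (!read_one_byte(ss, &back))
        return;                        // read failed: bail out (or report the error)
    assert(back == 0xAB);              // the byte survived the round trip
}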
A few months ago, once, because iostreams are not defined for uint8_t.
uint8_t is pretty much just a typedef for unsigned char. In fact, I doubt you could find a machine where it isn't.
uint8_t read(decltype(cin) & s)
{
char c;
s.get(c);
return reinterpret_cast<uint8_t>(c);
}
Using decltype(cin) instead of std::istream has no advantage at all, it is just a potential source of confusion.
The cast in the return-statement isn't necessary; converting a char into an unsigned char works implicitly.
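Putting those two remarks together, a fixed version of the function might look like this (a sketch only; checking the stream state is still the caller's job):

uint8_t read(std::istream & s)
{
    char c = 0;
    s.get(c);      // the result of get() should really be checked
    return c;      // implicit conversion char -> uint8_t; no cast needed
}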
A few months ago, once, because iostreams are not defined for uint8_t.
They are. Not for uint8_t itself, but most certainly for the type it actually represents. operator>> is overloaded for unsigned char. This code works:
uint8_t read(std::istream& s)
{
    // Note: get() returns an int; at end of stream it returns EOF,
    // which wraps to 0xFF here, so check the stream state as well.
    return s.get();
}
Since unsigned char and char can alias each other, you can also just reinterpret_cast any pointer to a char string to an unsigned char * and work with that.
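For instance, a minimal sketch of that aliasing cast (my example, not from the answer):

char buffer[] = "MZ";             // plain char data from somewhere
unsigned char * bytes = reinterpret_cast<unsigned char *>(buffer);
unsigned int first = bytes[0];    // same storage, read as an unsigned value (0x4D)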
In case you want the most portable way possible, take a look at Kerrek's answer.
Related
I'm writing a program that reads the contents of a binary file (specifically a Windows PE file; see the Wikipedia page and the detailed PE structure reference).
Because of the binary data in the file, the characters often "fall out" of the ASCII range (0-127), and that results in negative values.
To make sure I won't work with unwanted negative values, I can either pass const unsigned char * or convert the resulting char in the calculation to unsigned char.
On one hand, passing const unsigned char * makes sense because the data is non-ASCII and has a numeric value, and thus should be treated as positive.
In addition, it'll let me perform calculations without the need to cast the result to unsigned char.
On the other hand, I can't pass constant strings (const char *, such as pre-defined strings "MZ", "PE\0\0" etc.) to functions without first casting them to const unsigned char *.
What would be the better approach or best-practice in this scenario?
I think I'd use unsigned char, but avoid casting, and instead define a little class named ustring (or something similar). You have a couple of choices with that. One would be to instantiate std::basic_string over unsigned char. This can be useful (it gives you all of std::string's functionality, but with unsigned chars instead of chars). The obvious disadvantage is that it's probably overkill, and it has essentially no compatibility with std::string, even though it's almost exactly the same thing.
The other obvious possibility would be to define your own class. Since you apparently care mostly about string literals, I'd probably go this way. The class would be initialized with a string literal, and it would just hold a pointer to the string, but as unsigned char * instead of just char *.
Then there's one more step to make life better: define a user defined literal operator named something like _us, so creating an object of your type from a string literal will look something like this: auto DOS_sig = "MZ"_us;
#include <cstddef>
#include <cstring>

class ustring {
    unsigned char const *data;
    std::size_t len;
public:
    ustring(unsigned char const *s, std::size_t len)
        : data(s)
        , len(len)
    {}
    operator unsigned char const *() const { return data; }
    bool operator==(ustring const &other) const {
        // note: memcmp treats what you pass it as unsigned chars.
        return len == other.len && 0 == std::memcmp(data, other.data, len);
    }
    // you probably want to add more stuff here.
};

ustring operator"" _us(char const *s, std::size_t len) {
    return ustring(reinterpret_cast<unsigned char const *>(s), len);
}
If I'm not mistaken, this should be pretty easy to work with. For example, let's assume you've memory mapped what you think is a PE file, with its base address at mapped_file. To see if it has a DOS signature, you might do something like this:
if (ustring(&mapped_file[0], 2) == "MZ"_us)
std::cerr << "File appears to be an executable.\n";
else
std::cerr << "file does not appear to be an executable.\n";
Caution: I haven't tested this, so fencepost errors and such are likely. For example, note that the length passed to the user-defined literal operator does not include the NUL terminator, so "MZ"_us has length 2. This isn't intended to represent finished code, just a sketch of a general direction that might be useful to explore.
I have the following code:
int some_array[256] = { ... };
int do_stuff(const char* str)
{
int index = *str;
return some_array[index];
}
Apparently the above code causes a bug on some platforms, because *str can in fact be negative.
So I thought of two possible solutions:
Casting the value on assignment (unsigned int index = (unsigned char)*str;).
Passing const unsigned char* instead.
Edit: The rest of this question did not get a treatment, so I moved it to a new thread.
The signedness of char is indeed platform-dependent, but what you do know is that there are as many values of char as there are of unsigned char, and the conversion is injective. So you can absolutely cast the value to associate a lookup index with each character:
unsigned char idx = *str;
return arr[idx];
You should of course make sure that the arr has at least UCHAR_MAX + 1 elements. (This may cause hilarious edge cases when sizeof(unsigned long long int) == 1, which is fortunately rare.)
Characters are allowed to be signed or unsigned, depending on the platform. An assumption of unsigned range is what causes your bug.
Your do_stuff code does not treat const char* as a string representation. It uses it as a sequence of byte-sized indexes into a look-up table. Therefore, there is nothing wrong with forcing unsigned char type on the characters of your string inside do_stuff (i.e. use your solution #1). This keeps re-interpretation of char as an index localized to the implementation of do_stuff function.
Of course, this assumes that other parts of your code do treat str as a C string.
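For illustration, here is a sketch of do_stuff with that re-interpretation kept local (solution #1 from the question):

int some_array[256] = { /* ... */ };   // assumes 8-bit char; in general the table needs UCHAR_MAX + 1 entries

int do_stuff(const char* str)
{
    unsigned char index = static_cast<unsigned char>(*str);   // force the unsigned view here only
    return some_array[index];
}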
I thought all types were signed unless otherwise specified (like int). I was surprised to find that for char it's actually implementation-defined:
... It is implementation-defined whether a char object can hold
negative values. ... In any particular implementation, a plain char
object can take on either the same values as a signed char or an
unsigned char; which one is implementation-defined.
However std::string is really just std::basic_string<char, ...>.
Can the semantics of this program change from implementation?
#include <string>
int main()
{
char c = -1;
std::string s{1, c};
}
Yes and no.
Since a std::string contains objects of type char, the signedness of type char can affect its behavior.
The program in your question:
#include <string>
int main()
{
char c = -1;
std::string s{1, c};
}
has no visible behavior (unless terminating without producing any output is "behavior"), so its behavior doesn't depend on the signedness of plain char. A compiler could reasonably optimize out the entire body of main. (I'm admittedly nitpicking here, commenting on the code example you picked rather than the question you're asking.)
But this program:
#include <iostream>
#include <string>
int main() {
std::string s = "xx";
s[0] = -1;
s[1] = +1;
std::cout << "Plain char is " << (s[0] < s[1] ? "signed" : "unsigned") << "\n";
}
will correctly print either Plain char is signed or Plain char is unsigned.
Note that a similar program that compares two std::string objects using that type's operator< does not distinguish whether plain char is signed or unsigned, since < treats the characters as if they were unsigned, similar to the way C's memcmp works.
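A small demonstration of that contrast (my sketch, not part of the answer): the element comparison may flip with the signedness of plain char, but the std::string comparison cannot.

#include <iostream>
#include <string>

int main()
{
    std::string a = "x";
    std::string b = "x";
    a[0] = static_cast<char>(0xFF);   // typically -1 when char is signed, 255 when unsigned
    b[0] = 1;

    // Element-wise: the result depends on whether plain char is signed.
    std::cout << (a[0] < b[0] ? "a[0] < b[0]\n" : "a[0] >= b[0]\n");

    // Whole strings: char_traits<char> compares as unsigned, so this always prints "a > b".
    std::cout << (a < b ? "a < b\n" : "a > b\n");
}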
But this shouldn't matter 99% of the time. You almost certainly have to go out of your way to write code whose behavior depends on the signedness of char. You should keep in mind that it's implementation-defined, but if the signedness matters, you should be using signed char or (more likely) unsigned char explicitly. char is a numeric type, but you should use it only to hold character data.
I am using libxml2 and ICU in the same project. They represent
UTF8 differently. libxml2 uses unsigned char*, and ICU constructors take in plain char* (which on my Pentium 64-bit is equivalent to signed char).
Question: how do I convert between the two? Can I just
use static_cast?
I understand that UTF8 only cares that the underlying data
type be at least 8 bits long. Both signed char and unsigned
char satisfy this. I am just wondering if there is any
gotcha here? Any corner cases?
EDIT: at my compiler's (g++/Gentoo) insistence, only reinterpret_cast can do this conversion (without relying on the C-style cast). Let's say we have two unsigned char strings: 0000 and 1000. The conversion will turn them both into 0. Is this possible under UTF8?
Some libraries use char for storing UTF-8, others use unsigned char.
In this case you may need to cast between char* and unsigned char* using reinterpret_cast, since these types have the same storage unit size and alignment. E.g.:
char const* s = ...;
unsigned char const* p = reinterpret_cast<unsigned char const*>(s);
A chain of static_casts through an intermediate void* achieves the same conversion for object pointers, e.g. char* -> void* -> unsigned char*:
char const* s = ...;
void const* intermediate = s;
unsigned char const* p = static_cast<unsigned char const*>(intermediate);
If unsigned char* is just a pointer to a string it should not cause any problem.
It should not matter. In any case, as soon as you need to extract a character from the char * or unsigned char * stream, you will need a function provided by the library that extracts an int and updates the pointer/iterator in a manner that is opaque to you (the caller).
Thanks all. Mike said it best: the difference that makes no difference, and "a byte is a byte is a byte".
When I try the following, I get an error:
unsigned char * data = "00000000"; //error: cannot convert 'const char *' to 'unsigned char *'
Is there a special way to do this which I'm missing?
Update
For the sake of brevity, I'll explain what I'm trying to achieve:
I'd like to create a StringBuffer in C++ which uses unsigned values for raw binary data. It seems that unsigned char is the best way to accomplish this. Is there a better method?
std::vector<unsigned char> data(8, '0');
Or, if the data is not uniform:
auto & arr = "abcdefg";
std::vector<unsigned char> data(arr, arr + sizeof(arr) - 1);
Or, so you can assign directly from a literal (note that this relies on your standard library providing a generic std::char_traits; the standard only requires specializations for the character types, so it is not guaranteed to be portable):
std::basic_string<unsigned char> data = reinterpret_cast<const unsigned char *>("abcdefg");
Yes, do this:
const char *data = "00000000";
A string literal is an array of char, not unsigned char.
If you need to pass this to a function that takes const unsigned char *, well, you'll need to cast it (static_cast won't convert between these unrelated pointer types, so use reinterpret_cast):
foo(reinterpret_cast<const unsigned char *>(data));
You have many ways. One is to write:
const unsigned char *data = (const unsigned char *)"00000000";
Another, which is more recommended, is to declare data as it should be:
const char *data = "00000000";
And when you pass it to your function:
myFunc((const unsigned char *)data);
Note that, in general, a string of unsigned char is unusual. An array of unsigned chars is more common, but you wouldn't normally initialize it with a string literal ("00000000").
Response to your update
If you want raw binary data, first let me tell you that instead of unsigned char you may be better off using bigger units, such as long int or long long. This is because when you perform operations on the binary data (which is an array), you can work on 4 or 8 bytes at a time, which is a speed boost.
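As a hedged illustration of that speed argument (my own sketch, and only one of several ways to get the effect): you can keep the storage as unsigned char and still touch 8 bytes per step by copying through a fixed-width integer such as std::uint64_t.

#include <cstddef>
#include <cstdint>
#include <cstring>

// Sketch: true if every byte of buf[0..n) is zero, checking 8 bytes at a time.
bool all_zero(unsigned char const * buf, std::size_t n)
{
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8)
    {
        std::uint64_t word;
        std::memcpy(&word, buf + i, 8);   // alignment-safe 8-byte load
        if (word != 0)
            return false;
    }
    for (; i < n; ++i)                    // leftover tail, byte by byte
        if (buf[i] != 0)
            return false;
    return true;
}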
Second, if you want your class to represent binary values, don't initialize it with a string, but with individual values. In your case would be:
unsigned char data[] = {0x30, 0x30, 0x30, 0x30, /* etc */};
Note that I assume you are storing binary as binary! That is, you pack 8 bits into each unsigned char. If, on the other hand, you mean binary as in a string of '0' and '1' characters (which is not really a good idea), then you don't really need unsigned char; plain char is sufficient.
unsigned char data[] = "00000000";
This will copy "00000000" into an unsigned char[] buffer, which also means that the buffer won't be read-only like a string literal.
The reason why the way you're doing it won't work is that you are pointing data at a string literal, whose type is an array of const char, so data would have to be a const char *. You can't do that without explicitly casting "00000000", such as: (unsigned char*)"00000000".
Note that string literals are of type const char[]; if you cast away the const and try to modify them, you will cause undefined behaviour - often an access violation error.
You're trying to initialize a pointer to unsigned char with a string literal. You cannot do that: the literal is an array of const char, and its address does not convert to unsigned char *.
Use const char * instead.
Your target variable is a pointer to an unsigned char. "00000000" is a string literal. Its type is const char[9]. You have two type mismatches here. One is that unsigned char and char are different types. The lack of a const qualifier is also a big problem.
You can do this:
unsigned char * data = (unsigned char *)"00000000";
But this is something you should not do. Ever. Casting away the constness of a string literal will get you in big trouble some day.
The following is a little better, but strictly speaking it is still unspecified behavior (maybe undefined behavior; I don't want to chase down which it is in the standard):
const unsigned char * data = (const unsigned char *)"00000000";
Here you are preserving the constness, but you are changing the pointed-to type from const char to const unsigned char.
@Holland -
unsigned char * data = "00000000";
One very important point I'm not sure we're making clear: the string "00000000\0" (9 bytes, including delimiter) might be in READ-ONLY MEMORY (depending on your platform).
In other words, if you defined your variable ("data") this way, and you passed it to a function that might try to CHANGE "data" ... then you could get an ACCESS VIOLATION.
The solution is:
1) declare as "const char *" (as the others have already said)
... and ...
2) TREAT it as "const char *" (do NOT modify its contents, or pass it to a function that might modify its contents).
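A short sketch tying those two rules together (my illustration; tweak is a made-up stand-in for any function that modifies bytes): keep the literal const, and take an explicit writable copy when modification is genuinely needed.

#include <cstddef>
#include <cstring>

// Hypothetical function that needs to modify the bytes it is given.
void tweak(unsigned char * bytes, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        bytes[i] ^= 0x20;                 // arbitrary change, just for the example
}

void demo()
{
    const char * data = "00000000";       // read-only: never write through this pointer

    unsigned char copy[9];                // writable copy, including the terminating '\0'
    std::memcpy(copy, data, sizeof copy);
    tweak(copy, 8);                       // safe: we own this buffer
}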