Casting an array of unsigned chars to an array of floats - c++

What is the best way of converting an unsigned char array to a float array in C++?
I presently have a for loop as follows
for (i = 0; i < len; i++)
    float_buff[i] = (float) char_buff[i];
I also need to reverse the procedure, i.e. convert from float back to unsigned char (float to 8-bit conversion):
for (i = 0; i < len; i++)
    char_buff[i] = (unsigned char) float_buff[i];
Any advice would be appreciated
Thanks

I think the best way is to use a function object:
template <typename T> // T models Any
struct static_cast_func
{
    template <typename T1> // T1 models type statically convertible to T
    T operator()(const T1& x) const { return static_cast<T>(x); }
};
followed by:
std::transform(char_buff, char_buff + len, float_buff, static_cast_func<float>());
std::transform(float_buff, float_buff + len, char_buff, static_cast_func<unsigned char>());
This is the most readable because it says what is being done in English: transforming a sequence into a different type using static casting. And future casts can be done in one line.
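If your compiler supports C++11 or later, the same transforms can be written with lambdas, so the helper struct isn't even needed (a sketch, reusing the char_buff/float_buff buffers and len from the question):
#include <algorithm>

std::transform(char_buff, char_buff + len, float_buff,
               [](unsigned char c) { return static_cast<float>(c); });
std::transform(float_buff, float_buff + len, char_buff,
               [](float f) { return static_cast<unsigned char>(f); });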

Your solution is pretty much the best option; however, I would consider switching to:
char_buff[i]= static_cast<unsigned char>(float_buff[i]);

The conversion is automatic, so you don't need to make it explicit.
But you can use the standard algorithms:
std::copy(char_buff,char_buff+len,float_buff);
When converting back from float to char there is a potential loss of information, so you need to be more explicit:
std::transform(float_buff,float_buff+len,char_buff,MyTransform());
Here we use the class MyTransform, which should have an operator() that takes a float and returns a char. That should be trivial to implement.
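For completeness, a minimal sketch of what that MyTransform functor could look like (the truncation mirrors the original cast; clamp or round here if you need a different policy):
struct MyTransform {
    unsigned char operator()(float f) const {
        return static_cast<unsigned char>(f); // truncates, as the original loop did
    }
};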

Your solution seems right, though on the way back you might lose the fractional digits in the cast.

For what purpose are you doing this? Shoving a float into a char doesn't really make sense. On most platforms a float will be 4 bytes and represent a floating point number, whereas a char will be 1 byte and often represents a single character. You'll lose 3 bytes of data trying to shove a float into a char, right?

Your first loop doesn't require a cast. You can implicitly convert from one type (e.g., unsigned char) to a wider type (e.g., float). Your second loop should use static_cast:
for (i = 0; i < len; i++)
    char_buff[i] = static_cast<unsigned char>(float_buff[i]);
We use static_cast to explicitly tell the compiler to do the conversion to a narrower type. If you don't use the cast, your compiler might warn you that the conversion could lose data. The presence of the cast operator means that you understand you might lose data precision and you're ok with it. This is not an appropriate place to use reinterpret_cast. With static_cast, you at least have some restrictions on what conversions you can do (e.g., it probably won't let you convert a Bird* to a Nuclear_Submarine*). reinterpret_cast has no such restrictions.
Also, here's what Bjarne Stroustrup has to say about this subject.
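To illustrate that restriction (Bird and Nuclear_Submarine are just the stand-in types from the answer above):
struct Bird {};
struct Nuclear_Submarine {};

void example(Bird* b) {
    // auto s1 = static_cast<Nuclear_Submarine*>(b);   // error: unrelated types
    auto s2 = reinterpret_cast<Nuclear_Submarine*>(b); // compiles, but almost certainly a bug
    (void)s2;
}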

If you are dealing with very large arrays and performance is essential then the following may prove slightly more efficient:
float *dst = float_buff;
unsigned char *src = char_buff;
for (i=0; i<len; i++) *dst++ = (float)*src++;

No one has mentioned this, but if you're doing any arithmetic with the floats, you may want to round instead of truncate... if you have the char 49, and it gets turned into 4.9E1, after some arithmetic, it might turn into 4.89999999E1 which will turn into 48 when you turn it back into a char.
If you're not doing anything with the float, this shouldn't be a problem, but then why do you need it as a float in the first place?
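If rounding is what you want, one way to do the float-to-char direction is with std::lround from <cmath> (a sketch reusing the buffer names from the question; it assumes the values are already in the 0-255 range):
#include <cmath>

for (i = 0; i < len; i++)
    char_buff[i] = static_cast<unsigned char>(std::lround(float_buff[i]));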

Related

Converting Integer Types

How does one convert from one integer type to another safely and without setting off alarm bells in compilers and static analysis tools?
Different compilers will warn for something like:
int i = get_int();
size_t s = i;
for loss of signedness or
size_t s = get_size();
int i = s;
for narrowing.
Casting can remove the warnings but doesn't solve the safety issue.
Is there a proper way of doing this?
You can try boost::numeric_cast<>.
boost::numeric_cast returns the result of converting a value of type Source to a value of type Target. If out-of-range is detected, an exception is thrown (see bad_numeric_cast, negative_overflow and positive_overflow).
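A minimal usage sketch (assuming Boost is available; get_size() is the hypothetical function from the question):
#include <cstddef>
#include <boost/numeric/conversion/cast.hpp>

std::size_t s = get_size();
try {
    int i = boost::numeric_cast<int>(s); // throws positive_overflow if s > INT_MAX
    // use i ...
} catch (const boost::numeric::bad_numeric_cast& e) {
    // out-of-range: handle or report the error
}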
How does one convert from one integer type to another safely and without setting off alarm bells in compilers and static analysis tools?
Control when conversion is needed. Where possible, only convert when there is no value change. Sometimes one must step back and code at a higher level. In other words: was a lossy conversion really needed, or can the code be re-worked to avoid the loss?
It is not hard to add an if(). The test just needs to be carefully formed.
Example where a size_t n and an int len need comparing. Note that the positive range of int may exceed that of size_t, or vice versa, or they may be the same. Note that in this case the conversion of int to unsigned only happens for non-negative values, so there is no value change.
int len = snprintf(buf, n, ...);
if (len < 0 || (unsigned)len >= n) {
    // Handle_error();
}
An unsigned-to-int example, for when it is known that the unsigned value at this point in the code is less than or equal to INT_MAX:
unsigned n = ...
int i = n & INT_MAX;
Good analysis tools see that n & INT_MAX always converts into int without loss.
There is no built-in safe narrowing conversion between integer types in C++ or the standard library. You could implement one yourself, using Microsoft's GSL as an example.
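For reference, a rough sketch of such a checked narrowing helper, in the spirit of GSL's gsl::narrow (this is not the GSL implementation, just an illustration):
#include <stdexcept>

// Convert value to To, throwing if the value (or its sign) does not survive
// the round trip - i.e., if the narrowing would silently change it.
template <typename To, typename From>
To narrow(From value) {
    const To result = static_cast<To>(value);
    const bool sign_flipped = (value < From{}) != (result < To{});
    if (static_cast<From>(result) != value || sign_flipped)
        throw std::runtime_error("narrowing conversion changed the value");
    return result;
}
With something like this, int i = narrow<int>(get_size()); either yields the exact value or throws instead of silently truncating.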
Theoretically, if you want perfect safety, you shouldn't be mixing types like this at all. (And you definitely shouldn't be using explicit casts to silence warnings, as you know.) If you've got values of type size_t, it's best to always carry them around in variables of type size_t.
There is one case where I do sometimes decide I can accept less than 100.000% perfect type safety, and that is when I assign sizeof's return value, which is a size_t, to an int. For any machine I am ever going to use, the only time this conversion might lose information is when sizeof returns a value greater than 2147483647. But I am content to assume that no single object in any of my programs will ever be that big. (In particular, I will unhesitatingly write things like printf("sizeof(int) = %d\n", (int)sizeof(int)), explicit cast and all. There is no possible way that the size of a type like int will not fit in an int!)
[Footnote: Yes, it's true, on a 16-bit machine the assumption is the rather less satisfying threshold that sizeof won't return a value greater than 32767. It's more likely that a single object might have a size like that, but probably not in a program that's running on a 16-bitter.]

Best way to define a number as unsigned char in C++

I had some code that when simplified is essentially this
unsigned char a=255;
unsigned char b=0;
while (a+1==b) {//do something}
Now since 255+1=0 with unsigned chars I was expecting it to do something but it didn't because it promoted everything to int. I can make it work by replacing a+1 with either (unsigned char)(a+1) or (a+1)%256.
What would be the best way to tell the compiler that I don't want the types to be changed? Or should I just be doing 1 of the ways I know works?
According to this cppreference.com page, "arithmetic operators don't accept types smaller than int as arguments." That said, your proposed solution of (unsigned char)(a + 1) works: the addition is still performed in int, but the cast converts the result back to unsigned char, so 255 + 1 wraps to 0. Using the modulo operator is more explicit but it introduces an additional operation in the machine code. It's a balance between clarity and performance.
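To make that concrete, here is the question's comparison with the cast applied (a sketch; the addition is still done in int, the cast just brings it back into unsigned char range):
unsigned char a = 255;
unsigned char b = 0;
// a + 1 is computed as the int 256; the cast truncates it back to
// unsigned char, giving 0, so the condition is true as intended.
if ((unsigned char)(a + 1) == b) {
    // reached: 255 wrapped around to 0
}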

how does the short(vector.size()) command conversion work in C++?

I don't know any way to get the size of a vector other than the .size() command, and it works very well, but it returns a value of type long long unsigned int. In many cases that is fine, but I'm sure my program will never have a vector so big that it needs a return type of that size; short int is more than enough.
I know that for today's computers those few unused bytes are irrelevant, but I don't like to leave these "loose ends" even if they are small, and when I was programming I came across some details that bothered me.
Look at these examples:
for(short int X = 0 ; X < Vector.size() ; X++){
}
compiling this, I receive this warning:
warning: comparison of integer expressions of different signedness: 'short int' and 'std::vector<unsigned char>::size_type' {aka 'long long unsigned int'} [-Wsign-compare]|
this is because the .size() return type is different from the short int I'm comparing against: "X" is a short int, and Vector.size() returns a long long unsigned int. That was expected, so if I do this:
for(size_t X = 0 ; X < Vector.size() ; X++){
}
the problem is gone, but by doing this I'm creating a long long unsigned int in the size_t variable and comparing it against another long long unsigned int, so my computer allocates two long long unsigned int variables. So what do I do to get a simple short int? I don't need anything more than that; long long unsigned int is overkill, so I did this:
for(short int X = 0 ; X < short(Vector.size()) ; X++){
}
but... how is this working? short int X = 0 allocates a short int, nothing new, but what about short(Vector.size())? Is the computer allocating a long long unsigned int and converting it to a short int? Or is the compiler "changing" the return of the .size() function, making it naturally return a short int and, in this case, not allocating a long long unsigned int at all? I know that compilers are responsible for optimizing the code too. Is there any "problem" or "detail" with using this method? Since I rarely see anyone using it, what exactly is this short() doing in terms of memory allocation? Where can I read more about it?
(thanks to everyone who responded)
Forget for a moment that this involves a for loop; that's important for the underlying code, but it's a distraction from what's going on with the conversion.
short X = Vector.size();
That line calls Vector.size(), which returns a value of type std::size_t. std::size_t is an unsigned type, large enough to hold the size of any object. So it could be unsigned long, or it could be unsigned long long. In any event, it's definitely not short. So the compiler has to convert that value to short, and that's what it does.
Most compilers these days don't trust you to understand what this actually does, so they warn you. (Yes, I'm rather opinionated about compilers that nag; that doesn't change the analysis here). So if you want to see that warning (i.e., you don't turn it off), you'll see it. If you want to write code that doesn't generate that warning, then you have to change the code to say "yes, I know, and I really mean it". You do that with a cast:
short X = short(Vector.size());
The cast tells the compiler to call Vector.size() and convert the resulting value to short. The code then assigns the result of that conversion to X. So, more briefly, in this case it tells the compiler that you want it to do exactly what it would have done without the cast. The difference is that because you wrote a cast, the compiler won't warn you that you might not know what you're doing.
Some folks prefer to write that cast with a static_cast:
short X = static_cast<short>(Vector.size());
That does the same thing: it tells the compiler to do the conversion to short and, again, the compiler won't warn you that you did it.
In the original for loop, a different conversion occurs:
X < Vector.size()
That bit of code calls Vector.size(), which still returns an unsigned type. In order to compare that value with X, the two sides of the < have to have the same type, and the rules for this kind of expression require that X gets promoted to std::size_t, i.e., that the value of X gets treated as an unsigned type. That's okay as long as the value isn't negative. If it's negative, the conversion to the unsigned type is okay, but it will produce results that probably aren't what was intended. Since we know that X is not negative here, the code works perfectly well.
But we're still in the territory of compiler nags: since X is signed, the compiler warns you that promoting it to an unsigned type might do something that you don't expect. Again, you know that that won't happen, but the compiler doesn't trust you. So you have to insist that you know what you're doing, and again, you do that with a cast:
X < short(Vector.size())
Just like before, that cast converts the result of calling Vector.size() to short. Now both sides of the < are the same type, so the < operation doesn't require a conversion from a signed to an unsigned type, so the compiler has nothing to complain about. There is still a conversion, because the rules say that values of type short get promoted to int in this expression, but don't worry about that for now.
Another possibility is to use an unsigned type for that loop index:
for (unsigned short X = 0; X < Vector.size(); ++X)
But the compiler might still insist on warning you that not all values of type std::size_t can fit in an unsigned short. So, again, you might need a cast. Or change the type of the index to match what the compiler thinks you need:
for (std::size_t X = 0; X < Vector.size(); ++X)
If I were to go this route, I would use unsigned int and if the compiler insisted on telling me that I don't know what I'm doing I'd yell at the compiler (which usually isn't helpful) and then I'd turn off that warning. There's really no point in using short here, because the loop index will always be converted to int (or unsigned int) wherever it's used. It will probably be in a register, so there is no space actually saved by storing it as a short.
Even better, as recommended in other answers, is to use a range-based for loop, which avoids managing that index:
for (auto& value: Vector) ...
In all cases, X has a storage duration of automatic, and the result of Vector.size() does not outlive the full expression where it is created.
I don't need anything more than this, long long unsigned int is overkill
Typically, automatic duration variables are "allocated" either on the stack, or as registers. In either case, there is no performance benefit to decreasing the allocation size, and there can be a performance penalty in narrowing and then widening values.
In the very common case where you are using X solely to index into Vector, you should strongly consider using a different kind of for:
for (auto & value : Vector) {
// replace Vector[X] with value in your loop body
}

What is the use of intptr_t?

I know it is an integer type that can be cast to/from pointer without loss of data, but why would I ever want to do this? What advantage does having an integer type have over void* for holding the pointer and THE_REAL_TYPE* for pointer arithmetic?
EDIT
The question marked as "already been asked" doesn't answer this. The question there is if using intptr_t as a general replacement for void* is a good idea, and the answers there seem to be "don't use intptr_t", so my question is still valid: What would be a good use case for intptr_t?
The primary reason: you cannot do bitwise operations on a void *, but you can on an intptr_t.
On many occasions where you need to perform bitwise operations on an address, you can use intptr_t.
However, for bitwise operations, the best approach is to use the unsigned counterpart, uintptr_t.
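For example, a typical bitwise use is an alignment check (a minimal sketch, not from the question):
#include <cstdint>

// True if p is aligned to `alignment`, which must be a power of two.
bool is_aligned(const void* p, std::uintptr_t alignment) {
    return (reinterpret_cast<std::uintptr_t>(p) & (alignment - 1)) == 0;
}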
As mentioned in the other answer by @chux, pointer comparison is another important aspect.
Also, FWIW, as per C11 standard, §7.20.1.4,
These types are optional.
There's also a semantic consideration.
A void* is supposed to point to something. Despite modern practicality, a pointer is not a memory address. Okay, it usually/probably/always(!) holds one, but it's not a number. It's a pointer. It refers to a thing.
An intptr_t does not. It's an integer value that is safe to convert to/from a pointer, so you can use it for antique APIs, packing it into a pthread function argument, things like that.
That's why you can do more numbery and bitty things on an intptr_t than you can on a void*, and why you should be self-documenting by using the proper type for the job.
Ultimately, almost everything could be an integer (remember, your computer works on numbers!). Pointers could have been integers. But they're not. They're pointers, because they are meant for different use. And, theoretically, they could be something other than numbers.
The uintptr_t type is very useful when writing memory management code. That kind of code wants to talk to its clients in terms of generic pointers (void *), but internally do all kinds of arithmetic on addresses.
You can do some of the same things by operating in terms of char *, but not everything, and the result looks like pre-ANSI C.
Not all memory management code uses uintptr_t - as an example, the BSD kernel code defines a vm_offset_t with similar properties. But if you are writing e.g. a debug malloc package, why invent your own type?
It's also helpful when you have %p available in your printf, and are writing code that needs to print pointer sized integral variables in hex on a variety of architectures.
I find intptr_t rather less useful, except possibly as a way station when casting, to avoid the dread warning about changing signedness and integer size in the same cast. (Writing portable code that passes -Wall -Werror on all relevant architectures can be a bit of a struggle.)
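As an illustration of the kind of address arithmetic such memory-management code does, here is a small align-up helper of the sort a debug allocator might use (a sketch; the name and interface are illustrative, not taken from any particular allocator):
#include <cstdint>

// Round an address up to the next multiple of `alignment` (a power of two),
// e.g. before carving a block out of a raw buffer.
void* align_up(void* p, std::uintptr_t alignment) {
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
    addr = (addr + alignment - 1) & ~(alignment - 1);
    return reinterpret_cast<void*>(addr);
}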
What is the use of intptr_t?
Example use: order comparing.
Comparing pointers for equality is not a problem.
Other compare operations like >, <= may be UB. C11dr §6.5.8/5 Relational operators.
So convert to intptr_t first.
[Edit] New example: Sort an array of pointers by pointer value.
int ptr_cmp(const void *a, const void *b) {
    intptr_t ia = (intptr_t) (*((void **) a));
    intptr_t ib = (intptr_t) (*((void **) b));
    return (ia > ib) - (ia < ib);
}
void *a[N];
...
qsort(a, sizeof a/sizeof a[0], sizeof a[0], ptr_cmp);
[Former example]
Example use: Test if a pointer is of an array of pointers.
#define N 10
char special[N][1];

// UB: relational comparison of pointers that are not into the same array is UB.
int test_special1(char *candidate) {
    return (candidate >= special[0]) && (candidate <= special[N-1]);
}

// OK - integer compare
int test_special2(char *candidate) {
    intptr_t ca = (intptr_t) candidate;
    intptr_t mn = (intptr_t) special[0];
    intptr_t mx = (intptr_t) special[N-1];
    return (ca >= mn) && (ca <= mx);
}
As commented by @M.M, the above code may not work as intended, but at least it is not UB - just non-portable functionality. I was hoping to use this to solve this problem.
(u)intptr_t is used when you want to do arithmetic on pointers, specifically bitwise operations. But as others said, you'll almost always want to use uintptr_t, because bitwise operations are better done on unsigned types. However, if you need to do an arithmetic right shift then you must use intptr_t [1]. It's usually used for storing data in the pointer, usually called a tagged pointer.
In x86-64 you can use the high 16/7 bits of the address for data, but you must do the sign extension manually to make the pointer canonical, because x86-64 doesn't have a flag for ignoring the high bits like ARM does [2]. So for example, if you have char* tagged_address then you'll need to do this before dereferencing it:
char* pointer = (char*)((intptr_t)tagged_address << 16 >> 16);
The 32-bit Chrome V8 engine uses smi (small integer) optimization where the low bits denote the type
|----- 32 bits -----|
Pointer: |_____address_____w1| # Address to object, w = weak pointer
Smi: |___int31_value____0| # Small integer
So when the value's least significant bit is 0, it's an Smi, and it'll be right shifted to retrieve the original 31-bit signed int:
int v = (intptr_t)address >> 1;
For more information read
Using the extra 16 bits in 64-bit pointers
Pointer magic for efficient dynamic value representations
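As another sketch of the tagged-pointer idea (storing a one-bit tag in the low bit of a suitably aligned pointer; illustrative only, not the V8 scheme):
#include <cassert>
#include <cstdint>

// Pack a one-bit tag into the low bit of a pointer to an object with at
// least 2-byte alignment, and unpack it again.
void* tag_pointer(void* p, bool tag) {
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
    assert((addr & 1) == 0); // low bit must be free, i.e. p at least 2-byte aligned
    return reinterpret_cast<void*>(addr | static_cast<std::uintptr_t>(tag));
}

bool get_tag(void* p) {
    return (reinterpret_cast<std::uintptr_t>(p) & 1) != 0;
}

void* strip_tag(void* p) {
    return reinterpret_cast<void*>(reinterpret_cast<std::uintptr_t>(p) & ~std::uintptr_t{1});
}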
Another usage is when you pass a signed integer as a void*, which is usually done in simple callback functions or threads:
#include <stdint.h>
#include <pthread.h>

void* my_thread(void *arg)
{
    intptr_t val = (intptr_t)arg;
    // Do something with val
    return NULL;
}

int main()
{
    pthread_t thread1;
    intptr_t some_val = -2;
    int r = pthread_create(&thread1, NULL, my_thread, (void*)some_val);
    pthread_join(thread1, NULL);
}
[1] When the implementation does an arithmetic shift on signed types, of course.
[2] Very new x86-64 CPUs may have UAI/LAM support for that.

correctly fixing conversion and loss of data warnings?

I have a class called PointF and it has a constructor that takes a Point, and I keep getting "possible loss of data" warnings. How can I show that my intention is in fact to make a float value into an int, among other things? I tried static_cast and (float) but they did not fix the warning.
For example:
int curPos = ceil(float(newMouseY / font.getLineHeight())) ; //float to int
And
outputChars[0] = uc; //converting from size_t to char
A cast should do the trick; that says "explicitly make this type into that type", which is generally pretty silly for a compiler to warn for:
int curPos = static_cast<int>(ceil(float(newMouseY / font.getLineHeight())));
Or:
outputChars[0] = static_cast<char>(uc);
Make sure the casts you tried were akin to that. You say "I tried ...(float)" which leads me to believe you tried something like this:
int curPos = (float)(ceil(float(newMouseY / font.getLineHeight())));
Which does nothing. The type of the expression is already a float, what would casting it to the same type do? You need to cast it to the destination type.
Keep in mind casts are generally to be avoided. In your first snippet, the cast is sensible because when you quantize something you necessarily need to drop information.
But your second cast is not. Why is uc a size_t in the first place? What happens when the cast does drop information? Is that really a good idea?
You need to cast the result of ceil, preferably with a static_cast.
you may have to use an explicit cast, i.e.
int curPos = int(ceil(...));
Casting your variable is the normal, good way to change its type. And if you are trying to convert a size_t to a char, you are trying to fit 4 bytes into 1 byte; losing data is a normal thing in that case.