Strange "may be used uninitialized in this function [-Wmaybe-uninitialized]" warning - C++

Any idea? this is the log:
util.h: In function ‘bool BusyBees()’:
util.h:162:17: warning: ‘bestHashBee’ may be used uninitialized in this function [-Wmaybe-uninitialized]
data[1] = x >> 8;
~~^~~~
miner.cpp:512:14: note: ‘bestHashBee’ was declared here
uint32_t bestHashBee;
^~~~~~~~~~~
this is util.h (the function declare):
/** Write a 32-bit unsigned little-endian integer. */
void inline WriteLE32(unsigned char *data, uint32_t x) {
    data[0] = x;
    data[1] = x >> 8;
    data[2] = x >> 16;
    data[3] = x >> 24;
}
this is where i use it:
uint32_t bestHashBee;
unsigned char beeNonceEncoded[4];
WriteLE32(beeNonceEncoded, bestHashBee);
std::vector<unsigned char> beeNonceVec(beeNonceEncoded, beeNonceEncoded + 4);
Thanks for any help!

Let's take a look at the code:
uint32_t bestHashBee;
// at this point bestHashBee is an integer, but it could be any integer
// you have not told it what value it should have. It might be 0 it might be
// 2879872. It depends entirely on what was in that memory location before.
unsigned char beeNonceEncoded[4];
// beeNonceEncoded is an array of 4 random chars - they have not been
// initialized to any known value
WriteLE32(beeNonceEncoded, bestHashBee);
// now beeNonceEncoded contains 4 random chars.
std::vector<unsigned char> beeNonceVec(beeNonceEncoded, beeNonceEncoded + 4);
// and this vector is built from those 4 random chars - the random bestHashBee you passed in
void inline WriteLE32(unsigned char *data, uint32_t x) {
    data[0] = x;
    // now data[0] contains a random value
    //...
}
The compiler is warning you that you haven't initialized bestHashBee - it's unlikely that you want a random value there.
The recommendations from all the commenters are: initialize it.
uint32_t bestHashBee = 100;
would do it. 100 might not be what you want either. Depending on the version of C++ you're using, you could also use
uint32_t bestHashBee{100};
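For completeness, here is a minimal sketch of the corrected call site (EncodeNonce is a hypothetical wrapper name, not from the original code); the key point is that bestHashBee is always assigned before WriteLE32 reads it:

```cpp
#include <cstdint>
#include <vector>

// Same little-endian helper as in util.h.
inline void WriteLE32(unsigned char *data, uint32_t x) {
    data[0] = static_cast<unsigned char>(x);
    data[1] = static_cast<unsigned char>(x >> 8);
    data[2] = static_cast<unsigned char>(x >> 16);
    data[3] = static_cast<unsigned char>(x >> 24);
}

// Hypothetical wrapper: the value is a parameter, so it is never
// read while indeterminate and the warning cannot fire.
inline std::vector<unsigned char> EncodeNonce(uint32_t bestHashBee) {
    unsigned char beeNonceEncoded[4];
    WriteLE32(beeNonceEncoded, bestHashBee);
    return std::vector<unsigned char>(beeNonceEncoded, beeNonceEncoded + 4);
}
```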

Related

MSVC: shift count negative or too big

I am trying to wrap my head around a warning I get from MSVC. From what I can tell, the warning seems to be bogus, but I would like to be sure.
I am trying to convert an off_t to an offset in an OVERLAPPED. Given an off_t named offset and an OVERLAPPED named overlapped, I am trying the following:
overlapped.Offset = static_cast<DWORD>(offset);
if constexpr(sizeof(offset) > 4) {
    overlapped.OffsetHigh = static_cast<DWORD>(offset >> 32);
}
MSVC complains about the bit shift, claiming the shift count is either negative or too big. Since it's clearly not negative - and even MSVC should be able to tell that - it must think it's too big.
How could it be too big? The code in question is only compiled if the size of an off_t is greater than 4. It must therefore be at least 5 bytes (but probably 8), and at 8 bits per byte that means a minimum of 40 bits, which is more than 32.
What is going on here?
Could it be the assignment into overlapped.OffsetHigh, and not your explicit shift, that is causing the warning? The following code generates the same warning on VS2015 compiling for x86 32-bit:
struct Clump
{
    unsigned int a : 32;
    unsigned int b : 32;
    unsigned int c : 32;
};
unsigned int x = 0;
unsigned int y = 0;
unsigned int z = 0;
Clump clump = { x, y, z }; // This is line 1121.
1>E.cpp(1121): warning C4293: '<<': shift count negative or too big, undefined behavior
But removing the bit-fields, there is no warning:
struct Clump
{
    unsigned int a;
    unsigned int b;
    unsigned int c;
};
unsigned int x = 0;
unsigned int y = 0;
unsigned int z = 0;
Clump clump = { x, y, z }; // No warning.
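One way to sidestep the warning in the original question, assuming this diagnosis holds: perform the shift on a type that is always 64 bits wide, so the shift count 32 is valid on every branch the compiler analyzes. A hedged sketch (high_dword is an illustrative name, not a Windows API):

```cpp
#include <cstdint>

// Widen before shifting: even if off_t were only 32 bits on some
// configuration, the shift below is applied to a uint64_t, so the
// count 32 is never out of range.
inline uint32_t high_dword(int64_t offset) {
    return static_cast<uint32_t>(static_cast<uint64_t>(offset) >> 32);
}
```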

Does this correctly combine two unsigned 32-bit integers into one unsigned 64-bit integer in C++?

std::uint32_t a = ...
std::uint32_t b = ...
std::uint64_t result = ((std::uint64_t)a << 32) | (std::uint64_t)b;
Is this code valid for all the unsigned integer values of a & b?
Actually, I want unique result values for all possible unsigned integer values of a & b. The aim is to keep the size/length of the result minimal (in this case, we can bind it in 64 bit).
Yes, it works as you'd expect (if they are really unsigned).
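As a quick sanity check, the pack/unpack round trip can be sketched like this:

```cpp
#include <cstdint>

// Pack two 32-bit values into one 64-bit value; a occupies the high half.
inline std::uint64_t pack(std::uint32_t a, std::uint32_t b) {
    return (static_cast<std::uint64_t>(a) << 32) | b;
}

// Recover the two halves.
inline std::uint32_t high(std::uint64_t v) { return static_cast<std::uint32_t>(v >> 32); }
inline std::uint32_t low(std::uint64_t v)  { return static_cast<std::uint32_t>(v); }
```

Every distinct (a, b) pair maps to a distinct 64-bit result, which is exactly the uniqueness property the question asks for.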
Another method (note: both variants below depend on the machine's endianness, and the pointer cast breaks the strict aliasing rule, so they are not portable):
uint32_t a = xxx;
uint32_t b = xxx;
uint64_t result;
uint32_t *p = (uint32_t *)&result;
p[0] = b;
p[1] = a;
Or with a union (the two 32-bit halves must sit in a nested struct; if a and b were direct union members they would overlap each other):
union u3264
{
    struct
    {
        uint32_t b;
        uint32_t a;
    } parts;
    uint64_t res;
};
u3264 u;
u.parts.a = xxx;
u.parts.b = yyy;
// 64 bit result in u.res

c++ - store byte[4] in an int

I want to take a byte array with 4 bytes in it, and store it in an int.
For example (non-working code):
unsigned char _bytes[4];
int * combine;
_bytes[0] = 1;
_bytes[1] = 1;
_bytes[2] = 1;
_bytes[3] = 1;
combine = &_bytes[0];
I do not want to use bit shifting to put the bytes in the int, I would like to point at the bytes memory and use them as an int if possible.
In Standard C++ it's not possible to do this reliably. The strict aliasing rule says that when you read through an expression of type int, it must actually designate an int object (or a const int etc.) otherwise it causes undefined behaviour.
However you can do the opposite: declare an int and then fill in the bytes:
int combine;
unsigned char *bytes = reinterpret_cast<unsigned char *>(&combine);
bytes[0] = 1;
bytes[1] = 1;
bytes[2] = 1;
bytes[3] = 1;
std::cout << combine << std::endl;
Of course, which value you get out of this depends on how your system represents integers. If you want your code to use the same mapping on different systems then you can't use memory aliasing; you'd have to use an equation instead.
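A sketch of the "equation" approach the answer alludes to: assemble the value arithmetically, so the byte order is chosen by the code rather than by the machine (little-endian here, by construction):

```cpp
#include <cstdint>

// Combine 4 bytes into a 32-bit value, little-endian on every platform:
// b0 is the least significant byte regardless of how the CPU stores ints.
inline std::uint32_t combine_le(unsigned char b0, unsigned char b1,
                                unsigned char b2, unsigned char b3) {
    return static_cast<std::uint32_t>(b0)
         | (static_cast<std::uint32_t>(b1) << 8)
         | (static_cast<std::uint32_t>(b2) << 16)
         | (static_cast<std::uint32_t>(b3) << 24);
}
```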

C++ Bits in 64 bit integer

Hello I have a struct here that is 7 bytes and I'd like to write it to a 64 bit integer. Next, I'd like to extract out this struct later from the 64 bit integer.
Any ideas on this?
#include "stdafx.h"
struct myStruct
{
    unsigned char a;
    unsigned char b;
    unsigned char c;
    unsigned int someNumber;
};
int _tmain(int argc, _TCHAR* argv[])
{
    myStruct * m = new myStruct();
    m->a = 11;
    m->b = 8;
    m->c = 12;
    m->someNumber = 30;
    printf("\n%s\t\t%i\t%i\t%i\t%i\n\n", "struct", m->a, m->b, m->c, m->someNumber);
    unsigned long long num = 0;
    // todo: use bitwise operations from m into num (total of 7 bytes)
    printf("%s\t\t%llu\n\n", "ulong", num);
    m = new myStruct();
    // todo: use bitwise operations from num into m;
    printf("%s\t\t%i\t%i\t%i\t%i\n\n", "struct", m->a, m->b, m->c, m->someNumber);
    return 0;
}
You could do something like this:
class structured_uint64
{
    uint64_t data;
public:
    structured_uint64(uint64_t x = 0) : data(x) {}
    operator uint64_t&() { return data; }
    uint8_t low_byte(size_t n) const { return static_cast<uint8_t>(data >> (n * 8)); }
    void low_byte(size_t n, uint8_t val) {
        uint64_t mask = static_cast<uint64_t>(0xff) << (8 * n);
        data = (data & ~mask) | (static_cast<uint64_t>(val) << (8 * n));
    }
    uint32_t hi_word() const { return static_cast<uint32_t>(data >> 32); }
    // et cetera
};
(there is, of course, lots of room for variation on the details of the interface and where among the 64 bits the constituents are placed)
Using different types to alias the same portion of memory is a generally bad idea. The thing is, it's very valuable for the optimizer to be able to use reasoning like:
"Okay, I've read a uint64_t at the start of this block, and nowhere in the middle does the program write to any uint64_ts, therefore the value must be unchanged!"
which means it will get the wrong answer if you tried to change the value of the uint64_t object through a uint32_t reference. And as this is very dependent on what optimizations are possible and done, it is actually pretty easy to never run across the problem in test cases, but see it in the real program you're trying to write -- and you'll spend forever trying to find the bug because you convinced yourself it's not this problem.
So, you really should do the insertion/extraction of the fields with bit twiddling (or intrinsics, if profiling shows that this is a performance issue and there are useful ones available) rather than trying to set up a clever struct.
If you really know what you're doing, you can make the aliasing work, I believe. But it should only be done if you really know what you're doing, and that includes knowing relevant rules from the standard inside and out (which I don't, and so I can't advise you on how to make it work). And even then you probably shouldn't do it.
Also, if you intend your integral types to be a specific size, you should really use the correct types. For example, never use unsigned int for an integer that is supposed to be exactly 32 bits. Instead use uint32_t. Not only is it self-documenting, but you won't run into a nasty surprise when you try to build your program in an environment where unsigned int is not 32 bits.
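Concretely, the bit-twiddling approach recommended above might look like this for the 7-byte struct in the question (placing d in the high 32 bits is one arbitrary layout choice):

```cpp
#include <cstdint>

// Pack a, b, c and a 32-bit d into one uint64_t: explicit shifts and
// fixed-width types, so no aliasing and no padding surprises.
inline std::uint64_t pack7(std::uint8_t a, std::uint8_t b, std::uint8_t c,
                           std::uint32_t d) {
    return static_cast<std::uint64_t>(a)
         | (static_cast<std::uint64_t>(b) << 8)
         | (static_cast<std::uint64_t>(c) << 16)
         | (static_cast<std::uint64_t>(d) << 32);
}

// Extract each field by shifting it down and truncating.
inline std::uint8_t  unpack_a(std::uint64_t v) { return static_cast<std::uint8_t>(v); }
inline std::uint8_t  unpack_b(std::uint64_t v) { return static_cast<std::uint8_t>(v >> 8); }
inline std::uint8_t  unpack_c(std::uint64_t v) { return static_cast<std::uint8_t>(v >> 16); }
inline std::uint32_t unpack_d(std::uint64_t v) { return static_cast<std::uint32_t>(v >> 32); }
```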
Use a union. Each element of a union occupies the same address space. The struct is one element, the unsigned long long is another.
#include <stdio.h>
union data
{
    struct
    {
        unsigned char a;
        unsigned char b;
        unsigned char c;
        unsigned int d;
    } e;
    unsigned long long f;
};
int main()
{
    data dat;
    dat.f = 0xFFFFFFFFFFFFFFFF;
    dat.e.a = 1;
    dat.e.b = 2;
    dat.e.c = 3;
    dat.e.d = 4;
    printf("f=%016llX\n", dat.f);
    printf("%02X %02X %02X %08X\n", dat.e.a, dat.e.b, dat.e.c, dat.e.d);
    return 0;
}
Output is below, but note that one byte of the original unsigned long long remains. Compilers like to align data such as 4-byte integers on addresses divisible by 4, so there are three bytes, then a pad byte, so the integer is at offset 4 and the struct has a total size of 8.
f=00000004FF030201
01 02 03 00000004
This can be controlled in compiler-dependent fashion. Below is for Microsoft C++:
#include <stdio.h>
#pragma pack(push,1)
union data
{
    struct
    {
        unsigned char a;
        unsigned char b;
        unsigned char c;
        unsigned int d;
    } e;
    unsigned long long f;
};
#pragma pack(pop)
int main()
{
    data dat;
    dat.f = 0xFFFFFFFFFFFFFFFF;
    dat.e.a = 1;
    dat.e.b = 2;
    dat.e.c = 3;
    dat.e.d = 4;
    printf("f=%016llX\n", dat.f);
    printf("%02X %02X %02X %08X\n", dat.e.a, dat.e.b, dat.e.c, dat.e.d);
    return 0;
}
Note the struct occupies seven bytes now and the highest byte of the unsigned long long is now unchanged:
f=FF00000004030201
01 02 03 00000004
Got it.
static unsigned long long compress(unsigned char a, unsigned char b, unsigned char c, unsigned int someNumber)
{
    unsigned long long x = 0;
    x = x | a;
    x = x << 8;
    x = x | b;
    x = x << 8;
    x = x | c;
    x = x << 32;
    x = x | someNumber;
    return x;
}
myStruct * decompress(unsigned long long x)
{
    myStruct * m = new myStruct();
    m->someNumber = x & 0xFFFFFFFF;
    x = x >> 32;
    m->c = x & 0xFF;
    x = x >> 8;
    m->b = x & 0xFF;
    x = x >> 8;
    m->a = x & 0xFF;
    return m;
}

Store an int in a char array?

I want to store a 4-byte int in a char array... such that the first 4 locations of the char array are the 4 bytes of the int.
Then, I want to pull the int back out of the array...
Also, bonus points if someone can give me code for doing this in a loop... IE writing like 8 ints into a 32 byte array.
int har = 0x01010101;
char a[4];
int har2;
// write har into char such that:
// a[0] == 0x01, a[1] == 0x01, a[2] == 0x01, a[3] == 0x01 etc.....
// then, pull the bytes out of the array such that:
// har2 == har
Thanks guys!
EDIT: Assume int are 4 bytes...
EDIT2: Please don't care about endianness... I will be worrying about endianness. I just want different ways to acheive the above in C/C++. Thanks
EDIT3: If you can't tell, I'm trying to write a serialization class on the low level... so I'm looking for different strategies to serialize some common data types.
Unless you care about byte order and such, memcpy will do the trick:
memcpy(a, &har, sizeof(har));
...
memcpy(&har2, a, sizeof(har2));
Of course, there's no guarantee that sizeof(int)==4 on any particular implementation (and there are real-world implementations for which this is in fact false).
Writing a loop should be trivial from here.
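For the bonus points, here is a sketch of that loop: writing 8 ints into a 32-byte buffer with memcpy and reading them back (assuming 4-byte ints, as the question does):

```cpp
#include <cstring>

// Serialize n ints into buf, one after another, via memcpy.
inline void write_ints(unsigned char *buf, const int *vals, int n) {
    for (int i = 0; i < n; ++i)
        std::memcpy(buf + i * sizeof(int), &vals[i], sizeof(int));
}

// Deserialize n ints back out of buf.
inline void read_ints(const unsigned char *buf, int *vals, int n) {
    for (int i = 0; i < n; ++i)
        std::memcpy(&vals[i], buf + i * sizeof(int), sizeof(int));
}
```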
Not the most optimal way, but is endian safe.
int har = 0x01010101;
char a[4];
a[0] = har & 0xff;
a[1] = (har>>8) & 0xff;
a[2] = (har>>16) & 0xff;
a[3] = (har>>24) & 0xff;
#include <stdio.h>
int main(void) {
    char a[sizeof(int)];
    *((int *) a) = 0x01010101;
    printf("%d\n", *((int *) a));
    return 0;
}
Keep in mind:
A pointer to an object or incomplete type may be converted to a pointer to a different
object or incomplete type. If the resulting pointer is not correctly aligned for the
pointed-to type, the behavior is undefined.
Note: Accessing a union through an element that wasn't the last one assigned to is undefined behavior.
(assuming a platform where characters are 8bits and ints are 4 bytes)
A bit mask of 0xFF will mask off one character so
char arr[4];
int a = 5;
arr[3] = a & 0xff;
arr[2] = (a & 0xff00) >>8;
arr[1] = (a & 0xff0000) >>16;
arr[0] = (a & 0xff000000)>>24;
would make arr[0] hold the most significant byte and arr[3] hold the least.
Edit: just so you understand the trick, & is bitwise 'and' whereas && is logical 'and'.
Thanks to the comments about the forgotten shift.
#include <stdio.h>
int main() {
    typedef union foo {
        int x;
        char a[4];
    } foo;
    foo p;
    p.x = 0x01010101;
    printf("%x ", p.a[0]);
    printf("%x ", p.a[1]);
    printf("%x ", p.a[2]);
    printf("%x ", p.a[3]);
    return 0;
}
Bear in mind that the a[0] holds the LSB and a[3] holds the MSB, on a little endian machine.
Don't use unions, Pavel clarifies:
It's U.B., because C++ prohibits
accessing any union member other than
the last one that was written to. In
particular, the compiler is free to
optimize away the assignment to int
member out completely with the code
above, since its value is not
subsequently used (it only sees the
subsequent read for the char[4]
member, and has no obligation to
provide any meaningful value there).
In practice, g++ in particular is
known for pulling such tricks, so this
isn't just theory. On the other hand,
using static_cast<void*> followed by
static_cast<char*> is guaranteed to
work.
– Pavel Minaev
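A sketch of the cast chain that comment describes: examining an object's bytes through an unsigned char pointer is permitted by the aliasing rules, so going int* → void* → char* is well-defined:

```cpp
// Read the first byte of an int's object representation.
// Which logical byte this is (MSB or LSB) depends on endianness.
inline unsigned char first_byte(const int &value) {
    const unsigned char *bytes =
        static_cast<const unsigned char *>(static_cast<const void *>(&value));
    return bytes[0];
}
```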
You can also use placement new for this:
void foo (int i) {
char * c = new (&i) char[sizeof(i)];
}
#include <stdint.h>
#include <stdlib.h>
int main(int argc, char* argv[]) {
    /* 8 ints in a loop */
    int i;
    int* intPtr;
    int intArr[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    char* charArr = malloc(32);
    for (i = 0; i < 8; i++) {
        /* take the address of the location in the char array,
           cast it as int*, and point at it */
        intPtr = (int*) &(charArr[i * 4]);
        *intPtr = intArr[i]; /* write int at location pointed to */
    }
    /* Read ints out */
    for (i = 0; i < 8; i++) {
        intPtr = (int*) &(charArr[i * 4]);
        intArr[i] = *intPtr;
    }
    char* myArr = malloc(13);
    int myInt;
    uint8_t* p8;   /* unsigned 8-bit integer */
    uint16_t* p16; /* unsigned 16-bit integer */
    uint32_t* p32; /* unsigned 32-bit integer */
    /* Using sizes other than 4-byte ints, */
    /* set all bits in myArr to 1 */
    p8 = (uint8_t*) &(myArr[0]);
    p16 = (uint16_t*) &(myArr[1]);
    p32 = (uint32_t*) &(myArr[5]);
    *p8 = 255;
    *p16 = 65535;
    *p32 = 4294967295;
    /* Get the values back out */
    p16 = (uint16_t*) &(myArr[1]);
    uint16_t my16 = *p16;
    /* Put the 16 bit int into a regular int */
    myInt = (int) my16;
    return 0;
}
char a;
int i = 9;
a = boost::lexical_cast<char>(i);
I found boost::lexical_cast a convenient way to convert between char and int and vice versa (note it throws if the value doesn't fit in a single character).
An alternative to boost::lexical_cast is sprintf:
char temp[6];
temp[0] = 'h';
temp[1] = 'e';
temp[2] = 'l';
temp[3] = 'l';
sprintf(temp + 4, "%d", 9);
cout << temp;
Output would be: hell9
union value {
    int i;
    char bytes[sizeof(int)];
};
value v;
v.i = 2;
char* bytes = v.bytes;