When using bit fields inside structures like:
struct abc {
    int a : 3;
    unsigned int b : 1;
} t;
My question is: do bit-field members share the same memory space inside a structure? When I looked at their representation, it seemed that a occupies the 3 lowest bits (from the LSB), b occupies the next bit above them, and both live in a single 4-byte (32-bit) unit of memory.
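A quick way to check what your compiler actually does (bit-field layout and packing are implementation-defined, so this sketch only reports your implementation's choice):

#include <iostream>

struct abc {
    int a : 3;
    unsigned int b : 1;
};

int main() {
    // If both bit fields pack into one 4-byte allocation unit,
    // this prints 4 -- common in practice, but not guaranteed.
    std::cout << sizeof(abc) << '\n';
}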
In the case of unions:
#include <iostream>
#include <cmath>
using namespace std;

typedef union abc {
    unsigned int a : 32;
    int f : 1;
} t;

int main()
{
    t q;
    q.a = (unsigned int)(pow(2, 32) - 1); // set all 32 bits
    q.f = 1;
    cout << sizeof(t) << endl;
    cout << q.a << " " << q.f << endl;
    return 0;
}
First of all, the sizeof() operator returns 4 bytes for this union (the size of its largest member). Given that all 32 bits are used by member a, how can it even manage space for member f?
Thank you for reading till the end; I'd really appreciate your answers.
I am trying to read in a binary file in a known format. I want to find the most efficient way to extract values from it. My ideas are:
Method 1: Read each value into a new char array, then convert it into the correct data type. For the first 4-byte positive int, I bit-shift the values accordingly and assign them to an integer, as below.
Method 2: Keep the whole file in a char array, then create pointers into different parts of it. In the code below I try to point at the first 4 bytes and use reinterpret_cast to interpret them as an integer when I dereference the variable 'bui'.
But the output from this code is:
11000000001100000000110000000011
3224374275
00000011000011000011000011000000
51130560
My questions are:
Why does the endianness get swapped with method 2, and how do I point to the bytes correctly?
Which method is more efficient? I need all of the file, and the file contains other data types too, so with method 1 I will need to write different methods to interpret each of them. I was assuming that with method 2 I could just define pointers of different types without doing extra work!
Thanks
#include <iostream>
#include <bitset>

int main(void) {
    unsigned char b[4];
    //ifs.read((char*)b, sizeof(b));
    //let's pretend the following 4 bytes were read in, representing the number 3224374275:
    b[0] = 0b11000000;
    b[1] = 0b00110000;
    b[2] = 0b00001100;
    b[3] = 0b00000011;

    //method 1:
    unsigned int a = 0; //4-byte capacity
    a = b[0] << 24 | b[1] << 16 | b[2] << 8 | b[3];
    std::bitset<32> xm1(a);
    std::cout << xm1 << std::endl;
    std::cout << a << std::endl;

    //method 2:
    unsigned int* bui = reinterpret_cast<unsigned int*>(b);
    std::bitset<32> xm2(*bui);
    std::cout << xm2 << std::endl;
    std::cout << *bui << std::endl;
}
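For what it's worth, a common way to avoid aliasing an unsigned char array through an unsigned int* (which the language does not actually allow, even though it often works) is to memcpy the bytes into the integer. This is only a sketch of a safer spelling of method 2; it still reads the bytes in native order, so on a little-endian machine it prints the same swapped value:

#include <bitset>
#include <cstring>
#include <iostream>

int main() {
    unsigned char b[4] = {0b11000000, 0b00110000, 0b00001100, 0b00000011};

    // Copy the raw bytes into the integer instead of aliasing them.
    // Well-defined, but still in the machine's native byte order, so a
    // little-endian CPU shows the same "swapped" result as method 2.
    unsigned int v;
    std::memcpy(&v, b, sizeof v);
    std::cout << std::bitset<32>(v) << '\n' << v << '\n';
}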
I have a struct with bit-field members in it. I am trying to convert it into the uint8_t type. I am able to do the cast, but I cannot see the output; please tell me where I am going wrong. I also know there are other ways to do it; I am trying to do it this way because I want to get used to static_cast and reinterpret_cast.
Code is below:
#include <iostream>
#include <cstdint>
using namespace std;

int main()
{
    struct xs {
        bool x : 1;
        bool y : 1;
        bool z : 1;
        uint8_t num : 5;
    } zs;

    uint8_t* P = static_cast<uint8_t*>(static_cast<void*>(&zs));
    cout << *P << endl;
    return 0;
}
There are a lot of problems here:
You seem to believe that x, y, and z will all pack into the single uint8_t. This is not guaranteed: "Adjacently declared bit fields of the same type can then be packed by the compiler into a reduced number of words"[1]
"The value of sizeof(bool) is implementation defined and might differ from 1"[2] Therefore the size of your xs is implementation-defined, and certainly not guaranteed to equal sizeof(uint8_t).
Because xs is not "similar" to uint8_t according to the rules defined for C++'s type aliasing, the behavior of your cast (the static_cast chain through void* is equivalent to a reinterpret_cast<uint8_t*>) is undefined.
Finally, as many others have pointed out, the reason you can't see anything is that whatever implementation-defined value is at *P is most likely a control character with no visible representation when cout treats it as a char.
A possible workaround to the code you have would be to use these definitions:
constexpr uint8_t X = 0B1000'0000;
constexpr uint8_t Y = 0B0100'0000;
constexpr uint8_t Z = 0B0010'0000;
constexpr uint8_t NUM = 0B0001'1111;
uint8_t zs;
Then, given that some value has been assigned to zs, you can use these expressions to output the former bit fields:
cout << ((zs & X) != 0) << endl;
cout << ((zs & Y) != 0) << endl;
cout << ((zs & Z) != 0) << endl;
cout << (zs & NUM) << endl;
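Pieced together into a minimal, self-contained sketch (the packed value 0B1010'0011 assigned to zs is just a hypothetical example):

#include <cstdint>
#include <iostream>
using std::cout; using std::endl;

constexpr uint8_t X   = 0B1000'0000;
constexpr uint8_t Y   = 0B0100'0000;
constexpr uint8_t Z   = 0B0010'0000;
constexpr uint8_t NUM = 0B0001'1111;

int main() {
    uint8_t zs = 0B1010'0011;          // hypothetical value: x=1, y=0, z=1, num=3
    cout << ((zs & X) != 0) << endl;   // 1
    cout << ((zs & Y) != 0) << endl;   // 0
    cout << ((zs & Z) != 0) << endl;   // 1
    cout << (zs & NUM) << endl;        // 3
}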
It is undefined behavior to access an object through a pointer to a type other than the type of the object. It works with most compilers, but technically your program is allowed to do whatever it wants.
Assuming we are not running into the problem mentioned in 1, my guess is that uint8_t is an alias for char, so cout will put a char onto your console.
You did not initialize the memory you are inspecting, so it may well be 0. A char with value zero won't print anything "observable" on your console; look at the ASCII table. Try filling your struct with a 50, for example, as sketched below.
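A minimal sketch of that suggestion (50 is ASCII '2', so the byte becomes printable; the cast is still technically undefined behavior, as noted above):

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    struct xs {
        bool x : 1;
        bool y : 1;
        bool z : 1;
        uint8_t num : 5;
    } zs;

    // Fill the struct's storage with the byte 50 (ASCII '2') so it is visible.
    std::memset(&zs, 50, sizeof zs);
    uint8_t* P = static_cast<uint8_t*>(static_cast<void*>(&zs));
    std::cout << *P << std::endl;  // prints '2' instead of an invisible control character
}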
There are two things that need to be solved here. First off, as some pointed out, you cannot access your object this way. If you want to construct a uint8_t properly, you would need to read the variables inside your struct and do some bit shifts, something like this:
uint8_t value = 0;
value |= (zs.x ? 1 << 7 : 0);
value |= (zs.y ? 1 << 6 : 0);
value |= (zs.z ? 1 << 5 : 0);
value |= zs.num;
Now, the second problem you are facing is that you're trying to output a number that is 8 bits wide. By default this is interpreted as a 'character' and displayed as such. To accomplish what you want, you can either use a variable of a wider type (uint16_t, uint32_t, ...) or use std::to_string.
cout << std::to_string(value) << endl;
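Alternatively, widening the value at the point of output also works; a one-line sketch of the wider-type option mentioned above:

// Cast up to a wider integer so cout prints digits rather than a character.
cout << static_cast<unsigned int>(value) << endl;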
I'm working on an Arduino project. I'm trying to pass a byte pointer to a function, and let that function calculate the size of the data that the pointer refers to. But when I let the pointer refer to a byte, sizeof() returns 2. I wrote the following snippet to try to debug:
byte b;
byte *byteptr;
byteptr = &b;
print("sizeof(b): ");
println(sizeof(b));
print("sizeof(*byteptr) pointing to byte: ");
println(sizeof(*byteptr));
print("sizeof(byteptr) pointing to byte: ");
println(sizeof(byteptr));
the printed result is:
sizeof(b): 1
sizeof(*byteptr) pointing to byte: 1
sizeof(byteptr) pointing to byte: 2
So the size of a byte is 1, but via the pointer it's 2??
It appears that on Arduino, pointers are 16 bits. I believe your confusion stems from what * means in this context.
sizeof(*byteptr) is equivalent to sizeof(byte). The * does not indicate a pointer type; it dereferences the pointer stored in byteptr. Ergo, it is 1 byte, which is what you would expect for the type byte.
sizeof(byteptr) does not dereference the pointer, and as such gives the size of the pointer itself, which on this system appears to be 2 bytes/16 bits.
Consider the following:
#include "iostream"
using namespace std;
int main()
{
char a = 1;
char* aP = &a;
cout << "sizeof(char): " << sizeof(char) << endl;
cout << "sizeof(char*): " << sizeof(char*) << endl;
cout << "sizeof(a): " << sizeof(a) << endl;
cout << "sizeof(aP): " << sizeof(aP) << endl;
cout << "sizeof(*aP): " << sizeof(*aP) << endl;
}
Output (on a 64 bit OS/compiler):
sizeof(char): 1
sizeof(char*): 8
sizeof(a): 1
sizeof(aP): 8
sizeof(*aP): 1
@Maseb I think you've gotten a good discussion of the differences between the size of a dereferenced pointer and the size of the pointer itself. I'll just add that sizeof(byte_pointer) must be large enough that every address at which a byte value could be stored fits into the pointer. For example, if there are 32,768 bytes of storage on your Arduino, a pointer may need to hold an address up to 32,767. Since 2^15 = 32,768, you need 15 bits to give each storage location a unique address. Pointer sizes are rounded up to whole bytes, so your Arduino uses a 16-bit address space for pointers and sizeof(byte_pointer) is 2 bytes, or 16 bits.
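A rough sketch of that arithmetic in plain C++ (not Arduino-specific; the 32,768-location figure is just the example above):

#include <iostream>

int main() {
    unsigned long locations = 32768;           // example: 32 KB of addressable storage
    int bits = 0;
    while ((1ul << bits) < locations) ++bits;  // smallest width covering every address
    std::cout << bits << " address bits -> "
              << (bits + 7) / 8 << "-byte pointer\n";  // 15 address bits -> 2-byte pointer
}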
With that said, I'll go ahead and answer your other question too. If you need to pass an array and a size, just create your own struct that includes both of those data elements. Then you can pass a pointer to this templated struct, which includes the size (this is the basic idea behind the C++ std::array container).
I've written the short code sample below to demonstrate how to create your own template for an array with a size element and then use that size element to iterate over the elements.
template<int N>
struct My_Array {
    int size = N;
    int elem[N];
};

//create the pointer to the struct
My_Array<3>* ma3 = new My_Array<3>;

void setup() {
    //now fill the array elements
    for(int i = 0; i < ma3->size; i++) {
        ma3->elem[i] = i;
    }
    Serial.begin(9600);
    //now you can use the size value to iterate over the elements
    Serial.print("ma3 is this big: ");
    Serial.println(ma3->size);
    Serial.println("The array values are:");
    Serial.print("\t[");
    for(int i = 0; i < ma3->size; i++) {
        Serial.print(ma3->elem[i]);
        if(i < ma3->size - 1) Serial.print(", ");
    }
    Serial.println("]");
}

void loop() {
    while(true) { /* do nothing */ }
}
#include <iostream>
using namespace std;

int main() {
    bool *a = new bool[10];
    cout << sizeof(bool) << endl;
    cout << sizeof(a[0]) << endl;
    for (int i = 0; i < 10; i++) {
        cout << a[i] << " ";
    }
    delete[] a;
}
The above code outputs:
1
1
112 104 151 0 0 0 0 0 88 1
The last line should contain garbage values, but why are they not all 0 or 1? The same thing happens for a stack-allocated array.
Solved: I forgot that sizeof counts bytes, not bits as I thought.
You have an array of default-initialized bools. Default-initialization of primitive types entails no initialization, so they all have indeterminate values.
You can zero-initialize them by providing a pair of parentheses:
bool *a = new bool[10]();
bool is a 1-byte integral type, so the output you're seeing is probably whatever data happened to be in that memory at the moment, viewed one byte at a time. Notice that all the values are at most 255 (the largest number an unsigned 1-byte integer can hold).
OTOH, printing out an indeterminate value is Undefined Behavior, so there really is no logic to consider in this program.
sizeof(bool) on your machine returns 1.
That's 1 byte, not 1 bit, so the values you show can certainly be present.
What you are seeing is uninitialized values; different compilers generate different code. With GCC I see everything as 0; on Windows I see junk values.
Generally, char is the smallest byte-addressable unit. Even though a bool holds a 1/0 value, memory-access-wise it occupies a char, so you will never see a junk value greater than 255.
Initializing the memory first fixes things for you (memset):
#include <iostream>
#include <cstring>
using namespace std;

int main() {
    bool* a = new bool[10];
    memset(a, 0, 10 * sizeof(bool));
    cout << sizeof(bool) << endl;
    cout << sizeof(a[0]) << endl;
    for (int i = 0; i < 10; ++i)
    {
        bool b = a[i];
        cout << b << " ";
    }
    delete[] a;
    return 0;
}
Formally speaking, as pointed out in this answer, reading any uninitialized variable is undefined behaviour, which basically means everything is possible.
More practically, the memory used by those bools is filled with what you called garbage. ostream's operator<< inserts booleans via std::num_put::put(), which, if boolalpha is not set, converts the value to an int and outputs the result.
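For illustration, setting boolalpha makes the stream print the words instead of the converted int (a minimal sketch):

#include <iostream>

int main() {
    bool b = true;
    std::cout << b << '\n';                    // prints 1 (value converted to int)
    std::cout << std::boolalpha << b << '\n';  // prints "true"
}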
I do not know why you put a * sign before the variable a. Is it a pointer to the address of the first element of the array?
Why does the sizeof operator not return the same size when it is used on the struct itself?
I need to cast it because of a winsock program that I'm working on.
Thanks for any help, true.
#include <iostream>
#include <string>
#include <cstdio> // for getchar
using namespace std;

struct stringstruct
{
    string s1;
    string s2;
};

int main()
{
    stringstruct ss = {"123", "abc"};
    char *NX = (char*)&ss;
    cout << sizeof(NX) << endl << sizeof(*NX) << endl;
    cout << sizeof(&ss) << endl << sizeof(ss) << endl;
    getchar();
    return 0;
}
the example above outputs
4
1
4
64
sizeof will tell you the size of the given expression's type. For both sizeof(NX) and sizeof(&ss), the result is 4 because pointers on your machine take up 4 bytes. For sizeof(*NX), you are dereferencing a char*, which gives you a char, and a char takes up 1 byte (always), so you get the output 1. When you do sizeof(ss), ss is a stringstruct, so you get the size of a stringstruct, which appears to be 64 bytes.
stringstruct ss = {"123","abc"};
char *NX = (char*)&ss;
cout << sizeof(NX) << endl << sizeof(*NX) << endl;
cout << sizeof(&ss) << endl << sizeof(ss) << endl;
I'm pretty sure all of these casts are pretty meaningless. NX will point at the beginning of your struct. Inside the struct are two objects of type string, which in turn hold pointers to the data they were initialized with, "123" and "abc" respectively. sizeof(*NX) is just that: the size of a char. sizeof(NX) is indeed the size of a pointer. sizeof(ss) is the size of your two string members (plus any padding added by the compiler), and sizeof(&ss) is the size of a pointer to a stringstruct.
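A short sketch of that distinction (the exact numbers are implementation-specific):

#include <iostream>
#include <string>

int main() {
    // A long initializer defeats the small-string optimization (see [1]),
    // so the characters live on the heap, outside the string object itself.
    std::string s = "a string long enough to live on the heap, not inline";
    std::cout << sizeof(s) << '\n';  // fixed object footprint (e.g. 32 on many 64-bit systems)
    std::cout << s.size() << '\n';   // number of characters actually held
}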
Now, I expect what you REALLY want is a way to send your data, "123" and "abc", as two separate strings over a network. None of the above will help you do that, since even if sizeof(ss) gives you the size of the data structure you want to send, the string values are not within that structure [1]. What you really need is something called serialization: something that writes out your strings as separate elements of text.
Something like this would work:
struct stringstruct {
    string s1;
    string s2;
    string to_string();
};

string stringstruct::to_string()
{
    string res = s1 + " " + s2;
    return res;
}
Then use to_string like this:
string temp = ss.to_string();
const char *to_send = temp.c_str();
int send_len = temp.length();
... send the string `to_send` with number of bytes `send_len`.
[1] There is an optimization where std::string actually stores short strings within the class object itself. But given a sufficiently long string, it won't do that.
A pointer has size 4 in your case (you appear to be on a 32-bit platform) no matter what it points to. sizeof applied to the object itself, on the other hand, returns the real number of bytes that an object of that structure type occupies.