Why is gcc returning 13 as the sizeof of the following class?
It seems to me that we should get e (4 bytes) + d (4 bytes) + 1 byte (for a and b) = 9 bytes. If it were alignment, aren't most 32-bit systems aligned on 8-byte boundaries?
#include <iostream>
using namespace std;

class A {
    unsigned char a:1;
    unsigned char b:4;
    unsigned int d;
    A* e;
} __attribute__((__packed__));

int main( int argc, char *argv[] )
{
    cout << sizeof(A) << endl;
}
./a.out
13
You are very likely running on a 64-bit platform, where the size of a pointer is not 4 but 8 bytes. Just do sizeof(A*) and print it out.
The actual size of structs with bit fields is implementation-defined, so whatever size gcc decides on is correct.
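To see this for yourself, here is a minimal sketch that prints the pointer size next to the class size (the exact numbers depend on your platform and compiler):

#include <iostream>

class A {
    unsigned char a:1;
    unsigned char b:4;
    unsigned int d;
    A* e;
} __attribute__((__packed__));

int main()
{
    std::cout << "sizeof(A*) = " << sizeof(A*) << std::endl;  // 8 on a typical 64-bit platform
    std::cout << "sizeof(A)  = " << sizeof(A) << std::endl;   // 1 + 4 + 8 = 13 when packed
}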
I came across this syntax for reading a BMP file in C++
#include <cstddef>
#include <fstream>

int main() {
    std::ifstream in("filename.bmp", std::ifstream::binary);
    in.seekg(0, in.end);
    std::size_t size = in.tellg();
    in.seekg(0);
    unsigned char * data = new unsigned char[size];
    in.read((char *)data, size);
    int width = *(int*)&data[18];
    // omitted remainder for minimal example
}
and I don't understand what the line
int width = *(int*)&data[18];
is actually doing. Why doesn't a simple cast from unsigned char * to int, int width = (int)data[18];, work?
Note
As @user4581301 indicated in the comments, this depends on the implementation and will fail in many instances. And as @NathanOliver - Reinstate Monica and @ChrisMM pointed out, this is Undefined Behavior and the result is not guaranteed.
According to the bitmap header format, the width of the bitmap in pixels is stored as a signed 32-bit integer starting at byte offset 18. The syntax
int width = *(int*)&data[18];
reads the four bytes at offsets 18 through 21, inclusive (assuming a 32-bit int), and interprets the result as an integer.
How?
&data[18] gets the address of the unsigned char at index 18
(int*) casts that address from unsigned char* to int*, so that dereferencing it reads sizeof(int) bytes rather than a single byte
*(int*) dereferences the address to get the int value stored there
So basically, it takes the address of data[18] and reads the bytes at that address as if they were an integer.
Why doesn't a simple cast to `int` work?
sizeof(data[18]) is 1, because unsigned char is one byte (0 to 255). sizeof(&data[18]), on the other hand, is the size of a pointer: 4 bytes on a 32-bit system and 8 bytes on a 64-bit one (it can vary, but outside of 16-bit systems it is at least 4). Dereferencing that address through an (int*) reads 4 bytes at once, and indeed exactly the 4 bytes at offsets 18 through 21, inclusive. A simple cast from unsigned char to int also yields a 4-byte value, but it carries only one byte of the information from data. This is illustrated by the following example:
#include <bitset>
#include <iostream>
#include <string>

int main() {
    unsigned char data[32] = {};  // stand-in for the bytes read from the file

    // Populate 18-21 with a recognizable pattern for demonstration
    std::bitset<8> _bits(std::string("10011010"));
    unsigned long bits = _bits.to_ulong();
    for (int ii = 18; ii < 22; ii++) {
        data[ii] = static_cast<unsigned char>(bits);
    }

    std::cout << "data[18] -> 1 byte "
              << std::bitset<32>(data[18]) << std::endl;
    std::cout << "*(unsigned short*)&data[18] -> 2 bytes "
              << std::bitset<32>(*(unsigned short*)&data[18]) << std::endl;
    std::cout << "*(int*)&data[18] -> 4 bytes "
              << std::bitset<32>(*(int*)&data[18]) << std::endl;
}
data[18] -> 1 byte 00000000000000000000000010011010
*(unsigned short*)&data[18] -> 2 bytes 00000000000000001001101010011010
*(int*)&data[18] -> 4 bytes 10011010100110101001101010011010
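Because of the undefined behaviour mentioned in the note above, if you want a read that the standard actually guarantees, you can copy the four bytes into an integer with std::memcpy instead of casting the pointer. A minimal sketch; read_width is a hypothetical helper, and it still assumes the file's little-endian 32-bit layout matches the host, just as the original cast does:

#include <cstdint>
#include <cstring>

// Hypothetical helper: assumes 'data' holds at least 22 bytes of the BMP header.
std::int32_t read_width(const unsigned char* data)
{
    std::int32_t width;
    std::memcpy(&width, &data[18], sizeof(width));  // copy offsets 18..21 into an int
    return width;
}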
I am trying to convert 4 bytes to an integer using C++.
This is my code:
int buffToInteger(char * buffer)
{
    int a = (int)(buffer[0] << 24 | buffer[1] << 16 | buffer[2] << 8 | buffer[3]);
    return a;
}
The code above works in almost all cases, for example:
When my buffer is "[\x00, \x00, \x40, \x00]" the code returns 16384 as expected.
But when the buffer is filled with "[\x00, \x00, \x3e, \xe3]", the code doesn't work as expected and returns "ffffffe3".
Does anyone know why this happens?
Your buffer contains signed characters. So, actually, buffer[3] == -29 (the byte 0xe3), which upon conversion to int gets sign-extended to 0xffffffe3, and in turn (0x3e << 8) | 0xffffffe3 == 0xffffffe3.
You need to ensure your individual buffer bytes are interpreted as unsigned, either by declaring buffer as unsigned char *, or by explicitly casting:
int a = int((unsigned char)(buffer[0]) << 24 |
            (unsigned char)(buffer[1]) << 16 |
            (unsigned char)(buffer[2]) << 8 |
            (unsigned char)(buffer[3]));
In the expression buffer[0] << 24 the operands of << undergo integer promotion, so buffer[0] is converted to an int before the shift is performed.
On your system a char is apparently signed, so it is sign-extended when converted to int.
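A minimal illustration of that sign extension, assuming char is signed on your platform:

#include <cstdio>

int main()
{
    char c = static_cast<char>(0xe3);   // -29 where char is signed
    int  i = c;                         // sign-extended to 0xffffffe3
    std::printf("%d -> %08x\n", c, static_cast<unsigned int>(i));  // prints "-29 -> ffffffe3"
}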
There's an implicit promotion to a signed int in your shifts.
That's because char is (apparently) signed on your platform (the common case) and << promotes its operands to int implicitly. In fact, none of this would work otherwise, because << 8 (and higher) would shift all the bits out of a plain char!
If you're stuck with using a buffer of signed chars this will give you what you want:
#include <iostream>
#include <iomanip>

int buffToInteger(char * buffer)
{
    int a = static_cast<int>(static_cast<unsigned char>(buffer[0]) << 24 |
                             static_cast<unsigned char>(buffer[1]) << 16 |
                             static_cast<unsigned char>(buffer[2]) << 8 |
                             static_cast<unsigned char>(buffer[3]));
    return a;
}

int main(void) {
    char buff[4] = {0x0, 0x0, 0x3e, static_cast<char>(0xe3)};
    int a = buffToInteger(buff);
    std::cout << std::hex << a << std::endl;
    return 0;
}
Be careful about bit shifting on signed values. Promotions don't just widen the value; they can change it through sign extension.
For example, a gotcha here is that you can't use static_cast<unsigned int>(buffer[1]) (etc.) directly, because that converts the signed char value to a full-width unsigned int (modulo 2^32), so a negative byte such as -29 already comes out as 0xffffffe3 with all of its high bits set.
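A minimal sketch of that gotcha, again assuming a platform where char is signed:

#include <cstdio>

int main()
{
    char c = static_cast<char>(0xe3);                       // -29 where char is signed
    unsigned int wide   = static_cast<unsigned int>(c);     // 0xffffffe3: widened first, high bits set
    unsigned int narrow = static_cast<unsigned char>(c);    // 0x000000e3: narrowed to one byte first
    std::printf("%08x %08x\n", wide, narrow);
}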
If you ask me, all implicit numeric conversions are bad. No program should have so many of them that writing them out explicitly would become a chore. They are a softness C++ inherited from C, and they cause all sorts of problems that far exceed their value.
It's even worse in C++, because they make the already confusing overloading rules even more confusing.
I think this could also be done with memcpy:
#include <cstring>

int buffToInteger(char* buffer)
{
    int a;
    std::memcpy(&a, buffer, sizeof(int));
    return a;
}
This is at least as simple and fast as the example in the original post, because it just treats all the bytes "as is" and there is no need for operations such as bit shifts.
It also doesn't cause any signed/unsigned issues. Note, however, that it reads the bytes in the host's native byte order, whereas the shifting version always interprets the buffer as big-endian.
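To make the byte-order caveat concrete, here is a small sketch comparing the memcpy result with the shifting version on the questioner's second buffer (it assumes a 4-byte int, and the memcpy value shown in the comment assumes a little-endian host):

#include <cstdio>
#include <cstring>

int main()
{
    const char buff[4] = {0x00, 0x00, 0x3e, static_cast<char>(0xe3)};

    int copied;
    std::memcpy(&copied, buff, sizeof(copied));   // bytes taken in the host's native order

    int shifted = static_cast<unsigned char>(buff[0]) << 24 |   // explicit big-endian interpretation
                  static_cast<unsigned char>(buff[1]) << 16 |
                  static_cast<unsigned char>(buff[2]) << 8  |
                  static_cast<unsigned char>(buff[3]);

    std::printf("memcpy: %08x  shifts: %08x\n",
                static_cast<unsigned int>(copied), static_cast<unsigned int>(shifted));
    // On a little-endian machine: "memcpy: e33e0000  shifts: 00003ee3"
}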
char buffer[4];
int a;
a = *(int*)&buffer;
This takes the address of the buffer, casts it to an int pointer and then dereferences it.
int buffToInteger(char * buffer)
{
return *reinterpret_cast<int*>(buffer);
}
This conversion is simple and fast. We simply tell the compiler to treat the byte array in memory as a single integer.
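A usage sketch; note that this version assumes the buffer is suitably aligned for int and that, strictly speaking, the read is not blessed by the strict-aliasing rule, which is why the memcpy version above is the safer choice:

#include <iostream>

int buffToInteger(char* buffer)
{
    return *reinterpret_cast<int*>(buffer);
}

int main()
{
    // alignas keeps the array aligned like an int; the result uses the host's byte order
    alignas(int) char buff[4] = {0x00, 0x00, 0x3e, static_cast<char>(0xe3)};
    std::cout << std::hex << buffToInteger(buff) << std::endl;
}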
This question already has an answer here:
Create a 10-bit data type in C/C++ [closed]
Is it possible to define an odd-sized data type, such as 10-bit or 12-bit, instead of the standard types, using a typedef in C++?
You can use a bitfield for that:
struct bit_field
{
unsigned x: 10; // 10 bits
};
and use it like
bit_field b;
b.x = 15;
Example:
#include <iostream>

struct bit_field
{
    unsigned x: 10; // 10 bits
};

int main()
{
    bit_field b;
    b.x = 1023;
    std::cout << b.x << std::endl;
    b.x = 1024; // now we overflow our 10 bits
    std::cout << b.x << std::endl;
}
AFAIK, there is no way of having a bitfield outside a struct, i.e.
unsigned x: 10;
by itself is invalid.
Sort of, if you use bit fields. However, bear in mind that bit fields are still packed within some underlying integer type. In the example pasted below, both has_foo and foo_count are "packed" inside an unsigned int, which on my machine uses four bytes.
#include <stdio.h>

struct data {
    unsigned int has_foo : 1;
    unsigned int foo_count : 7;
};

int main(int argc, char* argv[])
{
    data d;
    d.has_foo = 1;
    d.foo_count = 42;
    printf("d.has_foo = %u\n", d.has_foo);
    printf("d.foo_count = %d\n", d.foo_count);
    printf("sizeof(d) = %zu\n", sizeof(d));
    return 0;
}
Use bit fields for this. I guess this should help: http://www.cs.cf.ac.uk/Dave/C/node13.html#SECTION001320000000000000000
This should be simple but I have no clue where to look for the issue:
I have a struct:
struct region
{
public:
long long int x;
long long int y;
long long int width;
long long int height;
unsigned char scale;
};
When I do sizeof(region) it gives me 40 when I am expecting 33.
Any ideas?
(mingw gcc, win x64 os)
The compiler is padding the struct out to an 8-byte boundary, so it really is taking 40 bytes in memory - sizeof is returning the correct value.
If you want it to only take 33 bytes then specify the packed attribute:
struct region
{
public:
long long int x;
long long int y;
long long int width;
long long int height;
unsigned char scale;
} __attribute__ ((packed));
long long int values are 8 bytes each. scale is only 1 byte, but it is padded for alignment, so it effectively takes up 8 bytes too: 5*8 = 40.
As others have said, structs are padded for alignment, and that padding depends not only on the types of the members but also on the order in which they are defined.
For example, consider the two structs A and B defined below. Both structs are identical in terms of members and types; the only difference is the order in which the members are defined:
struct A
{
int i;
int j;
char c;
char d;
};
struct B
{
int i;
char c;
int j;
char d;
};
Would sizeof(A) be equal to sizeof(B) just because they have the same number of members of the same types? No. Try printing the size of each:
cout << "sizeof(A) = "<< sizeof(A) << endl;
cout << "sizeof(B) = "<< sizeof(B) << endl;
Output:
sizeof(A) = 12
sizeof(B) = 16
Surprised? See the output yourself: http://ideone.com/yCX4S
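If you want to see exactly where the padding goes, you can also print the member offsets with offsetof. A minimal sketch; the offsets in the comments are what a typical implementation with 4-byte int and 4-byte alignment produces:

#include <cstddef>
#include <iostream>

struct A { int i; int j; char c; char d; };  // same layout as above
struct B { int i; char c; int j; char d; };  // same layout as above

int main()
{
    std::cout << "A: i=" << offsetof(A, i) << " j=" << offsetof(A, j)
              << " c=" << offsetof(A, c) << " d=" << offsetof(A, d)
              << " size=" << sizeof(A) << '\n';   // typically 0 4 8 9, size 12
    std::cout << "B: i=" << offsetof(B, i) << " c=" << offsetof(B, c)
              << " j=" << offsetof(B, j) << " d=" << offsetof(B, d)
              << " size=" << sizeof(B) << '\n';   // typically 0 4 8 12, size 16
}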
Possible Duplicate:
What is the difference between an int and a long in C++?
#include <iostream>

int main()
{
    std::cout << sizeof(int) << std::endl;
    std::cout << sizeof(long int) << std::endl;
}
Output:
4
4
How is this possible? Shouldn't long int be bigger in size than int?
The guarantees you have are:
sizeof(int) <= sizeof(long)
sizeof(int) * CHAR_BIT >= 16
sizeof(long) * CHAR_BIT >= 32
CHAR_BIT >= 8
All these conditions are met with:
sizeof(int) == 4
sizeof(long) == 4
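You can check these guarantees, and see your platform's actual choices, with a few lines; a minimal sketch:

#include <climits>
#include <iostream>

// The standard's guarantees, expressed as compile-time checks
static_assert(sizeof(int) <= sizeof(long), "int must not be larger than long");
static_assert(sizeof(int) * CHAR_BIT >= 16, "int must have at least 16 bits");
static_assert(sizeof(long) * CHAR_BIT >= 32, "long must have at least 32 bits");
static_assert(CHAR_BIT >= 8, "a byte must have at least 8 bits");

int main()
{
    std::cout << "int: " << sizeof(int) << " bytes, "
              << "long: " << sizeof(long) << " bytes, "
              << "CHAR_BIT: " << CHAR_BIT << '\n';
}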
The C++ language has never guaranteed or required long int to be bigger than int. The only thing the language promises is that long int is not smaller than int. In many popular implementations long int has the same size as int.
It depends on the implementation and the platform. In ISO/ANSI C, for example, the long integer type is 8 bytes on 64-bit Unix systems and 4 bytes on most other OSes/platforms.
No. It's valid under the C standard for int to be the same size as short int, for int to be the same size as long int, for int not to be the same as either long int or short int, or even for all three to be the same size. On 16-bit machines it was common for sizeof(int) == sizeof(short int) == 2 and sizeof(long int) == 4, but the most common arrangement on 32-bit machines is sizeof(int) == sizeof(long int) == 4 and sizeof(short int) == 2. And on 64-bit machines you may find sizeof(short int) == 2, sizeof(int) == 4, and sizeof(long int) == 8.
See http://bytes.com/topic/c/answers/163333-sizeof-int-sizeof-long-int.
No, nothing is wrong. The size of the data types is generally implementation- and hardware-dependent (the exception being char, which is always exactly 1 byte). With gcc targeting a 32-bit platform, int and long int both require 4 bytes. You can also try sizeof(long long int) and sizeof(short int) and compare the results.