I am working on refactoring some old code and have found a few structs containing zero-length arrays (below). Warnings are suppressed by pragma, of course, but I've failed to create structures containing such structures with "new" (error C2233). The array 'byData' is used as a pointer, so why not use a pointer instead? Or an array of length 1? And of course, no comments were added to make me enjoy the process...
Are there any reasons to use such a thing? Any advice on refactoring it?
struct someData
{
int nData;
BYTE byData[0];
};
NB It's C++, Windows XP, VS 2003
Yes, this is a C hack.
To create an array of any length:
struct someData* mallocSomeData(int size)
{
    struct someData* result = (struct someData*)malloc(sizeof(struct someData) + size * sizeof(BYTE));
    if (result)
    {
        result->nData = size;
    }
    return result;
}
Now you have an object of someData with an array of a specified length.
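For illustration, a minimal usage sketch (the object came from malloc, so the matching deallocation is free, not delete):

struct someData* sd = mallocSomeData(100);  // room for 100 bytes in byData
if (sd)
{
    sd->byData[0] = 0x42;              // first byte of the trailing storage
    sd->byData[sd->nData - 1] = 0xFF;  // last byte we allocated room for
}
free(sd);                              // free(NULL) is harmless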
There are, unfortunately, several reasons why you would declare a zero length array at the end of a structure. It essentially gives you the ability to have a variable length structure returned from an API.
Raymond Chen did an excellent blog post on the subject. I suggest you take a look at this post because it likely contains the answer you want.
Note in his post, it deals with arrays of size 1 instead of 0. This is the case because zero length arrays are a more recent entry into the standards. His post should still apply to your problem.
http://blogs.msdn.com/oldnewthing/archive/2004/08/26/220873.aspx
EDIT
Note: even though Raymond's post says zero-length arrays are legal in C99, they are in fact still not legal in C99. Instead of a zero-length array you should be using a length-1 array.
This is an old C hack to allow flexible-sized arrays.
In the C99 standard this is not necessary, as it supports the arr[] syntax (flexible array members).
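For reference, the C99 form looks like this (a flexible array member has no size at all; note this is a C99 feature, and C++ compilers accept it only as an extension):

struct someData
{
    int nData;
    BYTE byData[];   /* C99 flexible array member */
};

/* allocation still works the same way: */
/* malloc(sizeof(struct someData) + n * sizeof(BYTE)) */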
Your intuition about "why not use an array of size 1" is spot on.
The code is doing the "C struct hack" wrong, because declarations of zero length arrays are a constraint violation. This means that a compiler can reject your hack right off the bat at compile time with a diagnostic message that stops the translation.
If we want to perpetrate a hack, we must sneak it past the compiler.
The right way to do the "C struct hack" (which is compatible with C dialects going back to 1989 ANSI C, and probably much earlier) is to use a perfectly valid array of size 1:
struct someData
{
int nData;
unsigned char byData[1];
};
Moreover, instead of sizeof (struct someData), the size of the part before byData is calculated using:
offsetof(struct someData, byData);
To allocate a struct someData with space for 42 bytes in byData, we would then use:
struct someData *psd = (struct someData *) malloc(offsetof(struct someData, byData) + 42);
Note that this offsetof calculation is in fact the correct calculation even in the case of the array size being zero. You see, sizeof the whole structure can include padding. For instance, if we have something like this:
struct hack {
unsigned long ul;
char c;
char foo[0]; /* assuming our compiler accepts this nonsense */
};
The size of struct hack is quite possibly padded for alignment because of the ul member. If unsigned long is four bytes wide, then quite possibly sizeof (struct hack) is 8, whereas offsetof(struct hack, foo) is almost certainly 5. The offsetof method is the way to get the accurate size of the preceding part of the struct just before the array.
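A quick sketch to see the two numbers side by side (the exact values are implementation-specific; the size-1 variant is used so it compiles everywhere):

#include <stddef.h>
#include <stdio.h>

struct hack {
    unsigned long ul;
    char c;
    char foo[1];   /* size-1 variant of the struct hack */
};

int main(void)
{
    printf("sizeof(struct hack)        = %zu\n", sizeof(struct hack));        /* often 8 */
    printf("offsetof(struct hack, foo) = %zu\n", offsetof(struct hack, foo)); /* typically 5 */
    return 0;
}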
So that would be the way to refactor the code: make it conform to the classic, highly portable struct hack.
Why not use a pointer? Because a pointer occupies extra space and has to be initialized.
There are other good reasons not to use a pointer, namely that a pointer requires an address space in order to be meaningful. The struct hack is externalizable: that is to say, there are situations in which such a layout conforms to external storage such as areas of files, packets or shared memory, in which you do not want pointers because they are not meaningful.
Several years ago, I used the struct hack in a shared memory message passing interface between kernel and user space. I didn't want pointers there, because they would have been meaningful only to the original address space of the process generating a message. The kernel part of the software had a view to the memory using its own mapping at a different address, and so everything was based on offset calculations.
It's worth pointing out IMO the best way to do the size calculation, which is used in the Raymond Chen article linked above.
struct foo
{
size_t count;
int data[1];
};
size_t foo_size_from_count(size_t count)
{
return offsetof(foo, data[count]);
}
The offset of the first entry past the end of the desired allocation is also the size of the desired allocation. IMO it's an extremely elegant way of doing the size calculation. It does not matter what the element type of the variable-size array is. The offsetof (or FIELD_OFFSET or UFIELD_OFFSET in Windows) is always written the same way. No sizeof() expressions to accidentally mess up.
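For completeness, a hedged usage sketch (assuming foo_size_from_count as defined above and a plain malloc allocation):

size_t count = 42;
foo *p = (foo *) malloc(foo_size_from_count(count));
if (p)
{
    p->count = count;
    p->data[0] = 1;           // first element
    p->data[count - 1] = 99;  // last element we allocated room for
    free(p);
}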
Related
I often see structures in code that have a memory reserve at the end.
struct STAT_10K4
{
int32_t npos; // position number
...
float Plts;
float Pxts;
float Plto[NUM];
uint32_t reserv[(NUM * 3) % 2 + 1];
};
Why do they do this?
Why are some of the reserve values dependent on constants?
What can happen if you do not make such reserves? Or make a mistake in their size?
This is a form of manual padding of a class to make its size a multiple of some number. In your case:
uint32_t reserv [(NUM * 3)% 2 + 1];
NUM * 3 % 2 is actually nonsensical, as it is equivalent to NUM % 2 (overflow aside). So if NUM is odd, the struct gets one extra uint32_t of padding on top of the +1 element that is always there. This padding means that STAT_10K4's size is always a multiple of 8 bytes.
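If that invariant matters, it is easy to document and enforce with a compile-time check (a sketch, assuming C++11's static_assert or the C11 _Static_assert is available and NUM is visible at that point):

static_assert(sizeof(STAT_10K4) % 8 == 0,
              "STAT_10K4 must remain a multiple of 8 bytes");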
You will have to consult the documentation of your software to see why exactly this is done. Perhaps padding this struct with up to 8 bytes makes some algorithm easier to implement. Or maybe it has some perceived performance benefit. But this is pure speculation.
Typically, the compiler will pad your structs to 64-bit boundaries if you use any 64-bit types, so you don't need to do this manually.
Note: This answer is specific to mainstream compilers and x86. Obviously this does not apply to compiling for TI-calculators with 20-bit char & co.
This would typically be to support variable-length records. A couple of ways this could be used:
1. If the maximum number of records is known, then a simple structure definition can accommodate all cases.
2. In many protocols there is a "header-data" idiom. The header will be a fixed size but the data variable. The data will be received as a "blob". Thus the structure of the header can be declared and accessed via a pointer to the blob, and the data will follow on from that. For example:
typedef struct
{
uint32_t messageId;
uint32_t dataType;
uint32_t dataLenBytes;
uint8_t data[MAX_PAYLOAD];
} tsMessageFormat;
The data is received in a blob, so a void* ptr, size_t len.
The buffer pointer is then cast so the message can be read as follows:
tsMessageFormat* pMessage = (tsMessageFormat*) ptr;
for (int i = 0; i < pMessage->dataLenBytes; i++)
{
//do something with pMessage->data[i];
}
In some languages the "data" could be specified as being an empty record, but C++ does not allow this. Sometimes you will see the "data" omitted and you have to perform pointer arithmetic to access the data.
The alternative to this would be to use a builder pattern and/or streams.
Windows uses this pattern a lot; many structures have a cbSize field which allows additional data to be conveyed beyond the structure. The structure accommodates most cases, but having cbSize allows additional data to be provided if necessary.
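A sketch of that cbSize idiom (the structure and field names here are made up, not an actual Windows API):

#include <stdint.h>

typedef struct
{
    uint32_t cbSize;    // total size of the block the sender filled in
    uint32_t version;
    // newer versions append fields here; older readers simply ignore them
} tsHeader;

void readHeader(const void* blob)
{
    const tsHeader* h = (const tsHeader*) blob;
    if (h->cbSize >= sizeof(tsHeader))
    {
        // safe to read every field this version of the code knows about
    }
}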
I'm trying to cast a struct into a char vector.
I want to send my struct, cast into a std::vector<char>, through a UDP socket and cast it back on the other side. Here is my struct with the PACK attribute.
#define PACK( __Declaration__ ) __pragma( pack(push, 1) ) __Declaration__ __pragma( pack(pop) )
PACK(struct Inputs
{
uint8_t structureHeader;
int16_t x;
int16_t y;
Key inputs[8];
});
Here is test code:
auto const ptr = reinterpret_cast<char*>(&in);
std::vector<char> buffer(ptr, ptr + sizeof in);
//send and receive via udp
Inputs* my_struct = reinterpret_cast<Inputs*>(&buffer[0]);
The issue is:
Everything works fine except my uint8_t and int8_t fields. I don't know why, but whenever and wherever I put a 1-byte value in the struct, the value is not readable when I cast it back (the others are). If I use only 16-bit values it works just fine, even with maximum values, so all the bits are intact. I think this is something to do with the alignment of the bytes in memory, but I can't figure out how to make it work.
Thank you.
I'm trying to cast a struct into a char vector.
You cannot cast an arbitrary object to a vector. You can cast your object to an array of char and then copy that array into a vector (which is actually what your code is doing).
auto const ptr = reinterpret_cast<char*>(&in);
std::vector<char> buffer(ptr, ptr + sizeof in);
That second line defines a new vector and initializes it by copying the bytes that represent your object into it. This is reasonable, but it's distinct from what you said you were trying to do.
I think this is something with the alignment of the bytes in the memory
This is good intuition. If you hadn't told the compiler to pack the struct, it would have inserted padding bytes to ensure each field starts at its natural alignment. The fact that the operation isn't reversible suggests that somehow the receiving end isn't packed exactly the same way. Are you sure the receiving program has exactly the same packing directive and struct layout?
On x86, you can get by with unaligned data, but you may pay a large performance cost whenever you access an unaligned member variable. With the packing set to one, and the first field being odd-sized, you've guaranteed that the next fields will be unaligned. I'd urge you to reconsider this. Design the struct so that all the fields fall at their natural alignment boundaries and that you don't need to adjust the packing. This may make your struct a little bigger, but it will avoid all the alignment and performance problems.
If you want to omit the padding bytes in your wire format, you'll have to copy the relevant fields byte by byte into the wire format and then copy them back out on the receiving end.
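A sketch of that field-by-field approach (the helper name serialize is made up), assuming the Inputs struct above and ignoring byte order, which a real wire format would also have to pin down:

#include <vector>
#include <cstddef>

std::vector<char> serialize(const Inputs& in)
{
    std::vector<char> buf;
    auto append = [&buf](const void* p, std::size_t n) {
        const char* c = static_cast<const char*>(p);
        buf.insert(buf.end(), c, c + n);   // copy the raw bytes of one field
    };
    append(&in.structureHeader, sizeof in.structureHeader);
    append(&in.x,               sizeof in.x);
    append(&in.y,               sizeof in.y);
    append(&in.inputs,          sizeof in.inputs);
    return buf;
}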
An aside regarding:
#define PACK( __Declaration__ ) __pragma( pack(push, 1) ) __Declaration__ __pragma( pack(pop) )
Identifiers that begin with underscore and a capital letter or with two underscores are reserved for "the implementation," so you probably shouldn't use __Declaration__ as the macro's parameter name. ("The implementation" refers to the compiler, the standard library, and any other runtime bits the compiler requires.)
1. vector class has dynamically allocated memory and uses pointers inside. So you can't send the vector (but you can send the underlying array).
2. SFML has a great class for doing this called sf::packet. It's free, open source, and cross-platform.
I was recently working on a personal cross platform socket library for use in other personal projects and I eventually quit it for SFML. There's just TOO much to test, I was spending all my time testing to make sure stuff worked and not getting any work done on the actual projects I wanted to do.
3. memcpy is your best friend. It is designed to be portable, and you can use that to your advantage.
You can use it to debug. memcpy the thing you want to see into a char array and check that it matches what you expect.
4. To save yourself from having to do tons of robustness testing, limit yourself to only chars, 32-bit integers, and 64-bit doubles. Are you using different compilers? Struct packing is compiler- and architecture-dependent. If you have to use a packed struct, you need to guarantee that the packing works as expected on all platforms you will be using, and that all platforms have the same endianness. Obviously, that's what you're having trouble with, and I'm sorry I can't help you more with that. I would recommend regular serialization and would definitely avoid struct packing if I were trying to write portable sockets.
If you can make those guarantees that I mentioned, sending is really easy on LINUX.
// POSIX
void send(int fd, Inputs& input)
{
int error = sendto(fd, &input, sizeof(input), ..., ..., ...);
...
}
winsock2 uses a char* instead of a void* :(
void send(int fd, Inputs& input)
{
char buf[sizeof(input)];
memcpy(buf, &input, sizeof(input));
int error = sendto(fd, buf, sizeof(input), ..., ..., ...);
...
}
Did you try the simplest approach:
unsigned char *pBuff = (unsigned char*)&in;
for (unsigned int i = 0; i < sizeof(Inputs); i++) {
vecBuffer.push_back(*pBuff);
pBuff++;
}
This would work for both packed and non-packed structs, since you iterate up to sizeof(Inputs).
I would like to allocate some char buffers0, to be passed to an external non-C++ function, that have a specific alignment requirement.
The requirement is that the buffer be aligned to an N-byte1 boundary, but not to a 2N boundary. For example, if N is 64, then the pointer to this buffer p should satisfy ((uintptr_t)p) % 64 == 0 and ((uintptr_t)p) % 128 != 0 - at least on platforms where pointers have the usual interpretation as a plain address when cast to uintptr_t.
Is there a reasonable way to do this with the standard facilities of C++11?
If not, is there is a reasonable way to do this outside the standard facilities2 which works in practice for modern compilers and platforms?
The buffer will be passed to an outside routine (adhering to the C ABI but written in asm). The required alignment will usually be greater than 16, but less than 8192.
Over-allocation or any other minor wasted-resource issues are totally fine. I'm more interested in correctness and portability than wasting a few bytes or milliseconds.
Something that works on both the heap and stack is ideal, but anything that works on either is still pretty good (with a preference towards heap allocation).
0 This could be with operator new[] or malloc or perhaps some other method that is alignment-aware: whatever makes sense.
1 As usual, N is a power of two.
2 Yes, I understand an answer of this type causes language-lawyers to become apoplectic, so if that's you just ignore this part.
Logically, to satisfy "aligned to N, but not 2N", we align to 2N then add N to the pointer. Note that this will over-allocate N bytes.
So, assuming we want to allocate B bytes, if you just want stack space, alignas would work, perhaps.
alignas(N*2) char buffer[B+N];
char *p = buffer + N;
If you want heap space, std::aligned_storage might do:
typedef std::aligned_storage<B+N,N*2>::type ALIGNED_CHAR;
ALIGNED_CHAR buffer;
char *p = reinterpret_cast<char *>(&buffer) + N;
I've not tested either out, but the documentation suggests it should be OK.
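Either way, it is cheap to verify the requirement at run time (a small sketch using uintptr_t, as in the question; the helper name is made up):

#include <cassert>
#include <cstddef>
#include <cstdint>

// p should be aligned to N but not to 2N
void check_alignment(const void* p, std::size_t N)
{
    const auto addr = reinterpret_cast<std::uintptr_t>(p);
    assert(addr % N == 0);
    assert(addr % (2 * N) != 0);
}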
You can use _aligned_malloc(nbytes,alignment) (in MSVC) or _mm_malloc(nbytes,alignment) (on other compilers) to allocate (on the heap) nbytes of memory aligned to alignment bytes, which must be an integer power of two.
Then you can use the trick from Ken's answer to avoid alignment to 2N:
void* ptr_alloc = _mm_malloc(nbytes + N, 2 * N);
void* ptr = static_cast<void*>(static_cast<char*>(ptr_alloc) + N);
/* do your number crunching */
_mm_free(ptr_alloc);
We must ensure to keep the pointer returned by _mm_malloc() for later de-allocation, which must be done via _mm_free().
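If you would rather stay within standard C++11 facilities, the same over-allocate-and-round trick also works with plain operator new[] (a sketch; the helper name is made up, the caller must keep the raw pointer for delete[], and the uintptr_t round-trip carries the usual caveat from the question):

#include <cstddef>
#include <cstdint>

// returns storage aligned to N but not to 2N; 'raw' is what you later delete[]
char* alloc_odd_aligned(std::size_t bytes, std::size_t N, char*& raw)
{
    raw = new char[bytes + 3 * N];                        // room to reach a 2N boundary and then add N
    std::uintptr_t a = reinterpret_cast<std::uintptr_t>(raw);
    a = (a + 2 * N - 1) & ~(std::uintptr_t(2 * N) - 1);   // round up to a 2N boundary
    return reinterpret_cast<char*>(a) + N;                // now aligned to N, but not to 2N
}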
I'm porting an application from 32 bit to 64 bit.
It is C style coding (legacy product) although it is C++. I have an issue where a combination of union and struct are used to store values. Here a custom datatype called "Any" is used that should hold data of any basic datatype. The implementation of Any is as follows:
typedef struct typedvalue
{
long data; // to hold all other types of 4 bytes or less
short id; // this tells what type "data" is holding
short sign; // this differentiates the double value from the rest
}typedvalue;
typedef union Any
{
double any_any;
double any_double; // to hold double value
typedvalue any_typedvalue;
}Any;
The union is 8 bytes in size. They have used a union so that at any given time there is only one value, and the struct differentiates the type. You can store double, long, string, char, float, and int values this way. That's the idea.
If it's a double value, the value is stored in any_double. If it's any other type, it is stored in "data" and the type of the value is stored in "id". The "sign" tells whether "Any" is holding a double or another type.
any_any is used liberally in the code to copy the value in the address space irrespective of the type. (This is our biggest problem since we do not know at a given time what it will hold!)
If "Any" is supposed to hold a string or a pointer, it is stored in "data" (which is of type long). In 64-bit, this is where the problem lies. Pointers are 8 bytes, so we would need to change the "long" to an equivalent 8-byte type (long long). But that would increase the size of the union to 16 bytes, and the liberal usage of "any_any" would cause problems. There are too many usages of "any_any" and you are never sure what it holds.
I already tried these steps and it turned unsuccessful:
1. Changed the "long data" to "long long data" in the struct, this will make the size of the union to 16 bytes. - This will not allow the data to be passed as "any_any" (8 bytes).
2. Declared the struct as a pointer inside union. And changed the "long data" to "long long data" inside struct. - the issue encountered here was that, since its a pointer we need to allocate memory for the struct. The liberal use of "any_any" makes it difficult for us to allocate memory. Sometimes we might overwrite the memory and hence erase the value.
3. Create a separate collection that will hold the value for "data" (a key value pair). - This will not work because this implementation is at the core of application, the collection will run into millions of data.
Can anybody help me in this?
"Can anybody help me" this sounds like a cry of desperation, and I totally understand it.
Whoever wrote this code had absolutely no respect for future-proofing, or of portability, and now you're paying the price.
(Let this be a lesson to anyone who says "but our platform is 32bit! we will never use 64bit!")
I know you're going to say "but the codebase is too big", but you are better off rewriting the product. And do it properly this time!
Ignoring the fact that the original design is insane, you could use <stdint.h> (or, soon, <cstdint>) to get a little bit of predictability:
struct typedvalue
{
uint16_t id;
uint16_t sign;
uint32_t data;
};
union any
{
char any_raw[8];
double any_double;
typedvalue any_typedvalue;
};
You're still not guaranteed that typedvalue will be tightly packed, since there are no alignment guarantees for non-char members. You could make a struct Foo { char x[8]; }; and type-pun your way around an instance foo, like *(uint32_t*)(&foo.x[0]) and *(uint16_t*)(&foo.x[4]) if you must, but that too would be extremely ugly.
If you are in C++0x, I would definitely throw in a static assertion somewhere for sizeof(typedvalue) == sizeof(double).
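Something along these lines (a sketch; the messages are arbitrary):

static_assert(sizeof(typedvalue) == sizeof(double),
              "typedvalue must occupy exactly the 8-byte union slot");
static_assert(sizeof(any) == sizeof(double),
              "the union must stay 8 bytes");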
If you need to store both an 8 byte pointer and a "type" field then you have no choice but to use at least 9 bytes, and on a 64-bit system alignment will likely pad that out to 16 bytes.
Your data structure should look something like:
typedef struct {
union {
void *any_pointer;
double any_double;
long any_long;
int any_int;
} any;
char my_type;
} any;
If using C++0x consider using a strongly typed enumeration for the my_type field. In earlier versions the storage required for an enum is implementation dependent and likely to be more than one byte.
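For example (a sketch; the enumerator names are invented):

enum class AnyType : char
{
    Pointer,
    Double,
    Long,
    Int
};

The my_type field can then be declared as AnyType my_type; and still occupies a single byte.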
To save memory you could use (compiler specific) directives to request optimal packing of the data structure, but the resulting mis-aligned memory accesses may cause performance issues.
Okay, allow me to re-ask the question, as none of the answers got at what I was really interested in (apologies if wholesale editing of the question like this is a faux pas).
A few points:
This is offline analysis with a different compiler than the one I'm testing, so SIZEOF() or similar won't work for what I'm doing.
I know it's implementation-defined, but I happen to know the implementation that is of interest to me, which is below.
Let's make a function called pack, which takes as input an integer, called alignment, and a tuple of integers, called elements. It outputs another integer, called size.
The function works as follows:
int pack (int alignment, int[] elements)
{
total_size = 0;
foreach( element in elements )
{
while( total_size % min(alignment, element) != 0 ) { ++total_size; }
total_size += element;
}
while( total_size % alignment != 0 ) { ++total_size; }
return total_size;
}
I think what I want to ask is "what is the inverse of this function?", but I'm not sure whether inversion is the correct term--I don't remember ever dealing with inversions of functions with multiple inputs, so I could just be using a term that doesn't apply.
Something like what I want (sort of) exists; here I provide pseudo code for a function we'll call determine_align. The function is a little naive, though, as it just calls pack over and over again with different inputs until it gets an answer it expects (or fails).
int determine_align(int total_size, int[] elements)
{
for(packing = 1,2,4,...,64) // expected answers.
{
size_at_cur_packing = pack(packing, elements);
if(total_size == size_at_cur_packing)
{
return packing;
}
}
return unknown;
}
So the question is, is there a better implementation of determine_align?
Thanks,
Alignment of struct members in C/C++ is entirely implementation-defined. There are a few guarantees there, but I don't see how they would help you.
Thus, there's no generic way to do what you want. In the context of a particular implementation, you should refer to the documentation of that implementation that covers this (if it is covered).
When choosing how to pack members into a struct an implementation doesn't have to follow the sort of scheme that you describe in your algorithm although it is a common one. (i.e. minimum of sizeof type being aligned and preferred machine alignment size.)
You don't have to compare overall size of a struct to determine the padding that has been applied to individual struct members, though. The standard macro offsetof will give the byte offset from the start of the struct of any individual struct member.
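For example, offsetof lets you see the padding in front of each member directly (a sketch; the struct and its members are made up):

#include <stddef.h>
#include <stdio.h>

struct sample {
    char   c;
    int    i;
    short  s;
    double d;
};

int main(void)
{
    printf("offsetof(c) = %zu\n", offsetof(struct sample, c));
    printf("offsetof(i) = %zu\n", offsetof(struct sample, i));  /* gap after c shows up here */
    printf("offsetof(s) = %zu\n", offsetof(struct sample, s));
    printf("offsetof(d) = %zu\n", offsetof(struct sample, d));  /* gap after s shows up here */
    printf("sizeof      = %zu\n", sizeof(struct sample));       /* includes trailing padding */
    return 0;
}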
I let the compiler do the alignment for me.
In gcc,
typedef struct _foo
{
u8 v1 __attribute__((aligned(4)));
u16 v2 __attribute__((aligned(4)));
u32 v3 __attribute__((aligned(8)));
u8 v4 __attribute__((aligned(4)));
} foo;
Edit: Note that sizeof(foo) will return the correct value including any padding.
Edit2: And offsetof(foo, v2) also works. Given these two functions/macros, you can figure out everything you need to know about the layout of the struct in memory.
I'm honestly not sure what you're trying to do, and I'm probably completely misunderstanding what you're looking for, but if you want to simply determine what the alignment requirement of a struct is, the following macro might be helpful:
#define ALIGNMENT_OF( t ) offsetof( struct { char x; t test; }, test )
To determine the alignment of your foo structure, you can do:
ALIGNMENT_OF( foo);
If this isn't what you're ultimately trying to do, the macro might still help in whatever algorithm you do come up with.
You need to pad based on the alignment of the next field and then pad the last element based on the maximum alignment you've seen in the struct. Note that the actual alignment of a field is the minimum of its natural alignment and the packing for that struct. I.e., if you have a struct packed at 4 bytes, a double will be aligned to 4 bytes, even though its natural alignment is 8.
You can make the inner loop faster by replacing the increment loop with a single padding step: with align = min(alignment, element), add (align - total_size % align) % align to total_size. You can optimize it further when the alignment and element sizes are powers of two.
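When the alignment is a power of two, the padding step can be written branch-free (a sketch):

/* round total_size up to the next multiple of align; align must be a power of two */
total_size = (total_size + align - 1) & ~(align - 1);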
If the problem is just that you want to guarantee a particular alignment, that is easy. For a particular alignment=2^n:
void* p = malloc( sizeof( _foo ) + alignment - 1 );
p = (void*) ( ( (uintptr_t)p + alignment - 1 ) & ~(uintptr_t)( alignment - 1 ) );
I've neglected to save the original p returned from malloc. If you intend to free this memory, you need to save that pointer somewhere.
I'm not sure what you want to achieve here. As Pavel Minaev said, alignment is handled by a compiler which in turn is constrained by a platform's Application Binary Interface for data that is made accessible to code compiled by a different compiler. The following paper discusses the problem in the context of a compiler that needs to implement calling conventions:
Christian Lindig and Norman Ramsey. Declarative Composition of Stack Frames. In Evelyn Duesterwald, editor, Proc. of the 14th International Conference on Compiler Construction, Springer, LNCS 2985, 2004.