Using C++ (GCC specifically, should have mentioned this sooner), I'm storing raw texture data in an array of unsigned bytes, in RGBA format with 32 bits per pixel (8 bits per color value plus alpha, and so on). The thing is, I want to write a function that returns the raw data as an array of Colors, where a Color is a struct defined as follows:
struct Color
{
    uint8 r;
    uint8 g;
    uint8 b;
    uint8 a;
};
Plus functions and whatnot, but those are the only member variables in the struct. My thinking is that since each color is 4 bytes long, I can somehow cast the raw byte array to a Color array that is 1/4 of the original size (in array "length", not in absolute size). I think reinterpret_cast is what I am looking for, but I cannot find anything via a Google search that confirms 100% that you can convert to an array of structs instead of just one struct.
So I guess I am just asking someone to either confirm that this is indeed possible with reinterpret_cast, or point out a different cast or way to do this. Thanks.
EDIT: My wording is a little weird, so as an arbitrary example, I'd like to somehow cast an array of 16 unsigned bytes into an array of 4 Colors.
EDIT: Also, I know it's a little late, but I can't seem to find how to cast a small portion of the array, at a specific position, to a single struct using reinterpret_cast, if that is possible, without copying to a smaller array and casting that. Any help with this problem would also be greatly appreciated.
as an arbitrary example I'd like to somehow cast an array of 16 unsigned bytes into an array of 4 Colors.
Like this:
#pragma pack(push, 1)
struct Color
{
    uint8 r;
    uint8 g;
    uint8 b;
    uint8 a;
};
#pragma pack(pop)
uint8 bytearray[16];
...
Color *colorarray = reinterpret_cast<Color*>(bytearray);
Then you can do things like this:
for (int idx = 0; idx < 4; ++idx)
{
    Color &c = colorarray[idx];
    // use c.r, c.g, c.b, c.a as needed...
}
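To address the second edit: the same cast works on a portion of the array. Take the address of the byte where the struct should begin and reinterpret_cast that. A minimal sketch, assuming the offset is a multiple of 4 and within the bounds of the array:
int offset = 8; // hypothetical position within bytearray
Color *c = reinterpret_cast<Color*>(&bytearray[offset]);
// use c->r, c->g, c->b, c->a as needed...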
Related
I'm trying to pass a 2D float array to a constant buffer:
//In the shader:
cbuffer myBuffer
{
    // ... other buffer elements ...
    float myArray[16][16];
};
//In the CPU:
struct myBuffer_struct
{
    // ... other buffer elements ...
    float myArray[16][16];
};
But I'm having a lot of problems dealing with the padding. I tried using
float4[size/4][size]
in my cbuffer, and a lot of other type combinations, but I can't access my array by indexing in any way. What is the proper way to do this?
Thank you.
I've had this issue, and it comes down to the alignment of the buffer. Your HLSL cbuffer definition will most definitely be padded differently from what you have defined in your struct.
The alignment is probably along 16-byte (4-float) boundaries. In my code, I was writing 4 floats at a time into the buffer, like below, as the array alignment was different in the cbuffer.
for (int i = 0; i < 8; i++)
{
    stream.Write<float>(m_waveLengths[i]);
    stream.Write<float>(m_waveSpeeds[i]);
    stream.Write<float>(m_amplitudes[i]);
    stream.Write<float>(m_steepness[i]);
}
To read this, I used a float4 array definition.
// hlsl definition
float4 Wave[8];
I then referenced the relevant item as Wave[0].x, Wave[0].y, Wave[0].z, Wave[0].w
The memory alignment would make the buffer 4 times bigger if I didn't pack it like this. This is because in the HLSL code, the buffer definition seems to have aligned each element of the array along 16-byte boundaries (4 floats). So instead, I interleaved my 4 arrays into 1 array and used the properties of float4 to reference it.
This is because the alignment of float waveLengths[8] would have meant that I would have to write it into the buffer like this:
for (int i = 0; i < 8; i++)
{
    stream.Write<float>(m_waveLengths[i]);
    stream.Write<float>(0.0f);
    stream.Write<float>(0.0f);
    stream.Write<float>(0.0f);
}
For some reason (and I am probably not setting a certain HLSL compiler directive), using arrays in the cbuffer had some quirks where it would pad each element to a 16-byte boundary.
So, for your float myArray[16][16], I would suggest looking at the alignment; you may have to write the buffer out in a similar manner, padding 12 bytes after each element in the array. I'm sure someone will respond with the correct compiler directive to get rid of this quirk; I just solved this a while ago, and your problem looks similar to what I had.
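If the same scheme applies to your case, a sketch of the padded write might look like this (assuming the same hypothetical stream.Write API as above, and a CPU-side m_myArray[16][16]):
for (int row = 0; row < 16; row++)
{
    for (int col = 0; col < 16; col++)
    {
        stream.Write<float>(m_myArray[row][col]);
        stream.Write<float>(0.0f); // pad each element out to a 16-byte boundary
        stream.Write<float>(0.0f);
        stream.Write<float>(0.0f);
    }
}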
I've got an array of bytes, declared like so:
typedef unsigned char byte;
vector<byte> myBytes = {255, 0, 76, ...}; // individual bytes, each no larger in value than 255
The problem I have is that I need to access the raw data of the vector (without any copying, of course), but I need to assign an arbitrary number of bits to any given pointer to an element.
In other words, I need to assign, say an unsigned int to a certain position in the vector.
So given the example above, I am looking to do something like below:
myBytes[0] = static_cast<unsigned int>(76535); //assign n-bit (here 32-bit) value to any index in the vector
So that the vector data would now look like:
{247, 42, 1, 0} //raw little-endian representation of a 32-bit int (76535)
Is this possible? I kind of need to use a vector, and I'm just wondering whether the raw data can be accessed in this way, or does the way a vector stores its raw data make this impossible, or worse, unsafe?
Thanks in advance!
EDIT
I didn't want to add complication, but I'm constructing variously sized integer as follows:
//**N_TYPES
u16& VMTypes::u8sto16(u8& first, u8& last) {
    // note: allocates a new u16 and returns a reference to it (leaks unless deleted)
    return *new u16(((first << 8) | last) & 0xffff);
}
u8* VMTypes::u16to8s(u16& orig) {
    // note: 'first' here is the LOW byte, while u8sto16 treats 'first' as the
    // HIGH byte, so the two functions are not symmetric
    u8 first = (u8)orig;
    u8 last = (u8)(orig >> 8);
    return new u8[2]{ first, last };
}
What's terrible about this is that I'm not sure of the endianness of the numbers generated. But I know that I am constructing and deconstructing them the same way everywhere (I'm writing a stack machine), so if I'm not mistaken, endianness is not affected by what I'm trying to do.
EDIT 2
I am constructing ints in the following horrible way:
u32 a = 76535;
u16* b = VMTypes::u32to16s(a);
u8 aa[4] = { VMTypes::u16to8s(b[0])[0], VMTypes::u16to8s(b[0])[1], VMTypes::u16to8s(b[1])[0], VMTypes::u16to8s(b[1])[1] };
Could this then work?:
memcpy(&_stack[0], aa, sizeof(u32));
Yes, it is possible. Take the starting address with &myVector[n] and memcpy your int to that location. Make sure that you stay within the bounds of your vector.
The other way around works too: take the location and memcpy out of it into your int.
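For example, a minimal sketch of both directions (assuming the vector holds at least sizeof(unsigned int) bytes from the chosen position):
#include <cstring>
#include <vector>

typedef unsigned char byte;

std::vector<byte> myBytes(16);
unsigned int value = 76535;

// write the int's byte representation into the vector at index 0
std::memcpy(&myBytes[0], &value, sizeof(value));

// read it back out
unsigned int readBack = 0;
std::memcpy(&readBack, &myBytes[0], sizeof(readBack));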
As suggested: by using memcpy you will copy the byte representation of your integer into the vector. That byte representation, or byte order, may be different from what you expect. The keywords here are big and little endian.
As knivil says, memcpy will work if you know the endianness of your system. However, if you want to be safe, you can do this with bitwise arithmetic:
unsigned int myInt = 76535;
const int ratio = sizeof(int) / sizeof(byte);
for (int b = 0; b < ratio; b++)
{
    // store the most significant byte first, regardless of platform endianness
    myBytes[b] = byte(myInt >> (8 * sizeof(byte) * (ratio - b - 1)));
}
The int can be read out of the vector using a similar pattern; if you want me to show you how, let me know.
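For completeness, a sketch of that reverse pattern (assuming the same most-significant-byte-first order as the loop above):
unsigned int result = 0;
for (int b = 0; b < ratio; b++)
{
    result = (result << 8) | myBytes[b];
}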
Title itself explains the question.
The last parameter of glTexImage2D is the array of bytes (unsigned or signed, depending on the type).
Should the RGB array contain padding bytes or not?
Should RGB array contain padding bytes or not?
That depends entirely on your needs. You can configure OpenGL to accept various data layouts. See the reference documentation of glPixelStore; the unpack parameters are what you should look at.
Padding bytes are normally found between rows, to fill up to a certain alignment. The unpack alignment specifies the byte alignment of each row.
If your pixels are 8 bits per component, but packed into 4 bytes each with a padding byte, you can specify that by declaring the data type to be GL_UNSIGNED_INT_8_8_8_8; if you use a format/internal format with fewer than 4 components, the excess bytes are ignored.
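For the common case of a tightly packed RGB array with no padding at all, a minimal sketch (assuming width, height, and pixels are already defined):
// tell OpenGL the rows are tightly packed (no row padding)
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);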
The short answer is no. I typically create an array of structures.
The structure would look like:
typedef struct
{
    unsigned char red;
    unsigned char green;
    unsigned char blue;
    unsigned char alpha;
} AlphaPixelBytes;
Then the array I create would look like:
AlphaPixelBytes bitmapData[NUMTEXTUREPOINTS];
You can then use bitmapData as the last argument to glTexImage2D when you create the texture, or you can pass NULL as the last argument to create an empty texture that you'll later populate with glTexSubImage2D.
This sort of array of structures is also useful as a data source for an NSBitmapImageRep for use (for example) in exporting a PNG file of your texture.
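A sketch of that empty-texture approach (assuming width and height are defined, and RGBA data as above):
// allocate an empty RGBA texture, then upload the pixel data later
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// ... later ...
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, bitmapData);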
EDIT:
Sorry, I didn't notice you're dealing with RGB not RGBA data. The structure for RGB would look like:
typedef struct
{
    unsigned char red;
    unsigned char green;
    unsigned char blue;
} PixelBytes;
And the array:
PixelBytes bitmapData[NUMTEXTUREPOINTS];
I was just playing around with bit fields and came across something that I can't quite figure out how to get around.
(Note about the platform: size of an int = 2 bytes, long = 4 bytes, long long = 8 bytes; thought it worth mentioning as I know it can vary. Also, the 'byte' type is defined as an 'unsigned char'.)
I would like to be able to make an array of two 36 bit variables and put them into a union with an array of 9 bytes. This is what I came up with:
typedef union {
    byte bytes[9];
    struct {
        unsigned long long data:36;
    } integers[2];
} Colour;
I was working on the theory that the compiler would realise there were supposed to be two bitfields as part of the anonymous struct and pack them together into the space of 9 bytes. However, it turns out that they get aligned on a byte boundary, so the union occupies 10 bytes, not 9, which makes perfect sense.
The question is then, is there a way to create an array of two bit fields like this? I considered the 'packed' attribute, but the compiler just ignores it.
While this works as expected (sizeof() returns 9):
typedef union {
    byte bytes[9];
    struct {
        unsigned long long data0:36;
        unsigned long long data1:36;
    } integers;
} Colour;
It would be preferable to have it accessible as an array.
Edit:
Thanks to cdhowie for his explanation of why this won't work.
Fortunately I thought of a way to achieve what I want:
typedef union {
    byte bytes[9];
    struct {
        unsigned long long data0:36;
        unsigned long long data1:36;
        unsigned long long data(byte which){
            return (which ? data1 : data0);
        }
        void data(byte which, unsigned long long _data){
            if(which){
                data1 = _data;
            } else {
                data0 = _data;
            }
        }
    } integers;
} Colour;
You can't directly do this using arrays, if you want each bitfield to be exactly 36 bits wide.
Pointers must be aligned to byte boundaries; that's just the way pointers are. Since arrays function like pointers in most cases (with exceptions), this is just not possible with bitfields that contain a number of bits not evenly divisible by 8. (What would you expect &(((Colour *) 0)->integers[1]) to return if the bitfields were packed? What value would make sense?)
In your second example, the bitfields can be tightly-packed because there is no pointer math going on under the hood. For things to be addressable by pointer, they must fall on a byte boundary, since bytes are the units used to "measure" pointers.
You will note that if you try to take the address of (((Colour *) 0)->integers.data0) or data1 in the second example, the compiler will issue an error, for exactly this reason.
I'm approaching C++ with some basic computer graphics.
Pixel data is usually represented as:
unsigned char *pixels
An unsigned char is good because it holds a value between 0 and 255 (256 = 2^8, because a char is 1 byte and 1 byte is 8 bits). And this is good because in RGB, colors are represented with a number between 0 and 255.
But I understand this as a monochromatic image. In a normal image I have RGB, so I would have 3 arrays of unsigned char: one for red, one for green, one for blue. Something like:
unsigned char *pixels[3]
But I have never found anything similar for RGB pixel data.
RGB images are usually stored in interleaved order (R1, G1, B1, R2, G2, B2, ...), so one pointer (to R1) is enough.
This makes it a bit harder to address individual pixels: pixel with index N is stored at pixels[3*N+0], pixels[3*N+1] and pixels[3*N+2] instead of just red[N], green[N], blue[N].
However, this has the advantage of allowing faster access: fewer pointers lead to simpler programs, improving their speed; interleaved order also makes memory caching more effective.
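For example (assuming pixels points to such an interleaved RGB buffer and N is a valid pixel index):
// read the three components of pixel N from the interleaved buffer
unsigned char r = pixels[3*N + 0];
unsigned char g = pixels[3*N + 1];
unsigned char b = pixels[3*N + 2];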
unsigned char *pixels[3];
declares an array of three pointers to unsigned char. I'm not sure if that's what you wanted.
There are several different ways to represent pixels. The simplest is probably something like:
struct Pixel
{
    unsigned char red;
    unsigned char green;
    unsigned char blue;
};
But you may have to (or want to) conform to some external format. Another frequent possibility is to put all three colors in a uint32_t. Also, in some graphic systems, there may be a fourth element, an alpha, representing transparency.
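A sketch of the uint32_t approach (this particular 0xAARRGGBB layout is just an assumption; actual layouts vary between APIs):
#include <cstdint>

uint32_t packPixel(unsigned char r, unsigned char g,
                   unsigned char b, unsigned char a)
{
    // alpha in the top byte, then red, green, blue
    return (uint32_t(a) << 24) | (uint32_t(r) << 16)
         | (uint32_t(g) << 8)  |  uint32_t(b);
}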
Really, whenever you refer to a block of bytes, it's going to be of type unsigned char*, because the C specification guarantees that unsigned char has no padding in the type itself (i.e., every bit is used for a value in the byte, and there are no padding bits that go unused). Pixel data is going to be some block of X bytes with no padding (at least no internal padding; there may be padding at the end of the buffer for alignment purposes), and it will most likely be allocated on the heap somewhere. So no matter whether it holds monochrome data, color data, etc., you will often find that a pixel buffer is pointed to via an unsigned char pointer. You may then cast it to some struct like James mentioned in order to easily access the pixel information, or you may have to index into the buffer like anatolyg mentions. But in the end, a buffer of pixels is just a buffer of data, and a general buffer of data bytes is best accessed in C/C++ using type unsigned char*.
With *pixels[3] you've got separate arrays for the three colour components, whereas in files the three colour components for a single pixel are stored together. Interleaved order also means you can use a single fread()/fwrite() for the whole block of image data.