I have a PixelBuffer object of size (200 * 200 * 3), where each pixel occupies three consecutive slots for its RGB components. How can I index the pixels when implementing the DDA line drawing algorithm? Most examples I have seen on the web use PutPixel(x,y), but I'm not sure how to access the pixels inside such a method.
The pixels are arranged row by row, with each pixel using 3 bytes. To address a point (x, y), multiply the y value by the size of a row (the width multiplied by 3), multiply the x value by the size of a pixel (3), and add the two to get the byte offset of the pixel.
With a few constants for readability, the code for the function could look like this:
const int IMG_WIDTH = 200;
const int IMG_HEIGHT = 200;
const int BYTES_PER_PIXEL = 3;
const int BYTES_PER_ROW = IMG_WIDTH * BYTES_PER_PIXEL;
void PutPixel(uint8_t* pImgData, int x, int y, const uint8_t color[3])
{
    uint8_t* pPixel = pImgData + y * BYTES_PER_ROW + x * BYTES_PER_PIXEL;
    for (int iByte = 0; iByte < BYTES_PER_PIXEL; ++iByte)
    {
        pPixel[iByte] = color[iByte];
    }
}
An example of how this function could be used:
// Allocate image data.
uint8_t* pImgData = new uint8_t[IMG_HEIGHT * BYTES_PER_ROW];
// Initialize image data, unless you are planning to set all pixels.
...
// Set pixel (50, 30) to yellow.
uint8_t yellow[3] = {255, 255, 0};
PutPixel(pImgData, 50, 30, yellow);
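Since the question mentions the DDA line drawing algorithm, here is a minimal sketch of how a DDA routine could be built on top of the PutPixel above. The name DrawLineDDA is just illustrative, and the sketch assumes both endpoints lie inside the image:
// Minimal DDA sketch: step along the longer axis and accumulate the
// other axis fractionally, rounding to the nearest pixel each step.
void DrawLineDDA(uint8_t* pImgData, int x0, int y0, int x1, int y1,
                 const uint8_t color[3])
{
    int dx = x1 - x0;
    int dy = y1 - y0;
    int adx = dx < 0 ? -dx : dx;
    int ady = dy < 0 ? -dy : dy;
    int steps = adx > ady ? adx : ady;
    if (steps == 0)
    {
        PutPixel(pImgData, x0, y0, color);
        return;
    }
    float xInc = (float)dx / steps;
    float yInc = (float)dy / steps;
    float x = (float)x0;
    float y = (float)y0;
    for (int i = 0; i <= steps; ++i)
    {
        // (x + 0.5f) rounds to the nearest pixel for non-negative coordinates.
        PutPixel(pImgData, (int)(x + 0.5f), (int)(y + 0.5f), color);
        x += xInc;
        y += yInc;
    }
}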
Once you have your image built in memory, you can store the content in a pixel buffer object using glBufferData():
GLuint bufId = 0;
glGenBuffers(1, &bufId);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, bufId);
glBufferData(GL_PIXEL_UNPACK_BUFFER, IMG_HEIGHT * BYTES_PER_ROW,
pImgData, GL_STATIC_DRAW);
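If the goal is to get the image onto a texture, note that while a buffer is bound to GL_PIXEL_UNPACK_BUFFER, texture upload calls read their pixel data from that buffer and interpret the data pointer as a byte offset. A minimal sketch, assuming a texture object has already been created, bound, and given filtering parameters:
// With the PBO bound, the last argument is a byte offset into the
// buffer (0 = start of the buffer), not a client memory pointer.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, IMG_WIDTH, IMG_HEIGHT, 0,
             GL_RGB, GL_UNSIGNED_BYTE, (const GLvoid*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); // unbind when done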
I need to create a bitmap from an array of pixels for a raycaster I'm working on in Direct2D. However, I'm having trouble understanding how to use the CreateBitmap function. Specifically, I'm not sure what the srcData parameter is supposed to be. I'm pretty sure/hoping it's a pointer to an array of pixels, but I'm not sure how to set up that array. What kind of array is it supposed to be? What data type? Etc.
Here's what I've tried:
int width = 400, height = 400;
D2D1::ColorF * arr = (D2D1::ColorF*)calloc(width * height * 4, sizeof(D2D1::ColorF));
for (int i = 0; i < width * height * 4; i++) { arr[i] = D2D1::ColorF(0.0f, 1.0f, 0.0f); }
// Create the bitmap and draw it on the screen
ID2D1Bitmap * bmp;
HRESULT hr;
hr = renderTarget->CreateBitmap(
D2D1::SizeU(width, height),
arr,
width * sizeof(int) * 4,
D2D1::BitmapProperties(),
&bmp);
if (hr != S_OK) { return; } // I've tested and found that hr does not equal S_OK
// Draw the bitmap...
What should the second and third lines look like? Is there anything else I'm doing incorrectly?
Syntax:
HRESULT CreateBitmap(
D2D1_SIZE_U size,
const void *srcData,
UINT32 pitch,
const D2D1_BITMAP_PROPERTIES & bitmapProperties,
ID2D1Bitmap **bitmap
);
Your code:
hr = renderTarget->CreateBitmap(
D2D1::SizeU(width, height),
arr, // <<--- Wrong, see (a) below
width * sizeof(int) * 4, // <<--- Close but wrong, see (b) below
D2D1::BitmapProperties(), // <<--- Wrong, see (c) below
&bmp);
(a) - you are supposed to provide an array of pixel data here, where the format depends on the format of the bitmap. Note that this is optional and you can create a bitmap without initialization. The pixels are not exactly D2D1::ColorF. They could be 4-byte RGBA data if you request the respective bitmap format, see (c) below.
(b) - this is the distance between rows in bytes; if your pixels are supposed to be 32-bit values, you would normally want Width * 4 here
(c) - this requests DXGI_FORMAT_UNKNOWN D2D1_ALPHA_MODE_UNKNOWN and results in a bitmap creation error. You need a real format here, such as DXGI_FORMAT_B8G8R8A8_UNORM (see Pixel Formats and also Supported Pixel Formats and Alpha Modes)
The first link above shows how exactly bytes in memory map to pixel colors, and you are supposed to prepare your data respectively.
Update:
With DXGI_FORMAT_B8G8R8A8_UNORM, your initialization code would look like this:
UINT8* Data = (UINT8*)malloc(Height * Width * 4);
for(UINT Y = 0; Y < Height; Y++)
    for(UINT X = 0; X < Width; X++)
    {
        UINT8* PixelData = Data + ((Y * Width) + X) * 4;
        PixelData[0] = unsigned integer blue in range 0..255;
        PixelData[1] = unsigned integer green in range 0..255;
        PixelData[2] = unsigned integer red in range 0..255;
        PixelData[3] = 255; // alpha
    }
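Putting the pieces together, a corrected call might look roughly like this (a sketch only, using the Data/Width/Height variables from above; error checking and cleanup omitted, and it assumes the render target supports this format):
D2D1_BITMAP_PROPERTIES props = D2D1::BitmapProperties(
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE));

ID2D1Bitmap* bmp = nullptr;
HRESULT hr = renderTarget->CreateBitmap(
    D2D1::SizeU(Width, Height),
    Data,        // the BGRA byte array filled above
    Width * 4,   // pitch: bytes per row
    props,
    &bmp);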
I'm trying to write a function that unpacks an image into separate quads. But for some reason the results are distorted (kinda stretched 45 degrees), so I must be reading the pixel array incorrectly, though I can't see the problem with my function...
The function takes two unsigned char arrays, "source" and "target", and two unsigned int values, the "width" and "height" of the source image. Width is divisible by 4 and height is divisible by 3 (both divisions give the same value, 150, because the texture is 600 * 450), so each face is 150*150 px. So the w/h values are correct. It also takes in two ints, "xIt" and "yIt", which determine the offset - which 150*150 block should be read.
Here's the function:
const unsigned int trgImgWidth = width / 4;
const unsigned int trgImgHeight = height / 3;
unsigned int trgBufferOffset = 0;
// Compute pixel offset to start reading from
unsigned int Yoffset = yIt * trgImgHeight * width * 3;
unsigned int Xoffset = xIt * trgImgWidth * 3;
for (unsigned int y = 0; y < trgImgHeight; y++)
{
unsigned int o = Yoffset + Xoffset; // Offset of current line of pixels
for (unsigned int x = 0; x < trgImgWidth * 3; x++) // for each pixel component (rgb) in the line
{
target[trgBufferOffset] = source[o + x];
trgBufferOffset++;
}
Yoffset += width * 3;
}
Anyone see where I might be going wrong here?
I am writing code that reads a PPM file and stores the width, the height and the pixels of the file in an image object. In my Image class I have a pointer that holds the image data. I also have an error in a method that sets the RGB values for an (x, y) pixel.
typedef load compontent_t
class Image
{
public:
enum channel_t { RED = 0, GREEN, BLUE };
protected:
component_t * buffer; //! Holds the image data.
unsigned int width, //! The width of the image (in pixels)
height; //! The height of the image (in pixels)
// data mutators
/*! Sets the RGB values for an (x,y) pixel.
*
* The method should perform any necessary bounds checking.
*
* \param x is the (zero-based) horizontal index of the pixel to set.
* \param y is the (zero-based) vertical index of the pixel to set.
* \param value is the new color for the (x,y) pixel.
*/
void setPixel(unsigned int x, unsigned int y, Color & value) {
if (x > 0 && x < width && y > 0 && y < height) {
size_t locpixel = y*width + x;
size_t componentofpixel = locpixel * 3;
*buffer + componentofpixel = value.r;
*buffer + componentofpixel + 1 = value.g;
*buffer + componentofpixel + 2 = value.b;
}
else {
cout << "Pixel out of bounds" << endl;
}
}
Image(unsigned int width, unsigned int height, component_t * data_ptr): width(width), height(height),buffer(data_ptr) {}
So in the setPixel method, when I try to find the correct spot in the buffer to set the RGB value, it shows me the error: "expression must be a modifiable lvalue".
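For what it's worth, the error comes from operator precedence: *buffer + componentofpixel parses as (*buffer) + componentofpixel, which is a temporary value rather than a memory location you can assign to. A minimal sketch of how the body could be written instead (same indexing, just proper dereferencing):
// Sketch of the pixel write with proper indexing. Note also that with
// unsigned x/y the lower-bound checks are unnecessary, and "x > 0"
// would wrongly reject column 0.
if (x < width && y < height) {
    size_t locpixel = y * width + x;
    size_t componentofpixel = locpixel * 3;
    buffer[componentofpixel]     = value.r;  // *(buffer + componentofpixel) = value.r; also works
    buffer[componentofpixel + 1] = value.g;
    buffer[componentofpixel + 2] = value.b;
}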
I am trying to make a bitmap from scratch. I have a BYTE array (with known size) of RGB values and I would like to generate an HBITMAP.
For further clarification, the array of bytes I am working with is purely RGB values.
I have made sure that all variables are set and proper, and I believe that the issue has to do with lpvBits. I have been doing as much research as I can over the past few days, but I have been unable to find anything that makes sense to me.
For testing purposes the width = 6 and height = 1
Code:
HBITMAP RayTracing::getBitmap(void){
BYTE * bytes = getPixels();
void * lpvBits = (void *)bytes;
HBITMAP hBMP = CreateBitmap(width, height, 1, 24, lpvBits);
return hBMP;
}
BYTE * RayTracing::getPixels(void){
Vec3 * vecs = display.getPixels();
BYTE * bytes;
bytes = new BYTE[(3 * width * height)];
for (unsigned int i = 0; i < (width * height); i++){
*bytes = static_cast<BYTE>(vecs->x);
bytes++;
*bytes = static_cast<BYTE>(vecs->y);
bytes++;
*bytes = static_cast<BYTE>(vecs->z);
bytes++;
vecs++;
}
return bytes;
}
You need to properly dword-align your array so that each line is a multiple of 4 bytes, and then skip those padding bytes when filling the array:
HBITMAP RayTracing::getBitmap(void)
{
    BYTE * bytes = getPixels();
    HBITMAP hBMP = CreateBitmap(width, height, 1, 24, bytes);
    delete[] bytes;
    return hBMP;
}

BYTE * RayTracing::getPixels(void)
{
    Vec3 * vecs = display.getPixels(); // <-- don't forget to free if needed
    int linesize = ((3 * width) + 3) & ~3; // <- 24bit pixels, width number of pixels, rounded to nearest dword boundary
    BYTE * bytes = new BYTE[linesize * height];
    for (unsigned int y = 0; y < height; y++)
    {
        BYTE *line = &bytes[linesize*y];
        Vec3 *vec = &vecs[width*y];
        for (unsigned int x = 0; x < width; x++)
        {
            *line++ = static_cast<BYTE>(vec->x);
            *line++ = static_cast<BYTE>(vec->y);
            *line++ = static_cast<BYTE>(vec->z);
            ++vec;
        }
    }
    return bytes;
}
The third parameter of CreateBitmap should be 3, not 1. There are three color planes: Red, Green, and Blue.
Also, if you set the height to anything greater than one, you'll need to pad each row of pixels with zeroes to make the width a multiple of 4. So for a 6x2 image, after saving the 6*3 bytes for the first row, you'd need to save two zero bytes to make the row 20 bytes long.
I'm generating a terrain from a .bmp file, as a very early precursor for a strategy game. In my code I load the BMP file as an OpenGL texture, then use a double loop to generate coordinates (x, y, redChannel). Then I create indices by again double looping and generating the triangles for a square between (x, y) and (x+1, y+1). However, when I run the code, I end up with an extra triangle going from the end of one line to the beginning of the next line, which I cannot seem to solve. This only happens when I use varied heights and a sufficiently large map, or at least it is not visible otherwise.
This is the code:
void Map::setupVertices(GLsizei* &sizeP, GLint * &vertexArray, GLubyte* &colorArray){
//textureNum is the identifier generated by glGenTextures
GLuint textureNum = loadMap("heightmap.bmp");
//Bind the texture again, and extract the needed data
glBindTexture(GL_TEXTURE_2D, textureNum);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
GLint i = height*width;
GLubyte * imageData = new GLubyte[i+1];
glGetTexImage(GL_TEXTURE_2D,0,GL_RED, GL_UNSIGNED_BYTE, &imageData[0]);
//Setup varibles: counter (used for counting vertices)
//VertexArray: pointer to address for storing the vertices. Size: 3 ints per point, width*height points total
//ColorArray: pointer to address for storing the color data. 3 bytes per point.
int counter = 0;
vertexArray = new GLint[height*width*3];
colorArray = new GLubyte[height*width*3];
srand(time(NULL));
//Loop through rows
for (int y = 0; y < height; y++){
//Loop along the line
for (int x=0; x < width; x++){
//Add vertices: x, y, redChannel
//Add colordata: the common-color.
colorArray[counter] = imageData[x+y*width];
vertexArray[counter++] = x;
colorArray[counter] = imageData[x+y*width];
vertexArray[counter++] = y;
colorArray[counter] = imageData[x+y*width];//(float) (rand() % 255);
vertexArray[counter++] = (float)imageData[x+y*width] /255 * maxHeight;
}
}
//"Return" total vertice amount
sizeP = new GLsizei(counter);
}
void Map::setupIndices(GLsizei* &sizeP, GLuint* &indexArray){
//Pointer to location for storing indices. Size: 2 triangles per square, 3 points per triangle, width*height triangles
indexArray = new GLuint[width*height*2*3];
int counter = 0;
//Loop through rows, don't go to top row (because those triangles are to the row below)
for (int y = 0; y < height-1; y++){
//Loop along the line, don't go to last point (those are connected to second last point)
for (int x=0; x < width-1; x++){
//
// TL___TR
// | / |
// LL___LR
int lowerLeft = x + width*y;
int lowerRight = lowerLeft+1;
int topLeft = lowerLeft + width+1;
int topRight = topLeft + 1;
indexArray[counter++] = lowerLeft;
indexArray[counter++] = lowerRight;
indexArray[counter++] = topLeft;
indexArray[counter++] = topLeft;
indexArray[counter++] = lowerRight;
indexArray[counter++] = topRight;
}
}
//"Return" the amount of indices
sizeP = new GLsizei(counter);
}
I eventually draw this with this code:
void drawGL(){
glPushMatrix();
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3,GL_INT,0,mapHeight);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3,GL_UNSIGNED_BYTE,0,mapcolor);
if (totalIndices != 0x00000000){
glDrawElements(GL_TRIANGLES, *totalIndices, GL_UNSIGNED_INT, indices);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glPopMatrix();
}
Here's a picture of the result:
http://s22.postimg.org/k2qoru3kx/open_GLtriangles.gif
And with only blue lines and black background.
http://s21.postimg.org/5yw8sz5mv/triangle_Error_Blue_Line.gif
There also appears to be one of these going in the other direction as well, at the very edge right, but I'm supposing for now that it may be related to the same issue.
I'd simplify this part:
int lowerLeft = x + width * y;
int lowerRight = (x + 1) + width * y;
int topLeft = x + width * (y + 1);
int topRight = (x + 1) + width * (y + 1);
The problem looks like topLeft has an extra + 1 when it should only have the + width.
This causes the "top" vertices to both be shifted along by one column. You might not notice the offsets within the grid and, as you pointed out, they're not visible until the height changes.
Also, returning new GLsizei(counter) seems a bit roundabout. Why not just pass in a GLsizei& counter?
These might be worth a look too. You can save a fair bit of data using strip primitives for many procedural objects:
Generate a plane with triangle strips
triangle-strip-for-grids-a-construction
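As a rough illustration of the strip idea (a sketch only, not drop-in code for the classes above; buildStripIndices is just an illustrative helper): each row of quads becomes one triangle strip, drawn separately or stitched together with degenerate triangles / primitive restart.
#include <vector>

// One triangle strip per row of quads; vertex (x, y) has index x + width * y.
// Each row contributes 2 * width indices instead of 6 per quad.
std::vector<GLuint> buildStripIndices(int width, int height)
{
    std::vector<GLuint> indices;
    for (int y = 0; y < height - 1; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            indices.push_back(x + width * y);       // vertex on row y
            indices.push_back(x + width * (y + 1)); // vertex on row y + 1
        }
        // Each row's 2 * width indices form one GL_TRIANGLE_STRIP.
    }
    return indices;
}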