C++ fwrite access violation when writing image file

I need to append an RGB frame to a file on each call.
Here is what I do:
size_t length = _viewWidth * _viewHeight * 3;
BYTE *bytes = (BYTE*)malloc(length);
/////////////// read pixels from OpenGL tex /////////////////////
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, bytes);
glBindTexture(GL_TEXTURE_2D, 0);
/////////////// write it to file ////////////////////////////////
hOutFile = fopen(outFileName.c_str(), cfg.appendMode ? "ab" : "wb");
assert(hOutFile != 0);
fwrite(bytes, 1, w * h, hOutFile); // Write
fclose(hOutFile);
Somehow I am getting an access violation when fwrite gets called. Probably I misunderstood how to use it.

How do you determine _viewWidth and _viewHeight? When reading back a texture you should retrieve them with glGetTexLevelParameteriv, querying the GL_TEXTURE_WIDTH and GL_TEXTURE_HEIGHT parameters from the GL_TEXTURE_2D target.
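For example, a minimal sketch (tex comes from the question; texWidth/texHeight are illustrative names):

GLint texWidth = 0, texHeight = 0;
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &texWidth);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &texHeight);
// size the buffer from what GL reports, not from cached view dimensions
size_t length = (size_t)texWidth * texHeight * 3;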
Also the line
fwrite(bytes, 1 ,w * h, hOutFile);
is wrong. What are w and h? They never get initialized in the code and are not connected to the allocation above. Also, if those are the width and height of the image, the size still lacks the number of elements per pixel, most likely 3.
It would make more sense to have something like
int elements = ...; // probably 3
int w = ...;
int h = ...;
size_t bytes_length = (size_t)w * elements * h;
BYTE *bytes = (BYTE*)malloc(bytes_length);
...
fwrite(bytes, w * elements, h, hOutFile);

Is it caused by bytes?
Maybe w * h is not what you think it is.

Is the width ever an odd number or not evenly divisible by 4?
By default OpenGL assumes that a row of pixel data is aligned to a four byte boundary. With RGB/BGR this isn't always the case, and if so you'll be writing beyond the malloc'ed block and clobbering something. Try putting
glPixelStorei(GL_PACK_ALIGNMENT, 1);
before reading the pixels and see if the problem goes away.
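Alternatively, keep the default 4-byte alignment and size the buffer with padded rows. A sketch, reusing _viewWidth/_viewHeight from the question:

// each BGR row rounded up to a multiple of 4 bytes (GL_PACK_ALIGNMENT == 4)
size_t rowBytes = ((size_t)_viewWidth * 3 + 3) & ~(size_t)3;
size_t length = rowBytes * _viewHeight;
BYTE *bytes = (BYTE*)malloc(length);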

Related

Efficient Way to draw many individual pixels to a screen in SDL2

I'm currently working on something in C++ using SDL2 that requires drawing a lot of individual pixels with specific color values to the screen every update. I'm using SDL_RenderDrawPoint just to make sure my program works, but I'm sure its performance is terrible. From a cursory search it seems fastest to use a texture the size of my window and update it via SDL_UpdateTexture with a vector of my desired pixel values, defaulting to an RGBA value of {0,0,0,0} for any pixel not changed.
However, every attempt I've made at writing this fails, and I'm not sure where my misunderstanding lies. This is my current code that attempts to draw a specific RGBA color value at a specific x,y coordinate in my texture. I assume the part of the buffer I'm accessing with my x,y values is incorrect, but I'm unsure how to correct it.
Any help is appreciated, including suggestions on how to do this efficiently without a texture if there's a better way.
SDL_Texture* windowTexture = SDL_CreateTexture(render, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING, screenWidth, screenHeight);
unsigned int* lockedPixels = nullptr;
std::vector<int> pixels(screenHeight*screenWidth*4, 0);
int pitch = 0;
int start = (y * screenWidth + x) * 4;
pixels[start + 0] = B;
pixels[start + 1] = G;
pixels[start + 2] = R;
pixels[start + 3] = A;
SDL_UpdateTexture(windowTexture, nullptr, pixels.data(), screenWidth * 4);
The pixel format RGBA8888 means that each pixel is a 32 bit element with each channel (i.e. red, green, blue or alpha) taking up 8 bits, in that order.
You may want to declare pixels as containing the type "32 bit unsigned integer". An unsigned int is typically 32 bits, but it may also be larger.
std::vector<Uint32> pixels(screenHeight*screenWidth, 0); // Note: no *4
The individual R, G, B, A values (which should each be 8 bit unsigned integers) can be combined into one pixel by using shifts and bit-wise ORs:
int start = y * screenWidth + x; // Note: no *4
pixels[start] = (Uint32(R) << 24) | (Uint32(G) << 16) | (Uint32(B) << 8) | Uint32(A); // cast before shifting so the promoted int can't overflow
Lastly, you may want to not hardcode the last parameter of SDL_UpdateTexture (i.e. pitch). Instead, use screenWidth * sizeof(Uint32).
The implementation above is basically a direct implementation of "RGBA8888" and allows you to access individual pixels.
Alternatively, you could also declare an array of four times the size containing 8 bit unsigned integers. Then, the first four indices would correspond to the R, G, B, A values of the first pixel, the next four indices would correspond to the R, G, B, A values of the second pixel, etc.
Which one is faster would depend on the exact system and use-case (whether the most common operations are on pixels or individual channels).
PS. Instead of Uint32 you could also use C++'s own std::uint32_t from the <cstdint> header.
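Putting it together, a minimal sketch (windowTexture, render, screenWidth/screenHeight and x, y, R, G, B, A are the names from the question):

std::vector<Uint32> pixels(screenWidth * screenHeight, 0);
// write one pixel at (x, y) in RGBA8888 order
pixels[y * screenWidth + x] = (Uint32(R) << 24) | (Uint32(G) << 16) | (Uint32(B) << 8) | Uint32(A);
SDL_UpdateTexture(windowTexture, nullptr, pixels.data(), screenWidth * sizeof(Uint32));
SDL_RenderCopy(render, windowTexture, nullptr, nullptr);
SDL_RenderPresent(render);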

Read SDL2 texture without duplication

I tried to create a heightmap from a png or jpg file. And it works to about 75%, but I can't solve the last 25...
Here is a picture of the map as png
And this is the resulting heightmap/terrain
As you can see, the symbols start to repeat and I have no clue why.
The code:
auto image = IMG_Load(path.c_str());
int lineOffSet = i*(image->pitch/4);
uint32 pixel = static_cast<uint32*>(image->pixels)[lineOffSet + j];
uint8 r, g ,b;
SDL_GetRGB(pixel,image->format,&r, &g, &b);
What I tried:
The number of vertices is correct(256x256).
int lineOffSet = i*(image->pitch/4);
The 4 represents the bytes per pixel, which should in this case be 3, but then I get a completely different terrain (the pitch is 768). The ranges of i and j go from 0-255.
I hope someone has a hint to solve this thing
I think you calculate the address of the desired pixel wrongly. You assume that one pixel is 4 bytes in size. It's usually more reliable to calculate the address in bytes directly (pointer arithmetic on a void* won't compile, so cast to a byte pointer first) and then cast to uint32. Try this:
uint32 pixel = *reinterpret_cast<uint32*>(
    static_cast<uint8*>(image->pixels) +
    image->pitch * i +
    image->format->BytesPerPixel * j);
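One caveat: with 3 bytes per pixel, dereferencing a uint32* reads one byte past the pixel (and past the buffer on the very last pixel of the surface). A defensive sketch, along the lines of the classic SDL getpixel example, copies only BytesPerPixel bytes:

uint32 pixel = 0;
const uint8 *p = static_cast<const uint8*>(image->pixels) +
                 image->pitch * i +
                 image->format->BytesPerPixel * j;
SDL_memcpy(&pixel, p, image->format->BytesPerPixel); // assumes a little-endian host
SDL_GetRGB(pixel, image->format, &r, &g, &b);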

How to use a .raw file in opengl

I'm trying to read a .raw image format and do some modifications on it in OpenGL. I can read the image like this:
int width, height;
BYTE *data;
FILE *file;

file = fopen(filename, "rb");
if (file == NULL) return 0;

width = 256;
height = 256;
data = (BYTE*)malloc(width * height * 3);
fread(data, width * height * 3, 1, file);
fclose(file);
But i dont know how to use glDrawPixels to draw the picture.
My second problem is that I don't know how I can access each pixel. I mean, in a .raw image format each pixel should have 3 integers storing the RGB values (am I right?). How can I access these RGB values directly?
There's no such thing as a .raw in the hard and fast sense. The name implies image data with no header but doesn't specify the format of the data. RGB is likely but so is RGBA and it's trivial to think of almost endless other possibilities.
Assuming RGB ordering, one byte per channel, then: each pixel is three bytes wide. So the nth pixel is:
r = data[n*3 + 0]
g = data[n*3 + 1]
b = data[n*3 + 2]
Assuming the data is set out so that the pixels are stored in left-to-right order, line by line, then on the first line the pixel at x=3 is at n=3, on the second it's at n=(width of first line)+3, on the third it's at n=(combined width of first two lines)+3, etc.
So:
r = data[(x + y*width)*3 + 0]
g = data[(x + y*width)*3 + 1]
b = data[(x + y*width)*3 + 2]
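As a concrete sketch of those formulas (a hypothetical get_pixel helper, using the BYTE buffer from the question):

struct Pixel { BYTE r, g, b; };
Pixel get_pixel(const BYTE *data, int x, int y, int width)
{
    const BYTE *p = data + (y * width + x) * 3; // 3 bytes per RGB pixel
    return { p[0], p[1], p[2] };
}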
To use glDrawPixels just follow what the manual tells you to specify as the parameters. It says:
void glDrawPixels( GLsizei width,
                   GLsizei height,
                   GLenum format,
                   GLenum type,
                   const GLvoid * data);
You say that width and height are 256. You've said that the format is RGB. Scan down the documentation and you'll see that the corresponding GLenum is GL_RGB. You're saying each channel is a single byte in size. So that's GL_UNSIGNED_BYTE. You've loaded the data to data. So:
glDrawPixels(256, 256, GL_RGB, GL_UNSIGNED_BYTE, data);
Further comments: obviously get this working first so you've something to build on but glDrawPixels is almost unused in practice. As a result it isn't even part of OpenGL ES or, correspondingly, WebGL. Look at the semantics of the thing. You supply your buffer every time you call. OpenGL can't know whether it has been modified since the last call. So every call transfers your data from CPU to GPU. Look into submitting your data once as a texture and drawing using geometry. That'll save the per-call transfer cost and therefore be a lot more efficient.
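For the texture route, the upload is a one-time call; after that you draw geometry that samples it. A sketch, reusing data from above:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // RGB rows aren't always 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// ...then render a textured quad instead of calling glDrawPixels every frame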

glGetTexImage reads too much data with texture format GL_ALPHA

I'm trying to retrieve the pixel information for an alpha-only texture via glGetTexImage.
The problem is, the glGetTexImage-Call seems to read more data than it should, leading to memory corruption and a crash at the delete[]-Call. Here's my code:
int format;
glGetTexLevelParameteriv(target, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
int w;
int h;
glGetTexLevelParameteriv(target, 0, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(target, 0, GL_TEXTURE_HEIGHT, &h);
if(w == 0 || h == 0)
    return false;
if(format != GL_ALPHA)
    return false;
unsigned int size = w * h * sizeof(unsigned char);
unsigned char *pixels = new unsigned char[size];
glGetTexImage(target, level, format, GL_UNSIGNED_BYTE, &pixels[0]);
delete[] pixels;
glGetError reports no errors, and without the glGetTexImage-Call it doesn't crash.
'target' is GL_TEXTURE_2D (the texture is valid and bound before the shown code), 'w' is 19, 'h' is 24, 'level' is 0.
If I increase the array size to (w * h * 100) it doesn't crash either. I know for a fact that GL_UNSIGNED_BYTE has the same size as an unsigned char on my system, so I don't understand what's going on here.
Where's the additional data coming from, and how can I make sure that my array is large enough?
Each row written to or read from by OpenGL pixel operations like glGetTexImage is aligned to a 4-byte boundary by default, which may add some padding.
To modify the alignment, use glPixelStorei with the GL_[UN]PACK_ALIGNMENT setting. GL_PACK_ALIGNMENT affects operations that read from OpenGL memory (glReadPixels, glGetTexImage, etc.) while GL_UNPACK_ALIGNMENT affects operations that write to OpenGL memory (glTexImage, etc.)
The alignment can be any of 1 (tightly packed with no padding), 2, 4 (the default), or 8.
So in your case, run glPixelStorei(GL_PACK_ALIGNMENT, 1); before running glGetTexImage.
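Alternatively, keep the default alignment and account for the padding when sizing the buffer. A sketch with the w = 19, h = 24 case from the question:

// GL_ALPHA is 1 byte per pixel; round each row up to a multiple of 4
unsigned int rowBytes = (w + 3) & ~3u;                   // 19 -> 20
unsigned char *pixels = new unsigned char[rowBytes * h]; // 480 instead of 456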

C++ memcpy and happy access violation

For some reason I can't figure out, I am getting an access violation.
memcpy_s (buffer, bytes_per_line * height, image, bytes_per_line * height);
This is whole function:
int Flip_Bitmap(UCHAR *image, int bytes_per_line, int height)
{
    // this function is used to flip bottom-up .BMP images
    UCHAR *buffer; // used to perform the image processing
    int index;     // looping index

    // allocate the temporary buffer
    if (!(buffer = (UCHAR *) malloc(bytes_per_line * height)))
        return(0);

    // copy image to work area
    //memcpy(buffer, image, bytes_per_line * height);
    memcpy_s(buffer, bytes_per_line * height, image, bytes_per_line * height);

    // flip vertically
    for (index = 0; index < height; index++)
        memcpy(&image[((height - 1) - index) * bytes_per_line], &buffer[index * bytes_per_line], bytes_per_line);

    // release the memory
    free(buffer);

    // return success
    return(1);
} // end Flip_Bitmap
Whole code:
http://pastebin.com/udRqgCfU
To run this you'll need a 24-bit bitmap in your source directory.
This is part of a larger program; I am trying to get the Load_Bitmap_File function to work...
So, any ideas?
You're getting an access violation because a lot of image programs don't set biSizeImage properly. The image you're using probably has biSizeImage set to 0, so you're not allocating any memory for the image data (in reality, you're probably allocating 4-16 bytes, since most malloc implementations will return a non-NULL value even when the requested allocation size is 0). So, when you go to copy the data, you're reading past the ends of that array, which results in the access violation.
Ignore the biSizeImage parameter and compute the image size yourself. Keep in mind that the size of each scan line must be a multiple of 4 bytes, so you need to round up:
// width, bits_per_pixel and height come from the BITMAPINFOHEADER
#define ROUNDUP(value, power_of_2) (((value) + (power_of_2) - 1) & (~((power_of_2) - 1)))
int bytes_per_line = ROUNDUP(width * bits_per_pixel / 8, 4);
int image_size = bytes_per_line * height;
Then just use the same image size for reading in the image data and for flipping it.
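A sketch of how that ties into the question's code (assuming file is already positioned at the start of the pixel data):

UCHAR *image = (UCHAR *) malloc(image_size);
fread(image, image_size, 1, file);
Flip_Bitmap(image, bytes_per_line, height); // now both sides agree on the size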
As the comments have said, the image data is not necessarily width * height * bytes_per_pixel.
Memory access is generally faster on 32-bit boundaries, and when dealing with images speed generally matters. Because of this, the rows of an image are often shifted to start on a 4-byte (32-bit) boundary.
If the image pixels are 32-bit (i.e. RGBA) this isn't a problem, but if you have 3 bytes per pixel (24-bit colour), then for certain image widths, where the number of columns * 3 isn't a multiple of 4, extra blank bytes will be inserted at the end of each row.
The image format probably has a "stride" width or elemsize value to tell you this.
You allocate bitmap->bitmapinfoheader.biSizeImage bytes for image but proceed to copy bitmap->bitmapinfoheader.biWidth * (bitmap->bitmapinfoheader.biBitCount / 8) * bitmap->bitmapinfoheader.biHeight bytes of data. I bet the two numbers aren't the same.
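A hedged sketch of sizing the allocation from the header fields instead, reusing the ROUNDUP macro from above (biHeight can be negative for top-down bitmaps, hence the abs):

int bytes_per_line = ROUNDUP(bitmap->bitmapinfoheader.biWidth * bitmap->bitmapinfoheader.biBitCount / 8, 4);
int image_size = bytes_per_line * abs(bitmap->bitmapinfoheader.biHeight);
UCHAR *image = (UCHAR *) malloc(image_size);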