How to use a .raw file in opengl - opengl

I'm trying to read a .raw image format and do some modifications on it in OpenGL. I can read the image like this:
int width, height;
BYTE * data;
FILE * file;
file = fopen( filename, "rb" );
if ( file == NULL ) return 0;
width = 256;
height = 256;
data = malloc( width * height * 3 );
fread( data, width * height * 3, 1, file );
fclose( file );
But I don't know how to use glDrawPixels to draw the picture.
My second problem is that I don't know how I can access each pixel. I mean, in a .raw image format, each pixel should have 3 integers for storing RGB values (am I right?). How can I access these RGB values directly?

There's no such thing as a .raw in the hard and fast sense. The name implies image data with no header but doesn't specify the format of the data. RGB is likely but so is RGBA and it's trivial to think of almost endless other possibilities.
Assuming RGB ordering, one byte per channel, then: each pixel is three bytes wide. So the nth pixel is:
r = data[n*3 + 0]
g = data[n*3 + 1]
b = data[n*3 + 2]
Assuming the data is set out so that the pixels are stored in left-to-right order, line by line, then on the first line the pixel at x=3 is at n=3, on the second it's at n=(width of first line)+3, on the third it's at n=(combined width of first two lines)+3, etc.
So:
r = data[(x + y*width)*3 + 0]
g = data[(x + y*width)*3 + 1]
b = data[(x + y*width)*3 + 2]
To use glDrawPixels just follow what the manual tells you to specify as the parameters. It says:
void glDrawPixels( GLsizei width,
GLsizei height,
GLenum format,
GLenum type,
const GLvoid * data);
You say that width and height are 256. You've said that the format is RGB. Scan down the documentation and you'll see that the corresponding GLenum is GL_RGB. You're saying each channel is a single byte in size. So that's GL_UNSIGNED_BYTE. You've loaded the data to data. So:
glDrawPixels(256, 256, GL_RGB, GL_UNSIGNED_BYTE, data);
Further comments: obviously get this working first so you've something to build on, but glDrawPixels is almost unused in practice; as a result it isn't even part of OpenGL ES or, correspondingly, WebGL. Look at the semantics of the thing: you supply your buffer every time you call, so OpenGL can't know whether it has been modified since the last call, and every call therefore transfers your data from CPU to GPU. Look into submitting your data once as a texture and drawing using geometry. That'll save the per-call transfer cost and therefore be a lot more efficient.

Related

Luminance values clipped to [0, 1] during texture transfer?

I am uploading a host-side texture to OpenGL using something like:
GLfloat * values = new GLfloat[nRows * nCols];
// initialize values
for (int i = 0; i < nRows * nCols; ++i)
{
    values[i] = (i % 201 - 100) / 10.0f; // values from -10.0f .. +10.0f
}
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, nRows, nCols, GL_LUMINANCE, GL_FLOAT, values);
However, when I read back the texture using glGetTexImage(), it turns out that all values are clipped to the range [0, 1].
First, I cannot find where this behavior is documented (I am using the Red Book for OpenGL 2.1).
Second, is it possible to change this behavior and let the values pass unchanged? I want to access the unscaled, unclipped data in a GLSL shader.
I cannot find where this behavior is documented
In the actual specification, it's in the section on Pixel Rectangles, titled Transfer of Pixel Rectangles.
Second, is it possible to change this behavior and let the values pass unchanged?
Yes. If you want to use "unscaled, unclamped" data, you have to use a floating-point image format. The format of your texture is defined when you create the storage for it, probably by a call to glTexImage2D. The third parameter of that function (internalFormat) defines the format, so use a proper floating-point format instead of a normalized fixed-point one.
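For example (a sketch; GL_LUMINANCE32F_ARB matches the luminance data in the question, but any float internal format such as GL_R32F on more modern GL would do):

```c
/* Allocate floating-point storage so values are not clamped to [0, 1].
   nRows/nCols are the caller's dimensions; data may be NULL here and
   uploaded later with glTexSubImage2D. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB,
             nRows, nCols, 0, GL_LUMINANCE, GL_FLOAT, NULL);
```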

Setting individual pixels of an RGB frame for ffmpeg encoding

I'm trying to change the test pattern of an ffmpeg streamer (Trouble syncing libavformat/ffmpeg with x264 and RTP) into familiar RGB format. My broader goal is to compute frames of a streamed video on the fly.
So I replaced its AV_PIX_FMT_MONOWHITE with AV_PIX_FMT_RGB24, which is "packed RGB 8:8:8, 24bpp, RGBRGB..." according to http://libav.org/doxygen/master/pixfmt_8h.html.
To stuff its pixel array called data, I've tried many variations on
for (int y=0; y<HEIGHT; ++y) {
    for (int x=0; x<WIDTH; ++x) {
        uint8_t* rgb = data + ((y*WIDTH + x) * 3);
        const double i = x/double(WIDTH);
        // const double j = y/double(HEIGHT);
        rgb[0] = 255*i;
        rgb[1] = 0;
        rgb[2] = 255*(1-i);
    }
}
At HEIGHTxWIDTH = 80x60, this version yields [image: a striped four-column pattern], when I expect a single blue-to-red horizontal gradient.
640x480 yields the same 4-column pattern, but with far more horizontal stripes.
640x640, 160x160, etc, yield three columns, cyan-ish / magenta-ish / yellow-ish, with the same kind of horizontal stripiness.
Vertical gradients behave even more weirdly.
Appearance was unaffected by an AV_PIX_FMT_RGBA attempt (4 not 3 bytes per pixel, alpha=255). Also unaffected by a port from C to C++.
The argument srcStrides passed to sws_scale() is a length-1 array, containing the single int HEIGHT.
Access each Pixel of AVFrame asks the same question in less detail, so far unanswered.
The streamer emits one warning, which I doubt affects appearance:
[rtp # 0x269c0a0] Encoder did not produce proper pts, making some up.
So. How do you set the RGB value of a pixel in a frame to be sent to sws_scale() (and then to x264_encoder_encode() and av_interleaved_write_frame())?
Use avpicture_fill() as described in Encoding a screenshot into a video using FFMPEG .
Instead of passing data directly to sws_scale(), do this:
AVFrame* pic = avcodec_alloc_frame();
avpicture_fill((AVPicture *)pic, data, AV_PIX_FMT_RGB24, WIDTH, HEIGHT);
and then replace the 2nd and 3rd args of sws_scale() with
pic->data, pic->linesize,
Then the gradients above work properly, at many resolutions.
The argument srcStrides passed to sws_scale() is a length-1 array, containing the single int HEIGHT.
Stride (AKA linesize) is the distance in bytes between two lines. For various reasons having mostly to do with optimization it is often larger than simply width in bytes, so there is padding on the end of each line.
In your case, without any padding, stride should be width * 3.

C++ fwrite access violation when writing image file

I need to append RGB frame to file on each call.
Here is what I do :
size_t length = _viewWidth * _viewHeight * 3;
BYTE *bytes = (BYTE*)malloc(length);
/////////////// read pixels from OpenGL tex /////////////////////
glBindTexture(GL_TEXTURE_2D,tex);
glGetTexImage(GL_TEXTURE_2D,0,GL_BGR,GL_UNSIGNED_BYTE,bytes);
glBindTexture(GL_TEXTURE_2D,0);
///write it to file :
hOutFile = fopen( outFileName.c_str(), cfg.appendMode ? "ab" : "wb" );
assert(hOutFile!=0);
fwrite(bytes, 1 ,w * h, hOutFile); // Write
fclose(hOutFile);
Somehow I am getting an access violation when fwrite gets called. Probably I misunderstood how to use it.
How do you determine _viewWidth and _viewHeight? When reading back a texture you should retrieve them with glGetTexLevelParameteriv, querying the GL_TEXTURE_WIDTH and GL_TEXTURE_HEIGHT parameters of the GL_TEXTURE_2D target.
Also the line
fwrite(bytes, 1 ,w * h, hOutFile);
is wrong. What is w, what is h? They never get initialized in the code and are not connected to the other allocations up there. Also if those are width and height of the image, it still lacks the number of elements of a pixel. Most likely 3.
It would make more sense to have something like
int elements = ...; // probably 3
int w = ...;
int h = ...;
size_t bytes_length = w * elements * h;
BYTE *bytes = (BYTE*)malloc(bytes_length);
...
fwrite(bytes, w * elements, h, hOutFile);
Is it caused by bytes? Maybe w * h is not what you think it is.
Is the width ever an odd number or not evenly divisible by 4?
By default OpenGL assumes that a row of pixel data is aligned to a four byte boundary. With RGB/BGR this isn't always the case, and if so you'll be writing beyond the malloc'ed block and clobbering something. Try putting
glPixelStorei(GL_PACK_ALIGNMENT, 1)
before reading the pixels and see if the problem goes away.

OpenSceneGraph float Image

Using C++ and OSG I'm trying to upload a float texture to my shader, but somehow it does not seem to work. At the end I posted some part of my code. Main question is how to create an osg::Image object using data from a float array. In OpenGL the desired code would be
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, width, height, 0,
GL_LUMINANCE, GL_FLOAT, data);
but in this case I have to use OSG.
The code runs fine when using
Image* image = osgDB::readImageFile("someImage.jpg");
instead of
image = new Image;
but I need to upload generated float data. It's also not possible to switch to unsigned char arrays as I need the GL_LUMINANCE32F_ARB data range in the shader code.
I hope someone can help me here as Google couldn't help me with it (googled for eg: osg float image). So here's my code.
using namespace std;
using namespace osg;
//...
float* data = new float[width*height];
fill_n(data, width*height, 1.0f); // << I actually do this for testing purposes
Texture2D* texture = new Texture2D;
Image* image = new Image;
osg::State* state = new osg::State;
Uniform* uniform = new Uniform(Uniform::SAMPLER_2D, "texUniform");
texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setDataVariance(osg::Object::DYNAMIC);
texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
texture->setWrap(osg::Texture2D::WRAP_T, osg::Texture2D::CLAMP_TO_EDGE);
texture->setWrap(osg::Texture2D::WRAP_S, osg::Texture2D::CLAMP_TO_EDGE);
if (data == NULL)
cout << "texdata null" << endl; // << this is not printed
image->setImage(width, height, 1, GL_LUMINANCE32F_ARB,
GL_LUMINANCE, GL_FLOAT,
(unsigned char*)data, osg::Image::USE_NEW_DELETE);
if (image->getDataPointer() == NULL)
cout << "datapointernull" << endl; // << this is printed
if (!image->valid())
exit(1); // << here the code exits (hard exit just for testing purposes)
osgDB::writeImageFile(*image, "blah.png");
texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setImage(image);
camera->getOrCreateStateSet()->setTextureAttributeAndModes(4, texture);
state->setActiveTextureUnit(4);
texture->apply(*state);
uniform->set(4);
addProgrammUniform(uniform);
I found another way on the web, letting osg::Image create the data and fill it afterwards. But somehow this also does not work. I inserted this just after the new XYZ; lines.
image->setInternalTextureFormat(GL_LUMINANCE32F_ARB);
image->allocateImage(width,height,1,GL_LUMINANCE,GL_FLOAT);
if (image->data() == NULL)
cout << "null here?!" << endl; // << this is printed.
I use the following (simplified) code to create and set a floating-point texture:
// Create texture and image
osg::Texture* texture = new osg::Texture2D;
osg::Image* image = new osg::Image();
image->allocateImage(size, size, 1, GL_LUMINANCE, GL_FLOAT);
texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);
texture->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE);
texture->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE);
texture->setImage(image);
// Set texture to node
osg::StateSet* stateSet = node->getOrCreateStateSet();
stateSet->setTextureAttributeAndModes(TEXTURE_UNIT_NUMBER, texture);
// Set data
float* data = reinterpret_cast<float*>(image->data());
/* ...data processing... */
image->dirty();
You may want to change some of the parameters, but this should give you a start. I believe that in your case TEXTURE_UNIT_NUMBER should be set to 4.
but I need to upload generated float data. It's also not possible to switch to unsigned char arrays as I need the GL_LUMINANCE32F_ARB data range in the shader code.
osgDB::writeImageFile(*image, "blah.png");
PNG files don't support 32-bit-per-channel data, so you can not write your texture to file this way. See the libpng book:
PNG grayscale images support the widest range of pixel depths of any image type. Depths of 1, 2, 4, 8, and 16 bits are supported, covering everything from simple black-and-white scans to full-depth medical and raw astronomical images.[63]
[63] Calibrated astronomical image data is usually stored as 32-bit or 64-bit floating-point values, and some raw data is represented as 32-bit integers. Neither format is directly supported by PNG, although one could, in principle, design an ancillary chunk to hold the proper conversion information. Conversion of data with more than 16 bits of dynamic range would be a lossy transformation, however--at least, barring the abuse of PNG's alpha channel or RGB capabilities.
For 32 bit per channel, check out the OpenEXR format.
If however 16bit floating points (i.e. half floats) suffice, then you can go about it like so:
osg::ref_ptr<osg::Image> heightImage = new osg::Image;
int pixelFormat = GL_LUMINANCE;
int type = GL_HALF_FLOAT;
heightImage->allocateImage(tex_width, tex_height, 1, pixelFormat, type);
Now to actually use and write half floats, you can use the GLM library. You get the half float type by including <glm/detail/type_half.hpp>, which is then called hdata.
You now need to get the data pointer from your image and cast it to said format:
glm::detail::hdata *data = reinterpret_cast<glm::detail::hdata*>(heightImage->data());
This you can then access like you would a one dimensional array, so for example
data[currentRow*tex_width+ currentColumn] = glm::detail::toFloat16(3.1415f);
Note that if you write this same data to a bmp or tif file (using the osg plugins), the result will be incorrect. In my case I just got the left half of the intended image stretched onto the full width, and not in grayscale but in some strange color encoding.

C++ memcpy and happy access violation

For some reason I can't figure out, I am getting an access violation.
memcpy_s (buffer, bytes_per_line * height, image, bytes_per_line * height);
This is whole function:
int Flip_Bitmap(UCHAR *image, int bytes_per_line, int height)
{
    // this function is used to flip bottom-up .BMP images
    UCHAR *buffer; // used to perform the image processing
    int index;     // looping index

    // allocate the temporary buffer
    if (!(buffer = (UCHAR *) malloc (bytes_per_line * height)))
        return(0);

    // copy image to work area
    //memcpy(buffer, image, bytes_per_line * height);
    memcpy_s (buffer, bytes_per_line * height, image, bytes_per_line * height);

    // flip vertically
    for (index = 0; index < height; index++)
        memcpy(&image[((height - 1) - index) * bytes_per_line],
               &buffer[index * bytes_per_line], bytes_per_line);

    // release the memory
    free(buffer);

    // return success
    return(1);
} // end Flip_Bitmap
Whole code:
http://pastebin.com/udRqgCfU
To run this you'll need a 24-bit bitmap in your source directory.
This is part of a larger code base; I am trying to make the Load_Bitmap_File function work...
So, any ideas?
You're getting an access violation because a lot of image programs don't set biSizeImage properly. The image you're using probably has biSizeImage set to 0, so you're not allocating any memory for the image data (in reality, you're probably allocating 4-16 bytes, since most malloc implementations will return a non-NULL value even when the requested allocation size is 0). So, when you go to copy the data, you're reading past the ends of that array, which results in the access violation.
Ignore the biSizeImage parameter and compute the image size yourself. Keep in mind that the size of each scan line must be a multiple of 4 bytes, so you need to round up:
// Pseudocode
#define ROUNDUP(value, power_of_2) (((value) + (power_of_2) - 1) & (~((power_of_2) - 1)))
bytes_per_line = ROUNDUP(width * bits_per_pixel/8, 4)
image_size = bytes_per_line * height;
Then just use the same image size for reading in the image data and for flipping it.
As the comments have said, the image data is not necessarily width * height * bytes_per_pixel.
Memory access is generally faster on 32-bit boundaries, and when dealing with images speed generally matters. Because of this, the rows of an image are often shifted to start on a 4-byte (32-bit) boundary.
If the image pixels are 32-bit (i.e. RGBA) this isn't a problem, but if you have 3 bytes per pixel (24-bit colour) then for certain image widths, where the number of columns * 3 isn't a multiple of 4, extra blank bytes will be inserted at the end of each row.
The image format probably has a "stride", width, or elemsize value to tell you this.
You allocate bitmap->bitmapinfoheader.biSizeImage for image but proceed to copy bitmap->bitmapinfoheader.biWidth * (bitmap->bitmapinfoheader.biBitCount / 8) * bitmap->bitmapinfoheader.biHeight bytes of data. I bet the two numbers aren't the same.