OpenGL Reading Pixels from Texture?

I need a way to get the pixels of an already existing texture, similar to how D3DTexture's LockRect works with ReadOnly and NoSysLock. Some of my textures are also stored in the compressed DXT1/3/5 formats; I'm not entirely sure whether that affects anything, or whether those formats are simply decoded by OpenGL and stored as raw pixels rather than kept compressed. So would retrieving the pixels guarantee the same format that was used to set the texture with?

Generally you will want to use a PBO (pixel buffer object) for reading pixels; the linked PBO documentation has all the information you need.
So would retrieving the pixels guarantee the same format that was used to set the texture with?
It is possible to convert the format and retrieve the pixels at the same time. Look at the format conversion section on the page I linked.
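For the non-PBO path, here is a minimal sketch of reading an existing texture back with glGetTexImage, assuming a desktop GL context; the texture handle, the dimensions, and the use of GLEW as the loader are placeholders/assumptions:

    // Sketch: read back an existing texture's pixels with glGetTexImage.
    // Assumes a valid desktop GL context; GLEW is just an example loader.
    #include <GL/glew.h>
    #include <vector>

    std::vector<unsigned char> ReadTexturePixels(GLuint tex, int width, int height)
    {
        std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);

        glBindTexture(GL_TEXTURE_2D, tex);
        // Ask the driver for level 0 as plain RGBA bytes. For a DXT1/3/5 texture
        // this decompresses on the fly, so what you get back is raw RGBA, not the
        // original compressed blocks.
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

        // To get the compressed blocks themselves, query the size and use
        // glGetCompressedTexImage instead:
        //   GLint size = 0;
        //   glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
        //                            GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &size);
        //   std::vector<unsigned char> blocks(size);
        //   glGetCompressedTexImage(GL_TEXTURE_2D, 0, blocks.data());

        glBindTexture(GL_TEXTURE_2D, 0);
        return pixels;
    }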

Related

How to determine top-down/bottom-up from WIC decoder?

I'm using WIC (Windows Imaging Component) to decode image files and get access to the pixel data. I'm trying to figure out the pixel order (i.e., bottom-up or top-down).
I use IWICImagingFactory::CreateDecoderFromFileName to create the decoder from which I grab the (first) frame (IWICBitmapFrameDecode). With the frame, I use GetPixelFormat and GetSize to compute a buffer size, and finally I use CopyPixels to get the decoded pixel data into my buffer.
This works fine with a variety of JPEG files, giving me pixel rows in top-down sequence, and the pixels are in BGRX order (GUID_WICPixelFormat32bppBGR).
When I try with GIF files, however, the pixel rows come in bottom-up sequence. The reported pixel format is RGBA (GUID_WICPixelFormat32bppRGBA), but the ground truth shows the channel order is BGRA (with the blue in the low byte of each 32-bit pixel, just like JPEG).
My primary question: Is there a way for me to query the top-down/bottom-up orientation of the pixel data?
I found a similar question that asked about rotation when using JPEG sources, and the answer was to query the EXIF data to know whether the image was rotated. But EXIF isn't used with GIF. So I'm wondering whether I'm supposed to assume that pixels are always bottom-up, except for ones that do have an EXIF orientation that says otherwise.
Update 6/25/2020: Nope, the JPEG orientation is neutral and the GIF has no orientation information, yet MS Paint and other programs can open the files in the correct orientation.
My secondary question: What's up with the incorrect channel order (RGB/BGR) from the GIF decoder?
Not only that, the WIC documentation says that the GIF decoder should return indexes into a color table (GUID_WICPixelFormat8bppIndexed) rather than actual pixel values. Is it possible some software on my machine installed its own buggy GIF decoder that supersedes the one that comes with Windows 10?
To query photo orientation for formats that support it, you should use the System.Photo.Orientation photo metadata policy (or one of the file-format-specific metadata query paths) through the IWICMetadataQueryReader interface.
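A minimal sketch of that query, assuming frame is a valid IWICBitmapFrameDecode pointer and COM is initialized; GetOrientation is an illustrative name and error handling is trimmed:

    // Sketch: query System.Photo.Orientation via the metadata query reader.
    // Formats without orientation metadata (e.g. GIF) simply fail the query,
    // in which case we fall back to 1 ("normal" orientation).
    #include <wincodec.h>
    #include <propidl.h>

    UINT GetOrientation(IWICBitmapFrameDecode* frame)
    {
        UINT orientation = 1;

        IWICMetadataQueryReader* reader = nullptr;
        if (SUCCEEDED(frame->GetMetadataQueryReader(&reader)))
        {
            PROPVARIANT value;
            PropVariantInit(&value);
            if (SUCCEEDED(reader->GetMetadataByName(L"System.Photo.Orientation", &value)) &&
                value.vt == VT_UI2)
            {
                orientation = value.uiVal;
            }
            PropVariantClear(&value);
            reader->Release();
        }
        return orientation;
    }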
As for GetPixelFormat() reporting an "incorrect" pixel format, it is right there in the Remarks section:
The pixel format returned by this method is not necessarily the pixel format the image is stored as. The codec may perform a format conversion from the storage pixel format to an output pixel format.
The native byte order of image bitmaps under Windows is BGRA, so that is what you are getting from the decoder. If you want the image in a different format, you need to use IWICImagingFactory::CreateFormatConverter() to create a format converter and convert the image data before copying.
Finally, GIF doesn't have orientation metadata because it is always encoded from top to bottom. The most likely reason you are getting a vertically inverted image is that you are reading it directly from the decoder; try calling CopyPixels() on the converter instead.
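A sketch of that conversion path; factory and frame stand in for an existing IWICImagingFactory and decoded frame, and HRESULT checking is trimmed for brevity:

    // Sketch: convert the decoded frame to 32bpp BGRA and copy pixels from the
    // converter rather than from the raw frame.
    #include <wincodec.h>
    #include <vector>

    std::vector<BYTE> DecodeToBGRA(IWICImagingFactory* factory,
                                   IWICBitmapFrameDecode* frame,
                                   UINT& width, UINT& height)
    {
        IWICFormatConverter* converter = nullptr;
        factory->CreateFormatConverter(&converter);
        converter->Initialize(frame, GUID_WICPixelFormat32bppBGRA,
                              WICBitmapDitherTypeNone, nullptr, 0.0,
                              WICBitmapPaletteTypeCustom);

        converter->GetSize(&width, &height);
        const UINT stride = width * 4;
        std::vector<BYTE> pixels(static_cast<size_t>(stride) * height);

        // Copying from the converter yields rows top-down in the requested
        // channel layout, instead of whatever the decoder outputs natively.
        converter->CopyPixels(nullptr, stride, static_cast<UINT>(pixels.size()),
                              pixels.data());

        converter->Release();
        return pixels;
    }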

Compressed Textures in OpenGL

I have read that compressed textures are not readable and are not color-renderable.
Though I have some idea of why this isn't allowed, can someone explain in a little more detail?
What exactly does "not readable" mean? Can I not read from them in a shader using, say, imageLoad? Or can I not even sample from them?
And what does "not color-renderable" mean? Is it because the user is going to see garbage anyway, so it isn't allowed?
I have not tried using compressed textures.
Compressed textures are "readable", by most useful definitions of that term. You can read from them via samplers. However, you can't use imageLoad operations on them. Why? Because reading such memory is not a simple memory fetch. It involves fetching lots of memory and doing a decompression operation.
Compressed images are not color-renderable, which means they cannot be attached to an FBO and used as a render target. One might think the reason for this was obvious, but if you need it spelled out: writing to a compressed image requires doing image compression on the fly. And most texture compression formats (or compressed formats of any kind) are not designed to easily deal with changing a few values. Not to mention, most compressed texture formats are lossy, so every time you do a decompress/write/recompress cycle, you lose image fidelity.
From the OpenGL Wiki:
Despite being color formats, compressed images are not color-renderable, for obvious reasons. Therefore, attaching a compressed image to a framebuffer object will cause that FBO to be incomplete and thus unusable. For similar reasons, no compressed formats can be used as the internal format of renderbuffers.
So "not color render-able" means that they can't be used in FBOs.
I'm not sure what "not readable" means; it may mean that you can't bind them to an FBO and read from the FBO (since you can't bind them to an FBO in the first place).
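For what it's worth, the incompleteness is easy to observe directly. A small sketch (valid GL context assumed, GLEW as an example loader, compressedTex a placeholder for an existing compressed texture) that attaches it to a throwaway FBO and checks the status:

    // Sketch: attaching a compressed texture to an FBO and checking completeness.
    // With a DXT/S3TC internal format the status is expected to come back as
    // something other than GL_FRAMEBUFFER_COMPLETE, since such formats are not
    // color-renderable.
    #include <GL/glew.h>
    #include <cstdio>

    bool IsColorRenderable(GLuint compressedTex)
    {
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, compressedTex, 0);

        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        std::printf("FBO status: 0x%04X\n", static_cast<unsigned>(status));

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
        return status == GL_FRAMEBUFFER_COMPLETE;
    }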

How to read a texture image into OpenGL

What is the easiest format for reading a texture into OpenGL? Are there any good tutorials for loading image formats like JPG, PNG, or raw into an array that can be used for texture mapping (preferably without the use of a library like libpng)?
OpenGL itself knows nothing about common image formats (other than the natively supported S3TC/DXT-compressed formats and the like, but they are a different story). You need to expand your source images into RGBA arrays. A number of formats and combinations are supported; you need to choose one that suits you, e.g. GL_ALPHA4 for masks, GL_RGB5_A1 for 1-bit transparency, GL_BGRA/GL_RGBA for full color, etc.
For me the easiest (not the fastest) way is PNG, for its lossless compression and full alpha support. I read the PNG and write the RGBA values into an array which I then hand over to OpenGL texture creation. If you don't need alpha you may as well accept JPG or BMP. The pipeline is always the same: source image -> expanded RGBA array -> OpenGL texture.
There is a handy OpenGL texture tutorial available at the link: http://www.nullterminator.net/gltexture.html
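Once the RGBA array exists, the OpenGL side is only a handful of calls. A minimal sketch along the lines of that tutorial (valid GL context assumed, GLEW as an example loader, names are placeholders):

    // Sketch: hand an already-expanded RGBA array to OpenGL. pixels must hold
    // width * height * 4 bytes in RGBA order.
    #include <GL/glew.h>

    GLuint CreateTextureFromRGBA(const unsigned char* pixels, int width, int height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glBindTexture(GL_TEXTURE_2D, 0);
        return tex;
    }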

How can I process an image?

I'm building a program to convert an image file (whatever file type would be easiest) to G-Code for use on a rep-rap with a pen plotter attachment.
I'm wondering, if I wanted to process the image pixel by pixel and check things like pixel color, how could I do this with C++?
I would really like to know how I can process a bitmap image, pixel by pixel, to check the color of the pixel.
The best way is to use a library, for example Magick++.
When you load an image, you can access its pixel data with a Blob.
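For example, here is a small sketch using Magick++'s pixelColor accessor to inspect individual pixels (the file name and the "dark pixel" test are placeholders):

    // Sketch: check pixel colors with Magick++. Link against Magick++/ImageMagick.
    #include <Magick++.h>
    #include <iostream>

    int main(int, char** argv)
    {
        Magick::InitializeMagick(argv[0]);

        Magick::Image image("input.bmp");        // placeholder file name
        const size_t width  = image.columns();
        const size_t height = image.rows();

        for (size_t y = 0; y < height; ++y)
        {
            for (size_t x = 0; x < width; ++x)
            {
                Magick::ColorRGB c = image.pixelColor(x, y);   // channels in 0.0 .. 1.0
                if (c.red() + c.green() + c.blue() < 1.5)      // crude "dark pixel" test
                {
                    // e.g. emit a G-code move for this pixel
                }
            }
        }
        std::cout << "scanned " << width << " x " << height << " pixels\n";
        return 0;
    }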
You will probably want to use an existing library that has been tested.
But for fun/practice/etc., this would be a good exercise and wouldn't be impossible to do. The Bitmap format is (relatively) simple compared with other image formats. The Wikipedia page has tons of info, including some C++ code. It looks like once you've gotten past the header information, you get to a pixel array that shouldn't be difficult to parse.
Good luck.
Most image formats consist of a header and the actual raw image data. A bitmap image is no different. If you don't want to use one of the existing libraries, or if you are not allowed to, you should read about the bitmap format:
http://en.wikipedia.org/wiki/BMP_file_format
Once you understand this you could create appropriate structs/classes to store the information you want from the header, such as the x/y size, bpp, etc., and also keep a pointer to the raw image data. You could then simply iterate through every pixel and do whatever you want with it :)
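A rough sketch of that approach for the simplest case, an uncompressed 24-bit bottom-up BMP; the file name and the per-pixel test are placeholders, and the many other BMP variants (paletted, 32-bit, top-down, RLE) are ignored:

    // Sketch: walk the pixels of an uncompressed 24-bit bottom-up BMP.
    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::ifstream file("input.bmp", std::ios::binary);    // placeholder file name
        std::vector<uint8_t> data((std::istreambuf_iterator<char>(file)),
                                  std::istreambuf_iterator<char>());
        if (data.size() < 54 || data[0] != 'B' || data[1] != 'M')
            return 1;

        // Little-endian 32-bit read at a byte offset into the header.
        auto u32 = [&](size_t off) {
            return uint32_t(data[off]) | uint32_t(data[off + 1]) << 8 |
                   uint32_t(data[off + 2]) << 16 | uint32_t(data[off + 3]) << 24;
        };

        const uint32_t pixelOffset = u32(10);                 // start of the pixel array
        const int32_t  width       = int32_t(u32(18));
        const int32_t  height      = int32_t(u32(22));        // > 0 means bottom-up rows
        const uint16_t bpp         = uint16_t(data[28] | (data[29] << 8));
        if (bpp != 24)
            return 1;                                         // only 24-bit handled here

        const uint32_t stride = ((width * 3 + 3) / 4) * 4;    // rows padded to 4 bytes

        for (int32_t y = 0; y < height; ++y)
        {
            // Row 0 in the file is the bottom row of the image.
            const uint8_t* row = data.data() + pixelOffset + (height - 1 - y) * stride;
            for (int32_t x = 0; x < width; ++x)
            {
                const uint8_t b = row[x * 3 + 0];             // pixels are stored BGR
                const uint8_t g = row[x * 3 + 1];
                const uint8_t r = row[x * 3 + 2];
                if (r < 128 && g < 128 && b < 128)
                {
                    // dark pixel: e.g. emit a G-code move here
                }
            }
        }
        std::cout << "scanned " << width << " x " << height << " pixels\n";
        return 0;
    }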
Once you decipher the image file, I suggest you place the pixels into a matrix for the first pass. (Future revisions can use other methods to access the pixels.)
You can apply transformations to the pixels by using matrix multiplication. You can also access the pixels individually by using array indexing.
Search the web and SO for "introduction to graphics c++".

Read a framebuffer texture like a 1D array

I am doing some GPGPU calculations with GL and want to read my results back from the framebuffer.
My framebuffer texture is logically a 1D array, but I made it 2D to have a bigger area. Now I want to read from any arbitrary pixel in the framebuffer texture, with any given length.
That means all calculations are already done on the GPU side and I only need to pass certain data to the CPU, data that could run across the border of the texture (i.e. wrap onto the next row).
Is this possible? If yes is it slower/faster than glReadPixels on the whole image and then cutting out what I need?
EDIT
Of course I know about OpenCL/CUDA, but they are not desired because I want my program to run out of the box on (almost) any platform.
Also I know that glReadPixels is very slow, and one reason might be that it offers some functionality that I do not need (operating in 2D). Therefore I asked for a more basic function that might be faster.
Reading the whole framebuffer with glReadPixels just to discard everything except a few pixels/lines would be grossly inefficient. But glReadPixels lets you specify a rectangle within the framebuffer, so why not just restrict it to fetching the few rows of interest? You may end up fetching some extra data at the start and end of the first and last lines fetched, but I suspect the overhead of that is minimal compared with making multiple calls.
Possibly writing your data to the framebuffer in tiles and/or using Morton order might help structure it so that a tighter bounding box can be found and the amount of extra data retrieved is minimised.
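If you do try Morton order, the index computation itself is tiny; a sketch assuming 16-bit coordinates:

    // Sketch: interleave the bits of (x, y) into a Morton (Z-order) index, so
    // logically adjacent elements stay close together in the 2D texture and a
    // readback rectangle over a contiguous range stays tight.
    #include <cstdint>

    uint32_t Part1By1(uint32_t v)      // spread the low 16 bits to the even bit positions
    {
        v &= 0x0000FFFF;
        v = (v | (v << 8)) & 0x00FF00FF;
        v = (v | (v << 4)) & 0x0F0F0F0F;
        v = (v | (v << 2)) & 0x33333333;
        v = (v | (v << 1)) & 0x55555555;
        return v;
    }

    uint32_t MortonIndex(uint32_t x, uint32_t y)
    {
        return Part1By1(x) | (Part1By1(y) << 1);
    }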
You can use a pixel buffer object (PBO) to transfer pixel data from the framebuffer to the PBO, then use glMapBufferARB to read the data directly:
http://www.songho.ca/opengl/gl_pbo.html
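A sketch of that approach applied here, reading just the band of rows that covers the needed 1D range; a valid GL context is assumed, GLEW is an example loader, and glMapBuffer is the core equivalent of glMapBufferARB:

    // Sketch: read a band of rows from the framebuffer through a pixel buffer
    // object (PBO), then map the buffer to access the data.
    #include <GL/glew.h>
    #include <cstring>
    #include <vector>

    std::vector<unsigned char> ReadRows(int texWidth, int firstRow, int rowCount)
    {
        const size_t bytes = static_cast<size_t>(texWidth) * rowCount * 4;
        std::vector<unsigned char> result(bytes);

        GLuint pbo = 0;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, bytes, nullptr, GL_STREAM_READ);

        // With a PBO bound to GL_PIXEL_PACK_BUFFER the last argument is an
        // offset into the buffer, so the transfer can complete asynchronously.
        glReadPixels(0, firstRow, texWidth, rowCount, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        // Mapping waits for the transfer to finish.
        if (void* ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY))
        {
            std::memcpy(result.data(), ptr, bytes);
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }

        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        glDeleteBuffers(1, &pbo);
        return result;
    }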