Accessing rendered OpenGL image

I am rendering an image using OpenGL in C++, and want to access the resulting image to do some more processing on it. (I'm rendering an image, have an actual image it's supposed to look like, and want to compute the pixel difference between the two.)
So far I have only been rendering images to the screen, though, and I can't figure out how to render an image and then get direct access to the pixels that were drawn. I don't especially care whether I can see the image on the screen or not; all I want is for the image to be rendered to some region of memory that I can access from the CPU. How do you do this?
Alternatively, would it be possible to send the image it's supposed to look like to OpenGL and compute the pixel difference on the GPU? Either option is fine with me, but the faster I can make it the better. (Right now, I can render about 100 frames per second, but still haven't figured out how to do the comparisons.)

Yes, you could do it on the GPU. Put the two images in textures. Draw a frame-filling quad multi-textured with the two textures, being sure to provide texture coordinates, and write a fragment shader to compute the difference. (This is one reason the commenter's question about whether you use the programmable pipeline matters: with only the fixed-function pipeline, you wouldn't have the option of writing a fragment shader.)
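For illustration, here is a minimal GLSL fragment shader along those lines, embedded as a C++ string (the sampler names and GLSL version are my own choices, not anything prescribed by the question):

    // Hypothetical fragment shader computing the per-pixel absolute
    // difference of two textures bound to units 0 and 1.
    const char* diffFragmentShader = R"(
        #version 120
        uniform sampler2D renderedTex;   // the image you rendered
        uniform sampler2D referenceTex;  // the image it should look like
        void main() {
            vec4 a = texture2D(renderedTex,  gl_TexCoord[0].st);
            vec4 b = texture2D(referenceTex, gl_TexCoord[0].st);
            gl_FragColor = abs(a - b);   // per-channel absolute difference
        }
    )";

Note that this produces a difference image; to collapse it into a single error number you would still need a reduction, e.g. repeated downsampling passes or a readback of the difference image.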

The obvious way would be to use glReadPixels to read the rendered results in the framebuffer to host memory.
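For example, a minimal readback sketch (the GL_RGBA/GL_UNSIGNED_BYTE format is just one common choice; any GL header or loader will do):

    #include <vector>
    #include <GL/gl.h>

    // Read the current framebuffer back into CPU-accessible memory.
    std::vector<unsigned char> readFramebuffer(int width, int height) {
        std::vector<unsigned char> pixels(width * height * 4); // RGBA8
        glPixelStorei(GL_PACK_ALIGNMENT, 1); // avoid row-padding surprises
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                     pixels.data());
        return pixels; // row 0 is the bottom row in OpenGL conventions
    }

Be aware that glReadPixels synchronizes with the GPU before returning, so doing it every frame will cost you some of those 100 frames per second.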

Related

Can you apply transformation matrices after running your pixel shaders?

I'm working with images, and I was tasked to extend the amount of image post-processing effects that we can perform on our images. Certain required effects need pixel data for calculations, so I created a few pixel shaders to do the job, and they work fine.
The problem is that the images need to be transformable, i.e. they need to be able to rotate, zoom in and out, pan, etc. The creation of all these textures, the algorithms to do the post-processing, they're all slowing the program down. I need a way to do these transformations without completely re-doing every effect. Some of the images the program works on are multi-gigabyte images, so I can't really do the obvious thing of caching the images after transformations for later use.
I'm looking for some sort of reasonable solution here. I'm not a graphics guy, but I can't imagine that similar programs with post-processing redo the post processing every time you pan. My best guess is saving off the last texture and applying the transformations on that, but I don't really know how to do that.
By saying "images" I assume you mean 2D textures you load and apply some post-pro effects. If that's the case just create a render target and render to that with all the post-effects.
Then rotate/pan a quad with that texture attached (a simplistic texture-fetching fragment shader will be required). Rerender that texture in case the post-pro parameters change.
If, on the other hand, you have a 3D scene, then there is no way around it: you have to render it each frame.
If my assumptions are wrong, it would be best if you provided more details on your case.
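For illustration, a rough sketch of that render-target idea with a GL 3.x-style FBO (the function and variable names here are mine):

    #include <GL/glew.h> // or any other modern GL loader

    // Create a texture + FBO pair so the expensive post-processing can be
    // rendered once and then reused as a plain texture every frame.
    GLuint createRenderTarget(int width, int height, GLuint& colorTexOut) {
        glGenTextures(1, &colorTexOut);
        glBindTexture(GL_TEXTURE_2D, colorTexOut);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTexOut, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return fbo;
    }

Render the post-effects into the FBO only when their parameters change; every frame, just draw the rotated/zoomed/panned quad sampling colorTexOut, which is cheap.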

how to retrieve z depth and color of a rendered pixel

I would like to retrieve the z depth of each pixel of a rendered object in a scene.
I will also need to retrieve the rendered color.
What are the OpenGL techniques to implement this?
glReadPixels and CPU-side code
Use glReadPixels to obtain both the RGB and depth buffers. Here are examples for both:
depth buffer got by glReadPixels is always 1
OpenGL Scale Single Pixel Line
That will read the buffers into CPU accessible memory. This way is slow (due to sync) but should work on any platform.
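A minimal sketch of that CPU-side readback (the format and type choices are just common defaults):

    #include <vector>
    #include <GL/gl.h>

    // Read back both the color and depth buffers of the current framebuffer.
    void readColorAndDepth(int w, int h,
                           std::vector<unsigned char>& rgba,
                           std::vector<float>& depth) {
        rgba.resize(w * h * 4);
        depth.resize(w * h);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
        // The depth values are non-linear window-space values in [0..1];
        // linearize them with your projection's znear/zfar if you need
        // actual distances.
    }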
FBO render to texture and GPU shader
A faster method is to use an FBO to render to texture, and then use that output in the next rendering pass as an input texture for computing your stuff inside shaders. This, however, will not run properly on Intel and might need additional tweaking of the code between nVidia and AMD.
If you have per-pixel output, use a single QUAD covering your screen as the second rendering pass.
If you instead have a single output for the whole screen, render a single POINT and compute everything in the fragment shader (scanning the whole texture inside it), something like this:
How to implement 2D raycasting light effect in GLSL
The difference is that by using shaders and an FBO you are not transferring data between GPU and CPU, so it is way faster.
The content of the target textures can still be read by the CPU using texture-related GL functions.
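As a rough sketch (GL 3.x-style calls; the names are mine), an FBO with both a color and a depth texture attached so that a second pass can sample them:

    #include <GL/glew.h>

    // FBO whose color and depth both land in textures, so the next pass
    // can fetch the rendered color and z depth directly on the GPU.
    void createColorDepthFBO(int w, int h, GLuint& fbo,
                             GLuint& colorTex, GLuint& depthTex) {
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }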
GPU compute shaders
There are also compute shaders out there, but I have not used them yet so I am just guessing; with them it might be possible to do your stuff in a single pass, and the form of the result and computation should not be as limiting.
My bet is that you are doing some post-processing similar to deferred shading, so googling that topic and its tutorials might help.

Overwrite pixel per pixel in an openGL 2d texture

I want to create an OpenGL 2D texture and set the RGBA values of every pixel individually. Can someone give me an explanation for my problem? I didn't find one on the internet.
If you're just looking to write the pixels of a 2D texture, you can simply use glTexImage2D, which takes a buffer specifying the pixel data you wish to upload to the texture (https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml). Alternatively, you can use glTexSubImage2D to write a portion of the texture's pixels (https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexSubImage2D.xml). If you're instead looking to do the analogous thing with the framebuffer, you can use glDrawPixels (https://www.opengl.org/sdk/docs/man2/xhtml/glDrawPixels.xml).
It is also possible to write exact pixel values to a texture by binding it to a framebuffer and rendering a textured quad that completely covers it. However, that process is subject to blending and potentially to pixel-center issues, whereas glDrawPixels is not.
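To illustrate the texture upload path, a minimal sketch (the gradient pattern and the function name are just for demonstration):

    #include <vector>
    #include <GL/gl.h>

    // Build RGBA pixel data on the CPU and upload it into a new 2D texture.
    GLuint makePixelTexture(int w, int h) {
        std::vector<unsigned char> pixels(w * h * 4);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                unsigned char* p = &pixels[(y * w + x) * 4];
                p[0] = x % 256; // R - set each pixel however you like
                p[1] = y % 256; // G
                p[2] = 0;       // B
                p[3] = 255;     // A (opaque)
            }
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        return tex;
    }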
I did something like this some time ago, when playing around with OpenGL.
Have a look at the code here, on GitHub.
You can find it in main.cpp.
Basically, my idea was to create an array of floats, set the values, copy to GPU with glBufferData and draw with glDrawElements.
As I remember it, doing it often was very bad in terms of performance, so it's probably not the best direction.
Please also note that this code is just my sandbox, and may not be the best possible example to be copied.

Mouse-picking using off-screen rendering?

I have a 3D scene with a lot of simple objects (maybe a huge number of them), so I think it's not a very good idea to use ray-tracing for picking objects with the mouse.
I'd like to do something like this:
render all these objects into some OpenGL off-screen buffer, using a pointer to the current object instead of its color
render the same scene onto the screen, using real colors
when the user picks a point at (x,y) screen coordinates, I take the value from the off-screen buffer (at the corresponding position) and get a pointer to the object
Is it possible? If yes, what type of buffer can I choose for "drawing with pointers"?
I suppose you can render in two passes: first to a buffer or texture holding the data you need for picking, and then, in a second pass, the data actually displayed. I am not really familiar with OpenGL, but in DirectX you can do it like this: http://www.two-kings.de/tutorials/dxgraphics/dxgraphics16.html. You could then find a way to analyse the texture. Keep in mind that you are rendering twice; this will not necessarily double your render time (as you do not need to apply all your shaders and effects), but it will increase it quite a lot. Also, each frame you are essentially sending at least 2 MB of data (if you go for 1 byte per pixel on a 2K monitor) from GPU to CPU, though that might change if you have more than 256 objects on screen.
Edit: Here is how to do the same in OpenGL, although I cannot verify that the tutorial is correct: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/ (There are also many more if you look around on Google.)
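A common variant of this idea is color picking: because color channels cannot hold a full 64-bit pointer, you render each object with a unique color that encodes an ID (an index into a table of object pointers), then read back only the single pixel under the cursor. A rough sketch, with names of my own choosing:

    #include <GL/gl.h>

    // Encode a 24-bit object ID as an RGB color for the picking pass
    // (disable lighting, texturing and blending so the color is exact).
    void setPickingColor(unsigned id) {
        glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
    }

    // After rendering the picking pass (off-screen, or into the back
    // buffer before swapping), read the one pixel under the mouse.
    unsigned pickObject(int mouseX, int mouseY, int windowHeight) {
        unsigned char rgb[3];
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(mouseX, windowHeight - 1 - mouseY, 1, 1,
                     GL_RGB, GL_UNSIGNED_BYTE, rgb);
        return rgb[0] | (rgb[1] << 8) | (rgb[2] << 16); // look up in table
    }

Reading a single pixel also sidesteps the per-frame megabytes mentioned above: you only pay for the readback when the user actually clicks.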

OpenGL: How to update subimage of a rectangular texture?

I am trying to update a small square section in a large rectangular texture.
I've tried using glTexSubImage2D with the target set to GL_TEXTURE_RECTANGLE_ARB, but I'm running into issues. It may just be that I don't know how to use glTexSubImage2D correctly, though. Could the issues be due to the fact that I'm trying not to load the whole texture into main memory with glTexImage2D before updating the subimage?
Can anyone tell me if it's possible to update a subimage of a rectangular texture without having to read the entire texture into main memory? I see glCopyTexSubImage2D... I'm still wondering whether these methods work with rectangular textures, though.
I'm quite sure you have to upload the full-size texture at least once with glTexImage2D. You can even use a blank array if you don't have access to the whole image at the beginning.
glCopyTexSubImage2D is not really what you want: it copies a specific portion of the framebuffer to a texture. But you want to upload from main memory, right?
I don't see any reason why rectangular textures cannot be supported by those methods unless there is a broken driver.
Note that rectangular textures do not use texture coordinates in the [0..1] range like other textures; instead they use the [0..Width] and [0..Height] ranges.
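A minimal sketch of that approach (GL_TEXTURE_RECTANGLE_ARB target; the function names are mine): allocate the full-size texture once with a null pointer, which reserves storage without needing the image in main memory, then update just the small square with glTexSubImage2D:

    #include <GL/glew.h>

    // Allocate the full rectangle texture once; passing nullptr reserves
    // storage on the GPU without uploading (or even having) any pixels.
    GLuint createRectTexture(int w, int h) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
        glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        return tex;
    }

    // Later: overwrite only a small square region of the texture.
    void updateSquare(GLuint tex, int x, int y, int size,
                      const unsigned char* rgba) {
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, x, y, size, size,
                        GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    }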