I want to store and update information in a texture. The idea is that I create a new texture with the current information. While writing to it in the render pass, I actually want to read the information already at the same pixel and store a weighted average of both values: the value that was just rendered to that pixel and the value that was already stored there.
Now I have read very often that I cannot read and write the same texture. So my questions are: is it perhaps possible after all? And if not, should I copy the texture contents before the rendering step and pass the copy to the shader? If so, how can I copy the texture, or should I do an extra rendering step for the copy?
I see two possible options here, depending on the mix equation
Alpha Blending: If the equation used can be mapped to one of the glBlendFunc factor combinations, then this is the way to go. If you want to use linear factors for the stored value and the new value, this should be possible (a possible setup is sketched after these two options). This is also the option where I would expect the best performance.
Image Load Store: With this method one can read and write to the same texture at the same time (see here). The performance will usually be very bad here and you will have to use the image atomic operations to ensure that multiple fragments at the same location always read the correct value.
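For the alpha blending option, a minimal sketch of a blend state that computes newValue * a + storedValue * (1 - a) could look like this; the weight of 0.25 is only an example:

```cpp
// Weighted average via fixed-function blending: the constant blend colour's
// alpha acts as the weight a, so the result is src * a + dst * (1 - a).
// Assumes a GL context and that the target texture is attached to the bound FBO.
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(0.0f, 0.0f, 0.0f, 0.25f);   // a = 0.25 is an arbitrary example weight
// Render into the texture; each incoming fragment is now mixed with the value
// already stored at that pixel.
```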
Copying the texture would, in my opinion, only work if you render an image and then perform one weighted-average computation on it afterwards (otherwise you would have to copy the texture after each store operation). But if this is the case, one could simply render the result of the average computation to a different texture and completely avoid all the trouble of copying the input data.
If resorting to an extension is an option, you can use NV_texture_barrier, which allows writing to and reading from the same texture.
I'm interested in sub-pixel sampling my OpenGL renders around the edge silhouettes of my meshes for a computer vision task. I'm thinking of using MSAA to do it efficiently (the application is not anti-aliasing). The problem I find with multisampling is that, in order to read the samples back from the GPU, I can only blit the framebuffer into a non-multisampled one, so I cannot recover individual sample information. My questions are:
Is there a way to implement a fragment shader that stores the results of a per-sample (GL_SAMPLE_SHADING) computation such that I can read those samples back on the CPU? I've thought of using gl_SampleID to index the output to different out buffers, but I don't know if that's possible at all. Perhaps a method like the linked-list structures used for OIT (i.e. http://on-demand.gputechconf.com/gtc/2014/presentations/S4385-order-independent-transparency-opengl.pdf)? However, there they perform all computations on the GPU, so I'm not sure if I can read the linked-list data from the CPU in any way.
Maybe MSAA is the wrong approach and there are other methods to do so. I guess my last resort is to super sample the render x times and thus recover individual samples, but that seems to be a very inefficient solution.
You can write a compute shader which reads each sample via imageLoad and writes the sample's data to an SSBO (fragment shader outputs and image load/store would not be appropriate for the output). You'll need the usual memory barrier synchronization when it comes time to read it back, but this way you can write directly to a buffer object, rather than having to use a PBO to read from a texture.
The hardest part will be converting gl_GlobalInvocationID and the other compute shader inputs into the index in the SSBO array as well as the texture coordinate and sample index for your imageLoad operation.
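A minimal sketch of what such a compute shader could look like, assuming the multisampled colour buffer is bound as an image2DMS with four samples and the output is a tightly packed SSBO; the binding points, names and the fixed sample count are assumptions:

```cpp
// Hypothetical compute shader (GLSL, embedded as a C++ string) that copies every
// sample of a multisampled image into a shader storage buffer.
static const char* kDumpSamplesCS = R"GLSL(
#version 430
layout(local_size_x = 16, local_size_y = 16) in;

layout(rgba8, binding = 0) uniform readonly image2DMS uMsaaImage;

layout(std430, binding = 0) buffer SampleBuffer {
    vec4 samples[];               // width * height * NUM_SAMPLES entries
};

const int NUM_SAMPLES = 4;        // must match the texture's sample count

void main() {
    ivec2 size  = imageSize(uMsaaImage);
    ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
    if (texel.x >= size.x || texel.y >= size.y)
        return;

    // Each texel owns NUM_SAMPLES consecutive slots in the output buffer.
    int base = (texel.y * size.x + texel.x) * NUM_SAMPLES;
    for (int s = 0; s < NUM_SAMPLES; ++s)
        samples[base + s] = imageLoad(uMsaaImage, texel, s);
}
)GLSL";
```

After glDispatchCompute, a glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT) is needed before mapping or reading the buffer back on the CPU.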
Goal: compensate and visualize a stream of 14-bit data (2D video).
Existing solution: Each sample needs to be compensated for a gain and an offset, so it requires one multiplication and one addition. Then I assign a colour to the sample via a look-up table and output a stream of "colours" directly to the display. Everything is done on the CPU.
Requirements: I need to be able to dynamically set a look-up table (palette).
It seems obvious to use GPU for such an operation, but I couldn't find any info about how to move from data domain to picture domain with OpenGL. I've thought about using OpenCL for data compensation and image generation and then moving to OpenGL for displaying (or in general: for manipulating picture).
Can you recommend a good approach for this? Can this all be achieved efficiently with OpenGL alone? How?
Yes, it can be done using only OpenGL.
I would suggest a workflow like the following:
For each frame:
Upload frame from stream to texture memory
Draw a full-screen quad, with texture coordinates from 0,0 to 1,1
In a fragment shader, apply the appropriate transformation to each pixel. The look-up table can also be stored in a texture, so you only have to perform a lookup at the appropriate location.
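A rough sketch of that fragment shader, assuming the 14-bit samples are uploaded into a GL_R16UI texture (sampled with GL_NEAREST) and the palette into a small RGBA texture; all names, uniforms and formats here are assumptions:

```cpp
// Hypothetical fragment shader (GLSL, embedded as a C++ string) for the last step.
static const char* kCompensateFS = R"GLSL(
#version 330
in vec2 vTexCoord;                // from the full-screen quad's vertex shader
out vec4 fragColor;

uniform usampler2D uData;         // raw 14-bit samples in a R16UI texture (GL_NEAREST)
uniform sampler2D  uPalette;      // 1 x N look-up table
uniform float uGain;
uniform float uOffset;

void main() {
    float raw   = float(texture(uData, vTexCoord).r);             // 0 .. 16383
    float value = clamp((raw * uGain + uOffset) / 16383.0, 0.0, 1.0);
    fragColor   = texture(uPalette, vec2(value, 0.5));
}
)GLSL";
```

Per frame, the data texture would then be updated with glTexSubImage2D and the quad redrawn; the palette texture only needs to be re-uploaded when the user changes it.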
In general: This question is at the moment a little bit too broad to be answered in more detail. For example a stream of 14-bit data could be a lot of things. I assumed for this answer you meant a (2D) video stream.
I have a rendering step which I would like to perform on a dynamically-generated texture.
The algorithm can operate on rows independently in parallel. For each row, the algorithm will visit each pixel in left-to-right order and modify it in situ (no distinct output buffer is needed, if that helps). Each pass uses state variables which must be reset at the beginning of each row and persist as we traverse the columns.
Can I set up OpenGL shaders, or OpenCL, or whatever, to do this? Please provide a minimal example with code.
If you have access to GL 4.x-class hardware that implements EXT_shader_image_load_store or ARB_shader_image_load_store, I imagine you could pull it off. Otherwise, in-situ read/write of an image is generally not possible (though there are ways with NV_texture_barrier).
That being said, once you start wanting pixels to share state the way you do, you kill off most of your potential gains from parallelism. If the value you compute for a pixel is dependent on the computations of the pixel to its left, then you cannot actually execute each pixel in parallel. Which means that the only parallelism your algorithm actually has is per-row.
That's not going to buy you much.
If you really want to do this, use OpenCL. It's much friendlier to this kind of thing.
Yes, you can do it. No, you don't need 4.X hardware for that, you need fragment shaders (with flow control), framebuffer objects and floating point texture support.
You need to encode your data into a 2D texture.
Store the "state variable" in the first pixel of each row, and encode the rest of the data into the remaining pixels. It goes without saying that a floating-point texture format is recommended.
Use two framebuffers, and render them onto each other in a loop using a fragment shader that updates the "state variable" in the first column and performs whatever operation you need on the current column. To reduce the amount of wasted resources you can limit rendering to the columns you want to process. The NVIDIA OpenGL SDK examples included "Game of Life", "GPGPU fluid" and "GPU particles" demos that work in a similar fashion: by encoding data into a texture and then using shaders to update it.
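A very rough sketch of that ping-pong loop, assuming two FBOs each backed by a floating-point texture, and a fragment shader that passes every pixel through unchanged except the per-row state in column 0 and the column selected by a uniform; all names, the uniform and the drawFullScreenQuad helper are placeholders:

```cpp
#include <utility>   // std::swap; a GL context and loader (e.g. GLAD) are assumed

GLuint fbo[2], tex[2];   // two FBOs, each with one floating-point colour texture
GLuint prog;             // fragment shader doing the per-column update
int src = 0, dst = 1;

GLint colLoc = glGetUniformLocation(prog, "uCurrentColumn");
glUseProgram(prog);
glViewport(0, 0, textureWidth, textureHeight);

for (int col = 1; col < textureWidth; ++col) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);   // write to one texture ...
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[src]);        // ... while reading the other
    glUniform1i(colLoc, col);

    drawFullScreenQuad();   // placeholder: draws a quad covering the whole texture

    std::swap(src, dst);    // ping-pong for the next column
}
// After the loop, tex[src] holds the fully processed image.
```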
However, just because you can do it doesn't mean you should, and it doesn't mean it is guaranteed to be fast. Some GPUs might have a very high texture memory read speed but relatively slow computation speed (and vice versa), and not all GPUs have many pipelines for processing things in parallel.
Also, depending on your app, CUDA or OpenCL might be more suitable.
I am doing some GPGPU calculations with GL and want to read my results back from the framebuffer.
My framebuffer texture is logically a 1D array, but I made it 2D to have a bigger area. Now I want to read from any arbitrary pixel in the framebuffer texture with any given length.
That means all calculations are already done on the GPU side and I only need to pass certain data to the CPU, data that may wrap across row boundaries of the texture.
Is this possible? If yes, is it slower or faster than calling glReadPixels on the whole image and then cutting out what I need?
EDIT
Of course I know about OpenCL/CUDA but they are not desired because I want my program to run out of the box on (almost) any platform.
Also I know that glReadPixels is very slow and one reason might be that it offers some functionality that I do not need (Operating in 2D). Therefore I asked for a more basic function that might be faster.
Reading the whole framebuffer with glReadPixels just to discard all of it except for a few pixels/lines would be grossly inefficient. But glReadPixels lets you specify a rect within the framebuffer, so why not just restrict it to fetching the few rows of interest? You may end up fetching some extra data at the start and end of the first and last lines fetched, but I suspect the overhead of that is minimal compared with making multiple calls.
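A quick sketch of that idea, assuming the logical 1D data is packed row-major into an RGBA float framebuffer texture; the helper name and format are assumptions:

```cpp
#include <vector>
// Assumes a current GL context and that the framebuffer holding the results is
// bound for reading; `width` is the texture width, 4 floats (RGBA32F) per texel.
std::vector<float> readRange(int first, int count, int width)
{
    const int firstRow = first / width;
    const int lastRow  = (first + count - 1) / width;
    const int rows     = lastRow - firstRow + 1;

    // One call fetches every row that the 1D range touches.
    std::vector<float> block(static_cast<size_t>(rows) * width * 4);
    glReadPixels(0, firstRow, width, rows, GL_RGBA, GL_FLOAT, block.data());

    // Trim the unwanted texels at the start of the first row and the end of the last.
    const size_t begin = static_cast<size_t>(first - firstRow * width) * 4;
    return std::vector<float>(block.begin() + begin,
                              block.begin() + begin + static_cast<size_t>(count) * 4);
}
```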
Possibly writing your data to the framebuffer in tiles and/or using Morton order might help structure it so that a tighter bounding box can be found and the amount of extra data retrieved is minimised.
You can use a pixel buffer object (PBO) to transfer pixel data from the framebuffer to the PBO, then use glMapBufferARB to read the data directly:
http://www.songho.ca/opengl/gl_pbo.html
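In outline, the technique from that article looks roughly like this; buffer size, format and the single-PBO simplification are assumptions:

```cpp
// Asynchronous readback through a pixel buffer object. A GL context is assumed;
// width/height are the dimensions of the region being read.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);

// With a PBO bound to GL_PIXEL_PACK_BUFFER, the last argument of glReadPixels is
// a byte offset into the buffer, and the transfer can proceed asynchronously.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

// Later (ideally a frame or so afterwards), map the buffer to access the pixels.
void* ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (ptr) {
    // ... use the pixel data ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```

Using two PBOs in alternation lets the readback of one frame overlap with the rendering of the next, which is the pattern the linked article describes.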
What is the difference between the two functions, glTexImage and glTexSubImage?
Any performance difference?
Thanks..
You create a texture using glTexImage, and then update its contents with glTexSubImage. When you update the texture, you can update the entire texture, or just a sub-rectangle of it.
It is far more efficient to create the one texture and update it than to create it and delete it repeatedly, so in that sense if you have a texture you want to update, always use glTexSubImage (after the initial creation).
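A minimal sketch of that create-once / update-often pattern; the size, format and data pointer are placeholders:

```cpp
// Create the texture once with glTexImage2D: this allocates storage (passing
// nullptr leaves the contents undefined) and sets the dimensions and format.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Every frame (or whenever the data changes), update it with glTexSubImage2D:
// either the whole image, as here, or just a sub-rectangle of it.
glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, 0, width, height,          // xoffset, yoffset, width, height
                GL_RGBA, GL_UNSIGNED_BYTE, newPixels);
```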
Other techniques may be applicable for texture updates. For example, see this article on texture streaming for further information.
(Originally, this post suggested using glMapBuffer for texture updates - see discussion below.)
The gl functions with "sub" in the name aren't limited to power-of-2 dimensions. As gavinb points out, you need to use the non-sub variant once to set the overall dimensions, but I don't agree that calling the non-sub variant repeatedly is any slower than using "sub" for updates -- the GPU is still free to overwrite the existing texture in place as long as you are using the same texture id.