I'm working on an OpenGL program and I have some performance issues. I'm using OpenGL ES 3.0 on an i.MX6 SoC.
Here is my algorithm:
I get an image from the camera, which is mapped directly to a texture.
Using an FBO, I render to a texture to map the image onto a specific shape.
I do the same thing (with a second FBO) for another image, which is sent via shared memory by another application. This step is performed only when that image is updated, which is only once per second.
I blend these two textures in the default framebuffer to render the result to the screen.
If I perform these three steps separately, everything works well and the screen is updated at 30 FPS. But when I combine the three steps in one program, rendering becomes very slow and I only get 0.5 FPS.
I am wondering whether the GPU on the i.MX6 is powerful enough, but I don't think this is a complex algorithm. I suspect I am doing something the wrong way, but what?
I use three different framebuffers; is that a reasonable approach, or should I use only one?
Can someone give me an answer, clues, anything that can help me? :-)
My images are 1280x1024, RGBA. I also do some conversion from floating-point textures to integer and back to float; this is done to perform bitwise operations on the pixels.
Thanks to @Columbo: the problem came from all those conversions. I now work with floating-point textures and convert only for the bitwise operations, which improves the performance of the algorithm a lot.
Another thing that decreased performance was the texture format. For the first step, the image is 1280x1024 but has only one component (a grayscale image). To keep only the grayscale component and not use too much memory, I worked with a GL_RED texture, but that wasn't a good idea: when I changed it to GL_RGB, the framerate of the render doubled as well.
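Roughly, the format change amounts to the following (a minimal sketch; the sized internal formats and the pixel pointer names are illustrative, not my actual code):

/* before: single-channel texture for the grayscale image */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 1280, 1024, 0,
             GL_RED, GL_UNSIGNED_BYTE, grayPixels);

/* after: plain RGB texture; on the i.MX6 this doubled the framerate */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 1280, 1024, 0,
             GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);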
I am using FFmpeg to read a high-resolution video (6480*1920) and OpenGL to display it.
After decoding, I get three pointers to the Y, U, and V planes.
At first I used sws_scale to convert to RGB and display that, but I found it too slow, so now I work with the YUV data directly. My second attempt was to generate three single-channel textures and convert to RGB in the fragment shader. That is faster, but still cannot reach 60 fps.
I found that the bottleneck is this function: texture(texy, tex_coord.xy). When the texture is large, it costs a lot of time. So instead of calling it three times, my idea is to put Y, U, and V into one single texture, since a texture can have four channels. But I wonder how I can update just one channel of a texture.
I tried the following code, but it doesn't seem to work. Instead of updating a single channel, glTexSubImage2D changes the whole texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frame->width, frame->height, 0,
             GL_RED, GL_UNSIGNED_BYTE, Y);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame->width, frame->height,
                GL_GREEN, GL_UNSIGNED_BYTE, U);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame->width, frame->height,
                GL_BLUE, GL_UNSIGNED_BYTE, V);
So how can I use one texture to pass the YUV data? I also tried gathering the YUV data into one array and then generating the texture from it, but that doesn't help, since building the array itself takes a lot of time.
Any good ideas?
You're approaching this from the wrong angle, because you haven't identified what is actually causing the poor performance. Yes, texture access is a relatively expensive operation, but it is not that expensive; just think about the amount of texture data that gets pushed around in modern games at very high frame rates.
The problem is neither the channel format of the texture nor the GLSL texture() call.
Your problem is this:
(…) high resolution video (6480*1920)
Plain and simple: the dimensions of the frame are outside the range the GPU is comfortable working with. Try breaking the picture down into a set of smaller textures. Using the glPixelStorei parameters GL_UNPACK_ROW_LENGTH, GL_UNPACK_SKIP_PIXELS, and GL_UNPACK_SKIP_ROWS, you can select the rectangle inside your source picture to copy.
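The upload of one tile could look roughly like this (a sketch; the tile variables and the 8-bit single-channel format are placeholders for your own tiling scheme):

/* describe where the tile sits inside the full-width source plane */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ROW_LENGTH, srcWidth);   /* row length of the whole frame */
glPixelStorei(GL_UNPACK_SKIP_PIXELS, tileX);     /* left edge of this tile */
glPixelStorei(GL_UNPACK_SKIP_ROWS, tileY);       /* top edge of this tile */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tileW, tileH,
                GL_RED, GL_UNSIGNED_BYTE, srcPixels);

/* reset so later uploads are not affected */
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);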
You don't have to make several draw calls, by the way; just select the texture inside the shader based on the target fragment position or texture coordinate.
Unfortunately OpenGL doesn't offer a convenient function to determine the sweet spot. For most GPUs these days that sweet spot for dense textures is around 2048 in either direction; go above that and, in my experience, performance tanks for dense textures.
Sparse textures are an entirely different chapter, and irrelevant for this problem.
And just for the sake of completeness: I take it that you don't reinitialize the texture for each and every frame with a call to glTexImage2D. Do that only once at the start of the video, then just update the texture(s).
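In other words, roughly this pattern (a sketch, assuming an 8-bit single-channel plane; tex and plane are placeholder names):

/* once, at the start of the video: allocate the texture storage */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, NULL);

/* every frame: only update the contents */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RED, GL_UNSIGNED_BYTE, plane);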
I am basically trying to do something with the default framebuffer pixmap: I want to blur it when somebody pauses the game. My problem is that even though I use a separate thread for the whole blur operation, the method ScreenUtils.getFrameBufferPixmap has to be called on the rendering thread, and it takes at least one second to return, even on a Nexus 5. Calling it from my blur-processing thread is not possible, since there is no GL context available on any thread other than the rendering thread.
Is there any way to eliminate the stall?
What you're trying to do is take a screenshot, modify it on the CPU, and upload it back to the GPU. There are three problems with this approach:
1. Grabbing the pixels takes a lot of time.
2. Blurring can be executed independently for each pixel, so there is no point doing it on the CPU; the GPU can do it in the blink of an eye.
3. Uploading the texture back still takes some time.
The correct approach is: instead of rendering everything to the screen, render it to an offscreen texture (see offscreen rendering tutorials). Next, draw this texture on a quad the size of your screen, but while drawing it, use a blur shader. There are a number of example blur shaders available; the basic idea is to sample the surroundings of the target pixel and output their average.
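As an illustration of that idea, here is a simple 3x3 box-blur fragment shader written as a GLSL ES source string (a sketch; the uniform and varying names are illustrative, and a real blur would usually use a larger or separable kernel):

static const char *blurFragSrc =
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "uniform vec2 u_texelSize;    /* (1.0/width, 1.0/height) */\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    vec4 sum = vec4(0.0);\n"
    "    for (int x = -1; x <= 1; ++x)\n"
    "        for (int y = -1; y <= 1; ++y)\n"
    "            sum += texture2D(u_texture, v_texCoord + vec2(float(x), float(y)) * u_texelSize);\n"
    "    gl_FragColor = sum / 9.0;\n"
    "}\n";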
In the source for ScreenUtils.java you can see that getFrameBufferPixmap is basically a wrapper around OpenGL's glReadPixels. There isn't much you can do to improve the Java or Libgdx wrapper. This is not the direction OpenGL is optimized for (it's good at pushing data up to the GPU, not at pulling data off).
You might be better off re-rendering your current screen to a (smaller, off-screen) FrameBuffer, and then pulling that down. You could use the GPU to do the blurring this way, too.
I believe the screen's format (e.g., if it is not RGBA8888) may have an impact on read performance.
This isn't Libgdx-specific, so any tips or suggestions for OpenGL in general, like "Making glReadPixel() run faster", should apply.
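One general technique from that direction is to make the readback asynchronous with a pixel buffer object, roughly like this (a plain OpenGL sketch; it needs OpenGL ES 3.0 or desktop GL with PBO support, and the variable names are placeholders):

/* kick off an asynchronous read into a pixel buffer object */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0); /* returns without stalling */

/* one or more frames later, once the GPU has caught up */
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
void *pixels = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                                width * height * 4, GL_MAP_READ_BIT);
/* ... use 'pixels' ... */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);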
I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to a second FBO with a texture attachment to resolve the samples, so that I can read from that texture while doing post-processing shading into the backbuffer (framebuffer 0).
Now I'd like to get correct sRGB output. The problem is that the program's behavior is inconsistent between OS X and Windows, and it also changes depending on the machine: on Windows with an Intel HD 3000 it does not apply the sRGB nonlinearity, but on my other machine with an Nvidia GTX 670 it does. On the Intel HD 3000 under OS X it also applies it.
So this probably means that I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However, I can't seem to find any tutorials that actually tell me when I ought to enable it; they only ever mention that it's dead easy and comes at no performance cost.
I am currently not loading in any textures so I haven't had a need to deal with linearizing their colors yet.
To force the program not to simply spit the linear color values back out, what I have tried is to comment out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively means the setting is enabled for the entire pipeline; I actually redundantly force it back on every frame.
I don't know if this is correct or not. It certainly does apply a nonlinearity to the colors, but I can't tell whether it is being applied twice (which would be bad). It could apply the gamma as I render to my first FBO. It could do it when I blit the first FBO to the second FBO. Why not?
I've gone so far as to take screen shots of my final frame and compare raw pixel color values to the colors I set them to in the program:
I set the input color to RGB(1,2,3) and the output is RGB(13,22,28).
That seems like quite a lot of compression at the low end, and it leads me to question whether the gamma is being applied multiple times.
I have just now gone through the sRGB equation and I can verify that the conversion seems to be applied only once: linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4) - 0.055. Given how large the expansion is for these low color values, it really should be obvious if the sRGB transform were applied more than once.
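For reference, here is that check as a tiny standalone program (not from my application, just the arithmetic):

#include <math.h>
#include <stdio.h>

/* sRGB encode; the linear segment below ~0.0031308 uses the 12.92 factor */
static double srgb_encode(double c) {
    return c <= 0.0031308 ? 12.92 * c : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

int main(void) {
    for (int v = 1; v <= 3; ++v)
        printf("%d -> %.0f\n", v, srgb_encode(v / 255.0) * 255.0);
    /* prints 1 -> 13, 2 -> 22, 3 -> 28 */
    return 0;
}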
So, I still haven't determined the right thing to do. Does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set it once during my GL init routine and forget about it?
When GL_FRAMEBUFFER_SRGB is enabled, all writes to an image with an sRGB image format will assume that the input colors (the colors being written) are in a linear colorspace. Therefore, it will convert them to the sRGB colorspace.
Any writes to images that are not in the sRGB format should not be affected. So if you're writing to a floating-point image, nothing should happen. Thus, you should be able to just turn it on and leave it that way; OpenGL will know when you're rendering to an sRGB framebuffer.
In general, you want to work in a linear colorspace for as long as possible. Only your final render, after post-processing, should involve the sRGB colorspace. So your multisampled framebuffer should probably remain linear (though you should give its colors more precision to preserve accuracy; use GL_RGB10_A2 or GL_R11F_G11F_B10F, or GL_RGBA16F as a last resort).
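A sketch of that setup (the texture name and dimensions are placeholders; the point is that intermediate targets stay linear and the enable is set once):

/* at init: keep intermediate render targets in a linear format */
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);

/* enable the conversion once; writes to non-sRGB images (like the float
   texture above) are unaffected, and only the final pass into an
   sRGB-capable framebuffer gets the linear -> sRGB encode */
glEnable(GL_FRAMEBUFFER_SRGB);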
As for this:
On Windows with the Intel HD 3000 it will not apply the sRGB nonlinearity
That is almost certainly due to Intel sucking at writing OpenGL drivers. If it's not doing the right thing when you enable GL_FRAMEBUFFER_SRGB, that's because of Intel, not your code.
Of course, it may also be that Intel's drivers didn't give you an sRGB image to begin with (if you're rendering to the default framebuffer).
I've recently been reading a little about sRGB formats and how they allow the hardware to automatically perform colour correction for typical monitors. As part of my reading, I see that you can simulate this step with an ordinary texture and a pow function on the return result.
Anyway, I want to ask two questions, as I've never used this feature before. First, can anyone confirm from my screenshot that this is what you would expect to see? The left picture is ordinary RGBA and the right picture uses an sRGB target. There is no ambient lighting in the scene and the model uses bog-standard Phong shading (the light is a spotlight).
The second question is: at what point is the correction actually performed by the hardware? For example, I write frames to an FBO, then later render a screen-sized quad to the back buffer using the FBO's colour buffer (I intend to switch to deferred shading soon). Should the textures attached to the FBO be sRGB, or do I only need an sRGB target for the back buffer? And if you're using sRGB, should ALL texture resources be sRGB?
Note: the following discussion assumes you understand what the sRGB colorspace is, what gamma correction is, what a linear RGB colorspace is, and so forth. This focuses primarily on the OpenGL implementation of the technology.
If you want an in-depth discussion of these subjects, I would suggest looking at my tutorials on HDR/Gamma correction (to understand linear colorspaces and gamma), as well as the tutorial on sRGB images and how they handle gamma correction.
Firstly, can anyone confirm from my screenshot that this is what you would expect to see?
I'm not sure I understand what you mean by that question. If you apply proper gamma correction (which is what sRGB does more or less), you will generally get more detail in darker areas of the image and a "brighter" result.
However, the correct way to think about it is that until you do proper gamma correction all of your images have been wrong. Your images have been too dark, and the gamma correction is now making them the appropriate brightness. Every decision you've made about what colors things should be and how bright lights ought to be has been wrong.
The second question I would like to ask is at what point is the correction actually performed by the hardware?
This is a very different question from the one your "for example" goes on to cover.
sRGB images (remember: a texture contains images, but framebuffers can have images too) can be used in the following contexts:
Transferring data from the user directly to the image (for example, with glTexSubImage2D and so forth). OpenGL assumes that you are providing data that is already in the sRGB colorspace. So there is no translation of the data when you upload it. This is done because it makes the most sense: generally, any image you get from an artist will be in the sRGB colorspace unless the artist took great pains to put it in some other colorspace. Virtually every image editor works directly in sRGB.
Reading values in shaders via samplers (ie: accessing a texture). This is quite simple as well. OpenGL knows that the texel data in the image is in the sRGB colorspace. OpenGL assumes that the shader wants linear RGB color data. Therefore, all attempts to sample from a texture with an sRGB image format will result in the sRGB->lRGB conversion. Which is free, btw.
And on the plus side, if you've got GL 3.x+ capable hardware, you'll almost certainly get filtering done in the linear colorspace, where it makes sense. sRGB is a non-linear colorspace, so linear interpolation of sRGB values is always wrong.
Storing values output from the fragment shader to the framebuffer image(s). This is where it gets slightly complicated. Even if the framebuffer image you're rendering to is in the sRGB colorspace, that's not enough to force conversion. You must explicitly glEnable(GL_FRAMEBUFFER_SRGB); this tells OpenGL that the values you're writing from your fragment shader are linear colorspace values. Therefore, OpenGL needs to convert them to sRGB when storing them in the image.
Again, if you've got GL 3.x+ hardware, you'll almost certainly get blending in the linear colorspace. That is, OpenGL will read the sRGB value from the framebuffer, convert it to a linear RGB value, blend it with the incoming linear RGB value (the one you wrote from your shader), convert the blended value into the sRGB colorspace and store it. Again, that's what you want; blending in the sRGB colorspace is always bad.
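Put together, those three contexts look roughly like this in code (a sketch; the texture name, sizes, and data pointer are placeholders):

/* 1. Upload: the artist's data is assumed to already be sRGB-encoded,
      so it is stored as-is in an sRGB-format texture. */
glBindTexture(GL_TEXTURE_2D, diffuseTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, srgbPixels);

/* 2. Sampling: nothing special to do; texture() in the shader returns
      values already converted from sRGB to linear. */

/* 3. Writing: values are converted from linear to sRGB only if the target
      image has an sRGB format AND this is enabled. */
glEnable(GL_FRAMEBUFFER_SRGB);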
Now that we understand that, let's look at your example.
For example I am writing frames to an FBO, then later I'm rendering a screen-sized quad to the back buffer using an FBO colour buffer (I'm intending to switch to deferred shading soon).
The problem with this is that you're not asking the right questions. What you need to keep in mind, especially as you move into deferred rendering, is this question:
Is this linear RGB or not?
In general, you should hold off on storing any intermediate data in gamma-correct space for as long as possible. So any intermediate buffers (ie: where you accumulate your lights) should not be sRGB.
This isn't about the cost of the conversion; it's really about what you're doing. If you're doing deferred rendering, then you're probably also doing HDR lighting and so forth. So your light accumulation buffer needs to be floating-point. And float buffers are always linear; there's no reason for them to not be linear.
Your final image, the default framebuffer, must be sRGB if you want to take advantage of free gamma correction (and you do). If you do all your work in HDR float buffers, and then tone-map the result down for the final display, you should write that to an sRGB image.
I have an application where I need to take the average intensity of an image, for around one million images. It "feels" like a job for a GPU fragment shader, but fragment shaders do per-pixel local computations, while image averaging is a global operation.
One approach I considered is to load the image into a texture, apply a 2x2 box blur, write the result into an N/2 x N/2 texture, and repeat until the output is 1x1. However, this takes log n applications of the shader.
Is there a way to do it in one pass? Or should I just break down and use CUDA/OpenCL?
The summation operation is a specific case of the "reduction," a standard operation in CUDA and OpenCL libraries. A nice writeup on it is available on the cuda demos page. In CUDA, Thrust and CUDPP are just two examples of libraries that provide reduction. I'm less familiar with OpenCL, but CLPP seems to be a good library that provides reduction. Just copy your color buffer to an OpenGL pixel buffer object and use the appropriate OpenGL interoperability call to make that pixel buffer's memory accessible in CUDA/OpenCL.
If it must be done with the OpenGL API (as the original question requires), the solution is to render to a texture, generate a full mipmap chain for that texture, and read back the 1x1 level. You have to set the filtering correctly (bilinear is appropriate, I think), and it should get close to the right answer, modulo precision error.
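A sketch of that in plain desktop OpenGL (the texture name and dimensions are placeholders, and the texture is assumed to already contain the rendered image):

glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);

/* the top (1x1) mip level holds the average of the whole image */
int topLevel = (int)floor(log2((double)(width > height ? width : height)));
float avg[4];
glGetTexImage(GL_TEXTURE_2D, topLevel, GL_RGBA, GL_FLOAT, avg);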
My gut tells me to attempt your implementation in OpenCL. You can optimize for your image size and graphics hardware by breaking up the images into bespoke chunks of data that are then summed in parallel. Could be very fast indeed.
Fragment shaders are great for convolutions, because each result is written to gl_FragColor, so the model fits. Ultimately, though, you would have to loop over every pixel in the texture and sum the results, which are then read back by the main program. Generating image statistics is perhaps not what the fragment shader was designed for, and it's not clear that a major performance gain is to be had, since it's not guaranteed that a particular buffer is located in GPU memory.
It sounds like you may be applying this algorithm to a real-time motion detection scenario, or some other automated feature detection application. It may be faster to compute some statistics from a sample of pixels rather than the entire image and then build a machine learning classifier.
Best of luck to you in any case!
You don't need CUDA if you'd like to stick to GLSL. As with the CUDA solution mentioned here, it can be done in a fragment shader quite straightforwardly. However, you need about log(resolution) draw calls.
Just set up a shader that takes 2x2 pixel samples from the original image and outputs their average. The result is an image with half the resolution in both axes. Repeat until the image is 1x1 px.
Some considerations: use GL_FLOAT luminance textures if available, to get a more precise sum. Use glViewport to quarter the rendering area at each stage. The result then ends up in the top-left pixel of your framebuffer.
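The driving loop on the CPU side could look roughly like this (a sketch; the two FBOs/textures, the downsampling shader program, and drawFullscreenQuad() are placeholders for your own setup):

int w = imageWidth, h = imageHeight;
GLuint srcTex = texA;                      /* texA holds the original image */
GLuint dstFbo = fboB;                      /* fboB renders into texB */
glUseProgram(downsampleProgram);           /* averages 2x2 input texels */
while (w > 1 || h > 1) {
    w = (w + 1) / 2;
    h = (h + 1) / 2;
    glBindFramebuffer(GL_FRAMEBUFFER, dstFbo);
    glViewport(0, 0, w, h);                /* quarter the render area */
    glBindTexture(GL_TEXTURE_2D, srcTex);
    drawFullscreenQuad();
    /* swap roles for the next pass */
    srcTex = (srcTex == texA) ? texB : texA;
    dstFbo = (dstFbo == fboA) ? fboB : fboA;
}
/* the 1x1 average is now in the top-left pixel; read it back, e.g. with glReadPixels */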