Direct3D11 Pixel Shader Interpolation - gradient

I'm trying to create a simple transition between two colors: a quad rendered with the top edge color #2c2b35 and the bottom edge color #1a191f. When I run it through a simple pixel shader that just outputs the interpolated vertex colors, I get visible color banding, and I'm wondering whether there is a way to eliminate that effect without resorting to advanced pixel-shader dithering techniques.
I'm attaching a picture of a render where the left part is rendered in my app and the right part is rendered in Photoshop with the "Dithered" option checked.
Thanks in advance.

Banding is a problem that every rendering engine has to deal with at some point.
Before you start implementing dithering, you should first confirm that you are following a proper sRGB pipeline. (https://en.wikipedia.org/wiki/SRGB)
To be sRGB correct:
The swap chain and any RGBX8 render surfaces have to contain sRGB values. The simplest way is to create a render target view with a XXXX_UNORM_SRGB format.
Textures have to use a UNORM_SRGB format view when appropriate (albedo maps) and plain UNORM when not (normal maps).
Shader computations like lighting have to be done in linear space.
Textures and render targets with the right format will perform the conversion for you on read and write, but for constants you have to do it manually. For example, in #2c2b35 the red channel 0x2c is 44 in sRGB (0.1725 normalized), but its linear value is about 0.025.
To draw the gradient, interpolate in linear space in the shader and let the render target convert the result back to sRGB on write.
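For illustration, a minimal sketch of that constant conversion as plain C (the helper name is mine; in the actual HLSL pixel shader the math is identical, and the *_UNORM_SRGB render target view re-encodes the result to sRGB on write):

```c
#include <math.h>
#include <stdio.h>

/* Standard sRGB transfer function for one channel in [0, 1]. */
static float srgb_to_linear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
}

int main(void)
{
    /* Red channel of #2c2b35 (top edge) and #1a191f (bottom edge). */
    float top    = srgb_to_linear(0x2c / 255.0f);  /* 0.1725 sRGB -> ~0.025 linear */
    float bottom = srgb_to_linear(0x1a / 255.0f);  /* 0.1020 sRGB -> ~0.010 linear */

    /* The gradient is then a plain lerp of the linear values. */
    for (int i = 0; i <= 4; ++i) {
        float t = i / 4.0f;   /* 0 at the top edge, 1 at the bottom */
        printf("t = %.2f  linear red = %.4f\n", t, top + t * (bottom - top));
    }
    return 0;
}
```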

Related

How can I use openGL to draw a graph based on the values of a texture?

I want to take an RGB texture, convert it to YUV and draw a graph based on the UV components of each pixel, essentially, a vectorscope (https://en.wikipedia.org/wiki/Vectorscope).
I have no problem getting openGL to convert the texture to YUV in a fragment shader and even to draw the texture itself (even if it looks goofy because it is in YUV color space), but beyond that I am at a bit of a loss. Since I'm basically drawing a line from one UV coord to the next, using a fragment shader seems horribly inefficient (lots of discarded fragments).
I just don't know enough about what I can do with openGL to know what my next step is. I did do a CPU rendered version that I discarded since it simply wasn't fast enough (100ms for a single 1080p frame). My source image updates at up to 60fps.
Just for clarity, I am currently using openTK. Any help nudging me in a workable direction is very appreciated.
Assuming that the image you want a graph of is the texture, I suggest two steps.
First step, convert the RGB texture to YUV which you've done. Render this to an offscreen framebuffer/texture target instead of a window so you have the YUV texture map for the next step.
Second step, draw a line W x H times, i.e. once for each pixel in the texture. Use instanced rendering (one line drawn N times) rather than actually creating geometry for all of them, because the coordinates for the ends of the line will be dummies anyway.
In the vertex shader, gl_InstanceID will be the number of this line, from 0 to N - 1. Convert it to the 2D texture coordinates of the pixel in the YUV texture that you want to graph. I've never written a vectorscope myself, but presumably you know how to convert the YUV color you get from the texture into 2D/3D coordinates.
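A rough sketch of that idea, assuming a GL 3.3+ context with a loader (e.g. glad or GLEW) already set up; the question uses OpenTK, but the calls map across directly. The uniform names and the U/V-to-clip-space mapping below are mine and will need adapting:

```c
#include <glad/glad.h>   /* or whichever GL loader you use */

/* Vertex shader: one instance per source pixel.  gl_InstanceID picks the
   texel, gl_VertexID (0 or 1) picks "this pixel" or "the next pixel", so
   each instance is a line from one UV coordinate to the next. */
static const char *vectorscope_vs =
    "#version 330 core\n"
    "uniform sampler2D yuvTex;   // the YUV texture rendered in step one\n"
    "uniform ivec2 texSize;      // its width and height\n"
    "void main() {\n"
    "    int i = gl_InstanceID + gl_VertexID;\n"
    "    ivec2 texel = ivec2(i % texSize.x, i / texSize.x);\n"
    "    vec3 yuv = texelFetch(yuvTex, texel, 0).rgb;\n"
    "    // Map U,V (stored in [0,1]) to clip space [-1,1].\n"
    "    gl_Position = vec4(yuv.gb * 2.0 - 1.0, 0.0, 1.0);\n"
    "}\n";

/* Fragment shader: just a constant trace colour. */
static const char *vectorscope_fs =
    "#version 330 core\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = vec4(0.2, 1.0, 0.2, 1.0); }\n";

/* Two dummy vertices per line, instanced once per pixel pair.  No vertex
   buffer is needed, but core profile still requires a VAO to be bound. */
void draw_vectorscope(GLuint program, int width, int height)
{
    glUseProgram(program);
    glDrawArraysInstanced(GL_LINES, 0, 2, width * height - 1);
}
```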

Best way to do real-time per-pixel filtering with OpenGL?

I have an image that needs to be filtered and then displayed on the screen. Below is a simplified example of what I want to do:
The left image is the screen-buffer as it would be displayed on the screen.
The middle is a filter that should be applied to the screen buffer.
The right image is the screen buffer as it should be displayed to the screen.
I am wondering what the best method of achieving this within the context of OpenGL would be.
Fragment Shader?
Modify the pixels one-by-one?
The final version of this code will be applied to a screen that is constantly changing and needs to be per-pixel filtered no matter what the "original" screen-buffer shows.
Edit, Concerns about fragment shader:
- The fragment shader isn't guaranteed to give fragments of size 1x1, so I can't just say "ModifiedImage[x][y].red += Filter[x][y].red" within the fragment shader.
You could blend the images together using OpenGL's blending functions (glBlendFunc, glEnable(GL_BLEND), etc.).
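For example, if the filter is meant to multiply the screen colours (which is what the example images suggest), a minimal sketch might look like this; draw_fullscreen_quad() is a placeholder for however you already draw a textured quad:

```c
#include <GL/gl.h>

void draw_fullscreen_quad(void);   /* placeholder: your existing quad drawing */

void apply_filter(GLuint screen_tex, GLuint filter_tex)
{
    /* 1. Draw the unfiltered screen image. */
    glBindTexture(GL_TEXTURE_2D, screen_tex);
    draw_fullscreen_quad();

    /* 2. Blend the filter on top: result = filter * screen. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ZERO);   /* multiply source by what is already there */
    glBindTexture(GL_TEXTURE_2D, filter_tex);
    draw_fullscreen_quad();
    glDisable(GL_BLEND);
}
```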

How to render grayscale texture without using fragment shaders? (OpenGL)

Is it possible to draw an RGB texture as grayscale without using fragment shaders, using only the fixed-function OpenGL pipeline?
Otherwise I'd have to create two versions of the texture, one in color and one in black and white.
I don't know how to do this with an RGB texture and the fixed function pipeline.
If you create the texture from RGB source data but specify the internal format as GL_LUMINANCE, OpenGL will convert the color data into greyscale for you. Use the standard white material and MODULATE mode.
Hope this helps.
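A minimal sketch of that suggestion (legacy fixed-function GL, which is what the question assumes; it expects the target texture object to already be bound):

```c
#include <GL/gl.h>

void make_greyscale_texture(int width, int height, const unsigned char *rgb_pixels)
{
    /* Upload RGB source data but store it with a luminance internal
       format; be aware GL may simply take the red channel for this
       conversion rather than a weighted sum, so check the result. */
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_LUMINANCE,              /* stored single-channel */
                 width, height, 0,
                 GL_RGB,                    /* source data layout */
                 GL_UNSIGNED_BYTE, rgb_pixels);

    /* White "material" + MODULATE leaves the greyscale texture unchanged. */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glColor3f(1.0f, 1.0f, 1.0f);
}
```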
No. Texture environment combiners are not capable of performing a dot product without also doing the scale/bias operation. That is, they always pretend that [0, 1] values are encoded as [-1, 1] values. Since you can't turn that off, you can't do a proper dot product.

Setting neighbor fragment color via GLSL

I need to setup a GLSL fragment shader to change the color of a fragment other than the one currently being processed. Since that may not seem desirable, I'll provide a very brief context.
The project utilizes a render pass whereby a given model is drawn into an FBO with unique colors that correspond to UV coordinates in the texture map. These colors are then sampled and converted to image coordinates so that the texture map for the model can be updated based on what's visible to the camera. Essentially:
Render model to FBO
For each FBO pixel
1. sample secondary texture based on FBO pixel position
2. convert color at current pixel to image coordinate for the model's texture map
3. update model's texture with sampled secondary texture at calculated coordinate
End loop
The problem is that the current implementation is very CPU bound, so I'm reading the pixels out of the FBO and then manipulating them. Ideally, since I already have the color of the fragment to work with in the fragment shader, I want to just tack on the last few steps to the process and keep everything on the GPU.
The specific issue I'm having is that I don't quite know how (or if it's even possible) to have a fragment shader set the color of a fragment that it is not processing. If I can't work something up by using an extra large FBO and just offsetting the fragment that I want to set the color on, can I work something up that writes directly into a texture?
Any help/advice?
It's not possible to have a fragment shader write to anywhere other than the fragment it is processing. What you probably want to do is ping pong rendering.
In your code, you'd have three textures, matching your listed tasks:
the secondary texture
the source model texture map
the destination model texture map
At a first run, you'd use (1) and (2) as source textures, to draw to (3). Next time through the loop you'd use (1) and (3) to write to (2). Then you'd switch back to using (1) and (2) to write to (3). And so on.
So (2) and (3) are connected with framebuffer objects with the textures supplied as the colour buffer in place of a renderbuffer.
NVidia authored the GL_NV_texture_barrier extension in 2009 that allows you to compact (2) and (3) into a single texture, provided you are explicit about the dividing line between where you're reading and where you're writing. I don't have the expertise to say how widely available it is.
Without that barrier, attempting to read and write the same texture (which FBOs make possible) produces undefined results in OpenGL. The underlying hardware issues are related to caching and multisampling.
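A minimal sketch of the ping-pong bookkeeping, assuming the textures and FBOs already exist (each map_fbo[i] has map_tex[i] attached as GL_COLOR_ATTACHMENT0); all names here are placeholders:

```c
#include <glad/glad.h>   /* or whichever GL loader you use */

GLuint map_tex[2];       /* (2) and (3): the two copies of the model texture map */
GLuint map_fbo[2];       /* one FBO per copy, its texture as the colour attachment */
static int src = 0, dst = 1;

void update_texture_map(GLuint secondary_tex, GLuint program)
{
    glBindFramebuffer(GL_FRAMEBUFFER, map_fbo[dst]);   /* write into the destination copy */
    glUseProgram(program);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, secondary_tex);       /* (1) the secondary texture */
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, map_tex[src]);        /* (2) the current source copy */

    /* ... issue the draw that performs the update pass here ... */

    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    /* Swap roles so the next pass reads what was just written. */
    int tmp = src; src = dst; dst = tmp;
}
```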
As far as I understand, you need a scatter operation (uniform FBO pixel space -> arbitrary mesh-UV texture destination) to be performed in OpenGL. There is a way to do this; it's not as simple as you might expect, and not especially fast, but I can't find a better one:
Run a draw call of type GL_POINTS with a vertex count equal to the width*height of your source FBO.
Attach the model texture as the destination FBO's color attachment, with no depth attachment.
In a vertex shader, compute the original screen coordinate by using gl_VertexID.
Sample from the source FBO texture to get color and target position (assuming your original FBO surface was a texture). Assign a proper gl_Position and pass the target color to the fragment shader.
In a fragment shader, just copy the color to the output.
This will make the GPU go through each of your original FBO pixels and scatter the computed colors over the destination texture.
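A rough sketch of that scatter pass in C + GLSL. How the colour and the target UV are packed into each source FBO pixel is up to you, so the decode below is purely illustrative:

```c
#include <glad/glad.h>   /* or whichever GL loader you use */

/* Vertex shader: gl_VertexID indexes the source FBO pixel; the sampled
   value supplies both the target UV and the colour to write. */
static const char *scatter_vs =
    "#version 330 core\n"
    "uniform sampler2D srcTex;   // the original FBO, read back as a texture\n"
    "uniform ivec2 srcSize;      // its width and height\n"
    "out vec4 vColor;\n"
    "void main() {\n"
    "    ivec2 texel = ivec2(gl_VertexID % srcSize.x, gl_VertexID / srcSize.x);\n"
    "    vec4 s = texelFetch(srcTex, texel, 0);\n"
    "    vec2 uv = s.rg;                    // illustrative: target UV packed in RG\n"
    "    vColor = vec4(s.b, s.b, s.b, 1.0); // illustrative: colour packed in B\n"
    "    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);  // UV -> clip space\n"
    "}\n";

/* Fragment shader: just copy the colour through. */
static const char *scatter_fs =
    "#version 330 core\n"
    "in vec4 vColor;\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = vColor; }\n";

/* One GL_POINTS draw with width*height vertices.  No vertex buffer is
   needed, but an (empty) VAO must still be bound in core profile; the
   model-texture FBO is assumed to be bound as the draw target. */
void scatter_pass(GLuint program, int width, int height)
{
    glUseProgram(program);
    glDrawArrays(GL_POINTS, 0, width * height);
}
```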

Texture Image processing on the GPU?

I'm rendering a certain scene into a texture and then I need to process that image in some simple way. How I'm doing this now is to read the texture using glReadPixels() and then process it on the CPU. This is however too slow so I was thinking about moving the processing to the GPU.
The simplest setup to do this I could think of is to display a simple white quad that takes up the entire viewport in an orthogonal projection and then write the image processing bit as a fragment shader. This will allow many instances of the processing to run in parallel as well as to access any pixel of the texture it requires for the processing.
Is this a viable course of action? Is it common to do things this way?
Is there maybe a better way to do it?
Yes, this is the usual way of doing things.
Render something into a texture.
Draw a fullscreen quad with a shader that reads that texture and does some operations.
Simple effects (e.g. grayscale, color correction, etc.) can be done by reading one pixel and outputting one pixel in the fragment shader. More complex operations (e.g. swirling patterns) can be done by reading one pixel from an offset location and outputting one pixel. Even more complex operations can be done by reading multiple pixels.
In some cases multiple temporary textures would be needed. E.g. blur with high radius is often done this way:
Render into a texture.
Render into another (smaller) texture, with a shader that computes each output pixel as average of multiple source pixels.
Use this smaller texture to render into another small texture, with a shader that does proper Gaussian blur or something.
... repeat
In all of the above cases though, each output pixel should be independent of other output pixels. It can use one or more input pixels just fine.
An example of a processing operation that does not map well is the summed-area table, where each output pixel depends on an input pixel and on the value of an adjacent output pixel. Still, it is possible to do those kinds of operations on the GPU (example pdf).
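As a concrete example of the one-pixel-in, one-pixel-out case, a greyscale pass could look like the fragment shader below (the scene is assumed to have been rendered into sceneTex, and vUV comes from whatever vertex shader draws your fullscreen quad):

```c
/* Fragment shader applied while drawing the fullscreen quad. */
static const char *greyscale_fs =
    "#version 330 core\n"
    "uniform sampler2D sceneTex;  // texture the scene was rendered into\n"
    "in vec2 vUV;                 // quad texture coordinates\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    vec3 c = texture(sceneTex, vUV).rgb;\n"
    "    float g = dot(c, vec3(0.299, 0.587, 0.114));  // luma weights\n"
    "    fragColor = vec4(vec3(g), 1.0);\n"
    "}\n";
```

Multi-tap effects such as the blur passes above work the same way, just with several texture() reads at offset coordinates.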
Yes, it's the normal way to do image processing. The color of the quad doesn't really matter if you'll be setting the color for every pixel. Depending on your application, you might need to be careful about pixel sampling issues (i.e. ensuring that you sample exactly the correct pixel of the source texture, rather than halfway between two pixels).