There are two textures in memory: an RGB texture and a single-channel texture that is its alpha mask. Is there any way to combine the two in OpenGL?
I'm currently combining the two in OpenCV and then passing the result to OpenGL for rendering, but the channel combination in OpenCV is too slow and I'm looking for alternatives.
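One way to avoid the CPU-side merge entirely is to upload the RGB image and the mask as two separate textures and let the fragment shader combine them at draw time. A minimal sketch of the C-side setup (the program and texture handles and the uniform names are placeholders, not anything from the question):
glUseProgram(prog);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, rgbTexture);            /* GL_RGB8 texture  */
glUniform1i(glGetUniformLocation(prog, "rgbTex"), 0);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, maskTexture);           /* GL_R8 alpha mask */
glUniform1i(glGetUniformLocation(prog, "maskTex"), 1);

/* In the fragment shader the combination is then a single line, e.g.
 *   fragColor = vec4(texture(rgbTex, uv).rgb, texture(maskTex, uv).r);
 */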
Is it possible to detect when a format has a single channel in HLSL or GLSL? Or, just as good, is it possible to extract a greyscale color from such a texture without knowing whether it has one channel or four?
When sampling from texture formats such as DXGI_FORMAT_R8_*/GL_R8 or DXGI_FORMAT_BC4_UNORM, I am getting pure red RGBA values (g,0,0,1). This would not be a problem if I knew (within the shader) that the texture only had the single channel, as I could then flood the other channels with that red value. But doing anything of this nature would break the logic for color textures, requiring a separate compiled version for the grey sampling (for every texture slot).
Is it not possible to make efficient use of grey textures in modern shaders without specializing the shader for them?
The only solution I can come up with at the moment would be to detect the grey texture on the CPU side and set a macro on the GPU side that selects a different compiled version of the shader for every texture slot. With 8 texture slots, each of which may independently be grey or color, that adds up to 2^8 = 256 compiled versions of every shader that wants to support grey inputs. And that's not counting the other macro-like switches that actually make sense being there.
Just to be clear, I do know that I can load these textures into GPU memory as 4-channel greyscale textures and go from there. But doing that uses 4x the memory, and I would rather use that memory to load 3 more textures.
In OpenGL there are two ways to achieve what you're looking for:
Legacy: the INTENSITY and LUMINANCE texture formats will, when sampled, result in vec4(I, I, I, I) or vec4(L, L, L, 1) respectively.
Modern: use a swizzle mask to apply user-defined channel swizzling per texture with glTexParameteriv and GL_TEXTURE_SWIZZLE_RGBA, for example as sketched below.
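For example (the brace initializer has to be an actual array in C, and greyTexture is a placeholder handle):
/* Make a single-channel GL_R8 texture appear as (r, r, r, 1) to any shader that samples it. */
GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glBindTexture(GL_TEXTURE_2D, greyTexture);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);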
In DirectX 12 you can use component mapping during the creation of a ShaderResourceView.
I want to take an RGB texture, convert it to YUV, and draw a graph based on the UV components of each pixel; essentially, a vectorscope (https://en.wikipedia.org/wiki/Vectorscope).
I have no problem getting OpenGL to convert the texture to YUV in a fragment shader, and even to draw the texture itself (even if it looks goofy because it is in YUV color space), but beyond that I am at a bit of a loss. Since I'm basically drawing a line from one UV coordinate to the next, using a fragment shader seems horribly inefficient (lots of discarded fragments).
I just don't know enough about what I can do with OpenGL to know what my next step is. I did write a CPU-rendered version, but I discarded it because it simply wasn't fast enough (100 ms for a single 1080p frame), and my source image updates at up to 60 fps.
Just for clarity, I am currently using OpenTK. Any help nudging me in a workable direction is much appreciated.
Assuming that the image you want a graph of is the texture, I suggest two steps.
First step: convert the RGB texture to YUV, which you've already done. Render this to an offscreen framebuffer/texture target instead of the window, so you have the YUV image as a texture for the next step.
Second step: draw a line W x H times, i.e. once for each pixel in the texture. Use instanced rendering (one line drawn N times) rather than actually creating geometry for all of them, because the coordinates for the ends of the line will just be dummies.
In the vertex shader, gl_InstanceID will be the number of this line, from 0 to N - 1. Convert it to 2D texture coordinates for the pixel in the YUV texture that you want to graph. I've never written a vectorscope myself, but presumably you know how to convert the YUV color you fetch from the texture into 2D/3D coordinates.
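A rough sketch of that second step, assuming step one wrote Y, U and V to the red, green and blue channels of the offscreen texture, with U and V remapped into [0, 1] (all handles and names below are placeholders):
/* Vertex shader for the instanced lines; no vertex buffer is needed (attributeless
 * rendering), only a bound VAO. Instance i draws the line from pixel i to pixel i+1,
 * matching "drawing a line from one UV coord to the next". */
const char *vectorscope_vs =
    "#version 330 core\n"
    "uniform sampler2D yuvTex;   // YUV image from step one\n"
    "uniform ivec2 texSize;      // width, height of that image\n"
    "void main() {\n"
    "    int i = gl_InstanceID + gl_VertexID;          // gl_VertexID is 0 or 1\n"
    "    ivec2 texel = ivec2(i % texSize.x, i / texSize.x);\n"
    "    vec3 yuv = texelFetch(yuvTex, texel, 0).rgb;\n"
    "    // Place this end of the line at the pixel's (U, V), remapped to clip space.\n"
    "    gl_Position = vec4(yuv.yz * 2.0 - 1.0, 0.0, 1.0);\n"
    "}\n";

/* Draw side: one 2-vertex line per instance, one instance per adjacent pixel pair. */
glUseProgram(vectorscopeProgram);
glBindVertexArray(emptyVao);
glDrawArraysInstanced(GL_LINES, 0, 2, width * height - 1);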
I'm porting some OpenGL code from a technical paper to use with Metal. In it, they use a render target with only one channel - a 16-bit float buffer. But then they set blending operations on it like this:
glBlendFunci(1, GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
With only one channel, does that mean that with OpenGL, the target defaults to being an alpha channel?
Does anyone know if it is the same with Metal? I am not seeing the results I expect and I am wondering if Metal differs, or if there is a setting that controls how single-channel targets are treated with regards to blending.
In OpenGL, image formats are labeled explicitly with their channels. There is only one one-channel color format: GL_R* (with various bitdepths and types), that is, red-only. And while texture swizzling can make the red channel appear in other channels of a texture fetch, that doesn't apply to framebuffer writes.
Furthermore, that blend function doesn't actually use the destination alpha. With (GL_ZERO, GL_ONE_MINUS_SRC_ALPHA) the result is dst = 0 * src + (1 - src_alpha) * dst, and the source alpha is whatever value the fragment shader output for it. So the fact that the framebuffer doesn't store an alpha is essentially irrelevant.
I can get the histogram of an OpenGL texture using the glGetHistogram() function.
Similar to the OpenCV histogram function, where a second OpenCV matrix can be given as a mask: I have an OpenGL texture and a binary mask (either as an alpha channel or as a separate texture), and I would like to get a histogram of all the pixels in the image that are not masked.
Is this possible somehow?
glGetHistogram is deprecated since OpenGL 3.1 anyway.
Using compute shaders or occlusion queries would be a better idea.
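A minimal sketch of the compute-shader route, assuming a single-channel (greyscale) image and a single-channel binary mask, with the counts accumulated in a shader storage buffer (all handles and names are placeholders; for an RGB image you would bin a luminance value or each channel separately):
/* GLSL 4.30 compute shader: one invocation per pixel, counting only unmasked texels. */
const char *histogram_cs =
    "#version 430\n"
    "layout(local_size_x = 16, local_size_y = 16) in;\n"
    "uniform sampler2D img;    // image to histogram\n"
    "uniform sampler2D mask;   // binary mask: 0 = ignore, 1 = count\n"
    "layout(std430, binding = 0) buffer Hist { uint bins[256]; };\n"
    "void main() {\n"
    "    ivec2 p = ivec2(gl_GlobalInvocationID.xy);\n"
    "    if (any(greaterThanEqual(p, textureSize(img, 0)))) return;\n"
    "    if (texelFetch(mask, p, 0).r < 0.5) return;   // masked-out pixel\n"
    "    uint bin = uint(texelFetch(img, p, 0).r * 255.0 + 0.5);\n"
    "    atomicAdd(bins[bin], 1u);\n"
    "}\n";

/* C side: bind the textures and a zero-initialised 256 * sizeof(GLuint) SSBO, then dispatch. */
glUseProgram(histogramProgram);
glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, imageTex);
glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, maskTex);
glUniform1i(glGetUniformLocation(histogramProgram, "img"), 0);
glUniform1i(glGetUniformLocation(histogramProgram, "mask"), 1);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, histogramSsbo);
glDispatchCompute((width + 15) / 16, (height + 15) / 16, 1);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);   /* before reading the counts back */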
I am loading bitmaps with OpenGL to texture a 3d mesh. Some of these bitmaps have alpha channels (transparency) for some of the pixels and I need to figure out the best way to
obtain the values of transparency for each pixel
and
render them with the transparency applied
Does anyone have a good example of this? Does OpenGL support this?
First of all, it's generally best to convert your bitmap data to 32-bit so that each channel (R, G, B, A) gets 8 bits. When you upload your texture, specify a 32-bit format.
Then when rendering, you'll need to glEnable(GL_BLEND); and set the blend function, e.g. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This tells OpenGL to mix the RGB of the texture with that of the background, using the alpha of your texture.
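Putting those two pieces together, the upload and the blend state might look roughly like this (the texture handle and pixel pointer are placeholders):
/* Upload the converted 32-bit RGBA data. */
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);

/* Standard "source over" alpha blending while rendering. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);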
If you're doing this to 3D objects, you might also want to turn off back-face culling (so that you see the back of the object through the front) and sort your triangles back-to-front (so that the blends happen in the correct order).
If your source bitmap is 8-bit (i.e. using a palette with one colour specified as the transparency mask), then it's probably easiest to convert it to RGBA, setting the alpha value to 0 wherever the colour matches your transparency mask.
Some hints to make things (maybe) look better:
Your alpha channel is going to be an all-or-nothing affair (either 0x00 or 0xff), so apply some blur algorithm to get softer edges, if that's what you're after.
For texels (texture pixels) with an alpha of zero (fully transparent), replace the RGB colour with that of the closest non-transparent texel. Then, when texture coordinates are interpolated, samples won't be blended towards the original transparency colour from your BMP.
If your pixmaps are 8-bit single-channel, they are either greyscale or use a palette. What you first need to do is convert the pixmap data into RGBA format. For this, allocate a buffer large enough to hold a 4-channel pixmap with the dimensions of the original file. Then, for each pixel of the pixmap, use that pixel's value as an index into the palette (lookup table) and put that colour value into the corresponding pixel of the RGBA buffer. Once finished, upload to OpenGL using glTexImage2D, as sketched below.
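A rough sketch of that CPU-side expansion (the palette layout, the variable names and the transparent index are assumptions; adjust them to however your loader stores the data; needs <stdlib.h> for malloc/free):
/* indices: width*height bytes of palette indices; palette: 256 packed RGBA entries. */
unsigned char *rgba = malloc((size_t)width * height * 4);
for (int i = 0; i < width * height; ++i) {
    const unsigned char *entry = &palette[indices[i] * 4];
    rgba[i * 4 + 0] = entry[0];   /* R */
    rgba[i * 4 + 1] = entry[1];   /* G */
    rgba[i * 4 + 2] = entry[2];   /* B */
    rgba[i * 4 + 3] = (indices[i] == transparentIndex) ? 0 : 255;   /* A from the mask colour */
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba);
free(rgba);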
If your GPU supports fragment shaders (very likely), you can do that LUT transformation in the shader instead: upload the 8-bit pixmap as a GL_RED or GL_LUMINANCE 2D texture, and upload the palette as a 1D GL_RGBA texture (sampled with GL_NEAREST so that palette entries don't get blended). Then in the fragment shader:
uniform sampler2D texture;      // the 8-bit index pixmap (GL_RED / GL_LUMINANCE)
uniform sampler1D palette_lut;  // the 256-entry RGBA palette

void main()
{
    float palette_index = texture2D(texture, gl_TexCoord[0].st).r;
    vec4 color = texture1D(palette_lut, palette_index);
    gl_FragColor = color;
}
Blended rendering conflicts with the Z-buffer algorithm, so you must sort your geometry back-to-front for things to look right. As long as this is done per object as a whole it is rather simple, but it becomes tedious if you need to sort the faces of a mesh for each and every frame. One way to avoid this is to break meshes down into convex submeshes (of course, a mesh that is already convex cannot be broken down further). Then use the following method (a C sketch follows after the pseudocode):
Enable face culling
for convex_submesh in sorted(meshes, far to near):
    set face culling to front faces (i.e. the backside gets rendered)
    render convex_submesh
    set face culling to back faces (i.e. the frontside gets rendered)
    render convex_submesh again
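In OpenGL calls that method might look roughly like this (drawMesh and the pre-sorted submeshes array are placeholders for whatever your renderer uses):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_CULL_FACE);

/* Submeshes assumed already sorted far-to-near. */
for (int i = 0; i < submeshCount; ++i) {
    glCullFace(GL_FRONT);      /* pass 1: only the back side gets rendered  */
    drawMesh(&submeshes[i]);
    glCullFace(GL_BACK);       /* pass 2: only the front side gets rendered */
    drawMesh(&submeshes[i]);
}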