Qt 5 with QOpenGLTexture and 16-bit integer images - c++

For a while I've been using RGB images at 32-bit floating-point precision in textures with QOpenGLTexture, and I had no trouble with it.
Originally those images have an unsigned short data type, and I'd like to keep this data type when sending the data to OpenGL (BTW, does it actually save some memory at all to do that?). After many attempts, I can't get QOpenGLTexture to display the image. All I end up with is a black image.
Below is how I set up QOpenGLTexture. The parts that used floating point, and that have worked so far, are commented out. The part that assumes 16-bit unsigned integer images is right below them, uncommented. I'm using OpenGL 3.3, GLSL 330, core profile, on a MacBook Pro Retina with Iris graphics.
QOpenGLTexture *oglt = new QOpenGLTexture(QOpenGLTexture::Target2D);
oglt->setMinificationFilter(QOpenGLTexture::NearestMipMapNearest);
oglt->setMagnificationFilter(QOpenGLTexture::NearestMipMapNearest);
//oglt->setFormat(QOpenGLTexture::RGB32F); // works
oglt->setFormat(QOpenGLTexture::RGB16U);
oglt->setSize(naxis1, naxis2);
oglt->setMipLevels(10);
//oglt->allocateStorage(QOpenGLTexture::RGB, QOpenGLTexture::Float32); // works
//oglt->setData(QOpenGLTexture::RGB, QOpenGLTexture::Float32, tempImageRGB.data); // works
oglt->allocateStorage(QOpenGLTexture::RGB_Integer, QOpenGLTexture::UInt16);
oglt->setData(QOpenGLTexture::RGB_Integer, QOpenGLTexture::UInt16, tempImageRGB.data);
So, in just these lines above, is there something wrong?
My data in tempImageRGB.data are within [0, 65535] when I use UInt16. When I use QOpenGLTexture::Float32, the values in tempImageRGB.data are already normalized to [0, 1].
Then, here is my fragment shader:
#version 330 core
in mediump vec2 TexCoord;
out vec4 color;
uniform mediump sampler2D ourTexture;
void main()
{
mediump vec3 textureColor = texture(ourTexture, TexCoord).rgb;
color = vec4(textureColor, 1.0);
}
What am I missing?

It seems I fixed the problem simply by not using NearestMipMapNearest as the magnification filter. Things work if I only use it for minification. In general that makes sense, but I don't understand why I had no problem using NearestMipMapNearest for both magnification and minification in the floating-point case.
So, the code works by simply changing 'sampler2D' to 'usampler2D' in the shader and by changing 'setMagnificationFilter(QOpenGLTexture::NearestMipMapNearest)' into 'setMagnificationFilter(QOpenGLTexture::Nearest)'. The minification filter does not need to change. In addition, I did not need to set the mip levels explicitly (it works with and without), so I can remove oglt->setMipLevels(10).
To be clear, here is the corrected code:
QOpenGLTexture *oglt = new QOpenGLTexture(QOpenGLTexture::Target2D);
oglt->setMinificationFilter(QOpenGLTexture::NearestMipMapNearest);
oglt->setMagnificationFilter(QOpenGLTexture::Nearest);
//oglt->setFormat(QOpenGLTexture::RGB32F); // works
oglt->setFormat(QOpenGLTexture::RGB16U); // now works with integer images (unsigned)
oglt->setSize(naxis1, naxis2);
//oglt->allocateStorage(QOpenGLTexture::RGB, QOpenGLTexture::Float32); // works
//oglt->setData(QOpenGLTexture::RGB, QOpenGLTexture::Float32, tempImageRGB.data); // works
oglt->allocateStorage(QOpenGLTexture::RGB_Integer, QOpenGLTexture::UInt16); // now works with integer images (unsigned)
oglt->setData(QOpenGLTexture::RGB_Integer, QOpenGLTexture::UInt16, tempImageRGB.data); // now works with integer images (unsigned)
The fragment shader becomes simply:
#version 330 core
in mediump vec2 TexCoord;
out vec4 color;
uniform mediump usampler2D ourTexture;
void main()
{
mediump vec3 textureColor = texture(ourTexture, TexCoord).rgb;
color = vec4(textureColor, 1.0);
}

Related

OpenGL shader fill works with constant color, doesn't work with interpolation, how to debug?

We have code that mostly works for filling polygons on a map, though it draws convex hulls and fills in some areas (it will require tessellation).
The shader is given a set of triangle-fan operations and draws using a hardcoded yellow color (and it works).
Then we try to interpolate based on the value, and it turns black (does not work).
Here is the fragment shader. The values coming in are all 0.0 to 1.0, with minVal = 0.0, maxVal = 1.0, and the colors set to (0,0,1) and (1,0,0).
While I would appreciate knowing the bug, I would much rather know how I can debug it. I need to be able to get the values in the shader and see what is happening. In short, I need some kind of debugging facility for GLSL. I did find NVIDIA Nsight (https://developer.nvidia.com/nsight-graphics) but could not get it working on Linux.
#version 330 core
out vec4 FragColor;
//in vec2 TexCoord;
in float val;
//uniform sampler2D ourTexture;
uniform vec3 minColor;
uniform vec3 maxColor;
uniform float minVal;
uniform float maxVal;
void main()
{
float f = (val - minVal)/ (maxVal-minVal);
//FragColor = vec4(1,1,0,1);//texture(ourTexture, f);
FragColor = vec4(minColor*(1.0-f) + maxColor * f,1.0);
}
It turns out that we were using glUniform4fv to set a color with rgba.
There was no compile or runtime error. These calls do not have an error return that I know of.
The shader also did not generate an error, but the variables minColor and maxColor were not correctly set.
Thus the interpolation was always black.
vec4(minColor*(1.0-f) + maxColor * f,1.0);
There should have been an error for attempting to set an RGBA color into a vec3 variable.
I have found printf-like functions on Stack Overflow that would have allowed viewing this kind of information: Convert floating-point numbers to decimal digits in GLSL.
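For reference, here is a minimal sketch of setting those vec3 uniforms correctly (the program handle and surrounding code are assumptions, not the original application code). While glUniform* has no return value, a size mismatch such as calling glUniform4fv on a vec3 uniform does raise GL_INVALID_OPERATION, which a glGetError() check right after the call can surface:
// 'program' is the linked shader program (assumed name).
glUseProgram(program);
GLint locMin = glGetUniformLocation(program, "minColor");
GLint locMax = glGetUniformLocation(program, "maxColor");
const GLfloat minColor[3] = { 0.0f, 0.0f, 1.0f };   // blue
const GLfloat maxColor[3] = { 1.0f, 0.0f, 0.0f };   // red
glUniform3fv(locMin, 1, minColor);   // vec3 uniforms need glUniform3fv, not glUniform4fv
glUniform3fv(locMax, 1, maxColor);
if (glGetError() != GL_NO_ERROR) {
    // a size/type mismatch between the call and the declared uniform ends up here
}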

flat shading in webGL

I'm trying to implement flat shading in WebGL.
I know that the varying keyword in the vertex shader will interpolate the value and pass it to the fragment shader.
I'm trying to disable interpolation, and I found that the flat keyword can do this, but it seems it cannot be used in WebGL?
flat varying vec4 fragColor;
I always get the error: Illegal use of reserved word 'flat'
Check out WebGL 2. Flat shading is supported.
For the vertex shader:
#version 300 es
in vec4 vPos; //vertex position from application
flat out vec4 vClr;//color sent to fragment shader
void main(){
gl_Position = vPos;
vClr = gl_Position;//for now just using the position as color
}//end main
For the fragment shader:
#version 300 es
precision mediump float;
flat in vec4 vClr;
out vec4 fragColor;
void main(){
fragColor = vClr;
}//end main
I think 'flat' is not supported by the version of GLSL used in WebGL 1. If you want flat shading, there are several options:
1) replicate the polygon's normal in each vertex. It is the simplest solution, but I find it a bit unsatisfactory to duplicate data.
2) in the vertex shader, transform the vertex into view coordinates, and in the fragment shader, compute the normal using the dFdx() and dFdy() derivative functions. These functions are provided by the GL_OES_standard_derivatives extension (you need to check whether it is supported by the GPU before using it); most GPUs, including the ones in smartphones, support the extension.
My vertex shader is as follows:
struct VSUniformState {
mat4 modelviewprojection_matrix;
mat4 modelview_matrix;
};
uniform VSUniformState GLUP_VS;
attribute vec4 vertex_in;
varying vec3 vertex_view_space;
void main() {
vertex_view_space = (GLUP_VS.modelview_matrix * vertex_in).xyz;
gl_Position = GLUP_VS.modelviewprojection_matrix * vertex_in;
}
and in the associated fragment shader:
#extension GL_OES_standard_derivatives : enable
varying vec3 vertex_view_space;
...
vec3 U = dFdx(vertex_view_space);
vec3 V = dFdy(vertex_view_space);
N = normalize(cross(U,V));
... do the lighting with N
I like this solution because it makes the setup code simpler. A drawback may be that it gives more work to the fragment shader (but with today's GPUs it should not be a problem). If performance is an issue, it may be a good idea to measure it.
3) another possibility is to have a geometry shader (if supported) that computes the normals. In general it is slower (but again, it may be a good idea to measure performance; it may depend on the specific GPU).
See also answers to this question:
How to get flat normals on a cube
My implementation is available here:
http://alice.loria.fr/software/geogram/doc/html/index.html
Some online WebGL demos are here (converted from C++ to JavaScript using Emscripten):
http://homepages.loria.fr/BLevy/GEOGRAM/

GLSL alpha test optimized out on NVIDIA

I have a small GLSL fragment shader that is used when I want to render the shadow texture. To get the desired shadow from textures with an alpha channel (like leaves), an alpha test is used.
#version 130
in lowp vec2 UV; //comes from vertex shader, works fine
uniform sampler2D DiffuseTextureSampler; //ok, alpha channel IS present in passed texture
void main(){
lowp vec4 MaterialDiffuseColor = texture( DiffuseTextureSampler, UV ).rgba;
if(MaterialDiffuseColor.a < 0.5)
{
discard;
}
//no color output is required at this stage
}
The shader above works fine on Intel cards (HD 520 and HD Graphics 3000), but on NVIDIA (420M and GTX 660, on Win7 and Linux Mint 17.3 respectively, using the latest drivers, 37x.xx something...) the alpha test does not work. The shader compiles without any errors, so it seems that the NVIDIA optimizer is doing weird stuff.
I've copy-pasted parts of the 'fullbright' shader into the 'shadow' shader and arrived at this odd result, which does work as intended, though a lot of stuff that is useless for shadow rendering is done.
#version 130
in lowp vec2 UV;
in lowp float fade;
out lowp vec4 color;
uniform sampler2D DiffuseTextureSampler;
uniform bool overlayBool;
uniform lowp float translucencyAlpha;
uniform lowp float useImageAlpha;
void main(){
lowp vec4 MaterialDiffuseColor = texture( DiffuseTextureSampler, UV ).rgba;
//notice the added OR :/
if(MaterialDiffuseColor.a < 0.5 || overlayBool)
{
discard;
}
//next line is meaningless
color = vec4(0.0, 0.0, 1.0, min(translucencyAlpha * useImageAlpha, fade));
}
If I remove any uniform, or change something (e.g. replace the min function with some arithmetic, or keep the declaration of a uniform variable but not use it), the alpha test breaks again.
Just outputting the color does not work (i.e. color = vec4(1.0, 1.0, 1.0, 1.0); has no effect).
I tried using the
#pragma optimize (off)
but it did not help.
By the way, when the alpha test is broken, the expression "MaterialDiffuseColor.a == 0.0" is true.
Sorry if it is a dumb question, but what causes such behaviour on NVIDIA cards, and what can I do to avoid it? Thank you.

c++/OpenGL/GLSL, textures with "random" artifacts

I would like to know if someone has experienced this and knows the reason. I'm getting these strange artifacts after using "texture arrays":
http://i.imgur.com/ZfLYmQB.png
(My gpu is AMD R9 270)
Ninja edit: I deleted the rest of the post for readability, since it was just showing code where the problem could have been. Since the project is open source now, I only show the source of the problem (the fragment shader).
Frag:
#version 330 core
layout (location = 0) out vec4 color;
uniform vec4 colour;
uniform vec2 light;
in DATA{
vec4 position;
vec2 uv;
float tid;
vec4 color;
}fs_in;
uniform sampler2D textures[32];
void main()
{
float intensity = 1.0 / length(fs_in.position.xy - light);
vec4 texColor = fs_in.color;
if(fs_in.tid > 0.0){
int tid = int(fs_in.tid - 0.5);
texColor = texture(textures[tid], fs_in.uv);
}
color = texColor; // * intensity;
}
Edit: GitHub repo (sorry if it is missing some libs, I'm having trouble linking them on GitHub): https://github.com/PedDavid/NubDevEngineCpp
Edit: Credit to derhass for pointing out that I was doing something with undefined results (indexing the sampler array with a non-constant, textures[tid]). I now have it working with:
switch(tid){
case 0: texColor = texture(textures[0], fs_in.uv); break;
...
case 31: texColor = texture(textures[31], fs_in.uv); break;
}
Not the prettiest, but fine for now!
I'm getting these strange artifacts after using "texture arrays"
You are not using "texture arrays". You are using an array of texture samplers. From your fragment shader:
#version 330 core
// ...
in DATA{
// ...
float tid;
}fs_in;
//...
if(fs_in.tid > 0.0){
int tid = int(fs_in.tid - 0.5);
texColor = texture(textures[tid], fs_in.uv);
}
What you are trying to do here is not allowed per the GLSL 3.30 specification, which states:
Samplers aggregated into arrays within a shader (using square brackets [ ]) can only be indexed with integral constant expressions (see section 4.3.3 “Constant Expressions”).
Your tid is not a constant, so this will not work.
In GL 4, this constraint has been somewhat relaxed to (quote is from GLSL 4.50 spec):
When aggregated into arrays within a shader, samplers can only be indexed with a dynamically uniform integral expression, otherwise results are undefined.
But your input isn't dynamically uniform either, so you will get undefined results there too.
I don't know what you are trying to achieve, but maybe you can get it done by using array textures, which represent a complete set of images as a single GL texture object and do not impose such constraints when accessing them.
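To illustrate the array-texture suggestion, here is a minimal sketch of creating one on the C++ side (the image size, layer count, and variable names are assumptions). The whole set of images becomes a single GL texture object, sampled in GLSL with a sampler2DArray and a layer coordinate, so no constant-index restriction applies:
GLuint texArray = 0;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
// Allocate 32 layers of width x height RGBA8 in one texture object.
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
// Upload each image into its own layer.
for (int layer = 0; layer < 32; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// In the fragment shader this becomes: uniform sampler2DArray textures; texture(textures, vec3(uv, layer));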
That looks like the shader is rendering whatever random data it finds in memory.
Have you tried checking that glBindTexture(...) is called at the right time (before render) and that the value used (as returned by glGenTextures(...)) is valid?
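As a rough sketch of that check (the program handle and texture-id array below are assumptions): with a uniform sampler2D textures[32], each array element also has to be pointed at a texture unit, and a valid texture must be bound to that unit before the draw call:
// Point each element of the sampler array at a texture unit (once, after linking).
GLint units[32];
for (int i = 0; i < 32; ++i) units[i] = i;
glUseProgram(program);
glUniform1iv(glGetUniformLocation(program, "textures"), 32, units);
// Before rendering, bind a valid texture object to every unit that is used.
for (int i = 0; i < 32; ++i) {
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, textureIds[i]);   // names must come from glGenTextures
}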

LibGDX Overlapping 2D Shadows

I'm working on shadows for a 2D overhead game. Right now, the shadows are just sprites with the color (0,0,0,0.1) drawn on a layer above the tiles.
The problem: When many entities or trees get clumped together, the shadows overlap, forming unnatural-looking dark areas.
I've tried drawing the shadows to a framebuffer and using a simple shader to prevent overlapping, but that led to other problems, including layering issues.
Is it possible to enable a certain blend function for the shadows that prevents "stacking", or is there a better way to use a shader?
If you don't want to deal with sorting issues, I think you could do this with a shader. But every object will have to be either affected by shadow or not. So tall trees could be marked as not shadow receiving, while the ground, grass, and characters would be shadow receiving.
First make a frame buffer with clear color white. Draw all your shadows on it as pure black.
Then make a shadow mapping shader to draw everything in your world. This relies on you not needing all four channels of the sprite's color, because we need one of those channels to mark each sprite as shadow receiving or not. For example, if you aren't using RGB to tint your sprites, we could use the R channel. Or if you aren't fading them in and out, we could use A. I'll assume the latter here:
Vertex shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
varying vec2 v_texCoords;
varying vec2 v_texCoordsShadowmap;
varying vec4 v_color;
uniform mat4 u_projTrans;
void main()
{
v_texCoords = a_texCoord0;
v_color = a_color;
v_color.a = v_color.a * (255.0/254.0); //this is a correction due to color float precision (see SpriteBatch's default shader)
vec4 screenPosition = u_projTrans * a_position; // vec4: mat4 * vec4 yields a vec4
v_texCoordsShadowmap = (screenPosition.xy * 0.5) + 0.5;
gl_Position = screenPosition;
}
Fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoords;
varying vec2 v_texCoordsShadowmap;
varying vec4 v_color;
uniform sampler2D u_texture;
uniform sampler2D u_textureShadowmap;
void main()
{
vec4 textureColor = texture2D(u_texture, v_texCoords);
float shadowColor = texture2D(u_textureShadowmap, v_texCoordsShadowmap).r;
shadowColor = mix(shadowColor, 1.0, v_color.a);
textureColor.rgb *= shadowColor * v_color.rgb;
gl_FragColor = textureColor;
}
These are completely untested and probably have bugs. Make sure you assign the frame buffer's color texture to "u_textureShadowmap". And for all your sprites, set their color's alpha based on how much shadow you want them to have cast on them, which will generally always be 0 or 0.1 (based on the brightness you were using before).
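As a rough sketch of that binding step at the GL level (LibGDX's FrameBuffer and ShaderProgram wrap the equivalent calls, roughly fbo.getColorBufferTexture().bind(1) plus shader.setUniformi("u_textureShadowmap", 1); the variable names below are assumptions):
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, shadowFboColorTexture);  // the frame buffer's color attachment
glActiveTexture(GL_TEXTURE0);                         // unit 0 keeps the sprite texture
glUseProgram(shadowReceiveProgram);
glUniform1i(glGetUniformLocation(shadowReceiveProgram, "u_textureShadowmap"), 1);
glUniform1i(glGetUniformLocation(shadowReceiveProgram, "u_texture"), 0);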
Draw your shadows to an FBO with blending disabled.
Draw the background, e.g. grass.
Draw the shadow texture from the FBO.
Draw all other sprites.
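At the GL level (LibGDX's FrameBuffer and SpriteBatch wrap these calls), the first step looks roughly like this sketch (the FBO handle is an assumption). With blending disabled, each shadow sprite simply replaces the destination with black, so overlapping shadows no longer darken each other:
glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
glDisable(GL_BLEND);                    // overlaps do not accumulate darkness
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// ... draw the shadow sprites in plain black here ...
glEnable(GL_BLEND);
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer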