Qt3D/C++ - How to use a frame graph to achieve outlines?

I made a simple viewer that allows you to import an .obj file. What I want to achieve is: when the user selects the model, shaders draw the model's outline in a different color.
I represent the .obj file as a QEntity with a custom QMaterial, a custom QObjectPicker, and a QMesh.
With my custom QMaterial I implemented simple flat shading (coloring based on the face normal):
// My custom QMaterial:
explicit CustomizedMaterial(Qt3DCore::QNode *parent = nullptr) : QMaterial(parent)
{
    // Create effect, technique, render pass and shader
    Qt3DRender::QEffect *effect = new Qt3DRender::QEffect();
    Qt3DRender::QTechnique *gl3Technique = new Qt3DRender::QTechnique();
    Qt3DRender::QRenderPass *gl3Pass = new Qt3DRender::QRenderPass();
    Qt3DRender::QShaderProgram *glShader = new Qt3DRender::QShaderProgram();
    QByteArray ver(
        "#version 330\n"
        "out vec3 vViewPos;\n"
        "in vec3 vertexPosition;\n"
        "in vec3 vertexNormal;\n"
        "uniform mat4 modelView;\n"
        "uniform mat3 modelViewNormal;\n"
        "uniform mat4 mvp;\n"
        "void main()\n"
        "{\n"
        "    vec4 pos = vec4(vertexPosition, 1.0);\n"
        "    vec4 mpos = modelView * pos;\n"
        "    gl_Position = mvp * vec4(vertexPosition, 1.0);\n"
        "    vViewPos = -mpos.xyz;\n"
        "}\n");
    QByteArray frag(
        "#version 330\n"
        "vec3 normals(vec3 pos)\n"
        "{\n"
        "    vec3 fdx = dFdx(pos);\n"
        "    vec3 fdy = dFdy(pos);\n"
        "    return normalize(cross(fdx, fdy));\n"
        "}\n"
        "in vec3 vViewPos;\n"
        "out vec4 fragColor;\n"
        "void main()\n"
        "{\n"
        "    vec3 normal = normals(vViewPos);\n"
        "    vec3 gray = vec3(0.9, 0.9, 0.9);\n"
        "    float theta = dot(normal, vec3(0, 0, 1)) / length(normal);\n"
        "    fragColor = vec4(gray * theta, 1.0);\n"
        "}\n");
    glShader->setVertexShaderCode(ver);
    glShader->setFragmentShaderCode(frag);
    // Set the shader on the render pass
    gl3Pass->setShaderProgram(glShader);
    // Filter key
    Qt3DRender::QFilterKey *m_filterKey = new Qt3DRender::QFilterKey(this);
    m_filterKey->setName(QStringLiteral("renderingStyle"));
    m_filterKey->setValue(QStringLiteral("forward"));
    // Add the pass to the technique
    gl3Technique->addRenderPass(gl3Pass);
    // Set the targeted GL version for the technique
    gl3Technique->graphicsApiFilter()->setApi(Qt3DRender::QGraphicsApiFilter::OpenGL);
    gl3Technique->graphicsApiFilter()->setMajorVersion(3);
    gl3Technique->graphicsApiFilter()->setMinorVersion(2);
    gl3Technique->graphicsApiFilter()->setProfile(Qt3DRender::QGraphicsApiFilter::CoreProfile);
    // Add the filter key
    gl3Technique->addFilterKey(m_filterKey);
    // Add the technique to the effect
    effect->addTechnique(gl3Technique);
    // Set the effect on the material
    setEffect(effect);
}
From my searching I think the easiest way is a two-render-pass technique; sadly, there is no documentation or example for Qt3D/C++ that shows how to do it. Can someone help?
Thanks in advance.

Yes, unfortunately there is not much information in this regard. It looks like Qt put more effort and resources into QML than into C++ (the examples run about 5 to 1 in favor of QML).
OK, I managed to make a custom shader work. I played with your code and changed just a few things:
I moved the configuration of the QTechnique to immediately after its creation, and I changed the order in which the technique is initialized:
gl3Technique->graphicsApiFilter()->setProfile(Qt3DRender::QGraphicsApiFilter::CoreProfile);
gl3Technique->graphicsApiFilter()->setApi(Qt3DRender::QGraphicsApiFilter::OpenGL);
gl3Technique->graphicsApiFilter()->setMajorVersion(3);
gl3Technique->graphicsApiFilter()->setMinorVersion(1);
I added a QFilterKey to the technique:
Qt3DRender::QFilterKey *filterkey = new Qt3DRender::QFilterKey(this);
filterkey->setName(QStringLiteral("renderingStyle"));
filterkey->setValue(QStringLiteral("forward"));
and I load the shaders from resources, the same way as shown in the example QML code:
glShader->setVertexShaderCode(Qt3DRender::QShaderProgram::loadSource(QUrl(QStringLiteral("qrc:/MyShader/simpleColor.vert"))));
glShader->setFragmentShaderCode(Qt3DRender::QShaderProgram::loadSource(QUrl(QStringLiteral("qrc:/MyShader/simpleColor.frag"))));
I did not investigate which of those changes is the main reason, but it is working.
After that I found another confirmation of the same approach in this post:
How to make color of a section be different on a 3D object
from @AdaRaider and @user3405291.
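For the outline itself, a common approach that fits this material setup is the "inverted hull" technique: add a second QRenderPass to the same QTechnique that redraws the mesh slightly inflated along its vertex normals, in a solid color, with front-face culling so only the shell sticking out behind the model stays visible. The sketch below is not from the original post; the extrusion distance and outline color are illustrative assumptions:
// Second pass on the same technique: draw the back-facing "shell"
// of the mesh, pushed out along the vertex normals, in a flat color.
Qt3DRender::QRenderPass *outlinePass = new Qt3DRender::QRenderPass();
Qt3DRender::QShaderProgram *outlineShader = new Qt3DRender::QShaderProgram();
outlineShader->setVertexShaderCode(QByteArray(
    "#version 330\n"
    "in vec3 vertexPosition;\n"
    "in vec3 vertexNormal;\n"
    "uniform mat4 mvp;\n"
    "void main()\n"
    "{\n"
    "    // 0.02 is an illustrative extrusion; expose it as a QParameter in real code\n"
    "    gl_Position = mvp * vec4(vertexPosition + vertexNormal * 0.02, 1.0);\n"
    "}\n"));
outlineShader->setFragmentShaderCode(QByteArray(
    "#version 330\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = vec4(1.0, 0.5, 0.0, 1.0); }\n"));
outlinePass->setShaderProgram(outlineShader);
// Cull front faces so only the inflated shell behind the model remains visible
Qt3DRender::QCullFace *cullFront = new Qt3DRender::QCullFace();
cullFront->setMode(Qt3DRender::QCullFace::Front);
outlinePass->addRenderState(cullFront);
gl3Technique->addRenderPass(outlinePass);
Toggling the outline when the user picks the entity can then be done by adding or removing this pass, or by driving the extrusion distance through a QParameter that is set to 0 when the entity is not selected.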

Related

glGetUniformLocation returns -1 for samplers other than the first?

I have been developing an application in OpenGL.
In the first pass I write some values to my FBO, which has 3 color textures attached to it. In the second pass I attach these 3 textures as samplers in the shader and do some calculations for the color. Following is the shader code for the second pass:
const char* final_fragment_source[] = {
    "uniform sampler2D rt1;\n"
    "uniform sampler2D rt2;\n"
    "uniform sampler2D rt3;\n"
    "out vec4 color;\n"
    "void main(){\n"
    "    vec3 bg = vec3(0.0, 0.0, 0.0);\n"
    "    vec4 RGB1 = texture2D(rt1, vec2(gl_FragCoord.xy));\n"
    "    vec4 RGB2 = texture2D(rt2, vec2(gl_FragCoord.xy));\n"
    "    vec4 RGB3 = texture2D(rt3, vec2(gl_FragCoord.xy));\n"
    "    vec3 tempcolor = RGB1.rgb - bg * RGB1.a + bg * RGB3.a + bg * RGB2.rgb * RGB3.a + RGB3.rgb * RGB2.a * 0.0f + bg;\n"
    "    color = vec4(tempcolor, 0.25);\n"
    "}\n"
};
The problem is that when I call glGetUniformLocation() for rt2 and rt3 I get -1. I get a correct location for rt1.
This is what I have tried:
-- I know that if you don't use a uniform variable declared in the fragment shader, the driver may optimize it away and return -1 for that variable. But here I am clearly using all the variables in the calculation of the final color.
-- There is no compile-time or linking error in this shader code.
Following is the code where I get the error:
glUseProgram(fp_render_prog);
err = glGetError();
rt1 = glGetUniformLocation(fp_render_prog, "rt1");
err = glGetError();
rt2 = glGetUniformLocation(fp_render_prog, "rt2");
err = glGetError();
rt3 = glGetUniformLocation(fp_render_prog, "rt3");
err = glGetError();
MVPLocation = glGetUniformLocation(render_prog, "MVP");
err = glGetError();
-- I have tried inserting glGetError() calls and do not get any error.
Thanks in advance for any help.
glGetUniformLocation(fp_render_prog, "rt2") and glGetUniformLocation(fp_render_prog, "rt3") return -1 because rt2 and rt3 are not active.
Note that the 2nd parameter of glGetUniformLocation must be the name of an active uniform variable.
rt2 and rt3 are used when setting RGB2 and RGB3:
vec4 RGB2 = texture2D(rt2,vec2(gl_FragCoord.xy));
vec4 RGB3 = texture2D(rt3,vec2(gl_FragCoord.xy));
but every term that uses RGB2 or RGB3 is multiplied either by 0.0 or by the all-zero vector bg:
vec3 bg = vec3(0.0, 0.0, 0.0);
vec3 tempcolor =
RGB1.rgb -
bg * RGB1.a +
bg * RGB3.a +
bg * RGB2.rgb * RGB3.a +
RGB3.rgb * RGB2.a * 0.0f +
bg;
The compiler may optimize that:
vec3 tempcolor =
RGB1.rgb -
vec3(0.0) * RGB1.a +
vec3(0.0) * RGB3.a +
vec3(0.0) * RGB2.rgb * RGB3.a +
RGB3.rgb * RGB2.a * 0.0f +
vec3(0.0);
Which in the end is the same as:
vec3 tempcolor = RGB1.rgb;
As a result, rt2 and rt3 are not used in the executable code, and the uniform variables rt2 and rt3 are therefore inactive.
See [OpenGL ES 2 Specification - 2.10.4 Shader Variables, p. 35](https://www.khronos.org/registry/OpenGL/specs/es/2.0/es_full_spec_2.0.pdf):
A uniform is considered active if it is determined by the compiler and linker that the uniform will actually be accessed when the executable code is executed. In cases where the compiler and linker cannot make a conclusive determination, the uniform will be considered active.
...
To find the location of an active uniform variable within a program object, use the command
int GetUniformLocation( uint program, const char *name );
See the OpenGL ES 2.0 Online Manual Pages - glGetActiveUniform:
A uniform variable (either built-in or user-defined) is considered active if it is determined during the link operation that it may be accessed during program execution.
A location value of -1 is not an error condition, it just implies that the given uniform is not needed. This can occur for multiple reasons, including:
Uniform value never existed in the shader.
Uniform value is declared but never used in the shader.
Uniform value is declared and used in the shader/program, but is optimized out by the compiler because the compiler can prove the value is not needed.
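In practice it is convenient to treat a -1 location as a warning rather than an error. A minimal C++ sketch, assuming the usual GL headers are available; the helper name and the texture-unit assignment are illustrative, not from the question:
#include <cstdio>

// Look up a uniform and warn instead of failing hard, since -1 may
// simply mean the compiler optimized the uniform away.
GLint getUniformChecked(GLuint program, const char *name)
{
    GLint loc = glGetUniformLocation(program, name);
    if (loc == -1)
        std::fprintf(stderr, "uniform '%s' is inactive or missing\n", name);
    return loc;
}

// Usage: calls like glUniform1i silently ignore a location of -1, so the
// program still runs; the warning just explains why a sampler has no effect.
GLint rt2Loc = getUniformChecked(fp_render_prog, "rt2");
if (rt2Loc != -1)
    glUniform1i(rt2Loc, 1); // sample rt2 from texture unit 1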

Why does the GL still work correctly when glBindFragDataLocation is commented out?

const GLchar* vertexSource1 = "#version 330 core\n"
    "layout (location = 0) in vec2 position;\n"
    "layout (location = 1) in vec3 color;\n"
    "out vec3 Color;\n"
    "void main()\n"
    "{\n"
    "    gl_Position = vec4(position, 0.0, 1.0);\n"
    "    Color = color;\n"
    "}\0";
const GLchar* fragmentSource1 = "#version 330 core\n"
    "in vec3 Color;\n"
    "out vec4 outColor;\n"
    "void main()\n"
    "{\n"
    "    outColor = vec4(Color, 1.0);\n"
    "}\n";
GLuint shaderProgram1 = glCreateProgram();
glAttachShader(shaderProgram1, vertexShader1);
glAttachShader(shaderProgram1, fragmentShader1);
// glBindFragDataLocation(shaderProgram1, 0, "Color");
glLinkProgram(shaderProgram1);
Whether I add glBindFragDataLocation or not, the GL works correctly. Why?
Because you're "lucky". The OpenGL specification provides no guarantees about how fragment shader output locations are assigned if you don't assign them. It only says that each one will have a separate location; what locations those are is up to the implementation.
However, considering the sheer volume of code that writes to a single output variable without explicitly assigning it to a location, it's highly unlikely that an OpenGL implementation would ever assign the first FS output location to anything other than 0. So while it isn't a spec guarantee, at this point, it is a de-facto requirement of implementations.
Note: That doesn't mean you shouldn't assign that location manually. It's always best to be on the safe and explicit side.
FYI: layout(location) works for fragment shader outputs too. So you should use that if you're using it on vertex attributes. Then you don't have to worry about doing it from code.
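For illustration, here is the question's fragment shader with the explicit location qualifier applied; nothing else needs to change:
const GLchar* fragmentSource1 = "#version 330 core\n"
    "in vec3 Color;\n"
    "layout (location = 0) out vec4 outColor;\n"  // output binding fixed in the shader
    "void main()\n"
    "{\n"
    "    outColor = vec4(Color, 1.0);\n"
    "}\n";
// With the location specified in the shader itself, no
// glBindFragDataLocation(shaderProgram1, 0, "outColor") call is needed.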

How to draw red lines when using a fragment shader for texturing?

I am writing a simple video player using OpenGL. I used Qt and followed its basic texture examples.
The vertex and fragment shaders are here:
QOpenGLShader *vshader = new QOpenGLShader(QOpenGLShader::Vertex, this);
const char *vsrc =
"attribute highp vec4 vertex;\n"
"attribute mediump vec4 texCoord;\n"
"varying mediump vec4 texc;\n"
"uniform mediump mat4 matrix;\n"
"void main(void)\n"
"{\n"
" gl_Position = matrix * vertex;\n"
" texc = texCoord;\n"
"}\n";
vshader->compileSourceCode(vsrc);
QOpenGLShader *fshader = new QOpenGLShader(QOpenGLShader::Fragment, this);
const char *fsrc =
"uniform sampler2D texture;\n"
"varying mediump vec4 texc;\n"
"void main(void)\n"
"{\n"
" gl_FragColor = texture2D(texture, texc.st);\n"
"}\n";
fshader->compileSourceCode(fsrc);
And I did this to display an image:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, texture_cv.cols, texture_cv.rows, GL_RGB, GL_UNSIGNED_BYTE, texture_cv.data);
//then draw a quad
...
Then, after this, how can I draw several red lines on the screen? Since I am using the fragment shader (I am very new to shaders), I cannot turn off the texture.
By far the easiest solution is to use a different shader program for drawing your red lines. Since it just draws a solid color, it will be very easy. The fragment shader could be something like:
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
The vertex shader will be very similar to what you have, except that it does not need to produce texture coordinates. You might even be able to use the existing vertex shader.
It is very common to use multiple shader programs for rendering. You have a shader program for each different type of rendering, and switch between them with glUseProgram().
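A minimal sketch in the question's Qt setup; the names lineProgram, textureProgram and matrix are illustrative, not from the original code:
// A second, solid-color program alongside the existing texture program.
QOpenGLShaderProgram *lineProgram = new QOpenGLShaderProgram(this);
lineProgram->addShaderFromSourceCode(QOpenGLShader::Vertex,
    "attribute highp vec4 vertex;\n"
    "uniform mediump mat4 matrix;\n"
    "void main(void)\n"
    "{\n"
    "    gl_Position = matrix * vertex;\n"
    "}\n");
lineProgram->addShaderFromSourceCode(QOpenGLShader::Fragment,
    "void main(void)\n"
    "{\n"
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"
    "}\n");
lineProgram->link();

// In the paint function: draw the textured quad with the existing
// program first, then switch programs and draw the lines.
textureProgram->bind();
// ... draw the video quad as before ...
textureProgram->release();

lineProgram->bind();
lineProgram->setUniformValue("matrix", matrix);
// ... point the "vertex" attribute at the line endpoints and issue
// glDrawArrays(GL_LINES, 0, lineVertexCount); ...
lineProgram->release();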

Libgdx shader, render and draw confusion

In my libGDX Scene2D stage I am trying to have an actor flash with a white color.
With OpenGL ES 2.0 I understood that I need to use a shader to achieve this, and I have a problem implementing it.
My goal is to color an actor solid white on a specific game event. The problem is that, given this specific event, everything in the stage becomes white (all actors and the background) instead of just the selected actor's texture.
My World class is where the stage is created. It is rendered like this:
public void render() {
    Gdx.gl.glClearColor(1, 1, 1, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    update();
    stage.act(Gdx.graphics.getDeltaTime());
    stage.draw();
}
In the stage I use actors that are defined in a MyActor class. This class has its own draw method, where I am trying to color the actor solid white with the shader. Here is the draw method:
public void draw(Batch batch, float alpha) {
    if (flashing) {
        batch.setShader(shader);
    }
    batch.flush();
    Color color = getColor();
    batch.setColor(color.r, color.g, color.b, color.a * alpha);
    batch.draw(tr, getX(), getY(), side/2, side/2, side, side, getScaleX(), getScaleY(), 0);
}
Obviously I am doing something wrong with the batch. Instead of coloring just this one actor white, it colors everything. It also stays white even after the boolean flashing becomes false.
Here is how I have set up my shader, in the actor class MyActor:
ShaderProgram shader;
String vertexShader =
    "attribute vec4 a_position; \n" +
    "attribute vec4 a_color;\n" +
    "attribute vec2 a_texCoord0; \n" +
    "uniform mat4 u_projTrans; \n" +
    "varying vec4 v_color; \n" +
    "varying vec2 v_texCoords; \n" +
    "void main() { \n" +
    "    v_color = a_color; \n" +
    "    v_texCoords = a_texCoord0; \n" +
    "    gl_Position = u_projTrans * a_position; \n" +
    "};";
String fragmentShader =
    "#ifdef GL_ES\n" +
    "precision mediump float;\n" +
    "#endif\n" +
    "varying vec4 v_color;\n" +
    "varying vec2 v_texCoords;\n" +
    "uniform sampler2D u_texture;\n" +
    "uniform float grayscale;\n" +
    "void main()\n" +
    "{\n" +
    "    vec4 texColor = texture2D(u_texture, v_texCoords);\n" +
    "    float gray = dot(texColor.rgb, vec3(5, 5, 5));\n" +
    "    texColor.rgb = mix(vec3(gray), texColor.rgb, grayscale);\n" +
    "    gl_FragColor = v_color * texColor;\n" +
    "}";
Then initialized with:
shader = new ShaderProgram(vertexShader, fragmentShader);
What is it that I am doing wrong?
I have to admit I probably don't correctly understand how the batch in MyActor, the draw method, and the render method work together!
Thanks for your time.
The problem here is that you never set the shader back to the default shader, so your SpriteBatch always uses the "solid white" shader.
As the Javadoc of setShader(ShaderProgram shader) says, you can reset the shader by calling setShader(null).
Another possibility would be to use a different Texture for the solid white and then simply say:
Texture t = flashing ? whiteTexture : defaultTexture;
and draw the Texture t.
Just a suggestion: do not use batch.flush() unless it is absolutely necessary, because flush() issues a draw call to the GPU, and this costs some performance.

WebGL returns an error when trying to set the aTextureCoord parameter

I have a simple WebGL program which I am almost copy-pasting from the Mozilla Developer Network. I manage to create a cube whose sides have a single colour and whose top and bottom have textures, and the textures are visible, but I am not sure the lighting is set up properly. There is an error that I am trying to fix, and I noticed that it occurs when I try to set up the shaders' aTextureCoord parameter. Following are my shader and the JavaScript code that I am trying to use. Can someone figure out why this happens?
attribute highp vec3 aVertexNormal;
attribute highp vec3 aVertexPosition;
attribute highp vec2 aTextureCoord;
uniform highp mat4 uNormalMatrix;
uniform highp mat4 uMVMatrix;
uniform highp mat4 uPMatrix;
varying highp vec3 vLighting;
varying highp vec2 vTextureCoord;
void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vTextureCoord = aTextureCoord;
    highp vec3 ambientLight = vec3(0.6, 0.6, 0.6);
    highp vec3 directionalLightColor = vec3(0.5, 0.5, 0.75);
    highp vec3 directionalVector = vec3(0.85, 0.8, 0.75);
    highp vec4 transformedNormal = uNormalMatrix * vec4(aVertexNormal, 1.0);
    highp float directional = max(dot(transformedNormal.xyz, directionalVector), 0.0);
    vLighting = ambientLight + (directionalLightColor * directional);
}
Following is the JavaScript code:
shaderProgram.vertexPositionAttribute = gl.getAttribLocation(shaderProgram, "aVertexPosition");
gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);
shaderProgram.vertexColorAttribute = gl.getAttribLocation(shaderProgram, "aVertexColor");
gl.enableVertexAttribArray(shaderProgram.vertexColorAttribute);
shaderProgram.pMatrixUniform = gl.getUniformLocation(shaderProgram, "uPMatrix");
shaderProgram.mvMatrixUniform = gl.getUniformLocation(shaderProgram, "uMVMatrix");
shaderProgram4Tex.vertexPositionAttribute = gl.getAttribLocation(shaderProgram4Tex, "aVertexPosition");
gl.enableVertexAttribArray(shaderProgram4Tex.vertexPositionAttribute);
shaderProgram4Tex.vertexTextureAttribute = gl.getAttribLocation(shaderProgram4Tex, "aTextureCoord");
gl.enableVertexAttribArray(shaderProgram4Tex.vertexTextureAttribute);
shaderProgram4Tex.pMatrixUniform = gl.getUniformLocation(shaderProgram4Tex, "uPMatrix");
shaderProgram4Tex.mvMatrixUniform = gl.getUniformLocation(shaderProgram4Tex, "uMVMatrix");
shaderProgram4Tex.vertexNormalAttribute = gl.getAttribLocation(shaderProgram4Tex, "aVertexNormal");
gl.enableVertexAttribArray(shaderProgram4Tex.vertexNormalAttribute);
shaderProgram4Tex.samplerUniform = gl.getUniformLocation(shaderProgram4Tex, "uSampler");
shaderProgram4Tex.normalMatrix = gl.getUniformLocation(shaderProgram4Tex, "uNormalMatrix");
I get the following error when I run the program:
WebGL: INVALID_OPERATION: drawElements: attribs not setup correctly
Hopefully someone can answer this, as I have spent a good 10 hours on it and could not figure out why this happens.
"attribs not setup correctly" means either
You have no program set up. In other words you didn't call gl.useProgram with a valid program.
You turned on an attribute with gl.enableVertexAttribArray but you did not assign a buffer to it by calling gl.bindBuffer(gl.ARRAY_BUFFER, someBuffer) followed at some point by calling gl.vertexAttribPointer which assigns the currently bound ARRAY_BUFFER to the specified attribute.
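For reference, here is the required call order, sketched with the desktop OpenGL C++ equivalents since the WebGL calls map one-to-one; texCoordBuffer and indexCount are illustrative names, not from the question:
glUseProgram(shaderProgram4Tex);                 // 1. make a valid program current

GLint texLoc = glGetAttribLocation(shaderProgram4Tex, "aTextureCoord");
glBindBuffer(GL_ARRAY_BUFFER, texCoordBuffer);   // 2. bind the buffer holding the data
glVertexAttribPointer(texLoc, 2, GL_FLOAT, GL_FALSE, 0, nullptr); // 3. attach it to the attribute
glEnableVertexAttribArray(texLoc);               // 4. only now enable the attribute

glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr);
One thing worth double-checking in the code above is whether every attribute that gets enabled (aVertexColor, for instance) actually exists in the linked program and has a buffer attached before the draw call.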