GStreamer has an element called glshader with properties such as fragment and vertex. The documentation includes an example of a trivial fragment shader, but as of 1.16 it does not include an example of a vertex shader.
What kind of skeleton should be used to start writing my own vertex shader for use with the glshader element?
While the documentation is currently sparse, you can find an example in the gst-plugins-base package in gst-libs/gst/gl/gstglshaderstrings.c.
It looks like this:
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
void main()
{
    gl_Position = a_position;
    v_texcoord = a_texcoord;
}
A good starting point for testing your shaders is
gst-launch-1.0 gltestsrc ! glshader fragment="\"`frag_shader`\"" vertex="\"`vert_shader`\"" ! glimagesink
If you add something like v_texcoord.y *= 0.8;, you can demonstrate that the geometry glshader feeds to the shader is a rectangle with v = 0 at the top, which is backwards from how Unity interprets texture coordinates (Unity uses v = 0 for the bottom, except when you get too close to the metal on a system with different conventions). An experiment like v_texcoord.y *= 2.0; demonstrates that the default wrap mode appears to be CLAMP_TO_EDGE.
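For instance, a variant of the skeleton above for that first experiment might look like this (just a sketch; the fragment shader is assumed to simply sample the input texture at v_texcoord):
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
void main()
{
    gl_Position = a_position;
    v_texcoord = a_texcoord;
    // Squash v into [0, 0.8]; whether the picture stays anchored at the top
    // or at the bottom reveals which edge v = 0 corresponds to.
    v_texcoord.y *= 0.8;
}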
Related
I am trying to create an example of an interpolated surface.
First I created an example of an interpolated trefoil.
Here is the source of my example.
Then I noticed that the animation is pretty slow, around 20-30 FPS.
After reading some papers, I know that I have to "move" the evaluation of the trefoil onto the GPU. Thus I studied some papers about tessellation shaders.
At the moment I bind the following simple vertex shader:
#version 130
in vec4 Position;
in vec3 Normal;
uniform mat4 Projection;
uniform mat4 Modelview;
uniform mat3 NormalMatrix;
uniform vec3 DiffuseMaterial;
out vec3 EyespaceNormal;
out vec3 Diffuse;
void main()
{
    EyespaceNormal = NormalMatrix * Normal;
    gl_Position = Projection * Modelview * Position;
    Diffuse = DiffuseMaterial;
}
Now I have multiple questions:
Do I use an array of vertices to pass GL_PATCHES, like I already did with triangle strips? Which way is faster? DrawElements?
glDrawElements(GL_TRIANGLE_STRIP, Indices.Length, OpenGL.GL_UNSIGNED_SHORT, IntPtr.Zero);
or should I use
glPatchParameteri(GL_PATCH_VERTICES,16);
glBegin(GL_PATCHES);
glVertex3f(x0,y0,z0)
...
glEnd();
What about the array of indices? How can I determine the path, i.e. the order in which the patches will be passed?
Do I calculate the normals in the Shader as well?
I found some examples of tessellation shaders, but only in #version 400.
Can I use this version on mobile devices (OpenGL ES) as well?
Can I pass multiple patches to the GPU via multithreading?
Many many thanks in advance.
In essence, I don't believe you have to send anything to the GPU in terms of indices (or vertices), as everything can be synthesized. I don't know if the evaluation of the trefoil knot directly maps onto the connectivity of the resulting tessellated mesh of a bilinear patch, but this could work.
You could make do with a simple vertex buffer where each vertex is the position of a single trefoil knot. Set glPatchParameteri(GL_PATCH_VERTICES, 1). Then you could draw multiple knots with a single call to glDrawArrays:
glDrawArrays(GL_PATCHES, 0, numKnots);
The tessellation control stage can be a simple pass-through stage. In the tessellation evaluation shader you can then use the abstract patch type quads and move the evaluation of the trefoil knot, or any other biparametric shape, into that shader, using the supplied [u, v] coordinates. You could translate every trefoil by the input vertex. The normals can be calculated in the shader as well.
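As a rough sketch (not a drop-in solution), a tessellation evaluation shader along those lines could look like the following; the torus below is only a placeholder for your actual trefoil evaluation, and a pass-through control shader that sets the inner/outer tessellation levels is assumed:
#version 400
layout(quads, equal_spacing, ccw) in;
uniform mat4 Projection;
uniform mat4 Modelview;
out vec3 EyespaceNormal;
const float PI = 3.14159265358979;
void main()
{
    // gl_TessCoord supplies the (u, v) position inside the abstract quad patch.
    float u = gl_TessCoord.x * 2.0 * PI;
    float v = gl_TessCoord.y * 2.0 * PI;

    // Placeholder biparametric surface (a torus); substitute the trefoil here.
    float R = 1.0;
    float r = 0.3;
    vec3 p = vec3((R + r * cos(v)) * cos(u),
                  (R + r * cos(v)) * sin(u),
                  r * sin(v));
    vec3 n = vec3(cos(v) * cos(u), cos(v) * sin(u), sin(v));

    // Translate by the patch's single input vertex, so each one-vertex
    // patch places a complete shape at its own position.
    p += gl_in[0].gl_Position.xyz;

    EyespaceNormal = mat3(Modelview) * n;   // assumes no non-uniform scaling
    gl_Position = Projection * Modelview * vec4(p, 1.0);
}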
Alternatively, you could use the geometry shader to synthesize the trefoil from just one input vertex position, using points as the input primitive and triangle strip as the output primitive. Then you could again just call
glDrawArrays(GL_POINTS, 0, numKnots);
and create the trefoil in the geometry shader, using the logic you currently use for index generation to determine the order of evaluation, and translating the generated vertices by the input vertex.
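A skeleton for that geometry-shader variant might look roughly like this (again only a sketch: the circular band is a stand-in for the real trefoil evaluation, and max_vertices must stay within GL_MAX_GEOMETRY_OUTPUT_VERTICES):
#version 400
layout(points) in;
layout(triangle_strip, max_vertices = 66) out;
uniform mat4 Projection;
uniform mat4 Modelview;
out vec3 EyespaceNormal;
const float PI = 3.14159265358979;
const int SEGMENTS = 32;   // 2 * (SEGMENTS + 1) = 66 emitted vertices
void main()
{
    vec3 center = gl_in[0].gl_Position.xyz;
    // Emit a thin cylindrical band along a placeholder closed curve (a circle);
    // replace this loop with the trefoil evaluation and its vertex ordering.
    for (int i = 0; i <= SEGMENTS; ++i) {
        float u = float(i) / float(SEGMENTS) * 2.0 * PI;
        vec3 onCurve = center + vec3(cos(u), sin(u), 0.0);
        vec3 normal  = vec3(cos(u), sin(u), 0.0);
        for (int j = 0; j < 2; ++j) {
            vec3 p = onCurve + vec3(0.0, 0.0, 0.2 * float(j));
            EyespaceNormal = mat3(Modelview) * normal;
            gl_Position = Projection * Modelview * vec4(p, 1.0);
            EmitVertex();
        }
    }
    EndPrimitive();
}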
In both cases there would be no need to multithread draw calls, which is ineffective with OpenGL anyway. You are limited by the maximum number of vertices that can be generated per patch, which should be about 64 × 64 for tessellation and GL_MAX_GEOMETRY_OUTPUT_VERTICES for geometry shaders.
Looking at OpenGL examples, some declare the following kinds of variables at the top of the shader:
in
out
and some use:
attribute
varying
uniform
What is the difference? Are they mutually exclusive?
attribute and varying were removed from GLSL 1.40 and above (desktop GL version 3.1) in core OpenGL. OpenGL ES 2 still uses them, but they were removed in ES 3.0.
You can still use the old constructs in compatibility profiles, but attribute only maps to vertex shader inputs. varying maps to both VS outputs and FS inputs.
uniform has not changed; it still means what it always has: values set by the outside world which are fixed during a rendering operation.
In modern OpenGL, you have a series of shaders hooked up to a pipeline. A simple pipeline will have a vertex shader and a fragment shader.
For each shader in the pipeline, an in is an input to that stage, and an out is an output from that stage. The out from one stage will get matched with the in from the next stage.
A uniform can be used in any shader and will stay constant for the entire draw call.
If you want an analogy, think of it as a factory. The in and out are conveyor belts going into and out of machines. The uniforms are knobs that you turn on a machine to change how it works.
Example
Vertex shader:
// Input from the vertex array
in vec3 VertPos;
in vec2 VertUV;
// Output to fragment shader
out vec2 TexCoord;
// Transformation matrix
uniform mat4 ModelViewProjectionMatrix;
Fragment shader:
// Input from vertex shader
in vec2 TexCoord;
// Output pixel data
out vec4 Color;
// Texture to use
uniform sampler2D Texture;
Older OpenGL
In older versions of OpenGL (2.1 / GLSL 1.20), other keywords were used instead of in and out:
attribute was used for the inputs to the vertex shader.
varying was used for the vertex shader outputs and fragment shader inputs.
Fragment shader outputs were implicitly declared; you would use gl_FragColor instead of specifying your own.
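For comparison, the declarations from the example above would look roughly like this in the older style (a sketch, GLSL 1.20):
// Vertex shader (old style)
attribute vec3 VertPos;    // input from the vertex array
attribute vec2 VertUV;
varying vec2 TexCoord;     // output to the fragment shader
uniform mat4 ModelViewProjectionMatrix;

// Fragment shader (old style)
varying vec2 TexCoord;     // input from the vertex shader
uniform sampler2D Texture;
// No user-declared output: the result is written to gl_FragColor.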
I know that using a very simple vertex shader like
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
varying vec4 vColor;
void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vColor = aVertexColor;
}
and a very simple fragment shader like
precision mediump float;
varying vec4 vColor;
void main(void) {
    gl_FragColor = vColor;
}
to draw a triangle with red, blue, and green vertices will end up producing a triangle like this:
My questions are:
Do calculations for interpolating fragment colors belonging to one triangle (or a primitive) happen in parallel on the GPU?
What are the algorithm and also hardware support for interpolating fragment colors inside the triangle?
The interpolation happens in the fragment-processing step.
The algorithm is very simple: the colors are interpolated according to the fragment's (u, v) position within the triangle.
Yes, absolutely.
Triangle color interpolation is part of the fixed-function pipeline (it's actually part of the rasterization step, which happens before fragment processing), so it is carried out entirely in hardware on probably all video cards. The equations for interpolating vertex data can be found, e.g., in the OpenGL 4.3 specification, section 14.6.1 (pp. 405-406). The algorithm defines barycentric coordinates for the triangle and uses them to interpolate between the vertices.
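To make that concrete, here is a small sketch (mine, not the specification's exact formulation) of how barycentric weights can be used to interpolate per-vertex colors at a point p inside a 2D triangle with vertices p0, p1, p2:
vec4 interpolateColor(vec2 p, vec2 p0, vec2 p1, vec2 p2,
                      vec4 c0, vec4 c1, vec4 c2)
{
    // Twice the signed area of the whole triangle.
    float area = (p1.x - p0.x) * (p2.y - p0.y) - (p2.x - p0.x) * (p1.y - p0.y);

    // Each barycentric weight is the area of the sub-triangle opposite a
    // vertex, divided by the area of the whole triangle.
    float w0 = ((p1.x - p.x) * (p2.y - p.y) - (p2.x - p.x) * (p1.y - p.y)) / area;
    float w1 = ((p2.x - p.x) * (p0.y - p.y) - (p0.x - p.x) * (p2.y - p.y)) / area;
    float w2 = 1.0 - w0 - w1;

    // The interpolated attribute is the weighted sum of the vertex attributes.
    return w0 * c0 + w1 * c1 + w2 * c2;
}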
Besides the answers given here, I wanted to add that there doesn't have to be dedicated fixed-function hardware for the interpolations. Modern GPUs tend to use "pull-model interpolation", where the interpolation is actually done via the shader units.
I recommend reading Fabian Giesen's blog article about the topic (and his whole series of articles about the graphics pipeline in general).
On the first question: although there are parallel units on the GPU, it depends on the size of the triangle under consideration. For most GPUs, drawing happens on a tile-by-tile basis, and it depends on the screen-space size of the triangle: if it falls completely within one tile, it will be processed entirely by a single tile processor; if it is split across tiles, it can be processed in parallel by different units.
The second question is answered by other posters before me.
I need to setup a 'dependent texture' such that the return values from one texture lookup are used to determine where to look up from a second texture.
Can you point me to the right gl API calls I would need to do this?
I need to setup a 'dependent texture' such that the return values from one texture lookup are used to determine where to look up from a second texture.
This can be done using shaders, only.
Can you point me to the right gl API calls I would need to do this?
You were asking for the API calls; well, here they are (a short sketch tying them together follows the list):
glCreateShader to create new shader objects
glShaderSource to load the shader source code into the shader objects
glCompileShader to compile the loaded shader sources
glCreateProgram to create a program object
glLinkProgram to link the shader objects into a program
glUseProgram to actually use the shader program created with the above calls
glUniform1i to set the fragment shader's sampler uniforms to the texture units being sourced
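A minimal sketch of those calls in sequence could look like this (error checking omitted; vs_src, fs_src, coordTex and samplingTex are assumed to be your shader source strings and already-created texture objects, and the uniform names match the fragment shader shown further below):
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vs_src, NULL);
glCompileShader(vs);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fs_src, NULL);
glCompileShader(fs);

GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);
glUseProgram(prog);

/* Bind each texture to its own texture unit. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, coordTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, samplingTex);

/* Point the sampler uniforms at those texture units. */
glUniform1i(glGetUniformLocation(prog, "coord_texture"), 0);
glUniform1i(glGetUniformLocation(prog, "sampling_texture"), 1);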
Also, you were not asking for them, but you need them as well; here are the required GLSL language elements:
sampler… uniforms to bind the texture units to
The texture GLSL function to fetch a texture sample. Use the value of a sampled texture to determine the texture coordinate for the next one.
Like this:
#version 130
uniform sampler2D coord_texture;
uniform sampler2D sampling_texture;
uniform vec2 InvWinSize;
out vec4 FragColor;
void main(void){
    vec2 uv = gl_FragCoord.st * InvWinSize;
    vec2 tex_coord = texture(coord_texture, uv).st;
    vec4 sampled = texture(sampling_texture, tex_coord);
    FragColor = sampled; // write the dependent lookup result out
}
I accessed the first texture with the screen coordinates, but you can use whatever uv you need; for example, a uv coming from a vertex shader:
#version 130
uniform sampler2D coord_texture;
uniform sampler2D sampling_texture;
in vec2 uv;
out vec4 FragColor;
void main(void){
    vec2 tex_coord = texture(coord_texture, uv).st;
    vec4 sampled = texture(sampling_texture, tex_coord);
    FragColor = sampled; // write the dependent lookup result out
}
While implementing billboard objects in my engine I encountered a problem (screenshot below):
As you can see, the billboard object covers everything in the background (the skybox seems to be an exception), and this is not exactly how I would like it to work. I have no idea where the problem is.
My fragment shader is pretty simple:
#version 330
uniform sampler2D tex;
in vec2 TexCoord;
out vec4 FragColor;
void main()
{
    FragColor = texture(tex, TexCoord);
}
and the billboard is just a triangle strip made in a geometry shader.
Any ideas would be nice.
This is probably a draw-order issue: you need to draw opaque objects first and then alpha-blended objects back to front. Alternatively, you can enable alpha testing, or discard fragments in your shader if their alpha is below a certain threshold.
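For the discard option, a sketch of the adjusted fragment shader (using an arbitrary 0.1 threshold) would be:
#version 330
uniform sampler2D tex;
in vec2 TexCoord;
out vec4 FragColor;
void main()
{
    vec4 color = texture(tex, TexCoord);
    // Drop nearly transparent fragments so they write neither color nor
    // depth, letting objects behind the billboard show through.
    if (color.a < 0.1)
        discard;
    FragColor = color;
}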