These are the four points to be drawn:
m_points = { -0.905 ,  0.905 , // Top Left
              0.905 ,  0.905 , // Top Right
              0.905 , -0.905 , // Bottom Right
             -0.905 , -0.905   // Bottom Left
};
and then calling the draw arrays:
glDrawArrays(GL_POINTS, 0, 4);
According to the documentation: "The geometry shader receives all vertices of a primitive as its input."
In the geometry shader I want a line to be drawn from one point to another, using this code:
#version 400 core
layout (points) in;
layout (line_strip, max_vertices = 4) out;
void main() {
gl_Position = gl_in[0].gl_Position ;
EmitVertex();
gl_Position = gl_in[1].gl_Position ;
EmitVertex();
gl_Position = gl_in[2].gl_Position ;
EmitVertex();
gl_Position = gl_in[3].gl_Position ;
EmitVertex();
EndPrimitive();
}
But the shader throws an array-out-of-bounds error. Can I get all four points in the geometry shader?
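For reference: with layout (points) in, each geometry shader invocation receives a single vertex, so gl_in has length 1 and gl_in[1] through gl_in[3] are out of bounds. One way to hand all four points to a single invocation is the lines-adjacency input primitive; a minimal sketch, assuming the same four-point buffer as above:

```glsl
#version 400 core
// GL_LINES_ADJACENCY delivers 4 vertices per primitive,
// so gl_in[0] .. gl_in[3] are all valid here.
layout (lines_adjacency) in;
layout (line_strip, max_vertices = 4) out;

void main() {
    for (int i = 0; i < 4; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```

The draw call would then become glDrawArrays(GL_LINES_ADJACENCY, 0, 4); so that the four points form one adjacency primitive.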
I am trying to generate OpenGL primitives out of a vertex made of 6 integers.
I.e., the 6 integer values will generate 4 custom line_strips.
First I am trying to move the 6-integer array from the vertex shader to the geometry shader, and in order to do that I am doing a simple test as follows.
This is the code for the Vertex Shader:
#version 330
layout (location = 0) in int inVertex[6];
out int outVertex[6];
void main()
{
outVertex = inVertex;
}
And for the Geometry Shader, which hardcodes a segment:
#version 330
in int vertex[6];
layout (line_strip, max_vertices = 2) out;
void main()
{
gl_Position = vec4(-0.2, -0.2, 0.0, 1.0);
EmitVertex();
gl_Position = vec4(-0.2 +0.4, -0.2, 0.0, 1.0);
EmitVertex();
EndPrimitive();
}
But I get an empty screen.
If I modify the Vertex shader to this:
#version 330
layout (location = 0) in int inVertex[6];
out int outVertex[6];
void main()
{
//outVertex = inVertex;
gl_Position = vec4(0.0, 0.0, 0.0, 0.0);
}
and the Geometry Shader to this:
#version 330
//in int candle[6];
layout (points) in;
layout (line_strip, max_vertices = 2) out;
void main()
{
gl_Position = vec4(-0.2, -0.2, 0.0, 1.0);
EmitVertex();
gl_Position = vec4(-0.2 +0.4, -0.2, 0.0, 1.0);
EmitVertex();
EndPrimitive();
}
Then I get the segment in the screen:
Is it mandatory to use gl_Position? If so, how can I pass additional variables alongside gl_Position to enrich my vertex?
Vertex shaders operate on vertices (oddly enough). They have a 1:1 correspondence with vertex data: for each vertex in the rendering operation, you get one output vertex.
Geometry shaders operate on primitives. They have a 1:many relationship with their input primitives. A GS gets a single primitive and it outputs 0 or more primitives, within various restrictions.
A primitive is composed of one or more vertices. So a GS must be able to get multiple pieces of input data, as generated by the VSs that executed to generate the primitive the GS is processing. As such, all GS inputs are arrayed. That is, for any type T which the VS outputs, the GS takes a T[] (or equivalent). This is true even if your GS's input primitive type is points.
So if the VS has out int outVertex[6]; as an output, then the corresponding GS input ought to be in int outVertex[][6];. Note that the indices of arrays of arrays are read left-to-right, so this is an array of primitive-count elements, where each element is an array of 6 ints. Also note that the names need to match; you should have gotten a linker error for the name mismatch.
Alternatively, you can use an interface block, to make the arraying a bit easier:
//VS:
out Prim
{
int data[6];
};
//GS:
in Prim
{
int data[6];
} vsData[];
//Access with vsData[primIx].data[X];
The vertex shader is executed for each vertex, hence its inputs and outputs are not arrays; each is a single attribute:
#version 330
layout (location = 0) in vec3 inVertex;
out vec3 vertex;
void main()
{
vertex = inVertex;
}
The input to the geometry shader is a primitive. All the outputs of the vertex shader invocations which form the primitive are aggregated, thus the inputs of the geometry shader are arrays.
The size of the input array depends on the input primitive type of the geometry shader, e.g. for a line the input array size is 2:
#version 330
layout (lines) in;
layout (line_strip, max_vertices = 4) out;
in vec3 vertex[];
void main()
{
// [...]
}
I want to write the model-space vertex positions of a 3D mesh to a texture in OpenGL. Currently, in order to write to a texture, I render a fullscreen quad to it in a separate pass (based on the tutorial seen here).
The problem is that, from what I understand, I cannot pass more than one vertex list to the shader: the vertex shader can only be bound to one vertex list at a time, and that slot is currently occupied by the screen-space quad.
Vertex Shader code:
layout(location = 0) in vec4 in_position;
out vec4 vs_position;
void main() {
vs_position = in_position;
gl_Position = vec4(in_position.xy, 0.0, 1.0);
}
Fragment Shader code:
in vec4 vs_position; // coordinate in the screen-space quad (name matches the VS output)
out vec4 outColor;
void main() {
vec2 uv = vec2(0.5, 0.5) * vs_position.xy + vec2(0.5, 0.5);
outColor = ?? // Here I need my vertex position
}
Possible solution (?):
My idea was to introduce another shader pass before this one to output the positions as r, g, b, so that the position of the current texel can be retrieved from the texture (the only input format large enough to store many vertices).
Although not 100% accurate, this image might give you an idea of what I want to do:
Model space coordinate map
Is there a way to encode the positions to the texture without using a fullscreen quad on the GPU?
Please let me know if you want to see more code.
Instead of generating the quad CPU-side, I would attach a geometry shader and create the quad there; that should free up the vertex input slot for your model geometry to be passed in. Drawing a single point, e.g. glDrawArrays(GL_POINTS, 0, 1), is then enough to trigger it.
Geometry shader:
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 texcoord;
void main()
{
gl_Position = vec4( 1.0, 1.0, 0.5, 1.0 );
texcoord = vec2( 1.0, 1.0 );
EmitVertex();
gl_Position = vec4(-1.0, 1.0, 0.5, 1.0 );
texcoord = vec2( 0.0, 1.0 );
EmitVertex();
gl_Position = vec4( 1.0,-1.0, 0.5, 1.0 );
texcoord = vec2( 1.0, 0.0 );
EmitVertex();
gl_Position = vec4(-1.0,-1.0, 0.5, 1.0 );
texcoord = vec2( 0.0, 0.0 );
EmitVertex();
EndPrimitive();
}
I want to draw multiple fans with a GS. Each fan should billboard toward the camera at all times, which makes it necessary to multiply each vertex by the MVP matrix.
Since each fan is movable by the user, I came up with the idea of feeding the GS with the position.
The following geometry shader works as expected with points as input and output:
uniform mat4 VP;
uniform mat4 sharedModelMatrix;
const int STATE_VERTEX_NUMBER = 38;
layout (shared) uniform stateShapeData {
vec2 data[STATE_VERTEX_NUMBER];
};
layout (triangles) in;
layout (triangle_strip, max_vertices = 80) out;
void main(void)
{
int i;
mat4 modelMatrix = sharedModelMatrix;
modelMatrix[3] = gl_in[0].gl_Position;
mat4 MVP = VP * modelMatrix;
gl_Position = MVP * vec4( 0, 0 , 0, 1 );
EmitVertex(); // epicenter
for (i = 37; i >= 0; i--) {
gl_Position = MVP * vec4( data[i], 0, 1 );
EmitVertex();
}
gl_Position = MVP * vec4( data[0], 0, 1 );
EmitVertex();
}
I tried to run this with glDrawElements, glDrawArrays and glMultiDrawArrays. None of these commands draws the full fan. Each draws the first triangle filled and the remaining vertices as points.
So, the bottom question is: Is it possible to draw a fan with GS created vertices and how?
Outputting fans in a Geometry Shader is very unnatural as you have discovered.
You are currently outputting the vertices in fan-order, which is a construct that is completely foreign to GPUs after primitive assembly. Fans are useful as assembler input, but as far as output is concerned the rasterizer only understands the concept of strips.
To write this shader properly, you need to decompose this fan into a series of individual triangles. That means the loop you wrote is actually going to output the epicenter on each iteration.
void main(void)
{
int i;
mat4 modelMatrix = sharedModelMatrix;
modelMatrix[3] = gl_in[0].gl_Position;
mat4 MVP = VP * modelMatrix;
// note: with 37 triangles * 3 vertices, the layout must declare max_vertices >= 111
for (i = 37; i >= 1; i--) { // stop at i == 1 so data[i-1] stays in bounds
gl_Position = MVP * vec4( 0, 0 , 0, 1 );
EmitVertex(); // epicenter
gl_Position = MVP * vec4( data[i], 0, 1 );
EmitVertex();
gl_Position = MVP * vec4( data[i-1], 0, 1 );
EmitVertex();
// Fan and strip DNA just won't splice
EndPrimitive ();
}
}
You cannot exploit strip-ordering when drawing this way; you wind up having to end the output primitive (strip) multiple times. About the only benefit you get from drawing in fan-order is cache locality within the loop. If you understand that geometry shaders are expected to output triangle strips, why not order your input vertices that way to begin with?
In my first OpenGL 'voxel' project I'm using a geometry shader to create cubes from gl_points, and it works pretty well, but I'm sure it can be done better. In the color's alpha channel I pass info about which faces should be rendered (to skip faces adjacent to other cubes); the vertices for the visible faces are then created using a 'reference' cube definition. Every point is multiplied by 3 matrices. Instinct tells me that maybe the whole face could be multiplied by them instead of every point, but my math skills are poor, so please advise.
#version 330
layout (points) in;
layout (triangle_strip,max_vertices=24) out;
smooth out vec4 oColor;
in VertexData
{
vec4 colour;
//vec3 normal;
} vertexData[];
uniform mat4 cameraToClipMatrix;
uniform mat4 worldToCameraMatrix;
uniform mat4 modelToWorldMatrix;
const vec4 cubeVerts[8] = vec4[8](
vec4(-0.5 , -0.5, -0.5,1), //LB 0
vec4(-0.5, 0.5, -0.5,1), //L T 1
vec4(0.5, -0.5, -0.5,1), //R B 2
vec4( 0.5, 0.5, -0.5,1), //R T 3
//back face
vec4(-0.5, -0.5, 0.5,1), // LB 4
vec4(-0.5, 0.5, 0.5,1), // LT 5
vec4(0.5, -0.5, 0.5,1), // RB 6
vec4(0.5, 0.5, 0.5,1) // RT 7
);
const int cubeIndices[24] = int [24]
(
0,1,2,3, //front
7,6,3,2, //right
7,5,6,4, //back or whatever
4,0,6,2, //btm
1,0,5,4, //left
3,1,7,5
);
void main()
{
vec4 temp;
int a = int(vertexData[0].colour[3]);
//btm face
if (a>31)
{
for (int i=12;i<16; i++)
{
int v = cubeIndices[i];
temp = modelToWorldMatrix * (gl_in[0].gl_Position + cubeVerts[v]);
temp = worldToCameraMatrix * temp;
gl_Position = cameraToClipMatrix * temp;
//oColor = vertexData[0].colour;
//oColor[3]=1;
oColor=vec4(1,1,1,1);
EmitVertex();
}
a = a - 32;
EndPrimitive();
}
//top face
if (a >15 )
...
}
------- updated code:------
//one matrix to transform them all
mat4 mvp = cameraToClipMatrix * worldToCameraMatrix * modelToWorldMatrix;
//transform and store cube verts for future use
for (int i=0;i<8; i++)
{
transVerts[i]=mvp * (gl_in[0].gl_Position + cubeVerts[i]);
}
//btm face
if (a>31)
{
for (int i=12;i<16; i++)
{
int v = cubeIndices[i];
gl_Position = transVerts[v];
oColor = vertexData[0].colour*0.55;
//oColor = vertexData[0].colour;
EmitVertex();
}
a = a - 32;
EndPrimitive();
}
In OpenGL, you don't work with faces (or lines, for that matter), so you can't apply a transformation to a face. You need to apply it to the vertices that compose that face, as you're doing.
One possible optimization is that you don't need to separate out the matrix transformations as you do. If you multiply them once in your application code and pass the product as a single uniform into your shader, that will save some time.
Another optimization would be to transform the eight cube vertices in a loop at the beginning, store them in a local array, and then reference their transformed positions in your if logic. Right now, if you render every face of the cube, you're transforming 24 vertices, each one three times.
I'm trying to render arbitrarily wide lines (in screen space) using a geometry shader. At first all seems good, but at certain view positions the lines are rendered incorrectly:
The image on the left shows the correct rendering (three lines on the positive X, Y and Z axes, 2 pixels wide).
When the camera moves near the origin (and indeed near the lines), the lines are rendered like the right image. The shader seems straightforward, and I don't understand what's going on in my GPU:
--- Vertex Shader
#version 410 core
// Modelview-projection matrix
uniform mat4 ds_ModelViewProjection;
// Vertex position
in vec4 ds_Position;
// Vertex color
in vec4 ds_Color;
// Processed vertex color
out vec4 ds_VertexColor;
void main()
{
gl_Position = ds_ModelViewProjection * ds_Position;
ds_VertexColor = ds_Color;
}
--- Geometry Shader
#version 410 core
// Viewport size, in pixels
uniform vec2 ds_Viewport;
// Line width, in pixels
uniform float ds_LineWidth = 2.0;
// Processed vertex color (from VS, in clip space)
in vec4 ds_VertexColor[2];
// Processed primitive vertex color
out vec4 ds_GeoColor;
layout (lines) in;
layout (triangle_strip, max_vertices = 4) out;
void main()
{
vec3 ndc0 = gl_in[0].gl_Position.xyz / gl_in[0].gl_Position.w;
vec3 ndc1 = gl_in[1].gl_Position.xyz / gl_in[1].gl_Position.w;
vec2 lineScreenForward = normalize(ndc1.xy - ndc0.xy);
vec2 lineScreenRight = vec2(-lineScreenForward.y, lineScreenForward.x);
vec2 lineScreenOffset = (vec2(ds_LineWidth) / ds_Viewport) * lineScreenRight;
gl_Position = vec4(ndc0.xy + lineScreenOffset, ndc0.z, 1.0);
ds_GeoColor = ds_VertexColor[0];
EmitVertex();
gl_Position = vec4(ndc0.xy - lineScreenOffset, ndc0.z, 1.0);
ds_GeoColor = ds_VertexColor[0];
EmitVertex();
gl_Position = vec4(ndc1.xy + lineScreenOffset, ndc1.z, 1.0);
ds_GeoColor = ds_VertexColor[1];
EmitVertex();
gl_Position = vec4(ndc1.xy - lineScreenOffset, ndc1.z, 1.0);
ds_GeoColor = ds_VertexColor[1];
EmitVertex();
EndPrimitive();
}
--- Fragment Shader
// Processed primitive vertex color
in vec4 ds_GeoColor;
// The fragment color.
out vec4 ds_FragColor;
void main()
{
ds_FragColor = ds_GeoColor;
}
Your mistake is in this:
gl_Position = vec4(ndc0.xy + lineScreenOffset, ndc0.z, 1.0 /* WRONG */);
To fix it:
vec4 cpos = gl_in[0].gl_Position;
gl_Position = vec4(cpos.xy + lineScreenOffset*cpos.w, cpos.z, cpos.w);
What you did was lose the information about W, and thus detune the HW clipper, downgrading it from a 3D clipper into a 2D one.
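For completeness, a sketch of the emission code with that correction applied to all four vertices (same variable names as the shader above; the NDC-space offset is scaled back into clip space by each vertex's own W):

```glsl
vec4 cpos0 = gl_in[0].gl_Position;
vec4 cpos1 = gl_in[1].gl_Position;

// Offset in clip space: NDC offset times W keeps the clipper's 3D information intact.
gl_Position = vec4(cpos0.xy + lineScreenOffset * cpos0.w, cpos0.z, cpos0.w);
ds_GeoColor = ds_VertexColor[0];
EmitVertex();
gl_Position = vec4(cpos0.xy - lineScreenOffset * cpos0.w, cpos0.z, cpos0.w);
ds_GeoColor = ds_VertexColor[0];
EmitVertex();
gl_Position = vec4(cpos1.xy + lineScreenOffset * cpos1.w, cpos1.z, cpos1.w);
ds_GeoColor = ds_VertexColor[1];
EmitVertex();
gl_Position = vec4(cpos1.xy - lineScreenOffset * cpos1.w, cpos1.z, cpos1.w);
ds_GeoColor = ds_VertexColor[1];
EmitVertex();
EndPrimitive();
```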
Today I found the answer by myself. I don't understand it entirely, but it solved the problem.
The problem arises when the line vertices go beyond the near plane of the projection matrix defined for the scene (in my case, all the end vertices of the three lines). The solution is to manually clip the line vertices against the view frustum (that way the vertices cannot go beyond the near plane!).
What happens to ndc0 and ndc1 when they are outside the view frustum? Looking at the images, it seems that the XY components have their signs flipped (after being transformed into clip space!): that would mean the W coordinate is the opposite of the normal one, isn't it?
Without the geometry shader, the rasterizer would have taken responsibility for clipping those primitives against the view frustum, but since I've introduced the geometry shader, I need to compute those results myself. Can anyone suggest a link about this matter?
I met a similar problem while trying to draw the normals of vertices as colored lines. The way I drew a normal was to draw all the vertices as points and then use the GS to expand each vertex into a line. The GS was straightforward, but I found random incorrect lines running across the entire screen. Then I added the line marked by the comment below into the GS, and the problem was fixed. It seems the problem was that one end of the line was within the frustum while the other was outside, so I ended up with lines running across the entire screen.
// Draw normal of a vertex by expanding a vertex into a line
[maxvertexcount(2)]
void GSExpand2( point PointVertex points[ 1 ], inout LineStream< PointInterpolants > stream )
{
PointInterpolants v;
float4 pos0 = mul(float4(points[0].pos, 1), g_viewproj);
pos0 /= pos0.w;
float4 pos1 = mul(float4(points[0].pos + points[0].normal * 0.1f, 1), g_viewproj);
pos1 /= pos1.w;
// seems like I need to manually clip the lines, otherwise I end up with incorrect lines running across the entire screen
if ( pos0.z < 0 || pos1.z < 0 || pos0.z > 1 || pos1.z > 1 )
return;
v.color = float3( 0, 0, 1 );
v.pos = pos0;
stream.Append( v );
v.color = float3( 1, 0, 0 );
v.pos = pos1;
stream.Append( v );
stream.RestartStrip();
}
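In GLSL terms, a comparable early-out for the line geometry shader above would test clip-space coordinates before any divide by W (in OpenGL a point is in front of the near plane when z >= -w); a coarse sketch that drops, rather than exactly clips, partially visible lines:

```glsl
vec4 p0 = gl_in[0].gl_Position; // clip-space positions, before dividing by w
vec4 p1 = gl_in[1].gl_Position;
// Discard the whole line if either endpoint lies behind the near plane.
if (p0.z < -p0.w || p1.z < -p1.w)
    return;
```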