glLineStipple has been deprecated in the latest OpenGL APIs.
What is it replaced with?
If not replaced, how can I get a similar effect?
(I don't want to use a compatibility profile of course...)
Sorry, it hasn't been replaced with anything. The first idea that comes to mind for emulating it would be the geometry shader: you feed the geometry shader a line, compute its screen-space length and, based on that, generate a variable number of sub-lines between its start and end vertex.
EDIT: Perhaps you could also use a 1D texture with the alpha (or red) channel encoding the pattern as 0.0 (no line) or 1.0 (line), have the line's texture coordinate run from 0 to 1, and then perform a simple alpha test in the fragment shader, discarding fragments with alpha below some threshold. You can use the geometry shader to generate the line texCoords, as otherwise you would need different vertices for every line. This way you can also make the texCoord dependent on the screen-space length of the line.
The whole thing gets more difficult if you draw triangles (using polygon mode GL_LINE). Then you have to do the triangle-to-line transformation yourself in the geometry shader, taking in triangles and putting out lines (that could also be a reason for deprecating polygon mode in the future, if it hasn't happened already).
EDIT: Although I believe this question to be abandoned, I have made a simple shader triple for the second approach. It's just a minimal solution; feel free to add custom features yourself. I haven't tested it because I lack the necessary hardware, but you should get the point:
#version 330

uniform mat4 modelViewProj;

layout(location=0) in vec4 vertex;

void main()
{
    gl_Position = modelViewProj * vertex;
}
The vertex shader is a simple pass through.
#version 330

layout(lines) in;
layout(line_strip, max_vertices=2) out;

uniform vec2 screenSize;
uniform float patternSize;

noperspective out float texCoord;

void main()
{
    vec2 winPos0 = screenSize.xy * gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;
    vec2 winPos1 = screenSize.xy * gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;

    gl_Position = gl_in[0].gl_Position;
    texCoord = 0.0;
    EmitVertex();

    gl_Position = gl_in[1].gl_Position;
    texCoord = 0.5 * length(winPos1 - winPos0) / patternSize;
    EmitVertex();

    EndPrimitive();
}
In the geometry shader we take a line and compute its screen-space length in pixels. We then divide this by the size of the stipple pattern texture, which would be factor*16 when emulating a call to glLineStipple(factor, pattern). This is taken as the 1D texture coordinate of the second line end point.
Note that this texture coordinate has to be interpolated linearly (noperspective interpolation specifier). The usual perspective-correct interpolation would cause the stipple pattern to "squeeze together" on farther-away parts of the line, whereas we are explicitly working with screen-space values.
#version 330

uniform sampler1D pattern;
uniform vec4 lineColor;

noperspective in float texCoord;

layout(location=0) out vec4 color;

void main()
{
    if(texture(pattern, texCoord).r < 0.5)
        discard;
    color = lineColor;
}
The fragment shader now just performs a simple alpha test using the value from the pattern texture, which contains a 1 for line and a 0 for no line. So to emulate the fixed-function stipple you would use a 16-pixel single-component 1D texture instead of a 16-bit pattern. Don't forget to set the pattern texture's wrapping mode to GL_REPEAT; as for the filtering mode, GL_NEAREST is probably a good idea.
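For reference, a minimal sketch of how such a pattern texture could be created from a classic 16-bit stipple value; the helper name and the GL_R8/GL_RED format choice are illustrative, not mandated by the approach:

GLuint CreateStippleTexture(GLushort pattern) // hypothetical helper
{
    // expand each of the 16 bits into one texel: 255 = line, 0 = gap
    GLubyte texels[16];
    for (int i = 0; i < 16; ++i)
        texels[i] = (pattern & (1 << i)) ? 255 : 0;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_1D, tex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, 16, 0, GL_RED, GL_UNSIGNED_BYTE, texels);
    return tex;
}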
But as said earlier, if you want to render triangles using glPolygonMode, it won't work this way. Instead you have to adapt the geometry shader to accept triangles and generate 3 lines for each triangle.
EDIT: In fact, OpenGL 3's direct support for integer operations in shaders allows us to completely drop this whole 1D-texture approach and work straightforwardly with an actual bit pattern. Thus the geometry shader is slightly changed to output the actual screen-space pattern coordinate, without normalization:
texCoord = 0.5 * length(winPos1-winPos0);
In the fragment shader we then just take the bit pattern as an unsigned integer (though 32-bit, in contrast to glLineStipple's 16-bit value) and the stretch factor of the pattern, and take the texture coordinate (well, no texture anymore actually, but never mind) modulo 32 to get its position on the pattern (the explicit uints are annoying, but my GLSL compiler says implicit conversions between int and uint are evil):
uniform uint pattern;
uniform float factor;

...

uint bit = uint(round(linePos / factor)) & 31U;
if((pattern & (1U << bit)) == 0U)
    discard;
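On the client side, feeding this variant is then just a matter of two uniforms; a small sketch, assuming program is the linked program object (the values mirror a glLineStipple(2, 0x18FF) call):

GLint locPattern = glGetUniformLocation(program, "pattern");
GLint locFactor  = glGetUniformLocation(program, "factor");
glUseProgram(program);
glUniform1ui(locPattern, 0x18FF); // 32-bit pattern, upper bits zero here
glUniform1f(locFactor, 2.0f);     // stretch factor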
To answer this question, we first have to investigate what glLineStipple actually does.
See the image, where the quad at the left is drawn by 4 separated line segments using the primitive type GL_LINES.
The circle at the right is drawn by a consecutive polygon line, using the primitive type GL_LINE_STRIP.
When using line segments, the stipple pattern starts at each segment. The pattern is restarted at each primitive.
When using a line strip, the stipple pattern is applied seamlessly to the entire polygon. The pattern continues seamlessly across vertex coordinates.
Be aware that the length of the pattern is stretched at the diagonals. This is possibly the key to the implementation.
For separate line segments, this is not very complicated at all, but for line strips things get a bit more complicated. The length of the line cannot be calculated in the shader program without knowing all the primitives of the line. Even if all the primitives were known (e.g. via an SSBO), the calculation would have to be done in a loop.
See also Dashed lines with OpenGL core profile.
Anyway, it is not necessary to implement a geometry shader. The trick is to know the start of the line segment in the fragment shader. This is easy using the flat interpolation qualifier.
The vertex shader has to pass the normalized device coordinate to the fragment shader, once with default interpolation and once with flat (no) interpolation. This causes the first input to the fragment shader to contain the NDC coordinate of the actual position on the line, and the latter the NDC coordinate of the start of the line.
#version 330

layout (location = 0) in vec3 inPos;

flat out vec3 startPos;
out vec3 vertPos;

uniform mat4 u_mvp;

void main()
{
    vec4 pos = u_mvp * vec4(inPos, 1.0);
    gl_Position = pos;
    vertPos = pos.xyz / pos.w;
    startPos = vertPos;
}
In addition to the varying inputs, the fragment shader has uniform variables. u_resolution contains the width and height of the viewport. u_factor and u_pattern are the multiplier and the 16-bit pattern according to the parameters of glLineStipple.
So the length of the line from the start to the current fragment can be calculated:
vec2 dir = (vertPos.xy-startPos.xy) * u_resolution/2.0;
float dist = length(dir);
Fragments on a gap can then be discarded with the discard command.
uint bit = uint(round(dist / u_factor)) & 15U;
if ((u_pattern & (1U << bit)) == 0U)
    discard;
Fragment shader:
#version 330

flat in vec3 startPos;
in vec3 vertPos;

out vec4 fragColor;

uniform vec2  u_resolution;
uniform uint  u_pattern;
uniform float u_factor;

void main()
{
    vec2 dir = (vertPos.xy - startPos.xy) * u_resolution / 2.0;
    float dist = length(dir);

    uint bit = uint(round(dist / u_factor)) & 15U;
    if ((u_pattern & (1U << bit)) == 0U)
        discard;
    fragColor = vec4(1.0);
}
This implementation is much easier and shorter than using geometry shaders. The flat interpolation qualifier has been supported since GLSL 1.30 and GLSL ES 3.00, versions which do not support geometry shaders at all.
See the line rendering generated with the above shader.
The shader gives a proper result for line segments, but fails for line strips, since the stipple pattern is restarted at each vertex coordinate.
The issue can't even be solved by a geometry shader. This part of the question remains unresolved.
For the following simple demo program I've used the GLFW API for creating a window, GLEW for loading OpenGL and GLM (OpenGL Mathematics) for the math. I don't provide the code for the function CreateProgram, which just creates a program object from the vertex shader and fragment shader source code (a minimal sketch is given after the listing):
#define _USE_MATH_DEFINES
#include <cmath>
#include <vector>
#include <string>

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

#include <gl/gl_glew.h>
#include <GLFW/glfw3.h>
std::string vertShader = R"(
#version 330

layout (location = 0) in vec3 inPos;

flat out vec3 startPos;
out vec3 vertPos;

uniform mat4 u_mvp;

void main()
{
    vec4 pos = u_mvp * vec4(inPos, 1.0);
    gl_Position = pos;
    vertPos = pos.xyz / pos.w;
    startPos = vertPos;
}
)";

std::string fragShader = R"(
#version 330

flat in vec3 startPos;
in vec3 vertPos;

out vec4 fragColor;

uniform vec2  u_resolution;
uniform uint  u_pattern;
uniform float u_factor;

void main()
{
    vec2 dir = (vertPos.xy - startPos.xy) * u_resolution / 2.0;
    float dist = length(dir);

    uint bit = uint(round(dist / u_factor)) & 15U;
    if ((u_pattern & (1U << bit)) == 0U)
        discard;
    fragColor = vec4(1.0);
}
)";
GLuint CreateVAO(std::vector<glm::vec3> &varray)
{
    GLuint bo[2], vao;
    glGenBuffers(2, bo);
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, bo[0]);
    glBufferData(GL_ARRAY_BUFFER, varray.size() * sizeof(*varray.data()), varray.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    return vao;
}
int main(void)
{
    if ( glfwInit() == 0 )
        return 0;
    GLFWwindow *window = glfwCreateWindow( 800, 600, "GLFW OGL window", nullptr, nullptr );
    if ( window == nullptr )
        return 0;
    glfwMakeContextCurrent(window);
    glewExperimental = true;
    if ( glewInit() != GLEW_OK )
        return 0;

    GLuint program    = CreateProgram(vertShader, fragShader);
    GLint loc_mvp     = glGetUniformLocation(program, "u_mvp");
    GLint loc_res     = glGetUniformLocation(program, "u_resolution");
    GLint loc_pattern = glGetUniformLocation(program, "u_pattern");
    GLint loc_factor  = glGetUniformLocation(program, "u_factor");

    glUseProgram(program);

    GLushort pattern = 0x18ff;
    GLfloat  factor  = 2.0f;
    glUniform1ui(loc_pattern, pattern);
    glUniform1f(loc_factor, factor);
    //glLineStipple(2.0, pattern);
    //glEnable(GL_LINE_STIPPLE);

    glm::vec3 p0(-1.0f, -1.0f, 0.0f);
    glm::vec3 p1( 1.0f, -1.0f, 0.0f);
    glm::vec3 p2( 1.0f,  1.0f, 0.0f);
    glm::vec3 p3(-1.0f,  1.0f, 0.0f);
    std::vector<glm::vec3> varray1{ p0, p1, p1, p2, p2, p3, p3, p0 };
    GLuint vao1 = CreateVAO(varray1);

    std::vector<glm::vec3> varray2;
    for (size_t u = 0; u <= 360; u += 8)
    {
        double a = u * M_PI / 180.0;
        double c = cos(a), s = sin(a);
        varray2.emplace_back(glm::vec3((float)c, (float)s, 0.0f));
    }
    GLuint vao2 = CreateVAO(varray2);

    glm::mat4 project(1.0f);
    int vpSize[2]{ 0, 0 };
    while (!glfwWindowShouldClose(window))
    {
        int w, h;
        glfwGetFramebufferSize(window, &w, &h);
        if (w != vpSize[0] || h != vpSize[1])
        {
            vpSize[0] = w; vpSize[1] = h;
            glViewport(0, 0, vpSize[0], vpSize[1]);
            float aspect = (float)w / (float)h;
            project = glm::ortho(-aspect, aspect, -1.0f, 1.0f, -10.0f, 10.0f);
            glUniform2f(loc_res, (float)w, (float)h);
        }

        glClear(GL_COLOR_BUFFER_BIT);

        glm::mat4 modelview1( 1.0f );
        modelview1 = glm::translate(modelview1, glm::vec3(-0.6f, 0.0f, 0.0f) );
        modelview1 = glm::scale(modelview1, glm::vec3(0.5f, 0.5f, 1.0f) );
        glm::mat4 mvp1 = project * modelview1;

        glUniformMatrix4fv(loc_mvp, 1, GL_FALSE, glm::value_ptr(mvp1));
        glBindVertexArray(vao1);
        glDrawArrays(GL_LINES, 0, (GLsizei)varray1.size());

        glm::mat4 modelview2( 1.0f );
        modelview2 = glm::translate(modelview2, glm::vec3(0.6f, 0.0f, 0.0f) );
        modelview2 = glm::scale(modelview2, glm::vec3(0.5f, 0.5f, 1.0f) );
        glm::mat4 mvp2 = project * modelview2;

        glUniformMatrix4fv(loc_mvp, 1, GL_FALSE, glm::value_ptr(mvp2));
        glBindVertexArray(vao2);
        glDrawArrays(GL_LINE_STRIP, 0, (GLsizei)varray2.size());

        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
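For completeness, here is a minimal sketch of what the omitted CreateProgram helper could look like; error handling is reduced to a bare status check:

GLuint CreateShader(GLenum type, const std::string &source)
{
    GLuint shader = glCreateShader(type);
    const char *src = source.c_str();
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);
    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE)
        return 0; // real code should print the info log here
    return shader;
}

GLuint CreateProgram(const std::string &vsh, const std::string &fsh)
{
    GLuint vs = CreateShader(GL_VERTEX_SHADER, vsh);
    GLuint fs = CreateShader(GL_FRAGMENT_SHADER, fsh);
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);
    glDeleteShader(vs);
    glDeleteShader(fs);
    return program;
}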
See also
Dashed line in OpenGL3?
OpenGL ES - Dashed Lines
Since I struggled a bit (no pun intended) to get it right, I thought it could be useful to others if I shared my implementation of a set of stippling shaders based on Christian Rau's version.
To control pattern density, the fragment shader requires the number of patterns nPatterns per unit length of the viewport, instead of a stretch factor. An optional clipping-plane feature is also included.
The rest is mainly commenting and cleaning.
Feel free to use it for all intents and purposes.
The vertex shader:
#version 330

in vec4 vertex;

void main(void)
{
    // just a pass-through
    gl_Position = vertex;
}
The geometry shader:
#version 330

layout(lines) in;
layout(line_strip, max_vertices = 2) out;

uniform mat4 pvmMatrix;
uniform mat4 mMatrix;
uniform mat4 vMatrix;

out vec3 vPosition; // passed to the fragment shader for plane clipping
out float texCoord; // passed to the fragment shader for stipple pattern

void main(void)
{
    // to achieve uniform pattern density whatever the line orientation
    // the upper texture coordinate is made proportional to the line's length
    vec3 pos0 = gl_in[0].gl_Position.xyz;
    vec3 pos1 = gl_in[1].gl_Position.xyz;
    float max_u_texture = length(pos1 - pos0);

    // Line Start
    gl_Position = pvmMatrix * gl_in[0].gl_Position;
    texCoord = 0.0;
    // view-space position for the clip plane (computed from the input vertex,
    // not from the already projected gl_Position)
    vec4 vsPos0 = vMatrix * mMatrix * gl_in[0].gl_Position;
    vPosition = vsPos0.xyz / vsPos0.w;
    EmitVertex(); // one down, one to go

    // Line End
    gl_Position = pvmMatrix * gl_in[1].gl_Position;
    texCoord = max_u_texture;
    // view-space position for the clip plane
    vec4 vsPos1 = vMatrix * mMatrix * gl_in[1].gl_Position;
    vPosition = vsPos1.xyz / vsPos1.w;
    EmitVertex();

    // done
    EndPrimitive();
}
The fragment shader:
#version 330

uniform int pattern;    // an integer between 0 and 0xFFFF representing the bitwise pattern
uniform int nPatterns;  // the number of patterns per unit length of the viewport, typically 200-300 for good pattern density
uniform vec4 color;
uniform vec4 clipPlane0; // defined in view space

in float texCoord;
in vec3 vPosition;

layout(location = 0) out vec4 fragColor;

void main(void)
{
    // test the vertex position vs. the clip plane position (optional)
    if (vPosition.z > clipPlane0.w) {
        discard;
        return;
    }
    // use a 16-bit masking pattern:
    // map the texture coordinate to the interval [0, 16)
    uint bitpos = uint(round(texCoord * nPatterns)) % 16U;
    // move a unit bit 1U to position bitpos so that
    // bit is an integer between 1 and 1000 0000 0000 0000 = 0x8000
    uint bit = (1U << bitpos);
    // test the bit against the masking pattern
    //   Line::SOLID:      pattern = 0xFFFF;  // = 1111 1111 1111 1111 = solid pattern
    //   Line::DASH:       pattern = 0x3F3F;  // = 0011 1111 0011 1111
    //   Line::DOT:        pattern = 0x6666;  // = 0110 0110 0110 0110
    //   Line::DASHDOT:    pattern = 0xFF18;  // = 1111 1111 0001 1000
    //   Line::DASHDOTDOT: pattern = 0x7E66;  // = 0111 1110 0110 0110
    uint up = uint(pattern);
    // discard the fragment if the bit doesn't match the masking pattern
    if ((up & bit) == 0U) discard;
    fragColor = color;
}
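Client-side setup is not shown above; here is a minimal sketch, assuming a linked program object and a GLM matrix named pvm (the pattern values are taken from the comments in the fragment shader):

glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "pvmMatrix"), 1, GL_FALSE, glm::value_ptr(pvm));
// mMatrix and vMatrix would be set the same way when the clip plane is used
glUniform1i(glGetUniformLocation(program, "pattern"), 0x3F3F); // Line::DASH
glUniform1i(glGetUniformLocation(program, "nPatterns"), 250);  // pattern density
glUniform4f(glGetUniformLocation(program, "color"), 0.0f, 0.0f, 0.0f, 1.0f);
glUniform4f(glGetUniformLocation(program, "clipPlane0"), 0.0f, 0.0f, 1.0f, 1.0e6f); // large w = clipping effectively off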
I have been playing around with OpenGL and shaders and got myself into shadow mapping.
Trying to follow tutorials on the Internet (ogldev and learnopengl), I got some unexpected results.
The issue is best described with few screenshots (I have added a static quad with depth framebuffer for debugging):
Somehow I managed to get shadows rendered on a ground quad once, with a static light (this commit). But the shadow pattern is, again, incorrect. I strongly suspect the model transformation matrix calculations for this:
The way I render the scene is quite straightforward:
create the pipelines:
for mapping the shadows (filling the depth frame buffer)
for rendering the scene using the depth frame buffer
(extra) debugging one, rendering depth frame buffer to a static quad on a screen
fill the depth frame buffer: using the shadow mapping pipeline, render the scene from the light point, using orthographic projection
render the shaded scene: using the rendering pipeline and depth frame buffer bind as the first texture, render the scene from a camera point, using perspective projection
Seems like the algorithm in all those shadow mapping tutorials out there. Yet, instead of a moiré effect (like in all of the tutorials), I get no shadow on the bottom plane whatsoever and weird artifacts (incorrect shadow mapping) on the 3D (chicken) model.
Interestingly enough, if I do not render the chicken model (for both the shadow mapping and the final rendering passes), the plane is lit with the same weird pattern:
I also had to remove any normal transformations from the fragment shader and disable face culling to make the ground plane lit. With front-face culling the plane does not appear in the shadow map (depth buffer).
I assume the following might be causing this issue:
wrong depth frame buffer setup (data format or texture parameters)
flipped depth frame buffer texture
wrong shadow calculations in rendering shaders
wrong light matrices (view & projection) setup
wrong matrix calculations in the rendering shaders (given the model transformation matrices for both chicken model and the quad contain both rotation and scaling)
Unfortunately, I ran out of ideas even on how to assess the above assumptions.
Looking for any help on the matter (also feel free to criticize any of my approaches, including C++, CMake, OpenGL and computer graphics).
The full solution source code is available on GitHub, but for convenience I have placed the heavily cut source code below.
shadow-mapping.vert:
#version 410

layout (location = 0) in vec3 vertexPosition;

out gl_PerVertex
{
    vec4 gl_Position;
};

uniform mat4 lightSpaceMatrix;
uniform mat4 modelTransformation;

void main()
{
    gl_Position = lightSpaceMatrix * modelTransformation * vec4(vertexPosition, 1.0);
}
shadow-mapping.frag:
#version 410

layout (location = 0) out float fragmentDepth;

void main()
{
    fragmentDepth = gl_FragCoord.z;
}
shadow-rendering.vert:
#version 410

layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec3 vertexNormal;
layout (location = 2) in vec2 vertexTextureCoord;

out VS_OUT
{
    vec3 fragmentPosition;
    vec3 normal;
    vec2 textureCoord;
    vec4 fragmentPositionInLightSpace;
} vsOut;

out gl_PerVertex {
    vec4 gl_Position;
};

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform mat4 lightSpaceMatrix;

void main()
{
    vsOut.fragmentPosition = vec3(model * vec4(vertexPosition, 1.0));
    vsOut.normal = transpose(inverse(mat3(model))) * vertexNormal;
    vsOut.textureCoord = vertexTextureCoord;
    vsOut.fragmentPositionInLightSpace = lightSpaceMatrix * model * vec4(vertexPosition, 1.0);
    gl_Position = projection * view * model * vec4(vertexPosition, 1.0);
}
shadow-rendering.frag:
#version 410

layout (location = 0) out vec4 fragmentColor;

in VS_OUT {
    vec3 fragmentPosition;
    vec3 normal;
    vec2 textureCoord;
    vec4 fragmentPositionInLightSpace;
} fsIn;

uniform sampler2D shadowMap;
uniform sampler2D diffuseTexture;

uniform vec3 lightPosition;
uniform vec3 lightColor;
uniform vec3 cameraPosition;

float shadowCalculation()
{
    vec2 shadowMapCoord = fsIn.fragmentPositionInLightSpace.xy * 0.5 + 0.5;
    float occluderDepth = texture(shadowMap, shadowMapCoord).r;
    float thisDepth = fsIn.fragmentPositionInLightSpace.z * 0.5 + 0.5;
    return occluderDepth < thisDepth ? 1.0 : 0.0;
}

void main()
{
    vec3 color = texture(diffuseTexture, fsIn.textureCoord).rgb;
    vec3 normal = normalize(fsIn.normal);

    // ambient
    vec3 ambient = 0.3 * color;

    // diffuse
    vec3 lightDirection = normalize(lightPosition - fsIn.fragmentPosition);
    float diff = max(dot(lightDirection, normal), 0.0);
    vec3 diffuse = diff * lightColor;

    // specular
    vec3 viewDirection = normalize(cameraPosition - fsIn.fragmentPosition);
    vec3 halfwayDirection = normalize(lightDirection + viewDirection);
    float spec = pow(max(dot(normal, halfwayDirection), 0.0), 64.0);
    vec3 specular = spec * lightColor;

    // calculate shadow
    float shadow = shadowCalculation();

    vec3 lighting = ((shadow * (diffuse + specular)) + ambient) * color;

    fragmentColor = vec4(lighting, 1.0);
}
main.cpp, setting up shaders and frame buffer:
// loading the shadow mapping shaders
auto shadowMappingVertexProgram = ...;
auto shadowMappingFragmentProgram = ...;
auto shadowMappingLightSpaceUniform = shadowMappingVertexProgram->getUniform<glm::mat4>("lightSpaceMatrix");
auto shadowMappingModelTransformationUniform = shadowMappingVertexProgram->getUniform<glm::mat4>("modelTransformation");
auto shadowMappingPipeline = std::make_unique<globjects::ProgramPipeline>();
shadowMappingPipeline->useStages(shadowMappingVertexProgram.get(), gl::GL_VERTEX_SHADER_BIT);
shadowMappingPipeline->useStages(shadowMappingFragmentProgram.get(), gl::GL_FRAGMENT_SHADER_BIT);
// (omitted) loading the depth frame buffer debugging shaders and creating a pipeline here
// loading the rendering shaders
auto shadowRenderingVertexProgram = ...;
auto shadowRenderingFragmentProgram = ...;
auto shadowRenderingModelTransformationUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("model");
auto shadowRenderingViewTransformationUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("view");
auto shadowRenderingProjectionTransformationUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("projection");
auto shadowRenderingLightSpaceMatrixUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("lightSpaceMatrix");
auto shadowRenderingLightPositionUniform = shadowRenderingFragmentProgram->getUniform<glm::vec3>("lightPosition");
auto shadowRenderingLightColorUniform = shadowRenderingFragmentProgram->getUniform<glm::vec3>("lightColor");
auto shadowRenderingCameraPositionUniform = shadowRenderingFragmentProgram->getUniform<glm::vec3>("cameraPosition");
auto shadowRenderingPipeline = std::make_unique<globjects::ProgramPipeline>();
shadowRenderingPipeline->useStages(shadowRenderingVertexProgram.get(), gl::GL_VERTEX_SHADER_BIT);
shadowRenderingPipeline->useStages(shadowRenderingFragmentProgram.get(), gl::GL_FRAGMENT_SHADER_BIT);
// loading the chicken model
auto chickenModel = Model::fromAiNode(chickenScene, chickenScene->mRootNode, { "media" });
// INFO: this transformation is hard-coded specifically for Chicken.3ds model
chickenModel->setTransformation(glm::rotate(glm::scale(glm::mat4(1.0f), glm::vec3(0.01f)), glm::radians(-90.0f), glm::vec3(1.0f, 0, 0)));
// loading the quad model
auto quadModel = Model::fromAiNode(quadScene, quadScene->mRootNode);
// INFO: this transformation is hard-coded specifically for quad.obj model
quadModel->setTransformation(glm::rotate(glm::scale(glm::translate(glm::mat4(1.0f), glm::vec3(-5, 0, 5)), glm::vec3(10.0f, 0, 10.0f)), glm::radians(-90.0f), glm::vec3(1.0f, 0, 0)));
// loading the floor texture
sf::Image textureImage = ...;
auto defaultTexture = std::make_unique<globjects::Texture>(static_cast<gl::GLenum>(GL_TEXTURE_2D));
defaultTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MIN_FILTER), static_cast<GLint>(GL_LINEAR));
defaultTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MAG_FILTER), static_cast<GLint>(GL_LINEAR));
defaultTexture->image2D(0, static_cast<gl::GLenum>(GL_RGBA8), glm::vec2(textureImage.getSize().x, textureImage.getSize().y), 0, static_cast<gl::GLenum>(GL_RGBA), static_cast<gl::GLenum>(GL_UNSIGNED_BYTE), reinterpret_cast<const gl::GLvoid*>(textureImage.getPixelsPtr()));
// initializing the depth frame buffer
auto shadowMapTexture = std::make_unique<globjects::Texture>(static_cast<gl::GLenum>(GL_TEXTURE_2D));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MIN_FILTER), static_cast<gl::GLenum>(GL_LINEAR));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MAG_FILTER), static_cast<gl::GLenum>(GL_LINEAR));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_WRAP_S), static_cast<gl::GLenum>(GL_CLAMP_TO_BORDER));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_WRAP_T), static_cast<gl::GLenum>(GL_CLAMP_TO_BORDER));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_BORDER_COLOR), glm::vec4(1.0f, 1.0f, 1.0f, 1.0f));
shadowMapTexture->image2D(0, static_cast<gl::GLenum>(GL_DEPTH_COMPONENT), glm::vec2(window.getSize().x, window.getSize().y), 0, static_cast<gl::GLenum>(GL_DEPTH_COMPONENT), static_cast<gl::GLenum>(GL_FLOAT), nullptr);
auto framebuffer = std::make_unique<globjects::Framebuffer>();
framebuffer->attachTexture(static_cast<gl::GLenum>(GL_DEPTH_ATTACHMENT), shadowMapTexture.get());
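One generic thing worth checking with a depth-only framebuffer like this (a raw-GL sketch, not globjects API; the FBO name is a placeholder): without a color attachment, the FBO is incomplete on some drivers unless the color draw/read buffers are explicitly disabled:

glBindFramebuffer(GL_FRAMEBUFFER, framebufferId); // placeholder for the FBO created above
glDrawBuffer(GL_NONE); // no color attachment to draw to
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cerr << "shadow map framebuffer is incomplete\n";
glBindFramebuffer(GL_FRAMEBUFFER, 0);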
main.cpp, rendering (main loop):
// (omitted) event handling, camera updates go here
glm::mat4 cameraProjection = glm::perspective(glm::radians(fov), (float) window.getSize().x / (float) window.getSize().y, 0.1f, 100.0f);
glm::mat4 cameraView = glm::lookAt(cameraPos, cameraPos + cameraForward, cameraUp);
// moving light together with the camera, for debugging purposes
glm::vec3 lightPosition = cameraPos;
// light settings
const float nearPlane = 1.0f;
const float farPlane = 10.0f;
glm::mat4 lightProjection = glm::ortho(-5.0f, 5.0f, -5.0f, 5.0f, nearPlane, farPlane);
glm::mat4 lightView = glm::lookAt(lightPosition, glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 lightSpaceMatrix = lightProjection * lightView;
::glViewport(0, 0, static_cast<GLsizei>(window.getSize().x), static_cast<GLsizei>(window.getSize().y));
// first render pass - shadow mapping
framebuffer->bind();
::glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
::glClear(GL_DEPTH_BUFFER_BIT);
framebuffer->clearBuffer(static_cast<gl::GLenum>(GL_DEPTH), 0, glm::vec4(1.0f));
glEnable(GL_DEPTH_TEST);
// cull front faces to prevent peter panning the generated shadow map
glCullFace(GL_FRONT);
shadowMappingPipeline->use();
shadowMappingLightSpaceUniform->set(lightSpaceMatrix);
shadowMappingModelTransformationUniform->set(chickenModel->getTransformation());
chickenModel->draw();
shadowMappingModelTransformationUniform->set(quadModel->getTransformation());
quadModel->draw();
framebuffer->unbind();
shadowMappingPipeline->release();
glCullFace(GL_BACK);
// second pass - switch to normal shader and render picture with depth information to the viewport
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shadowRenderingPipeline->use();
shadowRenderingLightPositionUniform->set(lightPosition);
shadowRenderingLightColorUniform->set(glm::vec3(1.0, 1.0, 1.0));
shadowRenderingCameraPositionUniform->set(cameraPos);
shadowRenderingProjectionTransformationUniform->set(cameraProjection);
shadowRenderingViewTransformationUniform->set(cameraView);
shadowRenderingLightSpaceMatrixUniform->set(lightSpaceMatrix);
// draw chicken
shadowMapTexture->bind();
shadowRenderingModelTransformationUniform->set(chickenModel->getTransformation());
chickenModel->draw();
shadowRenderingModelTransformationUniform->set(quadModel->getTransformation());
defaultTexture->bind();
quadModel->draw();
defaultTexture->unbind();
shadowMapTexture->unbind();
shadowRenderingPipeline->release();
// (omitted) render the debugging quad with depth (shadow) map
window.display();
As shameful as it might be, the issue was with the wrong texture being bound.
The globjects library, which I use to have a few nicer abstractions over OpenGL, does not actually provide any smart logic around texture binding (as I blindly assumed). So just using Texture::bind() and Texture::unbind() won't automagically keep track of how many textures have been bound and increment an index.
E.g. it does not behave (roughly) like this:
static int boundTextureIndex = -1;

void Texture::bind() {
    glActiveTexture(GL_TEXTURE0 + (++boundTextureIndex));
    glBindTexture(this->textureType, this->textureId);
}

void Texture::unbind() {
    --boundTextureIndex;
}
So after changing the texture->bind() to texture->bindActive(0) followed by shaderProgram->setUniform("texture", 0), I finally got to the moiré effect and correct shadow mapping:
Full change is in this commit.
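In raw OpenGL terms the fix boils down to binding each texture to an explicit unit and pointing each sampler uniform at that unit; a sketch with placeholder GL object names (with separable programs, glProgramUniform1i would be used instead):

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, shadowMapTextureId);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, diffuseTextureId);
glUniform1i(glGetUniformLocation(fragmentProgramId, "shadowMap"), 0);
glUniform1i(glGetUniformLocation(fragmentProgramId, "diffuseTexture"), 1);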
I have a problem with rendering my quads in OpenGL. They look darker when translucency is applied, if the camera is below a certain point. How can I fix this? The objects are lots of quads with tiny amounts of Z difference. I have implemented rendering of translucent objects from this webpage: http://www.alecjacobson.com/weblog/?p=2750
Render code:
double alpha_factor = 0.75;
double alpha_frac = (r_alpha - alpha_factor * r_alpha) / (1.0 - alpha_factor * r_alpha);
double prev_alpha = r_alpha;
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
// quintuple pass to get the rendering of translucent objects, somewhat correct
// reverse render order for getting alpha going!
// 1st pass: only depth checks
glDisable(GL_CULL_FACE);
glDepthFunc(GL_LESS);
r_alpha = 0;
// send alpha for each pass
// reverse order
drawobjects(RENDER_REVERSE);
// 2nd pass: guaranteed back face display with normal alpha
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glDepthFunc(GL_ALWAYS);
r_alpha = alpha_factor * (prev_alpha + 0.025);
// reverse order
drawobjects(RENDER_REVERSE);
// 3rd pass: depth checked version of fraction of calculated alpha. (minus 1)
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glDepthFunc(GL_LEQUAL);
r_alpha = alpha_frac + 0.025;
// normal order
drawobjects(RENDER_NORMAL);
// 4th pass: same for back face
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glDepthFunc(GL_ALWAYS);
r_alpha = alpha_factor * (prev_alpha + 0.025);
// reverse order
drawobjects(RENDER_REVERSE);
// 5th pass: just put out the entire thing now
glDisable(GL_CULL_FACE);
glDepthFunc(GL_LEQUAL);
r_alpha = alpha_frac + 0.025;
// normal order
drawobjects(RENDER_NORMAL);
glDisable(GL_BLEND);
r_alpha = prev_alpha;
GLSL shaders:
Vertex shader:
#version 330 core

layout(location = 0) in vec3 vPos_ModelSpace;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in mat4 model_instance;

out vec2 UV;
out float alpha;
flat out uint alpha_mode;

// view + projection matrices (the model matrix comes in per instance)
uniform mat4 proj;
uniform mat4 view;
uniform float v_alpha;
uniform uint v_alpha_mode;

void main() {
    gl_Position = proj * view * model_instance * vec4(vPos_ModelSpace, 1.0);

    // send to frag shader
    UV = vertexUV;
    alpha = v_alpha;
    alpha_mode = v_alpha_mode;
}
Fragment shader:
#version 330 core

// texture UV coordinate
in vec2 UV;
in float alpha;
flat in uint alpha_mode;

out vec4 color;

// Values that stay constant for the whole mesh.
uniform sampler2D texSampler;

void main() {
    int amode = int(alpha_mode);
    color.rgb = texture(texSampler, UV).rgb;
    color.a = alpha;
    if (amode == 1)
        color.rgb *= alpha;
}
Image when problem happens:
Image comparison for how it should look regardless of my position:
The reason it fades away in the center is that when you look at the infinitely thin sides of the planes, they disappear. As for the brightness change top vs. bottom, it's due to how your passes treat surface normals. The dark planes are those whose normals face away from the camera, with no planes facing the camera to lighten them up.
It looks like you are rendering many translucent planes in a cube to approximate a volume. Here is a simple example of volume rendering: https://www.shadertoy.com/view/lsG3D3
http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch39.html is a fantastic resource. It explains different ways to render volumes and shows how awesome they can look. For reference, that last example uses a sphere as proxy geometry to raymarch a volume fractal.
Happy coding!
I am trying to implement a simple projective texture mapping approach using shaders in OpenGL 3+. While there are some examples on the web, I am having trouble creating a working example with shaders.
I am actually planning on using two shaders, one which does a normal scene draw, and another for projective texture mapping. I have a function for drawing a scene, void ProjTextureMappingScene::renderScene(GLFWwindow *window), and I am using glUseProgram() to switch between shaders. The normal drawing works fine. However, it is unclear to me how I am supposed to render the projective texture on top of an already textured cube. Do I somehow have to use a stencil buffer or a framebuffer object (the rest of the scene should be unaffected)?
I also don't think that my projective texture mapping shaders are correct, since the second time I render a cube it shows black. Further, I tried to debug using colors, and only the t component of the shader seems to be non-zero (so the cube appears green). I am overriding the texColor in the fragment shader below just for debugging purposes.
Vertex shader:
#version 330

uniform mat4 TexGenMat;
uniform mat4 InvViewMat;

uniform mat4 P;
uniform mat4 MV;
uniform mat4 N;

layout (location = 0) in vec3 inPosition;
//layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;

out vec3 vNormal, eyeVec;
out vec2 texCoord;
out vec4 projCoords;

void main()
{
    vNormal = (N * vec4(inNormal, 0.0)).xyz;

    vec4 posEye   = MV * vec4(inPosition, 1.0);
    vec4 posWorld = InvViewMat * posEye;
    projCoords    = TexGenMat * posWorld;

    // only needed for the specular component
    // currently not used
    eyeVec = -posEye.xyz;

    gl_Position = P * MV * vec4(inPosition, 1.0);
}
Fragment shader:
#version 330

uniform sampler2D projMap;
uniform sampler2D gSampler;
uniform vec4 vColor;

in vec3 vNormal, lightDir, eyeVec;
//in vec2 texCoord;
in vec4 projCoords;

out vec4 outputColor;

struct DirectionalLight
{
    vec3 vColor;
    vec3 vDirection;
    float fAmbientIntensity;
};

uniform DirectionalLight sunLight;

void main (void)
{
    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        vec2 finalCoords = projCoords.st / projCoords.q;
        vec4 vTexColor = texture(gSampler, finalCoords);
        // only t has non-zero values.. why?
        vTexColor = vec4(finalCoords.s, finalCoords.t, finalCoords.r, 1.0);
        //vTexColor = vec4(projCoords.s, projCoords.t, projCoords.r, 1.0);
        float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
        outputColor = vTexColor * vColor * vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
Creation of TexGen Matrix
biasMatrix = glm::mat4(0.5f, 0, 0, 0.5f,
0, 0.5f, 0, 0.5f,
0, 0, 0.5f, 0.5f,
0, 0, 0, 1);
// 4:3 perspective with 45 fov
projectorP = glm::perspective(45.0f * zoomFactor, 4.0f / 3.0f, 0.1f, 1000.0f);
projectorOrigin = glm::vec3(-3.0f, 3.0f, 0.0f);
projectorTarget = glm::vec3(0.0f, 0.0f, 0.0f);
projectorV = glm::lookAt(projectorOrigin, // projector origin
projectorTarget, // project on object at origin
glm::vec3(0.0f, 1.0f, 0.0f) // Y axis is up
);
mModel = glm::mat4(1.0f);
...
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mModel*mModelView);
Render Cube Again
It is also unclear to me what the modelview matrix of the cube should be. Should it use the view matrix from the slide projector (as it is now) or the normal view matrix? Currently the cube is rendered black (or green if debugging) in the middle of the scene view, as it would appear from the slide projector (I made a toggle hotkey so that I can see what the slide projector "sees"). The cube also moves with the view. How do I get the projection onto the cube itself?
mModel = glm::translate(projectorV, projectorOrigin);
// bind projective texture
tTextures[2].bindTexture();
// set all uniforms
...
// bind VBO data and draw
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);
Switch between main scene camera and slide projector camera
if (useMainCam)
{
    mCurrent    = glm::mat4(1.0f);
    mModelView  = mModelView * mCurrent;
    mProjection = *pipeline->getProjectionMatrix();
}
else
{
    mModelView  = projectorV;
    mProjection = projectorP;
}
I have solved the problem. One issue I had was that I confused the matrices of the two camera systems (world and projective texture camera). Now, when I set the uniforms for the projective texture mapping part, I use the correct matrices for the MVP values - the same ones I use for the world scene.
glUniformMatrix4fv(iPTMProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iPTMNormalLoc, 1, GL_FALSE, glm::value_ptr(glm::transpose(glm::inverse(mCurrent))));
glUniformMatrix4fv(iPTMModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));
Further, the invViewMatrix is just the inverse of the view matrix, not of the model-view (this didn't change the behaviour in my case, since the model was identity, but it is wrong nonetheless). For my project I only wanted to selectively render a few objects with projective textures. To do this, for each object, I must make sure that the current shader program is the one for projective textures, using glUseProgram(projectiveTextureMappingProgramID). Next, I compute the required matrices for this object:
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);
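Put together, the per-object flow looks roughly like this (the object container and draw method are placeholders, not my actual code):

glUseProgram(projectiveTextureMappingProgramID);
for (auto &object : projectedObjects) // hypothetical list of affected objects
{
    glm::mat4 mModel = object.modelMatrix();
    texGenMatrix  = biasMatrix * projectorP * projectorV * mModel;
    invViewMatrix = glm::inverse(mView);
    glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
    glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));
    object.draw();
}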
Coming back to the shaders, the vertex shader is correct except that I re-added the UV texture coordinates (inCoord) for the current object and stored them in texCoord.
For the fragment shader I changed the main function to clamp the projective texture so that it doesn't repeat (I couldn't get it to work with the client-side GL_CLAMP_TO_EDGE), and I am also using the default object texture and UV coordinates in case the projector does not cover the whole object (I also removed lighting from the projective texture since it is not needed in my case):
void main (void)
{
    vec2 finalCoords = projCoords.st / projCoords.q;
    vec4 vTexColor = texture(gSampler, texCoord);
    vec4 vProjTexColor = texture(projMap, finalCoords);
    //vec4 vProjTexColor = textureProj(projMap, projCoords);

    float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));

    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        // clamp the projective texture (for some reason GL_CLAMP did not work...)
        if (projCoords.s > 0 && projCoords.t > 0 && finalCoords.s < 1 && finalCoords.t < 1)
            //outputColor = vProjTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
            outputColor = vProjTexColor * vColor;
        else
            outputColor = vTexColor * vColor * vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
    else
    {
        outputColor = vTexColor * vColor * vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
If you are stuck and for some reason cannot get the shaders to work, you can check out an example in the "OpenGL 4.0 Shading Language Cookbook" (textures chapter) - I actually missed this until I got it working by myself.
In addition to all of the above, a great help for debugging whether the algorithm works correctly was to draw the frustum (as a wireframe) of the projective camera. I used a separate shader for frustum drawing. The fragment shader just assigns a solid color, while the vertex shader is listed below with explanations:
#version 330

// input vertex data
layout(location = 0) in vec3 vp;

uniform mat4 P;
uniform mat4 MV;
uniform mat4 invP;
uniform mat4 invMV;

void main()
{
    /* The transformed clip space position c of a
       world space vertex v is obtained by transforming
       v with the product of the projection matrix P
       and the modelview matrix MV:

           c = P * MV * v

       So, if we could solve for v, then we could
       generate vertex positions by plugging in clip
       space positions. For your frustum, one line
       would be between the clip space positions
       (-1,-1,near) and (-1,-1,far),
       the lower left edge of the frustum, for example.

       NB: If you would like to mix normalized device
       coords (x,y) and eye space coords (near,far),
       you need an additional step here. Modify your
       clip position as follows:

           c' = (c.x * c.z, c.y * c.z, c.z, c.z)

       otherwise you would need to supply both the z
       and w for c, which might be inconvenient. Simply
       use c' instead of c below.

       To solve for v, multiply both sides of the equation
       above with (P * MV)^-1. This gives

           (P * MV)^-1 * c = v

       which is equivalent to

           MV^-1 * P^-1 * c = v

       P^-1 is given by

           | (r-l)/(2n)     0           0           (r+l)/(2n)  |
           |     0      (t-b)/(2n)      0           (t+b)/(2n)  |
           |     0          0           0               -1      |
           |     0          0      -(f-n)/(2fn)     (f+n)/(2fn) |

       where l, r, t, b, n, and f are the parameters in the
       glFrustum() call.

       If you don't want to fool with inverting the
       model matrix, the info you already have can be
       used instead: the forward, right, and up
       vectors, in addition to the eye position.

       First, go from clip space to eye space:

           e = P^-1 * c

       Next go from eye space to world space:

           v = eyePos - forward*e.z + right*e.x + up*e.y

       assuming x = right, y = up, and -z = forward.
    */
    vec4 fVp = invMV * invP * vec4(vp, 1.0);
    gl_Position = P * MV * fVp;
}
The uniforms are used like this (make sure you use the right matrices):
// projector matrices
glUniformMatrix4fv(iFrustumInvProjectionLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorP)));
glUniformMatrix4fv(iFrustumInvMVLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorV)));
// world camera
glUniformMatrix4fv(iFrustumProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iFrustumModelViewLoc, 1, GL_FALSE, glm::value_ptr(mModelView));
To get the input vertices needed for the frustum's vertex shader you can do the following to get the coordinates (then just add them to your vertex array):
glm::vec3 ftl = glm::vec3(-1, +1, pFar); //far top left
glm::vec3 fbr = glm::vec3(+1, -1, pFar); //far bottom right
glm::vec3 fbl = glm::vec3(-1, -1, pFar); //far bottom left
glm::vec3 ftr = glm::vec3(+1, +1, pFar); //far top right
glm::vec3 ntl = glm::vec3(-1, +1, pNear); //near top left
glm::vec3 nbr = glm::vec3(+1, -1, pNear); //near bottom right
glm::vec3 nbl = glm::vec3(-1, -1, pNear); //near bottom left
glm::vec3 ntr = glm::vec3(+1, +1, pNear); //near top right
glm::vec3 frustum_coords[36] = {
// near
ntl, nbl, ntr, // 1 triangle
ntr, nbl, nbr,
// right
nbr, ftr, ntr,
ftr, nbr, fbr,
// left
nbl, ftl, ntl,
ftl, nbl, fbl,
// far
ftl, fbl, fbr,
fbr, ftr, ftl,
//bottom
nbl, fbr, fbl,
fbr, nbl, nbr,
//top
ntl, ftr, ftl,
ftr, ntl, ntr
};
After all is said and done, it's nice to see how it looks:
As you can see I applied two projective textures, one of a biohazard image on Blender's Suzanne monkey head, and a smiley texture on the floor and a small cube. You can also see that the cube is partly covered by the projective texture, while the rest of it appears with its default texture. Finally, you can see the green frustum wireframe for the projector camera - and everything looks correct.
I just started with OpenGL tessellation and have run into a bit of trouble. I am tessellating a series of patches formed by one vertex each. These vertices/patches are structured in a grid-like fashion to later form a terrain generated from Perlin noise.
The problem I have run into is that the second patch, and every 5th patch after that, sometimes has a lot of tessellation (not the way I configured it), but most of the time it doesn't get tessellated at all.
Like so:
The two white circles mark the highly/over tessellated patches. Also note the pattern of untessellated patches.
The strange thing is that it works on my Surface Pro 2 (Intel HD4400 graphics) but bugs out on my main desktop computer (AMD HD6950 graphics). Is it possible that the hardware is bad?
The patches are generated with the code:
vec4* patches = new vec4[m_patchesWidth * m_patchesDepth];
int c = 0;
for (unsigned int z = 0; z < m_patchesDepth; ++z) {
    for (unsigned int x = 0; x < m_patchesWidth; ++x) {
        patches[c] = vec4(x * 1.5f, 0, z * 1.5f, 1.0f);
        c++;
    }
}
m_fxTerrain->Apply();
glGenBuffers(1, &m_planePatches);
glBindBuffer(GL_ARRAY_BUFFER, m_planePatches);
glBufferData(GL_ARRAY_BUFFER, m_patchesWidth * m_patchesDepth * sizeof(vec4), patches, GL_STATIC_DRAW);
GLuint loc = m_fxTerrain->GetAttrib("posIn");
glEnableVertexAttribArray(loc);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(vec4), nullptr);
delete[] patches;
And drawn with:
glPatchParameteri(GL_PATCH_VERTICES, 1);
glBindVertexArray(patches);
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glDrawArrays(GL_PATCHES, 0, nrOfPatches);
Vertex Shader:
#version 430 core

in vec4 posIn;

out gl_PerVertex {
    vec4 gl_Position;
};

void main() {
    gl_Position = posIn;
}
Control shader:
#version 430
#extension GL_ARB_tessellation_shader : enable

layout (vertices = 1) out;

uniform float OuterTessFactor;
uniform float InnerTessFactor;

out gl_PerVertex {
    vec4 gl_Position;
} gl_out[];

void main() {
    if (gl_InvocationID == 0) {
        gl_TessLevelOuter[0] = OuterTessFactor;
        gl_TessLevelOuter[1] = OuterTessFactor;
        gl_TessLevelOuter[2] = OuterTessFactor;
        gl_TessLevelOuter[3] = OuterTessFactor;
        gl_TessLevelInner[0] = InnerTessFactor;
        gl_TessLevelInner[1] = InnerTessFactor;
    }
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}
Evaluation shader:
#version 430
#extension GL_ARB_tessellation_shader : enable

layout (quads, equal_spacing, ccw) in;

uniform mat4 ProjView;
uniform sampler2D PerlinNoise;

out vec3 PosW;
out vec3 Normal;
out vec4 ColorFrag;

out gl_PerVertex {
    vec4 gl_Position;
};

void main() {
    vec4 pos = gl_in[0].gl_Position;
    pos.xz += gl_TessCoord.xy;
    pos.y = texture(PerlinNoise, pos.xz / vec2(8, 8)).x * 10.0f - 10.0f;
    Normal = vec3(0, 1, 0);
    gl_Position = ProjView * pos;
    PosW = pos.xyz;
    ColorFrag = vec4(pos.x / 64.0f, 0.0f, pos.z / 64.0f, 1.0f);
}
Fragment shader:
#version 430 core

in vec3 PosW;
in vec3 Normal;
in vec4 ColorFrag;
in vec4 PosH;

out vec3 FragColor;
out vec3 FragNormal;

void main() {
    FragNormal = Normal;
    FragColor = ColorFrag.xyz;
}
I have tried to hardcode the different tessellation levels, but that did not help. I recently started out with OpenGL, so please let me know if I am doing something stupid.
So does anyone have any idea what could be causing this "flickering" of certain patches?
Update: I had a friend run the project and he got the same pattern of flickering tessellation, but the failing patches were not drawn at all, except when they were overly tessellated. He has the same graphics card as I do (AMD HD6950).
You should use triangle or quad tessellation, in which each patch has 3 or 4 vertices. As far as I can see, you use quads (I use them too). In that case, you can set it up like this:
glPatchParameteri(GL_PATCH_VERTICES,4);
glBindVertexArray(VertexArray);
(TIP: use glDrawElements for your terrain - much better performance for a 2D displacement-based mesh; see the sketch after the next snippet.)
In the control shader, use
layout (vertices = 4) out;
since your patch has 4 control points. The ordering is still important (CCW/CW).
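To illustrate the glDrawElements tip: for a grid of 4-vertex quad patches the corners can be shared, so you store (width+1)*(height+1) vertices once and index them. A sketch, with the index buffer name made up:

// one quad patch per grid cell, 4 shared corner vertices each
std::vector<GLuint> indices;
indices.reserve(m_patchesWidth * m_patchesDepth * 4);
unsigned int vertsPerRow = m_patchesWidth + 1;
for (unsigned int z = 0; z < m_patchesDepth; ++z) {
    for (unsigned int x = 0; x < m_patchesWidth; ++x) {
        indices.push_back( z      * vertsPerRow + x);     // lower left
        indices.push_back( z      * vertsPerRow + x + 1); // lower right
        indices.push_back((z + 1) * vertsPerRow + x + 1); // upper right
        indices.push_back((z + 1) * vertsPerRow + x);     // upper left (CCW)
    }
}
GLuint patchIndexBuffer; // hypothetical name
glGenBuffers(1, &patchIndexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, patchIndexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint), indices.data(), GL_STATIC_DRAW);
// drawing then becomes:
//   glPatchParameteri(GL_PATCH_VERTICES, 4);
//   glDrawElements(GL_PATCHES, (GLsizei)indices.size(), GL_UNSIGNED_INT, nullptr);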
Personally, I don't like to use the built-in variables, so for the vertex shader you can send your vertex data to the tessellation control shader like this:
layout (location = 0) out vec3 outPos;
....
outPos.xz = grid.xy;
outPos.y = noise(outPos.xz);
Tess control:
layout (location = 0) in vec3 inPos[]; // outPos (location = 0) from the vertex shader;
// 'collects' the 4 control points into an array, in the order they are sent
layout (location = 0) out vec3 outPos[]; // sends the control points to the evaluation shader
...
gl_TessLevelOuter[0] = outt[0];
gl_TessLevelOuter[1] = outt[1];
gl_TessLevelOuter[2] = outt[2];
gl_TessLevelOuter[3] = outt[3];
gl_TessLevelInner[0] = inn[0];
gl_TessLevelInner[1] = inn[1];

outPos[ID] = inPos[ID]; // ID = gl_InvocationID
Note that both the in and out vertex data are arrays.
The tessellation evaluation shader is simple:
layout (location = 0) in vec3 inPos[]; // the 4 control points
layout (location = 0) out vec3 outPos; // no longer an array; the next stage is the fragment shader
...
// edit: do not forget to add the next line
layout (quads) in;

// linear interpolation for the x, y, z coords on the quad
vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2, vec3 v3)
{
    return mix(mix(v0, v1, gl_TessCoord.x), mix(v3, v2, gl_TessCoord.x), gl_TessCoord.y);
}

... main { ...
    // the four control points of the quad; every other point is linearly
    // interpolated between them according to gl_TessCoord
    outPos = interpolate3D(inPos[0], inPos[1], inPos[2], inPos[3]);
    gl_Position = mvp * vec4(outPos, 1.0f);
A good representation of the quad domain: http://ogldev.atspace.co.uk/www/tutorial30/tutorial30.html.
I think the problem is with your one-vertex patch. I cannot imagine how a one-vertex patch could be divided into triangles, and I don't know how it works on other hardware. Tessellation divides primitives into other, simpler primitives - triangles in the case of OpenGL, since a GPU can handle them easily (3 points always lie in a plane). So the minimum number of patch vertices should be 3, for a triangle. I like quads because they are simpler to index and the memory cost is lower; they get divided into triangles during tessellation too. http://www.informit.com/articles/article.aspx?p=2120983
Also, there is another type, isoline tessellation. (Check out the links; the second one is pretty good.)
All in all, try it with quads or triangles and set the number of control vertices to 4 (or 3). My (pretty complex) terrain shader is here, with frustum culling and tessellation-shader culling for a geoclipmap-based terrain; without tessellation it works with vertex morphing in the vertex shader. Maybe some part of this code will be useful: http://speedy.sh/TAvPR/gshader.txt
A scene with tessellation at about 4 pixels/triangle runs at 75 FPS (measured with Fraps), with runtime normal calculation, bicubic smoothing and other things, on an AMD HD 5750. It could still be much faster with better code and pre-baked normals :D (it runs at max 120 without the normal calculation).
Oh, and you can send only the x and z coords if you displace the vertex in the shader; that will be faster too.
I'd like to display a simple UV sphere (exported from Blender) and generate lines with normal coordinates using a single geometry shader.
At first, I wrote a simple geometry shader which simply returns the input vertices' information to the fragment shader. For the sake of simplicity (for the example) I removed the luminosity calculations from the fragment shader.
Vertex shader:
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

uniform mat4 MVP;

out vec3 VPosition;
out vec3 VNormal;

void main(void)
{
    VNormal = VertexNormal;
    gl_Position = vec4(VertexPosition, 1.0f);
}
Geometry shader:
#version 400

layout(points) in;
layout(line_strip, max_vertices = 2) out;

uniform mat4 MVP;

in vec3 VNormal[];

out vec3 fcolor;

void main(void)
{
    float size = 2.5f;

    fcolor = vec3(0.0f, 0.0f, 1.0f);
    gl_Position = MVP * gl_in[0].gl_Position;
    EmitVertex();

    fcolor = vec3(1.0f, 1.0f, 0.0f);
    gl_Position = MVP * vec4(gl_in[0].gl_Position.xyz + vec3(
        VNormal[0].x * size, VNormal[0].y * size, VNormal[0].z * size), 1.0f);
    EmitVertex();

    EndPrimitive();
}
And the fragment shader:
#version 400

in vec3 Position;
in vec3 Normal;
in vec2 TexCoords;

out vec4 FragColor;

in vec3 fcolor;

void main(void)
{
    FragColor = vec4(fcolor, 1.0f);
}
Now in the C++ code the primitive type to draw (here triangles):
glDrawArrays(GL_TRIANGLES, 0, meshList[idx]->getVertexBuffer()->getBufferSize());
And finally the output :
Up to here everything is OK.
Now I want to generate strands on the sphere along the normals. To get the job done, I wrote the following geometry shader (the vertex and fragment shaders are the same).
#version 400

layout(points) in;
layout(line_strip, max_vertices = 2) out;

uniform mat4 MVP;

in vec3 VNormal[];

out vec3 fcolor;

void main(void)
{
    float size = 1.0f;

    fcolor = vec3(0.0f, 0.0f, 1.0f);
    gl_Position = MVP * gl_in[0].gl_Position;
    EmitVertex();

    fcolor = vec3(1.0f, 1.0f, 0.0f);
    gl_Position = MVP * vec4(gl_in[0].gl_Position.xyz + vec3(
        VNormal[0].x * size, VNormal[0].y * size, VNormal[0].z * size), 1.0f);
    EmitVertex();

    EndPrimitive();
}
The input primitive type being points, I modified the C++ code to draw the scene:
glDrawArrays(GL_POINTS, 0, meshList[idx]->getVertexBuffer()->getBufferSize());
And the output:
Finally, if I want to take triangles as the input primitive and a line_strip as the output primitive in the geometry shader, I have the following shader:
#version 400

layout(triangles, invocations = 3) in;
layout(line_strip, max_vertices = 6) out;

uniform mat4 MVP;

in vec3 VNormal[];

out vec3 fcolor;

void main(void)
{
    float size = 1.0f;

    for (int i = 0; i < 3; i++)
    {
        fcolor = vec3(0.0f, 0.0f, 1.0f);
        gl_Position = MVP * gl_in[i].gl_Position;
        EmitVertex();

        fcolor = vec3(1.0f, 1.0f, 0.0f);
        gl_Position = MVP * vec4(gl_in[0].gl_Position.xyz + vec3(
            VNormal[0].x * size, VNormal[0].y * size, VNormal[0].z * size), 1.0f);
        EmitVertex();

        EndPrimitive();
    }
}
And the output is the following :
But my goal is to display the scene (sphere + strands) in one output using the same geometry shader. I'd like to know if it's possible to do this. I don't think so, because a geometry shader must have just one input primitive type and one output primitive type, not several. I want to be sure whether it's possible or not.
Who knows, maybe one day there'll be an extension to emit multiple primitive types from a geometry shader, but as you say it can't currently be done.
One alternative might be to draw the normal lines with triangles instead.
Another option, though completely useless in this case, might be to use the transform feedback extension to save the vertex shader results and reuse that data with two separate geometry shaders. I only mention this as it's the closest thing I could think of to emitting multiple primitive types after the vertex stage.
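The practical workaround is simply two draw calls over the same vertex buffer, one program per primitive type; a minimal sketch with illustrative names:

glBindVertexArray(sphereVAO);

glUseProgram(meshProgram);        // plain vertex + fragment pipeline
glDrawArrays(GL_TRIANGLES, 0, vertexCount);

glUseProgram(normalLinesProgram); // pipeline with the points -> line_strip geometry shader
glDrawArrays(GL_POINTS, 0, vertexCount);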
EDIT
The two geometry shaders for drawing normals confuse me. In the second one, max_vertices = 3, which should be 6 for 3 separate lines, and EndPrimitive should also be inside the for-loop so the 3 lines aren't connected. But you've already sorted this out by drawing GL_POINTS in the previous one. Is this intended to be structured for multiple primitive output, if it were supported? (fixed)
Given that your geometry reuses many vertices, indexing with glDrawElements would be more efficient. Although you'd still want to use glDrawArrays for drawing the normal lines, to avoid drawing duplicate vertices referenced by an index array.