Follow 2D player - OpenGL/C++

So I am encountering this small problem where my camera is wrongly fixed on the player.
The blue sprite in the upper-left corner is the player, but it is supposed to be in the center of the screen. All the other threads on this matter were using the fixed-function pipeline, while I use the VBO-based one.
My matrices are as follows:
Transform matrix:
glm::vec2 position = glm::vec2(x, y);
glm::vec2 size = glm::vec2(width, height);
this->transform = glm::translate(this->transform, glm::vec3(position, 0.0f));
this->transform = glm::scale(this->transform, glm::vec3(size, 1.0f));
Projection matrix:
glm::mat4 Screen::projection2D = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, -1.0f, 1.0f);
View matrix (where translation is the translation of the player):
Screen::view = glm::lookAt(translation, translation + glm::vec3(0, 0, -1), glm::vec3(0, 1, 0));
And the vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
uniform mat4 transform;
uniform mat4 projection;
uniform mat4 view;
out vec2 TexCoord;
void main()
{
gl_Position = projection * view * transform * vec4(aPos.xy, 0.0, 1.0);
TexCoord = aTexCoord;
}
So what is going wrong here? Is there something I did not understand about the way this works, or did I make a minor mistake somewhere?

So I found the answer myself XD.
The camera translation has to be centered by subtracting half of the screen width and height:
glm::vec3 cameraPos = glm::vec3(translation.x-Screen::width*0.5f, translation.y-Screen::height*0.5f, translation.z);
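Feeding that centered position into the view matrix from the question completes the fix (a minimal sketch, reusing the names above):
// Look down -Z from the centered position, exactly as in the original view matrix.
Screen::view = glm::lookAt(cameraPos, cameraPos + glm::vec3(0, 0, -1), glm::vec3(0, 1, 0));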

Related

How can I port a Shadertoy into a vertex shader that uses a projection matrix?

I'm trying to port this Shadertoy into OpenGL: https://www.shadertoy.com/view/7lBBR3
Shadertoy has a vec4 fragCoord and a vec3 iResolution that I'm not sure how to translate into my OpenGL shader.
I have a 2D plane that is projected like this:
glm::vec3 camera = {0.f, 0.f, -5.f};
glm::mat4 projection = glm::perspective(glm::radians(45.f), app.aspectRatio, 0.1f, 100.f);
projection = glm::scale(projection, {1.f, -1.f, 1.f});
glm::mat4 view = glm::translate(projection, camera);
And then my vertex shader uses this view like this
layout(location = 0) in vec2 vPosition;
layout(location = 1) in vec2 vTexturePosition;
layout(location = 0) out vec2 position;
layout(location = 1) out vec2 texturePosition;
layout(binding = 0) uniform ubo {
mat4 uView;
};
void main() {
gl_Position = uView * vec4(vPosition, 0.f, 1.f);
texturePosition = vTexturePosition;
}
Now this is where I'm not sure how to proceed; in the Shadertoy shader you can see lines like this:
vec3 planeposition = vec3(fragCoord.xy / iResolution.y, 0.0);
vec2 cursorposition = iMouse.xy / iResolution.y;
vec2 uv = fract(fragCoord.xy / iResolution.y);
vec2 noise = fract(fragCoord.xy * 0.5);
Since I'm using a projection matrix I don't think iResolution is relevant, as it's just the size in pixels of the viewport.
Also, what is fragCoord? Is it my vPosition from the vertex buffer?
Shadertoy's shaders are designed for a screen-space render pass. iResolution is always the size of the viewport. iMouse is the window coordinate of the mouse pointer. fragCoord is the same as the fragment shader built-in variable gl_FragCoord. So if your rectangle covers the entire viewport, you just need to create and set the iResolution and iMouse uniforms and replace fragCoord with gl_FragCoord.
Note that you cannot omit iResolution entirely, as it also determines the aspect ratio of the viewport.
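Setting those up in plain C++ OpenGL could look roughly like this (a sketch; program, windowWidth/windowHeight and mouseX/mouseY are placeholders for your own application state):
// In the fragment shader, declare: uniform vec3 iResolution; uniform vec4 iMouse;
glUseProgram(program);
// Shadertoy's iResolution is (width, height, pixel aspect ratio); 1.0 is a safe z value.
glUniform3f(glGetUniformLocation(program, "iResolution"), (float)windowWidth, (float)windowHeight, 1.0f);
// iMouse.xy is the mouse position in pixels; Shadertoy keeps click state in zw.
glUniform4f(glGetUniformLocation(program, "iMouse"), (float)mouseX, (float)mouseY, 0.0f, 0.0f);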

OpenGL shadow mapping weirdness

I have been playing around with OpenGL and shaders and got myself into shadow mapping.
Trying to follow tutorials on the Internet (ogldev and learnopengl), I got some unexpected results.
The issue is best described with few screenshots (I have added a static quad with depth framebuffer for debugging):
Somehow I managed to get shadows rendered on a ground quad once, with a static light (this commit). But the shadow pattern is, again, incorrect. I strongly suspect the model transformation matrix calculations:
The way I render the scene is quite straightforward:
1. Create the pipelines:
- one for mapping the shadows (filling the depth frame buffer)
- one for rendering the scene using the depth frame buffer
- an (extra) debugging one, rendering the depth frame buffer to a static quad on screen
2. Fill the depth frame buffer: using the shadow mapping pipeline, render the scene from the light's point of view, using an orthographic projection.
3. Render the shaded scene: using the rendering pipeline, with the depth frame buffer bound as the first texture, render the scene from the camera's point of view, using a perspective projection.
This seems to be the algorithm from all those shadow mapping tutorials out there. Yet, instead of a moiré effect (like in all of the tutorials), I get no shadow on the bottom plane whatsoever and weird artifacts (incorrect shadow mapping) on the 3D (chicken) model.
Interestingly enough, if I do not render the chicken model (in both the shadow mapping and final rendering passes), the plane is lit with the same weird pattern:
I also had to remove any normal transformations from the fragment shader and disable face culling to make the ground plane lit. With front-face culling the plane does not appear in the shadow map (depth buffer).
I assume the following might be causing this issue:
wrong depth frame buffer setup (data format or texture parameters)
flipped depth frame buffer texture
wrong shadow calculations in rendering shaders
wrong light matrices (view & projection) setup
wrong matrix calculations in the rendering shaders (given the model transformation matrices for both chicken model and the quad contain both rotation and scaling)
Unfortunately, I have run out of ideas on how to even assess the above assumptions.
Looking for any help on the matter (also feel free to criticize any of my approaches, including C++, CMake, OpenGL and computer graphics).
The full solution source code is available on GitHub, but for convenience I have placed the heavily cut source code below.
shadow-mapping.vert:
#version 410
layout (location = 0) in vec3 vertexPosition;
out gl_PerVertex
{
vec4 gl_Position;
};
uniform mat4 lightSpaceMatrix;
uniform mat4 modelTransformation;
void main()
{
gl_Position = lightSpaceMatrix * modelTransformation * vec4(vertexPosition, 1.0);
}
shadow-mapping.frag:
#version 410
layout (location = 0) out float fragmentDepth;
void main()
{
fragmentDepth = gl_FragCoord.z;
}
shadow-rendering.vert:
#version 410
layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec3 vertexNormal;
layout (location = 2) in vec2 vertexTextureCoord;
out VS_OUT
{
vec3 fragmentPosition;
vec3 normal;
vec2 textureCoord;
vec4 fragmentPositionInLightSpace;
} vsOut;
out gl_PerVertex {
vec4 gl_Position;
};
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform mat4 lightSpaceMatrix;
void main()
{
vsOut.fragmentPosition = vec3(model * vec4(vertexPosition, 1.0));
vsOut.normal = transpose(inverse(mat3(model))) * vertexNormal;
vsOut.textureCoord = vertexTextureCoord;
vsOut.fragmentPositionInLightSpace = lightSpaceMatrix * model * vec4(vertexPosition, 1.0);
gl_Position = projection * view * model * vec4(vertexPosition, 1.0);
}
shadow-rendering.frag:
#version 410
layout (location = 0) out vec4 fragmentColor;
in VS_OUT {
vec3 fragmentPosition;
vec3 normal;
vec2 textureCoord;
vec4 fragmentPositionInLightSpace;
} fsIn;
uniform sampler2D shadowMap;
uniform sampler2D diffuseTexture;
uniform vec3 lightPosition;
uniform vec3 lightColor;
uniform vec3 cameraPosition;
float shadowCalculation()
{
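// NOTE: no perspective divide by .w is needed here, because the light uses an
// orthographic projection (w == 1); the [-1, 1] NDC values are remapped to [0, 1]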
vec2 shadowMapCoord = fsIn.fragmentPositionInLightSpace.xy * 0.5 + 0.5;
float occluderDepth = texture(shadowMap, shadowMapCoord).r;
float thisDepth = fsIn.fragmentPositionInLightSpace.z * 0.5 + 0.5;
return occluderDepth < thisDepth ? 1.0 : 0.0;
}
void main()
{
vec3 color = texture(diffuseTexture, fsIn.textureCoord).rgb;
vec3 normal = normalize(fsIn.normal);
// ambient
vec3 ambient = 0.3 * color;
// diffuse
vec3 lightDirection = normalize(lightPosition - fsIn.fragmentPosition);
float diff = max(dot(lightDirection, normal), 0.0);
vec3 diffuse = diff * lightColor;
// specular
vec3 viewDirection = normalize(cameraPosition - fsIn.fragmentPosition);
vec3 halfwayDirection = normalize(lightDirection + viewDirection);
float spec = pow(max(dot(normal, halfwayDirection), 0.0), 64.0);
vec3 specular = spec * lightColor;
// calculate shadow
float shadow = shadowCalculation();
vec3 lighting = ((shadow * (diffuse + specular)) + ambient) * color;
fragmentColor = vec4(lighting, 1.0);
}
main.cpp, setting up shaders and frame buffer:
// loading the shadow mapping shaders
auto shadowMappingVertexProgram = ...;
auto shadowMappingFragmentProgram = ...;
auto shadowMappingLightSpaceUniform = shadowMappingVertexProgram->getUniform<glm::mat4>("lightSpaceMatrix");
auto shadowMappingModelTransformationUniform = shadowMappingVertexProgram->getUniform<glm::mat4>("modelTransformation");
auto shadowMappingPipeline = std::make_unique<globjects::ProgramPipeline>();
shadowMappingPipeline->useStages(shadowMappingVertexProgram.get(), gl::GL_VERTEX_SHADER_BIT);
shadowMappingPipeline->useStages(shadowMappingFragmentProgram.get(), gl::GL_FRAGMENT_SHADER_BIT);
// (omitted) loading the depth frame buffer debugging shaders and creating a pipeline here
// loading the rendering shaders
auto shadowRenderingVertexProgram = ...;
auto shadowRenderingFragmentProgram = ...;
auto shadowRenderingModelTransformationUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("model");
auto shadowRenderingViewTransformationUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("view");
auto shadowRenderingProjectionTransformationUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("projection");
auto shadowRenderingLightSpaceMatrixUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("lightSpaceMatrix");
auto shadowRenderingLightPositionUniform = shadowRenderingFragmentProgram->getUniform<glm::vec3>("lightPosition");
auto shadowRenderingLightColorUniform = shadowRenderingFragmentProgram->getUniform<glm::vec3>("lightColor");
auto shadowRenderingCameraPositionUniform = shadowRenderingFragmentProgram->getUniform<glm::vec3>("cameraPosition");
auto shadowRenderingPipeline = std::make_unique<globjects::ProgramPipeline>();
shadowRenderingPipeline->useStages(shadowRenderingVertexProgram.get(), gl::GL_VERTEX_SHADER_BIT);
shadowRenderingPipeline->useStages(shadowRenderingFragmentProgram.get(), gl::GL_FRAGMENT_SHADER_BIT);
// loading the chicken model
auto chickenModel = Model::fromAiNode(chickenScene, chickenScene->mRootNode, { "media" });
// INFO: this transformation is hard-coded specifically for Chicken.3ds model
chickenModel->setTransformation(glm::rotate(glm::scale(glm::mat4(1.0f), glm::vec3(0.01f)), glm::radians(-90.0f), glm::vec3(1.0f, 0, 0)));
// loading the quad model
auto quadModel = Model::fromAiNode(quadScene, quadScene->mRootNode);
// INFO: this transformation is hard-coded specifically for quad.obj model
quadModel->setTransformation(glm::rotate(glm::scale(glm::translate(glm::mat4(1.0f), glm::vec3(-5, 0, 5)), glm::vec3(10.0f, 0, 10.0f)), glm::radians(-90.0f), glm::vec3(1.0f, 0, 0)));
// loading the floor texture
sf::Image textureImage = ...;
auto defaultTexture = std::make_unique<globjects::Texture>(static_cast<gl::GLenum>(GL_TEXTURE_2D));
defaultTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MIN_FILTER), static_cast<GLint>(GL_LINEAR));
defaultTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MAG_FILTER), static_cast<GLint>(GL_LINEAR));
defaultTexture->image2D(0, static_cast<gl::GLenum>(GL_RGBA8), glm::vec2(textureImage.getSize().x, textureImage.getSize().y), 0, static_cast<gl::GLenum>(GL_RGBA), static_cast<gl::GLenum>(GL_UNSIGNED_BYTE), reinterpret_cast<const gl::GLvoid*>(textureImage.getPixelsPtr()));
// initializing the depth frame buffer
auto shadowMapTexture = std::make_unique<globjects::Texture>(static_cast<gl::GLenum>(GL_TEXTURE_2D));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MIN_FILTER), static_cast<gl::GLenum>(GL_LINEAR));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MAG_FILTER), static_cast<gl::GLenum>(GL_LINEAR));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_WRAP_S), static_cast<gl::GLenum>(GL_CLAMP_TO_BORDER));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_WRAP_T), static_cast<gl::GLenum>(GL_CLAMP_TO_BORDER));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_BORDER_COLOR), glm::vec4(1.0f, 1.0f, 1.0f, 1.0f));
shadowMapTexture->image2D(0, static_cast<gl::GLenum>(GL_DEPTH_COMPONENT), glm::vec2(window.getSize().x, window.getSize().y), 0, static_cast<gl::GLenum>(GL_DEPTH_COMPONENT), static_cast<gl::GLenum>(GL_FLOAT), nullptr);
auto framebuffer = std::make_unique<globjects::Framebuffer>();
framebuffer->attachTexture(static_cast<gl::GLenum>(GL_DEPTH_ATTACHMENT), shadowMapTexture.get());
main.cpp, rendering (main loop):
// (omitted) event handling, camera updates go here
glm::mat4 cameraProjection = glm::perspective(glm::radians(fov), (float) window.getSize().x / (float) window.getSize().y, 0.1f, 100.0f);
glm::mat4 cameraView = glm::lookAt(cameraPos, cameraPos + cameraForward, cameraUp);
// moving light together with the camera, for debugging purposes
glm::vec3 lightPosition = cameraPos;
// light settings
const float nearPlane = 1.0f;
const float farPlane = 10.0f;
glm::mat4 lightProjection = glm::ortho(-5.0f, 5.0f, -5.0f, 5.0f, nearPlane, farPlane);
glm::mat4 lightView = glm::lookAt(lightPosition, glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 lightSpaceMatrix = lightProjection * lightView;
::glViewport(0, 0, static_cast<GLsizei>(window.getSize().x), static_cast<GLsizei>(window.getSize().y));
// first render pass - shadow mapping
framebuffer->bind();
::glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
::glClear(GL_DEPTH_BUFFER_BIT);
framebuffer->clearBuffer(static_cast<gl::GLenum>(GL_DEPTH), 0, glm::vec4(1.0f));
glEnable(GL_DEPTH_TEST);
// cull front faces to prevent peter-panning in the generated shadow map
glCullFace(GL_FRONT);
shadowMappingPipeline->use();
shadowMappingLightSpaceUniform->set(lightSpaceMatrix);
shadowMappingModelTransformationUniform->set(chickenModel->getTransformation());
chickenModel->draw();
shadowMappingModelTransformationUniform->set(quadModel->getTransformation());
quadModel->draw();
framebuffer->unbind();
shadowMappingPipeline->release();
glCullFace(GL_BACK);
// second pass - switch to normal shader and render picture with depth information to the viewport
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shadowRenderingPipeline->use();
shadowRenderingLightPositionUniform->set(lightPosition);
shadowRenderingLightColorUniform->set(glm::vec3(1.0, 1.0, 1.0));
shadowRenderingCameraPositionUniform->set(cameraPos);
shadowRenderingProjectionTransformationUniform->set(cameraProjection);
shadowRenderingViewTransformationUniform->set(cameraView);
shadowRenderingLightSpaceMatrixUniform->set(lightSpaceMatrix);
// draw chicken
shadowMapTexture->bind();
shadowRenderingModelTransformationUniform->set(chickenModel->getTransformation());
chickenModel->draw();
shadowRenderingModelTransformationUniform->set(quadModel->getTransformation());
defaultTexture->bind();
quadModel->draw();
defaultTexture->unbind();
shadowMapTexture->unbind();
shadowRenderingPipeline->release();
// (omitted) render the debugging quad with depth (shadow) map
window.display();
As shameful as it might be, the issue was with the wrong texture being bound.
The globjects library that I use for a few nice(r) abstractions over OpenGL does not actually provide smart logic around texture binding (as I blindly assumed). So using just Texture::bind() and Texture::unbind() won't automagically keep track of how many textures have been bound and increment an index.
E.g. it does not behave (roughly) like this:
static int boundTextureIndex = -1;
void Texture::bind() {
glActiveTexture(GL_TEXTURE0 + (++boundTextureIndex));
glBindTexture(this->textureType, this->textureId);
}
void Texture::unbind() {
--boundTextureIndex;
}
So after changing texture->bind() to texture->bindActive(0), followed by shaderProgram->setUniform("texture", 0), I finally got the moiré effect and correct shadow mapping:
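In terms of the code above, the fix looks roughly like this (a sketch; the sampler uniform names are the ones from the rendering fragment shader):
// Bind each texture to an explicit unit and point the sampler uniforms at those units.
shadowMapTexture->bindActive(0);
shadowRenderingFragmentProgram->setUniform("shadowMap", 0);
defaultTexture->bindActive(1);
shadowRenderingFragmentProgram->setUniform("diffuseTexture", 1);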
Full change is in this commit.

How does one restrict drawing in OpenGL between two coordinates in the axis?

I'm a new OpenGL programmer. I am plotting a height map composed of thousands of triangles in a 3D graph format. They are scaled so that they are plotted between -1 and +1 on all three axes. I am now able to zoom in on the X axis only, and I can translate along the X axis as well, by applying the appropriate scale and translation matrices. This effectively allows me to zoom right into the data and move it in the X direction as I choose.
The problem is, once I zoom, the data in the X direction extends outside the -1 to +1 region, which is the boundary of the graph. I want this data to not be shown.
How is this done in modern OpenGL?
Thank you
Edit:
The matrices are as follows:
plottingProgram["projection_matrix"].SetValue(Matrix4.CreatePerspectiveFieldOfView(0.45f, (float)width / height, 0.1f, 1000f));
plottingProgram["view_matrix"].SetValue(Matrix4.LookAt(new Vector3(0, 0, 10), Vector3.Zero, new Vector3(0, 1, 0)));
and the vertex shader is
public static string VertexShader = #"
#version 130
in vec3 vertexPosition;
out vec2 textureCoordinate;
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
void main(void)
{
textureCoordinate = vertexPosition.xy;
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(vertexPosition, 1);
}
";
Here is a link to the graph:
http://va2fsq.com/wp-content/uploads/graph.jpg
Thanks
I solved this. After trying many things like glScissor, I happened upon gl_ClipDistance. My initial attempt placed uniforms in the shader, set to:
in vec3 vertexPosition;
uniform vec4 plane0 = vec4(-1, 0, 0, 1);
uniform vec4 plane1 = vec4(1, 0, 0, 1);
void main(void)
{
gl_ClipDistance[0] = dot(vec4(vertexPosition, 1), plane0);
gl_ClipDistance[1] = dot(vec4(vertexPosition, 1), plane1);
Now the problem with this is that the clip region follows any model-matrix scaling or translation (the clip distances are computed from the untransformed vertex position), so that wouldn't work. The solution is to set the clip vectors from outside the vertex shader and apply the inverse of the scaling to them, as follows:
Vector4 clip1 = new Vector4(-1, 0, 0, 1.0f / scale);
Vector4 clip2 = new Vector4(1, 0, 0, 1.0f / scale);
plottingProgram["model_matrix"].SetValue(Matrix4.CreateScaling(new Vector3(scale,1,1)) * Matrix4.CreateTranslation(new Vector3(0,0,0)) *Matrix4.CreateRotationY(yangle) * Matrix4.CreateRotationX(xangle));
plottingProgram["plane0"].SetValue(clip1);
plottingProgram["plane1"].SetValue(clip2);
and the complete vertex shader is given by
public static string VertexShader = #"
#version 130
in vec3 vertexPosition;
out vec2 textureCoordinate;
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
uniform vec4 plane0;
uniform vec4 plane1;
void main(void)
{
textureCoordinate = vertexPosition.xy;
gl_ClipDistance[0] = dot(vec4(vertexPosition,1), plane0);
gl_ClipDistance[1] = dot(vec4(vertexPosition,1), plane1);
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(vertexPosition, 1);
}
";
You can also translate in the same way.
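One caveat worth noting: each gl_ClipDistance element only has an effect if the corresponding clip distance is enabled on the CPU side. In raw C/C++ OpenGL that is (a sketch; the C# wrapper used above should expose an equivalent Enable call):
// Enable the first two user-defined clip distances written by the vertex shader.
glEnable(GL_CLIP_DISTANCE0);
glEnable(GL_CLIP_DISTANCE1);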

Geometry shader calculated lines disappear on camera move

I draw vertex normals using a geometry shader. Everything shows up as expected, except that when I move the camera some lines partially disappear. At first I thought this was due to the frustum size, but I have other objects in the scene, bigger than this one, drawn just fine.
Before movement
After movement
If anyone could give me any pointers on how to get rid of this line-disappearing effect, I would really appreciate it.
Below is the code of my geometry shader:
#version 330 core
layout (triangles) in;
layout (line_strip, max_vertices = 6) out;
in Data{
vec4 position;
vec4 t_position;
vec4 normal;
vec2 texCoord;
vec4 color;
mat4 mvp;
mat4 view;
mat4 mv;
} received[];
out Data{
vec4 color;
vec2 uv;
vec4 normal;
vec2 texCoord;
mat4 view;
} gdata;
const float MAGNITUDE = 1.5f;
void GenerateLine(int index) {
const vec4 green = vec4(0.0f, 1.0f, 0.0f, 1.0f);
const vec4 blue = vec4(0.0f, 0.0f, 1.0f, 1.0f);
gl_Position = received[index].t_position;
//gdata.color = received[index].color;
gdata.color = green;
EmitVertex();
gl_Position = received[index].t_position + received[index].normal * MAGNITUDE;
//gdata.color = received[index].color;
gdata.color = blue;
EmitVertex();
EndPrimitive();
}
void main() {
GenerateLine(0); // First vertex normal
GenerateLine(1); // Second vertex normal
GenerateLine(2); // Third vertex normal
}
Vertex Shader
#version 330
layout(location = 0) in vec3 Position;
layout(location = 1) in vec2 TexCoord;
layout(location = 2) in vec3 Normal;
out Data{
vec4 position;
vec4 t_position;
vec4 normal;
vec2 texCoord;
vec4 color;
mat4 mvp;
mat4 view;
mat4 mv;
} vdata;
//MVP
uniform mat4 model;
uniform mat4 projection;
uniform mat4 view;
void main() {
vdata.position = vec4(Position, 1.0f);
vdata.normal = view * model * vec4(Normal, 0.0);
vdata.texCoord = TexCoord;
vdata.view = view;
vec4 modelColor = vec4(0.8f, 0.8f, 0.8f, 1.0f);
vdata.color = modelColor;
vdata.mvp = projection * view * model;
vdata.mv = view * model;
vdata.t_position = vdata.mvp * vdata.position;
gl_Position = vdata.t_position;
}
Referring to the answer by Illia (May 18 '16 at 22:53): "I originally had my m_zNear equal to 0.1, but when I switched it to 1.0 the lines stopped disappearing. I am not completely sure why that is."
The disappearing lines are caused by depth clipping. The vertex is projected (multiplied with the MVP matrix) and then the vertex position is changed AFTER the projection, in the geometry shader (GS). These changes in the GS cause the z-value to fall outside the normalized device coordinates (NDC) in the perspective division. With a larger zNear value the projected z-value is smaller, so it does not fall outside NDC. Though if the value of MAGNITUDE in the GS were large enough, the lines would be clipped anyway, even with a larger zNear. One option to fix this is to do the projection in the GS.
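A sketch of that fix, reusing the question's interface block: offset the vertex in view space and only then apply the projection (this assumes a projection uniform is added to the geometry shader, since it currently receives only the combined matrices):
uniform mat4 projection; // assumed: passed to the GS like the other uniforms
void GenerateLine(int index) {
// Transform the base vertex to view space; received[index].normal is already
// in view space (see the vertex shader).
vec4 viewPos = received[index].mv * received[index].position;
gl_Position = projection * viewPos;
EmitVertex();
// Offset in view space FIRST, then project, so the endpoint gets a valid
// depth in the perspective division.
gl_Position = projection * (viewPos + received[index].normal * MAGNITUDE);
EmitVertex();
EndPrimitive();
}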
If anyone else runs into this issue, the solution is the following:
glm::perspective(45.0f, m_aspectRatio, m_zNear, m_zFar);
I originally had my m_zNear equal to 0.1, but when I switched it to 1.0 the lines stopped disappearing. I am not completely sure why that is. If anyone knows, please share it.
Have you tried to google zFar and zNear? Here you go.
Or at least tried to google that miraculously helpful glm::perspective(...) function?

Is it possible to draw a sphere with strands using a single geometry shader?

I'd like to display a simple UV sphere (exported from Blender) and generate lines along the normals using a single geometry shader.
As a first step, I wrote a simple geometry shader which simply passes the input vertex information through to the fragment shader. For the sake of simplicity (for this example) I removed the lighting calculations from the fragment shader.
Vertex shader :
#version 400
layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;
uniform mat4 MVP;
out vec3 VPosition;
out vec3 VNormal;
void main(void)
{
VNormal = VertexNormal;
gl_Position = vec4(VertexPosition, 1.0f);
}
Geometry shader :
#version 400
layout(points) in;
layout(line_strip, max_vertices = 2) out;
uniform mat4 MVP;
in vec3 VNormal[];
out vec3 fcolor;
void main(void)
{
float size = 2.5f;
fcolor = vec3(0.0f, 0.0f, 1.0f);
gl_Position = MVP * gl_in[0].gl_Position;
EmitVertex();
fcolor = vec3(1.0f, 1.0f, 0.0f);
gl_Position = MVP * vec4(gl_in[0].gl_Position.xyz + vec3(
VNormal[0].x * size, VNormal[0].y * size, VNormal[0].z * size), 1.0f);
EmitVertex();
EndPrimitive();
}
And the fragment shader :
#version 400
in vec3 Position;
in vec3 Normal;
in vec2 TexCoords;
out vec4 FragColor;
in vec3 fcolor;
void main(void)
{
FragColor = vec4(fcolor, 1.0f);
}
Now in the C++ code the primitive type to draw (here triangles):
glDrawArrays(GL_TRIANGLES, 0, meshList[idx]->getVertexBuffer()->getBufferSize());
And finally the output :
Up to this point everything is OK.
Now I want to generate strands on the sphere along the normals. To get this done I wrote the following geometry shader (the vertex and fragment shaders are the same).
#version 400
layout(points) in;
layout(line_strip, max_vertices = 2) out;
uniform mat4 MVP;
in vec3 VNormal[];
out vec3 fcolor;
void main(void)
{
float size = 1.0f;
fcolor = vec3(0.0f, 0.0f, 1.0f);
gl_Position = MVP * gl_in[0].gl_Position;
EmitVertex();
fcolor = vec3(1.0f, 1.0f, 0.0f);
gl_Position = MVP * vec4(gl_in[0].gl_Position.xyz + vec3(
VNormal[0].x * size, VNormal[0].y * size, VNormal[0].z * size), 1.0f);
EmitVertex();
EndPrimitive();
}
Since the input primitive type is now points, I modified the C++ code that draws the scene:
glDrawArrays(GL_POINTS, 0, meshList[idx]->getVertexBuffer()->getBufferSize());
And the output:
Finally, if I want to take triangles as the input primitive and line_strip as the output primitive in the geometry shader, I have the following shader:
#version 400
layout(triangles, invocations = 3) in;
layout(line_strip, max_vertices = 6) out;
uniform mat4 MVP;
in vec3 VNormal[];
out vec3 fcolor;
void main(void)
{
float size = 1.0f;
for (int i = 0; i < 3; i++)
{
fcolor = vec3(0.0f, 0.0f, 1.0f);
gl_Position = MVP * gl_in[i].gl_Position;
EmitVertex();
fcolor = vec3(1.0f, 1.0f, 0.0f);
gl_Position = MVP * vec4(gl_in[i].gl_Position.xyz + vec3(
VNormal[i].x * size, VNormal[i].y * size, VNormal[i].z * size), 1.0f);
EmitVertex();
EndPrimitive();
}
}
And the output is the following :
But my goal is to display the scene (sphere + strands) in one output using the same geometry shader. I'd like to know if it's possible to do this. I don't think so, because a geometry shader must have exactly one input primitive type and one output primitive type, not several. But I want to be sure whether it's possible or not.
Who knows, maybe one day there'll be an extension to emit multiple primitive types from a geometry shader, but as you say it can't currently be done.
One alternative might be to draw the normal lines with triangles instead.
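For instance, a minimal sketch of that alternative (points in, one thin quad out per normal; halfWidth is an illustrative constant, and the helper vector only needs to be non-parallel to the normal):
#version 400
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform mat4 MVP;
in vec3 VNormal[];
out vec3 fcolor;
void main(void)
{
float size = 1.0f;
float halfWidth = 0.02f;
// Build a side vector perpendicular to the normal to give the strand width.
vec3 n = normalize(VNormal[0]);
vec3 helper = abs(n.y) < 0.99 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
vec3 side = normalize(cross(n, helper)) * halfWidth;
vec3 base = gl_in[0].gl_Position.xyz;
vec3 tip = base + VNormal[0] * size;
fcolor = vec3(0.0f, 0.0f, 1.0f);
gl_Position = MVP * vec4(base - side, 1.0f);
EmitVertex();
gl_Position = MVP * vec4(base + side, 1.0f);
EmitVertex();
fcolor = vec3(1.0f, 1.0f, 0.0f);
gl_Position = MVP * vec4(tip - side, 1.0f);
EmitVertex();
gl_Position = MVP * vec4(tip + side, 1.0f);
EmitVertex();
EndPrimitive();
}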
Another, but completely useless in this case, might be to use the transform feedback extension to save the vertex shader results and reuse that data with two separate geometry shaders. I only mention this as it's the closest thing I could think of to emit multiple primitive types after the vertex stage.
EDIT
The two geometry shaders for drawing normals confuse me. In the second one, max_vertices = 3, which should be 6 for 3 separate lines, and EndPrimitive should also be inside the for-loop so the 3 lines aren't connected. But you've already sorted this out by drawing GL_POINTS in the previous one. Is this intended to be structured for multiple primitive output, if it were supported? (fixed)
Given that your geometry reuses many vertices, indexed drawing with glDrawElements would be more efficient. Although you'd still want to use glDrawArrays for drawing the normal lines, to avoid drawing duplicates of the vertices that the index array references more than once.
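In practice, then, the sphere and the strands become two draw calls over the same vertex data, one indexed and one not (a sketch; sphereProgram, strandProgram, vao, indexCount and vertexCount are placeholders for your own setup):
// Pass 1: draw the sphere itself with the plain triangle pipeline, indexed.
glUseProgram(sphereProgram);
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
// Pass 2: draw the strands with the line geometry shader, one point per vertex.
glUseProgram(strandProgram);
glDrawArrays(GL_POINTS, 0, vertexCount);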