I want to draw a square from point data with the geometry shader.
In the vertex shader, I emit a single point.
#version 330 core
void main() {
gl_Position = vec4(0, 0, 0, 1.0);
}
In the geometry shader, I want to create a triangle strip forming a square.
The size is irrelevant at the moment, so the square should have a size of 1 (ranging from (-0.5, -0.5) to (+0.5, +0.5) around the initial point position).
I need help calculating the positions of the emitted vertices, as marked in the code:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices=4) out;
out vec2 tex_coord;
uniform mat4x4 model;
uniform mat4x4 view;
uniform mat4x4 projection;
void main() {
int i;
for(i = 0; i < gl_in.length(); ++i) {
mat4x4 trans;
trans = //???
gl_Position = projection * view * model * trans * gl_in[i].gl_Position;
tex_coord = vec2(0, 0);
EmitVertex();
trans = //???
gl_Position = projection * view * model * trans * gl_in[i].gl_Position;
tex_coord = vec2(1, 0);
EmitVertex();
trans = //???
gl_Position = projection * view * model * trans * gl_in[i].gl_Position;
tex_coord = vec2(1, 1);
EmitVertex();
trans = //???
gl_Position = projection * view * model * trans * gl_in[i].gl_Position;
tex_coord = vec2(0, 1);
EmitVertex();
}
EndPrimitive();
}
My idea was to use trans to translate the initial point to each desired corner. How would I realize this?
Edit for clarity
I want to generate from a single point what else would be given by the vertex buffer; a single plane:
float vertex[] {
-0.5, 0.5, 0.0,
0.5, 0.5, 0.0,
0.5, -0.5, 0.0,
-0.5, -0.5, 0.0
};
Instead, I pass only a single point in the middle of those vertices and want to generate the real vertices by adding and subtracting the offsets (±0.5) from that center. All I need to know is how to apply this transformation in the code (where the ??? are).
Judging by your updated question, I think this pseudo-code should get you pointed in the right direction. It seems to me that all you want to do is offset the x and y coordinates of your point by a constant amount, so an array is the perfect way to do this.
const vec3 offset [4] =
vec3 [] ( vec3 (-0.5, 0.5, 0.0),
vec3 ( 0.5, 0.5, 0.0),
vec3 ( 0.5, -0.5, 0.0),
vec3 (-0.5, -0.5, 0.0) );
const vec2 tc [4] =
vec2 [] ( vec2 (0.0, 0.0),
vec2 (1.0, 0.0),
vec2 (1.0, 1.0),
vec2 (0.0, 1.0) );
void
main (void)
{
int i;
for (i = 0; i < gl_in.length (); ++i) {
// offset is a vec3, so promote it to a direction vec4 (w = 0) before adding it to the point.
gl_Position = projection * view * model * (gl_in [i].gl_Position + vec4 (offset [0], 0.0));
tex_coord = tc [0];
EmitVertex ();
gl_Position = projection * view * model * (gl_in [i].gl_Position + vec4 (offset [1], 0.0));
tex_coord = tc [1];
EmitVertex ();
gl_Position = projection * view * model * (gl_in [i].gl_Position + vec4 (offset [2], 0.0));
tex_coord = tc [2];
EmitVertex ();
gl_Position = projection * view * model * (gl_in [i].gl_Position + vec4 (offset [3], 0.0));
tex_coord = tc [3];
EmitVertex ();
}
EndPrimitive ();
}
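For completeness, here is a minimal sketch of the C++ side of the draw call. Since the vertex shader hard-codes the position, an empty VAO is enough to emit the single point (program is a placeholder for your linked shader program, not part of the question's code):
// A core profile requires a VAO to be bound even with no attributes enabled,
// because the vertex shader hard-codes gl_Position.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glUseProgram(program);          // program containing the vertex + geometry (+ fragment) shaders above
glDrawArrays(GL_POINTS, 0, 1);  // one point in, one quad (triangle strip) out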
I have a situation where I have two textures on a single mesh. I want to transform these textures independently. I have base code in which I was able to load and transform one texture. Now I have code to load two textures, but the issue is that when I try to transform the first texture, both of them get transformed, since we are modifying the texture coordinates.
The green one is the first texture and the star is the second texture.
I have no idea how to transform just the second texture. Please guide me to any solution you have.
You can do it in many ways; one of them would be to have two different texture matrices and then pass them to the vertex shader.
#version 400 compatibility
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;
out vec2 TexCoord;
out vec2 TexCoord2;
// Declarations for the remaining outputs/uniforms referenced below (they belong to the full shader this excerpt was taken from):
out vec3 Normal;
out vec3 FragPos;
out mat4 Tex2Matrix;
out mat4 ViewDirMatrix;
uniform mat4 NormalMatrix;
uniform mat4 ubo_model;
uniform mat4 ubo_view;
uniform mat4 ubo_projection;
uniform mat4 textureMatrix;
uniform mat4 textureMatrix2;
void main()
{
vec4 mTex2;
vec4 mTex;
Normal = mat3(NormalMatrix) * aNormal;
Tex2Matrix = textureMatrix2;
ViewDirMatrix = textureMatrix;
mTex = textureMatrix * vec4( aTexCoord.x , aTexCoord.y , 0.0 , 1.0 ) ;
mTex2 = textureMatrix2 * vec4( aTexCoord.x , aTexCoord.y , 0.0 , 1.0 ) ;
TexCoord = vec2(mTex.x , mTex.y );
TexCoord2 = vec2(mTex2.x , mTex2.y );
FragPos = vec3( ubo_model * (vec4( aPos, 1.0 )));
gl_Position = ubo_projection * ubo_view * (vec4(FragPos, 1.0));
}
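On the fragment side, the two coordinate sets can then sample the two textures independently. A minimal sketch, assuming hypothetical sampler names baseTexture and starTexture and a simple alpha blend (none of these names are from the original code):
#version 400 compatibility
in vec2 TexCoord;
in vec2 TexCoord2;
out vec4 FragColor;
uniform sampler2D baseTexture;  // hypothetical: the green base texture
uniform sampler2D starTexture;  // hypothetical: the star overlay
void main()
{
    vec4 base = texture(baseTexture, TexCoord);   // transformed by textureMatrix
    vec4 star = texture(starTexture, TexCoord2);  // transformed by textureMatrix2
    FragColor = mix(base, star, star.a);          // overlay the star using its alpha
}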
This is how you can create a texture matrix.
glm::mat4x4 GetTextureMatrix()
{
glm::mat4x4 matrix = glm::mat4x4(1.0f);
matrix = glm::translate(matrix, glm::vec3(-PositionX + 0.5, PositionY + 0.5, 0.0));
matrix = glm::scale(matrix, glm::vec3(1.0 / ScalingX, 1.0 / ScalingY, 0.0));
matrix = glm::rotate(matrix, glm::radians(RotationX) , glm::vec3(1.0, 0.0, 0.0));
matrix = glm::rotate(matrix, glm::radians( RotationY), glm::vec3(0.0, 1.0, 0.0));
matrix = glm::rotate(matrix, glm::radians(-RotationZ), glm::vec3(0.0, 0.0, 1.0));
matrix = glm::translate(matrix, glm::vec3(-PositionX -0.5, -PositionY -0.5, 0.0));
matrix = glm::translate(matrix, glm::vec3(PositionX, PositionY, 0.0));
return matrix;
}
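To have the two matrices act independently, each one is uploaded to its own uniform. A minimal sketch of that upload (the uniform-location lookups and a second GetTextureMatrix2 with its own position/scale/rotation values are assumptions, not part of the original code):
glm::mat4 texMat1 = GetTextureMatrix();    // transform for the first texture
glm::mat4 texMat2 = GetTextureMatrix2();   // hypothetical: same construction, second set of values
glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "textureMatrix"), 1, GL_FALSE, glm::value_ptr(texMat1));
glUniformMatrix4fv(glGetUniformLocation(program, "textureMatrix2"), 1, GL_FALSE, glm::value_ptr(texMat2));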
In the following example I would like to manually create points (x, y, angle) from SFML and then fill a circle around each point. The angle will be used later; for now I use it for debugging.
SFML draws 2 points
Vertex shader converts points to the -1..1 range
Geometry shader creates squares at each point position and passes the center to the fragment shader
Fragment shader would paint a circle within each square.
From my understanding, in the geometry shader I emit center, which holds the center coordinates of each primitive. By computing the distance from this center, I should be able to paint a circle in each primitive from the fragment shader.
In the resulting image I notice the center is only set once, and I don't understand why.
SFML App
#include <iostream>
#include <SFML/Graphics.hpp>
#include <vector>
#include <GL/glew.h>
#include <random>
#define WIDTH 800
int main() {
sf::RenderWindow window(sf::VideoMode(WIDTH, WIDTH), "Test");
sf::Shader shader;
shader.loadFromFile("shader.vert", "shader.geom", "shader.frag");
sf::Transform matrix = sf::Transform::Identity;
matrix.scale(1.0 / WIDTH, 1.0 / WIDTH);
sf::Glsl::Mat4 projectionViewMatrix = matrix;
shader.setUniform("projectionViewMatrix", projectionViewMatrix);
std::vector<GLfloat> vertices;
vertices.push_back(400.0); vertices.push_back(400.0); vertices.push_back(0.0);
vertices.push_back(400.0); vertices.push_back(-400.0); vertices.push_back(0.25);
vertices.push_back(-400.0); vertices.push_back(-400.0); vertices.push_back(0.5);
vertices.push_back(-400.0); vertices.push_back(400.0); vertices.push_back(0.75);
while (window.isOpen()) {
sf::Event currEvent;
while (window.pollEvent(currEvent)) {
switch (currEvent.type) {
case(sf::Event::Closed):
window.close(); break;
}
}
window.clear(sf::Color::Black);
glVertexPointer(3, GL_FLOAT, 0, vertices.data());
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_POINTS, 0, vertices.size() / 3);
glDisableClientState(GL_VERTEX_ARRAY);
sf::Shader::bind(&shader);
window.display();
}
}
Vertex Shader
#version 150
in vec3 position;
out vec3 pass_colour;
out float angle;
uniform mat4 projectionViewMatrix;
void main(void) {
gl_Position = projectionViewMatrix * vec4(position.xy, 0.0 ,1.0);
angle = position.z;
pass_colour = vec3(1.0);
}
Geometry shader
#version 150
layout (points) in;
layout (triangle_strip, max_vertices = 6) out;
in vec3 pass_colour[];
in float angle[];
out vec3 finalColour;
out vec4 centerPosition;
uniform mat4 projectionViewMatrix;
vec3 hsv2rgb(vec3 c) {
vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}
void createVertex(vec3 offset, vec3 colour, float z) { // GLSL has no default arguments; z is always passed explicitly
vec4 actualOffset = vec4(offset, z);
vec4 worldPosition = gl_in[0].gl_Position + actualOffset;
gl_Position = worldPosition;
finalColour = colour;
vec4 pointPosition = gl_in[0].gl_Position;
centerPosition = pointPosition;
EmitVertex();
}
void main(void) {
float corner = 0.3;
vec3 colour = hsv2rgb(vec3(angle[0], 1.0, 1.0));
createVertex(vec3(-corner, -corner, 0.0), colour, 0.0);
createVertex(vec3(corner, -corner, 0.0), colour, 0.0);
createVertex(vec3(-corner, corner, 0.0), colour, 0.0);
createVertex(vec3(corner, corner, 0.0), colour, 0.0);
createVertex(vec3(corner, -corner, 0.0), colour, 0.0);
createVertex(vec3(-corner, corner, 0.0), colour, 0.0);
EndPrimitive();
}
Fragment shader
#version 150
in vec3 finalColour;
in vec4 centerPosition;
out vec4 out_Colour;
void main(void){
vec2 resolution = vec2(800.0/2.0, 800.0/2.0);
vec2 uv = gl_FragCoord.xy / resolution.xy;
vec2 uvc = (centerPosition.xy + vec2(1.0)) / 2.0;
float dist = length(uv - uvc);
out_Colour = vec4(finalColour * dist, 0.8);
}
I still can't explain everything, but it works with this fragment shader:
#version 150
in vec3 finalColour;
in vec4 centerPosition;
out vec4 out_Colour;
void main(void){
vec2 resolution = vec2(800.0/2.0, 800.0/2.0);
vec2 uv = gl_FragCoord.xy / resolution.xy;
vec2 p = vec2(1.0, 1.0) + centerPosition.xy;
vec2 uvc = p;
float dist = length(uv - uvc);
float col = 1.0 - smoothstep(0.0, 0.1, dist);
out_Colour = vec4(finalColour * col, 1.0);
}
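For reference, the conventional way to turn a clip-space position such as centerPosition into 0..1 texture-style coordinates is a perspective divide followed by a 0.5 scale and bias; a sketch (in this scene w is 1, so the divide is a no-op, and the fragment above simply works in a doubled 0..2 range instead):
vec2 ndc = centerPosition.xy / centerPosition.w;   // clip space -> NDC, range [-1, 1]
vec2 uvc = ndc * 0.5 + 0.5;                        // NDC -> [0, 1]
vec2 uv  = gl_FragCoord.xy / vec2(800.0, 800.0);   // window coords -> [0, 1], assuming an 800x800 window
float dist = length(uv - uvc);                     // both now live in the same [0, 1] space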
I am trying to use this tutorial for per-fragment shading and adapt it to GLSL #version 140. The results I am getting are obviously not correct. It seems to me that I am doing something wrong with the supplied normals, since there is an abrupt change between light and shade on some triangles that are next to each other on the same plane.
The vertex shader code:
#version 140
in vec3 position;
in vec2 texIn;
in vec3 normal;
smooth out vec2 texCoor;
out vec4 v_position; // position of the vertex (and fragment) in world space
out vec3 NormalDirection; // surface normal vector in world space
uniform mat4 mP, mV, mM; // transformation matrices
uniform mat3 m_3x3_inv_transp;
void main() {
v_position = mM * vec4(position, 1.0);
NormalDirection = normalize(m_3x3_inv_transp * normal);
mat4 mvp = mP*mV*mM;
gl_Position = mvp * vec4(position, 1.0);
texCoor = texIn;
}
The fragment shader:
#version 140
uniform mat4 mM, mV, mP;
uniform mat4 mV_inv;
uniform sampler2D texSampler; // sampler for texture access
smooth in vec2 texCoor; // from Vertex shader
in vec4 v_position; // position of the vertex (and fragment) in world space
in vec3 NormalDirection; // surface normal vector in world space
out vec4 colorOut; // fragment color
struct lightSource {
vec4 position;
vec4 diffuse;
vec4 specular;
float constantAttenuation, linearAttenuation, quadraticAttenuation;
float spotCutoff, spotExponent;
vec3 spotDirection;
};
lightSource light0 = lightSource(
vec4(5.0, 5.0, 5.0, 1.0),
vec4(2.0, 2.0, 2.0, 1.0),
vec4(2.0, 2.0, 2.0, 1.0),
0.0, 1.0, 0.0,
180.0, 0.0,
vec3(0.0, 0.0, 0.0)
);
vec4 scene_ambient = vec4(1.2, 1.2, 1.2, 1.0);
struct material {
vec4 ambient;
vec4 diffuse;
vec4 specular;
float shininess;
};
material frontMaterial = material(
vec4(0.2, 0.2, 0.2, 1.0),
vec4(1.0, 0.8, 0.8, 1.0),
vec4(1.0, 1.0, 1.0, 1.0),
5.0
);
void main() {
vec3 normalDirection = normalize(NormalDirection);
vec3 viewDirection = normalize(vec3(mV_inv * vec4(0.0, 0.0, 0.0, 1.0) - v_position));
vec3 lightDirection;
float attenuation;
if (0.0 == light0.position.w) // directional light?
{
attenuation = 1.0; // no attenuation
lightDirection = normalize(vec3(light0.position));
}
else // point light or spotlight (or other kind of light)
{
vec3 positionToLightSource = vec3(light0.position - v_position);
float distance = length(positionToLightSource);
lightDirection = normalize(positionToLightSource);
attenuation = 1.0 / (light0.constantAttenuation
+ light0.linearAttenuation * distance
+ light0.quadraticAttenuation * distance * distance);
if (light0.spotCutoff <= 90.0) // spotlight?
{
float clampedCosine = max(0.0, dot(-lightDirection, light0.spotDirection));
if (clampedCosine < cos(radians(light0.spotCutoff))) // outside of spotlight cone?
{
attenuation = 0.0;
}
else
{
attenuation = attenuation * pow(clampedCosine, light0.spotExponent);
}
}
}
vec3 ambientLighting = vec3(scene_ambient) * vec3(frontMaterial.ambient);
vec3 diffuseReflection = attenuation
* vec3(light0.diffuse) * vec3(frontMaterial.diffuse)
* max(0.0, dot(normalDirection, lightDirection));
vec3 specularReflection;
if (dot(normalDirection, lightDirection) < 0.0) // light source on the wrong side?
{
specularReflection = vec3(0.0, 0.0, 0.0); // no specular reflection
}
else // light source on the right side
{
specularReflection = attenuation * vec3(light0.specular) * vec3(frontMaterial.specular)
* pow(max(0.0, dot(reflect(-lightDirection, normalDirection), viewDirection)), frontMaterial.shininess);
}
colorOut = vec4(ambientLighting + diffuseReflection + specularReflection, 1.0) * texture(texSampler, texCoor);
}
C++ code (uniforms):
// matrices to vertex shader
glm::mat4 modelMatrix = glm::mat4(1.0); // identity matrix - table is static
glUniformMatrix4fv(locations.Mmatrix, 1, GL_FALSE, glm::value_ptr(modelMatrix));
glUniformMatrix4fv(locations.Pmatrix, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
glUniformMatrix4fv(locations.Vmatrix, 1, GL_FALSE, glm::value_ptr(viewMatrix));
glm::mat3 m_inv_transp = glm::transpose(glm::inverse(glm::mat3(modelMatrix)));
glUniformMatrix3fv(locations.m_3x3_inv_transp, 1, GL_FALSE, glm::value_ptr(m_inv_transp));
glm::mat3 v_inv = glm::inverse(glm::mat3(viewMatrix));
glUniformMatrix4fv(locations.Vmatrix_inv, 1, GL_FALSE, glm::value_ptr(v_inv));
I'm having difficulties understanding the math between the different shader stages.
In the fragment shader from the light's perspective, I basically write out the fragment depth as an RGB color:
#version 330
out vec4 shader_fragmentColor;
void main()
{
shader_fragmentColor = vec4(gl_FragCoord.z, gl_FragCoord.z, gl_FragCoord.z, 1);
//shader_fragmentColor = vec4(1, 0.5, 0.5, 1);
}
When rendering the scene using the above shader, it displays the scene in an all-white color. I suppose that's because gl_FragCoord.z is bigger than 1; hopefully it's not maxed out at 1, but we can leave that question alone for now.
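For what it's worth, gl_FragCoord.z is already limited to the 0..1 depth range; with a perspective projection most of the scene simply lands very close to 1.0, which looks all white. A minimal sketch of a linearized visualization, assuming near/far planes of 0.1 and 100.0 for the light's projection:
// Convert the nonlinear window-space depth back to a linear eye-space distance.
float linearizeDepth(float z, float near, float far)
{
    float ndcZ = z * 2.0 - 1.0;                                  // window depth -> NDC depth
    return (2.0 * near * far) / (far + near - ndcZ * (far - near));
}
// e.g. shader_fragmentColor = vec4(vec3(linearizeDepth(gl_FragCoord.z, 0.1, 100.0) / 100.0), 1.0);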
In the geometry shader from the camera's perspective, I basically turn all points into quads and write out the probably "incorrect" texture position to look up in the light texture. The math here is the question. I'm also a bit unsure whether the interpolated value will be correct in the next shader stage.
#version 330
#extension GL_EXT_geometry_shader4 : enable
uniform mat4 p1_modelM;
uniform mat4 p1_cameraPV;
uniform mat4 p1_lightPV;
out vec4 shader_lightTexturePosition;
void main()
{
float s = 10.00;
vec4 llCorner = vec4(-s, -s, 0.0, 0.0);
vec4 llWorldPosition = ((p1_modelM * llCorner) + gl_in[0].gl_Position);
gl_Position = p1_cameraPV * llWorldPosition;
shader_lightTexturePosition = p1_lightPV * llWorldPosition;
EmitVertex();
vec4 rlCorner = vec4(+s, -s, 0.0, 0.0);
vec4 rlWorldPosition = ((p1_modelM * rlCorner) + gl_in[0].gl_Position);
gl_Position = p1_cameraPV * rlWorldPosition;
shader_lightTexturePosition = p1_lightPV * rlWorldPosition;
EmitVertex();
vec4 luCorner = vec4(-s, +s, 0.0, 0.0);
vec4 luWorldPosition = ((p1_modelM * luCorner) + gl_in[0].gl_Position);
gl_Position = p1_cameraPV * luWorldPosition;
shader_lightTexturePosition = p1_lightPV * luWorldPosition;
EmitVertex();
vec4 ruCorner = vec4(+s, +s, 0.0, 0.0);
vec4 ruWorldPosition = ((p1_modelM * ruCorner) + gl_in[0].gl_Position);
gl_Position = p1_cameraPV * ruWorldPosition;
shader_lightTexturePosition = p1_lightPV * ruWorldPosition;
EmitVertex();
EndPrimitive();
}
In the fragment shader from the camera's perspective, I basically look up in the light texture what color would be shown from the light's perspective and write out the same color.
#version 330
uniform sampler2D p1_lightTexture;
in vec4 shader_lightTexturePosition;
out vec4 shader_fragmentColor;
void main()
{
vec4 lightTexel = texture2D(p1_lightTexture, shader_lightTexturePosition.xy);
shader_fragmentColor = lightTexel;
/*
if(lightTexel.x < shader_lightTexturePosition.z)
shader_fragmentColor = vec4(1, 0, 0, 1);
else
shader_fragmentColor = vec4(0, 1, 0, 1);
*/
//shader_fragmentColor = vec4(1, 1, 1, 1);
}
When rendering from the camera's perspective, I see the scene drawn as it should be, but with incorrect, repeating texture coordinates applied to it. The repeating texture is probably caused by the texture coordinates being outside the range 0 to 1.
I've tried several things but still fail to understand what the math should be. Some of the commented-out code, and one example I'm unsure of, is:
shader_lightTexturePosition = normalize(p1_lightPV * llWorldPosition) / 2 + vec4(0.5, 0.5, 0.5, 0.5);
for the lower-left corner, with similar code for the other corners.
From the solution I expect the scene to be rendered from the camera's perspective with exactly the same colors as from the light's perspective, with perhaps some precision error.
I figured out the texture mapping bit myself; the depth value bit is still a bit strange.
Convert the screen-projected coordinates to normalized device coordinates, then add 1 and divide by 2:
vec4 textureNormalizedCoords(vec4 screenProjected)
{
vec3 normalizedDeviceCoords = (screenProjected.xyz / screenProjected.w);
return vec4( (normalizedDeviceCoords.xy + 1.0) / 2.0, screenProjected.z * 0.005, 1/screenProjected.w);
}
void main()
{
float s = 10.00;
vec4 llCorner = vec4(-s, -s, 0.0, 0.0);
vec4 llWorldPosition = ((p1_modelM * llCorner) + gl_in[0].gl_Position);
gl_Position = p1_cameraPV * llWorldPosition;
shader_lightTextureCoords = textureNormalizedCoords(p1_lightPV * llWorldPosition);
EmitVertex();
// ... the remaining corners follow the same pattern
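For the depth component, the same divide-and-bias that fixed the xy coordinates applies; a sketch of the conventional form (the screenProjected.z * 0.005 above is an ad-hoc scale, not the standard mapping):
vec4 textureNormalizedCoords(vec4 screenProjected)
{
    // Perspective divide: clip space -> NDC, all three components in [-1, 1].
    vec3 ndc = screenProjected.xyz / screenProjected.w;
    // Scale/bias to [0, 1]: xy become texture coordinates, z becomes a depth value
    // comparable to what the light pass wrote into its texture.
    return vec4((ndc + 1.0) / 2.0, 1.0);
}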
I have implemented shadow mapping with an FBO and GLSL.
It is used on a heightfield; that is, some objects (trees, plants, ...) cast shadows onto the heightfield.
The problem I have is that the shadows are only visible on the ground of the heightfield, i.e. where the heightfield's height = 0. As soon as there is some height involved, the shadows disappear. If I look at the shadow map itself, everything looks fine: objects that are closer to the light are darker.
Here is my GLSL vertex shader:
uniform mat4 lightView, lightProjection;
const mat4 biasMatrix = mat4( 0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0); //bias from [-1, 1] to [0, 1]
void main()
{
gl_Position = ftransform();
mat4 shadowMatrix = biasMatrix * lightProjection * lightView;
shadowTexCoord = shadowMatrix * gl_Vertex;
}
Fragment shader:
uniform sampler2DShadow shadowmap;
varying vec4 shadowTexCoord;
void main()
{
vec4 shadow = shadow2DProj(shadowmap, shadowTexCoord, 0.0);
float colorshadow = shadow.r < 0.1 ? 0.5 : 1.0;
vec4 color = vec4(1,1,1,1);
gl_FragColor = vec4( color*colorshadow, color.w );
}
Thanks a lot for any help on this!
I think there might be some confusion between the different spaces here. As written, it looks like your code would only work if gl_ModelViewMatrix for the ground contains only camera transformations. This is because ftransform basically goes
gl_Position = gl_ProjectionMatrix * (gl_ModelViewMatrix * gl_Vertex)
That means that gl_Vertex is specified in object coordinates. However, the light's view matrix typically maps from world coordinates to the light's view space, so this code would only work if object space = world space. So, say you scale the terrain: then object space doesn't equal world space anymore. Because of this you need to separate gl_ModelViewMatrix into two parts: the camera view matrix and the modeling transform (i.e. object -> world space).
I haven't tested this code, but I would try something like this:
uniform mat4 lightView, lightProjection;
uniform mat4 camView, camProj, modelTrans;
varying vec4 shadowTexCoord; // pass the shadow-map lookup coordinate on to the fragment shader
const mat4 biasMatrix = mat4( 0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0); //bias from [-1, 1] to [0, 1]
void main()
{
mat4 modelViewProjMatrix = camProj * camView * modelTrans;
gl_Position = modelViewProjMatrix * gl_Vertex;
mat4 shadowMatrix = biasMatrix * lightProjection * lightView * modelTrans;
shadowTexCoord = shadowMatrix * gl_Vertex;
}
Technically it's faster to multiply the matrices on the CPU and only pass the exact ones you need, but while getting things working it's sometimes easier to do it this way.
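For example, a sketch of that CPU-side precomputation with GLM (shadowMatrixLoc and the matrix variables are assumptions, not from the original code); the vertex shader then only needs shadowTexCoord = shadowMatrix * gl_Vertex;
// Same bias as in the shader, written column-major for glm::mat4.
glm::mat4 bias(0.5f, 0.0f, 0.0f, 0.0f,
               0.0f, 0.5f, 0.0f, 0.0f,
               0.0f, 0.0f, 0.5f, 0.0f,
               0.5f, 0.5f, 0.5f, 1.0f);
// One matrix takes a vertex straight from object space into biased light clip space.
glm::mat4 shadowMatrix = bias * lightProjection * lightView * modelTrans;
glUniformMatrix4fv(shadowMatrixLoc, 1, GL_FALSE, glm::value_ptr(shadowMatrix));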
Maybe you just missed it copy-pasting, but I don't see shadowTexCoord as varying in the vertex shader. This should result in a compilation error, though.