Optimizing cube rendering with a geometry shader - OpenGL

In my first OpenGL 'voxel' project I'm using a geometry shader to create cubes from GL_POINTS, and it works pretty well, but I'm sure it can be done better. In the alpha channel of the color I pass flags for which faces should be rendered (to skip faces adjacent to other cubes); the vertices for the visible faces are then created from a 'reference' cube definition. Every point is multiplied by three matrices. Instinct tells me that maybe the whole face could be multiplied by them instead of every point, but my math skills are poor, so please advise.
#version 330
layout (points) in;
layout (triangle_strip, max_vertices = 24) out;

smooth out vec4 oColor;

in VertexData
{
    vec4 colour;
    //vec3 normal;
} vertexData[];

uniform mat4 cameraToClipMatrix;
uniform mat4 worldToCameraMatrix;
uniform mat4 modelToWorldMatrix;

const vec4 cubeVerts[8] = vec4[8](
    vec4(-0.5, -0.5, -0.5, 1), //LB 0
    vec4(-0.5,  0.5, -0.5, 1), //LT 1
    vec4( 0.5, -0.5, -0.5, 1), //RB 2
    vec4( 0.5,  0.5, -0.5, 1), //RT 3
    //back face
    vec4(-0.5, -0.5,  0.5, 1), //LB 4
    vec4(-0.5,  0.5,  0.5, 1), //LT 5
    vec4( 0.5, -0.5,  0.5, 1), //RB 6
    vec4( 0.5,  0.5,  0.5, 1)  //RT 7
);

const int cubeIndices[24] = int[24]
(
    0, 1, 2, 3, //front
    7, 6, 3, 2, //right
    7, 5, 6, 4, //back or whatever
    4, 0, 6, 2, //btm
    1, 0, 5, 4, //left
    3, 1, 7, 5  //top
);
void main()
{
    vec4 temp;
    int a = int(vertexData[0].colour[3]);
    //btm face
    if (a > 31)
    {
        for (int i = 12; i < 16; i++)
        {
            int v = cubeIndices[i];
            temp = modelToWorldMatrix * (gl_in[0].gl_Position + cubeVerts[v]);
            temp = worldToCameraMatrix * temp;
            gl_Position = cameraToClipMatrix * temp;
            //oColor = vertexData[0].colour;
            //oColor[3] = 1;
            oColor = vec4(1, 1, 1, 1);
            EmitVertex();
        }
        a = a - 32;
        EndPrimitive();
    }
    //top face
    if (a > 15)
    ...
}
------- Updated code -------
//one matrix to transform them all
mat4 mvp = cameraToClipMatrix * worldToCameraMatrix * modelToWorldMatrix;
//transform and store the cube verts for later use
vec4 transVerts[8];
for (int i = 0; i < 8; i++)
{
    transVerts[i] = mvp * (gl_in[0].gl_Position + cubeVerts[i]);
}
//btm face
if (a > 31)
{
    for (int i = 12; i < 16; i++)
    {
        int v = cubeIndices[i];
        gl_Position = transVerts[v];
        oColor = vertexData[0].colour * 0.55;
        //oColor = vertexData[0].colour;
        EmitVertex();
    }
    a = a - 32;
    EndPrimitive();
}

In OpenGL you don't work with faces (or lines, for that matter), so you can't apply a transformation to a face. You need to apply it to the vertices that compose that face, as you're doing.
One possible optimization is that you don't need to keep the three matrix transformations separate. If you multiply the matrices together once in your application code and pass the product as a single uniform into your shader, that will save some time.
Another optimization would be to transform the eight cube vertices in a loop at the beginning, store them in a local array, and then reference the transformed positions in your if logic. Right now, if you render every face of the cube, you transform 24 vertices, each one three times.
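Building on both of those points, here is a minimal sketch that decodes the face flags with bit tests instead of the ordered subtract-and-compare cascade. GLSL 3.30 supports bitwise operators on integers; the assumption on my part is that the flags really are bit-packed powers of two, which the a > 31 / a = a - 32 pattern implies. It reuses the transVerts[] array from the updated code above:
const int FACE_BTM = 32; //same bit the old a > 31 test checked
int faceMask = int(vertexData[0].colour[3]);
if ((faceMask & FACE_BTM) != 0)
{
    for (int i = 12; i < 16; i++)
    {
        gl_Position = transVerts[cubeIndices[i]];
        oColor = vertexData[0].colour * 0.55;
        EmitVertex();
    }
    EndPrimitive();
}
Unlike the subtract chain, each face test is independent of the others, so the face blocks can appear in any order and a face can be skipped without breaking the tests that follow.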

Related

How to assign dynamic color to vertex?

I am working on a simple OpenGL application built on the base of the Triangle Tutorial. For fun, I decided to add a third value, so the data format is (data, x_axis, y_axis). data acts as a z-axis value, but the z-axis can be kept at zero. I want to draw just the data at the point where x_axis and y_axis point in the 2D plane; for example, if (x, y) = (0.23, 0.2) and the data value is 0.23, I want to place that data value at the marked (x, y). However, the shaders I have written (fairly basic) are not working properly, or I am missing something.
Every x_axis and y_axis has data scanned over a range of angles: x_axis lies between 30 and 120, whereas y_axis lies between 30 and 110.
I'm using vec4 since I am considering 4 components, of which z remains 0; I will remove the z-axis completely.
Can I draw the data separately, after the vertices are drawn?
Vertex shader:
#version 460 core
uniform mat4 model;
uniform mat4 projection;
in vec4 pos;
out vec4 cvColor;
void main()
{
    if (pos.z <= 0.10)
    {
        cvColor = vec4(0.323, 0.242, 0.22, 1.0);
    }
    else if (pos.z > 0.11 && pos.z <= 0.20)
    {
        cvColor = vec4(0.322, 0.241, 0.3, 1.0);
    }
    else if (pos.z > 0.21 && pos.z <= 0.30)
    {
        cvColor = vec4(0.453, 0.245, 0.33, 1.0);
    }
    else
    {
        cvColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
    gl_Position = projection * model * vec4(pos.x, pos.y, pos.z, 1.0);
}
Fragment shader:
#version 460 core
uniform vec4 cvColor;
out vec4 color;
void main()
{
    color = cvColor;
}
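One thing stands out, assuming the shaders are shown in full: the vertex shader declares out vec4 cvColor, but the fragment shader declares cvColor as a uniform. A per-vertex output only reaches the fragment stage through a matching in variable; as a uniform it is a separate, never-written variable (so it reads as black), and depending on the driver the program may even fail to link. A sketch of the corrected fragment shader:
#version 460 core
in vec4 cvColor; //must match the vertex shader's "out vec4 cvColor"
out vec4 color;
void main()
{
    color = cvColor;
}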

How to draw TRIANGLE_FAN with geometry shader created coordinates? (GLSL 3.3)

I want to draw multiple fans with a GS. Each fan should billboard toward the camera at all times, which makes it necessary to multiply each vertex by the MVP matrix.
Since each fan is movable by the user, I came up with the idea of feeding the GS the position.
The following geometry shader works as expected with points as input and output:
uniform mat4 VP;
uniform mat4 sharedModelMatrix;

const int STATE_VERTEX_NUMBER = 38;
layout (shared) uniform stateShapeData {
    vec2 data[STATE_VERTEX_NUMBER];
};

layout (triangles) in;
layout (triangle_strip, max_vertices = 80) out;

void main(void)
{
    int i;
    mat4 modelMatrix = sharedModelMatrix;
    modelMatrix[3] = gl_in[0].gl_Position;
    mat4 MVP = VP * modelMatrix;
    gl_Position = MVP * vec4(0, 0, 0, 1);
    EmitVertex(); // epicenter
    for (i = 37; i >= 0; i--) {
        gl_Position = MVP * vec4(data[i], 0, 1);
        EmitVertex();
    }
    gl_Position = MVP * vec4(data[0], 0, 1);
    EmitVertex();
}
I tried to run this with glDrawElements, glDrawArrays and glMultiDrawArrays. None of these commands draws the full fan; each draws the first triangle filled and the remaining vertices as points.
So the bottom-line question is: is it possible to draw a fan with GS-created vertices, and how?
Outputting fans in a Geometry Shader is very unnatural, as you have discovered.
You are currently outputting the vertices in fan-order, which is a construct that is completely foreign to GPUs after primitive assembly. Fans are useful as assembler input, but as far as output is concerned the rasterizer only understands the concept of strips.
To write this shader properly, you need to decompose this fan into a series of individual triangles. That means the loop you wrote is actually going to output the epicenter on each iteration.
void main(void)
{
    int i;
    mat4 modelMatrix = sharedModelMatrix;
    modelMatrix[3] = gl_in[0].gl_Position;
    mat4 MVP = VP * modelMatrix;
    // stop at i = 1 so data[i-1] stays in bounds; note that 37 triangles
    // emit 111 vertices, so max_vertices must be declared at least that large
    for (i = 37; i >= 1; i--) {
        gl_Position = MVP * vec4(0, 0, 0, 1);
        EmitVertex(); // epicenter
        gl_Position = MVP * vec4(data[i], 0, 1);
        EmitVertex();
        gl_Position = MVP * vec4(data[i - 1], 0, 1);
        EmitVertex();
        // Fan and strip DNA just won't splice
        EndPrimitive();
    }
}
You cannot exploit strip-ordering when drawing this way; you wind up having to end the output primitive (strip) multiple times. About the only benefit you get from drawing in fan-order is cache locality within the loop. If you understand that geometry shaders are expected to output triangle strips, why not order your input vertices that way to begin with? A sketch of that idea follows.
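For instance, if the outline in stateShapeData describes a convex shape (an assumption on my part; a concave outline still needs the per-triangle decomposition above), you can drop the epicenter entirely and emit a single strip by zig-zagging between the two ends of the outline. You can either reorder data[] on the CPU or, equivalently, zig-zag the indices in the shader:
void main(void)
{
    mat4 modelMatrix = sharedModelMatrix;
    modelMatrix[3] = gl_in[0].gl_Position;
    mat4 MVP = VP * modelMatrix;
    // strip order for a convex outline: p0, p37, p1, p36, p2, ...
    int lo = 0, hi = 37;
    for (int k = 0; k < 38; k++) {
        int idx = (k % 2 == 0) ? lo++ : hi--;
        gl_Position = MVP * vec4(data[idx], 0, 1);
        EmitVertex();
    }
    EndPrimitive(); // one strip, 38 vertices, well under max_vertices = 80
}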

Shadow mapping math between shader stages - OpenGL / GLSL

I'm having difficulty understanding the math between the different shader stages.
In the fragment shader from the light's perspective, I basically write out the fragment depth as an RGB color:
#version 330
out vec4 shader_fragmentColor;
void main()
{
    shader_fragmentColor = vec4(gl_FragCoord.z, gl_FragCoord.z, gl_FragCoord.z, 1);
    //shader_fragmentColor = vec4(1, 0.5, 0.5, 1);
}
When rendering the scene using the above shader, the scene displays all in white. I suppose that's because gl_FragCoord.z is bigger than 1; hopefully it's not maxed out at 1, but we can leave that question alone for now.
In the geometry shader from the camera's perspective, I basically turn all points into quads and write out the (probably incorrect) texture position to look up in the light texture. The math here is the question. I'm also a bit unsure whether the interpolated value will be correct in the next shader stage.
#version 330
#extension GL_EXT_geometry_shader4 : enable
uniform mat4 p1_modelM;
uniform mat4 p1_cameraPV;
uniform mat4 p1_lightPV;
out vec4 shader_lightTexturePosition;
void main()
{
    float s = 10.00;
    vec4 llCorner = vec4(-s, -s, 0.0, 0.0);
    vec4 llWorldPosition = ((p1_modelM * llCorner) + gl_in[0].gl_Position);
    gl_Position = p1_cameraPV * llWorldPosition;
    shader_lightTexturePosition = p1_lightPV * llWorldPosition;
    EmitVertex();
    vec4 rlCorner = vec4(+s, -s, 0.0, 0.0);
    vec4 rlWorldPosition = ((p1_modelM * rlCorner) + gl_in[0].gl_Position);
    gl_Position = p1_cameraPV * rlWorldPosition;
    shader_lightTexturePosition = p1_lightPV * rlWorldPosition;
    EmitVertex();
    vec4 luCorner = vec4(-s, +s, 0.0, 0.0);
    vec4 luWorldPosition = ((p1_modelM * luCorner) + gl_in[0].gl_Position);
    gl_Position = p1_cameraPV * luWorldPosition;
    shader_lightTexturePosition = p1_lightPV * luWorldPosition;
    EmitVertex();
    vec4 ruCorner = vec4(+s, +s, 0.0, 0.0);
    vec4 ruWorldPosition = ((p1_modelM * ruCorner) + gl_in[0].gl_Position);
    gl_Position = p1_cameraPV * ruWorldPosition;
    shader_lightTexturePosition = p1_lightPV * ruWorldPosition;
    EmitVertex();
    EndPrimitive();
}
In the fragment shader from the camera's perspective, I basically look up in the light texture what color would be shown from the light's perspective and write out the same color.
#version 330
uniform sampler2D p1_lightTexture;
in vec4 shader_lightTexturePosition;
out vec4 shader_fragmentColor;
void main()
{
    vec4 lightTexel = texture2D(p1_lightTexture, shader_lightTexturePosition.xy);
    shader_fragmentColor = lightTexel;
    /*
    if (lightTexel.x < shader_lightTexturePosition.z)
        shader_fragmentColor = vec4(1, 0, 0, 1);
    else
        shader_fragmentColor = vec4(0, 1, 0, 1);
    */
    //shader_fragmentColor = vec4(1, 1, 1, 1);
}
When rendering from the camera's perspective, I see the scene drawn as it should be, but with incorrect texture coordinates applied to it that repeat. The repeating texture is probably caused by the texture coordinates being outside the bounds of 0 to 1.
I've tried several things but still fail to understand what the math should be. Some of the commented-out code, and one example I'm unsure of:
shader_lightTexturePosition = normalize(p1_lightPV * llWorldPosition) / 2 + vec4(0.5, 0.5, 0.5, 0.5);
for the lower-left corner; similar code for the other corners.
From the solution I expect the scene to be rendered from the camera's perspective with exactly the same colors as from the light's perspective, give or take some precision error.
I figured out the texture mapping bit myself; the depth value bit is still a bit strange.
Convert the screen-projected coordinates to normalized device coordinates, then add 1 and divide by 2:
vec4 textureNormalizedCoords(vec4 screenProjected)
{
    vec3 normalizedDeviceCoords = (screenProjected.xyz / screenProjected.w);
    return vec4((normalizedDeviceCoords.xy + 1.0) / 2.0, screenProjected.z * 0.005, 1 / screenProjected.w);
}
void main()
{
    float s = 10.00;
    vec4 llCorner = vec4(-s, -s, 0.0, 0.0);
    vec4 llWorldPosition = ((p1_modelM * llCorner) + gl_in[0].gl_Position);
    gl_Position = p1_cameraPV * llWorldPosition;
    shader_lightTextureCoords = textureNormalizedCoords(p1_lightPV * llWorldPosition);
    EmitVertex();
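For completeness, a sketch of the standard mapping (not the original poster's final code): leave the perspective divide to the fragment shader, since dividing in the geometry shader and then letting the rasterizer interpolate the result is not perspective-correct, which is exactly the interpolation worry raised above. Depth needs no special scale factor; the same (x + 1) / 2 remap that works for xy also takes NDC z into [0, 1], which is what the depth texture stores. (gl_FragCoord.z already lies in [0, 1] with the default depth range, so the near-white first pass just reflects the usual nonlinear depth distribution, not values above 1.)
//geometry shader: pass the raw light-space clip position through
shader_lightTexturePosition = p1_lightPV * llWorldPosition;

//fragment shader: divide per fragment, then remap NDC [-1, 1] to [0, 1]
vec3 ndc = shader_lightTexturePosition.xyz / shader_lightTexturePosition.w;
vec3 lightCoords = ndc * 0.5 + 0.5;
float storedDepth = texture(p1_lightTexture, lightCoords.xy).r;
float bias = 0.005; //small offset against shadow acne
bool inShadow = lightCoords.z - bias > storedDepth;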

GLSL/OpenGL 2.1: Specular Lighting using Uniforms

So, I've begun a quest to implement awesome lighting without using OpenGL's built-in lighting system. I've successfully implemented Phong diffuse lighting; specular is giving me trouble.
I need to know what spaces the OpenGL constants I'm using occupy, because it seems they are mis-transformed, and that results in lighting glitches.
I have confirmed that there is no problem with my C++ code by successfully loading and running a Phong diffuse shader. The C++ code may, however, be passing invalid data to the shaders, which is one of the things I'm worried about. I will paste my shaders with comments, as well as all C++ code directly pertaining to the shaders (although I'm 90% sure the problem is in the shaders).
In these images, the light sources are large points, and the axes are shown.
The lights rotate at y = 0 around an icosphere.
Here's the diffuse, so you get an idea what the model is.
Note I haven't done per-pixel lighting yet.
Here's the Fresnel lighting, as shown in the source.
Note how the lit faces face the light, not somewhere between the light and the camera.
Here's the Blinn-Phong, which I had to multiply by 30.
Note again how the lit faces point towards the light source, and also that I had to multiply the specular factor (S) by 30 to achieve this.
Vertex Shader Source (loaded from "dirlight.vs")
const int MAXLIGHTS = 4;
uniform bool justcolor = false;
uniform int lightcount;
uniform vec4 lightposs[MAXLIGHTS];
uniform vec4 lightdirs[MAXLIGHTS];
uniform vec4 lightdifs[MAXLIGHTS];
uniform vec4 lightambs[MAXLIGHTS];
//diffuse
vec4 D;
//specular, normal dot light
float S, NdotL[MAXLIGHTS];
//normal, eye vector, light vectors, half vectors
vec3 N, E, L[MAXLIGHTS], H[MAXLIGHTS];
void main() {
    //if(lightcount > MAXLIGHTS) lightcount = MAXLIGHTS;
    D = vec4(0.0, 0.0, 0.0, 0.0);
    S = 0.0;
    N = gl_Normal;
    E = normalize(vec3(-gl_Vertex));
    for (int i = 0; i < lightcount; i++)
    {
        //calculating direction to light source
        L[i] = normalize(vec3(lightposs[i] - gl_Vertex));
        //normal dotted with direction to light source
        NdotL[i] = max(dot(N, L[i]), 0.0);
        //diffuse term, works just fine
        D += gl_Color * lightdifs[i] * NdotL[i];
        if (NdotL[i] >= 0.0)
        {
            //half vector = normalize(lightdir + eyedir)
            H[i] = normalize(L[i] + E);
            //Blinn-Phong, only lights up faces whose normals
            //point directly to the light source for some reason...
            //S += max(0.0, dot(H[i], N));
            //Fresnel, lights up more than Blinn-Phong,
            //but the faces still point directly to the light source,
            //not somewhere between the light source and myself, like they should.
            S += pow(max(0.0, dot(reflect(L[i], N), E)), 50.0);
        }
        else
        {
            H[i] = vec3(0.0, 0.0, 0.0);
        }
    }
    //currently only showing specular. To show diffuse, add D.
    gl_FrontColor = justcolor ? gl_Color : vec4(S * 0.3, S * 0.3, S * 0.3, 1.0);
    gl_Position = ftransform();
}
Fragment Shader Source (loaded from "dirlight.fs")
void main()
{
    gl_FragColor = gl_Color;
}
Excerpt from C++ main initialization...
//class program manages shaders
Program shaders = Program();
//attach a vertex shader, compiled from source in dirlight.vs
shaders.addShaderFile(GL_VERTEX_SHADER, "dirlight.vs");
//attach a fragment shader compiled from source in dirlight.fs
shaders.addShaderFile(GL_FRAGMENT_SHADER, "dirlight.fs");
//link program
shaders.link();
//use program
shaders.use();
//Program::getUniformLoc(const char* name) grabs the location
//of the uniform specified
GLint sTime = shaders.getUniformLoc("time");
GLint lightcount = shaders.getUniformLoc("lightcount");
GLint lightdir = shaders.getUniformLoc("lightdirs");
GLint lightdif = shaders.getUniformLoc("lightdifs");
GLint lightamb = shaders.getUniformLoc("lightambs");
GLint lightpos = shaders.getUniformLoc("lightposs");
GLint justcolor = shaders.getUniformLoc("justcolor");
glUniform1i(justcolor, 0);
glUniform1i(lightcount, 2);
//diffuse light colors
GLfloat lightdifs[] = {1.f, 1.f, 1.f, 1.f,
1.f, 1.f, 1.f, 1.f};
glUniform4fv(lightdif, 2, lightdifs);
glUniform4f(lightamb, 0.4f, 0.4f, 0.4f, 1.f);
Excerpt from C++ main loop...
//My lights rotate around the origin, where I have placed an icosphere
GLfloat lightposs[] = {-4 * sinf(newTime), lighth, -4 * cosf(newTime), 0.0f,
-4 * sinf(newTime + M_PI), lighth, -4 * cosf(newTime + M_PI), 0.0f};
glUniform4fv(lightpos, 2, lightposs);
There are a few important things missing from your code. First, you should transform the vertex position and normal into eye space; lighting calculations are easiest there. The vertex position transforms by the modelview matrix, while normals transform by the transposed inverse of the modelview. Light positions are usually given in world coordinates, so it makes sense to supply an additional matrix that takes world coordinates to eye coordinates.
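As a sketch of what that looks like with the GLSL 1.20-era built-ins the question already uses (worldToEye is an assumed extra uniform holding the world-to-eye matrix):
uniform mat4 worldToEye; //assumed: world -> eye (view) matrix, supplied by the application
//inside main():
vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);    //eye-space position
vec3 N = normalize(gl_NormalMatrix * gl_Normal);  //eye-space normal (transposed inverse)
vec3 E = normalize(-P);                           //the eye sits at the origin in eye space
//note: the C++ excerpt uploads the light positions with w = 0, which makes them
//transform as directions; a point light needs w = 1
vec3 Ldir = normalize(vec3(worldToEye * vec4(lightposs[i].xyz, 1.0)) - P);
With everything in one space, reflect(-Ldir, N) and the half vector normalize(Ldir + E) behave as expected.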

GLSL Checkerboard Pattern

I want to shade the quad with a checkerboard pattern:
f(P) = (floor(Px) + floor(Py)) mod 2.
My quad is:
glBegin(GL_QUADS);
glVertex3f(0, 0, 0.0);
glVertex3f(4, 0, 0.0);
glVertex3f(4, 4, 0.0);
glVertex3f(0, 4, 0.0);
glEnd();
The vertex shader file:
varying float factor;
float x, y;
void main() {
    x = floor(gl_Position.x);
    y = floor(gl_Position.y);
    factor = mod((x + y), 2.0);
}
And the fragment shader file is:
varying float factor;
void main() {
    gl_FragColor = vec4(factor, factor, factor, 1.0);
}
But I'm getting the wrong result.
It seems that the mod function doesn't work, or maybe something else is off.
Any help?
It is better to calculate this effect in the fragment shader; something like this:
vertex program =>
varying vec2 texCoord;
void main(void)
{
    gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0);
    gl_Position = sign(gl_Position);
    texCoord = (vec2(gl_Position.x, gl_Position.y) + vec2(1.0)) / vec2(2.0);
}
fragment program =>
#extension GL_EXT_gpu_shader4 : enable
uniform sampler2D Texture0;
varying vec2 texCoord;
void main(void)
{
    ivec2 size = textureSize2D(Texture0, 0);
    float total = floor(texCoord.x * float(size.x)) +
                  floor(texCoord.y * float(size.y));
    bool isEven = mod(total, 2.0) == 0.0;
    vec4 col1 = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 col2 = vec4(1.0, 1.0, 1.0, 1.0);
    gl_FragColor = (isEven) ? col1 : col2;
}
Good luck!
Try this function in your fragment shader:
vec3 checker(in float u, in float v)
{
    float checkSize = 2.0;
    float fmodResult = mod(floor(checkSize * u) + floor(checkSize * v), 2.0);
    float fin = max(sign(fmodResult), 0.0);
    return vec3(fin, fin, fin);
}
Then in main you can call it using:
vec3 check = checker(fs_vertex_texture.x, fs_vertex_texture.y);
and simply pass the x and y you are getting from the vertex shader. All you have to do after that is include it when calculating your vFragColor.
Keep in mind that you can change the checker size simply by modifying the checkSize value.
What your code does is calculate the factor four times (once for each vertex, since it's vertex shader code), interpolate those values (because factor is written into a varying variable), and then output that variable as a color in the fragment shader.
So it doesn't work that way. You need to do the calculation directly in the fragment shader. You can get the fragment position from the gl_FragCoord built-in variable in the fragment shader.
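For example, a minimal fragment shader along those lines (the 40-pixel cell size is an arbitrary choice for illustration):
void main() {
    float cell = 40.0; //checker cell size in pixels (arbitrary)
    float f = mod(floor(gl_FragCoord.x / cell) + floor(gl_FragCoord.y / cell), 2.0);
    gl_FragColor = vec4(f, f, f, 1.0);
}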
May I suggest the following:
float result = mod(dot(vec2(1.0), step(vec2(0.5), fract(v_uv * u_repeat))), 2.0);
v_uv is a vec2 of UV values, and u_repeat is a vec2 of how many times the pattern should repeat on each axis.
result is 0 or 1; you can use it in the mix function to choose between colors, for example:
gl_FragColor = mix(vec4(1.0, 1.0, 1.0, 1.0), vec4(0.0, 0.0, 0.0, 1.0), result);
Another nice way to do it is by just tiling a known pattern (zooming out). Assuming you have a square canvas:
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord / iResolution.xy;
    uv -= 0.5; // moving the coordinate system to the middle of the screen
    // Output to screen
    fragColor = vec4(vec3(step(uv.x * uv.y, 0.)), 1.);
}
The code above gives a single 2x2 check centered on the screen: one diagonal pair of quadrants black, the other white.
The code below, by zooming in 4.5 times and taking the fractional part, repeats that pattern 4.5 times, resulting in 9 squares per row.
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Normalized pixel coordinates, tiled 4.5 times (each tile spans 0 to 1)
    vec2 uv = fract(fragCoord / iResolution.xy * 4.5);
    uv -= 0.5; // moving the coordinate system to the middle of each tile
    // Output to screen
    fragColor = vec4(vec3(step(uv.x * uv.y, 0.)), 1.);
}