OpenGL Line Width - C++

In my OpenGL app, it won't let me draw a line greater than ten pixels wide. Is there a way to make it draw more than ten pixels?
void OGL_Renderer::drawLine(int x, int y, int x2, int y2, int r, int g, int b, int a, int line_width)
{
glColor4ub(r, g, b, a);
glLineWidth((GLfloat)line_width);
glBegin(GL_LINES);
glVertex2i(x, y);
glVertex2i(x2, y2);
glEnd();
glLineWidth(1.0f);
}

I recommend using a shader that generates triangle primitives along a line strip (or even a line loop).
The task is to generate a thick line strip with as little CPU and GPU overhead as possible. That means avoiding polygon computation on the CPU as well as geometry shaders (or tessellation shaders).
Each segment of the line consists of a quad represented by 2 triangle primitives, i.e. 6 vertices.
0        2   5
 +-------+   +
 |     /   / |
 |   /   /   |
 | /   /     |
 +   +-------+
1    3       4
Between the line segments the miter has to be found and the quads have to be cut to the miter.
+----------------+
|              / |
| segment 1  /   |
|          /     |
+--------+       |
         | segment 2
         |       |
         |       |
         +-------+
Create an array with the corner points of the line strip. The first and the last point define the start and end tangents of the line strip, so you need to add 1 point before the line and 1 point after the line. Of course it would be easy to identify the first and last element of the array by comparing the index against 0 and the length of the array, but we don't want to do any extra checks in the shader.
If a line loop has to be drawn, then the last point has to be added to the head of the array and the first point to its tail.
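As an illustration, here is a minimal sketch of how such a padded point array could be built on the CPU. PadPolyline is a hypothetical helper, not part of the original answer; it assumes glm::vec4 points and, for the open strip, extrapolates the end tangents instead of duplicating the end points:
// Hypothetical helper: builds the padded point array the shader expects.
// For an open strip the end tangents are extrapolated; for a loop the
// neighbouring points wrap around and the strip is closed.
std::vector<glm::vec4> PadPolyline(const std::vector<glm::vec4> &pts, bool loop)
{
    std::vector<glm::vec4> padded;
    if (loop)
    {
        padded.push_back(pts.back());                 // point "before" the first segment
        padded.insert(padded.end(), pts.begin(), pts.end());
        padded.push_back(pts[0]);                     // close the loop
        padded.push_back(pts[1]);                     // point "after" the closing segment
    }
    else
    {
        padded.push_back(pts[0] + (pts[0] - pts[1])); // extrapolated start tangent
        padded.insert(padded.end(), pts.begin(), pts.end());
        padded.push_back(pts.back() + (pts.back() - pts[pts.size() - 2])); // extrapolated end tangent
    }
    return padded;
}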
The array of points is stored in a Shader Storage Buffer Object. We use the fact that the last variable of the SSBO can be an array of variable size. In older versions of OpenGL (or OpenGL ES) a Uniform Buffer Object or even a texture can be used instead.
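For instance, on a context without SSBO support, a fixed-size std140 uniform block could serve as a fallback. This is only a sketch: MAX_POINTS is an assumed limit, and the block has to be bound with glUniformBlockBinding / glBindBufferBase:
// Fallback for contexts without SSBOs: a uniform block with a compile-time array size.
// MAX_POINTS is an assumed limit, not taken from the original answer.
#define MAX_POINTS 256
layout(std140) uniform TVertex
{
    vec4 vertex[MAX_POINTS];
};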
The shader doesn't need any vertex coordinates or attributes. All we have to know is the index of the line segment. The coordinates are stored in the buffer. To find the index we make use of the index of the vertex currently being processed (gl_VertexID).
To draw a line strip with N points (N-1 segments), 6*(N-1) vertices have to be processed.
We have to create an "empty" Vertex Array Object (without any vertex attribute specification):
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
And to draw 2*(N-1) triangles (6*(N-1) vertices):
glDrawArrays(GL_TRIANGLES, 0, 6*(N-1));
For the coordinate array in the SSBO, the data type vec4 is used (please believe me, you don't want to use vec3: in std430 layout an array of vec3 is padded to a 16-byte stride, which does not match a tightly packed array on the CPU side):
layout(std430, binding = 0) buffer TVertex
{
vec4 vertex[];
};
Compute the index of the line segment the vertex coordinate belongs to, and the index of the point within the 2 triangles:
int line_i = gl_VertexID / 6;
int tri_i = gl_VertexID % 6;
Since we are drawing N-1 line segments, but the number of elements in the array is N+2, the elements from vertex[line_i] to vertex[line_i+3] can be accessed for each vertex which is processed in the vertex shader. (For example, gl_VertexID = 13 gives line_i = 2 and tri_i = 1, so the segment runs from vertex[3] to vertex[4].)
vertex[line_i+1] and vertex[line_i+2] are the start and end coordinates of the line segment, respectively. vertex[line_i] and vertex[line_i+3] are required to compute the miter.
The thickness of the line should be set in pixels (uniform float u_thickness). The coordinates have to be transformed from model space to window space. For that the resolution of the viewport has to be known (uniform vec2 u_resolution). Don't forget the perspective divide; the drawing of the line then even works with a perspective projection.
vec4 va[4];
for (int i=0; i<4; ++i)
{
va[i] = u_mvp * vertex[line_i+i];
va[i].xyz /= va[i].w;
va[i].xy = (va[i].xy + 1.0) * 0.5 * u_resolution;
}
The miter and the start and end tangents are calculated from the vectors between the points. It would be a waste of performance to test the points in the vertex shader for equality or for vectors of zero length. It is up to the vertex setup to take care of a proper list of points.
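A minimal sketch of such a cleanup step on the CPU (RemoveDegeneratePoints is a hypothetical helper, assuming glm::vec4 points and #include <algorithm>): consecutive duplicates are dropped before the array is uploaded, so the shader never normalizes a zero-length vector.
// Drop consecutive points that are (almost) identical before uploading the array.
void RemoveDegeneratePoints(std::vector<glm::vec4> &pts, float eps = 1e-6f)
{
    auto nearlyEqual = [eps](const glm::vec4 &a, const glm::vec4 &b) {
        return glm::length(glm::vec3(a) - glm::vec3(b)) < eps;
    };
    pts.erase(std::unique(pts.begin(), pts.end(), nearlyEqual), pts.end());
}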
However, the miter calculation even works if the predecessor and successor point of a point are equal. In this case the end of the line is cut normal to the line segment or tangent:
vec2 v_line = normalize(va[2].xy - va[1].xy);
vec2 nv_line = vec2(-v_line.y, v_line.x);
vec2 v_pred = normalize(va[1].xy - va[0].xy);
vec2 v_succ = normalize(va[3].xy - va[2].xy);
vec2 v_miter1 = normalize(nv_line + vec2(-v_pred.y, v_pred.x));
vec2 v_miter2 = normalize(nv_line + vec2(-v_succ.y, v_succ.x));
In the final vertex shader we just need to calculate either v_miter1 or v_miter2, depending on tri_i. With the miter, the normal vector of the line segment and the line thickness (u_thickness), the vertex coordinate can be computed:
vec4 pos;
if (tri_i == 0 || tri_i == 1 || tri_i == 3)
{
vec2 v_pred = normalize(va[1].xy - va[0].xy);
vec2 v_miter = normalize(nv_line + vec2(-v_pred.y, v_pred.x));
pos = va[1];
pos.xy += v_miter * u_thickness * (tri_i == 1 ? -0.5 : 0.5) / dot(v_miter, nv_line);
}
else
{
vec2 v_succ = normalize(va[3].xy - va[2].xy);
vec2 v_miter = normalize(nv_line + vec2(-v_succ.y, v_succ.x));
pos = va[2];
pos.xy += v_miter * u_thickness * (tri_i == 5 ? 0.5 : -0.5) / dot(v_miter, nv_line);
}
Finally the window coordinates have to be transformed back to clip space coordinates: transform from window space to normalized device space and reverse the perspective divide:
pos.xy = pos.xy / u_resolution * 2.0 - 1.0;
pos.xyz *= pos.w;
The shader can generate the following polygons, shown once rendered with glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) and once with the default mode, glPolygonMode(GL_FRONT_AND_BACK, GL_FILL).
For the following simple demo program I've used the GLFW API for creating a window, GLEW for loading OpenGL and GLM - OpenGL Mathematics for the math. I don't provide the code for the function CreateProgram, which just creates a program object from the vertex shader and fragment shader source code:
#include <vector>
#include <string>
#include <cmath>     // cos(), sin(); M_PI may additionally need _USE_MATH_DEFINES on MSVC
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <gl/gl_glew.h>
#include <GLFW/glfw3.h>
std::string vertShader = R"(
#version 460
layout(std430, binding = 0) buffer TVertex
{
vec4 vertex[];
};
uniform mat4 u_mvp;
uniform vec2 u_resolution;
uniform float u_thickness;
void main()
{
int line_i = gl_VertexID / 6;
int tri_i = gl_VertexID % 6;
vec4 va[4];
for (int i=0; i<4; ++i)
{
va[i] = u_mvp * vertex[line_i+i];
va[i].xyz /= va[i].w;
va[i].xy = (va[i].xy + 1.0) * 0.5 * u_resolution;
}
vec2 v_line = normalize(va[2].xy - va[1].xy);
vec2 nv_line = vec2(-v_line.y, v_line.x);
vec4 pos;
if (tri_i == 0 || tri_i == 1 || tri_i == 3)
{
vec2 v_pred = normalize(va[1].xy - va[0].xy);
vec2 v_miter = normalize(nv_line + vec2(-v_pred.y, v_pred.x));
pos = va[1];
pos.xy += v_miter * u_thickness * (tri_i == 1 ? -0.5 : 0.5) / dot(v_miter, nv_line);
}
else
{
vec2 v_succ = normalize(va[3].xy - va[2].xy);
vec2 v_miter = normalize(nv_line + vec2(-v_succ.y, v_succ.x));
pos = va[2];
pos.xy += v_miter * u_thickness * (tri_i == 5 ? 0.5 : -0.5) / dot(v_miter, nv_line);
}
pos.xy = pos.xy / u_resolution * 2.0 - 1.0;
pos.xyz *= pos.w;
gl_Position = pos;
}
)";
std::string fragShader = R"(
#version 460
out vec4 fragColor;
void main()
{
fragColor = vec4(1.0);
}
)";
GLuint CreateSSBO(std::vector<glm::vec4> &varray)
{
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo );
glBufferData(GL_SHADER_STORAGE_BUFFER, varray.size()*sizeof(*varray.data()), varray.data(), GL_STATIC_DRAW);
return ssbo;
}
int main(void)
{
if ( glfwInit() == 0 )
return 0;
GLFWwindow *window = glfwCreateWindow( 800, 600, "GLFW OGL window", nullptr, nullptr );
if ( window == nullptr )
{
glfwTerminate();
return 0;
}
glfwMakeContextCurrent(window);
if ( glewInit() != GLEW_OK )
return 0;
GLuint program = CreateProgram(vertShader, fragShader);
GLint loc_mvp = glGetUniformLocation(program, "u_mvp");
GLint loc_res = glGetUniformLocation(program, "u_resolution");
GLint loc_thi = glGetUniformLocation(program, "u_thickness");
glUseProgram(program);
glUniform1f(loc_thi, 20.0);
GLushort pattern = 0x18ff;
GLfloat factor = 2.0f;
glm::vec4 p0(-1.0f, -1.0f, 0.0f, 1.0f);
glm::vec4 p1(1.0f, -1.0f, 0.0f, 1.0f);
glm::vec4 p2(1.0f, 1.0f, 0.0f, 1.0f);
glm::vec4 p3(-1.0f, 1.0f, 0.0f, 1.0f);
std::vector<glm::vec4> varray1{ p3, p0, p1, p2, p3, p0, p1 };
GLuint ssbo1 = CreateSSBO(varray1);
std::vector<glm::vec4> varray2;
for (int u=-8; u <= 368; u += 8)
{
double a = u*M_PI/180.0;
double c = cos(a), s = sin(a);
varray2.emplace_back(glm::vec4((float)c, (float)s, 0.0f, 1.0f));
}
GLuint ssbo2 = CreateSSBO(varray2);
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
//glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glm::mat4 project(1.0f);
int vpSize[2]{0, 0};
while (!glfwWindowShouldClose(window))
{
int w, h;
glfwGetFramebufferSize(window, &w, &h);
if (w != vpSize[0] || h != vpSize[1])
{
vpSize[0] = w; vpSize[1] = h;
glViewport(0, 0, vpSize[0], vpSize[1]);
float aspect = (float)w/(float)h;
project = glm::ortho(-aspect, aspect, -1.0f, 1.0f, -10.0f, 10.0f);
glUniform2f(loc_res, (float)w, (float)h);
}
glClear(GL_COLOR_BUFFER_BIT);
glm::mat4 modelview1( 1.0f );
modelview1 = glm::translate(modelview1, glm::vec3(-0.6f, 0.0f, 0.0f) );
modelview1 = glm::scale(modelview1, glm::vec3(0.5f, 0.5f, 1.0f) );
glm::mat4 mvp1 = project * modelview1;
glUniformMatrix4fv(loc_mvp, 1, GL_FALSE, glm::value_ptr(mvp1));
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo1);
GLsizei N1 = (GLsizei)varray1.size()-2;
glDrawArrays(GL_TRIANGLES, 0, 6*(N1-1));
glm::mat4 modelview2( 1.0f );
modelview2 = glm::translate(modelview2, glm::vec3(0.6f, 0.0f, 0.0f) );
modelview2 = glm::scale(modelview2, glm::vec3(0.5f, 0.5f, 1.0f) );
glm::mat4 mvp2 = project * modelview2;
glUniformMatrix4fv(loc_mvp, 1, GL_FALSE, glm::value_ptr(mvp2));
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo2);
GLsizei N2 = (GLsizei)varray2.size()-2;
glDrawArrays(GL_TRIANGLES, 0, 6*(N2-1));
glfwSwapBuffers(window);
glfwPollEvents();
}
glfwTerminate();
return 0;
}

You could try drawing a quad. Make it as wide as you want your line to be long and as tall as the line width you need, then rotate and position it where the line would go.

Ah, now that I understood what you meant:
1. draw a one by one square
2. calculate the length and orientation of the line
3. stretch it to the length in x
4. translate to the start position and rotate to the line orientation
or:
1. get the vector of the line: v = (x2 - x1, y2 - y1)
2. normalize v: n
3. get the orthogonal (normal) of the vector: o (easy in 2D)
4. add and subtract o to/from the line's end and start points to get the 4 corner points
5. draw a quad with these points (see the sketch below)
It makes sense that you can't. From the glLineWidth reference:
The range of supported widths and the size difference between supported widths within the range can be queried by calling glGet with arguments GL_LINE_WIDTH_RANGE and GL_LINE_WIDTH_GRANULARITY.
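For example, the supported range can be queried with standard glGet calls (the related GL_ALIASED_LINE_WIDTH_RANGE token also exists for aliased lines):
// Query the supported line width range and granularity of the current implementation.
GLfloat widthRange[2] = { 0.0f, 0.0f };
GLfloat widthGranularity = 0.0f;
glGetFloatv(GL_LINE_WIDTH_RANGE, widthRange);
glGetFloatv(GL_LINE_WIDTH_GRANULARITY, &widthGranularity);
// widthRange[1] is the largest width glLineWidth will accept here.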

Related


Drawing a line in modern OpenGL

I simply want to draw a line to the screen. I'm using OpenGL 4.6. All tutorials I found used glVertexPointer, which is deprecated as far as I can tell.
I know how you can draw triangles using buffers, so I tried that with a line. It didn't work, merely displaying a black screen. (I'm using GLFW and GLEW, and I am using a vertex+fragment shader I already tested on the triangle)
// Make line
float line[] = {
0.0, 0.0,
1.0, 1.0
};
unsigned int buffer; // The ID, kind of a pointer for VRAM
glGenBuffers(1, &buffer); // Allocate memory for the triangle
glBindBuffer(GL_ARRAY_BUFFER, buffer); // Set the buffer as the active array
glBufferData(GL_ARRAY_BUFFER, 2 * sizeof(float), line, GL_STATIC_DRAW); // Fill the buffer with data
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), 0); // Specify how the buffer is converted to vertices
glEnableVertexAttribArray(0); // Enable the vertex array
// Loop until the user closes the window
while (!glfwWindowShouldClose(window))
{
// Clear previous
glClear(GL_COLOR_BUFFER_BIT);
// Draw the line
glDrawArrays(GL_LINES, 0, 2);
// Swap front and back buffers
glfwSwapBuffers(window);
// Poll for and process events
glfwPollEvents();
}
Am I going in the right direction, or is a completely different approach the current best practice?
If I am, how do I fix my code?
The issue is the call to glBufferData. The 2nd argument is the size of the buffer in bytes. Since the vertex array consists of 2 coordinates with 2 components each, the size of the buffer is 4 * sizeof(float) rather than 2 * sizeof(float):
glBufferData(GL_ARRAY_BUFFER, 2 * sizeof(float), line, GL_STATIC_DRAW); // wrong: only half the data
glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(float), line, GL_STATIC_DRAW); // correct
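A common way to avoid this class of bug, since line is a true array here and not a pointer, is to let the compiler compute the size:
glBufferData(GL_ARRAY_BUFFER, sizeof(line), line, GL_STATIC_DRAW); // sizeof(line) == 4 * sizeof(float)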
But note that this is still not "modern" OpenGL. If you want to use a core profile OpenGL context, then you have to use a shader program and a Vertex Array Object.
However, if you are using a core OpenGL context and the forward compatibility bit is set, the width of a line (glLineWidth) cannot be greater than 1.0.
See OpenGL 4.6 API Core Profile Specification - E.2 Deprecated and Removed Features
Wide lines - LineWidth values greater than 1.0 will generate an INVALID_VALUE error.
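Whether the current context actually has that bit set can be verified at run time, for instance:
// Check the context flags; with the forward-compatible bit set, wide lines are removed.
GLint flags = 0;
glGetIntegerv(GL_CONTEXT_FLAGS, &flags);
bool forwardCompatible = (flags & GL_CONTEXT_FLAG_FORWARD_COMPATIBLE_BIT) != 0;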
You have to find a different approach.
I recommend using a shader that generates triangle primitives along a line strip (or even a line loop).
The task is to generate a thick line strip with as little CPU and GPU overhead as possible. That means avoiding polygon computation on the CPU as well as geometry shaders (or tessellation shaders).
Each segment of the line consists of a quad represented by 2 triangle primitives, i.e. 6 vertices.
0        2   5
 +-------+   +
 |     /   / |
 |   /   /   |
 | /   /     |
 +   +-------+
1    3       4
Between the line segments the miter has to be found and the quads have to be cut to the miter.
+----------------+
|              / |
| segment 1  /   |
|          /     |
+--------+       |
         | segment 2
         |       |
         |       |
         +-------+
Create an array with the corner points of the line strip. The first and the last point define the start and end tangents of the line strip, so you need to add 1 point before the line and 1 point after the line. Of course it would be easy to identify the first and last element of the array by comparing the index against 0 and the length of the array, but we don't want to do any extra checks in the shader.
If a line loop has to be drawn, then the last point has to be added to the head of the array and the first point to its tail.
The array of points is stored in a Shader Storage Buffer Object. We use the fact that the last variable of the SSBO can be an array of variable size. In older versions of OpenGL (or OpenGL ES) a Uniform Buffer Object or even a texture can be used instead.
The shader doesn't need any vertex coordinates or attributes. All we have to know is the index of the line segment. The coordinates are stored in the buffer. To find the index we make use of the index of the vertex currently being processed (gl_VertexID).
To draw a line strip with N points (N-1 segments), 6*(N-1) vertices have to be processed.
We have to create an "empty" Vertex Array Object (without any vertex attribute specification):
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
And to draw 2*(N-1) triangles (6*(N-1) vertices):
glDrawArrays(GL_TRIANGLES, 0, 6*(N-1));
For the coordinate array in the SSBO, the data type vec4 is used (please believe me, you don't want to use vec3):
layout(std430, binding = 0) buffer TVertex
{
vec4 vertex[];
};
Compute the index of the line segment the vertex coordinate belongs to, and the index of the point within the 2 triangles:
int line_i = gl_VertexID / 6;
int tri_i = gl_VertexID % 6;
Since we are drawing N-1 line segments, but the number of elements in the array is N+2, the elements from vertex[line_i] to vertex[line_i+3] can be accessed for each vertex which is processed in the vertex shader.
vertex[line_i+1] and vertex[line_i+2] are the start and end coordinates of the line segment, respectively. vertex[line_i] and vertex[line_i+3] are required to compute the miter.
The thickness of the line should be set in pixels (uniform float u_thickness). The coordinates have to be transformed from model space to window space. For that the resolution of the viewport has to be known (uniform vec2 u_resolution). Don't forget the perspective divide; the drawing of the line then even works with a perspective projection.
vec4 va[4];
for (int i=0; i<4; ++i)
{
va[i] = u_mvp * vertex[line_i+i];
va[i].xyz /= va[i].w;
va[i].xy = (va[i].xy + 1.0) * 0.5 * u_resolution;
}
The miter calculation even works if the predecessor or successor point is equal to the start or end point of the line segment, respectively. In this case the end of the line is cut normal to its tangent:
vec2 v_line = normalize(va[2].xy - va[1].xy);
vec2 nv_line = vec2(-v_line.y, v_line.x);
vec2 v_pred = normalize(va[1].xy - va[0].xy);
vec2 v_succ = normalize(va[3].xy - va[2].xy);
vec2 v_miter1 = normalize(nv_line + vec2(-v_pred.y, v_pred.x));
vec2 v_miter2 = normalize(nv_line + vec2(-v_succ.y, v_succ.x));
In the final vertex shader we just need to calculate either v_miter1 or v_miter2, depending on tri_i. With the miter, the normal vector of the line segment and the line thickness (u_thickness), the vertex coordinate can be computed:
vec4 pos;
if (tri_i == 0 || tri_i == 1 || tri_i == 3)
{
vec2 v_pred = normalize(va[1].xy - va[0].xy);
vec2 v_miter = normalize(nv_line + vec2(-v_pred.y, v_pred.x));
pos = va[1];
pos.xy += v_miter * u_thickness * (tri_i == 1 ? -0.5 : 0.5) / dot(v_miter, nv_line);
}
else
{
vec2 v_succ = normalize(va[3].xy - va[2].xy);
vec2 v_miter = normalize(nv_line + vec2(-v_succ.y, v_succ.x));
pos = va[2];
pos.xy += v_miter * u_thickness * (tri_i == 5 ? 0.5 : -0.5) / dot(v_miter, nv_line);
}
Finally the window coordinates have to be transformed back to clip space coordinates: transform from window space to normalized device space and reverse the perspective divide:
pos.xy = pos.xy / u_resolution * 2.0 - 1.0;
pos.xyz *= pos.w;
Polygons created with glPolygonMode(GL_FRONT_AND_BACK, GL_FILL) and glPolygonMode(GL_FRONT_AND_BACK, GL_LINE).
Demo program using the GLFW API for creating a window, GLEW for loading OpenGL and GLM - OpenGL Mathematics for the math. (I don't provide the code for the function CreateProgram, which just creates a program object from the vertex shader and fragment shader source code):
#include <vector>
#include <string>
#include <stdexcept>  // std::runtime_error
#include <cmath>      // cos(), sin(); M_PI may additionally need _USE_MATH_DEFINES on MSVC
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <gl/gl_glew.h>
#include <GLFW/glfw3.h>
std::string vertShader = R"(
#version 460
layout(std430, binding = 0) buffer TVertex
{
vec4 vertex[];
};
uniform mat4 u_mvp;
uniform vec2 u_resolution;
uniform float u_thickness;
void main()
{
int line_i = gl_VertexID / 6;
int tri_i = gl_VertexID % 6;
vec4 va[4];
for (int i=0; i<4; ++i)
{
va[i] = u_mvp * vertex[line_i+i];
va[i].xyz /= va[i].w;
va[i].xy = (va[i].xy + 1.0) * 0.5 * u_resolution;
}
vec2 v_line = normalize(va[2].xy - va[1].xy);
vec2 nv_line = vec2(-v_line.y, v_line.x);
vec4 pos;
if (tri_i == 0 || tri_i == 1 || tri_i == 3)
{
vec2 v_pred = normalize(va[1].xy - va[0].xy);
vec2 v_miter = normalize(nv_line + vec2(-v_pred.y, v_pred.x));
pos = va[1];
pos.xy += v_miter * u_thickness * (tri_i == 1 ? -0.5 : 0.5) / dot(v_miter, nv_line);
}
else
{
vec2 v_succ = normalize(va[3].xy - va[2].xy);
vec2 v_miter = normalize(nv_line + vec2(-v_succ.y, v_succ.x));
pos = va[2];
pos.xy += v_miter * u_thickness * (tri_i == 5 ? 0.5 : -0.5) / dot(v_miter, nv_line);
}
pos.xy = pos.xy / u_resolution * 2.0 - 1.0;
pos.xyz *= pos.w;
gl_Position = pos;
}
)";
std::string fragShader = R"(
#version 460
out vec4 fragColor;
void main()
{
fragColor = vec4(1.0);
}
)";
// main
GLuint CreateSSBO(std::vector<glm::vec4> &varray)
{
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo );
glBufferData(GL_SHADER_STORAGE_BUFFER, varray.size()*sizeof(*varray.data()), varray.data(), GL_STATIC_DRAW);
return ssbo;
}
int main(void)
{
if ( glfwInit() == 0 )
throw std::runtime_error( "error initializing glfw" );
GLFWwindow *window = glfwCreateWindow( 800, 600, "GLFW OGL window", nullptr, nullptr );
if (window == nullptr)
{
glfwTerminate();
throw std::runtime_error("error initializing window");
}
glfwMakeContextCurrent(window);
if (glewInit() != GLEW_OK)
throw std::runtime_error("error initializing glew");
OpenGL::CContext::TDebugLevel debug_level = OpenGL::CContext::TDebugLevel::all;
OpenGL::CContext context;
context.Init( debug_level );
GLuint program = OpenGL::CreateProgram(vertShader, fragShader);
GLint loc_mvp = glGetUniformLocation(program, "u_mvp");
GLint loc_res = glGetUniformLocation(program, "u_resolution");
GLint loc_thi = glGetUniformLocation(program, "u_thickness");
glUseProgram(program);
glUniform1f(loc_thi, 20.0);
GLushort pattern = 0x18ff;
GLfloat factor = 2.0f;
std::vector<glm::vec4> varray;
varray.emplace_back(glm::vec4(0.0f, -1.0f, 0.0f, 1.0f));
varray.emplace_back(glm::vec4(1.0f, -1.0f, 0.0f, 1.0f));
for (int u=0; u <= 90; u += 10)
{
double a = u*M_PI/180.0;
double c = cos(a), s = sin(a);
varray.emplace_back(glm::vec4((float)c, (float)s, 0.0f, 1.0f));
}
varray.emplace_back(glm::vec4(-1.0f, 1.0f, 0.0f, 1.0f));
for (int u = 90; u >= 0; u -= 10)
{
double a = u * M_PI / 180.0;
double c = cos(a), s = sin(a);
varray.emplace_back(glm::vec4((float)c-1.0f, (float)s-1.0f, 0.0f, 1.0f));
}
varray.emplace_back(glm::vec4(1.0f, -1.0f, 0.0f, 1.0f));
varray.emplace_back(glm::vec4(1.0f, 0.0f, 0.0f, 1.0f));
GLuint ssbo = CreateSSBO(varray);
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
GLsizei N = (GLsizei)varray.size() - 2;
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glm::mat4 project(1.0f);
int vpSize[2]{0, 0};
while (!glfwWindowShouldClose(window))
{
int w, h;
glfwGetFramebufferSize(window, &w, &h);
if (w != vpSize[0] || h != vpSize[1])
{
vpSize[0] = w; vpSize[1] = h;
glViewport(0, 0, vpSize[0], vpSize[1]);
float aspect = (float)w/(float)h;
project = glm::ortho(-aspect, aspect, -1.0f, 1.0f, -10.0f, 10.0f);
glUniform2f(loc_res, (float)w, (float)h);
}
glClear(GL_COLOR_BUFFER_BIT);
glm::mat4 modelview1( 1.0f );
modelview1 = glm::translate(modelview1, glm::vec3(-0.6f, 0.0f, 0.0f) );
modelview1 = glm::scale(modelview1, glm::vec3(0.5f, 0.5f, 1.0f) );
glm::mat4 mvp1 = project * modelview1;
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glUniformMatrix4fv(loc_mvp, 1, GL_FALSE, glm::value_ptr(mvp1));
glDrawArrays(GL_TRIANGLES, 0, 6*(N-1));
glm::mat4 modelview2( 1.0f );
modelview2 = glm::translate(modelview2, glm::vec3(0.6f, 0.0f, 0.0f) );
modelview2 = glm::scale(modelview2, glm::vec3(0.5f, 0.5f, 1.0f) );
glm::mat4 mvp2 = project * modelview2;
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glUniformMatrix4fv(loc_mvp, 1, GL_FALSE, glm::value_ptr(mvp2));
glDrawArrays(GL_TRIANGLES, 0, 6*(N-1));
glfwSwapBuffers(window);
glfwPollEvents();
}
glfwTerminate();
return 0;
}

Trying to render this code without using transformation concept

I'm trying to change the position of a triangle without using a transformation function, by changing only the x position each time.
This is my code in the main while loop:
float MyPoints[] = { 0.1 , 0.2, 0.3, 0.4, 0.5 , 0.6, 0.7, 0.8, 0.9};
int offset = (-1, 1);
for (int i = 0; i < sizeof(MyPoints); i++) {
offset += MyPoints[i];
ourShader.Use();
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);// unbind
}
and this is in the shader:
out vec3 ourColor;
out vec2 TexCoord;
uniform vec4 offset;
void main()
{
gl_Position = vec4(position.x + offset, position.y, position.z, 1.0f);
ourColor = color;
TexCoord = texCoord;
}
Edit
This is my code in the main while loop:
float offset = 1.0f;
float step = 0.001f; //move
int i=0;
// Loop until window closed (Game loop)
while (!glfwWindowShouldClose(mainWindow))
{
// Get + Handle user input events
glfwPollEvents();
//Render
// Clear the colorbuffer
glClearColor(0.0f, 0.1f, 0.2f, 1.0f);
//glPointSize(400.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Call Shader Program
//Rendering the first triangle
GLuint program =ourShader.Program ; // program object from "ourShader"
GLint offset_loc = glGetUniformLocation(program, "offset");
float MyPoints[] = { -0.1 , -0.2,-0.3,-0.4,-0.5 ,-0.6,-0.7,-0.8,-0.9 };
int noPoints = sizeof(MyPoints) / sizeof(float);
ourShader.Use();
for (int i = 0; i < noPoints; i++) {
glUniform1f(offset_loc, MyPoints[i] + offset);
}
offset += step;
if (MyPoints[i] + offset >= 1.0f || MyPoints[i] + offset <= -1.0f)
step *= -1.0f;
//update uniform data
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
glfwSwapBuffers(mainWindow);
glBindVertexArray(0);// unbind
}
and this is in the shader:
out vec3 ourColor;
out vec2 TexCoord;
uniform float offset;
void main()
{
gl_Position = vec4(position.x + offset, position.y, position.z, 1.0f);
ourColor = color;
TexCoord = texCoord;
}
The edited code makes a movement from (-1.0) to the middle instead of to the end of the window.
First of all the number of elements in the array is sizeof(MyPoints) / sizeof(float).
The type of the uniform variable offset has to be float:
uniform float offset;
You have to get the location of the uniform variable offset by glGetUniformLocation and set the value of the uniform by e.g. glUniform1f:
GLuint program = ourShader.Program; // program object from "ourShader"
GLint offset_loc = glGetUniformLocation(program, "offset");
float MyPoints[] = { 0.1 , 0.2, 0.3, 0.4, 0.5 , 0.6, 0.7, 0.8, 0.9};
int noPoints = sizeof(MyPoints) / sizeof(float);
// bind vertex array
glBindVertexArray(VAO);
// install program
ourShader.Use();
float offset = -1.0f;
for (int i = 0; i < noPoints; i++) {
// set value of the uniform (after program is installed)
offset += MyPoints[i];
glUniform1f(offset_loc, offset);
// draw one triangle
glDrawArrays(GL_TRIANGLES, 0, 3);
}
glBindVertexArray(0);
If you want to make the triangles move, then you have to change the offset of each individual triangle in every frame, e.g.:
float offset = 0.0f;
float step = 0.01f;
while (!glfwWindowShouldClose(mainWindow))
{
// [...]
ourShader.Use();
glUniform1f(offset_loc, offset);
glDrawArrays(GL_TRIANGLES, 0, 3);
// [...]
// change offset
offset += step;
if (offset >= 1.0f || offset <= -1.0f)
step *= -1.0f; // reverse direction
}

Random coloured blocks

I have recently used a FreeType library to read text files and followed a guide on how to display text in 2D.
I tried to extend the code to support 3D text rendering, but I started having OpenGL-related problems with it.
At certain angles the text picture starts to fade, and the whole axis on which the text is located starts to inherit its colour.
Screenshots: "Fading" and "Black Slice".
All the related code is:
Drawing function (inherited from learnopengl.com)
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Activate corresponding render state
glActiveTexture(GL_TEXTURE0);
glBindVertexArray(VAO);
glEnableVertexAttribArray(0);
scale /= RESOLUTION;
vec2 start(x, y);
// Iterate through all characters
std::string::const_iterator c;
for (c = text.begin(); c != text.end(); c++)
{
Character ch = Characters[*c];
if (*c == '\r' || (x-start.x > xMax && xMax != 0.0f))
{
y += ((ch.Advance >> 6) + 16) * scale ;
x = start.x;
continue;
}
GLfloat xpos = x + ch.Bearing.x * scale;
GLfloat ypos = y + (ch.Size.y - ch.Bearing.y) * scale;
GLfloat w = ch.Size.x * scale;
GLfloat h = ch.Size.y * scale;
// Update VBO for each character
GLfloat vertices[6][4] = {
{ xpos, ypos - h, 0.0, 0.0 },
{ xpos, ypos, 0.0, 1.0 },
{ xpos + w, ypos, 1.0, 1.0 },
{ xpos, ypos - h, 0.0, 0.0 },
{ xpos + w, ypos, 1.0, 1.0 },
{ xpos + w, ypos - h, 1.0, 0.0 }
};
// Render glyph texture over quad
glBindTexture(GL_TEXTURE_2D, ch.TextureID);
// Update content of VBO memory
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
// Render quad
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindBuffer(GL_ARRAY_BUFFER, 0);
// Now advance cursors for next glyph (note that advance is number of 1/64 pixels)
x += (ch.Advance >> 6) * scale; // Bitshift by 6 to get value in pixels (2^6 = 64)
}
glBindVertexArray(0);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_BLEND);
Shader uniform initialization
ShaderBuilder::LoadShader(shader)->Add_mat4("projection", projection).Add_mat4("view", view).
Add_mat4("model", model).Add_vec3("textColor", color).Add_texture("text", 0);
Vertex Shader
#version 400 core
layout (location = 0) in vec4 vertex; //
out vec2 TexCoords;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main()
{
vec2 vertexGL = (vertex.xy - vec2(400,300)) / vec2(400,300);
vertexGL = vertex.xy;
vec4 position = projection * view * model * vec4(vertexGL.xy, 0.0, 1.0);
gl_Position = position / position.w;
TexCoords = vertex.zw;
}
Fragment Shader
#version 400 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D text;
uniform vec3 textColor;
void main()
{
vec4 sampled = vec4(1.0, 1.0, 1.0, texture(text, TexCoords).r);
color = vec4(textColor, 1.0) * sampled;
//color = vec4(1);
}
I finally found the mistake: for some unknown reason I thought normalizing my vertex coords (dividing by w) after applying the matrix multiplication would be good practice.
Apparently it isn't.
vec4 position = projection * view * model * vec4(vertexGL.xy, 0.0, 1.0);
gl_Position = position;// / position.w;
As the commented-out division shows, removing it fixed the mistake. (The pipeline performs the perspective divide itself after clipping; doing it manually in the vertex shader breaks clipping and perspective-correct interpolation, which can produce exactly this kind of angle-dependent artifact.)

Lighting Dual depth peeling

I'm doing dual depth peeling, and I want to ask how to do the lighting properly. I have an algorithm like this:
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBindFramebuffer(GL_FRAMEBUFFER, dualDepthFBOID);
// Render targets 1 and 2 store the front and back colors
// Clear to 0.0 and use MAX blending to filter written color
// At most one front color and one back color can be written every pass
glDrawBuffers(2, &drawBuffers[1]);
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
GL_CHECK_ERRORS
// Render target 0 stores (-minDepth, maxDepth, alphaMultiplier)
glDrawBuffer(drawBuffers[0]);
glClearColor(-MAX_DEPTH, -MAX_DEPTH, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
glBlendEquation(GL_MAX);
DrawScene(MVP, initShader);
// 2. Depth peeling + blending pass
glDrawBuffer(drawBuffers[6]);
glClearColor(bg.x, bg.y, bg.z, bg.w);
glClear(GL_COLOR_BUFFER_BIT);
int numLayers = (NUM_PASSES - 1) * 2;
int currId = 0;
for (int layer = 1; bUseOQ || layer < numLayers; layer++) {
currId = layer % 2;
int prevId = 1 - currId;
int bufId = currId * 3;
glDrawBuffers(2, &drawBuffers[bufId+1]);
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
glDrawBuffer(drawBuffers[bufId+0]);
glClearColor(-MAX_DEPTH, -MAX_DEPTH, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
// Render target 0: RG32F MAX blending
// Render target 1: RGBA MAX blending
// Render target 2: RGBA MAX blending
glDrawBuffers(3, &drawBuffers[bufId+0]);
glBlendEquation(GL_MAX);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE, depthTexID[prevId]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_RECTANGLE, texID[prevId]);
DrawScene(MVP, dualPeelShader, true,true);
// Full screen pass to alpha-blend the back color
glDrawBuffer(drawBuffers[6]);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
if (bUseOQ) {
glBeginQuery(GL_SAMPLES_PASSED_ARB, queryId);
}
GL_CHECK_ERRORS
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE, backTexID[currId]);
blendShader.Use();
DrawFullScreenQuad();
blendShader.UnUse();
if (bUseOQ) {
glEndQuery(GL_SAMPLES_PASSED);
GLuint sample_count;
glGetQueryObjectuiv(queryId, GL_QUERY_RESULT, &sample_count);
if (sample_count == 0) {
break;
}
}
GL_CHECK_ERRORS
}
GL_CHECK_ERRORS
glDisable(GL_BLEND);
// 3. Final render pass
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK_LEFT);
glBindTexture(GL_TEXTURE_RECTANGLE, colorBlenderTexID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE, depthTexID[currId]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_RECTANGLE, texID[currId]);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_RECTANGLE, colorBlenderTexID);
finalShader.Use();
DrawFullScreenQuad();
finalShader.UnUse();
I'm doing the lighting in dualPeelShader, so it is applied in every pass. This results in an extremely bright object. Should I do the lighting in finalShader instead?
---EDIT----
Peel Fragment Shader
#version 330 core
layout(location = 0) out vec4 vFragColor0;
layout(location = 1) out vec4 vFragColor1;
layout(location = 2) out vec4 vFragColor2;
uniform vec4 vColor;
uniform float isObject;
uniform vec3 LightPosition;
uniform sampler2DRect depthBlenderTex;
uniform sampler2DRect frontBlenderTex;
in vec4 vOutColor;
in vec3 position;
in vec3 normal;
in vec3 eyeDirection;
in vec3 lightDirection;
#define MAX_DEPTH 1.0
vec4 Lighted()
{
vec3 LightColor = vec3(1.0,1.0,1.0);
float LightPower = 50;
// Material properties
vec3 MaterialDiffuseColor = vOutColor.rgb;
vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
vec3 MaterialSpecularColor = vec3(0.3,0.3,0.3);
// Distance to the light
float distance = length( LightPosition - position );
// Normal of the computed fragment, in camera space
vec3 n = normalize( normal );
// Direction of the light (from the fragment to the light)
vec3 l = normalize( lightDirection);
// Cosine of the angle between the normal and the light direction,
// clamped above 0
// - light is at the vertical of the triangle -> 1
// - light is perpendicular to the triangle -> 0
// - light is behind the triangle -> 0
float cosTheta = clamp( dot( n,l ), 0,1 );
// Eye vector (towards the camera)
vec3 E = normalize(eyeDirection);
// Direction in which the triangle reflects the light
vec3 R = reflect(-l,n);
// Cosine of the angle between the Eye vector and the Reflect vector,
// clamped to 0
// - Looking into the reflection -> 1
// - Looking elsewhere -> < 1
float cosAlpha = clamp( dot( E,R ), 0,1 );
return vec4(MaterialAmbientColor + MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance),vOutColor.a);
}
void main(void)
{
float fragDepth = gl_FragCoord.z;
vec2 depthBlender = texture(depthBlenderTex, gl_FragCoord.xy).xy;
vec4 forwardTemp = texture(frontBlenderTex, gl_FragCoord.xy);
// Depths and 1.0-alphaMult always increase
// so we can use pass-through by default with MAX blending
vFragColor0.xy = depthBlender;
// Front colors always increase (DST += SRC*ALPHA_MULT)
// so we can use pass-through by default with MAX blending
vFragColor1 = forwardTemp;
// Because over blending makes color increase or decrease,
// we cannot pass-through by default.
// Each pass, only one fragment writes a color greater than 0
vFragColor2 = vec4(0.0);
float nearestDepth = -depthBlender.x;
float farthestDepth = depthBlender.y;
float alphaMultiplier = 1.0 - forwardTemp.w;
if (fragDepth < nearestDepth || fragDepth > farthestDepth) {
// Skip this depth in the peeling algorithm
vFragColor0.xy = vec2(-MAX_DEPTH);
return;
}
if (fragDepth > nearestDepth && fragDepth < farthestDepth) {
// This fragment needs to be peeled again
vFragColor0.xy = vec2(-fragDepth, fragDepth);
return;
}
// If we made it here, this fragment is on the peeled layer from last pass
// therefore, we need to shade it, and make sure it is not peeled any farther
vFragColor0.xy = vec2(-MAX_DEPTH);
vec4 Color;
if(isObject == 0.0)
Color = vColor;
else
Color = Lighted();
if (fragDepth == nearestDepth) {
vFragColor1.xyz += Color.rgb * Color.a * alphaMultiplier;
vFragColor1.w = 1.0 - alphaMultiplier * (1.0 - Color.a);
} else {
vFragColor2 += Color;
}
}
Blend Fragment Shader
#version 330 core
uniform sampler2DRect tempTexture;
layout(location = 0) out vec4 vFragColor;
void main(void)
{
vFragColor = texture(tempTexture, gl_FragCoord.xy);
if(vFragColor.a == 0)
discard;
}