Drawing round points using modern OpenGL

I know how to draw round points using fixed pipeline. However I need to do the same using modern OpenGL. Is it possible, or should I use point sprites and textures?
For those interested, here is how it's done with the fixed pipeline:
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_NOTEQUAL, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable( GL_POINT_SMOOTH );
glPointSize( 8.0 );
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(myMatrix);
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(myAnotherMatrix);
glBegin(GL_POINTS);
glColor3f(1,1,1);
glVertex3fv(position);
glEnd();
glDisable(GL_POINT_SMOOTH);
glBlendFunc(GL_ONE, GL_ZERO); // restore the default (GL_NONE is not a valid argument to glBlendFunc)
glDisable(GL_BLEND);

One way would be to draw point sprites with a circle-texture and a self-made alpha test in the fragment shader:
uniform sampler2D circle;
void main()
{
if(texture(circle, gl_PointCoord).r < 0.5)
discard;
...
}
But in fact you don't even need a texture for this, since a circle is a pretty well-defined mathematical concept. Just check gl_PointCoord, which tells you where inside the [0,1] square representing the whole point the current fragment lies:
vec2 coord = gl_PointCoord - vec2(0.5); //from [0,1] to [-0.5,0.5]
if(length(coord) > 0.5) //outside of circle radius?
discard;


Related

How to get the Viewing Direction in the fragment shader while rendering a Fullscreen Quad

I have to render a scene in 2 steps. First I render into a Frame Buffer Object, and then I reuse the FBO's texture in the next step. I render the texture on a fullscreen quad with a shader attached, like this:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
gluLookAt(0.0,0.0,0.0, 0.0,0.0,-10.0, 0.0,1.0,0.0);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(-1.0,1.0,-1.0,1.0,-1.0,1.0);
glBegin(GL_QUADS);
glTexCoord2f(0.0,0.0);
glVertex3f(-1.0,-1.0,0.0);
glTexCoord2f(1.0,0.0);
glVertex3f(1.0,-1.0,0.0);
glTexCoord2f(1.0,1.0);
glVertex3f(1.0,1.0,0.0);
glTexCoord2f(0.0,1.0);
glVertex3f(-1.0,1.0,0.0);
glEnd();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
In the attached fragment shader I need the viewing direction towards the current fragment of the old scene, in eye-space coordinates. My first idea was this:
vec2 uv = gl_TexCoord[0].st;
vec3 viewDirEye = normalize(vec3(uv.x*2.0-1.0, (uv.y)*2.0-1.0, -1.0));
But it doesn't seem to work, and I don't know why. It would be great if someone could help me out here.
Your formula does set up a direction vector from the texture coordinates in normalized device coordinates, but not in eye space.
You have to take the projection matrix into account (which will define the horizontal and vertical field of view angle).
You can use the inverse projection matrix to transform into eye space.

OpenGL how to render background

Hi I am doing an assignment and can't figure out how to render a background.
I've drawn the triangles and everything renders to the screen OK, but the background always ends up in the foreground and blocks everything else from view.
Here is my code for rendering the background:
void render(){
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(bgShaderID);
glBindVertexArray(bgArrayID);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);
// draw everything else
glutSwapBuffers();
glFlush();
}
In my vertex shader I have the following:
in vec3 a_vertex;
in vec3 a_colour;
out vec3 fragmentColor;
void main(){
gl_Position = vec4(a_vertex.xy, 0.0 ,1);
fragmentColor = a_colour;
}
It seems like you have GL_DEPTH_TEST enabled. I don't know what projection matrix and z values you use for drawing your foreground objects, but
gl_Position = vec4(a_vertex.xy, 0.0, 1);
is setting the clip-space z of the background to 0. Assuming a perspective projection, this is ridiculously close to the near plane. Assuming some orthographic projection, this is still in the middle of the depth range.
You could of course set z = 1.0 in the shader to push the background to the far plane. However, since you draw the background first, you might be better off just disabling GL_DEPTH_TEST (or disabling depth writes via glDepthMask(GL_FALSE)) temporarily while drawing your background.

Draw texture from QGLFramebufferObject on full screen quad using custom shader

Using Qt 4.7 and QGLWidget, I want to use QGLWidget::paintGL() to render a scene into a QGLFramebufferObject and then render the texture generated by the QGLFramebufferObject onto the screen. For the second step I render a fullscreen quad with an orthographic projection and use my own shader to render the texture onto it.
Rendering into the QGLFramebufferObject seems to work fine (at least I can call QGLFramebufferObject::toImage().save(filename) and I get the correctly rendered image), but I can't get the rendered texture to be drawn onto the screen.
Here is the code I use to render into the framebuffer object:
//Draw into framebufferobject
_fbo->bind();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
_camera->applyModelviewMatrix();
_scene->glDraw();
glFlush();
_fbo->release();
_fbo->toImage().save("image.jpg");
As said, the image saved here contains the correctly rendered image.
Here is the code I use to try to render the framebuffer object onto the screen, with my own shader:
//Draw framebufferobject to a full-screen quad on the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0,1.0,0.0,1.0,-1.0,1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
_selectionShader->bind();
glBindTexture(GL_TEXTURE_2D, _fbo->texture());
_selectionShader->setUniformValue("renderedTexture", _fbo->texture());
glBegin(GL_QUADS);
glTexCoord2f(0.0,0.0);
glVertex3f(0.0f,0.0f,0.0f);
glTexCoord2f(1.0,0.0);
glVertex3f(1.0f,0.0f,0.0f);
glTexCoord2f(1.0,1.0);
glVertex3f(1.0f,1.0f,0.0f);
glTexCoord2f(0.0,1.0);
glVertex3f(0.0f,1.0f,0.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
glFlush();
The vertex shader simply passes the position through and uses it as the texture coordinates:
varying vec2 position;
void main()
{
gl_Position = ftransform();
position = gl_Vertex.xy;
}
And the fragment shader draws the texture
varying vec2 position;
uniform sampler2D renderedTexture;
void main()
{
gl_FragColor = texture2D(renderedTexture, position);
}
The projection I'm doing is correct, because when I swap in the following fragment shader, it draws the expected color gradient:
varying vec2 position;
uniform sampler2D renderedTexture;
void main()
{
gl_FragColor = vec4(position.x, 0.0f, position.y, 1.0f);
}
But using the other fragment shader that should render the texture, I only get a blank screen (that was made blank by glClear() in the beginning of the rendering). So the fragment shader seems to draw either black or nothing.
Am I missing anything? Am I passing the texture correctly to the shader? Do I have to do anything else to prepare the texture?
_selectionShader->setUniformValue("renderedTexture", _fbo->texture());
This is the (or a) wrong part. A sampler uniform in a shader is not set to the texture object, but to the texture unit that the object is bound to (which you already did with glBindTexture(GL_TEXTURE_2D, _fbo->texture())). So since you seem to use GL_TEXTURE0 all the time, you just have to set it to texture unit 0:
_selectionShader->setUniformValue("renderedTexture", 0);
By the way, no need to glEnable(GL_TEXTURE_2D), that isn't necessary when using shaders. And why use glTexCoord2f if you're just using the vertex position as texture coordinate anyway?

GLSL rendering in 2D

(OpenGL 2.0)
I managed to do some nice text-rendering in opengl, and decided to make it shader-designed.
However, the rendered font texture that looked nice in fixed-pipeline mode looks unpleasant in GLSL mode.
In fixed-pipeline mode I don't see any difference between GL_LINEAR and GL_NEAREST filtering; the texture doesn't really need filtering, because I set up an orthographic projection and align the quad's width and height to the texture dimensions.
Now when I try to render it with a shader, I can see some very bad GL_NEAREST filtering artifacts, and with GL_LINEAR the texture appears too blurry.
Fixed pipeline, satisfying, best quality (no difference between linear/nearest):
GLSL, nearest (visible artifacts, for example, look at fraction glyphs):
GLSL, linear (too blurry):
Shader program:
Vertex shader was successfully compiled to run on hardware.
Fragment shader was successfully compiled to run on hardware.
Fragment shader(s) linked, vertex shader(s) linked.
------------------------------------------------------------------------------------------
attribute vec2 at_Vertex;
attribute vec2 at_Texcoord;
varying vec2 texCoord;
void main(void) {
texCoord = at_Texcoord;
gl_Position = mat4(0.00119617, 0, 0, 0, 0, 0.00195503, 0, 0, 0, 0, -1, 0, -1, -1, -0, 1)* vec4(at_Vertex.x, at_Vertex.y, 0, 1);
}
-----------------------------------------------------------------------------------------
varying vec2 texCoord;
uniform sampler2D diffuseMap;
void main(void) {
gl_FragColor = texture2D(diffuseMap, texCoord);
}
Quad rendering, fixed:
glTexCoord2f (0.0f, 0.0f);
glVertex2f (40.0f, 40.0f);
glTexCoord2f (0.0f, 1.0f);
glVertex2f ((font.tex_r.w+40.0f), 40.0f);
glTexCoord2f (1.0f, 1.0f);
glVertex2f ((font.tex_r.w+40.0f), (font.tex_r.h+40.0f));
glTexCoord2f (1.0f, 0.0f);
glVertex2f (40.0f, (font.tex_r.h+40.0f));
Quad rendering, shader-mode:
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 0.0f, 0.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, 40.0f, 40.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 0.0f, 1.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, (font.tex_r.w+40.0f), 40.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 1.0f, 1.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, (font.tex_r.w+40.0f), (font.tex_r.h+40.0f));
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 1.0f, 0.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, 40.0f, (font.tex_r.h+40.0f));
In both cases the matrices are calculated from the same source, though for performance reasons, as you can see, I'm writing constant values into the shader code with the help of a function like this (if that is the reason, how do I write them properly?):
std::ostringstream buffer;
buffer << f;
return buffer.str(); // return std::string by value; returning buffer.str().c_str() would be a dangling pointer
where "f" is some double value.
EDIT:
The result of my further research is a little bit surprising.
Now I'm multiplying the vertex coordinates by the same orthographic matrix on the CPU (not in the vertex shader like before) and leaving the vertex untouched in the vertex shader, just passing it on to gl_Position. I couldn't believe it, but this really works and actually solves my problem. Every operation is done on floats, as on the GPU.
It seems like matrix/vertex multiplication is much more accurate on the CPU.
The question is: why?
EDIT: Actually, the whole reason was different matrix sources! A really, really small bug!
Nicol was nearest the solution.
though for performance reasons, as you can see, I'm writing constant values into the shader code
That's not going to help your performance. Uploading a single matrix uniform is pretty standard for most OpenGL shaders, and will cost you nothing of significance in terms of performance.
Seems like matrix/vertex multiplication is much more accurate on CPU. question is: why ?
It's not more accurate; it's simply using a different matrix. If you passed that matrix to GLSL via a shader uniform, you would probably get the same result. The matrix you use in the shader is not the same matrix that you used on the CPU.

How do you render primitives as wireframes in OpenGL?

How do you render primitives as wireframes in OpenGL?
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
to switch on,
glPolygonMode( GL_FRONT_AND_BACK, GL_FILL );
to go back to normal.
Note that things like texture-mapping and lighting will still be applied to the wireframe lines if they're enabled, which can look weird.
From http://cone3d.gamedev.net/cgi-bin/index.pl?page=tutorials/ogladv/tut5
// Turn on wireframe mode
glPolygonMode(GL_FRONT, GL_LINE);
glPolygonMode(GL_BACK, GL_LINE);
// Draw the box
DrawBox();
// Turn off wireframe mode
glPolygonMode(GL_FRONT, GL_FILL);
glPolygonMode(GL_BACK, GL_FILL);
Assuming a forward-compatible context in OpenGL 3 and up, you can either use glPolygonMode as mentioned before, but note that lines with thickness more than 1px are now deprecated. So while you can draw triangles as wire-frame, they need to be very thin. In OpenGL ES, you can use GL_LINES with the same limitation.
In OpenGL it is possible to use geometry shaders to take incoming triangles, disassemble them and send them for rasterization as quads (pairs of triangles really) emulating thick lines. Pretty simple, really, except that geometry shaders are notorious for poor performance scaling.
What you can do instead, and what will also work in OpenGL ES, is to employ the fragment shader. Think of applying a texture of a wireframe triangle to the triangle. Except that no texture is needed; it can be generated procedurally. But enough talk, let's code. Fragment shader:
in vec3 v_barycentric; // barycentric coordinate inside the triangle
uniform float f_thickness; // thickness of the rendered lines
void main()
{
float f_closest_edge = min(v_barycentric.x,
min(v_barycentric.y, v_barycentric.z)); // see to which edge this pixel is the closest
float f_width = fwidth(f_closest_edge); // calculate derivative (divide f_thickness by this to have the line width constant in screen-space)
float f_alpha = smoothstep(f_thickness, f_thickness + f_width, f_closest_edge); // calculate alpha
gl_FragColor = vec4(vec3(.0), f_alpha);
}
And vertex shader:
in vec4 v_pos; // position of the vertices
in vec3 v_bc; // barycentric coordinate inside the triangle
out vec3 v_barycentric; // barycentric coordinate inside the triangle
uniform mat4 t_mvp; // modelview-projection matrix
void main()
{
gl_Position = t_mvp * v_pos;
v_barycentric = v_bc; // just pass it on
}
Here, the barycentric coordinates are simply (1, 0, 0), (0, 1, 0) and (0, 0, 1) for the three triangle vertices (the order does not really matter, which makes packing into triangle strips potentially easier).
The obvious disadvantage of this approach is that it will eat a set of texture coordinates, and you need to modify your vertex array. This could be solved with a very simple geometry shader, but I'd still suspect that will be slower than just feeding the GPU with more data.
In modern OpenGL (OpenGL 3.2 and higher), you could use a geometry shader for this:
#version 330
layout (triangles) in;
layout (line_strip /*for lines, use "points" for points*/, max_vertices=3) out;
in vec2 texcoords_pass[]; //Texcoords from Vertex Shader
in vec3 normals_pass[]; //Normals from Vertex Shader
out vec3 normals; //Normals for Fragment Shader
out vec2 texcoords; //Texcoords for Fragment Shader
void main(void)
{
int i;
for (i = 0; i < gl_in.length(); i++)
{
texcoords=texcoords_pass[i]; //Pass through
normals=normals_pass[i]; //Pass through
gl_Position = gl_in[i].gl_Position; //Pass through
EmitVertex();
}
EndPrimitive();
}
Notes:
for points, change layout (line_strip, max_vertices=3) out; to layout (points, max_vertices=3) out;
Read more about Geometry Shaders
If you are using the fixed pipeline (OpenGL < 3.3) or the compatibility profile you can use
//Turn on wireframe mode
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
//Draw the scene with polygons as lines (wireframe)
renderScene();
//Turn off wireframe mode
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
In this case you can change the line width by calling glLineWidth.
Otherwise you need to change the polygon mode inside your draw method (glDrawElements, glDrawArrays, etc) and you may end up with some rough results because your vertex data is for triangles and you are outputting lines. For best results consider using a Geometry shader or creating new data for the wireframe.
The easiest way is to draw the primitives as GL_LINE_STRIP.
glBegin(GL_LINE_STRIP);
/* Draw vertices here */
glEnd();
You can use the GLUT and GLU libraries like this:
for a sphere:
glutWireSphere(radius,20,20);
for a Cylinder:
GLUquadric *quadratic = gluNewQuadric();
gluQuadricDrawStyle(quadratic,GLU_LINE);
gluCylinder(quadratic,1,1,1,12,1);
for a Cube:
glutWireCube(1.5);
Use this function:
void glPolygonMode(GLenum face, GLenum mode);
face: Specifies the polygon faces that mode applies to. Can be GL_FRONT for the front side of the polygon, GL_BACK for the back and GL_FRONT_AND_BACK for both.
mode: Three modes are defined.
GL_POINT: Polygon vertices that are marked as the start of a boundary edge are drawn as points.
GL_LINE: Boundary edges of the polygon are drawn as line segments. (your target)
GL_FILL: The interior of the polygon is filled.
P.S: glPolygonMode controls the interpretation of polygons for rasterization in the graphics pipeline.
For more information, look at the OpenGL reference pages from the Khronos Group.
If it's OpenGL ES 2.0 you're dealing with, you can choose one of draw mode constants from
GL_LINE_STRIP, GL_LINE_LOOP, GL_LINES, to draw lines,
GL_POINTS (if you need to draw only vertices), or
GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, and GL_TRIANGLES to draw filled triangles
as first argument to your
glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid * indices)
or
glDrawArrays(GLenum mode, GLint first, GLsizei count) calls.
A good and simple way of drawing anti-aliased lines on a non-anti-aliased render target is to draw rectangles of 4 pixel width with a 1x4 texture, with alpha-channel values of {0., 1., 1., 0.}, and use linear filtering with mip-mapping off. This will make the lines 2 pixels thick, but you can change the texture for different thicknesses.
This is faster and easier than the barycentric approach.