Drawing a cube for each vertex - c++

I have a list of 3D vertices which I can easily render as a point cloud by passing the whole list to my vertex shader, setting gl_Position = pos, setting FragColor = vec4(1.0, 1.0, 1.0, 1.0), and using GL_POINTS in the draw call.
I would now like to render an actual cube at each vertex position, with the vertex being the center of the cube and some given width. How can I achieve this in the easiest and most performant way? Looping through all vertices, loading a cube into a buffer, and then passing the vertex position to the vertex shader to draw each cube individually does not seem feasible to me, or is that the way to go?
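One common way to do this efficiently (a hedged sketch, not necessarily the only answer) is instanced rendering: upload the cube geometry once, upload the vertex list as a per-instance attribute, and let glDrawArraysInstanced draw every cube in a single call. Here cubeVertices (36 corners of a unit cube centered at the origin), points (a std::vector<glm::vec3> of the positions), the attribute locations, and the width uniform are all assumptions:
// One VBO for the shared cube geometry, one for the per-instance centers.
GLuint vao, cubeVbo, instanceVbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &cubeVbo);
glBindBuffer(GL_ARRAY_BUFFER, cubeVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertices), cubeVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0); // location 0: cube corner
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

glGenBuffers(1, &instanceVbo);
glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(glm::vec3), points.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(1); // location 1: cube center
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glVertexAttribDivisor(1, 1); // advance this attribute once per instance, not per vertex

// In the vertex shader: gl_Position = mvp * vec4(center + width * corner, 1.0);
glDrawArraysInstanced(GL_TRIANGLES, 0, 36, (GLsizei)points.size());
The vertex shader offsets each cube corner by its instance's center and scales by the desired width, so the number of draw calls stays at one no matter how many points there are.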

Related

How to get homogeneous screen space coordinates in openGL

I'm studying OpenGL and I've got a little 3D scene with some objects. In the GLSL vertex shader I multiply the vertices by matrices like this:
vertexPos = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
gl_Position = vertexPos;
vertexPos is a vec4 varying variable and I pass it to the fragment shader.
Here is how the scene renders normally:
normal render
But then I wanted to do a debug render. I write in the fragment shader:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0);
vertexPos is multiplied by all the matrices, including the perspective matrix, and I assumed I would get a smooth gradient from the center of the screen to the right edge, because the coordinates are mapped into a -1 to 1 square. But it looks like they are in screen space with no perspective deformation applied. Here is what I see:
(don't look at the red line and light source, they use a different shader)
debug render
If I divide it by about 15, it looks like this:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0)/15.0;
divided by 15
Can someone please explain to me why the coordinates aren't homogeneous, and why the scene still renders correctly with perspective distortion?
P.S. If I try to use gl_Position in the fragment shader instead of vertexPos, it doesn't work.
A so-called perspective division is applied to gl_Position after it's computed in a vertex shader:
gl_Position.xyz /= gl_Position.w;
But it doesn't happen to your varyings unless you do it manually. Thus, you need to add
vertexPos.xyz /= vertexPos.w;
at the end of your vertex shader. Make sure to do it after you copy the value to gl_Position; you don't want the division to happen twice.
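Putting it together, the vertex shader from the question would become something like this minimal sketch (the matrix names are the asker's own):
vec4 p = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
gl_Position = p; // the fixed-function perspective division happens to this copy
vertexPos = p;
vertexPos.xyz /= vertexPos.w; // manual division for the varying only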

opengl aligning texture coordinates

I'm trying to simulate a reflection on a plane using render-to-texture.
My only problem is how to adjust the texture coordinates correctly for the current view.
In the shader I multiply the texture coordinates by a rotation matrix.
The rotation matrix is set with:
glm::vec3 v1 = glm::vec3(0.0f, 1.0f, 0.0f);
glm::vec3 v2 = glm::vec3(camlocation[0], camlocation[1], 0.0f);
if (glm::length(v2) != 0.0f)
{
    v2 = glm::normalize(v2);
}
float alpha = glm::angle(v1, v2);
texturematrix = glm::mat4(1.0f);
texturematrix = glm::translate(texturematrix, glm::vec3(0.5f, 0.5f, 0.0f));
texturematrix = glm::rotate(texturematrix, alpha, glm::vec3(0.0f, 0.0f, 1.0f));
texturematrix = glm::translate(texturematrix, glm::vec3(-0.5f, -0.5f, 0.0f));
I don't know if it's the right way, but the reflection looks wrong.
Edit:
Step 1: I bind a framebuffer and my reflection texture and render my model, a teapot for example. In the shader I invert the Z position.
Step 2: I bind the texture again and draw the plane. In the shader I use
vec4 texcoord = texturematrix * vec4(VertexIn.texcoord, 1.0, 1.0);
vec4 firsttex = texture(reflectionMap, texcoord.xy);
Step 3: I draw the real model:
vec4 texcoord = texturematrix * vec4(VertexIn.texcoord, 1.0, 1.0);
OK, one mistake was the third coordinate: it must be 0.0.
Now it looks better, but still wrong. I have to add the current eye direction to the camera-location angle.
http://fs1.directupload.net/images/141207/temp/7wk8lvms.png
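For reference, the render-to-texture part of step 1 usually looks something like the sketch below; reflectionFBO, width, height and drawMirroredScene are placeholder names, and a depth attachment is omitted for brevity:
// One-time setup: attach the reflection texture to a framebuffer object.
GLuint reflectionFBO, reflectionMap;
glGenTextures(1, &reflectionMap);
glBindTexture(GL_TEXTURE_2D, reflectionMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &reflectionFBO);
glBindFramebuffer(GL_FRAMEBUFFER, reflectionFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, reflectionMap, 0);

// Per frame: render the mirrored model into the texture, then switch back.
glBindFramebuffer(GL_FRAMEBUFFER, reflectionFBO);
drawMirroredScene(); // hypothetical helper for step 1
glBindFramebuffer(GL_FRAMEBUFFER, 0);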

How to make radial gradient on each face using shader in OpenGL

Using simple shaders I've found a way to create gradients.
Here's the result of my work:
http://goo.gl/A7pY01 (a little updated after the OpenGL ES 2.0 Shader - 2D Radial Gradient in Polygon question)
It's nice, but I still need to display this gradient pattern on each face of my meshes, or on a billboard face, just as if it were a texture.
The GLSL built-in gl_FragCoord returns window-relative coordinates. Could someone explain to me how to translate this into face-related coords so I can draw my pattern there?
Okay, a little surfing of Stack Overflow gave me this topic: OpenGL: How to render perfect rectangular gradient?
Here is the key line: gl_FragColor = mix(color0, color1, uv.u + uv.v - 2 * uv.u * uv.v);
Of course we cannot translate window-space coordinates into something "face-related", but we can use the UV coordinates of a face. So I decided: what if we have a square face with UV coordinates corresponding to a full-size texture (like 0,0; 0,1; 1,0; 1,1)? Then the center of the face is at 0.5, 0.5, and this can be the center of my radial gradient.
So the code of my fragment shader is:
vec2 u_c = vec2(0.5,0.5);
float distanceFromLight = length(uv - u_c);
gl_FragColor = mix(vec4(1.,0.5,1.,1.), vec4(0.,0.,0.,1.), distanceFromLight*2.0);
Vertex shader:
gl_Position = _mvProj * vec4(vertex, 1.0);
uv = uv1;
Of course, we need to supply correct UV coordinates, but the point stands.
Here's example:
http://goo.gl/A7pY01
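For completeness, a self-contained version of the shader pair sketched above could look like this (a sketch in the answer's own GLSL 1.x style; vertex, uv1 and _mvProj are the answer's names):
// Vertex shader
attribute vec3 vertex;
attribute vec2 uv1;
uniform mat4 _mvProj;
varying vec2 uv;
void main()
{
    gl_Position = _mvProj * vec4(vertex, 1.0);
    uv = uv1; // pass the face's UV coordinates through
}

// Fragment shader
varying vec2 uv;
void main()
{
    vec2 u_c = vec2(0.5, 0.5); // gradient center in UV space
    float distanceFromLight = length(uv - u_c); // 0 at the center, ~0.707 in the corners
    gl_FragColor = mix(vec4(1.0, 0.5, 1.0, 1.0), vec4(0.0, 0.0, 0.0, 1.0), distanceFromLight * 2.0);
}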

How to render a radial field in OpenGL?

How would I render a 2D radial field in OpenGL? I know I can render it pixel by pixel, but I'm wondering if there are more efficient solutions. I don't mind if it requires OpenGL 3+ functionality.
How familiar are you with shaders? I'm thinking an easy-ish answer would be to render a quad and then write a fragment shader that colors the quad based on how far each pixel is from the center.
Pseudocode:
vertex shader:
center = vec2((x1 + x2) / 2.0, (y1 + y2) / 2.0); // quad center; pass this to the fragment shader as a varying
fragment shader:
float dist = distance(pos, center); // "pos" is the interpolated position of the fragment, passed in from the vertex shader
// Now that we have the distance between each fragment and the center, we can do all kinds of things:
gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0 - dist); // assuming a unit square, each pixel's opacity varies smoothly from 1 (right next to the center) to 0 (at distance 1, on the edge of the square)
gl_FragColor = vec4(dist, dist, dist, 1.0); // or vary each pixel's color from black (center) to white
// etc., etc.
Let me know if you need more detail.

How do you render primitives as wireframes in OpenGL?

glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
to switch on,
glPolygonMode( GL_FRONT_AND_BACK, GL_FILL );
to go back to normal.
Note that things like texture-mapping and lighting will still be applied to the wireframe lines if they're enabled, which can look weird.
From http://cone3d.gamedev.net/cgi-bin/index.pl?page=tutorials/ogladv/tut5
// Turn on wireframe mode
glPolygonMode(GL_FRONT, GL_LINE);
glPolygonMode(GL_BACK, GL_LINE);
// Draw the box
DrawBox();
// Turn off wireframe mode
glPolygonMode(GL_FRONT, GL_FILL);
glPolygonMode(GL_BACK, GL_FILL);
Assuming a forward-compatible context in OpenGL 3 and up, you can use glPolygonMode as mentioned before, but note that line widths greater than 1px are now deprecated. So while you can draw triangles as wireframe, they need to be very thin. In OpenGL ES, you can use GL_LINES with the same limitation.
In OpenGL it is possible to use geometry shaders to take incoming triangles, disassemble them and send them for rasterization as quads (pairs of triangles really) emulating thick lines. Pretty simple, really, except that geometry shaders are notorious for poor performance scaling.
What you can do instead, and what will also work in OpenGL ES, is to employ a fragment shader. Think of applying a texture of a wireframe triangle to the triangle, except that no texture is needed; it can be generated procedurally. But enough talk, let's code. Fragment shader:
in vec3 v_barycentric; // barycentric coordinate inside the triangle
uniform float f_thickness; // thickness of the rendered lines
void main()
{
    // See which edge this pixel is closest to.
    float f_closest_edge = min(v_barycentric.x, min(v_barycentric.y, v_barycentric.z));
    // Calculate the derivative (divide f_thickness by this to keep the line width constant in screen space).
    float f_width = fwidth(f_closest_edge);
    // Calculate alpha.
    float f_alpha = smoothstep(f_thickness, f_thickness + f_width, f_closest_edge);
    gl_FragColor = vec4(vec3(0.0), f_alpha);
}
And vertex shader:
in vec4 v_pos; // position of the vertices
in vec3 v_bc; // barycentric coordinate inside the triangle
out vec3 v_barycentric; // barycentric coordinate inside the triangle
uniform mat4 t_mvp; // modelview-projection matrix
void main()
{
    gl_Position = t_mvp * v_pos;
    v_barycentric = v_bc; // just pass it on
}
Here, the barycentric coordinates are simply (1, 0, 0), (0, 1, 0) and (0, 0, 1) for the three triangle vertices (the order does not really matter, which makes packing into triangle strips potentially easier).
The obvious disadvantage of this approach is that it consumes an extra vertex attribute and you need to modify your vertex array accordingly. This could be avoided with a very simple geometry shader, but I'd still suspect that would be slower than just feeding the GPU more data.
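To make "modify your vertex array" concrete, here is a hedged sketch of feeding the barycentric attribute for a non-indexed triangle list; binding it to location 1 (to match in vec3 v_bc) is an assumption:
// One vec3 per vertex, repeating the pattern (1,0,0), (0,1,0), (0,0,1) per triangle.
std::vector<glm::vec3> barycentric;
for (size_t i = 0; i < vertexCount; i += 3) {
    barycentric.push_back(glm::vec3(1.0f, 0.0f, 0.0f));
    barycentric.push_back(glm::vec3(0.0f, 1.0f, 0.0f));
    barycentric.push_back(glm::vec3(0.0f, 0.0f, 1.0f));
}
GLuint bcVbo;
glGenBuffers(1, &bcVbo);
glBindBuffer(GL_ARRAY_BUFFER, bcVbo);
glBufferData(GL_ARRAY_BUFFER, barycentric.size() * sizeof(glm::vec3), barycentric.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(1); // assumed location of v_bc
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);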
In modern OpenGL (OpenGL 3.2 and higher), you can use a geometry shader for this:
#version 330
layout (triangles) in;
layout (line_strip /*for lines, use "points" for points*/, max_vertices=3) out;
in vec2 texcoords_pass[]; //Texcoords from Vertex Shader
in vec3 normals_pass[]; //Normals from Vertex Shader
out vec3 normals; //Normals for Fragment Shader
out vec2 texcoords; //Texcoords for Fragment Shader
void main(void)
{
    int i;
    for (i = 0; i < gl_in.length(); i++)
    {
        texcoords = texcoords_pass[i]; // pass through
        normals = normals_pass[i]; // pass through
        gl_Position = gl_in[i].gl_Position; // pass through
        EmitVertex();
    }
    EndPrimitive();
}
Notes:
For points, change layout (line_strip, max_vertices=3) out; to layout (points, max_vertices=3) out;
Read more about Geometry Shaders
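Attaching such a geometry shader is the usual compile-and-link step; a minimal sketch, assuming program, vs and fs already exist and geometrySrc holds the source above:
GLuint gs = glCreateShader(GL_GEOMETRY_SHADER);
glShaderSource(gs, 1, &geometrySrc, nullptr);
glCompileShader(gs); // check the compile log in real code

glAttachShader(program, vs); // existing vertex shader
glAttachShader(program, gs); // the wireframe geometry shader
glAttachShader(program, fs); // existing fragment shader
glLinkProgram(program);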
If you are using the fixed pipeline (OpenGL < 3.3) or the compatibility profile you can use
//Turn on wireframe mode
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
//Draw the scene with polygons as lines (wireframe)
renderScene();
//Turn off wireframe mode
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
In this case you can change the line width by calling glLineWidth.
Otherwise you need to change the polygon mode inside your draw method (glDrawElements, glDrawArrays, and so on), and you may end up with rough results because your vertex data describes triangles while you are outputting lines. For best results, consider using a geometry shader or creating separate data for the wireframe.
The easiest way is to draw the primitives as GL_LINE_STRIP.
glBegin(GL_LINE_STRIP);
/* Draw vertices here */
glEnd();
You can use the GLUT and GLU libraries like this:
For a sphere:
glutWireSphere(radius, 20, 20);
For a cylinder:
GLUquadric *quadratic = gluNewQuadric();
gluQuadricDrawStyle(quadratic, GLU_LINE);
gluCylinder(quadratic, 1, 1, 1, 12, 1);
For a cube:
glutWireCube(1.5);
Use this function:
void glPolygonMode(GLenum face, GLenum mode);
face: Specifies the polygon faces that the mode applies to. It can be GL_FRONT for the front side of the polygon, GL_BACK for the back, or GL_FRONT_AND_BACK for both.
mode: Three modes are defined:
GL_POINT: Polygon vertices that are marked as the start of a boundary edge are drawn as points.
GL_LINE: Boundary edges of the polygon are drawn as line segments. (your target)
GL_FILL: The interior of the polygon is filled.
P.S.: glPolygonMode controls how polygons are interpreted for rasterization in the graphics pipeline.
For more information, see the OpenGL reference pages from the Khronos Group.
If it's OpenGL ES 2.0 you're dealing with, you can choose one of these draw-mode constants:
GL_LINE_STRIP, GL_LINE_LOOP, or GL_LINES to draw lines,
GL_POINTS if you need to draw only vertices, or
GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, or GL_TRIANGLES to draw filled triangles,
as the first argument to your
glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid * indices)
or
glDrawArrays(GLenum mode, GLint first, GLsizei count) calls.
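For example, with the same bound vertex data, swapping the mode is all it takes (vertexCount is a placeholder):
glDrawArrays(GL_TRIANGLES, 0, vertexCount); // filled triangles
glDrawArrays(GL_LINE_LOOP, 0, vertexCount); // one closed outline through the vertices
glDrawArrays(GL_POINTS, 0, vertexCount); // vertices only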
A good and simple way of drawing anti-aliased lines on a non-anti-aliased render target is to draw rectangles 4 pixels wide with a 1x4 texture whose alpha channel values are {0, 1, 1, 0}, using linear filtering with mip-mapping off. This will make the lines 2 pixels thick, but you can change the texture for different thicknesses.
This is faster and easier than the barycentric calculations.
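A hedged sketch of creating that 1x4 texture (GL_ALPHA is a legacy-format assumption; on a core profile you would use GL_RED and sample the red channel instead):
// 1x4 alpha texture: transparent on the outer rows, opaque in the middle.
const unsigned char alpha[4] = { 0, 255, 255, 0 };
GLuint lineTex;
glGenTextures(1, &lineTex);
glBindTexture(GL_TEXTURE_2D, lineTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 1, 4, 0, GL_ALPHA, GL_UNSIGNED_BYTE, alpha);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // linear, no mip-mapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);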