OpenGL 3.2 Core Profile glLineWidth

I'm setting up an OpenGL 3.2 Core context on Mac OS X. I want to be able to draw some thick black lines on the screen. In previous versions of OpenGL, I could just set
glLineWidth(10.0f);
and I would get a line 10 pixels wide. However, when I check the line width ranges in 3.2 Core
GLint range[2];
glGetIntegerv(GL_ALIASED_LINE_WIDTH_RANGE, range);
glGetIntegerv(GL_SMOOTH_LINE_WIDTH_RANGE, range);
I get a value of 1 for aliased lines and 0-1 for smooth lines. How can I make a line that is 10.0 pixels wide in screen space? Is there a simple way to draw this other than making each line segment a rectangle?

Using the OpenGL 3.2 core profile, calling glLineWidth with a value greater than 1.0 gives an INVALID_VALUE error (call glGetError to prove it).
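For instance, a quick check along those lines (a sketch; it assumes an OpenGL 3.2 core context is current and the usual GL and stdio headers are included):

glGetError();                      /* clear any stale error first            */
glLineWidth(10.0f);                /* request a wide line                    */
GLenum err = glGetError();         /* expected to be GL_INVALID_VALUE here   */
if (err == GL_INVALID_VALUE)
    fprintf(stderr, "wide lines are rejected by this context\n");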
You can, however, get the desired result by computing the quad required to draw the line.
I think you should be able to generate quads from line points: hey, a wide line is just a quad! Maybe you could use techniques like this to get the result you want.
The key is: instead of relying on glLineWidth, you supply a unit quad as input (4 vertices drawn as a triangle strip), then transform the incoming vertices inside a shader by passing it the appropriate uniforms.
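A minimal sketch of that idea in GLSL 1.50 (the attribute and uniform names here are made up for illustration): the quad corners (0,-0.5), (1,-0.5), (0,0.5), (1,0.5) are drawn as a triangle strip, and the vertex shader stretches them into a segment of the requested pixel width.

#version 150
uniform vec2  uStart;     // segment start point, in pixels
uniform vec2  uEnd;       // segment end point, in pixels
uniform float uWidth;     // desired line width, in pixels
uniform vec2  uViewport;  // framebuffer size, in pixels
in vec2 aCorner;          // unit-quad corner: x in [0,1], y in [-0.5, 0.5]

void main()
{
    vec2 dir    = normalize(uEnd - uStart);        // direction along the segment
    vec2 normal = vec2(-dir.y, dir.x);             // direction across the segment
    vec2 pixel  = mix(uStart, uEnd, aCorner.x)     // slide along the segment
                + normal * (aCorner.y * uWidth);   // offset by half the width to each side
    // convert pixel coordinates to normalized device coordinates
    gl_Position = vec4(2.0 * pixel / uViewport - 1.0, 0.0, 1.0);
}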
Another approach would be to render using a geometry shader that generates a quad from each line segment. However, I'm not sure about this one; I don't know whether a geometry shader (assuming it is feasible at all) would be the best approach: the cost of drawing a line strip one quad at a time would be the shader uniform setup for each line composing the strip.
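For completeness, a hedged sketch of that geometry-shader variant (GLSL 1.50; the uniform names are assumptions): each input line segment is expanded into a screen-space quad uWidth pixels across. Depth is dropped here for brevity; carry gl_in[i].gl_Position.z / gl_in[i].gl_Position.w through if you need depth testing.

#version 150
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float uWidth;     // line width, in pixels
uniform vec2  uViewport;  // framebuffer size, in pixels

void main()
{
    // endpoints in normalized device coordinates
    vec2 p0 = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;
    vec2 p1 = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;

    // unit direction in pixel space, then a perpendicular offset of
    // half the width on each side, converted back to NDC
    vec2 dir    = normalize((p1 - p0) * uViewport);
    vec2 offset = vec2(-dir.y, dir.x) * uWidth / uViewport;

    gl_Position = vec4(p0 + offset, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p0 - offset, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p1 + offset, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p1 - offset, 0.0, 1.0); EmitVertex();
    EndPrimitive();
}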

This could depend on the type of projection you have set up. Are you using an orthographic or a perspective projection matrix?
I think that if you are not using an orthographic projection, the final rasterisation of the primitive will depend on the distance of the object (model matrix) from the camera (view matrix).
Cheers

Line widths greater than 1.0 are deprecated and not supported any further in a core profile OpenGL context.
However, it is still maintained in a compatibility profile context.
See OpenGL 4.6 API Core Profile Specification - E.2.1 Deprecated But Still Supported Features:
The following features are deprecated, but still present in the core profile. They may be removed from a future version of OpenGL, and are removed in a forward-compatible context implementing the core profile.
Wide lines - LineWidth values greater than 1.0 will generate an INVALID_VALUE error
For a core profile context possible solutions are presented in the answers to:
OpenGL Line Width
GLSL Geometry shader to replace glLineWidth
Drawing a variable width line in openGL (No glLineWidth)
OpenGL : thick and smooth/non-broken lines in 3D

Related

Set OpenGL sample position

Is it possible to change the sample position for the OpenGL rasterisation from (0.5, 0.5) to something else? (I am referring to the sample position used when rendering without any multisampling etc.)
The reason is that I would like to implement anti-aliasing by blending multiple render results together with different sample positions. I would then need the variables coming from the vertex shader to be interpolated to different positions within the pixel.
You can use the dedicated GL extension (https://www.opengl.org/registry/specs/ARB/sample_locations.txt) if your driver supports it.
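If the extension isn't available, the multi-pass blending idea from the question can be approximated by jittering the projection itself by a sub-pixel amount each pass, which shifts where the varyings are interpolated relative to the pixel grid. A minimal vertex-shader sketch with made-up uniform names (this is a substitute technique, not the extension's API):

#version 150
uniform mat4 uModelViewProjection;
uniform vec2 uJitterPixels;    // e.g. (0.25, -0.25): offset from the pixel centre for this pass
uniform vec2 uViewportSize;    // framebuffer size in pixels
in vec3 aPosition;

void main()
{
    vec4 clip = uModelViewProjection * vec4(aPosition, 1.0);
    // convert the pixel offset to NDC and apply it after projection;
    // multiplying by clip.w keeps the shift constant in screen space
    vec2 jitterNdc = 2.0 * uJitterPixels / uViewportSize;
    clip.xy += jitterNdc * clip.w;
    gl_Position = clip;
}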

GLSL shader: occlusion order and culling

I have a GLSL shader that draws a 3D curve given a set of Bezier curves (3D coordinates of points). The drawing itself works as I want, except that the occlusion does not work correctly, i.e., under certain viewpoints the curve that is supposed to be in the very front appears to be occluded, and the reverse: the part of a curve that is supposed to be occluded is still visible.
To illustrate, here are a couple of example screenshots:
Colored curve is closer to the camera, so it is rendered correctly here.
Colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assumed it was occlusion culling (update: it actually indicates a problem with the depth buffer; terminology confusion!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass the color info on to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry in the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update solution
So, the initial problem, as I learned, was not about finding a culling algorithm, but that I did not handle the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth buffer set-up, OpenGL handles occlusion correctly by itself, so I do not need to re-invent the wheel.
I searched through my GLSL program and found that I was basically setting the z-values to zero in my geometry shader when translating the vertex coordinates to screen coordinates (vec2( vertex.xy / vertex.w ) * Viewport;). I fixed it by calculating the z-values (vertex.z / vertex.w) separately and assigning them to the emitted points (gl_Position = vec4( screenCoords[i], zValues[i], 1.0 );). That solved my problem.
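For reference, the relevant part of the emit loop after the fix looks roughly like this (a sketch based on the description above; Viewport follows the question's naming, the rest is hypothetical):

// inside the geometry shader, for each input vertex i:
vec4 clip   = gl_in[i].gl_Position;                 // clip-space position from the vertex shader
vec2 screen = ( clip.xy / clip.w ) * Viewport;      // screen-space xy, as before
float ndcZ  = clip.z / clip.w;                      // normalized-device depth, no longer discarded
gl_Position = vec4( screen, ndcZ, 1.0 );            // emit with the real depth instead of 0.0
EmitVertex();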
Regarding the depth buffer settings, I didn't have to specify them explicitly, since the library I use sets them up by default exactly as I need.
If you don't use the depth buffer, then the most recently rendered object will be on top always.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
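In code, the setup and the per-frame clear look roughly like this (a sketch; it assumes the window/framebuffer was created with a depth attachment in the first place):

/* once, at initialization */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);

/* every frame, before drawing anything */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);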
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).

point sprites with gl_PointSize render at a different size in OpenGL ES 2.0 and OpenGL 3+

I have an OpenGL program which utilizes the OpenGL 3.2 Core profile on Mac OS X and OpenGL ES 2.0 on iOS.
Part of my application renders point sprites by writing to gl_PointSize in the vertex shader. Unfortunately, it appears that gl_PointSize renders roughly 50x larger in OpenGL 3 than it does in OpenGL ES 2.0. The documentation for each API states that gl_PointSize defines the number of pixels, so I am unsure why this would be the case. Is there perhaps a default OpenGL parameter that modifies the output of gl_PointSize? Is there anything else that may be causing the vast difference in size?
Each platform uses exactly the same shader (desktop has ARB_ES2 compatibility). I have also checked that all uniform inputs are identical and both render at the same resolution.
Outside of the shader, the only point sprite related call I make is glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);. On each platform independent of one another, I can adjust the point size just fine.
The comments were correct about retina. A combination of scaling multiplications produced unintended results because of the 2x resolution of retina screens. This caused the point sprites to be rendered 16x larger on OpenGL 3 than on OpenGL ES 2.0.
When raw values were written in the shader, the point sprite was in fact 2x larger, which fills 4x the area.
To correct the scaling problems it was helpful to read this from the specification:
Point rasterization produces a fragment for each framebuffer pixel whose center lies inside a square centered at the point's (x_w, y_w), with side length equal to the current point size.
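Given that, one way to keep the sprites the same apparent size on both platforms is to express the size in screen points and apply the backing-store scale yourself. A sketch with made-up uniform names (desktop GLSL 1.50 shown; the ES 2.0 version would use attribute instead of in, and the scale factor would be queried on the CPU side, e.g. from the view's backing scale):

#version 150
uniform mat4  uModelViewProjection;
uniform float uContentScale;    // 1.0 on a normal display, 2.0 on retina (assumed CPU-side query)
uniform float uSpriteSizePts;   // desired sprite size, in screen points
in vec3 aPosition;

void main()
{
    gl_Position  = uModelViewProjection * vec4(aPosition, 1.0);
    // gl_PointSize is measured in framebuffer pixels, so scale it up on
    // high-DPI framebuffers to keep the on-screen size consistent
    gl_PointSize = uSpriteSizePts * uContentScale;
}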

point rendering in openGL and GLSL

Question: How do I render points in OpenGL using GLSL?
Info: a while back I made a gravity simulation in Python and used Blender to do the rendering. It looked something like this. As an exercise I'm porting it over to OpenGL and OpenCL. I actually already have it working in OpenCL, I think. It wasn't until I spent a fair bit of time working in OpenCL that I realized that it is hard to know whether this is right without being able to see the result. So I started playing around with OpenGL. I followed the OpenGL GLSL tutorial on Wikibooks, which was very informative, but it didn't cover points or particles.
I'm at a loss for where to start. Most tutorials I find are for the default OpenGL pipeline; I want to do it using GLSL. I'm still very new to all this, so forgive my potential idiocy if the answer is right beneath my nose. What I'm looking for is how to make halos around the points that blend into each other. I have a rough idea of how to do this in the fragment shader, but as far as I'm aware I can only touch the pixels that are enclosed by polygons created by my points. I'm sure there is a way around this, it would be crazy for there not to be, but in my newbishness I'm clueless. Can someone give me some direction here? Thanks.
I think what you want is to render the particles as GL_POINTS with GL_POINT_SPRITE enabled, then use your fragment shader to either map a texture in the usual way, or generate the halo gradient procedurally.
When you are rendering in GL_POINTS mode, set gl_PointSize in your vertex shader to set the size of the particle. The vec2 variable gl_PointCoord will give you the coordinates of your fragment in the fragment shader.
EDIT: Setting gl_PointSize will only take effect if GL_PROGRAM_POINT_SIZE has been enabled. Alternatively, just use glPointSize to set the same size for all points. Also, as of OpenGL 3.2 (core), the GL_POINT_SPRITE flag has been removed and is effectively always on.
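For example, a small GLSL 1.50 fragment shader that makes a soft halo from gl_PointCoord (names other than the built-ins are made up):

#version 150
in  vec3 vColor;     // per-particle color passed down from the vertex shader
out vec4 fragColor;

void main()
{
    // gl_PointCoord runs from (0,0) to (1,1) across the sprite
    float dist = length(gl_PointCoord - vec2(0.5));   // distance from the sprite centre
    float halo = 1.0 - smoothstep(0.0, 0.5, dist);    // 1 at the centre, fading to 0 at the edge
    fragColor  = vec4(vColor, halo);                  // alpha fade; blending turns overlaps into glow
}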
Simply draw point sprites (using GL_POINT_SPRITE) and use the blending functions GL_SRC_ALPHA and GL_ONE; the "halos" should then be visible. Blending is responsible for the "halos", so look for some more info about that topic.
Also, you have to disable depth writes.
Here is a link about that: http://content.gpwiki.org/index.php/OpenGL:Tutorials:Tutorial_Framework:Particles
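The state setup that goes with this is short; a sketch (pre-3.2 / compatibility GL, since GL_POINT_SPRITE is mentioned above):

glEnable(GL_POINT_SPRITE);               /* not needed in 3.2 core, where it is always on            */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);       /* additive blending: overlapping halos add up into a glow  */
glDepthMask(GL_FALSE);                   /* disable depth writes so sprites don't cut into each other */
/* ... draw the points ... */
glDepthMask(GL_TRUE);                    /* restore depth writes for the rest of the scene           */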

How to make textured fullscreen quad in OpenGL 2.0 using SDL?

Simple task: draw a fullscreen quad with a texture, nothing more, so we can be sure the texture will fill the whole screen space. (We will do some more shader magic later.)
Drawing a fullscreen quad with a simple fragment shader was easy, but now we have been stuck for a whole day trying to make it textured. We have read plenty of tutorials, but none of them helped. Those about SDL mainly use OpenGL 1.x, and those about OpenGL 2.0 are either not about texturing or not about SDL. :(
The code is here. Everything is in colorLUT.c, and the fragment shader is in colorLUT.fs. The result is a window of the same size as the image, and if you comment out the last line in the shader, you get a nice red/green gradient, so the shader is fine.
Texture initialization hasn't changed compared to OpenGL 1.4. Tutorials will work fine.
If the fragment shader works but you don't see the texture (and get a black screen), texture loading is broken or the texture hasn't been set correctly. Disable the shader and try displaying a textured polygon with the fixed-function pipeline.
You may want to call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before trying to initialize the texture. The default value is 4.
An easier way to align the texture to the screen is to add a vertex shader and pass texture coordinates from it, instead of trying to calculate them using gl_FragCoord (see the sketch at the end of this answer).
You're passing the surface size into the "resolution" uniform. This is an error; you should be passing the viewport size instead.
You may want to generate mipmaps. Either generate them yourself, or use GL_GENERATE_MIPMAP, which is available in OpenGL 2 (but has been deprecated in later versions).
OpenGL.org has specifications for OpenGL 2.0 and GLSL 1.10. Download them and use them as a reference when in doubt.
The NVIDIA OpenGL SDK has examples you may want to check - they cover shaders.
And there's the "OpenGL Orange Book" (OpenGL Shading Language), which specifically deals with shaders.
Next time, include the code in the question.
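To make the vertex-shader tip above concrete, here is a minimal GLSL 1.10 pair for a textured fullscreen quad (a sketch under the assumption that the quad is submitted directly in clip space; the attribute, varying, and uniform names are made up):

// vertex shader: quad corners given in clip space, texture coordinate derived from them
#version 110
attribute vec2 aPosition;   // (-1,-1), (1,-1), (-1,1), (1,1)
varying vec2 vTexCoord;

void main()
{
    vTexCoord   = aPosition * 0.5 + 0.5;       // map clip space [-1,1] to texture space [0,1]
    gl_Position = vec4(aPosition, 0.0, 1.0);
}

// fragment shader: sample the texture instead of computing coordinates from gl_FragCoord
#version 110
uniform sampler2D uTexture;
varying vec2 vTexCoord;

void main()
{
    gl_FragColor = texture2D(uTexture, vTexCoord);
}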