GLSL Shader for geometric primitives in the Blender Game Engine - opengl

I want to draw geometric primitives in Blender's Game Engine by using a GLSL shader.
For example:
glBegin(GL_LINE_STRIP);
glVertex2f(-0.6,0.3);
glVertex2f(-0.3,0.2);
glVertex2f(-.65, 0.2);
glVertex2f(-0.4, 0.05);
glEnd();
How do I wrap this into VertexShader = """
...code...
""", or does it even belong there at all, since this shape has nothing to do with a scene object's vertices? And do I attach it to an object in the Logic Editor using an Always sensor that links to the script in order for it to work?
All I want to do is draw a line along some x and y coordinates (z is 0 in this case). It should also scale up to drawing an entire array of values, but after hours of research I'd be happy just to get three points working! A simple example would be appreciated a lot.
So it's all about how to get basic stuff like this displayed in the BGE using GPU-based GLSL:
http://www.informit.com/articles/article.aspx?p=328646&seqNum=6
or here on Stack Overflow:
Is it possible to draw line thickness in a fragment shader?
PS.
I could not find anything fitting for my problem on http://en.wikibooks.org/wiki/GLSL_Programming
And Rasterizer.drawLine() from bge doesn't work for my purposes: it behaves oddly, disappears immediately, and the points can't be accessed/edited/moved afterwards.

Related

GLSL shader: occlusion order and culling

I have a GLSL shader that draws a 3D curve given a set of Bezier curves (3D coordinates of points). The drawing itself works as I want, except that occlusion is not handled correctly: from certain viewpoints, the curve that is supposed to be in the very front still appears occluded, and vice versa, the part of a curve that is supposed to be occluded is still visible.
To illustrate, here are a couple of example screenshots:
Colored curve is closer to the camera, so it is rendered correctly here.
Colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assumed it was occlusion culling (update: it actually indicates a problem with the depth buffer; terminology confusion!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass the color info on to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry in the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update solution
So, the initial problem, as I learned, was not about finding a culling algorithm, but that I did not handle the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth buffer set-up, OpenGL handles the occlusions correctly by itself, so I did not need to re-invent the wheel.
I searched through my GLSL program and found that I was basically setting the z-values to zero in my geometry shader when translating the vertex coordinates to screen coordinates (vec2( vertex.xy / vertex.w ) * Viewport;). I fixed it by calculating the z-values (vertex.z/vertex.w) separately and assigning them to the emitted points (gl_Position = vec4( screenCoords[i], zValues[i], 1.0 );). That solved my problem.
Regarding the depth buffer settings, I didn't have to specify them explicitly, since the library I use sets them up by default exactly as I need.
If you don't use the depth buffer, then the most recently rendered object will always be on top.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).
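For reference, a minimal per-frame sketch of that setup (a sketch only: it assumes the fixed-function matrix stack, GL headers and a current context; the translation value and drawCurve() are placeholders, not part of the original code):
// Once at startup: enable depth testing.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
// Every frame: clear color and depth together before drawing anything.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Hypothetical example of giving one object its own depth on the CPU side
// by translating the modelview matrix in Z before drawing it.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(0.0f, 0.0f, -5.0f);   // -5.0f is an arbitrary example depth
// drawCurve();                    // placeholder for your own draw call
glPopMatrix();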

OpenGL - How to create Order Independent transparency?

I've been working on a game engine for educational purposes and I came across this issue I cannot seem to find an answer for:
The alpha channel only works for objects that were drawn before the object that has the alpha channel. For example: in a scene with 3 objects, let's say a cat, a dog and a (transparent) bottle, with both the cat and the dog behind the bottle, if the dog is drawn first, the bottle second and the cat third, only the dog will be seen through the bottle.
Here's a picture of this issue:
I used C++ for the engine, Win32 API for the editor and GLSL for shading:
// some code here
vec4 alpha = texture2D(diffuse, texCoord0).aaaa;
vec4 negalpha = alpha * vec4(-1,-1,-1,1) + vec4(1,1,1,0);
vec4 textureComponentAlpha = alpha*textureComponent+negalpha*vec4(1,1,1,0);//(texture2D ( diffuse, texCoord0 ) ).aaaa;
gl_FragColor = (textureComponentAlpha + vec4(additiveComponent.xyz, 0)) * vec4(lightingComponent.xyz, 1);
In C++:
glEnable(GL_ALPHA_TEST);
glDepthFunc(GL_EQUAL);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I assume it has something to do with the way the alpha test is made, or something like that.
Could anyone help me fix this, please?
I am using something similar to the answer linked in @RetoKoradi's comment, but I have double-layered transparent models with textures (glass with both inner and outer surfaces) surrounded by fully solid machinery and other geometry.
For such scenes I also use a multi-pass approach, and the Z-sorting is done by the sequence in which the front face is set (a consolidated sketch of the whole pass sequence follows the steps below).
render all solid objects
render all transparent objects
This is the tricky part. First I set:
glGetIntegerv(GL_DEPTH_FUNC,&depth_funct);
glDepthFunc(GL_ALWAYS);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_CULL_FACE);
I have the geometry layers stored separately (inner and outer), so the Z-sorting is done like this:
Render outer layer back faces with glFrontFace(GL_CW);
Render inner layer back faces with glFrontFace(GL_CW);
Render inner layer front faces with glFrontFace(GL_CCW);
Render outer layer front faces with glFrontFace(GL_CCW);
And lastly restore
glDisable(GL_BLEND);
glDepthFunc(depth_funct);
render all solid objects again
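Put together, one frame then looks roughly like this (a sketch under the assumptions above; drawSolids(), drawOuterLayer() and drawInnerLayer() are hypothetical placeholders for your own draw calls):
GLint depth_funct;
// 1. opaque geometry first
drawSolids();
// 2. transparent geometry, blended, with the depth test forced to pass
glGetIntegerv(GL_DEPTH_FUNC, &depth_funct);
glDepthFunc(GL_ALWAYS);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_CULL_FACE);
glFrontFace(GL_CW);  drawOuterLayer();   // outer layer, back faces
glFrontFace(GL_CW);  drawInnerLayer();   // inner layer, back faces
glFrontFace(GL_CCW); drawInnerLayer();   // inner layer, front faces
glFrontFace(GL_CCW); drawOuterLayer();   // outer layer, front faces
// 3. restore state and re-render the solids
glDisable(GL_BLEND);
glDepthFunc(depth_funct);
drawSolids();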
It is far from perfect, but it is enough for my purposes; it looks like this:
I cannot encourage you enough to have a look at this NVidia paper and the related blog post by Morgan McGuire.
This is pretty easy to implement and has great results overall.
I'm not entirely sure this will help your situation, but do you have blending and alpha enabled? As in:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The method for getting correct transparency of rendered objects independently of the order in which they are drawn is called Order Independent Transparency (OIT).
There is a great presentation from Nvidia summarizing the latest solutions in this area: Order Independent Transparency In OpenGL 4.x
"OpenGL 4.x" in the title is not accidental, because Atomic Counters, which are important for the OIT implementation, only appeared in OpenGL 4.2 core.
One of the algorithms of OIT is as follows:
During the first rendering pass, store each fragment in a buffer and collect all fragments belonging to a single screen pixel in a linked list. Atomic Counters are used both to store new fragments in the buffer and to maintain the linked list for each screen pixel.
During the second rendering pass, each linked list is sorted by z-depth and its fragments are alpha-blended in the correct order.
A simple alternative to OIT is to discard every second (odd) fragment in a fragment shader:
if (onlyOddFragments && ((int(gl_FragCoord.x) + int(gl_FragCoord.y)) % 2) == 1)
discard;
So you will see the objects farther from the camera through the discarded fragments. If multisample antialiasing (MSAA) is enabled, no checkerboard pattern is visible, even at low resolutions.
Here is a video comparing the standard transparency approach, where all triangles are simply output in order, with the two approaches above. The implementation can be found in some open-source GitHub projects, e.g. here.

OpenGL - Lights don't affect (only) glutSolidCone

As seen in the following image, I have a nice rendering with OpenGL using a mesh and OpenGL lights.
However, when I try to depict just the underlying skeleton of the hand, the ball joints are rendered nicely, but the OpenGL lights seem to have no effect on the cone bones, which ruins the 3D perception of them.
Both the spheres and the cones are drawn at the same point in the code (with nothing in between that could cause harm), using GLUT.
glutSolidSphere
glutSolidCone
The exact call to glutSolidCone (please ignore the variables that set length, etc.) is:
glutSolidCone( 2.2,boneLength-2*_screenshotWidth_Points,4,100*boneLength );
This has been pending for quite some time now; whenever I have some free time I look into it, but no luck so far. Any hints?
The problem you're running into is that in fixed-function OpenGL (which is what glutSolidCone uses) illumination is calculated only at the vertices, and the resulting colors are then interpolated across the face. This of course looks bad if there are not enough vertices to sample the light falloff or the specular highlights.
The most straightforward solution would be to drop in a per-fragment illumination shader program in compatibility-profile mode that uses the built-in variables instead of user-supplied uniforms.
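As an illustration, here is a minimal sketch of such a compatibility-profile shader pair (GLSL 1.20, built-in variables only, kept as C++ string literals; it assumes a positional gl_LightSource[0] and would still have to be compiled, linked and bound around the glutSolidCone call):
// GLSL sources as C++ strings; compile/link them with the usual
// glCreateShader / glShaderSource / glCompileShader / glLinkProgram calls.
const char* vertexSrc = R"(
#version 120
varying vec3 normal;   // eye-space normal
varying vec3 eyePos;   // eye-space position
void main() {
    normal = gl_NormalMatrix * gl_Normal;
    eyePos = vec3(gl_ModelViewMatrix * gl_Vertex);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
)";
const char* fragmentSrc = R"(
#version 120
varying vec3 normal;
varying vec3 eyePos;
void main() {
    vec3 n = normalize(normal);
    // assumes a positional light; gl_LightSource[0].position is in eye space
    vec3 l = normalize(gl_LightSource[0].position.xyz - eyePos);
    float diff = max(dot(n, l), 0.0);
    gl_FragColor = gl_FrontLightProduct[0].ambient
                 + gl_FrontLightProduct[0].diffuse * diff;
}
)";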

Doubts in RayTracing with GLSL

I am trying to develop a basic ray tracer. So far I have calculated the intersection with a plane and Blinn-Phong shading. I am working with a 500*500 window, and my primary-ray generation code is as follows:
vec3 rayDirection = vec3( gl_FragCoord.x-250.0,gl_FragCoord.y-250.0 , 10.0);
Now I am not sure whether the above method is right or wrong. Please give me some insights.
I am also unsure whether we need to construct geometry in the OpenGL code while ray tracing in GLSL. For example, if I am trying to raytrace a plane, do I need to construct the plane in the OpenGL code using glVertex2f?
vec3 rayDirection = vec3( gl_FragCoord.x-250.0,gl_FragCoord.y-250.0 , 10.0);
Now I am not sure whether the above method is right or wrong. Please give me some insights.
There's no right or wrong with projections. You could just as well map viewport pixels to azimuth and elevation angles. Actually your way of doing it is not bad at all. I'd just pass the viewport dimensions in an additional uniform instead of hardcoding them, and normalize the vector. The Z component effectively works like a focal length.
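On the application side that could look something like the following sketch (the uniform names viewportSize and focalLength are assumptions, chosen just to illustrate the idea):
// C++ side: pass the viewport size and "focal length" instead of hard-coding 250 and 10.
GLint vpLoc = glGetUniformLocation(program, "viewportSize");
glUniform2f(vpLoc, 500.0f, 500.0f);
glUniform1f(glGetUniformLocation(program, "focalLength"), 10.0f);
// In the fragment shader the primary ray would then become:
//   uniform vec2 viewportSize;
//   uniform float focalLength;
//   vec3 rayDirection = normalize(vec3(gl_FragCoord.xy - 0.5 * viewportSize, focalLength));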
I am also unsure whether we need to construct geometry in the OpenGL code while ray tracing in GLSL. For example, if I am trying to raytrace a plane, do I need to construct the plane in the OpenGL code using glVertex2f?
Raytracing works on a global description containing the full scene. OpenGL primitives, however, are purely local, i.e. just individual triangles, lines or points, and OpenGL doesn't maintain a scene database. So geometry passed in through the usual OpenGL drawing functions cannot be raytraced (at least not that way).
This is about the biggest obstacle to doing raytracing with GLSL: you somehow need to implement a way to deliver the whole scene in some freely accessible buffer.
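For a handful of objects, a plain uniform array is the simplest such buffer. A hedged sketch (the names spheres/sphereCount, the Sphere layout and the matching uniform declaration in the shader are assumptions for illustration):
#include <vector>
// Each sphere packed as (center.xyz, radius), matching a "uniform vec4 spheres[16];" in the shader.
struct Sphere { float x, y, z, r; };
std::vector<Sphere> scene = { { 0.0f, 0.0f, -5.0f, 1.0f },
                              { 2.0f, 1.0f, -7.0f, 0.5f } };
glUseProgram(program);  // the ray-tracing shader program
glUniform4fv(glGetUniformLocation(program, "spheres"),
             (GLsizei)scene.size(), &scene[0].x);
glUniform1i(glGetUniformLocation(program, "sphereCount"), (GLint)scene.size());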
It is possible to use ray marching to render certain types of complex scenes in a single fragment shader. Here are some examples (use Chrome or Firefox; requires WebGL):
Gift boxes: http://glsl.heroku.com/e#820.2
Torus Journey: http://glsl.heroku.com/e#794.0
Christmas tree: http://glsl.heroku.com/e#729.0
Modutropolis: http://glsl.heroku.com/e#327.0
The key to making this stuff work is writing "distance functions" that tell the ray marcher how far it is from the surface of an object. For more info on distance functions, see:
http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm

Displaying multiple cubes in OpenGL with shaders

I'm new to OpenGL and shaders. I have a project that involves using shaders to display cubes.
So basically I'm supposed to display eight cubes using a perspective projection, centered at (+-10, +-10, +-10) from the origin, each in a different color. In other words, there would be a cube centered at (10, 10, 10), another centered at (10, 10, -10), and so on; there are 8 combinations of (+-10, +-10, +-10). Then I'm supposed to provide a key command 'c' that changes the color of all the cubes each time the key is pressed.
So far I have been able to make one cube at the origin. I know I should use this cube and translate it to create the eight cubes, but I'm not sure how to do that. Does anyone know how I would go about this?
That question is, as mentioned, too broad. But you said that you managed to draw one cube, so I can assume that you can set up the camera and your window. That leaves us with how to render 8 cubes. There are many ways to do this, but I'll mention 2 very different ones.
Classic:
You make a function that takes 2 parameters - the center of the cube and its size. With these 2 you can build up the cube the same way you're doing it now, but using those variables instead of fixed values. For example, the front face would be:
glBegin(GL_TRIANGLE_STRIP);
glVertex3f(center.x-size/2, center.y-size/2, center.z+size/2);
glVertex3f(center.x+size/2, center.y-size/2, center.z+size/2);
glVertex3f(center.x-size/2, center.y+size/2, center.z+size/2);
glVertex3f(center.x+size/2, center.y+size/2, center.z+size/2);
glEnd();
This is just to show how to build it from variables; you can do it the same way you're doing it now.
Now, you mentioned you want to use shaders. The shader topic is very broad, just like OpenGL itself, but I can give you the idea. In OpenGL 3.2 a special shader stage called the geometry shader was added. Its purpose is to work with geometry as a whole - in contrast to vertex shaders, which work with just 1 vertex at a time, or fragment shaders, which work with just one fragment at a time - geometry shaders work with one piece of geometry (one primitive) at a time. If you're rendering triangles, you get all the info about the single triangle that is currently passing through the pipeline. This wouldn't be anything special on its own, but these shaders don't only modify incoming geometry, they can create new geometry! That's what I do in one of my shader programs, where I render points, but when they pass through the geometry shader these points are converted to circles. Similarly, you could render just points, but inside the geometry shader emit whole cubes. The point position would serve as the center of each cube, and you would pass the cube size in a uniform. If the size of the cubes may vary, you also need a vertex shader that passes the size from an attribute to a variable that can be read in the geometry shader.
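To make the idea concrete, here is a hedged sketch of a geometry shader that expands each incoming point into a small screen-aligned quad; a cube works the same way, it just emits more vertices (GLSL 1.50, kept as a C++ string literal; the uniform name size is an assumption):
const char* geometrySrc = R"(
#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform float size;   // half-size of the quad, in clip-space units
void main() {
    vec4 c = gl_in[0].gl_Position;   // point already transformed by the vertex shader
    gl_Position = c + vec4(-size, -size, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( size, -size, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4(-size,  size, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( size,  size, 0.0, 0.0); EmitVertex();
    EndPrimitive();
}
)";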
As for the color problem: if you don't implement fragment shaders, the only thing you need to do is call glColor3f before rendering the cubes. It takes 3 parameters - the red, green and blue values. Note that these values don't range from 0 to 255, but from 0 to 1. You can get confused when your cubes aren't rendered if you use a white background: you might think that setting the color to 200, 10, 10 should give you red cubes, but you don't see anything, because the values are clamped to 1 and you are in fact rendering white cubes. To avoid such errors, I recommend setting the background to something like grey with glClearColor.
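For the concrete case in the question, a hedged fixed-function sketch of the classic approach (drawUnitCube() is a placeholder for the cube-at-the-origin code you already have):
// Grey background so light-colored cubes stay visible.
glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (int x = -1; x <= 1; x += 2)
    for (int y = -1; y <= 1; y += 2)
        for (int z = -1; z <= 1; z += 2) {
            glColor3f(1.0f, 0.2f, 0.2f);  // pick a (possibly different) color per cube, 0..1 range
            glPushMatrix();
            glTranslatef(10.0f * x, 10.0f * y, 10.0f * z);  // the eight (+-10,+-10,+-10) centers
            // drawUnitCube();  // placeholder: your existing cube-at-origin drawing code
            glPopMatrix();
        }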