What is a stereoscopic shader? - opengl

These days I am writing shaders such as Phong, Gouraud, and even a toon shader in GLSL.
I have a question: I want to make a stereoscopic shader using two cameras, where the left camera captures only red light and the right camera only cyan light, and the two views are combined into one image. That, I think, would make a stereoscopic shader.
Am I thinking about this correctly or not? I want to apply it to a 3D object made up of 2D primitives.

You'll probably need to render the scene twice, once for the left eye and once for the right eye. You can then blend the 2 together.
One way would be to render each eye into a different texture-backed FBO, and then combine those 2 textures into 1 either using a custom shader or even using additive blending, if you can render each eye with the correct colors to begin with. (If the left eye is truly only the red channel and the right is only the green and blue channels, an additive blend should do the right thing, I think.)
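A compositing fragment shader for the two-texture approach might look like this (a sketch; the sampler and varying names are illustrative, not from the question):

```glsl
#version 150
uniform sampler2D leftEye;   // assumed: texture the left-eye FBO rendered into
uniform sampler2D rightEye;  // assumed: texture the right-eye FBO rendered into
in vec2 texCoord;
out vec4 fragColor;

void main()
{
    float r  = texture(leftEye,  texCoord).r;   // red channel from the left eye
    vec2  gb = texture(rightEye, texCoord).gb;  // green/blue (cyan) from the right eye
    fragColor = vec4(r, gb, 1.0);
}
```

This is equivalent to the additive blend mentioned above when each eye's render already contains only its own channels.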

If you want to create an anaglyph image, using OpenGL lights to color the scene is the wrong approach.
Either use the method described in the other answer, i.e. using FBOs to render the scene into textures and then combining the results in a shader, or simply draw the two results on overlaid quads with glBlendFunc(GL_ONE, GL_ONE) and modulated colors. Or, for a red-cyan anaglyph, you can use glColorMask to select which color channels are written.
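A two-pass glColorMask sketch for red-cyan (the camera and scene-drawing helpers are hypothetical placeholders for your own code):

```c
/* Pass 1: left eye writes only the red channel. */
glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);
setCameraLeftEye();   /* hypothetical helper */
drawScene();          /* hypothetical helper */

/* Pass 2: right eye writes only green and blue.
 * Clear the depth buffer so the second pass is not occluded by the first. */
glClear(GL_DEPTH_BUFFER_BIT);
glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE);
setCameraRightEye();
drawScene();

/* Restore full color writes for whatever is drawn next. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
```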

Related

How can I use instancing to generate two different textures?

For example, I have two transform matrices: WVP_Left and WVP_Right.
Can I render geometry (like a rabbit) with instancing to generate a left texture and a right texture?
The left texture should contain just one rabbit with the WVP_Left transform applied, and the right texture just one rabbit with WVP_Right applied.
For now, I get two textures that both contain two rabbits with some overlapping parts.
How can I fix it?
I don't want to render the left and right scenes into one texture and split it into two textures in another pass.
I also don't want to use a geometry shader for this, because a geometry shader would add to the GPU workload.
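One way to keep a single instanced draw call without a geometry shader is layered rendering driven from the vertex shader, where the hardware supports it (e.g. via the ARB_shader_viewport_layer_array or AMD_vertex_shader_layer extensions): render into a two-layer texture array FBO and route each instance to one layer. A sketch, assuming that extension is available:

```glsl
#version 450
#extension GL_ARB_shader_viewport_layer_array : require

layout(location = 0) in vec3 position;
uniform mat4 WVP[2];   // WVP_Left in [0], WVP_Right in [1]

void main()
{
    // Instance 0 goes to layer 0 (left texture), instance 1 to layer 1 (right),
    // so each layer ends up with exactly one rabbit.
    gl_Layer    = gl_InstanceID;
    gl_Position = WVP[gl_InstanceID] * vec4(position, 1.0);
}
```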

What is the best way to specify the colors of different squares while drawing a chess board?

What is the best way to specify the colors of different squares while drawing a chess board?
Suppose I want a 2 by 2 board with colors like this:
*-----*-----*
|black|white|
*-----*-----*
|white|black|
*-----*-----*
I can now have 9 vertices and draw the board with the GL_QUADS primitive. As I understand it, filling a square with some color means specifying that color at each of its vertices.
But filling every square with a different color means duplicating 5 vertices:
*-----**-----*
|black||white|
*-----**-----*
*-----**-----*
|white||black|
*-----**-----*
Is it the simplest way to do this? And is it actually allowed in OpenGL to have vertices with equal coordinates and different colors?
If you really want to draw a quad for each field, duplicating the vertices is the way to go. There are no problems with different vertices having the same coordinates. The GL's rasterization rules will make sure that there are a) no gaps at such shared edges and b) there is also no overdraw, so you will be fine.
However, you can also draw the whole board as one quad and use texturing. All you need is a 2x2 texture with the black and white colors; use the GL_NEAREST filtering mode to get a nice and sharp checkerboard pattern.
With that approach, you can also dynamically change the number of fields without changing the texture at all, just by using the GL_REPEAT mode and only changing the texcoords.
In modern shader based GL, you can also procedurally generate the checkerboard pattern directly in the fragment shader.
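A procedural version in a fragment shader might look like this (a sketch; the uniform and varying names are illustrative):

```glsl
#version 150
in vec2 texCoord;       // assumed to run from 0 to 1 across the board
uniform float fields;   // number of fields per side, e.g. 8.0 for a chess board
out vec4 fragColor;

void main()
{
    vec2 cell = floor(texCoord * fields);
    // Parity of the cell coordinates decides the color: 0 = black, 1 = white.
    float parity = mod(cell.x + cell.y, 2.0);
    fragColor = vec4(vec3(parity), 1.0);
}
```

Changing the `fields` uniform changes the board size with no geometry or texture changes at all.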

Using vertex colors and textures in OpenGL?

I am familiar with texture usage in OpenGL. I am also familiar with the coloring and interpolation of colors between vertices. Can the two be used in conjunction with one another? Does doing so tint the texture with the color supplied at each vertex? I am rendering an older game format which supplies both so I am trying to figure out if they both work in conjunction to create variations for the textures.
Cheers.
Yes, you can do that. If you set the colour at a vertex to anything other than white, the texture that is applied will be modulated (multiplied) by that colour. If different vertices of the same polygon have different colours, and there are textures, the colours will be interpolated exactly the same as when there are no textures.
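In a shader-based pipeline, that modulation is just a per-fragment multiply (a sketch; names are illustrative):

```glsl
#version 150
uniform sampler2D tex;
in vec4 vertexColor;   // interpolated per-vertex color
in vec2 texCoord;
out vec4 fragColor;

void main()
{
    // White (1,1,1,1) leaves the texture unchanged; any other color tints it,
    // which matches how older game formats combine vertex colors with textures.
    fragColor = texture(tex, texCoord) * vertexColor;
}
```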

Displaying multiple cubes in OpenGL with shaders

I'm new to OpenGL and shaders. I have a project that involves using shaders to display cubes.
So basically I'm supposed to display eight cubes using a perspective projection at (+-10,+-10,+-10) from the origin each in a different color. In other words, there would be a cube centered at (10, 10, 10), another centered at (10, 10, -10) and so on. There are 8 combinations in (+-10, +-10, +-10). And then I'm supposed to provide a key command 'c' that changes the color of all the cubes each time the key is pressed.
So far I was able to make one cube at the origin. I know I should use this cube and translate it to create the eight cubes but I'm not sure how I would do that. Does anyone know how I would go about with this?
That question is, as mentioned, too broad. But you said that you managed to draw one cube, so I can assume that you can set up the camera and your window. That leaves us with how to render 8 cubes. There are many ways to do this, but I'll mention two very different ones.
Classic:
You make a function that takes two parameters, the center of the cube and its size. With these two you can build up the cube the same way you're doing it now, but with those variables instead of fixed values. For example, the front face would be:
glBegin(GL_TRIANGLE_STRIP);
glVertex3f(center.x-size/2, center.y-size/2, center.z+size/2);
glVertex3f(center.x+size/2, center.y-size/2, center.z+size/2);
glVertex3f(center.x-size/2, center.y+size/2, center.z+size/2);
glVertex3f(center.x+size/2, center.y+size/2, center.z+size/2);
glEnd();
This is just to show how to build the face from variables; you can construct the remaining faces the same way you're doing it now.
Now, you mentioned you want to use shaders. The shader topic is very broad, just like OpenGL itself, but I can give you the idea. In OpenGL 3.2, special shaders called geometry shaders were added. Their purpose is to work with geometry as a whole; in contrast to vertex shaders, which work with just one vertex at a time, or fragment shaders, which work with just one fragment at a time, geometry shaders work with one primitive at a time. If you're rendering triangles, you get all the information about the single triangle currently passing through the pipeline.

That alone wouldn't be anything special, but these shaders don't just modify incoming geometry, they can create new geometry! That's what I do in one of my shader programs, where I render points, but when they pass through the geometry shader those points are expanded into circles. Similarly, you can submit just points, and inside the geometry shader emit whole cubes. The point position serves as the center of the cube, and you pass the size of the cubes in a uniform. If the size may vary per cube, you also need a vertex shader that passes the size from an attribute through to the geometry shader.
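A geometry shader along those lines might look like this (a sketch assuming a pass-through vertex shader; the uniform names are mine):

```glsl
#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 24) out;

uniform mat4  u_viewProj;  // assumed: combined view-projection matrix
uniform float u_size;      // cube edge length

// Emit one cube face as a 4-vertex triangle strip.
void emitFace(vec3 c, vec3 right, vec3 up, vec3 normal)
{
    float h  = u_size * 0.5;
    vec3  fc = c + normal * h;  // face center
    gl_Position = u_viewProj * vec4(fc - right * h - up * h, 1.0); EmitVertex();
    gl_Position = u_viewProj * vec4(fc + right * h - up * h, 1.0); EmitVertex();
    gl_Position = u_viewProj * vec4(fc - right * h + up * h, 1.0); EmitVertex();
    gl_Position = u_viewProj * vec4(fc + right * h + up * h, 1.0); EmitVertex();
    EndPrimitive();
}

void main()
{
    vec3 c = gl_in[0].gl_Position.xyz;  // the incoming point is the cube center
    emitFace(c, vec3( 1,0,0), vec3(0,1, 0), vec3( 0, 0, 1)); // front
    emitFace(c, vec3(-1,0,0), vec3(0,1, 0), vec3( 0, 0,-1)); // back
    emitFace(c, vec3( 0,0,-1), vec3(0,1,0), vec3( 1, 0, 0)); // right
    emitFace(c, vec3( 0,0, 1), vec3(0,1,0), vec3(-1, 0, 0)); // left
    emitFace(c, vec3( 1,0,0), vec3(0,0,-1), vec3( 0, 1, 0)); // top
    emitFace(c, vec3( 1,0,0), vec3(0,0, 1), vec3( 0,-1, 0)); // bottom
}
```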
As for the color problem: if you don't implement a fragment shader, the only thing you need to do is call glColor3f before rendering the cubes. It takes 3 parameters: red, green, and blue. Note that these values don't range from 0 to 255 but from 0 to 1. This can be confusing: with a white background, you might set the color to (200, 10, 10) expecting red cubes and see nothing at all, because the values get clamped and you're in fact rendering white cubes on white. To avoid such errors, I recommend setting the background to something like grey with glClearColor.
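Putting the classic approach together, a minimal sketch of drawing all 8 cubes; drawCube is the hypothetical helper described above, and colorIndex would be advanced in your 'c' key handler:

```c
/* Small palette to cycle through on each 'c' key press; values are 0..1. */
static const float palette[][3] = {
    { 1.0f, 0.0f, 0.0f },   /* red   */
    { 0.0f, 1.0f, 0.0f },   /* green */
    { 0.0f, 0.0f, 1.0f },   /* blue  */
};
static int colorIndex = 0;  /* increment (mod 3) in the 'c' key callback */

void drawAllCubes(void)
{
    glColor3f(palette[colorIndex][0],
              palette[colorIndex][1],
              palette[colorIndex][2]);

    /* Loop over every sign combination of (+-10, +-10, +-10). */
    for (int sx = -1; sx <= 1; sx += 2)
        for (int sy = -1; sy <= 1; sy += 2)
            for (int sz = -1; sz <= 1; sz += 2)
                drawCube(10.0f * sx, 10.0f * sy, 10.0f * sz, 2.0f);
}
```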

Can you render two quads with transparency at the same point?

I'm learning about how to use JOGL and OpenGL to render texture-mapped quads. I have a test program and a test quad, and I figured out how to enable GL_BLEND so that I can specify the alpha value of a vertex to make a quad with a sort of gradient... but now I want this to show through to another textured quad at the same position.
Drawing two quads with the same vertex locations didn't work; only the first quad is rendered. Is this possible then, or will I need to basically construct a custom texture on the fly based on what I want and then draw one quad with that texture? I was really hoping to take advantage of blending here...
Have a look at which glDepthFunc you're using; perhaps you're using GL_LESS/GL_GREATER, and it could work if you use GL_LEQUAL/GL_GEQUAL instead.
It's difficult to make out from the question what exactly you're trying to achieve, but here's a try.
For transparency to work correctly in OpenGL, you need to draw the polygons from the furthest to the nearest relative to the camera. If your scene is static, this is definitely something you can do. But if it's rotating and moving, this is usually not feasible, since you'd have to sort the polygons for each and every frame.
More on this can be found in this FAQ page:
http://www.opengl.org/resources/faq/technical/transparency.htm
For alpha blending, the renderer blends all colors behind the current transparent object (from the camera's point of view) at the time the transparent object is rendered. If the transparent object is rendered first, there is nothing behind it to blend with. If it's rendered second, it will have something to blend it with.
Try rendering your opaque quad first, then your transparent quad second. Also, make sure your opaque quad is slightly behind your transparent quad (relative to the camera) so you don't get z-fighting.
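The draw-order advice above can be sketched like this (the two quad-drawing helpers are hypothetical placeholders for your own code):

```c
/* GL_LEQUAL lets the second quad pass the depth test even when it sits
 * at (nearly) the same depth as the first. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);

drawOpaqueQuad();       /* hypothetical helper: the fully opaque quad first */

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);  /* don't write depth for transparent geometry */
drawTransparentQuad();  /* hypothetical helper: the blended quad second */
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
```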