OpenGL lighting with Cg

I'm already familiar with OpenGL's native lights.
My question is how do I go about rendering lights with Cg?
Do I still need to declare normal OpenGL lights and then use Cg to render them? Or is it all done in Cg?
If you could point me to some reading material about lighting with Cg that would be great also.

Well, after some experimenting, I figured out that you don't need to use any of OpenGL's lighting directives.
All you need to do is write a vertex shader and give it the parameters the lights need: light position and light color. You can also pass reflectance and shininess parameters for each object to mimic the way material properties are declared in OpenGL.
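For a concrete starting point, here is a minimal sketch of such a vertex shader in Cg, in the spirit of the basic lighting examples from The Cg Tutorial. All parameter names (lightPosition, lightColor, Ka, Kd, globalAmbient) are illustrative choices, not a fixed API; the host application passes them in through the Cg runtime each frame.

// Minimal per-vertex diffuse lighting in Cg (illustrative names).
void main(float4 position : POSITION,
          float3 normal   : NORMAL,
          out float4 oPosition : POSITION,
          out float4 oColor    : COLOR,
          uniform float4x4 modelViewProj,
          uniform float3 lightPosition,   // light position in object space (assumed)
          uniform float3 lightColor,
          uniform float3 Ka,              // ambient reflectance of the object
          uniform float3 Kd,              // diffuse reflectance of the object
          uniform float3 globalAmbient)
{
    oPosition = mul(modelViewProj, position);

    float3 N = normalize(normal);
    float3 L = normalize(lightPosition - position.xyz);
    float  diffuse = max(dot(N, L), 0.0);   // Lambertian term

    oColor = float4(Ka * globalAmbient + Kd * lightColor * diffuse, 1.0);
}

Specular and attenuation terms can be added the same way; none of this touches glLight* at all.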

Related

Texture Mapping in GLSL

I am currently working on a project where I have to texture a cube using the reflection of the camera (view) vector about the normal of each fragment.
I have the picture as a sampler2D, and I somehow have to map it onto the cube using that reflection.
The question is: can someone explain how this process works? That would help me finish my project and further my understanding of texturing.
The thing is that I can't use textureCube(), only texture2D(), so that the fragment shader is applicable not only to cubes but to every surface.
Thank you in advance for the answer!
The thing is that I can't use textureCube(), only texture2D(), so that the fragment shader is applicable not only to cubes but to every surface.
Why do you have to do this? Implementing this yourself with texture2D (...) is going to involve multiple texture lookups. textureCube (...) leaves the implementation details hidden and can even filter seamlessly across face edges on supported hardware.
In all cases, the fact that it is called a cubemap means nothing about the surface you are mapping it onto; it is actually the texture itself that is a cube (six 2D textures define the cube's faces).
When you sample a cubemap, you are shooting a ray through this virtual cube, and the color or depth returned comes from where that ray intersects it. The sampled value will come from at least one of the six cube faces, possibly several, depending on the texture filtering setup.
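If textureCube() is an option after all, the classic reflection-mapping fragment shader is only a few lines. A sketch in legacy GLSL, to match the texture2D()/textureCube() functions in the question; the varying and uniform names are assumed:

// Reflect the eye-space view ray about the normal, then shoot it through the cube.
varying vec3 v_normal;      // eye-space normal, passed from the vertex shader
varying vec3 v_position;    // eye-space position, passed from the vertex shader
uniform samplerCube u_envMap;

void main()
{
    vec3 I = normalize(v_position);           // in eye space the camera sits at the origin
    vec3 R = reflect(I, normalize(v_normal));
    gl_FragColor = textureCube(u_envMap, R);  // the ray/cube intersection described above
}

This works on any surface, not just cubes, because R is derived purely from the fragment's normal and view direction.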

OpenGL: Fragment vs Vertex shader for gradients?

I'm new to OpenGL, and I'm trying to understand vertex and fragment shaders. It seems you can use a vertex shader to make a gradient if you define the color you want each of the vertices to be, but it seems you can also make gradients using a fragment shader if you use the gl_FragCoord variable, for example.
My question is, since you seem to be able to make color gradients using both kinds of shaders, which one is better to use? I'm guessing vertex shaders are faster or something since everyone seems to use them, but I just want to make sure.
... since everyone seems to use them
Using vertex and fragment shaders is mandatory in modern OpenGL for rendering absolutely everything.† So everyone uses both. It's the vertex shader's responsibility to compute the color at the vertices, OpenGL's to interpolate it between them, and the fragment shader's to write the interpolated value to the output color attachment.
† OK, you can also use a compute shader with imageStore, but I'm talking about the rasterization pipeline here.
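For completeness, a minimal gradient in that style might look like this (legacy GLSL, matching the built-ins used elsewhere on this page):

// vertex shader: hand the per-vertex color to the rasterizer
varying vec4 v_color;
void main()
{
    v_color = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader: v_color arrives already interpolated across the triangle
varying vec4 v_color;
void main()
{
    gl_FragColor = v_color;
}

The gradient comes from the fixed interpolation between the two stages, not from extra work in either shader.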

GLSL - A do-nothing vertex shader?

So I have an OpenGL program that draws a group of objects. When I draw these objects I want my shader program to be a vertex shader, and a vertex shader exclusively. Basically, I am aiming to adjust the height of the model inside the vertex shader depending on a texture calculation, and that is it. Otherwise I want the object to be drawn as if using naked OpenGL (no shaders). I do not want to implement a fragment shader.
However, I haven't been able to find out how to make a shader program with only a vertex shader and nothing else. Forgetting the part about adjusting my model's height, so far I have:
gl_FrontColor = gl_Color;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
It transforms the object to the correct position all right, but when I do this I lose the texture coordinates and also the lighting information (the normals are lost). What am I missing? How do I write a "do-nothing" vertex shader? That is, a vertex shader you could turn on and off when drawing a textured .obj with normals, with no visible difference?
You can't write a shader with a partial implementation. Either you do everything in a shader or you rely completely on fixed functionality (deprecated) for a given object.
What you can do is this:
glUseProgram(handle);
// draw objects with the shader
glUseProgram(0);
// draw objects with fixed functionality
To expand a little on the entirely correct answer by Abhishek Bansal, what you want to do would be nice but is not actually possible. You're going to have to write your own vertex and fragment shaders.
From your post, by "naked OpenGL" you mean the fixed-function pipeline in OpenGL 1 and 2, which included built-in lighting and texturing. Shaders in OpenGL entirely replace the fixed-function pipeline rather than extending it. And in OpenGL 3+ the old functionality has been removed, so now they're compulsory.
The good news is that vertex/fragment shaders that perform the same function as the original OpenGL lighting and texturing are easy to find and easy to modify for your purpose. The OpenGL Shading Language book by Rost, Licea-Kane, et al. has a whole chapter, "Emulating OpenGL Fixed Functionality". Or you could get a copy of the 5th edition of the OpenGL SuperBible book and its code (not the 6th edition), which came with a bunch of useful predefined shaders. Or if you prefer online resources to books, there are the NeHe tutorials.
Writing shaders seems a bit daunting at first, but it's easier than you might think, and the extra flexibility is well worth it.
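As a rough starting point, a pair that reproduces the basic fixed-function transform and GL_MODULATE texturing might look like this (legacy GLSL; u_texture and u_heightMap are assumed uniform names, and the height tweak is only indicated as a comment):

// vertex shader
uniform sampler2D u_heightMap;   // the height texture (assumed name)
void main()
{
    gl_FrontColor  = gl_Color;
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    vec4 v = gl_Vertex;
    // v.y += texture2D(u_heightMap, gl_MultiTexCoord0.st).r;  // your height adjustment
    gl_Position = gl_ModelViewProjectionMatrix * v;
}

// fragment shader
uniform sampler2D u_texture;     // the object's regular texture (assumed name)
void main()
{
    // equivalent of fixed-function GL_MODULATE texturing
    gl_FragColor = gl_Color * texture2D(u_texture, gl_TexCoord[0].st);
}

Emulating the fixed-function lighting as well means computing the lighting equation in the vertex shader, which the book chapters above walk through.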

point rendering in openGL and GLSL

Question: How do I render points in OpenGL using GLSL?
Info: a while back I made a gravity simulation in Python and used Blender to do the rendering. It looked something like this. As an exercise I'm porting it over to OpenGL and OpenCL. I actually already have it working in OpenCL, I think. It wasn't until I spent a fair bit of time working in OpenCL that I realized it is hard to know whether it is right without being able to see the result. So I started playing around with OpenGL. I followed the OpenGL GLSL tutorial on Wikibooks, which was very informative, but it didn't cover points or particles.
I'm at a loss for where to start. Most tutorials I find are for the default OpenGL pipeline; I want to do it using GLSL. I'm still very new to all this, so forgive my potential idiocy if the answer is right beneath my nose. What I'm looking for is how to make halos around the points that blend into each other. I have a rough idea of how to do this in the fragment shader, but as far as I'm aware I can only touch the pixels that are enclosed by polygons created by my points. I'm sure there is a way around this, it would be crazy for there not to be, but in my newbishness I'm clueless. Can someone give me some direction here? Thanks.
I think what you want is to render the particles as GL_POINTS with GL_POINT_SPRITE enabled, then use your fragment shader to either map a texture in the usual way, or generate the halo gradient procedurally.
When you are rendering in GL_POINTS mode, set gl_PointSize in your vertex shader to control the size of each particle. In the fragment shader, the vec2 built-in gl_PointCoord gives you the fragment's coordinates within the point.
EDIT: Setting gl_PointSize will only take effect if GL_PROGRAM_POINT_SIZE has been enabled. Alternatively, just use glPointSize to set the same size for all points. Also, as of OpenGL 3.2 (core), the GL_POINT_SPRITE flag has been removed and is effectively always on.
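A fragment shader along those lines, fading a halo out from the sprite centre via gl_PointCoord, might look like this (a sketch; the falloff curve and color are arbitrary choices):

void main()
{
    vec2  d    = gl_PointCoord - vec2(0.5);             // offset from the sprite centre
    float halo = 1.0 - smoothstep(0.0, 0.5, length(d)); // 1 at the centre, 0 at the edge
    gl_FragColor = vec4(1.0, 1.0, 1.0, halo);           // white, alpha fades to zero
}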
Simply draw point sprites (using GL_POINT_SPRITE) and use the blending functions GL_SRC_ALPHA and GL_ONE; then the "halos" should be visible. Blending is responsible for the "halos", so look for some more info about that topic.
You also have to disable depth writes.
Here is a link about that: http://content.gpwiki.org/index.php/OpenGL:Tutorials:Tutorial_Framework:Particles
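The state setup that answer describes would look roughly like this on the host side (numParticles is an assumed variable; this targets the compatibility profile, per the note above about GL 3.2 core):

glEnable(GL_POINT_SPRITE);            // not needed (and removed) in 3.2+ core
glEnable(GL_PROGRAM_POINT_SIZE);      // let the vertex shader set gl_PointSize
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);    // additive: overlapping halos add up
glDepthMask(GL_FALSE);                // keep the depth test, disable depth writes
glDrawArrays(GL_POINTS, 0, numParticles);
glDepthMask(GL_TRUE);                 // restore for other geometry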

Basic OpenGL lighting question

I think this is an extremely stupid newbie question, but then I am a newbie in graphics and OpenGL. Having drawn a sphere and put a light source nearby, and also having specified ambient light, I started experimenting with light and material values and came to a surprising conclusion: the colors we specify with glColor* do not matter at all when lighting is enabled. Instead, the equivalent is the material's ambient component. Is this conclusion correct? Thanks
If lighting is enabled, then instead of the vertex color, the material color is used (well, colors; there are several of them for different types of response to light). Material colors are specified with the glMaterial* functions.
If you want to reuse your code, you can use glEnable(GL_COLOR_MATERIAL) and glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE) to have your old glColor* calls mapped to the material color automatically.
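Both options side by side (the color values are illustrative):

/* explicit material color */
GLfloat red[] = { 0.8f, 0.2f, 0.2f, 1.0f };
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, red);

/* or: route glColor* into the ambient and diffuse material automatically */
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
glEnable(GL_COLOR_MATERIAL);
glColor3f(0.8f, 0.2f, 0.2f);   /* now affects the lit color again */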
(And please switch to shaders as fast as possible - the shader approach is both easier and more powerful)
I suppose you aren't using a fragment shader yet. From glprogramming.com:
vertex color =
    the material emission at that vertex +
    the global ambient light scaled by the material's ambient property at that vertex +
    the ambient, diffuse, and specular contributions from all the light sources, properly attenuated
So yes, vertex color is not used.
Edit: You can also look up the GL lighting equation in the GL specification (you have one nearby, do you? ^^)
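For a single light, the sum quoted above works out to something like this (a GLSL-style sketch with illustrative names; N, L, H are the usual normal, light, and half-angle vectors, and att is the attenuation factor):

vec3 vertexColor = emission
                 + globalAmbient * matAmbient
                 + att * ( lightAmbient  * matAmbient
                         + lightDiffuse  * matDiffuse  * max(dot(N, L), 0.0)
                         + lightSpecular * matSpecular * pow(max(dot(N, H), 0.0), shininess) );

With multiple lights enabled, the attenuated per-light term is simply summed over all of them.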