Using both programmable and fixed pipeline functionality in OpenGL

I have a vertex shader that transforms vertices to create a fisheye effect. Is it possible to use just the vertex shader and rely on the fixed pipeline for the fragment portion?
So basically I have an application that doesn't use shaders. I want to apply a fisheye effect using a vertex shader to transform all vertices, and then leave it to the application to take care of lighting, texturing, etc.
If this is not possible, is it possible to get a fisheye effect by messing with the contents of the GL back buffer?
Thanks

If your code uses the fixed-function pipeline, then what you described is a problem - that's why having your graphics code in shaders is good: they let you change anything easily. Remember to use them in your next project. :)
OK, but in this particular case I assume that you don't want to rewrite your whole rendering from scratch with shaders now...
You mentioned you want a "fisheye effect". Seems like you're lucky, because I believe you don't need shaders for that effect! If we're talking about the same effect, then you can achieve it just by replacing OpenGL's fixed-function GL_PROJECTION matrix with a perspective matrix that has a wider field of view.

Yes, it's possible, although some cards (notably ATI) don't support using a vertex shader without a fragment shader.

Related

Get face lighting in "model world" without shaders. OpenGL

I have a rather simple OpenGL workflow. I just use display lists (no shaders attached to them):
glNewList(list, GL.COMPILE);
// add vertices and normals here (glVertex* / glNormal* calls)
glEndList();
glCallList(list);
I want to get some information from OpenGL about the faces of the created object. In particular, I need to know whether they are lit or not at a given moment in time. Something like glReadPixels, but reading from the 3D world rather than from the framebuffer.
Is it possible via gl* functions?
Without using any shaders, it is not possible to query any information about the geometry itself. OpenGL is not designed for geometry processing; it is a rendering API.
There are several ways to achieve what you need by using shaders:
Perform the whole computation in a compute shader (probably the option with best performance).
Use a geometry shader and transform feedback (a rough sketch follows below).
How exactly you would implement it depends on what data you have and what computations should be performed.
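For the geometry shader / transform feedback route, the geometry shader itself could look roughly like the sketch below. This is only an illustration under assumptions of mine: uLightDir and fFaceLit are invented names, the positions are assumed to be in the same space as the light, and on the CPU side you would still have to register fFaceLit with glTransformFeedbackVaryings, bind a feedback buffer, and probably enable GL_RASTERIZER_DISCARD since you don't need the image.
#version 150
// Geometry shader sketch: compute one "lit or not" value per input triangle
// and emit it as a transform feedback varying instead of rasterizing anything.
layout(triangles) in;
layout(points, max_vertices = 1) out;

uniform vec3 uLightDir;   // light direction, same space as the incoming positions (assumed name)
out float fFaceLit;       // captured on the CPU via transform feedback (assumed name)

void main() {
    vec3 a = gl_in[0].gl_Position.xyz;
    vec3 b = gl_in[1].gl_Position.xyz;
    vec3 c = gl_in[2].gl_Position.xyz;
    vec3 n = normalize(cross(b - a, c - a));            // face normal
    fFaceLit = max(dot(n, normalize(uLightDir)), 0.0);  // > 0 means the face points towards the light
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
    EmitVertex();
    EndPrimitive();
}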

GLSL - A do-nothing vertex shader?

So I have an OpenGL program that draws a group of objects. When I draw these objects I want my shader program to consist of a vertex shader and a vertex shader exclusively. Basically, I am aiming to adjust the height of the model inside the vertex shader depending on a texture calculation. And that is it. Otherwise I want the objects to be drawn as if using naked OpenGL (no shaders). I do not want to implement a fragment shader.
However I haven't been able to find out how to make it so I can have a shader program with only a vertex shader and nothing else. Forgetting the part about adjusting my model's height, so far I have:
gl_FrontColor = gl_Color;
gl_Position = modelViewProjectionMain * Position;
It transforms the object to the correct position alright, however when I do this I lose texture coordinates and also lighting information (normals are lost). What am I missing? How do I write a "do-nothing" vertex shader? That is, a vertex shader you could turn off and on when drawing a textured .obj with normals, and there would be no difference?
You can't write a shader with a partial implementation. Either you do everything in a shader or you completely rely on the fixed functionality (deprecated) for a given object.
What you can do is this:
glUseProgram(handle);
// draw objects with the shader
glUseProgram(0);
// draw objects with fixed functionality
To expand a little on the entirely correct answer by Abhishek Bansal, what you want to do would be nice but is not actually possible. You're going to have to write your own vertex and fragment shaders.
From your post, by "naked OpenGL" you mean the fixed-function pipeline in OpenGL 1 and 2, which included built-in lighting and texturing. Shaders in OpenGL entirely replace the fixed-function pipeline rather than extending it. And in OpenGL 3+ the old functionality has been removed, so now they're compulsory.
The good news is that vertex/fragment shaders that perform the same function as the original OpenGL lighting and texturing are easy to find and easy to modify for your purpose. The OpenGL Shading Language book by Rost, Licea-Kane, et al. has a whole chapter, "Emulating OpenGL Fixed Functionality". Or you could get a copy of the 5th edition of the OpenGL SuperBible book and its code (not the 6th edition), which came with a bunch of useful predefined shaders. Or, if you prefer online resources to books, there are the NeHe tutorials.
Writing shaders seems a bit daunting at first, but it's easier than you might think, and the extra flexibility is well worth it.
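For reference, a minimal pair for plain texturing might look like the sketch below, assuming a compatibility-profile context with a single texture bound to unit 0; the sampler name tex0 is my own, and fixed-function lighting is deliberately not reproduced here (that is exactly what the "Emulating OpenGL Fixed Functionality" chapter covers).
// Vertex shader: forward position, color and the unit-0 texture coordinate
// using the (deprecated) compatibility-profile built-ins.
void main() {
    gl_FrontColor  = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: modulate the interpolated color with the texture,
// roughly what the fixed pipeline's GL_MODULATE texture environment did.
uniform sampler2D tex0;   // assumed sampler name, bound to texture unit 0
void main() {
    gl_FragColor = gl_Color * texture2D(tex0, gl_TexCoord[0].st);
}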

Understanding the shader workflow in OpenGL?

I'm having a little bit of trouble conceptualizing the workflow used in a shader-based OpenGL program. While I've never really done any major projects using either the fixed-function or shader-based pipelines, I've started learning and experimenting, and it's become quite clear to me that shaders are the way to go.
However, the fixed-function pipeline makes much more sense to me from an intuitive perspective. Rendering a scene with that method is simple and procedural—like painting a picture. If I want to draw a box, I tell the graphics card to draw a box. If I want a lot of boxes, I draw my box in a loop. The fixed-function pipeline fits well with my established programming tendencies.
These all seem to go out the window with shaders, and this is where I'm hitting a block. A lot of shader-based tutorials show how to, for example, draw a triangle or a cube on the screen, which works fine. However, they don't seem to go into at all how I would apply these concepts in, for example, a game. If I wanted to draw three procedurally generated triangles, would I need three shaders? Obviously not, since that would be infeasible. Still, it's clearly not as simple as just sticking the drawing code in a loop that runs three times.
Therefore, I'm wondering what the "best practices" are for using shaders in game development environments. How many shaders should I have for a simple game? How do I switch between them and use them to render a real scene?
I'm not looking for specifics, just a general understanding. For example, if I had a shader that rendered a circle, how would I reuse that shader to draw different sized circles at different points on the screen? If I want each circle to be a different color, how can I pass some information to the fragment shader for each individual circle?
There is really no conceptual difference between the fixed-function pipeline and the programmable pipeline. The only thing shaders introduce is the ability to program certain stages of the pipeline.
On current hardware you have (for the most part) control over the vertex, tessellation, geometry and fragment stages. Some operations that occur in between and after these stages are still fixed-function, such as depth/stencil testing, blending, perspective divide, etc.
Because shaders are actually nothing more than programs that you drop in to define the input and output of a particular stage, you should think of the input to a fragment shader as coming from the output of one of the previous stages. Vertex outputs are interpolated during rasterization, and these are often what you're dealing with when you have an in variable in a fragment shader.
You can also have program-wide variables, known as uniforms. These variables can be accessed by any stage simply by using the same name in each stage. They do not vary across invocations of a shader, hence the name uniform.
Now you should have enough information to figure out this circle example... you can use a uniform to scale your circle (likely a simple scaling matrix) and you can either rely on per-vertex color or a uniform that defines the color.
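As a rough sketch (all names here, uCenter, uRadius, uColor, aPosition, are my own, not anything OpenGL defines), the circle shaders could look like this; you reuse the very same program for every circle and just change the uniforms with glUniform* before each draw call:
// Vertex shader: place and scale a unit circle using per-draw uniforms.
#version 330 core
uniform vec2 uCenter;   // where to put this circle (set per draw call)
uniform float uRadius;  // how big to draw it
in vec2 aPosition;      // vertices approximating a unit circle, e.g. a triangle fan
void main() {
    gl_Position = vec4(aPosition * uRadius + uCenter, 0.0, 1.0);
}

// Fragment shader: a single flat color, also supplied as a uniform.
#version 330 core
uniform vec4 uColor;
out vec4 fragColor;
void main() {
    fragColor = uColor;
}
Drawing three different circles is then three draw calls with three different sets of uniform values, not three different shaders.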
You don't have shaders that draw circles (OK, you can with the right tricks, but let's forget that for now, because it is misleading and has very rare and specific uses). Shaders are little programs you write to take care of certain stages of the graphics pipeline, and they are more low-level than "drawing a circle".
Generally speaking, every time you make a draw call, you have to tell OpenGL which shaders to use (with a call to glUseProgram). You have to use at least a vertex shader and a fragment shader. The resulting pipeline will be something like:
Vertex shader: the code that is going to be executed for each of the vertices you send to OpenGL. It will be executed once for each index you put in the element array, and it will use as input data the corresponding vertex attributes, such as the vertex position, its normal, its UV coordinates, maybe its tangent (if you are doing normal mapping), or whatever else you are sending to it. Generally you want to do your geometric calculations here. You can also access the uniform variables you set up for your draw call, which are global variables that are not going to change per vertex. A typical uniform variable you might want to use in a vertex shader is the PVM (projection-view-model) matrix. If you don't use tessellation, the vertex shader will write gl_Position, the position the rasterizer is going to use to create fragments. You can also have the vertex shader output other things (such as the UV coordinates, and the normals after you have dealt with their geometry), pass them to the rasterizer, and use them later.
Rasterization
Fragment shader: the code that is going to be executed for each fragment (for each pixel, if that is clearer). Generally you do texture sampling and lighting calculations here. You will use the data coming from the vertex shader and the rasterizer, such as the normals (to evaluate diffuse and specular terms) and the UV coordinates (to fetch the right colors from the textures). The textures will be uniforms (samplers), and probably also the parameters of the lights you are evaluating.
Depth test, stencil test (which you can move before the fragment shader with the early fragment test optimization: http://www.opengl.org/wiki/Early_Fragment_Test ).
Blending.
I suggest you look at this nice program for developing simple shaders, http://sourceforge.net/projects/quickshader/ , which has very good examples, including some more advanced things you won't find in every tutorial.

OpenGL - Fixed pipeline shader defaults (Mimic fixed pipeline with shaders)

Can anyone provide me with shaders that are similar to the fixed-function pipeline?
I need the default fragment shader the most, because I found a similar vertex shader online. But if you have a pair, that should be fine!
I want to use the fixed pipeline, but have the flexibility of shaders, so I need similar shaders so I'll be able to mimic the functionality of the fixed pipeline.
Thank you very much!
I'm new here so if you need more information tell me:D
This is what I would like to replicate: (texture unit 0)
functionality of glTranslatef
functionality of glColor4f
functionality of glTexCoord2f
functionality of glVertex2f
functionality of glOrtho (I know it does some magic stuff behind the scenes with the shader)
That's it. That is all the functionality I would like to replicate from the fixed-function pipeline. Can anyone show me an example of how to replicate those things with shaders?
You have a couple of issues here that will make implementing this using shaders more difficult.
First and foremost, in addition to using fixed-function features you are also using immediate mode. Before you can make the transition to shaders, you should switch to vertex arrays. You could write a class that takes immediate mode-like commands that would come between glBegin (...) and glEnd (...) and pushes them into a vertex array if you absolutely need to structure your software this way.
As for glTranslatef (...) and glOrtho (...), these are nothing particularly special. They create translation matrices and orthographic projection matrices and multiply the "current" matrix by them. It is unclear what language you are using, but one possible replacement for these functions could come from using a library like glm (C++).
The biggest obstacle will be getting rid of the "current" state mentality that comes with thinking in terms of the fixed-function pipeline. With shaders you have full control over just about every state, and you don't have to use functions that multiply the "current" matrix or set the "current" color. You can simply pass the exact matrix or color value that you need to your shader. This is an altogether better way of approaching these problems, and is why I honestly think you should ditch the fixed-function approach altogether instead of trying to emulate it.
This is why your desire to "use the fixed-function pipeline but have the flexibility of shaders" fundamentally makes very little sense.
Having said all that, in OpenGL compatibility mode, there are reserved words in GLSL that refer to many of the fixed-function constructs. These include things like gl_MultiTexCoord<N>, gl_ModelViewProjectionMatrix, etc. They can be used as a transitional aid, but really should not be relied upon in the long run.
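The state-free alternative could look roughly like the sketch below. Every name in it (uProjection, uModel, uColor, uTexture, aPosition, aTexCoord) is an assumption of mine: uProjection takes over the role of glOrtho, uModel the role of glTranslatef, uColor replaces glColor4f, and the two attributes replace glVertex2f and glTexCoord2f for texture unit 0.
// Vertex shader
#version 330 core
uniform mat4 uProjection;   // e.g. an orthographic matrix built with glm::ortho
uniform mat4 uModel;        // e.g. a translation matrix built with glm::translate
in vec2 aPosition;          // what glVertex2f used to supply
in vec2 aTexCoord;          // what glTexCoord2f used to supply
out vec2 vTexCoord;
void main() {
    vTexCoord = aTexCoord;
    gl_Position = uProjection * uModel * vec4(aPosition, 0.0, 1.0);
}

// Fragment shader
#version 330 core
uniform vec4 uColor;        // what glColor4f used to supply
uniform sampler2D uTexture; // sampler for texture unit 0
in vec2 vTexCoord;
out vec4 fragColor;
void main() {
    fragColor = uColor * texture(uTexture, vTexCoord);
}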
See also this question: OpenGL Fixed function shader implementation, where they point to a few web resources.
The OpenGL ES 2 book contains an implementation of the OpenGL ES 1.1 fixed function pipeline in Chapter 8 (vertex shader) and Chapter 10 (fragment shader).
Unfortunately, these shaders do not seem to be included in the book's sample code. On the other hand, reading the book and typing in the code is certainly worthwhile.

point rendering in openGL and GLSL

Question: How do I render points in openGL using GLSL?
Info: a while back I made a gravity simulation in Python and used Blender to do the rendering. It looked something like this. As an exercise I'm porting it over to OpenGL and OpenCL. I actually already have it working in OpenCL, I think. It wasn't until I spent a fair bit of time working in OpenCL that I realized that it is hard to know if this is right without being able to see the result. So I started playing around with OpenGL. I followed the OpenGL GLSL tutorial on Wikibooks, very informative, but it didn't cover points or particles.
I'm at a loss for where to start. Most tutorials I find are for the default OpenGL pipeline. I want to do it using GLSL. I'm still very new to all this, so forgive my potential idiocy if the answer is right beneath my nose. What I'm looking for is how to make halos around the points that blend into each other. I have a rough idea of how to do this in the fragment shader, but as far as I'm aware I can only grab the pixels that are enclosed by polygons created by my points. I'm sure there is a way around this, it would be crazy for there not to be, but in my newbishness I'm clueless. Can someone give me some direction here? Thanks.
I think what you want is to render the particles as GL_POINTS with GL_POINT_SPRITE enabled, then use your fragment shader to either map a texture in the usual way, or generate the halo gradient procedurally.
When you are rendering in GL_POINTS mode, set gl_PointSize in your vertex shader to set the size of the particle. The vec2 variable gl_PointCoord will give you the coordinates of your fragment in the fragment shader.
EDIT: Setting gl_PointSize will only take effect if GL_PROGRAM_POINT_SIZE has been enabled. Alternatively, just use glPointSize to set the same size for all points. Also, as of OpenGL 3.2 (core), the GL_POINT_SPRITE flag has been removed and is effectively always on.
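A rough sketch of what that could look like in a 3.x-style context (uMvp, aPosition, the 32-pixel size and the plain white color are all arbitrary assumptions of mine):
// Vertex shader: transform the particle and set its on-screen size in pixels.
#version 330 core
uniform mat4 uMvp;     // assumed name for your model-view-projection matrix
in vec3 aPosition;
void main() {
    gl_Position  = uMvp * vec4(aPosition, 1.0);
    gl_PointSize = 32.0;   // could instead be computed from the distance to the camera
}

// Fragment shader: gl_PointCoord runs from (0,0) to (1,1) across the sprite;
// fading alpha with distance from the centre gives a soft halo.
#version 330 core
out vec4 fragColor;
void main() {
    float d = length(gl_PointCoord - vec2(0.5));
    float halo = 1.0 - smoothstep(0.0, 0.5, d);   // 1 at the centre, 0 at the edge
    fragColor = vec4(1.0, 1.0, 1.0, halo);
}
Combined with additive blending and depth writes disabled (as the answer below suggests), overlapping halos will add up and blend into each other.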
Simply draw point sprites (using GL_POINT_SPRITE) and use the blending functions GL_SRC_ALPHA and GL_ONE, and then the "halos" should be visible. Blending is responsible for the "halos", so look for some more info about that topic.
Also, you have to disable depth writes.
Here is a link about that: http://content.gpwiki.org/index.php/OpenGL:Tutorials:Tutorial_Framework:Particles