I have a 3D object I've created in Maya (and exported to OBJ and MTL files) and I've created a model viewer app in OGL to view it. If my assumptions are correct (and you know what they say about assumptions...) then because I haven't specified my own GLSL shader, OGL should be using the FFP to determine the fragment colour for each pixel? Is this correct?
In my understanding, the FFP must implement some sort of default shader, because it is able to display specular highlights, reflections, etc. Can someone give me some information on this and perhaps tell me how this shading is done?
I understand that the material definitions are used to set the properties of the objects' materials, but I'm unsure how the final effects of the lights interacting with the materials are displayed in the OGL window without manually specifying a shader (hence my belief that there is some default shader).
In the case of 3rd-generation and later GPUs, all your assumptions are indeed correct. As long as no custom shader is specified, the driver provides the GPU with a default shader mimicking the FFP.
The default shader usually implements a Phong lighting model, with the exact details depending on the texture environment and other fixed-function state that has been set.
For older GPU generations the fixed function pipeline is hardwired.
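For illustration only (this is not any driver's actual code), a sketch of the kind of per-vertex Blinn-Phong vertex shader such a default amounts to, written against the compatibility-profile GLSL built-ins and assuming a single directional light, could look like this:

// Sketch of a fixed-function-style vertex shader (compatibility profile,
// single directional light, per-vertex Blinn-Phong). Not actual driver code.
#version 120

void main()
{
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 l = normalize(gl_LightSource[0].position.xyz);  // directional light
    vec3 e = vec3(0.0, 0.0, 1.0);                        // infinite viewer in eye space
    vec3 h = normalize(l + e);                           // half vector for the specular term

    vec4 ambient  = gl_FrontLightProduct[0].ambient;
    vec4 diffuse  = gl_FrontLightProduct[0].diffuse  * max(dot(n, l), 0.0);
    vec4 specular = gl_FrontLightProduct[0].specular *
                    pow(max(dot(n, h), 0.0), gl_FrontMaterial.shininess);

    gl_FrontColor  = gl_FrontLightModelProduct.sceneColor + ambient + diffuse + specular;
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
}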
Is it possible for an OpenGL geometry shader to access the current settings for glFrontFace and glCullFace, and whether face culling is enabled? I have a geometry shader that renders normals for the vertices of triangles, and I would like to render them only for triangles that will not be culled. What I would like to have is global variables, similar to built-in uniform variables, that tell whether glFrontFace was given GL_CCW or GL_CW, whether glCullFace is set to GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK, and whether face culling is enabled.
A workaround is for my C++ program to set the values as uniform variables in the shader program, but it would be more general, and make the shader program more easily usable, if it could query the status of culling, glFrontFace and glCullFace settings from OpenGL.
Please note, I do not want the gl_FrontFacing variable that is available to the fragment shader. Instead, the geometry shader needs to be able to access the values to know whether to generate a line representing a normal at the vertices of the triangle.
No, these are all the inputs you have in geometry shaders: https://www.khronos.org/opengl/wiki/Geometry_Shader#Inputs
You will have to use a uniform.
Think of the functions that change OpenGL server state as interfaces to uniforms used internally by the fixed-pipeline shaders. Once you implement your own pipeline, you have to supply your own uniforms. One way to switch from the fixed pipeline to a flexible one within pre-existing code is to implement your own emulation of the OpenGL functions, although it would not be a very efficient one.
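For example, a hedged sketch of that uniform-based workaround could look like the following; the uniform names, and the way the normal line is emitted, are assumptions for illustration rather than code from the question:

// Geometry shader sketch: the application mirrors its face-culling state into
// uniforms, and normals are only emitted for triangles that would survive culling.
#version 150

layout(triangles) in;
layout(line_strip, max_vertices = 2) out;

in vec3 vNormal[];               // per-vertex normals from the vertex shader (assumed name)

uniform bool  uCullingEnabled;   // glIsEnabled(GL_CULL_FACE)
uniform int   uFrontFace;        // 0 = GL_CCW, 1 = GL_CW
uniform int   uCullFace;         // 0 = GL_BACK, 1 = GL_FRONT, 2 = GL_FRONT_AND_BACK
uniform float uNormalLength;

void main()
{
    // Winding of the triangle in normalized device coordinates.
    vec2 a = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;
    vec2 b = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;
    vec2 c = gl_in[2].gl_Position.xy / gl_in[2].gl_Position.w;
    float signedArea = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);

    bool frontFacing = (uFrontFace == 0) ? (signedArea > 0.0) : (signedArea < 0.0);
    bool culled = uCullingEnabled &&
                  (uCullFace == 2 ||
                   (uCullFace == 0 && !frontFacing) ||
                   (uCullFace == 1 &&  frontFacing));
    if (culled)
        return;

    // Emit one normal line at the first vertex; offsetting in clip space is a
    // rough approximation that is good enough for a debug visualization.
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();
    gl_Position = gl_in[0].gl_Position + vec4(vNormal[0] * uNormalLength, 0.0);
    EmitVertex();
    EndPrimitive();
}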
So I have an OpenGL program that draws a group of objects. When I draw these objects I want my shader program to consist of a vertex shader and a vertex shader exclusively. Basically, I am aiming to adjust the height of the model inside the vertex shader depending on a texture calculation, and that is it. Otherwise I want the object to be drawn as if using naked OpenGL (no shaders). I do not want to implement a fragment shader.
However, I haven't been able to find out how to have a shader program with only a vertex shader and nothing else. Forgetting the part about adjusting my model's height, so far I have:
gl_FrontColor = gl_Color;
gl_Position = modelViewProjectionMain * Position;
It transforms the object to the correct position all right, but when I do this I lose the texture coordinates and also the lighting information (the normals are lost). What am I missing? How do I write a "do-nothing" vertex shader? That is, a vertex shader you could turn on and off when drawing a textured .obj with normals, and there would be no difference?
You can't write a shader with a partial implementation. Either you do everything in a shader or you rely completely on fixed functionality (deprecated) for a given object.
What you can do is this:
glUseProgram(handle);
// draw objects with the shader
glUseProgram(0);
// draw objects with fixed functionality
To expand a little on the entirely correct answer by Abhishek Bansal, what you want to do would be nice but is not actually possible. You're going to have to write your own vertex and fragment shaders.
From your post, by "naked OpenGL" you mean the fixed-function pipeline in OpenGL 1 and 2, which included built-in lighting and texturing. Shaders in OpenGL entirely replace the fixed-function pipeline rather than extending it. And in OpenGL 3+ core profile the old functionality has been removed, so now they're compulsory.
The good news is that vertex/fragment shaders that perform the same function as the original OpenGL lighting and texturing are easy to find and easy to modify for your purpose. The OpenGL Shading Language book by Rost, Licea-Kane, et al. has a whole chapter, "Emulating OpenGL Fixed Functionality". Or you could get a copy of the 5th edition of the OpenGL SuperBible book and its code (not the 6th edition), which came with a bunch of useful predefined shaders. Or, if you prefer online resources to books, there are the NeHe tutorials.
Writing shaders seems a bit daunting at first, but it's easier than you might think, and the extra flexibility is well worth it.
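As a starting point, a near-pass-through vertex shader in compatibility-profile GLSL could forward the color and texture coordinates and use ftransform() for the position; this is only a sketch, and note that the fixed-function lighting still has to be re-implemented (see the references above):

// Near-"do-nothing" vertex shader sketch (compatibility profile).
// It preserves the vertex color and texture unit 0 coordinates and reproduces
// the fixed-function vertex transform, but it does NOT reproduce fixed-function
// lighting; that has to be emulated explicitly.
#version 120

void main()
{
    gl_FrontColor  = gl_Color;
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    gl_Position    = ftransform();   // invariant fixed-function position
}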
I'm having a little bit of trouble conceptualizing the workflow used in a shader-based OpenGL program. While I've never really done any major projects using either the fixed-function or shader-based pipelines, I've started learning and experimenting, and it's become quite clear to me that shaders are the way to go.
However, the fixed-function pipeline makes much more sense to me from an intuitive perspective. Rendering a scene with that method is simple and procedural—like painting a picture. If I want to draw a box, I tell the graphics card to draw a box. If I want a lot of boxes, I draw my box in a loop. The fixed-function pipeline fits well with my established programming tendencies.
These all seem to go out the window with shaders, and this is where I'm hitting a block. A lot of shader-based tutorials show how to, for example, draw a triangle or a cube on the screen, which works fine. However, they don't really go into how I would apply these concepts in, for example, a game. If I wanted to draw three procedurally generated triangles, would I need three shaders? Obviously not, since that would be infeasible. Still, it's clearly not as simple as just sticking the drawing code in a loop that runs three times.
Therefore, I'm wondering what the "best practices" are for using shaders in game development environments. How many shaders should I have for a simple game? How do I switch between them and use them to render a real scene?
I'm not looking for specifics, just a general understanding. For example, if I had a shader that rendered a circle, how would I reuse that shader to draw different sized circles at different points on the screen? If I want each circle to be a different color, how can I pass some information to the fragment shader for each individual circle?
There is really no conceptual difference between the fixed-function pipeline and the programmable pipeline. The only thing shaders introduce is the ability to program certain stages of the pipeline.
On current hardware you have (for the most part) control over the vertex, tessellation, geometry and fragment stages. Some operations that occur in between and after these stages are still fixed-function, such as depth/stencil testing, blending, the perspective divide, etc.
Because shaders are actually nothing more than programs that you drop in to define the input and output of a particular stage, you should think of the input to a fragment shader as coming from the output of one of the previous stages. Vertex outputs are interpolated during rasterization, and those interpolated values are often what you're dealing with when you have an in variable in a fragment shader.
You can also have program-wide variables, known as uniforms. These variables can be accessed by any stage simply by using the same name in each stage. They do not vary across invocations of a shader, hence the name uniform.
Now you should have enough information to figure out this circle example... you can use a uniform to scale your circle (likely a simple scaling matrix) and you can either rely on per-vertex color or a uniform that defines the color.
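For example (a sketch with made-up names), one unit-circle mesh can be reused for every circle by passing the per-circle position, size and color as uniforms and updating them between draw calls:

// Vertex shader sketch: one unit-circle mesh, repositioned and scaled per draw call.
// uCenter and uRadius are assumed to be given in clip-space units.
#version 330 core
layout(location = 0) in vec2 aPosition;   // unit-circle vertex
uniform vec2  uCenter;                    // per-circle position (made-up name)
uniform float uRadius;                    // per-circle size (made-up name)
void main()
{
    gl_Position = vec4(aPosition * uRadius + uCenter, 0.0, 1.0);
}

// Fragment shader sketch: per-circle color supplied as a uniform.
#version 330 core
uniform vec4 uColor;                      // per-circle color (made-up name)
out vec4 fragColor;
void main()
{
    fragColor = uColor;
}

Between draw calls you would change only the uniform values (with glUniform2f, glUniform1f, glUniform4f) and issue the draw again; the shader program itself stays the same.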
You don't have shaders that draw circles (OK, you could with the right tricks, but let's forget that for now, because it is misleading and has very rare and specific uses). Shaders are little programs you write to take care of certain stages of the graphics pipeline, and they are more specific than "drawing a circle".
Generally speaking, every time you make a draw call, you have to tell OpenGL which shaders to use (with a call to glUseProgram). You have to use at least a Vertex Shader and a Fragment Shader. The resulting pipeline will be something like the following (a minimal sketch of such a vertex/fragment pair follows the list):
Vertex Shader: the code that is going to be executed for each of the vertices you send to OpenGL. It will be executed for each index you send in the element array, and it will use as input data the corresponding vertex attributes, such as the vertex position, its normal, its uv coordinates, maybe its tangent (if you are doing normal mapping), or whatever else you are sending to it. Generally you want to do your geometric calculations here. You can also access the uniform variables you set up for your draw call, which are global variables that are not going to change per vertex. A typical uniform variable you might want to use in a vertex shader is the PVM matrix. If you don't use tessellation, the vertex shader will write gl_Position, the position the rasterizer is going to use to create fragments. You can also have the vertex shader output other things (such as the uv coordinates, or the normals after you have dealt with their geometry), hand them to the rasterizer, and use them later.
Rasterization
Fragment Shader: the code that is going to be executed for each fragment (for each pixel, if that is clearer). Generally you do texture sampling and lighting calculations here. You will use the data coming from the vertex shader and the rasterizer, such as the normals (to evaluate the diffuse and specular terms) and the uv coordinates (to fetch the right colors from the textures). The textures are going to be uniforms (samplers), and probably also the parameters of the lights you are evaluating.
Depth Test, Stencil Test (which you can move before the fragment shader with the early fragment test optimization: http://www.opengl.org/wiki/Early_Fragment_Test ).
Blending.
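A minimal sketch of such a vertex/fragment pair (attribute and uniform names are illustrative, not required by OpenGL) could look like this:

// Vertex shader: transforms the position with the PVM matrix and forwards
// the normal and uv coordinates to the rasterizer.
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aUV;

uniform mat4 uPVM;           // projection * view * model
uniform mat3 uNormalMatrix;  // normal transform matching the model matrix

out vec3 vNormal;
out vec2 vUV;

void main()
{
    vNormal     = uNormalMatrix * aNormal;
    vUV         = aUV;
    gl_Position = uPVM * vec4(aPosition, 1.0);
}

// Fragment shader: samples the texture and applies a single diffuse term.
#version 330 core
in vec3 vNormal;
in vec2 vUV;

uniform sampler2D uDiffuseTexture;
uniform vec3      uLightDirection;   // direction the light travels, same space as vNormal

out vec4 fragColor;

void main()
{
    float diffuse = max(dot(normalize(vNormal), normalize(-uLightDirection)), 0.0);
    fragColor = texture(uDiffuseTexture, vUV) * vec4(vec3(diffuse), 1.0);
}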
I suggest you look at this nice program for developing simple shaders, http://sourceforge.net/projects/quickshader/ , which has very good examples, including some more advanced things you won't find in every tutorial.
Can anyone provide me with shaders that are similar to the fixed-function pipeline?
I need the default fragment shader the most, because I found a similar vertex shader online. But if you have a pair, that would be fine!
I want to use the fixed pipeline but have the flexibility of shaders, so I need similar shaders to be able to mimic the functionality of the fixed pipeline.
Thank you very much!
I'm new here, so if you need more information tell me :D
This is what I would like to replicate (using texture unit 0):
functionality of glTranslatef
functionality of glColor4f
functionality of glTexCoord2f
functionality of glVertex2f
functionality of glOrtho (I know it does some magic stuff behind the scenes with the shader)
That's it. That is all the functionality I would like to replicate from the fixed-function pipeline. Can anyone show me an example of how to replicate those things with shaders?
You have a couple of issues here that will make implementing this using shaders more difficult.
First and foremost, in addition to using fixed-function features you are also using immediate mode. Before you can make the transition to shaders, you should switch to vertex arrays. You could write a class that takes immediate-mode-style commands, like the ones that would come between glBegin (...) and glEnd (...), and pushes them into a vertex array, if you absolutely need to structure your software this way.
As for glTranslatef (...) and glOrtho (...), these are nothing particularly special. They create a translation matrix and an orthographic projection matrix, respectively, and multiply the "current" matrix by it. It is unclear what language you are using, but one possible replacement for these functions would be a library like glm (C++).
The biggest obstacle will be getting rid of the "current" state mentality that comes with thinking in terms of the fixed-function pipeline. With shaders you have full control over just about every state, and you don't have to use functions that multiply the "current" matrix or set the "current" color. You can simply pass the exact matrix or color value that you need to your shader. This is an altogether better way of approaching these problems, and is why I honestly think you should ditch the fixed-function approach altogether instead of trying to emulate it.
This is why your desire to "use the fixed-function pipeline but have the flexibility of shaders" fundamentally makes very little sense.
Having said all that, in OpenGL compatibility mode, there are reserved words in GLSL that refer to many of the fixed-function constructs. These include things like gl_MultiTexCoord<N>, gl_ModelViewProjectionMatrix, etc. They can be used as a transitional aid, but really should not be relied upon in the long run.
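As a concrete illustration of the point above (names are mine, not any standard), here is a rough sketch of a shader pair replacing the listed calls: glOrtho and glTranslatef become uniform matrices (built, for example, with glm::ortho and glm::translate), glVertex2f and glTexCoord2f become vertex attributes, and glColor4f plus texture unit 0 become a uniform color modulating a sampler:

// Vertex shader sketch: replaces glOrtho/glTranslatef (uniform matrices),
// glVertex2f (aPosition) and glTexCoord2f (aTexCoord).
#version 330 core
layout(location = 0) in vec2 aPosition;
layout(location = 1) in vec2 aTexCoord;

uniform mat4 uProjection;  // e.g. glm::ortho(...)
uniform mat4 uModelView;   // e.g. glm::translate(...)

out vec2 vTexCoord;

void main()
{
    vTexCoord   = aTexCoord;
    gl_Position = uProjection * uModelView * vec4(aPosition, 0.0, 1.0);
}

// Fragment shader sketch: texture unit 0 modulated by a constant color,
// roughly what GL_MODULATE did with glColor4f in the fixed pipeline.
#version 330 core
in vec2 vTexCoord;

uniform sampler2D uTexture0;  // sampler bound to texture unit 0
uniform vec4      uColor;     // replaces the "current color" set by glColor4f

out vec4 fragColor;

void main()
{
    fragColor = texture(uTexture0, vTexCoord) * uColor;
}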
See also this question: OpenGL Fixed function shader implementation, where they point to a few web resources.
The OpenGL ES 2 book contains an implementation of the OpenGL ES 1.1 fixed function pipeline in Chapter 8 (vertex shader) and Chapter 10 (fragment shader).
Unfortunately, these shaders do not seem to be included in the book's sample code. On the other hand, reading the book and typing in the code yourself is certainly worthwhile.
I was wondering if there is support in the newer shader models to read back a pixel value from the target framebuffer. I assume that this is already done in later (non-programmable) stages of the drawing pipeline, which made me hope that this feature might have been added to the programmable pipeline.
I am aware that it is possible to draw to a texture bound framebuffer and then send this texture to the shader, I was just hoping for a more elegant way to achieve the same functionality.
As Andrew notes, framebuffer access is logically a separate stage from the fragment shader, so reading the framebuffer in the fragment shader is impossible. The reason for this (to answer Andrew's question) is a combination of performance and the ordering requirements of the graphics pipeline. The way the rendering pipeline is defined, framebuffer blending operations MUST occur in the same order as the triangles/primitives that went into the beginning of the pipeline. The fragment shaders, on the other hand, can run in any order. So by keeping them as separate stages, the GPU is free to run fragment shaders as fast as it can, as their inputs become available, without having to synchronize between them. As long as it maintains enough buffer space to hold the outputs of the fragment shaders, so that they can be accumulated and the framebuffer blends and writes can occur in order, all is well, because the results of any given fragment shader are not visible until after the blending stage.
If there was a way for the fragment shader to read the framebuffer, it would require some sort of synchronization to ensure that those reads happen in order, thus greatly slowing things down.
No. As you mention, rendering to a texture is the way to achieve that functionality.
If you take a look at a block diagram of a GPU pipeline, you'll see that the blending stage - which is what combines fragment shader output with the framebuffer - is separate from the fragment shader and is fixed-function.
I'm not a GPU designer - so I can only speculate the reason for this. Presumably it is to keep framebuffer access fast and insulate the fragment shader stage from the frame buffer so that it can be better parallelised. There are probably also issues regarding multi-sampling, and so on.
(Not to mention that fixed-function blending is "good enough" in most cases.)
Actually I think this is now doable with Direct3D 11 SM 5.0 (I didn't test it though).
You can bind a UAV to a pixel shader (SM 5.0), allowing read and write operations on it, using the method OMSetRenderTargetsAndUnorderedAccessViews.
In that case the back buffer of the swap chain into which you render has to be created with the flag DXGI_USAGE_UNORDERED_ACCESS (I guess).
This is used in the DirectX SDK OIT11 sample.
It is possible to read back the contents of the framebuffer in the fragment shader with the shader framebuffer fetch extension (EXT_shader_framebuffer_fetch). The support can be added to the GPU with some performance loss. In fact, these days I'm working on adding support for this extension to the OpenGL ES 2.0 driver of a well-known GPU brand in the consumer electronics market.
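With that extension, a fragment shader can read the destination color directly; a minimal OpenGL ES 2.0 sketch (assuming the extension is exposed by the driver, and with a made-up uTint uniform) looks like this:

// Fragment shader sketch using framebuffer fetch (OpenGL ES 2.0).
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;

uniform vec4 uTint;   // made-up uniform for illustration

void main()
{
    // gl_LastFragData[0] holds the color currently in the framebuffer
    // at this fragment's position.
    gl_FragColor = mix(gl_LastFragData[0], uTint, 0.5);
}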
You can draw to a texture TEX (using a render target view) and then bind that as an input to another shader (using a shader resource view). TEX is then a pseudo-framebuffer.