I just found out that OpenGL has built-in lighting, which can be enabled with glEnable(GL_LIGHTING);. Why do tutorials and the like use a custom one made with shaders? There must be a reason. What does built-in lighting do worse?
OpenGL has built-in lighting
No, it does not. All such things were removed from OpenGL in 3.1 and put into the compatibility profile, which is not required to be supported.
What does built-in lighting do worse?
Everything. It does everything worse.
Fixed-function lighting is per-vertex, while shader-based lighting can be whatever you want: per-vertex, per-fragment, or anything else. Fixed-function lighting doesn't work with deferred rendering, lighting pre-passes, or various other rendering techniques. Fixed-function lighting can't handle HDR or gamma-correct illumination.
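For contrast, here is a minimal per-fragment Blinn-Phong sketch in GLSL; the uniform and varying names are illustrative, and it assumes a single point light:

```glsl
#version 330 core
in vec3 vNormal;     // interpolated per fragment by the rasterizer
in vec3 vWorldPos;
uniform vec3 uLightPos, uViewPos, uLightColor, uAlbedo;
out vec4 fragColor;

void main() {
    vec3 N = normalize(vNormal);
    vec3 L = normalize(uLightPos - vWorldPos);
    vec3 V = normalize(uViewPos - vWorldPos);
    vec3 H = normalize(L + V);                   // Blinn-Phong halfway vector
    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(N, H), 0.0), 32.0); // shininess hard-coded here
    fragColor = vec4((diff * uAlbedo + vec3(spec)) * uLightColor, 1.0);
}
```

The fixed-function pipeline would evaluate essentially the same math only at the vertices and interpolate the result.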
There is nothing that fixed-function lighting can do that user-defined lighting cannot, while there is a ton of stuff that user-defined lighting can do that fixed-function can't.
It is good that modern OpenGL tutorials don't teach that outdated garbage.
Related
I know that there are only two shading modes in [openGL], which are GL_FLAT and GL_SMOOTH.
I just want to know if there are ways to achieve Gouraud shading and Phong shading using only the above shading modes in [openGL].
The functionality you are talking about is waaaay too outdated. GL_FLAT and GL_SMOOTH are modes of the fixed-function pipeline, which used Blinn-Phong lighting.
Both modes yield the same lighting model, but with GL_FLAT the values for pixels inside a polygon are not interpolated. So each polygon rendered with GL_FLAT is lit uniformly and looks flat.
Answering your question, you can't get anything except Blinn-Phong with GL_SMOOTH and GL_FLAT.
I know that there are only two shading modes
That hasn't been true for about 15 years.
At present the fixed-function pipeline is deprecated. Please use shaders and implement any lighting you want, unless you are forced to use legacy GL by threats of violence.
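Incidentally, the modern equivalent of GL_FLAT is the flat interpolation qualifier in GLSL. A sketch (the attribute and varying names are mine):

```glsl
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aColor;
uniform mat4 uMVP;

flat out vec3 vFlatColor; // no interpolation: taken from the provoking vertex, like GL_FLAT
out vec3 vSmoothColor;    // interpolated across the triangle, like GL_SMOOTH

void main() {
    vFlatColor   = aColor;
    vSmoothColor = aColor;
    gl_Position  = uMVP * vec4(aPosition, 1.0);
}
```

The matching fragment shader must declare vFlatColor with the same flat qualifier.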
I am currently drawing some geometry with "modern OpenGL" using GL_QUADS and GL_QUAD_STRIP primitives.
I have run into strange artifacts: my quads are actually tessellated, with visible triangles. I hope you can see the white lines across the quads.
Any ideas?
Modern OpenGL (the 3.1+ core profile) does not support GL_QUADS or GL_QUAD_STRIP. Check, for example, here for the allowed primitive types.
The culprit most likely is that you enabled polygon smooth antialiasing (still supported in the compatibility profile), i.e. did glEnable(GL_POLYGON_SMOOTH) plus some blending function. Artifacts like the one you observe are the reason nobody ever really bothered with that method of antialiasing.
However, it may very well be that you enabled antialiasing in your graphics driver settings and the AA method used doesn't play nicely with your program.
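If polygon smoothing is the cause, a setup like the following (a hypothetical reconstruction of what the code might contain) would produce exactly those seams, and removing or disabling it should make them disappear:

```cpp
// The classic polygon-smoothing setup that causes seams between triangles:
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);

// The fix: turn it off (and use a multisampled framebuffer instead if you need AA).
glDisable(GL_POLYGON_SMOOTH);
```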
Is there equivalent functionality to DirectX's texture blending? http://msdn.microsoft.com/en-us/library/windows/desktop/bb206241(v=vs.85).aspx
It basically blends several textures together before applying the result to the mesh.
From scanning through that page, I believe you can do all of that in a fragment shader. You can bind multiple textures, sample them all in your shader, and combine the results to your heart's desire.
It looks similar to functionality that OpenGL used to have in the fixed function pipeline. My old version of the red book (OpenGL Programming Guide) has chapters on "Multitexturing" and "Texture Combiner Functions". This is still available if you use the compatibility profile. But IMHO, this is a great example of where squeezing certain kinds of functionality into the fixed pipeline looked very cumbersome, while doing the same thing in shaders is much easier and more flexible.
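For instance, blending two textures in a fragment shader might look like this; it's only a sketch, and the sampler names and the uBlend factor are assumptions:

```glsl
#version 330 core
in vec2 vTexCoord;
uniform sampler2D uTex0;  // bound to texture unit 0
uniform sampler2D uTex1;  // bound to texture unit 1
uniform float uBlend;     // 0.0 = only uTex0, 1.0 = only uTex1
out vec4 fragColor;

void main() {
    vec4 c0 = texture(uTex0, vTexCoord);
    vec4 c1 = texture(uTex1, vTexCoord);
    fragColor = mix(c0, c1, uBlend); // modulate, add, etc. are equally trivial
}
```

Each of the fixed combiner operations reduces to a line or two of shader code like this.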
Can anyone provide me with shaders that are similar to the fixed-function pipeline?
I need the default fragment shader most of all, because I found a similar vertex shader online. But if you have a pair, that should be fine!
I want to use the fixed pipeline but have the flexibility of shaders, so I need similar shaders that will let me mimic the functionality of the fixed pipeline.
Thank you very much!
I'm new here so if you need more information tell me:D
This is what I would like to replicate (for texture unit 0):
functionality of glTranslatef
functionality of glColor4f
functionality of glTexCoord2f
functionality of glVertex2f
functionality of glOrtho (I know it does some magic stuff behind the scenes with the shader)
That's it. That is all the functionality I would like to replicate from the fixed-function pipeline. Can anyone show me an example of how to replicate those things with shaders?
You have a couple of issues here that will make implementing this using shaders more difficult.
First and foremost, in addition to using fixed-function features, you are also using immediate mode. Before you can make the transition to shaders, you should switch to vertex arrays. If you absolutely need to structure your software this way, you could write a class that takes immediate-mode-like commands, the ones that would come between glBegin (...) and glEnd (...), and pushes them into a vertex array.
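A bare-bones sketch of such a class might look like this; the class, member, and buffer names are all mine, and it assumes a VAO with attribute pointers matching the Vertex layout is already bound:

```cpp
#include <vector>
// Assumes a GL function loader (glad, GLEW, ...) has already been included.

struct Vertex { float x, y, u, v, r, g, b, a; };

class ImmediateEmulator {
    std::vector<Vertex> verts;
    Vertex current{}; // mimics the fixed pipeline's "current" color/texcoord state
public:
    void color4f(float r, float g, float b, float a) {
        current.r = r; current.g = g; current.b = b; current.a = a;
    }
    void texCoord2f(float u, float v) { current.u = u; current.v = v; }
    void vertex2f(float x, float y) {
        current.x = x; current.y = y;
        verts.push_back(current); // glVertex* is what actually emits a vertex
    }
    // Replaces glEnd(): upload everything collected so far and draw it.
    void draw(GLuint vbo, GLenum mode) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                     verts.data(), GL_STREAM_DRAW);
        glDrawArrays(mode, 0, (GLsizei)verts.size());
        verts.clear();
    }
};
```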
As for glTranslatef (...) and glOrtho (...), these are nothing particularly special. They create a translation matrix or an orthographic projection matrix and multiply the "current" matrix by it. It is unclear what language you are using, but one possible replacement for these functions could come from using a library like glm (C++).
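For example (a sketch; the uMVP uniform name and the program handle are placeholders for your own setup):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::translate, glm::ortho
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

// Mimics glOrtho(0, w, h, 0, -1, 1) followed by glTranslatef(tx, ty, 0),
// then hands the combined matrix to the vertex shader as a uniform.
void uploadMVP(GLuint program, float w, float h, float tx, float ty) {
    glm::mat4 projection = glm::ortho(0.0f, w, h, 0.0f, -1.0f, 1.0f);
    glm::mat4 model      = glm::translate(glm::mat4(1.0f), glm::vec3(tx, ty, 0.0f));
    glm::mat4 mvp        = projection * model;
    glUniformMatrix4fv(glGetUniformLocation(program, "uMVP"),
                       1, GL_FALSE, glm::value_ptr(mvp));
}
```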
The biggest obstacle will be getting rid of the "current" state mentality that comes with thinking in terms of the fixed-function pipeline. With shaders you have full control over just about every state, and you don't have to use functions that multiply the "current" matrix or set the "current" color. You can simply pass the exact matrix or color value that you need to your shader. This is a better way of approaching these problems, and it is why I honestly think you should ditch the fixed-function approach altogether instead of trying to emulate it.
This is why your desire to "use the fixed-function pipeline but have the flexibility of shaders" fundamentally makes very little sense.
Having said all that, in the OpenGL compatibility profile there are reserved words in GLSL that refer to many of the fixed-function constructs. These include things like gl_MultiTexCoord<N>, gl_ModelViewProjectionMatrix, etc. They can be used as a transitional aid, but really should not be relied upon in the long run.
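As a transitional sketch, a compatibility-profile vertex shader covering your list above could lean entirely on those built-ins:

```glsl
#version 120
// Compatibility profile only: none of these built-ins exist in core GLSL.
void main() {
    gl_FrontColor  = gl_Color;                                 // glColor4f state
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;  // glTexCoord2f state
    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex; // glOrtho/glTranslatef state
}
```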
See also this question: OpenGL Fixed function shader implementation, where they point to a few web resources.
The OpenGL ES 2 book contains an implementation of the OpenGL ES 1.1 fixed function pipeline in Chapter 8 (vertex shader) and Chapter 10 (fragment shader).
Unfortunately, these shaders do not seem to be included in the book's sample code. On the other hand, reading the book and typing in the code is certainly worthwhile.
While trying to get the gist of OpenGL, I eventually ran into GLSL. I have used OpenGL before for minimal things, like triangles and colors (since I haven't learnt much yet), but when I found out about deprecated functions like glBegin and glEnd, I had to unlearn the things I had just learnt.
Now, I have come across vertex buffers, vertex buffer objects, and vertex and fragment shaders... One thing I never understood, though, is why one should use GLSL. Why use GLSL along with OpenGL? What are the things you can't do using pure OpenGL? To me, integrating GLSL shaders into programs adds complexity, since you have to deal with external files or embed shaders into your program, which causes more work.
I have very little experience. I'd like to learn more about the subject, but because of this incomprehensible contradiction, I'm unable to progress.
So, why use GLSL along with OpenGL?
Your question is not about GLSL; it's about shaders in general. GLSL is just the sanctioned way to provide shaders in OpenGL. Your question is really, "Why use shaders along with OpenGL?"
I'm not going to get into all of the details of what shaders can do that fixed function cannot. But here are just a few of the things that you cannot do with fixed-function OpenGL:
GPU-powered vertex weighted skinning.
Dual-quaternion-based skinning.
Proper tangent-space bump mapping.
Various deferred rendering techniques.
On-GPU frustum culling for instanced rendering.
Arbitrary lighting models, including BRDFs or other complex illumination models.
Complex shadow mapping techniques.
High dynamic range rendering (the OpenGL FFP doesn't let you exceed 1.0 in any color calculation).
Arbitrary tone mapping techniques (see the sketch after this list).
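To make the last two concrete, here is a minimal Reinhard tone-mapping pass; the names are illustrative, and it assumes the scene was rendered into a floating-point texture:

```glsl
#version 330 core
in vec2 vTexCoord;
uniform sampler2D uHdrScene; // floating-point color buffer; values may exceed 1.0
out vec4 fragColor;

void main() {
    vec3 hdr = texture(uHdrScene, vTexCoord).rgb;
    fragColor = vec4(hdr / (hdr + vec3(1.0)), 1.0); // Reinhard: maps [0, inf) into [0, 1)
}
```

Trivial in a shader; impossible in the fixed-function pipeline.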
I can keep going (for a long time), but I think you get the point. If you actually care about little things like "visual fidelity", you should be using shaders.
The question isn't "why use shaders;" it's "why not use shaders?"