glEnableClientState with modern OpenGL (glVertexAttribPointer etc.)

I'd like to lay out some things I think I've learned, but am unsure about:
VBOs are the way to go. They're created with glGenBuffers and glBufferData.
For maximum flexibility, it's best to pass generic vertex attributes to shaders with glVertexAttribPointer, rather than glVertex, glNormal, etc.
glDrawElements can be used with vertex buffers and an index buffer to efficiently render geometry with lots of shared vertices, such as a landscape mesh.
Assuming all of that is correct so far, here's my question. All of the tutorials I've read about modern OpenGL completely omit glEnableClientState. But the OpenGL man pages say that without glEnableClientState, glDrawElements will do nothing:
http://www.opengl.org/sdk/docs/man/xhtml/glDrawElements.xml
The key passage is: "If GL_VERTEX_ARRAY is not enabled, no geometric primitives are constructed."
This leads me to the following questions:
None of the tutorials use glEnableClientState before calling glDrawElements. Does this mean the man page is wrong or outdated?
GL_VERTEX_ARRAY would seem to be the thing you enable if you're going to use glVertexPointer, and likewise you'd use GL_NORMAL_ARRAY with glNormalPointer, and so on. But if I'm not using those functions, and am instead using generic vertex attributes with glVertexAttribPointer, then why would it be necessary to enable GL_VERTEX_ARRAY?

If GL_VERTEX_ARRAY is not enabled, no geometric primitives are constructed.
That's because the man page is wrong. The man page covers GL 2.1 (and it's still wrong for that), and for whatever reason, the people updating the man pages refuse to apply bug fixes to the pages for the older GL versions.
In GL 2.1, you must use either generic attribute index 0 or GL_VERTEX_ARRAY. In GL 3.1+, you don't need to use any specific attribute indices.
This is because, in GL versions before 3.1, all array rendering functions were defined in terms of calls to glArrayElement, which used immediate mode-based rendering. That means that you need something to provoke the vertex. Recall that, in immediate mode, calling glVertex*() not only sets the vertex position, it also causes the vertex to be sent with the other attributes. Calling glVertexAttrib*(0, ...) does the same thing. That's why older versions require you to use either attribute 0 or GL_VERTEX_ARRAY.
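To make that concrete, here is a minimal sketch of the two options under GL 2.1 (positions, indices, indexCount, and program are illustrative names, not from the question):

    // Option A: classic vertex array; GL_VERTEX_ARRAY provokes the vertex.
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indices);
    glDisableClientState(GL_VERTEX_ARRAY);

    // Option B: generic attributes; attribute 0 provokes the vertex, so
    // make sure the position attribute is bound to index 0 before linking.
    glBindAttribLocation(program, 0, "position"); // "position" is illustrative
    glLinkProgram(program);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, positions);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indices);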
In GL 3.1+, once they took out immediate mode, they had to specify array rendering differently. And because of that, they didn't have to limit themselves to using attribute 0.
If you want to know how core GL 3.3 works, I suggest you look at the actual API docs for core GL 3.3. Though to be honest, I'd just look at the spec if you want accurate information. Those docs have a lot of misinformation in them. And since they're not actually wiki pages, that information never gets corrected.

Your first 3 points are correct. And to answer the second half of your question: do not use glEnableClientState for modern OpenGL. Start coding!

VBOs are the way to go. They're created with glGenBuffers and glBufferData.
VBOs are often used in high-performance applications. You might want to do something simpler first. Vertex arrays can be a good way to go to get started quickly. Because VBOs sit on top of vertex arrays anyway, you'll be using much of the same code when you switch to VBOs, and it might be good to run a test using vertex arrays or indexed vertex arrays first.
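For reference, a minimal VBO setup might look like this (vertexData and positionLoc are illustrative names); note that the glVertexAttribPointer call is the same one you would use with a client-side array, except the final argument becomes a byte offset into the bound buffer:

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);

    // Same call as with client-side arrays, but the last argument is now
    // an offset into the bound VBO instead of a CPU pointer.
    glEnableVertexAttribArray(positionLoc);
    glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);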
For maximum flexibility, it's best to pass generic vertex attributes to shaders with glVertexAttribPointer, rather than glVertex, glNormal, etc.
That's a good approach.
glDrawElements can be used with vertex buffers and an index buffer to efficiently render geometry with lots of shared vertices, such as a landscape mesh.
Perhaps. You have to be sure the vertices truly are shared, though. A vertex that is in the same location but has a different normal (e.g. for flat shading) isn't truly shared.
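As a sketch of how sharing works, consider a quad drawn as two triangles: only four unique vertices are stored, and the index buffer reuses two of them (all names here are illustrative):

    // Four unique vertices; the two triangles share vertices 0 and 2.
    GLfloat quad[] = { -1,-1,   1,-1,   1,1,   -1,1 }; // x,y pairs
    GLuint  idx[]  = { 0, 1, 2,   0, 2, 3 };

    GLuint ibo;
    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(idx), idx, GL_STATIC_DRAW);

    // With an index buffer bound, the last argument is an offset into it.
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (void*)0);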

Related

What is the best way to draw multiple VAOs using the same shader but not the same texture or colors

I'm wondering what would be the best thing to do if I want to draw more than ~6000 different VAOs using the same shader.
At the moment I bind my shader then give it all information needed (uniform) then looping through each VAO to binding and draw them.
This code makes my frame rate fall to ~200 fps instead of 3000 or 4000.
According to https://learnopengl.com/Advanced-OpenGL/Instancing, using glDrawElementsInstanced can let me handle a HUGE number of instances of the same VAO, but since I have ~6000 different VAOs it seems like I can't use it.
Can someone confirm this? What would you guys do to draw so many VAOs while saving as much performance as you can?
Step 1: do not have 6,000 different VAOs.
You are undoubtedly treating each VAO as a separate mesh. Stop doing this. You should instead treat each VAO as a separate vertex format. That is, you only need a new VAO if you're passing different kinds of vertex data. The number of attributes and the format of each attribute constitute the format information.
Ideally, you only need between 4 and 10 separate sets of vertex formats. Given that you're using the same shader on multiple VAOs, you probably already have this understanding.
So, how do you use the same VAO for multiple meshes? Ideally, you would do this by putting all of the mesh data for a particular kind of mesh (ie: vertex format) in the same buffer object(s). You would select which data to retrieve for a particular rendering operation via tricks like the baseVertex parameter of glDrawElementsBaseVertex, or just by selecting which range of index data to draw from for a particular draw command. Other alternatives include the multi-draw family of rendering functions.
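A minimal sketch of the baseVertex approach (the MeshRange bookkeeping struct is illustrative, not part of any API):

    // All meshes that share a vertex format live in one VBO/IBO pair.
    struct MeshRange {
        GLsizei    indexCount;
        GLsizeiptr indexByteOffset; // where this mesh's indices start
        GLint      baseVertex;      // where its vertices start
    };

    void drawMesh(const MeshRange& m) {
        // The same VAO stays bound for every mesh with this format.
        glDrawElementsBaseVertex(GL_TRIANGLES, m.indexCount, GL_UNSIGNED_INT,
                                 (void*)m.indexByteOffset, m.baseVertex);
    }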
If you cannot put all of the data in the same buffers for some reason, then you should adopt the glVertexAttribFormat style of VAO usage. That way, you set your vertex format data with glVertexAttribFormat calls, and you can change the buffers as needed with glBindVertexBuffers without ever having to touch the vertex format itself. This is known to be faster than changing VAOs.
And to be honest, you should adopt glVertexAttribFormat anyway, because it's a much better API that isn't stupid like glVertexAttribPointer and its ilk.
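A sketch of the separated-format style (GL 4.3 / ARB_vertex_attrib_binding; the attribute layout here is illustrative):

    // Describe the format once per VAO: attribute 0 is a vec3 position,
    // attribute 1 is a vec3 normal, interleaved with a 24-byte stride.
    glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
    glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 12);
    glVertexAttribBinding(0, 0); // both attributes read from binding point 0
    glVertexAttribBinding(1, 0);
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);

    // Later, per mesh: swap buffers without touching the format.
    glBindVertexBuffer(0, meshVbo, 0 /*offset*/, 24 /*stride*/);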
glDrawElementsInstanced can let me handle a HUGE number of instances of the same VAO, but since I have ~6000 different VAOs it seems like I can't use it.
So what you should do is to combine your objects into the same VAO. Then use glMultiDrawArraysIndirect or glMultiDrawElementsIndirect to issue a draw of all the different objects from within the same VAO. This answer demonstrates how to do this.
In order to handle different textures, you either build a texture atlas, pack the textures into a texture array, or use the GL_ARB_bindless_texture extension if available.
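For reference, the per-draw command layout that glMultiDrawElementsIndirect consumes is fixed by the spec; here is a sketch (indirectBuffer, cmds, and objectCount are illustrative):

    struct DrawElementsIndirectCommand {
        GLuint count;         // number of indices for this object
        GLuint instanceCount; // usually 1 per object in this scenario
        GLuint firstIndex;    // offset into the shared index buffer
        GLuint baseVertex;    // offset into the shared vertex buffer
        GLuint baseInstance;  // handy for indexing per-object data
    };

    // One command per object, uploaded once, drawn with a single call.
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(cmds), cmds, GL_STATIC_DRAW);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                (void*)0, objectCount, 0);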

OpenGL - Fixed pipeline shader defaults (Mimic fixed pipeline with shaders)

Can anyone provide me with shaders that are similar to the fixed-function pipeline?
I need the default fragment shader the most, because I found a similar vertex shader online. But if you have a pair, that should be fine!
I want to use the fixed pipeline, but have the flexibility of shaders, so I need similar shaders to be able to mimic the functionality of the fixed pipeline.
Thank you very much!
I'm new here, so if you need more information, tell me :D
This is what I would like to replicate: (texture unit 0)
functionality of glTranslatef
functionality of glColor4f
functionality of glTexCoord2f
functionality of glVertex2f
functionality of glOrtho (I know it does some magic stuff behind the scenes with the shader)
That's it. That is all the functionality I would like to replicate from the fixed-function pipeline. Can anyone show me an example of how to replicate those things with shaders?
You have a couple of issues here that will make implementing this using shaders more difficult.
First and foremost, in addition to using fixed-function features you are also using immediate mode. Before you can make the transition to shaders, you should switch to vertex arrays. You could write a class that takes immediate mode-like commands that would come between glBegin (...) and glEnd (...) and pushes them into a vertex array if you absolutely need to structure your software this way.
As for glTranslatef (...) and glOrtho (...) these are nothing particularly special. They create translation matrices and orthographic projection matrices and multiply the "current" matrix by this. It is unclear what language you are using, but one possible replacement for these functions could come from using a library like glm (C++).
The biggest obstacle will be getting rid of the "current" state mentality that comes with thinking in terms of the fixed-function pipeline. With shaders you have full control over just about every state, and you don't have to use functions that multiply the "current" matrix or set the "current" color. You can simply pass the exact matrix or color value that you need to your shader. This is an altogether better way of approaching these problems, and is why I honestly think you should ditch the fixed-function approach altogether instead of trying to emulate it.
This is why your desire to "use the fixed-function pipeline but have the flexibility of shaders" fundamentally makes very little sense.
Having said all that, in OpenGL compatibility mode, there are reserved words in GLSL that refer to many of the fixed-function constructs. These include things like gl_MultiTexCoord<N>, gl_ModelViewProjectionMatrix, etc. They can be used as a transitional aid, but really should not be relied upon in the long run.
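As a starting point, here is a minimal sketch of a shader pair covering just the subset listed in the question: one texture unit, a color, and a single matrix uniform standing in for glOrtho and glTranslatef combined on the CPU. All the u_/a_ names are illustrative, and it targets GLSL 1.20 so the attribute/varying keywords still apply:

    const char* vs = R"(
        #version 120
        uniform mat4 u_mvp;        // ortho * translation, built on the CPU
        attribute vec2 a_position; // replaces glVertex2f
        attribute vec2 a_texcoord; // replaces glTexCoord2f
        varying vec2 v_texcoord;
        void main() {
            v_texcoord = a_texcoord;
            gl_Position = u_mvp * vec4(a_position, 0.0, 1.0);
        }
    )";

    const char* fs = R"(
        #version 120
        uniform sampler2D u_texture; // texture unit 0
        uniform vec4 u_color;        // replaces glColor4f
        varying vec2 v_texcoord;
        void main() {
            gl_FragColor = texture2D(u_texture, v_texcoord) * u_color;
        }
    )";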
See also this question: OpenGL Fixed function shader implementation, where they point to a few web resources.
The OpenGL ES 2 book contains an implementation of the OpenGL ES 1.1 fixed function pipeline in Chapter 8 (vertex shader) and Chapter 10 (fragment shader).
Unfortunately, these shaders do not seem to be included in the book's sample code. On the other hand, reading the book and typing in the code is certainly worthwhile.

Fixed-Function Vs. Shaders - help understand the conceptual differences

My background: I first started experimenting with OpenGL some months ago, for no particular purpose, just fun. I started reading the OpenGL Red Book, and got as far as making a planetary system with a lot of different lighting. That lasted for a month, and my interest in OpenGL went away. It awoke again a week or so ago, and as I gathered from some SO posts, the Red Book is outdated and the OpenGL SuperBible is a better source for learning. So I started reading it. I like the concept of shaders, but there's a real mess going on in my brain because of the transition from my old memories of the fixed pipeline to the new concept of shaders.
Question: I would like to write some statements which I think are true and I am asking OpenGL experts to verify them (i.e. whether I am understanding correctly, not quite correctly or absolutely incorrectly). So...
1) If we don't use any shader program, nothing changes. We have current color, current normal, current transformation matrix, current everything, and as soon as we call glVertex**(...) these current values are taken and the vertex is fed to ... I don't know what. The fact is that it's transformed with the current matrix, the current color and normal are applied to it etc.
2) As soon as we use a shader program, all the above stops working. That is, glColor, glRotate etc. make no sense (do they?). I mean, glColor still does set the current color, glRotate still multiplies the current matrix by the rotation matrix, but these aren't used at all. Instead, we feed vertex attributes by glVertexAttrib. Which attribute means what is totally dependent on our vertex shader and the in variable binding. We also find and set the values of the uniforms and then call glVertex, and the shader is executed (I don't know whether immediately or after glEnd() is called). The actual vertex and fragment processing is done entirely manually in the shader program.
3) Shaders don't add anything to depth testing. That is, I don't need to take care of it in a shader. I just call glEnable(GL_DEPTH_TEST). Neither is face culling affected.
4) Alpha blending and antialiasing need not be taken care of in shaders. glEnable calls will suffice.
5) Is it a good idea to use gluPerspective, glRotate, glPushMatrix and other matrix functions, and then retrieve the current matrix and feed it as a uniform to a shader? That way there won't be any need to use a 3rd-party matrix library.
It depends on what version of OpenGL you're talking about. Up through OpenGL 3.0, all the fixed functionality is still present, so yes, if you decide to just use fixed functionality it continues to work like it always did. Starting from 3.0, quite a bit of the fixed pipeline was deprecated, and as of 3.1 it disappears completely. With those versions, you no longer really have the option to just use the fixed pipeline.
Again, it depends. For example, up through OpenGL 3.0, glColor is still supported, even when you use a shader. The difference is that instead of automatically being applied to what gets drawn, it's supplied to your shader, which can use it unchanged, modify it as it sees fit, or ignore it completely. So, your vertex shader sees the color set by glColor as the built-in gl_Color attribute (and can pass it on through gl_FrontColor and gl_BackColor), and your fragment shader writes the actual fragment color to gl_FragColor. If you're using OpenGL 3.1 or newer, however, glColor (for example) just no longer exists -- a color will be just another value you supply to your shader like you could/would anything else.
That's correct, at least up to OpenGL 3.1. As of 4.3, there's a new compute shader stage that (I believe) can get involved in things like depth testing (but I haven't used it, so I'm a bit uncertain about that).
Yes, you can still use built-in alpha blending. Depending on your hardware, you may also want to consider using the GL_ARB_draw_buffers_blend extension (which is core as of OpenGL 4.0, if I recall correctly).
Yet again, it depends on the version of OpenGL you're talking about. Current core-profile OpenGL completely eliminates all support for matrices, so you have no choice but to use some other matrix library. Older versions supplied things like gl_ModelViewMatrix and gl_NormalMatrix to your shader as built-in uniforms, so you could go that route if you chose.
2) In modern OpenGL, there is no glColor, glBegin, glVertex, glRotate etc. so they don't make sense.
5) In modern OpenGL there are no built-in matrices, so you have to use a 3rd party library or write your own. So to answer your question, no, it's not a good idea.
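A sketch of the matrix-library route mentioned above, using GLM (program, aspectRatio, angle, and the "u_mvp" uniform name are illustrative):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // Replaces gluPerspective/glTranslate/glRotate: build the matrices on
    // the CPU and hand only the final product to the shader as a uniform.
    glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                            aspectRatio, 0.1f, 100.0f);
    glm::mat4 view  = glm::translate(glm::mat4(1.0f), glm::vec3(0, 0, -5));
    glm::mat4 model = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0, 1, 0));
    glm::mat4 mvp   = projection * view * model;

    glUniformMatrix4fv(glGetUniformLocation(program, "u_mvp"),
                       1, GL_FALSE, glm::value_ptr(mvp));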

Can you use glVertexAttribPointer without shaders?

According to the following wiki page:
OpenGL Wiki Page
It says "One of the requirements is to use shaders.". Is this true? To use GlVertexAttribPointer do I have to use shaders? I'm just starting out in OpenGL and just want to keep things simple for now, without having to introduce shaders at such an early stage of development. I will be using GLSL eventually, but want to get each feature "working" before adding any new features to my code.
Thanks
Yes, it's true: you need shaders to use generic vertex attributes. If not, how would OpenGL know that attribute 0 is normals, 1 is position, and 2 is texture coordinates? There is no API for doing that in the fixed-function pipeline.
It might work, but that's just luck, not defined behaviour.

GLSL dynamically indexed arrays

I've been using DirectX (with XNA) for a while now, and have recently switched to OpenGL. I'm really loving it, but one thing has got me annoyed.
I've been trying to implement something that requires dynamic indexing in the vertex shader, but I've been told that this requires the equivalent of SM 4.0. However, I know that this works in DX even with SM 2.0, possibly even 1.0. XNA's instancing sample uses this to do instancing on SM 2.0-only cards: http://create.msdn.com/en-US/education/catalog/sample/mesh_instancing.
The compiler can't have been "unrolling" it into a giant list of if statements, since this would surely exceed the instruction limit on SM2 for our 250 instances.
So is DX doing some trickery that I can't do with OpenGL, can I manipulate OpenGL to do the same, or is it a hardware feature that OpenGL doesn't expose?
You can upload an array for your light directions with something like glUniform3fv, then (assuming I understand what you're trying to do correctly) you just need your vertex format to include an index into this array (so there will be lots of duplication of these indices if the index only changes once per mesh or something). If you don't already know, you can use glGetAttribLocation + glVertexAttribPointer to send arbitrary vertex attributes like this to the shader (as opposed to using the deprecated built-in attributes like gl_Vertex, gl_Normal, etc.).
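A sketch of that approach (the Vertex layout, attribute/uniform names, and MAX_LIGHTS are all illustrative):

    struct Vertex { float x, y, z; float lightIndex; }; // illustrative layout

    // Upload the whole array of light directions in one call.
    glUniform3fv(glGetUniformLocation(program, "u_lightDirs"),
                 MAX_LIGHTS, lightDirs); // lightDirs: MAX_LIGHTS * 3 floats

    // Per-vertex index into that array, sent as a generic attribute.
    GLint idxLoc = glGetAttribLocation(program, "a_lightIndex");
    glEnableVertexAttribArray(idxLoc);
    glVertexAttribPointer(idxLoc, 1, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, lightIndex));

    // Vertex shader side of the lookup:
    //   uniform vec3 u_lightDirs[MAX_LIGHTS];
    //   attribute float a_lightIndex;
    //   vec3 dir = u_lightDirs[int(a_lightIndex)];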
From your link:
Note that there is no single perfect instancing technique. This must be implemented in a different way on Windows compared to Xbox 360, and on Windows the ideal technique requires shader 3.0, but there is also a fallback approach that will work with shader 2.0. This sample implements several different instancing techniques, so it can work on both platforms and shader versions.
Note the part about the shader 2.0 fallback. So on that basis you should be able to do similar instancing on Shader Model 3. Shader Model 2's instancing is usually performed using a matrix palette. It simply means you can render multiple meshes in one call by uploading a load of transformation matrices in one go. This reduces draw calls and improves speed.
Anyway, for OpenGL there were a lot of troubles finalising this extension, hence you need Shader Model 4. You CAN, however, still stick a per-vertex matrix palette index in your vertex structure and do matrix palette rendering using a shader...
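A rough sketch of what that looks like (uniform/attribute names, MAX_INSTANCES, instanceCount, and paletteMatrices are illustrative):

    // Upload one transform per instance into a uniform mat4 array.
    glUniformMatrix4fv(glGetUniformLocation(program, "u_palette"),
                       instanceCount, GL_FALSE, paletteMatrices);

    // Each vertex stores the index of its instance's matrix, and the
    // vertex shader picks the transform per vertex:
    //   uniform mat4 u_palette[MAX_INSTANCES];
    //   attribute float a_paletteIndex;
    //   gl_Position = u_viewProj * u_palette[int(a_paletteIndex)] * a_position;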