GLsizeiptr and GLintptr - function list - opengl

I would like to check which core OpenGL functions use the GLsizeiptr and GLintptr types. Where can I find the full OpenGL function list? I have checked glspec46.core.pdf, but there is no such list. When were GLsizeiptr and GLintptr introduced to the core specification?
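For context: these pointer-sized integer types came in with buffer objects in OpenGL 1.5 (promoted from ARB_vertex_buffer_object), and they appear mainly in the buffer-object API. A minimal sketch, with the typedefs approximated via ptrdiff_t only so it compiles without GL headers (the real headers define them via khronos types):

```c
#include <stddef.h>

/* Approximation of the GL typedefs, for a standalone sketch only. */
typedef ptrdiff_t GLintptr;   /* signed byte offset into a buffer object */
typedef ptrdiff_t GLsizeiptr; /* byte size of a buffer data range        */

/* A few core functions that take them (signatures per the 4.6 spec):
 *   void  glBufferData(GLenum target, GLsizeiptr size, const void *data, GLenum usage);
 *   void  glBufferSubData(GLenum target, GLintptr offset, GLsizeiptr size, const void *data);
 *   void *glMapBufferRange(GLenum target, GLintptr offset, GLsizeiptr length, GLbitfield access);
 */

/* Example: byte offset of vertex i, and byte size of n vertices,
 * in a tightly packed buffer of 3-float positions. */
GLintptr   vec3_offset(size_t i) { return (GLintptr)(i * 3 * sizeof(float)); }
GLsizeiptr vec3_size(size_t n)   { return (GLsizeiptr)(n * 3 * sizeof(float)); }
```

These are exactly the values you would pass as the offset and size arguments of glBufferSubData when updating a sub-range of a position VBO.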


glEnableClientState vs shader's attribute location

I can connect VBO data with a shader attribute using the following functions:
GLint attribVertexPosition = glGetAttribLocation(progId, "vertexPosition");
glEnableVertexAttribArray(attribVertexPosition);
glVertexAttribPointer(attribVertexPosition, 3, GL_FLOAT, false, 0, 0);
I analyze a legacy OpenGL code where a VBO is used with the following functions:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
How does that legacy code make the connection between the VBO data and the shader's attribute:
attribute vec3 vertexPosition;
?
For these fixed-function calls, the binding between the buffer and the vertex attribute is fixed. You don't tell glVertexPointer which attribute it feeds; it always feeds the attribute gl_Vertex, which is defined for you in the shader. It cannot feed any user-defined attribute.
User-defined attributes are a separate set of attributes from the fixed-function ones.
Note that NVIDIA hardware is known to violate this rule. It aliases certain attribute locations with certain built-in vertex arrays, which allows user-defined attributes to receive data from fixed-function arrays. Perhaps you are looking at code that relies on this non-standard behavior that is only available on NVIDIA's implementation.
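For reference, the aliasing mentioned above can be summarized as a lookup table. This is the historically documented NVIDIA mapping (the same one the ClockworkCoders tutorial cited in a later question relies on); it is a non-standard, vendor-specific assumption that portable code must not depend on:

```c
#include <string.h>

/* NVIDIA's (non-standard) aliasing of built-in vertex arrays onto generic
 * attribute indices.  Returns -1 for names with no alias.  Other vendors
 * do NOT guarantee this mapping; it exists only on NVIDIA's implementation. */
int nv_builtin_attrib_alias(const char *name)
{
    if (strcmp(name, "gl_Vertex") == 0)         return 0;
    if (strcmp(name, "gl_Normal") == 0)         return 2;
    if (strcmp(name, "gl_Color") == 0)          return 3;
    if (strcmp(name, "gl_SecondaryColor") == 0) return 4;
    if (strcmp(name, "gl_FogCoord") == 0)       return 5;
    if (strcmp(name, "gl_MultiTexCoord0") == 0) return 8;
    return -1; /* user-defined attributes have no fixed alias */
}
```

So on NVIDIA hardware, legacy code calling glVertexPointer can end up feeding a user-defined attribute bound at index 0, which is exactly the behavior the answer warns about.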

glGenBuffers vs glGenTextures

What exactly is the difference between glGenBuffers and glGenTextures? I've noticed that both work just fine when I try to generate a texture:
int texture1 = GL30.glGenBuffers();
GL30.glBindTexture(GL30.GL_TEXTURE_2D,texture1);
int texture2 = GL30.glGenTextures();
GL30.glBindTexture(GL30.GL_TEXTURE_2D,texture2);
Both of these seem to behave exactly the same for me. Is there any advantage to using glGenTextures over glGenBuffers? And if not, why does glGenTextures even exist when glGenBuffers can just be used instead?
glGenTextures and glGenBuffers don't create any objects or buffers; they just mark object names as used. That is, they reserve object names.
While glGenTextures reserves texture names and glGenBuffers reserves buffer names, the names returned by glGenTextures may be the same as the names returned by glGenBuffers.
glGenTextures only guarantees that the returned names are not already used for the purpose of textures.
glGenBuffers guarantees that the returned names are not already used for the purpose of buffers.
Note that a buffer object and a texture object may have the same name value, yet they are two completely different objects. A name returned by glGenBuffers is not marked as used (or reserved) for use as a texture object, but it is reserved for use as a buffer object.
OpenGL ES 3.2 Specification - 8.1 Texture Objects; page 140
The command
void GenTextures( sizei n, uint *textures );
returns n previously unused texture names in textures. These names are marked as used, for the purposes of GenTextures only, but they acquire texture state and a dimensionality only when they are first bound, just as if they were unused.
OpenGL ES 3.2 Specification - 6 Buffer Objects; page 50
The command
void GenBuffers( sizei n, uint *buffers );
returns n previously unused buffer object names in buffers. These names are marked as used, for the purposes of GenBuffers only, but they acquire buffer state only when they are first bound with BindBuffer (see below), just as if they were unused.
Note, the OpenGL ES specification differs here from the (desktop) OpenGL core profile specification.
In the (desktop) OpenGL core profile, it is not valid to pass a value to glBindTexture that was not returned by glGenTextures.
See OpenGL 4.6 API Core Profile Specification - 8.1 Texture Objects; page 180
The binding is effected by calling
void BindTexture( enum target, uint texture );
[...]
Errors
[...]
An INVALID_OPERATION error is generated if texture is not zero or a name returned from a previous call to GenTextures, or if such a name has since been deleted.
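A toy model (not real GL code) of the point above: texture names and buffer names come from independent name spaces, so the same integer can be a texture name in one and a buffer name in the other. Real implementations need not hand out names sequentially; the counters here are only for illustration:

```c
/* Toy model of GL's per-object-type name spaces: textures and buffers draw
 * names from independent pools, so the first texture name and the first
 * buffer name can both be 1 while naming two unrelated objects. */
static unsigned next_texture_name = 1;
static unsigned next_buffer_name  = 1;

unsigned gen_texture_name(void) { return next_texture_name++; }
unsigned gen_buffer_name(void)  { return next_buffer_name++; }
```

This is why the Java snippet in the question "works": the integer returned by glGenBuffers happened not to be a reserved texture name, so glBindTexture treated it as a fresh one, which the core profile explicitly forbids.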

When using glVertexAttribPointer, what index should I use for the gl_Normal attribute?

I buffer normal data to a VBO, then point to it using glVertexAttribPointer:
glVertexAttribPointer(<INDEX?>, 3, GL_FLOAT, GL_FALSE, 0, NULL);
However, what value should I use for the first parameter, the index, if I wish the data to be bound to the gl_Normal attribute in the shaders?
I am using an NVIDIA card, and I read here https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/attributes.php that gl_Normal is always at index 2 for these cards. But how do I know that gl_Normal is at this index for other cards?
Additionally, using an index of 2 doesn't seem to be working, and the gl_Normal data in the shader is all (0,0,0).
I am aware of glGetAttribLocation and glBindAttribLocation, however the documentation specifically says the function will throw an error if attempted with one of the built in vertex attributes that begin with 'gl_'.
EDIT:
Using OpenGL 3.0 with GLSL 130.
You don't. When using the core profile and VAOs, none of the fixed-function vertex attributes exist.
Define your own vertex attribute for normals in your shader:
in vec3 myVertexNormal;
Get the attribute location (or bind it to a location of your choice):
normalsLocation = glGetAttribLocation(program, "myVertexNormal");
Then use glVertexAttribPointer with the location:
glVertexAttribPointer(normalsLocation, 3, GL_FLOAT, GL_FALSE, 0, NULL);
In the core profile, you must do this for positions, texture coordinates, etc. as well. OpenGL doesn't actually care what the data is, as long as your vertex shader assigns something to gl_Position and your fragment shader assigns something to its output(s).
If you insist on using the deprecated fixed-function attributes and gl_Normal, use glNormalPointer instead.
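As a follow-up, when positions and normals share one interleaved VBO, the stride and offset arguments to glVertexAttribPointer fall directly out of the struct layout. A sketch (assumes 4-byte floats and no struct padding; posLoc and normalsLocation are the attribute locations queried as above):

```c
#include <stddef.h>

/* Interleaved vertex layout: position followed by normal.  With this
 * struct, both attributes come from one VBO:
 *   glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
 *                         (void *)offsetof(Vertex, position));
 *   glVertexAttribPointer(normalsLocation, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
 *                         (void *)offsetof(Vertex, normal));
 */
typedef struct Vertex {
    float position[3];
    float normal[3];
} Vertex;

size_t vertex_stride(void)          { return sizeof(Vertex); }
size_t vertex_position_offset(void) { return offsetof(Vertex, position); }
size_t vertex_normal_offset(void)   { return offsetof(Vertex, normal); }
```

Using sizeof and offsetof instead of hand-computed byte counts keeps the pointer setup correct if the vertex layout ever changes.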

How does glDrawArrays know what to draw?

I am following some beginner OpenGL tutorials and am a bit confused about this snippet of code:
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject); //Bind GL_ARRAY_BUFFER to our handle
glEnableVertexAttribArray(0); //?
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); //Information about the array: 3 floats for each vertex, don't normalize, no stride, and an offset of 0. I don't know what the first parameter does, however, and how does this function know which array to deal with (does it always assume we're talking about GL_ARRAY_BUFFER?)
glDrawArrays(GL_POINTS, 0, 1); //Draw the vertices, once again how does this know which vertices to draw? (Does it always use the ones in GL_ARRAY_BUFFER)
glDisableVertexAttribArray(0); //?
glBindBuffer(GL_ARRAY_BUFFER, 0); //Unbind
I don't understand how glDrawArrays knows which vertices to draw, and what all the stuff to do with glEnableVertexAttribArray is. Could someone shed some light on the situation?
The call to glBindBuffer tells OpenGL to use vertexBufferObject whenever it needs the GL_ARRAY_BUFFER.
glEnableVertexAttribArray tells OpenGL that you want to use the vertex attribute array at the given index; without this call the data you supplied will be ignored.
glVertexAttribPointer, as you said, tells OpenGL what to do with the supplied array data, since OpenGL doesn't inherently know what format that data will be in.
glDrawArrays uses all of the above data to draw points.
Remember that OpenGL is a big state machine. Most calls to OpenGL functions modify a global state that you can't directly access. That's why the code ends with glDisableVertexAttribArray and glBindBuffer(..., 0): you have to put that global state back when you're done using it.
DrawArrays takes data from ARRAY_BUFFER.
The data are 'mapped' according to your setup in glVertexAttribPointer, which defines the layout of your vertex.
In your example you have one vertex attribute (enabled with glEnableVertexAttribArray) at index 0 (you can normally have 16 vertex attributes, each up to 4 floats).
Then you tell OpenGL that each attribute is obtained by reading 3 GL_FLOATs from the buffer, starting at offset 0.
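The state-machine behavior both answers describe can be made concrete with a toy model (stubs, not real GL): glVertexAttribPointer captures the buffer currently bound to GL_ARRAY_BUFFER into the attribute's state, which is why glDrawArrays takes no buffer argument, and why unbinding the buffer afterwards does not break the draw setup:

```c
/* Toy model of the global state the snippet above manipulates.  Real GL
 * keeps this state inside the driver; the point is that a draw call has
 * no arguments naming a buffer, it just reads whatever state is current. */
typedef struct {
    unsigned array_buffer;       /* current GL_ARRAY_BUFFER binding          */
    int      attrib_enabled[16]; /* glEnable/DisableVertexAttribArray        */
    unsigned attrib_source[16];  /* buffer captured by glVertexAttribPointer */
} GLStateModel;

void model_bind_buffer(GLStateModel *s, unsigned buf) { s->array_buffer = buf; }
void model_enable_attrib(GLStateModel *s, int i)      { s->attrib_enabled[i] = 1; }

/* Like glVertexAttribPointer: snapshots the CURRENT array-buffer binding. */
void model_attrib_pointer(GLStateModel *s, int i)     { s->attrib_source[i] = s->array_buffer; }

/* The buffer a draw call would read attribute i from, or 0 if disabled. */
unsigned model_draw_source(const GLStateModel *s, int i)
{
    return s->attrib_enabled[i] ? s->attrib_source[i] : 0;
}
```

Note how binding buffer 0 after the pointer call does not change what the draw reads: the source buffer was already captured into the attribute's state.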
Complementary to the other answers, here are some pointers to OpenGL documentation. According to Wikipedia [1], active development of OpenGL has wound down in favor of the successor API "Vulkan" [2,3]; the latest OpenGL specification is 4.6, released in 2017 [1].
The code snippet in the original question does not require the full OpenGL API, only a subset that is also codified in OpenGL ES (originally intended for embedded systems) [4]. For instance, the widely used GUI development framework Qt uses OpenGL ES 3.x [5].
The maintainer of OpenGL is the Khronos consortium [1,6]. The reference for the latest OpenGL release is at [7], but it has some inconsistencies (4.6 pages linking to 4.5 pages). If in doubt, use the 3.2 reference at [8].
A collection of tutorials is at [9].
[1] https://en.wikipedia.org/wiki/OpenGL
[2] https://en.wikipedia.org/wiki/Vulkan
[3] https://vulkan.org
[4] https://en.wikipedia.org/wiki/OpenGL_ES
[5] see links in function references like https://doc.qt.io/qt-6/qopenglfunctions.html#glVertexAttribPointer
[6] https://registry.khronos.org
[7] https://www.khronos.org/opengl
[8] https://registry.khronos.org/OpenGL-Refpages/es3
[9] http://www.opengl-tutorial.org

GLSL OpenGL 3.x how to specify the mapping between generic vertex attribute indices and semantics?

I'm switching from HLSL to GLSL
When defining the vertex attributes of a vertex buffer, one has to call
glVertexAttribPointer( GLuint index,
GLint size,
GLenum type,
GLboolean normalized,
GLsizei stride,
const GLvoid * pointer);
and pass an index. But how do I specify which index maps to which semantic in the shader?
for example gl_Normal. How can I specify that when using gl_Normal in a vertex shader, I want this to be the generic vertex attribute with index 1?
There is no such thing as a "semantic" in GLSL. There are just attribute indices and vertex shader inputs.
There are two kinds of vertex shader inputs. The kind that were removed in 3.1 (the ones that start with "gl_") and the user-defined kind. The removed kind cannot be set with glVertexAttribPointer; each of these variables had its own special function. gl_Normal had glNormalPointer, gl_Color had glColorPointer, etc. But those functions aren't around in core OpenGL anymore.
User-defined vertex shader inputs are associated with an attribute index. Each named input is assigned an index in one of the following ways, in order from most overriding to the default:
Through the use of the GLSL 3.30 or ARB_explicit_attrib_location extension syntax layout(location = #), where # is the attribute index. So if I have an input called position, I would give it index 3 like this:
layout(location = 3) in vec4 position;
This is my preferred method of handling this. Explicit_attrib_location is available on pretty much any hardware that is still being supported (that isn't Intel).
Explicit association via glBindAttribLocation. You call this function before linking the program. To do the above, we would do this:
GLuint program = glCreateProgram();
glAttachShader(program, some_shader);
glBindAttribLocation(program, 3, "position");
glLinkProgram(program);
You can set multiple attributes. Indeed, you can set multiple attribute names to the same index. The idea with that is to be able to just set a bunch of mappings automatically and let OpenGL figure out which one works with the actual shader code. So you could have "position" and "axis" map to index 3, and as long as you don't put a shader into this system that has both of those inputs, you'll be fine.
Let OpenGL assign it. If you don't assign an attribute index to an attribute in one of the other ways, the GLSL linker will assign it for you. You can fetch the attribute post-linking with glGetAttribLocation.
I really don't advise this, because OpenGL will assign the indices arbitrarily. So every shader that uses an attribute named position may have the position in a different index. I don't think it's a good idea. So if you can't explicitly set it in the shader, then at least explicitly set it in your OpenGL code before linking. That way, you can have a convention about what attribute index 0 means, what index 1 means, etc.
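To illustrate why option 3 is fragile, here is a pair of toy 'linkers' (pure illustration, not GL): one assigns attribute indices in declaration order, the other in reverse. Since GLSL leaves the assignment implementation-defined, both orders would be conforming, so code that hard-codes the resulting index breaks on one of them:

```c
#include <string.h>

/* Two hypothetical-but-conforming attribute allocators.  The GLSL linker
 * may assign unspecified attribute locations in any order it likes. */
int assign_in_order(const char *names[], int count, const char *attr)
{
    for (int i = 0; i < count; ++i)
        if (strcmp(names[i], attr) == 0) return i;
    return -1;
}

int assign_reversed(const char *names[], int count, const char *attr)
{
    for (int i = 0; i < count; ++i)
        if (strcmp(names[i], attr) == 0) return count - 1 - i;
    return -1;
}
```

The same attribute name lands on different indices under the two allocators, which is exactly why you should pin locations via layout(location = #) or glBindAttribLocation rather than relying on whatever the linker picked.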