My OpenGL version is 4.0. I want to render a simple rectangle with glDrawArrays(GL_QUADS, 0, 4).
Although I use the compatibility profile, I still get GL_INVALID_ENUM. But when I add two points and use glDrawArrays(GL_TRIANGLES, 0, 6), it works. So is there another way to use GL_QUADS, or can I only use GL_TRIANGLES?
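For what it's worth: GL_QUADS was removed from the core profile, and a GL_INVALID_ENUM from it usually means the context is actually core rather than compatibility. If you want to keep four vertices, here is a minimal sketch of two alternatives (the index list assumes the vertex order bottom-left, bottom-right, top-left, top-right; a strip needs that zig-zag order rather than the quad's ring order):

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // one rectangle from the same 4 vertices

// Or keep 4 vertices and add a small index list (two triangles):
// GLuint indices[] = { 0, 1, 2,  2, 1, 3 };
// glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, indices);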
I'm drawing a point cloud whose points have different colors, using this:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices.get());
glColorPointer(3, GL_FLOAT, 0, colors.get());
glDrawArrays(GL_POINTS, 0, n);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
Is there a way to tell glDrawArrays (or the default shader) to use another client state for the size of each point?
If there were, it would be terribly inefficient!
Use the programmable pipeline in a core context, i.e. OpenGL 3.3 and above:
Create a buffer with all your vertices (your points).
Create a second buffer with the size of each point.
Pass both buffers to your vertex shader as vertex attributes, and assign the size to the built-in gl_PointSize (see the sketch below).
If you don't get what I'm suggesting, start by learning the modern OpenGL way of rendering :)
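A minimal sketch of that setup, assuming a 3.3 core context; the attribute locations, buffer handles, and the u_Mvp uniform are illustrative names, not anything from the question:

glEnable(GL_PROGRAM_POINT_SIZE);             // let the vertex shader set the point size

glBindBuffer(GL_ARRAY_BUFFER, positionVbo);  // n * vec3 positions
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

glBindBuffer(GL_ARRAY_BUFFER, sizeVbo);      // n floats, one size per point
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, 0, nullptr);

glDrawArrays(GL_POINTS, 0, n);

And the matching vertex shader (GLSL):

#version 330 core
layout(location = 0) in vec3 a_Position;
layout(location = 1) in float a_Size;
uniform mat4 u_Mvp;                          // assumed model-view-projection uniform
void main() {
    gl_Position  = u_Mvp * vec4(a_Position, 1.0);
    gl_PointSize = a_Size;                   // per-point size from the second buffer
}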
How do I use glColor when I have used glEnableClientState(GL_COLOR_ARRAY)?
Normally, glColorPointer is used together with glEnableClientState(GL_COLOR_ARRAY). At times I want to colour everything with a single colour temporarily, and glColor3f seemed like the right tool for that. But when I use it, it works sometimes and not other times. Can anybody tell me how to use glColor in this situation?
glDisableClientState(GL_COLOR_ARRAY);
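That one line is the whole answer: while GL_COLOR_ARRAY is enabled, the per-vertex colours from glColorPointer override the current colour, so glColor3f only takes effect once the array is disabled. A minimal sketch of the pattern (the draw call and the count n are placeholders):

glDisableClientState(GL_COLOR_ARRAY);  // stop using per-vertex colours
glColor3f(1.0f, 0.0f, 0.0f);           // the current colour now applies to everything
glDrawArrays(GL_POINTS, 0, n);         // placeholder draw call
glEnableClientState(GL_COLOR_ARRAY);   // back to per-vertex colours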
Is it possible to use both old and new OpenGL in one program?
Assuming I've understood the difference.
In my program I've used:
WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
WGL_CONTEXT_MINOR_VERSION_ARB, 2,
But can I, for example, use a function that contains this (old style) to draw a grid:
glBegin(GL_LINES);
glVertex3f(-50, 0, (GLfloat)x);
glVertex3f( 50, 0, (GLfloat)x);
glVertex3f((GLfloat)x, 0, -50);
glVertex3f((GLfloat)x, 0, 50);
glEnd();
And a function like this, to texture and render something: (new)
glUseProgram(myShader->handle());
glBindTexture(GL_TEXTURE_2D, texName);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindVertexArray(m_vaoID[0]); //select first VAO
glDrawArrays(GL_TRIANGLES, 0, 6); //draw two triangles
glDisable(GL_BLEND);
glUseProgram(0);
glBindVertexArray(0);
Or does the use of newer versions, with their VAOs/VBOs, make functions built around glBegin/glEnd obsolete?
I hope that makes sense. Please excuse the naivety.
If it's an OpenGL 3.2 or higher compatibility profile then yes, you can mix immediate mode calls with proper rendering. Whether you should or not is another matter (you probably shouldn't in production code, but it can be useful for debugging). With a core profile, you won't be able to use the deprecated APIs.
Note that prior to 3.2 there was no concept of profiles, so with a 3.0/3.1 context things are more complicated. In practice, though, there isn't much use in targeting 3.0/3.1, since just about any 3.0-capable hardware will be fine with 3.2.
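For completeness, a compatibility profile has to be requested explicitly, since a bare 3.2 version request defaults to the core profile. A minimal sketch using the WGL_ARB_create_context_profile attributes (dc is assumed to be a valid device context; error handling omitted):

int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    0                                // attribute list terminator
};
HGLRC rc = wglCreateContextAttribsARB(dc, 0, attribs);

With the compatibility bit set, both the glBegin/glEnd grid and the VAO-based rendering above are legal in the same context.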
I actually have two questions.
I am learning OpenGL, and I have noticed that many samples on the internet pass the view matrix, the projection matrix, the model matrix, or some combination of them to the shader. I want to know why: you already have them as gl_ModelViewMatrix, gl_ModelViewProjectionMatrix, etc., so what is the use of passing them again as uniforms to the shader?
Anyhow, I want to build a shadow map, but I don't get what to pass to the shader to transform coordinates into shadow-map space. I'd prefer to use the standard gl_* matrices, as I have already coded my program around them.
Here is the code I have now.
void FirstPass()
{
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, shadow_fbo);
    glViewport(0, 0, shadow_Width, shadow_Height);
    glClear(GL_DEPTH_BUFFER_BIT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
}

void SecondPass()
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glActiveTexture(GL_TEXTURE7);
    glBindTexture(GL_TEXTURE_2D, shadow_texmap);
}

void display(void)
{
    glUseProgramObjectARB(0);
    float myarray[16];
    FirstPass();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(light_positionFix[0], light_positionFix[1], light_positionFix[2], 0, 0, 0, 0, 1, 0);
    DrawObjects();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    SecondPass();
    if (!LightFollowCamera)
        glLightfv(GL_LIGHT0, GL_POSITION, light_positionFix);
    gluLookAt(eye[0], eye[1], eye[2], lookat[0], lookat[1], lookat[2], 0, 1, 0);
    if (LightFollowCamera)
    {
        light_positionFix[0] = eye[0];
        light_positionFix[1] = eye[1];
        light_positionFix[2] = eye[2];
    }
    DrawObjects();
    glutSwapBuffers();
}
Lots of these shader variables still work but have been deprecated since OpenGL 3. For an up-to-date list of the existing built-in variables, take a look at page 7 of this monstrous pdf; outdated variables aren't even mentioned there anymore. The pdf is for the very latest version of OpenGL, which you shouldn't target as a beginner, because you don't need all the cutting-edge features. OpenGL 3.2 (core profile) is perfectly fine in terms of compatibility with 4.x and of support from graphics vendors, and you'll find all the features you need as a beginner. Take a look at the quick reference card: old built-in variables are still mentioned in 3.2, but are marked as deprecated. The often-used term "modern OpenGL" refers to the OpenGL 3.x core profile or higher.
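To make the difference concrete, a minimal sketch (the uniform and attribute names are made up for illustration). In legacy GLSL under a compatibility profile, the fixed-function matrix state is read through built-ins:

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

In modern GLSL under a core profile those built-ins are gone, so the application uploads the matrix itself, e.g. with glUniformMatrix4fv:

#version 150 core
uniform mat4 u_ModelViewProjection;  // set from the application
in vec4 a_Position;
void main() {
    gl_Position = u_ModelViewProjection * a_Position;
}

That is the whole reason the samples pass matrices as uniforms. As for the shadow map: in a compatibility context you can keep your gl_* workflow, e.g. by loading bias * light projection * light modelview into a texture matrix and reading it in the shader as gl_TextureMatrix[7], which is the classic fixed-function shadow-mapping trick.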
I am following some beginner OpenGL tutorials, and am a bit confused about this snippet of code:
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject); //Bind GL_ARRAY_BUFFER to our handle
glEnableVertexAttribArray(0); //?
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); //Information about the array: 3 floats for each vertex, don't normalize, no stride, and an offset of 0. I don't know what the first parameter does, however, and how does this function know which array to deal with (does it always assume we're talking about GL_ARRAY_BUFFER?)
glDrawArrays(GL_POINTS, 0, 1); //Draw the vertices, once again how does this know which vertices to draw? (Does it always use the ones in GL_ARRAY_BUFFER)
glDisableVertexAttribArray(0); //?
glBindBuffer(GL_ARRAY_BUFFER, 0); //Unbind
I don't understand how glDrawArrays knows which vertices to draw, and what all the stuff to do with glEnableVertexAttribArray is. Could someone shed some light on the situation?
The call to glBindBuffer tells OpenGL to use vertexBufferObject whenever it needs the GL_ARRAY_BUFFER.
glEnableVertexAttribArray(0) tells OpenGL that you want to source vertex attribute 0 from an array; without this call, the data you supplied would be ignored.
glVertexAttribPointer, as you said, tells OpenGL what to do with the supplied array data, since OpenGL doesn't inherently know what format that data will be in.
glDrawArrays uses all of the above data to draw points.
Remember that OpenGL is a big state machine. Most calls to OpenGL functions modify a global state that you can't directly access. That's why the code ends with glDisableVertexAttribArray and glBindBuffer(..., 0): you have to put that global state back when you're done using it.
glDrawArrays takes its data from the buffer that was bound to GL_ARRAY_BUFFER when glVertexAttribPointer was called.
The data are 'mapped' according to your setup in glVertexAttribPointer, which defines the layout of your vertex.
In your example you have one vertex attribute (enabled with glEnableVertexAttribArray) at location 0 (you normally have 16 vertex attributes available, each up to 4 floats).
You then specify that each attribute value is obtained by reading 3 GL_FLOATs from the buffer, starting at offset 0.
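One detail worth a tiny sketch, because it trips up many beginners: glVertexAttribPointer captures whichever buffer is bound to GL_ARRAY_BUFFER at the moment it is called, so rebinding afterwards does not change where the attribute reads from (vboA and vboB are hypothetical handles):

glBindBuffer(GL_ARRAY_BUFFER, vboA);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);  // attribute 0 now reads from vboA
glBindBuffer(GL_ARRAY_BUFFER, vboB);                    // does NOT affect attribute 0
glDrawArrays(GL_POINTS, 0, 1);                          // still sources its vertex from vboA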
Complementary to the other answers, here are some pointers to the OpenGL documentation. According to Wikipedia [1], development of OpenGL ceased in 2016 in favor of its successor API, Vulkan [2,3]. The latest OpenGL specification is 4.6 of 2017, but it has only a few additions over 3.2 [1].
The code snippet in the original question does not require the full OpenGL API, only a subset that is codified as OpenGL ES (originally intended for embedded systems) [4]. For instance, the widely used GUI development platform Qt uses OpenGL ES 3.x [5].
The maintainer of OpenGL is the Khronos consortium [1,6]. The reference for the latest OpenGL release is at [7], but it has some inconsistencies (4.6 links to 4.5 pages). If in doubt, use the OpenGL ES 3 reference at [8].
A collection of tutorials is at [9].
[1] https://en.wikipedia.org/wiki/OpenGL
[2] https://en.wikipedia.org/wiki/Vulkan
[3] https://vulkan.org
[4] https://en.wikipedia.org/wiki/OpenGL_ES
[5] See the links in function references such as https://doc.qt.io/qt-6/qopenglfunctions.html#glVertexAttribPointer
[6] https://registry.khronos.org
[7] https://www.khronos.org/opengl
[8] https://registry.khronos.org/OpenGL-Refpages/es3
[9] http://www.opengl-tutorial.org