Most of the tutorials, guides and books I've found on OpenGL explain how to initialize it and draw a triangle. That's fine. But when they try to explain it, they just list a bunch of functions and parameters like:
glClear()
glClearColor()
glBegin()
glEnd()
...
Since I'm not very good at learning things by memory, I always need an answer to "why are we doing this?", so that I write that bunch of functions because I remember I have to set certain things up before doing something else, and so on, not because the tutorial told me so.
Could someone please explain what I have to define in OpenGL (only pure OpenGL; I'm using SFML as the windowing library, but that really doesn't matter) before starting to draw something with glBegin() and glEnd()?
Sample answer:
You have to first tell OpenGL what color it needs to clear the
screen with, because each frame needs to be cleared of the previous
one before we start to draw the current one...
First you should know that OpenGL is a state machine. That means that, apart from creating the OpenGL context (which is done by SFML), there's no such thing as initialization!
Since I'm not very good at learning things by memory,
This is good…
I always need an answer to "why are we doing this?"
This is excellent!
Could someone please explain what I have to define in OpenGL (only pure OpenGL; I'm using SFML as the windowing library, but that really doesn't matter) before starting to draw something with glBegin() and glEnd()?
As I already said: OpenGL is a state machine. That basically means there are two kinds of calls you can make: setting state and executing operations.
For example, glClearColor sets a state variable, namely the clear color, whose value is used to clear the color of the active framebuffer when glClear is called with the GL_COLOR_BUFFER_BIT flag set. There is a similar function, glClearDepth, for the depth value (GL_DEPTH_BUFFER_BIT flag to glClear).
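A minimal sketch of that distinction (the clear values here are arbitrary):
/* setting state: the clear color and depth are remembered until changed again */
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClearDepth(1.0);
/* executing an operation: clear the active framebuffer using the values above */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);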
glBegin and glEnd belong to the immediate mode of OpenGL, which has been deprecated. So there's little reason to learn them. You should use vertex arrays instead, preferably through vertex buffer objects.
But here it goes: glBegin puts OpenGL into a state in which it will draw geometry of the kind of primitive selected as the parameter to glBegin. GL_TRIANGLES, for example, means that OpenGL will now interpret every 3 calls to glVertex as forming a triangle. glEnd tells OpenGL that you've finished that batch of triangles. Within a glBegin…glEnd block certain state changes are disallowed, among them everything that has to do with transforming the geometry and generating the picture, which includes matrices, shaders, textures, and some others.
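For illustration only (again, this is the deprecated way), a single triangle looks like this in immediate mode:
glBegin(GL_TRIANGLES);  /* from now on every glVertex contributes to a triangle */
glVertex3f(-1, 0, 0);
glVertex3f( 1, 0, 0);
glVertex3f( 0, 1, 0);   /* the third vertex completes one triangle */
glEnd();                /* done with this batch of triangles */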
One common misconception is that OpenGL is initialized. This is due to badly written tutorials that have an initGL function or similar. It's good practice to set all state from scratch when beginning to render a scene. But since a single frame may contain several scenes (think of a HUD or split-screen gaming), this happens several times per frame.
Update:
So how do you draw a triangle? Well, it's simple enough. First you need the geometry data. For example this:
GLfloat triangle[] = {
    -1, 0, 0,
    +1, 0, 0,
     0, 1, 0
};
In the render function we tell OpenGL that the next calls to glDrawArrays or glDrawElements shall fetch the data from there (for the sake of simplicity I'll use OpenGL-2 functions here):
glVertexPointer(3,        /* there are three scalars per vertex element */
                GL_FLOAT, /* element scalars are float */
                0,        /* elements are tightly packed (could as well be sizeof(GLfloat)*3) */
                triangle  /* and there you find the data */ );
/* Note that glVertexPointer does not make a copy of the data!
If using a VBO the data is copied when calling glBufferData. */
/* this switches OpenGL into a state that it will
actually access data at the place we pointed it
to with glVertexPointer */
glEnableClientState(GL_VERTEX_ARRAY);
/* glDrawArrays takes data from the supplied arrays and draws them
as if they were submitted sequentially in a for loop to immediate
mode functions. Has some valid applications. Better use index
based drawing for models with a lot of shared vertices. */
glDrawArrays(GL_TRIANGLES, /* draw triangles */
             0,            /* start at index 0 */
             3             /* process 3 elements (of 3 scalars each) */ );
What I didn't include yet is setting up the transformation and viewport mapping.
The viewport defines how the already projected and normalized geometry is placed in the window. This state is set using glViewport(pos_left, pos_bottom, width, height).
Transformation today happens in a vertex shader. Essentially, a vertex shader is a small program written in a special language (GLSL) that takes the vertex attributes and calculates the clip-space position of the resulting vertex. The usual approach for this is emulating the fixed-function pipeline, which is a two-stage process: first transform the geometry into view space (some calculations, like illumination, are easier in this space), then project it into clip space, which is kind of the lens of the renderer. In the fixed-function pipeline there are two transformation matrices for this: modelview and projection. You set them to whatever is required for the desired outcome. In the case of just a triangle, we leave the modelview at identity and use an ortho projection from -1 to 1 in either dimension.
glMatrixMode(GL_PROJECTION);
/* the following function multiplies onto what's already on the stack,
so reset it to identity */
glLoadIdentity();
/* our clip volume is defined by 6 axis-aligned planes at
   distance 1 from the origin in each direction */
glOrtho(-1, 1, -1, 1, -1, 1);
glMatrixMode(GL_MODELVIEW);
/* now an identity matrix is loaded onto the modelview */
glLoadIdentity();
Having set up the transformation we can now draw the triangle as outlined above:
draw_triangle();
Finally we need to tell OpenGL that we're done sending commands and it should finish its rendering.
if(singlebuffered)
glFinish();
However, most of the time your window is double buffered, so you need to swap the buffers to make things visible. Since swapping makes no sense without finishing, the swap implies a finish:
else
SwapBuffers();
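Putting the pieces together, a per-frame render function might look roughly like this. This is only a sketch; window_width, window_height and swap_buffers are placeholders for whatever your windowing library provides (e.g. window.display() in SFML):
void render_frame(void)
{
    /* set and execute the clear */
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* viewport and transformation state, set from scratch every frame */
    glViewport(0, 0, window_width, window_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1, 1, -1, 1, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    draw_triangle();

    swap_buffers(); /* or glFinish() on a single buffered window */
}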
You're using the API to set and change the OpenGL state machine.
You're not actually programming directly to the GPU, you're using a medium between your application and your GPU to do whatever you're trying to do.
The reason it is like this, and doesn't work the same way as a CPU and memory, is that OpenGL was intended to be OS- and hardware-independent, so that your code can run on any OS and on any hardware, not just the ones you're programming on.
Because of this, you need to learn to use their API, which makes sure that whatever you're trying to do will run on all systems/OSes/hardware within a reasonable range.
For example, if you create your application on Windows 8.1 with a certain graphics card (say AMD's), you still want your application to be able to run on Android/iOS/Linux/other Windows systems/other hardware (GPUs) such as Nvidia's.
That's why Khronos, when they created the API, made it as system- and hardware-independent as possible, so that it can run on everything and be a standard for everyone.
This is the price we have to pay for it: we have to learn their API instead of learning how to write directly to GPU memory and directly use the GPU to process data.
Although with the introduction of Vulkan (also from Khronos) things might be different when it is released, and we will find out how it works.
I have a working implementation of this technique for view frustum culling of instanced geometry. The gist of the technique is that we use a vertex shader to check if the bounds of an object lie within the view frustum, and if they do we output the position of that object, using a transform feedback buffer and a geometry shader, to a texture. We can then, during an actual rendering pass, use that texture, along with a query of how many positions we emitted, to acquire the relevant position data for the object we're rendering, and number of draws to specify in our call to glDrawElementsInstanced. One difference between what I do, and what the article does, is that I emit a full transformation matrix, rather than a simple position vector, to the texture, but I doubt that has any bearing on my problem.
The actual problem: Currently I have this setup so that, for each object type being rendered (i.e. tree, box, rock, whatever), the actual rendering pass follows immediately upon the frustum cull rendering pass. This works, and gives the intended results. What I want to do instead, however, is to go over all my drawcommands and do all the frustum culling for the various objects first, and only thereafter do all the actual rendering, to avoid a bunch of unnecessary state changes (i.e. switching back and forth between shader programs). When I do this, however, I encounter the problem that previously established textures -- the ones I use for reading positions from during the actual rendering passes -- all seem to be overwritten by the latest call to the frustum culling function, meaning that all textures established seemingly contain only the position information from the last frustum cull call.
For example: I render, in order, 4 trees, 10 boxes and 3 rocks, and what I will see instead is a tree, a box, and a rock, at all the (three) positions where I would expect only the 3 rocks to be. I cannot for the life of me figure out why this is, because I quite clearly bind new buffers and textures to the TRANSFORM_FEEDBACK_BUFFER every time I call the function. Why are the previously used textures still receiving the new data from the latest call?
Code, in C, for the frustum culling function:
void fcullidraw(drawcommand *tar) {
    /* printf("Fculling %s\n", tar->res->name); */
    mesh *rmesh = &tar->res->amod->meshes[0];
    /* glDeleteTextures(1, &rmesh->ctex); */
    if(rmesh->ctbuf == 0)
        glGenBuffers(1, &rmesh->ctbuf);
    glBindBuffer(GL_TEXTURE_BUFFER, rmesh->ctbuf);
    glBufferData(GL_TEXTURE_BUFFER, sizeof(instancedata) * tar->nodraws, NULL, GL_DYNAMIC_COPY);
    if(rmesh->ctex == 0)
        glGenTextures(1, &rmesh->ctex);
    glBindTexture(GL_TEXTURE_BUFFER, rmesh->ctex);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, rmesh->ctbuf);
    if(rmesh->cquery == 0)
        glGenQueries(1, &rmesh->cquery);
    checkactiveshader(tar->tar, findshader("icull"));
    glEnable(GL_RASTERIZER_DISCARD);
    glUniform1f(activeshader->radius, tar->res->amesh->bbox.radius);
    glUniform3fv(activeshader->extent, 1, (const GLfloat*)&tar->res->amesh->bbox.ext);
    glUniform3fv(activeshader->cp, 1, (const GLfloat*)&tar->res->amesh->bbox.cp);
    glBindVertexArray(tar->res->amod->meshes[0].vao);
    glBindBuffer(GL_ARRAY_BUFFER, tar->res->amod->meshes[0].posarray);
    glBufferData(GL_ARRAY_BUFFER, sizeof(mat4_t) * tar->nodraws, tar->posarray, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, rmesh->ctbuf);
    glBeginTransformFeedback(GL_POINTS);
    glBeginQuery(GL_PRIMITIVES_GENERATED, rmesh->cquery);
    glDrawArrays(GL_POINTS, 0, tar->nodraws);
    glEndQuery(GL_PRIMITIVES_GENERATED);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
    glGetQueryObjectuiv(rmesh->cquery, GL_QUERY_RESULT, &rmesh->visibleinstances);
}
tar and rmesh obviously vary between each call to this function. Do note that I have left in a few lines of comments here containing code to delete the buffers and textures between each rendering cycle, rather than simply overwriting them, but using that code instead has no effect on the error mode.
I'm stumped. I feel that the textures and buffers are well defined and clearly kept separate, so I do not understand how the textures from previous calls to fcullidraw are somehow still bound to and being overwritten by the TransformFeedback, if that is indeed what is happening, and it certainly seems to be, because the earlier objects will read in the entire transformation matrix of the rock quite neatly, with the "right" rotation, translation, and everything.
The article linked does do the operations in the order I want to do them -- i.e. first repeated frustum culls, and then repeated rendering -- and I'm not sure I see what I do differently. Might be some small and obvious thing, and I might be an idiot, but in that case I'd love to know why and how I am that.
EDIT: I pushed on and updated my implementation with a refinement of the original technique, suggested here, which gets rid of the writing-to-texture method altogether, in favor of simply writing to a buffer bound to the VAO, set to update once per rendered instance with a VertexAttribDivisor. This method looks a lot cleaner on the whole, and incidentally had the additional side effect of not exhibiting my original problem at all, as I'm no longer writing to and uploading textures. This is, thus, no longer a practical problem for me, but the answer to the theoretical question still eludes me, so if anyone has ideas I'm still all ears.
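For reference, a hedged sketch of what that per-instance-attribute setup might look like. The attribute locations 4..7 and the names instbuf and indexcount are assumptions, not taken from the code above: the culled matrices are captured into instbuf by transform feedback, and that buffer is attached to the mesh's VAO as four vec4 attributes that advance once per instance instead of once per vertex.
/* attach the per-instance matrix buffer to the VAO; a mat4 occupies 4 attribute slots */
glBindVertexArray(rmesh->vao);
glBindBuffer(GL_ARRAY_BUFFER, instbuf);
for (int i = 0; i < 4; ++i) {
    glEnableVertexAttribArray(4 + i);
    glVertexAttribPointer(4 + i, 4, GL_FLOAT, GL_FALSE,
                          16 * sizeof(GLfloat),
                          (const void *)(4 * i * sizeof(GLfloat)));
    glVertexAttribDivisor(4 + i, 1); /* advance once per instance */
}
/* later, in the actual render pass */
glDrawElementsInstanced(GL_TRIANGLES, rmesh->indexcount, GL_UNSIGNED_INT,
                        0, rmesh->visibleinstances);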
So I've been learning OpenGL 3.3 on https://open.gl/ and I got really confused about some stuff.
VAO-s. By my understanding they are used to store the glVertexAttribPointer calls.
VBO-s. They store vertices. So if I am making something with multiple objects, do I need a VBO for every object?
Shader Programs - Why do we need multiple ones, and what exactly do they do?
What exactly does this line do : glBindFragDataLocation(shaderProgram, 0, "outColor");
The most important thing is how all of this fits into a big program. What exactly are the VAO-s used for? Most tutorials just cover drawing a cube or two with hard-coded vertices, so how would one go about managing scenes with a lot of objects? I've read this thread and got a little bit of an understanding of how the scene management happens and all, but I still can't figure out how to connect the OpenGL stuff to it.
1 - Yes. VAOs store vertex array bindings in general. When you find yourself making lots of calls that enable, disable and change GPU state, you can do all of that at some early point in the program and then use a VAO to take a "snapshot" of what is bound and what isn't at that point in time. Later, during your actual draw calls, all you need to do is bind that VAO again to set all the vertex state back to what it was then. Just like VBOs are faster than immediate mode because they send all vertices at once, VAOs work faster by changing many vertex states at once.
2 - VBOs are just another way to send your vertex data (positions, colors, etc.) to the GPU to render on screen. The idea, unlike with immediate mode where you send your vertex data one by one with the glVertex*/glColor* calls, is to upload all your vertices to the GPU in advance and keep a handle (ID) to the buffer holding them. At render time, you only point the GPU to that location (you bind the VBO id to something like GL_ARRAY_BUFFER, and use glVertexAttribPointer to specify the details of how you stored the vertex data) and issue your order to render. That obviously saves lots of time by doing things up front, and so it's much faster.
As for whether one should have one VBO per object, or even one VBO for all the objects, that's up to the programmer and the structure of the objects they want to render. After all, VBOs themselves are just a bunch of data you stored on the GPU, and you tell the computer how that data is arranged using the glVertexAttribPointer calls.
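A minimal sketch of the usual setup (the names and the use of attribute location 0 are illustrative): one VBO holding the vertex data, one VAO remembering how that data is fed to the shader.
GLfloat verts[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };
GLuint vao, vbo;

/* done once, at load time */
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);                  /* start capturing vertex state into the VAO */
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW); /* upload */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0); /* 2 floats per vertex, tightly packed */

/* done every frame */
glBindVertexArray(vao);                  /* restores everything captured above */
glDrawArrays(GL_TRIANGLES, 0, 3);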
3 - Shaders are used to define a pipeline, a routine, of what happens to the vertices, colors, normals, etc. after they've been sent to the GPU, until they're rendered as fragments or pixels on the screen. When you send vertices over to the GPU, they're often still 3D coordinates, but the screen is a 2D sheet of pixels. There still comes the process of re-positioning these vertices according to the projection and modelview matrices (the job of the vertex shader), then "flattening" or rasterizing the 3D geometry into a 2D plane (done by the fixed rasterizer stage; an optional geometry shader can add or modify primitives before that). Then follows coloring the flattened 2D scene (fragment shader) and finally lighting the pixels on your screen accordingly. In OpenGL versions 1.5 core and below, you didn't have much control over those stages, as it was all fixed (hence the term fixed pipeline). Just think about what you could do in any of these shader stages and you will see that there are a lot of awesome things you can do with them. For example, in the fragment shader, just before writing the fragment color out, negate the color and add 1 to have the colors of objects rendered with that shader inverted!
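A hedged illustration of that inversion trick, written as a GLSL 3.30 fragment shader stored in a C string (the variable names are made up for the example):
const char *invert_fragment_shader =
    "#version 330 core\n"
    "in vec4 vertexColor;\n"   /* interpolated color from the vertex shader */
    "out vec4 outColor;\n"     /* goes to color buffer 0 */
    "void main()\n"
    "{\n"
    "    outColor = vec4(vec3(1.0) - vertexColor.rgb, vertexColor.a);\n"
    "}\n";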
As for how many shaders one needs, again, it's up to the programmer to decide. They could merge all the functionality they need into one giant shader (an "uber shader") and switch features on and off with boolean uniforms (very often considered bad practice), or have every shader do one specific thing and bind the right one according to what they need.
What exactly does this line do :
glBindFragDataLocation(shaderProgram, 0, "outColor");
It means that whatever is stored in the out-declared variable "outColor" at the end of the fragment shader execution will be written to color buffer 0, i.e. used as the final primary fragment color.
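A small hedged example of how it is typically used (shaderProgram is assumed to be a program with a fragment shader attached; the call must happen before linking):
/* in the fragment shader: out vec4 outColor; */
glBindFragDataLocation(shaderProgram, 0, "outColor"); /* route outColor to color buffer 0 */
glLinkProgram(shaderProgram);                         /* the binding takes effect at link time */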
The most important thing is how all of this fits into a big program.
What exactly are the VAO-s used for? Most tutorials just cover drawing
a cube or two with hard-coded vertices, so how would one go about
managing scenes with a lot of objects? I've read this thread and got a
little bit of an understanding of how the scene management happens and
all, but I still can't figure out how to connect the OpenGL stuff to it.
They all work together to draw your nice colored shapes on the screen. VBOs are the structures where the vertices of your scene are stored (all aligned in an ugly fashion), the VertexAttribPointer calls tell the GPU how the data in the VBO is arranged, VAOs store all these VertexAttribPointer instructions ahead of time so they can be applied all at once by simply binding the VAO during rendering in your main loop, and shaders give you more control during the process of drawing your scene on the screen.
All of this can sound overwhelming at first, but with practice you will get used to it.
I am new to OpenGL and I am still experimenting with basic shapes. I sometimes find many functions like glEnd and many more, that are not mentioned in the OpenGL 3+ documentation. Were they replaced by other functions? Or do I have to write them manually?
Is there a tutorial online that uses OpenGL 3+?
As for " gluPerspective" I have read that it isn't used in Opengl 3+. Isn't it supposed to be a separate function in GLUT? what does it has to do with OpenGL 3+? Last, what does Transform( Width, Height ); do? (I found it in some sample code I downloaded, and I can't find it in GLUT or OpenGL).
here is the code:
GLvoid Transform(GLfloat Width, GLfloat Height)
{
    glViewport(00, 00, Width, Height);              /* Set the viewport */
    glMatrixMode(GL_PROJECTION);                    /* Select the projection matrix */
    glLoadIdentity();                               /* Reset The Projection Matrix */
    gluPerspective(20.0,Width/Height,0.1,100.0);    /* Calculate The Aspect Ratio Of The Window */
    glMatrixMode(GL_MODELVIEW);                     /* Switch back to the modelview matrix */
}

/* A general OpenGL initialization function. Sets all of the initial parameters. */
GLvoid InitGL(GLfloat Width, GLfloat Height)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);               /* This Will Clear The Background Color To Black */
    glLineWidth(2.0);                               /* Add line width, ditto */
    Transform( Width, Height );                     /* Perform the transformation */
}

/* The function called when our window is resized */
GLvoid ReSizeGLScene(GLint Width, GLint Height)
{
    if (Height==0) Height=1;                        /* Sanity checks */
    if (Width==0) Width=1;
    Transform( Width, Height );                     /* Perform the transformation */
}
I sometimes find many functions like glEnd and many more, that are not mentioned in the OpenGL 3+ documentation. Were they replaced by other functions?
They have been completely removed, since the way they work doesn't reflect how modern graphics systems work on either the hardware or the software side. glBegin(…) and glEnd() form the surroundings of the so-called immediate mode: every call causes an operation. This reflects how early graphics systems were built, some 20 years ago.
Today one prepares batches of data, transfers them to GPU memory and triggers batch drawings with a single drawing call. OpenGL does this through vertex arrays and vertex buffer objects (VBOs). Vertex arrays have been around since OpenGL-1.1 (1996), and the VBO API is founded on vertex arrays, so for any reasonable program VBO support was added easily.
Or do I have to write them manually? Is there a tutorial online that uses OpenGL 3+?
It depends on the function in question. For example, the whole texture environment and the combiners have been removed, just like the matrix manipulation functions and the whole lighting interface.
What they did and configured is now done through shaders and uniforms. Since you're expected to supply shaders, one might say you're expected to implement this yourself. OTOH you'll quickly find out that writing a shader is often easier and more concise than fiddling with a large number of OpenGL parameter-setting calls. Also, once you've progressed far enough you'll hardly miss the matrix manipulation functions. Every serious application dealing with 3D graphics maintains the transformation matrices itself, be it for enhanced flexibility or simply because those matrices are required in other places too, e.g. in some physics simulation.
As for " gluPerspective" I have read that it isn't used in Opengl 3+. Isn't it supposed to be a separate function in GLUT? what does it has to do with OpenGL 3+? Last, what does Transform( Width, Height ); do? (I found it in some sample code I downloaded, and I can't find it in GLUT or OpenGL).
gluPerspective is part of GLU. GLU is a companion library of OpenGL utility functions that used to ship with OpenGL-1.1. However, it is not part of the OpenGL specification and is completely optional.
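If you want to drop GLU, the matrix gluPerspective builds is easy to compute yourself. A hedged sketch (column-major layout, as OpenGL expects; the function name is made up):
#include <math.h>

void perspective(GLfloat m[16], GLfloat fovy_deg, GLfloat aspect,
                 GLfloat zNear, GLfloat zFar)
{
    GLfloat f = 1.0f / tanf(fovy_deg * 3.14159265f / 360.0f); /* cot(fovy/2) */
    for (int i = 0; i < 16; ++i)
        m[i] = 0.0f;
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = -1.0f;
    m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}

/* compatibility profile: glLoadMatrixf(m);
   core profile: upload m as a uniform to your own vertex shader */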
GLUT is something else again. It's a simplistic framework for quick and dirty setup of an OpenGL window and context, offering a minimalistic input API. Also, it's no longer actively maintained. Personally I recommend not using it. If you must use a GLUT API, use FreeGLUT. Or better yet, don't use GLUT at all; use a toolkit like Qt or GTK, or a framework like GLFW or SDL.
Were they replaced by other functions?
No.
Or do I have to write them manually?
For old-style immediate-mode geometry submission you'll have to make your own work-alike. The matrix stack has a replacement.
Is there a tutorial online that uses OpenGL 3+?
At least one.
I have implemented a 2D particle system based on the ideas and concepts outlined in "Building an Advanced Particle System" (John van der Burg, Game Developer Magazine, March 2000).
Now I am wondering what performance I should expect from this system. I am currently testing it within the context of my simple (unfinished) SDL/OpenGL platformer, where all particles are updated every frame. Drawing is done as follows
// Bind Texture
glBindTexture(GL_TEXTURE_2D, *texture);
// for all particles
glBegin(GL_QUADS);
glTexCoord2d(0,0); glVertex2f(x,y);
glTexCoord2d(1,0); glVertex2f(x+w,y);
glTexCoord2d(1,1); glVertex2f(x+w,y+h);
glTexCoord2d(0,1); glVertex2f(x,y+h);
glEnd();
where one texture is used for all particles.
It runs smoothly up to about 3000 particles. To be honest I was expecting a lot more, particularly since this is meant to be used with more than one system on screen. What number of particles should I expect to be displayed smoothly?
PS: I am relatively new to C++ and OpenGL likewise, so it might well be that I messed up somewhere!?
EDIT Using POINT_SPRITE
glEnable(GL_POINT_SPRITE);
glBindTexture(GL_TEXTURE_2D, *texture);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
// for all particles
glBegin(GL_POINTS);
glPointSize(size);
glVertex2f(x,y);
glEnd();
glDisable( GL_POINT_SPRITE );
Can't see any performance difference to using GL_QUADS at all!?
EDIT Using VERTEX_ARRAY
// Setup
glEnable (GL_POINT_SPRITE);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
glPointSize(20);
// A big array to hold all the points
const int NumPoints = 2000;
Vector2 ArrayOfPoints[NumPoints];
for (int i = 0; i < NumPoints; i++) {
    ArrayOfPoints[i].x = 350 + rand()%201;
    ArrayOfPoints[i].y = 350 + rand()%201;
}
// Rendering
glEnableClientState(GL_VERTEX_ARRAY); // Enable vertex arrays
glVertexPointer(2, GL_FLOAT, 0, ArrayOfPoints); // Specify data
glDrawArrays(GL_POINTS, 0, NumPoints); // draw with points, starting from the 0'th point in my array, and draw exactly NumPoints
Using VAs made a performance difference compared to the above. I then tried VBOs, but I don't really see a performance difference there!?
I can't say how much you can expect from that solution, but there are some ways to improve it.
Firstly, by using glBegin() and glEnd() you are using immediate mode, which is, as far as I know, the slowest way of doing things. Furthermore, it isn't even present in the current OpenGL standard anymore.
For OpenGL 2.1
Point Sprites:
You might want to use point sprites. I implemented a particle system using them and got nice performance (for my knowledge back then, at least). Using point sprites you make fewer OpenGL calls per frame and you send less data to the graphics card (or may even have the data stored on the graphics card; I'm not sure about that). A short google search should even give you some implementations to look at.
Vertex Arrays:
If using point sprites doesn't help, you should consider using vertex arrays in combination with point sprites (to save a bit of memory). Basically, you store the vertex data of the particles in an array. You then enable vertex array support by calling glEnableClientState() with GL_VERTEX_ARRAY as the parameter. After that, you call glVertexPointer() (the parameters are explained in the OpenGL documentation) and call glDrawArrays() to draw the particles. This will reduce your OpenGL calls to only a handful instead of 3000 calls per frame.
For OpenGL 3.3 and above
Instancing:
If you are programming against OpenGL 3.3 or above, you can even consider using instancing to draw your particles, which should speed that up even further. Again, a short google search will let you look at some code about that.
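A hedged sketch of what instanced particle drawing can look like in GL 3.3+. The buffer/variable names and the use of attribute location 1 are illustrative, and the shared quad geometry is assumed to be set up elsewhere in the same VAO:
/* per-particle positions in their own VBO, re-uploaded every frame */
glBindBuffer(GL_ARRAY_BUFFER, particle_pos_vbo);
glBufferData(GL_ARRAY_BUFFER, num_particles * 2 * sizeof(GLfloat),
             positions, GL_STREAM_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribDivisor(1, 1);   /* attribute 1 advances once per instance, not per vertex */

/* one shared quad, drawn num_particles times in a single call */
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, num_particles);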
In General:
Using SSE:
In addition, some time might be lost while updating your vertex positions. So, if you want to speed that up, you can take a look at using SSE for updating them. Done correctly, you will gain a lot of performance (at least with a large number of particles).
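A hedged sketch of that idea: updating four x-coordinates per iteration with SSE intrinsics, assuming the positions and velocities are stored as separate, 16-byte-aligned float arrays whose length is a multiple of four (see the layout discussion below):
#include <xmmintrin.h>

void update_x(float *px, const float *vx, int n, float dt)
{
    __m128 vdt = _mm_set1_ps(dt);              /* broadcast dt into all four lanes */
    for (int i = 0; i < n; i += 4) {
        __m128 p = _mm_load_ps(px + i);        /* load 4 positions */
        __m128 v = _mm_load_ps(vx + i);        /* load 4 velocities */
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt)); /* p += v * dt, 4 at a time */
        _mm_store_ps(px + i, p);
    }
}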
Data Layout:
Finally, I recently found a link (divergentcoder.com/programming/aos-soa-explorations-part-1, thanks Ben) about structures of arrays (SoA) and arrays of structures (AoS). The two layouts are compared in terms of how they affect performance, using a particle system as the example.
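For illustration, the two layouts side by side (the field names are just an example):
/* Array of Structures (AoS): convenient, but the x values are strided in memory */
struct particle_aos { float x, y, vx, vy, life; };
struct particle_aos particles[4096];

/* Structure of Arrays (SoA): each field is contiguous, which is SIMD and cache friendly */
struct particles_soa {
    float x[4096], y[4096];
    float vx[4096], vy[4096];
    float life[4096];
};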
Consider using vertex arrays instead of immediate mode (glBegin/End): http://www.songho.ca/opengl/gl_vertexarray.html
If you are willing to get into shaders, you could also search for "vertex shader" and consider using that approach for your project.
I'm just starting OpenGL programming in Win32 C++, so don't be too hard on me :) I've been working through the NeHe tutorials and 'the red book' a bit now, but I'm confused. So far I've been able to set up an OpenGL window, draw some triangles, etc., no problem. But now I want to build a model and view it from different angles. So do we:
Load a model into memory (saving triangles/quads coordinates in structs on the heap) and in each scene render we draw all stuff we have to the screen using glVertex3f and so on.
Load/draw the model once using glVertex3f etc and we can just change the viewing position in each scene.
Other...?
It seems to me option 1 is the most plausible from all I've read so far, however it seems a bit ehh.. dumb! Do we have to decide ourselves which objects are visible and only draw those? Isn't that very slow? Option 2 might seem more attractive :)
EDIT: Thanks for all the help, I've decided to do: read my model from file, then load it into the GPU memory using glBufferData and then feed that data to the render function using glVertexPointer and glDrawArrays.
First you need to understand that OpenGL actually doesn't understand the term "model"; all OpenGL sees is a stream of vertices coming in, and depending on the current mode it uses those streams of vertices to draw triangles to the screen.
Every frame drawing iteration follows some outline like this:
clear all buffers
for each window element (main scene, HUD, minimap, etc.):
    set scissor and viewport
    conditionally clear depth and/or stencil
    set projection matrix
    set modelview matrix for initial view
    for each model:
        apply model transformation onto matrix stack
        bind model data (textures, vertices, etc.)
        issue model drawing commands
swap buffers
OpenGL does not remember what's happening up there. There was (is) a facility called display lists, but they are not able to store all kinds of commands, and they have been deprecated and removed from recent OpenGL versions. The immediate mode commands glBegin, glEnd, glVertex, glNormal and glTexCoord have been removed as well.
So the idea is to upload some data (textures, vertex arrays, etc.) into OpenGL buffer objects. However only textures are directly understood by OpenGL as what they are (images). All other kinds of buffers require you telling OpenGL how to deal with them. This is done by calls to gl{Vertex,Color,TexCoord,Normal,Attrib}Pointer to set data access parameters and glDraw{Arrays,Elements} to trigger OpenGL fetching a stream of vertices to be fed to the rasterizer.
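A hedged sketch of that idea for a single mesh, using OpenGL-2 style calls (model_verts and model_vert_count are placeholders for whatever your file loader produces):
/* at load time: copy the vertex data into a buffer object on the GPU */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, model_vert_count * 3 * sizeof(GLfloat),
             model_verts, GL_STATIC_DRAW);

/* every frame: set access parameters and trigger the vertex stream */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(3, GL_FLOAT, 0, (const void *)0); /* offset into the bound VBO */
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, model_vert_count);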
You should upload the data to the GPU memory once, and then draw each frame using as few commands as possible.
Previously, this was done using display lists. Nowadays, it's all about vertex buffer objects (a.k.a. VBOs), so look into those.
Here's a tutorial about VBOs, written back when they were only an extension and not yet a core part of OpenGL.