What is the modern OpenGL equivalent to glBegin/glEnd? - C++

I'm building a graphics API on top of OpenGL that is based on the basic call-to-draw style: instead of storing the data on the GPU and drawing it later via its handle, the caller supplies the data to draw on every update. I know it's slow, but it's simple and it's for non-performance-critical applications. Anyway, is there any modern equivalent to glBegin/glEnd? It doesn't have to be a call for every vertex, just a way to send the data each update without storing the vertices on the GPU.

You pretty much answered your own question.
is there any modern equivalent to glBegin/glEnd? It doesn't have to be a call for every vertex, just a way to send the data each update without storing the vertices on the GPU?
Basically no, the modern way is to use VAOs with VBOs (and IBOs).
If you're going to change the data within the VBO, remember that you can change the usage parameter of glBufferData:
GL_STREAM_DRAW - The data store contents will be modified once and used at most a few times.
GL_STATIC_DRAW - The data store contents will be modified once and used many times.
GL_DYNAMIC_DRAW - The data store contents will be modified repeatedly and used many times.
So instead of GL_STATIC_DRAW, use GL_DYNAMIC_DRAW; this can improve the frame rate considerably compared to GL_STATIC_DRAW, though how much depends on the amount of data and how frequently you change it. Still, try to limit updates as much as you can: don't re-upload the buffer contents if you don't actually need to.
You can read more about the different buffers on the OpenGL Wiki.
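For the "send the data each update" part of the question, a minimal sketch of a per-frame upload into one already-created VBO could look like this (the function name and the verts container are placeholders, not from any particular API):
#include <GL/glew.h>
#include <vector>
// Re-upload this frame's vertex data into an existing VBO.
// The VBO and its attribute setup are assumed to be created once at startup.
void upload_frame(GLuint vbo, const std::vector<float>& verts)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Orphan the old storage so the driver does not have to stall on data
    // that may still be in use by the previous frame...
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float), NULL, GL_STREAM_DRAW);
    // ...then fill the fresh storage with this frame's vertices.
    glBufferSubData(GL_ARRAY_BUFFER, 0, verts.size() * sizeof(float), verts.data());
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
After such an upload, the draw call (glDrawArrays/glDrawElements) works exactly the same as with static data.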

Look for VAO/VBO usage; that is what you want to implement.
Below is a simple C/C++ example (a short usage sketch follows it).
The input variable mode is GL_POINTS/GL_TRIANGLES/GL_QUADS/... (as in glBegin()).
This is also the only way to pass attributes with GLSL and the core profile (glVertex/glNormal/... has been gone from core for some time now).
//------------------------------------------------------------------------------
//--- Open GL VAO example (GLSL) -----------------------------------------------
//------------------------------------------------------------------------------
#ifndef _OpenGL_VAO_example_h
#define _OpenGL_VAO_example_h
//------------------------------------------------------------------------------
GLuint vbo[4] = { 0, 0, 0, 0 };
GLuint vao[4] = { 0, 0, 0, 0 };

const float vao_pos[] =
{
    //   x      y     z
     0.75f, 0.75f, 0.0f,
     0.75f,-0.75f, 0.0f,
    -0.75f,-0.75f, 0.0f,
};

const float vao_col[] =
{
    //  r    g    b
    1.0f,0.0f,0.0f,
    0.0f,1.0f,0.0f,
    0.0f,0.0f,1.0f,
};
//---------------------------------------------------------------------------
void vao_init()
{
    glGenVertexArrays(4,vao);   // allocate VAO names
    glGenBuffers(4,vbo);        // allocate VBO names
    glBindVertexArray(vao[0]);  // set up VAO 0

    // positions -> attribute 0
    glBindBuffer(GL_ARRAY_BUFFER,vbo[0]);
    glBufferData(GL_ARRAY_BUFFER,sizeof(vao_pos),vao_pos,GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,0);

    // colors -> attribute 1
    glBindBuffer(GL_ARRAY_BUFFER,vbo[1]);
    glBufferData(GL_ARRAY_BUFFER,sizeof(vao_col),vao_col,GL_STATIC_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1,3,GL_FLOAT,GL_FALSE,0,0);

    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    glBindVertexArray(0);
    glBindBuffer(GL_ARRAY_BUFFER,0);
}
//---------------------------------------------------------------------------
void vao_exit()
{
    glDeleteVertexArrays(4,vao);
    glDeleteBuffers(4,vbo);
}
//---------------------------------------------------------------------------
void vao_draw(GLuint mode)
{
    glBindVertexArray(vao[0]);
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glDrawArrays(mode,0,3);     // draw 3 vertices with the given primitive mode
    glBindVertexArray(0);
}
//------------------------------------------------------------------------------
#endif
//------------------------------------------------------------------------------
//--- end. ---------------------------------------------------------------------
//------------------------------------------------------------------------------
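A typical call pattern for the functions above would be something like this (the loop condition and buffer swap are placeholders for whatever windowing code you use):
// once, after the GL context and shader program are created
vao_init();

// per frame
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
vao_draw(GL_TRIANGLES);   // mode as you would pass to glBegin()
// ... swap buffers with your windowing library here ...

// once, at shutdown
vao_exit();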
If you do not want to use GLSL, then you must change the code a little, to something like this instead:
// tetrahedron
#define V_SIZ 12
#define I_SIZ 6

GLfloat tet_verts[V_SIZ] = {
    -0.5f, -1.0f, -0.86f,
    -0.5f, -1.0f,  0.86f,
     1.0f, -1.0f,  0.0f,
     0.0f,  1.0f,  0.0f };

GLushort tet_index[I_SIZ] = { 3, 0, 1, 2, 3, 0 };

GLuint vertex_buf, index_buf;

void init_buffers() {
    glGenBuffersARB(1, &vertex_buf);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertex_buf);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, V_SIZ*sizeof(GLfloat), tet_verts, GL_STATIC_DRAW_ARB); // upload vertex data
    glGenBuffersARB(1, &index_buf);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, index_buf);
    glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, I_SIZ*sizeof(GLushort), tet_index, GL_STATIC_DRAW_ARB); // upload index data
    return;
}

void draw_buffers() {
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertex_buf);
    glVertexPointer(3, GL_FLOAT, 0, 0); // 3 is xyz, last 0 ("pointer") is offset in vertex-array
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, index_buf);
    glEnableClientState(GL_VERTEX_ARRAY);
    // use indexing
    glDrawElements(GL_TRIANGLE_STRIP, I_SIZ, GL_UNSIGNED_SHORT, 0); // last 0 is offset in element-array
    return;
}

void deinit_buffers() {
    glDeleteBuffersARB(1, &vertex_buf);
    glDeleteBuffersARB(1, &index_buf);
    return;
}
PS: I recommend not using indexing; in my experience it is usually slower on the cards I use, though of course skipping it takes more memory. Indexing is also not always well implemented in drivers and can get buggy under the right circumstances (even on nVidia, and of course on ATI too).
If you also want shaders, see my:
complete GL+GLSL+VAO/VBO C++ example

Related

Multiple images of same mesh without duplicate triangle transfers

I take multiple images of the same mesh using OpenGL, GLEW and GLFW. The mesh (triangles) doesn't change in each shot, only the ModelViewMatrix does.
Here's the important code of my mainloop:
for (int i = 0; i < number_of_images; i++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* set GL_MODELVIEW matrix depending on i */

    glBegin(GL_TRIANGLES);
    for (Triangle &t : mesh) {
        for (Point &p : t) {
            glVertex3f(p.x, p.y, p.z);
        }
    }
    glEnd();

    glReadPixels(/*...*/); // get picture and store it somewhere
    glfwSwapBuffers();
}
As you can see, I set/transfer the triangle vertices for each shot I want to take. Is there a solution in which I only need to transfer them once? My mesh is quite large, so this transfer takes quite some time.
In the year 2016 you must not use glBegin/glEnd. No way. Use Vertex Array Objects instead, and use custom vertex and/or geometry shaders to reposition and modify your vertex data. Using these techniques, you upload your data to the GPU once, and then you can draw the same mesh with various transformations.
Here is an outline of what your code may look like:
// 1. Initialization.
// Object handles:
GLuint vao;
GLuint verticesVbo;
// Generate and bind vertex array object.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Generate a buffer object.
glGenBuffers(1, &verticesVbo);
// Enable vertex attribute number 0, which
// corresponds to vertex coordinates in older OpenGL versions.
const GLuint ATTRIBINDEX_VERTEX = 0;
glEnableVertexAttribArray(ATTRIBINDEX_VERTEX);
// Bind buffer object.
glBindBuffer(GL_ARRAY_BUFFER, verticesVbo);
// Mesh geometry. In your actual code you probably will generate
// or load these data instead of hard-coding.
// This is an example of a single triangle.
GLfloat vertices[] = {
    0.0f, 0.0f, -9.0f,
    0.0f, 0.1f, -9.0f,
    1.0f, 1.0f, -9.0f
};
// Determine vertex data format.
glVertexAttribPointer(ATTRIBINDEX_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, 0);
// Pass actual data to the GPU.
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*3*3, vertices, GL_STATIC_DRAW);
// Initialization complete - unbinding objects.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
// 2. Draw calls.
while(/* draw calls are needed */) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArray(vao);
    // Set transformation matrix and/or other
    // transformation parameters here using glUniform* calls.
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(0); // Unbinding just as an example, in case some other code binds something else later.
}
And a vertex shader may look like this:
#version 330 core
layout(location = 0) in vec3 vertex_pos;
uniform mat4 viewProjectionMatrix; // Assuming you set this before glDrawArrays.
void main(void) {
    gl_Position = viewProjectionMatrix * vec4(vertex_pos, 1.0);
}
Also take a look at this page for a good modern accelerated graphics book.
@BDL already commented that you should abandon immediate-mode drawing calls (glBegin … glEnd) and switch to vertex array drawing (glDrawElements, glDrawArrays) that fetches its data from Vertex Buffer Objects (VBOs). @Sergey mentioned Vertex Array Objects in his answer, but those are actually state containers for VBOs.
A very important thing you have to understand – and the way you asked your question suggests you're not aware of it yet – is that OpenGL does not deal with "meshes", "scenes" or the like. OpenGL is just a drawing API. It draws points… lines… and triangles… one at a time… with no connection between them whatsoever. That's it. So when you show multiple views of the "same" thing, you must draw it several times. There's no way around this.
Most recent versions of OpenGL support multiple viewport rendering, but it still takes a geometry shader to multiply the geometry into several pieces to be drawn.
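To make the "draw it several times" point concrete, a sketch of the loop could reuse the VAO and shader from the outline above and change only a uniform per shot (computeMatrixForImage and vertexCount are placeholders for your own code):
// Sketch: the geometry is uploaded once; only a uniform changes per image.
GLint mvpLoc = glGetUniformLocation(shaderProgram, "viewProjectionMatrix");
glUseProgram(shaderProgram);
glBindVertexArray(vao);
for (int i = 0; i < number_of_images; i++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glm::mat4 mvp = computeMatrixForImage(i);          // per-shot transform
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);        // geometry was uploaded once
    glReadPixels(/*...*/);                             // grab the image as before
    glfwSwapBuffers();
}
glBindVertexArray(0);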

OpenGL 3.3 Batch Rendering - Triangle doesn't show up

I'm trying to implement a batch-rendering system using OpenGL, but the triangle I'm trying to render doesn't show up.
In the constructor of my Renderer class, I initialize the VBO and VAO and also load my shader program (this works, so the error can't be found there). The VBO is supposed to be capable of holding the maximum number of vertices I'll permit, which is defined in the header as 30000. The VAO contains the information about how the data stored in that buffer is laid out - in this case I use a struct called VertexData which only contains a 3D vector ('vertex'), but will also contain stuff like colors etc. later on. So I create the buffer with the size I stated, don't fill in any content yet, and provide the layout using 'glVertexAttribPointer'. The '_vertexCount', as the name implies, counts the number of vertices currently stored inside that buffer for drawing purposes.
The constructor of my Renderer-class (note that every private member variable defined in the header file starts with an _ ):
Renderer::Renderer(std::string vertexShaderPath, std::string fragmentShaderPath) {
    _shaderProgram = ShaderLoader::createProgram(vertexShaderPath, fragmentShaderPath);

    glGenBuffers(1, &_vbo);
    glGenVertexArrays(1, &_vao);

    glBindVertexArray(_vao);
    glBindBuffer(GL_ARRAY_BUFFER, _vbo);
    glEnableVertexAttribArray(0);
    glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
    glDisableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);

    _vertexCount = 0;
}
Once the initialization is done, to render anything the 'begin' procedure has to be called during the main loop. This maps the current buffer with write permissions so the vertices that should be rendered in the current frame can be filled in:
void Renderer::begin() {
    glBindBuffer(GL_ARRAY_BUFFER, _vbo);
    _buffer = (VertexData*) glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
}
After beginning, the 'submit' procedure can be called to add vertices and their corresponding data to the buffer. I add the data to the location in memory the buffer currently points to, then advance the buffer and increase the vertex count:
void Renderer::submit(VertexData* data) {
    _buffer = data;
    _buffer++;
    _vertexCount++;
}
Finally, once all vertices are pushed to the buffer, the 'end' procedure will unmap the buffer to enable the actual rendering of the vertices, bind the VAO, use the shader program, render the provided vertices as triangles, unbind the VAO and reset the vertex count:
void Renderer::end() {
    glUnmapBuffer(GL_ARRAY_BUFFER);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBindVertexArray(_vao);
    glUseProgram(_shaderProgram);
    glDrawArrays(GL_TRIANGLES, 0, _vertexCount);
    glBindVertexArray(0);

    _vertexCount = 0;
}
In the main loop I'm beginning the rendering, submitting three vertices to render a simple triangle and ending the rendering process. This is the most important part of that file:
Renderer renderer("../sdr/basicVertex.glsl", "../sdr/basicFragment.glsl");
Renderer::VertexData one;
one.vertex = glm::vec3(-1.0f, 1.0f, 0.0f);
Renderer::VertexData two;
two.vertex = glm::vec3( 1.0f, 1.0f, 0.0f);
Renderer::VertexData three;
three.vertex = glm::vec3( 0.0f,-1.0f, 0.0f);
...
while (running) {
    ...
    renderer.begin();
    renderer.submit(&one);
    renderer.submit(&two);
    renderer.submit(&three);
    renderer.end();

    SDL_GL_SwapWindow(mainWindow);
}
This may not be the most efficient way of doing this and I'm open to criticism, but my biggest problem is that nothing appears at all. The problem has to lie within those code snippets, but I can't find it - I'm a newbie when it comes to OpenGL, so help is greatly appreciated. If full source code is required, I'll post it using pastebin, but I'm about 99% sure that I did something wrong in those code snippets.
Thank you very much!
You have the vertex attribute disabled when you make the draw call. This part of the setup code looks fine:
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableVertexAttribArray(0);
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
At this point, the attribute is set up and enabled. But this is followed by:
glDisableVertexAttribArray(0);
Now the attribute is disabled, and there's nothing else in the posted code that enables it again. So when you make the draw call, you don't have a vertex attribute that is actually enabled.
You can simply remove the glDisableVertexAttribArray() call to fix this.
Another problem in your code is the submit() method:
void Renderer::submit(VertexData* data) {
    _buffer = data;
    _buffer++;
    _vertexCount++;
}
Both _buffer and data are pointers to a VertexData structure. So the assignment:
_buffer = data;
is a pointer assignment. Instead of copying the data into the buffer, it modifies the buffer pointer. This should be:
*_buffer = *data;
This will copy the vertex data into the buffer, and leave the buffer pointer unchanged until you explicitly increment it in the next statement.
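Putting both fixes together, the changed parts could look roughly like this (same names and assumptions as the posted code):
// Constructor: leave attribute 0 enabled in the VAO state.
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
// no glDisableVertexAttribArray(0) here
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);

// submit(): copy the data instead of reassigning the mapped pointer.
void Renderer::submit(VertexData* data) {
    *_buffer = *data;   // copy the vertex into the mapped buffer
    _buffer++;          // advance to the next free slot
    _vertexCount++;
}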

OpenGL refuses to draw a triangle

I'm attempting to draw a single large triangle in a window in OpenGL. My program compiles and runs, but I get just a black screen in my window.
I've checked and double-checked multiple tutorials and it seems like my steps are correct... Am I missing something obvious?
Here is the program in its entirety:
#include <stdlib.h>
#include <stdio.h>
#include <GL/glew.h>
#include <GLUT/glut.h>
GLuint VBO;

struct vector {
    float _x;
    float _y;
    float _z;

    vector() { }
    vector(float x, float y, float z) { _x = x; _y = y; _z = z; }
};

void render()
{
    glClear(GL_COLOR_BUFFER_BIT);

    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(0);

    glutSwapBuffers();
}

void create_vbo()
{
    vector verts[3];
    verts[0] = vector(-1.0f, -1.0f, 0.0f);
    verts[1] = vector(1.0f, -1.0f, 0.0f);
    verts[2] = vector(0.0f, 1.0f, 0.0f);

    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA);
    glutInitWindowSize(1024, 768);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("Triangle Test");
    glutDisplayFunc(render);

    glewInit();

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    create_vbo();

    glutMainLoop();
    return 0;
}
Update: It turns out that drawing this way without a "program" (that is, compiled shaders) produces undefined behavior (the newer your graphics card, the more likely it is to work, however).
Because my card is right on the edge and only supports OpenGL 2.1, it was a little difficult to find an appropriate shader example that would work -- it seems there are many different tutorials out there written at different stages in the evolution of OpenGL.
My vertex shader (entire file):
void main()
{
    gl_Position = ftransform();
}
My fragment shader (entire file):
void main()
{
    gl_FragColor = vec4(0.4,0.4,0.8,1.0);
}
I used the example LoadShaders function from this OpenGL Tutorial Site to create the program, and now, I, too, can see the triangle!
(Thanks to #chbaker0 for pointing me in the right direction.)
I do not know if this will help you or not but in your create_vbo() function where you have:
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
try this instead:
glBufferData( GL_ARRAY_BUFFER, sizeof( verts[0] ) * 3, &verts[0], GL_STATIC_DRAW );
After this call, add the following to the end of your create_vbo() function:
// This MUST BE LAST! Used to Stop The Buffer!
glBindBuffer( GL_ARRAY_BUFFER, 0 );
It is hard for me to see your error. In my projects I do have some VBOs, but I am using VAOs as well. My code works in OpenGL 2.0 - 4.5, but for the older versions there is a split in logic because of the deprecated functions within the API. I also do not use GLUT. I hope this helps.
The other thing I noticed: did you pay attention to your vertex winding order? That is, are the vertices interpreted by OpenGL in CCW or CW order? Is back-face culling turned on or off? There are a lot of elements to consider when setting up and configuring an OpenGL context. It has been a while since I worked with older versions of OpenGL, but I do know that from a certain version onward you have to supply your own model-view-projection matrix; just something to consider.
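As a quick check (not a fix for the original problem, just a way to rule culling out), you could temporarily disable culling or make the winding explicit:
// Either rule out back-face culling entirely while debugging...
glDisable(GL_CULL_FACE);

// ...or keep culling on and state the winding explicitly (these are the defaults):
// glEnable(GL_CULL_FACE);
// glFrontFace(GL_CCW);   // counter-clockwise triangles are front-facing
// glCullFace(GL_BACK);   // cull back faces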
The issue I ran into was using pipeline features without defining a shader program. The spec says this should work, but on my graphics card it did not. (See my update in the question for more specifics.)
Thanks to all the commenters for nudging me in the right direction.

Display latency using OpenGL Quad Buffer with nvidia stereoscopic 3D

I need to achieve real-time performance (60 fps) with my stereoscopic 3D application in C++ for video rendering, using the OpenGL quad buffer. The application runs on Xubuntu LTS 14.04.
My hardware setup consists of two GoPro cameras, an NVIDIA Quadro K4000 card and a 120 Hz display.
I'm experiencing image latency when watching the displayed stereo video.
There is no significant delay when displaying from only one camera in a mono setup, without the quad buffer functionality.
When I measure the execution time of the OpenGL display function, it takes less than 10 milliseconds on most cycles, but there are spikes of more than 60 ms that are probably the origin of this latency.
I've tested both with and without the sync-to-vblank option on the graphics card, and the difference is very small, resulting in the same delayed image on the screen.
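A GPU-side timer query around the draw code (rough sketch below, not part of my current code) should show whether the spikes happen on the GPU or elsewhere:
// Sketch: measure GPU time of the draw code with a timer query (GL 3.3 / ARB_timer_query).
GLuint timer_query;
glGenQueries(1, &timer_query);

glBeginQuery(GL_TIME_ELAPSED, timer_query);
// ... the draw code being timed ...
glEndQuery(GL_TIME_ELAPSED);

GLuint64 elapsed_ns = 0;
glGetQueryObjectui64v(timer_query, GL_QUERY_RESULT, &elapsed_ns); // waits for the result
printf("GPU time: %.3f ms\n", elapsed_ns / 1.0e6);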
The code below is part of a more complex application, so I hope the parts I provide at least shed some light on what I am trying to accomplish.
EDIT 1: This is the initialization code, outside the display function.
createTexture(&textureID, width, height);
createPBO(&pbo, width, height);
// map OpenGL buffer object for writing from CUDA
cudaGLMapBufferObject((void **)Img, pbo);
// Unmap Buffers
cudaGLUnmapBufferObject(pbo);
// Paths for the stereo shaders' files
const std::string vertex_file_path = "VertexShader_stereo.vertexshader";
const std::string fragment_file_path = "FragmentShader_stereo.fragmentshader";
GLfloat vertices [] = {
    // Position (2 elements)    Texcoords (2 elements)
    -1.0f,  1.0f,   0.0f, 0.0f, // Top-left
     1.0f,  1.0f,   1.0f, 0.0f, // Top-right
     1.0f, -1.0f,   1.0f, 1.0f, // Bottom-right
    -1.0f, -1.0f,   0.0f, 1.0f  // Bottom-left
};

GLushort elements [] = {
    0, 1, 2, // indices of the first triangle
    2, 3, 0  // indices of the second triangle
};
// Create a pixel buffer object for the right view
createPBO(&pbo_Right, width, height);
// map OpenGL buffer object for writing from CUDA
cudaGLMapBufferObject((void **)Img_Right, pbo_Right);
// Unmap Buffers
cudaGLUnmapBufferObject(pbo_Right);
// Create a vector buffer object
createVBO(&vbo, vertices, sizeof(vertices));
// Create a element buffer object
createEBO(&ebo, elements, sizeof(elements));
// Create shader program from the shaders
shaderProgram = LoadShader(vertex_file_path, fragment_file_path);
posAttrib = glGetAttribLocation(shaderProgram, "position");
texAttrib = glGetAttribLocation(shaderProgram, "texcoord");
}
EDIT 2: The createPBO function goes like this:
void createPBO(GLuint* pbo_, unsigned int w, unsigned int h) {
    // set up vertex data parameter
    int num_texels = w * h;
    int num_values = num_texels * 4;
    int size_tex_data = sizeof(GLubyte) * num_values;

    // Generate a buffer ID called a PBO (Pixel Buffer Object)
    glGenBuffers(1, pbo_);
    // Make this the current UNPACK buffer (OpenGL is state-based)
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, *pbo_);
    // Allocate data for the buffer. 4-channel 8-bit image
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size_tex_data, NULL, GL_DYNAMIC_COPY);

    cudaGLRegisterBufferObject(*pbo_);
}
Below is the part of my display function related to the OpenGL environment. I am using several glFinish() calls as an experiment, since they seem to improve the execution times.
// Make this the current array buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Make this the current element array buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
// Activate shader program
glUseProgram(shaderProgram);
// Specify the layout of the vertex data
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib,           // index of attribute in array
                      2,                   // size
                      GL_FLOAT,            // type
                      GL_FALSE,            // normalized
                      4 * sizeof(GLfloat), // stride
                      0);                  // offset in the array
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib,           // index of attribute in array
                      2,                   // size
                      GL_FLOAT,            // type
                      GL_FALSE,            // normalized
                      4 * sizeof(GLfloat), // stride
                      (void*)(2 * sizeof(GLfloat))); // offset in the array
glFinish();
// Specifies the target to which the buffer object is bound. GL_PIXEL_UNPACK_BUFFER affects the glTexSubImage2D command.
glBindBuffer( GL_PIXEL_UNPACK_BUFFER, pbo);
// Draw Left Back Buffer
glDrawBuffer(GL_BACK_LEFT);
// Specifies the target (GL_TEXTURE_2D) to which the texture is bound
glBindTexture(GL_TEXTURE_2D, textureID);
// Note: NULL indicates the data resides in device memory
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Draw a rectangle from the 2 triangles using 6 indices
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
// Specifies the target to which the buffer object is bound. GL_PIXEL_UNPACK_BUFFER affects the glTexSubImage2D command.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo_Right);
// Draw Right Back Buffer
glDrawBuffer(GL_BACK_RIGHT);
// Note: NULL indicates the data resides in device memory
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Draw a rectangle from the 2 triangles using 6 indices
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
glFinish();
// Disable and unbind objects
glDisableVertexAttribArray(posAttrib);
glDisableVertexAttribArray(texAttrib);
glUseProgram(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
// Swap buffers
glutSwapBuffers();
glutPostRedisplay();
Is it possible to make some optimizations? Or is this a known problem in stereoscopic 3D?

Something wrong with my VBO

I'm trying to emulate exactly how a game sets up a VBO and draws it to the screen. I've never set one up before and the tutorials all show how to do it with glDrawArrays but I want to use glDrawElements.
I came up with the following:
glViewport(0, 0, 765, 553);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 765, 553, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
xCast(ptr_glActiveTextureARB, ptr_wglGetProcAddress("glActiveTextureARB"));
xCast(ptr_glMultiTexCoord2fARB, ptr_wglGetProcAddress("glMultiTexCoord2fARB"));
xCast(ptr_glGenBuffersARB, ptr_wglGetProcAddress("glGenBuffersARB"));
xCast(ptr_glBindBufferARB, ptr_wglGetProcAddress("glBindBufferARB"));
xCast(ptr_glBufferDataARB, ptr_wglGetProcAddress("glBufferDataARB"));
struct PointInfo
{
    float Pos[3];
    float Colour[3];
};
const int NumVerts = 3, NumInds = 3;
std::vector<PointInfo> Vertices;
Vertices.push_back({{0.0f, 1.0f, 0.0f}, {1, 1, 1}}); ///top left;
Vertices.push_back({{0.5f, 0.0f, 0.0f}, {1, 1, 1}}); ///bottom middle;
Vertices.push_back({{1.0f, 1.0f, 0.0f}, {1, 1, 1}}); ///top right;
std::vector<std::uint32_t> Indices = {0, 1, 2};
std::uint32_t VBO = 0, IBO = 0;
ptr_glGenBuffersARB(1, &VBO);
ptr_glGenBuffersARB(1, &IBO);
///Put Vertices In.
ptr_glBindBufferARB(GL_ARRAY_BUFFER, VBO);
ptr_glBufferDataARB(GL_ARRAY_BUFFER, sizeof(PointInfo) * NumVerts, &Vertices[0], GL_STATIC_DRAW);
Log(glGetError());
///Put Indices In.
ptr_glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, IBO);
ptr_glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER, sizeof(int) * NumInds, &Indices[0], GL_STATIC_DRAW);
Log(glGetError());
I run the above only once at the start of my program. Then in my while loop, I run:
glPushMatrix();
glClearColor(0.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Log(glGetError());
ptr_glBindBufferARB(GL_ARRAY_BUFFER, VBO);
Log(glGetError());
glVertexPointer(3, GL_FLOAT, sizeof(PointInfo), (void*) offsetof(PointInfo, Pos));
Log(glGetError());
glColorPointer(3, GL_FLOAT, sizeof(PointInfo), (void*) offsetof(PointInfo, Colour));
Log(glGetError());
ptr_glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, IBO);
Log(glGetError());
glDrawElements(GL_TRIANGLES, NumInds, GL_UNSIGNED_INT, 0);
Log(glGetError());
glPopMatrix();
SwapBuffers(DC);
Sleep(1);
But the only thing that happens is my screen clearing. I never see my triangle at all. :S I think it might be my view setup via glOrtho, but I'm not sure. Is there anything wrong with what I did? glGetError just prints 0 - no errors. :S
The triangle coordinates you specified are very small. The triangle occupies only half of a pixel at the top left corner of the screen. Try scaling it by 100.
Also I think you're missing calls to glEnableClientState with GL_VERTEX_ARRAY and GL_COLOR_ARRAY.
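For this particular code, those missing client-state calls would fit into the draw loop roughly like this (a sketch, reusing the names from the question):
// Enable the fixed-function arrays before drawing, disable them afterwards.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

ptr_glBindBufferARB(GL_ARRAY_BUFFER, VBO);
glVertexPointer(3, GL_FLOAT, sizeof(PointInfo), (void*) offsetof(PointInfo, Pos));
glColorPointer(3, GL_FLOAT, sizeof(PointInfo), (void*) offsetof(PointInfo, Colour));

ptr_glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, IBO);
glDrawElements(GL_TRIANGLES, NumInds, GL_UNSIGNED_INT, 0);

glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);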
As a general approach I would suggest to take things one step at a time. Start with immediate mode glVertex to make sure you got the coordinates and camera setup right. Then add shaders. Then convert to a position-only VBO with DrawArrays. Then add vertex colors. Then convert to DrawElements. That way you have a better sense of where problems might lie.
You might also be interested in the glload library here to get rid of these ptr_ prefixes.
You should use glVertexAttribPointer. The functions you are using are deprecated. Perhaps you could get this code to work, but if you aren't forced to use such ancient OpenGL, chances are you'd save yourself a lot of trouble by moving on.
Also, manually loading function pointers is extremely cumbersome. I suggest you look at libraries such as GLload.
A specialized debugger such as CodeXL or gDebugger can be very helpful in solving issues like that.
As for the problems in this code, your triangle is simply too small.