OpenGL: Buffer Objects/Shaders going out of scope - c++

I'm learning OpenGL and I have a problem.
I know that all the memory is allocated onto the GPU by OpenGL, so technically scope shouldn't be an issue? However, when I try to do this:
int SetUpVertexShader(){
    glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
    //BUILD AND COMPILE SHADER
    //VERTEX SHADER
    GLuint vertexShader;
    vertexShader = glCreateShader(GL_VERTEX_SHADER);
    //snip - code to compile shader, etc etc
}
int SetUpFragmentShaderandLink(){
    //BUILD AND COMPILE SHADER
    //FRAGMENT SHADER
    GLint success;
    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    //snip - code to compile shader, etc etc
}
int linkShaders(){
    GLuint shaderProgram = glCreateProgram();
    glAttachShader(shaderProgram, vertexShader); //this line has complaints saying vertexShader is an undeclared identifier
    glAttachShader(shaderProgram, fragmentShader); //this line also has complaints saying fragmentShader is an undeclared identifier
    //the rest of the lines all have undeclared identifier complaints
    glLinkProgram(shaderProgram);
    //check linking errors
    glGetProgramiv(shaderProgram, GL_LINK_STATUS, &success);
    if(!success){
        glGetProgramInfoLog(shaderProgram, 512, NULL, infoLog);
        std::cout << "ERROR::SHADER::PROGRAM::LINKING_FAILED" << infoLog << std::endl;
    }
}
Then once I get to the shader-linking function linkShaders, I get complaints that the vertex shader and fragment shader are out of scope (more specifically, I get told that those two are "undeclared identifiers"). If these are allocated on the GPU instead of on the stack, why am I being told they are going out of scope? Am I doing something wrong?

Your variables need to be in scope for you to use them (this is true whether or not OpenGL is involved).
What glCreateShader and glCreateProgram return, and what glAttachShader and glLinkProgram consume, are handles (unique identifiers for specific purposes) to OpenGL objects. Make sure to save them somewhere you can access them when you need to supply them to other OpenGL calls (especially to cleanup functions, once you no longer need the resources you initially requested).
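A minimal sketch of one way to do that, following the question's function names (compile calls and error checks snipped, as in the question): have each setup function return its handle, and pass the handles into linkShaders.
GLuint SetUpVertexShader(){
    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    //snip - compile as before
    return vertexShader; // hand the handle back to the caller
}
GLuint SetUpFragmentShader(){
    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    //snip - compile as before
    return fragmentShader;
}
GLuint linkShaders(GLuint vertexShader, GLuint fragmentShader){
    GLuint shaderProgram = glCreateProgram();
    glAttachShader(shaderProgram, vertexShader);
    glAttachShader(shaderProgram, fragmentShader);
    glLinkProgram(shaderProgram);
    //snip - check GL_LINK_STATUS as before
    glDeleteShader(vertexShader);   // safe once the program is linked
    glDeleteShader(fragmentShader);
    return shaderProgram;
}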

The issue is not the memory itself going out of scope: GPU memory follows a heap pattern, while going out of scope can only happen with stack allocation.
The issue is that the handle to this memory (the GLuint, which lives in CPU memory) is a local variable on the stack and is going out of scope. So you simply have to declare your GLuint handles outside the function bodies.
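For example (a minimal sketch of this answer's suggestion; file scope is used here just for brevity):
GLuint vertexShader = 0;   // declared outside any function body
GLuint fragmentShader = 0; // so they outlive each call

void SetUpVertexShader(){
    vertexShader = glCreateShader(GL_VERTEX_SHADER); // assign, don't redeclare
    //snip - compile as before
}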

Related

OpenGL only binds buffers from the function that created them

I'm trying to write a very barebones game engine to learn how they work internally, and I've gotten to the point where I have a "client" app sending work to the engine. This works so far, but the problem I am having is that my test triangle only renders when I bind the buffer from the "main" function (or wherever the buffers were created).
This is even the case when the buffers are abstracted and have the same functions with the same member values (validated using CLion's debugger), but they still have to be bound in the function that created them.
For example, I have this code to create the buffers and set their data:
...
Venlette::Graphics::Buffer vertexBuffer;
Venlette::Graphics::Buffer indexBuffer;
vertexBuffer.setTarget(GL_ARRAY_BUFFER);
indexBuffer.setTarget(GL_ELEMENT_ARRAY_BUFFER);
vertexBuffer.setData(vertices, sizeof(vertices));
indexBuffer.setData(indices, sizeof(indices));
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, nullptr);
...
where vertices is a C array of 6 floats and indices is an array of 3 unsigned ints.
The buffers are then passed to a "context" to be stored for rendering later, using the following code:
...
context.addBuffer(vertexBuffer);
context.addBuffer(indexBuffer);
context.setNumVertices(3);
context.setNumIndices(3);
context.
...
which calls: addBuffer
Task& task = m_tasks.back();
task.buffers.push_back(std::move(buffer));
This, again, works. The buffers are stored correctly, the buffers still exist when the code reaches this point, and nothing is wrong.
Then it gets to the drawing part, where the buffers are bound, drawn, then unbound, as shown:
...
for (auto& buffer : task.buffers) {
    buffer.upload();
    buffer.bind();
}
glDrawElements(GL_TRIANGLES, task.numIndices, GL_UNSIGNED_INT, nullptr);
for (auto& buffer : task.buffers) {
    buffer.unbind();
}
...
bind and unbind being these functions
void Buffer::bind() const noexcept {
    if (m_id == -1) return;
    glBindBuffer(m_target, m_id);
    spdlog::info("bind() -{}-{}-", m_id, m_target);
}
void Buffer::unbind() const noexcept {
    if (m_id == -1) return;
    glBindBuffer(m_target, 0);
    spdlog::info("unbind() -{}-{}-", m_id, m_target);
}
But this is where nothing works. If I call buffer.bind() from the "doWork" function where the buffers are drawn, nothing renders, but if I call buffer.bind() from the main function I get a white triangle in the middle of the screen.
Even when I bound and then unbound the buffers from the main function, in case that was the issue, it still doesn't draw. It only draws when the buffers are bound, and remain bound, from the main function.
A pastebin of the full code (no headers):
pastebin
Does anyone know why this happens, even if you don't know how to fix it? Is it something to do with buffer lifetime, or with moving the buffer into the vector?
It is just buffer.bind() that doesn't work; uploading data works from the context, just not binding it.
You do not seem to bind the vertex buffer to the GL_ARRAY_BUFFER binding point before calling glVertexAttribPointer.
glVertexAttribPointer uses the buffer currently bound to GL_ARRAY_BUFFER to determine which buffer is the vertex attribute source for that generic vertex attribute.
So, you should bind vertexBuffer before calling glVertexAttribPointer.
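For example (a sketch, assuming Buffer::bind() issues glBindBuffer on the GL_ARRAY_BUFFER target set with setTarget earlier):
vertexBuffer.bind(); // bind to GL_ARRAY_BUFFER first
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, nullptr); // now sources from vertexBuffer
glVertexAttribPointer captures the buffer binding at call time, so after this, attribute 0 reads from vertexBuffer regardless of which buffer happens to be bound later.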

OpenCV Mat to OpenGL - error on glBindTexture()

I'm trying to convert an OpenCV cv::Mat image to an OpenGL texture (I need a GLuint that points to the texture). Here's the code I have so far, pieced together from numerous Google searches:
void otherFunction(cv::Mat tex_img) // tex_img is a loaded image
{
    GLuint* texID;
    cvMatToGlTex(tex_img, texID);
    // Operations on the texture here
}

void cvMatToGlTex(cv::Mat tex_img, GLuint* texID)
{
    glGenTextures(1, texID);
    // Crashes on the next line:
    glBindTexture(GL_TEXTURE_2D, *texID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_img.cols, tex_img.rows, 0, GL_BGR, GL_UNSIGNED_BYTE, tex_img.ptr());
    return;
}
The crash happens when calling glBindTexture.
I tried a try-catch for all exceptions with catch (const std::exception& ex) and catch (...), but it didn't seem to throw anything; it just crashed my program.
I apologize if I'm doing anything blatantly wrong, this has been my first experience with OpenGL, and it's still early days for me with C++. I've been stuck on this since yesterday. Do I have a fundamental misunderstanding of OpenGL textures? I'm open to a major reworking of the code if necessary. I'm running Ubuntu 16.04 if that changes anything.
OK, here's what's happening:
void otherFunction(cv::Mat tex_img)
{
    GLuint* texID; // You've created a pointer but not initialised it. It could be
                   // pointing to anywhere in memory.
    cvMatToGlTex(tex_img, texID); // You pass in the value of the GLuint*
}

void cvMatToGlTex(cv::Mat tex_img, GLuint* texID)
{
    glGenTextures(1, texID); // OpenGL will dereference the pointer and write the
                             // value of the new texture ID here. The pointer value
                             // you passed in is indeterminate because you didn't
                             // assign it to anything. Dereferencing a pointer to
                             // memory that you don't own is undefined behaviour.
                             // The reason it's not crashing here is because I'll
                             // bet your compiler initialised the pointer value to
                             // null, and glGenTextures() checks if it's null
                             // before writing to it.
    glBindTexture(GL_TEXTURE_2D, *texID); // Here you dereference an invalid pointer
                                          // yourself. It has a random value, probably
                                          // null, and that's why you get a crash.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_img.cols, tex_img.rows, 0, GL_BGR, GL_UNSIGNED_BYTE, tex_img.ptr());
    return;
}
One way to fix this would be to allocate a GLuint for the pointer to refer to, like:
GLuint* texID = new GLuint; // And pass it in
That will work for starters, but you should be careful about how you go about this. Anything allocated with new must be deleted with a call to delete, or you'll leak memory. Smart pointers delete the object automatically once they go out of scope.
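For instance (a sketch, assuming C++11 or later; the helper name is made up):
#include <memory>

GLuint makeTexture(cv::Mat tex_img)
{
    std::unique_ptr<GLuint> texID(new GLuint(0)); // deleted automatically on scope exit
    cvMatToGlTex(tex_img, texID.get());           // pass the raw pointer through
    return *texID; // note: the GL texture itself still needs glDeleteTextures() eventually
}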
You could also do this:
GLuint texID;
cvMatToGlTex(tex_img, &texID);
Remember, though, that the lifetime of GLuint texID is only within the function you declared it in, because it's a local variable. I'm not sure of a better way to do this because I don't know what your program looks like, but at least that should make the crash go away.

(OpenGL) Uniform is preserved after context loss?

Is glUniform needed again after context loss?
In other words, is uniform data preserved even though the context is lost?
e.g.
glUseProgram(program);
glUniform1i(location, 123);
glUseProgram(0);
/* ... !! Context is lost !! ... */
glUseProgram(program);
glUniform1i(location, 123); // <- needed? uniform is preserved?
glUseProgram(0);
If the OpenGL context is lost, everything is gone. And yes, that includes program objects and their state.
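In practice that means re-running your whole setup path once a new context is available; a sketch (the helper and variable names are made up):
void onContextRestored()
{
    program = buildAndLinkProgram(vertexSrc, fragmentSrc); // recreate the program object
    location = glGetUniformLocation(program, "u_value");   // locations can change after a relink
    glUseProgram(program);
    glUniform1i(location, 123); // uniform state must be uploaded again too
    glUseProgram(0);
}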

Do uniform values remain in GLSL shader if unbound?

I am writing a program that uses two different shaders for different primitives. My question is: if I bind a program, send it uniform variables, then use another shader program and come back to the first one, will the passed uniform values remain? Here is some pseudocode:
glUseProgram(shader1);
glUniform(shader1, ...);
//stuff
for(elements in a list) {
    if(element.type == 1) {
        glUseProgram(shader2);
        element.draw();
    } else {
        glUseProgram(shader1); //Here, do the uniforms from above remain, if shader2 was bound before?
        element.draw();
    }
}
Yes, uniforms are program object-specific state, and they persist if you unbind and rebind the program.
Also, if you want, you could easily verify this yourself in that sample with glGetUniform.
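A sketch of such a check, using glGetUniformiv to read the value back out of the program object (location as in the question):
GLint stored = 0;
glUseProgram(shader1);
glUniform1i(location, 123);
glUseProgram(shader2);                      // switch away...
glUseProgram(shader1);                      // ...and back
glGetUniformiv(shader1, location, &stored); // read the uniform back out
std::cout << stored << std::endl;           // prints 123: the value persisted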
From the OpenGL 4.1 Specification:
2.11.7 Uniform Variables
... Uniforms are program object-specific state. They retain their values once loaded, and their values are restored whenever a program object is used, as long as the program object has not been re-linked. ...

OpenGL 2d texture not working

I'm working on a 2D game project, and I wanted to wrap the OpenGL texture in a simple class. The texture is read from a 128x128 px .png (with alpha channel) using libpng. Since the amount of code is pretty large, I'm using pastebin.
The code files:
Texture class: http://pastebin.com/gbGMEF2Z
PngReader class: http://pastebin.com/h6uP5Uc8 (seems to work okay, so I removed the description).
OpenGl code: http://pastebin.com/PVhwnDif
To avoid wasting your time, I will explain the code a little bit:
Texture class: a wrapper for an OpenGL texture. The loadData function sets up the texture in GL (this is the function I suspect doesn't work).
OpenGL code: the debugSetTexture function puts a texture in the temp variable, which is used in the graphicsDraw() function. This is because it is not in the same source file as main(). In the graphicsMainLoop() function, I use the Fork() function, which in fact calls fork() and stores the pid of the spawned process.
From main(), this is what I do:
Strategy::IO::PngReader reader ("/cygdrive/c/Users/Tibi/Desktop/128x128.png");
reader.read();
grahpicsInit2D(&argc, argv);
debugSetTexture(reader.generateTexture());
graphicsMainLoop();
reader.close();
I tried an application called gDEBugger, and in the texture viewer there was a texture generated, but its size was 0x0 px.
I suspect that the problem happens when the texture is loaded using Texture::loadTexture().
You need to check GL error codes after GL calls.
For example add this method to your class:
GLuint Texture::checkError(const char *context)
{
    GLuint err = glGetError();
    if (err > 0) {
        std::cout << "0x" << std::hex << err << " glGetError() in " << context
                  << std::endl;
    }
    return err;
}
then call it like so:
glBindTexture(GL_TEXTURE_2D, handle);
checkError("glBindTexture");
Assuming it succeeds in loading the png file, suppose your program fails in glBindTexture? (strong hint)
You did call your Error function for your file handling, but does your program halt then, or chug on?
Here's a serious issue: Texture PngReader::generateTexture() returns Texture by value. This will cause your Texture object to be copied on return (handle and all), and then ~Texture() to be called, destroying the stack-based copy. So your program will call glDeleteTextures a couple of times!
If you want to return it by value, you could wrap it in a shared_ptr<> which does reference counting. This would cause the destructor to be called only once:
#include <tr1/memory>
typedef std::tr1::shared_ptr<Texture> TexturePtr;
Use TexturePtr as your return type. Initialize it in generateTexture() like this:
TexturePtr t(new Texture);
Then change all the member accesses to go through -> instead of the . operator.
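Putting it together, generateTexture() might end up looking like this (a sketch; the loadData call and the member names are assumptions based on the question's description):
TexturePtr PngReader::generateTexture()
{
    TexturePtr t(new Texture);
    t->loadData(m_pixels, m_width, m_height); // hypothetical members holding the decoded png
    return t; // only the shared_ptr is copied, so ~Texture() runs exactly once
}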