OpenCV Mat to OpenGL - error on glBindTexture() - c++

I'm trying to convert an OpenCV cv::Mat image to an OpenGL texture (I need a GLuint that points to the texture). Here's the code I have so far, pieced together from numerous Google searches:
void otherFunction(cv::Mat tex_img) // tex_img is a loaded image
{
GLuint* texID;
cvMatToGlTex(tex_img, texID);
// Operations on the texture here
}
void cvMatToGlTex(cv::Mat tex_img, GLuint* texID)
{
glGenTextures(1, texID);
// Crashes on the next line:
glBindTexture(GL_TEXTURE_2D, *texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_img.cols, tex_img.rows, 0, GL_BGR, GL_UNSIGNED_BYTE, tex_img.ptr());
return;
}
The crash happens when calling glBindTexture().
I tried wrapping the call in a try-catch for all exceptions, with catch (const std::exception& ex) and catch (...), but nothing seems to be thrown; the program just crashes.
I apologize if I'm doing something blatantly wrong; this is my first experience with OpenGL, and it's still early days for me with C++. I've been stuck on this since yesterday. Do I have a fundamental misunderstanding of OpenGL textures? I'm open to a major reworking of the code if necessary. I'm running Ubuntu 16.04, if that changes anything.

OK, here's what's happening:
void otherFunction(cv::Mat tex_img)
{
GLuint* texID; // You've created a pointer but not initialised it. It could be pointing
// to anywhere in memory.
cvMatToGlTex(tex_img, texID); // You pass in the value of the GLuint*
}
void cvMatToGlTex(cv::Mat tex_img, GLuint* texID)
{
glGenTextures(1, texID); // OpenGL will dereference the pointer and write the
// value of the new texture ID here. The pointer value you passed in is
// indeterminate because you didn't assign it to anything.
// Dereferencing a pointer pointing to memory that you don't own is
// undefined behaviour. The reason it's not crashing here is because I'll
// bet your compiler initialised the pointer value to null, and
// glGenTextures() checks if it's null before writing to it.
glBindTexture(GL_TEXTURE_2D, *texID); // Here you dereference an invalid pointer yourself.
// It has a random value, probably null, and that's
// why you get a crash
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_img.cols, tex_img.rows, 0, GL_BGR, GL_UNSIGNED_BYTE, tex_img.ptr());
return;
}
One way to fix this would be to allocate a GLuint on the heap and pass its pointer in:
GLuint* texID = new GLuint; // And pass it in
That will work for starters, but you should be careful about how you go about it. Anything allocated with new has to be released with a matching delete or you'll leak memory; a smart pointer deletes the object automatically once it goes out of scope.
You could also do this:
GLuint texID;
cvMatToGlTex(tex_img, &texID);
Remember, though, that the lifetime of GLuint texID is limited to the function you declared it in, because it's a local variable. I can't say what the best structure is without knowing what the rest of your program looks like, but at least this should make the crash go away.
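Putting that together, here's a minimal sketch of the corrected pair of functions (assuming a valid OpenGL context is current on this thread and that tex_img is a continuous 8-bit BGR cv::Mat; the glPixelStorei call and the filter parameters are precautions, not something from your original code):
void cvMatToGlTex(const cv::Mat& tex_img, GLuint* texID)
{
    glGenTextures(1, texID);
    glBindTexture(GL_TEXTURE_2D, *texID);

    // Row widths that aren't a multiple of 4 bytes trip over the default
    // unpack alignment, so relax it just in case.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_img.cols, tex_img.rows, 0,
                 GL_BGR, GL_UNSIGNED_BYTE, tex_img.ptr());

    // Without mipmaps, the default min filter leaves the texture incomplete.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

void otherFunction(cv::Mat tex_img)
{
    GLuint texID = 0;              // a real GLuint, not an uninitialised pointer
    cvMatToGlTex(tex_img, &texID); // pass its address
    // ... use texID, and eventually glDeleteTextures(1, &texID);
}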

Related

Opengl only binds buffers from function that created them

I'm trying to write a very barebones game engine to learn how they work internally, and I've gotten to the point where I have a "client" app sending work to the engine. This works so far, but the problem I am having is that my test triangle only renders when I bind the buffer from the "main" function (or wherever the buffers were created).
This is the case even when the buffers are abstracted and the objects have the same member values (validated using CLion's debugger); they still have to be bound in the function that created them.
for example I have this code to create the buffers and set their data
...
Venlette::Graphics::Buffer vertexBuffer;
Venlette::Graphics::Buffer indexBuffer;
vertexBuffer.setTarget(GL_ARRAY_BUFFER);
indexBuffer.setTarget(GL_ELEMENT_ARRAY_BUFFER);
vertexBuffer.setData(vertices, sizeof(vertices));
indexBuffer.setData(indices, sizeof(indices));
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, nullptr);
...
where vertices is a C array of 6 floats and indices is an array of 3 unsigned ints
the buffers are then passed to a "context" to be stored for rendering later using the following code
...
context.addBuffer(vertexBuffer);
context.addBuffer(indexBuffer);
context.setNumVertices(3);
context.setNumIndices(3);
context.
...
which calls: addBuffer
Task& task = m_tasks.back();
task.buffers.push_back(std::move(buffer));
This, again, works. The buffers are stored correctly, they still exist when the code reaches this point, and nothing is wrong.
Then it gets to the drawing part, where the buffers are bound, drawn, then unbound, as shown:
...
for (auto& buffer : task.buffers) {
buffer.upload();
buffer.bind();
}
glDrawElements(GL_TRIANGLES, task.numIndices, GL_UNSIGNED_INT, nullptr);
for (auto& buffer : task.buffers) {
buffer.unbind();
}
...
bind and unbind are these functions:
void Buffer::bind() const noexcept {
if (m_id == -1) return;
glBindBuffer(m_target, m_id);
spdlog::info("bind() -{}-{}-", m_id, m_target);
}
void Buffer::unbind() const noexcept {
if (m_id == -1) return;
glBindBuffer(m_target, 0);
spdlog::info("unbind() -{}-{}-", m_id, m_target);
}
But this is where nothing works. If I call buffer.bind() from the "doWork" function where the buffers are drawn, nothing renders, but if I call buffer.bind() from the main function I get a white triangle in the middle of the screen.
Even when I bound and then unbound the buffers from the main function, in case that was the issue, it still doesn't draw. Only when the buffers are bound, and remain bound, from the main function does it draw.
A pastebin of the full code (no headers): pastebin
Does anyone know why this happens, even if you don't know how to fix it? Is it something to do with buffer lifetime, or with moving the buffer into the vector?
It is just buffer.bind() that doesn't work; uploading data from the context works, just not binding.
You do not seem to bind the vertex buffer to the GL_ARRAY_BUFFER binding point before calling glVertexAttribPointer.
glVertexAttribPointer uses the buffer bound to GL_ARRAY_BUFFER in order to know which buffer is the vertex attribute source for that generic vertex attribute.
So, you should bind the vertexBuffer before calling glVertexAttribPointer.
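In terms of the setup code from the question, a minimal sketch of the fix would look something like this (assuming Buffer::bind() wraps glBindBuffer as shown above):
vertexBuffer.setTarget(GL_ARRAY_BUFFER);
indexBuffer.setTarget(GL_ELEMENT_ARRAY_BUFFER);
vertexBuffer.setData(vertices, sizeof(vertices));
indexBuffer.setData(indices, sizeof(indices));

vertexBuffer.bind(); // bind to GL_ARRAY_BUFFER *before* describing the attribute
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 2, nullptr);
// the attribute now sources its data from vertexBuffer, which is the state
// glVertexAttribPointer captures at the moment it is called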

How to let a C++ compiler know that a function is `Idempotent`

I'm writing an OpenGL C++ wrapper. The wrapper aims to reduce complex and error-prone usage.
For example, I currently want the user to pay only minimal attention to the OpenGL context. To do this, I wrote a class gl_texture_2d. As we all know, an OpenGL texture basically supports the following operations:
Set its u/v parameters to repeat/mirror and so on
Set its min/mag filter to linear
...
Based on this, we have:
class gl_texture_2d
{
public:
void mirror_u(); // set u parameter as mirror model
void mirror_v(); // set v parameter as mirror model
void linear_min_filter(); // ...
void linear_mag_filter(); // ...
};
Well, we know that we can perform these operations only if the handle of the OpenGL texture object is currently bound to the OpenGL context.
Suppose we have a function that does this:
void bind(GLuint htex); // actually an alias of related GL function
Ok, we can now design our gl_texture_2d usage as:
gl_texture_2d tex;
bind(tex.handle());
tex.mirror_u();
tex.linear_min_filter();
unbind(tex.handle());
It conforms to GL's logic, but it loses the wrapper's significance, right? As a user, I wish to operate like:
gl_texture_2d tex;
tex.mirror_u();
tex.linear_min_filter();
To achieve this, we would have to implement the function like:
void gl_texture_2d::mirror_u()
{
glBindTexture(GL_TEXTURE_2D, handle());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glBindTexture(GL_TEXTURE_2D, 0);
}
Always doing the binding operation internally makes sure that the operation is valid, but the cost is high!
The code:
tex.mirror_u();
tex.mirror_v();
will expand into pairs of meaningless binding/unbinding operations.
So is there any mechanism by which the compiler can know that:
If bind(b) immediately follows bind(a), then bind(a) can be removed;
If bind(a) occurs twice in a block, the second occurrence has no effect.
If you're working with pre-DSA OpenGL, and you absolutely must wrap OpenGL calls this directly with your own API, then the user is probably going to have to know about the whole bind-to-edit thing. After all, if they've bound a texture for rendering purposes and then try to modify one, it could disturb the current binding.
As such, you should build the bind-to-edit notion directly into the API.
That is, a texture object (which, BTW, should not be limited to just 2D textures) shouldn't actually have functions for modifying it, since you cannot modify an OpenGL texture without binding it (unless you use DSA, which you really ought to learn). It shouldn't have mirror_u and so forth; those functions should be part of a binder object:
bound_texture bind(some_texture, tex_unit);
bind.mirror_u();
...
The constructor of bound_texture binds some_texture to tex_unit. Its member functions will modify that texture (note: they need to call glActiveTexture to make sure that nobody has changed the active texture unit).
The destructor of bound_texture should automatically unbind the texture. But you should have a release member function that manually unbinds it.
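A minimal sketch of such a binder, assuming a gl_texture_2d with a handle() accessor as in the question and hardcoding GL_TEXTURE_2D for brevity, might look like this:
class bound_texture
{
public:
    bound_texture(gl_texture_2d& tex, GLuint tex_unit)
        : m_unit(tex_unit)
    {
        glActiveTexture(GL_TEXTURE0 + m_unit);
        glBindTexture(GL_TEXTURE_2D, tex.handle());
        m_bound = true;
    }

    ~bound_texture() { release(); }

    void mirror_u()
    {
        glActiveTexture(GL_TEXTURE0 + m_unit); // in case someone changed the active unit
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
    }

    void release() // manual unbind; also called by the destructor
    {
        if (!m_bound) return;
        glActiveTexture(GL_TEXTURE0 + m_unit);
        glBindTexture(GL_TEXTURE_2D, 0);
        m_bound = false;
    }

private:
    GLuint m_unit;
    bool   m_bound = false;
};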
You're not going to be able to do this at a compilation level. Instead, if you're really worried about the time costs of these kinds of mistakes, a manager object might be the way to go:
class state_manager {
GLuint current_texture = 0; // 0 is never a valid texture name from glGenTextures
/*Maybe other stuff?*/
public:
void bind_texture(gl_texture_2d const& tex) {
if(tex.handle() != current_texture) {
current_texture = tex.handle();
glBindTexture(/*...*/, current_texture);
}
}
};
int main() {
state_manager manager;
/*...*/
gl_texture_2d tex;
manager.bind_texture(tex);
manager.bind_texture(tex); //Won't execute the bind twice in a row!
/*Do Stuff with tex bound*/
}

How does glDeleteTextures and glDeleteBuffers work?

Basically, in my code I hook the glDeleteTextures and glBufferData functions. I store a list of textures and a list of buffers. The buffer list holds checksums and pointers to the buffers. The code below intercepts the data before it reaches the graphics card.
void Hook_glDeleteTextures(GLsizei n, const GLuint* textures)
{
for (int I = 0; I < n; ++I)
{
if (ListOfTextures[I] == textures[I]) //??? Not sure if correct..
{
//Erase them from list of textures..
}
}
(*original_glDeleteTextures)(n, textures);
}
And I do the same thing for my buffers. I save the buffers and textures to a list like below:
void Hook_glBufferData(GLenum target, GLsizeiptr size, const GLvoid* data, GLenum usage)
{
Buffer.size = size;
Buffer.target = target;
Buffer.data = data;
Buffer.usage = usage;
ListOfBuffers.push_back(Buffer);
(*original_glBufferData)(target, size, data, usage);
}
Now I need to delete whenever the client deletes. How can I do this? I used a debugger and it seems to know exactly which textures and buffers are being deleted.
Am I doing it wrong? Should I be iterating the pointers passed and deleting the textures?
You do realize that you should do it the other way round: have a list of texture-info objects, and when you delete one of them, call OpenGL to delete the texture. BTW: OpenGL calls don't go to the graphics card; they go to the driver, and textures may not be stored in GPU memory at all but be swapped out to system memory.
Am I doing it wrong? Should I be iterating the pointers passed and deleting the textures?
Yes. You should not intercept OpenGL calls to trigger data management in your program. For one, you'd have to track the active OpenGL context as well. But more importantly, it's your program that does the OpenGL calls in the first place. And unless your program/compiler/CPU is schizophrenic, it should be easier to track the data first and manage the OpenGL objects accordingly. Also, the usual approach is to keep the texture image data in a cache, but delete the OpenGL textures for those images if you don't need them right now yet may need them again in the near future.
Your approach is basically inside-out, you're putting the cart before the horse.
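As a rough sketch of the inversion both answers suggest (TextureInfo is a hypothetical type, not from the question's code; a GL header is assumed to be included): the application owns texture-info objects, and destroying one of them is what calls glDeleteTextures, instead of intercepting the GL call to update your bookkeeping:
#include <cstddef>
#include <memory>
#include <vector>

// The object owns the GL name, so deleting the object deletes the texture --
// no hooking of glDeleteTextures needed.
struct TextureInfo
{
    GLuint id = 0;

    TextureInfo()  { glGenTextures(1, &id); }
    ~TextureInfo() { if (id) glDeleteTextures(1, &id); }

    // non-copyable: exactly one owner per GL texture name
    TextureInfo(const TextureInfo&) = delete;
    TextureInfo& operator=(const TextureInfo&) = delete;
};

// The application tracks its textures itself...
std::vector<std::unique_ptr<TextureInfo>> g_textures;

// ...and dropping an entry releases the GL texture as a side effect.
void removeTexture(std::size_t index)
{
    g_textures.erase(g_textures.begin() + index); // ~TextureInfo runs here
}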

use opengl in thread

I have a library that does the OpenGL rendering and receives streams from the network.
I'm writing it on a Mac but plan to use it on Linux as well,
so the window is created in Objective-C.
I start drawing in a separate thread; in another thread I receive and decode the data.
I get a crash (EXC_BAD_ACCESS) in OpenGL functions, even if I use them only in a single thread.
my code
main glut:
int main(int argc, const char * argv[]){
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA);
int win = glutGetWindow();
glutInitWindowSize(800, 600);
glutCreateWindow("OpenGL lesson 1");
client_init(1280, 720, win, "192.168.0.98", 8000, 2222);
return 0;
}
or Objective-C:
- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format{
self = [super initWithFrame:frameRect];
if (self != nil) {
NSOpenGLPixelFormatAttribute attributes[] = {
NSOpenGLPFANoRecovery,
NSOpenGLPFAFullScreen,
NSOpenGLPFAScreenMask,
CGDisplayIDToOpenGLDisplayMask(kCGDirectMainDisplay),
(NSOpenGLPixelFormatAttribute) 0
};
_pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
if (!_pixelFormat)
{
return nil;
}
//_pixelFormat = [format retain];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(_surfaceNeedsUpdate:)
name:NSViewGlobalFrameDidChangeNotification
object:self];
_openGLContext = [self openGLContext];
client_init(1280, 720, win, "192.168.0.98", 8000, 2222);
}
return self;
}
client_init code
// pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, dh_tmp);
pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, NULL);
void* ShowThread(struct drawhandle * dh){
//glViewport(0, 0, dh->swidth, dh->sheight);//EXT_BAD_ACCESS
glViewport(0, 0, 1280, 720);//EXT_BAD_ACCESS
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
//gluOrtho2D(0, dh->swidth, 0, dh->sheight);
gluOrtho2D(0, 1280, 0, 720);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
...
return 0;
}
I think the problem is that no OpenGL context has been created for this thread.
How do I create it on macOS / Linux?
This thread has no current OpenGL context. Even if you did create a context earlier in the program (not visible in your snippet), it will not be current in the thread you launch.
An OpenGL context is always, with no exceptions, "current" for exactly one thread at a time. By default this is the thread that created the context. Any thread calling OpenGL must be made "current" first.
You must either create the context in this thread, or call glXMakeCurrent (Unix/Linux) or aglMakeCurrent (Mac) or wglMakeCurrent (Windows) inside ShowThread (before doing anything else related to OpenGL).
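For the Linux/GLX case, a minimal sketch of what the thread entry point needs to do first might look like this (assuming the Display*, Window and GLXContext were created on the main thread and handed to the worker; GlThreadArgs and its members are illustrative names, not from your code):
#include <GL/glx.h>

struct GlThreadArgs              // hypothetical bundle passed to pthread_create
{
    Display*   display;
    Window     window;
    GLXContext context;          // created on the main thread, not current anywhere
};

void* ShowThread(void* p)
{
    GlThreadArgs* args = static_cast<GlThreadArgs*>(p);

    // Make the context current in *this* thread before any GL call.
    if (!glXMakeCurrent(args->display, args->window, args->context))
        return nullptr;

    glViewport(0, 0, 1280, 720); // now safe: this thread has a current context
    // ... rest of the rendering loop ...

    glXMakeCurrent(args->display, None, nullptr); // release before exiting
    return nullptr;
}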
(Probably not the reason for the crash, though... see datenwolf's answer for the likely cause. Nevertheless, it's wrong.)
OpenGL and multithreading are on difficult terms. It can be done, but it requires some care. First and foremost, an OpenGL context can be active in only one thread at a time. And on some systems, like Windows, extension function pointers are per-context, so with different contexts in different threads you may end up with different extension function pointers, which must be accounted for.
So there's problem number one: you've probably got no OpenGL context current on this thread. But that alone should not crash a call to a non-extension function; it would just do nothing.
If it really crashes on the line you indicated, then the dh pointer is invalid for sure; it's the only explanation. A pointer in C is just a number that's interpreted in a special way. If you pass pointers around, especially as a parameter to a callback or thread function, then the object the pointer points to must not become invalid until it's guaranteed the pointer can no longer be accessed. Which means: you must not do this with objects you create on the stack, i.e. with C automatic storage.
This will break:
void foo(void)
{
struct drawhandle dh_tmp;
pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, &dh_tmp);
}
Why? Because the moment foo returns, the object dh_tmp becomes invalid. But &dh_tmp (the pointer to it) is just a number, and that number will not "magically" turn to zero the moment dh_tmp becomes invalid.
You must allocate it on the heap for this to work. Of course, that raises the question of when to free the memory again.
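A minimal sketch of the heap-allocated variant, with the thread taking ownership and freeing the struct itself (one common way to answer the "when to free" question; swidth/sheight are assumed members of drawhandle, as the commented-out code suggests):
#include <pthread.h>

void* ShowThread(void* p)
{
    struct drawhandle* dh = static_cast<struct drawhandle*>(p);

    glViewport(0, 0, dh->swidth, dh->sheight);
    // ... rendering ...

    delete dh;        // the thread owns the struct and frees it when done
    return nullptr;
}

void foo(void)
{
    // Heap allocation: stays valid after foo() returns.
    drawhandle* dh = new drawhandle;
    dh->swidth  = 1280;
    dh->sheight = 720;

    pthread_t tid;
    pthread_create(&tid, NULL, ShowThread, dh);
}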

OpenGL 2d texture not working

I'm working on a 2D game project, and I wanted to wrap the OpenGL texture in a simple class. The texture is read from a 128x128 px .png (with alpha channel) using libpng. Since the amount of code is pretty large, I'm using pastebin.
The code files:
Texture class: http://pastebin.com/gbGMEF2Z
PngReader class: http://pastebin.com/h6uP5Uc8 (seems to work okay, so I removed the description).
OpenGl code: http://pastebin.com/PVhwnDif
To avoid wasting your time, I will explain the code a little bit:
Texture class: a wrapper for an OpenGL texture. The loadData function sets up the texture in GL (this is the function I suspect doesn't work).
OpenGL code: the debugSetTexture function puts a texture in the temp variable, which is used in the graphicsDraw() function; this is because it is not in the same source file as main(). In the graphicsMainLoop() function, I use the Fork() function, which in fact calls fork() and stores the pid of the spawned process.
From main(), this is what I do:
Strategy::IO::PngReader reader ("/cygdrive/c/Users/Tibi/Desktop/128x128.png");
reader.read();
grahpicsInit2D(&argc, argv);
debugSetTexture(reader.generateTexture());
graphicsMainLoop();
reader.close();
I tried an application called gDEBugger, and in the texture viewer there was a texture generated, but its size was 0x0 px.
I suspect that the problem happens when the texture is loaded using Texture::loadTexture().
You need to check GL error codes after GL calls.
For example add this method to your class:
GLuint Texture::checkError(const char *context)
{
GLuint err = glGetError();
if (err > 0 ) {
std::cout << "0x" << std::hex << err << " glGetError() in " << context
<< std::endl;
}
return err;
}
then call it like so:
glBindTexture(GL_TEXTURE_2D, handle);
checkError("glBindTexture");
Assuming it succeeds in loading the png file, suppose your program fails in glBindTexture? (strong hint)
You did call your Error function for your file handling, but does your program halt then or chug on?
Here's a serious issue: Texture PngReader::generateTexture() returns Texture by value. This will cause your Texture object to be copied on return (handle and all) and then ~Texture() to be called, destroying the stack-based copy. So your program will call glDeleteTextures a couple times!
If you want to return it by value, you could wrap it in a shared_ptr<> which does reference counting. This would cause the destructor to be called only once:
#include <tr1/memory>
typedef std::tr1::shared_ptr<Texture> TexturePtr;
Use TexturePtr as your return type. Initialize it in generateTexture() like this:
TexturePtr t(new Texture);
Then change all the member accesses to go through -> instead of .
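A rough sketch of what that change might look like (the body of generateTexture is elided because it depends on your pastebin code; only the return type and the shared_ptr handling are the point here):
#include <tr1/memory>

typedef std::tr1::shared_ptr<Texture> TexturePtr;

TexturePtr PngReader::generateTexture()
{
    TexturePtr t(new Texture);
    // ... fill the texture exactly as before, but through t-> instead of . ...
    return t;   // copying the shared_ptr only bumps a reference count;
                // ~Texture() (and glDeleteTextures) runs exactly once,
                // when the last TexturePtr goes away
}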