I'm working on a 2D game project, and I wanted to wrap the OpenGL texture in a simple class. The texture is read from a 128x128px .png (with alpha channel) using libpng. Since the amount of code is pretty large, I'm using pastebin.
The code files:
Texture class: http://pastebin.com/gbGMEF2Z
PngReader class: http://pastebin.com/h6uP5Uc8 (seems to work okay, so I removed the description).
OpenGl code: http://pastebin.com/PVhwnDif
To avoid wasting your time, I will explain the code a little bit:
Texture class: a wrapper for an OpenGL texture. The loadData function sets up the texture in GL (this is the function I suspect doesn't work).
OpenGL code: the debugSetTexture function puts a texture in the temp variable, which is used in the graphicsDraw() function. This is because it is not in the same source file as main(). In the graphicsMainLoop() function, I use the Fork() function, which in fact calls fork() and stores the pid of the spawned process.
From main(), this is what I do:
Strategy::IO::PngReader reader ("/cygdrive/c/Users/Tibi/Desktop/128x128.png");
reader.read();
graphicsInit2D(&argc, argv);
debugSetTexture(reader.generateTexture());
graphicsMainLoop();
reader.close();
I tried an application called gDEBugger, and in the texture viewer there was a texture generated, but its size was 0x0 px.
I suspect that the problem happens when the texture is loaded using Texture::loadTexture().
You need to check GL error codes after GL calls.
For example, add this method to your class:
GLuint Texture::checkError(const char *context)
{
    GLuint err = glGetError();
    if (err > 0) {
        std::cout << "0x" << std::hex << err << " glGetError() in " << context
                  << std::endl;
    }
    return err;
}
then call it like so:
glBindTexture(GL_TEXTURE_2D, handle);
checkError("glBindTexture");
Assuming it succeeds in loading the png file, suppose your program fails in glBindTexture? (strong hint)
You did call your Error function for your file handling, but does your program halt at that point or chug on?
Here's a serious issue: Texture PngReader::generateTexture() returns Texture by value. This will cause your Texture object to be copied on return (handle and all) and then ~Texture() to be called, destroying the stack-based copy. So your program will call glDeleteTextures a couple times!
If you want to return it by value, you could wrap it in a shared_ptr<> which does reference counting. This would cause the destructor to be called only once:
#include <tr1/memory>
typedef std::tr1::shared_ptr<Texture> TexturePtr;
Use TexturePtr as your return type. Initialize it in generateTexture() like this:
TexturePtr t(new Texture);
Then change all the method accesses to go through -> instead of .
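For illustration, a minimal sketch of what generateTexture() could then return (the loadData call and the member names here are assumptions, not the exact pastebin API):

TexturePtr PngReader::generateTexture()
{
    TexturePtr t(new Texture);
    t->loadData(m_width, m_height, m_pixels); // hypothetical members; note -> instead of .
    return t; // only the shared_ptr is copied, so ~Texture() runs exactly once
}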
I'm trying to write a very barebones game engine to learn how they work internally, and I've gotten to the point where I have a "client" app sending work to the engine. This works so far, but the problem I am having is that my test triangle only renders when I bind the buffer from the "main" function (or wherever the buffers were created).
This is even the case when the buffers are abstracted and have the same function with the same member values (validated using CLion's debugger), but they still have to be bound in the function that created them.
For example, I have this code to create the buffers and set their data:
...
Venlette::Graphics::Buffer vertexBuffer;
Venlette::Graphics::Buffer indexBuffer;
vertexBuffer.setTarget(GL_ARRAY_BUFFER);
indexBuffer.setTarget(GL_ELEMENT_ARRAY_BUFFER);
vertexBuffer.setData(vertices, sizeof(vertices));
indexBuffer.setData(indices, sizeof(indices));
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, nullptr);
...
where vertices is a C array of 6 floats and indices is an array of 3 unsigned ints.
The buffers are then passed to a "context" to be stored for rendering later, using the following code:
...
context.addBuffer(vertexBuffer);
context.addBuffer(indexBuffer);
context.setNumVertices(3);
context.setNumIndices(3);
context.
...
which calls addBuffer:
Task& task = m_tasks.back();
task.buffers.push_back(std::move(buffer));
This, again, works. The buffers are stored correctly, they still exist when the code reaches this point, and nothing is wrong.
Then it gets to the drawing part, where the buffers are bound, drawn, then unbound, as shown:
...
for (auto& buffer : task.buffers) {
    buffer.upload();
    buffer.bind();
}
glDrawElements(GL_TRIANGLES, task.numIndices, GL_UNSIGNED_INT, nullptr);
for (auto& buffer : task.buffers) {
    buffer.unbind();
}
...
bind and unbind are these functions:
void Buffer::bind() const noexcept {
    if (m_id == -1) return;
    glBindBuffer(m_target, m_id);
    spdlog::info("bind() -{}-{}-", m_id, m_target);
}

void Buffer::unbind() const noexcept {
    if (m_id == -1) return;
    glBindBuffer(m_target, 0);
    spdlog::info("unbind() -{}-{}-", m_id, m_target);
}
But this is where nothing works. If I call buffer.bind() from the "doWork" function where the buffers are drawn, nothing renders, but if I call buffer.bind() from the main function I get a white triangle in the middle of the screen.
Even when I bound and then unbound the buffers from the main function, in case that was the issue, it still doesn't draw. Only when the buffers are bound, and remain bound, from the main function does it draw.
A pastebin of the full code (no headers): pastebin
Does anyone know why this happens, even if you don't know how to fix it? Is it something to do with buffer lifetime, or with moving the buffer into the vector?
It is just buffer.bind() that doesn't work; uploading data works from the context, just not binding.
You do not seem to bind the vertex buffer to the GL_ARRAY_BUFFER binding point before calling glVertexAttribPointer.
glVertexAttribPointer uses the buffer bound to GL_ARRAY_BUFFER in order to know which buffer is the vertex attribute source for that generic vertex attribute.
So, you should bind the vertexBuffer before calling glVertexAttribPointer.
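As a minimal sketch of that fix (assuming Buffer::bind() issues glBindBuffer for the target set via setTarget(), as the pastebin suggests), the creation code would become:

Venlette::Graphics::Buffer vertexBuffer;
Venlette::Graphics::Buffer indexBuffer;
vertexBuffer.setTarget(GL_ARRAY_BUFFER);
indexBuffer.setTarget(GL_ELEMENT_ARRAY_BUFFER);
vertexBuffer.setData(vertices, sizeof(vertices));
indexBuffer.setData(indices, sizeof(indices));

vertexBuffer.bind(); // bind to GL_ARRAY_BUFFER so the attribute pointer refers to this buffer
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 2, nullptr);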
I'm trying to use multiple threads to make one function run concurrently with another, but when the function that the new thread is running uses a static function, it always returns 0 for some reason.
I'm using Boost for the threading, on Linux, and the static functions work exactly as expected when not using threads. I'm pretty sure this isn't a data race issue because if I join the thread directly after making it (not giving any other code a chance to change anything), the problem persists.
The function that the thread is created in:
void WorldIOManager::createWorld(unsigned int seed, std::string worldName, bool isFlat) {
    boost::thread t( [=]() { P_createWorld(seed, worldName, isFlat); } );
    t.join();

    //P_createWorld(seed, worldName, isFlat); // This works perfectly fine
}
The part of P_createWorld that uses a static function (The function that the newly-created thread actually runs):
m_world->chunks[i]->tiles[y][x] = createBlock(chunkData[i].tiles[y][x].id, chunkData[i].tiles[y][x].pos, m_world->chunks[i]);
m_world is a struct that contains an array of Chunks, each of which has a two-dimensional array of Tiles, which each have texture IDs associated with a texture in a cache. createBlock returns a pointer to a new, completely initialized tile. The static function in question belongs to a statically linked library and is defined as follows:
namespace GLEngine {

    //This is a way for us to access all our resources, such as
    //Models or textures.
    class ResourceManager
    {
    public:
        static GLTexture getTexture(std::string texturePath);

    private:
        static TextureCache _textureCache;
    };

}
Also, its implementation:
#include "ResourceManager.h"
namespace GLEngine {
TextureCache ResourceManager::_textureCache;
GLTexture ResourceManager::getTexture(std::string texturePath) {
return _textureCache.getTexture(texturePath);
}
}
Expected result: For each tile to actually get assigned its proper texture ID
Actual result: Every tile, no matter the texturePath, is assigned 0 as its texture ID.
If you need any more code, like the constructor for a tile or createBlock(), I'll happily add it; I just don't really know what information is relevant in this kind of situation...
So, as I stated before, all of this works perfectly if I don't have a thread, so my final question is: Is there some sort of undefined behaviour that has to do with static functions being called by threads, or am I just doing something wrong here?
As @fifoforlifo mentioned, OpenGL contexts have thread affinity, and it turns out I was making GL calls deeper in my texture loading function. I created a second GL context and turned on context sharing, and then it began to work. Thanks a lot, @fifoforlifo!
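For reference, a purely illustrative sketch of shared-context creation with GLFW (the question doesn't name its windowing library, so treat these calls as an assumption):

// Main window/context as usual.
GLFWwindow* mainWindow = glfwCreateWindow(800, 600, "game", nullptr, nullptr);

// Hidden helper window whose context shares objects (textures, buffers)
// with the main context; the last argument requests the sharing.
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
GLFWwindow* loaderContext = glfwCreateWindow(1, 1, "loader", nullptr, mainWindow);

// In the loading thread, make the shared context current before any GL calls.
glfwMakeContextCurrent(loaderContext);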
In my Unity game, I have to modify a lot of graphics resources like textures and vertex buffers via native code to maintain good performance.
The problems start when the code calls ID3D11DeviceContext::Map on the immediate context several times in a very short time (I mean very short: called from different threads running in parallel). There is no pattern to whether the mapping succeeds or not. The method call looks like:
ID3D11DeviceContext* sU_m_D_context;

void* BeginModifyingVBO(void* bufferHandle)
{
    ID3D11Buffer* d3dbuf = static_cast<ID3D11Buffer*>(bufferHandle);
    D3D11_MAPPED_SUBRESOURCE mapped;
    HRESULT res = sU_m_D_context->Map(d3dbuf, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    assert(mapped.pData);
    return mapped.pData;
}

void FinishModifyingVBO(void* bufferHandle)
{
    ID3D11Buffer* d3dbuf = static_cast<ID3D11Buffer*>(bufferHandle);
    sU_m_D_context->Unmap(d3dbuf, 0);
}

std::mutex sU_m_D_locker;

void Mesh::ApplyBuffer()
{
    sU_m_D_locker.lock();
    // map buffer
    VBVertex* mappedBuffer = (VBVertex*)BeginModifyingVBO(this->currentBufferPtr);
    memcpy(mappedBuffer, this->mainBuffer, this->mainBufferLength * sizeof(VBVertex));
    // unmap buffer
    FinishModifyingVBO(this->currentBufferPtr);
    sU_m_D_locker.unlock();
    this->markedAsChanged = false;
}
where d3dbuf is a dynamic vertex buffer. I don't know why, but sometimes the result is E_OUTOFMEMORY, even though there is plenty of free memory. I tried to surround the code with mutexes, with no effect.
Is this really memory problem or maybe something less obvious?
None of the device context methods are thread safe. If you are going to use them from several threads, you will need to either manually sync all the calls or use multiple (deferred) contexts, one per thread. See Introduction to Multithreading in Direct3D 11.
Also, the error checking should be better: you need to always check returned HRESULT values, because in case of failure something like assert(mapped.pData); may still pass.
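A minimal sketch of that error check, reusing the globals from the question (with the caveat that the synchronization must cover every immediate-context call, not just Map/Unmap):

void* BeginModifyingVBO(void* bufferHandle)
{
    ID3D11Buffer* d3dbuf = static_cast<ID3D11Buffer*>(bufferHandle);
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    HRESULT res = sU_m_D_context->Map(d3dbuf, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    if (FAILED(res))
    {
        // react to the failure (log it, skip the update, ...) instead of
        // relying on assert(mapped.pData), which may pass on garbage
        return nullptr;
    }
    return mapped.pData;
}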
I am trying to parallelize a program I have made in OpenGL. I have fully tested the single threaded version of my code and it works. I ran it with valgrind and things were fine, no errors and no memory leaks, and the code behaved exactly as expected in all tests I managed to do.
In the single threaded version, I am sending a bunch of cubes to be rendered. I do this by creating the cubes in a data structure called "world", sending the OpenGL information to another structure called "Renderer" by appending them to a stack, and then finally I iterate through the queue and render every object.
Since the single threaded version works I think my issue is that I am not using the multiple OpenGL contexts properly.
These are the 3 functions that pipeline my entire process:
The main function, which initializes the global structures and threads.
int main(int argc, char **argv)
{
    //Init OpenGL
    GLFWwindow* window = create_context();

    Rendering_Handler = new Renderer();

    int width, height;
    glfwGetWindowSize(window, &width, &height);
    Rendering_Handler->set_camera(new Camera(mat3(1),
        vec3(5*CHUNK_DIMS,5*CHUNK_DIMS,2*CHUNK_DIMS), width, height));

    thread world_thread(world_handling, window);

    //Render loop
    render_loop(window);

    //cleanup
    world_thread.join();
    end_rendering(window);
}
The world handling, which should run as its own thread:
void world_handling(GLFWwindow* window)
{
    GLFWwindow* inv_window = create_inv_context(window);
    glfwMakeContextCurrent(inv_window);

    World c = World();
    //TODO: this is temporary, implement this correctly
    loadTexture(Rendering_Handler->current_program, *(Cube::textures[0]));

    while (!glfwWindowShouldClose(window))
    {
        c.center_frame(ivec3(Rendering_Handler->cam->getPosition()));
        c.send_render_data(Rendering_Handler);
        openGLerror();
    }
}
And the render loop, which runs in the main thread:
void render_loop(GLFWwindow* window)
{
    //Set default OpenGL values for rendering
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glPointSize(10.f);

    //World c = World();
    //loadTexture(Rendering_Handler->current_program, *(Cube::textures[0]));

    while (!glfwWindowShouldClose(window))
    {
        glfwPollEvents();
        Rendering_Handler->update(window);

        //c.center_frame(ivec3(Rendering_Handler->cam->getPosition()));
        //c.send_render_data(Rendering_Handler);

        Rendering_Handler->render();
        openGLerror();
    }
}
Notice the comments in the third function: if I uncomment those and then comment out the multi-threading statements in the main function (i.e. single-thread my program), everything works.
I don't think this is caused by a race condition, because the queue, where the OpenGL info is put before rendering, is always locked before being used (i.e. whenever a thread needs to read or write to the queue, the thread locks a mutex, reads or writes to the queue, then unlocks the mutex).
Does anybody have an intuition on what I could be doing wrong? Is it the OpenGL context?
I'm writing an OpenGL C++ wrapper. This wrapper aims to reduce complex and error-prone usage.
For example, I currently want the user to pay only a little attention to the OpenGL context. To do this, I wrote a class gl_texture_2d. As is known to us all, an OpenGL texture basically has the following operations:
Set its u/v parameter to repeat/mirror and so on
Set its min/mag filter to linear
...
Based on this, we have:
class gl_texture_2d
{
public:
    void mirror_u();            // set u parameter to mirrored mode
    void mirror_v();            // set v parameter to mirrored mode
    void linear_min_filter();   // ...
    void linear_mag_filter();   // ...
};
Well, we know that we can perform these operations only if the handle of the OpenGL texture object is currently bound to the OpenGL context.
Suppose we have a function that does this:
void bind(GLuint htex); // actually an alias of related GL function
Ok, we can now design our gl_texture_2d usage as:
gl_texture_2d tex;
bind(tex.handle());
tex.mirror_u();
tex.linear_min_filter();
unbind(tex.handle());
It conforms to GL's logic, but it loses the point of the wrapper, right? As a user, I wish to operate like this:
gl_texture_2d tex;
tex.mirror_u();
tex.linear_min_filter();
To achieve this, we must implement the function like this:
void gl_texture_2d::mirror_u()
{
    glBindTexture(GL_TEXTURE_2D, handle());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
    glBindTexture(GL_TEXTURE_2D, 0);
}
Always doing the binding operation internally makes sure that the operation is valid, but the cost is high!
The codes:
tex.mirror_u();
tex.mirror_v();
will expand to a pair of meaningless binding/unbinding operations.
So is there any mechanism by which the compiler can know that:
if bind(b) immediately follows bind(a), the bind(a) can be removed;
if bind(a) occurs twice in a block, the second one has no effect.
If you're working with pre-DSA OpenGL, and you absolutely must wrap OpenGL calls this directly with your own API, then the user is probably going to have to know about the whole bind-to-edit thing. After all, if they've bound a texture for rendering purposes and then try to modify one, that modification could disturb the current binding.
As such, you should build the bind-to-edit notion directly into the API.
That is, a texture object (which, BTW, should not be limited to just 2D textures) shouldn't actually have functions for modifying it, since you cannot modify an OpenGL texture without binding it (or without DSA, which you really ought to learn). It shouldn't have mirror_u and so forth; those functions should be part of a binder object:
bound_texture bind(some_texture, tex_unit);
bind.mirror_u();
...
The constructor of bound_texture binds some_texture to tex_unit. Its member functions will modify that texture (note: they need to call glActiveTexture to make sure that nobody has changed the active texture unit).
The destructor of bound_texture should automatically unbind the texture. But you should have a release member function that manually unbinds it.
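A rough sketch of what such a binder could look like (the member names and the gl_texture_2d interface are assumptions, not a fixed design):

class bound_texture
{
public:
    bound_texture(gl_texture_2d& tex, GLuint unit)
        : m_handle(tex.handle()), m_unit(unit)
    {
        glActiveTexture(GL_TEXTURE0 + m_unit);
        glBindTexture(GL_TEXTURE_2D, m_handle);
    }

    ~bound_texture() { release(); }

    void mirror_u()
    {
        glActiveTexture(GL_TEXTURE0 + m_unit); // re-assert the unit before editing
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
    }

    void release() // manual unbind; also called by the destructor
    {
        if (m_handle == 0) return;
        glActiveTexture(GL_TEXTURE0 + m_unit);
        glBindTexture(GL_TEXTURE_2D, 0);
        m_handle = 0;
    }

private:
    GLuint m_handle;
    GLuint m_unit;
};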
You're not going to be able to do this at a compilation level. Instead, if you're really worried about the time costs of these kinds of mistakes, a manager object might be the way to go:
class state_manager {
    GLuint current_texture = 0;
    /*Maybe other stuff?*/

public:
    void bind_texture(gl_texture_2d const& tex) {
        if(tex.handle() != current_texture) {
            current_texture = tex.handle();
            glBindTexture(/*...*/, current_texture);
        }
    }
};

int main() {
    state_manager manager;
    /*...*/
    gl_texture_2d tex;
    manager.bind_texture(tex);
    manager.bind_texture(tex); //Won't execute the bind twice in a row!
    /*Do Stuff with tex bound*/
}