OpenGL application works differently on different computers - C++

I sent my application to several people for testing. The first tester got the same result as I did, but the other two see something strange. For some reason, the image that should appear at its original size in the lower-left corner is stretched to fullscreen for them. They also don't see the GUI elements (although the buttons still work if they find them with the mouse). Note that it isn't the stretched image covering the buttons: I sent them a version with a transparent image, and the buttons still weren't drawn. I use the Nuklear library for GUI drawing. Below are screenshots and the code responsible for positioning the problem image. What could cause this?
[ Good behavior / Bad behavior ]
int width, height;
{
    fs::path const path = fs::current_path() / "gamedata" / "images" / "logo.png";
    unsigned char *const texture = stbi_load(path.u8string().c_str(), &width, &height, nullptr, STBI_rgb_alpha);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture);
    stbi_image_free(texture);
}
...
{
    // Convert the image size from pixels to NDC, so the quad spans
    // [-1, x] x [-1, y]: the image at its original size, anchored at
    // the lower-left corner.
    float const x = -1.0f + width * 2.0f / xResolution;
    float const y = -1.0f + height * 2.0f / yResolution;
    float const vertices[30] = {
        /* Position */      /* UV */
        -1.0f, -1.0f, 0.0f, 0.0f, 0.0f,
        -1.0f,     y, 0.0f, 0.0f, 1.0f,
            x,     y, 0.0f, 1.0f, 1.0f,
        -1.0f, -1.0f, 0.0f, 0.0f, 0.0f,
            x, -1.0f, 0.0f, 1.0f, 0.0f,
            x,     y, 0.0f, 1.0f, 1.0f
    };
    glBufferData(GL_ARRAY_BUFFER, 30 * sizeof(float), vertices, GL_STATIC_DRAW);
}
[ UPDATE ]
By trial and error, I realized that the problem is caused by the classes responsible for rendering the background and the logo, and that both of them are wrong. Separately each works as it should, but as soon as anything else is added to the game loop, everything breaks down.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
background.render();
glEnable(GL_BLEND);
glDepthMask(GL_FALSE);
logo.render();
nk_glfw3_render();
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
I wrote these classes myself, so most likely I missed something. Having found a mistake in one, the same mistake is in the other, because they're almost identical. So far I can't determine what exactly in these classes is wrong…
[ Background.hpp / Background.cpp ]

I found some errors in the posted code. In dramatic fashion, I will reveal the culprit last.
NULL used as an integer
For example,
glBindTexture(GL_TEXTURE_2D, NULL); // Incorrect
The glBindTexture function accepts an integer parameter, not a pointer. Here is the correct version:
glBindTexture(GL_TEXTURE_2D, 0); // Correct
This also applies to glBindBuffer and glBindVertexArray.
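The same fix applies there; for example:
glBindBuffer(GL_ARRAY_BUFFER, 0); // unbind with 0, not NULL
glBindVertexArray(0);             // likewise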
Nullary constructor is defined explicit
The explicit keyword only affects unary constructors (constructors which take one parameter). It does not affect constructors with any other number of parameters.
explicit Background() noexcept; // "explicit" does not do anything.
Background() noexcept; // Exact same declaration as above.
Constructor is incorrectly defined as noexcept
The noexcept keyword means "this function will never throw an exception". However, the constructor contains the following code, which can throw:
new Shader("image")
According to the standard, this can throw a std::bad_alloc. So the noexcept annotation is incorrect. See Can the C++ `new` operator ever throw an exception in real life?
On a more practical note, the constructor reads an image from disk. This is likely to fail, and throwing an exception is a reasonable way of handling this.
The noexcept keyword is not particularly useful here. Perhaps the compiler can generate slightly less code at the call site, but this is likely to make at most an infinitesimal difference because the constructor is cold code (cold = not called often). The noexcept qualifier is mostly just useful for choosing between different generic algorithms, see What is noexcept useful for?
No error handling
Remember that stbi_load will return NULL if an error occurs. This case is not handled.
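A minimal sketch of one way to handle it (assuming throwing from this constructor is acceptable; <stdexcept> is needed for std::runtime_error):
unsigned char *const texture = stbi_load(path.u8string().c_str(), &width, &height, nullptr, STBI_rgb_alpha);
if (texture == nullptr) {
    // stbi_failure_reason() gives a short description of the last stb_image error.
    throw std::runtime_error("stbi_load failed for logo.png: " + std::string(stbi_failure_reason()));
}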
Rule of three is violated
The Background class does not define a copy constructor or copy assignment operator even though it defines a destructor. While this is not guaranteed to make your program incorrect, it is kind of like leaving a loaded and cocked gun on the kitchen counter and hoping nobody touches it. This is called the Rule of Three and it is simple to fix. Add a deleted copy constructor and copy assignment operator.
// These three go together, either define all of them or none.
// Hence, "rule of three".
Background(const Background &) = delete;
Background &operator=(const Background &) = delete;
~Background();
See What is The Rule of Three?
Buffer is incorrectly deleted
Here is the line:
glDeleteBuffers(1, &VBO);
The short version... you should move this into Background::~Background().
The long version... when you delete the buffer, it is not removed from the VAO but the name can get reused immediately. According to the OpenGL 4.6 spec 5.1.2:
When a buffer, texture, or renderbuffer object is deleted, it is ... detached from any attachments of container objects that are bound to the current context, ...
So, because the VAO is not currently bound, deleting the VBO does not remove the VBO from the VAO (if the VAO were bound, this would be different). But, section 5.1.3:
When a buffer, texture, sampler, renderbuffer, query, or sync object is deleted, its name immediately becomes invalid (e.g. is marked unused), but the underlying object will not be deleted until it is no longer in use.
So the VBO will remain, but the name may be reused. This means that a later call to glGenBuffers might give you the same name. Then, when you call glBufferData, it overwrites the data in both your background and your logo. Or, glGenBuffers might give you a completely different buffer name. This is completely implementation-dependent, and this explains why you see different behavior on different computers.
As a rule, I would avoid calling glDeleteBuffers until I am done using the buffer. You can technically call glDeleteBuffers earlier, but it means you can get the same buffer back from glGenBuffers.
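In sketch form, assuming VBO is a member of Background as in the posted code:
Background::~Background()
{
    // The buffer is no longer in use once the object goes away,
    // so this is the safe place to delete it.
    glDeleteBuffers(1, &VBO);
}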

Related

How to pass a template class as a function argument without C7568 error

C++ newbie here. I'm pretty sure there's an easy and obvious solution to this problem, but even after reading through dozens of similar Q&As here, I haven't gotten any closer to it. Here's my problem:
I have a template class:
template<class T>
struct KalmanSmoother
{
    Eigen::MatrixX<T> P;
    ...
    KalmanSmoother(int dynamParams, int measureParams, int controlParams = 0);
    ...
};
And I can use it without any problem, like this:
KalmanSmoother<float> smoother(4, 2);
smoother.P = Eigen::Matrix4f {
    {0.1f, 0.0f, 0.1f, 0.0f},
    {0.0f, 0.1f, 0.0f, 0.1f},
    {0.1f, 0.0f, 0.1f, 0.0f},
    {0.0f, 0.1f, 0.0f, 0.1f}
};
...
Works like a charm. But when I refactor my code and extract the initialization part into another function, the compiler (MSVC 19.31.31104.0) starts crying. The extracted function looks like this:
// Declaration in the header:
void setupKalmanSmoother(KalmanSmoother<float> & smoother);
// Definition in the .cpp
inline void Vehicle::setupKalmanSmoother(KalmanSmoother<float> & smoother)
{
    smoother.P = Eigen::Matrix4f {
        {0.1f, 0.0f, 0.1f, 0.0f},
        {0.0f, 0.1f, 0.0f, 0.1f},
        {0.1f, 0.0f, 0.1f, 0.0f},
        {0.0f, 0.1f, 0.0f, 0.1f}
    };
    ...
}
And I'd just like to call it like this:
KalmanSmoother<float> smoother(4, 2);
setupKalmanSmoother(smoother);
Nothing magical. It should be working (I suppose...), but I get this compiler error:
error C7568: argument list missing after assumed function template 'KalmanSmoother'
The error message points to the declaration in the header. It's worth mentioning that all function definitions of the template class are in the header file, since I have already run into (I think) exactly the same error when, out of habit, I put the definitions into the .cpp file.
So what am I missing?
Thanks in advance!!!
I got a very similar error. In my case it was due to circular #includes.
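If circular #includes are the cause, a forward declaration usually breaks the cycle; a minimal sketch (the file layout is hypothetical):
// Vehicle.h -- note: no #include "KalmanSmoother.h" here
template<class T> struct KalmanSmoother; // forward declaration; enough for a reference parameter

class Vehicle
{
    void setupKalmanSmoother(KalmanSmoother<float> & smoother);
};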
I ran into the same error when creating a data member of a templated class.
List<Graphics::RenderObject*> m_RenderObjects;
The fix in my instance was I needed to scope into the namespace that the 'List<>' was in.
STD::List<Graphics::RenderObject*> m_RenderObjects;
Instead of saying 'List' is undefined, it gave me error C7568, which is misleading. I recommend checking for similar issues.

DirectX11 IASetVertexBuffers "ID3D11Buffer*" is incompatible with

A problem with returning a pointer from a class
I am making a project for a 3D course that involves utilizing DirectX 11. In assignments that were much smaller in scope we were encouraged to just put all the code in and around the main-file. For this project I wanted more structure, so I decided to subdivide the code into three classes: WindowHandler, D3DHandler and PipelineHandler.
D3DHandler sits in WindowHandler, and PipelineHandler sits in D3DHandler like so:
class WindowHandler
{
private:
    HWND window;
    MSG message;
    class D3DHandler
    {
    private:
        ID3D11Device* device;
        ID3D11DeviceContext* immediateContext;
        IDXGISwapChain* swapChain;
        ID3D11RenderTargetView* rtv;
        ID3D11Texture2D* dsTexture;
        ID3D11DepthStencilView* dsView;
        D3D11_VIEWPORT viewport;
        class PipelineHandler
        {
        private:
            ID3D11VertexShader* vShader;
            ID3D11PixelShader* pShader;
            ID3D11InputLayout* inputLayout;
            ID3D11Buffer* vertexBuffer;
        } ph;
    };
};
(The classes are divided into their own .h files; I just wanted to compress the code a bit.)
Everything ran smoothly until I tried to bind my VertexBuffer in my Render() function.
void D3DHandler::Render()
{
    float clearColor[] = { 0.0f, 0.0f, 0.0f, 1.0f };
    this->immediateContext->ClearRenderTargetView(this->rtv, clearColor); // Clear backbuffer
    this->immediateContext->ClearDepthStencilView(this->dsView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    UINT32 vertexSize = sizeof(Vertex); // Will be the size of 8 floats: x y z, r g b, u v
    UINT32 offset = 0;

    this->immediateContext->VSSetShader(this->ph.GetVertexShader(), nullptr, 0);
    this->immediateContext->IASetVertexBuffers(0, 1, this->ph.GetVertexBuffer(), &vertexSize, &offset);
    this->immediateContext->IASetInputLayout(this->ph.GetInputLayout());
    this->immediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    this->immediateContext->PSSetShader(this->ph.GetPixelShader(), nullptr, 0);

    // Draw geometry
    this->immediateContext->Draw(3, 0);
}
The problem occurs at IASetVertexBuffers, where this->ph.GetVertexBuffer() gives the error
"ID3D11Buffer*" is incomaptible with parameter of type "ID3D11Buffer
*const *"
The Get-function in PipelineHandler returns a pointer to the COM object, and for VertexBuffer it looks like this:
ID3D11Buffer * PipelineHandler::GetVertexBuffer() const
{
    return this->vertexBuffer;
}
From what documentation I've found, IASetVertexBuffers expects an array of buffers, so a double pointer. However, in the code I've worked with before, where the ID3D11Buffer * was global, the same code generated no errors (and yes, most of the code I'm working with so far has been wholesale copied from the last assignment in this course, just placed into classes).
I have tried:
removing the this-> pointer in front of ph;
creating a new function, ID3D11Buffer & GetVertexBufferRef() const, which returns *this->vertexBuffer, and using that instead.
I have yet to try:
moving vertexBuffer from PipelineHandler up to D3DHandler. I have a feeling that would work, but it would also disconnect the vertex buffer from the other objects that I consider part of the pipeline rather than the context, which is something I would like to avoid.
I am grateful for any feedback I can get. I'm fairly new to programming, so any comments that aren't a solution to my problem are just as welcome.
The problem you are having is that this function does not take an ID3D11Buffer* interface pointer. It takes an array of ID3D11Buffer* interface pointers.
this->immediateContext->IASetVertexBuffers(0, 1, this->ph.GetVertexBuffer(), &vertexSize, &offset);
Can be any of the following:
this->immediateContext->IASetVertexBuffers(0, 1, &this->ph.GetVertexBuffer(), &vertexSize, &offset);
-or-
ID3D11Buffer* vb = this->ph.GetVertexBuffer();
this->immediateContext->IASetVertexBuffers(0, 1, &vb, &vertexSize, &offset);
-or-
ID3D11Buffer* vb[] = { this->ph.GetVertexBuffer() };
this->immediateContext->IASetVertexBuffers(0, 1, vb, &vertexSize, &offset);
This is because the Direct3D 11 API supports 'multi-stream' rendering where you have more than one Vertex Buffer referenced in the Input Layout which is merged as it is drawn. The function also takes an array of vertex sizes and offsets, which is why you had to take the address of the values vertexSize and offset instead of just passing by value. See Microsoft Docs for full details on the function signature.
This feature was first introduced back in Direct3D 9, so the best introduction to it is found here. With Direct3D 11, depending on Direct3D hardware feature level you can use up to 16 or 32 slots at once. That said, most basic rendering uses a single stream/VertexBuffer.
You should take a look at this page as to why the 'first' option above can get you into trouble with smart-pointers.
For the 'second' option, I'd recommend using C++11's auto keyword, so it would be: auto vb = this->ph.GetVertexBuffer();
The 'third' option obviously makes most sense if you are using more than one VertexBuffer for multi-stream rendering.
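For instance, a hypothetical two-stream setup where positions and colors live in separate buffers (the accessor names are made up); each array has one entry per slot:
ID3D11Buffer* vbs[2] = { this->ph.GetPositionBuffer(), this->ph.GetColorBuffer() };
UINT strides[2] = { sizeof(float) * 3, sizeof(float) * 4 };
UINT offsets[2] = { 0, 0 };
this->immediateContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);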
Note that you can never have more than one ID3D11Buffer active for the Index Buffer, so the function signature does in fact take a ID3D11Buffer* in that case and not an array. See Microsoft Docs
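In sketch form (GetIndexBuffer is a hypothetical accessor in the style of the question's code):
// A single index buffer is passed directly, not as an array.
this->immediateContext->IASetIndexBuffer(this->ph.GetIndexBuffer(), DXGI_FORMAT_R16_UINT, 0);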

OpenGL 3.3: GL_INVALID_OPERATION when calling glBindBuffer [duplicate]

This question already has answers here:
OpenGL object in C++ RAII class no longer works (2 answers)
Alright, so I am having a pretty tenacious issue setting up my OpenGL code. I'm trying to refactor my graphics code into a renderer object, but I can't seem to get to the bottom of a tricky GL_INVALID_OPERATION error (code 1282).
I start by creating a mesh object that initializes a loosely defined collection of OpenGL objects, and manages their lifespan in an attempt at RAII style:
struct OpenGLMesh
{
    OpenGLMesh(OpenGLRenderer& renderer,
               int shader_index,
               const char* fpath);
    ~OpenGLMesh();

    GLuint vbo_;
    GLuint ebo_;
    GLuint texture_;
    std::vector<float> vertices_;
    std::vector<unsigned int> indices_;
    GLuint shader_id_;
    GLuint mvp_id_;
};

OpenGLMesh::OpenGLMesh(OpenGLRenderer& renderer, int shader_index, const char* fpath)
{
    glGenBuffers(1, &vbo_);
    glGenBuffers(1, &ebo_);
    glGenTextures(1, &texture_);
    renderer.loadTexture(*this, fpath);
    const std::vector<GLuint>& shaders = renderer.getShaders();
    shader_id_ = shaders.at(shader_index);
    mvp_id_ = glGetUniformLocation(shader_id_, "MVP");
}

OpenGLMesh::~OpenGLMesh()
{
    glDeleteBuffers(1, &vbo_);
    glDeleteBuffers(1, &ebo_);
    glDeleteTextures(1, &texture_);
}
At the same time I have a renderer object that owns the majority of the initialization and rendering functions. For example, the loadTexture function in the above constructor is part of the my OpenGLRenderer class:
OpenGLRenderer::OpenGLRenderer()
{
    glGenVertexArrays(1, &vao_); // allocate + assign a VAO to our handle
    shaders_.push_back(loadShaders("shaders/texture.vert", "shaders/texture.frag"));
}

OpenGLRenderer::~OpenGLRenderer()
{
    std::vector<GLuint>::iterator it;
    for (it = shaders_.begin(); it != shaders_.end(); ++it)
    {
        glDeleteProgram(*it);
    }
    glDeleteVertexArrays(1, &vao_);
}
My first concern is that compartmentalization of these function calls may have somehow invalidated some part of my OpenGL setup calls. However, the error doesn't make an appearance until I attempt to bind my mesh's VBO.
Below is the code from a stripped down test module I built to debug this issue:
// create the renderer object
OpenGLRenderer renderer;
// create and store a mesh object
std::vector<OpenGLMesh> meshes;
meshes.push_back(OpenGLMesh(renderer, 0, "./assets/dune_glitch.png"));
// SDL Event handling loop
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArray(vao_);
glBindBuffer(GL_ARRAY_BUFFER, mesh.vbo_);
printOpenGLError(); // prints out error code 1282
I've verified that it is definitely this line that breaks every time, although it doesn't seem to send a kill signal until the next iteration of the loop.
I haven't been able to find any insight on this problem - seems like glBindBuffer doesn't normally generate this kind of error. I've also made sure that the mesh.vbo_ ID still points to the same location.
For some reason, my application's stack trace doesn't play well with GDB, so I haven't been able to look at the trace as much as I would normally want to. Any advice would be a help, from debugging tips to possible sources of failure - thanks in advance!
(This is my first real post, let me know if I messed anything up!)
In the constructor of the class OpenGLMesh, the buffer objects are generated (glGenBuffers). In the destructor OpenGLMesh::~OpenGLMesh, the buffer objects are deleted (glDeleteBuffers).
In the following line:
meshes.push_back(OpenGLMesh(renderer, 0, "./assets/dune_glitch.png"));
you push_back the OpenGLMesh into a std::vector. This means that a temporary OpenGLMesh object is created, and the buffer objects are generated in its constructor. At this point all the data are valid and the buffer objects exist (on the GPU). When std::vector::push_back is called, a new OpenGLMesh object is created inside the std::vector. That object is constructed by the default copy constructor and gets a copy of all the members of the temporary OpenGLMesh object. Immediately afterwards, the temporary OpenGLMesh object is destroyed, and the buffer objects are deleted (glDeleteBuffers) by the destructor OpenGLMesh::~OpenGLMesh of the temporary object. At this point all the data are gone.
See std::vector::push_back. Put a breakpoint in the destructor OpenGLMesh::~OpenGLMesh; then you can easily track the expiration.
You should make the class neither copy-constructible nor copy-assignable, and instead provide a move constructor and move assignment operator.
class OpenGLMesh
{
    OpenGLMesh(const OpenGLMesh &) = delete;
    OpenGLMesh & operator = (const OpenGLMesh &) = delete;

    OpenGLMesh(OpenGLMesh &&);
    OpenGLMesh & operator = (OpenGLMesh &&);

    ....
};
You can quickly fix this behavior, for debugging purposes, by replacing
meshes.push_back(OpenGLMesh(renderer, 0, "./assets/dune_glitch.png"));
by
meshes.emplace_back(renderer, 0, "./assets/dune_glitch.png");
(see std::vector::emplace_back)
For the implementation of a move constructor and move operator see:
Move assignment operator
Move constructors
C++11 Tutorial: Introducing the Move Constructor and the Move Assignment Operator
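Following those references, a minimal sketch of the move operations for this class (assuming the members shown above): the key point is zeroing out the moved-from object's names, so that its destructor's glDelete* calls become harmless no-ops (deleting name 0 is silently ignored).
OpenGLMesh::OpenGLMesh(OpenGLMesh && other)
    : vbo_(other.vbo_), ebo_(other.ebo_), texture_(other.texture_),
      vertices_(std::move(other.vertices_)), indices_(std::move(other.indices_)),
      shader_id_(other.shader_id_), mvp_id_(other.mvp_id_)
{
    // Leave the source with no GL names so ~OpenGLMesh won't free ours.
    other.vbo_ = 0;
    other.ebo_ = 0;
    other.texture_ = 0;
}

OpenGLMesh & OpenGLMesh::operator = (OpenGLMesh && other)
{
    // Swapping hands our old names to 'other', whose destructor frees them.
    std::swap(vbo_, other.vbo_);
    std::swap(ebo_, other.ebo_);
    std::swap(texture_, other.texture_);
    vertices_ = std::move(other.vertices_);
    indices_ = std::move(other.indices_);
    shader_id_ = other.shader_id_;
    mvp_id_ = other.mvp_id_;
    return *this;
}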

OpenGL - glUniformBlockBinding after or before glLinkProgram?

I'm trying to use Uniform Buffer Objects to share my projection matrix across different shaders (e.g., the deferred pass for solid objects and the forward pass for transparent ones). I think I'll add more data to the UBO later on, as the complexity grows. My problem is that the Red Book says:
To explicitly control a uniform block's binding, call glUniformBlockBinding() before calling glLinkProgram().
But the online documentation says:
When a program object is linked or re-linked, the uniform buffer object binding point assigned to each of its active uniform blocks is reset to zero.
What am I missing? Should I bind the Uniform Block after or before the linking?
Thanks.
glUniformBlockBinding needs the program name and the index of the block within that particular program.
void glUniformBlockBinding(GLuint program, GLuint uniformBlockIndex, GLuint uniformBlockBinding);
uniformBlockIndex can be obtained by calling glGetUniformBlockIndex on the program.
GLuint glGetUniformBlockIndex(GLuint program, const char *uniformBlockName);
From the API (http://www.opengl.org/wiki/GLAPI/glGetUniformBlockIndex):
program must be the name of a program object for which the command glLinkProgram must have been called in the past, although it is not required that glLinkProgram must have succeeded. The link could have failed because the number of active uniforms exceeded the limit.
So the correct order is:
void glLinkProgram(GLuint program);
GLuint glGetUniformBlockIndex(GLuint program, const char *uniformBlockName);
void glUniformBlockBinding(GLuint program, GLuint uniformBlockIndex, GLuint uniformBlockBinding);
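Putting it together, a short sketch (the block name "Matrices" and binding point 0 are made up for illustration):
glLinkProgram(program);
GLuint blockIndex = glGetUniformBlockIndex(program, "Matrices");
glUniformBlockBinding(program, blockIndex, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // attach the UBO to the same binding point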

mapped resources and assigning data in direct3d11

Hi, I'm working with shaders, and I've got a quick question about whether or not I can do something here. When mapping data to a buffer, I normally see it done like this: define a class or struct to represent what's in the buffer.
class MatrixBuffer
{
public:
    D3DXMATRIX worldMatrix;
    D3DXMATRIX viewMatrix;
    D3DXMATRIX projectionMatrix;
};
Then set up an input element description when you create the input layout; I got all that. Now, when I see people update the buffers before rendering, it's mostly done like this:
MatrixBuffer * dataPtr;
if (FAILED(deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)))
    return false;
dataPtr = (MatrixBuffer *) mappedResource.pData;
dataPtr->worldMatrix = worldMatrix;
dataPtr->viewMatrix = viewMatrix;
dataPtr->projectionMatrix = projectionMatrix;
deviceContext->Unmap(m_matrixBuffer, 0);
where the values assigned are actually valid D3DXMATRIXes. Seeing as that's okay, would anything go wrong if I just did it like this?
MatrixBuffer * dataPtr;
if (FAILED(deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)))
    return false;
dataPtr = (MatrixBuffer *) mappedResource.pData;
*dataPtr = matrices;
deviceContext->Unmap(m_matrixBuffer, 0);
where matrices is a valid, filled-out MatrixBuffer object? Would there be dire consequences later because of some special way DirectX handles this, or is it fine?
I don't see anything wrong with your version, from a technical perspective.
It might be slightly harder to grasp what actually gets copied there, and from a theoretical viewpoint the resulting code should be pretty much the same -- I would expect that any decent optimizer would notice that the "long" version just copies several successive items, and lump them all into one larger copy.
Try to avoid using Map, as it stalls the pipeline. For data that gets updated once per frame, it's better to have a D3D11_USAGE_DEFAULT buffer and then use the UpdateSubresource method to push the new data in.
More on that here: http://blogs.msdn.com/b/shawnhar/archive/2008/04/14/stalling-the-pipeline.aspx
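A sketch of that approach, assuming m_matrixBuffer was created with D3D11_USAGE_DEFAULT (and no CPU-access flags):
MatrixBuffer matrices = { worldMatrix, viewMatrix, projectionMatrix };
deviceContext->UpdateSubresource(m_matrixBuffer, 0, nullptr, &matrices, 0, 0);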