I'm currently working with OpenGL in C++, and I'm trying to debug by identifying what the currently bound vertex buffer and index buffer are. I have three functions:
GLint getBoundVAO()
{
GLint id = 0;
glGetIntegerv(GL_VERTEX_ARRAY_BINDING, &id);
return id;
}
GLint getBoundVBO()
{
GLint id = 0;
// ???
return id;
}
GLint getBoundIBO()
{
GLint id = 0;
// ???
return id;
}
How would I go about getting the vertex buffer and index buffer in a similar way to how I am getting the VAO? I've looked at the OpenGL page https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glGet.xhtml and am not seeing a value that would let me get the index or vertex buffers.
See the "Parameters" section here. The symbolic constants used for binding the buffers match the ones used for glGet* (but with a _BINDING suffix).
For the vertex buffer object, use:
glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &id);
For the index buffer, use:
glGetIntegerv(GL_ELEMENT_ARRAY_BUFFER_BINDING, &id);
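Putting that together, the two stubs from the question can be filled in like this (a minimal sketch; like getBoundVAO, it assumes a current GL context):
GLint getBoundVBO()
{
    GLint id = 0;
    // Queries the buffer currently bound to GL_ARRAY_BUFFER (the vertex buffer).
    glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &id);
    return id;
}
GLint getBoundIBO()
{
    GLint id = 0;
    // Queries the buffer currently bound to GL_ELEMENT_ARRAY_BUFFER (the index
    // buffer). Note that this binding is stored in the currently bound VAO.
    glGetIntegerv(GL_ELEMENT_ARRAY_BUFFER_BINDING, &id);
    return id;
}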
I am trying to render a texture that gets passed through a pixel shader.
Currently my shader is as follows:
float4 EffectProcess( float2 Tex : TEXCOORD0 ) : COLOR0
{
return float4(1,0,0,1);
}
technique MyTechnique
{
pass p0
{
VertexShader = null;
PixelShader = compile ps_2_0 EffectProcess();
}
}
As you can see, it is a very basic shader that forces every pixel to be red.
UINT uiPasses = 0;
res = g_lpEffect->Begin(&uiPasses, 0);
for (UINT uiPass = 0; uiPass < uiPasses; uiPass++)
{
res = g_lpEffect->BeginPass(uiPass);
res = sprite->Begin(D3DXSPRITE_SORT_TEXTURE);
res = sprite->Draw(tex, NULL, 0x0, 0x0, 0xFFFFFFFF);
res = sprite->End();
res = g_lpEffect->EndPass();
}
res = g_lpEffect->End();
This is how I am drawing the texture using the shader. I am not sure it is the correct way to do it, though, and have found very few resources on the subject.
The shader is created correctly, and the texture as well; all calls return an HRESULT of S_OK. Yet when I run the code, the texture shows up perfectly, without being overwritten by red.
Both sprites and effects by default store the initial pipeline state when Begin is called, set up their own state, and then restore the stored state when End is called. So I suspect that sprite->Begin(D3DXSPRITE_SORT_TEXTURE); will disable effect processing, and your pixel shader is never called. You may try passing something like D3DXSPRITE_DONOTMODIFY_RENDERSTATE into Begin to prevent it from modifying pipeline state, though this may break sprite rendering. It would be better to get rid of the sprite altogether and write your own sprite shaders (both vertex and pixel), because fixed-pipeline rendering is mostly deprecated these days.
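If you want to try the flag quickly (a sketch, not verified against your setup), the only change is in the sprite's Begin call:
// Ask the sprite not to modify render state, so the effect's pixel shader
// stays bound. This may break other parts of the sprite's rendering.
res = sprite->Begin(D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DONOTMODIFY_RENDERSTATE);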
I was originally using glDrawElementsInstancedBaseVertex to draw the scene meshes. All the meshes' vertex attributes are interleaved in a single buffer object. In total there are only 30 unique meshes, so I've been calling draw 30 times with instance counts, etc. Now I want to batch the draw calls into one using glMultiDrawElementsIndirect. Since I have no experience with this function, I've been reading articles here and there to understand the implementation, with little success. (For testing purposes, all meshes are instanced only once.)
This is the command structure from the OpenGL reference page:
struct DrawElementsIndirectCommand
{
GLuint vertexCount;
GLuint instanceCount;
GLuint firstVertex;
GLuint baseVertex;
GLuint baseInstance;
};
DrawElementsIndirectCommand commands[30];
// Populate commands.
for (size_t index { 0 }; index < 30; ++index)
{
const Mesh* mesh{ m_meshes[index] };
commands[index].vertexCount = mesh->elementCount;
commands[index].instanceCount = 1; // Just testing with 1 instance, ATM.
commands[index].firstVertex = mesh->elementOffset();
commands[index].baseVertex = mesh->verticeIndex();
commands[index].baseInstance = 0; // Shouldn't impact testing?
}
// Create and populate the GL_DRAW_INDIRECT_BUFFER buffer... bla bla
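For completeness, a minimal sketch of one way that elided step might look (the buffer name m_indirectBuffer matches the draw code below; the usage hint is an assumption):
// Upload the 30 commands into an indirect draw buffer.
glGenBuffers(1, &m_indirectBuffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, m_indirectBuffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(commands), commands, GL_STATIC_DRAW);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, 0);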
Then later down the line, after setup, I do some drawing.
// Some prep before drawing like bind VAO, update buffers, etc.
// Draw?
if (RenderMode == MULTIDRAW)
{
// Bind, Draw, Unbind
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, m_indirectBuffer);
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, 30, 0);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, 0);
}
else
{
for (size_t index { 0 }; index < 30; ++index)
{
const Mesh* mesh { m_meshes[index] };
glDrawElementsInstancedBaseVertex(
GL_TRIANGLES,
mesh->elementCount,
GL_UNSIGNED_INT,
reinterpret_cast<GLvoid*>(mesh->elementOffset()),
1,
mesh->verticeIndex());
}
}
Now the glDrawElements... path still works fine, as before, when I switch to it. But the glMultiDraw... path gives indistinguishable meshes; when I set firstVertex to 0 for all commands, the meshes look almost correct (at least distinguishable), but are still largely wrong in places. I feel I'm missing something important about indirect multi-drawing.
//Indirect data
commands[index].firstVertex = mesh->elementOffset();
//Direct draw call
reinterpret_cast<GLvoid*>(mesh->elementOffset()),
That's not how it works for indirect rendering. The firstVertex field is not a byte offset; it's the index of the first element to read, measured in indices. So you have to divide the byte offset by the size of an index to compute firstVertex:
commands[index].firstVertex = mesh->elementOffset() / sizeof(GLuint);
The result of that should be a whole number. If it wasn't, then you were doing unaligned reads, which probably hurt your performance. So fix that ;)
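Applied to the population loop from the question, the fix is one line (a sketch; it assumes elementOffset() returns a byte offset into a GLuint index buffer, as the direct draw call implies):
for (size_t index { 0 }; index < 30; ++index)
{
    const Mesh* mesh { m_meshes[index] };
    commands[index].vertexCount   = mesh->elementCount;
    commands[index].instanceCount = 1;
    // firstVertex counts indices, not bytes, so convert the byte offset.
    commands[index].firstVertex   = mesh->elementOffset() / sizeof(GLuint);
    commands[index].baseVertex    = mesh->verticeIndex();
    commands[index].baseInstance  = 0;
}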
I'm currently trying to get a triangle to render using OpenGL 3.3 and C++ with the GLM, GLFW3 and GLEW libraries, but I get an error when trying to create my shader program:
Vertex info
(0) : error C5145: must write to gl_Position
I already tried to find out why this happens and asked on other forums, but no one knew the reason. There are three possible places this error could originate from: in my main.cpp, where I create the window, the context, the program, the VAO, etc. ...
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <iostream>
#include <string>
#include "util/shaderutil.hpp"
#define WIDTH 800
#define HEIGHT 600
using namespace std;
using namespace glm;
GLuint vao;
GLuint shaderprogram;
void initialize() {
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glClearColor(0.5, 0.7, 0.9, 1.0);
string vShaderPath = "shaders/shader.vert";
string fShaderPath = "shaders/shader.frag";
shaderprogram = ShaderUtil::createProgram(vShaderPath.c_str(), fShaderPath.c_str());
}
void render() {
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(shaderprogram);
glDrawArrays(GL_TRIANGLES, 0, 3);
}
void clean() {
glDeleteProgram(shaderprogram);
}
int main(int argc, char** argv) {
if (!glfwInit()) {
cerr << "GLFW ERROR!" << endl;
return -1;
}
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
GLFWwindow* win = glfwCreateWindow(WIDTH, HEIGHT, "Rendering a triangle!", NULL, NULL);
glfwMakeContextCurrent(win);
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) {
cerr << "GLEW ERROR!" << endl;
return -1;
} else {
glGetError();
//GLEW BUG: SETTING THE ERRORFLAG TO INVALID_ENUM; THEREFORE RESET
}
initialize();
while (!glfwWindowShouldClose(win)) {
render();
glfwPollEvents();
glfwSwapBuffers(win);
}
clean();
glfwDestroyWindow(win);
glfwTerminate();
return 0;
}
...the ShaderUtil class, where I read in the shader files, compile them, do error checking and return a final program...
#include "shaderutil.hpp"
#include <iostream>
#include <string>
#include <fstream>
#include <vector>
using namespace std;
GLuint ShaderUtil::createProgram(const char* vShaderPath, const char* fShaderPath) {
/*VARIABLES*/
GLuint vertexShader;
GLuint fragmentShader;
GLuint program;
ifstream vSStream(vShaderPath);
ifstream fSStream(fShaderPath);
string vSCode, fSCode;
/*CREATING THE SHADER AND PROGRAM OBJECTS*/
vertexShader = glCreateShader(GL_VERTEX_SHADER);
fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
program = glCreateProgram();
/*READING THE SHADERCODE*/
/*CONVERTING THE SHADERCODE TO CHAR POINTERS*/
while (vSStream.is_open()) {
string line = "";
while (getline(vSStream, line)) {
vSCode += "\n" + line;
}
vSStream.close();
}
const char* vSCodePointer = vSCode.c_str();
while (fSStream.is_open()) {
string line = "";
while (getline(fSStream, line)) {
fSCode += "\n" + line;
}
fSStream.close();
}
const char* fSCodePointer = fSCode.c_str();
/*COMPILING THE VERTEXSHADER*/
glShaderSource(vertexShader, 1, &vSCodePointer, NULL);
glCompileShader(vertexShader);
/*VERTEXSHADER ERROR CHECKING*/
GLint vInfoLogLength;
glGetShaderiv(vertexShader, GL_INFO_LOG_LENGTH, &vInfoLogLength);
if (vInfoLogLength > 0) {
vector<char> vInfoLog(vInfoLogLength + 1);
glGetShaderInfoLog(vertexShader, vInfoLogLength, &vInfoLogLength, &vInfoLog[0]);
for(int i = 0; i < vInfoLogLength; i++) {
cerr << vInfoLog[i];
}
}
/*COMPILING THE FRAGMENTSHADER*/
glShaderSource(fragmentShader, 1, &fSCodePointer, NULL);
glCompileShader(fragmentShader);
/*FRAGMENTSHADER ERROR CHECKING*/
GLint fInfoLogLength;
glGetShaderiv(fragmentShader, GL_INFO_LOG_LENGTH, &fInfoLogLength);
if (fInfoLogLength > 0) {
vector<char> fInfoLog(fInfoLogLength + 1);
glGetShaderInfoLog(fragmentShader, fInfoLogLength, &fInfoLogLength, &fInfoLog[0]);
for(int i = 0; i < fInfoLogLength; i++) {
cerr << fInfoLog[i];
}
}
/*LINKING THE PROGRAM*/
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glLinkProgram(program);
//glValidateProgram(program);
/*SHADERPROGRAM ERROR CHECKING*/
GLint programInfoLogLength;
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &programInfoLogLength);
if (programInfoLogLength > 0) {
vector<char> programInfoLog(programInfoLogLength + 1);
glGetProgramInfoLog(program, programInfoLogLength, &programInfoLogLength, &programInfoLog[0]);
for(int i = 0; i < programInfoLogLength; i++) {
cerr << programInfoLog[i];
}
}
/*CLEANUP & RETURNING THE PROGRAM*/
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);
return program;
}
...and the vertex shader itself, which is nothing special. I just create an array of vertices and push them into gl_Position.
#version 330 core
void main() {
const vec3 VERTICES[3] = vec3[3] {
0.0, 0.5, 0.5,
0.5,-0.5, 0.5,
-0.5,-0.5, 0.5
};
gl_Position.xyz = VERTICES;
gl_Position.w = 1.0;
}
The fragment shader just outputs a vec4 called color, which is set to (1.0, 0.0, 0.0, 1.0). The compiler doesn't show me any errors, but when I try to execute the program, I just get a window without the triangle, plus the error message shown above.
There are a few things I already tried to solve this problem, but none of them worked:
I tried creating the vertices inside my main.cpp and pushing them into the vertex shader via a vertex buffer object. I changed some code, inspired by opengl-tutorials.org, and finally got a triangle to show up, but the shaders weren't applied; I only got the vertices from my main.cpp to show up on the screen, and the "must write to gl_Position" problem remained.
I tried using glGetError() in different places and got two different error codes: 1280 and 1282. The first one was caused by a bug in GLEW, which changes the error state from GL_NO_ERROR to GL_INVALID_ENUM or something like that; I was told to ignore it and just reset the state to GL_NO_ERROR by calling glGetError() once after initializing GLEW. The other error code appeared after using glUseProgram() in the render function. I wanted to get some information out of this, but the gluErrorString() function is deprecated in OpenGL 3.3 and I couldn't find an alternative provided by any of my libraries (a hand-rolled lookup is sketched after this list).
I tried validating my program via glValidateProgram() after linking it. When I did this, the gl_Position error message didn't show up anymore, but the triangle didn't either, so I assume this function just clears the info log and fills it with new information about the validation process.
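The hand-rolled lookup mentioned above: the GL error values are fixed constants in the headers, so a tiny replacement for gluErrorString is enough (a sketch):
// 1280 is GL_INVALID_ENUM and 1282 is GL_INVALID_OPERATION.
const char* glErrorName(GLenum err) {
    switch (err) {
        case GL_NO_ERROR:          return "GL_NO_ERROR";          // 0
        case GL_INVALID_ENUM:      return "GL_INVALID_ENUM";      // 1280
        case GL_INVALID_VALUE:     return "GL_INVALID_VALUE";     // 1281
        case GL_INVALID_OPERATION: return "GL_INVALID_OPERATION"; // 1282
        case GL_OUT_OF_MEMORY:     return "GL_OUT_OF_MEMORY";     // 1285
        default:                   return "unknown GL error";
    }
}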
So right now, I have no idea what causes this error.
The problem got solved! I tried to print the source that OpenGL was trying to compile and saw that the ifstream had loaded no source at all. Things I had to change:
Change the "while (vVStream.is_open())" to "if (vVStream.is_open())".
Add an error check for when the condition listed first fails: add "else { cerr << "OH NOES!" << endl; }" (see the sketch after this list).
Add a second parameter to the ifstreams I'm creating: change "ifstream(path)" to "ifstream(path, ios::in)".
Change the path I'm passing from a relative path (e.g. "../shaders/shader.vert") to an absolute path (e.g. "/home/USERNAME/Desktop/project/src/shaders/shader.vert"). This was somehow necessary because the relative path wasn't being resolved. Using an absolute path isn't a permanent solution, but it fixes the problem of not finding the shader.
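Combined, the first three fixes make the loader for the vertex shader look like this (the fragment stream is handled the same way):
ifstream vSStream(vShaderPath, ios::in); // open explicitly for reading
if (vSStream.is_open()) {
    string line;
    while (getline(vSStream, line)) {
        vSCode += "\n" + line;
    }
    vSStream.close();
} else {
    // Fail loudly instead of silently compiling an empty source string.
    cerr << "OH NOES!" << endl;
}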
Now it actually loads and compiles the shaders. There are still some errors to fix, but if someone has the same "must write to gl_Position" problem: double, no, triple-check that the source you're trying to compile is actually loaded and that the ifstream is actually open.
I thank everyone who tried to help me, especially @MtRoad. This problem almost made me go bald.
Vertex shaders run on each vertex individually, so gl_Position is the output position of the single vertex being processed, after whatever transforms you wish to apply; trying to emit multiple vertices from a vertex shader doesn't make sense. Geometry shaders can emit additional geometry on the fly and can be used to do this, for example to create motion blur.
For typical drawing, you bind a vertex array object like you did, but put the data into buffers called vertex buffer objects and tell OpenGL how to interpret the data's "attributes" using glVertexAttribPointer, so that you can read them in your shaders.
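A minimal sketch of that setup, reusing the triangle coordinates from the question's shader (attribute location 0 and the identifier names are illustrative assumptions):
// Vertex data: three vec3 positions, nothing else interleaved.
GLfloat vertices[] = {
     0.0f,  0.5f, 0.5f,
     0.5f, -0.5f, 0.5f,
    -0.5f, -0.5f, 0.5f,
};
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// Describe attribute 0 as three tightly packed floats per vertex.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (void*)0);
The vertex shader then reads the attribute instead of hardcoding the array, e.g. layout(location = 0) in vec3 position; followed by gl_Position = vec4(position, 1.0);.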
I recently encountered this issue, and I suspect the cause may be the same as yours.
I'm not familiar with g++; however, with Visual Studio, the build environment and the location your *.exe runs from when you're debugging can have an impact on this. For example, one such setting:
Project Properties -> General -> Output directory ->
Visual Studios Express - change debug output directory
And another similar issue here: "The system cannot find the file specified" when running C++ program.
You need to make sure, if you've changed the build environment and you're debugging from a different output directory, that all the relevant files are reachable relative to where the *.exe is being executed from.
This would explain why you had to resort to using "if (vSStream.is_open())", which I suspect fails, and then subsequently had to use the full file path of the shaders: the originally referenced files were not relative to the executable.
My issue was exactly like yours, but only in release mode. Once I copied my shaders into the release folder, where the *.exe could access them, the problem went away.
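One quick way to verify this (a sketch; it assumes C++17 for std::filesystem) is to print the working directory the process actually runs from and check whether the relative shader path resolves from there:
#include <filesystem>
#include <iostream>
int main() {
    // Relative paths such as "shaders/shader.vert" are resolved against this.
    std::cout << "cwd: " << std::filesystem::current_path() << '\n';
    std::cout << "shader found: " << std::boolalpha
              << std::filesystem::exists("shaders/shader.vert") << '\n';
}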
After letting my OpenGL program run for a while and viewing the scene from different angles, I am getting an OpenGL "invalid value" error in my shader program. This is literally my program:
Vertex
#version 420
in vec4 Position;
uniform mat4 modelViewProjection;
void main()
{
    gl_Position = modelViewProjection * Position;
}
Fragment
#version 420
out vec4 fragment;
void main()
{
fragment = vec4(1,0,0,1);
}
This error occurs right after the call that tells OpenGL to use my shader program. What could the cause be? It happens regardless of the object I call it on. How can I get more information about what is going on? The error occurs almost randomly for a series of frames, then works again for a while, fails again after a bit, etc.
If it helps, here is what my program linking looks like:
...
myShader = glCreateProgram();
CreateShader(myShader,GL_VERTEX_SHADER, "shaders/prog.vert");
CreateShader(myShader,GL_FRAGMENT_SHADER, "shaders/prog.frag");
glLinkProgram(myShader);
PrintProgramLog(myShader);
...
void CreateShader(int prog, const GLenum type, const char* file)
{
int shad = glCreateShader(type);
char* source = ReadText(file);
glShaderSource(shad,1,(const char**)&source,NULL);
free(source);
glCompileShader(shad);
PrintShaderLog(shad,file);
glAttachShader(prog,shad);
}
This is what I'm using to get the error:
void ErrCheck(const char* where)
{
int err = glGetError();
if (err) fprintf(stderr,"ERROR: %s [%s]\n",gluErrorString(err),where);
}
And here is what is being printed out at me:
ERROR: invalid value [drawThing]
It happens after the call to use the program:
glUseProgram(_knightShaders[0]);
ErrCheck("drawThing");
or glGetUniformLocation:
glGetUniformLocation(myShader, "modelViewProjection");
ErrCheck("drawThing2");
So I fixed the problem. What I had above wasn't the whole truth; what I actually had was:
myShader[0] = glCreateProgram();
myShader was an array of 4 GLuints, each element being a different shader program (although at this point they were all copies of the shader program I posted above). The problem was fixed when I stopped using an array and instead used:
GLuint myShader0;
GLuint myShader1;
GLuint myShader2;
GLuint myShader3;
Why this fixed the problem makes no sense to me. It's also pretty annoying, because rather than being able to index the shader mode I want, such as:
int mode = ... (code to determine which shader to use here)
glUseProgram(myShader[mode]);
I have to instead use conditionals:
int mode = ... (code to determine which shader to use here)
if (mode == 0) glUseProgram(myShader0);
else if (mode == 1) glUseProgram(myShader1);
else if (mode == 2) glUseProgram(myShader2);
else glUseProgram(myShader3);
If any of you know why this fixes the problem, I would very much appreciate the knowledge!