I'm reading the OpenGL Red Book, and I'm pretty much stuck at the first tutorial. Everything works fine if I use freeglut and glew, but I'd like to handle input and such myself. So I ditched freeglut and glew and wrote my own code. I've looked at some other tutorials and finished the code, but nothing is displayed. It seems like freeglut does some voodoo magic in the background, but I don't know what I'm missing. I've tried this:
int attributeListInt[19];
int pixelFormat[1];
unsigned int formatCount;
int result;
PIXELFORMATDESCRIPTOR pixelFormatDescriptor;
int attributeList[5];
context = GetDC (hwnd);
if (!context)
return -1;
attributeListInt[0] = WGL_SUPPORT_OPENGL_ARB;
attributeListInt[1] = TRUE;
attributeListInt[2] = WGL_DRAW_TO_WINDOW_ARB;
attributeListInt[3] = TRUE;
attributeListInt[4] = WGL_ACCELERATION_ARB;
attributeListInt[5] = WGL_FULL_ACCELERATION_ARB;
attributeListInt[6] = WGL_COLOR_BITS_ARB;
attributeListInt[7] = 24;
attributeListInt[8] = WGL_DEPTH_BITS_ARB;
attributeListInt[9] = 24;
attributeListInt[10] = WGL_DOUBLE_BUFFER_ARB;
attributeListInt[11] = TRUE;
attributeListInt[12] = WGL_SWAP_METHOD_ARB;
attributeListInt[13] = WGL_SWAP_EXCHANGE_ARB;
attributeListInt[14] = WGL_PIXEL_TYPE_ARB;
attributeListInt[15] = WGL_TYPE_RGBA_ARB;
attributeListInt[16] = WGL_STENCIL_BITS_ARB;
attributeListInt[17] = 8;
attributeListInt[18] = 0;
result = wglChoosePixelFormatARB (context, attributeListInt, NULL, 1, pixelFormat, &formatCount);
if (result != 1)
return -1;
result = SetPixelFormat (context, pixelFormat[0], &pixelFormatDescriptor);
if (result != 1)
return -1;
attributeList[0] = WGL_CONTEXT_MAJOR_VERSION_ARB;
attributeList[1] = 4;
attributeList[2] = WGL_CONTEXT_MINOR_VERSION_ARB;
attributeList[3] = 2;
attributeList[4] = 0;
rendercontext = wglCreateContextAttribsARB (context, 0, attributeList);
if (rendercontext == NULL)
return -1;
result = wglMakeCurrent (context, rendercontext);
if (result != 1)
return -1;
glClearDepth (1.0f);
glFrontFace (GL_CCW);
glEnable (GL_CULL_FACE);
glCullFace (GL_BACK);
return 0;
This sets up a graphics context, but is apparently not enough to make everything work. The tutorial didn't include anything about view or projection matrices, so I'm not sure whether I should add anything like that. But the window remains black.
This is the tutorial code, adjusted to my code:
#define BUFFER_OFFSET(offset) ((void *)(offset))
bool init ();
bool mainloop ();
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
int main (int argc, char** argv)
{
Window w;
w.init (&mainloop);
if (!init ())
return 0;
w.run ();
w.shutdown ();
return 0;
}
bool init ()
{
glGenVertexArrays (NumVAOs, VAOs);
glBindVertexArray (VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{-0.90f, -0.90f}, // Triangle 1
{0.85f, -0.90f},
{-0.90f, 0.85f},
{0.90f, -0.85f}, // Triangle 2
{0.90f, 0.90f},
{-0.85f, 0.90f}
};
glGenBuffers (NumBuffers, Buffers);
glBindBuffer (GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData (GL_ARRAY_BUFFER, sizeof(vertices),
vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{GL_VERTEX_SHADER, "triangles.vert"},
{GL_FRAGMENT_SHADER, "triangles.frag"},
{GL_NONE, NULL}
};
GLuint program = LoadShaders (shaders);
glUseProgram (program);
glVertexAttribPointer (vPosition, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET (0));
glEnableVertexAttribArray (vPosition);
return true;
}
bool mainloop ()
{
glClear (GL_COLOR_BUFFER_BIT);
glBindVertexArray (VAOs[Triangles]);
glDrawArrays (GL_TRIANGLES, 0, NumVertices);
glFlush ();
return true;
}
Creating an OpenGL context is not trivial. Especially if you want to use wglChoosePixelFormatARB, which must be loaded through the OpenGL extension mechanism… which requires a functioning OpenGL context to work in the first place. I think you see that this is kind of a chicken-and-egg problem. In addition, the window you use to create an OpenGL context requires certain attributes to work reliably. For one, the WNDCLASS should have the CS_OWNDC style set, and the window style should include WS_CLIPSIBLINGS | WS_CLIPCHILDREN.
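For illustration, a minimal sketch of a window registered and created with those attributes might look like this (WindowProc and the class name are placeholder names, not taken from the question):
// Sketch: window class and window suitable for OpenGL context creation.
WNDCLASS wc = {};
wc.style         = CS_OWNDC;                 // the window keeps its own device context
wc.lpfnWndProc   = WindowProc;               // placeholder window procedure
wc.hInstance     = GetModuleHandle(NULL);
wc.lpszClassName = TEXT("GLWindowClass");
RegisterClass(&wc);
HWND hwnd = CreateWindow(
    TEXT("GLWindowClass"), TEXT("OpenGL window"),
    WS_OVERLAPPEDWINDOW | WS_CLIPSIBLINGS | WS_CLIPCHILDREN,   // clip siblings/children for GL
    CW_USEDEFAULT, CW_USEDEFAULT, 800, 600,
    NULL, NULL, GetModuleHandle(NULL), NULL);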
I recently approached the aforementioned chicken-and-egg problem with my small wglarb helper tools: https://github.com/datenwolf/wglarb
It also comes with a small test program that shows how to use it.
I suggest you use the functions provided by that. I wrote this little helper library in a way that it is not negatively impacted if your program uses other extension loading mechanisms. I also took the time to make the code thread-safe: you can now use the exposed functions without having to care about synchronization; it's all done internally in a reliable way.
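If you'd rather not pull in a helper library, the usual way around the chicken-and-egg problem is the dummy-context bootstrap: create a throwaway window with a classic pixel format and a legacy context, load wglChoosePixelFormatARB / wglCreateContextAttribsARB through wglGetProcAddress, tear the dummy down, and only then create the real window and context with the loaded functions. A rough sketch, assuming wglext.h for the function-pointer typedefs and with error handling omitted:
// Sketch of the dummy-context bootstrap; assumes a registered window class "GLWindowClass".
HWND dummyWnd = CreateWindow(TEXT("GLWindowClass"), TEXT("dummy"), WS_OVERLAPPEDWINDOW,
                             0, 0, 1, 1, NULL, NULL, GetModuleHandle(NULL), NULL);
HDC dummyDC = GetDC(dummyWnd);
PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
SetPixelFormat(dummyDC, ChoosePixelFormat(dummyDC, &pfd), &pfd);
HGLRC dummyRC = wglCreateContext(dummyDC);   // legacy context, only used to load extensions
wglMakeCurrent(dummyDC, dummyRC);
// With a current context, the WGL extension entry points can be loaded.
PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
    (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
// Tear the dummy down; the real window and context are created afterwards.
wglMakeCurrent(NULL, NULL);
wglDeleteContext(dummyRC);
ReleaseDC(dummyWnd, dummyDC);
DestroyWindow(dummyWnd);
A separate throwaway window is used because SetPixelFormat may only be called once per window, so you cannot reuse the real window for the bootstrap.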
Related
This is my first big OpenGL project and I am confused about a new feature I want to implement.
I am working on a game engine. In my engine I have two classes: Renderer and CustomWindow. GLFW needs to be initialized, then an OpenGL context needs to be created, then GLEW can be initialized. There was no problem with this, until I decided to support creating multiple windows at the same time. Here are the things I am confused about:
Do I need to initialize GLEW for every window that is created? If not, can I still call glewInit() for every window creation and have everything be fine?
If I create a window and then destroy it, do I have to call glewInit() again, and will I have to call these functions again?:
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &numberOfTexturesSupported);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_MULTISAMPLE);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_PROGRAM_POINT_SIZE);
If there are any off-topic comments that would help, they are very welcome.
Update 1: More Context
For reference, the reason I want to do this is to implement rendering to multiple windows that share the same OpenGL context. Note, each window uses its own vertex array object (VAO). Here is the code for reference:
// CustomWindow.cpp
CustomWindow::CustomWindow() {
window = nullptr;
title = defaultTitle;
shouldClose = false;
error = false;
vertexArrayObjectID = 0;
frameRate = defaultFrameRate;
window = glfwCreateWindow(defaultWidth, defaultHeight, title.c_str(), nullptr, nullptr);
if (!window) {
error = true;
return;
}
glfwMakeContextCurrent(window);
if (glewInit() != GLEW_OK) {
error = true;
return;
}
glGenVertexArrays(1, &vertexArrayObjectID);
glBindVertexArray(vertexArrayObjectID);
allWindows.push_back(this);
}
CustomWindow::CustomWindow(int width, int height, const std::string& title, GLFWmonitor* monitor, GLFWwindow* share) {
window = nullptr;
this->title = title;
shouldClose = false;
error = false;
vertexArrayObjectID = 0;
frameRate = defaultFrameRate;
window = glfwCreateWindow(width, height, title.c_str(), monitor, share);
if (!window) {
error = true;
return;
}
glfwMakeContextCurrent(window);
glGenVertexArrays(1, &vertexArrayObjectID);
allWindows.push_back(this);
}
CustomWindow::~CustomWindow() {
if (window != nullptr || error)
glfwDestroyWindow(window);
unsigned int position = 0;
for (unsigned int i = 0; i < allWindows.size(); i++)
if (allWindows[i] == this) {
position = i;
break;
}
allWindows.erase(allWindows.begin() + position);
if (mainWindow == this)
mainWindow = nullptr;
}
// Renderer.cpp
Renderer::Renderer() {
error = false;
numberOfTexturesSupported = 0;
if (singleton != nullptr) {
error = true;
return;
}
singleton = this;
// Init GLFW
if (!glfwInit()) {
error = true;
return;
}
// Set window hints
glfwWindowHint(GLFW_MAXIMIZED, true);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
glfwWindowHint(GLFW_SAMPLES, 4);
// Init GLEW
if (glewInit() != GLEW_OK) {
error = true;
return;
}
// Set graphics message reporting
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageCallback(openglDebugCallback, nullptr);
// Set up OpenGL
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &numberOfTexturesSupported);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_MULTISAMPLE);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_PROGRAM_POINT_SIZE);
}
After some research, I would say that it depends; it's always best to have a look at the source to form an opinion.
The OpenGL wiki has some useful information to offer.
Loading OpenGL Functions is an important task for initializing OpenGL after creating an OpenGL context. You are strongly advised to use an OpenGL Loading Library instead of a manual process. However, if you want to know how it works manually, read on.
Windows
This function only works in the presence of a valid OpenGL context. Indeed, the function pointers it returns are themselves context-specific. The Windows documentation for this function states that the functions returned may work with another context, depending on the vendor of that context and that context's pixel format.
In practice, if two contexts come from the same vendor and refer to the same GPU, then the function pointers pulled from one context will work in the other.
Linux and X-Windows
This function can operate without an OpenGL context, though the functions it returns obviously can't. This means that functions are not associated with a context in any way.
If you take a look into the source code of glew (./src/glew.c), you will see that the lib simply calls the loading procedures of the underlying system and assigns the results of those calls to the global function pointers.
In other words, calling glewInit multiple times has no side effect other than the one explained in the OpenGL wiki.
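So if you do create one context per window, a defensive pattern is to re-run glewInit() right after each glfwMakeContextCurrent(), so the loaded function pointers are known to match the current context; with contexts from the same vendor and GPU this is usually redundant, but it is harmless. A sketch under those assumptions (error handling trimmed):
// Sketch: per-window context creation with GLEW re-initialized for that context.
GLFWwindow* win = glfwCreateWindow(800, 600, "window", nullptr, nullptr);
glfwMakeContextCurrent(win);
glewExperimental = GL_TRUE;          // needed for core profiles with some GLEW versions
if (glewInit() != GLEW_OK) {
    // handle failure
}
// Per-context state (glEnable, glBlendFunc, ...) also has to be repeated here,
// because such state belongs to the context, not to the process.
Note that the glEnable/glBlendFunc calls from the question set per-context state, so when a window gets its own context they have to be issued again for that context.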
Another question would be: do you really need multiple windows for that task? A different approach could be achieved with only one context and multiple framebuffer objects.
Multiple contexts (sharing resources between them) and event handling (which can only be done from the 'main' thread) need proper synchronization and multiple context switches.
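For reference, the single-context alternative would render each view into its own framebuffer object and present the results from one window. A minimal FBO setup sketch (size and names are placeholders):
// Sketch: one offscreen render target per view, all owned by a single context.
GLuint colorTex = 0, fbo = 0;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 800, 600, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle an incomplete framebuffer
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer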
What I'm trying to do is replace the window I'm rendering to with a new window, which could happen because the user switches screens, switches from fullscreen to windowed, or for any number of other reasons.
My code so far looks like this:
"Context.h"
struct window_deleter {
void operator()(GLFWwindow * window) const;
};
class context {
std::unique_ptr<GLFWwindow, window_deleter> window;
public:
context(int width, int height, const char * s, GLFWmonitor * monitor, GLFWwindow * old_window, bool borderless);
GLFWwindow * get_window() const;
void make_current() const;
};
"Context.cpp"
context::context(int width, int height, const char * s, GLFWmonitor * monitor, GLFWwindow * old_window, bool borderless) {
if (!glfwInit()) throw std::runtime_error("Unable to Initialize GLFW");
if (borderless) glfwWindowHint(GLFW_DECORATED, 0);
else glfwWindowHint(GLFW_DECORATED, 1);
window.reset(glfwCreateWindow(width, height, s, monitor, old_window));
if (!window) throw std::runtime_error("Unable to Create Window");
make_current();
}
GLFWwindow * context::get_window() const {
return window.get();
}
void context::make_current() const {
glfwMakeContextCurrent(window.get());
}
"WindowManager.h"
#include "Context.h"
class window_style;
/* window_style is basically a really fancy "enum class", and I don't
* believe its implementation or interface are relevant to this project.
* I'll add it if knowing how it works is super critical.
*/
class window_manager {
context c_context;
uint32_t c_width, c_height;
std::string c_title;
window_style c_style;
std::function<bool()> close_test;
std::function<void()> poll_task;
public:
static GLFWmonitor * get_monitor(window_style style);
window_manager(uint32_t width, uint32_t height, std::string const& title, window_style style);
context & get_context();
const context & get_context() const;
bool resize(uint32_t width, uint32_t height, std::string const& title, window_style style);
std::function<bool()> get_default_close_test();
void set_close_test(std::function<bool()> const& test);
std::function<void()> get_default_poll_task();
void set_poll_task(std::function<void()> const& task);
void poll_loop();
};
"WindowManager.cpp"
GLFWmonitor * window_manager::get_monitor(window_style style) {
if (style.type != window_style::style_type::fullscreen) return nullptr;
if (!glfwInit()) throw std::runtime_error("Unable to initialize GLFW");
int count;
GLFWmonitor ** monitors = glfwGetMonitors(&count);
if (style.monitor_number >= uint32_t(count)) throw invalid_monitor_exception{};
return monitors[style.monitor_number];
}
std::function<bool()> window_manager::get_default_close_test() {
return [&] {return glfwWindowShouldClose(c_context.get_window()) != 0; };
}
window_manager::window_manager(uint32_t width, uint32_t height, std::string const& title, window_style style) :
c_context(int(width), int(height), title.c_str(), get_monitor(style), nullptr, style.type == window_style::style_type::borderless),
c_width(width), c_height(height), c_title(title), c_style(style), close_test(get_default_close_test()), poll_task(get_default_poll_task()) {
}
context & window_manager::get_context() {
return c_context;
}
const context & window_manager::get_context() const {
return c_context;
}
bool window_manager::resize(uint32_t width, uint32_t height, std::string const& title, window_style style) {
if (width == c_width && height == c_height && title == c_title && style == c_style) return false;
c_width = width;
c_height = height;
c_title = title;
c_style = style;
c_context = context(int(width), int(height), title.c_str(), get_monitor(style), get_context().get_window(), style.type == window_style::style_type::borderless);
return true;
}
void window_manager::set_close_test(std::function<bool()> const& test) {
close_test = test;
}
std::function<void()> window_manager::get_default_poll_task() {
return [&] {glfwSwapBuffers(c_context.get_window()); };
}
void window_manager::set_poll_task(std::function<void()> const& task) {
poll_task = task;
}
void window_manager::poll_loop() {
while (!close_test()) {
glfwPollEvents();
poll_task();
}
}
"Main.cpp"
int main() {
try {
glfwInit();
const GLFWvidmode * vid_mode = glfwGetVideoMode(glfwGetPrimaryMonitor());
gl_backend::window_manager window(vid_mode->width, vid_mode->height, "First test of the window manager", gl_backend::window_style::fullscreen(0));
glfwSetKeyCallback(window.get_context().get_window(), [](GLFWwindow * window, int, int, int, int) {glfwSetWindowShouldClose(window, 1); });
glbinding::Binding::initialize();
//Anything with a "glresource" prefix is basically just a std::shared_ptr<GLuint>
//with some extra deletion code added.
glresource::vertex_array vao;
glresource::buffer square;
float data[] = {
-.5f, -.5f,
.5f, -.5f,
.5f, .5f,
-.5f, .5f
};
gl::glBindVertexArray(*vao);
gl::glBindBuffer(gl::GL_ARRAY_BUFFER, *square);
gl::glBufferData(gl::GL_ARRAY_BUFFER, sizeof(data), data, gl::GL_STATIC_DRAW);
gl::glEnableVertexAttribArray(0);
gl::glVertexAttribPointer(0, 2, gl::GL_FLOAT, false, 2 * sizeof(float), nullptr);
std::string vert_src =
"#version 430\n"
"layout(location = 0) in vec2 vertices;"
"void main() {"
"gl_Position = vec4(vertices, 0, 1);"
"}";
std::string frag_src =
"#version 430\n"
"uniform vec4 square_color;"
"out vec4 fragment_color;"
"void main() {"
"fragment_color = square_color;"
"}";
glresource::shader vert(gl::GL_VERTEX_SHADER, vert_src);
glresource::shader frag(gl::GL_FRAGMENT_SHADER, frag_src);
glresource::program program({ vert, frag });
window.set_poll_task([&] {
gl::glUseProgram(*program);
gl::glBindVertexArray(*vao);
glm::vec4 color{ (glm::sin(float(glfwGetTime())) + 1) / 2, 0.f, 0.5f, 1.f };
gl::glUniform4fv(gl::glGetUniformLocation(*program, "square_color"), 1, glm::value_ptr(color));
gl::glDrawArrays(gl::GL_QUADS, 0, 4);
glfwSwapBuffers(window.get_context().get_window());
});
window.poll_loop();
window.resize(vid_mode->width, vid_mode->height, "Second test of the window manager", gl_backend::window_style::fullscreen(1));
glfwSetKeyCallback(window.get_context().get_window(), [](GLFWwindow * window, int, int, int, int) {glfwSetWindowShouldClose(window, 1); });
window.poll_loop();
}
catch (std::exception const& e) {
std::cerr << e.what() << std::endl;
std::ofstream error_log("error.log");
error_log << e.what() << std::endl;
system("pause");
}
return 0;
}
So the current version of the code is supposed to do the following:
Display a fullscreen window on the primary monitor
On this monitor, display a "square" (rectangle, really....) that over time transitions between magenta and blue, while the background transitions between magenta and a green-ish color.
When the user presses a key, create a new fullscreen window on the second monitor using the first window's context to feed into GLFW's window creation, and destroy the original window (in that order)
Display the same rectangle on this second window
Continue to transition the background periodically
When the user presses a key again, destroy the second window and exit the program.
Of these steps, step 4 doesn't work at all, and step 3 partially works: the window does get created, but it doesn't display by default, and the user has to call it up via the taskbar. All the other steps work as expected, including the transitioning background on both windows.
So my assumption is that something is going wrong with respect to the object sharing between contexts; specifically, it doesn't appear that the second context I'm creating is receiving the objects created by the first context. Is there an obvious logic error I'm making? Should I be doing something else to ensure that context sharing works as intended? Is it possible that there's just a bug in GLFW?
So my assumption is that something is going wrong with respect to the object sharing between contexts; specifically, it doesn't appear that the second context I'm creating is receiving the objects created by the first context. Is there an obvious logic error I'm making?
Yes, your premise is just wrong. Shared OpenGL contexts will not share the whole state, just the "big" objects which actually hold user-specific data (like VBOs, textures, shaders and programs, renderbuffers and so on), and not the ones which only reference them; state containers like VAOs, FBOs and so on are never shared.
Should I be doing something else to ensure that context sharing works as intended?
Well, if you really want to go that route, you have to re-build all those state containers, and also restore the global state (all those glEnables, the depth buffer setting, blending state, tons of other things) of your original context.
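To illustrate what "re-building the state containers" means here: after making the second context current, the shared buffer (*square in your Main.cpp) still exists, but the VAO from the first context does not, so a fresh VAO has to be created and described again. A sketch along the lines of your code, not a drop-in fix:
// Sketch: recreate and re-describe a VAO in the new context; the shared VBO is still valid.
gl::GLuint new_vao = 0;
gl::glGenVertexArrays(1, &new_vao);
gl::glBindVertexArray(new_vao);
gl::glBindBuffer(gl::GL_ARRAY_BUFFER, *square);   // shared object, survives the context switch
gl::glEnableVertexAttribArray(0);
gl::glVertexAttribPointer(0, 2, gl::GL_FLOAT, false, 2 * sizeof(float), nullptr);
The same applies to the global state: every glEnable, blend and depth setting from the old context has to be set again in the new one.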
However, I find your whole concept doubtful here. You do not need to destroy a window when going from fullscreen to windowed, or to a different monitor on the same GPU, and GLFW directly supports that via glfwSetWindowMonitor().
And even if you do re-create a window, this does not imply that you have to re-create the GL context. There might be some restrictions imposed by GLFW's API in that regard, but the underlying concepts are separate. You can basically make the old context current in the new window and be done with it. GLFW just inseparably links window and context together, which is kind of an unfortunate abstraction.
However, the only scenario I could imagine where re-creating the window would really be necessary is one where different screens are driven by different GPUs; but GL context sharing won't work across different GL implementations anyway, so even in that scenario you would have to rebuild the whole context state.
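As an illustration of the glfwSetWindowMonitor() route (available since GLFW 3.2), switching the same window, and therefore the same context and all of its objects, between fullscreen and windowed could look roughly like this:
// Sketch: toggle an existing window between fullscreen and windowed,
// keeping the GL context and every object it owns alive.
void set_fullscreen(GLFWwindow* window, bool fullscreen) {
    if (fullscreen) {
        GLFWmonitor* monitor = glfwGetPrimaryMonitor();
        const GLFWvidmode* mode = glfwGetVideoMode(monitor);
        glfwSetWindowMonitor(window, monitor, 0, 0, mode->width, mode->height, mode->refreshRate);
    } else {
        glfwSetWindowMonitor(window, nullptr, 100, 100, 1280, 720, GLFW_DONT_CARE);  // arbitrary windowed size
    }
}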
*EDIT* I rearranged the initialization list, as suggested by much_a_chos, so that the Window object initializes before the Game object, ensuring that glew is initialized first. However, this did not work:
//Rearranged initialization list
class TempCore
{
public:
TempCore(Game* g) :
win(new Window(800, 800, "EngineTry", false)), gamew(g) {}
~TempCore() { if(gamew) delete gamew; }
...
};
And here is the code I changed in the Mesh constructor when the above didn't work:
Mesh::Mesh( Vertex* vertices, unsigned int numVerts )
{
m_drawCount = numVerts;
glewExperimental = GL_TRUE;
if(glewInit() != GLEW_OK){
exit(-150); //application stops and exits here with the code -150
}
glGenVertexArrays(1, &m_vertexArrayObject);
glBindVertexArray(m_vertexArrayObject);
...
}
What happens when I compile and run is surprising. The program exits at the if(glewInit() != GLEW_OK) I copied from the Window constructor. For some reason glew initializes properly in the Window constructor (which is called before the Game constructor), but it fails to initialize when called the second time in the Mesh constructor. I assume it's bad practice to call glewInit() more than once in a program, but I don't think it should fail if I actually did so. Does anybody know what might be happening? Am I making a mistake in calling glewInit() more than once?
*END OF EDIT*
I've been following a 3D Game Engine Development tutorial and I've encountered a weird bug in my code, which I will demonstrate below. I'm attempting to make my own game engine purely for educational reasons. I'm using Code::Blocks 13.12 as my IDE and mingw-w64 v4.0 as my compiler. I'm also using SDL2, glew, Assimp and boost as my third-party libraries.
I apologize in advance for the numerous code extracts, but I put in what I thought what was necessary to understand the context of the error.
I have a Core class for my game engine that holds the main loop and updates and renders accordingly, calling the Game class update() and render() methods in the process as well. The Game class is intended as the holder for all the assets in the game and will be the base class for any games made using the engine; thus it contains mesh, texture and camera references. The Game class update(), render() and input() methods are all virtual, as the Game class is meant to be derived from.
My problem is: when I initialize the Game member variable in the Core class, I get a SIGSEGV (i.e. segmentation fault) in the Mesh object's constructor at the glGenVertexArrays call.
However, when I move my Game object out of the Core class and straight into the main method (so I changed it from being a class member to a simple scoped variable in the main method), along with the necessary parts from the Core class, then its runs perfectly and renders my rudimentary triangle example. This is a bug I've never come across and I would really appreciate any help I can get.
Below is an extract of my morphed code that ran perfectly and rendered the triangle:
int WINAPI WinMain (HINSTANCE hThisInstance, HINSTANCE hPrevInstance, LPSTR lpszArgument, int nCmdShow)
{
Window win(800, 800, "EngineTry", false); //Creates an SDL implemented window with a GL_context
Game* gamew = new Game;
const double frameTime = 1.0 / 500; //500 = maximum fps
double lastTime = FTime::getTime(); //gets current time in milliseconds
double unprocessedTime = 0.0;
int frames = 0;
double frameCounter = 0;
while(win.isRunning()){
bool _render = false;
double startTime = FTime::getTime();
double passedTime = startTime - lastTime;
lastTime = startTime;
unprocessedTime += passedTime / (double)FTime::SECOND;
frameCounter += passedTime;
while(unprocessedTime > frameTime){
if(!win.isRunning())
exit(0);
_render = true;
unprocessedTime -= frameTime;
FTime::delta = frameTime;
gamew->input();
Input::update();
gamew->update();
if(frameCounter >= FTime::SECOND)
{
std::cout << "FPS: " << frames << std::endl;
frames = 0;
frameCounter = 0;
}
}
if(_render){
RenderUtil::clearScreen(); //simple wrapper to the glClear function
gamew->render();
win.Update();
frames++;
}else{
Sleep(1);
}
}
delete gamew;
return 0;
}
Here is an extract of my modified Core class that doesn't work (throws the sigsegv in the Mesh constructor)
class TempCore
{
public:
TempCore(Game* g) :
gamew(g), win(800, 800, "EngineTry", false) {}
~TempCore() { if(gamew) delete gamew; }
void start();
private:
Window win;
Game* gamew;
};
int WINAPI WinMain (HINSTANCE hThisInstance, HINSTANCE hPrevInstance, LPSTR lpszArgument, int nCmdShow)
{
TempCore m_core(new Game());
m_core.start();
return 0;
}
void TempCore::start()
{
const double frameTime = 1.0 / 500;
double lastTime = FTime::getTime();
double unprocessedTime = 0.0;
int frames = 0;
double frameCounter = 0;
while(win.isRunning()){
bool _render = false;
double startTime = FTime::getTime();
double passedTime = startTime - lastTime;
lastTime = startTime;
unprocessedTime += passedTime / (double)FTime::SECOND;
frameCounter += passedTime;
while(unprocessedTime > frameTime){
if(!win.isRunning())
exit(0);
_render = true;
unprocessedTime -= frameTime;
FTime::delta = frameTime;
gamew->input();
Input::update();
gamew->update();
if(frameCounter >= FTime::SECOND){
//double totalTime = ((1000.0 * frameCounter)/((double)frames));
//double totalMeasuredTime = 0.0;
std::cout << "Frames: " << frames << std::endl;
//m_frames_per_second = frames;
frames = 0;
frameCounter = 0;
}
}
if(_render){
RenderUtil::clearScreen();
gamew->render();
win.Update();
frames++;
}else{
Sleep(1);
}
}
}
Mesh constructor where the SIGSEGV occurs in the above TempCore implementation:
Mesh::Mesh( Vertex* vertices, unsigned int numVerts )
{
m_drawCount = numVerts;
glGenVertexArrays(1, &m_vertexArrayObject); //sigsegv occurs here
glBindVertexArray(m_vertexArrayObject);
std::vector<glm::vec3> positions;
std::vector<glm::vec2> texCoords;
positions.reserve(numVerts);
texCoords.reserve(numVerts);
for(unsigned i = 0; i < numVerts; i++){
positions.push_back(vertices[i].pos);
texCoords.push_back(vertices[i].texCoord);
}
glGenBuffers(NUM_BUFFERS, m_vertexArrayBuffers);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexArrayBuffers[POSITION_VB]);
glBufferData(GL_ARRAY_BUFFER, numVerts*sizeof(positions[0]), &positions[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexArrayBuffers[TEXCOORD_VB]);
glBufferData(GL_ARRAY_BUFFER, numVerts*sizeof(texCoords[0]), &texCoords[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
glBindVertexArray(0);
}
The Game constructor that initializes the Mesh object:
Vertex vertices[] = { Vertex(-0.5f, -0.5f, 0, 0, 0),
Vertex(0, 0.5f, 0, 0.5f, 1.0f),
Vertex(0.5f, -0.5f, 0, 1.0f, 0)};
//Vertex is basically a struct with a glm::vec3 for position and a glm::vec2 for texture coordinate
Game::Game() :
m_mesh(vertices, sizeof(vertices)/sizeof(vertices[0])),
m_shader("res\\shaders\\basic_shader"),
m_texture("res\\textures\\mist_tree.jpg")
{
}
The Window class constructor that initializes glew:
Window::Window(int width, int height, const std::string& title, bool full_screen) :
m_fullscreen(full_screen)
{
SDL_Init(SDL_INIT_EVERYTHING);
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
//SDL_Window* in private of class declaration
m_window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
//SDL_GLContext in private of class declaration
m_glContext = SDL_GL_CreateContext(m_window);
std::cout << "GL Version: " << glGetString(GL_VERSION) << std::endl;
glewExperimental = GL_TRUE;
if(glewInit() != GLEW_OK || !glVersionAbove(3.0)){
std::cerr << "Glew failed to initialize...\n";
exit(-150);
}
}
A long shot here, since the given amount of information is pretty big. I searched for similar questions like this one and this one, but every one of them has been answered with things done in the Window class constructor that have to happen before the Game constructor. And as I can see in your TempCore constructor, you build your Game object (and make a call to glGenVertexArrays) before your Window object is constructed:
...
TempCore(Game* g) :
gamew(g), win(800, 800, "EngineTry", false) {}
...
That is, before the calls that create your OpenGL context with SDL_GL_CreateContext(m_window) and before glewExperimental = GL_TRUE; glewInit();. And then you say that putting it in main in this order solves the problem...
...
Window win(800, 800, "EngineTry", false); //Creates an SDL implemented window with a GL_context
Game* gamew = new Game;
...
Maybe reordering your initialization list in your constructor like this could solve your problem?
class TempCore
{
public:
TempCore(Game* g) :
win(800, 800, "EngineTry", false), gamew(g) {}
~TempCore() { if(gamew) delete gamew; }
...
};
UPDATE
I was wrong: as stated in the comments, the initializer list order doesn't matter. It's the declaration order of the members that matters, which is correct here...
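To spell that out: in C++, non-static members are constructed in the order they are declared in the class, regardless of the order in the constructor's initializer list. A tiny illustration using the classes from the question:
// 'win' is declared before 'gamew', so it is constructed first,
// even though the initializer list names 'gamew' first.
class TempCore {
public:
    TempCore(Game* g) : gamew(g), win(800, 800, "EngineTry", false) {}
private:
    Window win;    // declared first  -> constructed first
    Game*  gamew;  // declared second -> constructed second
};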
Thanks to both #much_a_chos and #vu1p3n0x for your help. Turns out much_a_chos had the right idea: the Game object was initializing before the Window object, thus missing the glewInit() call altogether and resulting in the SIGSEGV error. The problem, however, was not in the initializer list but in the main.cpp file. I was creating a Game object and then passing it via pointer to the core class, so regardless of how I arranged the Core class, the Game object would always be constructed before the Window class and would therefore always make its glGenVertexArrays call before glewInit() was called. This was a terrible logic error on my side and I apologize for wasting your time.
Below are extracts from the fixed main.cpp file and the fixed TempCore class (please keep in mind that these are temporary fixes to illustrate how I would go about fixing my mistake):
class TempCore
{
public:
TempCore(Window* w, Game* g) : // take in a Window class pointer to ensure it's created before the Game class constructor
win(w), gamew(g) {}
~TempCore() { if(gamew) delete gamew; }
void start();
private:
Window* win;
Game* gamew;
};
int WINAPI WinMain (HINSTANCE hThisInstance, HINSTANCE hPrevInstance, LPSTR lpszArgument, int nCmdShow)
{
Window* win = new Window(800, 800, "EngineTry", false); //this way the Window constructor with the glewinit() call is called before the Game contructor
TempCore m_core(win, new Game());
m_core.start();
return 0;
}
Addressing your edit: You should not call glewInit() more than once. I'm not familiar with glew in this regard but in general, anything should only be "initialized" once. glew probably assumes that it is uninitialized and errors out when some initialization is already there.
I'd recommend calling glewInit() at the very beginning of the program and not in an object constructor. (Unless you have that object "own" glew)
Edit: It seems my assumption about glewInit() was slightly wrong. glewInit() behaves differently depending on the build, but regardless should only be called if you switch contexts. However, because you aren't changing context (from what I see) you should not call it more than once.
I am making a two-dimensional image in OpenGL with C++, and am running into an interesting issue. Whenever I try to draw a partially transparent polygon on my image, it makes the window itself partially transparent where the polygon is. For example, I can see whatever is behind my window (e.g. my code) when I am running the program (which I don't want). I can also see the image behind the polygon (which I do want). Is there any way I can turn the "transparent window" behavior off? I have included what I feel to be relevant portions of the code below:
glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // I have tried 1.0f for the alpha value too
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
glEnable(GL_BLEND);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glPolygonMode (GL_FRONT_AND_BACK, GL_FILL);
glHint(GL_POINT_SMOOTH_HINT, GL_FASTEST);
glDisable(GL_POINT_SMOOTH);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
// other code to draw my opaque "background" object
// Draw my partially transparent quad (note: this is where the window itself becomes partially transparent)
glBegin(GL_QUADS); // Begin drawing quads
glColor4f(1.0,1.0,1.0,0.5); // Make a white quad with .5 alpha
glVertex2f(-0.5, 0.5);
glVertex2f(0.5, .05);
glVertex2f(0.5, -0.5);
glVertex2f(-0.5, -0.5);
glEnd();
Other relevant information:
I am running CentOS 6
I am fairly new to OpenGL, and am working on the code after a prior developer, so I could be missing something trivial
It is using the X Window System
Here is the X window creation code for further debugging; the problem is likely here rather than in the OpenGL code above.
/* The simplest possible Linux OpenGL program? Maybe...
Modification for creating a RGBA window (transparency with compositors)
by Wolfgang 'datenwolf' Draxinger
(c) 2002 by FTB. See me in comp.graphics.api.opengl
(c) 2011 Wolfgang Draxinger. See me in comp.graphics.api.opengl and on StackOverflow
License agreement: This source code is provided "as is". You
can use this source code however you want for your own personal
use. If you give this source code to anybody else then you must
leave this message in it.
--
<\___/>
/ O O \
\_____/ FTB.
--
datenwolf
------------------------------------------------------------------------*/
static void createTheWindow() {
XEvent event;
int x, y, attr_mask;
XSizeHints hints;
XWMHints *StartupState;
XTextProperty textprop;
XSetWindowAttributes attr;
static char *title = "Fix me";
/* Connect to the X server */
Xdisplay = XOpenDisplay(NULL);
if (!Xdisplay)
{
fatalError("Couldn't connect to X server\n");
}
Xscreen = DefaultScreen(Xdisplay);
Xroot = RootWindow(Xdisplay, Xscreen) ;
fbconfigs = glXChooseFBConfig(Xdisplay, Xscreen, VisData, &numfbconfigs);
for (int i = 0; i < numfbconfigs; i++)
{
visual = (XVisualInfo_CPP*) glXGetVisualFromFBConfig(Xdisplay,
fbconfigs[i]);
if (!visual)
continue;
pictFormat = XRenderFindVisualFormat(Xdisplay, visual->visual);
if (!pictFormat)
continue;
if (pictFormat->direct.alphaMask > 0)
{
fbconfig = fbconfigs[i];
break;
}
}
/* Create a colormap - only needed on some X clients, eg. IRIX */
cmap = XCreateColormap(Xdisplay, Xroot, visual->visual, AllocNone);
/* Prepare the attributes for our window */
attr.colormap = cmap;
attr.border_pixel = 0;
attr.event_mask = StructureNotifyMask | EnterWindowMask | LeaveWindowMask
| ExposureMask | ButtonPressMask | ButtonReleaseMask
| OwnerGrabButtonMask | KeyPressMask | KeyReleaseMask;
attr.background_pixmap = None;
attr_mask = CWBackPixmap | CWColormap | CWBorderPixel | CWEventMask; /* What's in the attr data */
width = DisplayWidth(Xdisplay, DefaultScreen(Xdisplay)) ;
height = DisplayHeight(Xdisplay, DefaultScreen(Xdisplay)) ;
x = width / 2, y = height / 2;
// x=0, y=10;
/* Create the window */
attr.do_not_propagate_mask = NoEventMask;
WindowHandle = XCreateWindow(Xdisplay, /* Screen */
Xroot, /* Parent */
x, y, width, height,/* Position */
1,/* Border */
visual->depth,/* Color depth*/
InputOutput,/* klass */
visual->visual,/* Visual */
attr_mask, &attr);/* Attributes*/
if (!WindowHandle)
{
fatalError("Couldn't create the window\n");
}
/* Configure it... (ok, ok, this next bit isn't "minimal") */
textprop.value = (unsigned char*) title;
textprop.encoding = XA_STRING;
textprop.format = 8;
textprop.nitems = strlen(title);
hints.x = x;
hints.y = y;
hints.width = width;
hints.height = height;
hints.flags = USPosition | USSize;
StartupState = XAllocWMHints();
StartupState->initial_state = NormalState;
StartupState->flags = StateHint;
XSetWMProperties(Xdisplay, WindowHandle, &textprop, &textprop,/* Window title/icon title*/
NULL, 0,/* Argv[], argc for program*/
&hints, /* Start position/size*/
StartupState,/* Iconised/not flag */
NULL);
XFree(StartupState);
/* Open it, wait for it to appear */
int event_base, error_base = 0;
XMapWindow(Xdisplay, WindowHandle);
// }
XIfEvent(Xdisplay, &event, WaitForMapNotify, (char*) &WindowHandle);
/* Set the kill atom so we get a message when the user tries to close the window */
if ((del_atom = XInternAtom(Xdisplay, "WM_DELETE_WINDOW", 0)) != None)
{
XSetWMProtocols(Xdisplay, WindowHandle, &del_atom, 1);
}
}
Here are the settings for VisData:
static int VisData[] = { GLX_RENDER_TYPE, GLX_RGBA_BIT, GLX_DRAWABLE_TYPE,
GLX_WINDOW_BIT, GLX_DOUBLEBUFFER, True, GLX_RED_SIZE, 1, GLX_GREEN_SIZE,
1, GLX_BLUE_SIZE, 1, GLX_ALPHA_SIZE, 1, GLX_DEPTH_SIZE, 1,
None
};
Here is where the rendering context is created:
static void createTheRenderContext() {
/* See if we can do OpenGL on this visual */
int dummy;
if (!glXQueryExtension(Xdisplay, &dummy, &dummy))
{
fatalError("OpenGL not supported by X server\n");
}
/* Create the OpenGL rendering context */
RenderContext = glXCreateNewContext(Xdisplay, fbconfig, GLX_RGBA_TYPE, 0,
True);
if (!RenderContext)
{
fatalError("Failed to create a GL context\n");
}
GLXWindowHandle = glXCreateWindow(Xdisplay, fbconfig, WindowHandle, NULL);
/* Make it current */
if (!glXMakeContextCurrent(Xdisplay, GLXWindowHandle, GLXWindowHandle,
RenderContext))
{
fatalError("glXMakeCurrent failed for window\n");
}
}
What ratchet freak suggested (the Aero Glass effect in Windows) does not happen by accident, because one has to manually enable DWM transparency for it to happen.
However, in X11/GLX it is perfectly possible to end up with a visual that has an alpha channel by default. If you want to reliably get a window that does or does not have an alpha channel, the code gets a bit more complex than what most toolkits do.
The code you're using looks strikingly familiar. To be specific, it seems to originate from a code sample I wrote about how to create a transparent window (you see where this is going), namely this code:
https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/x11argb_opengl/x11argb_opengl.c
The key sequence is this:
fbconfigs = glXChooseFBConfig(Xdisplay, Xscreen, VisData, &numfbconfigs);
fbconfig = 0;
for(int i = 0; i<numfbconfigs; i++) {
visual = (XVisualInfo*) glXGetVisualFromFBConfig(Xdisplay, fbconfigs[i]);
if(!visual)
continue;
pict_format = XRenderFindVisualFormat(Xdisplay, visual->visual);
if(!pict_format)
continue;
fbconfig = fbconfigs[i];
if(pict_format->direct.alphaMask > 0) {
break;
}
}
What this does is, it selects an X11 Visual that matches one of the previously selected FBConfigs that also contains an alpha mask.
If I had to make a bet, I'd suspect that the VisData array you passed to glXChooseFBConfig does not specify an alpha channel. So what happens is that you may end up with a window that has an X11 alpha mask, but not an alpha channel accessible to OpenGL.
Since I never intended that code to be used for windows that don't have an alpha channel, it only does what's originally intended if VisData actually selects for an alpha channel.
You have now two options:
implement a complementary test if(pict_format->direct.alphaMask == 0 && no_alpha_in(VisData)) break;
select for an alpha channel in VisData and clear the alpha channel to 1.0 with OpenGL glClearColor(…,…,…,1.0f); (a rough sketch of this option follows below)
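A minimal sketch of that second option: clear the destination alpha to fully opaque every frame, and (as an extra measure of my own, not part of the answer above) mask off alpha writes while drawing blended geometry so the blended quad cannot lower the window's alpha again:
// Sketch: keep the window opaque even though the visual has an alpha channel.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);               // alpha = 1.0 -> opaque window
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);    // keep RGB writes, freeze destination alpha
// ... draw the partially transparent quad ...
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     // restore full writes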
This is not an OpenGL problem, but rather a matter of the kind of window you are creating. I suspect you are running a window manager that supports transparency effects. Either way, what is probably happening is that when you render the transparent poly, the window canvas ends up with some alpha, and your window manager assumes that you want the background transparent. Turn off all advanced effects of your window manager to check.
I am not familiar with window creation code using Xlib, but it probably has to do with the kind of window you are creating.
I read a few tutorials about OpenGL and now I'm trying to use it with SDL. The thing is that when I use SDL_GL_SwapBuffers() in a while loop the window just freezes. Here's some code:
#include "SDL.h"
#include "system.h"
#include "SDL_opengl.h"
System Sys(800, 600, 32);
SDL_Event kpress;
int main( int argc, char* args[] )
{
Sys.init();
bool quit = false;
while (!quit)
{
while (SDL_PollEvent(&kpress)) if(kpress.type == SDL_QUIT) quit = true;
glClear(GL_COLOR_BUFFER_BIT);
SDL_GL_SwapBuffers();
}
SDL_Quit();
return 0;
}
------------------------------These are in system.h, class System----------------
bool System::init()
{
if (SDL_Init(SDL_INIT_EVERYTHING) < 0)
{
errorCode = 1;
return false;
}
if (SDL_SetVideoMode(screen_h, screen_w, bpp, SDL_OPENGL) == 0)
{
errorCode = 2;
return false;
}
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
if (!init_GL())
{
errorCode = 3;
return false;
}
SDL_WM_SetCaption("Engine", 0);
return true;
}
bool System::init_GL()
{
glClearColor(1, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, screen_h, screen_w, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
if (glGetError() != GL_NO_ERROR) return false;
return true;
}
If I draw some shapes or use a timer for limiting FPS - nothing changes.
Do you have any ideas?
My first advice: get rid of that bogus System class. So far all the tasks it does are purely sequential/procedural, and that should be reflected in the program's outline. People tend to put everything into classes just because they're taught to see everything in terms of object models. But this System class would have to follow the singleton pattern, which, in my opinion, is an anti-pattern.
All the stuff you placed in init_GL belongs in the rendering loop. OpenGL initialization ends after creating a render context. OpenGL state is not initialized, it is set on demand. OpenGL objects are initialized too, but also on demand.
Also, you're not using glGetError correctly. It needs to be called in a loop until no more errors are reported. It thus makes little sense to bail out if a GL error is reported; OpenGL errors should be considered diagnostic.
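A drain loop in that sense might look like this (purely illustrative):
// Sketch: fetch all pending errors; glGetError returns one queued error per call.
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR) {
    fprintf(stderr, "GL error: 0x%x\n", err);   // log and keep going instead of bailing out
}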
SDL_GL_SetAttribute must be called before SDL_SetVideoMode, so you're probably not double buffering.
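Concretely, in System::init the attribute call would move above the SDL_SetVideoMode call, something like this (only the relevant lines shown):
// Sketch: request double buffering before the video mode is set.
if (SDL_Init(SDL_INIT_EVERYTHING) < 0) return false;
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);                       // must precede SDL_SetVideoMode
if (SDL_SetVideoMode(screen_h, screen_w, bpp, SDL_OPENGL) == 0) return false;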
Hey, you aren't calling the function which initializes OpenGL: init_GL()
You're not checking the result from System::init; it might be failing somewhere in that function and not setting up your initial state correctly.
SDL #defines main() to be SDL_main() so it can do some extra initialization before program start, which you seem to be bypassing via the statically initialized class.
Try constructing your System object in main().
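A quick way to test that suggestion, keeping everything else the same, is to drop the global and construct the object inside main() so it runs after SDL's startup code:
// Sketch: construct System inside main() instead of at static-initialization time.
int main(int argc, char* args[])
{
    System Sys(800, 600, 32);   // now runs after SDL_main's setup, not before it
    Sys.init();
    // ... event/render loop as before ...
    SDL_Quit();
    return 0;
}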