OpenGL SIGSEGV error in class constructor - C++

EDIT: I rearranged the initialization list, as suggested by much_a_chos, so that the Window object initializes before the Game object, ensuring that glew is initialized first. However, this did not work:
//Rearranged initialization list
class TempCore
{
public:
    TempCore(Game* g) :
        win(new Window(800, 800, "EngineTry", false)), gamew(g) {}
    ~TempCore() { if(gamew) delete gamew; }
    ...
};
And here is the code I changed in the Mesh constructor when the above didn't work:
Mesh::Mesh( Vertex* vertices, unsigned int numVerts )
{
    m_drawCount = numVerts;
    glewExperimental = GL_TRUE;
    if(glewInit() != GLEW_OK){
        exit(-150); //application stops and exits here with the code -150
    }
    glGenVertexArrays(1, &m_vertexArrayObject);
    glBindVertexArray(m_vertexArrayObject);
    ...
}
What happens when I compile and run is surprising: the program exits at the if(glewInit() != GLEW_OK) check I copied from the Window constructor. For some reason glew initializes properly in the Window constructor (which is called before the Game constructor), but it fails to initialize when called the second time in the Mesh constructor. I assume it's bad practice to call glewInit() more than once in a program, but I don't think it should fail if I actually did so. Does anybody know what might be happening? Am I making a mistake in calling glewInit() more than once?
END OF EDIT
I've been following a 3D Game Engine Development tutorial and I've encountered a weird bug in my code, which I will demonstrate below. I'm attempting to make my own game engine purely for educational reasons. I'm using Code::Blocks 13.12 as my IDE and mingw-w64 v4.0 as my compiler. I'm also using SDL2, glew, Assimp and Boost as my third-party libraries.
I apologize in advance for the numerous code extracts, but I put in what I thought was necessary to understand the context of the error.
I have a Core class for my game engine that holds the main loop and updates and renders accordingly, calling the Game class update() and render() methods in the process as well. The Game class is intended as the holder for all the assets in the game, and will be the base class for any games made using the engine; thus it contains mesh, texture and camera references. The Game class update(), render() and input() methods are all virtual, as the Game class is meant to be derived from.
My problem is: when I initialize the Game member variable in the Core class, I get a SIGSEGV (i.e. segmentation fault) in the Mesh object's constructor at the glGenVertexArrays call.
However, when I move my Game object out of the Core class and straight into the main method (so I changed it from being a class member to a simple scoped variable in the main method), along with the necessary parts from the Core class, then it runs perfectly and renders my rudimentary triangle example. This is a bug I've never come across before and I would really appreciate any help I can get.
Below is an extract of my morphed code that ran perfectly and rendered the triangle:
int WINAPI WinMain (HINSTANCE hThisInstance, HINSTANCE hPrevInstance, LPSTR lpszArgument, int nCmdShow)
{
    Window win(800, 800, "EngineTry", false); //Creates an SDL implemented window with a GL_context
    Game* gamew = new Game;
    const double frameTime = 1.0 / 500; //500 = maximum fps
    double lastTime = FTime::getTime(); //gets current time in milliseconds
    double unprocessedTime = 0.0;
    int frames = 0;
    double frameCounter = 0;
    while(win.isRunning()){
        bool _render = false;
        double startTime = FTime::getTime();
        double passedTime = startTime - lastTime;
        lastTime = startTime;
        unprocessedTime += passedTime / (double)FTime::SECOND;
        frameCounter += passedTime;
        while(unprocessedTime > frameTime){
            if(!win.isRunning())
                exit(0);
            _render = true;
            unprocessedTime -= frameTime;
            FTime::delta = frameTime;
            gamew->input();
            Input::update();
            gamew->update();
            if(frameCounter >= FTime::SECOND)
            {
                std::cout << "FPS: " << frames << std::endl;
                frames = 0;
                frameCounter = 0;
            }
        }
        if(_render){
            RenderUtil::clearScreen(); //simple wrapper to the glClear function
            gamew->render();
            win.Update();
            frames++;
        }else{
            Sleep(1);
        }
    }
    delete gamew;
    return 0;
}
Here is an extract of my modified Core class that doesn't work (it throws the SIGSEGV in the Mesh constructor):
class TempCore
{
public:
    TempCore(Game* g) :
        gamew(g), win(800, 800, "EngineTry", false) {}
    ~TempCore() { if(gamew) delete gamew; }
    void start();
private:
    Window win;
    Game* gamew;
};

int WINAPI WinMain (HINSTANCE hThisInstance, HINSTANCE hPrevInstance, LPSTR lpszArgument, int nCmdShow)
{
    TempCore m_core(new Game());
    m_core.start();
    return 0;
}

void TempCore::start()
{
    const double frameTime = 1.0 / 500;
    double lastTime = FTime::getTime();
    double unprocessedTime = 0.0;
    int frames = 0;
    double frameCounter = 0;
    while(win.isRunning()){
        bool _render = false;
        double startTime = FTime::getTime();
        double passedTime = startTime - lastTime;
        lastTime = startTime;
        unprocessedTime += passedTime / (double)FTime::SECOND;
        frameCounter += passedTime;
        while(unprocessedTime > frameTime){
            if(!win.isRunning())
                exit(0);
            _render = true;
            unprocessedTime -= frameTime;
            FTime::delta = frameTime;
            gamew->input();
            Input::update();
            gamew->update();
            if(frameCounter >= FTime::SECOND){
                //double totalTime = ((1000.0 * frameCounter)/((double)frames));
                //double totalMeasuredTime = 0.0;
                std::cout << "Frames: " << frames << std::endl;
                //m_frames_per_second = frames;
                frames = 0;
                frameCounter = 0;
            }
        }
        if(_render){
            RenderUtil::clearScreen();
            gamew->render();
            win.Update();
            frames++;
        }else{
            Sleep(1);
        }
    }
}
Mesh constructor where the SIGSEGV occurs in the above TempCore implementation:
Mesh::Mesh( Vertex* vertices, unsigned int numVerts )
{
    m_drawCount = numVerts;
    glGenVertexArrays(1, &m_vertexArrayObject); //sigsegv occurs here
    glBindVertexArray(m_vertexArrayObject);
    std::vector<glm::vec3> positions;
    std::vector<glm::vec2> texCoords;
    positions.reserve(numVerts);
    texCoords.reserve(numVerts);
    for(unsigned i = 0; i < numVerts; i++){
        positions.push_back(vertices[i].pos);
        texCoords.push_back(vertices[i].texCoord);
    }
    glGenBuffers(NUM_BUFFERS, m_vertexArrayBuffers);
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexArrayBuffers[POSITION_VB]);
    glBufferData(GL_ARRAY_BUFFER, numVerts*sizeof(positions[0]), &positions[0], GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexArrayBuffers[TEXCOORD_VB]);
    glBufferData(GL_ARRAY_BUFFER, numVerts*sizeof(texCoords[0]), &texCoords[0], GL_STATIC_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
    glBindVertexArray(0);
}
The Game constructor that initializes the Mesh object:
Vertex vertices[] = { Vertex(-0.5f, -0.5f, 0, 0, 0),
                      Vertex(0, 0.5f, 0, 0.5f, 1.0f),
                      Vertex(0.5f, -0.5f, 0, 1.0f, 0)};
//Vertex is basically a struct with a glm::vec3 for position and a glm::vec2 for texture coordinate

Game::Game() :
    m_mesh(vertices, sizeof(vertices)/sizeof(vertices[0])),
    m_shader("res\\shaders\\basic_shader"),
    m_texture("res\\textures\\mist_tree.jpg")
{
}
The Window class constructor that initializes glew:
Window::Window(int width, int height, const std::string& title, bool full_screen) :
    m_fullscreen(full_screen)
{
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    //SDL_Window* in private of class declaration
    m_window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
    //SDL_GLContext in private of class declaration
    m_glContext = SDL_GL_CreateContext(m_window);
    std::cout << "GL Version: " << glGetString(GL_VERSION) << std::endl;
    glewExperimental = GL_TRUE;
    if(glewInit() != GLEW_OK || !glVersionAbove(3.0)){
        std::cerr << "Glew failed to initialize...\n";
        exit(-150);
    }
}

A long shot here, since the amount of information given is pretty big. I searched for similar questions like this one and this one, but every one of them has been answered with the same tricks your Window class constructor already performs, which have to run before your Game constructor. And as far as I can see in your TempCore constructor, you build your Game object (and make a call to glGenVertexArrays) before your Window object is constructed:
...
...
TempCore(Game* g) :
    gamew(g), win(800, 800, "EngineTry", false) {}
...
So that happens before the calls that create your OpenGL context (SDL_GL_CreateContext(m_window)) and before glewExperimental = GL_TRUE; glewInit();. And then you say that putting it in main in this order solves the problem...
...
Window win(800, 800, "EngineTry", false); //Creates an SDL implemented window with a GL_context
Game* gamew = new Game;
...
Maybe reordering your initialization list in your constructor like this could solve your problem?
class TempCore
{
public:
    TempCore(Game* g) :
        win(800, 800, "EngineTry", false), gamew(g) {}
    ~TempCore() { if(gamew) delete gamew; }
    ...
};
UPDATE
I was wrong: as stated in the comments, the initializer list order doesn't matter. It's the member declaration order that matters, and that is already correct here...
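A minimal sketch of that rule (the A, B and Holder names are mine, purely for illustration): members are constructed in the order they are declared in the class, and the order of the initializer list is ignored; GCC and Clang will warn about the mismatch with -Wreorder.

#include <iostream>

struct A { A() { std::cout << "A constructed\n"; } };
struct B { B() { std::cout << "B constructed\n"; } };

struct Holder {
    A a; // declared first, so constructed first...
    B b;
    // ...even though the initializer list below names b first
    // (GCC/Clang emit a -Wreorder warning for this).
    Holder() : b(), a() {}
};

int main() {
    Holder h; // prints "A constructed", then "B constructed"
    return 0;
}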

Thanks to both @much_a_chos and @vu1p3n0x for your help. It turns out much_a_chos had the right idea: the Game object was initializing before the Window object, thereby missing the glewInit() call altogether and causing the SIGSEGV. The problem, however, was not in the initializer list but in the main.cpp file. I was creating a Game object and then passing it via pointer to the Core class, so regardless of how I arranged the Core class, the Game object would always be constructed before the Window, and would therefore always make its glGenVertexArrays call before glewInit() was called. This was a terrible logic error on my side and I apologize for wasting your time.
Below are extracts from the fixed main.cpp file and the fixed TempCore class (please keep in mind that these are temporary fixes to illustrate how I would go about fixing my mistake):
class TempCore
{
public:
    TempCore(Window* w, Game* g) : //take in a Window pointer to ensure it's created before the Game constructor
        win(w), gamew(g) {}
    ~TempCore() { if(gamew) delete gamew; }
    void start();
private:
    Window* win;
    Game* gamew;
};

int WINAPI WinMain (HINSTANCE hThisInstance, HINSTANCE hPrevInstance, LPSTR lpszArgument, int nCmdShow)
{
    Window* win = new Window(800, 800, "EngineTry", false); //this way the Window constructor with the glewInit() call runs before the Game constructor
    TempCore m_core(win, new Game());
    m_core.start();
    return 0;
}

Addressing your edit: you should not call glewInit() more than once. I'm not familiar with GLEW's internals in this regard, but in general anything should only be "initialized" once; GLEW probably assumes it is uninitialized and errors out when it finds initialization already in place.

I'd recommend calling glewInit() at the very beginning of the program, not in an object constructor (unless you have that object "own" GLEW).

Edit: It seems my assumption about glewInit() was slightly wrong. glewInit() behaves differently depending on the build, but regardless it should only be called again if you switch contexts. However, because you aren't changing contexts (from what I see), you should not call it more than once.
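To make the "call it once" recommendation concrete, here is a minimal sketch (initGlewOnce is a made-up helper name, not part of GLEW): initialize GLEW exactly once, right after the first context is made current, and let every later caller reuse the cached result.

#include <GL/glew.h>

// Hypothetical helper: call after a GL context has been made current.
// Repeated calls are harmless because the first result is cached.
bool initGlewOnce()
{
    static bool attempted = false;
    static bool ok = false;
    if (!attempted) {
        attempted = true;
        glewExperimental = GL_TRUE; // needed for core-profile contexts
        ok = (glewInit() == GLEW_OK);
    }
    return ok;
}

The Mesh constructor could then call initGlewOnce() and fail gracefully if it returns false, instead of calling glewInit() a second time.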

Related

SDL_RenderCopy not doing anything

I'm calling SDL_RenderCopy and it gets called and returns normally, but doesn't draw anything to the window. Edited to make the question and code clearer. I'm thinking I might be trying to use something beyond its scope, and hence it can't be called, but this doesn't produce any error, so I'm not sure. Here's the simple picture I refer to: https://commons.wikimedia.org/wiki/Category:PNG_chess_pieces/Standard_transparent#/media/File:Chess_kdt60.png
#include <SDL2/SDL.h>
#include <SDL2/SDL_image.h>
// Recreation of the problem. Doesnt draw anything onto the white screen.
class King{
public:
King(SDL_Renderer *renderer){
SDL_Surface *Piece;
Piece = IMG_Load("Pieces/BK.png"); // I'll attach the picture
king = SDL_CreateTextureFromSurface(renderer, Piece);
SDL_FreeSurface(Piece);
kingRect.h = 100;
kingRect.w = 100;
}
~King(){}
void render(SDL_Renderer *renderer){
SDL_RenderCopy(renderer, king, NULL, &kingRect); // 99% sure the problem is this
}
private:
SDL_Texture *king;
SDL_Rect kingRect;
};
class Game {
public:
Game(const char *title, int sidelength){
isRunning = true;
if(SDL_Init(SDL_INIT_EVERYTHING) != 0) isRunning = false;
window = SDL_CreateWindow(title, SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, sidelength, sidelength, SDL_WINDOW_OPENGL);
if(window == NULL) isRunning = false;
renderer = SDL_CreateRenderer(window, -1, 0);
if(!renderer) isRunning = false;
SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
}
~Game(){}
void handleEvents(){
//Handles Events. I know this works.
}
}
void update(){};
void render(){
SDL_RenderClear(renderer);
BK.render(renderer);
SDL_RenderPresent(renderer);
}
void clean(){
//Cleans up after. I know this works.
SDL_DestroyWindow(window);
SDL_DestroyRenderer(renderer);
SDL_Quit();
}
bool running(){return(isRunning);}
King BK{renderer};
private:
bool isRunning{true};
SDL_Window *window;
SDL_Renderer *renderer;
};
Game *game = nullptr;
int main(int argc, const char *argv[]){
game = new Game("Testing Window", 800);
while(game->running()){
game->handleEvents();
game->update();
game->render();
}
game->clean();
return(0);
}
The King BK{renderer}; field gets initialised before your Game::Game constructor finishes and gets a chance to assign a renderer, so it receives NULL instead. NULL is not a valid renderer and can't create textures. If you had checked for errors you would have got an "Invalid renderer" message. Also, a decent compiler with warnings enabled will tell you something like warning: 'Game::renderer' is used uninitialized in this function [-Wuninitialized]; consider enabling better warning levels in your compiler.
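One possible fix, sketched under the assumption that the rest of the class stays as posted: defer the King construction until the renderer actually exists, for example by holding the piece through a pointer that the constructor fills in last.

class Game {
public:
    Game(const char *title, int sidelength){
        // ... create window and renderer exactly as in the question ...
        renderer = SDL_CreateRenderer(window, -1, 0);
        SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
        BK = new King(renderer); // constructed only AFTER renderer is valid
    }
    ~Game(){ delete BK; }
private:
    King *BK = nullptr;            // was: King BK{renderer};
    SDL_Window *window = nullptr;
    SDL_Renderer *renderer = nullptr;
    bool isRunning = true;
};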
The second thing is that you never called IMG_Init with the image formats you intend to load.
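As a minimal sketch of that call (PNG assumed, since the question loads a .png): initialize the loader once before the first IMG_Load and shut it down at exit.

#include <SDL2/SDL_image.h>

// Initialize PNG support once, before any IMG_Load("*.png") call.
if ((IMG_Init(IMG_INIT_PNG) & IMG_INIT_PNG) != IMG_INIT_PNG) {
    SDL_Log("IMG_Init failed: %s", IMG_GetError());
}
// ... load surfaces and textures as usual ...
IMG_Quit(); // matching shutdown at program exit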
The third thing is that the code is misformatted and wouldn't compile without modifications. I suggest testing the code that you post as an MCVE, to make sure it still compiles and reproduces your problem (as an MCVE implies).

How to properly do Context Sharing with GLFW?

What I'm trying to do is make it so that I can replace the window I'm rendering to with a new window, which could happen because the user switches screens, switches from fullscreen to windowed, or for any number of other reasons.
My code so far looks like this:
"Context.h"
struct window_deleter {
    void operator()(GLFWwindow * window) const;
};

class context {
    std::unique_ptr<GLFWwindow, window_deleter> window;
public:
    context(int width, int height, const char * s, GLFWmonitor * monitor, GLFWwindow * old_window, bool borderless);
    GLFWwindow * get_window() const;
    void make_current() const;
};
"Context.cpp"
context::context(int width, int height, const char * s, GLFWmonitor * monitor, GLFWwindow * old_window, bool borderless) {
    if (!glfwInit()) throw std::runtime_error("Unable to Initialize GLFW");
    if (borderless) glfwWindowHint(GLFW_DECORATED, 0);
    else glfwWindowHint(GLFW_DECORATED, 1);
    window.reset(glfwCreateWindow(width, height, s, monitor, old_window));
    if (!window) throw std::runtime_error("Unable to Create Window");
    make_current();
}

GLFWwindow * context::get_window() const {
    return window.get();
}

void context::make_current() const {
    glfwMakeContextCurrent(window.get());
}
"WindowManager.h"
#include "Context.h"
class window_style;
/* window_style is basically a really fancy "enum class", and I don't
* believe its implementation or interface are relevant to this project.
* I'll add it if knowing how it works is super critical.
*/
class window_manager {
context c_context;
uint32_t c_width, c_height;
std::string c_title;
window_style c_style;
std::function<bool()> close_test;
std::function<void()> poll_task;
public:
static GLFWmonitor * get_monitor(window_style style);
window_manager(uint32_t width, uint32_t height, std::string const& title, window_style style);
context & get_context();
const context & get_context() const;
bool resize(uint32_t width, uint32_t height, std::string const& title, window_style style);
std::function<bool()> get_default_close_test();
void set_close_test(std::function<bool()> const& test);
std::function<void()> get_default_poll_task();
void set_poll_task(std::function<void()> const& task);
void poll_loop();
};
"WindowManager.cpp"
GLFWmonitor * window_manager::get_monitor(window_style style) {
    if (style.type != window_style::style_type::fullscreen) return nullptr;
    if (!glfwInit()) throw std::runtime_error("Unable to initialize GLFW");
    int count;
    GLFWmonitor ** monitors = glfwGetMonitors(&count);
    if (style.monitor_number >= uint32_t(count)) throw invalid_monitor_exception{};
    return monitors[style.monitor_number];
}

std::function<bool()> window_manager::get_default_close_test() {
    return [&] {return glfwWindowShouldClose(c_context.get_window()) != 0; };
}

window_manager::window_manager(uint32_t width, uint32_t height, std::string const& title, window_style style) :
    c_context(int(width), int(height), title.c_str(), get_monitor(style), nullptr, style.type == window_style::style_type::borderless),
    c_width(width), c_height(height), c_title(title), c_style(style), close_test(get_default_close_test()), poll_task(get_default_poll_task()) {
}

context & window_manager::get_context() {
    return c_context;
}

const context & window_manager::get_context() const {
    return c_context;
}

bool window_manager::resize(uint32_t width, uint32_t height, std::string const& title, window_style style) {
    if (width == c_width && height == c_height && title == c_title && style == c_style) return false;
    c_width = width;
    c_height = height;
    c_title = title;
    c_style = style;
    c_context = context(int(width), int(height), title.c_str(), get_monitor(style), get_context().get_window(), style.type == window_style::style_type::borderless);
    return true;
}

void window_manager::set_close_test(std::function<bool()> const& test) {
    close_test = test;
}

std::function<void()> window_manager::get_default_poll_task() {
    return [&] {glfwSwapBuffers(c_context.get_window()); };
}

void window_manager::set_poll_task(std::function<void()> const& task) {
    poll_task = task;
}

void window_manager::poll_loop() {
    while (!close_test()) {
        glfwPollEvents();
        poll_task();
    }
}
"Main.cpp"
int main() {
    try {
        glfwInit();
        const GLFWvidmode * vid_mode = glfwGetVideoMode(glfwGetPrimaryMonitor());
        gl_backend::window_manager window(vid_mode->width, vid_mode->height, "First test of the window manager", gl_backend::window_style::fullscreen(0));
        glfwSetKeyCallback(window.get_context().get_window(), [](GLFWwindow * window, int, int, int, int) {glfwSetWindowShouldClose(window, 1); });
        glbinding::Binding::initialize();
        //Anything with a "glresource" prefix is basically just a std::shared_ptr<GLuint>
        //with some extra deletion code added.
        glresource::vertex_array vao;
        glresource::buffer square;
        float data[] = {
            -.5f, -.5f,
             .5f, -.5f,
             .5f,  .5f,
            -.5f,  .5f
        };
        gl::glBindVertexArray(*vao);
        gl::glBindBuffer(gl::GL_ARRAY_BUFFER, *square);
        gl::glBufferData(gl::GL_ARRAY_BUFFER, sizeof(data), data, gl::GL_STATIC_DRAW);
        gl::glEnableVertexAttribArray(0);
        gl::glVertexAttribPointer(0, 2, gl::GL_FLOAT, false, 2 * sizeof(float), nullptr);
        std::string vert_src =
            "#version 430\n"
            "layout(location = 0) in vec2 vertices;"
            "void main() {"
            "gl_Position = vec4(vertices, 0, 1);"
            "}";
        std::string frag_src =
            "#version 430\n"
            "uniform vec4 square_color;"
            "out vec4 fragment_color;"
            "void main() {"
            "fragment_color = square_color;"
            "}";
        glresource::shader vert(gl::GL_VERTEX_SHADER, vert_src);
        glresource::shader frag(gl::GL_FRAGMENT_SHADER, frag_src);
        glresource::program program({ vert, frag });
        window.set_poll_task([&] {
            gl::glUseProgram(*program);
            gl::glBindVertexArray(*vao);
            glm::vec4 color{ (glm::sin(float(glfwGetTime())) + 1) / 2, 0.f, 0.5f, 1.f };
            gl::glUniform4fv(gl::glGetUniformLocation(*program, "square_color"), 1, glm::value_ptr(color));
            gl::glDrawArrays(gl::GL_QUADS, 0, 4);
            glfwSwapBuffers(window.get_context().get_window());
        });
        window.poll_loop();
        window.resize(vid_mode->width, vid_mode->height, "Second test of the window manager", gl_backend::window_style::fullscreen(1));
        glfwSetKeyCallback(window.get_context().get_window(), [](GLFWwindow * window, int, int, int, int) {glfwSetWindowShouldClose(window, 1); });
        window.poll_loop();
    }
    catch (std::exception const& e) {
        std::cerr << e.what() << std::endl;
        std::ofstream error_log("error.log");
        error_log << e.what() << std::endl;
        system("pause");
    }
    return 0;
}
So the current version of the code is supposed to do the following:

1. Display a fullscreen window on the primary monitor.
2. On this monitor, display a "square" (rectangle, really....) that over time transitions between magenta and blue, while the background transitions between magenta and a green-ish color.
3. When the user presses a key, create a new fullscreen window on the second monitor, using the first window's context to feed into GLFW's window creation, and destroy the original window (in that order).
4. Display the same rectangle on this second window.
5. Continue to transition the background periodically.
6. When the user presses a key again, destroy the second window and exit the program.
Of these steps, step 4 doesn't work at all, and step 3 partially works: the window does get created, but it doesn't display by default, and the user has to call it up via the taskbar. All the other steps work as expected, including the transitioning background on both windows.
So my assumption is that something is going wrong with respect to the object sharing between contexts; specifically, it doesn't appear that the second context I'm creating is receiving the objects created by the first context. Is there an obvious logic error I'm making? Should I be doing something else to ensure that context sharing works as intended? Is it possible that there's just a bug in GLFW?
So my assumption is that something is going wrong with respect to the object sharing between contexts; specifically, it doesn't appear that the second context I'm creating is receiving the objects created by the first context. Is there an obvious logic error I'm making?
Yes, your premise is just wrong. Shared OpenGL contexts will not share the whole state, just the "big" objects which actually hold user-provided data (like VBOs, textures, shaders and programs, renderbuffers and so on), and not the ones which only reference them; state containers like VAOs, FBOs and so on are never shared.
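To make the consequence concrete, a sketch (the rebuildVao helper is illustrative, not from the question): after switching to the new shared context, a shared VBO handle is still valid, but a fresh VAO has to be created in that context and re-pointed at the buffer.

// Hypothetical helper: call with the NEW context current. The shared VBO
// survives the context switch; the VAO is container state and does not.
gl::GLuint rebuildVao(gl::GLuint sharedVbo)
{
    gl::GLuint vao;
    gl::glGenVertexArrays(1, &vao);                   // new VAO in this context
    gl::glBindVertexArray(vao);
    gl::glBindBuffer(gl::GL_ARRAY_BUFFER, sharedVbo); // shared object, still valid
    gl::glEnableVertexAttribArray(0);
    gl::glVertexAttribPointer(0, 2, gl::GL_FLOAT, gl::GL_FALSE, 2 * sizeof(float), nullptr);
    gl::glBindVertexArray(0);
    return vao;
}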
Should I be doing something else to ensure that context sharing works as intended?
Well, if you really want to go that route, you have to re-build all those state containers, and also restore the global state (all those glEnables, the depth buffer setting, blending state, tons of other things) of your original context.
However, I find your whole concept doubtful here. You do not need to destroy a window when going from fullscreen to windowed, or to a different monitor on the same GPU, and GLFW directly supports that via glfwSetWindowMonitor().
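For reference, the switch that function performs looks roughly like this (GLFW 3.2 or newer; window and monitor stand in for your own handles):

// Move an existing window fullscreen onto 'monitor' without destroying
// the window or its GL context (requires GLFW >= 3.2).
const GLFWvidmode * mode = glfwGetVideoMode(monitor);
glfwSetWindowMonitor(window, monitor,
                     0, 0,                       // position, ignored for fullscreen
                     mode->width, mode->height,  // new resolution
                     mode->refreshRate);

// And back to an 800x600 window at position (100, 100):
glfwSetWindowMonitor(window, nullptr, 100, 100, 800, 600, GLFW_DONT_CARE);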
And even if you do re-create a window, this does not imply that you have to re-create the GL context. There might be some restrictions imposed by GLFW's API in that regard, but the underlying concepts are separate. You basically can make the old context current in the new window, and are just done with it. GLFW just inseparably links Window and Context together, which is kind of an unfortunate abstraction.
However, the only scenario I could imagine where re-creating the window would be necessary is something where different screens are driven by different GPUs - but GL context sharing won't work across different GL implementations, so even in that scenario, you would have to rebuild the whole context state.

Why does my SDL window close when I return from an initializing function?

I have been working on a simple game engine (I know, I know, I've heard "Write games, not engines" before; this is just to understand the concepts). I have been using SDL2, since it works well with OpenGL. However, for some reason the program closes once the initializing function completes.
Screen.cpp:
Screen::Screen(int width, int height, const std::string& title)
{
    //Initialize SDL
    SDL_Init(SDL_INIT_EVERYTHING);
    //Setting OpenGL Attributes
    SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    //Create the Window
    m_window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, SDL_WINDOW_OPENGL);
    l.writeToDebugLog("Created SDL_Window!");
    //Create OpenGL context from within SDL
    m_glContext = SDL_GL_CreateContext(m_window);
    l.writeToDebugLog("Created SDL GL Context!");
    //Initializing GLEW
    GLenum status = glewInit();
    l.writeToDebugLog( "Initializing GLEW");
    if (status != GLEW_OK)
    {
        l.writeToGLEWELog(" Glew Failed to Initialize!");
    }
    //setting the windowSurface to the m_window's surface
    windowSurface = SDL_GetWindowSurface(m_window);
    m_isClosed = false;
}
This is where I create the Screen object and initialize all of the SDL and OpenGL functions.
Engine.cpp:
void Engine::initialize(){
    //Console Detecting platform
    c.initialize();
    //Printing Operating System to screen
    std::cout << "Loaded on : " << platformToString(c.os) << " OS " << std::endl;
    //Constructing a new Screen to be referenced too
    graphics::Screen temp(800,600,"ClimLib 0.0.05");
    //setting all the variables
    m_window = &temp;
    m_EntityManager = nullptr;
    m_isRunning = temp.isClosed();
    m_renderer = SDL_GetRenderer(m_window->getWindow());
}

void Engine::update(){
    do{
        //Check whether entities have been created and placed in the manager
        if (m_EntityManager != nullptr){
            for each(core::Entity *e in *m_EntityManager){
                for each(core::Component *c in e->getComponentList()){
                    c->Update();
                }
            }
        }
        //Update Logic Here
        m_window->Update();
        if (m_window->isClosed()){
            m_isRunning = false;
            return;
        }
    }while (isRunning());
}
This initialize function is the last function my window executes before it deletes itself. Maybe I need to call it from the main function of the program?
main.cpp:
int main(int argc, char *argv[]){
    clim::system::Engine game;
    game.initialize();
    while (game.isRunning()){
        game.update();
    }
    return 0;
}
That is how I have my main set up at the moment.
EDIT: I believe the reason is that I am creating a variable and storing a reference to it, and when the function returns the temp variable is thrown away?
You are creating your Screen as a local value and then assigning the address of that value to a pointer whose lifetime exceeds the lifetime of the value it points to. Beyond your immediate issue with the window dying, using that pointer later will cause one or both of the following:
Crashes (illegal memory access)
Undefined behavior (from reading random values on the stack that are sitting at this address later in execution)
While perhaps not immediately applicable, this is a really good way to cause headaches for yourself in pretty much anything.
You should just assign like this:
m_window = new graphics::Screen(800,600,"ClimLib 0.0.05");
The way you initialized it means that it destroys itself when the function exits, because it was declared and initialized inside the function.
Using new guarantees that it exists until you delete it, because it sits on the heap untouched unless your code frees it.
Just make sure you call delete m_window in the destructor of the class containing it, so the window is cleaned up properly when you're done using it. Alternatively, you can declare m_window as a graphics::Screen instead of a graphics::Screen* and just assign it like:
m_window = graphics::Screen(800,600,"ClimLib 0.0.05");
This way you don't have to worry about deleting it yourself later as it will delete itself when the containing class is deleted.
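A third option, sketched under the assumption that the Engine class owns the screen and that C++14's std::make_unique is available: a smart pointer gives you the heap lifetime of new without the manual delete.

#include <memory>

class Engine {
    std::unique_ptr<graphics::Screen> m_window;
public:
    void initialize() {
        // The Screen now outlives initialize() and is released
        // automatically when the Engine itself is destroyed.
        m_window = std::make_unique<graphics::Screen>(800, 600, "ClimLib 0.0.05");
    }
    // ... rest of the class as before ...
};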

Elementary inquiry about C++ (SDL2 library)

I'm fairly new to programming, so this question will probably be basic. I'm writing a very basic program in C++ with the SDL2 library (in Visual Studio 2013). When I was writing it, I came across a problem. I wrote the following:
int controles(){
    //declare actions that will happen when a key is pressed
    const Uint8 * estado = SDL_GetKeyboardState(NULL);
    if (estado[SDL_SCANCODE_UP]){ y--; SDL_UpdateWindowSurface(ventana); }
    if (estado[SDL_SCANCODE_DOWN]){ y++; SDL_UpdateWindowSurface(ventana); }
    return 0;
}
The problem is that I need to update the window surface after the value of y is modified, but I get an error because ventana, the name of the window, is defined in another function. I tried to define ventana globally, but the program won't work that way. I then thought of the following: write a goto statement in graficos, the function where ventana is defined, in order to skip every other statement in that function except the one that updates the window surface. However, when I did that, the program doesn't even compile:
int graficos(int caso){
    if (caso == 1) {goto reload;} //skip to reload if (1)
    SDL_Init(SDL_INIT_VIDEO); //load SDL
    //load graphics in memory
    SDL_Window * ventana = SDL_CreateWindow("ventana", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 640, 480, 0);
    SDL_Surface * superficie = SDL_GetWindowSurface(ventana);
    SDL_Surface * pantallainicio = SDL_LoadBMP("pantallainicio.bmp");
    SDL_Surface * paleta = SDL_LoadBMP("paleta.bmp");
    SDL_Rect rpantallainicio = { 0, 0, 640, 480 };
    SDL_Rect rpaleta = { x, y, 16, 16 };
    //render graphics
    SDL_BlitSurface(pantallainicio, NULL, superficie, &rpantallainicio);
    SDL_BlitSurface(paleta, NULL, superficie, &rpaleta);
    SDL_UpdateWindowSurface(ventana);
    reload:SDL_UpdateWindowSurface(ventana);
    return 0;
}
I get the following errors:
error C4533: initialization of 'rpaleta' is skipped by 'goto reload'
error C4533: initialization of 'rpantallainicio' is skipped by 'goto reload'
I hope I explained my issue well enough. What can I do? Is there a way to fix this? Or can I reference ventana in some other way? This issue might be very basic, sorry for that, and thanks in advance!
You can fix this issue by simply not using goto at all - use a sub-function instead. Also, move the variable ventana out of the function, as it needs to be stored and usable by graficos whenever it is called.
SDL_Window * ventana; // moved out of graficos so both functions can use it

void init()
{
    SDL_Init(SDL_INIT_VIDEO); //load SDL
    //load graphics in memory
    ventana = SDL_CreateWindow("ventana", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 640, 480, 0);
    SDL_Surface * superficie = SDL_GetWindowSurface(ventana);
    SDL_Surface * pantallainicio = SDL_LoadBMP("pantallainicio.bmp");
    SDL_Surface * paleta = SDL_LoadBMP("paleta.bmp");
    SDL_Rect rpantallainicio = { 0, 0, 640, 480 };
    SDL_Rect rpaleta = { x, y, 16, 16 };
    //render graphics
    SDL_BlitSurface(pantallainicio, NULL, superficie, &rpantallainicio);
    SDL_BlitSurface(paleta, NULL, superficie, &rpaleta);
}

int graficos(int caso)
{
    if (caso != 1) { init(); } //only initialize on the first call
    SDL_UpdateWindowSurface(ventana);
    return 0;
}
Use of goto should generally be avoided. Use subroutines or other alternatives where possible. Here, the code does exactly the same thing as originally intended, but the "extra" flow that occurs when caso is not 1 is wrapped in its own subroutine named 'init'.

Creating a basic OpenGl context

I'm reading the OpenGL Red Book, and I'm pretty much stuck at the first tutorial. Everything works fine if I use freeglut and glew, but I'd like to handle input and such myself. So I ditched freeglut and glew and wrote my own code. I've looked at some other tutorials and finished the code, but nothing is displayed. It seems like freeglut does some voodoo magic in the background, but I don't know what I'm missing. I've tried this:
int attributeListInt[19];
int pixelFormat[1];
unsigned int formatCount;
int result;
PIXELFORMATDESCRIPTOR pixelFormatDescriptor;
int attributeList[5];

context = GetDC (hwnd);
if (!context)
    return -1;

attributeListInt[0] = WGL_SUPPORT_OPENGL_ARB;
attributeListInt[1] = TRUE;
attributeListInt[2] = WGL_DRAW_TO_WINDOW_ARB;
attributeListInt[3] = TRUE;
attributeListInt[4] = WGL_ACCELERATION_ARB;
attributeListInt[5] = WGL_FULL_ACCELERATION_ARB;
attributeListInt[6] = WGL_COLOR_BITS_ARB;
attributeListInt[7] = 24;
attributeListInt[8] = WGL_DEPTH_BITS_ARB;
attributeListInt[9] = 24;
attributeListInt[10] = WGL_DOUBLE_BUFFER_ARB;
attributeListInt[11] = TRUE;
attributeListInt[12] = WGL_SWAP_METHOD_ARB;
attributeListInt[13] = WGL_SWAP_EXCHANGE_ARB;
attributeListInt[14] = WGL_PIXEL_TYPE_ARB;
attributeListInt[15] = WGL_TYPE_RGBA_ARB;
attributeListInt[16] = WGL_STENCIL_BITS_ARB;
attributeListInt[17] = 8;
attributeListInt[18] = 0;

result = wglChoosePixelFormatARB (context, attributeListInt, NULL, 1, pixelFormat, &formatCount);
if (result != 1)
    return -1;

result = SetPixelFormat (context, pixelFormat[0], &pixelFormatDescriptor);
if (result != 1)
    return -1;

attributeList[0] = WGL_CONTEXT_MAJOR_VERSION_ARB;
attributeList[1] = 4;
attributeList[2] = WGL_CONTEXT_MINOR_VERSION_ARB;
attributeList[3] = 2;
attributeList[4] = 0;

rendercontext = wglCreateContextAttribsARB (context, 0, attributeList);
if (rendercontext == NULL)
    return -1;

result = wglMakeCurrent (context, rendercontext);
if (result != 1)
    return -1;

glClearDepth (1.0f);
glFrontFace (GL_CCW);
glEnable (GL_CULL_FACE);
glCullFace (GL_BACK);

return 0;
This sets up a graphics context, but is apparently not enough to make everything work. The tutorial didn't include anything about view or projection matrices, so I'm not sure whether I should add anything like that. But the window remains black.
This is the tutorial code, adjusted to my code:
#define BUFFER_OFFSET(offset) ((void *)(offset))

bool init ();
bool mainloop ();

enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };

GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;

int main (int argc, char** argv)
{
    Window w;
    w.init (&mainloop);
    if (!init ())
        return 0;
    w.run ();
    w.shutdown ();
    return 0;
}

bool init ()
{
    glGenVertexArrays (NumVAOs, VAOs);
    glBindVertexArray (VAOs[Triangles]);
    GLfloat vertices[NumVertices][2] = {
        {-0.90f, -0.90f}, // Triangle 1
        { 0.85f, -0.90f},
        {-0.90f,  0.85f},
        { 0.90f, -0.85f}, // Triangle 2
        { 0.90f,  0.90f},
        {-0.85f,  0.90f}
    };
    glGenBuffers (NumBuffers, Buffers);
    glBindBuffer (GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
    glBufferData (GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    ShaderInfo shaders[] = {
        {GL_VERTEX_SHADER, "triangles.vert"},
        {GL_FRAGMENT_SHADER, "triangles.frag"},
        {GL_NONE, NULL}
    };
    GLuint program = LoadShaders (shaders);
    glUseProgram (program);
    glVertexAttribPointer (vPosition, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET (0));
    glEnableVertexAttribArray (vPosition);
    return true;
}

bool mainloop ()
{
    glClear (GL_COLOR_BUFFER_BIT);
    glBindVertexArray (VAOs[Triangles]);
    glDrawArrays (GL_TRIANGLES, 0, NumVertices);
    glFlush ();
    return true;
}
Creating an OpenGL context is not trivial, especially if you want to use wglChoosePixelFormatARB, which must be loaded through the OpenGL extension mechanism... which in turn requires a functioning OpenGL context to work in the first place. I think you see that this is kind of a chicken-and-egg problem. In addition, the window you use to create an OpenGL context requires certain attributes to work reliably. For one, the WndClass should have the CS_OWNDC style set, and the window style should include WS_CLIPSIBLINGS | WS_CLIPCHILDREN.
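The usual way around that chicken-and-egg, as a hedged sketch (error handling omitted; hwndDummy stands for a freshly created, never-shown helper window, and the PFNWGL... typedefs come from wglext.h): create a throwaway legacy context just to load the ARB entry points, then tear it down and create the real window and context with them.

// Bootstrap wglChoosePixelFormatARB via a throwaway legacy context.
HDC dc = GetDC (hwndDummy);

PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
SetPixelFormat (dc, ChoosePixelFormat (dc, &pfd), &pfd);

HGLRC legacy = wglCreateContext (dc); // old-style context, needs no extensions
wglMakeCurrent (dc, legacy);

// With a context current, the extension loader works:
PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
    (PFNWGLCHOOSEPIXELFORMATARBPROC) wglGetProcAddress ("wglChoosePixelFormatARB");
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC) wglGetProcAddress ("wglCreateContextAttribsARB");

// Tear the dummy down, then create the REAL window and context using the
// ARB functions, exactly as in the code above.
wglMakeCurrent (NULL, NULL);
wglDeleteContext (legacy);
ReleaseDC (hwndDummy, dc);

A separate dummy window matters because SetPixelFormat may only be called once for a given window, so you don't want to spend that one call on the legacy format.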
I recently approached the aforementioned chicken-and-egg problem with my small wglarb helper tools: https://github.com/datenwolf/wglarb
It also comes with a small test program that shows how to use it.
I suggest you use the functions provided by that. I wrote this little helper library in a way that it is not negatively impacted if your program uses other extension loading mechanisms. It wasn't thread-safe at first, but I have since taken the time to make the code thread-safe: you can now use the exposed functions without having to care about synchronization; it's all done internally in a reliable way.