Why declare a pointer in SDL before initializing SDL? - c++

In this code, I see that they declare a pointer before initializing SDL:
int main(int argc, char* argv[]) {
SDL_Window *window; // Declare a pointer
SDL_Init(SDL_INIT_VIDEO); // Initialize SDL2
// Create an application window with the following settings:
window = SDL_CreateWindow(
"An SDL2 window", // window title
SDL_WINDOWPOS_UNDEFINED, // initial x position
SDL_WINDOWPOS_UNDEFINED, // initial y position
640, // width, in pixels
480, // height, in pixels
SDL_WINDOW_OPENGL // flags - see below
);
(full code can be found here)
Wouldn't it be neater and more organized to declare the pointer right before you create the window? Why declare it beforehand?
If I had to guess, it's so that all the pointers are in one place and you can see them at a glance. Or is it just a good habit to get into, declaring pointers at the beginning of int main()? (I've also seen this in other example programs.)

There is no technical reason why you need to declare the pointer before SDL_Init. Declaring a pointer variable has no side effects; it just reserves space on the stack for that pointer. It could just as easily be declared after SDL_Init, or as part of the statement that calls SDL_CreateWindow.
I honestly don't know why they put it that way in the docs.
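For example, here is a minimal sketch (based on the snippet above) that declares the pointer in the same statement that creates the window; functionally it is identical, so the choice is purely one of style:
#include <SDL.h>

int main(int argc, char* argv[]) {
    SDL_Init(SDL_INIT_VIDEO);               // Initialize SDL2 first

    // Declare and initialize the pointer in one statement, at the point of use.
    SDL_Window *window = SDL_CreateWindow(
        "An SDL2 window",                   // window title
        SDL_WINDOWPOS_UNDEFINED,            // initial x position
        SDL_WINDOWPOS_UNDEFINED,            // initial y position
        640, 480,                           // width and height, in pixels
        SDL_WINDOW_OPENGL                   // flags
    );

    SDL_Delay(2000);                        // keep the window around briefly
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}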

Is there a way to prevent the default constructor of a class from running for a single instance of a member variable?

For my game, suppose I have a class called GameTexture, where the default constructor looks like this:
GameTexture::GameTexture() {
this->shader = ShaderManager::get_instance()->get_shader("2d_texture", "2d_texture", "", 0);
}
get_shader() looks like this:
Shader* ShaderManager::get_shader(std::string vertex, std::string fragment, std::string geometry, unsigned int features) {
if (!shader_map.contains(vertex)
|| !shader_map[vertex].contains(fragment)
|| !shader_map[vertex][fragment].contains(geometry)
|| !shader_map[vertex][fragment][geometry].contains(features)) {
shader_map[vertex][fragment][geometry][features].init(vertex, fragment, geometry, features);
}
return &shader_map[vertex][fragment][geometry][features];
}
and initializing a shader starts like this:
void Shader::init(std::string vertex_dir, std::string fragment_dir, std::string geometry_dir, unsigned int features) {
ShaderManager* shader_manager = ShaderManager::get_instance();
id = glCreateProgram();
Note that it's not safe to set the shader to nullptr by default, because then if we ever attempt to render an unloaded GameTexture, the program will immediately crash upon trying to dereference the nullptr. So instead, we set it to a default shader that won't cause any damage even if everything else about the texture is the default. On its own this is fine, but it becomes a problem if we ever load a GameTexture before OpenGL has been initialized. Suppose we add another singleton called RenderManager. RenderManager is responsible for creating a window, loading OpenGL, etc. Suppose it looks like this:
class RenderManager {
public:
RenderManager(RenderManager& other) = delete;
void operator=(const RenderManager& other) = delete;
SDL_Window* window;
SDL_Renderer* sdl_renderer;
SDL_GLContext sdl_context;
int s_window_width;
int s_window_height;
GameTexture example_texture;
static RenderManager* get_instance();
void destroy_instance();
private:
RenderManager();
static RenderManager* instance;
};
RenderManager::RenderManager() {
float width = 1920.0;
float height = 1080.0;
window = SDL_CreateWindow("Example", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE | SDL_WINDOW_FULLSCREEN_DESKTOP);
SDL_GetWindowSize(window, &s_window_width, &s_window_height);
sdl_renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_TARGETTEXTURE | SDL_RENDERER_ACCELERATED);
sdl_context = SDL_GL_CreateContext(window);
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) {
std::cout << "Failed to initialize GLEW!" << std::endl;
}
SDL_GL_MakeCurrent(window, sdl_context);
SDL_GL_SetSwapInterval(1);
example_texture.init("example/path/here.png");
}
Suppose we start to create an instance of RenderManager. Before the constructor body runs, all of its members are constructed, including example_texture. That invokes the GameTexture default constructor, which tells it to get a shader. ShaderManager hasn't loaded that shader yet, so it starts to do so. However, this crashes the program, because it calls glCreateProgram() even though OpenGL has not been loaded yet.
Which leads me to my question. I know I'm going to manually initialize this specific GameTexture, and in this case I'm required to, since letting the program construct it on its own doesn't work. So is there any way to force the constructor not to run for this one instance of GameTexture, without deleting the constructor outright? That way it could still be stored in RenderManager as a member variable, but it wouldn't try to call any GL functions before OpenGL was loaded.
Note that I'm aware that there are other solutions to this which I'm willing to do. If I were to heap-allocate this GameTexture, its constructor wouldn't run until I actually allocated it. That's the approach I'm currently taking, but I'm not happy with the idea of heap-allocating just to use it like a stack-allocated variable in all contexts when no other GameTextures are on the heap to begin with.
The whole point of having constructors is that you have to run one. If you don't want to run one constructor, you have to run a different constructor. If you don't have a different constructor, you have to add one. Or, move the code you don't want to run in the constructor, so it isn't in the constructor, but somewhere else.
You seem to have painted yourself into a corner by insisting that you have to have classes that act a certain way. You want every shader object to actually be a shader (not null); you want to get a shader in every texture object; you want to create a texture object before OpenGL has been initialized; and you can't get a shader until after OpenGL has been initialized. It is impossible to satisfy all these requirements. You have to change one.
One possibility is that you change the requirement that all textures have to have a shader object. I would consider creating the texture object in a null state and then loading the actual texture into it later (as you seem to be already doing!) and loading the shader at the same time. I've done this before. It is not too difficult to make a linked list of all the texture objects that exist, then when OpenGL is loaded, you go through them all and actually load the textures.
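A rough sketch of that deferred-loading idea (the pending() registry, load(), and load_all_pending() are hypothetical names for illustration, not part of the question's code):
#include <string>
#include <vector>

class GameTexture {
public:
    // Constructed in a "null" state: no GL calls yet, just remember that this
    // texture still needs loading. (A real version would also deregister in the destructor.)
    GameTexture() { pending().push_back(this); }

    void load(const std::string& path);  // the real GL work (shader lookup, texture upload)

    // Call once, right after the GL context has been created.
    static void load_all_pending() {
        for (GameTexture* tex : pending())
            tex->load("example/path/here.png");  // or whatever path each texture recorded
        pending().clear();
    }

private:
    static std::vector<GameTexture*>& pending() {
        static std::vector<GameTexture*> list;   // stands in for the linked list mentioned above
        return list;
    }
};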
Another suggestion is that you don't create the texture object until after OpenGL has been initialized. Initialize OpenGL, then create all the textures. If that means your window and your game rendering have to be in two separate classes, so be it. If you make a Window class and put a Window window; inside class RenderManager - before GameTexture example_texture; - the constructor for Window will run before the constructor for GameTexture. Then you'll have OpenGL initialized when you go to create the texture and shader.
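A rough sketch of that ordering (Window here is a hypothetical class, GameTexture is the one from the question): data members are constructed in declaration order, so putting the window/GL setup in a member declared before example_texture guarantees OpenGL exists by the time the texture's constructor runs.
// Hypothetical Window type whose constructor does the SDL/GL setup.
class Window {
public:
    Window() {
        // SDL_CreateWindow, SDL_GL_CreateContext, glewInit, ... all happen here
    }
};

class RenderManager {
public:
    Window window;                // constructed first: OpenGL is ready after this line
    GameTexture example_texture;  // constructed second: safe to create shaders now
};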
Extras:
In fact it's kinda weird that you don't know when your window will be created. Normally you want to know which order your code runs in, surely? Create the window, then initialize OpenGL, then load the level file, then load all the textures for the level, then send the message saying "a new player has joined the game." - trying to make all this happen in the right order without actually saying the order is sometimes a recipe for disaster. You might have caught "singleton fever" and be trying to do some fancy class shenanigans instead of just. telling. the. computer. what. you. want. it. to. do. It's okay, everyone goes through that phase.
Another thing I noticed is that textures aren't usually associated with shaders. Generally, materials are associated with shaders - a material being a shader and all the textures it needs. Some shaders use zero textures (like a distortion effect); some use multiple (like a diffuse colour texture and also a normal map).
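For illustration, a material in that sense might look something like this (the Material type is made up; Shader and GameTexture are the question's classes):
#include <vector>

// A material pairs a shader with whatever textures that shader samples.
struct Material {
    Shader* shader = nullptr;             // a distortion effect might use zero textures
    std::vector<GameTexture*> textures;   // e.g. a diffuse colour texture plus a normal map
};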

C++ FLTK How to redraw a box from inside a timer function

How do I call a box's redraw() from a routine?
I have a timer callback in which I have to assign a new picture to box1.
My program crashes at this point.
...
Fl_Window *win = NULL;
Fl_Box *box1 = NULL;
static void get_new_pic(void*) { // Timer callback
const char *filename = "pic2.png";
Fl_PNG_Image png(filename);
box1->image(png);
box1->redraw(); // this kicks the application
Fl::repeat_timeout(2,CB_Hole_Info);
}
int main() {
win = new Fl_Window(240,240); // make a window
box1 = new Fl_Box(0,0,240,180); // widget that will contain image
const char *filename = "pic1.png";
Fl_PNG_Image png(filename);
box1->image(png);
Fl::add_timeout(2, get_new_pic, buff); // setup a timer
win->show();
return(Fl::run());
}
Regards
Your way of assigning the image in the timeout is correct. However, you allocate the image on the stack: Fl_PNG_Image png(filename);, so when the timer callback returns, the image is destroyed along with its stack frame. By the time the box is actually drawn, the image is no longer there.
FLTK does not copy the image; it only keeps a pointer to it.
You'd have to write Fl_PNG_Image *png = new Fl_PNG_Image(filename); and fix the rest of the code to use a pointer and make sure that the image is deleted at the very end.
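A minimal sketch of that fix (it assumes the same globals as the question, plus a hypothetical current pointer used to free the previous image; error handling is omitted):
Fl_PNG_Image *current = NULL;  // last image handed to the box; main() would set this too

static void get_new_pic(void*) {  // Timer callback
    // Heap-allocate so the image outlives this callback; FLTK only stores the pointer.
    Fl_PNG_Image *png = new Fl_PNG_Image("pic2.png");
    box1->image(png);
    box1->redraw();
    delete current;  // the box no longer references the old image, so it can be freed
    current = png;
    Fl::repeat_timeout(2, get_new_pic);
}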
You declared png1 and png2 as global variables (used in get_new_pic()) but you also declared local variables with the same name in main() which "shadow" the global variables.
Please remove Fl_PNG_Image * from the two assignments in main() so these assignments use the global variables as intended.
Hint: you should also assign a solid background to the box like
box1->box(FL_FLAT_BOX);

Scrolling in SDL2, i.e., changing the integral coordinates of the GUI's layout

I'm trying to simulate 'scrolling' in an SDL2 application; however, I don't think moving every individual object on the screen each time a scroll event occurs is an efficient or elegant way of doing it. As far as I know, SDL2's coordinate system puts the top left at 0,0. To make this much easier to implement, is it possible to change the top-left starting point of the GUI so that when I scroll it moves to, say, 0,100, and on the next scroll to 0,200, and so on? How could I do this? Thanks
Rather than changing the x,y position of the object itself, or changing the reference co-ordinate of SDL (which cannot be done), you can instead create offset variables.
For example, create an SDL_Point called ViewPointOffset:
SDL_Point ViewPointOffset;
The best practice is to put this in your window class (if you have one), or even better, a Camera class that is a member of the window class.
Then, when you're drawing, just subtract the offset from the x and y co-ordinates that you're drawing:
void draw(SDL_Renderer* renderer, const SDL_Point ViewPointOffset, SDL_Texture* tex, const SDL_Rect* srcrect, const SDL_Rect* dstrect){
    SDL_Rect drawrect;  // a local rect, not an uninitialized pointer
    drawrect.w = dstrect->w;
    drawrect.h = dstrect->h;
    drawrect.x = dstrect->x - ViewPointOffset.x;
    drawrect.y = dstrect->y - ViewPointOffset.y;
    SDL_RenderCopy(renderer, tex, srcrect, &drawrect);
}
You can either create a second function, or attach a boolean to the input of that function, to allow you to ignore the offset; what if you have a GUI button that you don't want the offset to apply to, etc?
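Scrolling then just means changing the offset when the scroll event arrives; a minimal sketch of the event handling (the 100-pixel step is arbitrary):
SDL_Event e;
while (SDL_PollEvent(&e)) {
    if (e.type == SDL_MOUSEWHEEL) {
        // Moving the offset shifts everything drawn through draw() without touching the objects.
        ViewPointOffset.y -= e.wheel.y * 100;
    }
}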
https://github.com/Helliaca/SDL2-Game is a small open source game using a similar method. You can find this code in base.cpp/.h

Ownership of my new Unique_ptrs?

Following a suggestion from a job interview I had recently, I was advised to research the unique_ptr functionality of C++11 as a means of automatic memory management. So I'm taking an older project and replacing the raw pointers to objects created with the 'new' keyword with unique_ptrs. However, I think I have run into an issue of ownership.
In my mainclass.cpp (posted below) please turn your attention to the init function and the 3 unique_ptrs to new-instantiated objects I have created. Named "bg","bg2", and "theGrid".
(The commented out declarations below them are how they used to be done, and switching back to this method, the program runs just fine.)
However, using the unique_ptrs, the line in the function void display():
theGrid->doGridCalculations();//MODEL
generates an access violation. This is also the first time in the sequence that any of the pointed-to objects is dereferenced, which leads me to believe that ownership of the unique_ptr is already lost somewhere. However, the unique_ptrs themselves are never passed into another function or container; they remain in the scope of mainclass.cpp, so I've seen no opportunity to use std::move(theGrid) to transfer ownership to wherever it needs to be.
Mainclass.cpp:
#include <stdio.h>
#include <GL/glut.h>
#include <math.h>
#include "Block.h"
#include "dStructs.h"
#include "Grid.h"
#include "Texture.h"
#include "freetype.h"
#include <Windows.h>
//////////////////////////////////////////////////////
///Declare a couple of textures - for the background
//////////////////////////////////////////
Texture* bg;
Texture* bg2;
//and theGrid
Grid* theGrid;
/////////////////////////////////////////////////
///Declare our font
/////////////////////////////////////////////////
freetype::font_data scoreFont;
/////////////////////////////////////////////////////////
//Initialize the variables
///////////////////////////////////////////////////////
typedef dStructs::point point;
const int XSize = 755, YSize = 600;
point offset = {333,145};
point mousePos = {0,0};
void init(void)
{
//printf("\n......Hello Guy. \n....\nInitilising");
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0,XSize,0,YSize);
//////////////////////////
//initialise the fonts
/////////////////////////
try{
scoreFont.init("Visitor TT2 BRK Regular.ttf", 20);
} catch (std::exception &e) {
MessageBox(NULL, e.what(), "EXCEPTION CAUGHT", MB_OK | MB_ICONINFORMATION);
}
///////////////////////////////////////////////////////////////
///bg new MEMORY MANAGED EDITION
//////////////////////////////////////////////////////////////////
unique_ptr<Texture> bg(new Texture(1024,1024,"BackGround.png"));
unique_ptr<Texture> bg2(new Texture(1024,1024,"BackGround2.png"));
unique_ptr<Grid> theGrid(new Grid(offset));
/////////////////////////////////////////////////
/// Old bad-memory-management style of pointed objects
/////////////////////////////////////////////////
//bg = new Texture(1024,1024,"BackGround.png");
//bg2 = new Texture(1024,1024,"BackGround2.png");
//theGrid = new Grid(offset);
glClearColor(0,0.4,0.7,1);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);//activate the alpha blending functionality
glEnable(GL_BLEND);
glLineWidth(2); // Width of the drawing line
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DEPTH_TEST);
//printf("\nInitialisation Complete");
}
void myPassiveMouse(int x, int y)
{
//Stupid OGL coordinate system
y = YSize - y;
mousePos.x = x;
mousePos.y = y;
printf("\nthe mouse coordinates are (%f,%f)",mousePos.x, mousePos.y);
}
void displayGameplayHUD()
{
///////////////////////////////
//SCORE
//////////////////////////////
glColor4f(0.7f,0.0f,0.0f,7.0f);//set the colour of the text
freetype::print(scoreFont, 100,400,"SCORE: ");
glColor4f(1.0f,1.0f,1.0f,1.0f);//Default texture colour. Makes text white, and all other texture's as theyre meant to be.
}
//////////////////////////////////////////////////////
void display()
{
////printf("\nBeginning Display");
glClear(GL_COLOR_BUFFER_BIT);//clear the colour buffer
glPushMatrix();
theGrid->doGridCalculations();//MODEL
point bgLoc = {XSize/2,YSize/2};
point bgSize = {XSize,YSize};
bg2->draw(bgLoc,bgSize);
theGrid->drawGrid();//DISPLAY
bg->draw(bgLoc,bgSize);
if(theGrid->gridState == Grid::STATIC)
{
theGrid->hoverOverBlocks(mousePos);//CONTROLLER
}
displayGameplayHUD();
glPopMatrix();
glFlush(); // Finish the drawing
glutSwapBuffers();
////printf("\nFresh Display Loaded");
glutPostRedisplay();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv); // GLUT Initialization
glutInitDisplayMode(GLUT_RGBA|GLUT_DOUBLE); // Initializing the Display mode
glutInitWindowSize(755,600); // Define the window size
glutCreateWindow("Gem Miners"); // Create the window, with caption.
init(); // All OpenGL initialization
//-- Callback functions ---------------------
glutDisplayFunc(display);
//glutKeyboardFunc(mykey);
//glutSpecialFunc(processSpecialKeys);
//glutSpecialUpFunc(processSpecialUpKeys);
glutMouseFunc(mymouse);
glutPassiveMotionFunc(myPassiveMouse);
glutMainLoop(); // Loop waiting for event
}
I think the ownership needs to be transferred at some point, but I don't know where.
Thanks in advance,
Guy
These are global raw pointers:
Texture* bg;
Texture* bg2;
//and theGrid
Grid* theGrid;
These are completely unrelated unique_ptrs, local to the init function.
unique_ptr<Texture> bg(new Texture(1024,1024,"BackGround.png"));
unique_ptr<Texture> bg2(new Texture(1024,1024,"BackGround2.png"));
unique_ptr<Grid> theGrid(new Grid(offset));
When the unique_ptrs go out of scope, they are destroyed. The objects that they point to are also destroyed, because that is what unique_ptr does in its destructor. At no point in that process were the global raw pointers involved with the debacle. They were hidden by the local unique_ptrs of the same name.
You should change your global raw pointers to unique_ptrs. Then you can set them (don't re-declare them) in the init function like this:
bg.reset(new Texture(1024,1024,"BackGround.png"));
bg2.reset(new Texture(1024,1024,"BackGround2.png"));
theGrid.reset(new Grid(offset));
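That is, the declarations at file scope would become something like this (you also need <memory> for std::unique_ptr):
#include <memory>

std::unique_ptr<Texture> bg;
std::unique_ptr<Texture> bg2;
//and theGrid
std::unique_ptr<Grid> theGrid;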
The unique pointers that you create in your init function do not modify the pointers declared at file scope; the ones at file scope are zero-initialised, so they remain null.
What you're doing in the init function is creating three new objects with names that shadow the ones at file scope, so when you go to use the ones at file scope you get an access violation because they are never set to point at anything valid.
Your unique_ptr<Grid> in init is local to that function. The unique_ptr<Grid> will go out of scope at the end of the function, destroying itself and the Grid it owns. It seems like you want to actually have a global object unique_ptr<Grid> theGrid; which replaces the Grid* theGrid; you've got at the moment. Then in init you can do:
theGrid.reset(new Grid(offset));
The theGrid that is being accessed in display is the global theGrid of type Grid*.
The exact same is true for the other unique_ptrs you've tried to create.
Of course, rather than a global object, it would be much better to be passing these objects around, but your use of GLUT makes that a bit painful.

Prevent an Object from destroying prematurely

Here's the (relevant) code for my pro::surface class:
/** Wraps up SDL_Surface* **/
class surface
{
SDL_Surface* _surf;
public:
/** Constructor.
** #param surf an SDL_Surface pointer.
**/
surface(SDL_Surface*);
/** Overloaded = operator. **/
void operator = (SDL_Surface*);
/** calls SDL_FreeSurface(). **/
void free();
/** destructor. Also free()s the internal SDL_Surface. **/
virtual ~surface();
};
Now the problem is that in my main function, the object destroys itself (and hence calls the destructor which dangerously free()s the SDL Video Surface!) before the real rendering begins.
int main(int argc, char** argv)
{
...
// declared here
pro::surface screen = SDL_SetVideoMode(320,240,16,SDL_HWSURFACE|SDL_DOUBLEBUF);
// event-handling done here, but the Video Surface is already freed!
while(!done) { ... } // note that "screen" is not used in this loop.
// hence, runtime error here. SDL_Quit() tries to free() the Video Surface again.
SDL_Quit();
return 0;
}
So my question is, is there any way to stop the pro::surface instance from destroying itself before the program ends? Doing memory management manually works though:
/* this works, since I control the destruction of the object */
pro::surface* screen = new pro::surface( SDL_SetVideoMode(..) );
/* it destroys itself only when **I** tell it to! Muhahaha! */
delete screen;
/* ^ but this solution uses pointer (ewww! I hate pointers) */
But isn't there a better way, without resorting to pointers? Perhaps some way to tell the stack to not delete my object just yet?
You violated the Rule of Three.
pro::surface screen = SDL_SetVideoMode(320,240,16,SDL_HWSURFACE|SDL_DOUBLEBUF);
is equivalent to
pro::surface screen = pro::surface(SDL_SetVideoMode(320,240,16,SDL_HWSURFACE|SDL_DOUBLEBUF));
The temporary on the right-hand side frees the surface when it is destroyed, and the named object frees it again later: a double free, because you violated the Rule of Three. So give your class a proper copy constructor and assignment operator, or disallow them and construct the object in place.
Edit: This also explains why your pointer version works fine: you never invoke a copy.
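A minimal sketch of the "disallow them" option, using C++11 deleted functions on the class from the question:
class surface
{
    SDL_Surface* _surf;
public:
    surface(SDL_Surface* surf) : _surf(surf) {}

    // Rule of Three: the destructor releases _surf, so copying must be dealt
    // with as well. Here copies are simply forbidden.
    surface(const surface&) = delete;
    surface& operator=(const surface&) = delete;

    void free();               // calls SDL_FreeSurface(), as in the question
    virtual ~surface() { free(); }
};
With copies disallowed, the copy-initialization above no longer compiles, which forces you to construct the object in place: pro::surface screen(SDL_SetVideoMode(320,240,16,SDL_HWSURFACE|SDL_DOUBLEBUF));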
Wrap the screen in a special wrapper that pairs SDL_SetVideoMode with SDL_Quit instead of SDL_FreeSurface.
pro::screen screen(320, 240, 16, SDL_HWSURFACE | SDL_DOUBLEBUF);
// ...
And fix your copy ctor, obviously.
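A rough sketch of such a wrapper (the pro::screen name matches the usage above; everything else is an assumption about how it might be written):
namespace pro {
class screen
{
    SDL_Surface* _surf;
public:
    screen(int w, int h, int bpp, Uint32 flags)
        : _surf(SDL_SetVideoMode(w, h, bpp, flags)) {}

    // The display surface belongs to SDL, so instead of SDL_FreeSurface we
    // shut the library down, which releases the screen for us.
    ~screen() { SDL_Quit(); }

    screen(const screen&) = delete;             // still non-copyable (Rule of Three)
    screen& operator=(const screen&) = delete;

    SDL_Surface* get() const { return _surf; }
};
}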