How to correctly manage initialisation in a constructor? - c++

I'm writing a simple wrapper in C++ around GLFW and OpenGL, as an exercise. I have a class Window and a class Renderer. The Window class owns a Renderer member.
Window creates the GLFW context in its constructor, runs the main loop, and calls its Renderer member to draw. Renderer sets up the buffers in its constructor and does the OpenGL work.
The problem I've got is that OpenGL calls require a current OpenGL context. If Renderer were a plain member, its constructor would run before the body of Window's constructor, i.e. before the GLFW context has been created.
What I did instead is store a unique_ptr to a Renderer in Window, and call std::make_unique<Renderer> in the constructor of Window. Then, in the destructor, I call std::unique_ptr::reset() before destroying the GLFW context, so that the OpenGL cleanup calls still run with a valid context.
class Window
{
public:
    Window()
    {
        // initialise GLFW and create an OpenGL context
        // ...
        m_renderer = std::make_unique<Renderer>();
    }
    ~Window()
    {
        m_renderer.reset(); // destroy the Renderer while the context is still alive
        glfwTerminate();
    }
    int run()
    {
        while (...)
        {
            m_renderer->draw();
            // ...
        }
        return 0;
    }
private:
    std::unique_ptr<Renderer> m_renderer;
    // ...
};
class Renderer
{
public:
    Renderer() { /* set up buffers */ }
    ~Renderer() { /* free buffers */ }
    void draw() { glDrawElements(...); /* ... */ }
private:
    // ...
};
int main()
{
    Window window;
    return window.run();
}
I understand that an object should already be fully initialised by the end of its constructor, which is not the case here. I feel I may have set up the dependency between Renderer and Window backwards, or that my general architecture is wrong. I would rather rely on constructors and destructors being called at the right moment based on scope than on triggering them manually.
What would be a better solution?

I suggest that you create a separate class, call it GLFWInit, that does the GLFW initialization and calls glfwTerminate() in its destructor. Then you have two options:
Embed an object of type GLFWInit in your Window class. Declare the member first, or at least before the m_renderer member.
Derive your Window class from GLFWInit.
Both methods ensure that GLFW is initialized before m_renderer is constructed and torn down after it is destroyed. Then you do not even need to hold the renderer through a pointer; you can embed the member directly (if that is feasible).

Related

Is there a way to prevent the default constructor of a class from running for a single instance of a member variable?

For my game, suppose I have a class called GameTexture, where the default constructor looks like this:
GameTexture::GameTexture() {
    this->shader = ShaderManager::get_instance()->get_shader("2d_texture", "2d_texture", "", 0);
}
get_shader() looks like this:
Shader* ShaderManager::get_shader(std::string vertex, std::string fragment, std::string geometry, unsigned int features) {
    if (!shader_map.contains(vertex)
        || !shader_map[vertex].contains(fragment)
        || !shader_map[vertex][fragment].contains(geometry)
        || !shader_map[vertex][fragment][geometry].contains(features)) {
        shader_map[vertex][fragment][geometry][features].init(vertex, fragment, geometry, features);
    }
    return &shader_map[vertex][fragment][geometry][features];
}
and initializing a shader starts like this:
void Shader::init(std::string vertex_dir, std::string fragment_dir, std::string geometry_dir, unsigned int features) {
    ShaderManager* shader_manager = ShaderManager::get_instance();
    id = glCreateProgram();
Note that it's not safe to set the shader to nullptr by default, because then if we ever attempt to render an unloaded GameTexture, the program will immediately crash upon trying to dereference the nullptr. So instead, we set it to a default shader that won't cause any damage even if everything else about the texture is the default.
On its own this is fine, but it becomes a problem if we ever load a GameTexture before OpenGL has been initialized. Suppose we add another singleton called RenderManager. RenderManager is responsible for creating a window, loading OpenGL, etc. Suppose it looks like this:
class RenderManager {
public:
    RenderManager(RenderManager& other) = delete;
    void operator=(const RenderManager& other) = delete;

    SDL_Window* window;
    SDL_Renderer* sdl_renderer;
    SDL_GLContext sdl_context;
    int s_window_width;
    int s_window_height;
    GameTexture example_texture;

    static RenderManager* get_instance();
    void destroy_instance();
private:
    RenderManager();
    static RenderManager* instance;
};

RenderManager::RenderManager() {
    float width = 1920.0;
    float height = 1080.0;
    window = SDL_CreateWindow("Example", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height,
        SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE | SDL_WINDOW_FULLSCREEN_DESKTOP);
    SDL_GetWindowSize(window, &s_window_width, &s_window_height);
    sdl_renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_TARGETTEXTURE | SDL_RENDERER_ACCELERATED);
    sdl_context = SDL_GL_CreateContext(window);
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK) {
        std::cout << "Failed to initialize GLEW!" << std::endl;
    }
    SDL_GL_MakeCurrent(window, sdl_context);
    SDL_GL_SetSwapInterval(1);
    example_texture.init("example/path/here.png");
}
Suppose we start to create an instance of RenderManager. Before entering the constructor body, it initializes all of its members and finds example_texture. It calls the constructor for example_texture, which tells it to get a shader. ShaderManager hasn't loaded that shader yet, so it starts to do so. However, this crashes the program, because it calls glCreateProgram() even though OpenGL has not yet been loaded.
Which leads me to my question. I know that I'm going to manually initialize this specific instance of GameTexture, and in fact in this case I'm required to, seeing as letting the program create the GameTexture on its own doesn't work. So is there any possible way to force the constructor not to run for this specific instance of a GameTexture, without deleting it outright? That way it could be stored in RenderManager as a member variable, but it wouldn't try to call any gl functions before gl was loaded.
Note that I'm aware there are other solutions to this which I'm willing to use. If I were to heap-allocate this GameTexture, its constructor wouldn't run until I actually allocated it. That's the approach I'm currently taking, but I'm not happy with the idea of heap-allocating an object just to use it like a stack-allocated variable in all contexts, when no other GameTextures are on the heap to begin with.
The whole point of having constructors is that you have to run one. If you don't want to run one constructor, you have to run a different constructor. If you don't have a different constructor, you have to add one. Or, move the code you don't want to run in the constructor, so it isn't in the constructor, but somewhere else.
You seem to have painted yourself into a corner by insisting that you have to have classes that act a certain way. You want every shader object to actually be a shader (not null); you want to get a shader in every texture object; you want to create a texture object before OpenGL has been initialized; and you can't get a shader until after OpenGL has been initialized. It is impossible to satisfy all these requirements. You have to change one.
One possibility is that you change the requirement that all textures have to have a shader object. I would consider creating the texture object in a null state and then loading the actual texture into it later (as you seem to be already doing!) and loading the shader at the same time. I've done this before. It is not too difficult to make a linked list of all the texture objects that exist, then when OpenGL is loaded, you go through them all and actually load the textures.
Another suggestion is that you don't create the texture object until after OpenGL has been initialized. Initialize OpenGL, then create all the textures. If that means your window and your game rendering have to be in two separate classes, so be it. If you make a Window class and put a Window window; inside class RenderManager - before GameTexture example_texture; - the constructor for Window will run before the constructor for GameTexture. Then you'll have OpenGL initialized when you go to create the texture and shader.
Extras:
In fact it's kinda weird that you don't know when your window will be created. Normally you want to know which order your code runs in, surely? Create the window, then initialize OpenGL, then load the level file, then load all the textures for the level, then send the message saying "a new player has joined the game." - trying to make all this happen in the right order without actually saying the order is sometimes a recipe for disaster. You might have caught "singleton fever" and be trying to do some fancy class shenanigans instead of just. telling. the. computer. what. you. want. it. to. do. It's okay, everyone goes through that phase.
Another thing I noticed is that textures aren't usually associated with shaders. Generally, materials are associated with shaders - a material being a shader and all the textures it needs. Some shaders use zero textures (like a distortion effect); some use multiple (like a diffuse colour texture and also a normal map).

2 QOpenGLWidget shared context causing crash

I would like to solve a problem I've been dealing with: rendering two QOpenGLWidgets at the same time, in different top-level windows, with shared shader programs etc.
My first attempt was to use one context, which didn't work.
Is this even possible with QOpenGLWidget? Or do I have to go back to the older QGLWidget, or use something else?
testAttribute for Qt::AA_ShareOpenGLContexts returns true, so sharing should not be the problem;
even QOpenGLContext::areSharing returns true. So there is something I'm missing. I am not using threads.
Debug output:
MapExplorer true true true
QOpenGLShaderProgram::bind: program is not valid in the current context.
MapExlorer paintGL ends
MapExplorer true true true
QOpenGLShaderProgram::bind: program is not valid in the current context.
MapExlorer paintGL ends
QOpenGLFramebufferObject::bind() called from incompatible context
QOpenGLShaderProgram::bind: program is not valid in the current context.
QOpenGLShaderProgram::bind: program is not valid in the current context.
QOpenGLShaderProgram::bind: program is not valid in the current context.
QOpenGLFramebufferObject::bind() called from incompatible context
QOpenGLFramebufferObject::bind() called from incompatible context
MapView initializeGL:
void MapView::initializeGL()
{
    this->makeCurrent();
    initializeOpenGLFunctions();
    // Initialize world
    world->initialize(this->context(), size(), worldCoords);
    // Initialize camera shader
    camera->initialize();
    // Enable depth testing
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glDepthFunc(GL_LEQUAL); // just testing new depth func
    glClearColor(0.65f, 0.77f, 1.0f, 1.0f);
}
MapView paintGL:
void MapView::paintGL()
{
    this->makeCurrent();
    glDrawBuffer(GL_FRONT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    world->draw(...);
}
MapExplorer initializeGL:
void MapExplorer::initializeGL()
{
    this->makeCurrent();
    QOpenGLContext* _context = _mapView->context();
    _context->setShareContext(this->context());
    _context->create();
    this->context()->create();
    this->makeCurrent();
    initializeOpenGLFunctions();
    // Enable depth testing
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glDepthFunc(GL_LEQUAL); // just testing new depth func
    glClearColor(0.65f, 0.77f, 1.0f, 1.0f);
}
MapExplorer paintGL:
void MapExplorer::paintGL()
{
    this->makeCurrent();
    qDebug() << "MapExplorer" << QOpenGLContext::areSharing(this->context(), _mapView->context()) << (QOpenGLContext::currentContext() == this->context());
    QOpenGLShaderProgram* shader = world->getTerrainShader();
    qDebug() << shader->create();
    shader->bind(); // debug error "QOpenGLShaderProgram::bind: program is not valid in the current context."
    // We need the viewport size to calculate tessellation levels, and the geometry shader also needs the viewport matrix
    shader->setUniformValue("viewportSize", viewportSize);
    shader->setUniformValue("viewportMatrix", viewportMatrix);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    qDebug() << "MapExlorer paintGL ends";
    //world->drawExplorerView(...);
}
Hi, I've been hacking at this for 2 days and finally got something to work.
The main reference is Qt's threadrenderer example.
Basically I have a QOpenGLWidget with its own context, and a background thread drawing to a context shared from the QOpenGLWidget. The framebuffer drawn by the background thread can be directly used by the QOpenGLWidget.
Here are the steps to make things work:
I have the QOpenGLWidget RenderEngine and the background thread RenderWorker:
// the worker is a thread
class RenderWorker : public QThread, protected QOpenGLFunctions
{
    // the background thread's context and surface
    QOffscreenSurface* surface = nullptr;
    QOpenGLContext* context = nullptr;

    RenderWorker()
    {
        context = new QOpenGLContext();
        surface = new QOffscreenSurface();
    }
    ...
};
// the engine is a QOpenGLWidget
class RenderEngine : public QOpenGLWidget, protected QOpenGLFunctions
{
protected:
    // overrides
    void initializeGL() override;
    void resizeGL(int w, int h) override;
    void paintGL() override;
private:
    // the engine has a background worker
    RenderWorker* m_worker = nullptr;
    ...
};
Create and set up the background thread in the QOpenGLWidget's initializeGL():
void RenderEngine::initializeGL()
{
    initializeOpenGLFunctions();

    // done with the current (QOpenGLWidget's) context
    QOpenGLContext* current = context();
    doneCurrent();

    // create the background thread
    m_worker = new RenderWorker();

    // the background thread's context is shared from current
    QOpenGLContext* shared = m_worker->context;
    shared->setFormat(current->format());
    shared->setShareContext(current);
    shared->create();

    // must move the shared context to the background thread
    shared->moveToThread(m_worker);

    // set up the background thread's surface
    // must be created here in the main thread
    QOffscreenSurface* surface = m_worker->surface;
    surface->setFormat(shared->format());
    surface->create();

    // worker signal
    connect(m_worker, SIGNAL(started()), m_worker, SLOT(initializeGL()));

    // must move the thread to itself
    m_worker->moveToThread(m_worker);

    // the worker can finally start
    m_worker->start();
}
The background thread has to initialize the shared context in its own thread:
void RenderWorker::initializeGL()
{
    context->makeCurrent(surface);
    initializeOpenGLFunctions();
}
Now any framebuffer drawn in the background thread can be directly used (as a texture, etc.) by the QOpenGLWidget, e.g. in the paintGL() function.
As far as I know, an OpenGL context is bound to a thread. A shared context and the corresponding surface must be created and set up in the main thread, moved to another thread, and initialized in it before it can finally be used.
In your code I don't see where you link your shader program. You should:
Remove shader->create() from paintGL; create the program once in the initializeGL function of one of the views and share it among the other views.
Link it in the paintGL functions this way before binding it:
if (!shader->isLinked())
shader->link();
The link status of a shader program is context-dependent (see OpenGL, and take a look at the QOpenGLShaderProgram::link() source code).
In MapExplorer::initializeGL(), remove _context (it's not used at all). Also remove this->context()->create() (that is done by QOpenGLWidget).
In your main function, put this on the first line (or anywhere before the QApplication instantiation):
QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
I do this in a multi-QOpenGLWidget app and it works fine.
One reason your context is giving you grief is that you are trying to create your own in MapExplorer::initializeGL. QOpenGLWidget already creates its own context in its private initialize function; you need to use the one it creates. Its own context is also made current before each of initializeGL, paintGL and resizeGL. Making your own current is probably causing errors and is not how that widget is designed to be used.
Context sharing between widgets needs to be done with QOpenGLContext::globalShareContext(). That is a static member of QOpenGLContext that gets initialized when QGuiApplication is created. That static member and the default format are what the QOpenGLWidget's context is initialized from automatically.

Use OpenGL context in an external class in Qt

I have a window in Qt that inherits from QGLViewer. If I create a shader program in that class, QGLShaderProgram myShader, everything runs fine.
However, when I start moving some rendering calls to classes outside the class that has the draw() call, things break.
The application compiles without errors, but when executing it I get the error The program has unexpectedly finished.
I found out that from Qt4 to Qt5 the shader class changed, QOpenGLShaderProgram being the one used in Qt5. I gave it a try and the same problem occurred; nevertheless I got a different error message: QOpenGLFunctions created with a non-current context.
This makes me think that when calling OpenGL functions from a class that has no direct relation to the class that actually does the drawing, the OpenGL context is "lost".
How can I make the context visible across all the classes? In general my code looks like
MyViewer.hpp
class MyViewer : public QGLViewer
{
public:
    MyViewer(const QGLFormat format);
    ~MyViewer();
protected:
    void init();
    void draw()
    {
        // Clear color buffer and depth buffer
        // Do stuff
        m_cube.render();
    }
private:
    ...
    ...
    Cube m_cube;
};
Cube.cpp
class Cube
{
public:
    Cube()
    {
        m_shaderProgram.addShaderFromSourceFile(QGLShader::Vertex, ":/vertex.glsl");
        m_shaderProgram.addShaderFromSourceFile(QGLShader::Fragment, ":/fragment.glsl");
        m_shaderProgram.link();
        // Initialize VAO and VBOs
    }
    void render() { /* render OpenGL calls */ }
private:
    QGLShaderProgram m_shaderProgram;
};
OpenGL contexts are global state, but you can explicitly share a context between two viewers like so:
QGLViewer(QGLContext* context,
          QWidget* parent = 0,
          const QGLWidget* shareWidget = 0,
          Qt::WindowFlags flags = 0)
According to the documentation:
Same as QGLViewer(), but a QGLContext can be provided so that viewers
share GL contexts, even with QGLContext sub-classes (use shareWidget
otherwise).
So first check the order in which your classes are created, because the Cube could be calling OpenGL functions while your viewer is still incomplete.
If you call OpenGL functions before the QGLViewer has created a context, you get an error.
If so, a quick workaround would be to create a new QGLContext in the constructor of Cube and pass it back to the viewer.
Else do this
Cube() {} // empty cube constructor
void initShaders()
{
    m_shaderProgram.addShaderFromSourceFile(QGLShader::Vertex, ":/vertex.glsl");
    m_shaderProgram.addShaderFromSourceFile(QGLShader::Fragment, ":/fragment.glsl");
    m_shaderProgram.link();
    // Initialize VAO and VBOs
}
and in the constructor of my viewer do
MyViewer(const QGLFormat format)
{
    m_cube.initShaders();
}
I have not tested this code but it should alter the order of initialization.

How to get stuff to show up in my window?

I've been playing with OpenGL. I'm confused because most commands seem to be static method calls rather than calls on the window object I create; by what arcane methods the compiler divines my target window, I can't fathom.
I assume I've misunderstood something along the way, since I can't get my code to work. The snippet below produces a window with a transparent background rather than a black one; trying a few other drawing commands also gave me nothing. What am I doing wrong?
public static void Main()
{
    using (OpenTK.GameWindow a = new OpenTK.GameWindow(800, 600, GraphicsMode.Default, "Sandboxy"))
    {
        a.Run(30);
        OpenTK.Graphics.OpenGL.GL.ClearColor(0, 0, 0, 0);
        OpenTK.Graphics.OpenGL.GL.ClearDepth(1.0);
        OpenTK.Graphics.OpenGL.GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    }
}
Yes, you're definitely doing something wrong, but firstly, the static method thing.
The way it knows where to draw, even though you're using static methods, is that OpenTK tells OpenGL (OpenTK.Graphics.OpenGL.GL) where to draw when it creates the GameWindow: it binds the window to the static OpenGL context. (On Windows it does this through the Win32 functions wglCreateContext(HDC) and wglMakeCurrent(HDC, HGLRC); look those up if you want more information.)
The reason it doesn't work is that you try to clear everything after you start the render loop:
a.Run(30);
doesn't just open the window; it also enters the render loop, meaning it returns only when the window closes. This is quite obviously not what you want. You want to render in the loop, not afterwards.
The preferred way to draw it (for OpenTK) is create a class deriving from GameWindow, and overriding the functions to do with the loop:
class MyGameWindow : GameWindow
{
    public MyGameWindow() : base(800, 600, GraphicsMode.Default, "Sandboxy")
    {
    }

    protected override void OnRenderFrame(FrameEventArgs e)
    {
        GL.ClearColor(0, 0, 0, 0);
        GL.ClearDepth(1.0);
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
        this.SwapBuffers();
    }

    public static void Main(string[] args)
    {
        using (GameWindow a = new MyGameWindow())
        {
            a.Run(30);
        }
    }
}
OpenGL commands operate on the current GL context. You acquire GL contexts from the operating system and make them current in a given thread via an OS-specific (wgl, glx, etc.) *MakeCurrent() call.

C++ / SDL encapsulation design help

So I am semi-new to C++, and completely new to SDL. Most of my conceptual knowledge of OOP comes from Java and PHP. So bear with me.
I am trying to work out some simple design logic with my program / soon to be side-scroller. My problem lies with trying to make my 'screen' layer (screen = SDL_SetVideoMode(...)) accessible to all my other classes; Hero class, background layer, enemies, etc. I have been loosely following some more procedural tutorials, and have been trying to adapt them to a more object oriented approach. Here is a little bit of what I have so far:
main.cpp
#include "Game.h"
#include "Hero.h"
int main(int argc, char* argv[])
{
    // Init game
    Game Game;
    // Load hero
    Hero Hero(Game.screen);
    // While game is running
    while (Game.runningState())
    {
        // Handle window and hero inputs
        Game.Input();
        Hero.userInput();
        // Draw
        Game.DrawBackground();
        Hero.drawHero();
        // Update
        Game.Update();
    }
    // Clean up
    Game.Clean();
    return 0;
}
As you can see, I have a Game class, and a Hero class. The Game class is responsible for setting up the initial window, and placing a background. It also updates the screen as you can see.
Now, since my Game class holds the 'screen' property, which is a handle for SDL_SetVideoMode, I am stuck passing this into any other class (ex: Hero Hero(Game.screen);) that needs to update to this screen... say via SDL_BlitSurface.
Now, this works, however I am getting the idea there has GOT to be a more elegant approach. Like possibly keeping that screen handler on the global scope (if possible)?
TLDR / Simple version: What is the best way to go about making my window / screen handler accessible to all my subsequent classes?
I like the way you are doing it.
Though rather than passing the screen reference, I would pass a reference to the game itself. Thus each hero object knows which game it belongs to, and it can then ask the game object for the screen as required.
The reason I would do this is so that in a few years, when your game is a wide and successful product and you convert it for online play, you'll hardly need to do any work. The game server will easily support multiple game objects, each game object hosting multiple hero objects. As each hero object wants to draw, it asks the game for the screen and updates it (the screen can now vary from game object to game object and still work perfectly, as long as they have the same interface).
class Game
{
public:
    Game(Screen& screen)
        : theGameScreen(screen)
    {}
    virtual ~Game() {}
    virtual Screen& screen() { return theGameScreen; }
    void update() { /* Draw screen. Then draw all the heroes */ }
private:
    friend Hero::Hero(Game&);
    friend Hero::~Hero();
    void addHero(Hero& newHero) { herosInGame.push_back(&newHero); }
    void delHero(Hero& oldHero) { /* Delete hero from herosInGame */ }
    // Implementation detail about how a game stores a screen.
    // I do not have enough context, only that a Game should have one.
    // So, theoretically:
    Screen& theGameScreen;
    std::vector<Hero*> herosInGame;
};
class Hero
{
public:
    Hero(Game& game)
        : game(game)
    { game.addHero(*this); }
    virtual ~Hero()
    { game.delHero(*this); }
    virtual void Draw(Screen& screen) { /* Draw a hero on the screen */ }
private:
    Game& game;
};
Main.
#include "Game.h"
#include "Hero.h"
int main(int argc, char* argv[])
{
    // Init game
    Screen aScreenObject;
    Game game(aScreenObject);
    // Load hero
    Hero hero(game); // or create one hero object for each player
    // While game is running
    while (game.runningState())
    {
        // Handle window and hero inputs
        game.input();
        hero.userInput();
        // Update
        game.update();
    }
    // Clean up
    // game.clean(); Don't do this.
    // This is what the destructor is for.
}
I don't know if it's elegant, but what I do for the side-scrolling game I'm making is to add a show() function to each class that draws to the screen, passing the screen handle as a parameter. Then whenever I want to draw something to the screen I just do foo.show(screen). The screen handle lives in main().
The first, and honestly, easiest solution, is to use a global variable. Yes, yes, yes, everyone says global variables are horrible, but in this situation, it's perfectly fine.
The other solution, which is a bit more work, but can result in somewhat more portable code, is to encapsulate your drawing functions into a single, static class. This way, you can draw to the screen directly without having to pass around a variable, or have to lie awake at night thinking the code review police will get you because you used a global variable. Plus, this can potentially make it easier if you ever decide to port your game to a new library. Some quick and dirty pseudocode:
class Drawing
{
public:
    static void Draw(int x, int y, sdl_surface* graphic, sdl_rect* clip = nullptr);
    static void Init(sdl_surface* screen);
private:
    static sdl_surface* screen;
};

void Drawing::Draw(int x, int y, sdl_surface* graphic, sdl_rect* clip)
{
    sdl_blit(x, y, graphic, clip); // blit onto the stored screen
}

void Drawing::Init(sdl_surface* s)
{
    screen = s;
}
It sounds like you're looking for a way to implement the Singleton design pattern, where you would have a single Screen object. If you know you're only ever going to have a single Screen object it should work fine.
In this case you would implement a static method on the Game class:
class Game
{
public:
    static Screen *GetTheScreenObject();
private:
    static Screen *theScreen; // details of initialisation omitted
};
that would return a pointer to the single Screen object.
If there is a possibility that you'll end up using multiple SDL screens, though, it may be worth creating a Draw() method in your Hero class that is responsible for drawing the hero on each of the Screens managed by the Game class by iterating through a list provided by the Game class.
That functionality could be contained in the methods of a common DrawableThingy class that Hero and Enemy are derived from.
Passing Game.screen around is more OO (though it might be better to have a getter function) than having one instance of it that can be accessed from any class, because if you have one global version, you can't have more than one Game.screen at any one time.
However if you know you'll only ever need one in the entire lifetime of the program, you might consider making Game::Screen() a public static function in the Game class that returns a private static member screen. That way, anyone can call Game::Screen() and get the screen.
Example (assuming ScreenType is the type of screen and that you store a pointer to it):
class Game {
public:
    static ScreenType* Screen() {
        if (!screen)
            screen = GetScreenType(args);
        return screen;
    }
private:
    // if you don't already know:
    // static means only one of this variable exists, *not* one per instance,
    // so there is only one no matter how many instances you make.
    // The same applies to static functions, which you don't need an instance to call.
    static ScreenType* screen;
};

// and somewhere in a .cpp file
ScreenType* Game::screen = NULL;

// and you use it like this
ScreenType* scr = Game::Screen();
// use scr