SDL + OpenGL, irregular frame times with periodic spikes - C++

I was developing an application with SDL and glad and noticed some small freezes in the animations. I wrote a separate project with minimal functionality and found that the issue appears even with a very basic setup.
This is the code:
#include <iostream>
#include <vector>
#include <fstream>
#include <SDL.h>
#include <glad/glad.h>
using namespace std;
// the application will close after this amount of time
const float MILLISECONDS_TO_CLOSE = 10 * 1000;

int main(int argc, char** argv)
{
    SDL_Init(SDL_INIT_EVERYTHING);

    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

    SDL_Window* window = SDL_CreateWindow(
        "tracer",
        100, 100,
        800, 600,
        SDL_WINDOW_OPENGL
    );
    SDL_GLContext context = SDL_GL_CreateContext(window);
    // SDL_GL_SetSwapInterval(1); // this line mitigates the problem but just slightly

    if (!gladLoadGL())
    {
        cout << "gladLoadGL failed" << endl;
    }

    const GLubyte* oglVersion = glGetString(GL_VERSION);
    std::cout << "This system supports OpenGL Version: " << oglVersion << std::endl;
    const GLubyte* gpuVendor = glGetString(GL_VENDOR);
    std::cout << "Graphics card: " << gpuVendor << std::endl;

    glClearColor(0.15f, 0.15f, 0.15f, 1.0f);

    vector<unsigned> deltas;
    deltas.reserve(10 * MILLISECONDS_TO_CLOSE);

    static unsigned firstTime, prevTime, curTime;
    firstTime = prevTime = curTime = SDL_GetTicks();

    while (true)
    {
        // compute delta time
        curTime = SDL_GetTicks();
        unsigned dt = curTime - prevTime;
        prevTime = curTime;
        deltas.push_back(dt);

        // close the application after some time
        if (curTime - firstTime > MILLISECONDS_TO_CLOSE) break;

        // handle closing events
        SDL_Event event;
        if (SDL_PollEvent(&event))
        {
            if ((event.type == SDL_KEYDOWN && event.key.keysym.sym == SDLK_ESCAPE) || event.type == SDL_QUIT) break;
        }

        glClear(GL_COLOR_BUFFER_BIT);
        SDL_GL_SwapWindow(window);
    }

    // save recorded deltas to a file
    fstream f("out.txt", ios::out | ios::trunc);
    for (unsigned dt : deltas) f << dt << endl;
    f << flush;
    f.close();

    return 0;
}
The program records the time between frames for 10 seconds and saves the result to a text file.
I plotted the data with Python and got this:
The horizontal axis is the frame time in milliseconds; the vertical axis is the time elapsed since the previous frame, also in milliseconds.
As you can see, the time between frames is very irregular and there are periodic spikes (about 1 second between spikes).
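Note that SDL_GetTicks() only has millisecond resolution, so some of the jitter in the plot could simply be quantization noise. A minimal sketch of how the deltas could be recorded with SDL's high-resolution counter instead (this is not what the repo does, just an illustration):
#include <vector>
#include <SDL.h>

int main(int argc, char** argv)
{
    SDL_Init(SDL_INIT_EVERYTHING);

    std::vector<double> deltasUs;
    const Uint64 freq = SDL_GetPerformanceFrequency();
    Uint64 prev = SDL_GetPerformanceCounter();

    for (int frame = 0; frame < 1000; ++frame)
    {
        // ... render and swap here, exactly as in the program above ...

        // Delta in microseconds instead of SDL_GetTicks()' whole milliseconds.
        const Uint64 now = SDL_GetPerformanceCounter();
        deltasUs.push_back(1e6 * double(now - prev) / double(freq));
        prev = now;
    }

    SDL_Quit();
    return 0;
}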
I have uploaded the CMake project to a GitHub repository, so you can test it if you wish.
I have tested with both my integrated and dedicated GPUs (Intel 530HD and NVIDIA GTX 960M).
The SDL version is 2.0.5.
I tested it on Windows 10 and Ubuntu 16.04 LTS.
EDIT:
I have ported the application to GLFW and the same thing happens, so it is very unlikely that this is a bug in SDL. I have updated the git repo accordingly; there are now two CMake projects.
EDIT2:
I have tested it on another computer and it works fine.
I have no clue what's happening. Could it be a hardware problem? Then why doesn't it happen when I run other applications?
A driver problem? Very unlikely, because it happens with both the Intel and NVIDIA GPUs, and on both Ubuntu and Windows.

I've had similar behavior (though with longer & lower plateaus rather than spikes) caused by the CPU overheating and therefore temporarily lowering its clock rate. If that is happening on your PC, you can try upgrading your internal coolers or adding external ones, or just lower the temperature setting on the air conditioner in your room.
Naturally, it was always more of an issue during summer.
Here's where I've posted about it.
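If you want to check this hypothesis, one option on Linux is to log the current CPU clock next to your frame deltas and see whether the spikes coincide with frequency drops. A rough sketch, assuming the cpufreq sysfs interface is available (the exact path can vary by distro and governor):
#include <fstream>
#include <iostream>

// Read the current frequency (in kHz) of CPU core 0 via sysfs.
// Returns -1 if the interface is not available on this machine.
long read_cpu0_khz()
{
    std::ifstream f("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
    long khz = -1;
    f >> khz;
    return khz;
}

int main()
{
    // Log this alongside the frame deltas, e.g. once per frame.
    std::cout << "cpu0 frequency: " << read_cpu0_khz() << " kHz" << std::endl;
    return 0;
}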

Related

Intel RealSense programming with librealsense2

I want to get a 1280x720 depth image and a 1280x720 color image.
I started from this example code:
// License: Apache 2.0. See LICENSE file in root directory.
// Copyright(c) 2017 Intel Corporation. All Rights Reserved.
#include "librealsense2/rs.hpp" // Include RealSense Cross Platform API
#include "example.hpp" // Include short list of convenience functions for rendering
#include "opencv2/opencv.hpp"
#include <iostream>
#include "stb-master\stb_image_write.h"
using namespace std;
using namespace cv;
// Capture Example demonstrates how to
// capture depth and color video streams and render them to the screen
int main(int argc, char * argv[]) try
{
int width = 1280;
int height = 720;
rs2::log_to_console(RS2_LOG_SEVERITY_ERROR);
// Create a simple OpenGL window for rendering:
window app(width, height, "RealSense Capture Example");
// Declare two textures on the GPU, one for color and one for depth
texture depth_image, color_image;
// Declare depth colorizer for pretty visualization of depth data
rs2::colorizer color_map;
color_map.set_option(RS2_OPTION_HISTOGRAM_EQUALIZATION_ENABLED,1.f);
color_map.set_option(RS2_OPTION_COLOR_SCHEME, 2.f);
// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();
while (app) // Application still alive?
{
rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
rs2::frame depth = color_map(data.get_depth_frame()); // Find and colorize the depth data
rs2::frame color = data.get_color_frame(); // Find the color data
// For cameras that don't have RGB sensor, we'll render infrared frames instead of color
if (!color)
color = data.get_infrared_frame();
// Render depth on to the first half of the screen and color on to the second
depth_image.render(depth, { 0, 0, app.width() / 2, app.height() });
color_image.render(color, { app.width() / 2, 0, app.width() / 2, app.height() });
}
return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
return EXIT_FAILURE;
}
catch (const std::exception& e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
I want to do this:
1. Press the 'c' key.
2. Save the color image and the depth image in PNG format.
I can write the code for step 2, but I don't know how to trigger the action when I press 'c'.
I guess I have to use example.hpp for this.
GLFWwindow * win = glfwCreateWindow(tile_w*cols, tile_h*rows, ss.str().c_str(), 0, 0);
glfwSetWindowUserPointer(win, &dev);
glfwSetKeyCallback(win, [](GLFWwindow * win, int key, int scancode, int action, int mods)
{
auto dev = reinterpret_cast<rs::device *>(glfwGetWindowUserPointer(win));
if (action != GLFW_RELEASE) switch (key)
{
case GLFW_KEY_R: color_rectification_enabled = !color_rectification_enabled; break;
case GLFW_KEY_C: align_color_to_depth = !align_color_to_depth; break;
case GLFW_KEY_D: align_depth_to_color = !align_depth_to_color; break;
case GLFW_KEY_E:
if (dev->supports_option(rs::option::r200_emitter_enabled))
{
int value = !dev->get_option(rs::option::r200_emitter_enabled);
std::cout << "Setting emitter to " << value << std::endl;
dev->set_option(rs::option::r200_emitter_enabled, value);
}
break;
case GLFW_KEY_A:
if (dev->supports_option(rs::option::r200_lr_auto_exposure_enabled))
{
int value = !dev->get_option(rs::option::r200_lr_auto_exposure_enabled);
std::cout << "Setting auto exposure to " << value << std::endl;
dev->set_option(rs::option::r200_lr_auto_exposure_enabled, value);
}
break;
}
});
This code is for librealsense 1.x. I would like to convert it to librealsense 2.0, but I do not know what to do.
How do I change this code?
Thanks for reading!
Useful samples to get you on your way with RealSense SDK 2.0 and OpenCV are available in the repo at /wrappers/opencv. The PNG-saving part is sketched after the device list below.
Keep in mind that the devices supported by SDK 2.0 are:
Intel® RealSense™ Camera D400-Series
Intel® RealSense™ Developer Kit SR300
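As for the saving part, here is a rough sketch that uses the stb_image_write header you already include. It assumes the frames are 8-bit RGB (the colorizer output and the default color stream usually are) and that a key handler you wire up yourself sets a save_requested flag, e.g. via glfwSetKeyCallback on a GLFW window you control; whether the example.hpp window class exposes its underlying GLFWwindow* depends on the SDK version, so that part is an assumption.
#include <librealsense2/rs.hpp>
#define STB_IMAGE_WRITE_IMPLEMENTATION   // define in exactly one .cpp file
#include "stb-master/stb_image_write.h"

// Assumed helper: writes an 8-bit RGB rs2::video_frame to a PNG file.
void save_frame_png(const rs2::video_frame& frame, const char* filename)
{
    stbi_write_png(filename,
                   frame.get_width(),
                   frame.get_height(),
                   3,                              // RGB8: 3 bytes per pixel
                   frame.get_data(),
                   frame.get_stride_in_bytes());
}

// Inside the capture loop, guarded by the flag your key handler sets:
// if (save_requested)
// {
//     save_frame_png(color.as<rs2::video_frame>(), "color.png");
//     save_frame_png(depth.as<rs2::video_frame>(), "depth_colorized.png");
//     save_requested = false;
// }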

SFML 2 drawing does not work when connected with OGRE

I am currently working on connecting OGRE and SFML.
SFML should be used for 2D drawing, network stuff and input.
OGRE is for the 3d Graphics.
Currently the whole thing is on Linux.
What works: connecting OGRE and SFML. First I create an SFML render window, then I grab the handle of this window and give it to the OGRE render window while creating it. I can use the SFML events now. I did not test the network stuff, but I am sure that will work too.
What does not work: Drawing in the SFML window.
Case 1: SFML and OGRE are not connected. OGRE does not have the SFML window handle and has its own window. SFML still can't draw in its own window! The main loop executes at most 3 times and then just stops; nothing more happens. A few seconds later (about 20 or so) I get a memory access violation and the program ends.
Case 2: SFML and OGRE are connected. A similar thing happens: the main loop executes exactly 53 times, nothing gets drawn, and then the program stops with the terminal message "aborted" (actually "Abgebrochen", because the system is set to German).
The strange behaviour also happens when I let SFML draw into an sf::RenderTexture instead of the sfml_window.
Here is my code:
#include <SFML/Graphics.hpp>
#include <SFML/Window.hpp>
#include <SFML/System.hpp>
#include <iostream>
#include <OGRE/Ogre.h>
#include <vector>
#include <stdio.h>
int main(int argc, char * argv[])
{
if(argc == 1)
return -1;
// start with "1" and you get 1 window, start with "0" and you get two
bool together = atoi(argv[1]);
// create the SFML window
sf::RenderWindow sfml_window(sf::VideoMode(800, 600), "test");
sf::WindowHandle sfml_system_handle = sfml_window.getSystemHandle();
sfml_window.setVerticalSyncEnabled(true);
std::cout<<sfml_system_handle<<std::endl;
// init ogre
Ogre::Root * ogre_root = new Ogre::Root("", "", "");
std::vector<Ogre::String> plugins;
plugins.push_back("/usr/lib/x86_64-linux-gnu/OGRE-1.8.0/RenderSystem_GL");
for(auto p : plugins)
{
ogre_root->loadPlugin(p);
}
const Ogre::RenderSystemList& render_systems = ogre_root->getAvailableRenderers();
if(render_systems.size() == 0)
{
std::cerr<<"no rendersystem found"<<std::endl;
return -1;
}
Ogre::RenderSystem * render_system = render_systems[0];
ogre_root->setRenderSystem(render_system);
ogre_root->initialise( false, "", "");
// create the ogre window
Ogre::RenderWindow * ogre_window= NULL;
{
Ogre::NameValuePairList parameters;
parameters["FSAA"] = "0";
parameters["vsync"] = "true";
// if started with 1, connect the windows
if(together) parameters["externalWindowHandle"] = std::to_string(sfml_system_handle);
ogre_window = ogre_root->createRenderWindow("ogre window", 800, 600, false, &parameters);
}
// ogre stuff
Ogre::SceneManager * scene = ogre_root->createSceneManager(Ogre::ST_GENERIC, "Scene");
Ogre::SceneNode * root_node = scene->getRootSceneNode();
Ogre::Camera * cam = scene->createCamera("Cam");
Ogre::SceneNode * cam_node = root_node->createChildSceneNode("cam_node");
cam_node->attachObject(cam);
Ogre::Viewport * vp = ogre_window->addViewport(cam);
vp->setAutoUpdated(false);
vp->setBackgroundColour(Ogre::ColourValue(0, 1, 1));
ogre_window->setAutoUpdated(false);
ogre_root->clearEventTimes();
//sfml image loading
sf::Texture ring;
std::cout<<"ring loading: "<<ring.loadFromFile("ring.png")<<std::endl;
sf::Sprite sprite;
sprite.setTexture(ring);
// main loop
int counter = 0;
while(!ogre_window->isClosed() && sfml_window.isOpen())
{
std::cout<<counter<<std::endl;
counter++;
std::cout<<"a"<<std::endl;
// sfml events
sf::Event event;
while(sfml_window.pollEvent(event))
{
if(event.type == sf::Event::Closed)
{
sfml_window.close();
}
}
std::cout<<"b"<<std::endl;
std::cout<<"c"<<std::endl;
ogre_root->renderOneFrame();
std::cout<<"d"<<std::endl;
// here the strange behaviour happens
// if this line (draw) isn't present, everything works
sfml_window.pushGLStates();
sfml_window.draw(sprite);
sfml_window.popGLStates();
vp->update();
std::cout<<"e"<<std::endl;
sfml_window.display();
// only needs to be done for separated windows
// sfml display updates otherwise, both use double buffering
if(!together) ogre_window->update(true);
}
return 0;
}
Help would be really appreciated.
EDIT: I added the pushGLStates(); and popGLStates(); commands, forgot those earlier!
Not an answer really, but too long for a comment:
ogre_window = ogre_root->createRenderWindow("ogre window", 800, 600, false, &parameters);
Are you sure that it's okay to pass the address of an object you destroy on the very next line here with &parameters?
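One more thing worth trying, purely as a guess: make sure SFML's context is the active one and its cached GL state is reset after OGRE has rendered, before the SFML draw calls. setActive() and resetGLStates() are standard SFML 2 calls; whether they cooperate with OGRE 1.8's GL render system is an assumption on my part. A sketch of the reordered loop body from the code above:
// render the 3D scene first
ogre_root->renderOneFrame();
vp->update();

// hand the (shared) context back to SFML and clear any GL state
// OGRE may have left behind, then do the 2D drawing on top
sfml_window.setActive(true);
sfml_window.resetGLStates();
sfml_window.draw(sprite);
sfml_window.display();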

Simple C++ SFML program high CPU usage

I'm currently working on a platformer and trying to implement a timestep, but for framerate limits greater than 60 the CPU usage goes up from 1% to 25% and more.
I made this minimal program to demonstrate the issue. There are two comments (lines 10-13, lines 26-30) in the code that describe the problem and what I have tested.
Note that the FPS stuff is not relevant to the problem (I think).
I tried to keep the code short and simple:
#include <memory>
#include <sstream>
#include <iomanip>
#include <SFML/Graphics.hpp>
int main() {
// Window
std::shared_ptr<sf::RenderWindow> window;
window = std::make_shared<sf::RenderWindow>(sf::VideoMode(640, 480, 32), "Test", sf::Style::Close);
/*
When I use the setFramerateLimit() function below, the CPU usage is only 1% instead of 25%+
(And only if I set the limit to 60 or less. For example 120 increases CPU usage to 25%+ again.)
*/
//window->setFramerateLimit(60);
// FPS text
sf::Font font;
font.loadFromFile("font.ttf");
sf::Text fpsText("", font, 30);
fpsText.setColor(sf::Color(0, 0, 0));
// FPS
float fps;
sf::Clock fpsTimer;
sf::Time fpsElapsedTime;
/*
When I set framerateLimit to 60 (or anything less than 60)
instead of 120, CPU usage goes down to 1%.
When the limit is greater, in this case 120, CPU usage is 25%+
*/
unsigned int framerateLimit = 120;
sf::Time fpsStep = sf::milliseconds(1000 / framerateLimit);
sf::Time fpsSleep;
fpsTimer.restart();
while (window->isOpen()) {
// Update timer
fpsElapsedTime = fpsTimer.restart();
fps = 1000.0f / fpsElapsedTime.asMilliseconds();
// Update FPS text
std::stringstream ss;
ss << "FPS: " << std::fixed << std::setprecision(0) << fps;
fpsText.setString(ss.str());
// Get events
sf::Event evt;
while (window->pollEvent(evt)) {
switch (evt.type) {
case sf::Event::Closed:
window->close();
break;
default:
break;
}
}
// Draw
window->clear(sf::Color(255, 255, 255));
window->draw(fpsText);
window->display();
// Sleep
fpsSleep = fpsStep - fpsTimer.getElapsedTime();
if (fpsSleep.asMilliseconds() > 0) {
sf::sleep(fpsSleep);
}
}
return 0;
}
I don't want to use SFML's setFramerateLimit(), but my own implementation with the sleep, because I will use the FPS data to update my physics and stuff.
Is there a logic error in my code? I fail to see it, given that it works with a framerate limit of, for example, 60 (or less). Is it because I have a 60 Hz monitor?
PS: Using SFML's window->setVerticalSyncEnabled() doesn't change the results.
I answered another similar question with this answer.
The thing is, it won't exactly help you with CPU usage, but I tried your code and it works fine under 1% CPU usage at 120 FPS (and much more). When you make a game or an interactive application with a game loop, you don't want to lose performance by sleeping; you want to use as much CPU time as the computer can give you. Instead of sleeping, you can process other data (loading, pathfinding algorithms, etc.), or just not limit rendering at all.
I provide some useful links and code here:
Similar question: Movement Without Framerate Limit C++ SFML.
What you really need is fixed time step. Take a look at the SFML Game
development book source code. Here's the interesting snippet from
Application.cpp:
const sf::Time Game::TimePerFrame = sf::seconds(1.f/60.f);
// ...
sf::Clock clock;
sf::Time timeSinceLastUpdate = sf::Time::Zero;
while (mWindow.isOpen())
{
sf::Time elapsedTime = clock.restart();
timeSinceLastUpdate += elapsedTime;
while (timeSinceLastUpdate > TimePerFrame)
{
timeSinceLastUpdate -= TimePerFrame;
processEvents();
update(TimePerFrame);
}
updateStatistics(elapsedTime);
render();
}
If this is not really what you want, see "Fix your timestep!"
which Laurent Gomila himself linked in the SFML forum.
I suggest using setFramerateLimit(), because it's natively implemented in SFML and will work a lot better.
To get the elapsed time you must do:
If I had to implement something similar, I would do:
/* in the main loop */
fpsElapsedTime = fpsTimer.getElapsedTime();
if (fpsElapsedTime.asMilliseconds() >= (1000 / framerateLimit))
{
fpsTimer.restart();
// All your content
}
Another thing: use sf::Color::White or sf::Color::Black instead of sf::Color(255, 255, 255).
Hope this helps :)
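Putting the two ideas together, a minimal sleep-based limiter that still hands you the real frame time for physics could look like this; it is a sketch, not tested against your exact setup:
#include <SFML/Graphics.hpp>
#include <SFML/System.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(640, 480), "Frame limiter sketch");

    const unsigned int framerateLimit = 120;
    const sf::Time frameStep = sf::microseconds(1000000 / framerateLimit);

    sf::Clock frameClock;
    while (window.isOpen())
    {
        // elapsed = how long the previous frame actually took (including sleep);
        // this is the value to feed into the physics update.
        const sf::Time elapsed = frameClock.restart();
        (void)elapsed; // e.g. update(elapsed);

        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear(sf::Color::White);
        window.display();

        // Sleep away whatever is left of this frame's time budget.
        const sf::Time spent = frameClock.getElapsedTime();
        if (spent < frameStep)
            sf::sleep(frameStep - spent);
    }
    return 0;
}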

How to keep the CPU usage down while running an SDL program?

I've made a very basic window with SDL and want to keep it running until I press the X on the window.
#include "SDL.h"
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
int main(int argc, char **argv)
{
SDL_Init( SDL_INIT_VIDEO );
SDL_Surface* screen = SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, 0,
SDL_HWSURFACE | SDL_DOUBLEBUF );
SDL_WM_SetCaption( "SDL Test", 0 );
SDL_Event event;
bool quit = false;
while (quit == false)
{
if (SDL_PollEvent(&event)) {
if (event.type == SDL_QUIT) {
quit = true;
}
}
SDL_Delay(80);
}
SDL_Quit();
return 0;
}
I tried adding SDL_Delay() at the end of the while-clause and it worked quite well.
However, 80 ms seemed to be the highest value I could use to keep the program running smoothly and even then the CPU usage is about 15-20%.
Is this the best way to do this and do I have to just live with the fact that it eats this much CPU already on this point?
I know this is an older post, but I myself just came across this issue with SDL when starting a little demo project. As user 'thebuzzsaw' noted, the best solution is to use SDL_WaitEvent to reduce the CPU usage of your event loop.
Here's how it would look in your example for anyone looking for a quick solution to it in the future. Hope it helps!
#include "SDL.h"
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
int main(int argc, char **argv)
{
SDL_Init( SDL_INIT_VIDEO );
SDL_Surface* screen = SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, 0,
SDL_HWSURFACE | SDL_DOUBLEBUF );
SDL_WM_SetCaption( "SDL Test", 0 );
SDL_Event event;
bool quit = false;
while (quit == false)
{
if (SDL_WaitEvent(&event) != 0) {
switch (event.type) {
case SDL_QUIT:
quit = true;
break;
}
}
}
SDL_Quit();
return 0;
}
I would definitely experiment with fully blocking functions (such as SDL_WaitEvent). I have an OpenGL application in Qt, and I noticed the CPU usage hovers between 0% and 1%. It spikes to maybe 4% during "usage" (moving the camera and/or causing animations).
I am working on my own windowing toolkit. I have noticed I can achieve similar CPU usage when I use blocking event loops. This will complicate any timers you may depend on, but it is not terribly difficult to implement timers with this new approach.
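For example, with SDL 1.2 one common pattern is to let SDL_AddTimer post a user event so the blocking SDL_WaitEvent() loop still wakes up at a regular interval. A minimal sketch; the 16 ms interval and the push_tick name are my own choices, not from the code above:
#include "SDL.h"

// Timer callback: posts a user event so a blocking SDL_WaitEvent()
// loop wakes up at a regular interval.
static Uint32 push_tick(Uint32 interval, void* param)
{
    SDL_Event ev;
    ev.type = SDL_USEREVENT;
    ev.user.code = 0;
    ev.user.data1 = NULL;
    ev.user.data2 = NULL;
    SDL_PushEvent(&ev);
    return interval;                    // keep the timer running
}

int main(int argc, char** argv)
{
    SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER);
    SDL_SetVideoMode(640, 480, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);

    SDL_TimerID timer = SDL_AddTimer(16, push_tick, NULL); // ~60 ticks per second

    bool quit = false;
    SDL_Event event;
    while (!quit && SDL_WaitEvent(&event))  // sleeps until an event arrives
    {
        if (event.type == SDL_QUIT)
            quit = true;
        else if (event.type == SDL_USEREVENT)
        {
            // update / redraw here
        }
    }

    SDL_RemoveTimer(timer);
    SDL_Quit();
    return 0;
}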
I just figured out how to reduce CPU usage in my game from 50% down to < 10%.
Your program is much simpler, and simply using SDL_Delay() should be enough.
What I did was:
Use SDL_DisplayFormat() when loading images, so that blitting is faster. This brought the CPU usage down to about 30%.
I then found out that blitting the game's background (a big one-piece .png file) was eating most of my CPU. I searched the Internet for a solution, but all I found was the same answer: just use SDL_Delay(). Finally, I found out that the problem was embarrassingly simple: SDL_DisplayFormat() was converting my 24-bit images to 32-bit. So I set my display BPP to 24, which brought CPU usage to ~20%. Bringing it down to 16-bit solved the problem for me, and CPU usage is under 10% now.
Of course this means loss of color detail, but as my game is a simplistic 2D game with not too detailed graphics, this was OK.
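For reference, the convert-once-at-load pattern described above looks roughly like this; a sketch that assumes SDL_image is used for loading (swap in whatever loader you actually use), and that SDL_SetVideoMode() has already been called so a display format exists:
#include "SDL.h"
#include "SDL_image.h"   // assumption: SDL_image handles the PNG loading

// Load an image and convert it once to the display's pixel format,
// so every subsequent blit avoids a per-frame format conversion.
SDL_Surface* load_optimized(const char* path)
{
    SDL_Surface* loaded = IMG_Load(path);           // e.g. a background .png
    if (!loaded)
        return NULL;
    SDL_Surface* optimized = SDL_DisplayFormat(loaded);
    SDL_FreeSurface(loaded);
    return optimized;                               // may be NULL on failure
}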
In order to really understand this, you need to understand threading. In a threaded application, the program runs until it is waiting for something, then it tells the OS that something else can run. In essence, you are doing this with the SDL_Delay command. If there was no delay at all, I suspect your program would be running at near 100% capacity.
The amount of time that you should put in the delay statement only matters if the other commands are taking a significant amount of time. In general, I would put the delay to be a similar amount of time that it takes to test the poll command, but not more than, say, 10 ms. What will happen is that the OS will wait at least that length of time, allowing other applications to run in the background.
As to what you can do to improve this, well, it looks like there isn't a whole lot that you can do. However, take note that if there was another process running taking a significant amount of CPU power, your program's share would decrease.
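As a concrete illustration of "drain the queue, then yield for a bounded time", the inner loop could look like this; 10 ms is picked per the paragraph above, and it is a sketch rather than a drop-in for your code:
bool quit = false;
SDL_Event event;
while (!quit)
{
    // Drain every pending event before sleeping, so input stays responsive.
    while (SDL_PollEvent(&event))
    {
        if (event.type == SDL_QUIT)
            quit = true;
    }

    // ... update and draw here ...

    SDL_Delay(10);   // hand the rest of the time slice back to the OS
}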

SIGSEGV error while using c++ on linux with openGL and SDL

A few other guys and I are taking a crack at building a simple side-scroller type game. However, I cannot get hold of them to help answer my question, so I put it to you: the following code leaves me with a SIGSEGV error in the place noted below. If anyone can tell me why, I would really appreciate it. If you need any more info, I will be watching this closely.
Main.cpp
Vector2 dudeDim(60,60);
Vector2 dudePos(300, 300);
Entity *test = new Entity("img/images.jpg", dudeDim, dudePos, false);
leads to:
Entity.cpp
Entity::Entity(std::string filename, Vector2 size, Vector2 position, bool passable):
mTexture(filename)
{
mTexture.load(false);
mDimension2D = size;
mPosition2D = position;
mPassable = passable;
}
leads to:
Textures.cpp
void Texture::load(bool generateMipmaps)
{
FREE_IMAGE_FORMAT imgFormat = FIF_UNKNOWN;
FIBITMAP *dib(0);
imgFormat = FreeImage_GetFileType(mFilename.c_str(), 0);
//std::cout << "File format: " << imgFormat << std::endl;
if (FreeImage_FIFSupportsReading(imgFormat)) // Check if the plugin has reading capabilities and load the file
dib = FreeImage_Load(imgFormat, mFilename.c_str());
if (!dib)
std::cout << "Error loading texture files!" << std::endl;
BYTE* bDataPointer = FreeImage_GetBits(dib); // Retrieve the image data
mWidth = FreeImage_GetWidth(dib); // Get the image width and height
mHeight = FreeImage_GetHeight(dib);
mBitsPerPixel = FreeImage_GetBPP(dib);
if (!bDataPointer || !mWidth || !mHeight)
std::cout << "Error loading texture files!" << std::endl;
// Generate and bind ID for this texture
vvvvvvvvvv!!!ERROR HERE!!!vvvvvvvvvvv
glGenTextures(1, &mId);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
glBindTexture(GL_TEXTURE_2D, mId);
int format = mBitsPerPixel == 24 ? GL_BGR_EXT : mBitsPerPixel == 8 ? GL_LUMINANCE : 0;
int iInternalFormat = mBitsPerPixel == 24 ? GL_RGB : GL_DEPTH_COMPONENT;
if(generateMipmaps)
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, mWidth, mHeight, 0, format, GL_UNSIGNED_BYTE, bDataPointer);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR); // Linear Filtering
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR); // Linear Filtering
//std::cout << "texture generated " << mId << std::endl;
FreeImage_Unload(dib);
}
After reading Peter's suggestion I have changed my main.cpp file to:
#include <iostream>
#include <vector>
#include "Game.h"
using namespace std;
int main(int argc, char** argv)
{
Game theGame;
/* Initialize game control objects and resources */
if (theGame.onInit() != false)
{
return theGame.onExecute();
}
else
{
return -1;
}
}
and it would seem the SIGSEGV error is gone and I'm now left with something not initializing. So thank you, Peter, you were correct; now I'm off to solve this issue.
OK, so this is obviously only a small amount of the code, but in order to save time and a bit of sanity, all the code is available at:
GitHub Repo
So after looking at your code, I can say that the problem is probably that you have not initialized your OpenGL context before executing that code.
You need to call Game::onInit(), which also calls RenderEngine::initGraphics(), before making any calls to OpenGL, which you currently don't do. Currently the call chain is: main() -> Game ctor (calls the rendering engine ctor, but that ctor doesn't init SDL and OpenGL) -> Entity ctor -> load texture.
For details, look at the OpenGL Wiki FAQ.
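In other words, it is purely an ordering problem: the window and its OpenGL context must exist before any Texture::load() call. A minimal sketch of the required order, assuming SDL 1.2-style initialization (adjust if the project links against SDL2):
#include <SDL/SDL.h>
#include <GL/gl.h>

int main(int argc, char** argv)
{
    // 1. Create a window *with an OpenGL context* first...
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);

    // 2. ...only then is it safe to create GL objects such as textures.
    GLuint id = 0;
    glGenTextures(1, &id);   // crashes or misbehaves if no context exists yet

    // ... load image data, glTexImage2D, game loop, etc. ...

    SDL_Quit();
    return 0;
}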