DirectX background image - C++

I tried to follow a tutorial from my 3D DirectX book, with some modifications. The problem is that I want to draw an image but can't get it working. The book's example works with a camera since it's based on a small game, but I just want to draw the image as it is, without resizing or any fancy camera transformations, with alpha blending.
This is my code which should contain the relevant parts.
.h
class Screen
{
private:
IDirect3DTexture9* m_BGImage;
ID3DXSprite* m_Sprite;
IDirect3DDevice9* m_Device;
public:
Screen();
~Screen();
void setDevice(IDirect3DDevice9* device);
void setBGImage(std::string path);
void Draw();
void onLostDevice();
void onResetDevice();
void Clean();
};
.cpp
Screen::Screen() : m_BGImage(0), m_Sprite(0), m_Device(0) {} // zero-init so Clean() is safe even if setDevice() is never called
Screen::~Screen()
{
Clean();
}
void Screen::setDevice(IDirect3DDevice9* device)
{
m_Device = device;
D3DXCreateSprite(m_Device, &m_Sprite);
}
void Screen::setBGImage(std::string path)
{
D3DXCreateTextureFromFileA(m_Device, path.c_str(), &m_BGImage);
}
void Screen::Draw()
{
m_Sprite->Begin(D3DXSPRITE_DONOTMODIFY_RENDERSTATE); // This is (I believe) what causes the problem. If I use D3DXSPRITE_OBJECTSPACE | D3DXSPRITE_DONOTMODIFY_RENDERSTATE as described in the example from my book, I only get a black screen.
m_Sprite->Draw(m_BGImage, 0, &D3DXVECTOR3(256.0f, 256.0f, 0.0f), 0, D3DCOLOR_XRGB(255, 255, 255));
m_Sprite->Flush();
m_Sprite->End();
}
void Screen::Clean()
{
ReleaseCOM(m_Sprite);
ReleaseCOM(m_BGImage);
}
void Screen::onLostDevice()
{
m_Sprite->OnLostDevice();
}
void Screen::onResetDevice()
{
m_Sprite->OnResetDevice();
m_Device->SetRenderState(D3DRS_ALPHAREF, 10);
m_Device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATER);
m_Device->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
m_Device->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);
m_Device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
m_Device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
m_Device->SetTextureStageState(0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_COUNT2);
}
Edit: Almost forgot:
#define ReleaseCOM(x) { if(x){ x->Release(); x = 0; } }

You should use D3DXSPRITE_ALPHABLEND instead of D3DXSPRITE_DONOTMODIFY_RENDERSTATE when calling Begin. Specifying D3DXSPRITE_ALPHABLEND doesn't mean you must use alpha blending: if you never call Screen::onResetDevice(), the alpha blending won't take effect, so here D3DXSPRITE_ALPHABLEND is just a parameter that makes Begin work. If you don't want to specify any flags when calling Begin, you can pass 0:
m_Sprite->Begin(0);
m_Sprite->Draw(m_BGImage, 0, &D3DXVECTOR3(256.0f, 256.0f, 0.0f), 0, D3DCOLOR_XRGB(255, 255, 255));
m_Sprite->Flush();
m_Sprite->End();
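If you do want alpha blending and would rather not depend on the render states set in Screen::onResetDevice(), you can let ID3DXSprite set them up itself. A minimal sketch of the same call:
m_Sprite->Begin(D3DXSPRITE_ALPHABLEND); // the sprite sets up SRCALPHA/INVSRCALPHA blending itself
m_Sprite->Draw(m_BGImage, 0, &D3DXVECTOR3(256.0f, 256.0f, 0.0f), 0, D3DCOLOR_XRGB(255, 255, 255));
m_Sprite->End(); // End() flushes the batch, so an explicit Flush() is unnecessary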
reference: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205466(v=vs.85).aspx

Rendering single thing without having to clear entire screen in sdl

Are there layers in SDL or something like that?
By layers I mean like in Photoshop, where we have multiple layers and can draw on one without affecting the others.
For example, if I had a main_layer, a background_layer and an enemy_layer, then player rendering (like moving the character by user input), static background rendering and enemy rendering could each take place on its own layer,
instead of having to clear the entire screen and place everything back again over and over. In other words, changing a single thing without affecting the others. Can someone point me in the right direction?
You can implement your own layer system using render targets.
Create a texture render target for each layer.
Draw to a layer's render target to update it.
Every frame, draw each layer to the screen. You still need to clear the final frame beforehand.
It's worth noting that there is a point of diminishing return here. If a layer only contains a few sprites, it's probably cheaper to draw each sprite directly to the screen every frame even if they don't move.
Example:
// Given a renderer
SDL_Renderer *renderer = ...;
// *** Creating the layer ***
SDL_Texture *my_layer = SDL_CreateTexture(
renderer,
SDL_PIXELFORMAT_RGBA8888,
SDL_TEXTUREACCESS_TARGET,
screen_width, screen_height);
// To make transparency work (for non-base layers):
SDL_SetTextureBlendMode(my_layer, SDL_BLENDMODE_BLEND);
// *** Drawing TO the layer ***
SDL_SetRenderTarget(renderer, my_layer);
// For non-base layers, you want to make sure you clear to *transparent* pixels.
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);
SDL_RenderClear(renderer);
// ... Draw to the layer ...
// *** Drawing the layer to the screen / window ***
SDL_SetRenderTarget(renderer, NULL);
SDL_RenderCopy(renderer, my_layer, NULL, NULL);
You can take this a bit further by creating layers that are larger than screen_width x screen_height and use the srcrect parameter of SDL_RenderCopy() to scroll the layer. With a few background layers, that can be used to get efficient and neat-looking old-school parallax effects.
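For example, scrolling a layer that is twice the screen width is just a matter of moving the source rectangle. A sketch, where my_layer is assumed to have been created with width 2 * screen_width:
// Show a screen-sized window into the oversized layer, offset by scroll_x.
SDL_Rect src = { scroll_x, 0, screen_width, screen_height };
SDL_SetRenderTarget(renderer, NULL);
SDL_RenderCopy(renderer, my_layer, &src, NULL);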
You will also probably want to encapsulate the notion of a layer into some Layer class in C++. Here's a rough starting point:
#include <algorithm> // std::min, std::clamp (C++17)

class Layer {
public:
Layer(SDL_Renderer *renderer, int w, int h)
: texture_(SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, w, h))
, display_{0, 0, 0, 0}
{
SDL_GetRendererOutputSize(renderer, &display_.w, &display_.h);
// The visible rect can't be larger than the layer texture itself.
display_.w = std::min(w, display_.w);
display_.h = std::min(h, display_.h);
max_scroll_x = w - display_.w;
max_scroll_y = h - display_.h;
SDL_SetTextureBlendMode(texture_, SDL_BLENDMODE_BLEND);
}
Layer(Layer&& rhs)
: texture_(rhs.texture_)
, display_(rhs.display_)
, max_scroll_x(rhs.max_scroll_x)
, max_scroll_y(rhs.max_scroll_y) {
rhs.texture_ = nullptr;
}
Layer& operator=(Layer&& rhs) {
if(texture_) {SDL_DestroyTexture(texture_);}
texture_ = rhs.texture_;
display_ = rhs.display_;
max_scroll_x = rhs.max_scroll_x;
max_scroll_y = rhs.max_scroll_y;
rhs.texture_ = nullptr;
return *this;
}
Layer(const Layer& rhs) = delete;
Layer& operator=(const Layer& rhs) = delete;
~Layer() {
if(texture_) {SDL_DestroyTexture(texture_);}
}
// Subsequent draw calls will target this layer
void makeCurrent(SDL_Renderer* renderer) {
SDL_SetRenderTarget(renderer, texture_);
}
// Draws the layer to the currently active render target
void commit(SDL_Renderer* renderer, const SDL_Rect * dstrect=nullptr) {
SDL_RenderCopy(renderer, texture_, &display_, dstrect);
}
// Changes the offset of the layer
void scrollTo(int x, int y) {
display_.x = std::clamp(x, 0, max_scroll_x);
display_.y = std::clamp(y, 0, max_scroll_y);
}
private:
SDL_Texture* texture_;
SDL_Rect display_;
int max_scroll_x;
int max_scroll_y;
};
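Per-frame usage might then look like this (a sketch; background_layer and enemy_layer are hypothetical Layer instances, with the background created wider than the screen):
// Redraw only the layer that actually changed this frame.
enemy_layer.makeCurrent(renderer);
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);
SDL_RenderClear(renderer); // clear to transparent, as noted above
// ... draw the enemies ...

// Compose the final frame back-to-front.
SDL_SetRenderTarget(renderer, NULL);
SDL_RenderClear(renderer);
background_layer.scrollTo(camera_x / 2, 0); // half-speed scroll for a parallax effect
background_layer.commit(renderer);
enemy_layer.commit(renderer);
SDL_RenderPresent(renderer);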

Custom font class crashes game

I'm using C++ with the SDL2 library to create a game. I'm using the SDL_ttf extension to be able to use TTF fonts, and I'm trying to create my own class that would be more effective for multiple texts on the screen. The code I currently have starts out fine, then crashes after about 15 seconds of running. I added more text and now it crashes after about 5 to 7 seconds. I'm looking for advice on how to solve this problem. My full Font class is as follows:
Font.h
#pragma once
#include "Graphics.h"
#include <string>
class Font
{
public:
Font(std::string path, SDL_Renderer* renderer);
~Font();
void FreeText();
void LoadText(int size, RGB_COLOR color, std::string text);
void Draw(int x, int y, Graphics& gfx, int size, RGB_COLOR color, std::string text);
private:
int width,height;
TTF_Font* font;
SDL_Texture* mTexture;
SDL_Renderer* renderer;
std::string path;
};
Font.cpp
#include "Font.h"
Font::Font(std::string path, SDL_Renderer* renderer)
:
font(NULL),
mTexture(NULL),
renderer(renderer),
path(path)
{
printf("Font con..\n");
}
Font::~Font()
{
}
void Font::LoadText(int size, RGB_COLOR color, std::string text)
{
font = TTF_OpenFont(path.c_str(), size);
SDL_Color c = {color.RED, color.GREEN, color.BLUE};
SDL_Surface* loadedSurface = TTF_RenderText_Solid(font, text.c_str(), c);
mTexture = SDL_CreateTextureFromSurface(renderer, loadedSurface);
width = loadedSurface->w;
height = loadedSurface->h;
SDL_FreeSurface(loadedSurface);
}
void Font::FreeText()
{
SDL_DestroyTexture(mTexture);
mTexture = NULL;
}
void Font::Draw(int x, int y, Graphics& gfx, int size, RGB_COLOR color, std::string text)
{
FreeText();
LoadText(size, color, text);
SDL_Rect rect = {x, y, width * gfx.GetGameDims().SCALE, height * gfx.GetGameDims().SCALE};
gfx.DrawTexture(mTexture, NULL, &rect);
}
My Graphics class just handles the actual drawing as well as the dimensions of the game (screen size, tile size, color struct, game states, etc.), so when I call gfx.DrawTexture() it calls the SDL_RenderCopy function.
Within my Game class I have a pointer to my Font class (it's created in my Game constructor). Then font->Draw() is called every frame, which destroys the original SDL_Texture, loads the new text, then renders it on the screen.
My ultimate goal is to have my Font class set up so that I can choose the color and size from my Draw function. Not sure what to check from this point on...
Any suggestions? Ideas?
This is what I get (which is what I want) but then it crashes.
I've managed to get it working. After searching a little more on SDL_ttf, I realized that in my FreeText() function I was clearing out the SDL_Texture but doing nothing with the TTF_Font, so every call to LoadText() leaked a freshly opened font.
Adding these lines in that function did the trick:
TTF_CloseFont(font);
font = NULL;
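For reference, the whole function then looks something like this (just the FreeText() from above with the fix folded in):
void Font::FreeText()
{
// Release the texture AND close the font opened by LoadText();
// without TTF_CloseFont, every frame leaks a font handle until
// SDL_ttf runs out of resources and the game crashes.
SDL_DestroyTexture(mTexture);
mTexture = NULL;
TTF_CloseFont(font);
font = NULL;
}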

Qt video frames from camera corrupted

EDIT: The first answer solved my problem. Apart from that I had to set the ASI_BANDWIDTH_OVERLOAD value to 0.
I am programming a Linux application in C++/Qt 5.7 to track stars with my telescope. I use a camera (ZWO ASI 120MM with the accompanying SDK v0.3) and grab its frames in a while loop in a separate thread. These are then emitted to a QOpenGLWidget to be displayed. I have the following problem: when the mouse is inside the QOpenGLWidget area, the displayed frames get corrupted, especially when the mouse is moved. The problem is worst when I use an exposure time of 50 ms and disappears for lower exposure times. When I feed the pipeline with alternating images from disk, the problem disappears. I assume that this is some sort of thread-synchronization problem between the camera thread and the main thread, but I couldn't solve it. The same problem appears in the openastro software. Here are the relevant parts of the code:
MainWindow:
MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent){
mutex = new QMutex;
camThread = new QThread(this);
camera = new Camera(nullptr, mutex);
display = new GLViewer(this, mutex);
setCentralWidget(display);
cameraHandle = camera->getHandle();
connect(camThread, SIGNAL(started()), camera, SLOT(connect()));
connect(camera, SIGNAL(exposureCompleted(const QImage)), display, SLOT(showImage(const QImage)), Qt::BlockingQueuedConnection );
camera->moveToThread(camThread);
camThread->start();
}
The routine that grabs the frames:
void Camera::captureFrame(){
while( cameraIsReady && capturing ){
mutex->lock();
error = ASIGetVideoData(camID, buffer, bufferSize, int(exposure*2*1e-3)+500);
if(error == ASI_SUCCESS){
frame = QImage(buffer,width,height,QImage::Format_Indexed8).convertToFormat(QImage::Format_RGB32); //Indexed8 is for 8bit
mutex->unlock();
emit exposureCompleted(frame);
}
else {
cameraStream << "timeout" << endl;
mutex->unlock();
}
}
}
The slot that receives the image:
bool GLViewer::showImage(const QImage image)
{
mutex->lock();
mOrigImage = image;
mRenderQtImg = mOrigImage;
recalculatePosition();
updateScene();
mutex->unlock();
return true;
}
And the GL function that sets the image:
void GLViewer::renderImage()
{
makeCurrent();
glClear(GL_COLOR_BUFFER_BIT);
if (!mRenderQtImg.isNull())
{
glLoadIdentity();
glPushMatrix();
{
if (mResizedImg.width() <= 0)
{
if (mRenderWidth == mRenderQtImg.width() && mRenderHeight == mRenderQtImg.height())
mResizedImg = mRenderQtImg;
else
mResizedImg = mRenderQtImg.scaled(QSize(mRenderWidth, mRenderHeight),
Qt::IgnoreAspectRatio,
Qt::SmoothTransformation);
}
glRasterPos2i(mRenderPosX, mRenderPosY);
glPixelZoom(1, -1);
glDrawPixels(mResizedImg.width(), mResizedImg.height(), GL_RGBA, GL_UNSIGNED_BYTE, mResizedImg.bits());
}
glPopMatrix();
glFlush();
}
}
I stole this code from here: https://github.com/Myzhar/QtOpenCVViewerGl
And lastly, here is how my problem looks:
This looks awful.
The image producer should produce new images and emit them through a signal. Since QImage is implicitly shared, it will automatically recycle frames to avoid new allocations. Only when the producer thread out-runs the display thread will image copies be made.
Instead of using an explicit loop in the Camera object, you can run the capture using a zero-duration timer and have the event loop invoke it. That way the camera object can process events, e.g. timers, cross-thread slot invocations, etc.
There's no need for explicit mutexes, nor for a blocking connection. Qt's event loop provides cross-thread synchronization. Finally, the QtOpenCVViewerGl project performs image scaling on the CPU and is really an example of how not to do it. You can get image scaling for free by drawing the image on a quad, even though that's also an outdated technique from the fixed pipeline days - but it works just fine.
The ASICamera class would look roughly as follows:
// https://github.com/KubaO/stackoverflown/tree/master/questions/asi-astro-cam-39968889
#include <QtOpenGL>
#include <QOpenGLFunctions_2_0>
#include "ASICamera2.h"
class ASICamera : public QObject {
Q_OBJECT
ASI_ERROR_CODE m_error;
ASI_CAMERA_INFO m_info;
QImage m_frame{640, 480, QImage::Format_RGB888};
QTimer m_timer{this};
int m_exposure_ms = 0;
inline int id() const { return m_info.CameraID; }
void capture() {
m_error = ASIGetVideoData(id(), m_frame.bits(), m_frame.byteCount(),
m_exposure_ms*2 + 500);
if (m_error == ASI_SUCCESS)
emit newFrame(m_frame);
else
qDebug() << "capture error" << m_error;
}
public:
explicit ASICamera(QObject * parent = nullptr) : QObject{parent} {
connect(&m_timer, &QTimer::timeout, this, &ASICamera::capture);
}
ASI_ERROR_CODE error() const { return m_error; }
bool open(int index) {
m_error = ASIGetCameraProperty(&m_info, index);
if (m_error != ASI_SUCCESS)
return false;
m_error = ASIOpenCamera(id());
if (m_error != ASI_SUCCESS)
return false;
m_error = ASIInitCamera(id());
if (m_error != ASI_SUCCESS)
return false;
m_error = ASISetROIFormat(id(), m_frame.width(), m_frame.height(), 1, ASI_IMG_RGB24);
if (m_error != ASI_SUCCESS)
return false;
return true;
}
bool close() {
m_error = ASICloseCamera(id());
return m_error == ASI_SUCCESS;
}
Q_SIGNAL void newFrame(const QImage &);
QImage frame() const { return m_frame; }
Q_SLOT bool start() {
m_error = ASIStartVideoCapture(id());
if (m_error == ASI_SUCCESS)
m_timer.start(0);
return m_error == ASI_SUCCESS;
}
Q_SLOT bool stop() {
m_timer.stop();
m_error = ASIStopVideoCapture(id());
return m_error == ASI_SUCCESS;
}
~ASICamera() {
stop();
close();
}
};
Since I'm using a dummy ASI API implementation, the above is sufficient. Code for a real ASI camera would need to set appropriate controls, such as exposure.
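For example, exposure could be set through the SDK's control interface along these lines (a sketch only; check your ASICamera2.h for the exact control constants your SDK version exposes):
// Hypothetical helper for the ASICamera class above: the ASI SDK expresses
// exposure in microseconds, and ASI_FALSE turns auto-exposure off.
Q_SLOT bool setExposure(int exposure_ms) {
m_exposure_ms = exposure_ms;
m_error = ASISetControlValue(id(), ASI_EXPOSURE, exposure_ms * 1000, ASI_FALSE);
return m_error == ASI_SUCCESS;
}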
The OpenGL viewer is also fairly simple:
class GLViewer : public QOpenGLWidget, protected QOpenGLFunctions_2_0 {
Q_OBJECT
QImage m_image;
void ck() {
for(GLenum err; (err = glGetError()) != GL_NO_ERROR;) qDebug() << "gl error" << err;
}
void initializeGL() override {
initializeOpenGLFunctions();
glClearColor(0.2f, 0.2f, 0.25f, 1.f);
}
void resizeGL(int width, int height) override {
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, height, 0, 0, 1);
glMatrixMode(GL_MODELVIEW);
update();
}
// From http://stackoverflow.com/a/8774580/1329652
void paintGL() override {
auto scaled = m_image.size().scaled(this->size(), Qt::KeepAspectRatio);
GLuint texID;
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glGenTextures(1, &texID);
glEnable(GL_TEXTURE_RECTANGLE);
glBindTexture(GL_TEXTURE_RECTANGLE, texID);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGB, m_image.width(), m_image.height(), 0,
GL_RGB, GL_UNSIGNED_BYTE, m_image.constBits());
glBegin(GL_QUADS);
glTexCoord2f(0, 0);
glVertex2f(0, 0);
glTexCoord2f(m_image.width(), 0);
glVertex2f(scaled.width(), 0);
glTexCoord2f(m_image.width(), m_image.height());
glVertex2f(scaled.width(), scaled.height());
glTexCoord2f(0, m_image.height());
glVertex2f(0, scaled.height());
glEnd();
glDisable(GL_TEXTURE_RECTANGLE);
glDeleteTextures(1, &texID);
ck();
}
public:
GLViewer(QWidget * parent = nullptr) : QOpenGLWidget{parent} {}
void setImage(const QImage & image) {
Q_ASSERT(image.format() == QImage::Format_RGB888);
m_image = image;
update();
}
};
Finally, we hook the camera and the viewer together. Since the camera initialization may take some time, we perform it in the camera's thread.
The UI should emit signals that control the camera, e.g. to open it, start/stop acquisition, etc., and have slots that provide feedback from the camera (e.g. state changes). A free-standing function would take the two objects and hook them together, using functors as appropriate to adapt the UI to a particular camera. If adapter code would be extensive, you'd use a helper QObject for that, but usually a function should suffice (as it does below).
class Thread : public QThread { public: ~Thread() { quit(); wait(); } };
// See http://stackoverflow.com/q/21646467/1329652
template <typename F>
static void postToThread(F && fun, QObject * obj = qApp) {
QObject src;
QObject::connect(&src, &QObject::destroyed, obj, std::forward<F>(fun),
Qt::QueuedConnection);
}
int main(int argc, char ** argv) {
QApplication app{argc, argv};
GLViewer viewer;
viewer.setMinimumSize(200, 200);
ASICamera camera;
Thread thread;
QObject::connect(&camera, &ASICamera::newFrame, &viewer, &GLViewer::setImage);
QObject::connect(&thread, &QThread::destroyed, [&]{ camera.moveToThread(app.thread()); });
camera.moveToThread(&thread);
thread.start();
postToThread([&]{
camera.open(0);
camera.start();
}, &camera);
viewer.show();
return app.exec();
}
#include "main.moc"
The GitHub project includes a very basic ASI camera API test harness and is complete: you can run it and see the test video rendered in real time.

OSG render scene into image

I am trying to render an OSG scene into an image in my Qt program, following the SnapImageDrawCallback example (https://www.mail-archive.com/osg-users@lists.openscenegraph.org/msg45360.html).
class SnapImageDrawCallback : public osg::CameraNode::DrawCallback {
public:
SnapImageDrawCallback()
{
_snapImageOnNextFrame = false;
}
void setFileName(const std::string& filename) { _filename = filename; }
const std::string& getFileName() const { return _filename; }
void setSnapImageOnNextFrame(bool flag) { _snapImageOnNextFrame = flag;}
bool getSnapImageOnNextFrame() const { return _snapImageOnNextFrame; }
virtual void operator () (const osg::CameraNode& camera) const
{
if (!_snapImageOnNextFrame) return;
int x,y,width,height;
x = camera.getViewport()->x();
y = camera.getViewport()->y();
width = camera.getViewport()->width();
height = camera.getViewport()->height();
osg::ref_ptr<osg::Image> image = new osg::Image;
image->readPixels(x,y,width,height,GL_RGB,GL_UNSIGNED_BYTE);
if (osgDB::writeImageFile(*image,_filename))
{
std::cout << "Saved screen image to `"<<_filename
<<"`"<< std::endl;
}
_snapImageOnNextFrame = false;
}
protected:
std::string _filename;
mutable bool _snapImageOnNextFrame;
};
I set this as the osgViewer::Viewer camera's final draw callback, but I get a blank image, and this warning when image->readPixels is invoked: "Warning: detected OpenGL error 'invalid operation' at start of State::apply()". My osgViewer::Viewer is embedded in a QQuickFramebufferObject. Can anyone give me some suggestions?
I'm not sure I can point you in the right direction without more details about your setup and what you're after.
As a general note, if you're trying to render with OSG into a QtQuick widget, the best approach is to have OSG render to an FBO in a separate shared GL context, and copy the FBO contents back to the QtQuick widget.
I tested this approach some time ago; see the code here:
https://github.com/rickyviking/qmlosg
Another similar project here: https://github.com/podsvirov/osgqtquick
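For the render-to-texture half, a minimal sketch with the stock osg::Camera API might look like this (width, height and sceneRoot are placeholders for your own values):
// Pre-render camera that draws the scene into an FBO; OSG copies the
// FBO's color buffer into 'image' after each frame is rendered.
osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage(width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE);
osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
rttCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
rttCamera->setRenderOrder(osg::Camera::PRE_RENDER);
rttCamera->setViewport(0, 0, width, height);
rttCamera->attach(osg::Camera::COLOR_BUFFER, image.get());
rttCamera->addChild(sceneRoot.get());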
You can also use a PBO (pixel buffer object) to do the read-back:
// 'ext' is assumed to be the OpenGL extensions wrapper for the current
// context (e.g. osg::GLExtensions::Get(contextID, true) in OSG 3.4+).
GLuint pbo = 0;
ext->glGenBuffers(1, &pbo);
ext->glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, pbo);
// Allocate storage for _width*_height RGBA pixels, hinted for read-back.
ext->glBufferData(GL_PIXEL_PACK_BUFFER_ARB, _width*_height*4, 0, GL_STREAM_READ);
// With a pack buffer bound, glReadPixels writes into the PBO (offset 0)
// instead of client memory.
glReadPixels(0, 0, _width, _height, _pixelFormat, _type, 0);
GLubyte* src = (GLubyte*)ext->glMapBuffer(GL_PIXEL_PACK_BUFFER_ARB,
GL_READ_ONLY_ARB);
if(src)
{
memcpy(image->data(), src, _width*_height*4);
ext->glUnmapBuffer(GL_PIXEL_PACK_BUFFER_ARB);
}
ext->glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0);

Texture not being drawn on Screen

I am trying to follow a slightly outdated tutorial on making a Tile Engine.
The problem is that the texture I am trying to draw on screen doesn't show up. I just get a black screen.
I've taken the most relevant parts of Engine.cpp:
bool Engine::Init()
{
LoadTextures();
window = new sf::RenderWindow(sf::VideoMode(800,600,32), "RPG");
if(!window)
return false;
return true;
}
void Engine::LoadTextures()
{
sf::Texture sprite;
sprite.loadFromFile("C:\\Users\\Vipar\\Pictures\\sprite1.png");
textureManager.AddTexture(sprite);
testTile = new Tile(textureManager.GetTexture(0));
}
void Engine::RenderFrame()
{
window->clear();
testTile->Draw(0,0,window);
window->display();
}
void Engine::MainLoop()
{
//Loop until our window is closed
while(window->isOpen())
{
ProcessInput();
Update();
RenderFrame();
}
}
void Engine::Go()
{
if(!Init())
throw "Could not initialize Engine";
MainLoop();
}
And here is the TextureManager.cpp
#include "TextureManager.h"
#include <vector>
#include <SFML\Graphics.hpp>
TextureManager::TextureManager()
{
}
TextureManager::~TextureManager()
{
}
void TextureManager::AddTexture(sf::Texture& texture)
{
textureList.push_back(texture);
}
sf::Texture& TextureManager::GetTexture(int index)
{
return textureList[index];
}
In the tutorial itself the Image type was used, but there was no Draw() method for Image, so I used Texture instead. Why won't the Texture render on the screen?
The problem seems to be in:
void Engine::LoadTextures()
{
sf::Texture sprite;
sprite.loadFromFile("C:\\Users\\Vipar\\Pictures\\sprite1.png");
textureManager.AddTexture(sprite);
testTile = new Tile(textureManager.GetTexture(0));
}
You are creating a local sf::Texture and passing it to TextureManager::AddTexture, which copies it into the vector. References handed out by GetTexture point into that vector, so they are invalidated whenever the vector reallocates, leaving your Tile drawing from a dead texture. You can fix this by storing the textures behind a smart pointer, which keeps their addresses stable:
void Engine::LoadTextures()
{
auto sprite = std::make_shared<sf::Texture>();
sprite->loadFromFile("C:\\Users\\Vipar\\Pictures\\sprite1.png");
textureManager.AddTexture(sprite);
testTile = new Tile(textureManager.GetTexture(0));
}
And changing TextureManager to use it:
void TextureManager::AddTexture(std::shared_ptr<sf::Texture> texture)
{
textureList.push_back(texture);
}
sf::Texture& TextureManager::GetTexture(int index)
{
return *textureList[index];
}
You'll also have to change textureList to be a std::vector<std::shared_ptr<sf::Texture>>, of course.
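A matching TextureManager header could then look like this (a sketch under the changes above):
#include <memory>
#include <vector>
#include <SFML\Graphics.hpp>

class TextureManager
{
public:
void AddTexture(std::shared_ptr<sf::Texture> texture);
sf::Texture& GetTexture(int index);
private:
std::vector<std::shared_ptr<sf::Texture>> textureList;
};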