SDL flips both surfaces - C++

I use SDL for my program's graphics and I have a problem with flipping surfaces. When I compile the following code:
#include <SDL/SDL.h>
#include <SDL/SDL_gfxPrimitives.h> // aacircleRGBA comes from SDL_gfx

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);

    SDL_Surface* scr1 = SDL_SetVideoMode(880, 600, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);
    SDL_Surface* scr2 = SDL_SetVideoMode(880, 600, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);

    aacircleRGBA(scr1, 50, 50, 30, 255, 0, 0, 255); // draw on scr1
    SDL_Flip(scr2);                                 // flip scr2 only

    return 0;
}
It shows the circle on the screen. But I flipped only scr2. Why does it show the circle?

After you call SDL_SetVideoMode() a second time, the original screen buffer pointer is, in the general case, invalid. You shouldn’t be reusing it, because it doesn’t point to an allocated surface anymore.
In this case, calling SDL_SetVideoMode() twice with the same parameters gives scr2 == scr1, because there is no need for SDL to reallocate the video surface. Drawing on the surface referred to by scr1 is thus the same as drawing on that referred to by scr2.
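You can verify this with a quick check (a minimal sketch, assuming SDL 1.2; error checking omitted):

#include <SDL/SDL.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);

    SDL_Surface* scr1 = SDL_SetVideoMode(880, 600, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);
    SDL_Surface* scr2 = SDL_SetVideoMode(880, 600, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);

    /* With identical parameters SDL reuses the existing video surface,
       so both pointers refer to the same surface. */
    printf("scr1 == scr2: %s\n", scr1 == scr2 ? "yes" : "no");

    SDL_Quit();
    return 0;
}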

On success, the returned surface is freed by SDL_Quit and must not be freed by the caller. *This rule also includes consecutive calls to SDL_SetVideoMode (i.e. resize or resolution change) because the existing surface will be released automatically.* Whatever flags SDL_SetVideoMode could satisfy are set in the flags member of the returned surface.
-- SDL_SetVideoMode documentation (emphasis mine)
There is only one hardware surface to render to: the one that appears on screen immediately after calling SDL_SetVideoMode. Where else would you expect that drawing to go?

Related

Exception "Texture cannot be null" Direct X

I am coding a 2D game using DirectX11 and DirectXTK.
I wrote a Framework class that initializes both the window displayed for the game and DirectX. These initializations work correctly. Then I decided to draw some backgrounds and other elements in the window, but after a while the program exits on an exception. I added a try { ... } catch() { } block, which tells me that "Texture cannot be null". However, I could not find which texture it is talking about, even by debugging and checking all the values.
I decided to separate the different elements I was drawing in the window, to see where the problem might come from... So now I have 3 draw methods:
Draw(DWORD &elapsedTime);
DrawBackground(DWORD &elapsedTime);
DrawCharacter(DWORD &elapsedTime);
The Draw(DWORD &elapsedTime) method calls both the DrawBackground() and DrawCharacter() methods.
Here is my Draw method:
void Framework::Draw(DWORD * elapsedTime)
{
    // Clearing the back buffer
    immediateContext->ClearRenderTargetView(renderTargetView, Colors::Aquamarine);
    // Clearing the depth buffer to max depth (1.0)
    immediateContext->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0); // immediateContext is an ID3D11DeviceContext*

    CommonStates states(d3dDevice); // d3dDevice is an ID3D11Device*
    sprites.reset(new SpriteBatch(immediateContext));
    sprites->Begin(SpriteSortMode_Deferred, states.NonPremultiplied());

    DrawBackground1(elapsedTime);
    DrawCharacter(elapsedTime);

    sprites->End();

    // Presenting the back buffer to the front buffer
    swapChain->Present(0, 0);
}
By debugging I am almost sure that the exception comes from both DrawBackground() and DrawCharacter(). Indeed, when I comment them out in the Draw method I get no error, but as soon as I put one back in, the exception is thrown after displaying what I want for a few seconds.
Here is the DrawBackground() method, for example:
void Framework::DrawBackground1(DWORD * elpasedTime)
{
    RECT *try1 = new RECT();
    try1->bottom = 0; try1->left = 0; try1->right = (int)WIDTH; try1->bottom = (int)HEIGHT;

    ID3D11ShaderResourceView * texture2 = nullptr;
    ID3D11ShaderResourceView * textureRV = nullptr;
    CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set2_background.dds", nullptr, &textureRV);
    CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set3_tiles.dds", nullptr, &texture2);

    sprites->Draw(textureRV, XMFLOAT2(0, 0), try1, Colors::White);
    sprites->Draw(texture2, XMFLOAT2(0, 0), try1, Colors::CornflowerBlue);
}
So as soon as I uncomment this method (or DrawCharacter(), which follows the same steps), the window displays what I expect for a few seconds, but then I get the exception "Texture cannot be null". I also noticed that DrawCharacter() lets the window display correctly for longer than DrawBackground(), whose texture is much bigger than the character's.
I'm not sure if this information is useful, but I think the problem might be linked to the size of the texture?
Do you notice anything that I did wrong in this code? Why would a texture be considered null while it is displayed correctly for a while? I've been looking for answers for a few hours now; some help would be amazing, please!
Thank you
I noticed that you create two new ID3D11ShaderResourceView objects every frame without releasing the old ones. You could create the shader resource views only once and store them as member (or global) variables, or you could call ->Release() on them after the sprites->Draw(...) calls.
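For example, a rough sketch of the first suggestion; the LoadTextures/ReleaseTextures helpers and the backgroundRV/tilesRV member names are placeholders, not your actual code:

// In Framework (placeholder member names, declared in the header):
//     ID3D11ShaderResourceView* backgroundRV = nullptr;
//     ID3D11ShaderResourceView* tilesRV = nullptr;

void Framework::LoadTextures() // called once, after DirectX is initialized
{
    CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set2_background.dds", nullptr, &backgroundRV);
    CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set3_tiles.dds", nullptr, &tilesRV);
}

void Framework::DrawBackground1(DWORD* elapsedTime)
{
    RECT area = { 0, 0, (LONG)WIDTH, (LONG)HEIGHT }; // left, top, right, bottom
    sprites->Draw(backgroundRV, XMFLOAT2(0, 0), &area, Colors::White);
    sprites->Draw(tilesRV, XMFLOAT2(0, 0), &area, Colors::CornflowerBlue);
}

void Framework::ReleaseTextures() // called once at shutdown
{
    if (tilesRV)      { tilesRV->Release();      tilesRV = nullptr; }
    if (backgroundRV) { backgroundRV->Release(); backgroundRV = nullptr; }
}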

SDL2 does not draw image using texture

I have been attempting to get a simple SDL2 program up to display an image and then exit. I have this code:
/* compile with `gcc -lSDL2 -o main main.c` */
#include <SDL2/SDL.h>
#include <SDL2/SDL_video.h>
#include <SDL2/SDL_render.h>
#include <SDL2/SDL_surface.h>
#include <SDL2/SDL_timer.h>

int main(void){
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window * w = SDL_CreateWindow("Hi", 0, 0, 640, 480, 0);
    SDL_Renderer * r = SDL_CreateRenderer(w, -1, 0);

    SDL_Surface * s = SDL_LoadBMP("../test.bmp");
    SDL_Texture * t = SDL_CreateTextureFromSurface(r, s);
    SDL_FreeSurface(s);

    SDL_RenderClear(r);
    SDL_RenderCopy(r, t, 0, 0);
    SDL_RenderPresent(r);
    SDL_Delay(2000);

    SDL_DestroyTexture(t);
    SDL_DestroyRenderer(r);
    SDL_DestroyWindow(w);
    SDL_Quit();
}
I am aware that I have omitted the normal checks that each function succeeds; they all do succeed, and they were removed for ease of reading. I am also aware that I have used 0 rather than null pointers or the correct enum values; this is not the cause of the issue either (the same issue occurs when I structure the program correctly; this was just a quick test case drawn up to exercise the simplest case).
The intention is that a window appears, shows the image (which is definitely at that path), waits a couple of seconds, and exits. The result, however, is that the window appears correctly but is filled with black.
An extra note: SDL_ShowSimpleMessageBox() appears to work correctly. I don't know how this relates to the rest of the framework, though.
I'll just clear this up: since you want a texture, create it directly; it's simpler and gives you better control over the image. Try the code below (fully tested and working). All you wanted was for the window to show the image and close within 2 seconds, right? If the image doesn't load, then the problem is your image's location.
/* compile with `g++ main.cpp -o main $(sdl2-config --cflags --libs) -lSDL2_image` */
#include <SDL.h>
#include <SDL_image.h>
#include <iostream> // included since I use std::cout

int main(int argc, char* argv[]){
    bool off = false;
    Uint32 time;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window * w = SDL_CreateWindow("Hi", 0, 0, 640, 480, SDL_WINDOW_SHOWN);
    SDL_Renderer * r = SDL_CreateRenderer(w, -1, SDL_RENDERER_ACCELERATED);

    SDL_Texture * s = NULL;
    s = IMG_LoadTexture(r, "../test.bmp"); // LOADS A TEXTURE DIRECTLY FROM THE IMAGE
    if (s == NULL)
    {
        // if this prints in the console, IMG_LoadTexture could not find the image
        std::cout << "FAILED TO FIND THE IMAGE" << std::endl;
    }

    SDL_Rect s_rect; // THE IMAGE'S DESTINATION RECTANGLE
    s_rect.x = 100; // just like the window, the x and y values determine its displacement from the origin, which is the top left of the window
    s_rect.y = 100;
    s_rect.w = 640; // width of the texture
    s_rect.h = 480; // height of the texture

    time = SDL_GetTicks(); // gets the current time
    while (SDL_GetTicks() < time + 2000) // keeps rendering until 2 seconds have passed
    {
        SDL_RenderClear(r);
        SDL_RenderCopy(r, s, NULL, &s_rect); // THE NULL IS THE AREA YOU COULD USE TO CROP THE IMAGE
        SDL_RenderPresent(r);
    }

    SDL_DestroyTexture(s);
    SDL_DestroyRenderer(r);
    SDL_DestroyWindow(w);
    SDL_Quit();
    return 0; // returns 0, closes the program
}
If you want the window's close button to actually take effect, create an SDL_Event, poll it inside the while loop, and check whether its type is SDL_QUIT. I didn't include that since you wanted the program to close after 2 seconds anyway, which renders the close button useless.
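For example, something along these lines (a sketch that reuses r, s and s_rect from the code above):

SDL_Event e;
bool off = false;
Uint32 start = SDL_GetTicks();

while (!off && SDL_GetTicks() < start + 2000)
{
    while (SDL_PollEvent(&e))          // drain pending events
    {
        if (e.type == SDL_QUIT)        // user clicked the close button
            off = true;
    }
    SDL_RenderClear(r);
    SDL_RenderCopy(r, s, NULL, &s_rect);
    SDL_RenderPresent(r);
}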
HAPPY CODING :D
When using SDL_RENDERER_SOFTWARE for the renderer flags this worked. Also it worked on a different machine. Guess there must be something screwed up with my configuration, although I'm not sure what it is because I'm still getting no errors shown. Ah well, mark as solved.
I believe this to be due (not 100% sure, but fairly sure) to this line of code:
SDL_Renderer * r = SDL_CreateRenderer(w, -1, 0);
According to the SDL wiki article SDL_CreateRenderer, the parameters required for the arguments that you are passing in are as follows:
SDL_Window* window
int index
Uint32 flags
You are passing in the pointer to the window correctly, as well as the index correctly, but the lack of a flag signifies to SDL that SDL should use the default renderer.
Most systems have a default setting for which renderer applications should use, and this can be modified on an application-by-application basis. If no default setting is provided for a specific application, the renderer lookup immediately checks the default renderer settings list. The SDL wiki briefly refers to this list in the following line at the bottom of the remarks section:
"Note that providing no flags gives priority to available SDL_RENDERER_ACCELERATED renderers."
What's not explained in the wiki is that the "renderers" it refers to come from the default renderer list.
This leads me to believe that you have changed a setting somewhere in the history of your computer, or somewhere in your Visual Studio settings, with the result that no list is found. Therefore you have to tell SDL explicitly which renderer to use on your machine; otherwise an argument of 0 should work just fine. In the end this doesn't hurt, as it's better to be explicit in your code than implicit whenever possible.
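For example (a small sketch, assuming the rest of your program stays the same):

SDL_Renderer * r = SDL_CreateRenderer(w, -1, SDL_RENDERER_ACCELERATED);
if (r == NULL) {
    /* log why the accelerated renderer could not be created, then fall back */
    SDL_Log("accelerated renderer failed: %s", SDL_GetError());
    r = SDL_CreateRenderer(w, -1, SDL_RENDERER_SOFTWARE);
}
if (r == NULL) {
    SDL_Log("software renderer failed as well: %s", SDL_GetError());
}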
(That said, all of my deductions assume that everything you said works really does work. If that is not true, then the issue could have a vast variety of causes due to the lack of error checking.)

DirectX - GetSurfaceLevel Performance Issue

I'm implementing deferred shading in a DirectX 9 application. My method of deferred shading requires 3 render targets (color, position, and normal). It is necessary to:
set the render targets in the device at the beginning of the 'render' function
draw the data to them in the 'rt pass'
remove the render targets from the device( so as not to draw over them during subsequent passes)
set the render targets as textures for subsequent passes so that the effect can recall data 'drawn' to the rt's in the 'rt pass'...
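Roughly, the sequence looks like this (a simplified sketch; pColorRT, pPositionRT, pNormalRT and pBackBuffer are placeholder names, and error checking is omitted):

IDirect3DSurface9 *pColorSrf = NULL, *pPosSrf = NULL, *pNormSrf = NULL;
pColorRT->GetSurfaceLevel( 0, &pColorSrf );
pPositionRT->GetSurfaceLevel( 0, &pPosSrf );
pNormalRT->GetSurfaceLevel( 0, &pNormSrf );

// 1) set the render targets at the beginning of the render function
pd3dDevice->SetRenderTarget( 0, pColorSrf );
pd3dDevice->SetRenderTarget( 1, pPosSrf );
pd3dDevice->SetRenderTarget( 2, pNormSrf );

// 2) draw the geometry ('rt pass') here...

// 3) remove the render targets (slot 0 can never be NULL, so restore the back buffer)
pd3dDevice->SetRenderTarget( 0, pBackBuffer );
pd3dDevice->SetRenderTarget( 1, NULL );
pd3dDevice->SetRenderTarget( 2, NULL );

// 4) bind the render targets as textures for the subsequent passes
pd3dDevice->SetTexture( 0, pColorRT );
pd3dDevice->SetTexture( 1, pPositionRT );
pd3dDevice->SetTexture( 2, pNormalRT );

// drop the surface references obtained from GetSurfaceLevel
pColorSrf->Release(); pPosSrf->Release(); pNormSrf->Release();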
This method works fine, however, I am experiencing performance issues. I've narrowed them down to two function calls:
IDirect3DTexture9::GetSurfaceLevel()
IDirect3DDevice9::SetRenderTarget()
Here is code to set render target:
IDirect3DDevice9 *pd3dDevice = CEffectManager::GetDevice();
IDirect3DTexture9 *pRT = CEffectManager::GetColorRT();
IDirect3DSurface9 *pSrf = NULL;
pRT->GetSurfaceLevel( 0, &pSrf );
pd3dDevice->SetRenderTarget( 0, pSrf );
PIX indicates that the duration (in cycles) of the call to GetSurfaceLevel() is very high, about 1/2 ms per call (Duration / Total Duration * 1 / FrameRate). Because it is necessary to get 3 surfaces, the combined duration is too high! It's more than 4 times greater than the combined draw calls...
I tried to eliminate the call to GetSurfaceLevel() by storing a pointer to the surface during render-target creation... oddly enough, SetRenderTarget() then assumed the same duration (when before its duration was negligible). Here is the altered code:
IDirect3DDevice9 *pd3dDevice = CEffectManager::GetDevice();
IDirect3DSurface9 *pSrf = CEffectManager::GetColorSurface();
pd3dDevice->SetRenderTarget( 0, pSrf );
Is there a way around this performance issue? Why does the second method take as long as the first? It seems as though the work inside IDirect3DDevice9::SetRenderTarget() simply takes time... Is there a device state that I can set to help performance?
Update:
I've implemented the following code in order to better test performance:
IDirect3DDevice9 *pd3dDevice = CEffectManager::GetDevice();
IDirect3DTexture9 *pRT = CEffectManager::GetColorRT();
IDirect3DSurface9 *pSRF = NULL;
IDirect3DQuery9 *pEvent = NULL;
LARGE_INTEGER lnStart, lnStop, lnFreq;

// create query
pd3dDevice->CreateQuery( D3DQUERYTYPE_EVENT, &pEvent );

// insert 'end' marker
pEvent->Issue( D3DISSUE_END );

// flush command buffer
while( S_FALSE == pEvent->GetData( NULL, 0, D3DGETDATA_FLUSH ) );

// get start time
QueryPerformanceCounter( &lnStart );

// api call
pRT->GetSurfaceLevel( 0, &pSRF );

// insert 'end' marker
pEvent->Issue( D3DISSUE_END );

// flush the command buffer
while( S_FALSE == pEvent->GetData( NULL, 0, D3DGETDATA_FLUSH ) );

QueryPerformanceCounter( &lnStop );
QueryPerformanceFrequency( &lnFreq );

lnStop.QuadPart -= lnStart.QuadPart;
float fElapsedTime = ( float )lnStop.QuadPart / ( float )lnFreq.QuadPart;
fElapsedTime on average measured 10 - 50 microseconds
I performed the same test on IDirect3DDevice9::SetRenderTarget() and the results on average measured 5 - 30 microseconds...
This data is much better than what I got from PIX... It suggests that there is not as much of a delay as I thought. However, the framerate is drastically reduced when using deferred shading, and this seemed to be the most likely source of the loss of performance... did I query the device effectively?

Window resizing and scaling images / Redeclaring back buffer size / C++ / DirectX 9.0

C++ / Windows 8 / Win API / DirectX 9.0
I am having really big issues with this:
https://github.com/jimmyt1988/TheGame/tree/master/TheGame
The problem is that I have defined some coordinate-adjustment functions. They are for when the window is resized: I need to offset all of my coordinates so that my mouse coordinates work out the correct collisions, and I also need to scale, while keeping the aspect ratio locked, the images I am drawing to the screen.
For example, if I had a screen at 1920 x 1080 and then resized to 1376 x 768, I need to make sure that the bounding boxes for my objects (for when my mouse hovers over them) are adjusted against the mouse coordinates I use to check whether the mouse is inside the bounding box.
I found out that I originally had problems because when I resized my window, DirectX was automatically scaling everything, and on top of that I was rescaling things too, so they would get utterly screwed... I was told by someone that I need to re-declare my screen buffer width and height, which I have done, keeping in mind that there is a border on my window and also a menu at the top.
Can anyone see why, despite doing all this, I am still getting incorrect results?
If you manage to run my application: pressing the 1 key will set the resolution to 1920 x 1080, pressing the 2 key will set it to 1376 x 768. The resizing is entirely wrong: https://github.com/jimmyt1988/TheGame/blob/master/TheGame/D3DGraphics.cpp
float D3DGraphics::ResizeByPercentageChangeX( float point )
{
    float lastScreenWidth = screen.GetOldWindowWidth();
    float currentScreenWidth = screen.GetWindowWidth();
    if( lastScreenWidth > currentScreenWidth + screen.GetWidthOffsetOfBorder() )
    {
        float percentageMoved = currentScreenWidth / lastScreenWidth;
        point = point * percentageMoved;
    }
    return point;
}

float D3DGraphics::ResizeByPercentageChangeY( float point )
{
    float lastScreenHeight = screen.GetOldWindowHeight();
    float currentScreenHeight = screen.GetWindowHeight();
    if( lastScreenHeight > currentScreenHeight + screen.GetHeightOffsetOfBorderAndMenu() )
    {
        float percentageMoved = currentScreenHeight / lastScreenHeight;
        point = point * percentageMoved;
    }
    return point;
}
And yet, if you return the point just above this block of code and do nothing to it, it scales perfectly, because DirectX does its own blooming scaling regardless of this function, which is being called correctly (presParams is declared in the D3DGraphics constructor and a reference is held in the class itself):
void D3DGraphics::ResizeSequence()
{
    presParams.BackBufferWidth = screen.GetWindowWidth() - screen.GetWidthOffsetOfBorder();
    presParams.BackBufferHeight = screen.GetWindowHeight() - screen.GetHeightOffsetOfBorderAndMenu();
    d3dDevice->Reset( &presParams );
}
This is the problem at hand:
Here is the code that makes this abomination of a rectangle:
void Game::ComposeFrame()
{
    gfx.DrawRectangle( 50, 50, screen.GetWindowWidth() - screen.GetWidthOffsetOfBorder() - 100, screen.GetWindowHeight() - screen.GetHeightOffsetOfBorderAndMenu() - 100, 255, 0, 0 );
}
EDIT:
I noticed that MSDN says:
Before calling the IDirect3DDevice9::Reset method for a device, an application should release any explicit render targets, depth stencil surfaces, additional swap chains, state blocks, and D3DPOOL_DEFAULT resources associated with the device.
I have now released the vbuffer and reinstantiated it after the presparams and device are reset.
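Roughly, the reset sequence now looks like this (a simplified sketch; vBuffer and RecreateVertexBuffer() are placeholders for my actual resources):

void D3DGraphics::ResizeSequence()
{
    // release every D3DPOOL_DEFAULT resource before Reset
    if( vBuffer ) { vBuffer->Release(); vBuffer = NULL; }

    presParams.BackBufferWidth = screen.GetWindowWidth() - screen.GetWidthOffsetOfBorder();
    presParams.BackBufferHeight = screen.GetWindowHeight() - screen.GetHeightOffsetOfBorderAndMenu();

    HRESULT hr = d3dDevice->Reset( &presParams );
    if( FAILED( hr ) )
    {
        // D3DERR_INVALIDCALL here usually means a default-pool resource is still alive
        return;
    }

    // re-create the vertex buffer (and any other default-pool resources)
    RecreateVertexBuffer();
}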
EDIT:
I placed an HRESULT check on my Reset call, and I now manage to trigger an error... but, well, it doesn't really help me: http://i.stack.imgur.com/lqQ5K.jpg
Basically, the issue was that I was being a complete derp. I was putting the current window width into my rectangle and then readjusting that size based on oldWidth / newWidth... well, the new width was already the screen size... GRRRRRRR.

QGLBuffer::map returns NULL?

I'm trying to use QGLBuffer to display an image.
The sequence is something like:
initializeGL() {
    glbuffer = QGLBuffer(QGLBuffer::PixelUnpackBuffer);
    glbuffer.create();
    glbuffer.bind();
    glbuffer.allocate(image_width*image_height*4); // RGBA
    glbuffer.release();
}
// Attempting to write an image directly to graphics memory.
// map() should map the buffer into the address space and give me an address
// to write to directly, but it always returns NULL.
unsigned char* dest = glbuffer.map(QGLBuffer::WriteOnly); // FAILS
MyGetImageFunction( dest );
glbuffer.unmap();
paint() {
    glbuffer.bind();
    glBegin(GL_QUADS);
        glTexCoord2i(0,0); glVertex2i(0,height());
        glTexCoord2i(0,1); glVertex2i(0,0);
        glTexCoord2i(1,1); glVertex2i(width(),0);
        glTexCoord2i(1,0); glVertex2i(width(),height());
    glEnd();
    glbuffer.release();
}
There aren't many examples of using QGLBuffer in this way; it's pretty new.
Edit: for anyone searching, here is the working solution:
// Where glbuffer is defined as
glbuffer = QGLBuffer(QGLBuffer::PixelUnpackBuffer);

// Sequence to get a pointer into a PBO, write data to it and copy it to a texture
glbuffer.bind(); // bind before doing anything
unsigned char *dest = (unsigned char*)glbuffer.map(QGLBuffer::WriteOnly);
MyGetImageFunction(dest);
glbuffer.unmap(); // need to unmap before the rest of OpenGL can access the PBO
glBindTexture(GL_TEXTURE_2D, texture);
// Note 'NULL' because the data is now onboard the card
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, image_width, image_height, glFormatExt, glType, NULL);
glbuffer.release(); // but don't release until the copy is finished

// PaintGL function
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
    glTexCoord2i(0,0); glVertex2i(0,height());
    glTexCoord2i(0,1); glVertex2i(0,0);
    glTexCoord2i(1,1); glVertex2i(width(),0);
    glTexCoord2i(1,0); glVertex2i(width(),height());
glEnd();
You should bind the buffer before mapping it!
In the documentation for QGLBuffer::map:
It is assumed that create() has been called on this buffer and that it has been bound to the current context.
In addition to VJovic's comments, I think you are missing a few points about PBOs:
A pixel unpack buffer does not give you a pointer to the graphics texture. It is a separate piece of memory allocated on the graphics card to which you can write directly from the CPU.
The buffer can be copied into a texture by a glTexSubImage2D(....., 0) call, with the texture being bound as well, which you do not do. (0 is the offset into the pixel buffer). The copy is needed partly because textures have a different layout than linear pixel buffers.
See this page for a good explanation of PBO usages (I used it a few weeks ago to do async texture upload).
create will return false if the GL implementation does not support buffers, or there is no current QGLContext.
bind returns false if binding was not possible, usually because type() is not supported on this GL implementation.
You are not checking whether these two functions succeeded.
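For example (a small sketch of those checks, using the glbuffer from your code and Qt's qWarning for reporting):

glbuffer = QGLBuffer(QGLBuffer::PixelUnpackBuffer);
if (!glbuffer.create()) {
    qWarning("QGLBuffer::create() failed: no current QGLContext or buffers unsupported");
} else if (!glbuffer.bind()) {
    qWarning("QGLBuffer::bind() failed: this buffer type may not be supported");
} else {
    glbuffer.allocate(image_width * image_height * 4); // RGBA
}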
I got the same thing: map() returned NULL. When I used the following order, it was solved.
bool success = mPixelBuffer->create();
mPixelBuffer->setUsagePattern(QGLBuffer::DynamicDraw);
success = mPixelBuffer->bind();
mPixelBuffer->allocate(sizeof(imageData));
void* ptr = mPixelBuffer->map(QGLBuffer::ReadOnly);