I have this function:
void Texture::render(int x, int y, int w, int h, SDL_Renderer *&renderer, double angle, SDL_Point* center, SDL_RendererFlip flip)
{
// Set a destination value to -1 to keep the current value
if (x < 0) { x = rect.x; }
if (y < 0) { y = rect.y; }
if (w < 0) { w = rect.w; }
if (h < 0) { h = rect.h; }
// Create destination rectangle
SDL_Rect dstRect = { x, y, w, h };
// Render to screen
SDL_RenderCopyEx(renderer, texture, &rect, &dstRect, angle, center, flip);
}
It works. It creates an image of the correct size at the location I want. But I want to add a chunk of code where it resizes the texture itself to be the size given in the destRect.
So, anyone who finds this and reads the conversation I had with Nelfeal in the comments will see that I had a misunderstanding of how SDL_RenderCopyEx works. There's no need to resize the texture itself; you just pass the size you want in the dstRect when you copy.
As far as I can find, SDL doesn't actually provide a function to resize a texture in place. There may well be a way to do it, but it's clearly not something people commonly use, which is usually a sign that it's a bad idea.
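For example (a minimal sketch assuming a valid texture and renderer, not taken from my original code), copying into a destination rectangle of a different size does all the scaling:
// Stretch the whole texture to 256x128 at (100, 100); the texture itself is untouched
SDL_Rect dstRect = { 100, 100, 256, 128 };
SDL_RenderCopyEx(renderer, texture, NULL, &dstRect, 0.0, NULL, SDL_FLIP_NONE);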
I've tweaked my code to simplify it, for anybody who's trying to do something similar:
void render(SDL_Renderer *&renderer, SDL_Rect *dstRect=NULL, SDL_Rect *srcRect=NULL, double angle=0.0, SDL_Point* center=NULL, SDL_RendererFlip flip=SDL_FLIP_NONE);
void Texture::render(SDL_Renderer *&renderer, SDL_Rect *dstRect, SDL_Rect *srcRect, double angle, SDL_Point* center, SDL_RendererFlip flip)
{
// Check to see if a destination was provided
bool check = false;
if (dstRect == NULL)
{
check = true;
dstRect = new SDL_Rect();
dstRect->x = 0;
dstRect->y = 0;
dstRect->w = SCREEN_WIDTH;
dstRect->h = SCREEN_HEIGHT;
}
// Check to see if the entire texture is being copied
if (srcRect == NULL) { srcRect = &rect; }
// Render to screen
SDL_RenderCopyEx(renderer, texture, srcRect, dstRect, angle, center, flip);
// Free dstRect
if (check) delete dstRect;
}
And it looks like this when using the function:
bgTex.render(renderer);
blobTex.render(renderer, &blobDstRect);
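A small variation on the same function (my own sketch, not what I originally posted): keeping the default destination rectangle in a local variable avoids the new/delete pair entirely:
void Texture::render(SDL_Renderer *&renderer, SDL_Rect *dstRect, SDL_Rect *srcRect, double angle, SDL_Point* center, SDL_RendererFlip flip)
{
// Default destination: the whole screen, held on the stack so no heap allocation is needed
SDL_Rect fullScreen = { 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT };
if (dstRect == NULL) { dstRect = &fullScreen; }
// Default source: the whole texture
if (srcRect == NULL) { srcRect = &rect; }
SDL_RenderCopyEx(renderer, texture, srcRect, dstRect, angle, center, flip);
}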
I am using D2D with D3D11. I have some code that uses GetCursorPos() from the Windows API, converts the result to client coordinates, and then draws a small circle at this position using D2D's FillEllipse(). The screen-to-client conversion works perfectly, but for some reason D2D draws the circle a small distance from the expected location (tens of pixels), as if the coordinate had been scaled by a small factor, so the error grows the further the circle is drawn from (0, 0).
I noticed that changing the DPI in the D2D1_RENDER_TARGET_PROPERTIES affects this 'scaling', so I suspect the problem has something to do with DPI. This is the code for creating the D2D render target from the DXGI surface I obtained from the swapchain in my D3D11 code.
// Create render target
float dpiX, dpiY;
this->factory->GetDesktopDpi(&dpiX, &dpiY);
D2D1_RENDER_TARGET_PROPERTIES rtDesc = D2D1::RenderTargetProperties(
D2D1_RENDER_TARGET_TYPE_HARDWARE,
D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED),
dpiX,
dpiY
);
AssertHResult(this->factory->CreateDxgiSurfaceRenderTarget(
surface.Get(),
&rtDesc,
&this->renderTarget
), "Failed to create D2D render target");
Here, dpiX and dpiY become 96, which I notice is also the constant that GetDpiForWindow() from the Windows API returns when a window is not DPI aware.
I want to know how I can fix my code so that it will draw the circle at the position given by GetCursorPos().
More relevant code:
Driver code
Vector3f cursPos = input.GetCursorPos();
DrawCircle(Colour::Green, cursPos.x, cursPos.y, 3/*radius*/);
Input
POINT pt{};
::GetCursorPos(&pt);
// Convert from screen pixels to client pixels
return ConvertPixelSpace(this->hWnd, (float)pt.x, (float)pt.y, PixelSpace::Screen, PixelSpace::Client);
Direct2D
void DrawCircle(const Colour& c, float centreX, float centreY, float radius, PixelSpace ps)
{
Vector3f centre = ConvertPixelSpace(this->gfx.hWnd, centreX, centreY, ps, PixelSpace::Client);
centreX = centre.x;
centreY = centre.y;
D2D1_ELLIPSE el{};
el.point.x = centreX;
el.point.y = centreY;
el.radiusX = radius;
el.radiusY = radius;
auto brush = this->CreateBrush(c);
this->renderTarget->FillEllipse(
&el,
brush.Get()
);
}
PixelSpace Conversion
Vector3f ConvertPixelSpace(HWND hWnd, float x, float y, PixelSpace curSpace, PixelSpace newSpace)
{
RECT rc = GetClientRectOfWindow(hWnd);
struct
{
float top, left, width, height;
} rectf;
rectf.top = static_cast<float>(rc.top);
rectf.left = static_cast<float>(rc.left);
rectf.width = static_cast<float>(rc.right - rc.left);
rectf.height = static_cast<float>(rc.bottom - rc.top);
// Convert to client space
if (curSpace == PixelSpace::Screen)
{
x -= rectf.left;
y -= rectf.top;
}
// Convert to new space
if (newSpace == PixelSpace::Screen)
{
x += rectf.left;
y += rectf.top;
}
return Vector3f(x, y);
}
RECT GetClientRectOfWindow(HWND hWnd)
{
RECT rc;
::GetClientRect(hWnd, &rc);
// Pretty sure these are valid casts.
// rc.top is stored directly after rc.left and this forms a POINT struct
ClientToScreen(hWnd, reinterpret_cast<POINT*>(&rc.left));
ClientToScreen(hWnd, reinterpret_cast<POINT*>(&rc.right));
return rc;
}
The problem was that I was creating the D3D11 swapchain with the window area instead of the client area.
RECT rect{};
GetWindowRect(hWnd, &rect); // !!! This should be GetClientRect()
this->width = rect.right - rect.left;
this->height = rect.bottom - rect.top;
DXGI_SWAP_CHAIN_DESC scDesc{};
scDesc.BufferDesc.Width = width;
scDesc.BufferDesc.Height = height;
//...
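A corrected sketch, assuming the same hWnd, width, height, and scDesc members as above:
RECT rect{};
GetClientRect(hWnd, &rect); // client area only; left and top are always 0 here
this->width = rect.right - rect.left;
this->height = rect.bottom - rect.top;
DXGI_SWAP_CHAIN_DESC scDesc{};
scDesc.BufferDesc.Width = width;
scDesc.BufferDesc.Height = height;
//...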
I need a double buffer because I'm starting to notice tearing when I move my texture-based tile map around the screen via mouse click.
I'm using SDL2 and this is an SDL2-specific question. Here is my code that produces the tearing; what's wrong?
//set up buffer and display
bufferTexture = SDL_CreateTexture(renderer,SDL_PIXELFORMAT_RGBA8888,SDL_TEXTUREACCESS_STREAMING,800,600);
displayTexture = SDL_CreateTexture(renderer,SDL_PIXELFORMAT_RGBA8888,SDL_TEXTUREACCESS_TARGET,800,600);
while(running)
{
handleEvents();
SDL_SetRenderTarget(renderer,bufferTexture); // everything goes into this buffer
SDL_RenderClear(renderer); // clear buffer before draw
gTileMovement.updateMapCoordinates();
for(int i = 0; i < MAP_ROWS; i++)//rows
{
for(int j = 0; j < MAP_COLUMNS; j++)//columns
{
x = (j * 100) - (i * 100);
y = ((i * 100) + (j * 100)) / 2;
drawTiles(i,j,x,y);
}
}
//move from buffer to display texture
memcpy(&displayTexture,&bufferTexture,sizeof((&bufferTexture)+1));
//change render target back to display texture
SDL_SetRenderTarget(renderer,displayTexture);
//show it all on screen
SDL_RenderPresent(renderer);
}
For what it's worth, here is my drawTiles function too. Is this not conventional?
void drawTiles(int i,int j,int x,int y)
{
//updates based on a mouse clicks xy coords
gTileMovement.updateMapCoordinates();
if(tileMap[i][j] == 1) // grass?
{
gSpriteSheetTexture.render(x+gTileMovement.getUpdatedX(),y+gTileMovement.getUpdatedY(),&gSpriteClips[1]);
}
if(tileMap[i][j] == 0) // wall?
{
gSpriteSheetTexture.render(x+gTileMovement.getUpdatedX(),y+gTileMovement.getUpdatedY(),&gSpriteClips[0]);
}
if(tileMap[i][j] == 2) // tree?
{
gSpriteSheetTexture.render(x+gTileMovement.getUpdatedX(),y+gTileMovement.getUpdatedY(),&gSpriteClips[2]);
}
}
This leads into how I SDL_RenderCopy the tiles through a class. This copies the textures onto the current render target, does it not? Which is the buffer texture, if I'm not mistaken.
void LTexture::render(int x, int y, SDL_Rect * clip)
{
SDL_Rect renderQuad = {x, y, mWidth, mHeight};
if(clip != NULL)
{
renderQuad.w = clip->w;
renderQuad.h = clip->h;
}
SDL_RenderCopy(renderer, mTexture, clip, &renderQuad);
}
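For reference, SDL_RenderPresent already swaps an internal back buffer, so a typical SDL2 loop needs no hand-rolled copy between textures. A sketch (not from the original post; it assumes the renderer was created with the SDL_RENDERER_PRESENTVSYNC flag):
while(running)
{
handleEvents();
SDL_SetRenderTarget(renderer, NULL); // draw straight to the renderer's back buffer
SDL_RenderClear(renderer);
gTileMovement.updateMapCoordinates();
for(int i = 0; i < MAP_ROWS; i++)
{
for(int j = 0; j < MAP_COLUMNS; j++)
{
x = (j * 100) - (i * 100);
y = ((i * 100) + (j * 100)) / 2;
drawTiles(i,j,x,y);
}
}
SDL_RenderPresent(renderer); // vsynced present avoids tearing
}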
I'm having an issue with a program I'm working on. Occasionally, it will just freeze. No errors or anything.
The game is a multiplayer game where you fly a ship around. Pictures of other players and powerups move in and out of view depending on your location. For the most part, it works great, but under certain circumstances, it locks up.
I've tracked it down to the point where it blits one surface onto another (SDL_BlitSurface).
If I comment out the single line of code where it blits (SDL_BlitSurface), and replace the graphic with a simple circle, it'll never freeze under any circumstances. But, comment out the circle and replace it with blitting the graphic again, and it'll randomly freeze. The frustrating part is, sometimes it will, sometimes it won't. Sometimes the graphic will sit on screen for a few moments and then freeze, sometimes it'll freeze the moment it shows up. Sometimes, it won't freeze at all. I simply cannot track it down to anything in particular.
I have ample amount of code that checks for NULL surfaces and it doesn't seem to stop it.
I also have it set up to output information about all the graphics to a file (such as width, height, location in memory, x, y, etc) and nothing seems out of the ordinary.
My main questions are, what about surfaces can cause SDL_BlitSurface to freeze? And what other checks can I add for surfaces to make sure it doesn't try to blit bad surfaces?
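For illustration only, here is the kind of extra sanity check one could add before each blit (a hypothetical helper; as it turned out below, the real cause was elsewhere):
// Hypothetical helper: reject obviously malformed surfaces before blitting
bool Surface_Looks_Valid(SDL_Surface* s)
{
return (s != NULL) && (s->format != NULL) && (s->w > 0) && (s->h > 0);
}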
The code is too long to list, but here is how it works:
class Player
{
Player();
int x;
int y;
int xvel;
int yvel;
SDL_Surface *DrawScreen;
SDL_Surface *ShipPic;
void check_player_dist();
void check_powerup_dist();
void update(Player p[], PowerUp pu[]);
};
class PowerUp
{
int x;
int y;
int type;
SDL_Surface *Powerup_Pic;
};
Player::Player()
{
Apply_Surface(0, 0, PlayerShipPics, ShipPic);
}
void Player::update(Player p[], PowerUp pu[])
{
x += xvel;
y += yvel;
for (int i = 0; i < Num_Players; i++)
{
if (check_on_screen(p[i].x, p[i].y) == true)
{
Apply_Surface(x - p[i].x, y - p[i].y, p[i].ShipPic, DrawScreen);
}
}
for (int i = 0; i < Num_PowerUps; i++)
{
if (check_on_screen(pu[i].x, pu[i].y) == true)
{
Apply_Surface(x - pu[i].x, y - pu[i].y, pu[i].Powerup_Pic, DrawScreen);
}
}
}
int main()
{
SDL_Surface *Screen;
Player players[4];
PowerUp powerups[200];
Num_Players = 4;
Num_PowerUps = 200;
while (quit == false)
{
for (int i = 0; i < Num_Players; i++)
{
players[i].update(players, powerups);
switch (i)
{
case 0: ScreenX = 0; ScreenY = 0; break;
case 1: ScreenX = ScreenWid / 2; ScreenY = 0; break;
case 2: ScreenX = 0; ScreenY = ScreenHigh / 2; break;
case 3: ScreenX = ScreenWid / 2; ScreenY = ScreenHigh / 2; break;
}
Apply_Surface (ScreenX, ScreenY, players[i].DrawScreen, Screen);
}
if (SDL_Flip(Screen) == -1)
{
return -1;
}
}
}
void Apply_Surface (int x, int y, SDL_Surface* Source, SDL_Surface* Destination, SDL_Rect* Clip = NULL)
{
SDL_Rect Offset;
Offset.x = x;
Offset.y = y;
if ((Source != NULL) && (Destination != NULL))
{
SDL_BlitSurface (Source, Clip, Destination, &Offset );
}
}
I've noticed it generally freezes when two or more players are near each other and it tries to draw the same power-up on both of their screens. But again...not always!
Well, I figured out what it was.
I was using the SDL_GFX library along with my game. Many of the images were created using rotozoomSurface(), which is a function of SDL_GFX.
It turns out there's a bug in it where, under circumstances I haven't pinned down, it creates a bad surface that works "most" of the time but crashes under the right conditions, such as being placed at a particular x and y coordinate on the screen (I don't know for sure). The rotated/zoomed images would work about 95% of the time, so it was very difficult to pinpoint the issue.
The workaround was, when the image was created, to just SDL_BlitSurface() it onto another surface under controlled conditions, such as at coordinates (0, 0), then delete the rotated and zoomed surface and use the new "safe" surface instead.
Works great after that.
Hopefully this will help anyone who's using SDL_GFX and cannot figure out why their program is crashing.
Example:
Before:
SDL_Surface *original = SDL_CreateRGBSurface(SDL_SWSURFACE, Ship_Width, Ship_Height, Screen_BPP, 0, 0, 0, 0);
Apply_Surface(0, 0, ShipsPic, original, &bounds);
SDL_Surface *finished = rotozoomSurface(original, pic_angle, zoom, SMOOTHING_ON);
SDL_FreeSurface(original);
return finished;
After (fixed):
SDL_Surface *original = SDL_CreateRGBSurface(SDL_SWSURFACE, Ship_Width, Ship_Height, Screen_BPP, 0, 0, 0, 0);
Apply_Surface(0, 0, ShipsPic, original, &bounds);
SDL_Surface *temp = rotozoomSurface(original, pic_angle, zoom, SMOOTHING_ON);
SDL_Surface *finished = SDL_CreateRGBSurface(SDL_SWSURFACE, temp->w, temp->h, Screen_BPP, 0, 0, 0, 0);
Apply_Surface(0, 0, temp, finished);
SDL_FreeSurface(temp);
SDL_FreeSurface(original);
return finished;
And for what it's worth, the Apply_Surface() function:
void Apply_Surface (int x, int y, SDL_Surface* Source, SDL_Surface* Destination, SDL_Rect* Clip = NULL)
{
SDL_Rect Offset;
Offset.x = x;
Offset.y = y;
if ((Source != NULL) && (Destination != NULL))
{
SDL_BlitSurface (Source, Clip, Destination, &Offset );
}
}
There's not really enough information to figure out what exactly is going on. Computers don't like to do things "sometimes," they either do them or not, so it leads me to believe that maybe there's some variable that's doing something it shouldn't.
Just in case, what does your Apply_Surface() function look like? I assume that's where you're doing your actual blitting, and if that's where you're having your problems, that would be useful for those of us trying to figure out your dilemma.
I have a problem with my SDL program. My goal is to make a dot move along a line. I have all the coordinates saved in a data file. So I just wanted to read them from the file and display the dot at the right position.
The dot class (which is named linefollower) looks like this.
class Linefollower
{
private:
int x, y;
char orientation;
public:
//Initializes the variables
Linefollower();
void set(int m_x, int m_y, char m_orientation);
void show();
char get_orientation();
};
Linefollower::Linefollower()
{
x = 0;
y = 0;
orientation = 'E';
}
void Linefollower::set(int m_x, int m_y, char m_orientation)
{
x = m_x;
y = m_y;
orientation = m_orientation;
}
void Linefollower::show()
{
//Show the linefollower
apply_surface(x, y, linefollower, screen );
}
char Linefollower::get_orientation()
{
return orientation;
}
The apply_surface function.
void apply_surface( int x, int y, SDL_Surface * source, SDL_Surface* destination)
{
//Temporary rectangle to hold the offsets
SDL_Rect offset;
//Get the offsets
offset.x = x;
offset.y = y;
//Blit the surface
SDL_BlitSurface( source, NULL, destination, &offset);
}
The loop which ought to display the animation looks like this.
//While the user hasn't quit
while( quit == false )
{
//Apply the surface to the screen
apply_surface( 0, 0, image, screen );
fin.read((char*) &my_linefollower, sizeof my_linefollower);
if(my_linefollower.get_orientation() == 'Q')
break;
my_linefollower.show();
//Update the screen
if( SDL_Flip( screen ) == -1 )
{
return 1;
}
SDL_Delay(200);
}
Now I was expecting to get a moving dot on the screen, but the only thing I got was the background (image) for a few seconds, until the if(my_linefollower.get_orientation() == 'Q') break; condition became true. What am I doing wrong?
PS: I guess it is worth noting that I am a beginner in SDL and I took most of the code from a tutorial. Learning it thoroughly would be a waste of time for me, since it is unlikely that I will use it again any time soon.
First, you should change your offset in apply_surface to something like this:
SDL_Rect offset = { x, y, 0, 0 };
SDL_Rect doesn't have a constructor to set your members to 0 by default, so you get uninitialized memory for your width and height.
Also, you should check what linefollower contains, if it's a valid SDL_Surface. Removing your file reading code and manually controlling a Linefollower will allow you to easily find where the error comes from.
Use a debugger to validate your x and y coordinates.
Other than that, your code should work, although your window will be unresponsive because you're not pumping events through SDL_PollEvent.
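A minimal sketch of pumping events each frame, assuming SDL 1.2 as in the question:
SDL_Event event;
while (SDL_PollEvent(&event))
{
if (event.type == SDL_QUIT)
{
quit = true;
}
}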
I'm using the SDL library, but it doesn't support scaling/resizing surfaces, so I downloaded the SDL_image 1.2 and SDL_gfx libraries. My function/code works, but the image appears in low quality.
Say I have an image that is 100x100: if I scale it down to 95x95 or up to 110x110 the quality looks very poor, but if I leave it at 100x100, the original size, it looks fine. Images should still look good when scaled down, but they don't.
My code is:
int drawImage(SDL_Surface* display, const char * filename, int x, int y, int xx, int yy, const double newwidth, const double newheight, int transparent = 0)
{
SDL_Surface *image;
SDL_Surface *temp;
temp = IMG_Load(filename);
if (temp == NULL) { printf("Unable to load image: %s\n", SDL_GetError()); return 1; }
image = SDL_DisplayFormat(temp);
SDL_FreeSurface(temp);
// Zoom function uses doubles for rates of scaling, rather than
// exact size values. This is how we get around that:
double zoomx = newwidth / (float)image->w;
double zoomy = newheight / (float)image->h;
// This function assumes no smoothing, so that any colorkeys wont bleed.
SDL_Surface* sized = zoomSurface( image, zoomx, zoomy, SMOOTHING_OFF );
// If the original had an alpha color key, give it to the new one.
if( image->flags & SDL_SRCCOLORKEY )
{
// Acquire the original Key
Uint32 colorkey = image->format->colorkey;
// Set to the new image
SDL_SetColorKey( sized, SDL_SRCCOLORKEY, colorkey );
}
// The original picture is no longer needed.
SDL_FreeSurface( image );
// Set it instead to the new image.
image = sized;
SDL_Rect src, dest;
src.x = xx; src.y = yy; src.w = image->w; src.h = image->h; // size
dest.x = x; dest.y = y; dest.w = image->w; dest.h = image->h;
if(transparent == true )
{
//Set the color as transparent
SDL_SetColorKey(image,SDL_SRCCOLORKEY|SDL_RLEACCEL,SDL_MapRGB(image->format,0x0,0x0,0x0));
}
SDL_BlitSurface(image, &src, display, &dest);
return true;
}
drawImage(display, "Image.png", 50, 100, NULL, NULL, 100, 100,true);
An image that is scaled without allowing smoothing is going to have artifacts. You might have better luck if you start with SVG and render it at the scale that you want. Here's an SVG -> SDL surface library.
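If colorkey bleed is not a concern for a particular image, it may also be worth trying SDL_gfx's smoothing; a sketch reusing the variables from the question's code:
// Assumption: this image has no colorkey that could bleed at the scaled edges
SDL_Surface* sized = zoomSurface(image, zoomx, zoomy, SMOOTHING_ON);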