How to change an image's size in C++, SDL - c++

How can I change the image's size in the code below?
const int XHome = 10, YHome = 10;
const int WHome = 50, HHome = 50;
.
.
.
SDL_Surface* Image = SDL_LoadBMP(Address);
SDL_Rect destRect;
destRect.x = WHome * x;
destRect.y = HHome * y;
destRect.w = WHome;
destRect.h = HHome;
SDL_BlitSurface(Image, NULL, mainScreen, &destRect);
SDL_FreeSurface(Image);
When I blit Image onto mainScreen, which is another SDL_Surface, it's bigger than 50*50. Is it possible to resize Image? Thank you.
This is what happens when I set WHome and HHome to 50*50:
Since I have only 5 reputation, I can't post images. To see the image, please click here.
But when I set them to the original image's size, this is what I see:
here

According to the SDL_BlitSurface documentation:
Only the position is used in the dstrect (the width and height are ignored).
I highly recommend switching to SDL 2 for many reasons (hardware acceleration being a big one); this task would also become trivial with a texture and SDL_RenderCopy. If you're somehow stuck using SDL 1, you can either look into scaling surfaces manually, or use a library like SDL_gfx, which has custom blit functions.
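For example, if you stay on SDL 1.2, the SDL_gfx add-on can scale the surface before you blit it. This is just a rough, untested sketch: it assumes SDL_gfx is installed and linked (the include paths may differ on your system) and reuses the names Address, mainScreen, WHome, HHome, x and y from the question.
#include <SDL/SDL.h>
#include <SDL/SDL_rotozoom.h>   // zoomSurface() from SDL_gfx

SDL_Surface* image = SDL_LoadBMP(Address);
if (image != NULL)
{
    // Scale factors that map the bitmap's original size onto WHome x HHome
    double zoomX = (double)WHome / image->w;
    double zoomY = (double)HHome / image->h;
    SDL_Surface* scaled = zoomSurface(image, zoomX, zoomY, SMOOTHING_ON);
    SDL_FreeSurface(image);

    SDL_Rect destRect = { (Sint16)(WHome * x), (Sint16)(HHome * y), 0, 0 };
    SDL_BlitSurface(scaled, NULL, mainScreen, &destRect);
    SDL_FreeSurface(scaled);
}
With SDL 2 the equivalent is simply passing a 50*50 destination rectangle to SDL_RenderCopy; the renderer scales the texture for you.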

Related

Is there a way to save a Graphics object in Codename One without taking a screenshot?

The question is in the title. For example, how can I save g to a file in the following snippet?
public void paints(Graphics g, Image background, Image watermark, int width, int height) {
    g.drawImage(background, 0, 0);
    g.drawImage(watermark, 0, 0);
    g.setColor(0xFF0000);
    // Upper left corner
    g.fillRect(0, 0, 10, 10);
    // Lower right corner
    g.setColor(0x00FF00);
    g.fillRect(width - 10, height - 10, 10, 10);
    g.setColor(0xFF0000);
    Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(220, Font.STYLE_BOLD);
    g.setFont(f);
    // Draw a string right below the M from Mercedes on the car windscreen (measured in Gimp)
    g.drawString("HelloWorld", (int) (848), (int) (610));
    // NOW how can I save g in a file ?
}
The reason I don't want to take a screenshot is that I want to keep the full resolution of g (e.g. 2000 x 1500).
I would be so grateful to anyone who can tell me how to do that with Codename One. If it's not possible, then it is already good to know!
Cheers,
What you could do is create an Image as a buffer, get the graphics object from the image, and do all your drawing operations on it. Then draw the whole image to the display and save it to a file:
int width = 2000;
int height = 1500;
float saveQuality = 0.7f;
// Create image as buffer
Image imageBuffer = Image.createImage(width, height, 0xffffff);
// Create graphics out of image object
Graphics imageGraphics = imageBuffer.getGraphics();
// Do your drawing operations on the graphics from the image
imageGraphics.drawWhatever(...);
// Draw the complete image on your Graphics object g (the screen I guess)
g.drawImage(imageBuffer, 0, 0);
// Save the image with the ImageIO class
OutputStream os = Storage.getInstance().createOutputStream("storagefilename.png");
ImageIO.getImageIO().save(imageBuffer, os, ImageIO.FORMAT_PNG, saveQuality);
Note that I have not tested this, but it should work like that.
Graphics is just a proxy to a surface; it has no knowledge of, or access to, the underlying surface to which it is drawing, and the reason for that is quite simple: it can draw to a hardware-accelerated "surface" where there is physically no underlying image.
This is the case both on iOS and Android, where the "screen" is natively drawn and has no buffer.

How to create polygons to display a running number in cocos2d-x

I'm trying to create a node that is simply a rectangle with a number in it. And this is how I'm doing it now:
int size = 100, fontSize = 64;
auto node = DrawNode::create();
Vec2 vertices[] =
{
    Vec2(0, size),
    Vec2(size, size),
    Vec2(size, 0),
    Vec2(0, 0)
};
node->drawPolygon(vertices, 4, Color4F(1.0f, 0.3f, 0.3f, 1), 0, Color4F(1.0f, 1.0f, 1.0f, 1));
auto texture = new Texture2D();
int numberToDisplay = 2000;
std::string s = std::to_string(numberToDisplay);
texture->initWithString(s.c_str(), "fonts/Marker Felt.ttf", fontSize, Size(size, size), TextHAlignment::CENTER, TextVAlignment::CENTER);
auto textSprite = Sprite::createWithTexture(texture);
node->addChild(textSprite);
textSprite->setPosition(size/2, size/2);
Every time I want to change the number I have to re-create the text sprite, remove the current child, and add the new one. Is there a better way to do it?
I wonder whether you need some special features; if not, why not use LayerColor and LabelTTF?
LayerColor* node = LayerColor::create(Color4B(255, 85, 85, 255), 100, 100);
LabelTTF* label = LabelTTF::create(s, "fonts/Marker Felt.ttf", fontSize);
node->addChild(label);
Just change the content of the LabelTTF; there is no need to re-create the sprite.
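For example, a minimal sketch (assuming label is the LabelTTF created above and newNumber is whatever hypothetical value you want to show next):
int newNumber = 2001;                          // hypothetical new value
label->setString(std::to_string(newNumber));   // updates the rendered text in place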
You could use two different techniques to achieve this; to me, both of them are good.
1. Use the texture cache to cache textures and change the sprite's texture at run time (good if you know exactly how many textures there are and they all have the same size). In your .h file, declare the textures like this:
Texture2D *startTexture, *endTexture, *midTexture;
In your .cpp file, do it like this:
startTexture = Director::getInstance()->getTextureCache()->addImage("start.png");
endTexture = Director::getInstance()->getTextureCache()->addImage("end.png");
midTexture = Director::getInstance()->getTextureCache()->addImage("middle.png");
After that, whenever you want to change the texture of any sprite, simply do it like this:
textSprite->setTexture(startTexture);
For this to work, declare "textSprite" in your .h file as well for quick access.
Pitfall: changing the texture doesn't change the sprite's initial bounding box. If the initial sprite texture was 32*32 and the new texture is 50*50, then the extra area will be cropped automatically, starting from the origin point, which might look bad. To overcome this you also need to change the rect, using:
textSprite->setTextureRect(
    Rect(0, 0, startTexture->getContentSize().width,
               startTexture->getContentSize().height));
2. Using the sprite frame cache: put all your textures into a sprite sheet and load it into memory like this:
SpriteFrameCache *spriteCache = SpriteFrameCache::getInstance();
spriteCache->addSpriteFramesWithFile("test.plist", "test.png");
Now, whenever you want to change your texture, do it like this:
testSprite->setSpriteFrame(
    SpriteFrameCache::getInstance()->getSpriteFrameByName("newImage.png"));
This will first check the sprite frame cache for an image named "newImage.png"; if it is found in memory it will return that sprite frame, otherwise it will return nullptr.
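Since the lookup can return nullptr, a small defensive sketch (purely illustrative, reusing testSprite from above) would be:
auto frame = SpriteFrameCache::getInstance()->getSpriteFrameByName("newImage.png");
if (frame != nullptr)
{
    testSprite->setSpriteFrame(frame);   // only assign a frame that actually exists in the cache
}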

Drawing keeps getting stuck to the screen in SDL

In my program, what I am drawing gets stuck onto the screen I am drawing on; by this I mean that what I previously drew onto the screen stays after I call SDL_UpdateWindowSurface(). Here is my code.
void tower_manager::render()
{
    m_tower.draw(camx, camy, m_screen);
    //SDL_BlitSurface(test, NULL, m_screen, NULL);
    SDL_Rect rect = { 32, 32, 32, 32 };
    //draw the tower walls;
    for (int x = 0; x < towerWidth; x++)
    {
        for (int y = 0; y < towerHeight * 2; y += 2)
        {
            rect.x = x*blockSize - camx;
            rect.y = y*blockSize - camy;
            SDL_BlitSurface(test, NULL, m_screen, &rect);
        }
    }
    SDL_UpdateWindowSurface(m_window);
}
Apparently I need at least 10 reputation to post images, so I can't post a screenshot, but here is an example: you know what happens to the desktop when a Windows application freezes and it keeps drawing the same window over and over, and you can drag it around to make art and stuff? That's exactly what it looks like is happening here. Also, I have another issue: when I call the tower object's method that is originally supposed to draw the tower using the same code, it does not draw or do anything at all (I am passing in a pointer to the screen I am drawing to as its parameter).
You would want to clear the surface regions that you are drawing to. If you don't, then the screen surface retains the old renderings from previous frames and you are drawing on top of them. This causes a smearing artifact.
An old optimization (no longer so useful with SDL2 or OpenGL) here is to keep track of dirty rectangles and clear each of them, but the simplest way is to just clear the entire surface each frame before rendering.
So, once per frame do something like this:
SDL_FillRect(m_screen, NULL, 0x000000);
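As a minimal sketch of how that fits into the render() function from the question (same names as above, not tested):
void tower_manager::render()
{
    // Wipe the previous frame so old blits don't smear into the new one
    SDL_FillRect(m_screen, NULL, 0x000000);

    // ... blit the tower and the wall tiles exactly as before ...

    SDL_UpdateWindowSurface(m_window);
}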

How to efficiently render a small sprite in Direct3D / C++ on a large Window (DWM)?

I'm implementing a custom cursor in DirectX/C++ that is drawn on a transparent window on top of the desktop.
I have stripped it down to a basic example. The magic of executing Direct3D on the DWM is based on this article on Code Project
The problem is that when using a very big window (e.g. 2560x1440) as a base for the DirectX rendering, it will give up to 40% GPU Load according to GPU-Z. Even if the only thing I am displaying is a static 128x128 sprite, or nothing at all. If I use an area like 256x256, the GPU Load is around 1-3%.
Basically this loop would make the GPU go crazy on a big window while it's smooth sailing on a small window:
while (true) {
    g_pD3DDevice->PresentEx(NULL, NULL, NULL, NULL, NULL);
    Sleep(10);
}
So it seems like it re-renders the whole screen whether anything changes or not, am I right? Can I tell Direct3D to only re-render the specific parts that need to be updated?
EDIT:
I have found a way to tell Direct3D to render a specific part by providing RGNDATA Dirty region information to PresentEx. It is now 1% GPU Load instead of 20-40%.
std::vector<RECT> dirtyRects;
//Fill dirtyRects with previous and new cursor boundaries
DWORD size = dirtyRects.size() * sizeof(RECT)+sizeof(RGNDATAHEADER);
RGNDATA *rgndata = NULL;
rgndata = (RGNDATA *)HeapAlloc(GetProcessHeap(), 0, size);
RECT* pRectInitial = (RECT*)rgndata->Buffer;
RECT rectBounding = dirtyRects[0];
for (int i = 0; i < dirtyRects.size(); i++)
{
    RECT rectCurrent = dirtyRects[i];
    rectBounding.left = min(rectBounding.left, rectCurrent.left);
    rectBounding.right = max(rectBounding.right, rectCurrent.right);
    rectBounding.top = min(rectBounding.top, rectCurrent.top);
    rectBounding.bottom = max(rectBounding.bottom, rectCurrent.bottom);
    *pRectInitial = dirtyRects[i];
    pRectInitial++;
}
//preparing rgndata header
RGNDATAHEADER header;
header.dwSize = sizeof(RGNDATAHEADER);
header.iType = RDH_RECTANGLES;
header.nCount = dirtyRects.size();
header.nRgnSize = dirtyRects.size() * sizeof(RECT);
header.rcBound.left = rectBounding.left;
header.rcBound.top = rectBounding.top;
header.rcBound.right = rectBounding.right;
header.rcBound.bottom = rectBounding.bottom;
rgndata->rdh = header;
// Update display
g_pD3DDevice->PresentEx(NULL, NULL, NULL, rgndata, 0);
But there is something I do not understand: it will only give 1% GPU load if I add the following:
SetLayeredWindowAttributes(hWnd, 0, 180, LWA_ALPHA);
I want it transparent anyway, so that's fine, but instead I get some weird tearing effects after a while. It is more noticeable the faster I move the cursor. Where does that come from? It looks like the image provided. I am sure I have set the dirty rects perfectly accurately.
The tearing seems to differ from computer to computer.

Window resizing and scaling images / Redeclaring back buffer size / C++ / DIRECTX 9.0

C++ / Windows 8 / Win api / DirectX 9.0
I am having really big issues with this:
https://github.com/jimmyt1988/TheGame/tree/master/TheGame
The problem is that I have defined some coordinate-adjustment functions. They are for when the window is resized: I need to offset all of my coordinates so that my mouse coordinates work out the correct collisions, and also to scale, while keeping the aspect ratio locked, the images I am drawing to the screen.
For example, if I had a screen at 1920 x 1080 and then resized to 1376 x 768, I need to make sure that the bounding boxes for my objects (for when my mouse hovers over them) are adjusted against the mouse coordinates I use to check whether the mouse is in the bounding box.
I found out that I originally had problems because when I resized my window, DirectX was automatically scaling everything, and on top of that I too was rescaling things, so they would get utterly screwed. I was told by someone that I need to re-declare my screen buffer width and height, which I have done, keeping in mind there is a border to my window and also a menu at the top.
Can anyone see why, regardless of doing all this, I am still getting the incorrect results?
If you manage to run my application: Pressing the 1 key will make the resolution 1920 x 1080, pressing the 2 key will make it 1376 x 768. The resize is entirely wrong: https://github.com/jimmyt1988/TheGame/blob/master/TheGame/D3DGraphics.cpp
float D3DGraphics::ResizeByPercentageChangeX( float point )
{
    float lastScreenWidth = screen.GetOldWindowWidth();
    float currentScreenWidth = screen.GetWindowWidth();
    if( lastScreenWidth > currentScreenWidth + screen.GetWidthOffsetOfBorder() )
    {
        float percentageMoved = currentScreenWidth / lastScreenWidth;
        point = point * percentageMoved;
    }
    return point;
}
float D3DGraphics::ResizeByPercentageChangeY( float point )
{
    float lastScreenHeight = screen.GetOldWindowHeight();
    float currentScreenHeight = screen.GetWindowHeight();
    if( lastScreenHeight > currentScreenHeight + screen.GetHeightOffsetOfBorderAndMenu() )
    {
        float percentageMoved = currentScreenHeight / lastScreenHeight;
        point = point * percentageMoved;
    }
    return point;
}
And yet if you put the return point above this block of code and do nothing to it, it scales perfectly because of blooming DirectX, regardless of the following, which is being called correctly (presParams is declared in the D3DGraphics constructor and a reference is held in the class itself):
void D3DGraphics::ResizeSequence()
{
    presParams.BackBufferWidth = screen.GetWindowWidth() - screen.GetWidthOffsetOfBorder();
    presParams.BackBufferHeight = screen.GetWindowHeight() - screen.GetHeightOffsetOfBorderAndMenu();
    d3dDevice->Reset( &presParams );
}
This is the problem at hand:
Here is the code that makes this abomination of a rectangle:
void Game::ComposeFrame()
{
    gfx.DrawRectangle( 50, 50, screen.GetWindowWidth() - screen.GetWidthOffsetOfBorder() - 100, screen.GetWindowHeight() - screen.GetHeightOffsetOfBorderAndMenu() - 100, 255, 0, 0 );
}
EDIT:
I noticed that on MSDN it says:
Before calling the IDirect3DDevice9::Reset method for a device, an application should release any explicit render targets, depth stencil surfaces, additional swap chains, state blocks, and D3DPOOL_DEFAULT resources associated with the device.
I have now released the vertex buffer and re-created it after the present parameters and device are reset.
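For reference, a minimal sketch of that release/Reset/re-create sequence (not the actual project code; vBuffer, bufferSize and customFVF are placeholder names):
void D3DGraphics::ResizeSequence()
{
    // Release every D3DPOOL_DEFAULT resource before resetting, as MSDN requires
    if( vBuffer ) { vBuffer->Release(); vBuffer = NULL; }

    presParams.BackBufferWidth = screen.GetWindowWidth() - screen.GetWidthOffsetOfBorder();
    presParams.BackBufferHeight = screen.GetWindowHeight() - screen.GetHeightOffsetOfBorderAndMenu();

    HRESULT hr = d3dDevice->Reset( &presParams );
    if( FAILED( hr ) )
    {
        // D3DERR_INVALIDCALL here usually means a default-pool resource is still alive
        return;
    }

    // Re-create the vertex buffer (and any other default-pool resources) afterwards
    d3dDevice->CreateVertexBuffer( bufferSize, 0, customFVF, D3DPOOL_DEFAULT, &vBuffer, NULL );
}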
EDIT:
I put an HRESULT check on my Reset call, and I now manage to trigger an error... but, well, it doesn't really help me: http://i.stack.imgur.com/lqQ5K.jpg
Basically, the issue was that I was being a complete derp. I was putting the window width into my rectangle and then readjusting that size based on oldWidth / newWidth... well, the new width was already the screen size... GRRRRRRR.