SDL2 (C++): How to render an image smaller

I have finally started getting used to SDL 2's basic rendering functions, and I have stumbled across a problem that I believe the community can answer. In my code I generate some text and, using some code from a tutorial (namely Lazy Foo's), load that text as a texture. This texture has a width and a height based on the font size and how much text was entered. Another function I use loads a square made of fancy bordering that I wish to use as a menu. This square is 200x200. As an example, if the text texture is 100x160, I want the square to render as perhaps a 120x180 image (essentially compressing it to a similar size as the text texture).
tl;dr:
I have 200x200 square.
I have 100x160 text texture
I want to render 200x200 square as a 120x160 square and render 100x160 text inside square.
***loadFromRenderedText takes a TTF font, a string, and a color (RGBA) and creates an image texture from the string -> it generates its own width/height
menuTextTexture.loadFromRenderedText(menuFont, "Info Item Skill Back",menuTextColor);
menuSize.x = 0;
menuSize.y = 0;
menuSize.w = menuTextTexture.getWidth() + boarderW;
menuSize.h = menuTextTexture.getHeight() + boarderW;
***menuSize is an SDL_Rect
menuBoxTexture.TextRender(XmenuRenderLocX, XmenuRenderLocY, &menuSize, 0, NULL, SDL_FLIP_NONE);
menuTextTexture.render(XmenuRenderLocX+boarderW, XmenuRenderLocY+boarderW);
TextRender and render do the same thing, except render uses a scaling factor to multiply the clip size to make it bigger (which I leave blank, so the clip is NULL and the basic width/height are used). For TextRender, I specify the render dimensions by passing the menuSize SDL_Rect. This takes the 200x200 square and renders only the 120x160 portion of the square at (XmenuRenderLocX, XmenuRenderLocY)... thus essentially cropping the square, which is not what I want. I want to resize the square.
Any help will be greatly appreciated

Originally, I was using the LTexture::render function provided with Lazy Foo's tutorial. See the code below:
void LTexture::render( int x, int y, SDL_Rect* clip, double angle, SDL_Point* center, SDL_RendererFlip flip )
{
    //Set rendering space and render to screen
    SDL_Rect renderQuad = { x, y, mWidth, mHeight };

    //Set clip rendering dimensions
    if( clip != NULL )
    {
        renderQuad.w = SCALE_SIZE * ( clip->w );
        renderQuad.h = SCALE_SIZE * ( clip->h );
    }
    else if( mTexture == NULL )
    {
        printf( "Warning: Texture to Render is NULL!\n" );
    }

    //Render to screen
    SDL_RenderCopyEx( gRenderer, mTexture, clip, &renderQuad, angle, center, flip );
}
But because I didn't fully understand until now how exactly the function renders, I wasn't actually telling it to render at new dimensions (except in the case where I magnify everything with SCALE_SIZE).
I made a new function for more control:
void LTexture::DefinedRender( SDL_Rect* Textureclip, SDL_Rect* renderLocSize, double angle, SDL_Point* center, SDL_RendererFlip flip )
{
    //Warn if there is no texture to render
    if( mTexture == NULL )
    {
        printf( "Warning: Texture to Render is NULL!\n" );
    }

    //Render to screen: renderLocSize sets both the position and the output size
    SDL_RenderCopyEx( gRenderer, mTexture, Textureclip, renderLocSize, angle, center, flip );
}
And now everything works like I want it to.
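For illustration, here is a minimal sketch of how the new function can be called for the menu described above (the identifiers are the ones from the question; the padding is just an example):

//Destination rect: where to draw and how large the 200x200 border texture
//should come out (text size plus a little border padding)
SDL_Rect menuDst = { XmenuRenderLocX, XmenuRenderLocY,
                     menuTextTexture.getWidth()  + boarderW,
                     menuTextTexture.getHeight() + boarderW };

//NULL clip = use the whole source texture; SDL_RenderCopyEx then scales the
//200x200 square down into menuDst instead of cropping it
menuBoxTexture.DefinedRender(NULL, &menuDst, 0.0, NULL, SDL_FLIP_NONE);

//Text rendered inside the resized border
menuTextTexture.render(XmenuRenderLocX + boarderW, XmenuRenderLocY + boarderW);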

Related

Drawing an OpenGL overlay using SwapBuffers interception

I'm trying to make a library that lets me draw my own overlay on top of the content of a game window that uses OpenGL, by intercepting the call to the SwapBuffers function. For the interception I use Microsoft Detours.
BOOL WINAPI __SwapBuffers(HDC hDC)
{
    HGLRC oldContext = wglGetCurrentContext();
    if (!context) // Global variable
    {
        context = wglCreateContext(hDC);
    }
    wglMakeCurrent(hDC, context);

    // Drawing
    glRectf(0.1F, 0.5F, 0.2F, 0.6F);

    wglMakeCurrent(hDC, oldContext);
    return _SwapBuffers(hDC); // Call the original SwapBuffers
}
This code works, but occasionally, when I move my mouse, my overlay blinks. Why? Some forums have said that such an implementation can significantly reduce FPS; is there a better alternative? Also, how do I correctly translate a normal window position to an OpenGL position? For example, with width = 1366, x = 1366 maps to 1 and x = 0 maps to -1. How do I get the value for, say, 738? And what about the height?
To translate a screen coordinate to a normalized coordinate you need to know the screen width and screen height; it is a linear mapping from [0, screenwidth] to [-1, 1] and from [0, screenheight] to [-1, 1]. It is as simple as the following:
int screenwidth, screenheight;
//...
screenwidth = 1366;
screenheight = 738;
//...
float screenx, screeny; // the pixel coordinate you want to convert
float x = (screenx/(float)screenwidth)*2-1;
float y = (screeny/(float)screenheight)*2-1;
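As a concrete illustration, here is a small helper that applies the mapping above (the 683 and 738 values are only sample inputs):

// Map a pixel coordinate to an OpenGL normalized device coordinate in [-1, 1]
static float toNdc(float pixel, float extent)
{
    return (pixel / extent) * 2.0f - 1.0f;
}
// Example: on a 1366x738 window, toNdc(683, 1366) == 0.0f (the horizontal centre)
// and toNdc(738, 738) == 1.0f (one vertical edge, depending on y orientation).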
Problem with z = 0:
glRect renders at z = 0, which is a problem because the plane would be infinitely near: OpenGL still treats the geometry as being in world space, and screen space lies at (x, y, 1) in untransformed world space. OpenGL almost always works with 3D coordinates.
There are two ways to tackle this problem:
you should prefer functions that take a z component, because OpenGL does not render correctly at z = 0 (z = 1 corresponds to the normalized screen space),
or you add a glTranslatef(0, 0, 1); to get to the normalized screen space.
Remember to disable depth testing when rendering 2D in screen space, and to reset the modelview matrix.
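A minimal sketch of the overlay pass that follows this advice, assuming the legacy fixed-function pipeline used in the question (the rectangle coordinates are placeholders):

glPushAttrib(GL_ENABLE_BIT);
glDisable(GL_DEPTH_TEST);        // 2D overlay: ignore the game's depth buffer

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();                // reset whatever modelview state the game left behind
glTranslatef(0.0f, 0.0f, 1.0f);  // move onto the normalized screen space, as noted above

glColor3f(1.0f, 0.0f, 0.0f);
glRectf(0.1f, 0.5f, 0.2f, 0.6f); // the overlay rectangle, now in normalized coordinates

glPopMatrix();
glPopAttrib();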

How I can add a shadow to a texture on SDL2?

So, I'm making a 2D game and I have some textures. I would like some of them to drop a shadow; is there something like CSS's drop-shadow for SDL2?
Render a slightly larger semi-transparent gray quad, offset slightly and drawn first so that it sits behind the texture, then render the texture itself. If you want rounded corners, use a shader that increases the alpha as you get further from the corners.
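A minimal sketch of that approach in plain SDL2 (renderer, texture and texRect are placeholders; the offset and alpha are arbitrary):

SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_BLEND);
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 96);                  // translucent shadow colour
SDL_Rect shadowRect = { texRect.x + 8, texRect.y + 8, texRect.w, texRect.h };
SDL_RenderFillRect(renderer, &shadowRect);                      // the "shadow" quad, drawn first
SDL_RenderCopy(renderer, texture, NULL, &texRect);              // the actual texture on top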
Since no one has mentioned it yet, here it is:
int SDL_SetTextureColorMod(SDL_Texture* texture,
Uint8 r,
Uint8 g,
Uint8 b)
https://wiki.libsdl.org/SDL_SetTextureColorMod
This function multiplies a texture's color channels when it is copied to the SDL_Renderer*. Using it with 0 for the r, g and b arguments makes your texture pitch black but does not affect the alpha, so the texture keeps its shape (as in the case of a transparent PNG). You just have to copy that shadow before (= behind) your texture, with a slight offset. You can also change the overall transparency of the shadow with SDL_SetTextureAlphaMod(SDL_Texture* texture, Uint8 a).
Just don't forget to set the values back to 255 when you're done.
Code example:
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
SDL_RenderClear(renderer);
// [...] draw background, etc... here
SDL_Rect characterRect;
// [...] fill the rect however you want
SDL_Rect shadowRect(characterRect);
shadowRect.x += 30; // offsets the shadow
shadowRect.y += 30;
SDL_SetTextureColorMod(characterTexture, 0, 0, 0); // makes the texture black
SDL_SetTextureAlphaMod(characterTexture, 191); // makes the texture 3/4 of its original opacity
SDL_RenderCopy(renderer, characterTexture, NULL, &shadowRect); // draws the shadow
SDL_SetTextureColorMod(characterTexture, 255, 255, 255); // sets the values back to normal
SDL_SetTextureAlphaMod(characterTexture, 255);
SDL_RenderCopy(renderer, characterTexture, NULL, &characterRect); // draws the character
// [...] draw UI, additional elements, etc... here
SDL_RenderPresent(renderer);

SDL draws images blurred without scaling

I'm working on a project in C++ using SDL (Simple DirectMedia Layer), but when I draw an SDL_Texture to the screen it is blurred even though it is not scaled.
How the image is loaded:
SDL_Surface* loadedSurface = IMG_Load("image.png");
SDL_Texture* gImage = SDL_CreateTextureFromSurface( gRenderer, loadedSurface);
How the image is drawn to the screen:
SDL_Rect renderQuad = { x, y, width, height };
SDL_RenderCopy(gRenderer, gImage , NULL, &renderQuad );
See the comparison image: the left side is from the program and the right is the original.
Is there a parameter a forgot to set? And is it normal that SDL does this?
I'm using SDL 2.0 32-bit on a Windows 8.1 64-bit machine.
Ahead of your call to SDL_CreateTextureFromSurface try calling:
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "0");
According to the SDL wiki, this should affect how SDL_CreateTextureFromSurface interpolates the surface; "0" should result in nearest-neighbour sampling, removing the blurring effect you are seeing.
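For reference, a minimal sketch of where the hint goes relative to the loading code from the question (identifiers reused from the question):

SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "0");  // must be in effect before the texture is created
SDL_Surface* loadedSurface = IMG_Load("image.png");
SDL_Texture* gImage = SDL_CreateTextureFromSurface(gRenderer, loadedSurface);
SDL_FreeSurface(loadedSurface);                   // the surface is no longer needed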

How to draw on just a portion of the screen with SpriteBatch in libgdx?

When I do this:
SpriteBatch spriteBatch = new SpriteBatch();
spriteBatch.setProjectionMatrix(new Matrix4().setToOrtho(0, 320, 0, 240, -1, 1));
spriteBatch.begin();
spriteBatch.draw(textureRegion, 0, 0);
spriteBatch.end();
SpriteBatch will draw the textureRegion onto the 320x240 coordinate system I have specified, over the whole screen. Say I want to draw with the same 320x240 coordinate system but only on the left half of the screen (which means everything will be scaled down horizontally on the left side, leaving the right half of the screen black). How can I do that?
You're going to want to use the ScissorStack. Effectively, you define a rectangle that you want to draw in. All drawing will be in the rectangle that you defined.
Rectangle scissors = new Rectangle();
Rectangle clipBounds = new Rectangle(x,y,w,h);
ScissorStack.calculateScissors(camera, spriteBatch.getTransformMatrix(), clipBounds, scissors);
ScissorStack.pushScissors(scissors);
spriteBatch.draw(...);
spriteBatch.flush();
ScissorStack.popScissors();
This will limit rendering to within the bounds of the rectangle "clipBounds".
It is also possible to push multiple rectangles. Only the pixels of the sprites that are within all of the rectangles will be rendered.
From http://code.google.com/p/libgdx/wiki/GraphicsScissors
Before rendering the batch, you can set the viewport to draw on a specific screen area. The important line is:
Gdx.gl.glViewport(x, y, w, h);
The viewport usually starts at x = 0 and y = 0 and extends to the full width and height of the screen. If we want to see only a part of that original viewport, we need to change both the size and the starting position. To draw only on the left half of the screen, use:
x = 0;
y = 0;
w = Gdx.graphics.getWidth()/2;
h = Gdx.graphics.getHeight();
I found the solution here; I originally gave this answer to a slightly more complicated problem, but the technique is the same.
To focus on any different portion of the viewport, simply choose x, y, w, and h accordingly. If you're going to do any more rendering in the normal fashion, make sure to reset the viewport with the original x, y, w, and h values.
Perhaps I am misunderstanding the question, but could you not just double the viewport width, setting it to 640 instead of 320?
SpriteBatch spriteBatch = new SpriteBatch();
spriteBatch.setProjectionMatrix(new Matrix4().setToOrtho(0, 640, 0, 240, -1, 1));
spriteBatch.begin();
spriteBatch.draw(textureRegion, 0, 0);
spriteBatch.end();
You could either:
double the viewport width of the SpriteBatch, or
use a Sprite, set its width scale to 0.5f (be careful about the origin), and use its draw(SpriteBatch) method to draw it.

Drawing points in top of texture openGL

I have a texture drawn in a GLControl and I want to draw points on top of it. Instead, the whole texture ends up tinted with the colour of the point I want to draw. I guess I have to disable the texturing state and enable point drawing, but I can't reach the solution...
Here is the draw function:
Basically, the point to draw is ROI[0], but instead of drawing just the point I get the result shown below (the image is grayscale before "the point" is drawn).
private: void drawImg(int img){
    int w = this->glControl_create_grid->Width;
    int h = this->glControl_create_grid->Height;

    GL::MatrixMode(MatrixMode::Projection);
    GL::LoadIdentity();
    GL::Ortho(0, w, 0, h, -1, 1); // Bottom-left corner pixel has coordinate (0, 0)
    GL::Viewport(0, 0, w, h);     // Use all of the glControl painting area
    GL::Clear(ClearBufferMask::ColorBufferBit | ClearBufferMask::DepthBufferBit);
    GL::ClearColor(Color::LightGray);

    GL::MatrixMode(MatrixMode::Modelview);
    GL::LoadIdentity();

    GL::Enable(EnableCap::Texture2D);
    GL::BindTexture(TextureTarget::Texture2D, img);
    OpenTK::Graphics::OpenGL::ErrorCode error = GL::GetError();

    GL::Begin(BeginMode::Quads);
    GL::TexCoord2(0, 0); GL::Vertex2(0, h);
    GL::TexCoord2(1, 0); GL::Vertex2(w, h);
    GL::TexCoord2(1, 1); GL::Vertex2(w, 0);
    GL::TexCoord2(0, 1); GL::Vertex2(0, 0);
    GL::End();
    GL::Disable(EnableCap::Texture2D);

    if (ROI[0].x != 0 || ROI[0].y != 0){
        GL::Color3(Color::Red);
        GL::Begin(BeginMode::Points);
        GL::Vertex2(ROI[0].x, ROI[0].y);
        GL::End();
    }
}
What should I change in my code? I can't seem to achieve it....
I found the answer. It seems that the current color is also applied to textures when they are drawn, so I just needed to add GL::Color3(Color::White) before drawing the texture.
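In terms of the draw routine above, the fix looks roughly like this (only the relevant part shown):

GL::Color3(Color::White);  // reset the current colour so the texture is not tinted
GL::Enable(EnableCap::Texture2D);
GL::BindTexture(TextureTarget::Texture2D, img);
// ... textured quad exactly as before ...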