Scaling a mesh object with texture - C++

I have a mesh with a tiled texture applied. When I scale that object in DirectX, the texture tiles get scaled along with it.
Is there a way to keep the texture size the same and scale only the object? If the object gets larger, the texture should be reapplied across the new area at its original tile size.

Related

How to upscale a window or renderer in SDL2 for a pixelated look?

I want this pixelated look in SDL2 for all the objects on my screen.
To do this, nearest-neighbor scaling must be set (the default in SDL2), which does not use antialiasing. You can set it explicitly with SDL_SetHint, setting the SDL_HINT_RENDER_SCALE_QUALITY hint to "nearest" (or "0"). If you then render a small texture into a large enough area (much larger than the texture itself), you will see large pixels in the window.
If, on the other hand, you have large textures (as in the linked thread), or you simply want to render the entire frame pixelated, you can do this by rendering the contents of the frame to a low-resolution auxiliary texture (serving as the back buffer) and, after the entire frame has been rendered, rendering that back buffer to the window. The buffer texture will be stretched across the entire window, making the pixelation visible.
I used this method for the Fairtris game, which renders the image at an NES-like resolution. The internal back-buffer texture has a resolution of 256×240 pixels and is rendered into a window of any size, maintaining the required proportions (4:3, so slightly stretched horizontally). However, in that game I used linear scaling to make the image smoother.
To do this you need to:
make sure nearest-neighbor scaling is set,
create a renderer with the SDL_RENDERER_TARGETTEXTURE flag,
create a back-buffer texture with a low resolution (e.g. 256×240) and the SDL_TEXTUREACCESS_TARGET flag.
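Put together, the setup could look like this minimal sketch (window title and sizes are placeholders; error handling omitted):

```cpp
#include <SDL.h>

int main(int, char**) {
    SDL_Init(SDL_INIT_VIDEO);

    // Nearest-neighbor scaling: big, sharp pixels instead of smoothing.
    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "nearest");

    SDL_Window* window = SDL_CreateWindow("Pixelated",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        1280, 720, SDL_WINDOW_RESIZABLE);

    // The renderer must support render-to-texture.
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1,
        SDL_RENDERER_ACCELERATED | SDL_RENDERER_TARGETTEXTURE);

    // Low-resolution back buffer the frame will be rendered into.
    SDL_Texture* backBuffer = SDL_CreateTexture(renderer,
        SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, 256, 240);

    // ... main loop goes here ...

    SDL_DestroyTexture(backBuffer);
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```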
When rendering a frame, you need to:
set the renderer target to the back-buffer texture with SDL_SetRenderTarget,
render everything the frame should contain, using the renderer and the back-buffer size (e.g. 256×240),
set the renderer target back to the window using SDL_SetRenderTarget again.
You can resize the back-buffer texture at any time if you want a smaller area (zoom-in effect, so larger pixels on the screen) or a larger area (zoom-out effect, so smaller pixels on the screen) in the frame. To do this, you will most likely have to destroy and recreate the back-buffer texture with a different size. Alternatively, you can create a big back-buffer texture with an extra margin and, when rendering, use a smaller or larger area of it; this avoids redundant memory operations.
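The margin variant boils down to picking a centered sub-rectangle of the oversized texture. A sketch of that arithmetic (the struct and function names are made up for illustration, with a plain struct standing in for SDL_Rect):

```cpp
#include <cassert>

// Minimal stand-in for SDL_Rect, so the math is self-contained.
struct Rect { int x, y, w, h; };

// Given an oversized back buffer (texW×texH) with the nominal view size
// (viewW×viewH) in its center, compute the source rect to sample for a
// given zoom: zoom > 1 samples a smaller area (bigger pixels on screen),
// zoom < 1 a larger one (smaller pixels), clamped to the texture size.
Rect zoomedSourceRect(int texW, int texH, int viewW, int viewH, double zoom) {
    int w = static_cast<int>(viewW / zoom);
    int h = static_cast<int>(viewH / zoom);
    if (w > texW) w = texW;
    if (h > texH) h = texH;
    // Keep the sampled area centered in the texture.
    return { (texW - w) / 2, (texH - h) / 2, w, h };
}
```

The returned rect is what you would pass as the source rect to SDL_RenderCopy.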
At this point, you have the entire frame in an auxiliary texture that you can render in the window. To render it, use the SDL_RenderCopy function, specifying the renderer handle and the back-buffer texture handle (pass NULL for both rects so the texture is stretched over the entire window area), and finally call SDL_RenderPresent.
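A per-frame sketch of the steps above (assumes renderer and backBuffer were created during setup; the comment in the middle stands in for your own scene code):

```cpp
#include <SDL.h>

// One frame: render into the low-res back buffer, then stretch it
// over the whole window.
void renderFrame(SDL_Renderer* renderer, SDL_Texture* backBuffer) {
    SDL_SetRenderTarget(renderer, backBuffer);   // draw into the 256×240 texture
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);

    // ... draw the frame contents here, in 256×240 coordinates ...

    SDL_SetRenderTarget(renderer, nullptr);      // back to the window
    SDL_RenderCopy(renderer, backBuffer, nullptr, nullptr); // NULL rects: fill window
    SDL_RenderPresent(renderer);
}
```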
If you need to render the frame in the window while respecting the aspect ratio, get the current window size with SDL_GetWindowSize and compute the target rectangle from the aspect ratio of the back-buffer texture and the window proportions (portrait or landscape). Before rendering the back-buffer texture into the window, first clear the window with SDL_RenderClear so that the remaining areas (black bars) are filled with black.
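The aspect-ratio fit itself is plain arithmetic. A sketch (struct and function names are made up for illustration; the plain struct stands in for SDL_Rect):

```cpp
#include <cassert>

// Minimal stand-in for SDL_Rect, so the math is self-contained.
struct Rect { int x, y, w, h; };

// Compute the largest rect with aspect ratio srcW:srcH that fits into a
// winW×winH window, centered (letterboxed or pillarboxed).
Rect fitWithAspect(int srcW, int srcH, int winW, int winH) {
    // Compare the window aspect to the source aspect with
    // cross-multiplication, to stay in integer arithmetic.
    if (winW * srcH >= winH * srcW) {
        // Window is wider than the source: full height, bars on the sides.
        int w = winH * srcW / srcH;
        return { (winW - w) / 2, 0, w, winH };
    } else {
        // Window is taller: full width, bars on top and bottom.
        int h = winW * srcH / srcW;
        return { 0, (winH - h) / 2, winW, h };
    }
}
```

The result is the destination rect for SDL_RenderCopy after the SDL_RenderClear that paints the bars black.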

Resizing an OpenGL texture without changing UV coordinates

I have a texture atlas that gets generated during gameplay. Textures are loaded and unloaded depending on which textures are visible when the player moves around.
The problem is when there are too many textures to fit in the atlas and I need to resize it: if I simply resize the atlas by copying it into a larger texture, I need to re-render everything to update the UV coordinates of the quads.
Is it possible to somehow create an OpenGL texture and define its width and height in UV coordinates to be something other than 1?
One solution would be to use texel coordinates rather than normalized coordinates (assuming you are not going to scale or move the images around within the atlas when you make it bigger). If you can live with their limitations (no wrapping, no mipmaps, only linear filtering), you could just use Rectangle Textures and call it a day. Alternatively, you could simply apply scaling factors to the texture coordinates in your vertex shader, either by passing them as uniforms or directly querying the texture dimensions in the shader using the textureSize() GLSL function.
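The scaling-factor variant is a single multiply per coordinate. A sketch of the math (function names are illustrative; in GLSL the equivalent would be multiplying the UV by a scale uniform, or by oldSize / textureSize(...)):

```cpp
#include <cassert>

// If a quad's UVs were authored against an atlas of size oldW×oldH and the
// atlas is enlarged to newW×newH (keeping every subimage at the same texel
// position), the normalized coordinates only need to shrink by old/new.
double rescaleU(double u, int oldW, int newW) { return u * oldW / newW; }
double rescaleV(double v, int oldH, int newH) { return v * oldH / newH; }
```

Applying this in the vertex shader means the stored per-vertex UVs never have to be touched when the atlas grows.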
Alternatively, you might consider using an Array Texture to hold multiple atlases. Instead of enlarging an existing atlas to make room for more subimages, you could just start a new atlas in another array slice. That way, the texture coordinates of all existing subimages would stay the same…

Can I tile a texture on a RectangleShape?

I have a RectangleShape that can change size in a program of mine (I won't copy it here, as it is too large), and I have assigned a 64×64-pixel texture to it. The shape itself is much larger than the texture, but the texture just gets stretched over the whole shape. Is there a way to make the texture remain 64×64 and tile across the RectangleShape?
Figured out how to do it, I just had to use the line
tex.setRepeated(true);
and the line
rect.setTextureRect(sf::IntRect(0, 0, xRect, yRect));
with xRect and yRect being the dimensions of the rectangle, tex being the texture, and rect being the rectangle object the texture has been assigned to.
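Put together, the tiling setup looks roughly like this (the file name and sizes are placeholders):

```cpp
#include <SFML/Graphics.hpp>

int main() {
    sf::Texture tex;
    tex.loadFromFile("tile64.png");   // a 64x64 tile (placeholder path)
    tex.setRepeated(true);            // let the texture wrap instead of clamp

    // A shape much larger than the texture.
    sf::RectangleShape rect(sf::Vector2f(640.f, 480.f));
    rect.setTexture(&tex);
    // A texture rect larger than 64x64 => the texture repeats to fill it.
    rect.setTextureRect(sf::IntRect(0, 0, 640, 480));

    // ... draw rect in the usual window/event loop ...
    return 0;
}
```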

How to match texture size with window size?

I am trying to draw pixels on a given texture when I move my mouse around, similar to a drawing program, but with layers.
How do I expand my texture so that its dimensions match my window size (e.g. 1280×1080)? This is mainly because I'd like to click anywhere in the window and have it update the pixel there.
In OpenGL, textures get their size set upon creation; you can't change it afterwards. But you can create a new, larger texture and copy the contents of the old one into it. For that, you attach the new texture as a framebuffer object (FBO) color attachment and render the old texture into it.
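One way to do the copy, sketched with a framebuffer blit instead of drawing a quad (assumes an OpenGL 3.0+ context and a loader such as glad already set up; the function name is illustrative):

```cpp
#include <glad/glad.h>  // or your GL loader of choice

// Create a larger texture and copy the old texture's contents into it.
// Returns the new texture; the old one is deleted.
GLuint growTexture(GLuint oldTex, int oldW, int oldH, int newW, int newH) {
    GLuint newTex;
    glGenTextures(1, &newTex);
    glBindTexture(GL_TEXTURE_2D, newTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newW, newH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    GLuint readFbo, drawFbo;
    glGenFramebuffers(1, &readFbo);
    glGenFramebuffers(1, &drawFbo);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, oldTex, 0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, drawFbo);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, newTex, 0);

    // Copy the old contents into the lower-left corner of the new texture.
    glBlitFramebuffer(0, 0, oldW, oldH, 0, 0, oldW, oldH,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &readFbo);
    glDeleteFramebuffers(1, &drawFbo);
    glDeleteTextures(1, &oldTex);
    return newTex;
}
```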

Is it possible to save the current viewport and then re draw the saved viewport in OpenGL and C++ during the next draw cycle?

I want to know if I can save a bitmap of the current viewport in memory and then on the next draw cycle simply draw that memory to the viewport?
I'm plotting a lot of data points as a 2D scatter plot in a 256×256 area of the screen. I could in theory re-render the entire plot each frame, but that would require storing a lot of points (50K-100K), most of which would be redundant, since a 256×256 box only has ~65K pixels.
So instead of redrawing the entire scene at time t, I want to take a snapshot of the scene at t-1, draw that first, and then draw the updates on top of it.
Is this possible? If so how can I do it, I've looked around quite a bit for clues as to how to do this but I haven't been able to find anything that makes sense.
What you can do is render the scene into a texture and then draw this texture first (using a textured full-screen quad) before drawing the additional points. Using FBOs, you can render directly into a texture without any data copies. If FBOs are not supported, you can copy the current framebuffer (after drawing, of course) into a texture using glCopyTex(Sub)Image2D.
If you don't clear the framebuffer when rendering into the texture, it still contains the data of the previous frame, and you only need to render the additional points. Then all you need to do to display it is draw the texture. So you would do something like:
render the additional points for time t into the texture (which already contains the data of time t-1) using an FBO,
display the texture by rendering a textured full-screen quad into the display framebuffer,
t = t+1 -> go to step 1.
You might even use the framebuffer_blit functionality (core since OpenGL 3.0) to copy the FBO data onto the screen framebuffer, which may be faster than drawing the textured quad.
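A sketch of the FBO loop (assumes the FBO with the 256×256 accumulation texture already exists, and drawNewPoints is a placeholder for rendering only the points added since the last frame):

```cpp
#include <glad/glad.h>  // or your GL loader of choice

// One iteration of the accumulate-and-blit loop described above.
void plotFrame(GLuint fbo, void (*drawNewPoints)()) {
    // 1) Render only the new points into the FBO texture. The texture is
    //    never cleared, so it still holds everything drawn up to t-1.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, 256, 256);
    drawNewPoints();

    // 2) Show the accumulated plot by blitting the FBO to the window.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, 256, 256, 0, 0, 256, 256,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```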
Without FBOs it would be something like this (requiring a data copy):
render the texture containing the data of time t-1 into the display framebuffer,
render the additional points for time t on top of it,
capture the framebuffer into the texture (using glCopyTexSubImage2D) for the next loop,
t = t+1 -> go to step 1.
You can render the heavy part to a texture. Then, when rendering the scene, draw that texture first and the changing things on top of it.