Using texture during window resize - opengl

So what I'm trying to do is copy the current front buffer to a texture and use this during a resize to mimic what a normal window resize would do. I'm doing this because the scene is too expensive to render during a resize and I want to provide a fluid resize.
The texture copying is fine, but I'm struggling to work out the maths to get the texture to scale and translate appropriately (I know there will be borders visible when magnifying beyond the largest image dimension).
Can anyone help me?

but I'm struggling to work out the maths to get the texture to scale and translate appropriately
Well, this depends on which axis you base your field of view on. If it's the vertical axis, then increasing the width/height ratio will inevitably lead to letterboxing left and right. Similarly, if your FOV is based on the horizontal axis, increasing height/width will letterbox top and bottom. If the aspect variation goes the other way you get no letterboxing, because no additional picture information is needed.
There's unfortunately no one-size-fits-all solution for this. Either you live with some borders, or you stretch your image without keeping the aspect ratio, or, before resizing the window, you render with a much larger FOV to a square texture of which you show only a subset.
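If you accept the borders, the scale/translate maths is just an aspect-preserving "fit". A minimal sketch, assuming the captured frame should be shown as large as possible inside the resized window (names are illustrative):

```cpp
#include <algorithm>

// Illustrative helper: given a captured frame of texW x texH and a window
// being resized to winW x winH, compute the pixel rectangle that shows the
// frame as large as possible without changing its aspect ratio.
struct DestRect { int x, y, w, h; };

DestRect fitTexture(int texW, int texH, int winW, int winH) {
    // Uniform scale: the smaller of the two ratios wins, so the frame
    // always fits entirely inside the window (borders appear on one axis).
    float scale = std::min(winW / float(texW), winH / float(texH));
    int w = int(texW * scale);
    int h = int(texH * scale);
    // Centre the scaled frame; the remaining margins are the borders.
    return { (winW - w) / 2, (winH - h) / 2, w, h };
}
```

Drawing the saved texture into this rectangle (e.g. a textured quad with the viewport covering the whole window) gives the scale and translation described above.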

Related

Is there an opposite of a viewport in SDL2?

I'm working on an alpha blending effect for which I need to be able to exclude a rectangular area of varying size and position on the screen from being rendered to, basically the opposite of what an SDL_Viewport does. I can't use occlusion methods like SDL_RenderClear or SDL_RenderFillRect for this, since that would interfere with the effect I'm going for; I actually need this area to be rendered to only once per frame.
Is there a better solution than having four constantly updated SDL_Rects acting as a frame around the exclusion zone to simulate something like a negative SDL_Viewport?
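For reference, the four-rect frame described in the question reduces to simple rectangle maths. A sketch with a plain struct standing in for SDL_Rect (names are illustrative):

```cpp
#include <array>

// Stand-in for SDL_Rect so the maths is self-contained.
struct Rect { int x, y, w, h; };

// Given the exclusion zone `hole` inside a screenW x screenH target, return
// the four bands (top, bottom, left, right) that together cover everything
// except the hole. Each band can then be rendered to normally.
std::array<Rect, 4> frameAround(const Rect& hole, int screenW, int screenH) {
    return {{
        { 0, 0, screenW, hole.y },                                        // top band
        { 0, hole.y + hole.h, screenW, screenH - (hole.y + hole.h) },     // bottom band
        { 0, hole.y, hole.x, hole.h },                                    // left band
        { hole.x + hole.w, hole.y, screenW - (hole.x + hole.w), hole.h }  // right band
    }};
}
```

Alternatively, SDL_RenderSetClipRect restricts all subsequent rendering to one rectangle, so rendering with each band as the clip rect in turn achieves the same exclusion without managing extra geometry.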

Using LibGDX (Orthographic) Camera.zoom makes tiles flicker when moving camera?

I have some 64x64 sprites which work fine (no flicker/shuffling) when I move my camera normally. But as soon as I change camera.zoom (which was supposed to be a major mechanic in my game) away from 1f, the sprites flicker every time you move.
For example changing to 0.8f:
Left flicker:
One keypress later: (Right flicker)
So when you move around it's really distracting for gameplay when the map is flickering... (however slightly)
The zoom is a flat 0.8f and I'm currently using camera.translate to move around; I've tried casting to (int) and it still flickered. My texture/sprite is using Nearest filtering.
I understand zooming may change/pixelate my tiles but why do they flicker?
Edit
For reference here is my tilesheet:
It's because of the nearest filtering. Depending on the amount of zoom, certain lines of artwork pixels will straddle lines of screen pixels, so they get drawn one pixel wider than other lines. As the camera moves, the rounding works out differently on each frame of animation, so different lines are drawn wider on each frame.
If you aren't going for a retro low-res aesthetic, you could use linear filtering with mip maps (MipMapLinearLinear or MipMapLinearNearest). Then start with larger resolution art. The first looks better if you are smoothly transitioning between zoom levels, with a possible performance impact.
Otherwise, you could round the camera's position to an exact multiple of the size of a pixel in world units. Then the same enlarged columns will always correspond with the same screen pixels, which would cut down on perceived flickering considerably. You said you were casting the camera translation to an int, but this requires the art to be scaled such that one pixel of art exactly corresponds with one pixel of screen.
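The rounding step above can be sketched in a few lines, assuming you know how wide one art pixel is in world units (the name unitsPerPixel is illustrative):

```cpp
#include <cmath>

// Illustrative sketch: snap a camera coordinate to an exact multiple of the
// size of one art pixel in world units, so the same enlarged art columns
// always land on the same screen pixels from frame to frame.
float snapToPixelGrid(float coord, float unitsPerPixel) {
    return std::round(coord / unitsPerPixel) * unitsPerPixel;
}

// e.g. camera.position.x = snapToPixelGrid(camera.position.x, unitsPerPixel);
```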
This doesn't fix the other problem: certain lines of pixels are drawn wider, so they appear to have greater visual weight than similar nearby lines, as can be seen in both your screenshots. One way to get around that would be to do the zoom as a secondary step, so you can control the appearance with an upscaling shader. Draw your scene to a frame buffer that is sized so one pixel of texture corresponds to one world pixel (with no zoom), and also lock your camera to integer locations. Then draw the frame buffer's contents to the screen and do your zooming at this stage. Use a specialized upscaling shader for drawing the frame buffer texture to the screen to minimize blurriness and avoid nearest-filtering artifacts. There are various shaders for this purpose that you can find by searching online; many have been developed for use with emulators.

CCRenderTexture size and position in Cocos2d-iphone

I was trying to use CCRenderTexture for pixel perfect collision detection, as outlined in this forum posting:
http://www.cocos2d-iphone.org/forum/topic/18522/page/2
The code "as is" works, and I have integrated it with my project, but I am having trouble doing some of the other things discussed:
If I create the renderTexture to be any size less than the screen size, the collision detection doesn't work properly: it seems to show collisions when the sprites are close (<15px) to each other but not actually colliding.
Also I have trouble changing the location of the render texture. Regardless of the position I specify, it seems to go from the bottom left (0,0) to the width and height specified. I followed this post:
http://www.cocos2d-iphone.org/forum/topic/18796
But it doesn't solve my problem. I still get bad collisions as described above. Also, the first post I mentioned contains comments by many users who have resized their textures to 10x10 and repositioned them off screen.
Does anyone have any sample code, so I can see what I am doing wrong? I just use the boilerplate code:
CCRenderTexture* _rt = [CCRenderTexture renderTextureWithWidth:winSize.width height:winSize.height];
_rt.position = CGPointMake(winSize.width*0.5f, winSize.height*0.5f);
[[RIGameScene sharedGameScene]addChild:_rt];
_rt.visible = YES;
I use cocos2d-iphone 1.0.1
You need to move the sprites you intend to draw into the region of the renderTexture before calling draw or visit. Moving the renderTexture does not change the position of _rt.sprite.
The intersection rectangle must be in the region of the renderTexture, otherwise you get inaccurate collisions.
It seems that you cannot change the position of _rt.sprite.
The solution I use is to determine the origin (x,y) of the intersection box and offset both colliding sprites by that much. This ensures that the intersection rectangle has its origin at (0,0). Then I calculate the intersection rectangle again (after ensuring its origin is at (0,0)) and follow the instructions in the forum posting.
When determining the dimensions of the render texture, I ensure that they are at least as large as the intersection rectangle and that the intersection rectangle is fully inside the render texture. This way collisions are accurate. If even part of the intersection box is outside the render texture I get inaccurate collisions, so before drawing into the render texture, make sure you move the sprites you intend to visit so that the intersection box is entirely within it.
Remember to move the sprites back after you're done. :)
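The offset step described above is plain rectangle maths. A sketch with illustrative types (not the cocos2d API):

```cpp
#include <algorithm>

// Illustrative axis-aligned rectangles standing in for sprite bounding boxes.
struct Box { float x, y, w, h; };

// Intersection of two boxes; returns false when they do not overlap.
bool intersect(const Box& a, const Box& b, Box& out) {
    float x1 = std::max(a.x, b.x);
    float y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w);
    float y2 = std::min(a.y + a.h, b.y + b.h);
    if (x2 <= x1 || y2 <= y1) return false;
    out = { x1, y1, x2 - x1, y2 - y1 };
    return true;
}
// Offsetting both sprites by (-out.x, -out.y) before visiting them into the
// render texture moves the intersection rect's origin to (0,0), keeping it
// fully inside a render texture sized at least out.w x out.h.
```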

Zooming a 2D game on a mobile device?

I'm currently developing a 2D mobile game, and I want the camera to be able to pan around and zoom in and out. One method would be to render the entire level to a backbuffer, and simply draw a region and scale it up/down to the device screen. Angry Birds has the effect I'm looking for, where the user can zoom out and see the whole level.
This would take huge amounts of memory to have a surface for the entire level. Another method would be to scale down each sprite and draw it in the view, but this may take more rendering power than resizing an entire surface.
Another option is similar to the first where I could have a large fixed surface to render the images on instead of having a surface for the whole level. Again, this would take huge amounts of memory especially if the user wanted to zoom out really far.
I've never really developed for mobile before. Would scaling each individual image be too expensive? Should I scale each sprite and then render, or render a large portion of the map to a surface and downscale/upscale that surface?
You shouldn't have to mess with the size of the buffer you're drawing to. What you can do instead is adjust the viewport size, thereby fitting more of the world into your viewport (and thus onto the buffer). Sprites will of course be smaller, and if it becomes too slow you could implement some level of detail (LOD), so that, for example, when the viewport is larger you draw each sprite using a smaller, pre-loaded image. What would still make the solution slow/"inefficient" is that you'll be drawing more sprites, but that's not something you can get around.
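That idea can be sketched as camera maths, assuming a camera described by a centre point and a base view size (names are illustrative):

```cpp
// Illustrative camera maths: instead of resizing any buffer, derive the
// world-space rectangle the camera sees from a zoom factor, where zoom = 1
// is the normal view and zoom = 2 fits twice as much world on each axis.
struct WorldView { float x, y, w, h; };

WorldView viewForZoom(float centreX, float centreY,
                      float baseW, float baseH, float zoom) {
    float w = baseW * zoom;
    float h = baseH * zoom;
    return { centreX - w * 0.5f, centreY - h * 0.5f, w, h };
}
// Everything inside this rectangle is drawn scaled down to the screen; only
// the projection changes, never the buffer sizes.
```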

How to clip rendering in OpenGL (C++)

How to clip rendering in OpenGL (simple rectangle area)?
Please post a C++ example.
What you probably need is OpenGL's scissor mechanism.
It clips rendering of pixels that do not fall into a rectangle defined by x, y, width and height parameters.
Note also that, when enabled, this OpenGL state affects the glClear command by restricting the area cleared.
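A minimal C++ example of the scissor test (it assumes a current GL context created elsewhere, e.g. with GLFW or GLUT):

```cpp
#include <GL/gl.h>

// Restrict all rendering (including glClear) to a rectangle given in window
// coordinates; x, y is the rectangle's lower-left corner.
void beginClip(int x, int y, int width, int height) {
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, width, height);
}

// Restore normal, unclipped rendering.
void endClip() {
    glDisable(GL_SCISSOR_TEST);
}

// Usage: beginClip(100, 100, 200, 150); /* draw the scene */ endClip();
```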
If you only want to display a specific rectangle, you need a combination of something like glFrustum or glOrtho along with glViewport. It's actually glViewport that sets the clipping rectangle; glFrustum, glOrtho (gluPerspective, etc.) then map some set of real coordinates to that rectangle. Typically you hardly notice glViewport, because it's normally set to the entire area of whatever window you're using, and what you change is the mapping to get different views in the window.
If you just adjust glFrustum (for example) by itself, the display area on the screen stays the same and you just change the mapping, so you still fill the entire window area and basically move the virtual camera around, zooming in or out (etc.) on the "world" being displayed. Conversely, if you just adjust glViewport, you'll display exactly the same data, but into a smaller rectangle.
To "clip" the data to the smaller rectangle, you need to adjust both at once, in more or less "opposite" directions: as your viewport rectangle gets smaller, you zoom in your view frustum to compensate.
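To make the "adjust both at once" step concrete, here is a sketch of the maths for an orthographic projection (names are illustrative):

```cpp
// Full window: winW x winH pixels mapped by glOrtho(l, r, b, t, ...).
// To clip the view to a sub-viewport (vx, vy, vw, vh) while keeping the same
// world-to-pixel mapping, shrink the ortho bounds by the same proportions.
struct OrthoBounds { float l, r, b, t; };

OrthoBounds orthoForSubViewport(OrthoBounds full, int winW, int winH,
                                int vx, int vy, int vw, int vh) {
    float worldPerPxX = (full.r - full.l) / winW;
    float worldPerPxY = (full.t - full.b) / winH;
    return { full.l + vx * worldPerPxX,
             full.l + (vx + vw) * worldPerPxX,
             full.b + vy * worldPerPxY,
             full.b + (vy + vh) * worldPerPxY };
}
// Then: glViewport(vx, vy, vw, vh);
//       glOrtho(o.l, o.r, o.b, o.t, nearVal, farVal);
```

The result is that the sub-rectangle shows exactly the world region it covered under the full mapping, i.e. a clipped view rather than a shrunken copy.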