Is there an opposite of a viewport in SDL2? - c++

I'm working on an alpha blending effect for which I need to be able to exclude a rectangular area of varying size and position on the screen from being rendered to, basically the opposite of what an SDL_Viewport does. I can't use occlusion methods like SDL_RenderClear or SDL_RenderFillRect for this, since that would interfere with the effect I'm going for; I actually need this area to be rendered to only once per frame.
Is there a better solution than having four constantly updated SDL_Rects acting as a frame around the exclusion zone to simulate something like a negative SDL_Viewport?
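For illustration, here is one way to read that four-rect idea, as a minimal sketch using SDL_RenderSetClipRect. renderScene() is a hypothetical function standing in for whatever draw calls are involved, and the hole is assumed to lie fully inside the screen; note this submits the scene four times, which may be exactly the cost the question is trying to avoid.

```cpp
#include <SDL.h>

void renderScene(SDL_Renderer* ren); // hypothetical: issues all the draw calls

// Render everything except the rectangular 'hole' by clipping to the four
// bands that surround it; pixels inside 'hole' are never touched.
void renderAroundHole(SDL_Renderer* ren, const SDL_Rect& hole, int screenW, int screenH)
{
    const SDL_Rect bands[4] = {
        { 0, 0, screenW, hole.y },                                        // above
        { 0, hole.y + hole.h, screenW, screenH - (hole.y + hole.h) },     // below
        { 0, hole.y, hole.x, hole.h },                                    // left
        { hole.x + hole.w, hole.y, screenW - (hole.x + hole.w), hole.h }  // right
    };
    for (const SDL_Rect& band : bands) {
        SDL_RenderSetClipRect(ren, &band);
        renderScene(ren);
    }
    SDL_RenderSetClipRect(ren, nullptr); // restore unclipped rendering
}
```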

Related

OpenGL Skybox (CubeMap) movement along with camera movement

I have been following some OpenGL tutorials for an open-world project I am currently working on, where the goal is to have an open-world scene with several objects (mountains, etc.) placed inside a skybox.
I would like to ask if there is any way for the camera to move freely inside the skybox, "interacting" with potential objects in it, but without actually getting out of the boundaries of the box. In the tutorials the translation of the camera is removed, so it can only look around without moving.
Is it common practice to actually move the camera inside the skybox, or should I somehow move the skybox along with the camera, thus never reaching the boundaries of the box?
A skybox is usually rendered without any offset relative to the camera, because its content represents things that are very far away (many times farther than any actual camera movement), like stars or mountains that are kilometers away. So even if you move, say, 100 m in any direction, the rendered result does not change at all (or changes so little that it cannot be noticed).
If your skybox contains things you want to move towards, that is doable, but you need to limit the movement so you don't get too close, as that would result in pixelation of the skybox and eventually even crossing it. That can be done with the game terrain (you cannot jump over the boundary mountains or swim too far from an island, etc.).
Another option is to limit the camera's distance from the skybox center to some safe value. If it goes farther than the limit, move the skybox to match the distance again... That way you can move nearer to or farther from the skybox up to a point (it gets bigger/smaller on the close/far side) and never cross it, without any actual restrictions on the camera position.
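A minimal sketch of that distance clamp (Vec3 and the function name are my own illustration, not from the original answer):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// If the camera has drifted farther than maxDist from the skybox center,
// drag the skybox along so the distance becomes exactly maxDist again.
void keepSkyboxNear(Vec3& skyCenter, const Vec3& camPos, float maxDist)
{
    Vec3 d = { camPos.x - skyCenter.x, camPos.y - skyCenter.y, camPos.z - skyCenter.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len > maxDist) {
        float t = (len - maxDist) / len; // fraction of the offset to absorb
        skyCenter.x += d.x * t;
        skyCenter.y += d.y * t;
        skyCenter.z += d.z * t;
    }
}
```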
First things first: when you are rendering a skybox, you generally don't render an actual box.
The skybox contains things that generally never change, or change only very slowly, and that are so far away the player will never reach them. The skybox is stored in a cube map texture and rendered via a full-screen rectangle. In the shader you use OpenGL's cube map sampling, sampling the map with the per-pixel eye vector.
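To make that concrete, here is a rough sketch of the fragment-shader side (GLSL embedded as a C++ string; uInvViewProj is an assumed uniform carrying the inverse view-projection matrix, and vNdc the interpolated full-screen rectangle position):

```cpp
// Unproject each pixel of the full-screen rectangle to get a world-space
// eye ray, then sample the cube map with it (the direction need not be
// normalized for cube map lookups).
const char* skyboxFragmentShader = R"(
    #version 330 core
    in vec2 vNdc;                 // full-screen rectangle position in [-1, 1]
    out vec4 fragColor;
    uniform samplerCube uSky;
    uniform mat4 uInvViewProj;    // inverse of projection * view
    void main() {
        vec4 nearP = uInvViewProj * vec4(vNdc, -1.0, 1.0);
        vec4 farP  = uInvViewProj * vec4(vNdc,  1.0, 1.0);
        vec3 eye   = farP.xyz / farP.w - nearP.xyz / nearP.w;
        fragColor  = texture(uSky, eye);
    }
)";
```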
If the skybox is dynamic, for example with a dynamic time of day, it is only re-rendered every couple of frames, or only when needed.
A while back I wrote an article on how to do it: GLSL Skybox (you will need to update the code to a modern OpenGL version, though...)

Opengl, blend only when destination pixels' alpha value is positive

I'm searching for a function/way to make blending work only when the destination pixels' (i.e. the back buffer's) alpha value is greater than 0.
What I'm looking for is something like glAlphaFunc, which tests the incoming fragments, but in my case I want to test the fragments already in the back buffer.
Any ideas?
Thank you in advance
PS: I cannot do a pixel-by-pixel test in the drawing function, because it is set as a callback function for the user.
Wait, your answer is somewhat confusing, but I think what you're looking for is something like this: opengl - blending with previous contents of framebuffer
Sorry for this, but I think it's better to answer than to comment.
So, let me explain better by giving an example.
Let's say we have to draw something (whatever the user wants, like a table) and after that (before swapping the buffers, of course) we must draw the "saved" textures over it using blending.
Let's say we have to draw two transparent boxes. If those boxes are to be saved in a different texture, this can be done by:
Clear the screen with (0, 0, 0, 0)
Set the blend function to (GL_ONE, GL_ZERO)
Draw the box
Save it to a texture.
Now, whenever the user wants to redraw them all, he simply draws the main theme (the table) and over it draws the textures using the blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
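A minimal sketch of those steps in legacy OpenGL (drawBox(), drawTable() and drawTexturedQuad() are hypothetical helpers, and boxTex, texW, texH are assumed to exist; saving uses glCopyTexImage2D for brevity, an FBO would work as well):

```cpp
// 1. Clear the screen with (0, 0, 0, 0).
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);

// 2. Set the blend function so the source simply replaces the destination.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ZERO);

// 3. Draw the box.
drawBox();

// 4. Save the result to a texture.
glBindTexture(GL_TEXTURE_2D, boxTex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, texW, texH, 0);

// Later, whenever the user redraws: main scene first, then the saved texture.
drawTable();
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTexturedQuad(boxTex);
```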
This works fine. But if the user wants to save both boxes in one texture and the boxes overlap, how can we save the blending of those two boxes without blending them with the "cleared" background?
Summarizing, the final image of the whole painting should be a table with two boxes (say, a yellow and a green one) over it, blended with (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).

Methods of zooming in/out with openGL (C++)

I am wondering what kinds of methods are commonly used for zooming in/out.
In my current project I have to display millions of 2D rectangles on the screen, and I am using a fixed viewport and changing the gluOrtho2D parameters when I have to zoom in/out.
I am wondering if this is a good way of doing it, and what other solutions I could use.
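For reference, a minimal sketch of the zoom style the question describes (all names are illustrative): the viewport stays fixed and only the orthographic bounds shrink or grow around a center point, with zoom > 1 zooming in.

```cpp
void applyZoom(float cx, float cy, float halfW, float halfH, float zoom)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(cx - halfW / zoom, cx + halfW / zoom,   // left, right
               cy - halfH / zoom, cy + halfH / zoom);  // bottom, top
    glMatrixMode(GL_MODELVIEW);
}
```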
I also have another question, which I think is related to how I should zoom in/out.
As I said, I am currently using a fixed viewport and changing the gluOrtho2D parameters in my code, and I assumed that OpenGL would be able to figure out which rectangles are off-screen and not render them. However, it seems that OpenGL is redrawing all the rectangles again and again. The rendering time when viewing millions of rectangles (zoomed out) equals that of viewing hundreds of rectangles (zoomed into a particular area), which is the opposite of what I expected. I am wondering whether this is related to the zooming method I used, or whether I am missing something important.
i.e. I am using VBOs while rendering the rectangles.
"...and I assumed that OpenGL will be able to figure out which rectangles are out of the screen..."
You assumed wrong.
"...and not render them."
OpenGL is a rather dumb drawing API. There's no such thing as a scene in OpenGL. All it does is color pixels in the framebuffer, one point, line or triangle at a time. When geometry lies outside the viewport, it still has to be processed up to the point where it gets clipped (and then discarded).
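So the culling has to happen on your side. A minimal sketch, assuming axis-aligned rectangles and the ortho bounds as the view (the Rect/View types are illustrative; for millions of rectangles a spatial index such as a quadtree would avoid even touching most of them):

```cpp
#include <vector>

struct Rect { float x, y, w, h; };
struct View { float left, right, bottom, top; };

// Standard overlap test between a rectangle and the visible region.
bool visible(const Rect& r, const View& v)
{
    return r.x < v.right && r.x + r.w > v.left &&
           r.y < v.top   && r.y + r.h > v.bottom;
}

// Collect only the on-screen rectangles; upload this batch to the VBO.
std::vector<Rect> cullRects(const std::vector<Rect>& all, const View& v)
{
    std::vector<Rect> batch;
    for (const Rect& r : all)
        if (visible(r, v))
            batch.push_back(r);
    return batch;
}
```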

Perfect filled triangle rendering algorithm?

Where can I get an algorithm to render filled triangles? Edit3: I can't use OpenGL to render it. I need a per-pixel algorithm for this.
My goal is to render a regular polygon from triangles, so if I use this triangle-filling algorithm, the edges of adjacent triangles must not overlap (or leave gaps between them), because that would result in rendering errors if I use, for example, XOR to render the pixels.
Therefore, the render quality should match OpenGL rendering: I should be able to define, for example, a circle with N vertices, and it should render as a correct circle at any size; so it should not use only integer coordinates the way some triangle-filling algorithms do.
I need the ability to control the triangle filling myself, adding my own logic for how each individual pixel is rendered. So I need the bare code behind the rendering, to have full control over it. It should be efficient enough to draw tens of thousands of triangles without waiting more than a second or so. (I'm not sure how fast it can be at best, but I hope it won't take more than 10 seconds.)
Preferred language would be C++, but I can convert other languages to my needs.
If there are no free algorithms for this, where can I learn to build one myself, and how hard would that actually be? (me = math noob)
I added the OpenGL tag since this is somewhat related to it.
Edit2: I tried the algorithm here: http://joshbeam.com/articles/triangle_rasterization/ but it seems to be slightly broken; here is a circle made of 64 triangles rendered with it:
But if you zoom in, you can see the errors:
Explanation: there are 2 pixels overlapping the neighboring triangle's color, which should not happen! (Otherwise transparency, XOR, and similar effects will produce bad rendering.)
It seems the errors are more visible on smaller circles. This is not acceptable if I want to use an XOR effect on the pixels.
What can I do to fix this, so that the triangles are filled perfectly, without overlapping pixels or gaps?
Edit4: I noticed that rendering very small circles doesn't work very well. I realised this was because the coordinates were indeed converted to integers. How can I treat the coordinates as floats and make it render the circle precisely and perfectly, just like OpenGL does? Here is an example of how bad the small circles look:
Notice how perfect the OpenGL render is! THAT is what I want to achieve, without using OpenGL. NOTE: I don't just want to render a perfect circle, but any polygon shape.
There's always the half-space method.
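For illustration, a compact sketch of the half-space method with a fill rule, which is what guarantees that two triangles sharing an edge neither overlap nor leave gaps: a pixel center lying exactly on a shared edge is assigned to exactly one of the two triangles. Coordinates stay floating-point and sampling happens at pixel centers. putPixel() is a hypothetical callback where your own per-pixel logic (XOR etc.) would go, and the edge-ownership predicate may need flipping depending on your axis conventions.

```cpp
#include <algorithm>
#include <cmath>
#include <utility>

void putPixel(int x, int y); // hypothetical: your own per-pixel logic

// Signed measure (proportional to twice the triangle area) telling which
// side of the directed edge a->b the point p lies on.
static float edgeFn(float ax, float ay, float bx, float by, float px, float py)
{
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

void fillTriangle(float x0, float y0, float x1, float y1, float x2, float y2)
{
    // Normalize the winding so the interior lies on the non-negative side
    // of all three directed edges.
    if (edgeFn(x0, y0, x1, y1, x2, y2) < 0.0f) {
        std::swap(x1, x2);
        std::swap(y1, y2);
    }

    int minX = (int)std::floor(std::min({x0, x1, x2}));
    int maxX = (int)std::ceil (std::max({x0, x1, x2}));
    int minY = (int)std::floor(std::min({y0, y1, y2}));
    int maxY = (int)std::ceil (std::max({y0, y1, y2}));

    // Fill rule: a pixel center exactly on an edge counts as inside only for
    // "top" or "left" edges. The predicate is antisymmetric in the edge
    // direction, so a shared edge is owned by exactly one adjacent triangle.
    auto ownsEdge = [](float ax, float ay, float bx, float by) {
        return (ay == by && bx < ax) || (by < ay);
    };

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            float px = x + 0.5f, py = y + 0.5f; // sample at the pixel center
            float w0 = edgeFn(x1, y1, x2, y2, px, py);
            float w1 = edgeFn(x2, y2, x0, y0, px, py);
            float w2 = edgeFn(x0, y0, x1, y1, px, py);
            bool inside =
                (w0 > 0.0f || (w0 == 0.0f && ownsEdge(x1, y1, x2, y2))) &&
                (w1 > 0.0f || (w1 == 0.0f && ownsEdge(x2, y2, x0, y0))) &&
                (w2 > 0.0f || (w2 == 0.0f && ownsEdge(x0, y0, x1, y1)));
            if (inside)
                putPixel(x, y);
        }
    }
}
```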
OpenGL uses the GPU to perform this job. It is accelerated in hardware, and the process is called rasterization.
As far as I know, the hardware implementation is based on the scan-line algorithm.
This used to be done by creating the outline and then filling in the horizontal lines. See this link for more details: http://joshbeam.com/articles/triangle_rasterization/
Edit: I don't think this will produce the lone pixels you are after; there should be a pixel on every line.
Your problem looks a lot like the classic problem of triangles sharing the very same edge. What is done with triangles sharing an edge is that one triangle is allowed to conquer the space while the other has to leave it blank.
When working with a graphics card, one usually gets this behavior by applying a drawing order from left to right while also enabling a z-buffer test, or by testing whether the pixel has already been drawn. So if a pixel with the very same z-value is already set, changing the pixel is not allowed.
In your example with the circles, the edges of the two neighboring circle segments are not exact. You have to check whether the edges are calculated differently, and why.
Whenever you draw two different shapes and see something like that, you can either fix your model (so they share all the edge vertices), go for a z-buffer test, or use a color test.
You can also minimize the effect by drawing the edges into a sub-buffer that has a higher resolution and down-sampling it. Since this does not affect the whole area, it is more cost-effective in terms of space and time than down-sampling the whole scene.

OpenGL 2D game question

I want to make a game with Worms-like destructible terrain in 2D, using OpenGL.
What is the best approach for this?
Draw pixel by pixel? (Uh, not good?)
Have the world as a texture and manipulate it (is that possible?)
Thanks in advance
Thinking about the way Worms terrain looked, I came up with this idea. But I'm not sure how you would implement it in OpenGL. It's more of a layered 2D drawing approach. I'm posting the idea anyway. I've emulated the approach using Paint.NET.
First, you have a background sky layer.
And you have a terrain layer.
The terrain layer is masked so the top portion isn't drawn. Draw the terrain layer on top of the sky layer to form the scene.
Now for the main idea. Any time there is an explosion or other terrain-deforming event, you draw a circle or other shape on the terrain layer to wipe out part of the terrain, using the terrain layer itself as a drawing mask (so only the part of the circle that overlaps existing terrain is drawn). Use a transparent/mask-color brush for the fill and a color similar to the terrain for the thick pen.
You can repeat this process to add more deformations. You could keep this layer in memory and add deformations as they occur, or you could even render them into memory each frame, if there aren't too many deformations to render.
I guess you'd better use texture-filled polygons with correct mapping (a linear mapping that doesn't stretch the texture to use all the texels, but leaves the cropped areas out), and then reshape them as they get destroyed.
I'm assuming your problem will be to implement the collision between characters/weapons/terrain.
As long as you aren't doing this on OpenGL ES, you might be able to get away with using the stencil buffer to do per-pixel collision detection and have your terrain be a single modifiable texture.
This page will give you an idea:
http://kometbomb.net/2007/07/11/hardware-accelerated-2d-collision-detection-in-opengl/
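A rough sketch of what that kind of technique can look like in code (drawObjectA() and drawObjectB() are hypothetical draw calls): one object is rasterized into the stencil buffer, the other is drawn with the stencil test enabled, and an occlusion query counts how many pixels actually overlapped.

```cpp
GLuint query;
glGenQueries(1, &query);

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // don't touch the color buffer
glEnable(GL_STENCIL_TEST);

// Pass 1: mark every pixel covered by object A with stencil value 1.
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawObjectA();

// Pass 2: draw object B only where A was, counting the surviving pixels.
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glBeginQuery(GL_SAMPLES_PASSED, query);
drawObjectB();
glEndQuery(GL_SAMPLES_PASSED);

GLuint overlapping = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &overlapping);
bool collided = (overlapping > 0); // any shared pixel means a hit

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_STENCIL_TEST);
```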
The way I imagine it is this:
a plane with the texture applied
a path (a vector of points/segments) used for ground collisions.
When something explodes, you do a boolean operation (rectangle minus circle) on the texture (revealing the background) and on the 'walkable' path.
What I'm trying to say is that you do a geometric boolean operation and use the result to update the texture (with an alpha mask or something) and to update the data structure you use to keep track of the walkable area (whichever that might be).
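A minimal sketch of the texture half of that idea, assuming a CPU-side RGBA copy of the terrain is kept around (pixels, texW and terrainTex are assumed to exist; bounds checks are omitted for brevity):

```cpp
// Punch a fully transparent circle into the terrain's alpha channel and
// re-upload only the dirty rectangle around the crater.
void carveCircle(unsigned char* pixels, int texW, GLuint terrainTex,
                 int cx, int cy, int radius)
{
    for (int y = cy - radius; y <= cy + radius; ++y)
        for (int x = cx - radius; x <= cx + radius; ++x) {
            int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= radius * radius)
                pixels[((size_t)y * texW + x) * 4 + 3] = 0; // alpha 0: terrain gone
        }

    glBindTexture(GL_TEXTURE_2D, terrainTex);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, texW); // rows in 'pixels' span texW texels
    glTexSubImage2D(GL_TEXTURE_2D, 0, cx - radius, cy - radius,
                    2 * radius + 1, 2 * radius + 1, GL_RGBA, GL_UNSIGNED_BYTE,
                    pixels + ((size_t)(cy - radius) * texW + (cx - radius)) * 4);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}
```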
Split things up instead of relying only on GL draw methods.
I think I would start by drawing the foreground into the stencil buffer, so that the stencil buffer is set to 1 anywhere there's foreground and 0 elsewhere (where you want your sky to show).
Then to draw a frame, you draw your sky, enable the stencil buffer, and draw the foreground. For the initial frame (before any explosion has destroyed part of the foreground) the stencil buffer won't really be doing anything.
When you do have an explosion, however, you draw it to the stencil buffer (clearing the stencil buffer for that circle). Then you re-draw your data as before: draw the sky, enable the stencil buffer, and draw the foreground.
This lets you get the effect you want (the foreground disappears where desired) without having to modify the foreground texture at all. If you prefer not to use the stencil buffer, the obvious alternative would be to enable blending and just manipulate the alpha channel of your foreground texture: set the alpha to 0 (transparent) wherever it's been affected by an explosion. IMO the stencil buffer is a slightly cleaner approach, but manipulating the alpha channel is pretty simple as well.
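A minimal sketch of the stencil variant described above (drawSky(), drawForeground() and drawExplosionCircle() are hypothetical drawing routines):

```cpp
// One-time setup: mark every foreground pixel with stencil value 1.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawForeground();

// On an explosion: clear the stencil inside the blast circle, color untouched.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 0, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawExplosionCircle();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// Every frame: sky everywhere, foreground only where the stencil is still 1.
glDisable(GL_STENCIL_TEST);
drawSky();
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawForeground();
```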
I think, but this is just a quick idea, that a good way might be to draw a Very Large Number of Lines.
I'm thinking that you represent the landscape as a bunch of line segments; for each column of the screen you have 0..n vertical lines that make up the ground:
12 789
0123 6789
0123456789
0123456789
In the above awesomeness, the column of 0s makes up a single line, and so on. I didn't try to illustrate the case where a single pixel column has more than one line, since that's a bit hard in this coarse format.
I'm not sure this will be efficient, but it at least makes some sense since lines are an OpenGL primitive.
You can color and texture the lines by enabling texture-mapping and specifying the desired texture coordinates for each line segment.
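A minimal sketch of the idea in immediate-mode GL (height[] is an assumed per-column heightmap; texture binding and setup are omitted):

```cpp
// One textured vertical segment per screen column, from the ground height
// down to the bottom of the screen.
glBegin(GL_LINES);
for (int x = 0; x < screenWidth; ++x) {
    float u = x / (float)screenWidth;
    glTexCoord2f(u, height[x] / (float)screenHeight);
    glVertex2f((float)x, (float)height[x]); // top of this ground column
    glTexCoord2f(u, 0.0f);
    glVertex2f((float)x, 0.0f);             // bottom of the screen
}
glEnd();
```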
Typically, the way I have seen it done is to have each entity be a textured quad, and then update the texture for animation. For destructible terrain it might be best to break the terrain into tiles; then you only have to update the ones that have changed. Don't use glDrawPixels; it is probably the slowest approach possible (outside of reloading textures from disk every frame, though that would be close).