Using LibGDX (Orthographic) Camera.zoom makes tiles flicker when moving camera?

I have some 64x64 sprites which work fine (no flicker/shuffling) when I move my camera normally. But as soon as I change the camera.zoom level away from 1f (zooming was supposed to be a major mechanic in my game), the sprites flicker every time you move.
For example changing to 0.8f:
Left flicker:
One keypress later: (Right flicker)
So when you move around it's really distracting for gameplay when the map is flickering... (however slightly)
The zoom is a flat 0.8f and I'm currently using camera.translate to move around. I've tried casting the translation to (int) and it still flickered. My Texture/sprite is using Nearest filtering.
I understand zooming may change/pixelate my tiles but why do they flicker?
Edit
For reference here is my tilesheet:

It's because of the nearest filtering. Depending on the amount of zoom, certain lines of artwork pixels will straddle lines of screen pixels, so they get drawn one pixel wider than other lines. As the camera moves, the rounding works out differently on each frame of animation, so different lines are drawn wider on each frame.
If you aren't going for a retro low-res aesthetic, you could use linear filtering with mip maps (MipMapLinearLinear or MipMapLinearNearest) and start with higher-resolution art. The first of those looks better if you are smoothly transitioning between zoom levels, at a possible performance cost.
Otherwise, you could round the camera's position to an exact multiple of the size of a pixel in world units. Then the same enlarged columns will always correspond with the same screen pixels, which would cut down on perceived flickering considerably. You said you were casting the camera translation to an int, but this requires the art to be scaled such that one pixel of art exactly corresponds with one pixel of screen.
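For example, a minimal sketch of that snapping, assuming a LibGDX OrthographicCamera whose viewport is defined in world units (apply the rounding after any translation, just before update()):

    // World-unit size of one screen pixel at the current zoom level.
    float pixelSize = camera.zoom * camera.viewportWidth / Gdx.graphics.getWidth();

    // Snap the camera to a whole number of screen pixels so the texel-to-pixel
    // rounding stays the same from frame to frame while the camera moves.
    camera.position.x = Math.round(camera.position.x / pixelSize) * pixelSize;
    camera.position.y = Math.round(camera.position.y / pixelSize) * pixelSize;
    camera.update();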
This doesn't fix the other problem, that certain lines of pixels are drawn wider and so appear to have greater visual weight than similar nearby lines, as can be seen in both your screenshots. One way to get around that would be to do the zoom as a secondary step, so you can control the appearance with an upscaling shader. Draw your scene to a frame buffer that is sized so one pixel of texture corresponds to one world pixel (with no zoom), and also lock your camera to integer locations. Then draw the frame buffer's contents to the screen, and do your zooming at this stage. Use a specialized upscaling shader for drawing the frame buffer texture to the screen to minimize blurriness and avoid nearest-filtering artifacts. There are various shaders for this purpose that you can find by searching online; many have been developed for use with emulators.
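A rough sketch of that two-step approach in LibGDX. The framebuffer size, the sceneCamera/screenCamera split, and the zoom variable are assumptions about your setup, and the plain SpriteBatch draw in the second pass is where a dedicated upscaling shader would be bound:

    // Created once at setup: one framebuffer texel per world pixel. fboWidth and
    // fboHeight describe how many world pixels are visible at zoom = 1.
    FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGB888, fboWidth, fboHeight, false);

    // Pass 1: render the scene at 1:1 pixel scale with an integer-locked camera.
    fbo.begin();
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(sceneCamera.combined);
    batch.begin();
    // ... draw tiles and sprites here ...
    batch.end();
    fbo.end();

    // Pass 2: draw the framebuffer to the screen, applying the zoom only here.
    TextureRegion scene = new TextureRegion(fbo.getColorBufferTexture());
    scene.flip(false, true);   // framebuffer textures come out upside down
    batch.setProjectionMatrix(screenCamera.combined);
    batch.begin();             // an upscaling shader could be set on the batch here
    batch.draw(scene, 0, 0, fboWidth / zoom, fboHeight / zoom);
    batch.end();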

Given an input of fragment positions in a shader, how can I blur each fragment position with an Airy disc?

I am attempting to create a reasonably interactive N-body simulation, with the novelty of being able to observe the simulation from the surface of one of the bodies. By this, I mean that I have some randomly placed 'stars' of very high masses with random velocities and 'planets' of smaller masses given initial circular velocities around these stars. I am then rendering this in real-time via OpenGL on Linux and DirectX11 on Windows.
My question is in regards to rendering the scene out, NOT the N-body simulation. I have a very efficient/accurate solver working now, and it can always be improved later without affecting the rendering.
The problem obviously arises that stars are obscenely far away from each other, thus the fragment shader is incapable of rendering distant stars as they are fractions of pixels in size. Using a logarithmic depth buffer works fine for standing on a planet and looking at a moon and the host star, but I am really struggling with how to deal with the distant stars. I am not interested in 'faking' it, or rendering a star map centered on the player, as the whole point is to be able to view the simulation in real time. For example, the star your planet is orbiting is ~1e6 m away and is rendered as a sphere, since it has a radius of ~1e4 m. Other stars are ~1e8 m away from you, so they show up as single lit pixels (sometimes), with a far Z-plane at ~1e13 m.
I think I have an idea/plan, but I think it involves knowledge/techniques I am not aware of yet.
Rationale:
Take the world-space positions of the stars on a given frame
This gives us the 'screen'-space (fragment) position of each star's center of mass in the fragment shader
Rather than render each star as a scaled sphere, we can try to mimic what our eyes actually do: convolve this point (pixel) with an Airy disc (or a Gaussian, or whatever is most efficient, it doesn't matter) so that stars are rendered instead as 'blurs' on the sky, with their 'bigness' depending on their luminosity and distance (in essence re-creating the magnitude system for free)
Theoretically this would enable me to change the 'lens' parameters of my airy disc at will in order to produce things that look reasonably accurate/artistic.
The problem: I have no idea how to achieve this blurring effect!
I have some basic understanding of shaders, and have different render passes going on currently, but this seems to involve techniques I have not stumbled upon, and I don't know how to achieve this effect.
TL;DR: given an input of a fragment position, how can I blur it in a fragment/pixel shader with an Airy disc/Gaussian/etc.?
I thought a logarithmic depth buffer would work initially, but obviously that only helps with z-fighting, not dealing with angular size of far away objects.
You are over-thinking it. For stars smaller than a pixel, just render a square with an Airy disc texture. This is not "faking" - this is just how [real-time] computer graphics works.
If the lens diameter changes, calculate a new Airy disc texture.
For stars that are a few pixels big (do they exist?) maybe you want to render a few-pixel sphere convolved with an Airy disc, then use that texture. Asking the GPU to do the convolution every frame is a waste of time, unless you really need to. If the size really is only a few pixels, you could alternatively render a few overlapping copies of the single-pixel texture, spaced 1 pixel apart. Though computing the texture would allow you to have precision smaller than a pixel, if that's something you need.
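To make the "calculate a new Airy disc texture" step concrete, here is one way you might build such a texture on the CPU. This is only a sketch: a true Airy pattern is (2*J1(x)/x)^2 with J1 the first-order Bessel function, and the Gaussian used here merely stands in for its central lobe; the function name and the sigma parameter are made up for illustration.

    import java.nio.ByteBuffer;
    import org.lwjgl.BufferUtils;

    // Builds a small grayscale point-spread-function texture. 'sigma' plays the
    // role of the lens parameter: recompute the texture when the lens changes.
    static ByteBuffer makePsfTexture(int size, float sigma) {
        ByteBuffer pixels = BufferUtils.createByteBuffer(size * size);
        float c = (size - 1) / 2.0f;
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                float dx = x - c, dy = y - c;
                float falloff = (float) Math.exp(-(dx * dx + dy * dy) / (2.0f * sigma * sigma));
                pixels.put((byte) (255 * falloff));
            }
        }
        pixels.flip();
        // Upload with glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, size, size, 0,
        //                          GL_RED, GL_UNSIGNED_BYTE, pixels) and draw it
        // on the star's screen-aligned quad, e.g. with additive blending.
        return pixels;
    }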
For the nearby stars, the Airy disc from each pixel sums up to make a halo, I think? Then you just render a halo, instead of doing the convolution. It isn't cheating, I swear.
If you really do want to do a convolution, you can do it directly: render everything to a texture using a framebuffer, and then render that texture onto the screen with a shader that reads from several adjacent texture pixels and multiplies them by the kernel. Since this runs once per screen pixel multiplied by the size of the kernel, it quickly gets expensive as the kernel grows, so you may prefer to skip some samples and make it approximate. If you are not doing real-time rendering then you can make it as slow as you want, of course.
When game developers do a Gaussian blur (quite common) or a box blur, they do a separate X pass and a separate Y pass. This works because those kernels are separable: convolving with a 1D X kernel and then a 1D Y kernel gives the same result as the full 2D convolution, while sampling far fewer pixels per fragment. I don't know whether the Airy disc function is separable in the same way.
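For illustration, here is what one pass of such a separable blur might look like, written as a GLSL fragment shader embedded in a Java string. The uniform names, the 9-tap Gaussian weights, and the idea of running it twice with direction = (1,0) and then (0,1) over ping-ponged framebuffers are all assumptions, not anything from the question:

    // One 1D blur pass; run horizontally and then vertically over the
    // framebuffer texture to approximate a full 2D Gaussian blur.
    static final String BLUR_FRAGMENT_SHADER =
        "#version 330 core\n" +
        "uniform sampler2D scene;     // the rendered scene, from a framebuffer\n" +
        "uniform vec2 direction;      // (1,0) for the X pass, (0,1) for the Y pass\n" +
        "uniform vec2 texelSize;      // 1.0 / texture resolution\n" +
        "in vec2 uv;\n" +
        "out vec4 fragColor;\n" +
        "void main() {\n" +
        "    float w[5] = float[](0.2270, 0.1946, 0.1216, 0.0541, 0.0162);\n" +
        "    vec3 sum = texture(scene, uv).rgb * w[0];\n" +
        "    for (int i = 1; i < 5; i++) {\n" +
        "        vec2 off = direction * texelSize * float(i);\n" +
        "        sum += texture(scene, uv + off).rgb * w[i];\n" +
        "        sum += texture(scene, uv - off).rgb * w[i];\n" +
        "    }\n" +
        "    fragColor = vec4(sum, 1.0);\n" +
        "}\n";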

OpenGL Voxel Game - Avoid transparency overlapping

I'm making a voxel game, and I have designed the water as cubes with 0.5 alpha. It works great if all the water is at the same height, like in the image below:
But, if the water is not at the same height, alpha overlapping happens:
How can I prevent this overlapping from occurring? (For example, by only drawing the nearest water surface for each pixel and discarding the rest.) Do I need to use framebuffers and draw the scene in multiple passes, or would an alternate blend function, or some other less GPU-expensive approach, be enough?
I found an answer that doesn't require drawing the scene in multiple passes. I hope it helps somebody:
We are going to draw only the nearest water surface for each pixel, discarding the rest, and so avoid the overlapping.
First, you draw the solid blocks normally.
Then, you draw the water after disabling writes to the color buffer with glColorMask(false, false, false, false). The Z-buffer will be updated as desired, but no water will be drawn yet.
Finally, you re-enable writes to the color buffer (glColorMask(true, true, true, true)) and set the depth function to LEQUAL (glDepthFunc(GL_LEQUAL)). Only the nearest water pixels will pass the depth test (setting it to LEQUAL instead of EQUAL deals with some rare but possible floating-point approximation errors). Enabling blending and drawing the water again will produce the effect we wanted:
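Here is a sketch of that sequence as plain OpenGL calls (written Java-style against LWJGL's GL11 bindings; drawSolidBlocks() and drawWater() are hypothetical stand-ins for your own draw code):

    import static org.lwjgl.opengl.GL11.*;

    // Pass 1: solid blocks, written to both color and depth as usual.
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    drawSolidBlocks();                       // hypothetical: your normal terrain pass

    // Pass 2: water depth only. Nothing is visible yet, but the depth buffer
    // now holds the nearest water surface for each pixel.
    glColorMask(false, false, false, false);
    drawWater();                             // hypothetical: draws all water faces

    // Pass 3: water color. Only fragments at the stored depth pass the LEQUAL
    // test, so each pixel blends exactly one water surface over the scene.
    glColorMask(true, true, true, true);
    glDepthFunc(GL_LEQUAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawWater();

    // Restore state for the rest of the frame.
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);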

Rendering Point Sprites across cameras in cube maps

I'm rendering a particle system of vertices, which are then tessellated into quads in a geom shader, and textured/rendered as point sprites. Then they are scaled in size depending on how far away they are from the camera. I'm trying to render out every frame of my scene into cube maps. So essentially I place six cameras into my scene and point them in each direction for the face of the cube and save an image.
My point sprites are of varying sizes. When they near the border of one camera (if they are large enough), they appear in two cameras simultaneously. Since point sprites always face the camera, this means that they are not continuous along the seam when I wrap my cube map back into 3D space. This is especially noticeable when the points are quite close to the camera, as the points are larger and stretch further across into both camera views. I'm also doing some alpha blending, so this may be contributing to the problem as well.
I don't think I can just cull points near the edge of the camera, because when I put everything back into 3D I'd think there would be strange areas where the cloud is more sparsely populated. Another thought I had would be to blur the edges of each camera, but I think this too would give me a weird blurry zone when I go back to 3D space. I feel like I could manually edit the frames in Photoshop so they look OK, but this would be kind of a pain since it's an animation at 30 fps.
The image attached is a detail from the cube map. You can see the horizontal seam where the sprites are not lining up correctly, and a slightly less noticeable vertical one on the right side of the image. I'm sure that my camera settings are correct, because I've used this same camera setup in other scenes and my cubemaps look fine.
Anyone have ideas?
I'm doing this in openFrameworks / openGL fwiw.
Instead of facing each camera's view plane, make them face the cameras' shared origin (the center of the cube map)? Not sure if this fixes everything, but intuitively I'd say it should look close to OK. Maybe this is already what you do, I have no idea.
(I'd like for this to be a comment, but no reputation)
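A sketch of what that might look like in the geometry shader that expands each point into a quad, written as a GLSL string in Java for consistency with the rest of the code here. All of the uniform and variable names are assumptions; the key change is that the quad's basis comes from the direction to the shared camera position rather than from the view matrix, so all six cube faces agree on each sprite's orientation.

    static final String POINT_TO_QUAD_GEOMETRY_SHADER =
        "#version 330 core\n" +
        "layout(points) in;\n" +
        "layout(triangle_strip, max_vertices = 4) out;\n" +
        "uniform mat4 viewProj;     // assumed: this face's view-projection matrix\n" +
        "uniform vec3 cameraPos;    // assumed: world-space position shared by all six faces\n" +
        "uniform float spriteSize;  // assumed: world-space sprite diameter\n" +
        "out vec2 texCoord;\n" +
        "void main() {\n" +
        "    vec3 center = gl_in[0].gl_Position.xyz;   // point position in world space\n" +
        "    // Orient the quad toward the camera position, not the view plane.\n" +
        "    // (Degenerates if toCam is exactly vertical; pick another 'up' then.)\n" +
        "    vec3 toCam = normalize(cameraPos - center);\n" +
        "    vec3 right = normalize(cross(vec3(0.0, 1.0, 0.0), toCam));\n" +
        "    vec3 up    = cross(toCam, right);\n" +
        "    vec2 corners[4] = vec2[](vec2(-1,-1), vec2(1,-1), vec2(-1,1), vec2(1,1));\n" +
        "    for (int i = 0; i < 4; i++) {\n" +
        "        vec3 pos = center + (corners[i].x * right + corners[i].y * up) * 0.5 * spriteSize;\n" +
        "        gl_Position = viewProj * vec4(pos, 1.0);\n" +
        "        texCoord = corners[i] * 0.5 + 0.5;\n" +
        "        EmitVertex();\n" +
        "    }\n" +
        "    EndPrimitive();\n" +
        "}\n";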

How to draw an array of pixels directly to the screen with OpenGL?

I want to write pixels directly to the screen (not using vertices and polygons). I have investigated a variety of answers to similar questions, most notably here and here.
I see a couple of ways drawing pixels to the screen might be possible, but they both seem to be indirect and use unnecessary floating-point operations:
Draw a GL_POINT for each pixel on the screen. I've tried this and it works, but it seems like an inefficient way to draw pixels onto the screen. Why write my data in floating point when it's going to be transformed into an array of pixel data?
Create a 2D quad that spans the entire screen and write a texture to it. Like the first option, this seems to be a roundabout way of putting pixels on the screen. The texture would still have to go through rasterization before getting put on the screen. Also, textures must be square, and most screens are not square, so I'd have to handle that problem.
How do I get a matrix of colors, where pixels[0][0] corresponds to the upper-left corner and pixels[1920][1080] corresponds to the bottom right, onto the screen in the most direct and efficient way possible using OpenGL?
Writing directly to the framebuffer seems like the most promising choice, but I have only seen people using the framebuffer for shading.
First off: OpenGL is a drawing API designed to make use of a rasterizer system that ingests homogeneous coordinates to define geometric primitives, which get transformed and, well, rasterized. Merely drawing pixels is not what the OpenGL API is concerned with. Also, most GPUs are floating-point processors by nature and can in fact process floating-point data more efficiently than integers.
Why write my data in floating point when it's going to be transformed into an array of pixel data?
Because OpenGL is a rasterizer API, i.e. it takes primitive geometrical data and turns it into pixels. It doesn't deal with pixels as input data, except in the form of image objects (textures).
Also textures must be square, and most screens are not square, so I'd have to handle that problem.
Whoever told you that, or wherever you got that from: they are wrong. OpenGL 1.x had the constraint that texture dimensions had to be powers of two, although width and height could differ. Ever since OpenGL 2.0, texture sizes are completely arbitrary.
However, a texture might not be the most efficient way to directly update single pixels on the screen either. It is, however, a great idea to first draw your pixels into a pixel buffer, which for display is loaded into a texture that then gets drawn onto a full-viewport quad.
However, if your goal is direct manipulation of on-screen pixels, without a rasterizer in between, then OpenGL is not the right API for the job. There are other, 2D graphics APIs that allow you to directly push pixels to the screen.
That said, pushing individual pixels is very inefficient. I strongly recommend operating on a pixel buffer, which is then blitted or drawn as a whole for display. And doing it with OpenGL, by drawing a full-viewport textured quad, is as good and as efficient for this as any other graphics API.
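As a rough sketch of that approach, here is what the pixel-buffer-to-texture path can look like with Java-style LWJGL calls against the legacy fixed-function API. texId, screenW and screenH are assumptions; the setup runs once, and the last block runs every frame after you have written your colors into the buffer:

    import static org.lwjgl.opengl.GL11.*;
    import java.nio.ByteBuffer;
    import org.lwjgl.BufferUtils;

    // One-time setup: a screen-sized texture and a CPU-side RGBA pixel buffer.
    ByteBuffer pixels = BufferUtils.createByteBuffer(screenW * screenH * 4);
    int texId = glGenTextures();
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, screenW, screenH, 0, GL_RGBA, GL_UNSIGNED_BYTE, (ByteBuffer) null);

    // Every frame: write into 'pixels' on the CPU, push the whole buffer to the
    // texture, then draw one full-viewport quad. Note that row 0 of the buffer
    // ends up at the bottom of the screen in OpenGL's convention.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, screenW, screenH, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);                       // assumes identity modelview/projection
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();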

Labels in an OpenGL map application

Short Version
How can I draw short text labels in an OpenGL mapping application without having to manually recompute coordinates as the user zooms in and out?
Long Version
I have an OpenGL-based mapping application where I need to be able to draw data sets with up to about 250k points. Each point can have a short text label, usually about 4 or 5 characters long.
Currently, I do this using a single texture containing all the characters. For each point, I define a quad for each character in its label. So a point with the label "Fred" would have four quads associated with it, and each quad uses texture coordinates into that single texture to draw its corresponding character.
When I draw the map, I draw the map points themselves in map coordinates (e.g., longitude/latitude). Then I compute the position of each point in screen coordinates and update the four corner points for each of that point's label quads, again in screen coordinates. (For instance, if I determine the point is drawn at screen point 100, 150, I could set the quad for the first character in the point's label to be the rectangle starting with left-top point of 105, 155 and having a width of 6 pixels and a height of 12 pixels, as appropriate for the particular character. Then the second character might start at 120, 155, and so on.) Then once all these label character quads are positioned correctly, I draw them using an orthogonal screen projection.
The problem is that the process of updating all of those character quad coordinates is slow, taking about half a second for a particular test data set with 150k points (meaning that, since each label is about four characters long, there are about 150k * [4 characters per point] * [4 coordinate pairs per character] coordinate pairs that need to be set on each update).
If the map application didn't involve zooming, I would not need to recompute all these coordinates on each refresh. I could just compute the label coordinates once and then simply shift my viewing rectangle to show the right area. But with zooming, I can't see how to make it work without doing coordinate computation, because otherwise the characters will grow huge as you zoom in and tiny as you zoom out.
What I want (and what I understand OpenGL doesn't provide) is a way to tell OpenGL that a quad should be drawn in a fixed screen-coordinate rectangle, but that the top-left position of that rectangle should be a fixed distance from a given point in map-coordinate space. So I want both a primitive hierarchy (a given map point is the parent of its label character quads) and the ability to mix two different coordinate systems within this hierarchy.
I'm trying to understand whether there is some magic transformation matrix I can set that will do all this for me, but I can't see how to do it.
The other alternative I've considered is using a shader on each point to handle computing the label character quad coordinates for that point. I haven't worked with shaders before, and I'm just trying to understand (a) if it's possible to use shaders to do this, and (b) whether computing all those points in shader code actually buys me anything over computing them myself. (By the way, I have confirmed that the big bottleneck is computing the quad coordinates, not in uploading the updated coordinates to the GPU. The latter takes a bit of time, but it's the computation, the sheer number of coordinates being updated, that takes up the bulk of that half second.)
(Of course, the other other alternative is to be smarter about which labels need to be drawn in a given view in the first place. But for now I'd like to concentrate on the solution assuming all labels need to be drawn.)
So the basic problem ("because otherwise the characters will grow huge as you zoom in and tiny as you zoom out") is that you are doing calculations in map coordinates rather than screen coordinates? And if you did it in screen coords, this would require more computations? Obviously, any rendering needs to translate from map coordinates to screen coordinates. The problem seems to be that you are translating from map to screen too late. Therefore, rather than doing a single map-to-screen for each point, and then working in screen coords, you are working mostly in map coords, and then translating per-character to screen coords at the very end. And the slow part is that you are working in screen coords, then having to manually translate back to map coords just to tell OpenGL the map coords, and it will convert those back to screen coords! Is that a fair assessment of your problem?
The solution therefore is to push that transformation earlier in your pipeline. However, I can see why it is tricky, because at first glance, OpenGL seems to want to do everything in "world coordinates" (for you, map coords), but not in screen coords.
Firstly, I am wondering why you are doing separate coordinate calculations for each character. What font rendering system are you using? Something like FreeType will automatically generate a bitmap image of an entire string, and doesn't require you to work per-character [edit: this isn't quite true; see comments]. You definitely shouldn't need to calculate the map coordinate (or even screen coordinate) for every character. Calculate the screen coordinate for the top-left corner of the label, and have your font rendering system produce the bitmap of the entire label in one go. That should speed things up about fourfold (since you assume 4 characters per label).
Now as for working in screen coords, it may be helpful to learn a bit about shaders. The more you learn about OpenGL, the more you learn that really it isn't a 3D rendering engine at all. It's just a 2D graphics library with some very fast matrix primitives built in. OpenGL actually works, at the lowest level, in screen coordinates (not pixel coordinates -- it works in normalized screen space, I think from memory from -1 to 1 on both the X and Y axes). The only reason it "feels" like you're working in world coordinates is because of these matrices you have set up.
So I think the reason why you are working in map coords all the way until the end is because it's easiest: OpenGL naturally does the map-to-screen transform for you (using the matrices). You have to change that, because you want to work in screen coords yourself, and therefore you need to make the transformation a long time before OpenGL gets its hands on your data. So when you go to draw a label, you should manually apply the map-to-screen transformation matrix on each point, as follows:
You have a particular point (which needs a label drawn) in map coords.
Apply the map-to-screen matrix to convert the point to screen coords. This probably means multiplying the point by the MODELVIEW and PROJECTION matrices, using the same algorithm that OpenGL does when it's rendering a vertex. So you could either glGet the GL_MODELVIEW_MATRIX and GL_PROJECTION_MATRIX to extract OpenGL's current matrices, or you could manually keep around a copy of the matrices yourself.
Now that you have the map label in screen coords, compute the position of the label's text. This is simply adding 5 pixels in the X and Y axes, as you said above. However, remember that you aren't in pixel space but in normalized screen space, which runs from -1 to 1 (so adding 0.05 units moves you by 2.5% of the screen, for example). It's probably better not to think in pixels, because then your application will scale to match the resolution. But if you really want to think in pixels, you will have to calculate the units-per-pixel based on the resolution.
Use glPushMatrix to save the current matrix, then glLoadIdentity to set the current matrix to the identity -- telling OpenGL not to transform your vertices. (I think you will have to do this for both the PROJECTION and MODELVIEW matrices.)
Draw your label, in screen coordinates (see the sketch after this list).
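Here is a sketch of those steps for a single label, written as Java-style calls against the legacy OpenGL API (LWJGL 3 naming). The method and helper names are made up, texturing of the actual glyphs is omitted, and labelW/labelH are assumed to already be in normalized screen units:

    import static org.lwjgl.opengl.GL11.*;
    import java.nio.FloatBuffer;
    import org.lwjgl.BufferUtils;

    // Transforms one map-space point with the same MODELVIEW * PROJECTION
    // transform OpenGL would apply, then draws the label quad with both
    // matrices set to the identity so it stays fixed in screen space.
    void drawLabelQuad(float mapX, float mapY, float labelW, float labelH) {
        FloatBuffer mv = BufferUtils.createFloatBuffer(16);
        FloatBuffer pr = BufferUtils.createFloatBuffer(16);
        glGetFloatv(GL_MODELVIEW_MATRIX, mv);
        glGetFloatv(GL_PROJECTION_MATRIX, pr);

        // clip = P * (M * point); OpenGL matrices are column-major.
        float[] eye  = mulColumnMajor(mv, mapX, mapY, 0f, 1f);
        float[] clip = mulColumnMajor(pr, eye[0], eye[1], eye[2], eye[3]);
        float sx = clip[0] / clip[3];        // normalized device coords, -1..1
        float sy = clip[1] / clip[3];

        glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

        glBegin(GL_QUADS);                   // glyph texture coords omitted here
        glVertex2f(sx,          sy);
        glVertex2f(sx + labelW, sy);
        glVertex2f(sx + labelW, sy + labelH);
        glVertex2f(sx,          sy + labelH);
        glEnd();

        glMatrixMode(GL_MODELVIEW);  glPopMatrix();
        glMatrixMode(GL_PROJECTION); glPopMatrix();
    }

    // Hypothetical helper: multiplies a column-major 4x4 matrix by a 4-vector.
    float[] mulColumnMajor(FloatBuffer m, float x, float y, float z, float w) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m.get(row) * x + m.get(4 + row) * y + m.get(8 + row) * z + m.get(12 + row) * w;
        }
        return r;
    }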
So you don't really need to write a shader. You could certainly do this in a shader, and it would certainly make step 2 faster (no need to write your own software matrix multiply code; multiplying matrices on the GPU is extremely fast). But that would be a later optimisation, and a lot of work. I think the above steps will help you work in screen coordinates and avoid having to waste a lot of time just to give OpenGL map coordinates.
Side comment on:
"""
generate a bitmap image of an entire string, and doesn't require you to work per-character
...
Calculate the screen coordinate for the top-left corner of the label, and have your font rendering system produce the bitmap of the entire label in one go. That should speed things up about fourfold (since you assume 4 characters per label).
"""
Freetype or no, you could certainly compute a bitmap image for each label, rather than each character, but that would require one of:
storing thousands of different textures, one for each label
It seems like a bad idea to store that many textures, but maybe it's not.
or
rendering each label, for each point, at each screen update.
this would certainly be too slow.
Just to follow up on the resolution:
I didn't really solve this problem, but I ended up being smarter about when I draw labels in the first place. I was able to quickly determine whether I was about to draw too many characters (i.e., so many that, on a typical screen with a typical density of points, the labels would be too close together to read in a useful way), and in that case I simply don't draw labels at all. When drawing up to about 5000 characters at a time, there isn't a noticeable slowdown from recomputing the character coordinates as described above.