Simulating constant Pixels-Per-Unit in game graphics at different "Zoom Levels" - opengl

I'm currently trying to lay down the graphical style for a game environment I am working on. Meshes are rendered at a fairly low resolution in isometric to give a "sprite-like" appearance.
The issue is keeping the resolution I have picked so that the game maintains the correct pixels-per-unit. For example, here is a cube rendered so that its top face roughly matches a 64 by 32 px isometric tile. Let's say this is Zoom Level x1.
Now, in an engine like Unity, I can play around with screen size and cameras to zoom in or out easily. But to keep the original rendering resolution, I can only think of forcing the player into fullscreen, changing the screen resolution, and letting the scene scale accordingly.
So to zoom in, I cut the resolution in half but bring the cube twice as close to the screen. This is the intended outcome: Zoom Level x2 (scaled in image editor).
The top face of the cube appears to still be 64 by 32px even though it actually takes up 128 by 64px on the screen.
I want to know if this effect can be simulated by writing my own shader and having some parameter for 'Zoom Level' (x1, x2, x4, x8). This way I don't have to play magic tricks with fullscreen resolution, and I can also let the screen size be independent of the "render resolution."
Zooming in with off-the-shelf shaders won't keep a fixed pixels-per-unit: since the cube is closer and takes up more pixels on screen, its quality also increases. (I can only post two pictures at a time.)
The higher quality of the rendered cube is nice and standard behavior, but it is not the intended effect. Think "nearest neighbor" scaling in image editing software. That is the kind of rendering I'm going after at zoom levels x2, x4, x8. Just in real-time.
I know it seems weird to want meshes to render like sprites, but this is a concept that interests me greatly as a 'look' for a game.
So can this be done at the shader level? I have looked into writing my own glsl shaders for OpenGL and am slowly learning the general flow of things.
Can I change how a shader rasterizes fragments to get that "nearest neighbor" sampling I'm going after?
Can a shader even interact with a parameter like "Zoom Level = 2" to change its behavior while rendering?
Or perhaps there are other ways to achieve this style of real-time rendering that I haven't come across?
And of course, is this even something that a shader does? I might be incorrect on what stage of the graphics pipeline this is handled at.
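For concreteness, here is roughly what I imagine the 'Zoom Level' parameter doing in a post-process fragment shader. This is just a guess on my part, with made-up uniform names: it point-samples the full-resolution frame on a virtual low-res grid. I suspect a more faithful version would render the scene into a small off-screen buffer first and upscale that instead.

```glsl
#version 330 core
// Guessed sketch: point-sample the rendered frame on a grid that is uZoom
// times coarser than the screen, so one "virtual texel" covers
// uZoom x uZoom screen pixels. Uniform names are made up.
in vec2 vUV;
out vec4 fragColor;

uniform sampler2D uScene;   // full-resolution rendered frame
uniform vec2 uScreenSize;   // e.g. vec2(1280.0, 720.0)
uniform float uZoom;        // 1.0, 2.0, 4.0 or 8.0

void main()
{
    vec2 gridSize = uScreenSize / uZoom;                       // virtual low-res grid
    vec2 snapped  = (floor(vUV * gridSize) + 0.5) / gridSize;  // centre of the grid cell
    fragColor = texture(uScene, snapped);
}
```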

Related

Using LibGDX (Orthographic) Camera.zoom makes tiles flicker when moving camera?

I have some 64x64 sprites which work fine (no flicker/shuffling) when I move my camera normally. But as soon as I change the camera.zoom level away from 1f (zooming was supposed to be a major mechanic in my game), the sprites flicker every time you move.
For example changing to 0.8f:
Left flicker:
One keypress later: (Right flicker)
So when you move around it's really distracting for gameplay when the map is flickering... (however slightly)
The zoom is a flat 0.8f and I'm currently using camera.translate to move around. I've tried casting to (int) and it still flickered... My texture/sprite is using Nearest filtering.
I understand zooming may change/pixelate my tiles but why do they flicker?
Edit
For reference here is my tilesheet:
It's because of the nearest filtering. Depending on the amount of zoom, certain lines of artwork pixels will straddle lines of screen pixels, so they get drawn one pixel wider than other lines. As the camera moves, the rounding works out differently on each frame of animation, so different lines are drawn wider on each frame.
If you aren't going for a retro low-res aesthetic, you could use linear filtering with mipmaps (MipMapLinearLinear or MipMapLinearNearest), and then start with larger-resolution art. The first of those looks better if you are smoothly transitioning between zoom levels, with a possible performance impact.
Otherwise, you could round the camera's position to an exact multiple of the size of a pixel in world units. Then the same enlarged columns will always correspond with the same screen pixels, which would cut down on perceived flickering considerably. You said you were casting the camera translation to an int, but this requires the art to be scaled such that one pixel of art exactly corresponds with one pixel of screen.
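The rounding itself is just a floor to the nearest whole pixel. It is normally done on the CPU before building the view matrix, but to show the arithmetic, here is a minimal GLSL vertex-shader sketch with made-up uniform names:

```glsl
#version 330 core
// Sketch only: snap the camera translation to a whole number of screen
// pixels (expressed in world units) so that texel-to-pixel rounding works
// out the same on every frame. The same floor() is usually done on the CPU.
layout(location = 0) in vec2 aPos;   // vertex position in world units
layout(location = 1) in vec2 aUV;
out vec2 vUV;

uniform mat4 uProjection;       // orthographic projection, no translation
uniform vec2 uCameraPos;        // camera position in world units
uniform float uWorldPerPixel;   // size of one screen pixel in world units

void main()
{
    vec2 snapped = floor(uCameraPos / uWorldPerPixel + 0.5) * uWorldPerPixel;
    vUV = aUV;
    gl_Position = uProjection * vec4(aPos - snapped, 0.0, 1.0);
}
```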
This doesn't fix the other problem, that certain lines of pixels are drawn wider so they appear to have greater visual weight than similar nearby lines, as can be seen in both your screenshots. Maybe one way to get round that would be to do the zoom as a secondary step, so you can control the appearance with an upscaling shader. Draw your scene to a frame buffer that is sized so one pixel of texture corresponds to one world pixel (with no zoom), and also lock your camera to integer locations. Then draw the frame buffer's contents to the screen and do your zooming at this stage. Use a specialized upscaling shader for drawing the frame buffer texture to the screen to minimize blurriness and avoid nearest-filtering artifacts. There are various shaders for this purpose that you can find by searching online; many have been developed for use with emulators.
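As one example of such an emulator-style upscaler, here is a sketch of the common "sharp bilinear" idea: the frame buffer texture is sampled with GL_LINEAR, but the blend between texels is squeezed into a band roughly one screen pixel wide, so the result looks like nearest filtering without the uneven column widths. Uniform names here are made up.

```glsl
#version 330 core
// "Sharp bilinear" style upscaler. The frame buffer texture must be
// sampled with GL_LINEAR filtering for the edge blend to work.
in vec2 vUV;
out vec4 fragColor;

uniform sampler2D uFrame;   // the low-res frame buffer texture
uniform vec2 uFrameSize;    // its size in texels, e.g. vec2(640.0, 360.0)
uniform float uScale;       // integer upscale factor (zoom), e.g. 2.0

void main()
{
    vec2 texel = vUV * uFrameSize;
    vec2 base  = floor(texel);
    vec2 frac  = texel - base;

    // Inside the central region of a texel, behave like nearest filtering;
    // only within half a screen pixel of a texel border, blend linearly.
    vec2 region     = vec2(0.5 - 0.5 / uScale);
    vec2 centerDist = frac - 0.5;
    vec2 f = (centerDist - clamp(centerDist, -region, region)) * uScale + 0.5;

    fragColor = texture(uFrame, (base + f) / uFrameSize);
}
```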

Draw shiny lights in OpenGL to simulate old monitor scanlines

I'm writing a terminal emulator that simulates the look of an old monitor (software link). Here's a screenshot:
For this version, I use 2D graphics. My intention is to migrate to OpenGL to achieve higher performance and to be able to have a screen curvature, such as this:
Screenshot 2 http://www.meeho.net/blog/wp-content/uploads/Cathode.png
To achieve a higher realism, I want to draw the scanlines individually. This way, it would look something like this when greatly amplified:
So my question is: what would be the best strategy to achieve this (that is, draw these grainy shiny lights over a curved surface with a high framerate) with OpenGL?
I should point out that not all terminals had shadow mask CRTs (which were responsible for the bulge). Higher-end terminals had (relatively) flat aperture grille CRTs. At the other extreme, really cheap terminals had somewhat annoying scrolling horizontal bars.
My fondest memories are programming on a SONY Trinitron terminal, which didn't have issues with brightness on horizontal scan lines, but did have a very pronounced vertical pitch between pixels.
Here's what things looked like on aperture grille CRTs:
I haven't seen any CRT emulating shaders that ever replicate this though.
To me, there's more than one way to skin a CRT... you might want to emulate a dot matrix grid, darken alternate fields, have a horizontal line that slowly scrolls up/down the screen, apply a pincushion distortion to simulate non-flat CRTs.
In any case, don't think of this as drawing lights. Draw the basic text into an FBO and then modulate the luminance of each pixel and apply the pincushion distortion in a fragment shader.
To achieve the effect in your final screenshot, you are going to need more than scanlines. You will also have to simulate the shadow mask dot matrix, you can probably do this with a simple texture.
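As a rough sketch of that FBO post-process (illustrative constants and made-up uniform names; a real CRT shader would be considerably more involved):

```glsl
#version 330 core
// Rough sketch of the post-process described above: the terminal text is
// already rendered into an FBO (uText); this pass warps the lookup for a
// CRT-style curvature, darkens the gaps between simulated scanlines, and
// tints columns to fake a shadow-mask triad. Constants are illustrative.
in vec2 vUV;
out vec4 fragColor;

uniform sampler2D uText;     // FBO containing the flat terminal image
uniform float uScanlines;    // number of simulated scanlines, e.g. 240.0
uniform float uCurvature;    // e.g. 0.1
uniform float uScanStrength; // e.g. 0.35

void main()
{
    // Curvature: push the sampling coordinate outwards with distance from
    // the centre, which makes the displayed image bulge like a CRT face.
    vec2 cc = vUV - 0.5;
    vec2 uv = vUV + cc * dot(cc, cc) * uCurvature;

    // Outside the curved screen is black.
    if (any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0)))) {
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);
        return;
    }

    vec3 color = texture(uText, uv).rgb;

    // Scanlines: modulate luminance once per simulated scanline.
    float scan = 1.0 - uScanStrength * (0.5 + 0.5 * cos(uv.y * uScanlines * 6.2831853));
    color *= scan;

    // Very crude shadow mask: emphasise a different channel per output column.
    int column = int(mod(gl_FragCoord.x, 3.0));
    vec3 mask = vec3(column == 0 ? 1.0 : 0.7,
                     column == 1 ? 1.0 : 0.7,
                     column == 2 ? 1.0 : 0.7);
    fragColor = vec4(color * mask, 1.0);
}
```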

Blurry Skybox Texture

I have a problem when I render my skybox. I am using DirectX 11 with C++. The picture is too blurry. I think it might be that I'm using textures with too low a resolution. Currently the resolution is 1024x1024 for every face of the skybox, and my screen resolution is 1920x1080. On average I will be staring into one face of the skybox at all times, which means the 1024x1024 picture will be stretched to fill my screen, which is why it is blurry. I'm considering using 2048x2048 textures. I created a simple skybox texture at that size and it is not blurry anymore. But my problem is it takes too much memory! Almost 100MB loaded to the GPU just for the background.
My question is: is there a better way to render skyboxes? I've looked around on the internet without much luck. Some say that the norm is 512x512 per face; the blurriness then is unacceptable. I'm wondering how commercial games do their skyboxes. Did they use huge texture sizes? In particular, for those who have seen it, I love the Dead Space 3 space environment. I would like to create something like that. So how did they do it?
Firstly, the pixel density will depend not only on the resolution of your texture and the screen, but also the field of view. A narrow field of view will result in less of the skybox filling the screen, and thus will zoom into the texture more, requiring higher resolution. You don't say exactly what FOV you're using, but I'm a little surprised a 1k texture is particularly blurry, so maybe it's a bit on the narrow side?
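To put rough numbers on that (assuming a head-on view of a face): a 90° horizontal FOV maps the screen to 1920 / (2 tan 45°) = 960 pixels per unit of tangent, while a 1024-texel face covering 90° gives 1024 / 2 = 512 texels per unit, so the texture is magnified about 1.9x at the centre of the screen. Narrow the FOV to 60° and the screen density rises to 1920 / (2 tan 30°) ≈ 1663, pushing the magnification to roughly 3.2x.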
Oh, and before I forget: you should be using compressed textures... 2k textures shouldn't be that scary.
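As a back-of-the-envelope check: six uncompressed 2048x2048 RGBA8 faces come to 2048 x 2048 x 4 bytes x 6 ≈ 96 MB (more with mipmaps), which is roughly the 100MB you're seeing. The same cube map in BC1/DXT1 at 4 bits per pixel is about 12 MB, and BC3/DXT5 at 8 bits per pixel about 24 MB, so compression alone brings a 2k skybox under the cost of your uncompressed 1k one.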
However, aside from changing the resolution, which obviously does start to burn through memory fairly quickly, I generally combine the skybox with some simple distant objects.
For example, in a space scene I would probably render a fairly simple skybox which only contained things like nebula, etc., where resolution wasn't critical. I'd perhaps render at least some of the stars as sprites, where the texture density can be locally higher. A planet could be textured geometry.
If I was rendering a more traditional outdoor scene, I could render sky and clouds on the skybox, but a distant horizon as geometry. A moon in the sky might be an overlay.
There is no one standard answer - a variety of techniques can be employed, depending on the situation.

Why does stereo 3D rendering require software written especially for it?

Given a naive take on 3D graphics rendering, it seems that stereo 3D rendering should be essentially transparent to the developer and entirely a feature of the graphics hardware and drivers. Whenever an OpenGL window displays a scene, it takes the geometry, lighting, camera, texture, etc. information and renders a 2D image of the scene.
Adding stereo 3D to the scene seems to essentially imply using two laterally offset cameras where there was originally one, with all other scene variables staying the same. The only additional information would then be how far apart to make the cameras and how far out to make their central rays converge. Given this, it would seem trivial to take a GL command sequence and interleave the appropriate commands at driver level to drive a stereo 3D rendering.
It seems, though, that applications need to be specially written to make use of special 3D hardware architectures, making stereo cumbersome and prohibitive to implement. Would we expect this to be the future of stereo 3D implementations, or am I glossing over too many important details?
In my specific case we are using a .net OpenGL viewport control. I originally hoped that simply having stereo enabled hardware and drivers would be enough to enable stereo 3D.
Your assumptions are wrong. OpenGL does not "take geometry, lighting, camera and texture information to render a 2D image". OpenGL takes commands to manipulate its state machine and commands to execute draw calls.
As Nobody mentions in his comment, the core profile does not even care about transformations at all. The only thing it really provides you with now is a way to feed arbitrary data to a vertex shader, and an arbitrary 3D cube (clip space) to render into. Whether or not that corresponds to the actual view, GL does not care, nor should it.
Mind you, some people have noticed that a driver can try to guess what's the view and what's not, and this is what the nvidia driver tries to do when doing automatic stereo rendering. This requires some specific guess-work, which amounts to actual analysis of game rendering to tweak the algorithms so that the driver guesses right. So it's typically a per-title, in-driver change. And some developers have noticed that the driver can guess wrong, and when that happens, it starts to get confusing. See some first-hand account of those questions.
I really recommend you read that presentation, because it makes some further points about where the camera should be pointing (should the two view directions be parallel, and so on).
Also, it turns out that stereo essentially costs twice as much rendering for everything that is view dependent. Some developers (including, for example, the Crytek guys, see Part 2) figured out that, to a great extent, you can do a single render and fudge the picture with additional data to generate the left- and right-eye pictures.
The amount of work saved there is worth a lot by itself, which is reason enough for developers to do this themselves rather than leave it to the driver.
Stereo 3D rendering is unfortunately more complex than just adding a lateral camera offset.
You can create stereo 3D from an original 'mono' rendered frame and the depth buffer. Given the range of (real-world) depths in the scene, each value in the depth buffer tells you how far away the corresponding pixel would be. Given a desired eye separation value, you can slide each pixel left or right depending on that distance. But...
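As a rough illustration of that reprojection idea, before getting to the caveats, here is a minimal GLSL sketch. The uniform names are made up, and a real implementation would need to search for the correct source pixel and fill disocclusions rather than doing this naive gather:

```glsl
#version 330 core
// Naive gather-style reprojection: for one eye, look up the mono frame at a
// horizontally shifted coordinate whose offset (disparity) depends on the
// depth stored at the destination pixel. Approximation only.
in vec2 vUV;
out vec4 fragColor;

uniform sampler2D uColor;    // mono rendered frame
uniform sampler2D uDepth;    // matching depth buffer (non-linear, 0..1)
uniform float uNear;         // camera near plane
uniform float uFar;          // camera far plane
uniform float uEyeSign;      // -1.0 for left eye, +1.0 for right eye
uniform float uMaxDisparity; // maximum horizontal shift in UV units

float linearDepth(float z)
{
    // Convert a standard perspective depth value back to view-space distance.
    return (2.0 * uNear * uFar) / (uFar + uNear - (2.0 * z - 1.0) * (uFar - uNear));
}

void main()
{
    float d = linearDepth(texture(uDepth, vUV).r);
    // Closer pixels get a larger shift; distant pixels barely move.
    float disparity = uMaxDisparity * (uNear / d);
    vec2 srcUV = vUV + vec2(uEyeSign * disparity, 0.0);
    fragColor = texture(uColor, clamp(srcUV, 0.0, 1.0));
}
```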
Do you want parallel axis stereo (offset asymmetrical frustums) or 'toe in' stereo where the two cameras eventually converge? If the latter, you will want to tweak the camera angles scene by scene to avoid 'reversing' bits of geometry beyond the convergence point.
For objects very close to the viewer, the left and right eyes see quite different images of the same object, even down to the left eye seeing one side of the object and the right eye the other side - but the mono view will have averaged these out to just the front. If you want an accurate stereo 3D image, it really does have to be rendered from different eye viewpoints. Does this matter? FPS shooter game, probably not. Human surgery training simulator, you bet it does.
Similar problem if the viewer tilts their head to one side, so one eye is higher than the other. Again, probably not important for a game, really important for the surgeon.
Oh, and do you have anti-aliasing or transparency in the scene? Now you've got a pixel which really represents two pixel values at different depths. Move an anti-aliased pixel sideways and it probably looks worse because the 'underneath' color has changed. Move a mostly-transparent pixel sideways and the rear pixel will be moving too far.
And what do you do with gunsight crosses and similar HUD elements? If they were drawn with depth buffer disabled, the depth buffer values might make them several hundred metres away.
Given all these potential problems, OpenGL sensibly does not try to say how stereo 3D rendering should be done. In my experience modifying an OpenGL program to render in stereo is much less effort than writing it in the first place.
Shameless self promotion: this might help
http://cs.anu.edu.au/~Hugh.Fisher/3dteach/stereo3d-devel/index.html

OpenGL water refraction

I'm trying to create an OpenGL application with water waves and refraction. I need to either cast rays from the sun and from the camera and figure out where they intersect, or I need to start from the ocean floor and figure out in which direction(s), if any, I have to go in order to hit the sun or the camera. I'm kind of stuck. Can anyone give me an entry point into OpenGL ray casting, or a crash course in advanced geometry? I don't want the ocean floor to be at a constant depth, and I don't want the water waves to be simple sinusoidal waves.
First things first: the effect you're trying to achieve can be implemented using OpenGL, but it is not a feature of OpenGL. OpenGL by itself is just a sophisticated triangle-to-screen drawing API. You take some input data and write a program that performs relatively simple rasterizing drawing operations on that data using the OpenGL API. Shaders give you some leeway here; you can even implement a raytracer in the fragment shader.
In your case that means you must implement some algorithm that generates the picture you intend. For water it must be some kind of raytracer or fake refraction method to get the effect of looking into the water. The caustics require either a full-featured photon mapper, or you can settle for a fake effect based on the 2nd derivative of the water surface.
There is a WebGL demo, rendering stunningly good looking, interactive water: http://madebyevan.com/webgl-water/ And here's a video of it on YouTube http://www.youtube.com/watch?v=R0O_9bp3EKQ
This demo uses true raytracing (the water surface, the sphere and the pool are raytraced); the caustics are a "fake caustics" effect, based on projecting the 2nd derivative of the water surface heightmap.
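If you want to experiment with that kind of fake, a very crude curvature-based version (not what the demo above actually does, just the general flavour) can be hacked together in a fragment shader; all names and constants below are made up:

```glsl
#version 330 core
// Hand-wavy caustics sketch: estimate the curvature (discrete Laplacian) of
// the water heightmap and brighten the floor where the surface bulges
// upward like a converging lens. A cheap fake, not a physical simulation.
in vec2 vUV;                // floor position mapped into heightmap UVs
out vec4 fragColor;

uniform sampler2D uHeight;  // water surface heightmap
uniform sampler2D uFloor;   // ocean floor albedo
uniform vec2 uTexelSize;    // 1.0 / heightmap resolution
uniform float uStrength;    // e.g. 4.0

void main()
{
    float h  = texture(uHeight, vUV).r;
    float hl = texture(uHeight, vUV - vec2(uTexelSize.x, 0.0)).r;
    float hr = texture(uHeight, vUV + vec2(uTexelSize.x, 0.0)).r;
    float hd = texture(uHeight, vUV - vec2(0.0, uTexelSize.y)).r;
    float hu = texture(uHeight, vUV + vec2(0.0, uTexelSize.y)).r;

    // Discrete Laplacian: negative where the surface bulges upward.
    float laplacian = hl + hr + hd + hu - 4.0 * h;

    float caustic = clamp(1.0 - uStrength * laplacian, 0.0, 2.0);
    fragColor = vec4(texture(uFloor, vUV).rgb * caustic, 1.0);
}
```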
There's nothing very OpenGL-specific about this.
Are you talking about caustics? Here's another good Gamasutra article.
Reflections are normally achieved by reflecting the camera in the plane of the mirror and rendering to a texture, you can apply distortion and then use it to texture the water surface. This only works well for small waves.
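The sampling side of that is straightforward; something along these lines in the water's fragment shader (made-up uniform names, and it assumes the mirrored pass was rendered at screen resolution):

```glsl
#version 330 core
// Minimal planar-reflection sampling sketch. Assumes a previous pass
// rendered the scene from a camera mirrored about the water plane into
// uReflection at screen resolution.
in vec3 vNormal;                // perturbed water surface normal (y-up)
out vec4 fragColor;

uniform sampler2D uReflection;  // mirrored-camera render target
uniform vec2 uScreenSize;
uniform float uDistortion;      // e.g. 0.02 -- only convincing for small waves
uniform vec3 uWaterTint;        // e.g. vec3(0.1, 0.3, 0.4)

void main()
{
    vec2 screenUV = gl_FragCoord.xy / uScreenSize;

    // Depending on how the mirrored pass was set up, the image may need a
    // vertical flip to line up with the main view.
    vec2 reflUV = vec2(screenUV.x, 1.0 - screenUV.y);

    // Small normal-based offset fakes the waves bending the reflection.
    reflUV += normalize(vNormal).xz * uDistortion;

    vec3 refl = texture(uReflection, clamp(reflUV, 0.0, 1.0)).rgb;
    fragColor = vec4(mix(uWaterTint, refl, 0.6), 1.0);
}
```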
What you're after here is lots of little ways to cheat :-)
Technically, all you perceive is a result of light waves/photons bouncing off surfaces and propagating through media. For the "real deal" you'd have to trace the light directly from the Sun, with each ray following this path:
hits the water surface
refracts + reflects; the reflected part goes into the camera (*), the refracted part goes further
hits the ocean bottom
reflects
hits the water surface from beneath
reflects + refracts; the refracted part gets out of the water and hits the camera (*), the reflected part goes back to the ocean bottom, reflects again, and so on
(*) In reality most of the rays would miss the camera, and tracing enough of them for some to actually hit it would be overly expensive, so sending these parts straight into the camera is a cheat.
Do this for at least three wavelengths - "red", "green" and "blue". Each of them will refract and reflect differently. You'll get the whole picture by combining the three.
Then you just create a texture with the rays that got into the camera and overlay it in OpenGL.
That's a straightforward, simple and very computationally expensive way that gives an approximation of the physics behind the caustics.
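To make the per-wavelength refraction step concrete, here is a minimal GLSL sketch of just that step, not the full path described above. It assumes, purely for illustration, a flat reference floor and made-up uniform names; your real floor would need an actual intersection test against the height field.

```glsl
#version 330 core
// Sketch of the per-wavelength refraction step only. Slightly different
// indices of refraction per channel give the chromatic split mentioned
// above; the flat floor and all constants are illustrative assumptions.
in vec3 vWorldPos;   // point on the water surface
in vec3 vNormal;     // water surface normal at that point
out vec4 fragColor;

uniform vec3 uCameraPos;
uniform sampler2D uFloorTex;
uniform float uFloorY;        // assumed flat floor height
uniform float uFloorScale;    // world units -> floor texture UVs

// Air-to-water eta per channel (illustrative values).
const vec3 ETA = vec3(1.0 / 1.331, 1.0 / 1.333, 1.0 / 1.336);

vec3 sampleFloor(vec3 origin, vec3 dir)
{
    // Intersect the refracted ray with the flat plane y = uFloorY.
    float t = (uFloorY - origin.y) / dir.y;
    vec3 hit = origin + t * dir;
    return texture(uFloorTex, hit.xz * uFloorScale).rgb;
}

void main()
{
    vec3 view = normalize(vWorldPos - uCameraPos);
    vec3 n = normalize(vNormal);

    vec3 color;
    color.r = sampleFloor(vWorldPos, refract(view, n, ETA.r)).r;
    color.g = sampleFloor(vWorldPos, refract(view, n, ETA.g)).g;
    color.b = sampleFloor(vWorldPos, refract(view, n, ETA.b)).b;

    fragColor = vec4(color, 1.0);
}
```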