OpenGL rendering different objects - c++

I have a problem rendering my solar system.
I have small objects and a large object located at a distance 10e9 times greater than the small objects. How can I render all of that? (I want to see the Sun while I am near my small objects.)
glm::perspective(m_fInitialFoV, 4.0f / 3.0f, near, far);
If the far/near ratio is too big, all objects flicker.

The best way to do it is to group objects by scale and render the groups separately. For example, the solar system could use Earth's scale as its unit: Earth has a radius of 6378.1 km and is 147.09x10^6 km away from the Sun.
Normalizing these around the Earth's radius, we get:
Earth's Radius: 1.0 unit
Distance from sun: 23061.72 units
Same principle of normalization applies to other solar system bodies:
Venus' Radius: 0.9488405638 units
Distance from sun: 16851.41 units
Pluto's Radius: 0.186105580 units
Distance from sun: 695633.4 units
You should also set up your depth range to accommodate such numbers. As long as bodies have a lot of space between them, no errors or z-fighting should occur.
So:
Render the 'big' group first (large depth range).
Then clear only the depth buffer.
Then render the objects that are small and close to the camera, such as asteroids, spaceships, etc.: things that do not need such large numbers (use a substantially smaller depth range here).
You're done.
As you can see, you can group the scene by scale on multiple levels. You could, for example, take the list of coordinates of the local group of stars, render them first as points, clear the depth, then render the bodies of the system you are in, clear the depth again, and finally render the local objects at your position. Also consider that a planet like Pluto is barely visible from Earth, so you can simply use a much smaller depth range for drawing.

A simpler solution than multi-pass rendering would be to use a logarithmic depth buffer. At the cost of some precision, you can expand your available depth range exponentially. Take a look here for more info: http://outerra.blogspot.com/2009/08/logarithmic-z-buffer.html

Related

Given an input of fragment positions in a shader, how can I blur each fragment position with an airy disc?

I am attempting to create a reasonably interactive N-body simulation, with the novelty of being able to observe the simulation from the surface of one of the bodies. By this, I mean that I have some randomly placed 'stars' of very high masses with random velocities and 'planets' of smaller masses given initial circular velocities around these stars. I am then rendering this in real-time via OpenGL on Linux and DirectX11 on Windows.
My question is in regards to rendering the scene out, NOT the N-body simulation. I have a very efficient/accurate solver working now, and it can always be improved later without affecting the rendering.
The problem, obviously, is that stars are obscenely far away from each other, so the fragment shader cannot render distant stars: they are fractions of a pixel in size. Using a logarithmic depth buffer works fine for standing on a planet and looking at a moon and the host star, but I am really struggling with how to deal with the distant stars. I am not interested in 'faking' it, or rendering a star map centered on the player, as the whole point is to be able to view the simulation in real time. E.g. the star your planet is orbiting is ~1e6 m away and is rendered as a sphere, as it has a radius of ~1e4 m. Other stars are ~1e8 m away from you, so they show up as single lit pixels (sometimes), with a far Z-plane of ~1e13.
I think I have an idea/plan, but I think it involves knowledge/techniques I am not aware of yet.
Rationale:
Take the world-space positions of the stars on a given frame.
This gives us the 'screen-space' (fragment) position of each star's center of mass in the fragment shader.
Rather than render this as a scaled sphere, we can try to mimic what our eyes actually do: convolve this point (pixel) with an Airy disc (or a Gaussian, or whatever is most efficient; it doesn't matter) so that stars are rendered instead as 'blurs' on the sky, with their apparent size depending on their luminosity and distance (in essence re-creating the magnitude system for free).
Theoretically this would enable me to change the 'lens' parameters of my airy disc at will in order to produce things that look reasonably accurate/artistic.
The problem: I have no idea how to achieve this blurring effect!
I have a basic understanding of shaders and have several render passes going on currently, but this seems to involve techniques I have not stumbled upon yet.
TL;DR: given a fragment position as input, how can I blur it in a fragment/pixel shader with an Airy disc/Gaussian/etc.?
I thought a logarithmic depth buffer would work initially, but obviously that only helps with z-fighting, not dealing with angular size of far away objects.
You are over-thinking it. For stars smaller than a pixel, just render a square with an Airy disc texture. This is not "faking" - this is just how [real-time] computer graphics works.
If the lens diameter changes, calculate a new Airy disc texture.
For stars that are a few pixels big (do they exist?) maybe you want to render a few-pixel sphere convolved with an Airy disc, then use that texture. Asking the GPU to do convolution every frame is a waste of time, unless you really need it to. If the size really is only a few pixels, you could alternatively render a few copies of the single-pixel texture, overlapping itself and 1 pixel apart. Though computing the texture would allow you to have precision smaller than a pixel, if that's something you need.
For the nearby stars, the Airy disc from each pixel sums up to make a halo, I think? Then you just render a halo, instead of doing the convolution. It isn't cheating, I swear.
If you really do want to do a convolution, you can do it directly: render everything to a texture by using a framebuffer, and then render that texture onto the screen, using a shader that reads from several adjacent texture pixels, and multiplies them by the kernel. Since this runs for every pixel multiplied by the size of the kernel, it quickly gets expensive, the more pixels you want to sample for the convolution, so you may prefer to skip some and make it approximate. If you are not doing real-time rendering then you can make it as slow as you want, of course.
When game developers do a Gaussian blur (quite common) or a box blur, they do a separate X blur and Y blur. This works because those kernels are separable: composing an X blur with a Y blur gives the full 2D blur while sampling far fewer pixels. I don't know whether this works for the Airy disc function.

How to choose the Light Size in World Space for Shadow Mapping and Percentage Closer Filtering?

Hi computer graphics and math people :-)
Short question: How to let an artist choose a meaningful light size in world space for shadow maps filtered by percentage closer filtering (PCF) and is it possible to use the same technique to support spot and directional light sources?
Longer question: I have implemented shadow mapping and filter the edges by applying percentage closer filtering (PCF). The filter kernel is a Poisson-disk in contrast to a regular, rectangular filter kernel. You can think of a Poisson-disk as sample positions more or less randomly distributed inside the unit circle. So the size of the filter region is simply a factor multiplied to each of the 2D sample positions of the kernel (Poisson-disk).
I can adjust the radius/factor of the Poisson disk and change the size of the penumbra at runtime for either a spot light (perspective frustum) or a directional light (orthographic frustum). This works great, but the values of the parameter do not really mean anything, which is fine for small 3D samples or even games where one can invest some time tuning the value empirically. What I want is a parameter called "LightSize" that has an actual meaning in world space. In a large scene, for example one containing a building that is 100 units long, the LightSize has to be larger than in a scene showing a close-up of a bookshelf to produce equally smooth shadows. Conversely, a fixed LightSize would produce extremely smooth shadows on the shelf and quite hard shadows outside the building. This question is not about soft shadows, contact hardening etc., so ignore physically accurate blocker-receiver estimations ;-)
Idea 1: If I use the LightSize directly as the filter size, a factor of 0.5 would result in a Poisson disk with a diameter of 1.0 and a radius of 0.5. Since texture coordinates are in the range [0,1], this leads to a filter size that evaluates the whole texture for each fragment: imagine a fragment in the center of the shadow map; it would fetch neighboring texels distributed across the whole area of the texture. This would of course yield an extremely large penumbra, but let's call it the "maximum". A factor of 0.05, for example, would result in a diameter of 0.1, so that each fragment would evaluate about 10% of its neighboring texels (ignore the circle etc., just think in 2D from a side view). This approach works, but when the angle of a spotlight becomes larger, or the frustum of a directional light changes its size, the penumbra changes its width, because the LightSize defines the penumbra in texture space (UV space). The penumbra should stay the same independent of the size of the near plane. Imagine a fitted orthographic frustum: when the camera rotates, the fitted frustum changes in size, and so does the size of the penumbra, which is wrong.
Idea 2: Divide the LightSize by the size of the near plane in world space. This works great for orthographic projections: when the frustum becomes larger, the LightSize is divided by a larger value, so the penumbra stays the same in world space. Unfortunately this doesn't work for a perspective frustum, because there the size of the near plane depends on the near plane's distance, so the penumbra size becomes dependent on the near plane distance, which is annoying and wrong.
It feels like there has to be a way so that the artist can choose a meaningful light size in world space. I know that PCF is only a (quite bad) approximation of a physically plausible light source, but imagine the following:
When the light source is sampled multiple times using a Poisson disk in world space, one can create physically accurate shadows by rendering a hard shadow for each sample position. This works for spot lights. The situation is different for directional lights. One can use an "angle at the origin of the directional light" and render multiple hard shadows, one for each slightly rotated frustum. This makes no physical sense at all in the real world, but then directional lights do not exist anyway, so... By the way, sampling the light source like this is often referred to as multi-view soft shadows (MVSS).
Do you have any suggestions? Could it be that spot and directional lights have to be handled differently and PCF does not allow me to use a meaningful real world light size for a perspective frustum?

OpenGL projection clipping

For example, if we have an ortho projection:
left = -aspect, right = aspect, top = 1.0, bottom = -1.0, far = 1.0, near = -1.0
and draw a triangle at -2.0, it will be cut off by the near clipping plane. Does that actually save some precious rendering time?
Culling determines whether we need to draw something and discards it if it is out of our view (written by the programmer in the vertex shader or main program). Is clipping just cheap, automatic culling?
Also, on the theme of cheap culling: would
if (dist(cam.pos, sprite.pos) < MAX_RENDER_DIST)
    draw_sprite(sprite);
be enough for a simple 2D game?
Default OpenGL clipping space is -1 to +1, for x, y and z.
The conditional test on sprite distance will work, and is usually good enough. It is also somewhat redundant, as the far clipping plane does almost the same thing. There are cases where the two differ, though: an object in a corner of the view, inside the clipping planes, can still fail the distance test, because the distance from the camera to the corner is longer than the perpendicular distance from the camera to the far clipping plane, so such objects can pop in and out as the camera turns. This is not a problem if you have a 2D game and do not allow changes of the camera viewing angle.
If you have a simple 2D game, chances are high that you do not need to worry about graphics optimization. If you skip sprites outside the clipping planes, you save time, but how much depends on the situation. If a huge fraction of the sprites is outside, you may save considerable time (though then you should probably reconsider your algorithm and not even iterate over things that are never going to be shown). If only a small percentage of the sprites is outside, the time saved will be negligible.
The problem with clipping in the GPU is that it happens relatively late in the pipeline, just before rasterization, so a lot of computations could already be done for nothing.
Doing it on the CPU can save these computations from happening and, also very important, reduce the number of actual draw commands (which can also be a bottleneck).
However, you want to do this fast on the CPU: typically you would use an octree or a similar structure to represent the data, so you can discard an entire subtree at once. If you have to check each polygon, or even each object, separately, this can become too expensive.
So in conclusion: the usefulness depends on where your bottleneck lies (CPU, vertex shader, transmission rate, ...).

OpenGl changing the unit of measure for coordinate system

I'm learning OpenGL using the newest blue book, and I was hoping to get a point of clarification. The book says you can use any system of measurement for the coordinate system; however, when doing 3D perspective programming, the world coordinates are +1/-1.
I guess where I am a little confused is this: let's say I want to use feet and inches for my world, and I want to build the interior of my house. Would I just use translates (x(feet), y(feet), z(feet)), etc., or is there some function to change the x,y world coordinates (i.e., change the default from -1/+1 to, say, -20/+20)? I know OpenGL converts everything to unit coordinates eventually.
So I guess my biggest gap of knowledge is: how do I model real-world object lengths so they make sense in OpenGL?
I think the author is referring to normalized coordinates (device coordinates) that appear after the perspective divide.
Typically when you actually manipulate objects in OpenGL you use standard world coordinates which are not limited to -1/+1. These world coordinates can be anything you like and are translated into normalized coordinates by multiplying by the modelview and projection matrices then dividing by the homogeneous coordinate 'w'.
OpenGL will do all this for you (until you get into shaders) so don't really worry about normalized coordinates.
Well, it's all relative. If you define 1 OpenGL unit to be 1 meter and build everything perfectly to scale on that basis, then you have the scale you want (i.e., 10 meters would be 10 OpenGL units and 1 millimeter would be 0.001 OpenGL units).
If, however, you then introduce a model designed such that 1 OpenGL unit = 1 foot, that object will end up roughly 3 times too large.
Sure, you can convert between the various units, but by far the best way is to re-scale your model before loading it into your engine.
…however when doing 3d perspective programming, the world coordinates are +1/-1.
Says who? You are probably thinking of clip space, but that is just one special space that all coordinates pass through after projection.
Your world units may be anything. The relevant part is how they are projected. Say you have a perspective projection with a 90° FOV, the near clip plane at 0.01, the far clip plane at 10.0, and a square aspect (for simplicity). Then your view volume into the world is a frustum whose near face is a square with side length 0.02 at distance 0.01 from the viewer, stretching to a square with side length 20.0 at distance 10.0. Reduce the FOV and the lateral lengths shrink accordingly.
Either way, view-space depths in roughly the range 0.01 to 10.0 are projected into the ±1 clip space.
So it's up to you to choose the projection limits to match your scene and units of choice.
The normalized device coordinate range is always -1 to +1, but you can map it however you wish.
Suppose we want a range of -100 to +100: then divide the actual coordinate by 100 before passing it to glVertex2f or glVertex3f.
e.g. glVertex2f(10.0f/100.0f, 10.0f/100.0f) (note the float literals: 10/100 is integer division and yields 0).

OpenGL - gluPerspective / glFrustum - zNear & zFar problems

I'm writing a space exploration application. I've decided on light years being the units and have accurately modeled the distances between stars. After tinkering and a lot of arduous work (mostly learning the ropes) I have got the camera working correctly from the point of view of a starship traversing through the cosmos.
Initially I paid no attention to the zNear parameter of gluPerspective() until I worked on planetary objects. Since my scale is in light-year units, I soon realized that with zNear at 1.0f I would not be able to see such objects. After experimentation I arrived at these figures:
#define POV 45
#define zNear 0.0000001f
#define zFar 100000000.0f
gluPerspective (POV, WinWidth/WinHeight, zNear ,zFar);
This works exceptionally well in that I was able to cruise my solar system (position 0,0,0) and move up close to the planets which look great lit and texture mapped. However other systems (not at position 0,0,0) were much harder to cruise through because the objects moved away from the camera in unusual ways.
I noticed, however, that strange visual glitches started to take place when cruising through the universe. Objects behind me would 'wrap around' and show up ahead; if I swing 180 degrees in the Y direction, they also appear in their original place. So when warping through space, most of the stars parallax correctly, but some appear and travel in the opposite direction (which is disturbing, to say the least).
Changing the zNear to 0.1f immediately corrects ALL of these glitches (but then I can no longer resolve solar-system objects). So I'm stuck. I've also tried working with glFrustum, and it produces exactly the same results.
I use the following to view the world:
glTranslatef(pos_x, pos_y, pos_z);
With relevant camera code to orientate as required. Even disabling camera functionality does not change anything. I've even tried gluLookAt() and again it produces the same results.
Does gluPerspective() have limits when extreme zNear / zFar values are used? I tried to reduce the range but to no avail. I even changed my world units from light years to kilometers by scaling everything up and using a bigger zNear value - nothing. HELP!
The problem is that you want to resolve too much at the same time. You want to view things on the scale of the solar system, while also having semi-galactic scale. That is simply not possible. Not with a real-time renderer.
There is only so much floating-point precision to go around. And with your zNear being incredibly close, you've basically destroyed your depth buffer for anything that is more than about 0.0001 away from your camera.
What you need to do is draw things based on distance. Near objects (within a solar system's scale) are drawn with one perspective matrix, using one depth range (say, 0 to 0.8). Then more distant objects are drawn with a different perspective matrix and a different depth range (0.8 to 1). That's really the only way you're going to make this work.
Also, you may need to compute the matrices for objects on the CPU in double-precision math, then translate them back to single-precision for OpenGL to use.
OpenGL should not be drawing anything farther from the camera than zFar, or closer to the camera than zNear.
But for things in between, OpenGL computes a depth value that is stored in the depth buffer which it uses to tell whether one object is blocking another. Unfortunately, the depth buffer has limited precision (generally 16 or 24 bits) and according to this, roughly log2(zFar/zNear) bits of precision are lost. Thus, a zFar/zNear ratio of 10^15 (~50 bits lost) is bound to cause problems. One option would be to slightly increase zNear (if you can). Otherwise, you will need to look into Split Depth Buffers or Logarithmic Depth Buffers
Nicol Bolas already told you one piece of the story. The other piece is that you should start thinking about a structured way to store your coordinates: store the position of each object relative to the object that gravitationally dominates it, and use appropriate units for each level.
So you have stars. Distances between stars are measured in light-years. Stars are orbited by planets; distances within a star system are measured in light-minutes to light-hours. Planets are orbited by moons; distances in a planetary system are measured in light-seconds.
To display such scales you need to render in multiple passes. The objects and their scales form a tree. First you sort the branches from distant to close, then you traverse the tree depth-first. At each level of the tree you use projection parameters whose near and far clip planes snugly fit the objects to be rendered at that level. After rendering each level, clear the depth buffer.