Blurry Skybox Texture - C++

I have a problem when I render my skybox. I am using DirectX 11 with C++. The picture is too blurry. I think it might be because I'm using textures that are too low resolution. Currently the resolution is 1024x1024 for every face of the skybox, and my screen resolution is 1920x1080. On average I will be staring into one face of the skybox at all times, which means the 1024x1024 picture gets stretched to fill my screen, which is why it is blurry. I'm considering using 2048x2048 textures. I created a simple skybox texture and it is not blurry anymore. But my problem is that it takes too much memory! Almost 100MB loaded to the GPU just for the background.
My question is: is there a better way to render skyboxes? I've looked around on the internet without much luck. Some say that the norm is 512x512 per face; the blurriness at that size would be unacceptable. I'm wondering how commercial games did their skyboxes - did they use huge texture sizes? In particular, for those who have seen it, I love the Dead Space 3 space environment. I would like to create something like that. So how did they do it?

Firstly, the pixel density will depend not only on the resolution of your texture and the screen, but also on the field of view. A narrow field of view means less of the skybox fills the screen, so it effectively zooms into the texture more and requires higher resolution. You don't say exactly what FOV you're using, but I'm a little surprised a 1K texture is particularly blurry, so maybe it's a bit on the narrow side?
Oh, and before I forget - you should be using compressed textures... 2k textures shouldn't be that scary.
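For a sense of scale, a block-compressed cube map description in D3D11 might look roughly like this (a sketch only - the device/texture variables and the choice of BC1 are my assumptions, and in practice you'd usually just load a pre-compressed DDS file):
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 2048;
desc.Height           = 2048;
desc.MipLevels        = 0;                           // allocate a full mip chain
desc.ArraySize        = 6;                           // one slice per cube face
desc.Format           = DXGI_FORMAT_BC1_UNORM_SRGB;  // 0.5 byte/texel vs 4 for RGBA8
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags        = D3D11_RESOURCE_MISC_TEXTURECUBE;
ID3D11Texture2D* texture = nullptr;
device->CreateTexture2D(&desc, nullptr, &texture);   // 'device' assumed to exist
// 6 faces * 2048 * 2048 * 0.5 byte is roughly 12 MB, against roughly 96 MB uncompressed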
However, aside from changing the resolution, which obviously does start to burn through memory fairly quickly, I generally always combine the skybox with some simple distant objects.
For example, in a space scene I would probably render a fairly simple skybox which only contained things like nebula, etc., where resolution wasn't critical. I'd perhaps render at least some of the stars as sprites, where the texture density can be locally higher. A planet could be textured geometry.
If I was rendering a more traditional outdoor scene, I could render sky and clouds on the skybox, but a distant horizon as geometry. A moon in the sky might be an overlay.
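Purely to illustrate the layering, the draw order might look something like this (all function names are placeholders, not a real API):
renderSkybox();          // low-frequency nebula cube map, depth writes off
renderStarSprites();     // bright stars as sprites, locally much sharper than the cube map
renderPlanetGeometry();  // textured sphere, lit like any other mesh
renderScene();           // the actual game objects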
There is no one standard answer - a variety of techniques can be employed, depending on the situation.

Related

How to avoid distance ordering in large scale billboard rendering with transparency

Setting the scene:
I am rendering a height map (vast non-transparent surface) with a large number of billboards on it (typically grass, flowers and so on).
The billboards thus have a mostly transparent color map applied, with only a few pixels colored to produce the grass or leaf shapes and such. Note that the edges of those shapes use a bit of transparency gradient to make them look smoother, but I have also tried with basic, binary color/transparent textures.
Pseudo rendering code goes like so:
map->render();
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
wildGrass->render();
glDisable(GL_BLEND);
Where the wildGrass render instruction renders multiple billboards at various locations in a single OGL call.
The issue I am experiencing has to do with transparency and the fact that billboards apparently hide each other, even in their transparent areas. However the height-map's solid background is correctly displayed in those transparent parts.
Here's the glitch:
Left is with an explicit fragment shader discard on fully transparent pixels
Right is without the discard, clearly showing the billboard's flat quad
Based on my understanding of OGL blending and some reading, it seems that the solution is to have a controlled order of rendering, starting from the most distant objects to the closest, so that the color buffer is filled properly in the end.
I am desperately hoping that there is another way... The ordering here would typically vary depending on the point of view, which means it would have to be applied in real time for each frame. Plus, these particular billboards are by nature produced in a -very large- number... Performance alert!
Any suggestions or is my approach of blending wrong?
Did not work for me:
@httpdigest's suggestion to disable depth buffer writing:
It worked essentially for billboards sharing the same texture (and possibly a specific type of texture, like wild grass for instance), because the depth inconsistencies are not visually noticeable - however introducing another texture, say a flower with drastically different colours, immediately highlights those mistakes.
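For reference, my reading of that suggestion in code looks roughly like this (the flowers call is hypothetical, just to show where the second texture comes in):
map->render();                         // opaque terrain writes depth as usual
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);                 // billboards still depth-test against the map,
                                       // but no longer occlude each other
wildGrass->render();
flowers->render();                     // hypothetical second texture, where the errors show
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);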
Solution:
@Rabbid76's suggestion to use not-semi-transparent textures with multi-sampling & anti-aliasing: I believe this is the way to go for the best visual effect with a reasonably low cost in performance.
Alternative solution:
I found an intermediary solution which is probably the cheapest in performance, at the expense of quality. I still use textures with gradient transparent edges, but instead of discarding only fully transparent pixels, I introduce a degree of tolerance: for example, any pixel with alpha < 0.6 is discarded - the value is found experimentally to strike the right balance (see the shader sketch after the list below).
With this approach:
I still perform depth tests, so output is correct
Texture quality is degraded / looks less smooth - but reasonably so
The glitches on semi-transparent pixels still appear - but are barely noticeable
See capture below
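The discard-with-tolerance approach boils down to a fragment shader along these lines (a minimal sketch kept as a C++ string literal; the uniform and variable names are placeholders, and 0.6 is the experimentally found threshold):
static const char* grassFragmentShader = R"(
    #version 330 core
    uniform sampler2D colorMap;
    in vec2 uv;
    out vec4 fragColor;
    void main()
    {
        vec4 texel = texture(colorMap, uv);
        if (texel.a < 0.6)   // tolerance instead of discarding only alpha == 0
            discard;
        fragColor = texel;
    }
)";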
So to conclude:
My solution is a cheap and simple approximation that gives a less smooth visual result
The best result can be obtained by rendering all the billboards to a multi-sampled texture, resolving it with anti-aliasing, and finally outputting the result on a full-screen quad. There are probably two ways to do this:
Either render the map first and use the resulting depth buffer when rendering the billboards
Or render both the map and the billboards to the multi-sampled texture
Note that the above approaches are both meant to avoid having to distance-sort a large number of billboards - but this remains a valid option, and I have read about storing billboard locations in a quad tree for quick access.
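Should you still need it, a minimal sketch of a back-to-front distance sort might look like this (the Billboard struct and the camera parameters are placeholders for whatever the engine actually uses):
#include <algorithm>
#include <vector>

struct Billboard { float x, y, z; /* texture id, size, ... */ };

void sortBackToFront(std::vector<Billboard>& billboards,
                     float camX, float camY, float camZ)
{
    std::sort(billboards.begin(), billboards.end(),
              [&](const Billboard& a, const Billboard& b) {
                  const float da = (a.x - camX) * (a.x - camX)
                                 + (a.y - camY) * (a.y - camY)
                                 + (a.z - camZ) * (a.z - camZ);
                  const float db = (b.x - camX) * (b.x - camX)
                                 + (b.y - camY) * (b.y - camY)
                                 + (b.z - camZ) * (b.z - camZ);
                  return da > db;   // farthest billboards drawn first
              });
}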

Performance of GL_POINTS on modern hardware

Is there any difference in performance between drawing a scene with full triangles (GL_TRIANGLES) instead of just drawing their vertices (GL_POINTS), on modern hardware?
Where GL_POINTS is initialized like this:
glPointSize(1.0);
glDisable(GL_POINT_SMOOTH);
I have a somewhat low-end graphics card (9600 GT) and drawing vertices-only can bring a 2x fps increase on certain scenes. Not sure if it applies on more recent GPUs too.
drawing vertices-only can bring a 2x fps increase
You lose 98% of the picture and get only a 2x fps increase. That's not impressive. If you take into account that you should be able to easily render 300..500 fps on any decent hardware (with vsync disabled and minor optimizations), it's probably not worth it.
Is there any difference in performance between drawing a scene with full triangles (GL_TRIANGLES) instead of just drawing their vertices (GL_POINTS), on modern hardware?
Well, if your scene has a LOT of alpha-blending and very "heavy" pixel shaders, then, obviously, displaying the scene as a point cloud will speed things up, because there are fewer pixels to fill.
On the other hand, this kind of "optimization" will be completely useless for any practical task. I mean, if you're using blending and shaders, you probably wouldn't want to display your scene as a point list in the first place, unless you're doing some kind of debug render (using glPolygonMode), and in the case of a debug render, you'll probably turn shaders off (because a shaded/lit point will be hard to see) and disable lighting.
Even if you're using point sprites as particles or something, I'd stick with triangles - they give more control and do not have a maximum size limit (compared to point sprites).
I can display more objects?
If you want more objects, you should probably try to optimize things elsewhere first. If you stop trying to draw invisible objects (outside of the field of view, etc.), that'll be a start that can improve performance.
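For instance, a very rough bounding-sphere-vs-frustum visibility test might look like this (a sketch only; extracting the six planes from the view-projection matrix is assumed to happen elsewhere):
// plane i stored as (a, b, c, d) with ax + by + cz + d >= 0 inside the frustum
bool sphereVisible(const float planes[6][4], float cx, float cy, float cz, float radius)
{
    for (int i = 0; i < 6; ++i)
    {
        const float dist = planes[i][0] * cx + planes[i][1] * cy
                         + planes[i][2] * cz + planes[i][3];
        if (dist < -radius)
            return false;          // bounding sphere entirely behind this plane
    }
    return true;                   // inside or intersecting - draw it
}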
you have a mesh which is very far away from the camera. 1 million triangles and you know it is always in view. At this density ratio, triangles can't be bigger than a pixel,
When triangles are smaller than a pixel, and there are many of them, your mesh starts looking like garbage and turns into a pixelated mess of points. It will be ugly - roughly the same effect as when you disable mipmapping and texture filters and then render a checkerboard pattern. Using points instead of triangles might even aggravate the effect.
If you have a 1-million-triangle mesh that is always visible, you already need a different kind of optimization. Reduce the number of triangles (level of detail, dynamic tessellation, or some solution that can simplify geometry on the fly), use bump mapping (maybe parallax mapping) to simulate extra geometry detail that isn't actually there, or even turn it into a static background or a sprite. That'll work much better. Trying to render it using points will simply make it look ugly.
No: if the number of triangles is similar to the number of their shared vertices (assuming the glDrawElements rendering command is used), the geometry-related part of the rendering pipeline will be evaluated at roughly the same speed in both modes. The only benefit you can get from drawing GL_POINTS comes solely from the percentage of empty screen space you gain by not drawing faces, i.e. only at the fragment shader level.
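To make that concrete, both modes can be fed the same index buffer, so the vertex-processing work is identical (meshVao and indexCount are assumed to exist):
glBindVertexArray(meshVao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
// versus the same indexed geometry as points - same vertex work, far less rasterization
glDrawElements(GL_POINTS, indexCount, GL_UNSIGNED_INT, 0);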

Why does stereo 3D rendering require software written especially for it?

Given a naive take on 3D graphics rendering, it seems that stereo 3D rendering should be essentially transparent to the developer and be entirely a feature of the graphics hardware and drivers. Whenever an OpenGL window is displaying a scene, it takes the geometry, lighting, camera, texture etc. information to render a 2D image of the scene.
Adding stereo 3D to the scene seems to essentially imply using two laterally offset cameras where there was originally one, with all other scene variables staying the same. The only additional information would then be how far apart to place the cameras and how far out to make their central rays converge. Given this, it would seem trivial to take a GL command sequence and interleave the appropriate commands at driver level to drive a 3D rendering.
It seems though applications need to be specially written to make use of special 3D hardware architectures making it cumbersome and prohibitive to implement. Would we expect this to be the future of stereo 3D implementations or am I glossing over too many important details?
In my specific case we are using a .net OpenGL viewport control. I originally hoped that simply having stereo enabled hardware and drivers would be enough to enable stereo 3D.
Your assumptions are wrong. OpenGL does not "take geometry, lighting, camera and texture information to render a 2D image". OpenGL takes commands to manipulate its state machine and commands to execute draw calls.
As Nobody mentions in his comment, the core profile does not even care about transformations at all. The only thing it really provides you with now is ways to provide arbitrary data to a vertex shader, and an arbitrary 3D cube to do rendering to. Whether that corresponds or not to the actual view, GL does not care, nor should it.
Mind you, some people have noticed that a driver can try to guess what's the view and what's not, and this is what the nvidia driver tries to do when doing automatic stereo rendering. This requires some specific guess-work, which amounts to actual analysis of game rendering to tweak the algorithms so that the driver guesses right. So it's typically a per-title, in-driver change. And some developers have noticed that the driver can guess wrong, and when that happens, it starts to get confusing. See some first-hand account of those questions.
I really recommend you read that presentation, because it makes some further points as to where the camera should be pointing towards (should the 2 view directions be parallel and such).
Also, it turns out that it essentially costs twice as much rendering for everything that is view dependent. Some developers (including, for example, the Crytek guys, see Part 2) figured out that to a great extent, you can do a single render and fudge the picture with additional data to generate the left and right eye pictures.
The amount of work saved here is worth a lot by itself, which is reason enough for developers to do this themselves.
Stereo 3D rendering is unfortunately more complex than just adding a lateral camera offset.
You can create stereo 3D from an original 'mono' rendered frame and the depth buffer. Given the range of (real world) depths in the scene, the depth buffer for each value tells you how far away the corresponding pixel would be. Given a desired eye separation value, you can slide each pixel left or right depending on distance. But...
Do you want parallel axis stereo (offset asymmetrical frustums) or 'toe in' stereo where the two cameras eventually converge? If the latter, you will want to tweak the camera angles scene by scene to avoid 'reversing' bits of geometry beyond the convergence point.
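For the parallel-axis case, the off-axis projection is typically set up roughly like this (a sketch only, using the old fixed-function calls; the parameter names are mine):
#include <cmath>

// eye = -1 for the left eye, +1 for the right eye
void applyStereoFrustum(float fovYDeg, float aspect, float zNear, float zFar,
                        float eyeSeparation, float convergence, int eye)
{
    const float top    = zNear * std::tan(fovYDeg * 0.5f * 3.14159265f / 180.0f);
    const float bottom = -top;
    // horizontal frustum shift that makes the two views line up at 'convergence'
    const float shift  = 0.5f * eyeSeparation * zNear / convergence;
    const float left   = -aspect * top - eye * shift;
    const float right  =  aspect * top - eye * shift;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, zNear, zFar);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-eye * 0.5f * eyeSeparation, 0.0f, 0.0f);  // move the camera sideways
}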
For objects very close to the viewer, the left and right eyes see quite different images of the same object, even down to the left eye seeing one side of the object and the right eye the other side - but the mono view will have averaged these out to just the front. If you want an accurate stereo 3D image, it really does have to be rendered from different eye viewpoints. Does this matter? FPS shooter game, probably not. Human surgery training simulator, you bet it does.
Similar problem if the viewer tilts their head to one side, so one eye is higher than the other. Again, probably not important for a game, really important for the surgeon.
Oh, and do you have anti-aliasing or transparency in the scene? Now you've got a pixel which really represents two pixel values at different depths. Move an anti-aliased pixel sideways and it probably looks worse because the 'underneath' color has changed. Move a mostly-transparent pixel sideways and the rear pixel will be moving too far.
And what do you do with gunsight crosses and similar HUD elements? If they were drawn with depth buffer disabled, the depth buffer values might make them several hundred metres away.
Given all these potential problems, OpenGL sensibly does not try to say how stereo 3D rendering should be done. In my experience modifying an OpenGL program to render in stereo is much less effort than writing it in the first place.
Shameless self promotion: this might help
http://cs.anu.edu.au/~Hugh.Fisher/3dteach/stereo3d-devel/index.html

Sprite Sheet With OpenGL and SDL

I've been working on a new game, and I finally reached the point where I started to code the motion of my main character, but I have a doubt about how to do that.
Previously, I made two games in Allegro, where sprite sheets are kind of easy to implement, because I establish the frame and position on the image and save every frame in a different bitmap. But I know that doing that with OpenGL is not necessary and costs a little bit more.
So, I've been thinking about how to store my sprite sheet and use it in my program, and I have only one idea.
I load the image and turn it into a texture, and in the function that helps me animate I simply grab a portion of the texture to draw, instead of storing every single frame as a separate texture in my program.
Is this the best way to do that?
Thanks in advance for the help.
You're on the right track.
Things to consider:
leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look.
if you want to be fancy and save some texture memory, there's no reason that the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer in the texture.
if your sprites are being rendered from 3D models, you could output normal & displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
You've got the right idea: if you have a bunch of sprites, it is much better to just stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the frame of the sprite. You can do a few optimizations, but most of them revolve around trying to get the most out of your texture memory and packing the sprites closely together without blending issues.
I know that doing that with OpenGL is not necessary and costs a little bit more.
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
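In old-style immediate-mode GL that amounts to something like this (a minimal sketch; the frame/grid parameters are hypothetical):
// draw frame 'frame' of a sheet laid out as a sheetCols x sheetRows grid
void drawSpriteFrame(GLuint sheetTexture, int frame, int sheetCols, int sheetRows,
                     float x, float y, float w, float h)
{
    const float du = 1.0f / sheetCols;
    const float dv = 1.0f / sheetRows;
    const float u0 = (frame % sheetCols) * du;
    const float v0 = (frame / sheetCols) * dv;

    glBindTexture(GL_TEXTURE_2D, sheetTexture);
    glBegin(GL_QUADS);
        glTexCoord2f(u0,      v0);      glVertex2f(x,     y);
        glTexCoord2f(u0 + du, v0);      glVertex2f(x + w, y);
        glTexCoord2f(u0 + du, v0 + dv); glVertex2f(x + w, y + h);
        glTexCoord2f(u0,      v0 + dv); glVertex2f(x,     y + h);
    glEnd();
}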

MipMapping problems in OpenGL

I'm loading 3D objects (obj or 3ds or collada files) into my OpenGL application. The 3D environment is quite large (a few hundred metres along all axes).
My problem is that smaller 3D objects (i.e. in the order of ~< 1-2m ) don't appear to be depth-tested properly. Depending on the zoom of the camera, I can sometimes see the back face of the object (I have been using a simple cube for testing) or other faces becoming visible/invisible/torn. Please see the attached images for a better explanation.
I am led to believe the problem is due to mipmapping being enabled. I would either like to disable mipmapping (can someone suggest a simple, fast way to do this) or set the resolution to be greater for the mipmapped objects. Or am I barking up the wrong tree completely?
Thanks
Chris
That's the result of insufficient z-buffer precision, which is an issue in games that have huge worlds but (relatively) small objects. The immediate solution would be to try using a 24-bit z-buffer instead of a 16-bit one. Another way to tackle this would be to render the game world in two steps: first the big distant objects, then clearing the z-buffer, and then drawing the closer objects.
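The two-step idea in rough (pseudo-)code, where the helper names and ranges are placeholders:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
setProjection(100.0f, 100000.0f);   // far range: mountains, sky, distant props
drawDistantObjects();

glClear(GL_DEPTH_BUFFER_BIT);       // throw the far pass's depth away
setProjection(0.5f, 100.0f);        // near range: everything close to the camera
drawCloseObjects();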
This specific problem is called z-fighting by the way, here's a great resource on this issue: http://www.codermind.com/articles/Depth-buffer-tutorial.html
The take-away is the last paragraph of the article above:
the true issue is that you can't draw both objects that are very far and objects that are very near with the same depth buffer equations. If you want to draw very far objects then you need to sacrifice your near view by pushing it further. To avoid clipping artifacts you can make your collision envelope large enough so that your clip plane will never intercept an existing object within your frustum. Or you can make objects gradually disappear with transparency as they come near your clip plane.
If you want to keep near objects and at the same time draw mountains (or planets) in the far distance, then you can cut your rendering in parts. First drawing your far objects, then clearing the depth buffer and rendering the near objects with a different z buffer.
Like Julio, I believe this is a depth precision issue, not something related to mip-mapping. However, I suggest you start by adjusting your near and far clipping planes before changing anything else (you are probably already using a 24-bit depth buffer anyway, as that is the default on most drivers/cards). In particular, the near plane should be as far away as possible for your scene. Look for calls to glFrustum or gluPerspective.
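For example, with gluPerspective the only change usually needed is pushing the near plane out; the numbers below are just an illustration:
gluPerspective(60.0,                    // vertical field of view in degrees
               (double)width / height,  // aspect ratio
               1.0,                     // near plane: as far out as the scene allows
               1000.0);                 // far plane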