I have a CCSprite.
I scale it to 1.89, and it seems I need a bit more to get my desired result, so I scale it to 1.9 (0.01 more than before). How come the sprite is now much bigger by far?
In fact, I tried to add some precision (I make it 1.895 or so), but the difference is minimal or even nonexistent.
To rule out any external influences, you should run a test in a new project with two images side by side using slightly different scale factors.
You should also know that eventually the sprite's texture pixels need to be mapped to the screen pixels which can lead to reduced precision of scaling. If you have a texture that is 10x10 pixels in size it will occupy 10x10 pixels on the screen with scale factor 1.0. That texture will only increase to 11x11 pixels when you use a scale factor of 1.05 or higher. It is likely that the texture's size on screen remains unchanged with scale factors ranging from 0.95 through 1.04.
Depending on the rounding algorithm and subpixel rendering, the end result can differ slightly. But it explains the basic principle: ultimately you can't scale a texture with infinite precision, because in the end a screen pixel either displays a texture pixel (texel) or it doesn't.
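A tiny standalone sketch (hypothetical C++, not cocos2d code) of the snapping described above:

    // The on-screen size is the texel count times the scale factor, snapped to
    // whole pixels. Round-half-up is just one plausible rule; the real behaviour
    // depends on the renderer and on subpixel settings.
    #include <cmath>
    #include <cstdio>

    int main() {
        const int texSize = 10;  // a 10x10-pixel texture
        for (double scale : {0.95, 1.0, 1.04, 1.05}) {
            long onScreen = std::lround(texSize * scale);
            std::printf("scale %.2f -> %ldx%ld screen pixels\n", scale, onScreen, onScreen);
        }
    }

With this rounding rule, every scale from 0.95 through 1.04 maps to the same 10x10 pixels on screen, matching the behaviour described above.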
Turning off subpixel rendering in ccConfig.h may lead to better results.
I am attempting to create a reasonably interactive N-body simulation, with the novelty of being able to observe the simulation from the surface of one of the bodies. By this, I mean that I have some randomly placed 'stars' of very high masses with random velocities and 'planets' of smaller masses given initial circular velocities around these stars. I am then rendering this in real-time via OpenGL on Linux and DirectX11 on Windows.
My question is about rendering the scene out, NOT the N-body simulation. I have a very efficient/accurate solver working now, and it can always be improved later without affecting the rendering.
The problem obviously arises that stars are obscenely far away from each other, so the fragment shader is incapable of rendering distant stars as they are fractions of pixels in size. Using a logarithmic depth buffer works fine for standing on a planet and looking at a moon and the host star, but I am really struggling with how to deal with the distant stars. I am not interested in 'faking' it, or rendering a star map centered on the player, as the whole point is to be able to view the simulation in real time. For example, the star your planet is orbiting is ~1e6 m away and is rendered as a sphere, as it has a radius of ~1e4 m. Other stars are ~1e8 m away from you, so they show up as single lit pixels (sometimes) with a far Z-plane of ~1e13.
I think I have an idea/plan, but I think it involves knowledge/techniques I am not aware of yet.
Rationale:
Have the world-space positions of the stars on a given frame
This gives us the 'screen'-space position, or fragment position, of each star's center of mass in the fragment shader
Rather than render this as a scaled sphere, we can try to mimic what our eyes actually do: convolve this point (pixel) with an Airy disc (or Gaussian or whatever is most efficient, it doesn't matter), so that stars are rendered instead as 'blurs' on the sky, with their apparent size depending on their luminosity and distance (in essence re-creating the magnitude system for free)
Theoretically this would enable me to change the 'lens' parameters of my airy disc at will in order to produce things that look reasonably accurate/artistic.
The problem: I have no idea how to achieve this blurring effect!
I have some basic understanding of shaders and have different render passes going on currently, but this seems to involve techniques I have not stumbled upon yet, and I don't know how to achieve this effect.
TLDR: given an input of a fragment position, how can I blur it in a fragment/pixel shader with an airy disc/gaussian/etc.?
I thought a logarithmic depth buffer would work initially, but obviously that only helps with z-fighting, not dealing with angular size of far away objects.
You are over-thinking it. For stars smaller than a pixel, just render a square with an Airy disc texture. This is not "faking" - this is just how [real-time] computer graphics works.
If the lens diameter changes, calculate a new Airy disc texture.
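A hedged sketch of baking such a texture on the CPU, using the Airy intensity I(r) = (2*J1(r)/r)^2. std::cyl_bessel_j is C++17 and not available in every standard library, so treat it as an assumption (a Gaussian falloff is a reasonable stand-in); the texture size and radial scale are made-up illustration values:

    // Bake a small Airy-disc brightness texture into an 8-bit buffer.
    #include <cmath>
    #include <cstdint>
    #include <vector>

    std::vector<std::uint8_t> makeAiryTexture(int size, float radialScale) {
        std::vector<std::uint8_t> pixels(size * size);
        const float center = (size - 1) * 0.5f;
        for (int y = 0; y < size; ++y) {
            for (int x = 0; x < size; ++x) {
                float r = std::hypot(x - center, y - center) * radialScale;
                // Airy pattern: I(r) = (2*J1(r)/r)^2, with I(0) = 1.
                double I = (r < 1e-4f)
                               ? 1.0
                               : std::pow(2.0 * std::cyl_bessel_j(1.0, double(r)) / r, 2.0);
                pixels[y * size + x] = static_cast<std::uint8_t>(255.0 * I);
            }
        }
        return pixels;  // upload with glTexImage2D and draw it on a small billboard quad
    }

Changing the lens parameters then just means calling this again with a different radialScale (or a different point-spread function) and re-uploading the texture.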
For stars that are a few pixels big (do they exist?), maybe you want to render a few-pixel sphere convolved with an Airy disc once, then use that texture. Asking the GPU to do the convolution every frame is a waste of time unless you really need it. If the size really is only a few pixels, you could alternatively render a few copies of the single-pixel texture, overlapping itself and 1 pixel apart. Computing the texture, though, would allow you to have precision smaller than a pixel, if that's something you need.
For the nearby stars, the Airy disc from each pixel sums up to make a halo, I think? Then you just render a halo, instead of doing the convolution. It isn't cheating, I swear.
If you really do want to do a convolution, you can do it directly: render everything to a texture using a framebuffer, then render that texture onto the screen using a shader that reads several adjacent texture pixels and multiplies them by the kernel. Since this runs once per screen pixel times the size of the kernel, it quickly gets expensive the more pixels you sample for the convolution, so you may prefer to skip some and make it approximate. If you are not doing real-time rendering then you can make it as slow as you want, of course.
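A rough sketch of that direct pass, written as a GLSL fragment shader embedded in a C++ string; the uniform names, the vUV varying, and the 5x5 kernel size are assumptions for illustration, not anything from the original post:

    // Post-process: sample the scene texture around each pixel and weight the
    // samples by a small kernel uploaded as a uniform array.
    const char* kConvolveFrag = R"GLSL(
    #version 330 core
    in vec2 vUV;
    out vec4 fragColor;
    uniform sampler2D uScene;      // the texture the scene was rendered into
    uniform vec2      uTexelSize;  // 1.0 / resolution
    uniform float     uKernel[25]; // 5x5 weights, normalised to sum to 1

    void main() {
        vec3 sum = vec3(0.0);
        for (int y = -2; y <= 2; ++y) {
            for (int x = -2; x <= 2; ++x) {
                vec2 offset = vec2(x, y) * uTexelSize;
                sum += texture(uScene, vUV + offset).rgb * uKernel[(y + 2) * 5 + (x + 2)];
            }
        }
        fragColor = vec4(sum, 1.0);
    }
    )GLSL";

That is already 25 texture reads per screen pixel; the cost grows with the kernel's area, which is exactly why the separable trick described next is so popular.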
When game developers do a Gaussian blur (quite common) or a box blur, they do a separate X blur and Y blur. This works because the convolution of an X blur and a Y blur is a 2D blur, but I don't know if this works for the Airy disc function. It greatly reduces the number of pixels sampled for the convolution.
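For reference, a sketch of that separable version: the same 1D shader runs twice, first with uDirection = (1, 0) and then with (0, 1) on the result of the first pass. The uniform names and the 9-tap Gaussian weights are illustrative assumptions:

    const char* kBlur1DFrag = R"GLSL(
    #version 330 core
    in vec2 vUV;
    out vec4 fragColor;
    uniform sampler2D uSource;
    uniform vec2      uTexelSize;  // 1.0 / resolution
    uniform vec2      uDirection;  // (1, 0) for the X pass, (0, 1) for the Y pass

    // Center weight plus 4 symmetric taps; these sum to ~1.
    const float weights[5] = float[](0.227027, 0.194595, 0.121622, 0.054054, 0.016216);

    void main() {
        vec3 sum = texture(uSource, vUV).rgb * weights[0];
        for (int i = 1; i < 5; ++i) {
            vec2 offset = uDirection * uTexelSize * float(i);
            sum += texture(uSource, vUV + offset).rgb * weights[i];
            sum += texture(uSource, vUV - offset).rgb * weights[i];
        }
        fragColor = vec4(sum, 1.0);
    }
    )GLSL";

Two passes of 9 taps each replace one pass of 81 taps for the equivalent 9x9 Gaussian, which is where the saving comes from.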
I'm currently implementing automatically adapting exposure for use with HDR in OpenGL. For this I need to retrieve the average brightness of all pixels in the previous frame.
I've not managed to find any solid explanations of how to do this. As far as I can see there are two ways to go about it.
Use glReadPixels to copy the framebuffer to memory and average them on the CPU. This is likely to be painfully slow and doesn't make good use of the GPU.
Take the frame and render it to successively smaller FBOs using linear filtering. This lets the GPU do most of the work but it's going to require a lot of FBOs (roughly 10 for a 1080p screen).
There has got to be a better way of getting average scene brightness. Does anyone have any suggestions?
There are two options that come to mind:
Using glGenerateMipmap, which calculates the average of each 2x2 window, leaving you with the average scene brightness at the smallest level. This can be retrieved using the textureLod function in a shader. Since each mipmap level is half the size of the previous one, the correct level is log2(max), where max is the larger of the texture's width and height (see the sketch after this list).
Using compute shaders to do basically the same thing glGenerateMipmap does, but with a bigger window size, which could potentially be faster (although I never tested this).
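A minimal sketch of the first option (the GL calls are real; the texture handle, its dimensions, and the shader-side lookup are illustrative):

    #include <algorithm>
    #include <cmath>
    // plus your GL loader of choice (glew, glad, ...)

    // After rendering the HDR frame into hdrTexture (mip level 0), build the
    // mip pyramid; the 1x1 top level then holds the average of the whole frame.
    int buildLuminancePyramid(unsigned int hdrTexture, int width, int height) {
        glBindTexture(GL_TEXTURE_2D, hdrTexture);
        glGenerateMipmap(GL_TEXTURE_2D);  // each level averages 2x2 blocks of the previous one
        // The last level of a WxH texture is floor(log2(max(W, H))).
        return static_cast<int>(std::floor(std::log2(static_cast<float>(std::max(width, height)))));
    }

    // In the exposure shader, fetch that single top-level texel:
    //   vec3  avg    = textureLod(uHdrScene, vec2(0.5), float(uTopLevel)).rgb;
    //   float avgLum = dot(avg, vec3(0.2126, 0.7152, 0.0722));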
Your option 2 is not much different from using glGenerateMipmap on the texture, except that you don't need to hassle with any client-side objects like FBOs. So basically, rendering to mipmap level 0 of the texture, letting the GL generate the mipmap pyramid, and reading back just the highest-level 1x1 image is probably the easiest way to get some approximation of the average color value.
Is the only true advantage of the mipmaps that the filtering required in real time will be less demanding, as it will have been done partly in advance?
Could you not achieve identical results with linear or anisotropic filtering and a little bit more processing power?
Not with "a little bit" more processing power, but with several orders of magnitude more. As an extreme example, consider a quad with a texture mapped to it, but scaled down so the quad is rendered onto a single screen pixel. That screen pixel is then expected to have the average colour value over the entire texture.
When using mipmaps, there will be a 1x1 precomputed mipmap level that has the colour value you want. One simple lookup, fast and easy.
When not using mipmaps, to achieve the exact same effect, rendering this one pixel would mean doing a texture lookup for every single pixel in the texture and then averaging over them. We could take shortcuts by averaging over only, say, 16 equally spaced pixels, but that could make a marked difference in the output (consider what this would do to checkerboard patterns).
So whilst this theoretically could be done without mipmaps in real time, it would effectively mean calculating large portions of the entire mipmap pyramid for every pixel. There would be no visual difference, but you'd have to start measuring framerates in frames per hour.
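To make that cost difference concrete, here is an illustrative GLSL fragment, embedded as a C++ string; the 1024x1024 size, the uniform names, and uTopLevel are invented for the sketch, and the brute-force branch is exactly the thing you would never ship:

    const char* kAverageCostDemo = R"GLSL(
    #version 330 core
    out vec4 fragColor;
    uniform sampler2D uTex;      // assume a 1024x1024 texture with mipmaps
    uniform float     uTopLevel; // log2(1024) = 10

    void main() {
        // With mipmaps: one fetch from the precomputed 1x1 top level.
        vec3 cheap = textureLod(uTex, vec2(0.5), uTopLevel).rgb;

        // Without mipmaps: average every texel for this one output pixel,
        // which is over a million fetches per pixel for the same colour.
        vec3 bruteForce = vec3(0.0);
        for (int y = 0; y < 1024; ++y)
            for (int x = 0; x < 1024; ++x)
                bruteForce += texelFetch(uTex, ivec2(x, y), 0).rgb;
        bruteForce /= float(1024 * 1024);

        fragColor = vec4(cheap, 1.0);  // the two averages should match almost exactly
    }
    )GLSL";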
Let's say I have 2 textured triangles.
I want to draw one triangle over the other one, such that the top one is basically laying on top of the second one.
Now technically they are on the same plane, but they do not share the same "space" (they do not intersect), though visually it is tough to tell at a certain distance.
Basically, when these triangles are very close together (in parallel) I see texture "artifacts". I should ONLY see the triangle that is on top, but what I'm seeing is that the triangle in the background tends to "bleed" through.
Is there a way to alleviate this side effect, like increasing the depth precision or something? Maybe even increase the tessellation of the triangles?
* Update *
I am using vertex and index buffers. This is using OpenGL ES on iPhone.
I don't know if this picture will help or make things worse, but here it is: two triangles very close to each other along the Z-axis (but not touching). (NOTE: the normal vectors of these triangles point straight towards you.)
You can increase the depth precision up to 32 bits per pixel. However, if the 2 triangles are coplanar, that likely won't fix the problem. If they aren't coplanar (it's really hard to tell from your description what you're talking about), then increasing the depth precision might help. If you're using FBOs for your drawing, simply create the depth texture with 32-bits per component by using GL_DEPTH_COMPONENT32 for the internal format. There are several examples here. If you're not using FBOs, please describe how you create your context (also what OS you're on - Windows, OS X, Linux?).
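A hedged sketch of that setup for desktop GL (the FBO handle and sizes are placeholders; GL_DEPTH_COMPONENT32F is a common alternative on modern drivers, and note that OpenGL ES on the iPhone may not offer a 32-bit depth format at all):

    // Attach a 32-bit depth texture to an existing FBO.
    GLuint attachDeepDepthBuffer(GLuint fbo, int width, int height) {
        GLuint depthTex = 0;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);
        return depthTex;
    }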
You could try changing the depth test function to something more appropriate (a short usage sketch follows the list below)...
glDepthFunc(GL_ALWAYS) - Essentially disables depth testing
glDepthFunc(GL_GEQUAL) - Overwrites when greater OR equal
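For example, a minimal usage sketch; whether either mode is appropriate depends on your draw order, and drawTopTriangle() is just a placeholder for your own draw call:

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_GEQUAL);   // or GL_ALWAYS to skip the depth comparison entirely
    drawTopTriangle();        // draw the triangle that should end up on top
    glDepthFunc(GL_LESS);     // restore the default comparison afterwards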
If they are too close (assuming they are parallel, not on the same plane), you will get precision errors (like banding artifacts). Try adding a small offset to the top polygon using glPolygonOffset (http://www.opengl.org/sdk/docs/man/xhtml/glPolygonOffset.xml). Check this simple tutorial: http://www.felixgers.de/teaching/jogl/polygonOffset.html
EDIT: Also try increasing precision as #user1118321 says.
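A minimal sketch of the polygon-offset suggestion above; the factor and units values of -1 are just common starting points, and drawTopTriangle() stands in for your own draw call:

    // Bias the top triangle's depth slightly toward the camera so it reliably
    // wins the depth test over the triangle just behind it.
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-1.0f, -1.0f);   // negative factor/units pull fragments nearer
    drawTopTriangle();
    glDisable(GL_POLYGON_OFFSET_FILL);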
What you are describing is called Z-Fighting (http://en.wikipedia.org/wiki/Z-fighting).
Sadly depth buffers only have limited precision, so if the difference in depth of two polygons is smaller than the precision of the depth buffer, you can't predict which polygon will pass the depth test and be drawn.
As others have said, you can increase the precision of the depth buffer so that polygons have to be closer to each other before the z-fighting artifacts occur, or you can disable the depth test so you can be sure that rendered polygons won't be blocked by anything previously drawn.
I have textures that I'm creating and would like to antialias them. I have access to each pixel's color; given this, how could I antialias the entire texture?
Thanks
I'm sorry, but true anti-aliasing does not consist of taking the average color of the neighbouring pixels, as suggested in another answer. That will undoubtedly soften the edges, but it's not anti-aliasing, it's blurring. True anti-aliasing just cannot be done properly on a bitmap, since it has to be calculated at drawing time to tell which pixels and/or edges must be "softened" and which ones must not. For instance: imagine you draw a horizontal line which must be exactly 1 pixel thick (say "high") and must be placed exactly on an integer screen row coordinate. Obviously, you'll want it unsoftened, and a proper anti-aliasing algorithm will do that, drawing your line as a perfect row of solid pixels surrounded by perfect background-coloured pixels, with no tone blending at all. But if you take this same line once it's been drawn (i.e. as a bitmap) and apply the averaging method, you'll get blurring above and below the line, resulting in a 3-pixel-thick horizontal line, which is not the goal. Of course, everything could be achieved through the right coding, but from a very different and much more complex approach.
The basic method for anti-aliasing is: for a pixel P, sample the pixels around it. P's new color is the average of its original color with those of the samples.
You might sample more or less pixels, change the size of the area around the pixel to sample, or randomly choose which pixels to sample.
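A quick sketch of that neighbour-averaging, as a simple 3x3 box blur over a tightly packed RGBA8 buffer (the pixel layout is an assumption):

    #include <cstdint>
    #include <vector>

    // Average each pixel with its 3x3 neighbourhood; edges are clamped.
    std::vector<std::uint8_t> boxBlur3x3(const std::vector<std::uint8_t>& src,
                                         int width, int height) {
        std::vector<std::uint8_t> dst(src.size());
        auto clampi = [](int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); };
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                for (int c = 0; c < 4; ++c) {
                    int sum = 0;
                    for (int dy = -1; dy <= 1; ++dy) {
                        for (int dx = -1; dx <= 1; ++dx) {
                            int sx = clampi(x + dx, 0, width - 1);
                            int sy = clampi(y + dy, 0, height - 1);
                            sum += src[(sy * width + sx) * 4 + c];
                        }
                    }
                    dst[(y * width + x) * 4 + c] = static_cast<std::uint8_t>(sum / 9);
                }
            }
        }
        return dst;
    }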
As others have said, though: anti-aliasing isn't really something that's done to an image that's already a bitmap of pixels. It's a technique that's implemented in the 2D/3D rendering engine.
"Anti-aliasing" can refer to a broad range of different techniques. None of those would typically be something that you would do in advance to a texture.
Maybe you mean mip-mapping? http://en.wikipedia.org/wiki/Mipmap
It's not very meaningful to ask about antialiasing in terms this vague. It all depends on the nature of the textures and how they will be used.
Generally though, if you don't have control over the display environment and your textures might be scaled, rotated, or composited with other elements you don't have any control over, you should use transparency to indicate where your image ends and where it only partially fills a pixel.
It's also generally good to avoid sharp edges and features that are small relative to the amount of scaling that might be done.