What is multisample per pixel in DirectX 11 DXGI_SAMPLE_DESC? - C++

I was reading the documentation for DXGI_SWAP_CHAIN_DESC and I came across DXGI_SAMPLE_DESC:
Count
Type: UINT
The number of multisamples per pixel.
Now what exactly are multisamples per pixel?

DXGI_SAMPLE_DESC, as you surmised, is for specifying Multi-Sample Anti-Aliasing (MSAA).
That said, you should be aware that the SwapChain support for MSAA is not something you should use anymore. As such, just always set DXGI_SWAP_CHAIN_DESC.SampleDesc.Count = 1; and DXGI_SWAP_CHAIN_DESC.SampleDesc.Quality = 0;.
Instead, to use MSAA you should explicitly create your own MSAA render target and explicitly resolve the result yourself as part of your presentation of the results to the single-sample SwapChain. For details on why and how, see this blog post series.
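For reference, a minimal sketch of that approach in Direct3D 11 might look like the following (the format, the 4x sample count, and names such as msaaTarget are illustrative, not taken from the blog posts):

    #include <d3d11.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Assumes device, context, backBuffer (the single-sample swap chain buffer),
    // width, and height already exist.
    void RenderSceneWithMsaa(ID3D11Device* device, ID3D11DeviceContext* context,
                             ID3D11Texture2D* backBuffer, UINT width, UINT height)
    {
        // Confirm the format supports 4x MSAA on this device.
        UINT qualityLevels = 0;
        device->CheckMultisampleQualityLevels(DXGI_FORMAT_B8G8R8A8_UNORM, 4, &qualityLevels);

        // Describe and create the multisampled render target.
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
        desc.SampleDesc.Count = 4;    // 4 samples per pixel
        desc.SampleDesc.Quality = 0;  // standard sample pattern
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET;

        ComPtr<ID3D11Texture2D> msaaTarget;
        device->CreateTexture2D(&desc, nullptr, &msaaTarget);

        ComPtr<ID3D11RenderTargetView> msaaRTV;
        device->CreateRenderTargetView(msaaTarget.Get(), nullptr, &msaaRTV);

        // ... bind msaaRTV (plus a matching MSAA depth buffer) and draw the scene ...

        // Resolve the multisampled image into the single-sample backbuffer,
        // then Present() the swap chain as usual.
        context->ResolveSubresource(backBuffer, 0, msaaTarget.Get(), 0,
                                    DXGI_FORMAT_B8G8R8A8_UNORM);
    }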
Note that you can use MSAA SwapChains in DirectX 11 with the older DXGI_SWAP_EFFECT_DISCARD and DXGI_SWAP_EFFECT_SEQUENTIAL swap effects, and the DirectX 11 runtime will do the resolve automatically. Per the blog post, this is NOT supported for DirectX 12 or with the modern DXGI_SWAP_EFFECT_FLIP_DISCARD or DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL flip effects. This is really a 'toy' setup, as any production renderer will do additional processing after the resolve from multi-sample to single-sample before putting the result into the swapchain for display.
As you are likely new to DirectX 11, you may want to look at DirectX Tool Kit. I have a tutorial that covers MSAA.

OK, it is anti-aliasing (my bad, I didn't know that).
Anti-aliasing is a technique used to get rid of the jaggies that form on screen. Since pixels are rectangular, they create small jagged edges when used to display curved shapes. Anti-aliasing tries to smooth out the shape and produce cleaner-looking edges.

Related

How Can I implement MSAA on DX12?

I searched many other questions and samples, but I still can't understand what I must do.
What I know about this process is:
Create a render target for MSAA - different from the swap chain's backbuffer.
Draw everything (like meshes) onto the MSAA render target.
Copy the contents of the MSAA render target to the current backbuffer using the ResolveSubresource function.
Is this the right process? Or is there a part that I left out?
These samples demonstrate using MSAA with DirectX12:
https://github.com/microsoft/Xbox-ATG-Samples/tree/master/PCSamples/IntroGraphics/SimpleMSAA_PC12
https://github.com/microsoft/Xbox-ATG-Samples/tree/master/UWPSamples/IntroGraphics/SimpleMSAA_UWP12
I also cover this (among other topics) in this blog series.
Per the comments, you can also find MSAA covered in the DirectX Tool Kit for DX12 tutorials.
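For reference, the resolve in step 3 is a single command-list call plus the resource-state transitions the debug layer expects; here is a minimal sketch with illustrative names (msaaTarget, backBuffer):

    #include <d3d12.h>

    void ResolveMsaaToBackbuffer(ID3D12GraphicsCommandList* cmdList,
                                 ID3D12Resource* msaaTarget,
                                 ID3D12Resource* backBuffer)
    {
        // Transition both resources into the states ResolveSubresource requires.
        D3D12_RESOURCE_BARRIER barriers[2] = {};
        barriers[0].Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barriers[0].Transition.pResource = msaaTarget;
        barriers[0].Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        barriers[0].Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
        barriers[0].Transition.StateAfter = D3D12_RESOURCE_STATE_RESOLVE_SOURCE;
        barriers[1].Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barriers[1].Transition.pResource = backBuffer;
        barriers[1].Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        barriers[1].Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
        barriers[1].Transition.StateAfter = D3D12_RESOURCE_STATE_RESOLVE_DEST;
        cmdList->ResourceBarrier(2, barriers);

        // The "copy" is actually a resolve, which averages the samples.
        cmdList->ResolveSubresource(backBuffer, 0, msaaTarget, 0,
                                    DXGI_FORMAT_B8G8R8A8_UNORM);

        // Put the backbuffer back into PRESENT state before Present().
        D3D12_RESOURCE_BARRIER toPresent = barriers[1];
        toPresent.Transition.StateBefore = D3D12_RESOURCE_STATE_RESOLVE_DEST;
        toPresent.Transition.StateAfter = D3D12_RESOURCE_STATE_PRESENT;
        cmdList->ResourceBarrier(1, &toPresent);
    }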

10-bit alternative to GL_SRGB_ALPHA?

We are currently using GL_SRGB8_ALPHA8 for FBO color correction but it causes significant color banding in darker scenes.
Is there a version of GL_SRGB8_ALPHA8 that has 10 bit per color channel (e.g. GL_RGB10_A2)? If not, what workarounds are there for this use case?
Contrast has been added to the attached image to make the banding more visible, but it's still noticeable in the source as well.
On the surface, Direct3D 9 seems to support this because it doesn't encode sRGB into the formats themselves. Instead it sets the sampler to decode to linear space, and I can't see how that wouldn't work properly on D3D9, or else the textures would be filtered incorrectly; I don't know how it would implement it otherwise. Even with GL_EXT_texture_sRGB_decode it was decided (there's a note in the extension) not to enable this D3D9-like behavior.
As usual, OpenGL seems to always be chasing the ghost of this now-old API. It could just have something like D3DSAMP_SRGBTEXTURE and it would have parity; presumably the hardware implements it. Any banding could also depend on the monitor, since ultimately the output has to be down-converted to the monitor's color depth, which is probably much lower than 10 bits.
In the end we went with a linear 16-bit floating-point format.
I suspect drivers use it internally for sRGB anyway.
At any rate, as @mick-p noted, we're always limited by the display's color depth.
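For anyone trying the same workaround, a minimal sketch of such an FBO setup (width and height are assumed to exist, along with a context that supports float color attachments):

    // Render to a linear 16-bit float color attachment instead of GL_SRGB8_ALPHA8.
    GLuint fbo = 0, colorTex = 0;
    glGenFramebuffers(1, &fbo);
    glGenTextures(1, &colorTex);

    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_HALF_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    // Keep shading in linear space and apply the sRGB transfer function only
    // at the very end (e.g. via GL_FRAMEBUFFER_SRGB on the final target).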

DirectX11 Non-Solid wireframe

I'm attempting to draw a 100x100 grid in DirectX11 using C++, but the following issue happens:
The image above shows a grid I am drawing in wireframe, using a rasterizer with its fillmode set to 'D3D11_FILL_WIREFRAME' and a 'D3D11_PRIMITIVE_TOPOLOGY_LINELIST' topology. The lines of the wireframe appear staggered rather than straight, with some parts of the wireframe missing.
As I'm not sure what this issue is called, I'm not entirely sure what I should be searching for, so any help is appreciated.
This is a classic problem of 'aliasing'.
With Direct3D 11, you can either use MSAA, which is a general anti-aliasing option, or use a specific line-smoothing algorithm if you are not using MSAA. See D3D11_RASTERIZER_DESC AntialiasedLineEnable and MultisampleEnable.
See Aliasing and Multisample anti-aliasing
UPDATE: I've added MSAA and the AA mode to the DirectX Tool Kit tutorial.
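A minimal sketch of a rasterizer state with the line-AA flag set (device and context are assumed to exist; the cull mode here is illustrative):

    D3D11_RASTERIZER_DESC rsDesc = {};
    rsDesc.FillMode = D3D11_FILL_WIREFRAME;
    rsDesc.CullMode = D3D11_CULL_NONE;
    rsDesc.DepthClipEnable = TRUE;
    rsDesc.MultisampleEnable = FALSE;     // set TRUE instead when rendering to an MSAA target
    rsDesc.AntialiasedLineEnable = TRUE;  // alpha-blended line smoothing for non-MSAA targets

    Microsoft::WRL::ComPtr<ID3D11RasterizerState> rsState;
    device->CreateRasterizerState(&rsDesc, rsState.GetAddressOf());
    context->RSSetState(rsState.Get());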
I stumbled across the solution to this and it wasn't what I expected at all. Since the window I created had border/title bars, the window's client area was being misrepresented: when creating my graphics device, I was initializing it with the window size rather than the window's client size. Somehow, this caused the issue above.
I fixed this by implementing a 'GetClientSize' method for the window (using GetClientRect rather than GetWindowRect).
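For completeness, the client-size query amounts to something like this (hwnd is assumed to be the window handle):

    RECT rc = {};
    GetClientRect(hwnd, &rc);
    const UINT clientWidth  = static_cast<UINT>(rc.right - rc.left);
    const UINT clientHeight = static_cast<UINT>(rc.bottom - rc.top);
    // Use clientWidth/clientHeight for the swap chain backbuffer and the viewport.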

sRGB correction for textures in OpenGL on iOS

I am experiencing the issue described in this article where the second color ramp is effectively being gamma-corrected twice, resulting in overbright and washed-out colors. This is in part a result of my using an sRGB framebuffer, but that is not the actual reason for the problem.
I'm testing textures in my test app on iOS8, and in particular I am currently using a PNG image file and using GLKTextureLoader to load it in as a cubemap.
By default, textures are treated NOT as being in sRGB space (which they are invariably saved in by the image editing software used to build the texture).
The consequence of this is that Apple has made GLKTextureLoader do the glTexImage2D call for you, and it invariably calls it with the GL_RGB8 setting, whereas for actual correctness in later color operations we have to un-correct the gamma in order to obtain linear brightness values in our textures for our shaders to sample.
Now I can actually see the argument that it is not required of most mobile applications to be pedantic about color operations and color correctness as applied to advanced 3D techniques involving color blending. Part of the issue is that it's unrealistic to use the precious shared device RAM to store textures at any bit depth greater than 8 bits per channel, and if we read our JPG/PNG/TGA/TIFF and gamma-uncorrect its 8 bits of sRGB into 8 bits linear, we're going to degrade quality.
So the process for most apps is to just happily toss linear color correctness out the window, ignore gamma correction, and do blending in sRGB space. This suits Angry Birds very well, as it is a game with no shading or blending, so it's perfectly sensible for it to do all operations in gamma-corrected color space.
So this brings me to the problem that I have now. I need to use EXT_sRGB and GLKit makes it easy for me to set up an sRGB framebuffer, and this works great on last-3-or-so-generation devices that are running iOS 7 or later. In doing this I address the dark and unnatural shadow appearance of an uncorrected render pipeline. This allows my lambertian and blinn-phong stuff to actually look good. It lets me store sRGB in render buffers so I can do post-processing passes while leveraging the improved perceptual color resolution provided by storing the buffers in this color space.
But the problem now, as I start working with textures, is that it seems I can't even use GLKTextureLoader as it was intended: I just get a mysterious error (code 18) when I set the sRGB option flag (GLKTextureLoaderSRGB). And it's impossible to debug, as there's no source code to go with it.
So I was thinking I could go build my texture loading pipeline back up with glTexImage2D and use GL_SRGB8 to specify that I want to gamma-uncorrect my textures before I sample them in the shader. However a quick look at GL ES 2.0 docs reveals that GL ES 2.0 is not even sRGB-aware.
At last I find the EXT_sRGB spec, which says
Add Section 3.7.14, sRGB Texture Color Conversion
If the currently bound texture's internal format is one of SRGB_EXT or
SRGB_ALPHA_EXT the red, green, and blue components are converted from an
sRGB color space to a linear color space as part of filtering described in
sections 3.7.7 and 3.7.8. Any alpha component is left unchanged. Ideally,
implementations should perform this color conversion on each sample prior
to filtering but implementations are allowed to perform this conversion
after filtering (though this post-filtering approach is inferior to
converting from sRGB prior to filtering).
The conversion from an sRGB encoded component, cs, to a linear component,
cl, is as follows.
cl = cs / 12.92                    if cs <= 0.04045
cl = ((cs + 0.055) / 1.055)^2.4    if cs > 0.04045
Assume cs is the sRGB component in the range [0,1]."
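For reference, that piecewise conversion maps directly to code (a quick sanity-check snippet, nothing iOS-specific about it):

    #include <cmath>

    // sRGB -> linear, cs in [0, 1], per the EXT_sRGB spec quoted above.
    float SrgbToLinear(float cs)
    {
        return (cs <= 0.04045f) ? cs / 12.92f
                                : std::pow((cs + 0.055f) / 1.055f, 2.4f);
    }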
Since I've never dug this deep when implementing a game engine for desktop hardware (where I would expect color resolution considerations to be essentially moot when using render buffers of 16-bit depth per channel or higher), my understanding of how this works is unclear, but this paragraph does go some way toward reassuring me that I can have my cake and eat it too with respect to retaining all 8 bits of color information, if I load the textures using the SRGB_EXT image storage format.
Here in OpenGL ES 2.0, with this extension, I can use SRGB_EXT or SRGB_ALPHA_EXT rather than the analogous GL_SRGB or GL_SRGB_ALPHA from desktop GL.
My apologies for not presenting a simple answerable question. Let it be this one: Am I barking up the wrong tree here or are my assumptions more or less correct? Feels like I've been staring at these specs for far too long now. Another way to answer my question is if you can shed some light on the GLKTextureLoader error 18 that I get when I try to set the sRGB option.
It seems like there is yet more reading for me to do, as I have to decide whether to branch my code into one codepath that uses GL ES 2.0 with EXT_sRGB and another that uses GL ES 3.0. Comparing the documentation for glTexImage2D across GL versions, ES 3.0 certainly looks very promising and appears closer to OpenGL 4 than the others, so I really like that ES 3 will bring mobile devices a lot closer to the API used on the desktop.
Am I barking up the wrong tree here or are my assumptions more or less correct?
Your assumptions are correct. If the GL_EXT_sRGB OpenGL ES extension is supported, both sRGB framebuffers (with automatic conversion from linear to gamma-corrected sRGB on write) and sRGB texture formats (with automatic conversion from sRGB to linear RGB when sampling) are available, so that is definitely the way to go if you want to work in a linear color space.
I can't help with that GLKit issue, no idea about that.
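If it helps, uploading a texture as sRGB under that extension is just a different format in the glTexImage2D call; a minimal sketch (pixels, width, and height are assumed to come from your own image loader):

    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // With GL_EXT_sRGB the internalformat and format must both be SRGB_ALPHA_EXT;
    // the GPU then converts sRGB -> linear whenever the shader samples the texture.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_ALPHA_EXT, width, height, 0,
                 GL_SRGB_ALPHA_EXT, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);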

Antialiasing an entire scene after rasterization

I ran into an issue while working on some OpenGL code. The thing is that I want to achieve full-scene anti-aliasing and I don't know how. I turned on forced anti-aliasing in the Nvidia control panel and that was the result I really meant to get. Right now I do it with GL_POLYGON_SMOOTH, which is obviously neither efficient nor good-looking. Here are the questions:
1) Should I use multisampling?
2) Where in the pipeline does OpenGL blend the colors for antialiasing?
3) What alternatives exist besides GL_*_SMOOTH and multisampling?
GL_POLYGON_SMOOTH is not a method to do Full-screen AA (FSAA).
Not sure what you mean by "not efficient" in this context, but it certainly is not good looking, because of its tendency to blend in the middle of meshes (at the triangle edges).
Now, with respect to FSAA and your questions:
Multisampling (aka MSAA) is the standard way today to do FSAA. The usual alternative is super-sampling (SSAA), which consists of rendering at a higher resolution and downsampling at the end. It's much more expensive.
The specification says that, logically, the GL keeps a sample buffer (4x the size of the pixel buffer for 4x MSAA) and a pixel buffer (for a total of 5x the memory), and on each sample write to the sample buffer it updates the pixel buffer with the value resolved from the current 4 samples. (That step is not called blending, by the way; blending is what happens at the time of the write into the sample buffer, controlled by glBlendFunc et al.) In practice, this is not what happens in hardware, though. Typically you write only to the sample buffer (and the hardware usually tries to compress the data), and when the time comes to use it, the GL implementation resolves the full buffer at once before the usage happens. This also helps if you actually use the sample buffer directly (then there is no need to resolve at all).
I covered SSAA and its cost above. The latest technique is called morphological anti-aliasing (MLAA), and it is actively being researched. The idea is to do a post-processing pass on the fully rendered image and anti-alias whatever looks like a sharp edge. Bottom line: it's not implemented by the GL itself; you have to code it as a post-processing pass. I include it for reference, but it can cost quite a lot.
I wrote a post about this here: Getting smooth, big points in OpenGL
You have to specify WGL_SAMPLE_BUFFERS and WGL_SAMPLES (or the GLX-prefixed equivalents on X.Org/GLX) before creating your OpenGL context, when selecting a pixel format or visual.
On Windows, make sure that you use wglChoosePixelFormatARB() if you want a pixel format with extended traits, NOT ChoosePixelFormat() from GDI/GDI+. wglChoosePixelFormatARB has to be queried with wglGetProcAddress from the ICD driver, so you need to create a dummy OpenGL context beforehand. WGL function pointers are valid even after the OpenGL context is destroyed.
WGL_SAMPLE_BUFFERS is a boolean (1 or 0) that toggles multisampling. WGL_SAMPLES is the number of samples per pixel you want, typically 2, 4, or 8.
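A minimal sketch of the attribute list (the WGL_*_ARB tokens come from wglext.h, and the wglChoosePixelFormatARB pointer is assumed to have been fetched already on a dummy context as described above):

    const int attribs[] = {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
        WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,     32,
        WGL_DEPTH_BITS_ARB,     24,
        WGL_SAMPLE_BUFFERS_ARB, 1,   // enable multisampling
        WGL_SAMPLES_ARB,        4,   // 4 samples per pixel
        0                            // terminator
    };

    int pixelFormat = 0;
    UINT numFormats = 0;
    if (!wglChoosePixelFormatARB(hdc, attribs, nullptr, 1, &pixelFormat, &numFormats)
        || numFormats == 0)
    {
        // No multisampled format available; fall back to a plain pixel format.
    }
    // Then SetPixelFormat(hdc, pixelFormat, &pfd) and create the real context.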