Nsight showing a discontinuous OpenGL pixel history?

I'm trying to figure out how a particular fragment is getting its color. The pixel should not be brown (R=136, G=109, B=95), and yet it is. I loaded up Nsight and ran a pixel history, but I'm not understanding what it's showing me.
Event #1787 failed the depth test, so presumably that event had no effect. So how does it go from R=135,G=206,B=235 in the "RT After" of Event #2 to the brown we see in "RT Before" of Event #1830?

Related

Metal Texture updating in a very weird way

I am in the process of trying to write a Monte Carlo Path tracer using Metal. I have the entire pipeline working (almost) correctly. I have some weird banding issues, but that seems like it has more to do with my path tracing logic than with Metal.
For people who may not have experience with path tracers: it generates rays from the camera, bounces them around the scene in random directions up to a given depth (8 in my case), and at every intersection/bounce it shades the ray with that material's color. The end goal is for it to "converge" to a very nice and clean image by averaging out many iterations over and over again. In my code, the Metal compute pipeline is run over and over again, with each run of the pipeline representing one iteration.
The way I've structured my compute pipeline is using the following stages:
Generate Rays
Looping 8 times (i.e. bouncing the ray around 8 times):
1 - Compute Ray Intersections and Bounce it off that intersection (recalculate direction)
2 - "Shade" the ray's color based on that intersection
Get the average of all iterations by reading the current texture buffer's color, multiplying it by the iteration count, adding the ray's color, and dividing by iteration+1. Then store that new combined_color in the same exact buffer location.
So on an even higher level what my entire Renderer does is:
1 - Do a bunch of ray calculations in a compute shader
2 - Update Buffer (which is the MTKView's drawable)
The problem is that for some reason my texture cycles between 3 different levels of color accumulation and keeps glitching between different colors, almost as if three different programs were trying to write to the same buffer. This can't be due to a race condition, because we're reading from and writing to the same buffer location... right? How can this be happening?
Here is my system trace of the first few iterations:
As you can see, for the first few iterations, it doesn't render anything for some reason, and they're super fast. I have no clue why that is. Then later, the iterations are very slow. Here is a close-up of that first part:
I've tried outputting just a single iteration's color every time, and it looks perfectly fine; my picture just doesn't converge to a clean image (which is what averaging multiple iterations is supposed to achieve).
I've tried using semaphores to synchronize things, but all I end up with is a stalled program because I keep waiting for the command buffer and for some reason it is never ready. I think I may just not be getting semaphores. I've tried looking things up and I seem to be doing it right.
Help... I've been working on this bug for two days and I can't fix it. I've tried everything; I just do not know enough to even begin to discern the problem. Here is a link to the project. The system trace file can be found under System Traces/NaiveIntegrator.trace. I hate to just paste in my code, and I understand this is not recommended on SO, but the problem is I have no idea where the bug could be. I promise that as soon as the problem is fixed I'll paste the relevant code snippets here.
If you run the code you will see a simple cornell box with a yellow sphere in the middle. If you leave the code running for a while you'll see that the third image in the cycle eventually converges to a decent image (ignoring the banding effect on the floor that is probably irrelevant). But the problem is that the image keeps flickering between 3 different images.

PBR reflection strange behaviour

I've implemented a PBR renderer using OpenGL in my deferred rendering engine.
The problem is that strange seams appear on my object as the roughness increases.
You can see this on these images:
I've found out that the problem is with the filtering. Using GL_LINEAR_MIPMAP_LINEAR gives the results shown above. With GL_NEAREST_MIPMAP_LINEAR the strange seams aren't present, but on rougher surfaces you can see the individual pixels of the texture (as you can see in the image below).
As Robinson mentioned in his comment, reading the article he posted showed me the answer: I just needed to enable GL_TEXTURE_CUBE_MAP_SEAMLESS for my cubemap texture.
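For reference, the fix is a single piece of GL state (core since OpenGL 3.2; this sketch assumes such a context is already current):

```cpp
// Enable seamless filtering across cube map faces. Without this, each
// face is filtered independently, and visible seams appear at face edges
// once the rougher (smaller, pre-filtered) mip levels are sampled.
glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);
```

Note that this is global state, not per-texture state, so enabling it once at startup is enough.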

How to minimize GPU overdraw in Google Glass?

I am trying to understand how the recently announced "GPU overdraw" feature works. Why are certain parts of the screen drawn twice or three times? How does this really work? Does it have anything to do with nesting of layouts? How can I minimize this overdraw? In Windows Phone we have an option, BitmapCacheMode, which caches the rendered output and prevents redrawing over and over again. Is there anything similar in Android? (The snippet below is from the official Google docs.)
With the latest version of Glass, developers have an option of turning on GPU overdraw. When you turn this setting on, the system will color in each pixel on the screen depending on how many times it was drawn in the last paint cycle. This setting helps you debug performance issues with deeply nested layouts or complex paint logic.
Pixels drawn in their original color were only drawn once.
Pixels shaded in blue were drawn twice.
Pixels shaded in green were drawn three times.
Pixels shaded in light red were drawn four times.
Pixels shaded in dark red were drawn five or more times.
Source - Google official docs.
The GPU overdraw feature is simply a debugging tool for visualizing overdraw. Excessive overdraw can cause poor drawing/animation performance as it eats up time on the UI thread.
This feature has been present in Android for some time, Glass simply exposed a menu option to turn it on. See the "Visualizing overdraw" section of http://www.curious-creature.org/docs/android-performance-case-study-1.html for more information.
Overdraw is not necessarily caused by nested layouts. Overdraw occurs when you have views over other views that draw to the same region of the screen. For instance, it is common to set a background on your activity, but then have a full screen view that also has a background. In this instance, you are drawing every pixel on the screen at least 2 times. To fix this specific issue, you can remove the background on the activity since it is never visible due to the child view.
Currently Android does not have the ability to automatically detect and prevent overdraw, so it is on the developer to account for this in their implementation.

Horizontal Tearing DirectX9

I've been trying to develop a video capture display application with DirectX9 under Win7, using a vertex shader and a pixel shader (very basic ones). However, the image being displayed shows some tearing, always at the same location on the screen. The specs are the following:
Video is being captured via a webcam
Display is not in fullscreen mode
Refresh rate of screen is 60Hz
D3DPRESENT_INTERVAL_ONE is being used to sync presentation to the refresh rate (found on some forum; it doesn't work, though)
I tried every available value for this last parameter, only to realize that D3DPRESENT_INTERVAL_ONE gives me consistent tearing (always at the same position on screen).
I know that "enabling" V-Sync could maybe solve my problem, but I can't seem to find any info about this on the web (Yes I know, DirectX9 is getting outdated), so any help would be very appreciated!
Use D3DPRESENT_INTERVAL_DEFAULT if it doesn't give you tearing.
This flag also enables V-sync. From the documentation:
D3DPRESENT_INTERVAL_DEFAULT uses the default system timer resolution
whereas the D3DPRESENT_INTERVAL_ONE calls timeBeginPeriod to enhance
system timer resolution. This improves the quality of vertical sync,
but consumes slightly more processing time. Both parameters attempt to
synchronize vertically.
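For completeness, the presentation interval is set in the present parameters when the device is created (or reset). A sketch, assuming a windowed-mode D3D9 device on Win7:

```cpp
// Hypothetical device setup: request vsync'd presentation in windowed mode.
D3DPRESENT_PARAMETERS pp = {};
pp.Windowed             = TRUE;
pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
pp.BackBufferFormat     = D3DFMT_UNKNOWN;           // match the desktop format
pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;  // sync Present() to vblank
// Note: in windowed mode, the interval only throttles presentation; it is
// desktop composition (DWM/Aero) that actually prevents tearing. With DWM
// disabled (e.g. a Basic theme on Win7), windowed D3D9 apps can still tear.
```

So if tearing persists at a fixed screen location regardless of the interval, it is worth checking whether desktop composition is enabled, or whether the capture thread is updating the texture mid-frame.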

Image disappears after first frame, iOS, OpenGL

I wrote a program which should render a cone to the screen. The problem is that in the first frame the cone displays correctly on my iOS Simulator screen; after that, the cone disappears. When I set a breakpoint, all the data looks OK to me.
I am using OpenGL ES 1.0.
It is true that you should show us some code... but for what it's worth, I had the same problem, and in my case the error was that I was calling glEnableClientState(...) at the initialisation stage while calling glDisableClientState(...) every frame (every Render() call).
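In other words, keep the enable/disable calls paired within the same frame. A per-frame sketch in OpenGL ES 1.0 (coneVertices and coneVertexCount are assumed app-side data, named here only for illustration):

```cpp
// Enable and disable client state symmetrically inside one frame, rather
// than enabling once at init and disabling every frame, which leaves the
// vertex array state off from the second frame onward (cone vanishes).
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, coneVertices);
glDrawArrays(GL_TRIANGLE_FAN, 0, coneVertexCount);
glDisableClientState(GL_VERTEX_ARRAY);
```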