Here is my issue. I'm tessellating complex, self-intersecting, multi-contour polygons with hundreds of vertices. The GLU tessellator crashes by dereferencing a null pointer (0x00000000). It never crashes when the polygons do not self-intersect; if there is no intersection, it will not crash under any circumstances. I check for NULL everywhere in my application, so I'm sure the problem is not on my side. I found an old GLU 1.2 build from SGI and it never crashes, but the Mesa and Windows versions, both based on GLU 1.3, do. Strangely enough, nothing crashes in debug mode. To get more information I compiled Mesa's GLU and saw that an assert fails first; if I comment that out, a pointer is set to NULL by a function whose malloc fails. I'm very unsure at this point what to do. What could I do to try to solve this issue? Should I just try to make a version of Mesa's GLU that works for me? I'm unsure how to proceed from here.
After more debugging I see I'm getting GLU_TESS_ERROR_5, which I think means a coordinate was too large, but I ran a for loop over all my coordinates testing for values greater than that limit and found none :(
At least on Windows, GLU_TESS_ERROR_5 means that one of the coordinates was too large. Specifically, GLU requires that the coordinates be small enough to be multiplied together without overflow. The limit is defined by the constant GLU_TESS_MAX_COORD (the error code itself corresponds to GLU_TESS_COORD_TOO_LARGE). If that constant exists in your headers, check that the absolute value of every coordinate is less than it. If not, it should be safe to check that the coordinates are between -10^150 and 10^150. If that doesn't work, try progressively smaller ranges.
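If you want to guard against this in code, a minimal check might look like the following sketch (the function name is made up; it assumes your contour data is a flat array of GLdouble coordinates):

    #include <GL/glu.h>
    #include <math.h>
    #include <stddef.h>

    /* Most glu.h headers define the limit; fall back to the value from
       the GLU 1.3 spec if this one does not. */
    #ifndef GLU_TESS_MAX_COORD
    #define GLU_TESS_MAX_COORD 1.0e150
    #endif

    /* Returns nonzero if every coordinate is small enough for the
       tessellator. The inverted comparison also rejects NaNs. */
    static int coords_in_range(const GLdouble *coords, size_t count)
    {
        size_t i;
        for (i = 0; i < count; ++i) {
            if (!(fabs(coords[i]) < GLU_TESS_MAX_COORD))
                return 0;
        }
        return 1;
    }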
It may also be that there's a problem with the geometry itself that triggers a bug elsewhere in GLU. Try to find the simplest polygon that reproduces the error.
If that doesn't work, see if there is a newer version of GLU available. I don't know about Mesa, but the version of OpenGL shipped with VC++ is notoriously out of date.
If all else fails, you could try using another library to perform the tessellation. After a quick search, Triangle appears to be a good candidate.
Related
I have an OpenGL test application that is producing incredibly unusual results. When I start up the application it may or may not feature a severe graphical bug.
It might produce an image like this:
http://i.imgur.com/JwPoDrh.jpg
Or like this:
http://i.imgur.com/QEYwhBY.jpg
Or just the correct image, like this:
http://i.imgur.com/zUJbwCM.jpg
The scene consists of one spinning colored cube (made of 12 triangles) with a simple shader that colors pixels based on the absolute value of their model-space coordinates. The junk faces appear to spin with the cube as though they were attached to it, and junk triangles or quads often flash on the screen briefly as though they were rendered in 2D.
The thing I find most unusual is how inconsistent the behavior is: starting the exact same application repeatedly, without me changing anything else on the system, produces different results (sometimes bugged, sometimes not), and the arrangement of the junk faces isn't consistent either.
I can't really post source code for the application as it is very lengthy and the actual OpenGL calls are spread out across many wrapper classes and such.
This is occurring under the following conditions:
Windows 10 64 bit OS (although I have observed very similar behavior under Windows 8.1 64 bit).
AMD FX-9590 CPU (Clocked at 4.7GHz on an ASUS Sabertooth 990FX).
AMD Radeon HD 7970 GPU (it is a couple of years old, and occasionally areas of the screen in 3D applications become scrambled, but nothing on the scale of what I'm experiencing here).
Using SDL (https://www.libsdl.org/) for window and context creation.
Using GLEW (http://glew.sourceforge.net/) for OpenGL.
Using OpenGL versions 1.0, 3.3 and 4.3 (I'm assuming SDL is indeed using the versions I instructed it to).
AMD Catalyst driver version 15.7.1 (Driver Packaging Version listed as 15.20.1062.1004-150803a1-187674C, although again I have seen very similar behavior on much older drivers).
Catalyst Control Center lists my OpenGL version as 6.14.10.13399.
This looks like a broken graphics card to me, most likely a problem with the memory (either the memory chips themselves or a soldering problem). Artifacts like the ones you see can happen when, for some reason, the address for a memory operation does not fully settle, or is not set at all, before the read starts; that can be caused by a bad connection between the GPU and the memory (failed solder joints) or by the memory itself failing.
Solution: buy a new graphics card. You may try resoldering it using a reflow process; there are some tutorials on how to do this DIY, but a proper reflow oven gives better results.
Whenever I call a function to swap buffers I get tons of errors from glDebugMessageCallback saying:
glVertex2f has been removed from OpenGL Core context (GL_INVALID_OPERATION)
I've tried both GLFW and freeglut, and neither works correctly.
I haven't used glVertex2f, of course. I even went as far as deleting all my rendering code to see if I could find what was causing it, but the error is still there, right after glutSwapBuffers/glfwSwapBuffers.
Using single-buffering causes no errors either.
I've initialized the context as version 4.3, core profile, with the forward-compatibility flag set.
As explained in comments, the problem here is actually third-party software and not any code you yourself wrote.
When software such as the Steam overlay or FRAPS needs to draw something on top of an OpenGL application, it usually does so by hooking/injecting some code into your application's SwapBuffers implementation at run-time.
You are dealing with a piece of software (RivaTuner) that still uses immediate mode to draw its overlay and that is the source of the unexplained deprecated API calls on every buffer swap.
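If you want to verify this yourself, the debug callback receives a source parameter, and making the output synchronous lets you break inside the callback and see a call stack that points into the injected overlay DLL rather than your own code. A minimal setup sketch (assuming GLEW and a context created with the debug flag; the function names here are illustrative):

    #include <GL/glew.h>
    #include <stdio.h>

    /* Prints every debug message together with its source enum.
       GL_DEBUG_SOURCE_API marks messages triggered by GL calls made
       inside the process -- including calls injected by overlays. */
    static void GLAPIENTRY debug_cb(GLenum source, GLenum type, GLuint id,
                                    GLenum severity, GLsizei length,
                                    const GLchar *message, const void *user)
    {
        (void)severity; (void)length; (void)user;
        fprintf(stderr, "source=0x%X type=0x%X id=%u: %s\n",
                source, type, id, message);
    }

    void install_debug_logging(void)
    {
        glEnable(GL_DEBUG_OUTPUT);
        /* Synchronous output: the callback runs on the thread that made
           the offending call, so a breakpoint here reveals the caller. */
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
        glDebugMessageCallback(debug_cb, NULL);
    }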
Do you have code you can share? Either the driver is buggy or something else in your process is calling glVertex. You could try using glloadgen to build a loader library that covers only the OpenGL 4.3 symbols (and only those symbols), so that any use of a symbol outside the 4.3 spec causes a linker error.
Is there a way I can debug a GLSL shader, including breakpoints and data tracking?
I've seen simple tools that show me which shaders make up my shader programs, but nothing I can set breakpoints in.
I need to check the values of matrices, and just writing to gl_FragColor won't work since there are too many values to compare and check.
Is there a simple way of doing this, besides writing down the values I think I should have and doing the math out elsewhere?
It's really annoying when I'm trying to understand all of OpenGL while already knowing how to navigate around DirectX. I can see why some people get scared away from OpenGL when resources are this hard to find.
According to the development updates for NVIDIA Nsight, they recently added features for GLSL debugging (https://developer.nvidia.com/nsight-visual-studio-edition-3_0-new-features). I would look there first. glslDevil (http://www.vis.uni-stuttgart.de/glsldevil/index.html) also looks good. I haven't tried either program myself, so I can't speak first-hand about their quality, but I have been impressed by NVIDIA's support for debugging in CUDA, so I have high expectations.
So I have just realized that the code I was working on for 3D textures was written for OpenGL 1.1 or so, and is no longer supported in OpenGL 3.3. Is there another way to do this without glTexImage3D? Perhaps through a library, or another function in OpenGL 3.3 that I don't know about?
EDIT:
I am not sure where I read that 3D texturing was removed from newer versions of OpenGL (I've been googling a lot today), but consider this:
I have been following the tutorial/guide here. The program compiles without a hitch. Now read the following quote from the article:
The potential exists that the environment the program is being run on does not support 3D texturing, which would cause us to get a NULL address back, and attempting to use a NULL pointer is A Bad Thing so make sure to check for it and respond appropriately (the provided example exits with an error).
That quote is referring to the following function:
glTexImage3D = (PFNGLTEXIMAGE3DPROC) wglGetProcAddress("glTexImage3D");
When running my program on my computer (which has OpenGL 3.3), that same function returns NULL for me. When my friend runs it on his computer (which has OpenGL 1.2), it does not return NULL.
The way one uploads 3D textures has not changed since OpenGL 1.2. The functions for this are still named:
glTexImage3D
glTexSubImage3D
glCopyTexSubImage3D
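They just have to be loaded at run time through wglGetProcAddress (or a loader such as GLEW). Note that wglGetProcAddress returns NULL when no OpenGL context is current on the calling thread, which is worth ruling out in your case. As a rough sketch, allocating an empty 3D texture under a 3.3 core context might look like this (assuming GLEW is initialized; the helper name is made up):

    #include <GL/glew.h>

    /* Allocates an empty width x height x depth RGBA8 3D texture and
       returns its name. Sketch only; error checking omitted. */
    GLuint create_texture_3d(GLsizei width, GLsizei height, GLsizei depth)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8,
                     width, height, depth, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL); /* no initial data */
        return tex;
    }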
I'm trying to make a vector drawing application using OpenGL that lets the user see the result in real time. The way I have it set up is with an edge flag callback, so the GLU tessellator outputs only triangles, which I then pass to a VBO. I've tried to make all my algorithms as fast as possible, and that is not where my issue is. According to a few code profilers, the big slowdown occurs in the call to gluTessEndPolygon(), which is the function that actually tessellates the polygon. I have found that when the shape exceeds 100 input vertices, it gets really slow and basically destroys all the hard work I did optimizing everything else. What can I do? I provide a normal of (0, 0, 1), and I've tried all the tips from the GL Red Book. Is there a way to make the tessellator run faster but with less precision?
Thanks
You might give poly2tri a try to see if it's any faster.
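I haven't benchmarked it myself, but basic usage is straightforward. A rough sketch (names are illustrative; note that poly2tri expects a simple polygon, so self-intersecting input would have to be cleaned up first):

    #include <poly2tri/poly2tri.h>  // header path may differ per install
    #include <cstdio>
    #include <vector>

    // Triangulates one closed, non-self-intersecting contour and prints
    // the resulting triangles (in a real app, append them to a VBO).
    void triangulate_contour(const std::vector<p2t::Point>& contour)
    {
        // poly2tri works on raw pointers; the caller owns the points.
        std::vector<p2t::Point*> polyline;
        for (const p2t::Point& v : contour)
            polyline.push_back(new p2t::Point(v.x, v.y));

        {
            p2t::CDT cdt(polyline);   // constrained Delaunay triangulation
            cdt.Triangulate();
            for (p2t::Triangle* t : cdt.GetTriangles()) {
                for (int i = 0; i < 3; ++i) {
                    const p2t::Point* p = t->GetPoint(i);
                    std::printf("%f %f\n", p->x, p->y);
                }
            }
        } // the CDT owns its output triangles; use them before it is destroyed

        for (p2t::Point* p : polyline)
            delete p;
    }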