Current OpenGL drivers keep a compiled shader cache located in
C:/Users/name/AppData/Roaming/AMD|NVIDIA/GLCache/...
Unfortunately, this causes program crashes almost every time I change one of my shaders, which I currently fix by manually deleting the shader cache.
The question is: is there any good way of purging the cache when I ship a new version of the program? Any OpenGL extension to control the caching? Or some magical API from the operating system? Or, at least, a proper way to find the folder?
Another question: what keys do the drivers use to identify individual shaders? So that I could somehow change the key every time I change a shader.
Unfortunately, this causes program crashes almost every time I change one of my shaders, which I currently fix by manually deleting the shader cache.
If that is happening, there is something seriously broken with your system and/or your driver installation. This must not happen, and if it does, it's not something an OpenGL program should have to concern itself with.
Another question: what keys do the drivers use to identify individual shaders?
Usually some hash derived from the shader source's AST (i.e. just adding whitespace or renaming a symbol will not do the trick).
The question is: is there any good way of purging the cache when I ship a new version of the program?
Not that I know of. Shaders are a "black box" in the OpenGL specification. You send in GLSL source text, it gets compiled and linked and that's it. Things like a shader cache or the internal representation are not specified by OpenGL.
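To make that "black box" concrete, here is a minimal sketch of the only interaction an application has with shader compilation (it assumes a current context and a loader such as GLEW or glad providing the entry points); nothing in it exposes where, or whether, the driver caches the result:

// Minimal sketch: GLSL text goes in, a compiled and linked program object comes out.
GLuint compileProgram(const char* vsSrc, const char* fsSrc)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSrc, nullptr);
    glCompileShader(vs);                 // the driver may consult/populate its cache here...

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSrc, nullptr);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);                 // ...or here; OpenGL exposes none of that

    GLint ok = GL_FALSE;
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (ok != GL_TRUE) { /* query the info log and handle the failure */ }

    glDeleteShader(vs);                  // safe once attached; freed when the program is deleted
    glDeleteShader(fs);
    return prog;
}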
Any OpenGL extension to control the caching?
Nope. Technically vendors could add a vendor-specific extension for that, but none have.
Or some magical API from the operating system?
Nothing officially specified for that.
Or, at least, a proper way to find the folder?
Again, nothing about this is properly specified.
Related
I have OpenGL code that uses the fixed-function pipeline.
Killing two birds with one stone, I need a wrapper that can help me with the following tasks:
Convert the code to the new shader-based pipeline with minimal effort.
I have a class that calls OpenGL functions such as glBegin(triangles/lines), glVertex, glPushMatrix, glTranslate, glColor and gluSphere.
Ideally, I'd like it to derive from a class that supplies these functions in the base class. Behind the scenes, it would use the same high-level logic as the fixed pipeline.
I'd also like to export an OpenGL scene to COLLADA to load in an external renderer.
OpenGL is low-level rendering; it doesn't have the concept of a scene. For example, this reddit post:
"You realize that you have to write a shim to capture all API calls
you are interested in to do that. Then, when finally, a draw call is
emitted you have to parse every single vertex and collect the data
from all over the memory from the buffers that you have recorded from
the APi calls that set up VAOs, VBOs and IBOs. Then you have to parse
the shader source code so that you can see which uniforms and vertex
attributes contribute to vertex clip coordinate generation. Then you
also have to synthesize/guess which outputs are normal, color, texture
coordinate and so on from the shader source if the resulting program
even have those in .obj file format-wise.
This gets even more complicated if Compute is used to generate data
inside the GPU for any of the buffers. If geometry or tessellator is
used then you also have to implement one of those so that you get
accurate outputs from the vertex processing. TL;DR - you have to write
your own OpenGL 4.5 driver that does exactly the same things a real
hardware driver would do. Good luck with that."
However, my scene is simple, using only the fixed-pipeline operations listed above.
I'd like the wrapper to keep track of these calls and construct a scene that can be exported.
--
EDIT: Since library recommendations are off-topic, I'll ask the following question instead.
What I need above seems like something obvious that many people should have found useful. Since I can't find a library that accomplishes it, I'm wondering whether my approach is unreasonable.
More specifically, how do people port their legacy OpenGL code? Do they rewrite the relevant parts from scratch, or does everyone implement their own wrapper as I suggested?
And what about constructing a scene to export to COLLADA?
Also posted at:
https://community.khronos.org/t/c-opengl-wrapper-interface-similar-to-fixed-pipeline-can-export-collada/105829
Although there are some parts of legacy OpenGL that are not optimized in current drivers (like glDrawPixels, the raster drawing operations and indexed color mode), between modern hardware and the modest requirements of legacy applications, legacy OpenGL code runs well enough on modern systems.
The main reason to "modernize" legacy OpenGL code is if one wants to make use of the modern features. Any sort of "wrapper" will just run into the same kind of design problems that the OpenGL API ran into between OpenGL-1.5 and OpenGL-2.1: lots of built-in variables, default state, implicit actions, and so on. This is difficult to document properly, and even more difficult to use reliably. Which is the reason you usually don't find these kinds of wrappers.
If you find yourself in a situation where you absolutely must port your legacy code to modern OpenGL, e.g. to be interoperable with core contexts, then your best course of action is a proper rewrite: replace immediate-mode calls with filling vertex buffers, and replace calls to glTexEnv…, glMaterial… and glLight… with loading appropriate shaders and setting their uniforms.
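As a rough illustration of the first half of that rewrite (immediate mode to vertex buffers) - the struct layout, attribute locations and function names below are purely illustrative, and the matching shader program is not shown:

#include <cstddef>   // offsetof

struct Vertex { float pos[3]; float color[3]; };

GLuint vao = 0, vbo = 0;

// Was: glBegin(GL_TRIANGLES); glColor3f(...); glVertex3f(...); ... glEnd();
// Now: the same data is uploaded once into a buffer object...
void setupTriangles(const Vertex* verts, GLsizei count)
{
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(Vertex), verts, GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);   // position -> layout(location = 0) in the vertex shader
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, pos));
    glEnableVertexAttribArray(1);   // color -> layout(location = 1)
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, color));
}

// ...and drawn with a shader program bound instead of the fixed-function state.
void drawTriangles(GLsizei count)
{
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, count);
}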
Or, if you want a quick and dirty method: just create two contexts, a modern one and a legacy one, and switch between them; often you can establish "list" sharing between them.
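A sketch of that quick-and-dirty variant using GLFW - whether a core and a compatibility context actually share objects is up to the driver, so treat this as an experiment rather than a guarantee:

#include <GLFW/glfw3.h>

int main()
{
    glfwInit();

    // Legacy path: a compatibility-profile context keeps the fixed-function code working.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
    GLFWwindow* legacyCtx = glfwCreateWindow(640, 480, "legacy", nullptr, nullptr);

    // Modern path: a core-profile context; passing legacyCtx as the last argument
    // requests object ("list") sharing with it.
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow* modernCtx = glfwCreateWindow(640, 480, "modern", nullptr, legacyCtx);

    glfwMakeContextCurrent(legacyCtx);   // run the fixed-function code here
    // ...
    glfwMakeContextCurrent(modernCtx);   // run the core-profile code here
    // ...

    glfwTerminate();
    return 0;
}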
Whenever I call a function to swap buffers I get tons of errors from glDebugMessageCallback saying:
glVertex2f has been removed from OpenGL Core context (GL_INVALID_OPERATION)
I've tried using both GLFW and freeglut, and neither works properly.
I haven't used glVertex2f, of course. I even went as far as deleting all my rendering code to see if I could find what was causing it, but the error is still there, right after glutSwapBuffers/glfwSwapBuffers.
Using single-buffering, on the other hand, causes no errors.
I've initialized the context to 4.3, core profile, and flagged forward-compatibility.
As explained in comments, the problem here is actually third-party software and not any code you yourself wrote.
When software such as the Steam overlay or FRAPS needs to draw something on top of your OpenGL output, it usually does so by hooking/injecting some code into your application's SwapBuffers implementation at run-time.
You are dealing with a piece of software (RivaTuner) that still uses immediate mode to draw its overlay and that is the source of the unexplained deprecated API calls on every buffer swap.
Do you have code you can share? Either the driver is buggy or something tries to call glVertex in your process. You could try using glloadgen to build a loader library that covers only the OpenGL 4.3 symbols (and only those symbols), so that when linking your program, any use of symbols outside the 4.3 spec causes linker errors.
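For reference, a context setup like the one described in the question looks roughly like this with GLFW (glad is only an assumed loader here); with such a debug context, the callback fires even on legacy calls made by an injected overlay that the application itself never issues:

#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <cstdio>

// Callback signature required by glDebugMessageCallback; APIENTRY comes from the GL headers.
void APIENTRY debugCallback(GLenum source, GLenum type, GLuint id, GLenum severity,
                            GLsizei length, const GLchar* message, const void* userParam)
{
    std::fprintf(stderr, "GL debug: %s\n", message);
}

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GL_TRUE);        // ask for a debug context
    GLFWwindow* window = glfwCreateWindow(1280, 720, "debug", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);        // load the 4.3 entry points

    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // errors reported on the offending call's stack
    glDebugMessageCallback(debugCallback, nullptr);

    // ...render loop, glfwSwapBuffers(window), etc...
    return 0;
}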
Is there a way I can debug a GLSL shader, including breakpoints and data tracking?
I've seen simple tools that let me see which shaders make up my shader programs, but nothing I can put breakpoints in.
I need to check the values of matrices, and just writing something to gl_FragColor won't work, since there are so many values to be compared and checked.
Is there a simple way of doing this, besides writing down what values I think I might have and doing my math elsewhere?
It's really annoying when I'm trying to understand all of OpenGL while already knowing how to navigate around DirectX. I can see why some people get scared away from OpenGL when resources get hard to find.
According to the development updates for NVIDIA Nsight, they recently added features for GLSL debugging (https://developer.nvidia.com/nsight-visual-studio-edition-3_0-new-features). I would look there first. glslDevil (http://www.vis.uni-stuttgart.de/glsldevil/index.html) also looks good. I haven't tried either program myself, so I can't give first-hand experience of their quality, but I have been impressed by NVIDIA's support for debugging in CUDA, so I have high expectations.
Ok, I know that online there are millions of answers to what OpenGL is, but I'm trying to figure out what it is in terms of a file on my computer. I've researched and found that OpenGL acts as a multi-platform translator for different computer graphics cards. So, then, is it a DLL?
I was wondering: if it's a DLL, couldn't I download any version of the DLL (preferably the latest) and then use it, knowing what it has?
EDIT: Ok, so if it's a Windows DLL, and I make an OpenGL game that uses a late version, what if it's not supported on someone else's computer with an earlier version? Am I allowed to ship the DLL with my game so that it's supported on other Windows computers? Or is the DLL set up to communicate with the graphics card strictly on specific computers?
OpenGL is constantly being updated (whatever it is). How can this be, if all it's doing is communicating with graphics cards on a bunch of different computers - cards that are never going to be updated since they're built in?
There are two "parts" to OpenGL - the specification that's updated by the Khronos Group once every few months, and the driver that's written by your graphics card manufacturer specifically for your graphics card model.
The OpenGL specification essentially details how everything about the OpenGL API should work - what the expected behavior is, when something is considered unexpected behavior, when to throw which errors, and so on. The specification lets driver writers know exactly what they need to do, and lets application writers know what to expect from a driver. This is what OpenGL really "is" - the glue that holds applications and drivers together. You can read the specification for each version on the Khronos website.
Then there are the drivers that implement the OpenGL API and are considered compliant with the specification. The driver does exactly what you'd expect it to do - copy data to and from the graphics card's memory, write data to graphics card registers, keep track of state, process vertices, compile shaders, instruct hundreds of stream processors to simultaneously transform vertices and fill pixels, and so on. Without OpenGL, each graphics card model would have its own separate, slightly faster API that only worked for that one card because of the way it was structured. With OpenGL, the drivers are all written against the same API, and an application's code will run on all graphics cards.
Compliance with the OpenGL specification doesn't change with driver updates. Most driver updates either fix minor bugs or do some internal optimizing.
I know at one point there was a small bug in an ATI driver where you had to call glEnable(GL_TEXTURE_2D) before you could generate mipmaps the OpenGL 3 way (glGenerateMipmap()), despite GL_TEXTURE_2D being deprecated as a possible value for glEnable(). I'm not sure if it's fixed now, but it's certainly the type of edge case that can easily be overlooked by driver writers.
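The workaround looked roughly like this (texture is assumed to be an already-created 2D texture; the glEnable line is not required by the spec, only the buggy driver cared about it):

glBindTexture(GL_TEXTURE_2D, texture);
glEnable(GL_TEXTURE_2D);              // only needed to placate the buggy ATI driver
glGenerateMipmap(GL_TEXTURE_2D);      // the OpenGL 3 way of building the mipmap chain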
As for optimizations, there's a lot to optimize. Maybe there's another way to optimize shaders when they're being compiled, maybe there's a more efficient way to distribute work between the stream processors, I don't know.
OpenGL is a cross-platform API for graphics programming. In terms of compiled code, it is available as an OS-specific library - e.g. a DLL (opengl32.dll) on Windows, or a shared object (.so) on Linux.
You can get the SDK and binary redistributables from OpenGL.org
Depending on which language you're using, there may be OpenGL wrappers available. These are classes or API libraries designed to specifically work with your language. For example, there are .NET OpenGL wrappers available for C# / VB.NET developers. A quick Google search on the subject should give you some results.
The OpenGL API occasionally has new versions released, but these updates are backwards-compatible in nature. Moreover, new features are generally added as extensions, and it's possible to detect which extensions are present and use only those that are locally available and supported... so you can build your software to take advantage of new features when they're available but still run when they aren't.
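A sketch of that run-time detection for a 3.0+ context (the extension name is just an example; a current context and a loader for the entry points are assumed):

#include <cstring>

// Walk the extension list the GL 3.0+ way and look for a given extension.
bool hasExtension(const char* name)
{
    GLint numExt = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &numExt);
    for (GLint i = 0; i < numExt; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// Usage (somewhere after context creation):
//   if (hasExtension("GL_EXT_texture_filter_anisotropic")) { /* enable that code path */ }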
The API has nothing to do with individual drivers: drivers can be updated without changing the API, so the fact that drivers are constantly updated does not matter for your software's compatibility. You can choose an API version to develop against, and as long as your target operating system ships with a version of the OpenGL library compatible with that API, you don't need to worry about driver updates breaking your software's ability to link dynamically against the locally available libraries.
Does someone know a fast way to invoke shader processing via DirectX?
Right now I'm setting up shaders using D3DXCreateEffectFromFile calls, which create the shaders at runtime (once per shader) from *.fx files.
The rendering part for every object (every patch, in my case - see below) then looks something like this:
// --------------------
// Preprocessing
UINT passes = 0;
effect->Begin(&passes, 0);
effect->BeginPass(0);
effect->SetMatrix(...);   // or SetVector etc. - internal shader parameters
effect->CommitChanges();
// --------------------
// Geometry rendering
// Pass the geometry to render
// ...
// --------------------
// Postprocessing
// End 'effect' passes
effect->EndPass();
effect->End();
This works, but the profiler shows weird things - the preprocessing (see code) takes about 60% of the time (I'm rendering a terrain object of 256 patches, where every patch contains about 10k vertices).
The actual geometry rendering takes ~35% and postprocessing 5% of the total rendering time.
This seems pretty strange to me, and I guess the D3DXEffect interface may not be the best solution for this sort of thing.
I've got 2 questions:
1. Do I need to implement my own shader controller / wrapper (probably low-level), and where should I start?
2. Would compiling shaders help to somehow improve parameter-setting performance?
Maybe somebody knows how to solve this kind of problem, knows of an existing shader interface, or could give some advice on how this kind of problem is solved in modern game engines.
Thank you.
The actual geometry rendering takes ~35% and postprocessing 5% of the total rendering time
If you want to profile shader performance, you need to use NVPerfHUD or something similar. Using a CPU profiler and measuring ticks is not going to help you - rendering is often asynchronous.
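If you do want numbers from inside the application rather than from a tool, GPU timestamp queries are the usual route; a rough D3D9 sketch (device is assumed to be your IDirect3DDevice9*, and the disjoint check is omitted for brevity):

// Measure GPU time for a block of draw calls; CPU timers around the calls mostly
// measure command submission, not execution.
IDirect3DQuery9 *tsFreq = nullptr, *tsBegin = nullptr, *tsEnd = nullptr;
device->CreateQuery(D3DQUERYTYPE_TIMESTAMPFREQ, &tsFreq);
device->CreateQuery(D3DQUERYTYPE_TIMESTAMP, &tsBegin);
device->CreateQuery(D3DQUERYTYPE_TIMESTAMP, &tsEnd);

tsFreq->Issue(D3DISSUE_END);
tsBegin->Issue(D3DISSUE_END);
// ...issue the draw calls you want to measure...
tsEnd->Issue(D3DISSUE_END);

UINT64 freq = 0, t0 = 0, t1 = 0;
while (tsFreq->GetData(&freq, sizeof(freq), D3DGETDATA_FLUSH) == S_FALSE) {}
while (tsBegin->GetData(&t0, sizeof(t0), D3DGETDATA_FLUSH) == S_FALSE) {}
while (tsEnd->GetData(&t1, sizeof(t1), D3DGETDATA_FLUSH) == S_FALSE) {}

double gpuMilliseconds = double(t1 - t0) / double(freq) * 1000.0;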
Do I need to implement my own shader controller / wrapper (probably low-level)
Using your own shader wrapper isn't a bad idea - I never liked ID3DXEffect anyway.
With your own wrapper you'll have total control over resources and program behavior.
Whether you need it or not is for you to decide. With ID3DXEffect you have no guarantee that the implementation is as fast as it could be - it could be wasting CPU cycles doing something you don't really need. The D3DX library contains a few classes that are useful but aren't guaranteed to be efficient (ID3DXEffect, ID3DXMesh, all the animation-related and skin-related functions, etc.).
and where should I start?
D3DXAssembleShader, IDirect3DDevice9::CreateVertexShader and IDirect3DDevice9::CreatePixelShader on DirectX 9; D3D10CompileShader on DirectX 10. Also download the DirectX SDK and read the shader documentation and tutorials.
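A sketch of that lower-level D3D9 path, using the HLSL compiler rather than the assembler (the file name, entry point, profile and constant register below are illustrative, and device and worldViewProj are assumed to exist elsewhere):

// Compile a vertex shader once at load time and create the device object directly,
// bypassing ID3DXEffect; constants are then uploaded with SetVertexShaderConstantF.
ID3DXBuffer* code = nullptr;
ID3DXBuffer* errors = nullptr;
ID3DXConstantTable* constants = nullptr;

HRESULT hr = D3DXCompileShaderFromFile(
    "terrain.vsh",          // illustrative file name
    nullptr, nullptr,       // no macros, no include handler
    "main", "vs_3_0",       // entry point and target profile
    0,                      // compile flags
    &code, &errors, &constants);

IDirect3DVertexShader9* vs = nullptr;
if (SUCCEEDED(hr))
    device->CreateVertexShader(static_cast<const DWORD*>(code->GetBufferPointer()), &vs);

// Per patch: bind the shader and upload the matrix straight into constant registers c0-c3.
device->SetVertexShader(vs);
device->SetVertexShaderConstantF(0, reinterpret_cast<const float*>(&worldViewProj), 4);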
Would compiling shaders help to somehow improve parameter-setting performance?
Shaders are automatically compiled when you load them. You could try compiling with different optimization settings, but don't expect miracles.
Are you using a DirectX profiler or just timing your client code? Profiling DirectX API calls using timers in the client code is generally not that effective, because the API doesn't necessarily process your state updates/draw calls synchronously as you make them. There's a lot of optimization that goes on behind the scenes. Here is an article about this for DX9, but I'm sure it hasn't changed for later versions:
http://msdn.microsoft.com/en-us/library/bb172234(VS.85).aspx
I've used effects before in DirectX and the system generally works fine. It provides some nice features that might be a pain to implement yourself at a lower-level, so I would stick with it for the moment.
As bshields suggested, your timing information might be inaccurate. It sounds likely that the drawing actually is taking the most time, by comparison.
The shader is compiled when it's loaded. Precompiling will save you half a second of startup time, but as long as the shader doesn't change during runtime, you won't see any actual speed increase. Precompiling is also kind of a pain if you're still testing a shader. You can do it with the final copy, but unless you have a lot of shaders, you won't get much benefit while loading them.
If you're creating the shaders every frame or every time your geometry is rendered, that's probably the issue. Unless the shader itself (not its parameters) changes every frame, you should create the effect once and reuse it.
I don't remember exactly where the parameter-setting calls go, but you may want to check the docs to make sure your SetMatrix is in the right spot. Setting parameters after the pass has started won't help anything, certainly not speed. Make sure that's set up correctly. Also, set parameters as rarely as possible; there is some slight overhead involved. Per-frame sets will give you a noticeable slow-down if you have too many.
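Concretely, with the pattern from the question, moving the per-patch parameter updates in front of Begin/BeginPass avoids the CommitChanges round-trip entirely (the handle name and matrix variable here are illustrative):

// Set parameters first, then run the pass; CommitChanges is only needed when a
// parameter changes between BeginPass and EndPass.
effect->SetMatrix("worldViewProj", &worldViewProj);

UINT passes = 0;
effect->Begin(&passes, 0);
effect->BeginPass(0);
// ...draw the patch geometry...
effect->EndPass();
effect->End();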
All in all, the effects system does work fine in most cases, and you shouldn't be seeing what you are. Make sure your profiling is correct, your shader is valid and optimized, and your calls are in the right places.