Is it possible to create shaders without the use of D3DX functions?
The HLSL compiler is, as of D3D9, part of the D3DX library. To write your shaders in HLSL, you have to use D3DX.
However, there's IDirect3DDevice9::CreatePixelShader and IDirect3DDevice9::CreateVertexShader, which create a shader handle from shader byte code, that is, from what is generated by the HLSL compiler.
You can run the HLSL compiler offline (see D3DXCompileShader), save the machine code to a file and load it at runtime using the aforementioned functions. Sadly this means that you cannot rely on the work that is otherwise done by the D3DX framework. Uploading your constants and optimizing changes is totally up to you in this case.
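For illustration, here is a rough sketch of that approach (names like device and "pixel.fxo" are assumptions for the example, not part of the answer above): load the byte code produced by an offline compile, hand it to CreatePixelShader, and upload constants by register index yourself.

#include <d3d9.h>
#include <fstream>
#include <iterator>
#include <vector>

// Create a pixel shader from byte code compiled offline. "device" is assumed
// to be a valid IDirect3DDevice9*; "pixel.fxo" is a hypothetical output file
// of an offline HLSL compile.
IDirect3DPixelShader9* LoadPixelShader(IDirect3DDevice9* device, const char* path)
{
    std::ifstream file(path, std::ios::binary);
    std::vector<char> bytecode((std::istreambuf_iterator<char>(file)),
                               std::istreambuf_iterator<char>());
    IDirect3DPixelShader9* shader = nullptr;
    HRESULT hr = device->CreatePixelShader(
        reinterpret_cast<const DWORD*>(bytecode.data()), &shader);
    return SUCCEEDED(hr) ? shader : nullptr;
}

// Without D3DX's constant table you have to track register assignments
// yourself and upload constants manually.
void SetTintConstant(IDirect3DDevice9* device)
{
    const float tint[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
    device->SetPixelShaderConstantF(0, tint, 1); // register c0, assumed layout
}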
I learnt to use legacy OpenGL (v1/2) around a year back, but now I am trying to make something that is a bit more up to date (i.e. OpenGL 3.3 or newer).
I want to reuse a lot of my old code; however, I could really do with the compiler flagging an error whenever it tries to compile something legacy (e.g. glBegin() ... glEnd()).
I compiled on a Mac a while back and it flagged this up as an error at compile time, but now I'm using a Raspberry Pi running Raspbian.
Thanks for your help in advance!
Depending on your use-case, you might be able to use the OpenGL ES header instead of the standard OpenGL header. The OpenGL ES header doesn't contain the deprecated functions.
Another possibility would be to use a loader like gl3w which will also make your code more portable.
I'd recommend using the OpenGL loader generator glad to generate a loader for the core profile of the OpenGL version you want to target. The resulting headers will not contain any of the deprecated compatibility-profile functions and GLenum definitions.
However, be aware that this will not catch all deprecated GL usage at compile time. For example, a core profile mandates that a VAO != 0 is bound when rendering, that vertex arrays come from VBOs and not client-side memory, and that a shader program != 0 is used. Such issues can't really be detected at compile time. I recommend using the OpenGL Debug Output functionality to catch those remaining issues at runtime. Most GL implementations will produce very useful error or warning messages that way.
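As a rough sketch of that last suggestion (assuming a GL 4.3 context created with the debug flag, or KHR_debug being available, and a glad-generated header), enabling debug output looks roughly like this:

#include <glad/glad.h>
#include <cstdio>

// Print every message the driver reports; production code would typically
// filter by severity and source.
static void APIENTRY DebugCallback(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar* message, const void* userParam)
{
    std::fprintf(stderr, "GL debug: %s\n", message);
}

void EnableDebugOutput()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report on the thread of the offending call
    glDebugMessageCallback(DebugCallback, nullptr);
}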
I've been using GLAD with SFML for some time, and I've been using GLAD's built-in function loader, gladLoadGL, which has worked just fine for me. Now I'm looking at GLFW, and both their guide and the Khronos OpenGL wiki say that you should be using gladLoadGLLoader((GLADloadproc) glfwGetProcAddress) instead. Is there any particular reason for it?
Is there any particular reason for it?
Using gladLoadGL in conjunction with, for example, GLFW results in two pieces of code in the same program which basically do the same thing, without any benefit.
For example, look at what GLFW does on Windows (it is similar on the other platforms):
_glfw.wgl.instance = LoadLibraryA("opengl32.dll");
It dynamically loads the GL library behind your back. And it provides an abstraction for querying OpenGL function pointers (the core ones, and extension ones, using both wglGetProcAddress and raw GetProcAddress).
The GL loader glad generates does the same things:
libGL = LoadLibraryW(L"opengl32.dll");
Now one might argue that loading the same shared library twice isn't a big deal, as this should result in re-using the same internal handles and is dealt with by reference counting, but even so, it is just unnecessary code, and it still consumes memory and some time during initialization.
So unless you have some very specific reason for needing glad's loading code - maybe in a modified form that really does something else (like using a different GL library than the one your system would use by default) - there is no use case for it, and it seems a reasonable recommendation not to include code which isn't needed.
As a side note: I often see projects using GLFW and GL loaders like GLAD or GLEW also linking opengl32.lib or libGL.so at link time. This is completely unnecessary as well, since the code will always load the libraries manually at runtime, and there should not be any GL symbols left at link time which the linker could resolve from the GL library anyway.
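For reference, a minimal initialization sketch following the recommended pattern (assuming GLFW 3 and a glad-generated core-profile loader) could look like this:

#include <glad/glad.h>
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return -1;

    // Request a core-profile context; match the version to what the glad
    // loader was generated for.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "demo", nullptr, nullptr);
    if (!window)
        return -1;
    glfwMakeContextCurrent(window);

    // Reuse GLFW's function-pointer query instead of letting glad load
    // opengl32.dll a second time via gladLoadGL().
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
        return -1;

    // ... render loop ...
    glfwTerminate();
    return 0;
}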
I've recently bumped into a rendering issue that was caused by my inadvertently omitting the pair of parentheses right after EmitVertex and EndPrimitive in an OpenGL geometry shader. To my surprise, the GLSL compiler didn't throw any compile error and quietly let it pass. The end result is a blank screen, since no vertex is emitted by the geometry shader.
I'm wondering whether this is a bug in the compiler or whether there is some other reason for it.
BTW, I have tested it on an NVIDIA Titan X with Win7 and a GTX 750M with Win8. They both have the same problem.
That is definitely a bug if it works the way you described (i.e. a bare EmitVertex; statement compiling without complaint).
You can always validate your shaders using Khronos' reference compiler if you are ever in doubt. That sort of thing is already common in D3D-based software, where shaders are pre-compiled when the software is built; GL does not support hardware-independent pre-compiled shaders, so you would only use this as a validation step rather than part of the compile process.
Even though it cannot replace the compilation your driver does at runtime, it would not be a bad idea to work that into your software's build procedure, so you do not have to wait until your software is actually deployed to catch simple parse errors like this. Otherwise, you will often only learn about these things after a driver version change, or when the software runs on a GPU it was never tested on before.
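Separately from the offline validation described above, it is also worth querying the compile status and info log at runtime, since many drivers report warnings there even for shaders they accept. A small sketch (assuming a glad-style loader header):

#include <glad/glad.h>
#include <cstdio>
#include <vector>

// Returns false if compilation failed; prints the driver's info log either way.
bool CheckShaderCompiled(GLuint shader)
{
    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);

    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 1) {
        std::vector<char> log(logLength);
        glGetShaderInfoLog(shader, logLength, nullptr, log.data());
        std::fprintf(stderr, "shader info log: %s\n", log.data());
    }
    return status == GL_TRUE;
}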
I'm using samplers quite frequently in my application and everything has been working fine.
The problem is, I can only use opengl 3.1 on my laptop. According to the documentation, samplers are only available at opengl 3.3 or higher, but here's where I'm getting a bit confused.
I can use 'glGenSamplers' just fine: no errors are generated, and the sampler ID seems fine as well. But when using 'glBindSampler' on a valid texture, I get a 'GL_INVALID_VALUE' error.
Can anyone clear this up for me? If samplers aren't available in opengl 3.1, why can I use glGenSamplers without a problem?
What can I do to provide backwards compatibility? I'm guessing my only option will be to set the texture parameters every time the texture is being used for rendering, if samplers aren't available?
There are two possibilities:
Your graphics card/driver supports ARB_sampler_objects; in this case it is unsurprising that the function is present, and you are free to use it.
The extension is not supported, but the function is present anyway. In this case, strange as it sounds, you are not allowed to use it.
Check whether glGetStringi(GL_EXTENSIONS, ...) returns the sampler objects extension at some index. Only functionality from extensions that the implementation advertises as supported is allowed to be used.
If you find such functions even though the extension is not advertised, they might happen to work, but they just as well might not. The behaviour is undefined.
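A small sketch of that check (core-profile style enumeration, assuming a GL 3.x context and a glad-style loader header):

#include <glad/glad.h>
#include <cstring>

// Returns true if GL_ARB_sampler_objects is advertised by the implementation.
bool HasSamplerObjects()
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, "GL_ARB_sampler_objects") == 0)
            return true;
    }
    return false;
}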
Note that although you would normally expect the function to be named glGenSamplersARB when it comes from an ARB extension, that is not the case here: this is a "backwards extension" that provides, on hardware which cannot support the full functionality of a later OpenGL version, selected functionality that is present identically in that later version.
(About the error code, note comment by Brett Hale)
Is there any way of detecting from my Windows OpenGL application if a debugger (such as gDEBugger) is being used to catch OpenGL calls? I'd like to be able to detect this and terminate my application if a debugger is found, to avoid shader code and textures from being ripped. The application is developed in C++ Builder 6.
Even if you could find a way to do this, it would be futile, because the shader source can be retrieved simply by calling glGetShaderSource() at any moment.
An external process can inject a thread into your process using CreateRemoteThread() and then copy back the result with ReadProcessMemory(). This can be made really simple (just a couple of lines) with the Detours library by Microsoft.
Alternatively, if creating a remote thread is too much of a hassle, a disassembler such as OllyDbg can be used to inject a piece of code into the normal execution path which simply saves the shader code into a file just before it is invoked.
Finally, the text of the shader needs to be somewhere in your executable before you activate it, and it can probably be extracted just by static inspection of the executable with a tool like IDA Pro. It really doesn't matter whether you encrypt it, compress it, or whatever: if it's there at some point and the program can extract it, then so can a sufficiently determined cracker. You will never win.
Overall, there is no way to detect each and every such "debugger". A custom OpenGL32.dll (or equivalent for the platform) can always be written; and if there is no other way, a virtual graphics card can be designed specifically for purposes of ripping your textures.
However, Graphic Remedy does have some debugging APIs provided as custom OpenGL commands. They are exposed as OpenGL extensions; so if querying the function pointers for those calls returns NULL, you can be reasonably sure it's not gDEBugger. That said, there are already several debuggers out there, and, as already mentioned, it's trivial to write one specifically designed for ripping out resources.
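As an illustration of that idea (a heuristic only, not part of the original answer; it assumes a current OpenGL context on Windows), you could probe for the GREMEDY extension entry points that gDEBugger exposes, GL_GREMEDY_string_marker and GL_GREMEDY_frame_terminator:

#include <windows.h>

// Heuristic: if the GREMEDY entry points resolve, a gDEBugger-style tool is
// probably sitting between the application and the driver.
bool GremedyDebuggerPresent()
{
    // wglGetProcAddress requires a current OpenGL rendering context.
    PROC marker     = wglGetProcAddress("glStringMarkerGREMEDY");
    PROC terminator = wglGetProcAddress("glFrameTerminatorGREMEDY");
    return marker != nullptr || terminator != nullptr;
}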
Perhaps the closest you can get is to load C:\windows\system32\opengl32.dll directly; however, that can break your game horribly on future releases of Windows, so I'd advise against it. (And this still wouldn't protect you against those enterprising enough to replace the system-wide OpenGL32.dll, or who are perhaps using other operating systems.)
I'm not 100% sure, but I believe that Graphic Remedy replaces the Windows opengl32.dll with their own opengl32.dll file for hooking GL calls.
So if that is the case, you just have to check the DLL version and terminate if it's not what you expect.