Using OpenGL core functions along with extensions - opengl

I want to use DXT-compressed textures in my program, so I am loading the core function pointer like this:
/* GL 1.3 core */
PFNGLCOMPRESSEDTEXIMAGE2DPROC glCompressedTexImage2D = NULL;
/* ... */
/* check GL version using glGetString(GL_VERSION) */
/* ... */
glCompressedTexImage2D = (PFNGLCOMPRESSEDTEXIMAGE2DPROC)wglGetProcAddress(
"glCompressedTexImage2D");
if (!glCompressedTexImage2D)
return 0;
/* check if GL_EXT_texture_compression_s3tc is available */
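For completeness, that last check could be a whole-token search of the classic extension string (valid in pre-3.0 contexts); the helper below is just an illustrative sketch, not the exact code:
#include <string.h>
/* Sketch: whole-token search of the string returned by glGetString(GL_EXTENSIONS). */
static int has_extension(const char *name)
{
    const char *all = (const char *)glGetString(GL_EXTENSIONS);
    const char *p = all;
    size_t len = strlen(name);
    while (p && (p = strstr(p, name)) != NULL) {
        if ((p == all || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return 1; /* matched a complete extension name, not a prefix */
        p += len;
    }
    return 0;
}
/* ... */
if (!has_extension("GL_EXT_texture_compression_s3tc"))
    return 0;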
After that, I use the function like this:
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT, width,
height, 0, size, ptr);
It works well, but the reason I am doubting this is that I have been told I cannot mix OpenGL core functions with extension functions like this:
glGenBuffersARB(1, &id);
glBindBuffer(GL_ARRAY_BUFFER, id);
Or, core functions with tokens added by some extension like this:
glActiveTexture(GL_TEXTURE0_ARB);
But I am using glCompressedTexImage2D (a core function) with GL_COMPRESSED_RGB_S3TC_DXT1_EXT (a token added by GL_EXT_texture_compression_s3tc).
So, is it okay to use functions/tokens from extensions that were never added to the core (such as GL_EXT_texture_compression_s3tc or WGL_EXT_swap_control) along with core functions?

Generally, it's good advice not to mix core and extension definitions for the same functionality. Extensions are often promoted to core functionality with identical definitions, and then it's not a problem. But there are cases where the core functionality is not quite the same as earlier versions of the same functionality defined in extensions.
A common example for this are FBOs (Framebuffer Objects). There were a number of different extensions related to FBO functionality before FBOs were introduced as core functionality in OpenGL 3.0, and some of those extensions are not quite the same as what ended up as core functionality. Therefore, mixing older extensions and core definitions for FBOs would be a bad idea.
In this specific case, however, it's perfectly fine. It's expected that many/most compressed texture formats are extensions. Many of them are vendor specific and involve patents, so they will most likely never become core functionality. The spec accommodates that. Some spec quotes for glCompressedTexImage2D() make this clear:
internalformat must be a supported specific compressed internal format.
For all other compressed internal formats, the compressed image will be decoded according to the specification defining the internalformat token.
Specific compressed internal formats may impose format-specific restrictions on the use of the compressed image specification calls or parameters.
The extension definition for EXT_texture_compression_s3tc also confirms that COMPRESSED_RGB_S3TC_DXT1_EXT can be used as an argument to glCompressedTexImage2D():
This extension introduces new tokens:
COMPRESSED_RGB_S3TC_DXT1_EXT 0x83F0
COMPRESSED_RGBA_S3TC_DXT1_EXT 0x83F1
COMPRESSED_RGBA_S3TC_DXT3_EXT 0x83F2
COMPRESSED_RGBA_S3TC_DXT5_EXT 0x83F3
In OpenGL 1.2.1 these tokens are accepted by the <internalformat> parameter
of TexImage2D, CopyTexImage2D, and CompressedTexImage2D and the <format>
parameter of CompressedTexSubImage2D.
The list of supported compressed texture formats can also be obtained without querying extensions. You can use glGetIntegerv() to enumerate them:
GLint numFormats = 0;
glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &numFormats);
std::vector<GLint> formats(numFormats);
glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats.data());
This directly gives you the list of formats accepted by glCompressedTexImage2D().
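Building on that (and assuming the std::vector<GLint> formats from the snippet above plus an #include <algorithm>), a quick check for the DXT1 token from the question might look like this:
// Sketch: see whether GL_COMPRESSED_RGB_S3TC_DXT1_EXT is among the reported formats.
bool hasDXT1 = std::find(formats.begin(), formats.end(),
                         static_cast<GLint>(GL_COMPRESSED_RGB_S3TC_DXT1_EXT)) != formats.end();
if (!hasDXT1) {
    // fall back to another compressed format or an uncompressed upload
}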

It's perfectly possible to use extensions in core profile. Core profile just means that certain cruft from the past has been removed from, well, the core. But everything your OpenGL context reports as available in the extension strings returned by glGetStringi may legally be used from that context. Any extension that is not "core-compliant" would not appear in a pure core context.
Also texture compression is one of those extensions of high interest in core profiles. See https://www.opengl.org/wiki/OpenGL_Extension#Targeting_OpenGL_3.3
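A minimal sketch of that query in a core context (assuming the glGetStringi pointer has already been loaded and <cstring> is included) could look like this:
// Sketch: enumerate the extensions a core context reports and look for S3TC.
GLint numExtensions = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
bool s3tc = false;
for (GLint i = 0; i < numExtensions; ++i) {
    const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
    if (ext && std::strcmp(ext, "GL_EXT_texture_compression_s3tc") == 0)
        s3tc = true;
}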

Related

OpenGL: why does glMapNamedBuffer() return GL_INVALID_OPERATION?

Using OpenGL 4.6, I have the following (abbreviated) code, in which I create a buffer and then attempt to map it in order to copy data over using memcpy():
glCreateBuffers(buffers.size(), buffers.data()); // buffers is a std::array of GLuints
// ...
glNamedBufferStorage(buffers[3], n * sizeof(glm::vec4), nullptr, 0); // I also tried GL_DYNAMIC_STORAGE_BIT
// ...
void* bfrptr = glMapNamedBuffer(buffers[3], GL_WRITE_ONLY);
This latter call returns GL_INVALID_OPERATION. I am sure that this is the call that generates the error, as I catch OpenGL errors right before it as well. The manpage suggests that this error is only generated if the given buffer handle is not the name of an existing buffer object, but I'm sure I created it. Is there anything else I'm missing or that I'm doing wrong?
When you create immutable buffer storage, you must tell OpenGL how you intend to access that storage from the CPU. These are not "usage hints"; these are requirements, a contract between yourself and OpenGL which GL will hold you to.
You passed 0 for the access mask. That means that you told OpenGL (among other things) that you were not going to access it by mapping it. Which you then tried to do.
So it didn't let you.
If you want to map an immutable buffer, you must tell OpenGL at storage allocation time that you're going to do that. Specifically, if you want to map it for writing, you must use the GL_MAP_WRITE_BIT flag in the gl(Named)BufferStorage call.
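A hedged sketch of the corrected allocation and mapping, reusing the buffer and sizes from the question (the source pointer srcData is a placeholder, and <cstring> is assumed for std::memcpy):
// Request mappable-for-write immutable storage up front, then map and copy.
glNamedBufferStorage(buffers[3], n * sizeof(glm::vec4), nullptr, GL_MAP_WRITE_BIT);
// ...
void* bfrptr = glMapNamedBuffer(buffers[3], GL_WRITE_ONLY);
if (bfrptr) {
    std::memcpy(bfrptr, srcData, n * sizeof(glm::vec4)); // srcData: hypothetical source pointer
    glUnmapNamedBuffer(buffers[3]);
}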

Vulkan: AMD vkCmdDebugMarkerBeginEXT only found with vkGetInstanceProcAddr

I've run into some odd behavior with getting a handle to vkCmdDebugMarkerBeginEXT using vkGetDeviceProcAddr, which differs between AMD and Nvidia. However, using vkGetInstanceProcAddr works.
VkDevice device = ...; // valid initialized device
VkInstance instance = ...; // valid initialized instance
PFN_vkVoidFunction fnDevice = vkGetDeviceProcAddr(device, "vkCmdDebugMarkerBeginEXT");
// fnDevice == nullptr on AMD. Non-null on Nvidia
PFN_vkVoidFunction fnInstance = vkGetInstanceProcAddr(instance, "vkCmdDebugMarkerBeginEXT");
// fnInstance == Non-null on both
From the layer interface documentation:
vkGetDeviceProcAddr can only be used to query for device extension or
core device entry points. Device entry points include any command that
uses a VkDevice as the first parameter or a dispatchable object that
is a child of a VkDevice (currently this includes VkQueue and
VkCommandBuffer). vkGetInstanceProcAddr can be used to query either
device or instance extension entry points in addition to all core
entry points.
The prototype for vkCmdDebugMarkerBeginEXT seems to match this description:
VKAPI_ATTR void VKAPI_CALL vkCmdDebugMarkerBeginEXT(
VkCommandBuffer commandBuffer,
const VkDebugMarkerMarkerInfoEXT* pMarkerInfo);
While I can quite easily call the device version, and if this fails, call the instance version (to avoid the extra dispatch cost, if possible), I'm wondering if this is expected behavior, or a driver bug?
Yes, vkCmdDebugMarkerBeginEXT fits that description.
You should quote the Vulkan spec instead (which IMO has higher specifying power in this matter).
There is one additional requirement: the particular extension has to be enabled on that device for vkGetDeviceProcAddr to work. Otherwise it looks like a driver bug.
In fact, the in-spec Example 2 does use vkGetDeviceProcAddr.
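For completeness, a rough sketch of enabling the extension at device creation and then querying the entry point (physicalDevice and the omitted queue/feature setup are assumptions):
// Sketch: request VK_EXT_debug_marker for the device, then fetch its entry point.
const char* deviceExtensions[] = { VK_EXT_DEBUG_MARKER_EXTENSION_NAME };

VkDeviceCreateInfo createInfo = {};
createInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
createInfo.enabledExtensionCount = 1;
createInfo.ppEnabledExtensionNames = deviceExtensions;
// ... pQueueCreateInfos, pEnabledFeatures, etc. omitted ...

VkDevice device = VK_NULL_HANDLE;
vkCreateDevice(physicalDevice, &createInfo, nullptr, &device);

PFN_vkCmdDebugMarkerBeginEXT fnDevice = reinterpret_cast<PFN_vkCmdDebugMarkerBeginEXT>(
    vkGetDeviceProcAddr(device, "vkCmdDebugMarkerBeginEXT"));
// With the extension enabled, fnDevice should be non-null on conforming drivers.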

Can't get CreateDDSTextureFromFile to work

So, I've been trying to figure out my problem for a few hours now, but I have no idea what I'm doing wrong. I'm a noob when it comes to DirectX programming, so I've been following some tutorials, and right now I'm trying to create an obj loader.
http://www.braynzarsoft.net/index.php?p=D3D11OBJMODEL
However, I can't get my texture to work.
This is how I try to load the DDS-texture:
ID3D11ShaderResourceView* tempMeshSRV = nullptr;
hr = CreateDDSTextureFromFile(gDevice, L"boxTexture.dds", NULL, &tempMeshSRV);
if (SUCCEEDED(hr))
{
textureNameArray.push_back(L"boxTexture.dds");
material[matCount - 1].texArrayIndex = meshSRV.size();
meshSRV.push_back(tempMeshSRV);
material[matCount - 1].hasTexture = true;
}
However, my HRESULT never succeeds, but it doesn't crash either. If I hover over the hr, it just says "HRESULT_FROM_WIN32(ERROR_NOT_SUPPORTED)". I also tried to remove the if statement, but that just turns my box black.
Any idea on what I'm doing wrong? =/
Thanks in advance!
The most likely problem is that your "boxTexture.dds" is a 24 bit-per-pixel format file. In Direct3D 9, this was D3DFMT_R8G8B8 and was reasonably common. However, there is no DXGI equivalent format for 24 bits-per-pixel and it therefore requires format conversion to work.
The DDSTextureLoader module in DirectX Tool Kit is designed to be a minimum-overhead function, and therefore does no runtime conversions at all. If the data directly maps to a DXGI format, it loads. If it doesn't, it fails with HRESULT_FROM_WIN32(ERROR_NOT_SUPPORTED).
There are two different solutions depending on your usage scenario.
The ideal solution is to convert 'boxTexture.dds' to a supported format. You can do this with the texconv command-line tool provided with DirectXTex. This is by far the best option, so that the potentially expensive conversion operation is done once and not every single time your application runs and loads the data.
If you don't actually control the source of the dds files you are trying to load (i.e. they are arbitrary files provided by a user, or you are writing some kind of content tool that has to support legacy formats), then you should make use of the DirectXTex 'full-fat' LoadFromDDSFile function, which has extensive conversion code for handling legacy DDS file formats.
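A rough sketch of that fallback path using DirectXTex (the flags and error handling here are assumptions; gDevice and tempMeshSRV are the variables from the question):
#include <DirectXTex.h>
using namespace DirectX;

// Load the legacy DDS with full conversion support, then build the SRV from it.
TexMetadata metadata;
ScratchImage image;
HRESULT hr = LoadFromDDSFile(L"boxTexture.dds", DDS_FLAGS_NONE, &metadata, image);
if (SUCCEEDED(hr))
{
    hr = CreateShaderResourceView(gDevice, image.GetImages(), image.GetImageCount(),
                                  metadata, &tempMeshSRV);
}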
Note this situation can happen for a number of legacy-format DDS files, as listed in the CodePlex wiki documentation:
D3DFMT_R8G8B8 (24bpp RGB) - Use a 32bpp format
D3DFMT_X8B8G8R8 (32bpp RGBX) - Use BGRX, BGRA, or RGBA
D3DFMT_A2R10G10B10 (BGRA 10:10:10:2) - Use RGBA 10:10:10:2
D3DFMT_X1R5G5B5 (BGR 5:5:5) - Use BGRA 5:5:5:1 or BGR 5:6:5
D3DFMT_A8R3G3B2, D3DFMT_R3G3B2 (BGR 3:3:2) - Expand to a supported format
D3DFMT_P8, D3DFMT_A8P8 (8-bit palette) - Expand to a supported format
D3DFMT_A4L4 (Luminance 4:4) - Expand to a supported format
D3DFMT_UYVY (YUV 4:2:2 16bpp) - Swizzle to YUY2
See also Direct3D 11 Textures and Block Compression
If you look at the source code for CreateTextureFromDDS (which is called by CreateDDSTextureFromFile to do the main data processing) - http://directxtk.codeplex.com/SourceControl/latest#Src/DDSTextureLoader.cpp - you will see that there are a lot of reasons you could be getting "HRESULT_FROM_WIN32(ERROR_NOT_SUPPORTED)".
It's not likely a problem with opening or reading the file, since that would return a different error code. So most likely it's an unsupported DXGI_FORMAT, a malformed cubemap, an invalid mipmap count, or invalid image dimensions (i.e. larger than the limits found here: http://msdn.microsoft.com/en-us/library/ff819065(v=vs.85).aspx ).

How to support different OpenGL versions/extensions in C++ font library

I'm planning to rewrite a small C++ OpenGL font library I made a while back using
FreeType 2 since I recently discovered the changes to newer OpenGL versions. My code
uses immediate mode and some function calls I'm pretty sure are deprecated now, e.g.
glLineStipple.
I would very much like to support a range of OpenGL versions such that the code
uses e.g. VBO's when possible or falls back on immediate mode if nothing else is available
and so forth. I'm not sure how to go about it though. Afaik, you can't do a compile
time check since you need a valid OpenGL context created at runtime. So far, I've
come up with the following proposals (with inspiration from other threads/sites):
Use GLEW to make runtime checks in the drawing functions and to check for function
support (e.g. glLineStipple)
Use some #define's and other preprocessor directives that can be specified at compile
time to compile different versions that work with different OpenGL versions
Compile different versions supporting different OpenGL versions and supply each as a
separate download
Ship the library with a script (Python/Perl) that checks the OpenGL version on the
system (if possible/reliable) and makes the appropriate modifications to the source
so it fits with the user's version of OpenGL
Target only newer OpenGL versions and drop support for anything below
I'm probably going to use GLEW anyhow to easily load extensions.
FOLLOW-UP:
Based on your very helpful answers, I tried to whip up a few lines based on my old code; here's a snippet (not tested/finished). I declare the appropriate function pointers in the config header, then when the library is initialized, I try to get the right function pointers. If VBOs fail (pointers null), I fall back to display lists (deprecated in 3.0) and then finally to vertex arrays. Should I (maybe?) also check for available ARB extensions if e.g. VBOs fail to load, or is that too much work? Would this be a solid approach? Comments are appreciated :)
#if defined(WIN32) || defined(_WIN32) || defined(__WIN32__)
#define OFL_WINDOWS
// other stuff...
#ifndef OFL_USES_GLEW
// Check which extensions are supported
#else
// Declare vertex buffer object extension function pointers
PFNGLGENBUFFERSPROC glGenBuffers = NULL;
PFNGLBINDBUFFERPROC glBindBuffer = NULL;
PFNGLBUFFERDATAPROC glBufferData = NULL;
PFNGLVERTEXATTRIBPOINTERPROC glVertexAttribPointer = NULL;
PFNGLDELETEBUFFERSPROC glDeleteBuffers = NULL;
PFNGLMULTIDRAWELEMENTSPROC glMultiDrawElements = NULL;
PFNGLBUFFERSUBDATAPROC glBufferSubData = NULL;
PFNGLMAPBUFFERPROC glMapBuffer = NULL;
PFNGLUNMAPBUFFERPROC glUnmapBuffer = NULL;
#endif
#elif some_other_system
Init function:
#ifdef OFL_WINDOWS
bool loaded = true;
// Attempt to load vertex buffer object extensions
loaded = ((glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers")) != NULL && loaded);
loaded = ((glBindBuffer = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer")) != NULL && loaded);
loaded = ((glBufferData = (PFNGLBUFFERDATAPROC)wglGetProcAddress("glBufferData")) != NULL && loaded);
loaded = ((glVertexAttribPointer = (PFNGLVERTEXATTRIBPOINTERPROC)wglGetProcAddress("glVertexAttribPointer")) != NULL && loaded);
loaded = ((glDeleteBuffers = (PFNGLDELETEBUFFERSPROC)wglGetProcAddress("glDeleteBuffers")) != NULL && loaded);
loaded = ((glMultiDrawElements = (PFNGLMULTIDRAWELEMENTSPROC)wglGetProcAddress("glMultiDrawElements")) != NULL && loaded);
loaded = ((glBufferSubData = (PFNGLBUFFERSUBDATAPROC)wglGetProcAddress("glBufferSubData")) != NULL && loaded);
loaded = ((glMapBuffer = (PFNGLMAPBUFFERPROC)wglGetProcAddress("glMapBuffer")) != NULL && loaded);
loaded = ((glUnmapBuffer = (PFNGLUNMAPBUFFERPROC)wglGetProcAddress("glUnmapBuffer")) != NULL && loaded);
if (!loaded)
std::cout << "OFL: Current OpenGL context does not support vertex buffer objects" << std::endl;
else {
#define OFL_USES_VBOS
std::cout << "OFL: Loaded vertex buffer object extensions successfully"
return true;
}
if (glMajorVersion >= 3.f) {
std::cout << "OFL: Using vertex arrays" << std::endl;
#define OFL_USES_VERTEX_ARRAYS
} else {
// Display lists were deprecated in 3.0 (although still available through ARB extensions)
std::cout << "OFL: Using display lists"
#define OFL_USES_DISPLAY_LISTS
}
#elif some_other_system
First of all (and you're going to be safe with this one, because it's supported everywhere): rewrite your font renderer to use vertex arrays. It's only a small step from VAs to VBOs, but VAs are supported everywhere. You only need a small set of extension functions, so it may make sense to do the loading manually and not be dependent on GLEW; linking it statically would be huge overkill.
Then put the calls into wrapper functions that you refer to through function pointers, so that you can switch render paths that way. For example, add a function "stipple_it" or so that internally either calls glLineStipple or builds and sets the appropriate fragment shader for it.
The same goes for glVertexPointer vs. glVertexAttribPointer.
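A minimal sketch of that wrapper idea (the names stipple_legacy/stipple_shader and the init flag are made up for illustration):
// Sketch: choose a render-path implementation once, then call it through a function pointer.
typedef void (*StippleFn)(GLint factor, GLushort pattern);

static void stipple_legacy(GLint factor, GLushort pattern) {
    glLineStipple(factor, pattern);
    glEnable(GL_LINE_STIPPLE);
}

static void stipple_shader(GLint factor, GLushort pattern) {
    (void)factor; (void)pattern; // unused in this stub
    // bind a fragment shader that discards fragments according to the pattern
    // (shader setup omitted in this sketch)
}

static StippleFn stipple_it = NULL;

void init_render_paths(bool hasShaders) {
    stipple_it = hasShaders ? stipple_shader : stipple_legacy;
}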
If you do want to make every check by hand, then you won't get away from some #defines, because Android/iOS only support OpenGL ES and the runtime checks there would be different.
The run-time checks are also almost unavoidable because (from personal experience) there are a lot of caveats with different drivers from different hardware vendors (for anything above OpenGL 1.0, of course).
"Target only newer OpenGL versions and drop support for anything below" would be a viable option, since most video cards by ATI/nVidia and even Intel support some version of OpenGL 2.0+, which is roughly equivalent to GL ES 2.0.
GLEW is a good way to ease GL extension fetching. Still, there are issues with GL ES on embedded platforms.
Now the loading procedure:
On win32/linux, just check that the function pointer is not NULL and use the extension string from GL to know what is supported on the concrete hardware.
The "loading" for iOS/Android/MacOSX would just be storing the pointers, or even a no-op. Android is a different beast: there you have static pointers, but you still need to check the extensions. Even after these checks you might not be sure about some things that are reported as "working" (I'm talking about "noname" Android devices or simple gfx hardware), so you will add your own(!) checks based on the name of the video card.
The OSX/iOS OpenGL implementation "just works": on 10.5 you get GL 2.1; on 10.6, 2.1 plus some extensions which make it almost like 3.1/3.2; on 10.7, a 3.2 core profile. There is no GL 4.0 for Macs yet, but 4.0 is mostly an evolution of 3.2 anyway.
If you're interested in my personal opinion, then I'm mostly from the "reinvent everything" camp and over the years we've been using some autogenerated extension loaders.
Most important, you're on the right track: the rewrite to VBO/VA/Shaders/NoFFP would give you a major performance boost.

Embedding cg shaders in C++ GPGPU library

I'm writing a GPGPU fluid simulation, which runs using C++/OpenGL/Cg. At the moment, the library requires that the user specify a path to the shaders, from which it will then read them.
I'm finding it extremely annoying to have to specify that in my own projects and testing, so I want to make the shader contents linked in with the rest.
Ideally, my .cg files would still be browsable separately, but a post-build step or pre-processor directive would include them in the source when required.
To make things slightly more annoying, I have a "utils" shader file, which contains functions that are shared among things (like converting 3d texture coords to the 2d atlas equivalent).
I'd like a solution that's cross-platform if possible, but it's not so big a deal, as it is currently Windows-only. My searches have only really turned up objcopy for Linux, and using that for Windows is less than ideal.
If it helps, the project is available at http://code.google.com/p/fluidic
You mean you want the shaders embedded as strings in your binary? I'm not aware of any cross-platform tools/libraries to do that, which isn't that surprising because the binaries will be different formats.
For Windows it sounds like you want to store them as a string resource. You can then read the string using LoadString(). Here's how to add them, but it doesn't look like you can link them to a file.
A particularly hacky but cross-platform solution might be to write a script to convert your shaders into a string constant in a header. Then you can just #include it in your code.
I.e. you have the file myshader.shader which contains:
int aFunction(int i, int j)
{
return i/j;
}
And you have a build step that creates the file myshader.shader.h which looks like:
const char MYSHADER_SHADER[] =
"int aFunction(int i, int j)"
"{"
" return i/j;"
"}";
Then add #include "myshader.shader.h" to your code.
Very hacky, but I see no reason why it wouldn't work (except for maybe length/space limits on string literals).
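That build step can be a tiny program in any language; purely as an illustration, a C++ version might look like the sketch below (the file handling, the hard-coded constant name, and the minimal escaping of quotes/backslashes are assumptions):
// shader2header.cpp - sketch of a post-build step that wraps a shader file in a C string constant.
// Usage: shader2header myshader.shader > myshader.shader.h
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: shader2header <file>\n"; return 1; }
    std::ifstream in(argv[1]);
    std::cout << "const char MYSHADER_SHADER[] =\n";
    std::string line;
    while (std::getline(in, line)) {
        std::string escaped;
        for (std::string::size_type i = 0; i < line.size(); ++i) {
            char c = line[i];
            if (c == '"' || c == '\\') escaped += '\\';   // escape quotes and backslashes
            escaped += c;
        }
        std::cout << "    \"" << escaped << "\\n\"\n";    // one string literal per source line
    }
    std::cout << ";\n";
    return 0;
}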
Update: With the release of G++ 4.5, C++0x raw string literals are supported. These can contain newlines, so you should be able to do something like this:
const char MY_SHADER[] = R"qazwsx(
#include "my_shader.c"
)qazwsx";
I haven't tested it though.