Using OpenGL 4.6, I have the following (abbreviated) code, in which I create a buffer and then attempt to map it in order to copy data over using memcpy():
glCreateBuffers(buffers.size(), buffers.data()); // buffers is a std::array of GLuints
// ...
glNamedBufferStorage(buffers[3], n * sizeof(glm::vec4), nullptr, 0); // I also tried GL_DYNAMIC_STORAGE_BIT
// ...
void* bfrptr = glMapNamedBuffer(buffers[3], GL_WRITE_ONLY);
This latter call generates GL_INVALID_OPERATION. I am sure that this is the call that generates the error, as I check for OpenGL errors right before it as well. The manpage suggests that this error is only generated if the given buffer handle is not the name of an existing buffer object, but I'm sure I created it. Is there anything else I'm missing or that I'm doing wrong?
When you create immutable buffer storage, you must tell OpenGL how you intend to access that storage from the CPU. These are not "usage hints"; these are requirements, a contract between yourself and OpenGL which GL will hold you to.
You passed 0 for the access mask. That means that you told OpenGL (among other things) that you were not going to access it by mapping it. Which you then tried to do.
So it didn't let you.
If you want to map an immutable buffer, you must tell OpenGL at storage allocation time that you're going to do that. Specifically, if you want to map it for writing, you must use the GL_MAP_WRITE_BIT flag in the gl(Named)BufferStorage call.
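For example, a minimal sketch of the corrected allocation and mapping, assuming an OpenGL 4.5+ context and the same buffers[3] and n as in the question (src is a hypothetical source pointer, not from the original code):

glNamedBufferStorage(buffers[3], n * sizeof(glm::vec4), nullptr, GL_MAP_WRITE_BIT);
// ...
void* bfrptr = glMapNamedBuffer(buffers[3], GL_WRITE_ONLY); // now legal
if (bfrptr) {
    memcpy(bfrptr, src, n * sizeof(glm::vec4));
    glUnmapNamedBuffer(buffers[3]);
}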
When I try to create an effect from a shader compiled in memory, the call fails with "one or more arguments are invalid". The shader compiles successfully, but it seems that after the D3DCompileFromFile() call something in memory is not correct, and the ID3DBlob interface does not get correct values for some reason.
ID3DBlob* pBlobFX = NULL;
ID3DBlob* pErrorBlob = NULL;
hr = D3DCompileFromFile(str, NULL, NULL, NULL, "fx_5_0", NULL, NULL, &pBlobFX, &pErrorBlob); // OK
if (FAILED(hr))
{
if (pErrorBlob != NULL)
OutputDebugStringA((char *)pErrorBlob->GetBufferPointer());
SAFE_RELEASE(pErrorBlob);
return hr;
}
// Create the effect
hr = D3DX11CreateEffectFromMemory(pBlobFX->GetBufferPointer(), pBlobFX->GetBufferSize(), 0, pd3dDevice, ppEffect); // Error: E_INVALIDARG One or more arguments are invalid
The legacy DirectX SDK version of Effects for Direct3D 11 only included D3DX11CreateEffectFromMemory for creating effects, which required the application to load a compiled shader binary blob itself.
The latest GitHub version includes the other expected functions:
D3DX11CreateEffectFromFile which loads a compiled binary blob from disk and then creates an effect from it.
D3DX11CompileEffectFromMemory which uses the D3DCompile API to compile a fx file in memory and then create an effect from it.
D3DX11CompileEffectFromFile which uses the D3DCompile API to compile a provided fx file and then create an effect from it.
Using D3DX11CompileEffectFromFile instead of trying to do it manually as the original poster tried is the easiest solution here.
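A rough sketch of that approach, assuming the GitHub (FX11) version of the library and reusing str, pd3dDevice, and ppEffect from the question (error handling abbreviated; D3D_COMPILE_STANDARD_FILE_INCLUDE is optional and NULL also works if the .fx file has no #include):

ID3DBlob* pErrorBlob = NULL;
HRESULT hr = D3DX11CompileEffectFromFile(str, NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                         0, 0, pd3dDevice, ppEffect, &pErrorBlob);
if (FAILED(hr))
{
    if (pErrorBlob != NULL)
        OutputDebugStringA((char *)pErrorBlob->GetBufferPointer());
    SAFE_RELEASE(pErrorBlob);
    return hr;
}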
The original library wanted to strongly encourage using build-time rather than run-time compilation of effects. Given that the primary use of Effects 11 today is developer education, this was unnecessarily difficult for new developers to use so the GitHub version now includes all four possible options for creating effects.
Note: The fx_5_0 profile in the HLSL compiler is deprecated, but it is required for use with Effects 11.
I want to use DXT compressed textures in my program, so I am loading the core function pointer like this:
/* GL 1.3 core */
PFNGLCOMPRESSEDTEXIMAGE2DPROC glCompressedTexImage2D = NULL;
/* ... */
/* check GL version using glGetString(GL_VERSION) */
/* ... */
glCompressedTexImage2D = (PFNGLCOMPRESSEDTEXIMAGE2DPROC)wglGetProcAddress(
"glCompressedTexImage2D");
if (!glCompressedTexImage2D)
return 0;
/* check if GL_EXT_texture_compression_s3tc is available */
And after that, I use the function like this:
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT, width,
height, 0, size, ptr);
It works well, but the reason I am doubting this is that I have been told that I cannot mix OpenGL core functions with extension functions like this:
glGenBuffersARB(1, &id);
glBindBuffer(GL_ARRAY_BUFFER, id);
Or, core functions with tokens added by some extension like this:
glActiveTexture(GL_TEXTURE0_ARB);
But I am using glCompressedTexImage2D (a core function) with GL_COMPRESSED_RGB_S3TC_DXT1_EXT (a token added by GL_EXT_texture_compression_s3tc).
So, is it okay to use functions/tokens from extensions that were never promoted to core (such as GL_EXT_texture_compression_s3tc or WGL_EXT_swap_control) together with core functions?
Generally, it's good advice not to mix core and extension definitions for the same functionality. Extensions are often promoted to core functionality with identical definitions, and then it's not a problem. But there are cases where the core functionality is not quite the same as earlier versions of the same functionality defined in extensions.
A common example for this are FBOs (Framebuffer Objects). There were a number of different extensions related to FBO functionality before FBOs were introduced as core functionality in OpenGL 3.0, and some of those extensions are not quite the same as what ended up as core functionality. Therefore, mixing older extensions and core definitions for FBOs would be a bad idea.
In this specific case, however, it's perfectly fine. It's expected that many/most compressed texture formats are extensions. Many of them are vendor-specific and involve patents, so they will most likely never become core functionality. The spec accommodates that. Some quotes from the spec for glCompressedTexImage2D() make this clear:
internalformat must be a supported specific compressed internal format.
For all other compressed internal formats, the compressed image will be decoded according to the specification defining the internalformat token.
Specific compressed internal formats may impose format-specific restrictions on the use of the compressed image specification calls or parameters.
The extension specification for EXT_texture_compression_s3tc also confirms that COMPRESSED_RGB_S3TC_DXT1_EXT can be used as an argument to glCompressedTexImage2D():
This extension introduces new tokens:
COMPRESSED_RGB_S3TC_DXT1_EXT 0x83F0
COMPRESSED_RGBA_S3TC_DXT1_EXT 0x83F1
COMPRESSED_RGBA_S3TC_DXT3_EXT 0x83F2
COMPRESSED_RGBA_S3TC_DXT5_EXT 0x83F3
In OpenGL 1.2.1 these tokens are accepted by the <internalformat> parameter
of TexImage2D, CopyTexImage2D, and CompressedTexImage2D and the <format>
parameter of CompressedTexSubImage2D.
The list of supported compressed texture formats can also be obtained without querying extensions. You can use glGetIntegerv() to enumerate them:
GLint numFormats = 0;
glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &numFormats);
std::vector<GLint> formats(numFormats); // avoids the leak of a raw new[]
glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats.data());
This directly gives you the list of formats accepted by glCompressedTexImage2D().
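For instance, a small sketch of checking that list for DXT1, using numFormats and formats from the snippet above:

bool hasDXT1 = false;
for (GLint i = 0; i < numFormats; ++i) {
    if (formats[i] == GL_COMPRESSED_RGB_S3TC_DXT1_EXT) {
        hasDXT1 = true; // DXT1 is accepted by glCompressedTexImage2D
        break;
    }
}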
It's perfectly possible to use extensions in core profile. Core profile just means that certain cruft from the past has been removed from, well, the core. But everything your OpenGL context reports as available in the extension strings returned by glGetStringi may legally be used from that context. Any extension that is not core-compliant would not appear in a pure core context.
Also texture compression is one of those extensions of high interest in core profiles. See https://www.opengl.org/wiki/OpenGL_Extension#Targeting_OpenGL_3.3
Given a device instance ID for a network card, I would like to know its MAC address. Example device instance ID on my system for integrated Intel Gigabit card:
PCI\VEN_8086&DEV_10CC&SUBSYS_00008086&REV_00\3&33FD14CA&0&C8
So far, the algorithm I have used works as follows:
Call SetupDiGetClassDevs with DIGCF_DEVICEINTERFACE.
Call SetupDiEnumDeviceInfo to get the returned device in a SP_DEVINFO_DATA.
Call SetupDiEnumDeviceInterfaces with GUID_NDIS_LAN_CLASS to get a device interface.
Call SetupDiGetDeviceInterfaceDetail for this returned device interface. This gets us the device path as a string: \\?\pci#ven_8086&dev_10cc&subsys_00008086&rev_00#3&33fd14ca&0&c8#{ad498944-762f-11d0-8dcb-00c04fc3358c}\{28fd5409-15bd-4c06-b62f-004d3a06f852}
At this point we have an address to the network card driver's interface. Open it with CreateFile using the result from #4.
Call DeviceIoControl with IOCTL_NDIS_QUERY_GLOBAL_STATS and OID of OID_802_3_PERMANENT_ADDRESS to get the MAC address.
This usually works, and has been used successfully on quite a large number of machines. However, it appears that a very select few machines have network drivers that aren't responding properly to the DeviceIoControl request in step #6; the problem persists even after updating network card drivers to the latest. These are newer, Windows 7-based computers. Specifically, DeviceIoControl completes successfully, but returns zero bytes instead of the expected six bytes containing the MAC address.
A clue seems to be on the MSDN page for IOCTL_NDIS_QUERY_GLOBAL_STATS:
This IOCTL will be deprecated in later operating system releases. You
should use WMI interfaces to query miniport driver information. For
more information see, NDIS Support for WMI.
-- perhaps newer network card drivers are no longer implementing this IOCTL?
So, how should I get this working? Is it possible there's an oversight in my approach and I'm doing something slightly wrong? Or do I need to take an entirely different approach? Some alternate approaches seem to include:
Query Win32_NetworkAdapter WMI class: provides needed information but rejected due to horrible performance. See Fast replacement for Win32_NetworkAdapter WMI class for getting MAC address of local computer
Query MSNdis_EthernetPermanentAddress WMI class: appears to be the WMI replacement for IOCTL_NDIS_QUERY_GLOBAL_STATS and queries the OID directly from the driver - and this one works on the troublesome network driver. Unfortunately, the returned class instances only provide the MAC address and the InstanceName, which is a localized string like Intel(R) 82567LM-2 Gigabit Network Connection. Querying MSNdis_EnumerateAdapter yields a list which relates the InstanceName to a DeviceName, like \DEVICE\{28FD5409-15BD-4C06-B62F-004D3A06F852}. I'm not sure how to go from the DeviceName to the plug-and-play device instance ID (PCI\VEN_8086......).
Call GetAdaptersAddresses or GetAdaptersInfo (deprecated). The only non-localized identifier I can find in the return value is the adapter name, which is a string like {28FD5409-15BD-4C06-B62F-004D3A06F852} - same as the DeviceName returned by the WMI NDIS classes. So again, I can't figure out how to relate it to the device instance ID. I'm not sure if it would work 100% of the time either - e.g. for adapters without TCP/IP protocol configured.
NetBIOS method: requires specific protocols to be set up on the card so won't work 100% of time. Generally seems hack-ish, and not a way to relate to device instance ID anyway that I know of. I'd reject this approach.
UUID generation method: rejected for reasons I won't elaborate on here.
It seems like if I could find a way to get the "GUID" for the card from the device instance ID, I'd be well on my way with one of the remaining two ways of doing things. But I haven't figured out how yet. Otherwise, the WMI NDIS approach would seem most promising.
Getting a list of network cards and MAC addresses is easy, and there are several ways of doing it. Doing it in a fast way that lets me relate it to the device instance ID is apparently hard...
EDIT: Sample code of the IOCTL call if it helps anyone (ignore the leaked hFile handle):
HANDLE hFile = CreateFile(dosDevice.c_str(), 0, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL);
if (hFile == INVALID_HANDLE_VALUE) {
DWORD err = GetLastError();
wcout << "GetMACAddress: CreateFile on " << dosDevice << " failed." << endl;
return MACAddress();
}
BYTE address[6];
DWORD oid = OID_802_3_PERMANENT_ADDRESS, returned = 0;
//this fails too: DWORD oid = OID_802_3_CURRENT_ADDRESS, returned = 0;
if (!DeviceIoControl(hFile, IOCTL_NDIS_QUERY_GLOBAL_STATS, &oid, sizeof(oid), address, 6, &returned, NULL)) {
DWORD err = GetLastError();
wcout << "GetMACAddress: DeviceIoControl on " << dosDevice << " failed." << endl;
return MACAddress();
}
if (returned != 6) {
wcout << "GetMACAddress: invalid address length of " << returned << "." << endl;
return MACAddress();
}
The code fails, printing:
GetMACAddress: invalid address length of 0.
So the DeviceIoControl returns non-zero indicating success, but then returns zero bytes.
Here's one way to do it:
Call GetAdaptersAddresses to get a list of IP_ADAPTER_ADDRESSES structs
Iterate over each adapter and get its GUID from the AdapterName field (I'm not sure if this behaviour is guaranteed, but all the adapters in my system have a GUID here, and the documentation says the AdapterName is permanent)
For each adapter read the registry key from HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Network\{4D36E972-E325-11CE-BFC1-08002BE10318}\<the adapter GUID>\Connection\PnPInstanceID (if it exists) (got this idea from here; searching on Google that key seems to be well documented, so it's not likely to change)
From this key you get the device ID for the adapter (something like: PCI\VEN_14E4&DEV_16B1&SUBSYS_96B11849&REV_10\4&2B8260C3&0&00E4)
Do this for each adapter until you find a match. When you get your match just go back to the IP_ADAPTER_ADDRESSES and look at the PhysicalAddress field
Get a beer (optional)
It wouldn't be Windows if there weren't a million ways to do something!
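A rough sketch of those steps (assuming <winsock2.h>, <iphlpapi.h>, <windows.h>, <string>, and <vector>, linking against Iphlpapi.lib, and Vista+ for RegGetValue; the fixed buffer size and the exact PnpInstanceID value name are simplifications):

std::vector<BYTE> buf(16 * 1024);
ULONG size = (ULONG)buf.size();
PIP_ADAPTER_ADDRESSES adapters = (PIP_ADAPTER_ADDRESSES)buf.data();
// if this returns ERROR_BUFFER_OVERFLOW, grow buf and retry (omitted here)
if (GetAdaptersAddresses(AF_UNSPEC, 0, NULL, adapters, &size) == ERROR_SUCCESS) {
    for (PIP_ADAPTER_ADDRESSES a = adapters; a != NULL; a = a->Next) {
        // AdapterName is the GUID string, e.g. "{28FD5409-...}"
        std::string key = "SYSTEM\\CurrentControlSet\\Control\\Network\\"
                          "{4D36E972-E325-11CE-BFC1-08002BE10318}\\"
                          + std::string(a->AdapterName) + "\\Connection";
        char instanceId[512];
        DWORD len = sizeof(instanceId);
        if (RegGetValueA(HKEY_LOCAL_MACHINE, key.c_str(), "PnpInstanceID",
                         RRF_RT_REG_SZ, NULL, instanceId, &len) == ERROR_SUCCESS) {
            // Compare instanceId to your device instance ID; on a match,
            // a->PhysicalAddress (a->PhysicalAddressLength bytes) is the MAC.
        }
    }
}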
I wound up using SetupDiGetDeviceRegistryProperty to read SPDRP_FRIENDLYNAME. If that's not found, then I read SPDRP_DEVICEDESC instead. Ultimately, this gets me a string like "VirtualBox Host-Only Ethernet Adapter #2". I then match this against the InstanceName property in the WMI NDIS classes (MSNdis_EthernetPermanentAddress WMI class). Both properties must be read in case there are multiple adapters sharing the same driver (i.e. "#2", "#3", etc.) - if there's only one adapter then SPDRP_FRIENDLYNAME isn't available, but if there is more than one then SPDRP_FRIENDLYNAME is required to differentiate them.
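A minimal sketch of that property lookup, assuming hDevInfo and devInfoData come from the SetupDiGetClassDevs/SetupDiEnumDeviceInfo calls in the question:

WCHAR name[256] = {0};
// try SPDRP_FRIENDLYNAME first; fall back to SPDRP_DEVICEDESC if it's absent
if (!SetupDiGetDeviceRegistryPropertyW(hDevInfo, &devInfoData, SPDRP_FRIENDLYNAME,
        NULL, (PBYTE)name, sizeof(name), NULL) &&
    !SetupDiGetDeviceRegistryPropertyW(hDevInfo, &devInfoData, SPDRP_DEVICEDESC,
        NULL, (PBYTE)name, sizeof(name), NULL)) {
    // neither property is available for this device
}
// then match `name` against InstanceName from MSNdis_EthernetPermanentAddress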
The method makes me a little nervous because I'm comparing what seems like a localized string, and there's no documentation that I've found that guarantees what I'm doing will always work. Unfortunately, I haven't found any better ways that are documented to work, either.
A couple other alternate methods involve groveling in undocumented registry locations. One method is spencercw's method, and the other would be to read SPDRP_DRIVER, which is the name of a subkey under HKLM\SYSTEM\CurrentControlSet\Control\Class. Underneath the driver key, look for the Linkage\Export value which then seems like it could be matched to the DeviceName property of the MSNdis_EnumerateAdapter class. But there's no documentation I could find that says these values can be legally matched. Furthermore, the only documentation I found about Linkage\Export was from the Win2000 registry reference and explicitly said that applications shouldn't rely on it.
Another method would be to look at my original question, step 4: "SetupDiGetDeviceInterfaceDetail for this returned device interface". The device interface path actually can be used to reconstruct the device path. Start with device interface path: \\?\pci#ven_8086&dev_10cc&subsys_00008086&rev_00#3&33fd14ca&0&c8#{ad498944-762f-11d0-8dcb-00c04fc3358c}\{28fd5409-15bd-4c06-b62f-004d3a06f852}. Then, remove everything before the final slash, leaving you with: {28fd5409-15bd-4c06-b62f-004d3a06f852}. Finally, prepend \Device\ to this string and match it against the WMI NDIS classes. Again, however, this seems to be undocumented and relying on an implementation detail of a device interface path.
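In code, that reconstruction is just string surgery (and, as said, it relies on an undocumented layout of the interface path; the literal below is abbreviated with "..."):

// interfacePath comes from SetupDiGetDeviceInterfaceDetail
std::wstring interfacePath =
    L"\\\\?\\pci#ven_8086&dev_10cc&...#{ad498944-762f-11d0-8dcb-00c04fc3358c}"
    L"\\{28fd5409-15bd-4c06-b62f-004d3a06f852}";
std::wstring guid = interfacePath.substr(interfacePath.find_last_of(L'\\') + 1);
std::wstring deviceName = L"\\DEVICE\\" + guid; // match against MSNdis DeviceName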
In the end, the other methods I investigated had their own undocumented complications that sounded at least as serious as matching the SPDRP_FRIENDLYNAME / SPDRP_DEVICEDESC strings. So I opted for the simpler approach, which was to just match those strings against the WMI NDIS classes.
I guess you want to get the MAC address in order to implement some sort of DRM, inventory, or classification system, since you tried to get the permanent MAC address instead of the current one.
You seem to forget that there's even an administratively super-imposed MAC address (in other words: a "forced" MAC address).
Some drivers let you do this from the Device Property page, under the Advanced tab (for example, my Marvell network adapter lets me do this), while some others don't (read: they don't support that property).
However, it all ends in a Registry value: HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\xxxx\NetworkAddress, with a REG_SZ type.
Here you can set a different MAC address than the original one, in the form "01020304abcd" (6 bytes, plain hexadecimal, without : separators or 0x prefix).
After you set it, reboot the machine, and on power-up the new MAC address will have effect.
I happen to have a motherboard with two Marvell integrated NICs, and a NETGEAR USB WiFi NIC. The Marvell one supports changing the MAC address: if you set the NetworkAddress value in the Registry, you see the new value in the driver properties page too, and it takes effect immediately, without the need to restart (if you change it from the device Property Page).
Here follows the results of reading the MAC address with different methods:
GetAdaptersInfo: new MAC address
IOCTL_NDIS_QUERY_GLOBAL_STATS: original MAC address
MSNdis_EthernetPermanentAddress: original MAC address
I tried adding the NetworkAddress value in the Registry for the NETGEAR USB WiFi NIC, and the results are:
GetAdaptersInfo: new MAC address
IOCTL_NDIS_QUERY_GLOBAL_STATS: new MAC address
MSNdis_EthernetPermanentAddress: new MAC address
The original MAC address is gone.
So, in order to not be fooled by a "malicious" user, you always need to check the HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\xxxx\NetworkAddress Registry value. If that is set, I guess it's better to not trust that network adapter at all, since it is up to the driver implementation to decide what will be presented to you by the different methods.
Some background for getting to that Registry key:
Microsoft documentation about the HKLM\SYSTEM\CurrentControlSet\Control\Class key
According to the Microsoft documentation on that page,
There is a subkey for each class that is named using the GUID of the
setup class
So we choose the {4D36E972-E325-11CE-BFC1-08002BE10318} subkey (aka GUID_DEVCLASS_NET, defined in <devguid.h>, and further documented here)
Again, according to Microsoft documentation,
Each class subkey contains other subkeys known as software keys (or, driver keys) for each device instance of that class installed in the system. Each of these software keys is named by using a device instance ID, which is a base-10, four-digit ordinal value
The xxxx part is a four-digit decimal representation of a non-negative integer, starting from 0000.
So you can traverse the subkeys from 0000, 0001, 0002, up to the number of network adapters in your system.
The documentation stops here: I didn't find any other documentation about the different registry values, or such.
However, in each of these subkeys, you can find REG_SZ values that can help you link the GetAdaptersInfo(), MSNdis_EthernetPermanentAddress, Win32_NetworkAdapter, and Device Instance ID worlds (and this answers your question).
The Registry values are:
DeviceInstanceID: its value is, no surprise, the Device Instance ID
NetCfgInstanceId: its value is the AdapterName member of the IP_ADAPTER_INFO struct, returned by GetAdaptersInfo(). It is also the GUID member of the Win32_NetworkAdapter WMI class.
Don't forget the NetworkAddress one: should a valid MAC address exist here, a driver may report it as the MAC address in use by GetAdaptersInfo(), MSNdis_EthernetPermanentAddress, and IOCTL_NDIS_QUERY_GLOBAL_STATS!
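A hedged sketch of walking those subkeys and reading the values just listed (assumes <windows.h> and Vista+ for RegGetValue; error handling abbreviated, and reading some driver-key values may require elevated rights):

HKEY hClass;
const char* classKey = "SYSTEM\\CurrentControlSet\\Control\\Class\\"
                       "{4D36E972-E325-11CE-BFC1-08002BE10318}";
if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, classKey, 0, KEY_READ, &hClass) == ERROR_SUCCESS) {
    char sub[16];
    for (DWORD i = 0; ; ++i) {
        DWORD subLen = sizeof(sub);
        if (RegEnumKeyExA(hClass, i, sub, &subLen, NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
            break; // past the last "0000", "0001", ... subkey
        char devInstId[512]; DWORD len = sizeof(devInstId);
        if (RegGetValueA(hClass, sub, "DeviceInstanceID", RRF_RT_REG_SZ,
                         NULL, devInstId, &len) != ERROR_SUCCESS)
            continue; // not an adapter subkey
        char netCfg[64]; DWORD len2 = sizeof(netCfg);
        RegGetValueA(hClass, sub, "NetCfgInstanceId", RRF_RT_REG_SZ, NULL, netCfg, &len2);
        char mac[32]; DWORD len3 = sizeof(mac);
        bool forced = RegGetValueA(hClass, sub, "NetworkAddress", RRF_RT_REG_SZ,
                                   NULL, mac, &len3) == ERROR_SUCCESS;
        // devInstId <-> netCfg links the PnP and GetAdaptersInfo/WMI worlds;
        // `forced` tells you an administratively imposed MAC is in effect.
    }
    RegCloseKey(hClass);
}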
Then, as you already said, the only connection between the MSNdis_EthernetPermanentAddress WMI Class and the rest of the "world" is by its InstanceName member. You can relate it to the Description member of the IP_ADAPTER_INFO struct, returned by GetAdaptersInfo(). Although it may be a localized name, it seems to be unique for the system (For my two integrated Marvell NICs, the second one has a " #2" appended to its name).
Final note:
Having said all of the above: the user could still choose to disable WMI...
I'm planning to rewrite a small C++ OpenGL font library I made a while back using FreeType 2, since I recently discovered the changes in newer OpenGL versions. My code uses immediate mode and some function calls I'm pretty sure are deprecated now, e.g. glLineStipple.
I would very much like to support a range of OpenGL versions, such that the code uses e.g. VBOs when possible, or falls back on immediate mode if nothing else is available, and so forth. I'm not sure how to go about it, though. AFAIK, you can't do a compile-time check, since you need a valid OpenGL context created at runtime. So far, I've come up with the following proposals (with inspiration from other threads/sites):
Use GLEW to make runtime checks in the drawing functions and to check for function support (e.g. glLineStipple)
Use some #defines and other preprocessor directives that can be specified at compile time to compile different versions that work with different OpenGL versions
Compile different versions supporting different OpenGL versions and supply each as a separate download
Ship the library with a script (Python/Perl) that checks the OpenGL version on the system (if possible/reliable) and makes the appropriate modifications to the source so it fits the user's version of OpenGL
Target only newer OpenGL versions and drop support for anything below
I'm probably going to use GLEW anyhow to easily load extensions.
FOLLOW-UP:
Based on your very helpful answers, I tried to whip up a few lines based on my old code. Here's a snippet (not tested/finished). I declare the appropriate function pointers in the config header; then, when the library is initialized, I try to get the right function pointers. If VBOs fail (pointers are null), I fall back to display lists (deprecated in 3.0) and then finally to vertex arrays. Should I (maybe?) also check for the corresponding ARB extensions if e.g. the VBO functions fail to load, or is that too much work? Would this be a solid approach? Comments are appreciated :)
#if defined(WIN32) || defined(_WIN32) || defined(__WIN32__)
#define OFL_WINDOWS
// other stuff...
#ifndef OFL_USES_GLEW
// Check which extensions are supported
#else
// Declare vertex buffer object extension function pointers
PFNGLGENBUFFERSPROC glGenBuffers = NULL;
PFNGLBINDBUFFERPROC glBindBuffer = NULL;
PFNGLBUFFERDATAPROC glBufferData = NULL;
PFNGLVERTEXATTRIBPOINTERPROC glVertexAttribPointer = NULL;
PFNGLDELETEBUFFERSPROC glDeleteBuffers = NULL;
PFNGLMULTIDRAWELEMENTSPROC glMultiDrawElements = NULL;
PFNGLBUFFERSUBDATAPROC glBufferSubData = NULL;
PFNGLMAPBUFFERPROC glMapBuffer = NULL;
PFNGLUNMAPBUFFERPROC glUnmapBuffer = NULL;
#endif
#elif some_other_system
Init function:
#ifdef OFL_WINDOWS
bool loaded = true;
// Attempt to load vertex buffer object extensions
loaded = ((glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers")) != NULL && loaded);
loaded = ((glBindBuffer = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer")) != NULL && loaded);
loaded = ((glBufferData = (PFNGLBUFFERDATAPROC)wglGetProcAddress("glBufferData")) != NULL && loaded); // was declared above but never loaded
loaded = ((glVertexAttribPointer = (PFNGLVERTEXATTRIBPOINTERPROC)wglGetProcAddress("glVertexAttribPointer")) != NULL && loaded);
loaded = ((glDeleteBuffers = (PFNGLDELETEBUFFERSPROC)wglGetProcAddress("glDeleteBuffers")) != NULL && loaded);
loaded = ((glMultiDrawElements = (PFNGLMULTIDRAWELEMENTSPROC)wglGetProcAddress("glMultiDrawElements")) != NULL && loaded);
loaded = ((glBufferSubData = (PFNGLBUFFERSUBDATAPROC)wglGetProcAddress("glBufferSubData")) != NULL && loaded);
loaded = ((glMapBuffer = (PFNGLMAPBUFFERPROC)wglGetProcAddress("glMapBuffer")) != NULL && loaded);
loaded = ((glUnmapBuffer = (PFNGLUNMAPBUFFERPROC)wglGetProcAddress("glUnmapBuffer")) != NULL && loaded);
if (!loaded)
std::cout << "OFL: Current OpenGL context does not support vertex buffer objects" << std::endl;
else {
#define OFL_USES_VBOS
std::cout << "OFL: Loaded vertex buffer object extensions successfully"
return true;
}
if (glMajorVersion >= 3) {
std::cout << "OFL: Using vertex arrays" << std::endl;
#define OFL_USES_VERTEX_ARRAYS
} else {
// Display lists were deprecated in 3.0 (although still available through ARB extensions)
std::cout << "OFL: Using display lists"
#define OFL_USES_DISPLAY_LISTS
}
#elif some_other_system
First of all (and you're going to be safe with this one, because it's supported everywhere): rewrite your font renderer to use vertex arrays. It's only a small step from VAs to VBOs, but VAs are supported everywhere. You only need a small set of extension functions, so it may make sense to do the loading manually, to not be dependent on GLEW; linking it statically would be huge overkill.
Then put the calls into wrapper functions that you can refer to through function pointers, so that you can switch render paths that way. For example, add a function stipple_it() or so, which internally either calls glLineStipple or builds and sets the appropriate fragment shader for it.
Similar for glVertexPointer vs. glVertexAttribPointer.
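For illustration, one possible shape of such a wrapper (set_vertex_pointer_*, init_render_path, and SetVertexPointerFn are hypothetical names, not from any library):

typedef void (*SetVertexPointerFn)(GLint size, GLsizei stride, const GLvoid* data);

static void set_vertex_pointer_legacy(GLint size, GLsizei stride, const GLvoid* data) {
    glVertexPointer(size, GL_FLOAT, stride, data);                    // fixed-function path
}
static void set_vertex_pointer_modern(GLint size, GLsizei stride, const GLvoid* data) {
    glVertexAttribPointer(0, size, GL_FLOAT, GL_FALSE, stride, data); // generic-attribute path
}

SetVertexPointerFn set_vertex_pointer = NULL;

void init_render_path(bool shadersAvailable) {
    // pick the render path once, at init time
    set_vertex_pointer = shadersAvailable ? set_vertex_pointer_modern
                                          : set_vertex_pointer_legacy;
}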
If you do want to make every check by hand, you won't get away from some #defines, because Android/iOS only support OpenGL ES, where the runtime checks would be different.
The run-time checks are also almost unavoidable because (from personal experience) there are a lot of caveats with different drivers from different hardware vendors (for anything above OpenGL 1.0, of course).
"Target only newer OpenGL versions and drop support for anything below" would be a viable option, since most of the videocards by ATI/nVidia and even Intel support some version of OpenGL 2.0+ which is roughly equivalent to the GL ES 2.0.
GLEW is a good way to ease the GL extension fetching. Still, there are issues with the GL ES on embedded platforms.
Now the loading procedure:
On Win32/Linux, just check that the function pointer is not NULL, and use the extension string from GL to know what is supported on the actual hardware (see the sketch after this list).
The "loading" for iOS/Android/MacOSX would be just storing the pointers or even "do-nothing". Android is a different beast, here you have static pointers, but the need to check extension. Even after these checks you might not be sure about some things that are reported as "working" (I'm talking about "noname" Android devices or simple gfx hardware). So you will add your own(!) checks based on the name of the videocard.
The OSX/iOS OpenGL implementation "just works". If you're running on 10.5, you get GL 2.1; on 10.6, 2.1 plus some extensions which make it almost like 3.1/3.2; on 10.7, a 3.2 Core Profile. There's no GL 4.0 for Macs yet, but it's mostly an evolution of 3.2 anyway.
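To make the Win32/Linux point above concrete, a pre-3.0-style extension-string check might look like this (a sketch; in a 3.0+ core context you would instead enumerate GL_NUM_EXTENSIONS entries with glGetStringi):

#include <string.h>

int has_extension(const char* name) {
    const char* exts = (const char*)glGetString(GL_EXTENSIONS); // legacy, pre-3.0 query
    // note: strstr can match substrings of longer extension names;
    // a tokenizing comparison is more robust
    return exts != NULL && strstr(exts, name) != NULL;
}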
If you're interested in my personal opinion: I'm mostly from the "reinvent everything" camp, and over the years we've been using some autogenerated extension loaders.
Most importantly, you're on the right track: the rewrite to VBOs/VAs/shaders/no-FFP will give you a major performance boost.