I'm searching for a way to find out the name of the currently used graphics card driver from inside a C++ OpenGL program, ideally in a platform-independent way (Linux and Windows). The only thing I could find was this, but that's a shell solution which might vary across distributions (and Windows would still be a problem).
I already looked at glGetString() with the GL_VENDOR parameter; however, that outputs the vendor of the graphics card itself, not the driver. I couldn't find any options/functions that give me what I want.
Is there an easy solution to this problem?
Try these:
const GLubyte* vendor = glGetString(GL_VENDOR);
const GLubyte* renderer = glGetString(GL_RENDERER);
const GLubyte* version = glGetString(GL_VERSION);
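A minimal sketch of printing these, assuming a current OpenGL context has already been created. On many drivers (NVIDIA and Mesa, for example) the GL_VERSION string also embeds the driver version, which may be close enough to a "driver name" for logging purposes:

// Assumes a current GL context; glGetString returns NULL without one.
#include <cstdio>
#include <GL/gl.h>

void printGLInfo()
{
    printf("Vendor:   %s\n", reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
    printf("Renderer: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
    printf("Version:  %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));
}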
This is probably not the ultimate answer, but it might help you. Under Linux you can work out the driver name and version by combining the lsmod and modinfo commands.
For example, my lsmod returns the following:
$ lsmod
Module                   Size  Used by
autofs                  28170  2
binfmt_misc              7984  1
vboxnetadp               5267  0
vboxnetflt              14966  0
vboxdrv               1793592  2 vboxnetadp,vboxnetflt
snd_hda_codec_nvhdmi    15451  1
snd_hda_codec_analog    80317  1
usbhid                  42030  0 hid
nvidia               11263394  54
from which I know that nvidia refers to the graphics card.
I can then run modinfo nvidia and I get
filename: /lib/modules/2.6.35-32-generic/kernel/drivers/video/nvidia.ko
alias: char-major-195-*
version: 304.54
supported: external
license: NVIDIA
alias: pci:v000010DEd00000E00sv*sd*bc04sc80i00*
alias: pci:v000010DEd00000AA3sv*sd*bc0Bsc40i00*
alias: pci:v000010DEd*sv*sd*bc03sc02i00*
alias: pci:v000010DEd*sv*sd*bc03sc00i00*
depends:
And I can extract the driver version etc...
I know this is neither a straightforward solution nor multi-platform, but you could work out a script that extracts the driver name and version by grepping/awking the output of lsmod, guessing that most module names will be nvidia, ati, intel, etc.
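If you want to do the same from C++ rather than a shell script, here is a rough sketch of that idea (Linux only; the candidate module names below are assumptions, adjust them for your target systems):

// Run lsmod through popen() and look for well-known GPU kernel module names.
// The same approach works for "modinfo <module>" to pull out the "version:" line.
#include <cstdio>
#include <cstring>
#include <string>

std::string guessGpuModule()
{
    const char* candidates[] = { "nvidia", "nouveau", "radeon", "amdgpu", "i915" };
    std::string found;
    if (FILE* p = popen("lsmod", "r")) {
        char line[256];
        while (fgets(line, sizeof line, p)) {
            for (const char* name : candidates) {
                // lsmod prints the module name in the first column.
                if (strncmp(line, name, strlen(name)) == 0)
                    found = name;
            }
        }
        pclose(p);
    }
    return found;  // empty string if no known module was found
}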
Related
I'm trying to use the Ocaml Graphics package. I want to create a GUI for my chat server application. My code is:
let window = Graphics.open_graph "";
Graphics.set_window_title "caml-chat";
Graphics.set_font "ubuntu";
Graphics.set_text_size 12;
Graphics.draw_string "hello!"
However, Graphics.set_font "ubuntu" does not work. The documentation says that the string argument is system dependent, but I cannot find any more information than that. The only mention I found was in the answers to this question, and it didn't work.
Does anyone know anything else about setting the font? (Or can point me in the direction of a simple graphics library with better documentation?)
Although you didn't specify your system, I will assume that it is Linux (I doubt that Windows has an ubuntu font).
On Linux, the set_font function passes its argument to Xlib's XLoadFont function. You can use the fc-list or xfontsel utilities to query the available fonts on your system, or call the XListFonts function directly.
For example,
fc-list | cut -d: -f2 | sort -u
will give you a list of font families, which you can pass to the set_font function. Some lines will have more than one family per line, separated by commas (,). There are many more options: you can specify various styles, sizes, etc., but that is out of scope here. You can read the fontconfig guide to learn more about the font subsystem; its "Font Names" section, for example, explains how a font name is constructed.
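If you would rather query the X server directly (the route set_font ultimately takes via XLoadFont), a small C++ sketch using XListFonts might look like this; it assumes an X display is available and that you link with -lX11:

// List the core X font names; any of these should be acceptable to XLoadFont,
// and therefore to Graphics.set_font.
#include <cstdio>
#include <X11/Xlib.h>

int main()
{
    Display* dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    int count = 0;
    char** names = XListFonts(dpy, "*", 10000, &count);
    for (int i = 0; i < count; ++i)
        printf("%s\n", names[i]);

    XFreeFontNames(names);
    XCloseDisplay(dpy);
    return 0;
}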
I'm writing an application that can be sped up a lot if a graphics card is available.
I've got the necessary DLLs to make my application utilize NVIDIA and AMD cards, but it requires being launched with specific command-line arguments depending on which card is available.
I'd like to make an installer that will detect the specific brand of GPU and then launch the real application with the necessary command line arguments.
What's the best way to detect the type of card?
You have to detect it at runtime using OpenGL. Use the call glGetString(GL_VENDOR) or glGetString(GL_VERSION).
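For example, a minimal sketch (it assumes a GL context is already current, and the vendor substrings are assumptions you should verify against your target hardware):

#include <cstring>
#include <GL/gl.h>

enum class GpuVendor { Nvidia, Amd, Intel, Unknown };

GpuVendor detectVendor()
{
    // Typical strings are "NVIDIA Corporation", "ATI Technologies Inc.",
    // "AMD" and "Intel", but they vary between drivers.
    const char* v = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    if (!v) return GpuVendor::Unknown;
    if (std::strstr(v, "NVIDIA")) return GpuVendor::Nvidia;
    if (std::strstr(v, "ATI") || std::strstr(v, "AMD")) return GpuVendor::Amd;
    if (std::strstr(v, "Intel")) return GpuVendor::Intel;
    return GpuVendor::Unknown;
}

Note that this requires creating a (possibly hidden) OpenGL context first, which is why the Direct3D or wmic routes below can be more convenient for an installer.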
To do so using the Direct3D 9 API:
Step 1
D3DADAPTER_IDENTIFIER9 AdapterIdentifier;
Step 2
m_pD3D->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &AdapterIdentifier);
Step 3 Get the max size of the graphics card identifier string
const int cch = sizeof(AdapterIdentifier.Description);
Step 4 Define a TCHAR to hold the description
TCHAR szDescription[cch];
Step 5 Use the unicode DX utility to convert the char string to TCHAR
DXUtil_ConvertAnsiStringToGenericCch( szDescription, AdapterIdentifier.Description, cch );
Credit goes to: Anonymous_Poster_* # http://www.gamedev.net/topic/358770-obtain-video-card-name-size-etc-in-c/
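Putting the D3D9 steps above together, a stand-alone sketch might look like this (it assumes the Direct3D 9 headers are available and you link against d3d9.lib; error handling trimmed):

#include <cstdio>
#include <d3d9.h>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DADAPTER_IDENTIFIER9 id = {};
    if (SUCCEEDED(d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id))) {
        // Description and Driver are plain char arrays holding the adapter
        // name and the driver file name.
        printf("GPU:    %s\n", id.Description);
        printf("Driver: %s\n", id.Driver);
    }
    d3d->Release();
    return 0;
}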
#mark-setchell already posted the link from superuser.com above. I just want to make it easier for people to find the solution:
wmic path win32_VideoController get name
Has anyone successfully used glMultiDrawArraysIndirect? I'm including the latest glext.h, but the compiler can't seem to find the function. Do I need to define something (#define ...) before including glext.h?
error: ‘GL_DRAW_INDIRECT_BUFFER’ was not declared in this scope
error: ‘glMultiDrawArraysIndirect’ was not declared in this scope
I'm trying to implement an OpenGL SuperBible example. Here are snippets from the source code:
GLuint indirect_draw_buffer;
glGenBuffers(1, &indirect_draw_buffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_draw_buffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             NUM_DRAWS * sizeof(DrawArraysIndirectCommand),
             draws,
             GL_STATIC_DRAW);
....
// fill the buffers
.....
glMultiDrawArraysIndirect (GL_TRIANGLES, NULL, 3, 0);
I'm on Linux with a Quadro 2000 and the latest drivers installed (NVIDIA 319.60).
You cannot simply #include <glext.h> and expect this problem to fix itself. That header is only half of the equation: it defines the basic constants, function signatures, typedefs, etc. used by OpenGL extensions, but it does not actually solve the problem of extending OpenGL.
On most platforms you are only guaranteed a certain version of OpenGL (1.1 on Windows), and to use any part of OpenGL newer than that version you must extend the API at runtime. Linux is no different: in order to use glMultiDrawArraysIndirect (...) you have to load this extension from the driver at runtime. This usually means setting up function pointers that are NULL until runtime in order to keep the compiler/linker happy.
By far the simplest solution is to use something like GLEW, which will load all of the extensions your driver supports (for versions up to OpenGL 4.4) at runtime. It takes the place of glext.h; all you have to do is initialize the library after you set up your render context.
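A minimal GLEW-based setup sketch (it assumes GLEW is installed and that a GL context has already been created with GLFW/SDL/GLX/whatever you use):

#include <cstdio>
#include <GL/glew.h>   // must be included before other GL headers

bool initExtensions()
{
    glewExperimental = GL_TRUE;   // helps with core profiles on some drivers
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        fprintf(stderr, "glewInit failed: %s\n",
                reinterpret_cast<const char*>(glewGetErrorString(err)));
        return false;
    }
    // After this, glMultiDrawArraysIndirect and GL_DRAW_INDIRECT_BUFFER are
    // usable if the driver exposes GL 4.3 or ARB_multi_draw_indirect.
    return GLEW_VERSION_4_3 || GLEW_ARB_multi_draw_indirect;
}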
I have an NVIDIA Quadro NVS 295/PCIe/SSE2 card. When I call glGetString(GL_EXTENSIONS), print out the values, and grep for "compress", I get this list:
GL_ARB_compressed_texture_pixel_storage
GL_ARB_texture_compression
GL_ARB_texture_compression_rgtc
GL_EXT_texture_compression_dxt1
GL_EXT_texture_compression_latc
GL_EXT_texture_compression_rgtc
GL_EXT_texture_compression_s3tc
GL_NV_texture_compression_vtc
But then again, the documentation for glCompressedTexImage2D says that glGet with GL_COMPRESSED_TEXTURE_FORMATS returns the supported compressed formats, which gives only
0x83f0 = GL_COMPRESSED_RGB_S3TC_DXT1_EXT
0x83f2 = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT
0x83f3 = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
these three values.
Now why does glGet not expose the other compression formats that my card can process? Say LATC, RGTC or VTC?
Also why am I not seeing corresponding DXT3 or 5 extensions in the first list?
Now why does glGet not expose the other compression formats that my card can process?
Because NVIDIA doesn't want to. And really, there's no point. The ARB even decided to deprecate (though not remove) the COMPRESSED_TEXTURE_FORMATS query in GL 4.3.
In short, don't rely on that particular glGet. Rely on the extensions. If you have GL 3.0+, then you have the RGTC formats; that's required by GL 3.0+. If you have EXT_texture_compression_s3tc, then you have the "DXT" formats. If you have EXT_texture_sRGB as well, then you have the sRGB versions of the "DXT" formats. And so forth.
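As a sketch of the "rely on the extensions" approach (it assumes a 3.0+ context and some loader that exposes glGetStringi, e.g. GLEW):

#include <cstring>
#include <GL/glew.h>   // or any loader exposing glGetIntegerv/glGetStringi

bool hasExtension(const char* name)
{
    GLint n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (GLint i = 0; i < n; ++i) {
        const char* ext =
            reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// hasExtension("GL_EXT_texture_compression_s3tc") == true means the DXT1/3/5
// formats are usable, whatever GL_COMPRESSED_TEXTURE_FORMATS happens to report.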
Also why am I not seeing corresponding DXT3 or 5 extensions in the first list?
ahem:
0x83f2 = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT
0x83f3 = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
Those are just different forms of S3TC.
why am I not seeing corresponding DXT3 or 5 extensions in the first list?
You are. They're covered by GL_EXT_texture_compression_s3tc.
Using the ICU library with C++ I'm doing:
char const *lang = Locale::getDefault().getLanguage();
If I write a small test program and run it on my Mac system, I get en for lang. However, inside a larger group project I'm working on, I get root. Anybody have any idea why? I did find this:
http://userguide.icu-project.org/locale/resources
so my guess is that, when running under the larger system, some ICU resources aren't being found, but I don't know what resources, why they're not being found, or how to fix it.
Additional Information
/usr/bin/locale returns:
LANG="en_US.ISO8859-1"
LC_COLLATE="C"
LC_CTYPE="C"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL="C"
If I write a small C program:
char const *lang = setlocale( LC_ALL, "" );
I get en_US.ISO8859-1.
OS: Mac OS X 10.6.4 (Snow Leopard)
ICU version: 4.3.4 (latest available via MacPorts).
A little help? Thanks.
root is surely an odd default locale - you don't see many native root-speakers these days.
But seriously, is it safe to assume on the larger system that someone hasn't called one of the variants of setDefault("root")?
What does something like /usr/bin/locale return on this system (if you can run that)?
ICU 4.4 now has a test program called 'icuinfo'; does it also return root as the default locale?
What OS/platform is this on, and which version of ICU?
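For what it's worth, here is a small sketch (assuming the ICU4C headers are available; the "en_US" fallback is only an example) that prints the current default and resets it explicitly, which can help rule out a stray setDefault("root") somewhere in the larger system:

#include <cstdio>
#include <unicode/locid.h>

int main()
{
    // What does ICU think the default locale is right now?
    printf("default: %s\n", icu::Locale::getDefault().getName());

    // Explicitly reset it, in case some other component set it to "root".
    UErrorCode status = U_ZERO_ERROR;
    icu::Locale::setDefault(icu::Locale("en", "US"), status);
    if (U_FAILURE(status)) {
        fprintf(stderr, "setDefault failed: %s\n", u_errorName(status));
        return 1;
    }
    printf("language: %s\n", icu::Locale::getDefault().getLanguage());
    return 0;
}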