Anyone tried using glMultiDrawArraysIndirect? Compiler can't find the function - c++

Has anyone successfully used glMultiDrawArraysIndirect? I'm including the latest glext.h, but the compiler can't seem to find the function. Do I need to define something (#define ...) before including glext.h?
error: ‘GL_DRAW_INDIRECT_BUFFER’ was not declared in this scope
error: ‘glMultiDrawArraysIndirect’ was not declared in this scope
I'm trying to implement an OpenGL SuperBible example. Here are snippets from the source code:
GLuint indirect_draw_buffer;
glGenBuffers(1, &indirect_draw_buffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_draw_buffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             NUM_DRAWS * sizeof(DrawArraysIndirectCommand),
             draws,
             GL_STATIC_DRAW);
....
// fill the buffers
.....
glMultiDrawArraysIndirect (GL_TRIANGLES, NULL, 3, 0);
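DrawArraysIndirectCommand is the standard indirect-draw struct; its layout (the same as in the spec, field names may differ slightly from the book) is:
typedef struct {
    GLuint count;
    GLuint primCount;      // number of instances
    GLuint first;
    GLuint baseInstance;
} DrawArraysIndirectCommand;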
I'm on Linux with Quadro 2000 & latest drivers installed (NVidia 319.60).

You cannot simply #include <glext.h> and expect this problem to fix itself. That header is only half of the equation: it defines the basic constants, function signatures, typedefs, etc. used by OpenGL extensions, but it does not actually solve the problem of extending OpenGL.
On most platforms you are only guaranteed a certain version of OpenGL (1.1 on Windows), and to use any part of OpenGL newer than that version you must extend the API at runtime. Linux is no different: in order to use glMultiDrawArraysIndirect (...) you have to load this extension from the driver at runtime. This usually means declaring function pointers that stay NULL until they are resolved at runtime, in order to keep the compiler/linker happy.
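For illustration, loading just this one entry point by hand on Linux might look roughly like the sketch below (this assumes GLX and the PFNGLMULTIDRAWARRAYSINDIRECTPROC typedef from glext.h; a loader library does the same thing for every function):
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glx.h>

// Function pointer stays NULL until resolved from the driver at runtime.
static PFNGLMULTIDRAWARRAYSINDIRECTPROC pglMultiDrawArraysIndirect = NULL;

bool loadMultiDrawIndirect()
{
    pglMultiDrawArraysIndirect =
        (PFNGLMULTIDRAWARRAYSINDIRECTPROC)glXGetProcAddress(
            (const GLubyte *)"glMultiDrawArraysIndirect");
    return pglMultiDrawArraysIndirect != NULL;   // NULL => driver doesn't expose it
}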
By far the simplest solution is going to be to use something like GLEW, which will load all of the extensions your driver supports, for versions up to OpenGL 4.4, at runtime. It takes the place of glext.h; all you have to do is initialize the library after you set up your render context.
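With GLEW the whole thing reduces to a single call; a minimal sketch, assuming the window and context have already been created and made current:
#include <GL/glew.h>   // include before gl.h / glext.h
#include <cstdio>

// ... create the window and OpenGL context, make it current ...

GLenum err = glewInit();
if (err != GLEW_OK)
{
    fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(err));
    return -1;
}
// From here on, glMultiDrawArraysIndirect and GL_DRAW_INDIRECT_BUFFER are
// usable, provided the driver actually exposes GL 4.3 / ARB_multi_draw_indirect.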

Related

What are those folders in SDL-1.2.15

I'm trying to understand the source code of SDL-1.2.15 and to find out how it renders stuff on Windows. But I can't find where the rendering is happening. I looked inside the SDL-1.2.15/src/video folder, and there are a ton of subfolders, and I don't know what any of these stand for. See for yourself.
aalib/ directfb/ ipod/ os2fslib/ quartz/ windib/
ataricommon/ dummy/ maccommon/ photon/ riscos/ windx5/
bwindow/ fbcon/ macdsp/ picogui/ svga/ wscons/
caca/ gapi/ macrom/ ps2gs/ symbian/ x11/
dc/ gem/ nanox/ ps3/ vgl/ xbios/
dga/ ggi/ nds/ qtopia/ wincommon/ Xext/
Is this documented somewhere? This is a pretty popular library, so it probably is documented, right? Right? What's the point of having source code if you can't even understand it, if you can't find the functions you are using?
While not all the names are self-explanatory, they contain some hints.
directfb, fbcon (framebuffer console) and X (x11, Xext) are output layers on Linux (unix).
The ones starting with win indicate they are for Windows. More specifically, windib should be about device independent bitmaps (DIBs), windx5 about DirectX 5, and wincommon about some common stuff. Indeed, using grep shows that (only) these folders contain Windows-specific code:
grep -r windows.h src/video/*
[ lists files in the win* folders ]
You could also just compile the package on Windows and see which files get compiled (i.e. which folders contain object files).
However, to find out what it actually does, you should instead study the function you're interested in (e.g. SDL_BlitSurface), look at its implementation, and then look at the implementations of the functions it uses. Start in SDL_video.h (and notice that SDL_BlitSurface is just a #define).
You should use some tool to search the code base. Grep or some IDE. Or both.
First of all, why not SDL2?
These are SDL's different video drivers. You can find out which driver your program is using by calling SDL_VideoDriverName. Which driver gets used is determined by the target platform (e.g. the operating system; most drivers are platform-specific), the environment variable SDL_VIDEODRIVER, or the calling side.
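A minimal sketch of that query with the SDL 1.2 API (the driver names in the comment are typical examples):
#include <SDL/SDL.h>
#include <cstdio>

int main(int argc, char **argv)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    char name[64];
    if (SDL_VideoDriverName(name, (int)sizeof(name)) != NULL)
        printf("video driver: %s\n", name);   // e.g. "x11", "windib", "directx"

    SDL_Quit();
    return 0;
}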

glewInit() crashing (segfault) after creating osmesa (off-screen mesa) context

I'm trying to run an OpenGL application on a remote computing cluster. I'm using OSMesa as I intend to do off-screen software rendering (no X11 forwarding etc.). I want to use GLEW (to make life dealing with shaders and other extension-related calls easier), and I seem to have built and linked both Mesa and GLEW fine.
When I only call OSMesaCreateContext, glewInit gives an "OpenGL Version not available" output, which probably means the context has not been created. When I call glGetString(GL_EXTENSIONS) I don't get any output, which confirms this. It also shows that GLEW is working fine on its own (other GLEW commands, like querying the GLEW version, also work).
Now when I add the OSMesaMakeCurrent call (as shown below), glewInit crashes with a segfault. However, glGetString(GL_EXTENSIONS) now gives me a list of extensions (which means context creation is successful!).
I've spent hours trying to figure this out and tried tinkering, but nothing works. I would greatly appreciate any help on this. Maybe some of you have experienced something similar before? Thanks again!
int Height = 1;
int Width = 1;
OSMesaContext ctx;
void *buffer;

ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);

// Note: this allocates 4x more than needed for GL_UNSIGNED_BYTE, but that is harmless.
buffer = malloc(Width * Height * 4 * sizeof(GLfloat));

if (!OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, Width, Height)) {
    printf("OSMesaMakeCurrent failed!\n");
    return 0;
}

// glewInit() crashes after this point.
Just to add, OSMesa and GLEW actually did not compile together initially. Because glew.h undefines GLAPI in its last line, and since osmesa.h will not include gl.h again, GLAPI remains undefined and causes an error in osmesa.h (line 119). I got around this by adding an extern to GLAPI; not sure if this is relevant, though.
Looking at the source of glewInit in glew.c: if glewContextInit succeeds it returns GLEW_OK (which is defined as 0), and in that case on Linux systems glewInit always goes on to call glxewContextInit, which calls glX functions that, in the case of OSMesa, will likely not be ready for use. This causes the segfault (as I see it), and it seems that glewInit has no way to handle this case without patching the C source and recompiling the library.
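The structure of glewInit in the GLEW 1.x sources is approximately this (paraphrased from memory, not an exact copy of glew.c):
GLenum glewInit(void)
{
    GLenum r = glewContextInit();     /* loads the core GL entry points */
    if (r != GLEW_OK) return r;       /* GLEW_OK is 0 */
#if defined(_WIN32)
    return wglewContextInit();
#elif !defined(__APPLE__) || defined(GLEW_APPLE_GLX)
    return glxewContextInit();        /* unconditionally touches GLX; with OSMesa
                                         there is no GLX display, hence the crash */
#else
    return r;
#endif
}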
If others have already solved this I would be interested; I have seen some patched versions of glew.c that work around this. It isn't clear whether there is any energy in the GLEW community to merge changes that address this use case.

Find out name of graphics card driver in a C++ OpenGL program

I'm searching for a way to find out the name of the graphics card driver currently in use from inside a C++ OpenGL program. Ideally it would be platform-independent (Linux and Windows). The only thing I could find was this, but that's a shell solution and might even vary between distributions (and Windows would still be a problem).
I already looked at glGetString() with the GL_VENDOR parameter; however, that outputs the vendor of the graphics card itself, not the driver. I couldn't find any options/functions that give me what I want.
Is there an easy solution to this problem?
Try these:
const GLubyte* vendor = glGetString(GL_VENDOR);
const GLubyte* renderer = glGetString(GL_RENDERER);
const GLubyte* version = glGetString(GL_VERSION);
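On many drivers the GL_VERSION string also embeds the driver release (NVIDIA, for example, appends its driver version), so printing these three strings often answers the question. A minimal sketch, assuming a GL context is already current:
#include <GL/gl.h>
#include <cstdio>

// Assumes an OpenGL context is current on the calling thread.
void printGLInfo()
{
    printf("GL_VENDOR  : %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION : %s\n", (const char *)glGetString(GL_VERSION));
    // Typical NVIDIA output for GL_VERSION: "4.3.0 NVIDIA 319.60"
}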
This is probably not the ultimate answer, but it might help you. Under Linux you can work out the driver name and version by combining the lsmod and modinfo commands.
For example, my lsmod returns the following:
$ lsmod
Module Size Used by
autofs 28170 2
binfmt_misc 7984 1
vboxnetadp 5267 0
vboxnetflt 14966 0
vboxdrv 1793592 2 vboxnetadp,vboxnetflt
snd_hda_codec_nvhdmi 15451 1
snd_hda_codec_analog 80317 1
usbhid 42030 0 hid
nvidia 11263394 54
from which I know that nvidia refers to the graphics card.
I can then run modinfo nvidia and I get
filename: /lib/modules/2.6.35-32-generic/kernel/drivers/video/nvidia.ko
alias: char-major-195-*
version: 304.54
supported: external
license: NVIDIA
alias: pci:v000010DEd00000E00sv*sd*bc04sc80i00*
alias: pci:v000010DEd00000AA3sv*sd*bc0Bsc40i00*
alias: pci:v000010DEd*sv*sd*bc03sc02i00*
alias: pci:v000010DEd*sv*sd*bc03sc00i00*
depends:
And I can extract the driver version etc...
I know this is neither a straightforward solution nor multiplatform, but you might work out a script that extracts the driver name and version, if you guess that most of the names will be nvidia, ati, intel, etc., by grepping/awking the output of lsmod.

OpenGL compressed textures and extensions

I have an NVIDIA Quadro NVS 295/PCIe/SSE2 card. When I call glGetString(GL_EXTENSIONS), print out the values, and grep for "compress", I get this list:
GL_ARB_compressed_texture_pixel_storage
GL_ARB_texture_compression
GL_ARB_texture_compression_rgtc
GL_EXT_texture_compression_dxt1
GL_EXT_texture_compression_latc
GL_EXT_texture_compression_rgtc
GL_EXT_texture_compression_s3tc
GL_NV_texture_compression_vtc
But then again, the glCompressedTexImage2D documentation says that glGet with GL_COMPRESSED_TEXTURE_FORMATS returns the supported compressed formats, which only gives
0x83f0 = GL_COMPRESSED_RGB_S3TC_DXT1_EXT
0x83f2 = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT
0x83f3 = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
these three values.
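The query looks roughly like this (a sketch, not the exact code from my program):
GLint numFormats = 0;
glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &numFormats);

std::vector<GLint> formats(numFormats);
glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats.data());

for (GLint f : formats)
    printf("0x%x\n", f);   // prints 0x83f0, 0x83f2, 0x83f3 here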
Now why does glGet not expose the other compression formats that my card can process? Say LATC, RGTC or VTC?
Also why am I not seeing corresponding DXT3 or 5 extensions in the first list?
Now why does glGet not expose the other compression formats that my card can process?
Because NVIDIA doesn't want to. And really, there's no point. The ARB even decided to deprecate (though not remove) the COMPRESSED_TEXTURE_FORMATS stuff from GL 4.3.
In short, don't rely on that particular glGet. Rely on the extensions. If you have GL 3.0+, then you have the RGTC formats; that's required by GL 3.0+. If you have EXT_texture_compression_s3tc, then you have the "DXT" formats. If you have EXT_texture_sRGB as well, then you have the sRGB versions of the "DXT" formats. And so forth.
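A sketch of checking for an extension directly with the GL 3.0+ style query (glGetStringi), assuming the entry points have been loaded by GLEW or a similar loader:
#include <cstring>

// True if 'name' appears in the context's extension list.
bool hasExtension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
        if (ext && strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// e.g. if (hasExtension("GL_EXT_texture_compression_s3tc")) { /* DXT1/3/5 available */ }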
Also why am I not seeing corresponding DXT3 or 5 extensions in the first list?
ahem:
0x83f2 = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT
0x83f3 = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
Those are just different forms of S3TC.
why am I not seeing corresponding DXT3 or 5 extensions in the first list?
You are. They're covered by GL_EXT_texture_compression_s3tc.

glBindFramebuffer causes an "invalid operation" GL error when using the GL_DRAW_FRAMEBUFFER target

I'm using OpenGL 3.3 on a GeForce 9800 GTX. The reference pages for 3.3 say that an invalid operation with glBindFramebuffer indicates a framebuffer ID that was not returned from glGenFramebuffers. Yet, I output the ID returned by glGenFramebuffers and the ID I send later to glBindFramebuffer and they are the same.
The GL error goes away, however, when I change the target parameter in glBindFramebuffer from GL_DRAW_FRAMEBUFFER to GL_FRAMEBUFFER. The documentation says I should be able to use GL_DRAW_FRAMEBUFFER. Is there any case in which you can't bind to GL_DRAW_FRAMEBUFFER? Is there any harm from using GL_FRAMEBUFFER instead of GL_DRAW_FRAMEBUFFER? Is this a symptom of a larger problem?
If glBindFramebuffer(GL_FRAMEBUFFER) works when glBindFramebuffer(GL_DRAW_FRAMEBUFFER) does not, and we're not talking about the EXT versions of these functions and enums (note the lack of "EXT" suffixes), then it's likely that you have done something wrong. GL_INVALID_OPERATION is the error you get when a combination of parameters conflicts with other state; if it were just a missing enum, you would get GL_INVALID_ENUM.
Of course, it could just be a driver bug too. But there's no way to know without knowing what your code looks like.
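For reference, a minimal error-checked sketch of the sequence that should be legal on a 3.3 context; if this fails in isolation, the problem is the context or surrounding state rather than the bind itself:
// Minimal sketch; assumes a 3.3 context is current and GL functions are loaded.
void testDrawFramebufferBind()
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);

    while (glGetError() != GL_NO_ERROR) { }   // clear any stale errors first

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        fprintf(stderr, "glBindFramebuffer failed: 0x%x\n", err);

    // ... attach color/depth targets, then verify with
    // glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) before drawing.
}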