glGenFramebuffers() in Qt gets 'was not declared in this scope' - C++

I'm trying to compile code that uses this call with Qt 5 under Linux, and I get this error at compile time.
Is it a compatibility problem, or some other error?
I have this include:
#include <GL/gl.h>

Try this:
#ifdef Q_OS_WIN
#include "gl/GLU.h"
#elif defined(Q_OS_MAC)
#include <OpenGL/glu.h>
#else
#include <GL/glu.h>
#endif
That code worked for me for finding the correct headers when building this example on OS X and on Windows:
https://github.com/peteristhegreat/circles-in-a-cube/blob/master/glwidget.cpp
Hope that helps.

glGenFramebuffers() is never declared in the standard GL headers, because that function is not even guaranteed to be exported by the GL library on most platforms. For anything beyond GL 1.1, the GL extension mechanism has to be used to retrieve the function pointers at run time. There are a couple of different OpenGL loading libraries which do all of this for you under the hood, and also provide appropriate header files so that any GL function can be used as if you were linking it directly.
You already use Qt, which provides its own GL loading mechanism, namely QOpenGLFunctions and the more modern QAbstractOpenGLFunctions family of classes. This article provides a brief overview of the different possibilities.
Also note that Qt provides the QGLFramebufferObject class (and, in Qt 5, QOpenGLFramebufferObject) as a wrapper around GL's FBOs.
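For illustration, a minimal sketch of calling glGenFramebuffers() through QOpenGLFunctions, assuming a Qt 5 build with a current QOpenGLContext (the function name createFbo is just an example):
#include <QOpenGLContext>
#include <QOpenGLFunctions>

void createFbo()
{
    // Qt resolves the GL entry points for the current context at run time.
    QOpenGLFunctions *f = QOpenGLContext::currentContext()->functions();
    GLuint fbo = 0;
    f->glGenFramebuffers(1, &fbo);
    f->glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    // ... attach color/depth targets before rendering into it.
}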

Related

GLEW_APPLE_GLX Causes Linking Error When Building GLEW

I grepped inside GLEW while trying to solve my other question, concerning missing __glewX* symbols for Mac, and found that they are guarded by GLEW_APPLE_GLX.
When I attempt to build GLEW from source with that flag defined, I get undefined symbols (stuff like _glXGetClientString). Linking against X11 (-lX11) doesn't help.
Question: assuming defining GLEW_APPLE_GLX does indeed make sense, how can I fix the build?
When building an application that uses the X Server (XQuartz) instead of using CGL, you also need to add -lGL.
Ordinarily when building GL software on OS X you use OpenGL.framework (-framework OpenGL) and that gets you OpenGL and CGL/AGL functions but leaves out GLX.
You should also ditch any includes of things like <OpenGL/gl.h> and use <GL/gl.h> instead, as that will point to /usr/X11R6/include/GL/... instead of the OpenGL framework headers.
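For instance, a hypothetical build of a GLX-using demo on OS X would be along the lines of gcc -DGLEW_APPLE_GLX -I/usr/X11R6/include demo.c -L/usr/X11R6/lib -lGL -lX11 -o demo (file names are illustrative only).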

How do I use gl.h in a C++ program?

I am trying to understand why this program will compile in C but not C++ and why extern "C" { } doesn't seem to help.
This short program doesn't actually do anything, but it shows by example that C and C++ compile it differently.
#include <GL/gl.h>

int main() {
    glBegin(GL_POLYGON);
    glUseProgram(0);
    glEnd();
    return 0;
}
When you save that as ex.c and compile it with gcc ex.c -lGL -o ex it compiles and links as expected.
When you save it as ex.cpp and compile it with gcc ex.cpp -lGL -o ex you get a compiler error:
error: ‘glUseProgram’ was not declared in this scope
Note that it does not complain about glBegin, glEnd, or GL_POLYGON. In fact, you can comment out the glUseProgram line and it compiles just fine as a cpp program.
Now, why can't I wrap the program in extern "C" like so:
extern "C" {
#include <GL/gl.h>
int main() {
glBegin(GL_POLYGON);
glUseProgram(0);
glEnd();
return 0;
}
}
Doing so still leads to the same compiler error. My understanding of extern "C" is incomplete.
Ultimately I need to understand what is wrong because I am trying to write a C++ program that uses many of the GL functions that apparently won't compile in C++.
How do I use gl.h in a C++ program?
To address a few of the comments: I am using Mesa on X11. The glUseProgram declaration is in glext.h, which is included via gl.h. I have already written a C++ program using OpenGL (actually GLES) on a Raspberry Pi; converting it to X11 is proving to be a non-trivial matter.
As I've commented, placing #define GL_GLEXT_PROTOTYPES before the include directive solves your problem.
See the OpenGL ABI for Linux:
Normally, prototypes are present in the header files, but are not visible due to conditional compilation. To define prototypes as well as typedefs, the application must #define GL_GLEXT_PROTOTYPES prior to including gl.h or glx.h. (Note: consistency suggests using GLX_GLXEXT_PROTOTYPES for glxext.h - TBD).
This is usually done by something like GLUT or GLEW in their header files, so if you're working with them you typically don't need to define the macro yourself -- just be sure to include their headers before GL/gl.h.
EDITED: The reason glBegin works fine while glUseProgram causes trouble is that glBegin dates back to the very first version of OpenGL, while glUseProgram started out as an extension and wasn't introduced into core until OpenGL 2.0.
EDITED:
Let's be more specific: why does it work in C but not in C++? First of all, GL_GLEXT_PROTOTYPES is not defined in either the C or the C++ version. You can test that with some simple code like this:
#ifdef GL_GLEXT_PROTOTYPES
puts("macro detected");
#endif
The reason, however, is that C allows calling a function that has no declaration at all (it is then implicitly assumed to be declared as int f(), returning int), while C++ does not allow this.
The quickest way to get it working is to define GL_GLEXT_PROTOTYPES before the gl.h include:
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
in order to expose the function prototypes.
The reason it still works in C without the prototypes (function declarations) is that C is not as strict as C++ about requiring a declaration for a called function (it will implicitly assume an int fun() declaration).
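Putting it together, a minimal sketch of the asker's ex.cpp that compiles as C++ (built, e.g., with g++ ex.cpp -lGL -o ex; it still links only because the Linux GL library exports these symbols):
#define GL_GLEXT_PROTOTYPES   // expose the glext.h prototypes, e.g. glUseProgram
#include <GL/gl.h>

int main() {
    glBegin(GL_POLYGON);
    glUseProgram(0);          // now declared, so C++ accepts the call
    glEnd();
    return 0;
}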
The first thing you need to do is check whether you have the various GL header files. If you do, check whether you have the needed native libraries (.dll files) and make sure they are loaded into your program/project.
Get GLEW (OpenGL Extension Wrangler Library)
For your own sake, get something like GLEW. Why? Because by default the modern GL functions aren't available under names like glUseProgram. By default on my computer (Windows 7), all you get for glUseProgram() is the function-pointer typedef PFNGLUSEPROGRAMPROC, and you have to load the address yourself.
GLEW replaces those "ugly" things with the more readable names for you.
You could also declare and load all those function pointers yourself, though that would be A LOT OF WORK. So again, it is easier to get something like GLEW.
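A minimal GLEW setup sketch, assuming a GL context has already been created (the function name initExtensions is just an example):
#include <cstdio>
#include <GL/glew.h>   // include before any other GL headers

bool initExtensions() {
    // glewInit() must be called after a GL context is current.
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        std::fprintf(stderr, "GLEW init failed: %s\n",
                     (const char *) glewGetErrorString(err));
        return false;
    }
    // From here on, entry points such as glUseProgram can be called directly.
    return true;
}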

Misunderstood Compiler Error on glGenerateMipmap(GL_TEXTURE_2D);

error: there are no arguments to 'glGenerateMipmap' that depend on a template parameter, so a declaration of 'glGenerateMipmap' must be available [-fpermissive]
I have #include <GL/glext.h> in my source and can see the declaration of the function in the header, yet I get the compiler error above. I am on Ubuntu 13.04 with the most up-to-date NVIDIA drivers installed, so I would assume that this function is available.
My use of the function is:
if (mipmapped) {
    glGenerateMipmap(GL_TEXTURE_2D);
}
Why is my compiler choking on this function? What does that error mean in this context?
The glext.h header does not declare those functions by default; you have to
#define GL_GLEXT_PROTOTYPES
before including that file if you really want it to. (The odd wording about template parameters appears because the call sits inside a template: under C++'s two-phase name lookup only dependent names have their lookup deferred, so an undeclared non-dependent name like glGenerateMipmap produces this diagnostic.) But be warned that merely declaring that function does not mean that you can successfully link on every platform, as the GL lib is not required to export it. It is likely to work on Linux, though; the closest thing to a standard there is the OpenGL Application Binary Interface for Linux, which only guarantees that all OpenGL 1.2 core functions are exported.
You should consider using the OpenGL extension mechanism, either manually via glXGetProcAddress[ARB]() or by using a convenient library like GLEW or GL3W.
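A sketch of the manual route on Linux, using the typedef from glext.h (the pgl prefix is only there to avoid clashing with an existing declaration):
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glx.h>

static PFNGLGENERATEMIPMAPPROC pglGenerateMipmap = NULL;

void loadEntryPoints() {
    // Requires a current GL context; the result is NULL if unsupported.
    pglGenerateMipmap = (PFNGLGENERATEMIPMAPPROC)
        glXGetProcAddress((const GLubyte *) "glGenerateMipmap");
}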

Cuda with Boost

I am currently writing a CUDA application and want to use the boost::program_options library to get the required parameters and user input.
The trouble I am having is that NVCC cannot compile the Boost file any.hpp, giving errors such as
1>C:\boost_1_47_0\boost/any.hpp(68): error C3857: 'boost::any': multiple template parameter lists are not allowed
I searched online and found that it is because NVCC cannot handle certain constructs used in the Boost code, but that NVCC should delegate compilation of host code to the C++ compiler. In my case I am using Visual Studio 2010, so host code should be passed to cl.
Since NVCC seemed to be getting confused, I even wrote a simple wrapper around the Boost stuff and stuck it in a separate .cpp (instead of .cu) file, but I am still getting build errors. Weirdly, the error is thrown when compiling my main.cu instead of the wrapper.cpp, but it is still caused by Boost, even though main.cu doesn't include any Boost code.
Does anybody know of a solution or even workaround for this problem?
Dan, I have written CUDA code using boost::program_options in the past, and looked back at it to see how I dealt with your problem. There are certainly some quirks in the nvcc compile chain. I believe you can generally deal with this if you've decomposed your classes appropriately, realizing that while NVCC often can't handle C++ code/headers, your C++ compiler can handle the CUDA-related headers just fine.
I essentially have main.cpp which includes my program_options header, and the parsing stuff dictating what to do with the options. The program_options header then includes the CUDA-related headers/class prototypes. The important part (as I think you've seen) is to just not have the CUDA code and accompanying headers include that options header. Pass your objects to an options function and have that fill in the relevant info. Something like an ugly version of the Strategy pattern. In outline:
main.cpp:
#include "myprogramoptionsparser.hpp"
(...)
CudaObject* MyCudaObj = new CudaObject;
GetCommandLineOptions(argc,argv,MyCudaObj);
myprogramoptionsparser.hpp:
#include <boost/program_options.hpp>
#include "CudaObject.hpp"
void GetCommandLineOptions(int argc, char **argv, CudaObject *obj) {
    (do stuff to the cuda object)
}
CudaObject.hpp:
(do not include myprogramoptionsparser.hpp)
CudaObject.cu:
#include "CudaObject.hpp"
It can be a bit annoying, but the nvcc compiler seems to be getting better at handling more C++ code. This has worked fine for me in VC2008/2010, and linux/g++.
You have to split the code in two parts:
the kernel has to be compiled by nvcc,
the program that invokes the kernel has to be compiled by g++.
Then link the two objects together and everything should work.
nvcc is required only to compile the CUDA kernel code.
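A minimal sketch of that split, with hypothetical file and function names (launchScale is not from the original question); main.cpp is then free to include Boost while kernel.cu never sees it:
// kernel.cu -- compiled with nvcc, no Boost headers anywhere in sight
__global__ void scale(float *d, float k) { d[threadIdx.x] *= k; }

extern "C" void launchScale(float *d, float k, int n) {
    scale<<<1, n>>>(d, k);
}

// main.cpp -- compiled with the host C++ compiler
extern "C" void launchScale(float *d, float k, int n);

// Build (roughly): nvcc -c kernel.cu
//                  g++ -c main.cpp
//                  g++ main.o kernel.o -lcudart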
Thanks to #ronag's comment I realised I was still (indirectly) including boost/program_options.hpp in my header, since some member variables in my wrapper class definition needed it.
To get around this I moved those variables out of the class definition and into the .cpp file. They are no longer member variables but globals inside wrapper.cpp.
This seems to work but it is ugly and I have the feeling nvcc should handle this gracefully; if anybody else has a proper solution please still post it :)
Another option is to wrap C++-only code in:
#ifndef __CUDACC__
(host-only code hidden from nvcc)
#endif

OpenGL v2.0 Shaders with Dev-C++ and SDL?

I was about to rebuild my library in Dev-C++ under Windows; however, the shader functionality I've added in the meantime is not supported: the compiler cannot find the related functions (::glCreateShader(), ::glCreateProgram(), etc.).
Digging around the internet and the Dev-C++ folder, I've found that the OpenGL implementation (gl.h) is only v1.1. I've found recommendations to download the latest headers from SGI. I found gl3.h; however, after closer scrutiny I realized that gl.h is not included in my project anyway, and that I should be looking at SDL/SDL_opengl.h.
EDIT: SDL_opengl.h does include gl.h and declares prototypes of the functions in question. So the question is, why am I given compile-time errors rather than linker errors?
(My library only links against mingw32, libOpenGL32, libSDL, libSDL_Image and libSDL_Mixer, much like under OS X (except for mingw32, of course), where I didn't have any problems.)
How can I use OpenGL v2.0 shaders with Dev-C++ and SDL?
gl.h is only for OpenGL 1.1 (and in some cases up to 1.3 depending on which version of the file you are using and which operating system). For everything else you additionally need glext.h and probably glxext.h (Linux/Unix) or wglext.h (Windows).
All functions from newer versions of OpenGL must be loaded at run time. So in order to use them you must get the right function address and assign it to a function pointer. The easiest way to do this is by using something like GLEW.
The manual way would be something like this:
PFNGLCREATESHADERPROC glCreateShader = NULL;
glCreateShader = (PFNGLCREATESHADERPROC) wglGetProcAddress("glCreateShader");
or for Linux:
PFNGLCREATESHADERPROC glCreateShader = NULL;
glCreateShader = (PFNGLCREATESHADERPROC) glXGetProcAddress((GLubyte*) "glCreateShader");
If you define GL_GLEXT_PROTOTYPES before including glext.h you can omit the first line.
EDIT: SDL_opengl.h looks like it contains a copy of glext.h (not up to date, though). So if you use that, the above should still be valid. If you want to use a separate glext.h you must define NO_SDL_GLEXT before including SDL_opengl.h. Also, the function prototypes aren't available unless GL_GLEXT_PROTOTYPES is defined or you write them yourself.
EDIT2: Apparently SDL has its own GetProcAddress function called SDL_GL_GetProcAddress.
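A portable sketch using SDL's loader, assuming SDL has already created the GL context (the pgl prefix is only there to avoid clashing with existing declarations):
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

PFNGLCREATESHADERPROC  pglCreateShader  = NULL;
PFNGLCREATEPROGRAMPROC pglCreateProgram = NULL;

void loadShaderEntryPoints() {
    // SDL hides the platform-specific lookup (wglGetProcAddress,
    // glXGetProcAddress, ...) behind one call.
    pglCreateShader  = (PFNGLCREATESHADERPROC)  SDL_GL_GetProcAddress("glCreateShader");
    pglCreateProgram = (PFNGLCREATEPROGRAMPROC) SDL_GL_GetProcAddress("glCreateProgram");
}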
Can you load the addresses of the functions at runtime?
// These typedefs match those in glext.h/SDL_opengl.h; glCreateShader and
// glCreateProgram both return a GLuint in core OpenGL 2.0.
typedef GLuint (APIENTRYP PFNGLCREATESHADERPROC) (GLenum shaderType);
PFNGLCREATESHADERPROC glCreateShader = NULL;
glCreateShader = (PFNGLCREATESHADERPROC) wglGetProcAddress("glCreateShader");

typedef GLuint (APIENTRYP PFNGLCREATEPROGRAMPROC) (void);
PFNGLCREATEPROGRAMPROC glCreateProgram = NULL;
glCreateProgram = (PFNGLCREATEPROGRAMPROC) wglGetProcAddress("glCreateProgram");