I have started a new project in which I want to use multitexturing.
I have done multitexturing before, and it is supported by my version of OpenGL.
In the header I have:
GLuint m_TerrainTexture[3];//heightmap, texture map and detail map
GLuint m_SkyboxTexture[5]; //left, front, right, back and top textures
PFNGLMULTITEXCOORD2FARBPROC glMultiTexCoord2fARB;
PFNGLACTIVETEXTUREARBPROC glActiveTexture;
In the constructor I have:
glActiveTexture = (PFNGLACTIVETEXTUREARBPROC) wglGetProcAddress((LPCSTR)"glActiveTextureARB");
glMultiTexCoord2fARB = (PFNGLMULTITEXCOORD2FARBPROC) wglGetProcAddress((LPCSTR)"glMultiTexCoord2fARB");
if(!glActiveTexture || !glMultiTexCoord2fARB)
{
MessageBox(NULL, "multitexturing failed", "OGL_D3D Error", MB_OK);
}
glActiveTexture( GL_TEXTURE0_ARB );
...
This shows the message box "multitexturing failed", and the value of glActiveTexture is 0x00000000.
When it gets to glActiveTexture( GL_TEXTURE0_ARB ); I get an access violation.
I am implementing the MVC pattern, so this is all in my terrain view class.
You quoted your extension-loading code as follows:
PFNGLMULTITEXCOORD2FARBPROC glMultiTexCoord2fARB;
PFNGLACTIVETEXTUREARBPROC glActiveTexture;
glActiveTexture = (PFNGLACTIVETEXTUREARBPROC) wglGetProcAddress((LPCSTR)"glActiveTextureARB");
glMultiTexCoord2fARB = (PFNGLMULTITEXCOORD2FARBPROC) wglGetProcAddress((LPCSTR)"glMultiTexCoord2fARB");
This is very problematic, since it may redefine symbols that already exist. The (dynamic) linker will eventually trip over this. For example, the assignment to the pointer variable glActiveTexture may end up in one place, but whenever a function of that name is called, something linked in from somewhere else is invoked instead.
In C you usually avoid this problem with a combination of preprocessor macros and a custom prefix, without having to adjust large portions of code:
PFNGLMULTITEXCOORD2FARBPROC myglMultiTexCoord2fARB;
#define glMultiTexCoord2fARB myglMultiTexCoord2fARB
PFNGLACTIVETEXTUREARBPROC myglActiveTexture;
#define glActiveTexture myglActiveTexture
glActiveTexture = (PFNGLACTIVETEXTUREARBPROC) wglGetProcAddress((LPCSTR)"glActiveTextureARB");
glMultiTexCoord2fARB = (PFNGLMULTITEXCOORD2FARBPROC) wglGetProcAddress((LPCSTR)"glMultiTexCoord2fARB");
I really don't know of any other reason why things should fail if you have a valid render context active and the extensions supported.
GLEE is a dead library; it hasn't been updated in a long time.
GLEW is a fine extension loading library, but it has some issues working with OpenGL 3.2+ core profile contexts.
I would suggest GL3W. The beauty of it is that it is self-updating; it downloads and parses the headers by itself. The downside is that you need a Python 2.6 installation to generate the loader. But it provides reasonably good results otherwise.
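For reference, here is a minimal sketch of how GL3W is typically used once a context is current; gl3wInit() and gl3wIsSupported() are the loader's documented entry points, while the wrapping function and its name are just illustrative:
#include <GL/gl3w.h>   /* header generated by the gl3w script */
/* Assumes an OpenGL context has already been created and made current. */
int init_gl_functions(void)
{
    if (gl3wInit()) {               /* non-zero means the loader failed */
        return 0;
    }
    if (!gl3wIsSupported(3, 2)) {   /* check for the version you actually need */
        return 0;
    }
    return 1;                       /* core entry points are now usable */
}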
I recommend GLEW/GLEE for extension management.
The Rastertek tutorial has the complete setup required to make wglGetProcAddress work. GLEW doesn't work for me either; I've tried everything I could think of and asked many people about it, but it simply doesn't work in VS 2012, not to mention the enormous frustration I experienced when I wanted to compile a shader.
I am working on an augmented reality project. The user should use the webcam and see the captured video with a cube drawn on each frame.
This is where I get stuck: when I try to call glBindTexture(GL_TEXTURE_2D, texture_background), I get the error:
ArgumentError: argument 2: : wrong type
GLUT Display callback with (),{} failed: returning None argument 2: : wrong type
I am completely stuck and have no idea what to do. The project is done in Python 2.7, using OpenCV and PyOpenGL 3.1.0.
You can find the code at this link: click here
Thanks in advance.
Interesting error! I played around with your source code (by the way, in the future you should probably add the code directly to your question instead of as a separate link), and the issue is actually just one of variable scope, not GLUT or OpenGL usage. The problem is that your texture_background variable does not exist within the scope of the _draw_scene() function. To verify this, simply call print texture_background in your _draw_scene() function and you will find it prints None rather than the desired integer texture identifier.
The simple, hacky solution is to call global texture_background before using it within your _handle_input() function. You will also need to define texture_background = None in the main scope of your program (underneath your ##FOR CUBE FROM OPENGL comment). The same global comment applies to x_axis and z_axis.
That being said, that solution is not really that great. The rigid structure required by GLUT with all of these predefined glut* functions makes it hard to structure code the way you might want to in terms of initializing your app. I would suggest, if you are not forced to use GLUT, to use a more flexible alternative, such as pygame, pysdl2 or pyqt, to create your context instead.
I'm having trouble setting up OpenGL with MSVS 2013. I'm aware that the opengl32.dll on my Windows
platform located at C:\Windows\System32 is an implementation of OpenGL 1.1.
What I'm trying to do is to load the newer OpenGL (> 1.1) functions such as glBindBuffer and glBufferData. I have read that it's possible to get a pointer to such a function using wglGetProcAddress. When I use this function the returned pointer is always null; all the original functions loaded from the DLL with GetProcAddress(OpenGL32DLL, "...") work perfectly, but the newer functions don't seem to load.
I'm hoping anybody here can help me go through my setup and point out what I did wrong or if I have missed something.
So here we go:
I have downloaded OpenGL Extensions Viewer 4.4, which reports that I can run up to OpenGL 2.1, which
should be more than enough to use or load glBindBuffer and glBufferData.
I downloaded Microsoft SDKs/v7.1 which includes the headers: gl/glu.h and gl/gl.h; I also downloaded the GLEXT extensions API
from here and linked the glext.lib + included the headers.
Files in the Linker:
C:\Program Files\Microsoft SDKs\Windows\v7.1\Lib\OpenGL32.Lib
C:\Program Files\Microsoft SDKs\Windows\v7.1\Lib\GlU32.Lib
C:\Users\user\Desktop\glext\lib\glext.lib
The include directories and headers:
C:\Program Files\Microsoft SDKs\Windows\v7.1\Include -> GL.h, GLU.h
C:\Users\user\Desktop\glext\include -> glcorearb.h, glext.h, wglext.h
Instead of handling all these details yourself, I suggest you just grab yourself a copy of GLEW ( http://glew.sourceforge.net/ ) which handles all of this for you in a standard way. I currently use it on several published products without issues.
In your example, you'd be able to do the following:
if (GLEW_VERSION_1_5 || GLEW_ARB_vertex_buffer_object) {
// glBindBuffer and glBufferData are available.
}
(Of course, after a call to glewInit(), possibly with glewExperimental = GL_TRUE; see the documentation for details.)
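A minimal sketch of that initialization order, assuming the window and OpenGL context have already been created and made current; glewInit(), glewExperimental and the GLEW_ status variables are GLEW's documented API, while the wrapping function name is just illustrative:
#include <GL/glew.h>   /* must be included before gl.h */
#include <stdio.h>
/* Call once, after wglMakeCurrent() has succeeded. */
int load_gl_with_glew(void)
{
    glewExperimental = GL_TRUE;    /* needed when running on a core profile */
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(err));
        return 0;
    }
    if (!GLEW_VERSION_1_5 && !GLEW_ARB_vertex_buffer_object) {
        fprintf(stderr, "buffer objects not supported\n");
        return 0;
    }
    return 1;                      /* glBindBuffer / glBufferData can now be called */
}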
When using this function the returned pointer is always null, all the original functions in the dll using GetProcAddress(OpenGL32DLL, "...") work perfectly except the newer functions don't seem to load.
Do you have a valid OpenGL context created and made current on the calling thread? On Windows, extension functions are technically per-context, i.e. you have to get the pointers to the functions for each OpenGL context you create and make sure to use the right function pointers with the right context.
This of course also means that you must have an OpenGL context to begin with. The usual sequence for setting up an OpenGL context on Windows is:
pseudocode
struct glctx {
HGLRC rc
// dictionary for explanation purposes
// one would normally just have a bunch of
// structure elements here
functionpointer[string:name] extensionfunction
}
if not window_with_desired_pixelformat_exists: {
wnd := create_a_window
pixelformat := select_pixelformat
wnd→set_pixelformat pixelformat
}
dc := wnd→getDC
glctx ctx
ctx→rc := wglCreateContext(dc)
wglMakeCurrent(dc, ctx→rc);
foreach(fname in extensionfunctions_names): {
ctx→extensionfunction[fname] = wglGetProcAddress(fname)
}
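For completeness, here is a condensed C version of the same sequence; hwnd is assumed to be a window created elsewhere, and LoadMultitexExtensions is just an illustrative name:
#include <windows.h>
#include <GL/gl.h>
#include "glext.h"   /* provides the PFNGL... typedefs */

static PFNGLACTIVETEXTUREARBPROC   pglActiveTextureARB;
static PFNGLMULTITEXCOORD2FARBPROC pglMultiTexCoord2fARB;

/* hwnd is assumed to be an already created window. */
BOOL LoadMultitexExtensions(HWND hwnd)
{
    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1 };
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    HDC dc = GetDC(hwnd);
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc);
    wglMakeCurrent(dc, rc);   /* only after this does wglGetProcAddress return useful pointers */

    pglActiveTextureARB   = (PFNGLACTIVETEXTUREARBPROC)  wglGetProcAddress("glActiveTextureARB");
    pglMultiTexCoord2fARB = (PFNGLMULTITEXCOORD2FARBPROC)wglGetProcAddress("glMultiTexCoord2fARB");

    return pglActiveTextureARB != NULL && pglMultiTexCoord2fARB != NULL;
}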
I think the problem is that there is no current context while calling wglGetProcAddress.
Function pointers can be specific to a particular pixel format, which is determined during context creation.
In a C++ application, I create a temporary context, make it current, and then try to use wglCreateContextAttribsARB, which is simply undefined. Most answers I have seen say to use PFNWGLCREATECONTEXTATTRIBSARBPROC, which is also undefined. What am I missing?
I'm only using gl.h (provided by VS2015)
SetPixelFormat(g_hDc, chosenPixelFormat, &pfd);
HGLRC temporaryContext = wglCreateContext(g_hDc);
wglMakeCurrent(g_hDc, temporaryContext);
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB...
Both, however, are simply undefined. I initially tried calling wglCreateContextAttribsARB by itself, anywhere in my code, to no avail.
At this stage, I have a context working, windowed, 480p, updating, stable 60FPS. So I know my side is working. I'm getting no GL errors either. Where do I need to instantiate these two? Am I using the wrong gl header?
I'm using an updated ASUS Radeon R9-285
All data types and constants related to WGL extensions are declared in wglext.h.
You need to query the function pointer of type PFNWGLCREATECONTEXTATTRIBSARBPROC through your current (temporary) context via the extension mechanism, i.e. wglGetProcAddress().
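A sketch of how that might look, reusing g_hDc and temporaryContext from the question; the WGL_CONTEXT_* tokens and the function-pointer typedef come from wglext.h, and the 3.3 core profile values are only an example:
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   /* PFNWGLCREATECONTEXTATTRIBSARBPROC and the WGL_CONTEXT_* tokens */

/* g_hDc and temporaryContext come from the question's setup code. */
HGLRC CreateCoreContext(HDC g_hDc, HGLRC temporaryContext)
{
    /* The temporary context must be current before the pointer can be queried. */
    wglMakeCurrent(g_hDc, temporaryContext);

    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return NULL;   /* WGL_ARB_create_context is not supported */

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 3,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };
    HGLRC coreContext = wglCreateContextAttribsARB(g_hDc, NULL, attribs);

    /* Switch to the real context and drop the temporary one. */
    wglMakeCurrent(g_hDc, coreContext);
    wglDeleteContext(temporaryContext);
    return coreContext;
}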
I have a question about OpenGL dependencies. For example, for ARB_shader_atomic_counters, the dependency section says this:
Dependencies
This extension is written against the OpenGL 4.1 (core) specification
and the GLSL 4.10.6 specification.
OpenGL 3.0 is required.
How do I read this information? Does the graphics card and/or driver need to support OpenGL 4.1 or OpenGL 3.0?
The official docs say:
Extensions may be written against a particular specification version, but it is possible to implement that extension on older OpenGL versions. Some extensions can be implemented even on OpenGL 1.1, while others have a higher base version. The minimum version that the extension can appear in is specified with the text "X is required".
Is this just a theoretical possibility, and few (if any) drivers will implement it? Or would some drivers provide this function on OpenGL 3 hardware? How can I find out whether it is implemented?
You really should not read too much into those if you can help it. That information is not particularly helpful to the average developer. You will likely never encounter a GL 3.0 implementation that implements this extension because it was designed around a Shader Model 5.0 (DX 11) hardware feature. In theory there is nothing preventing it from being implemented in 3.0, but in practice no hardware/driver combination that implements this extension is going to limit itself to 3.0.
If you were going to implement the extension or devise some alternative solution, then it would be tremendously helpful to know the absolute minimum API version necessary.
When it says it is written against a certain version of a specification, what that means is that anytime the extension specification says something to the effect of "Add the following language to section X, paragraph Y, ...", you will find the original unextended text in that specific spec. version. It also means that the extension makes certain assumptions about how things behave.
For example, if version X says that points are rasterized as hexagons and version Y says they are rasterized as circles, and an extension is written against version Y, then the extension is free to assume that points are rasterized as circles. If this assumption becomes a point of controversy, you will find a blurb about it in the "Issues" section.
As for determining whether the extension is implemented (which is the most important point from your perspective), that is what the GL_EXTENSIONS string is for. Be aware, however, that the way you query this string has changed over the years:
In a compatibility profile context, or in GL 3.1 or older:
// Returns a massive null terminated string containing every extension the
// implementation supports.
//
// ... do a string search to find your extension.
const GLubyte* exts = glGetString (GL_EXTENSIONS);
In a core profile context:
GLint num_exts;
glGetIntegerv (GL_NUM_EXTENSIONS, &num_exts);
for (int i = 0; i < num_exts; i++) {
// Returns a null terminated string containing only one extension
const GLubyte* ext = glGetStringi (GL_EXTENSIONS, i);
}
If you try to do the former in a core profile, GL will generate a GL_INVALID_ENUM error and do nothing else. This is the reason glewExperimental = GL_TRUE is necessary before calling glewInit (...) if you use GLEW in a core profile.
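To tie this back to the original question, here is a sketch of checking for ARB_shader_atomic_counters under both query styles; check_extension() is just an illustrative helper, and it assumes glGetStringi has already been loaded (e.g. by your extension loader):
#include <string.h>

/* Returns 1 if the named extension is advertised by the current context. */
int check_extension(const char *name, int core_profile)
{
    if (!core_profile) {
        const GLubyte *exts = glGetString(GL_EXTENSIONS);
        /* Note: a plain substring search can also match prefixes of longer names. */
        return exts && strstr((const char *)exts, name) != NULL;
    }
    GLint num_exts = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &num_exts);
    for (GLint i = 0; i < num_exts; i++) {
        const GLubyte *ext = glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp((const char *)ext, name) == 0)
            return 1;
    }
    return 0;
}

/* Usage: check_extension("GL_ARB_shader_atomic_counters", is_core_profile); */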
I want to use the functions exposed under the OpenGL extensions. I'm on Windows, how do I do this?
Easy solution: Use GLEW. See how here.
Hard solution:
If you have a really strong reason not to use GLEW, here's how to achieve the same without it:
Identify the OpenGL extension and the extension APIs you wish to use. OpenGL extensions are listed in the OpenGL Extension Registry.
Example: I wish to use the capabilities of the EXT_framebuffer_object extension. The APIs I wish to use from this extension are:
glGenFramebuffersEXT()
glBindFramebufferEXT()
glFramebufferTexture2DEXT()
glCheckFramebufferStatusEXT()
glDeleteFramebuffersEXT()
Check if your graphics card supports the extension you wish to use. If it does, then your work is almost done! Download and install the latest drivers and SDKs for your graphics card.
Example: The graphics card in my PC is a NVIDIA 6600 GT. So, I visit the NVIDIA OpenGL Extension Specifications webpage and find that the EXT_framebuffer_object extension is supported. I then download the latest NVIDIA OpenGL SDK and install it.
Your graphics card manufacturer provides a glext.h header file (or a similarly named header file) with all the declarations needed to use the supported OpenGL extensions. (Note that not all extensions might be supported.) Either place this header file somewhere your compiler can pick it up or add its directory to your compiler's list of include directories.
Add a #include <glext.h> line in your code to include the header file into your code.
Open glext.h, find the API you wish to use and grab its corresponding ugly-looking declaration.
Example: I search for the above framebuffer APIs and find their corresponding ugly-looking declarations:
typedef void (APIENTRYP PFNGLGENFRAMEBUFFERSEXTPROC) (GLsizei n, GLuint *framebuffers); and GLAPI void APIENTRY glGenFramebuffersEXT (GLsizei, GLuint *);
All this means is that your header file has the API declaration in two forms. One is a wgl-like ugly function pointer declaration. The other is a sane-looking function declaration.
For each extension API you wish to use, add to your code a declaration of the function name, typed as the corresponding ugly-looking function-pointer type.
Example:
PFNGLGENFRAMEBUFFERSEXTPROC glGenFramebuffersEXT;
PFNGLBINDFRAMEBUFFEREXTPROC glBindFramebufferEXT;
PFNGLFRAMEBUFFERTEXTURE2DEXTPROC glFramebufferTexture2DEXT;
PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC glCheckFramebufferStatusEXT;
PFNGLDELETEFRAMEBUFFERSEXTPROC glDeleteFramebuffersEXT;
Though it looks ugly, all we're doing is declaring function pointers whose types correspond to the extension APIs.
Initialize these function pointers with their rightful functions. These functions are exposed by the library or driver. We need to use the wglGetProcAddress() function to do this.
Example:
glGenFramebuffersEXT = (PFNGLGENFRAMEBUFFERSEXTPROC) wglGetProcAddress("glGenFramebuffersEXT");
glBindFramebufferEXT = (PFNGLBINDFRAMEBUFFEREXTPROC) wglGetProcAddress("glBindFramebufferEXT");
glFramebufferTexture2DEXT = (PFNGLFRAMEBUFFERTEXTURE2DEXTPROC) wglGetProcAddress("glFramebufferTexture2DEXT");
glCheckFramebufferStatusEXT = (PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC) wglGetProcAddress("glCheckFramebufferStatusEXT");
glDeleteFramebuffersEXT = (PFNGLDELETEFRAMEBUFFERSEXTPROC) wglGetProcAddress("glDeleteFramebuffersEXT");
Don't forget to check the function pointers for NULL. If wglGetProcAddress() couldn't find the extension function, it returns NULL, and that is what the pointer will hold.
Example:
if (NULL == glGenFramebuffersEXT || NULL == glBindFramebufferEXT || NULL == glFramebufferTexture2DEXT
|| NULL == glCheckFramebufferStatusEXT || NULL == glDeleteFramebuffersEXT)
{
// Extension functions not loaded!
exit(1);
}
That's it, we're done! You can now use these function pointers just as if the function calls existed.
Example:
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorTex[0], 0);
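As a small follow-up (not in the original answer), the glCheckFramebufferStatusEXT() call from the list above would typically be used right after attaching the texture, continuing the fragment above:
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (GL_FRAMEBUFFER_COMPLETE_EXT != status)
{
    // The framebuffer is not usable yet; handle the error before rendering to it.
}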
Reference: Moving Beyond OpenGL 1.1 for Windows by Dave Astle — The article is a bit dated, but has all the information you need to understand why this pathetic situation exists on Windows and how to get around it.
A 'very strong reason' not to use GLEW might be that the library is not supported by your compiler/IDE, e.g. Borland C++ Builder.
In that case, you might want to rebuild the library from source. If it works, great; otherwise, manual extension loading isn't as bad as it is made to sound.
@Kronikarz: From the looks of it, GLEW seems to be the way of the future. NVIDIA already ships it along with its OpenGL SDK. And its latest release was in 2007, compared to GLEE, whose latest release was in 2006.
But the usage of both libraries looks almost the same to me. (GLEW has an init() that needs to be called before anything else, though.) So you don't need to switch unless you find some extension not being supported under GLEE.
GL3W is a public-domain script that creates a library which loads only core functionality for OpenGL 3/4. It can be found on GitHub at:
https://github.com/skaslev/gl3w
GL3W requires Python 2.6 to generate the libraries and headers for OpenGL; it does not require Python after that.