I am working on a small game engine on my main computer, but when I clone the project onto my laptop I just get a lot of error messages and a blank screen.
Here are some of the error messages I get every frame from calling the SFML draw function:
Warning: The created OpenGL context does not fully meet the settings that were requested
Requested: version = 4.4 ; depth bits = 24 ; stencil bits = 8 ; AA level = 1 ; core = false ; debug = false ; sRGB = false
Created: version = 4.5 ; depth bits = 24 ; stencil bits = 8 ; AA level = 4 ; core = true ; debug = false ; sRGB = false
An internal OpenGL call failed in RenderTarget.cpp(369).
Expression:
GLEXT_glClientActiveTexture(GLEXT_GL_TEXTURE0)
Error description:
GL_INVALID_OPERATION
The specified operation is not allowed in the current state.
An internal OpenGL call failed in RenderTarget.cpp(375).
Expression:
glDisable(GL_LIGHTING)
Error description:
GL_INVALID_ENUM
An unacceptable value has been specified for an enumerated argument.
An internal OpenGL call failed in RenderTarget.cpp(377).
Expression:
glDisable(GL_ALPHA_TEST)
Error description:
GL_INVALID_ENUM
An unacceptable value has been specified for an enumerated argument.
An internal OpenGL call failed in RenderTarget.cpp(378).
Expression:
glEnable(GL_TEXTURE_2D)
Error description:
GL_INVALID_ENUM
An unacceptable value has been specified for an enumerated argument.
An internal OpenGL call failed in RenderTarget.cpp(380).
Expression:
glMatrixMode(GL_MODELVIEW)
Error description:
GL_INVALID_OPERATION
The specified operation is not allowed in the current state.
An internal OpenGL call failed in RenderTarget.cpp(381).
Expression:
glEnableClientState(GL_VERTEX_ARRAY)
Error description:
GL_INVALID_OPERATION
The specified operation is not allowed in the current state.
I am simply drawing sprites and textures on the menu screen, and it seems like even the raw OpenGL draw calls produce errors. Here's the link to my GitHub repo: https://github.com/ZzkilzZ/mfengine
I am running the LTS version of Ubuntu on both my computers. Could this be a discrepancy in the versions of certain dependencies?
EDIT:
These are my glxinfo results:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile
OpenGL core profile version string: 4.5 (Core Profile) Mesa 19.0.8
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 19.0.8
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 19.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
OpenGL ES profile extensions:
I am running Ubuntu 18.04 LTS.
You are creating an OpenGL core profile context, but the code uses legacy fixed-function calls like glDisable(GL_LIGHTING). The solution is to request a compatibility profile when creating the context. Whether a compatibility profile is available depends on your OpenGL implementation. I suggest you find more modern example code.
It looks like the compatibility profile can be set through SFML's sf::ContextSettings::attributeFlags, or with GLFW window hints:
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR,4);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR,5);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE,GLFW_OPENGL_COMPAT_PROFILE);
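For reference, here is a minimal sketch of requesting a compatibility (non-core) context in SFML 2.x via attributeFlags; the window title and version numbers are illustrative, and on Mesa a compatibility context may only be offered up to 3.0:

#include <SFML/Graphics.hpp>

int main()
{
    sf::ContextSettings settings;
    settings.depthBits         = 24;
    settings.stencilBits       = 8;
    settings.antialiasingLevel = 4;
    settings.majorVersion      = 3;   // ask for something the driver can actually provide
    settings.minorVersion      = 0;
    settings.attributeFlags    = sf::ContextSettings::Default;   // compatibility, not Core

    sf::RenderWindow window(sf::VideoMode(800, 600), "menu", sf::Style::Default, settings);

    // Check what was actually created; the driver is allowed to substitute.
    sf::ContextSettings actual = window.getSettings();
    (void)actual;   // inspect actual.majorVersion, actual.minorVersion, actual.attributeFlags

    return 0;
}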
Thank you to @SurvivalMachine and @derhass for your help!
I eventually downgraded OpenGL all the way down to version 3.0 and GLSL down to #version 130 and got the menu screen working again. It loads everything except the shaders, but I suspect that is the result of me using more modern functions. I am giving up on older computers; this seems like too much of a sacrifice just for 2015 computers to work :D
Related
I have the following code to begin a frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Without the last line the program runs (but obviously does not blend); with it, it segfaults. GDB is not a lot of help, as it looks like the stack is corrupted, and after the segfault, running:
set $pc = *(void**)$rsp
set $rsp = $rsp+8
points to the closing brace of the function as the last frame.
I have a small suspicion that this is a bug in the driver, but I couldn't find a bug report on their tracker. The driver is fglrx-updates running on Ubuntu. glxinfo gives:
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon R9 200 Series
OpenGL core profile version string: 4.3.13399 Core Profile Context 15.201.1151
OpenGL core profile shading language version string: 4.40
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
Okay, I still don't really know why this fixes it, but the segfault was caused by me changing the order in which the object files were specified in the linking command. Specifying the object file that loads the OpenGL function pointers before the file that uses those functions makes everything work nicely.
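For illustration, a defensive-loading sketch (a hypothetical hand-rolled GLX loader, not the code from the question): failing loudly at load time when a pointer comes back null is much easier to debug than a segfault at the first call.

#include <GL/glx.h>
#include <cstdio>
#include <cstdlib>

typedef void (*PFN_BLENDFUNC)(GLenum sfactor, GLenum dfactor);
static PFN_BLENDFUNC my_glBlendFunc = nullptr;

// Call once, right after the context is created and made current,
// and before any translation unit uses the loaded pointers.
void loadGLFunctions()
{
    my_glBlendFunc = reinterpret_cast<PFN_BLENDFUNC>(
        glXGetProcAddress(reinterpret_cast<const GLubyte*>("glBlendFunc")));
    if (!my_glBlendFunc)
    {
        std::fprintf(stderr, "failed to load glBlendFunc\n");
        std::exit(EXIT_FAILURE);
    }
}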
For OpenGL ES the OES_EGL_image extension provides a function EGLImageTargetTexture2DOES to create a texture from an EGLImage. Is there an equivalent extension/function for desktop OpenGL (not ES)?
I think GL_OES_EGL_image should work with the Mesa drivers and desktop GL. glxinfo shows the extension as supported with both core and compatibility profiles. I did not see any checks for ES with a quick look at the implementation.
A grep through the Mesa provided GL headers shows no other occurrence of EGLImage, so GL_OES_EGL_image is probably your only choice with the Mesa drivers.
I am not sure though whether this behavior is specific to Mesa or other drivers also follow it.
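A rough sketch of the desktop usage, assuming an EGL-created context on Mesa that actually advertises GL_OES_EGL_image; the function pointer typedef is written out by hand to keep the example self-contained:

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GL/gl.h>

typedef void (*PFN_EGLImageTargetTexture2DOES)(GLenum target, void* image);

// Attach an existing EGLImage (created earlier with eglCreateImageKHR)
// to a 2D GL texture.
bool attachEGLImage(EGLImageKHR image, GLuint texture)
{
    PFN_EGLImageTargetTexture2DOES imageTargetTexture2D =
        reinterpret_cast<PFN_EGLImageTargetTexture2DOES>(
            eglGetProcAddress("glEGLImageTargetTexture2DOES"));
    if (!imageTargetTexture2D)
        return false;   // extension not available

    glBindTexture(GL_TEXTURE_2D, texture);
    imageTargetTexture2D(GL_TEXTURE_2D, image);
    return true;
}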
I am using Visual Studio 2013 with Nvidia Nsight 4.0. In my application I do a mix of different types of rendering but, for the purpose of testing the profiler, I did a simple rendering of a scene. I opened the graphics debugger and, when I open the GUI and press spacebar to capture the frame, I get this error:
Cannot enter frame debugger. Nsight only supports frame debugging for
D3D9, D3D10, D3D11, and OpenGL 4.2.
Reason: glEnd
I am using a GT 540M, and I checked my OpenGL version: it is 4.3.
If I then try to use the performance analysis tool and trace OpenGL (following the instructions), I always get some percentage of CPU frames and 0 GPU frames.
I have no idea what I am doing wrong. Is there any solution to this, or alternative ways to profile OpenGL?
Are you using immediate mode drawing? I.e. glBegin(...); glVertex*(); glEnd().
From the Nsight User Guide's Supported OpenGL Functions page:
NVIDIA® Nsight™ Visual Studio Edition 4.0 frame debugging supports the set of OpenGL operations, which are defined by the OpenGL 4.2 core profile. Note that it is not necessary to create a core profile context to make use of the frame debugger. An application that uses a compatibility profile context, but restricts itself to using the OpenGL 4.2 core subset, will also work. A few OpenGL 4.2 compatibility profile features, such as support for alpha testing and a default vertex array object, are also supported.
So, replace the immediate mode rendering with newer drawing functions like glDrawArrays and glDrawElements that use vertex array objects.
Better yet, create a core profile context to ensure you aren't using deprecated functionality.
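As a minimal sketch of the conversion (illustrative names, GL 3.3 core style, buffer setup done once at startup):

GLuint vao = 0, vbo = 0;

void setupTriangle()
{
    const GLfloat verts[] = {
        -0.5f, -0.5f,
         0.5f, -0.5f,
         0.0f,  0.5f,
    };
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);                          // layout(location = 0) in the vertex shader
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
}

void drawTriangle()
{
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);                      // replaces glBegin/glVertex/glEnd
}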
My advice: stay away from outdated tutorials online and read the latest edition of the Red book (OpenGL Programming Guide), which only covers modern OpenGL.
You can also try the more basic GPUView tool, which can be found in the Windows 8 SDK.
UPDATE:
As for why 0 GPU frames are retrieved: are you sure that your GPU is on the list of supported hardware? I had the same problem where Nsight was mostly working (I was able to profile other aspects) but 0 GPU frames were collected. I later realized that my card was not officially supported.
Nsight 4.5 RC1 is now available; it works with the CUDA SDK 7 RC and, among its features, now supports OpenGL 4.3!
I started writing programs in C (for now) using GLFW and OpenGL. The question I have is: how do I know which version of OpenGL my program will use? My laptop says that my video card has OpenGL 3.3. Typing "glxinfo | grep -i opengl" returns:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce 9600M GT/PCI/SSE2
OpenGL version string: 3.3.0 NVIDIA 285.05.09
OpenGL shading language version string: 3.30 NVIDIA via Cg compiler
OpenGL extensions:
So is OpenGL 3.3 automatically being used?
Just call glGetString(GL_VERSION) (once the context is initialized, of course) and print the result (which is actually the same thing glxinfo does, I suppose):
printf("%s\n", glGetString(GL_VERSION));
Your program should automatically use the highest version your hardware and driver support, which in your case seems to be 3.3. For creating a core-profile context for OpenGL 3+ (one where deprecated functionality has been completely removed) you have to take special measures, but since version 2.7 GLFW has means for doing this, using the glfwOpenWindowHint function. If you don't want to explicitly disallow deprecated functionality, you can just use the context given to you by GLFW's default context creation functions, which, as said, will support the highest possible version for your hardware and drivers.
Also keep in mind that to use OpenGL functionality above version 1.1 you need to retrieve the corresponding function pointers yourself, or use a library that handles this for you, like GLEW.
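For anyone reading this with a newer setup, here is the same check in GLFW 3 terms (a sketch; the answer above predates GLFW 3, where glfwOpenWindowHint became glfwWindowHint):

#include <GLFW/glfw3.h>
#include <cstdio>

int main()
{
    if (!glfwInit())
        return 1;

    // Request a 3.3 core context; without these hints you get the driver's default.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(640, 480, "version check", nullptr, nullptr);
    if (!window)
    {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    std::printf("%s\n", glGetString(GL_VERSION));   // e.g. "3.3.0 NVIDIA 285.05.09"

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}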
I have many OpenGL shaders. We try to use as much different hardware as possible to evaluate the portability of our product. One of our customers recently ran into some rendering issues: it seems that the target machine only provides shader model 2.0, while all our development/build/test machines (even the oldest ones) run version 4.0. Everything else (OpenGL version, GLSL version, ...) seems identical.
I didn't find a way to downgrade the shader model version, since it is automatically provided by the graphics card driver.
Is there a way to manually install or select the OpenGL/GLSL/shader model version in use, for development/test purposes?
NOTE: the main targets are Windows XP SP2 / 7 (32 & 64 bit), for both ATI and NVIDIA cards.
OpenGL does not have the concept of "shader models"; that's a Direct3D thing. It only has versions of GLSL: 1.10, 1.20, etc.
Every OpenGL version matches a specific GLSL version. GL 2.1 supports GLSL 1.20. GL 3.0 supports GLSL 1.30. For GL 3.3 and above, they stopped fooling around and just used the same version number, so GL 3.3 supports GLSL 3.30. So there's an odd version number gap between GLSL 1.50 (maps to GL 3.2) and GLSL 3.30.
Technically, OpenGL implementations are allowed to refuse to compile shader versions older than the one matching the current context version. As a practical matter, however, you can pretty much shove any GLSL shader into any OpenGL implementation, as long as the shader's version is less than or equal to the version the implementation supports. (I haven't tested this against Mac OS X Lion's implementation of GL 3.2 core.)
There is one exception: core contexts. If you try to feed a shader through a core OpenGL context that uses functionality removed from the core, it will complain.
There is no way to force OpenGL to provide you with a particular OpenGL version. You can ask it to, with wgl/glXCreateContextAttribs. But that is allowed to give you any version higher than the one you ask for, so long as that version is backwards compatible with what you asked for.
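In practice, the portable move is therefore to ask at run time what the driver actually gave you and select shader sources accordingly; here is a small sketch (a hypothetical helper, not from the question):

#include <GL/gl.h>
#include <GL/glext.h>
#include <cstdio>
#include <cstring>

// Returns a #version directive matching what the implementation reports.
// Call only with a current OpenGL context.
const char* pickGlslVersionDirective()
{
    const char* glsl = reinterpret_cast<const char*>(
        glGetString(GL_SHADING_LANGUAGE_VERSION));         // e.g. "1.20" or "4.40"
    std::printf("GL:   %s\nGLSL: %s\n",
                reinterpret_cast<const char*>(glGetString(GL_VERSION)), glsl);

    if (glsl && std::strncmp(glsl, "1.2", 3) == 0)         // GL 2.1-class hardware
        return "#version 120\n";
    return "#version 330\n";
}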