Trouble Creating a GLFW OpenGL Window in C++

Here is the current bit of code I'm working on:
int main() {
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
    GLFWwindow *window = glfwCreateWindow(WIDTH, HEIGHT, "OpenGL", NULL, NULL);
    if (nullptr == window) {
        std::cout << "Failed to create GLFW Window" << std::endl;
        glfwTerminate();
        return EXIT_FAILURE;
    }
At runtime, the window is not created and I get the failure message. I don't understand why window is a nullptr when I assigned it. Am I missing something?

According to the GLFW documentation, the value of the GLFW_OPENGL_PROFILE window hint must be GLFW_OPENGL_ANY_PROFILE if the requested OpenGL context version is less than 3.2 (GLFW performs this check platform-independently whenever glfwCreateWindow is called).
See: https://www.glfw.org/docs/3.3/window_guide.html#GLFW_OPENGL_PROFILE_hint
GLFW_OPENGL_PROFILE specifies which OpenGL profile to create the context for. Possible values are one of GLFW_OPENGL_CORE_PROFILE or GLFW_OPENGL_COMPAT_PROFILE, or GLFW_OPENGL_ANY_PROFILE to not request a specific profile. If requesting an OpenGL version below 3.2, GLFW_OPENGL_ANY_PROFILE must be used. If OpenGL ES is requested, this hint is ignored.
In particular the part: "If requesting an OpenGL version below 3.2, GLFW_OPENGL_ANY_PROFILE must be used."
You will get a GLFW error in your case (exactly this one, regardless of platform/OS), which you would see if you had set up a GLFW error handler function via glfwSetErrorCallback().

Related

Open GL version 2.1 instead of 4.1 in Mac OS 10.15

I'm new to Mac, so I'm not very familiar with the workarounds on this OS.
I wrote a simple Open GL program in Xcode and it ran without issues. However, when I checked the versions using the following code
cout<<glGetString(GL_VENDOR)<<endl;
cout<<glGetString(GL_RENDERER)<<endl;
cout<<glGetString(GL_VERSION)<<endl;
cout<<glGetString(GL_SHADING_LANGUAGE_VERSION)<<endl;
Initialization Code
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(400, 400);
glutCreateWindow("First Test");
initRendering();
glutDisplayFunc(drawScene);
glutKeyboardFunc(handleKeypress);
glutReshapeFunc(handleResize);
glutMainLoop();
I get the following output
ATI Technologies Inc.
AMD Radeon Pro 5300M OpenGL Engine
2.1 ATI-3.10.15
1.20
From forums elsewhere I have read that Mac OS 10.15 supports Open GL version 4.1 and the Graphics card here can certainly support higher versions too.
So my questions are as follows:
Why is it showing 2.1 on my machine?
How do I fix this? Is there code I can add to fix the issue, or does more software need to be installed?
Any direction would be great.
Thanks
GLUT is ancient and doesn't support common macOS features such as HiDPI or mouse scrolling. You probably want to look into using the GLFW library instead (see here for what you need to do for a 4.1 context).
However, if you really want to use GLUT, you need to add
glutInitContextVersion(4, 1);
glutInitContextProfile(GLUT_CORE_PROFILE);
after glutInit. (Note that these two functions come from freeglut; Apple's bundled GLUT does not provide them.)
Edit: this answer was posted before I knew the asker was using GLUT. I also recommend GLFW for modern OpenGL 4.1+.
I think you should define the version and create the context first, using a library like GLFW, and set the OpenGL profile you intend to use. Also use a library like GLEW or GLAD for GL extension management.
If you are using GLFW and GLEW, you can use the following code to define a version, create a context and a window, and then check the version again.
#include <iostream>
#include <GL/glew.h>
#include <GLFW/glfw3.h>

using namespace std;

int main()
{
    // Initialize GLFW
    glfwInit();

    // Define version and compatibility settings
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // required on macOS
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    // Create OpenGL window and context
    GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", NULL, NULL);

    // Check for window creation failure before using the window
    if (!window)
    {
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);

    // Initialize GLEW
    glewExperimental = GL_TRUE;
    glewInit();

    // Check the version actually obtained
    cout << glGetString(GL_VENDOR) << endl;
    cout << glGetString(GL_RENDERER) << endl;
    cout << glGetString(GL_VERSION) << endl;
    cout << glGetString(GL_SHADING_LANGUAGE_VERSION) << endl;

    // Event loop
    while (!glfwWindowShouldClose(window))
    {
        // Clear the screen to black
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    // Terminate GLFW
    glfwTerminate();
    return 0;
}
If you don't have GLFW and GLEW installed yet, you can check this tutorial on installing them for macOS: https://riptutorial.com/opengl/example/21105/setup-modern-opengl-4-1-on-macos--xcode--glfw-and-glew- or this one: https://giovanni.codes/opengl-setup-in-macos/
If it still does not work and still shows 2.1, go to "Energy Saver" in the system settings and deselect "Automatic graphics switching".

What does glGenBuffers indicate by returning zero?

My program creates many vertex buffers just after startup, as soon as vertex data is loaded over the network, and then occasionally deletes or creates vertex buffers during its hot loop. It works as expected almost always, but sometimes, on some machines, buffer creation in the hot loop produces zero names.
It doesn't look like an invalid state, because that would fire much earlier. Also, the documentation and spec are not clear enough about this type of error. Does it mean the implementation ran out of buffer names?
I also found this thread. The topic starter says that initializing names before passing them to glGenBuffers fixed his problem. Is it necessary to initialize those values?
Since it seems to work on some machines, glGenBuffers returning 0 could be caused by an improperly set up context. Here,
davek20 had the same problem with glGenBuffers. He solved it by fixing his incorrect context setup.
As stated on the GLFW 'Getting started' page, under 'Creating a window and context':
"If the required minimum version is not supported on the machine, context (and window) creation fails."
These machines of yours might have correct drivers but probably don't support all or some versions of OpenGL, as the documentation states.
If you are using GLFW_CONTEXT_VERSION_MAJOR and GLFW_CONTEXT_VERSION_MINOR, consider changing these. I also recommend checking window creation for returning NULL (0).
Example from GLFW's documentation page:
GLFWwindow* window;

if (!glfwInit())
    return -1;

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

window = glfwCreateWindow(960, 540, "OpenGL", NULL, NULL);
if (!window)
{
    glfwTerminate();
    return -1;
}

Testing glfwSetErrorCallback(...)

I'm developing a 2D graphics application using the LWJGL library, which uses GLFW.
Somewhere in my code I want to implement custom error handling using
glfwSetErrorCallback(...)
Now I want to trigger some kind of GLFW error to see if my approach works.
Are there any possible ways to do this?
Thanks to @httpdigest, here is a way to trigger a GLFW error:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 99);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 99);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwCreateWindow(width, height, "Test", NULL, NULL);
Setting only GLFW_CONTEXT_VERSION_MAJOR or only GLFW_CONTEXT_VERSION_MINOR to an invalid value is sufficient.

Commenting out glfwWindowHint() makes code functional (OpenGL with GLFW), why?

Sorry, I had a bit of trouble choosing a proper name for this question. I've run into a roadblock and I'd like to at least be informed as to its source.
I've been trying to learn OpenGL and I'm following this tutorial:
open.gl
Lines in question:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
While following through the tutorial I got an access violation exception in my code. Only after I commented out the following lines was I able to get the program working as the tutorial says it should have worked the first time. The tutorial states the following about the commented out lines:
"You'll immediately notice the first three lines of code that are only relevant for this library. It is specified that we require the OpenGL context to support OpenGL 3.2 at the least. The GLFW_OPENGL_PROFILE option specifies that we want a context that only supports the new core functionality."
My thought was that perhaps my drivers do not support OpenGL 3.2 or higher since when I commented out the lines in question my program ran fine. However, I downloaded a software called GPU caps viewer. When the software scanned my hardware it said that both my NVIDIA GeForce GT 540M and HD Graphics 3000 support OpenGL 4.3. So I'm very confused as to the source of my problem, what I am doing wrong, or is this a hardware issue?
Here are snapshots of GPU CAPS and NVIDIA PORTAL: 2 pictures on imgur
Doesn't Work:
#include "stdafx.h" //necessary headers in here

int _tmain(int argc, _TCHAR* argv[])
{
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
    GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", nullptr, nullptr); // Windowed
    glfwMakeContextCurrent(window);
    while (!glfwWindowShouldClose(window))
    {
        glfwSwapBuffers(window);
        glfwPollEvents();
        if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
            glfwSetWindowShouldClose(window, GL_TRUE);
    }
    glfwTerminate();
    return 0;
}
Works:
#include "stdafx.h" //necessary headers in here

int _tmain(int argc, _TCHAR* argv[])
{
    glfwInit();
    /**
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
    **/
    GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", nullptr, nullptr); // Windowed
    glfwMakeContextCurrent(window);
    while (!glfwWindowShouldClose(window))
    {
        glfwSwapBuffers(window);
        glfwPollEvents();
        if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
            glfwSetWindowShouldClose(window, GL_TRUE);
    }
    glfwTerminate();
    return 0;
}
Header File
#pragma once
#define GLEW_STATIC
#include <glew.h>
#include "targetver.h"
#include <stdio.h>
#include <tchar.h>
#include <iostream>
#include <thread>
#define GLFW_INCLUDE_GLU
#include <glfw3.h>
One last thing that came to mind, when I go to my NVIDIA control panel it shows Visual Studio 2013 only as using Intel HD 3000 graphics card. It doesn't give me an option to use my GeForce graphics with it, could this be the issue?
EditOne:
I tried running my Visual Studio 2013 through windows explorer by right clicking on it and choosing "Run with graphics processor..." but to no avail.
I appreciate you looking through my question to help me solve it, and if you had this question yourself I hope it helped you solve your issue.
The actual cause of the problem (as confirmed by the OP) was out-of-date and/or not fully featured drivers. This is yet again a reminder that, in case of OpenGL problems, the first course of action should always be to upgrade to the latest version of the graphics drivers, obtained directly from the GPU maker's download site.
Two reasons for this:
OEM drivers usually lag a significant number of releases behind the GPU vendors'.
Much more important: the drivers installed through Windows Update are stripped of proper OpenGL support by Microsoft, for reasons known only to Microsoft.
To make a long story short: your program is probably getting an OpenGL context for the Intel HD graphics, which lacks the requested OpenGL capabilities. Thus, when you set these GLFW window hints, window creation fails, and since your program does not check for this error it tries to use that window anyway, which of course causes trouble.
What should you do? First and foremost add error checks!
Here's an example how you can deal with the window creation failing:
GLFWwindow* window = glfwCreateWindow(
    800,
    600,
    "OpenGL",
    nullptr,
    nullptr);
if (!window) {
    print_some_error_message();
    exit(-1);
}
glfwMakeContextCurrent(window);
Next you want to force the system to use the NVidia card. Don't rely on that "launch through Explorer, run on NVidia" workaround: suppose you ship your program to someone else; do you want to put that burden on them? Or on yourself, whenever you want to debug?
Do it properly: tell the system that you want to run on the NVidia hardware in an Optimus configuration. Simply link the following into your program:
extern "C" {
    _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}
The best place to put this would be right beside your program's main function, i.e. in your case
#include "stdafx.h" //necessary headers in here
extern "C" {
    _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}
int _tmain(int argc, _TCHAR* argv[])
{
…

Load specific OpenGL version with SDL + GLEW

I am trying to load the functions of a specific OpenGL version, but it seems that GLEW loads all available functions regardless of what I specify prior to creating the GL context.
The reason I know it's not loading only the version I specified is that it returns function pointers to functions that are only available in later versions of OpenGL.
glBlendFunci is only available in OpenGL >= 4.0, whereas I want OpenGL 2.1, but glBlendFunci gets loaded regardless.
Here's what I'm trying to do:
int main(int argc, char** args)
{
    SDL_Init(SDL_INIT_EVERYTHING);
    window = SDL_CreateWindow("Game",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        width, height,
        SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GLContext glContext = SDL_GL_CreateContext(window);
    glewInit();
    std::cout << glBlendFunci << std::endl;
    //Initialize();
    SDL_GL_DeleteContext(glContext);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
P.S. This is just some prototyping code; I was just messing around with OpenGL.
The behavior you have observed is well within the spec (see WGL_ARB_create_context or GLX_ARB_create_context):
If a version less than or equal to 3.0 is requested, the context
returned may implement any of the following versions:
Any version no less than that requested and no greater than 3.0.
Version 3.1, if the GL_ARB_compatibility extension is also
implemented.
The compatibility profile of version 3.2 or greater.
What you get is a context that supports GL 2.1 completely, so any code written for GL 2.1 should run, but you may get much more than that: a compatibility profile of the highest GL version your vendor supports is not uncommon.