Can't Start OpenGL Window - C++

I've got all the libraries properly installed as far as I can tell, but for some reason glfwCreateWindow winds up returning NULL. I'm on a Dell XPS 15 at the moment, so I'm wondering if this has to do with the fact that I'm probably running on the integrated graphics, since the program isn't demanding enough to spin up the 1050 Ti. I'm brand new to OpenGL in general, so I'm not certain my code is properly written; I'll post it here as well:
glewExperimental = true;
if (!glewInit())
{
    fprintf(stderr, "Failed to initialize GLEW!\n");
    return -1;
}

glfwWindowHint(GLFW_SAMPLES, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

GLFWwindow* window;
window = glfwCreateWindow(1920, 1080, "Test Window", NULL, NULL);
if (window == NULL)
{
    fprintf(stderr, "Failed to initialize the window.");
    std::cin.ignore();
    glfwTerminate();
    return -1;
}

glfwMakeContextCurrent(window);

glewExperimental = true;
if (glewInit() != GLEW_OK)
{
    fprintf(stderr, "Failed to initialize GLEW!");
    return -1;
}

std::cin.ignore();
std::cin.ignore();
I've just updated my NVIDIA drivers to the latest version, so I hope it's (probably) not that. Unfortunately, I just can't seem to get it to open a window.

You forgot to initialize the GLFW library. GLFW has to be initialized with glfwInit before it is used.
The GLEW library has to be initialized after a valid OpenGL context has been created and made current. See Initializing GLEW.
Change your code like this to solve the issue:
if (glfwInit() != GLFW_TRUE) // initialize GLFW
{
    // error handling
}

glfwWindowHint(GLFW_SAMPLES, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

GLFWwindow* window;
window = glfwCreateWindow(1920, 1080, "Test Window", NULL, NULL);
if (window == NULL)
{
    // error handling
}

glfwMakeContextCurrent(window);
// now the OpenGL context is valid and current

glewExperimental = true;
if (glewInit() != GLEW_OK) // initialize GLEW
{
    // error handling
}
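A quick sanity check (a minimal sketch, not part of the fix itself): once GLEW is initialized you can print the version and renderer of the context you actually got; GL_RENDERER also shows whether the Intel GPU or the GeForce 1050 Ti ended up being used.
// optional: report which context/GPU was actually created
printf("GL version:  %s\n", glGetString(GL_VERSION));
printf("GL renderer: %s\n", glGetString(GL_RENDERER));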

On Windows, an Optimus-enabled driver looks for an exported variable; that is, the application has to export it so that it is accessible to other modules. E.g.:
extern "C" {
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}
A value of 1 means the high-performance GPU is used; 0, or the absence of the export, means the low-power GPU is used.
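A minimal sketch of where this export might live (assuming it is compiled into the executable itself, not into a DLL):
// main.cpp - must be part of the .exe, not a DLL
#include <windows.h> // for DWORD

extern "C" {
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

int main()
{
    // ... create the GLFW window / OpenGL context as usual ...
    return 0;
}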
If you're on macOS or Linux, the problem might be somewhere else. macOS is picky about core profiles, and on Linux you might have forgotten to disable kernel mode setting and the default open-source driver.
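For example, a hint combination that macOS generally accepts (a minimal sketch; macOS caps out at a 4.1 core profile and requires the forward-compatible flag, so a 4.6 request like the one above will fail there) would be:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); // core profile required for 3.2+
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);           // required on macOS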

Related

Problem while creating a window in C++ with GLFW 3.3.8 | Visual Studio 2019

I just installed GLFW and was testing the little example project for creating a window, but it just doesn't work: when I create the window, the program exits with -1 because the window is not created. What can I do?
Edit: I fixed it on my own :) I just changed the compiler target to x86 and changed the code a bit.
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <iostream>

int main(void)
{
    /* Initialize GLFW Library */
    if (!glfwInit()) {
        std::cout << "ERROR: While initializing GLFW!" << std::endl;
        exit(-1);
    }

    /* Create a windowed mode window and its OpenGL context */
    auto* window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    if (!window)
    {
        std::cout << "ERROR: While creating Window Object!" << std::endl;
        glfwTerminate();
        exit(-1);
    }

    /* Make the window's context current */
    glfwMakeContextCurrent(window);

    /* Initialize GLEW Library */
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        fprintf(stderr, "ERROR: %s\n", glewGetErrorString(err));
        exit(-1);
    }
    fprintf(stdout, "Using GLEW %s\n", glewGetString(GLEW_VERSION));

    // TODO: Create and compile shaders here (vertex and fragment shaders)
    //       and finally draw something with modern OpenGL!

    /* Loop until the user closes the window */
    while (!glfwWindowShouldClose(window))
    {
        /* Render here */
        glClear(GL_COLOR_BUFFER_BIT);

        /* Swap front and back buffers */
        glfwSwapBuffers(window);

        /* Poll for and process events */
        glfwPollEvents();
    }

    glfwTerminate();
    exit(0);
}
You need to initialize GLFW before creating a window.
Call glfwInit() before glfwCreateWindow.
So if you get the same error, change the target to x86 in your compiler options, and don't create the window before glfwInit(), or else it will not work.

Where is the OpenGL context stored?

I am fairly new to OpenGL and have been using GLFW combined with GLEW to create and display OpenGL contexts. The following code snippet shows how I create a window and use it for OpenGL.
GLFWwindow* window;

if (!glfwInit())
{
    return -1;
}

window = glfwCreateWindow(1280, 720, "Hello OpenGL", NULL, NULL);
if (!window)
{
    glfwTerminate();
    return -1;
}

glfwMakeContextCurrent(window);

GLenum err = glewInit();
if (err != GLEW_OK)
{
    glfwTerminate();
    return -1;
}
How is glewInit able to fetch the window/context and use it for initialization without me having to pass any additional arguments to it?
I can only imagine that when we call the glfwMakeContextCurrent function it somehow stores the context somewhere within my process for later use, but no documentation shows this.
The current OpenGL context is a global (or more to the point, thread_local) "variable" of sorts. All OpenGL functions act on whatever context is active in the current thread at the moment.
This includes the OpenGL calls that GLEW makes.
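A minimal sketch that makes this visible (using GLFW's own query function; nothing GLEW-specific is assumed): glfwGetCurrentContext() returns whatever context the calling thread has current, and a freshly spawned thread has none until it calls glfwMakeContextCurrent itself.
#include <GLFW/glfw3.h>
#include <cstdio>
#include <thread>

int main()
{
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(640, 480, "ctx demo", NULL, NULL);
    glfwMakeContextCurrent(window);

    // this thread: the context we just made current
    std::printf("main thread current? %d\n", glfwGetCurrentContext() == window);    // 1

    // another thread: no context is current until it makes one current itself
    std::thread([] {
        std::printf("worker has a context? %d\n", glfwGetCurrentContext() != NULL); // 0
    }).join();

    glfwDestroyWindow(window);
    glfwTerminate();
}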

glew not initializing with SDL2

I've been trying to get GLEW 1.10 to play nicely with SDL 2.0.3, but GLEW won't initialize.
The problem I'm having is that GLEW 1.10 requires a function GLEWContext* glewGetContext().
I've tried to use the same solution used for GLEW 1.10 with GLFW3, where you use a struct to handle the window and GLEW context, but that method doesn't work with SDL2.
The two errors I'm receiving, which point to glewInit(), are:
C3861: 'glewGetContext': identifier not found
IntelliSense: identifier "glewGetContext" is undefined
code:
// Create window
_screen = SDL_CreateWindow("Window", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                           800, 600, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);

/* Create Context */
_mainContext = SDL_GL_CreateContext(_screen);

/* swap synchronized */
SDL_GL_SetSwapInterval(0);

// Initialize GLEW 1.10
glewExperimental = GL_TRUE;
GLenum glewError = glewInit();   // <------------- error
if (glewError != GLEW_OK)
    printf("Error with GLEW. SDL Error: %s\n", SDL_GetError());

Proper shutdown for SDL with OpenGL

I am using SDL 1.2 in a minimal fashion to create a cross platform OpenGL context (this is on Win7 64bit) in C++. I also use glew to have my context support OpenGL 4.2 (which my driver supports).
Things work correctly at run-time, but lately I have been noticing a random crash on shutdown, when SDL_Quit is called.
What is the proper sequence for SDL (1.2) with OpenGL start up and shutdown?
Here is what I do currently:
int MyObj::Initialize(int width, int height, bool vsync, bool fullscreen)
{
    if (SDL_Init(SDL_INIT_EVERYTHING) < 0)
    {
        printf("SDL_Init failed: %s\n", SDL_GetError());
        return 0;
    }

    SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
    SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 24);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
    SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, vsync ? 1 : 0);

    if ((m_SurfDisplay = SDL_SetVideoMode(width, height, 24,
                                          SDL_HWSURFACE |
                                          SDL_GL_DOUBLEBUFFER |
                                          (fullscreen ? SDL_FULLSCREEN : 0) |
                                          SDL_OPENGL)) == NULL)
    {
        printf("SDL_SetVideoMode failed: %s\n", SDL_GetError());
        return 0;
    }

    GLenum err = glewInit();
    if (GLEW_OK != err)
        return 0;

    m_Running = true;
    return 1;
}

int MyObj::Shutdown()
{
    SDL_FreeSurface(m_SurfDisplay);
    SDL_Quit();
    return 1;
}
In between the init and shutdown calls I create a number of GL resources (e.g. textures, VBOs, VAOs, shaders, etc.) and render my scene each frame, with SDL_GL_SwapBuffers() at the end of each frame (pretty typical). Like so:
int MyObject::Run()
{
    SDL_Event Event;

    while (m_Running)
    {
        while (SDL_PollEvent(&Event))
        { OnEvent(&Event); }  // this eventually causes m_Running to be set to false on "esc"

        ProcessFrame();
        SDL_GL_SwapBuffers();
    }

    return 1;
}
Within ~MyObject, MyObject::Shutdown() is called, and just recently SDL_Quit has started crashing the app there. I have also tried calling Shutdown outside of the destructor, after my render loop returns, to the same effect.
One thing that I do not do (and didn't think I needed to do) is call the glDelete* functions for all my allocated GL resources before calling Shutdown; I thought they would automatically be cleaned up by the destruction of the context, which I assumed was happening during SDL_FreeSurface or SDL_Quit(). I do of course call the glDelete* functions in the destructors of their wrapping objects, which eventually get called towards the tail end of ~MyObject, since the wrapper objects are part of other objects that are members of MyObject.
As an experiment I tried forcing all the appropriate glDelete* calls to occur before Shutdown(), and the crash never seems to occur. The funny thing is, I did not need to do this a week ago, and really nothing has changed according to Git (I may be wrong though).
Is it really necessary to make sure all GL resources are freed before calling MyObject::Shutdown with SDL? Does it look like I might be doing something else wrong?
m_SurfDisplay = SDL_SetVideoMode(...)
...
SDL_FreeSurface(m_SurfDisplay);
^^^^^^^^^^^^^ naughty naughty!
From the SDL_SetVideoMode() documentation:
"The returned surface is freed by SDL_Quit and must not be freed by the caller."
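So the shutdown path should simply not free that surface; a minimal sketch of the corrected Shutdown (assuming nothing else references m_SurfDisplay):
int MyObj::Shutdown()
{
    // the surface returned by SDL_SetVideoMode is owned by SDL;
    // SDL_Quit frees it, so do not call SDL_FreeSurface on it
    SDL_Quit();
    m_SurfDisplay = NULL;
    return 1;
}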

Creating parallel offscreen OpenGL contexts on Windows

I am trying to set up parallel multi-GPU offscreen rendering contexts. I am following the "OpenGL Insights" book, chapter 27, "Multi-GPU Rendering on NVIDIA Quadro". I also looked into the wglCreateAffinityDCNV docs but still can't pin it down.
My machine has 2 NVIDIA Quadro 4000 cards (no SLI), running on Windows 7 64-bit.
My workflow goes like this:
Create default window context using GLFW.
Map the GPU devices.
Destroy the default GLFW context.
Create a new GL context for each one of the devices (currently trying only one).
Set up a boost thread for each context and make it current in that thread.
Run the rendering procedures on each thread separately (no resource sharing).
Everything is created without errors and runs, but once I try to read pixels from an offscreen FBO I am getting a null pointer here:
GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
Also, glGetError returns an unknown error.
I thought maybe the multi-threading was the problem, but the same setup gives an identical result when running on a single thread.
So I believe it is related to the context creation.
Here is how I do it:
//// Creating the default window with GLFW here.
.....
.....
Creating the offscreen contexts:
PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, // Flags
    PFD_TYPE_RGBA,      // The kind of framebuffer. RGBA or palette.
    24,                 // Color depth of the framebuffer.
    0, 0, 0, 0, 0, 0,
    0,
    0,
    0,
    0, 0, 0, 0,
    24,                 // Number of bits for the depth buffer
    8,                  // Number of bits for the stencil buffer
    0,                  // Number of Aux buffers in the framebuffer.
    PFD_MAIN_PLANE,
    0,
    0, 0, 0
};
void glMultiContext::renderingContext::createGPUContext(GPUEnum gpuIndex)
{
    int pf;
    HGPUNV hGPU[MAX_GPU];
    HGPUNV GpuMask[MAX_GPU];
    UINT displayDeviceIdx;
    GPU_DEVICE gpuDevice;
    bool bDisplay, bPrimary;

    // Get a list of the first MAX_GPU GPUs in the system
    if ((gpuIndex < MAX_GPU) && wglEnumGpusNV(gpuIndex, &hGPU[gpuIndex])) {
        printf("Device# %d:\n", gpuIndex);

        // Now get the detailed information about this device:
        // how many displays it's attached to
        displayDeviceIdx = 0;
        if (wglEnumGpuDevicesNV(hGPU[gpuIndex], displayDeviceIdx, &gpuDevice))
        {
            bPrimary |= (gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE) != 0;
            printf(" Display# %d:\n", displayDeviceIdx);
            printf(" Name: %s\n", gpuDevice.DeviceName);
            printf(" String: %s\n", gpuDevice.DeviceString);
            if (gpuDevice.Flags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
            {
                printf(" Attached to the desktop: LEFT=%d, RIGHT=%d, TOP=%d, BOTTOM=%d\n",
                       gpuDevice.rcVirtualScreen.left, gpuDevice.rcVirtualScreen.right,
                       gpuDevice.rcVirtualScreen.top, gpuDevice.rcVirtualScreen.bottom);
            }
            else
            {
                printf(" Not attached to the desktop\n");
            }

            // See if it's the primary GPU
            if (gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE)
            {
                printf(" This is the PRIMARY Display Device\n");
            }
        }

        ///======================= CREATE a CONTEXT HERE
        GpuMask[0] = hGPU[gpuIndex];
        GpuMask[1] = NULL;

        _affDC = wglCreateAffinityDCNV(GpuMask);
        if (!_affDC)
        {
            printf("wglCreateAffinityDCNV failed");
        }
    }
    printf("GPU context created");
}
glMultiContext::renderingContext *
glMultiContext::createRenderingContext(GPUEnum gpuIndex)
{
    glMultiContext::renderingContext *rc;
    rc = new renderingContext(gpuIndex);

    _pixelFormat = ChoosePixelFormat(rc->_affDC, &pfd);
    if (_pixelFormat == 0)
    {
        printf("failed to choose pixel format");
        return false;
    }

    DescribePixelFormat(rc->_affDC, _pixelFormat, sizeof(pfd), &pfd);
    if (SetPixelFormat(rc->_affDC, _pixelFormat, &pfd) == FALSE)
    {
        printf("failed to set pixel format");
        return false;
    }

    rc->_affRC = wglCreateContext(rc->_affDC);
    if (rc->_affRC == 0)
    {
        printf("failed to create gl render context");
        return false;
    }

    return rc;
}
// Call at the end to make it current:
bool glMultiContext::makeCurrent(renderingContext *rc)
{
    if (!wglMakeCurrent(rc->_affDC, rc->_affRC))
    {
        printf("failed to make context current");
        return false;
    }
    return true;
}
//// init OpenGL objects and rendering here :
..........
............
As I said, I am getting no errors at any stage of device and context creation.
What am I doing wrong?
UPDATE:
Well, it seems like I figured out the bug. I call glfwTerminate() after calling wglMakeCurrent(), and it seems that terminating GLFW also makes the new context "un-current". Though it is weird, as OpenGL commands keep getting executed. So it works in a single thread.
But now, if I spawn another thread using boost threads, I am getting the initial error. Here is my thread class:
GPUThread::GPUThread(void)
{
    _thread = NULL;
    _mustStop = false;
    _frame = 0;

    _rc = glMultiContext::getInstance().createRenderingContext(GPU1);
    assert(_rc);

    glfwTerminate(); // terminate the initial window and context

    if (!glMultiContext::getInstance().makeCurrent(_rc)) {
        printf("failed to make current!!!");
    }

    // init engine here (GLEW was already initiated)
    engine = new Engine(800, 600, 1);
}

void GPUThread::Start()
{
    printf("threaded view setup ok");

    /// init thread here:
    _thread = new boost::thread(boost::ref(*this));
    _thread->join();
}

void GPUThread::Stop()
{
    // Signal the thread to stop (thread-safe)
    _mustStopMutex.lock();
    _mustStop = true;
    _mustStopMutex.unlock();

    // Wait for the thread to finish.
    if (_thread != NULL) _thread->join();
}

// Thread function
void GPUThread::operator () ()
{
    bool mustStop;

    do
    {
        // Display the next animation frame
        DisplayNextFrame();

        _mustStopMutex.lock();
        mustStop = _mustStop;
        _mustStopMutex.unlock();
    } while (mustStop == false);
}

void GPUThread::DisplayNextFrame()
{
    engine->Render(); // renders frame
    if (_frame == 101) {
        _mustStop = true;
    }
}

GPUThread::~GPUThread(void)
{
    delete _view;

    if (_rc != 0)
    {
        glMultiContext::getInstance().deleteRenderingContext(_rc);
        _rc = 0;
    }

    if (_thread != NULL) delete _thread;
}
Finally, I solved the issues myself. The first problem was that I called glfwTerminate after I had made another device context current; that probably unbound the new context as well.
The second problem was my "noobiness" with boost threads. I failed to initialize all the rendering-related objects in the worker thread, because I called the rc and object init procedures before starting the thread, as can be seen in the example above.
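A rough sketch of the corrected ordering described above (a reconstruction reusing the names from the question, not the exact final code): glfwTerminate happens before any new context is made current, and the context creation, makeCurrent and engine setup all move into the thread function so they run on the thread that actually renders.
// constructor: only tear down the initial GLFW window/context here,
// before any new context is made current anywhere
GPUThread::GPUThread(void)
    : _thread(NULL), _mustStop(false), _frame(0), _rc(NULL), engine(NULL)
{
    glfwTerminate();
}

void GPUThread::Start()
{
    _thread = new boost::thread(boost::ref(*this));
}

// Thread function: create the context, make it current and init the
// engine on the thread that will do the rendering
void GPUThread::operator () ()
{
    _rc = glMultiContext::getInstance().createRenderingContext(GPU1);
    assert(_rc);

    if (!glMultiContext::getInstance().makeCurrent(_rc)) {
        printf("failed to make current!!!");
        return;
    }

    engine = new Engine(800, 600, 1); // rendering objects now live on this thread

    bool mustStop = false;
    do
    {
        DisplayNextFrame();
        _mustStopMutex.lock();
        mustStop = _mustStop;
        _mustStopMutex.unlock();
    } while (!mustStop);
}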