How to make Nvidia the default graphics card? - c++

March 27 2020: The question boils down to how to run applications on the Nvidia graphics card. If the Intel graphics card is enabled, GPU-Z reports OpenGL version 4.6 for both the Nvidia and Intel GPUs. But if I disable the Intel card to force the application to use Nvidia, the application crashes, and GPU-Z shows OpenGL version 1.1. So, how can I run the application with the Nvidia graphics card?
Notes: 1. I tried adding the application in Windows graphics settings to use the high-performance GPU, but the application still uses the Intel GPU.
2. I also tried adding the application in the Nvidia Control Panel, with no luck.
March 16 2020: I was executing the example1 code from NanoGUI on Windows 10. The program works when I connect my display using an HDMI cable (connected to the motherboard), but crashes without any error when using a DP cable (connected to the NVIDIA graphics card). I have Intel UHD Graphics 630 and an NVIDIA GeForce GT 730 in my system. The NVIDIA driver version is 26.21.14.4250.
I ran a simple OpenGL program in debug mode, and it crashes at the glfwInit() call.
The error is at
libEGL!eglDestroyImageKHR
Here is sample code that crashes with the DP port and works with the HDMI port.
// #include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <iostream>
void framebuffer_size_callback(GLFWwindow* window, int width, int height);
void processInput(GLFWwindow *window);
// settings
const unsigned int SCR_WIDTH = 800;
const unsigned int SCR_HEIGHT = 600;
int main()
{
// glfw: initialize and configure
// ------------------------------
glfwInit();
// glfw window creation
// --------------------
GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", NULL, NULL);
if (window == NULL)
{
std::cout << "Failed to create GLFW window" << std::endl;
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// render loop
// -----------
while (!glfwWindowShouldClose(window))
{
// glfw: swap buffers and poll IO events (keys pressed/released, mouse moved etc.)
// -------------------------------------------------------------------------------
glfwSwapBuffers(window);
glfwPollEvents();
}
// glfw: terminate, clearing all previously allocated GLFW resources.
// ------------------------------------------------------------------
glfwTerminate();
return 0;
}
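Since the crash happens inside glfwInit() without any reported error, one thing worth trying with the snippet above is to install a GLFW error callback before initialization and to check glfwInit()'s return value; GLFW explicitly allows glfwSetErrorCallback to be called before glfwInit. A minimal sketch (the callback name is my own); note it can only surface errors GLFW reports, not a hard crash inside the driver:
void glfw_error_callback(int code, const char* description)
{
std::cerr << "GLFW error " << code << ": " << description << std::endl;
}
// ... at the top of main(), before anything else:
glfwSetErrorCallback(glfw_error_callback); // allowed before glfwInit
if (!glfwInit())
{
std::cerr << "glfwInit failed" << std::endl;
return -1;
}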

The issue was solved by updating the Nvidia drivers to version 445.75 (Standard).
Also, I found that remote desktop software can have issues with Nvidia drivers; remote software programs sometimes install their own display drivers. More can be found here.

Related

glxBadDrawable when using bgfx

When using the following bgfx code:
#include "GLFW/glfw3.h"
#include <bgfx/bgfx.h>
int main() {
glfwInit();
GLFWwindow* window = glfwCreateWindow(800, 600, "Hello, bgfx!", NULL, NULL);
bgfx::Init bgfxInit;
bgfxInit.type = bgfx::RendererType::Count; // Automatically choose a renderer.
bgfxInit.resolution.width = 800;
bgfxInit.resolution.height = 600;
bgfxInit.resolution.reset = BGFX_RESET_VSYNC;
bgfx::init(bgfxInit);
}
A black OpenGL window pops up and appears fine for a second; however, a GLXBadDrawable error then appears. I do not know what the cause of this error is, and the other question has no answers and has not been active for some time now.
I believe this is not an issue with the code, but rather with my machine; however, I may be wrong.
I currently have a Lenovo T400 laptop, with a Core 2 Duo P9500. I have 2 built-in GPUs, a Mobile 4 Series Chipset integrated graphics chip, along with an ATI Mobility Radeon HD 3450/3470. I am also running Artix Linux with the 6.0.7-artix1-1 kernel. I also am using the glfw-x11 and glfw packages if that helps, along with the i3-gaps window manager.
I have also attempted to use SDL2 instead of GLFW, and the same issue occurs. However, with GLFW a black window shows up, while with SDL2 a transparent(?) window shows up instead. Searching the GitHub issues page also yielded no results.
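For what it's worth, a commonly suggested cause of GLXBadDrawable in this GLFW + bgfx combination is that bgfx is never told which native window to render into, so it ends up targeting an invalid drawable. A sketch of that hookup on X11, assuming a reasonably recent bgfx (the platformData fields and the GLFW native-access calls are the standard ones, but treat this as untested on your setup):
#define GLFW_EXPOSE_NATIVE_X11
#include "GLFW/glfw3.h"
#include "GLFW/glfw3native.h"
#include <bgfx/bgfx.h>
#include <cstdint>
int main() {
glfwInit();
// bgfx creates and owns the rendering context, so tell GLFW not to make one.
glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
GLFWwindow* window = glfwCreateWindow(800, 600, "Hello, bgfx!", NULL, NULL);
bgfx::Init bgfxInit;
bgfxInit.type = bgfx::RendererType::Count; // Automatically choose a renderer.
// Hand bgfx the native display and window handle.
bgfxInit.platformData.ndt = glfwGetX11Display();
bgfxInit.platformData.nwh = (void*)(uintptr_t)glfwGetX11Window(window);
bgfxInit.resolution.width = 800;
bgfxInit.resolution.height = 600;
bgfxInit.resolution.reset = BGFX_RESET_VSYNC;
bgfx::init(bgfxInit);
}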

OpenGL version 2.1 instead of 4.1 in macOS 10.15

I'm new to Mac, so I'm not very familiar with the workarounds for this OS.
I wrote a simple OpenGL program in Xcode and it ran without issues. However, I checked the versions using the following code:
cout<<glGetString(GL_VENDOR)<<endl;
cout<<glGetString(GL_RENDERER)<<endl;
cout<<glGetString(GL_VERSION)<<endl;
cout<<glGetString(GL_SHADING_LANGUAGE_VERSION)<<endl;
Initialization Code
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(400, 400);
glutCreateWindow("First Test");
initRendering();
glutDisplayFunc(drawScene);
glutKeyboardFunc(handleKeypress);
glutReshapeFunc(handleResize);
glutMainLoop();
I get the following output
ATI Technologies Inc.
AMD Radeon Pro 5300M OpenGL Engine
2.1 ATI-3.10.15
1.20
From forums elsewhere I have read that macOS 10.15 supports OpenGL version 4.1, and the graphics card here can certainly support higher versions too.
So my questions are as follows:
Why is it showing 2.1 on my machine?
How do I fix this? Is there code I can type in to fix the issue, or does more software need to be installed?
Any direction would be great.
Thanks
GLUT is ancient and doesn't support common macOS features such as HiDPI or mouse scrolling. You probably want to look into using the GLFW library instead (see here for what you need to do for a 4.1 context).
However, if you really want to use GLUT, you need to add
glutInitContextVersion(4, 1);
glutInitContextProfile(GLUT_CORE_PROFILE);
after glutInit.
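(A caveat: glutInitContextVersion and glutInitContextProfile come from freeglut; Apple's bundled GLUT does not provide them. With Apple's GLUT, the usual way to request a core profile is the GLUT_3_2_CORE_PROFILE display-mode flag, which on macOS yields the highest core version the driver supports, typically 4.1:)
// macOS GLUT only: request a 3.2+ core profile via the display mode.
glutInitDisplayMode(GLUT_3_2_CORE_PROFILE | GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);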
Edit: this answer was posted before I knew he was using GLUT; also, I recommend GLFW for modern OpenGL 4.1+.
I think you should define the version and create the context first, using a library like GLFW, and set the OpenGL profile you intend to use. Also use GLEW/GLAD for GL extension management.
If you are using GLFW and GLEW, you can add this code to define a version, create a context and a window, and then check the version again.
#include <iostream>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
using namespace std;
int main()
{
// Initialize GLFW
glfwInit();
// Define version and compatibility settings
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE,GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // for MAC ONLY
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
// Create OpenGL window and context
GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", NULL, NULL);
// Check for window creation failure before using the window
if (!window)
{
// Terminate GLFW
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// Initialize GLEW
glewExperimental = GL_TRUE;
glewInit();
// your code
cout<<glGetString(GL_VENDOR)<<endl;
cout<<glGetString(GL_RENDERER)<<endl;
cout<<glGetString(GL_VERSION)<<endl;
cout<<glGetString(GL_SHADING_LANGUAGE_VERSION)<<endl;
// Event loop
while(!glfwWindowShouldClose(window))
{
// Clear the screen to black
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glfwSwapBuffers(window);
glfwPollEvents();
}
// Terminate GLFW
glfwTerminate();
return 0;
}
If you don't have GLFW and GLEW installed already, you can check this tutorial for installing them on macOS: https://riptutorial.com/opengl/example/21105/setup-modern-opengl-4-1-on-macos--xcode--glfw-and-glew- or check this one: https://giovanni.codes/opengl-setup-in-macos/
In case it does not work and still shows 2.1, go to "Energy Saver" in the system settings and deselect "Automatic graphics switching".

Commenting out glfwWindowHint() makes code functional (OpenGL with GLFW), why?

Sorry, I had a bit of trouble picking a proper name for this question; however, I've run into a roadblock and I'd like to at least be informed as to its source.
I've been trying to learn OpenGL and I'm following this tutorial:
open.gl
Lines in question:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
While following the tutorial I got an access violation exception in my code. Only after I commented out the lines above was I able to get the program working as the tutorial says it should have worked the first time. The tutorial states the following about those lines:
"You'll immediately notice the first three lines of code that are only relevant for this library. It is specified that we require the OpenGL context to support OpenGL 3.2 at the least. The GLFW_OPENGL_PROFILE option specifies that we want a context that only supports the new core functionality."
My thought was that perhaps my drivers do not support OpenGL 3.2 or higher, since my program ran fine when I commented out the lines in question. However, I downloaded a program called GPU Caps Viewer. When it scanned my hardware, it said that both my NVIDIA GeForce GT 540M and HD Graphics 3000 support OpenGL 4.3. So I'm very confused as to the source of my problem: what am I doing wrong, or is this a hardware issue?
Here are snapshots of GPU CAPS and NVIDIA PORTAL: 2 pictures on imgur
Doesn't Work:
#include "stdafx.h" //necessary headers in here
int _tmain(int argc, _TCHAR* argv[])
{
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", nullptr, nullptr); // Windowed
glfwMakeContextCurrent(window);
while (!glfwWindowShouldClose(window))
{
glfwSwapBuffers(window);
glfwPollEvents();
if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
}
glfwTerminate();
return 0;
}
Works:
#include "stdafx.h" //necessary headers in here
int _tmain(int argc, _TCHAR* argv[])
{
glfwInit();
/**
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
**/
GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", nullptr, nullptr); // Windowed
glfwMakeContextCurrent(window);
while (!glfwWindowShouldClose(window))
{
glfwSwapBuffers(window);
glfwPollEvents();
if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
}
glfwTerminate();
return 0;
}
Header File
#pragma once
#define GLEW_STATIC
#include <glew.h>
#include "targetver.h"
#include <stdio.h>
#include <tchar.h>
#include <iostream>
#include <thread>
#define GLFW_INCLUDE_GLU
#include <glfw3.h>
One last thing that came to mind: when I go to my NVIDIA Control Panel, it shows Visual Studio 2013 as using only the Intel HD 3000 graphics card. It doesn't give me an option to use my GeForce graphics with it; could this be the issue?
Edit One:
I tried running Visual Studio 2013 through Windows Explorer by right-clicking on it and choosing "Run with graphics processor...", but to no avail.
I appreciate you looking through my question to help me solve it, and if you had this question yourself I hope it helped you solve your issue.
The actual cause of the problem (as confirmed by the OP) was out-of-date and/or not fully featured drivers. This is yet another reminder that in case of OpenGL problems, the first course of action should always be to upgrade to the latest version of the graphics drivers, obtained directly from the GPU maker's download site.
Two reasons for this:
OEM drivers usually lag a significant number of releases behind the GPU vendor's.
Much more important: the drivers installed through Windows Update are stripped of proper OpenGL support by Microsoft, for reasons known only to Microsoft.
To make a long story short: your program is probably getting an OpenGL context for the Intel HD graphics, which lacks the requested OpenGL capabilities. Thus, when you set these GLFW window hints, window creation fails, and since your program does not check for that error, it tries to use the (null) window, which of course causes trouble.
What should you do? First and foremost add error checks!
Here's an example how you can deal with the window creation failing:
GLFWwindow* window = glfwCreateWindow(
800,
600,
"OpenGL",
nullptr,
nullptr);
if( !window ) {
print_some_error_message();
exit(-1);
}
glfwMakeContextCurrent(window);
Next you want to force the system to use the NVidia card. Don't do that "launch through Explorer, run on NVidia" business: say you ship your program to someone else; do you want to put that burden on them? Or on yourself, when you want to debug it?
Do it properly: tell the system that you want to run on the NVidia hardware in an Optimus configuration. Simply link the following into your program:
extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}
The best place to put this would be right beside your program's main function, i.e. in your case
#include "stdafx.h" //necessary headers in here
extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}
int _tmain(int argc, _TCHAR* argv[])
{
…

Load specific OpenGL version with SDL + GLEW

I am trying to load the functions of a specific OpenGL version, but it seems that GLEW loads all of the functions regardless of what I specify prior to creation of the GL context.
The way I know it's not loading only the version I specified is that it returns a function pointer for a function that is only available in a later version of OpenGL.
glBlendFunci is only available in OpenGL >= 4.0, whereas I want OpenGL 2.1, but glBlendFunci gets loaded regardless.
Here's what I'm trying to do:
#include <GL/glew.h>
#include <SDL.h>
#include <iostream>
int main(int argc, char** args)
{
SDL_Init(SDL_INIT_EVERYTHING);
const int width = 800, height = 600; // example values; not given in the original
SDL_Window* window = SDL_CreateWindow("Game",
SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
width, height,
SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GLContext glContext = SDL_GL_CreateContext(window);
glewInit();
std::cout << glBlendFunci << std::endl;
//Initialize();
SDL_GL_DeleteContext(glContext);
SDL_DestroyWindow(window);
SDL_Quit();
return 0;
}
P.S. This is just some prototyping code; I was just messing around with OpenGL.
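As an aside, separate from the answer below: the SDL2 documentation recommends calling SDL_GL_SetAttribute before creating the window, so the attributes can influence the window's pixel format; the version attributes are then applied when SDL_GL_CreateContext runs. A reordered sketch of the fragment above:
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_Window* window = SDL_CreateWindow("Game",
SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
width, height,
SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
SDL_GLContext glContext = SDL_GL_CreateContext(window);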
The behavior you have observed is well within the spec (see WGL_ARB_create_context or GLX_ARB_create_context):
If a version less than or equal to 3.0 is requested, the context returned may implement any of the following versions:
- Any version no less than that requested and no greater than 3.0.
- Version 3.1, if the GL_ARB_compatibility extension is also implemented.
- The compatibility profile of version 3.2 or greater.
What you get is a context which supports GL 2.1 completely, so any code written for GL 2.1 should run. But you may get far more than that: a compatibility profile of the highest GL version your vendor supports is not uncommon.
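So rather than inferring what you may call from the version you requested, query what the context actually provides. With GLEW you can test the per-version booleans after glewInit(); a short sketch:
// After glewInit(): check what the created context really supports.
std::cout << "GL version: " << glGetString(GL_VERSION) << std::endl;
if (GLEW_VERSION_4_0 && glBlendFunci) {
// glBlendFunci is genuinely available here.
} else {
// Stick to the GL 2.1 feature set.
}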

Creating an OpenGL 3.2/3.x context in SDL 1.3

I'm facing a problem where SDL says it does not support OpenGL 3.x contexts. I am trying to follow this tutorial: Creating a Cross Platform OpenGL 3.2 Context in SDL (C / SDL). I am using GLEW in this case, but I couldn't get gl3.h to work with this either. This is the code I ended up with:
#include <glew.h>
#include <SDL.h>
int Testing::init()
{
if(SDL_Init(SDL_INIT_EVERYTHING) < 0)
{
DEBUGLINE("Error initializing SDL.");
printSDLError();
system("pause");
return 1; // Error
}
//Request OpenGL 3.2 context.
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
//set double buffer
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
//Create window
window = SDL_CreateWindow("OpenGL 3.2 test",
SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
600, 400, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
if(window == NULL) return 3; // Error
//Print errors to console if there are any
printSDLError(__LINE__);
//Set up OpenGL context.
glContext = SDL_GL_CreateContext(window);
printSDLError(__LINE__);
if(glContext == NULL)
{
DEBUGLINE("OpenGL context could not be created.");
system("pause");
return 4;
}
//Initialize glew
GLenum err = glewInit();
if(err != GLEW_OK)
{
DEBUGLINE("GLEW unable to be initialized: " << glewGetErrorString(err));
system("pause");
return 2;
}
return 0; // OK code, no error.
}
The only problem reported occurs when calling SDL_GL_CreateContext(window), where SDL reports "GL 3.x is not supported". However, both the tutorial and this sample pack (which I have not bothered to test with) report success in combining SDL 1.3 and OpenGL 3.2. I am aware that SDL 1.3 is in the middle of development, but I somewhat doubt that even unintentional support would be removed.
A context is still created, and GLEW initializes just fine. (I can't for the life of me figure out how to see the version of the context that was created, since it's supposed to be the core profile, and I don't know how to check that either. According to the tutorial, SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3) doesn't actually do anything, in which case I have no clue how to get the appropriate context created or how to change the default context.)
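(For reference, querying the created context's version looks something like the following; the GL_MAJOR_VERSION/GL_MINOR_VERSION queries require a 3.0+ context, while the version string works everywhere:)
// Print the version of the context that was actually created.
std::cout << glGetString(GL_VERSION) << std::endl;
GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major); // 3.0+ contexts only
glGetIntegerv(GL_MINOR_VERSION, &minor);
std::cout << major << "." << minor << std::endl;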
EDIT: After some testing thanks to the helpful function Nicol gave me, I have found that, regardless of the parameters I pass to SDL_GL_SetAttribute, the context is always version 1.1. However, putting in any version below 3.0 doesn't spit out an error saying it is not supported. So the problem is that the "core" version SDL sees is only 1.1.
For the record, I am using Visual C++ 2010 express, GLEW 1.7.0, and the latest SDL 1.3 revision. I am fairly new to using all three of these, and I had to manually build the SDL libraries for both 32 and 64 bit versions, so there's a lot that could go wrong. So far however, the 32 and 64 bit versions are doing the exact same thing.
EDIT: I am using an nVidia 360M GPU with the latest driver, which OpenGL Extension Viewer 4.04 reports to have full compatibility up to OpenGL 3.3.
Any help is appreciated.
UPDATE: I have managed to get SDL to stop telling me that it doesn't support 3.x contexts. The problem was that SDL_GL_SetAttribute must be called BEFORE SDL_Init:
//Request OpenGL 3.2 context.
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
//Initialize SDL
if(SDL_Init(SDL_INIT_EVERYTHING) < 0)
{
DEBUGLINE("Error initializing SDL.");
return 1; // Error
}
Unfortunately, GLEW still refuses to acknowledge anything higher than OpenGL 1.1 (only GLEW_VERSION_1_1 returns true), which still has me puzzled. glGetString(GL_VERSION) also reports 1.1.0. It seems that my program simply doesn't know of any higher versions, as if I don't have them installed at all.
Since I don't know if you have already found a solution, here is mine:
I struggled a lot with this stuff today and yesterday. Advanced GL functions couldn't be used, so I even debugged into opengl32.dll just to see that it really works and wraps the calls into the hardware-specific OpenGL DLL (nvoglnt.dll). So there must have been another cause. There were even tips on the internet to link opengl32.lib before all other libraries, because ChoosePixelFormat and some other functions get overridden by one another.
But that wasn't the cause either. My solution was to enable accelerated visuals:
// init SDL
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_HAPTIC | SDL_INIT_TIMER) < 0) {
fprintf(stderr, "Could not init SDL");
return 1;
}
// we must request our desired OpenGL version!
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
because the current SDL revision (Dec 15, 2011) checks for it in SDL_windowsopengl.c:
if (_this->gl_config.accelerated >= 0) {
*iAttr++ = WGL_ACCELERATION_ARB;
*iAttr++ = (_this->gl_config.accelerated ? WGL_FULL_ACCELERATION_ARB :
WGL_NO_ACCELERATION_ARB);
}
and this attribute is initialized to -1 if you did not define it on your own.
And: never set the version attributes before initializing SDL, because setting attributes requires the video backend to be initialized properly!
I hope this helps.
I followed this tutorial; everything works fine on Windows and Linux.
http://people.cs.uct.ac.za/~aflower/tutorials.html