I am using Ubuntu 12.04 and have installed OpenGL 4.
I also have a CUDA-enabled NVIDIA graphics card. Note that I have been doing parallel computation with CUDA on this PC, and that works.
[eeuser#roadrunner sample_opengl]$ glxinfo | grep gl
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
GL_ARB_texture_query_lod, GL_ARB_texture_rectangle, GL_ARB_texture_rg,
GL_NV_texture_multisample, GL_NV_texture_rectangle, GL_NV_texture_shader,
I cannot get a simple program to work. Can anyone give a sample program that will work on my PC?
Here is the code I am using:
#include <GL/glew.h>    // include GLEW and new version of GL on Windows
#include <GLFW/glfw3.h> // GLFW helper library
#include <stdio.h>

int main () {
    // start GL context and O/S window using the GLFW helper library
    if (!glfwInit ()) {
        fprintf (stderr, "ERROR: could not start GLFW3\n");
        return 1;
    }

    // uncomment these lines if on Apple OS X
    /*glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);*/

    GLFWwindow* window = glfwCreateWindow (640, 480, "Hello Triangle", NULL, NULL);
    if (!window) {
        fprintf (stderr, "ERROR: could not open window with GLFW3\n");
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent (window);

    // start GLEW extension handler
    glewExperimental = GL_TRUE;
    glewInit ();

    // get version info
    const GLubyte* renderer = glGetString (GL_RENDERER); // get renderer string
    const GLubyte* version = glGetString (GL_VERSION);   // version as a string
    printf ("Renderer: %s\n", renderer);
    printf ("OpenGL version supported %s\n", version);

    // tell GL to only draw onto a pixel if the shape is closer to the viewer
    glEnable (GL_DEPTH_TEST); // enable depth-testing
    glDepthFunc (GL_LESS);    // depth-testing interprets a smaller value as "closer"

    /* OTHER STUFF GOES HERE NEXT */

    // close GL context and any other GLFW resources
    glfwTerminate();
    return 0;
}
I tried compiling with:
g++ main2.cpp -lglut -lGL -lGLEW -lGLU
I get an error:
main2.cpp:2:47: fatal error: GLFW/glfw3.h: No such file or directory
compilation terminated.
I am now wondering: what is this GLFW, and more precisely, how do I install it? Following @eapert's answer,
I installed libglfw with:
sudo apt-get install libglfw-dev
I still get the following errors:
main2.cpp: In function ‘int main()’:
main2.cpp:18:2: error: ‘GLFWwindow’ was not declared in this scope
main2.cpp:18:14: error: ‘window’ was not declared in this scope
main2.cpp:18:79: error: ‘glfwCreateWindow’ was not declared in this scope
main2.cpp:24:32: error: ‘glfwMakeContextCurrent’ was not declared in this scope
Install the "dev" package for GLFW: sudo apt-get install libglfw3-dev
Warning: the version you get from the package manager may be out of date.
You can follow the instructions here or here.
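Once the dev package is installed, a minimal check like the following confirms that the header and library can be found (this is just a sketch; the file name and compile line are examples, and it assumes the GLFW 3 package):
// glfw_check.cpp -- compile with: g++ glfw_check.cpp -lglfw
#include <GLFW/glfw3.h>
#include <stdio.h>

int main() {
    // glfwGetVersionString() may be called before glfwInit() and
    // reports the compile-time version of the linked GLFW library.
    printf("GLFW %s\n", glfwGetVersionString());
    return 0;
}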
I tried compiling with:
g++ main2.cpp -lglut -lGL -lGLEW -lGLU
Why are you linking GLUT when you want to use GLFW? Also, you don't need GLU for this program. Try this:
g++ main2.cpp -lGL -lGLEW -lglfw
WARNING: The question was changed quite significantly since this answer was written.
"I tried a few tutorials but I cannot get a simple program to work" I assume, given what you said earlier, your CUDA programs work on Windows, just not on Ubuntu?
1) Try installing a newer Ubuntu version first (if you have that option on the PC). 12.04 is fairly old, and you probably shouldn't be using it unless you have a good reason (e.g. it's a company PC and upgrading would violate policy, something along those lines).
2) Try installing the proprietary NVIDIA drivers for your card (this should also give you the NVIDIA OpenGL implementation); you probably have Mesa installed. Current Mesa versions have at least some GPGPU support, but I'm not sure they support CUDA. If you aren't particularly fond of CUDA, you can also try OpenCL; you might have better chances with that, although there's still a good chance you'll need the proprietary NVIDIA drivers.
GLFW is a helper library for creating windows, setting up OpenGL in them, providing access to input functions, and so on. It's similar to GLUT - which you seem to be familiar with - in that regard, but a much more modern alternative. You will of course need to install it to use it. You should be able to install it through Ubuntu's package manager as usual; try apt-cache search glfw, which should help you find the exact package name. If there are multiple "lib" versions, install the one ending in -dev.
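For completeness, here is a minimal sketch of a draw loop that could go where the question's code says /* OTHER STUFF GOES HERE NEXT */; it only clears the screen each frame and exits on Escape:
// 'window' is the GLFWwindow* created earlier in the question's code
while (!glfwWindowShouldClose (window)) {
    glClearColor (0.2f, 0.2f, 0.2f, 1.0f);               // dark grey background
    glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear colour and depth
    if (glfwGetKey (window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
        glfwSetWindowShouldClose (window, GL_TRUE);
    glfwSwapBuffers (window); // display the freshly drawn frame
    glfwPollEvents ();        // process window and input events
}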
Related
I am trying to run an OpenGL program on WSL2, but I get the following error when running the executable:
GLFW error 65543: GLX: Failed to create context: GLXBadFBConfig
Unable to create GLFW window
This is the code I am using to create a window:
....
GLFWwindow* initialize() {
    int glfwInitRes = glfwInit();
    if (!glfwInitRes) {
        fprintf(stderr, "Unable to initialize GLFW\n");
        return nullptr;
    }

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

    GLFWwindow* window = glfwCreateWindow(1280, 720, "InitGL", nullptr, nullptr);
    if (!window) {
        fprintf(stderr, "Unable to create GLFW window\n");
        glfwTerminate();
        return nullptr;
    }

    glfwMakeContextCurrent(window);
    ....
    return window;
}
....
GLFWwindow* window = initialize();
I am using VcxSrv as my X server.
The following is from the output of glxinfo:
direct rendering: No (LIBGL_ALWAYS_INDIRECT set)
server glx vendor string: SGI
server glx version string: 1.4
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
GLX version: 1.4
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce 940MX/PCIe/SSE2
OpenGL version string: 1.4 (4.6.0 NVIDIA 457.30)
The following fix worked for me.
As @dratenik mentioned above, the problem comes from glxinfo reporting direct rendering: No. With indirect rendering, GLX only supports OpenGL up to 1.4, so the requested 3.2 core-profile context cannot be created. To solve this, do the following:
In your bashrc/zshrc file, add the following:
export LIBGL_ALWAYS_INDIRECT=0
Or simply remove the export LIBGL_ALWAYS_INDIRECT=1 line from your bashrc/zshrc file if you added it earlier.
Then start a new instance of VcxSrv, unselect the Native opengl box on the Extra Settings page, and select the Disable access control box.
After doing this, direct rendering should be enabled, and running glxinfo should show:
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
...
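As an aside, a GLFW error callback makes failures like error 65543 easier to diagnose from inside the program; here is a minimal sketch (the callback name is arbitrary):
#include <GLFW/glfw3.h>
#include <stdio.h>

// Print every GLFW error (numeric code plus human-readable text) to stderr.
static void glfw_error_callback(int error, const char* description) {
    fprintf(stderr, "GLFW error %d: %s\n", error, description);
}

int main() {
    glfwSetErrorCallback(glfw_error_callback); // may be set before glfwInit()
    if (!glfwInit())
        return 1; // the callback has already printed the reason
    // ... window hints and glfwCreateWindow() as in the question ...
    glfwTerminate();
    return 0;
}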
I am trying to run one of the simplest OpenGL 3.3 programs one could ever run, but it won't run successfully. The program always exits with a negative integer.
Here is how I got into this situation. I did nothing on my own but follow this guide: LearnOpenGL - Creating a window.
I downloaded the latest source files of GLFW, generated the GLFW project files for Visual Studio 2019 with the CMake GUI (I am using the free Community Edition), and compiled them to get the glfw3.lib library file. No errors whatsoever in this process; CMake showed this was a 64-bit build.
I went to the GLAD web service, set Language = C/C++, Specification = OpenGL, API/GL = Version 3.3, everything else = None, and Profile = Core. The website then gave me the GLAD files (.h and .c).
Then I created a new empty C++ project, added the locations of the header files (glfw3.h, glad.h) and of the GLFW library file (glfw3.lib) in the project's VC++ Directories properties, and added glfw3.lib and opengl32.lib under Linker -> Input.
I added the glad.c file to the project as suggested and compiled the new OpenGL project. Everything works perfectly:
there is no compilation error and no linking error.
Important note: when I build the program and run it for the first time, I can see the OpenGL window open, but within a second it closes automatically, without any keyboard or mouse interaction, and I get a negative integer return value in the console window. If I keep running the program again and again, I don't see the window again unless I rebuild and run once more.
When I debug, it raises the following exception:
Exception Unhandled
Unhandled exception at 0x0000000010002203 (EZFRD64.dll) in opengl1.exe: 0xC0000005: Access violation reading location 0x00000000731A0090.
What am I doing wrong? Where did I go wrong?
The following is my system configuration:
CPU: Intel Xeon E3-1246 v3 (this is Intel's 4th generation/Haswell architecture),
GPU: integrated Intel HD P4600/P4700 (basically the same as the Intel HD 4600 that the 4th-gen i5s and i7s have)
The latest graphics driver (driver date under Device Manager: 21-Jan-2020) is installed.
"OpenGL Extension Viewer" is showing the following core feature support:
OpenGL 3.0: 100% support.
OpenGL 3.1: 100% support.
OpenGL 3.2: 100% support.
OpenGL 3.3: 100% support.
OpenGL 4.0: 100% support.
OpenGL 4.1: 100% support.
OpenGL 4.2: 100% support.
OpenGL 4.3: 100% support.
OpenGL 4.4: 80% support.
OpenGL 4.5: 18% support.
OpenGL 4.6: 9% support.
OpenGL ARB 2015: 8% support.
The following is the code I am trying to run:
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <iostream>

void framebuffer_size_callback(GLFWwindow* window, int width, int height);
void processInput(GLFWwindow* window);

const unsigned int SCR_WIDTH = 800;
const unsigned int SCR_HEIGHT = 600;

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", NULL, NULL);
    if (window == NULL)
    {
        std::cout << "Failed to create GLFW window" << std::endl;
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);
    glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);

    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
    {
        std::cout << "Failed to initialize GLAD" << std::endl;
        return -1;
    }

    while (!glfwWindowShouldClose(window))
    {
        glfwSwapBuffers(window);
        processInput(window);
        glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}

void processInput(GLFWwindow* window)
{
    if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
        glfwSetWindowShouldClose(window, true);
}

void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
    glViewport(0, 0, width, height);
}
RE: that mysterious EZFRD64.dll, a post on Reddit:
According to Google, the "EZFRD64.dll" mentioned there is a driver for some generic/off-brand "USB Vibration Controller" and appears to be known to cause issues, at least on Windows 10.
See 1 2 3 and many more posts just on the first page of results for that dll.
Janky code running in/near the kernel can cause problems, film at 11 :)
I am running Arch Linux with Mesa 10 on an HP laptop with an Intel HD 3000 GPU. (There is also an ATI card but I shut it off at boot.) I am trying to run OpenGL code with the core profile. OpenGL 3.1 and GLSL 1.4 should be supported according to glxinfo:
-> % glxinfo | grep version
OpenGL core profile version string: 3.1 (Core Profile) Mesa 10.0.1
OpenGL core profile shading language version string: 1.40
OpenGL version string: 3.0 Mesa 10.0.1
OpenGL shading language version string: 1.3
However, when I compile a GLFW program, try to force the core profile, and ask for the OpenGL version like so:
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <cstdlib>  // std::exit
#include <iostream> // std::cout

int main(void)
{
    // Use OpenGL 3.1 core profile
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
    glfwWindowHint(GLFW_CONTEXT_REVISION, 0);

    // Create opengl context
    int window_width = 1024;
    int window_height = 768;
    GLFWwindow* window = initialize_glfw(window_width, window_height);
    if (!window)
    {
        glfwTerminate();
        std::exit(EXIT_FAILURE);
    }

    // Display OpenGL version
    int major, minor, rev, client, forward, profile;
    glfwGetVersion(&major, &minor, &rev);
    std::cout << "OpenGL - " << major << "." << minor << "." << rev << std::endl;
}
as well as compile shaders with GLSL #version 140, this is the printed output:
-> % ./main
OpenGL - 3.0.3
Shader compilation failed with this message:
0:1(10): error: GLSL 1.40 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES, and 3.00 ES
So, it seems like OpenGL 3.1 and GLSL 1.4 should be supported, but they are not being used in my GLFW program. Can anyone tell me what might be wrong?
After re-reading the documentation, there seem to have been two problems. One, as elmindreda pointed out, is that glfwInit resets all window hints to their defaults, so hints set before calling it are lost; glfwInit must be called first.
Second, I am using OpenGL 3.1, and the GLFW docs say "If requesting an OpenGL version below 3.2, GLFW_OPENGL_ANY_PROFILE must be used." I was trying to use GLFW_OPENGL_CORE_PROFILE.
You need to initialize GLFW before calling other functions. You also need to make the OpenGL context current before calling functions on it.
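Putting both fixes together, the start of the program would look something like this (a sketch only; note that glfwInit comes before any window hints, and no profile hint is set, since below 3.2 the profile must remain GLFW_OPENGL_ANY_PROFILE):
#include <GLFW/glfw3.h> // pulls in the system GL headers for glGetString
#include <cstdlib>
#include <iostream>

int main()
{
    // Initialize GLFW first; glfwInit resets all window hints to defaults.
    if (!glfwInit())
        std::exit(EXIT_FAILURE);

    // Request OpenGL 3.1; leave the profile at GLFW_OPENGL_ANY_PROFILE.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);

    GLFWwindow* window = glfwCreateWindow(1024, 768, "Version test", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        std::exit(EXIT_FAILURE);
    }

    // The context must be current before any GL call, including glGetString.
    glfwMakeContextCurrent(window);
    std::cout << "GL version: " << glGetString(GL_VERSION) << std::endl;

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}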
I want to use CGL or NSOpenGL to create an OpenGL window without using Xcode (compiling from the command line with gcc). I am developing something cross-platform, and I'd like to keep the code divergence to a minimum if I can! My codebase is fully C++ at the moment. Is this at all possible? I am new to OS X and haven't touched Objective-C. I looked through the developer docs and grabbed some example files, but they all use Xcode. I successfully compiled the source and linked with the right framework, but the whole app bundle, xib, and Info.plist thing seems a bit over the top.
This is as bare-bones as it gets:
// g++ glInfo.cpp -o glInfo.x -framework OpenGL
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl3.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    CGLContextObj ctx;
    CGLPixelFormatObj pix;
    GLint npix;
    CGLPixelFormatAttribute attribs[] = {
        (CGLPixelFormatAttribute) 0
    };

    CGLChoosePixelFormat( attribs, &pix, &npix );
    CGLCreateContext( pix, NULL, &ctx );
    CGLSetCurrentContext( ctx );

    printf("Vendor:   %s\n", glGetString(GL_VENDOR));
    printf("Renderer: %s\n", glGetString(GL_RENDERER));
    printf("Version:  %s\n", glGetString(GL_VERSION));
    printf("GLSL:     %s\n", glGetString(GL_SHADING_LANGUAGE_VERSION));
    return 0;
}
Both FLTK and SDL should have enough AGL/CGL code. I've built and run command-line GL apps on OS X with fltk-1.3.0. It uses the AGL subsystem, however, which is deprecated. SDL uses CGL and NSOpenGL for its Quartz video layer: ./src/video/quartz
Use the Objective-C flags for the compiler if you need to build a src.m, and let the native g++, llvm-g++, clang++, or whatever take care of the linking. You might need to add -framework OpenGL as well as ApplicationServices and Cocoa. A lot of the answer depends on what you mean by 'code divergence'.
You can build apps without using Xcode, but if you want users who aren't sys admins to use them, you'll need an app bundle (and hence .plist) and (unless it's a game) a user interface beyond what OpenGL offers. You don't need to create a .xib for your UI. You can do it all programmatically, though using .xibs is far easier for most things. Why don't you want to use the tools designed to help you? I'm not saying they're the best, but they beat using only a command-line interface. (Once you have an Xcode project for your app, though, you can build it from the command-line and even automate building it.)
Well, you can do it "by hand" (I used to do it back in the NeXTStep days), but it is a pain before it works for the first time.
Xcode uses makefiles under the hood. You can find and adapt the makefiles in
/Developer/Makefiles
(pre XCode 4.3)
or for XCode 4.3 in
/Applications/Xcode.app/Contents/Developer/Makefiles
Good luck!
For the build system, I recommend CMake/CPack, which will help you keep the project cross-platform and build app bundles. For NSGL, it's possible using Objective-C++, but I don't have any code at hand.
I am developing an OpenGL application and need to use the glew library. I am using Visual Studio C++ 2008 Express.
I compiled a program using gl.h, glu.h, and glut.h just fine, and it does what it's supposed to do. After including glew.h it still compiles just fine, but when I try:
glewInit();
if (glewIsSupported("GL_VERSION_2_0"))
    printf("Ready for OpenGL 2.0\n");
else {
    printf("OpenGL 2.0 not supported\n");
}
It keeps printing:
"OpenGL 2.0 not supported".
I tried changing it to glewIsSupported("GL_VERSION_1_3") or even glewIsSupported("GL_VERSION_1_0"), and it still returns false, meaning that apparently no OpenGL version at all is supported.
I have a Radeon HD 5750, so it should support OpenGL 3.1 and some of the features of 3.2. I know that all the device drivers are installed properly, since I was able to run all the programs in the Radeon SDK provided by ATI.
I also installed OpenGL Extensions Viewer 3.15, and it says OpenGL Version 3.1, ATI Driver 6.14.10.9116. I tried all of GLEW_VERSION_1_1, GLEW_VERSION_1_2, GLEW_VERSION_1_3, GLEW_VERSION_2_0, GLEW_VERSION_2_1, and GLEW_VERSION_3_0, and all of these return false.
Any other suggestions? I even tried GLEW_ARB_vertex_shader && GLEW_ARB_fragment_shader, and this returns false as well.
glewIsSupported is meant to check whether specific features or extensions are supported. You want something more like:
if (GLEW_VERSION_1_3)
{
    /* Yay! OpenGL 1.3 is supported! */
}
There may be some missing initialization; I encountered the same problem. Here is how I solved it: you need to call glutCreateWindow() first, so a GL context exists before glewInit() is called. Add that call and try again.
First, you should check whether GLEW initialized properly:
if (glewInit() != GLEW_OK)
{
    // something is wrong
}
Second, you need to create the context before calling glewInit().
Third, you can also try setting
glewExperimental = GL_TRUE;
before calling glewInit().
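Putting those three points together, here is a sketch of a correct initialization order, using GLUT for the window since the question already compiles against glut.h (the window title is arbitrary):
#include <GL/glew.h> // must be included before gl.h, so before glut.h
#include <GL/glut.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    // A GL context must exist before glewInit(), so create a window first.
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutCreateWindow("glew test");

    glewExperimental = GL_TRUE; // expose all entry points, just in case
    GLenum err = glewInit();
    if (err != GLEW_OK)
    {
        fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(err));
        return 1;
    }

    if (GLEW_VERSION_2_0)
        printf("Ready for OpenGL 2.0\n");
    else
        printf("OpenGL 2.0 not supported\n");
    return 0;
}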
I encountered the same problem when running a program through Windows RDP. I noticed that my video card may not work properly over RDP, so I tried TeamViewer instead, and both glewinfo.exe and my program then started working normally.
The OP's problem may have been solved long ago; this is just for others' information.