I tried the following methods:
1. using GLEW
2. using GLUT
Both in almost the same way, as follows:
#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>
#include <GLFW/glfw3.h>

int main(int argc, char **argv)
{
    // do windowing related stuff here
    if (!glfwInit())
    {
        printf("Error: Failed to initialize GLFW\n");
        return -1;
    }
    GLFWwindow* window = glfwCreateWindow(800, 600, "Triangle", NULL, NULL);
    if (window == NULL)
    {
        printf("Failed to create GLFW window\n");
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK)
    {
        printf("Error: Failed to initialize GLEW\n");
        return -1;
    }
    printf("GL version: %s\n", glGetString(GL_VERSION));
    printf("GL shading language version: %s\n",
           glGetString(GL_SHADING_LANGUAGE_VERSION));
    glfwTerminate();
    return 0;
}
Question - Is it possible to check the GL and GLSL version without creating a native window?
As I understand it, it is necessary to create a GL context, which is usually done by creating a window. Please tell me an alternative that does not create a window.
According to the OpenGL Wiki FAQ (emphasis mine):
You must create a GL context in order for your GL function calls to make sense. You can't just write a minimal program such as this:
int main(int argc, char **argv)
{
    char *GL_version = (char *)glGetString(GL_VERSION);
    char *GL_vendor = (char *)glGetString(GL_VENDOR);
    char *GL_renderer = (char *)glGetString(GL_RENDERER);
    return 0;
}
In the above, the programmer simply wants to get information about this system (without rendering anything) but it simply won't work because no communication has been established with the GL driver. The GL driver also needs to allocate resources with respect to the window such as a backbuffer. Based on the pixelformat you have chosen, there can be a color buffer with some format such as GL_BGRA8. There may or may not be a depth buffer. The depth might contain 24 bits. There might be an 8-bit stencil. There might be an accumulation buffer. Perhaps the pixelformat you have chosen can do multisampling. Up until now, no one has introduced a windowless context.
You must create a window. You must select a pixelformat. You must create a GL context. You must make the GL context current (wglMakeCurrent for Windows and glXMakeCurrent for *nix).
That said, if you're just looking to avoid a temporary window popping up, create a non-visible window, so the end-user doesn't have any idea that you're creating the window. In GLFW, it appears you can do this by setting the window hint GLFW_VISIBLE to false with glfwWindowHint before creating the window. All other windowing systems I've worked with have a similar concept of setting the visibility of a window.
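For example, here is a minimal sketch of the hidden-window approach (assuming GLFW 3.2+ with GLEW; on older GLFW 3.x, pass GL_FALSE instead of GLFW_FALSE). The only addition over the code in the question is the GLFW_VISIBLE hint:

#include <stdio.h>
#include <GL/glew.h>
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return -1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);  /* the window is never shown */
    GLFWwindow *window = glfwCreateWindow(1, 1, "probe", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);            /* a context must be current */
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK) {
        glfwTerminate();
        return -1;
    }
    printf("GL version: %s\n", glGetString(GL_VERSION));
    printf("GLSL version: %s\n", glGetString(GL_SHADING_LANGUAGE_VERSION));
    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}

The end user never sees a window, but the driver still gets everything it needs to create a real context.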
Related
This is my first big OpenGL project and I am confused about a new feature I want to implement.
I am working on a game engine. In my engine I have two classes: Renderer and CustomWindow. GLFW needs to be initialized, then an OpenGL context needs to be created, then GLEW can be initialized. There was no problem with this until I decided to support creating multiple windows at the same time. Here are the things I am confused about:
Do I need to initialize GLEW for every window that is created? If not, can I still call glewInit() for every window creation and have everything be fine?
If I create a window and then destroy it, do I have to call glewInit() again, and will I have to call these functions again?:
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &numberOfTexturesSupported);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_MULTISAMPLE);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_PROGRAM_POINT_SIZE);
If there are any off-topic comments that would help, they are very welcome.
Update 1: More Context
For reference, the reason I want to do this is to implement rendering to multiple windows that share the same OpenGL context. Note that each window uses its own vertex array object (VAO). Here is the code for reference:
// CustomWindow.cpp
CustomWindow::CustomWindow() {
    window = nullptr;
    title = defaultTitle;
    shouldClose = false;
    error = false;
    vertexArrayObjectID = 0;
    frameRate = defaultFrameRate;
    window = glfwCreateWindow(defaultWidth, defaultHeight, title.c_str(), nullptr, nullptr);
    if (!window) {
        error = true;
        return;
    }
    glfwMakeContextCurrent(window);
    if (glewInit() != GLEW_OK) {
        error = true;
        return;
    }
    glGenVertexArrays(1, &vertexArrayObjectID);
    glBindVertexArray(vertexArrayObjectID);
    allWindows.push_back(this);
}

CustomWindow::CustomWindow(int width, int height, const std::string& title, GLFWmonitor* monitor, GLFWwindow* share) {
    window = nullptr;
    this->title = title;
    shouldClose = false;
    error = false;
    vertexArrayObjectID = 0;
    frameRate = defaultFrameRate;
    window = glfwCreateWindow(width, height, title.c_str(), monitor, share);
    if (!window) {
        error = true;
        return;
    }
    glfwMakeContextCurrent(window);
    glGenVertexArrays(1, &vertexArrayObjectID);
    allWindows.push_back(this);
}

CustomWindow::~CustomWindow() {
    if (window != nullptr)  // only destroy a window that was actually created
        glfwDestroyWindow(window);
    unsigned int position = 0;
    for (unsigned int i = 0; i < allWindows.size(); i++)
        if (allWindows[i] == this) {
            position = i;
            break;
        }
    allWindows.erase(allWindows.begin() + position);
    if (mainWindow == this)
        mainWindow = nullptr;
}
// Renderer.cpp
Renderer::Renderer() {
    error = false;
    numberOfTexturesSupported = 0;
    if (singleton != nullptr) {
        error = true;
        return;
    }
    singleton = this;
    // Init GLFW
    if (!glfwInit()) {
        error = true;
        return;
    }
    // Set window hints
    glfwWindowHint(GLFW_MAXIMIZED, true);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
    glfwWindowHint(GLFW_SAMPLES, 4);
    // Init GLEW
    if (glewInit() != GLEW_OK) {
        error = true;
        return;
    }
    // Set graphics message reporting
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
    glDebugMessageCallback(openglDebugCallback, nullptr);
    // Set up OpenGL
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &numberOfTexturesSupported);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glEnable(GL_MULTISAMPLE);
    glEnable(GL_LINE_SMOOTH);
    glEnable(GL_POINT_SMOOTH);
    glEnable(GL_PROGRAM_POINT_SIZE);
}
After some research, I would say that it depends; it's always best to look at the source to form an opinion. The OpenGL wiki has some useful information to offer:
Loading OpenGL Functions is an important task for initializing OpenGL after creating an OpenGL context. You are strongly advised to use an OpenGL Loading Library instead of a manual process. However, if you want to know how it works manually, read on.
Windows
This function [wglGetProcAddress] only works in the presence of a valid OpenGL context. Indeed, the function pointers it returns are themselves context-specific. The Windows documentation for this function states that the functions returned may work with another context, depending on the vendor of that context and that context's pixel format.
In practice, if two contexts come from the same vendor and refer to the same GPU, then the function pointers pulled from one context will work in the other.
Linux and X-Windows
This function [glXGetProcAddress] can operate without an OpenGL context, though the functions it returns obviously can't. This means that the function pointers are not associated with a context in any way.
If you take a look into the source code of GLEW (./src/glew.c), you will see that the library simply calls the loading procedures of the underlying system and assigns the results of those calls to the global function pointers.
In other words, calling glewInit multiple times has no side effect other than that explained in the OpenGL wiki.
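In practice, that means re-running glewInit after each context switch is harmless, and on Windows it guards against the per-context function pointers described above. A sketch (windowA and windowB are placeholder names for two GLFW windows):

// Redundant on X11; a safe habit on Windows, where pointers can be
// context-specific if the contexts differ in vendor or pixel format.
glfwMakeContextCurrent(windowA);
if (glewInit() != GLEW_OK) { /* handle the error */ }

glfwMakeContextCurrent(windowB);  // second window, possibly sharing with windowA
if (glewInit() != GLEW_OK) { /* handle the error */ }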
Another question would be: do you really need multiple windows for this task? A different approach could be achieved with only one context and multiple framebuffer objects (see the sketch below).
Multiple contexts (sharing resources between them) and event handling (which can only be done from the main thread) need proper synchronization and multiple context switches.
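A rough sketch of that framebuffer-object approach (GL 3.0+ assumed; width and height are placeholders): each view renders into its own texture-backed FBO inside one context, and the results can then be composited or blitted wherever they are needed.

GLuint fbo, colorTex;

// Texture that will receive the rendered image
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Framebuffer object with the texture as its color attachment
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle an incomplete framebuffer */
}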
I've been trying to make a simple game in OpenGL using the GLFW library, but I've gotten stuck on a few parts due to changes in GLFW. The change log helped out a bit, but my problem is that I can't close my window properly using glfwGetWindowAttrib, and I have no idea what attribute to pass since I've seen no replacement for GLFW_OPENED.
// Include GLFW
#include <GLFW/glfw3.h>

int main(int argc, char **argv)
{
    glfwInit();
    glfwWindowHint(GLFW_RESIZABLE, GL_TRUE);
    GLFWwindow *window = glfwCreateWindow(640, 480, "Test Game", NULL, NULL);
    glfwMakeContextCurrent(window);
    bool running = true;
    while (running) {
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window);
        // running = glfwGetWindowAttrib();
    }
}
According to the GLFW documentation, you have to use the glfwWindowShouldClose function:
while (!glfwWindowShouldClose(window))
{
    // Do what you need
    glfwSwapBuffers(window);
    glfwPollEvents();
}
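If you also want to close the window from input handling, a small sketch (GLFW 3.2+ assumed for GLFW_TRUE; older 3.x uses GL_TRUE) is to set the same flag that glfwWindowShouldClose reads:

// Request closing when ESC is pressed; the main loop above then exits.
static void key_callback(GLFWwindow *window, int key, int scancode,
                         int action, int mods)
{
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GLFW_TRUE);
}

// After creating the window:
glfwSetKeyCallback(window, key_callback);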
I have completed the following video tutorial:
https://www.youtube.com/watch?v=shpdt6hCsT4
However, my hello world window looked like this:
http://s1303.photobucket.com/user/eskimo___/media/screenshot_124_zps890ae561.jpg.html
Any ideas where I've gone wrong? I'm using OS X Yosemite with the latest GLFW.
Cheers
As requested:
My project folder is comprised of 3 files which were made as part of the process using the terminal:
main.cpp (C++ source code)
Makefile (txt)
test (Unix executable file)
I've set up the glfw3 library on my Mac using Homebrew.
main.cpp, which is what is run to produce the undesired effect in the window pictured, is comprised of the example code from GLFW's documentation page:
#include <GLFW/glfw3.h>

int main(void)
{
    GLFWwindow* window;

    /* Initialize the library */
    if (!glfwInit())
        return -1;

    /* Create a windowed mode window and its OpenGL context */
    window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        return -1;
    }

    /* Make the window's context current */
    glfwMakeContextCurrent(window);

    /* Loop until the user closes the window */
    while (!glfwWindowShouldClose(window))
    {
        /* Render here */

        /* Swap front and back buffers */
        glfwSwapBuffers(window);

        /* Poll for and process events */
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
Insert glClear(GL_COLOR_BUFFER_BIT) prior to glfwSwapBuffers. The program is basically updating from uninitialized 'framebuffer' memory, which the GL implementation has probably used for other purposes, such as backing store or textures, in the Quartz compositor.
Think of it like a call to malloc: the memory isn't required to be initialized or zeroed.
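Applied to the render loop from the question, the fix is a one-line addition (a sketch):

while (!glfwWindowShouldClose(window))
{
    /* Clear the color buffer every frame so the window never
       displays uninitialized framebuffer memory. */
    glClear(GL_COLOR_BUFFER_BIT);

    glfwSwapBuffers(window);
    glfwPollEvents();
}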
My code is below; it returns the error NSGL: Failed to create OpenGL pixel format.
The error callback is the standard callback from GLFW.
int main(int argc, const char * argv[]) {
    glfwSetErrorCallback(error_callback);
    if (!glfwInit()) {
        fprintf(stderr, "ERROR: could not start GLFW3\n");
        return 1;
    }
    GLFWwindow* window = glfwCreateWindow(640, 480, "Hello Triangle", NULL, NULL);
    if (!window) {
        fprintf(stderr, "\nERROR: could not open window with GLFW3\n");
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);  // only make the context current if the window exists

    // start GLEW extension handler
    glewExperimental = GL_TRUE;
    glewInit();

    // get version info
    const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
    const GLubyte* version = glGetString(GL_VERSION);   // version as a string
    printf("Renderer: %s\n", renderer);
    printf("OpenGL version supported %s\n", version);

    // tell GL to only draw onto a pixel if the shape is closer to the viewer
    glEnable(GL_DEPTH_TEST); // enable depth-testing
    glDepthFunc(GL_LESS);    // depth-testing interprets a smaller value as "closer"

    /* OTHER STUFF GOES HERE NEXT */

    // close GL context and any other GLFW resources
    glfwTerminate();
    return 0;
}
Does someone know what the problem is?
On my OS X machine, that issue arose due to the stencil buffer depth setting, which was 16 bits. OS X (or the built-in graphics card) apparently can only handle 8 bits. Since I'm not deep into OpenGL yet, I cannot fully explain this, but I will update the answer as soon as I have a deeper understanding.
The code responsible for setting the buffer depth is the following (corrected version):
glfwWindowHint(GLFW_STENCIL_BITS, 8);
which is executed before creating the window with glfwCreateWindow(...)
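In context, the ordering looks roughly like this (a sketch; error handling omitted):

glfwInit();
glfwWindowHint(GLFW_STENCIL_BITS, 8);  /* 8-bit stencil; 16 failed on OS X */
GLFWwindow* window = glfwCreateWindow(640, 480, "Hello Triangle", NULL, NULL);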
Hope this helps :)
I have started work on a new project with OpenGL 3.3. I was using GLFW and GLEW for window setup and loading of GL functions, but switched to the Unofficial OpenGL SDK instead of GLEW. The problem remained, though:
I was getting a segmentation fault when calling glCreateShader(GL_VERTEX_SHADER), and it turned out that the function pointer was NULL. I later found out that it was caused by an invalid GL context.
This is the setup code:
#include <cstdio>
#include <cstddef>
#include <glload/gl_3_3.h>
#include <glload/gll.h>
#include <GL/glfw.h>
#include <glm.hpp>
#include <gtc/matrix_transform.hpp>
#include "Cube.h"

// Deduce the element count from a reference to an array
// (sizeof on a pointer parameter would not give the count).
template <class T, std::size_t N>
int arraySize(T (&)[N]) {
    return static_cast<int>(N);
}

int main() {
    if (!glfwInit()) {
        fprintf(stderr, "Failed to initialize GLFW\n");
        return -1;
    }
    glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
    // Open a window and create its OpenGL context
    if (!glfwOpenWindow(1024, 768, 0, 0, 0, 0, 0, 0, GLFW_WINDOW)) {
        fprintf(stderr, "Failed to open GLFW window\n");
        glfwTerminate();
        return -1;
    }
    if (LoadFunctions() == LS_LOAD_FAILED) {
        fprintf(stderr, "Failed to load GL functions.\n");
        return -1;
    }
I have searched for answers on Google and on here, but haven't been able to find anything. I also asked in the OpenGL IRC channel on Freenode, and they told me to try the Unofficial SDK instead of GLEW, because GLEW with the core profile is problematic. This didn't work, though.
The weirdest thing is that it worked previously, with the exact same setup as now.
By the way, I am using Windows 7 x64 with the newest available drivers.
SOLUTION:
I was being dumb and calling glCreateShader() before glewInit(). Sorry for being dumb :(
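For anyone hitting the same segfault, the required ordering is sketched below (shown with GLFW 3 + GLEW names; glload's LoadFunctions has the same constraint of needing a current context first):

glfwMakeContextCurrent(window);               // 1. a context must be current
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) {                  // 2. then load the function pointers
    fprintf(stderr, "Failed to initialize GLEW\n");
    return -1;
}
GLuint vs = glCreateShader(GL_VERTEX_SHADER); // 3. only now is this pointer valid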