I'm following the OpenGL Programming Guide, 8th Edition, and I'm stuck in the first chapter (triangles.cpp).
I'm running Ubuntu 14.04.2, and this is the code I'm using:
// Draws 2 triangles on the screen
#include <iostream>
using namespace std;
#include "vgl.h"
#include "LoadShaders.h"
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
// init
void init(void)
{
glGenVertexArrays(NumVAOs, VAOs);
glBindVertexArray(VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{ -0.90, -0.90 }, //Triangle 1
{ 0.85, -0.90 },
{ -0.90, 0.85 },
{ 0.90, -0.85 }, //Triangle 2
{ 0.90, 0.90 },
{ -0.85, 0.90 }
};
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, "trianges.vert" },
{ GL_FRAGMENT_SHADER, "triangles.frag" },
{ GL_NONE, NULL }
};
GLuint program = LoadShaders(shaders);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
}
// display
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAOs[Triangles]);
glDrawArrays(GL_TRIANGLES, 0, NumVertices);
glFlush();
}
// main
int main (int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowSize(512, 512);
glutInitContextVersion(4, 3);
glutInitContextProfile(GLUT_CORE_PROFILE);
glutCreateWindow(argv[0]);
if (glewInit()) {
cerr << "Unable to initialize GLEW ... exiting" << '\n';
exit(EXIT_FAILURE);
}
init();
glutDisplayFunc(display);
glutMainLoop();
}
This code compiles successfully, but it crashes with a segmentation fault when run.
I used gdb to debug the error. A window does pop up (only when I ran it under gdb), but it simply displays whatever is behind it (so the program has crashed and doesn't draw the two triangles).
This is the output from gdb (the Python traceback at the top is unrelated pretty-printer noise):
Traceback (most recent call last):
File "/usr/share/gdb/auto-load/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19-gdb.py", line 63, in <module>
from libstdcxx.v6.printers import register_libstdcxx_printers
ImportError: No module named 'libstdcxx'
Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
The program should be displaying two triangles in a window.
I'm relatively new to C++, and I'm very new to linking libraries and debugging with gdb. I've seen many Q&As about segmentation faults. One person had the exact same problem as I did, but unfortunately he deleted the question.
I don't really feel this is a duplicate, since I didn't really understand most of the answers I've read on the topic. Please make the answer as simple as possible.
I'm using the freeglut3-dev package and NVIDIA driver version 346.59.
EDIT:
The exact error happens at init() in
glGenVertexArrays(NumVAOs, VAOs);
glBindVertexArray(VAOs[Triangles]);
The segmentation fault occurred at the very first GL call. So I made sure to include the freeglut header, which pulls in the necessary OpenGL headers as well:
#include <GL/freeglut.h>
I would like to thank Mateusz Grzejek for pointing this out.
Related
I am trying to run the OpenGL Red Book examples. I am able to compile the projects in MS VS 2017. However, when I execute some of the examples, the following exception is thrown:
Exception thrown at 0x00000000 in Project_1_1.exe: 0xC0000005: Access
violation executing location 0x00000000. occurred
This occurs for the line:
glCreateBuffers( NumBuffers, Buffers );
I have come to understand from reading the forums that glCreateBuffers only works if the GPU supports OpenGL 4.5 (it is a direct-state-access function introduced in 4.5). I tried using glGenBuffers instead, and indeed it works.
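For reference, a sketch of that substitution using the book's names (glBufferStorage, which appears in the full code below, is likewise a newer entry point, core only since 4.4, and can be swapped for glBufferData):
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);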
I ran glewinfo and it shows me that GL_VERSION_4_5 and GL_ARB_direct_state_access are missing. So it appears that my GPU doesn't support OpenGL 4.5.
However, I have an Nvidia GTX 1050 with updated drivers, and according to the documentation it ought to support OpenGL 4.5. The system information in Steam also shows OpenGL 4.5 support. The code works fine when I compile it on Linux. I am required to use OpenGL 4.5.
Relevant details:
Windows 10
Dell XPS 15 with i7 7700 HQ
Nvidia GTX 1050
Visual Studio 2017
I am compiling code on Windows for the first time, so it is possible that I am missing something obvious. The code works on Ubuntu 16.04.
Edit (Added code)
The relevant section of the code is from this repository:
//////////////////////////////////////////////////////////////////////////////
//
// Triangles.cpp
//
//////////////////////////////////////////////////////////////////////////////
#include "vgl.h"
#include "LoadShaders.h"
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
//----------------------------------------------------------------------------
//
// init
//
void
init( void )
{
glGenVertexArrays( NumVAOs, VAOs );
glBindVertexArray( VAOs[Triangles] );
GLfloat vertices[NumVertices][2] = {
{ -0.90f, -0.90f }, { 0.85f, -0.90f }, { -0.90f, 0.85f }, // Triangle 1
{ 0.90f, -0.85f }, { 0.90f, 0.90f }, { -0.85f, 0.90f } // Triangle 2
};
glCreateBuffers( NumBuffers, Buffers );
glBindBuffer( GL_ARRAY_BUFFER, Buffers[ArrayBuffer] );
glBufferStorage( GL_ARRAY_BUFFER, sizeof(vertices), vertices, 0);
ShaderInfo shaders[] =
{
{ GL_VERTEX_SHADER, "media/shaders/triangles/triangles.vert" },
{ GL_FRAGMENT_SHADER, "media/shaders/triangles/triangles.frag" },
{ GL_NONE, NULL }
};
GLuint program = LoadShaders( shaders );
glUseProgram( program );
glVertexAttribPointer( vPosition, 2, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0) );
glEnableVertexAttribArray( vPosition );
}
//----------------------------------------------------------------------------
//
// display
//
void
display( void )
{
static const float black[] = { 0.0f, 0.0f, 0.0f, 0.0f };
glClearBufferfv(GL_COLOR, 0, black);
glBindVertexArray( VAOs[Triangles] );
glDrawArrays( GL_TRIANGLES, 0, NumVertices );
}
//----------------------------------------------------------------------------
//
// main
//
#ifdef _WIN32
int CALLBACK WinMain(
_In_ HINSTANCE hInstance,
_In_ HINSTANCE hPrevInstance,
_In_ LPSTR lpCmdLine,
_In_ int nCmdShow
)
#else
int
main( int argc, char** argv )
#endif
{
glfwInit();
GLFWwindow* window = glfwCreateWindow(800, 600, "Triangles", NULL, NULL);
glfwMakeContextCurrent(window);
int api, major, minor, revision, profile;
api = glfwGetWindowAttrib(window, GLFW_CLIENT_API);
major = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR);
minor = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MINOR);
revision = glfwGetWindowAttrib(window, GLFW_CONTEXT_REVISION);
profile = glfwGetWindowAttrib(window, GLFW_OPENGL_PROFILE);
gl3wInit();
init();
while (!glfwWindowShouldClose(window))
{
display();
glfwSwapBuffers(window);
glfwPollEvents();
}
glfwDestroyWindow(window);
glfwTerminate();
}
There are several things that should be checked if you are sure that your graphics card supports OpenGL 4.5:
Make sure that your application actually requests an OpenGL 4.5 context (for example, by setting the appropriate glfwWindowHint values; see the sketch after this list).
Make sure that the application uses the correct card. This is especially an issue on notebooks that have both a dedicated GPU (AMD or Nvidia) and an integrated Intel one. The setting for this should be somewhere in the Nvidia Control Panel or the AMD equivalent.
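A minimal sketch of the first point, assuming the question's GLFW setup (the hints must come before glfwCreateWindow):
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 5);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow(800, 600, "Triangles", NULL, NULL);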
I would like to work through the OpenGL Red Book, The OpenGL Programming Guide, 8th edition, using Xcode on Mac OS X.
I am unable to run the first code example, triangles.cpp. I have tried including the GLUT and GL frameworks that come with Xcode and I have searched around enough to see that I am not likely to figure this out on my own.
Assuming that I have a fresh installation of Mac OS X, and I have freshly installed Xcode with Xcode command-line tools, what are the step-by-step instructions to be able to run triangles.cpp in that environment?
Unlike this question, my preference would be not to use Cocoa, Objective-C or Swift. My preference would be to stay in C++/C only. An answer is only correct if I can follow it step-by-step and end up with a running triangles.cpp program.
My preference is Mac OS X 10.9, however a correct answer can assume 10.9, 10.10 or 10.11.
Thank you.
///////////////////////////////////////////////////////////////////////
//
// triangles.cpp
//
///////////////////////////////////////////////////////////////////////
#include <iostream>
using namespace std;
#include "vgl.h"
#include "LoadShader.h"
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
//---------------------------------------------------------------------
//
// init
//
void
init(void)
{
glGenVertexArrays(NumVAOs, VAOs);
glBindVertexArray(VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{ -0.90, -0.90 }, // Triangle 1
{ 0.85, -0.90 },
{ -0.90, 0.85 },
{ 0.90, -0.85 }, // Triangle 2
{ 0.90, 0.90 },
{ -0.85, 0.90 }
};
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, "triangles.vert" },
{ GL_FRAGMENT_SHADER, "triangles.frag" },
{ GL_NONE, NULL }
};
GLuint program = LoadShaders(*shaders);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
}
//---------------------------------------------------------------------
//
// display
//
void
display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAOs[Triangles]);
glDrawArrays(GL_TRIANGLES, 0, NumVertices);
glFlush();
}
//---------------------------------------------------------------------
//
// main
//
int
main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowSize(512, 512);
glutInitContextVersion(4, 3);
glutInitContextProfile(GLUT_CORE_PROFILE);
glutCreateWindow(argv[0]);
glewExperimental = GL_TRUE;
if (glewInit()) {
cerr << "Unable to initialize GLEW ... exiting" << endl;
exit(EXIT_FAILURE);
}
init();
glutDisplayFunc(display);
glutMainLoop();
}
Edit 1: In response to the first comment, here is the naive effort.
Open Xcode 5.1.1 on Mac OS X 10.9.5
Create a new C++ Command-line project.
Paste over the contents of main.cpp with the contents of triangles.cpp.
Click on the project -> Build Phases -> Link Binary with Libraries
Add OpenGL.framework and GLUT.framework
Result: "/Users/xxx/Desktop/Triangles/Triangles/main.cpp:10:10: 'vgl.h' file not found"
Edit 2: Added the vgl translation unit and the LoadShaders translation unit, and also added libFreeGlut.a and libGlew32.a to my project's compilation/linking. Moved all of the OpenGL book's Include contents to my project's source directory. Had to change several include statements to use quoted includes instead of angled includes. It feels like this is closer to working, but it is unable to find LoadShader.h. Note that the translation unit in the OpenGL download is called LoadShaders (plural). Changing triangles.cpp to reference LoadShaders.h fixed the include problem, but the contents of that translation unit don't seem to match the signatures of what's being called from triangles.cpp.
There are some issues with the source and with the files in oglpg-8th-edition.zip:
triangles.cpp uses the non-standard GLUT functions glutInitContextVersion and glutInitContextProfile, which aren't part of the original GLUT and exist only in the freeglut implementation. freeglut doesn't really support OS X, and building it relies on additional X11 support. Instead of telling you how to do that, I'm just going to modify the source to build with OS X's GLUT framework.
The code depends on glew, and the book's source download apparently doesn't include a binary you can use, so you'll need to build it for yourself.
Build GLEW with the following commands:
git clone git://git.code.sf.net/p/glew/code glew
cd glew
make extensions
make
Now:
Create a C++ command line Xcode project
Set the executable to link with the OpenGL and GLUT frameworks and the glew dylib you just built.
Modify the project "Header Search Paths" to include the location of the glew headers for the library you built, followed by the path to oglpg-8th-edition/include
Add oglpg-8th-edition/lib/LoadShaders.cpp to your xcode project
Paste the triangles.cpp source into the main.cpp of your Xcode project
Modify the source: replace #include "vgl.h" with:
#include <GL/glew.h>
#include <OpenGL/gl3.h>
#include <GLUT/glut.h>
#define BUFFER_OFFSET(x) ((const void*) (x))
Also make sure that the typos in the version of triangles.cpp that you include in your question are fixed: you include "LoadShader.h" when it should be "LoadShaders.h", and LoadShaders(*shaders); should be LoadShaders(shaders);. (The code printed in my copy of the book doesn't contain these errors.)
Delete the calls to glutInitContextVersion and glutInitContextProfile.
Change the parameter to glutInitDisplayMode to GLUT_RGBA | GLUT_3_2_CORE_PROFILE
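The GLUT setup in main() then becomes (a sketch; GLUT_3_2_CORE_PROFILE is specific to Apple's GLUT framework):
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_3_2_CORE_PROFILE);
glutInitWindowSize(512, 512);
glutCreateWindow(argv[0]);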
At this point the code builds, links, and runs, however running the program displays a black window for me instead of the expected triangles.
To fix the black window issue mentioned in Matthew's and bames53's comments:
Follow bames53's answer
Define the shaders as strings:
const char *pTriangleVert =
"#version 410 core\n\
layout(location = 0) in vec4 vPosition;\n\
void\n\
main()\n\
{\n\
gl_Position= vPosition;\n\
}";
const char *pTriangleFrag =
"#version 410 core\n\
out vec4 fColor;\n\
void\n\
main()\n\
{\n\
fColor = vec4(0.0, 0.0, 1.0, 1.0);\n\
}";
OpenGL 4.1 is supported on my iMac, so I changed the shader version to 410:
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, pTriangleVert},
{ GL_FRAGMENT_SHADER, pTriangleFrag },
{ GL_NONE, NULL }
};
Modify the ShaderInfo struct slightly
change
typedef struct {
GLenum type;
const char* filename;
GLuint shader;
} ShaderInfo;
into
typedef struct {
GLenum type;
const char* source;
GLuint shader;
} ShaderInfo;
Modify the LoadShaders function slightly:
comment out the code that reads the shader from a file, turning
/*
const GLchar* source = ReadShader( entry->filename );
if ( source == NULL ) {
for ( entry = shaders; entry->type != GL_NONE; ++entry ) {
glDeleteShader( entry->shader );
entry->shader = 0;
}
return 0;
}
glShaderSource( shader, 1, &source, NULL );
delete [] source;*/
into
glShaderSource(shader, 1, &entry->source, NULL);
You had better turn on DEBUG in case there are shader compile errors.
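A minimal compile-status check, assuming shader is the GLuint just passed to glCompileShader (standard GL calls, independent of the book's DEBUG path):
GLint compiled = 0;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled) {
    GLchar log[512];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    std::cerr << "shader compile failed:\n" << log << std::endl;
}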
You can use the example from this link; it's almost the same, but it uses GLFW instead of GLUT:
http://www.tomdalling.com/blog/modern-opengl/01-getting-started-in-xcode-and-visual-cpp/
/*
main
Copyright 2012 Thomas Dalling - http://tomdalling.com/
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
//#include "platform.hpp"
// third-party libraries
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
// standard C++ libraries
#include <cassert>
#include <iostream>
#include <stdexcept>
#include <cmath>
// tdogl classes
#include "Program.h"
// constants
const glm::vec2 SCREEN_SIZE(800, 600);
// globals
GLFWwindow* gWindow = NULL;
tdogl::Program* gProgram = NULL;
GLuint gVAO = 0;
GLuint gVBO = 0;
// loads the vertex shader and fragment shader, and links them to make the global gProgram
static void LoadShaders() {
std::vector<tdogl::Shader> shaders;
shaders.push_back(tdogl::Shader::shaderFromFile("vertex-shader.txt", GL_VERTEX_SHADER));
shaders.push_back(tdogl::Shader::shaderFromFile("fragment-shader.txt", GL_FRAGMENT_SHADER));
gProgram = new tdogl::Program(shaders);
}
// loads a triangle into the VAO global
static void LoadTriangle() {
// make and bind the VAO
glGenVertexArrays(1, &gVAO);
glBindVertexArray(gVAO);
// make and bind the VBO
glGenBuffers(1, &gVBO);
glBindBuffer(GL_ARRAY_BUFFER, gVBO);
// Put the three triangle vertices into the VBO
GLfloat vertexData[] = {
// X Y Z
0.0f, 0.8f, 0.0f,
-0.8f,-0.8f, 0.0f,
0.8f,-0.8f, 0.0f,
};
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
// connect the xyz to the "vert" attribute of the vertex shader
glEnableVertexAttribArray(gProgram->attrib("vert"));
glVertexAttribPointer(gProgram->attrib("vert"), 3, GL_FLOAT, GL_FALSE, 0, NULL);
// unbind the VBO and VAO
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
// draws a single frame
static void Render() {
// clear everything
glClearColor(0, 0, 0, 1); // black
glClear(GL_COLOR_BUFFER_BIT);
// bind the program (the shaders)
glUseProgram(gProgram->object());
// bind the VAO (the triangle)
glBindVertexArray(gVAO);
// draw the VAO
glDrawArrays(GL_TRIANGLES, 0, 3);
// unbind the VAO
glBindVertexArray(0);
// unbind the program
glUseProgram(0);
// swap the display buffers (displays what was just drawn)
glfwSwapBuffers(gWindow);
}
void OnError(int errorCode, const char* msg) {
throw std::runtime_error(msg);
}
// the program starts here
void AppMain() {
// initialise GLFW
glfwSetErrorCallback(OnError);
if(!glfwInit())
throw std::runtime_error("glfwInit failed");
// open a window with GLFW
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
gWindow = glfwCreateWindow((int)SCREEN_SIZE.x, (int)SCREEN_SIZE.y, "OpenGL Tutorial", NULL, NULL);
if(!gWindow)
throw std::runtime_error("glfwCreateWindow failed. Can your hardware handle OpenGL 3.2?");
// GLFW settings
glfwMakeContextCurrent(gWindow);
// initialise GLEW
glewExperimental = GL_TRUE; //stops glew crashing on OSX :-/
if(glewInit() != GLEW_OK)
throw std::runtime_error("glewInit failed");
// print out some info about the graphics drivers
std::cout << "OpenGL version: " << glGetString(GL_VERSION) << std::endl;
std::cout << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
std::cout << "Vendor: " << glGetString(GL_VENDOR) << std::endl;
std::cout << "Renderer: " << glGetString(GL_RENDERER) << std::endl;
// make sure OpenGL version 3.2 API is available
if(!GLEW_VERSION_3_2)
throw std::runtime_error("OpenGL 3.2 API is not available.");
// load vertex and fragment shaders into opengl
LoadShaders();
// create buffer and fill it with the points of the triangle
LoadTriangle();
// run while the window is open
while(!glfwWindowShouldClose(gWindow)){
// process pending events
glfwPollEvents();
// draw one frame
Render();
}
// clean up and exit
glfwTerminate();
}
int main(int argc, char *argv[]) {
try {
AppMain();
} catch (const std::exception& e){
std::cerr << "ERROR: " << e.what() << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
I have adapted the project for the Mac here:
https://github.com/badousuan/openGLredBook9th
The project builds successfully and most demos run as expected. However, the original code is based on OpenGL 4.5, while the Mac only supports version 4.1, so some newer API calls may fail. If some target doesn't work well, consider this version issue and adapt accordingly.
I used the code from this tutorial: http://antongerdelan.net/opengl/hellotriangle.html, and it works on my Mac.
Here is the code I run.
#include <GL/glew.h> // include GLEW and new version of GL on Windows
#include <GLFW/glfw3.h> // GLFW helper library
#include <stdio.h>
int main() {
// start GL context and O/S window using the GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
return 1;
}
// these window hints are needed on Apple OS X
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow(640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
// start GLEW extension handler
glewExperimental = GL_TRUE;
glewInit();
// get version info
const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
const GLubyte* version = glGetString(GL_VERSION); // version as a string
printf("Renderer: %s\n", renderer);
printf("OpenGL version supported %s\n", version);
// tell GL to only draw onto a pixel if the shape is closer to the viewer
glEnable(GL_DEPTH_TEST); // enable depth-testing
glDepthFunc(GL_LESS); // depth-testing interprets a smaller value as "closer"
/* OTHER STUFF GOES HERE NEXT */
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
GLuint vbo = 0; // vertex buffer object
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);
GLuint vao = 0; // vertex array object
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(0.5, 0.0, 0.5, 1.0);"
"}";
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertex_shader, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragment_shader, NULL);
glCompileShader(fs);
GLuint shader_programme = glCreateProgram();
glAttachShader(shader_programme, fs);
glAttachShader(shader_programme, vs);
glLinkProgram(shader_programme);
while(!glfwWindowShouldClose(window)) {
// wipe the drawing surface clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(shader_programme);
glBindVertexArray(vao);
// draw points 0-3 from the currently bound VAO with current in-use shader
glDrawArrays(GL_TRIANGLES, 0, 3);
// update other events like input handling
glfwPollEvents();
// put the stuff we've been drawing onto the display
glfwSwapBuffers(window);
}
// close GL context and any other GLFW resources
glfwTerminate();
return 0;
}
Hate to make a "find my bug" post, but the internet seems to be severely lacking any examples for ultra-straightforward OpenGL ES2 triangle-drawing.
And technically, this is using OpenGL 2.1, but OpenGL ES2 is just a subset of that... right?
Anyways, here's my code (I stripped logging/error checking where I know it passes)
#include <SDL.h>
#include <SDL_opengl.h>
int main(int argc, char* argv[])
{
SDL_Init( SDL_INIT_VIDEO | SDL_INIT_EVENTS);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION,2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION,1);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL,1);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER,1);
SDL_Window* window = SDL_CreateWindow("Test", 0,0,100,100, SDL_WINDOW_OPENGL);
SDL_GLContext gl = SDL_GL_CreateContext(window);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
GLuint gl_program_id;
GLuint gl_vs_id;
GLuint gl_fs_id;
GLuint gl_pos_attrib_id;
GLuint gl_pos_buff_id;
char vs_file[2048]; //assume I successfully loaded vertex shader code into this array
char *vs_file_p = &vs_file[0];
char fs_file[2048]; //assume I successfully loaded fragment shader code into this array
char *fs_file_p = &fs_file[0];
gl_vs_id = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(gl_vs_id, 1, &vs_file_p, NULL);
glCompileShader(gl_vs_id);
gl_fs_id = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(gl_fs_id, 1, &fs_file_p, NULL);
glCompileShader(gl_fs_id);
gl_program_id = glCreateProgram();
glAttachShader(gl_program_id, gl_vs_id);
glAttachShader(gl_program_id, gl_fs_id);
glLinkProgram(gl_program_id);
glDeleteShader(gl_vs_id);
glDeleteShader(gl_fs_id);
glUseProgram(gl_program_id);
gl_pos_attrib_id = glGetAttribLocation(gl_program_id, "position");
float verts[] = {0.0,0.0,0.5,1.0,1.0,0.0};
glGenBuffers(1, &gl_pos_buff_id);
glBindBuffer(GL_ARRAY_BUFFER, gl_pos_buff_id);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*2*3, verts, GL_STATIC_DRAW);
glVertexAttribPointer(gl_pos_attrib_id, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(gl_pos_attrib_id);
glViewport(0,0,100,100);
glClearColor(0.,0.,0.,1.);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(gl_program_id);
glBindBuffer(GL_ARRAY_BUFFER, gl_pos_buff_id);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*2*3, verts, GL_STATIC_DRAW);
glVertexAttribPointer(gl_pos_attrib_id, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(gl_pos_attrib_id);
glDrawArrays(GL_TRIANGLES,0,3*3);
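// note: the draw call above requests 3*3 vertices and the attribute pointer
// reads 3 floats per vertex, but verts holds only 3 vertices of 2 floats each
// (and the shader declares "position" as vec2), so these sizes do not match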
SDL_GL_SwapWindow(window);
//while loop waiting on input to quit goes here...
SDL_GL_DeleteContext(gl);
SDL_Quit();
exit(0);
return 0;
}
And my shaders are as follows:
Vertex:
#version 110
attribute vec2 position;
void main()
{
gl_Position = vec4(position,1.,1.);
}
Fragment:
#version 110
void main()
{
gl_FragColor = vec4(1.);
}
Just another note that all the shaders compile, the program compiles, the context and window creation all work without a hitch. It just ends up with a black 100x100px window. Any help would be greatly appreciated!
(Also, if anyone can recommend decent, consistent, straightforward documentation for OpenGL ES 2, that would be awesome. There's so much inconsistency in the docs between versions/platforms/whatever. Thanks!)
I'm trying to teach myself OpenGL, so I bought a book about it, and the first chapter contains example code. I tried it and something went wrong. At the lines glGenVertexArrays(NumVAOs, VAOs); and glBindVertexArray(VAOs[Triangles]);, VS 2013 Ultimate reports exactly this error: "Unhandled exception at 0x77350309 in openGL_3.exe: 0xC0000005: Access violation executing location 0x00000000.". So I think something is wrong with memory, but I do not know what. Can someone help me?
#include <iostream>
using namespace std;
#include <vgl.h>
#include <LoadShaders.h>
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
void init(void)
{
glGenVertexArrays(NumVAOs, VAOs);
glBindVertexArray(VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{ -0.90, -0.90 }, // Triangle 1
{ 0.85, -0.90 },
{ -0.90, 0.85 },
{ 0.90, -0.85 }, // Triangle 2
{ 0.90, 0.90 },
{ -0.85, 0.90 }
};
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, "triangles.vert" },
{ GL_FRAGMENT_SHADER, "triangles.frag" },
{ GL_NONE, NULL }
};
GLuint program = LoadShaders(shaders);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAOs[Triangles]);
glDrawArrays(GL_TRIANGLES, 0, NumVertices);
glFlush();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowSize(512, 512);
glutInitContextVersion(4, 3);
glutInitContextProfile(GLUT_CORE_PROFILE);
glutCreateWindow(argv[0]);
if (glewInit()) {
cerr << "Unable to initialize GLEW ... exiting" << endl;
exit(EXIT_FAILURE);
}
init();
glutDisplayFunc(display);
glutMainLoop();
}
You can try setting:
glewExperimental = GL_TRUE;
before your call to glewInit(). Sources:
https://stackoverflow.com/a/20822876/833188
https://stackoverflow.com/a/22820845/833188
Are you sure that you can request version 4.3 before the call to glewInit()? Every version >= 3.0 requires WGL_ARB_create_context (Windows) / GLX_ARB_create_context (Linux), which is an extension.
Usually, to create a "modern" OpenGL context (3.0+), it is required to do the following (a WGL sketch follows the list):
Create temporary context, set it as current.
Initialize extensions (either manually or with loader like GLEW or GLFW).
Request desired version (and profile, if you create version 3.2 or higher and WGL_ARB_create_context_profile/GLX_ARB_create_context_profile is present).
Delete the temporary context from step 1 and bind your new context.
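A minimal sketch of those steps on Windows, assuming a valid device context hdc and that GLEW (via wglew.h) provides wglCreateContextAttribsARB once initialized:
HGLRC temp = wglCreateContext(hdc);   // 1. temporary legacy context
wglMakeCurrent(hdc, temp);
glewExperimental = GL_TRUE;           // 2. initialize extension pointers
glewInit();
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 4, // 3. request a 4.3 core context
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0
};
HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);
wglMakeCurrent(NULL, NULL);           // 4. drop the temporary context
wglDeleteContext(temp);
wglMakeCurrent(hdc, ctx);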
You may want to look at:
OpenGL 3.0 Context Creation (GLX)
OpenGL 3.1 The First Triangle (WGL)
I do not know what the connection is between requesting a context version in GLUT and GLEW initialization (I often use GLEW, but creating the context is something I always do manually, with the platform-specific API), but obviously the pointers to the new OpenGL API are not initialized when you call glGenVertexArrays. That is the reason for your error: you try to call a function via a pointer which is NULL.
I generally agree with Sga's answer recommending glewExperimental=GL_TRUE before calling glewInit(); GLEW will fail to initialize in a core profile if this option is not set.
However, the fact that glewInit() does not fail implies that the OP did not get a core profile at all (or that GLEW has finally been fixed, but that is more of a theoretical possibility).
I already had a look into freeglut's implementation of the (freeglut-specific) glutInitContextVersion API for the question "Where is the documentation for glutInitContextVersion?", and the conclusions from back then might be helpful in this case. To quote myself:
From looking into the code (for the current stable version 2.8.1), one
will see that freeglut implements the following logic: If the
implementation cannot fulfill the version constraints, but does
support the ARB_create_context extension, it will generate some error
and no context will be created. However, if a version is requested,
but the implementation does not even support the relevant extensions,
a legacy GL context is created, effectively ignoring the version
request completely.
So from the reported behavior, I deduce that the OP only got a legacy context, possibly even Microsoft's default OpenGL 1.1 implementation.
This also explains why glGenVertexArrays() is a NULL pointer even after glewInit() succeeded: the extension is not supported for this context.
You should check what glGetString(GL_VERSION) and glGetString(GL_RENDERER) actually return right after the glutCreateWindow() call. And depending on the output, you might consider checking/updating your graphics drivers, or you might learn that your GPU is just not capable of modern GL at all.
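For example (a sketch; the casts just keep printf happy with the GLubyte pointers that glGetString returns):
glutCreateWindow(argv[0]);
printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));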
For anyone who has seen my previous questions: after working through the Red Book for version 2.1, I am now moving on to version 4.3. (Hooray, you say, since many of you have been telling me to do this for ages.)
So, I am deep into Chapter 3, but I still haven't got Chapter 1's example program working.
I have two problems. (Actually three.) Firstly, it doesn't compile. Okay, so that's a problem, but it's kind of irrelevant considering the next two. Secondly, I don't exactly understand how it works or what it is trying to do, but we will get onto that.
Thirdly, it seems to me that the author of this code is a complete magician. I would suggest all sorts of tinkery-hackery are occurring here. This is most likely because of problem number 2, the fact that I don't understand what it is trying to do. The guys who wrote this book are, of course, not idiots, but bear with me; I will give an example.
Here is a section of code taken from the top of the main.cpp file. I will include the rest of the file later on, but for now:
enum VAO_IDs {
Triangles,
NumVAOs
};
If I understand correctly, this gives VAO_IDs::Triangles the value of 0 and VAO_IDs::NumVAOs the value of 1, since enums number from zero. (I hope I am correct here, or it will be embarrassing for me.)
A short while later, you can see this line:
GLuint VAOs[NumVAOs];
This declares an array containing one GLuint, due to the fact that NumVAOs is equal to 1. Now, firstly, shouldn't it be VAO_IDs::NumVAOs?
And secondly, why on earth has an enum been used in this way? I would never use an enum like that, for obvious reasons: you cannot have more than one entry with the same value, the values are not explicitly specified, etc...
Am I barking up the right tree here? It just doesn't make sense to do this... NumVAOs should simply have been a global, like this, surely? GLuint NumVAOs = 1; This is just abuse of the enum!
In fact, below this, the statement const GLuint NumVertices = 6; appears. This makes sense, doesn't it, because we can change the value 6 if we want to, but we cannot change NumVAOs to 0, for example, because Triangles is already set to 0. (Why is it in an enum? Seriously?)
Anyway, forget the enums... for now... Okay, so I made a big deal out of that, and that's the end of those problems. Any further comments I have are in the code below. You can ignore most of the GLFW stuff; it's essentially the same as GLUT.
// ----------------------------------------------------------------------------
//
// Triangles - First OpenGL 4.3 Program
//
// ----------------------------------------------------------------------------
#include <cstdlib>
#include <cstdint>
#include <cmath>
#include <stdio.h>
#include <iostream>
//#include <GL/gl.h>
//#include <GL/glu.h>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
/// OpenGL specific
#include "vgl.h"
#include "LoadShaders.h" // These are essentially empty files with some background work going on, nothing declared or defined which is relevant here
enum VAO_IDs {
Triangles,
NumVAOs
};
// So Triangles = 0, NumVAOs = 1
// WHY DO THIS?!
enum Buffer_IDs {
ArrayBuffer,
NumBuffers
};
enum Attrib_IDs {
vPosition = 0
}
// Please, please, please someone explain the enum thing to me, why are they using them instead of global -just- variables.
// (Yeah an enum is a variable, okay, but you know what I mean.)
GLuint VAOs[NumVAOs]; // Compile error: expected initializer before 'VAOs'
GLuint Buffers[NumBuffers]; // NumBuffers is hidden in an enum again, so it NumVAOs
const GLuint NumVertices = 6; // Why do something different here?
// ----------------------------------------------------------------------------
//
// Init
//
// ----------------------------------------------------------------------------
void init()
{
glGenVertexArrays(NumVAOs, VAOs); // Error: VAOs was not declared in this scope
glBindVertexArray(VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{ -0.90, -0.90 },
{ +0.85, -0.90 },
{ -0.90, +0.85 },
{ +0.90, -0.85 },
{ +0.90, +0.90 },
{ -0.85, +0.90 }
};
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, "triangles.vert" },
{ GL_FRAGMENT_SHADER, "triangles.frag" },
{ GL_NONE, nullptr }
};
GLuint program = LoadShaders(shaders);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
}
// ----------------------------------------------------------------------------
//
// Display
//
// ----------------------------------------------------------------------------
void display()
{
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAOs[Triangles]);
glDrawArrays(GL_TRIANGLES, 0, NumVertices); // Error VAOs not declared
glFlush();
}
// ----------------------------------------------------------------------------
//
// Main
//
// ----------------------------------------------------------------------------
void error_handle(int error, const char* description)
{
fputs(description, stderr);
}
void key_handle(GLFWwindow* window, int key, int scancode, int action, int mods)
{
if(key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
}
void handle_exit()
{
}
int main(int argc, char **argv)
{
// Setup exit function
atexit(handle_exit);
// GLFW Window Pointer
GLFWwindow* window;
// Setup error callback
glfwSetErrorCallback(error_handle);
// Init
if(!glfwInit())
{
exit(EXIT_FAILURE);
}
// Setup OpenGL
glClearColor(0.0, 0.0, 0.0, 0.0);
glEnable(GL_DEPTH_TEST);
// Set GLFW window hints
glfwWindowHint(GLFW_DEPTH_BITS, 32);
glfwWindowHint(GLFW_RED_BITS, 8);
glfwWindowHint(GLFW_GREEN_BITS, 8);
glfwWindowHint(GLFW_BLUE_BITS, 8);
glfwWindowHint(GLFW_ALPHA_BITS, 8);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
//glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, 1);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// Init GLEW
if(glewInit())
{
printf("GLEW init failure!\n", stderr);
exit(EXIT_FAILURE);
}
// Init OpenGL
init();
// Create Window
window = glfwCreateWindow(800, 600, "Window Title", nullptr, nullptr);
if(!window)
{
glfwTerminate();
return EXIT_FAILURE;
}
// Make current
glfwMakeContextCurrent(window);
// Set key callback
glfwSetKeyCallback(window, key_handle);
// Check OpenGL Version
char* version;
version = (char*)glGetString(GL_VERSION);
printf("OpenGL Application Running, Version: %s\n", version);
// Enter main loop
while(!glfwWindowShouldClose(window))
{
// Event polling
glfwPollEvents();
// OpenGL Rendering
// Setup OpenGL viewport and clear screen
float ratio;
int width, height;
glfwGetFramebufferSize(window, &width, &height);
ratio = (float)width / (float)height;
glViewport(0, 0, width, height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Setup projection
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, ratio, 0.1, 10.0);
// Render
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Swap Buffers
glfwSwapBuffers(window);
}
// Free glfw memory allocated for window
glfwDestroyWindow(window);
// Exit
glfwTerminate();
exit(EXIT_SUCCESS);
}
A very verbose question, I realize, but I thought it was important to explain why I think this is crazy code rather than just saying "I don't get it," as would be easy to do. Could someone please explain why these very clever people decided to do it this way, and why there are errors? (I can find nothing about this online.)
The author is using the automatic numbering property of enums to automatically update the definition for the NumVAOs and NumBuffers values. For example, when new VAO IDs are added to the enum the NumVAOs value will still be correct as long as it is listed last in the enum.
enum VAO_IDs {
Triangles,
Polygons,
Circles,
NumVAOs
};
Every C++ compiler supports this trick, however, so that is not what is breaking the build. The "expected initializer before 'VAOs'" error comes from the enum Attrib_IDs definition in the posted code, which is missing its terminating semicolon, so the compiler tries to parse the following GLuint VAOs[NumVAOs]; declaration as part of the enum.
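A sketch of the fix, using the question's own declarations:
enum Attrib_IDs {
    vPosition = 0
};                      // this semicolon is missing in the posted code
GLuint VAOs[NumVAOs];   // with it in place, this declares fine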