Getting black screen when using glCreateVertexArrays - c++

I am learning OpenGL right now. I bought a book called OpenGL SuperBible, but I haven't managed to configure the environment properly. I use GLFW 3.2 as the windowing toolkit (if that is what it is called) and GLEW 2.0.
I am trying to compile shaders and use them to draw on screen. According to the book, this should draw a triangle, but it doesn't. Instead, it shows only the clear background color set by glClearColor.
This is the code:
#include <iostream>
#include <GLFW\glfw3.h>
#include <GL\glew.h>
GLuint CompileShaders();
int main(void) {
// Initialise GLFW
if (!glfwInit()) {
fprintf(stderr, "Failed to initialize GLFW\n");
getchar();
return -1;
}
// Open a window and create its OpenGL context
GLFWwindow *window;
window = glfwCreateWindow(1024, 768, "Tutorial 01", NULL, NULL);
if (window == NULL) {
fprintf(stderr, "Failed to open GLFW window. If you have an Intel GPU, "
"they are not 3.3 compatible. Try the 2.1 version of the "
"tutorials.\n");
getchar();
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// Initialize GLEW
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
getchar();
glfwTerminate();
return -1;
}
// Ensure we can capture the escape key being pressed below
glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
GLuint RederingProgram = CompileShaders();
GLuint VertexArrayObject;
glCreateVertexArrays(1, &VertexArrayObject);
glBindVertexArray(VertexArrayObject);
int LoopCounter = 0;
do {
// Clear the screen. It's not mentioned before Tutorial 02, but it can cause
// flickering, so it's there nonetheless.
/*const GLfloat red[] = {
(float)sin(LoopCounter++ / 100.0f)*0.5f + 0.5f,
(float)cos(LoopCounter++ / 100.0f)*0.5f + 0.5f,
0.0f, 1.0f
};*/
// glClearBufferfv(GL_COLOR, 0, red);
// Draw nothing, see you in tutorial 2 !
glUseProgram(RederingProgram);
glDrawArrays(GL_TRIANGLES, 0, 3);
// Swap buffers
glfwSwapBuffers(window);
glfwPollEvents();
} // Check if the ESC key was pressed or the window was closed
while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0);
// Close OpenGL window and terminate GLFW
glfwTerminate();
return 0;
}
GLuint CompileShaders() {
GLuint VertexShader;
GLuint FragmentShader;
GLuint Program;
static const GLchar *VertexShaderSource[] = {
"#version 450 core "
" "
"\n",
" "
" \n",
"void main(void) "
" "
"\n",
"{ "
" "
" \n",
"const vec4 vertices[3] = vec4[3](vec4(0.25, -0.25, 0.5, 1.0),\n",
" "
"vec4(-0.25, -0.25, 0.5, 1.0),\n",
" "
"vec4(0.25, 0.25, 0.5, 1.0)); \n",
" gl_Position = vertices[gl_VertexID]; \n",
"} "
" "
" \n"};
static const GLchar *FragmentShaderSource[] = {
"#version 450 core "
" "
"\n",
" "
" \n",
"out vec4 color; \n",
" "
" \n",
"void main(void) "
" "
"\n",
"{ "
" "
" \n",
" color = vec4(0.0, 0.8, 1.0, 1.0); \n",
"} "
" "
" \n"};
// Create and compile vertex shader.
VertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(VertexShader, 1, VertexShaderSource, NULL);
glCompileShader(VertexShader);
// Create and compile fragment shader.
FragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(FragmentShader, 1, FragmentShaderSource, NULL);
glCompileShader(FragmentShader);
// Create program, attach shaders to it, and link it
Program = glCreateProgram();
glAttachShader(Program, VertexShader);
glAttachShader(Program, FragmentShader);
glLinkProgram(Program);
// Delete the shaders as the program has them now.
glDeleteShader(FragmentShader);
glDeleteShader(VertexShader);
return Program;
}
I am working in Visual Studio 2015. I have all the libraries needed to develop some OpenGL (I think), but something is wrong. Please help me. By the way, I know the glCreateVertexArrays() function exists only in OpenGL 4.5 and above; the book is based on OpenGL 4.5.
I will go crazy soon because there are no proper tutorials for beginners. People who have learned this are very ambitious people. I bow before them.

Your shaders shouldn't compile:
glShaderSource(VertexShader, 1, VertexShaderSource, NULL);
This tells the GL that it should expect an array of one GLchar pointer. However, your GLSL code is actually split into several individual strings (note the commas):
static const GLchar *VertexShaderSource[] = {
"...GLSL-code..."
"...GLSL-code..."
"...GLSL-code...", // <- this comma ends the first string vertexShaderSource[0]
"...GLSL-code..." // vertexShaderSource[1] starts here
[...]
There are two possible solutions:
Just remove those commas, so that your array consists of a single element pointing to the whole GLSL source as one string.
Tell the GL the truth about your data:
glShaderSource(..., sizeof(VertexShaderSource)/sizeof(VertexShaderSource[0]), VertexShaderSource, ...)
Apart from that, you should always query the compile and link status of your shader and program objects, and also query the shader compile and program link info logs. They will contain human-readable messages telling you why the compilation or link failed.
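A minimal sketch of that status check, assuming a loader (e.g. GLEW) is already initialized and Shader/Program are existing object handles from the code above:

```cpp
GLint ok = GL_FALSE;
glGetShaderiv(Shader, GL_COMPILE_STATUS, &ok);
if (ok != GL_TRUE) {
    GLchar log[1024];
    GLsizei len = 0;
    glGetShaderInfoLog(Shader, sizeof(log), &len, log);
    fprintf(stderr, "shader compilation failed:\n%.*s\n", (int)len, log);
}

glGetProgramiv(Program, GL_LINK_STATUS, &ok);
if (ok != GL_TRUE) {
    GLchar log[1024];
    GLsizei len = 0;
    glGetProgramInfoLog(Program, sizeof(log), &len, log);
    fprintf(stderr, "program linking failed:\n%.*s\n", (int)len, log);
}
```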

Related

GLFW - many errors in this code

So my university lecturer gave us this code and it doesn't work; it never has, and no one has been able to get it to work so far. Are we being stupid, or is our lecturer giving us broken material? I seriously can't figure this out and need help. I managed to fix many mistakes part way through, but after that the issues got harder and harder to solve, despite this being "100% working" code. Side note: all the directories are formatted correctly and the additional dependencies have all been set up correctly, to the best of my knowledge.
//First Shader Handling Program
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int _tmain(int argc, _TCHAR* argv[])
{
//Select the 4.3 core profile
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
//Start the OpenGL context and open a window using the //GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
//Load the OpenGL functions for C++ gl::exts::LoadTest didLoad = gl::sys::LoadFunctions(); if (!didLoad) {
//Load failed
fprintf(stderr, "ERROR: GLLoadGen failed to load functions\n");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load : %i.\n", didLoad.GetNumMissing());
//Tell OpenGL to only draw a pixel if its shape is closer to //the viewer
//i.e. Enable depth testing with smaller depth value //interpreted as being closer gl::Enable(gl::DEPTH_TEST); gl::DepthFunc(gl::LESS);
//Set up the vertices for a triangle
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
//Create a vertex buffer object to hold this data GLuint vbo=0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, 9 * sizeof(float), points,
gl::STATIC_DRAW);
//Create a vertex array object
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, FALSE, 0, NULL);
//The shader code strings which later we will put in //separate files
//The Vertex Shader
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
//The Fragment Shader
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 0.5, 0.0, 1.0);"
"}";
//Load the strings into shader objects and compile GLuint vs = gl::CreateShader(gl::VERTEX_SHADER); gl::ShaderSource(vs, 1, &vertex_shader, NULL); gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER); gl::ShaderSource(fs, 1, &fragment_shader, NULL); gl::CompileShader(fs);
//Compiled shaders must be compiled into a single executable //GPU shader program
//Create empty program and attach shaders GLuint shader_program = gl::CreateProgram(); gl::AttachShader(shader_program, fs); gl::AttachShader(shader_program, vs); gl::LinkProgram(shader_program);
//Now draw
while (!glfwWindowShouldClose(window)) {
//Clear the drawing surface
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shader_program);
gl::BindVertexArray(vao);
//Draw point 0 to 3 from the currently bound VAO with
//current in-use shader
gl::DrawArrays(gl::TRIANGLES, 0, 3);
//update GLFW event handling
glfwPollEvents();
//Put the stuff we have been drawing onto the display glfwSwapBuffers(window);
}
//Close GLFW and end
glfwTerminate();
return 0;
}
Your line endings seem to have been mangled.
There are multiple lines in your code where the code was not broken onto a new line, so that it now sits on the same line as a comment and is therefore not executed. This is your program with proper line endings:
//First Shader Handling Program
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int _tmain(int argc, _TCHAR* argv[])
{
//Select the 4.3 core profile
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
//Start the OpenGL context and open a window using the
//GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
//Load the OpenGL functions for C++
gl::exts::LoadTest didLoad = gl::sys::LoadFunctions();
if (!didLoad) {
//Load failed
fprintf(stderr, "ERROR: GLLoadGen failed to load functions\n");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load : %i.\n", didLoad.GetNumMissing());
//Tell OpenGL to only draw a pixel if its shape is closer to
//the viewer
//i.e. Enable depth testing with smaller depth value
//interpreted as being closer
gl::Enable(gl::DEPTH_TEST);
gl::DepthFunc(gl::LESS);
//Set up the vertices for a triangle
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
//Create a vertex buffer object to hold this data
GLuint vbo=0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, 9 * sizeof(float), points, gl::STATIC_DRAW);
//Create a vertex array object
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, FALSE, 0, NULL);
//The shader code strings which later we will put in
//separate files
//The Vertex Shader
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
//The Fragment Shader
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 0.5, 0.0, 1.0);"
"}";
//Load the strings into shader objects and compile
GLuint vs = gl::CreateShader(gl::VERTEX_SHADER);
gl::ShaderSource(vs, 1, &vertex_shader, NULL);
gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER);
gl::ShaderSource(fs, 1, &fragment_shader, NULL);
gl::CompileShader(fs);
//Compiled shaders must be compiled into a single executable
//GPU shader program
//Create empty program and attach shaders
GLuint shader_program = gl::CreateProgram();
gl::AttachShader(shader_program, fs);
gl::AttachShader(shader_program, vs);
gl::LinkProgram(shader_program);
//Now draw
while (!glfwWindowShouldClose(window)) {
//Clear the drawing surface
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shader_program);
gl::BindVertexArray(vao);
//Draw point 0 to 3 from the currently bound VAO with
//current in-use shader
gl::DrawArrays(gl::TRIANGLES, 0, 3);
//update GLFW event handling
glfwPollEvents();
//Put the stuff we have been drawing onto the display
glfwSwapBuffers(window);
}
//Close GLFW and end
glfwTerminate();
return 0;
}

OpenGL program won't execute properly if an explicit version is set

My computer runs Ubuntu 16.04 and is equipped with an Nvidia GeForce GT 630M graphics card with a proprietary driver installed. The glGetString(GL_VERSION) function shows that, by default, my graphics card supports OpenGL 4.5.
I have been following the Learn OpenGL tutorial series and I have the following difficulty: I can only get the tutorial's "Hello Triangle" program to run properly if I comment out the lines
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
Leaving those lines as-is will prevent the triangle from appearing.
I am having trouble understanding why setting a required OpenGL version lower than the OpenGL version my card can support would make the program fail.
EDIT: the commands
std::cout << "Renderer: " << glGetString(GL_RENDERER) << std::endl;
std::cout << "Version: " << glGetString(GL_VERSION) << std::endl;
std::cout << "Shading Language: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
output
Renderer: GeForce GT 630M/PCIe/SSE2
Version: 4.5.0 NVIDIA 361.42
Shading Language: 4.50 NVIDIA
if those lines are commented out, and
Renderer: GeForce GT 630M/PCIe/SSE2
Version: 3.3.0 NVIDIA 361.42
Shading Language: 3.30 NVIDIA via Cg compiler
if those lines are left in place.
EDIT2: Here's the actual source code:
#include <array>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
constexpr char FRAGMENT_SHADER_SOURCE_FILE[] = "simple_fragment.shader";
constexpr char VERTEX_SHADER_SOURCE_FILE[] = "simple_vertex.shader";
constexpr int WINDOW_WIDTH = 800;
constexpr int WINDOW_HEIGHT = 800;
constexpr char WINDOW_TITLE[] = "Triangle";
constexpr std::array<GLfloat, 4> bgColour { 0.3f, 0.1f, 0.3f, 1.0f };
/*
* Instructs GLFW to close window if escape key is pressed.
*/
void keyCallback(GLFWwindow *window, int key, int scancode, int action, int mode);
int main() {
// Start GLFW.
if (not glfwInit()) {
std::cerr << "ERROR: Failed to start GLFW.\n";
return 1;
}
// Set OpenGL version.
//glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
//glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
//glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// Create window and bind to current contex.
GLFWwindow *window = glfwCreateWindow(WINDOW_WIDTH, WINDOW_HEIGHT, WINDOW_TITLE, nullptr,
nullptr);
if (not window) {
std::cerr << "ERROR: Failed to create GLFW window.\n";
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
// Set keyboard callback functions.
glfwSetKeyCallback(window, keyCallback);
// Initialize GLEW with experimental features turned on.
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) {
std::cerr << "ERROR: Failed to start GLEW.\n";
glfwTerminate();
return 1;
}
// Create viewport coordinate system.
int width, height;
glfwGetFramebufferSize(window, &width, &height);
glViewport(0, 0, static_cast<GLsizei>(width), static_cast<GLsizei>(height));
// Create a vertex shader object.
GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
// Load the vertex shader source code.
std::string vertexShaderSource;
std::ifstream vsfs(VERTEX_SHADER_SOURCE_FILE);
if (vsfs.is_open()) {
std::stringstream ss;
ss << vsfs.rdbuf();
vertexShaderSource = ss.str();
}
else {
std::cerr << "ERROR: File " << VERTEX_SHADER_SOURCE_FILE << " could not be found.\n";
glfwTerminate();
return 1;
}
// Attach the shader source code to the vertex shader object and compile.
const char *vertexShaderSource_cstr = vertexShaderSource.c_str();
glShaderSource(vertexShader, 1, &vertexShaderSource_cstr, nullptr);
glCompileShader(vertexShader);
// Check if compilation was successful.
GLint success;
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &success);
if (not success) {
std::cerr << "ERROR: Vertex shader compilation failed.\n";
glfwTerminate();
return 1;
}
// Create a fragment shader object.
GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
// Load the fragment shader source code.
std::string fragmentShaderSource;
std::ifstream fsfs(FRAGMENT_SHADER_SOURCE_FILE);
if (fsfs.is_open()) {
std::stringstream ss;
ss << fsfs.rdbuf();
fragmentShaderSource = ss.str();
}
else {
std::cerr << "ERROR: File " << FRAGMENT_SHADER_SOURCE_FILE << " could not be found.\n";
glfwTerminate();
return 1;
}
// Attach the shader source code to the fragment shader object and compile.
const char *fragmentShaderSource_cstr = fragmentShaderSource.c_str();
glShaderSource(fragmentShader, 1, &fragmentShaderSource_cstr, nullptr);
glCompileShader(fragmentShader);
// Check if compilation was successful.
glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &success);
if (not success) {
std::cerr << "ERROR: Fragment shader compilation failed.\n";
glfwTerminate();
return 1;
}
// Create a shader program by linking the vertex and fragment shaders.
GLuint shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
glLinkProgram(shaderProgram);
// Check that shader program was successfully linked.
glGetProgramiv(shaderProgram, GL_LINK_STATUS, &success);
if (not success) {
std::cerr << "ERROR: Shader program linking failed.\n";
glfwTerminate();
return 1;
}
// Delete shader objects.
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);
// Coordinates of triangle vertices in Normalized Device Coordinates (NDC).
std::array<GLfloat, 9> vertices {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f
};
// Create a vertex array object.
GLuint vao;
glGenBuffers(1, &vao);
glBindVertexArray(vao);
// Create a vertex buffer object.
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Pass vertex data into currently bound vertex buffer object.
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices.data(), GL_STATIC_DRAW);
// Create vertex attribute.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), static_cast<GLvoid*>(0));
glEnableVertexAttribArray(0);
// Unbind the vertex array object and vertex buffer object.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glClearColor(bgColour[0], bgColour[1], bgColour[2], bgColour[3]);
while (not glfwWindowShouldClose(window)) {
glClear(GL_COLOR_BUFFER_BIT);
glfwPollEvents();
// Inform OpenGL to use the shader program created above.
glUseProgram(shaderProgram);
// Bind the vertex array object.
glBindVertexArray(vao);
// Draw the triangle.
glDrawArrays(GL_TRIANGLES, 0, 3);
// Unbind the vertex array object.
glBindVertexArray(0);
glfwSwapBuffers(window);
}
// Delete vertex array object.
glDeleteVertexArrays(1, &vao);
// Delete vertex buffer object.
glDeleteBuffers(1, &vbo);
// Delete shader program.
glDeleteProgram(shaderProgram);
glfwDestroyWindow(window);
glfwTerminate();
return 0;
}
void keyCallback(GLFWwindow *window, int key, int scancode, int action, int mode) {
if (key == GLFW_KEY_ESCAPE and action == GLFW_PRESS) {
glfwSetWindowShouldClose(window, GL_TRUE);
}
}
Here are the contents of simple_vertex.shader and simple_fragment.shader:
#version 330 core
layout (location = 0) in vec3 position;
void main() {
gl_Position = vec4(position.x, position.y, position.z, 1.0);
}
and
#version 330 core
out vec4 color;
void main() {
color = vec4(1.0f, 0.5f, 0.2f, 1.0f);
}
I made a typo in my code.
I used glGenBuffers instead of glGenVertexArrays to create my vertex array object. Apparently the Nvidia driver accepts this unless I specify an OpenGL version. I still find it puzzling, but at least the problem is fixed.
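For reference, the corrected creation code looks like this (a sketch using the same vao/vbo names as in the code above):

```cpp
GLuint vao = 0;
glGenVertexArrays(1, &vao);  // was glGenBuffers(1, &vao), the typo
glBindVertexArray(vao);

GLuint vbo = 0;
glGenBuffers(1, &vbo);       // buffer objects are still created with glGenBuffers
glBindBuffer(GL_ARRAY_BUFFER, vbo);
```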

Nothing is showing up in OpenGL

Well, I have been working in Java and C++ for a while, but I'm new to OpenGL, so I started using a library called GLFW. I have been following a book called "OpenGL SuperBible, 6th Edition", but using GLFW. The problem is that I have rechecked everything and watched other tutorials, and my code seems to be all right, but nothing from the shaders renders. I don't know if the part where I declare the shader source is okay, or even a valid form.
Thank you for even reading this :)
NOTE:
I know it will render only a point, but I enlarged it with glPointSize(40.0f).
#include <GL/glew.h>
#define GLFW_DLL
#include <GLFW/glfw3.h>
#include <stdio.h>
#include <iostream>
#include "jelly/lua_manager.h"
#include "jelly/keysManager.h"
jelly::keys_buttons::KeysManager km;
GLuint vertex_array_obj;
GLuint program;
GLuint startRender(GLFWwindow* window)
{
GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
std::cout << "ASD" << std::endl;
static const GLchar * fragmentShader_src[] =
{
"#version 430 core \n"
" \n"
"void main(void) \n"
"{ \n"
" gl_Position = vec4(0, 0.5, 0.0, 1); \n"
"} \n"
};
static const GLchar * vertexShader_src[] =
{
"#version 430 core \n"
" \n"
"out vec4 color; \n"
" \n"
"void main(void) \n"
"{ \n"
" color = vec4(0.0, 0.8, 1.0, 1.0); \n"
"} \n"
};
glShaderSource(vertexShader, 1, vertexShader_src, NULL);
glCompileShader(vertexShader);
glShaderSource(fragmentShader, 1, fragmentShader_src, NULL);
glCompileShader(fragmentShader);
GLuint tprogram = glCreateProgram();
glAttachShader(tprogram, vertexShader);
glAttachShader(tprogram, fragmentShader);
glLinkProgram(tprogram);
glValidateProgram(tprogram);
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);
glGenVertexArrays(1, &vertex_array_obj);
glBindVertexArray(vertex_array_obj);
return tprogram;
}
void render(GLFWwindow* window)
{
glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_POINTS, 0, 1);
glPointSize(40.0f);
}
void mouseCallback(GLFWwindow* window, int button, int action, int mods)
{
km.mouseClick(button, action, mods);
}
void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
km.keyPressed(key, action, mods);
}
int main()
{
jelly::lua::LuaManager lm;
// 0 = Build | 1 = Release | 2 = Alpha | 3 = Beta
int buildType = 0;
std::string title = "Relieved";
if (buildType != 1)
{
switch (buildType) {
case 0 :
title += " | Build Version";
break;
case 2 :
title += " | Alpha Version";
break;
case 3 :
title += " | Beta Version";
break;
default :
break;
}
}
GLFWwindow* window;
if (!glfwInit()) {
glfwTerminate();
return -1;
}
window = glfwCreateWindow(640, 400, title.c_str(), NULL, NULL);
if (!window) {
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
glewExperimental = GL_TRUE;
glewInit ();
program = startRender(window);
glUseProgram(program);
glfwSetKeyCallback(window, keyCallback);
glfwSetMouseButtonCallback(window, mouseCallback);
while(!glfwWindowShouldClose(window))
{
render(window);
glfwSwapBuffers(window);
glfwPollEvents();
}
glDeleteVertexArrays(1, &vertex_array_obj);
glDeleteProgram(program);
glDeleteVertexArrays(1, &vertex_array_obj);
glfwTerminate();
return 0;
}
The two variables containing the shaders' sources are named incorrectly: you've put the vertex source into fragmentShader_src and the fragment source into vertexShader_src.
You would easily have found the error if you had checked the shader compilation and linking status. You should add the appropriate ifs and print the info logs if shader compilation or linking fails.
Also, you're missing an explicit OpenGL version selection. You should ask GLFW to give you an 'OpenGL 4.3 compatibility profile' context ('core profile' works too if you don't need any deprecated features). Check the GLFW docs for how to do that.
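For reference, requesting a specific version with GLFW looks roughly like this (a sketch; the hints must be set after glfwInit() and before glfwCreateWindow()):

```cpp
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
// GLFW_OPENGL_COMPAT_PROFILE keeps deprecated features available;
// use GLFW_OPENGL_CORE_PROFILE if you don't need them.
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
GLFWwindow *window = glfwCreateWindow(640, 400, title.c_str(), NULL, NULL);
```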

GLSL getting odd values back from my uniform and it seems to be set with the wrong value too

I'm having problems using a uniform in a vertex shader.
Here's the code:
// gcc main.c -o main `pkg-config --libs --cflags glfw3` -lGL -lm
#include <GLFW/glfw3.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
void gluErrorString(const char* why,GLenum errorCode);
void checkShader(GLuint status, GLuint shader, const char* which);
float verts[] = {
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
const char* vertex_shader =
"#version 330\n"
"in vec3 vp;\n"
"uniform float u_time;\n"
"\n"
"void main () {\n"
" vec4 p = vec4(vp, 1.0);\n"
" p.x = p.x + u_time;\n"
" gl_Position = p;\n"
"}";
const char* fragment_shader =
"#version 330\n"
"out vec4 frag_colour;\n"
"void main () {\n"
" frag_colour = vec4 (0.5, 0.0, 0.5, 1.0);\n"
"}";
int main () {
if (!glfwInit ()) {
fprintf (stderr, "ERROR: could not start GLFW3\n");
return 1;
}
glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
//glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow (640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
fprintf (stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent (window);
// vert arrays group vert buffers together unlike GLES2 (no vert arrays)
// we *must* have one of these even if we only need 1 vert buffer
GLuint vao = 0;
glGenVertexArrays (1, &vao);
glBindVertexArray (vao);
GLuint vbo = 0;
glGenBuffers (1, &vbo);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
// each vert takes 3 float * 4 verts in the fan = 12 floats
glBufferData (GL_ARRAY_BUFFER, 12 * sizeof (float), verts, GL_STATIC_DRAW);
gluErrorString("buffer data",glGetError());
glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
// 3 components per vert
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
gluErrorString("attrib pointer",glGetError());
GLuint vs = glCreateShader (GL_VERTEX_SHADER);
glShaderSource (vs, 1, &vertex_shader, NULL);
glCompileShader (vs);
GLint success = 0;
glGetShaderiv(vs, GL_COMPILE_STATUS, &success);
checkShader(success, vs, "Vert Shader");
GLuint fs = glCreateShader (GL_FRAGMENT_SHADER);
glShaderSource (fs, 1, &fragment_shader, NULL);
glCompileShader (fs);
glGetShaderiv(fs, GL_COMPILE_STATUS, &success);
checkShader(success, fs, "Frag Shader");
GLuint shader_program = glCreateProgram ();
glAttachShader (shader_program, fs);
glAttachShader (shader_program, vs);
glLinkProgram (shader_program);
gluErrorString("Link prog",glGetError());
glUseProgram (shader_program);
gluErrorString("use prog",glGetError());
GLuint uniT = glGetUniformLocation(shader_program,"u_time"); // ask gl to assign uniform id
gluErrorString("get uniform location",glGetError());
printf("uniT=%i\n",uniT);
glEnable (GL_DEPTH_TEST);
glDepthFunc (GL_LESS);
float t=0;
while (!glfwWindowShouldClose (window)) {
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gluErrorString("clear",glGetError());
glUseProgram (shader_program);
gluErrorString("use prog",glGetError());
t=t+0.01f;
glUniform1f( uniT, (GLfloat)sin(t));
gluErrorString("set uniform",glGetError());
float val;
glGetUniformfv(shader_program, uniT, &val);
gluErrorString("get uniform",glGetError());
printf("val=%f ",val);
glBindVertexArray (vao);
gluErrorString("bind array",glGetError());
glDrawArrays (GL_TRIANGLE_FAN, 0, 4);
gluErrorString("draw arrays",glGetError());
glfwPollEvents ();
glfwSwapBuffers (window);
gluErrorString("swap buffers",glGetError());
}
glfwTerminate();
return 0;
}
void checkShader(GLuint status, GLuint shader, const char* which) {
if (status==GL_TRUE) return;
int length;
char buffer[1024];
glGetShaderInfoLog(shader, sizeof(buffer), &length, buffer);
fprintf (stderr,"%s Error: %s\n", which,buffer);
glfwTerminate();
exit(-1);
}
struct token_string
{
GLuint Token;
const char *String;
};
static const struct token_string Errors[] = {
{ GL_NO_ERROR, "no error" },
{ GL_INVALID_ENUM, "invalid enumerant" },
{ GL_INVALID_VALUE, "invalid value" },
{ GL_INVALID_OPERATION, "invalid operation" },
{ GL_STACK_OVERFLOW, "stack overflow" },
{ GL_STACK_UNDERFLOW, "stack underflow" },
{ GL_OUT_OF_MEMORY, "out of memory" },
{ GL_TABLE_TOO_LARGE, "table too large" },
#ifdef GL_EXT_framebuffer_object
{ GL_INVALID_FRAMEBUFFER_OPERATION_EXT, "invalid framebuffer operation" },
#endif
{ ~0, NULL } /* end of list indicator */
};
void gluErrorString(const char* why,GLenum errorCode)
{
if (errorCode== GL_NO_ERROR) return;
int i;
for (i = 0; Errors[i].String; i++) {
if (Errors[i].Token == errorCode) {
fprintf (stderr,"error: %s - %s\n",why,Errors[i].String);
glfwTerminate();
exit(-1);
}
}
}
When the code runs, the quad flickers as if the uniform were getting junk values, and reading the uniform back shows odd values like 36893488147419103232.000000 where there should just be a simple sine value.
The problem with your code is only indirectly related to GL at all - your GL code is OK.
However, you are using modern OpenGL functions without loading the function pointers as an extension. This might work on some platforms, but not on others. MacOS does guarantee that these functions are exported in the system's GL libs. On Windows, Microsoft's opengl32.dll never contains functions beyond GL 1.1, so your code wouldn't even link there. On Linux, you're somewhere in between. There is only the old Linux OpenGL ABI document, which guarantees that OpenGL 1.2 functions must be exported by the library. In practice, most GL implementations' libs on Linux export everything (but the fact that a function is exported does not mean that it is supported). You should never link these functions directly, because nobody guarantees anything.
However, the story does not end here: you apparently did this on an implementation which does export the symbols, but you did not include the correct headers, and you have set up your compiler very poorly. In C, it is valid (but poor style) to call a function which has not been declared before. The compiler will assume that it returns int and that all its parameters are ints. In effect, you are calling these functions, but the compiler converts the arguments to int.
You would have noticed this if you had set up your compiler to produce warnings, e.g. -Wall on gcc:
a.c: In function ‘main’:
a.c:74: warning: implicit declaration of function ‘glGenVertexArrays’
a.c:75: warning: implicit declaration of function ‘glBindVertexArray’
[...]
However, the code compiles and links, and I can reproduce the results you described (I'm using Linux/Nvidia here).
To fix this, you should use an OpenGL loader library. For example, I got your code working with GLEW. All I had to do was to add the following at the very top of the file:
#define GLEW_NO_GLU // because you re-implemented some glu-like functions with a different interface
#include <glew.h>
and to call:
glewExperimental=GL_TRUE;
if (glewInit() != GLEW_OK) {
fprintf (stderr, "ERROR: failed to initialize GLEW\n");
glfwTerminate();
return 1;
}
glGetError(); // read away error generated by GLEW, it is broken in core profiles...
The GLEW headers include declarations for all the functions, so no implicit type conversions occur any more. GLEW might not be the best choice for core profiles, but I used it simply because it's the loader I'm most familiar with.

Why won't CG shaders work with GL 3.2?

I've tried everything to get OpenGL 3.2 to render with Cg shaders in my game engine, but I have had no luck. So I decided to make a bare minimal project, but the shaders still won't work. In theory, my test project should render a red triangle, but the triangle is white because the shader is not doing anything.
I'll post the code here:
#include <stdio.h>
#include <stdlib.h>
#include <vector>
#include <string>
#include <GL/glew.h>
#include <Cg/cg.h>
#include <Cg/cgGL.h>
#include <SDL2/SDL.h>
int main()
{
    SDL_Window *mainwindow;
    SDL_GLContext maincontext;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
    mainwindow = SDL_CreateWindow("Test", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                  512, 512, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
    maincontext = SDL_GL_CreateContext(mainwindow);
    glewExperimental = GL_TRUE;
    glewInit();
    CGcontext cgcontext;
    cgcontext = cgCreateContext();
    cgGLRegisterStates(cgcontext);
    CGerror error;
    CGeffect effect;
    const char* string;
    std::string shader;
    shader =
        "struct VS_INPUT"
        "{"
        "    float3 pos : ATTR0;"
        "};"
        "struct FS_INPUT"
        "{"
        "    float4 pos : POSITION;"
        "    float2 tex : TEXCOORD0;"
        "};"
        "struct FS_OUTPUT"
        "{"
        "    float4 color : COLOR;"
        "};"
        "FS_INPUT VS( VS_INPUT In )"
        "{"
        "    FS_INPUT Out;"
        "    Out.pos = float4( In.pos, 1.0f );"
        "    Out.tex = float2( 0.0f, 0.0f );"
        "    return Out;"
        "}"
        "FS_OUTPUT FS( FS_INPUT In )"
        "{"
        "    FS_OUTPUT Out;"
        "    Out.color = float4(1.0f, 0.0f, 0.0f, 1.0f);"
        "    return Out;"
        "}"
        "technique t0"
        "{"
        "    pass p0"
        "    {"
        "        VertexProgram = compile gp4vp VS();"
        "        FragmentProgram = compile gp4fp FS();"
        "    }"
        "}";
    effect = cgCreateEffect(cgcontext, shader.c_str(), NULL);
    error = cgGetError();
    if (error)
    {
        string = cgGetLastListing(cgcontext);
        fprintf(stderr, "Shader compiler: %s\n", string);
    }
    glClearColor(0.0, 0.0, 1.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    float* vert = new float[9];
    vert[0] =  0.0; vert[1] =  0.5; vert[2] = -1.0;
    vert[3] = -1.0; vert[4] = -0.5; vert[5] = -1.0;
    vert[6] =  1.0; vert[7] = -0.5; vert[8] = -1.0;
    unsigned int m_vaoID;
    unsigned int m_vboID;
    glGenVertexArrays(1, &m_vaoID);
    glBindVertexArray(m_vaoID);
    glGenBuffers(1, &m_vboID);
    glBindBuffer(GL_ARRAY_BUFFER, m_vboID);
    glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(GLfloat), vert, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
    CGtechnique tech = cgGetFirstTechnique(effect);
    CGpass pass = cgGetFirstPass(tech);
    while (pass)
    {
        cgSetPassState(pass);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        cgResetPassState(pass);
        pass = cgGetNextPass(pass);
    }
    glDisableVertexAttribArray(0);
    glBindVertexArray(0);
    delete[] vert;
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDeleteBuffers(1, &m_vboID);
    glDeleteVertexArrays(1, &m_vaoID);
    SDL_GL_SwapWindow(mainwindow);
    SDL_Delay(2000);
    SDL_GL_DeleteContext(maincontext);
    SDL_DestroyWindow(mainwindow);
    SDL_Quit();
    return 0;
}
What am I doing wrong?
I compiled the code and got the same result, so I added a Cg error handler to get a bit more information:
void errorHandler(CGcontext context, CGerror error, void *appdata) {
    fprintf(stderr, "%s\n", cgGetErrorString(error));
}
...
cgSetErrorHandler(&errorHandler, NULL);
When cgSetPassState and cgResetPassState were called I got the following error message:
Technique did not pass validation.
Not really very informative, of course. So I used GLIntercept to trace all OpenGL calls to a log file.
This time, when glewInit was called I got the following error message in the log file:
glGetString(GL_EXTENSIONS)=NULL glGetError() = GL_INVALID_ENUM
According to the OpenGL documentation, glGetString must not be called with GL_EXTENSIONS: that usage was deprecated in 3.0 and removed from core profiles, and glGetStringi must be used instead.
Finally, I found the issue in the GLEW library: http://sourceforge.net/p/glew/bugs/120/
I removed the GLEW dependency and tested with gl3.h (and the more recent glcorearb.h). I got the same error, but this time when cgGLRegisterStates was called.
I also tried Cg's trace.dll, only to get the same error (7939 = 0x1F03 = GL_EXTENSIONS):
glGetString
{
input:
name = 7939
output:
return = NULL
}
Then I tested OpenGL 3.1 (SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);) and found that it worked fine:
glGetString(GL_EXTENSIONS)="GL_AMD_multi_draw_indirec..."
That is, the 3.1 context was backward compatible with previous OpenGL versions, but the 3.2 context was not.
After a bit of digging on the Internet, I found that you can create this kind of backward-compatible OpenGL context with SDL by adding this line to the code:
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
IMHO, the Cg Toolkit requires this kind of compatibility profile:
"Cg 3.1 context does not yet support forward-compatible OpenGL contexts!"
Source:
http://3dgep.com/introduction-to-shader-programming-with-cg-3-1/
As the Cg project seems to be abandoned, a fix is not likely to happen.