How to use OpenGL ARB_gl_spirv extension? - opengl

I would like to compile my GLSL shaders to SPIR-V binaries, and use that in my OpenGL project.
I've found glslang, which I can use to compile the GLSL shaders to SPIR-V. But I haven't found any tutorial on how to use the binaries in my C++ project. How do I load them and create shader programs from them?

Load the SPIR-V binary just like you would load any other binary file in C++. Then, instead of supplying GLSL source and compiling it, call glShaderBinary and glSpecializeShader:
GLuint vertexShader = glCreateShader( GL_VERTEX_SHADER );
glShaderBinary( 1, &vertexShader, GL_SHADER_BINARY_FORMAT_SPIR_V_ARB, vertexData, sizeof( vertexData ) ); // vertexData is the SPIR-V file contents
glSpecializeShader( vertexShader, "main", 0, nullptr, nullptr );
glAttachShader( program, vertexShader );
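For example, a minimal sketch of reading the file and building a program could look like this. The file name, the reading helper and the program setup around it are my own additions, not part of the original answer; it assumes an OpenGL 4.6 context or ARB_gl_spirv support:
#include <fstream>
#include <vector>
// Read a whole binary file into memory (hypothetical helper).
std::vector<char> LoadBinaryFile( const char* path )
{
    std::ifstream file( path, std::ios::binary | std::ios::ate );
    std::vector<char> data( static_cast<size_t>( file.tellg() ) );
    file.seekg( 0 );
    file.read( data.data(), data.size() );
    return data;
}
// Somewhere in your initialization code:
std::vector<char> vertexData = LoadBinaryFile( "shader.vert.spv" );
GLuint vertexShader = glCreateShader( GL_VERTEX_SHADER );
glShaderBinary( 1, &vertexShader, GL_SHADER_BINARY_FORMAT_SPIR_V_ARB,
                vertexData.data(), static_cast<GLsizei>( vertexData.size() ) );
glSpecializeShader( vertexShader, "main", 0, nullptr, nullptr );
GLuint program = glCreateProgram();
glAttachShader( program, vertexShader );
// ... load, specialize and attach the fragment shader the same way, then:
glLinkProgram( program );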

Related

Vertex Shader Compile Error while trying to pass the info from V.Shader->G.Shader->F.Shader using Interface block [duplicate]

Initializing GL_List for processing.
glBegin(GL_POINTS);
for (i = 0; i < faceNum; i++)
{
mesh->GetFaceNodes(i + 1, ver[0], ver[1], ver[2], ver[3]);
glVertex4i(ver[0] - 1, ver[1] - 1, ver[2] - 1, i+1);
}
glEnd();
glEndList();
The vertex shader gives me a compilation error, and I don't know why.
Vertex Shader:
varying out float final;
void main( void )
{
final = float(gl_Vertex.w); // want to pass this face index on to
}
Geometry Shader:
varying in float final[];
varying out float final_tofrag;
final_tofrag=final[0];
//Doing other calculations here they are totally different
Fragment Shader:
varying in float final_tofrag;
void main( void )
{
if (color.z<0.0)
gl_FragData[0] = vec4(float(final_frag),float(final_frag), -(gl_FragCoord.z), 0); // want to check that value of the final(gl_vertex.w) is being passed from vertex shader to fragment shader or not. Its giving me 0.00000;
else
gl_FragData[0] = vec4(float(final_frag), float(final_frag), gl_FragCoord.z, 0);
}
It would help if you actually posted the compilation error in your question, otherwise we don't know what your error is.
So, since I'm taking random guesses in the dark, I'll make a couple of guesses here.
You are assigning a float to an integer, which might be giving you a conversion error.
// this might now compile, but it will probably only ever give you
// zero or one. Was that the intent?
final = int(gl_Vertex.w);
You are NOT writing to gl_Position within your vertex shader. If you don't write to that value, OpenGL cannot execute your vertex shader.
In your fragment shader, you are checking the value color.z, but you have not declared color as a uniform, shader input, or const.
Whilst this won't cause a compilation error, dividing final (an integer whose value is 1 or 0?), by an integer value of 100 or 1000 is only ever going to give you zero or one. Was the intention to use final as a float rather than integer?
You are mixing integers and floats within the vec4 declaration in your fragment shader. This might be causing the compiler to baulk.
Unfortunately, without access to the GLSL error log, there isn't going to be anything anyone can do to identify your problem beyond what I've listed above.
Since the shader does not contain any version information, it is an OpenGL Shading Language 1.10 Specification shader.
In GLSL 1.10, varying variables of type int are not allowed, and implicit casts from int to float are not supported.
In GLSL 1.10 there are no in and out variables; the keyword for interface variables is varying.
Furthermore, the variable color is not defined in the fragment shader.
varying float final;
void main( void )
{
final = gl_Vertex.w;
// [...]
}
varying float final_tofrag;
void main( void )
{
if (final_tofrag < 0.0) // ?
gl_FragData[0] = vec4(final_tofrag, final_tofrag, -gl_FragCoord.z, 0.0);
else
gl_FragData[0] = vec4(final_tofrag, final_tofrag, gl_FragCoord.z, 0.0);
}
I recommend checking whether the shader compilation succeeded and whether the program object linked successfully.
Whether the compilation of a shader succeeded can be checked with glGetShaderiv and the parameter GL_COMPILE_STATUS, e.g.:
#include <iostream>
#include <vector>
bool CompileStatus( GLuint shader )
{
GLint status = GL_TRUE;
glGetShaderiv( shader, GL_COMPILE_STATUS, &status );
if (status == GL_FALSE)
{
GLint logLen;
glGetShaderiv( shader, GL_INFO_LOG_LENGTH, &logLen );
std::vector< char >log( logLen );
GLsizei written;
glGetShaderInfoLog( shader, logLen, &written, log.data() );
std::cout << "compile error:" << std::endl << log.data() << std::endl;
}
return status != GL_FALSE;
}
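A short usage sketch, assuming shaderSource points to your GLSL source string:
GLuint shader = glCreateShader( GL_FRAGMENT_SHADER );
glShaderSource( shader, 1, &shaderSource, nullptr );
glCompileShader( shader );
if ( !CompileStatus( shader ) )
    return; // the compile log has already been printed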
Whether the linking of a program was successful can be checked with glGetProgramiv and the parameter GL_LINK_STATUS, e.g.:
bool LinkStatus( GLuint program )
{
GLint status = GL_TRUE;
glGetProgramiv( program, GL_LINK_STATUS, &status );
if (status == GL_FALSE)
{
GLint logLen;
glGetProgramiv( program, GL_INFO_LOG_LENGTH, &logLen );
std::vector< char >log( logLen );
GLsizei written;
glGetProgramInfoLog( program, logLen, &written, log.data() );
std::cout << "link error:" << std::endl << log.data() << std::endl;
}
return status != GL_FALSE;
}
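And the corresponding usage for linking, assuming vertShader and fragShader were compiled and checked as above:
GLuint program = glCreateProgram();
glAttachShader( program, vertShader );
glAttachShader( program, fragShader );
glLinkProgram( program );
if ( !LinkStatus( program ) )
    return; // the link log has already been printed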
The code in the question does not make any sense. The GLSL keyword varying was chosen because it was meant to reflect the property that the data will be different for each fragment, due to the automatic interpolation across the primitive. This happens only between the last programmable shader stage before the rasterizer and the fragment shader.
In the beginning, there was only the vertex shader and the fragment shader. The VS would get attributes as input, and output to varyings, which would be interpolation and become inputs to the FS.
With the introduction of the Geometry Shader in GL 3.0 / GLSL 1.30, this scheme did not make sense any more. The outputs of the VS would not be interpolated any more, but become direct inputs of the GS. As a result, the keywords attribute and varying were removed from GLSL and replaced by the more general in / out scheme.
As a result, a GS with varying cannot exist. You either use legacy GLSL which doesn't support Geometry Shaders, or you use a newer GLSL with in/out.
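For illustration only (not the code from the question), passing one per-vertex value through all three stages with the in/out scheme could look roughly like this in GLSL 1.50; the variable names are mine:
#version 150
// vertex shader
in vec4 position;
out float final_vs;            // plain output, not interpolated before the GS
void main()
{
    final_vs = position.w;
    gl_Position = position;
}
#version 150
// geometry shader
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in float final_vs[];           // one value per input vertex
out float final_tofrag;        // interpolated on its way to the fragment shader
void main()
{
    for (int i = 0; i < 3; ++i)
    {
        final_tofrag = final_vs[i];
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
#version 150
// fragment shader
in float final_tofrag;
out vec4 fragColor;
void main()
{
    fragColor = vec4(final_tofrag, final_tofrag, gl_FragCoord.z, 1.0);
}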

Why is glDrawArrays failing on Intel Mesa 10.3, while working with nVidia using OpenGL 3.3

I am trying to run the Piccante image processing library on an Intel GPU. The library uses OpenGL shaders to apply filters to images. According to its documentation the library uses OpenGL 4.0, so I had to make a small modification to get it to run in an OpenGL 3.3 context, which is supported by the Intel Mesa 10.3 driver.
I changed the following line (in buffer_op.hpp) when the shader is created:
prefix += glw::version("330"); // before glw::version("400")
After this modification my program still works perfectly fine on the nVidia GPU, even when initializing the OpenGL context as OpenGL 3.3 (Core Profile).
On the Intel GPU the program only works partly. It seems to work fine as long as the images are single channel. When the images are RGB, the drawing no longer works and my images end up black.
I have traced the error down to the following line (in quad.hpp):
void Render()
{
glBindVertexArray(vao); // (I checked that vao is not 0 here)
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // glGetError() = 1286 (GL_INVALID_OPERATION)
glBindVertexArray(0);
}
This is the initialization of the Vertex Array Object and the Vertex Buffer Object:
float *data = new float[8];
data[0] = -halfSizeX;
data[1] = halfSizeY;
data[2] = -halfSizeX;
data[3] = -halfSizeY;
data[4] = halfSizeX;
data[5] = halfSizeY;
data[6] = halfSizeX;
data[7] = -halfSizeY;
//Init VBO
glGenBuffers(1, &vbo[0]);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), data, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//Init VAO
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindVertexArray(0);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
This is the generated fragment shader I am trying to run:
#version 330
uniform sampler2D u_tex_1;
uniform vec4 u_val_0;
uniform vec4 u_val_1;
in vec2 v_tex_coord;
out vec4 f_color;
void main(void) {
ivec2 coords = ivec2(gl_FragCoord.xy);
vec4 ret = u_val_0;
f_color = ret;
}
I checked that the vertex shader and fragment shader compiles and links successfully. Does this mean that the shader should be GLSL 3.3 compatible and the problem is not within the shader but somewhere else?
What could be causing the program to fail on RGB images while it works fine on single channel images?
What could cause the program to fail with Intel Mesa 10.3 driver while working fine with nVidia driver when the context is initialized on both as OpenGL 3.3?
There seems to be a lot of reasons that could cause GL_INVALID_OPERATION when rendering. What other things could I check in order to trace down the error?
Thanks a lot for any help!
I've been talking with Francesco Banterle, the author of the Piccante library, and he pointed out the following:
Regarding Intel Drivers, the issue is due to the fact these drivers do
not automatically align buffers, so I may have to force three colors
channel to be RGBA instead of RGB.
I changed the internal format from GL_RGB32F to GL_RGBA32F when loading the RGB textures:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
GL_RGB, GL_FLOAT, data); // before GL_RGB32F
This seemed to fix the problems on the Intel drivers.

Lost OpenGL output

A week ago I upgraded my OS to the latest version, so GLEW, the nVidia drivers, g++, Qt, etc. all got upgraded too. That was the day my QGLWidgets stopped showing raw OpenGL 3.3 content; only the 2D QPainter based stuff still appears. Since no other OpenGL applications (including my DE) on my machine were affected, I must have written some dodgy code in my application that had perhaps been allowed by older versions of these libraries, and no longer is.
I have a lot of heavily abstracted OpenGL code, so many potential places for a failure; and after a few days of trying to get any sort of output from it (glGetError() was not returning errors before I started messing with it), I decided the best thing to do was write the simplest OpenGL application possible and then slowly build it up until it broke.
But I can't even get a triangle to appear!
void Viewer::initializeGL()
{
// Initialise GLEW, compile/link fragment and vertex shaders, error check, etc.
...
// Create default VAO.
glGenVertexArrays( 1, &vao_ );
glBindVertexArray( vao_ );
glUseProgram( shader_ );
vVertex_ = glGetAttribLocation( shader_, "vVertex" );
// Create the test triangle VBO.
glGenBuffers( 1, &vbo_ );
glBindBuffer( GL_ARRAY_BUFFER, vbo_ );
glEnableVertexAttribArray( vVertex_ );
glVertexAttribPointer( vVertex_, 3, GL_FLOAT, false, 0,
reinterpret_cast< GLvoid* >( 0 ) );
// Upload the data to the GPU.
static const float verts[9] = { 0.0f, 0.0f, -0.5f,
1.0f, 0.0f, -0.5f,
0.5f, 1.0f, -0.5f };
glBufferData( GL_ARRAY_BUFFER, sizeof( verts ),
static_cast< const void* >( verts ), GL_STATIC_DRAW );
glBindBuffer( GL_ARRAY_BUFFER, GL_NONE );
glDisableVertexAttribArray( vVertex_ );
Sy_GL::checkError();
}
void Viewer::paintGL()
{
// Clear the buffers.
qglClearColor( QColor( Qt::black ) );
glClear( GL_COLOR_BUFFER_BIT );
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
glBindBuffer( GL_ARRAY_BUFFER, vbo_ );
glEnableVertexAttribArray( vVertex_ );
glVertexAttribPointer( vVertex_, 3, GL_FLOAT, false, 0,
reinterpret_cast< GLvoid* >( 0 ) );
glDrawArrays( GL_TRIANGLES, 0, 3 );
glBindBuffer( GL_ARRAY_BUFFER, GL_NONE );
glDisableVertexAttribArray( vVertex_ );
Sy_GL::checkError();
}
I'm not using my VAO for its intended purpose because VAOs cannot be shared across contexts, which is the scenario in my 'real' application, so I'm replicating that situation here. Sy_GL::checkError() just calls glGetError() and throws an exception if there's a problem. My two shaders could not be simpler:
// Vertex shader.
#version 330
in vec3 vVertex;
void main( void )
{
gl_Position = vec4( vVertex, 1.0 );
}
// Fragment shader (in different file).
#version 330
out vec4 fragColour;
void main( void )
{
fragColour = vec4( 1.0, 0.0, 0.0, 1.0 );
}
This should display a red, line-rendered triangle against a black background, but I just get the black background - no console output or exceptions. My system really does support OpenGL 3.3 and higher; here is the top of my glxinfo output:
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
And my glewinfo output:
GLEW version 1.9.0
Reporting capabilities of display :0, visual 0x2b
Running on a GeForce GTX 560 Ti/PCIe/SSE2 from NVIDIA Corporation
OpenGL version 4.3.0 NVIDIA 310.32 is supported
So my question is: Is my code wrong? Or is my system damaged very subtly?
Edit
It appears that QGLFormat is reporting that I only have OpenGL v1.0 - what mechanism is Qt using to get that value?
In my 'real' application I perform an OpenGL version check using QGLFormat::openGLVersionFlags() & QGLFormat::OpenGL_Version_3_3, and this passes; but myQGLWidget->format().majorVersion() and myQGLWidget->format().minorVersion() return 1 and 0 respectively.
Edit 2
Interestingly, if I set a default QGLFormat of v3.3 in my main.cpp:
QGLFormat format( QGL::DoubleBuffer |
QGL::DepthBuffer |
QGL::AlphaChannel );
format.setVersion( 3, 3 );
QGLFormat::setDefaultFormat( format );
It segfaults on the first of my OpenGL calls, specifically glGenVertexArrays(1, &vao_), but myQGLWidget->format().majorVersion() and myQGLWidget->format().minorVersion() return 3 and 3 respectively.
We recently had this at work. The cause was the latest NVIDIA drivers, which broke the major and minor version queries.
Edit: I think it may have been related to calling these functions before setting up a valid GL context.
End edit
So, you could try with a slightly older driver. However, there is also an issue with some Qt versions and the reported GL version. Check this link out:
http://qt-project.org/forums/viewthread/20424
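Independently of Qt, once a context has been made current you can also ask OpenGL itself which version you actually got. A minimal sketch (the function name and output are mine); GL_MAJOR_VERSION and GL_MINOR_VERSION are available from OpenGL 3.0 onwards:
#include <iostream>
void printContextVersion()
{
    GLint major = 0, minor = 0;
    glGetIntegerv( GL_MAJOR_VERSION, &major );
    glGetIntegerv( GL_MINOR_VERSION, &minor );
    std::cout << "context version: " << major << "." << minor << std::endl;
    // On pre-3.0 contexts, parse glGetString( GL_VERSION ) instead.
}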

Setting up OpenGL program and shaders

After working with one program and one set of vertex / fragment shaders, I need to use more programs, so my initial setup function that didn't have any parameters wasn't sufficient anymore.
To my setup function I pass the shader_program and the two shader sources. When I try to use the program in my display method, the GLuint program handle is zero. What I currently don't understand is why it all worked when I used fixed sources for my shaders. Could it be a C++ problem, i.e. am I passing the parameters in the wrong way? I'm not getting any error messages, but nothing is rendered on screen.
Here's the code I use to call the shader setup function and the function itself:
// Variables
GLuint shader_program;
const GLchar *vertex_shader =
"#version 330\n"
""
"layout (location = 0) in vec3 in_position;"
...
"";
// In my main function, I call this to set up the shaders
installShaders(shader_program, vertex_shader, fragment_shader);
// SET-UP-METHOD
void installShaders(GLuint program_handle, const GLchar *vertex_shader_source, const GLchar *fragment_shader_source)
{
// Handles for the shader objects
GLuint vertex_shader_name;
GLuint fragment_shader_name;
// Status values
GLint vertex_shader_compiled;
GLint fragment_shader_compiled;
GLint successfully_linked;
// Generate shader names
vertex_shader_name = glCreateShader(GL_VERTEX_SHADER);
fragment_shader_name = glCreateShader(GL_FRAGMENT_SHADER);
// Specify the sources for the shaders
glShaderSource( vertex_shader_name, 1, (const GLchar**)&vertex_shader_source , NULL);
glShaderSource(fragment_shader_name, 1, (const GLchar**)&fragment_shader_source, NULL);
// Compile Vertex shader and check for errors
glCompileShader(vertex_shader_name);
glGetShaderiv(vertex_shader_name, GL_COMPILE_STATUS, &vertex_shader_compiled);
printShaderInfoLog(vertex_shader_name);
// Compile Fragment shader and check for errors
glCompileShader(fragment_shader_name);
glGetShaderiv(fragment_shader_name, GL_COMPILE_STATUS, &fragment_shader_compiled);
printShaderInfoLog(fragment_shader_name);
// Exit if the shaders couldn't be compiled correctly
if(!vertex_shader_compiled || !fragment_shader_compiled)
{
printf("Shaders were not compiled correctly!\n");
exit(EXIT_FAILURE);
}
// Generate program name
program_handle = glCreateProgram();
// Attach shaders to the program
glAttachShader(program_handle, vertex_shader_name);
glAttachShader(program_handle, fragment_shader_name);
// Link program and check for errors
glLinkProgram(program_handle);
glGetProgramiv(program_handle, GL_LINK_STATUS, &successfully_linked);
printProgramInfoLog(program_handle);
// Exit if the program couldn't be linked correctly
if(!successfully_linked)
{
printf("Program was not linked correctly!\n");
exit(EXIT_FAILURE);
}
// Set up initial values of uniform variables
glUseProgram(program_handle);
location_projectionMatrix = glGetUniformLocation(program_handle, "myProjectionMatrix");
printf("Location of the uniform -->myProjectionMatrix<--- : %i\n", location_projectionMatrix);
projectionMatrix = glm::mat4(1.0f);
glUniformMatrix4fv(location_projectionMatrix, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
location_modelViewMatrix = glGetUniformLocation(program_handle, "myModelViewMatrix");
printf("Location of the uniform -->myModelViewMatrix<--- : %i\n", location_modelViewMatrix);
modelViewMatrix = glm::mat4(1.0f);
glUniformMatrix4fv(location_modelViewMatrix, 1, GL_FALSE, glm::value_ptr(modelViewMatrix));
glUseProgram(0);
// Everything worked
printf("Shaders were set up correctly!\n");
}
The issue is that the variable you store the GLSL program name in (the value returned from glCreateProgram) is passed by value into the routine. As such, its value is lost when the function returns. If you pass the value by reference instead (as either a pointer or a C++ reference), it will be preserved when the function exits, and you'll be able to reference the program in other parts of your application.
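A minimal sketch of the signature change, based on the code in the question (only the parameter and the final assignment matter; everything inside the function stays as it was):
// Take the handle by reference so the name created inside survives the call.
void installShaders(GLuint &program_handle,
                    const GLchar *vertex_shader_source,
                    const GLchar *fragment_shader_source)
{
    // ... compile the shaders exactly as before ...
    program_handle = glCreateProgram(); // now writes to the caller's variable
    // ... attach, link and set up the uniforms exactly as before ...
}
// The call site can stay the same:
installShaders(shader_program, vertex_shader, fragment_shader);
Returning the program name from the function instead of using an out parameter works just as well.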

Can anyone decipher why this OpenGL Hello World draws a black window?

Please excuse the length (and the width; it's for clarity in an IDE), but I thought I'd show the code in full, since the purpose is a simple Hello World with modern VBOs and GLSL.
It was initially based on http://people.freedesktop.org/~idr/OpenGL_tutorials/02-GLSL-hello-world.pdf
The main point is that not a single error message or warning is printed, and as you can see there are a lot of printfs (actually, almost every call is checked for errors).
The compilation is done with -std=c99 -pedantic -O0 -g -Wall (with no warnings), so there isn't much room for compiler error either.
I have focused my attention on
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
and
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
(the latter is the only part of the code I don't fully understand yet; most obscure func 'ever')
The info log does not print anything, and it does print sensible output if the shaders are purposely made invalid. Hence it's neither the shader string assignment nor their compilation.
Can you see something that could make it print a blank screen?
It does draw a single dot in the middle if glDrawArrays is used with GL_POINTS, and it does change color if glClear is preceded by an appropriate glClearColor.
#include "SDL.h" // Window and program management
#include "Glee.h" // OpenGL management; Notice SDL's OpenGL header is not included
#include <stdbool.h> // C99 bool
void initGL(void);
void drawGL(void);
int main (int argc, char **argv) {
// Load the SDL library; Initialize the Video Subsystem
if (SDL_Init(SDL_INIT_VIDEO) < 0 ) printf("SDL_Init fail: %s\n", SDL_GetError());
/* Video Subsystem: set up width, height, bits per pixel (0 = current display's);
Create an OpenGL rendering context */
if (SDL_SetVideoMode(800, 600, 0, SDL_OPENGL) == NULL) printf("SDL_SetVideoMode fail: %s\n", SDL_GetError());
// Title and icon text of window
SDL_WM_SetCaption("gl", NULL);
// Initialize OpenGL ..
initGL();
bool done = false;
// Loop indefinitely unless user quits
while (!done) {
// Draw OpenGL ..
drawGL();
// Deal with SDL events
SDL_Event sdl_event;
do {
if ( sdl_event.type == SDL_QUIT || (sdl_event.type == SDL_KEYDOWN && sdl_event.key.keysym.sym == SDLK_ESCAPE)) {
done = true;
break;
}
} while (SDL_PollEvent(&sdl_event));
}
// Clean SDL initialized systems, unload library and return.
SDL_Quit();
return 0;
}
GLuint program;
GLuint buffer;
#define BUFFER_OFFSET(i) ((char *)NULL + (i))
void initGL(void) {
// Generate 1 buffer object; point its name (in uint form) to *buffer.
glGenBuffers(1, &buffer); if(glGetError()) printf("glGenBuffers error\n");
/* bind the named (by a uint (via the previous call)) buffer object to target GL_ARRAY_BUFFER (target for vertices)
apparently, one object is bound to a target at a time. */
glBindBuffer(GL_ARRAY_BUFFER, buffer); if(glGetError()) printf("glBindBuffer error\n");
/* Create a data store for the current object bound to GL_ARRAY_BUFFER (from above), of a size 8*size of GLfloat,
with no initial data in it (NULL) and a hint to the GrLib that data is going to be modified once and used a
lot (STATIC), and it's going to be modified by the app and used by the GL for drawing or image specification (DRAW)
Store is not mapped yet. */
glBufferData( GL_ARRAY_BUFFER, 4 * 2 * sizeof(GLfloat), NULL, GL_STATIC_DRAW); if(glGetError()) printf("glBufferData error\n");
/* Actually map to the GL client's address space the data store currently bound to GL_ARRAY_BUFFER (from above).
Write only. */
GLfloat *data = (GLfloat *) glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY); if (!*data) printf("glMapBuffer error1\n"); if(glGetError()) printf("glMapBuffer error2\n");
// Apparently, write some data on the object.
data[0] = -0.75f; data[1] = -0.75f; data[2] = -0.75f; data[3] = 0.75f;
data[4] = 0.75f; data[5] = 0.75f; data[6] = 0.75f; data[7] = -0.75f;
// Unmap the data store. Required *before* the object is used.
if(!glUnmapBuffer(GL_ARRAY_BUFFER)) printf("glUnmapBuffer error\n");
// Specify the location and data format of an array of generic vertex attributes ..
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
// the shaders source
GLchar *vertex_shader_code[] = { "void main(void) { gl_Position = gl_Vertex; }"};
GLchar *fragment_shader_code[] = { "void main(void) { gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); }"};
/* Create an empty shader object; used to maintain the source string; intended to run
on the programmable vertex processor; GL_SHADER_TYPE is set to GL_VERTEX_SHADER
(e.g. for use on glGetShaderiv)*/
GLuint vs = glCreateShader(GL_VERTEX_SHADER); if (!vs) printf("glCreateShader fail\n");
/* Set the source code in vs; 1 string; GLchar **vertex_shader_code array of pointers to strings,
length is NULL, i.e. strings assumed null terminated */
glShaderSource(vs, 1, (const GLchar **) &vertex_shader_code, NULL); if(glGetError()) printf("glShaderSource error\n");
// Actually compile the shader
glCompileShader(vs); GLint compile_status; glGetShaderiv(vs, GL_COMPILE_STATUS, &compile_status); if (compile_status == GL_FALSE) printf("vertex_shader_code compilation fail\n"); if(glGetError()) printf("glGetShaderiv fail\n");
// same
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER); if (!fs) printf("glCreateShader fail\n");
// same
glShaderSource(fs, 1, (const GLchar **) &fragment_shader_code, NULL); if(glGetError()) printf("glShaderSource error\n");
// same
glCompileShader(fs); glGetShaderiv(fs, GL_COMPILE_STATUS, &compile_status); if (compile_status == GL_FALSE) printf("fragment_shader_code compilation fail\n"); if(glGetError()) printf("glGetShaderiv fail\n");
/* Empty program for later attachment of shaders; it provides management mechanism for them.
Shaders can be compiled before or after their attachment. */
program = glCreateProgram(); if(!program) printf("glCreateProgram fail1\n"); if(glGetError()) printf("glCreateProgram fail2\n");
/* Attach shaders to program; this could be done before their compilation or their association with code
Destined to be linked together and form an executable. */
glAttachShader(program, vs); if(glGetError()) printf("glAttachShader fail1\n");
glAttachShader(program, fs); if(glGetError()) printf("glAttachShader fail2\n");
// Link the program; vertex shader objects create an executable for the vertex processor and similarly for fragment shaders.
glLinkProgram(program); GLint link_status; glGetProgramiv(program, GL_LINK_STATUS, &link_status); if (!link_status) printf("linking fail\n"); if(glGetError()) printf("glLinkProgram fail\n");
/* Get info log, if any (supported by the standard to be empty).
It does give nice output if compilation or linking fails. */
GLchar infolog[2048];
glGetProgramInfoLog(program, 2048, NULL, infolog); printf("%s", infolog); if (glGetError()) printf("glGetProgramInfoLog fail\n");
/* Install program to rendering state; one or more executables contained via compiled shaders inclusion.
Certain fixed functionalities are disabled for fragment and vertex processors when such executables
are installed, and executables may reimplement them. See glUseProgram manual page about it. */
glUseProgram(program); if(glGetError()) printf("glUseProgram fail\n");
}
void drawGL(void) {
// Clear color buffer to default value
glClear(GL_COLOR_BUFFER_BIT); if(glGetError()) printf("glClear error\n");
// Render the a primitive triangle
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); if(glGetError()) printf("glDrawArrays error\n");
SDL_GL_SwapBuffers();
}
Expanding on Calvin1602's answer:
ftransform assumes you use the built-in matrices, which you do not. gl_Vertex ought to be fine here, considering the final result is supposed to be the [-1:1]^3 cube, and his data is in that interval. Now, it should be gl_VertexAttrib[0] if you really want to go all GL3.1, but gl_Vertex and gl_VertexAttrib[0] alias (see below).
As for the enable. You use vertex attrib 0, so you need:
glEnableVertexAttribArray(0)
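In the question's initGL that would look roughly like this (a sketch, not the full listing):
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(0); // without this, glDrawArrays has no enabled attribute array to read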
An advice on figuring stuff out in general: don't clear to black, it makes life more difficult to figure out if something black is drawn or if nothing is drawn (use glClearColor to change that).
On the pedantic side, your glShaderSource calls look suspicious, as you cast pointers around. I'd clean it up with
glShaderSource(fs, 1, fragment_shader_code, NULL);
The reason why it currently works with &fragment_shader_code is interesting, but here, I don't see why you wouldn't simplify.
== edit to add ==
Gah, not sure what I was thinking with gl_VertexAttrib. It's been a while since I last looked at this, and I just made up my own feature...
The standard way to provide non-built-in attributes is actually non-trivial until GL4.1.
// glsl
attribute vec4 myinput;
gl_Position = myinput;
// C code, relying on the linker for the location
glLinkProgram(prog);
GLint location = glGetAttribLocation(prog, "myinput");
glEnableVertexAttribArray(location);
glVertexAttribPointer(location, ...); // then describe the array as usual
// alternative C code, specifying the location yourself
glBindAttribLocation(prog, 0, "myinput");
glLinkProgram(prog);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, ...);
GL4.1 finally supports specifying the location directly in the shader.
// glsl 4.10
layout (location=0) in vec4 myinput;
In the vertex shader: gl_Position = ftransform(); instead of gl_Vertex. This will multiply the input vector by the modelview matrix (giving the point in camera space) and then by the projection matrix (giving the point in normalized device coordinates, i.e. its position on the screen).
glEnableClientState(GL_VERTEX_ARRAY); before the rendering (or glEnableVertexAttribArray for a generic attribute). cf. the glDrawArrays reference: "If GL_VERTEX_ARRAY is not enabled, no geometric primitives are generated."
... I don't see anything else