.exe won't load shaders when run directly, but works when run from Visual Studio - C++

The problem only comes up when I run the .exe directly: it breaks while using the file reading method to find the shaders, but when I run it from Visual Studio it works fine. This is the file reading method:
static std::string read_file(const char* filepath)
{
    FILE* file = fopen(filepath, "rt");
    fseek(file, 0, SEEK_END);
    unsigned long length = ftell(file);
    char* data = new char[length + 1];
    memset(data, 0, length + 1);
    fseek(file, 0, SEEK_SET);
    fread(data, 1, length, file);
    fclose(file);
    std::string result(data);
    delete[] data;
    return result;
}
Specifically it breaks on the fseek(file, 0, SEEK_END) line. The line that calls it is:
shader = new Shader("basic.vert", "basic.frag");
The files are in the same folder as the .cpp and in the .exe folder. The shader is defined here:
Shader::Shader(const char* vertexPath, const char* fragPath)
    : m_VertexPath(vertexPath), m_FragPath(fragPath)
{
    m_ShaderID = load();
}
GLuint Shader::load()
{
    GLuint program = glCreateProgram();
    GLuint vertex = glCreateShader(GL_VERTEX_SHADER);
    GLuint fragment = glCreateShader(GL_FRAGMENT_SHADER);
    std::string vertexSourceString = read_file(m_VertexPath).c_str();
    std::string fragSourceString = read_file(m_FragPath).c_str();
    const char* vertexSource = vertexSourceString.c_str();
    const char* fragSource = fragSourceString.c_str();
    glShaderSource(vertex, 1, &vertexSource, NULL);
    glCompileShader(vertex);
    GLint result;
    glGetShaderiv(vertex, GL_COMPILE_STATUS, &result);
    if (result == GL_FALSE)
    {
        GLint length;
        glGetShaderiv(vertex, GL_INFO_LOG_LENGTH, &length);
        std::vector<char> error(length);
        glGetShaderInfoLog(vertex, length, &length, &error[0]);
        std::cout << "Failed to compile vertex shader :(" << std::endl << &error[0] << std::endl;
        glDeleteShader(vertex);
        return 0;
    }
    glShaderSource(fragment, 1, &fragSource, NULL);
    glCompileShader(fragment);
    glGetShaderiv(fragment, GL_COMPILE_STATUS, &result);
    if (result == GL_FALSE)
    {
        GLint length;
        glGetShaderiv(fragment, GL_INFO_LOG_LENGTH, &length);
        std::vector<char> error(length);
        glGetShaderInfoLog(fragment, length, &length, &error[0]);
        std::cout << "Failed to compile fragment shader :(" << std::endl << &error[0] << std::endl;
        glDeleteShader(fragment);
        return 0;
    }
    glAttachShader(program, vertex);
    glAttachShader(program, fragment);
    glLinkProgram(program);
    glValidateProgram(program);
    glDeleteShader(vertex);
    glDeleteShader(fragment);
    return program;
}
Sorry if this is a bit long, but I wanted to be thorough and I wasn't sure which bit might be causing the problem.

You need to make sure that the basic.vert and basic.frag files are in the same directory as the executable. Or you need to specify absolute paths when identifying where the files are. Or you can invest development time in a "resource loader" type of object that dynamically queries for and loads external resources into your program; as an example, I'd write code that recursively searches all the subdirectories of the folder the program is located in and maps each file to an addressable resource in your program.
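If you go with the "resource loader" route, here is a minimal sketch of such a file index using C++17 std::filesystem; the function name and the map layout are purely illustrative, not code from the question:

#include <filesystem>
#include <string>
#include <unordered_map>

// Sketch: walk every subdirectory under `root` (e.g. the executable's folder)
// and map each file name to its full path, so resources can be looked up by name.
std::unordered_map<std::string, std::filesystem::path>
index_resources(const std::filesystem::path& root)
{
    std::unordered_map<std::string, std::filesystem::path> resources;
    for (const auto& entry : std::filesystem::recursive_directory_iterator(root))
        if (entry.is_regular_file())
            resources[entry.path().filename().string()] = entry.path();
    return resources;
}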
This isn't a problem when you run the program from Visual Studio because the files that have to be read are located in the source directory of your project, and by default, when Visual Studio is debugging your program, it uses that source directory as the working directory. When you run the .exe on its own, the working directory is the directory the executable is located in (or the directory the script running it is located in).
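Whichever option you choose, the break at fseek is the symptom of fopen returning NULL when the file cannot be found in the working directory. Below is a minimal sketch of a defensive variant of the question's read_file; the null check, the ftell check, and the read_file_checked name are added purely for illustration:

#include <cstdio>
#include <stdexcept>
#include <string>

// Sketch only: same behaviour as the original read_file, but fails loudly
// (with the offending path in the message) instead of crashing when the
// file cannot be opened from the current working directory.
static std::string read_file_checked(const char* filepath)
{
    FILE* file = fopen(filepath, "rt");
    if (!file)
        throw std::runtime_error(std::string("Could not open file: ") + filepath);

    fseek(file, 0, SEEK_END);
    long length = ftell(file);
    fseek(file, 0, SEEK_SET);
    if (length < 0)
    {
        fclose(file);
        throw std::runtime_error(std::string("Could not read file: ") + filepath);
    }

    std::string result(static_cast<size_t>(length), '\0');
    size_t read = fread(&result[0], 1, static_cast<size_t>(length), file);
    fclose(file);

    result.resize(read); // "rt" mode may translate line endings, so keep only what was read
    return result;
}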

Related

glShaderSource access violation when using Win32 to read files

So, I'm trying to set up a basic OpenGL scene using only the Win32 API and OpenGL, but I'm having big problems with loading shaders and the glShaderSource function. I'm reading my file like this:
//HEADER
class FileReader
{
public:
    FileReader(const LPCSTR FileName);
    const void* GetFileData();
    ~FileReader();
private:
    HANDLE FileHandle;
    void* FileDataMemory;
};
//ACTUAL CODE
FileReader::FileReader(const LPCSTR FileName)
{
    FileHandle = CreateFileA(FileName, GENERIC_READ, FILE_SHARE_READ, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
    if (FileHandle == INVALID_HANDLE_VALUE)
    {
        OutputDebugStringA("Failed to open file ");
        OutputDebugStringA(FileName);
        OutputDebugStringA(" for reading, application could not be loaded\n");
        ExitProcess(-2);
    }
    unsigned int FileSize = GetFileSize(FileHandle, 0);
    DWORD BytesRead;
    FileDataMemory = VirtualAlloc(0, FileSize, MEM_COMMIT, PAGE_READWRITE);
    ReadFile(FileHandle, FileDataMemory, FileSize, &BytesRead, 0);
    if (BytesRead < FileSize)
    {
        OutputDebugStringA("File was not read completely\n");
    }
}
const void* FileReader::GetFileData()
{
    return FileDataMemory;
}
FileReader::~FileReader()
{
    CloseHandle(FileHandle);
}
And I use this class to load the vertex shader from disk like this:
VertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(VertexShader, 1, static_cast<const GLchar* const*>(VertexShaderFile->GetFileData()), 0);
But my app gives me an access violation reading address 0xFFFFFFFF on the line with glShaderSource. I'm just confused, because when I inspect this block of memory in debug mode, it looks proper and has the correct data inside it, so I don't know what to do.
The shader source code string, which is read from the file, is not 0-terminated.
Reserve FileSize+1 bytes of memory and ensure that the last byte of the buffer is 0. e.g.:
FileDataMemory = VirtualAlloc(0, FileSize+1, MEM_COMMIT, PAGE_READWRITE);
ZeroMemory(FileDataMemory, FileSize+1);
ReadFile(FileHandle, FileDataMemory, FileSize, &BytesRead, 0);
Further, the 3rd parameter to glShaderSource is an array of strings (const GLchar **):
const GLchar *source = static_cast<const GLchar*>(VertexShaderFile->GetFileData());
glShaderSource(VertexShader, 1, &source, 0);
What your code actually does is cast the file data itself to const GLchar* const*; what glShaderSource needs is the address of a pointer to that data, as shown above.
Furthermore, I recommend using the STL to read the shader source file, e.g.:
std::ifstream sourceFile(FileName, std::fstream::in);
std::string sourceCode = std::string(
    std::istreambuf_iterator<char>(sourceFile),
    std::istreambuf_iterator<char>());
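A minimal sketch of how the resulting std::string could then be handed to glShaderSource; std::string::c_str() already guarantees the 0 terminator (VertexShader and sourceCode reuse the names from the snippets above, the lines themselves are illustrative):

// c_str() returns a 0-terminated string, so no manual termination is needed.
const GLchar* source = sourceCode.c_str();
glShaderSource(VertexShader, 1, &source, 0);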

OpenGL 4 debug output does not work

I am writing a game. I use Arch Linux most of the time, but recently I tried to run my game on Ubuntu 16.04. On Ubuntu 16.04 there is a strange error: 1280. It is hard to find what causes the error, so I wanted to see OpenGL's debug output, but I don't see that either. I noticed one thing during shader validation: validation seems to be unsuccessful, but the log is empty:
GLint status;
glValidateProgram(program_);
glGetProgramiv(program_, GL_VALIDATE_STATUS, &status);
if (status == GL_TRUE) {
    return;
}
// Store log and return false
int length = 0;
glGetProgramiv(program_, GL_INFO_LOG_LENGTH, &length);
if (length > 0) {
    GLchar infoLog[512];
    glGetProgramInfoLog(program_, 512, nullptr, infoLog);
    throw std::runtime_error(std::string("Program failed to validate:") + infoLog);
} else {
    throw std::runtime_error(std::string("Program failed to validate. Unknown error"));
}
This gives me the "Unknown error" message. Also, OpenGL's debug output can't be seen, although user messages are written there successfully. Here is the code:
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
int contextFlags = 0;
SDL_GL_GetAttribute(SDL_GL_CONTEXT_FLAGS, &contextFlags);
contextFlags |= SDL_GL_CONTEXT_DEBUG_FLAG;
SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, contextFlags);
sdlWindow_ = SDL_CreateWindow(title.c_str(),
                              SDL_WINDOWPOS_UNDEFINED,
                              SDL_WINDOWPOS_UNDEFINED,
                              0,
                              0,
                              SDL_WINDOW_OPENGL
                              | SDL_WINDOW_SHOWN
                              | SDL_WINDOW_FULLSCREEN_DESKTOP
                              | SDL_WINDOW_INPUT_GRABBED);
if (!sdlWindow_) {
    throw std::runtime_error("Unable to create window");
}
SDL_Log("Window created");
glContext_ = SDL_GL_CreateContext(sdlWindow_);
if (!glContext_) {
    throw std::runtime_error("Failed to init OpenGL");
}
SDL_Log("GL context created");
{
    glewExperimental = GL_TRUE;
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        throw std::runtime_error(std::string("GLEW Error: ") + reinterpret_cast<const char*>(glewGetErrorString(err)));
    }
}
if (glDebugMessageCallbackARB != nullptr) {
    SDL_Log("GL debug is available.\n");
    // Enable the debug callback
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
    glDebugMessageCallback(_openglDebugCallbackFunction, nullptr);
    glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, nullptr, GL_TRUE);
    glDebugMessageInsert(GL_DEBUG_SOURCE_APPLICATION, GL_DEBUG_TYPE_MARKER, 0,
                         GL_DEBUG_SEVERITY_NOTIFICATION, -1, "Started debugging");
} else {
    SDL_Log("GL debug is not available.\n");
}
So the main question here is why I can't see OpenGL's debug output. And, if possible, as an additional question: why does the shader validation fail without a log?
GLEW 1.x has some problems when being used in a core context (that's also why glewExperimental = true is needed): glewInit always generates an OpenGL error while loading the extensions. You don't get this error through the debug callback because the callback is set up after the point where the error happened.
You have a bit of a chicken-and-egg problem here: you cannot set up the debug callback before initializing GLEW, but that is exactly where you want debug output from. I recommend calling glGetError() right after glewInit to clear the one error whose origin you already know.
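A minimal sketch of that workaround, assuming the same SDL/GLEW initialization as in the question (the log message is illustrative):

glewExperimental = GL_TRUE;
GLenum err = glewInit();
if (err != GLEW_OK) {
    throw std::runtime_error(std::string("GLEW Error: ") + reinterpret_cast<const char*>(glewGetErrorString(err)));
}
// GLEW 1.x queries GL_EXTENSIONS via glGetString, which is invalid in a core
// profile and leaves an error behind; read it off so later checks and the
// debug callback start from a clean error state.
while (glGetError() != GL_NO_ERROR) {
    SDL_Log("Discarding GL error raised during glewInit\n");
}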

Xcode GL_GEOMETRY_SHADER undeclared?

Here are my includes and the code that is causing me distress:
#include <iostream>
#include <vector>
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>
#include <GLUT/glut.h>
using namespace std;
GLuint CreateShader(GLenum eShaderType, const std::string &strShaderFile)
{
    GLuint shader = glCreateShader(eShaderType);
    const char *strFileData = strShaderFile.c_str();
    glShaderSource(shader, 1, &strFileData, NULL);
    glCompileShader(shader);
    GLint status;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE)
    {
        GLint infoLogLength;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLength);
        GLchar *strInfoLog = new GLchar[infoLogLength + 1];
        glGetShaderInfoLog(shader, infoLogLength, NULL, strInfoLog);
        const char *strShaderType = NULL;
        switch (eShaderType)
        {
        case GL_VERTEX_SHADER: strShaderType = "vertex"; break;
        case GL_GEOMETRY_SHADER: strShaderType = "geometry"; break;
        case GL_FRAGMENT_SHADER: strShaderType = "fragment"; break;
        }
        fprintf(stderr, "Compile failure in %s shader:\n%s\n", strShaderType, strInfoLog);
        delete[] strInfoLog;
    }
    return shader;
}
That was the only way I could think of to point out that crucial bit of code. (I am new to SO.)
I am learning from a tutorial, so besides the altered includes and the namespace, this is not my code.
Anyway, I am using Xcode to learn about using OpenGL in C++, and it tells me that I am using an undeclared identifier, GL_GEOMETRY_SHADER. I have coded with OpenGL in Java before, so this seems really strange to me, as GL_GEOMETRY_SHADER should be declared along with its vertex and fragment counterparts, which do not cause any problems. Any help would be appreciated.
(As an aside that I don't think warrants an entire question of its own, since I simply opted not to use the function: glutExitMainLoop(); is not defined either.)
Thank you

OpenGL shader program linking error with uncertain reason

I'm trying to make a framework that uses OpenGL, but I'm having a problem reloading the shader program.
This is what I have right now:
I load and compile a vertex shader and a fragment shader (both contain main).
Then I link a program with them.
At first there is no problem, no matter how many times I refresh the shaders.
(If I press Refresh Shader, the application creates new shader and program resources and replaces the original ones with them.)
Then, if I load a mesh and perform some drawing calls (using VBOs), Refresh Shader fails with the following message.
I have no idea how my draw calls or VBO function calls could possibly be related to shader linking...
I'm not even sure if that is the exact reason or not.
Shader class:
class Shader : public Named
{
    // -------------------------------------------------------
    // --------------- Defs ----------------------------------
public:
    typedef void (*ErrFunc)( Shader* pShader, const char* const errMsg );

    // -------------------------------------------------------
    // --------------- Funcs ---------------------------------
public:
    //
    // Shader Controls
    //
    Shader( const std::string& name, GLenum type);
    ~Shader( void );
    // pre: file to load the source from. Give nullptr to load from the lastly loaded file
    void ShaderSourceFromFile( const char* file );
    // pre: source is a string containing the source
    void ShaderSource( const std::string& source );
    void Compile( void );
    void AttachTo( GLuint program );
    void DetachFrom( GLuint program );
    GLuint GetId( void );
    void SetId( GLuint shaderId );
    GLint GetType( void );
    GLint GetCompileStatus( void );
    const char* const GetShaderInfoLog( void );

    //
    // Err Control
    //
public:
    void Error ( const char* pErrMsg );
    static ErrFunc SetErrFunc( ErrFunc fpNewErrFunc );

    // -------------------------------------------------------
    // ----------- Vars --------------------------------------
private:
    // id of the shader
    GLuint _shaderId;
    // type of the shader
    GLenum _type;
    // error func
    static ErrFunc s_fpErrFunc;
}; // class Shader

//
// Shader
//
Shader::ErrFunc Shader::s_fpErrFunc = nullptr;

Shader::Shader( const std::string& name, GLenum type) :
    Named(name),
    _shaderId(GL_INVALID_RESOURCE_ID),
    _type(type)
{
    // Create shader
    GLint id = glCreateShader(type);
    // Failed to create shader?
    if ( id == GL_INVALID_RESOURCE_ID )
        Error("(Shader::Shader) Failed to create shader resource");
    // Set shader id
    SetId(id);
}

Shader::~Shader( void )
{
    // Delete shader
    glDeleteShader(GetId());
    // Error Check
    if( glGetError() == GL_INVALID_VALUE )
        Error("(Shader::~Shader) given id to delete was not shader");
}

void Shader::Compile( void )
{
    // Compile shader
    glCompileShader(_shaderId);
    switch( glGetError() )
    {
    case GL_NO_ERROR: break;
    case GL_INVALID_VALUE: Error("shader is not a value generated by OpenGL."); break;
    case GL_INVALID_OPERATION: Error("shader is not a shader object."); break;
    default: Error("Unknown Error");
    }
    // ErrorCheck
    if( GetCompileStatus() == GL_FALSE )
        Error("(Shader::Compile) Compilation Failed.");
}

void Shader::ShaderSourceFromFile( const char* file )
{
    ShaderSource(ReadEntireFile(file));
}

// pre: source is a string containing the source
void Shader::ShaderSource( const std::string& source )
{
    const GLchar* pSource = source.c_str();
    glShaderSource( GetId(),   // shader id
                    1,         // count of source. only one at a time at the moment
                    &pSource,  // buffer that source is stored in
                    nullptr);  // length is not specified, yet null terminated
    switch( glGetError() )
    {
    case GL_NO_ERROR:
        break;
    case GL_INVALID_VALUE:
        Error("invalid shader or bufsize is less than 0");
        break;
    case GL_INVALID_OPERATION:
        Error("shader is not a shader object.");
        break;
    default:
        Error("Unknown Error");
        break;
    }
}

void Shader::AttachTo( GLuint program )
{
    glAttachShader(program, GetId());
    if( glGetError() != GL_NO_ERROR )
    {
        Error("(Shader::AttachTo) Error Happened Performing glAttachShader");
    }
}

void Shader::DetachFrom( GLuint program )
{
    glDetachShader(program, GetId());
    switch( glGetError() )
    {
    case GL_NO_ERROR:
        break;
    case GL_INVALID_VALUE:
        Error("(Shader::DetachFrom) either program or shader is a value that was not generated by OpenGL.");
        break;
    case GL_INVALID_OPERATION:
        Error("(Shader::DetachFrom) program is not a program object. or "
              "shader is not a shader object. or "
              "shader is not attached to program.");
        break;
    default:
        Error("(Shader::DetachFrom) Unknown Error");
        break;
    }
}

GLuint Shader::GetId( void )
{
    return _shaderId;
}

void Shader::SetId( GLuint shaderId )
{
    _shaderId = shaderId;
}

GLint Shader::GetType( void )
{
    return _type;
}

void Shader::Error ( const char* pErrMsg )
{
    if( s_fpErrFunc )
        s_fpErrFunc(this, pErrMsg);
}

Shader::ErrFunc Shader::SetErrFunc( ErrFunc fpNewErrFunc )
{
    ErrFunc prev = s_fpErrFunc;
    s_fpErrFunc = fpNewErrFunc;
    return prev;
}

GLint Shader::GetCompileStatus( void )
{
    GLint status;
    glGetShaderiv(GetId(), GL_COMPILE_STATUS, &status );
    GLint err = glGetError();
    switch( err )
    {
    case GL_NO_ERROR:
        break;
    case GL_INVALID_ENUM:
        Error("(Shader::GetCompileStatus) Invalid Enum passed to glGetShaderiv");
        break;
    case GL_INVALID_VALUE:
        Error("(Shader::GetCompileStatus) Invalid Shader id passed to glGetShaderiv");
        break;
    case GL_INVALID_OPERATION:
        Error("(Shader::GetCompileStatus) Invalid Shader id or shader does not support");
        break;
    default:
        Error("(Shader::GetCompileStatus) Unknown Error");
        break;
    }
    return status;
}

const char* const Shader::GetShaderInfoLog( void )
{
    static const unsigned s_bufSize = 0x1000;
    static GLchar s_buf[s_bufSize] = {0};
    // Get InfoLog
    glGetShaderInfoLog(GetId(), s_bufSize, nullptr, s_buf);
    // Error Check
    if( glGetError() != GL_NO_ERROR )
        Error("(Shader::GetShaderInfoLog) There was an error performing glGetShaderInfoLog");
    // return buffer
    return s_buf;
}
Refreshing Function:
void GLShaderManager::Refresh( void )
{
    // Gets current shaders
    Shader* pVertexPrev = _shaderMap["Vertex"];
    Shader* pFragPrev = _shaderMap["Fragment"];
    // Delete them (glDeleteShader call is included)
    if( pVertexPrev && pFragPrev )
    {
        delete pVertexPrev;
        delete pFragPrev;
    }
    // Loads shaders
    LoadShader("Vertex", GL_VERTEX_SHADER, s_vertexFile);
    LoadShader("Fragment", GL_FRAGMENT_SHADER, s_fragmentFile);
    // Delete Current Program
    glDeleteProgram(_program);
    // Create new Program
    GLuint newProgram = glCreateProgram();
    _program = newProgram;
    // Attach Shaders
    UseShader("Vertex");
    UseShader("Fragment");
    // Linking
    glLinkProgram(_program);
    const unsigned bufSize = 0x1000;
    char buf[bufSize] = { '\0' };
    GLint status = 0;
    glGetProgramiv(_program, GL_LINK_STATUS, &status );
    if( status )
        Gui::Get().Output("Shader program linked successfully.");
    glGetProgramInfoLog(_program, bufSize, nullptr, buf);
    Gui::Get().Output(buf);
    s_uniformErrHistory.clear();
}

void GLShaderManager::LoadShader( const std::string& name, GLenum type, const std::string& sourceFile )
{
    Shader* pShader = _shaderMap[name] = new Shader(name, type);
    if( pShader )
    {
        pShader->ShaderSourceFromFile(sourceFile.c_str());
        pShader->Compile();
        if( pShader->GetCompileStatus() )
            Gui::Get().Output( (pShader->GetName() + " shader successfully compiled.").c_str());
        Gui::Get().Output(pShader->GetShaderInfoLog());
    }
}

void GLShaderManager::UseShader( const std::string& name )
{
    Shader* pShader = _shaderMap[name];
    if( pShader )
    {
        pShader->AttachTo(_program);
    }
}

glCreateShader returns 0

I have a Windows build environment using Cygwin and GCC, and am linking to the libraries for GLee, GLUT, and opengl32. This is a Win32 build.
All calls to glCreateShader are returning 0, yet I'm not picking up any errors. The following is based on the Lighthouse tutorials for GLUT and GLSL, so the sequence of GL operations should be correct.
Here's the relevant code:
#define WIN32
#include <stdio.h>
#include <GL/GLee.h>
#include <GL/glut.h>
#include "SampleUtils.h"
#include "LineShaders.h"

GLint lineVertexHandle = 0;
unsigned int lineShaderProgramID;
...
int main(int argc, char **argv) {
    // init GLUT and create window
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(320, 320);
    glutCreateWindow("Lighthouse3D Tutorials");
    // register callbacks
    glutDisplayFunc(renderScene);
    glutReshapeFunc(changeSize);
    glutIdleFunc(renderScene);
    // initialize the shaders
    init();
    // enter GLUT event processing cycle
    glutMainLoop();
    return 0;
}

void init() {
    glClearColor( 0.0, 0.0, 0.0, 1.0 ); /* Set the clear color */
    lineShaderProgramID = SampleUtils::createProgramFromBuffer(lineMeshVertexShader, lineFragmentShader);
    lineVertexHandle = glGetAttribLocation(lineShaderProgramID, "vertexPosition");
}
SampleUtils is a utility class with the following methods for shader handling. The shaders lineMeshVertexShader and lineFragmentShader are defined in LineShaders.h.
unsigned int SampleUtils::createProgramFromBuffer(const char* vertexShaderBuffer, const char* fragmentShaderBuffer) {
    checkGlError("cPFB");
    // scroll down for initShader() - we never get past this point.
    GLuint vertexShader = initShader(GL_VERTEX_SHADER, vertexShaderBuffer);
    if (!vertexShader)
        return 0;
    GLuint fragmentShader = initShader(GL_FRAGMENT_SHADER,
                                       fragmentShaderBuffer);
    if (!fragmentShader)
        return 0;
    GLuint program = glCreateProgram();
    if (program)
    {
        glAttachShader(program, vertexShader);
        checkGlError("glAttachShader");
        glAttachShader(program, fragmentShader);
        checkGlError("glAttachShader");
        glLinkProgram(program);
        GLint linkStatus = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);
        if (linkStatus != GL_TRUE)
        {
            GLint bufLength = 0;
            glGetProgramiv(program, GL_INFO_LOG_LENGTH, &bufLength);
            if (bufLength)
            {
                char* buf = (char*) malloc(bufLength);
                if (buf)
                {
                    glGetProgramInfoLog(program, bufLength, NULL, buf);
                    LOG("Could not link program: %s", buf);
                    free(buf);
                }
            }
            glDeleteProgram(program);
            program = 0;
        }
    }
    return program;
}

unsigned int
SampleUtils::initShader(unsigned int shaderType, const char* source)
{
    checkGlError("initShader");
    //GLuint shader = glCreateShader((GLenum)shaderType);
    /* trying explicit enum, just in case - shader is still always 0 */
    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    LOG("SHADER %i", shader);
    if (shader)
    {
        glShaderSource(shader, 1, &source, NULL);
        glCompileShader(shader);
        GLint compiled = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        if (!compiled)
        {
            GLint infoLen = 0;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
            if (infoLen)
            {
                char* buf = (char*) malloc(infoLen);
                if (buf)
                {
                    glGetShaderInfoLog(shader, infoLen, NULL, buf);
                    LOG("Could not compile shader %d: %s",
                        shaderType, buf);
                    free(buf);
                }
                glDeleteShader(shader);
                shader = 0;
            }
        }
    }
    return shader;
}

void SampleUtils::checkGlError(const char* operation) {
    for (GLint error = glGetError(); error; error = glGetError())
        LOG("after %s() glError (0x%x)", operation, error);
}
I'm wondering if the context isn't fully initialized when glCreateShader is called. But I've tried calling init() within the callbacks as well, with no effect. My searches on this issue have turned up the advice to build a known-good example to confirm the availability of glCreateShader - if anyone has one for C++, please advise.
UPDATE:
Based on the feedback here, I checked my OpenGL support using the glewinfo utility, and it reports that this system is limited to 1.1. - https://docs.google.com/document/d/1LauILzvvxgsT3G2KdRXDTOG7163jpEuwtyno_Y2Ck78/edit?hl=en_US
e.g.
---------------------------
GLEW Extension Info
---------------------------
GLEW version 1.6.0
Reporting capabilities of pixelformat 2
Running on a GDI Generic from Microsoft Corporation
OpenGL version 1.1.0 is supported
GL_VERSION_1_1: OK
---------------
GL_VERSION_1_2: MISSING
---------------
etc.
What's strange is that with GLee I was able to compile against these extensions, though they apparently don't work. I've checked my gl.h and glext.h header files and they are current - the extensions are there. So how is this dealt with on Windows? How do you set up and link your environment so that you can develop with more than 1.1 using Cygwin and Eclipse?
The solution to this question was provided in the comments, and I'm highlighting it here in order to close this question.
All that was required was a driver upgrade to a version that supports the extensions that I'm using. So I installed NVidia's OpenGL driver, which can be obtained here - http://developer.nvidia.com/opengl-driver
It appears that my system's original NVIDIA driver had been bypassed so that the native Windows OpenGL driver was being used, which only supports OpenGL 1.1. I'd mistakenly thought that a GL_VERSION of 1.1.0 was normal on Windows, based on some bad advice I'd gotten, and the fact that I was able to compile and execute this code without errors led me to assume that the extensions were present. They were not.
I have had the same problem, but it was caused by a subtle C++ language trap: my shader was compiled from a global/static variable (a wrapper class around the shader program), which was therefore initialized before any GL context existed. Hope it can help...
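To illustrate that pitfall, here is a minimal hypothetical sketch (ShaderWrapper and g_vertexShader are made-up names, and GLUT stands in for whatever code creates the context): the global's constructor runs during static initialization, so glCreateShader is called before any GL context exists and returns 0.

#include <GL/glut.h> // plus an extension loader such as GLee/GLEW where needed; assumed only for illustration

// Hypothetical wrapper, shown only to demonstrate the static-initialization trap.
struct ShaderWrapper {
    GLuint id;
    ShaderWrapper() {
        // Runs before main() and therefore before any GL context exists,
        // so glCreateShader returns 0 here.
        id = glCreateShader(GL_VERTEX_SHADER);
    }
};

ShaderWrapper g_vertexShader; // constructed before the context is created

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutCreateWindow("context"); // a GL context exists only from this point on
    // Creating the shader here, or lazily on first use, avoids the problem.
    return 0;
}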