QGLShaderProgram will not compile any shader since upgrading to Qt5 - c++

I am finding that QGLShaderProgram is consistently failing to compile any shader and providing no error log. Here are the symptoms:
QGLShaderProgram reports that it failed to compile but produces an empty error log. If I try to bind the shader, an exception is thrown.
I can compile a shader using glCompileShader without problem. However, the first time I try to compile this way after QGLShaderProgram has failed, the compile fails with this error log:
ERROR: error(#270) Internal error: Wrong symbol table level
ERROR: 0:2: error(#232) Function declarations cannot occur inside of functions:
main
ERROR: error(#273) 2 compilation errors. No code generated
Following that one failure, subsequent compiles via glCompileShader work fine.
The problem has arisen only since upgrading from Qt 4.8 to 5.2. Nothing else has changed on this machine.
I have tested on two PCs, one with an ATI Radeon HD 5700, the other with an AMD FirePro V7900. The problem only appears on the Radeon PC.
Here is my test code demonstrating the problem:
main.cpp
#include <QApplication>
#include "Test.h"

int main(int argc, char* argv[])
{
    QApplication* app = new QApplication(argc, argv);
    Drawer* drawer = new Drawer;
    return app->exec();
}
Test.h
#pragma once
#include <QObject>
#include <QTimer>
#include <QWindow>
#include <QOpenGLContext>
#include <QOpenGLFunctions>

class Drawer : public QWindow, protected QOpenGLFunctions
{
    Q_OBJECT
public:
    Drawer();
    QTimer* mTimer;
    QOpenGLContext* mContext;
    int frame;
public Q_SLOTS:
    void draw();
};
Test.cpp
#include "Test.h"
#include <QGLShaderProgram>
#include <iostream>
#include <ostream>
using namespace std;
Drawer::Drawer()
: mTimer(new QTimer)
, mContext(new QOpenGLContext)
, frame(0)
{
mContext->create();
setSurfaceType(OpenGLSurface);
mTimer->setInterval(40);
connect(mTimer, SIGNAL(timeout()), this, SLOT(draw()));
mTimer->start();
show();
}
const char* vertex = "#version 110 \n void main() { gl_Position = gl_Vertex; }";
const char* fragment = "#version 110 \n void main() { gl_FragColor = vec4(0.0,0.0,0.0,0.0); }";
void Drawer::draw()
{
    mContext->makeCurrent(this);
    if (frame == 0) {
        initializeOpenGLFunctions();
    }

    // Compile using QGLShaderProgram. This always fails.
    if (frame < 5)
    {
        QGLShaderProgram* prog = new QGLShaderProgram;
        bool f = prog->addShaderFromSourceCode(QGLShader::Fragment, fragment);
        cout << "fragment " << f << endl;
        bool v = prog->addShaderFromSourceCode(QGLShader::Vertex, vertex);
        cout << "vertex " << v << endl;
        bool link = prog->link();
        cout << "link " << link << endl;
    }

    // Manual compile using OpenGL directly. This works except for the first
    // time it follows the block above.
    {
        GLuint prog = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(prog, 1, &fragment, 0);
        glCompileShader(prog);
        GLint success = 0;
        glGetShaderiv(prog, GL_COMPILE_STATUS, &success);
        GLint logSize = 0;
        glGetShaderiv(prog, GL_INFO_LOG_LENGTH, &logSize);
        GLchar* log = new GLchar[8192];
        glGetShaderInfoLog(prog, 8192, 0, log);
        cout << "manual compile " << success << endl << log << endl;
        delete[] log;
    }

    glClearColor(1, 1, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    mContext->swapBuffers(this);
    frame++;
}
Elsewhere, I have tested using QGLWidget, and on a project that uses GLEW instead of QOpenGLFunctions, with exactly the same results.
The version of Qt I'm linking against was built with the following configuration:
configure -developer-build -opensource -nomake examples -nomake tests -mp -opengl desktop -icu -confirm-license
Any suggestions? Or shall I just send this in as a bug report?
Update
In response to peppe's comments:
1) What does QOpenGLDebugLogger say?
The only thing I can get from QOpenGLDebugLogger is
QWindowsGLContext::getProcAddress: Unable to resolve 'glGetPointerv'
This is printed to the console when I initialize it (it is not a debug event firing). It happens even though mContext->hasExtension(QByteArrayLiteral("GL_KHR_debug")) returns true and I'm initializing the logger within the first frame's draw() function.
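For reference, the initialization is roughly this (a minimal sketch of what happens in the first frame's draw(); note that some drivers only deliver messages if the context was created with QSurfaceFormat::DebugContext):
#include <QDebug>
#include <QOpenGLDebugLogger>

// Inside draw(), first frame only, with mContext current on this window.
if (mContext->hasExtension(QByteArrayLiteral("GL_KHR_debug"))) {
    QOpenGLDebugLogger* logger = new QOpenGLDebugLogger(this);
    if (logger->initialize()) {
        connect(logger, &QOpenGLDebugLogger::messageLogged,
                [](const QOpenGLDebugMessage& msg) { qDebug() << msg; });
        logger->startLogging(QOpenGLDebugLogger::SynchronousLogging);
    }
}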
2) Can you print the compile log of the QOpenGLShaders even if they compile successfully?
I cannot successfully compile a QOpenGLShader or QGLShader at any point, so I'm not able to test this. However, when compiling successfully with plain GL functions, the log comes back blank.
3) Which GL version did you get from the context? (Check with QSurfaceFormat).
I've tried with versions 3.0, 3.2, 4.2, all with the same result.
4) Please set the same QSurfaceFormat on both the context and the window before creating them
5) Remember to create() the window
I've implemented both of these now and the result is the same.
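Concretely, the setup is now roughly this (a sketch; the version is one of the several I tried, and the compatibility profile matches the #version 110 shaders):
// In Drawer's constructor:
QSurfaceFormat fmt;
fmt.setVersion(3, 2);
fmt.setProfile(QSurfaceFormat::CompatibilityProfile);

setSurfaceType(OpenGLSurface);
setFormat(fmt);           // same format on the window...
create();                 // ...which is now explicitly created

mContext->setFormat(fmt); // ...and on the context
mContext->create();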
I've just tested on a third PC and that has no issues. So it is specific to this computer, which, incidentally, happens to be a Mac Pro running Windows under Boot Camp. It has had absolutely no trouble in any other context running the latest ATI drivers, but I can only conclude that there is a bug somewhere between the ATI drivers, this computer's graphics chip and QOpenGLShaderProgram.
I think I'm unlikely to find a solution, so I'm giving up. Thank you for all your input!

Related

New Vulkan project in CLion on Mac OS will not create VkInstance

After my first successful attempt at a 3D engine using Java and OpenGL (LWJGL3), I have decided to try my hand at Vulkan, using C++.
I have barely any experience with C/C++, and I am aware of Vulkan's steep learning curve. This is, however, not a problem.
I decided to follow this tutorial: https://vulkan-tutorial.com/Introduction
It showed me how to create a new Vulkan project using Xcode (as I am on macOS Mojave). I would, however, like to continue the rest of the tutorial in CLion, as I will be switching between multiple operating systems.
I tried my hand at creating a CLion project and succeeded in making my first CMakeLists file, however something seems to be wrong. The file currently consists of the following:
cmake_minimum_required(VERSION 3.12)
project(VulkanTesting)
set(CMAKE_CXX_STANDARD 14)
add_executable(VulkanTesting main.cpp)
include_directories(/usr/local/include)
include_directories(/Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/include)
target_link_libraries(VulkanTesting /usr/local/lib/libglfw.3.3.dylib)
target_link_libraries(VulkanTesting /Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/lib/libvulkan.1.dylib)
target_link_libraries(VulkanTesting /Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/lib/libvulkan.1.1.92.dylib)
# Don't know if I need the next two lines
link_directories(/usr/local/lib)
link_directories(/Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/lib)
The reason I showed the above file will become apparent in the question.
The 'Program' so far is the following:
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>

#include <iostream>
#include <stdexcept>
#include <functional>
#include <cstdlib>
#include <vector>

const int WIDTH = 800;
const int HEIGHT = 600;

class HelloTriangleApplication {
public:
    void run() {
        initWindow();
        initVulkan();
        mainLoop();
        cleanup();
    }

private:
    GLFWwindow* window;
    VkInstance instance;

    void initWindow() {
        glfwInit();
        glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
        glfwWindowHint(GLFW_RESIZABLE, GLFW_FALSE);
        window = glfwCreateWindow(WIDTH, HEIGHT, "My first Vulkan window", nullptr, nullptr);
    }

    void initVulkan() {
        createInstance();
    }

    void createInstance() {
        // Instantiate Application Info
        VkApplicationInfo applicationInfo = {};
        applicationInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        applicationInfo.pApplicationName = "Hello Triangle";
        applicationInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
        applicationInfo.pEngineName = "No Engine";
        applicationInfo.engineVersion = VK_MAKE_VERSION(1, 0, 0);
        applicationInfo.apiVersion = VK_API_VERSION_1_0;

        // Instantiate Instance Creation Info
        VkInstanceCreateInfo createInfo = {};
        createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        createInfo.pApplicationInfo = &applicationInfo;

        // Get GLFW platform-specific extensions
        uint32_t glfwExtensionCount = 0;
        const char** glfwExtensions;
        glfwExtensions = glfwGetRequiredInstanceExtensions(&glfwExtensionCount);

        // Fill in required extensions in Instance Creation Info
        createInfo.enabledExtensionCount = glfwExtensionCount;
        createInfo.ppEnabledExtensionNames = glfwExtensions;

        // For validation layers; this is a later step in the tutorial.
        createInfo.enabledLayerCount = 0;

        // Create the Vulkan instance, and check if it was successful.
        VkResult result = vkCreateInstance(&createInfo, nullptr, &instance);
        if (result != VK_SUCCESS) {
            std::cout << "glfwExtensionCount: " << glfwExtensionCount << "\n";
            std::cout << "glfwExtensionNames: " << &glfwExtensions << "\n";
            std::cout << "result: " << result << "\n";
            throw std::runtime_error("Failed to create Vulkan Instance");
        }
    }

    void mainLoop() {
        while (!glfwWindowShouldClose(window)) {
            glfwPollEvents();
        }
    }

    void cleanup() {
        glfwDestroyWindow(window);
        glfwTerminate();
    }
};

int main() {
    HelloTriangleApplication app;
    try {
        app.run();
    } catch (const std::exception& e) {
        std::cerr << e.what() << std::endl;
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
The problem I am having is that when I try to run the program, it will not create a VkInstance; the function returns VK_ERROR_INCOMPATIBLE_DRIVER. I doubt the driver is actually incompatible: for one, I have run the demo applications that came with the Vulkan SDK, and for another, I have been able to run the exact same 'program' in Xcode. When I investigated a bit further, I noticed that glfwGetRequiredInstanceExtensions returns no extensions when the program is run from CLion, but does return one in the Xcode equivalent.
This all leads me to believe that I have done something wrong in linking the libraries/frameworks in the CMake file, because I am aware that Vulkan is not directly supported on macOS, but instead (somehow?) passes through a layer to communicate with Metal.
Do I need to specify a way for the program to pass its Vulkan functionality through a Metal layer? Is this done automagically in Xcode, or is there another problem with my approach?
Any help would be greatly appreciated!
You might want to look at the MacOS Getting Started Guide on the LunarXchange website and in your SDK. There is a section at the end that shows how to use CMake to build a Vulkan app and run it on MacOS. You also may want to use the FindVulkan CMake module instead of manually setting the include directories and the target link libraries.
But my first guess about your specific problem is that you may not be setting the VK_ICD_FILENAMES environment variable. You are correct in your observation that there is no direct support for Vulkan. Instead, the support is provided by the MoltenVK library which is treated as a Vulkan driver. But this "driver" is not installed in any system directory by the SDK. The SDK is just unzipped in your home directory structure, so you must tell the Vulkan loader where to find it via this environment variable.
Again, the CMake section at the end of the Getting Started Guide demonstrates the use of this environment variable. And the entire guide goes into additional detail about how the various Vulkan and MoltenVK components work.
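For example, a sketch of the environment-variable approach (you can also set it in CLion's run configuration; the manifest path below follows the SDK layout from the question and may differ between SDK versions):
#include <cstdlib>

// Hypothetical: point the Vulkan loader at MoltenVK's ICD manifest
// before the first Vulkan call. Adjust the path to your SDK.
setenv("VK_ICD_FILENAMES",
       "/Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/"
       "macOS/etc/vulkan/icd.d/MoltenVK_icd.json",
       1 /* overwrite any existing value */);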

Error GLSL incorrect version 450

I have a certain OpenGL application which I compiled in the past, but now can't on the same machine. The problem seems to be that the fragment shader is not compiling properly.
I'm using:
GLEW 2.1.0
GLFW 3.2.1
All necessary context is created at the beginning of the program. Here's what my program-creation function looks like:
std::string vSource, fSource;
try
{
    vSource = getSource(vertexShader, "vert");
    fSource = getSource(fragmentShader, "frag");
}
catch (std::runtime_error& e)
{
    std::cout << e.what() << std::endl;
}

GLuint vsID, fsID;
try
{
    vsID = compileShader(vSource.c_str(), GL_VERTEX_SHADER); // Source char* was checked and looks good
    fsID = compileShader(fSource.c_str(), GL_FRAGMENT_SHADER);
}
catch (std::runtime_error& e)
{
    std::cout << e.what() << std::endl; // "incorrect glsl version 450" thrown here
}

GLuint programID;
try
{
    programID = createProgram(vsID, fsID); // Debugging fails here
}
catch (std::runtime_error& e)
{
    std::cout << e.what() << std::endl;
}

glDeleteShader(vsID);
glDeleteShader(fsID);
return programID;
My main:
/* ---------------------------- */
/* OPENGL CONTEXT SET WITH GLEW */
/* ---------------------------- */
static bool contextFlag = initializer::createContext(vmath::uvec2(1280, 720), "mWs", window);
std::thread* checkerThread = new std::thread(initializer::checkContext, contextFlag);

/* --------------------------------- */
/* STATIC STATE SINGLETON DEFINITION */
/* --------------------------------- */
Playing Playing::playingState; // The failure comes from here, which tries to create a program

/* ---- */
/* MAIN */
/* ---- */
int main(int argc, char** argv)
{
    checkerThread->join();
    delete checkerThread;
    Application* app = new Application();
    ...
    return 0;
}
Here is an example of the fragmentShader file:
#version 450 core

out vec4 fColor;

void main()
{
    fColor = vec4(0.5, 0.4, 0.8, 1.0);
}
And these are the errors I catch:
[Engine] Glew initialized! Using version: 2.1.0
[CheckerThread] Glew state flagged as correct! Proceeding to mainthread!
Error compiling shader: ERROR: 0:1: '' : incorrect GLSL version: 450
ERROR: 0:7: 'fColor' : undeclared identifier
ERROR: 0:7: 'assign' : cannot convert from 'const 4-component vector of float' to 'float'
My specs are the following:
Intel HD 4000
Nvidia GeForce 840M
I should note that I compiled shaders on this same machine before; I can't anymore after a disk format, even though every driver is up to date.
As stated in the comments, the problem was which graphics card the IDE was running on. Since Windows defaults to the integrated Intel HD 4000 card, making the NVIDIA card the OS's preferred default fixed the problem.
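If changing the control-panel default is undesirable, NVIDIA also documents an exported symbol that asks Optimus drivers to prefer the discrete GPU for the process; a sketch (Windows/MSVC only, and honored at the driver's discretion):
// Request the NVIDIA GPU on Optimus systems (place in any translation unit).
extern "C" {
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
}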

OpenGL querying GL_COMPILE_STATUS returns incorrect values

I'm having an issue with compiling GLSL code. When I try to print whether my shader was compiled correctly by using glGetShaderiv(), my program sometimes prints out the wrong result. For example, with this shader (test.vert):
#version 410
void main()
{
}
and using the following code:
#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <iostream>
#include <fstream>
#include <string>

int main() {
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(200, 200, "OpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glewInit();

    std::string fileText = "";
    std::string textBuffer = "";
    std::ifstream fileStream{ "test.vert" };
    while (fileStream.good()) {
        getline(fileStream, textBuffer);
        fileText += textBuffer;
    }

    GLuint vertShaderID = glCreateShader(GL_VERTEX_SHADER);
    const char* vertShaderText = fileText.c_str();
    glShaderSource(vertShaderID, 1, &vertShaderText, NULL);
    glCompileShader(vertShaderID);

    GLint vertCompiled;
    glGetShaderiv(vertShaderID, GL_COMPILE_STATUS, &vertCompiled);
    if (vertCompiled != GL_TRUE) {
        std::cerr << "vert shader did not compile." << std::endl;
    }

    glfwTerminate();
    system("PAUSE");
    return 0;
}
the program outputs that the shader did not compile, although I believe it should have. I have tested many other shaders, for example by putting a stray 'a' or another letter in the middle of a word in the shader code, and I'm still getting incorrect output (that test produced no error output).
I have also tried printing out the value of 'fileText' and it was correct (the same as in test.vert). What am I doing wrong?
I'm using a 64-bit Windows system; the supported OpenGL version is 4.40.
getline clips off the \n. That means that your entire file will not have any line breaks. It's all on one line, and therefore looks like this:
#version 410 void main() { }
That's not legal GLSL.
Please stop reading files line-by-line. If you want to read an entire file, then read the entire file.
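For instance, a minimal sketch (readFile is a hypothetical helper name):
#include <fstream>
#include <sstream>
#include <string>

// Read an entire file, newlines included.
std::string readFile(const char* path) {
    std::ifstream in(path, std::ios::binary);
    std::ostringstream contents;
    contents << in.rdbuf();   // slurp the whole stream, '\n' and all
    return contents.str();
}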

glActiveTexture causes "has stopped working" error [duplicate]

This question already has answers here:
Why does glGetString(GL_VERSION) return null / zero instead of the OpenGL version?
(2 answers)
Closed 7 years ago.
My IDE can't recognize the glActiveTexture method.
I have installed the freeglut and GLEW libraries. When I build my project the IDE doesn't show any errors, but when I run the program I get a "has stopped working" type error. I don't really know how to fix it or what causes this problem.
Another thing is that the IDE knows the name of the function (it lists glActiveTexture with a # symbol), but I guess it doesn't know the function itself (it should show a () symbol, just like the first function).
I hope someone knows a solution to this problem.
Edit1
Here is mine example code:
#define GLEW_STATIC
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glew.h>
#include <GL/gl.h>
#include <GL/glut.h>
#endif
#include <iostream>
#include <stdlib.h>
using namespace std;

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        cout << "Error: " << glewGetErrorString(err) << endl;
    }
    else cout << "Initialized" << endl;
    return EXIT_SUCCESS;
}
and I'm getting Error: Missing GL version
Here is glewinfo:
GLEW version 1.13.0
Reporting capabilities of pixelformat 3
Running on a Intel(R) HD Graphics 4600 from Intel
OpenGL version 4.3.0 - Build 10.18.10.3960 is supported
You need to create an OpenGL rendering context before calling glewInit:
glutInit(&argc, argv);
glutCreateWindow("My Program");
GLenum err = glewInit();
See here for details.
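As a quick sanity check, something like this should run without crashing (a sketch; GLEW exposes glActiveTexture as a function pointer, so it can be tested before use):
glutInit(&argc, argv);
glutCreateWindow("My Program");   // creates the context glewInit needs
if (glewInit() == GLEW_OK && glActiveTexture != nullptr) {
    glActiveTexture(GL_TEXTURE0); // no longer a call through a null pointer
}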

glewinit() apparently successful, sets error flag anyway

I have recently migrated from Windows to Linux (Debian, 64-bit) and am trying to get a GPGPU development environment up and running, so I am testing a program which worked under Windows.
Compiling and linking go fine, but when I run the program I get some odd errors. I am using GLEW and freeglut.
First snippet: OpenGL only
i = 1;
info = PROGRAM_NAME;
glutInitContextVersion(4,2);
glutInit(&i, &info);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowSize(W_SIZEX, W_SIZEY);
glutInitWindowPosition(W_POSX, W_POSY);
glutCreateWindow(info);
glClearColor(1.0,1.0,1.0,0);
/**/
printf("Before glewInit: %i\n", glGetError());
/**/
printf("glewInit returns: %i\n", glewInit());
/**/
printf("After glewInit: %i\n", glGetError());
/**/
From which I get the following output:
Before glewInit: 0
glewInit returns: 0
After glewInit: 1280
This is an invalid enum error. I don't know what's causing it, but I suspect it might be related to the next error I get, later in the program's execution.
Second snippet: OpenCL-OpenGL interop
/* BUFFERS */
(*BFR).C[0] = clCreateBuffer(*CTX, CL_MEM_READ_WRITE, SD, 0, 0);
(*BFR).C[1] = clCreateBuffer(*CTX, CL_MEM_READ_WRITE, SD, 0, &i);
dcl(i);
glGenBuffers(2, (*BFR).G);
glBindBuffer(GL_ARRAY_BUFFER, (*BFR).G[0]);
glBufferData(GL_ARRAY_BUFFER, SI, 0, GL_DYNAMIC_DRAW);
(*BFR).D[0] = clCreateFromGLBuffer(*CTX, CL_MEM_WRITE_ONLY, (*BFR).G[0], &i);
dcl(i);
glBindBuffer(GL_ARRAY_BUFFER, 0);
Here, the dcl(int) method just decodes the CL error code. When I run this, I get a CL_INVALID_GL_OBJECT error from clCreateFromGLBuffer(). However, OpenGL has no issues generating, binding or unbinding the buffers in question. The OpenCL context is apparently valid, generating no errors on creation or query. Everything works in VS2010 on Windows 7 64-bit.
Compilation Details
Here are the relevant includes:
/* OPENGL */
#include "GL/glew.h"
#include "GL/freeglut.h"
/* OPENCL */
#include "CL/cl.h"
#include "CL/cl_gl.h"
I am using GCC and linking like so:
gcc -w -I./include CLGL.c -o ~/Templates/GOL-CLGL/run/a.out -lGLEW -lGLU -lglut -lGL -lOpenCL;
Compilation and linking result in no errors (plenty of warnings about pointer abuse, but I doubt that's the culprit).
I'm currently out of ideas on how to debug this. Can anyone suggest further steps?
I had this issue recently too, so here is the answer:
OpenGL: glGetError() returns invalid enum after call to glewInit()
So you can safely discard that error.
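In other words, something like this (sketch):
glewExperimental = GL_TRUE;            // commonly set before glewInit on core contexts
GLenum status = glewInit();
if (status != GLEW_OK) {
    fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(status));
}
while (glGetError() != GL_NO_ERROR) {} // drain the spurious GL_INVALID_ENUM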