This is based on the debug callback example from https://www.khronos.org/opengl/wiki/Debug_Output and somewhat on https://learnopengl.com/In-Practice/Debugging.
Setting up the callback:
void GLAPIENTRY
MessageCallback( GLenum source,
GLenum type,
GLuint id,
GLenum severity,
GLsizei length,
const GLchar* message,
const void* userParam )
{
fprintf( stderr, "GL CALLBACK: %s type = 0x%x, severity = 0x%x, message = %s\n",
( type == GL_DEBUG_TYPE_ERROR ? "** GL ERROR **" : "" ),
type, severity, message );
}
And then, after window creation etc., registering it:
glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GL_TRUE);
glEnable ( GL_DEBUG_OUTPUT );
if(glDebugMessageCallback){
std::cout << "Register OpenGL debug callback " << std::endl;
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
GLuint unusedIds = 0;
//glDebugMessageCallback( MessageCallback, 0 ); also tried setting up the callback here
glDebugMessageControl(GL_DONT_CARE,
GL_DONT_CARE,
GL_DONT_CARE,
0,
&unusedIds,
GL_TRUE);
glDebugMessageCallback( MessageCallback, 0 );
}
else
std::cout << "glDebugMessageCallback not available" << std::endl;
After that I produce an error message by calling
glClear(GL_DEPTH);
which works as expected. Its source is GL_DEBUG_SOURCE_API.
However, if I set
glDebugMessageControl(GL_DEBUG_SOURCE_APPLICATION,
GL_DONT_CARE,
GL_DONT_CARE,
0,
&unusedIds,
GL_TRUE);
the message is still reported, even though, as I understand it, the GL_DEBUG_SOURCE_APPLICATION filter should suppress it.
This also occurs for other filter combinations, so I assume the call to glDebugMessageControl has no effect in my implementation.
Does anyone have an idea what I'm missing here?
Thank you!
With glDebugMessageControl you explicitly specify the messages whose state (enabled/disabled) you want to change. It does not change the state of any messages that are not matched by the filter passed to glDebugMessageControl.
If you want to disable reporting of specific debug messages, then the last parameter of glDebugMessageControl (enabled) has to be GL_FALSE.
Furthermore, glClear(GL_DEPTH) causes a GL_INVALID_VALUE error whose source is GL_DEBUG_SOURCE_API, not GL_DEBUG_SOURCE_APPLICATION, so that is the source you have to filter on:
glDebugMessageControl(
GL_DEBUG_SOURCE_API,
GL_DONT_CARE,
GL_DONT_CARE,
0, NULL,
GL_FALSE); // disable all messages with source `GL_DEBUG_SOURCE_API`
If you want to disable all messages except the API error messages, then you have to disable all messages first and then explicitly enable the API error messages:
glDebugMessageControl(
GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, NULL, GL_FALSE);
glDebugMessageControl(
GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_ERROR, GL_DONT_CARE, 0, NULL, GL_TRUE);
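Putting the two pieces together, a minimal sketch (assuming a debug context is already current and the MessageCallback from the question is used) could look like this:
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);              // deliver messages synchronously on the calling thread
glDebugMessageCallback(MessageCallback, nullptr);   // the callback defined above
// Disable everything first ...
glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, NULL, GL_FALSE);
// ... then re-enable only error messages coming from the API itself.
glDebugMessageControl(GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_ERROR, GL_DONT_CARE, 0, NULL, GL_TRUE);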
See also Debug Output - Message Components
Related
I am writing a game. I use Arch Linux most of the time, but recently I tried to run my game on Ubuntu 16.04. On Ubuntu 16.04 there is a strange error: 1280 (GL_INVALID_ENUM). It is too difficult to find what causes the error, so I wanted to see OpenGL's debug output, but I don't see that either. I noticed one thing during shader validation - validation seems to be unsuccessful, but the log is empty:
GLint status;
glValidateProgram(program_);
glGetProgramiv(program_, GL_VALIDATE_STATUS, &status);
if (status == GL_TRUE) {
return;
}
// Store log and return false
int length = 0;
glGetProgramiv(program_, GL_INFO_LOG_LENGTH, &length);
if (length > 0) {
GLchar infoLog[512];
glGetProgramInfoLog(program_, 512, nullptr, infoLog);
throw std::runtime_error(std::string("Program failed to validate:") + infoLog);
} else {
throw std::runtime_error(std::string("Program failed to validate. Unknown error"));
}
This gives me "Unknown error". Also, OpenGL's debug output cannot be seen; however, user messages are written to it successfully. Here is the code:
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
int contextFlags = 0;
SDL_GL_GetAttribute(SDL_GL_CONTEXT_FLAGS, &contextFlags);
contextFlags |= SDL_GL_CONTEXT_DEBUG_FLAG;
SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, contextFlags);
sdlWindow_ = SDL_CreateWindow(title.c_str(),
SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED,
0,
0,
SDL_WINDOW_OPENGL
| SDL_WINDOW_SHOWN
| SDL_WINDOW_FULLSCREEN_DESKTOP
| SDL_WINDOW_INPUT_GRABBED);
if (!sdlWindow_) {
throw std::runtime_error("Unable to create window");
}
SDL_Log("Window created");
glContext_ = SDL_GL_CreateContext(sdlWindow_);
if (!glContext_) {
throw std::runtime_error("Failed to init OpenGL");
}
SDL_Log("GL context created");
{
glewExperimental = GL_TRUE;
GLenum err = glewInit();
if (err != GLEW_OK) {
throw std::runtime_error(std::string("GLEW Error: ") + reinterpret_cast<const char*>(glewGetErrorString(err)));
}
}
if (glDebugMessageCallbackARB != nullptr) {
SDL_Log("GL debug is available.\n");
// Enable the debug callback
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageCallback(_openglDebugCallbackFunction, nullptr);
glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, nullptr, GL_TRUE);
glDebugMessageInsert(GL_DEBUG_SOURCE_APPLICATION, GL_DEBUG_TYPE_MARKER, 0,
GL_DEBUG_SEVERITY_NOTIFICATION, -1 , "Started debugging");
} else {
SDL_Log("GL debug is not available.\n");
}
So the main question here is why I can't see OpenGL's debug output. And, if possible, as an additional question: why does the shader validation fail without a log?
GLEW 1.x has some problems when being used in a core context (that is also why glewExperimental = GL_TRUE is needed). glewInit always generates an OpenGL error while loading the extensions. You don't get this error through the debug callback because the callback is registered after the point where the error happened.
You have kind of a chicken-and-egg problem here: you cannot set up the debug callback before initializing GLEW, but that is exactly where you want to get the debug output from. I recommend calling glGetError() right after glewInit to get rid of the one error whose origin you already know.
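A sketch of that workaround, reusing the GLEW initialization from the question:
glewExperimental = GL_TRUE;
GLenum err = glewInit();
if (err != GLEW_OK) {
    throw std::runtime_error(std::string("GLEW Error: ") + reinterpret_cast<const char*>(glewGetErrorString(err)));
}
// GLEW 1.x can leave an OpenGL error (typically GL_INVALID_ENUM) behind on core contexts;
// flush the error queue here so later glGetError() calls and debug checks are not polluted by it.
while (glGetError() != GL_NO_ERROR) {
}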
I need to establish serial communication between a Microsoft Visual C++ 2010 application on Windows and an Arduino microcontroller via USB. A motion tracking algorithm produces an X and Y coordinate which needs to be sent to the Arduino, which in turn controls two pan and tilt servos.
I am a final-year mechanical engineering student and have very little experience with Microsoft Visual Studio and C++, so please bear with me and forgive me if my terms are incorrect...
I have done extensive research on multiple forums, but cannot find an answer specific to my problem:
All the solutions that I have come across only support comms when a normal/"empty" project is created in Visual Studio. An example can be found here: Serial communication (for Arduino) using Visual Studio 2010 and C
When I try to build and debug the same body of code (which successfully runs in an "empty" project) in a "Win32 Console Application" project, I am presented with the following errors:
error C2065: 'LcommPort' : undeclared identifier
error C2228: left of '.c_str' must have class/struct/union
Unfortunately I cannot simply change my project from a "Win32 Console Application" to a normal "Empty" project due to the fact that the motion tracking algorithm necessitates the use of the console application type of project.
The main body of code that I am using is as follows (this is a simplified test source file to confirm whether comms are established between Visual C++ and the Arduino; the frequency at which an LED turns on and off is altered through the serial connection):
#include <Windows.h>
#include "ArduinoSerial.h"
#include "StdAfx.h"
int main() {
try {
ArduinoSerial arduino( "COM3" );
Sleep( 2000 ); // Initial wait to allow Arduino to boot after reset
char buffer[] = { 25, 100 };
arduino.Write( buffer, 2 ); // Send on/off delays to Arduino (if return value is 0, something went wrong)
}
catch ( const ArduinoSerialException &e ) {
MessageBoxA( NULL, e.what(), "ERROR", MB_ICONERROR | MB_OK );
}
return 0;
}
The corresponding source code, which is the home of the error, is found in line 9 of the code:
#include <algorithm>
#include <sstream>
#include <Windows.h>
#include "stdafx.h"
#include "ArduinoSerial.h"
ArduinoSerial::ArduinoSerial( const std::string commPort ) {
comm = CreateFile( TEXT( commPort.c_str() ),
GENERIC_READ | GENERIC_WRITE,
0,
NULL,
OPEN_EXISTING,
0,
NULL );
if ( comm == INVALID_HANDLE_VALUE ) {
std::ostringstream error;
error << "Unable to acquire handle for " << commPort << ": ";
DWORD lastError = GetLastError();
if ( lastError == ERROR_FILE_NOT_FOUND ) {
error << "Invalid port name";
}
else {
error << "Error: " << lastError;
}
throw ArduinoSerialException( error.str() );
}
DCB dcb;
SecureZeroMemory( &dcb, sizeof DCB );
dcb.DCBlength = sizeof DCB;
dcb.BaudRate = CBR_9600;
dcb.ByteSize = 8;
dcb.StopBits = ONESTOPBIT;
dcb.Parity = NOPARITY;
dcb.fDtrControl = DTR_CONTROL_ENABLE;
if ( !SetCommState( comm, &dcb ) ) {
CloseHandle( comm );
std::ostringstream error;
error << "Unable to set comm state: Error " << GetLastError();
throw ArduinoSerialException( error.str() );
}
PurgeComm( comm, PURGE_RXCLEAR | PURGE_TXCLEAR );
}
std::size_t ArduinoSerial::Read( char buffer[], const std::size_t size ) {
DWORD numBytesRead = 0;
BOOL success = ReadFile( comm, buffer, size, &numBytesRead, NULL );
if ( success ) {
return numBytesRead;
}
else {
return 0;
}
}
std::size_t ArduinoSerial::Write( char buffer[], const std::size_t size ) {
DWORD numBytesWritten = 0;
BOOL success = WriteFile( comm, buffer, size, &numBytesWritten, NULL );
if ( success ) {
return numBytesWritten;
}
else {
return 0;
}
}
ArduinoSerial::~ArduinoSerial() {
CloseHandle( comm );
}
ArduinoSerialException::ArduinoSerialException( const std::string message ) :
std::runtime_error( message ) {
}
Any help or advice will be really greatly appreciated.
I am presented with the following errors:
error C2065: 'LcommPort' : undeclared identifier error C2228: left of '.c_str' must have class/struct/union
This little piece of code
TEXT( commPort.c_str())
actually expands to
LcommPort.c_str()
That's why you get this compiler error.
Note that TEXT() is a preprocessor macro meant for string literals: it prefixes them with L depending on which mode (Unicode/ANSI) your project is compiled in. It obviously does not work with variables.
Use either commPort.c_str() directly, or const std::wstring commPort.
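For illustration, a rough sketch of both options inside the constructor from the question (which one you want depends on whether the project is built with UNICODE defined):
// Option 1: keep const std::string commPort and call the ANSI entry point explicitly.
comm = CreateFileA( commPort.c_str(),
                    GENERIC_READ | GENERIC_WRITE,
                    0, NULL, OPEN_EXISTING, 0, NULL );

// Option 2: change the parameter to const std::wstring commPort and call the wide entry point.
comm = CreateFileW( commPort.c_str(),
                    GENERIC_READ | GENERIC_WRITE,
                    0, NULL, OPEN_EXISTING, 0, NULL );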
So I have the following code:
extern ID3D11Device* dev;
extern ID3D11DeviceContext* devcon;
//shaders
ID3D10Blob *VS, *PS, *error;
HRESULT r;
error = 0;
r = D3DX11CompileFromFile(L"shaders.hlsl", 0, 0, "VShader", "vs_5_0", D3DCOMPILE_DEBUG, 0, 0, &VS, &error, 0);
if(FAILED(r))
{
LPCWSTR errmsg = (LPCWSTR)error->GetBufferPointer();
MessageBox(hWnd, errmsg, L"error", MB_OK);
return;
}
error = 0;
r = D3DX11CompileFromFile(L"shaders.hlsl", 0, 0, "PShader", "ps_5_0", D3DCOMPILE_DEBUG, 0, 0, &PS, &error, 0);
if(FAILED(r))
{
LPCWSTR errmsg = (LPCWSTR)error->GetBufferPointer();
MessageBox(hWnd, errmsg, L"error", MB_OK);
return;
}
// encapsulate both shaders into shader objects
r = dev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &vertexShader);
r = dev->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pixelShader);
devcon->VSSetShader(this->vertexShader, 0, 0);
devcon->PSSetShader(this->pixelShader, 0, 0);
Running the code, D3DX11CompileFromFile returns S_OK for both the vertex and pixel shaders; however, when the code hits CreateVertexShader() it throws an access violation, and for the life of me I can't figure out why. I've probably done something rather stupid; I just can't seem to figure it out.
The most common reasons for that kind of error are that either dev or VS is null/uninitialized. If the access violation occurs at an address like 0x00000xxx, it's a null pointer; if it's at 0xCCCCCxxx, it's an uninitialized pointer (assuming you're running under Visual Studio). It is also good practice to zero out interface pointers (such as VS and PS in your code) before they are allocated and after they are freed (when the pointer is about to go out of scope). This prevents a double free and makes sure a breakpoint set on the pointer actually trips when it is not allocated correctly or is somehow overwritten.
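A defensive sketch along those lines, using the names from the question (error handling is only indicated):
// Guard against a null device or a null compiled-shader blob before using them.
if (dev == nullptr || VS == nullptr)
{
    // report the problem instead of dereferencing a bad pointer
    return;
}
HRESULT hr = dev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &vertexShader);
if (FAILED(hr))
{
    // report hr instead of continuing with an invalid vertexShader
    return;
}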
Well, I'm no pro in DirectX, but if I'm correct, the C++ build compiles a .hlsl file into a .cso file. Therefore you should change the D3DX11CompileFromFile parameter from L"shaders.hlsl" to L"shaders.cso" and do the same with the other call to the function.
This should allow the program to correctly read the shader files.
I got this error and I don't know why; I just followed exactly what he does, and he doesn't get this error. Here is the code.
//Main application loop
MSG msg = {0};
while(WM_QUIT != msg.message())
{
if(PeekMessage(&msg, NULL, NULL, NULL, PM_Remove))
{
//Translate message
TranslateMessage(&msg);
//Dispatch message
DispatchMessage(&msg);
}
}
And here are the errors:
error C2064: term does not evaluate to a function taking 0 arguments
fatal error C1903: unable to recover from previous error(s); stopping compilation
And when I click on them, they all point to the while loop.
The message member of the MSG structure is a field, not a method. You should access it instead of calling it:
while (WM_QUIT != msg.message) {
// ...
}
There are other issues in your code snippet. First, C++ is a case-sensitive language, so the last argument to PeekMessage() should be PM_REMOVE instead of PM_Remove.
In addition, PeekMessage() does not block if the message queue is empty, so your code will end up consuming 100% of the CPU core it runs on. You can use GetMessage() instead, which blocks if no message is available and would allow you to remove the explicit test for WM_QUIT:
MSG msg = { 0 };
while (GetMessage(&msg, NULL, 0, 0)) {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
I have a Windows build environment using Cygwin and GCC, and am linking to the libraries for GLEE, GLUT, and opengl32. This is a Win32 build.
All calls to glCreateShader are returning 0, yet I'm not picking up any errors. The following is based on the Lighthouse tutorials for GLUT and GLSL, so the sequence of GL operations should be correct.
Here's the relevant code:
#define WIN32
#include <stdio.h>
#include <GL/GLee.h>
#include <GL/glut.h>
#include "SampleUtils.h"
#include "LineShaders.h"
GLint lineVertexHandle = 0;
unsigned int lineShaderProgramID;
...
int main(int argc, char **argv) {
// init GLUT and create window
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(100,100);
glutInitWindowSize(320,320);
glutCreateWindow("Lighthouse3D Tutorials");
// register callbacks
glutDisplayFunc(renderScene);
glutReshapeFunc(changeSize);
glutIdleFunc(renderScene);
// initialize the shaders
init();
// enter GLUT event processing cycle
glutMainLoop();
return 0;
}
void init() {
glClearColor( 0.0, 0.0, 0.0, 1.0 ); /* Set the clear color */
lineShaderProgramID = SampleUtils::createProgramFromBuffer(lineMeshVertexShader,lineFragmentShader);
lineVertexHandle = glGetAttribLocation(lineShaderProgramID,"vertexPosition");
}
SampleUtils is a utility class with the following methods for shader handling. The shaders lineMeshVertexShader and lineFragmentShader are defined in LineShaders.h.
unsigned int SampleUtils::createProgramFromBuffer(const char* vertexShaderBuffer, const char* fragmentShaderBuffer) {
checkGlError("cPFB");
// scroll down for initShader() - we never get past this point.
GLuint vertexShader = initShader(GL_VERTEX_SHADER, vertexShaderBuffer);
if (!vertexShader)
return 0;
GLuint fragmentShader = initShader(GL_FRAGMENT_SHADER,
fragmentShaderBuffer);
if (!fragmentShader)
return 0;
GLuint program = glCreateProgram();
if (program)
{
glAttachShader(program, vertexShader);
checkGlError("glAttachShader");
glAttachShader(program, fragmentShader);
checkGlError("glAttachShader");
glLinkProgram(program);
GLint linkStatus = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);
if (linkStatus != GL_TRUE)
{
GLint bufLength = 0;
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &bufLength);
if (bufLength)
{
char* buf = (char*) malloc(bufLength);
if (buf)
{
glGetProgramInfoLog(program, bufLength, NULL, buf);
LOG("Could not link program: %s", buf);
free(buf);
}
}
glDeleteProgram(program);
program = 0;
}
}
return program;
}
unsigned int
SampleUtils::initShader(unsigned int shaderType, const char* source)
{
checkGlError("initShader");
//GLuint shader = glCreateShader((GLenum)shaderType);
/* trying explicit enum, just in case - shader is still always 0 */
GLuint shader = glCreateShader(GL_VERTEX_SHADER);
LOG("SHADER %i", shader);
if (shader)
{
glShaderSource(shader, 1, &source, NULL);
glCompileShader(shader);
GLint compiled = 0;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled)
{
GLint infoLen = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
if (infoLen)
{
char* buf = (char*) malloc(infoLen);
if (buf)
{
glGetShaderInfoLog(shader, infoLen, NULL, buf);
LOG("Could not compile shader %d: %s",
shaderType, buf);
free(buf);
}
glDeleteShader(shader);
shader = 0;
}
}
}
return shader;
}
void SampleUtils::checkGlError(const char* operation) {
for (GLint error = glGetError(); error; error = glGetError())
LOG("after %s() glError (0x%x)", operation, error);
}
I'm wondering if the context isn't fully initialized when glCreateShader is called, but I've tried calling init() within the callbacks as well, with no effect. My searches on this issue have turned up the advice to build a known-good example to confirm the availability of glCreateShader - if anyone has one for C++, please advise.
UPDATE:
Based on the feedback here, I checked my OpenGL support using the glewinfo utility, and it reports that this system is limited to OpenGL 1.1 - https://docs.google.com/document/d/1LauILzvvxgsT3G2KdRXDTOG7163jpEuwtyno_Y2Ck78/edit?hl=en_US
e.g.
---------------------------
GLEW Extension Info
---------------------------
GLEW version 1.6.0
Reporting capabilities of pixelformat 2
Running on a GDI Generic from Microsoft Corporation
OpenGL version 1.1.0 is supported
GL_VERSION_1_1: OK
---------------
GL_VERSION_1_2: MISSING
---------------
etc.
What's strange is that with GLEE I was able to compile code using these extensions, though they apparently don't work. I've checked my gl.h and glext.h header files and they are current - the extensions are there. So how is this dealt with on Windows? How do you set up and link your environment so that you can develop with more than 1.1 using Cygwin and Eclipse?
The solution to this question was provided in the comments, and I'm highlighting it here in order to close this question.
All that was required was a driver upgrade to a version that supports the extensions that I'm using. So I installed NVidia's OpenGL driver, which can be obtained here - http://developer.nvidia.com/opengl-driver
It appears that my system's original NVidia driver had been subverted so that the native Windows OpenGL driver was being used, which only supports OpenGL 1.1. I'd mistakenly thought that a GL_VERSION of 1.1.0 was normal on Windows - based on some bad advice I'd gotten. And the fact that I was able to compile and execute this code without errors led me to assume that the extensions were present. They were not.
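As a cheap sanity check (a sketch, not part of the original code), printing the version and renderer strings right after context creation makes this fallback obvious - the software path reports "GDI Generic" and version 1.1.0:
// Quick check for the Microsoft software fallback ("GDI Generic", OpenGL 1.1).
printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));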
I have had the same problem, but it was caused by a silly C++ language pitfall: my shader was compiled in a global/static variable (a wrapper class around the shader program), which was therefore initialized before a GL context existed. Hope it can help...
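A hedged sketch of the pattern and one way around it (ShaderProgram, vertexSrc and fragmentSrc are made-up names for illustration): construct the wrapper lazily, after the GL context exists, instead of at static initialization time.
#include <memory>

// Problematic: a static/global object runs its constructor before main(),
// i.e. before any GL context exists, so glCreateShader inside it fails.
// static ShaderProgram g_lineShader(vertexSrc, fragmentSrc);

// Safer: defer construction until after the context has been created.
static std::unique_ptr<ShaderProgram> g_lineShader;

void initAfterContextCreation() {
    g_lineShader = std::make_unique<ShaderProgram>(vertexSrc, fragmentSrc);
}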