glEnable(GL_FOG) does not work; gives invalid enumerant error - opengl

When I include the line:
glEnable(GL_FOG);
In my OpenGL 3 program, I get the following error:
GL Error: invalid enumerant
Exception caught: GL Error: invalid enumerant
Program ended with exit code: 255
Is there a specific reason for this?
Thanks

As people pointed out in the comments under your question, your error stems from the fact that you are using deprecated, fixed-function pipeline functionality while an OpenGL 3 core profile is active. You are strongly encouraged to use the programmable pipeline and calculate the fog effect in shaders, and here you can learn how to do that.
As a side note, many newcomers to OpenGL still tend to use the deprecated API. Please don't do it unless you absolutely must, for your own sake. Programmable OpenGL is a bit harder to start with, but it gives you much more freedom and far more possibilities for what you can do with your GPU.
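To give a rough idea of what computing fog in shaders looks like, here is a minimal, hypothetical fragment shader, embedded as a C++ string the way it might later be handed to glShaderSource. The uniform names (uFogColor, uFogDensity), the varyings, and the exponential fog formula are illustrative assumptions, not something from the original question:

// Sketch only: a core-profile fragment shader reproducing GL_EXP-style fog,
// replacing the removed glEnable(GL_FOG) fixed-function path.
const char* fogFragmentShader = R"GLSL(
#version 330 core

in vec3 vViewPos;      // fragment position in view space (from the vertex shader)
in vec4 vSurfaceColor; // lit surface color computed earlier

uniform vec3  uFogColor;   // stands in for the old GL_FOG_COLOR
uniform float uFogDensity; // stands in for the old GL_FOG_DENSITY

out vec4 fragColor;

void main()
{
    float dist = length(vViewPos);                           // distance from the eye
    float fog  = clamp(exp(-uFogDensity * dist), 0.0, 1.0);  // GL_EXP fog factor
    fragColor  = vec4(mix(uFogColor, vSurfaceColor.rgb, fog), vSurfaceColor.a);
}
)GLSL";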

Related

Trying to implement egui with OpenGL and Sdl2 in rust. "Could not create GL context: GLXBadProfileARB"

I am trying to use this library https://github.com/ArjunNair/egui_sdl2_gl in my Rust project. Before I tried to implement it, my program worked as expected (just a black window). Now I am getting a runtime error from my OpenGL context and I can't figure out why:
Finished dev [unoptimized + debuginfo] target(s) in 0.03s
Running `target/debug/ivy`
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Could not create GL context: GLXBadProfileARB"', src/engine/engine.rs:52:53
stack backtrace:
0: rust_begin_unwind
at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/panicking.rs:143:14
2: core::result::unwrap_failed
at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/result.rs:1749:5
3: core::result::Result<T,E>::unwrap
at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/result.rs:1065:23
4: ivy::engine::engine::IvyWindow::new
at ./src/engine/engine.rs:52:26
5: ivy::main
at ./src/main.rs:9:19
6: core::ops::function::FnOnce::call_once
at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
I have done some research but I couldn't find anything that helped me, probably because I don't know what to look for.
Also, I am pretty new to Rust and my code is probably quite unoptimized and unsafe overall, but that's fine for now (I guess). You can find my project here:
https://gitlab.n-j.me/janke/ivy/-/tree/dev
The latest version is on the "dev" branch.
In general, the GLXBadProfileARB error is generated if the value of GLX_CONTEXT_PROFILE_MASK_ARB contains more than a single valid profile bit.
But it looks like you are facing a known bug in the Mesa library, which GLFW works around here:
https://chromium.googlesource.com/external/github.com/glfw/glfw/+/refs/heads/ci/src/glx_context.c#587
Maybe it would be a good idea to apply a similar workaround in Rust, or to update Mesa?
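For reference, this is roughly how the profile mask is populated when a GLX context is created directly; the SDL2 bindings used by the Rust crate do something similar under the hood. This is a hypothetical C++ sketch assuming the GLX_ARB_create_context_profile extension is available; the key point is that GLX_CONTEXT_PROFILE_MASK_ARB must contain exactly one profile bit:

#include <GL/glx.h>
#include <GL/glxext.h>

// Sketch: request a 3.3 core-profile context with a single profile bit set.
GLXContext createCoreContext(Display* display, GLXFBConfig fbConfig)
{
    // The creation function is an extension entry point, looked up at run time.
    PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB =
        (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddressARB(
            (const GLubyte*)"glXCreateContextAttribsARB");
    if (!glXCreateContextAttribsARB)
        return nullptr;

    const int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 3,
        // Exactly one profile bit: core OR compatibility, never both,
        // otherwise the server answers with GLXBadProfileARB.
        GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
        None
    };
    return glXCreateContextAttribsARB(display, fbConfig,
                                      /*shareList=*/nullptr,
                                      /*direct=*/True, attribs);
}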

How to find name of error ID for NVIDIA OpenGL drivers?

I have an error message (which is mostly a warning, not so much an actual error).
Using glDebugMessageCallback(), the error ID that is returned, in decimal, is 131186 (the error ID is in the same class of enumerators as GL_NO_ERROR, GL_INVALID_ENUM, ...).
I want to read the documentation for this value, but I can't seem to find it by searching for it. It is not an official OpenGL enumerator value, so I assume it is driver specific (NVIDIA).
EDIT:
The full message is:
Source: GL_DEBUG_SOURCE_API
Type: GL_DEBUG_TYPE_PERFORMANCE
ID: 0x20072
Severity: GL_DEBUG_SEVERITY_MEDIUM
Message:
Buffer performance warning: Buffer object "SSBO" (bound to
GL_SHADER_STORAGE_BUFFER, and GL_SHADER_STORAGE_BUFFER (3), usage hint is
GL_DYNAMIC_DRAW) is being copied/moved from VIDEO memory to HOST memory.
Does anyone know what this error code means or how to find its documentation?
This warning simply means that OpenGL does not have total control over the SSBO. Because of that, it has to copy (or move) the SSBO's data from video memory to host memory before OpenGL can use it properly. This is slightly inefficient, which is why the driver is warning you about it.
As for the documentation, I haven't really found any. But I did find this other question, which references a very similar problem with OpenGL and OpenCL: OpenCL Host Copying Performance Warning
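If you are not already using a debug callback, registering one is the easiest way to capture the raw ID so you can search for it in hex as well as decimal. A minimal, hedged C++ sketch, assuming an OpenGL 4.3+ (or KHR_debug) context created with the debug flag and an already-initialized loader:

#include <cstdio>
#include <glad/glad.h> // assumption: substitute whichever OpenGL loader you use

// Receives every driver message, including vendor-specific ones like 0x20072.
static void APIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar* message, const void* userParam)
{
    // Print the ID in both decimal and hex; driver-specific IDs such as
    // 131186 (0x20072) are usually easier to search for in hex form.
    std::fprintf(stderr, "GL debug [id %u / 0x%X]: %s\n", id, id, message);
}

void installDebugOutput()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // deliver messages on the calling thread
    glDebugMessageCallback(debugCallback, nullptr);
}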

How do I initialize a MatchTemplateBuf in OpenCV?

I'm writing pattern matching code using OpenCV with CUDA on Mac OS X. After ~70 frames, it slows down a lot. Using mach_absolute_time() I've been able to track down the culprit (initially I thought it was a disk access issue):
gpu::matchTemplate(currentFrame,
                   correlationTargets[i]->correlationImage,
                   temporaryImage,
                   CV_TM_CCOEFF_NORMED);
I believe this is caused by some memory issue inside matchTemplate. After a lot of searching, I found the MatchTemplateBuf structure, which is presumably meant for memory reuse. Since the problem seems memory-related, I think using it may be the solution. However, the following code crashes:
gpu::MatchTemplateBuf mtBuff;
[...]
for(...) {
    gpu::matchTemplate(correlationTargets[i]->croppedImage,
                       correlationTargets[i]->correlationImage,
                       correlationTargets[i]->maxAllocation,
                       CV_TM_CCOEFF_NORMED, mtBuff);
With error:
OpenCV Error: Gpu API call (Unknown error code [Code = 9999]) in convolve, file /Users/sermarc/Downloads/opencv-2.4-3.8/modules/gpu/src/imgproc.cpp, line 1431 libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/sermarc/Downloads/opencv-2.4-3.8/modules/gpu/src/imgproc.cpp:1431: error: (-217) Unknown error code [Code = 9999] in function convolve
I believe this is because the MatchTemplateBuf is not properly initialized. However, I cannot find any information or example that shows it being set to a valid state.
The code works with:
gpu::matchTemplate(correlationTargets[i]->croppedImage,
                   correlationTargets[i]->correlationImage,
                   correlationTargets[i]->maxAllocation,
                   CV_TM_CCOEFF_NORMED);

Parsing GLSL error messages

When I compile a broken GLSL shader, the NVIDIA driver gives me error messages like this:
0(102) : error C1008: undefined variable "vec"
I know the number inside the brackets is the line number. I wonder what the 0 at the beginning of the error message means. I hoped it would be the index in the sources array which is passed to glShaderSource but that's not the case. It's always 0. Does someone know what this first number means?
And is there some official standard for the error message format so I'm able to parse the line number from it, or do other OpenGL implementations use different formats? I only have access to NVIDIA hardware, so I can't check what the error messages look like on AMD or Intel hardware.
It is a file name, which you can't specify via the GL API, so it is always 0.
You can set it with a #line num filename preprocessor directive right within the shader code. This can be helpful if your shader is assembled from many files with #includes by an external preprocessor (before passing the source to GL).
There is no standard for the message format; every implementation does whatever it wants.
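As a concrete illustration of the #line trick, here is a hypothetical assembled shader source shown as a C++ raw string literal. Note that core GLSL only accepts an integer source-string number as the second #line argument; accepting a file-name string, as described above, is a driver- or extension-specific convenience, so adjust the form to whatever your compiler accepts:

// Sketch: an externally preprocessed shader where each pasted chunk is tagged
// with a #line directive, so a message like "1(5) : error ..." can be traced
// back to the file that became source-string number 1.
const char* assembledFragmentSource = R"GLSL(#version 330 core
#line 1 1
// ...lines pasted from lighting_common.glsl (reported as source string 1)...
#line 1 2
// ...lines pasted from main.frag (reported as source string 2)...
out vec4 fragColor;
void main() { fragColor = vec4(1.0); }
)GLSL";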
glShaderSource accepts an array of source strings. The first number is the index into that array.
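If you do decide to parse these messages, keep in mind (per both answers above) that the format is vendor specific. A small, hedged C++ sketch that matches the NVIDIA-style "index(line) : message" format from the question might look like this; the regular expression is an assumption and would need adjusting for AMD or Intel drivers:

#include <optional>
#include <regex>
#include <string>

struct ShaderDiag {
    int sourceIndex;  // the leading number (0 in the example above)
    int line;         // the number in parentheses
    std::string rest; // the remainder of the message
};

// Tries to parse one NVIDIA-style log line such as:
//   0(102) : error C1008: undefined variable "vec"
// Returns std::nullopt if the line does not match this vendor-specific format.
std::optional<ShaderDiag> parseNvidiaDiag(const std::string& logLine)
{
    static const std::regex pattern(R"(^(\d+)\((\d+)\)\s*:\s*(.*)$)");
    std::smatch m;
    if (!std::regex_match(logLine, m, pattern))
        return std::nullopt;
    return ShaderDiag{ std::stoi(m[1].str()), std::stoi(m[2].str()), m[3].str() };
}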

glBindFramebuffer causes an "invalid operation" GL error when using the GL_DRAW_FRAMEBUFFER target

I'm using OpenGL 3.3 on a GeForce 9800 GTX. The reference pages for 3.3 say that an invalid operation with glBindFramebuffer indicates a framebuffer ID that was not returned from glGenFramebuffers. Yet, I output the ID returned by glGenFramebuffers and the ID I send later to glBindFramebuffer and they are the same.
The GL error goes away, however, when I change the target parameter in glBindFramebuffer from GL_DRAW_FRAMEBUFFER to GL_FRAMEBUFFER. The documentation says I should be able to use GL_DRAW_FRAMEBUFFER. Is there any case in which you can't bind to GL_DRAW_FRAMEBUFFER? Is there any harm from using GL_FRAMEBUFFER instead of GL_DRAW_FRAMEBUFFER? Is this a symptom of a larger problem?
If glBindFramebuffer(GL_FRAMEBUFFER) works where glBindFramebuffer(GL_DRAW_FRAMEBUFFER) does not, and we're not talking about the EXT version of these functions and enums (note the lack of "EXT" suffixes), then it's likely that you have done something else wrong. GL_INVALID_OPERATION is the error you get when a combination of parameters and current state is in conflict. If it were merely an unrecognized enum, you would get GL_INVALID_ENUM.
Of course, it could just be a driver bug too. But there's no way to know without knowing what your code looks like.
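For reference, a minimal, hedged sketch of the sequence under discussion, assuming a 3.3 core context is current and an OpenGL loader has been initialized; if this alone raises GL_INVALID_OPERATION for GL_DRAW_FRAMEBUFFER but not for GL_FRAMEBUFFER, the error most likely comes from other surrounding state or from an earlier, unchecked call:

// Hypothetical repro sketch: generate an FBO and bind it to the draw target.
void checkDrawFramebufferBind()
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);

    // Drain any pending errors so the next check is attributable to the bind.
    while (glGetError() != GL_NO_ERROR) {}

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    if (glGetError() == GL_INVALID_OPERATION) {
        // On a genuine 3.3 core context this bind should not fail for an ID
        // freshly returned by glGenFramebuffers; if it does, suspect earlier
        // unchecked calls, other state, or a driver bug.
    }

    // GL_FRAMEBUFFER sets both the draw and read framebuffer bindings at once,
    // so it is a superset of the GL_DRAW_FRAMEBUFFER bind above.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glDeleteFramebuffers(1, &fbo);
}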