I observe that whether I enable or disable GL_PROGRAM_POINT_SIZE, glPointSize(...) in my main program is always ignored and the gl_PointSize = ... line in my shader always determines the size of points.
Is that the expected behavior on newer OpenGL versions, or do I have to suspect a bug in my code?
The OpenGL 4.5 spec is very clear about this:
If program point size mode is disabled, the derived point size is specified with the command
void PointSize(float size);
...
Program point size mode is enabled and disabled by calling Enable or Disable with target PROGRAM_POINT_SIZE.
So it's either a bug in the implementation or in your code you didn't show.
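For reference, here is a minimal sketch of how the two paths are supposed to interact (pointCount and a bound program that writes gl_PointSize are assumptions on my part):

glDisable(GL_PROGRAM_POINT_SIZE);
glPointSize(8.0f);                    // fixed size should apply while program point size is off
glDrawArrays(GL_POINTS, 0, pointCount);

glEnable(GL_PROGRAM_POINT_SIZE);      // now the shader's gl_PointSize should take over
glDrawArrays(GL_POINTS, 0, pointCount);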
Related
I am currently using an NVIDIA GeForce GTX 780 (from Gigabyte, if that matters; I don't know how much this could be affected by the onboard BIOS). I actually have two of them installed, but because Vulkan is not capable of SLI I only use one device at a time in my code; in the NVIDIA control center, however, SLI is activated. I use the official driver version 375.63. That GPU is, of course, fully capable of geometry shaders.
I am using a geometry shader with the Vulkan API, and it works all right and does everything I expect it to do. However, I get the following validation layer report: #[SC]: Shader requires VkPhysicalDeviceFeatures::geometryShader but is not enabled on the device.
Is this a bug? Does anyone have similar issues?
PS: http://vulkan.gpuinfo.org/displayreport.php?id=777#features says support for "Geometry Shader" is "true", as expected. I am using the Vulkan 1.0.30.0 SDK.
Vulkan features work differently from OpenGL extensions. In OpenGL, if an extension is supported, then it's always active. In Vulkan, the fact that a feature is available is not sufficient. When you create a VkDevice, you must explicitly ask for all features you intend to use.
If you didn't ask for the Geometry Shader feature, then you can't use GS's, even if the VkPhysicalDevice advertises support for it.
So the sequence of steps should be to check to see if a VkPhysicalDevice supports the features you want to use, then supply those features in VkDeviceCreateInfo::pEnabledFeatures when you call vkCreateDevice.
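A rough sketch of that sequence, assuming physicalDevice already exists and the queue setup is elided:

VkPhysicalDeviceFeatures supported;
vkGetPhysicalDeviceFeatures(physicalDevice, &supported);

VkPhysicalDeviceFeatures enabled = {0};   // request nothing by default
if (supported.geometryShader)
    enabled.geometryShader = VK_TRUE;     // explicitly opt in to geometry shaders

VkDeviceCreateInfo deviceInfo = {0};
deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
// queueCreateInfoCount / pQueueCreateInfos set up as usual (omitted here)
deviceInfo.pEnabledFeatures = &enabled;   // only features listed here may be used

VkDevice device;
vkCreateDevice(physicalDevice, &deviceInfo, NULL, &device);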
Since Vulkan doesn't do validation checking on most of its inputs, the actual driver will likely assume you enabled the feature and just do what it normally would. But it is not required to do so; using a feature which has not been enabled is undefined behavior. So the validation layer is right to stop you.
Whenever I call a function to swap buffers I get tons of errors from glDebugMessageCallback saying:
glVertex2f has been removed from OpenGL Core context (GL_INVALID_OPERATION)
I've tried this both with GLFW and with freeglut, and neither works properly.
I haven't used glVertex2f, of course. I even went as far as to delete all my rendering code to see if I can find what's causing it, but the error is still there, right after glutSwapBuffers/glfwSwapBuffers.
Using single-buffering causes no errors.
I've initialized the context to 4.3, core profile, and flagged forward-compatibility.
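For reference, in the GLFW case the context hints for such a setup look roughly like this (window creation and the freeglut equivalent omitted):

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);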
As explained in comments, the problem here is actually third-party software and not any code you yourself wrote.
When software such as the Steam overlay or FRAPS needs to draw something on top of OpenGL, it usually does so by hooking/injecting some code into your application's SwapBuffers implementation at run-time.
You are dealing with a piece of software (RivaTuner) that still uses immediate mode to draw its overlay and that is the source of the unexplained deprecated API calls on every buffer swap.
Do you have code you can share? Either the driver is buggy or something in your process tries to call glVertex. You could try using glloadgen to build a loader library that covers OpenGL 4.3 symbols only (and only those symbols), so that any use of symbols outside the 4.3 spec causes a linker error when you link your program.
I am discovering that GL needs different settings on different systems to run correctly. For instance, on my desktop if I request a version 3.3 core-profile context everything works. On a Mac, I have to set the forward-compatibility flag as well. My netbook claims to only support version 2.1, but has most of the 3.1 features as extensions. (So I have to request 2.1.)
My question is, is there a general way to determine:
Whether all of the GL features my application needs are supported, and
What combination of version, profile (core or not), and forward compatibility flag I need to make it work?
I don't think there is any magic bullet here. It's pretty much going to be up to you to do the appropriate runtime checking and provide alternate paths that make the most out of the available features.
When I'm writing an OpenGL app, I usually define a "caps" table, like this:
struct GLInfo {
// GL extensions/features as booleans for direct access:
int hasVAO; // GL_ARB_vertex_array_object........: Has VAO support? Allows faster switching between vertex formats.
int hasPBO; // GL_ARB_pixel_buffer_object........: Supports Pixel Buffer Objects?
int hasFBO; // GL_ARB_framebuffer_object.........: Supports Framebuffer objects for offscreen rendering?
int hasGPUMemoryInfo; // GL_NVX_gpu_memory_info............: GPU memory info/stats.
int hasDebugOutput; // GL_ARB_debug_output...............: Allows GL to generate debug and profiling messages.
int hasExplicitAttribLocation; // GL_ARB_explicit_attrib_location...: Allows the use of "layout(location = N)" in GLSL code.
int hasSeparateShaderObjects; // GL_ARB_separate_shader_objects....: Allows creation of "single stage" programs that can be combined at use time.
int hasUniformBuffer; // GL_ARB_uniform_buffer_object......: Uniform buffer objects (UBOs).
int hasTextureCompressionS3TC; // GL_EXT_texture_compression_s3tc...: Direct use of S3TC compressed textures.
int hasTextureCompressionRGTC; // GL_ARB_texture_compression_rgtc...: Red/Green texture compression: RGTC/AT1N/ATI2N.
int hasTextureFilterAnisotropic; // GL_EXT_texture_filter_anisotropic.: Anisotropic texture filtering!
};
In it I place all the feature information collected at startup with glGet and by testing function pointers. Then, when using a feature/function, I always check for availability first, providing a fallback if possible. E.g.:
if (glInfo.hasVAO)
{
draw_with_VAO();
}
else
{
draw_with_VB_only();
}
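How the table gets filled will vary; a hypothetical sketch using the GL 3+ extension enumeration (error checking omitted, only a few entries shown) could look like:

GLint numExtensions = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);

for (GLint i = 0; i < numExtensions; ++i)
{
    const char *ext = (const char *) glGetStringi(GL_EXTENSIONS, i);
    if      (strcmp(ext, "GL_ARB_vertex_array_object") == 0) { glInfo.hasVAO = 1; }
    else if (strcmp(ext, "GL_ARB_pixel_buffer_object") == 0) { glInfo.hasPBO = 1; }
    else if (strcmp(ext, "GL_ARB_framebuffer_object")  == 0) { glInfo.hasFBO = 1; }
    // ... and so on for the remaining entries ...
}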
Of course, there are some minimum features that you might decide the hardware must have in order to run your software, e.g.: it must have at least OpenGL 2.1 with support for GLSL v120. This is perfectly normal and expected.
As for the differences between the core and compatibility profiles: if you wish to support both, there are some OpenGL features that you will have to avoid. For example, always draw using VBOs; they are the norm in core and existed before via extensions.
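A minimal VBO path that works the same under both profiles might look something like this (vertices and vertexCount are placeholders):

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// ... set up the vertex attribute pointers relative to the bound buffer ...
glDrawArrays(GL_TRIANGLES, 0, vertexCount);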
I have an issue with a WebGL shader that I've determined is related to ANGLE, because it only occurs on Windows in Firefox or Chrome, and it doesn't happen if I force OpenGL (chrome --use-gl=desktop).
I've created a jsfiddle that shows the ANGLE-generated HLSL of my custom shader. (For the HLSL conversion to work in this jsfiddle, you must run Chrome with --enable-privileged-webgl-extensions, or just see my gist of the output.)
So I have working GLSL, and the generated HLSL compiles but doesn't do the same thing. The symptom is that on Windows the vertices appear in their correct initial locations but do not move, even though I change the uniform jed. I can't find the bug in the generated code.
Any tips for debugging problems like this?
Hard to say based on your info (which doesn't include the original GLSL). It's not hard to imagine this being fixed by the July 4 revisions to ANGLE, however. I would say update first.
As we all eventually come to learn, the specification is one thing and the implementation is another. Most bugs we cause ourselves, but sometimes that's not the case.
I believe it'd be useful to make a small list of:
What are the currently known bugs in the GPU drivers, related to the implementation of recent versions of OpenGL and GLSL?
Please remember to always post the relevant graphics card and driver version.
Let me start:
GPU: confirmed on AMD/ATI Radeon HD 4650
Type: GLSL problem
GL version related: confirmed on 3.3, probably 3.1 and up (or even before)
Relevant link: http://forums.amd.com/devforum/messageview.cfm?catid=392&threadid=139288
Driver version: confirmed on Catalyst 10.10 (9-28-2010)
Status: as of 2010-11-27 a fix exists, but it apparently hasn't reached a public driver release yet (so even once the fix is released, users with not-so-recent driver versions will still be affected for months)
Description:
If in your vertex shader you have any attribute (in) variable whose name is lexically after gl_, then you cannot use built-in attributes, namely gl_VertexID and gl_InstanceID. If you try, the shader won't work (blank screen, likely).
Workaround (new):
Only available with GLSL 3.3 and up, or with the GL_ARB_explicit_attrib_location extension.
Define any one attribute's location explicitly as 0 by adding layout(location = 0) to its declaration in the vertex shader. You may, but don't need to, do this for other attributes; the important thing is that at least one attribute has location 0. After you do that, the naming no longer matters.
Workaround (alternative):
Use a naming convention in which all of your attribute variables start with a_; this won't hurt your code's readability and will place all of them lexically before gl_ (the safe zone).
Another gl_VertexID bug:
GPU: NVIDIA GeForce 9400M
Type: GLSL problem
Driver version: NVDANV50Hal 1.6.36
OpenGL version: 2.1, GLSL 1.2 using GL_EXT_gpu_shader4 extension
This occurs on MacBooks. It's possible that the new driver enabling OpenGL 3.2 that ships with OS X Lion has fixed the issue, but a lot of frameworks are only configured to use the legacy 2.1 drivers, so this is still relevant.
If you read gl_VertexID before you read another attribute in a vertex shader, the latter attribute will return junk data. If the other attribute is gl_Color, regardless of how it is used, nothing will be rendered. Accessing other built-in attributes can lead to other strange behavior.
Workaround:
If you must use gl_VertexID, read all other attributes you will need first. If you read another attribute first, followed by gl_VertexID, any subsequent reads of the attribute will work fine.