What does glUniform1fARB do?
Are there any detailed online reference materials?
What about this Google (OpenGL reference) result?
Note that the base name for such functions is glUniform, so look for that. 1f means one float parameter; ARB means the function comes from an ARB extension and was not yet part of core OpenGL as of some version. I'm not sure in which exact version the function was promoted, but I assume it was something around 3.0, and thus you'd now just use glUniform1f.
Edit: The spec says promotion happened in 2.0: "glUniform is available only if the GL version is 2.0 or greater."
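For illustration, a minimal sketch of the promoted core form (here "program" is an already-linked shader program and "u_scale" is just a made-up uniform name):

GLint loc = glGetUniformLocation(program, "u_scale");
glUseProgram(program);   // glUniform* affects the currently bound program
glUniform1f(loc, 0.5f);  // "1f" = one float parameter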
According to the documentation, glProgramParameteriARB is part of ARB_geometry_shader4. I have a graphics card which doesn't support ARB_geometry_shader4:
glxinfo | grep ARB_geometry_shader4   # no output: the extension is not listed
When I call glXGetProcAddress((const GLubyte*)"glProgramParameteriARB") I get a function address and everything works fine. Does that mean the documentation has a bug? How can I find the extension which contains glProgramParameteriARB?
glXGetProcAddress can be called without a current OpenGL context (unlike wglGetProcAddress). As such, the function pointers it returns are independent of the current context. Because of that, it can return a non-NULL pointer for any OpenGL function name, whether or not the implementation actually supports it; the binding is resolved lazily.
If you want to know whether you can use a function pointer, check the extension strings, not whether you get a valid pointer.
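For example, a sketch of that check (glGetString(GL_EXTENSIONS) is the pre-3.0 way of reading the extension string; the PFN typedef comes from glext.h):

#include <GL/glx.h>
#include <GL/glext.h>
#include <string.h>

// Querying the pointer succeeds even if the extension is absent.
PFNGLPROGRAMPARAMETERIARBPROC pProgramParameteriARB =
    (PFNGLPROGRAMPARAMETERIARBPROC) glXGetProcAddress(
        (const GLubyte *) "glProgramParameteriARB");

// Decide usability from the extension string, not from the pointer.
const char *ext = (const char *) glGetString(GL_EXTENSIONS);
int hasGeometryShader4 = ext && strstr(ext, "GL_ARB_geometry_shader4") != NULL;

if (hasGeometryShader4 && pProgramParameteriARB) {
    /* safe to call pProgramParameteriARB(program, pname, value) */
}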
I am trying to follow this tutorial and make a simple C++ extension with a CUDA backend.
My CPU implementation seems to work fine.
I am having trouble finding examples and documentation (it seems like things are constantly changing).
Specifically,
I see PyTorch CUDA functions getting a THCState *state argument. Where does this argument come from? How can I get a state for my function as well?
For instance, in cuda implementation of tensor.cat:
void THCTensor_(cat)(THCState *state, THCTensor *result, THCTensor *ta, THCTensor *tb, int dimension)
However, when calling tensor.cat() from Python one does not provide any state argument; PyTorch provides it "behind the scenes". How does PyTorch provide this information, and how can I get it?
state is then converted to a stream: cudaStream_t stream = THCState_getCurrentStream(state);
For some reason, THCState_getCurrentStream no longer seems to be defined. How can I get the stream from my state?
I also tried asking on the PyTorch forum - so far to no avail.
It's deprecated (without documentation!)
See here:
https://github.com/pytorch/pytorch/pull/14500
In short: use at::cuda::getCurrentCUDAStream()
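For illustration, a sketch of how the stream can be obtained in a current-style extension (my_kernel and my_op_cuda are made-up names, not part of the tutorial):

#include <torch/extension.h>
#include <ATen/cuda/CUDAContext.h>

void my_op_cuda(torch::Tensor input) {
    // Replaces the old THCState_getCurrentStream(state) call.
    cudaStream_t stream = at::cuda::getCurrentCUDAStream();
    // Launch on that stream so ordering with other PyTorch ops is preserved, e.g.
    // my_kernel<<<blocks, threads, 0, stream>>>(input.data_ptr<float>(), input.numel());
}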
I am using OpenGL version 4.5.0 and getting this error:
error: ‘glEnableClientState’ was not declared in this scope
I have read that glEnableClientState is deprecated in this version, but I need to write code compatible with this method, as this is a home assignment from class and they require us to write using this method. Is there any way I could get this working in OpenGL 4.5.0?
Including this has had no effect:
glutInitContextVersion (3,3);
glutInitContextProfile (GLUT_COMPATIBILITY_PROFILE);
glutInitContextProfile (GLUT_CORE_PROFILE);
Requesting the core profile is the opposite of what you need to do. If you need compatibility OpenGL features, then you have to use GLUT_COMPATIBILITY_PROFILE (and drop the GLUT_CORE_PROFILE request).
However:
error: ‘glEnableClientState’ was not declared in this scope
That suggests that the OpenGL loading library you're using doesn't even declare this function. Which means you need to move to one that can expose compatibility profile OpenGL functions.
glEnableVertexAttribArray and glVertexAttribPointer are the "modern" replacements for glEnableClientState/glVertexPointer. The generic variants have been available since GL 2.0.
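For comparison, a small sketch of both setups using a tightly packed client-side position array (attribute index 0 is an assumption and has to match the location used in your shader):

GLfloat positions[] = { 0.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,   0.0f, 1.0f, 0.0f };

// Legacy fixed-function arrays (compatibility profile only):
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, positions);

// Generic attributes (core since GL 2.0):
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, positions);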
I'm writing a header. I need to know the values of constants like those. Copying and pasting this information or linking to a source would be sufficient. Don't simply answer for GL_TRUE or GL_TEXTURE_2D; I am asking for the values corresponding to every token OpenGL defines.
It's still unclear. Why don't you just use the GL symbolic name in your header? For instance:
enum PrimitiveType {
    Triangles = GL_TRIANGLES,
    TriangleStrip = GL_TRIANGLE_STRIP
};
Anyhow, the OpenGL specification doesn't mandate any specific value for the tokens. It instead refers to the Implementers' Guide and to the official headers released by Khronos, which in turn are generated from the spec/XML files available for instance here.
The only reason I've found so far to hardcode values (instead of putting the symbolic names) is for allowing the code to compile on platforms which don't expose such values. For instance, suppose I'm writing a piece of code that draws primitives, and I'm defining the enum above. I could then continue defining more primitive types, and then I get to:
Patches = GL_PATCHES
The actual usage of such a primitive type would still be guarded by a runtime check on the version, but this particular line won't compile on an OpenGL 3 implementation (as GL_PATCHES is currently only used for tessellation, i.e. OpenGL 4). That is:
you can't compile it on a GL3 machine even if you run your application on a GL4 machine;
you can't compile it on a GL3 machine even if you don't actually use patches.
For this exact reason I have sometimes chosen to hardcode the values instead of using the symbolic names, as sketched below.
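For illustration, the hardcoded variant would look like this (0x000E is the value the Khronos headers use for GL_PATCHES; verify it against glcorearb.h before relying on it):

enum PrimitiveType {
    Triangles = GL_TRIANGLES,
    TriangleStrip = GL_TRIANGLE_STRIP,
    Patches = 0x000E   // GL_PATCHES; compiles even against headers that don't define it
};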
OK, I'll answer the question.
If you have a Linux box,
% grep '^#define GL_.*0x' /usr/include/GL/gl.h
will list all the #define name/value pairs in the OpenGL header file (and not the other stuff like function declarations).
If you have Mac OS X, use the same command with /System/Library/Frameworks/OpenGL.framework/Headers/gl.h.
I assume I am forgetting to include something, but it is odd because I am using other OpenGL functions. How do I find out what I am not including?
Looks like glBlendEquationOES is only an extension function in OpenGL ES 1.1.
In version 2.0 it's a core function, just called glBlendEquation.
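For instance, on ES 2.0 the core call would look like this (GL_FUNC_ADD is just an example equation, and also the default):

glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);                        // core in OpenGL ES 2.0
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// on ES 1.1 the equivalent is glBlendEquationOES from the OES_blend_subtract extension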