Check OpenGL Version in Swift 3 beta 6

I would like to check which OpenGL version is available on the computer before I execute OpenGL commands. I wrote a small function to get the NSOpenGL version:
import OpenGL
func getnsopenglversion() -> String {
    var major: GLint = 0
    var minor: GLint = 0
    NSOpenGLGetVersion(&major, &minor)
    return "\(major),\(minor)"
}
But this function always returns "1,2" on Yosemite and El Capitan, on an older Mac Pro 3,1 as well as on a new Mac Pro 6,1 or Mac Mini. Always the same.
What I want is the OpenGL version that is currently installed, i.e. "3,3" or "4,1" or something like that. I tried the following:
let v = glGetString(GL_VERSION)
But this does not compile; it fails with the error "Cannot convert value of type Int32 to expected type GLenum".
When I instead convert the GL_EXTENSIONS constant to GLenum with:
let v = glGetString(GLenum(GL_EXTENSIONS))
That compiles, but I get an exception when running the code. Is some initialization needed before calling glGetString?
The question is: I need a small function that returns the OpenGL version as a string, not the NSOpenGL framework version but the actual OpenGL version of my current hardware. Can anybody help with this?
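For what it's worth, glGetString only returns a valid string once an OpenGL context has been created and made current; calling it without a current context is a common reason for the crash. The following is only a rough, untested sketch of that idea: the pixel-format attributes, the UInt32 casts and the share: initializer label are assumptions that may need adjusting for a particular Swift 3 beta.

import AppKit
import OpenGL.GL

// Rough sketch (untested): create a throwaway core-profile context so that
// glGetString has a current context to query.
func hardwareGLVersion() -> String? {
    let attrs: [NSOpenGLPixelFormatAttribute] = [
        UInt32(NSOpenGLPFAOpenGLProfile), UInt32(NSOpenGLProfileVersion3_2Core),
        UInt32(NSOpenGLPFAAccelerated),
        0
    ]
    guard let format = NSOpenGLPixelFormat(attributes: attrs),
          let context = NSOpenGLContext(format: format, share: nil) else {
        return nil
    }
    context.makeCurrentContext()
    guard let cString = glGetString(GLenum(GL_VERSION)) else { return nil }
    return String(cString: cString)   // e.g. "4.1 ..." for a 3.2+ core-profile context
}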

Related

Fatal error: exception Graphics.Graphic_failure("Cannot open display ")

I was trying to run the code below and it keeps showing the same error.
I start by compiling with ocamlc -o cardioide graphics.cma cardioide.ml and it appears to work, but then I run ./cardioide and the message Fatal error: exception Graphics.Graphic_failure("Cannot open display ") appears...
I've searched all across the internet and I can't find the solution. Can someone please help me?
Thank you
open Graphics

let () = open_graph "300x20"

let () =
  moveto 200 150;
  for i = 0 to 200 do
    let th = atan 1. *. float i /. 25. in
    let r = 50. *. (1. -. sin th) in
    lineto (150 + truncate (r *. cos th))
           (150 + truncate (r *. sin th))
  done;
  ignore (read_key ())
Error message:
Fatal error: exception Graphics.Graphic_failure("Cannot open display ")
The string argument to the open_graph function is not the size or title; it is implementation-dependent information that is passed to the underlying graphical subsystem (in the X11 implementation it is the display name and window geometry). In modern OCaml, optional arguments are passed using labels, but Graphics was written long before this feature was introduced to the language. Therefore, you have to pass an empty string if you don't want to pass anything specific to the underlying graphical subsystem, e.g.,
open_graph ""
will do the work for you in a system-independent way.
Besides, if you want to resize the window, you can use the resize_window function, and to set the title, use set_window_title.
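For example, a minimal sketch (the window size and title below are arbitrary choices, not required values):

open Graphics

let () =
  open_graph "";                  (* connect with the default geometry *)
  resize_window 300 200;          (* then resize explicitly *)
  set_window_title "cardioide";
  ignore (read_key ())            (* keep the window open until a key is pressed *)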
For historical reference, the string parameter passed to open_graph has the following syntax (it is no longer documented, so there is no guarantee that it will still be respected):
Here are the graphics mode specifications supported by
Graphics.open_graph on the X11 implementation of this library: the
argument to Graphics.open_graph has the format "display-name
geometry", where display-name is the name of the X-windows display
to connect to, and geometry is a standard X-windows geometry
specification. The two components are separated by a space. Either
can be omitted, or both. Examples:
Graphics.open_graph "foo:0" connects to the display foo:0 and creates a
window with the default geometry
Graphics.open_graph "foo:0 300x100+50-0" connects to the display foo:0 and
creates a window 300 pixels wide by 100 pixels tall, at location (50,0)
Graphics.open_graph " 300x100+50-0" connects to the default display and
creates a window 300 pixels wide by 100 pixels tall, at location (50,0)
Graphics.open_graph "" connects to the default display and creates a
window with the default geometry.
Put a space at the start of the argument to get the window you want (and the height should be 200 for your cardioide):
let () = open_graph " 300x200"
I met the same problem, and it was because I was using the Windows Subsystem for Linux (WSL), which needs an X server to run graphical applications. The Ubuntu Wiki for WSL helped solve the problem. I downloaded and installed MobaXterm (it has a free community version!), and it automatically detects WSL and runs it inside the app. Try the same code again and a graphical window will pop up!

dmd and gdc compiling code differently?

I am currently trying out DerelictSDL2 (a binding to the SDL2 library for D) and I've written code that successfully loads a JPG image and displays it in a window. That is, when it's compiled with dmd. When I try gdc instead (with no code modification), the program compiles, but it won't load the image at runtime.
I believe I did things right :
SDL_Init(SDL_INIT_VIDEO)
then
IMG_Init(IMG_INIT_JPG)
and somewhere after that
this.data = IMG_LoadTexture(Window.renderer, name.ptr)
where Window.renderer is (obviously) the SDL_Renderer* and name.ptr is a char* pointing to the name of the image to load. But when compiling with gdc, IMG_Load and IMG_LoadTexture both return null, while with dmd they return a pointer to the newly created texture...
Did I forget something else (after all, with dmd it worked even without IMG_Init)? Does Derelict only work with dmd (even though it only interfaces to C functions)?
dmd : 2.065
gdc : 4.9.1
EDIT:
Turns out the problem is completely different. IMG_LoadTexture takes a pointer for its second argument, but name.ptr only seems to work with dmd. However, if I try a hard-coded argument like this:
IMG_LoadTexture(renderer, "../test/res/image.jpg")
it works with both dmd and gdc.
There is no guarantee that a D string will be 0-terminated; it just happens to be, by chance, with dmd. The correct way is to use the toStringz() function from the std.string module.
P.S. Note that string literals are 0-terminated, which is why the hard-coded argument works.

Find out name of graphics card driver in a C++ OpenGL program

I'm searching for a way to find out the name of the currently used graphics card driver inside a C++ OpenGL program, ideally in a platform-independent way (Linux and Windows). The only thing I could find was this, but that's a shell solution and might vary across distributions (and Windows would still be a problem).
I already looked at glGetString() with the GL_VENDOR parameter; however, that outputs the vendor of the graphics card itself, not the driver. I couldn't find any options/functions that give me what I want.
Is there an easy solution to this problem?
Try these:
const GLubyte* vendor   = glGetString(GL_VENDOR);   // e.g. "NVIDIA Corporation"
const GLubyte* renderer = glGetString(GL_RENDERER); // e.g. "GeForce GTX 660/PCIe/SSE2"
const GLubyte* version  = glGetString(GL_VERSION);  // on many drivers this string also contains the driver version
This is probably not the ultimate answer, but it might help you. Under Linux, you can work out the driver name and version by combining the lsmod and modinfo commands.
For example, my lsmod returns the following:
$ lsmod
Module                  Size      Used by
autofs                  28170     2
binfmt_misc             7984      1
vboxnetadp              5267      0
vboxnetflt              14966     0
vboxdrv                 1793592   2  vboxnetadp,vboxnetflt
snd_hda_codec_nvhdmi    15451     1
snd_hda_codec_analog    80317     1
usbhid                  42030     0  hid
nvidia                  11263394  54
from which I know that nvidia refers to the graphics card.
I can then run modinfo nvidia and I get
filename:   /lib/modules/2.6.35-32-generic/kernel/drivers/video/nvidia.ko
alias:      char-major-195-*
version:    304.54
supported:  external
license:    NVIDIA
alias:      pci:v000010DEd00000E00sv*sd*bc04sc80i00*
alias:      pci:v000010DEd00000AA3sv*sd*bc0Bsc40i00*
alias:      pci:v000010DEd*sv*sd*bc03sc02i00*
alias:      pci:v000010DEd*sv*sd*bc03sc00i00*
depends:
And from that I can extract the driver version and so on.
I know this is neither a straightforward solution nor cross-platform, but you could put together a script that extracts the driver name and version by grepping/awking the output of lsmod, guessing that most module names will be nvidia, ati, intel, etc.
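If you want to do that from inside a C++ program, one possibility is to shell out to modinfo and parse its version: field. The sketch below is Linux-only and assumes an NVIDIA kernel module; the module name and the parsing are illustrative assumptions, not a portable solution:

#include <cstdio>
#include <iostream>
#include <string>

// Rough sketch (Linux/POSIX only): run "modinfo <module>" and pull out the
// "version:" field. The module name is a guess; on other systems it may be
// "amdgpu", "i915", "radeon", ...
std::string driver_version(const std::string& module)
{
    std::string cmd = "modinfo " + module + " 2>/dev/null";
    FILE* pipe = popen(cmd.c_str(), "r");
    if (!pipe) return "";
    char line[512];
    std::string version;
    while (fgets(line, sizeof line, pipe)) {
        std::string s(line);
        if (s.rfind("version:", 0) == 0) {           // line starts with "version:"
            version = s.substr(8);
            version.erase(0, version.find_first_not_of(" \t"));
            while (!version.empty() && (version.back() == '\n' || version.back() == '\r'))
                version.pop_back();
            break;
        }
    }
    pclose(pipe);
    return version;
}

int main()
{
    std::cout << "nvidia driver version: " << driver_version("nvidia") << "\n";
}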

Can't comprehend "unknown OpenGL extension entry" error triggered by Haskell OpenGL program

I wrote the following program on Windows XP using GHC 7.4.1 (Haskell Platform 2012.2.0.0):
mkVertexShader :: IO Bool
mkVertexShader = do
    shader <- glCreateShader gl_VERTEX_SHADER
    withCString vertexShader $ \ptr -> glShaderSource shader 1 (castPtr ptr) nullPtr
    glCompileShader shader
    status <- with 0 $ \ptr -> do
        glGetShaderiv shader gl_COMPILE_STATUS ptr
        peek ptr
    return $ status == fromIntegral gl_FALSE
When run, the program aborts with
*** Exception: user error (unknown OpenGL extension entry glCreateShader, check for OpenGL 3.1)
I'm not sure what this error means or how to address it. Can anyone help?
You don't have OpenGL 3.1 support on your computer. You have imported the function from Core31, while you might want the one from Core21 [1] or ARB.ShaderObjects [2]. You need to check whether your graphics card supports the various versions/extensions when starting the application, and especially make sure that you aren't requesting an OpenGL profile that your hardware doesn't support.
If you use the Haskell OpenGL library instead of OpenGLRaw, this distinction is taken care of for you automatically.
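As a rough sketch of such a startup check, written against the same OpenGLRaw-style names the question uses (the module and token names are assumptions and may differ between OpenGLRaw versions):

import Foreign.C.String (peekCString)
import Foreign.Ptr (castPtr, nullPtr)
import Graphics.Rendering.OpenGL.Raw.Core31

-- Sketch: query the version string of the current context before creating
-- shaders; glGetString returns a null pointer if no context is current.
reportGLVersion :: IO ()
reportGLVersion = do
    p <- glGetString gl_VERSION
    if p == nullPtr
        then putStrLn "no current OpenGL context"
        else peekCString (castPtr p) >>= putStrLn . ("OpenGL version: " ++)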
[1] Well, the function hasn't changed between Core21 and Core31, so using the old version won't help.
[2] You should never use ARB_shader_objects.

OpenCL enqueueTask vs enqueueNDRangeKernel

I'm writing OpenCL code using the C++ bindings, trying to make a small library.
NDRange offset(0);
NDRange global_size(numWorkItems);
NDRange local_size(1);
//this call fails with error code -56
err = queue.enqueueNDRangeKernel(kernelReduction, offset, global_size, local_size);
//this call works:
err = queue.enqueueTask(kernelReduction);
Now, error code -56 is CL_INVALID_GLOBAL_OFFSET, and I have no clue why the first call would fail. Any suggestions?
If you are using OpenCL 1.0, you cannot use global offsets, as far as I know (you need to work around it, for example with a constant-memory counter). Try updating the bindings to OpenCL 1.1 if they don't adapt automatically, and make sure you update your drivers as well.
Under OpenCL 1.0, global_work_offset must be NULL; any other value there produces CL_INVALID_GLOBAL_OFFSET.
Check it out: clEnqueueNDRangeKernel
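In the C++ bindings the usual workaround is to pass cl::NullRange as the offset. A minimal sketch, reusing the queue, kernelReduction and numWorkItems from the question:

// On OpenCL 1.0 devices the global offset must be left empty; the kernel then
// sees an implicit offset of (0, 0, 0).
cl::NDRange global_size(numWorkItems);
cl::NDRange local_size(1);

cl_int err = queue.enqueueNDRangeKernel(
    kernelReduction,
    cl::NullRange,    // no global offset
    global_size,
    local_size);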