I'm making a C++ program using OpenCL. It was really challenging to install, but I've finally managed it. I'm on Ubuntu 22.04 with an NVIDIA GPU (nvidia-390 driver) and a 3rd-gen Intel Core i7 CPU,
and my clinfo gives this output:
Number of platforms 1
Platform Name Clover
Platform Vendor Mesa
Platform Version OpenCL 1.1 Mesa 22.0.5
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd
Platform Extensions function suffix MESA
Platform Name Clover
Number of devices 0
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) Clover
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No devices found in platform [Clover?]
clCreateContext(NULL, ...) [default] No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No devices found in platform
ICD loader properties
ICD loader Name OpenCL ICD Loader
ICD loader Vendor OCL Icd free software
ICD loader Version 2.2.14
ICD loader Profile OpenCL 3.0
For some reason it cannot detect my devices. I ran sudo apt install mesa-opencl-icd to get even this far. When I create a C++ program to test it, the device query fails.
Here is my main:
#include <CL/cl.h>
#include <iostream>
#include <memory>

int main(int argc, char* argv[])
{
    cl_uint dev_cnt = 0;
    cl_device_id m_device_id;

    clGetPlatformIDs(0, NULL, &dev_cnt);
    std::unique_ptr<cl_platform_id[]> platform_ids(new cl_platform_id[dev_cnt]);
    clGetPlatformIDs(dev_cnt, platform_ids.get(), NULL);
    std::cout << dev_cnt << std::endl;

    cl_int error_code = clGetDeviceIDs(platform_ids[0],
                                       CL_DEVICE_TYPE_DEFAULT,
                                       1,
                                       &m_device_id,
                                       NULL);
    if (error_code != CL_SUCCESS)
    {
        std::cout << "FATAL ERROR: Failed to create a device group! Error code: "
                  << error_code << std::endl;
        return 1;
    }
    return 0;
}
I get the FATAL ERROR message as output. I've tried changing the device type, to no avail.
What's the problem? I don't quite remember the exact steps I used to install OpenCL, but I can say it was really hard. What is missing?
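For reference, here is a rough enumeration sketch (nothing assumed beyond CL/cl.h) that prints each platform's name and device count, so the result can be compared directly with what clinfo reports:

// Minimal diagnostic sketch: list every platform and how many devices it exposes.
// Error handling is deliberately terse.
#include <CL/cl.h>
#include <iostream>
#include <vector>

int main()
{
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, NULL, &num_platforms);
    if (num_platforms == 0)
    {
        std::cout << "No OpenCL platforms found" << std::endl;
        return 1;
    }

    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), NULL);

    for (cl_uint i = 0; i < num_platforms; ++i)
    {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);

        cl_uint num_devices = 0;
        cl_int err = clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);

        std::cout << "Platform " << i << " (" << name << "): ";
        if (err == CL_DEVICE_NOT_FOUND || num_devices == 0)
            std::cout << "no devices" << std::endl;      // matches the "Number of devices 0" from clinfo
        else
            std::cout << num_devices << " device(s)" << std::endl;
    }
    return 0;
}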
Related
I'm struggling to get GL and CL to work together.
I've been following this tutorial. In my code I first call clGetPlatformIDs and retrieve the first (and only) platform. I also get my gl_context from SDL2. Then I want to query the device used by OpenGL with the help of clGetGLContextInfoKHR. I successfully obtain this function with clGetExtensionFunctionAddressForPlatform(platform_id, "clGetGLContextInfoKHR"), but unfortunately when I call it, I get a segmentation fault. My code is written in Rust, but I use low-level OpenCL bindings, so it looks almost like its C counterpart.
pub fn new(gl_context: &GLContext) -> Result<Self, ClGlError> {
    println!("Initialising OpenCL context");
    let raw = unsafe { gl_context.raw() };
    println!("Getting default opencl platform");
    let platform_id = Self::default_platform()?; // this is valid and not null
    let mut props: [cl_sys::cl_context_properties; 5] = [
        // OpenCL platform
        cl_sys::CL_CONTEXT_PLATFORM as cl_sys::cl_context_properties,
        platform_id as cl_sys::cl_context_properties,
        // OpenGL context
        cl_sys::CL_GL_CONTEXT_KHR,
        raw as cl_sys::cl_context_properties,
        0,
    ];
    let mut device: cl_device_id = std::ptr::null_mut();
    let p: *mut cl_device_id = (&mut device) as *mut cl_device_id;
    let fn_name = b"clGetGLContextInfoKHR\0" as *const u8 as *const i8;
    println!("Getting clGetGLContextInfoKHR");
    let clGetGLContextInfoKHR = unsafe {
        clGetExtensionFunctionAddressForPlatform(platform_id, fn_name)
            as cl_sys::clGetGLContextInfoKHR_fn
    };
    if clGetGLContextInfoKHR.is_null() {
        // error handling here
    }
    println!("Getting device"); // this is the last thing I see before segfault
    unsafe {
        (*clGetGLContextInfoKHR)(
            props.as_mut_ptr(),
            cl_sys::CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR,
            std::mem::size_of::<cl_device_id>(),
            device as *mut c_void,
            std::ptr::null_mut(),
        );
    }
    panic!("All good") // this is never reached
}
I have a fairly new graphics card which supports cl_khr_gl_sharing.
Here is the clinfo output:
Number of platforms 1
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 1.2 CUDA 11.2.162
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_device_uuid
Platform Extensions function suffix NV
Platform Name NVIDIA CUDA
Number of devices 1
Device Name GeForce GTX 960
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 1.2 CUDA
Driver Version 460.80
Device OpenCL C Version OpenCL C 1.2
Device Type GPU
Device Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_device_uuid
Perhaps the most important clue is that I tried a bunch of other libraries that build on top of OpenCL, and in all of them, whenever I called clGetGLContextInfoKHR (wrapped in a safer, higher-level API), it crashed. I think it's very unlikely that all those libraries have bugs in their code, so it's probably some problem in my environment. However, as you can see, my graphics card clearly supports all the necessary extensions.
I am not sure why clGetGLContextInfoKHR is failing, but I figured out that it's not really necessary to call it.
Instead you may just use this on Linux
cl_context_properties properties[] = {
    CL_GL_CONTEXT_KHR,   (cl_context_properties) glXGetCurrentContext(),
    CL_GLX_DISPLAY_KHR,  (cl_context_properties) glXGetCurrentDisplay(),
    CL_CONTEXT_PLATFORM, (cl_context_properties) platform,
    0
};
this on Windows
cl_context_properties properties[] = {
    CL_GL_CONTEXT_KHR,   (cl_context_properties) wglGetCurrentContext(),
    CL_WGL_HDC_KHR,      (cl_context_properties) wglGetCurrentDC(),
    CL_CONTEXT_PLATFORM, (cl_context_properties) platform,
    0
};
or this on OS X
CGLContextObj glContext = CGLGetCurrentContext();
CGLShareGroupObj shareGroup = CGLGetShareGroup(glContext);
cl_context_properties properties[] = {
    CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE,
    (cl_context_properties) shareGroup,
    0
};
More info can be found in the book
OpenCL in Action: How to Accelerate Graphics and Computations
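To make the last step explicit, here is a rough Linux/GLX sketch of feeding those properties straight into clCreateContextFromType. The helper name is arbitrary, there is no error recovery, it simply takes the first platform, and it assumes a GL context is current on the calling thread:

// Sketch (Linux/GLX): create a shared CL/GL context directly from the
// properties above, without calling clGetGLContextInfoKHR first.
#include <CL/cl.h>
#include <CL/cl_gl.h>
#include <GL/glx.h>
#include <stdio.h>

cl_context create_shared_context(void)
{
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    cl_context_properties properties[] = {
        CL_GL_CONTEXT_KHR,   (cl_context_properties) glXGetCurrentContext(),
        CL_GLX_DISPLAY_KHR,  (cl_context_properties) glXGetCurrentDisplay(),
        CL_CONTEXT_PLATFORM, (cl_context_properties) platform,
        0
    };

    cl_int err = CL_SUCCESS;
    cl_context ctx = clCreateContextFromType(properties, CL_DEVICE_TYPE_GPU,
                                             NULL, NULL, &err);
    if (err != CL_SUCCESS)
        fprintf(stderr, "clCreateContextFromType failed: %d\n", err);
    return ctx;
}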
I am trying to run this OpenCL example on Ubuntu 10.04.
My graphics card is an NVIDIA GeForce GTX 480. I have installed the latest NVIDIA driver and CUDA toolkit manually.
The program compiles without any errors. Thus linking with libOpenCL works. The application also runs but the output is very strange (mostly zeros and some random numbers). Debugging shows that
clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
returns -1001.
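For reference, -1001 is CL_PLATFORM_NOT_FOUND_KHR from the cl_khr_icd extension, i.e. the ICD loader could not find any usable vendor ICD. A minimal check that names that case explicitly (assuming CL/cl_ext.h is available) looks roughly like this:

// Sketch: report the -1001 case explicitly.
// CL_PLATFORM_NOT_FOUND_KHR (-1001) comes from CL/cl_ext.h (cl_khr_icd).
#include <CL/cl.h>
#include <CL/cl_ext.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform_id = NULL;
    cl_uint ret_num_platforms = 0;
    cl_int ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);

    if (ret == CL_PLATFORM_NOT_FOUND_KHR)
        printf("ICD loader found no vendor ICDs (check /etc/OpenCL/vendors)\n");
    else if (ret != CL_SUCCESS)
        printf("clGetPlatformIDs failed: %d\n", ret);
    else
        printf("%u platform(s) found\n", ret_num_platforms);
    return 0;
}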
Google and Stack Overflow told me that the reason may be a missing nvidia.icd in /etc/OpenCL/vendors. It was not there, so I added /etc/OpenCL/vendors/nvidia.icd with the following line:
libnvidia-opencl.so.1
I have also tried some variants (absolute paths etc.), but nothing solved the problem. Right now I have no idea what else I can try. Any suggestions?
EDIT: I have installed the Intel OpenCL SDK and copied its ICD into /etc/OpenCL/vendors, and now the application works fine for
clGetDeviceIDs( platform_id, CL_DEVICE_TYPE_DEFAULT, 1,
&device_id, &ret_num_devices);
For
clGetDeviceIDs( platform_id, CL_DEVICE_TYPE_GPU, 1,
&device_id, &ret_num_devices);
I get the error -1.
EDIT:
I noticed one thing in the console when executing the application. After executing the line
cl_int ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
the application gives me the output
modprobe: ERROR: ../libkmod/libkmod-module.c:809 kmod_module_insert_module() could not find module by name='nvidia_331_uvm'
modprobe: ERROR: could not insert 'nvidia_331_uvm': Function not implemented
There seems to be a conflict with an older driver version since I am using 340.
cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 340.32 Tue Aug 5 20:58:26 PDT 2014
Maybe I should try removing Ubuntu's own NVIDIA drivers once more and then reinstalling the latest driver manually?
EDIT:
The old driver was the problem. Somehow it wasn't removed properly, so I removed it once more with
apt-get remove nvidia-331 nvidia-opencl-icd-331 nvidia-libopencl1-331
and now it works. I hope this helps someone who has similar problems.
The problems mentioned above occurred due to a driver conflict. If you have a similar problem, read the edits above for the solution.
I am using Ubuntu 12.04 and I have installed OpenGL 4.
I also have a CUDA-enabled NVIDIA graphics card. Note that I have been doing parallel computation with CUDA on my PC, and that works.
[eeuser#roadrunner sample_opengl]$ glxinfo | grep gl
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
GL_ARB_texture_query_lod, GL_ARB_texture_rectangle, GL_ARB_texture_rg,
GL_NV_texture_multisample, GL_NV_texture_rectangle, GL_NV_texture_shader,
I cannot get a simple program to work. Can anyone give a sample program that will work on my PC?
Here is the code I am using:
#include <GL/glew.h> // include GLEW and new version of GL on Windows
#include <GLFW/glfw3.h> // GLFW helper library
#include <stdio.h>
int main () {
    // start GL context and O/S window using the GLFW helper library
    if (!glfwInit ()) {
        fprintf (stderr, "ERROR: could not start GLFW3\n");
        return 1;
    }

    // uncomment these lines if on Apple OS X
    /*glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);*/

    GLFWwindow* window = glfwCreateWindow (640, 480, "Hello Triangle", NULL, NULL);
    if (!window) {
        fprintf (stderr, "ERROR: could not open window with GLFW3\n");
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent (window);

    // start GLEW extension handler
    glewExperimental = GL_TRUE;
    glewInit ();

    // get version info
    const GLubyte* renderer = glGetString (GL_RENDERER); // get renderer string
    const GLubyte* version = glGetString (GL_VERSION); // version as a string
    printf ("Renderer: %s\n", renderer);
    printf ("OpenGL version supported %s\n", version);

    // tell GL to only draw onto a pixel if the shape is closer to the viewer
    glEnable (GL_DEPTH_TEST); // enable depth-testing
    glDepthFunc (GL_LESS); // depth-testing interprets a smaller value as "closer"

    /* OTHER STUFF GOES HERE NEXT */

    // close GL context and any other GLFW resources
    glfwTerminate();
    return 0;
}
I tried compiling with -
g++ main2.cpp -lglut -lGL -lGLEW -lGLU
I get an error:
main2.cpp:2:47: fatal error: GLFW/glfw3.h: No such file or directory
compilation terminated.
I am now wondering: what is this GLFW? More precisely, how do I install it? Following @eapert's answer,
I installed libglfw with
sudo apt-get install libglfw-dev
but I still get the following errors:
main2.cpp: In function ‘int main()’:
main2.cpp:18:2: error: ‘GLFWwindow’ was not declared in this scope
main2.cpp:18:14: error: ‘window’ was not declared in this scope
main2.cpp:18:79: error: ‘glfwCreateWindow’ was not declared in this scope
main2.cpp:24:32: error: ‘glfwMakeContextCurrent’ was not declared in this scope
Install the "dev" package for GLFW: sudo apt-get install libglfw3-dev
Warning: the version you get from the package manager may be out of date.
You can follow the instructions here or here.
I tried compiling with -
g++ main2.cpp -lglut -lGL -lGLEW -lGLU
Why are you linking GLUT when you want to use GLFW? Also you don't need GLU for your program. Try this:
g++ main2.cpp -lGL -lGLEW -lglfw
WARNING: The question was changed quite significantly since this answer was made
"I tried a few tutorials but I cannot get a simple program to work" I assume, given what you said earlier, your CUDA programs work on Windows, just not on Ubuntu?
1) Try installing a newer Ubuntu version first (if you have the option on that PC). 12.04 is a bit old, and you probably shouldn't be using it unless you have a good reason to (e.g. it's a company PC and upgrading would violate policy, something along those lines).
2) Try installing the proprietary NVIDIA drivers for your card (this should also give you the NVIDIA OpenGL implementation). You probably have Mesa installed. I suppose current Mesa versions have at least some GPGPU support, but I'm not sure whether they support CUDA (if you aren't particularly fond of CUDA you can also try OpenCL; you might have better chances with that, although there's still a good chance you'll need the proprietary NVIDIA drivers).
GLFW is a helper library for creating windows, setting up OpenGL in them, handling input, and so on. It's similar in that regard to GLUT, which you seem to be familiar with, but is a much more modern alternative. You'll of course need to install it to use it. You should be able to install it through Ubuntu's package manager as usual; try apt-cache search glfw, which should help you find the exact package name. If there are multiple "lib" versions, install the one ending in -dev.
I am trying to get an OpenGL-based rendering engine that relies on OpenGL 3.3 and GLSL 3.3 to run on Ubuntu 13.10 using an AMD Radeon 6950. I want to use the open source drivers (radeon), which rely on Mesa for their OpenGL implementation. Ubuntu 13.10 only provides Mesa 9.2 (implementing OpenGL 3.1) "out of the box". It is however possible to install Mesa 10.1 (implementing OpenGL 3.3) from this PPA as explained in this thread:
StackOverflow: OpenGL & GLSL 3.3 on an HD Graphics 4000 under Ubuntu 12.04
I used the exact same steps as explained there:
1.) Add the PPA Repository
$ sudo add-apt-repository ppa:oibaf/graphics-drivers
2.) Update sources
$ sudo apt-get update
3.) Dist-upgrade (rebuilds many packages)
$ sudo apt-get dist-upgrade
4.) Then I rebooted.
Mesa 10.1 was successfully installed. However, while glxinfo now reports that Mesa 10.1 is in use, it still reports only OpenGL 3.0 (compatibility profile) and OpenGL 3.1 (core profile):
$ glxinfo | grep OpenGL
OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD CAYMAN
OpenGL core profile version string: 3.1 (Core Profile) Mesa 10.1.0-devel (git-7f57408 saucy-oibaf-ppa+curaga)
OpenGL core profile shading language version string: 1.40
OpenGL core profile context flags: (none)
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 10.1.0-devel (git-7f57408 saucy-oibaf-ppa+curaga)
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
Why is that? How can I enable OpenGL 3.3? As can be seen by comparison in the StackOverflow thread I mentioned, it is possible to have glxinfo report OpenGL 3.3. I am aware that glxinfo may report the wrong version numbers, as per the Mesa 10.1 release notes; however, the rendering engine I'm trying to run fails because of this.
I use the following code to spawn a window:
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, 0);
if (GL_TRUE != glfwOpenWindow(
        _windowDimensions.x, _windowDimensions.y,
        0, 0, 0, 0, 32, 0, GLFW_WINDOW))
{
    THROW("GLFW error: failed to create window.");
}
When I try to run the rendering engine using this setup, the above exception gets thrown as OpenGL 3.3 is not supported. I can set GLFW_OPENGL_VERSION_MINOR to 0 and then the window opens fine, but an exception will be thrown later as GLSL 3.3 shaders are required.
Also note that the rendering engine runs fine when I use the proprietary fglrx drivers (and then glxinfo reports OpenGL version 4.2), so the application itself really is not the problem, but the supported OpenGL is.
So what am I doing wrong? Why doesn't Mesa 10.1 support OpenGL 3.3 for me? My graphics card certainly supports it.
Here's some additional information that may be useful.
$ apt-cache policy libgl1-mesa-glx
libgl1-mesa-glx:
Installed: 10.1~git1402041945.7f5740+curaga~gd~s
Candidate: 10.1~git1402041945.7f5740+curaga~gd~s
Version table:
*** 10.1~git1402041945.7f5740+curaga~gd~s 0
500 http://ppa.launchpad.net/oibaf/graphics-drivers/ubuntu/ saucy/main amd64 Packages
100 /var/lib/dpkg/status
9.2.1-1ubuntu3 0
500 http://archive.ubuntu.com/ubuntu/ saucy/main amd64 Packages
$ lspci -vv
...snip...
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cayman PRO [Radeon HD 6950] (prog-if 00 [VGA controller])
Subsystem: Hightech Information System Ltd. Device 2307
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 53
Region 0: Memory at c0000000 (64-bit, prefetchable) [size=256M]
Region 2: Memory at fe620000 (64-bit, non-prefetchable) [size=128K]
Region 4: I/O ports at e000 [size=256]
Expansion ROM at fe600000 [disabled] [size=128K]
Capabilities: <access denied>
Kernel driver in use: radeon
...snip...
$ lsmod | egrep 'radeon|fglrx'
radeon 1402995 3
i2c_algo_bit 13413 1 radeon
ttm 84169 1 radeon
drm_kms_helper 52710 1 radeon
drm 297056 5 ttm,drm_kms_helper,radeon
$ modinfo radeon
filename: /lib/modules/3.11.0-15-generic/kernel/drivers/gpu/drm/radeon/radeon.ko
license: GPL and additional rights
description: ATI Radeon
author: Gareth Hughes, Keith Whitwell, others.
...snip...
firmware: radeon/CAYMAN_smc.bin
firmware: radeon/CAYMAN_rlc.bin
firmware: radeon/CAYMAN_mc.bin
firmware: radeon/CAYMAN_me.bin
firmware: radeon/CAYMAN_pfp.bin
...snip...
srcversion: D174B1E4686391B33437915
alias: pci:v00001002d000099A4sv*sd*bc*sc*i*
alias: pci:v00001002d000099A2sv*sd*bc*sc*i*
...snip...
depends: drm,drm_kms_helper,ttm,i2c-algo-bit
intree: Y
vermagic: 3.11.0-15-generic SMP mod_unload modversions
parm: no_wb:Disable AGP writeback for scratch registers (int)
parm: modeset:Disable/Enable modesetting (int)
parm: dynclks:Disable/Enable dynamic clocks (int)
parm: r4xx_atom:Enable ATOMBIOS modesetting for R4xx (int)
parm: vramlimit:Restrict VRAM for testing (int)
parm: agpmode:AGP Mode (-1 == PCI) (int)
parm: gartsize:Size of PCIE/IGP gart to setup in megabytes (32, 64, etc) (int)
parm: benchmark:Run benchmark (int)
parm: test:Run tests (int)
parm: connector_table:Force connector table (int)
parm: tv:TV enable (0 = disable) (int)
parm: audio:Audio enable (1 = enable) (int)
parm: disp_priority:Display Priority (0 = auto, 1 = normal, 2 = high) (int)
parm: hw_i2c:hw i2c engine enable (0 = disable) (int)
parm: pcie_gen2:PCIE Gen2 mode (-1 = auto, 0 = disable, 1 = enable) (int)
parm: msi:MSI support (1 = enable, 0 = disable, -1 = auto) (int)
parm: lockup_timeout:GPU lockup timeout in ms (defaul 10000 = 10 seconds, 0 = disable) (int)
parm: fastfb:Direct FB access for IGP chips (0 = disable, 1 = enable) (int)
parm: dpm:DPM support (1 = enable, 0 = disable, -1 = auto) (int)
parm: aspm:ASPM support (1 = enable, 0 = disable, -1 = auto) (int)
$ dpkg -S /lib/modules/3.11.0-15-generic/kernel/drivers/gpu/drm/radeon/radeon.ko
linux-image-extra-3.11.0-15-generic: /lib/modules/3.11.0-15-generic/kernel/drivers/gpu/drm/radeon/radeon.ko
$ apt-cache policy linux-image-extra-3.11.0-15-generic
linux-image-extra-3.11.0-15-generic:
Installed: 3.11.0-15.25
Candidate: 3.11.0-15.25
Version table:
*** 3.11.0-15.25 0
500 http://archive.ubuntu.com/ubuntu/ saucy-updates/main amd64 Packages
500 http://archive.ubuntu.com/ubuntu/ saucy-security/main amd64 Packages
100 /var/lib/dpkg/status
What they do not tell you, but indirectly imply ("Some drivers don't support all the features required in OpenGL 3.3."), is that in the last official release of Mesa (10.0), GL 3.3 only works on Intel hardware. This is one of the joys of Intel's close involvement with the Mesa project. If you want reliable GL 3.3 support in any form on AMD hardware, you should use fglrx (the proprietary AMD driver) for the time being.
The development release of Mesa 10.1 may implement GL 3.3 on radeon drivers, but you need to request a 3.3 core profile. You are not doing this currently.
This:
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, 0);
Actually needs to be this:
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
Also, there is no such thing as a GL 3.0 compatibility profile or 3.1 core profile. Profiles were not introduced into OpenGL until 3.2. There is a concept of GL_ARB_compatibility in GL 3.1, but that is not the same thing as a profile; glxinfo is giving misleading information.
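Concretely, with the GLFW 2.x API the question uses, a minimal self-contained version of that window-creation code with the core-profile hint added might look like this (a sketch; window size is arbitrary):

// Sketch: request a 3.3 core profile with the GLFW 2.x API used in the question.
#include <GL/glfw.h>   // GLFW 2.x header (also pulls in GL)
#include <stdio.h>

int main(void)
{
    if (glfwInit() != GL_TRUE)
        return 1;

    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
    glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);   // the missing piece

    if (GL_TRUE != glfwOpenWindow(640, 480, 0, 0, 0, 0, 32, 0, GLFW_WINDOW))
    {
        fprintf(stderr, "GLFW error: failed to create window.\n");
        glfwTerminate();
        return 1;
    }

    printf("GL_VERSION: %s\n", (const char*) glGetString(GL_VERSION));
    glfwTerminate();
    return 0;
}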
I answered the thread the OP mentions regarding "OpenGL & GLSL 3.3 on an HD Graphics 4000 under Ubuntu 12.04", but I thought I would give the same answer here too, considering how scarce info on this seems to be. This works for those using freeglut and GLEW:
So I've seen a lot of threads surrounding this, and I thought here would be a good place to respond. I'm running Ubuntu 15.04 on Intel Ivy Bridge. After using the "Intel Graphics Installer for Linux" application, glxinfo gives the following info regarding OpenGL:
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.6.0
OpenGL core profile shading language version string: 3.30
OpenGL version string: 3.0 Mesa 10.6.0
OpenGL shading language version string: 1.30
Now from this you can see that the core profile and GLSL versions are 3.3, but the compatibility OpenGL version is only 3.0. Thus, if you want your code to run with 3.3, you need to specify both an OpenGL core profile and a GLSL core profile. The following steps should work if you're using freeglut and GLEW (a combined sketch follows the list):
- the GLSL #version should specify that you want the core profile:
#version 330 core
- specify that you want OpenGL 3.3:
glutInitContextVersion (3, 3);
- and finally, set glewExperimental to true before glewInit():
glewExperimental = GL_TRUE;
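Putting those steps together, a minimal freeglut + GLEW setup might look something like this (just a sketch; the window title and size are arbitrary and error handling is minimal):

// Minimal freeglut + GLEW core-profile setup (sketch).
#include <GL/glew.h>
#include <GL/freeglut.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitContextVersion(3, 3);                 // request OpenGL 3.3
    glutInitContextProfile(GLUT_CORE_PROFILE);    // core profile, to match "#version 330 core"
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("GL 3.3 core test");

    glewExperimental = GL_TRUE;                   // so GLEW loads core-profile entry points
    if (glewInit() != GLEW_OK) {
        fprintf(stderr, "glewInit failed\n");
        return 1;
    }

    printf("GL_VERSION: %s\n", glGetString(GL_VERSION));
    printf("GLSL:       %s\n", glGetString(GL_SHADING_LANGUAGE_VERSION));
    return 0;
}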
hope this helps some people get started :)
I am writing a C++ application on Win64 using OpenCL. I downloaded CUDA SDK 4.2 and installed OpenCL.lib in my lib directory. On the first invocation of OpenCL:
cl_uint n = 0;
cl_int err = ::clGetPlatformIDs(0, NULL, &n);
my application crashes (with a system fatal error). Does anybody have the same problem on an NVIDIA GTX 670M? How did you solve this issue?