Getting Started with OpenCL with NVIDIA graphics cards and Ubuntu Linux - c++

I am looking to start programming with OpenCL. I currently have a laptop running Ubuntu Linux. (More specifically, it's Linux Mint; however, the two are similar in many respects, and I will be changing back to Xubuntu shortly, so I am hoping any info will apply to both.)
This laptop is a "difficult" laptop because it has both an on-chip Intel graphics processor (side by side with the CPU) and a dedicated NVIDIA graphics card. (I believe it is a GTX 670?) I say difficult because it was fairly complicated to install the drivers that let me develop with OpenGL. Even now I occasionally get caught out when I run my program and it crashes because I didn't launch it with 'optirun'.
Anyway, back to the question at hand: I have researched the required software and keep being pointed at NVIDIA's site to download their OpenCL drivers / toolkit. However, I would prefer to use Khronos OpenCL rather than NVIDIA's CUDA. I don't fully understand what the difference is*, and information online is either limited or cryptic.
The actual programming / problem vectorization I have already done; I'm just a bit lost at the moment as to what software I should / must install and how to go about doing so.
*Edit: I find the OpenCL syntax more intuitive.
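Once the NVIDIA driver and an OpenCL runtime/headers are installed (on Ubuntu the opencl-headers and ocl-icd packages are one common combination, though the exact package names are an assumption here), a quick sanity check is to enumerate the platforms and GPU devices. A minimal sketch, built with something like g++ check_cl.cpp -lOpenCL (the filename is made up):
#include <CL/cl.h>
#include <cstdio>

int main() {
    // Ask the ICD loader how many OpenCL platforms are installed.
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(0, NULL, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        std::printf("No OpenCL platforms found - is the vendor driver installed?\n");
        return 1;
    }

    cl_platform_id platforms[16];
    if (num_platforms > 16) num_platforms = 16;
    clGetPlatformIDs(num_platforms, platforms, NULL);

    for (cl_uint i = 0; i < num_platforms; ++i) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);

        // Count the GPU devices exposed by this platform (0 usually means
        // the platform only offers CPU devices).
        cl_uint num_gpus = 0;
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 0, NULL, &num_gpus);
        std::printf("Platform %u: %s (%u GPU device(s))\n", i, name, num_gpus);
    }
    return 0;
}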

Related

Where is OpenGL located (GPU software or OS)? [duplicate]

This question already has answers here:
How does OpenGL work at the lowest level? [closed]
I am writing an x86 OS and a question popped into my mind:
If I wanted to create a simple OpenGL game in my OS, would I be able to do that without reinventing OpenGL?
So what I am asking is: is OpenGL included in, for example, the NVIDIA drivers, or is it located in the GPU firmware?
If it were in the GPU, I could simply port/create an OpenGL wrapper, right?
Can someone elaborate on this?
Thanks
OpenGL is the API of the GPU driver. Taking NVIDIA as an example, they release closed-source drivers for supported operating systems. There are also open-source drivers (the nouveau project) that try to reverse engineer NVIDIA graphics cards and implement an open-source driver for them. The same is true, to some extent, for other vendors.
So, considering your scenario, you would either need to implement in your OS an ABI compatibility layer with a widely supported OS so that you could run the closed-source drivers, or port the open-source community drivers to your OS.
The GPU hardware executes specific code. Some of this code is programmable, which means that you write special code that runs inside the GPU card.
The instructions to pass this special code (shaders, in OpenGL parlance) and the data it handles form the graphics API (OpenGL, DirectX). There are further instructions for the GPU as well; they are also handled by the API.
This API lives in the graphics card driver.
First, an app asks the OS to provide the function pointers to the API commands. These pointers are retrieved from the driver. Then the app uses these pointers to communicate with the GPU (via the driver).
Two details: retrieving pointers is not needed on Mac, where the functions are available like any other C/C++ call. The same is true on Windows, but only for OpenGL 1.1.
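To make the function-pointer point concrete, here is a hedged sketch of how a Linux/GLX program typically asks the driver for a modern OpenGL entry point at runtime (loaders such as GLEW or glad simply do this for every function); on Windows the equivalent call is wglGetProcAddress. Build with something like g++ resolve.cpp -lGL:
#include <GL/glx.h>   // glXGetProcAddress, from the GL/GLX development headers
#include <cstdio>

// Function-pointer type for one OpenGL 2.0 entry point, as an example.
typedef void (*PFNGLUSEPROGRAM)(unsigned int program);

int main() {
    // The driver hands back the address of its own glUseProgram implementation.
    // A real program would only call through this pointer once a GL context exists.
    PFNGLUSEPROGRAM useProgram =
        (PFNGLUSEPROGRAM)glXGetProcAddress((const GLubyte*)"glUseProgram");

    std::printf("glUseProgram %s\n",
                useProgram ? "was resolved by the driver" : "was not resolved");
    return 0;
}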
The drivers for Windows and Mac are proprietary software.
On Linux, NVIDIA, AMD and Intel provide their own drivers (but mostly as closed source). Also on Linux, there are open-source drivers written by independent developers.
Finally, there is a software implementation of the OpenGL API, done by Mesa. Mesa is also one of the projects that write open-source drivers for Linux.

OpenGL game runs fine in Win7, drops to 5fps in Windows 8?

I have just recently installed Windows 8, and I tried to compile and build a simple C++ game project in VS 2010, but when I did, it ran at 5 fps. On Windows 7, it runs at a solid 60 fps. Nothing has been changed in the code, yet there is horrible slowdown.
I have updated my video drivers, but there is still horrible lag. I thought the problem had to do with compatibility issues between Windows 8 and OpenGL, but I can't find anything to confirm this. I was wondering if anyone else has had this problem, and if so, whether you have solved it.
I would recommend you test your graphics card / drivers first. All sorts of driver issues can arise when you upgrade operating systems. One of the best tests is to download Cinebench and see how it performs; Cinebench will evaluate your OpenGL performance. If you get poor results, then you know it's a hardware / driver issue and not an issue with your application.
If the Cinebench results are good, then you can move on to the recommendations made by @Robert Rouhani (comments).
http://www.maxon.net/products/cinebench/overview.html
What sort of video card do you have in the Win8 machine?
If it's a laptop you might be battling against NVIDIA Optimus (or an equivalent technology). Basically, programs have to tell the OS in advance that they want to use the discrete video card, or they default to the low-power GPU embedded alongside the CPU (note: over-simplification).
If this is the case, there are options in the NVIDIA control panel that let you create a profile telling the OS to run your app on the discrete GPU rather than the embedded one.
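For completeness: besides the control-panel profile, NVIDIA also documents an exported symbol that the Optimus driver checks when the executable loads. A minimal, Windows/MSVC-only sketch (treat this as a hint to verify against NVIDIA's Optimus rendering-policies document rather than a guaranteed fix):
// Export this symbol from the .exe itself (not from a DLL). The Optimus driver
// reads it at load time and, when it is set to 1, runs the process on the
// discrete NVIDIA GPU instead of the integrated one.
extern "C" {
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
}

int main() {
    // Create the window / OpenGL context as usual here; the GPU selection has
    // already happened because of the exported symbol above.
    return 0;
}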

Is there any good way to get an indication of whether a computer can run a specific program/software?

Is there any good way to get an indication of whether a computer is capable of running a program/software without performance problems, using pure JavaScript (Google V8) or C++ (Windows, Mac OS & Linux), while requiring as little information as possible from the software creator (like a CPU score or GPU score)?
That way I can give my users a good indication of whether their computer is good enough to run the software, so they don't need to download and install it in the first place if they won't be able to run it anyway.
I'm thinking of something like score-based indications:
CPU: 230 000 (generic processor score)
GPU: 40 000 (generic GPU score)
+ Network/File I/O read/write requirements
That way I only need to calculate those scores on the user's computer and compare them, as long as I'm using the same algorithm, but I have no clue about any such algorithm that would be sufficient for real-world desktop software.
I would suggest testing for the existence of specific libraries and environment features (OS version, video card presence, working sound drivers, DirectX, OpenGL, Gnome, KDE). Assign priorities to these and compare using the priorities, e.g. video card presence is more important than KDE presence.
The problem is that even outdated hardware can run most software without issues (just more slowly), while the newest hardware cannot run some software until its requirements are installed.
For example, I can run Firefox 11 on my Pentium III Coppermine (using FreeBSD and an X server), but if you install Windows XP on the newest hardware with a six-core i7 and an nVidia GTX 640, it still cannot run DirectX 11 games.
This method requires no assistance from the software creator, but is not 100% accurate.
If you want 90+% accurate information, make the software creator check 5-6 checkboxes before uploading. Example:
My application requires DirectX/OpenGL/3D acceleration
My application requires sound
My application requires Windows Vista or later
My application requires a [high-bandwidth] network connection
then you can test specific applications using information from these checkboxes.
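As a trivial sketch of how those checkbox answers could be consumed on the client side (all names below are invented for illustration, not part of any existing API):
#include <cstdio>

// Flags the software creator ticks when uploading (hypothetical names).
struct Requirements {
    bool needs3DAcceleration;
    bool needsSound;
    bool needsVistaOrLater;
    bool needsHighBandwidthNetwork;
};

// What the detection code found on the user's machine.
struct Detected {
    bool has3DAcceleration;
    bool hasWorkingSound;
    bool isVistaOrLater;
    bool hasHighBandwidthNetwork;
};

// Every ticked requirement must be satisfied by what was detected.
bool canProbablyRun(const Requirements& req, const Detected& det) {
    return (!req.needs3DAcceleration       || det.has3DAcceleration)
        && (!req.needsSound                || det.hasWorkingSound)
        && (!req.needsVistaOrLater         || det.isVistaOrLater)
        && (!req.needsHighBandwidthNetwork || det.hasHighBandwidthNetwork);
}

int main() {
    Requirements req = { true, true, false, false };   // a game needing 3D + sound
    Detected     det = { true, false, true, true };    // machine with broken sound drivers
    std::printf("Verdict: %s\n", canProbablyRun(req, det) ? "should run" : "likely unsupported");
    return 0;
}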
Edit:
I think additional checks could be:
video/audio codecs
pixel/vertex/geometry shader version, GPU physics acceleration (may be crucial for games)
less relevant nowadays: processor extensions (SSE2, MMX, etc.)
third-party software such as PDF readers, Flash, etc.
system libraries (libpng, libjpeg, svg)
system version (Service Pack number, OS edition (Premium, Professional, etc.))
window manager (some apps on OS X require X11 to function, some apps on Linux work only on KDE, etc.)
These are actual requirements I (and many others) have seen when installing different software.
As for old hardware, if the computer satisfies hardware requirements (pixel shader version, processor extensions, etc), then there's a strong reason to believe the software will run on the system (possibly slower, but that's what benchmarks are for if you need them).
For GPUs, I do not think getting a score is usable/possible without running some code on the machine to test whether it is up to spec.
With GPUs this typically means checking which shader models the card can use, and then either defaulting to a lower shader model (so the application runs at reduced quality) or telling the user they have no hope of running the code and quitting.
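A hedged sketch of that "run some code on the machine" approach on the OpenGL side: create a throwaway context (GLUT here, to match the rest of this page) and read back the renderer, GL version and GLSL version strings, which is roughly the information a shader-model check is based on. Build with something like g++ probe.cpp -lglut -lGL:
#include <GL/glut.h>
#include <cstdio>

#ifndef GL_SHADING_LANGUAGE_VERSION
#define GL_SHADING_LANGUAGE_VERSION 0x8B8C   // OpenGL 2.0 enum, in case the headers are old
#endif

// glGetString may return NULL for enums the driver does not know.
static const char* str(GLenum name) {
    const GLubyte* s = glGetString(name);
    return s ? reinterpret_cast<const char*>(s) : "(unavailable)";
}

int main(int argc, char** argv) {
    // glGetString only returns useful data once a context exists,
    // so create a minimal window first.
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
    glutCreateWindow("capability probe");

    std::printf("Renderer:       %s\n", str(GL_RENDERER));
    std::printf("OpenGL version: %s\n", str(GL_VERSION));
    std::printf("GLSL version:   %s\n", str(GL_SHADING_LANGUAGE_VERSION));
    return 0;
}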

OpenGL GLUT on VirtualBox Ubuntu 11.10 segmentation fault

DISCLAIMER:
I see that suggestions for the exact same question come up; however, that (similar) post was migrated to Super User and seems to have been removed. I would still like to post my question here because I consider it software/programming related enough not to post it on Super User (the line between a software and a hardware issue is sometimes vague).
I am running a very simple OpenGL program in Code::Blocks in VirtualBox, with Ubuntu 11.10 installed on an SSD. Whenever I build & run a program I get these errors:
OpenGL Warning: XGetVisualInfo returned 0 visuals for 0x232dbe0
OpenGL Warning: Retry with 0x802 returned 0 visuals
Segmentation fault
From what I have gathered so far, this is VirtualBox related. I need to set
LIBGL_ALWAYS_INDIRECT=1
In other words, I need to enable indirect rendering via X.org rather than communicating directly with the hardware. This issue is probably not related to the fact that I have an ATI card, as I have a laptop with an ATI card that runs the same program flawlessly.
Still, I don't dare say that my GPU being an ATI card plays no role at all. Nor am I sure the drivers are correctly installed (under System info -> Graphics -> graphics driver it says: Chromium).
Any help on HOW to set LIBGL_ALWAYS_INDIRECT=1 would be greatly appreciated. I simply lack the knowledge of where to put this command or where/how to execute it in the terminal.
Sources:
https://forums.virtualbox.org/viewtopic.php?f=3&t=30964
https://www.virtualbox.org/ticket/6848
EDIT: in the terminal, type:
export LIBGL_ALWAYS_INDIRECT=1
To verify that direct rendering is off:
glxinfo | grep direct
However, the problem persists. I still get mentioned OpenGL warnings and the segmentation fault.
I ran into this same problem running the Bullet Physics OpenGL demos on Ubuntu 12.04 inside VirtualBox. Rather than using indirect rendering, I was able to solve the problem by modifying the glut window creation code in my source as described here: https://groups.google.com/forum/?fromgroups=#!topic/comp.graphics.api.opengl/Oecgo2Fc9Zc.
This entailed replacing the original
...
glutCreateWindow(title);
...
with
...
if (!glutGet(GLUT_DISPLAY_MODE_POSSIBLE))
{
exit(1);
}
glutCreateWindow(title);
...
as described in the link. It's not clear to me why this should correct the segfault issue; apparently glutGet has some side effects beyond retrieving state values. It could be a quirk of freeglut's implementation of glut.
If you look at the /etc/environment file, you can see a couple of variables exposed there - this will give you an idea of how to expose that environment variable across the entire system. You could also try putting it in either ~/.profile or ~/.bash_profile, depending on your needs.
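Alternatively, the variable can be set from inside the program itself, as long as it happens before GLUT/GLX touch the X display. A hedged sketch using the POSIX setenv call (build with g++ test.cpp -lglut -lGL; the window contents are just a placeholder):
#include <cstdlib>     // setenv (POSIX)
#include <GL/glut.h>

static void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    // Set the variable before any GL/GLX initialization so that libGL sees it
    // when it decides between direct and indirect rendering.
    setenv("LIBGL_ALWAYS_INDIRECT", "1", 1 /* overwrite */);

    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
    glutCreateWindow("indirect rendering test");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}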
The real question in my mind is: did you install the Guest Additions for Ubuntu? You shouldn't need to install any ATI drivers in your guest, as VirtualBox won't expose the actual physical graphics hardware to your VM. You can configure your guest to support 3D acceleration in the virtual machine settings (make sure you shut down the VM first) under the Display section. You will probably want to boost the allocated video memory - 64 MB or 128 MB should be plenty, depending on your needs.

OpenCV on Embedded Platform

Can someone suggest a test/development embedded platform to use with OpenCV?
I would like to develop an embedded video analytics solution, but I don't know where to start.
Any suggestions/ideas/hardware starter kits?
Maybe some PC/104 solutions with an Intel Atom? Has anyone run performance tests on this platform, or on any other embedded platform?
Thanks
An OpenCV application built for a Pentium-class PC will run unmodified on any Atom platform with the same OS, because Atom CPUs natively execute the same x86 instructions.
If you are looking for a more embedded solution, there are OpenCV ports for the BeagleBoard. Since OpenCV is portable code, it can be compiled for most systems that provide a C/C++ compiler. I have successfully used OpenCV on ARM, MIPS and XScale processors.
As for mobile platforms, there are ports for the iPhone, Android and various Windows CE/Mobile/Embedded versions.
If you're looking for a very small option, I strongly recommend the Gumstix Overo series. I use them for my Computer Vision research, and they work really well. There are a couple of options for processors, I'd recommend the Overo Tide module, which has 512 MB of RAM, and an onboard DSP for offloading some CV operations. Combine this with a Tobi expansion board and a few cables, and you've got a full embedded computer vision research platform for ~$350. They also sell a small camera, which I'm still getting around to trying out. What's nice about the Gumstix is you can just build OpenCV onboard, which saves you some of the headaches with BitBake type solutions.
I'd personally recommend TI OMAP platforms - Beagleboard xM and PandaBoard.
Those boards have embedded video input, run Linux, and have more than enough performance to run OpenCV. They are also extremely portable and have good community support.
Do you mean OpenCV, the computer vision library originally developed by Intel? I would be inclined to start with Moblin, Intel's embedded Linux, at moblin.org, and for hardware use a netbook or any PC that Moblin supports. Hook up a supported webcam from the list at www.qbik.ch/usb/devices/search_res.php?pattern=webcam .
There is a Wikipedia entry that might help. Your project sounds like fun!
cheers -- Rick
You can use the Blackfin kit from Analog Devices. Analog Devices has created a library similar to OpenCV for the Blackfin DSP processor.
You can use the Symbian simulator for this; Nokia has an OpenCV port for Symbian. For hardware testing you have to email them, and they will provide you remote access to the hardware (via telnet) for a given period of time.
OpenCV does not need any "special" hardware to function: you can use it entirely with images read from ordinary files (e.g. JPG).
Have you looked at some of the tutorials/code? Do they require something specific that you do not have?
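As an illustration of the file-only route, a minimal sketch that exercises OpenCV on nothing but an image read from disk (the filename is made up):
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    // Load an ordinary JPG from disk - no camera and no special hardware involved.
    cv::Mat img = cv::imread("test.jpg");
    if (img.empty()) {
        std::printf("Could not read test.jpg\n");
        return 1;
    }

    // Run a couple of basic operations to prove the library works end to end.
    cv::Mat gray, edges;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 100, 200);

    std::printf("Loaded %dx%d image, computed a %dx%d edge map\n",
                img.cols, img.rows, edges.cols, edges.rows);
    return 0;
}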
Vision Components seems to support OpenCV in their Smart Cameras (see this article).
I guess I am late to answer.
I have recently used OpenCV 3.4.6 with PC/104 boards (PCM-3365) for an industrial application.
The only thing to note is that when I start the web camera using cv::VideoCapture, it takes a long time to open (around 30-40 seconds); otherwise everything works fine.
Good Luck
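For anyone chasing the same delay, a hedged sketch that times the cv::VideoCapture open call separately from the first frame grab, to narrow down where those 30-40 seconds go:
#include <opencv2/opencv.hpp>
#include <chrono>
#include <cstdio>

int main() {
    auto t0 = std::chrono::steady_clock::now();
    cv::VideoCapture cap(0);           // device index 0; adjust for your board
    auto t1 = std::chrono::steady_clock::now();

    if (!cap.isOpened()) {
        std::printf("Failed to open the camera\n");
        return 1;
    }

    cv::Mat frame;
    cap.read(frame);                   // the first grab is sometimes the slow part
    auto t2 = std::chrono::steady_clock::now();

    long long open_ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    long long grab_ms = std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count();
    std::printf("open: %lld ms, first frame: %lld ms (%dx%d)\n",
                open_ms, grab_ms, frame.cols, frame.rows);
    return 0;
}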