I could not find anything on the use of CUDA on Tegra processors,
even though they provide quite a lot of SIMD cores (~72).
It would seem that NVIDIA currently focuses its Tegra development efforts on the Tegra development kit (based on Android).
So my question is:
"Is it possible to use CUDA (or OpenCL) on Tegra 4 or predecessors and if so what version is supported?"
We were confused by the news articles too. We have since learned the following:
CUDA is not supported on Tegra 4, according to this tweet (also here) by SO user "harrism" who works for NVIDIA. It is anticipated for a future Tegra version (same tweet as a source).
OpenCL is not supported on Tegra.
OpenGL ES 2 shaders have always been supported on Tegra and here are some Tegra 2 and Tegra 3 demos with these shaders from our previous work at AccelerEyes.
We're looking forward to running our stuff on the 72 GPU cores though, using our ES 2 shaders. Awesome chip.
Cheers!
As was correctly pointed out, Tegra 2/3/4 do not support CUDA. Logan will be the first Tegra to support CUDA and OpenCL.
However, NVIDIA is already trying to get people started with CUDA on Tegra: at the moment you can pair a Tegra 3 with an NVIDIA PCIe graphics card.
There are a few development boards supporting that setup; see the device-query sketch after the list:
Nvidia Jetson: http://www.nvidia.com/object/jetson-automotive-developement-platform.html
Nvidia Kayla
Toradex Apalis: http://www.toradex.com/products/apalis-arm-computer-modules/apalis-t30
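For completeness, here is a minimal sketch of how you would enumerate the CUDA devices visible on such a board (plain CUDA runtime API calls from C++; nothing Tegra-specific, and it assumes the CUDA toolkit is installed):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess || count == 0) {
            // On Tegra 2/3/4 without a PCIe card this is what you will see.
            std::printf("no CUDA device: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("device %d: %s (compute capability %d.%d)\n",
                        i, prop.name, prop.major, prop.minor);
        }
        return 0;
    }

On a Tegra 3 + PCIe setup like the boards above, only the discrete card shows up as a CUDA device; the Tegra's own GPU cores stay invisible to CUDA.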
Related
I have an in-house application that uses the now-deprecated NVIDIA SceniX scene graph and Cg shaders. It works fine, and as it is in-house we can choose what hardware to run it on.
The shaders currently use the vp40/fp40 profiles (though I could change them to later profiles like GLSLV/GLSLF). I am trying to confirm that the current crop of NVIDIA hardware STILL supports Cg shaders, i.e. if we purchase the latest OpenGL 4 GeForce or Quadro cards, will they still support the Cg profiles? I have asked on the NVIDIA forum but got no answer. Eventually we will have to upgrade to a new scene graph and GLSL, but I want to know what 'legacy' support there is for the Cg shaders.
Thanks
Yes, you're perfectly fine. In fact, the GLSL implementation in the NVidia drivers is actually an add-on to the Cg compiler. Even on latest-generation GPUs, the NVidia driver internally first translates GLSL to NV/ARB_program_… assembly (source code, in fact) and runs this through the assembler. It's unlikely NVidia is going to change that in the near future (although the introduction of SPIR-V may force their hand). And all the legacy OpenGL ARB/NV_program interfaces are still supported just fine as extensions (even in an OpenGL-4 core profile).
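If you want to double-check on a concrete driver/GPU combination before buying, the Cg runtime can tell you directly. A minimal probe sketch (GLFW is just my choice for creating the required GL context; any method works):

    #include <cstdio>
    #include <GLFW/glfw3.h>
    #include <Cg/cg.h>
    #include <Cg/cgGL.h>

    int main() {
        // The Cg GL runtime needs a current OpenGL context.
        if (!glfwInit()) return 1;
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
        GLFWwindow *win = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
        if (!win) return 1;
        glfwMakeContextCurrent(win);

        // Are the legacy profiles from the question still there?
        std::printf("vp40: %s, fp40: %s\n",
                    cgGLIsProfileSupported(CG_PROFILE_VP40) ? "yes" : "no",
                    cgGLIsProfileSupported(CG_PROFILE_FP40) ? "yes" : "no");

        // What the runtime considers the best profiles for this hardware:
        std::printf("latest: %s / %s\n",
                    cgGetProfileString(cgGLGetLatestProfile(CG_GL_VERTEX)),
                    cgGetProfileString(cgGLGetLatestProfile(CG_GL_FRAGMENT)));

        glfwTerminate();
        return 0;
    }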
I want to write a program with OpenGL version 4. The currently installed version of OpenGL on my computer is 2.1.0. I looked for a way to install the latest version of OpenGL, but online articles say that the only way to update the OpenGL libraries is to update the graphics card driver.
I have a laptop with Mobile Intel(R) 4 Series Express Chipset Family graphics. The last driver update was released in 2010, and it appears to be abandoned.
Is it possible to write high version OpenGL software with a bad graphics card? I don't care if my program will be running with low FPS rate or be very sluggish on my hardware. I just would like to know if it is technically possible.
Your graphics card must support OpenGL 4 for you to develop with it. The hardware (graphics card) has to be compatible with the OpenGL version you want to target, and the driver installed on your system has to expose that version for the card.
Supported cards for OpenGL 4 (Wikipedia):
NVIDIA GeForce 400, 500, 600, and 700 series; ATI Radeon HD 5000 series; AMD Radeon HD 6000 and 7000 series. Also supported by Intel's Windows drivers for Haswell's integrated GPU.
In your case, your graphics card and driver only allow OpenGL 2.1.
Nowadays almost any graphics card in the 40-50 Euro range can run OpenGL 4 (but swapping the card in a laptop is usually not possible).
For more information, check Wikipedia and NVIDIA.
We began learning OpenGL at school and, in particular, implemented a .obj mesh loader. When I run my code at school with quite heavy meshes (4M up to 17M faces), I have to wait a few seconds for the mesh to be loaded but once it is done, I can rotate and move the scene with a perfect fluidity.
I compiled the same code at home, and I have very low performances when moving in a scene where heavy meshes are displayed.
I'm using the 3.0 Mesa 10.1.3 version of OpenGL (this is the output of cout << glGetString(GL_VERSION) << endl) and compiling with g++-4.9. I don't remember the version numbers at my school but I'll update my message as soon as possible if needed. Finally, I'm on Ubuntu 14.04, my graphics card is an NVIDIA GeForce 605, my CPU is an Intel(R) Core(TM) i5-2320 CPU @ 3.00GHz, and I have 8 GB of RAM.
If you have any idea that could help me understand (and fix) why it is running so slowly on a quite good computer (certainly not a racehorse, but good enough for this), please tell me. Thanks in advance!
TL;DR: You're using the wrong driver. Install the proprietary, closed source binary drivers from NVidia and you'll get very good performance. Also with a GeForce 605 you should get some OpenGL-4.x support.
I'm using the 3.0 Mesa 10.1.3 version of OpenGL
(…)
my graphics card is an NVIDIA GeForce 605
That's your problem right there. The open source "nouveau" drivers for NVidia GPUs that are part of Mesa are a very long way from offering any kind of reasonable HW acceleration support. This is because NVidia doesn't publish openly available documentation on their GPUs' low-level programming interfaces.
So at the moment the only option for getting HW-accelerated OpenGL on your GPU is to install NVidia's proprietary drivers. They are available on NVidia's website; however, since your GPU isn't "bleeding edge", I recommend the ones installable through the package manager; you'll have to add a "nonfree" package source repository, though.
This is in stark contrast to the AMD GPUs, which have full, openly accessible documentation coverage. Because of that the Mesa "radeon" drivers are quite mature: full OpenGL-3.3 core support, with performance good enough for most applications, in some applications even outperforming AMD's proprietary drivers. OpenGL-4 support is work in progress for Mesa as a whole, and last time I checked the "radeon" drivers' development was actually moving at a faster pace than the Mesa OpenGL state tracker itself.
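To see which driver a context actually lands on, just print the GL strings (glxinfo shows the same information); a quick sketch, with GLFW as my choice for context creation:

    #include <cstdio>
    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return 1;
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
        GLFWwindow *win = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
        if (!win) return 1;
        glfwMakeContextCurrent(win);

        // nouveau/Mesa reports a "Gallium ..." renderer and a Mesa version
        // string; the proprietary driver reports "NVIDIA Corporation".
        std::printf("vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
        std::printf("renderer: %s\n", (const char *)glGetString(GL_RENDERER));
        std::printf("version:  %s\n", (const char *)glGetString(GL_VERSION));

        glfwTerminate();
        return 0;
    }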
I currently run a machine that allows me to program in OpenGL 2.1. If I were to make a program, should I use the power of the current OpenGL versions like 3.x/4.x or use 2.1?
On a side question: How can I tell what's the highest version of OpenGL my computer can run?
On another side question: does only upgrading my video card allow me to program in upgraded versions of OpenGL?
OpenGL versions (for AMD and NVIDIA GPUs) roughly correspond to levels of hardware: 2.x OpenGL versions are for DX9-level hardware, 3.x represents DX10-level, and 4.x represents DX11-class hardware. So the version you pick restricts where your code can run.
In general, any AMD or NVIDIA GPU you can actually buy new from a store will be 3.x or better (more than likely, 4.x). Even integrated GPUs, motherboard or CPU, from AMD are 3.x or better. I do some home development work on an HD 3300 motherboard GPU, and it works reasonably well.
Intel is a problem. Intel's OpenGL driver quality is pretty poor. Many old Intel machines can only support GL 1.4, which is pre-DX9 class functionality. They do support some higher-level extensions (shaders, but only vertex shaders, since they run them in software).
More recent Intel GPUs are a bit better, but their GL drivers are still rather buggy.
The above describes the situation for Windows. Linux is a bit fuzzier, because there are drivers from NVIDIA/AMD, and open-source community written drivers. The latter are generally not as good, but they are improving. These tend to be for 3.x-class hardware.
The MacOSX world is a bit different. Mac OSX Lion (10.7), recently released, adds support for OpenGL 3.2 (sadly, not 3.3, for some reason). Apple rigidly controls how OpenGL works on their platform, but hopefully they will be updating GL versions more frequently than they have been recently.
So on Macs, you really have two choices: 2.1 or 3.2. Note that Lion's 3.2 support only exposes core OpenGL functionality. See this page for details on what that means.
You cannot directly ask what the highest version is that your particular computer is capable of. There is simply the version you get when you create a context. In general, unless you specifically ask for a version (and even then, usually not), you will get the highest version your hardware and drivers can handle.
Oh, and yes: the OpenGL version is controlled by your video card's capabilities (and installed drivers).
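If you do want to pin a version, you ask for it when you create the context. A sketch with GLFW (my choice here; WGL/GLX/AGL have equivalent mechanisms):

    #include <cstdio>
    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return 1;
        // Explicitly request a 3.3 core-profile context.
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        GLFWwindow *win = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);
        if (!win) {
            // Creation fails if the driver cannot provide 3.3 core.
            std::printf("3.3 core context not available\n");
            glfwTerminate();
            return 1;
        }
        glfwMakeContextCurrent(win);
        std::printf("got: %s\n", (const char *)glGetString(GL_VERSION));
        glfwTerminate();
        return 0;
    }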
The following advice assumes that you're developing a serious application that you intend for others to use. This isn't for little demo apps or whatever.
In general, I would advise against explicitly restricting your code to 4.x. While 4.x adoption increases every day (there are two hardware generations from both NVIDIA and AMD with 4.x support, a third from AMD will likely be out by year's end, and AMD is starting to embed 4.x-capable GPUs in their CPUs now), there is still a lot of 3.x hardware out there. 4.x doesn't buy you a whole lot, and you can easily add code paths to conditionally support 4.x features where they are available.
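Such a conditional 4.x path can be as simple as a runtime branch; a sketch with GLEW and GLFW (the tessellation feature is just my illustrative example, not something this answer prescribes):

    #include <cstdio>
    #include <GL/glew.h>   // must come before the GL/context headers
    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return 1;
        GLFWwindow *win = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);
        if (!win) return 1;
        glfwMakeContextCurrent(win);
        if (glewInit() != GLEW_OK) return 1;

        // Pick a code path at runtime instead of hard-requiring 4.x.
        if (GLEW_VERSION_4_0 && GLEW_ARB_tessellation_shader) {
            std::printf("using the 4.x renderer path\n");
        } else {
            std::printf("using the 3.x fallback path\n");
        }
        glfwTerminate();
        return 0;
    }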
In order to use OpenGL 3.x you need a card that supports DirectX10 and proper drivers that have support for it.
The advantage over DirectX is that you can also use OpenGL 3 and 4 on Windows XP; no need for Vista or 7.
Which version you should use depends on your audience. If your audience is gamers, go ahead and use 3. I wouldn't go 4-exclusive yet; DX11-class cards are still rare.
For a first look at how gamers use their computers and what hardware they have, Steam is a good source:
http://store.steampowered.com/hwsurvey
You can determine the version by running:
const GLubyte *version = glGetString(GL_VERSION); // needs a current GL context
A good OpenGL 3 tutorial:
http://arcsynthesis.org/gltut/
The OpenGL 3.3 SDK Reference:
http://www.opengl.org/sdk/docs/man3/
Hope this helps a bit :).
Lots of embedded Intel graphics are limited to 1.4 or 1.5.
Mac OSX is stuck on 2.1 I hear.
All Radeon and GeForce cards can do 3+ (may need a driver update).
And you can program with any version, but if your hardware doesn't support it, you'll end up testing under a software renderer (slow!).
On a side question: How can I tell what's the highest version of OpenGL my computer can run?
I'm answering the side question quoted above.
I came across the tool below; it's really complete and lets me see every OpenGL version my system currently supports (from 1.0 up to whatever it actually supports), as well as the extensions available for my system to use. Not only ARB, either; it covers NV, ATI, OES, etc.
http://www.realtech-vr.com/glview/download.html
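If you'd rather get the same list programmatically, on a 3.0+ context you can enumerate every extension string; a sketch (GLEW assumed for the glGetStringi entry point, GLFW for the context):

    #include <cstdio>
    #include <GL/glew.h>
    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return 1;
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
        GLFWwindow *win = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
        if (!win) return 1;
        glfwMakeContextCurrent(win);
        if (glewInit() != GLEW_OK) return 1;

        std::printf("version: %s\n", (const char *)glGetString(GL_VERSION));

        // List every extension (ARB, NV, ATI, OES, ...) the driver exposes.
        GLint n = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &n);  // needs a GL 3.0+ context
        for (GLint i = 0; i < n; ++i)
            std::printf("%s\n", (const char *)glGetStringi(GL_EXTENSIONS, i));

        glfwTerminate();
        return 0;
    }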
Can I expect users to be able to run software that uses OpenGL 3.x?
Can Linux users who have open-source graphics drivers run OpenGL 3.x? I know that Mesa3D 7.8 only supports OpenGL 2.1.
I also know that OS X Snow Leopard supports some but not all OpenGL 3.0 features. I don't know the situation on Leopard.
I don't know the situation on XP, Vista, and Windows 7.
I'd like to start learning OpenGL, and my interest lies more in scientific and engineering applications than games. I know I'll be reading code that uses OpenGL 1.x, but I'd like to write code using the newest specification I can expect users' systems to support. I'm wondering whether I should start learning 2.1 or 3.3. I was thinking of getting either the 4th edition of the OpenGL Superbible to learn 2.1, or the 5th edition, which is coming out July 30, to learn 3.3. (I have a bachelor's in physics, so my math background is pretty good.)
Edit: I found this related question with answers that are relevant to my question.
As Martin Beckett already pointed out, the situation is really rather bad as far as support for OpenGL 3.x is concerned. Many "modern" graphic chipsets widely used in notebooks (yes, Intel, I'm looking at you) do not even fully support OpenGL 2.x; some even lack features as old as multisampling.
The only way to make your software run on as many systems as possible is to use something like GLEW to decide which features to use at runtime (i.e. no need for conditional compilation).
As far as learning OpenGL is concerned, 2.1 is definitely a good choice, because it enables you to understand both older code using the fixed-function pipeline and more modern code relying on shaders. Afterwards, getting to grips with the most important 3.x features (e.g. frame buffer objects, vertex array objects) will be rather easy.
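As a concrete example of that runtime detection, here is a sketch of picking a framebuffer-object code path with GLEW (the helper and enum names are mine, purely illustrative):

    #include <GL/glew.h>

    enum class FboPath { Arb, Ext, None };

    // Call once after glewInit() has run on a current context.
    FboPath pickFboPath() {
        if (GLEW_ARB_framebuffer_object) return FboPath::Arb; // GL 3.0-style FBOs
        if (GLEW_EXT_framebuffer_object) return FboPath::Ext; // older EXT variant
        return FboPath::None; // no render-to-texture; fall back to glCopyTexSubImage2D
    }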
I can happily inform you that the open source drivers now officially fully support OpenGL 3.0, and Intel will be supporting OpenGL 3.1 as of the next release of Mesa, now renumbered to Mesa 9.0. Official OpenGL 3.0 support was added as of Mesa 8.0.
The Intel OpenGL support for Windows is currently at 4.0, so that shouldn't be a problem for you.
Regarding AMD and NVidia support, there is full OpenGL 4.3 support in both closed source drivers, on both Windows and GNU/Linux. Regarding the open source drivers, Radeon will officially be bumped to OpenGL 3.0 support as of Mesa 9.0, combined with the 3.6.0 kernel release.
It is probably worth mentioning that the drivers support subsets of OpenGL 3.2/3.3/4.0/4.1/4.2/4.3, but the "supported version" can't be bumped until ALL features of a version are implemented. Please see the official documentation for more detailed information.
These are exciting times for OpenGL!
The Windows XP drivers for Intel's GMA 950 only support OpenGL 1.4, sans GL_EXT_framebuffer_object. Oddly enough on the same exact hardware (a Mac mini) both Linux and OSX manage to support GL_EXT_framebuffer_object.
I don't know the situation on XP, Vista, and Windows 7.
Bad: most cards claim to support OpenGL 2.0 or 2.1, but unless they are NVIDIA, don't expect any features beyond 1.1 to actually work.
IIRC Windows Vista/7 supports OpenGL 1.1 in software, or 1.4 through a DirectX wrapper. The graphics driver is free to support whatever it wants, but except for NVIDIA the quality is poor.
As long as you are only writing scientific and engineering applications, I would suggest using modern OpenGL. Normally an engineer can afford to buy a modern computer with a nice graphics card if he needs it. In science, the application often only needs to run on one computer, so compatibility with old machines shouldn't be your biggest concern, while being forward-compatible with new hardware is never a wrong decision.
Writing a game is very different. There it is very important to maximize the audience so that you can sell the maximum number of copies. Requiring too many resources would shrink the target audience considerably.