DRM-based computing in Blender - drmaa

Can Blender 2.79 or 2.80 (Linux versions) take advantage of multiple GPU cards? If so, how many on the same motherboard?
I have hundreds of GPU cards and motherboards available. I am looking for the correct Linux drivers for RX 580 series Radeon GPUs that will work with Blender.
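Cycles can render on several GPUs at once in both 2.79 and 2.80, and the practical limit is usually how many cards the motherboard, PCIe lanes and power supply can host rather than anything inside Blender. For RX 580 cards on Linux you need the AMDGPU kernel driver plus an OpenCL runtime (ROCm, or the OpenCL component of AMDGPU-PRO), because Cycles drives AMD cards through OpenCL in those versions. A minimal sketch, assuming Blender 2.80 and a working OpenCL stack, that enables every detected GPU for Cycles (the file name enable_gpus.py is just an example):

    # Minimal sketch, assuming Blender 2.80; run it inside Blender, e.g.
    #   blender --background --python enable_gpus.py
    # In 2.79 the same settings live under bpy.context.user_preferences.
    import bpy

    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'OPENCL'  # RX 580 cards appear as OpenCL devices; use 'CUDA' for Nvidia
    prefs.get_devices()                   # refresh the detected device list

    for device in prefs.devices:
        # Enable every GPU Cycles can see; leave plain CPU entries disabled.
        device.use = (device.type != 'CPU')
        print(device.name, "->", "enabled" if device.use else "skipped")

    # Render the current scene on the devices enabled above.
    bpy.context.scene.cycles.device = 'GPU'

Cycles then splits each frame into tiles across all enabled devices, so several RX 580s in one box are used automatically once the driver exposes them through OpenCL.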

Related

Utilizing multiple GPUs in my machine (Intel + Nvidia) - copying data between them

My machine has one Intel integrated GPU and one Nvidia GTX 1060.
I use the Nvidia GPU for object detection (YOLO).
Pipeline:
---Stream---> Intel GPU (decode) ----> Nvidia GPU (YOLO) ----> Renderer
I want to utilize both of my GPUs: one for decoding frames (hardware acceleration via ffmpeg) and the other for YOLO. (Nvidia restricts the number of streams you can decode at one time to 1, but I don't see such a restriction with Intel.)
Has anyone tried something like this? Any pointers on how to do inter-GPU frame transfer?
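One way to prototype this split, offered as a minimal sketch rather than a complete answer: let ffmpeg do the decode on the Intel GPU via VAAPI and pipe raw frames into your process, where the Nvidia-side detector picks them up. Assumptions in the sketch: Linux, the VAAPI render node at /dev/dri/renderD128, a 1920x1080 input called stream.mp4, and run_yolo() as a placeholder for whatever framework drives the Nvidia card.

    # Minimal sketch: the Intel GPU decodes via VAAPI, raw frames are piped back
    # to the process, and a detector running on the Nvidia GPU consumes them.
    import subprocess
    import numpy as np

    WIDTH, HEIGHT = 1920, 1080
    FRAME_BYTES = WIDTH * HEIGHT * 3  # bgr24

    cmd = [
        "ffmpeg",
        "-hwaccel", "vaapi",
        "-hwaccel_device", "/dev/dri/renderD128",  # Intel GPU
        "-i", "stream.mp4",
        "-f", "rawvideo", "-pix_fmt", "bgr24",     # decoded frames come back to system RAM
        "pipe:1",
    ]

    def run_yolo(frame):
        """Placeholder: upload the frame to the Nvidia GPU and run detection there."""
        pass

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    while True:
        raw = proc.stdout.read(FRAME_BYTES)
        if len(raw) < FRAME_BYTES:
            break
        frame = np.frombuffer(raw, dtype=np.uint8).reshape(HEIGHT, WIDTH, 3)
        run_yolo(frame)
    proc.stdout.close()
    proc.wait()

Note that this round-trips every decoded frame through system memory. A true GPU-to-GPU transfer (keeping the decoded surface on the Intel device and importing it on the Nvidia one) needs interop machinery such as VAAPI/DMA-BUF export on one side and CUDA external memory import on the other, which is considerably more involved.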

Same Direct2D application performs better on a "slower" machine

I wrote a Direct2D application that displays a certain number of graphics.
When I run this application it takes about 4 seconds to display 700,000 graphic elements on my notebook:
Intel Core i7 CPU Q 720 1.6 GHz
NVIDIA Quadro FX 880M
According to the Direct2D MSDN page:
Direct2D is a user-mode library that is built using the Direct3D 10.1
API. This means that Direct2D applications benefit from
hardware-accelerated rendering on modern mainstream GPUs.
I was expecting that the same application (without any modification) should perform better on a different machine with better specs. So I tried it on a desktop computer:
Intel Xeon(R) CPU 2.27 GHz
NVIDIA GeForce GTX 960
But it took 5 seconds (1 second more) to display the same graphics (same number and type of elements).
I would like to know how this is possible and what the causes might be.
It's impossible to say for sure without measuring. However, my gut tells me that melak47 is correct: there is no lack of GPU acceleration, it's a lack of bandwidth. Integrated GPUs have access to the same memory as the CPU, so they can skip the step of transferring bitmaps and drawing commands across the bus to dedicated graphics memory.
With a primarily 2D workload, any GPU will spend most of its time waiting on memory. In your case, the integrated GPU has an advantage. I suspect the extra second you see is your GeForce waiting on graphics data coming across the motherboard bus.
But you could profile and enlighten us.
Some good points in the comments and other replies (I can't add a comment yet).
Your results don't surprise me, as there are some differences between your two setups.
Let's have a look here: http://ark.intel.com/fr/compare/47640,43122
A shame we can't see the SSE version supported by your Xeon CPU; those are often used for code optimization. Is the model I chose for the comparison even the right one?
No integrated GPU in that Core i7, but 4 cores + Hyper-Threading = 8 threads, against 2 cores with no Hyper-Threading for the Xeon.
Quadro cards are strong at real-time rendering. As your scene seems to be quite simple, the Quadro could be well optimized for it, but that's just a guess - could someone with experience comment on that? :-)
So it's not so simple. What appears to be the better graphics card doesn't necessarily mean better performance. If you have a bottleneck somewhere else, you're stuck!
Since the difference is small, you need to compare every single element of your two setups: CPU, RAM, HDD, GPU, and motherboard (PCIe type and chipset).
So again, a lot of guessing; some tests are needed :)
Have fun and good luck ;-)

Do I need to have a compatible graphics card to develop with the latest version of OpenGL?

I want to write a program with OpenGL version 4. The currently installed version of OpenGL on my computer is 2.1.0. I looked for a way to install the latest version of OpenGL, but online articles say that the only way to update the OpenGL libraries is to update the graphics card driver.
I have a laptop with a Mobile Intel(R) 4 Series Express Chipset Family graphics card. The last driver update was released in 2010, and it looks abandoned.
Is it possible to write software targeting a high OpenGL version on a weak graphics card? I don't care if my program runs at a low FPS or is very sluggish on my hardware; I just want to know whether it is technically possible.
Your graphics card must support OpenGL 4 in order for you to develop with it. The hardware (graphics card) must be compatible with the OpenGL version you want to target, and the driver installed on your system must expose that version for the card.
Supported cards for OpenGL 4 (Wikipedia):
Nvidia GeForce 400 series, Nvidia GeForce 500 series, Nvidia GeForce 600 series, Nvidia GeForce 700 series, ATI Radeon HD 5000 series, AMD Radeon HD 6000 series, AMD Radeon HD 7000 series. Supported by Intel's Windows drivers for Haswell's integrated GPU.
In your case, your graphics card and driver only allow OpenGL 2.1.
Nowadays almost any graphics card in the 40-50 euro range can run OpenGL 4 (but swapping the GPU in a laptop is usually not possible).
For more information, check Wikipedia and Nvidia.
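As a quick sanity check, you can query from code which OpenGL version your current card and driver actually expose. A minimal sketch, assuming the Python packages glfw and PyOpenGL are installed (any language with GL bindings works the same way):

    # Minimal sketch: print the OpenGL vendor, renderer and version the driver exposes.
    import glfw
    from OpenGL.GL import glGetString, GL_VENDOR, GL_RENDERER, GL_VERSION

    if not glfw.init():
        raise RuntimeError("GLFW initialization failed")
    glfw.window_hint(glfw.VISIBLE, glfw.FALSE)  # hidden window, we only need a context
    window = glfw.create_window(64, 64, "version probe", None, None)
    if not window:
        glfw.terminate()
        raise RuntimeError("could not create an OpenGL context")
    glfw.make_context_current(window)
    print("Vendor:  ", glGetString(GL_VENDOR).decode())
    print("Renderer:", glGetString(GL_RENDERER).decode())
    print("Version: ", glGetString(GL_VERSION).decode())
    glfw.terminate()

On the Mobile Intel 4 Series chipset this should report 2.1, matching the ceiling described above.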

OpenGL low performances on my computer

We began learning OpenGL at school and, in particular, implemented an .obj mesh loader. When I run my code at school with quite heavy meshes (4M up to 17M faces), I have to wait a few seconds for the mesh to load, but once it is done, I can rotate and move around the scene with perfect fluidity.
I compiled the same code at home, and I get very low performance when moving around a scene where heavy meshes are displayed.
I'm using OpenGL 3.0 Mesa 10.1.3 (this is the output of cout << glGetString(GL_VERSION) << endl) and compiling with g++-4.9. I don't remember the version numbers at my school, but I'll update my message as soon as possible if needed. Finally, I'm on Ubuntu 14.04, my graphics card is an Nvidia GeForce 605, my CPU is an Intel(R) Core(TM) i5-2320 @ 3.00GHz, and I have 8 GB of RAM.
If you have any idea why it runs so slowly on a fairly good computer (certainly not a racehorse, but good enough for this) and how to fix it, please tell me. Thanks in advance!
TL;DR: You're using the wrong driver. Install the proprietary, closed-source binary drivers from NVidia and you'll get very good performance. Also, with a GeForce 605 you should get some OpenGL 4.x support.
I'm using OpenGL 3.0 Mesa 10.1.3
(…)
my graphics card is an Nvidia GeForce 605
That's your problem right there. The open source "Nouveau" drivers for NVidia GPUs that are part of Mesa are a very long way from offering any kind of reasonable hardware acceleration. This is because NVidia doesn't publish openly available documentation on its GPUs' low-level programming interfaces.
So at the moment the only option for getting hardware-accelerated OpenGL on your GPU is to install NVidia's proprietary drivers. They are available on NVidia's website; however, since your GPU isn't "bleeding edge", I recommend you use the drivers installable through the package manager; you'll have to add a "nonfree" package source repository, though.
This is in stark contrast to AMD GPUs, which have full, openly accessible documentation. Because of that, the Mesa "radeon" drivers are quite mature: full OpenGL 3.3 core support, with performance good enough for most applications, in some cases even outperforming AMD's proprietary drivers. OpenGL 4 support is a work in progress for Mesa as a whole, and last time I checked, the "radeon" drivers' development was actually moving at a faster pace than the Mesa OpenGL state tracker itself.
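A quick way to confirm which driver is actually active, before and after switching, is to look at the vendor and renderer strings the current GL stack reports, for example via glxinfo. A minimal sketch, assuming glxinfo (Ubuntu package mesa-utils) is installed:

    # Minimal sketch: print which OpenGL driver is currently active.
    import subprocess

    out = subprocess.run(["glxinfo"], capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line.startswith(("OpenGL vendor string",
                            "OpenGL renderer string",
                            "OpenGL version string")):
            print(line)

    # A Mesa/Gallium renderer means the open source (nouveau) path is in use;
    # a vendor of "NVIDIA Corporation" means the proprietary driver is active.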

CUDA support for NVIDIA Tegra 4 processors?

I could not find anything on the use of CUDA on Tegra processors, even though they provide quite a lot of SIMD cores (~72).
It seems that NVIDIA currently focuses its Tegra development efforts on the Tegra development kit (based on Android).
So my question is:
"Is it possible to use CUDA (or OpenCL) on Tegra 4 or its predecessors, and if so, what version is supported?"
We were confused by the news articles too. We have since learned the following:
CUDA is not supported on Tegra 4, according to this tweet (also here) by SO user "harrism", who works for NVIDIA. It is anticipated for a future Tegra version (per the same tweet).
OpenCL is not supported on Tegra.
OpenGL ES 2 shaders have always been supported on Tegra, and here are some Tegra 2 and Tegra 3 demos with these shaders from our previous work at AccelerEyes.
We're looking forward to running our stuff on the 72 GPU cores, though, using our ES 2 shaders. Awesome chip.
Cheers!
As pointed out correctly, Tegra 2/3/4 do not support CUDA. Logan will be the first Tegra to support CUDA and OpenCL.
However, Nvidia is already trying to get people started with CUDA on Tegras; at the moment you can use a Tegra 3 plus an Nvidia PCIe graphics card.
There are a few development boards supporting that:
Nvidia Jetson: http://www.nvidia.com/object/jetson-automotive-developement-platform.html
Nvidia Kayla
Toradex Apalis: http://www.toradex.com/products/apalis-arm-computer-modules/apalis-t30
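Once one of those boards (or any machine with a CUDA-capable PCIe card) is running, a quick way to see which devices the CUDA runtime actually exposes is to enumerate them; the deviceQuery sample from the CUDA toolkit reports the same information. A minimal sketch, assuming the pycuda package and an installed CUDA driver:

    # Minimal sketch: list the CUDA devices the driver exposes.
    import pycuda.driver as cuda

    cuda.init()
    for i in range(cuda.Device.count()):
        dev = cuda.Device(i)
        major, minor = dev.compute_capability()
        print(f"Device {i}: {dev.name()} (compute capability {major}.{minor})")

On a Tegra 3 plus PCIe card setup only the discrete card will show up, since the Tegra GPU itself has no CUDA support.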