I have installed Autodesk's Fusion 360 on a Windows virtual machine running on an Ubuntu host. It all works fine except for rendering: all textures are rendered in random colours.
For instance, on a regular Windows machine this part is rendered nicely with a grey aluminium texture, but on the virtual machine I get this:
I guess this is related to the way graphics are handled by the virtual machine. I followed the instructions in this thread and installed the guest additions plus Direct3D support on the virtual machine, but I could not get the rendering to work properly.
I have not tried PCIe passthrough yet, but it seems a bit overkill, and since there is no guarantee that it would solve my problem I would like to find an easier solution first.
Has anyone faced this kind of problem before? Does anyone have an idea of what I could try in order to solve it?
Hardware
Asus X99E-WS motherboard with 64 GB of RAM
ZOTAC GeForce GTX TITAN X graphics card (NVIDIA driver 352.63)
Host machine
Ubuntu 14.04
VirtualBox 5.0.10 (r104061)
Virtual machine
Windows 10 with 8 GB of allocated RAM
Guest additions installed
Direct3D support enabled
2D and 3D acceleration enabled
According to this website, the minimum specification for the application you want to use in your virtual machine includes:
Graphics Card: 512 MB GDDR RAM or more, except Intel GMA X3100 cards
As far as I know (please provide your VM's graphics card RAM), VirtualBox supports up to 128 MB of video RAM in most cases, and in some cases you can increase it to 256 MB (I haven't tried it myself, though).
With my limited knowledge of this topic, I don't think there is a way to go higher than that. But if you find a way to increase the VRAM to 512 MB, I think that would solve your problem.
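If I remember correctly, the VirtualBox GUI slider stops at 128 MB, but the VBoxManage command line accepts values up to 256 MB while the VM is powered off, for example:

    VBoxManage modifyvm "YourVMName" --vram 256

("YourVMName" is a placeholder for your VM's name.) Even that still falls short of the 512 MB in the spec quoted above, though.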
I think you should try a different virtualization product. I'm not sure, but according to this website, VMware Horizon 6 (unfortunately not free, but available for your Linux machine) does support 3D rendering with graphics RAM up to 512 MB:
For virtual hardware version 9 (vSphere 5.1) and 10 (vSphere 5.5 Update 1) virtual machines, the default VRAM size is 96 MB, and you can configure a maximum size of 512 MB.
This question already has answers here: Forcing NVIDIA GPU programmatically in Optimus laptops (2 answers)
I'm using a Surface Book 2 and Visual Studio. I'm writing an OpenGL application and I noticed that it defaults to the integrated Intel GPU rather than the discrete NVIDIA GPU that is also in the laptop.
I know that I can use the NVIDIA control panel to set the NVIDIA GPU as the default, but the base setting is to "let the application choose" (I understand that the purpose of this setting is to save battery when the better GPU is not needed). I am trying to find a way that I can choose the GPU in my application without manually changing settings in the NVIDIA control panel.
I looked around, and it sounds like OpenGL does not offer any method for choosing between different GPUs (which is very surprising to me). Is there any way I can select which GPU I want without using a different API and without changing the settings in the NVIDIA control panel?
Find the executable generated by Visual Studio and assign your preferred GPU to that program in the NVIDIA control panel.
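If you want to request the discrete GPU from inside the program itself instead, both vendors look for an exported global in the executable. A minimal sketch of that approach, based on NVIDIA's Optimus rendering guidelines and AMD's switchable-graphics equivalent (it has to be exported from the .exe itself, not from a DLL):

    #include <windows.h>

    // Exporting these globals from the executable asks the vendor drivers to
    // prefer the discrete GPU over the integrated one when creating the context.
    extern "C" {
        __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;         // NVIDIA Optimus
        __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;   // AMD PowerXpress
    }

Put this in any translation unit of the application; the OpenGL context created afterwards should then come up on the NVIDIA GPU.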
On the Oculus Rift website it is stated that the minimum system requirements for the Oculus Rift are an NVIDIA GTX 970 / AMD R9 290 equivalent or greater. I am aware that the Quadro M1000M does not meet those requirements.
My intention is to use the Oculus Rift for developing educational applications (visualization of molecular structures), which in terms of computational demand do not even come close to modern games.
For the above-mentioned kind of purpose, would the Oculus Rift run fine on less powerful GPUs (i.e. the Quadro M1000M) or is the driver developed in such a way that it simply "blocks" cards that do not meet the required specifications?
Further information:
I intend to develop my application on Linux using GLFW in combination with LibOVR, as described in this guide: http://www.glfw.org/docs/3.1/rift.html.
edit
It was pointed out that the SDK does not support Linux. So as an alternative option, I could also use Windows / Unity.
Any personal experiences on the topic are highly appreciated!
Consumer Oculus Rift hardware has not been reverse engineered to the point where you can use it without the official software, which currently only supports Windows-based desktop systems with one of a specific set of supported GPUs. It will not function on any mobile GPU, nor on any non-Windows OS. Plugging the HMD into the display port on a system where the Oculus service isn't running will not result in anything appearing on the headset.
The Oculus DK2 and DK1 can both be made to function on alternative operating systems and with virtually any graphics card, since when connected they are detected by the OS as just another monitor.
Basically your only path is to either use older HMD hardware, wait for Oculus to support other platforms, or wait for someone to reverse engineer the interaction with the production HMD hardware.
To answer my own question (I hope that's OK), I bought an Oculus Rift CV1. It turns out it runs smoothly on my HP ZBook G3, which has a Quadro M1000M card in it. Admittedly, the Oculus desktop application gives a warning that my machine does not meet the required specifications, and indeed, if I render a scene with lots of complicated graphics and turn my head, the visuals tend to 'stutter' a bit.
I tested a couple of very simple scenes in Unity 5 and these run without any problems. I would say that the above-mentioned hardware is perfectly suitable for the kind of educational purposes I had in mind, just nothing extremely fancy.
As @SurvivalMachine mentioned in the comments, Optimus can be a bit problematic, but this is resolved by turning hybrid graphics off in the BIOS (which I heard is possible for the HP ZBook series, but not for all laptops). Furthermore, the laptop needs to be connected to a power outlet (i.e. not run on its battery) for the graphics card to work properly with the Oculus Rift.
Possible duplicate: How does OpenGL work at the lowest level?
When we write a program that uses the OpenGL library, for example for the Windows platform, on a machine whose graphics card supports OpenGL, what happens is roughly this:
We develop our program in a programming language, making the graphics calls through OpenGL (e.g. in Visual C++).
We compile and link the program for the target platform (e.g. Windows).
When we run the program, since the graphics card supports OpenGL, the driver installed on Windows is responsible for managing the graphics: the CPU sends the required data to the chip on the graphics card (e.g. an NVIDIA GPU), which draws the result.
In this context we talk about graphics acceleration: the work of computing the final framebuffer of our graphical representation is offloaded from the CPU.
In this environment, when the GPU driver receives the data, how does it leverage the capabilities of the GPU to accelerate the drawing? Does it translate the received instructions and data into something like CUDA to exploit the GPU's parallelism, or does it just copy the data received from the CPU into specific areas of device memory? I don't quite understand this part.
Finally, if I had a card that did not support OpenGL, would the driver installed on Windows detect the problem? Would I get an error, or would the framebuffer be computed on the CPU instead?
You'd do better to look at computer gaming sites. They frequently publish articles on how 3D graphics works and how "artefacts" present themselves when there are errors in games or drivers.
You can also read articles on the architecture of 3D libraries like Mesa or Gallium.
Overall, drivers have a set of methods for implementing each piece of functionality of Direct3D, OpenGL, or another standard API. When they load, they check the hardware. You can have a cheap video card or an expensive one, a recent one or one released three years ago; that is different hardware. So the driver tries to map each API feature to an implementation that can be used on the given computer: accelerated by the GPU, accelerated on the CPU (e.g. with SSE4), or even some more basic implementation.
The driver then tries to estimate the GPU load. Sometimes a function could be accelerated, yet the GPU (especially a low-end one) is already overloaded by other work, in which case the driver may compute it on the CPU instead of waiting for a GPU time slot.
When you make a mistake, there are several possible outcomes, depending on the intelligence and quality of the driver:
Maybe the driver fixes the error for you, ignoring your commands and running its own set of them instead.
Maybe the driver returns an error code to your program.
Maybe the driver executes the command as is. If you asked for painting with a red colour instead of green, that is an error, but the kind the driver cannot know about. Search for "3D artefacts" on PC gaming sites.
In the worst case, your error interferes with a bug in the driver and your computer crashes and reboots.
Of course, all those adaptive strategies are rather complex and indeterminate, which is part of why 3D drivers are closed and the know-how of their internals is closely guarded.
Search sites dedicated to 3D gaming, and perhaps also to 3D modelling; they rate video cards ("which one is better to buy"), and when they review new chip families they sometimes write rather detailed essays about the technical internals of all this.
To question 5.
Some of the things that a driver does: it compiles your GPU programs (vertex, fragment, etc. shaders) to the machine instructions of the specific card, uploads the compiled programs to the appropriate area of device memory, and arranges for the programs to be executed in parallel on the many graphics cores on the card.
It uploads the graphics data (vertex coordinates, textures, etc.) to the appropriate type of graphics card memory, using various hints from the programmer, for example whether the data is updated frequently, infrequently, or not at all.
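As a concrete illustration of that "update frequency" hint, this is roughly what it looks like on the OpenGL side (a sketch assuming a context and function loader are already set up):

    // Static geometry: uploaded once, drawn many times.
    GLfloat vertices[] = { -0.5f, -0.5f, 0.0f,   0.5f, -0.5f, 0.0f,   0.0f, 0.5f, 0.0f };
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Data rewritten every frame would pass GL_DYNAMIC_DRAW or GL_STREAM_DRAW instead,
    // which lets the driver place the buffer in memory it can update cheaply.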
It may utilize special units in the graphics card for transferring data to/from host memory; for example, some NVIDIA cards have a DMA unit (some Quadro cards may have two or more), which can upload, say, textures in parallel with the usual driver operation (other transfers, drawing, etc.).
I want to write an application for digital signage, but I want it to run in a minimal environment, so I don't want an X11 server. Is it possible to run an OpenGL app without X11 (or, failing that, with some other graphics library that offers at least 2D drawing)?
One way is via the Mesa off-screen rendering API. Be aware that this will most likely be unaccelerated.
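For reference, a minimal sketch of the OSMesa route (link against libOSMesa; everything is rendered in software into a plain client-side buffer, no display connection needed):

    #include <GL/osmesa.h>
    #include <GL/gl.h>
    #include <vector>

    int main() {
        const int width = 640, height = 480;
        // Create a software-rendered context with an RGBA colour buffer.
        OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
        if (!ctx) return 1;

        // The colour buffer lives in ordinary client memory; no X11 display is involved.
        std::vector<unsigned char> buffer(width * height * 4);
        if (!OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, width, height))
            return 1;

        // Ordinary GL calls now render into 'buffer'.
        glClearColor(0.0f, 0.0f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glFinish();

        // ... write 'buffer' out as an image or copy it to the signage display ...

        OSMesaDestroyContext(ctx);
        return 0;
    }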
If you just don't want X11 and you're willing to use OpenGL ES then Wayland and corresponding Gallium drivers would get you hardware acceleration.
I'm working on a very similar project. Since for me the need to run OpenGL without an X server was primarily performance-based, I opted instead to install Damn Small Linux to a flash drive along with the program I wrote. Damn Small Linux is tiny (50 MB for the entire OS), and since it's designed to run on low-spec hardware (it can run on a Pentium 1 with 16 MB of RAM) it uses a minimal amount of system resources. I just run my application on top of Damn Small Linux, and it performs extremely well.
What tools, APIs, and libraries are out there that I could use to create a system capable of rendering hi-res 3D scenes in real time on a display made of 4, 8, 9, 16, etc. screens/projectors? For a setup with 8 projectors, should I go for a clustered solution, or should I stay with a single node featuring 4 dual-headed video cards? Does anyone have experience with that?
Equalizer is probably one of the better solutions you'll find.
It's specifically designed for splitting renders apart and distributing them across displays.
Description:
Equalizer allows the user to scale rendering performance, visual quality and display size. An Equalizer-based application runs unmodified on any visualization system, from a simple workstation to large scale graphics clusters, multi-GPU workstations and Virtual Reality installations.
Example Usage of Equalizer:
(source: equalizergraphics.com)
I've worked on projects that tried to do similar things without Equalizer, and I can honestly say it was pretty bad; we only barely got it working. Having found Equalizer later, I can only imagine how much easier it would have been with such a tool.
You can use Xinerama or XRandR when working with X11/Xorg. But to quote Wikipedia on Xinerama:
In most implementations, OpenGL (3D) direct rendering only works on one of the screens. Windows that should show 3D graphics on other screens tend to just appear black. This is most commonly seen with 3D screen savers, which show on one of the screens and black on the others. (The Solaris SPARC OpenGL implementation allows direct rendering to all screens in Xinerama mode, as does the nvidia driver when both monitors are on the same video card.)
I suggest you read the Wikipedia article first.
You should have a look at the "AMD Radeon HD 5870 Eyefinity 6-edition" graphics card. It supports output to six displays simultaneously and lets you set several options in the driver regarding the arrangement of the outputs (3 in a row, 2x3 horizontal/vertical, etc.).
Regarding APIs: with a card like this (but also with a TripleHead2Go) you get a single virtual canvas, which supports full 3D acceleration without performance loss (much better than with an extended desktop). AMD calls this a Single Large Surface (probably equivalent to what NVIDIA calls a horizontal/vertical span). The caveat is that all outputs need to have the same resolution, frame rate, and colour depth. Such a surface could have a resolution of 5760 x 3240 or higher, depending on settings, so it's a good thing the 5870 is so fast.
Then, in your application, you render to this large virtual canvas (using OpenGL, Direct3D or some other way) and you're done... Except that you did not say if you were going to have the displays at an angle to each other or in a flat configuration. In the latter case, you can just use a single perspective camera and render to the entire backbuffer. But if you have more of a 'surround' setup, then you need to have multiple cameras in your scene, all looking out from the same point.
The fastest loop to do this rendering is then probably:
    for all objects
        set textures, shader and render state
        for all viewports
            render object to viewport
and not
    for all viewports
        for all objects
            set textures, shader and render state
            render object to viewport
because switching objects causes the GPU to lose much more useful information from its state and caches than switching viewports.
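In plain OpenGL terms, the object-outer version could look roughly like this (a sketch assuming a modern context and a loader such as GLEW; the struct fields and the uViewProj uniform name are only illustrative):

    #include <GL/glew.h>
    #include <vector>

    // Illustrative per-object and per-viewport data.
    struct Object   { GLuint shader, texture, vao; GLsizei indexCount; };
    struct Viewport { GLint x, y; GLsizei width, height; GLfloat viewProj[16]; };

    void renderFrame(const std::vector<Object>& objects,
                     const std::vector<Viewport>& viewports)
    {
        for (const Object& obj : objects) {
            // Expensive state changes happen once per object...
            glUseProgram(obj.shader);
            glBindTexture(GL_TEXTURE_2D, obj.texture);
            glBindVertexArray(obj.vao);
            GLint loc = glGetUniformLocation(obj.shader, "uViewProj");

            for (const Viewport& vp : viewports) {
                // ...while switching the viewport and camera is comparatively cheap.
                glViewport(vp.x, vp.y, vp.width, vp.height);
                glUniformMatrix4fv(loc, 1, GL_FALSE, vp.viewProj);
                glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, nullptr);
            }
        }
    }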
You could contact AMD to check if it's possible to add two of these cards (power-supply permitting) to a single system to drive up to 12 displays.
Note that not all configurations are supported (e.g. 5x1 is not, as I read from the FAQ).
A lot of my experience with this was gathered during the creation of the Future Flight Experience project, which uses three projectors (each with its own camera in the 3D scene), two NVIDIA GTX 280s in SLI, and a Matrox TripleHead2Go on Windows XP.
I use one of these nifty TripleHead2Go's at home on my gaming rig to drive 3 displays from one video card (even in Vista). Two displays with a bezel in the middle is kinda a bummer for gaming.
(source: maximumpc.com)
I found out about them because we were looking at using several of them at work to drive a system of ours that has about 9 displays. I think for that we ended up going with a system with 5 PCI-X slots and a dual-head card in each. If you have trouble getting that many PCI slots on a motherboard, there are PCI-X bus expansion systems available.
I know that the pyglet OpenGL wrapper (http://www.pyglet.org) for Python has multi-platform multi-monitor support; you might want to look at its source code and figure out how it is implemented.
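If you end up doing this from C/C++ rather than Python, GLFW exposes the same idea; a rough sketch that opens one fullscreen window per attached monitor (error handling omitted):

    #include <GLFW/glfw3.h>
    #include <vector>

    int main() {
        if (!glfwInit()) return 1;

        int count = 0;
        GLFWmonitor** monitors = glfwGetMonitors(&count);  // every attached display

        std::vector<GLFWwindow*> windows;
        for (int i = 0; i < count; ++i) {
            const GLFWvidmode* mode = glfwGetVideoMode(monitors[i]);
            // One fullscreen window per monitor, sharing GL objects with the first window.
            GLFWwindow* share = windows.empty() ? nullptr : windows.front();
            windows.push_back(glfwCreateWindow(mode->width, mode->height,
                                               "display wall", monitors[i], share));
        }

        // ... per window: glfwMakeContextCurrent(win); draw; glfwSwapBuffers(win); ...

        glfwTerminate();
        return 0;
    }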