There was a popular article a few years back titled OpenGL 3 & DirectX 11: The War Is Over, and then there was another titled OpenGL vs DirectX: The War Is Far From Over.
I would like to know where things stand today. What is the future of OpenGL compared to DirectX? Is OpenGL catching up with the latest specifications?
There is also a more recent article, Return of the DirectX vs. OpenGL Debates, but it doesn't clearly address the questions I asked.
What is the future of OpenGL compared to DirectX?
By the numbers, OpenGL, and especially its embedded variant OpenGL ES, clearly dominates the market. Practically every smartphone sold these days (except for Windows Phone devices) relies on OpenGL for its graphics output.
Valve's strong push into the Linux market with its Steam Machines gives OpenGL another boost, and other game makers and vendors are following in their footsteps, with many triple-A game engines getting Linux ports these days.
Is OpenGL catching up with the latest specifications?
These days OpenGL tends to be ahead of hardware development. The latest OpenGL specification is OpenGL 4.4, but most GPUs and drivers out there are still at OpenGL 4.3 (note that OpenGL's major version number roughly corresponds to a hardware class).
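If you want to check what your own driver actually exposes, you can query it at runtime. A minimal sketch, assuming GLFW is used for context creation (any other windowing method works the same way):

    // Minimal sketch: print the GL version the driver actually provides.
    // Assumes GLFW for context creation; link against GLFW and OpenGL.
    #include <GLFW/glfw3.h>
    #include <cstdio>

    int main() {
        if (!glfwInit()) return 1;
        GLFWwindow* window = glfwCreateWindow(64, 64, "version probe", nullptr, nullptr);
        if (!window) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(window);

        // May report less than the latest published specification.
        printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
        printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));

        glfwDestroyWindow(window);
        glfwTerminate();
        return 0;
    }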
I have a shader that does some ray tracing. It used to take the scene information as uniforms, but that proved far too limited, so we switched to SSBOs (shader storage buffer objects). The code works perfectly on two computers but renders very slowly on a third.
The slow machine has a Radeon HD 6950; the machines that render correctly have a GTX 570 and a Radeon HD 7970. The scene displays correctly on all three computers, but the Radeon HD 6950 renders it very slowly (1 FPS when rotating around the scene). We thought it was an OpenGL version problem, but updating the drivers didn't help. Any idea where the problem might be?
There are a few possibilities:
1. You could be falling off the fast path on that particular card. Some aspect of your rendering may simply not be implemented as efficiently on the lower-end card.
2. You may be hitting the VRAM limit on the 6950 but not on the other two cards, in which case OpenGL is essentially thrashing, swapping resources out to main memory and back.
3. You may have triggered software rendering on that card: some specific OpenGL feature you're using may be implemented in software on the 6950 but hardware-accelerated on the other cards.
You don't say which OS you're working with, so I'm not sure what to suggest for debugging. On macOS you can use OpenGL Profiler to see whether the driver is falling back to software rendering, and OpenGL Driver Monitor to see whether it's paging out; on iOS, Xcode's OpenGL profiling instrument covers both. I can't speak to Windows or Linux, as I don't have experience with them.
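On any platform, since SSBOs already require OpenGL 4.3, you can also ask the driver itself for performance warnings through the debug-output mechanism that 4.3 introduced. A hedged sketch; it assumes a current debug context and a loader (glad, GLEW, ...) that exposes the 4.3 entry points:

    // Sketch: have the driver report performance warnings (e.g. software
    // fallbacks) through the GL 4.3 debug-output mechanism.
    // Assumes a current debug context and a loader exposing GL 4.3.
    #include <glad/glad.h>   // or any other loader with GL 4.3 support
    #include <cstdio>

    static void APIENTRY onGlMessage(GLenum source, GLenum type, GLuint id,
                                     GLenum severity, GLsizei length,
                                     const GLchar* message, const void* user) {
        // Performance messages are the interesting ones for "why is this slow?"
        if (type == GL_DEBUG_TYPE_PERFORMANCE)
            fprintf(stderr, "GL performance warning: %s\n", message);
    }

    void installDebugOutput() {
        glEnable(GL_DEBUG_OUTPUT);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  // fire inside the offending call
        glDebugMessageCallback(onGlMessage, nullptr);
    }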
I've recently started exploring the guts of VirtualBox's Guest Additions on my Ubuntu guest, mostly out of curiosity, and partly due to "OpenGL Warning: ... not found in mesa table" warnings. I noticed they are using the Chromium OpenGL implementation. I have a two-part question.
1. How do I get rid of those warnings? Are they indications of a larger problem? I'm noticing repaint issues, which led me down this path.
2. Am I missing something, or is this a 12-year-old project last touched 6 years ago? Is it being actively developed somewhere else? Will it support OpenGL 3?
Online references would be appreciated, as I'm having a hard time finding anything other than these:
http://sourceforge.net/p/chromium/discussion/stats
http://chromium.sourceforge.net/doc/index.html
The Chromium project has been basically dead since 2008 or so. There is no support for GL 3.x, and none is planned. Actually, implementing the main purpose of Chromium (application-transparent distributed rendering by manipulating the GL command stream) ranges from incredibly hard to outright impossible with the programmable pipeline and modern GL features.
I'm not really familiar with VirtualBox, but I am aware that they used parts of the Chromium project to implement hardware-accelerated guest GL simply by forwarding the GL command stream to the host. That task is much easier to adapt to modern GL, since no real stream manipulation is needed. But I'm not aware of how far they have come down that path, so consider this only half an answer to your question.
I'm just working my way into the Boost libraries, and there's one question stuck in my head: is multithreading, especially Boost's implementation and in the context of game development, still used? I understand the pros of threading, but I'm not sure whether it has become obsolete. If it hasn't, where is it used in game development?
Yes, you still need it. No, it is not obsolete; on the contrary, multithreading support is one of the most important and integral parts of modern game development, and almost every modern game engine supports it. Why? Because animation, physics, rendering, and resource loading can all run simultaneously; physics itself can be parallelized, and the same applies to paging. As for Boost, Ogre3D uses boost::thread for its multithreading (if you're an enthusiast, you're probably familiar with Ogre3D). Unreal Engine 3 runs its rendering thread and game-logic thread separately from the main application thread; moreover, UE 3.5 has Unreal Swarm as a job distribution system and Gemini as an ultra-fast HDR rendering pipeline. So yes, it makes sense.
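To make that concrete, here is a minimal, hypothetical sketch of the logic/render split using boost::thread; the loop bodies are placeholders rather than real engine code:

    // Hypothetical sketch: game logic and rendering on separate threads,
    // the pattern described above, using boost::thread.
    #include <boost/thread.hpp>
    #include <boost/chrono.hpp>
    #include <iostream>

    void gameLogicLoop() {                 // placeholder for real game logic
        for (int tick = 0; tick < 3; ++tick) {
            std::cout << "logic tick " << tick << "\n";
            boost::this_thread::sleep_for(boost::chrono::milliseconds(16));
        }
    }

    void renderLoop() {                    // placeholder for real rendering
        for (int frame = 0; frame < 3; ++frame) {
            std::cout << "render frame " << frame << "\n";
            boost::this_thread::sleep_for(boost::chrono::milliseconds(16));
        }
    }

    int main() {
        boost::thread logic(gameLogicLoop);  // runs concurrently with render
        boost::thread render(renderLoop);
        logic.join();
        render.join();
        return 0;
    }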
I want to start learning DirectX, and I have an extremely strong grasp of C++.
I have searched online and seen people recommend starting with DX9 and then working up through the later versions.
My question is: if I learn DX9, fully grasp it, and then move on to DX10 and DX11, will the similar code cause me to get mixed up when programming against the newer versions, or will it help me understand the API and use it better?
Honestly, there's not an awful lot of point in learning DX10. If you don't need hardware compatibility, go straight to DX11; if you do, stick with DX9. The hard part of graphics programming has little to do with the API: most of it is in programmable shaders and techniques that are not strongly tied to the API you're using.
Is it possible to do OpenGL development and run programs on a computer without a graphics card (e.g. my netbook running Ubuntu)?
Update: This was many years ago; the link is no longer available, and there are probably newer, better builds now.
Yes, you can use Mesa.
For your convenience, I've compiled it in both 32- and 64-bit at:
http://dl.dropbox.com/u/9496269/mesa.zip
Simply put the libraries in the same directory as your executable.
Sure. Many software-only implementations of OpenGL exist; check out the Mesa project at http://www.mesa3d.org/ for one of the most popular. Some parts of the shading language are not fully supported, and it tends to lag the standard a bit in general, but that is true of all software API emulators. It's still very full-featured and can be used in production code for many common uses.
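If you want to confirm at runtime that you actually landed on Mesa's software rasterizer rather than a hardware driver, the renderer string gives it away. A small sketch; the substrings checked ("llvmpipe" and "softpipe") are the names of Mesa's software backends:

    // Sketch: detect Mesa's software rasterizer from the renderer string.
    // Call with a current GL context; needs only core GL 1.0 entry points.
    #include <GL/gl.h>
    #include <cstring>

    bool isSoftwareRenderer() {
        const char* renderer =
            reinterpret_cast<const char*>(glGetString(GL_RENDERER));
        if (!renderer) return false;  // no current context
        return std::strstr(renderer, "llvmpipe") != nullptr ||
               std::strstr(renderer, "softpipe") != nullptr;
    }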
You can also use OpenGL on many integrated GPUs, for example AMD APUs like the Ryzen 3 3200G, whose integrated Vega graphics are capable enough for OpenGL development, at around £100.