Learning Curve in OpenGL vs Direct3D + other game components [closed] - opengl

Not to start a flame war....
but I am in chapter 4 of the OpenGL SuperBible and the book's organization is NOT helpful for a beginner.
I've also been doing a little research on the backstory of each API, and a lot of fingers point toward Direct3D as the easier API going forward, given its ability to actually progress, add new features, and get them out there.
Mainly, though, I'm wondering whether either has a steeper learning curve, since I'm finding OpenGL pretty hard to get going with using my book...
Should I switch to Direct3D if it's easier to learn, and maybe return to OpenGL later if it gets going and catches back up to Direct3D again?
By the way, I'm hoping to learn this stuff in the context of studying C++ for a year and future game programming, so I would also need to figure out how to get audio, AI, etc. set up later too.
Thanks!! =)

Should you switch to Direct3D? Ask yourself the following questions:
Do you want to be bound to developing for Windows only? [YES/NO]
Are you ready to invest time learning a new API design every time the major version of the API increases? [YES/NO]
Are you okay if new GPU generations' capabilities are available to you only with some major delay until the next major version of the API is released? [YES/NO]
If you answered all of the above with YES, then Direct3D is the right choice. Otherwise I recommend OpenGL:
OpenGL is available on all major operating systems and platforms, as well as on more obscure ones.
OpenGL's API design is astonishingly stable; the basic principles have not changed in over a decade. Take a look at the open-source game Nexuiz Classic, which is largely based on the DarkPlaces engine, an overhaul of the open-sourced Quake engine (written for OpenGL 1.1) that has been updated to support all the modern bells and whistles of shader-based rendering.
OpenGL's extension mechanism allows GPU vendors to give access to a new generation's capabilities on release day. With Direct3D you're forced to wait until the next version is released by Microsoft. When NVIDIA released the GeForce 8800 in 2006, it took half a year until Joe Average Developer could write Direct3D code for its new features, but through OpenGL extensions they were immediately accessible to OpenGL programs. I remember the day I bought my GeForce 8800 (4 days after the official release): I built it into my machine and started hacking away with geometry shaders the same evening.
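For illustration, here is a minimal sketch of how a pre-GL-3.0 program can check the driver's extension string before using a vendor feature. The context creation is assumed to have happened elsewhere (e.g. GLFW or SDL), and the extension name is just an example:

    #include <GL/gl.h>      // on Windows, include <windows.h> before this
    #include <cstring>

    // Returns true if the current driver advertises the named extension.
    // Assumes an OpenGL version < 3.0, where GL_EXTENSIONS is one
    // space-separated string (GL 3.0+ uses glGetStringi instead).
    bool HasExtension(const char* name)
    {
        const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        if (!all) return false;

        const std::size_t len = std::strlen(name);
        for (const char* p = all; (p = std::strstr(p, name)) != nullptr; p += len) {
            // Make sure we matched a whole token, not a prefix of a longer name.
            if ((p == all || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
                return true;
        }
        return false;
    }

    // Usage: only take the fast path if the driver exposes the feature.
    // if (HasExtension("GL_EXT_geometry_shader4")) { /* use geometry shaders */ }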

It sounds to me like you have most of this backwards.
OpenGL is definitely easier to learn than Direct3D -- in particular, you can use immediate mode for simple stuff where you don't care a lot about performance, and delay dealing with vertex buffers and pixel buffers, and so on.
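For example, a complete colored triangle in legacy immediate mode is just a handful of calls, with no buffer objects or shaders involved. A sketch, assuming a window and GL context already exist (e.g. via GLUT or GLFW):

    #include <GL/gl.h>

    // Draw one colored triangle the "easy" legacy way. No VBOs, no shaders:
    // the driver buffers the vertices for you (at a performance cost).
    void DrawTriangle()
    {
        glClear(GL_COLOR_BUFFER_BIT);

        glBegin(GL_TRIANGLES);
            glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
        glEnd();
    }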
OpenGL used to have a significant performance disadvantage compared to Direct3D, and relatively slow progress in adding new features. New development was turned over to Khronos a while back. While there's some controversy about some steps Khronos has taken, they have done quite well at feature parity, and it's been a few years since I've seen any real performance advantage for Direct3D either.

Related

How to display graphics on a C++ program without third party libraries? [closed]

I can't seem to find the answer I'm looking for, or maybe I'm misunderstanding this. Either way, I am confused.
When I try to find out, there are always videos about C++ console programming, which annoys me because I want to move away from console stuff by now. I know how to create a simple window, but not how to display real graphics on it. I have heard of things like OpenGL and DirectX, but it makes me wonder: what do those libraries have in their source which draws actual shapes and other things on the program? What C++ functions do they use in their code?
Well, you could make your own graphics library. In that case I suggest you grab a good math book and get ready to do some rather serious maths. You'd also have to start right now learning about Windows graphics drivers and Windows kernel development, or about the X11 API and Linux kernel programming, or all of those. The documentation is available on the net.
What OpenGL and other graphics libraries do is provide a relatively simple API so you don't have to reinvent the wheel and write your own library, which would take years before coming anywhere close to what they provide.
Don't get me wrong: even with these nice APIs, there is much room for doing 'magic'. Computer graphics are an art in and of themselves. Mastering some of the techniques offered by the libraries requires a good knowledge of the maths involved, and of the possibilities offered by the hardware as well.
But, like everything, it can be learnt. If you are interested in doing graphics on a computer, you should start by finding a good source on the subject; there are many online. And start playing with the APIs. OpenGL is a good place to start, but there are also open-source libraries that build on it, since drawing polygons and such involves many other aspects of computing. Ogre3D comes to mind, but there are others.
Briefly, what graphics APIs have in their source is interaction with the graphics hardware: they tell it to write pixel values into memory. Ultimately this is usually done in an indirect way, such as by setting up a scene graph, and the graphics APIs provide an interface for working with those low-level hardware systems.
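To make the "relatively simple API" point concrete, here is roughly what a minimal C++ program looks like when a library such as GLFW handles the window system and OpenGL handles the driver. This is a sketch assuming GLFW is installed, with error handling trimmed:

    #include <GLFW/glfw3.h>

    int main()
    {
        if (!glfwInit()) return -1;

        // GLFW talks to the OS windowing system; OpenGL talks to the GPU driver.
        GLFWwindow* window = glfwCreateWindow(640, 480, "Hello, pixels", nullptr, nullptr);
        if (!window) { glfwTerminate(); return -1; }
        glfwMakeContextCurrent(window);

        while (!glfwWindowShouldClose(window)) {
            glClearColor(0.1f, 0.2f, 0.3f, 1.0f);   // ask the driver to fill the framebuffer
            glClear(GL_COLOR_BUFFER_BIT);
            // ...actual drawing calls would go here...
            glfwSwapBuffers(window);                // hand the finished frame to the display
            glfwPollEvents();
        }

        glfwTerminate();
        return 0;
    }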

NV_path_rendering alternative [closed]

I just watched a very impressive presentation from Siggraph 2012:
http://nvidia.fullviewmedia.com/siggraph2012/ondemand/SS106.html
My question is: this being a proprietary NVIDIA extension, what are the other possibilities for quickly rendering Bézier paths on the GPU? Alternatively, is there any hope this will end up as part of the OpenGL standard? Is it possible to give any time estimate of when that might eventually happen?
Do you know of any other (preferably open source) project dealing with GPU path rendering?
Edit: There is now a new "annex" to the original paper:
https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/nvpr_annex.pdf
Ready-made alternatives
NanoVG (https://github.com/memononen/nanovg) appears to have a little bit of traction (http://www.reddit.com/r/opengl/comments/28z6rf/whats_a_popular_vector_c_library_for_opengl/), so you could look at their implementation. I have not used NanoVG myself, though, and I'm mostly unfamiliar with its internals; what I do know is that they have specifically rejected using NV_path_rendering: https://github.com/memononen/nanovg/issues/25
As I mentioned already in a comment above, NV_path_rendering has now been implemented in Skia and appears to be courted by cairo too; see my comment below tjklemz's answer for links on those details. One issue with NV_path_rendering is that it is somewhat dependent on the fixed-function pipeline, so it is a bit incompatible with OpenGL ES 2.0, but there's a workaround for that: https://code.google.com/p/chromium/issues/detail?id=344330
I would stay away from anything OpenVG-related. The committee working on that folded in 2011; it's now basically a legacy product/API. Most implementations of OpenVG (including ShivaVG) are also ancient and use fixed-function OpenGL, according to https://github.com/memononen/nanovg/issues/113 If you really must use an OpenVG-like library, MonkVG appears the best maintained [read as: the most recently abandoned] among the free ones (code: https://github.com/micahpearlman/MonkVG; 2010 announcement: http://www.khronos.org/message_boards/showthread.php/6776-MonkVG-an-OpenSource-implementation-available). They claim it works on Windows, Mac OS X, iOS and Android via OpenGL ES 1.1 and 2.0. The [fairly big] caveat is that MonkVG is not a full implementation of OpenVG; see the "TODO" section on their code page for what's missing.
I also found that a cairo (& pango) dev, Behdad Esfahbod, has written a new glyph (i.e. font) rendering library (https://code.google.com/p/glyphy/): "GLyphy is a signed-distance-field (SDF) text renderer using OpenGL ES2 shading language. [...] GLyphy [...] represents the SDF using actual vectors submitted to the GPU. This results in very high quality rendering." As far as I can tell it's not used in Cairo yet. (Behdad moved to Google [from Red Hat] and cairo hasn't seen releases in quite a while, so maybe GLyphy is gonna go into Skia instead, who knows...) I'm not sure how generalizable that solution is to arbitrary paths. (In the other direction, NV_path_rendering can also render fonts and with kerning, in case you didn't know that.) There is a talk at Linux.conf.au 2014 which you should definitely watch if you're interested in GLyphy: https://www.youtube.com/watch?v=KdNxR5V7prk If you're not familiar with the (original) SDF method, see http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
I found a talk by a Mozilla dev which summarizes the common approaches in use today: https://www.youtube.com/watch?v=LZis03DXWjE#t=828 (The timestamp is to skip the intro part where he tells you what a GPU is.)
DIY (possibly)
By the way, a lot of the path-rendering stuff is command/state-change-intensive. I think that Mantle, DX12, and the OpenGL equivalents of that (mostly extensions http://gdcvault.com/play/1020791/) will probably improve that a fair bit.
I guess I should also mention that Nvidia has been granted (at least) four patents in connection with NV_path_rendering:
https://www.google.com/patents/US8698837
https://www.google.com/patents/US8698808
https://www.google.com/patents/US8704830
https://www.google.com/patents/US8730253
Note that there are something like 17 more USPTO documents connected to these as "also published as", most of which are patent applications, so it's entirely possible more patents may be granted from those. Update on this: Google doesn't quite link all of them together, so there are some more that have definitely been granted:
https://www.google.com/patents/US8786606
https://www.google.com/patents/US8773439
I'm not sure under what terms they are willing to license those...
I found a really nice FAQ by Kilgard himself on "what's special about vector graphics / path rendering", which is unfortunately buried somewhere in the OpenGL forum: http://www.opengl.org/discussion_boards/showthread.php/175260-GPU-accelerated-path-rendering?p=1225200&viewfull=1#post1225200. This is quite useful reading for anyone considering quick/hack alternative solutions.
There is also one new thing in Direct3D 11.1 that's possibly useful, because Microsoft used it to improve their Direct2D implementation in Windows 8: it's called target-independent rasterization (TIR). I don't know much about it other than that Microsoft has a patent application on it: http://www.google.com/patents/US20120086715 The catch is that only AMD GPUs seem to actually support it, per this "war of words": http://www.hardwarecanucks.com/news/war-of-words-between-nvidia-and-amd-over-directx-11-1-support-continues/
Adoption
I don't have a crystal ball as to when NVpr is going to get adopted by non-Nvidia vendors, but I think they are pushing it quite hard. The OpenGL 4.5 Nvidia presentation was pretty much taken over by it, at least as far as demos were concerned, which I thought was a bit silly (since it's not part of OpenGL 4.5 core). Neil Trevett has also covered NVpr more than once (e.g. https://www.youtube.com/watch?v=eTdLwfOLoG0#t=2095), and the Adobe Illustrator 2014 beta is using it, as is Google's Skia.
ShivaVG is an open-source alternative for path rendering. See this Stack Overflow question for a list of OpenVG implementations: Best OpenVG Implementation
Basically, you have a few options: use an OpenVG implementation (like ShivaVG), use an OpenGL implementation or extension (like the NV_path_rendering), or use something else entirely, like Direct2D.
However, other alternatives to NV_path_rendering cannot even come close to its feature set and rendering quality. NV_path_rendering can natively handle fonts (which is a big deal; without fonts, you're toast), scale and so forth in true perspective (try that in Illustrator!), mix well with 3D, use sRGB, use fragment shaders, and do all this incredibly fast. It also implements user interaction, which OpenVG does not specify AFAIK.
Uniquely, NV_path_rendering does not invent a new standard. Rather, it implements several industry standards, such as PostScript and SVG, with a focus on quality and speed (it's rare to have them both) that you cannot currently find anywhere else.
(Plus, Mark Kilgard is the project lead. C'mon. The guy's brilliant.)
Will it become standard? Hard to know. As for what to use, it really depends on your purpose/need at this point. Looking for quality path rendering for an app? NV_path_rendering for sure. Looking for basic resolution independent graphics in apps (esp. mobile)? OpenVG might be better. It's too bad that Nvidia's solution is not completely portable, but I wouldn't shy from using it. I'd prefer having a quality solution; sometimes portability isn't everything.
Nvidia has compared their solution to OpenVG and found that OpenVG doesn't provide too much benefit, unfortunately. So, yes, there might be hope for it to become a standard. But, since according to IBM everything in the future will be embedded, perhaps it would be better to hope for it to be open, instead of wanting more standards.
"The nice thing about standards is that you have so many to choose from." --
Computer Networks, 2nd ed., p. 254
For more info on NV_path_rendering features, I recommend looking at this: An Introduction to NV_path_rendering.
what are the other possibilities for quickly rendering Bézier paths on the GPU?
In-situ tessellation of the set of control points into convex patches, each bounded by a triangular hull, using a tessellation and/or geometry shader; then pass the curvature parameters to a fragment shader that tests, per fragment, whether it lies inside the curve's boundary within each patch, and discards it otherwise.
If an approximation is acceptable, then simply tessellating into a regular triangle mesh will do.
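As an illustration of that per-fragment test, the classic Loop-Blinn trick (GPU Gems 3, "Rendering Vector Art on the GPU") assigns the coordinates (0,0), (1/2,0), (1,1) to the three control points of a quadratic Bézier patch; the interpolated (u,v) then satisfies u^2 - v = 0 exactly on the curve, so the fragment shader only has to check a sign. A minimal GLSL sketch of that shader, held in a C++ string; the names and the sign convention are illustrative, and which side is kept depends on the patch's orientation:

    // Fragment shader for the per-patch curve test described above (Loop-Blinn).
    // The vertex shader is assumed to pass through a per-control-point attribute
    // "uv" set to (0,0), (0.5,0), (1,1) on the patch's triangular hull.
    const char* kCurveFragmentShader = R"(
    #version 150
    in  vec2 uv;
    out vec4 fragColor;
    uniform vec4 fillColor;

    void main()
    {
        // On the curve: u^2 - v == 0. Keep one side, discard the other
        // (flip the comparison for patches covering the opposite side).
        if (uv.x * uv.x - uv.y > 0.0)
            discard;
        fragColor = fillColor;
    }
    )";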

OpenGL 4.1 Learning Resources

What are some good resources for learning OpenGL 4.1 aimed at someone relatively new to graphics programming?
I'm aware that this was asked before, but I would think that ~9 months would give us more.
I know that OpenGL Programming Guide: The Official Guide to Learning OpenGL, Versions 4.1 (8th Edition) is coming out apparently in October, but is there anything else? It seems like there's been some major changes, and I'd hate to feel like my time studying until this book release was wasted. Sites can work too, provided they are focusing on 4.1.
Thank you.
If you want to learn graphics programming properly, then OpenGL 4.1 is the place to start these days, but if as a beginner you just want to hack stuff out, I'd advise you to take an easier route (DirectX). Your programming skills have to be up there, and especially your maths skills as well (linear algebra). Get a copy of the spec, the Red/Orange and Blue books, and a good book on mathematics for 3D graphics, and prepare for a lot of pain: pure 4.1 from scratch is hard.
Don't worry about not getting the latest edition of the red book, 3.3->4.1 didn't change a huge amount in terms of new features or paradigm, mainly just removing deprecated functionality.
Mesh loading can be handled by OpenCTM; GXBase is also quite good.
Don't bother learning OpenGL prior to 4.1: it's based on a deprecated paradigm, so you would waste time learning stuff that is officially out of date. The SuperBible has a good amount of code that will help you along the way.

What 3D graphics framework should I use for a real world game engine? [closed]

I'm a C++ programmer with very extensive server programming experience. I'm however fairly bored at the moment and I decided to tackle a new area: 3D game programming, for learning purposes. Additionally I think this learning project may turn out to be good resume material in the future should I decide to work in this area.
Instead of creating a 3D engine from scratch I decided to emulate as exactly as I'm able an existing one: World of Warcraft's. If you are curious as to why (feel free to skip this):
It's a real world successful game
All the map textures, models and what not are already done (I'm not interested in learning how to actually draw a texture with photoshop or whatever)
Their file formats have been more or less completely reverse engineered
There is already an identical open source project (wowmapview) that I can look at if I'm in trouble.
OK, that was a long preface. Now, my main question is the following: should I use DirectX, OpenGL, wrapper libraries such as SDL, or what?
What's the most used one in the real world?
And, something else that perplexes me: World of Warcraft appears to be using both! In fact, it normally uses DirectX, but you can use OpenGL by starting it with the "-opengl" switch on the command line.
Is this something common in games? Why did they do it? I imagine it's a lot of work and from my understanding nobody uses OpenGL anyway (very very few people know about the secret switch at all).
If it's something usually done, do programmers usually create their own 3D engine "wrapper", something like an in-house SDL, and based on switches / #defines / whatnot decide which API function to ultimately call (DirectX or OpenGL)? Or is this functionality already built into SDL (can you switch between DirectX and OpenGL at will)?
And, finally, do you have any books to suggest?
Thanks!
I realize you already accepted an answer, but I think this deserves some more comments. Sorry to quote you out of order, I'm answering by what I think is important.
Instead of creating a 3D engine from scratch I decided to emulate as exactly as I'm able an existing one: World of Warcraft's.
However I wanted to focus on the actual 3d and rendering engine, not the interface, so I don't think I will be using it [lua] for this project.
From these two snippets, I can tell you that you are not trying to emulate the game engine. Just the 3D rendering backend. It's not the same thing, and the rendering backend part is very small part compared to the full game engine.
This, by the way, can help answer one of your questions:
World of Warcraft appears to be using both! In fact, normally it uses DirectX, but you can use opengl by starting it with the "-opengl" switch via command line.
Yep, they implemented both. The amount of work to do that is non-negligible, but the rendering back-end, in my experience, is at most 10% of the total code, usually less. So it's not that outrageous to implement multiple ones.
More to the point, the programming part of a game engine today is not the biggest chunk. It's the asset production that is the bulk (and that includes most of the game programming; most Lua scripts are considered to be on that side of things, for example).
For WoW, OSX support meant OpenGL. So they did it. They wanted to support older hardware too... So they support DX8-level hardware. That's already 3 backends. I'm not privy to how many they actually implement, but it all boils down to what customer base they wanted to reach.
Multiple back-ends in a game engine is something that is more or less required to scale to all graphics cards/OSs/platforms. I have not seen a single real game engine that did not support multiple backends (even first party titles tend to support an alternate back-end for debugging purposes).
OK, that was a long preface. Now, my main question is the following: should I use DirectX, OpenGL, wrapper libraries such as SDL, or what?
Well, this depends strongly on what you want to get out of it. I might add that your option list is not quite complete:
DirectX9
DirectX10
DirectX11
OpenGL < 3.1 (before deprecated API is removed)
OpenGL >= 3.1
OpenGL ES 1.1 (only if you need to. It's not programmable)
OpenGL ES 2.0
Yep, those APIs are different enough that you need to decide which ones you want to handle.
If you want to learn the very basics of 3D rendering, any of those can work. OpenGL < 3.1 tends to hide a lot of things that ultimately have to happen in user code for the other ones (e.g. matrix manipulation; see this plug).
The DX SDKs do come with a lot of samples that help in understanding the basic concepts, but they also tend to use the latest and greatest features of DX when that's not necessarily required when starting out (e.g. using a geometry shader to render sprites...).
On the other hand, most GL tutorials tend to use features that are essentially non-performant on modern hardware (e.g. glBegin/glEnd, selection/picking, ... see the list of things that got removed from GL 3.1 or this other plug) and tend to seed the wrong concepts for a lot of things.
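To illustrate the matrix-manipulation point: the transform setup that old fixed-function tutorials do with the built-in matrix stack has to be done by hand (for example with the GLM math library) and uploaded as a shader uniform in GL 3.1+ or D3D10+. A hedged sketch, assuming GLM, a function loader such as GLEW, and a linked shader program with a hypothetical mat4 uniform named "u_modelView":

    #include <GL/glew.h>                       // loader assumed initialized elsewhere
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // Legacy (GL < 3.1): the driver owns the matrix stack.
    void SetupTransformLegacy(float angleDeg)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);
        glRotatef(angleDeg, 0.0f, 1.0f, 0.0f);
    }

    // Modern (GL >= 3.1 / D3D10+): you own the matrix and upload it yourself.
    // 'program' is assumed to be a linked shader program using "u_modelView".
    void SetupTransformModern(GLuint program, float angleRad)
    {
        glm::mat4 mv = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));
        mv = glm::rotate(mv, angleRad, glm::vec3(0.0f, 1.0f, 0.0f));
        glUniformMatrix4fv(glGetUniformLocation(program, "u_modelView"),
                           1, GL_FALSE, glm::value_ptr(mv));
    }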
What's the most used one in the real world?
For games, DirectX 9 is the standard today in the PC world, by a wide margin.
However, I'm expecting DirectX11 to grab more market share as it allows for some more multithreaded work. It's unfortunately significantly more complicated than DX9.
nobody uses OpenGL anyway (very very few people know about the secret switch at all).
Ask the Mac owners what they think.
Side question: do you really think hardware vendors would spend any energy on OpenGL drivers if that were really the case (I realize I'm generalizing your comment, sorry)? There are real-world usages of it, just not many in games. And Apple makes OpenGL more relevant through the iPhone (well, OpenGL ES, really).
If it's something usually done, do programmers usually create their own 3d engine "wrapper",
It's usually a full part of the engine design. Mind you, it's not abstracting the API at the same level; it's usually more at the level of "draw this, with all its bells and whistles, over there". Which rendering algorithm to apply to that draw tends to be back-end specific.
This, however, is very game-engine dependent. If you want to understand it better, you could look at UE3; it just got released free (as in beer) for non-commercial use (I have not looked at it yet, so I don't know if they exposed the back-ends, but it's worth a look).
To get back to my comment that game engine does not just mean 3D, look at this.
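As a very rough sketch of what the back-end "wrapper" mentioned above can look like in C++ (all class and function names here are hypothetical, not from any particular engine), the engine codes against an abstract renderer and picks the concrete implementation at startup, e.g. from a "-opengl"-style switch:

    #include <memory>
    #include <string>

    // High-level request: "draw this mesh over there".
    // The concrete back-end decides how that maps to API calls.
    struct MeshHandle { int id; };

    class Renderer {
    public:
        virtual ~Renderer() = default;
        virtual void BeginFrame() = 0;
        virtual void DrawMesh(const MeshHandle& mesh) = 0;
        virtual void EndFrame() = 0;
    };

    class D3D9Renderer : public Renderer {   // would wrap IDirect3DDevice9 calls
    public:
        void BeginFrame() override { /* device->BeginScene(); ... */ }
        void DrawMesh(const MeshHandle&) override { /* DrawIndexedPrimitive ... */ }
        void EndFrame() override { /* device->EndScene(); device->Present(...); */ }
    };

    class GLRenderer : public Renderer {     // would wrap OpenGL calls
    public:
        void BeginFrame() override { /* glClear(...); */ }
        void DrawMesh(const MeshHandle&) override { /* glDrawElements(...); */ }
        void EndFrame() override { /* SwapBuffers / glfwSwapBuffers ... */ }
    };

    // Chosen once at startup, e.g. from a "-opengl" style command-line switch.
    std::unique_ptr<Renderer> CreateRenderer(const std::string& api)
    {
        if (api == "opengl") return std::make_unique<GLRenderer>();
        return std::make_unique<D3D9Renderer>();
    }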
I think the primary benefit of using OpenGL over DirectX is portability. DirectX only runs on Windows. However, this is often not a problem (many games only run on Windows anyway).
DirectX also provides other libraries which are useful for games which are unrelated to graphics such as sound and input. I believe there are equivalents which are often used with OpenGL but I don't think they're actually part of OpenGL itself.
If you're going to be locked into Windows with DirectX and you are willing to/interested in learning C# and managed code, I have found XNA to be an extremely easy platform to learn. It allows you to learn most of the concepts without dealing with a lot of the really tricky details of DirectX (or OpenGL). You can still use shader code and have the full power of DirectX, but in a much friendlier environment. It would be my recommendation, but again, you'd have to switch to C# (mind you, you can also put that on your resume).
You might want to look at some projects that encapsulate the low-level 3D API in a higher-level, API-independent interface, such as Ogre3D. As you are doing this to learn, I assume you will probably be more interested in implementing the low-level detail yourself, but you could learn a lot from such a project.
If you are really only interested in the rendering part, I can suggest Ogre3D. It is clean, C++, tested and cross-platform. I used it in two projects and, compared to other experiences (Torque3D for example), I liked the good support (tutorials/wiki/forum) and the not-so-steep learning curve. I think you can also learn a lot by looking at the source code and the concepts they used in the design. There is an accompanying book as well, which is a bit outdated, but it is good for a start.
The problem with this is that you will be thinking inside this engine, and soon you will need gameplay-like elements (timers, events) for simulating and testing your effects or whatever you want to do. So you will end up working around Ogre3D's shortcomings (it is not a game engine) and implementing them on your own, or using other middleware.
So if you really want to get your hands on 3D rendering first, I would pick up some computer graphics books (GPU Gems, ShaderX), look at some tutorials and demos on the web, and start building my own basic framework. This is for the experience, and I think you will learn the most from this approach. At least I did...
I'm doing some OpenGL work right now (on both Windows and Mac).
Compared to my midnight game programming using the Unity3D engine, using OpenGL is a bit like having to chop down your own trees to make a house versus buying the materials.
Unity3D runs on everything (Mac, PC and iPhone, Web player, etc) and lets you concentrate on the what makes a game, a game. To top it off, it's faster than anything I could write. You code for it in C#, Java (EDIT: make that JavaScript) or Boo. (EDIT: Boo support has been dropped)
I just used Unity to mock up a demo for a client who wants something made in OpenGL, so it has its real-world uses also.
-Chris
PS: The Unity Indie version recently became free.

Direct-X in C++ Game Programming [closed]

I am relatively new to C++ programming. Can anyone please tell me how the DirectX SDK is helpful, how it works, and how we can use it in game programming? I downloaded it and found lots of header files, and the documentation also says something about game programming.
DirectX is a library (a large collection of classes, really) that allows you to "talk" to the video adapter, sound card, keyboard, mouse, joystick, etc. It allows you to do so much more efficiently than other "standard" Windows functions. This is important because games need all the performance gain you can get, and DirectX has plenty to offer in this regard, especially when it comes to graphics programming, because it has functions that enable you to use the 3D acceleration features of your graphics card. Windows doesn't have such functions by default. (A small sketch of what that looks like in code follows the SDK list below.)
The DirectX SDK contains:
Documentation for all the features of DirectX;
Tutorials in the C++ language to get you started if you don't know anything;
Sample applications;
The necessary .h and .lib files to add DirectX support to your program;
The debug version of DirectX (I think, I'm not so sure about this one)
The DirectX redist that you can include with your own programs.
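To give a flavor of what using those headers and .lib files looks like, here is a hedged sketch of the classic Direct3D 9 startup: create the API object, create a device for a window, then clear and present each frame. Error checking is omitted and hwnd is assumed to be an existing window handle:

    #include <d3d9.h>   // link against d3d9.lib from the SDK

    IDirect3DDevice9* CreateD3D9Device(HWND hwnd)
    {
        // Entry point into the API: talks to the driver on your behalf.
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed         = TRUE;
        pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
        pp.BackBufferFormat = D3DFMT_UNKNOWN;     // match the current display mode

        IDirect3DDevice9* device = nullptr;
        d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                          D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);
        return device;   // (Release() calls omitted for brevity.)
    }

    void RenderFrame(IDirect3DDevice9* device)
    {
        device->Clear(0, nullptr, D3DCLEAR_TARGET,
                      D3DCOLOR_XRGB(0, 0, 64), 1.0f, 0);   // dark blue backdrop
        device->BeginScene();
        // ...draw calls go here...
        device->EndScene();
        device->Present(nullptr, nullptr, nullptr, nullptr);
    }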
If you're not up to speed with C++, then starting with DirectX development will be quite difficult, as each of these things has a pretty big learning curve.
By the way, you did download the latest version from Microsoft's webpage, not a five-year-old copy from some random site, right?
Your question is way too broad. DirectX can help you create games in more ways than you can (considering you're new) imagine.
Rendering (putting stuff on the screen), input (responding to what happens on the user's mouse/keyboard), networking (for multiplayer), reading files, fonts, 3D models, sound, to name a few.
I urge you not to try to write something yourself directly on top of DirectX. Getting something good out of it is an extremely complex task. Don't reinvent the wheel unless you plan to learn more about wheels. (The wheel here being DirectX.)
If you just want to get up and running and make World of Warcraft 2, I suggest you use premade DirectX implementations (usually called game engines) such as Ogre, Irrlicht or HGE (simpler, but only for 2D games).
Good luck, don't give up, and come back later with your first real question. :)
I'd like to add that "game programming" does not necessitate graphics programming. DirectX, like OpenGL, provides a basis to create a graphics application; but, as mentioned, it's very low level.
As a professional game developer, I would not suggest just jumping into DirectX after learning C++. It's a difficult endeavor that will move slowly and provide you little motivation to continue. It's definitely something to keep in mind for your future; but, for the moment, it would be more beneficial to play with something complete, possibly start with gameplay programming.
Note: In addition to C++ skills, you will also need some mathematical talents. Linear algebra and trigonometry are the primary concerns.
Check out a lightweight engine like Angel. It's a fairly intuitive starting point and small enough that you can fully understand what's happening within it.
As always, try to make small edits and projects for yourself in the beginning and then move on to bigger and badder tasks!
Good luck!
I'm not entirely sure what you're asking, but Frank Luna's book on DirectX is a very good intro.
But if you're a relative newcomer to C++, DirectX may be too low-level for what you're looking for. I mean, are you looking to create a colorful rotating cube, or do you want to make a semi-complete game? Assuming the latter, you probably want a framework that abstracts away the low-level details of either DirectX or OpenGL. For that, I can heartily recommend Ogre 3D. The tutorials alone will get you up and running before you know it.
Read: The Art of 3D Game Programming with DirectX, 2002.