NV_path_rendering alternative [closed] - opengl

I just watched a very impressive presentation from Siggraph 2012:
http://nvidia.fullviewmedia.com/siggraph2012/ondemand/SS106.html
My question is, this being a proprietary Nvidia extension, what are the other possibilities for quickly rendering Bézier paths on the GPU? Alternatively, is there any hope this will end up as part of the OpenGL standard? Is it possible to give any estimate of when that might happen?
Do you know of any other (preferably open source) project dealing with GPU path rendering?
Edit: There is now a new "annex" to the original paper:
https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/nvpr_annex.pdf

Ready-made alternatives
NanoVG (https://github.com/memononen/nanovg) appears to have a bit of traction (http://www.reddit.com/r/opengl/comments/28z6rf/whats_a_popular_vector_c_library_for_opengl/), so you could look at their implementation. I have not used NanoVG myself, though, and I'm mostly unfamiliar with its internals; what I do know is that they have specifically rejected using NV_path_rendering: https://github.com/memononen/nanovg/issues/25
As I mentioned already in a comment above, NV_path_rendering has now been implemented in Skia and appears to be courted by cairo too; see my comment below tjklemz's answer for links on those details. One issue with NV_path_rendering is that it is somewhat dependent on the fixed-function pipeline, so a bit incompatible with OpenGL ES 2.0, but there's a workaround for that: https://code.google.com/p/chromium/issues/detail?id=344330
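In case you're wondering what using NVpr actually looks like, its core "stencil, then cover" idiom is roughly this (a minimal sketch based on the extension spec; loader setup, transforms, and error handling are omitted, and the path string is just an example):

    // C/C++ with a loader (e.g. GLEW) providing the NV entry points.
    // Assumes a context with a stencil buffer and a path-space transform set up.
    const char* star = "M100,180 L40,10 L190,120 L10,120 L160,10 Z";
    GLuint path = glGenPathsNV(1);
    glPathStringNV(path, GL_PATH_FORMAT_SVG_NV, (GLsizei)strlen(star), star);

    // Pass 1: rasterize winding counts into the stencil buffer.
    glStencilFillPathNV(path, GL_COUNT_UP_NV, 0xFF);

    // Pass 2: "cover" the path's bounding box, shading only where stencil != 0,
    // and reset the stencil values for the next path.
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
    glCoverFillPathNV(path, GL_BOUNDING_BOX_NV);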
I would stay away from anything OpenVG-related. The committee working on that folded in 2011; it's now basically a legacy product/API. Most implementations of OpenVG (including ShivaVG) are also ancient and use fixed-function OpenGL, according to https://github.com/memononen/nanovg/issues/113. If you really must use an OpenVG-like library, MonkVG appears the best maintained [read as: the most recently abandoned] among the free ones (code: https://github.com/micahpearlman/MonkVG; 2010 announcement: http://www.khronos.org/message_boards/showthread.php/6776-MonkVG-an-OpenSource-implementation-available). They claim it works on Windows, Mac OS X, iOS, and Android via OpenGL ES 1.1 and 2.0. The [fairly big] caveat is that MonkVG is not a full implementation of OpenVG; see the "TODO" section on their code page for what's missing.
I also found that a cairo (& pango) dev, Behdad Esfahbod, has written a new glyph (i.e. font) rendering library (https://code.google.com/p/glyphy/): "GLyphy is a signed-distance-field (SDF) text renderer using OpenGL ES2 shading language. [...] GLyphy [...] represents the SDF using actual vectors submitted to the GPU. This results in very high quality rendering." As far as I can tell it's not used in cairo yet. (Behdad moved to Google [from Red Hat], and cairo hasn't seen releases in quite a while, so maybe GLyphy is going to go into Skia instead, who knows...) I'm not sure how generalizable that solution is to arbitrary paths. (In the other direction, NV_path_rendering can also render fonts, with kerning, in case you didn't know that.) There is a talk from linux.conf.au 2014 which you should definitely watch if you're interested in GLyphy: https://www.youtube.com/watch?v=KdNxR5V7prk If you're not familiar with the (original) SDF method, see http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
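For reference, the original texture-based SDF trick that GLyphy improves on boils down to a few fragment-shader lines. A minimal sketch (this is the Valve-style single-channel approach, not GLyphy's vector representation, and the names are mine):

    // C++: GLSL ES 3.00 fragment shader source for classic one-channel SDF text.
    // u_sdf stores distances remapped so that 0.5 lands exactly on the outline.
    static const char* kSdfFs = R"glsl(
        #version 300 es
        precision mediump float;
        uniform sampler2D u_sdf;
        in vec2 v_uv;
        out vec4 fragColor;
        void main() {
            float d = texture(u_sdf, v_uv).r;       // sampled distance
            float w = fwidth(d);                    // ~one-pixel smoothing band
            float alpha = smoothstep(0.5 - w, 0.5 + w, d);
            fragColor = vec4(0.0, 0.0, 0.0, alpha); // antialiased black glyph
        }
    )glsl";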
I found a talk by a Mozilla dev which summarizes the common approaches in use today: https://www.youtube.com/watch?v=LZis03DXWjE#t=828 (The timestamp is to skip the intro part where he tells you what a GPU is.)
DIY (possibly)
By the way, a lot of the path-rendering stuff is command/state-change-intensive. I think that Mantle, DX12, and their OpenGL equivalents (mostly extensions; see http://gdcvault.com/play/1020791/) will probably improve that a fair bit.
I guess I should also mention that Nvidia has been granted (at least) four patents in connection with NV_path_rendering:
https://www.google.com/patents/US8698837
https://www.google.com/patents/US8698808
https://www.google.com/patents/US8704830
https://www.google.com/patents/US8730253
Note that there are something like 17 more USPTO documents connected to these as "also published as", most of which are patent applications, so it's entirely possible more patents may be granted from those. Update on this: Google doesn't quite link all of them together, so there are some more that have been granted for sure:
https://www.google.com/patents/US8786606
https://www.google.com/patents/US8773439
I'm not sure under what terms they are willing to license those...
I found a really nice FAQ by Kilgard himself on "what's special about vector graphics/path rendering", which is unfortunately buried somewhere in the OpenGL forum: http://www.opengl.org/discussion_boards/showthread.php/175260-GPU-accelerated-path-rendering?p=1225200&viewfull=1#post1225200. It is quite useful reading for anyone considering quick/hacky alternative solutions.
There is also one new thing in Direct3D 11.1 that's possibly useful, because Microsoft used it to improve their Direct2D implementation in Windows 8; it's called target-independent rasterization (TIR). I don't know much about it other than that Microsoft has a patent application on it: http://www.google.com/patents/US20120086715 The catch is that only AMD GPUs seem to actually support it, per this "war of words": http://www.hardwarecanucks.com/news/war-of-words-between-nvidia-and-amd-over-directx-11-1-support-continues/
Adoption
I don't have a crystal ball as to when NVpr is going to get adopted by non-Nvidia vendors, but I think they are pushing it quite hard. The Nvidia OpenGL 4.5 presentation was pretty much taken over by it, at least as far as the demos were concerned, which I thought was a bit silly (since it's not part of OpenGL 4.5 core). Neil Trevett has also covered NVpr more than once (e.g. https://www.youtube.com/watch?v=eTdLwfOLoG0#t=2095), and Adobe Illustrator beta 2014 is using it, as is Google's Skia.

ShivaVG is an open source alternative for path rendering. See this Stack Overflow question for a list of OpenVG implementations: Best OpenVG Implementation
Basically, you have a few options: use an OpenVG implementation (like ShivaVG), use an OpenGL implementation or extension (like the NV_path_rendering), or use something else entirely, like Direct2D.
However, other alternatives to NV_path_rendering cannot even come close to its feature set and rendering quality. NV_path_rendering can natively handle fonts (which is a big deal: without fonts, you're toast), scale and so forth in true perspective (try that in Illustrator!), mix well with 3D, use sRGB, use fragment shaders, and it does all this incredibly fast. It also implements user interaction, which OpenVG does not specify, AFAIK.
Uniquely, NV_path_rendering does not invent a new standard. Rather, it implements several industry standards, such as PostScript and SVG, with a focus on both quality and speed (it's rare to get both) that you cannot currently find anywhere else.
(Plus, Mark Kilgard is the project lead. C'mon. The guy's brilliant.)
Will it become standard? Hard to know. As for what to use, it really depends on your purpose/need at this point. Looking for quality path rendering for an app? NV_path_rendering for sure. Looking for basic resolution independent graphics in apps (esp. mobile)? OpenVG might be better. It's too bad that Nvidia's solution is not completely portable, but I wouldn't shy from using it. I'd prefer having a quality solution; sometimes portability isn't everything.
Nvidia has compared their solution to OpenVG and found that OpenVG doesn't provide too much benefit, unfortunately. So, yes, there might be hope for it to become a standard. But, since according to IBM everything in the future will be embedded, perhaps it would be better to hope for it to be open, instead of wanting more standards.
"The nice thing about standards is that you have so many to choose from." --
Computer Networks, 2nd ed., p. 254
For more info on NV_path_rendering features, I recommend looking at this: An Introduction to NV_path_rendering.

what are the other possibilities for quickly rendering Bézier paths on the GPU?
In-situ tessellation of the set of control points into convex patches defined by a triangular hull, using a tessellation and/or geometry shader, then passing the curvature parameters to a fragment shader that tests, per fragment, whether it lies within the boundaries of the patch and discards it otherwise (see the sketch below).
If an approximation will do, then simply tessellating into a triangle mesh is enough.
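To make the per-fragment test concrete for the quadratic case (this is essentially the Loop-Blinn technique; the names are mine): each curve triangle gets coordinates (u, v) = (0,0), (0.5,0), (1,1) at its corners, and the fragment shader evaluates the curve's implicit form u*u - v:

    // C++: GLSL 3.30 fragment shader testing the implicit quadratic Bezier,
    // with (u,v) interpolated from the per-vertex values listed above.
    static const char* kCurveFs = R"glsl(
        #version 330 core
        in vec2 v_uv;
        out vec4 fragColor;
        void main() {
            if (v_uv.x * v_uv.x - v_uv.y > 0.0)
                discard;                          // outside the curve
            fragColor = vec4(0.0, 0.0, 0.0, 1.0); // inside: fill color
        }
    )glsl";

Cubic segments need a 3D functional and a classification step, which is where most of the real work in Loop-Blinn lives.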

Related

Can understanding the low-level intricacies of a GPU/core-drivers develop your skill with working with Graphic APIs?

e.g. If I learn the low-level graphics pipeline, take a trip through the ARB assembly language, and understand the logic behind certain GPU device driver calls, can it help me enhance my knowledge of GPU API programming, or is there no correlation whatsoever and I will only be wasting my time?
EDIT: Do professionals need to understand this to a degree?
It's definitely a good idea to have a solid understanding of what makes a GPU tick. If you know your card at a deep level, then you can easily identify bottlenecks and get maximum performance. The trouble is, finding up-to-date information is pretty difficult unless you go straight to the manufacturers, and they obviously won't tell you everything about how their cards work.
That link you posted is a really helpful primer; I've read it myself, but take note of the date. It was posted in 2011, which means it is almost 3 years out of date now. In the GPU world, 3 years might as well be a lifetime.
Here are a few links to Nvidia's and AMD's developer sites; you will probably find good information there.
http://developer.amd.com/
https://developer.nvidia.com/
ARB assembly language?
No.
You do not need to know that, though many professionals already do as a consequence of being in the industry before GLSL or Cg existed. To be honest, I cannot say that I feel any better off for having experience with the ARB assembly language; all of the same concepts are taught in GLSL, and many of the hardware restrictions that applied to ARB FP/VP are no longer valid.
You would probably be doing yourself a disservice by learning it; like immediate mode or the fixed-function pipeline, it simply is not relevant to modern hardware/software.
That said, the blog you linked to is a very good thing to read. Do not concern yourself with hardware instruction sets (even the ARB languages are actually a virtual instruction set, translated by the driver later into the hardware's native ISA). Instead, you should understand the pipeline itself.
A high-level overview of the pipeline will do you better in the long-run. Underlying hardware optimization techniques change with each generation, but the fundamental stages of the pipeline remain relatively static (of course new programmable stages are periodically introduced). If you understand the pipeline from a high-level, then you will be prepared to learn about and apply new hardware features as they are introduced.
Not if you're going to dive into some obscure chip, but it most certainly helps to understand the basics of the hardware you're trying to control. I've done some console renderer programming in the past that was pretty much on the metal, and that has positively influenced my higher-level API usage.

Are there any resources for teaching OpenGL to a Direct3D programmer?

I have a good grasp of Direct3D 9, and now I want to learn some OpenGL. I have the OpenGL Redbook, sixth edition, and it has a lot of good information, but it also has a lot I already know from my D3D work. I'd like a rundown of all the differences and equivalences in OpenGL and Direct3D. Does anyone know where I might find such a thing?
They are largely one and the same; if you're looking for feature differences, check Wikipedia.
First the big one:
DirectX has a wider scope than OpenGL, in that DirectX is composed of DirectSound, DirectPlay, Direct3D, etc., whereas the Open Graphics Library is just about graphics.
From my perspective of working with them, DirectX is much better designed and uniform, whereas OpenGL is just a spec and is interpreted WIDELY differently across implementations (ATI and Nvidia just bitch at each other constantly throughout development). This makes OpenGL a bit harder to handle; there have been no nice and easy features since 3.1.
What OpenGL gives you is the ability to hack and exploit to your heart's content; it transcends DirectX in sheer flexibility. You feel a lot closer to the hardware in OpenGL, and you get a better idea of what's going on. I always found DirectX to be a bit of a handicap. If you want to make a professional game, then go with DirectX, but OpenGL is more free-wheeling/fun than DX; you will definitely learn more, and the lack of neat additional layers around everything makes you work that bit harder.
To get started, read the blue, orange, and yellow books. Also try GXBase instead of GLUT.

Comparison between opengl and directX [duplicate]

Possible duplicate: OpenGL or DirectX?
I don't want to trigger a war, but I really want to know the pros and cons of these two mainstream graphics libraries.
To be honest, the hard part is not the API; it's the higher-level 3D stuff. Below that, both APIs have vertex buffers, index buffers, textures, shaders, and so on, and although they express them in different ways, it's the concepts that are the hard part, not the API. If you understand D3D11, then you'll pick up OpenGL in no time, and vice versa.
Practical considerations are that OpenGL is available on more platforms, but D3D tends to be better supported and work better on Windows. D3D has a more object-oriented interface, whereas OpenGL has a strictly "C"-style interface (although internally it deals with objects through "names" and handles). This likely makes OpenGL easier to start learning than D3D11, which needs quite a bit of setup - but in "real" applications there won't be much in it.
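To make the style difference concrete, here is roughly what creating a vertex buffer looks like in each (a hedged, abbreviated sketch; real code checks for errors, and `device` is assumed to be an already-created ID3D11Device):

    // OpenGL: C-style API; objects are integer "names".
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeBytes, vertexData, GL_STATIC_DRAW);

    // Direct3D 11: COM interfaces and explicit descriptor structs.
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = sizeBytes;
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    D3D11_SUBRESOURCE_DATA init = { vertexData, 0, 0 };
    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, &init, &buffer);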
D3D11 is designed to work better on multi-core CPUs and multithreaded software. This adds some complexity to using it, but perhaps allows you to take more advantage of the hardware than OpenGL might at this point in time. (However, if you are still at the stage of asking which to use, then it's very unlikely to matter to you!)
It tends to be much easier to find documentation for D3D9 than for OpenGL - I mean for "modern" stuff, not examples using obsolete ways of doing things (which is a problem with OpenGL: a lot of the tutorials and code out there are frankly obsolete and don't really use OpenGL properly now). It's still quite hard to find good D3D11 examples and tutorials, though.
If you've not used either, I would very much recommend learning the basics of BOTH and their slightly different approaches to the same underlying functionality. Don't get caught up in saying one is better than the other; learn both and see which seems a better fit. This is what most people do; unfortunately, most of the "advice" you'll get on the internet seems to be from someone who has decided that pushing one or the other API is important to them!
I found it a useful exercise when learning to abstract out the differences with a small C++ framework that creates textures, vertex buffers, index buffers, render-state collections, etc., and implements those concepts in terms of BOTH APIs.
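Something along these lines, for example (a hypothetical sketch; all names are mine, not from any particular engine):

    #include <cstddef>
    #include <memory>

    // Back-end-neutral handles; each concrete renderer implements them in
    // terms of its own API (GL buffer names vs. ID3D11Buffer pointers).
    class VertexBuffer {
    public:
        virtual ~VertexBuffer() = default;
    };

    class Renderer {
    public:
        virtual ~Renderer() = default;
        virtual std::unique_ptr<VertexBuffer>
            createVertexBuffer(const void* data, std::size_t bytes) = 0;
        virtual void draw(VertexBuffer& vb, int firstVertex, int vertexCount) = 0;
    };

    // class GLRenderer    : public Renderer { /* glGenBuffers / glDrawArrays */ };
    // class D3D11Renderer : public Renderer { /* CreateBuffer / Draw */ };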
OpenGL is a cross-platform API for 3D graphics. DirectX is a restricted-platform API for graphics, audio, music, device input, networking, and more.
First of all, DirectX is a lot more than 3D-accelerated rendering. I assume you are talking about Direct3D.
Anyway, here's my completely biased opinion:
Direct3D runs on Windows, Xbox, and sometimes Wine (depending on the particular application/game). Choosing Direct3D ties your product heavily to Microsoft platforms.
Coding in Direct3D (at least the last time I tried it, which was some years ago) makes rolling around naked in honey, walking up to a hornet's nest, and beating the tar out of it sound like a pleasant afternoon.
OpenGL runs on almost every platform imaginable and supports most of the same stuff that Direct3D does in immediate mode.
The OpenGL API is mostly clean and a joy to code against.

OpenGL 4.x learning resources [closed]

I know there are already some questions about learning OpenGL.
Here is what I know:
- math for 3D
- 3D theory
Here is what I want to know:
- OpenGL 4.0 core profile (or later)
- Shading Language 4.00 (or later)
- every part of the above (if it does not work across vendors, that still does not bother me)
Here is what I DO NOT want to know:
- fixed-function pipeline (will not use it, ever!)
- older OpenGL versions
- compatibility profile
I prefer bigger portions of info, like tutorials, series of articles, or books.
PS: If you know of resources on the OpenGL 3.x core profile, post them too.
I cordially dislike negative answers, but I'm afraid I have to give one to this question.
You are ultimately asking for beginner material that uses features unique to OpenGL version 4.0 and above. Well, let's look at some of the unique features of 4.x.
Perhaps the biggest feature is tessellation. What does that mean? It means tessellating primitives to generate more primitives. Before one can even begin to understand what that means, one must first understand how primitives get rendered at all. That uses pre-4.x level features.
But even with a firm understanding of how rendering works, there is still a problem. In order to have any meaningful discussion of tessellation shaders, one must first have a strong understanding of tessellation algorithms. And that is not a simple subject. For a tutorial to teach a user to use tessellation shaders, the tutorial will first have to introduce spline curves and patches or subdivision surfaces. Both of these are lengthy topics that have numerous white papers devoted to them. Only after detailing the algorithms would a user be ready to see how those algorithms are implemented in the tessellation shader.
Or, to put it another way, tessellation shaders are not beginner material. I wouldn't even qualify them as intermediate-level material.
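To be clear, the shader plumbing itself is small; it is the algorithms feeding it that are hard. A minimal sketch of a tessellation evaluation shader that walks a 3-control-point quadratic Bézier as isolines (assuming GL_PATCH_VERTICES is set to 3 and the tessellation levels come from a control shader or glPatchParameterfv):

    // C++: GLSL 4.00 tessellation evaluation shader source; de Casteljau
    // evaluation of a quadratic Bezier from 3 patch control points.
    static const char* kBezierTes = R"glsl(
        #version 400 core
        layout(isolines, equal_spacing) in;
        void main() {
            float t = gl_TessCoord.x;
            vec4 p0 = gl_in[0].gl_Position;
            vec4 p1 = gl_in[1].gl_Position;
            vec4 p2 = gl_in[2].gl_Position;
            vec4 a  = mix(p0, p1, t);       // first de Casteljau step
            vec4 b  = mix(p1, p2, t);
            gl_Position = mix(a, b, t);     // point on the curve at t
        }
    )glsl";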
Another big feature of 4.x level hardware is shader image load store (and its companion, atomic counters), core in 4.2. It allows for some very nifty things, including order-independent transparency. However, in order for a user to even begin to understand all of the quirks around it, the user needs to be intimately familiar with the deep workings of modern shader hardware. So any tutorial would first have to explain how modern VLIW/SIMD-based hardware works, as well as how shaders are used with such hardware. Again, this is not trivial material.
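For a taste of what's involved: the heart of linked-list order-independent transparency is a fragment-shader allocation using exactly these two features (a rough sketch under GL 4.2; the node storage buffer and the resolve pass are omitted, and the names are mine):

    // C++: GLSL 4.20 fragment shader source building per-pixel fragment lists.
    static const char* kOitFs = R"glsl(
        #version 420 core
        layout(binding = 0) uniform atomic_uint u_nextNode;          // bump allocator
        layout(binding = 0, r32ui) uniform coherent uimage2D u_head; // list heads
        void main() {
            // Allocate a node, then splice it onto this pixel's list head.
            uint node = atomicCounterIncrement(u_nextNode);
            uint prev = imageAtomicExchange(u_head, ivec2(gl_FragCoord.xy), node);
            // ...write color/depth/prev into the node buffer here (omitted)...
        }
    )glsl";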
Another big feature of 4.x hardware is indirect rendering. That is, putting the parameters to a glDraw call in the buffer object itself. The problem is that there is really only one reason to use this functionality: because the GPU generated data directly into one or more buffers for later rendering. And doing that usually involves some form of GPGPU operation, which is very much not a beginner-level topic.
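Concretely, "the parameters in a buffer object" means a small fixed-layout record that a GPU pass can write and a later draw call can consume without any CPU round trip:

    // Record layout glDrawArraysIndirect reads from GL_DRAW_INDIRECT_BUFFER
    // (fixed by the GL 4.0 spec):
    struct DrawArraysIndirectCommand {
        GLuint count;          // vertices per instance
        GLuint instanceCount;  // number of instances
        GLuint first;          // first vertex
        GLuint baseInstance;   // must be zero in 4.0; baseInstance from 4.2
    };

    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf); // GPU-written buffer
    glDrawArraysIndirect(GL_TRIANGLES, nullptr);        // params read at offset 0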
All of these features are useful and have a real purpose. But none of them should be used by beginners; in some cases, not even intermediate-level programmers should touch them.
Now to be completely fair, there are some 4.2 features that are both non-hardware-based (so they are often implemented on 3.3 and lower versions) and quite useful. Separate program objects, for example. These features hit while I was writing my 3.3-based tutorials, and I considered going back over them and incorporating this functionality. The only reason I didn't is because implementations (drivers) are still not entirely stable with regard to this functionality. But it would be useful to do.
The main point I'm getting across is this: if you are at the stage in your graphics knowledge where you are ready to take advantage of the unique features of GL 4.x hardware, then you also have enough graphics experience that you don't need an explicit step-by-step instructional material to implement features of graphics hardware. You would be the kind of person who could read the GL_ARB_tessellation_shader extension specification and understand how to make them do what you need to.
That being said, if you are interested in material that teaches OpenGL core 3.0 or better, the OpenGL Wiki has a nice collection of such links. In the interest of full disclosure, I did write one of them.
The 5th edition of the OpenGL SuperBible has recently been released. This edition covers OpenGL 3.3, which was released at the same time as OpenGL 4.0; the book covers only the core profile and assumes no prior OpenGL knowledge.
That's what I got from the book's description anyway. I have the 4th edition and it's an excellent resource for OpenGL 2.0, so I assume the new edition along with the latest OpenGL Shading Language book would be just what you're looking for.
Durian Software has an ongoing series of tutorials covering modern OpenGL. They are aimed at OpenGL 2.0 but avoid using any deprecated functionality in later versions.
There is really very little difference between OpenGL 3.0 and 4.1. If you stick to the 3.0-or-later core profile, you can't use the fixed-function pipeline. What you're really asking for is an OpenGL tutorial that does not use the fixed-function pipeline.
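To illustrate what "no fixed function" means in practice: even a flat-colored triangle needs its own shader pair and its own matrices. A minimal GLSL 3.30 sketch (the names are illustrative):

    // C++: minimal vertex + fragment shader sources replacing what
    // glMatrixMode/glColor did in the fixed-function days.
    static const char* kVs = R"glsl(
        #version 330 core
        layout(location = 0) in vec3 a_position;
        uniform mat4 u_mvp;  // you build and upload your own matrices now
        void main() { gl_Position = u_mvp * vec4(a_position, 1.0); }
    )glsl";

    static const char* kFs = R"glsl(
        #version 330 core
        out vec4 fragColor;
        void main() { fragColor = vec4(1.0, 0.5, 0.2, 1.0); }
    )glsl";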
Jason L. McKesson's excellent tutorial "Learning Modern 3D Graphics Programming" starts out using shaders for the very earliest basics, and never touches the fixed function pipeline.
http://www.arcsynthesis.org/gltut/index.html
I highly recommend it.
I recently stumbled on this online book:
OpenGLBook.com
I haven't read it yet, but description says that it is a free OpenGL 4.0 programming resource in online book format.
Excellent question, really. As a matter of fact, documentation is sparse.
There is a good introduction here: http://sites.google.com/site/opengltutorialsbyaks/
You may also like groovounet's OpenGL 4 samples pack: http://www.g-truc.net/post-0310.html
But I'm afraid that's pretty much it. Lurk on the OpenGL discussion boards for more info...
EDIT: found this a few seconds ago, straight from SIGGRAPH: http://nvidia.fullviewmedia.com/siggraph2010/02-dev-barthold-lichtenbelt-mark-kilgard.html
There are the man pages for OpenGL 4.1; they prove useful when developing: http://www.opengl.org/sdk/docs/man4/
I think one of the best hopes you have is Joe's Blog. It has a few good introductory articles on modern OpenGL, with more (supposedly) on the way.
Swiftless is updating its tutorials to cover the new contexts. A good starting tutorial, which is fairly straightforward regarding a quick and simple VAO, VBO, and render, is located at swiftless.com
I have not read it, but there is the OpenGL 4.0 Shading Language Cookbook.
The major point of OpenGL 4 is tessellation, so I would recommend starting with a standalone tutorial on OpenGL 4 tessellation, e.g.: http://prideout.net/blog/?p=48
After manuals and tutorials, a good follow-up is to take a look at the open-source engines out there that are based on top of the "new" OpenGL 3/4. As one of its developers, I would point to the Linderdaum Engine.

What 3D graphics framework should I use for a real world game engine? [closed]

I'm a C++ programmer with very extensive server programming experience. I'm however fairly bored at the moment and I decided to tackle a new area: 3D game programming, for learning purposes. Additionally I think this learning project may turn out to be good resume material in the future should I decide to work in this area.
Instead of creating a 3D engine from scratch I decided to emulate as exactly as I'm able an existing one: World of Warcraft's. If you are curious as to why (feel free to skip this):
It's a real world successful game
All the map textures, models and what not are already done (I'm not interested in learning how to actually draw a texture with photoshop or whatever)
Their file formats have been more or less completely reverse engineered
There is already an identical open source project (wowmapview) that I can look at if I'm in trouble.
OK, that was a long preface. Now, my main question is the following: should I use DirectX, OpenGL, a wrapper library such as SDL, or what?
What's the most used one in the real world?
And something else perplexes me: World of Warcraft appears to be using both! Normally it uses DirectX, but you can use OpenGL by starting it with the "-opengl" switch via the command line.
Is this something common in games? Why did they do it? I imagine it's a lot of work, and from my understanding nobody uses OpenGL anyway (very, very few people know about the secret switch at all).
If it's something usually done, do programmers usually create their own 3D engine "wrapper", something like an SDL made in-house, which based on switches / #defines / whatnot decides which API function to ultimately call (DirectX or OpenGL)? Or is this functionality already built into SDL (can you switch between DirectX and OpenGL at will)?
And, finally, do you have any books to suggest?
Thanks!
I realize you already accepted an answer, but I think this deserves some more comments. Sorry to quote you out of order, I'm answering by what I think is important.
Instead of creating a 3D engine from scratch I decided to emulate as exactly as I'm able an existing one: World of Warcraft's.
However I wanted to focus on the actual 3d and rendering engine, not the interface, so I don't think I will be using it [lua] for this project.
From these two snippets, I can tell you that you are not trying to emulate the game engine, just the 3D rendering back-end. They are not the same thing, and the rendering back-end is a very small part compared to the full game engine.
This, by the way, can help answer one of your questions:
World of Warcraft appears to be using both! In fact, normally it uses DirectX, but you can use opengl by starting it with the "-opengl" switch via command line.
Yep, they implemented both. The amount of work to do that is non-negligible, but the rendering back-end, in my experience, is at most 10% of the total code, usually less. So it's not that outrageous to implement multiple ones.
More to the point, the programming part of a game engine today is not the biggest chunk. It's the asset production that is the bulk (and that includes most game programming; most Lua scripts, for example, are considered to be on that side of things).
For WoW, OS X support meant OpenGL, so they did it. They wanted to support older hardware too, so they support DX8-level hardware. That's already 3 back-ends. I'm not privy to how many they actually implement, but it all boils down to what customer base they want to reach.
Multiple back-ends in a game engine is something that is more or less required to scale to all graphics cards/OSs/platforms. I have not seen a single real game engine that did not support multiple backends (even first party titles tend to support an alternate back-end for debugging purposes).
OK, that was a long preface. Now, my main question is the following: should I use DirectX, OpenGL, a wrapper library such as SDL, or what?
Well, this depends strongly on what you want to get out of it. I might add that your option list is not quite complete:
DirectX9
DirectX10
DirectX11
OpenGL < 3.1 (before the deprecated API was removed)
OpenGL >= 3.1
OpenGL ES 1.1 (only if you need to. It's not programmable)
OpenGL ES 2.0
Yep, those APIs are different enough that you need to decide which ones you want to handle.
If you want to learn the very basics of 3D rendering, any of those can work. OpenGL < 3.1 tends to hide a lot of things that, in the other ones, ultimately have to happen in user code (e.g. matrix manipulation; see this plug).
The DX SDKs do come with a lot of samples that help in understanding the basic concepts, but they also tend to use the latest and greatest features of DX when they're not necessarily required for starting out (e.g. using the geometry shader to render sprites...).
On the other hand, most GL tutorials tend to use features that are essentially non-performant on modern hardware (e.g. glBegin/glEnd, selection/picking, ... see the list of things that got removed from GL 3.1 or this other plug) and tend to seed the wrong concepts for a lot of things.
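For a concrete picture of the gap between those obsolete tutorials and the modern way (a skeletal comparison; buffer creation and shader setup are omitted):

    // Immediate mode, removed from core profiles in GL 3.1: one driver call
    // per vertex.
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();

    // Retained mode: vertices live in a buffer object on the GPU.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);  // filled once with glBufferData
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_TRIANGLES, 0, 3);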
What's the most used one in the real world?
For games, DirectX9 is the standard today in the PC world, by a far margin.
However, I'm expecting DirectX11 to grab more market share as it allows for some more multithreaded work. It's unfortunately significantly more complicated than DX9.
nobody uses OpenGL anyway (very very few people know about the secret switch at all).
Ask the Mac owners what they think.
As a side question: do you really think hardware vendors would spend any energy on OpenGL drivers if that were really the case (I realize I'm generalizing your comment, sorry)? There are real-world usages of it, just not much in games. And Apple makes OpenGL more relevant through the iPhone (well, OpenGL ES, really).
If it's something usually done, do programmers usually create their own 3d engine "wrapper",
It's usually a full part of the engine design. Mind you, it's not abstracting the API at the same level; it's usually more of a "draw this, with all its bells and whistles, over there". Which rendering algorithm to apply to that draw tends to be back-end specific.
This, however, is very game-engine dependent. If you want to understand it better, you could look at UE3; it was just released free (as in beer) for non-commercial use (I have not looked at it yet, so I don't know if they exposed the back-ends, but it's worth a look).
To get back to my comment that a game engine does not just mean 3D, look at this.
I think the primary benefit of using OpenGL over DirectX is the portability. DirectX only runs on windows. However, this is often not a problem (many games only run on Windows).
DirectX also provides other libraries which are useful for games which are unrelated to graphics such as sound and input. I believe there are equivalents which are often used with OpenGL but I don't think they're actually part of OpenGL itself.
If you're going to be locking into Windows with DirectX and you are willing to/interested in learning C# and managed code, I have found XNA to be an extremely easy platform to learn. It allows you to learn most of the concepts without dealing with a lot of the really tricky details of DirectX (or OpenGL). You can still use shader code and have the full power of DirectX, but in a much friendlier environment. It would be my recommendation, but again, you'd have to switch to C# (mind you, you can also put that on your resume).
You might want to look at some projects that encapsulate the low-level 3D API in a higher-level, API-independent interface, such as Ogre3D. As you are doing this to learn, I assume you will probably be more interested in implementing the low-level details yourself, but you could still learn a lot from such a project.
If you are really only interested in the rendering part, I can suggest Ogre3D. It is clean, C++, tested, and cross-platform. I used it in two projects, and compared to other experiences (Torque3D, for example), I liked the good support (tutorials/wiki/forum) and the not-so-steep learning curve. I think you can also learn a lot by looking at the source code and the concepts they used in its design. There is an accompanying book as well, which is a bit outdated, but it is good for a start.
The problem with this is that you will be thinking inside this engine, and soon you will need gameplay-like elements (timers, events) for simulating and testing your effects or whatever you want to do. So you will end up working around Ogre3D's shortcomings (it is not a game engine) and implementing them on your own, or using other middleware.
So if you really want to touch 3D rendering first, I would pick up some computer graphics books (GPU Gems, ShaderX), look at some tutorials and demos on the web, and start building my own basic framework. That way you get the experience, and I think you will learn the most from this approach. At least I did...
I'm doing some OpenGL work right now (on both Windows and Mac).
Compared to my midnight game programming using the Unity3D engine, using OpenGL is a bit like having to chop down your own trees to make a house versus buying the materials.
Unity3D runs on everything (Mac, PC, iPhone, web player, etc.) and lets you concentrate on what makes a game a game. To top it off, it's faster than anything I could write. You code for it in C#, Java (EDIT: make that JavaScript), or Boo. (EDIT: Boo support has been dropped.)
I just used Unity to mock up a demo for a client who wants something made in OpenGL, so it has its real-world uses also.
-Chris
PS: The Unity Indie version recently became free.