Recently I have been spending a lot of my time researching GPUs, and I have come across several articles about how PC games have a hard time staying ahead of the curve compared to console games due to limitations of the APIs. For example, on the Xbox 360, it is my understanding that games run in kernel mode, and that because the hardware is always the same, games can be programmed "closer to the metal" and the DirectX API has less abstraction. On PC, however, making the same number of draw calls with DirectX or OpenGL may take more than twice as long as on a console, due to the switch to kernel mode and additional layers of abstraction. I am interested in hearing possible solutions to this problem.
I have heard of a few solutions, such as programming directly against the hardware, but while (from what I understand) ATI has released the specifications of their low-level API, NVIDIA keeps theirs secret, so that wouldn't work too well, not to mention the added development time of maintaining different profiles.
Would programming an entire "software rendering" solution in OpenCL and running that on a GPU be any better? My understanding is that games with a lot of draw calls are CPU bound and that the calls are single-threaded (on PC, that is), so is OpenCL a viable option?
So the question is:
What are possible methods to increase the efficiency of, or even remove the need for, graphics APIs such as OpenGL and DirectX?
The general solution is to not make as many draw calls. Texture atlases via array textures, instancing, and various other techniques make this possible.
Or to just use the fact that modern computers have a lot more CPU performance than consoles. Or even better, make yourself GPU bound. After all, if your CPU is your bottleneck, then that means you have GPU power to spare. Use it.
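For the instancing route mentioned above, here is a minimal sketch of what a single instanced draw call looks like in modern OpenGL. It assumes a current 3.3+ context, a bound VAO with the mesh's per-vertex attributes already configured, a shader that reads the per-instance matrix from attribute locations 3-6, and GLEW (or any loader) just for the declarations; `instanceBuffer`, `transforms`, `indexCount`, and `instanceCount` are placeholders:

```cpp
#include <GL/glew.h>
#include <vector>

// One draw call submits every copy of the mesh; the per-instance data lives
// in its own buffer and is advanced once per instance via the attrib divisor.
void drawInstanced(GLuint instanceBuffer, const std::vector<float>& transforms,
                   GLsizei indexCount, GLsizei instanceCount)
{
    // Upload one 4x4 matrix (16 floats) per instance.
    glBindBuffer(GL_ARRAY_BUFFER, instanceBuffer);
    glBufferData(GL_ARRAY_BUFFER, transforms.size() * sizeof(float),
                 transforms.data(), GL_DYNAMIC_DRAW);

    // A mat4 attribute occupies four consecutive vec4 locations (3..6 here).
    for (int col = 0; col < 4; ++col) {
        glEnableVertexAttribArray(3 + col);
        glVertexAttribPointer(3 + col, 4, GL_FLOAT, GL_FALSE,
                              16 * sizeof(float),
                              reinterpret_cast<void*>(col * 4 * sizeof(float)));
        glVertexAttribDivisor(3 + col, 1);   // advance once per instance
    }

    // Instead of instanceCount separate draw calls: one call for all copies.
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, instanceCount);
}
```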
OpenCL is not a "solution" to anything related to this. OpenCL has no access to any of the many things one would need in order to actually use a GPU to do rendering. To use OpenCL for graphics, you would have to give up the GPU's rasterizer/clipper, its specialized buffers for transferring information from stage to stage, the post-T&L cache, and the blending/depth comparison/stencil/etc. hardware. All of that is fixed function, extremely fast and specialized. And completely unavailable to OpenCL.
And even then, it doesn't actually make it not CPU bound anymore. You still have to marshal what you're rendering and so forth. And you probably won't have access to the graphics FIFO, so you'll have to find another way to feed your shaders.
Or, to put it another way, this is a "problem" that doesn't need solving.
If you try to write a renderer in OpenCL, you will end up with something resembling OpenGL and DirectX. You will also most likely end up with something much slower than these APIs which were developed by many experts over many years. They are specialized to handle efficient rasterizing and use internal hooks not available to OpenCL. It could be a fun project, but definitely not a useful one.
Nicol Bolas already gave you some good techniques to increase the load on the GPU relative to the CPU. The final answer is of course that the best technique will depend on your specific domain and constraints. For example, if your rendering needs call for lots of pixel overdraw with complicated shaders and lots of textures, the CPU will not be the bottleneck. However, the most important general rule with modern hardware is to limit the number of OpenGL calls through better batching.
For example, on Xbox 360, it is my understanding that the games run in kernel mode, and that because the hardware will always be the same, the games can be programmed "closer to the metal" and the DirectX API has less abstraction. On PC however, making the same number of draw calls with DirectX or OpenGL may take more than twice as long as on a console, due to switching to kernel mode and more layers of abstraction.
The benefits of close-to-metal operation on consoles are largely offset on PCs by their much greater CPU performance and available memory. Add to this that console HDDs are not nearly as fast as modern PC ones (SATA-1 vs SATA-3, or even just PATA), and that many games load their content from an optical drive, which is even slower.
The PS3, for example, offers only 256 MiB of memory for game logic and another 256 MiB of RAM for graphics, and that is all you get to work with. The Xbox 360 offers 512 MiB of unified RAM, so you have to squeeze everything into that. Now compare this with a low-end PC, which easily comes with 2 GiB of RAM for the program alone. And even the cheapest graphics cards offer at least 512 MiB of RAM. A gamer's machine will have several GiB of RAM, and the GPU will offer somewhere between 1 GiB and 2 GiB.
This severely limits the possibilities for a game developer, and many PC gamers lament that so many games are "consoleish", even though their PCs could do so much more.
I'm looking into whether it's better for me to stay with OpenGL or consider a Vulkan migration for intensive bottlenecked rendering.
However I don't want to make the jump without being informed about it. I was looking up what benefits Vulkan offers me, but with a lot of googling I wasn't able to come across exactly what gives performance boosts. People will throw around terms like "OpenGL is slow, Vulkan is way faster!" or "Low power consumption!" and say nothing more on the subject.
Because of this, it makes it difficult for me to evaluate whether or not the problems I face are something Vulkan can help me with, or if my problems are due to volume and computation (and Vulkan would in such a case not help me much).
I'm assuming Vulkan does not magically make things in the pipeline faster (as in shading in triangles is going to be approximately the same between OpenGL and Vulkan for the same buffers and uniforms and shader). I'm assuming all the things with OpenGL that cause grief (ex: framebuffer and shader program changes) are going to be equally as painful in either API.
There are a few things off the top of my head that I think Vulkan offers, based on reading through countless things online (and I'm guessing this is certainly not all of the advantages, nor am I sure whether these are even true):
Texture rendering without [much? any?] binding (or rather, a better version of 'bindless textures'). I noticed a significant performance boost when I switched to bindless textures, but this might not even be worth mentioning if bindless textures already effectively do this, so I'm not sure whether Vulkan adds anything here
Reduced CPU/GPU communication by composing some kind of command list that you can execute on the GPU without needing to send much data
Being able to interface with the GPU from multiple threads in a way that OpenGL somehow can't
However I don't know exactly what cases people run into in the real world that demand these, and how OpenGL limits these. All the examples so far online say "you can run faster!" but I haven't seen how people have been using it to run faster.
Where can I find information that answers this question? Or do you know some tangible examples that would answer this for me? Maybe a better question would be where are the typical pain points that people have with OpenGL (or D3D) that caused Vulkan to become a thing in the first place?
An example of an answer that would not be satisfying would be a response like
You can multithread and submit things to Vulkan quicker.
but a response that would be more satisfying would be something like
In Vulkan you can multithread your submissions to the GPU. In OpenGL you can't do this because you rely on the implementation to do the appropriate locking and placing fences on your behalf which may end up creating a bottleneck. A quick example of this would be [short example here of a case where OpenGL doesn't cut it for situation X] and in Vulkan it is solved by [action Y].
The last paragraph above may not be accurate whatsoever, but I was trying to give an example of what I'd be looking for without trying to write something egregiously wrong.
Vulkan really has four main advantages in terms of run-time behavior:
Lower CPU load
Predictable CPU load
Better memory interfaces
Predictable memory load
Specifically, lower GPU load isn't one of the advantages; the same content using the same GPU features will have very similar GPU performance with both of the APIs.
In my opinion it also has many advantages in terms of developer usability - the programmer's model is a lot cleaner than OpenGL, but there is a steeper learning curve to get to the "something working correctly" stage.
Let's look at each of the advantages in more detail:
Lower CPU load
The lower CPU load in Vulkan comes from multiple areas, but the main ones are:
The API encourages up-front construction of descriptors, so you're not rebuilding state on a draw-by-draw basis.
The API is asynchronous and can therefore move some responsibilities, such as tracking resource dependencies, to the application. A naive application implementation here will be just as slow as OpenGL, but the application has more scope to apply high level algorithmic optimizations because it can know how resources are used and how they relate to the scene structure.
The API moves error checking out to layer drivers, so the release drivers are as lean as possible.
The API encourages multithreading, which is always a great win (especially on mobile where e.g. four threads running slowly will consume a lot less energy than one thread running fast).
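To make the multithreading point concrete, here is a rough sketch (not a complete program: `device`, `queueFamilyIndex`, and the actual draw recording are assumed to exist elsewhere, and pool cleanup is omitted) of the pattern Vulkan encourages: one command pool per worker thread, secondary command buffers recorded in parallel, then executed from the primary command buffer on the main thread.

```cpp
#include <vulkan/vulkan.h>
#include <thread>
#include <vector>

// Each worker owns its own VkCommandPool, so recording needs no locking.
void recordInParallel(VkDevice device, uint32_t queueFamilyIndex,
                      uint32_t threadCount,
                      std::vector<VkCommandBuffer>& secondaries)
{
    secondaries.resize(threadCount);
    std::vector<std::thread> workers;

    for (uint32_t t = 0; t < threadCount; ++t) {
        workers.emplace_back([&, t] {
            VkCommandPoolCreateInfo poolInfo{};
            poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
            poolInfo.queueFamilyIndex = queueFamilyIndex;
            VkCommandPool pool = VK_NULL_HANDLE;
            vkCreateCommandPool(device, &poolInfo, nullptr, &pool);

            VkCommandBufferAllocateInfo allocInfo{};
            allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
            allocInfo.commandPool = pool;
            allocInfo.level = VK_COMMAND_BUFFER_LEVEL_SECONDARY;
            allocInfo.commandBufferCount = 1;
            vkAllocateCommandBuffers(device, &allocInfo, &secondaries[t]);

            // ... vkBeginCommandBuffer / record this thread's slice of the
            // scene / vkEndCommandBuffer goes here ...
        });
    }
    for (auto& w : workers) w.join();

    // The main thread then calls vkCmdExecuteCommands(primary, threadCount,
    // secondaries.data()) inside its render pass.
}
```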
Predictable CPU load
OpenGL drivers do various kinds of "magic", either for performance (specializing shaders based on state only known late at draw time), or to maintain the synchronous rendering illusion (creating resource ghosts on the fly to avoid stalling the pipeline when the application modifies a resource which is still referenced by a pending command).
The Vulkan design philosophy is "no magic". You get what you ask for, when you ask for it. Hopefully this means no random slowdowns because the driver is doing something you didn't expect in the background. The downside is that the application takes on the responsibility for doing the right thing ;)
Better memory interfaces
Many parts of the OpenGL design are based on distinct CPU and GPU memory pools, which require a programming model that gives the driver enough information to keep them in sync. Most modern hardware can do better with hardware-backed coherency protocols, so Vulkan enables a model where you can just map a buffer once, modify it ad hoc, and be guaranteed that the "other process" will see the changes. No more "map" / "unmap" / "invalidate" overhead (provided the platform supports coherent buffers, of course; it's still not universal).
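A minimal sketch of the "map once" model (assuming `device` already exists and `memoryTypeIndex` selects a HOST_VISIBLE | HOST_COHERENT memory type; `frameData` and `frameDataSize` are placeholders):

```cpp
#include <vulkan/vulkan.h>
#include <cstring>

void* mapOnce(VkDevice device, uint32_t memoryTypeIndex, VkDeviceMemory& memory)
{
    VkMemoryAllocateInfo allocInfo{};
    allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    allocInfo.allocationSize = 64 * 1024;          // 64 KiB staging area
    allocInfo.memoryTypeIndex = memoryTypeIndex;   // HOST_VISIBLE | HOST_COHERENT
    vkAllocateMemory(device, &allocInfo, nullptr, &memory);

    // Map once at startup and keep the pointer for the allocation's lifetime.
    void* mapped = nullptr;
    vkMapMemory(device, memory, 0, VK_WHOLE_SIZE, 0, &mapped);
    return mapped;
}

// Per frame: write straight into the mapping. With a HOST_COHERENT type there
// is no map/unmap/invalidate dance and no vkFlushMappedMemoryRanges needed.
void updateFrame(void* mapped, const void* frameData, size_t frameDataSize)
{
    std::memcpy(mapped, frameData, frameDataSize);
}
```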
Secondly, Vulkan separates the concept of the memory allocation from how that memory is used (the memory view). This allows the same memory to be recycled for different things in the frame pipeline, reducing the amount of intermediate storage you need allocated.
Predictable memory load
Related to the "no magic" comment for CPU performance, Vulkan won't generate random resources (e.g. ghosted textures) on the fly to hide application problems. No more random fluctuations in resource memory footprint, but again the application has to take on the responsibility to do the right thing.
This is at risk of being opinion based. I suppose I will just reiterate the Vulkan advantages that are written on the box, and hopefully uncontested.
You can disable validation in Vulkan. It obviously uses less CPU (or battery/power/noise) that way. In some cases this can be significant.
OpenGL has poorly defined multi-threading; Vulkan has well-defined multi-threading in the specification. That means you do not immediately lose your mind trying to code with multiple threads, and you get better performance where a single thread would otherwise be a CPU bottleneck.
Vulkan is more explicit; it does not (or tries to not) expose big magic black boxes. That means e.g. you can do something about micro-stutter and hitching, and other micro-optimizations.
Vulkan has a cleaner interface to windowing systems. No more odd contexts and default framebuffers. Vulkan does not even require a window to draw (or it can achieve this without weird hacks).
Vulkan is a cleaner and more conventional API. For me that means it is easier to learn (despite the other things) and more satisfying to use.
Vulkan takes shaders as binary intermediate code (SPIR-V), whereas OpenGL historically did not. That should mean faster compilation of such code.
Vulkan treats mobile GPUs as first-class citizens. No more separate ES.
Vulkan has open source, conventional (GitHub) public trackers, meaning you can improve the ecosystem without jumping through hoops. E.g. you can improve/implement a validation check for an error that often trips you up, or improve the specification so it makes sense to people who are not insiders.
I am creating a GUI program that will run 24/7. I couldn't find much online on the subject, but is OpenGL stable enough to run 24/7 for weeks on end without leaks, crashes, etc?
Should I have any concerns or anything to look into before delving too deep into using OpenGL?
I know that OpenGL and DirectX are primarily used for games or other programs that aren't run for very long stretches at a time. Hopefully someone here has some experience with this or knowledge on the subject. Thanks.
EDIT: Sorry for the lack of detail. This will only be doing 2D rendering, and nothing too heavy; what I have now (which will be similar to production) already runs at a stable 900-1000 FPS on my i5 laptop with a Radeon 6850M.
Going into OpenGL just for making a GUI sounds insane. You should be worried more about what language you use if you are concerned about stuff like memory leaks. Remember that in C/C++ you manage memory on your own.
Furthermore, do you really need the GUI to be running 24/7? If you are making a service sort of application, you might as well leave it in the background and make a second application which provides the GUI. These two applications would communicate via some IPC (sockets?). That's how this sort of thing usually works, not having a window open all the time.
In the end, memory leaks are not caused by some graphics library, but by the programmer writing bad code. The library should be the last item on your list of possible reasons for memory leaks/crashes.
I work for a company that makes (Windows-based) quality assurance software (machine vision) using Delphi.
The main operator screen shows the camera images at up to 20 fps (2 x 10 fps) with an OpenGL overlay, and has essentially unbounded uptime (the longest uptimes are close to a year; longer is hard due to power-downs for maintenance). Higher-speed cameras have their display rates throttled.
I would avoid integrated video from Intel for a while longer, though. Since the i5 it meets our minimal requirements (non-power-of-2 textures, mostly), but the initial drivers were bad, and while they have improved, there are still occasional stability and consistency problems.
I verified this with two applications using OpenGL via a QGLWidget. If screen updates are very frequent, say 30 fps, and/or the resolution is high, CPU usage of one of the cores skyrockets. I'm looking for a solution to fix this, and/or a way to verify whether it happens on Windows as well.
In my experience QGLWidget itself is a very efficient thin wrapper around GL and your windowing system; if you have high CPU usage using it, well chances are that you'd have high CPU usage using any other way of implementing an OpenGL app too.
If you have high CPU usage using OpenGL, chances are either:
You're falling back on a software OpenGL implementation (i.e. Mesa); e.g. Debian will do this if you don't install any graphics device drivers.
You're using old-school immediate-mode OpenGL: glBegin, ...vertices..., glEnd. Get into VBOs instead (a sketch follows below).
The fact you mention display resolution as a factor rather suggests the former problem.
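On the second point, the fix is to upload the geometry once and draw from the buffer each frame. A minimal sketch (assuming a current GL context, a bound VAO in a core profile, and a shader program whose attribute 0 is a vec2 position; GLEW is used here only for the declarations):

```cpp
#include <GL/glew.h>

GLuint createTriangleVBO()
{
    // Geometry is sent to the GPU once, not re-specified every frame
    // through glBegin/glVertex/glEnd.
    static const float tri[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
    return vbo;
}

void drawTriangle()
{
    // Per frame: a single draw call, no per-vertex CPU work.
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```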
You need to get a profiler, profile your code, and see where the bottleneck is. Since your program eats CPU resources (and not GPU), this should be fairly easy.
As far as I know, "AQTime 7 Standard" (Windows) is currently available for free. Or you could use gprof, depending on your toolkit/platform.
One very possible scenario (aside from a software OpenGL fallback) is that you use dynamic memory allocation too frequently or are running a debug build. Immediate mode could also be a problem if you have 100,000+ polygons per frame.
I have seen a few GL implementations that were terrible at minimising host CPU usage. There appear to be plenty of situations where the CPU will busy-wait while the GPU draws. Often simply turning on vertical sync in the card settings will cause your app to draw less often and still take up just as much CPU.
Unfortunately there is little you can do about this yourself, save for limiting how often your app draws.
What features make OpenCL unique to choose over OpenGL with GLSL for calculations? Despite the graphics-related terminology and impractical datatypes, is there any real caveat to OpenGL?
For example, parallel function evaluation can be done by rendering to a texture using other textures. Reduction operations can be done by iteratively rendering to smaller and smaller textures. On the other hand, random write access is not possible in any efficient manner (the only way to do it is by rendering triangles using texture-driven vertex data). Is this possible with OpenCL? What else is possible with OpenCL that is not possible with OpenGL?
OpenCL is created specifically for computing. When you do scientific computing using OpenGL you always have to think about how to map your computing problem to the graphics context (i.e. talk in terms of textures and geometric primitives like triangles etc.) in order to get your computation going.
In OpenCL you just formulate your computation as a kernel over a memory buffer and you are good to go. This is actually a BIG win (saying that from the perspective of having thought through and implemented both variants).
The memory access patterns are the same, though (your calculation is still happening on a GPU, but GPUs are getting more and more flexible these days).
But what more could you want than using dozens of parallel "CPUs" without racking your brain over how to translate, e.g. (silly example), a Fourier transform into triangles and quads...?
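To make the contrast concrete, here is a minimal sketch of "a kernel on a memory buffer" using the OpenCL C API (assuming an OpenCL runtime and headers are installed; error handling is omitted for brevity):

```cpp
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <vector>
#include <cstdio>

// The whole "computation" is just this kernel over a buffer: no textures,
// no triangles, no framebuffers.
static const char* kSrc = R"(
__kernel void scale(__global float* data, float factor) {
    data[get_global_id(0)] *= factor;
})";

int main() {
    std::vector<float> host(1024, 1.0f);

    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_int err;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                host.size() * sizeof(float), host.data(), &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "scale", &err);

    float factor = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);

    size_t global = host.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr,
                           0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, host.size() * sizeof(float),
                        host.data(), 0, nullptr, nullptr);

    std::printf("first element: %f\n", host[0]);   // expect 2.0
    return 0;
}
```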
Something that hasn't been mentioned in any answers so far has been speed of execution. If your algorithm can be expressed in OpenGL graphics (e.g. no scattered writes, no local memory, no workgroups, etc.) it will very often run faster than an OpenCL counterpart. My specific experience of this has been doing image filter (gather) kernels across AMD, nVidia, IMG and Qualcomm GPUs. The OpenGL implementations invariably run faster even after hardcore OpenCL kernel optimization. (aside: I suspect this is due to years of hardware and drivers being specifically tuned to graphics orientated workloads.)
My advice would be that if your compute program feels like it maps nicely to the graphics domain then use OpenGL. If not, OpenCL is more general and simpler to express compute problems.
Another point to mention (or to ask) is whether you are writing as a hobbyist (i.e. for yourself) or commercially (i.e. for distribution to others). While OpenGL is supported pretty much everywhere, OpenCL is totally lacking support on mobile devices and, imho, is highly unlikely to appear on Android or iOS in the next few years. If wide cross platform compatibility from a single code base is a goal then OpenGL may be forced upon you.
What features make OpenCL unique to choose over OpenGL with GLSL for calculations? Despite the graphics-related terminology and impractical datatypes, is there any real caveat to OpenGL?
Yes: it's a graphics API. Therefore, everything you do in it has to be formulated along those terms. You have to package your data as some form of "rendering". You have to figure out how to deal with your data in terms of attributes, uniform buffers, and textures.
With OpenGL 4.3 and OpenGL ES 3.1 compute shaders, things become a bit more muddled. A compute shader is able to access memory via SSBOs/Image Load/Store in similar ways to OpenCL compute operations (though OpenCL offers actual pointers, while GLSL does not). Their interop with OpenGL is also much faster than OpenCL/GL interop.
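For reference, a GL 4.3 compute shader reading and writing an SSBO looks roughly like the sketch below (assuming a 4.3 core context is current, `ssbo` already holds `count` floats, and `count` is a multiple of 64; GLEW is used only for the declarations):

```cpp
#include <GL/glew.h>

void doubleBufferOnGPU(GLuint ssbo, GLuint count)
{
    // Compute shader: each invocation touches one element of the SSBO.
    static const char* src = R"(#version 430
        layout(local_size_x = 64) in;
        layout(std430, binding = 0) buffer Data { float v[]; };
        void main() { v[gl_GlobalInvocationID.x] *= 2.0; }
    )";

    GLuint cs = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(cs, 1, &src, nullptr);
    glCompileShader(cs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, cs);
    glLinkProgram(prog);

    glUseProgram(prog);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
    glDispatchCompute(count / 64, 1, 1);

    // Make the writes visible to whoever reads the buffer next.
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
}
```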
Even so, compute shaders do not change one fact: OpenCL compute operations operate at a very different precision than OpenGL's compute shaders. GLSL's floating-point precision requirements are not very strict, and OpenGL ES's are even less strict. So if floating-point accuracy is important to your calculations, OpenGL will not be the most effective way of computing what you need to compute.
Also, OpenGL compute shaders require 4.x-capable hardware, while OpenCL can run on much less capable hardware.
Furthermore, if you're doing compute by co-opting the rendering pipeline, OpenGL drivers will still assume that you're doing rendering. So it's going to make optimization decisions based on that assumption. It will optimize the assignment of shader resources assuming you're drawing a picture.
For example, if you're rendering to a floating-point framebuffer, the driver might just decide to give you an R11_G11_B10 framebuffer, because it detects that you aren't doing anything with the alpha and your algorithm could tolerate the lower precision. If you use image load/store instead of a framebuffer however, you're much less likely to get this effect.
OpenCL is not a graphics API; it's a computation API.
Also, OpenCL just gives you access to more stuff. It gives you access to memory levels that are implicit with regard to GL. Certain memory can be shared between threads, but separate shader instances in GL are unable to directly affect one-another (outside of Image Load/Store, but OpenCL runs on hardware that doesn't have access to that).
OpenGL hides what the hardware is doing behind an abstraction. OpenCL exposes you to almost exactly what's going on.
You can use OpenGL to do arbitrary computations. But you don't want to; not while there's a perfectly viable alternative. Compute in OpenGL lives to service the graphics pipeline.
The only reason to pick OpenGL for any kind of non-rendering compute operation is to support hardware that can't run OpenCL. At the present time, this includes a lot of mobile hardware.
One notable feature would be scattered writes, another would be the absence of "Windows 7 smartness". Windows 7 will, as you probably know, kill the display driver if OpenGL does not flush for 2 seconds or so (don't nail me down on the exact time, but I think it's 2 secs). This may be annoying if you have a lengthy operation.
Also, OpenCL obviously works with a much greater variety of hardware than just the graphics card, and it does not have a rigid graphics-oriented pipeline with "artificial constraints". It is easier (trivial) to run several concurrent command streams too.
Although currently OpenGL would be the better choice for graphics, this is not permanent.
It could be practical for OpenGL to eventually merge as an extension of OpenCL. The two platforms are about 80% the same, but have different syntax quirks, different nomenclature for roughly the same components of the hardware. That means two languages to learn, two APIs to figure out. Graphics driver developers would prefer a merge because they no longer would have to develop for two separate platforms. That leaves more time and resources for driver debugging. ;)
Another thing to consider is that the origins of OpenGL and OpenCL are different: OpenGL began and gained momentum during the early fixed-pipeline-over-a-network days and was slowly appended and deprecated as the technology evolved. OpenCL, in some ways, is an evolution of OpenGL in the sense that OpenGL started being used for numerical processing as the (unplanned) flexibility of GPUs allowed so. "Graphics vs. Computing" is really more of a semantic argument. In both cases you're always trying to map your math operations to hardware with the highest performance possible. There are parts of GPU hardware which vanilla CL won't use but that won't keep a separate extension from doing so.
So how could OpenGL work under CL? Speculatively, triangle rasterizers could be enqueued as a special CL task. Special GLSL functions could be implemented in vanilla OpenCL, then overridden to hardware-accelerated instructions by the driver during kernel compilation. Writing a shader in OpenCL, assuming the library extensions were supplied, doesn't sound like a painful experience at all.
To say that one has more features than the other doesn't make much sense, as they're both gaining roughly the same 80% of features, just under different nomenclature. To claim that OpenCL is not good for graphics because it is designed for computing doesn't make sense, because graphics processing is computing.
Another major reason is that OpenGL/GLSL are supported only on graphics cards. Although many-core computing started with graphics hardware, there are many hardware vendors working on many-core hardware platforms targeted at computation. For example, see Intel's Knights Corner.
Developing code for computation using OpenGL/GLSL will prevent you from using any hardware that is not a graphics card.
Well, as of OpenGL 4.5, these are the features OpenCL 2.0 has that OpenGL 4.5 doesn't (as far as I can tell); this does not cover the features that OpenGL has that OpenCL doesn't:
Events
Better Atomics
Blocks
Workgroup functions (a small kernel sketch follows after this list):
work_group_all and work_group_any
work_group_broadcast
work_group_reduce
work_group_inclusive/exclusive_scan
Enqueue Kernel from Kernel
Pointers (though if you are executing on the GPU this probably doesn't matter)
A few math functions that OpenGL doesn't have (though you could construct them yourself in OpenGL)
Shared Virtual Memory
(More) Compiler Options for Kernels
Easy to select a particular GPU (or otherwise)
Can run on the CPU when no GPU is available
More support for niche hardware platforms (e.g. FPGAs)
On some (all?) platforms you do not need a window (and its context binding) to do calculations.
OpenCL allows just a bit more control over precision of calculations (including some through those compiler options).
A lot of the above are mostly for better CPU - GPU interaction: Events, Shared Virtual Memory, Pointers (although these could potentially benefit other stuff too).
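As a small sketch of the work-group functions mentioned in the list above (the kernel source would need to be built with "-cl-std=CL2.0"; the names used here are only illustrative):

```cpp
// OpenCL 2.0 kernel: one partial sum per work-group, with no hand-rolled
// local-memory reduction loop needed.
static const char* kPartialSums = R"(
__kernel void partial_sums(__global const float* in, __global float* out)
{
    float sum = work_group_reduce_add(in[get_global_id(0)]);
    if (get_local_id(0) == 0)
        out[get_group_id(0)] = sum;   // one result per work-group
})";
```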
OpenGL has gained the ability to sort things into different areas of Client and Server memory since a lot of the other posts here have been made.
OpenGL has better memory barrier and atomics support now, and allows you to allocate things to different registers within the GPU (to about the same degree OpenCL can). For example, you can now share registers within the local compute group in OpenGL, using something like the AMD GPU's LDS (local data share), though this particular feature only works with OpenGL compute shaders at this time.
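A sketch of what that local sharing looks like in a GL compute shader (illustrative names; this is the GLSL analogue of OpenCL local memory):

```cpp
// Variables declared `shared` live in on-chip memory (AMD's LDS) visible to
// the whole work group; barrier() synchronizes the group around it.
static const char* kTileSum = R"(#version 430
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer In  { float data[]; };
layout(std430, binding = 1) buffer Out { float sums[]; };

shared float tile[64];

void main() {
    tile[gl_LocalInvocationID.x] = data[gl_GlobalInvocationID.x];
    barrier();                      // whole group has loaded its element
    if (gl_LocalInvocationID.x == 0) {
        float s = 0.0;
        for (int i = 0; i < 64; ++i) s += tile[i];
        sums[gl_WorkGroupID.x] = s;
    }
})";
```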
OpenGL has stronger, better-performing implementations on some platforms (such as the open-source Linux drivers).
OpenGL has access to more fixed function hardware (like other answers have said). While it is true that sometimes fixed function hardware can be avoided (e.g. Crytek uses a "software" implementation of a depth buffer) fixed function hardware can manage memory just fine (and usually a lot better than someone who isn't working for a GPU hardware company could) and is just vastly superior in most cases. I must admit OpenCL has pretty good fixed function texture support which is one of the major OpenGL fixed function areas.
I would argue that Intel's Knights Corner is an x86 GPU that controls itself.
I would also argue that OpenCL 2.0, with its texture functions (which are actually present in earlier versions of OpenCL), can be used to much the same degree of performance that user2746401 suggested.
In addition to the already existing answers, OpenCL/CUDA not only fits the computational domain better, but also doesn't abstract away the underlying hardware too much. This way you can profit more directly from things like shared memory or coalesced memory access, which would otherwise be buried in the actual implementation of the shader (which itself is nothing more than a special OpenCL/CUDA kernel, if you like).
Though to profit from such things you also need to be a bit more aware of the specific hardware your kernel will run on; with a shader you can't explicitly take those things into account (if that is even completely possible).
Once you do something more complex than simple level 1 BLAS routines, you will surely appreciate the flexibility and genericity of OpenCL/CUDA.
The "feature" that OpenCL is designed for general-purpose computation, while OpenGL is for graphics. You can do anything in GL (it is Turing-complete) but then you are driving in a nail using the handle of the screwdriver as a hammer.
Also, OpenCL can run not just on GPUs, but also on CPUs and various dedicated accelerators.
OpenCL (in its 2.0 version) describes a heterogeneous computing environment in which every component of the system can both produce and consume tasks generated by other system components. The notions of CPU, GPU (etc.) are no longer needed; you just have a Host and Device(s).
OpenGL, by contrast, has a strict division into the CPU, which is the task producer, and the GPU, which is the task consumer. That's not bad, as less flexibility allows for greater performance. OpenGL is just a narrower-scope instrument.
One thought is to write your program in both and test them with respect to your priorities.
For example: if you're processing a pipeline of images, maybe your implementation in OpenGL or OpenCL is faster than the other.
Good luck.
Which parts of pipelines are done using CPU and which are done using GPU?
Having read the Wikipedia article on the graphics pipeline, maybe my question does not precisely capture what I am asking.
Referring to this question, which "steps" are done on the CPU and which are done on the GPU?
Edit:
My question is more about which of the logical, high-level steps needed to display terrain + 3D models [from files] use the CPU and which use the GPU, rather than about specific functions.
Which parts of pipelines are done using CPU and which are done using GPU?
As far as I know...
It depends entirely on the hardware, the driver, and the OpenGL implementation.
Everything you can do in OpenGL works, but you can't be 100% sure where the processing happens; it is conveniently hidden from you (that is actually one of OpenGL's benefits).
There are full-software OpenGL implementations, for example Mesa3D (non-certified).
Put such an opengl32.dll (if you're on Windows) into your program folder, and everything will run on the CPU.
Typically, rasterization (the last stage of the pipeline: alpha blending, pixel writes, z-test) is done in hardware. Get an old video card, and it is possible that the rest of the functionality will run on the CPU. Vertex transformations, vertex manipulations, and even standard lighting can be done on the CPU (and they were done on the CPU for a long time), depending on the video card. However, I must admit that video cards without hardware T&L (transform and lighting) support are quite old. AFAIK, transformation and lighting is always performed on the CPU if the card only supports full hardware acceleration of DirectX 7 (Windows-platform specific).
There may be a platform-dependent way to query video card capabilities in OpenGL, though.
Side note:
DirectX gives you more information about card capabilities, and ways to control what happens where (i.e. you can choose software T&L even if the video card supports hardware acceleration; in some cases this may be necessary). This comes with a penalty: DirectX is less flexible than OpenGL, harder to use, harder to experiment with, available on fewer platforms than OpenGL, and worrying about which feature is supported and which isn't takes development time.
The answer depends on the GPU (since they differ in HW capabilities, especially integrated ones), but on modern non-integrated GPUs the entire pipeline is implemented in HW. This includes culling, z-sort and occlusion.
Integrated chipsets, such as Intel's, have fewer HW features; for example, they usually do transformation & lighting in software.
It depends a lot on the system configuration. At the very least the CPU has to issue the commands to the GPU (send vertex lists, upload textures, etc.), and the GPU usually does all the geometric transformations, texture mapping, lighting, etc.
But in many cases, part of the work that the GPU should be doing might be done by the CPU if the hardware doesn't support certain features.
If you're using a 3D engine, it might also do some pre-optimizations on the CPU to reduce the number of triangles that the GPU has to process, like discarding beforehand parts of the geometry that are known to be hidden (by using BSPs like the id Tech engines or portals like the Unreal Engine, for example).
Things like moving and rotating the meshes of the models (e.g. the walking animations) are usually done on the CPU, but nowadays I guess the trend is to hardware-accelerate them too.
Regarding the Wikipedia link, all the steps mentioned there are usually done on the GPU, but it is up to the software running on the CPU to decide which polygons are going to be sent for processing (even if they are discarded by the GPU later on because they're hidden or outside the viewing frustum).