As I understand it, there were no modules in early Qt versions; there were separate classes with different functions, including graphical ones. OpenGL support arrived in Qt 1.2. However, QPainter and QImage existed in the earliest versions.
So, is it correct to say that these classes are native (in other words, classes that were there from the start), and that the OpenGL classes are non-native (they are a separate branch, after all)?
I'd like to trace the further evolution of QtOpenGL as a non-native, alternative way of creating 2D graphics in Qt, and the influence of this module on the evolution of the native methods for creating 2D graphics.
So, is it correct to say that these classes are native?
No, it is not.
The reason is that "native" means different things to different people; it is a matter of interpretation. See your other question for how confused we got.
By now, I think that by "native" you mean non-OpenGL 2D/3D. That probably means software rasterization, as opposed to going through the display driver directly: still at the Qt level, but without the OpenGL classes in Qt.
Now, this is the point where we can come back to QImage and QPainter. Yes, QPainter is basically the first generation of software rasterization, from the times when GPUs were not as common and cheap as they are today.
It does the rendering purely with software techniques. That is more limited, but it worked without the more expensive, less common hardware of the day.
(Those were the times of Quake and other software renderers; fun times, looking at it from today's perspective...)
If by "native" you mean "hardware assisted", then the line isn't all so clear anymore.
Note that QPainter can use various paint engines to do the painting, so merely using a QPainter doesn't mean anything by itself.
If by "hardware assisted" one merely means using something more than legacy integer or floating point execution units of the CPU, then yes, the raster paint engine does use various SIMD/vectored operations where available. The raster paint engine is the engine used to paint on QImage, QPixmap and non-GL QWidget.
If by "hardware assistance" you mean "rendered by the graphics card hardware", then you need to use an OpenGL paint engine. It's used when you paint on a QGLWidget or in a QQuickPaintedItem. Of course the painting is still defined by the software - geometry setup and shaders are just code! This software runs on hardware that can execute it much faster than general purpose CPUs can.
Given that the fixed-function OpenGL pipeline is more-or-less a historical artifact these days, it's not incorrect to state that all of rendering in Qt is done using purely software techniques, but the software can run on a general-purpose CPU, or leverage SIMD/vector execution units on a general-purpose CPU, or can run on a GPU.
It should also be said that typical Windows drivers these days do not accelerate GDI/gdiplus drawing other than blits. Thus when doing 2D drawing using the raster engine, especially on older Windows versions like XP, Qt can be faster than platform-native 2D drawing.
I'm reading through the OpenGL and OpenCL specifications in order to find some information about the memory model and how exactly the two correspond to each other.
I'm aware that OpenGL and OpenCL use essentially the same memory model. What I struggle to understand, however, given that the names don't map one-to-one (at least it seems that way to me), is what exactly can be mapped to what, in terms of terminology, between the two.
Any reference would be appreciated.
Under the assumption that we have the same GPU as the device for both OpenCL and OpenGL, the specific questions are:
How, for example, does a VBO map to OpenCL? Does a VBO essentially correspond to a chunk of global memory in OpenCL terminology?
What about an OpenGL texture object? My understanding is that this corresponds exactly to an image object in OpenCL, and that they both map to texture memory.
What about a Shader Storage Buffer Object (specifically in the context of compute shaders): what does this correspond to?
Also, even on this site, I find a few debates about which one is more performant (OpenCL or OpenGL). It seems to me that OpenGL compute shaders, for example, should be preferred over OpenCL kernels only if the nature of the problem maps nicely onto something graphics related, while OpenCL is preferred for heavily numerical workloads that are not necessarily graphics related (such as a heavy simulation).
What I struggle a bit to understand is why this is the case, given that the memory model and resources are essentially the same. Apart from experimenting, I wonder what the actual difference is that justifies the performance gap. With specific reference to compute shaders, I'm aware that they allow you to implement in OpenGL any algorithm you could implement in OpenCL, so why is there a performance difference?
The kind of problem I'm thinking of would be some relatively heavy optimisation based on BLAS routines such as GEMM (level 3) or GEMV (level 2).
How well would OpenGL and OpenCL scale for these kinds of problems?
The reason I'm asking is because I struggle a bit to find relatively recent information and benchmarks that might answer the question.
How for example VBO actually maps to OpenCL
I have limited experience with OpenGL, but in my understanding, many OpenGL objects don't map to OpenCL objects at all. OpenGL in general works at a much higher level of abstraction, and it does a whole lot of things in the background for you. OpenCL is significantly simpler and more low-level (which might also explain why OpenCL can sometimes be faster). There are chunks of memory (cl_mem) and code (cl_kernel); you launch the kernels that work with the memory, and that's pretty much it. There's no complicated internal state machine like in OpenGL.
With specific reference to Compute Shaders I'm aware they allow to implement any algorithm you would be able to implement in OpenCL with OpenGL
Actually, I think that might be incorrect. OpenCL lets you do with pointers almost everything you can do in C (arithmetic, reinterpret-casting, etc.), while GLSL is much more limited (AFAIK).
what is the actual difference justifying the difference
One huge difference (again, AFAIK) is the built-in library of math functions (sin, cos, etc.). OpenGL has them too, but in OpenCL their precision is guaranteed by the standard. This makes a huge difference for scientific applications; on the other hand, it means the OpenCL kernel can be significantly slower, because an implementation of sin() with high precision over the full input range is much more code than a crude implementation that just gives you reasonably precise values on some very limited input range.
I read that a lot of raytracers use CUDA or OpenCL. However, I don't know why modern (version 4.0+) OpenGL is not used.
I know that CUDA and OpenCL have more features, and I think they are closer to the hardware, but... is this really useful for this purpose? If so, why?
OpenGL's design is all about getting points, lines or triangles rasterized to a framebuffer. Shaders are used to control this process, but ultimately it's just rasterization. When rasterizing, you take each triangle, one by one, determine where it is going to be in the framebuffer, and then manipulate those specific pixels.
Raytracing or path tracing is something entirely different. Instead of starting with the triangles and determining which pixels they touch, you start with the pixels and, for each pixel, trace into the scene to find which geometry is relevant for that pixel. I.e., it is kind of complementary to what OpenGL does. Hence trying to fit this into OpenGL is barking up the wrong tree: you need to work with completely different data structures, and your programs are structured differently than shaders. OpenCL and CUDA are much better suited for programming ray or path tracing algorithms.
Both OpenGL and more generic parallel computing mechanisms such as OpenCL or CUDA can be used to implement all kinds of ray casting, including raytracing. In fact, if you go to Shadertoy you'll find that a great many of the shaders there produce interesting 3D scenes and effects using only a single fragment shader, though mostly using ray marching rather than ray tracing.
With any kind of ray casting, you typically perform a lot of computation for every single pixel you're rendering. Since no pixel depends on the output of any other pixel, and since the algorithm for each pixel is the same, this kind of problem is ideal for parallel processing, which is at the heart of what OpenCL is designed for. You can also use OpenGL for parallel computing. The main advantage of OpenGL here is that if you're attempting to render something in real time, you want the results of the computation to stay on the video card for display on an output device.
On the other hand, if you're doing non-realtime rendering, then OpenCL probably has more functionality and less overhead required to accomplish the task at hand.
In either case, the biggest problem is probably not the implementation of the renderer, but figuring out a way to express the scene description either directly in the rendering code, or load it from some scene descriptor file and encode it in some fashion that the rendering code can interpret within the OpenCL or OpenGL framework. You cannot, for instance, simply load some XML or JSON scene description file and pass it to an OpenCL Kernel / OpenGL fragment shader. Ultimately it ends up having to be expressed in terms of whatever kinds of structures and primitives you can express in the language you choose.
I'm now facing a problem plotting some curves in a Qt and Qwt application for embedded Linux (see more details about the problem in this link).
One of the proposed solutions was to use OpenGL together with QwtPlot, but my boss fears that OpenGL would buy its graphical optimization at a higher processing cost, essentially improving one area at the expense of another. I must say this reasoning seems convincing.
Now, I haven't checked how large the improvements would be, nor do I know how much extra processing OpenGL would require, but this led me to a more general question (whose answer may actually refute my boss's thesis): what are the disadvantages of using OpenGL, particularly in an embedded Linux situation? I tried to find something on the web, but Google wouldn't help me with disadvantages apart from the issues related to the fight between OpenGL and DirectX.
but my boss fears that OpenGL would ensure its graphical optimization with a higher processing cost,
Your boss is speculating without having actual knowledge on the subject. This is akin to premature optimization.
OpenGL is not a library; it's an API for accessing graphics systems, and it has been deliberately designed to have very little overhead and to provide nothing beyond what GPUs can actually do. There are no higher-level kinds of "objects" in OpenGL. All OpenGL does is make the GPU draw points, lines or triangles in exactly the order and way you tell it to.
what are the disadvantages of using OpenGL, particularly for a embedded linux situation?
If your target embedded device has an OpenGL-capable GPU: zero. In fact, using OpenGL will then greatly improve performance and reduce load on the CPU. More likely, on an embedded system you'll have to deal with OpenGL ES, though. In your other post you mention you're using a TI OMAP. Which one exactly? Some of them come with PowerVR GPUs.
I am reading through the OpenGL Superbible Fifth Edition and they discuss using stacks via their own class. That's all great but they mention that matrix stacks were deprecated. Why were they deprecated and what do people use instead of them?
The reason(s) are political, not technical, and date back to the early 2000s.
OpenGL 3 was the first ever version willing to break backwards compatibility. The designers wanted to create an API for the expert users, the game programmers and high end visualization coders who knew all about shaders and wrote their own matrix code. The intent was that the OpenGL 3 API should match the actual hardware quite closely. (Even in OpenGL 1/2, the matrix stack was usually implemented on the CPU side, not the GPU.)
From a game engine programmer point of view, this was better. And hey, if you have to develop a new game engine every couple of years anyway, what's the big deal about throwing away the old code?
The result of this design process is the OpenGL 3/4 core profile.
Once the "new generation" OpenGL was announced, all the not-so-expert coders in universities and companies realized they would be screwed. These are the people (like me) who teach 3D graphics or write utility programs for research or design. We don't need any more advanced lighting than plain ambient-diffuse-specular. We often have to mix code from different sources together, and that is only easy if everyone is using exactly the same matrix, lighting, and texturing conventions - like those supplied by OpenGL 2.
Also, I've heard but cannot verify, the big CAD/CAM companies realized that they'd be screwed as well. Throwing away two million lines of code from ten years of development is not an option when you've got paying (and well-paying: compare prices for Quadro vs GeForce, or FireGL vs Radeon) customers.
So both NVIDIA and ATI announced they'd support the old API for as long as they could.
The result of this pressure is the compatibility profiles. And the OpenGL ARB now seems to have realized that while they'd like everyone to switch to core profile it just isn't going to happen: read the extension spec for tessellation shaders in OpenGL 4 and it mentions that GL_PATCHES will work with glBegin.
The matrix stack (and the rest of the matrix functions) were deprecated only in the core profile. In the compatibility profile you can still use them.
From my point of view, it was removed because most engines/frameworks have custom math code and their own shader-uniform style for sending matrices to shaders.
For simple programs and tutorials, though, it is quite inconvenient to have to search for and use something else.
I suggest using:
glm (http://glm.g-truc.net/)
very simple math lib (vsml)
Why were they deprecated
Because nobody actually used it in real-world OpenGL programs. Take a physics simulation, for example: all the object placements are stored in the physics system as 4×4 matrices anyway, so you'd just use those. The same goes for visible-object determination and animation systems. All of those need to implement the matrix math anyway, so having it in OpenGL is rather redundant; most of the time the already existing matrices were simply passed to glLoadMatrix.
and what do people use instead of them?
What they used before: Their animation systems, physics simulators, scene graphs, etc.
Well, the first and main reason, for me, is that with the rise of programmable shaders (mandatory since the third version of OpenGL), the built-in variables such as GL_PROJECTION and GL_MODELVIEW that were automatically transferred to the shaders have been removed, so the user has to define their own matrices to use in the shader. Since you have to send the matrices manually using the uniform functions anyway, you don't really need the fixed variables anymore.
Does anyone know the best graphics drawing library for C++? I want a lib that can draw basic shapes and do image editing and gradients; vector or 3D support would be great too.
The Windows drawing functions are complicated and not very advanced.
May I suggest using Cairo?
This vector library is very fast, versatile and powerful! Just look at those pretty examples!
There's even integration with OpenGL if you need vectorized 3D textures!
I tested AGG, Cairo, GDI+ and Quartz (for Mac).
I think Quartz is the best, but it is available (as far as I know) for Mac only.
AGG is powerful but not well documented. The developer decided to reinvent the wheel and made his own doc system instead of using something standard like Doxygen. There are good tutorials for basic understanding, but when you dig deeper you find the API documentation lacking, imprecise or incomplete.
GDI+ is pretty basic compared to the others, and is available for Windows only.
As a result, I think the best choice is probably Cairo (unless you can develop for Mac only). It's well documented, the code is clean, and it's fast and powerful.
Check out CImg Library.
CImg stands for "Cool Image": it is easy to use and efficient. It's a very pleasant toolbox to code image processing stuffs in C++, and potentially covers a wide range of image processing applications.
Graphics libraries like OpenGL and DirectX, and game engines such as Ogre3D, may be too low-level for tasks like drawing shapes and gradients.
Maybe you should take a look at Cairo as mentioned above (http://cairographics.org/), or simply at Qt, which has a pretty complete and efficient drawing module (http://qt.nokia.com/doc/4.5/examples.html#graphics-view) and allows both high-level (QGraphicsScene & QGraphicsView) and low-level (OpenGL) drawing.
For 2D drawing, SFML provides a nice API.
See some of the quality tutorials to learn more.
DirectX and OpenGL are two options here. They're both complicated though.
Though meant for 3D, you can also do 2D work with Ogre3D.