I'm currently designing and developing GUIs for some audio applications written in C++ (using the JUCE framework).
So far I've been playing with using bitmap graphics to create custom sliders and dials, using 'film strip' style images to animate the components (meaning that when the user interacts with a slider, it triggers a method that changes the offset into a film-strip image to change the component's appearance). Depending on the size of the original image and the number of 'frames', the CPU usage changes quite dramatically.
Firstly, what would be the most efficient bitmap file format to use in terms of CPU consumption? At the moment I'm using PNG images.
Secondly, would it be more efficient to use vector graphics for these kinds of graphical components? I understand the main differences between bitmap and vector graphics, but I haven't found any information about their CPU usage with regard to GUI interaction.
Or would CPU usage be down to the particular methods/functions/libraries/frameworks being used?
Thanks!
Or would CPU consumption be down to the particular methods/functions/libraries/frameworks being used?
Any of these things could influence it.
Pixel-based images might take a while to read off disk the larger they are. Compressed formats might take more time to decompress. Vector graphics might take more time to render when they are loaded.
That being said, I would not expect your choice of image format to have any significant impact on performance. Since you didn't provide a code example, it is hard to speculate beyond that.
In general, you would expect the run-time cost of the images to be incurred when they are loaded, i.e. whenever you create an image object. If you create images all over the place, that could be expensive. It is possible that your film strip is recreating the images instead of loading them once and caching them.
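As an illustration of the cached approach, here's a rough sketch of a film-strip slider that loads its strip image once (via JUCE's ImageCache) and only blits one frame per repaint. The class, its constructor arguments, and the vertical-strip layout are assumptions made for the example, not part of JUCE or of your code:

```cpp
// Rough sketch of a film-strip slider that caches its image (JUCE assumed).
// FilmStripSlider, "frames", etc. are illustrative names, not JUCE API.
#include <JuceHeader.h>

class FilmStripSlider : public juce::Slider
{
public:
    FilmStripSlider (const juce::File& stripFile, int numFrames)
        : frames (numFrames),
          // ImageCache keeps one shared, already-decoded copy of the image,
          // so the file is not re-read or re-decoded on every construction.
          strip (juce::ImageCache::getFromFile (stripFile))
    {
        frameHeight = strip.getHeight() / juce::jmax (1, frames);
    }

    void paint (juce::Graphics& g) override
    {
        // Map the slider's value to a frame index; no decoding happens here,
        // just a copy of one sub-rectangle of the cached image.
        const double proportion = valueToProportionOfLength (getValue());
        const int frame = juce::jlimit (0, frames - 1,
                                        (int) (proportion * (frames - 1)));

        g.drawImage (strip,
                     0, 0, getWidth(), getHeight(),     // destination rect
                     0, frame * frameHeight,            // source x, y
                     strip.getWidth(), frameHeight);    // source w, h
    }

private:
    int frames = 0, frameHeight = 0;
    juce::Image strip;
};
```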
Before choosing between bitmap and vector graphics, investigate which of the two your graphics processor actually accelerates. Some things take a long time to draw as vectors.
Have you tried double-buffering?
This is where you write to a buffer in memory while the display (graphics processor) is loading another.
Load your bitmaps from the resource once. Store them in memory in their decoded form to avoid the repeated cost of translating them from a file format.
Does your graphic processor support "blitting"?
Blitting is where the graphics processor copies a rectangular area of memory (a bitmap) to the display, optionally applying operations (such as XOR with the existing bits) as it does so.
Summary:
To improve your rendering speed, convert images from the file format into bitmap form only once. Store the result somewhere and refer to that converted bitmap as needed. Next, investigate and implement double buffering. Lastly, investigate and use bit-blitting (blitting).
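To make the summary concrete, here's a rough sketch of the double-buffer-and-blit pattern using plain Win32/GDI (assuming a Windows target; window creation and the actual drawing are omitted, and the function name is illustrative):

```cpp
// Rough sketch: draw the frame into an off-screen memory bitmap, then blit
// the finished frame to the window in a single BitBlt call.
#include <windows.h>

void paintDoubleBuffered(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    RECT rc;
    GetClientRect(hwnd, &rc);
    const int w = rc.right - rc.left, h = rc.bottom - rc.top;

    // Back buffer: a device-compatible bitmap selected into a memory DC.
    HDC     memDC   = CreateCompatibleDC(hdc);
    HBITMAP backBmp = CreateCompatibleBitmap(hdc, w, h);
    HGDIOBJ oldBmp  = SelectObject(memDC, backBmp);

    // ... draw the whole frame into memDC here (sliders, dials, etc.) ...
    FillRect(memDC, &rc, (HBRUSH) GetStockObject(WHITE_BRUSH));

    // One blit of the finished frame; the user never sees partial drawing.
    BitBlt(hdc, 0, 0, w, h, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, oldBmp);
    DeleteObject(backBmp);
    DeleteDC(memDC);
    EndPaint(hwnd, &ps);
}
```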
Other optimization rules apply here too, such as reviewing the design, removing requirements, loop unrolling, passing images by pointer rather than copying them, and reducing "if" statements by using Boolean logic and Karnaugh maps.
In general, calculations for rendering vector graphics are going to take longer than blitting a rectangular region of a bitmap to the screen. But for basic UI stuff, neither should be particularly intensive.
You probably should do some profiling. Perhaps you're redrawing much more frequently than necessary. Or perhaps the PNG is being decoded each time you try to draw from it. (I'm not familiar with Juce.)
For a straight Windows app, I'd probably render the vector graphics into a device-dependent bitmap once on startup and then just blit from that bitmap to the screen. Using vector graphics gives you DPI independence, and blitting from a device-dependent bitmap is about the fastest way to paint a block of pixels. I believe the color matching is done when you render to the device-dependent bitmap, so you don't even have the ICM overhead during the screen drawing.
Vector graphics were ditched long ago for this sort of thing; bitmap graphics are more performant. The key point is that you can send a bitmap to the GPU once and then render it from then on with a simple copy.
Secondly, the GPU uses its own texture compression (under DirectX it's DXT5, I believe), but once the GPU has the texture, it doesn't care what format you loaded it from.
However, a modern CPU even with a crappy integrated GPU should have absolutely no problem with simple GUI rendering. If you're struggling, then it's time to look again at the technique you're using. Perhaps your framework is slow or your use of it is suboptimal.
Related
I'm writing some code that uses SDL2 to display an image with moving markers layered on it, and I think I'd like to use the new (?) 2D hardware accelerated rendering. As I understand it, I have to load an image and convert it to a texture -- but what's the difference? Searching for 'image texture 2d sdl' only gets me tutorials on how to load textures and I'm looking for more of the background rather than the how-to.
So, some questions:
What's a texture versus an image? Aren't they the same thing?
Am I correct in assuming that I need to load the static background image as a texture if I want hardware accelerated rendering? In fact, it sounds like all the bits need to be textures for this to work.
Speaking of OpenGL, are SDL textures actually OpenGL textures?
I'm writing the main app for a single-purpose machine with limited resources (dual-core ARM CPU, dual-core Mali 400 GPU, 4GB RAM: Olimex A20 LIME2). All I need to do is render a 480x800 (yes, portrait layout) image and put markers on it. I expect the markers to have a single opaque layer and two transparency layers, to be updated at around 15 fps, and I expect about 125 of them, tops. Is it worth my while to use 2D hardware acceleration or should I just do it in software?
To understand the basics of textures, I'd advise you to have a look at a simpler library's documentation; there, the term pixmap is used in the same way as SDL's texture. Essentially, those are images that have already been converted and uploaded into your GPU's memory, which makes operations quite a bit faster, but also more complex to deal with.
OpenGL textures are another beast, but we could basically say they are the same thing, that is, images in video memory. When binding a texture in OpenGL, you need to upload it to GPU memory, which is roughly analogous to SDL's surface-to-texture conversion.
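For illustration, here's a minimal sketch of SDL2's surface-to-texture path; the window size, the file name, and the use of SDL_LoadBMP are placeholders for the example, and error handling is trimmed:

```cpp
// Minimal sketch: system-memory surface -> GPU texture -> accelerated copy.
#include <SDL.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);

    SDL_Window*   win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                         SDL_WINDOWPOS_CENTERED, 480, 800, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    // A surface lives in ordinary system RAM...
    SDL_Surface* surf = SDL_LoadBMP("background.bmp");

    // ...a texture is the same pixels uploaded into GPU/driver memory.
    SDL_Texture* tex = SDL_CreateTextureFromSurface(ren, surf);
    SDL_FreeSurface(surf);                   // no longer needed once uploaded

    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, tex, nullptr, nullptr);  // GPU blit of the whole texture
    SDL_RenderPresent(ren);
    SDL_Delay(2000);

    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```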
At 125 objects, I think using 2D acceleration becomes worth the hassle, especially if you have to move them around. If this is just a static image, I guess you could go the regular-image route.
As a general rule, I encourage you to use 2D acceleration (or just acceleration, for that matter) whenever possible, if only for the battery improvements. That said, if the images are static, the outcome will be exactly the same, perhaps via a slightly different code path. As such, I suppose you could load the static background image as a regular image without any downsides (note that I am not an SDL professional, so this mixed approach might not work here, but it is worth trying since it works on most 2D toolkits).
I hope I answered all of your questions :)
I have an application that contains many millions of 3D RGB points that form an image when plotted. What is the fastest way of getting them to the screen in an MFC application? I've tried CDC::SetPixelV in conjunction with a bitmap, which seems quite slow, and I'm looking towards a Direct3D or OpenGL window in my MFC view class. Any other good places to look?
Double buffering is your solution. There are many examples on codeproject. Check this one for example
Sounds like a point cloud. You might find some good information searching on that term.
3D hardware is the fastest way to take 3D points and get them into a 2D display, so either Direct3D or OpenGL seem like the obvious choices.
If the number of points is much greater than the number of pixels in your display, then you'll probably first want to cull points that are trivially outside the view. You put all your points in some sort of spatial partitioning structure (like an octree) and omit the points inside any node that's completely outside the viewing frustum. This reduces the amount of data you have to push from system memory to GPU memory, which will likely be the bottleneck. (If your point cloud is static and you're just building a fly-through, and if your GPU has enough memory, you could skip the culling, send all the data at once, and then just update the transforms for each frame.)
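To make the "push once, then only update transforms" idea concrete, here's a rough sketch of uploading a static point cloud to a vertex buffer once and drawing it each frame (compatibility-profile OpenGL with GLEW assumed; the vertex struct and function names are mine):

```cpp
// Rough sketch: one upload of the point data to GPU memory, cheap draws after.
#include <GL/glew.h>
#include <cstddef>
#include <vector>

struct PointVertex { float x, y, z; unsigned char r, g, b; };

GLuint uploadPointCloud(const std::vector<PointVertex>& points)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // One push from system memory to GPU memory; GL_STATIC_DRAW hints that
    // the data will not change, so the driver can keep it in video RAM.
    glBufferData(GL_ARRAY_BUFFER,
                 points.size() * sizeof(PointVertex),
                 points.data(), GL_STATIC_DRAW);
    return vbo;
}

void drawPointCloud(GLuint vbo, GLsizei count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(PointVertex), (void*) 0);
    glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(PointVertex),
                   (void*) offsetof(PointVertex, r));
    glDrawArrays(GL_POINTS, 0, count);   // only the transform changes per frame
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```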
If you don't want to use the GPU and instead write a software renderer, you'll want to render to a bitmap that's in the same pixel format as your display (to eliminate the chance of the blit needing to do any pixel format conversion as it blasts the bitmap to the display). For reasonable window sizes, blitting at 30 frames per second is feasible, but it might not leave much time for the CPU to do the rendering.
My basic problem is to generate 2D renders of 3D objects, such as one could accomplish with OpenGL or DirectX. However, I have no interest in displaying the rendered objects on the screen, only in generating the shaded/textured/rotated images as bitmaps (not necessarily written to disk). This process is likely to be a problematic bottleneck in my design, so I would prefer to keep my solution as compact as possible (i.e., don't waste the time sending the image to the screen), and would be most pleased if I could make use of hardware-accelerated rendering. Does anyone know of a convenient library or tool to help with this?
Right now I would prefer a C/C++ option; however, speed is what I'm going for, so I'm willing to deal with ASM or super-optimized anything if it gets me what I want the fastest.
What you need is a technique called "render to texture". With OpenGL you can do it very easily. Take a look at the example here: http://www.songho.ca/opengl/gl_fbo.html#example
You can do this using "render to texture"
This process is likely to be a problematic bottleneck in my design, so I would prefer to keep my solution as compact as possible (i.e., don't waste the time sending the image to the screen)
If you want rendering to be fast, you want to use the GPU for this. Sending an image to the screen comes for free with those things. But reading an image back into CPU memory is actually quite a bottleneck, though probably unavoidable in your case.
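As a sketch of what the render-to-texture-and-read-back path can look like (OpenGL 3.x with GLEW assumed; context creation, shaders, and error checking are omitted, and the sizes and formats are illustrative):

```cpp
// Rough sketch: render off-screen into an FBO-attached texture, read it back.
#include <GL/glew.h>
#include <vector>

std::vector<unsigned char> renderToBitmap(int width, int height)
{
    GLuint fbo = 0, colorTex = 0, depthRb = 0;

    // Colour attachment: the texture the scene will be rendered into.
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // Depth attachment so 3D objects occlude each other correctly.
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    glViewport(0, 0, width, height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene here exactly as you would to the screen ...

    // The read-back is the expensive part: it stalls until the GPU finishes.
    std::vector<unsigned char> pixels(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteRenderbuffers(1, &depthRb);
    glDeleteTextures(1, &colorTex);
    return pixels;
}
```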
I was wondering if it is faster to render a single quad the size of the window with a texture the size of the window than to draw the bitmap directly to the window using double buffering coupled with the platform-specific way of drawing to a window.
The initial setup for textures tends to be relatively slow, but once that's done, the drawing is quite fast -- in the typical case where graphics memory is available, the texture is uploaded to the memory on the graphics card during initial setup, and after that all the drawing happens from there. At the same time, that initial upload will typically also include a full mipmap chain down to 1x1 resolution, so you're uploading a bit more than just the full-resolution texture.
With platform specific drawing, you usually don't have quite as much work up-front. If only part of the bitmap is visible, only the visible part will be uploaded. If the bitmap is going to be scaled, it'll typically scale it on the CPU and send it to the card at the current scale (and never upload anything resembling a mipmap). OTOH, virtually every time something needs to be redrawn, it'll end up re-sending the bitmap data for the newly exposed area. It doesn't take much of that to lose the (often minor anyway) advantage of minimizing what was sent to start with.
Using textures is usually a lot faster, since most native drawing APIs aren't hardware accelerated.
It will very probably depend on the graphics card and driver.
I'm getting ready to make a drawing application in Windows. I'm just wondering, do drawing programs have a memory bitmap which they lock, then set each pixel, then blit?
I don't understand how Photoshop can move entire layers without lag or flicker without using hardware acceleration. Also in a program like Expression Design, I could have 200 shapes and move them around all at once with no lag. I'm really wondering how this can be done without GPU help.
Also, I don't think super-efficient algorithms alone could account for that, could they?
Look at this question:
Reduce flicker with GDI+ and C++
All you can do about DC drawing without GPU is to reduce flickering. Anything else depends on the speed of filling your memory bitmap. And here you can use efficient algorithms, multithreading and whatever you need.
Certainly modern Photoshop uses GPU acceleration if available. Another possible tool is DMA. You may also find it helpful to read the source code of existing programs like GIMP.
Double (or more) buffering is the way it's done in games, where we're drawing a ton of crap into a "back" buffer while the "front" buffer is being displayed. Then when the draw is done, the buffers are swapped (a pointer swap, not copies!) and the process continues in the new front and back buffers.
Triple buffering offers another bonus, in that you can start drawing two-frames-from-now when next-frame is done, but without forcing a buffer swap in the middle of the screen refresh. Many games do the buffer swap in the middle of the refresh, but you can sometimes see it as visible artifacts (tearing) on the screen.
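A tiny sketch of the pointer-swap idea, with names that are purely illustrative rather than any particular API: two full-size pixel buffers, the renderer fills the back one while the front one is displayed, and "presenting" just exchanges two pointers rather than copying pixels.

```cpp
// Illustrative double-buffer bookkeeping: the swap is O(1), no pixel copies.
#include <algorithm>
#include <cstdint>
#include <vector>

struct FrameBuffers
{
    std::vector<std::uint32_t> a, b;           // two full-size pixel buffers
    std::vector<std::uint32_t>* front = &a;    // what the display code reads
    std::vector<std::uint32_t>* back  = &b;    // what the renderer draws into

    explicit FrameBuffers(std::size_t pixelCount) : a(pixelCount), b(pixelCount) {}

    // The "swap": only two pointers change hands.
    void present() { std::swap(front, back); }
};
```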
Anyway, for an app drawing bitmaps into a window: if you've got some "slow" operation, do it into a not-yet-displayed buffer while presenting the already-finished version via the rendering API, e.g. GDI. Let the system software handle all of the fancy updating.