If you have images stored in a CImageList, is there an easy way to render them (with proper transparency) scaled to fit a given target rectangle? CImageList::DrawEx takes size information but I don't believe it does scaling, only cropping?
I guess you could render them to an offscreen bitmap, then StretchBlt() them to either your device or another offscreen bitmap, letting StretchBlt() do the scaling... Getting the transparency to carry over correctly will require some fiddling, though; depending on your circumstances you may need to use AlphaBlend() instead.
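A rough sketch of that approach, assuming a 32bpp image list with per-pixel alpha (the helper name and its parameters are made up for illustration); as said, the alpha handling may still need fiddling:

```cpp
// Illustrative only: draw image-list entry nImage scaled into destRect on pDC.
void DrawScaledImageListEntry(CImageList& imgList, int nImage,
                              CDC* pDC, const CRect& destRect)
{
    IMAGEINFO info = {};
    imgList.GetImageInfo(nImage, &info);
    const int srcW = info.rcImage.right - info.rcImage.left;
    const int srcH = info.rcImage.bottom - info.rcImage.top;

    // 32bpp top-down DIB section as the off-screen bitmap.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = srcW;
    bmi.bmiHeader.biHeight      = -srcH;
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* pBits = nullptr;
    HBITMAP hDib = ::CreateDIBSection(pDC->GetSafeHdc(), &bmi, DIB_RGB_COLORS, &pBits, nullptr, 0);

    CDC memDC;
    memDC.CreateCompatibleDC(pDC);
    HGDIOBJ hOld = ::SelectObject(memDC.GetSafeHdc(), hDib);
    ZeroMemory(pBits, srcW * srcH * 4);          // start from a fully transparent background

    // Render the image into the off-screen bitmap.
    imgList.Draw(&memDC, nImage, CPoint(0, 0), ILD_TRANSPARENT);

    // Scale onto the target DC; AlphaBlend (msimg32.lib) carries per-pixel alpha across.
    BLENDFUNCTION bf = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
    ::AlphaBlend(pDC->GetSafeHdc(),
                 destRect.left, destRect.top, destRect.Width(), destRect.Height(),
                 memDC.GetSafeHdc(), 0, 0, srcW, srcH, bf);

    ::SelectObject(memDC.GetSafeHdc(), hOld);
    ::DeleteObject(hDib);
}
```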
My opinion is that most of the Win32 image handling code, and by extension its MFC equivalents (CImageList, CIcon, CImage, CBitmap, ...), is inadequate for today's graphics needs. Handling per-pixel transparency in particular hardly ever works consistently. I usually store my images in a CImage and use ::AlphaBlend() everywhere to get them onto a DC, or I use GetDIBits()/SetDIBits() and directly manipulate the RGBA entries (not very practical for scaling and similar operations, I admit). On the other hand, I understand what it's like having to maintain code that already uses these things and wanting to update it to give it a bit of a modern look...
I created this UI framework in Direct2D some time ago to be able to draw/manage my own windows and widgets. I've been using it and updating it according to my needs and it works pretty well. However, now that high-resolution monitors are the new thing, I've come across a small problem: drawing images/icons at the best definition I can.
Since I'm using Direct2D, all the draw functions work properly according to the DPI/scaling of the target machine, except of course images, which are pixel-based and for that reason are not automatically managed by DirectX.
So, in the beginning I was simply drawing bitmaps as they were at 96 DPI. This meant that if I had a 10x10 icon and used a function like ID2D1RenderTarget::DrawBitmap with a destination rectangle, my image would be scaled up at higher DPIs. This of course is noticeable and the icon ends up blurry.
My first attempt at fixing this was to create my icons 4x bigger than needed for the default 96 DPI. Then, using the same ID2D1RenderTarget::DrawBitmap and knowing that these images are 4x bigger, DrawBitmap would scale the icon down instead of up. This gave much better results: from a Windows scale of 150% and up it's perfect.
However, when scaling down from 4x to 1x the result is not great; images get somewhat pixelated, much worse than doing the same in Photoshop.
I also tried using SetTransform before the DrawBitmap to see if the result would be better, but it's exactly the same.
So my question is: how are people dealing with this issue? I'm sure I'm not the only one...
If your goal is to get the best visual results, you'll need to prepare sets of icons at various resolutions, not just downscaled but specifically designed at the smaller sizes. Then you'll need to select the appropriate one according to the current context.
Regarding DrawBitmap, you could try different interpolation modes.
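For example (the render target, the bitmap, and the destination rectangle are assumed to exist already; the only point here is the interpolation argument):

```cpp
// ID2D1RenderTarget offers linear and nearest-neighbour interpolation for DrawBitmap.
pRenderTarget->DrawBitmap(
    pIconBitmap,                                  // ID2D1Bitmap* holding the icon
    D2D1::RectF(left, top, right, bottom),        // destination rectangle in DIPs
    1.0f,                                         // opacity
    D2D1_BITMAP_INTERPOLATION_MODE_LINEAR,        // try LINEAR vs. NEAREST_NEIGHBOR
    nullptr);                                     // use the whole source bitmap

// On an ID2D1DeviceContext (Direct2D 1.1+) there are more modes, e.g.
// D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC, which tends to downscale more smoothly.
```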
As for general solutions that people are using, I don't think there is one. Many applications don't support this properly; even when they handle control layout, embedded bitmap resources are still either stretched and look deformed, or interpolated and look too blurry.
I'm currently writing a 3D renderer (for fun and research), so I need a way to draw my framebuffer to a window. Since I'm doing all of my calculations on CPU, the drawing needs to be as fast as possible.
One of my goals is to use no existing graphics library (OpenGL/DirectX) so the drawing to the screen is pure Win32. In my research I've found a couple of ways to create and draw bitmaps and now I'm looking for the best one.
My current implementation uses a bitmap created with CreateDIBSection(), which is drawn to my window DC using BitBlt().
CreateDIBSection() gives me a pointer to my bitmap bytes so I can manipulate it without copying. Using this method I achieve an update rate of about 260 FPS (without any rendering done).
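For reference, the setup looks roughly like this (condensed, error handling omitted; width, height and hwnd are placeholders):

```cpp
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;           // negative = top-down rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void*   pixels = nullptr;
HDC     hdcWin = GetDC(hwnd);
HBITMAP hDib   = CreateDIBSection(hdcWin, &bmi, DIB_RGB_COLORS, &pixels, nullptr, 0);
HDC     hdcMem = CreateCompatibleDC(hdcWin);
SelectObject(hdcMem, hDib);

// Per frame: the renderer writes the framebuffer through `pixels`, then:
BitBlt(hdcWin, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);
```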
This seems a bit slow, so I'm looking for optimizations.
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
How can I make sure my DIB bitmap and window are compatible?
Are there methods of drawing a bitmap that are faster than my current implementation?
I've also read something about DrawDibDraw(), can anyone confirm that this is faster?
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
Very few systems run in a palette mode any more, so it seems unlikely this is an issue for you.
Aside from palettes, some GDI functions also cause a color matching conversion to be applied if the source bitmap and the destination have different gamuts. BitBlt, however, does not do this type of color matching, so you're not paying a price for that.
How can I make sure my DIB bitmap and window are compatible?
You don't. You can use DIBs (which are Device-Independent Bitmaps) or compatible (device-dependent) bitmaps. It's possible that your DIB bitmap matches the current mode of your device. For example, if you're using a 32 bpp DIB, and your display is in that same mode, then no conversion is necessary. If you want a bitmap that's guaranteed to be in the same mode as your device, then you can't use a DIB and all the nice properties it provides for predictable pixel layout and format.
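What you can do, for what it's worth, is check whether the display happens to be in a matching mode and pick your DIB format accordingly; a trivial check:

```cpp
// If the display is running at 32 bpp, a 32 bpp BGRA DIB needs no format conversion on BitBlt.
HDC hdcScreen  = GetDC(nullptr);
int bitsPerPel = GetDeviceCaps(hdcScreen, BITSPIXEL) * GetDeviceCaps(hdcScreen, PLANES);
ReleaseDC(nullptr, hdcScreen);
```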
Are there methods of drawing a bitmap that are faster than my current implementation?
The limitation is most likely in getting the data from system memory to graphics adapter memory. To get around that limitation, you need a faster graphics bus, or you need to render directly into graphic memory, which means you'd need to do your computation on the GPU rather than the CPU.
If you're rendering a 1920 x 1080 pixel image at 24 bits per pixel, that's close to 6 MB for your frame buffer. That's an awful lot of data. If you're doing that 260 times per second, that's actually pretty impressive.
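Roughly: 1920 × 1080 × 3 bytes ≈ 5.9 MB per frame, and at 260 frames per second that works out to about 1.6 GB/s being pushed to the graphics adapter.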
I've also read something about DrawDibDraw(), can anyone confirm that this is faster?
It's conceivable, but the only way to know would be to measure it. And the results might vary from machine to machine because of differences in the graphics adapter (and which bus they use).
I'm currently in the process of designing and developing GUI's for some audio applications made in C++ (using the Juce framework).
So far I've been playing with using bitmap graphics to create custom sliders and dials, using 'film strip' style images to animate the components (meaning that when the user interacts with a slider, it triggers a method that changes the offset of the film-strip image to change the component's appearance). Depending on the size of the original image and the number of 'frames', the CPU usage level changes quite dramatically.
Firstly, what would be the most efficient bitmap file format to use in terms of CPU consumption? At the moment I'm using PNG images.
Secondly, would it be more efficient to use vector graphics for these kinds of graphical components? I understand the main differences between bitmap and vector graphics, but I haven't found any information regarding their CPU usage levels with regard to GUI interaction.
Or would CPU usage be down to the particular methods/functions/libraries/frameworks being used?
Thanks!
Or would CPU usage be down to the particular methods/functions/libraries/frameworks being used?
Any of these things could influence it.
Pixel-based images might take a while to read off disk the bigger they are. Compressed formats might take more time to decompress. Vector graphics might take more time to render when they are loaded.
That being said, I would definitely not expect your choice of image type to have any impact on performance. Since you didn't provide a code example, it is hard to speculate beyond that.
In general, you would expect the run-time cost of the images to be paid when they are loaded, i.e. whenever you create an image object. If you create images all over the place, then maybe that's what's expensive. It is possible that your film strip is recreating the images instead of loading them once and caching them.
Before choosing bitmap vs. vector graphics, investigate if your graphics processor supports vector or bitmap graphics. Some things take a long time to draw as vectors.
Have you tried double-buffering?
This is where you write to a buffer in memory while the display (graphics processor) is loading another.
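In plain Win32 terms it looks something like this (your framework may already provide an equivalent; this is only a sketch of the idea, called from a WM_PAINT handler):

```cpp
void PaintDoubleBuffered(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    RECT rc;
    GetClientRect(hwnd, &rc);

    // Build the frame in an off-screen bitmap first...
    HDC     hdcBack = CreateCompatibleDC(hdc);
    HBITMAP hbmBack = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
    HGDIOBJ hOld    = SelectObject(hdcBack, hbmBack);

    FillRect(hdcBack, &rc, (HBRUSH)(COLOR_WINDOW + 1));
    // ... draw the sliders/dials into hdcBack here ...

    // ...then show the finished frame with a single blit, so no partial drawing is ever visible.
    BitBlt(hdc, 0, 0, rc.right, rc.bottom, hdcBack, 0, 0, SRCCOPY);

    SelectObject(hdcBack, hOld);
    DeleteObject(hbmBack);
    DeleteDC(hdcBack);

    EndPaint(hwnd, &ps);
}
```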
Load your bitmaps from the resource once. Store them as memory snapshots to avoid the additional cost of translating them from a file format.
Does your graphic processor support "blitting"?
Blitting is where the graphics processor can copy a rectangular area of memory (a bitmap) and display it, optionally applying operations before displaying (such as XOR with the existing bits).
Summary:
To improve your rendering speed, only convert images from the file into a bitmap form once. Store this somewhere. Refer to this converted bitmap as needed. Next, investigate and implement double buffering. Lastly, investigate and use bit-blitting or blitting.
Other optimization rules apply here too, such as reviewing the design, removing requirements, unrolling loops, passing images by pointer instead of copying them, and reducing "if" statements by using Boolean logic and Karnaugh maps.
In general, calculations for rendering vector graphics are going to take longer than blitting a rectangular region of a bitmap to the screen. But for basic UI stuff, neither should be particularly intensive.
You probably should do some profiling. Perhaps you're redrawing much more frequently than necessary. Or perhaps the PNG is being decoded each time you try to draw from it. (I'm not familiar with Juce.)
For a straight Windows app, I'd probably render vector graphics into a device-dependent bitmap once on startup and then just blit from the bitmap to the screen. Using vector gives you DPI independence, and blitting from a device-dependent bitmap is about the fastest way to paint a block of pixels. I believe the color matching is done when you render to the device-dependent bitmap, so you don't even have the ICM overhead on the screen drawing.
Vector graphics were ditched long ago; bitmap graphics are more performant. The thing is that you can send a bitmap to the GPU once and then render it forevermore with a simple copy.
Secondly, the GPU uses its own texture compression. DirectX uses DXT5, I believe, but once the GPU sees the texture, it doesn't care what you loaded it from.
However, a modern CPU even with a crappy integrated GPU should have absolutely no problem with simple GUI rendering. If you're struggling, then it's time to look again at the technique you're using. Perhaps your framework is slow or your use of it is suboptimal.
I need to render some formatted text (colours, different font sizes, underlines, bold, etc.); however, I'm not sure how to go about doing it. D3DXFont only allows text of a single font/size/weight/colour/etc. to be rendered at once, and I can't see a practical way to "combine" multiple calls to ID3DXFont::DrawText to do such things...
I looked around and there don't seem to be any existing libraries that do this. I have no idea how to implement such a text renderer, and I couldn't even find any documentation on how one would work; all I found covers rendering simple fixed-width ASCII bitmap fonts, which is probably an entirely different approach, suitable only for rendering simple blocks of text where Unicode is not important.
If there are no Direct3D font renderers capable of doing this, are there any other renderers (e.g. ones for rendering rich text in a normal window), and would rendering with those to a texture in RAM, then uploading that to the video card to render onto the back buffer, yield reasonable performance?
You tagged this with Direct3D, so I'm going to assume that's the target environment. Otherwise, GDI can handle all this stuff.
Actually, looking at GDI is a good place to start, since it supports this as it stands. With GDI, only a single font is selected at a time for drawing text, and likewise only a single text color. So, to draw a line of text that contains characters in multiple colors and/or fonts, you have to divide the line up into chunks of characters that all share the same rendering state. You set the state for that chunk (text foreground/background color, font, spacing, etc.), call ::TextOut for that chunk of text, then set the state for the next chunk and draw that, and so on.
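A minimal GDI sketch of that pattern (the DC and the two fonts are assumed to already exist; positions and strings are hard-coded just to show the state/draw/state/draw sequence):

```cpp
SetBkMode(hdc, TRANSPARENT);

// First chunk: red, bold font.
HGDIOBJ oldFont = SelectObject(hdc, hBoldFont);
SetTextColor(hdc, RGB(200, 0, 0));
TextOutW(hdc, 10, 10, L"Warning:", 8);

// Measure the first chunk so the second one continues on the same line.
SIZE ext;
GetTextExtentPoint32W(hdc, L"Warning:", 8, &ext);

// Second chunk: black, regular font.
SelectObject(hdc, hRegularFont);
SetTextColor(hdc, RGB(0, 0, 0));
TextOutW(hdc, 10 + ext.cx, 10, L" disk almost full", 17);

SelectObject(hdc, oldFont);
```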
The same principle applies in Direct3D. Each chunk of text may require its own ID3DXFont (or your own font mechanism if ID3DXFont isn't sufficient), its own color, and so on. You set state, draw text, set state, draw text, etc.
Now, if you want to do your own text rendering, you could do fancier things with a shader, but it's probably not worth it unless you need really high-quality typography.
In Windows 7 (back-filled for Vista once Windows 7 ships), you'll be able to use DirectWrite for high quality typography in a Direct3D rendering context (D3D10 and later).
If you're using Direct3D 9 or earlier, you really do have big problems:
1. You can always use GDI: render your text into an HDC, copy it to a Direct3D dynamic texture, and use a shader to turn the background into transparency, for example via a color key.
2. Take a look at the FreeType library; you may find it more suitable than GDI, but it really will not be easy.
Either way, even if you find a good way of rendering text, you will inevitably have problems with small fonts because of Microsoft's ClearType technology. First, you won't be able to simply build an atlas of small glyphs and compose text from it, because each glyph is rendered using subpixel rendering; you will probably need some special shader... or you will be forced to give up ClearType, which, as I said before, has bad consequences for small fonts. Second, the technology is patented by Microsoft.
But if you do not need to support Windows XP, you can try Direct2D with DirectWrite and DirectX 10 (I didn't try 11). I tried them and they showed good results; just avoid switching between Direct2D and Direct3D contexts too often, because that can seriously degrade performance. To render into your own texture you can use the ID2D1Factory::CreateDxgiSurfaceRenderTarget function.
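Roughly like this, if I remember correctly (the D3D10.1 device needs D3D10_CREATE_DEVICE_BGRA_SUPPORT, and the texture needs DXGI_FORMAT_B8G8R8A8_UNORM with render-target binding; variable names are illustrative):

```cpp
// Get a DXGI surface from the Direct3D texture you want to draw text into.
IDXGISurface* dxgiSurface = nullptr;
pTexture->QueryInterface(__uuidof(IDXGISurface), reinterpret_cast<void**>(&dxgiSurface));

D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED));

ID2D1RenderTarget* pTextTarget = nullptr;
pD2DFactory->CreateDxgiSurfaceRenderTarget(dxgiSurface, &props, &pTextTarget);

// DirectWrite text drawing goes between BeginDraw()/EndDraw() on pTextTarget;
// afterwards the texture can be sampled from Direct3D as usual.
// Release dxgiSurface and pTextTarget when done.
```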
I suggest measuring the performance of rendering to a bitmap and uploading to a texture.
I suspect that this is pretty much what D3DXFont has to do behind the scenes anyway.
You could probably cache letters/phrases etc. if needed.
Does anyone have any tutorials/info on creating and rendering fonts in native DirectX 9 without using GDI (e.g. without ID3DXFont)?
I'm reading that this isn't the best solution (due to it accessing GDI), but what is the 'right' way to render fonts in DX?
ID3DXFont is great for easy-to-use, early debug output. However, it does use GDI for font rasterization (not hardware accelerated) and there is a significant performance hit (try it, it's actually very noticeable). As of DirectX 11, though, fonts will be rendered with Direct2D and be hardware accelerated.
The fastest way to render text is to use what are called "bitmap fonts". I would explain how to do this, except that there are a lot of different ways to implement the technique, each differing in complexity and capability. It can be as simple as a system that loads a pre-created texture and draws the letters from that, or a system that silently registers a font with Windows and creates a texture in memory at load time (the engine I developed with a friend did this, and it was very slick). Either way, you should see a very noticeable performance increase with bitmap fonts.
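As a very rough sketch of the first variant (a pre-created 16×16 grid of ASCII glyphs in one texture; the struct, the function, and the fixed-grid layout are assumptions, not any particular engine):

```cpp
#include <vector>

struct GlyphVertex
{
    float x, y, z, rhw;   // D3DFVF_XYZRHW position (pre-transformed screen space)
    float u, v;           // D3DFVF_TEX1 texture coordinates into the glyph atlas
};

// Appends two triangles (one quad) per character; draw them later with the atlas texture bound.
void AppendText(std::vector<GlyphVertex>& out, const char* text,
                float x, float y, float glyphW, float glyphH)
{
    const float cell = 1.0f / 16.0f;              // each glyph occupies 1/16 of the atlas
    for (const char* p = text; *p; ++p, x += glyphW)
    {
        const unsigned char c = static_cast<unsigned char>(*p);
        const float u0 = (c % 16) * cell, v0 = (c / 16) * cell;
        const float u1 = u0 + cell,       v1 = v0 + cell;

        const GlyphVertex quad[6] = {
            { x,          y,          0.0f, 1.0f, u0, v0 },
            { x + glyphW, y,          0.0f, 1.0f, u1, v0 },
            { x,          y + glyphH, 0.0f, 1.0f, u0, v1 },
            { x + glyphW, y,          0.0f, 1.0f, u1, v0 },
            { x + glyphW, y + glyphH, 0.0f, 1.0f, u1, v1 },
            { x,          y + glyphH, 0.0f, 1.0f, u0, v1 },
        };
        out.insert(out.end(), quad, quad + 6);
    }
}
```

Per-glyph widths, kerning and Unicode are exactly where the complexity explodes, which is why implementations of this technique differ so much.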
Why isn't this a good solution?
Mixing GDI rendering and D3D rendering into the same window is a bad idea.
However, ID3DXFont does not do that. It uses GDI to rasterize the glyphs into a texture, and then uses that texture to render the actual text.
About the only alternative would be using another library (e.g. FreeType) to rasterize glyphs into a texture, but I'm not sure if that would result in any substantial benefits.
Of course, for simple (e.g. non-Asian) fonts you could rasterize all glyphs into a texture beforehand, then use that texture to draw text at runtime. That way the runtime does not need any font rendering library; it just draws quads using the texture. This approach does not scale well to large font sizes or fonts with lots of characters, and it would not handle complex typography very well (e.g. scripts where letters have to be joined).
With DirectX, the correct way to render standard fonts is with GDI.
However, IF
You want to support cross-platform font rendering
with proper support for internationalization, including Far Eastern languages where maintaining a glyph for every character in a font is impractical,
and/or You want to distribute your own fonts and render them without "installing" them...
Then libfreetype might be what you are looking for. I don't claim it's easy: it's a lot more complex than using the native font API.
Personally I think that ID3DXFont is the way to go.
If you really wanted to make your own font routines, I suggest you look at:
http://creators.xna.com/en-us/utilities/bitmapfontmaker
You can use this to create a bitmap with all the characters printed on it. Then it's just a matter of loading the texture and blitting the relevant characters onto the screen in the right place. (This is what XNA uses for its font drawing.)
It's a lot more work, but you don't need the font to be installed on the target PC, and you have the advantage of being able to go into Photoshop and edit the font's appearance there.