I have a little problem. I am developing a SkinEngine that allows Delphi VCL applications to be skinned. For this goal I developed a new file format (mSkin) to host my skin data. A skin file contains two headers: the first holds some information about the colors used by the skin, the second holds the bitmap used by the skin (an alpha-channel bitmap, to support transparency). In my control I use a function to extract the object's bitmap from the skin bitmap (mSkin.Bitmap) and draw that bitmap onto the control. The problem is that when the bitmap is not shaped I get bad quality when scaling the source bitmap. The size of the object bitmap is proportional to the control size (when the control size changes, the bitmap size changes too).
I tried to read the VCL Styles code to solve the problem, but it seems very difficult to follow.
Is there a way to copy a bitmap while maintaining the quality?
If you use bitmaps you simply can't do scaling without the problems you have. If you want scaling where, e.g., a one-pixel border stays a one-pixel border, then you have to use a vector-based format for your images.
You need to divide the image into 9 different bitmaps, like a 3x3 grid. Then you only scale the middle one; the rest stay the same size but move. This link is for Android, but the same principles apply.
Here is another link. This is for Flash, but it also explains the principle.
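Not from the answers above, but as a rough illustration of that 3x3 ("nine-slice") idea: a minimal Win32 GDI sketch under the assumption of a fixed border width, with made-up names (DrawNineSlice, corner). The question is in Delphi, but the same StretchBlt call exists in the VCL's Windows unit; for an alpha-channel skin bitmap you would use AlphaBlend instead of StretchBlt.

    #include <windows.h>

    // Hypothetical sketch of 9-slice drawing with plain Win32 GDI.
    // srcDC holds the skin bitmap, dstDC is the control's DC,
    // 'corner' is the width/height of the fixed border in the source.
    void DrawNineSlice(HDC dstDC, int dstW, int dstH,
                       HDC srcDC, int srcW, int srcH, int corner)
    {
        int c = corner;
        // Column/row boundaries in source and destination coordinates.
        int sx[4] = { 0, c, srcW - c, srcW };
        int sy[4] = { 0, c, srcH - c, srcH };
        int dx[4] = { 0, c, dstW - c, dstW };
        int dy[4] = { 0, c, dstH - c, dstH };

        for (int row = 0; row < 3; ++row)
            for (int col = 0; col < 3; ++col)
                // Corners copy 1:1, edges stretch along one axis,
                // only the center cell stretches in both directions.
                StretchBlt(dstDC,
                           dx[col], dy[row],
                           dx[col + 1] - dx[col], dy[row + 1] - dy[row],
                           srcDC,
                           sx[col], sy[row],
                           sx[col + 1] - sx[col], sy[row + 1] - sy[row],
                           SRCCOPY);
    }

Because the corners and edge thicknesses never change size, a one-pixel border stays a one-pixel border at any control size.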
Try to use a resampling algorithm.
For upscaling, I very much like the B-spline.
For simple content like yours, the hqnx family sometimes gives good results, and is very fast to render (even in real time). For some Pascal source code, you may take a look at this forum thread.
See also this more general question.
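Not one of the filters mentioned above, but as a cheap baseline before reaching for a real resampler: GDI's built-in HALFTONE stretch mode already looks much better than the default mode. A minimal sketch (the DC handles and sizes are placeholders):

    #include <windows.h>

    // Sketch: switch StretchBlt to HALFTONE averaging before scaling.
    // This is not a B-spline or hqnx resampler, just a quick improvement.
    void StretchWithHalftone(HDC hdcDst, int dstW, int dstH,
                             HDC hdcSrc, int srcW, int srcH)
    {
        int oldMode = SetStretchBltMode(hdcDst, HALFTONE);
        SetBrushOrgEx(hdcDst, 0, 0, NULL);   // required after setting HALFTONE
        StretchBlt(hdcDst, 0, 0, dstW, dstH,
                   hdcSrc, 0, 0, srcW, srcH, SRCCOPY);
        SetStretchBltMode(hdcDst, oldMode);
    }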
I am using the EM_FORMATRANGE message to render the output of a rich text control to an arbitrary device context. However, when rendering to a bitmap, the dots-per-inch of the bitmap's device context is the same as the display device's DPI, which is 96 dots-per-inch. This is much lower than what I would like to render to. I'd rather render at a much higher DPI so that the user can zoom in, and perhaps print on a high-DPI printer later.
I suspect what happens is that the RTF control calls GetDeviceCaps with LOGPIXELSX and LOGPIXELSY to get the number of pixels per inch of the device. It then renders the document using this DPI value at a 100% zoom level. Windows display devices always return a value of 96 DPI, unless large fonts are being used on the system (as set in Control Panel) and the application is DPI-aware.
Many examples on the Internet propose scaling the output of EM_FORMATRANGE. This is so that any arbitrary DPI resolution can be achieved. Most examples generally involve using SetMapMode, SetWindowExtEx, and SetViewportExtEx (e.g. see http://social.msdn.microsoft.com/Forums/en-us/netfxbcl/thread/37fd1bfb-f07b-421d-9b5e-5f4492ffbbc3). These functions can be used to scale the rich text control's rendered output: for example, if I specify 400% scaling, then if the rich text control rendered something that was 5 pixels wide, it would actually become 20 pixels wide.
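A minimal sketch of the map-mode scaling those examples describe, assuming a 400% zoom and leaving out the FORMATRANGE plumbing (the function name and parameters are illustrative):

    #include <windows.h>

    // Sketch: set up an anisotropic mapping so that everything the rich edit
    // control draws in logical units is multiplied by scalePercent/100 on its
    // way to the device.
    void SetupScaledMapping(HDC hdc, int scalePercent)
    {
        SetMapMode(hdc, MM_ANISOTROPIC);
        SetWindowExtEx(hdc, 100, 100, NULL);                      // logical extent
        SetViewportExtEx(hdc, scalePercent, scalePercent, NULL);  // device extent
    }

With this mapping in place, EM_FORMATRANGE is sent as usual (its rectangles are expressed in twips); the rounding problem described below is that the control still positions glyphs on the 96 DPI logical grid before the multiplication happens.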
Unfortunately, the old GDI functions use integers instead of floating point numbers. For example, suppose the RTF control decided that an element should be drawn at (12.7, 15.3) pixels. This would be rounded to a position of (13, 15). These rounded coordinates are passed to GDI, which then scales up the image using scaling specified by SetMapMode: for the example of 400%, it would be (13*4, 15*4), or (52, 60). But this is not accurate: the element would have better been placed at (12.7*4, 15.3*4), or (51, 61). The worst part is that for some cases, the error becomes cumulative.
I believe this is the underlying cause of this very noticeable error when scaling some simple text:
The above example is 8 point Segoe UI, scaled to 400% using EM_FORMATRANGE and SetMapMode on a 96 DPI display device context. The text has now become 32 point size, but the space between each character is too high and looks unnatural.
The above example was created in WordPad by entering the text as 8 point Segoe UI and then using the zoom control to set to a 400% zoom level. The space between each character looks normal. The exact same result is achieved with a 32 point font and 100% zoom level.
To work around this issue, I have tried the following. For each thing tried, the result has been identically unsatisfactory when scaled to 400%.
Using a scaling transform set with SetWorldTransform instead of the scaling done with SetMapMode, SetWindowExtEx, etc. (a minimal sketch of this variant follows the list).
Passing the device context for a metafile to EM_FORMATRANGE, and then scaling the metafile later.
Using SetMapMode to scale in conjunction with rendering to a metafile, and then showing the metafile later without scaling.
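For reference, the SetWorldTransform variant from the first item looks roughly like this (the 4x factor is the same 400% example; it produced the same artifacts):

    #include <windows.h>

    // Sketch: a plain scale-only world transform on the target DC.
    void SetupWorldTransformScaling(HDC hdc, float scale)
    {
        SetGraphicsMode(hdc, GM_ADVANCED);   // required before SetWorldTransform
        XFORM xf = { scale, 0.0f, 0.0f, scale, 0.0f, 0.0f };
        SetWorldTransform(hdc, &xf);
    }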
I believe the results are always unsatisfactory because the problem boils down to the fact that the rich edit control is rounding to the nearest integer and rendering to what it thinks is a 96 DPI device - ignoring the transforms in place. I looked into the metafile format and what I discovered is that the individual character positions are actually stored in the metafile at pixel-level resolution - that's why scaling the metafile obviously didn't work since the rounding has already happened by that point.
I can think of two real solutions that would work around this issue:
Use a device context with a higher user-specified dots per inch, such that GetDeviceCaps returns different values. (Note: some examples propose using the printer device since they generally have higher DPI, but I want my code to work on systems that don't have a printer and be able to render to an off-screen buffer).
Some way to tell the rich edit control to assume the device context has a different dots per inch than reported by GetDeviceCaps.
Anything else seems like it would still be subject to these rounding errors.
Does anyone (1) have an idea of how to implement either of the solutions I have proposed, or (2) have an alternate idea of how to achieve my goal of getting an accurate high-DPI output into a buffer?
I'm having the exact same problem.
A quick solution is to draw the text into a 100% scale bitmap, and then just scale the bitmap.
It's not the best solution, but it might work for you.
Did you find any better solutions? If so, please share them here.
Also note that this problem also occurs when you draw the text to a 100% metafile and then scale the metafile to the screen - I believe this has something to do with GDI text-drawing functions not working well with scaling.
Roey
You could multiply the point size of all the text in the control by a factor of 4 and render the control to a bitmap that's 4 times larger.
If you're populating the control yourself this would be quite straightforward. If you support arbitrary content entered by the user it would be a lot more work and would require extra effort to handle anything that wasn't text (e.g. embedded bitmaps).
I just spent two weeks on a similar problem. I needed a rich edit that was scalable for WYSIWYG editing. As we've found, the Windows rich edit control does not support scaling correctly with EM_FORMATRANGE: inter-character spacing does not change between zoom levels, and fonts only scale in discrete font-size steps.
Since I did not need large differences in scale, the solution I settled on was to use the windowless text edit interfaces from ITextServices to render to an internal bitmap at a fixed resolution. Then I used GDI+ to resample the internal bitmap to the needed screen size with trilinear filtering. The result emulated a scalable rich edit well enough as long as the scale difference was not too large; it was good enough for my needs.
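A rough sketch of the GDI+ resampling half of that approach (the names are placeholders; GDI+ exposes bilinear/bicubic interpolation modes rather than one literally called trilinear, so high-quality bicubic is used here):

    #include <windows.h>
    #include <gdiplus.h>

    // Sketch: resample the fixed-resolution offscreen bitmap onto the target
    // DC with high-quality filtering. 'hbmFixed' is the bitmap the windowless
    // rich edit was rendered into. Assumes GdiplusStartup was already called.
    void DrawResampled(HDC hdcTarget, HBITMAP hbmFixed, int dstW, int dstH)
    {
        Gdiplus::Graphics g(hdcTarget);
        g.SetInterpolationMode(Gdiplus::InterpolationModeHighQualityBicubic);
        Gdiplus::Bitmap bmp(hbmFixed, NULL);   // NULL = no palette
        g.DrawImage(&bmp, 0, 0, dstW, dstH);
    }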
After trying many different options I am convinced you cannot get precise scaling with the Windows rich edit control. You can write your own control that renders text; however, you would need a separate draw call for every piece of text with a different style. You would also need to handle all the niceties rich edit handles for you, like highlighting text, placing the cursor, handling mouse and keyboard input, parsing RTF text, et cetera. It would probably be best just to buy a third-party component in this case (I could not find any suitable free open-source components). In case someone wants to attempt it, I will point out the relevant starting points for text rendering with the different APIs.
GDI - TextOut does not set inter-character spacing correctly. You need GetCharacterPlacement and ExtTextOut. You also need to calculate scaling yourself. You probably don't want to use GDI.
GDI+ - DrawString handles scaling correctly. GDI+ is a reasonable option (a minimal sketch follows this list).
DirectWrite - If you are willing to limit yourself to Vista Platform Update or later, DirectWrite is the newest text API from Microsoft.
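For the GDI+ route, a minimal sketch of drawing text under a scale transform (the font, zoom factor, and names are just placeholders; it assumes GdiplusStartup has already been called):

    #include <windows.h>
    #include <gdiplus.h>

    // Sketch: DrawString under a 4x scale transform keeps sub-pixel glyph
    // positions, so inter-character spacing stays correct at any zoom.
    void DrawScaledText(HDC hdc, const WCHAR* text)
    {
        using namespace Gdiplus;
        Graphics g(hdc);
        g.SetTextRenderingHint(TextRenderingHintAntiAlias);
        g.ScaleTransform(4.0f, 4.0f);                 // 400% zoom
        Font font(L"Segoe UI", 8.0f);                 // 8pt, as in the example above
        SolidBrush brush(Color(255, 0, 0, 0));        // opaque black
        g.DrawString(text, -1, &font, PointF(0.0f, 0.0f), &brush);
    }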
Also, here is a link describing how text rendering differs between GDI and GDI+:
http://windowsclient.net/articles/gdiptext.aspx
Try using the EM_SETZOOM message to let the rich edit control scale the output itself.
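For example (hwndRichEdit is whatever your control's handle is; the two parameters form a numerator/denominator zoom ratio, so 4/1 is 400%):

    #include <windows.h>
    #include <richedit.h>

    // Ask the rich edit control itself to render at 400%.
    SendMessage(hwndRichEdit, EM_SETZOOM, (WPARAM)4, (LPARAM)1);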
I'll first tell you the problem and then I'll tell you my solution.
Problem: I have a blank white PNG image, approximately 900x900 pixels. I want to copy onto it small 30x30 pixel images, which are essentially circles in different colours. There are 8 different circles, placed on the image depending on data values which I've created elsewhere.
Solution: I've used ImageMagick; it's supposed to be good for general-purpose image editing, etc. I created a blank image:
Magick::Image outimage("900x900", "white");
I load all the other small 30x30 pixel images with the read function.
I load the data and extract values.
I place the small 'circle' images on the blank one using the composite command.
outimage.composite(circleImage, pixelx, pixely, Magick::InCompositeOp); // circleImage: one of the 30x30 images read earlier
This all works fine and the images come out the way I want them to.
However, it's painfully SLOW. It takes 20 seconds to do one image, and I have 1000 of them. Surely there must be a better way to do this. I've seen other researchers simulate far more complex images far faster. It's quite possible I took the wrong approach. Maybe I should be 'drawing' circles instead of 'pasting' them, or something. I'm quite baffled. Any input is appreciated.
I suspect that you just need some library that is capable of drawing circles on a bitmap and saving that bitmap as PNG.
For example, my Graphin library: http://code.google.com/p/graphin/
Or some such. With Graphin you can also draw one PNG on the surface of another, as in your case.
You did not give any information about the platform you are using (only "C++"), so if you are looking for a platform independent solution, the CImg library might be worth a try.
http://cimg.sourceforge.net/
By the way, did you try drawing the circles using the ImageMagick C++ API Magick++ instead of "composing" them? I cannot believe that it is that slow.
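In case it helps, a minimal sketch of what drawing (rather than compositing) could look like with Magick++; the coordinates, colour, and output file name are placeholders:

    #include <Magick++.h>
    #include <list>

    int main(int argc, char** argv)
    {
        Magick::InitializeMagick(*argv);

        Magick::Image outimage("900x900", "white");

        // Draw one filled 30x30 circle per data point instead of compositing
        // a pre-made PNG; the colour would come from the data value.
        std::list<Magick::Drawable> ops;
        ops.push_back(Magick::DrawableFillColor(Magick::Color("red")));
        ops.push_back(Magick::DrawableStrokeColor(Magick::Color("red")));
        // Circle centred at (100, 100) with a perimeter point at (115, 100),
        // i.e. a 15-pixel radius (30 pixels across).
        ops.push_back(Magick::DrawableCircle(100, 100, 115, 100));
        outimage.draw(ops);

        outimage.write("output.png");
        return 0;
    }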
I am currently using a QGraphicsItem that I am loading a pixmap into to display some raster data. I am currently not doing any tiling or anything of the sort, but I have overridden my QGraphicsItem so that I can implement features like zooming under the mouse, tracking which pixel I am hovering over, etc.
My files that are coming off the disk are 1-2GB in size, and I would like to figure out a more optimal way of displaying them. For starters, it doesn't seem like I could display them all at once even if I wanted to, because the QImage that I am using (QPixmap->QImage->QGraphicsItem) seems to fail at any pixel index over 32,xxx (16-bit).
So how should I implement tiling here if I want to keep using a single QGraphicsItem? I don't think I want to use multiple QGraphicsItems to hold the displayed data plus the neighboring data "about to be displayed". That would require me to scale them all when the user moused over and tried to scale a single tile, and thus also force me to reposition everything, right? I guess this will also require having some knowledge about exactly what data to fetch from the file.
I am, however, open to ideas. I also suppose it would be nice to do this in some kind of threaded way, so that the user can keep panning or zooming the image even if all the tiles are not loaded yet.
I looked at the 40000 Chips demo, but I am not sure that is what I am after; it looks like it basically still displays all of the chips like you normally would in a scene, just overriding the paint method to supply a lower level of detail... or did I miss something about that demo?
It's not too surprising that there would be difficulty handling images that size. Qt just isn't designed for it and there are possibly other contributing factors due to the particular OS and perhaps the way memory is managed.
You very clearly need (or at least, should use) a tiling mechanism. Your main issue is that you need a way to access your image data that does not involve using a QImage (or QPixmap) to load the entire thing, since it has already been determined that that fails.
You would either need to find a method (library) that can load the entire image into memory and allow you to pull regions of image data out of it, or load only a specific region from the file on disk. You would also need the ability to resize very large regions down to lower-resolution sections when trying to "zoom" out on any part of the image. Unfortunately, I have never done image processing like this, so I am unfamiliar with what library options are available; Qt likely won't be able to help you directly with this.
One option you might explore however is using an image editing package to break your large image up into more manageable chunks. Then perhaps a QGraphicsView solution similar to the chip demo would work.
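Not the chip demo's approach, but a rough sketch of what a single tiling QGraphicsItem could look like: paint() only touches the tiles that intersect the exposed rectangle, and tiles are cached and fetched on demand. All of the names here (TiledRasterItem, loadTile, kTileSize) are made up, and loadTile is a stub standing in for whatever reads a region of the 1-2GB file.

    #include <QGraphicsItem>
    #include <QHash>
    #include <QPainter>
    #include <QPixmap>
    #include <QStyleOptionGraphicsItem>

    // Sketch of a QGraphicsItem that paints a huge raster as fixed-size tiles.
    class TiledRasterItem : public QGraphicsItem
    {
    public:
        TiledRasterItem(qint64 width, qint64 height) : m_w(width), m_h(height)
        {
            // Ask the view for an accurate exposedRect instead of the whole item.
            setFlag(QGraphicsItem::ItemUsesExtendedStyleOption);
        }

        QRectF boundingRect() const override { return QRectF(0, 0, m_w, m_h); }

        void paint(QPainter *p, const QStyleOptionGraphicsItem *opt, QWidget *) override
        {
            const int kTileSize = 512;
            QRect exposed = opt->exposedRect.toAlignedRect();
            // Visit only the tiles that intersect the exposed region.
            for (int row = exposed.top() / kTileSize; row <= exposed.bottom() / kTileSize; ++row)
                for (int col = exposed.left() / kTileSize; col <= exposed.right() / kTileSize; ++col)
                    p->drawPixmap(col * kTileSize, row * kTileSize,
                                  tile(col, row, kTileSize));
        }

    private:
        QPixmap tile(int col, int row, int size)
        {
            quint64 key = (quint64(quint32(row)) << 32) | quint64(quint32(col));
            if (!m_cache.contains(key))
                m_cache.insert(key, loadTile(col, row, size)); // could run in a worker thread
            return m_cache.value(key);
        }

        // Stub: a real implementation would decode only the requested
        // region of the large file on disk.
        QPixmap loadTile(int /*col*/, int /*row*/, int size)
        {
            QPixmap pm(size, size);
            pm.fill(Qt::gray);
            return pm;
        }

        qint64 m_w, m_h;
        QHash<quint64, QPixmap> m_cache;
    };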
I am trying to create a piece of software that can be used to create VERY large (10000x10000) sized bitmaps. All I need is something that can work in monochrome, since the required output is a matrix containing details of black and white pixels in the bitmap. The closest thing I can think of is a font editor, but the size is a problem.
Is there any library out there that I can use to create the software, or will I have to write the whole thing from the start?
Edited on May 25: OK, so I've been searching around and I have found that using the GtkTree widget is a good way to create grids. Has anybody tried that with the large sizes that I require? And if so, can it be made to look like a drawing surface rather than a spreadsheet-like view?
Why don't you use bitmap objects, like gdk pixmaps if you use GTK?
10,000 x 10,000 pixels with a depth of 1 (monochrome) is 100,000,000 bits, which is 12,500,000 bytes, around 12 megabytes.
Not that large.
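To put that arithmetic into code, a minimal sketch of a packed 1-bit-per-pixel matrix (just an illustration, not tied to GTK or any other toolkit):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // 10,000 x 10,000 monochrome pixels packed 8 to a byte: ~12.5 MB.
    class MonoBitmap
    {
    public:
        MonoBitmap(std::size_t w, std::size_t h)
            : width(w), height(h), bits((w * h + 7) / 8, 0) {}

        void set(std::size_t x, std::size_t y, bool black)
        {
            std::size_t i = y * width + x;
            if (black) bits[i / 8] |= static_cast<std::uint8_t>(1u << (i % 8));
            else       bits[i / 8] &= static_cast<std::uint8_t>(~(1u << (i % 8)));
        }

        bool get(std::size_t x, std::size_t y) const
        {
            std::size_t i = y * width + x;
            return (bits[i / 8] >> (i % 8)) & 1u;
        }

    private:
        std::size_t width, height;
        std::vector<std::uint8_t> bits;
    };

    // Usage: MonoBitmap bmp(10000, 10000); bmp.set(42, 17, true);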
Summary
Using Windows GDI to convert 24-bit color to indexed color, it seems GDI chooses colors which are "close enough" even though there are exact matches in the supplied palette.
Can anyone confirm this as a GDI issue or am I making a mistake somewhere?
Maybe there's a "please check the whole palette for color matches" flag which I've failed to find?
Note: This is not about quantizing. The source is 24-bit but contains 256 or fewer colors so an exact palette is trivial to calculate. The problem is GDI doesn't use the full palette.
Workaround
I've worked around the problem by mapping the colors myself but I'd prefer to use GDI as it should be better optimized. Problem is, it seems to be "fast but wrong."
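For reference, the manual mapping is roughly this (a sketch only; it assumes every source color really does appear in the palette, as described below):

    #include <windows.h>
    #include <map>
    #include <vector>

    // Build an exact RGB -> palette-index map and convert the 24-bit pixels
    // ourselves, instead of letting GDI pick "close enough" entries.
    std::vector<BYTE> MapToIndexed(const std::vector<RGBQUAD>& pixels,
                                   const std::vector<RGBQUAD>& palette)
    {
        std::map<DWORD, BYTE> lookup;
        for (size_t i = 0; i < palette.size(); ++i)
        {
            DWORD key = (palette[i].rgbRed << 16) |
                        (palette[i].rgbGreen << 8) |
                         palette[i].rgbBlue;
            lookup[key] = static_cast<BYTE>(i);
        }

        std::vector<BYTE> indexed(pixels.size());
        for (size_t i = 0; i < pixels.size(); ++i)
        {
            DWORD key = (pixels[i].rgbRed << 16) |
                        (pixels[i].rgbGreen << 8) |
                         pixels[i].rgbBlue;
            indexed[i] = lookup[key];   // an exact match is assumed to exist
        }
        return indexed;
    }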
Detailed description
My source image is 24-bit but uses 256 (or fewer) colors. I generate an exact palette for it and ask GDI to transfer the image into an indexed bitmap using that palette. For some pixels GDI chooses similar, but not exact, colors even though there are exact colors elsewhere in the palette. This ruins smooth gradients.
This problem happens with:
SetDIBitsToDevice
StretchDIBits
BitBlt
StretchBlt
The problem does not happen with:
SetPixel or SetPixelV in a loop (incredibly slow!)
Using my own code to do the mapping
I've tested this on:
Windows 7 (NVidia hardware/drivers)
Windows Vista (ATI hardware/drivers)
Windows 2000 (VMware hardware/drivers)
In every test I get the same results. (Not just the wrong colors, but always the same wrong colors.)
I don't think the issue is color management (ICM/ICC profiles/etc.) as most of the APIs say they don't use it, I've tried explicitly turning it off on the GDI DC as well as via the V5 bitmap header, and I don't think it would apply within my vanilla Win2k VM.
Test Project
Code for a simple Win32/GDI/VS2008 test project can be found here:
http://www.pretentiousname.com/data/GdiIndexColor.zip
The Test1 function within Win32UI.cpp is the actual test. It has two arrays of RGBQUADs, one the source image and the other the exact palette for it. It verifies that the palette really is exact and then asks GDI to convert the image using the APIs mentioned above, testing the result each time. For each test it'll tell you the first incorrect pixel's before & after colors, or tell you that all pixels are correct if it worked.
Thanks!
Thanks for reading my question! Sorry if it's the result of me doing something really dumb! :-)
I ran into this exact same problem, eventually contacted Microsoft and provided them with a test case. In the test case I provided a gradient image that had 128 colors in a 24-bit DIB; I then converted that to an 8-bit DIB created with a color table containing all 128 colors from the 24-bit image. After conversion, the 8-bit image had used only 65 of the 128 colors.
To sum up their response:
This is not a bug; GDI does use a "close enough" calculation when down-converting the color depth of an image. This is not really documented anywhere, and the only way to ensure all of the original colors convert exactly is to manipulate the pixels yourself.
Are you using SetDIBColorTable()? This article seems to imply that, when drawing to a DIB, it is not sufficient to call SelectPalette() but that SetDIBColorTable() also needs to be called to set the palette for the DIB:
However, if the application is using a DIB section, you create a logical palette from the DIB colour table as usual and then also pass the DIB colour table to the DIB section with a call to SetDIBColorTable(). Despite what the "Platform SDK" documentation of RealizePalette() appears to imply, RealizePalette() does not adjust the colour table of the DIB section.
The article contains some more information on drawing into palettized DIBs that may be relevant (see the section "Palettes and DIB sections").
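A minimal sketch of the call sequence the article describes, assuming an 8-bit DIB section and a 256-entry exact palette (the names and sizes are illustrative):

    #include <windows.h>
    #include <cstdlib>
    #include <cstring>

    // Sketch: create an 8-bit DIB section and give its own color table the
    // exact 256-entry palette, in addition to any SelectPalette/RealizePalette
    // done on the DC.
    HBITMAP Create8bppDib(HDC hdcMem, int width, int height, const RGBQUAD* pal256)
    {
        BITMAPINFO* bmi = (BITMAPINFO*)calloc(1,
            sizeof(BITMAPINFOHEADER) + 256 * sizeof(RGBQUAD));
        bmi->bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi->bmiHeader.biWidth       = width;
        bmi->bmiHeader.biHeight      = -height;   // top-down
        bmi->bmiHeader.biPlanes      = 1;
        bmi->bmiHeader.biBitCount    = 8;
        bmi->bmiHeader.biCompression = BI_RGB;
        memcpy(bmi->bmiColors, pal256, 256 * sizeof(RGBQUAD));

        void* bits = NULL;
        HBITMAP hbm = CreateDIBSection(hdcMem, bmi, DIB_RGB_COLORS, &bits, NULL, 0);
        SelectObject(hdcMem, hbm);

        // The step the article stresses: set the DIB section's own color table.
        SetDIBColorTable(hdcMem, 0, 256, pal256);

        free(bmi);
        return hbm;
    }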
I vaguely remember that you also need to call RealizePalette(hdc) after a palette is selected into a DC. We ditched our palette code so long ago that the code isn't even in our source tree anymore. I see from your code that you already tried this, but I suggest that you might want to play with that some more.
I do remember that the palette code was pretty fragile, and we stopped using it as soon as we could.
Some older AVI files would have 8-bit palettized video with a palette embedded in the file, so playback code for those files would need to load and realize a palette. I remember that realizing didn't do anything unless you were the foreground app, but that SHOULD only apply to screen DCs and not memory DCs.
If you search around for sample source code that can play palettized AVIs, you might find something that shows the magic formula for getting palettes to work.
Sorry I can't be more help.