Change the unit for setting position in TextOut, C++

I'm currently working on a printing plugin in C++, and I started working with TextOut to print the text I want. It works great, but apparently the positions that TextOut takes as parameters are in pixels. Is there a way to set them to be in cm or mm, or any other unit?

Well, it's pretty simple. The coordinates are not in pixels; they are in the coordinates of your mapping mode. It just so happens that the default mapping mode of a DC is MM_TEXT, in which each logical unit is one pixel on the device.
Change your mapping mode using SetMapMode() to the coordinate system you prefer. You can also play with window extents, viewport extents, and origins to customize it however you want. Have a look at the documentation for SetMapMode() and the MM_LOMETRIC (or MM_HIMETRIC) mapping mode.
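For example, a minimal sketch (assuming hdc is a valid device context): in MM_LOMETRIC each logical unit is 0.1 mm, x grows to the right and y grows upward, so positions down the page use negative y.
SetMapMode(hdc, MM_LOMETRIC);           // logical unit = 0.1 mm, y axis points up
TextOut(hdc, 200, -100, L"Hello", 5);   // 20 mm from the left, 10 mm down from the origin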

Printing needs special handling. Basically, you need to perform the conversion based on HIMETRIC units; the paper size is in HIMETRIC units.
Here is some code (MFC-based) that will help you get started:
if (pDC->IsPrinting())
{
    // printable area in millimeters
    int nWidth = pDC->GetDeviceCaps(HORZSIZE);
    int nHeight = pDC->GetDeviceCaps(VERTSIZE);

    CDC ScreenDC;
    ScreenDC.CreateIC(_T("DISPLAY"), NULL, NULL, NULL);
    int nPixelsPerInchX = ScreenDC.GetDeviceCaps(LOGPIXELSX);
    int nPixelsPerInchY = ScreenDC.GetDeviceCaps(LOGPIXELSY);

    // paper size is in HIMETRIC units. we need to convert
    CSize PaperSize(MulDiv(nWidth, nPixelsPerInchX * 100, HIMETRIC_PER_INCH),
                    MulDiv(nHeight, nPixelsPerInchY * 100, HIMETRIC_PER_INCH));

    // now we need to calculate zoom ratio so the layer content fits on page
    double fZoomX = (double)PaperSize.cx / (double)m_DocSize.cx;
    double fZoomY = (double)PaperSize.cy / (double)m_DocSize.cy;
    m_PrintZoom = min(fZoomX, fZoomY);

    ResetViewSize(TRUE);

    if (pDC->IsKindOf(RUNTIME_CLASS(CPreviewDC)))
    {
        pDC->SetMapMode(MM_ANISOTROPIC);
        pDC->SetWindowExt(nPixelsPerInchX, nPixelsPerInchY);
        pDC->SetViewportExt(pDC->GetDeviceCaps(LOGPIXELSX), pDC->GetDeviceCaps(LOGPIXELSY));
        pDC->SetViewportOrg(0, 0);
        pDC->SetWindowOrg(0, 0);
    }
}
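If HIMETRIC_PER_INCH is not already defined in your project, it is simply the number of 0.01 mm units per inch:
#ifndef HIMETRIC_PER_INCH
#define HIMETRIC_PER_INCH 2540   // 1 inch = 25.4 mm = 2540 HIMETRIC units
#endif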

Related

TextOut to the center of Device Context

It frustrates me how difficult this is for me to figure out. I have an MFC application with a print feature. When I hit Print, I want to create a PDF file with a single word in the center of an A4 page. I am using a CPrintDialog, and I actually managed to obtain my result on my machine (by hardcoding the coordinates for TextOut).
I want this software to work on any type of device context. My guess is that there has to be a way to get the logical coordinates (which TextOut expects) out of the CDC. I was playing around with GetDeviceCaps (with HORZSIZE, HORZRES, LOGPIXELSX, ASPECTX, PHYSICALWIDTH), but none of these values even comes close to my hardcoded values. The fact that the y coordinate I use is a negative value baffles me even more.
Is there a generic way to find the center of the device context?
Here are the values I see:
const int logPixY = dcPrinter.GetDeviceCaps(LOGPIXELSY); //600
const int logPixX = dcPrinter.GetDeviceCaps(LOGPIXELSX); //600
const int horzSize = dcPrinter.GetDeviceCaps(HORZSIZE); //216
const int vertSize = dcPrinter.GetDeviceCaps(VERTSIZE); //279
const int horzRez = dcPrinter.GetDeviceCaps(HORZRES); //5100
const int vertRez = dcPrinter.GetDeviceCaps(VERTRES); //6600
const int pWidth = dcPrinter.GetDeviceCaps(PHYSICALWIDTH); //5100
const int pHeight = dcPrinter.GetDeviceCaps(PHYSICALHEIGHT);//6600
const int aspectX = dcPrinter.GetDeviceCaps(ASPECTX); //600
const int aspectY = dcPrinter.GetDeviceCaps(ASPECTY); //600
And the hardcoded TextOut call that writes at (more or less) the center of the page is:
dcPrinter.TextOut(4200, -5000, L"Center");
If using GDI, it is far easier to do the centering via the text alignment, e.g.:
SetTextAlign(hdc, TA_CENTER | TA_BASELINE);
then simply use the page's center point (width/2, height/2) as the TextOut reference point, and let the API do the work of positioning the text. (GDI has no true vertical-center alignment flag for TextOut; for exact vertical centering use DrawText with DT_CENTER | DT_VCENTER | DT_SINGLELINE instead.)
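A minimal sketch of that idea, assuming dcPrinter is the printer CDC from the question and is still in the default MM_TEXT mapping mode (device units); the negative y in your hardcoded call suggests your DC was actually in one of the metric mapping modes, where y grows upward:
// Exact centering with DrawText over the whole printable page:
CRect page(0, 0, dcPrinter.GetDeviceCaps(HORZRES), dcPrinter.GetDeviceCaps(VERTRES));
dcPrinter.DrawText(L"Center", &page, DT_CENTER | DT_VCENTER | DT_SINGLELINE);
// Or, with TextOut and the alignment flags discussed above:
dcPrinter.SetTextAlign(TA_CENTER | TA_BASELINE);
dcPrinter.TextOut(page.Width() / 2, page.Height() / 2, L"Center", 6);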

Converting truetype font default to pixel size in NV Path

I am writing a text module for an OpenGL engine using the NVIDIA path rendering extension (NV Path). The extension allows loading system and external font files using TrueType metrics. Now I need to be able to set a standard font size (in pixels) for the glyphs when rendering the text. By default the loaded glyph has emScale = 2048. Searching for glyph metrics-to-pixels conversion I have found this:
Converting FUnits to pixels
Values in the em square are converted to values in the pixel
coordinate system by multiplying them by a scale. This scale is:
pointSize * resolution / ( 72 points per inch * units_per_em )
So units_per_em equals 2048, while pointSize and resolution are the unknowns I can't resolve. How do I get the resolution value from the viewport width and height for this equation? Also, what should the point size be if my input is the pixel size for the font?
I tried to solve this equation with different kinds of input, but my rendered text always comes out smaller (or bigger) than the reference text (After Effects).
NV_Path docs refer to FreeType2 metrics. The reference says:
The metrics found in face->glyph->metrics are normally expressed in
26.6 pixel format (i.e., 1/64th of pixels), unless you use the FT_LOAD_NO_SCALE flag when calling FT_Load_Glyph or FT_Load_Char. In
this case, the metrics will be expressed in original font units.
I tried to scale down the text model matrix by 1/64. It gets close to the correct size, but it is still not exact.
Here is how I currently set up the text rendering in code:
emScale = 2048;
glyphBase = glGenPathsNV(1 + numChars);
pathTemplate = ~0;
glPathGlyphRangeNV(glyphBase, GL_SYSTEM_FONT_NAME_NV,
                   "Verdana", GL_BOLD_BIT_NV, 0, numChars,
                   GL_SKIP_MISSING_GLYPH_NV, pathTemplate, emScale);

/* Query font and glyph metrics. */
glGetPathMetricRangeNV(GL_FONT_Y_MIN_BOUNDS_BIT_NV |
                       GL_FONT_Y_MAX_BOUNDS_BIT_NV |
                       GL_FONT_X_MIN_BOUNDS_BIT_NV |
                       GL_FONT_X_MAX_BOUNDS_BIT_NV |
                       GL_FONT_UNDERLINE_POSITION_BIT_NV |
                       GL_FONT_UNDERLINE_THICKNESS_BIT_NV,
                       glyphBase + ' ', 1, 6 * sizeof(GLfloat), font_data);
glGetPathMetricRangeNV(GL_GLYPH_HORIZONTAL_BEARING_ADVANCE_BIT_NV,
                       glyphBase, numChars, 0, horizontalAdvance);

/* Query spacing information for example's message. */
messageLen = strlen(message);
xtranslate = (GLfloat*)malloc(sizeof(GLfloat) * messageLen);
if (!xtranslate) {
    fprintf(stderr, "%s: malloc of xtranslate failed\n", "Text3D error");
    exit(1);
}
xtranslate[0] = 0.0f; /* Initial xtranslate is zero. */
/* Use 100% spacing; use 0.9 for both for 90% spacing. */
GLfloat advanceScale = 1.0f,
        kerningScale = 1.0f;
glGetPathSpacingNV(GL_ACCUM_ADJACENT_PAIRS_NV,
                   (GLsizei)messageLen, GL_UNSIGNED_BYTE, message, glyphBase,
                   advanceScale, kerningScale, GL_TRANSLATE_X_NV,
                   &xtranslate[1]); /* messageLen-1 accumulated translates are written here. */
const unsigned char *message_ub = (const unsigned char*)message;
totalAdvance = xtranslate[messageLen - 1] +
               horizontalAdvance[message_ub[messageLen - 1]];
xBorder = totalAdvance / messageLen;

glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_NOTEQUAL, 0, ~0);
glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);

////////// init matrices /////////////
translate(model, vec3(0));
translate(view, vec3(0));
float nearF = 1, farF = 1200;
glViewport(0, 0, _viewPortW, _viewPortH);
glMatrixLoadIdentityEXT(GL_PROJECTION);
float aspect_ratio = (float)_viewPortW / (float)_viewPortH;
glMatrixFrustumEXT(GL_PROJECTION, -aspect_ratio, aspect_ratio, -1, 1, nearF, farF);
model = translate(model, vec3(0.0f, 384.0, 0.0f)); // move up
// scale by 26.6 also doesn't work:
model = scale(model, vec3(1.0f / 26.6f, 1.0f / 26.6f, 1.0f / 26.6f));
view = lookAt(vec3(0, 0, 0), vec3(0, 0, -1), vec3(0, 1, 0));
}
The resolution is device dependent and, in your equation, is given as DPI (dots per inch). pointSize is the size of the font chosen by the user, in points. A point is 1/72 inch (strictly speaking that is the point as used in PostScript; the "point" used by typesetters is slightly different).
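A minimal sketch of that formula (the helper names are mine, not part of NV Path); with your emScale of 2048, this is the factor you would apply to the glyph paths:
// Scale factor from font units (em space) to pixels.
float emToPixelScale(float pointSize, float dpi, float unitsPerEm /* e.g. 2048 */)
{
    return pointSize * dpi / (72.0f * unitsPerEm);
}
// If the desired size is given directly in pixels, point size and DPI cancel out:
float emToPixelScaleFromPixels(float pixelSize, float unitsPerEm)
{
    return pixelSize / unitsPerEm;
}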
The device resolution can be queried using OS-dependent methods; search for "Display DPI $NAMEOFOPERATINGSYSTEM". Using a size in points gives you a constant physical font size independent of the display device in use.
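On Windows, for example, the display DPI can be read from a screen DC (a sketch; other platforms have their own calls):
HDC hdcScreen = GetDC(NULL);                      // DC covering the whole screen
int dpiX = GetDeviceCaps(hdcScreen, LOGPIXELSX);  // horizontal dots per inch
int dpiY = GetDeviceCaps(hdcScreen, LOGPIXELSY);  // vertical dots per inch
ReleaseDC(NULL, hdcScreen);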
Note that when rendering with OpenGL you'll still go through the transformation pipeline which must be accounted for.

How to rescale an image with C/C++ in Windows?

What's the shortest solution in C/C++?
You didn't give too much information, so I will go with StretchBlt.
For an example, see Scaling an Image.
I won't give you a demo, but try to do the following:
create a destination bitmap of your desired size
select that bitmap into a memory device context
StretchBlt the original bitmap onto that device context
unselect the bitmap from the device context
That recipe needs no library other than GDI, which is already present in Windows. And if you plan to draw anything in C++, you should get familiar with that library anyway.
Look here:
http://www.ucancode.net/Free-VC-Draw-Print-gdi-example-tutorial/GDI-Object-VC-MFC-Tutorial.htm
or here:
http://www.olivierlanglois.net/clover.html
if you don't plan to use MFC for the task.
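A minimal sketch of the recipe above (hbmSrc, srcW, srcH, dstW and dstH are placeholders for your source bitmap and the sizes involved):
HDC hdcScreen = GetDC(NULL);
HDC hdcSrc = CreateCompatibleDC(hdcScreen);
HDC hdcDst = CreateCompatibleDC(hdcScreen);
HBITMAP hbmDst = CreateCompatibleBitmap(hdcScreen, dstW, dstH); // destination bitmap of the desired size
HGDIOBJ oldSrc = SelectObject(hdcSrc, hbmSrc);
HGDIOBJ oldDst = SelectObject(hdcDst, hbmDst);
SetStretchBltMode(hdcDst, HALFTONE);  // HALFTONE gives better quality than the default mode
StretchBlt(hdcDst, 0, 0, dstW, dstH, hdcSrc, 0, 0, srcW, srcH, SRCCOPY);
SelectObject(hdcSrc, oldSrc);         // unselect the bitmaps again
SelectObject(hdcDst, oldDst);
DeleteDC(hdcSrc);
DeleteDC(hdcDst);
ReleaseDC(NULL, hdcScreen);
// hbmDst now holds the rescaled image; DeleteObject it when you are done with it.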
One of the easiest rescale algorithms is nearest-neighbour. Suppose you are rescaling from an image stored in an array of size x1 by y1 to another of size x2 by y2. The idea is to find, for each target position, the nearest integer offset in the original array. So your rescale algorithm ends up looking something like this:
const int x1 = 512;
const int y1 = 512;
const int x2 = 64;
const int y2 = 64;

unsigned char orig[x1*y1];   /* Original byte array */
unsigned char target[x2*y2]; /* Target byte array */

for (int i = 0; i < x2; i++)
{
    for (int j = 0; j < y2; j++)
    {
        /* Nearest source offsets for this target pixel. */
        int xoff = (i * x1) / x2;
        int yoff = (j * y1) / y2;
        target[i + j*x2] = orig[xoff + yoff*x1];
    }
}
This will give a blocky resized image. For better results you can use averaging or fancier polynomial-based interpolators (bilinear, bicubic, and so on).
What libraries are you using? How do you represent images? Most image libraries should already be able to do that, e.g. Qt has QPixmap with scaled() and GDI has StretchBlt.
Or you could code it yourself with bicubic interpolation.
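If you do roll your own, bilinear interpolation is a reasonable middle ground (simpler than bicubic, much smoother than nearest-neighbour); a sketch for an 8-bit grayscale buffer:
/* Bilinear resampling sketch: src is sw*sh bytes, dst is dw*dh bytes. */
void resizeBilinear(const unsigned char *src, int sw, int sh,
                    unsigned char *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++)
    {
        float fy = (float)y * (sh - 1) / (dh > 1 ? dh - 1 : 1);
        int y0 = (int)fy, y1 = (y0 + 1 < sh) ? y0 + 1 : y0;
        float wy = fy - y0;
        for (int x = 0; x < dw; x++)
        {
            float fx = (float)x * (sw - 1) / (dw > 1 ? dw - 1 : 1);
            int x0 = (int)fx, x1 = (x0 + 1 < sw) ? x0 + 1 : x0;
            float wx = fx - x0;
            /* Blend the four surrounding source pixels. */
            float top = src[y0*sw + x0] * (1 - wx) + src[y0*sw + x1] * wx;
            float bot = src[y1*sw + x0] * (1 - wx) + src[y1*sw + x1] * wx;
            dst[y*dw + x] = (unsigned char)(top * (1 - wy) + bot * wy + 0.5f);
        }
    }
}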

Calculate QGraphicsTextItem font size based on scale

I have QGraphicsTextItem objects on a QGraphicsScene. The user can scale the QGraphicsTextItem objects by dragging the corners. (I am using a custom "transformation editor" to do this.) The user can also change the size of the QGraphicsTextItem by changing the font size from a property panel. What I would like to do is unify these so that when the user scales the object by dragging the corner with the mouse, behind the scenes it actually is calculating "What size font is necessary to make the resulting object fit the target size and keep the scale factor at 1.0?"
What I am doing now is letting the object scale as normal using QGraphicsItem::mouseMoveEvent and then triggering a FinalizeMapScale method in QGraphicsItem::mouseReleaseEvent once the mouse scale is complete. This method should then change the font to the appropriate size and set the scale back to 1.0.
I have a solution that appears to be working, but I'm not crazy about it. I'm relatively new to both Qt and C++, so would appreciate any comments or corrections.
Is there a better way to architect this whole thing?
Are there Qt methods that already do this?
Is my method on the right track but has some Qt or C++ errors?
Feel free to comment on my answer below or submit your own preferred solution. Thanks!
[EDIT] As requested in a comment, here are the basics of the scaling code. We actually went a different direction with this, so this code (and the code below) is no longer being used. This code is in the mouseMoveEvent method, having previously set a "scaling_" flag to true in mousePressEvent if the mouse was clicked in the bottom-right "hot spot". Note that this code is in a decorator QGraphicsItem that holds a pointer to the target it is scaling. This abstraction was necessary for our project but is probably overkill for most uses.
void TransformDecorator::mouseMoveEvent(QGraphicsSceneMouseEvent *event) {
  ...
  if (scaling_) {
    QGraphicsItem *target_item = target_->AsQGraphicsItem();
    target_item->setTransformOriginPoint(0.0, 0.0);
    QPointF origin_scene = mapToScene(target_item->transformOriginPoint());
    QPointF scale_position_scene = mapToScene(event->pos());
    qreal unscaled_width = target_item->boundingRect().width();
    qreal scale_x = (scale_position_scene.x() - origin_scene.x()) / unscaled_width;
    if (scale_x * unscaled_width < kMinimumSize) {
      scale_x = kMinimumSize / unscaled_width;
    }
    target_item->setScale(scale_x);
  } else {
    QGraphicsObject::mouseMoveEvent(event);
  }
}
Please no holy wars about the loop-with-exit construct. We're comfortable with it.
void MapTextElement::FinalizeMapScale() {
  // scene_document_width is the width of the text document as it appears in
  // the scene after scaling. After we are finished with this method, we want
  // the document to be as close as possible to this width with a scale of 1.0.
  qreal scene_document_width = document()->size().width() * scale();
  QString text = toPlainText();

  // Once the difference between scene_document_width and the calculated width
  // is below this value, we accept the new font size.
  const qreal acceptable_delta = 1.0;

  // If the difference between scene_document_width and the calculated width is
  // more than this value, we guess at the new font size by calculating a new
  // scale factor. Once it is beneath this value, we creep up (or down) by tiny
  // increments. Without this, we would sometimes incur long "back and forth"
  // loops when using the scale factor.
  const qreal creep_delta = 8.0;
  const qreal creep_increment = 0.1;

  QScopedPointer<QTextDocument> test_document(document()->clone());
  QFont new_font = this->font();
  qreal delta = 0.0;

  // To prevent infinite loops, we store the font size values that we try.
  // Because of the unpredictable (at least to me) relationship between font
  // point size and rendering size, this was the only way I could get it to
  // work reliably.
  QList<qreal> attempted_font_sizes;

  while (true) {
    test_document->setDefaultFont(new_font);
    delta = scene_document_width - test_document->size().width();
    if (std::abs(delta) <= acceptable_delta ||
        attempted_font_sizes.contains(new_font.pointSizeF())) {
      break;
    }
    attempted_font_sizes.append(new_font.pointSizeF());
    qreal new_font_size = 0.0;
    if (std::abs(delta) <= creep_delta) {
      new_font_size = delta > 0.0 ? new_font.pointSizeF() + creep_increment
                                  : new_font.pointSizeF() - creep_increment;
    } else {
      new_font_size = new_font.pointSizeF()
                      * scene_document_width
                      / test_document->size().width();
    }
    new_font.setPointSizeF(new_font_size);
  }

  this->setFont(new_font);
  this->setScale(1.0);
}
Another way to look at the problem: Qt has scaled the font, so what is the effective font size (as it appears to the user, not the point size set on the text item) that I need to display as the user's new font-size choice? This is just an alternative; you still need a calculation similar to yours.
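A rough sketch of that reading of the problem (item here stands for your QGraphicsTextItem, and assumes its transform is a plain uniform scale):
// Font size as the user effectively sees it after Qt has applied the scale:
qreal effective_point_size = item->font().pointSizeF() * item->scale();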
I have a similar problem. I have a text item that I want to be unit size (one pixel size) like my other unit graphic items (and then the user can scale them.) What font (setPointSize) needs to be set? (Also what setTextWidth and what setDocumentMargin?) The advantage of this design is that you don't need to treat the scaling of text items different than the scaling of any other shape of graphics item. (But I don't have it working yet.)
Also, a user interface issue: if the user changes the font size, does the item change size? Or does it stay the same size and the text wrap differently, leaving more or less blank space at the end of the text? When the user appends new text, does the font size change so all the text fits in the size of the shape, or does the shape grow to accommodate more text? In other words, is it more like a flowchart app (where the shape size is fixed and the font shrinks), or like a word processor app (where the font size is constant and the shape, i.e. the number of pages, grows)?

Different cursor formats in IOFrameBufferShared

I'm reading the mouse cursor pixmap data from the StdFBShmem_t structure, as defined in the IOFrameBufferShared API.
Everything works fine 90% of the time. However, I have noticed that some applications on the Mac set a cursor in a different format. According to the documentation for the data structures, the cursor pixmap format should always be in the same format as the frame buffer. My frame buffer is 32 bpp. I expect the pixmap data to be in the format 0xAARRGGBB, which it is (most of the time). However, in some cases I'm reading data that looks like a mask. Specifically, the pixels in this data are either 0x00FFFFFF or 0x00000000. This looks to me like a mask for separate pixel data stored somewhere else.
As far as I can tell, the only application that uses this cursor pixel format is Qt Creator, but I need to work with all applications, so I'd like to sort this out.
The code I'm using to read the cursor pixmap data is:
NSAutoreleasePool *autoReleasePool = [[NSAutoreleasePool alloc] init];
NSPoint mouseLocation = [NSEvent mouseLocation];
NSArray *allScreens = [NSScreen screens];
NSEnumerator *screensEnum = [allScreens objectEnumerator];
NSScreen *screen;
NSDictionary *screenDesc = nil;

while ((screen = [screensEnum nextObject]))
{
    NSRect screenFrame = [screen frame];
    screenDesc = [screen deviceDescription];
    if (NSMouseInRect(mouseLocation, screenFrame, NO))
        break;
}

if (screen)
{
    kern_return_t err;
    CGDirectDisplayID displayID = (CGDirectDisplayID) [[screenDesc objectForKey:@"NSScreenNumber"] pointerValue];
    task_port_t taskPort = mach_task_self();
    io_service_t displayServicePort = CGDisplayIOServicePort(displayID);
    io_connect_t displayConnection = 0;

    err = IOFramebufferOpen(displayServicePort,
                            taskPort,
                            kIOFBSharedConnectType,
                            &displayConnection);
    if (KERN_SUCCESS == err)
    {
        union
        {
            vm_address_t vm_ptr;
            StdFBShmem_t *fbshmem;
        } cursorInfo;
        vm_size_t size;

        err = IOConnectMapMemory(displayConnection,
                                 kIOFBCursorMemory,
                                 taskPort,
                                 &cursorInfo.vm_ptr,
                                 &size,
                                 kIOMapAnywhere | kIOMapDefaultCache | kIOMapReadOnly);
        if (KERN_SUCCESS == err)
        {
            // For some reason, cursor data is not always in the same format as
            // the frame buffer. For this reason, we need some way to detect
            // which structure we should be reading.
            QByteArray pixData(
                (const char*)cursorInfo.fbshmem->cursor.rgb24.image[currentFrame],
                m_mouseInfo.currentSize.width() * m_mouseInfo.currentSize.height() * 4);

            IOConnectUnmapMemory(displayConnection,
                                 kIOFBCursorMemory,
                                 taskPort,
                                 cursorInfo.vm_ptr);
        } // IOConnectMapMemory
        else
            qDebug() << "IOConnectMapMemory Failed:" << err;

        IOServiceClose(displayConnection);
    } // IOFramebufferOpen
    else
        qDebug() << "IOFramebufferOpen Failed:" << err;
} // if screen

[autoReleasePool release];
My questions are:
How can I detect if the cursor is a different format from the framebuffer?
Where can I read the actual pixel data? The bm18Cursor structure contains a mask section, but it's not in the right place for me to be reading it using the code above.
How can I detect if the cursor is a different format from the framebuffer?
The cursor is in the framebuffer. It can't be in a different format than itself.
There is no way to tell what format it's in (x-radar://problem/7751503). There would be a way to divine at least the number of bytes per pixel if you could tell how many frames the cursor has, but since you can't (that information isn't set as of 10.6.1 — x-radar://problem/7751530), you are left trying to figure out two factors of a four-factor product (bytes per pixel × width × height × number of frames, where you only have the width, the height, and the product). And even if you can figure out those missing two factors, you still don't know what order the bytes are in or whether the color components are premultiplied by the alpha component.
Where can I read the actual pixel data?
In the cursor member of the shared-cursor-memory structure.
You should define IOFB_ARBITRARY_SIZE_CURSOR before including the I/O Kit headers. Cursors can be any size now, not just 16×16, which is the size you expect when you don't define that constant. As an example, the usual Mac arrow cursor is 24×24, the “Windows” arrow cursor in CrossOver is 32×32, and the arrow cursor in X11 is 10×16.
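For example (a sketch; the header path is the usual IOKit one but may differ by SDK version):
#define IOFB_ARBITRARY_SIZE_CURSOR   // must come before the framebuffer headers
#include <IOKit/graphics/IOFramebufferShared.h>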
However, in some cases, I'm reading data that looks like a mask. Specifically, the pixels in this data will either be 0x00FFFFFF or 0x00000000. This looks to me to be a mask for separate pixel data stored somewhere else.
That sounds to me more like 16-bit pixels with an 8-bit alpha channel. At least it's more probably 5-6-5 than 5-5-5.
As far as I can tell, the only application that uses this cursor pixel format is Qt Creator, but I need to work with all applications, so I'd like to sort this out.
I'm able to capture the current cursor in that app just fine with my new cursor-capturing app. Is there a specific part of the app I should hit to make it show me a specific cursor?
You might try the CGSCreateRegisteredCursorImage function, as demonstrated by Karsten in a comment on my weblog.
It is a private function, so it may change or go away at any time; check whether it exists and keep the IOFramebuffer approach in reserve. But as long as it does exist, you may find it more reliable than the complex and thinly documented IOFramebuffer.