How to keep font size constant in Inkscape during rescaling?

I came across a problem in Inkscape I couldn't solve on my own. Currently I am creating sketches etc. for a LaTeX document in Inkscape. Drawing the sketches works fine, and so does adding text to them. I then save each sketch as PDF (with the PDF+LaTeX option) and import it into LaTeX using
\begin{figure}
\centering
\input{testfile.pdf_tex}
\caption{This is just a testfile}
\label{fig:testfile}
\end{figure}
After compiling the document, the font size and typeface in the sketch always match the rest of my LaTeX document.
However, placing the text in the right position requires a lot of iterative steps, because rescaling the sketch in Inkscape (when I find it too big or too small, for example) rescales the text as well; luckily this happens only in Inkscape, not in LaTeX.
How do I prevent the text in Inkscape from being rescaled? The only thing that should change during rescaling is the absolute position of the text boxes; their relative position and font size should remain constant.

I had the same problem and found the Inkscape extension Scientific-Inkscape to be very useful.
To quote one of its features: "Scale Plots: Changes the size or aspect ratio of a plot without modifying its text and ticks. Especially useful for assembling multi-panel figures."


Measure postscript font width in C++

This can't be that hard, but I simply can't manage it. Neither Google nor Stack Overflow nor the documentation for Ubuntu or Ghostscript was helpful.
I am generating PostScript from C++. I place the text word by word to handle line wrapping. To decide where to place the next word and whether it fits on the current line, I rely on FreeType to measure the "advance" of each glyph, along the lines of the sketch below.
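For what it's worth, a minimal sketch of that measurement loop (the font path is a placeholder for my downloaded guess, error handling and kerning are omitted; with the face sized at 72 dpi, one pixel equals one point, and advances come back in 26.6 fixed point, so divide by 64.0):

#include <ft2build.h>
#include FT_FREETYPE_H
#include <string>

// Sum of glyph advances for a string, in PostScript points.
double string_width_pts(FT_Face face, const std::string& s) {
    double width = 0.0;
    for (unsigned char c : s)
        if (FT_Load_Char(face, c, FT_LOAD_DEFAULT) == 0)
            width += face->glyph->advance.x / 64.0;  // 26.6 fixed point
    return width;
}

int main() {
    FT_Library lib;
    FT_Face face;
    FT_Init_FreeType(&lib);
    FT_New_Face(lib, "/path/to/helvetica-guess.ttf", 0, &face);  // placeholder
    FT_Set_Char_Size(face, 0, 11 * 64, 72, 72);  // 11 pt at 72 dpi: 1 px == 1 pt
    return string_width_pts(face, "hello world") > 0.0 ? 0 : 1;
}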
The text is a mix of normal text and source code, so I have two fonts involved. I chose Helvetica for normal text and Courier for source code, since both are readily available in PostScript and don't need to be embedded. The problematic part of my PostScript output is not significantly more complicated than
/Helvetica findfont 11 scalefont setfont
40 100 moveto (hello world) show
123 100 moveto (hello again) show % I care for the first number
Of course, there is a proper EPS header, etc.
I did not manage to locate the font files on my Ubuntu 16.04 system, so I downloaded best guesses from free font websites. It turns out that they apparently differ from those used by my PostScript interpreter. At least, after converting to PDF with epstopdf (which comes with LaTeX, as far as I know), I see that my Helvetica font is too wide and my Courier font is too narrow, so that word spacing is off, up to the point that long words overlap with the subsequent word.
My question: how can I get font width measurements matching those of the PostScript interpreter?
I am not even sure whether the question is well-posed, but I do assume that there is one and only one reference Helvetica font, so that PostScript output looks the same on all systems and printers.
Making FreeType load the correct fonts would probably be the easiest solution, but I do not know how to find the files.
A source for downloading exactly matching fonts would also solve the problem, although having the fonts twice would be odd.
Even better, asking a PostScript interpreter like Ghostscript for the ground truth would be preferable, but the Ghostscript library documentation is sparse and I did not find any examples.
I could create a PostScript file that prints the width of the text obtained with textwidth, convert it to a PDF, and extract the text. That would be ugly and slow, and I'd like a proper C++ solution.
Progress in any of these or maybe other directions would be absolutely great!
The fonts you are using should have a .afm (Adobe Font Metrics) file, which you can read the font metrics from if it's a PostScript font. It's also true that the 'base 13' fonts should be the same in terms of metrics across all PostScript implementations. Of course, if you are getting the metrics from a TrueType font, then they may well differ from a PostScript font.
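For illustration, a rough sketch of pulling advance widths out of an .afm file (the parsing is simplified; AFM character metrics lines look like "C 32 ; WX 278 ; N space ; B 0 0 0 0 ;", and the WX widths are in 1/1000 of the point size):

#include <array>
#include <cstdio>
#include <fstream>
#include <string>

std::array<int, 256> load_afm_widths(const std::string& path) {
    std::array<int, 256> widths{};  // zero-initialized
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        int code = -1, wx = 0;
        if (std::sscanf(line.c_str(), "C %d ; WX %d ;", &code, &wx) == 2
            && code >= 0 && code < 256)
            widths[code] = wx;
    }
    return widths;
}

// Width of a string in points: sum of WX entries, scaled by size/1000.
double afm_string_width(const std::array<int, 256>& w,
                        const std::string& s, double point_size) {
    double units = 0.0;
    for (unsigned char c : s) units += w[c];
    return units * point_size / 1000.0;
}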
You haven't said what PostScript interpreter you are using; it may be that it's not using a standard font, but my guess is you are using a TrueType font from your Ubuntu system which doesn't quite match the PostScript fonts your interpreter uses. If memory serves, you can look in /etc/fonts/fonts.conf to see where your fonts are stored.
FWIW, Ghostscript ships with implementations of the base 13 fonts whose metrics are matched to the Adobe fonts, and PostScript interpreters should match those. We don't, however, ship the AFM files, but you can load the fonts into Fontographer, or use FreeType, or simply get the advance width by using stringwidth (not textwidth) in a PostScript program.
I wouldn't have said Ghostscript's documentation is 'sparse'. Difficult to find what you want, maybe, but there's lots of documentation there. Use.htm alone, the basic information, is a 265 KB HTML file.
The final alternative, of course, is to download the fonts you are actually using in the PostScript program; then you know that they match the metrics you used to create the PostScript in the first place. As with PDF, this is highly recommended, especially for fonts outside the base 13, as it's the only way to get reliable output.

Scaling a bitmap without losing quality

I have a little problem. I am developing a skin engine that allows Delphi VCL applications to be skinned. For this goal, I developed a new file format (mSkin) to host my skin data. My skin file contains two headers: the first contains some information about the colors used by the skin, the second contains the bitmap used by the skin (the bitmap has an alpha channel in order to support transparency). In my control I use a function to extract the object's bitmap from the skin bitmap (mSkin.Bitmap) and draw it onto my control. The problem is that when the bitmap is not shaped I get bad quality when scaling the source bitmap. The size of the object bitmap is proportional to the control size (when the control size changes, the bitmap size changes too).
I tried to read the VCL Styles code to solve the problem, but it seems very difficult to read.
Is there a way to copy a bitmap while maintaining quality?
If you use bitmaps, you simply can't do scaling without the problems you have. If you want scaling where e.g. a one-pixel border stays a one-pixel border, then you have to use a vector-based format for your images.
You need to divide the bitmap into 9 different parts, like a 3x3 grid. Then you only scale the middle one; the rest stay the same size but move (see the sketch after the links below). This link is for Android, but the same principles apply.
Here is another link. This one is for Flash, but it also explains the principle.
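To make the idea concrete, a small sketch of the 3x3 computation in C++ (Rect and the blit callback are assumptions for illustration, not any real VCL API; b is the fixed border thickness):

// Corners are copied 1:1, edges stretch along one axis,
// and only the centre cell stretches in both directions.
struct Rect { int x, y, w, h; };

void draw_nine_slice(const Rect& src, const Rect& dst, int b,
                     void (*blit)(Rect from, Rect to)) {
    int sx[4] = { src.x, src.x + b, src.x + src.w - b, src.x + src.w };
    int sy[4] = { src.y, src.y + b, src.y + src.h - b, src.y + src.h };
    int dx[4] = { dst.x, dst.x + b, dst.x + dst.w - b, dst.x + dst.w };
    int dy[4] = { dst.y, dst.y + b, dst.y + dst.h - b, dst.y + dst.h };
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            blit({ sx[col], sy[row], sx[col+1] - sx[col], sy[row+1] - sy[row] },
                 { dx[col], dy[row], dx[col+1] - dx[col], dy[row+1] - dy[row] });
}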
Try to use a resampling algorithm.
For upscaling, I very much like B-spline interpolation.
For simple content like yours, the hqnx family sometimes gives good results, and is very fast to render (even in real time). For some Pascal source code, you may take a look at this forum thread.
See also this more general question.
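For a concrete baseline before reaching for B-splines or hqnx, here is a minimal bilinear resampler sketch (8-bit grayscale, row-major buffer assumed; a fancier filter would replace the simple weighting):

#include <cmath>
#include <vector>

std::vector<unsigned char> resize_bilinear(const std::vector<unsigned char>& src,
                                           int sw, int sh, int dw, int dh) {
    std::vector<unsigned char> dst(dw * dh);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            // map destination pixel centre back into source coordinates
            double fx = (x + 0.5) * sw / dw - 0.5;
            double fy = (y + 0.5) * sh / dh - 0.5;
            int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy);
            double ax = fx - x0, ay = fy - y0;
            auto at = [&](int xi, int yi) {  // clamped fetch
                xi = xi < 0 ? 0 : (xi >= sw ? sw - 1 : xi);
                yi = yi < 0 ? 0 : (yi >= sh ? sh - 1 : yi);
                return (double)src[yi * sw + xi];
            };
            double v = (1 - ax) * (1 - ay) * at(x0, y0) + ax * (1 - ay) * at(x0 + 1, y0)
                     + (1 - ax) * ay * at(x0, y0 + 1)   + ax * ay * at(x0 + 1, y0 + 1);
            dst[y * dw + x] = (unsigned char)(v + 0.5);
        }
    }
    return dst;
}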

Optimized .plist/textures for walk animations

I am trying to 'squeeze' my textures for walking animations. The animation has 8 frames, but actually can be done quite well with 1-2-3-4-5-4-3-2, which would fit nicely in a 128x128-point texture. Do you know of a tool that can create the plist entries for 6-7-8 that are mapped onto the 4-3-2 areas of the texture?
Coding is still an option, but was wondering if some tool has that out of the box.
I'm surprised there are still Cocos2D developers out there who aren't using TexturePacker. :)
Check out the Alias Creation section under Features. I'm quoting (but can also confirm that this works perfectly):
If two images are identical after trimming only one image is placed in the sprite sheet. The duplicates will just be added to the description file allowing you to access it with both names.
This is perfect when using animations: You simply don't have to care about equal phases.
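If no tool fits, the underlying trick is simple enough to hand-roll: two frame entries that point at the same texture rectangle. An illustrative fragment (key names follow the common cocos2d sprite-sheet plist format; the exact set of keys depends on the format version, so treat this as a sketch):

<key>frames</key>
<dict>
    <key>walk_2.png</key>
    <dict>
        <key>frame</key><string>{{128,0},{32,32}}</string>
        <key>offset</key><string>{0,0}</string>
        <key>sourceSize</key><string>{32,32}</string>
    </dict>
    <!-- alias: same rect as walk_2.png -->
    <key>walk_8.png</key>
    <dict>
        <key>frame</key><string>{{128,0},{32,32}}</string>
        <key>offset</key><string>{0,0}</string>
        <key>sourceSize</key><string>{32,32}</string>
    </dict>
</dict>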

Debugging of image processing code

What kind of debugging is available for image processing/computer vision/computer graphics applications in C++? What do you use to track errors/partial results of your method?
What I have found so far is just one tool for online and one for offline debugging:
bmd: attaches to a running process and enables you to view a block of memory as an image
imdebug: enables printf-style debugging
Both are quite outdated and not really what I would expect.
What would seem useful for offline debugging would be some style of image logging: let's say a set of commands which enable you to write images together with text (probably in the form of HTML, maybe hierarchical), easy to switch off at both compile time and run time, and as unobtrusive as possible.
The output could look like this (output from our simple tool):
http://tsh.plankton.tk/htmldebug/d8egf100-RF-SVM-RBF_AC-LINEAR_DB.html
Are you aware of some code that goes in this direction?
I would be grateful for any hints.
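For concreteness, a minimal sketch of the kind of logger I mean (all names are invented; save_png stands for whatever image writer you already have, and Image for your own image type):

#include <fstream>
#include <string>

struct Image;                                        // your image type
void save_png(const Image&, const std::string&);     // your existing writer

class HtmlLog {
    std::ofstream out_;
    int counter_ = 0;
public:
    explicit HtmlLog(const std::string& path) : out_(path) {
        out_ << "<html><body>\n";
    }
    ~HtmlLog() { out_ << "</body></html>\n"; }

    void text(const std::string& msg) { out_ << "<p>" << msg << "</p>\n"; }

    // Dump the image next to the log and reference it from the page.
    void image(const std::string& caption, const Image& img) {
        std::string file = "dbg" + std::to_string(counter_++) + ".png";
        save_png(img, file);
        out_ << "<figure><img src=\"" << file << "\"/><figcaption>"
             << caption << "</figcaption></figure>\n";
    }
};

Usage would be a pair of calls per stage, e.g. log.text("after threshold"); log.image("binarized", img); switching it off could be a no-op implementation behind the same interface.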
Coming from a ray tracing perspective, maybe some of those visual methods are also useful to you (it is one of my plans to write a short paper about such techniques):
Surface Normal Visualization. Helps to find surface discontinuities. (no image handy, the look is very much reminiscent of normal maps)
color <- rgb (normal.x+0.5, normal.y+0.5, normal.z+0.5)
Distance Visualization. Helps to find surface discontinuities and errors in finding a nearest point. (image taken from an abandoned ray tracer of mine)
color <- (intersection.z-min)/range, ...
Bounding Volume Traversal Visualization. Helps visualizing a bounding volume hierarchy or other hierarchical structures, and helps to see the traversal hotspots, like a code profiler (e.g. Kd-trees). (tbp of http://ompf.org/forum coined the term Kd-vision).
color <- number_of_traversal_steps/f
Bounding Box Visualization (image from picogen or so, some years ago). Helps to verify the partitioning.
color <- const
Stereo. Maybe useful in your case for a real stereoscopic view of the data. I must admit I never used this for debugging, but when I think about it, it could prove really useful when implementing new types of 3D primitives and trees (image from gladius, which was an attempt to unify realtime and non-realtime ray tracing).
You just render two images from slightly shifted positions, focusing on some point.
Hit-or-not visualization. May help to find epsilon errors. (image taken from metatrace)
if (hit) color = const_a;
else color = const_b
Some hybrid of several techniques.
Linear interpolation: lerp(debug_a, debug_b)
Interlacing: if(y%2==0) debug_a else debug_b
Any combination of ideas, for example the color-tone from Bounding Box Visualization, but with actual scene-intersection and lighting applied
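For reference, a compact C++ sketch of a few of the shadings above (Vec3 and the inputs are assumptions; ranges as in the list):

struct Vec3 { double x, y, z; };

// Surface normal visualization: map [-1,1] components to [0,1].
Vec3 shade_normal(const Vec3& n) {
    return { n.x * 0.5 + 0.5, n.y * 0.5 + 0.5, n.z * 0.5 + 0.5 };
}

// Distance visualization: linear ramp over [zmin, zmin + range].
Vec3 shade_depth(double z, double zmin, double range) {
    double t = (z - zmin) / range;
    return { t, t, t };
}

// Traversal-count visualization: steps relative to some budget f.
Vec3 shade_traversal(int steps, double f) {
    double t = steps / f;
    return { t, 0.0, 1.0 - t };  // blue = cheap, red = hotspot
}

// Interlacing hybrid: alternate two debug shadings per scanline.
Vec3 shade_interlaced(int y, const Vec3& a, const Vec3& b) {
    return (y % 2 == 0) ? a : b;
}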
You may find some more glitches and debugging imagery on http://phresnel.org , http://phresnel.deviantart.com , http://picogen.deviantart.com , and maybe http://greenhybrid.deviantart.com (an old account).
Generally, I prefer to dump the byte array of the currently processed image as raw data triplets and run ImageMagick to create a PNG from it, numbered e.g. img01.png. This way I can trace the algorithms very easily. ImageMagick is run from within the program using a system call, which makes it possible to debug without using any external libraries for image formats.
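A sketch of that dump-and-convert step (the buffer is assumed to be 8-bit interleaved RGB, row-major; the convert invocation is standard ImageMagick raw-input usage):

#include <cstdio>
#include <cstdlib>

void dump_png(const unsigned char* rgb, int w, int h, int index) {
    char raw[64], cmd[256];
    std::snprintf(raw, sizeof raw, "img%02d.raw", index);
    std::FILE* f = std::fopen(raw, "wb");
    if (!f) return;
    std::fwrite(rgb, 1, (size_t)w * h * 3, f);   // raw RGB triplets
    std::fclose(f);
    std::snprintf(cmd, sizeof cmd,
        "convert -size %dx%d -depth 8 rgb:%s img%02d.png", w, h, raw, index);
    std::system(cmd);                            // let ImageMagick build the PNG
}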
Another option, if you are using Qt, is to work with QImage and use img.save("img01.png") from time to time, the way a printf is used for debugging.
It's a bit primitive compared to what you are looking for, but I have done what you suggested in your OP using standard logging and by writing image files. Typically, the logging and signal export processes and staging exist in unit tests.
Signals are given identifiers (often the input filename), which may be augmented (often with the process name or stage).
For development of processors, it's quite handy.
Adding HTML for messages would be simple. In that context, you could produce viewable HTML output easily; you would not need to generate any HTML, just use HTML template files and then insert the messages.
I would just do it myself (as I've done multiple times already for multiple signal types) if you get no good referrals.
In Qt Creator you can watch image modifications while stepping through the code in the normal C++ debugger; see e.g. http://labs.qt.nokia.com/2010/04/22/peek-and-poke-vol-3/

C++ Library for image recognition: images containing words to string

Does anyone know of a C++ library for taking an image and performing image recognition on it such that it can find letters based on a given font and/or font height? Even one that doesn't let you select a font would be nice (e.g. readLetters(Image image)).
I've been looking into this a lot lately. Your best bet is simply Tesseract. If you need layout analysis on top of the OCR, then go with Ocropus (which in turn uses Tesseract to do the OCR). Layout analysis refers to being able to detect the position of text on the image and do things like line segmentation, block segmentation, etc.
I've found some really good tips through experimentation with Tesseract that are worth sharing. Basically I had to do a lot of preprocessing for the image.
Upsize/Downsize your input image to 300 dpi.
Remove color from the image. Grey scale is good. I actually used a dither threshold and made my input black and white.
Cut out unnecessary junk from your image.
For all three above I used netpbm (a set of image manipulation tools for Unix) to get to the point where I was getting pretty much 100 percent accuracy for what I needed.
If you have a highly customized font and go with Tesseract alone, you have to "train" the system; basically you have to feed it a bunch of training data. This is well documented on the tesseract-ocr site. You essentially create a new "language" for your font and pass it in with the -l parameter.
The other training mechanism I found was with Ocropus, using neural net (bpnet) training. It requires a lot of input data to build a good statistical model.
In terms of invocation, Tesseract and Ocropus are both C++. It won't be as simple as ReadLines(Image), but there is an API you can check out; see the sketch below. You can also invoke them via the command line.
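As an illustrative sketch of the C++ API (as it looks in the Tesseract 3.x line, with Leptonica loading the image; paths and language are placeholders):

#include <cstdio>
#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>

int main() {
    tesseract::TessBaseAPI api;
    if (api.Init(nullptr, "eng"))       // or your custom trained "language"
        return 1;
    Pix* image = pixRead("input.png");  // preprocessed image
    api.SetImage(image);
    char* text = api.GetUTF8Text();     // run recognition
    std::printf("%s", text);
    delete[] text;
    pixDestroy(&image);
    api.End();
    return 0;
}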
While I cannot recommend one in particular, the term you are looking for is OCR (Optical Character Recognition).
There is tesseract-ocr which is a professional library to do this.
From their web site:
The Tesseract OCR engine was one of the top 3 engines in the 1995 UNLV Accuracy test. Between 1995 and 2006 it had little work done on it, but it is probably one of the most accurate open source OCR engines available
I think what you want is Conjecture. Used to be the libgocr project. I haven't used it for a few years but it used to be very reliable if you set up a key.
The Tesseract OCR library gives pretty accurate results; it's a C and C++ library.
My initial results were around 80% accurate, but after applying pre-processing to the images before feeding them in for OCR, the results were around 95% accurate.
What the pre-processing involves:
1) Binarize the bitmap (B&W worked better for me).
2) Resampling your image to 300 dpi
3) Save your image in a lossless format, such as LZW TIFF or CCITT Group 4 TIFF.