How to detect the style of text in an image using OpenCV? - C++

Currently I am working with OpenCV. I have an image containing text, and I want to find out the style (bold, italic) of that text. How can I achieve this? Thanks

What you can do (assuming a letter-by-letter approach):
Using segmentation techniques, first segment out the letters.
Using the segmented letters, compare against your own dataset of pre-segmented/pre-filtered letters to find the font style.
Comparison can be done using various features: SIFT, SURF, BRISK, Harris corners, template matching, or something you come up with yourself. My best guess would be to go with Haar features and training.
Once you have a set of features for a letter, matching it against the closest candidate in your pre-filtered dataset can be done with techniques such as k-NN or Euclidean distance. If you use Haar features, OpenCV can help a lot with retrieval.
Eventually you might end up doing some OCR that includes font style.
OpenCV has a set of built-in feature descriptors, which you can read about here.
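As a rough sketch of that feature-and-matching step, here is what it might look like with OpenCV's ORB descriptors (used here in place of the patented SIFT/SURF) and brute-force k-NN matching; the file names and the 0.75 ratio threshold are my assumptions:

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    // Sketch: compare one segmented letter against one reference glyph from a
    // pre-filtered dataset, using ORB descriptors and brute-force k-NN matching.
    // In practice you would loop over the whole dataset and keep the best score.
    int main() {
        cv::Mat letter = cv::imread("segmented_letter.png", cv::IMREAD_GRAYSCALE);
        cv::Mat reference = cv::imread("dataset/A_italic.png", cv::IMREAD_GRAYSCALE);

        cv::Ptr<cv::ORB> orb = cv::ORB::create();
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat desc1, desc2;
        orb->detectAndCompute(letter, cv::noArray(), kp1, desc1);
        orb->detectAndCompute(reference, cv::noArray(), kp2, desc2);
        if (desc1.empty() || desc2.empty()) return 1; // letter too small or blank

        cv::BFMatcher matcher(cv::NORM_HAMMING);
        std::vector<std::vector<cv::DMatch>> knn;
        matcher.knnMatch(desc1, desc2, knn, 2);

        // Lowe's ratio test: count only distinctive matches as the similarity score.
        int good = 0;
        for (const auto& m : knn)
            if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
                ++good;

        std::cout << "good matches: " << good << std::endl;
        return 0;
    }

The candidate with the highest number of good matches across your dataset would be the predicted font style.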
Good Luck!

This might help you. I know it's not an exact match, but it sufficed for my similar project.
"Typefont is an experimental library that detects the font of a text in an image."
https://github.com/Vasile-Peste/Typefont

Related

How to design a label for visual recognition?

Sorry if this is a recurring question, but I can't find the right keywords for this search.
I need to develop a system for visual recognition of labels attached to products in a warehouse. I'm using a fixed-focus camera, so the idea is to use a label with a code of 6 alphanumeric characters printed in a large font. This system would then be responsible for ROI extraction and applying OCR to recognize the objects in the scene.
My main problem is the ROI extraction part. I tried to use template matching, but due to the difficulties with scale and rotation, it doesn't seem to be the right technique for the application. I also tried to use feature matching, but the results are still insufficient.
My question is: how could I design the label to facilitate ROI extraction? Could I use something like AprilTags to simplify the homography?
Thanks in advance!
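To the AprilTag idea: OpenCV ships an equivalent in its aruco module. A minimal sketch of marker-based ROI extraction (the marker dictionary, label size, and file names are placeholders, and this uses the pre-4.7 aruco API):

    #include <opencv2/opencv.hpp>
    #include <opencv2/aruco.hpp>

    // Sketch: detect ArUco markers printed at the four corners of the label,
    // then rectify the label region with a perspective transform before OCR.
    int main() {
        cv::Mat image = cv::imread("warehouse_shot.jpg");
        cv::Ptr<cv::aruco::Dictionary> dict =
            cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);

        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(image, dict, corners, ids);

        if (ids.size() == 4) {
            // One corner point per marker; in practice, sort by marker id first
            // so the order matches the destination quad.
            std::vector<cv::Point2f> src;
            for (const auto& c : corners) src.push_back(c[0]);
            std::vector<cv::Point2f> dst = {{0, 0}, {400, 0}, {400, 200}, {0, 200}};

            cv::Mat H = cv::getPerspectiveTransform(src, dst);
            cv::Mat label;
            cv::warpPerspective(image, label, H, cv::Size(400, 200));
            cv::imwrite("label_roi.png", label); // ROI ready for OCR
        }
        return 0;
    }

Because the markers are detected independently of scale and rotation, this sidesteps the problems you hit with template and feature matching.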

Improving tesseract performance for specific tasks

I have already read the answers to this question.
I have a series of images that each contain a single word of 3-10 characters. The images are generated on a computer, so their quality is consistent and they contain no noise. The font is quite large (about 30 pixels in height). This should already be easy enough for Tesseract to read accurately, but what are some techniques I can use to improve the speed, even if only by a few milliseconds?
The character set consists of uppercase letters only. As the OCR task in this case is very specific, would it help if I trained the Tesseract engine on this specific font and font size, or is that overkill?
Edited to include sample
Other than tesseract, are there any other solutions that I can use with C/C++ that can provide better performance? Could it be done faster with OpenCV? Compatibility with Linux is preferred.
Sample
If all the letters have the same size and style, you can try something really simple, like running blob detection followed by template matching of individual letters; a sketch follows below. I am not sure how it will compare with Tesseract, but it is a very simple experiment. (Additionally, lowering the resolution will speed things up...)
You can also have a look at this question: Simple Digit Recognition OCR in OpenCV-Python; it may be relevant.
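A rough sketch of that experiment, assuming one template image per uppercase letter (the file layout and names are mine):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <string>

    // Sketch: segment glyphs as connected components, then score each one
    // against per-character templates with normalized cross-correlation.
    int main() {
        cv::Mat img = cv::imread("word.png", cv::IMREAD_GRAYSCALE);
        cv::Mat bw;
        cv::threshold(img, bw, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

        cv::Mat labels, stats, centroids;
        int n = cv::connectedComponentsWithStats(bw, labels, stats, centroids);
        for (int i = 1; i < n; ++i) { // label 0 is the background
            // Note: components come in arbitrary order; for a word you would
            // first sort the boxes left to right.
            cv::Rect box(stats.at<int>(i, cv::CC_STAT_LEFT),
                         stats.at<int>(i, cv::CC_STAT_TOP),
                         stats.at<int>(i, cv::CC_STAT_WIDTH),
                         stats.at<int>(i, cv::CC_STAT_HEIGHT));
            cv::Mat glyph = bw(box);

            char best = '?';
            double bestScore = -1.0;
            for (char c = 'A'; c <= 'Z'; ++c) {
                cv::Mat tmpl = cv::imread(std::string("templates/") + c + ".png",
                                          cv::IMREAD_GRAYSCALE);
                if (tmpl.empty()) continue;
                cv::Mat resized, result;
                cv::resize(glyph, resized, tmpl.size());
                cv::matchTemplate(resized, tmpl, result, cv::TM_CCOEFF_NORMED);
                double score;
                cv::minMaxLoc(result, nullptr, &score);
                if (score > bestScore) { bestScore = score; best = c; }
            }
            std::cout << best;
        }
        std::cout << std::endl;
        return 0;
    }

With a fixed font and clean computer-generated input, this avoids Tesseract's general-purpose layout analysis entirely, which is where the speed would come from.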

Extracting part of a scanned document (personal ID) - which library and method to choose?

I have to process a lot of scanned IDs and I need to extract photos from them for further processing.
Here's a fictional example:
The problem is that the scans are not perfectly aligned (rotated by up to 10 degrees), so I need to find their position, rotate them, and cut out the photo. This turned out to be a lot harder than I originally thought.
I checked OpenCV and the only thing I found was rectangle detection, but it didn't give me good results: the rectangle doesn't always match well enough on my samples. Also, its image matching algorithm only works for non-rotated images, since it's just a brute-force comparison.
So I thought about using ARToolkit (an augmented reality library), because I know it can locate a given marker in an image very precisely. But it seems the markers have to be very simple, so I can't use a constant part of the document for this purpose (please correct me if I'm wrong). Also, I found it extremely hard to compile on Ubuntu 11.10.
OCR - I haven't tried this yet, and before I start my research I'd be thankful for any suggestions on what to look for.
I'm looking for a C (preferably) or C++ solution. Python is an option too.
If you don't find another ideal solution, one method I ended up using for OCR preprocessing in the past was to convert the source images to PPM and use unpaper on Ubuntu. You can attempt to deskew the image based on whichever sides you specify as having clearly-defined edges, and there is an option to bypass the filters that would normally be applied to black-and-white text. You probably don't want those for images.
Example for images skewed no more than 15 degrees, using the bottom and right edges to detect rotation:
unpaper -n -dn bottom,right -dr 15 input.ppm output.ppm
unpaper is written in C, if the source is any help to you.
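If you end up in OpenCV anyway, here is a minimal deskew sketch; the assumption (mine, not the answer's) is that the document's foreground dominates the scan, so cv::minAreaRect recovers the rotation angle:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Sketch: estimate the scan's rotation from the minimum-area rectangle
    // around all foreground pixels, then rotate the image back upright.
    int main() {
        cv::Mat gray = cv::imread("scan.png", cv::IMREAD_GRAYSCALE);
        cv::Mat bw;
        cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

        std::vector<cv::Point> points;
        cv::findNonZero(bw, points);
        cv::RotatedRect rect = cv::minAreaRect(points);

        double angle = rect.angle;
        if (angle < -45.0) angle += 90.0; // normalize to [-45, 45]

        cv::Mat rot = cv::getRotationMatrix2D(rect.center, angle, 1.0);
        cv::Mat deskewed;
        cv::warpAffine(gray, deskewed, rot, gray.size(),
                       cv::INTER_CUBIC, cv::BORDER_REPLICATE);
        cv::imwrite("deskewed.png", deskewed);
        return 0;
    }

Once the scan is upright, the photo can be cut out with a fixed crop rectangle, since IDs of the same type place it at a known position.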

Basic Pixel/Cell Counting Algorithm

Good night :)
I am currently playing with the DevIL library, which lets me load an image and check RGB values per pixel. Just as a personal learning project, I'm trying to write a very basic OCR system for a couple of images I made myself in Photoshop.
I am successfully able to remove all the distortions in the image, and I'm left with text and numbers. I am not currently looking for an advanced neural network that learns from input; I want to start relatively easy, so I've set out to identify the individual characters and count the pixels in those characters.
I have two problems:
1) Identifying the individual characters.
2) Most importantly: I need an algorithm to count connected pixels (of the same color) without counting pixels I've previously counted. I have no mathematical background, so this is the biggest issue for me.
Any help in the matter is appreciated, thanks.
edit:
I have tagged this question as C++ because that is what I am currently using. However, pseudo-code or easily readable code from another language is also fine.
The flood fill algorithm will work for counting the pixels of each character without double-counting, as long as you have the images filtered down to simple black & white bitmaps; a sketch follows below.
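A minimal sketch of that counting step, assuming the bitmap is already a simple 2-D grid of values; the visited grid is what prevents any pixel from being counted twice:

    #include <vector>
    #include <queue>
    #include <utility>

    // Sketch: count connected pixels of the same value with a BFS flood fill.
    // The "visited" grid guarantees no pixel is ever counted twice.
    int countRegion(const std::vector<std::vector<int>>& img,
                    std::vector<std::vector<bool>>& visited,
                    int startRow, int startCol) {
        const int rows = img.size(), cols = img[0].size();
        const int target = img[startRow][startCol];
        const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1}; // 4-connectivity

        std::queue<std::pair<int, int>> q;
        q.push({startRow, startCol});
        visited[startRow][startCol] = true;

        int count = 0;
        while (!q.empty()) {
            int r = q.front().first, c = q.front().second;
            q.pop();
            ++count;
            for (int k = 0; k < 4; ++k) {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
                    !visited[nr][nc] && img[nr][nc] == target) {
                    visited[nr][nc] = true;
                    q.push({nr, nc});
                }
            }
        }
        return count; // number of connected pixels in this character's region
    }

Scanning the whole image and calling countRegion on every not-yet-visited foreground pixel gives you one region, and one count, per character blob.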
Having said that, you can perform character recognition by comparing each character to a set of standard images of each character in your set, measuring the similarity, and then choosing the character with the highest score.
Take a look at this question for more information.
Not sure this helps, but there is a GPL OCR lib called gocr.
Apologies if this is too far off-topic, but IMHO Vigra (not the other one!) is a much better image processing library for C++ than DevIL.

C++ Library for image recognition: images containing words to string

Does anyone know of a C++ library that takes an image and performs image recognition on it, such that it can find letters based on a given font and/or font height? Even one that doesn't let you select a font would be nice (e.g., readLetters(Image image)).
I've been looking into this a lot lately. Your best bet is simply Tesseract. If you need layout analysis on top of the OCR, then go with Ocropus (which in turn uses Tesseract to do the OCR). Layout analysis refers to being able to detect the position of text on the image and do things like line segmentation, block segmentation, etc.
I've found some really good tips through experimentation with Tesseract that are worth sharing. Basically I had to do a lot of preprocessing of the image:
Upsize/Downsize your input image to 300 dpi.
Remove color from the image. Grey scale is good. I actually used a dither threshold and made my input black and white.
Cut out unnecessary junk from your image.
For all three steps above I used netpbm (a set of image manipulation tools for Unix) to get to the point where I was getting pretty much 100 percent accuracy for what I needed.
If you have a highly customized font and go with Tesseract alone, you have to "train" the system: basically, you feed it a bunch of training data. This is well documented on the tesseract-ocr site. You essentially create a new "language" for your font and pass it in with the -l parameter.
The other training mechanism I found was in Ocropus, using neural net (bpnet) training. It requires a lot of input data to build a good statistical model.
As for invoking them, Tesseract and Ocropus are both C++. It won't be as simple as ReadLines(Image), but there is an API you can check out; a minimal example follows below. You can also invoke them via the command line.
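For reference, the basic Tesseract C++ API looks like this (standard usage from the Tesseract documentation; pass your trained language name instead of "eng" if you created one):

    #include <tesseract/baseapi.h>
    #include <leptonica/allheaders.h>
    #include <cstdio>

    // Sketch: run Tesseract OCR on a single image and print the result.
    int main() {
        tesseract::TessBaseAPI api;
        if (api.Init(nullptr, "eng")) { // or the -l language you trained
            fprintf(stderr, "Could not initialize tesseract.\n");
            return 1;
        }
        Pix* image = pixRead("input.tif"); // Leptonica loads the image
        api.SetImage(image);
        char* text = api.GetUTF8Text();
        printf("%s", text);

        delete[] text;
        pixDestroy(&image);
        api.End();
        return 0;
    }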
While I cannot recommend one in particular, the term you are looking for is OCR (Optical Character Recognition).
There is tesseract-ocr, which is a professional library for doing this.
From their web site:
The Tesseract OCR engine was one of the top 3 engines in the 1995 UNLV Accuracy test. Between 1995 and 2006 it had little work done on it, but it is probably one of the most accurate open source OCR engines available
I think what you want is Conjecture (formerly the libgocr project). I haven't used it for a few years, but it used to be very reliable if you set up a key.
The Tesseract OCR library gives pretty accurate results; it's a C and C++ library.
My initial results were around 80% accurate, but after applying pre-processing to the images before feeding them in for OCR, the results were around 95% accurate.
What the pre-processing involved:
1) Binarize the bitmap (B&W worked better for me).
2) Resample your image to 300 dpi.
3) Save your image in a lossless format, such as LZW TIFF or CCITT Group 4 TIFF.
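Those three steps could be reproduced with OpenCV, for example (my substitution; the answer itself doesn't name a tool, and the scale factor depends on your source resolution):

    #include <opencv2/opencv.hpp>

    // Sketch of the three pre-processing steps using OpenCV
    // (file names and the 2x scale factor are placeholders).
    int main() {
        cv::Mat src = cv::imread("scan.png", cv::IMREAD_GRAYSCALE);

        // 1) Binarize; Otsu picks the threshold automatically.
        cv::Mat bw;
        cv::threshold(src, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

        // 2) Resample, aiming for roughly 300 dpi.
        cv::Mat resized;
        cv::resize(bw, resized, cv::Size(), 2.0, 2.0, cv::INTER_CUBIC);

        // 3) Save in a lossless format such as TIFF.
        cv::imwrite("preprocessed.tif", resized);
        return 0;
    }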