How To Detect A Specific Japanese Character from an Image and find the Bounding Box around it? - computer-vision

I want to detect the Ichi kanji (一), a Japanese character, in an image.
I am thinking of going with an object detection algorithm like YOLOv5, but I wonder whether it will work well: the character looks like an extended hyphen in many fonts, so it might be detected as a hyphen when the image contains phone numbers. Basically, I am wondering whether YOLO will be able to distinguish the two based on context.
Am I going in the right direction?
What is a better approach?
I also have a limited dataset, i.e. 100 images.
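If the fonts are reasonably constrained, a baseline worth trying before training a detector is multi-scale template matching with OpenCV. A minimal sketch, assuming a cropped image of the character (template.png) and a page to search (page.png), both hypothetical file names:

import cv2
import numpy as np

page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

best = None
# Try several scales, since the character size on the page is unknown.
for scale in np.linspace(0.5, 2.0, 16):
    t = cv2.resize(template, None, fx=scale, fy=scale)
    if t.shape[0] > page.shape[0] or t.shape[1] > page.shape[1]:
        continue
    res = cv2.matchTemplate(page, t, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    if best is None or max_val > best[0]:
        best = (max_val, max_loc, t.shape)

score, (x, y), (h, w) = best
print("score %.2f, box x=%d y=%d w=%d h=%d" % (score, x, y, w, h))

Note that this has the same ambiguity problem you describe: a hyphen will also score highly, so you would still need a contextual filter, for example rejecting matches whose horizontal neighbours are digits.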

Related

Can I track objects by mapping their coordinates from a sequence of images?

I have a video of simple moving dots (that sometimes overlap) that is saved as a sequence of images. In each image I detect all the dots and save their coordinates:
(snapshot 1 -> snapshot 2)
I would like to infer the trajectory of each dot. The dots move smoothly and not too fast from one frame to the next, but if, for each point in one frame, I just pick its closest point in the next frame, the trajectory reconstruction often fails.
I tried the OpenCV multi-object trackers, but they very quickly lose their target by jumping to a different dot when the dots overlap. The detection itself works very nicely, though.
The video and the objects to track are simple, so I find it hard to believe that I need to implement something much more technical to track these dots accurately. That is why I decided to ask here; I am out of ideas. Any tip or advice is appreciated. Thanks.
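Greedy nearest-neighbour matching fails because one early bad match can steal a dot from its true track. Matching all dots between two frames jointly, so that the total distance is minimized, is usually much more robust. A minimal sketch using the Hungarian algorithm from SciPy (the coordinates are hypothetical):

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_frames(prev_pts, next_pts):
    # Cost matrix: Euclidean distance between every pair of points.
    cost = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

prev_pts = np.array([[10.0, 12.0], [40.0, 41.0]])  # dots in frame t
next_pts = np.array([[41.0, 43.0], [11.0, 13.0]])  # dots in frame t+1
print(match_frames(prev_pts, next_pts))            # [(0, 1), (1, 0)]

For overlapping dots, predicting each dot's next position from its recent velocity (or a Kalman filter) and matching against the predictions instead of the last observed positions usually helps.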

OpenCV: Letters and words detection from edge detection image

I am currently dealing with text recognition. Here is part of a binarized image with edge detection (using Canny):
EDIT: I am posting a link to an image. I don't have 10 rep points so I cannot post an image.
EDIT 2: And here's the same piece after thresholding. Honestly, I don't know which approach would be better.
The questions remain the same:
How should I detect certain letters? I need to determine the location of every letter and then of every word.
Is it a problem that some letters are "opened"? I mean that they are not closed areas.
If I use cv::matchTemplate, does it mean that I need to have 24 templates for every letter + 10 for every digit? And then loop over my image to determine the best correlation?
If both the letters and the squares they are in are one pixel wide, what filters/operations should I use to close the opened letters? I tried various combinations of dilate and erode, with no effect.
The question is basically "how do I do OCR with OpenCV?", and the answer is that it's an involved process and quite difficult.
But here are some pointers. Firstly, it's hard to detect letters which are outlined; most of the tools are designed for filled letters. But it looks as if that image will have only one non-letter distractor left if you fill all loops below a certain size threshold, and you can get rid of the non-letter lines because they form one huge connected object.
Once you've filled the letters, they can be skeletonised.
You can't use morphological operations like open and close very sensibly on images where the details are one pixel wide. You can put the image through the operation, but essentially there is no distinction between detail and noise if all features are one pixel. However once you fill the letters, that problem goes away.
This isn't in any way telling you how to do it, just giving some pointers.
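As a rough illustration of the fill-then-clean idea (the file name and the size threshold are assumptions; findContours is shown with its OpenCV 4 signature):

import cv2
import numpy as np

edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)

# Fill every closed loop: draw all outer contours filled on a blank canvas.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
filled = np.zeros_like(edges)
cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)

# Remove huge connected components (the surrounding box and lines, not letters).
n, labels, stats, _ = cv2.connectedComponentsWithStats(filled)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 5000:
        filled[labels == i] = 0

After this step the letters are solid blobs and can be skeletonised or passed to an OCR engine.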
As mentioned in the previous answer by malcolm, OCR will work better on filled letters, so you can do the following:
1. Use your second approach, but take the inverse result, not the one you are showing.
2. Run connected component labeling.
3. For each component, run the OCR algorithm.
In order to discard outliers, I would try to use the spatial relation between detected letters: they should have other letters horizontally or vertically next to them.
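A minimal sketch of steps 2 and 3, assuming pytesseract as the tesseract wrapper (the file name is hypothetical):

import cv2
import pytesseract

binary = cv2.imread("filled.png", cv2.IMREAD_GRAYSCALE)  # white letters on black

n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):  # label 0 is the background
    x = stats[i, cv2.CC_STAT_LEFT]
    y = stats[i, cv2.CC_STAT_TOP]
    w = stats[i, cv2.CC_STAT_WIDTH]
    h = stats[i, cv2.CC_STAT_HEIGHT]
    roi = binary[y:y + h, x:x + w]
    # Pad, then invert: tesseract expects dark text on a light background.
    roi = cv2.copyMakeBorder(roi, 8, 8, 8, 8, cv2.BORDER_CONSTANT, value=0)
    roi = cv2.bitwise_not(roi)
    # --psm 10 treats the image as a single character.
    ch = pytesseract.image_to_string(roi, config="--psm 10").strip()
    print((x, y, w, h), ch)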
Good luck

Windows/GDI - measure size of a font character (Marlett)

I'm trying to measure the size of a character and I can't get the right result out. We use DrawText with the DT_CALCRECT flag to measure strings, and that seems to work well enough (to be fair, we haven't actually tested it for pixel perfect measurements).
However, I'm now trying to measure a single character, the 'x' symbol in Marlett, to draw a close button, but I can't get the actual size of the character out of it. It tells me the character is 8x8 when it is really 6x5, and the character is then drawn slightly displaced from where I ask it to be.
Height-wise, I'm guessing it's returning the overall character height for that font (which is the same for all characters, as in other fonts). For the width, though, it seems to be adding character spacing, which it doesn't seem to do with other fonts (I tried a couple of characters in Arial). In Marlett, all tested characters seem to return the same width.
Is it possible to find out what the extra spacing around a character is going to be, so I can center the 'x' in a box?
Most fonts have a little bit of spacing on either side of the glyphs so that they don't touch when you line them up. You might not expect this in a specialty symbol font like Marlett, but it's there.
You can use GetCharABCWidths to get an approximate measurement of these spaces. The A value is typically the gap before the glyph, the B value is the width of the glyph itself, and the C value is the space after the glyph (together, A + B + C make up the advance width). These may not be precise measurements, though, as they're more about making things look right: a round glyph like 'O' might spill into the surrounding space, and the A and C values can actually be negative.
Well-made fonts also contain kerning adjustments so that the spaces between certain pairs of characters look balanced, but I wouldn't expect a specialty symbol font like Marlett to have these.
For your specific need, you could test whether the B value gives the precise width of the glyph and whether the A value is the appropriate offset.
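For illustration, a minimal sketch of that call from Python via ctypes (the same GetCharABCWidthsW call is available directly from C/C++; the font height and the queried character are assumptions, and the function only works for TrueType/OpenType fonts, which Marlett is):

import ctypes

class ABC(ctypes.Structure):
    _fields_ = [("abcA", ctypes.c_int),    # space before the glyph
                ("abcB", ctypes.c_uint),   # width of the glyph itself
                ("abcC", ctypes.c_int)]    # space after the glyph

gdi32 = ctypes.windll.gdi32
user32 = ctypes.windll.user32

hdc = user32.GetDC(None)
# Negative height requests a 16-pixel character height; charset 2 is SYMBOL_CHARSET.
hfont = gdi32.CreateFontW(-16, 0, 0, 0, 400, 0, 0, 0, 2,
                          0, 0, 0, 0, "Marlett")
old = gdi32.SelectObject(hdc, hfont)

abc = ABC()
ch = ord("x")  # the glyph asked about
gdi32.GetCharABCWidthsW(hdc, ch, ch, ctypes.byref(abc))
print("A=%d B=%d C=%d" % (abc.abcA, abc.abcB, abc.abcC))

gdi32.SelectObject(hdc, old)
gdi32.DeleteObject(hfont)
user32.ReleaseDC(None, hdc)

To center the glyph in a box you would then offset your draw position by -A and use B as the true glyph width.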
Marlett was a cool hack for rendering nice controls at a variety of resolutions; it certainly worked better than the old OEM images you can still get from LoadImage. But modern UI generally relies on code (like Themes, as IInspectable suggested) and/or custom bitmaps (as Hans Passant suggested). Your best bet is probably to pursue one of those techniques for your stated purpose. But I felt the question you actually asked deserved an answer.

Improve Tesseract detection quality

I am trying to extract alphanumeric characters (a-z0-9) which do not form meaningful words from an image taken with a consumer camera (including mobile phones). The characters have equal size and font type and are not formatted. The actual processing is done under Windows.
The following image shows the raw input:
After perspective processing I apply the following with OpenCV:
Convert from RGB to gray
Apply cv::medianBlur to remove noise
Convert the image to binary using adaptive thresholding cv::adaptiveThreshold
I know the number of rows and columns of the grid. Thus I simply extract each grid cell using this information.
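In code, that preprocessing pipeline roughly corresponds to the following sketch (file name, blur kernel, threshold parameters, and grid dimensions are all assumptions):

import cv2

img = cv2.imread("board.jpg")  # perspective-corrected input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 3)
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 10)

rows, cols = 4, 10  # known grid dimensions
h, w = binary.shape
cells = [binary[r * h // rows:(r + 1) * h // rows,
                c * w // cols:(c + 1) * w // cols]
         for r in range(rows) for c in range(cols)]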
After all these steps I get images which look similar to these:
Then I run tesseract (latest SVN version with latest training data) on each extracted cell image individually (I tried different -psm and -l values):
tesseract.exe -l eng -psm 11 sample.png outtext
The results produced by tesseract are not very good:
Most characters are not recognized.
The grid lines are sometimes interpreted as "l" or "i" characters.
I already experimented with morphologic operations (open, close, erode, dilate) and replaced adaptive thresholding with OTSU thresholding (THRESH_OTSU) but the results got worse.
What else could I try to improve the recognition quality? Or is there an even better method of extracting the characters than tesseract (for instance, template matching)?
Edit (21-12-2014):
I tested simple template matching (using normalized cross-correlation and LMS), but with even worse results. However, I have made a huge step forward by extracting each character using findContours and then running tesseract on only one character at a time with the -psm 10 option, which interprets each input image as a single character. Additionally, I remove non-alphanumeric characters in a post-processing step. The first results are encouraging, with detection rates of 90% and better. The main problems are misdetections of the "9", "g" and "q" characters.
Regards,
As I say here, you can tell tesseract to pay attention to "almost identical" characters.
Also, some tesseract options won't help you in your example. For instance, "Pocahonta5S" will most of the time become "PocahontaSS", because the digit sits inside a word of letters.
Concerning pre-processing, you'd be better off using a sharpening filter.
Don't forget that tesseract always applies Otsu's thresholding before reading anything.
If you want good results, sharpening plus adaptive thresholding, together with some other filters, is a good combination.
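A sketch of that combination, unsharp masking followed by adaptive thresholding, plus a character whitelist so that only the expected alphabet (a-z0-9) can be reported; all parameter values are assumptions to tune:

import cv2
import pytesseract

gray = cv2.imread("cell.png", cv2.IMREAD_GRAYSCALE)  # hypothetical cell image

# Unsharp masking: subtract a blurred copy to emphasize edges.
blur = cv2.GaussianBlur(gray, (0, 0), 3)
sharp = cv2.addWeighted(gray, 1.5, blur, -0.5, 0)

binary = cv2.adaptiveThreshold(sharp, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 10)

# --psm 10: single character; the whitelist prevents characters outside the
# expected set from ever being reported.
cfg = "--psm 10 -c tessedit_char_whitelist=abcdefghijklmnopqrstuvwxyz0123456789"
print(pytesseract.image_to_string(binary, config=cfg).strip())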
I recommend using OpenCV in combination with tesseract.
The problem for tesseract in your input images is the non-character regions.
My approach:
To get rid of these, I would use the OpenCV findContours function to obtain all contours in your binary image. Afterwards, define some criteria to eliminate the non-character regions, for example only keeping the regions which are inside the image and don't touch the border, or only keeping regions with a specific area or a specific height-to-width ratio. Find some kind of features that let you distinguish between character and non-character contours.
Afterwards, eliminate these non-character regions and hand the images on to tesseract.
Just as an idea for testing this approach in general: eliminate the non-character regions manually (GIMP, Paint, ...) and give the image to tesseract. If the result fits your expectations, you can try to eliminate the non-character regions with the method proposed above.
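A minimal sketch of the proposed filtering (the file name and all criteria values are assumptions to adjust):

import cv2

binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
H, W = binary.shape

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
keep = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    touches_border = x == 0 or y == 0 or x + w == W or y + h == H
    area_ok = 100 < w * h < 5000           # plausible character size
    ratio_ok = 0.2 < w / float(h) < 1.5    # plausible width-to-height ratio
    if not touches_border and area_ok and ratio_ok:
        keep.append((x, y, w, h))
print(keep)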
I suggest a similar approach to the one I'm using in my own case.
(I only have a problem with speed, which you should not have if there are only a few characters to compare.)
First: Get the form to have default size and transform it:
https://www.youtube.com/watch?v=W9oRTI6mLnU
Second: Use matchTemplate
Improve template matching with many templates for one Image/ find characters on image
I also played around with OCR, but I didn't like it for two reasons:
It is something of a black box, and it's hard to debug why something isn't recognized.
In my case it was never 100% accurate no matter what I did, even for screenshots with "perfect" characters.

How to detect Text Area from image?

I want to detect the text area in an image as a preprocessing step for the tesseract OCR engine. The engine works well when the input is text only, but it fails when the input image contains non-text content, so I want to detect only the text content in the image. Any idea of how to do that would be helpful. Thanks.
Take a look at this bounding box technique demonstrated with OpenCV code:
(Input, eroded, and result images omitted.)
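The original code is not preserved here, but a minimal reimplementation of the same idea, binarize, smear the characters together so each text block becomes one blob, then take bounding boxes of the blobs, might look like this (kernel size, iteration count, and the area threshold are assumptions; the linked answer erodes dark text, which on an inverted binary image corresponds to dilating):

import cv2

img = cv2.imread("document.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Smear characters together horizontally so each text block becomes one blob.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
blobs = cv2.dilate(binary, kernel, iterations=3)

contours, _ = cv2.findContours(blobs, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 500:  # skip tiny specks
        cv2.rectangle(img, (x, y), (x + w, y + h), 0, 2)
cv2.imwrite("result.png", img)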
Well, I'm not very experienced in image processing, but I hope I can help with my theoretical approach.
In most cases, text forms parallel, horizontal rows, and the space between rows contains lots of background pixels. This can be exploited to solve the problem.
So... if you collapse each pixel row of the image into a single value, you get a one-pixel-wide image as output. When the input image contains text, this output will very likely show a periodic pattern in which dark areas are followed by brighter areas repeatedly. These "groups" of darker pixels indicate the position of the text content, while the brighter "groups" indicate the gaps between the individual rows.
You'll probably find that the brighter areas are much smaller than the others. Text is much more regular than most other picture elements, so it should be easy to separate.
You have to implement a procedure to detect these periodic recurrences. Once the script can determine that the input picture has these characteristics, there's a high chance that it contains text. (However, this approach can't distinguish between actual text and simple horizontal stripes...)
For the next step, you must find a way to determine the boundaries of the paragraphs using the above-mentioned method. I'm thinking of a pretty simple algorithm which would divide the input image into smaller, narrow stripes (50-100 px) and check these areas separately, then compare the results to build a map of the areas that are likely filled with text. This method wouldn't be very accurate, but that probably doesn't bother the OCR system.
And finally, you need to use the text map to run the OCR on the desired locations only.
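A minimal sketch of the row-projection idea with NumPy (the ink and row thresholds are assumptions):

import cv2
import numpy as np

gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
ink = (gray < 128).astype(np.uint8)  # dark (ink) pixels become 1

# Horizontal projection profile: fraction of ink pixels in each row.
profile = ink.mean(axis=1)
is_text_row = profile > 0.05

# Group consecutive text rows into (top, bottom) bands.
bands, start = [], None
for y, flag in enumerate(is_text_row):
    if flag and start is None:
        start = y
    elif not flag and start is not None:
        bands.append((start, y))
        start = None
if start is not None:
    bands.append((start, len(is_text_row)))
print(bands)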
On the other hand, this method will fail if the input text is rotated by more than ~3-5 degrees. There's another drawback: if you have only a few rows, the pattern search will be very unreliable. More rows, more accuracy...
Regards, G.
I am new to stackoverflow.com, but I wrote an answer to a question similar to this one which may be useful to any readers who share this question. Whether or not the question is actually a duplicate, since this one was first, I'll leave up to others. If I should copy and paste that answer here, let me know. I also found this question first on Google, rather than the one I answered, so linking it here may benefit more people, especially since it describes several different ways of finding text areas. For me, when I looked this question up, it did not fit my problem case.
Detect text area in an image using python and opencv
At the current time, the best way to detect text is by using EAST (An Efficient and Accurate Scene Text Detector).
The EAST pipeline is capable of predicting words and lines of text at arbitrary orientations on 720p images, and furthermore, can run at 13 FPS, according to the authors.
EAST quick start tutorial can be found here
EAST paper can be found here
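A minimal sketch of running the EAST detector through OpenCV's dnn module (the model file name is the commonly distributed frozen TensorFlow graph, and the image path is hypothetical; decoding the raw outputs into boxes is omitted, see the tutorial above):

import cv2

net = cv2.dnn.readNet("frozen_east_text_detection.pb")

img = cv2.imread("scene.jpg")
# EAST expects input dimensions that are multiples of 32.
blob = cv2.dnn.blobFromImage(img, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                "feature_fusion/concat_3"])
# scores holds a text/no-text confidence per 4x4 cell; geometry holds the
# rotated-box offsets. Threshold the scores, decode the geometry into boxes,
# and apply non-maximum suppression to get the final text regions.
print(scores.shape, geometry.shape)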