I have been trying to solve this issue for days. Here is an image.
I want to get digits from this image.
I tried background subtraction and partitioning the image into segments so I could compare them against templates, but the result is
111117
Along with this I tried the Tesseract API, and it refused to give me any output; at one point I got "GALLONS" as output. Any help is highly appreciated. Thanks
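For the segment-vs-template comparison step, a minimal sketch looks like the following. All names here are hypothetical illustrations (not the asker's actual code), and it assumes the digits have already been binarized and cut into equally sized cells:

```python
def match_score(segment, template):
    """Fraction of pixels that agree between two equally sized binary grids."""
    total = 0
    hits = 0
    for row_s, row_t in zip(segment, template):
        for a, b in zip(row_s, row_t):
            total += 1
            if a == b:
                hits += 1
    return hits / total

def classify(segment, templates):
    """Return the digit whose template best matches the segment."""
    return max(templates, key=lambda d: match_score(segment, templates[d]))
```

If every segment classifies as "1", that usually means the binarization or the segment/template alignment is off, not the scoring itself, so it is worth inspecting the intermediate binary images first.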
I'm stuck trying to get perspective transformation to work.
I want to draw an image so it fits 4 given points. It's something like you have in Photoshop, used to change the perspective of an image (see image below).
I have an image in byte array and I'm not using any additional libraries.
Everything I've found so far was either for OpenCV or didn't do what I wanted.
I have found an open-source program, PhotoDemon, that does exactly what I want, and its code is available. I spent many hours trying to get it to work, but it gives me completely weird results (second line in the image below).
Could someone provide me with some code, the step-by-step math of what to do and how, or even just pseudo-code? I'm a little bit sick of it. It seems easy, but I need some help.
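The standard math here is a homography: for each correspondence (x, y) → (u, v) you have u = (ax + by + c)/(gx + hy + 1) and v = (dx + ey + f)/(gx + hy + 1), which four point pairs turn into an 8×8 linear system. A library-free sketch (Gaussian elimination written out by hand, since the asker can't use OpenCV):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Fit (a..h) mapping four src corners onto four dst corners."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve(A, b)

def apply_h(h, x, y):
    """Map one point through the fitted homography."""
    a, b, c, d, e, f, g, hh = h
    w = g * x + hh * y + 1
    return ((a * x + b * y + c) / w, (d * x + e * y + f) / w)
```

To draw the warped image, invert the mapping (fit `homography(dst, src)`) and, for each destination pixel, sample the source at the point `apply_h` returns; this avoids holes that a forward mapping would leave.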
I don't know if the title explains what I'm trying to do, but after a few days with no sleep and searching the web I pretty much don't know how to do this.
I'm trying to take a point on a 2D image and find the depth of the rest of the points in the selected region relative to that point.
From all the internet searching I've done, it seems this is possible from a single 2D image, but stereo images could be used to increase the accuracy of the triangulation.
Ultimately I am looking into doing this using stereovision but if anyone knows how to do it using a single image I also would appreciate it.
I am using OpenCV 2.4.9 in C++ with VC12 (Visual Studio 2013).
Any help is appreciated.
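One caveat worth stating: a single image has no metric depth without extra assumptions (known object sizes, learned priors, etc.). With a rectified stereo pair the relation is the standard triangulation formula Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity in pixels. A tiny sketch, with all numbers below purely illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

In OpenCV terms, the disparity map would come from a stereo matcher run on a calibrated, rectified pair; the depth of each point in the selected region relative to the clicked point is then just the difference of their Z values.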
I've got a problem with a video image that I'm trying to capture and process on my PC.
The thing is that the video camera is wireless, so the video frames I'm getting have horizontal bands due to interference or whatever else is causing this.
What I want to do is remove the horizontal bands from the image to get the clearest image possible.
Is there any algorithm or method to do this? The algorithm has to be adaptive, because the bands are not always the same. I'm trying to accomplish this with OpenCV in C++, but I'm having trouble finding anything on this subject.
Thanks.
You should figure out what the real cause of these bands is and fix it. For any kind of image processing, bands like that will limit what you can actually do with the image; you won't be able to recover what should have been there.
I have to process a lot of scanned IDs and I need to extract photos from them for further processing.
Here's a fictional example:
The problem is that the scans are not perfectly aligned (rotated up to 10 degrees). So I need to find their position, rotate them and cut out the photo. This turned out to be a lot harder than I originally thought.
I checked OpenCV, and the only thing I found was rectangle detection, but it didn't give me good results: the detected rectangle does not always match the samples well enough. Also, its image-matching algorithm only works on non-rotated images, since it's just a brute-force comparison.
So I thought about using ARToolKit (an augmented-reality library), because I know it can locate a given marker on an image very precisely. But it seems that the markers have to be very simple, so I can't use a constant part of the document for this purpose (please correct me if I'm wrong). Also, I found it super hard to compile on Ubuntu 11.10.
OCR - I haven't tried this one yet, and before I start my research I'd be thankful for any suggestions on what to look for.
I'm looking for a C (preferably) or C++ solution. Python is an option too.
If you don't find a better solution: one method I ended up using for OCR preprocessing in the past was to convert the source images to PPM and use unpaper on Ubuntu. You can attempt to deskew the image based on whichever sides you specify as having clearly defined edges, and there is an option to bypass the filters that would normally be applied to black-and-white text. You probably don't want those for photos.
Example for images skewed no more than 15 degrees, using the bottom and right edges to detect rotation:
unpaper -n -dn bottom,right -dr 15 input.ppm output.ppm
unpaper was written in C, if the source is any help to you.
I am working on a project to stitch together around 400 high-resolution aerial images (around 36000x2600) to create a map. I am currently using OpenCV, and so far I have obtained the match points between the images. Now I am at a loss figuring out how to get the transformation matrices for the images so I can begin the stitching process. I have absolutely no background working with images or graphics, so this is a first for me. Can I get some advice on how to approach this?
The images I received also came with a data sheet listing the longitude, latitude, airplane wing angle, altitude, etc. of each image. I am unsure how accurate these data are, but I am wondering if I can use this information to compute the matrix transformations I need.
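To make the "transformation from match points" step concrete: the simplest possible alignment model is a pure translation, estimated by averaging the displacement of the matched points. This is only an illustrative sketch; aerial imagery with varying wing angle and altitude generally needs a full homography fitted robustly (e.g. with RANSAC, as OpenCV's `findHomography` does), but the fitting idea is the same:

```python
def mean_translation(matches):
    """Estimate a translation (dx, dy) from point matches.

    matches: list of ((x1, y1), (x2, y2)) pairs between two images,
    where (x1, y1) is in the first image and (x2, y2) its match.
    """
    n = len(matches)
    dx = sum(p2[0] - p1[0] for p1, p2 in matches) / n
    dy = sum(p2[1] - p1[1] for p1, p2 in matches) / n
    return dx, dy
```

The per-image metadata can seed this fit: longitude/latitude give an initial translation guess and altitude gives scale, which helps reject bad matches before the refinement step.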
Thanks
Do you want to understand the math behind the process, or just have a superficial idea of what's going on and simply use it?
The usual term for "image stitching" is image alignment. Feed Google with it and you'll find tons of sources.
For example, here.
Best regards,
zhengtonic
The recent OpenCV 2.3 release implemented a complete image-stitching pipeline. Maybe it is worth looking at.