Search detected faces in a database with OpenCV - C++

I am working with this Code:
Program sample
the above link has been programmed with the help of this page:
Servo Magazine
This code can extract a face, learn it, and save the learned face in a database with a label (for example chris_laughing.bmp or chris_sad.bmp). It can then recognize the faces that the user has saved in the database.
My project is to send an e-mail to the user if the person is not in the database.
I included a function to send an e-mail to the user.
So I have saved two different images of two celebrities, chris and john. When I click recognize, the program shows me the correct person, with a label (for example chris_laughing.bmp), from the database.
The problem is that if I extract (detect) a face of some other person (neither chris nor john), the code shows me the NEAREST person from the database.
What I want is for the program to show a message box that says: this person is not in the database.
Is this possible with this program (code)?

That program works by assuming that the face images for each person lie in a subspace different from that of other people. This idea can work really well in some situations. The program learns a subspace for each person, and when you input a new image it measures the distance to all the subspaces it has previously learned and chooses the nearest one.
The program doesn't seem to have any check for the case where the input image is too far from all of the learned subspaces. However, it would be an interesting exercise to try to add that feature.
Here is some info about the main idea behind the software: http://en.wikipedia.org/wiki/Eigenface
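If you want to try adding that check, here is a minimal sketch of the idea. It assumes OpenCV's contrib face module (EigenFaceRecognizer) rather than the legacy C API used in the Servo Magazine code, assumes the recognizer has already been trained on your saved images, and the threshold value is a placeholder you would have to tune on your own data:

    // Sketch only: reject a match whose distance to the nearest learned face
    // exceeds a threshold, instead of always accepting the nearest one.
    #include <opencv2/core.hpp>
    #include <opencv2/face.hpp>

    bool recognizeOrReject(const cv::Ptr<cv::face::EigenFaceRecognizer>& model,
                           const cv::Mat& testFace, double maxDistance, int& label)
    {
        double distance = 0.0;              // distance in eigenspace, filled in by predict()
        model->predict(testFace, label, distance);
        return distance <= maxDistance;     // false => "this person is not in the database"
    }

Alternatively, the recognizer's threshold parameter (setThreshold) does the same thing inside the library: predict() then returns a label of -1 when even the nearest match is too far away, which is exactly the case where you would show your message box and send the e-mail.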

Related

UE4 - I want to make an in-game camera/photography mode and save the pictures

For my game project I want to include a "camera mode".
This means that, at the press of a button, the current camera view gets saved to an in-game gallery.
After some searching, I only found ways to save a screenshot to disk (BP for saving Screenshot, semi-functional), but I want the picture to still be available in my game, maybe as a texture or in a struct, so I can later use it in an in-world picture frame or newspaper.
I did try SceneCaptureComponent2D, but I never got it really working, and searching online gave no satisfactory results.
By the way, I'm fine with C++; I'm just building my current prototype with BP for faster testing and iteration.
I hope you can help me.
I would have commented on your question, but I do not have enough reputation to do so; the answer I am providing is more a hint at how you could do it than a straight solution to your problem.
Check out this repository on how to capture images with C++ from a running application; it is actually meant for recording data.
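Since you mention SceneCaptureComponent2D, here is a rough sketch of the usual C++ approach: render the current view into a UTextureRenderTarget2D, which you can keep in an array or struct and later assign to a material on an in-world picture frame. The function and the chosen resolution are my own, not from the linked repository:

    // Rough sketch (UE4 C++): one-shot capture of the scene into a render target
    // that stays alive in memory, usable as a texture later on.
    #include "Components/SceneCaptureComponent2D.h"
    #include "Engine/TextureRenderTarget2D.h"

    UTextureRenderTarget2D* CaptureToTexture(USceneCaptureComponent2D* Capture, UObject* Outer)
    {
        UTextureRenderTarget2D* Target = NewObject<UTextureRenderTarget2D>(Outer);
        Target->InitAutoFormat(1920, 1080);                               // size of the "photo"

        Capture->TextureTarget = Target;
        Capture->CaptureSource = ESceneCaptureSource::SCS_FinalColorLDR;  // post-processed color
        Capture->bCaptureEveryFrame = false;                              // only capture on demand
        Capture->CaptureScene();                                          // fill Target once

        return Target;                                                    // store this in your gallery
    }

The same pieces exist in Blueprint (a Scene Capture Component 2D plus a render target), so you can prototype it in BP first and move it to C++ later.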

How to use Tesseract to extract text areas from a vehicle registration certificate

I need advice about Tesseract. I have tried to use Tesseract, but the result is not perfect; a lot of information is missing. I would like to scan a French vehicle registration certificate and I need to store the data in a database. You can find the document below; it is a French registration certificate. Is it possible to detect each area in this document and keep this information in a database? I have read on the internet that it is not possible to detect areas with Tesseract alone, is that true?
For example, there are the areas A, B, C.1, and D.2.1. How do I detect/scan each area and insert this information into the database?
Example:
https://www.ecartegrise.fr/wp-content/uploads/2013/03/nouvelle-carte-grise-specimen.jpg
I would like to do this:
http://www.adoc-solutions.eu/images/Documentations/cartes-grises.png
How do I retrieve the text of each area and insert it into a database?
Thanks for your help
Nikolas
I am currently working on a project similar to yours; here are my suggestions.
OCR techniques (Optical Character Recognition)
There are a few OCR tools able to extract data from a PDF form or an image. Here is a list of OCR tools that I recommend:
- Convertio
- PDFMiner: PDF2txt, PDF2Word
- Tabula: extracting data from a table
- ABBYY FineReader 14
- DataWatch
If you have any complementary information, please do share.
I have been working on extracting tables and form data from PDFs for quite some time. I think the solution to your problem is to first detect all the areas where text is written and then create a mapping to columns.
If the registration form is static in nature, meaning the text areas of the particular fields are at fixed positions, then you can create a template specific to your problem, crop the image at those defined coordinates, and then apply Tesseract to each crop.
Tesseract is not 100% accurate, so to improve accuracy you can train it on your own data.
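If the field positions really are fixed, a minimal sketch of the crop-then-OCR idea could look like this (C++ with OpenCV and the Tesseract API; the file name and rectangle coordinates are placeholders you would measure on your template):

    // Minimal sketch: crop one fixed field (say area A) from the scanned
    // certificate and run Tesseract on just that region.
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <tesseract/baseapi.h>
    #include <iostream>

    int main()
    {
        cv::Mat doc = cv::imread("carte_grise.png");              // hypothetical scan
        cv::Mat areaA = doc(cv::Rect(100, 80, 400, 60)).clone();  // placeholder coordinates for field A

        cv::Mat gray;
        cv::cvtColor(areaA, gray, cv::COLOR_BGR2GRAY);            // OCR works better on clean grayscale

        tesseract::TessBaseAPI ocr;
        ocr.Init(nullptr, "fra");                                 // French language data
        ocr.SetImage(gray.data, gray.cols, gray.rows, 1, static_cast<int>(gray.step));

        char* text = ocr.GetUTF8Text();
        std::cout << "Field A: " << text << std::endl;            // this string goes into your database
        delete[] text;
        ocr.End();
        return 0;
    }

Repeat the crop for each field (B, C.1, D.2.1, ...) with its own rectangle, and insert the resulting strings into the corresponding database columns.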

Automatic Numberplate Recognition

As the title suggests, I want to build an ANPR application on Windows. I am working with Brazilian number plates, and I am using OpenCV for this.
So far I have managed to extract the letters from the number plate. The following images show some of the characters I have extracted.
The problem I am facing is how to recognize those letters. I tried Google Tesseract, but it sometimes fails to recognize them. Then I tried to train an OCR database using OpenCV with about 10 images per character, but that also did not work properly.
So I am stuck here. I need this for my final-year project, so can anybody help me? I would really appreciate it.
The following site does it very nicely:
https://www.anpronline.net/demo.html
Thank you.
You could train an ANN or multi-class SVM on the letter images, like here; a sketch of the SVM route follows.
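Here is a rough sketch of that idea with OpenCV's ml module, assuming each training sample is a small grayscale crop (say 20x20) flattened to a row vector, and that you have already filled the sample and label matrices from your extracted letter images:

    // Rough sketch: multi-class SVM for plate characters using OpenCV's ml module.
    #include <opencv2/core.hpp>
    #include <opencv2/ml.hpp>

    cv::Ptr<cv::ml::SVM> trainLetterSvm(const cv::Mat& samples /* CV_32F, one row per image */,
                                        const cv::Mat& labels  /* CV_32S, one class id per row */)
    {
        cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
        svm->setType(cv::ml::SVM::C_SVC);      // multi-class classification
        svm->setKernel(cv::ml::SVM::LINEAR);   // start simple; try an RBF kernel if accuracy is poor
        svm->train(samples, cv::ml::ROW_SAMPLE, labels);
        return svm;
    }

    int classifyLetter(const cv::Ptr<cv::ml::SVM>& svm, const cv::Mat& letter20x20)
    {
        cv::Mat row = letter20x20.reshape(1, 1);    // flatten to a single row
        row.convertTo(row, CV_32F);
        return static_cast<int>(svm->predict(row)); // class id, mapped back to a character by you
    }

Note that about 10 images per character is far too few for any of these approaches; collect dozens or hundreds per class before judging the accuracy.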
Check out OpenALPR (http://www.openalpr.com). It already has the problem solved.
If you need to do it yourself, you really do need to train Tesseract; it will give you the best results. 10 images per character is not enough, you need dozens or hundreds. If you can find a font that is similar to your plate characters, a good approach is to print out a sheet of paper with all of the characters repeated multiple times, then take 5-10 pictures of the page with your camera. These can then be your input for training Tesseract.

Looking for Ideas: How would you start to write a geo-coder?

Because the open source geo-coders cannot begin to compare to Google's or even Yahoo's, I would like to start a project to create a good open source geo-coder. Just to clarify, a geo-coder takes some text (usually with some constraints) and returns one or more lat/lon pairs.
I realize that this is a difficult and gargantuan task, so I am wondering how you might get started. What would you read? What algorithms would you familiarize yourself with? What code would you review?
And also, assuming you were going to develop this very agilely, what would you want the first prototype to be able to do?
EDIT: Let's set aside the data question for now. I am going to use OpenStreetMap data, along with a database of waypoints that I have. I would later plan to include other data sets as well, and I realize the geo-coder would be inherently limited by the quality of the original data.
The first (and probably blocking) problem would be: where do you get your data from (unless you are willing to pay thousands of dollars for proprietary sets)?
You could build a geocoding API on top of OpenStreetMap (they publish their data in dumps on a regular basis), I guess, but that data was still very incomplete last time I checked.
Algorithms are easy. Good mapping data, however, is expensive. Very expensive.
Google drove their cars all over the world, collecting this data among other things.
From a .NET point of view these articles might be interesting for you:
Writing Your Own GPS Applications: Part I
Writing Your Own GPS Applications: Part 2
Writing GIS and Mapping Software for .NET
I've only glanced at the articles but they've been on CodeProject's 'Most Popular' list for a long time.
And maybe this CodePlex project which the author of the articles above made available.
I would start at the absolute beginning by figuring out how you're going to get the data that matches a street address with a geocode. Either Google had people going around with GPS units, OR they got the information from some existing source. That existing source may have been... (all guesses)
The Postal Service
Some existing maps(printed)
A bunch of enthusiastic users, early adopters of GPS technology, who were more than willing to enter street addresses and GPS coordinates
Some government entity (or entities)
Their own satellites
etc
I guess what I'm getting at is that the information was either imported from somewhere or input by someone via some interface. As my starting point, I would look at how to get that information. In an open source situation, you may be able to get a bunch of enthusiastic people to enter information.
So for my first prototype, boring as it would be, I would create a form for entering information.
Then you need to know the math for figuring out the closest distance (as the crow flies); a small sketch of that calculation is below. From there, try to figure out how to include roads. (My guess is you would have to have a data point for each and every curve, where you hold the geocode location of the curve and the angle of the road on a north/south and east/west vector. You'd probably need to take incline into account too, to get accurate road measurements.)
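For the as-the-crow-flies part, the usual formula is the haversine great-circle distance, which treats the Earth as a sphere:

    // Great-circle ("as the crow flies") distance between two lat/lon points in
    // degrees, using the haversine formula and a spherical Earth approximation.
    #include <cmath>

    double haversineKm(double lat1, double lon1, double lat2, double lon2)
    {
        const double kEarthRadiusKm = 6371.0;
        const double kDegToRad = 3.14159265358979323846 / 180.0;

        const double dLat = (lat2 - lat1) * kDegToRad;
        const double dLon = (lon2 - lon1) * kDegToRad;

        const double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
                         std::cos(lat1 * kDegToRad) * std::cos(lat2 * kDegToRad) *
                         std::sin(dLon / 2) * std::sin(dLon / 2);
        return 2.0 * kEarthRadiusKm * std::asin(std::sqrt(a));
    }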
That's just where I'd start.
But in all honesty, I wouldn't even start on this. Other programmers have done it already, I'm more interested in what hasn't already been done.
I would:
- get my free raw data from somewhere like http://ipinfodb.com/ip_database.php
- load it into a database, denormalizing for fast lookups
- design my API
- build it out as a RESTful web service
- return results in varying formats: JSON, XML, CSV, raw text
The first prototype should accept a ZIP code and return lat/lon in raw text.
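A toy version of that first prototype, written here in C++ for consistency with the rest of the page (any language with a web framework would do, and the table entries are made-up stand-ins for the imported data set):

    // Toy "ZIP code in, lat/lon out" lookup: an in-memory table standing in for
    // the denormalized database, answering in raw text.
    #include <iostream>
    #include <string>
    #include <unordered_map>

    struct LatLon { double lat; double lon; };

    int main()
    {
        const std::unordered_map<std::string, LatLon> zipTable = {
            {"10001", {40.7506, -73.9972}},    // illustrative values only
            {"94103", {37.7726, -122.4099}},
        };

        std::string zip;
        std::cin >> zip;

        auto it = zipTable.find(zip);
        if (it == zipTable.end())
            std::cout << "not found\n";
        else
            std::cout << it->second.lat << "," << it->second.lon << "\n";  // raw text response
        return 0;
    }

Once that works behind a real HTTP endpoint, the other formats (JSON, XML, CSV) are just different serializations of the same lookup result.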

Help with algorithm to dynamically update text display

First, some backstory:
I'm making what may amount to be a "roguelike" game so I can exercise some interesting ideas I've got floating around in my head. The gameplay isn't going to be a dungeon crawl, but in any case the display is going to be done in a similar fashion, with simple ASCII characters.
Being that this is a self-exercise, I endeavor to code most of it myself.
Eventually I'd like to have the game runnable on arbitrarily large game worlds (to the point where I envision having the game networked and spanning many monitors in a computer lab).
Right now I've got some code that can read and write to arbitrary sections of a text console, and a simple partitioning system set up so that I can path-find efficiently.
And now the question:
I've run some benchmarks, and the biggest bottleneck is the redrawing of the text consoles.
Having a game world that large will require an intelligent update of the display. I don't want to have to re-push my entire game buffer every frame... I need some pointers on how to set it up so that it only draws the sections of the game that have been updated (and not just individual characters, as I've got now).
I've been manipulating the Windows console via windows.h, but I would also be interested in getting it to run on Linux machines over a PuTTY client connected to the server.
I've tried adapting some video-processing routines, as there is nearly a 1:1 ratio between pixel and character, but I had no luck.
Really I want a simple explanation of some of the principles behind it, but some example (pseudo)code would be nice too.
Use curses, or if you need to do it yourself, read about the VTnnn control codes. Both of these should work on Windows and on *nix terminals and consoles. You can also consult the NetHack source code for hints. This will let you change characters on the screen only where changes have happened.
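With curses the partial update comes essentially for free: the library keeps its own copy of the screen, so you write only into the cells that changed and refresh() transmits just the differences to the terminal. A tiny sketch (ncurses, which also works over a PuTTY connection; PDCurses is the usual choice on Windows):

    // Tiny ncurses sketch: draw, then move, the player character. refresh() only
    // sends the cells that differ from what is already on screen.
    #include <curses.h>

    int main()
    {
        initscr();              // take over the terminal
        curs_set(0);            // hide the cursor
        mvaddch(10, 20, '@');   // draw the player at row 10, column 20
        refresh();              // only changed cells are sent to the terminal
        getch();                // wait for a key

        mvaddch(10, 20, '.');   // erase the old position
        mvaddch(10, 21, '@');   // draw the new position
        refresh();              // again, only these two cells are transmitted
        getch();

        endwin();               // restore the terminal
        return 0;
    }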
I am not going to claim to understand this, but I believe it is close to the problem behind James Gosling's legendary Gosling Emacs redisplay code. See his paper, appropriately titled "A Redisplay Algorithm", and also the general string-to-string correction problem.
"Having a game world that large will require an intelligent update of the display. I don't want to have to re-push my entire game buffer every frame... I need some pointers on how to set it up so that it only draws the sections of the game that have been updated (and not just individual characters, as I've got now)."
The size of the game world isn't really relevant, as all you need to do is work out the visible area for each client and send that data. If you have a typical 80x25 console display then you're going to be sending just 2 or 3 kilobytes of data each time, even if you add in colour codes and the like. This is typical of most online games of this nature: update what the person can see, not everything in the world.
If you want to experiment with trying to find a way to cut down what you send, then feel free to do that for learning purposes, but we're about 10 years past the point where updating a console display in something approaching real time was too expensive, and it would be a shame to waste time fixing a problem that doesn't need fixing. Note that the PDF linked above gives an O(ND) solution, whereas simply sending the entire console is half of O(N), where N is defined as the sum of the lengths of A and B, and D is the number of differences between them.
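If you do end up sending less than the full screen, the simplest scheme is to keep the buffer you last sent to each client and transmit only the cells that differ; a sketch:

    // Simple per-cell diff of the visible console area: compare the buffer last
    // sent to the client with the current one and emit only the changed cells.
    #include <vector>

    struct CellUpdate { int row; int col; char ch; };

    std::vector<CellUpdate> diffScreens(const char* prev, const char* curr,
                                        int rows, int cols)
    {
        std::vector<CellUpdate> changes;
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c) {
                const int i = r * cols + c;
                if (prev[i] != curr[i])
                    changes.push_back({r, c, curr[i]});  // only these go over the network
            }
        return changes;
    }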