OCR Libraries/Tools for TIFF analysis using Hadoop MapReduce

I am currently working on a requirement to analyze images and extract text and content for further analysis. The images need to be processed with Hadoop because of their huge volume. I am looking for libraries that fit well into the MapReduce paradigm and give the most accurate conversion possible; not only open-source but also commercial libraries would fit this requirement. Can the experts on this exchange guide me on which options I should choose for further analysis and prototyping?
I am currently evaluating the Tesseract library, but I would like to see whether there are better candidates for this requirement.
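If you go the Hadoop route, one common prototyping pattern is to run the OCR engine inside a Hadoop Streaming mapper, with each map task OCR-ing the TIFFs assigned to it. A minimal sketch, assuming Tesseract plus the pytesseract and Pillow Python packages are installed on every worker node and that each input line is a locally readable TIFF path (all of these are assumptions, not part of the question):

```python
#!/usr/bin/env python
# Hypothetical Hadoop Streaming mapper: reads one TIFF path per input line,
# runs Tesseract OCR on it, and emits "path<TAB>extracted text".
# Assumes pytesseract, Pillow, and the tesseract binary exist on every
# worker node and that the paths are readable locally on each node.
import sys

import pytesseract
from PIL import Image

def ocr_tiff(path):
    """Run Tesseract on a (possibly multi-page) TIFF and return its text."""
    image = Image.open(path)
    pages = []
    try:
        while True:
            pages.append(pytesseract.image_to_string(image))
            image.seek(image.tell() + 1)  # advance to the next TIFF page
    except EOFError:
        pass  # no more pages
    return " ".join(pages)

for line in sys.stdin:
    path = line.strip()
    if not path:
        continue
    try:
        text = ocr_tiff(path)
        # One tab-separated key/value record per image, newlines flattened.
        print("%s\t%s" % (path, text.replace("\n", " ")))
    except Exception:
        # Hadoop Streaming counter syntax for failed conversions.
        sys.stderr.write("reporter:counter:ocr,failed,1\n")
```

The job would then be launched through the hadoop-streaming jar, shipping the script to the nodes with -files and pointing -input at a file listing the TIFF paths.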

Related

Machine learning for cancer diagnosis on an image dataset with Python

I am working on this project, assigned by my university as a final project. The issue is that I am not getting any help from the internet, so I thought asking here might solve the issue. I have read many articles, but they had no code or guidance, and I am confused about what to do. Basically, it is image-processing work with machine learning. A dataset can be found easily, but the issue is the Python learning algorithm and code.
I presume if it's your final project you have to create the program yourself rather than ripping it straight from the internet. If you want a good starting point which you can customise, TensorFlow from Google is very good. You'll want to understand how it works (i.e. how machine learning works), but as a first step there's a good example of image processing on the website in the form of number recognition (which is also the "Hello World" of machine learning).
https://www.tensorflow.org/get_started/mnist/beginners
This also provides a good intro to machine learning with neural nets: https://www.youtube.com/watch?v=uXt8qF2Zzfo
One note on TensorFlow: you'll probably have to use Python 3.5+, as in my experience it can be difficult to get it running on 2.7.
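For reference, here is roughly what that MNIST "Hello World" looks like; this sketch uses the newer tf.keras API rather than the exact code in the linked tutorial, so treat it as an illustration:

```python
# Minimal MNIST classifier, roughly the "Hello World" the linked tutorial
# walks through, written against the tf.keras API.
import tensorflow as tf

# Load the 28x28 grayscale digit images and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A single softmax layer on the flattened image, as in the beginners' guide.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))  # ~92% accuracy for this simple model
```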
First of all, I need to know what type of data you are using, because depending on your data (MRI, PET scan, or CT) there could be different suggestions for using machine learning in Python for detection.
However, supposing your main dataset consists of MR images, I am attaching an article which I found to be a great overview of different methods:
This project compares four different machine learning algorithms: Decision Tree, Majority, Nearest Neighbors, and Best Z-Score (an algorithm of my own design that is a slight variant of the Naïve Bayes algorithm)
https://users.soe.ucsc.edu/~karplus/abe/Science_Fair_2012_report.pdf
Here, breast cancer and colorectal cancer have been considered and the algorithms that performed best (Best Z-Score and Nearest Neighbors) used all features in classifying a sample. Decision Tree used only 13 features for classifying a sample and gave mediocre results. Majority did not look at any features and did worst. All algorithms except Decision Tree were fast to train and test. Decision Tree was slow, because it had to look at each feature in turn, calculating the information gain of every possible choice of cutpoint.
My solution:
The Lung Image Database Consortium provides an open-access dataset of lung cancer images.
Download it, then apply any machine learning algorithm to classify the images as containing tumor cells or not.
I have attached a link to a reference paper; they applied a neural network to classify the images.
For the coding part, use Python's OpenCV for image pre-processing and segmentation.
For the classification part, use whichever machine learning library you are comfortable working with (TensorFlow, Keras, Torch, scikit-learn, and many more) and perform the classification using whichever well-performing algorithm you wish.
That's it.
Link for Reference Journal
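As an illustration of the suggested OpenCV-plus-classifier pipeline, here is a minimal sketch; the folder layout, image size, threshold method, and choice of a random forest are placeholder assumptions, not values from the dataset or the reference paper:

```python
# Illustrative pipeline: OpenCV pre-processing/segmentation, then a
# scikit-learn classifier. Paths, image size, and classifier choice are
# hypothetical placeholders.
import glob

import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def preprocess(path, size=(64, 64)):
    """Load a scan, denoise it, segment bright regions, return a flat vector."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)
    img = cv2.GaussianBlur(img, (5, 5), 0)  # denoise
    _, mask = cv2.threshold(img, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # segment
    return (img * (mask // 255)).flatten()

# Hypothetical layout: images sorted into normal/ and tumor/ folders.
X, y = [], []
for label, folder in enumerate(["normal", "tumor"]):
    for path in glob.glob("data/%s/*.png" % folder):
        X.append(preprocess(path))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```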

How to create a PDF from HTML5 and CSS3 in Django?

I have a very simple use case: I want to create a PDF from an HTML file that I have.
Problem:
I have checked xhtml2pdf, but it does not work with CSS3 and gives errors while parsing Bootstrap files.
I am currently checking Prince (it is giving me some issues when I run it).
Has anyone faced such an issue, and have you been able to solve it?
There are surprisingly few options available when you are not prepared to pay for a commercial library. I had the same requirement from one of my clients, who did not want to pay for any third-party tools, so I had to make a plan. This is what I did; it is not the best solution, but it got the job done.
I downloaded the newest version of wkhtmltopdf. Unfortunately, the wkhtmltopdf tool did not display some of my Google graphs embedded in my HTML when converting to PDF. So I used the wkhtmltoimage tool, also included, to convert to a PNG, which worked as expected and displayed all the graphs.
I then downloaded the newest version of ImageMagick and converted the PNG to PDF.
I automated this process using C#. You should be able to do this using Python as well (please note that I have no knowledge of Python and could be wrong).
Unfortunately, this is not the most elegant solution, because you have to perform two conversions and do a bit of work to automate everything, but it is the best solution I could come up with that gave me the desired results and quality.
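For reference, the same two-step process is a few lines of Python with the subprocess module; this sketch assumes wkhtmltoimage and ImageMagick's convert are installed and on the PATH:

```python
# Automate the two-step conversion described above:
# HTML -> PNG with wkhtmltoimage, then PNG -> PDF with ImageMagick.
# Assumes both tools are installed and on the PATH.
import subprocess

def html_to_pdf(html_path, pdf_path, png_path="page.png"):
    # Step 1: render the page (including its charts) to a PNG.
    subprocess.check_call(["wkhtmltoimage", html_path, png_path])
    # Step 2: wrap the PNG in a PDF with ImageMagick.
    subprocess.check_call(["convert", png_path, pdf_path])

html_to_pdf("report.html", "report.pdf")
```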
Of course, there is lots of commercial software out there that will do a faster and better job.
Just a side note:
The web page that I had to convert was developed in HTML5 and CSS3 using version 3 of Bootstrap, and it contained some Google graphs and charts. Everything was converted without any problems.

Include map and reduce written in C/OpenCL in Hadoop

I have written my own map and reduce functions as OpenCL kernels, following the general MapReduce scenario that is incorporated in Hadoop, which is itself written in Java.
How can I use my own map and reduce code written in C/OpenCL with Hadoop on a multi-node cluster?
I have already asked this type of question before but didn't get any response. Any link to a tutorial would be useful.
I am willing to read on my own; I just can't find any resources pertaining to this. Any kind of help would be appreciated. Thank you for your time and concern.
HDFS is a file system; you can use it from any language.
Because HDFS data is distributed over multiple machines and highly available, it is well suited to feeding data into GPU computing.
For more information, see Hadoop Streaming.
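Hadoop Streaming runs any executable as the mapper or reducer: it feeds input records on stdin and expects tab-separated key/value lines on stdout, so a compiled C/OpenCL binary only has to follow that contract. Here is a minimal sketch of the contract, shown in Python for readability (a native binary would do the same I/O):

```python
#!/usr/bin/env python
# Hadoop Streaming contract, illustrated with a word-count mapper/reducer.
# A compiled C/OpenCL binary can replace either role as long as it reads
# records from stdin and writes "key<TAB>value" lines to stdout.
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

def reducer():
    # Streaming delivers mapper output sorted by key, so equal keys arrive
    # contiguously and can be summed with one running counter.
    current, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t", 1)
        if key != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = key, 0
        count += int(value)
    if current is not None:
        print("%s\t%d" % (current, count))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

The job is then launched with the hadoop-streaming jar, passing your executables via -mapper and -reducer and shipping them to the nodes with -files.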

Open source concept mining tools?

Are there any open-source concept mining tools available today? I have only come across tools like Leximancer, which, although it seems to fit the role, is not open source and is quite expensive for an undergraduate student. I have been unsuccessful so far, since the word 'concept' on both Google and Google Scholar seems to match things other than what I want.
It seems to me you need a text mining tool for clustering. RapidMiner has an open-source, Java-based Community Edition which has several extensions (Text Mining, R, etc.). In addition, you can develop and integrate your own algorithms too.
Moreover, Rexer Analytics runs a comprehensive data mining survey annually; you can request the reports for free.
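If you end up rolling your own instead, basic text clustering is only a few lines with scikit-learn; this toy sketch (the documents and the choice of k=2 are placeholders) shows a standard TF-IDF-plus-k-means approach rather than anything RapidMiner-specific:

```python
# Toy text-clustering sketch with scikit-learn: TF-IDF vectors + k-means.
# The documents and n_clusters=2 are placeholders for illustration.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "stock markets fell sharply on inflation fears",
    "the central bank raised interest rates again",
    "the striker scored twice in the cup final",
    "the home team won the league championship",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for doc, label in zip(docs, labels):
    print(label, doc)
```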

Getting the amplitude (or RMS voltage) of an audio signal captured in C++ with the waveIn library?

I am working on a very basic robotics project and wish to implement voice recognition in it.
I know it's a complex thing, but I wish to do it for only 3 or 4 commands (or words).
I know that using waveIn I can record audio, but I wish to do real-time amplitude analysis on the audio signal. How can that be done? The wave will be input as 8-bit, mono.
I have thought of dividing the signal into sets of some specific duration, further dividing them into smaller subsets, getting the average RMS value over each subset, summing them up, and then seeing how much they differ from the actual stored signal. If the error is below the accepted value for all (or most) of the sets, then print the word.
How can this be implemented?
If you can give me any other suggestion as well, that would be great.
Thanks in advance.
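For what it's worth, here is a minimal sketch of the windowed-RMS comparison described in the question, in Python rather than C++ for brevity; the window size and tolerance are arbitrary placeholders, and 8-bit mono samples are assumed to be unsigned and centred at 128:

```python
# Windowed-RMS template matching as described in the question.
# 8-bit mono PCM samples are unsigned bytes centred at 128.
import numpy as np

def rms_profile(samples, window=256):
    """Split the signal into fixed-size windows and return each window's RMS."""
    x = samples.astype(np.float64) - 128.0  # recentre 8-bit audio at 0
    n = len(x) // window * window
    frames = x[:n].reshape(-1, window)
    return np.sqrt((frames ** 2).mean(axis=1))

def matches(signal, template, window=256, tolerance=0.2):
    """True if the signal's RMS profile stays within `tolerance` (relative)
    of the stored template's profile for every window."""
    a, b = rms_profile(signal, window), rms_profile(template, window)
    m = min(len(a), len(b))
    error = np.abs(a[:m] - b[:m]) / (b[:m] + 1e-9)
    return bool((error < tolerance).all())
```

As the answers below explain, an amplitude envelope alone is usually too crude to tell words apart, but this is the shape of the computation being proposed.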
There is no simple way to recognize words, because they are basically sequences of phonemes which can vary in time and frequency.
Classical isolated-word recognition systems use the signal's MFCCs (cepstral coefficients) as input data and try to recognize patterns using HMM (hidden Markov model) or DTW (dynamic time warping) algorithms.
You will also need a silence-detection module if you don't want a record button.
For instance, the University of Edinburgh's toolkit provides some of these tools (with good documentation).
If you don't want to build it from scratch, or want a source of inspiration, here is an (old but free) implementation of such a system (which uses its own toolkit), with a full explanation and practical examples of how it works.
That system is an LVCSR (Large-Vocabulary Continuous Speech Recognition) system, and you only need a subset of it. If someone knows of an open-source reduced-vocabulary system (like a simple IVR), it would be welcome.
If you want to build a basic system of your own, I recommend using MFCC and DTW (a minimal sketch follows this list):
For each target word to model:
record some instances of the word
compute delta-MFCCs (e.g. every 10 ms) across the word to form a model
When you want to recognize a signal:
compute the delta-MFCCs of this signal
use DTW to compare these delta-MFCCs to each modeled word's delta-MFCCs
output the word that fits best (use a threshold to drop garbage)
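A compact sketch of that recipe, assuming the librosa package for MFCC extraction (hop_length=160 gives one frame every 10 ms at 16 kHz); the DTW here is a plain textbook implementation, and the rejection threshold is an arbitrary placeholder:

```python
# MFCC + DTW isolated-word recognition, following the recipe above.
# Assumes librosa is installed; hop_length=160 is 10 ms at 16 kHz.
import numpy as np
import librosa

def word_features(path, sr=16000):
    """Delta-MFCC sequence for one recording (frames x coefficients)."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=160)
    return librosa.feature.delta(mfcc).T

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(path, models, threshold=1e4):
    """models maps word -> list of feature sequences from recorded instances."""
    feats = word_features(path)
    best_word, best = None, np.inf
    for word, instances in models.items():
        d = min(dtw_distance(feats, inst) for inst in instances)
        if d < best:
            best_word, best = word, d
    return best_word if best < threshold else None  # threshold drops garbage
```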
If you just want to recognize a few commands, there are many commercial and free products you can use. See Need text to speech and speech recognition tools for Linux, What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?, or Speech Recognition on iPhone. The answers to these questions link to many available products and tools. Speech recognition and understanding of a list of commands is a very common problem, solved commercially; many of the voice-automated phone systems you call use this type of technology, and the same technology is available to developers.
From watching these questions for a few months, I've seen most developer choices break down like this:
Windows folks - use the System.Speech features of .NET or Microsoft.Speech and install the free recognizers Microsoft provides. Windows 7 includes a full speech engine, and others are downloadable for free. There is a C++ API to the same engines, known as SAPI. See http://msdn.microsoft.com/en-us/magazine/cc163663.aspx or http://msdn.microsoft.com/en-us/library/ms723627(v=vs.85).aspx
Linux folks - Sphinx seems to have a good following. See http://cmusphinx.sourceforge.net/ and http://cmusphinx.sourceforge.net/wiki/
Commercial products - Nuance, Loquendo, AT&T, others
Online service - Nuance, Yapme, others
Of course this may also be helpful - http://en.wikipedia.org/wiki/List_of_speech_recognition_software