Amazon Rekognition EyesOpen errors - amazon-web-services

I'm looking to use Amazon Rekognition to detect whether an eye is closed or not.
It works perfectly except when testing on people who wear eyeglasses, which (oddly enough) gives false results with a high confidence level. Note that I'm using high-resolution images to test this. Is there anything that can be done to fix this?
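For reference, a minimal boto3 sketch of the kind of call involved (the file name and region are placeholders). DetectFaces also reports an Eyeglasses attribute, so one thing worth checking is whether EyesOpen only misbehaves on faces where Eyeglasses is detected:

    import boto3

    # Region and file name below are placeholders for illustration.
    client = boto3.client('rekognition', region_name='us-east-1')

    with open('face.jpg', 'rb') as f:
        response = client.detect_faces(
            Image={'Bytes': f.read()},
            Attributes=['ALL'],  # required to get EyesOpen, Eyeglasses, etc.
        )

    for face in response['FaceDetails']:
        eyes = face['EyesOpen']
        glasses = face['Eyeglasses']
        print("EyesOpen=%s (%.1f%%), Eyeglasses=%s (%.1f%%)" % (
            eyes['Value'], eyes['Confidence'],
            glasses['Value'], glasses['Confidence']))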

Error log when running GPU on Google Cloud ML

I'm trying to use Google Cloud ML in GPU mode.
When I train with the BASIC_GPU tier I get many errors in the log, but training still runs to completion.
I am not sure whether training actually happened on the GPU.
[screenshot of the error log]
[screenshot of part of the config.log_device_placement output]
I also tried training with the complex_model_m_gpu tier and got error logs similar to BASIC_GPU.
But I can't see gpu:/1, gpu:/2, or gpu:/3 when I print config.log_device_placement; I can only see gpu:/0.
The important thing is that BASIC_GPU and complex_model_m_gpu have the same running time.
I wonder whether training really worked in GPU mode or whether something is wrong.
Sorry for my English; if anyone knows the problem, please help me.
Thank you.
Please refer to the GPU optimization section of TensorFlow's performance guide for tips on how to make the most of your GPUs.
A couple of things to note:
You can turn on logging of device placement to see which ops get assigned to which devices. This is a great way to check that ops are actually assigned to GPUs and that you are using all of the GPUs when you have more than one.
TensorBoard should also provide information about device placement so that is another way to check that you are using all GPUs.
When using multiple GPUs, you need to make sure you are assigning ops to all GPUs. The TensorFlow guide provides more information on this topic.
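As a rough illustration (TF 1.x API, matching the Cloud ML Engine scale tiers above; the ops themselves are placeholders, not your model), this is the kind of script where you both log device placement and explicitly pin ops to each of the four GPUs that complex_model_m_gpu provides:

    import tensorflow as tf  # TensorFlow 1.x style, as used with these scale tiers

    # Log placement so the job output shows which device each op landed on;
    # soft placement falls back gracefully if a device is missing.
    config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)

    towers = []
    for i in range(4):  # complex_model_m_gpu exposes four GPUs
        with tf.device('/gpu:%d' % i):
            # Placeholder op; in a real job this would be one tower of the model.
            a = tf.constant([1.0, 2.0], name='a_%d' % i)
            towers.append(a * 2.0)

    total = tf.add_n(towers)

    with tf.Session(config=config) as sess:
        print(sess.run(total))

If the placement log only ever mentions gpu:/0 and the multi-GPU tier is no faster than BASIC_GPU, the training code is most likely not assigning ops to the other devices.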

Recognition of an animal in pictures

I am facing a challenging problem. In the courtyard of the company I work for there is a camera trap that takes a photo of every movement. Some of these pictures show different kinds of animals (mostly dark gray mice) that cause damage to our cable system. My idea is to use some application that could recognize whether or not there is a gray mouse in the picture, ideally in real time. So far we have developed a solution that sends an alarm for every movement, but most of the alarms are false. Could you give me some information about possible ways to solve this problem?
In technical parlance, what you describe above is often called event detection. I know of no ready-made approach to solve all of this at once, but with a little bit of programming you should be all set even if you don't want to code any computer vision algorithms or some such.
The high-level pipeline would be:
Making sure that your video is of sufficient quality. Gray mice sound kind of tough, plus the pictures are probably taken at night, so you should have sufficient infrared lighting, etc. But if a human can tell whether an alarm is false or true, you should be fine.
Deploying motion detection and taking snapshot images at the time of movements. It seems like you have this part already worked out, great! Detailing your setup could benefit others. You may also need to crop only the area in motion from the image, are you doing that?
Building an archive of images, including your decision of whether they are false or true alarm (labels in machine learning parlance). Try to gather at least a few tens of example images for both cases, and make them representative of real-world variations (do you have the problem during daytime as well? is there snowfall in your region?).
Classifying the images taken from the video-stream snapshots to check whether each one is a false alarm or contains bad critters eating cables. This sounds tough, but deep learning and machine learning are advancing by leaps and bounds; you can either:
deploy your own neural network built in a framework like Caffe or TensorFlow (but you will likely need a lot of examples, at least tens of thousands I'd say); see the sketch just after this list
use an image classification API that recognizes general objects, like Clarifai or Imagga - if you are lucky, it will notice that the snapshots show a mouse or a squirrel (do squirrels chew on cables?), but on a specialized task like this one these engines are likely to get pretty confused!
use a custom image classification API service, which is typically even more powerful than rolling your own neural network since it can use a lot of tricks to sort out these images even if you give it just a small number of examples for each image category (false / true alarm here); vize.it is a perfect example of that (can anyone contribute more such services?).
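To make the first option concrete, here is a minimal transfer-learning sketch in TensorFlow/Keras; it is not the answerer's setup, and the directory layout (snapshots/train/{alarm,no_alarm}), image size, and training settings are illustrative assumptions:

    import tensorflow as tf

    # Pretrained backbone; only a small classification head is trained on top.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights='imagenet')
    base.trainable = False

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1,  # MobileNetV2 expects [-1, 1]
                                  input_shape=(224, 224, 3)),
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation='sigmoid'),    # true alarm vs. false alarm
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    # Expects labelled folders such as snapshots/train/alarm and snapshots/train/no_alarm.
    train = tf.keras.utils.image_dataset_from_directory(
        'snapshots/train', label_mode='binary', image_size=(224, 224), batch_size=16)
    model.fit(train, epochs=10)

With a frozen pretrained backbone, a few hundred labelled snapshots per class is often enough for a usable classifier, far fewer than training a network from scratch would need.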
The real-time aspect is a bit open-ended, as neural networks take some time to process an image (and you also need to account for data transfer, etc., when using a public API). If you roll your own, you will need to spend a lot of effort to get low latency, since the frameworks are optimized by default for throughput (batch prediction). Generally, if you are happy with ~1 s latency and have a good internet uplink, you should be fine with any service.
Disclaimer: I'm one of the co-creators of vize.it.
How about getting a cat?
Also, you could train your own custom classifier using the IBM Watson Visual Recognition service. (demo: https://visual-recognition-demo.mybluemix.net/train ) It's free to try and you just need to supply example images for the different categories you want to identify. Overall, Petr's answer is excellent.

Can I create a somewhat complex Mechanical Turk HIT without much programming experience?

I have a task that seems well-suited to MTurk. I've never before used the service, however, and despite reading through some of the documentation I'm having a difficult time judging how hard it would be to set up a task. I'm a strong beginner or weak intermediate in R. I've messed around with a project that involved a little understanding of XML. Otherwise, I have no programming or web development skills (I'm a statistician/epidemiologist). I'm hoping someone can give me an idea of what would be involved in creating my task so I can decide if it is worth the effort to learn how to create a HIT.
Essentially, I have recurring projects that require many graphs to be digitized (i.e. go from images to x,y coordinates). The automatic digitization software that I've tried isn't great for this task because some of the graphs are from old journal articles and they have gray-scale lines that cross each other multiple times. Figuring out which line is which requires a little human judgement. Workflow for the HIT would be to have each Mturker:
Download a properly named empty Excel workbook.
Download a JPEG of the graphs.
Download a free plot digitization program.
Open the graph in the plot digitization software, calibrate the axes, trace the outline of each curve, paste the coordinates into the corresponding Excel workbook that I have given them, extract some numbers off the graph into a second sheet of the same workbook.
Send me the Excel files.
I'd have these done in duplicate to make sure that there is acceptable agreement between the two Mturkers who did each graph.
Is this a reasonable task to accomplish via Mechanical Turk? If so, can a somewhat intelligent person who isn't a programmer/web developer pull it off? I've poked around the internet a bit but I still can't tell if I just haven't found the right resource to teach me how to do this or if I'd need 5 years of experience as a web developer to pull it off. Thanks.
No, this really isn't a task for Mechanical Turk at all. Not only are you requiring workers to download a bunch of software, which they won't do, but the task is way too complex for them to be confident they are doing it right and will get paid. Pay is binary, so they could go through all that for nothing.
You are also probably violating the terms of service if they have to divulge personal info to use the programs.
If you have a continuous need for this then MAYBE you can pre-qualify people by creating a qualification on the service and then using just those workers.
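If you do go the qualification route, it is scriptable with the AWS SDK; a small sketch below (the qualification name and description are placeholders, and you would still grant it to individual workers by hand or after a test HIT):

    import boto3

    # MTurk's API endpoint lives in us-east-1.
    mturk = boto3.client('mturk', region_name='us-east-1')

    # Create a qualification you can attach to your HITs so that only
    # pre-approved workers see them. Name/description are placeholders.
    qual = mturk.create_qualification_type(
        Name='Plot digitization - trained workers',
        Description='Granted to workers who correctly digitized a sample graph.',
        QualificationTypeStatus='Active',
    )
    print(qual['QualificationType']['QualificationTypeId'])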

IoHub for eye tracking

Can ioHub be used with Eye Tribe glasses?
Using ioHub, is it possible to detect saccades, blinks, and fixations?
I have not used the EyeTribe (and don't use ioHub), but we do use PsychoPy for eye tracking, and for us we just connect to the eye tracker using TCP/IP communication from Python itself. The EyeTribe website, under the API section, says:
If you favor any other programming language the open API relies on the standard TCP/IP protocol. If it can open a socket and parse strings then you’re covered.
And you can certainly do that in Python, so if you want to use these glasses I am sure you can, but it will likely require some work on your part. I had an undergrad who got a Mirametrix tracker talking to Python/Psychopy and he left some of his experience detailed here: https://brittlab.uwaterloo.ca/research-tips/
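To give an idea of what that socket work looks like, here is a rough Python sketch; I'm assuming the EyeTribe server's JSON-over-TCP protocol and its default localhost port, so check the API documentation for the exact message format before relying on this:

    import json
    import socket

    # The EyeTribe server listens on localhost; the port below is its
    # documented default, but verify it against the API docs.
    sock = socket.create_connection(('localhost', 6555))

    # Ask the tracker for the latest gaze frame.
    request = {"category": "tracker", "request": "get", "values": ["frame"]}
    sock.sendall(json.dumps(request).encode('utf-8'))

    reply = json.loads(sock.recv(65536).decode('utf-8'))
    print(reply)  # gaze coordinates, timestamps, etc.
    sock.close()

As far as I know you would implement saccade/blink/fixation detection yourself on top of the raw gaze samples (e.g. simple velocity or dispersion thresholds), unless the tracker's API already flags fixations for you.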
But make sure that the technical specifications (60 Hz sampling and a 20 ms latency) will be sufficient for your needs. I find that with eye trackers, what you pay for is what you get. There is another nice open-source option with a higher-speed camera and a very supportive development team at http://pupil-labs.com/pupil/
Good luck! Please add a comment later on your experience if you get these to work, because a lot of us, like you, are looking for cheaper tracking options.

Audio Subtitle Transcription - C++

I'm on a project that, among other video-related tasks, should eventually be capable of extracting the audio of a video, applying some kind of speech recognition to it, and getting a transcribed text of what's said in the video. Ideally it should output some kind of subtitle format so that the text is linked to a certain point in the video.
I was thinking of using the Microsoft Speech API (aka SAPI), but from what I could see it is rather difficult to use. The very few examples that I found for speech recognition (most are for text-to-speech, which is much easier) didn't perform very well (they don't recognize a thing). For example, this one: http://msdn.microsoft.com/en-us/library/ms717071%28v=vs.85%29.aspx
Some examples use something called grammar files that are supposed to define the words the recognizer is waiting for, but since I haven't trained Windows Speech Recognition thoroughly, I think that might be skewing the results.
So my question is... what's the best tool for something like this? Could you suggest both paid and free options? The best "free" option (as it comes with Windows) is, I believe, SAPI; all the rest would presumably be paid, but if they are really good it might be worth it. Also, if you have any good tutorials for using SAPI (or another API) in a context similar to this, that would be great.
On the whole this is a big ask!
The issue with any speech recognition system is that it functions best after training. It needs context (what words to expect) and some kind of audio benchmark (what each voice sounds like). This might be possible in some cases, such as a TV series, if you wanted to churn through hours of speech (separated for each character) to train it. There's a lot of work there, though. For something like a film there's probably no hope of training a recogniser unless you can get hold of the actors.
Most film and TV production companies just hire media companies to transcribe the subtitles based on either direct transcription using a human operator, or converting the script. The fact that they still need humans in the loop for these huge operations suggests that automated systems just aren't up to it yet.
In video you have a plethora of things that make your life difficult, pretty much spanning huge swathes of current speech technology research:
-> Multiple speakers -> "Speaker Identification" (can you tell characters apart? Also, subtitles normally have different coloured text for different speakers)
-> Multiple simultaneous speakers -> The "cocktail party problem" - can you separate the two voice components and transcribe both?
-> Background noise -> Can you pick the speech out from any soundtrack/foley/exploding helicopters?
The speech algorithm will need to be extremely robust as different characters can have different gender/accents/emotion. From what I understand of the current state of recognition you might be able to get a single speaker after some training, but asking a single program to nail all of them might be tough!
--
There is no "subtitle" format that I'm aware of. I would suggest saving an image of the text using a font like Tiresias Screenfont that's specifically designed for legibility in these circumstances, and use a lookup table to cross-reference images against video timecode (remembering NTSC/PAL/Cinema use different timing formats).
--
There's a bunch of proprietary speech recognition systems out there. If you want the best you'll probably want to license a solution off one of the big boys like Nuance. If you want to keep things free the universities of RWTH and CMU have put some solutions together. I have no idea how good they are or how well they might be suited to the problem.
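On the free side, the quickest way to gauge how CMU's recogniser copes with this kind of material is a small experiment; the sketch below is Python rather than C++ purely because it is less setup (it uses the SpeechRecognition wrapper around CMU's PocketSphinx, and the WAV file name is a placeholder):

    import speech_recognition as sr  # pip install SpeechRecognition pocketsphinx

    r = sr.Recognizer()

    # Audio previously extracted from the video (e.g. with ffmpeg) as a WAV file.
    with sr.AudioFile('extracted_audio.wav') as source:
        audio = r.record(source)

    try:
        print(r.recognize_sphinx(audio))  # offline CMU PocketSphinx decoding
    except sr.UnknownValueError:
        print("Sphinx could not understand the audio")

If the word error rate on a short, clean clip is already poor, that is a fairly good sign the full task needs the heavier, trained systems discussed above.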
--
The only solution I can think of similar to what you're aiming at is the subtitling you can get on news channels here in the UK "Live Closed Captioning". Since it's live, I assume they use some kind of speech recognition system trained to the reader (although it might not be trained, I'm not sure). It's got better over the past few years, but on the whole it's still pretty poor. The biggest thing it seems to struggle with is speed. Dialogue is normally really fast, so live subtitling has the extra issue of getting everything done in time. Live closed captions quite frequently get left behind and have to miss a lot of content out to catch up.
Whether you have to deal with this depends on whether you'll be subtitling "live" video or if you can pre-process it. To deal with all the additional complications above I assume you'll need to pre-process it.
--
As much as I hate citing the big W, there's a goldmine of useful links here!
Good luck :)
This falls into the category of dictation, which is a very large vocabulary task. Products like Dragon NaturallySpeaking are amazingly good, and it has a SAPI interface for developers. But it's not so simple a problem.
Normally a dictation product is meant to be single speaker and the best products adapt automatically to that speaker, thereby improving the underlying acoustic model. They also have sophisticated language modeling which serves to constrain the problem at any given moment by limiting what is known as the perplexity of the vocabulary. That's a fancy way of saying the system is figuring out what you're talking about and therefore what types of words and phrases are likely or not likely to come next.
It would be interesting though to apply a really good dictation system to your recordings and see how well it does. My suggestion for a paid system would be to get Dragon Naturally Speaking from Nuance and get the developer API. I believe that provides a SAPI interface, which has the benefit of allowing you to swap in the Microsoft speech or any other ASR engine that supports SAPI. IBM would be another vendor to look at but I don't think you will do much better than Dragon.
But it won't work well! After all the work of integrating the ASR engine, what you will probably find is that you get a pretty high error rate (maybe half). That would be due to a few major challenges in this task:
1) multiple speakers, which will degrade the acoustic model and adaptation.
2) background music and sound effects.
3) mixed speech - people talking over each other.
4) lack of a good language model for the task.
For 1), if you had a way of separating each actor onto a separate track, that would be ideal. But there's no reliable way of separating speakers automatically that would be good enough for a speech recognizer. If each speaker were at a distinctly different pitch, you could try pitch detection (there is some free software out there for that) and separate based on that, but this is a sophisticated and error-prone task. The best thing would be hand-editing the speakers apart, but you might as well just manually transcribe the speech at that point! If you could get the actors on separate tracks, you would need to run the ASR using different user profiles.
For music (2), you'd either have to hope for the best or try to filter it out. Speech is more band-limited than music, so you could try a bandpass filter that attenuates everything except the voice band. You would want to experiment with the cutoffs, but I would guess 100 Hz to 2-3 kHz would keep the speech intelligible.
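For what it's worth, a quick band-pass experiment along those lines is only a few lines with SciPy; a rough sketch (the file name is a placeholder and the 100 Hz to 3 kHz cutoffs are just the guess from above):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfiltfilt

    # Audio extracted from the video as a WAV file (placeholder name).
    rate, samples = wavfile.read('extracted_audio.wav')
    samples = samples.astype(np.float64)

    # 4th-order Butterworth band-pass keeping roughly the speech band.
    sos = butter(4, [100, 3000], btype='bandpass', fs=rate, output='sos')
    filtered = sosfiltfilt(sos, samples, axis=0)

    wavfile.write('filtered_audio.wav', rate, filtered.astype(np.int16))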
For (3), there's no solution. The ASR engine should return confidence scores so at best I would say if you can tag low scores, you could then go back and manually transcribe those bits of speech.
(4) is a sophisticated task for a speech scientist. Your best bet would be to search for an existing language model made for the topic of the movie. Talk to Nuance or IBM, actually. Maybe they could point you in the right direction.
Hope this helps.