I recently collected a decent chunk of data by filming the readout of a digital scale by hand on my phone. The mass changes over time, and I need to look at the relationship there. The equipment I had was relatively limited, which is why I was not able to connect the scale directly to a data logger.
I would very much prefer not to have to manually go through every second of every video to log the data, as it would be a very repetitive process.
Thus, I was hoping computer vision would be a good alternative, but I have no idea how to go about it. Would anyone happen to know a program or tool I could use or create that can just read these numbers from the video and then record them with their timestamps, possibly as a .csv file?
I'd be willing to learn about computer vision or AI to do so myself as well, as I am interested in this area, but simply don't have experience in it, so any advice or tools would be greatly appreciated.
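To make the goal concrete, I imagine the pipeline would look roughly like the sketch below, using OpenCV to grab frames and pytesseract for the digit OCR. I haven't tried this, and the crop box, sampling rate and filenames are placeholders. (I gather seven-segment displays can trip up general-purpose OCR, so that part may need a different approach.)

```python
# Rough, untested sketch: sample one frame per second, OCR the scale's display,
# and write (timestamp, reading) rows to a CSV. Crop box and filenames are placeholders.
import csv
import cv2
import pytesseract

cap = cv2.VideoCapture("scale_video.mp4")        # placeholder filename
fps = cap.get(cv2.CAP_PROP_FPS)
rows, frame_idx = [], 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:                # roughly one sample per second
        display = frame[100:200, 300:500]        # crop to the scale's readout (placeholder box)
        gray = cv2.cvtColor(display, cv2.COLOR_BGR2GRAY)
        _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(
            thresh, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
        rows.append((frame_idx / fps, text.strip()))
    frame_idx += 1

cap.release()
with open("mass_readings.csv", "w", newline="") as f:
    csv.writer(f).writerows([("time_s", "mass")] + rows)
```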
Related
I have a mini project for my TensorFlow course this semester, and the topic is open-ended. Since I have some background in Convolutional Neural Networks, I intend to use them for my project. My computer can only run the CPU version of TensorFlow.
However, as a newbie, I realize that there are a lot of topics, such as MNIST, CIFAR-10, etc., so I don't know which one to pick. I only have two weeks left. It would be great if the topic is neither too complicated nor too easy, so that it matches my intermediate level.
In your experience, could you give me some advice about which specific topic I should do for my project?
Moreover, it would be better if the topic allowed me to provide my own data to test my trained model, because my professor said that is a plus point toward an A grade on the project.
Thanks in advance,
I think that to answer this question you need to properly evaluate the marking criteria for your project. However, I can give you a brief overview of what you've just mentioned.
MNIST: MNIST is an optical character recognition task for individual digits 0-9 in 28px-square images. This is considered the "Hello World" of CNNs. It's pretty basic and might be too simplistic for your requirements; hard to gauge without more information. Nonetheless, it will run pretty quickly with CPU TensorFlow and the online tutorial is pretty good (there's a minimal sketch at the end of this answer).
CIFAR-10: CIFAR-10 is a harder dataset of objects such as animals and vehicles. The images are 32px square, so individual image processing isn't too bad, but the dataset is large and your CPU might struggle with it; it takes a long time to train. You could try training on a reduced dataset, but I don't know how that would go. Again, it depends on your course requirements.
Flowers-Poets: There is the TensorFlow for Poets re-training example, which might not be suitable for your course, but you could use the flowers dataset to build your own model.
Build-your-own-model: You could use tf.layers to build your own network and experiment with it; tf.layers is pretty easy to use. Alternatively you could look at the new Estimators API, which will automate a lot of the training process for you. There are a number of tutorials (of varying quality) on the TensorFlow website.
I hope that helps give you a run-down of what's out there. Other datasets to look at are PASCAL VOC and ImageNet (however, they are huge!). Models to experiment with may include VGG-16 and AlexNet.
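If you do go the MNIST route, a minimal version in tf.keras looks something like the sketch below (the same network can be written with tf.layers). The layer sizes and epoch count are just reasonable defaults, not tuned values, and it trains in a few minutes on CPU.

```python
# Minimal MNIST CNN sketch with tf.keras; settings are illustrative, not tuned.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```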
I have a task that seems well-suited to Mturk. I've never used the service before, however, and despite reading through some of the documentation I'm having a difficult time judging how hard it would be to set up a task. I'm a strong beginner or weak intermediate in R. I've messed around with a project that involved a little understanding of XML. Otherwise, I have no programming or web development skills (I'm a statistician/epidemiologist). I'm hoping someone can give me an idea of what would be involved in creating my task so I can decide if it is worth the effort to learn how to create a HIT.
Essentially, I have recurring projects that require many graphs to be digitized (i.e. go from images to x,y coordinates). The automatic digitization software that I've tried isn't great for this task because some of the graphs are from old journal articles and they have gray-scale lines that cross each other multiple times. Figuring out which line is which requires a little human judgement. Workflow for the HIT would be to have each Mturker:
Download a properly named empty Excel workbook.
Download a JPEG of the graphs.
Download a free plot digitization program.
Open the graph in the plot digitization software, calibrate the axes, trace the outline of each curve, paste the coordinates into the corresponding Excel workbook that I have given them, extract some numbers off the graph into a second sheet of the same workbook.
Send me the Excel files.
I'd have these done in duplicate to make sure that there is acceptable agreement between the two Mturkers who did each graph.
Is this a reasonable task to accomplish via Mechanical Turk? If so, can a somewhat intelligent person who isn't a programmer/web developer pull it off? I've poked around the internet a bit but I still can't tell if I just haven't found the right resource to teach me how to do this or if I'd need 5 years of experience as a web developer to pull it off. Thanks.
No, this really isn't a task for Mechanical Turk at all. Not only are you requiring workers to download a bunch of stuff, which they won't do, but the task is way too complex for them to have confidence that they are doing it right and will get paid. Pay is binary, so they could go through all that for nothing.
You are also probably violating the terms of service if they have to divulge personal info for the programs.
If you have a continuous need for this, then MAYBE you can prequalify people by creating a qualification on the service and then using just those workers.
I need some advice on an idea that I've had for a uni project.
I was wondering if it's possible to split an audio file into different "streams" from different audio sources.
For example, split the audio file into: engine noise, train noise, voices, different sounds that are not there all the time, etc.
I wouldn't necessarily need to do this from a programming language (although that would be ideal); doing it manually with some sound processing software like Sound Forge would also work. I need to know if this is possible first, though. I know nothing about sound processing.
After the first stage is complete (separating the sounds), I want to determine if one of the processed sounds exists in another audio recording. The purpose would be sound detection. For (an ideal) example, take the car engine sound, match it against another file, and determine whether or not that audio is a recording of a car's engine. It doesn't need to be THAT precise; I guess detecting a sound that is not constant, like a honk, would be alright as well.
I will do the programming part, I just need some pointers on what to look for (software, math, etc). As I am no sound expert, this would really be an interesting project, if it's possible.
Thanks.
This problem of splitting sounds based on source is known in research as (Audio) Source Separation or Audio Signal Separation. If there is no more information about the sound sources or how they have been mixed, it is a Blind Source Separation problem. There are hundreds of papers on these topics.
However, for the purpose of sound detection, it is not typically necessary to separate sounds at the audio level. Very often one can (and will) do detection on features computed from the mixed signal. Search the literature for Acoustic Event Detection and Acoustic Event Classification.
For an introduction to the subject, check out a book like Computational Analysis of Sound Scenes and Events.
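To give a feel for the feature-based route, here is a small sketch (just an illustration, not from the book) that summarises clips with MFCC features from librosa and feeds them to a scikit-learn classifier; the file names and labels are placeholders for your own labelled clips.

```python
# Sketch of detection on the mixed signal: summarise each clip with MFCC
# statistics and train a simple classifier. Filenames and labels are placeholders.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Summarise over time so every clip yields a fixed-length vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

train_files = ["engine_01.wav", "honk_01.wav", "street_01.wav"]   # placeholder clips
train_labels = ["engine", "honk", "other"]

X = np.array([clip_features(f) for f in train_files])
clf = RandomForestClassifier(n_estimators=100).fit(X, train_labels)

print(clf.predict([clip_features("unknown_clip.wav")]))           # placeholder test clip
```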
It's extremely difficult to do automated source separation from a single audio stream. Your brain is uncannily good at this task, and it also benefits from a stereo signal.
For instance, voice is full of signals that aren't there all the time. Car noise has components that are quite stationary, but gear changes are outliers.
Unfortunately, there are no simple answers.
Correlate reference signals against the audio stream. Correlation can be done efficiently using FFTs. The output of the correlation calculation can be thresholded and 'debounced' in time for signal identification.
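As a rough illustration of that idea (not a full solution), the sketch below cross-correlates a short reference sound against a longer recording using scipy's FFT-based convolution. It assumes mono WAV files at the same sample rate; the filenames and the threshold are placeholders you would have to tune.

```python
# Sketch: FFT-based cross-correlation of a reference sound against a recording,
# followed by a crude threshold. Assumes mono WAVs; filenames/threshold are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate_ref, ref = wavfile.read("honk_reference.wav")    # placeholder reference clip
rate_rec, rec = wavfile.read("street_recording.wav")  # placeholder recording
assert rate_ref == rate_rec, "sample rates must match"

ref = ref.astype(np.float64)
rec = rec.astype(np.float64)
ref /= np.linalg.norm(ref) or 1.0                     # normalise the template

# Cross-correlation = convolution with the time-reversed template; fftconvolve
# computes this efficiently via FFTs.
corr = fftconvolve(rec, ref[::-1], mode="valid")

threshold = 0.6 * corr.max()                          # crude starting point, tune on real data
hits = np.flatnonzero(corr > threshold)
print("candidate match positions (samples):", hits[:10])
```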
I am really passionate about the machine learning, data mining and computer vision fields, and I was thinking of taking things a little bit further.
I was thinking of buying a LEGO Mindstorms NXT 2.0 robot to experiment with machine learning, computer vision and robotics algorithms, in order to better understand several existing concepts.
Would you encourage me to do so? Do you recommend any other alternative for a practical approach to understanding these fields at an acceptable price (around 200-250 pounds)? Are there any mini robots which I can buy and experiment with?
If your interests are machine learning, data mining and computer vision, then I'd say a LEGO Mindstorms is not the best option for you. Not unless you are also interested in robotics/electronics.
To do interesting machine learning you only need a computer and a problem to solve. Think ai-contest or mlcomp or similar.
To do interesting data mining you need a computer, a lot of data and a question to answer. If you have an internet connection, the amount of data you can get at is limited only by your bandwidth. Think Netflix Prize, or try your hand at collecting and interpreting data from wherever. If you are learning, this is a nice place to start.
As for computer vision: all you need is a computer and images. Depending on the type of problem you find interesting, you could do some processing of random webcam images, or take all your holiday photos and try to detect where your travel companions are in them (see the face-detection sketch at the end of this answer). If you have a webcam your options are endless.
LEGO Mindstorms allows you to combine machine learning and computer vision. I'm not sure where the data mining would come in, and you will spend (waste?) time on the robotics/electronics side of things, which you don't list as one of your passions.
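For a sense of how little hardware that kind of project needs, here is a rough sketch of face detection on a photo using OpenCV's bundled Haar cascade; the photo filename is a placeholder.

```python
# Sketch: mark faces in a photo with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("holiday_photo.jpg")                 # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_marked.jpg", img)
print(f"found {len(faces)} face(s)")
```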
Well, I would take a look at the iRobot Create... well within your budget, and very robust.
Depending on your age, you may not want to be seen with a "lego robot" if you are out of college :-)
Anyway, I buy the Creates in batches for my lab. You can link to them with a hard cable (cheap) or put a Bluetooth interface on them.
Put a webcam on that puppy, hook it up to a multicore machine, and you have an awesome working robot for the things you want to explore.
Also, the old Roombas had a TTL-level serial port (if that did not make sense to you, then skip it). I don't know about the new ones. So it was possible to control any Roomba vacuum from a laptop.
The Number One rule, and I cannot emphasize this enough: have a reliable platform for experimentation. If you hand-build something just for basic functionality, you will spend all your time on minor issues and not get to the fun stuff.
Anyway, best of luck.
I'm on a project that, among other video-related tasks, should eventually be capable of extracting the audio from a video, applying some kind of speech recognition to it, and getting a transcribed text of what's said in the video. Ideally it should output some kind of subtitle format so that the text is linked to a certain point in the video.
I was thinking of using the Microsoft Speech API (aka SAPI), but from what I could see it is rather difficult to use. The very few examples that I found for speech recognition (most are for text-to-speech, which is much easier) didn't perform very well (they didn't recognize a thing). For example this one: http://msdn.microsoft.com/en-us/library/ms717071%28v=vs.85%29.aspx
Some examples use something called grammar files that are supposed to define the words the recognizer is waiting for, but since I haven't trained Windows Speech Recognition thoroughly, I think that might be skewing the results.
So my question is... what's the best tool for something like this? Could you provide both paid and free options? The best "free" option (as it comes with Windows) is, I believe, SAPI; all the rest would presumably be paid, but if they are really good it might be worth it. Also, if you have any good tutorials for using SAPI (or another API) in a context similar to this, that would be great.
On the whole this is a big ask!
The issue with any speech recognition system is that it functions best after training. It needs context (what words to expect) and some kind of audio benchmark (what does each voice sound like). This might be possible in some cases, such as a TV series, if you wanted to churn through hours of speech (separated for each character) to train it. There's a lot of work there though. For something like a film there's probably no hope of training a recogniser unless you can get hold of the actors.
Most film and TV production companies just hire media companies to transcribe the subtitles based on either direct transcription using a human operator, or converting the script. The fact that they still need humans in the loop for these huge operations suggests that automated systems just aren't up to it yet.
In video you have a plethora of things that make your life difficult, pretty much spanning huge swathes of current speech technology research:
-> Multiple speakers -> "Speaker Identification" (can you tell characters apart? Also, subtitles normally have different coloured text for different speakers)
-> Multiple simultaneous speakers -> The "cocktail party problem" - can you separate the two voice components and transcribe both?
-> Background noise -> Can you pick the speech out from any soundtrack/foley/exploding helicopters.
The speech algorithm will need to be extremely robust as different characters can have different gender/accents/emotion. From what I understand of the current state of recognition you might be able to get a single speaker after some training, but asking a single program to nail all of them might be tough!
--
There is no "subtitle" format that I'm aware of. I would suggest saving an image of the text using a font like Tiresias Screenfont, which is specifically designed for legibility in these circumstances, and using a lookup table to cross-reference images against video timecode (remembering that NTSC/PAL/Cinema use different timing formats).
--
There's a bunch of proprietary speech recognition systems out there. If you want the best you'll probably want to license a solution off one of the big boys like Nuance. If you want to keep things free the universities of RWTH and CMU have put some solutions together. I have no idea how good they are or how well they might be suited to the problem.
--
The only solution I can think of similar to what you're aiming at is the subtitling you can get on news channels here in the UK: "Live Closed Captioning". Since it's live, I assume they use some kind of speech recognition system trained to the reader (although it might not be trained, I'm not sure). It's got better over the past few years, but on the whole it's still pretty poor. The biggest thing it seems to struggle with is speed. Dialogue is normally really fast, so live subtitling has the extra issue of getting everything done in time. Live closed captions quite frequently get left behind and have to skip a lot of content to catch up.
Whether you have to deal with this depends on whether you'll be subtitling "live" video or if you can pre-process it. To deal with all the additional complications above I assume you'll need to pre-process it.
--
As much as I hate citing the big W there's a goldmine of useful links here!
Good luck :)
This falls into the category of dictation, which is a very large vocabulary task. Products like Dragon Naturally Speaking are amazingly good, and that has a SAPI interface for developers. But it's not such a simple problem.
Normally a dictation product is meant to be single speaker and the best products adapt automatically to that speaker, thereby improving the underlying acoustic model. They also have sophisticated language modeling which serves to constrain the problem at any given moment by limiting what is known as the perplexity of the vocabulary. That's a fancy way of saying the system is figuring out what you're talking about and therefore what types of words and phrases are likely or not likely to come next.
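(For reference, perplexity has a concrete definition: for a word sequence w_1, ..., w_N, the perplexity is P(w_1, ..., w_N)^(-1/N), the inverse probability of the sequence normalised per word. A language model that predicts the dialogue well gives low perplexity, leaving the recogniser fewer plausible words to distinguish acoustically.)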
It would be interesting though to apply a really good dictation system to your recordings and see how well it does. My suggestion for a paid system would be to get Dragon Naturally Speaking from Nuance and get the developer API. I believe that provides a SAPI interface, which has the benefit of allowing you to swap in the Microsoft speech or any other ASR engine that supports SAPI. IBM would be another vendor to look at but I don't think you will do much better than Dragon.
But it won't work well! After all the work of integrating the ASR engine, what you will probably find is that you get a pretty high error rate (maybe half). That would be due to a few major challenges in this task:
1) multiple speakers, which will degrade the acoustic model and adaptation.
2) background music and sound effects.
3) mixed speech - people talking over each other.
4) lack of a good language model for the task.
For 1), if you had a way of separating each actor onto a separate track, that would be ideal. But there's no reliable way of separating speakers automatically in a way that would be good enough for a speech recognizer. If each speaker were at a distinctly different pitch, you could try pitch detection (there's some free software out there for that) and separate based on that, but this is a sophisticated and error-prone task. The best thing would be hand-editing the speakers apart, but you might as well just manually transcribe the speech at that point! If you could get the actors on separate tracks, you would need to run the ASR using different user profiles.
For music (2), you'd either have to hope for the best or try to filter it out. Speech is more bandlimited than music, so you could try a bandpass filter that attenuates everything except the voice band. You would want to experiment with the cutoffs, but I would guess 100 Hz to 2-3 kHz would keep the speech intelligible (there's a small filter sketch after these points).
For (3), there's no solution. The ASR engine should return confidence scores, so at best, if you can tag low scores, you could then go back and manually transcribe those bits of speech.
(4) is a sophisticated task for a speech scientist. Your best bet would be to search for an existing language model made for the topic of the movie. Talk to Nuance or IBM, actually. Maybe they could point you in the right direction.
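Here's the bandpass idea from point (2) as a quick sketch: a Butterworth filter keeping roughly the 100 Hz to 3 kHz band. The filename, filter order, and cutoffs are placeholders to experiment with, as noted above.

```python
# Sketch: keep roughly the voice band with a Butterworth bandpass filter.
# Assumes a mono WAV; cutoffs and order are starting points to experiment with.
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("soundtrack.wav")          # placeholder filename
sos = butter(4, [100, 3000], btype="bandpass", fs=rate, output="sos")
voice_band = sosfiltfilt(sos, audio.astype("float64"))
wavfile.write("voice_band.wav", rate, voice_band.astype("int16"))
```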
Hope this helps.