I've been experimenting with neural networks in C++, implementing a network that plays and learns tic-tac-toe. A problem I have run into, and have been wondering about, is: how do you keep a network's "memory" or learnt skills intact once you end the program/training? At the moment it learns as you keep playing, but once I close the program and restart it, it's stupid again. How do I get around this, and how do other large neural networks get around this problem?
The memory of a neural network is stored in the weights of its connections. If you want to prevent it from forgetting what it has learnt, you need to serialize these weights to a file or a database.
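A minimal sketch of what that could look like in C++, assuming the network can expose its weights as a flat std::vector<double> (the function and file names here are illustrative, not part of any particular library):

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Write the flat weight vector to a binary file.
bool saveWeights(const std::vector<double>& weights, const std::string& path) {
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    std::size_t n = weights.size();
    out.write(reinterpret_cast<const char*>(&n), sizeof(n));
    out.write(reinterpret_cast<const char*>(weights.data()),
              static_cast<std::streamsize>(n * sizeof(double)));
    return static_cast<bool>(out);
}

// Read the weights back; returns false if the file is missing or truncated.
bool loadWeights(std::vector<double>& weights, const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    if (!in) return false;
    std::size_t n = 0;
    in.read(reinterpret_cast<char*>(&n), sizeof(n));
    weights.resize(n);
    in.read(reinterpret_cast<char*>(weights.data()),
            static_cast<std::streamsize>(n * sizeof(double)));
    return static_cast<bool>(in);
}
```

Call loadWeights() at startup and saveWeights() before the program exits, and the network picks up where it left off instead of starting from scratch.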
I am creating my own neural network using a DFF (deep feed-forward) architecture. I made a game about a car that drives on a road, and I want to train the network to drive the car using Q-learning. I am writing everything in C++. I couldn't find any information on the Internet about how to update the weights of a neural network with Q-learning: sources explain the main idea of the method, but nowhere do they say how the weights should actually change. I tried to implement it myself, but it turned out to be sheer nonsense.
I understand that it is necessary to use the Bellman equation, but I do not know exactly where to use it. I tried to use it to predict the reward, then compared that with the reward actually received and, depending on the action and the difference between the expected and received reward, changed the weights on the layers (I have 4 layers in the network), but this was of little use.
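For reference, the standard formulation (a sketch, not tied to the asker's code) uses the Bellman equation only to build a regression target; the weights are then changed by ordinary backpropagation of the squared error between that target and the network's current prediction:

```latex
\[
  y = r + \gamma \max_{a'} Q(s', a'; \theta)
  \qquad \text{(TD target from the Bellman equation)}
\]
\[
  L(\theta) = \bigl(y - Q(s, a; \theta)\bigr)^{2},
  \qquad
  \theta \leftarrow \theta - \alpha \, \nabla_{\theta} L(\theta)
\]
```

Here \(\theta\) are the network weights, \(r\) the observed reward, \(\gamma\) the discount factor and \(\alpha\) the learning rate; the target \(y\) is treated as a constant when computing the gradient.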
I recently collected a decent chunk of data by videoing the readout of a digital scale by hand on my phone. The mass changes over time, and I need to look at that relationship. The equipment I had was relatively limited, which is why I was not able to connect the scale directly to a data logger.
I would very much prefer not to have to manually go through every second of every video to log the data as it would be a very repetitive process.
Thus, I was hoping computer vision would be a good alternative, but I have no idea how to go about it. Would anyone happen to know a program or tool I could use or create that can just read these numbers from the video and then record them with their timestamps, possibly as a .csv file?
I'd be willing to learn about computer vision or AI to do so myself as well, as I am interested in this area, but simply don't have experience in it, so any advice or tools would be greatly appreciated.
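One possible route, sketched below, is to grab roughly one frame per second with OpenCV and run Tesseract OCR on the part of the frame showing the display, writing the results to a CSV. The video filename, the crop rectangle and the output filename are placeholders, and a seven-segment display may need extra preprocessing (or a seven-segment-trained model) before Tesseract reads it reliably.

```cpp
#include <opencv2/opencv.hpp>
#include <tesseract/baseapi.h>
#include <algorithm>
#include <cctype>
#include <fstream>
#include <string>

int main() {
    cv::VideoCapture cap("scale.mp4");                    // placeholder filename
    if (!cap.isOpened()) return 1;

    tesseract::TessBaseAPI ocr;
    if (ocr.Init(nullptr, "eng")) return 1;                // non-zero return means failure
    ocr.SetVariable("tessedit_char_whitelist", "0123456789.");

    std::ofstream csv("readings.csv");
    csv << "time_s,reading\n";

    const double fps = cap.get(cv::CAP_PROP_FPS);
    const int step = fps > 1.0 ? static_cast<int>(fps) : 1;   // ~one sample per second
    cv::Mat frame;
    for (int i = 0; cap.read(frame); ++i) {
        if (i % step != 0) continue;

        // Crop the region showing the digits (adjust to your footage), then binarize.
        cv::Mat gray;
        cv::cvtColor(frame(cv::Rect(100, 100, 300, 120)), gray, cv::COLOR_BGR2GRAY);
        cv::threshold(gray, gray, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

        ocr.SetImage(gray.data, gray.cols, gray.rows, 1, static_cast<int>(gray.step));
        char* text = ocr.GetUTF8Text();
        std::string value = text ? text : "";
        delete[] text;
        value.erase(std::remove_if(value.begin(), value.end(),
                                   [](unsigned char c) { return std::isspace(c); }),
                    value.end());

        csv << i / fps << "," << value << "\n";            // timestamp in seconds
    }
    return 0;
}
```

Compile against OpenCV and the Tesseract/Leptonica libraries; a quick sanity check on a handful of frames will tell you whether the OCR is trustworthy before running it over all the videos.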
If I build a model, train it, and then deploy it, can I set it up to train on data at runtime? E.g. if I wanted a net that could just train on constant input until I stopped it and tested it. Would I have to implement that by talking to the protobuf in C++?
The practical problem with neural networks in production is that you train on known output, but apply them in production in order to create output. That usually precludes in-production updates.
Yet, there's no magic involved. If in production you can still get the desired output (even in hindsight) for a given input, then you can backpropagate the resulting error term and adjust the network weights.
There's an additional challenge here: if you train the network in production, what data are you intending to train with? Initially you can't train on just the first few samples from the field, as you'd greatly overtrain on those. So you'll need to include the initial training set in the deployed solution, and expand on that.
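A framework-agnostic sketch of that idea: ship the original training set with the deployed program and mix it with the new field samples on every online update. The Sample, Network and trainStep names below are placeholders, not any particular library's API.

```cpp
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

struct Sample { std::vector<float> input, target; };

struct Network {
    // One SGD/backprop step over a batch (body omitted in this sketch).
    void trainStep(const std::vector<Sample>& /*batch*/) {}
};

class OnlineTrainer {
public:
    OnlineTrainer(Network& net, std::vector<Sample> initialSet)
        : net_(net), archive_(std::move(initialSet)) {}

    // Called whenever the desired output for a production input becomes known.
    void observe(const Sample& s) {
        archive_.push_back(s);          // the archive grows with field data
        net_.trainStep(makeBatch(32));  // update on a batch mixing old and new samples
    }

private:
    std::vector<Sample> makeBatch(std::size_t size) {
        std::uniform_int_distribution<std::size_t> pick(0, archive_.size() - 1);
        std::vector<Sample> batch;
        for (std::size_t i = 0; i < size; ++i)
            batch.push_back(archive_[pick(rng_)]);
        return batch;
    }

    Network& net_;
    std::vector<Sample> archive_;   // initial training set plus field samples
    std::mt19937 rng_{std::random_device{}()};
};
```

Sampling each batch from the combined archive keeps the deployed network from overtraining on the first few field examples, which is the risk described above.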
I am facing a challenging problem. In the courtyard of the company where I work there is a camera trap which takes a photo of every movement. Some of these pictures show different kinds of animals (mostly dark gray mice) that damage our cable system. My idea is to use an application that could recognize whether there is a gray mouse in the picture or not, ideally in real time. So far we have developed a solution that sends an alarm for every movement, but most of the alarms are false. Could you give me some information about possible ways to solve the problem?
In technical parlance, what you describe above is often called event detection. I know of no ready-made approach to solve all of this at once, but with a little bit of programming you should be all set even if you don't want to code any computer vision algorithms or some such.
The high-level pipeline would be:
1. Making sure that your video is of sufficient quality. Gray mice sound kind of tough, plus the pictures are probably taken at night, so you should have sufficient infrared lighting etc. But if a human can tell whether an alarm is false or true, you should be fine.
2. Deploying motion detection and taking snapshot images at the time of movements. It seems like you have this part already worked out, great! Detailing your setup could benefit others. You may also need to crop only the area in motion from the image; are you doing that? (A minimal cropping sketch follows this list.)
3. Building an archive of images, including your decision of whether each is a false or a true alarm (labels, in machine learning parlance). Try to gather at least a few tens of example images for both cases, and make them representative of real-world variations (do you have the problem during daytime as well? is there snowfall in your region?).
4. Classifying the snapshots taken from the video stream to check whether each is a false alarm or contains bad critters eating cables. This sounds tough, but deep learning and machine learning are advancing by leaps; you can either:
   - deploy your own neural network built in a framework like Caffe or TensorFlow (but you will likely need a lot of examples, at least tens of thousands I'd say);
   - use an image classification API that recognizes general objects, like Clarifai or Imagga; if you are lucky, it will notice that the snapshots show a mouse or a squirrel (do squirrels chew on cables?), but it is likely that on a specialized task like this one these engines will get pretty confused;
   - use a custom image classification API service, which is typically even more powerful than rolling your own neural network since it can use a lot of tricks to sort out these images even if you give it just a small number of examples for each image category (false / true alarm here); vize.it is a perfect example of that (can anyone contribute more such services?).
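As an illustration of the cropping mentioned in step 2, here is a minimal OpenCV sketch (file names and thresholds are placeholders) that differences two consecutive snapshots and saves only the region that changed, which is what you would then send to the classifier:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Two consecutive frames from the camera trap (placeholder file names).
    cv::Mat prev = cv::imread("frame_prev.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat curr = cv::imread("frame_curr.jpg", cv::IMREAD_GRAYSCALE);
    if (prev.empty() || curr.empty()) return 1;

    // Frame differencing: keep only pixels that changed noticeably.
    cv::Mat diff, mask;
    cv::absdiff(prev, curr, diff);
    cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);
    cv::dilate(mask, mask, cv::Mat(), cv::Point(-1, -1), 2);

    // Bounding box around everything that moved.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    cv::Rect motion;
    for (const auto& c : contours) {
        cv::Rect r = cv::boundingRect(c);
        motion = motion.area() ? (motion | r) : r;
    }

    // Crop that region from the current color frame and save it for classification.
    if (motion.area() > 0) {
        cv::Mat color = cv::imread("frame_curr.jpg");
        cv::imwrite("motion_crop.jpg",
                    color(motion & cv::Rect(0, 0, color.cols, color.rows)));
    }
    return 0;
}
```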
The real-time aspect is a bit open-ended, as neural networks take some time to process an image. You also need to account for data transfer etc. when using a public API, and if you roll your own, you will need to spend a lot of effort to get low latency, as the frameworks are by default optimized for throughput (batch prediction). Generally, if you are happy with ~1 s latency and have a good internet uplink, you should be fine with any service.
Disclaimer: I'm one of the co-creators of vize.it.
How about getting a cat?
Also, you could train your own custom classifier using the IBM Watson Visual Recognition service. (demo: https://visual-recognition-demo.mybluemix.net/train ) It's free to try and you just need to supply example images for the different categories you want to identify. Overall, Petr's answer is excellent.
I am really passionate about the machine learning, data mining, and computer vision fields, and I was thinking of taking things a little bit further.
I was thinking of buying a LEGO Mindstorms NXT 2.0 robot to experiment with machine learning, computer vision, and robotics algorithms, in order to better understand several existing concepts.
Would you encourage me to do so? Do you recommend any other alternative for a practical approach to understanding these fields that is acceptably priced (around 200-250 pounds)? Are there any mini robots I can buy and experiment with?
If your interests are machine learning, data mining and computer vision, then I'd say a Lego Mindstorms is not the best option for you. Not unless you are also interested in robotics/electronics.
To do interesting machine learning you only need a computer and a problem to solve. Think ai-contest or mlcomp or similar.
To do interesting data mining you need a computer, a lot of data and a question to answer. If you have an internet connection, the amount of data you can get at is only limited by your bandwidth. Think Netflix Prize; try your hand at collecting and interpreting data from wherever. If you are learning, this is a nice place to start.
As for computer vision: all you need is a computer and images. Depending on the type of problem you find interesting, you could do some processing of random webcam images, or take all your holiday photos and try to detect where all your travel companions are in them. If you have a webcam, your options are endless.
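If you want a concrete starting point for the webcam idea, here is a small OpenCV sketch that draws boxes around detected faces in the live feed; the Haar cascade file ships with OpenCV installations, and its path here is an assumption you'd adjust to your setup.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::CascadeClassifier faces;
    if (!faces.load("haarcascade_frontalface_default.xml")) return 1;  // path is an assumption

    cv::VideoCapture cam(0);                    // default webcam
    if (!cam.isOpened()) return 1;

    cv::Mat frame, gray;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> found;
        faces.detectMultiScale(gray, found);    // detect faces in the current frame
        for (const auto& r : found)
            cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
        cv::imshow("faces", frame);
        if (cv::waitKey(30) == 27) break;       // Esc to quit
    }
    return 0;
}
```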
Lego Mindstorms allows you to combine machine learning and computer vision. I'm not sure where the data mining would come in, and you will spend (waste?) time on the robotics/electronics side of things, which you don't list as one of your passions.
Well, I would take a look at the iRobot Create... well within your budget, and very robust.
Depending on your age, you may not want to be seen with a "lego robot" if you are out of college :-)
Anyway, I buy the Creates in batches for my lab. You can link to them with a hard cable (cheap) or put a Bluetooth interface on them.
Put a webcam on that puppy, hook it up to a multicore machine, and you have an awesome working robot for the things you want to explore.
Also, the old Roombas had a TTL-level serial port (if that did not make sense to you, then skip it). I don't know about the new ones. So it was possible to control any Roomba vacuum from a laptop.
The Number One rule, and I cannot emphasize this enough: Have a reliable platform for experimentation. If you hand build something, just for basic functionality, you will spend all your time on minor issues and not get to the fun stuff.
Anyway, best of luck.