Balancing robot PID tuning - C++

I'm trying to build a two-wheeled balancing robot for fun. I have all of the hardware built and put together, and I think I have it coded as well. I'm using an IMU with a gyro and accelerometer to find my tilt angle, with a complementary filter to smooth the signal. The input signal from the IMU seems pretty smooth: it varies by less than about 0.7 either side of the actual tilt angle.
My IMU sampling rate is 50 Hz, and I run the PID calculation at 50 Hz too, which I think should be fast enough.
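For reference, the complementary-filter update is a one-liner. The robot itself runs C++, but the math is language-agnostic, so here is a sketch in Python; the function name and the blend factor alpha = 0.98 are my own illustrative choices, not values from the question.

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update of a complementary filter.

    angle_prev  - previous fused tilt estimate
    gyro_rate   - gyro angular rate (angle units per second)
    accel_angle - tilt angle derived from the accelerometer
    dt          - sample period in seconds (0.02 s at 50 Hz)
    alpha       - blend factor: trust the gyro short-term,
                  the accelerometer long-term
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Example: a stationary robot tilted 5 units; the gyro reads ~0 and the
# accelerometer reads ~5, so the fused estimate converges toward 5.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=5.0, dt=0.02)
```

The accelerometer term slowly pulls the estimate toward the true tilt while the gyro term tracks fast motion, which is why the fused signal stays smooth.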
Basically, I'm using the PID library found at PID Library.
When I set the P value to something low, the wheels move in the right direction.
When I set the P value to something large, I get an output like the graph.

From the graph it looks like your system is not stable.
I hope you have tested each subsystem of your robot before going straight to tuning; that is, that both the sensors and the actuators respond properly and with acceptable error. Once each subsystem is properly calibrated, you can start tuning.
Once that is done, start with a valid value of P (maybe 0.5) to first achieve a proper response time; you will need some trials here. Then increment I slowly to cut down any steady-state error, and use D only when required (in case of oscillation).
I would suggest handling P, I, and D one by one instead of tweaking all of them at once.
Also, during testing you will need to continuously monitor your sensor and actuator data to see that they stay within an acceptable range.
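The one-at-a-time procedure above can be sketched in code. This is a generic textbook PID update in Python, not the internals of the Arduino PID library the question uses (which, for example, also clamps its output); all names and numbers here are illustrative.

```python
def pid_step(error, state, kp, ki=0.0, kd=0.0, dt=0.02):
    """One PID update at 50 Hz (dt = 0.02 s).

    `state` carries the integral and last error between calls.
    Tuning one term at a time means: start with ki = kd = 0 and tune
    kp alone, then bring in ki to remove steady-state error, and add
    kd last, only if oscillation remains.
    """
    state["integral"] += error * dt
    derivative = (error - state["last_error"]) / dt
    state["last_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "last_error": 0.0}
out = pid_step(error=2.0, state=state, kp=0.5)   # P-only: 0.5 * 2.0 = 1.0
```

Note that the derivative term divides by dt, which is why a noisy tilt signal makes D-heavy tunings blow up: noise differentiates into large spikes.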

As Praks wrote, your system looks as if it is either unstable or perhaps marginally stable.
Generally, two-wheeled robots can be quite difficult to control, as they are inherently unstable without a controller.
I would personally try a PD controller first, and if you have problems with setpoint accuracy I would use a PID. Just remember that if you want a derivative term in your controller (the D part), it is extremely important that you have a very smooth signal.
Also, the values of the controller depend greatly on your hardware setup (weight and weight distribution of the robot, motor coefficients, and voltage levels) and on the units you use internally in your software for the control signals (e.g. mV vs. V, degrees vs. radians). This means it will be almost impossible for anybody to guess the correct parameters for you.
What a control engineer could do would be to make a mathematical model of the robot and analyse the pole/zero locations.
If you have any experience with control theory you can take a look at the following paper, and see if it makes sense to you.
http://dspace.mit.edu/bitstream/handle/1721.1/69500/775672333.pdf
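As a tiny illustration of the pole analysis mentioned above: linearizing an inverted pendulum about its upright position gives theta_ddot = (g/l) * theta, and the eigenvalues of the state matrix are the system's poles. The 0.2 m center-of-mass height below is a made-up number for illustration, not taken from the question.

```python
import numpy as np

# Linearized upright pendulum: theta_ddot = (g / l) * theta
# State x = [theta, theta_dot];  x_dot = A @ x
g, l = 9.81, 0.2          # assumed: 20 cm from axle to center of mass
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])

poles = np.linalg.eigvals(A)
print(poles)              # one pole in the right half-plane -> unstable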

There are many heuristic rules for PID tuning out there, but what most people fail to realize is that PID tuning should not be a heuristic process; it should be based on math and science.
What @Sigurd V said is correct: "What a control engineer could do would be to make a mathematical model...", and this can get as complicated as you want. But nowadays there are many software tools that can automate all the math and get you your desired PID gains quite easily.
Assuming all your hardware is in good shape, you can use a free online tool like PidTuner to input your data and get near-optimal PID gains. I have personally used it and achieved good results. Use these as a starting point and then tune manually if required.

If you haven't already, I'd suggest you do a search on the terms Arduino PID (an obvious suggestion, but lots of people have been down this road). I remember when that PID library was being written; the author posted quite a bit, with tutorials, etc. (example). I also came across this PIDAutotuneLibrary.
I wrote my own PID routines but also had a heck of a time tuning and never got it quite right.


Modifying Gcode mid-print in response to sensor feedback for concept printer

I'm developing a concept printer that burns images onto wood using a magnifying glass on an xy plotter. One of my anticipated challenges is an inconsistent print quality as a result of changing lighting conditions (e.g., atmosphere, clouds).
My plan is to modify my Gcode on the fly (yes, while printing) based on feedback from photosensors, in order to maintain a consistent burn. Modifying my feed rate to accommodate changes in lighting conditions seems like the simplest approach.
What I can't find is how to modify Gcode AFTER a print has begun.
Any ideas?
CNC machines use ladder logic to change machine parameters while running a program; they never change the code they are running. You need your program to be static and reproducible, and you need to be able to version-control your code. Editing the code live will almost certainly lead to disaster.
You don't need to change the code to change the feed rate. Usually there is a dial to control the feed rate; this can be locked out in software and controlled based on sensors.
Your question lacks the information needed for a more specific answer; we would need to know what machine and control software you are using.
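As one concrete direction, assuming a Marlin-style firmware (which exposes a feedrate-percentage override via M220; other controllers differ): rather than rewriting the Gcode, you can stream an override command computed from the photosensor. Every sensor number below is a made-up placeholder, not a calibrated value.

```python
def feed_override_pct(light_level, nominal=600.0, pct_min=50, pct_max=200):
    """Map a photosensor reading to a feedrate override percentage.

    Brighter conditions burn faster, so raise the feed rate; dimmer
    conditions need a slower pass. The linear mapping and the clamp
    limits here are illustrative placeholders, not calibrated values.
    """
    pct = 100.0 * light_level / nominal
    return int(min(max(pct, pct_min), pct_max))

def override_command(light_level):
    # Marlin-style feedrate-percentage override; other firmwares differ.
    return "M220 S{}".format(feed_override_pct(light_level))

print(override_command(600.0))   # M220 S100
```

The key point is that the Gcode file itself never changes; a side channel adjusts a machine parameter while the static program runs, which is exactly the ladder-logic pattern described above.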

Recognition of an animal in pictures

I am facing a challenging problem. In the courtyard of the company I work for, there is a camera trap that takes a photo on every movement. Some of these pictures show different kinds of animals (mostly dark gray mice) that damage our cable system. My idea is to use some application that could recognize whether there is a gray mouse in the picture or not, ideally in real time. So far we have developed a solution that sends an alarm for every movement, but most of the alarms are false. Could you give me some pointers on possible ways to solve the problem?
In technical parlance, what you describe above is often called event detection. I know of no ready-made approach that solves all of this at once, but with a little bit of programming you should be all set, even if you don't want to code any computer vision algorithms yourself.
The high-level pipeline would be:
Making sure that your video is of sufficient quality. Gray mice sound kind of tough, plus the pictures are probably taken at night, so you should have sufficient infrared lighting etc. But if a human can tell whether an alarm is false or true, you should be fine.
Deploying motion detection and taking snapshot images at the time of movement. It seems like you already have this part worked out, great! Detailing your setup could benefit others. You may also need to crop only the area in motion from the image; are you doing that?
Building an archive of images, including your decision of whether each is a false or true alarm (labels, in machine-learning parlance). Try to gather at least a few tens of example images for both cases, and make them representative of real-world variations (do you have the problem during the daytime as well? is there snowfall in your region?).
Classifying the images taken from the video-stream snapshots to check whether each is a false alarm or contains bad critters eating cables. This sounds tough, but deep learning and machine learning are advancing by leaps and bounds; you can either:
deploy your own neural network built in a framework like Caffe or TensorFlow (but you will likely need a lot of examples, at least tens of thousands I'd say),
use an image classification API that recognizes general objects, like Clarifai or Imagga; if you are lucky, it will notice that the snapshots show a mouse or a squirrel (do squirrels chew on cables?), but on a specialized task like this one these engines will likely get pretty confused, or
use a custom image-classification API service, which is typically even more powerful than rolling your own neural network, since it can use a lot of tricks to sort out the images even if you give it just a small number of examples for each category (false/true alarm here); vize.it is a perfect example of that (can anyone suggest more such services?).
The real-time aspect is a bit open-ended, as neural networks take some time to process an image. You also need to include data-transfer time etc. when using a public API, and if you roll your own, you will need to spend a lot of effort to get low latency, as the frameworks are by default optimized for throughput (batch prediction). Generally, if you are happy with ~1 s latency and have a good internet uplink, you should be fine with any service.
Disclaimer: I'm one of the co-creators of vize.it.
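Before committing to a heavy model or a paid API, it can also be worth calibrating how hard the task is with a tiny baseline, e.g. nearest neighbor on intensity histograms over the labeled archive from the pipeline above. The sketch below uses synthetic arrays in place of real snapshots; every name and number in it is illustrative.

```python
import numpy as np

def histogram_feature(img, bins=16):
    """Grayscale intensity histogram, normalized to sum to 1."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

def nearest_neighbor_label(query, archive):
    """archive: list of (feature, label) built from your labeled snapshots."""
    feats = np.array([f for f, _ in archive])
    dists = np.linalg.norm(feats - query, axis=1)
    return archive[int(np.argmin(dists))][1]

# Synthetic stand-ins: "mouse" snapshots are darker than empty-yard ones.
rng = np.random.default_rng(0)
dark = rng.integers(0, 100, size=(32, 32))
bright = rng.integers(150, 256, size=(32, 32))
archive = [(histogram_feature(dark), "true_alarm"),
           (histogram_feature(bright), "false_alarm")]

query = rng.integers(0, 100, size=(32, 32))   # another dark frame
print(nearest_neighbor_label(histogram_feature(query), archive))
```

If even a crude feature like this separates your true and false alarms, the problem is easy and a small classifier will do; if not, that argues for one of the heavier options listed above.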
How about getting a cat?
Also, you could train your own custom classifier using the IBM Watson Visual Recognition service (demo: https://visual-recognition-demo.mybluemix.net/train ). It's free to try, and you just need to supply example images for the different categories you want to identify. Overall, Petr's answer is excellent.

Telling Bullet That Something Happened In The Past

Is it possible to tell Bullet that something has happened in the past, so it can take this information and adjust its internal interpolation to show the change in the present?
There would never be a need to go back in time more than 1-5 seconds (5 being a very rare occasion); more realistically, most of the changes would occur between 1.5 and 2.5 seconds back.
The ability to query an object's position, rotation, and velocities at a specific time in the past would be needed as well, but this can easily be accomplished.
The reason behind all of this would be to facilitate easier synchronization of two physics simulations, specifically in a networked environment.
A server constantly running the simulation in real time would send position, rotation, and velocity updates to client simulations at periodic intervals. Due to network latency, these updates arrive at the client 'in the past' from the client simulation's perspective, hence the need to query the updated objects' values at that past time to see if they differ; if they do, those values also need to be changed in the past. During the next simulation step, Bullet would take these past changes into account and update the object accordingly.
Is this ability present in Bullet, or would it need to be emulated somehow? If emulation is needed, could someone point me in the right direction for getting started on this "rewind and replay" feature?
If you are unfamiliar with "rewind and replay", this article goes into great detail about the theory behind an implementation, aimed at someone creating their own physics library. http://gafferongames.com/game-physics/n ... d-physics/
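As far as I know, Bullet has no built-in rewind; you would snapshot and restore state yourself. Here is a minimal sketch of the buffer-and-replay idea, with a trivial 1D integrator standing in for the physics step (with Bullet you would save and restore each body's transform and velocities instead); class and method names are my own.

```python
from collections import deque

class RewindBuffer:
    """Keep a few seconds of (time, position, velocity) snapshots so a
    late server correction can be applied in the past and replayed
    forward to the present."""
    def __init__(self, dt=1 / 60, history_s=5.0):
        self.dt = dt
        self.history = deque(maxlen=int(history_s / dt))
        self.t, self.pos, self.vel = 0.0, 0.0, 0.0

    def step(self):
        # Snapshot the current state, then advance one fixed timestep.
        self.history.append((self.t, self.pos, self.vel))
        self.pos += self.vel * self.dt
        self.t += self.dt

    def correct(self, t_past, pos, vel):
        """Discard snapshots newer than t_past, adopt the server's
        authoritative state at that time, then re-simulate forward."""
        while self.history and self.history[-1][0] > t_past:
            self.history.pop()
        steps = round((self.t - t_past) / self.dt)
        self.t, self.pos, self.vel = t_past, pos, vel
        for _ in range(steps):
            self.step()

sim = RewindBuffer()
sim.vel = 1.0
for _ in range(120):                         # simulate 2 s at 60 Hz
    sim.step()
sim.correct(t_past=1.0, pos=0.9, vel=1.0)    # server: we were at 0.9, not 1.0
```

After the correction, the present-time position reflects the revised past (0.9 + 1 s of motion = 1.9), which is exactly the behavior the question asks Bullet to emulate.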

Detecting different sounds/sources in an audio recording

I need some advice on an idea I've had for a uni project.
I was wondering if it's possible to split an audio file into different "streams" from different audio sources.
For example, split the audio file into: engine noise, train noise, voices, different sounds that are not there all the time, etc.
I wouldn't necessarily need to do this from a programming language (although that would be ideal); doing it manually with some sound-processing software like Sound Forge would work as well. I need to know if this is possible first, though. I know nothing about sound processing.
After the first stage is complete (separating the sounds), I want to determine whether one of the processed sounds exists in another audio recording. The purpose would be sound detection. For an (ideal) example: take the car-engine sound, match it against another file, and determine whether that audio is a recording of a car's engine or not. It doesn't need to be THAT precise; I guess detecting a sound that is not constant, like a honk, would be alright as well.
I will do the programming part; I just need some pointers on what to look for (software, math, etc.). As I am no sound expert, this would be a really interesting project, if it's possible.
Thanks.
This problem of splitting sounds based on source is known in research as (Audio) Source Separation or Audio Signal Separation. If there is no more information about the sound sources or how they have been mixed, it is a Blind Source Separation problem. There are hundreds of papers on these topics.
However for the purpose of sound detection, it is not typically necessary to separate sounds at the audio level. Very often one can (and will) do detection on features computed on the mixed signal. Search literature for Acoustic Event Detection and Acoustic Event Classification.
For an introduction to the subject, check out a book like Computational Analysis of Sound Scenes and Events.
It's extremely difficult to do automated source separation from a single audio stream. Your brain is uncannily good at this task, and it also benefits from a stereo signal.
For instance, voice is full of signals that aren't there all the time. Car noise has components that are quite stationary, but gear changes are outliers.
Unfortunately, there are no simple answers.
Correlate reference signals against the audio stream. Correlation can be done efficiently using FFTs. The output of the correlation calculation can be thresholded and 'debounced' in time for signal identification.
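The correlate-and-threshold idea above can be sketched with NumPy's FFT routines. The "honk" here is a synthetic 440 Hz burst hidden in noise, and all the numbers are illustrative.

```python
import numpy as np

def xcorr_fft(signal, ref):
    """Cross-correlate `ref` against `signal` using FFTs.
    The index of the peak in the result is the offset at which the
    reference best matches the recording."""
    n = len(signal) + len(ref) - 1          # pad to avoid circular wrap
    S = np.fft.rfft(signal, n)
    R = np.fft.rfft(ref, n)
    return np.fft.irfft(S * np.conj(R), n)[:len(signal)]

# Toy example: hide a short "honk" (a 100 ms burst of 440 Hz) in noise.
rng = np.random.default_rng(1)
fs = 8000
honk = np.sin(2 * np.pi * 440 * np.arange(fs // 10) / fs)
recording = 0.1 * rng.standard_normal(fs)   # 1 s of background noise
offset = 3000
recording[offset:offset + len(honk)] += honk

corr = xcorr_fft(recording, honk)
print(int(np.argmax(corr)))                 # peak near 3000: honk start
```

Thresholding `corr` against the typical noise-floor correlation, and requiring the detection to persist for a few frames ('debouncing'), gives a simple detector for non-stationary sounds like a honk.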

Issue regarding a practical approach to machine learning/computer vision fields

I am really passionate about the machine learning, data mining, and computer vision fields, and I was thinking of taking things a little bit further.
I was thinking of buying a LEGO Mindstorms NXT 2.0 robot to experiment with machine learning, computer vision, and robotics algorithms, in order to better understand several existing concepts.
Would you encourage me to do so? Do you recommend any other alternative for a practical, hands-on approach to these fields at an acceptable price (around 200-250 pounds)? Are there any mini robots I can buy and experiment with?
If your interests are machine learning, data mining, and computer vision, then I'd say a Lego Mindstorms is not the best option for you, not unless you are also interested in robotics/electronics.
To do interesting machine learning you only need a computer and a problem to solve. Think ai-contest or mlcomp or similar.
To do interesting data mining you need a computer, a lot of data, and a question to answer. If you have an internet connection, the amount of data you can get at is limited only by your bandwidth. Think Netflix Prize; try your hand at collecting and interpreting data from wherever. If you are learning, this is a nice place to start.
As for computer vision: all you need is a computer and images. Depending on the type of problem you find interesting, you could process random webcam images, or take all your holiday photos and try to detect where your travel companions are in each of them. If you have a webcam, your options are endless.
Lego Mindstorms lets you combine machine learning and computer vision. I'm not sure where the data mining would come in, though, and you will spend (waste?) time on the robotics/electronics side of things, which you don't list as one of your passions.
Well, I would take a look at the iRobot Create: well within your budget, and very robust.
Depending on your age, you may not want to be seen with a "Lego robot" if you are out of college :-)
Anyway, I buy the Creates in batches for my lab. You can link to them with a hard cable (cheap) or put a Bluetooth interface on them.
Put a webcam on that puppy, hook it up to a multicore machine, and you have an awesome working robot for the things you want to explore.
Also, the old Roombas had a TTL-level serial port (if that did not make sense to you, then skip it). I don't know about the new ones. So it was possible to control any Roomba vacuum from a laptop.
The number one rule, and I cannot emphasize this enough: have a reliable platform for experimentation. If you hand-build something just for basic functionality, you will spend all your time on minor issues and never get to the fun stuff.
Anyway, best of luck.