I have a joint with a speed encoder, so I can measure the speed of the joint at any time. How can I extract the current position in C++?
I have an idea in mind to find the area under the speed curve between two timestamps. Is this the correct way to find the position? Any other suggestions?
You have to integrate the speed value in order to get the position, so your guess is correct.
But that value will be unreliable on its own, because integration error accumulates and the estimated position drifts over time.
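If integration is all you have, here is a minimal C++ sketch using the trapezoidal rule; readSpeed is a hypothetical stand-in for whatever API your encoder exposes:

    #include <chrono>

    // Hypothetical encoder read; replace with your actual API.
    double readSpeed();

    int main() {
        using clock = std::chrono::steady_clock;

        double position = 0.0;          // integrated position
        double prevSpeed = readSpeed();
        auto prevTime = clock::now();

        for (;;) { // sampling loop; terminate as appropriate for your application
            double speed = readSpeed();
            auto now = clock::now();
            double dt = std::chrono::duration<double>(now - prevTime).count();

            // Trapezoidal rule: area under the speed curve over this interval.
            position += 0.5 * (prevSpeed + speed) * dt;

            prevSpeed = speed;
            prevTime = now;
            // ... use `position`; remember that numerical drift accumulates.
        }
    }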
Encoders should be able to provide position data themselves (counts). See if you can get those somehow; in my view, that would be the best way.
Hope this helps.
I am trying to use HMM for location prediction. I have the coordinates (x,y), speed and direction of motion. I have discretized the entire space into small blocks, that I use as states. The objective is to predict the location (state) of the object after time t, 2t, 3t and so on.
I have read multiple articles on HMMs, but I still have a few questions:
Can I use some trajectories to create the transition matrix? My mapping from coordinates to block (i.e. the state) is straightforward, so I can use a few samples to create an initial transition matrix.
How do I define the emission matrix with continuous observables (i.e. position, speed and direction)? If I assume them to be Gaussian with mean 0, how do I create the initial emission matrix?
Can I use Viterbi to predict the location after time t, 2t etc?
I have read too many articles and am really confused now. I would appreciate some help to know if I am going in the right direction.
Also, what would be a good C++ library to use for this purpose?
Mlpack (http://www.mlpack.org/) is a very good and simple C++ library.
I couldn't understand what your observations are and what your hidden states are. If you have a simple mapping between them, then maybe you don't need an HMM in the first place.
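That said, on the first question: yes, counting block-to-block transitions in your sample trajectories is a standard way to build an initial transition matrix. A minimal sketch, with add-one smoothing so unseen transitions keep nonzero probability (all names are illustrative):

    #include <cstddef>
    #include <vector>

    // Estimate a row-stochastic transition matrix from state-index trajectories.
    std::vector<std::vector<double>> estimateTransitions(
        const std::vector<std::vector<int>>& trajectories, int numStates) {
        std::vector<std::vector<double>> counts(
            numStates, std::vector<double>(numStates, 1.0)); // add-one smoothing

        for (const auto& traj : trajectories)
            for (std::size_t t = 0; t + 1 < traj.size(); ++t)
                counts[traj[t]][traj[t + 1]] += 1.0;

        for (auto& row : counts) {
            double total = 0.0;
            for (double c : row) total += c;
            for (double& c : row) c /= total; // normalize each row to sum to 1
        }
        return counts;
    }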
In Google Earth you can use the "Sunlight" layer to view shadows cast by the terrain at any given DateTime: http://i.stack.imgur.com/YFGMj.png
However, I have not been able to find any way to access the sunlight/luminosity/shadow/etc values from the API.
I'm looking for a way to supply lat, long and DateTime to determine if an area is in sunlight, taking terrain shadows into account. There are countless services that will provide simple sunrise and sunset times, but these do not consider terrain. This can be done manually with Google Earth, but I'm looking for a programmatic method.
Thanks for any thoughts, ideas, leads...
I realise that this is an old question, but it surfaced in a Google search I just did, and I liked the focus.
You're looking for a programmatic way of determining whether a point on earth, given by a longitude and latitude tuple, is exposed to sun at a given time, and I can't help you right now. However, I'm in a position to set up such an API quite easily if it turns out this is a feature that many people need. At suncurves.com we calculate sunrise and sunset times accounting for terrain. The solution we've set up so far is a web interface where a user can search for an address, or drag and drop the icon on a map, to get sunrise and sunset times through the year for that exact spot, accounting for terrain. We want to create an API to our data, but we do not yet have a clear specification of its scope. What you ask for requires us to:
1. Calculate the apparent horizon from the viewing point of the longitude and latitude. This means scanning the terrain data in a search radius of 30-50 km around your point.
2. Calculate the sun's position at the specified time.
3. Determine whether the sun is under or over the horizon as given by the terrain surrounding your point, accounting for atmospheric refraction.
Here's an example from Chamonix, France, where the common flat-terrain sunrise and sunset times are pretty worthless.
http://suncurves.com/v/7/
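To make step 3 concrete, here is a rough geometric sketch: given elevation samples along the sun's azimuth (from whatever terrain data you have; the types here are made up) and the sun's elevation angle from any solar-position formula, the point is lit when the sun stands above the steepest terrain angle. Refraction and Earth curvature are ignored for brevity:

    #include <cmath>
    #include <vector>

    struct TerrainSample { double distance; double elevation; }; // metres, along the sun's azimuth

    // True if the sun (elevation angle, radians) clears every terrain sample.
    bool inSunlight(double sunElevationRad, double observerElevation,
                    const std::vector<TerrainSample>& profile) {
        double horizonAngle = 0.0; // flat horizon as the baseline
        for (const auto& s : profile) {
            double angle = std::atan2(s.elevation - observerElevation, s.distance);
            if (angle > horizonAngle) horizonAngle = angle;
        }
        return sunElevationRad > horizonAngle;
    }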
I am not sure about determining whether an AOI is in the sun or shade at a certain time; however, you can set the sun to be on or off in the API by using
GESun.setVisibility
Edit:
Using the GE plugin, create a LookAt with your desired AOI lat/long where the view is directly above, looking straight down. Depending on the size of your actual AOI, I would keep the view as low to the ground as possible.
Then capture a screenshot/image. I do not think this is possible through GE itself (if anyone knows a way, I would like to find out), so maybe use JavaScript to take it - I found this Q on SO that provides some insight.
Take one screenshot with GESun.setVisibility set ON and then another with it OFF.
Compare the two images for darkness/lightness and determine whether your AOI is in the shade or not. You might find it better to surround your AOI with a Polygon of some sort to help your program distinguish it from the rest of the image, depending on the height the LookAt was taken from, and so on.
I do not have any ideas on how to compare the images, but yet another search on SO resulted in this (I would presume finding the values of COLOR_BLACK in PHP ImageMagick) and this (the color-buckets idea).
Depending on your method of choice, it might help to convert your images to black and white before doing the comparison.
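If you go that route, the comparison itself can be as simple as averaging pixel brightness in both captures. A sketch assuming you can get 8-bit grayscale buffers out of the screenshots (the threshold is arbitrary and would need tuning):

    #include <cstddef>
    #include <cstdint>

    // Mean brightness of an 8-bit grayscale image buffer.
    double meanBrightness(const std::uint8_t* pixels, std::size_t count) {
        double sum = 0.0;
        for (std::size_t i = 0; i < count; ++i) sum += pixels[i];
        return count ? sum / count : 0.0;
    }

    // The AOI is likely shaded if enabling the sun darkens it noticeably.
    bool looksShaded(const std::uint8_t* sunOn, const std::uint8_t* sunOff,
                     std::size_t count, double threshold = 20.0) {
        return meanBrightness(sunOff, count) - meanBrightness(sunOn, count) > threshold;
    }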
I am working on a microscope that streams live images via a built-in video camera to a PC, where further image processing can be performed on the streamed image. Any processing done on the streamed image must be done in "real-time" (minimal frames dropped).
We take the average of a series of static images to counter random noise from the camera to improve the output of some of our image processing routines.
My question is: how do I know when the image is no longer static - either the sample under inspection has moved or rotated, or the camera has zoomed in or out - so I can reset the image series used for averaging?
I looked through some of the threads, and some ideas that seemed interesting:
Note: using Windows, C++ and Intel IPP. With IPP the image is a byte array (Ipp8u).
1. Hash the images, and compare the hashes (normal hash or perceptual hash?)
2. Use normalized cross correlation (IPP has many variations - which to use?)
Which of these do you think is suitable for my situation, given the speed requirement?
If your camera doesn't shake you can, as inVader said, subtract images. A sum of the absolute values of all pixels of the difference image is then sometimes enough to tell whether two images are the same or different. However, if your noise, lighting level, etc. vary, this will not give you a good enough S/N ratio.
And in noisy conditions, normal hashes are even more useless.
The best approach would be to identify that some feature of your object has changed, like its boundary (if it's regular) or its mass center (if it's irregular). If you have a boundary position, you'll need to analyze just one line of pixels, perpendicular to that boundary, to tell that the boundary has moved.
The mass-center position may be subject to frequent false negatives, but adding the total mass and/or the moment of inertia may help.
If the camera shakes, you may have to align the images before comparing them (depending on the comparison method and required accuracy, even a single-pixel misalignment might be huge), and that's where cross-correlation helps.
Furthermore, you don't have to analyze every image. You can skip one, and if the next differs, discard both of them. That gives you twice as much time to analyze each image.
And if you are averaging images, you might just define an optimal number of images you need and compare only the first and the last image in the sequence.
So, the simplest thing to try would be to take subsequent images, subtract them from each other and have a look at the difference. Then define some rules, including local and global thresholds for the difference, under which two images are considered equal. Simple subtraction of the bitmap/array data, looking for maxima and calculating the average difference across the whole frame, should be no problem to do in real time; see the sketch below.
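A sketch of that subtraction step over raw byte arrays (the question's Ipp8u buffers are plain bytes, so this needs no IPP calls; the threshold values are illustrative only):

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>

    // Decide whether two same-sized 8-bit frames show the same static scene.
    // maxLocalDiff and maxMeanDiff are tuning knobs; the defaults are arbitrary.
    bool framesMatch(const std::uint8_t* a, const std::uint8_t* b, std::size_t count,
                     int maxLocalDiff = 60, double maxMeanDiff = 4.0) {
        long long sum = 0;
        int maxDiff = 0;
        for (std::size_t i = 0; i < count; ++i) {
            int d = std::abs(static_cast<int>(a[i]) - static_cast<int>(b[i]));
            sum += d;
            if (d > maxDiff) maxDiff = d;
        }
        double mean = static_cast<double>(sum) / static_cast<double>(count);
        return maxDiff <= maxLocalDiff && mean <= maxMeanDiff; // local and global thresholds
    }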
If there are varying light conditions, or something moving in a predictable way (like a door opening and closing), then something more powerful, albeit slower, like Gaussian mixture models for background modeling might be worth looking into. It is quite compute-intensive, but can be parallelized pretty easily.
Motion detection algorithms are what is used for this:
http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms
http://www.codeproject.com/Articles/22243/Real-Time-Object-Tracker-in-C
First of all, I would take a series of images at a slow FPS rate and downsample those images to make them smaller - not too much, but enough to speed up the process.
Now you have several options:
You could compute a sum of absolute differences (SAD) of the two images by subtracting them, and use a threshold to judge whether the image has changed.
If you want to speed it up even further, I would suggest doing a progressive SAD using a small kernel and moving from the top of the image to the bottom. You can evaluate the cumulative amount of difference during the process and stop early once you are satisfied, as in the sketch below.
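A sketch of that progressive variant: accumulate the SAD row by row and bail out as soon as the running total proves the images differ (the budget value is arbitrary and depends on your noise level):

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>

    // Progressive SAD over rows with early exit.
    // Returns true as soon as the accumulated difference exceeds `budget`.
    bool changedEarlyExit(const std::uint8_t* a, const std::uint8_t* b,
                          int width, int height, long long budget) {
        long long sad = 0;
        for (int y = 0; y < height; ++y) {
            const std::uint8_t* ra = a + static_cast<std::size_t>(y) * width;
            const std::uint8_t* rb = b + static_cast<std::size_t>(y) * width;
            for (int x = 0; x < width; ++x)
                sad += std::abs(static_cast<int>(ra[x]) - static_cast<int>(rb[x]));
            if (sad > budget) return true; // images differ: skip the remaining rows
        }
        return false;
    }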
I'm doing some work on tracking moving objects using a ceiling mounted downward facing camera. I've got to the point where I can detect the position of the desired object in each frame.
I'm looking into using a Kalman filter to track the object's position and speed through the scene and I've reached a stumbling block. I've set up my system and have all the required parts of the Kalman filter except the measurement variance.
I want to be able to assign a meaningful variance to each measurement to allow the correction phase to use the new information in a sensible manner. I have several measures assigned to my detected objects which could in theory be useful in determining how accurate the position should be and it seems logical to try and combine them to derive a suitable variance.
Am I approaching this in the right manner and if so, can anyone point me in the right direction to continue?
Any help greatly appreciated.
I think you are right. According to this post:
Sensor fusioning with Kalman filter
determining the variance is 100% experimental. It seems to me you have everything you need to get good estimates of the variance.
Sorry for the late reply. I personally encountered the same problem in a previous project. I found the advice given by Gustaf Hendeby in his Sensor Fusion lecture slides (page 10 of the slides) extremely valuable.
To summarize:
(1) The ratio between your measurement noise and your process noise determines your filter's behavior. A high measurement-noise-to-process-noise ratio makes your filter slower (a low-pass filter), which will usually give smoother tracking; vice versa, if you set your measurement noise low, the filter follows the measurements closely (essentially a high-pass filter), which tends to have more jitter.
(2) There are numerous papers in the literature that discuss how to set these noise models properly. However, a lot of tuning is usually needed, depending on your application. The measurement noise is usually what we can measure/characterize based on the hardware specification. Therefore a common recommendation is to fix R (the measurement noise covariance) and tune Q (the process noise covariance), as the toy example below illustrates.
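As a toy illustration of that tuning, here is a scalar Kalman filter where R is fixed from the sensor spec and Q is the knob you turn; all values are placeholders:

    // Scalar Kalman filter tracking a slowly varying value from noisy measurements.
    struct ScalarKalman {
        double x = 0.0; // state estimate
        double p = 1.0; // estimate variance
        double q;       // process noise covariance (tune this)
        double r;       // measurement noise covariance (fix from the sensor spec)

        ScalarKalman(double q_, double r_) : q(q_), r(r_) {}

        double update(double z) {
            p += q;                 // predict: variance grows by the process noise
            double k = p / (p + r); // gain: a large R gives a small gain and smoother output
            x += k * (z - x);       // correct the estimate toward the measurement
            p *= (1.0 - k);         // shrink the variance after the correction
            return x;
        }
    };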
I am currently working on a data visualization project. My aim is to produce contour lines, in other words iso-lines, from gridded data. The data can be temperature, weather data or any other kind of environmental parameter; the only condition is that it must be regularly spaced.
I searched the internet, however I could not find a good algorithm, pseudo-code or source code for producing contour lines from grids.
Does anybody know of a library, source code or an algorithm for producing contour lines from gridded data?
It would be good if your suggestion had good runtime performance - I don't want to keep my users waiting :)
Edit: thanks for the responses, but isolines have some constraints, such as that they should not intersect, so just generating Bezier curves does not accomplish my goal.
See this question: How to approximate a vector contour from an elevation raster?
It's a near duplicate, but uses quite different terminology. You'll find that cartography and computer graphics solve many of the same problems, but use different terminology for them.
There's some reasonably good contouring available in GNUplot - if you're able to use GPL code, that may help.
If your data is placed at regular intervals, this can be done fairly easily (assuming I understand your problem correctly). First you need to determine at what interval you want your contours. Next, create the grid you are going to use to store the contour information (I'm assuming just simple on/off or elevation-at-this-contour-level data), which should be one interval smaller than the source data.
Now the trick is to offset the two grids by half an interval (this won't actually show up in code as such, but it's the concept I'm dealing with here) and compare the four data points surrounding the current point in the contour grid you are calculating. If any of the four points fall in a different interval range, then that 'pixel' in the contour grid should be set to true (or to the value of the contour range being crossed). A sketch of this follows below.
With this method, there will be a problem when the interval is too fine, which will cause several contours to overlap each other.
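Here is a minimal sketch of that neighbor comparison: each cell of the offset grid is marked when its four corner samples span more than one contour bin. The interval size is a parameter, and the flat on/off output is the simple variant described above:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Mark cells of the (rows-1) x (cols-1) offset grid that a contour crosses.
    std::vector<std::vector<bool>> contourCells(
        const std::vector<std::vector<double>>& data, double interval) {
        std::size_t rows = data.size(), cols = data[0].size();
        std::vector<std::vector<bool>> cells(rows - 1, std::vector<bool>(cols - 1, false));

        for (std::size_t i = 0; i + 1 < rows; ++i)
            for (std::size_t j = 0; j + 1 < cols; ++j) {
                // Bin index of each of the four surrounding data points.
                int b0 = static_cast<int>(std::floor(data[i][j] / interval));
                int b1 = static_cast<int>(std::floor(data[i][j + 1] / interval));
                int b2 = static_cast<int>(std::floor(data[i + 1][j] / interval));
                int b3 = static_cast<int>(std::floor(data[i + 1][j + 1] / interval));
                cells[i][j] = !(b0 == b1 && b1 == b2 && b2 == b3); // a contour passes here
            }
        return cells;
    }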
As the link from Paul Tomblin suggests, Bezier curves (which are a special case of B-splines) are a ripe solution for your problem. If runtime performance is an issue, Bezier curves have the added benefit of being constructable via the very fast de Casteljau algorithm, instead of drawing them according to the parametric equations. On the off chance you're working with DirectX, it has a library function for de Casteljau, but it should not be challenging to brew one yourself using the 1001 web pages that describe it.
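For reference, de Casteljau is only a few lines. A sketch evaluating a 2D Bezier curve at parameter t by repeatedly interpolating between adjacent control points until one point remains:

    #include <cstddef>
    #include <vector>

    struct Point { double x, y; };

    // Evaluate a Bezier curve at t in [0, 1] via de Casteljau's algorithm.
    Point deCasteljau(std::vector<Point> pts, double t) {
        for (std::size_t n = pts.size(); n > 1; --n)
            for (std::size_t i = 0; i + 1 < n; ++i) {
                pts[i].x += t * (pts[i + 1].x - pts[i].x); // lerp toward the next point
                pts[i].y += t * (pts[i + 1].y - pts[i].y);
            }
        return pts[0];
    }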