How does the Vive headset differentiate the lasers coming from two base stations? - computer-vision

I am building a tracking system using HTC Vive lighthouse base stations (1.0 version, cable synchronized). I have a custom-designed photosensor that delivers pulses (photo-voltage captured from the two base stations); these pulses are timed and digitized by my MCU and uploaded to a server for localization.
From what I understand, there are two kinds of pulses: a) the pulse with a shorter duration is the IR flash for synchronization, and b) the pulse with a longer duration is the laser sweep. I then take the time difference (delta_t) between the long pulse and the short pulse to perform my localization.
Here is a sample figure that I have been looking at, from Kevin Balke's lighthouse tracking project on GitHub:
So in the case of 2 base stations, any sensing device would see 4 kinds of delta_t, namely:
from the first base-station: delta_t-x-axis
from the first base-station: delta_t-y-axis
from the second base-station: delta_t-x-axis
from the second base-station: delta_t-y-axis
I wonder how the headset is able to figure out which delta_t comes from which base station and from which axis. Is there additional information encoded in the laser/IR beams for the photosensor? If the order isn't identified correctly, won't the location estimate of the sensor diode necessarily be incorrect? There is no guarantee that the sensor board always captures the first flash from a given base station.
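To make the bookkeeping concrete, here is a minimal sketch of the pulse handling I have in mind; the Pulse struct, the width thresholds, and the exact rotor timing are my own illustrative assumptions, not confirmed lighthouse constants:

#include <cstdint>

// Hypothetical pulse record, as timed and digitized by the MCU.
struct Pulse
{
    uint64_t start_us;  // timestamp of the rising edge
    uint32_t width_us;  // pulse duration
};

// Classify a pulse by its width; these thresholds are placeholders that
// would have to be tuned to the real sensor and base stations.
bool isSyncPulse(const Pulse& p)  { return p.width_us > 50; }
bool isSweepPulse(const Pulse& p) { return p.width_us < 30; }

// Assuming a 60 Hz rotor, one revolution takes ~16667 us, so a half-turn
// sweep spans ~8333 us, and delta_t maps linearly to a sweep angle.
double sweepAngleRad(const Pulse& sync, const Pulse& sweep)
{
    double delta_t_us = (double)(sweep.start_us - sync.start_us);
    return delta_t_us / 8333.33 * 3.14159265358979;  // 0..pi across the sweep
}

Even with this in place, the open question remains how to know which base station and which axis a given sync/sweep pair belongs to.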

Related

Basic example of how to do numerical integration in C++

I think most people know how to do numerical derivation in computer programming (as limit --> 0; read: "as the limit approaches zero").
// Example code for derivation of position over time to obtain velocity
float currPosition = getNewPosition();
float prevPosition;
float currTime = getTimestamp();
float lastTime;
float velocity;
while (true)
{
    prevPosition = currPosition;
    currPosition = getNewPosition();
    lastTime = currTime;
    currTime = getTimestamp();
    // Numerical derivation of position over time to obtain velocity
    velocity = (currPosition - prevPosition)/(currTime - lastTime);
}
// since the while loop runs at the shortest period of time, we've already
// achieved limit --> 0;
This is the basic building block for most derivation programming.
How can I do this with integrals? Do I use a for loop and add or what?
Numerical derivation and integration in code for physics, mapping, robotics, gaming, dead-reckoning, and controls
Pay attention to where I use the words "estimate" vs "measurement" below. The difference is important.
Measurements are direct readings from a sensor.
Ex: a GPS measures position (meters) directly, and a speedometer measures speed (m/s) directly.
Estimates are calculated projections you can obtain through integrating and derivating (deriving) measured values.
Ex: you can derive position measurements (m) with respect to time to obtain speed or velocity (m/s) estimates, and you can integrate speed or velocity measurements (m/s) with respect to time to obtain position or displacement (m) estimates.
Wait, aren't all "measurements" actually just "estimates" at some fundamental level?
Yeah--pretty much! But, they are not necessarily produced through derivations or integrations with respect to time, so that is a bit different.
Also note that technically, virtually nothing can truly be measured directly. All sensors get reduced down to a voltage or a current, and guess how you measure a current?--a voltage!--either as a voltage drop across a tiny resistance, or as a voltage induced through an inductive coil due to current flow. So, everything boils down to a voltage. Even devices which "measure speed directly" may be using pressure (pitot-static tube on airplane), doppler/phase shift (radar or sonar), or looking at distance over time and then outputting speed. Fluid speed, or speed with respect to fluid such as air or water, can even be measured via a hot wire anemometer by measuring the current required to keep a hot wire at a fixed temperature, or by measuring the temperature change of the hot wire at a fixed current. And how is that temperature measured? Temperature is just a thermo-electrically-generated voltage, or a voltage drop across a diode or other resistance.
As you can see, all of these "measurements" and "estimates", at the low level, are intertwined. However, if a given device has been produced, tested, and calibrated to output a given "measurement", then you can accept it as a "source of truth" for all practical purposes and call it a "measurement". Then, anything you derive from that measurement, with respect to time or some other variable, you can consider an "estimate". The irony of this is that if you calibrate your device and output derived or integrated estimates, someone else could then consider your output "estimates" as their input "measurements" in their system, in a sort of never-ending chain down the line. That's being pedantic, however. Let's just go with the simplified definitions I have above for the time being.
The following table is true, for example. Read the 2nd line, for instance, as: "If you take the derivative of a velocity measurement with respect to time, you get an acceleration estimate, and if you take its integral, you get a position estimate."
Derivatives and integrals of position
Measurement, y          Derivative             Integral
                        Estimate (dy/dt)       Estimate (dy*dt)
---------------------   --------------------   --------------------
position     [m]        velocity     [m/s]     -            [m*s]
velocity     [m/s]      acceleration [m/s^2]   position     [m]
acceleration [m/s^2]    jerk         [m/s^3]   velocity     [m/s]
jerk         [m/s^3]    snap         [m/s^4]   acceleration [m/s^2]
snap         [m/s^4]    crackle      [m/s^5]   jerk         [m/s^3]
crackle      [m/s^5]    pop          [m/s^6]   snap         [m/s^4]
pop          [m/s^6]    -            [m/s^7]   crackle      [m/s^5]
For jerk, snap or jounce, crackle, and pop, see: https://en.wikipedia.org/wiki/Fourth,_fifth,_and_sixth_derivatives_of_position.
1. numerical derivation
Remember, derivation obtains the slope of the line, dy/dx, on an x-y plot. The general form is (y_new - y_old)/(x_new - x_old).
In order to obtain a velocity estimate from a system where you are obtaining repeated position measurements (ex: you are taking GPS readings periodically), you must numerically derivate your position measurements over time. Your y-axis is position, and your x-axis is time, so dy/dx is simply (position_new - position_old)/(time_new - time_old). A units check shows this might be meters/sec, which is indeed a unit for velocity.
In code, that would look like this, for a system where you're only measuring position in 1-dimension:
double position_new_m = getPosition(); // m = meters
double position_old_m;
// `getNanoseconds()` should return a `uint64_t` timestamp in nanoseconds, for
// instance
double time_new_sec = NS_TO_SEC((double)getNanoseconds());
double time_old_sec;

while (true)
{
    position_old_m = position_new_m;
    position_new_m = getPosition();
    time_old_sec = time_new_sec;
    time_new_sec = NS_TO_SEC((double)getNanoseconds());

    // Numerical derivation of position measurements over time to obtain
    // velocity in meters per second (mps)
    double velocity_mps =
        (position_new_m - position_old_m)/(time_new_sec - time_old_sec);
}
2. numerical integration
Numerical integration obtains the area under the curve, dy*dx, on an x-y plot. One of the best ways to do this is called trapezoidal integration, where you take the average dy reading and multiply by dx. This would look like this: (y_old + y_new)/2 * (x_new - x_old).
In order to obtain a position estimate from a system where you are obtaining repeated velocity measurements (ex: you are trying to estimate distance traveled while only reading the speedometer on your car), you must numerically integrate your velocity measurements over time. Your y-axis is velocity, and your x-axis is time, so (y_old + y_new)/2 * (x_new - x_old) is simply (velocity_old + velocity_new)/2 * (time_new - time_old). A units check shows this might be meters/sec * sec = meters, which is indeed a unit for distance.
In code, that would look like this. Notice that the numerical integration obtains the distance traveled over that one tiny time interval. To obtain an estimate of the total distance traveled, you must sum all of the individual estimates of distance traveled.
double velocity_new_mps = getVelocity(); // mps = meters per second
double velocity_old_mps;
// `getNanoseconds()` should return a `uint64_t` timestamp in nanoseconds, for
// instance
double time_new_sec = NS_TO_SEC((double)getNanoseconds());
double time_old_sec;
// Total meters traveled
double distance_traveled_m_total = 0;

while (true)
{
    velocity_old_mps = velocity_new_mps;
    velocity_new_mps = getVelocity();
    time_old_sec = time_new_sec;
    time_new_sec = NS_TO_SEC((double)getNanoseconds());

    // Numerical integration of velocity measurements over time to obtain
    // a distance estimate (in meters) over this time interval
    double distance_traveled_m =
        (velocity_old_mps + velocity_new_mps)/2 * (time_new_sec - time_old_sec);
    distance_traveled_m_total += distance_traveled_m;
}
See also: https://en.wikipedia.org/wiki/Numerical_integration.
Going further:
high-resolution timestamps
To do the above, you'll need a good way to obtain timestamps. Here are various techniques I use:
In C++, use my uint64_t nanos() function here.
If using Linux in C or C++, use my uint64_t nanos() function which uses clock_gettime() here. Even better, I have wrapped it up into a nice timinglib library for Linux, in my eRCaGuy_hello_world repo here:
timinglib.h
timinglib.c
Here is the NS_TO_SEC() macro from timinglib.h:
#define NS_PER_SEC (1000000000L)
/// Convert nanoseconds to seconds
#define NS_TO_SEC(ns) ((ns)/NS_PER_SEC)
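Note that NS_TO_SEC() divides whatever you hand it, so cast an integer timestamp to double first, or the fractional seconds get truncated by integer division. A quick usage sketch:

#include <cstdint>
#include <cstdio>

#define NS_PER_SEC (1000000000L)
/// Convert nanoseconds to seconds
#define NS_TO_SEC(ns) ((ns)/NS_PER_SEC)

int main()
{
    uint64_t t_ns = 1234567890ULL;  // pretend this came from `getNanoseconds()`
    // Cast to double *before* the macro so the division keeps the fraction;
    // NS_TO_SEC(t_ns) on the raw integer would truncate to 1.
    double t_sec = NS_TO_SEC((double)t_ns);
    printf("%.9f sec\n", t_sec);  // prints 1.234567890 sec
    return 0;
}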
If using a microcontroller, you'll need to read an incrementing periodic counter from a timer or counter register which you have configured to increment at a steady, fixed rate. Ex: on Arduino: use micros() to obtain a microsecond timestamp with 4-us resolution (by default, it can be changed). On STM32 or others, you'll need to configure your own timer/counter.
use high data sample rates
Taking data samples as fast as possible in a sample loop is a good idea, because then you can average many samples to achieve:
Reduced noise: averaging many raw samples reduces noise from the sensor.
Higher resolution: averaging many raw samples actually adds bits of resolution to your measurement system. This is known as oversampling.
I write about it on my personal website here: ElectricRCAircraftGuy.com: Using the Arduino Uno’s built-in 10-bit to 16+-bit ADC (Analog to Digital Converter).
And Atmel/Microchip wrote about it in their white-paper here: Application Note AN8003: AVR121: Enhancing ADC resolution by oversampling.
Taking 4^n samples increases your sample resolution by n bits of resolution. For example:
4^0 = 1 sample at 10-bits resolution --> 1 10-bit sample
4^1 = 4 samples at 10-bits resolution --> 1 11-bit sample
4^2 = 16 samples at 10-bits resolution --> 1 12-bit sample
4^3 = 64 samples at 10-bits resolution --> 1 13-bit sample
4^4 = 256 samples at 10-bits resolution --> 1 14-bit sample
4^5 = 1024 samples at 10-bits resolution --> 1 15-bit sample
4^6 = 4096 samples at 10-bits resolution --> 1 16-bit sample
So, sampling at high sample rates is good. You can do basic filtering on these samples.
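As a concrete sketch of the oversampling technique from the AVR121 note above: to gain n extra bits, accumulate 4^n raw readings and right-shift the sum by n. Here adcRead10bit() is a hypothetical stand-in for your platform's ADC read (ex: analogRead() on Arduino):

#include <stdint.h>

uint16_t adcRead10bit();  // hypothetical raw 10-bit ADC read

// Oversample and decimate per AVR121: take 4^n samples and right-shift the
// sum by n bits to produce one (10 + n)-bit result.
uint32_t adcReadOversampled(uint8_t extra_bits)
{
    uint32_t num_samples = 1UL << (2*extra_bits);  // 4^extra_bits
    uint32_t sum = 0;
    for (uint32_t i = 0; i < num_samples; i++)
    {
        sum += adcRead10bit();
    }
    return sum >> extra_bits;  // scale to 10 + extra_bits bits of resolution
}

Ex: adcReadOversampled(3) takes 64 raw 10-bit samples and returns one 13-bit sample.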
Doing numerical derivation on high-sample-rate raw samples, however, ends up derivating a lot of noise, which produces noisy derivative estimates. This isn't great. It's better to do the derivation on filtered samples: ex: the average of 100 or 1000 rapid samples. Doing numerical integration on high-sample-rate raw samples, on the other hand, is fine, because as Edgar Bonet says, "when integrating, the more samples you get, the better the noise averages out." This goes along with my notes above.
Just using the filtered samples for both numerical integration and numerical derivation, however, is just fine.
use reasonable control loop rates
Control loop rates should not be too fast. The higher the sample rates, the better, because you can filter them to reduce noise. The higher the control loop rate, however, not necessarily the better, because there is a sweet spot in control loop rates. If your control loop rate is too slow, the system will have a slow frequency response and won't respond to the environment fast enough, and if the control loop rate is too fast, it ends up just responding to sample noise instead of to real changes in the measured data.
Therefore, even if you have a sample rate of 1 kHz, for instance, to oversample and filter the data, control loops that fast are not needed, as the noise from readings of real sensors over very small time intervals will be too large. Use a control loop anywhere from 10 Hz ~ 100 Hz, perhaps up to 400+ Hz for simple systems with clean data. In some scenarios you can go faster, but 50 Hz is very common in control systems. The more-complicated the system and/or the more-noisy the sensor measurements, generally, the slower the control loop must be, down to about 1~10 Hz or so. Self-driving cars, for instance, which are very complicated, frequently operate at control loops of only 10 Hz.
loop timing and multi-tasking
In order to accomplish the above, independent measurement and filtering loops, and control loops, you'll need a means of performing precise and efficient loop timing and multi-tasking.
If needing to do precise, repetitive loops in Linux in C or C++, use the sleep_until_ns() function from my timinglib above. I have a demo of my sleep_until_us() function in-use in Linux to obtain repetitive loops as fast as 1 kHz to 100 kHz here.
If using bare-metal (no operating system) on a microcontroller as your compute platform, use timestamp-based cooperative multitasking to perform your control loop and other loops such as measurements loops, as required. See my detailed answer here: How to do high-resolution, timestamp-based, non-blocking, single-threaded cooperative multi-tasking.
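The core of that timestamp-based pattern is small enough to sketch here, assuming an Arduino-style micros() counter and two hypothetical tasks:

#include <stdint.h>

uint32_t micros();    // assumed: free-running microsecond counter
void doControlLoop(); // hypothetical 50 Hz task
void doMeasurement(); // hypothetical 1 kHz task

void loop()  // called repeatedly, as fast as possible
{
    const uint32_t CONTROL_PERIOD_US = 20000;  // 50 Hz
    const uint32_t MEASURE_PERIOD_US = 1000;   // 1 kHz
    static uint32_t t_control_us = micros();
    static uint32_t t_measure_us = micros();

    uint32_t now_us = micros();
    // Unsigned subtraction handles counter rollover correctly.
    if (now_us - t_control_us >= CONTROL_PERIOD_US)
    {
        t_control_us += CONTROL_PERIOD_US;  // += avoids accumulating drift
        doControlLoop();
    }
    if (now_us - t_measure_us >= MEASURE_PERIOD_US)
    {
        t_measure_us += MEASURE_PERIOD_US;
        doMeasurement();
    }
}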
full, numerical integration and multi-tasking example
I have an in-depth example of both numerical integration and cooperative multitasking on a bare-metal system using my CREATE_TASK_TIMER() macro in my Full coulomb counter example in code. That's a great demo to study, in my opinion.
Kalman filters
For robust measurements, you'll probably need a Kalman filter, perhaps an "unscented Kalman Filter," or UKF, because apparently they are "unscented" because they "don't stink."
See also
My answer on Physics-based controls, and control systems: the many layers of control

SFML and PN noise (8-bit emulation)

I have had the absurd idea to write an emulator for the Commodore VIC-20, my first computer.
Everything went quite well until sound emulation time came! The VIC-20 has 3 voices (square waveform) and a noise generator. Searching the net, I found that it is a PN generator (in some places called "white" noise).
I know that white noise is not frequency driven, but you nevertheless put a specific frequency value into the noise register (the POKE 36877,X command). The formula is:
freq = cpu_speed/(127 - x)
(more details in the VIC-20 Programmer's Guide, especially the MOS6560/6561 VIC-I chip section), where x is the 7-bit value of the noise register (bit 8 is the noise on/off switch).
I have a pre-generated buffer of 1024 numbers (the pseudo-random sequence). The question is: how can I use the frequency (freq) to create a sample buffer to pass to the sound card (in this case to sf::SoundBuffer, which accepts sf::Int16, i.e. signed 16-bit, values)?
I guess most of you had a Commodore VIC-20 or C64 at home and played with the old POKE instruction... Can any of you help me understand this step?
EDIT:
Searching on the internet, I found the C64 Programmer's Guide, which shows the waveform graph of its noise generator. Can anyone recognize this kind of wave/perturbation? The waveform seems to be periodic (with a period related to freq), but how does one generate such a wave?
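In the meantime, here is one plausible approach I am considering (my own assumption, not the confirmed VIC-I hardware algorithm): clock through the pre-generated PN buffer at rate freq with a phase accumulator, holding each value for sample_rate/freq output samples:

#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch: render `num_samples` 16-bit samples from a PN buffer
// stepped at `freq` Hz, for an output stream at `sample_rate` Hz.
std::vector<int16_t> renderNoise(const std::vector<int16_t>& pn,  // 1024 entries
                                 double freq,        // cpu_speed/(127 - x)
                                 double sample_rate, // e.g. 44100.0
                                 std::size_t num_samples)
{
    std::vector<int16_t> out(num_samples);
    double phase = 0.0;
    double step = freq / sample_rate;  // PN steps per output sample
    std::size_t idx = 0;
    for (std::size_t i = 0; i < num_samples; i++)
    {
        out[i] = pn[idx];  // hold the current PN value
        phase += step;
        while (phase >= 1.0)  // advance the PN sequence when due
        {
            phase -= 1.0;
            idx = (idx + 1) % pn.size();
        }
    }
    return out;
}

The resulting buffer could then be handed to sf::SoundBuffer::loadFromSamples(). Whether this matches the periodic waveform shown in the guide is exactly what I am unsure about.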

Digital signal decimation using gnuradio lib

I am writing an application where I must process a digital signal, stored as an array of doubles. I must decimate the signal, filter it, etc. I found the gnuradio project, which has functions for this problem, but I can't figure out how to use them correctly.
I need to decimate the signal (for example, from 250 Hz to 200 Hz). The function should be similar to the resample function in Matlab. I found that the classes for it are:
rational_resampler_base_fff Class source
fir_filter_fff Class source
...
Unfortunately, I can't figure out how to use them. I have gnuradio and its shared library installed.
Thanks for any advice.
EDIT in reply to @jcoppens:
Thank you very much for your help. But I must process the signal in my own code. I found the classes in gnuradio which can solve my problem, but I need help setting them up.
The function I must configure is:
low_pass(double gain, double sampling_freq, double cutoff_freq, double transition_width, window, beta)
where:
use "window method" to design a low-pass FIR filter
gain: overall gain of filter (typically 1.0)
sampling_freq: sampling freq (Hz)
cutoff_freq: center of transition band (Hz)
transition_width: width of transition band (Hz).
The normalized width of the transition band is what sets the number of taps required. Narrow –> more taps
window_type: What kind of window to use. Determines maximum attenuation and passband ripple.
beta: parameter for Kaiser window
I know I must use window = KAISER and beta = 5, but I'm not sure about the rest.
The functions I use are low_pass and pfb_arb_resampler_fff::filter.
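For reference, here is roughly how building the taps might look against a GNU Radio 3.7-era C++ API; this is a sketch under that assumption, and names/enums move between releases, so check your installed headers:

#include <gnuradio/filter/firdes.h>
#include <vector>

// Taps for the low-pass stage of a 250 Hz -> 200 Hz rational resampler
// (interpolate by 4, decimate by 5): the filter runs at the interpolated
// 1000 Hz rate, and the cutoff must stay below the 100 Hz output Nyquist.
std::vector<float> make_taps()
{
    double gain             = 1.0;
    double sampling_freq    = 1000.0;  // rate after interpolating 250 Hz by 4
    double cutoff_freq      = 90.0;    // below the 100 Hz output Nyquist
    double transition_width = 20.0;    // narrower -> more taps
    return gr::filter::firdes::low_pass(gain, sampling_freq, cutoff_freq,
                                        transition_width,
                                        gr::filter::firdes::WIN_KAISER,
                                        5.0 /* Kaiser beta */);
}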
UPDATE:
I solved the resampling using libsamplerate
I need to decimate the signal (for example, from 250 Hz to 200 Hz)
WARNING: I expressed the original introductory paragraph incorrectly - my apologies.
As 200 Hz is not an integer fraction of 250 Hz, you have to do some tricks to convert 250 Hz into 200 Hz. The two rates share a common base of 50 Hz, so the conversion ratio is 4/5: inserting 3 interpolated samples between each pair of 250 Hz samples raises the rate to 1000 Hz, and decimating by a factor of 5 then brings it down to 200 Hz.
For this you need the "Rational Resampler" block, where you can define the interpolation and decimation factors.
This means you would have to do something similar if you use the library. Maybe it's even simpler to do it without the library: interpolate linearly between the 250 Hz samples (i.e. insert 3 extra samples between each pair), then decimate by keeping every 5th sample.
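A library-free sketch of exactly that idea, with the caveat that linear interpolation and no extra anti-alias filtering are only adequate when the input has no content above the 100 Hz output Nyquist:

#include <cstddef>
#include <vector>

// Naive 250 Hz -> 200 Hz rational resampler: linearly interpolate up by 4
// (to 1000 Hz), then keep every 5th sample (down to 200 Hz).
std::vector<double> resample_250_to_200(const std::vector<double>& in)
{
    std::vector<double> up;  // 1000 Hz intermediate signal
    for (std::size_t i = 0; i + 1 < in.size(); i++)
    {
        for (int k = 0; k < 4; k++)  // original + 3 interpolated samples
        {
            double frac = k / 4.0;
            up.push_back(in[i]*(1.0 - frac) + in[i + 1]*frac);
        }
    }
    up.push_back(in.back());

    std::vector<double> out;  // keep every 5th sample -> 200 Hz
    for (std::size_t i = 0; i < up.size(); i += 5)
        out.push_back(up[i]);
    return out;
}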
Note: There is a Signal Processing forum on stackexchange - maybe this question might fall in that category...
More information: If you only have to resample your input data, and you do not need the actual gnuradio program, then have a look at this document:
https://ccrma.stanford.edu/~jos/resample/resample.pdf
There are several links to other documents, and a link to libresample, libresample4, and others, which may be of use to you. Another, very interesting, page is:
http://www.dspguru.com/dsp/faqs/multirate/resampling
Finally, from the same source as the pdf above, check their snd program. It may solve your problem without writing any software. It can load floating point samples, resample, and save again:
http://ccrma.stanford.edu/planetccrma/software/soundapps.html#SECTION00062100000000000000
EDIT: And yet another solution - maybe the simplest of all: Use Matlab (or the free Octave version):
pkg load signal
t = linspace(0, 10*pi, 50); % Generate a timeline - 5 cycles
s = sin(t); % and the sines -> 250 Hz
tr = resample(s, 4, 5);              % Convert to 200 Hz (4/5 of the rate)
tt = linspace(0, 10*pi, length(tr)); % Matching timeline for the resampled signal
plot(t, s, 'r')                      % Plot 250 Hz in red
hold on
plot(tt, tr)                         % and resampled in blue
This will give you a plot of the original 250 Hz signal in red with the 200 Hz resampled signal overlaid in blue.

Runtime Sound Generation in C++ on Windows

How might one generate audio at runtime using C++? I'm just looking for a starting point. Someone on a forum suggested I try to make a program play a square wave of a given frequency and amplitude.
I've heard that modern computers encode audio using PCM samples: at a given rate (e.g. 48 kHz), the amplitude of the sound is recorded at a given resolution (e.g. 16 bits). If I generate such a sample, how do I get my speakers to play it? I'm currently using Windows. I'd prefer to avoid any additional libraries if at all possible, but I'd settle for a very light one.
Here is my attempt to generate a square wave sample using this principle:
signed short* Generate_Square_Wave(
    signed short a_amplitude ,
    signed short a_frequency ,
    signed short a_sample_rate )
{
    signed short* sample = new signed short[a_sample_rate];
    for( signed short c = 0; c == a_sample_rate; c++ )
    {
        if( c % a_frequency < a_frequency / 2 )
            sample[c] = a_amplitude;
        else
            sample[c] = -a_amplitude;
    }
    return sample;
}
Am I doing this correctly? If so, what do I do with the generated sample to get my speakers to play it?
Your loop condition has to be c < a_sample_rate: as written, c == a_sample_rate is false on the first check, so the loop body never runs at all (and c <= would overrun the buffer).
To output the sound you call waveOutOpen and the other waveOut... functions. They are all listed here:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd743834(v=vs.85).aspx
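A minimal sketch of that sequence, playing a mono 16-bit buffer synchronously through the default output device (most error handling omitted for brevity):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

// Play `numSamples` 16-bit mono PCM samples at `sampleRate` Hz and block
// until playback finishes.
void playPcm(short* samples, int numSamples, int sampleRate)
{
    WAVEFORMATEX fmt = {0};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 1;
    fmt.nSamplesPerSec  = sampleRate;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEOUT hwo;
    if (waveOutOpen(&hwo, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return;

    WAVEHDR hdr = {0};
    hdr.lpData         = (LPSTR)samples;
    hdr.dwBufferLength = numSamples * sizeof(short);
    waveOutPrepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutWrite(hwo, &hdr, sizeof(hdr));

    while (!(hdr.dwFlags & WHDR_DONE))  // crude busy-wait for completion
        Sleep(10);

    waveOutUnprepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutClose(hwo);
}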
The code you are using generates a wave that is truly square (a binary kind of square), in short the type of waveform that does not exist in real life. In reality, most (pretty sure all) of the sounds you hear are a combination of sine waves at different frequencies.
Because your samples are created the way they are, they will produce aliasing, where a higher frequency masquerades as a lower frequency, causing audio artefacts. To demonstrate this to yourself, write a little program which sweeps the frequency of your code from 20 Hz to 20,000 Hz. You will hear that the sound does not go up smoothly as it rises in frequency; you will hear artefacts.
Wikipedia has an excellent article on square waves: https://en.m.wikipedia.org/wiki/Square_wave
One way to generate a square wave is to perform an inverse Fast Fourier Transform, which transforms a series of frequency measurements into a series of time-based samples. Generating a square wave is then a matter of supplying the routine with the measurements of the sine waves at different frequencies that make up a square wave; the output is a buffer with a single cycle of the waveform.
Generating audio waves is computationally expensive, so what is often done is to generate arrays of audio samples once and play them back at varying speeds to produce different frequencies. This is called wavetable synthesis.
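As a sketch combining the two ideas, here is one way to build a single band-limited square-wave cycle by summing odd sine harmonics (the square wave's Fourier series), which could then serve as a wavetable:

#include <cmath>
#include <vector>

// Build one cycle of a band-limited square wave: square(x) is proportional
// to the sum over odd k of sin(k*x)/k. Choose numHarmonics so the highest
// harmonic of your playback frequency stays below Nyquist, and keep
// `amplitude` somewhat below 32767 to leave headroom for Gibbs overshoot.
std::vector<short> makeSquareTable(int tableSize, int numHarmonics, short amplitude)
{
    const double PI = 3.14159265358979323846;
    std::vector<short> table(tableSize);
    for (int i = 0; i < tableSize; i++)
    {
        double phase = 2.0 * PI * i / tableSize;
        double sum = 0.0;
        for (int k = 1; k <= 2*numHarmonics - 1; k += 2)
            sum += std::sin(k * phase) / k;
        table[i] = (short)(amplitude * (4.0 / PI) * sum);
    }
    return table;
}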
Have a look at the following link:
https://www.earlevel.com/main/2012/05/04/a-wavetable-oscillator%E2%80%94part-1/
And some more about band limiting a signal and why it’s necessary:
https://dsp.stackexchange.com/questions/22652/why-band-limit-a-signal

Picture entropy calculation

I've run into a nasty problem with my recorder. Some people are still using it with analog tuners, and analog tuners have a tendency to spit out 'snow' if there is no signal present.
The problem is that when noise is fed into the encoder, it goes completely crazy: it first consumes all the CPU, then ultimately freezes. Since the main point of the recorder is to stay up and running no matter what, I have to figure out how to proceed, so the encoder won't be exposed to data it can't handle.
So, the idea is to create an 'entropy detector': a simple and small routine that will go through the frame buffer data and calculate an entropy index, i.e. how random the data in the picture actually is.
The result of the routine would be a number that is 0 for a completely black picture and 1 for a completely random picture (snow, that is).
The routine itself should be forward-scanning only, with a few local variables that fit into registers nicely.
I could use zlib or 7z api for such task, but I would really want to cook something on my own.
Any ideas?
PNG works this way (approximately): for each pixel, replace its value with the value it had minus the value of the pixel to its left. Do this from right to left, so you never overwrite a value you still need.
Then you can calculate the entropy (bits per character) by making a table of how often each value now appears, turning those absolute counts into relative frequencies p, and summing -p*log2(p) over all elements.
Oh, and you have to do this for each color channel (r, g, b) separately.
For the result, take the average of the bits per character across the channels and divide it by 8 (assuming you have 8 bits per color), so it falls in the 0 to 1 range.
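A sketch of that recipe in C++, assuming a packed RGB24 frame buffer; for brevity it uses one shared histogram instead of strictly per-channel tables, which shifts the index slightly but preserves the black-is-0, snow-is-near-1 behavior:

#include <cmath>
#include <cstddef>
#include <cstdint>

// Entropy index in [0, 1]: horizontal pixel deltas, then Shannon entropy of
// the delta histogram, normalized by the 8-bit maximum of 8 bits/symbol.
double frameEntropyIndex(const uint8_t* frame, int width, int height)
{
    uint32_t hist[256] = {0};
    std::size_t count = 0;
    for (int y = 0; y < height; y++)
    {
        const uint8_t* row = frame + (std::size_t)y * width * 3;  // RGB24 row
        for (int x = width - 1; x > 0; x--)  // right to left, as above
        {
            for (int c = 0; c < 3; c++)
            {
                uint8_t delta = row[x*3 + c] - row[(x - 1)*3 + c];  // wraps mod 256
                hist[delta]++;
                count++;
            }
        }
    }
    double bits = 0.0;  // Shannon entropy: -sum p*log2(p)
    for (int v = 0; v < 256; v++)
    {
        if (hist[v] == 0) continue;
        double p = (double)hist[v] / count;
        bits -= p * std::log2(p);
    }
    return bits / 8.0;  // 8 bits is the maximum entropy of an 8-bit symbol
}

This is forward-scanning, needs no allocation beyond the 256-entry histogram, and never modifies the frame buffer.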