Google Tango - Fetch exposure time from TangoImageBuffer? - c++

I'm trying to read the exposure time from the TangoImageBuffer struct, and I always see garbage values in it. It starts at 0 and randomly changes to huge positive numbers, and sometimes to negative numbers.
I basically cast the exposure_duration_ns value and print it with %lld. Also, when I try to convert the huge numbers to seconds, the result doesn't make sense, since it ends up being more than 100 seconds.
The exposure_duration_ns field is an int64_t. Am I casting it wrong, or is there a different way to fetch it?
MVCE: I'm using the point-to-point sample from the Tango C examples available on GitHub. Here's the link to it:
https://github.com/googlesamples/tango-examples-c/tree/master/cpp_point_to_point_example
I just changed the way in which the OnFrameAvailable callback is handled and I have attached it below.
void PointToPointApplication::OnFrameAvailable(const TangoImageBuffer* buffer) {
  TangoSupport_updateImageBuffer(image_buffer_manager_, buffer);
  int64_t exposure = 0;
  double timestamp = 0;
  TangoImageBuffer* currentBuffer = nullptr;
  {
    std::lock_guard<std::mutex> lock(_tangobufferLock);
    TangoSupport_getLatestImageBuffer(image_buffer_manager_, &currentBuffer);
    if (currentBuffer == nullptr)
    {
      LOGE("Color Buffer manager retrieval issue\n");
      return;
    }
    else
    {
      exposure = currentBuffer->exposure_duration_ns;
      timestamp = currentBuffer->timestamp;
    }
  }
}
I print the exposure value every frame, and among all the TangoImageBuffer attributes, only the exposure and the frame number hold garbage values; all other values are valid.
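As a side note on the printing itself: a mismatched printf format specifier for a 64-bit value is a common source of apparently random output, so it may be worth ruling that out. A minimal sketch of printing the field portably (the TangoImageBuffer type is the one from the code above; PRId64 comes from <cinttypes>):
#include <cinttypes> // PRId64
#include <cstdio>

// Hypothetical helper: print the exposure carried by a TangoImageBuffer.
void printExposure(const TangoImageBuffer* buffer) {
  int64_t exposure = buffer->exposure_duration_ns;
  // Portable: PRId64 expands to the correct specifier for int64_t on this platform.
  std::printf("exposure = %" PRId64 " ns (%.3f ms)\n",
              exposure, static_cast<double>(exposure) / 1e6);
  // Equivalent alternative: cast explicitly to long long and use %lld.
  std::printf("exposure = %lld ns\n", static_cast<long long>(exposure));
}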

Determining if a 16 bit binary number is negative or positive

I'm creating a library for a temperature sensor that returns a 16-bit binary value. I'm trying to find the best way to check whether the returned value is negative or positive. I'm curious whether I can check if the most significant bit is a 1 or a 0, whether that would be the best way to go about it, and how to implement it successfully.
I know that I can convert it to decimal and check it that way, but I was just curious whether there was an easier way. I've seen it implemented with shifting values, but I don't fully understand that method. (I'm super new to C++.)
float TMP117::readTempC(void)
{
    int16_t digitalTemp;                     // Temperature stored in the TMP117 register
    digitalTemp = readRegister(TEMP_RESULT); // Reads the temperature from the sensor
    // Check if the value is a negative number
    /* Insert code to check here */
    // Returns the digital temperature value multiplied by the resolution
    // Resolution = .0078125
    return digitalTemp * 0.0078125;
}
I'm not sure how to check whether the code works, and I haven't been able to compile it and run it on the device because the new PCB design and sensor have not come in the mail yet.
I know that I can convert it to decimal and check that way
I am not sure what you mean. An integer is an integer; it is an arithmetic object, and you just compare it with zero:
if( digitalTemp < 0 )
{
    // negative
}
else
{
    // positive
}
You can, as you suggest, test the MSB, but there is no particular benefit: it lacks clarity, and it will break or need modification if the type of digitalTemp changes.
if( digitalTemp & 0x8000 )
{
    // negative
}
else
{
    // positive
}
"conversion to decimal", can only be interpreted as conversion to a decimal string representation of an integer, which does not make your task any simpler, and is entirely unnecessary.
I'm not sure how to check if the code works and I haven't been able to compile it and run it on the device because the new PCB design and sensor has not come in the mail yet.
Compile and run it on a PC in a test harness with stubs for the hardware-dependent functions. Frankly, if you are new to C++, you are perhaps better off practising the fundamentals in a PC environment anyway, with generally better debug facilities and faster development/test iteration.
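For example, here is a minimal sketch of such a PC harness (the register id and raw values are made up for illustration; readRegister() is stubbed to feed known values into the conversion):
#include <cstdint>
#include <cstdio>

// Stub of the hardware-dependent register read: return a fixed raw value so the
// conversion logic can be exercised on a PC without the sensor attached.
static uint16_t fake_raw_register = 0;
static int16_t readRegister(int /*reg*/) {
    return static_cast<int16_t>(fake_raw_register);
}

static const int TEMP_RESULT = 0; // placeholder register id for the stub

// Same conversion as readTempC(), as a free function for testing.
static float convertTempC() {
    int16_t digitalTemp = readRegister(TEMP_RESULT);
    return digitalTemp * 0.0078125f; // TMP117 resolution, per the question
}

int main() {
    fake_raw_register = 0x0C80; // 3200 * 0.0078125 = +25.0 C
    std::printf("%.4f\n", convertTempC());
    fake_raw_register = 0xE700; // as int16_t this is -6400, i.e. -50.0 C
    std::printf("%.4f\n", convertTempC());
    return 0;
}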
In general
float TMP117::readTempC(void)
{
    int16_t digitalTemp;                     // Temperature stored in the TMP117 register
    digitalTemp = readRegister(TEMP_RESULT); // Reads the temperature from the sensor
    // Check if the value is a negative number
    if (digitalTemp < 0)
    {
        printf("Dang it is cold\n");
    }
    // Returns the digital temperature value multiplied by the resolution
    // Resolution = .0078125
    return digitalTemp * 0.0078125;
}

MongoDB C driver efficiency

I'm trying to write a program whose job it is to go into shared memory, retrieve a piece of information (a struct 56 bytes in size), then parse that struct lightly and write it to a database.
The catch is that it needs to do this several tens of thousands of times per second. I'm running this on a dedicated Ubuntu 14.04 server with dual Xeon X5677s and 32 GB of RAM. Also, Mongo is running PerconaFT as its storage engine. I am making an uneducated guess here, but say the worst-case load scenario would be 100,000 writes per second.
Shared memory is populated by another program that's reading information from a real-time data stream, so I can't necessarily reproduce scenarios.
First... is Mongo the right choice for this task?
Next, this is the code that I've got right now. It starts with creating a list of collections (the list of items I want to record data points on is fixed) and then retrieving data from shared memory until it catches a signal.
int main()
{
    //these deal with navigating shared memory
    uint latestNotice = 0, latestTurn = 0, latestPQ = 0, latestPQturn = 0;
    symbol_data *notice = nullptr;
    bool done = false;
    //this is our 56 byte struct
    pq item;
    uint64_t today_at_midnight; //since epoch, in milliseconds
    {
        time_t seconds = time(NULL);
        today_at_midnight = seconds / (60 * 60 * 24);
        today_at_midnight *= (60 * 60 * 24 * 1000);
    }
    //connect to shared memory
    infob = info_block_init();
    uint32_t used_symbols = infob->used_symbols;
    getPosition(latestNotice, latestTurn);
    //fire up mongo
    mongoc_client_t *client = nullptr;
    mongoc_collection_t *collections[used_symbols];
    mongoc_collection_t *collection = nullptr;
    bson_error_t error;
    bson_t *doc = nullptr;
    mongoc_init();
    client = mongoc_client_new("mongodb://localhost:27017/");
    for(uint32_t symbol = 0; symbol < used_symbols; symbol++)
    {
        collections[symbol] = mongoc_client_get_collection(client, "scribe",
                                                           (infob->sd + symbol)->getSymbol());
    }
    //this will be used later to sleep one millisecond
    struct timespec ts;
    ts.tv_sec = 0;
    ts.tv_nsec = 1000000;
    while(continue_running) //becomes false if a signal is caught
    {
        //check that new info is available in shared memory
        //sleep 1ms if it isn't
        while(!getNextNotice(&notice, latestNotice, latestTurn)) nanosleep(&ts, NULL);
        //get the new info
        done = notice->getNextItem(item, latestPQ, latestPQturn);
        if(done) continue;
        //just some simple array math to make sure we're on the right collection
        collection = collections[notice - infob->sd];
        //switch on the item type and parse it accordingly
        switch(item.tp)
        {
            case pq::pq_event:
                doc = BCON_NEW(
                    //decided to use this instead of std::chrono
                    "ts", BCON_DATE_TIME(today_at_midnight + item.ts),
                    //item.pr is a uint64_t, and the guidance I've read on mongo
                    //advises using strings for those values
                    "pr", BCON_UTF8(std::to_string(item.pr).c_str()),
                    "sz", BCON_INT32(item.sz),
                    "vn", BCON_UTF8(venue_labels[item.vn]),
                    "tp", BCON_UTF8("e")
                );
                if(!mongoc_collection_insert(collection, MONGOC_INSERT_NONE, doc, NULL, &error))
                {
                    LOG(1, "Mongo Error: " << error.message << endl);
                }
                break;
            //obviously, several other cases go here, but they all look the
            //same, using BCON macros for their data.
            default:
                LOG(1, "got unknown type = " << item.tp << endl);
                break;
        }
    }
    //clean up once we break from the while()
    if(doc != nullptr) bson_destroy(doc);
    for(uint32_t symbol = 0; symbol < used_symbols; symbol++)
    {
        collection = collections[symbol];
        mongoc_collection_destroy(collection);
    }
    if(client != nullptr) mongoc_client_destroy(client);
    mongoc_cleanup();
    return 0;
}
My second question is: is this the fastest way to do this? The retrieval from shared memory isn't perfect, but this program is falling far behind its supply of data, much further than I can tolerate. So I'm looking for obvious mistakes with regard to efficiency or technique when speed is the goal.
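One common efficiency lever with the C driver is batching: queue documents into a bulk operation per collection and execute it periodically, instead of paying a synchronous round trip for every mongoc_collection_insert(). A rough sketch of that idea (the helper, its batching policy, and the per-document bson_destroy are illustrative assumptions, not part of the code above):
#include <mongoc.h>
#include <cstdio>
#include <vector>

// Sketch: insert a batch of documents into one collection with a single bulk
// operation instead of one mongoc_collection_insert() round trip per document.
// Destroys the queued documents once they have been copied into the bulk op.
static bool insert_batch(mongoc_collection_t *collection, std::vector<bson_t *> &docs)
{
    if (docs.empty()) return true;
    bson_error_t error;
    bson_t reply;
    mongoc_bulk_operation_t *bulk =
        mongoc_collection_create_bulk_operation(collection, false /* unordered */, NULL);
    for (bson_t *doc : docs) {
        mongoc_bulk_operation_insert(bulk, doc); // the bulk operation copies the document
        bson_destroy(doc);
    }
    docs.clear();
    bool ok = (mongoc_bulk_operation_execute(bulk, &reply, &error) != 0);
    if (!ok) std::fprintf(stderr, "Mongo Error: %s\n", error.message);
    bson_destroy(&reply);
    mongoc_bulk_operation_destroy(bulk);
    return ok;
}
Since the program writes to a different collection per symbol, the natural extension of this sketch is one pending batch per collection, flushed when it reaches a size threshold or a timeout; that keeps the per-document cost down to building the BSON rather than a network round trip.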
Thanks in advance. =)

C++ beginner how to use GetSystemTimeAsFileTime

I have a program that reads the current time from the system clock and saves it to a text file. I previously used the GetSystemTime function, which worked, but the times weren't completely consistent, e.g. one of the times is 32567.789 and the next time is 32567.780, which is backwards in time.
I am using this program to save the time up to 10 times a second. I read that the GetSystemTimeAsFileTime function is more accurate. My question is, how do I convert my current code to use the GetSystemTimeAsFileTime function? I tried to use the FileTimeToSystemTime function, but that had the same problems.
SYSTEMTIME st;
GetSystemTime(&st);
WORD sec = (st.wHour * 3600) + (st.wMinute * 60) + st.wSecond; //convert to seconds in a day
lStr.Format(_T("%d %d.%d\n"), GetFrames(), sec, st.wMilliseconds);
std::wfstream myfile;
myfile.open("time.txt", std::ios::out | std::ios::in | std::ios::app);
if (myfile.is_open())
{
    myfile.write((LPCTSTR)lStr, lStr.GetLength());
    myfile.close();
}
else
{
    lStr.Format(_T("open file failed: %d"), WSAGetLastError());
}
EDIT: To add some more info, the code captures an image from a camera, which runs 10 times every second, and saves the time the image was taken into a text file. When I subtract the 1st entry of the text file from the second, and so on (entry 2-1, 3-2, 4-3, etc.), I get this graph, where the x axis is the number of entries and the y axis is the subtracted values.
All of them should be around the 0.12 mark, which most of them are. However, you can see that a lot of them vary, and some even go negative. This isn't due to the camera, because the camera has its own internal clock and that has no variations. It has something to do with capturing the system time. What I want is the most accurate method to extract the system time, with the highest resolution and as little noise as possible.
Edit 2: I have taken your suggestions on board and run the program again. This is the result:
As you can see it is a lot better than before but it is still not right. I find it strange that it seems to do it very incrementally. I also just plotted the times and this is the result, where x is the entry and y is the time:
Does anyone have any idea on what could be causing the time to go out every 30 frames or so?
First of all, you want to get the FILETIME as follows:
FILETIME fileTime;
GetSystemTimeAsFileTime(&fileTime);
// Or for higher precision, use
// GetSystemTimePreciseAsFileTime(&fileTime);
According to FILETIME's documentation,
It is not recommended that you add and subtract values from the FILETIME structure to obtain relative times. Instead, you should copy the low- and high-order parts of the file time to a ULARGE_INTEGER structure, perform 64-bit arithmetic on the QuadPart member, and copy the LowPart and HighPart members into the FILETIME structure.
So, what you should do next is:
ULARGE_INTEGER theTime;
theTime.LowPart = fileTime.dwLowDateTime;
theTime.HighPart = fileTime.dwHighDateTime;
__int64 fileTime64Bit = theTime.QuadPart;
And that's it. The fileTime64Bit variable now contains the time you're looking for.
If you want to get a SYSTEMTIME object instead, you could just do the following:
SYSTEMTIME systemTime;
FileTimeToSystemTime(&fileTime, &systemTime);
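If the end goal is the same "seconds in the day, plus milliseconds" value the original code wrote out, one option (a sketch, assuming UTC output is acceptable, which is also what GetSystemTime produced) is to reduce the 64-bit value directly, since FILETIME counts 100-nanosecond ticks since January 1, 1601:
#include <cstdio>

// fileTime64Bit is the ULARGE_INTEGER::QuadPart value obtained above,
// i.e. 100-nanosecond ticks since January 1, 1601 (UTC).
void PrintSecondsOfDay(unsigned long long fileTime64Bit)
{
    const unsigned long long TICKS_PER_SECOND = 10000000ULL;
    const unsigned long long TICKS_PER_DAY    = 86400ULL * TICKS_PER_SECOND;

    unsigned long long ticksToday = fileTime64Bit % TICKS_PER_DAY;     // ticks since UTC midnight
    unsigned long long secOfDay   = ticksToday / TICKS_PER_SECOND;     // whole seconds in the day
    unsigned long long millis     = (ticksToday / 10000ULL) % 1000ULL; // millisecond part

    std::printf("%llu.%03llu\n", secOfDay, millis);
}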
Getting the system time out of Windows with decent accuracy is something that I've had fun with, too... I discovered that Javascript code running on Chrome seemed to produce more consistent timer results than I could with C++ code, so I went looking in the Chrome source. An interesting place to start is the comments at the top of time_win.cc in the Chrome source. The links given there to a Mozilla bug and a Dr. Dobb's article are also very interesting.
Based on the Mozilla and Chrome sources, and the above links, the code I generated for my own use is here. As you can see, it's a lot of code!
The basic idea is that getting the absolute current time is quite expensive. Windows does provide a high resolution timer that's cheap to access, but that only gives you a relative, not absolute time. What my code does is split the problem up into two parts:
1) Get the system time accurately. This is in CalibrateNow(). The basic technique is to call timeBeginPeriod(1) to get accurate times, then call GetSystemTimeAsFileTime() until the result changes, which means that the timeBeginPeriod() call has had an effect. This gives us an accurate system time, but is quite an expensive operation (and the timeBeginPeriod() call can affect other processes) so we don't want to do it each time we want a time. The code also calls QueryPerformanceCounter() to get the current high resolution timer value.
bool NeedCalibration = true;
LONGLONG CalibrationFreq = 0;
LONGLONG CalibrationCountBase = 0;
ULONGLONG CalibrationTimeBase = 0;

void CalibrateNow(void)
{
    // If the timer frequency is not known, try to get it
    if (CalibrationFreq == 0)
    {
        LARGE_INTEGER freq;
        if (::QueryPerformanceFrequency(&freq) == 0)
            CalibrationFreq = -1;
        else
            CalibrationFreq = freq.QuadPart;
    }
    if (CalibrationFreq > 0)
    {
        // Get the current system time, accurate to ~1ms
        FILETIME ft1, ft2;
        ::timeBeginPeriod(1);
        ::GetSystemTimeAsFileTime(&ft1);
        do
        {
            // Loop until the value changes, so that the timeBeginPeriod() call has had an effect
            ::GetSystemTimeAsFileTime(&ft2);
        }
        while (FileTimeToValue(ft1) == FileTimeToValue(ft2));
        ::timeEndPeriod(1);
        // Get the current timer value
        LARGE_INTEGER counter;
        ::QueryPerformanceCounter(&counter);
        // Save calibration values
        CalibrationCountBase = counter.QuadPart;
        CalibrationTimeBase = FileTimeToValue(ft2);
        NeedCalibration = false;
    }
}
2) When we want the current time, get the high resolution timer by calling QueryPerformanceCounter(), and use the change in that timer since the last CalibrateNow() call to work out an accurate "now". This is in Now() in my code. This also periodically calls CalibrateNow() to ensure that the system time doesn't go backwards or drift out.
FILETIME GetNow(void)
{
    for (int i = 0; i < 4; i++)
    {
        // Calibrate if needed, and give up if this fails
        if (NeedCalibration)
            CalibrateNow();
        if (NeedCalibration)
            break;
        // Get the current timer value and use it to compute now
        FILETIME ft;
        ::GetSystemTimeAsFileTime(&ft);
        LARGE_INTEGER counter;
        ::QueryPerformanceCounter(&counter);
        LONGLONG elapsed = ((counter.QuadPart - CalibrationCountBase) * 10000000) / CalibrationFreq;
        ULONGLONG now = CalibrationTimeBase + elapsed;
        // Don't let time go back
        static ULONGLONG lastNow = 0;
        now = max(now, lastNow);
        lastNow = now;
        // Check for clock skew
        if (LONGABS(FileTimeToValue(ft) - now) > 2 * GetTimeIncrement())
        {
            NeedCalibration = true;
            lastNow = 0;
        }
        if (!NeedCalibration)
            return ValueToFileTime(now);
    }
    // Calibration has failed to stabilize, so just use the system time
    FILETIME ft;
    ::GetSystemTimeAsFileTime(&ft);
    return ft;
}
It's all a bit hairy but works better than I had hoped. This also seems to work well as far back on Windows as I have tested (which was Windows XP).
I believe you are looking for the GetSystemTimePreciseAsFileTime() function, or even QueryPerformanceCounter(); in short, for something that is guaranteed to produce monotonic values.
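For completeness, a minimal sketch of the QueryPerformanceCounter() route (timestamps relative to the first call, in seconds; this illustrates the monotonic approach rather than reproducing the code above):
#include <windows.h>
#include <cstdio>

// Returns seconds elapsed since the first call, using the monotonic
// high-resolution performance counter.
double MonotonicSeconds()
{
    static LARGE_INTEGER freq  = [] { LARGE_INTEGER f; ::QueryPerformanceFrequency(&f); return f; }();
    static LARGE_INTEGER start = [] { LARGE_INTEGER c; ::QueryPerformanceCounter(&c); return c; }();
    LARGE_INTEGER now;
    ::QueryPerformanceCounter(&now);
    return static_cast<double>(now.QuadPart - start.QuadPart) / static_cast<double>(freq.QuadPart);
}

int main()
{
    std::printf("%.6f\n", MonotonicSeconds());
    ::Sleep(100);
    std::printf("%.6f\n", MonotonicSeconds()); // strictly non-decreasing
    return 0;
}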

c++ stack efficient for multicore application

I am trying to code a multicore Markov chain in C++, and while I am trying to take advantage of the many CPUs (up to 24) to run a different chain on each one, I have a problem picking the right container to gather the results of the numerical evaluations on each CPU. What I am trying to measure is basically the average value of an array of boolean variables. I have tried coding a wrapper around a `std::vector` object that looks like this:
#include <vector>
#include <fstream>
using namespace std;

struct densityStack {
    vector<int> density; //will store the sum of boolean variables
    int card;            //will store the amount of elements we summed over for normalizing at the end

    densityStack(int size){ //constructor taking as only parameter the size of the array, usually size = 30
        density = vector<int>(size, 0);
        card = 0;
    }

    void push_back(vector<int> & toBeAdded){ //method summing a new array (of measurements) to our stack
        for(auto valStack = density.begin(), newVal = toBeAdded.begin(); valStack != density.end(); ++valStack, ++newVal)
            *valStack += *newVal;
        card++;
    }

    void savef(const char * fname){ //method outputting into a file
        ofstream out(fname);
        out.precision(10);
        out << card << "\n"; //saving the cardinal in first line
        for(auto val = density.begin(); val != density.end(); ++val)
            out << (double) *val / card << "\n";
        out.close();
    }
};
Then, in my code I use a single densityStack object, and every time a CPU core has data (which can be 100 times per second) it calls push_back to send the data back to the densityStack.
My issue is that this seems to be slower than the first, raw approach, where each core stored each array of measurements in a file and I then used a Python script to average and clean up (I was unhappy with that because it stored too much information and put too much useless stress on the hard drives).
Do you see where I could be losing a lot of performance? I mean, is there an obvious source of overhead? Because to me, copying back the vector even at frequencies of 1000 Hz should not be too much.
How are you synchronizing your shared densityStack instance?
From the limited info here my guess is that the CPUs are blocked waiting to write data every time they have a tiny chunk of data. If that is the issue, a simple technique to improve performance would be to reduce the number of writes. Keep a buffer of data for each CPU and write to the densityStack less frequently.
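A minimal sketch of that buffering idea, assuming the densityStack above is shared between threads and guarded by a mutex (the batch size and the run_one_chain_step() helper are placeholders for illustration):
#include <mutex>
#include <vector>

densityStack shared_stack(30);   // the shared accumulator from the question
std::mutex stack_mutex;          // guards shared_stack (assumed; the synchronization is not shown in the question)

std::vector<int> run_one_chain_step(); // hypothetical: one measurement, 30 booleans stored as 0/1

// Each worker keeps its measurements locally and takes the lock once per batch
// instead of once per measurement.
void worker(int iterations) {
    std::vector<std::vector<int>> local_batch;
    local_batch.reserve(256);
    for (int i = 0; i < iterations; ++i) {
        local_batch.push_back(run_one_chain_step());
        if (local_batch.size() == 256) { // flush every 256 measurements (tuning choice)
            std::lock_guard<std::mutex> lock(stack_mutex);
            for (auto& m : local_batch) shared_stack.push_back(m);
            local_batch.clear();
        }
    }
    // Flush whatever is left at the end.
    std::lock_guard<std::mutex> lock(stack_mutex);
    for (auto& m : local_batch) shared_stack.push_back(m);
}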

Calculating moving average in C++

I am trying to calculate the moving average of a signal. The signal value (a double) is updated at random times.
I am looking for an efficient way to calculate its time-weighted average over a time window, in real time. I could do it myself, but it is more challenging than I thought.
Most of the resources I've found on the internet calculate the moving average of a periodic signal, but mine updates at random times.
Does anyone know good resources for that?
Thanks
The trick is the following: you get updates at random times via void update(int time, float value). However, you also need to track when an update falls off the time window, so you set an "alarm", called at time + N, which removes that update from ever being considered again in the computation.
If this happens in real time, you can request the operating system to call a method void drop_off_oldest_update(int time) at time + N.
If this is a simulation, you cannot get help from the operating system and you need to do it manually. In a simulation you would call methods with the time supplied as an argument (which does not correlate with real time). However, a reasonable assumption is that the calls are guaranteed to be such that the time arguments are increasing. In this case you need to maintain a sorted list of alarm time values, and for each update and read call you check if the time argument is greater than the head of the alarm list. While it is greater you do the alarm related processing (drop off the oldest update), remove the head and check again until all alarms prior to the given time are processed. Then do the update call.
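A minimal sketch of that alarm bookkeeping for the simulation case (the deque and the drop_off_oldest_update() stub are illustrative; N is the window length as above):
#include <deque>

static const int N = 60;            // window length (illustrative value)
static std::deque<int> alarm_times; // pending alarms; stays sorted because call times are increasing

// Illustrative stand-in for the drop-off work described above.
static void drop_off_oldest_update(int time) { /* remove that update from the running computation */ }

// Process every alarm scheduled at or before `time`.
static void process_alarms_up_to(int time) {
    while (!alarm_times.empty() && alarm_times.front() <= time) {
        drop_off_oldest_update(alarm_times.front());
        alarm_times.pop_front();
    }
}

static void update(int time, float value) {
    process_alarms_up_to(time);
    // ... fold (time, value) into the precomputed value here ...
    alarm_times.push_back(time + N); // schedule this update's drop-off
}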
I have so far assumed it is obvious what you would do for the actual computation, but I will elaborate just in case. I assume you have a method float read(int time) that you use to read the values. The goal is to make this call as efficient as possible. So you do not compute the moving average every time the read method is called. Instead, you precompute the value as of the last update or the last alarm, and "tweak" this value by a couple of floating point operations to account for the passage of time since the last update (i.e. a constant number of operations, except perhaps for processing a list of piled-up alarms).
Hopefully this is clear -- this should be a quite simple algorithm and quite efficient.
Further optimization: one of the remaining problems is if a large number of updates happen within the time window, then there is a long time for which there are neither reads nor updates, and then a read or update comes along. In this case, the above algorithm will be inefficient in incrementally updating the value for each of the updates that is falling off. This is not necessary because we only care about the last update beyond the time window so if there is a way to efficiently drop off all older updates, it would help.
To do this, we can modify the algorithm to do a binary search of updates to find the most recent update before the time window. If there are relatively few updates that needs to be "dropped" then one can incrementally update the value for each dropped update. But if there are many updates that need to be dropped then one can recompute the value from scratch after dropping off the old updates.
Appendix on Incremental Computation: I should clarify what I mean by incremental computation above in the sentence "tweak" this value by a couple of floating point operations to account for the passage of time since the last update. Initial non-incremental computation:
start with
    sum = 0;
    updates_in_window = /* set of all updates within window */;
    prior_update' = /* most recent update prior to window, with timestamp tweaked to the window beginning */;
    relevant_updates = /* union of prior_update' and updates_in_window */;
then iterate over relevant_updates in order of increasing time:
    for each update EXCEPT the last {
        sum += update.value * time_to_next_update;
    }
and finally
    moving_average = (sum + last_update * time_since_last_update) / window_length;
Now if exactly one update falls off the window but no new updates arrive, adjust sum as:
sum -= prior_update'.value * time_to_next_update + first_update_in_last_window.value * time_from_first_update_to_new_window_beginning;
(Note that it is prior_update' which has its timestamp modified to the beginning of the last window.) And if exactly one update enters the window but no new updates fall off, adjust sum as:
sum += previously_most_recent_update.value * corresponding_time_to_next_update.
As should be obvious, this is a rough sketch but hopefully it shows how you can maintain the average such that it is O(1) operations per update on an amortized basis. But note further optimization in previous paragraph. Also note stability issues alluded to in an older answer, which means that floating point errors may accumulate over a large number of such incremental operations such that there is a divergence from the result of the full computation that is significant to the application.
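Putting the pieces above together, here is one self-contained C++ sketch of such an event-driven, time-weighted moving average (it assumes non-decreasing times, as the answer does, and keeps each call O(1) amortized by maintaining a running sum of the segments between stored samples):
#include <algorithm>
#include <deque>
#include <iostream>
#include <utility>

// Exact time-weighted moving average over a window of length W, updated at
// irregular times. A running sum of the "closed" segments between stored
// samples is maintained incrementally; expired samples are popped from the
// front, so each call is O(1) amortized.
class TimeWeightedAverage {
public:
    explicit TimeWeightedAverage(double window) : W(window) {}

    void update(double t, double v) {
        if (!samples.empty())
            closed_sum += samples.back().second * (t - samples.back().first);
        samples.emplace_back(t, v);
        expire(t);
    }

    double read(double t) {
        expire(t);
        if (samples.empty()) return 0.0;
        double window_start = t - W;
        // Integral of the step function from the oldest stored sample up to t.
        double total = closed_sum + samples.back().second * (t - samples.back().first);
        // Remove the part of the first segment that lies before the window.
        if (samples.front().first < window_start)
            total -= samples.front().second * (window_start - samples.front().first);
        // If no sample predates the window, average only over the covered span.
        double covered = t - std::max(window_start, samples.front().first);
        return covered > 0.0 ? total / covered : samples.back().second;
    }

private:
    void expire(double t) {
        // Keep one sample at or before the window start (it still covers the
        // beginning of the window); drop anything older than that.
        while (samples.size() >= 2 && samples[1].first <= t - W) {
            closed_sum -= samples[0].second * (samples[1].first - samples[0].first);
            samples.pop_front();
        }
    }

    double W;
    double closed_sum = 0.0;                       // sum of value * duration between stored samples
    std::deque<std::pair<double, double>> samples; // (time, value), oldest first
};

int main() {
    TimeWeightedAverage avg(10.0);
    avg.update(1.0, 1.0);
    avg.update(3.0, 5.0);                // value is 1 on [1,3), then 5
    std::cout << avg.read(11.0) << "\n"; // window [1,11]: (1*2 + 5*8)/10 = 4.2
    std::cout << avg.read(13.0) << "\n"; // window [3,13]: value 5 throughout = 5
}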
If an approximation is OK and there's a minimum time between samples, you could try super-sampling. Have an array that represents evenly spaced time intervals that are shorter than the minimum, and at each time period store the latest sample that was received. The shorter the interval, the closer the average will be to the true value. The period should be no greater than half the minimum or there is a chance of missing a sample.
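A minimal sketch of that super-sampling scheme (the slot duration and window length are assumptions for illustration; tick() is meant to be driven by a timer every slot interval with the most recently received sample):
#include <numeric>
#include <vector>

// Approximate moving average by super-sampling: the window is split into fixed
// slots shorter than the minimum time between samples, and each slot stores the
// sample value in effect during that interval.
class SuperSampledAverage {
public:
    // window: length of the averaging window; slot: slot duration, no greater
    // than half the minimum inter-sample time (per the suggestion above).
    SuperSampledAverage(double window, double slot)
        : slots_(static_cast<size_t>(window / slot), 0.0) {}

    // Call once per slot interval with the latest sample value received so far.
    void tick(double latest_value) {
        slots_[next_] = latest_value;
        next_ = (next_ + 1) % slots_.size();
    }

    double average() const {
        double sum = std::accumulate(slots_.begin(), slots_.end(), 0.0);
        return sum / static_cast<double>(slots_.size());
    }

private:
    std::vector<double> slots_; // ring buffer, one entry per slot
    size_t next_ = 0;
};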
#include <map>
#include <iostream>

// Sample - the type of a single sample
// Date - the type of a time notation
// DateDiff - the type of difference of two Dates
template <class Sample, class Date, class DateDiff = Date>
class TWMA {
private:
    typedef std::map<Date, Sample> qType;
    const DateDiff windowSize; // The time width of the sampling window
    qType samples;             // A set of sample/date pairs
    Sample average;            // The answer
public:
    // windowSize - The time width of the sampling window
    TWMA(const DateDiff& windowSize) : windowSize(windowSize), average(0) {}

    // Call this each time you receive a sample
    void Update(const Sample& sample, const Date& now) {
        // First throw away all old data
        Date then(now - windowSize);
        samples.erase(samples.begin(), samples.upper_bound(then));
        // Next add new data
        samples[now] = sample;
        // Compute average: note: this could move to Average(), depending upon
        // precise user requirements.
        Sample sum = Sample();
        for(typename qType::iterator it = samples.begin();
            it != samples.end();
            ++it) {
            DateDiff duration(it->first - then);
            sum += duration * it->second;
            then = it->first;
        }
        average = sum / windowSize;
    }

    // Call this when you need the answer.
    const Sample& Average() { return average; }
};

int main () {
    TWMA<double, int> samples(10);
    samples.Update(1, 1);
    std::cout << samples.Average() << "\n"; // 1
    samples.Update(1, 2);
    std::cout << samples.Average() << "\n"; // 1
    samples.Update(1, 3);
    std::cout << samples.Average() << "\n"; // 1
    samples.Update(10, 20);
    std::cout << samples.Average() << "\n"; // 10
    samples.Update(0, 25);
    std::cout << samples.Average() << "\n"; // 5
    samples.Update(0, 30);
    std::cout << samples.Average() << "\n"; // 0
}
Note: Apparently this is not the way to approach this. Leaving it here for reference on what is wrong with this approach. Check the comments.
UPDATED - based on Oli's comment... not sure about the instability that he is talking about though.
Use a sorted map of "arrival times" against values. Upon arrival of a value, add the arrival time to the sorted map along with its value and update the moving average.
Warning: this is pseudo-code:
SortedMapType< int, double > timeValueMap;

void onArrival(double value)
{
    timeValueMap.insert( (int)time(NULL), value);
}

//for example this runs every 10 seconds and the moving window is 120 seconds long
void recalcRunningAverage()
{
    // you know that the oldest thing in the list is
    // going to be 129.9999 seconds old
    int expireTime = (int)time(NULL) - 120;
    int removeFromTotal = 0;
    MapIterType i;
    for( i = timeValueMap.begin();
         i != timeValueMap.end() && i->first < expireTime; ++i )
    {
    }
    // NOW REMOVE PAIRS TO LEFT OF i
    // Below needs to apply your time-weighting to the remaining values
    runningTotal = calculateRunningTotal(timeValueMap);
    average = runningTotal / timeValueMap.size();
}
There... Not fully fleshed out but you get the idea.
Things to note:
As I said the above is pseudo code. You'll need to choose an appropriate map.
Don't remove the pairs as you iterate through as you will invalidate the iterator and will have to start again.
See Oli's comment below also.