I'm currently experimenting with the gpsd library and noticed that the time I get switches between two values: the actual value (today) and some date in 1991.
When displayed with gpsmon it normally shows the right time. Using cgps -s displays the wrong time, with an absurdly huge offset.
The only possible cause I've found online so far is that the system time isn't up to date, which confuses the GPS time, but that isn't the case here.
We are accessing gpsd via the following code:
{
    gps_stream(&gps_data, WATCH_ENABLE | WATCH_JSON, NULL);
    if (gps_waiting(&gps_data, timeout)) {
        if (gps_read(&gps_data) == -1) {
            return false;
        }
    }
    return true;
}
All the other values (location, altitude, etc.) are correct. Only the time is off.
Anybody got an idea why this could be happening? Thanks in advance!
We actually found the answer! The Raspberry Pi we ran this on apparently had some issues with its OS, so using an older image worked out perfectly.
Our team has a bug that has stumped us.
The following code returns false:
CMainFrame* pMainFrame = new CMainFrame;
if (!pMainFrame->LoadFrame(IDR_MAINFRAME))
{
    AfxMessageBox(GetStr(IDS_MAINFRM_FAIL_TO_LOAD));
    ASSERT(FALSE);
    return FALSE;
}
We're compiling with VS2010, and we have the RogueWave Stingray component installed. CMainFrame is a CBCGPMDIFrameWnd, which derives from CMDIFrameWnd and is made by BCGSoft.
We have our software running on about 100 machines globally with no issues. It's running on Windows 7 through 10, x86 and x64. It always worked, until this week. A small group of users in Mildura, Australia have reported an issue. For all of them, running Win7 x86 Enterprise and Win10 x64 Home, the code snippet above returns false. I personally inspected one of their machines (Win10 x64 Home) and everything seems to be in order.
I've tried deleting the RES and APS files for the project. That didn't help.
Does anyone know what the problem might be? I'm open to educated guesses.
Thanks in advance!
PS: New info:
It looks like it's a time issue. On those computers, everything UTC+ fails, and UTC+0 and UTC- pass. We aren't sure why. Any help would be appreciated. Thanks!
We were doing date/time calculations on an elapsed timestamp using the epoch as a starting point. MFC doesn't allow dates before the epoch, so the date construction failed in all UTC+ time zones. This bug has been fixed. Thank you to everyone.
Here is the code that was causing the issue (now fixed). We added one day onto the reference date for everything to work: CTime(1970,1,1,0,0,0) is interpreted in local time, so with a UTC+ offset it falls before 1/1/1970 UTC (before the epoch) and fails.
Thanks!
CTime t1(yearInt, monthInt, dayInt, 0, 0, 0);
CTime t2(1970, 1, 2, 0, 0, 0);       // one day after the epoch, so UTC+ zones stay valid
CTimeSpan timeSpan = t1 - t2;
versionDate = timeSpan.GetDays() + 1; // add the day back
I am doing some maths on the GPU and reading the result.
And I am getting the wrong value from log. I have verified this for the values 0 to 10, 20, 30, and 40.
If I hard-code the value (as you can see below under "verify"), the right result comes out. However, if I use log with a hard-coded input that should return the same result, the wrong result comes out.
This is the kind of thing I have been doing in my function.
vec4 IScale(vec4 value)
{
    switch (uScaleType_i)
    {
    case Log:
        //value = log(value);
        value = vec4(1, 1, 1, 1);
        value.r = log(5);
        //verify
        //value.r = 0.698970004
        break;
    case Sqrt:
        value = sqrt(value);
        break;
    case None:
        break;
    }
    return value;
}
I am wondering whether there is any pattern here. I have put the results I am getting back into Excel and graphed them. At first it's almost like it's double the correct value, but it's not quite that clean; it drifts further and further away.
Is there any explanation for this other than a driver issue? I can't think of anything else to check!
And if so, how can I possibly work around it, other than refactoring my code to run on the CPU? And why can't I find evidence online to back this up? I am completely, utterly baffled!
I am running on a laptop with:
(Intel(R) HD Graphics 4000 with 132 ext.)
P.S. Sqrt is fine and I get the values I would expect.
P.P.S. I checked; I have not accidentally created a function called "log".
I believe you are tripping over the base of the logarithm. In Excel the default base is 10, whereas in GLSL it is e.
To get the base-10 result, divide by the log of the base you want:
value = log(value) / log(10.0);
Or, in Excel, you can use LN(RC[-1]) instead.
This is as per the specification: log() returns the natural logarithm, i.e. the logarithm to base e, not the base-10 logarithm.
I am facing some strange behaviour that appears only on certain notebooks.
I am developing in C++ using MSVC 2012 and the Qt framework.
I will try to sum up the problem, and I am hoping that someone has an idea what the cause could be or what I could try to find out.
Generally it's the following problem:
void myclass::foo()
{
    const double value1 = 100.0;
    double value2;

    value2 = some_function_returning_double();
    if (value1 > value2)
    {
        // do something
    }
}
The problem is that the condition fails because the local variable gets overwritten.
If I add some debug output, I can see that value1 is no longer 100.0 but some random value, so the comparison randomly fails.
One thing I figured out is that everything works fine if I don't use local variables. If I make value1 and value2 member variables of my class, everything works without problems, but that can't be the solution.
Now the strange thing is that this error only occurs on one particular notebook (some mobile i5 CPU).
On my machine (also an i5) and on many other notebooks (even other mobile i5s) everything works fine.
I know you won't be able to solve my problem with the little information I can offer here, but maybe someone has a hint about what the cause could be and what I could try.
Many thanks in advance.
In Visual Studio 2012, add a data breakpoint (Debug -> New Breakpoint -> New Data Breakpoint) on the address of the variable that gets overwritten.
First, break at the start of the function.
Then set the data breakpoint: just type &value1 in the "New Breakpoint" input box.
It should then break just after the value has been modified, and you will see the culprit.
Data breakpoints are a very powerful tool that has helped me find nasty bugs very quickly.
I am working with an STM32 eval2 board and trying to debug it. It used to work fine, and I haven't changed anything, but for the last week or so I always get stuck in this loop while in debugger mode; when I am not debugging, the program runs fine.
while (!__HAL_SD_SDIO_GET_FLAG(hsd, SDIO_FLAG_RXOVERR | SDIO_FLAG_DCRCFAIL | SDIO_FLAG_DTIMEOUT |
                                    SDIO_FLAG_DBCKEND | SDIO_FLAG_STBITERR))
{
    if (__HAL_SD_SDIO_GET_FLAG(hsd, SDIO_FLAG_RXDAVL))
    {
        *(tempscr + index) = SDIO_ReadFIFO(hsd->Instance);
        index++;
    }
}
I even tried running the sample project provided for the board by ST without changing anything about it, and I get stuck in the same while loop in their code as well.
Does anybody know what I am doing wrong here? It doesn't make sense, because nothing changed.
The flags tested in the while condition mean (respectively):
Received FIFO overrun error
Data block sent/received (CRC check failed)
Data timeout
Data block sent/received (CRC check passed)
Start bit not detected on all data signals in wide bus mode
and it looks like the loop gets stuck on the if statement for the "Data available in receive FIFO" flag, if that makes sense. I cannot step over that if statement.
I am using Keil v5 and programming in C++.
Well, I had been struggling with this for a week, and almost right after I posted this I figured it out.
I had the SD card in, and for some reason taking it out fixed it. I will leave this up in case anyone else ever has this stupid problem.
I am running Cygwin on Windows and using the latest version of gprof to profile my code. My problem is that the flat profile shows zero seconds for each function in my code. I even tried looping the functions (a for loop of a million iterations), but gprof is unable to accumulate any time. Please help. Here is one of my sample functions.
bool is_adjacent(const char* a, const char* b)
{
    for (long long iter = 0; iter <= 1000000; iter++) {
        // Note: the early returns below mean this loop usually runs only once,
        // so it adds almost no measurable time for the profiler.
        string line1 = "qwertyuiop";
        string line2 = "asdfghjkl";
        string line3 = "zxcvbnm";
        string line4 = "1234567890";

        size_t pos = line1.find(*a);  // find() returns size_t; an int can't hold npos
        if (pos != string::npos)
            return (line1[pos + 1] == *b) || (pos != 0 && line1[pos - 1] == *b);

        pos = line2.find(*a);
        if (pos != string::npos)
            return (line2[pos + 1] == *b) || (pos != 0 && line2[pos - 1] == *b);

        pos = line3.find(*a);
        if (pos != string::npos)
            return (line3[pos + 1] == *b) || (pos != 0 && line3[pos - 1] == *b);

        pos = line4.find(*a);
        if (pos != string::npos)
            return (line4[pos + 1] == *b) || (pos != 0 && line4[pos - 1] == *b);
    }
    return false;  // the original fell off the end without returning (undefined behaviour)
}
I run into that problem from time to time, especially in heavily threaded code.
You can use valgrind with the callgrind tool (--tool=callgrind), which will at least give you a more detailed view of the time spent per function call. There is also a KDE tool called kcachegrind to visualize the output (especially the call graph). I don't know whether you can install that on Cygwin, though.
If your overall goal is to find and remove performance problems, you might consider this.
I suspect it will show that essentially 100% of the CPU time is being spent in find and string comparison, leaving almost 0% for your own code. That's what happens when only the program counter is sampled.
If you sample the call stack instead, the lines of code that invoke find and the string comparisons will appear on the stack with a frequency equal to the time they are responsible for.
That's the glory of gprof.
P.S. You could also figure this out by single-stepping the code at the disassembly level.
What version of gprof are you using? Some old versions have this exact bug.
Run gprof --version and tell us the results.