Conversion of timespec for Windows 7 VS 2010 - c++

I am trying to build OpenVDB viewer for Windows 7 and bumped into this line of code:
secs = fabs(secs);
int isecs = int(secs);
struct timespec sleepTime = { isecs /*sec*/, int(1.0e9 * (secs - isecs)) /*nsec*/ };
nanosleep(&sleepTime, /*remainingTime=*/NULL);
Unfortunately, I don't know exactly what this code means, and I need to make it compatible with the VS2010 compiler in order to build the project.
What is the Windows equivalent of this code, or is there some other library I can use to port it easily?

Assuming secs is a float value giving the time the thread shall sleep in seconds, e.g.
float secs = 0.8312f;
you can replace that in the Windows version with:
float secs = 0.8312f;
DWORD delay = static_cast<DWORD>(fabs(secs * 1000.0f));
Sleep(delay);
Possibly you could add some checks to this (such as verifying that secs is not negative...).
In order to keep the main code base portable, you could create an extra module where you define your own portable sleep function, maybe with the signature void PortableSleep(float seconds);. Then place the Unix implementation in one .cpp file and the Win32 implementation in another, and link accordingly, as in the sketch below.
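A minimal sketch of such a module, assuming the name PortableSleep is your own choice (it is not an existing API); the two branches would normally live in separate .cpp files selected by the build system:
#ifdef _WIN32
#include <windows.h>
#include <cmath>

void PortableSleep(float seconds)
{
    // Sleep() takes milliseconds.
    Sleep(static_cast<DWORD>(std::fabs(seconds) * 1000.0f));
}
#else
#include <cmath>
#include <ctime>

void PortableSleep(float seconds)
{
    // nanosleep() takes whole seconds plus nanoseconds.
    double secs = std::fabs(seconds);
    int isecs = static_cast<int>(secs);
    struct timespec sleepTime = { isecs, static_cast<long>(1.0e9 * (secs - isecs)) };
    nanosleep(&sleepTime, /*remainingTime=*/NULL);
}
#endif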
You can also use std::this_thread::sleep_for(), if you don't mind spending time on figuring out how the <chrono> facilities work (VS lacks one feature, which makes it a bit harder to use).
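For example, on a compiler with C++11 <thread> support (which VS2010 does not have), a sketch could look like this:
#include <chrono>
#include <cmath>
#include <thread>

void PortableSleep(float seconds)
{
    // duration<float> holds fractional seconds directly.
    std::this_thread::sleep_for(std::chrono::duration<float>(std::fabs(seconds)));
}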

Related

How to use time library in Arduino?

I am trying to upload this simple program to my Arduino that gets the clock time:
#include <Time.h>
time_t nowTime;
void setup() {
  nowTime = now();
}
However, it fails to compile:
exit status 1
'now' was not declared in this scope
Why is now() not declared in this scope? The Time.h file was included. So why wouldn't now() be declared? How can I get around this?
The compilation fails because the Time.h file that the compiler finds has nothing to do with time libraries such as that by Paul Stoffregen (https://github.com/PaulStoffregen/Time).
I tried your sketch, compiled for an Arduino Uno, and saw the same error you see: Time.h resolves (the file exists somewhere), yet now() is not defined by that Time.h.
After searching my Windows PC for a while, I finally found what I think is the file that #include includes on my installation: C:\Users\Brad\AppData\Local\Arduino15\packages\arduino\hardware\avr\1.8.2\firmwares\wifishield\wifiHD\src\time.h or perhaps C:\Users\Brad\AppData\Local\Arduino15\packages\arduino\tools\avr-gcc\7.3.0-atmel3.6.1-arduino5\avr\include\time.h
Neither of those files defines the now() function.
If you want to use Paul Stoffregen's Time library, download and install it from https://github.com/PaulStoffregen/Time. If instead you wish to use Michael Margolis' Time library, you can find and install it in the Arduino IDE, under Tools / Manage Libraries..., by entering "Time" (without quotes) as the search term.
As others have pointed out, the Arduino environment doesn't always know the current date and time. The functions millis() and micros() return the number of milliseconds or microseconds, respectively, since the Arduino booted. For just looking at the passage of time, most people use millis() or micros() instead of a more complex library, as in the sketch below.
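A minimal sketch of that approach (the name lastTick is illustrative only):
unsigned long lastTick = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long nowMs = millis();       // milliseconds since boot
  if (nowMs - lastTick >= 1000UL) {     // unsigned subtraction survives rollover
    lastTick += 1000UL;
    Serial.println("one second elapsed");
  }
}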

linux c++: libaio callback function never called?

I'm on Ubuntu 16.10 with g++ 6.2, testing the libaio callback feature:
1. I am trying to test the io_set_callback() function.
2. The main thread and a child thread talk over a pipe.
3. The child thread writes periodically (driven by an alarm timer signal), and the main thread reads.
I want to use a callback function to receive notifications, but it doesn't work as expected: the callback function read_done is never called.
My questions:
1. I expected my program to call the read_done function, but it never does.
2. Why does the output print "Enter while" twice each time? I expected it to print only together with "thread write msg:...".
3. I tried commenting out the io_getevents line; same result. I'm not sure whether callback mode still needs io_getevents.
How can I fix my program so it works as I expected? Thanks.
You need to integrate io_queue_run(3) and io_queue_init(3) into your program. Though these aren't new functions, they don't seem to be in the manpages of a bunch of currently shipping distros. Here are a couple of the manpages:
http://manpages.ubuntu.com/manpages/precise/en/man3/io_queue_run.3.html
http://manpages.ubuntu.com/manpages/precise/en/man3/io_queue_init.3.html
And, of course, the manpages don't actually say it, but io_queue_run is what calls the callbacks that you set in io_set_callback.
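To illustrate the relationship, here's a hedged sketch of the intended flow, assuming fd is an AIO-capable descriptor (e.g. a regular file); read_done and submit_and_run are illustrative names, and note that io_queue_run polls with a zero timeout, so a real program would call it in a loop or add a blocking wait as discussed below:
#include <libaio.h>

static void read_done(io_context_t ctx, struct iocb *iocb, long res, long res2)
{
    /* res is the byte count (or a negative errno) of the completed read */
}

int submit_and_run(int fd, char *buf, size_t len)
{
    io_context_t ctx = 0;
    struct iocb cb, *cbs[1] = { &cb };

    if (io_queue_init(8, &ctx) < 0)   /* create the context */
        return -1;

    io_prep_pread(&cb, fd, buf, len, 0);
    io_set_callback(&cb, read_done);  /* stashes read_done in cb.data */

    if (io_submit(ctx, 1, cbs) != 1)
        return -1;

    return io_queue_run(ctx);         /* pulls events, invokes read_done */
}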
UPDATED: Ugh. Here's the source for io_queue_run from libaio-0.3.109 on CentOS/RHEL (LGPL license, Copyright 2002 Red Hat, Inc.):
int io_queue_run(io_context_t ctx)
{
    static struct timespec timeout = { 0, 0 };
    struct io_event event;
    int ret;

    /* FIXME: batch requests? */
    while (1 == (ret = io_getevents(ctx, 0, 1, &event, &timeout))) {
        io_callback_t cb = (io_callback_t)event.data;
        struct iocb *iocb = event.obj;

        cb(ctx, iocb, event.res, event.res2);
    }

    return ret;
}
You'd never want to actually call this without an io_queue_wait call, and io_queue_wait is commented out in the header included with both CentOS/RHEL 6 and 7, so I don't think you should call this function as shipped.
Instead, I think you should incorporate this source into your own code, then modify it to do what you want. You could pretty trivially add a timeout argument to this io_queue_run and just replace your call to io_getevents with it, instead of bothering with io_queue_wait (see the sketch below). There's even a patch that makes io_queue_run MUCH better: https://lwn.net/Articles/39285/
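A hedged sketch of that modification, based on the LGPL source above; my_io_queue_run is a hypothetical name, not part of libaio:
#include <libaio.h>

int my_io_queue_run(io_context_t ctx, struct timespec *timeout)
{
    struct timespec poll = { 0, 0 };
    struct io_event event;
    long min_nr = 1;   /* block until the first event (or timeout) */
    int ret;

    while (1 == (ret = io_getevents(ctx, min_nr, 1, &event, timeout))) {
        io_callback_t cb = (io_callback_t)event.data;
        struct iocb *iocb = event.obj;

        cb(ctx, iocb, event.res, event.res2);
        min_nr = 0;        /* then just drain whatever is ready */
        timeout = &poll;
    }
    return ret;
}
This way the caller decides how long to wait for the first completion, and the callbacks registered with io_set_callback still fire exactly as before.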

Getting incorrect file modification time using stat APIs

I see a strange behavior while fetching the modification time of a file.
We have been calling the _stat64 function to fetch the file modification time in our project, as follows.
int my_win_stat(const char *path, struct _stati64 *buf)
{
    if (_stati64(path, buf) == 0)
    {
        std::cout << buf->st_mtime << std::endl; // I added this to ensure the value is not changed elsewhere in the function.
    }
    ...........
    ...........
}
When I convert the epoch time returned in st_mtime using an online epoch converter, it shows a time 2:30 hrs ahead of the current time set on my system.
When I call the same API as follows from a different test project, I see the correct mtime (i.e. matching the file's mtime as shown by my system).
if (_stat64("D:\\engine_cost.frm", &buffer) == 0)
std::cout << buffer.st_mtime << std::endl;
I even called GetFileTime() and converted the FILETIME to epoch time with the help of this post; I get the correct time according to the time set on the system.
if (GetFileTime(hFile, &ftCreate, &ftAccess, &ftWrite))
{
    ULARGE_INTEGER ull;
    ull.LowPart = ftWrite.dwLowDateTime;
    ull.HighPart = ftWrite.dwHighDateTime;
    // FILETIME is 100-ns ticks since 1601-01-01; convert to Unix epoch seconds.
    std::cout << (ull.QuadPart / 10000000ULL - 11644473600ULL);
}
What I am not able to figure out is why the mtime differs when fetched through my existing project.
What parameters could affect the output of mtime?
What else could I try to debug the problem further?
Note
In VS2013, _stati64 is a macro which is replaced by _stat64.
File system is NTFS on windows 7.
Unix time is really easy to deal with. It's the number of seconds since Jan 1, 1970 (i.e. 0 represents that specific date).
Now, what you are saying is that you are testing your time (mtime) with a 3rd party tool in your browser and expecting that to give you the right answer. So... let's do a test; the following number is Sept 1, 2015 at 00:00:00 GMT.
1441065600
If you go to Epoch Converter and use that very value, does it give you the correct GMT? (if not, something is really wrong.) Then look at the local time value and see whether you get what you would expect for GMT midnight. (Note that I use GMT since Epoch Converter uses that abbreviation, but the correct abbreviation is UTC.)
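You can also check that value locally, without relying on any website; a minimal sketch:
#include <cstdio>
#include <ctime>

int main()
{
    time_t t = 1441065600;
    // gmtime() interprets the value as UTC; this should print:
    // Tue Sep  1 00:00:00 2015
    std::printf("%s", std::asctime(std::gmtime(&t)));
}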
It seems to me very likely that your code extracts the correct time, but the epoch converter fails on your computer for local time.
Note that you could just test in your C++ program with something like this:
std::cerr << ctime(&buf->st_mtime) << std::endl;
(just watch out as ctime() is not thread safe)
That will print the date according to your locale on your computer at runtime.
To better control the date format, use strftime(). That function works on a tm structure, so you have to first call gmtime() or localtime(), as in the sketch below.
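A small sketch of that approach (print_mtime is an illustrative name):
#include <cstdio>
#include <ctime>

void print_mtime(time_t mtime)
{
    char buf[64];
    struct tm tmv;

    tmv = *std::gmtime(&mtime);      // UTC
    std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", &tmv);
    std::printf("%s\n", buf);

    tmv = *std::localtime(&mtime);   // local time, honors the TZ setting
    std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S (local)", &tmv);
    std::printf("%s\n", buf);
}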
A year later I ran into a similar problem, though the scenario was a little different, and this time I understood why there was a gap of +2:30 hrs. I was executing the C++ program through a Perl script which in turn sets the timezone to 'GMT-3', while my machine was in timezone 'GMT+5:30'. (In POSIX TZ notation, 'GMT-3' means three hours ahead of GMT, hence the difference of 2:30 hrs.)
Why? As Harry mentioned in this post,
changing the timezone in Perl is probably causing the TZ environment variable to be set, which affects the C runtime library, as per the documentation for _tzset.
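A hedged sketch reproducing that effect with the MSVC runtime: setting TZ and calling _tzset() changes what ctime()/localtime() report, even though the system clock never changes.
#include <cstdio>
#include <cstdlib>
#include <ctime>

int main()
{
    time_t t = std::time(NULL);
    std::printf("default : %s", std::ctime(&t));

    _putenv_s("TZ", "GMT-3");   // what the Perl wrapper effectively did
    _tzset();                   // make the CRT re-read TZ
    std::printf("TZ=GMT-3: %s", std::ctime(&t));
}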

Speed of calling a function Delphi vs C++

I'm translating the C code from the handmadehero.org project to Delphi code. I ran into a performance issue on a specific piece:
Original:
inline real32
Win32GetSecondsElapsed(LARGE_INTEGER Start, LARGE_INTEGER End)
{
    real32 Result = ((real32)(End.QuadPart - Start.QuadPart) /
                     (real32)GlobalPerfCountFrequency);
    return(Result);
}
My version:
function Win32GetSecondsElapsed(Start, &End: LARGE_INTEGER): real32; inline;
begin
  Result := (&End.QuadPart - Start.QuadPart) / GlobalPerfCountFrequency;
end;
After calling the sleep function there is this code to make sure that we hit the target frame rate:
SleepMilliSeconds = <some code that calcs how long to wait to hit target frame rate>
Sleep(SleepMilliSeconds);
real32 TestSecondsElapsedForFrame = Win32GetSecondsElapsed(LastCounter, Win32GetWallClock());
Assert(TestSecondsElapsedForFrame < TargetSecondsPerFrame);
If I use the same code (only the Delphi version of it), I get an assertion failure.
If I change the code to this:
TestSecondsElapsedForFrame := ((LastCounter.QuadPart - Win32GetWallClock().QuadPart) / GlobalPerfCountFrequency);
Assert(TestSecondsElapsedForFrame < TargetSecondsPerFrame);
The error goes away, so the call to the function in Delphi takes long enough to push me over the time allowed for the sleep to complete.
Does anyone know how I can fix this?
I have tried changing the parameters in the Win32GetSecondsElapsed to be passed as pointers, but it did not help.
I thought it may be because it's being passed by value and a copy needs to be made, but that is not it.
I think that the 'inline' directive is not taking effect.
I believe it should be possible for a Delphi application to be just as fast as a C application.
This function call is not causing any performance problem; the real issue is in your rewritten code.
You're passing Win32GetWallClock() as End, and LastCounter as Start, so the function correctly computes
Win32GetWallClock().QuadPart - LastCounter.QuadPart
but your hand-inlined version computes
LastCounter.QuadPart - Win32GetWallClock().QuadPart
which is zero or negative, so the assertion passes trivially. Swap the operands in the inlined expression and the two versions behave the same.

C stat() and daylight savings

I'm having serious trouble with the function stat(). I have an application compiled under Cygwin on Windows 7, and the same app compiled with MSVC++ on Windows 7. The app contains the following code:
struct stat stb;
memset( &stb, 0, sizeof( stb ) );
stat( szPath, &stb );
cout << hex << uppercase << setw(8) << stb.st_mtime << endl;
szPath is a path to a file. The file does not get modified in any way by the app. The problem is that I get different results for some files. For example:
cygwin version: 40216D72
MSVC++ version: 40217B82
The difference is always 0xE10 = 3600 seconds = 1 hour.
Using Google, I found this, which seems to be exactly the issue I'm seeing. Is there a portable way to fix this? I cannot use any WinAPI calls. I'm looking for the simplest and most reliable solution, but if it needs to be complicated, so be it. Reliability and portability (Windows + Linux) are the most important things here.
To obtain both reliability and portability here (or in most situations of this sort where two platforms do different things with what should be the "same" code), you will probably need to use some form of target-dependent code, like:
#ifdef _MSC_VER
// do MSVC++-specific code
#else
// do Linux/Cygwin/generic code
#endif
You then ought to be able to use WinAPI calls in the _MSC_VER section, because that section will only be compiled when you're using MSVC++.
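A hedged sketch of that shape, reusing the FILETIME-to-epoch conversion shown in the previous question (portable_mtime is an illustrative name):
#ifdef _MSC_VER
#include <windows.h>

// Read the mtime straight from the FILETIME, bypassing the CRT's DST handling.
long long portable_mtime(const char *path)
{
    WIN32_FILE_ATTRIBUTE_DATA fad;
    if (!GetFileAttributesExA(path, GetFileExInfoStandard, &fad))
        return -1;
    ULARGE_INTEGER ull;
    ull.LowPart  = fad.ftLastWriteTime.dwLowDateTime;
    ull.HighPart = fad.ftLastWriteTime.dwHighDateTime;
    return (long long)(ull.QuadPart / 10000000ULL - 11644473600ULL);
}
#else
#include <sys/stat.h>

long long portable_mtime(const char *path)
{
    struct stat stb;
    if (stat(path, &stb) != 0)
        return -1;
    return (long long)stb.st_mtime;
}
#endif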
Apparently, per http://support.microsoft.com/kb/190315, this is actually a FEATURE, although it really seems like a bug to me. They say you can work around it by clearing "Automatically adjust clock for daylight saving changes" in the Date/Time Properties dialog box for the system clock.
If you have the date of the file, you can use the relative state of DST to determine whether you need to make a one-hour adjustment yourself, in MSVC only, but that's hacky as well (see the sketch below).
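A hedged sketch of that adjustment, MSVC only; adjust_mtime_for_dst is a hypothetical name, and the direction of the shift should be verified against your own files:
#ifdef _MSC_VER
#include <time.h>

time_t adjust_mtime_for_dst(time_t mtime)
{
    struct tm tmFile, tmNow;
    time_t now = time(NULL);

    localtime_s(&tmFile, &mtime);
    localtime_s(&tmNow, &now);

    // If DST was in effect at one moment but not the other,
    // shift the timestamp by the one-hour difference.
    if (tmFile.tm_isdst != tmNow.tm_isdst)
        mtime += (time_t)(tmNow.tm_isdst - tmFile.tm_isdst) * 3600;
    return mtime;
}
#endif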
Not sure about the functions you are using, but I do know that Windows and Linux use the system clock differently. Windows stores local time (including DST) on the system clock. Linux (at least traditionally) stores GMT (or UTC, to be precise) on the system clock. I don't know if this applies to Cygwin.
If a Linux system shares hardware with Windows, it needs to be configured to use the system clock the way Windows does, or it gets messed up every time Windows adjusts for DST.