seemingly complex date time calculations - c++

I have a function that should convert dates from the OLE format to the ISO 8601 (UTC) standard, but I don't understand how the integer representations of time work.
Can anyone give me an explanation?
The function is called RipOf_AfxTmFromOleDate and contains statements such as: nDaysAbsolute %= 146097L; Where does that value come from and how is it calculated?
Here nDaysAbsolute is a long.
The app gets values from an Oracle database and sends them to another application.

In 400 years, there are 97 leap years.
146097 = 365 * 400 + 97.
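To make the arithmetic concrete, here is a small standalone C++ sketch; the reduction step at the end mirrors the kind of statement quoted from RipOf_AfxTmFromOleDate, with a hypothetical nDaysAbsolute value:

#include <cassert>

int main()
{
    // A 400-year Gregorian cycle has 97 leap years:
    // 100 years divisible by 4, minus 4 divisible by 100, plus 1 divisible by 400.
    long leapDays = 400 / 4 - 400 / 100 + 400 / 400;   // 100 - 4 + 1 = 97
    long daysPer400Years = 365L * 400 + leapDays;       // 146097
    assert(daysPer400Years == 146097L);

    // The function's reduction step splits an absolute day count into
    // whole 400-year blocks plus the remainder within the current block:
    long nDaysAbsolute = 40000L;                        // hypothetical value
    long n400Years = nDaysAbsolute / 146097L;           // complete 400-year cycles
    nDaysAbsolute %= 146097L;                           // days left inside the cycle
    (void)n400Years;
    return 0;
}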

Take a look at Julian Day calculations.


Access Table - Expression Builder unexpected results

I have a huge CSV data file with 500,000+ rows and 70+ columns; running Excel queries over this much data causes my desktop to crash.
As an alternative, I've managed to import the CSV into Access.
The majority of the data fields I need to review/consider in further calculations I've imported as the "double" field type.
I guess the first question is: should I use single rather than double? The values I am considering will only ever report to 2 decimal places.
Within the imported table I've created some new columns, as I need to validate that the sum of the underlying values equals the totals reported.
A sum of 5 underlying columns (called SUMofService):
[Ancillary Costs] + [Incidental Costs] + [One-Off Costs] + [Ongoing Costs] + [Transaction Costs]
I've not reviewed all 500,000 rows, but this formula seems to be summing the values correctly.
Using this value, I've then created a new column to compare this total to the total in the report:
IIF([SUMofService] = [Total Service],"Match","No Match")
This also seems to work as expected, but there are instances where this field returns "No Match".
Looking at the underlying numbers, [SUMofService] and [Total Service] match, so I am confused as to why I am seeing the false results.
Could anyone review what I've detailed and perhaps provide a steer as to whether I've got something wrong?
There are probably better ways to achieve what I'm trying to do, but I haven't really used Access since school and you forget quite a bit in 30 years!
Any responses are much appreciated - I've googled this as much as I can, but I'm not 100% sure what to ask, and some responses are far beyond my level.
should I use single rather than double?
The values I am considering will only ever report to 2 decimal places.
Neither. Use Currency.
That will also provide correct results for:
IIF([SUMofService] = [Total Service],"Match","No Match")
Using Double or, indeed, Single will cause floating point errors - as in this classic example:
? 10.1 - 10.0
9.99999999999996E-02
' thus:
? 10.1 - 10.0 = 0.1
False
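The same effect is easy to reproduce in C++ with double arithmetic; a minimal sketch (not Access code, just an illustration of why the exact comparison fails and why a scaled-integer type such as Currency avoids it):

#include <cstdio>

int main()
{
    double a = 10.1, b = 10.0;
    std::printf("%.17g\n", a - b);        // prints roughly 0.099999999999999645
    std::printf("%d\n", (a - b) == 0.1);  // 0: the exact equality test fails

    // Currency-style fixed point: store the values as integer hundredths
    // (Currency itself uses ten-thousandths) and the comparison is exact.
    long long ca = 1010, cb = 1000;       // 10.10 and 10.00 in hundredths
    std::printf("%d\n", (ca - cb) == 10); // 1: exact match
    return 0;
}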

Library to discover dates from text?

I need to pull a date out of a string. Since not everyone uses the official ISO format when printing their dates, it is impractical to write a date parser for every possible date format that could be used, and I need to handle as many date formats as possible - I don't control the data and can't expect it to come in a specific format.
This seems like a problem that has probably already been solved ages ago, but my Google-fu is too weak to find the solution. :(
Does there already exist a C++ library that, given a string, will return the month, day, year, hour, minute, second, etc that is referenced in that string, if any?
Pseudocode:
string s1 = "There is an expected meteor shower this Thursday,"
"August 15th 2013 at 4:39 AM.";
string s2 = "20130815T04:39:00";
date d1 = magicConverter(s1);
date d2 = magicConverter(s2);
assert(d1 == d2);
You might use the code from here, but you need to configure a mask that tells the code which time format is used. If you write a class or routine that takes a mask and a string, extracts the time, and can print it in any format you like, you should be well prepared. You would have to look in more detail at whether it also supports day names and month names. I got this to work in Python with a module providing a function that does pretty much the same thing.
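As a sketch of the mask-driven approach - using the standard library's std::get_time rather than any particular third-party parser, and with a mask string that is purely my own assumption:

#include <ctime>
#include <iomanip>
#include <iostream>
#include <sstream>

int main()
{
    // The mask ("%Y-%m-%d %H:%M:%S") has to be supplied by the caller;
    // no parser can reliably infer it from an ambiguous string.
    std::istringstream in("2013-08-15 04:39:00");
    std::tm t = {};
    in >> std::get_time(&t, "%Y-%m-%d %H:%M:%S");
    if (in.fail())
        std::cout << "string did not match the mask\n";
    else
        std::cout << "month = " << t.tm_mon + 1      // tm_mon is 0-based
                  << ", day = " << t.tm_mday << '\n';
    return 0;
}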
For more detail:
Please look at the example 2013-08-03 again. Nobody - and therefore no computer - can tell you whether this date belongs to August or March without a mask saying YYYY-MM-DD or YYYY-DD-MM. The library can also only handle standard masked formats, so it would point you to August in this case. But, as you said, the input can be any date notation, so it does not need to follow standards and could also mean March. Another possibility is to infer the format from context (e.g. a table column that is all in the same format, by looking for which field increases), but that would also fail if you only have one date per month for a single year.
Another example: if I ask you about 2013-05-04, which month does it belong to? You might tell me April; I would reply "no, the 4th of May", and vice versa for the 4th of May and the 5th of April. If you can tell me how to solve a puzzle with two possible answers, I would understand your downvote - please think before downvoting someone trying to help you.

incorrect value in coledatetime

I've been fighting for a few days with COleDateTime in MFC.
I have a CTime with correct values: correct year, month, day, hours, minutes and seconds.
I tried a few ways to convert the CTime to a COleDateTime:
1. I passed the CTime fields to the COleDateTime constructor:
COleDateTime(int nYear, int nMonth, int nDay, int nHour, int nMin, int nSec);
2. I formatted the CTime with time.Format("%m/%d/%y %H:%M:%S");
and passed the result to COleDateTime::ParseDateTime.
3. I also tried COleDateTime::SetDateTime.
After that I'm getting incorrect minute values - 1-2 minutes more or less.
I have never seen this before and I couldn't find anything on the internet. Everybody talks about losing precision, but that is a matter of seconds, not minutes.
Please advise!
Thank you
I think the problem is that COleDateTime internally stores the value as a single floating-point number (the OLE DATE type), where the value represents the number of days since 30 December 1899.
As the number of days gets larger, the precision left over for the smaller fields (like minutes) decreases. For example, a floating-point type can accurately store the values 1000000 and 0.0000001 individually, but it can't store 1000000.0000001 exactly - it doesn't have enough bits of precision.
This limitation is hinted at in the MSDN documentation:
This type is also used to represent date-only or time-only values. By convention, the date 0 (30 December 1899) is used for time-only values. Similarly, the time 0:00 (midnight) is used for date-only values.
So basically, if you want a precise time, set the date to 30 December 1899.
It seems like Microsoft could have just designed this class to store the "days" portion as an integer, but hey, that would be too EASY.
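For what it's worth, a minimal MFC-style sketch of the two cases described above - a full date-plus-time built from CTime fields, and a time-only value whose date part stays at 30 December 1899. This assumes the standard COleDateTime/CTime API from <afxdisp.h> and is only an illustration:

#include <afxdisp.h>   // COleDateTime (MFC); ATL-only projects use <ATLComTime.h>

void Convert(const CTime& t)
{
    // Full date + time, built field by field from the CTime values.
    COleDateTime full;
    full.SetDateTime(t.GetYear(), t.GetMonth(), t.GetDay(),
                     t.GetHour(), t.GetMinute(), t.GetSecond());
    ASSERT(full.GetStatus() == COleDateTime::valid);

    // Time-only value: the date part stays at 0 (30 December 1899), which
    // leaves as much precision as possible for the time fraction.
    COleDateTime timeOnly;
    timeOnly.SetTime(t.GetHour(), t.GetMinute(), t.GetSecond());
    ASSERT(timeOnly.GetStatus() == COleDateTime::valid);
}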

c++ function -- which date is first / last?

One of my C++ functions does some calculations based on the values of other variables. The program asks for a bunch of information, including start and end dates for 2 separate events:
p1.start_date and p2.start_date; p1.end_date and p2.end_date, each of which has a day, month and year stored inside.
I need to set combined.start_date to whichever happens earlier (p1.start_date or p2.start_date), and I need to set combined.end_date to whichever happens later.
Could I please have some help in getting this started? Here is what I have now: http://pastebin.com/huJprtHj.
At least assuming the dates involved are reasonably current¹, stuff the month/day/year into a struct tm and use mktime to convert to a time_t; then you can compare the two time_t values directly.
If you need/want to support a wider range of dates, you might consider Ray Gardner's Julian Date routines.
¹ At least in a typical case, dates from 1970 to at least 2038 will work.
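A minimal sketch of that approach (the plain year/month/day parameters stand in for the Date members described in the question):

#include <ctime>

// Hypothetical helper: build a time_t from calendar fields.
std::time_t ToTimeT(int year, int month, int day)
{
    std::tm t = {};
    t.tm_year  = year - 1900;   // tm_year counts years since 1900
    t.tm_mon   = month - 1;     // tm_mon is 0-based
    t.tm_mday  = day;
    t.tm_hour  = 12;            // midday sidesteps DST edge cases for date-only values
    t.tm_isdst = -1;            // let mktime determine daylight saving
    return std::mktime(&t);
}

bool IsEarlier(int y1, int m1, int d1, int y2, int m2, int d2)
{
    return ToTimeT(y1, m1, d1) < ToTimeT(y2, m2, d2);
}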
Generally, calculations based on dates can be done in two ways:
1. Convert the date into a "number of days since some fixed date" (e.g. 1 Jan 1970).
2. Use the date components (year, month, day).
If this is all you need to do, just comparing each part (with the "highest first") will work just fine - you just need a compare function that can tell you if date1 is less than date2.
The rest of your question should be really simple programming.
Edit, to clarify: for date calculations, days since a fixed date is fine. The system library also has functions that work in seconds [and, on some systems, fractions of a second] for a complete time; that isn't needed for comparing dates when the time of day is not involved.
Make this function. I'm guessing that your dates are stored in an object named Date, since you don't specify.
bool operator< ( const Date& left, const Date &right )
{
    // Assuming Date exposes year, month and day members:
    // compare the most significant field first.
    if ( left.year  != right.year  ) return left.year  < right.year;
    if ( left.month != right.month ) return left.month < right.month;
    return left.day < right.day;
}
Then you can compare your date objects the same as if they were built-in types like int.
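For example, once operator< exists, picking the earlier start and the later end is just std::min/std::max (the Date/Period layout here is a guess based on the question):

#include <algorithm>

struct Date { int year, month, day; };                // hypothetical layout
bool operator<(const Date& left, const Date& right);  // defined as above

struct Period { Date start_date, end_date; };         // hypothetical wrapper

Period Combine(const Period& p1, const Period& p2)
{
    Period combined;
    combined.start_date = std::min(p1.start_date, p2.start_date);  // earlier start
    combined.end_date   = std::max(p1.end_date,   p2.end_date);    // later end
    return combined;
}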

Oracle OCCI stmt.setTimestamp insert TIMESTAMP(6): microseconds always 0

UPDATE: The "fraction of a second" parameter to Timestamp's constructor actually takes nanoseconds... I guessed it was hundredths of a second and my low values were rounded away. Question left for reference....
I'm struggling with Oracle's C++ library, OCCI. In summary:
- I create Timestamp objects and verify they're good to hundredths of a second (though I'd like more!).
- I use stmt.setTimestamp then executeUpdate() to insert into a TIMESTAMP(6) column, which should preserve microseconds.
- Selecting the row in Oracle SQL Developer, the sub-second component is always zeroed out, e.g. 14-JUL-11 06.03.27.000000000.
Problem
I need subsecond precision - hopefully microseconds! We've put a lot of work into capturing that precision in our servers and need (at least some of) it for analysis.
Details
I create a Timestamp from year/month/day hour/minute/second/millisecond, reducing the last to hundredths of a second as that seems to be what the constructor supports. (No Oracle documentation I can find specifies the interpretation, but in a fromText example "xff" clearly corresponds to a ".##" hundredths suffix in the value to convert. What's the point of TIMESTAMP(6) supporting 6 decimal places if you can't insert them?)
oracle::occi::Timestamp temp =
oracle::occi::Timestamp(_env, year, month, day,
hour, minute, second, millisecond / 10);
// re-extract the broken-down time from temp to prove it's stored successfully
int ye;
unsigned mo, da, ho, mi, se, fs;
temp.getDate(ye, mo, da);
temp.getTime(ho, mi, se, fs);
return temp;
Here, fs gets the milliseconds/10 value as expected.
I use this as in:
oracle::occi::Timestamp ts;
ts = _pImpl->makeOracleTimestamp(p->ATETimeStamp);
stmt.setTimestamp(11, ts);
Where field 11 is a TIMESTAMP(6).
Selecting the row in Oracle SQL Developer, the other parts of the timestamp column are correct, but the sub-second component is zeroed out, e.g. 14-JUL-11 06.03.27.000000000.
Any insight much appreciated!
(If relevant: MSVC++ 2005, Oracle 10.2.0.4 SDK, SQL Developer 3.0.04 - please ask if anything else might be relevant.)
Thanks,
Tony
Turns out the "fractional seconds" field is nominally in nanoseconds rather than hundredths. I wish Oracle would say that in their documentation! I say nominally because, if it really preserved the least-significant digits, the hundredths values I was passing would have appeared as a handful of nanoseconds and I might have guessed the problem immediately - instead it seems values below 100 nanoseconds are lost anyway (and perhaps larger values too - I haven't probed the cut-off point).
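In other words, the fix is to scale the fractional part to nanoseconds before constructing the Timestamp; a sketch of the corrected call, reusing the (hypothetical) variable names from the question:

#include <occi.h>   // oracle::occi::Timestamp, oracle::occi::Environment

oracle::occi::Timestamp makeOracleTimestamp(oracle::occi::Environment* _env,
                                            int year, unsigned month, unsigned day,
                                            unsigned hour, unsigned minute,
                                            unsigned second, unsigned millisecond)
{
    // Same construction as in the question, but the fractional-seconds
    // argument is scaled from milliseconds to nanoseconds.
    return oracle::occi::Timestamp(_env, year, month, day,
                                   hour, minute, second,
                                   millisecond * 1000000);
}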
Thanks to anyone who had a look at the question or tried some research / investigation.