C++ function -- which date is first / last?

One of my C++ functions does some calculations based on the values of other variables. The program asks for a bunch of information, including a start date and an end date for two separate events:
p1.start_date and p2.start_date, and p1.end_date and p2.end_date, each of which has a day, month, and year stored inside.
I need to set combined.start_date to whichever start date happens earlier (p1.start_date or p2.start_date), and combined.end_date to whichever end date happens later.
Could I please have some help in getting this started? Here is what I have now: http://pastebin.com/huJprtHj.

At least assuming the dates involved are reasonably current (in a typical implementation, dates from 1970 to at least 2038 will work), stuff the month/day/year into a struct tm and use mktime to convert to a time_t; then you can compare the two time_ts directly.
If you need/want to support a wider range of dates, you might consider Ray Gardner's Julian date routines.
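A minimal, untested sketch of that approach, assuming Date and Event structs shaped like the ones the question implies (the actual layout is in the pastebin, which isn't shown here):

#include <ctime>

// Hypothetical layout of the question's types.
struct Date  { int day, month, year; };
struct Event { Date start_date, end_date; };

// Convert a Date to a time_t via mktime so two dates can be compared directly.
std::time_t to_time_t(const Date& d) {
    std::tm tm = {};              // zero-initialize; time of day stays 00:00:00
    tm.tm_mday = d.day;
    tm.tm_mon  = d.month - 1;     // struct tm months are 0-based
    tm.tm_year = d.year - 1900;   // struct tm years count from 1900
    return std::mktime(&tm);
}

// combined gets the earlier start date and the later end date.
void combine(const Event& p1, const Event& p2, Event& combined) {
    combined.start_date = (to_time_t(p1.start_date) < to_time_t(p2.start_date))
                              ? p1.start_date : p2.start_date;
    combined.end_date   = (to_time_t(p1.end_date)   > to_time_t(p2.end_date))
                              ? p1.end_date : p2.end_date;
}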

Generally, calculations based on dates can be done in two ways:
1. Convert the date into a "number of days since some fixed date" (e.g. 1 Jan 1970).
2. Use the date components (year, month, day) directly.
If comparison is all you need, just comparing each part, most significant first (year, then month, then day), will work just fine - you just need a compare function that can tell you whether date1 is less than date2.
The rest of your question should be really simple programming.
Edit: to clarify: for DATE calculations, counting days from a set date is fine. The system library has functions that use seconds [and on some systems, fractions of a second] for a complete time down to seconds, but that precision is not required for comparing dates when the time of day is not involved.
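To illustrate the first option, here is a sketch (my own addition, not part of the original answer) of the well-known civil-calendar day-count conversion; once each date is a single day number, comparisons and differences become trivial integer operations:

// Convert year/month/day to a serial day number (days since 1970-01-01),
// valid for the proleptic Gregorian calendar.
int days_from_civil(int y, unsigned m, unsigned d) {
    y -= m <= 2;                                   // treat Jan/Feb as months 13/14 of the previous year
    const int era = (y >= 0 ? y : y - 399) / 400;
    const unsigned yoe = static_cast<unsigned>(y - era * 400);            // year of era   [0, 399]
    const unsigned doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;  // day of year   [0, 365]
    const unsigned doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;           // day of era    [0, 146096]
    return era * 146097 + static_cast<int>(doe) - 719468;                 // shift so 1970-01-01 == 0
}

// date1 is earlier than date2 exactly when its day number is smaller.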

Write this function (I'm guessing that your dates are stored in a class named Date, since you don't specify):
bool operator<(const Date& left, const Date& right)
{
    // ...
}
Then you can compare your date objects the same as if they were built-in types like int.
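For example, one possible body, assuming Date exposes year, month, and day members (std::tie does the most-significant-first, lexicographic comparison for you):

#include <tuple>

bool operator<(const Date& left, const Date& right)
{
    // Compare year first, then month, then day.
    return std::tie(left.year, left.month, left.day) <
           std::tie(right.year, right.month, right.day);
}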

Related

FILETIME to/from ISO 8601 with Win32 API. Getting DST right?

Wrote something in .NET; it works well. Now I am trying to rewrite it as a shell extension with the Win32 API. Ultimately I want to convert FILETIMEs to and from ISO-8601 strings. This is doable without fuss, using GetTimeZoneInformation and FileTimeToSystemTime and SystemTimeToTzSpecificLocalTime and StringCchPrintf to assemble the members of the SYSTEMTIME and TIME_ZONE_INFORMATION structs into a string.
The problem, as usual when working with date/times, is Daylight Saving Time. Using GetTimeZoneInformation tells me the UTC offset that's in effect now. Using .NET's DateTime.ToString("o") takes into account the daylight saving time at the time represented in the DateTime.
Example for the same FILETIME:
Output of ToString("o"): 2017-06-21T12:00:00.0000000-05:00
Output of chained APIs: 2017-06-21T12:00:00-06:00
The UTC offset is wrong coming from the chained API calls. How does .NET's DateTime do it? How do we replicate that in Win32? Is there something like GetTimeZoneInformationForYear, but instead of for a year, for a moment in local time?
First, use the DYNAMIC_TIME_ZONE_INFORMATION structure and GetDynamicTimeZoneInformation.
DYNAMIC_TIME_ZONE_INFORMATION and TIME_ZONE_INFORMATION also have a DaylightBias member:
The bias value to be used during local time translations that occur during daylight saving time. This member is ignored if a value for the DaylightDate member is not supplied.
This value is added to the value of the Bias member to form the bias used during daylight saving time. In most time zones, the value of this member is –60.
So, if the date is in daylight saving time, you need to add this DaylightBias to Bias.
In addition, you can determine whether a given date falls within daylight saving time, following the description of DaylightDate:
To select the correct day in the month, set the wYear member to zero, the wHour and wMinute members to the transition time, the wDayOfWeek member to the appropriate weekday, and the wDay member to indicate the occurrence of the day of the week within the month (1 to 5, where 5 indicates the final occurrence during the month if that day of the week does not occur 5 times).
If the wYear member is not zero, the transition date is absolute; it will only occur one time. Otherwise, it is a relative date that occurs yearly.
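A rough, untested sketch of how those pieces can fit together; the helper name FormatIso8601 and the DST test by double conversion are my own additions, not from the answer. It fetches the time-zone rules for the year of the timestamp, decides whether that moment is in DST, and builds the offset from Bias plus DaylightBias (or StandardBias):

#include <windows.h>
#include <strsafe.h>

HRESULT FormatIso8601(const FILETIME& ftUtc, wchar_t* buf, size_t cch)
{
    SYSTEMTIME stUtc;
    if (!FileTimeToSystemTime(&ftUtc, &stUtc))
        return HRESULT_FROM_WIN32(GetLastError());

    // Rules for the year of the timestamp, not for "now".
    DYNAMIC_TIME_ZONE_INFORMATION dtzi;
    GetDynamicTimeZoneInformation(&dtzi);
    TIME_ZONE_INFORMATION tzi;
    if (!GetTimeZoneInformationForYear(stUtc.wYear, &dtzi, &tzi))
        return HRESULT_FROM_WIN32(GetLastError());

    // Local time using the full rules (standard + daylight)...
    SYSTEMTIME stLocal;
    SystemTimeToTzSpecificLocalTime(&tzi, &stUtc, &stLocal);

    // ...and using standard time only (DaylightDate zeroed disables DST).
    TIME_ZONE_INFORMATION tziStd = tzi;
    ZeroMemory(&tziStd.DaylightDate, sizeof(tziStd.DaylightDate));
    tziStd.DaylightBias = 0;
    SYSTEMTIME stStd;
    SystemTimeToTzSpecificLocalTime(&tziStd, &stUtc, &stStd);

    // If the two conversions differ, the moment is in DST, so the effective
    // bias is Bias + DaylightBias; otherwise it is Bias + StandardBias.
    BOOL inDst = (stLocal.wDay    != stStd.wDay)  ||
                 (stLocal.wHour   != stStd.wHour) ||
                 (stLocal.wMinute != stStd.wMinute);
    LONG biasMin = tzi.Bias + (inDst ? tzi.DaylightBias : tzi.StandardBias);

    // ISO 8601 offsets are the negation of the Win32 bias (bias = UTC - local).
    LONG offMin = -biasMin;
    wchar_t sign = (offMin < 0) ? L'-' : L'+';
    offMin = (offMin < 0) ? -offMin : offMin;

    return StringCchPrintfW(buf, cch,
        L"%04u-%02u-%02uT%02u:%02u:%02u%c%02ld:%02ld",
        stLocal.wYear, stLocal.wMonth, stLocal.wDay,
        stLocal.wHour, stLocal.wMinute, stLocal.wSecond,
        sign, offMin / 60, offMin % 60);
}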

C++ Quantlib Vanilla Swap: setting future fixing dates and gearing for floating leg

Is it possible for the user to change the future fixing dates and the gearing of the floating leg in QuantLib?
First, when QuantLib calculates the NPV of the floating leg, it goes into couponpricer.hpp and calls the inline function BlackIborCouponPricer::swapletPrice(). Inside this function there is a parameter called gearing_, which is automatically set to 1 in my case. If I need to change this to another value, say 0.8, where should I make that change?
Second, all my future fixing dates are the same as the date vector generated in the floating-leg schedule, i.e. the fixing dates coincide with the accrual period start dates. Is it possible to change these fixing dates so that they differ from the accrual period start dates, say 2 business days before the accrual start dates, subject to the usual business-day convention adjustment? Alternatively, is it possible for me to pass a date vector storing these fixing dates?
Many thanks.
VanillaSwap doesn't take gearings as a constructor argument (I guess the idea was to keep it simple). Instead, you can create the fixed and floating legs separately using the FixedRateLeg and IborLeg classes and pass them to a Swap instance. You can see an example of that in SwapTest::testInArrears(), in the test-suite/swap.cpp file.
As for the fixing dates: when you build the IborIndex instance to be passed to IborLeg, you can pass a number of fixing days to its constructor. If you're using the available indexes such as Euribor or USDLibor, though, they already use 2 fixing days (as well as the correct calendar and business-day convention).
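A rough sketch (not compiled) of that approach, assuming the two schedules and the IborIndex are built elsewhere; the notional, the 3% coupon, and the 0.8 gearing below are placeholder values:

#include <ql/quantlib.hpp>
using namespace QuantLib;

boost::shared_ptr<Swap> makeGearedSwap(
        const Schedule& fixedSchedule,
        const Schedule& floatSchedule,
        const boost::shared_ptr<IborIndex>& index) {
    Real notional = 1000000.0;

    Leg fixedLeg = FixedRateLeg(fixedSchedule)
        .withNotionals(notional)
        .withCouponRates(0.03, Thirty360(Thirty360::BondBasis));

    Leg floatingLeg = IborLeg(floatSchedule, index)
        .withNotionals(notional)
        .withPaymentDayCounter(Actual360())
        .withFixingDays(2)        // fixings 2 business days before accrual start
        .withGearings(0.8)        // the gearing asked about in the question
        .withSpreads(0.0);

    // A coupon pricer (e.g. BlackIborCouponPricer) still has to be attached to
    // the floating coupons with setCouponPricer before the swap can be priced.

    // First leg is paid, second leg is received.
    return boost::shared_ptr<Swap>(new Swap(fixedLeg, floatingLeg));
}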

Quantlib USDLibor, how does it knows which is the correct fixing date?

When pricing floating-rate bonds, one needs to work with instances of the USDLibor class and add new fixings for given dates (each of which is the corresponding reset date minus two business days). However, sometimes it complains at runtime that the fixing for a specified date is not available (meaning that one has provided the fixing for the wrong date).
How do instances of USDLibor know which is the correct date? I ask because perhaps I can solve this problem by retrieving the correct date directly, the same way USDLibor obtains it, instead of figuring it out myself.
The fixing date is two business days before the reset date, as you said (the implementation of the logic is in the FloatingRateCoupon::fixingDate() method, if you want to check it).
However, you might be using the wrong business days. USD LIBOR is fixed in London, so holidays are determined according to the UK calendar, not the US calendar.
In any case, once you have built the bond, you can ask the cashflows themselves for their fixings dates with something like this (which I haven't tested, so it might not even compile, but you should get the idea):
using namespace QuantLib;

Leg cashflows = bond.cashflows();
std::vector<Date> fixingDates;
for (Size i = 0; i < cashflows.size(); ++i) {
    boost::shared_ptr<FloatingRateCoupon> coupon =
        boost::dynamic_pointer_cast<FloatingRateCoupon>(cashflows[i]);
    if (coupon)
        fixingDates.push_back(coupon->fixingDate());
}
after which the fixingDates vector will contain (not surprisingly) the fixing dates.

Library to discover dates from text?

I need to pull a date out of a string. Since not everyone uses the official ISO format when printing their dates, it is impractical to write a date parser for every possible date format that could be used, and I need to handle as many date formats as possible - I don't control the data and can't expect it to come in a specific format.
This seems like a problem that has probably already been solved ages ago, but my Google-fu is too weak to find the solution. :(
Does there already exist a C++ library that, given a string, will return the month, day, year, hour, minute, second, etc that is referenced in that string, if any?
Pseudocode:
string s1 = "There is an expected meteor shower this Thursday, "
            "August 15th 2013 at 4:39 AM.";
string s2 = "20130815T04:39:00";
date d1 = magicConverter(s1);
date d2 = magicConverter(s2);
assert(d1 == d2);
You might use the code from here, but you need to configure a mask that tells the code which time format is used. If you write a class routine that takes a mask and a string, extracts the time, and can print it in any format you like, you should be well prepared. You would have to look in more detail at whether it also supports day names and month names. I got this to work in Python with a module providing a function that seems pretty much the same.
For more detail:
Please look at the example 2013-08-03 again. No person, and therefore no computer, can tell you whether this date belongs to August or to March, except by having a mask saying YYYY-MM-DD or YYYY-DD-MM. Also, such a library may only handle standard masks, so it would probably lead you to August in this case. But as you said, the input can be any date declaration, so it does not need to follow standards, and could therefore also mean March. Another possibility is to infer the format from context (e.g. a table with a column of dates all in the same format, by looking for the increase), which would also fail if you only have, say, one date per month for a single year.
Another example: if I ask you about 2013-05-04, to which month does it belong? You might tell me April; I would reply "no, the 4th of May", and vice versa for May and the 5th of April. If you can tell me how to solve a puzzle with two possible solutions, I would understand your downvote... please think before downvoting someone who is trying to help you.
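A small illustration (my own, not from the answer) of why the mask has to come from the caller: with std::get_time, the same string yields different months depending on which format mask is supplied.

#include <ctime>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string s = "2013-08-03";

    std::tm a = {}, b = {};
    std::istringstream in1(s), in2(s);
    in1 >> std::get_time(&a, "%Y-%m-%d");   // read as YYYY-MM-DD
    in2 >> std::get_time(&b, "%Y-%d-%m");   // read as YYYY-DD-MM

    // tm_mon is zero-based, hence the +1.
    std::cout << "with mask %Y-%m-%d: month " << a.tm_mon + 1 << "\n"   // 8 (August)
              << "with mask %Y-%d-%m: month " << b.tm_mon + 1 << "\n";  // 3 (March)
}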

Oracle OCCI stmt.setTimestamp insert TIMESTAMP(6): microseconds always 0

UPDATE: The "fraction of a second" parameter to Timestamp's constructor actually takes nanoseconds... I guessed it was hundredths of a second and my low values were rounded away. Question left for reference....
I'm struggling with Oracle's C++ library, OCCI. In summary, I am:
creating Timestamp objects and verifying they're good to hundredths of a second (though I'd like more!)
using stmt.setTimestamp then executeUpdate() to insert into a TIMESTAMP(6) column, which should preserve microseconds
selecting the row in Oracle SQL Developer: the sub-second component is always zeroed out, e.g. 14-JUL-11 06.03.27.000000000.
Problem
I need subsecond precision - hopefully microseconds! We've put a lot of work into capturing that precision in our servers and need (at least some of) it for analysis.
Details
I create a Timestamp from year/month/day hour/minute/second/millisecond, reducing the last to hundredths of a second as that seems to be what the constructor supports. (No Oracle documentation I can find specifies the interpretation, but in a fromText example "xff" clearly corresponds to a ".##" hundredths suffix in the value to convert. What's the point of TIMESTAMP(6) supporting 6 decimal places if you can't insert them?)
oracle::occi::Timestamp temp =
    oracle::occi::Timestamp(_env, year, month, day,
                            hour, minute, second, millisecond / 10);

// re-extract the broken-down time from temp to prove it's stored successfully
int ye;
unsigned mo, da, ho, mi, se, fs;
temp.getDate(ye, mo, da);
temp.getTime(ho, mi, se, fs);
return temp;
Here, fs gets the milliseconds/10 value as expected.
I use this as in:
oracle::occi::Timestamp ts;
ts = _pImpl->makeOracleTimestamp(p->ATETimeStamp);
stmt.setTimestamp(11, ts);
Where field 11 is a TIMESTAMP(6).
Selecting the row in Oracle SQL Developer, the other parts of the timestamp column are correct but the sub-second component is zeroed out, e.g. 14-JUL-11 06.03.27.000000000.
Any insight much appreciated!
(If relevant, using MSVC++ 2005, Oracle 10.2.0.4 sdk, SQL Developer 3.0.04 - please ask if something else might be relevant).
Thanks,
Tony
Turns out the "fractional seconds" field is nominally in nanoseconds rather than hundredths. I wish Oracle would say that in their documents! I say nominally because if it really preserved the least-significant digits then the hundredths values I had would have appeared as a number of nanoseconds and I might have immediately guessed at the problem - instead it seems values < 100 nanoseconds are lost anyway (and perhaps bigger - I haven't probed the cut-off point).
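So the fix implied for the snippet above (untested, using the same hypothetical variables) is to scale milliseconds up to nanoseconds instead of dividing them down to hundredths:

// Pass the fractional-seconds argument in nanoseconds: 1 ms = 1,000,000 ns.
oracle::occi::Timestamp temp =
    oracle::occi::Timestamp(_env, year, month, day,
                            hour, minute, second,
                            millisecond * 1000000);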
Thanks to anyone who had a look at the question or tried some research / investigation.