Clean or dirty price for FixedRateBondHelper - QuantLib

I would like to construct a spot curve from supplied bond prices. I know that the curve has to be constructed from dirty prices (i.e. the ones that include accrued interest). However, from the FittedBondCurve.cpp example posted on quantlib.org, it appears that the FixedRateBondHelper class is initialized with clean prices.
So, my question is: does it mean that FixedRateBondHelper takes care of computing accrued interest and converting clean price to dirty price? Or is it something that a user should do? I believe it's the former but wanted to make sure.

The helper doesn't, but the fitting algorithm does. If you look at the FittedBondDiscountCurve::FittingMethod::FittingCost::value method, you'll cringe a bit at the nested inner classes, but then you'll see that the model price is calculated by adding the discounted future cash flows and subtracting the accrued amount.
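To see the relationship the fitting cost relies on, here is a small sketch (the function name and setup are mine; it simply restates "clean = dirty minus accrued" with the usual Bond methods, assuming the bond and the discount curve handle are built elsewhere):

#include <ql/quantlib.hpp>
using namespace QuantLib;

// Sketch only: the "model clean price" compared against the quoted price is
// the PV of the remaining cash flows (the dirty price) minus accrued interest.
Real modelCleanPrice(FixedRateBond& bond,
                     const Handle<YieldTermStructure>& discountCurve) {
    bond.setPricingEngine(boost::shared_ptr<PricingEngine>(
        new DiscountingBondEngine(discountCurve)));
    Real dirty = bond.dirtyPrice();        // discounted future cash flows
    return dirty - bond.accruedAmount();   // comparable to the helper's clean quote
}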
A further note: in recent releases, the bond helpers have been given the ability to work with quoted dirty prices when bootstrapping a curve (see the last parameter of their constructors, useCleanPrice, which defaults to true but can be set to false to use dirty prices). However, the FittedBondDiscountCurve class is not yet aware of this change, so setting useCleanPrice to false would break the algorithm. I'll try to fix this in a future release.

Related

LP warehouse problem with a certain truck capacity

The problem that I need to solve is almost like the basic LP warehouse problem, where you have n warehouses, each with a certain amount of a product, and m shops, each demanding a certain amount of that product. The goal is to minimize the number of kilometres driven by the trucks that have to deliver the products from the warehouses to the shops.
This was the easy part, I already identified the constraints and the objective function.
The part that I can't get my head around is that the truck that delivers the products has a certain capacity, C. Every single truck has the same capacity. I can't tell if that piece of information is really relevant and should be included in some kind of constraint or something. I would really appreciate a hint, because I've been stuck on this part for a while now and couldn't find any example of this exact type of problem on the Internet.
The number of trucks needed can be bounded by
numTrucks(i,j) * capacity >= shipment(i,j)
Then add a term to the objective that minimizes the number of trucks.
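Putting it together as a sketch (the names are my own: shipment(i,j) is the quantity sent from warehouse i to shop j, dist(i,j) the distance, supply(i) and demand(j) the warehouse and shop quantities, penalty a weight on the extra term):

minimize    sum_{i,j} dist(i,j) * shipment(i,j)  +  penalty * sum_{i,j} numTrucks(i,j)
subject to  numTrucks(i,j) * C >= shipment(i,j)     for all i, j
            sum_j shipment(i,j) <= supply(i)        for all i
            sum_i shipment(i,j) >= demand(j)        for all j
            shipment(i,j) >= 0,  numTrucks(i,j) a non-negative integer

Depending on how you count kilometres, the first term might instead be sum_{i,j} dist(i,j) * numTrucks(i,j) (one trip per truck). Either way, requiring numTrucks to be integer turns the LP into a mixed-integer program, so you'll need a MIP solver or accept the LP relaxation as a bound.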

Matched-maturity vanilla swap in Quantlib

Firstly apologies if this has been answered elsewhere.
I am using QuantLib (via Excel) to build a "standard" bond pricing sheet: prices, yields, spline AND matched-maturity ASW.
I can price the bonds, and have successfully built a forecast (Euribor) and discount (EONIA) curve. I can use qlMakeVanillaSwap() to define a spot-start swap by tenor (e.g. "1Y", "2Y", etc.) and it works fine. However, I am struggling to define a "broken date" swap, i.e. one which starts T+2 and ends on a given date (and so usually has a short stub on the first payment), to match the bond maturity. All the examples I can find have integer year tenors.
I would be grateful if someone could point me to the right method (can be in python, C++ or Excel). Or do I have to go down the route of creating explicit fixed and floating rate schedules for the swaps?
The answer seems to be: Yes, I do have to create explicit fixed and floating rate schedules, using qlSchedule(), but it turns out to be not too onerous. NB. I am pricing a vanilla EUR ABB vs 6m Euribor swap.
As for pricing, it seems qlMakeVanillaSwap() is doing a few helpful things in one call, but only if your swap has a whole-period tenor (e.g. "1Y"). I found the answer for what I wanted to do in the example sheet that came with the QuantLibXL download package.
The other thing that qlMakeVanillaSwap() is doing (in addition to creating the schedules) is setting the Pricing Engine (which is used to discount the cashflows). In the longer version you have to (a) set it yourself using qlInstrumentSetPricingEngine() and (b) pass the result of that call to the Trigger parameter of qlVanillaSwapFairRate(), to establish the calculation order.
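For anyone doing the same thing in C++ rather than through QuantLibXL, here is a minimal sketch of the idea (the dates, conventions, notional, curves and function name below are my own illustrative choices, not from the spreadsheet):

#include <ql/quantlib.hpp>
using namespace QuantLib;

// forecastCurve (Euribor 6m projection) and discountCurve (EONIA) are assumed
// to be Handle<YieldTermStructure> instances built elsewhere.
Rate matchedMaturityFairRate(const Date& start,          // T+2 settlement
                             const Date& bondMaturity,   // "broken" end date
                             const Handle<YieldTermStructure>& forecastCurve,
                             const Handle<YieldTermStructure>& discountCurve) {
    Calendar cal = TARGET();
    boost::shared_ptr<IborIndex> euribor6m(new Euribor6M(forecastCurve));

    // Backward date generation puts the short stub on the first period.
    Schedule fixedSchedule(start, bondMaturity, Period(Annual), cal,
                           ModifiedFollowing, ModifiedFollowing,
                           DateGeneration::Backward, false);
    Schedule floatSchedule(start, bondMaturity, Period(Semiannual), cal,
                           ModifiedFollowing, ModifiedFollowing,
                           DateGeneration::Backward, false);

    VanillaSwap swap(VanillaSwap::Payer, 1000000.0,
                     fixedSchedule, 0.0, Thirty360(Thirty360::BondBasis),  // dummy fixed rate
                     floatSchedule, euribor6m, 0.0, Actual360());
    swap.setPricingEngine(boost::shared_ptr<PricingEngine>(
        new DiscountingSwapEngine(discountCurve)));
    return swap.fairRate();   // the matched-maturity swap rate
}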

C++ Quantlib Vanilla Swap: setting future fixing dates and gearing for floating leg

Is it possible for the user to change the future fixing dates and gearing of the floating leg in QuantLib?
First, when QuantLib calculates the NPV for the floating leg, it goes into couponpricer.hpp and calls the inline function BlackIborCouponPricer::swapletPrice(). Inside this function there is a parameter called gearing_, which is automatically set to 1 in my case. If I need to change this to another value, say 0.8, where should I make this change?
Second, all my future fixing dates are the same as the date vector generated in floating leg schedule. i.e. fixing dates are the same as accrual period starting dates. Is it possible to change these fixing dates to be different from the accrual period starting dates, say 2 business days before accrual starting dates subject to normal business day convention adjustment? Alternatively, is it possible for me to pass a date vector to store these fixing dates?
Many thanks.
VanillaSwap doesn't take gearings as a constructor argument (I guess the idea was to keep it simple). Instead, you can create the fixed and floating legs separately using the FixedRateLeg and IborLeg classes and pass them to a Swap instance. You can see an example of that in SwapTest::testInArrears(), in the test-suite/swap.cpp file.
As for the fixing dates: when you build the IborIndex instance to be passed to IborLeg, you can pass a number of fixing days to its constructor. If you're using the available indexes such as Euribor or USDLibor, though, they already use 2 fixing days (as well as the correct calendar and business-day convention).
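A sketch of what that might look like (the notional, rates, day counts and the 0.8 gearing are placeholder values; the schedules and curves are assumed to be built already):

#include <ql/quantlib.hpp>
using namespace QuantLib;

// Build the two legs explicitly so the floating leg can carry a gearing and
// its own number of fixing days, then price them as a generic Swap.
Real gearedSwapNPV(const Schedule& fixedSchedule,
                   const Schedule& floatSchedule,
                   const Handle<YieldTermStructure>& forecastCurve,
                   const Handle<YieldTermStructure>& discountCurve) {
    boost::shared_ptr<IborIndex> index(new Euribor6M(forecastCurve));

    Leg fixedLeg = FixedRateLeg(fixedSchedule)
        .withNotionals(1000000.0)
        .withCouponRates(0.02, Thirty360(Thirty360::BondBasis));

    Leg floatingLeg = IborLeg(floatSchedule, index)
        .withNotionals(1000000.0)
        .withPaymentDayCounter(Actual360())
        .withFixingDays(2)        // fixings 2 business days before accrual start
        .withGearings(0.8)        // the gearing the question asks about
        .withSpreads(0.0);

    Swap swap(fixedLeg, floatingLeg);   // first leg paid, second received
    swap.setPricingEngine(boost::shared_ptr<PricingEngine>(
        new DiscountingSwapEngine(discountCurve)));
    return swap.NPV();
}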

Swaption pricing in QuantLib

I posted this on Wilmott too, wasn't sure which would get more of a response.
I'm relatively new to the world of QuantLib (and C++...), so perhaps this is quite obvious. I'm trying to figure out if QuantLib can price forward premium vanilla swaptions (OIS discounting, 3mL curve for estimation). All I can see in the QuantLib swaption files are inputs for one term structure for discounting. Does it use this also for estimation? Or is there a way to override it, such that I can enter two curves?
Any help, examples etc would be much appreciated (and would save me a lot of time staring at the same files hoping something jumps out at me...)!
Thanks a lot
It depends. If you want to price Bermudan swaptions, you're out of luck; QuantLib can only price them on a tree and there's no way to use the two curves.
If you want to price European swaptions, you can use the two curves in the Black formula, although I agree that it's not obvious to find that out by looking at the code. As you've probably seen already, you'll have to instantiate both an instrument (the Swaption class) and a corresponding engine (the BlackSwaptionEngine class). The constructor of the BlackSwaptionEngine takes a discount curve besides the other args, so you'll pass the OIS curve here. The constructor of the Swaption, on the other hand, takes the swap underlying the option as a VanillaSwap instance. In turn, the VanillaSwap constructor takes an IborIndex instance representing the floating-rate index to be paid; and finally, the IborIndex constructor takes the curve to be used to forecast its fixings, so that's the place where you can pass the 3mL curve. To summarize:
shared_ptr<IborIndex> libor(new GBPLibor(3*Months, forecastCurve));   // 3mL forecast curve goes into the index
shared_ptr<VanillaSwap> swap(new VanillaSwap(..., libor, ...));       // underlying swap pays/receives the index
shared_ptr<Instrument> swaption(new Swaption(swap, ...));
shared_ptr<PricingEngine> engine(new BlackSwaptionEngine(discountCurve, ...));  // OIS curve goes into the engine
swaption->setPricingEngine(engine);
double price = swaption->NPV();
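For completeness, here is one way the elided arguments might be filled in; the exercise date, strike, underlying tenor and the flat 20% Black volatility below are illustrative values of mine, not part of the original answer:

#include <ql/quantlib.hpp>
using namespace QuantLib;

// forecastCurve (3m Libor projection) and discountCurve (OIS) are assumed to
// be Handle<YieldTermStructure> instances built elsewhere.
Real europeanSwaptionNPV(const Handle<YieldTermStructure>& forecastCurve,
                         const Handle<YieldTermStructure>& discountCurve) {
    Date today = Settings::instance().evaluationDate();
    Date exerciseDate = TARGET().advance(today, 1*Years);

    boost::shared_ptr<IborIndex> libor(new GBPLibor(3*Months, forecastCurve));
    boost::shared_ptr<VanillaSwap> swap =
        MakeVanillaSwap(5*Years, libor, 0.03)       // 5y underlying, 3% strike
            .withEffectiveDate(exerciseDate);

    Swaption swaption(swap,
                      boost::shared_ptr<Exercise>(new EuropeanExercise(exerciseDate)));
    swaption.setPricingEngine(boost::shared_ptr<PricingEngine>(
        new BlackSwaptionEngine(discountCurve, 0.20)));   // flat Black vol
    return swaption.NPV();
}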
Also, note that the current released version (QuantLib 1.1) has a bug that makes it use the wrong curve at some point during the calculations. You'll want to use version 1.2, which is not yet released but can be checked out from the Subversion repository at https://quantlib.svn.sourceforge.net/svnroot/quantlib/branches/R01020x-branch/QuantLib.

Collaborative Filtering: Ways to determine implicit scores for products for each user?

Having implemented an algorithm to recommend products with some success, I'm now looking at ways to calculate the initial input data for this algorithm.
My objective is to calculate a score for each product that a user has some sort of history with.
The data I am currently collecting:
User order history
Product pageview history for both anonymous and registered users
All of this data is timestamped.
What I'm looking for
There are a couple of things I'm looking for suggestions on, and ideally this question should be treated more as a discussion than as a hunt for a single 'right' answer.
Any additional data I can collect for a user that can directly imply an interest in a product
Algorithms/equations for turning this data into scores for each product
What I'm NOT looking for
Just to avoid this question being derailed with the wrong kind of answers, here is what I'm doing once I have this data for each user:
Generating a number of user clusters (21 at the moment) using the k-means clustering algorithm, with the Pearson correlation coefficient as the distance measure
For each user (on demand), calculating a graph of similar users by looking for their most and least similar users within their cluster, and repeating to an arbitrary depth.
Calculating a score for each product based on the preferences of other users within the user's graph
Sorting the scores to return a list of recommendations
Basically, I'm not looking for ideas on what to do once I have the input data (I may need further help with that later, but it's not the point of this question), just for ideas on how to generate this input data in the first place.
Here's a haymaker of a response:
time spent looking at a product
semantic interpretation of comments left about the product
make a discussion page about a product, brand, or product category and semantically interpret the comments
if they Shared a product page (email, del.icio.us, etc.)
browser (mobile might make them spend less time on the page vis-à-vis laptop while indicating great interest) and connection speed (affects amt. of time spent on the page)
facebook profile similarity
heatmap data (e.g. à la kissmetrics)
What kind of products are you selling? That might help us answer you better. (Since this is an old question, I am addressing both @Andrew Ingram and anyone else who has the same question and found this thread through search.)
You can allow users to explicitly state their preferences, the way Netflix allows users to assign stars.
You can assign a positive numeric value for all the stuff they bought, since you say you do have their purchase history. Assign zero for stuff they didn't buy.
You could use some sort of weighted value for stuff they bought, adjusted for what's popular (if nearly everybody bought a product, the fact that this person also bought it doesn't tell you much). See "term frequency–inverse document frequency"; a concrete scoring sketch follows this list.
You could also assign some lesser numeric value for items that users looked at but did not buy.
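To make the popularity adjustment concrete, here is one sketch of a scoring rule (the names and the 0.2 view weight are arbitrary choices of mine, in the spirit of TF-IDF):

score(u, p) = bought(u, p) * log(N / n_p) + 0.2 * viewed(u, p)

where bought(u, p) and viewed(u, p) are 0/1 indicators from the order and pageview history, N is the total number of users, and n_p is the number of users who bought product p, so a purchase that nearly everyone made contributes almost nothing to the score.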