I am using QuantLib to perform calculations on historic data.
After setting up the required framework (curves, etc.), calling option.impliedVolatility() throws the following exception for options that have expired:
File "/usr/local/lib/python2.6/dist-packages/QuantLib/QuantLib.py", line 3683, in impliedVolatility
def impliedVolatility(self, *args): return _QuantLib.VanillaOption_impliedVolatility(self, *args)
RuntimeError: option expired
A snippet of the code that sets up the required curves is shown below:
dividend_yield = YieldTermStructureHandle(FlatForward(0, TARGET(), div_yield, Actual365Fixed()))
risk_free_rate = YieldTermStructureHandle(FlatForward(0, TARGET(), rf_rate, Actual365Fixed()))
volatility = BlackVolTermStructureHandle(BlackConstantVol(0, TARGET(), annualized_histvol, Actual360()))
I strongly suspect that the TARGET() call used here defaults to the current system date.
How may I set up the library to use a specific historic date?
The evaluation date is set by running, say,
Settings.instance().evaluationDate = Date(14,March,2010)
before the calculations. If not set, it defaults to the current date as you suspected.
The TARGET calendar just tells the curve what days are holidays, but has no effect on the evaluation date itself.
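For example, a minimal sketch using the same handles as in the question (div_yield, rf_rate and annualized_histvol are assumed to already be defined as plain numbers):

from QuantLib import *

# Fix the historic evaluation date before building anything date-dependent;
# FlatForward(0, ...) and BlackConstantVol(0, ...) are then anchored at this
# date through their 0-day settlement lag.
Settings.instance().evaluationDate = Date(14, March, 2010)

dividend_yield = YieldTermStructureHandle(
    FlatForward(0, TARGET(), div_yield, Actual365Fixed()))
risk_free_rate = YieldTermStructureHandle(
    FlatForward(0, TARGET(), rf_rate, Actual365Fixed()))
volatility = BlackVolTermStructureHandle(
    BlackConstantVol(0, TARGET(), annualized_histvol, Actual360()))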
I'm trying to generate a trace plot of my model, but it raises a module 'pymc3' has no attribute 'traceplot' error. My code is:
with pm.Model() as our_first_model:
    # a priori
    theta = pm.Beta('theta', alpha=1, beta=1)
    # likelihood
    y = pm.Bernoulli('y', p=theta, observed=data)
    #y = pm.Binomial('theta', n=n_experimentos, p=theta, observed=sum(datos))
    start = pm.find_MAP()
    step = pm.Metropolis()
    trace = pm.sample(1000, step=step, start=start)

burnin = 0  # no burnin
chain = trace[burnin:]
pm.traceplot(chain, lines={'theta': theta_real});
which then gives the following error:
AttributeError Traceback (most recent call last)
<ipython-input-8-40f97a342e0f> in <module>
1 burnin = 0 # no burnin
2 chain = trace[burnin:]
----> 3 pm.traceplot(chain, lines={'theta':theta_real});
AttributeError: module 'pymc3' has no attribute 'traceplot'
I'm on Windows 10 and I installed PyMC3 with pip, since it was not included in the Anaconda distribution I downloaded.
Several versions ago, PyMC3 delegated plotting and stats to ArviZ, and the original plotting commands were kept as aliases to the ArviZ functions for convenience and ease of transition.
The latest PyMC3 release (3.11.0) is the first that no longer includes aliases such as pm.traceplot. You have to use arviz.plot_trace, which works with PyMC3 objects.
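For example, a minimal sketch, assuming ArviZ is installed alongside PyMC3 3.11 (it is a PyMC3 dependency):

import arviz as az

# Produces the same kind of trace plot pm.traceplot used to; a `lines`
# argument also exists, but its expected format differs between ArviZ versions.
az.plot_trace(trace)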
Extra notes unrelated to the question itself:
You are using pm.find_MAP to initialize the chain and you are manually setting the sampler to pm.Metropolis instead of letting pm.sample select its own defaults. There are reasons to do so, and it's not intrinsically wrong, but it is discouraged; see the PyMC3 FAQs.
PyMC3 is transitioning to InferenceData as the default output of pm.sample. I would recommend setting return_inferencedata=True in pm.sample for the following reasons: 1) ArviZ functions convert to this format under the hood, so you avoid that small overhead; 2) InferenceData has more capabilities than MultiTrace; 3) since the transition is happening anyway, why not get started already?
You have a # no burnin comment; however, the trace returned by pm.sample has already had a burn-in applied, of length equal to the tune parameter passed to it (default 1000). To actually get all the samples and see how the MCMC slowly converges to the typical set, you need to use discard_tuned_samples=False.
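A minimal sketch combining these notes (leaving the initialization and step method to pm.sample's defaults; argument names as in PyMC3 3.11):

with our_first_model:
    idata = pm.sample(
        1000,
        return_inferencedata=True,    # get an ArviZ InferenceData object back
        discard_tuned_samples=False,  # keep the tuning ("burn-in") draws as well
    )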
Some InferenceData resources:
InferenceData overview: https://arviz-devs.github.io/arviz/getting_started/XarrayforArviZ.html
Working with InferenceData examples (shows how to perform burn-in among other things): https://arviz-devs.github.io/arviz/getting_started/WorkingWithInferenceData.html
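For instance, a burn-in can be applied afterwards by slicing the draw dimension of the InferenceData object (a sketch; idata is the object from the sampling sketch above and 100 is an arbitrary cut-off):

import arviz as az

burnt = idata.sel(draw=slice(100, None))  # discard the first 100 draws from every group
az.plot_trace(burnt)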
I have a functioning tf.estimator pipeline built in TF 1, but now I have decided to move to TF 2.0, and I have problems at the end of my pipeline, when I want to save the model in the .pb format.
I'm using this high level estimator export_saved_model method:
https://www.tensorflow.org/api_docs/python/tf/estimator/BoostedTreesRegressor#export_saved_model
I have two numeric features, 'age' and 'time_spent'
They're defined using tf.feature_column as follows:
age = tf.feature_column.numeric_column('age')
time_spent = tf.feature_column.numeric_column('time_spent')
features = [age,time_spent]
After the model has been trained, I turn the list of features into a dict using tf.feature_column.make_parse_example_spec() and feed it to build_parsing_serving_input_receiver_fn(), exactly as outlined on TensorFlow's page https://www.tensorflow.org/guide/saved_model under Estimators.
columns_dict = tf.feature_column.make_parse_example_spec(features)
input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(columns_dict)
model.export_saved_model(export_dir,input_receiver_fn)
I then inspect the output using the CLI tool:
saved_model_cli show --dir mydir --all
which results in the following:
[screenshot of the saved_model_cli output: the serving signature exposes a single DT_STRING input named "inputs"]
Somehow TensorFlow squashes my two useful numeric features into a useless string input called "inputs".
In TF 1 this could be circumvented by writing a custom input_receiver_fn() using tf.placeholder, and I'd get the correct output with two distinct numeric features. But tf.placeholder doesn't exist in TF 2, so that approach no longer works.
Sorry about the ranting, but TensorFlow's documentation here is poor; I'm working with high-level APIs and this should just work out of the box, but it doesn't.
I'd really appreciate any help :)
TensorFlow squashes my two useful numeric features into a useless string input called "inputs"
is not exactly true: the exported model expects a serialized tf.Example proto. So you can wrap your age and time_spent into two features, which will look like:
features {
  feature {
    key: "age"
    value {
      float_list {
        value: 10.2
      }
    }
  }
  feature {
    key: "time_spent"
    value {
      float_list {
        value: 40.3
      }
    }
  }
}
You can then call the exported regress function with the serialized string.
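A hedged sketch of that call (not from the answer above): serialize a tf.train.Example and feed it to the exported SavedModel; the export path and the exact signature and input names are assumptions, so check them against the saved_model_cli output:

import tensorflow as tf

example = tf.train.Example(features=tf.train.Features(feature={
    "age": tf.train.Feature(float_list=tf.train.FloatList(value=[10.2])),
    "time_spent": tf.train.Feature(float_list=tf.train.FloatList(value=[40.3])),
}))

loaded = tf.saved_model.load("exported_model_dir")      # hypothetical export path
regress = loaded.signatures["serving_default"]          # name as listed by saved_model_cli
print(regress(inputs=tf.constant([example.SerializeToString()])))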
I'm trying to find out how I can modify the way a custom TensorFlow Estimator creates event files for TensorBoard. Currently, I have the impression that, by default, a summary (containing the values of everything I follow with tf.summary.scalar(...), such as accuracy) is written every 100 steps to my model directory. The names of the event files later used by TensorBoard look like
events.out.tfevents.1531418661.nameofmycomputer.
I found a routine online that changes this behaviour and creates a directory for each run, named with the date and time of the computation, but it uses the low-level TensorFlow API:
logdir = "tensorboard/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") + "/"
writer = tf.summary.FileWriter(logdir, sess.graph)
Is it possible to do something similar with a TF custom estimator?
It is possible to specify a directory for each evaluation run using the name argument of the evaluate method of tf.estimator.Estimator, e.g.:
estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    model_dir=model_dir
)

eval_results = estimator.evaluate(
    input_fn=eval_input_fn,
    name=eval_name
)
The event files for this evaluation will be saved in the directory inside model_dir named "eval_" + eval_name.
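For example (a hedged usage sketch; val_input_fn and test_input_fn are hypothetical input functions):

estimator.evaluate(input_fn=val_input_fn, name="validation")  # -> model_dir/eval_validation
estimator.evaluate(input_fn=test_input_fn, name="test")       # -> model_dir/eval_test

Pointing TensorBoard at model_dir then shows each evaluation as its own run.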
Summary writers are not needed for TensorFlow Estimators. The summary log of the model is written to the directory given by the model_dir argument of the estimator when its train() method is called.
In the example below, the selected directory to store the training logs is './my_model'.
tf.estimator.Estimator(
    model_fn,
    model_dir='./my_model',
    config=None,
    params=None,
    warm_start_from=None
)
Launch TensorBoard by running tensorboard --logdir=./my_model from the terminal.
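If what you want are the per-run, timestamped directories from the question, one option (a sketch adapting the question's pattern, not the only way) is to bake the timestamp into model_dir when the estimator is constructed:

import datetime
import tensorflow as tf

# model_fn is the custom model function from the question
logdir = "tensorboard/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir=logdir)

Each run then writes its event files into its own timestamped directory under tensorboard/.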
I am writing a wrapper class that takes a generic graph with a special member "train_op" to manage the training, saving, and housekeeping of my model.
I wanted to cleanly keep track of the lifetime number of training steps like so:
with tf.control_dependencies([step_add_one]):
    self.train_op = tf.identity(self.training_graph.train_op)

However, this fails with a TypeError raised from TensorFlow's tensor-conversion code:

TypeError: Expected binary or unicode string, got <tf.Operation ...>
I think the rub here is that train_op is the return value of tf.train.Optimizer.minimize(), so it is not a tensor per se but an operation.
An obvious workaround would be to call tf.identity on training_graph.loss instead, but then I lose a bit of abstraction because I have to handle the learning rate etc. externally. Moreover, I feel like I'm missing something.
How can I best remedy this?
You can use tf.group(), which will work with operations and tensors.
For instance:
x = tf.Variable(1.)
loss = tf.square(x)
optimizer = tf.train.GradientDescentOptimizer(0.1)
train_op = optimizer.minimize(loss)
step = tf.Variable(0)
step_add_one = step.assign_add(1)
with tf.control_dependencies([step_add_one]):
    train_op_2 = tf.group(train_op)
Now when you run train_op_2, the value of step will be incremented.
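For example, in a TF 1.x session (a small sketch continuing the snippet above):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op_2)   # the control dependency also runs step_add_one
    print(sess.run(step))  # prints 1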
However, the best way to go (if you can modify the code that builds the graph) is to pass the global_step parameter to the minimize call:
train_op = optimizer.minimize(loss, global_step=step)
Suppose X is a raw, labeled (i.e., with training labels) data set, and Process(X) returns a set of instances Y that have been encoded with attributes and converted into a Weka-friendly file like Y.arff. Also suppose Process() has some 'leakage': some instances Leak = X - Y can't be encoded consistently and need to get a default classification FOO. The training labels are also known for the Leak set.
My question is how I can best introduce instances from Leak into the Weka evaluation stream AFTER some classifier has been applied to the subset Y, folding the Leak instances in with their default classification label, before performing evaluation across the full set X. In code:
DataSource LeakSrc = new DataSource("leak.arff");
Instances Leak = LeakSrc.getDataSet();
DataSource Ysrc = new DataSource("Y.arff");
Instances Y = Ysrc.getDataSet();
classfr.buildClassifier(Y);
// YunionLeak = ??
eval.crossValidateModel(classfr, YunionLeak);
Maybe this is a specific example of folding together results from multiple classifiers?
The bounty is closing, but Mark Hall, in another forum (http://list.waikato.ac.nz/pipermail/wekalist/2015-November/065348.html), deserves what will have to count as the current answer:
You'll need to implement building the classifier for the cross-validation in your own code. You can still use an Evaluation object to compute stats for your modified test folds, though, because the stats it computes are all additive. Instances.trainCV() and Instances.testCV() can be used to create the folds:
http://weka.sourceforge.net/doc.stable/weka/core/Instances.html#trainCV(int,%20int,%20java.util.Random)
You can then call buildClassifier() to process each training fold, modify the test fold to your heart's content, and then iterate over the instances in the test fold while making use of either Evaluation.evaluateModelOnce() or Evaluation.evaluateModelOnceAndRecordPrediction(). The latter version is useful if you need the area-under-the-curve summary metrics (as these require predictions to be retained).
http://weka.sourceforge.net/doc.stable/weka/classifiers/Evaluation.html#evaluateModelOnce(weka.classifiers.Classifier,%20weka.core.Instance)
http://weka.sourceforge.net/doc.stable/weka/classifiers/Evaluation.html#evaluateModelOnceAndRecordPrediction(weka.classifiers.Classifier,%20weka.core.Instance)
Depending on your classifier, it could be very easy! Weka has an interface called UpdateableClassifier; any classifier implementing it can be updated after it has been built. The following classes implement this interface:
HoeffdingTree
IBk
KStar
LWL
MultiClassClassifierUpdateable
NaiveBayesMultinomialText
NaiveBayesMultinomialUpdateable
NaiveBayesUpdateable
SGD
SGDText
The classifier can then be updated with something like the following:
ArffLoader loader = new ArffLoader();
loader.setFile(new File("/data/data.arff"));

// Read only the header (attribute structure), not the full data set
Instances structure = loader.getStructure();
structure.setClassIndex(structure.numAttributes() - 1);

// Build an empty incremental classifier from the header...
NaiveBayesUpdateable nb = new NaiveBayesUpdateable();
nb.buildClassifier(structure);

// ...then stream the instances through it one at a time
Instance current;
while ((current = loader.getNextInstance(structure)) != null) {
    nb.updateClassifier(current);
}