I know I can use getFileInfo(getCurrentTemplatePath()) to get the current template's last modified date, but it would be better if I could just grab it from memory. I have several possible uses in mind, but I'm not ready to defend them yet, so for now let's just say I'm asking out of curiosity.
I assume the application server must check the modified date at some point to decide if it needs to compile. If I have to use the underlying Java to get to it, that's fine (a pure CF approach would be better, of course, but I'm not holding out much hope).
If the modified date isn't available, then I'd settle for some sort of flag indicating if the current request triggered a recompile (actually, that might work just as well).
You are looking for coldfusion.runtime.TemplateClassLoader. It handles the lookup against the TemplateCache and either fetches an already-compiled template class or invokes coldfusion.compiler.NeoTranslator to compile the CFML source into one.
<cfset templateUri = getCurrentTemplatePath()>
<cfset lastCompiled = createObject("java", "coldfusion.runtime.TemplateClassLoader").getLastCompiledTime(templateUri)>
<!--- lastCompiled = unix timestamp in milliseconds --->
Needless to say, this is an implementation detail and you should not rely on it.
In Frama-C, I would like to copy the results of a plugin like Value from one project to another. How exactly do I do this? I'm guessing I have to use Project.copy with the proper State_selection, but what would that be for Value? More generally, how do I determine what the State_selection would be for a given plugin?
Unfortunately, there is no unified mechanism across plug-ins for that. For the EVA1 plug-in, you would probably do something like
let selection = State_selection.with_codependencies Db.Value.self in
Project.copy ~selection ~src dest
in order to capture EVA's state as well as the intermediate states on which it depends.
That said, I'd advise against trying to copy such a substantial part of Frama-C's internal state. It's very error-prone and involves working with an arcane API. If you can afford it, two other solutions seem easier:
work in the original project, possibly creating a new project with a new AST as a result, through File.create_copy_from_visitor;
copy the entire project with Project.copy and work on the new project.
1: Evolved Value Analysis, the new name of Value
I would really appreciate your input on moving from a plain YieldTermStructure pointer to one with a spread added, as below:
boost::shared_ptr<YieldTermStructure> depoFutSwapTermStructure(new PiecewiseYieldCurve<Discount,
LogLinear>(settlementDate, depoFutSwapInstruments_New, termStructureDayCounter, 1.0e-15));
I tried adding a spread of 50 bps as below...
double OC_Spread(0.50 / 100);
Rate OCSQuote = OC_Spread;
boost::shared_ptr<Quote> OCS_Handler(new SimpleQuote(OCSQuote));
I then proceed to create a zerospreaded object as below:
ZeroSpreadedTermStructure Z_Spread(Handle<YieldTermStructure>(*depoFutSwapTermStructure), Handle<Quote>(OCS_Handler));
But now I am stuck, as the code repeatedly breaks down if I go ahead and do anything like
Z_Spread.zeroYieldImpl;
What is the issue with the above code? I have tried several flavors of the above approach and failed on all fronts.
Also, is there a native way of calling the discount function directly, just as I do now with the TermStructure object prior to adding the spread, as below?
depoFutSwapTermStructure->discount(*it)
I'm afraid you got your interfaces a bit mixed up. The zeroYieldImpl method you're trying to call on your ZeroSpreadedTermStructure is protected, so you can't use it from your code (at least, that's how I'm guessing your code breaks, since you're not reporting the error you get).
The way you interact with the curve you created is through the public YieldTermStructure interface that it inherits; that includes the discount method that you want to call, as well as methods such as zeroRate or forwardRate.
Again, it's hard to say why your call to discount fails precisely, since you're not quoting the error and you're not saying what *it is in the call. From the initialization you do report, and from the call you wrote, I'm guessing that you might have instantiated a ZeroSpreadedTermStructure object but you're trying to use it with the -> syntax as if it were a pointer. If that's the case, calling Z_Spread.discount(*it) should work instead (assuming *it resolves to a number).
If that's not the problem, I'm afraid you'll have to add a few more details to your question.
Finally, for a more general treatment of term structures in QuantLib, you can read here and here.
I need to set the same variable twice in ColdFusion, one assignment directly after the other. I have to do this; there is no alternative. Obviously, if I could use two different variable names I would.
The second assignment needs to overwrite the first one.
I have done this and it works. My question is whether there is in fact any reason why it should not be done.
For example:
<cfset variable_one = "a">
<cfset variable_one = "b">
<cfoutput>#variable_one#</cfoutput>
The reason it should not be done is that the developer who works with your code after you could spend considerable time trying to understand why you did it (and that developer could be you, a few months or years in the future).
struct Bar{
}
struct Foo{
    Bar get(){
        return Bar();
    }
}
auto f = Foo();
f.get();
For example, you decide that get was a very poor choice for a name, but you have already used it in many different files, and manually changing every occurrence is very annoying.
You also can't really do a global substitution, because other types may also have a method called get.
Is there anything for D to help refactor names for types, functions, variables etc?
Here's how I do it:
1. Change the name in the definition.
2. Recompile.
3. Go to the first error line reported and replace the old name with the new one.
4. Go to step 2.
That's semi-manual, but I find it to be pretty easy, and it goes quickly because the compiler error messages will bring you right to where you need to be. Most editors can read those error messages well enough to drop you on the correct line, and then it is a simple matter of telling the editor to repeat the last replacement. (In my vim setup with my hotkeys, I hit F4 for the next error message, then dot to repeat the last change until it is done. Even a function with a hundred uses can be changed reliably in a couple of minutes.)
You could probably also write a script that handles 90% of cases automatically by looking for ": Error: " in the compiler's output, extracting the file and line number, and running a plain-text replace there. If the word shows up only once, and outside a string literal, the script can replace it automatically; if not, it can ask the user to handle the remaining 10% of cases manually.
But I think it is easy enough to do with my editor hotkeys that I've never bothered trying to script it.
The one case this doesn't catch is if there's another function with the same name that might still compile. That should never happen if you do this change in isolation, because an ambiguous name wouldn't compile without it.
In that case, you could probably do a three-step compiler-assisted change:
1. Make sure your code compiles beforehand. Then add @disable to the thing you want to rename.
2. Compile. Every place the compiler complains about the symbol being unusable because it is disabled, do the find/replace.
3. Remove @disable and rename the definition. Recompile again to make sure there's nothing you missed, like child classes (the compiler will then complain "method foo does not override any function", so those stand right out too).
So yeah, it isn't fully automated, but just changing it and having the compiler errors help find what's left is good enough for me.
Some limited refactoring support can be found in major IDE plugins like Mono-D or VisualD. I remember that Brian Schott had plans to add similar functionality to his dfix tool by adding a dependency on dsymbol, but it doesn't seem to be implemented yet.
Note, however, that all such options are of very limited robustness right now. This is because figuring out the fully qualified name of any given symbol is a very complex task in D, one that requires full semantic analysis to be done 100% correctly. Think about local imports, templates, function overloading, and mixins, and how they all affect identifying the symbol.
In the long run, it is quite certain that we will need to wait until the reference D compiler front end becomes available as a library before such a refactoring tool can be implemented in a clean and truly reliable way.
A good find-all feature can be better than a bad refactoring, which, as mentioned previously, requires semantic analysis.
Personally, I have a find-all feature in Coedit which displays the context of each match and works on all the project sources. It makes processing the results fast.
What strategies are there for deprecating functions when their return type needs to change? For example, I have:
BadObject foo(int); // Old function: BadObject is being removed.
Object foo(int); // New function.
Object and BadObject are very different internally, and swapping the return type will break code for current users of my library. I'm aiming to avoid that.
I can mark BadObject foo(int) deprecated, and give users time to change affected code.
However, I can't overload foo based on return-type. foo is very well named, and it doesn't need to take extra parameters. How can I add the new function to my library whilst maintaining the old version, at least for a while?
What's the strategy for deprecating the old function without breaking too much dependent code, while giving users time to migrate to the new version? Ideally I'd keep the current function name and parameter list, because it's named quite well now. It feels like this should be a reasonably common problem: what's a decent way to solve it?
The solution will force you to change your function names, but it's a compromise between your old users and your new ones.
So rename the old foo to deprecatedFoo and your new foo to foo2 (or anything you want). Then, in the header file you ship with your library, you can simply:
#define foo deprecatedFoo
so that existing calls to foo keep resolving to the old function, and in the same header emit:
#warning ("This function is deprecated. Use 'foo2' or change the #define in LINE in file HEADER.")
Users of the old version won't have to change their code and will be issued a warning, and new users will probably listen and change the #define in order to use the new foo.
In the next version you'll just delete the old foo and the define.
I think a classic example is Boost's Spirit.
From their FAQ:
While introducing Spirit V2 we restructured the directory structure in
order to accommodate two versions at the same time. All of
Spirit.Classic now lives in the directory
boost/spirit/home/classic
where the directories above contain forwarding headers to the new
location allowing to maintain application compatibility. The
forwarding headers issue a warning (starting with Boost V1.38) telling
the user to change their include paths. Please expect the above
directories/forwarding headers to go away soon.
This explains the need for the directory
boost/spirit/include
which contains forwarding headers as well. But this time the headers
won't go away. We encourage application writers to use only the
includes contained in this directory. This allows us to restructure
the directories underneath if needed without worrying application
compatibility. Please use those files in your application only. If it
turns out that some forwarding file is missing, please report this as
a bug.
You can ease migration by keeping the new and old versions in separate directories and using forwarding headers to maintain compatibility. Users will eventually be forced to use the new headers.
SDL 2.0 has a different approach. They don't provide a compatibility layer but instead a migration guide walking the users through the most dramatic changes. In this case, you can help users understand how they need to restructure their code.
What if you make your Object class inherit from BadObject (which you'd keep temporarily)? Then old user code won't know the difference, so it won't break, provided your new foo function still returns the objects correctly.