How to copy results of a plugin to another project? - ocaml

In Frama-C, I would like to copy the results of a plugin like Value from one project to another. How exactly do I do this? I'm guessing I have to use Project.copy with the proper State_selection, but what would that be for Value? More generally, how do I determine what the State_selection would be for a given plugin?

Unfortunately, there is no unified mechanism across plug-ins for that. For the EVA [1] plug-in, you would probably do something like
let selection = State_selection.with_codependencies Db.Value.self in
Project.copy ~selection ~src dest
in order to capture EVA's state as well as the intermediate states on which it depends.
That said, I'd advise against trying to copy such a substantial part of Frama-C's internal state. It's very error-prone and involves working with an arcane API. If you can afford it, two other solutions seem easier:
work in the original project, possibly creating a new project with a new AST as a result, through File.create_copy_from_visitor.
copy the entire project with Project.copy and work on the new project.
[1]: Evolved Value Analysis, the new name of Value

Related

Adding Custom c++ function in chromium and call them in browser

I am trying to write custom function in bootstrapper.cc under v8/src/init.
int helloworld(){
return 0;
}
When I try to call it from the Chromium console, it comes back as undefined.
Look around bootstrapper.cc to see how other built-in functions are installed. Examples you could look at include Array and DataView (or any other, really).
There is no way to simply define a C++ function of a given name and have that show up in JavaScript. Instead, you have to define a property on the global object; and the function itself needs to have the right calling convention, and process its parameters / prepare its return value appropriately so that it can be called from JavaScript. You can't just take or return an int.
If you find it inconvenient to work with C++, an alternative might be to develop a Chrome extension, which would allow you to use JavaScript for the implementation, and also remove the need to compile/maintain/update your own build (which is a lot of work!). There is no existing guide for how to extend V8 in the way you're asking, because that approach is so much work that we don't recommend doing it like this (though of course it is possible -- you just have to read enough of the existing C++ source to understand how it's done).

Importing constants out of C++ headers instead of hardcoding them: extending .net controls?

I've been researching how to extend .NET controls so that a VB program has the same freedom you get with the regular Windows API in C++. For example, if you want to add week numbers to a calendar control, you'll have to manually import the DLL and extend the control's class, calling internal Windows functions.
I've found various topics on how people handle this, and I'm not quite happy with the 'canonical method'. To be honest, I think it's a pretty bad paradigm to use.
These internal windows functions use pointers to set magic properties.
First, I find it rather strange that a pointer, with its system-dependent size, is being abused to hold something that isn't a memory location but a value; but that aside: these pointers are also used to select which attribute is being set.
For example, (leaving out all the boilerplate necessary to link up the code), changing the first day of the week to Tuesday would use this code:
Private Const MCM_FIRST As Int32 = &H1000
Private Const DTM_FIRST As Int32 = &H1000
Private Const DTM_GETMONTHCAL As Int32 = (DTM_FIRST + 8)
Private Const MCM_SETFIRSTDAYOFWEEK As Int32 = (MCM_FIRST + 15)
Dim hMonthView As IntPtr =
SendMessage(Me.Handle, DTM_GETMONTHCAL, IntPtr.Zero, IntPtr.Zero)
Call SendMessage(hMonthView, MCM_SETFIRSTDAYOFWEEK, 0&, 1&)
So the magic values 0x1008 and 0x100F are what my question is about in this code.
First off, this is a rather strange way of working: as far as I know, these values aren't documented anywhere other than in examples. What if I need a property for which there happens to be no internet tutorial yet? Where/how do I find the value of MCM_<ARBITRARY_VALUE_HERE> in general?
Note: I mean the latter question in the broad, general sense: not applying just to the specific calendar control the example is about, but to any Windows control. I can already google the specific C++ header file by name (e.g. for the example, the constants are defined in Commctrl.h); it's just that that piece of information is rather useless if I don't know the idiomatic way to pull something like that out of the C++ header into the VB code.
Secondly... these values are defined in headers somewhere. Is it not possible to import the values from the proper header? That way the program would keep working in the (admittedly unlikely) scenario where the values change when the DLL is recompiled.
One approach to this, back in the VB6 days, was to prepare a TLB file with constants, function declarations, etc. of the Win32 API, and then reference that in the VB6 program. The TLB didn't provide COM objects; it was just a convenient way of packaging up all the declarations as though they were in (what we now think of as) an assembly.
As far as I can tell, that approach should still work perfectly well today in .NET through "COM" interop. You can just as easily reference the TLB in a C# or VB project and thereby access its contents.
The book Hardcore Visual Basic by Bruce McKinney included a disk with a prepared TLB for this purpose, and this seems to still be available today:
http://vb.mvps.org/hardweb/mckinney2a.htm
I don't know how comprehensive this was at the time, nor if it is really still up to date. At the very least it seems instructive in how to prepare a TLB for this type of approach.
The following page also provides a description of this approach with some additional explanation and examples (too long to copy in here).
http://www.brainbell.com/tutors/Visual_Basic/newfile156.html

How to add a constant spread to an existing YieldTermStructure object in Quantlib

I would really appreciate your input on moving from a plain YieldTermStructure pointer to one with a spread added. The original curve is built as below:
boost::shared_ptr<YieldTermStructure> depoFutSwapTermStructure(new PiecewiseYieldCurve<Discount,
LogLinear>(settlementDate, depoFutSwapInstruments_New, termStructureDayCounter, 1.0e-15));
I tried adding a spread of 50 bps as below...
double OC_Spread(0.50 / 100);
Rate OCSQuote = OC_Spread;
boost::shared_ptr<Quote> OCS_Handler(new SimpleQuote(OCSQuote));
I then proceed to create a zerospreaded object as below:
ZeroSpreadedTermStructure Z_Spread(Handle<YieldTermStructure>(*depoFutSwapTermStructure), Handle<Quote>(OCS_Handler));
But now I am stuck, as the code repeatedly breaks down if I go ahead and do anything like
Z_Spread.zeroYieldImpl;
What is the issue with the above code? I have tried several flavors of the above approach and failed on all fronts.
Also, is there a native way of calling the discount function directly, just as I do now with the term-structure object prior to adding the spread, as below?
depoFutSwapTermStructure->discount(*it)
I'm afraid you got your interfaces a bit mixed up. The zeroYieldImpl method you're trying to call on your ZeroSpreadedTermStructure is protected, so you can't use it from your code (at least, that's how I'm guessing your code breaks, since you're not reporting the error you get).
The way you interact with the curve you created is through the public YieldTermStructure interface that it inherits; that includes the discount method that you want to call, as well as methods such as zeroRate or forwardRate.
Again, it's hard to say why your call to discount fails precisely, since you're not quoting the error and you're not saying what *it is in the call. From the initialization you do report, and from the call you wrote, I'm guessing that you might have instantiated a ZeroSpreadedTermStructure object but you're trying to use it with the -> syntax as if it were a pointer. If that's the case, calling Z_Spread.discount(*it) should work instead (assuming *it resolves to a number).
If that's not the problem, I'm afraid you'll have to add a few more details to your question.
Finally, for a more general treatment of term structures in QuantLib, you can read here and here.

Can I read a SMT2 file into a solver through the z3 c++ interface?

I've got a problem where the z3 code embedded in a larger system isn't finding a solution to a certain set of constraints (added through the C++ interface) despite some fairly long timeouts. When I dump the constraints to a file (using the to_smt2() method on the solver, just before the call to check()), and run the file through the standalone z3 executable, it solves the system in about 4 seconds (returning sat). For what it's worth, the file is 476,587 lines long, so a fairly big set of constraints.
Is there a way I can read that file back into the embedded solver using the C++ interface, replacing the existing constraints, to see if the embedded version can solve starting from the exact same starting point as the standalone solver? (Essentially, how could I create a corresponding from_smt2(stream) method on the solver class?)
They should be the same set of constraints as now, of course, but maybe there's some ordering effect going on when they are read from the file, or maybe there are some subtle differences in the solver introduced when we embedded it, or something that didn't get written out with to_smt2(). So I'd like to try reading the file back, if I can, to narrow down the possible sources of the difference. Suggestions on what to look for while debugging the long-running version would also be helpful.
Further note: it looks like another user is having similar issues here. Unlike that user, my problem uses all bit-vectors, and the only unknown result is the one from the embedded code. Is there a way to invoke the (get-info :reason-unknown) from the C++ interface, as suggested there, to find out why the embedded version is having a problem?
You can use the method "solver::reason_unknown()" to retrieve explanations for search failure.
There are methods for parsing files and strings into a single expression.
In case of a set of assertions, the expression is a conjunction.
It is perhaps a good idea to add such a method directly to the solver class for convenience. It would be:
void from_smt2_string(char const* smt2benchmark) {
    expr fml = ctx().parse_string(smt2benchmark);
    add(fml);
}
So if you were to write it outside of the solver class, you would need:
expr fml = solver.ctx().parse_string(smt2benchmark);
solver.add(fml);

How to deprecate function when return type changes c++

What strategies are there for deprecating functions when their return type needs to change? For example, I have:
BadObject foo(int); // Old function: BadObject is being removed.
Object foo(int); // New function.
Object and BadObject are very different internally, and swapping their return types will break code for current users of my library. I'm aiming to avoid that.
I can mark BadObject foo(int) deprecated, and give users time to change affected code.
However, I can't overload foo based on return-type. foo is very well named, and it doesn't need to take extra parameters. How can I add the new function to my library whilst maintaining the old version, at least for a while?
What's the strategy to deprecate the old function without breaking too much dependant code, while providing users the time to migrate to the new version? Ideally I'd keep the current function name and parameter list, because it's named quite well now. It feels like this should be a reasonably common problem: what's a decent way to solve it?
The solution will force you to change your function names, but it's a compromise between your old users and your new ones.
So - rename the old foo into deprecatedFoo and your new foo into foo2 (or anything you want). Then, in the header file you include with your library, you can simply:
#define foo deprecatedFoo
and inside the function itself do:
#warning "This function is deprecated. Use 'foo2' or change the #define in LINE in file HEADER."
Users of the old versions won't have to change their code, and will be issued a warning, and the new users will probably listen and change the #define in order to use the new foo.
In the next version you'll just delete the old foo and the define.
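As a sketch of the rename-plus-#define idea (all names here are illustrative; I've used the standard `[[deprecated]]` attribute, available since C++14, instead of `#warning`, so the diagnostic fires only when the old function is actually used):

```cpp
// mylib.h -- hypothetical library header
#include <string>

struct BadObject { int value; };      // old return type, kept temporarily
struct Object { std::string name; };  // new return type

// The old implementation, renamed:
[[deprecated("use foo2, or remove the #define below to opt out")]]
inline BadObject deprecatedFoo(int x) { return BadObject{x}; }

// The new implementation under a fresh name:
inline Object foo2(int x) { return Object{std::to_string(x)}; }

// Existing callers of foo(...) keep compiling unchanged, but now
// reach the deprecated function and get a compiler warning:
#define foo deprecatedFoo
```

Old code such as `BadObject b = foo(42);` still builds (with a deprecation warning); new code calls `foo2` directly.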
I think a classic example is Boost's Spirit.
From their FAQ:
While introducing Spirit V2 we restructured the directory structure in
order to accommodate two versions at the same time. All of
Spirit.Classic now lives in the directory
boost/spirit/home/classic
where the directories above contain forwarding headers to the new
location allowing to maintain application compatibility. The
forwarding headers issue a warning (starting with Boost V1.38) telling
the user to change their include paths. Please expect the above
directories/forwarding headers to go away soon.
This explains the need for the directory
boost/spirit/include
which contains forwarding headers as well. But this time the headers
won't go away. We encourage application writers to use only the
includes contained in this directory. This allows us to restructure
the directories underneath if needed without worrying application
compatibility. Please use those files in your application only. If it
turns out that some forwarding file is missing, please report this as
a bug.
You can ease migration by keeping the new and old versions in separate directories and using forwarding headers to maintain compatibility. Users will eventually be forced to use the new headers.
SDL 2.0 has a different approach. They don't provide a compatibility layer but instead a migration guide walking the users through the most dramatic changes. In this case, you can help users understand how they need to restructure their code.
What if you make your Object class inherit from BadObject (which you'd keep temporarily)? Then old user code won't know about that, so it won't break, provided that your new "foo" function still returns your objects correctly.
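A minimal sketch of that idea (types and members are hypothetical):

```cpp
struct BadObject {          // legacy type, kept around during the transition
    int legacyField = 0;
};

struct Object : BadObject { // new type temporarily derives from the old one
    int newField = 0;
};

// foo keeps its name and signature; it now returns the new type.
inline Object foo(int x) {
    Object o;
    o.legacyField = x;      // old callers still find the member they expect
    o.newField = x * 2;
    return o;
}
```

Old code like `BadObject b = foo(1);` keeps compiling (the Object is sliced down to its BadObject base), while new code can use the full Object. The caveat is that this only works if Object can sensibly behave as a BadObject; since the question says the two types are very different internally, that may be hard to arrange.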