Is it possible to generate/infer a Clojure spec based on the specs of calling functions?
Let's say I have a function foo that I've already written a spec for. Inside foo I call another function bar that takes some of foo's inputs (which are already specced). My question is: is it possible to infer/generate bar's spec? Is there any existing library for this?
Thanks
There is https://github.com/stathissideris/spec-provider, which you can use to infer specs from your bar's output.
I'm using this to visualize (in a pipeline) the inferred spec as shapes (in a Java applet with the help of quil), and the diff between each step's output and the previous step's (in an Emacs buffer), at https://vimeo.com/240254456.
OK, so it looks like Typed Clojure has what I was looking for. Since I have specs for foo, I can generate tests and then infer the specs and type annotations for the other functions. This is useful because Clojure is a dynamic language: with the entry-point functions already specced, we can infer the sub-functions' specs from those and check for consistency across the code base (that every function is called with the right args everywhere).
https://github.com/typedclojure/core.typed
Hope this can help others
Related
In this code, headerTable and rowsTable are Java objects. The same method is being called on both with the same argument:
(.setHorizontalAlignment headerTable Element/ALIGN_LEFT)
(.setHorizontalAlignment rowsTable Element/ALIGN_LEFT)
Is there a better way of doing this? I would think there must be a way to combine the two calls into one somehow. But since this is side-effecting code, perhaps not?
I'm thinking of an answer without writing a custom function or macro, something like "just use juxt or comp", but then maybe I'm being a bit too prescriptive...
Edit Type hinting was mentioned by Leonid Beschastny, so just in case it helps, here's the Java method signature:
public void setHorizontalAlignment(int horizontalAlignment)
And the class is PdfPTable, from iText. (This code is being used to create PDF files).
There are many possible refactorings; one would be:
(run! #(.setHorizontalAlignment ^PdfPTable % Element/ALIGN_LEFT)
[headerTable rowsTable])
Hoping someone can provide an explain-like-I’m-five elucidation of the difference between the following types of functions within Famo.us, and when it’s appropriate to use them:
sampleFunction() {}
_sampleFunction() {}
SampleView.prototype.sampleFunction() {}
.bind and .call are also thrown around a lot…I understand them vaguely but not as concretely as I’d like. That might be a different question, but please feel free to use them in your explanation!
Apologies for the vagueness...wish there was more regarding this in famo.us university.
None of what you're looking at is syntax specific to Famo.us. It's actually common, if intermediate level, VanillaJS.
The _ is simply a coding convention to denote that a specific function is meant to be internal to the class (i.e. a member/private function, whatever you prefer to call it). Javascript doesn't really have support for encapsulation - the act of blocking other classes and objects from accessing another class's functions and variables. While it is possible, it's quite cumbersome and hacky.
You'll see that Famo.us uses the underscore convention to denote that a function is a member of the class using it. Some of these functions are actually just aliases to the native Javascript function; for example, ._add just calls Javascript's .add method. Of course, ._add could be updated on Famo.us's end to do more if that's required. You really wouldn't want to try to write over the native Javascript add. That's super bad.
The other upshot is that you can document that class and say that you can and should use the _add method for a specific purpose/scenario. You'll see that in the API docs.
Understanding prototype is a core part of what it means to be a Javascript programmer; after all, it is a prototype-driven language. MDN has a much better explanation than anything I can offer here, but it's basically at the core of your classes.
If you want to extend an existing class (say, create your own View or Surface type) you would extend its prototype. Check out the Famous Starter Kit's App examples and see how many of them create an "AppView" class, which takes the prototype of the core View, copies it for itself, and then adds its own functions, thus extending View without ruining the original copy.
While unit-testing several hundred lines of F# code, I realized that it would be advantageous to check not only the output but also the signatures. The reason: if the code is validated for a release, and changes made after the release modify a signature, one would want to know why the signature changed, so that either the test case can be updated for the new signature or the change can be flagged as causing a problem.
Is it possible to create a test case to verify a signature? If so, how?
As Stephen said, if you write unit tests for your code, the tests will generally call the function with values of the types that the function requires, so that will automatically also check the signature (if you change the signature, your tests will not compile).
Another alternative, which is suitable for libraries is to use F# interface files (.fsi). The interface file specifies types of public functions in the implementation file (.fs) and it is also a good place for documentation.
If you then (accidentally) change the type of your implementation, your code will not compile unless you update the type in the interface file.
You will probably want to maintain the interface file by hand (see the F# library sources for a good example), but you can get an initial version by calling the compiler with --sig:mylibrary.fsi. You could probably use this switch to automate the testing (and check the diff between signature files after each compilation).
I think the best approach would be to simply provide test cases which cover the bounds of your signature. e.g., to verify that a return type is an int,
let x:int = someFunc() //you'll get a compiler error if the return type changes
Really, I'd expect that just by virtue of exhaustively testing your public API, you will have necessarily tested the signatures. Especially in a language like F#, which has a relatively strict static type system.
I suppose you could also venture to use reflection to assert signatures, but honestly I don't think that would be such a good investment of time.
I'm in the process of writing a kind of runtime system/interpreter, and one of things that I need to be able to do is call c/c++ functions located in external libraries.
On Linux I'm using the dlfcn.h functions to open a library and call a function located within. The problem is that, when using dlsym(), the function pointer returned needs to be cast to an appropriate type before being called, so that the function's arguments and return type are known; however, if I'm calling some arbitrary function in a library then obviously I will not know this prototype at compile time.
So what I'm asking is: is there a way to call a dynamically loaded function, pass it arguments, and retrieve its return value without knowing its prototype?
So far I've come to the conclusion that there is no easy way to do this, but some workarounds that I've found are:
Ensure all the functions I want to load have the same prototype, and provide some sort of mechanism for these functions to retrieve parameters and return values. This is what I am doing currently.
Use inline asm to push the parameters onto the stack, and to read the return value. I really want to steer clear of doing this if possible!
If anyone has any ideas then it would be much appreciated.
Edit:
I have now found exactly what I was looking for:
http://sourceware.org/libffi/
"A Portable Foreign Function Interface Library"
(Although I’ll admit I could have been clearer in the original question!)
What you are asking is whether C/C++ supports reflection for functions (i.e. getting information about their type at runtime). Sadly, the answer is no.
You will have to make the functions conform to a standard contract (as you said you were doing), or start implementing mechanics for trying to call functions at runtime without knowing their arguments.
Since having no knowledge of a function makes it impossible to call it, I assume your interpreter/"runtime system" at least has some user input or similar it can use to deduce that it's trying to call a function that will look like something taking those arguments and returning something not entirely unexpected. That lookup is hard to implement in itself, even with reflection and a decent runtime type system to work with. Mix in calling conventions, linkage styles, and platforms, and things get nasty real soon.
Stick to your plan, enforce a well-defined contract for the functions you load dynamically, and hopefully make do with that.
Can you add a dispatch function to the external libraries, e.g. one that takes a function name and N (optional) parameters of some sort of variant type and returns a variant? That way the dispatch function prototype is known. The dispatch function then does a lookup (or a switch) on the function name and calls the corresponding function.
Obviously it becomes a maintenance problem if there are a lot of functions.
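A sketch of that dispatch scheme, using std::string as a stand-in for a real variant type (all names here are hypothetical):

```cpp
#include <functional>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

using Variant = std::string;           // stand-in for a proper variant type
using Args    = std::vector<Variant>;

// the one entry point with a known prototype; every call routes through it
Variant dispatch(const std::string &name, const Args &args) {
    static const std::map<std::string, std::function<Variant(const Args &)>> table = {
        {"concat", [](const Args &a) -> Variant { return a.at(0) + a.at(1); }},
        {"repeat", [](const Args &a) -> Variant { return a.at(0) + a.at(0); }},
    };
    auto it = table.find(name);
    if (it == table.end())
        throw std::runtime_error("unknown function: " + name);
    return it->second(args);
}
```

The host only ever needs dlsym for dispatch itself; the lookup table grows with the library, which is exactly the maintenance cost noted above.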
I believe the Ruby FFI library achieves what you are asking. It can call functions in external dynamically linked libraries without specifically linking them in.
http://wiki.github.com/ffi/ffi/
You probably can't use it directly in your scripting language, but perhaps the ideas are portable.
--
Brad Phelan
http://xtargets.heroku.com
I'm in the process of writing a kind of runtime system/interpreter, and one of things that I need to be able to do is call c/c++ functions located in external libraries.
You can probably check for examples how Tcl and Python do that. If you are familiar with Perl, you can also check the Perl XS.
General approach is to require extra gateway library sitting between your interpreter and the target C library. From my experience with Perl XS main reasons are the memory management/garbage collection and the C data types which are hard/impossible to map directly on to the interpreter's language.
So what I'm asking is: is there a way to call a dynamically loaded function, pass it arguments, and retrieve its return value without knowing its prototype?
None known to me.
Ensure all the functions I want to load have the same prototype, and provide some sort mechanism for these functions to retrieve parameters and return values. This is what I am doing currently.
This is what another team on my project is doing too. They have standardized the API for external plug-ins on something like this:
typedef std::list< std::string > string_list_t;
string_list_t func1(string_list_t stdin, string_list_t &stderr);
Common tasks for the plug-ins are to perform transformation, mapping, or expansion of the input, often using an RDBMS.
Previous versions of the interface grew unmaintainable over time, causing problems for customers, product developers, and 3rd-party plug-in developers alike. Free use of std::string is allowed by the fact that the plug-ins are called relatively seldom (and even then, the overhead is peanuts compared to the SQL used all over the place). The stdin argument is populated with input depending on the plug-in type. A plug-in call is considered failed if any string in the stderr output parameter starts with 'E:' ('W:' is for warnings; the rest is silently ignored and can thus be used for plug-in development/debugging).
dlsym is used only once, on a function with a predefined name, to fetch from the shared library an array holding the function table (public function name, type, pointer, etc.).
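A sketch of that single-dlsym, function-table layout (the entry struct and names are hypothetical, built around the plug-in prototype above):

```cpp
#include <list>
#include <string>

using string_list_t = std::list<std::string>;
using plugin_fn_t   = string_list_t (*)(string_list_t, string_list_t &);

// one row per plug-in function: public name plus a pointer with the uniform prototype
struct plugin_entry {
    const char *name;
    plugin_fn_t fn;
};

static string_list_t echo(string_list_t in, string_list_t &errs) {
    (void)errs;                  // no 'E:' lines pushed, so the call succeeds
    return in;
}

// the only symbol the host ever resolves, e.g. dlsym(lib, "plugin_table")
extern "C" plugin_entry plugin_table[] = {
    {"echo", echo},
    {nullptr, nullptr},          // sentinel terminating the table
};
```

The host walks the array until the sentinel, matching public names against what the interpreter asks for.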
My solution is to define a generic proxy function which converts the dynamic function to a uniform prototype, something like this:
#include <functional>
#include <sstream>
#include <string>
using result = std::function<std::string(std::string)>;
template <class R, class A>
result proxy(R (*func)(A)) {
    // the deduced types R and A drive the conversions
    return [func](std::string s) {
        A arg{}; std::istringstream(s) >> arg;  // parse the argument from text
        std::ostringstream out;
        out << func(arg);                       // render the return value as text
        return out.str();
    };
}
In the user-defined file, you must add a definition to do the conversion:
double foo(double a) { /*...*/ }
auto local_foo = proxy(foo);
In your runtime system/interpreter, you can then use dlsym to obtain the foo function through its uniform prototype. It is the user-defined function foo's responsibility to do the calculation.
A C++ rules engine defines rules in XML where each rule boils down to "if X, then Y" where X is a set of tests and Y a set of actions.
In the C++ code, the 'functions' usable in tests/actions are implemented as one class per 'function', each having a run(args) method, and each taking its own set of parameters.
This works fine.
But, a separate tool is wanted to save users hand-crafting XML; the rules engine is aimed at non-programmers. The tool needs to know all the 'functions' available, as well as their required input parameters. What's the best way to consider doing this? I considered a couple of possibilities:
A config file describes the 'functions' and their parameters, and is read by the tool. This is pretty easy, and the actual C++ code can use it to perform argument validation, but the C++ and XML are still not guaranteed to be in sync - a programmer could modify the C++ and forget to update the XML, leading to validation bugs.
Each 'function' class has methods which describe it. Somehow the tool loads the C++ classes... this would be easy in a language supporting reflection, but messier in C++; probably you'd have to build a special DLL with all the 'functions' or something, which means extra overhead.
What makes sense given the nature of C++ specifically?
EDIT: is the title descriptive? I can't think of a better one.
There's a 3rd way - IDL.
Imagine you have a client-server app, and you have a code generator that produces wrapper classes that you can deploy on client and server so the user can write an app using the client API and the processing occurs on the server... this is a typical RPC scenario and is used in DCE-RPC, ONC-RPC, CORBA, COM and others.
The trick here is to define the signatures of the methods the client can call, which is done in an Interface Definition Language. This doesn't have to be difficult, but it is the source for the client/server API, you run it through a generator and it produces the C++ classes that you compile up for the client to use.
In your case, it sounds like the XML is the IDL. So you can create a tool that takes the XML and produces C++ headers describing the functions that your code exposes. You don't really have to generate the cpp files (you could), but it's easier to just generate the headers, so the programmer who adds a new function/parameter cannot forget to update the implementation - it just won't compile once the headers have been re-generated.
You can generate a header that is #included into the existing c++ headers if there is more there than just the function definitions.
So - that's my suggestion, #3: generate the definitions from your definitive XML signatures.
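A toy version of that generator step: assume the XML has already been parsed into a small struct per function (the struct shape is illustrative), and emit one declaration per signature for the generated header:

```cpp
#include <sstream>
#include <string>
#include <vector>

// what one parsed <function> element might boil down to (hypothetical shape)
struct FnSig {
    std::string ret;
    std::string name;
    std::vector<std::string> params;
};

// emit the C++ declaration for one signature, as the generated header would contain
std::string emit_decl(const FnSig &s) {
    std::ostringstream out;
    out << s.ret << ' ' << s.name << '(';
    for (std::size_t i = 0; i < s.params.size(); ++i) {
        if (i) out << ", ";
        out << s.params[i];
    }
    out << ");";
    return out.str();
}
```

Run over every signature in the XML and concatenate the results into the header; any drift between XML and C++ then surfaces as a compile error.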
There is one other way:
Add a constraint that the argument types be uniform in a function call.
Define some max number of arguments.
Describe the types and their precedence, i.e. double converts to String but not vice versa.
then you have
void f(int a1) .. f(int a1 .. int aN)
void f(double a1) .. f(double a1 .. double aN)
..
void f(T a1) ..
And other concrete data types like String, Date, etc.
Advantages:
Variations in signature are fixed and regular
It's possible to only provide the "biggest" type signature (T)
Works well with templates and language bridges
Can warn that action f with 2 integer parameters is undefined
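A compilable sketch of that overload family, with the catch-all "biggest" signature as a template (the types and the N=2 cutoff are illustrative; each overload returns its own name so the resolution is visible):

```cpp
#include <string>

// concrete overloads for the supported types, up to N = 2 arguments here
std::string f(int)            { return "f(int)"; }
std::string f(int, int)       { return "f(int,int)"; }
std::string f(double)         { return "f(double)"; }
std::string f(double, double) { return "f(double,double)"; }

// the "biggest" signature: anything not covered above lands here
template <class T>
std::string f(const T &)      { return "f(T)"; }
```

Overload resolution then applies the stated precedence: f(1) picks the int overload, f(1.0) the double one, and anything else, such as a String or Date type, falls through to the template.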