Just want to ask one thing.
What is the use of the below method in a Sitecore context?
Language.TryParse(string value, out bool);
Since this method belongs to the Sitecore.Kernel library, I'm not able to see its code; in my case it's not working as expected.
You're referring to the bool Language.TryParse(string value, out Language result) method, right? It'll give you a Language object if the provided value is a locale, such as en-US, registered in Sitecore. There are some more caveats within that method. You can read the source of it using a reflection tool, such as dotPeek.
Since there may be different languages in different Sitecore databases, it's usually more suitable to use, for example, LanguageManager.GetLanguage(string name, Database database). It'll return the Language object if it exists, or null if it's not available in the given database.
Following my reading of the article Programmers Are People Too by Ken Arnold, I have been trying to implement the idea of progressive disclosure in a minimal C++ API, to understand how it could be done at a larger scale.
Progressive disclosure refers to the idea of "splitting" an API into categories that are disclosed to the user of the API only upon request. For example, an API can be split into two categories: a base category (accessible to the user by default) for methods that are often needed and easy to use, and an extended category for expert-level services.
I have found only one example on the web of such an implementation: the db4o library (in Java), but I do not really understand their strategy. For example, if we take a look at ObjectServer, it is declared as an interface, just like its extended interface ExtObjectServer. Then an implementing ObjectServerImpl class, inheriting from both these interfaces, is defined, and all methods from both interfaces are implemented there.
This supposedly allows code such as:
public void test() throws IOException {
    final String user = "hohohi";
    final String password = "hohoho";

    ObjectServer server = clientServerFixture().server();
    server.grantAccess(user, password);

    ObjectContainer con = openClient(user, password);
    Assert.isNotNull(con);
    con.close();

    server.ext().revokeAccess(user); // How does this limit the scope to
                                     // expert-level methods only, since it
                                     // inherits from ObjectServer?
    // ...
}
My knowledge of Java is not that good, but it seems my misunderstanding of how this works is at a higher level.
Thanks for your help!
Java and C++ are both statically typed, so what you can do with an object depends not so much on its actual dynamic type, but on the type through which you're accessing it.
In the example you've shown, you'll notice that the variable server is of type ObjectServer. This means that when going through server, you can only access ObjectServer methods. Even if the object happens to be of a type which has other methods (which is the case here: its actual type is ObjectServerImpl), you have no way of directly accessing those other methods through server.
To access other methods, you need to get hold of the object through a different type. This can be done with a cast, or with an explicit accessor such as your ext(): a.ext() returns a, but as a different type (ExtObjectServer), giving you access to different methods of a.
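Here is a minimal sketch of the pattern. The names mirror db4o's interfaces, but the bodies are illustrative, not db4o's actual code:

// Base category: what every user sees by default.
interface ObjectServer {
    void grantAccess(String user, String password);
    ExtObjectServer ext(); // explicit gateway to the expert-level API
}

// Extended category: expert-level services, still including the base ones.
interface ExtObjectServer extends ObjectServer {
    void revokeAccess(String user);
}

// One implementation class satisfies both views.
class ObjectServerImpl implements ExtObjectServer {
    @Override
    public void grantAccess(String user, String password) { /* ... */ }

    @Override
    public void revokeAccess(String user) { /* ... */ }

    @Override
    public ExtObjectServer ext() {
        return this; // the same object, exposed through the wider type
    }
}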
Your question also asks how server.ext() is limited to expert methods when ExtObjectServer extends ObjectServer. The answer is: it is not, and that is correct. It should not be limited like this. The goal is not to provide only the expert functions. If that were the case, client code which needs both normal and expert functions would have to hold two references to the object, just differently typed, and there would be no advantage to be gained from that.
The goal of progressive disclosure is to hide the expert stuff until it's explicitly requested. Once you ask for it, you've already seen the basic stuff, so why hide it from you?
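In client code, the two views then behave like this (continuing the sketch above; the restriction is enforced at compile time):

ObjectServer server = new ObjectServerImpl();

server.grantAccess("user", "password");  // basic method: always visible
// server.revokeAccess("user");          // does not compile: not an ObjectServer method
server.ext().revokeAccess("user");       // expert method: explicitly requested
server.ext().grantAccess("user", "pw");  // basic methods remain reachable via ext()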
I want to use derived attributes and references in an Ecore model, but so far I have not found any documentation on how to set the code for the methods that compute the values of derived attributes/references.
As far as I understand it, the basic workflow is to mark an attribute/reference as derived, generate model code, and then manually add the implementation. However, I work with models dynamically generated through the Ecore API. Is there a way to take a String and specify this String as the implementation for the computation of the derived feature, without manually editing generated files?
EDIT:
To clarify: I'm looking for a way to directly change the generated Java files, by specifying method bodies (as strings) for the getters of derived EStructuralFeatures.
EMF provides a way of supplying a dedicated implementation for an EOperation or a derived EAttribute using an "invocation delegate". This functionality allows you to put the implementation directly into your Ecore metamodel in string form (as long as the language used can be "handled" by EMF, i.e., an invocation delegate exists for it).
As far as I know, OCL is well supported: https://wiki.eclipse.org/OCL/OCLinEcore#Invocation_Delegate
The registration of the invocation delegate is performed either by plugin registration or by hand (for standalone usage), and the mechanism works with the EMF reflection layer (dynamic EMF): https://wiki.eclipse.org/EMF/New_and_Noteworthy/Helios#Registering_an_Invocation_Delegate
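As a rough sketch of what that might look like (an assumption on my part based on the wiki pages above; the class names come from the org.eclipse.ocl.ecore.delegate package and should be verified against the OCL version you use), standalone registration plus a derived attribute on a dynamic model could be done like this:

import org.eclipse.emf.ecore.EAnnotation;
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EOperation;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;
import org.eclipse.ocl.ecore.delegate.OCLDelegateDomain;
import org.eclipse.ocl.ecore.delegate.OCLInvocationDelegateFactory;
import org.eclipse.ocl.ecore.delegate.OCLSettingDelegateFactory;

public final class OclDelegates {
    static final String OCL_URI = OCLDelegateDomain.OCL_DELEGATE_URI; // "http://www.eclipse.org/emf/2002/Ecore/OCL"

    // Standalone registration: tell EMF which factories handle the OCL delegate URI.
    public static void register() {
        EOperation.Internal.InvocationDelegate.Factory.Registry.INSTANCE
                .put(OCL_URI, new OCLInvocationDelegateFactory.Global());
        EStructuralFeature.Internal.SettingDelegate.Factory.Registry.INSTANCE
                .put(OCL_URI, new OCLSettingDelegateFactory.Global());
    }

    // Attach an OCL derivation expression to a derived feature of a dynamic model.
    public static void addDerivation(EPackage pkg, EAttribute attr, String oclExpr) {
        EAnnotation pkgAnn = EcoreFactory.eINSTANCE.createEAnnotation();
        pkgAnn.setSource(EcorePackage.eNS_URI); // "http://www.eclipse.org/emf/2002/Ecore"
        pkgAnn.getDetails().put("settingDelegates", OCL_URI);
        pkg.getEAnnotations().add(pkgAnn);

        EAnnotation attrAnn = EcoreFactory.eINSTANCE.createEAnnotation();
        attrAnn.setSource(OCL_URI);
        attrAnn.getDetails().put("derivation", oclExpr);
        attr.getEAnnotations().add(attrAnn);
    }
}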
(Please note that I have never used this mechanism myself. I know it exists, but I have never played with it.)
EDIT:
It seems that the question was not about dynamic code execution for derived attributes, but about code injection (I misunderstood "Is there a way to take a String and specify this String as the implementation for the computation of the derived feature?").
EMF provides a way of injecting code placed on the ecore metamodel directly into the generated code.
Here is the way to do it for a derived EAttribute. The EAttribute should have the properties derived and volatile set to true (you can also add transient). If you only want a getter and no setter for your EAttribute, also set the property changeable to false.
Once your EAttribute is configured this way, add a new EAnnotation with the source set to http://www.eclipse.org/emf/2002/GenModel and an entry with the key set to get and the value set to the code that should be injected.
And voilà, your code will be generated with the value entry's code injected into your getter.
You can apply the same process for an EOperation, using body instead of get.
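Since you build your models through the Ecore API, the same annotation can also be attached programmatically. A minimal sketch (the helper name and example body are made up for illustration):

import org.eclipse.emf.ecore.EAnnotation;
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EcoreFactory;

public final class GenModelBodies {
    static final String GEN_MODEL = "http://www.eclipse.org/emf/2002/GenModel";

    // Configure a derived EAttribute and attach the Java body for its generated getter.
    public static void setGetterBody(EAttribute attribute, String javaBody) {
        attribute.setDerived(true);
        attribute.setVolatile(true);
        attribute.setTransient(true);
        attribute.setChangeable(false); // getter only, no setter

        EAnnotation annotation = EcoreFactory.eINSTANCE.createEAnnotation();
        annotation.setSource(GEN_MODEL);
        annotation.getDetails().put("get", javaBody); // use key "body" for an EOperation
        attribute.getEAnnotations().add(annotation);
    }
}

// e.g. setGetterBody(fullName, "return getFirstName() + \" \" + getLastName();");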
How can I detect incompatible API changes in C++? (not ABI but API changes)
Where compatible changes are things that can't break compilation of code using the API like:
parameter(s) added to method with default argument
methods added to a class
members added to a class
classes added
order of members or methods changed
comments/documentation changes
And incompatible changes are things that potentially break compilation of code using the API like:
removed arguments, (public/protected) methods, members, classes
type changes of arguments or members
name changes of public/protected members or methods
classes moved from one header to another
OP is right to think that C++ parsing is probably necessary. Likely deep reasoning, too.
I think the way to pose the question is: for a particular set of uses of an API in an existing application, does changing the API change or break the application?
If you don't limit yourself to a specific set of uses, almost any change to an API will change its semantics; otherwise, why would you make the change (modulo pure refactoring)? And if the application uses the full set of API features, then its semantics must change somehow too.
With a specific set of uses, one can arguably determine which properties of the API might affect those uses, and whether in fact they do. Ultimately you have to parse the original code accurately to determine the specific set of uses and the context in which they occur. You also have to determine the semantic properties on which the existing application depends, including the properties provided by the legacy API. Finally, you need to determine the properties defined by the new API, and verify that they still support the needs of the application.
In general, you need a theorem prover over the program properties to check this. And, while theorem proving technology has advanced significantly over the last 50 years, AFAIK said technology isn't strong enough to take generally arbitrary program properties and prove them, let alone overcome the problem of reasoning about arbitrarily complex programs.
Consider:
// my application
int x = 0;
int y = foo(x);    // the API guarantees this stays small...
if (y > 3) fail(); // ...so this shouldn't happen
exit(0);

// my legacy API
int foo(int x) { return x + 1; }
Now imagine the API is changed to:
// my new API
int foo(int x) { return x + 2; }
The application still functions correctly.
How about:
// my new API
int foo(int x) { return TuringMachine(x); }
How are we going to prove that TuringMachine(x) produces a value of 3 or less? If we can't do this for such tiny programs, how are we going to do it for the ones we write in practice?
Now, you might be able to limit the set of changes you will consider to simply "syntactic" operations, such as "move method" or "add parameter with initial value".
You'll still need to parse the original program and the modified APIs, and check that the syntactic properties imply semantic properties that don't damage the original program. You'll likely need control and data-flow analysis, alias analysis to deal with pointers, etc., and even then the tool will at best be able to tell, for a limited number of cases, that no damaging change has occurred.
I'm sure there are research papers on this topic. A quick check at scholar.google.com didn't find anything obvious.
See "Source Compatibility" tab of the report of the ABICC tool. Most of the mentioned API changes and API breaks are detected by this tool.
There are two approaches to using the tool: the original one, via analysis of header files, and a newer one, via analysis of the library's debug info ([1]). Use the second one; it's more reliable and simpler.
You can find some report examples here: http://abi-laboratory.pro/tracker/
Tutorial: https://sourceware.org/glibc/wiki/Testing/ABI_checker#Usage
As an example: a string that contains only a valid email address, as defined by some regex.
If a field of this type were part of a more complex data structure, or were used as a function parameter, or in any other context, the client code would be able to assume the field is a string containing a valid email address. Thus, no checks like valid? should ever be necessary, so the approach of domaintypes would not work.
In Haskell this could be accomplished by a smart constructor (section 1.2) and in Java by ensuring the type is immutable (all setters private) and by adding a check in the constructor that throws a RuntimeException if the string used to create the type doesn't contain a valid email address.
If this is impossible in plain Clojure, I would like to see an example implementation in some well known extensions of the language, like Typed Clojure.
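For reference, the Java variant I mean would look roughly like this (a minimal sketch; the class name and regex are illustrative only):

import java.util.regex.Pattern;

// Immutable type: if an instance exists, it holds a valid address by construction.
public final class EmailAddress {
    // Deliberately simplified pattern; real email validation is more involved.
    private static final Pattern SIMPLE_EMAIL =
            Pattern.compile("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");

    private final String value;

    public EmailAddress(String value) {
        if (value == null || !SIMPLE_EMAIL.matcher(value).matches()) {
            // IllegalArgumentException is a RuntimeException
            throw new IllegalArgumentException("Not a valid email address: " + value);
        }
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}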
OK, I think I understand the question now; I didn't formulate my thoughts very well in the comment. So I'll suggest some admissible solutions to your question, and then try to explain the ideas from my comment.
1) There is gen-class, which generates compiled bytecode for a class, and you can define the constructor for the class there.
2) You can create a record with defrecord in some namespace that is private by convention in your project, then create another namespace with the public API and define your factory function there. Users of your public namespace will then only call the public functions of that namespace. (Of course, they can still reach the private ones, but only with some extra code.)
3) You can just define a function like make-email that returns a map. That way you haven't committed to a concrete data structure anywhere.
4) You can simply document your code, warning people to use the factory function for construction.
But! In Java, if your code requires some interface, it's the user's problem to hand your code a valid implementation of that interface. So as soon as you write even slightly generic code in Java, you have already lost the guarantee of the valid email string. This interface machinery exists because Java is a statically typed language.
Clojure is, in general, dynamically typed, so the user should generally be able to pass an arbitrary data structure to an arbitrary function without any type problems at compile time, and it's their fault if they pass the wrong data. That makes, for example, the following possible: you create a record and a factory (constructor) function, and you expect a record to be passed into your code; but the user can pass a map with the same keys as your record's field names, and the code will still work.
So, in general, if the user of your code is already responsible for passing the required type in a dynamically typed language, it costs them nothing to also be responsible for constructing it in the correct way that you provide.
Other solutions: the user just writes tests; you can specify :pre and :post conditions in your API functions to check the structure; you can use Typed Clojure with the ideas above; or you can use some additional declarative libraries, like the one mentioned in the first comment by @Thumbnail.
P.S. I'm not a Clojure professional, so I may easily have missed some better solutions.
I have a URI structure which is hierarchical for a particular data set:
/Blackboard/Requirement/{reqID}/Risk/{riskId}/MitigationPlan/{planId}
If the URL is split at the various IDs, you can GET that particular resource, e.g.:
GET: /Blackboard/Requirement/2/Risk/2
This gets Risk #2 associated with Requirement #2
The question is this: a desired feature is now to be able to update (PUT) and delete (DELETE) a set of requirements in the same HTTP request. (The entire set of requirements is GET-able by firing an HTTP GET at the /Blackboard URL; that's the default functionality, since something is written on the blackboard, figuratively speaking.)
So should I create a new collection resource URL only supporting PUT/DELETE like this:
/Blackboard/Requirements : HTTP PUT/DELETE
(note the plural)
or actually make the existing URL structure plural
/Blackboard/Requirements/{reqID}/Risk/{riskId}/MitigationPlan/{planId}
The latter seems to break semantic uniformity since the other items in the hierarchy are singular. Should I make them plural too?
Does having an item id help disambiguate singularity (from a human perspective :) like Blackboard/Requirements/1, or is it preferable to expose a different resource (i.e., a collection) purely for operational reasons (since GET is not allowed without an id, irrespective of it being singular or plural)?
I just wanted to know the community's opinion on which approach is commonly chosen (or is the right way of doing it) for clarity and cleaner design.
The latter seems to break semantic uniformity since the other items in the hierarchy are singular. Should I make them plural too?
You should want your URLs to be cool (as in the W3C's "Cool URIs don't change"). So if you do change them to make everything match, make sure your old singular URLs return redirects to the new locations. Estimating the expense of that might help you decide between change and no change. If your API isn't used yet, then there's no barrier either way. IMO, I'd go for consistency.
Does having an item id help disambiguate singularity (from a human perspective :) like Blackboard/Requirements/1, or is it preferable to expose a different resource (i.e., a collection) purely for operational reasons (since GET is not allowed without an id, irrespective of it being singular or plural)?
For me, it makes more sense to have the URL collections plural, even with an id. That could be a bias from file systems, though. In that sense, it only makes sense that a single resource sits deeper in the URL than the collection resource. It also gives you an easy way back to the collection resource: a URL breadcrumb.