Whenever a CriteriaQuery in JPA 2 yields no result, a NoResultException is thrown. This exception is not very useful: at best, the line number where it is thrown gives some indication of what went wrong.
I would like some speaking output along the lines of "Looking for a class of type and the restrictions applied where and ". The CriteriaQuery seems rather shy with such information; with a combination of reflection and getter methods I could probably get everything I want, but it would be quite messy and cumbersome.
Is there a better way to retrieve the data, which went into the CriteriaQuery-Object?
As long as you stick to the JPA API, the public interface of CriteriaQuery is all you have.
If you are willing to go implementation-specific and are using Hibernate, more is available by casting the query to org.hibernate.ejb.CriteriaQueryImpl. For example, the render method seems to provide access to a nested class that has a getQueryString method.
Most likely similar hooks can be found in other JPA implementations.
I am trying to implement message handling for actors in C++. The following Scala code is what I am trying to reproduce in C++:
def receive = {
  case Message1 => { /* logic code */ }
  case Message2 => { /* logic code */ }
}
The idea is to create a set of handler functions for the various message types and a dispatch method that routes each message to its appropriate handler. All messages extend a common base message type.
What would be the best approach to solve this problem:
Maintain a map from message type to function pointer; the dispatch method looks up the map and calls the appropriate handler. This mapping, however, needs to be set up manually in the Actor class.
I read through this library; it handles messages exactly the way I want, but I can't understand how it does pattern matching on the lambda functions created on line 56.
I would appreciate any suggestions or reading links that could bring me closer to a solution.
Since you've already mentioned CAF: why do you want to implement your own actor library instead of using CAF? If you are writing the library as an exercise, I suggest starting with libcaf_core/caf/match_case.hpp, libcaf_core/caf/on.hpp, and libcaf_core/caf/detail/try_match.hpp. This is the "core" of CAF's pattern-matching facility. Be warned: you will be looking at a lot of metaprogramming code. The code is meant to be read by C++ experts, and it's definitely not a good place to learn the techniques.
I can outline what's going on, though.
CAF stores patterns as a list of match_case objects in detail::behavior_impl
As a user, you never get a pointer to either one
message_handler and behavior store a pointer to behavior_impl
Match cases can be generated differently:
Directly from callbacks/lambdas (trivial case)
Using a catch-all rule (via others >> ...)
Using the advanced on(...) >> ... notation
CAF can only match against tuples stored in message objects
"Emulates" (a subset of) reflection
Values and meta information (i.e. type information) are needed
For the matching itself, CAF simply iterates over the list of match_case objects:
It tries to match the input against each case
It stops at the first match (just as functional languages implement this)
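The map-based dispatch from the question is much simpler than CAF's case list but works fine for single-message patterns. A minimal sketch (not CAF's actual implementation; all names here are hypothetical) keys handlers on std::type_index, so registration happens once per message type instead of manually in every dispatch:

```cpp
#include <functional>
#include <typeindex>
#include <unordered_map>

// Common base type for all messages; the virtual destructor makes the
// hierarchy polymorphic, so typeid() reports the dynamic type.
struct Message { virtual ~Message() = default; };
struct Message1 : Message {};
struct Message2 : Message {};

class Actor {
public:
    // Register a handler for one concrete message type. The wrapper lambda
    // downcasts from Message&; the cast is safe because the map key
    // guarantees the dynamic type matches T.
    template <typename T>
    void on(std::function<void(const T&)> handler) {
        handlers_[typeid(T)] = [handler](const Message& m) {
            handler(static_cast<const T&>(m));
        };
    }

    // Look up the dynamic type of m and invoke its handler.
    // Returns false when no case matches (cf. stopping on first match).
    bool dispatch(const Message& m) {
        auto it = handlers_.find(typeid(m));
        if (it == handlers_.end()) return false;
        it->second(m);
        return true;
    }

private:
    std::unordered_map<std::type_index,
                       std::function<void(const Message&)>> handlers_;
};
```

Usage mirrors the Scala receive block: `actor.on<Message1>([](const Message1&) { /* logic code */ });`. This covers only exact-type, single-argument matching; CAF's tuple matching and guards need the heavier machinery described above.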
We put a lot of effort into the pattern matching implementation to get a high-level and clean interface on the user's end. It's not easy, though. So, if you're doing this as an exercise, be warned that you need a lot of metaprogramming experience to understand the code.
In case you don't do this as an exercise, I would be interested why you think CAF does not cover your use case and maybe we can convince you to participate in its development rather than developing something else from scratch. ;)
Try sobjectizer (batteries included) or rotor (still experimental, but a quite lightweight actor-like solution).
A bit of background:
I am currently working on an assignment for my OOP course, which consists of designing and implementing a phone book manager around various design patterns.
In my project there are 3 classes around which all the action happens:
PhoneBook;
Contact (the types stored in the phone book);
ContactField (fields stored in the Contact).
PhoneBook must provide a way to iterate over its contacts in two modes, unfiltered and filtered by a predicate; Contact must provide a way to iterate over its fields.
How I initially decided to implement:
All the design-pattern books I have come across recommend coding to an interface, so my first thought was to extract an interface from each of the above classes and make them implement it.
I also have to create some kind of polymorphic iterator for things to be smooth, so I adapted the Java Iterator interface to write forward iterators.
The problems:
The major setback with this design is that I lose interoperability with the STL <algorithm> header and the syntactic sugar offered by range-based for loops.
Another issue I came across is the Iterator<T>::remove() function. If I want an iterator that can alter the sequence it iterates over (remove elements), all is fine; however, if I don't want that behavior, I'm not exactly sure what to do. I see that in Java one can throw UnsupportedOperationException, which isn't that bad since (correct me if I'm wrong) an unhandled exception terminates the application and prints a stack trace. In C++ you don't really have that luxury (unless you run with a debugger attached, I think), and to be honest I'd prefer to catch such errors at compile time.
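For what it's worth, C++ can push the "no mutation through this iterator" decision to compile time: if the container only hands out const_iterators, any attempt to modify or erase through them is rejected by the compiler instead of throwing at runtime. A minimal sketch (PhoneBook and Contact here are stand-ins, not your actual classes):

```cpp
#include <string>
#include <vector>

struct Contact { std::string name; };

// Read-only iteration: only const_iterators are exposed, so code like
// "it->name = ..." fails to compile, which is the compile-time analogue
// of Java's UnsupportedOperationException in Iterator::remove().
class PhoneBook {
public:
    void add(Contact c) { contacts_.push_back(std::move(c)); }

    using const_iterator = std::vector<Contact>::const_iterator;
    const_iterator begin() const { return contacts_.begin(); }
    const_iterator end() const { return contacts_.end(); }

private:
    std::vector<Contact> contacts_;
};
```

Range-based for loops and the const-friendly parts of <algorithm> work unchanged on such a class; a mutating variant would simply expose the non-const iterator as a separate, deliberate choice.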
The easiest way out of this mess (that I see) is to avoid interfaces on the iterable types and accommodate my own STL-compatible iterators instead. This will increase coupling, but I'm not sure it will actually have any impact in the long run (and not because this project will become throwaway code soon, of course). My guess is that it won't; however, I'd like to hear the elders' opinion before I proceed with my design.
I would probably take a slightly different approach.
Firstly, iteration over a contact is pretty simple, since it's a single kind of iteration: you can just provide begin and end methods that iterate over the underlying fields.
For iteration over a PhoneBook I would still provide normal begin and end methods, and then add a for_each_if function for visiting only the interesting contacts, instead of trying to write a super-custom iterator that skips uninteresting elements.
Say we want to parse XML messages into business objects. We split the process into two parts:
- Parsing the XML messages into XML grammar objects.
- Transforming the XML objects into business objects.
The first part is done automatically, generating a grammar object for each node.
The second part follows the XML structure so far. Example:
If we have the following XML message (simplified):
<Main>
  <ChildA>XYZ</ChildA>
  <ChildB att1="0">
    <InnerChild>YUK</InnerChild>
  </ChildB>
</Main>
We could find the following classes:
DecodeMain (calls DecodeChildA and DecodeChildB)
DecodeChildA
DecodeChildB (calls DecodeInnerChild)
DecodeInnerChild
The main problem arises when we need to handle different versions of the same messages. Say we have a new version where only DecodeInnerChild changes (e.g., we need to add an "a" at the end of the value).
It is really important that the solution stays agile for further versions and is as clean as possible. I considered the following options:
1) Simple inheritance: create two DecodeInnerChild classes, one for each version.
Shortcoming: I will need a different class for every parent class so that it calls the right one.
2) Version parameter: add to each method an object carrying the version as a parameter. This way we know what to do within each method according to the version.
Shortcoming: not clean at all; the code of different versions is mixed.
3) Inheritance + version parameter: create two classes with a base class for the common code of the nodes that directly change (like InnerChild), and add the version as a parameter to each method. When a node calls another class to decode a child object, it uses one class or the other depending on the version parameter.
4) Some kind of executor pattern (I do not know how to do it): define at the start some kind of specification object, where all the methods that are going to be used are indicated, and pass this object to a class that is in charge of executing them.
How would you do it? Other ideas are welcomed.
Thanks in advance. :)
How would you do it? Other ideas are welcomed.
Rather than parse XML myself, I would, as a first step, let something like CodeSynthesis XSD generate all the needed classes for me and work on those. Later, when performance or something else becomes an issue, I would look around for more efficient parsers, and only if that proved fruitless would I start to design and write my own parser for the specific case.
Edit:
Sorry, I should have been more specific :P - the first part is done automatically; the whole code is generated from the XML schema.
OK, let's discuss then how to handle the usual situation that, as software evolves, its input eventually evolves too. I'll put all the silver bullets and magic wands on the table here; if and what you implement of them is totally up to you.
A version attribute is something I add to most things I create anyway. It is sane to have it in place before the first backward-compatibility issue that cannot be solved elegantly. Most importantly, when old software fails to parse newer input, it can then produce a complaint that immediately makes sense to everybody.
I usually also add some kind of converter interface. That way, old software can be equipped with a converter from a newer version of the input when it fails to parse it, and new software can use the same converter to parse older input. It is also the place to plug in a converter from totally "alien" input. A win-win-win situation. ;)
In the special case of a minor change, I would consider whether it is cheap to make the new DecodeInnerChild internally more flexible, so it accepts the value with or without that trailing "a" as valid. In the converter I still have to strip that "a" when converting for older versions.
Often what actually happens is that InnerChild splits and both versions are used side by side. If there is sufficient behavioral difference between the two InnerChilds, there is no point in avoiding polymorphic InnerChilds. Once polymorphism is added, then indeed, as you say in your option 1), all containing classes that now have such polymorphic members have to be altered. In such cases the converter should usually either produce a crippled InnerChild or report to older versions that the input is outside their capabilities.
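The polymorphic route for the minor-change case can stay small. A sketch with hypothetical names, where the only v1/v2 difference is the trailing "a" from the question:

```cpp
#include <memory>
#include <string>

// Each message version gets its own decoder behind a common interface,
// so containing decoders hold a pointer and never branch on a version
// flag themselves.
struct InnerChildDecoder {
    virtual ~InnerChildDecoder() = default;
    virtual std::string decode(const std::string& raw) const = 0;
};

struct InnerChildDecoderV1 : InnerChildDecoder {
    std::string decode(const std::string& raw) const override {
        return raw;  // version 1 values pass through unchanged
    }
};

struct InnerChildDecoderV2 : InnerChildDecoder {
    std::string decode(const std::string& raw) const override {
        // version 2 values carry a trailing "a" that must not leak
        // into the business object
        if (!raw.empty() && raw.back() == 'a')
            return raw.substr(0, raw.size() - 1);
        return raw;
    }
};

// The version attribute from the message header selects the decoder once,
// at the top of the parse, instead of in every method.
std::unique_ptr<InnerChildDecoder> makeInnerChildDecoder(int version) {
    if (version >= 2)
        return std::make_unique<InnerChildDecoderV2>();
    return std::make_unique<InnerChildDecoderV1>();
}
```

A DecodeChildB equivalent would receive the InnerChildDecoder (or the factory) by reference, which is what keeps the parent classes identical across versions.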
I am very sorry if this question seems stupid; I am a newbie to Tcl and tcltest. I am trying to unit-test a few TclOO programs, and I am having difficulty testing the private methods (the methods called using the keyword 'my'). Guidance needed.
Leaving aside the question of whether you should test private methods, you can get at the methods by one of these schemes:
Use [info object namespace $inst]::my $methodname to call it, which takes advantage of the fact that you can use introspection to find out the real name of my (and that's guaranteed to work; it's needed for when you're doing callbacks with commands like vwait, trace, and Tk's bind).
Use oo::objdefine $inst export $methodname to make the method public for the particular instance. At that point, you can just do $inst $methodname as normal.
Consequence: You should not use a TclOO object's private methods for things that have to be protected heavily (by contrast with, say, a private field in a Java object). The correct level for handling such shrouding of information is either to put it in a master interpreter (with the untrusted code evaluating in a safe slave) or to keep the protected information at the underlying implementation (i.e., C) level. The best option of those two depends on the details of your program; it's usually pretty obvious which is the right choice (you don't write C just for this if you're otherwise just writing Tcl code).
This might look off-topic, but bear with me.
Are you sure you have to test private methods? That sounds like testing the implementation, and that's something you shouldn't do. You should be testing the behavior of your class, and that is tested through its public methods.
If you have a complicated chunk of code in one of the private methods and you feel it needs to be tested separately, consider refactoring the code into two separate classes. Make the method that needs testing public in one of the two classes.
That way you avoid having a "god class" that does everything, and you get to test what you wanted to test. You might want to read more about the Single Responsibility Principle.
If you need specific book titles on refactoring, I'd recommend "Clean Code" by Robert C. Martin. I love that book!
I have several modules (mainly C) that need to be redesigned (using C++). Currently, the main problems are:
many parts of the application rely on the functions of the module
some parts of the application might want to overrule the behavior of the module
I was thinking about the following approach:
redesign the module so that it has a clear, modern class structure (using interfaces, inheritance, STL containers, ...)
writing a global module interface class that can be used to access any functionality of the module
writing an implementation of this interface that simply maps the interface methods to the correct methods of the correct classes in the module
Other modules in the application that currently use the C functions of the module directly should instead be passed [an implementation of] this interface. That way, if the application wants to alter the behavior of one of the module's functions, it simply inherits from the default implementation and overrides any function it wants.
An example:
Suppose I completely redesign my module so that I have classes like: Book, Page, Cover, Author, ... All these classes have lots of different methods.
I make a global interface, called ILibraryAccessor, with lots of pure virtual methods
I make a default implementation, called DefaultLibraryAccessor, that simply forwards each call to the correct method of the correct class, e.g.
DefaultLibraryAccessor::printBook(book) calls book->print()
DefaultLibraryAccessor::getPage(book,10) calls book->getPage(10)
DefaultLibraryAccessor::printPage(page) calls page->print()
Suppose my application has 3 kinds of windows
The first one allows all functionality and as an application I want to allow that
The second one also allows all functionality (internally), but from the application I want to prevent printing separate pages
The third one also allows all functionality (internally), but from the application I want to prevent printing certain kinds of books
When constructing the window, the application passes an implementation of ILibraryAccessor to the window
The first window will get the DefaultLibraryAccessor, allowing everything
I will pass a special MyLibraryAccessor to the second window, and in MyLibraryAccessor I will override the printPage method and let it fail
I will pass a special AnotherLibraryAccessor to the third window, and in AnotherLibraryAccessor I will override the printBook method and check the type of the book before calling book->print().
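The override-and-restrict idea in a compact sketch (class names taken from the question; Book and Page bodies are stand-ins, since the real module isn't shown):

```cpp
#include <stdexcept>

// Stand-ins for the module's real classes
struct Page { bool printed = false; void print() { printed = true; } };
struct Book { bool printed = false; void print() { printed = true; } };

// The global access interface handed to each window
struct ILibraryAccessor {
    virtual ~ILibraryAccessor() = default;
    virtual void printBook(Book& b) = 0;
    virtual void printPage(Page& p) = 0;
};

// Forwards every call to the correct method of the correct class
struct DefaultLibraryAccessor : ILibraryAccessor {
    void printBook(Book& b) override { b.print(); }
    void printPage(Page& p) override { p.print(); }
};

// Accessor for the second window: everything as default,
// but printing separate pages fails
struct MyLibraryAccessor : DefaultLibraryAccessor {
    void printPage(Page&) override {
        throw std::logic_error("printing separate pages is not allowed here");
    }
};
```

Each window receives an `ILibraryAccessor&` at construction and never learns which concrete accessor it got, which is exactly what lets the application swap policies per window.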
The advantage of this approach, as the example shows, is that an application can override any method it wants. The disadvantage is that I get a rather big interface, and the class structure is completely hidden from the modules that access this module through it.
Good idea or not?
You could represent the class structure with nested interfaces. E.g. instead of DefaultLibraryAccessor::printBook(book), have DefaultLibraryAccessor::Book::print(book). Otherwise it looks like a good design to me.
Maybe look at the design pattern called "Facade". Use one facade per module. Your approach seems good.
ILibraryAccessor sounds like a known anti-pattern, the "god class".
Your individual windows are probably better off inheriting and overriding at Book/Page/Cover/Author level.
The only thing I'd worry about is a loss of granularity, partly addressed by suszterpatt previously. Your implementations might end up being rather heavyweight and inflexible. If you're sure that you can predict the future use of the module at this point then the design is probably ok.
It occurs to me that you might want to keep the interface fine-grained, but find some way of injecting this kind of display-specific behaviour rather than trying to incorporate it at top level.
If you have n methods in your interface class, and there are m behaviors per method, you get m*(nC1 + nC2 + nC3 + ... + nCn) implementations of your interface (I hope I got my math right :) ). Compare this with the m*n implementations you need if you had a single interface per function. And this method has added flexibility, which is more important. So no, I don't think a single interface would do. But you don't have to be extreme about it.
EDIT: I am sure the math is wrong. :(
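For comparison, one way the count can be done (a sketch, under the assumption that any combination of per-method behaviors may be needed by some window):

```latex
% n methods, m possible behaviors per method.
% One fat interface: each implementation must fix a behavior for every
% method, so covering all combinations needs, in the worst case,
\underbrace{m \cdot m \cdots m}_{n\ \text{methods}} \;=\; m^{n}
% One interface per method: each method's m behaviors are m small,
% independent implementations, giving only
m \cdot n
% implementations, composed per window as needed.
```

This matches the m*n figure quoted above for per-method interfaces; the fat-interface count grows exponentially rather than combinatorially, which is the qualitative point either way.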