I have some questions about writing a custom OutputAttributeProcessor.
I use WSO2 CEP 2.1.0 and siddhi 1.1.0.
I want to create a custom OutputAttributeProcessor, so I created two Java classes: TestFactory, which implements OutputAttributeProcessorFactory, and Test, which implements OutputAttributeProcessor.
The package of both classes is org.wso2.siddhi.extention.
TestFactory must override createAggregator and getProcessorType, and Test must override createNewInstance, getType, processInEventAttribute, and processRemoveEventAttribute.
My first question is about these methods.
What should be written in getProcessorType?
Also, what is the difference between processInEventAttribute and processRemoveEventAttribute?
In addition, I have one more question.
I will create a jar file consisting of the two Java classes.
I add the jar file to the classpath at /repository/components/lib, and the fully qualified class name of TestFactory to the siddhi.extension file located at /repository/conf/siddhi.
What is the content of siddhi.extension?
Is it just the following line?
org.wso2.siddhi.extention.TestFactory
If there is a sample program for a custom OutputAttributeProcessor, please let me know.
Thank you in advance.
What should be written in getProcessorType?
Depending on your use case, you can return either the AGGREGATOR or the CONVERTER type here:
@Override
public ProcessorType getProcessorType() {
    return OutputAttributeProcessorFactory.ProcessorType.AGGREGATOR;
}
What exactly is your use case? As the name implies, you can use the AGGREGATOR type if it performs aggregations (i.e. something like calculating an average, or taking the min() of several events).
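Putting the pieces together, the factory could look roughly like the sketch below. Note this is only a sketch built from the member names in your question; the exact method signatures in Siddhi 1.1.0 may differ, so check them against the OutputAttributeProcessorFactory interface you are implementing.
package org.wso2.siddhi.extention;

public class TestFactory implements OutputAttributeProcessorFactory {

    @Override
    public OutputAttributeProcessor createAggregator() {
        // hand out the processor that keeps the aggregation state
        return new Test();
    }

    @Override
    public ProcessorType getProcessorType() {
        // AGGREGATOR, since Test accumulates state across events
        return OutputAttributeProcessorFactory.ProcessorType.AGGREGATOR;
    }
}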
Also, what is the difference between processInEventAttribute and processRemoveEventAttribute?
These are the two methods used to add events to and remove events from your OutputAttributeProcessor. For example, if you are calculating an average, it needs to be computed over a specific set of events (like a sliding window, usually not all events received so far), and that set changes dynamically. So when you receive an event through processInEventAttribute(), you can update your average to include that event. Similarly, when processRemoveEventAttribute() is called, you can update the average to exclude that event.
For example, see the code sample below, which calculates the average as a double value.
private double value = 0.0;
private long count = 0;

public Attribute.Type getType() {
    return Attribute.Type.DOUBLE;
}

@Override
public Object processInEventAttribute(Object obj) {
    // a new event arrived: add it to the running sum
    count++;
    value += (Double) obj;
    return value / count; // count is always >= 1 here
}

@Override
public Object processRemoveEventAttribute(Object obj) {
    // an event expired (e.g. left the sliding window): subtract it
    count--;
    value -= (Double) obj;
    if (count == 0) {
        return 0.0; // avoid division by zero when no events remain
    }
    return value / count;
}
What is the content of siddhi.extension?
It's just one line, as you've mentioned: the fully qualified class name.
org.wso2.siddhi.extention.TestFactory
For a few test cases I'm trying to follow the DRY principle, where only the interactions differ while the test case conditions stay the same. I'm not able to find a way to invoke multiple methods in the interaction { } block.
As mentioned in http://spockframework.org/spock/docs/1.3/interaction_based_testing.html#_explicit_interaction_blocks, I'm using interaction { } in the then: block like below:
Java Code:
// legacy code (still running on EJB 1.0 framework, and no dependency injection involved)
// can't alter java code base
public void getData() {
DataService ds = new DataService();
ds = ds.findByOffset(5);
Long len = ds.getOffset(); // happy path scenario; missing a null check
// other code
}
// other varieties of same code:
public void getData2() {
ItemEJB tmpItem = new ItemEJB();
ItemEJB item = tmpItem.findByOffset(5);
if(null != item) {
Long len = item.getOffset();
// other code
}
}
public void getData3() {
ItemEJB item = new ItemEJB().findByOffset(5);
if(null != item) {
Long len = item.getOffset();
// other code
}
}
Spock Test:
def "test scene1"() {
given: "a task"
// other code omitted
DataService mockObj = Mock(DataService)
when: "take action"
// code omitted
then: "action response"
interaction {
verifyDataScenario() // How to add verifyErrorScenario() interaction to the list?
}
}
private verifyDataScenario() {
1 * mockObj.findByOffset(5) >> mockObj // the findByOffset() returns an object, so mapped to same mock instance
1 * mockObj.getOffset() >> 200
}
private verifyErrorScenario() {
1 * mockObj.findByOffset(5) >> null // the findByOffset() returns null
0 * mockObj.getOffset() >> 200 // this won't be executed, and is expected to throw an NPE
}
The interaction closure doesn't accept more than one method call. I'm not sure if that's a design limitation. I believe more can be done in the closure than just mentioning the method name. I also tried interpolating mockObj as a variable and using a data pipe / data table, but since it refers to the same mock instance, it doesn't work. I'll post that as a separate question.
I ended up repeating the test case twice just to invoke different interaction methods. Down the line I see more scenarios, and I wanted to avoid a copy & paste approach. I'd appreciate any pointers on how to achieve this.
Update:
Modified the shared Java code, as the earlier DataService name was confusing.
As there's no DI involved and I didn't find a way to mock method-local variables, I mock them using PowerMockito, e.g. PowerMockito.whenNew(DataService.class).withNoArguments().thenReturn(mockObj).
Your application code looks very strange. Is the programming style in your legacy application really that bad? First a DataService object is created with a no-arguments constructor, just to be overwritten in the next step by calling a method on that instance which again returns a DataService object. What kind of programmer creates code like that? Or did you just make up some pseudo code which does not have much in common with your real application? Please explain.
As for your test code, it also does not make sense, because you instantiate DataService mockObj as a local variable in your feature method (test method), which means that mockObj cannot be accessed from your helper methods. So either you need to pass the object as a parameter to the helper methods or you need to make it a field of your test class.
Last, but not least, your local mock object is never injected into the class under test because, as I said in the first paragraph, the DataService object in getData() is also a local variable. Unless your application code is completely fake, there is no way to inject the mock because getData() does not have any method parameter and the DataService object is not a field which could be set via a setter method or constructor. Thus, you can create as many mocks as you want; the application will never have any knowledge of them. So your stubbing of findByOffset(long offset) (why don't you show the code of that method?) has no effect whatsoever.
Bottom line: Please provide an example reflecting the structure of your real code, both application and test code. The snippets you provide do not make any sense, unfortunately. I am trying to help, but like this I cannot.
Update:
In my comments I mentioned refactoring your legacy code for testability by adding a constructor, setter method or an overloaded getData method with an additional parameter. Here is an example of what I mean:
Dummy helper class:
package de.scrum_master.stackoverflow.q58470315;
public class DataService {
private long offset;
public DataService(long offset) {
this.offset = offset;
}
public DataService() {}
public DataService findByOffset(long offset) {
return new DataService(offset);
}
public long getOffset() {
return offset;
}
@Override
public String toString() {
return "DataService{" +
"offset=" + offset +
'}';
}
}
Subject under test:
Let me add a private DataService member with a setter in order to make the object injectable. I am also adding a check of whether the ds member has been injected. If not, the code will behave like before in production and create a new object by itself.
package de.scrum_master.stackoverflow.q58470315;
public class ToBeTestedWithInteractions {
private DataService ds;
public void setDataService(DataService ds) {
this.ds = ds;
}
// legacy code; can't alter
public void getData() {
if (ds == null)
ds = new DataService();
ds = ds.findByOffset(5);
Long len = ds.getOffset();
}
}
Spock test:
Now let us test both the normal and the error scenario. Actually I think you should break it down into two smaller feature methods, but as you seem to wish to test everything (IMO too much) in one method, you can also do that via two distinct pairs of when-then blocks. You do not need to explicitly declare any interaction blocks in order to do so.
package de.scrum_master.stackoverflow.q58470315
import spock.lang.Specification
class RepeatedInteractionsTest extends Specification {
def "test scene1"() {
given: "subject under test with injected mock"
ToBeTestedWithInteractions subjectUnderTest = new ToBeTestedWithInteractions()
DataService dataService = Mock()
subjectUnderTest.dataService = dataService
when: "getting data"
subjectUnderTest.getData()
then: "no error, normal return values"
noExceptionThrown()
1 * dataService.findByOffset(5) >> dataService
1 * dataService.getOffset() >> 200
when: "getting data"
subjectUnderTest.getData()
then: "NPE, only first method called"
thrown NullPointerException
1 * dataService.findByOffset(5) >> null
0 * dataService.getOffset()
}
}
Please also note that testing for exceptions thrown or not thrown adds value to the test; the interaction testing just checks internal legacy code behaviour, which has little to no value.
I'm facing a design problem. I want to separate object construction using the builder pattern, but the problem is that the objects have to be built from a configuration file.
So far I have decided that all objects created from the configuration will be stored in a DataContext class (a container for all objects), because these objects' states will be updated from a transmission (so it's easier to have them in one place).
I'm using an external library for reading the XML file, and my question is how to hide it: is it better to inject it into the ConcreteBuilder class? I should note that the builder class will have to create lots of objects and, at the end, connect them to each other.
The base class could look like this:
/*
* IDataContextBuilder
 * base class for building the data context object
 * and its sub-objects
*/
class IDataContextBuilder {
public:
/*
* GetResult()
* returns result of building process
*/
virtual DataContext * GetResult () = 0;
/*
* Virtual destructor
*/
virtual ~IDataContextBuilder() { }
};
class ConcreteDataContextBuilder : public IDataContextBuilder {
public:
    ConcreteDataContextBuilder(pugi::xml_node & rootNode);
    DataContext * GetResult ();
};
How do I implement this correctly? What would be a better pattern for building classes from configuration files?
I don't see a problem with that, but maybe you could add another 'Director' class that receives a specific builder, loads the config files, and produces objects by calling the respective builder subclasses.
What I mean:
class DataContextDirector {
public:
    void SetBuilder(IDataContextBuilder* builder);
    void SetConfig(const std::string& configFilePath); // or whatever
    DataContext* ProduceObject() {
        // pseudo-code here:
        // myBuilder->setup(xmlNodeOfConfig);
        // return myBuilder->GetResult();
    }
private:
    IDataContextBuilder* myBuilder = nullptr; // set via SetBuilder()
};
We're integrating with a 3rd-party webservice by using Wsdl2Java to translate their schema and endpoints into Java for three different webservices that they offer.
This particular provider uses a lot of the same objects (think objects representing an Address, Money, Weight, etc.) but, in their infinite wisdom, they've decided to create a different namespace for each webservice and to duplicate their schema definitions for each one. The result is that you get the following classes output for CXF integration:
com.thirdpartyguys.api.firstApi.Money
com.thirdpartyguys.api.secondApi.Money
com.thirdpartyguys.api.thirdApi.Money
Translating our data into theirs can involve a lot of business logic and, as a result, we have to define the code that creates the objects in triplicate for each individual Webservice API.
To overcome this problem I created an interface defined as follows:
import org.apache.commons.beanutils.BeanUtils;
public interface CommonObjectInterface<A, R, S> {
A toFirstApi();
R toSecondApi();
S toThirdApi();
default Object doTransform(Object destination, Object source) {
try {
BeanUtils.copyProperties(destination, source);
} catch (Exception e) {
throw new RuntimeException("Fatal error transforming Object", e);
}
return destination;
}
}
You would then have each common object implement the interface, define its own constructors, fluent API, etc, and call the toXXX() methods to get the proper form of the object for the respective API.
Right now most of these implementing classes work by keeping a local copy of one of the API objects, setting data on that, and then transforming it for the proper API using the doTransform() method, which in its default form uses the Apache Commons BeanUtils.copyProperties() method.
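For example, an implementing class for the Money objects could look like the sketch below (the setAmount property and the fluent amount() helper are assumptions for illustration; substitute whatever the generated Money beans actually expose):
public class CommonMoney implements CommonObjectInterface<
        com.thirdpartyguys.api.firstApi.Money,
        com.thirdpartyguys.api.secondApi.Money,
        com.thirdpartyguys.api.thirdApi.Money> {

    // keep one API's variant as the internal state...
    private final com.thirdpartyguys.api.firstApi.Money delegate =
            new com.thirdpartyguys.api.firstApi.Money();

    // ...expose a fluent API for the business logic...
    public CommonMoney amount(java.math.BigDecimal amount) {
        delegate.setAmount(amount); // assumed generated setter
        return this;
    }

    // ...and transform on demand for each webservice
    @Override
    public com.thirdpartyguys.api.firstApi.Money toFirstApi() {
        return delegate;
    }

    @Override
    public com.thirdpartyguys.api.secondApi.Money toSecondApi() {
        return (com.thirdpartyguys.api.secondApi.Money)
                doTransform(new com.thirdpartyguys.api.secondApi.Money(), delegate);
    }

    @Override
    public com.thirdpartyguys.api.thirdApi.Money toThirdApi() {
        return (com.thirdpartyguys.api.thirdApi.Money)
                doTransform(new com.thirdpartyguys.api.thirdApi.Money(), delegate);
    }
}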
It's more elegant than having the same code exist in three different places, but not by much! There's a lot of boilerplate and, even though this code won't be getting hammered too much, it's not that efficient.
I would like to get feedback from the community as to whether this is a good idea or if there are better approaches. A similar question was asked years ago here, but I don't know if better solutions have emerged since it was asked. I imagine the best thing would be configuring wsdl2Java to allow setting the namespace at runtime, but from my initial research this does not seem to be possible.
The solution to this problem is specific to this exact situation:
1) A webservice provider that has the same object in different namespaces
2) Using wsdl2Java or some underlying Apache CXF technology to generate the web artifacts for writing a client.
This is a fringe case, so I'm not sure how helpful this will be to the community, but the trick is to account for a few situations where a plain copyProperties call doesn't work. In this case I'm using Spring's BeanUtils and BeanWrapper classes, although I'm sure this could be adapted for Apache Commons as well. The following code does the trick:
final String TARGET_PACKAGE = "com.thirdpartyguys.api";
public Object doTransform(Object destination, Object source) {
/*
* This will copy all properties for the same data type for which there is a getter method in
* source, and a setter method in destination
*/
BeanUtils.copyProperties(source, destination);
BeanWrapper sourceWrapper = new BeanWrapperImpl(source);
for(PropertyDescriptor p : sourceWrapper.getPropertyDescriptors()) {
/*
* Properties that are references to other schema objects are identical in structure, but have
* different packages. We need to copy these separately
*/
Package propPackage = p.getPropertyType().getPackage();
// getPackage() returns null for primitive types, so guard before dereferencing
if(propPackage != null && propPackage.getName().startsWith(TARGET_PACKAGE)) {
try {
commonPropertyCopy(destination, source, p);
} catch (Exception e) {
throw new RuntimeException("Fatal error creating Data", e);
}
}
/*
* List properties don't get setter methods under the Apache CXF convention,
* so we have to call the get method and addAll()
*/
else if(Collection.class.isAssignableFrom(p.getPropertyType())) {
try {
collectionCopy(destination, source, p);
} catch (Exception e) {
throw new RuntimeException("Fatal error creating Data", e);
}
}
}
return destination;
}
private void collectionCopy(Object destination, Object source, PropertyDescriptor sourceProperty) throws Exception {
BeanWrapper destWrapper= new BeanWrapperImpl(destination);
PropertyDescriptor destProperty = destWrapper.getPropertyDescriptor(sourceProperty.getName());
Collection<?> sourceCollection = (Collection<?>) sourceProperty.getReadMethod().invoke(source);
Collection<Object> destCollection = (Collection<Object>) destProperty.getReadMethod().invoke(destination);
destCollection.addAll(sourceCollection);
}
private void commonPropertyCopy(Object destination, Object source, PropertyDescriptor sourceProperty) throws Exception {
if(sourceProperty.getPropertyType().isEnum()) {
instantiateEnum(destination, source, sourceProperty);
}
else {
instantiateObject(destination, source, sourceProperty);
}
}
private void instantiateEnum(Object destination, Object source, PropertyDescriptor sourceProperty) throws Exception {
BeanWrapper destWrapper= new BeanWrapperImpl(destination);
Enum<?> sourceEnum = (Enum<?>) sourceProperty.getReadMethod().invoke(source);
PropertyDescriptor destProperty = destWrapper.getPropertyDescriptor(sourceProperty.getName());
Object enumValue = Enum.valueOf(destProperty.getPropertyType().asSubclass(Enum.class), sourceEnum.name());
destProperty.getWriteMethod().invoke(destination, enumValue);
}
private void instantiateObject(Object destination, Object source, PropertyDescriptor sourceProperty) throws Exception {
Object subObj = sourceProperty.getReadMethod().invoke(source);
if(subObj!=null) {
BeanWrapper destWrapper = new BeanWrapperImpl(destination);
String subObjName = sourceProperty.getName();
PropertyDescriptor destProperty = destWrapper.getPropertyDescriptor(subObjName);
Class<?> propertyType = destProperty.getReadMethod().getReturnType();
Object subObjCopy = propertyType.getConstructor().newInstance();
doTransform(subObjCopy, subObj);
destProperty.getWriteMethod().invoke(destination, subObjCopy);
}
}
instantiateObject is used to create new instances of the "identical" objects from the different packages. The same applies to enumerated types, which require their own method, hence instantiateEnum. Finally, the default CXF implementation offers no setter method for Lists; we handle that situation in collectionCopy.
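For illustration, a hypothetical call site converting between two of the Money variants from the question might look like this (getMoneyFromFirstApi() is a made-up accessor; CXF-generated beans normally provide the no-argument constructors this relies on):
com.thirdpartyguys.api.firstApi.Money source = getMoneyFromFirstApi();
com.thirdpartyguys.api.secondApi.Money dest =
        new com.thirdpartyguys.api.secondApi.Money();

// copies simple properties, recursively clones nested schema objects and
// enums, and merges collection contents, as implemented above
doTransform(dest, source);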
I have a system which gets lists of objects from an external system in some ABC format, converts them to an internal representation, and passes them to an external service:
class ABCService {

    private final ExtService extService;

    public ABCService(ExtService extService) {
        this.extService = extService;
    }

    // called process here because 'do' is a reserved word in Java
    public void process(ABCData[] abcObjs) throws NoDataException {
        if (abcObjs.length == 0) {
            throw new NoDataException();
        } else {
            List<Data> objs = new ArrayList<>();
            for (ABCData abcObj : abcObjs) {
                Data obj = Parser.parse(abcObj); // static call
                objs.add(obj);
            }
            extService.process(objs);
        }
    }
}
When it comes to testing ABCService, we can test two things:
If no data is passed to process(), the service throws an exception;
If some data is passed to process(), the service should call extService and pass it exactly the same number of objects it received from the test caller.
But, though the Parser factory is also tested, there is no guarantee that the output "objs" list is actually connected to the input abcObjs (e.g. the method could create a list of the right length but "forget" to populate it).
In my opinion those two test cases don't fully cover the method's workflow, leaving some of it dangerously untested.
How can I change the design of ABCService to increase its testability?
The major testing difficulty in this code is that you have two collaborators and one of them is static.
If you can convert your Parser to a non-static class (or perhaps wrap it in a non-static one) and inject it as you do the extService, you could test that the parser is called the right number of times with the right arguments. By stubbing the return values from the parser, you could also verify that extService is called with the appropriately transformed objects instead of just the correct number of objects.
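To make that concrete, here is a minimal sketch of the refactoring (DataParser is a hypothetical wrapper interface; the other names follow the question's code):
import java.util.ArrayList;
import java.util.List;

// wraps the static Parser call behind an interface that can be mocked
interface DataParser {
    Data parse(ABCData abcObj);
}

class ABCService {
    private final ExtService extService;
    private final DataParser parser;

    public ABCService(ExtService extService, DataParser parser) {
        this.extService = extService;
        this.parser = parser;
    }

    public void process(ABCData[] abcObjs) throws NoDataException {
        if (abcObjs.length == 0) {
            throw new NoDataException();
        }
        List<Data> objs = new ArrayList<>();
        for (ABCData abcObj : abcObjs) {
            objs.add(parser.parse(abcObj)); // now a mockable instance call
        }
        extService.process(objs);
    }
}
In production you can inject Parser::parse as a method reference, since DataParser has a single abstract method; in tests you inject a mock and can verify both the number of calls and the exact arguments.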
The problem you encountered is that one function handles two tasks. The process method can be logically separated into two different member functions, so that you can write a unit test for each of them.
By refactoring, you can extract the parsing and populating logic into another member function:
class ABCService {
    public void process(ABCData[] abcObjs) throws NoDataException {
        extService.process(populateList(abcObjs));
    }

    List<Data> populateList(ABCData[] abcObjs) throws NoDataException {
        if (abcObjs.length == 0) {
            throw new NoDataException();
        }
        List<Data> objs = new ArrayList<>();
        for (ABCData abcObj : abcObjs) {
            objs.add(Parser.parse(abcObj)); // static call
        }
        return objs; // returned after the loop, one entry per input
    }
}
Your current unit tests can remain for the process method, and additionally you can add a unit test case for the populateList method to ensure it generates the correct data list.
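For example, the extra test could look like this sketch (assuming JUnit 4 plus Mockito for the ExtService mock; note that populateList still calls the static Parser, so the real parsing logic runs here):
// static imports assumed: org.mockito.Mockito.mock, org.junit.Assert.assertEquals
@Test
public void populateListCreatesOneDataObjectPerInput() throws Exception {
    ABCService service = new ABCService(mock(ExtService.class));
    ABCData[] input = { new ABCData(), new ABCData() };

    List<Data> result = service.populateList(input);

    // guards against the "list created but never populated" failure mode
    assertEquals(input.length, result.size());
}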
When I read the source of play framework 1.2.5, I found this class:
package play.classloading;
import java.util.concurrent.atomic.AtomicLong;
/**
 * Each unique instance of this class represents a State of the ApplicationClassloader.
 * When some classes are reloaded, the ApplicationClassloader gets a new state.
 * <p/>
 * This makes it easy for other parts of Play to cache stuff based on the
 * current State of the ApplicationClassloader.
 * <p/>
 * They can store a reference to the current state, and then later, before reading from the cache,
 * they can check whether the state of the ApplicationClassloader has changed.
*/
public class ApplicationClassloaderState {
private static AtomicLong nextStateValue = new AtomicLong();
private final long currentStateValue = nextStateValue.getAndIncrement();
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
ApplicationClassloaderState that = (ApplicationClassloaderState) o;
if (currentStateValue != that.currentStateValue) return false;
return true;
}
@Override
public int hashCode() {
return (int) (currentStateValue ^ (currentStateValue >>> 32));
}
}
I don't understand the hashCode method. Why does it use this implementation:
return (int) (currentStateValue ^ (currentStateValue >>> 32));
The source url: https://github.com/playframework/play/blob/master/framework/src/play/classloading/ApplicationClassloaderState.java#L34
hashCode() must return an int by design. currentStateValue is a long denoting the internal state changes of the ApplicationClassloader. The Play framework is a sophisticated web framework with a so-called hot code replacement and reload feature; a central part of that feature's implementation is play.classloading.ApplicationClassloader.detectChanges().
As you can see there, there are lots of 'new ApplicationClassloaderState()' invocations, which means the currentStateValue member in the returned objects increases quickly (depending on your development workflow and application size).
Returning to the question: the goal of the implementation is to fold the 64-bit long into a 32-bit int while keeping as much distinguishing information as possible, so that two different states are very unlikely to produce the same hash. XORing the upper 32 bits into the lower 32 bits achieves that, and it is the same algorithm documented for java.lang.Long.hashCode().
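A tiny sketch to illustrate the folding (plain Java, nothing framework-specific):
// XOR the upper 32 bits of the long into the lower 32 bits before the cast.
// This is the same algorithm documented for java.lang.Long#hashCode().
long v = 0x0000000100000001L; // differs from w only in the upper 32 bits
long w = 0x0000000200000001L;

int hv = (int) (v ^ (v >>> 32)); // 0
int hw = (int) (w ^ (w >>> 32)); // 3

System.out.println(hv == hw);               // false: upper bits still matter
System.out.println((int) v == (int) w);     // true: a plain cast would collide
System.out.println(Long.valueOf(v).hashCode() == hv); // true: same formula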
Try to think of currentStateValue as an (encryption) key which encrypts itself; in cryptology terms a terrible approach, but in this context very useful. Perhaps you recognize the relationship.
Sorry that I can't post links here, but you can find some useful hints on Wikipedia by searching for "Involution (mathematics)", "One-time pad", and "Tabula recta".