Webservice invocation in BPEL with abstract return type

Is it possible to invoke, in BPEL, a webservice whose return type is an abstract class and which at runtime returns any of the derived types?
E.g. the return type is an order status with a status field, and its subclasses have specific fields for the different cases (valid order, invalid order, etc.).
The problem is that at invocation you have to specify an output variable of this abstract type, so subtype-specific data cannot be stored in a single type.
So far I have only thought of defining a data type that accommodates all possible cases by declaring all the fields of all derived classes.
Is there a better approach to this problem?
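One schema-level alternative to flattening every field into a single type is to model the variants with an xsd:choice in the WSDL's schema, so the output variable can hold whichever branch the service returns. This is only a sketch; all element and type names here are hypothetical, and whether it helps depends on how your engine maps the schema:

```xml
<!-- Hypothetical schema sketch: one concrete wrapper type whose body is a
     choice between the subtype-specific detail elements. -->
<xsd:complexType name="OrderStatus">
  <xsd:sequence>
    <xsd:element name="status" type="xsd:string"/>
    <xsd:choice>
      <xsd:element name="validOrderDetails"   type="tns:ValidOrderDetails"/>
      <xsd:element name="invalidOrderDetails" type="tns:InvalidOrderDetails"/>
    </xsd:choice>
  </xsd:sequence>
</xsd:complexType>
```

The BPEL process can then test which branch is present (e.g. with bpel:doXslTransform or a simple XPath condition on the element name) instead of relying on xsi:type-based polymorphism, which engines handle inconsistently.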

This should be possible, but may depend on the BPEL engine you are using.
I recall having built similar processes in Apache ODE and WSO2 BPS.
If your BPEL engine does not support this, maybe you can create several variables with the concrete types and use them appropriately in the invocations.
HTH

How to create a container storing functions with different signatures?

I implemented a class which depended on an interface for sending data.
Concrete implementations of the interface were written for testing and for production, and they were injected at construction (depending on whether the class was being tested or used in production).
This works, but there is a maintenance overhead in keeping multiple overloaded send functions that do very similar things.
I would like to make the send function a template; however, that is impossible for an overridden (virtual) function.
My next idea is that rather than the class depending on an interface, it will contain a map of datatypes to callbacks. This means I can specify the functionality for each datatype and inject it into the class, depending on whether I want test functionality or real functionality.
The difficulty comes because the map has to store functions with different signatures, since the parameter type is different for every function.
How best can I do this? Is the idea sound or is there a better design?

Selecting a design pattern to assign different objects via an interface, based on a user-made configuration

I am currently working on a user-configurable controller.
The user can configure the modules, which are objects of the same or different classes, all returning one or more variables as integer or boolean.
The user can configure the links between the configured objects, as they can request each other's return data via a method.
An execution manager executes the highest object in the configuration, i.e. the one whose return values are not used by any other object. The highest object requests, as configured by the user, return data from other objects via their methods. These methods "activate" the object whose data is requested, which in turn asks objects further down the chain for their return data.
I am planning to write this software in C++, and it shall run on a Cortex-M4 microcontroller.
I have been looking into several design patterns but can't find one matching my needs. So I made my own design, but I am not totally convinced it is the perfect solution.
My design so far:
an abstract base class acts as an interface for creation.
a class inheriting the base class decorates the module.
another base class acts as an interface to access a single bool or integer.
a class inheriting the "other base class" contains the actual algorithm to access the method that retrieves the data from the module.
Meaning:
for every single bool or integer that is linkable by configuration, an object is created to retrieve the data, returning it via a standard base interface.
this means that every module can have any number of variables, each of which, when used, results in a single accessor object per variable.
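The per-variable accessor idea described above might be sketched as follows; all names are illustrative, not from the question:

```cpp
#include <functional>

// Common interface through which any linkable integer variable is read.
// A module's bool/int outputs are each wrapped in one such accessor.
struct IIntSource {
    virtual ~IIntSource() = default;
    virtual int get() = 0;  // "activates" the owning module and returns its value
};

// Concrete accessor binding one module method to the common interface.
class IntAccessor : public IIntSource {
public:
    explicit IntAccessor(std::function<int()> getter)
        : getter_(std::move(getter)) {}
    int get() override { return getter_(); }
private:
    std::function<int()> getter_;
};
```

On a Cortex-M4 you may want to replace `std::function` with a plain member-function pointer plus object pointer, to avoid the potential heap allocation that `std::function` can incur.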
Is there any other, more efficient design pattern than my "Brand pattern", creating less overhead but providing the same run-time flexibility?
Kind regards, Robbert

Why should I use DECLARE_DYNAMIC instead of DECLARE_DYNCREATE?

DECLARE_DYNCREATE provides exactly the same features as DECLARE_DYNAMIC, along with the ability to create objects dynamically. Why, then, should anyone use DECLARE_DYNAMIC instead of DECLARE_DYNCREATE?
The macros are documented to provide different functionality.
DECLARE_DYNAMIC:
Adds the ability to access run-time information about an object's class when deriving a class from CObject.
This provides the functionality for introspection, similar to RTTI (Run-Time Type Information) provided by C++. An application can query a CObject-derived class instance for its run-time type through the associated CRuntimeClass structure. It is useful in situations where you need to check that an object is of a particular type, or has a specific base class type. The examples at CObject::IsKindOf should give you a good idea.
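For readers without MFC at hand, the kind of check IsKindOf performs is analogous to a dynamic_cast test in standard C++. This is a sketch of the analogue, not MFC code; the type names are made up:

```cpp
// Standard-C++ analogue of the IsKindOf-style run-time type check:
// given only a base pointer, ask whether the object is of a derived type.
struct Shape { virtual ~Shape() = default; };
struct Circle : Shape {};
struct Square : Shape {};

bool isCircle(const Shape* s) {
    // Comparable in spirit to pObj->IsKindOf(RUNTIME_CLASS(Circle)) in MFC,
    // though IsKindOf walks the CRuntimeClass chain rather than using RTTI.
    return dynamic_cast<const Circle*>(s) != nullptr;
}
```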
DECLARE_DYNCREATE:
Enables objects of CObject-derived classes to be created dynamically at run time.
This macro enables dynamic creation of class instances at run-time. The functionality is provided through the class factory method CRuntimeClass::CreateObject. It can be used when you need to create class instances at run-time based on the class type's string representation. An example would be a customizable GUI, that is built from an initialization file.
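The class-factory mechanism behind CRuntimeClass::CreateObject can be illustrated in plain C++ as a string-keyed registry of factory functions. This is a sketch of the idea, not the MFC implementation; all names are hypothetical:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Minimal class factory: create an instance from the type's string name,
// e.g. a name read from an initialization file.
struct Widget {
    virtual ~Widget() = default;
    virtual std::string name() const { return "Widget"; }
};
struct Button : Widget {
    std::string name() const override { return "Button"; }
};

using Factory = std::function<std::unique_ptr<Widget>()>;

std::map<std::string, Factory>& registry() {
    static std::map<std::string, Factory> r = {
        {"Widget", [] { return std::make_unique<Widget>(); }},
        {"Button", [] { return std::make_unique<Button>(); }},
    };
    return r;
}

std::unique_ptr<Widget> createByName(const std::string& cls) {
    auto it = registry().find(cls);
    if (it == registry().end()) return nullptr;  // unknown class name
    return it->second();
}
```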
Both features are implemented through the same CRuntimeClass structure, which may lead to the conclusion that they can be used interchangeably. In fact, code that uses an inappropriate macro will compile just fine, and expose the desired run-time behavior. The difference is purely semantic: the macros convey different intentions, and should be used according to the desired features, to communicate developer intent.
There's also a third related macro, DECLARE_SERIAL:
Generates the C++ header code necessary for a CObject-derived class that can be serialized.
It enables serialization of respective CObject-derived class instances, for example to a file, memory stream, or network socket. Since the deserialization process requires dynamic creation of objects from the serialized stream, it includes the functionality of DECLARE_DYNCREATE.
Put together, the following list should help you pick the right macro for your specific scenarios:
Use DECLARE_DYNAMIC if your code needs to retrieve an object's run-time type.
Use DECLARE_DYNCREATE if, in addition, you need to dynamically create class instances based on the type's string representation.
Use DECLARE_SERIAL if, in addition, you need to provide serialization support.
You're asking "why buy a Phillips screwdriver when I own a flathead?" The answer is that you should use the tool that suits your needs: if you need to drive only flathead screws, don't buy a Phillips driver. Otherwise, buy one.
If you need the features provided by DECLARE_DYNCREATE (e.g. because you're creating a view that's auto-created by the framework when a document is opened), then you should use DECLARE_DYNCREATE; if you don't, and DECLARE_DYNAMIC suffices, use that.

C++ adaptor with different interfaces, where interfaces may have different type/number of input parameters

It is well known how to build an adapter when the adaptee's methods look the same except for the name.
For example,
http://sourcemaking.com/design_patterns/adapter/cpp/2
where none of "doThis", "doThat", and "doOther" takes inputs. However, what if different methods have different numbers of input parameters?
Thanks
The example given in the linked document describes a use of the adapter pattern in a situation where the change is purely syntactic. The situation implied by your question contains a semantic change, i.e. the adaptee method does not provide exactly the same service that the adapter interface "promises" to deliver. This means that the adaptee must be wrapped with more than a simple name change: some work must be done around it to build the missing parameters, or to transform the existing parameters into those required by the adaptee.
If each new adaptee has different requirements, then each adapter must contain the ad-hoc adapting code. There's not much one can do to factor a common pattern out of this situation. The only easy case is the trivial one, when all the needed parameters are independent of the passed ones and can be computed once and for all before constructing the adapter, allowing the adapter to be a simple std::bind equivalent.
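That trivial case can be sketched as follows: the target interface promises a no-argument call, and the adapter fixes the adaptee's extra parameters at construction time. All names are illustrative, loosely following the style of the linked sourcemaking example:

```cpp
#include <string>
#include <utility>

// Target interface: promises a parameterless operation.
struct Target {
    virtual ~Target() = default;
    virtual int doThis() = 0;
};

// Adaptee: needs two parameters the target interface does not supply.
class Adaptee {
public:
    int specificRequest(int base, const std::string& mode) {
        return mode == "double" ? base * 2 : base;
    }
};

// Adapter: the missing parameters are bound once, at construction,
// which is the hand-written equivalent of a std::bind expression.
class Adapter : public Target {
public:
    Adapter(Adaptee& a, int base, std::string mode)
        : adaptee_(a), base_(base), mode_(std::move(mode)) {}
    int doThis() override { return adaptee_.specificRequest(base_, mode_); }
private:
    Adaptee& adaptee_;
    int base_;
    std::string mode_;
};
```

When the parameters instead depend on run-time state at each call, this fixed binding no longer works, and the adapter body has to compute them per call, which is exactly the ad-hoc code mentioned above.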

Consuming custom objects between webservices

I have a webservice that is designed to accept performance data via a custom object. The custom object contains a collection (generic List) of performance measures, among other data. A performance measure consists of simple data types (strings, ints, and a DateTime). The only method exposed by the webservice requires this custom object (the performance data object) to be passed in.
The problem lies in using this custom object externally. I wish to use the Add() and Item() methods of the generic List class, along with various other features of this class, within another webservice. If I request the object from the performance data webservice, it serializes the inner collection to an ArrayList. I would like it to remain a generic collection.
I have toyed with the XmlInclude attribute but haven't found a solution with it yet.
The next thing I tried was to create an assembly containing this specific object, which both the performance data webservice and any satellite programs (i.e. another webservice) can use. The issue here is that when I try to pass in the custom object created from the separate assembly, the performance data webservice complains that it is a different type. (I am also applying the XmlInclude(GetType(...)) attribute, with the custom assembly's type, to the exposed method.) It still thinks the types are not convertible.
Note: I would prefer to call the performance data WS to get the custom object instead of having to add assemblies to each project that needs access.
Does anyone have an idea other than restructuring the program to work with the methods exposed by ArrayList?
If you use WCF, you can configure what type of collection comes out, whether an ArrayList, a fixed array, or a generic List.
I have found a solution that works with .NET 2.0: by using Web Services Contract First (WSCF, http://www.thinktecture.com/resourcearchive/tools-and-software/wscf/wscf-walkthrough),
I was able to pass generic collections between two services. A downside to WSCF, as the name suggests, is that the approach requires contract-first instead of the more common code-first methodology. Luckily, it is not terribly complicated to modify the class and proxy after they are created. Hope this helps any lost travelers...