ColdFusion, WSDL, and extended complex classes

I am working with a webservice that provides basic CRUD functionality. The Retrieve is easy enough to work with, but I'm having trouble working with the Create (I've not yet messed with the Update or Delete functions).
The update function takes a single argument. This is a zObject in the WSDL. However, this is a generic object extended by what I actually need to pass. If I want to create an account, for example, I pass an Account object which extends the zObject definition.
I cannot for the life of me figure out how to get CF to allow me to do this.

ColdFusion implements the Apache Axis engine for its web service functionality.
Unfortunately, CF does not take full advantage of the SOAP object model and does not allow
CF developers to "new" the different objects that make up a service (or subclass them).
Thankfully there is something we can do about that. When you first access a WSDL,
Axis generates a set of stub objects. These are regular java classes that contain
getters and setters for the base properties of the object. We need to use these
stubs to build our object.
In order to use these stubs however, we need to add them to the ColdFusion
class path:
Step 1) Access the WSDL in any way with ColdFusion.
Step 2) Look in the CF app directory for the stubs. They are in a "stubs"
directory, organized by WSDL, like:
c:\ColdFusion8\stubs\WS\WS-21028249\com\foo\bar\
Step 3) Copy everything from "com" on down into a new directory that exists in
the CF class path, or make a new one like:
c:\ColdFusion8\MyStubs\com\foo\bar\
Step 4) If you created a new directory, add it to the class path:
A. Open the ColdFusion Administrator.
B. Click Server Settings >> Java and JVM.
C. Add the path to "ColdFusion Class Path" and click Submit.
D. Restart the CF services.
Step 5) Use them like any other Java object with <cfobject> or CreateObject():
MyObj = CreateObject("java","com.foo.bar.MyObject");
Remember that you can CFDump the object to see the available methods.
<cfdump var="#MyObj#" />
Your Account object SHOULD be in the stubs. If you need to create it for some reason, you would need to do that in a new Java class file.
Generally, when working with this much Java, cfscript is the way to go.
Finally, the code would look like this:
<cfscript>
// create the web service
ArgStruct = StructNew();
ArgStruct.refreshWSDL = True;
ArgStruct.username = 'TestUserAccount';
ArgStruct.password = 'MyP##ssw0r3'; // a literal # must be doubled to ## inside CFML strings
ws = createObject("webservice", "http://localhost/services.asmx?WSDL",ArgStruct);
account = CreateObject("java","com.foo.bar.Account");
account.SetBaz("hello world");
ws.Update(account);
</cfscript>

I agree with the critique of ColdFusion; however, the posted solution also does not respond well to WSDL changes.
Thankfully, CF does allow access to all the underlying Java methods on objects, including reflection. While CreateObject() does not know about the stub objects, the classloader that created the web service does.
ws = createObject("webservice", "http://localhost/services.asmx?WSDL",ArgStruct);
account = ws.getClass().getClassLoader().loadClass('com.foo.bar.Account').newInstance();
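The same lookup can also be written as a small Java helper. This is a minimal sketch, assuming the placeholder class name com.foo.bar.Account from above; note that on current JVMs getDeclaredConstructor().newInstance() is the preferred replacement for the deprecated Class.newInstance():

public class StubLoader {
    // Load a generated Axis stub class through the classloader that
    // created the web service proxy, then instantiate it reflectively.
    public static Object newStub(Object webServiceProxy, String className)
            throws ReflectiveOperationException {
        Class<?> stubClass = webServiceProxy.getClass()
                                            .getClassLoader()
                                            .loadClass(className);
        return stubClass.getDeclaredConstructor().newInstance();
    }
}

Called as newStub(ws, "com.foo.bar.Account"), this returns the same kind of instance as the CF one-liner above.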

Related

What's the correct way to allow different ColdFusion CFC's to instance each other?

I have a “best-practices” question in regards to the correct way to instance CFCs that all need to talk to each other in a given project.
Let’s say for example you have a web application that has a bunch of different modules in it:
Online Calendar
Online Store
Blog
File Manager (uploading/downloading/processing files)
User Accounts
Each of these modules is nicely organized so that the functions that pertain to each module are contained within separate CFC files:
Calendar.cfc
Store.cfc
Blog.cfc
Files.cfc
Users.cfc
Each CFC contains functions appropriate for that particular module. For example, the Users.cfc contains functions pertaining to logging users on/off, updating account info etc…
Sometimes a CFC might need to reference a function in another CFC, for example, if the store (Store.cfc) needs to get information from a customer (Users.cfc). However, I'm not sure of the correct way to accomplish this. There are a couple ways that I've been playing with to allow my CFC's to reference each other:
Method 1: Within a CFC, instance the other CFC’s that you’re going to need:
<!--- Store.cfc --->
<cfcomponent>
<!--- instance all the CFC’s we will need here --->
<cfset usersCFC = CreateObject("component","users") />
<cfset filesCFC = CreateObject("component","files") />
<cffunction name="storeAction">
<cfset var customerInfo = usersCFC.getUser(1) />
This approach seems to work most of the time, unless some of the instanced CFCs also instance the CFCs that instance them; for example, if Users.cfc instances Files.cfc and Files.cfc also instances Users.cfc. I've run into occasional dreaded NULL NULL errors with this, probably because of some type of infinite recursion issue.
Method 2: Instance any needed CFCs inside a CFC’s function scope (this seems to prevent the recursion issues):
<!--- Store.cfc --->
<cfcomponent>
<cffunction name="storeAction">
<!--- create a struct to keep all this function’s variables --->
<cfset var local = structNew() />
<!--- instance all the CFC’s we will need here --->
<cfset local.usersCFC = CreateObject("component","users") />
<cfset local.filesCFC = CreateObject("component","files") />
<cfset var customerInfo = local.usersCFC.getUser(1) />
My concern with this approach is that it may not be as efficient in terms of memory and processing efficiency because you wind up instancing the same CFC’s multiple times for each function that needs it. However it does solve the problem from method 1 of infinite recursion by isolating the CFCs to their respective function scopes.
One thing I thought of, based on things I've seen online and articles on object-oriented programming, is to take advantage of a "Base.cfc" which uses the "extends" property of the cfcomponent tag to instance all of the CFCs in the application. However, I've never tested this type of setup before, and I'm not sure it is the ideal way to allow all my CFCs to talk to each other, especially since I believe using extends overwrites functions if any of them share a common function name (e.g. "init()").
<!--- Base.cfc --->
<cfcomponent extends="calendar store blog users files">
What is the correct "best-practices" method for solving this type of problem?
If each of your CFC instances is intended to be a singleton (i.e. you only need one instance of it in your application), then you definitely want to look into Dependency Injection. There are three main Dependency Injection frameworks for CF: ColdSpring, WireBox and DI/1.
I'd suggest you look at DI/1 or WireBox, as ColdSpring hasn't been updated for a while.
The wiki page for DI/1 is here:
https://github.com/framework-one/di1/wiki/Getting-Started-with-Inject-One
Wirebox wiki page is here:
http://wiki.coldbox.org/wiki/WireBox.cfm
Essentially what these frameworks do is create (instantiate) your CFCs (beans) and then handle the dependencies they have on each other. So when you need your instantiated CFC, it's already wired up and ready to go.
Dependency Injection is also sometimes called IoC (inversion of control) and is a common design pattern used in many languages.
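To make the wiring idea concrete, here is a minimal sketch of constructor-based dependency injection in plain Java (the UserService and StoreService names are illustrative, not from any of the frameworks above):

// The store declares the collaborator it needs instead of creating it.
class UserService {
    String getUser(int id) { return "customer-" + id; }
}

class StoreService {
    private final UserService users;

    // The dependency is handed in by whoever wires the application together.
    StoreService(UserService users) {
        this.users = users;
    }

    String storeAction() {
        return users.getUser(1);
    }
}

class Wiring {
    public static void main(String[] args) {
        // A DI framework performs this instantiation and wiring for you,
        // typically once per application for singleton beans.
        UserService users = new UserService();
        StoreService store = new StoreService(users);
        System.out.println(store.storeAction());
    }
}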
Hope that helps and good luck!
If your CFCs are not related to each other, the Base.cfc concept does not fit. Inheritance is for classes that have things in common. For example, if you have User.cfc and you want to add a new CFC called Customer.cfc, I would inherit from User and override some functionality, or add some, without touching the actual User.cfc.
So, back to your question: since the CFCs are not related and have little in common with each other, and to avoid cross-referencing, I would create a ServiceFactory that holds instances of the CFCs, like this:
component name="ServiceFactory"
{
    function init(){
        return this;
    }
    public User function getUserService(){
        return new User();
    }
    public Calendar function getCalendar(){
        return new Calendar();
    }
}
and referencing it by
serviceFactory= new ServiceFactory();
userService = serviceFactory.getUserService();
Keep in mind this approach works only if you have some other CFC to manage your logic. You can do the same for all the other services. If your functions are static, you can store your services in the application scope and instantiate them only one time (like a singleton).
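A minimal Java sketch of that "instantiate only one time" variant: the factory lazily creates each service on first request and then keeps returning the cached instance (the class and method names are illustrative):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative factory that caches one instance per service name.
class CachingServiceFactory {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    <T> T get(String name, Supplier<T> creator) {
        // computeIfAbsent runs the creator only on the first request.
        return (T) cache.computeIfAbsent(name, key -> creator.get());
    }
}

With this, factory.get("user", UserService::new) returns the same instance on every call, which is the application-scope behavior described above.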
The third option you have is a DI (dependency injection) framework.

Compile Scala classes on the fly to implement webservices

I am struggling with how to dynamically add some webservices. I am using Scalatra for the webservice framework.
I want to allow the developer to be able to change authentication, for example, so that rather than using hard-coded credentials, instead use a database or password file or whatever they need.
I also want to allow them to add new webservices inside the servlet.
So, what I want to do is in the bootstrap code have it load up and recompile the class and then use that version.
I have looked at this, but I need to recompile an entire class, not snippets.
Generating a class from string and instantiating it in Scala 2.10
This is what I have tried. I added a "/help" webservice, but it isn't found, so the new class isn't being used yet.
class ScalatraBootstrap extends LifeCycle {
  override def init(context: ServletContext) {
    val sourceDir = new java.io.File("C:/Temp/MyServlet.scala")
    val sse = ScalaScriptEngine.onChangeRefresh(sourceDir)
    sse.refresh
    println("*** - " + sse.compilationStatus.startTime + " " + sse.compilationStatus.stopTime)
    context.mount(sse.get[MyServlet]("test.proj.MyServlet"), "/*")
  }
}
I am using scalascriptengine (https://code.google.com/p/scalascriptengine/) at the moment.
So, how can I recompile the class file for the webservice, when it may have case classes, annotations and object classes in the same file, on the fly?
I am wondering if I need to have the webservice in Groovy instead, but I would prefer to keep it functional.
UPDATE
I had thought about plugins first, but ran into the problem of how I would add new webservices that way. It may be that Scalatra is not going to be the right choice, and that I need to change my REST service framework.
Eventually I want to be able to change the webservices on the fly without having to restart the application, and recompiling the source would allow that.
Realizing a plug-in affordance is not too hard, at least for reasonably simple cases. The essential elements are:
A trait or abstract class defining the obligations of realizations of the plug-in.
A means to get the code for plug-ins onto the class-path. Alternatively, if you're familiar with working with classloaders, you can do it dynamically. I don't have much experience with that.
Once you have an instance of java.lang.Class[P <: PlugInType] it's trivial to get an instance, provided you don't need constructor parameters.
A protocol in the plug-in trait that allows the plug-in to, e.g., reserve a top-level URL path segment from which you derive a Scalatra route that covers all those paths. You then dispatch requests that match that leading path segment via the plug-in instance. All you have to do is make sure you don't let two plug-ins claim the same path, or, if you do, have some further means of resolving them. A minimal sketch of these elements follows this list.
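Here is that sketch in Java (the PlugIn interface, the registry, and the path protocol are all illustrative assumptions, not Scalatra or scalascriptengine APIs):

import java.util.HashMap;
import java.util.Map;

// 1) The obligations of a plug-in realization.
interface PlugIn {
    // 4) Each plug-in reserves one top-level URL path segment.
    String pathSegment();
    String handle(String remainingPath);
}

// 2) + 3) Load a plug-in class that is already on the classpath and
// instantiate it via its no-arg constructor.
class PlugInRegistry {
    private final Map<String, PlugIn> byPath = new HashMap<>();

    void register(String className) throws ReflectiveOperationException {
        Class<? extends PlugIn> cls =
                Class.forName(className).asSubclass(PlugIn.class);
        PlugIn plugIn = cls.getDeclaredConstructor().newInstance();
        // Refuse duplicate path claims rather than silently overriding.
        if (byPath.putIfAbsent(plugIn.pathSegment(), plugIn) != null) {
            throw new IllegalStateException(
                    "path already claimed: " + plugIn.pathSegment());
        }
    }

    // 4) Dispatch "/segment/rest/of/path" to the owning plug-in.
    String dispatch(String path) {
        String[] parts = path.split("/", 3); // "", segment, rest
        PlugIn plugIn = byPath.get(parts[1]);
        if (plugIn == null) {
            throw new IllegalArgumentException("no plug-in for: " + path);
        }
        return plugIn.handle(parts.length > 2 ? parts[2] : "");
    }
}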
Thanks to @RandallSchultz I found a solution that works.
override def init(context: ServletContext) {
  val sourceDir = new java.io.File("C:/Temp/HelpServlet.scala")
  val sse = ScalaScriptEngine.onChangeRefresh(sourceDir)
  sse.refresh
  println("*** - " + sse.compilationStatus.startTime + " " + sse.compilationStatus.stopTime)
  context.mount(new MyServlet, "/*")
  context.mount(sse.get("org.myproject.rest.HelpServlet"), "/help/*")
}
When I went to "/help/help" I got my help page as expected, so the new webservice was added.
By reading it from a file I was able to add it while the application was running.

Ember-CLI - How to access application classes

Previously when I developed ember application I used App as my global object and every class was stored in this big object.
Like this
window.App.MyModel = Em.Object.extend({});
And in the browser console, I was able to do
App.MyModel.create();
So it was really easy for me to access MyModel class.
Now I have started experimenting with Ember-CLI, and I don't have much experience with this kind of tool. I followed the documentation and created my model Service like this:
var Service = Ember.Object.extend({});
export default Service
But now, how do I access the Service class from the browser console?
The only way I found was:
App.__container__.resolve('model:service')
But I don't like it much. Is there a better way? Btw, can you please explain how export works? Or is there some source (documentation, article) where I can study it?
Thanks a lot for any response.
If you're aiming to have something available in most places throughout your application you'll typically want to register it on the container.
There are multiple ways to access the container, and you're correct in that the one you found was less than ideal. The running joke is "any time you directly access __container__ we need to add more underscores."
To directly control how things are registered on the container and injected into the container you typically want to use an initializer. Using initializers in ember-cli is super-easy, they're executed for you automatically.
Checking out the documentation you can see that you get access to the application's container as an argument which allows you to manipulate it in a safe manner.
Once you have access to the container you can use register and inject to make content easily available in particular locations. This was introduced here. One note, accessing things inside of the container from outside the context of your app (browser console) will require the usage of App.__container__ and that is the expected use pattern.
And the export you're running into is an ES6 module system construct, it's not Ember-specific. Playing with the ES6 module transpiler can give you a good sense of what goes in and what comes out in "we can do this today" type of JavaScript.
For ember 3.22, application classes can be accessed like so:
Ember.Namespace.NAMESPACES[1]._applicationInstances.values().next().value.lookup('service:state-events')
Note, you may need to modify the index in NAMESPACES[1] to be something other than 1. You can determine which namespace is your application when this returns true:
Ember.Namespace.NAMESPACES[1] instanceof Application
This approach is how ember-inspector accesses ember applications: https://github.com/emberjs/ember-inspector/blob/50db91b7bd26b12098cae774a307208fe0a47d75/ember_debug/main.js#L163-L168

(Java) How can I pass Schema-validated XML documents as parameters between distributed components (e.g. web services or sockets)?

Here is a description of the scenario; I would also appreciate any comments on the approach used.
The core of my application is a set of web services backed by a P2P database. One service accepts a simple XML-based record (I have designed a generic schema for it). The service processes this data (mainly creating keys based on certain criteria) and passes the original data along with the created keys to a listening SocketServer in one of the listening P2P nodes. This (key, data) pair is routed to the proper node, which stores the data (associated with the key as an ID) in an XML database.
A second service accepts a query document that is structured based on the same schema, but with optional values that would be used for searching and matching from the previously stored ones. So the second service would pass this query (with the proper keys) to the P2P part, get back the results and pass them back to the service client.
E.g. if the original record submitted to the first service was <attr1>value1</attr1> <attr2>value2</attr2> (an attribute list along with some other metadata mandated by the schema), then the second service should retrieve that record if the query received was <attr2>value2</attr2>.
(I could later think about using more complex XPath or XQuery queries, which the underlying XML database allows, instead of exact value matches, but that is not important at this stage. There is also a third service I am working on, but it depends on getting the first two into proper shape first.)
So my questions are:
1) What data type should I use as the parameters of the web services? How to utilize my schema for this usage? I was considering various XML binding frameworks (especially JAXB and SDO) for this but didn't know how to proceed.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The service would still accept documents of the main schema type, but with the inner attribute list based on a template; say template1 only requires attributes whose values are ints, while template2 requires a float and a string. The current JSP-based prototype creates such a template manually, as an XML document assembled by hand (<> tags dispersed in text), with no type checking whatsoever, so I thought I could do better!
3) Is it possible to generate a quick web app prototype for simple access to this system, again by using the schema (and templates) to edit the appropriate XML message structures? What I am looking for is for the (human) user to choose a template and then just "fill in the blanks" and submit; no need for any fancy look and feel.
4) Can I or how can I also use this XML message type for communicating across sockets?
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
I currently have a rudimentary implementation (from previous developers) that was meant for a subset of my current requirements (I am improving the services and adding new derived ones), but there was no schema or validation, and the data was passed all along as basic strings, giving weak typing and manual parsing that is difficult to update. The reason I want to move to stronger typing is to be able to introduce changes in the data schema that propagate through the whole system easily. Basically, I want the system to be as loosely coupled to the data format/schema as possible; the current prototype is so coupled to the data that I am finding it extremely difficult to change the data without breaking the system.
My initial investigation led me to consider JAXB, but it supports only static typing (it cannot create schemas/types dynamically at runtime and persist them for later use). So I came across SDO, which has both dynamic and static typing. The problem is that there is not enough community support and/or examples of this approach, so it seems risky: the examples for the Apache Tuscany and EclipseLink implementations are very scarce, and I could not find complete examples that are not 5+ years old (like this: http://www.ibm.com/developerworks/java/library/j-sdo/) and that also tackle the XML use case of SDO (most seem to focus on its relational usage).
This is my first time asking for programming help (here and elsewhere), so please bear with me. I searched a lot on the net but could not find anything useful beyond pieces here and there that did not add up.
Any comment or hint is really appreciated.
trfndr
EDIT
I forgot one thing: how would the search service get back the results? Since it is opening a client socket connection, there is no way to get back any results synchronously. The current implementation tackles this by having the service client open a listening socket on a random port and put this contact info in the query document. After the search web service sends the query to the P2P part, it finishes. The P2P part sends the results as a WS call to another service, which sends them back to the service client's socket. I don't like this approach much; is there a more elegant solution?
I lead the EclipseLink JAXB & SDO implementations and represent Oracle on those specifications, so hopefully I can help you out. This question is very similar to a talk I'm giving at JavaOne in September.
1) What data type should I use as the parameters of the web services? How to utilize my schema for this usage? I was considering various XML binding frameworks (especially JAXB and SDO) for this but didn't know how to proceed.
This depends on which web service framework you are using. JAXB is much easier to use with JAX-WS, and while JAXB is still easier to use with JAX-RS, SDO is a possible alternative.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The service would still accept documents of the main schema type, but with the inner attribute list based on a template; say template1 only requires attributes whose values are ints, while template2 requires a float and a string. The current JSP-based prototype creates such a template manually, as an XML document assembled by hand (<> tags dispersed in text), with no type checking whatsoever, so I thought I could do better!
I'm not 100% sure what you mean here, but the following may be helpful:
Using #XmlAnyElement to Build a Generic Message
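That approach keeps the message generic by letting JAXB fall back to raw DOM content for the variable part of the document. A minimal sketch, with an illustrative Message class (not taken verbatim from the linked article):

import java.util.List;
import javax.xml.bind.annotation.XmlAnyElement;
import javax.xml.bind.annotation.XmlRootElement;

// A generic envelope: the known parts of the schema are bound as usual,
// while the variable attribute list travels as arbitrary elements.
@XmlRootElement
public class Message {
    // lax = true lets JAXB bind elements it recognizes to mapped classes
    // and keep everything else as org.w3c.dom.Element instances.
    @XmlAnyElement(lax = true)
    public List<Object> payload;
}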
3) Is it possible to generate a quick web app prototype for simple access to this system, again by using the schema (and templates) to edit the appropriate XML message structures? What I am looking for is for the (human) user to choose a template and then just "fill in the blanks" and submit; no need for any fancy look and feel.
JAX-RS is a nice framework for creating quick prototypes. Below is an example I created:
Part 1 - The Database
Part 2 - Mapping the Database to Objects
Part 3 - Mapping the Objects to XML
Part 4 - The RESTful Service
Part 5 - The Client
4) Can I or how can I also use this XML message type for communicating across sockets?
I prefer frameworks like JAX-RS that communicate over the HTTP protocol.
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
My preference is to use an EJB session bean for the service. If you are interacting with a database then you can leverage the Java Transaction API (JTA) to manage your database transactions.
Part 4 - The RESTful Service
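For illustration, here is a minimal sketch of such a stateless session bean (the StoreService name and the persistence details are assumptions; with container-managed transactions each business method runs in a JTA transaction by default):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Stateless session bean: the container pools instances and wraps each
// business method in a JTA transaction unless configured otherwise.
@Stateless
public class StoreService {
    @PersistenceContext
    private EntityManager em;

    public void store(Object record) {
        // persist() joins the surrounding JTA transaction; the container
        // rolls it back automatically on a system exception.
        em.persist(record);
    }
}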
SDO
EclipseLink is the SDO 2.1.1 (JSR-235) reference implementation. We have some examples posted below. If you are looking how to do something specific, I will try to post a relevant example.
http://wiki.eclipse.org/EclipseLink/Examples/SDO
JAXB
JAXB is static. It is also more popular than SDO. Recognizing this, in EclipseLink we have implemented a dynamic JAXB feature. It gives you the dynamic aspect of SDO with a JAXB slant.
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/Dynamic
EDIT #1
Since you are dealing with JAX-WS and your model is almost entirely dynamic, I think you should skip the JAXB binding altogether. In the following link see the section "Switching Off Data Binding"
http://java.sun.com/developer/technicalArticles/xml/jaxrpcpatterns3/
This will give us the body of the message as a javax.xml.transform.Source object. We will need to process the XML based on the dynamic templates. SDO would be a good choice here. You can constantly add new types to the HelperContext using XML schemas.
helperContext.getXSDHelper().define(schema1, null);
helperContext.getXSDHelper().define(schema2, null);
You will be able to unmarshal the Source from the web service as follows:
XMLDocument doc = helperContext.getXMLHelper().load(source, null, null);
DataObject rootDataObject = doc.getRootObject();
String someValue = rootDataObject.getString("attr3/childAttr/anotherChildAttr");
You will also be able to use the XMLHelper to marshal your objects to XML when calling another service.
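For that direction, the same XMLHelper can serialize a DataObject back to XML. A minimal sketch; the root element URI and name here are placeholders:

// Marshal the DataObject to an XML string before calling the next service.
// save(DataObject, rootElementURI, rootElementName) returns the document
// as a String; an overload writes to an OutputStream instead.
String xml = helperContext.getXMLHelper()
        .save(rootDataObject, "http://example.com/ns", "record");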

Need advice on unit testing using mock objects

I just recently read about "mocking objects" for unit testing, and currently I'm having difficulties implementing this approach in my application. Please let me explain my problem.
I have a User model class, which is dependent on 2 data sources (database and facebook web service). The controller class simply use this User model as an interface to access data and it doesn't care about where the data came from.
Currently I have never done any unit testing of this User model because it depends on an external web service. But just a while ago I read about object mocking, and now I know that it is a common approach for unit testing a class that depends on external resources (like in my case).
Now I want to create a unit test for the User model, but then I encountered a design issue:
In order for the User model to use a mocked Facebook SDK, I have to inject this mocked Facebook SDK into the User object (probably using a setter). Therefore I can't construct the Facebook SDK inside the User object; I have to construct it outside the User object and inject the SDK into the User object.
The real client of my User model is the application's controller. Therefore I have to construct the Facebook SDK inside the controller and inject it to the user object. Well, this is a problem because I want my controller to be as clean as possible. I want my controller to be ignorant about the application's data source.
I'm not good at explaining something systematically, so you'll probably be sleeping before reading this last paragraph. But anyway, I want to ask if anyone here has ever encountered the same problem as mine. How do you solve this problem?
Regards,
Andree
P.S: I'm using Zend framework, PHP 5.3.
One way to solve this problem is to create two constructors in User: the default one instantiates the real-life data sources, the other one gets them as parameters. This way your controller can use the default constructor, while your tests use the parameterized one to pass in mock data sources.
Since you haven't specified your language, I'll show you an example in Java; hopefully this helps get the idea across:
class User {
private DataBase database;
private WebService webService;
// default constructor
public User() {
database = new OracleDataBase();
webService = new FacebookWebService();
}
// constructor for unit testing
public User(DataBase database, WebService webService) {
this.database = database;
this.webService = webService;
}
}
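A test can then pass mocks through the second constructor. A minimal sketch using JUnit 4 and Mockito; the DataBase and WebService types are the ones declared above, and any method you verify would have to exist on your actual interfaces:

import static org.mockito.Mockito.mock;

import org.junit.Test;

public class UserTest {
    @Test
    public void userUsesInjectedDataSources() {
        DataBase database = mock(DataBase.class);
        WebService webService = mock(WebService.class);

        // The parameterized constructor receives the mocks instead of the
        // real Oracle database and Facebook web service.
        User user = new User(database, webService);

        // Exercise the unit here, then verify the expected collaboration,
        // e.g. verify(webService).someCall(...) for a method your
        // WebService interface actually defines.
    }
}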
This isn't really a question about mocking, but about dependencies--and trying to unit test has forced the issue. It sounds like currently you create your User object within your Controller.
If the User and Controller have the same lifetime (they're created at the same time), then you can pass the User into the Controller's constructor, which is where you can make the substitution.
If there is a User object per call, then perhaps the User object should be passed in by the environment, or returned from some Context object.
If you make the Facebook SDK object(s) publicly accessible, your User object can still create them when it sets itself up, but you can replace them with a mock before you actually do anything with the User object.
[Test]
public void Test()
{
User u = new User(); // let's say that the object on User gets created in the ctor
u.FacebookObj = new DynamicMock(typeof(FacebookSDK)).MockInstance;
Assert.That(u.Method(), Does.Stuff, "u.Method didn't do stuff");
}
You don't say what language you are using, but I use Ruby and Mocha for mocking objects, and it's quite easy.
See http://mocha.rubyforge.org/.
Here, the fourth example shows that any call to Product.name from the unit under test is intercepted and the value 'stubbed_name' is returned.
I guess there are similar mechanisms in Java, etc.
Since you will be unit testing, your test class will play the role of the Controller, so the Controller should not be touched and can remain clean.
You are well underway to reinventing Dependency Injection, so it may be worthwhile to look at Spring or Guice to help you with the plumbing (if you are in Java land).
Your test can indeed inject the mocked Facebook SDK and your database service using setter or constructor arguments.
Your tests will now do what the controllers (and maybe views) would do, and verify that the proper routines are called in the SDKs and that the resources are properly obtained and disposed of.