I'm developing an internal web service for a news agency, connected to a MySQL database where all the authentication and news data is stored. The purpose is to generate an XML version of an article, or a list of articles, depending on the client's subscription, so it can be displayed by a mobile frontend I'm building with JavaServer Faces.
So far, I have generated and annotated JPA entities from my database using Eclipse, and created a Stateless Session Bean so it can be published as a web service. All of this works fine, so it's time to take it to the next level, but I don't know where to start.
I managed to set up a custom authentication provider in WebLogic backed by my database, but I'm not sure whether that's useful or where to go next.
I also had a look at OpenAM, but I figured there should be something native to either JAX-WS or WebLogic.
How should I approach this? The requirements, as far as I can see, are:
One-time authentication.
Username/password stored in a MySQL table.
Authentication data provided within the SOAP message? (The client would log in through the JSF frontend, which would send the credentials to the web service to check whether they're valid; a rough sketch of what I mean is below.)
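Something along these lines is what I'm picturing for the last point: a JAX-WS handler on the service side that reads the credentials from a custom SOAP header and checks them against the users table. Everything here (class name, header namespace, JDBC URL, table and column names) is made up for illustration, not working code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Set;
    import javax.xml.namespace.QName;
    import javax.xml.soap.SOAPHeader;
    import javax.xml.ws.handler.MessageContext;
    import javax.xml.ws.handler.soap.SOAPHandler;
    import javax.xml.ws.handler.soap.SOAPMessageContext;

    // Hypothetical JAX-WS handler that pulls username/password out of a custom SOAP
    // header and checks them against a MySQL table before the request reaches the EJB.
    public class DbAuthHandler implements SOAPHandler<SOAPMessageContext> {

        @Override
        public boolean handleMessage(SOAPMessageContext ctx) {
            boolean inbound = !((Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY));
            if (!inbound) {
                return true; // only check incoming requests
            }
            try {
                SOAPHeader header = ctx.getMessage().getSOAPHeader();
                // Placeholder header elements: <auth:username> and <auth:password>
                String user = header.getElementsByTagNameNS("http://example.org/auth", "username")
                                    .item(0).getTextContent();
                String pass = header.getElementsByTagNameNS("http://example.org/auth", "password")
                                    .item(0).getTextContent();
                return isValid(user, pass);
            } catch (Exception e) {
                return false; // reject the call if anything is missing
            }
        }

        private boolean isValid(String user, String pass) throws Exception {
            // Placeholder JDBC URL and table/column names.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/news", "app", "secret");
                 PreparedStatement ps = con.prepareStatement(
                    "SELECT 1 FROM users WHERE username = ? AND password_hash = SHA2(?, 256)")) {
                ps.setString(1, user);
                ps.setString(2, pass);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }

        @Override
        public boolean handleFault(SOAPMessageContext ctx) { return true; }

        @Override
        public void close(MessageContext ctx) { }

        @Override
        public Set<QName> getHeaders() { return null; }
    }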
Thanks!!!
P.S.: I did Java a long time ago and have been "disconnected" from the latest technologies and methodologies, so although my question is quite specific, please let me know if you think there's a better way to accomplish what I've done so far.
I am currently running WSO2 Analytics on a Windows server, but I want to implement the analysis part so that a client can connect to the server and do some of the processing, such as visualization, on its own rather than all the processing being done on the server. Is this possible on the WSO2 platform?
Thanks
You can set up whichever database you want (see the documentation). For production use I wouldn't even recommend the bundled H2 database. WSO2 Analytics supports a number of databases by default; I believe Oracle is one of them.
As stated in the comments, you can create a client or service that reads the data from the database and displays it in its own way.
The most challenging part for me is: how does the client use the information from the database?
That part is up to you (and outside the scope of this question). You asked whether your client can access the analytics (result) data: yes, it can. How to do that depends on what the client is.
For example, at one of our clients they are building data APIs that are directly consumable by different frontend libraries to create nicer charts and reports.
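To make that concrete with a minimal sketch (the JDBC URL, credentials, and table/column names below are invented for illustration, assuming the analytics results are written to an RDBMS table the client is allowed to read):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AnalyticsClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details and table name; adjust to the datasource
            // you configured for the analytics server.
            String url = "jdbc:mysql://analytics-db:3306/analytics";
            try (Connection con = DriverManager.getConnection(url, "reader", "secret");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT event_time, event_count FROM summarized_events ORDER BY event_time")) {
                while (rs.next()) {
                    // Feed these values into whatever visualization the client uses.
                    System.out.println(rs.getTimestamp("event_time") + " -> " + rs.getLong("event_count"));
                }
            }
        }
    }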
I am working with WSO2 ESB 4.0.3 on Mac OS X Lion (10.7.4).
I would like to know the best practices for development with WSO2 ESB 4.0.3.
Currently I am using the Data Services feature in it. An existing Tomcat application, which we are trying to port to WSO2 ESB, runs the SQL query in 2-3 seconds, whereas WSO2 ESB 4.0.3 with the Data Services feature takes around 16-17 seconds.
I would be thankful if somebody could let me know best practices for WSO2, in particular for XSLT transformation.
Hoping for an answer.
thanks
Hi Prabath
Here is how my environment is
I am using WSO2 ESB 4.0.3 with Data Services Feature 3.2.2. A proxy service fronts the data service. Data sources are defined as Carbon data sources in datasources.properties.
I tried running the same service in WSO2 Data Services Server 2.6.3 and the performance is comparable to the existing Tomcat application, but ESB 4.0.3 with Data Services Feature 3.2.2 takes 8 times longer than the Tomcat application. So it looks like XSLT is not the issue, as I thought earlier.
I have all the error handling and input validation in the proxy service that calls this data service.
I also tried switching the transport to local, but I see the same performance issue. I also have to make sure the format of the forwarded XML is SOAP 1.2 in the endpoint definition, otherwise the proxy service does not forward over the local transport.
Can you please suggest how I can use WSO2 ESB with Data Services Feature 3.2.2 and get comparable performance?
Help really appreciated.
thanks
Abhijit
Hi Prabath
Thanks for reply.
The proxy service validation and transformation are not the problem. Looking at the logs, the data service deployed in the ESB with the Data Services feature is taking 8 times longer than the Tomcat application. So I believe it is the Data Services feature that is the problem, not the proxy service.
Even if we removed the proxy service, where would we do the input validation and error handling?
Please let me know.
thanks
Abhijit
Abhijit,
I'm not quite clear whether this problem is about executing SQL using the dbReport/dbLookup mediators versus doing the same thing with the Data Services features installed in the ESB, or about transforming responses using XSLT at the ESB layer versus doing it at the DSS layer.
If it's the former, then you should be able to use the db mediator pair (namely dbLookup and dbReport) effectively to execute simple SQL queries such as SELECT, INSERT, UPDATE, DELETE, etc. However, those mediators are not recommended for more complex queries, such as stored procedures with OUT and INOUT parameters, as WSO2 DSS is specifically designed to serve that sort of complex query. Using data services, though, comes at the cost of network latency, because you're invoking a data service endpoint over the network, which adds to the end-to-end time taken to get your task done. That said, if you're using the Data Services features installed in the WSO2 ESB, you always have the option of using the "local" transport instead of http/https, which makes an in-JVM call and therefore does not dispatch the request over the network.
If it's the latter, meaning the XSLT transformations, I believe there are no hard and fast rules here; it depends entirely on your requirements and use case. For example, if you're only using WSO2 DSS and want a message transformed into a particular format expected by the client side, it's enough to do it at the WSO2 DSS layer, because dispatching it to the ESB only for the sake of the XSLT transformation would add unwanted overhead to the end-to-end completion time of your task. On the other hand, if you're doing this as part of a configuration flow on the ESB side, then it's perfectly fine to use something like the XSLT mediator inside the flow itself.
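For what it's worth, whichever layer performs it, the transformation step itself is ordinary XSLT. Conceptually it boils down to something like the following standalone Java sketch (the file names are placeholders; in the ESB you would configure this declaratively on the XSLT mediator rather than code it by hand):

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import java.io.File;
    import java.io.StringWriter;

    public class XsltSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder file names; substitute your stylesheet and the DS response.
            TransformerFactory factory = TransformerFactory.newInstance();
            Transformer transformer = factory.newTransformer(
                    new StreamSource(new File("response-to-client-format.xsl")));

            StringWriter out = new StringWriter();
            transformer.transform(new StreamSource(new File("ds-response.xml")),
                                  new StreamResult(out));
            System.out.println(out);
        }
    }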
Hope this helps!
Regards.
Prabath
I hope Prabath has already answered your question. The key point from his answer bears repeating: the db mediators are fine for simple queries, WSO2 DSS is the right tool for complex ones (stored procedures with OUT/INOUT parameters and so on), and with the Data Services features installed in the ESB you can use the local transport to make an in-JVM call and avoid the network hop.
I am new to developing web services using Java. I have an academic project where I need to do dynamic service composition. For that, I can't directly create a client for one particular service, because then that client would only ever call that service. The client needs to search various web services, select any one of them at run time, and call that service at run time.
I was able to develop a web service (JAX-WS) using Eclipse (Indigo); I also created the client for that web service and everything works fine. My problem is that while creating the client I am hard-coding it to call that particular web service only (since I am creating the client from the service's WSDL file). However, I actually need to call any one of the discovered services, and for that I need to publish the services somewhere, discover them, and then call them.
I tried publishing the service to jUDDI v3, but I could only publish the sample service supplied with it. When I try to publish a service I created myself, it does not show up in the group of published services.
Is there any other UDDI server I could install on my local machine to publish and discover services? Also, I was not able to figure out how to create a client that can adapt at run time to call any one of the discovered services.
Kindly provide the necessary steps and code.
Thanks
You can use jUDDI (http://juddi.apache.org/).
jUDDI is based on UDDI v2.0 and v3.0.
There you can publish as well as discover your web services.
For integration, you have to write an application that integrates with jUDDI.
But I think that for your academic project and your purpose, jUDDI is the best fit. :)
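One more thing that may help with the "call any discovered service at run time" part: once you have looked up a service's endpoint address (and know the operation you want to invoke), you don't need a stub generated from one fixed WSDL. The JAX-WS Dispatch API lets you send a raw payload to whichever endpoint was selected at run time. A rough sketch, where the QNames, URL, and payload are placeholders rather than anything from your project:

    import java.io.StringReader;
    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;
    import javax.xml.ws.soap.SOAPBinding;

    public class DynamicInvoker {
        public static void main(String[] args) {
            // Placeholder values; in practice these come from your UDDI lookup.
            String endpoint = "http://localhost:8080/SomeService";
            QName serviceName = new QName("http://example.org/", "SomeService");
            QName portName = new QName("http://example.org/", "SomePort");

            // Build a Service without a generated stub and attach the discovered endpoint.
            Service service = Service.create(serviceName);
            service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING, endpoint);

            // Dispatch sends a raw SOAP payload instead of calling typed stub methods.
            Dispatch<Source> dispatch =
                    service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

            String request =
                    "<ns:sayHello xmlns:ns=\"http://example.org/\"><name>world</name></ns:sayHello>";
            Source response = dispatch.invoke(new StreamSource(new StringReader(request)));
            // Process the response Source as needed (e.g. transform it to a String or DOM).
        }
    }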
jUDDI has a boatload of examples in the source code trunk; you may want to check them out: http://svn.apache.org/repos/asf/juddi/trunk/juddi-examples/. It's difficult to guess what the problem is from the little information you've provided, so consider contacting the jUDDI team for further assistance. There's also additional documentation for working with UDDI in the jUDDI user's guide, which is on the jUDDI web site.
You cannot publish to jUDDI directly; you also need to create publisher entities on the jUDDI server. You'll find the Rename4Sales and Rename4Marketing examples in the 'Classes' folder of the standalone server's juddi application. Use those XML files as a basis to create your own entity. You also need to configure the server's login credentials.
I suggest you follow the tutorials on the jUDDI blog.
I have a webapp running on Tomcat 7.0.27 that manages a large RDF/ontology model with Jena, and what I want to do is provide a SPARQL endpoint so that clients can query this model. Currently there's a SOAP web service where a SPARQL query can be embedded in a (SOAP) message, which is a legacy implementation I'm supposed to modernize.
How does one go about providing a SPARQL endpoint? It seems like an empty buzzword to me. What's the difference between a (SOAP) web service and a SPARQL endpoint? I've been reading about Joseki and ARQ, which apparently (in combination?) provide SPARQL endpoint functionality, but I'm not sure whether I need them, since most people talking about this on the web are using older Tomcat versions (5/6).
Can somebody explain to me how to provide a SPARQL endpoint or nudge me in the right direction in terms of further resources?
Tomcat is just a servlet container. It runs web applications. A SPARQL endpoint is a particular kind of web application that you can run in Tomcat.
Fuseki (the successor to Joseki, and, like Tomcat, a project of the Apache Software Foundation) is the most popular choice.
You say that your RDF model is “large”. Depending on how large it is (that is, does it comfortably fit in memory or not?) you may need a persistent RDF store as well, such as Apache TDB (which is designed to work with Fuseki) or OpenLink Virtuoso (which is its own webserver, so you wouldn't use it together with Tomcat and Fuseki but as a standalone server).
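To make the "endpoint" idea concrete: once something like Fuseki is serving your model over HTTP, any client can query it with SPARQL over plain HTTP, for example using Jena's ARQ. A minimal sketch, assuming a locally running Fuseki instance (the dataset URL and query are made up):

    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.ResultSet;
    import com.hp.hpl.jena.query.ResultSetFormatter;

    public class SparqlClientDemo {
        public static void main(String[] args) {
            // Hypothetical Fuseki dataset URL; substitute whatever you configure.
            String endpoint = "http://localhost:3030/dataset/query";
            String query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";

            // ARQ sends the query over HTTP to the endpoint and parses the result set.
            QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query);
            try {
                ResultSet results = qe.execSelect();
                ResultSetFormatter.out(System.out, results);
            } finally {
                qe.close();
            }
        }
    }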
I am familiar with SOAP web services and have used the PUT/GET/POST verbs in REST web services. Somewhere I read that a REST web service can return a status code if something goes wrong, but can it return twice?
By that I mean: suppose your REST web service is querying a database and doing a lazy load, so it is taking a while. You intend to return an array of values from the database to the client that called the REST web service. But while the REST web service is working on the database query, can it return a string that says "Query is 10% complete, please wait" or something like that? Can the REST web service call another web service that somehow communicates this information back to the client?
I doubt this is possible, otherwise I would have seen it, but I ask anyway.
The target platform is Visual Studio 2010 Professional with C# and MS SQL Server 2008.
You could look at COMET, which according to Wikipedia:
...is a programming technique that enables web servers to send data to the client without having any need for the client to request it. It allows creation of event-driven web applications which are hosted in the browser.
There are a number of articles on the web about doing this, plus a couple of frameworks on SourceForge and GitHub. However, this is not trivial. I know it is possible with REST because a previous employer of mine has several real-time feeds based on RESTful endpoints using COMET for push.
See here:
http://www.aaronlerch.com/blog/2007/07/08/creating-comet-applications-with-aspnet/
http://sourceforge.net/projects/emergetk/
https://github.com/Oyatel/CometD.NET
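Your target is C#/.NET, so the links above are the right starting point; purely to illustrate the shape of the technique, here is a minimal long-poll sketch in Java (Servlet 3.0 async), since the server simply holds the request open until it has a progress update to push. The URL pattern, timing, and message are invented:

    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical long-poll endpoint: the client requests /progress and the server
    // holds the connection open until it has a status update to send back.
    @WebServlet(urlPatterns = "/progress", asyncSupported = true)
    public class ProgressServlet extends HttpServlet {
        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            final AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30_000);
            // Simulate "query is 10% complete" arriving a little later from the worker.
            scheduler.schedule(() -> {
                try {
                    ctx.getResponse().setContentType("text/plain");
                    ctx.getResponse().getWriter().write("Query is 10% complete, please wait");
                } catch (IOException ignored) {
                } finally {
                    ctx.complete();
                }
            }, 2, TimeUnit.SECONDS);
        }
    }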