When working with web services, is it a good practice to have some sort of converter that converts the object from the web service to your domain object even if they have almost the exact properties? If it is not a good practice, why not?
I generally do this conversion in my code primarily because I prefer to completely abstract the web service and any evidence of it, essentially not wanting to use the objects exposed by the service in my domain. Tools such as AutoMapper are useful in this practice, though I often just do it manually. My preference is just to abstract the external web service behind an internal service or repository interface and have as little code actually depend on the external service as possible (even if the service is also one I wrote and is being used in the same enterprise environment).
Just think of it from the perspective of future re-factoring. If anything in the service ever changes, how much of the consuming application's code will be affected?
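To make the idea concrete, here is a rough sketch of what that abstraction and manual mapping can look like in C# (all of the type names below - CustomerDto, IExternalCustomerService, Customer, ICustomerRepository - are invented for the example, not taken from any real service):

// The DTO shape exposed by the external web service (in reality generated from the
// service's contract; the fields here are invented for illustration).
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// Stand-in for the generated service proxy.
public interface IExternalCustomerService
{
    CustomerDto GetCustomer(int id);
}

// Domain object used throughout the rest of the application.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// Internal abstraction; code that depends on this never sees the web service.
public interface ICustomerRepository
{
    Customer GetById(int id);
}

// The only class that talks to the external service and maps its DTOs into
// domain objects (AutoMapper could replace the manual mapping below).
public class WebServiceCustomerRepository : ICustomerRepository
{
    private readonly IExternalCustomerService _service;

    public WebServiceCustomerRepository(IExternalCustomerService service)
    {
        _service = service;
    }

    public Customer GetById(int id)
    {
        CustomerDto dto = _service.GetCustomer(id); // the web service call

        // Manual DTO-to-domain mapping.
        return new Customer { Id = dto.Id, Name = dto.Name, Email = dto.Email };
    }
}

If the service contract changes, only this repository and its mapping code have to be touched; everything else keeps depending on ICustomerRepository and Customer.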
I'm creating a web application and have decided to use a microservices approach. What is the best, or at least a common, way to organize access to the database from all of the web services (login, comments, etc.)? Is it a good idea to create a DAO web service and use only it to read/write values in the application's database, or should each web service have its own DAO layer?
Each microservice should be a full-fledged application with all necessary layers (which doesn't mean there cannot be shared code between microservices, but they have to run in separate processes).
Besides, it is often recommended that each microservice have its own database. See http://microservices.io/patterns/data/database-per-service.html and https://www.nginx.com/blog/microservices-at-netflix-architectural-best-practices/ for details. Therefore, I don't really see the point of a web service that would only act as a data access facade.
Microservices are great, but it is not good to start with too many microservices right away. If you have doubts about how to define the boundaries between microservices in your application, start with a monolith (all the while keeping the code clean, with good object-oriented design and well-designed layers and interfaces). When the application reaches a more mature state, you will more easily see the right places to split it into independently deployable services.
The key is to keep together things that should really be coupled. When we try to decouple everything from everything, we end up creating too many layers of interfaces, and this slows us down.
I think it's not a good approach.
Database operations are critical in any process, so they should live in a DAO layer inside the microservice itself. Why would you not want to implement that inside the service?
Using a separate DAO service, you lose control, and if you have to change the process logic you have to change the DAO service, affecting all the services that depend on it.
In my opinion it is not a good idea.
I think that using services to expose data from a database is ideal because of the flexibility it provides. Developing a REST service to expose some or all of your data lets that data be consumed directly in the UI via AJAX, or by other services that can process it and generate new information. These consumers do not need to implement a DAO and can be written in any language. While a REST service over your entire database is probably not a microservice, a case could be made for breaking it down: read-only services for Students, Professors and Classes exposed on the school web site(s), with separate Create, Update and Delete (CUD) services available only to the Registrar's office desktop applications.
For example, building a service that exposes a statistical value computed over the data protects the data from examination by a user or program that only needs the statistic, and does not require that consumer to implement an entire DAO for the statistic's underlying components. Full-featured databases like SQL Server or Oracle provide a lot of functionality that application developers can use, including complex queries (using indexes), statistics, and set operations on data.
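As a loose illustration of the read-only side of that idea (assuming ASP.NET Core; the Student and IStudentReader types are made up for the sketch):

using Microsoft.AspNetCore.Mvc;

// Hypothetical read-side types used only for this sketch.
public class Student
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IStudentReader
{
    Student Find(int id);
}

// Read-only REST endpoint for the "Students" example above.
[ApiController]
[Route("api/students")]
public class StudentsController : ControllerBase
{
    private readonly IStudentReader _students;

    public StudentsController(IStudentReader students)
    {
        _students = students;
    }

    // GET api/students/42 - consumable from the UI via AJAX or from other services,
    // in any language, without those consumers implementing a DAO.
    [HttpGet("{id}")]
    public ActionResult<Student> Get(int id)
    {
        var student = _students.Find(id);
        if (student == null)
        {
            return NotFound();
        }
        return Ok(student);
    }
}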
Having a database service is a completely valid pattern. In fact, it is one of the key examples in the Building Microservices book of where to start when extracting parts of a monolith into a microservice.
How to organize your code around such an idea is a different issue. Yes, from the DB client programmer's standpoint, having the same DAO layer in each DB client makes a lot of sense.
The DAO pattern may be suitable for binding your DB to the one programming language you use. But then you need to ask yourself why you are exposing your database as a web service at all, if every access to it will be mediated by the same DAO infrastructure. Or are you going to create a separate DAO binding for each client programming language?
If all database clients are going to be written in the same programming language, are you sure you really need to wrap your DB in a microservice? After all, the DB is usually already a remote service with a well-defined network protocol optimized to transfer data quickly and reliably. Why add HTTP on top of it? What do you expect to gain from that added complexity?
Another problem with the DAO approach is that the DAO structure does not necessarily follow the evolution of the web service. The web service may evolve in ways that do not break old clients, and different clients may use different features of the microservice; in that case the clients are not sharing the same DAO layer structure anyway.
Finally, make sure you are not doing RPC-style programming over web services, which does not make much sense: you would basically be throwing away one of the key advantages of microservices, the decoupling between service and client.
I need to design a webservice for a data collection service - and the requirement is that the webservice should not access the database directly.
The web service will talk to the data collection service, which in turn will access the database. Is HTTP the only option for the web service to talk to the data collection service? The data collection service is yet to be developed - is there a design that should be followed so that the web service will be able to talk to it? I want to make sure the data collection service is implemented properly so that I can do my part of the web service without too many hassles.
Think of the "web service" as any application in this case. The code which accesses the "data collection service" should work in any application, after all.
The "data collection service" is an external dependency to the application. As such, it should be abstracted behind a service facade. An interface, for example. (I'm going to use C# in my example, since I'm most familiar with that.) Something like this:
public interface IDataCollectionService
{
    void CollectData(SomeDataDTO data);
}
At this point, all code which uses the data collection service should use this interface type. Until the data collection service is developed, you can use a mock data collection service. Any in-memory implementation with just enough functionality to keep you going would work. For example:
public class MockDataCollectionService : IDataCollectionService
{
    public void CollectData(SomeDataDTO data)
    {
        // do nothing
    }
}
This implementation would have no side-effects, throw no errors, and just assume that everything in the external dependency worked as expected. You can edit/expand this mock to result in error conditions, etc.
(Note that I'm using the word "mock" in a more generic way than pure unit-testing terminology; this happens a lot in the industry, actually. For your unit tests themselves, you could use any mocking library to create an actual unit-test mock from the above interface.)
This is all just an example, and how you structure it is entirely up to you. The point is that, as an external dependency, all interactions with the data collection service are behind a facade. Then, when that service is developed, it won't really matter what protocols and standards it uses. When you create a new DataCollectionService class which implements the interface, all of the code for using those protocols and standards will be encapsulated within that class. The rest of the project won't know or care about the difference.
Then, when that implementation is ready, you can simply swap it in for the "mock" implementation and test the system without having changed any other code in your application. If, at a later time, it's decided that the protocols need to change again, the only thing you'd need to change (or create a new version of) is that one class which implements the interface.
Indeed, if you are ever granted permission to access the database directly, all that would need to change is, again, a new implementation of that interface. Essentially, whether or not you can access the database is just as immaterial as which protocols are being used. The architecture of the software abstracts it, so that the architecture of the infrastructure has as little impact as possible.
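To illustrate the swap, a real implementation might eventually look something like this (purely a sketch - it assumes the service ends up speaking HTTP/JSON, uses the System.Net.Http.Json extensions, and the api/collect endpoint name is invented):

using System.Net.Http;
using System.Net.Http.Json;

// The real implementation: the only class that knows the service speaks HTTP/JSON.
public class HttpDataCollectionService : IDataCollectionService
{
    private readonly HttpClient _http;

    public HttpDataCollectionService(HttpClient http)
    {
        _http = http;
    }

    public void CollectData(SomeDataDTO data)
    {
        // Hypothetical endpoint name; blocking here only to match the synchronous interface above.
        var response = _http.PostAsJsonAsync("api/collect", data).GetAwaiter().GetResult();
        response.EnsureSuccessStatusCode();
    }
}

Wherever the application obtains its IDataCollectionService (an IoC container, a factory, manual construction), it now hands out this class instead of MockDataCollectionService, and nothing else changes.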
You can create a mock object that mimics the data collection service, until the actual service is completed. The mock object can double as a testing tool while you create your unit tests.
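For example (a sketch assuming xUnit and reusing the IDataCollectionService interface and SomeDataDTO type from the answer above; the spy and handler classes are invented), the mock can be extended into a simple spy so a test can assert that the service was actually called:

using System.Collections.Generic;
using Xunit;

// A spy variant of the mock: it records calls so tests can assert on them.
public class SpyDataCollectionService : IDataCollectionService
{
    public List<SomeDataDTO> Collected { get; } = new List<SomeDataDTO>();

    public void CollectData(SomeDataDTO data) => Collected.Add(data);
}

// Hypothetical consumer, included only to make the test concrete.
public class DataIngestionHandler
{
    private readonly IDataCollectionService _service;

    public DataIngestionHandler(IDataCollectionService service) => _service = service;

    public void Handle(SomeDataDTO data) => _service.CollectData(data);
}

public class DataIngestionHandlerTests
{
    [Fact]
    public void Handle_ForwardsDataToCollectionService()
    {
        var spy = new SpyDataCollectionService();
        var handler = new DataIngestionHandler(spy);

        handler.Handle(new SomeDataDTO());

        Assert.Single(spy.Collected);
    }
}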
Interoperability comes to mind (MS/Java).
Also, with EJB you need to distribute the EJB interface; with a web service you get WSDL (I know there's an EJB extension for WSDL, but I'm not sure it's used).
Anything else?
EJB is mostly a programming model for how you implement callable business logic. Your code runs in a container which looks after management, clustering, transactions and security. Your component can be called by any number of different mechanisms, including local Java calls, RMI/IIOP for remote invocation, and also web services, so yes, your EJB can indeed have a WSDL and be callable from other, non-Java environments.
If you start instead from the point of view of having a WSDL, which will probably specify SOAP/HTTP, then you are free to implement it in many different technologies, and of course to invoke it via that specified protocol, which very many different clients can use. The big question is how easily you can deal with those quality-of-implementation issues - your chosen implementation environment may give you a lot of help or leave a lot to you.
Summary: you're not really comparing like with like. Web services are very much about the interface; EJB is very much about the implementation.
This question has been asked a few times on SO from what I found:
When should a web service not be used?
Web Service or DLL?
The answers helped, but they were both a little focused on a particular scenario. I wanted to get a more general view of this.
When should a Web Service be considered over a Shared Library (DLL) and vice versa?
Library Advantages:
Native code = higher performance
Simplest thing that could possibly work
No risk of centralized service going down and impacting all consumers
Service Advantages:
Everyone gets upgrades immediately and transparently (unless a versioned API is offered)
Consumers cannot decompile the code
Can scale service hardware separately
Technology agnostic. With a shared library, consumers must utilize a compatible technology.
More secure. The UI tier can call the service which sits behind a firewall instead of directly accessing the DB.
My thoughts on this:
A web service was designed for machine interop and to reach an audience easily by using HTTP as the means of transport.
A strong point is that by publishing the service you are opening its use to an audience that is potentially vast (over the web, or at least throughout the entire company) and/or largely outside of your control / influence / communication channels, and you either don't mind that or it is desired. Using the service is also much easier: clients simply have to have a network connection and consume it, which is not always so easily done with a library (though it can be done). The usage of the service is largely open - you are making it available to whoever feels they could use it, however they want to use it.
However, a web service is in general slower and depends on a network connection.
It is in general harder to test than a code library.
It may be harder to maintain; much of that depends on your maintenance and coding practices.
I would consider a web service if several of the above features are desired, or at least one of them is considered paramount, and the downsides are acceptable or a necessary evil.
What about a Shared Library?
What if you are far more in "control" of your environment, or want to be? You know who will be using the code (the interface isn't a problem to maintain), and you don't have to worry about interop. You are in a situation where you can easily achieve sharing without a lot of work or hoops to jump through.
Examples in my mind of when to use:
You have many applications under your control, all hosted on the same server or two, that will use the library.
A less good example: you have many applications, but they are hosted on a dozen or so servers. A web service may be a better choice.
You are not sure who will use your code or how, but you know it is of good value to many. Web service.
You are writing something only used by a limited set of applications, perhaps some helper functions. Library.
You are writing something highly specialized that is not suited for consumption by many, such as an API for your line-of-business application that no one else will ever use. Library.
All things being equal, it is easier to start with a shared library and turn it into a web service than to go the other way around.
There are many more but these are some of my thoughts on it...
Based on multiple sources...
Common Shared Library
Should provide a set of well-known operations that perform common tasks (e.g., string parsing, numerical manipulations, builders)
Should encapsulate common reusable code
Should have minimal dependencies on other libraries
Should provide stable interfaces
Services
Should provide reusable application-components
Should provide common business services (e.g., rate-of-return calculations, performance reports, or transaction history services)
May be used to connect existing software from disparate systems or exchange data between applications
Here are 5 options and reasons to use them.
Service
has persistent state
you need to release updates often
solves a major business problem and owns the data related to it
need security: users can't see your code, users can't access your storage
need an agnostic interface like REST (you can easily auto-generate shallow REST clients for client languages)
need to scale separately
Library
you simply need a collection of reusable code
needs to run on the client side
can't tolerate any downtime
can't tolerate even a few milliseconds of latency
simplest solution that could possibly work
need to ship code to the data (high throughput or map-reduce)
First provide a library, then a service if the need arises.
agile approach: you start with the simplest solution, then expand
needs might evolve and become more like the "Service" cases
Library that starts a local service.
many apps on the host need to connect to it and send some data to it
Neither
you can't seriously justify even the library case
business value is questionable
Ideally, if I want both sets of advantages, I would need a portable library with the agnostic interface glue, automatically updated, and either obfuscated (hard to decompile) or deployed in a secure in-house environment.
It may be possible to use both a web service and a library to make that viable.
a. What are the things I must consider?
b. I have several stored procedures being executed by the current application. If I create equivalent methods to execute these procedures, what would be the risks or challenges?
Architecturally, one thing you must consider in transforming a web application into a web service is that local access to methods and data is not the same as remote access. Remote access should be designed so that invocations are more coarse-grained and exchange more information at once.
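A small sketch of the difference, with invented types: the fine-grained interface below is fine for in-process calls but becomes chatty over the network, while the coarse-grained one returns everything the caller needs in a single invocation:

using System.Collections.Generic;

// Chatty, fine-grained interface: fine for in-process calls, but three network
// round-trips just to display one order when exposed remotely.
public interface IOrderQueriesFineGrained
{
    string GetCustomerName(int orderId);
    List<string> GetOrderItemNames(int orderId);
    decimal GetOrderTotal(int orderId);
}

// Coarse-grained DTO and interface better suited to a web service:
// one call returns everything the caller needs.
public class OrderSummaryDto
{
    public string CustomerName { get; set; }
    public List<string> ItemNames { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderQueriesCoarseGrained
{
    OrderSummaryDto GetOrderSummary(int orderId);
}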
Another thing you will need to think about is which serialization protocol you will use - for example, SOAP versus a REST-based protocol.
Also, think about security - the security considerations are different between a web application and a web service.
Finally, think about how others will know about your web service (or if they will at all).
One risk is ensuring that the code stays the same in both places.
What I mean by this is that there is a distinct possibility of code duplication in this situation, which means you may inadvertently forget to modify one of the places where a stored procedure is used (say, if you add a new parameter to the stored procedure call).
Then you also must consider security. For example, exposing a web service call that returns a list of users to the wild is probably not a good idea. You need to plan for how you're going to pass and receive authentication and authorization information.
Managing your code base, as Stephen said, is going to be a big challenge if you create equivalent methods. You're much better off extracting the methods into a new library that both the web application and the web service will use. Your web apps shouldn't have any data access code in them.
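As a rough sketch of what such a shared library might contain (assuming classic ADO.NET; the usp_GetActiveUsers procedure and UserName column are made up), both the web application and the web service would reference this assembly rather than calling the stored procedure themselves:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Lives in the shared library: the single place that knows about the stored procedure.
public class UserRepository
{
    private readonly string _connectionString;

    public UserRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public List<string> GetActiveUserNames()
    {
        var names = new List<string>();

        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("usp_GetActiveUsers", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    names.Add(reader.GetString(reader.GetOrdinal("UserName")));
                }
            }
        }

        return names;
    }
}

If a parameter is later added to the stored procedure, only this one class changes, which addresses the duplication risk mentioned above.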
With a web service you need to consider your clients: who is going to access your data, and from where. If, for example, it's a .NET Windows client on the same network or machine, a TCP binding might be best. If you need to support older .NET Framework clients, or even Java clients, you need to be careful about which technology you use.
You will also want to choose between WCF and ASMX, which the previous paragraph should help answer.
It seems to me that the greatest challenge will be that you are obviously tempted to do this. I think you're making a mistake.
Your web application, and the web service you propose, have different requirements. By "transforming" the application into the service, you will burden the service with the requirements of the application.
Here's a "thought experiment": what if you were to write the service from scratch, ignoring the application. How similar would the service and application be? If they would wind up alike, then transformation would make sense. Otherwise, not so much.