Would this be a bad use case of Apache Camel? [closed]

We are implementing a Web Service using Apache Camel that has many (20-50) "direct:" routes calling Java methods. Basically every method has a route to it, whether it's for business-rule processing or DAO access. All the routes use from("direct:...").to("direct:..."), and never any other component.
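As a hypothetical sketch of that pattern (all route and bean names here are made up), the routing looks roughly like this, with every layer call behind its own direct: endpoint:

    import org.apache.camel.builder.RouteBuilder;

    public class EmployeeRoutes extends RouteBuilder {
        @Override
        public void configure() {
            from("direct:getEmployee")           // entry point called by the controller
                .to("direct:employeeRules");     // business rules live behind a route

            from("direct:employeeRules")
                .to("bean:employeeRulesBean")    // plain Java method in the registry
                .to("direct:employeeDao");       // DAO access is yet another route

            from("direct:employeeDao")
                .to("bean:employeeDaoBean");     // plain Java method doing the query
        }
    }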
While this may seem to decouple the system from the standard Controller->BO->DAO layering, it adds unnecessary bookkeeping of the Camel routes.
A better alternative would simply be to define Java interfaces for the Business Object and DAO layers, with an additional interface for any service request that is external to the system (like file:// or http://) and is a dependency inside the Business Objects or Controllers. The implementation of this additional interface would use Apache Camel to talk to those external services.
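A minimal sketch of that alternative (each type in its own file; Employee and all the other names are hypothetical):

    public interface EmployeeDao {              // DAO layer: plain interface
        Employee findById(long id);
    }

    public interface EmployeeService {          // Business Object layer: plain interface
        Employee getEmployee(long id);
    }

    public interface DocumentGateway {          // the only Camel-backed seam
        void send(String document);
    }

    public class CamelDocumentGateway implements DocumentGateway {
        private final org.apache.camel.ProducerTemplate template;

        public CamelDocumentGateway(org.apache.camel.ProducerTemplate template) {
            this.template = template;
        }

        @Override
        public void send(String document) {
            // Camel appears only here, at the edge of the system
            template.sendBody("file:/tmp/outbox", document);
        }
    }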
As a side note, I'm thinking about how to convince my current colleagues to see my point.
Thoughts?
tl;dr:
Should Apache Camel be used where there are only 1 or 2 applications present?

I have worked on applications where 20 systems were involved, with varying complexity, protocols, and patterns. I know of other places where 50+ systems are involved. The only limits are your design, performance, and so on.
Apache Camel is a middleware framework. Essentially, your business logic should not know how the data got to it or where it is to be delivered, only what it should deliver. Camel should take care of the rest.
By the way, does your middleware not talk to the external world? Why use only the direct component and not any others?
You can also hide the middleware by using bean integration, which gives you even more decoupling. See here: http://camel.apache.org/bean-integration.html
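For instance, a minimal sketch along the lines of that page (the endpoint URI and the class are hypothetical): the business code is a plain POJO, and Camel binds it to the endpoint declared in the annotation.

    import org.apache.camel.Consume;

    public class OrderService {

        // The method neither knows nor cares how the payload arrived;
        // Camel subscribes it to the endpoint.
        @Consume(uri = "jms:queue:orders")
        public void onOrder(String body) {
            // pure business logic; transport is Camel's concern
        }
    }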
It really depends on what it is you want to accomplish and what your requirements are.

Well, for your specific use case I would say you would probably have been better off with a regular Java implementation, with Camel used as glue between your system and the outside world. As Souciance explains in his answer, Camel really shines when you have to integrate with multiple systems, so you should at least keep it for communicating with the outside world.
However, having already implemented the system's internals using Camel, I would say it doesn't make much sense to put additional effort into replacing it with pure Java. The current implementation gives you the ability to make the system more robust, for instance by using a high-performance MQ as a replacement for the direct routes, which would make your system more resistant to failures and easier to decouple later on. Not to mention that having routes around your DAO objects makes it much easier to batch DB updates as your system's load grows.
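To make that robustness point concrete, here is a hypothetical sketch of such a swap: because both sides of a route are just endpoint URIs, moving a hop from direct: to a durable queue leaves the business code untouched.

    import org.apache.camel.builder.RouteBuilder;

    public class EmployeeDaoRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Was: from("direct:employeeDao").to("bean:employeeDaoBean");
            from("jms:queue:employeeDao")        // producers now send here instead
                .to("bean:employeeDaoBean");     // the DAO bean is unchanged
        }
    }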

Related

C++ MicroServices with desktop application [closed]

I have a desktop application and it's getting bigger and bigger, and I wonder if I can do something like microservices with a desktop application? For now I want the application to stay on the desktop. The application is written in C++.
With some preparation, I can extract some of the modules.
But is it possible, and does anybody have an idea of how to start with this?
A microservice-like architecture for the desktop is feasible. In fact, many OSes use services, which are often called daemons. For instance, the X Window System uses a client/server architecture. More recently, IDEs can use the Language Server Protocol to communicate with one or more language servers that provide syntax highlighting, code completion, compilation feedback, etc.
There are a few things to consider:
What communication protocols are already available on the target system? On many systems, you can use D-Bus (https://en.wikipedia.org/wiki/D-Bus).
What performance do you need? If performance is highly critical, you will need a highly optimized protocol for communicating between services, and startup times will need to be small. If good performance is enough, something similar to the Language Server Protocol will probably do and will be easier to write, maintain, and debug.
Can you handle the added complexity of splitting processes up? In a monolith, the whole thing is either up or down. With microservices, parts of the system that are running can attempt to call parts that are not. This situation must be handled (see the sketch after this list).
What are the seams in your application? Where can you split it up easily? Take a look at Domain-Driven Design. Identify potential independent modules, and refactor to confirm. If the refactoring makes sense, then split the module into its own process.
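On the third point, a sketch of what handling an absent service can look like (shown in Java for brevity; the idea is language-agnostic, and the port and class name are made up):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class SpellCheckClient {

        public boolean isServiceUp() {
            try (Socket socket = new Socket()) {
                // Fail fast instead of hanging when the process is not running
                socket.connect(new InetSocketAddress("127.0.0.1", 9123), 200);
                return true;
            } catch (IOException e) {
                // Degrade gracefully: one feature is unavailable, not the whole app
                return false;
            }
        }
    }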
Desktop applications and microservices are mutually exclusive, because desktop means one machine while microservices imply multiple machines (physical or virtual) that communicate over the network using a technology-agnostic protocol.
What you can do is use modularization. This is an alternative to microservices that can work in some scenarios like yours.

Unit test - best practice for a multilayer project [closed]

I have a project with different layers of abstraction, which can be split into the following groups:
Internal API:
- Data Access Layer (DAL)
- Business Access Layer (BAL)
- ...

Public API:
- Publicly accessible classes that have access to the internal data
- REST endpoints
- ...
Inside the Public API services I use the Internal APIs.
Do I need to write unit tests for all these layers, or only for the Internal API?
Are there any best practices?
Should I start writing my tests from the Internal API and move to the next layer, bottom-up?
The first thing I would say is "Yes." In other words, test everything.
For the internal API, you can write true unit tests, with mock objects for the DAL and each class tested in isolation. This isn't just good for verification's sake; it also gives you confidence that your code works and serves as documentation of the code. That confidence comes in handy when, for example, a REST API call fails later and you need to narrow down where the problem is.
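A minimal sketch of such a test with JUnit and Mockito (EmployeeService, EmployeeDao, and Employee are hypothetical names):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class EmployeeServiceTest {

        @Test
        public void returnsEmployeeFromDao() {
            // The DAL is mocked, so the internal API is tested in isolation
            EmployeeDao dao = mock(EmployeeDao.class);
            when(dao.findById(42L)).thenReturn(new Employee(42L, "Ada"));

            EmployeeService service = new EmployeeService(dao);

            assertEquals("Ada", service.getEmployee(42L).getName());
        }
    }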
You can test your DAL against an in-memory database for speed. I would call that an integration test, while others would call it a unit test; that's just semantics. But you have to do that too.
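For example, a sketch of such a DAL test against an in-memory H2 database (the schema and the JdbcEmployeeDao class are hypothetical):

    import static org.junit.Assert.assertEquals;

    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.junit.Test;

    public class EmployeeDaoIntegrationTest {

        @Test
        public void readsWhatWasWritten() throws Exception {
            // A fresh in-memory database: fast, and no external server needed
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test")) {
                conn.createStatement().execute(
                    "CREATE TABLE employee (id BIGINT PRIMARY KEY, name VARCHAR(100))");
                conn.createStatement().execute(
                    "INSERT INTO employee VALUES (42, 'Ada')");

                EmployeeDao dao = new JdbcEmployeeDao(conn);
                assertEquals("Ada", dao.findById(42L).getName());
            }
        }
    }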
The Internal API tests are by developers for developers.
The testers should help with anything public facing. You simply write integration tests for the API services and REST client tests to verify the common cases and the obvious exceptional cases.
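Such a REST test can be as small as this sketch using Java's built-in HttpClient (the URL is hypothetical, and a running instance of the service is assumed):

    import static org.junit.Assert.assertEquals;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.Test;

    public class EmployeeEndpointTest {

        @Test
        public void getEmployeeRespondsOk() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://localhost:8080/api/employees/42")).GET().build();

            // Exercises the whole stack: REST endpoint -> internal API -> DAL
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            assertEquals(200, response.statusCode());
        }
    }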
It sounds like a lot, and it kind of is. But if you take the time to get to know your tools and set up automation everywhere you can, you will be amazed how much you can accomplish pretty fast.
Hope this helps.

API best practice - generic vs ad hoc methods [closed]

I'm creating a REST API that will be used by a web page.
There are several types of data I should provide, and I'm wondering what the best practice would be:
Create one method that will return a complex object with all the needed data.
Pros: one call will be needed from the UI side to get all the data.
Cons: not a generic solution at all.
Create multiple autonomous methods.
Pros: generic enough to be used in the future by other components.
Cons: will require the UI to make several calls to the server.
Which one adheres more to best practices?
It ultimately depends on your environment, the data size, and the number of methods, but there are several reasons to go with the second option and only one to go with the first.
First option: One complex method
Reason to go with the first: the HTTP overhead of multiple requests.
Does the overhead exist? Of course, but is it really that high? HTTP is one of the lightest application-layer protocols, designed to have little overhead. Its simplicity and light headers are some of the main reasons for its success.
Second option: Multiple autonomous methods
Now, there are several reasons to go with the second option. Even when the data is large, believe me, it is still the better option. Let's discuss some aspects:
If the data-size is large
Breaking data transfer into smaller pieces is better.
HTTP is a best-effort protocol, and data failures are very common, especially in the internet environment - so common that they should be expected. The larger the data block, the greater the risk of having to re-request everything.
Quantity of methods: Maintainability, Reuse, Componentization, Learnability, Layering...
You said it yourself: a generic solution is easier for other components to use. The simpler and more concise the methods' responsibilities, the easier the methods are to understand and to reuse.
They are also easier to maintain and to learn: the more independent they are, the less one has to know to change them (or fix a bug!).
Taking REST into consideration here is important, but the reasons to break the components into smaller pieces really come from understanding the HTTP protocol and good programming/software engineering.
So, here's the thing: REST is great, but not every pattern in its purest form works in every situation. If efficiency is an issue, go the one-call route. Or supply both, if others will be consuming the API and might not need to pull down the full complex object every time.
I'd say REST does not care about data normalization. Having two ways to get at the same data is not going to hurt.
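As a sketch of the "supply both" idea with JAX-RS (all paths and the User, Notifications, and Dashboard types are hypothetical): granular resources for reuse, plus one composite resource for the page that wants a single round trip.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    @Path("/dashboard")
    public class DashboardResource {

        @GET
        @Path("/user")
        public User user() {                    // granular, reusable endpoint
            return loadUser();
        }

        @GET
        @Path("/notifications")
        public Notifications notifications() {  // granular, reusable endpoint
            return loadNotifications();
        }

        @GET
        @Path("/all")
        public Dashboard all() {                // composite: one round trip
            return new Dashboard(loadUser(), loadNotifications());
        }

        private User loadUser() { return new User(); }
        private Notifications loadNotifications() { return new Notifications(); }
    }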

Software Engineering for REST APIs [closed]

At my last project we designed an issue-documentation component (like a bug-reporting tool) that should integrate into different client applications (desktop applications, mobile apps, and also standalone into a web client). The focus was not on the application/UI itself, but on its functional capabilities as a service. Think of it like the Google Search API, which can also be integrated into your browser, as a widget, on your phone, and so on.
While defining the functional (use cases) and non-functional requirements, I had great trouble defining them without getting UI-specific, because we wanted a kind of service.
As a workaround, we simply defined our requirements to fit a standalone web application, plus the non-functional requirement that all function calls have to be made via a RESTful service API, hoping that we won't miss any function later when using this API in a desktop application, for example. Given that we don't actually want a web app in the first place, but a service, I am not very happy with this indirect way of requirements analysis, and I hope that our developers get the point.
So my question is: how are REST APIs or web services designed in a way that a developer knows what to do? Is there a UML use-case profile for web services, for example?
Do Not Forget: Use Cases Are Just "Half of the Whole Story"
You will also have non-functional requirements. You cannot capture every important detail with use cases.
Then Ask Yourself: Are Use Cases Right for Me?
Use cases are generally good for "interactive systems": systems that interact with users. They are good for capturing "functional" requirements.
Use cases are not good for some systems, though. Be open-minded: if, while writing your use cases, you see that they do not capture what you want or do not add any value, start with an alternative technique instead, such as a plain feature list.
Find The Root Cause
Ask yourself "Why I came in great trouble to define my reqirements without getting UI specific details"?
Pick Your Battles : Quality Scenarious + Arhitectural Factor Table
Identify your architecturally significiant requirements. One good way to define them is "Quality Scenarious":
Quality Scenarios are short statements of the form [stimulus]
[measurable response]
For example: When a new bug report entered to the Bug System[stimulus],it will sended to the bug owner mobile phone within 5 minute under X conditions.[measurable response]
Then can create a Architectural Factor Table with Quality Scenarious
Architectural Factor Table is a tool that help you to understand the
influence of factors, priorities and variability.
Here is a sample Factor Table from Craig Larman book: Applying UML and Patterns
"Guarantee that the software do what you want"...
So write your test first...Or create "executable" specifications...
And communicate...
Finally
There is no such thing as "Software Engineering for REST APIs". :-)
Looking at the definition of a software requirements specification, you can specify a RESTful API as a technical requirement from the perspective of the product's use, e.g. under Software interfaces (Overall description > Product perspective > Software interfaces). This is not a functional requirement; it is only a technical requirement of your project.
There is no UML Use Case profile for this, because use cases are intended to specify functional requirements. You can specify the interaction of users accessing your system through the RESTful API, covering the functional access and data exchange, in a regular use case.
All the characteristics required of the expected RESTful API should be specified as technical requirements (e.g. Specific requirements > External interface requirements). The developer then knows what to do by considering all the requirements of the application.

Practical Usage of N-Tier Architecture [closed]

I'm a .NET web developer for a small organization. We have some skilled developers here, but what we don't have is anyone who's worked for larger, more organized, software shops. We do all right, but I find myself wanting to structure my code better with little place to turn for advice.
It comes to this: at some point, someone in our organization decided we were going to use web services whenever we had to do any data access at all, no matter the case. Thus, our hardware architecture is organized so that this is the only way we can access our databases. This sounds fine in theory, but the problem is most of our apps turn out like this:
Spaghetti Mess Of Code In The aspx.cs -> Web Service That Does Nothing But Call a Stored Procedure
Beyond that there's not much separation. Whenever I start trying to research better structural practices, I wind up reading about things like dependency injection, dirty properties, and class factories; my head starts to swim, and I move on to something else in frustration.
Here's a basic example of my wonderings. Say I have to make a page to select employees from a list, edit them, and update the database. Is it better to have the web service return an Employee object on a get and accept an Employee object on an update? Or is it better to have the Employee object call the web service to populate itself?
In other words: Employee emp = svc.GetEmployee(42); vs Employee emp = new Employee(42);
The second example seems like better organization for updates (update the relevant properties and call emp.Update()), but the problem is: what if I need to get a list of employees? It would be inconsistent to do Employee emp = new Employee(id); for a single employee but svc.GetAllEmployees() for a list.
I feel like I'm rambling a bit so I'm going to cease trying to explain and hope someone understands my confusion. I appreciate any advice that anyone can offer. Thanks!
As with anything, there are a number of different approaches you can take. (So hopefully there will be a number of good and different answers here, because this is definitely an important question.)
One question you should probably ask yourself about your design is "how much logic will need to be shared between applications?" Going with the small GetEmployee example you gave, it sounds like you want to know where to put the models in your domain. Are these models used by multiple applications? Is business logic shared across applications?
If so, then you may want your domain models behind the web service. Maybe build up a rich domain behind those services, with its data access and external dependencies (remember that dependency injection thing?). The most important design decisions will need to be made in the domain behind the service layer, since that's the core of the whole system.
Then, of course, how do you access this logic? Again, there are a lot of options. My personal favorite design is a kind of request/response system that abstracts the service layer: something as capable as NServiceBus for a really disconnected asynchronous system, something as simple as Agatha for just abstracting out the actual service and putting the request/response logic in code, or maybe ServiceStack (something I've been meaning to try) or another project. Hell, you could just roll a plain old WCF or even SOAP service layer and nothing more. It's up to you.
At that point you're looking at a fairly procedural system at the service layer. This isn't a terrible thing. Think of the service layer like an MVC site. You send it a request, populated with some kind of incoming viewmodel, it does its domain stuff in all its object-oriented goodness, and returns a view in the form of some XML representation of an outgoing viewmodel. Now you have a repeating pattern. Your client-side applications are just great big views for your domain. The dumber they are, the more interchangeable and replaceable they are, the better.
This allows you to encapsulate various "business actions" in a unit of work at the service boundary. Given a request from a client application, either the whole thing succeeds or the whole thing fails. Wrap it up in good error handling and an application-level error/exception logger to give you all the details of the failed requests. (Imagine that every request can be serialized to a string and included in an error message. Now you have everything you need to recreate the error in a simple string, as opposed to asking users "what did you click on?" to try to recreate errors.)
If, instead, the back end doesn't really share anything between applications and each application is entirely its own distinct entity, then you don't really need to share all that logic behind the service layer, and it's entirely possible that you shouldn't try to create any kind of overlap. Is data access the only thing behind the service? What about things like filesystem access or external web service access?
If the only thing behind the service is the data access, then you can keep your models and data access repositories in your client applications like you seem to be accustomed to and just swap out your repository implementations with implementations that internally reference and access the service layer. (This would be the second option in your GetEmployee example.) Properly abstracted, direct access vs. service access repositories can be swapped out trivially depending on where the application needs to live.
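A sketch of that swap (shown in Java, with hypothetical names; the shape is the same in C#): the application codes against one repository interface, and the wiring decides whether it talks to the database directly or through the web service.

    import java.util.List;

    // The application only ever sees this interface
    public interface EmployeeRepository {
        Employee getEmployee(int id);
        List<Employee> getAllEmployees();
    }

    // Used where direct database access is allowed
    class SqlEmployeeRepository implements EmployeeRepository {
        public Employee getEmployee(int id) { /* run the stored procedure */ return null; }
        public List<Employee> getAllEmployees() { /* run the stored procedure */ return null; }
    }

    // Used where everything must go through the web service
    class ServiceEmployeeRepository implements EmployeeRepository {
        public Employee getEmployee(int id) { /* call svc.GetEmployee(id) */ return null; }
        public List<Employee> getAllEmployees() { /* call svc.GetAllEmployees() */ return null; }
    }

This also dissolves the consistency worry from the question: getEmployee and getAllEmployees sit side by side on the same repository, so fetching one employee looks just like fetching a list, regardless of which implementation is wired in.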
Of course, this leans a little towards a true persistence-ignorance approach, which can be dangerous. Performance implications need to be considered. Some piece of logic or unit of work on the back-end may hit the database several times to do several things. If this is happening across a service then that adds service overhead to each database call. So you'll want to address this on a case-by-case basis.
I guess I may be rambling at this point, so to get back to something concrete: it really comes down to asking yourself some questions about your domain. How persistence-ignorant can you afford to be? Do your applications share business domain logic? Do you need to access other non-database external dependencies behind the service only? There's no universal design that's always the right answer. You'll probably end up experimenting with various designs and settling on one that's right for your developers and your environment.