In SOA (Service Oriented Architecture), do individual services run as separate servers? - web-services

Big banks follow Service Oriented Architecture for their functioning.
They may have more than 50 services, so do they run individual services as separate servers, or do they group services?
Can anyone explain this in detail?

In SOA, each service should be able to serve requests independently and hence could be hosted on a separate server.
But all these servers should communicate with each other internally, so that the system as a whole is aware of the services offered by each server and the outside world can hit a single endpoint when requesting a service. Internally, a routing module identifies the server that offers the particular service and serves the request.
It is also possible for more than one server to serve the same kind of request if the expected load is high.
Also, the term server could mean a runtime, something like a JVM if the service is Java based, or it could be a machine too.
According to Wikipedia:
Every computer can run any number of services, and each service is
built in a way that ensures that the service can exchange information
with any other service in the network without human interaction and
without the need to make changes to the underlying program itself.
Normally, services of a similar nature, or services that share the same code base, database server, or application server, are grouped together, while a high-volume or long-running service could be served separately to speed up the system.
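To make the routing idea above concrete, here is a minimal sketch (not from any particular product; the service names and addresses are made up) of an in-memory routing module that maps a service name to the servers hosting it and picks one in round-robin fashion, so that several servers can share the load for one service:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.atomic.AtomicInteger;

    // Minimal sketch of the "routing module" described above: the single public
    // endpoint looks up which internal servers offer a service and picks one.
    public class ServiceRouter {

        // Hypothetical registry: service name -> servers hosting that service.
        // A service under heavy load can simply be listed on more servers.
        private final Map<String, List<String>> registry = Map.of(
                "accounts", List.of("http://10.0.0.11:8080", "http://10.0.0.12:8080"),
                "payments", List.of("http://10.0.0.21:8080"));

        private final AtomicInteger counter = new AtomicInteger();

        // Returns the server that should handle the next request for the service.
        public String route(String serviceName) {
            List<String> servers = registry.get(serviceName);
            if (servers == null || servers.isEmpty()) {
                throw new IllegalArgumentException("Unknown service: " + serviceName);
            }
            // Simple round-robin so more than one server can serve the same service.
            int index = Math.floorMod(counter.getAndIncrement(), servers.size());
            return servers.get(index);
        }

        public static void main(String[] args) {
            ServiceRouter router = new ServiceRouter();
            System.out.println(router.route("accounts")); // one of the two accounts servers
            System.out.println(router.route("accounts")); // the other one
        }
    }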

The whole purpose of Service Oriented Architecture (SOA) is to give each module (exposed as a service) the freedom to be deployed, implemented, and expanded while affecting other modules as little as possible. So these services could all be hosted on a single server on different ports, or each could be on a different server.
Usually at big banks there is a team owning each service. Each service is deployed on a different server; in fact, each service may be deployed on many servers for scalability and fault tolerance.
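As a toy illustration of the "single server, different ports" option (a sketch only; the service names and ports are invented, and in a real deployment each service would usually be its own process), the JDK's built-in com.sun.net.httpserver can start two tiny services on different ports of the same machine:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // Sketch only: two independent "services" listening on different ports of the
    // same machine. In a real SOA deployment each would typically be a separate
    // process (or a separate machine), but the addressing idea is the same.
    public class TwoServicesOneServer {

        static void start(int port, String name, String reply) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/" + name, exchange -> {
                byte[] body = reply.getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }

        public static void main(String[] args) throws Exception {
            start(8081, "accounts", "accounts service");  // http://localhost:8081/accounts
            start(8082, "payments", "payments service");  // http://localhost:8082/payments
        }
    }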

Usually the services are hosted on an Enterprise Service Bus (ESB), a component which publishes the services to all information systems of the organization and also to external B2B customers (via a B2B gateway).
The services hosted on the ESB may utilize services provided by backend systems. These backend services are considered private and are only consumed via the ESB. This approach eliminates the spaghetti mess that results when everybody integrates directly with everybody else.
Most of the ESB systems I have come across were highly available solutions with an HA database, a cluster of application servers, and a load balancer, together forming a platform that ensures the stability and performance of the services.
The number of services in an enterprise can be very large; I have been involved in projects with hundreds of services, and the largest corporations can run thousands of services.
I recommend checking Wikipedia for more about ESBs.
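As a small illustration of the private-backend point (the hostnames here are made up), a consumer always calls the service as published on the ESB, never the backend system behind it:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Sketch: consumers only know the ESB endpoint; the backend system behind it
    // stays private and can be replaced without the consumers noticing.
    public class EsbConsumer {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://esb.example.com/customer-service/v1/customers/42"))
                    .GET()
                    .build();
            // The equivalent backend URL (e.g. an internal core-banking host) is
            // never called directly by consumers.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }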

Related

Microservice Architecture - independent server per service, or on the same server?

I am designing the architecture for a piece of software and am in the process of creating microservices related to a big component.
I can either create each microservice on a different server or put them on the same server. Is it good to create microservice components on the same server instance, or should I create the different microservices on different servers?
I think it is good to have a different server for bigger components, although it will obviously increase the cost.
Can anyone please let me know your thoughts on this?
Thanks
I recommend a combined approach: services are containerized on a shared server, which provides some level of isolation between them, while multiple servers are used to increase availability and scale horizontally.
In my experience this achieves the lowest cost and highest availability.
A good container orchestration system like Kubernetes abstracts this away and combines all servers into one virtual cluster, which simplifies the management of the whole infrastructure. In addition, it provides some useful services that benefit this type of architecture, such as managing the lifecycle of individual services, load balancing, and moving services between nodes in case of hardware failure.

Seeking advice on the proper approach to developing Netflix Eureka-discoverable Spring Boot services with minimal overhead

We are running a Spring Boot-based environment with about 15 microservices and a Zuul edge gateway registered with Eureka. Currently, I have set up all microservices to call other microservices through the Zuul gateway (e.g. if serviceA needs to call serviceB, the URL configuration property would be serviceB.baseUrl=http://zuul.mydomain.com:7001 where zuul.mydomain.com is our development server on AWS with all other microservices running behind it). Zuul in turn proxies to the microservices via Eureka registry lookups.
One benefit of doing it this way is that a developer working locally on his machine just needs to run his own service; all dependencies on other services are reachable through the Zuul gateway on AWS (and in our ecosystem there are a lot of such cross-service dependencies).
Now, I would really love to leverage the full potential of Eureka / Ribbon and make calls directly to a peer microservice via its service name and a @LoadBalanced RestTemplate, but I find that this would impose quite a lot on the developer, who would have to recreate an entire ecosystem on his machine. At a minimum, he would have to run Eureka, his own service, and any other services that his service depends on. This makes the barrier to entry for development unnecessarily high.
I did consider making the developer's local instance register with our Eureka service on AWS, but the problem is that all services on AWS are registered using the EC2 instance's private IP, which is basically unreachable from the developer's machine. If I force the service to register using its public IP, it would mean that I have to use up more of our Elastic IP allocation for each service, or change the IP every time the EC2 instance gets rebooted.
I could run a local Eureka + microservices environment on the local network, but that means I need to create one such environment for every office we operate out of, and that just means more overhead. In addition, it would probably mean that developer A may be calling developer B's half-done-not-quite-there-yet version of a dependency service, which just confuses the heck out of everyone if a problem occurs (the services that get deployed to our AWS environment at least go through a code review first before being deployed).
If anybody has figured out a way to simplify a developer's setup while still leveraging the peer-to-peer service invocation possibilities of Feign / @LoadBalanced RestTemplate clients, I would love some pointers in the right direction.
I have confirmed that I can accomplish what I want (which is to be able to do local development with Ribbon-enabled RestTemplate clients without having to run Eureka) by:
Forcing Ribbon not to use Eureka with the following property: ribbon.eureka.enabled=false
Manually providing Ribbon with the servers to point to, using a property like the following: servicename.ribbon.listOfServers=test.service.com:8080
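For reference, here is a minimal sketch of the client side this enables, assuming Spring Cloud Netflix (Ribbon) is on the classpath; the service name and endpoint path are placeholders. With the two properties above, a @LoadBalanced RestTemplate resolves the logical service name against Ribbon's static server list instead of Eureka:

    import org.springframework.boot.CommandLineRunner;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.loadbalancer.LoadBalanced;
    import org.springframework.context.annotation.Bean;
    import org.springframework.web.client.RestTemplate;

    @SpringBootApplication
    public class ClientApplication {

        // @LoadBalanced makes Spring Cloud resolve the host part of the URL as a
        // logical service name (via Ribbon here).
        @Bean
        @LoadBalanced
        public RestTemplate restTemplate() {
            return new RestTemplate();
        }

        // "servicename" matches the prefix used in servicename.ribbon.listOfServers,
        // so with ribbon.eureka.enabled=false this call goes straight to
        // test.service.com:8080 without any Eureka lookup.
        @Bean
        public CommandLineRunner demo(RestTemplate restTemplate) {
            return args -> System.out.println(
                    restTemplate.getForObject("http://servicename/some/endpoint", String.class));
        }

        public static void main(String[] args) {
            SpringApplication.run(ClientApplication.class, args);
        }
    }

The same client code works unchanged on AWS, where Eureka is enabled and supplies the server list instead of the static property.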

Clustering web services over VPN

We have a number of web services exposed over VPN to our partners for their consumption. I was wondering what would be the best way to make those web services highly available and scalable for their usage. One option could be an Apache server sitting in front of our web services, acting as a reverse proxy. But that would introduce a single point of failure too. Can we use a physical load balancer? I was not able to find any useful resources for planning out this activity. Any thoughts/ideas?
I have not worked with a physical load balancer, but Apache is a valid solution in most scenarios.
All of our clients (with critical back-end systems) use Apache as a load balancer without problems.
Most application servers also provide their own Apache integration, like mod_jk for Tomcat or mod_cluster for JBoss.

Splitting up web services on different dedicated web servers

I currently have a number of web services on a single server.
In the future we want to move the load off a single server and split it across other servers.
I am familiar with the concept of scaling out, but in our case I want to have different web services on different web servers so that traffic can be routed to the correct web service. That way, web services that do much more intensive work can be dedicated to a specific server.
How would I do this?
Would I need to change my client applications so that the correct web service is called on the correct web server?
I think the proper pattern to use here would be to have one server with a dispatcher that just forwards requests to the appropriate back-end services. Then, if you decide to move one of the back-end services to another server, you can just make a configuration change in the dispatcher.
I am sure you can do it programmatically, but software or hardware load balancers (like F5) also have the ability to configure this.
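Here is a minimal sketch of that dispatcher idea (hostnames, ports, and paths are hypothetical): the path prefix decides which back-end service the request is forwarded to, so moving a service only means editing the routes table, or whatever configuration file feeds it.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Map;

    // Sketch of a dispatcher: clients always call this one server, which forwards
    // each request to the back-end service that currently owns the path prefix.
    public class Dispatcher {

        // Hypothetical routing table; in practice this would be loaded from
        // configuration so a service can be moved without touching the clients.
        private static final Map<String, String> ROUTES = Map.of(
                "/billing", "http://billing-host:8081",
                "/reports", "http://reports-host:8082");

        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/", exchange -> {
                String path = exchange.getRequestURI().getPath();
                String backend = ROUTES.entrySet().stream()
                        .filter(e -> path.startsWith(e.getKey()))
                        .map(Map.Entry::getValue)
                        .findFirst().orElse(null);
                try {
                    int status;
                    byte[] body;
                    if (backend == null) {
                        status = 404;
                        body = "no route".getBytes();
                    } else {
                        // Forward the GET request to the owning back-end service.
                        HttpResponse<byte[]> upstream = CLIENT.send(
                                HttpRequest.newBuilder(URI.create(backend + path)).GET().build(),
                                HttpResponse.BodyHandlers.ofByteArray());
                        status = upstream.statusCode();
                        body = upstream.body();
                    }
                    exchange.sendResponseHeaders(status, body.length);
                    try (OutputStream os = exchange.getResponseBody()) {
                        os.write(body);
                    }
                } catch (InterruptedException e) {
                    throw new RuntimeException(e);
                }
            });
            server.start();
            System.out.println("Dispatcher listening on http://localhost:8080");
        }
    }

In practice you would normally use an off-the-shelf reverse proxy or load balancer instead of hand-rolling this, but the configuration-driven routing is the same idea.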

All services on one port, or each service on its own port?

I need to make a bunch (20+) of services (usually single-purpose services, such as user management, with multiple methods).
Is it better for each service to be on its own port / in its own project in Visual Studio, or to have them all on a single port in a single project?
What is best from a scaling point of view? Also, if they are all deployed on a single machine, is there a performance difference between the two approaches?
Somehow it makes sense to have them separate, so that if, for example, one of the services sees significantly higher use, it can be scaled alone across multiple machines. Do you agree?
When it comes to web services, the physical TCP/IP port is really not important; you should aim for the standard HTTP port (80) and make sure you have adequate load balancing on the web server for the expected user load.
Performance on a server degrades when resources (memory or processing power) are in high demand, and having your services on different ports won't change this. If your user load exceeds the resources available on a single server, you'll need to look into creating a server farm or deploying on a cloud service that can scale to your needs (like Amazon EC2 or Microsoft Azure).
Project segmentation depends on functionality: services with similar functionality that interact with related backend resources can be grouped together in the same project for scaling purposes, but there's no hard rule on how to segment your services into projects; it's all common sense and trying to group similar functionality.