Microservice Architecture - Independent server per service or on the same server? - amazon-web-services

I am designing the architecture for a piece of software and am in the process of creating microservices for its larger components.
I can either run each microservice on a different server or co-locate them on the same server. Is it good practice to run microservice components on the same server instance, or should each microservice run on its own server?
I think it is better to have a separate server for the bigger components, though obviously that increases the cost.
Can anyone please share your thoughts on this?
Thanks

I recommend a combined approach: services are containerized on a shared server, which provides some level of isolation between them, while multiple servers are used to increase availability and to scale horizontally.
In my experience this achieves the lowest cost and highest availability.
A good container orchestration system like Kubernetes abstracts this away and combines all servers into one virtual cluster, which simplifies management of the whole infrastructure. In addition, it provides useful services that benefit this type of architecture, such as managing the lifecycle of individual services, load balancing, and moving services between nodes in case of hardware failure.
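As a minimal sketch of what that looks like in practice, here is a hypothetical Deployment created with the official Kubernetes Python client; the service name, image, and replica count are placeholder assumptions. Kubernetes spreads the replicas across available nodes, so losing one machine does not take the service down:

```python
# Minimal sketch using the official Kubernetes Python client (pip install kubernetes).
# The service name, image, and replica count are placeholder assumptions.
from kubernetes import client, config

config.load_kube_config()  # uses your local ~/.kube/config

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="user-service"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # three copies, scheduled across the cluster's nodes
        selector=client.V1LabelSelector(match_labels={"app": "user-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "user-service"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="user-service",
                        image="example.com/user-service:1.0",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```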

Related

Are managed databases (e.g. Amazon RDS) slower to access than databases on the same machine (EC2) as the web server?

Imagine two cases:
1. I have a web server running in an EC2 instance, connected to a database in RDS, the managed database service.
2. I have the web server and the database running in the same EC2 instance.
Is my database in RDS going to be slower to access because it's not on the same machine?
Approximately how many milliseconds does it add to the latency between the two?
Does this become a bottleneck?
What about other managed database services on Azure, GCP, Digital Ocean, etc.?
Do they behave the same?
Yes, reaching an RDS instance from your web server will be slower than a database on the same host, because the traffic has to go over the network and that adds latency.
The drawback of running the DB on the same server is that you can't use a managed service to take care of your database, and you're mixing largely stateless components (the web server) with stateful components (the database). This solution typically isn't scalable either: if you add more web servers, things get messy.
I don't know about Azure, GCP, or Digital Ocean, but I'd be very surprised if things were different there. There are good reasons to separate these components.
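If you want a number for your own setup rather than a guess, you can measure it directly. A minimal sketch, assuming a PostgreSQL database and psycopg2; the hostnames and credentials are placeholders. Run the same loop against localhost and against the RDS endpoint, and the difference is the network round trip the managed service adds:

```python
# Minimal latency probe, assuming PostgreSQL and psycopg2 (pip install psycopg2-binary).
# Hostnames and credentials are placeholders.
import time
import psycopg2

def median_roundtrip_ms(host, trials=100):
    conn = psycopg2.connect(host=host, dbname="app", user="app", password="secret")
    cur = conn.cursor()
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        cur.execute("SELECT 1")  # trivial query, so timing is dominated by the round trip
        cur.fetchone()
        samples.append((time.perf_counter() - start) * 1000)
    conn.close()
    samples.sort()
    return samples[len(samples) // 2]

print("local:", median_roundtrip_ms("localhost"), "ms")
print("RDS  :", median_roundtrip_ms("mydb.abc123.us-east-1.rds.amazonaws.com"), "ms")
```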

AWS ALB - single for all services?

We have many internet-facing services. What are the considerations for using one ALB per service versus a single ALB for all of them, with listener rules pointing to target groups?
Each service has its own cluster/target group, with different functionality and a different URL.
Can a spike in one service impact the other services?
Is it going to be a single point of failure?
What about cost?
What about observability, monitoring, and logs?
What about ease of management?
Personally, I would normally use a single ALB with different listener rules for different services.
For example, say I have service1.domain.com and service2.domain.com. I would have two host-header rules on the same ALB listener, routing to the different target groups.
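As a minimal sketch of that setup with boto3, one rule per service matches on the Host header and forwards to that service's target group; the listener and target group ARNs below are truncated placeholders:

```python
# Minimal sketch with boto3 (pip install boto3). The listener ARN, target group
# ARNs, and hostnames are placeholder assumptions for illustration.
import boto3

elbv2 = boto3.client("elbv2")

SERVICES = [
    ("service1.domain.com", "arn:aws:elasticloadbalancing:...:targetgroup/service1/..."),
    ("service2.domain.com", "arn:aws:elasticloadbalancing:...:targetgroup/service2/..."),
]

# One rule per service on the same ALB listener: match on the Host header,
# forward to that service's target group.
for priority, (hostname, target_group_arn) in enumerate(SERVICES, start=1):
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
        Priority=priority,  # lower number = evaluated first
        Conditions=[{"Field": "host-header", "Values": [hostname]}],
        Actions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )
```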
In my experience the ALB is highly available and scales very nicely without any issues. I've never had a service become unreachable due to scaling problems. ALBs scale based on Load Balancer Capacity Units (LCUs): as your load balancer requires more capacity, AWS automatically assigns more LCUs, which allows it to handle more traffic.
Source: own experience working on an international system consisting of monoliths and microservices with a large degree of scaling across timezones.
A spike in service A won't impact service B, but identifying which service is having a bad time can be a bit of a pain.
From a monitoring perspective it is a bit harder, because it's not as easy to quickly identify which service/target is suffering.
For management, as soon as different teams need to create and manage their own targets, it can create some conflicts.
I wouldn't encourage you to use that monolithic approach.
From a cost perspective you can use one load balancer with multiple forward rules, but using a single central load balancer for an entire application ecosystem essentially duplicates the standard monolith architecture, while enormously increasing the number of instances served by one load balancer. In addition to being a single point of failure for the entire system should it go down, this single load balancer can very quickly become a major bottleneck, since all traffic to every microservice has to pass through it.
Using a separate load balancer per microservice type adds some overhead, but it confines each single point of failure to one microservice: in this model, incoming traffic for each type of microservice is sent to a different load balancer.

Is there any benefit to running docker containers on different Amazon servers?

If I've built a microservices-based web app, is there any benefit to running the Docker containers on separate servers? When I say servers, I mean machines each with their own OS, kernel, etc.
One obvious benefit would be that if one machine goes down, it wouldn't take down all the services, but other than this, what are the benefits?
Also, does Elastic Beanstalk ALREADY do this? Or does it just deploy the containers on a single machine sharing the kernel (similar to Docker on a local machine)?
You're talking about a clustered solution: running your services on multiple nodes (hosts).
The benefits are high availability (no single point of failure) and scalability: you can spread the load across multiple nodes, and you can increase or decrease the number of nodes as needed to accommodate usage. All of this needs to be taken into account when you design your application.
Nowadays, all the major cloud providers have proprietary technologies that cover clustering. You can use AWS's Elastic Beanstalk to create a clustered solution with Docker containers as the building blocks; however, you lock yourself into AWS's technologies. I prefer to rely entirely on open-source technologies (e.g. Docker Swarm, Kubernetes) for clustering, so that I can deploy both to on-premises data centers and to different cloud providers (AWS, Azure, GCP).
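As a minimal sketch of the open-source route, assuming a Docker Swarm cluster has already been initialized and using the Docker SDK for Python; the image name, service name, ports, and replica count are placeholders. A replicated service is spread across the cluster's nodes and rescheduled onto healthy nodes if a machine goes down:

```python
# Minimal sketch using the Docker SDK for Python (pip install docker), assuming
# `docker swarm init` has already been run on the manager node. Image name,
# service name, ports, and replica count are placeholder assumptions.
import docker

client = docker.from_env()

# A replicated service: Swarm schedules the replicas across available nodes and
# restarts them on healthy nodes if a machine goes down.
client.services.create(
    image="example.com/web-app:1.0",  # placeholder image
    name="web-app",
    mode=docker.types.ServiceMode("replicated", replicas=3),
    endpoint_spec=docker.types.EndpointSpec(ports={80: 8080}),  # cluster port 80 -> container port 8080
)
```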

In SOA (Service Oriented Architecture), do individual services run on separate servers?

Big banks follow a Service Oriented Architecture for their functioning.
They may have more than 50 services, so do they run individual services on separate servers, or do they group services?
Can anyone explain this in detail?
In SOA, each service should be able to serve requests independently, and hence could be hosted on a separate server.
But all these servers should communicate with each other internally, so that the system as a whole is aware of the services offered by each server and the outside world can hit a single endpoint when requesting a service. Internally, a routing module identifies the server which offers the particular service and forwards the request to it.
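As a minimal sketch of such a routing module (all service names and addresses below are made up for illustration), a registry maps each service name to the servers offering it and spreads requests across them round-robin:

```python
# Minimal sketch of an internal routing module: a registry maps service names to
# the servers offering them, and requests are spread round-robin. All names and
# addresses are hypothetical.
import itertools

REGISTRY = {
    "payments":   ["10.0.1.10:8080", "10.0.1.11:8080"],  # two servers share the load
    "accounts":   ["10.0.2.10:8080"],
    "statements": ["10.0.3.10:8080"],
}

# One round-robin cycle per service.
_cycles = {name: itertools.cycle(hosts) for name, hosts in REGISTRY.items()}

def route(service_name: str) -> str:
    """Return the address of a server offering the requested service."""
    if service_name not in _cycles:
        raise KeyError(f"no server registered for service {service_name!r}")
    return next(_cycles[service_name])

print(route("payments"))  # 10.0.1.10:8080
print(route("payments"))  # 10.0.1.11:8080 (round-robin)
```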
It is also possible for more than one server to serve the same request if the expected load is high.
Also, the term "server" could mean a runtime, something like a JVM if the service is Java-based, or it could be a machine too.
According to Wikipedia:
Every computer can run any number of services, and each service is built in a way that ensures that the service can exchange information with any other service in the network without human interaction and without the need to make changes to the underlying program itself.
Normally, services of a similar nature, or services communicating with the same code base, DB server, or application server, are grouped together, while a high-volume or long-running service may be served separately to speed up the system.
The whole purpose of Service Oriented Architecture (SOA) is to give each module (exposed as a service) the freedom to be deployed, implemented, and expanded with the least impact on other modules. So these services could either be hosted on a single server on different ports, or on different servers.
Usually, at big banks, there is a team owning each service. Each service is deployed on a different server; in fact, each service may be deployed on many servers for scalability and fault tolerance.
Usually the services are hosted on an Enterprise Service Bus (ESB), a component which publishes the services to all information systems of the organization and also to external B2B customers (via a B2B gateway).
The services hosted on the ESB may utilize services provided by backend systems. These backend services are considered private and are only consumed via the ESB. This approach eliminates the spaghetti mess that arises if everybody integrates with everybody.
Most of the ESB systems I have come across were highly available solutions with an HA database, a cluster of application servers, and a load balancer, all together creating a platform that ensures the stability and performance of the services.
The number of services in an enterprise can be very large; I have been involved in projects with hundreds of services, and the largest corporations run thousands of services.
I recommend checking Wikipedia for more about ESBs.

All services on one port, or each service on its own port?

I need to make a bunch (20+) of services (usually single-purpose services, such as user management, with multiple methods).
Is it better for each service to be on its own port / in its own project in Visual Studio, or to host them all on a single port in a single project?
What is best from a scaling point of view? Also, if they are all deployed on a single machine, is there a performance difference between the two approaches?
Somehow it makes sense to have them separate, so that if, for example, one of the services sees significantly higher use, it can be scaled alone on multiple machines. Do you agree?
When it comes to web services, the physical TCP/IP port is really not important; you should shoot for the standard HTTP port (80) and make sure you have adequate load balancing on the web server for the expected user load.
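As a minimal sketch of several services sharing one port, a single HTTP server can dispatch by path prefix; the service names, handlers, and port are placeholders, and in production a reverse proxy or load balancer would normally play this role:

```python
# Minimal sketch: several single-purpose services behind one port, dispatched by
# path prefix. Service names, handlers, and the port are placeholder assumptions;
# in production a reverse proxy / load balancer usually plays this role.
from http.server import BaseHTTPRequestHandler, HTTPServer

def users_service(path):    # hypothetical user-management service
    return 200, b'{"service": "users"}'

def billing_service(path):  # hypothetical billing service
    return 200, b'{"service": "billing"}'

ROUTES = {"/users": users_service, "/billing": billing_service}

class Router(BaseHTTPRequestHandler):
    def do_GET(self):
        # Dispatch to the first service whose path prefix matches.
        for prefix, handler in ROUTES.items():
            if self.path.startswith(prefix):
                status, body = handler(self.path)
                break
        else:
            status, body = 404, b'{"error": "unknown service"}'
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), Router).serve_forever()  # one port for all services
```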
Performance on a server degrades when resources (memory or processing power) are in high demand, and having your services on different ports won't change this. If your user load exceeds the resources available on a single server, you'll need to look into creating a server farm or deploying on a cloud service that can scale to your needs (like Amazon EC2 or Microsoft Azure).
Project segmentation depends on functionality: services with similar functionality that interact with related backend resources can be grouped together in the same projects for scaling purposes. But there's no hard rule on how to segment your services into projects; it's all common sense and grouping similar functionality.