Is there any benefit to running Docker containers on different Amazon servers?

If I've built a microservices-based web app, is there any benefit to running the Docker containers on separate servers? By servers, I mean machines each with their own OS, kernel, etc.
One obvious benefit is that if one machine goes down, it won't take down all the services, but other than that, what are the benefits?
Also, does Elastic Beanstalk already do this, or does it just deploy the containers on a single machine sharing one kernel (similar to Docker on a local machine)?

You're talking about a clustered solution. That's running your services on multiple nodes (hosts).
The benefits are High Availability (no single point of failure) and Scalability: you can spread the load across multiple nodes, and you can increase or decrease the number of nodes as needed to accommodate usage. Both need to be taken care of when you design your application.
Nowadays, all the major cloud providers have proprietary technologies that cover clustering. You can use AWS's Elastic Beanstalk to create your clustered solution with Docker containers as the building blocks; however, you lock yourself into AWS's technologies. I prefer to rely entirely on open-source technologies (e.g., Docker Swarm, Kubernetes) for clustering so that I can deploy both to on-premises data centers and to different cloud providers (AWS, Azure, GCP).
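For a sense of what the open-source route looks like, here is a minimal Docker Swarm sketch; the IP address, join token, and image name are placeholders:

    # On the first host, create the swarm:
    docker swarm init --advertise-addr 10.0.0.1

    # On each additional host, join as a worker
    # (the actual token is printed by the init command):
    docker swarm join --token <worker-token> 10.0.0.1:2377

    # Back on the manager, run a service with three replicas,
    # which the scheduler spreads across the nodes:
    docker service create --name web --replicas 3 --publish 80:80 example/my-app:latest

The same commands work whether the hosts are EC2 instances, Azure VMs, or on-premises machines.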

Related

Microservice Architecture - Independent server per service, or on the same server?

I am designing the architecture for a piece of software and am in the process of creating microservices for a big component.
I can either put each microservice on a different server or put them all on the same server. Is it good to create the microservice components on the same server instance, or should I create the different microservices on different servers?
I think it is good to have a separate server for bigger components, though obviously it will increase the cost.
Can anyone please let me know your thoughts on this?
Thanks
I recommend a combined approach: containerize the services on a shared server, which provides some level of isolation between them, while using multiple servers to increase availability and scale horizontally.
In my experience this achieves the lowest cost and the highest availability.
A good container orchestration system like Kubernetes abstracts this away and combines all the servers into one virtual cluster, which simplifies the management of the whole infrastructure. In addition, it provides some useful services that benefit this type of architecture, like managing the lifecycle of individual services, load balancing, and moving services between nodes in case of hardware failure.
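As a concrete illustration, a minimal Kubernetes sketch for one such service might look like this; the names, image, and ports are placeholders:

    # Deployment: keeps 3 replicas running, spread across available nodes;
    # if a node fails, its replicas are rescheduled elsewhere.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: orders
      template:
        metadata:
          labels:
            app: orders
        spec:
          containers:
            - name: orders
              image: example/orders:1.0
              ports:
                - containerPort: 8080
    ---
    # Service: a stable virtual address that load-balances across the replicas.
    apiVersion: v1
    kind: Service
    metadata:
      name: orders
    spec:
      selector:
        app: orders
      ports:
        - port: 80
          targetPort: 8080

Each microservice gets its own Deployment like this, and Kubernetes decides which physical servers actually run the containers.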

Having part of the system in the cloud and some in a local data center

I have a general question regarding hosting an application in the cloud, such as on AWS or Azure.
Does it make sense to have my application servers and DBs in the cloud while keeping the web servers in our local, non-cloud data center? The reason I am asking is that, because of some special requirements, it is easier to have our web servers in our in-house data center than in the cloud.
So I am looking for any issues, like slowness, with this approach.
Thanks
What you are suggesting really makes no sense.
There are hybrid architectures, but in most of these implementations you have the web / application servers in the cloud and access a database that is still in your non-cloud data center.
Most cloud providers do web serving really, really well: load balancing, autoscaling, monitoring, and content distribution networks all tied together with automation mean that it is highly doubtful that you can do it better in-house.

How to map subdomains to multiple Docker containers (as web servers) hosted using Elastic Beanstalk on AWS

I have seen an AWS example of hosting an nginx server and a PHP server running in separate Docker containers within one instance.
I want to use this infrastructure to host multiple Docker containers, each being its own web server on a different, unique port.
Each of these unique web applications needs to be available on the internet with a unique subdomain.
Since one instance will not be enough for all the Docker containers, I will need them spread over multiple instances.
How can I host hundreds of Docker containers over several instances, while one nginx-proxy container does the routing to map a subdomain to each web application container using its unique port?
E.g.
app1.mydomain.com --> docker container exposing port 10001
app2.mydomain.com --> docker container exposing port 10002
app3.mydomain.com --> docker container exposing port 10003
...
If I use an nginx-proxy container, it would be easy to map each port number to a different subdomain. This would be true if all the Docker containers were on the same instance as the nginx-proxy container.
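For that single-instance case, the proxy configuration is straightforward; a sketch, with hostnames and ports as placeholders:

    # nginx-proxy sketch: one server block per subdomain, each forwarding
    # requests to a container's published port on the same instance.
    server {
        listen 80;
        server_name app1.mydomain.com;
        location / {
            proxy_pass http://127.0.0.1:10001;
        }
    }

    server {
        listen 80;
        server_name app2.mydomain.com;
        location / {
            proxy_pass http://127.0.0.1:10002;
        }
    }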
But can I map to Docker containers that are hosted on a different instance? I am planning to use Elastic Beanstalk to create new instances for the extra Docker containers.
Then nginx would be running on one instance, while there are containers on different instances.
How do I achieve the end goal of hundreds of web applications hosted in separate Docker containers, mapped to unique subdomains?
To be honest, your question is not quite clear to me. It seems you could deploy an Nginx container on each instance holding the proxy configuration for every app container you have on it, and as the cluster scales out, all of them would have an Nginx as well. So, you could just set an ELB on top of it (Elastic Beanstalk supports it natively), and you would be good.
Nonetheless, I think you're trying to push Elastic Beanstalk too hard. I mean, it's not supposed to be used that way, like a big, generic Docker cluster. Elastic Beanstalk was built to facilitate application deployments, and nowadays containers are just one of the, let's say, platforms available for doing that (although a container is not a language or framework, of course). But Elastic Beanstalk is not a container manager.
So, in my opinion, what makes sense is to deploy a single container per Beanstalk application with an ELB on top of it, so you don't need to worry about the underlying machines and their IPs. That way, you can easily set up a frontend proxy to route requests, because you have a permanent address for each application pool. And being independent pools, they can scale independently, and so on.
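With that approach, each Beanstalk application would carry its own single-container Dockerrun.aws.json, roughly like this sketch (the image name and port are placeholders):

    {
      "AWSEBDockerrunVersion": "1",
      "Image": {
        "Name": "example/app1:latest",
        "Update": "true"
      },
      "Ports": [
        {
          "ContainerPort": 10001
        }
      ]
    }

The frontend proxy then maps each subdomain to that environment's ELB address instead of to an instance IP and port.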
There are some more complex solutions out there that try to solve the problem of deploying containers in one wide cluster, like Google's Kubernetes, keeping track of them and providing endpoints for each application group. There are also solutions for dynamic reverse proxies, like one recently released, and probably a lot of other solutions popping up every day, but all of them would demand a lot of customization. In that case, though, we are not talking about an AWS solution.

Kubernetes and vSphere, AWS

I am a bit late to the party and am just delving into containers now. At work we use vSphere as our virtualization platform, but are likely to move to "the cloud" (AWS, GCP, Heroku, etc.) at some point in the somewhat-near future.
Ideally, I'd like to build our app containers such that I could easily port them from running on vSphere nodes to AWS EC2 instances.
So I ask:
Are all Docker containers created equal? Could I port a Docker container of our own creation to AWS Container Service with zero config?
I believe Kubernetes helps map containers to the virtualization resources they need. Any chance this runs on AWS as well, or does AWS-ECS take care of this for me?
Kubernetes is designed to run on multiple cloud platforms (as well as on bare metal). See Getting started on AWS for AWS-specific instructions.
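On the first question, the image itself is portable; it is the surrounding wiring that differs per platform. A sketch, with the registry and image names as placeholders:

    # Build once and push to a registry reachable from both environments:
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0

    # The same image then runs on any Kubernetes cluster, whether its
    # nodes are vSphere VMs or EC2 instances:
    kubectl run myapp --image=registry.example.com/myapp:1.0 --port=8080

What usually isn't zero-config is the platform glue: load balancers, storage volumes, and credentials have to be mapped to the target environment.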

AWS multi-zone disaster recovery and load balancing - best approach?

I’m using Amazon Web Services, and trying to set up a modest system for load balancing and disaster recovery. The application is PHP based, with Zend Framework 2 (ZF2) on the front end, a local memcached server and MySQL through RDS. All servers are running Amazon Linux.
I am trying to configure the elastic load balancer to use two servers in two different AWS “availability zones.” To seamlessly allow one server to shut down and another take over, we need shared PHP sessions. So I set up PHP database sessions with ZF2.
In general, I assume the likelihood of an outage of an AWS zone is considerably lower than chance of a fatal problem in the individual servers or the application itself. So I am considering a different approach:
All the servers in the same availability zone
Separate AWS ElastiCache server (essentially memcached, cannot be used across zones)
PHP sessions stored in the cache (built-in support for memcached)
One emergency server in a different zone – in the rare case of a zone outage, we would change the DNS record to use the different server
Is this a good standard approach to DR and load balancing? I don't like the DR solution in the case of a zone outage, but I haven't seen a zone go down much, and we can probably handle that level of risk if it simplifies the design. If the load balancer could weight the servers, I would put all the weight on one zone, with the backup server weighted much lower.
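For reference, the third item above (sessions stored in the cache) is mostly a php.ini change; a sketch using the memcached extension, with a placeholder ElastiCache endpoint:

    ; php.ini: store sessions in memcached instead of local files.
    ; The endpoint below is a placeholder for your ElastiCache node address.
    session.save_handler = memcached
    session.save_path = "my-cache.abc123.cfg.use1.cache.amazonaws.com:11211"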
What would be the benefit of keeping all the PHP servers in the same AZ vs distributing them among multiple AZs? I can't think of any, except a very small (3-5ms) latency improvement. Since there's very little downside, why not spread the servers among multiple AZs?
Your ElastiCache memcached node is still a single point of failure. If the AZ that the ElastiCache instance is running in has a problem, you will lose sessions. You could switch to ElastiCache with Redis (which supports master/slave replication) to achieve multi-AZ at your cache layer as well.
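A sketch of that setup with the AWS CLI, using placeholder names (one primary plus one replica, with automatic failover so a replica is promoted if the primary's AZ has a problem):

    aws elasticache create-replication-group \
      --replication-group-id php-sessions \
      --replication-group-description "Multi-AZ PHP session store" \
      --engine redis \
      --cache-node-type cache.t2.micro \
      --num-cache-clusters 2 \
      --automatic-failover-enabled

PHP then connects to the replication group's primary endpoint, which stays stable across a failover.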