I am a bit late to the party and am just delving into containers now. At work we use vSphere as our virtualization platform, but are likely to move to "the cloud" (AWS, GCP, Heroku, etc.) at some point in the somewhat-near future.
Ideally, I'd like to build our app containers such that I could easily port them from running on vSphere nodes to AWS EC2 instances.
So I ask:
Are all Docker containers created equal? Could I port a Docker container of our own creation to AWS Container Service with zero config?
I believe Kubernetes helps map containers to the virtualization resources they need. Any chance this runs on AWS as well, or does AWS ECS take care of this for me?
Kubernetes is designed to run on multiple cloud platforms (as well as on bare metal). See Getting started on AWS for AWS-specific instructions.
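To illustrate the portability point, here is a minimal sketch using the official kubernetes Python client (pip install kubernetes). The deployment name and the image "myregistry/my-app:1.0" are placeholders of my own; the idea is that the same spec applies whichever cluster your kubeconfig points at, whether its nodes are EC2 instances or vSphere VMs.

```python
from kubernetes import client, config

config.load_kube_config()  # uses whatever cluster your kubeconfig points at

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="my-app", image="myregistry/my-app:1.0")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```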
Related
What are the exact differences between EC2, Beanstalk and LightSail in AWS?
What are good real-world scenarios in which I should use each of these services?
They are all based on EC2, the compute service from AWS that lets you create EC2 instances (virtual machines in the cloud).
Lightsail is packaged in a way similar to a Virtual Private Server (VPS), making it easy for anyone to start with their own server. It has a simplified management console, and many options come with default values tuned to maximize availability and security.
Elastic Beanstalk is a service for application developers that provisions an EC2 instance and a load balancer automatically. It creates the EC2 instance, installs an execution environment on that machine, and deploys your application for you (Elastic Beanstalk supports Java, Node, Python, Docker and many others).
Behind the scenes, Elastic Beanstalk creates regular EC2 instances that you will see in your AWS Console.
And EC2 is the bare service that makes the others possible. If you choose to create an EC2 instance, you will have to choose your operating system, manage your SSH key, install your application runtime and configure security settings yourself. You have full control of that virtual machine.
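That bare-bones nature shows even in a scripted launch. Below is a rough boto3 sketch of creating a single instance; the AMI ID, key pair name and security group ID are placeholders you would replace with your own choices.

```python
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0abcdef1234567890",            # you choose the operating system (the AMI)
    InstanceType="t2.micro",
    KeyName="my-ssh-key",                       # you manage the SSH key yourself
    SecurityGroupIds=["sg-0123456789abcdef0"],  # and the firewall/security settings
    MinCount=1,
    MaxCount=1,
)
print(instances[0].id)
```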
In simple terms:
EC2 - a virtual host (a machine built from an image) on which you can install apps and do whatever you like.
Lightsail - similar, but with a more user-friendly management console; good for small applications.
Beanstalk - an orchestration tool that does the work of creating an EC2 instance, installing your application and supporting software, and freeing you from the manual tasks of setting up an environment.
More details at - https://stackshare.io/stackups/amazon-ec2-vs-amazon-lightsail-vs-aws-elastic-beanstalk
I don't know if my scenario is typical in any way, but here are the differences that were critical for me. I'm happier with EC2 than with EB:
EC2:
just a remote Linux machine with shell (command line) access
traceable application-level errors; easy to see what is wrong with your application
you can manage it with the AWS web console or the AWS command line tool
you will need to repeat the setup steps if you want to reproduce the same environment
some effort is needed to get proper shell access (e.g. restricting the security rule to your IP only)
no load balancer provided by default
Elastic Beanstalk
a service that creates an EC2 instance with a programming language of your choice (e.g. Python, PHP, etc.)
runs one application on that machine (for Python, application.py)
you upload applications as a .zip file; extra effort is needed to deploy from your git source (see the sketch after this list)
you need to get used to the environments-vs-applications mental model
application-level errors are hidden deep in the server logs, and logs are downloaded from a separate menu
can be managed from the web console, but also needs another CLI tool in addition to the AWS CLI (you end up installing two CLI tools)
provides a load balancer and other server-level services, taking away the manual setup
great for scaling stable applications, not so much for trial-and-see experimentation
probably more expensive than just an EC2 instance
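For the .zip upload step, here is a hedged boto3 sketch of the workflow: push the bundle to S3, register it as an application version, then point the environment at it. The bucket, application and environment names are assumptions of mine, not anything Beanstalk creates for you.

```python
import boto3

s3 = boto3.client("s3")
eb = boto3.client("elasticbeanstalk")

# 1. Upload the zipped source bundle to S3.
s3.upload_file("app-v1.zip", "my-deploy-bucket", "app-v1.zip")

# 2. Register it as a new application version.
eb.create_application_version(
    ApplicationName="my-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "app-v1.zip"},
)

# 3. Point the running environment at that version.
eb.update_environment(EnvironmentName="my-app-env", VersionLabel="v1")
```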
Amazon EC2 is a virtual host; in other words, it is a server where you can SSH in, configure your application, install dependencies and so on, just like on your local machine. EC2 offers dozens of AMIs (Amazon Machine Images: the AMI is essentially the operating system image your EC2 server starts from; for instance, you can have EC2 running a Linux-based OS or Windows). To summarize, it is a great choice if you need a machine entirely in your own hands.
Amazon Lightsail is a simple tool with which you can deploy and manage applications with minimal server management. You will find it very practical if your application is small; for instance, it will fit perfectly if you use WordPress or another CMS.
AWS Elastic Beanstalk is an orchestration tool. You can manage your application within that service; it is a higher-level service than AWS Lightsail.
If you still do not understand the differences, you can take a look at each service's overview.
There is also an answer on Quora.
I have spent only 10 mins on these technologies but here is my first take.
EC2 - a bare server service. It gives you a server with an OS, and that is it; nothing else is installed on it. So if you need a web server (nginx) or Python, you'll need to install them yourself.
Beanstalk - helps you deploy your applications. Say you have a Python/Flask application that you want to run on a server. Traditionally you'd have to build the app, move the deployable package to another machine with a web server installed, then move the package into some directory of that web server. Beanstalk does all this for you automatically (see the sketch below).
LightSail - I haven't tried it, but it seems to be an even simpler option for creating a server with a pre-installed OS and software.
In summary, these services seem to make application deployment easier by pre-configuring the servers/EC2 instances with the required software packages and security policies (e.g. open ports).
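To make the Beanstalk item concrete, here is a minimal sketch of the kind of Python/Flask app it can deploy. Flask and the route below are my own assumptions; the one Beanstalk-specific convention relied on is that its Python platform looks for a WSGI callable named "application" (in application.py) by default.

```python
# application.py
from flask import Flask

application = Flask(__name__)  # Beanstalk's Python platform looks for this name

@application.route("/")
def index():
    return "Hello from Elastic Beanstalk"

if __name__ == "__main__":
    # Run locally for testing; on Beanstalk the platform's web server serves it.
    application.run(port=8000)
```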
I am not an expert so I could be wrong.
I am using Arch Linux for development. I am trying to use a free-tier AMI for EC2 on AWS.
I have found Amazon Linux 2 as one of the AMIs.
I didn't find an Arch Linux AMI in the free tier.
I know that by using Docker I can still use Arch Linux and keep my environment the same.
The reason I want to use Arch is that I am familiar with its package management, which is crucial for working comfortably on any particular Linux distribution.
So, will using Docker affect performance on AWS, and is Docker worth using at all?
Or should I get used to the Amazon Linux distribution?
If you like Arch Linux, use the Arch Linux Docker image.
The Docker overhead is very small.
Using Docker will also make it easy to port your setup to any location: another cloud, your desktop, another OS.
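A small sketch with the Docker SDK for Python (pip install docker), assuming Docker is already installed on the host: the host can be Amazon Linux 2 or anything else, while inside the container you keep Arch Linux and pacman.

```python
import docker

client = docker.from_env()
output = client.containers.run(
    "archlinux:latest",        # the official Arch Linux image
    ["pacman", "-Q"],          # list the packages installed in the container
    remove=True,
)
print(output.decode())
```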
Docker is a perfectly good way to go. Further, consider that in the regions where it is available you can use AWS Fargate. It lets you start Docker containers (scaling them up and down, etc.) without having to manage servers (EC2 instances).
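As a rough illustration, this boto3 sketch starts one container on Fargate; the cluster name, task definition and subnet ID are placeholders, and the task definition is assumed to already exist.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.run_task(
    cluster="my-cluster",
    launchType="FARGATE",               # no EC2 instances to manage
    taskDefinition="my-app:1",          # an existing task definition revision
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```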
If I've built a microservices-based web app, is there any benefit to running Docker containers on separate servers? When I say servers, I mean each with its own OS, kernel, etc.
One obvious benefit would be that if that machine goes down, it wouldn't take down all the services, but other than this what are the benefits?
Also, does Elastic Beanstalk ALREADY do this? Or does it just deploy the containers on a single machine sharing the kernel (similar to Docker on a local machine)?
You're talking about a clustered solution. That's running your services on multiple nodes (hosts).
The benefits are high availability (no single point of failure) and scalability: you can spread the load across multiple nodes, and you can increase or decrease the number of nodes as needed to accommodate usage. All of this needs to be taken care of when you design your application.
Nowadays, all the major cloud providers have proprietary technologies to cover clustering. You can use AWS's Elastic Beanstalk to create your clustered solution with Docker containers as the building blocks. However, you lock yourself in to AWS's technologies. I prefer to rely entirely on open-source technologies (e.g. Docker Swarm, Kubernetes) for clustering so that I can deploy both to on-premises data centers and to different cloud providers (AWS, Azure, GCP).
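For the scaling side, here is a hedged sketch with the kubernetes Python client: it scales an existing Deployment (named "my-app" here, an assumption) and the scheduler spreads the replicas over whatever nodes the cluster has.

```python
from kubernetes import client, config

config.load_kube_config()
client.AppsV1Api().patch_namespaced_deployment_scale(
    name="my-app",
    namespace="default",
    body={"spec": {"replicas": 4}},  # increase or decrease as load requires
)
```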
We are testing the ECS infrastructure to run an application that requires a backend service (a MySQL database) as well as a few web servers. Since we'd like to restart and redeploy the front-end web servers independently of the backend service, we were considering defining them as separate task definitions, as suggested here.
However, since the container names are autogenerated by ECS, we have no means of referring to the container running the MySQL instance, and links can only be defined between containers running in the same task.
How can I make a reference to a container from a different task?
PS: I'd like to keep everything running within ECS, and not rely on RDS, at least for now.
What you're asking about is generally called service discovery, and there are a number of different approaches. ECS integrates with ELB through its service feature: tasks are automatically registered with the ELB and deregistered from it as appropriate. If you want to avoid ELB, another pattern might be an ambassador container (there's a sample called the ecs-task-kite that uses the ECS API), or you might be interested in an overlay network (Weave has a fairly detailed getting-started guide for their solution).
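A rough boto3 sketch of that service-plus-ELB approach: tasks from the MySQL task definition are registered with a classic ELB, so the web servers reach MySQL through the ELB's stable DNS name instead of a container link. All names here (cluster, ELB, role, task definition) are placeholders of mine.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_service(
    cluster="my-cluster",
    serviceName="mysql",
    taskDefinition="mysql-task:1",
    desiredCount=1,
    role="ecsServiceRole",                  # IAM role that lets ECS register with the ELB
    loadBalancers=[{
        "loadBalancerName": "mysql-internal-elb",
        "containerName": "mysql",           # must match the container name in the task definition
        "containerPort": 3306,
    }],
)
```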
I am pretty new to Cloud Foundry. I am still trying to understand how exactly it works.
Say if I have three VMs. VM 1 is running on server A.
VM 2 and 3 are running on server B.
If I wanted to use a single Cloud Foundry instance across those three, would it work?
And if not, how could I use Cloud Foundry on multiple servers, or at least multiple VMs? I know I can use BOSH to set them up, but do I still have to manage each instance separately?
Thank you,
Jannis
BOSH will deploy VMs for you, you typically don't deploy Cloud Foundry onto existing VMs. BOSH supports deploying to several infrastructures. The core supported infrastructures include AWS, vSphere, OpenStack, and vCloud Air/vCloud Director. There are also community-provided "Cloud Provider Interfaces" for IBM SoftLayer, Azure, Google Compute Engine, and more.
Cloud Foundry is meant to be run as a distributed service, i.e. on multiple VMs. Typically those VMs will be on multiple different hosts, hardware racks, servers, datacenters, what have you. And BOSH is designed to facilitate deploying and managing distributed services like Cloud Foundry. So no, you do not need to manage individual VMs separately.
You can read more about BOSH and Deploying Cloud Foundry.