AWS web server: Is it better to use Docker or to configure the server on an EC2 instance?

I use Arch Linux for my development environment, and I am trying to use a free-tier AMI for EC2 on AWS.
I found Amazon Linux 2 as one of the AMIs, but I didn't find an Arch Linux AMI in the free tier.
I know that using Docker I can still use Arch Linux and keep the environment the same.
The reason I want to use Arch is that I am familiar with its package management, which is crucial for working comfortably on any particular Linux distribution.
So, will using Docker affect performance on AWS, and is Docker worth using at all? Or should I just get used to the Amazon Linux distribution?

If you like Arch Linux, use the Arch Linux Docker image.
The Docker overhead is very small.
Using Docker will also make it easy to port your setup to any location: another cloud, your desktop, another OS.
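To try it out, here is a minimal sketch, assuming Docker is already installed on the EC2 host (archlinux is the official image name on Docker Hub; older setups used archlinux/base):

# Pull and start an Arch Linux container interactively
docker run -it archlinux bash
# Inside the container, pacman works exactly as it does on your desktop:
pacman -Syu --noconfirm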

Docker is a perfectly good way to go. Further, consider that, in many regions, you can use AWS Fargate. It allows you to start Docker containers (scaling them up and down, etc.) without having to manage servers (EC2 instances).
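As a rough sketch of what launching a container on Fargate looks like, assuming a cluster and task definition are already registered (the cluster name, task definition, and subnet ID below are placeholders):

# Start one Fargate task in an existing cluster
aws ecs run-task \
  --launch-type FARGATE \
  --cluster my-cluster \
  --task-definition my-arch-task \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc123],assignPublicIp=ENABLED}"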

Related

How can you run a Proxmox server on an Ubuntu EC2 instance?

I would like to run a Proxmox server on an Ubuntu EC2 instance.
I know this may sound crazy, but I do not have any spare hardware to run a Proxmox server on. Would it be possible to run this on an Ubuntu EC2 instance?
If I were to download Proxmox onto a flash drive, could I insert it into my computer and install it, overriding the Ubuntu instance and just using the hardware? Is this possible on AWS?
It is possible to run Proxmox on EC2, but if you want to host VM guests you need to run on an instance type that supports nested virtualisation, which is only the "metal" instances. These start at about $4/hour.
Running containers works fine on any standard x64 instance type, though.
I posted a guide to installing Proxmox on EC2 here:
https://github.com/thenickdude/proxmox-on-ec2
The tricky part that the guide fixes up automatically is harmonising the network configuration generated by Debian's cloud-init package with Proxmox's nonstandard ifupdown2 package.
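If you do go the bare-metal route, launching the instance itself is an ordinary run-instances call; a minimal sketch, assuming the AWS CLI is configured (the AMI ID and key name are placeholders, and remember the roughly $4/hour price tag):

# Launch a bare-metal instance type that supports nested virtualisation
aws ec2 run-instances \
  --instance-type i3.metal \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --key-name my-key \
  --count 1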

AWS - What are the exact differences between EC2, Beanstalk and LightSail?

What are the exact differences between EC2, Beanstalk and LightSail in AWS?
What are good real-world scenarios in which I should use each of these services?
They are all based on EC2, the AWS compute service that lets you create EC2 instances (virtual machines in the cloud).
Lightsail is packaged in a similar way to a Virtual Private Server (VPS), making it easy for anyone to start with their own server. It has a simplified management console, and many options come with default values tuned to maximize availability and security.
Elastic Beanstalk is a service for application developers that provisions an EC2 instance and a load balancer automatically. It creates the EC2 instance, installs an execution environment on the machine, and deploys your application for you (Elastic Beanstalk supports Java, Node, Python, Docker, and many others).
Behind the scenes, Elastic Beanstalk creates regular EC2 instances that you will see in your AWS Console.
And EC2 is the bare service that makes the others possible. If you choose to create an EC2 instance, you have to choose your operating system, manage your SSH key, install your application runtime, and configure security settings yourself. In exchange, you get full control of that virtual machine.
In simple terms:
EC2 - a virtual host built from an image, which you can use to install apps and have a machine that does whatever you like.
Lightsail - similar, but with a more user-friendly management option; good for small applications.
Beanstalk - an orchestration tool, which does all the work of creating an EC2 instance, installing your application and software, and freeing you from the manual tasks of setting up an environment.
More details at - https://stackshare.io/stackups/amazon-ec2-vs-amazon-lightsail-vs-aws-elastic-beanstalk
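To make the Beanstalk workflow concrete, here is a minimal sketch using the EB CLI, which is a separate tool from the AWS CLI (the application name, environment name, and platform string are placeholders):

# Initialise the app, create the environment (EC2 instance(s) plus load balancer), deploy
eb init my-app --platform python-3.8 --region us-east-1
eb create my-env
eb deploy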
I don't know if my scenario is typical in any way, but here are the differences that were critical for me. I'm happier with EC2 than with EB:
EC2:
just a remote Linux machine with shell (command-line) access
traceable application-level errors; easy to see what is wrong with your application
you can use the AWS web console or the AWS command-line tool to manage it
you will need to repeat the setup steps if you want to reproduce the same environment
some effort needed to get proper shell access (e.g. restricting the security-group rule to your IP only)
no load balancer provided by default
Elastic Beanstalk
a service that creates an EC2 instance with a runtime for the programming language of your choice (e.g. Python, PHP, etc.)
runs one application on that machine (for Python, an application.py; a minimal sketch follows this list)
applications are uploaded as a .zip file; extra effort is needed to deploy from your git source
you need to get used to the environments-vs-applications mental model
application-level errors are hidden deep in the server logs, which are downloaded via a separate menu
can be managed from the web console, but also needs another CLI tool in addition to the AWS CLI (you end up installing two CLI tools)
provides a load balancer and other server-level services, taking away the manual setup
great for scaling stable applications, not so much for trial-and-see experimentation
probably more expensive than just an EC2 instance
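Regarding the application.py convention above: the Beanstalk Python platform looks for a WSGI callable named application. A minimal sketch of preparing such a .zip for upload, using Flask purely as an example (file contents are illustrative):

cat > application.py <<'EOF'
from flask import Flask
application = Flask(__name__)  # Beanstalk looks for a callable named "application"

@application.route("/")
def index():
    return "Hello from Elastic Beanstalk"
EOF
echo "flask" > requirements.txt  # Beanstalk installs dependencies from this file
zip app.zip application.py requirements.txt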
Amazon EC2 is a virtual host; in other words, it is a server you can SSH into to configure your application, install dependencies, and so on, just like on your local machine. EC2 offers dozens of AMIs (Amazon Machine Images: roughly, the operating system image your EC2 server runs, so you can have EC2 running a Linux-based OS or Windows). To summarize, it is a great option if you need a machine fully in your hands.
Amazon Lightsail is a simple tool for deploying and managing applications with minimal server management. You will find it very practical if your application is small; for instance, it fits perfectly if you use WordPress or another CMS.
AWS Elastic Beanstalk is an orchestration tool. You can manage your application within that service; it is a higher-level service than Lightsail.
If you still do not understand the differences, you can take a look at each service's overview.
There is also an answer on Quora.
I have spent only 10 minutes on these technologies, but here is my first take.
EC2 - a bare-bones service. It gives you a server with an OS, and that is it. Nothing else is installed on it, so if you need a web server (nginx) or Python, you'll need to install it yourself.
Beanstalk - helps you deploy your applications. Say you have a Python/Flask application that you want to run on a server. Traditionally you'd have to build the app, move the deployable package to another machine with a web server installed, and then move the package into some directory of that web server. Beanstalk does all of this for you automatically.
LightSail - I haven't tried it, but it seems to be an even simpler option for creating a server with a pre-installed OS and software.
In summary, these services make application deployment easier by pre-configuring the server/EC2 instances with the required software packages and security policies (e.g. port numbers).
I am not an expert so I could be wrong.

Linux Docker containers on Windows Server 2016 build server in AWS

We are building an application using .NET Core 2.0 and Docker. We run the containers in Linux mode. This works well on our local Windows 10 machines with Docker for Windows, and on AWS ECS. Now we are trying to add a CI pipeline to deploy to AWS ECS.
We set this server up using an AWS Windows Server 2016 AMI running on EC2, but we quickly learned that we cannot use Docker for Windows there. We can use the AWS Windows with Containers AMI, which has Docker installed, but it does not support Linux containers.
Is there something we can do to get the machine to support Linux containers? We don't actually have to run Linux containers on the machine; we are just using it to build images, upload them to ECR, and use the ECS CLI to run containers. Do we just need to move our build server to a Linux AMI to support this? (Most of the team is lighter on Linux knowledge, and a GUI is nice on a build/tools server.)
Any thoughts? We are using Jenkins as our CI tool. I have seen the hack to get Linux containers running here, but I don't want to use a hack on an important server in our development process.
I ran into the same problem. The only way I could get Docker for Windows working with Linux containers in EC2 was to use an i3.metal instance type. The Docker for Windows installation finished without a hitch and I was able to run Linux containers on the Windows Server 2016 i3.metal instance without issue.
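For the build-and-push half of the pipeline described in the question, the build server only needs the Docker client and the AWS CLI; a minimal sketch using AWS CLI v2 syntax (the account ID, region, and repository name are placeholders):

# Build the image, authenticate Docker to ECR, then tag and push
docker build -t myapp:latest .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest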

How can I run Docker in an AWS Windows Server environment?

Things I've tried:
Docker Toolbox on Windows Server 2012 R2, with Hyper-V disabled to allow VirtualBox. I cannot enable virtualization, as that is set in the physical BIOS.
Installed Docker EE on a Windows Server 2016 with Containers EC2 instance. It installed correctly and the daemon is running, BUT I can't pull a single image besides hello-world:nanoserver. So I hunted down the windowsservercore and nanoserver images, but those still don't work because they are out of date. The repo from the frizzm person at Docker.com doesn't work when you try to pull it either.
Started again with a fresh Windows Server 2016 instance, disabled Hyper-V, and installed Toolbox. That doesn't work either.
How do I run Docker in a Windows Server environment in AWS?
All of the videos/tutorials make it seem so simple, but I sure can't get it to work. I'm at a loss.
You don't actually need to install Docker for Windows (or its predecessor, Docker Toolbox) in order to utilize Docker on Windows Server.
First, it's important to understand that there are two different types of containers on the Windows Server 2016 platform: Windows Containers and Hyper-V containers.
Windows Containers - run directly on top of the Windows Server kernel; no virtual machines are used here
Hyper-V Containers - virtual machine containers, each with its own kernel
There's also a third option that runs on top of Hyper-V called Linux Containers on Windows (LCOW), but we won't get into that, as it appears you're specifically asking about Windows containers.
Here are a couple options you can look at:
Bare Metal Instances on AWS
If you absolutely need to run Windows Hyper-V containers on AWS, or want to run Linux containers with Docker for Windows, you can provision the i3.metal EC2 instance type, which is a bare-metal instance. You can deploy Windows Server 2016 onto it, install Hyper-V, and install Docker for Windows. This will give you the ability to run Linux containers (under a Hyper-V Linux guest), Hyper-V containers, and Windows containers.
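On the i3.metal instance, enabling Hyper-V before installing Docker for Windows is a single PowerShell step; a minimal sketch (run as Administrator; note that the instance reboots):

# Enable the Hyper-V role and its management tools, then restart
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart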
ECS-Optimized AMI
Amazon provides an Amazon Machine Image (AMI) that you can deploy EC2 instances from, which contains optimizations for the Amazon Elastic Container Service (ECS). ECS is a cloud-based clustering service that enables you to deploy container-based applications across an array of worker nodes running in EC2.
Generally you'll use ECS and the ECS-optimized AMI together to build a production-scale cluster to deploy your applications onto.
Windows Server 2016 with Containers AMI
There's also a "Windows Server 2016 with Containers" AMI available, which isn't the same as the ECS-optimized AMI, but does include support for running Docker containers on Windows Server 2016. All you have to do is deploy a new EC2 instance, using this AMI, and you can log into it and start issuing Docker commands to launch Windows containers. This option is most likely the easiest option for you, if you're new to Windows containers.
Standard EC2 instances do not allow for nested virtualization (EC2 instances are themselves virtual machines); the bare-metal types mentioned above are the exception. Docker for Windows uses Hyper-V under the hood, and Docker Toolbox uses VirtualBox under the hood, so neither of those solutions is viable on a standard instance.
Even if you were able to run them on a Windows EC2 instance, the performance wouldn't be great, because Docker for Windows mounts files into the Docker VM via Samba, which is not very fast.
If you want to run Linux containers, you should probably run them on Linux. It's very fast to get set up, and all of the Docker commands that you're used to with Docker for Windows should still work.
It is possible to drive Docker from Windows by pointing it at a remote Linux host. Run the following command to set it up:
docker-machine create --driver amazonec2 aws01
What this command does is create a new EC2 Linux instance and connect your Docker client to it. When Docker commands are run on your Windows instance, they are actually sent to the Linux instance, executed there, and the results are returned to the Windows EC2 instance.
Here's Docker's documentation on it. I hope this helps.
https://docs.docker.com/machine/drivers/aws/#aws-credential-file
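After the machine is created, you point your local Docker client at it; a sketch of the usual follow-up (aws01 is the machine name chosen above):

# Print the environment variables that target the remote daemon
docker-machine env aws01
# Apply them as the output instructs, then verify the connection:
docker ps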
I know this contradicts your question a little, but you might also consider running it on one of the new EC2 Mac instances, which are bare metal. Worked for me.

Kubernetes and vSphere, AWS

I am a bit late to the party and am just delving into containers now. At work we use vSphere as our virtualization platform, but are likely to move to "the cloud" (AWS, GCP, Heroku, etc.) at some point in the somewhat-near future.
Ideally, I'd like to build our app containers such that I could easily port them from running on vSphere nodes to AWS EC2 instances.
So I ask:
Are all Docker containers created equal? Could I port a Docker container of our own creation to the AWS container service with zero config?
I believe Kubernetes helps map containers to the virtualization resources they need. Any chance this runs on AWS as well, or does AWS ECS take care of this for me?
Kubernetes is designed to run on multiple cloud platforms (as well as bare metal). See Getting started on AWS for AWS specific instructions.
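On the portability question: the same image runs unchanged on any conformant Kubernetes cluster, whether it sits on vSphere nodes or EC2 instances. A minimal sketch, assuming kubectl is pointed at a working cluster (the deployment name and image are placeholders):

# Deploy the container and expose it behind a load balancer
kubectl create deployment myapp --image=myregistry/myapp:1.0
kubectl expose deployment myapp --port=80 --type=LoadBalancer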