As Amazon Web Services mentions:
An Amazon Machine Image (AMI) is a special type of pre-configured
operating system and virtual application software which is used to
create a virtual machine within the Amazon Elastic Compute Cloud
(EC2). It serves as the basic unit of deployment for services
delivered using EC2.
So an AMI is a kind of virtual machine that we can use as a server to host our applications. My questions are:
What are the limitations of an AMI compared to a normal server?
Can we install any software on an AMI?
An AMI is not a running server; it is more like a "backup of your server" on disk. Using this backup, you can bring up running servers instantly.
The concept of imaging a server is used in every IT company. Suppose a company gives new joiners Windows 7 laptops with a few pre-installed applications. Instead of configuring the laptop for every new joiner, the company creates an image once and simply writes that image onto each new laptop before handing it over.
The same kind of image, when created on Amazon Web Services, is called an AMI. An AMI is just an image of a server: you first get your machine ready (with an operating system and all the software you need on it) and then create an image of it, which is called an AMI.
To answer your questions:
1. I'm not sure which limitations you mean, but as such there is no limitation.
2. Yes, you can install any software. To make things clear: you could even install a virus on the server and then create an AMI from it.
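To make that workflow concrete, here is a minimal boto3 sketch of it (configure an instance, image it, launch copies); the instance ID and names are hypothetical placeholders:

# Minimal sketch: turn an already-configured EC2 instance into an AMI,
# then launch identical servers from it. All IDs/names are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot the configured server into a reusable image (the AMI).
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # your configured instance
    Name="my-configured-server-v1",
    Description="OS plus all required software pre-installed",
)

# Wait until the image is ready before launching from it.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Bring up as many identical servers as needed from that image.
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)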
Related
I have a large piece of server software (3 GB of files pre-installed) that runs on EC2. The software installs a full app server or interface server that communicates with the front-end desktop GUIs and the database. The software was originally designed years ago to be installed on premises through a visual step-by-step installer off a USB drive. This installer ensures that the software is set up with the proper configuration, networking, connection to the database, etc. Every client gets one or more EC2 instances dedicated to handling their workload.
Moving into a cloud-minded paradigm, what is a better way to handle creating many servers, for many clients, all with different configurations of this software? When a server goes down, or another is needed for load, what's a "cloud" practice to spin up a new server and install the same configuration of software on this server?
I have multiple ideas including:
Store the software files in an S3 bucket and pull them to the EC2 instances as necessary. A config file for each customer would also be updated and stored in S3. The EC2 instance would then start the software from a PowerShell script that applies the proper configuration (roughly sketched below).
Store the software in the AMI of the EC2 instance exactly as configured. This means that any time a server is created with a new client configuration, we create a new AMI after installation.
Create a Lambda function that can handle all the different configuration parameters. When invoked, it would take care of spinning up a server, moving the software onto it, and installing it with the proper configuration.
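For illustration, the pull step in the first option might look roughly like the following boto3 sketch; the bucket, keys, and paths are made-up placeholders:

# Rough sketch of option 1: at boot, pull the software bundle and the
# per-customer config from S3. Bucket, keys, and paths are placeholders.
import boto3

s3 = boto3.client("s3")

s3.download_file("my-software-bucket", "releases/app-server.zip",
                 r"C:\install\app-server.zip")
s3.download_file("my-software-bucket", "configs/customer-42.json",
                 r"C:\install\config.json")
# A PowerShell install script would then unpack the bundle and apply the config.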
Any guidance or references to white papers would be appreciated.
Thank you!
I would:
prepare one single AMI that has the software installed along with the required base configuration
store the customers' custom config files in an S3 bucket
create a launch template for each customer; the template would reference the AMI and fetch the customer-specific config file from the bucket
create an EC2 Auto Scaling group for each customer, each one using that customer's launch template (a minimal sketch of these two steps follows below)
A single AMI with the config file fetched automatically allows you to scale easily as your customer base grows. The alternative, one AMI per customer, would quickly become unmanageable.
The Auto Scaling group allows you to scale seamlessly if a customer requires more servers, and it takes care of replacing unhealthy instances automatically.
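Here is a minimal boto3 sketch of the launch-template and Auto Scaling steps for one customer; the AMI ID, bucket, subnet, and names are placeholders rather than real values:

# Sketch of the proposed setup for one customer. The user data fetches the
# customer's config at boot; all IDs and names below are placeholders.
import base64
import boto3

USER_DATA = """<powershell>
aws s3 cp s3://my-config-bucket/customer-42/config.json C:\\app\\config.json
# start the pre-installed software here
</powershell>"""

ec2 = boto3.client("ec2")
asg = boto3.client("autoscaling")

ec2.create_launch_template(
    LaunchTemplateName="customer-42",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # the single shared AMI
        "InstanceType": "m5.large",
        # launch templates expect user data pre-encoded as base64
        "UserData": base64.b64encode(USER_DATA.encode()).decode(),
    },
)

asg.create_auto_scaling_group(
    AutoScalingGroupName="customer-42",
    LaunchTemplate={"LaunchTemplateName": "customer-42", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    VPCZoneIdentifier="subnet-0123456789abcdef0",   # hypothetical subnet
)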
Just to give you some context... I'm new to the AWS world and all the services it provides.
I have a legacy application and I need to share some binaries with a client, so I was trying to use an EC2 instance (Amazon Linux AMI) with Samba and map it onto a local Windows machine.
I was able to establish a connection with another EC2 instance (same VPC), just as a tryout. But I wasn't able to do so with my Windows machine, or even with a Linux VM I have.
The inbound rules for this proof-of-concept EC2 instance were fully open (all traffic allowed).
Main question
Is this possible to do? Can I share a file system between an EC2 instance and a local machine (over the internet)?
Just saying:
S3 storage isn't an option.
And in my region FSx isn't available yet, and for latency reasons it's a no-go.
Please ask as many questions as you want; I'll try to answer them as fast as I can.
Kind regards.
TL;DR - it's possible, but there's no 'simple' solution (in my opinion).
I thought of two possible solutions that you could implement; here we go...
1: AWS EFS, AWS Direct Connect and Docker
A possible solution would be using AWS Elastic File System (EFS), AWS Direct Connect and a Docker Linux container.
Drawbacks
If this is the first time you've encountered the above AWS services or Docker, it's going to be a bit of a journey to learn about them
EFS pricing - it's not so cheap, and you also need to consider the inbound and outbound traffic; it's best to use the calculator on the pricing page
EFS performance - if you only share files, it should be okay, but if you expect to get high speeds, remember that it's not an EBS volume; for higher throughput you need to pay more money
AWS Direct Connect pricing - you also need to take that into consideration
Security - I'm not sure how sensitive your data is, but you need to make sure you create a very strict VPC, with Security Group and network ACL rules - read about the VPC security best practices
Steps to implement the solution
Follow the walkthrough Create and Mount a File System On-Premises with AWS Direct Connect and VPN; also, here are the steps for combining it with Docker
(Optional) To make things a bit easier - for Windows to "support" a Linux file system, you can use Git Bash for Windows. If you're not sure how to install 3rd-party apps in Git Bash (like aws-vault), then read this blog post
Create an EFS file system in AWS and mount it to your EC2 instance; read more about it here (a boto3 sketch of this step appears after this list)
Use AWS Direct Connect to connect to your VPC from your local Windows machine
Install Docker for Windows on your local machine
Create a Docker Volume, and mount the same EFS to that volume - a good example for this step
Test it - SSH to your EC2 instance, create a file on the EFS volume and then check in your local Docker Linux container that this file appears on the EFS volume
I omitted the security steps because it's up to you how strict you want your solution to be.
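To make the EFS step a bit more concrete, here is a minimal boto3 sketch for creating the file system and a mount target; the subnet and security group IDs are placeholders, and the mount on the instance itself is done with the NFS/EFS mount helper rather than the API:

# Sketch: create an EFS file system and a mount target in your VPC.
# The subnet and security group IDs below are placeholders.
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="shared-binaries",   # idempotency token, any unique string
    PerformanceMode="generalPurpose",
)

efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",       # hypothetical subnet
    SecurityGroups=["sg-0123456789abcdef0"],   # must allow NFS (TCP 2049)
)
print("File system ID:", fs["FileSystemId"])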
2: Using S3 as a shared file-system
You can try out the tool s3fs-fuse, but you'll still need to use a Docker Linux container since you're on Windows. I haven't tested it, but it looks promising. You can read this blog post; it's a step-by-step tutorial on how to do it, and it also covers some other possible solutions.
I have my Docker image for microservice-x built on a Raspberry Pi.
My Dockerfile looks like this:
FROM raspbian/stretch
....
This image runs on the RPi. However, if I wish to launch the Docker image on an AWS instance, which Amazon Machine Image (AMI) type should I use?
For the AMI, I recommend using the AWS ECS Docker-optimized AMI (ARM):
Amazon ECS-Optimized Amazon Linux 2 AMI (ARM)
Amazon EC2 Container Service makes it easy to manage containers at
scale by providing a centralized service that includes programmatic
access to the complete state of the containers and Amazon EC2
instances in the cluster, schedules containers in the proper location,
and uses familiar Amazon EC2 features like security groups, Amazon EBS
volumes, and IAM roles.
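If you want to fetch the current ID of that AMI programmatically, AWS publishes it as a public SSM parameter. A minimal boto3 sketch (the parameter path below is the documented one for this AMI family, but verify it for your region):

# Sketch: look up the current ECS-optimized Amazon Linux 2 (ARM) AMI ID
# via the public SSM parameter AWS publishes for it.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

param = ssm.get_parameter(
    Name="/aws/service/ecs/optimized-ami/amazon-linux-2/arm64/recommended/image_id"
)
print("ECS-optimized ARM AMI:", param["Parameter"]["Value"])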
For the instance type, you can use Amazon EC2 A1 instances:
Amazon EC2 A1 instances deliver significant cost savings for scale-out
and Arm-based applications such as web servers, containerized
microservices, caching fleets, and distributed data stores that are
supported by the extensive Arm ecosystem. A1 instances are the first
EC2 instances powered by AWS Graviton Processors that feature 64-bit
Arm Neoverse cores and custom silicon designed by AWS.
You can find more in this article
Docker & ARM demonstrated the integration of ARM capabilities into
Docker Desktop Community for the first time. Docker & ARM unveiled
go-to-market strategy to accelerate Cloud, Edge & IoT Development.
These two companies have planned to streamline the app development
tools for cloud, edge, and internet of things environments built on
ARM platform. The tools include AWS EC2 A1 instances based on AWS’
Graviton Processors (which feature 64-bit Arm Neoverse cores). Docker
in collaboration with ARM will make new Docker-based solutions
available to the Arm ecosystem as an extension of Arm’s
server-tailored Neoverse platform, which they say will let developers
more easily leverage containers — both remote and on-premises which is
going to be pretty cool.
Building ARM-based Docker images on Docker Desktop made possible using buildx
Amazon EC2 Systems Manager adds Raspbian OS and Raspberry Pi support
I forgot to mention that the Ubuntu Server 16.04 AMI supports both x86 and ARM architectures.
I have AWS EC2 instances with Ubuntu 16.04; how do I migrate them to Microsoft Azure?
I have their Amazon Machine Images (AMIs) on Amazon Web Services. Is there a way I could migrate the images to Azure, or at least the instance configuration? I would prefer to copy the image I have created on Amazon Web Services (with an Ubuntu 16.04 base) to Azure.
I have seen this documentation: https://learn.microsoft.com/en-us/azure/site-recovery/migrate-tutorial-aws-azure but it does not specify Ubuntu support, and it copies the instance. Can I copy the image instead, and can this be done with Ubuntu 16.04?
As you can see, all the supported OS versions are shown there. So, unfortunately, migrating Ubuntu from AWS to Azure is not supported; for Linux, only certain Red Hat and CentOS versions are supported.
As for the image, it's possible to export the VM to a VHD file and upload it to Azure, but the documentation only covers Windows VMs. You can get the whole procedure from Move a Windows VM from Amazon Web Services (AWS) to an Azure virtual machine. You can try it for Linux, but I'm not sure about it.
If you have any more questions, please let me know. Or, if you think this answer is OK, you can accept it :-)
I suggest you strongly consider implementing the base instance configuration as a user-data or init script. This startup script would install all the required software and configuration settings on the instance.
That way you can simply run the script on the Azure instance, and it will work exactly as it does on the AWS instance.
This approach is a best practice for managing the baseline configuration of any instance. You can also consider configuration-management tools like Ansible to do the same.
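As a minimal sketch of the idea, assuming a plain Ubuntu base image and a stand-in install script (both placeholders), launching with user data via boto3 could look like this; the same script body can be reused with Azure's custom-data mechanism:

# Sketch: launch an instance whose user-data script installs and configures
# the software at first boot. AMI ID and script contents are placeholders.
import boto3

USER_DATA = """#!/bin/bash
apt-get update -y
apt-get install -y nginx   # stand-in for your real software
# ... fetch configs, set up users, services, etc.
"""

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # plain Ubuntu 16.04 base AMI (placeholder)
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,                # boto3 base64-encodes this for you
)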
I am trying to sell my product on AWS Marketplace and got stuck on the registration form at the question "How is your product fulfilled?", which gives me the options "AMI" and "SaaS".
What is the difference between AMI and SaaS?
AMI means that you just provide an OS image (stored under your account) that your users can "clone" by starting their own instances:
AMI is the acronym for Amazon Machine Image. An Amazon Machine Image (AMI) is an image of a server -- including an Operating System and often additional software -- which runs on AWS.
SaaS means that you start and control the instances yourself, and users use the software running on those servers without having access to the internal server environment.
How do AMI and SaaS compare? Amazon answers that question explicitly:
Both AMI and SaaS (Software as a Service) product listings are from trusted vendors. AMI products run within a customer's AWS account. You retain more control over software configuration and over the servers that run the software, but you also have additional responsibilities regarding server configuration and maintenance.