Move AWS EC2 Instance to another account

I created an Amazon AWS EC2 instance under my account and set up a website/FTP server on it. Now a new partner wants to move the instance under his company's account so his company can pay the bills.
We can't change the instance IP because banks in the region are communicating with the server.
How can I move the instance to a different account without having to change anything on the configuration?

The short answer is: no, you cannot move a running instance from one account to another, unless of course AWS Technical Support has some magic available behind the curtains.
You can, however, create an AMI from this instance and share that AMI with other users/accounts. Refer to: http://aws.amazon.com/articles/530

To share or migrate EC2 instances from a source account to a target account, follow these steps:

1. Create a custom Amazon Machine Image (AMI) from the instance you want to share or migrate. Be sure to include all required EBS data volumes in the AMI. Note: data stored on instance store volumes isn't preserved in AMIs and won't be on the instance store volumes of the instances that you launch from the AMI.
2. Share the AMI with the target account using either the EC2 console or the AWS Command Line Interface (CLI).
3. From the target account, find the AMI using the EC2 console or the AWS CLI.
4. Launch a new instance from the shared AMI in the target account. Note: the private IP address of VPC instances will be different in the new account unless you specifically set it during launch.

Related information:
Changing the Encryption State of Your Data
AWS CLI Command Reference (EC2)
Source: Transfer Amazon EC2 Instance
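A minimal AWS CLI sketch of those steps; all instance, AMI, snapshot, subnet, and account IDs below are placeholders, and the snapshot-sharing step only applies to unencrypted EBS-backed AMIs:

# Source account: create an AMI from the running instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "migration-ami" --no-reboot

# Source account: share the AMI and its backing EBS snapshot(s) with the target account
aws ec2 modify-image-attribute --image-id ami-0aaa1111bbbb2222c --launch-permission "Add=[{UserId=222233334444}]"
aws ec2 modify-snapshot-attribute --snapshot-id snap-0aaa1111bbbb2222c --attribute createVolumePermission --operation-type add --user-ids 222233334444

# Target account: launch a new instance from the shared AMI
aws ec2 run-instances --image-id ami-0aaa1111bbbb2222c --instance-type t3.medium --subnet-id subnet-0aaa1111bbbb2222c --key-name my-key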

This is not possible.
AWS Support does not have access to copy Amazon EC2 resources or manipulate any configuration options in AWS accounts. You can't separate an AWS account from an Amazon.com account or transfer resources between AWS accounts. It is possible to manually migrate Amazon EC2 resources from one account to another by completing the steps described here.
Source : https://aws.amazon.com/premiumsupport/knowledge-center/account-transfer-ec2-instance/

I'm working with several hundred EC2 instances across several AWS regions and accounts. You can move an EC2 instance to another AWS account; however, you can't move the Elastic IP, and it takes about 16 steps with the AWS CLI if you want to migrate tags and clone the security groups. I wrote a detailed post with the whole process at https://medium.com/#gmusumeci/how-to-move-an-ec2-instance-to-another-aws-account-e5a8f04cef21.

There are more than 10 steps involved in doing the move. I would suggest you use infrastructure as configuration (Terraform or CloudFormation) or infrastructure as real code (Pulumi or CDK).
However, if you want to give a go at a nice tool, I found one called KopiCloud. Please feel welcome to try it and leave your comments below. It's good if you need to move instances in a quick lift-and-shift scenario.

You can rethink the design of having the banks in the region communicate with your server via IP address.
If the banks communicate using DNS names, you have much more flexibility to move your servers around.
You can also achieve improvements in high availability and resiliency by moving to DNS-based connections.
So a plan might be:
Set up a DNS record for your existing server
Get the banks who connect to your server to connect via the DNS name
Set up your new server in the other account (other answers describe this)
Cut the banks over to your new server in the new account simply by updating the DNS record (see the sketch below)
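A rough sketch of that final cutover with the AWS CLI, assuming the record lives in Route 53; the hosted zone ID, record name, and IP address are placeholders:

# Point the DNS record at the new server's IP, with a short TTL for a fast cutover
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "gateway.example.com",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{"Value": "203.0.113.10"}]
    }
  }]
}'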
I haven't tried load balancing across accounts, but that may be another option, which would give you HA as a bonus. By registering your current instance and the new instance in another account as targets with a load balancer, and getting your clients to connect to the load balancer, you could cut over to the other account. The only part I haven't tried is registering targets in different accounts, but it looks like this should be possible with an AWS Network Load Balancer.
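If you explore that route, a hedged sketch of registering a target by private IP from the other account's (peered) VPC might look like this; it assumes the target group was created with target type "ip", and the ARN and address are placeholders:

# Register a target in the peered VPC by its private IP address
# (AvailabilityZone=all is typically required when the IP is outside the load balancer's own VPC)
aws elbv2 register-targets --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/banking-tg/0123456789abcdef --targets Id=10.0.2.15,AvailabilityZone=all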

Related

Retrieve existing resource data using AWS Cloudformation

I need to retrieve existing data/properties of a given resource using an AWS CloudFormation template. Is it possible? If it is, how can I do it?
Example 1:
Output: Security Group ID which allows traffic on port 22
Example 2:
Output: Instance ID which uses the default VPC
AWS CloudFormation is used to deploy infrastructure from a template in a repeatable manner. It cannot provide information on any resources created by any methods outside of CloudFormation.
Your requirements seem more relevant to AWS Config:
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
An AWS resource is an entity you can work with in AWS, such as an Amazon Elastic Compute Cloud (EC2) instance, an Amazon Elastic Block Store (EBS) volume, a security group, or an Amazon Virtual Private Cloud (VPC).
Using your examples, AWS Config can list EC2 instances and any resources that are connected to the instances, such as Security Groups and VPCs. You can easily click through these relationships and view the configurations. It is also possible to view how these configurations have changed over time, such as:
When an EC2 instance changed state (e.g. stopped, running)
When rules changed on Security Groups
Alternatively, you can simply make API calls to AWS services to obtain the current configuration of resources, such as calling DescribeInstances to obtain a list of Amazon EC2 instances and their configurations.
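For the two examples in the question, direct CLI calls could look roughly like this; the VPC ID in the last call is a placeholder you would take from the previous call's output:

# Example 1: security groups with a rule that opens port 22
aws ec2 describe-security-groups --filters Name=ip-permission.from-port,Values=22 Name=ip-permission.to-port,Values=22 --query "SecurityGroups[].GroupId"

# Example 2: instances in the default VPC
aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query "Vpcs[].VpcId"
aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-0123456789abcdef0 --query "Reservations[].Instances[].InstanceId"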

AWS EC2 Auto scaling philosophy

Hello there!
I'm at the beginning of my investigation of AWS, but one of the concepts is unclear to me, so I want to ask for help understanding the functionality.
I have a web application on PHP installed on EC2.
My application is heavily loaded and I need to use a load balancer for the best performance. How to set this up is clear. The code of my application is hosted on GitLab.
After setting up EC2 and the load balancer, I want to use Auto Scaling.
So, I need to use an Auto Scaling group.
Main question: what should I do next? As I understand it, I need to somehow create a new instance, but I need a correct image for the instance with all dependencies and source code.
Code auto-deploy is also a big question. When a new feature is merged, I need to run the GitLab pipeline and somehow deliver the code to the new EC2 instance.
So what do I need to read and investigate to be able to deploy new code to a new EC2 instance automatically? Does AWS provide tools for this?
Thank you for the help with my journey.
Regards,
Mavis.
You can begin with this link, https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-from-instance.html, which explains how to create an Auto Scaling group based on an EC2 instance.
In short, you can generate an AMI (Amazon Machine Image) from your current EC2 instance (the one hosting PHP) and create a launch configuration/launch template for your Auto Scaling group.
Next, you may add a load balancer to distribute traffic to these instances; you can associate it with target groups and your Auto Scaling group (https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html). A rough CLI sketch of these two steps follows below.
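A minimal sketch with the AWS CLI, assuming the AMI already exists; the names, subnet IDs, and target group ARN are placeholders:

# Launch template that points at the AMI baked from the PHP instance
aws ec2 create-launch-template --launch-template-name php-app-lt --launch-template-data '{"ImageId":"ami-0123456789abcdef0","InstanceType":"t3.small"}'

# Auto Scaling group spread over two subnets, attached to the load balancer's target group
aws autoscaling create-auto-scaling-group --auto-scaling-group-name php-app-asg \
  --launch-template LaunchTemplateName=php-app-lt,Version='$Latest' \
  --min-size 2 --max-size 6 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-0aaa1111bbbb2222c,subnet-0ddd3333eeee4444f" \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/php-app-tg/0123456789abcdef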
For the auto deploy, you can automate your pipeline to create a new launch configuration, or pull the latest version of your PHP code from S3 (or another location) in the user data part. You may use GitLab CI or CodeDeploy, which is a perfect candidate for this kind of work.
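For the user data approach, a minimal sketch might look like this, assuming the AMI already contains Apache/PHP and the AWS CLI, the GitLab pipeline uploads a build artifact to S3, and the instance role can read that bucket (bucket name and paths are placeholders):

#!/bin/bash
# Pull the latest build of the PHP app from S3 on boot and deploy it
aws s3 cp s3://my-php-app-releases/latest.tar.gz /tmp/latest.tar.gz
tar -xzf /tmp/latest.tar.gz -C /var/www/html
# Restart the web server so the new code is served (assumes Apache on Amazon Linux)
systemctl restart httpd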
Be aware also that the Auto Scaling group is stateless (it creates/terminates instances), so you must store your images and assets in a shared location like S3, a database, or EFS, because if an instance is unhealthy or terminated by the ASG you may lose data.

Update terraform resource after provisioning

So I recently asked a question about how to provision instances that depend on each other. The answer I got was that I could instantiate the 3 instances, then have a null resource with a remote-exec provisioner that would update each instance.
It works great, except that in order to work, my instances need to be configured to allow SSH. And since they are in a private subnet, I first need to allow SSH on a public instance that will then bootstrap my 3 instances. This bootstrap operation requires allowing SSH on 4 instances that really don't need it once the bootstrap is complete. This is not that bad, as I can still restrict the traffic to a known IP/subnet, but I still thought it was worth asking if there are ways to avoid this problem.
Can I update the security group of running instances in a single Terraform plan? Example: instantiate 3 instances with security group X, provision them through SSH, then update the instances with security group Y, thus disallowing SSH. If so, how? If not, are there any other solutions to this problem?
Thanks.
Based on the comments.
Instead of ssh, you could use AWS Systems Manager Run Command:
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale.
This would require making your instances recognized by AWS Systems Manager (SSM), which requires three things:
Network connectivity to the SSM service. Since your instances are in a private subnet, they either have to reach the SSM service through a NAT gateway or through VPC interface endpoints for SSM.
SSM Agent installed and running. This is usually not an issue, as most official AMIs on AWS already have it set up.
An instance role with the AmazonSSMManagedInstanceCore AWS managed policy attached.
Since Run Command is not supported by Terraform, you either have to use local-exec to run the command through the AWS CLI, or invoke it from a Lambda function using aws_lambda_invocation.
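For the local-exec route, the underlying AWS CLI call could look roughly like this; the instance IDs and script path are placeholders:

# Run a bootstrap script on the private instances via SSM instead of SSH
aws ssm send-command --instance-ids i-0aaa1111bbbb2222c i-0ddd3333eeee4444f i-0ggg5555hhhh6666i \
  --document-name "AWS-RunShellScript" \
  --parameters 'commands=["/opt/bootstrap.sh"]' \
  --comment "bootstrap the cluster"
# The returned CommandId can then be polled with: aws ssm list-command-invocations --command-id <id> --details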

How can I switch an AWS EC2 server instance from one country to another

I am using an AWS server instance and I have deployed my application in the AWS India region. Because of latency issues, I wish to switch to AWS China. I read about Amazon Machine Images (AMIs), but I am not sure whether that would work.
Is cross-region copying of an AWS AMI supported in China?
I am a little bit confused, because the link below mentions AWS EBS, so my question is: do we use the AWS EBS service to create an AMI image?
https://www.amazonaws.cn/en/ebs/
You can create an AMI from the existing instance and launch a new instance from that AMI in the new region. You need to copy the AMI to the new region before launching the new instance.
For detailed steps see this blog.
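Between standard AWS regions, that copy is a single CLI call; the image ID, name, and regions below are placeholders. Note that the AWS China regions sit in a separate partition with separate accounts, so a direct copy into a China region may not be possible this way.

# Copy the AMI from the source region (e.g. India) to the destination region
aws ec2 copy-image --source-image-id ami-0123456789abcdef0 --source-region ap-south-1 --region ap-southeast-1 --name "my-app-ami-copy"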

How to understand Amazon ECS cluster

I recently tried to deploy docker containers using task definition by AWS. Along the way, I came across the following questions.
How to add an instance to a cluster? When creating a new cluster using the Amazon ECS console, how do I add a new EC2 instance to the new cluster? In other words, when launching a new EC2 instance, what config is needed in order to allocate it to a user-created cluster under Amazon ECS?
How many ECS instances are needed in a cluster, and what are the factors?
If I have two instances (ins1, ins2) in a cluster, and my webapp and db containers are running on ins1: after I updated the running service (through http://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html), I can see the newly created service running on ins2 before the old one on ins1 is drained. My question is that after my webapp container is allocated to another instance, the access IP address becomes the other instance's IP. How can I prevent this, or what is the solution to keep accessing the webapp at the same IP address? And not only the IP: what about the data after changing to a new instance?
These are really three fairly different questions, so it might be best to split them into different questions here accordingly. I'll try to provide an answer regardless:
Amazon ECS Container Instances are added indirectly; it's the job of the Amazon ECS Container Agent on each instance to register itself with the cluster created and named by you, see concepts and lifecycle for details. For this to work, you need to follow the steps outlined in Launching an Amazon ECS Container Instance, be it manually or via automation. Be aware of step 10:
By default, your container instance launches into your default cluster. If you want to launch into your own cluster instead of the default, choose the Advanced Details list and paste the following script into the User data field, replacing your_cluster_name with the name of your cluster.
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
You only need a single instance for ECS to work as such, because the cluster itself is managed by AWS on your behalf. This wouldn't be sufficient for high availability scenarios though:
Because the container hosts are just regular Amazon EC2 instances, you would need to follow AWS best practices and spread them over two or three Availability Zones (AZ) so that a (rare) outage of an AZ doesn't impact your cluster, because ECS can migrate your containers to a different host instance (provided your cluster has sufficient spare capacity).
Many advanced clustering technologies that facilitate containers have their own service orchestration layers and usually require an odd number (>= 3) of (service) instances for a high availability setup. You can read more about this in the section Optimal Cluster Size within Administration, for example (see also Running CoreOS with AWS EC2 Container Service).
This refers back to the high availability and service orchestration topics already mentioned in 2.; more precisely, you are facing the problem of service discovery, which becomes even more prevalent when using container technologies in general and microservices in particular:
To get familiar with this, I recommend Jeff Lindsay's Understanding Modern Service Discovery with Docker for an excellent overview specifically focused on your use case.
Jeff also maintains a containerized version of the increasingly popular Consul, which makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface (see Running Consul in Docker and gliderlabs/docker-consul).