I would like to create an AMI image based on my current EC2 Linux instance. There are a few things that bother me, though, and I haven't found an accurate answer to my questions on the web.
My current EC2 instance has:
two private interfaces, eth0 and eth0:1
two Elastic IP addresses, each pointing to one of the above interfaces
What I can't find an answer to is: will a new instance launched from this image be created in the same manner, somehow duplicating my current settings? Is it even a problem if it is cloned 1:1? That would actually suit me better from a load-balancing standpoint.
On the other hand, the private IP addressing can't be duplicated, because then I wouldn't be able to tell the instances apart when connecting over SSH. Does anyone have experience creating images from EC2 instances and can give me a hint about how this works?
When launching an instance from an Amazon Machine Image (AMI), the disks will contain an exact copy of the disk at the time that the AMI was created.
However, other attributes might be different when launching a new instance, such as the number of Elastic Network Interfaces, and of course the IP address will most likely be different. Therefore, you will need to request similar settings from EC2 when the instance is launched.
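For example, a launch that reproduces a second private IP and re-attaches two Elastic IPs could look roughly like this with the AWS CLI (all IDs and addresses below are placeholders, not values from your account):

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --subnet-id subnet-0123456789abcdef0 \
    --secondary-private-ip-address-count 1

# Point each Elastic IP at one of the instance's private IPs:
aws ec2 associate-address --allocation-id eipalloc-11111111 \
    --instance-id i-0123456789abcdef0 --private-ip-address 10.0.0.10
aws ec2 associate-address --allocation-id eipalloc-22222222 \
    --instance-id i-0123456789abcdef0 --private-ip-address 10.0.0.11

Whether the second private IP shows up as eth0:1 or as an extra address on eth0 depends on how the OS inside the image is configured.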
Does anyone know the correct AWS services needed to launch multiple instances of a Docker image on unique publicly accessible IP addresses?
Every path I have tried with Amazon's ECS seems to be set up for scaling instances locked away in a private network and/or behind a single IP.
The container has instances of a web application running on port 8080, but ideally the end user will connect via port 80.
The objective is to be able to launch around 20 identical copies of the container at once, with each accessible via its own public IP.
There is no need for the public IP to be known in advance, as on startup, I patch the data as needed with the current IP address.
The containers live in Amazon's ECR, and a couple of unique instances are running on standalone EC2 machines. I was trying to use ECS to launch multiple instances at will, but I can only launch one at a time before getting errors about conflicting ports, because things are not isolated enough.
You can do this with ECS:
Change your task definition to use the awsvpc networking mode.
Change your service network configuration to auto-assign a public IP.
If you're deploying onto EC2 instances, I think you may be limited in the number of either network interfaces or public IP addresses that you can use. Fargate does not have this restriction.
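For example, with Fargate the service ends up looking roughly like this (cluster, task, subnet, and security group names are placeholders; the task definition itself must declare "networkMode": "awsvpc"):

aws ecs create-service \
    --cluster my-cluster \
    --service-name my-web-app \
    --task-definition my-web-app:1 \
    --desired-count 20 \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"

With awsvpc, each task gets its own ENI, so the container port you expose in the task definition (8080 in your case) is reachable directly on that task's public IP with no host-port conflicts.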
I'm using a CloudFormation stack that deploys 3 EC2 VMs. Each needs to be configured to be able to discover the other 2, either via IP or hostname, doesn't matter.
Amazon's private internal DNS seems very unhelpful, because it's based on the IP address, which can't be known at provisioning time. As a result, I can't configure the nodes with just what I know at CloudFormation stack time.
As far as I can tell, I have a couple of options. All of them seem to me more complex than necessary - are there other options?
Use Route53, set up a private DNS hosted zone, make an entry for each of the VMs which is attached to their network interface, and then by naming the entries, I should know ahead of time the private DNS I assign to them.
Stand up yet another service to have the 3 VMs "phone home" once initialized, which could then report back to them who is ready.
Come up with some other VM-based shell magic, and do something goofy like using nmap to scan the local subnet for machines alive on a certain port.
On other clouds I've used (like GCP) when you provision a VM it gets an internal DNS name based on its resource name in the deploy template, which makes this kind of problem extremely trivial. Boy I wish I had that.
What's the best approach here? (1) seems straightforward, but requires people using my stack to have extra permissions they don't really need. (2) is extra resource usage that's kinda wasted. (3) Seems...well goofy.
Use Route53, set up a private DNS hosted zone, make an entry for each of the VMs which is attached to their network interface, and then by naming the entries
This is the best solution, but there's a simpler implementation.
Give each of your machines a "resource name".
In the CloudFormation stack, create an AWS::Route53::RecordSet resource that associates a hostname based on that "resource name" to the EC2 instance via its logical ID.
Inside your application, use the resource-name-based hostname to access the other instance(s).
An alternative may be to use an Application Load Balancer, with your application instances in separate target groups. The various EC2 instances then send all traffic through the ALB, so you only have one reference that you need to propagate (and it can be stored in the UserData for the EC2 instance). But that's a lot more work.
This assumes that you already have the private hosted zone set up.
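For reference, the record resource inside the stack could look something like this sketch (the logical IDs, zone reference, and domain name are assumptions, not a drop-in template):

  NodeARecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZone    # your existing private hosted zone (or a parameter)
      Name: node-a.internal.example.com.
      Type: A
      TTL: '300'
      ResourceRecords:
        - !GetAtt NodeA.PrivateIp             # NodeA is the logical ID of the EC2 instance

Repeat the same pattern for the other two instances, and the three hostnames are known at template-authoring time.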
I think what you are talking about is known as service discovery.
If you deploy the EC2 instances in the same subnet in the same VPC, with the same security group allowing the port they want to communicate over, they will be "discoverable" to each other.
You can then take this a step further. If autoscaling is enabled on the group and machines die and respawn, they can write their IPs into a registry (e.g. DynamoDB) so that the other machines know where to find them.
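As a rough sketch (the table name, key layout, and use of the instance metadata endpoint are assumptions), a boot-time registration could look like:

# Run at boot (e.g. from user data): record this instance's private IP
# in a DynamoDB table that the other machines can query.
# (The instance profile needs dynamodb:PutItem on that table.)
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
aws dynamodb put-item \
    --table-name service-registry \
    --item "{\"service\": {\"S\": \"my-app\"}, \"instance_id\": {\"S\": \"$INSTANCE_ID\"}, \"private_ip\": {\"S\": \"$PRIVATE_IP\"}}"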
For our project we need a static IP binding to our Google Cloud VM instance due to IP whitelisting.
Since it's a preemptible VM in a managed instance group, it will be terminated once in a while.
However, when it terminates I see in the operations log compute.instances.preempted directly followed by compute.instances.repair.recreateInstance with the note:
Instance Group Manager 'xxx' initiated recreateInstance on instance 'xxx'. Reason: instance's intent is RUNNING but instance's status is STOPPING.
After that, a delete and an insert operation follow in order to restore the instance.
The documentation states:
You can simulate an instance preemption by stopping the instance.
In that case the IP address stays attached when the VM is started again.
A) So my question: is it possible to have the instance group manager stop and start the VM in the event of preemption, instead of recreating it? Recreating means that the static IP gets detached and needs to be manually re-attached each time.
B) If option A is not possible, how can I attach the static IP address automatically so that I don't have to attach it manually when the VM is recreated? I'd rather not have an extra NAT VM instance to take care of this problem.
Thanks in advance!
I figured out a workaround to this (specifically, keeping a static IP address assigned to a preemptible VM instance between recreations), with the caveat that your managed instance group has the following properties:
Not autoscaling.
Max group size of 1 (i.e. there is only ever meant to be one VM in this group)
Autohealing is default (i.e. only recreates VMs after they are terminated).
The steps you need to follow are:
Reserve a static IP.
Create an instance template, configured as preemptible.
Create your managed group, assigning your template to the group.
Wait for the group to spin up your VM.
After the VM has spun up, assign the static IP that you reserved in step 1 to the VM.
Create a new instance template derived from the VM instance via gcloud (see https://cloud.google.com/compute/docs/instance-templates/create-instance-templates#gcloud_1).
View the newly created instance template in the Console, and note that you see your External IP assigned to the template.
Update the MiG (Managed Instance Group) to use the new template, created in step 6.
Perform a proactive rolling update on the MiG using the Replace method.
Confirm that your VM was recreated with the same name, the disks were preserved (or not, depending on how you configured the disks in your original template), and the VM has maintained its IP address.
Regarding step 6, my gcloud command looked like this:
gcloud compute instance-templates create vm-template-with-static-ip \
--source-instance=source-vm-id \
--source-instance-zone=us-east4-c
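For steps 1 and 5, the commands are roughly as follows (address, VM, region, and zone names are placeholders; the access config name may be "External NAT" on your VM):

# Step 1: reserve a regional static external IP.
gcloud compute addresses create my-static-ip --region us-east4

# Step 5: swap the VM's ephemeral external IP for the reserved one.
STATIC_IP=$(gcloud compute addresses describe my-static-ip --region us-east4 --format='value(address)')
gcloud compute instances delete-access-config source-vm-id --zone us-east4-c --access-config-name "external-nat"
gcloud compute instances add-access-config source-vm-id --zone us-east4-c --address "$STATIC_IP"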
Almost goes without saying, this sort of setup is only useful if you want to:
Minimize your costs by using a single preemptible VM.
Not have to deal with the hassle of turning on a VM again after it's been preempted, ensuring as much uptime as possible.
If you don't mind turning the VM back on manually (and possibly not being aware that it has been shut down for who knows how long) after it has been preempted, then do yourself a favor: don't bother with the MiG and just stand up the single VM.
Answering your questions:
(A) It is not possible at the moment, and I am not sure if it ever will be. By design, preemptible VMs are deleted, either to make room for normal VMs (if there are capacity constraints in the given zone) or on a regular schedule, to differentiate them from normal VMs. In the latter case preemption might look like a stop/start event, but in the former it may take a substantial amount of time before the VM is recreated.
(B) At the moment there is no good way to achieve this in general.
If you have the special case where your group has only one instance, you can hardcode the IP address in the instance template.
Otherwise, at the moment the only solution I can think of (other than using a load balancer) is to write a startup script that attaches the NAT IP.
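A rough sketch of such a startup script (the reserved address name, region, and access config name are assumptions, and the VM's service account needs permission to modify instances):

#!/bin/bash
# On boot, re-attach the reserved external IP to this (recreated) VM.
NAME=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/name)
ZONE=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/zone | awk -F/ '{print $NF}')
IP=$(gcloud compute addresses describe my-static-ip --region us-east4 --format='value(address)')
gcloud compute instances delete-access-config "$NAME" --zone "$ZONE" --access-config-name "external-nat" || true
gcloud compute instances add-access-config "$NAME" --zone "$ZONE" --address "$IP"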
I've found one way to ensure that all VMs in your network have the same outgoing IP address: using Cloud NAT you can assign a static IP which all VMs will use. There is a downside, though:
GCP forwards traffic using Cloud NAT only when there are no other matching routes or paths for the traffic. Cloud NAT is not used in the following cases, even if it is configured:
You configure an external IP on a VM's interface. If you configure an external IP on a VM's interface, IP packets with the VM's internal IP as the source IP will use the VM's external IP to reach the Internet. NAT will not be performed on such packets. However, alias IP ranges assigned to the interface can still use NAT because they cannot use the external IP to reach the Internet. With this configuration, you can connect directly to a GKE VM via SSH, and yet have the GKE pods/containers use Cloud NAT to reach the Internet.
Note that making a VM accessible via a load balancer external IP does not prevent a VM from using NAT, as long as the VM network interface itself does not have an external IP address.
Removing the VM's external IP also prevents direct SSH access to the VM, even SSH access from the Cloud Console itself. The quote above mentions a load balancer as an alternative; another option is a bastion host, but that doesn't directly solve access from, for example, Kubernetes/kubectl.
If that's no problem for you, this is the way to go.
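Setting it up is only a handful of commands; roughly (network, router, region, and address names are placeholders):

# Reserve the static IP that all egress traffic will use.
gcloud compute addresses create nat-ip --region europe-west1

# Create a Cloud Router plus a NAT config that uses the reserved address for all subnet ranges.
gcloud compute routers create nat-router --network default --region europe-west1
gcloud compute routers nats create nat-config \
    --router nat-router --region europe-west1 \
    --nat-external-ip-pool nat-ip \
    --nat-all-subnet-ip-ranges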
One solution is to let the instances have dynamically chosen ephemeral IPs, but set the group as the target of a load balancer with a static IP. This way, even when instances are created or destroyed, the LB acts as a frontend, keeping the IP constant over time.
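As a sketch, for a regional network load balancer in front of a managed instance group that could look like this (address, pool, rule, group, region, and port are placeholders):

# Reserve the frontend IP and create a target pool for the backends.
gcloud compute addresses create lb-ip --region us-east4
gcloud compute target-pools create my-pool --region us-east4

# Point the managed instance group at the pool, so recreated VMs rejoin automatically.
gcloud compute instance-groups managed set-target-pools my-mig --target-pools my-pool --zone us-east4-c

# Expose the pool on the reserved static IP.
gcloud compute forwarding-rules create my-rule --region us-east4 \
    --ports 80 --address lb-ip --target-pool my-pool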
We do continuous integration from Jenkins, and have Jenkins deploy to an EC2 instance. This EC2 instance exports an NFS share of the deployed code to EC2 processing nodes. The processing nodes mount the NFS share.
Jenkins needs to be able to "find" this code-sharing EC2 instance and scp freshly-built code, and the processing nodes need to "find" this code-sharing EC2 instance and mount its NFS share.
These communications happen over private IP space, with our on-premise Jenkins communicating with our EC2 in a Direct Connect VPC subnet, not using public IP addresses.
Is there a straightforward way to reliably "address" (by static private IP address, hostname, or some other method) this code-sharing EC2 that receives scp'd builds and exports them via NFS? We determine the subnet at launch, of course, but we don't know how to protect against changes of IP address if the instance is terminated and relaunched.
We're also eagerly considering other methods for deployment, such as the new EFS or S3, but those will have to wait a little bit until we have the bandwidth for them.
Thanks!
-Greg
If only a single instance acts as this "code-sharing" instance at any given time, you can assign an Elastic IP to it after you've launched it. This will give you a fixed public IP that you can target.
Elastic IPs stay reserved and static until you release them. Keep in mind that they cost money while they are allocated but not associated with a running instance.
Further on you can use SecurityGroups to limit access to the instance.
In the end, we created & saved a network interface, assigning one of our private IPs to it. When recycling the EC2, and making a new one in the same role, we just assign that saved interface (with its IP) to the new EC2. Seems to get the job done for us!
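For anyone following the same route, a sketch of that approach with the AWS CLI (subnet, security group, IP, and IDs are placeholders):

# One-time: create a network interface with a fixed private IP and keep it around.
aws ec2 create-network-interface \
    --subnet-id subnet-0123456789abcdef0 \
    --private-ip-address 10.0.1.50 \
    --groups sg-0123456789abcdef0 \
    --description "stable scp/NFS endpoint"

# Each time the instance is replaced: attach the saved interface to the new instance.
aws ec2 attach-network-interface \
    --network-interface-id eni-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device-index 1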
I've followed these instructions to duplicate an AWS, EBS-backed EC2 instance and I'm running into a snag.
This is exactly what I've done:
Created an AMI from the original instance (instance panel: actions>create image)
Launched the AMI as instance using the current keys (AMI panel: launch)
Created a new elastic IP and associated it with the new instance.
Problem: Everything seems fine, but I lost connectivity to the original instance. Traceroutes to the original IP start dropping once they reach Amazon. The instance doesn't seem to be reachable, even though the address shows in my panel as associated with the original instance, and likewise the original instance points to the correct IP address.
To troubleshoot, I have now:
Deleted the new AMI
Deleted the new instances
No change. What am I doing wrong? How do I properly duplicate an instance that I can then point to a different IP?
Thank you,
J
It seems that you did nothing wrong when setting up a new instance.
I don't know what you had in that instance, but one idea I have is that when you create an AMI from an instance, the default behavior of AWS is to reboot the instance:
Amazon EC2 powers down the instance before creating the AMI to ensure that everything on the instance is stopped and in a consistent state during the creation process. If you're confident that your instance is in a consistent state appropriate for AMI creation, you can tell Amazon EC2 not to power down and reboot the instance. Some file systems, such as xfs, can freeze and unfreeze activity, making it safe to create the image without rebooting the instance.
Maybe your web-server does not start on system start-up?
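As an aside, if you ever want to create an image without that reboot, the CLI supports it (instance ID and image name are placeholders); only do this if you know the filesystem is in a consistent state:

aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "copy-of-original" \
    --no-reboot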