AWS AutoScaling with Static IPs - amazon-web-services

Is it possible to do AutoScaling with static IPs in AWS? The newly created instances should either have a pre-defined IP or pick one from a pool of pre-defined IPs.
We are trying to set up ZooKeeper in production with 5 ZooKeeper instances. Each one should have a static IP, which is hard-coded in the Kafka AMI/databag that we use. It should also support AutoScaling, so that if one of the ZooKeeper nodes goes down, a new one is spawned with the same IP or with an IP from the pool. For this we have decided to go with one ZooKeeper instance per AutoScaling group, but the problem is with the IP.
If this is the wrong way, please suggest the right way. Thanks in advance!

One method would be to maintain a user data script on each instance, and have each instance assign itself an Elastic IP from a set of EIPs reserved for this purpose. This user data script would be referenced in the ASG's Launch Configuration and would run on launch.
Say the user data script is called "/scripts/assignEIP.sh". Using the AWS CLI, it would consult the pool to see which EIPs are available and which are already in use, then assign one of the available EIPs to the instance.
For ease of IP management, you could keep the pool of IPs in a simple text properties file on S3, and have the instance download and consult that list when it starts.
Keep in mind that each instance will need to be assigned an IAM instance profile that allows it to describe and associate EIPs.
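A minimal sketch of such a script, written with boto3 rather than the raw CLI (the bucket, key and pool contents are illustrative; the instance ID is read from the instance metadata service):

```python
import urllib.request
import boto3

# Hypothetical location of the EIP pool file: one allocation ID per line.
POOL_BUCKET = "my-config-bucket"
POOL_KEY = "zookeeper/eip-pool.txt"

def current_instance_id():
    # IMDSv1 call; adjust for IMDSv2 (token-based) if your instances require it.
    url = "http://169.254.169.254/latest/meta-data/instance-id"
    return urllib.request.urlopen(url, timeout=2).read().decode()

def assign_free_eip():
    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2")

    # Download the pool of allocation IDs from S3.
    pool = s3.get_object(Bucket=POOL_BUCKET, Key=POOL_KEY)["Body"].read().decode().split()

    # Find an allocation from the pool that is not yet associated with an instance.
    addresses = ec2.describe_addresses(AllocationIds=pool)["Addresses"]
    free = [a for a in addresses if "InstanceId" not in a]
    if not free:
        raise RuntimeError("No free EIPs left in the pool")

    ec2.associate_address(
        AllocationId=free[0]["AllocationId"],
        InstanceId=current_instance_id(),
    )

if __name__ == "__main__":
    assign_free_eip()
```

Note that two instances launching at the same moment could race for the same EIP; a simple mitigation is to re-check the association after a short delay and retry with another address from the pool.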

Related

Is it possible to attach volume and elastic IP automatically to Spot ec2 instance?

I would like to set up an AWS Launch Template, or just a persistent Spot request, and I need to automatically attach my specific volume.
The main idea: the Spot Instance will process data and store it in a separate volume. When the Spot Instance dies, another one should be requested automatically (built from an image with predefined software) and data processing should continue automatically (again, storing results in my second volume).
But I can't set this up in the AWS console, so it looks like it is not possible. Am I wrong? Is it possible some other way?
The same goes for the IP address: I would like every "version" of the Spot Instance (after recreation, for example) to have the same IP address.
It is possible to attach and detach an Elastic IP using the AWS CLI to achieve what you want.
However, there are other possible workarounds in case you do not want to script the AWS CLI:
Using Route 53 you can define an A record with a TTL of 60 seconds, if you can accept a few minutes of transition. That way you use a domain name to reach the underlying instance instead.
Set up an ALB and forward requests to the EC2 fleet.
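For the first approach, here is a minimal boto3 sketch of a startup script the replacement Spot Instance could run to re-attach the data volume and re-associate the Elastic IP (the volume and allocation IDs are illustrative):

```python
import urllib.request
import boto3

# Hypothetical identifiers for the resources that should follow the Spot Instance.
DATA_VOLUME_ID = "vol-0123456789abcdef0"
EIP_ALLOCATION_ID = "eipalloc-0123456789abcdef0"

def current_instance_id():
    url = "http://169.254.169.254/latest/meta-data/instance-id"
    return urllib.request.urlopen(url, timeout=2).read().decode()

ec2 = boto3.client("ec2")
instance_id = current_instance_id()

# Re-attach the data volume so processing can continue where it left off.
ec2.attach_volume(VolumeId=DATA_VOLUME_ID, InstanceId=instance_id, Device="/dev/xvdf")

# Re-associate the Elastic IP so the new instance keeps the same public address.
ec2.associate_address(AllocationId=EIP_ALLOCATION_ID, InstanceId=instance_id)
```

Keep in mind that an EBS volume can only be attached to an instance in the same Availability Zone, so the Spot request or Launch Template would need to pin the subnet accordingly.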

Trying to automatically register my EC2 instances in Route 53

I have approximately 40 Windows EC2 instances running at the moment. This number will start to grow substantially in the next few months. Each one is a t2.small Windows 2016 Server instance. Cost is starting to become an issue as the number increases. Each instance has its own Elastic IP address because when user Tom wants to access his machine he will use the DNS tom.mydomain.com.
tom.mydomain.com is registered in a Route53 hosted zone pointing to Elastic IP 22.33.44.55 which has been associated with Tom's EC2 instance.
The problem is that Tom only needs to use his machine 4 hours per day. When not using it, he simply shuts the machine down. But an Elastic IP pointing to a stopped instance costs almost as much per hour as a running t1.micro instance.
So what I want is that when Tom logs into the AWS console and starts his EC2 instance, it automatically registers itself in Route 53 against the name "tom.mydomain.com".
In short I want to do away with the need for Elastic IPs which are fast becoming a very substantial cost.
The tutorial Auto-Register EC2 Instance in AWS Route 53 looks like it does exactly what I want. The problem is that the scripting is for Linux, and I want to get it working on Windows. I have everything done down to step 6 in the tutorial but am stuck there. Has anyone got something similar working on Windows?
I would recommend:
Create a web-based front-end where your users can authenticate and request access to their Amazon EC2 instance
You could use Amazon Cognito for authentication and DynamoDB for data storage
Once the user authenticates, the service can:
Start their EC2 instance (if it was previously stopped)
Associate the instance's (random) public IP address with the user's domain name
Tell the user that the instance is now available
Users log in to the instance and do their work
You then have some mechanism (I'm not sure what) that detects that they no longer need the instance, and then Stops the instance to save costs
The above process avoids assigning IAM credentials to your users. While IAM credentials are important for staff members who work on your AWS infrastructure, they should not be assigned to end-users of your service.
The process also avoids assigning IAM permissions to each EC2 instance. While the instances themselves could call Route 53 to update a record for their domain name, this requires an IAM Role to be assigned to the EC2 instance. If your users have access to the instance itself, this would potentially open a security hole where they could call Route 53 with incorrect data, such as assigning other users' domain names to their own instance.
It's worth mentioning that the above recommendations mirror the way that Amazon WorkSpaces operates — users authenticate, their instance is started and after a period of non-use the instance is stopped.
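A minimal sketch of the start-and-register step such a service might perform, assuming boto3 (the instance ID, hosted zone ID and record name are illustrative placeholders):

```python
import boto3

def start_and_register(instance_id: str, record_name: str, hosted_zone_id: str):
    ec2 = boto3.client("ec2")
    route53 = boto3.client("route53")

    # Start the instance (no-op if already running) and wait until it is up.
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # Fetch the freshly assigned public IP address.
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    public_ip = reservations[0]["Instances"][0]["PublicIpAddress"]

    # UPSERT the A record so tom.mydomain.com points at the new address.
    route53.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": public_ip}],
                },
            }]
        },
    )
    return public_ip

# Example (illustrative IDs):
# start_and_register("i-0123456789abcdef0", "tom.mydomain.com", "Z0123456789ABCDEFGHIJ")
```

The service running this code needs ec2:StartInstances, ec2:DescribeInstances and route53:ChangeResourceRecordSets permissions; the end users themselves need none.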
I would recommend using a CloudFormation template. CloudFormation can create the EC2 instance and then attach it to the Route 53 record. So when Tom wants to use the EC2 instance, he has to launch the stack in CloudFormation. Once he is finished, he has to go back to CloudFormation and destroy the stack.
Yes, CloudFormation would be a recommended approach. You can try cloudkast, which is an online CloudFormation template generator. It makes the task of creating CloudFormation templates easy.

How to make an HTTP call reach all instances behind an Amazon AWS load balancer?

I have a web app which runs behind an Amazon AWS Elastic Load Balancer with 3 instances attached. The app has a /refresh endpoint to reload reference data. It needs to be run whenever new data is available, which happens several times a week.
What I have been doing is assigning a public address to each instance and doing the refresh on each one independently (using ec2-url/refresh). I agree with Michael's answer on a different topic that EC2 instances behind an ELB shouldn't allow direct public access. Now my problem is: how can I make an elb-url/refresh call reach all instances behind the load balancer?
It would be nice if I could collect the HTTP responses from the individual instances, but I don't mind doing the refresh blindly for now.
One way I'd solve this problem is by:
writing the data to an AWS S3 bucket
triggering an AWS Lambda function automatically from the S3 write
using the AWS SDK to identify the instances attached to the ELB from the Lambda function, e.g. using boto3 from Python or the AWS Java SDK
calling /refresh on the individual instances from the Lambda function (see the sketch after these steps)
ensuring that when a new instance is created (due to autoscaling or deployment), it fetches the data from the S3 bucket during startup
ensuring that the private subnets the instances are in allow traffic from the subnets attached to the Lambda
ensuring that the security groups attached to the instances allow traffic from the security group attached to the Lambda
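A minimal sketch of that Lambda handler, assuming a Classic ELB name and a plain HTTP /refresh endpoint on each instance (both illustrative):

```python
import urllib.request
import boto3

LOAD_BALANCER_NAME = "my-web-app-elb"  # illustrative name

def lambda_handler(event, context):
    elb = boto3.client("elb")   # Classic ELB API; use "elbv2" for ALB/NLB target groups
    ec2 = boto3.client("ec2")

    # Instances currently registered with the load balancer.
    described = elb.describe_load_balancers(LoadBalancerNames=[LOAD_BALANCER_NAME])
    instance_ids = [i["InstanceId"]
                    for i in described["LoadBalancerDescriptions"][0]["Instances"]]

    # Resolve their private IP addresses.
    reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]
    private_ips = [inst["PrivateIpAddress"]
                   for r in reservations
                   for inst in r["Instances"]]

    # Call /refresh on every instance and collect the status codes.
    results = {}
    for ip in private_ips:
        with urllib.request.urlopen(f"http://{ip}/refresh", timeout=30) as resp:
            results[ip] = resp.status
    return results
```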
The key wins of this solution are:
the process is fully automated from the instant the data is written to S3,
it avoids data inconsistency due to autoscaling/deployment,
it is simple to maintain (you don't have to hardcode instance IP addresses anywhere),
you don't have to expose the instances outside the VPC,
it is highly available (AWS ensures the Lambda is invoked on the S3 write; you don't have to worry about running a script on an instance and keeping that instance up and running).
Hope this is useful.
While this may not be possible given the constraints of your application and circumstances, it's worth noting that best-practice application architecture for instances running behind an AWS ELB (particularly if they are part of an Auto Scaling group) is to ensure that the instances are not stateful.
The idea is to make it so that you can scale out by adding new instances, or scale-in by removing instances, without compromising data integrity or performance.
One option would be to change the application to store the results of the reference data reload in an off-instance data store, such as a cache or database (e.g. ElastiCache or RDS), instead of in memory.
If the application can do that, then you only need to hit the refresh endpoint on a single server: it reloads the reference data, does whatever analysis and manipulation is required to store it efficiently in a fit-for-purpose way for the application, and writes it to the data store; all instances then have access to the refreshed data via the shared store.
While adding a round-trip to a data store increases latency, it is often well worth it for the consistency of the application. Under your current model, if one server lags behind the others in refreshing the reference data and the ELB is not using sticky sessions, requests via the ELB will return inconsistent data depending on which server they land on.
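A small sketch of that pattern, assuming an ElastiCache Redis endpoint and the redis-py client (the hostname and the reload function are placeholders):

```python
import json
import redis

# Hypothetical ElastiCache Redis endpoint shared by all instances.
cache = redis.Redis(host="refdata.abc123.0001.use1.cache.amazonaws.com", port=6379)

def handle_refresh():
    """Called on ONE instance via /refresh: rebuild the reference data and publish it."""
    reference_data = load_and_transform_reference_data()   # placeholder application step
    cache.set("reference-data", json.dumps(reference_data))

def get_reference_data():
    """Called by any instance when it needs the data: always reads the shared copy."""
    raw = cache.get("reference-data")
    return json.loads(raw) if raw else None

def load_and_transform_reference_data():
    # Placeholder for the application-specific reload/analysis work.
    return {"version": "2019-08-27", "rows": []}
```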
You can't make these requests through the load balancer, so you will have to open up the security group of the instances to allow incoming traffic from a source other than the ELB. That doesn't mean you need to open it up to all direct traffic, though. You could simply whitelist an IP address in the security group to allow requests from your specific computer.
If you don't want to add public IP addresses to these servers, then you will need to run something like a curl command on an EC2 instance inside the VPC. In that case you would only need to open the security group to allow traffic from some server (or group of servers) that exists in the VPC.
I solved it differently, without opening up new traffic in security groups or resorting to external resources like S3. It's flexible in that it will dynamically notify instances added through ECS or ASG.
The ELB's Target Group performs a periodic health check to ensure the instances behind it are live. The health check is a URL that your server responds on, and the endpoint can include a timestamp parameter for the most recent configuration. Every server in the Target Group receives the health-check ping within the configured interval. If the parameter in the ping changes, it signals a refresh.
A URL may look like:
/is-alive?last-configuration=2019-08-27T23%3A50%3A23Z
Above I passed a UTC timestamp of 2019-08-27T23:50:23Z
A service receiving the request will check if the in-memory state is at least as recent as the timestamp parameter. If not, it will refresh its state and update the timestamp. The next health-check will result in a no-op since your state was refreshed.
Implementation notes
If refreshing the state can take more time than the interval window or the Target Group health timeout, you need to offload it to another thread to prevent concurrent updates or outright service disruption, as the health checks need to return promptly; otherwise the node will be considered offline.
If you are using the traffic port for this purpose, make sure the URL is hard to guess. Anything publicly exposed can be subject to a DoS attack.
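A rough sketch of such a handler that incorporates the notes above, using Flask purely for illustration (the endpoint path mirrors the example URL; the actual refresh work is a placeholder):

```python
from datetime import datetime, timezone
import threading

from flask import Flask, request

app = Flask(__name__)

# Timestamp of the configuration currently loaded in memory.
state_loaded_at = datetime.min.replace(tzinfo=timezone.utc)
refresh_lock = threading.Lock()

def reload_configuration():
    # Placeholder for the real (possibly slow) refresh work.
    pass

@app.route("/is-alive")
def is_alive():
    requested = datetime.strptime(
        request.args.get("last-configuration", "1970-01-01T00:00:00Z"),
        "%Y-%m-%dT%H:%M:%SZ",
    ).replace(tzinfo=timezone.utc)

    # Only start a refresh if the requested configuration is newer and no refresh is running.
    if requested > state_loaded_at and refresh_lock.acquire(blocking=False):
        def worker():
            global state_loaded_at
            try:
                reload_configuration()
                state_loaded_at = requested
            finally:
                refresh_lock.release()
        # Run in the background so the health check returns promptly.
        threading.Thread(target=worker, daemon=True).start()

    return "OK", 200
```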
As you are using S3 you can automate your task by using the ObjectCreated notification for S3.
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-notification.html
You can install the AWS CLI and write a simple Bash script that reacts to that ObjectCreated notification, for example driven by a cron job that checks for newly created objects.
Add a condition to that script so that it curls "http://127.0.0.1/refresh" when it detects a new object in S3. That way the refresh happens automatically and you don't have to do it manually each time.
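The answer above leaves the notification plumbing open; one simple, cron-friendly variant is to poll the bucket for new objects instead and refresh when one appears. A sketch with boto3, where the bucket name and marker file are illustrative:

```python
import urllib.request
import boto3

# Hypothetical bucket holding the reference data; adjust names for your setup.
BUCKET = "my-refdata-bucket"
STATE_FILE = "/var/tmp/last-refdata-modified"

s3 = boto3.client("s3")

# Find the most recent object in the bucket.
objects = s3.list_objects_v2(Bucket=BUCKET).get("Contents", [])
latest = max((o["LastModified"].isoformat() for o in objects), default="")

# Compare with the marker from the previous run (run this from cron every few minutes).
try:
    with open(STATE_FILE) as f:
        previous = f.read().strip()
except FileNotFoundError:
    previous = ""

if latest and latest > previous:
    # New data arrived: refresh this instance and remember the marker.
    urllib.request.urlopen("http://127.0.0.1/refresh", timeout=60)
    with open(STATE_FILE, "w") as f:
        f.write(latest)
```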
I personally like the answer by @redoc, but wanted to give another alternative for anyone who is interested, which is a combination of that one and the accepted answer. Using S3 object-created events, you can trigger a Lambda, but instead of discovering the instances and calling them directly (which requires the Lambda to be in the VPC), you can have the Lambda use SSM (Systems Manager) to execute commands via a PowerShell or Bash document on EC2 instances targeted via tags. The document would then call 127.0.0.1/refresh, like the accepted answer does.
The benefit of this is that your Lambda doesn't have to be in the VPC, and your EC2 instances don't need inbound rules to allow traffic from the Lambda. The downside is that it requires the instances to have the SSM agent installed, which sounds like more work than it really is. There are AWS AMIs already set up with the SSM agent, but installing it yourself in the user data is very simple.
Another potential downside, depending on your use case, is that SSM uses an exponential ramp-up for simultaneous executions: if you're targeting 20 instances, it runs 1, then 2 at once, then 4, then 8, until they are all done or it reaches the maximum you set. This is because of its built-in error recovery; it doesn't want to break everything at once, like slowly putting your weight on thin ice.
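A minimal sketch of such a Lambda handler, assuming a tag key/value used for targeting (illustrative) and the standard AWS-RunShellScript document:

```python
import boto3

ssm = boto3.client("ssm")

def lambda_handler(event, context):
    # Triggered by the S3 object-created event; run the refresh on all tagged instances.
    response = ssm.send_command(
        Targets=[{"Key": "tag:Role", "Values": ["web-app"]}],   # illustrative tag
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["curl -fsS http://127.0.0.1/refresh"]},
        MaxConcurrency="50%",   # how aggressively SSM fans out across instances
        MaxErrors="1",
    )
    return response["Command"]["CommandId"]
```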
You could make the call multiple times in rapid succession to call all the instances behind the Load Balancer. This would work because the AWS Load Balancers use round-robin without sticky sessions by default, meaning that each call handled by the Load Balancer is dispatched to the next EC2 Instance in the list of available instances. So if you're making rapid calls, you're likely to hit all the instances.
Another option is that if your EC2 instances are fairly stable, you can create a Target Group for each EC2 Instance, and then create a listener rule on your Load Balancer to target those single instance groups based on some criteria, such as a query argument, URL or header.

How to stop an Amazon AWS EC2 instance?

I'm trying to write a script to stop several instances in our test environment on Friday and have them start back up on Monday, to save some cost.
Is there a way to stop instances by IP address (and not by instance ID), or some other way I don't know about? (The reason being that instance IDs may change if an instance has to be deleted and recreated.)
This is a zero-code solution:
Put your instances into Auto Scaling groups and add a shutdown and startup schedule (scheduled actions) on the Auto Scaling group. This can be done in the AWS console.
This can also be automated using the AWS CLI.
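For reference, a sketch of the same schedule created programmatically (boto3 here instead of the raw CLI; the group name and cron expressions are illustrative, with times in UTC):

```python
import boto3

autoscaling = boto3.client("autoscaling")
GROUP = "test-environment-asg"   # illustrative Auto Scaling group name

# Scale the group to zero every Friday evening...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP,
    ScheduledActionName="stop-for-weekend",
    Recurrence="0 18 * * 5",          # Fridays 18:00 UTC
    MinSize=0, MaxSize=0, DesiredCapacity=0,
)

# ...and bring it back every Monday morning.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP,
    ScheduledActionName="start-for-week",
    Recurrence="0 6 * * 1",           # Mondays 06:00 UTC
    MinSize=3, MaxSize=3, DesiredCapacity=3,
)
```

Note that scaling the group to zero terminates the instances rather than stopping them in place, so this suits stateless test instances launched from an AMI.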
Use EC2 tags to give your instances key/value tag pairs, then write a script using Boto which searches for instances with the right tags and stops them.
You could also use Boto to list instances matching a specific IP address and stop them that way.
But... IP addresses are dynamically assigned (unless you are using Elastic IPs). So why not make a note of the instance IDs when launching the instances, instead of the IP address?
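A minimal sketch of the tag-based approach with boto3 (the tag key/value is illustrative):

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances carrying the (illustrative) Environment=test tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["test"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instance_ids:
    # Stop (not terminate) so the instances keep their EBS volumes and instance IDs.
    ec2.stop_instances(InstanceIds=instance_ids)
```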

How to get the public DNS address of a second ec2 instance while inside the first instance

Building off of this question:
https://unix.stackexchange.com/questions/24355/is-there-a-way-to-get-the-public-dns-address-of-an-instance
I know how to get an EC2 instance's own public DNS address. What I need is a way for this instance to get the public DNS address of a second EC2 instance.
The idea is that I will have ~50 instances running, one or two of which will be spot instances that are constantly running. All of the other worker instances need to know the spot (master) instance's public DNS name to connect to it within my application. How can I do this?
On another note, is there a way I can create a backup of my spot master instance? In case it fails, I would like another spot instance to immediately take its place, but my worker EC2 instances would have to update their information about the spot instance's public DNS address.
I think the only way to get the public DNS of your other instance is by using the command line interface or the Web API provided by Amazon.
The concrete command you need is ec2-describe-instances, which returns the public DNS name for each instance.
http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-DescribeInstances.html
Of course you can do the same through the Web API:
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html
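A small sketch of the same lookup with boto3, assuming the master instance is identified by a tag (tag key/value illustrative):

```python
import boto3

ec2 = boto3.client("ec2")

# Look up the running instance tagged as the master (illustrative tag).
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Role", "Values": ["master"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

master = reservations[0]["Instances"][0]
print(master["PublicDnsName"])
```

The worker instances running this lookup need an IAM role (or credentials) allowing ec2:DescribeInstances.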
Regarding the backup, you can back the spot instance with EBS (which is preferable) and then take snapshot backups. The snapshots are still triggered manually in the Amazon console (or again through the command line tools and Web API). Snapshots should be good for regular backups.
You can also use a service like http://www.skeddly.com/ to automate your EC2 snapshot backups.
If you want to back up the full AMI image of your spot instance, so you can re-create it from scratch at a later time or create multiple instances from the same image, go to the management console and do the following:
Click on Instances
Select the instance you want to create an AMI from
Click on "Actions" and select "Create Image"
Set the Image name and other info and save
An alternative is to use S3. When a spot instance comes up it will read its own public address and write it to a bucket in S3. The other instances will look up the bucket the first time they need it and use this value. If the spot instance goes down, the workers will poll the bucket periodically until a new spot instance comes up and updates the bucket.
Make sure to set the bucket to only allow authenticated access so only your applications can modify it.
This approach has a security advantage, as the VMs do not need access to your EC2 credentials. They only need access to a specific S3 bucket.
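A minimal sketch of both sides of that approach (bucket and key names are illustrative; the master reads its own public hostname from the instance metadata service):

```python
import urllib.request
import boto3

# Hypothetical bucket/key the master publishes its address to.
BUCKET = "my-cluster-config"
KEY = "master-public-dns"

s3 = boto3.client("s3")

def publish_master_address():
    """Run on the spot/master instance at startup."""
    url = "http://169.254.169.254/latest/meta-data/public-hostname"
    public_dns = urllib.request.urlopen(url, timeout=2).read().decode()
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=public_dns.encode())

def lookup_master_address():
    """Run on a worker whenever it needs (or loses) the master's address."""
    return s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read().decode()
```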