How to stop Elastic Beanstalk from giving me a new Elastic IP - amazon-web-services

I have a Rails app deployed successfully with Elastic Beanstalk, but each time I run git aws.push, the end result is a new instance with a new Elastic IP, which is not the one I've assigned to my domain name.
So I have to go through the rigmarole of associating the old Elastic IP with the new instance, or alternatively changing the DNS to point to the new Elastic IP and then, of course, deleting the unused Elastic IP so I'm not charged by Amazon.
Can this new Elastic IP creation be prevented in a configuration?

If you use a load-balanced environment, your domain should be pointing to the load balancer, so I assume you are on a single-instance environment. In this case, you can use .ebextensions .config files and the AWS CLI to automate the DNS record change (see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html). Another alternative would be to launch the environment in a VPC and attach an ENI with a fixed IP to the instance, which would also avoid DNS caching issues.
But considering the ELB costs, I would not go that far; just launch a load-balanced environment with a single instance and register that ELB in DNS (an ALIAS record, if you are using Route 53).
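To make the first suggestion more concrete, here is a rough sketch (not the exact Elastic Beanstalk configuration) of the kind of AWS CLI command a container_commands entry in an .ebextensions .config file could run after each deployment. The hosted zone ID and record name are placeholders, and the instance profile would need permission for route53:ChangeResourceRecordSets.

```bash
#!/bin/bash
# Sketch: upsert a Route 53 A record to this instance's current public IP.
# Assumes IMDSv1 instance metadata is reachable and the AWS CLI is installed.
PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Hosted zone ID and record name below are placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{"Value": "'"$PUBLIC_IP"'"}]
      }
    }]
  }'
```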

Related

Will my running EC2 instance drop its current public IP if I associate an Elastic IP to it?

I am running an EC2 instance with a public IP. This IP is not elastic, so if I were to restart the instance, I would lose it.
When I created the instance, I did not know about this, so I configured my domain name to point to that IP as if it were set in stone.
Now I realize that I risk my app becoming unreachable if the IP changes after a restart.
What would be the correct procedure to assign an Elastic IP to this instance without downtime?
There will not be much downtime; it will be a couple of seconds. Just get the new Elastic IP and point the DNS to it. Or, if you want no downtime at all, put a load balancer in front of your instance and point the domain to the load balancer; then there will be no downtime.
The public IP that is auto-assigned to your instance will change once you stop and start your instance. Creating and assigning the Elastic IP won't cause any issues, but the domain name configuration will have to be changed. Was the domain name configured through AWS or another service provider?
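For reference, a minimal CLI sketch of the allocate-and-associate step described above (the instance ID is a placeholder, and the credentials used need the ec2:AllocateAddress and ec2:AssociateAddress permissions):

```bash
# Allocate a new Elastic IP in the VPC scope and capture its allocation ID
ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
  --query 'AllocationId' --output text)

# Associate it with the running instance (instance ID is a placeholder)
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id "$ALLOC_ID"
```

After the association, update the DNS record to point at the new Elastic IP; the brief downtime mentioned above is just the gap between the IP change and DNS propagation.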

Assign a static IP in AWS

We all know that we can assign an Elastic IP to an EC2 instance. However, when we rebuild the environment in Elastic Beanstalk, the IP still changes, since the old instance is terminated and a new instance is created. Is there any way we can assign a "real" static IP so that it wouldn't change even when the environment is rebuilt in Elastic Beanstalk? Thanks in advance.
From Using Elastic Beanstalk with Amazon VPC:
For single-instance environments, Elastic Beanstalk assigns an Elastic IP address (a static, public IP address) to the instance so that it can communicate directly with the Internet.
For load-balancing, auto-scaling environments, you should always communicate via the Elastic Load Balancer, which is referenced by DNS name.
If you require a fixed IP address for whitelisting, there are a few choices:
Route outbound traffic to the remote service via a NAT Gateway -- it can use a fixed Elastic IP address
Route traffic via a proxy in your VPC -- again, it can use a fixed Elastic IP address
Given that you have an auto-scaled environment, it doesn't necessarily make sense to allocate IP addresses to each individual instance. However, if you know the maximum number of instances that will be created, you could create Elastic IP addresses for your EC2 instances and re-associate them to instances when they are re-created. (You could potentially do this via a startup script.)
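A sketch of the startup-script idea mentioned above, assuming the Elastic IP was allocated ahead of time and the instance profile allows ec2:AssociateAddress (the allocation ID and region are placeholders):

```bash
#!/bin/bash
# Run at boot (e.g. from user data) so a replacement instance claims the
# pre-allocated Elastic IP. Assumes IMDSv1 metadata is reachable.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Re-associate the fixed Elastic IP with this instance, taking it over
# from the old instance if it is still attached there.
aws ec2 associate-address \
  --instance-id "$INSTANCE_ID" \
  --allocation-id eipalloc-0123456789abcdef0 \
  --allow-reassociation \
  --region us-east-1
```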
I agree with John. But just in case you need an EIP anyway (probably to SSH to the server): one workaround is to go to EC2 --> Elastic IPs --> Allocate new address. This way you are reserving a fixed EIP for your account. Now you can manually associate this EIP with any of your EC2 instances.
The problem with this approach is that you always have to manually associate the EIP.

OpsWorks app domains not resolving after adding elastic ip

I have an issue that I have been trying to work out for a while now. I am experimenting with AWS and thinking of moving sites over, but I can't get DNS to work with OpsWorks apps. I have a PHP / RDS stack that I have a few apps in.
These were working great except for the issue of OpsWorks instances having a dynamic public DNS name that changes upon instance reboot. I don't want to have to change my DNS records in Route53 every time that happens, so I implemented an EIP, registered it with the instance, and registered it with OpsWorks. I also added rules to the security group the EC2 instance uses in the default VPC so it accepts incoming HTTP requests.
Now, when I add an A record to my DNS zone that points to the EIP, and add my domain in the OpsWorks app settings, my domain does not resolve in the browser. What am I missing?
OpsWorks does very little to manage DNS externally. All DNS management should be done through Route53.
To start, make sure you have your nameserver (NS) record properly configured to reference your domain in your hosted zone, and also make sure that whatever DNS provider you're using (e.g. name.com, etc) is configured to point to those DNS servers.
Also, regarding this point:
I don't want to have to change my DNS records in Route53 every time that happens, so I implemented an EIP, registered it with the instance, and registered it with OpsWorks.
You should really be using an elastic load balancer for this, not an elastic IP. You can associate an elastic load balancer with your OpsWorks stack so that any instances launched within the OpsWorks stack will be associated with that elastic load balancer. The additional benefit is that you can have multiple servers hosting your application as you scale.
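For completeness, attaching an existing load balancer to an OpsWorks layer can also be done from the CLI; this is a minimal sketch assuming a Classic Load Balancer has already been created, with the load balancer name, layer ID, and region as placeholders:

```bash
# Attach an existing Classic Load Balancer to an OpsWorks layer, so that
# instances launched in the layer are registered with it automatically.
aws opsworks attach-elastic-load-balancer \
  --elastic-load-balancer-name my-app-elb \
  --layer-id 12345678-1234-1234-1234-123456789012 \
  --region us-east-1
```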

DevOps, DNS and Public IP

I have a DevOps automation environment. Each successful build (web app) in Jenkins triggers the creation of an EC2 (Linux) instance in AWS, which is set to receive a public IP, and the app gets deployed on that instance. I'm calling the web application using the instance's public IP. I need to mask the IP and call the app by a custom name. I have created a subdomain on Route 53, subdomain.abc.com. I have three sets of web apps and want to call them one.subdomain.abc.com, two.subdomain.abc.com, etc.
Since we have a different VM each time, I'm not sure if an EIP is an option.
Can someone please suggest a solution ?
Many thanks in advance.
If you are using just one Amazon EC2 instance for each app, then for each app you can:
Create an Elastic IP address that will be permanently used with the app
Create an A record in Amazon Route 53 to point to that Elastic IP address (e.g. app1.example.com)
When a new instance of the app is launched, re-associate the Elastic IP address with the new instance (assuming your old instance is then terminated)
If you wish to serve traffic from app1.example.com to several Amazon EC2 instances, then create an ALIAS record in Route 53 to point to an Elastic Load Balancer and register the EC2 instances with the load balancer.
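As an illustration of the ALIAS option, a rough CLI sketch (the Route 53 hosted zone ID, load balancer name, and record name are all placeholders):

```bash
# Look up the load balancer's DNS name and its canonical hosted zone ID
read -r ALB_DNS ALB_ZONE <<< "$(aws elbv2 describe-load-balancers \
  --names my-app-alb \
  --query 'LoadBalancers[0].[DNSName,CanonicalHostedZoneId]' --output text)"

# Upsert an ALIAS A record pointing app1.example.com at the load balancer
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app1.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "'"$ALB_ZONE"'",
          "DNSName": "'"$ALB_DNS"'",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```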

Does it make sense to have an Amazon Elastic Load Balancer with just one EC2 instance?

My question is simple. Does it make sense to have an Amazon Elastic Load Balancer (ELB) with just one EC2 instance?
If I understood correctly, an ELB will distribute traffic among EC2 instances. However, I have just one EC2 instance, so does it make sense?
On the other hand, I'm using Route 53 to route my domain requests (example.com and www.example.com) to my ELB, and I don't see how to route directly to my EC2 instance. So, do I need an ELB for routing purposes?
Using an Elastic Load Balancer with a single instance can be useful. It gives your instance a stable front-end and covers you in a failure scenario.
For example, if you use an auto-scaling group with min=max=1 instance, with an Elastic Load Balancer, then if your instance is terminated or otherwise fails:
auto-scaling will launch a new replacement instance
the new instance will appear behind the load balancer
your users' traffic will flow to the new instance
This will happen automatically: no need to change DNS, no need to manually re-assign an Elastic IP address.
Later on, if you need to add more horsepower to your application, you can simply increase your min/max values in your autoscaling group without needing to change your DNS structure.
It's much easier to configure SSL on an ELB than on an EC2 instance; it's just a few clicks in the AWS console. You can even hand-pick the SSL protocols and ciphers.
It's also useful that you can associate different security groups with the EC2 instance and the front-facing ELB. You can leave the ELB in the DMZ and keep your EC2 instance from being publicly accessible and potentially vulnerable to attacks.
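A rough CLI sketch of the min=max=1 setup described in this answer, assuming a launch template and a load balancer target group already exist (all names, ARNs, and subnet IDs are placeholders):

```bash
# Create a single-instance Auto Scaling group registered with an existing
# target group, so a failed instance is replaced and rejoins the load
# balancer automatically.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-app-asg \
  --launch-template LaunchTemplateName=my-app-template,Version='$Latest' \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-0aaa1111,subnet-0bbb2222" \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-app-tg/0123456789abcdef \
  --health-check-type ELB \
  --health-check-grace-period 300
```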
There is no need to use a Load Balancer if you are only running a single Amazon EC2 instance.
To point your domain name to an EC2 instance:
In the EC2 Management Console, select Elastic IP
Allocate New Address
Associate the address with your EC2 instance
Copy the Elastic IP address and use it in your Route 53 sub-domain
The Elastic IP address can be re-associated with a different EC2 instance later if desired.
Later, if you wish to balance between multiple EC2 instances:
Create an Elastic Load Balancer
Add your instance(s) to the Load Balancer
Point your Route 53 sub-domain to the Load Balancer
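A hedged CLI sketch of those later steps with an Application Load Balancer (VPC, subnet, and instance IDs are placeholders; the Route 53 ALIAS record would then point at the load balancer's DNS name as shown earlier):

```bash
# 1. Create a target group for the application (placeholder VPC ID)
TG_ARN=$(aws elbv2 create-target-group --name my-app-tg \
  --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0 \
  --query 'TargetGroups[0].TargetGroupArn' --output text)

# 2. Create the load balancer across two subnets (placeholders)
ALB_ARN=$(aws elbv2 create-load-balancer --name my-app-alb \
  --subnets subnet-0aaa1111 subnet-0bbb2222 \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

# 3. Forward HTTP traffic from the load balancer to the target group
aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn="$TG_ARN"

# 4. Register the existing EC2 instance(s) with the target group
aws elbv2 register-targets --target-group-arn "$TG_ARN" \
  --targets Id=i-0123456789abcdef0
```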
With no ELB:
Less secure (DoS attacks are possible, as port 80 will be open to everyone instead of being open only to the ELB)
You won't have the freedom to terminate an instance to save EC2 hours without worrying about remapping your Elastic IP (not a big deal, though)
If you don't use an ELB and your EC2 instance becomes unhealthy, terminates, or goes down:
Your site will remain down (it would stay up if you used an ELB plus scaling policies)
You will have to remap your Elastic IP
You pay for the time your Elastic IP is not associated with an instance (around $0.005/hr)
You get 750 hours of Elastic Load Balancing plus 15 GB of data processing with the free tier, so why not use it along with a min=1, max=1 scaling policy?
On top of the answer about making SSL support easier by putting a load balancer in front of your EC2 instance, another potential benefit is HTTP/2. An Application Load Balancer (ALB) will automatically handle HTTP/2 traffic and convert up to 128 parallel requests to individual HTTP/1.1 requests across all healthy targets.
For more information, see: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-configuration
It really depends on what you are running on the EC2 instance.
While it's not necessary to use an ELB with only one EC2 instance (all your traffic will go to that instance anyway), if your service has to scale in the near future, it's not a bad idea to invest some time now and get familiar with ELB.
This way, when you need to scale, it's just a matter of firing up additional instances, because you have the ELB part done.
If your EC2 service won't scale in the near future, don't worry too much!
About the second part: you definitely can route directly to your EC2 instance; you just need the instance's IP. Take a look at the Amazon Route 53 docs. Keep in mind that if your IP is not static (i.e. you don't set up an Amazon Elastic IP), you'd need to change the IP mapping every time the EC2 IP changes.
You can also use an ELB in front of EC2 if, for example, you want it to be publicly reachable without having to use up an Elastic IP address. As said previously, ELBs also work well with ASGs.