At work we're trying to set up our load balancer with Amazon AWS. We have two instances; the second instance is made from an AMI of the first.
We only have time to use the AWS GUI right now.
We also currently have one instance associated with the Route 53 DNS record. What was happening was that once that instance started failing, the load was not rolling over to the other instance.
We then tried pointing the Route 53 DNS at the load balancer's A address instead, but that was not distributing the load either.
Are we doing this completely wrong? Do Route 53 and the ELB need to work in conjunction?
I really appreciate any help with this.
**NOTE:** At low traffic our health checks work fine and our instances are "In Service".
You need to have the Route 53 domain direct traffic to the ELB. If you have example.com and are trying to route it to the load balancer, you need to associate the apex with the load balancer.
To do this, go to the Route 53 tab. Click your hosted zone and go to Record Sets, then create a new record set and choose Yes for Alias. Point the alias target at your ELB.
Now, to get traffic to fail over correctly, you need to be running both instances behind the load balancer (preferably in multiple Availability Zones), and the ELB will take care of the failover.
To do this, go to the ELB section of EC2, click your load balancer, and add both instances to it.
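For reference, the same setup can be scripted. Below is a minimal boto3 (Python) sketch under a few assumptions that are not in the question: a classic ELB named my-load-balancer, a hosted zone for example.com, and two placeholder instance IDs.

```python
# Minimal boto3 sketch; the ELB name, hosted zone ID, and instance IDs are placeholders.
import boto3

elb = boto3.client("elb")          # classic ELB API
route53 = boto3.client("route53")

# Put both instances behind the load balancer.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-load-balancer",                      # hypothetical ELB name
    Instances=[{"InstanceId": "i-0123456789abcdef0"},         # hypothetical instance IDs
               {"InstanceId": "i-0fedcba9876543210"}],
)

# Look up the ELB's DNS name and canonical hosted zone for the alias target.
lb = elb.describe_load_balancers(
    LoadBalancerNames=["my-load-balancer"]
)["LoadBalancerDescriptions"][0]

# Point the zone apex (example.com) at the ELB with an alias record.
route53.change_resource_record_sets(
    HostedZoneId="Z111111QQQQQQQ",                            # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": lb["CanonicalHostedZoneNameID"],
                    "DNSName": lb["DNSName"],
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)
```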
I have set up an ALB with Fargate, and currently I can access the ALB with a DNS name like this:
myapp-LoadB-FDEWFSOAQXD4-f18c75dd4249a10d.elb.ap-northeast-1.amazonaws.com
However, it is said this DNS name could change.
So I want to give it an Elastic IP.
I have experience connecting an EC2 instance with an Elastic IP.
In the Elastic IP panel I can choose an instance.
However, no ALB is listed.
How can I assign an Elastic IP to an ALB? Or am I basically approaching this wrong?
Two options here, depending on which direction you are heading:
If you do not like the default DNS name
You can create a DNS record that will point to your load balancer. This means that people would be able to surf to your website by using www.whitebear.com instead of myapp-LoadB-FDEWFSOAQXD4-f18c75dd4249a10d.elb.ap-northeast-1.amazonaws.com
See: Routing traffic to an ELB load balancer - Amazon Route 53
If you really want to attach an Elastic IP to a load balancer
There are some use cases where you really need to be able to reach a load balancer via a fixed IP. You can achieve this by setting up AWS Global Accelerator.
With Global Accelerator, you are provided two global static public IPs that act as a fixed entry point to your application, improving availability.
More information can be found on the AWS Global Accelerator page
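If you go that route, here is a rough boto3 sketch of the Global Accelerator setup. The accelerator name, ALB ARN, and ports are placeholders rather than values from the question; only the ap-northeast-1 region comes from the ALB's DNS name above.

```python
# Rough boto3 sketch of fronting an ALB with Global Accelerator; names and ARNs are placeholders.
import boto3

# The Global Accelerator API endpoint lives in us-west-2 regardless of where the ALB runs.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="myapp-accelerator", IpAddressType="IPV4", Enabled=True)
acc_arn = accelerator["Accelerator"]["AcceleratorArn"]

listener = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 80, "ToPort": 80}, {"FromPort": 443, "ToPort": 443}],
)

# Attach the ALB (by ARN) as the endpoint behind the two static anycast IPs.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="ap-northeast-1",   # region of the ALB in the question
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:loadbalancer/app/myapp/abc123",  # placeholder ALB ARN
        "Weight": 128,
    }],
)

# The two static IPs handed out by Global Accelerator:
print(accelerator["Accelerator"].get("IpSets", []))
```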
If you wish to create a 'friendly' name for an Application Load Balancer, you can create a CNAME record in your Domain and point it to the DNS Name of the Load Balancer.
If you wish to point the Apex of your domain (eg example.com), you can use an Alias in Amazon Route 53 to point to the Application Load Balancer. (It is not normally possible to point a Domain apex to a CNAME record, so the Alias capability of Route 53 will do it for you.)
See: Routing traffic to an ELB load balancer - Amazon Route 53
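For the sub-domain case, a hedged boto3 sketch of the CNAME record might look like the following; the hosted zone ID and www.example.com are placeholders, while the load balancer DNS name is the one from the question above. For the apex you would instead create a Type A record with an AliasTarget, as described above.

```python
# Sketch: CNAME for a sub-domain pointing at the ALB's DNS name (zone ID and record name are placeholders).
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z222222QQQQQQQ",   # placeholder hosted zone ID for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{
                    "Value": "myapp-LoadB-FDEWFSOAQXD4-f18c75dd4249a10d.elb.ap-northeast-1.amazonaws.com"
                }],
            },
        }]
    },
)
```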
It sounds like I cannot use an Elastic IP with an AWS Application Load Balancer.
I currently own a domain through GoDaddy and the DNS record points to the load balancer via a CNAME. However, if the load balancer dies and gets recreated, its URL changes and I then have to change the CNAME and wait for the change to propagate.
There must be a solution around this - what is it?
It looks like the solution might be to use two load balancers - https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/, but this seems really excessive - I have a small application right now.
As far as I know, the only way to have a fixed static IP for a load balancer is to use a Network Load Balancer.
As stated here:
Support for static IP addresses for the load balancer. You can also assign one Elastic IP address per subnet enabled for the load balancer.
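As an illustration only, creating an NLB with one Elastic IP pinned per subnet could look roughly like this in boto3; the subnet IDs and the load balancer name are placeholders.

```python
# Illustrative boto3 sketch: an NLB with one Elastic IP per subnet (all IDs are placeholders).
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Allocate one EIP per subnet the NLB will live in.
eip_a = ec2.allocate_address(Domain="vpc")
eip_b = ec2.allocate_address(Domain="vpc")

elbv2.create_load_balancer(
    Name="my-static-ip-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": eip_a["AllocationId"]},  # placeholder subnets
        {"SubnetId": "subnet-bbbb2222", "AllocationId": eip_b["AllocationId"]},
    ],
)
```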
An Elastic Load Balancer retains its DNS name for its lifetime; the name only changes if you delete the load balancer and recreate it. If you still want a temporary, low-cost workaround for that case, you can consider the following approach:
Assuming the application is deployed in a private subnet, I would proxy the traffic through an EC2 instance until your primary DNS changes propagate.
Launch a small EC2 instance and attach an Elastic IP to it (consider your bandwidth requirements when choosing the instance size).
Configure a proxy (nginx) to forward traffic to your application.
Configure active-passive DNS failover in Route 53 between the ELB DNS name (primary) and the EIP (secondary).
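Step 3 could be sketched in boto3 roughly as follows; the hosted zone ID, record name, ELB DNS name and hosted zone ID, and the Elastic IP are all placeholders.

```python
# Sketch of the active-passive failover records; every ID and name below is a placeholder.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z333333QQQQQQQ",  # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        {  # PRIMARY: alias to the ELB, failing over when its targets go unhealthy
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "SetIdentifier": "primary-elb",
                "Failover": "PRIMARY",
                "AliasTarget": {
                    "HostedZoneId": "Z0000000000000",  # placeholder ELB hosted zone ID
                    "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",  # placeholder ELB DNS name
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {  # SECONDARY: plain A record to the proxy instance's Elastic IP
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "SetIdentifier": "secondary-proxy",
                "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # placeholder EIP
            },
        },
    ]},
)
```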
I have an issue that I have been trying to work out for a while now. I am experimenting with AWS and thinking of moving sites over, but I can't get DNS to work with OpsWorks apps. I have a PHP / RDS stack that I have a few apps in.
These were working great except for the issue of OpsWorks instances having a dynamic DNS name that changes upon instance reboot. I don't want to have to change my DNS records in Route53 every time that happens, so I implemented an EIP, registered it with the instance, and registered it with OpsWorks. I also added rules to the security policy that the EC2 instance uses in the default VPC to accept incoming HTTP requests.
Now, when I add an A record to my DNS zone that points to the EIP, and add my domain in the OpsWorks app settings, my domain does not resolve in the browser. What am I missing?
OpsWorks does very little to manage DNS externally. All DNS management should be done through Route53.
To start, make sure you have your nameserver (NS) record properly configured to reference your domain in your hosted zone, and also make sure that whatever DNS provider you're using (e.g. name.com, etc) is configured to point to those DNS servers.
Also, regarding this point:
I don't want to have to change my DNS records in Route53 every time that happens, so I implemented an EIP, registered it with the instance, and registered it with OpsWorks.
You should really be using an elastic load balancer for this, not an elastic IP. You can associate an elastic load balancer with your OpsWorks stack so that any instances launched within the OpsWorks stack will be associated with that elastic load balancer. The additional benefit is that you can have multiple servers hosting your application as you scale.
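If it helps, attaching an existing classic ELB to an OpsWorks layer can also be done with boto3. This is only a sketch; the ELB name, region, and layer ID are placeholders.

```python
# Hedged boto3 sketch: attach an existing classic ELB to an OpsWorks layer (names/IDs are placeholders).
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

opsworks.attach_elastic_load_balancer(
    ElasticLoadBalancerName="my-opsworks-elb",        # placeholder ELB created beforehand in EC2
    LayerId="11111111-2222-3333-4444-555555555555",   # placeholder OpsWorks layer ID
)
```

Your Route 53 alias then points at that ELB's DNS name rather than at any individual instance.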
When creating a service of type LoadBalancer on AWS, Kubernetes auto-provisions an elastic load balancer. I am wondering how I can automatically associate that load balancer with a Route 53 alias?
Alternatively, can I make Kubernetes re-use an elastic load balancer (which I've assigned a Route 53 alias to)?
There is a project that accomplishes this: https://github.com/wearemolecule/route53-kubernetes
A side note here: there are some issues with being able to select the TLD that this uses; it seems to use the first matching public record set.
Also, this doesn't work with internal ELBs. There was an issue opened under the project for this request.
K8s cannot automatically associate the ELB with Route 53; you need to configure that yourself. As for how to instruct k8s to reuse an existing ELB, there are two ways:
[Update: this only works on GCE, NOT on AWS] Specify the service type=LoadBalancer, and specify the ExternalIP to equal the existing ELB's external IP, and k8s should reuse that ELB. I know this works on GCE, but I haven't tried it on AWS. Also, if this all works, when you delete the k8s service, the ELB will be deleted by k8s as well.
Specify the service as type=NodePort, and set its NodePort to equal the backend port of your existing ELB. I have more confidence in this approach. Also, with this approach, when the service is deleted, the ELB will not be deleted by k8s.
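A sketch of option 2 with the official kubernetes Python client, assuming (hypothetically) that your pods carry the label app=myapp, listen on 8080, and that your existing ELB forwards to node port 30080:

```python
# Sketch of a NodePort service via the kubernetes Python client; ports, names, and labels are assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="myapp"),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "myapp"},      # assumed pod label
        ports=[client.V1ServicePort(
            port=80,                    # cluster-internal port
            target_port=8080,           # assumed container port
            node_port=30080,            # must match the backend port your existing ELB forwards to
        )],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```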
My question is simple. Does it make sense to have an Amazon Elastic Load Balancer (ELB) with just one EC2 instance?
If I understood right, ELB will switch traffic between EC2 instances. However, I have just one EC2 instance. So, does it make sense?
On the other hand, I'm using Route 53 to route my domain requests example.com and www.example.com to my ELB, and I don't see how to redirect directly to my EC2 instance. So, do I need an ELB for routing purposes?
Using an Elastic Load Balancer with a single instance can be useful. It can provide your instance with a front-end to cover for a disaster situation.
For example, if you use an auto-scaling group with min=max=1 instance, with an Elastic Load Balancer, then if your instance is terminated or otherwise fails:
auto-scaling will launch a new replacement instance
the new instance will appear behind the load balancer
your user's traffic will flow to the new instance
This will happen automatically: no need to change DNS, no need to manually re-assign an Elastic IP address.
Later on, if you need to add more horsepower to your application, you can simply increase your min/max values in your autoscaling group without needing to change your DNS structure.
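For reference, the min=max=1 auto-scaling setup behind a classic ELB might be scripted roughly like this in boto3; the launch template, load balancer name, and subnet IDs are placeholders.

```python
# Minimal boto3 sketch of the min=max=1 setup (launch template, ELB name, and subnets are placeholders).
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="single-instance-asg",
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    LaunchTemplate={"LaunchTemplateName": "my-app-template", "Version": "$Latest"},  # placeholder template
    LoadBalancerNames=["my-load-balancer"],                # classic ELB; use TargetGroupARNs for ALB/NLB
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",   # placeholder subnets in two AZs
    HealthCheckType="ELB",                                 # replace the instance when the ELB marks it unhealthy
    HealthCheckGracePeriod=300,
)
```

Scaling up later is then just a matter of raising MinSize/MaxSize, with no DNS changes.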
It's much easier to configure your SSL on an ELB than an EC2, just a few clicks in the AWS console. You can even hand pick the SSL protocols and ciphers.
It's also useful that you can associate different security groups with the actual EC2 instance and the forefront ELB. You can leave the ELB in the DMZ and keep your EC2 instance from being publicly accessible and potentially vulnerable to attacks.
There is no need to use a Load Balancer if you are only running a single Amazon EC2 instance.
To point your domain name to an EC2 instance:
In the EC2 Management Console, select Elastic IP
Allocate New Address
Associate the address with your EC2 instance
Copy the Elastic IP address and use it in your Route 53 sub-domain
The Elastic IP address can be re-associated with a different EC2 instance later if desired.
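A hedged boto3 sketch of those console steps, with a placeholder instance ID, hosted zone ID, and record name:

```python
# Sketch: allocate an Elastic IP, attach it to the instance, and point an A record at it (IDs are placeholders).
import boto3

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

# Allocate an Elastic IP and associate it with the instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(AllocationId=eip["AllocationId"],
                      InstanceId="i-0123456789abcdef0")   # placeholder instance ID

# Point a sub-domain at the Elastic IP with a plain A record.
route53.change_resource_record_sets(
    HostedZoneId="Z444444QQQQQQQ",  # placeholder hosted zone ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com.",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": eip["PublicIp"]}],
        },
    }]},
)
```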
Later, if you wish to balance between multiple EC2 instances:
Create an Elastic Load Balancer
Add your instance(s) to the Load Balancer
Point your Route 53 sub-domain to the Load Balancer
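If you later take that path with an Application Load Balancer (the elbv2 API), a rough boto3 sketch could look like this; all names and IDs below are placeholders.

```python
# Rough boto3 sketch of the later, load-balanced setup with an ALB (all names/IDs are placeholders).
import boto3

elbv2 = boto3.client("elbv2")

lb = elbv2.create_load_balancer(
    Name="my-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholder subnets in two AZs
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="my-app-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-11112222",                             # placeholder VPC ID
    TargetType="instance",
)["TargetGroups"][0]

# Register the existing instance(s) and forward HTTP traffic to them.
elbv2.register_targets(TargetGroupArn=tg["TargetGroupArn"],
                       Targets=[{"Id": "i-0123456789abcdef0"}])  # placeholder instance ID
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# Then switch the Route 53 record from the Elastic IP to an alias on lb["DNSName"].
```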
With no ELB:
Less secure (DoS attacks are possible since HTTP port 80 will be open to all, instead of being open only to the ELB)
You won't have the freedom of terminating an instance to save EC2 hours without worrying about remapping your Elastic IP (not a big deal, though)
If you don't use an ELB and your EC2 instance becomes unhealthy, terminates, or goes down:
Your site will remain down (it would stay up if you used an ELB plus scaling policies)
You will have to remap your Elastic IP
You pay for the time your Elastic IP is not pointing to an instance (around $0.005/hr)
You get 750 hours of Elastic Load Balancing plus 15 GB of data processing with the free tier, so why not use it along with a min=1, max=1 scaling policy?
On top of the answer about making SSL support easier by putting a load balancer in front of your EC2 instance, another potential benefit is HTTP/2. An Application Load Balancer (ALB) will automatically handle HTTP/2 traffic and convert up to 128 parallel requests to individual HTTP/1.1 requests across all healthy targets.
For more information, see: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-configuration
It really depends on what you are running in the EC2 instance.
While with only one EC2 instance it's not necessary to use an ELB (all your traffic will go to that instance anyway), if your EC2 service has to scale in the near future, it's not a bad idea to invest some time now and get familiar with ELB.
This way, when you need to scale, it's just a matter of firing up additional instances, because you have the ELB part done.
If your EC2 service won't scale in the near future, don't worry too much!
About the second part: you definitely can route directly to your EC2 instance; you just need the instance's IP. Take a look at the Amazon Route 53 docs. Keep in mind that if your IP is not static (i.e. you don't set up an Amazon Elastic IP), you'd need to change the IP mapping every time the EC2 IP changes.
You can also use an ELB in front of EC2 if, for example, you want it to be publicly reachable without having to use up an Elastic IP address. As said previously, they also work well with ASGs.