We currently have our production Elasticsearch on AWS. Nightly we update the production Elasticsearch with new data (base data) and then run scripts to merge the new base with current production.
Now, this works alright, but production is offline while this is happening. So I thought I could do it all on a staging Elasticsearch environment on AWS and then, when it's done, somehow switch over to production.
So here's my flow:
spin up new elastic search instance (staging)
populate data (staging)
run scripts (merging production to staging)
switch somehow
remove/delete/shutdown old production
I looked at AWS Route 53 and it looks promising. Basically, fiddle with the DNS settings so that "productionelastic" points to staging, then shut down the production instance.
Is there anything else I can do? Also, will the Route 53 idea work?
You can use Amazon Route 53 health checks and DNS failover to route requests to the healthy Elasticsearch service while the other one is undergoing maintenance:
If you have multiple resources that perform the same function, for example, web servers or email servers, and you want Amazon Route 53 to route traffic only to the resources that are healthy, you can configure DNS failover by associating health checks with your resource record sets. If a health check determines that the underlying resource is unhealthy, Amazon Route 53 routes traffic away from the associated resource record set. For more information, see Configuring DNS Failover.
Using this service you can switch between the two instances according to their availability. See Configuring DNS Failover.
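For the switch step itself, here is a minimal sketch of the DNS change with boto3 (the hosted zone ID, record name, and endpoints are hypothetical placeholders, not values from the question):

    import boto3

    route53 = boto3.client("route53")

    # Repoint the stable "productionelastic" name at the staging cluster
    # once the nightly merge has finished. All IDs/names are placeholders.
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={
            "Comment": "Promote staging Elasticsearch to production",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "productionelastic.example.com",
                    "Type": "CNAME",
                    "TTL": 60,  # keep low so clients pick up the switch quickly
                    "ResourceRecords": [{"Value": "staging-es.example.com"}],
                },
            }],
        },
    )

Once the change has propagated, the old production instance can be shut down.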
I used an IIS reverse proxy rule:
create ES instance
wait for it to be ready and created
run a PowerShell script to update a fake website's rewrite rule to point to the new instance
and then I use the fake website in the production code.
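A rough sketch of that rewrite-rule update, in Python rather than PowerShell (the web.config path and rule name are assumptions for illustration):

    import xml.etree.ElementTree as ET

    # Hypothetical path to the "fake" site's IIS config and new target.
    CONFIG_PATH = r"C:\inetpub\fakesite\web.config"
    NEW_TARGET = "http://new-es-instance.example.com:9200/{R:1}"

    tree = ET.parse(CONFIG_PATH)

    # Point the reverse proxy rule's <action> at the new instance.
    for rule in tree.getroot().iter("rule"):
        if rule.get("name") == "ProxyToElasticsearch":  # assumed rule name
            rule.find("action").set("url", NEW_TARGET)

    tree.write(CONFIG_PATH)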
I will use Route 53 when I have someone to manage it for me.
Thanks.
Is there an easy way that I am missing to get an unchanging, internet-accessible URL for something I deploy to ECS with docker compose up?
I've written a small web app using Flask and Nginx, put the Flask and Nginx portions into Docker containers, and deployed the whole thing to AWS ECS using this workflow, which boils down to:
docker context use myecscontext
docker compose up
This deploys the whole thing using AWS Fargate and makes it accessible from the internet at timot-LoadB-xyzxyzxyzxyzx-xyzxyzxyz.us-east-2.elb.amazonaws.com. So far so good.
Now I'd like to make my-fancy-domain.com, registered with a non-AWS registrar, point to my web app. I know I can edit the DNS entry at my registrar to do this; here's the catch: that URL with all the xyzs changes every time I run docker compose up after making changes to my web app. Must I really monkey around in my registrar's DNS settings every time I update something?
I had imagined I would simply slap an Elastic IP on my new Fargate cluster when I'm satisfied that I want to replace the current live version with an update. I see now that I can't easily associate an Elastic IP with the load balancer that Fargate sets up. And I would just as soon not move my-fancy-domain.com to Route 53 simply to accomplish this.
For anyone who finds this in the future: what ended up working was to move the DNS records for my site from my domain registrar to AWS Route 53.
Once I did that, the AWS Route 53 console makes it straightforward to add an alias record pointing to the Application Load Balancer that the docker-compose/ECS integration set up. Those alias records are "a Route 53-specific extension to DNS".
I did not want to move my DNS records to Route 53, but it did solve the problem.
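For reference, a hedged sketch of creating such an alias record with boto3 (zone IDs are placeholders; the ALB's own hosted zone ID comes from describe-load-balancers or the console):

    import boto3

    route53 = boto3.client("route53")

    # Alias the domain to the ALB that the docker-compose/ECS integration created.
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # your Route 53 hosted zone (placeholder)
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "my-fancy-domain.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z3AADJGX6KTTL2",  # the ALB's zone ID (example value)
                    "DNSName": "timot-LoadB-xyzxyzxyzxyzx-xyzxyzxyz.us-east-2.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]},
    )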
I would like to ask: if you have a microservice architecture (based on Spring Boot) involving Amazon Elastic Container Service (ECS) with an Application Load Balancer (ALB), is service discovery performed automatically by the platform, or do you need a special mechanism (such as Eureka or Consul)?
From the documentation (ECS and ALB) it is not clear whether this feature is provided.
I discussed this with the Amazon support team and they responded with the following:
"...using Service Discovery on AWS ECS[..] just with ALBs.
So, there could be three options here:
1) Using ALB/ELB as service endpoints (Target groups for ALBs, separate ELBs if using ELBs)
2) Using Route53 and DNS for Service Discovery
3) Using a 3rd Party product like Consul.io in combination with Nginx.
Let me speak about each of these options.
Using ALBs/ELBs
For this option, the idea is to use ELBs or ALB target groups in front of each service.
We define an Amazon CloudWatch Events filter which listens to all ECS service creation messages from AWS CloudTrail and triggers an AWS Lambda function.
This function identifies which Elastic Load Balancing load balancer (or an ALB Target group) is used by the new service and inserts a DNS resource record (CNAME) pointing to it, using Amazon Route 53.
The Lambda function also handles service deletion to make sure that the DNS records reflect the current state of applications running in your cluster.
The downside here is that it can incur higher costs if you are using ELBs, as you need an ELB for each service. And it might not be the simplest solution out there.
If you wish to read more on this, you can do so here[1].
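As a rough sketch of what that Lambda function might look like in Python (the event shape follows CloudTrail-backed CloudWatch Events; the zone ID and naming convention are assumptions, and the reference architecture in [1] is more complete):

    import boto3

    route53 = boto3.client("route53")
    ecs = boto3.client("ecs")
    elbv2 = boto3.client("elbv2")

    HOSTED_ZONE_ID = "Z123EXAMPLE"        # assumed zone for service names
    DOMAIN_SUFFIX = "services.internal."  # assumed naming convention

    def handler(event, context):
        # Triggered by a CloudWatch Events rule matching ECS CreateService
        # calls recorded in CloudTrail.
        params = event["detail"]["requestParameters"]
        cluster = params.get("cluster", "default")
        service = params["serviceName"]

        # Resolve the target group attached to the new service to an ALB DNS name.
        svc = ecs.describe_services(cluster=cluster, services=[service])["services"][0]
        tg_arn = svc["loadBalancers"][0]["targetGroupArn"]
        tg = elbv2.describe_target_groups(TargetGroupArns=[tg_arn])["TargetGroups"][0]
        lb = elbv2.describe_load_balancers(LoadBalancerArns=tg["LoadBalancerArns"])["LoadBalancers"][0]

        # Publish <service>.<suffix> as a CNAME to the load balancer.
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": service + "." + DOMAIN_SUFFIX,
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": lb["DNSName"]}],
                },
            }]},
        )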
Using Route53
This approach involves the use of Route53 and running a simple agent[2] on your ECS container instances.
As your containers stop and start, the agent will update the Route 53 DNS records: it creates an SRV record when a container starts and deletes the record once the container is stopped.
Another part of this method is a Lambda function that performs health checks on ECS container instances and removes them from Route 53 in case of a failure.
You can read more on this method in our blog post here[3].
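To illustrate the kind of record the agent maintains, a minimal boto3 sketch (zone, names, and ports are placeholders; the actual agent [2] handles this automatically):

    import boto3

    route53 = boto3.client("route53")

    def register_container(service, host, port, hosted_zone_id="Z123EXAMPLE"):
        # UPSERT an SRV record when a container starts; SRV values are
        # "priority weight port target". The agent deletes it on stop.
        route53.change_resource_record_sets(
            HostedZoneId=hosted_zone_id,
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": f"_{service}._tcp.services.internal.",
                    "Type": "SRV",
                    "TTL": 10,  # short TTL so stopped containers age out quickly
                    "ResourceRecords": [{"Value": f"1 1 {port} {host}"}],
                },
            }]},
        )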
Using a 3rd party tool like Consul.io
Using tools like Consul.io on ECS will work, but it is not supported by AWS. So you are free to use it, but we, unfortunately, do not offer support for it.
So, in conclusion, there are a few ways of implementing service discovery on AWS ECS: the two ways I showed here that use AWS resources, and of course the option of using 3rd party applications.
"
You don't have an out-of-the-box solution in AWS, although it is possible with some effort, as described in https://aws.amazon.com/es/blogs/compute/service-discovery-an-amazon-ecs-reference-architecture/
You may also install Zuul + Ribbon + Eureka or Nginx + Consul and use an ALB to distribute traffic among the Zuul or Nginx instances.
I have previously seen this done by having one EC2 instance running HAProxy, configured via a JSON file/Lambda function, which in turn controlled the traffic with sticky sessions into two separate Elastic Beanstalk applications. So we had two layers of load balancing.
However, this has a few issues, one being that testing several releases becomes expensive: it requires more and more EB applications.
By canary release, I mean being able to release to only a percentage of traffic, to catch any errors that escaped the devs, the review process, and the QA process, without affecting all traffic.
What would be the best way to handle such a setup with AWS resources and not break the bank? :)
I found this Medium article that explains the usage of a passive autoscaling group: you deploy the canary version into it and monitor its statistics. Once you are satisfied with the result, you can change the desired count of the canary autoscaling group to 0 and perform a rolling upgrade on the active autoscaling group.
Here is the link to the article: https://engineering.klarna.com/simple-canary-releases-in-aws-how-and-why-bf051a47fb3f
The way you would achieve canary testing with Elastic Beanstalk is:
Create a second Beanstalk environment to which you deploy the canary release.
Use a Route 53 weighted routing policy to send a percentage of the DNS requests to your canary environment.
If you're happy with the performance of the canary, you can then route 100% of the traffic to the canary environment, etc.
Something to keep in mind with DNS routing is that weighted routing is not an exact science, since clients cache DNS based on the TTL you set in Route 53. In the extreme scenario where you have, say, only a single client calling your Beanstalk environment (such as a single web server) and the TTL is set to 5 minutes, the switch between environments may only happen every 5 minutes.
Therefore, for weighted routing it is recommended to use a fairly low TTL value. Additionally, having many clients (e.g. mobile phones) works better in conjunction with DNS routing.
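A minimal sketch of the weighted records with boto3, using a low TTL as recommended (the zone ID and environment CNAMEs are placeholders):

    import boto3

    route53 = boto3.client("route53")

    # Two weighted CNAMEs for the same name: 90% to the main environment,
    # 10% to the canary. All names and IDs are placeholders.
    records = [
        ("main-env.us-east-1.elasticbeanstalk.com", "main", 90),
        ("canary-env.us-east-1.elasticbeanstalk.com", "canary", 10),
    ]
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": set_id,  # distinguishes the weighted records
                "Weight": weight,
                "TTL": 60,  # keep low so weight changes take effect quickly
                "ResourceRecords": [{"Value": target}],
            },
        } for target, set_id, weight in records]},
    )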
Alternatively, it might be possible to create a separate load balancer in front of the two Beanstalk environments that balances requests between them. However, I'm not 100% sure a load balancer can sit in front of other (Beanstalk) load balancers; I suspect the answer is no, but I haven't tried it yet.
Modifying the autoscaling group in Elastic Beanstalk is not possible, since the load balancer is managed by Beanstalk, and Beanstalk can decide to revert changes you made manually on the LB. Additionally, Beanstalk does not allow you to deploy to a subset of instances while keeping the older version on another subset.
Hope this helps.
Traffic splitting is supported natively by Elastic Beanstalk.
Be sure to select a "high availability" configuration preset when creating your application environment (by clicking on "configure more options"), as this will configure a load balancer for your environment.
Then edit the "Rolling updates and deployments" section of your environment and choose "Traffic splitting" as your deployment strategy.
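The same can be done programmatically; here is a hedged sketch with boto3 (the environment name and percentages are placeholders):

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Switch deployments to traffic splitting: send 10% of traffic to the
    # new version and evaluate it for 10 minutes before promoting.
    eb.update_environment(
        EnvironmentName="my-env",  # placeholder
        OptionSettings=[
            {"Namespace": "aws:elasticbeanstalk:command",
             "OptionName": "DeploymentPolicy", "Value": "TrafficSplitting"},
            {"Namespace": "aws:elasticbeanstalk:trafficsplitting",
             "OptionName": "NewVersionPercent", "Value": "10"},
            {"Namespace": "aws:elasticbeanstalk:trafficsplitting",
             "OptionName": "EvaluationTime", "Value": "10"},
        ],
    )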
I am developing a set of frontend web apps (for instance Vaadin or Angular) and backend RESTful services. Each frontend web app will consume one or more of these backend services. I want both the web apps and the services to be secured over HTTPS.
Now, I want to register a single domain, say mydomain.com, and deploy the backend services such that they are available at
service1.api.mydomain.com, service2.api.mydomain.com etc. The frontend apps should be available at webapp1.mydomain.com, webapp2.mydomain.com etc.
I need to be able to set up two or more EC2 instances for the services, and the same for the web apps. For instance, service1 may be running on instance A, service2 on instance B, webapp1 on instance C, and webapp2 on instance D.
How do I configure this setup in AWS Route 53?
Since there is a limit to the maximum number of Elastic IPs (5 by default) that can be allocated for one AWS account, I suppose separate public IPs for all the EC2 instances are not a solution, since I will have more than 5 such subdomains.
I hope you can provide a practical example configuration with two services and two webapps.
You can submit a request to get the Elastic IP (EIP) limit increased for your account. Small increases (e.g. from 5 to 10) should be fairly quick and easy to obtain. Larger increases should be obtainable if you can justify it to AWS support.
https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase&limitType=service-code-vpc
If you're open to using path-based routing instead of subdomain-based routing (e.g. mydomain.com/app1 and mydomain.com/app1/api), or a mix of the two (e.g. app1.mydomain.com and app1.mydomain.com/api), you could look at using an Application Load Balancer (ALB). You would need one ALB per subdomain used.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html
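For illustration, a minimal boto3 sketch of one path-based listener rule (all ARNs are placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Forward /app1/api/* to service1's target group; one rule per path prefix.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",  # placeholder
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/app1/api/*"]}],
        Actions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/service1/123abc",  # placeholder
        }],
    )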
Note: I expect subdomain-based routing to be available with the ALB in the future, but it hasn't been released yet.
ALBs could be cheaper than using Classic Elastic Load Balancers (ELBs), but if you're not using the load balancing functionality at all, EIPs may be your best bet since they're free when attached to a running instance.
I have created two instances in AWS (one is Live and the other is Backup). My website is hosted on the Live instance. What I want is this: if the Live instance's status check fails, traffic should switch to the Backup instance. Is there an automated process to achieve this?
It is not a good idea to keep one instance idle and pay for it. Put both under an Elastic Load Balancer and start using both of them. The ELB health check will automatically remove instances that stop working. You can then monitor the number of healthy instances under your ELB with Amazon CloudWatch and set up an alarm to get an email when something happens, or you can even autoscale.
Liviu Costea's answer is the best way to go. If you insist on keeping only one active server at a time, and you are using Route 53 for your DNS, then you can use Route 53 health checks to switch the domain name resolution from your primary server to your secondary server in case your primary server goes out of service.
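A minimal sketch of such a failover pair with boto3 (the zone ID, health check ID, and addresses are placeholders):

    import boto3

    route53 = boto3.client("route53")

    # The PRIMARY record is served while its health check passes; Route 53
    # falls back to the SECONDARY record when the live server fails.
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A", "TTL": 60,
                "SetIdentifier": "live", "Failover": "PRIMARY",
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # placeholder
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            }},
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A", "TTL": 60,
                "SetIdentifier": "backup", "Failover": "SECONDARY",
                "ResourceRecords": [{"Value": "203.0.113.20"}],
            }},
        ]},
    )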