Rolling updates for websites hosted on Amazon's Elastic Beanstalk

I've deployed my application to Elastic Beanstalk, and configured it with the Rolling Updates feature. This sounds pretty cool because it keeps a certain number of servers in the farm at all times while others are swapped out for upgrades - so it seems an outage-less upgrade is achievable.
But by my reasoning this just isn't going to work - at some point during a rolling update there will be some servers with v1 on and some servers with v2 on. If the JS or CSS is different between v1 and v2 then there could be a v1 page loaded that sends a request to the load balancer and gets the v2 JS/CSS and vice versa.
I can't think of an easy way to avoid this, and so I'm struggling to see the point of Rolling Updates at all.
Am I missing something? Is there a better way to achieve an outage-less upgrade for a website? I was thinking that I could also set up a complete parallel Elastic Beanstalk environment with v2 on it, and then switch them over in one go - but that seems so much more time consuming.

As you described, to use rolling deployments and have continuous deployment on the same environment, you need to guarantee that version N is compatible with version N+1. "Compatible" here means they can run simultaneously, which can be challenging in cases such as changed static assets (JS/CSS) and database schema changes.
A popular workaround is Blue-Green Deployment, where you deploy to a separate environment and then redirect users to it. The Swap Environment URLs feature in AWS Elastic Beanstalk helps implement that.
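If it helps to see it concretely, here's a rough sketch of scripting that swap with the EB CLI and the AWS CLI; the environment names (myapp-blue / myapp-green) are placeholders, not anything from your setup:

    # Stand up a "green" environment with the new version, then swap CNAMEs.
    # Environment names and CNAME prefix are illustrative placeholders.
    eb create myapp-green --cname myapp-green
    eb deploy myapp-green

    # After verifying myapp-green.elasticbeanstalk.com works, swap the URLs
    # so user traffic moves to the new version in a single step:
    aws elasticbeanstalk swap-environment-cnames \
        --source-environment-name myapp-blue \
        --destination-environment-name myapp-green

The swap is a DNS change on the environments' CNAMEs, so already-loaded v1 pages may still hit the old assets briefly while DNS propagates, but you avoid the prolonged mixed-version window of a rolling deploy.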
Progressive rollouts and blue-green are not mutually exclusive. You can use progressive rollouts for small changes and blue-green for bigger ones. For more patterns and techniques on continuous delivery, I'd recommend checking Jez Humble's book.

Related

Amazon Fargate vs EC2 container website hosting

I recently got a project in which I have to build a React / Next.js application that will serve occasional high traffic but will mostly sit idle. We are currently looking for the cheapest option in every category, but we also want to build a scalable and manageable app with a quick and easy CI/CD pipeline. For the development server we chose Heroku's free plan and pipeline, as I think it's perfectly ideal for the job. For production we decided to use Docker, as it's the best way to set up a CD pipeline, and with 2000 minutes of free GitHub Actions per month the whole production/development pipeline will be essentially free of cost for us. We were also thinking of using AWS because of its features, and we want to keep the number of bills to manage to a minimum. For the DB we're thinking of using DynamoDB because of the free 25 GB of lifetime storage, which will be enough since the only dynamic data on the site will be user data and blogs. And for object storage, the choice is S3.
Here, we're confused between the two offerings by AWS when it comes to container hosting: ECS on EC2 and ECS on Fargate. Fargate definitely feels like the better choice given that the application will sit idle most of the time, but we're really confused about resource provisioning for containers in Fargate. The app is running on Next.js, so it'll be server-side rendered.
So my question is: will a combo of 0.5 GB RAM x 0.25 vCPU be enough for a server-side rendered Next.js application? Or should I go for a dedicated EC2 instance? Or another cloud provider, maybe?
Next.js is a framework that runs on top of Node.js, and the documentation doesn't mention any specific resource requirement (only that Node.js 10 is needed), so you can size it the same way you would size a plain Node.js application. See, for example:
Node.js with V8 suitable for limited memory device?
So my question is: will a combo of 0.5 GB RAM x 0.25 vCPU be enough for a server-side rendered Next.js application? Or should I go for a dedicated EC2 instance? Or another cloud provider, maybe?
I would not suggest the EC2 launch type for the ECS service; you can go with Fargate using minimal memory and CPU and set up auto-scaling of the ECS service whenever required.
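For a feel of what that minimal sizing looks like, here's a hedged sketch of registering a Fargate task definition at exactly the 0.25 vCPU / 0.5 GB combination from the question; the family, image, role ARN and port are illustrative placeholders:

    # Minimal Fargate task definition: 256 CPU units (0.25 vCPU), 512 MiB memory.
    # Family, image, execution role and port are placeholders, not real values.
    aws ecs register-task-definition \
        --family nextjs-ssr \
        --requires-compatibilities FARGATE \
        --network-mode awsvpc \
        --cpu "256" \
        --memory "512" \
        --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
        --container-definitions '[{
            "name": "nextjs",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/nextjs-app:latest",
            "portMappings": [{"containerPort": 3000, "protocol": "tcp"}],
            "essential": true
        }]'

Whether that is enough depends on your pages and rendering load; starting at this size, watching the service's CPU/memory metrics, and letting auto-scaling add tasks when needed is a reasonable approach.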
But I think there is a better option than Fargate, and that is serverless-nextjs:
Serverless deployment dramatically improves reliability and scalability by splitting your application into smaller parts (also called lambdas). In the case of Next.js, each page in the pages directory becomes a serverless lambda.
There are a number of benefits to serverless. The referenced link talks about some of them in the context of Express, but the principles apply universally: serverless allows for distributed points of failure, infinite scalability, and is incredibly affordable with a "pay for what you use" model.
Serverless Nextjs
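If you go that route, deployment is typically driven through the Serverless Framework's component model. The snippet below is only a sketch based on the project's documentation; the component package name and the app key are assumptions, so check the serverless-nextjs repository for the current syntax:

    # Rough sketch: deploy a Next.js app with the serverless-next.js component.
    # The component name and app key are assumptions; verify against the
    # serverless-nextjs docs before using.
    cat > serverless.yml <<'EOF'
    myNextApp:
      component: "@sls-next/serverless-component@latest"
    EOF

    npx serverless    # builds the app and provisions Lambda@Edge + CloudFront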

Is it recommended to have multiple deployments of same application in same AWS region?

I have a requirement where I might need to support different versions of the application at the same time, which is a business requirement.
One way of doing this would be to deploy the app in different regions. But it might also be required to run the same app in one region multiple times.
Of course, it can be done by parameterising the deployment scripts but will it lead to some issues?
One I can think of is that the same app running multiple times in the same region might consume the same kinds of resources and hit some of the regional limits. Are there any other issues I should be aware of?
We need more background information on your actual tech stack and more detailed requirements.
Running multiple deployments of multiple versions is something that a lot of companies manage on AWS. CI/CD pipelines and related tooling help a lot here, but without more detail I am fishing in the dark.
You can certainly run those multiple deployments in one region.
As a starting point, have a look at Elastic Beanstalk environments:
"You can deploy multiple AWS Elastic Beanstalk environments when you need to run multiple versions of an application."
And this gets you started on Elastic Beanstalk.
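As a sketch of what that looks like with the EB CLI (application, environment and version names are placeholders): one application can hold several environments in the same region, each pinned to its own application version.

    # One Elastic Beanstalk application, several environments in one region.
    # Names and the version label below are illustrative placeholders.
    eb init my-app --region us-east-1        # prompts for a platform

    eb create my-app-v1 --cname my-app-v1
    eb create my-app-v2 --cname my-app-v2

    # Deploy a specific, already-uploaded application version to one
    # environment while the other keeps running the old version:
    eb deploy my-app-v2 --version v2-build-42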

Infrastructure and code deployment in same pipeline or different?

We are in the process of setting up a new release process in AWS. We are using Terraform with Elastic Beanstalk to spin up the hardware to deploy to (although the actual tools are irrelevant).
As Elastic Beanstalk does not support immutable deployments in Windows environments, we are debating whether to have a separate pipeline to deploy our infrastructure or to run Terraform on all code deployments.
The two things are likely to have different rates of churn, which feels like a good reason to separate them. This would also reduce risk, as there is less to deploy. But it means code could be deployed to snowflake servers, and QA and live hardware could get out of sync, so we would not be testing like for like.
Does anyone have experience of the two approaches and care to share which has worked better and why?
Well, we have both approaches in place. The initial AWS provisioning has, as its last step, a null resource that runs an Ansible playbook to do the initial code deployment.
Subsequent code deployments are done with standalone Jenkins + Ansible jobs.
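In pipeline terms that splits roughly into two jobs. A minimal sketch, with made-up playbook and inventory names:

    # Pipeline 1: infrastructure provisioning (runs only when infra changes).
    terraform init
    terraform apply -auto-approve
    # Terraform's last step (a null_resource with a local-exec provisioner)
    # kicks off the initial code deployment via Ansible.

    # Pipeline 2: code-only deployment (runs on every application release).
    # Inventory and playbook names are placeholders.
    ansible-playbook -i inventories/production deploy_app.yml \
        --extra-vars "app_version=1.2.3"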

Deploying to several environments on Amazon Elastic Beanstalk at the same time

I have an application that has several environments (all running in Amazon Elastic Beanstalk), namely Production, Worker and Debug. Each environment has a corresponding git branch that differs from master in some ways (for example, configuration is changed and some code is deleted).
I use eb deploy to deploy the new version of the application from its branch. It zips the current git branch (using git archive) and sends it to Amazon, which then deploys it to the running instances.
The problem, however, is that deploying takes some time (about 5 minutes). So between deploying, say, the worker and production environments, they are running different code. Which is bad, because my changes might have changed the queue protocol or something like that.
What I want is to be able to upload the new version and have it processed on all the environments without actually replacing the code yet, just preparing it. Then, after doing that for every environment, issue a command like "finish deploy" so that the code base is replaced on all the environments simultaneously.
Is there a way to do it?
You need to perform a "blue-green" deploy and not do this in-place. Because your deployment model requires synchronization of more than one piece, a change to the protocol those pieces use means those pieces MUST be deployed at the same time. Treat it as a single service if there's a frequently-breaking protocol that strongly binds the design.
"Deployed" means that the outermost layer of the system is exposed and usable by other systems. In this case, it sounds like you have a web server tier exposing an API to some other system, and a worker tier that reads messages produced by the web tier.
When making a breaking queue protocol change, you should deploy BOTH change-sets (web server layer and queue layer) to entirely NEW beanstalk environments, have them configured to use each other, then do a DNS swap on the exposed endpoint, from the old webserver EB environment to the new one. After swapping DNS on the webserver tier and verifying the environment works as expected, you can destroy the old webserver and queue tiers.
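As a rough sketch of that final swap, assuming the exposed endpoint is a Route 53 record you control (the hosted zone ID, record name and target CNAME are placeholders):

    # Repoint the public record from the old webserver environment to the new one.
    # Hosted zone ID, record name and CNAME target are illustrative placeholders.
    aws route53 change-resource-record-sets \
        --hosted-zone-id Z1234567890ABC \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "api.example.com",
              "Type": "CNAME",
              "TTL": 60,
              "ResourceRecords": [{"Value": "myapp-v2-web.us-east-1.elasticbeanstalk.com"}]
            }
          }]
        }'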
On non-protocol-breaking updates, you can simply update one environment or the other.
It sounds complex because it is. If you are breaking the protocol frequently, then your system is not decoupled enough to version the worker and webserver tiers independently, which is why you have to do this complex process to version them together.
Hope this helps!

updating all files on AWS EC2

I'm trying to determine the "best" way for a small company to keep web app EC2 instances in sync with current files while using autoscaling.
From my research, CloudFormation, Chef, Puppet, OpsWorks, and others seem like the tools to do so. All of them seem to have a decent learning curve, so I am hoping someone can point me in the right direction and I'll learn one.
The initial setup I am after is:
Route53
1x Load Balancer
2x EC2 (different AZ) - Apache/PHP
1x ElastiCache Redis
2x EC2 (different AZ) w/ MySQL
Email thru Google Apps
Customer File/Image Storage via S3
CloudFront for CDN
The only major challenge I can see is versioning/syncing the web/app servers. We're small now, so I could probably just update the EBS volumes manually or even use rsync, but I would rather automate it and be set up for autoscaling.
This is probably too broad of a question and may be closed, but let me give you a few thoughts.
Why not use RDS for MySQL?
You need to get into the mindset of making and promoting machine images. In the cloud world, you don't want to be rsyncing a bunch of files around from server to server. When you are ready to publish a revised set of code, just make an image from your staging environment, start new EC2 instances behind your ELB based on that image, and turn off the old instances. You may have a slightly different deployment sequence if you need to coordinate with DB schema changes, but that is a pretty straightforward approach.
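A minimal sketch of that image-promotion flow with the AWS CLI, assuming the web tier sits in an Auto Scaling group behind the ELB; instance IDs, AMI IDs and resource names are placeholders:

    # Bake an image from the up-to-date staging instance.
    # IDs and names below are illustrative placeholders.
    aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name "webapp-v2" \
        --no-reboot

    # Create a launch configuration that uses the new AMI and point the
    # Auto Scaling group at it, then retire the old instances.
    aws autoscaling create-launch-configuration \
        --launch-configuration-name webapp-lc-v2 \
        --image-id ami-0abc1234def567890 \
        --instance-type t3.small \
        --key-name my-keypair \
        --security-groups sg-0123456789abcdef0

    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name webapp-asg \
        --launch-configuration-name webapp-lc-v2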
You should still seek to automate some of your activities using tools such as those you mentioned. You don't need to do this all at once. Just figure out a manual part in your process that you want to automate and do it.