I just tested a deployment with AWS Amplify and the Amazon console.
My app is hosted in Paris. When I run a test with GTmetrix (based in Canada), I get a bad "Largest Contentful Paint" of more than 4.2 s.
On the other hand, here in Europe, it loads very quickly (max 1 second).
I tested with a Canada-based VPN, and it is slow to load. In comparison, when I host the application on another service (like Vercel or Netlify), loading is much faster.
I thought AWS Amplify worked with the CloudFront CDN. Given the slowness in other countries, I have the impression that it is not working properly.
Can you tell me why ?
Thank you
PS: This is only a static Vue.js application.
There are many factors that can lead to this slowness.
But, Yes - AWS Amplify leverages the Amazon CloudFront Global Edge Network to distribute your web app globally. To deliver content to end users with lower latency, Amazon CloudFront uses a global network of 144 Points of Presence (133 Edge Locations and 11 Regional Edge Caches) in 65 cities across 29 countries.
For debugging purposes, one thing you can try is hosting your static website in S3 and serving it through CloudFront (this article may help you troubleshoot).
Related
I have to build a web application that supports a maximum of 10,000 concurrent users for 1 hour. The web server is NGINX.
The application is a simple landing page with an HTML5 player with streaming video from CDN WOWZA.
Can you suggest a suitable deployment on AWS?
A load balancer in front of 2 or more EC2 instances?
If so, which EC2 sizing do you recommend? Is it better to use Auto Scaling?
thanks
Thanks for your answer. The application is 2 PHP pages, and the impact is minimal because in the PHP code I only write 2 functions that check the user/password and a token.
The video is provided by the Wowza CDN because it is live streaming, not on-demand.
What tool or service do you suggest for stress testing the web server?
I have to make a web application with a maximum of 10,000 concurrent users for 1 hour.
Avg ~3 requests/s; that is not bad at all. Sizing is a complex topic, and without more details, constraints, testing, etc., you cannot get a reasonable answer. There are many options, and without more information it is not possible to say which one is best. You just stated NGINX, but not what it's doing (static sites, PHP, CGI, proxy to something else, etc.).
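As a back-of-the-envelope check on that average, spreading 10,000 users over the hour (assuming, purely for illustration, one request per user) gives roughly 3 requests per second:

```python
# Back-of-the-envelope request rate: users spread over a time window,
# with a (hypothetical) fixed number of requests per user.
def avg_requests_per_second(users: int, duration_seconds: int,
                            requests_per_user: int = 1) -> float:
    return users * requests_per_user / duration_seconds

rate = avg_requests_per_second(10_000, 3600)
print(f"{rate:.1f} requests/s")  # roughly 2.8 requests/s on average
```

Keep in mind real traffic is bursty, so the peak rate you must size for can be far above this average.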
The application is a simple landing page with an HTML5 player with streaming video from CDN WOWZA.
I will just lay down a few common options:
Let's assume it is a single static web page (another assumption) referring to an external resource (the video). Then the simplest and most scalable solution would be an S3 bucket behind CloudFront (CDN).
If you need some simple, quick logic, a Lambda function behind a load balancer could be good enough.
And you can of course host your solution on full compute (EC2, Elastic Beanstalk, ECS, Fargate, etc.) with different scaling options. But you will have to test to find your feasible scaling parameters and your bottleneck (I/O, network, CPU, etc.). Please note that different instance types may have different network and storage throughput. AWS gives you the opportunity to test and find out what is good enough.
I'm having difficulty setting up blue/green deployments for my S3 static website. I publish a version of the website to a given bucket, and it is exposed through:
a CloudFront distribution,
then Route 53,
and yet another CDN (corporate, which resolves the DNS) to reach the internet.
I've tried some "compute" solutions, like an ALB, but without success.
The main source of my difficulty is the long DNS propagation time when I update CloudFront with a new address, which makes it hard to roll back from a new version to the old one (I'm considering using different buckets for this publication).
Has anyone been through this or have any idea how to solve this?
AWS recommends that you create different CloudFront distributions for each
blue/green variant, each with its own DNS.
From the Hosting Static Websites on AWS prescriptive guidance:
Different CloudFront distributions can point to the same Amazon S3
bucket so there is no need to have multiple S3 buckets. Each variation
[A/B or blue/green] would store its assets under different folders in the same S3 bucket.
Configure the CloudFront behaviors to point to the respective Amazon
S3 folders for each A/B or blue/green variation.
The other key part of this strategy is an Amazon Route 53 feature
called weighted routing. Weighted routing allows you to associate
multiple resources with a single DNS name and dynamically resolve DNS
based on their relative assigned weights. So if you want to split your
traffic 70/30 for an A/B test, set the relative weights to be 70 and
30. For blue/green deployments, an automation script can call the Amazon Route 53 API to gradually shift the relative weights from blue
to green after automated tests validate that the green version is
healthy.
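The weighted-routing behaviour described above can be simulated locally. This sketch (record names and weights are illustrative) shows how relative weights translate into a traffic split:

```python
import random

# Simulate Route 53 weighted routing: each DNS resolution picks one
# record with probability proportional to its relative weight (70/30 here).
records = {"blue.example.com": 70, "green.example.com": 30}

def resolve(rng: random.Random) -> str:
    names, weights = zip(*records.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
picks = [resolve(rng) for _ in range(10_000)]
blue_share = picks.count("blue.example.com") / len(picks)
print(f"blue received {blue_share:.0%} of resolutions")  # close to 70%
```

For a blue/green shift, an automation script would repeatedly update the weights (e.g. via the Route 53 API) from 100/0 toward 0/100 as health checks pass.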
Hosting Static Websites on AWS is a whitepaper from 2016. It relies on examples that no longer work: you can't simply set up two CloudFront distributions serving the same CNAME for DNS switching.
Another way is to implement the green/blue logic in Lambda@Edge.
You can do blue/green or gradual deployment with a single CloudFront distribution, 2 S3 buckets and Lambda@Edge.
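A minimal origin-request handler for that pattern might look like the sketch below. The bucket domains, the cookie name, and the "green by cookie" convention are all assumptions for illustration, not the template's actual code:

```python
# Lambda@Edge origin-request handler (sketch): route requests carrying a
# "deployment=green" cookie to the green S3 bucket, everyone else to blue.
BLUE_ORIGIN = "blue-site.s3.amazonaws.com"    # hypothetical bucket domains
GREEN_ORIGIN = "green-site.s3.amazonaws.com"

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    cookies = request["headers"].get("cookie", [])
    wants_green = any("deployment=green" in c["value"] for c in cookies)
    domain = GREEN_ORIGIN if wants_green else BLUE_ORIGIN
    request["origin"]["s3"]["domainName"] = domain
    # The Host header must match the origin domain for S3 to accept it.
    request["headers"]["host"] = [{"key": "Host", "value": domain}]
    return request
```

CloudFront only invokes origin-request functions on cache misses, so blue and green responses should also be cached under different cache keys, e.g. by including the cookie in the cache key.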
You can find a ready-to-use CloudFormation template that does this here.
Current Stack
I am using CloudFront to distribute my static website objects, which live in an S3 bucket. I am using Route 53 to handle my DNS routing and health checking.
What I'd like to accomplish
I recently came across Netlify, which does split testing between different feature branches. I would like to stick with my current stack on AWS but build in this functionality for A/B testing.
What I tried
Originally, I wanted Route 53 to serve 2 separate CloudFront distributions, each with its own S3 bucket. I would use weighted round robin to send 10% of traffic to the testing environment and the other 90% to the production environment. I quickly learned that Amazon does not allow the same domain to be served by 2 different CloudFront distributions, each with its own S3 bucket.
The other option was to do this testing at the edge nodes of my CloudFront distribution. This would require me to serve two different objects from the same S3 bucket, which seems very messy and not scalable.
My question
Is it even possible to replicate what Netlify does with split testing on AWS? If so, how can I implement it? If not, what is my next best option for A/B testing a static website?
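Building on the Lambda@Edge idea from the thread above, a sticky 10/90 split at the edge could be sketched like this (the cookie-free IP-hash bucketing and the `/test` path prefix are hypothetical choices, not an AWS recipe):

```python
import hashlib

# Sketch of a sticky A/B split at the edge: hash the client IP into
# buckets 0-99 and send buckets 0-9 (about 10%) to the test variant.
def choose_variant(client_ip: str, test_percent: int = 10) -> str:
    bucket = int(hashlib.md5(client_ip.encode()).hexdigest(), 16) % 100
    return "test" if bucket < test_percent else "production"

def viewer_request_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    if choose_variant(request["clientIp"]) == "test":
        request["uri"] = "/test" + request["uri"]  # hypothetical path prefix
    return request
```

Hashing the IP keeps each user on the same variant without state; a `Set-Cookie` on the response would be a more robust way to make the assignment sticky across networks.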
I have an Angular application with a Node.js backend (REST API). I am confused between S3 and EC2: which one is better, and what are the pros and cons of deploying to each, considering an average load? Help will be highly appreciated.
I figured it out by myself.
S3 is used to store static assets like videos, photos, text files and
files of any other format. It is highly scalable, reliable, fast, inexpensive
data storage infrastructure.
EC2 is like your own server. And since it is in the cloud, computing
capacity can be decreased or increased instantly as the server needs.
So my confusion is cleared up as follows...
When we build an Angular 2 application, it generates .js files, called
bundles in Angular 2 terms. These files can be hosted in an S3 bucket and accessed through CloudFront in front of it, which is a very fast cache. And the pricing model is pay per request.
But EC2 is like running your own server, and we have to configure the
server ourselves, so it is not a good fit for the Angular application. It is good for the Node
application, as that needs to do computation.
You can set up an Ubuntu server on EC2 with
Nginx to serve your Angular frontend and proxy requests to your
Node.js API
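A minimal nginx server block for that setup might look like this (the paths, domain and backend port are assumptions):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve the compiled Angular bundle
    root /var/www/app/dist;
    index index.html;

    location / {
        # Fall back to index.html so Angular's client-side routing works
        try_files $uri $uri/ /index.html;
    }

    # Proxy API calls to the Node.js backend (assumed to listen on 3000)
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```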
S3 is file storage, mainly for serving static content and media files (jpg, fonts, mp4, etc.)
Theoretically you can host everything on your EC2 instance, but with S3 it is easier to scale, back up and migrate your static assets.
You can probably start with one simple EC2 instance to run everything; once everything is working fine, you can try moving the static assets to S3.
I have seen PHP libraries for SimpleDB but nothing too interesting... are there any best practices or frameworks for this or should I just go at it? Thanks!
Aside from Amazon's basic PHP library for SimpleDB, there are two others that I know of.
Paws is a SimpleDB specific library, and
Tarzan which has support for many Amazon web services and seems to be well documented.
One thing to be aware of is that any individual SimpleDB request may not be as fast as a normal database call, but much of the time you can make your calls in parallel.
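The parallel-calls point can be illustrated like this (in Python rather than PHP, with a sleeping stub standing in for the SimpleDB round trip):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for one SimpleDB request with ~50 ms round-trip latency.
def fake_simpledb_query(item_id: str) -> str:
    time.sleep(0.05)
    return f"result-for-{item_id}"

ids = ["a", "b", "c", "d"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(ids)) as pool:
    results = list(pool.map(fake_simpledb_query, ids))
parallel_time = time.perf_counter() - start

# The four 50 ms calls overlap, so wall time stays far below 4 x 50 ms.
print(f"{len(results)} results in {parallel_time:.2f}s")
```

The same effect in PHP would come from issuing the HTTP requests concurrently (e.g. with curl_multi) instead of one after another.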
If you are able to run your web app on Amazon EC2 (either self-hosted or with a hosting company that uses EC2), you will see much lower latency on SimpleDB requests than you'll get from a PHP host outside of Amazon's cloud. When running on EC2, I typically get round-trip latencies of 2-7 ms to SimpleDB (not including the request processing time).