I'm currently trying, for my own education, to achieve the fastest possible site speed for a landing page. I first hosted it on SiteGround with all speed optimizations enabled (minification, Level 3 SuperCacher (Memcached), lazy loading, and Cloudflare).
I then set up a nearly identical site on AWS (a 100% match isn't possible, since SiteGround uses its own optimizer).
I assumed AWS would be faster, but when I look at my developer toolbar, Pingdom, or GTmetrix, SiteGround wins. The reason is that every file has a longer waiting time. I know the difference is minimal, but given that I want maximum speed, I'm wondering what causes it. I tried a bigger instance, but changing from t2.micro to m4.16xlarge made no difference; I suspect instance size wouldn't matter anyway with no visitors.
[Screenshot: loading times on SiteGround]
[Screenshot: loading times on my AWS site]
The difference is that the JS and CSS files load in 20-30 ms on SiteGround and in 40-50 ms on AWS.
The last options I could try are Varnish Cache or moving to Lightsail, but I'm not sure either would help.
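A minimal way to sample that waiting time (time-to-first-byte) repeatedly, rather than relying on single waterfall runs, could look like this in Python; the URLs are placeholders for the two copies of one asset:

    import requests

    URLS = {
        "siteground": "https://example-sg.com/assets/site.css",  # placeholder
        "aws": "https://example-aws.com/assets/site.css",        # placeholder
    }

    for name, url in URLS.items():
        samples = []
        for _ in range(10):
            # stream=True returns once headers arrive, so `elapsed`
            # approximates time-to-first-byte, not full download time.
            r = requests.get(url, stream=True, timeout=5)
            samples.append(r.elapsed.total_seconds() * 1000)
            r.close()
        print(f"{name}: median TTFB ~ {sorted(samples)[len(samples) // 2]:.1f} ms")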
In my scenario, the CDN requests are for images on the article pages. I want to make sure each image exists before rendering; if one doesn't exist, we show a generic one. An example is https://cloudfront.abc.com/CDNSource/teasers/56628.jpg; there is one image per article.
The problem occurs when the synchronous requests hold up page execution for many minutes, until the load balancer times out the request. This looks to be caused by making HTTP requests with no HTTP timeout set.
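One way to bound those checks is a HEAD request with an explicit timeout, so a slow CDN fails fast instead of hanging the page. A minimal sketch in Python with the requests library (the fallback path and the timeout values are assumptions):

    import requests

    GENERIC_TEASER = "/static/teasers/generic.jpg"  # hypothetical fallback path

    def teaser_url(article_id):
        """Return the CDN teaser URL if the image exists, else a generic one."""
        url = f"https://cloudfront.abc.com/CDNSource/teasers/{article_id}.jpg"
        try:
            # HEAD avoids downloading the body; (connect, read) timeouts in
            # seconds ensure a slow response can't stall rendering for minutes.
            resp = requests.head(url, timeout=(0.5, 1.0))
            return url if resp.status_code == 200 else GENERIC_TEASER
        except requests.RequestException:
            return GENERIC_TEASER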
Currently, my resources are in S3, so perhaps a cron job could sync the data hourly to the web servers, with each web server taking a copy of the S3 bucket when it is built. For that solution, though, I'd need an EBS volume that scales to our total image size, and I'm not sure what that is. Can anyone guide me on how to calculate it?
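To size that volume, you can sum the object sizes in the bucket. A sketch using boto3 (the bucket name and prefix are hypothetical); leave headroom for growth and filesystem overhead:

    import boto3

    def bucket_size_gib(bucket, prefix=""):
        """Sum object sizes under a prefix to estimate required EBS capacity."""
        s3 = boto3.client("s3")
        total_bytes = 0
        for page in s3.get_paginator("list_objects_v2").paginate(
            Bucket=bucket, Prefix=prefix
        ):
            for obj in page.get("Contents", []):
                total_bytes += obj["Size"]
        return total_bytes / (1024 ** 3)

    size = bucket_size_gib("my-image-bucket", "CDNSource/teasers/")  # hypothetical
    print(f"current: {size:.1f} GiB; with 30% headroom: {size * 1.3:.0f} GiB")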
I had previously tried EFS for session storage, but found its cost far too high for us to use in production (over $15,000/month). Please also advise if there would be a better solution for this.
I want to host a scalable blog or similar application in Node.js on AWS, making use of AWS technologies. The idea is to have a small EC2 server that is not responsible for serving the website, but only for running the CMS/admin panel. While these operations could be serverless as well, I think a small dedicated EC2 instance could be more efficient, and it works better with existing frameworks.
In my diagram above, you can see there are two types of users: the audience and the admins/writers. Admin CRUD operations trigger a Lambda function, which regenerates the static site after admin changes and delivers it to S3. Users are directed to the static site hosted in S3; only admins/writers have access to the server-connected part of the site.
I think this is a good design for an extremely scalable and relatively cheap site, as long as the user-facing side is entirely static. The alternative is putting a CDN in front of a dynamically served site, but then I'd have to deal with cache invalidation, a site that updates more slowly, and a larger server.
This seems like a win-win to me. Feedback?
This ought to be a comment rather than an answer, but as I don't have enough points...
There are a couple of other considerations for this architecture. Lambda functions are great for scaling microservices out horizontally, with each small function executing in parallel tens or hundreds of times. Generating a static site, by contrast, is typically a single-threaded operation, so you may not see the gains you expect. You'll also need to watch the timeout period (currently a maximum of 300 seconds) and make sure you can generate the site within it. Of course, when your Lambda code isn't running, you aren't being charged.
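As a minimal illustration of that single-threaded generation step, a Python Lambda handler along these lines (the bucket name and the rendering stub are hypothetical) regenerates and publishes the site in one sequential pass, so the whole pass must fit in the configured timeout:

    import boto3

    s3 = boto3.client("s3")
    SITE_BUCKET = "my-static-site-bucket"  # hypothetical

    def render_all_pages():
        # Hypothetical stand-in for a real generator, e.g. pulling posts
        # from a data store and rendering templates to HTML strings.
        return {"index.html": "<html><body>Hello</body></html>"}

    def handler(event, context):
        """Rebuild the static site after an admin change and publish to S3."""
        pages = render_all_pages()
        for path, html in pages.items():
            s3.put_object(
                Bucket=SITE_BUCKET,
                Key=path,
                Body=html.encode("utf-8"),
                ContentType="text/html",
            )
        return {"published": len(pages)}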
For your admin frontend I would suggest Elastic Beanstalk; even if you peg it at a single instance, it gives you lots of great features like rolling updates.
Good luck with the project.
OK, so I've been playing around with Amazon Web Services, trying to speed up my website and save resources on my server by using AWS S3 and CloudFront.
I ran a page speed test initially; the page loaded in 1.89ms. I then put all the assets in an S3 bucket, made that bucket available to CloudFront, and used the CloudFront URLs on the page.
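For anyone reproducing that upload step, a boto3 sketch (bucket and file names are placeholders); an explicit Cache-Control header is worth setting, since CloudFront and browsers honor it:

    import boto3

    s3 = boto3.client("s3")

    # Versioned filenames plus a long max-age let edges and browsers cache
    # aggressively without needing invalidations.
    s3.upload_file(
        "dist/app.v42.css",          # placeholder local file
        "my-assets-bucket",          # placeholder bucket
        "assets/app.v42.css",
        ExtraArgs={
            "ContentType": "text/css",
            "CacheControl": "public, max-age=31536000, immutable",
        },
    )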
When I ran the page speed test again using this tool, testing from all of their server locations, I got these results:
Lowest load time across servers: 3.66ms
Highest load time across servers: 5.41ms
As you can see, there is a large increase in load time. Have I missed something? Is it configured wrong? I thought the CDN was supposed to make the page load faster.
Have I missed something? Is it configured wrong? I thought the CDN was supposed to make the page load faster.
Maybe, in answer to both of those questions.
CDNs provide two things:
Ability to scale horizontally as request volume increases by keeping copies of objects
Ability to improve delivery experience for users who are 'far away' (either in network or geographic terms) from the origin, either by having content closer to the user, or having a better path to the origin
When you introduce a CDN, there are (again) two things you need to keep in mind:
Firstly, the CDN generally holds a copy of your content. If the CDN is "cold", there is less likely to be any acceleration, especially when the test user is close to the origin.
Secondly, you're changing the infrastructure to add an additional 'hop' to the route. If the cache is cold, the origin isn't busy, and you're already close to it, you'll almost always see an increase in latency, not a decrease (the sketch below shows one way to check for the cold-cache case).
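A quick way to check the cold-cache case, assuming CloudFront, which reports cache status in an X-Cache response header (the URL below is a placeholder):

    import requests

    url = "https://d111111abcdef8.cloudfront.net/assets/app.css"  # placeholder

    for i in range(3):
        r = requests.get(url, timeout=5)
        # CloudFront returns e.g. "Miss from cloudfront" on the first request
        # and "Hit from cloudfront" once the edge has cached the object.
        print(i, r.headers.get("X-Cache"), f"{r.elapsed.total_seconds() * 1000:.0f} ms")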
Whether a CDN is right for you or not depends on the type of traffic you're seeing.
If I'm a distributor based in the US with customers all over the world, even for dynamic, completely uncachable content, then using a CDN can help me improve performance for those users.
If I'm a distributor based in Northern Virginia with customers only in Northern Virginia, and I only see one request an hour, then I'm likely to see no visible performance improvement: the cache is unlikely to stay populated, the network path isn't preferable, and I don't have to deal with scale.
Generally, yes, a CDN is faster. But 1.89ms is scorchingly fast; you probably won't beat that, certainly not under any kind of load.
Don't over-optimize here. Whatever you're doing, you have bigger fish to fry than 1.77ms in load time based on only three samples and no load testing.
I've seen the same thing. I initially moved some low-traffic static sites to S3/CloudFront to improve performance, but found that even a small Linux EC2 instance running nginx gave better response times in my use cases.
For a high-traffic, geographically dispersed set of clients, S3/CloudFront would probably outperform it.
BTW: I suspect you don't mean 1.89ms, but instead 1.89 seconds, correct?
I've been running a single Django application on Amazon EC2, using gunicorn to serve the Django portion and nginx for the static files.
I'm going to be starting a new project soon, and I'm wondering which of the following options would be better:
A larger Amazon EC2 instance (Medium) running multiple Django applications, or
multiple smaller EC2 instances (Small/Micro), each running its own Django application?
Does anybody have experience with this? What are the relevant performance metrics I could measure to get a good cost-to-performance ratio?
The answer to this question really depends on your app, I'm afraid. You need to benchmark to be sure you are running on the right instance type. Some key metrics to watch are:
CPU
Memory usage
Requests per second, per instance size
App startup time
You will also need to tweak nginx/gunicorn settings to make sure you are running with a configuration that is optimised for your instance size.
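As a starting point, a gunicorn config sketch using the workers formula suggested in the gunicorn docs (the values below are defaults to tune against your benchmarks, not prescriptions):

    # gunicorn.conf.py -- assuming a mostly CPU-bound Django app behind nginx
    import multiprocessing

    # The gunicorn docs suggest (2 * cores) + 1 as a reasonable default,
    # which stays small on a Micro and scales up on a Medium.
    workers = multiprocessing.cpu_count() * 2 + 1
    worker_class = "sync"
    bind = "127.0.0.1:8000"  # nginx proxies to this address
    timeout = 30             # seconds before a stuck worker is killed
    keepalive = 2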
If costs are a factor for you, one interesting metric is "cost per ten thousand requests", i.e. how much are you paying per 10000 requests for each instance type?
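A back-of-the-envelope sketch of that metric; the throughput and price figures below are illustrative, so substitute your own benchmark results:

    # Hypothetical sustained requests/second from load tests, plus
    # on-demand hourly prices; replace both with your measured numbers.
    instances = {
        "t2.micro":  {"rps": 40,  "usd_per_hour": 0.0116},
        "t2.medium": {"rps": 180, "usd_per_hour": 0.0464},
    }

    for name, d in instances.items():
        requests_per_hour = d["rps"] * 3600
        cost_per_10k = d["usd_per_hour"] / requests_per_hour * 10_000
        print(f"{name}: ${cost_per_10k:.5f} per 10,000 requests")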
I agree with Mike Ryan's answer. I would add that you also have to evaluate whether your app needs a separate database. It sometimes makes sense to isolate large or complex applications with their own database, which makes changes and maintenance easier, and also reduces your risk when something goes wrong: not all of your user base is affected in the case of an outage. You might want a separate instance for those applications. Note that Django supports multiple databases in one project, but, again, that increases the complexity of changes and maintenance.
I'm trying to understand how many EC2 servers I should start.
I understand the point of AWS is to be able to scale up quickly, but just for cost estimates, how many micro EC2 nodes (approximately) would be needed to run a simple PHP web app?
Just for the sake of estimating, assume the app is loading CodeIgniter and serving a static page without any database access.
Any ideas?
It completely depends on the type of site you have. If it serves static web pages, then one server with caching should be fine. Even dynamic pages should be fine if you cache in the right places.
Depending on how much traffic the sites get, you might see several hundred to several thousand hits per minute. A single EC2 instance should be able to just about manage that (for a mostly static web page); see the arithmetic below.
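As a sanity check on those numbers, some purely illustrative arithmetic:

    # "Several thousand hits per minute" is less daunting in per-second terms.
    hits_per_minute = 3000
    rps = hits_per_minute / 60           # 50 requests/second
    cpu_budget_ms = 1000 / rps           # ~20 ms of CPU per request, per core
    print(f"{rps:.0f} req/s leaves ~{cpu_budget_ms:.0f} ms/request per core")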
I would recommend not worrying about it. Any spike will last a day at most. If you need to budget, plan for 100 machines for one day. If you really need all of them, you'll have a few hours to build a simple static email-collection page and redirect most of your traffic there.