Website response is slow from some locations
I have a web server hosted in the AWS Oregon region.
Our customers access this website from different parts of the world (mostly the US, the UK, and Dubai).
Static assets are already served through Amazon CloudFront.
Lately, most customers in Dubai and the UK are complaining that our website is very slow, but when we access the website from the USA and other locations, it is fast.
What would be the best solution to make the site fast for all customers in all locations?
The web server is behind an ELB, and we use an ACM SSL certificate on the ELB for HTTPS.
Please suggest the best solution. What about Route 53 latency-based routing? Will that work in my case?
AWS has a lot of regions, and many of them are closer to your customers. So why not replicate the server to a closer region to match your customers' locations? It's a long way from Oregon to the UK and Dubai.
EDIT:
shahid: "So you're saying we have to set up a web server in every region, which will cost us a lot more?"
@shahid It doesn't have to be every AWS region; maybe just one more is enough for your problem. For your example (the UK and Dubai), you can launch an instance in Paris or London, which is a lot closer than Oregon. This is what the cloud is for, and why it was created. Since you already have your assets in CloudFront, you should do the same with the servers: clone the instance into the closest region. Without this, you are going back to the old times of one server for the whole world, which means no cloud and long travel times.
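To answer the Route 53 part of the question: once a second ELB exists in, say, London, latency-based routing is just two records with the same name but different region values, and Route 53 answers each query with the lowest-latency one. A rough sketch of the change batch follows; all domain names and zone IDs here are invented placeholders, not real values:

```python
import json

def latency_alias(name, region, elb_dns, elb_zone_id):
    """One latency-routed alias record for a Route 53 change batch."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": f"web-{region}",  # must be unique per record in the set
            "Region": region,                  # Route 53 picks the lowest-latency region
            "AliasTarget": {
                "HostedZoneId": elb_zone_id,   # the ELB's own zone ID, not your hosted zone
                "DNSName": elb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

# Hypothetical ELBs in Oregon and London serving the same site.
change_batch = {"Changes": [
    latency_alias("www.example.com.", "us-west-2",
                  "my-elb-oregon.example.elb.amazonaws.com.", "Z1H1FL5HABSF5"),
    latency_alias("www.example.com.", "eu-west-2",
                  "my-elb-london.example.elb.amazonaws.com.", "ZHURV8PSTC4K8"),
]}
print(json.dumps(change_batch, indent=2))
```

You would pass a batch like this to `aws route53 change-resource-record-sets`. Note that ACM certificates are regional, so the second ELB needs its own certificate issued in its region.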
EDIT2:
You can try a tool like Pingdom to check the response times from different locations across the world. With it you can verify that the connection to the server in Oregon is much faster from the US than from the UK and Dubai. You will also see the response times from CloudFront, to check that it's working as it should.
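If you'd rather measure from your own machines than rely on Pingdom, a few lines of stdlib Python give a rough number. The URL is a placeholder, and this only measures from wherever you run it, so you'd run it once from each location you care about:

```python
import time
from urllib.request import urlopen

def median(values):
    """Median of a list of numbers; more robust to outliers than the mean."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def fetch_seconds(url, attempts=5):
    """Median wall-clock seconds to open `url` and read the first KB."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with urlopen(url, timeout=10) as resp:
            resp.read(1024)
        samples.append(time.perf_counter() - start)
    return median(samples)

# e.g. fetch_seconds("https://www.example.com/")
# Run from boxes in the UK, Dubai, and the US and compare the numbers.
```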
Related
I'm wondering how much more expensive it would be to access my EC2 instance from a different country than if I just waited until I was back at home. I could wait, but it would be good to see the international rates for use.
I'm sending HTTP requests from within an iOS app to query my database. The app also has GPS data, so I'm thinking that getting a completely different location into the database would help with test cases.
If you are always accessing your app from the Internet, then there will be no change in the access charges.
Data Transfer is charged when traffic goes out to the Internet; it doesn't matter where on the Internet it goes.
One exception to this is Amazon CloudFront, a global caching service that charges different rates depending on the edge location from which traffic is served.
The site I'm working on will potentially get 20,000 visitors per day. That's no guarantee, but it's supposed to be used every day by each employee in an organisation.
In the past I've just used a single t2.micro EC2 instance with an attached EBS volume to host sites, which has always been enough because these sites don't get a lot of traffic. But with 20,000 visitors a day how could I improve my AWS architecture to scale?
The site is going to have a profile for each user, including a profile picture - so potentially 20,000 image files. Should I be writing these to an S3 bucket instead of to the EBS?
Would a t2.micro ec2 instance cope with the scale, or should I be using a t2.small, t2.medium or even t2.large?
My MySQL databases are currently on the EBS volume, but should I use RDS?
All the users are in the UK, so I'm assuming using CloudFront is overkill?
You're right to assume CloudFront is overkill since all your users are localized to the UK.
Update: using a CDN will take a lot of stress off your servers by caching files rather than processing them on every request.
Look at it this way: if you get 100,000 hits a day and 90% of those hits are cached and served by the CDN, then your server only has to process 10,000 hits a day. That could be the difference between needing an m4.xlarge and just needing a t2.small.
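The back-of-the-envelope math above is worth writing down, since it's the first sizing calculation to run for any CDN decision:

```python
def origin_hits(total_hits, cache_hit_ratio):
    """Requests that still reach the origin after the CDN absorbs its share."""
    return round(total_hits * (1 - cache_hit_ratio))

print(origin_hits(100_000, 0.90))  # 10000 hits/day left for the origin server
print(origin_hits(20_000, 0.80))   # 4000, for the 20k-visitor site at a lower hit rate
```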
@mark-b
Use the Ireland region (and soon you'll be able to copy over to the UK region).
If you want to keep your database on your instance, I would highly recommend a somewhat bigger one. As a quick and easy start, launch the smallest T-series instance with EBS and beta test with 1,000-5,000 users, and see how that goes. Notify the test group that all their data will disappear, so they shouldn't invest a bunch of time.
Next, get your analytics on the system and see whether it will hold up with 4-5x more users. For SQL database work you'll eventually want an M-series instance, I believe.
Also, you could always create a load-balanced fleet. In Elastic Beanstalk you do this by choosing Load Balanced instead of Single Instance. Create an Auto Scaling group and boom, check that off.
As for the images, yes, I would recommend S3. You don't want database load, web hits, and image I/O all competing on one instance.
Lastly, if you do plan on going to the UK region (not positive that's been rolled out yet), I would recommend splitting your application into separate pieces. This is really good practice for using all the resources AWS provides.
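If you do move the profile pictures to S3, it's worth deciding on a key scheme up front so 20,000 images don't land under one flat prefix. A minimal sketch, assuming a layout and naming convention that are purely illustrative:

```python
import hashlib

def profile_image_key(user_id, original_filename):
    """S3 object key for a user's profile picture, e.g. profiles/<hash>/42.png.

    The hash prefix spreads keys across the namespace and avoids trusting
    user-supplied path characters; only the file extension is reused.
    """
    ext = original_filename.rsplit(".", 1)[-1].lower()
    prefix = hashlib.sha256(str(user_id).encode()).hexdigest()[:8]
    return f"profiles/{prefix}/{user_id}.{ext}"

# With boto3 (not shown) you'd then upload with something like:
#   put_object(Bucket="my-profile-bucket", Key=profile_image_key(42, "me.PNG"), Body=...)
```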
For a very fault tolerant system:
EC2 fleet (m or c series) with an ELB
S3 the images
RDS the users
CloudWatch the stats
Handle tenancy for the users with IAM groups
Authenticate with STS or AD or whatever (I've only gotten into Cognito recently)
Store their sessions and authentication state in ElastiCache (Redis)
Keep tabs on them with Kinesis (optional)
And let them search each other with CloudSearch (also optional)
Boss system right there!
And that's if you want to spend a bunch of cash but have a sweet, sweet system. If you want to spend next to nothing, make it serverless. This is a broad question with hundreds of possible combinations, so a lot of it is up to interpretation.
Hope this helps!
I'm looking for a way to pick the best AWS region to host a Proof of Concept installation for a potential customer in India.
For this, I'd like to try to ping the customer's web site (I verified that it's hosted in India, I assume by the customer itself since that's part of their business) from multiple AWS regions and see which one gives the best results.
I found multiple tools that let me run a ping from my own browser to multiple AWS locations (e.g. https://cloudharmony.com/speedtest, http://www.cloudping.info/) but none that will ping between all AWS regions and a specific third party.
Does such a tool exist, or is my only option to spin up an EC2 instance in each region and ping from it?
You might want to check the answers to this very similar question.
Keep in mind that not all regions have all AWS services available at this time, so make sure the region you pick has all the services that you plan to use. Also, Amazon has said that an India region is in the works.
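Whichever way you end up collecting the numbers (an EC2 instance per region, or a third-party tool), picking the winner from them is mechanical. The region names and latencies below are invented for illustration; since individual pings are noisy, it also helps to keep every region within a small margin of the best:

```python
def best_region(latency_ms):
    """Region with the lowest measured latency to the customer's site."""
    return min(latency_ms, key=latency_ms.get)

def regions_within(latency_ms, margin_ms=20.0):
    """All regions within `margin_ms` of the best, since single pings are noisy."""
    best = min(latency_ms.values())
    return sorted(r for r, v in latency_ms.items() if v - best <= margin_ms)

measured = {"ap-southeast-1": 62.0, "eu-west-1": 141.0, "us-east-1": 198.0}
print(best_region(measured))  # ap-southeast-1
```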
I'm developing a global mobile service that communicates with a back end (S3 as the file server, EC2 as the application server).
But I don't know how many S3 buckets and EC2 instances are needed, or where I should launch them.
So I'd like to ask about the points below.
I'm planning to create my S3 bucket in Oregon. As you know, CloudFront is a good solution for fetching images quickly, but the problem I want to solve is uploading. I thought of two solutions. The first is to upload files to S3 through CloudFront using the PUT method. The second is to create several buckets in different regions. Which is the better solution?
Right now I'm developing the application server on only one EC2 instance. I might have to run several EC2 instances for a global service, but I don't know how to make end users connect to a specific one of several instances. Can you explain?
Thanks
I think your understanding of S3 is slightly off.
You wouldn't, and shouldn't, need to create region-specific S3 buckets for the purposes you are describing.
If you are using the service for image delivery over plain HTTP, you can create the bucket anywhere and then use an Amazon CloudFront distribution as the front end, which will give you roughly 40 edge locations around the world for your geo-optimization.
The most relevant edge location will be used for each user around the world; it will request the image from your S3 bucket and cache it based on your metadata settings. (In my experience it is typically refetched about every 24 hours on low-traffic websites, even when you set an Expires age of months or years.)
You also don't "mount" S3. You just create a bucket, and you shouldn't ever want to create multiple buckets that store the same data.
.........
For your second question, about creating a "global service" on EC2: what are you hoping to actually achieve?
The web is naturally global. Are your users really going to fret over an additional 200 ms of latency?
You haven't really described what your service will do, but one approach would be to do all of your computing in Oregon and then run cache servers, such as Varnish, in other regions. You can use Route 53 for the routing, and you can also take advantage of ELB.
My recommendation would be to stop what you are doing and launch everything from Oregon. The web is already global, and I don't think you need to worry about issues like this until you hit scale, at which point I assume you can just hire someone to solve the problem for you. It sounds like you have the budget for it...
First of all, I am pretty new to AWS, so my question might seem amateur.
I am developing a web application which needs to be available globally, and I am currently hosting it on Amazon. Since the application is still under development, I have set it up in the Singapore region. When I test the application, I get good response times from locations on the east side of the globe (~50 ms), but from the US the response time is ~550 ms. So we decided to have two instances, one in Singapore and one in the US, but I'm not able to figure out a way to handle data replication and load balancing across regions; Elastic Beanstalk only allows me to do this within a particular region. Can somebody please explain how I can achieve global availability for my web app? The following are the services I currently use:
1. Amazon EC2
2. Amazon S3
I need both database replication and S3 file replication. It would also be great if there were a way to deploy my application in one place and have the changes reflected across all the instances we have around the globe.
Before you spend a lot of time and money setting up redundant servers in different regions, you may want to make sure that you can't get the performance improvement you need simply by putting Amazon CloudFront in front of your site:
Amazon CloudFront employs a network of edge locations that cache copies of popular files close to your viewers. Amazon CloudFront ensures that end-user requests are served by the closest edge location. As a result, requests travel shorter distances to request objects, improving performance. For files not cached at the edge locations, Amazon CloudFront keeps persistent connections with your origin servers so that those files can be fetched from the origin servers as quickly as possible. Finally, Amazon CloudFront uses additional optimizations – e.g. wider TCP initial congestion window – to provide higher performance while delivering your content to viewers.
http://aws.amazon.com/cloudfront/faqs/
The nice thing is, you can set this up and test it out in very little time and for very little money. Obviously this won't solve all performance problems, especially if your app is performance-bound at the database, but it is a good way of taking care of the low-hanging fruit when trying to speed up your website for users in diverse locations around the world.
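To give a sense of how little configuration a dynamic-content distribution takes, here is a trimmed-down sketch of a CloudFront distribution config with an existing web server as a custom origin. This is not a complete config; the real create-distribution API requires several more fields, and the domain name and identifiers below are placeholders:

```python
def cloudfront_sketch(origin_domain, comment):
    """Skeleton of a CloudFront distribution config for a dynamic-site origin.

    Only the fields relevant to the discussion are shown; a real call to
    create a distribution needs additional required fields.
    """
    return {
        "CallerReference": "my-site-2016-01",  # any unique string; guards against duplicate creates
        "Comment": comment,
        "Enabled": True,
        "Origins": {"Quantity": 1, "Items": [{
            "Id": "web-origin",
            "DomainName": origin_domain,       # your ELB or web server hostname
        }]},
        "DefaultCacheBehavior": {
            "TargetOriginId": "web-origin",    # must match an origin Id above
            "ViewerProtocolPolicy": "redirect-to-https",
        },
    }

config = cloudfront_sketch("myapp.ap-southeast-1.elb.amazonaws.com", "global front door")
```

The point of the sketch is the shape: one origin (your existing server), one default behavior, and CloudFront's edge network does the rest for cacheable responses.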