AWS: Choosing a region or choosing by budget

So, as the question title says: how should we architect the solution using AWS?
Should we choose a region up front, assuming we might use all of its features in the future, or stick with a region that is nearby and migrate to other regions for additional services when needed?
How is this generally decided?

The cost difference between regions is fairly small for most services, but it is obviously worth noting if you're on a very tight budget.
Regarding availability, new services are most commonly available on day 1 in the following regions:
us-east-1
us-west-1
eu-west-1
You generally find that services are rolled out to other regions within a few weeks or months, with the exception of the China and GovCloud regions, which can see a more significant delay.
New regions generally launch with a core set of services such as EC2, S3, and RDS, with the remaining services added after launch.
If your application is client facing (a client directly interacts with the application, over either a web browser or a service API), then I believe geographical location can matter more than pricing. Delivering the best possible experience to the client is, in my opinion, more important: us-east-1 might be cheaper, for example, but your clients may be based in Europe.
If you want the cutting edge, the regions listed above will almost always be current. Obviously, you need to weigh all of these factors and decide based on what is most important for your use case.
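If you need to check whether a particular service is already available in a candidate region, one option is AWS's public SSM parameters for global infrastructure. A minimal boto3 sketch, assuming those public parameters are reachable from your account (the service name "lambda" and the query region are just examples):

import boto3

# Public SSM parameters list the regions that support a given service.
# The service name ("lambda") and the region used for the query are examples.
ssm = boto3.client("ssm", region_name="us-east-1")

regions = []
paginator = ssm.get_paginator("get_parameters_by_path")
for page in paginator.paginate(
    Path="/aws/service/global-infrastructure/services/lambda/regions"
):
    regions.extend(p["Value"] for p in page["Parameters"])

print(sorted(regions))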

Related

AWS Services and Regions

I am very new to AWS and wanted to clarify my understanding of AWS services. I have read that AWS has plenty of services that can also be accessed through an API. A service is basically a software program. So why are services not available in all regions? If my customers are in India, I can buy an EC2 instance in Asia, but why should I have to choose a service from US East? Also, why does AWS provide regions for endpoints? They could have installed all the services in all their regions, assuming that they are only software programs and not hardware resources.
If latency is not a big problem for you, I think you can choose the best-priced options for your resources. If latency is a big problem, you should choose the region/zone nearest your target market. For a better understanding, read the AWS documentation on Regions and Availability Zones.
AWS Services operate on multiple levels and are all exposed through APIs.
Some services operate at a global scope (e.g. Identity and Access Management or Route53), most on a regional level (e.g. S3) and others somewhere between the region and availability zone (EC2, RDS, VPC...).
AWS uses the concept of a region for multiple purposes, one of the major drivers being fault isolation. Something breaking in Ireland (eu-west-1) shouldn't stop a service in Frankfurt (eu-central-1) from operating. Latency is another driver here. Since physics is involved, higher distances also increase the latency, which makes things like replication more tricky. Data residency and other compliance aspects are also a good reason to compartmentalize services.
Services being regional results in their endpoints being regional as well.
As for not every service being available in every region: hardware availability is part of the reason; it doesn't make sense to have the more obscure hardware for niche use cases (think Ground Station, their satellite control service) in all regions. Aside from that, there are most likely some financial aspects involved as well: global scale and complexity come at a cost, and if demand isn't sufficient, it may not make sense to roll out a service everywhere.
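To make the global-versus-regional distinction concrete, here is a small boto3 sketch (purely illustrative): IAM uses a single global endpoint, while a DynamoDB client is bound to one region's endpoint and only sees resources created there.

import boto3

# IAM is a global service: one endpoint regardless of region.
iam = boto3.client("iam", region_name="us-east-1")
print(iam.meta.endpoint_url)           # https://iam.amazonaws.com

# DynamoDB is regional: each client talks to one region's endpoint and
# only sees the tables that exist in that region.
ddb_mumbai = boto3.client("dynamodb", region_name="ap-south-1")
ddb_virginia = boto3.client("dynamodb", region_name="us-east-1")

print(ddb_mumbai.meta.endpoint_url)    # https://dynamodb.ap-south-1.amazonaws.com
print(ddb_virginia.meta.endpoint_url)  # https://dynamodb.us-east-1.amazonaws.com
print(ddb_mumbai.list_tables()["TableNames"])  # tables in ap-south-1 only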

Disaster Recovery options on AWS

We are running on AWS, where we run everything in one region and use multiple AZs for our services, so if an AZ failed we would still be "up" and servicing our customers.
From reading the Reliability Pillar of the AWS Well-Architected documentation, this would suggest that this is enough to do in the case of a failure:
Unless you require a multi-region strategy, we advise you to meet your
recovery objectives in AWS using multiple Availability Zones within an
AWS Region.
I see tools out there like CloudEndure and Druva CloudRanger, but they seem to be aimed more at on-premises workloads or other cloud providers migrating to, or recovering on, AWS.
My question is this: it is hard to find a definitive answer, but it appears that regions never go down entirely; rather, individual services within an AZ, or the AZ itself, go down. So if you are using multiple AZs for your applications and databases and doing backups to S3 (with cross-region replication), is this enough for DR?
Regions may not go down but they can become functionally unusable. There was an outage of eu-west-2a about 3 months ago that rendered large parts of eu-west-2 more-or-less unusable.
If you want redundancy, you should be mirroring infra to at least one other region.
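On the backup side, S3 cross-region replication is straightforward to turn on once both buckets are versioned. A rough boto3 sketch, assuming the buckets already exist with versioning enabled; the bucket names and the IAM role ARN are placeholders:

import boto3

s3 = boto3.client("s3")

# Both buckets must already have versioning enabled for replication to work.
# Bucket names and the role ARN below are placeholders.
s3.put_bucket_replication(
    Bucket="my-primary-backups-eu-west-2",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "dr-copy",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # replicate every object in the bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-dr-backups-eu-west-1"},
            }
        ],
    },
)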

AWS serverless - cost of switching region

I am currently in the middle of developing an AWS serverless backend (Cognito, Lambda, API Gateway, DynamoDB, S3).
I find that I chose the wrong region earlier.
Questions:
1. Is there any difference when using different regions in AWS development?
2. Is the cost high when changing region in the middle of development (re-creating the DB/Lambda functions/API Gateway)?
3. What is the proper approach to switch to another region with the same serverless settings/config I am using now?
1. Cost and latency will differ.
Some services in AWS have different costs in different regions; data transfer out pricing, in particular, varies noticeably by region. Some services, such as S3, present a global namespace, but buckets still live in a specific region and are priced accordingly.
If your customer is in region A and is requesting services in region B, then the response will take ever so slightly longer. It's not usually long enough to warrant concern. Using CloudFront between the service and the customer will reduce the slowdown, and in many cases it makes for a faster service, so it's worth doing even if the customer and the service are in the same region.
2. It depends
If you’re creating these services manually then you’d have to spend that time in the console for the new region again. Time is money, and you’ll maybe make a mistake in setup - you’re only human.
If you’re creating these services in code - using CloudFormation (or AWS CDK, serverless.com, terraform or the many other ways to do Infrastructure as Code) then it won’t cost anything. You would have a single command (maybe a few) which will reproduce your infrastructure in any region.
Then, you'll need to migrate data. This is the unavoidable cost. If you've been running in region A for any time and then move to region B, you will need to transfer the data. That'll require a script to take the data out of the old DynamoDB table and put it into the new one.
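A rough sketch of such a script with boto3, assuming a straight full-table copy is acceptable (table and region names are placeholders; a large table would want parallel scans and retry handling):

import boto3

# Source and destination regions and table names are placeholders.
src = boto3.resource("dynamodb", region_name="us-east-1").Table("backend-items")
dst = boto3.resource("dynamodb", region_name="ap-southeast-1").Table("backend-items")

scan_kwargs = {}
with dst.batch_writer() as batch:
    while True:
        page = src.scan(**scan_kwargs)
        for item in page["Items"]:
            batch.put_item(Item=item)
        if "LastEvaluatedKey" not in page:
            break
        scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]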
3. Use Infrastructure as Code and always be prepared for data migration
Have a look at the AWS CDK. It allows you to define your services in languages such as Java, Python, or TypeScript/JavaScript, and has some nice tutorials. https://cdkworkshop.com/
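For illustration, a minimal CDK sketch in Python (assuming CDK v2; the stack and table names are made up), showing that pointing the same stack at another region is just a different Environment value:

from aws_cdk import App, Environment, Stack
from aws_cdk import aws_dynamodb as dynamodb
from constructs import Construct

class BackendStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # One example resource; Lambda, API Gateway, Cognito, etc. would be
        # declared the same way and deploy wherever the stack is deployed.
        dynamodb.Table(
            self, "ItemsTable",
            partition_key=dynamodb.Attribute(
                name="pk", type=dynamodb.AttributeType.STRING
            ),
        )

app = App()
# Switching region is a one-line change here (or an environment variable).
BackendStack(app, "Backend", env=Environment(region="ap-southeast-1"))
app.synth()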
As you code, build out your scripts to extract the data from DynamoDB. This is useful to have even if you don't transfer to a different region - maybe you want to run a copy in a staging/dev environment.
4. New services are not released in all regions at the same time
If you are using a brand new service or a new feature of an existing service, it might not yet be available in each region. Choose a region that supports all desired services and features. For example, in this Dec 2019 announcement by AWS about Inter-Region peering for Transit Gateway, it says this feature was released to "US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and EU (Frankfurt) AWS Regions" and the others would come soon.

AWS vs GCP Cost Model

I need to make a cost model for AWS vs GCP. Currently, our organization is using AWS. Our biggest services used are:
EC2
RDS
Lambda
API Gateway
S3
Elasticache
Cloudfront
Kinesis
I have very limited knowledge of cloud platforms. However, I have access to:
AWS Simple Monthly Calculator
Google Cloud Platform Pricing Calculator
A mapping of AWS services to GCP products
I also have access to CloudHealth so that I can get a breakdown of costs per services within our organization.
Of the 8 major services listed above, our main usage and costs go to EC2, S3, and RDS.
Our director of engineering mentioned that I should be most concerned with vCPU and memory.
I would appreciate any insight (big or small) that people have into how I can go about creating this model, any other factors I should consider, which functionalities of the two providers for the services are considered historically "better" or cheaper, etc.
Thanks in advance, and any questions people may have, I am more than happy to answer.
-M
You should certainly cost-optimize your resources. It's so easy to create cloud resources that people don't always think about turning things off or right-sizing them.
Looking at your Top 5...
Amazon EC2
The simplest way to save money with Amazon EC2 is to turn off unused resources. You can even stop instances overnight and on the weekend. If they are only used 8 hours per workday, then that is only 40 out of 168 hours, so you can save roughly 75% by turning them off when unused! For example, Dev and Test instances. People have written various types of automated utilities to turn instances on and off based on tags. Try searching the Internet for AWS Stopinator.
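A bare-bones sketch of that idea with boto3, assuming instances carry a (hypothetical) AutoStop=true tag and this runs on a schedule, for example from a small Lambda function:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example

# Find running instances that carry the (hypothetical) AutoStop=true tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:AutoStop", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)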
Another way to save money on Amazon EC2 is to use spot instances. They are a fraction of the price, but have a risk that they might be turned off when demand increases. They are great where it is okay for systems to be terminated sometimes, such as automated testing systems. They are also a great way to supplement existing capacity at a fraction of the price.
If you definitely need the Amazon EC2 instances to keep running all the time, purchase Amazon EC2 Reserved Instances, which also offer a price saving.
Chat with your AWS Account Manager for help with the above options.
Amazon Relational Database Service (RDS)
Again, Amazon RDS instances can be stopped overnight/on weekends and turned on again when needed. You only pay while the instance is running (plus storage costs).
Examine the CloudWatch metrics for your RDS instances and determine whether they can be downsized without impacting applications. You can even resize them when they are used less (eg over weekends). Everything can be scripted, so you could trigger such downsizing and upsizing on a schedule.
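As an illustration, a hedged boto3 sketch of scheduled downsizing (the instance identifier and instance classes are placeholders; a resize causes a brief interruption while the instance is modified):

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is an example

# Friday evening job: drop to a smaller class for the weekend (placeholder names).
rds.modify_db_instance(
    DBInstanceIdentifier="reporting-db",
    DBInstanceClass="db.t3.medium",
    ApplyImmediately=True,
)

# A second scheduled job on Monday morning would call modify_db_instance
# again with the larger class to scale back up.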
Also look at the Engine used with RDS. Commercial offerings such as Oracle and Microsoft SQL Server are more expensive than open-source offerings like MySQL and PostgreSQL. Yes, your applications might need some changes, but the cost savings can be significant.
AWS Lambda
It is most unusual that Lambda is #3 in your list. In fact, some customers never get a charge for Lambda because it falls in the monthly free usage tier. Having high charges means you're making good use of Lambda (which is saving you EC2 costs), but take a look at which applications are using it the most and see whether they are using it wisely.
When correctly used, a Lambda function should only ever run for a few seconds, so check whether any applications seem to be using it outside this pattern.
AWS API Gateway
Once again, these costs tend to be low ($3.50/million calls) so again I'd recommend trying to figure out how this is being used. If you really need that many calls, it would also explain the high Lambda costs. It would probably be more expensive if you were providing such functionality via Amazon EC2.
Amazon S3
Consider using different Storage Classes to reduce your costs. Costs can be reduced by:
Moving infrequently-accessed data to a different storage class
Moving data to One-Zone (if you have a copy of the data elsewhere, so don't need the redundancy)
Archiving infrequently-accessed data to Amazon Glacier, which offers much cheaper storage but does not have instant access
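The first and third of those can be automated with a lifecycle rule; a rough boto3 sketch, where the bucket name, prefix, and day thresholds are placeholders (ONEZONE_IA could be used instead of STANDARD_IA if the redundancy isn't needed):

import boto3

s3 = boto3.client("s3")

# Bucket name, prefix, and transition timings below are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-old-objects",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)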
With GCP, you can benefit by receiving discounts such as the Committed Use Discount and the Sustained Use Discount.
With a Committed Use Discount, you can receive a discount of up to 70% if your usage is predictable.
With the Sustained Use Discount, there is an incremental discount if you reach certain usage thresholds.
On your concern with vCPU and memory, you may use predefined machine types. They are cheaper than custom machine types.
Lastly, you can also test the charges by trying out the Google Cloud Platform Free Tier.

How to improve the performance of a Django application across different geographic regions?

I have a Django application that is hosted on an AWS box located in the us-east-1 geographic region using Nginx and django-channels. Recently, I have had some users in the ap-southeast-1 region complain that my app is not very responsive. The app runs fine for me as I am using us-east-1.
1. How can I detect that poor performance in a region is happening before a user complains?
2. What can I do to improve the app performance and user experience in the ap-southeast-1 region?
3. Is there any way to test the performance in another geographic region as part of unit testing or something similar?
I have a feeling the answer for #2 will have something to do with: (A) Adding another web server in ap-southeast-1 and (B) caching, but I'm keen to hear if there are additional things I should be doing.
However, I have no clue how to detect that slow performance is happening in other regions in the first place, or how to test to ensure it does not happen again in the future.
Yes, optimally you should have a server wherever you have users. However, if multiple servers in different regions have to talk to the same database, you will still have latency issues when the server communicates with the database in another region.
The best solution would be to have your full stack, servers and databases, in all supported regions and use cross-region replication to ensure that all regions share the same data. This is supported for some AWS databases such as DynamoDB and RDS.
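For example, DynamoDB Global Tables let you add a replica in another region to an existing table. A hedged boto3 sketch, assuming the table is on the current (2019.11.21) global tables version and uses on-demand or autoscaled capacity; the table name is a placeholder:

import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Add an ap-southeast-1 replica to an existing table (table name is a placeholder).
ddb.update_table(
    TableName="django-app-data",
    ReplicaUpdates=[{"Create": {"RegionName": "ap-southeast-1"}}],
)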
As your architecture gets more complex, it may be a good idea to use Cloudformation to manage your stack in each region so that everything is kept up to date.
As for detecting performance, Cloudwatch is a good tool for monitoring your AWS resources. Depending on what AWS resource you are using for your server, it should have some metrics to measure the response times.
As for testing performance, you could look into creating a dev/test version of your server in another region, and use a proxy to access it. Then just use Cloudwatch to see how long those requests take.
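As a very rough complement, you can run a simple timing probe from a box (or CI job) in the other region and compare it with the same probe run near us-east-1; a minimal sketch, with the URL as a placeholder:

import time
import urllib.request

# Run this from an instance or CI runner in ap-southeast-1 and compare the
# numbers with the same probe run from us-east-1. The URL is a placeholder.
URL = "https://example.com/health"

samples = []
for _ in range(10):
    start = time.monotonic()
    urllib.request.urlopen(URL, timeout=10).read()
    samples.append(time.monotonic() - start)

print(f"min={min(samples):.3f}s avg={sum(samples)/len(samples):.3f}s max={max(samples):.3f}s")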