AWS rates while traveling - amazon-web-services

I'm wondering how much more expensive it would be to access my EC2 instance from a different country than if I just waited until I was back home. I could wait, but it would be good to see what the international rates for this usage look like.
I'm sending it HTTP requests from an iOS app to query my database. The app also records GPS data, so I figure getting a completely different location into the database would help with test cases.

If you are always accessing your app from the Internet, then there will be no change in the access charges.
The cost for Data Transfer is charged when traffic goes out to the Internet; it doesn't matter where on the Internet it goes.
One exception to this is if you are using Amazon CloudFront, which is a global caching service and charges different rates depending upon from where traffic is served.

Related

Shifting/Migrating PHP Codeigniter project to AWS

Our company has a software product consisting of a web app, an Android app and an iOS app.
We have more than 350 clients, which means more than 350 MySQL databases (one per client) and one code repository (PHP CodeIgniter). When a new client purchases our software we just copy an empty template database and the client is able to use the software. This is our architecture.
Now we are planning to shift to AWS, but we do not know which AWS services we really need for this type of architecture.
We are on CodeIgniter 3.1, PHP 7 and MySQL.
You can implement this sort of system on a single EC2 instance, simply installing the same software as you have on your current server. However, in that case you are likely better off hosting it somewhere cheaper than AWS.
What I recommend instead is to implement it using RDS, EC2, S3 and CloudFront.
RDS
I recommend running your database on RDS:
The database server has a completely different resource profile than PHP, so if you run into performance problems it is very hard to figure out what is happening while the database and PHP share the same instance: a lack of CPU can lead to a lack of memory and vice versa.
Built-in point-in-time recovery for up to 35 days has saved my bacon many times and is great when you have a bug that is hard to reproduce, or when someone (you) has accidentally deleted a large amount of data.
On top of this I recommend going for Aurora MySQL instead of RDS for MySQL, especially as I expect your database size on disk to be smaller than 50 GB:
On RDS for MySQL you need to provision at least 100 GB of disk to get good enough performance for production; 100 GB gives you roughly 100 x 50 KB per second on the EBS disks that are used.
By comparison, on Aurora you get the read performance of six different storage locations without having to commit to any amount of disk space. This saves money and performs better.
Aurora is also much faster at point-in-time restores, as well as at "dumb" queries, i.e. table scans.
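To illustrate the point-in-time recovery mentioned above, here is a minimal sketch with boto3 (the cluster and instance names are hypothetical): it restores an Aurora cluster as it looked 30 minutes ago into a new cluster you can then inspect or promote.

```python
import datetime

import boto3

rds = boto3.client("rds")

# Restore the (hypothetical) production cluster to a new, separate cluster.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="myapp-restore-test",      # new cluster to create
    SourceDBClusterIdentifier="myapp-production",  # existing cluster
    RestoreToTime=datetime.datetime.utcnow() - datetime.timedelta(minutes=30),
)

# A restored cluster has no instances yet; add one so you can connect to it.
rds.create_db_instance(
    DBInstanceIdentifier="myapp-restore-test-1",
    DBClusterIdentifier="myapp-restore-test",
    DBInstanceClass="db.t3.medium",
    Engine="aurora-mysql",
)
```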
EC2
I recommend looking at nothing older than the t3, c5 or m5 instances, as they run on the new Nitro hypervisor and are significantly faster while being cheaper. From experience you can go down a notch from your existing CPU count with these instances.
If you can, use c6/m6/t4 instances.
I have also found c5a and its equivalents to be just as performant.
AWS recommends always using auto scaling, but if you are coming from a single server somewhere else you are already winning, because you can restore within minutes.
Once you hit $600 per month in EC2 charges, definitely look at auto scaling. Virtually every web app can be written in a way that allows a server to be replaced at any point in time. With auto scaling you can then use Spot instances at a 50-90% discount for your 2nd/3rd/etc. instances and save serious money.
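As a rough sketch of the Spot idea (group, template and instance names are hypothetical, and the sizing depends on your app), an Auto Scaling group can keep one on-demand instance as a base and run everything above that on Spot:

```python
import boto3

asg = boto3.client("autoscaling")

asg.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",               # hypothetical group name
    MinSize=1,
    MaxSize=4,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",    # your subnets
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "web-launch-template",  # hypothetical
                "Version": "$Latest",
            },
            # Allow more than one type so Spot capacity is easier to find.
            "Overrides": [{"InstanceType": "t3.medium"}, {"InstanceType": "t3a.medium"}],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 1,                 # keep one on-demand instance
            "OnDemandPercentageAboveBaseCapacity": 0,  # everything above it on Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```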
S3
Store all customer-provided files on S3; do NOT get into a shared file system.
This is much cheaper than any disk or file system and has numerous automation features, such as versioning, cross-region backup, archiving, event triggers etc.
Do not ever make your bucket publicly accessible.
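As a sketch of what "one private bucket, one prefix per client" can look like with boto3 (bucket and key names are made up):

```python
import boto3

s3 = boto3.client("s3")

# Keep the bucket private; every customer file lives under its own client prefix.
with open("invoice.pdf", "rb") as f:
    s3.put_object(
        Bucket="myapp-customer-files",         # hypothetical, non-public bucket
        Key="client-42/invoices/2023-01.pdf",  # per-client prefix
        Body=f,
    )
```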
CloudFront
The key benefit of storing all customer-provided files on S3 is that you can serve them with CloudFront without paying for CPU. CloudFront only charges for traffic delivered, and S3 only charges for space used. A file delivered through CloudFront does not use your server's CPU, sockets or network bandwidth. On top of this, transfer from EC2 to S3 and from S3 to CloudFront is free of charge. You are only charged for the traffic you already had to pay for anyway.
You need to secure your clients' files properly with signed URLs or signed cookies. For this you can either create a separate S3 bucket for each client or use one single bucket.
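For the signed-URL option, a minimal sketch with boto3 and the cryptography library looks roughly like this (the key pair ID, key file and distribution domain are placeholders; the public key must be registered with the distribution):

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "K2XXXXXXXXXXXX"  # hypothetical CloudFront key pair ID


def rsa_signer(message: bytes) -> bytes:
    """Sign the policy with the private key belonging to the key pair above."""
    with open("private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# A URL for one client file, valid for one hour.
url = signer.generate_presigned_url(
    "https://dxxxxxxxxxxxx.cloudfront.net/client-42/invoices/2023-01.pdf",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)
```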
Bonus: SQS
Many things in a web application do not need to be done right now. They can wait a bit, sometimes a couple of hundred milliseconds, sometimes minutes or hours.
For anything that can wait, I recommend implementing a background process that reads from an SQS queue. Your web application only needs minimal time to push the required work and its parameters into the queue, and your background process can then work on it in (rough) order of entry. Even when you use your normal web servers to process the background queue, you already get a better distribution of server load over time: you cannot control the number of incoming web requests, but you can control the speed at which you process background items (to a degree, of course).
Later, when you have a lot of background processing and a lot of traffic, you can consider using different servers for background processing.
There are also lots of ways to hook other event-driven code onto the items that go into your queue, including monitoring for exceeded limits on certain items, etc.
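A minimal sketch of that pattern with boto3 (the queue URL and job payload are made up): the web request only enqueues, and a separate worker loop does the actual processing.

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/background-jobs"  # hypothetical

# In the web request: enqueue the work instead of doing it inline.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"job": "resize_image", "client_id": 42, "key": "client-42/photo.jpg"}),
)

# In the background worker (cron job, systemd service, etc.): poll and process.
while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        job = json.loads(msg["Body"])
        # ... do the actual work here ...
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```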

Azure VM Inbound Throttling to VMs?

We have 2 Elastic VMs (Linux, currently DS2v2) behind an Azure Load Balancer. We are doing HTTP POSTs from our local LAN into the Load Balancer, but we seem to be getting throttled. We have tried: changing the size of the VMs, no difference; adding additional premium SSDs, again no difference; running multiple threads on our end, again no difference.
What we did do, though, was have the Elastic engine ingest all of the log files from the Linux boxes, and the index rate jumped pretty high while it was ingesting them. So we are assuming that it's not really the Linux Elastic boxes that are throttling us.
We do have Kibana installed on the boxes, and as a baseline we're just using the "Cluster Indexing Rate" for both our local POSTs to the box and the local ingestion of the log files.
We do understand that there is going to be some latency and overhead since we are now involving the internet, but not at the rates we are currently seeing. (We have a 1 Gb pipe to the internet that is nowhere near capacity, so we can at least rule out our own uplink.)
The question is, where else can we look to determine where we might be getting throttled?
As for the performance being "MUCH slower", that is a bit of a subjective question and hard to pin down. I can just provide some information on factors that may impact it.
Azure Compute requests may be throttled at the subscription level and on a per-region basis. If you are seeing an API throttling error, you could refer to this document to troubleshoot throttling issues and for best practices to avoid being throttled.
Factors such as the CPU and storage limits that differ between Azure VM sizes may affect how fast the VM can process incoming data. You could change to a size with more CPU and a premium SSD disk, or move your Azure resources to a region closer to your location. You could refer to this article.
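One way to narrow it down is to run the same indexing test from your LAN and from a VM inside the same virtual network as the cluster and compare documents per second. If bulk requests are fast from both locations but many small single-document POSTs are slow only over the internet, the bottleneck is round-trip latency rather than Azure or Elasticsearch throttling. A rough sketch (the endpoint is a placeholder, and this assumes plain HTTP access to the Elasticsearch bulk API):

```python
import json
import time

import requests

ES_URL = "http://my-load-balancer.example.com:9200"  # hypothetical endpoint


def bulk_index(docs, index="throughput-test"):
    """Index a batch of documents with one _bulk request and return elapsed seconds."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    body = "\n".join(lines) + "\n"
    start = time.time()
    resp = requests.post(
        f"{ES_URL}/_bulk",
        data=body,
        headers={"Content-Type": "application/x-ndjson"},
        timeout=60,
    )
    resp.raise_for_status()
    return time.time() - start


docs = [{"msg": "test", "n": i} for i in range(5000)]
elapsed = bulk_index(docs)
print(f"{len(docs) / elapsed:.0f} docs/sec")
```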

Request Limit Per Second on GCP Load Balance in front of Storage Bucket website

I want to know the limit of requests per second for a Load Balancer on Google Cloud Platform. I didn't find this information in the documentation.
My project is a static website hosted in a Storage bucket behind the Load Balancer with CDN enabled.
The website will be featured in a television campaign, and the estimate is 100k requests per second for 5 minutes.
Could anyone help me with this information? Is it necessary to ask Support to pre-warm the load balancer before the campaign starts?
From the front page of GCP Load Balancing:
https://cloud.google.com/load-balancing/
Cloud Load Balancing is built on the same frontend-serving
infrastructure that powers Google. It supports 1 million+ queries per
second with consistent high performance and low latency. Traffic
enters Cloud Load Balancing through 80+ distinct global load balancing
locations, maximizing the distance traveled on Google's fast private
network backbone.
This seems to say that 1 million+ requests per second is fully supported.
However, with all that said, I wouldn't wait for "the day" before testing. See if you can't practice with a suitable load. Given that this sounds like a finite event with high visibility (television), I'm sure you don't want to wait for the event only to find out something was wrong in the setup or theory. As for "is 100K requests per second through a load balancer possible?", the answer appears to be yes.
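A single machine won't generate anything near 100k requests per second, but even a rough script (or a tool such as ab or hey, or a distributed load-testing service) pointed at the real URL will confirm that the CDN is serving cache hits and that nothing in the setup falls over. A minimal sketch, with the URL and concurrency made up:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://www.example.com/index.html"  # hypothetical load-balanced URL
WORKERS = 200
REQUESTS_PER_WORKER = 50


def worker(_):
    """Hit the URL repeatedly and count successful responses."""
    ok = 0
    with requests.Session() as s:
        for _ in range(REQUESTS_PER_WORKER):
            if s.get(URL, timeout=10).status_code == 200:
                ok += 1
    return ok


start = time.time()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    total_ok = sum(pool.map(worker, range(WORKERS)))
elapsed = time.time() - start
print(f"{total_ok} successful requests in {elapsed:.1f}s ({total_ok / elapsed:.0f} req/s)")
```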
If you are (or are asking on behalf of) a GCP customer, Google has Technical Account Managers associated with accounts who can be brought into the planning loop, especially if there are questions about "can we do this". One should always be cautious about sudden, high-volume needs for GCP resources. Again, through a Technical Account Manager, it does no harm to pre-warn Google of large resource requests. For example, if you said that you needed an extra 5000 Compute Engine instances, you might be constrained in which regions are available to you given finite existing capacity. Google, just like other public cloud providers, has to schedule and balance resources in its regions. Timing is also very important: if you need a sudden burst of resources and the time you need them happens to coincide with an event such as Black Friday (US) or Singles' Day (China), special preparation may be needed.

Setting up a globally available web app on amazon web services

First of all, I am pretty new to AWS, so my question might seem very amateur.
I am developing a web application which needs to be available globally, and I am currently hosting it on Amazon. Since the application is still under development, I have set it up in the Singapore region. When I test the application I get good response times from locations on the east side of the globe (~50ms), but from the US the response time is ~550ms. So we decided to have two instances, one in Singapore and one in the US, but I'm not able to figure out a way to handle data replication and load balancing across regions; Elastic Beanstalk only allows me to do this within a particular region. Can somebody please explain how I can achieve global availability for my web app? The following are the services I currently use:
1. Amazon EC2
2. Amazon S3
I need both database replication and S3 file replication. It would also be great if there were a way to deploy my application in one place and have the changes reflected across all the instances we have around the globe.
Before you spend a lot of time and money setting up redundant servers in different regions, you may want to make sure that you can't get the performance improvement you need simply by implementing Amazon CloudFront:
Amazon CloudFront employs a network of edge locations that cache
copies of popular files close to your viewers. Amazon CloudFront
ensures that end-user requests are served by the closest edge
location. As a result, requests travel shorter distances to request
objects, improving performance. For files not cached at the edge
locations, Amazon CloudFront keeps persistent connections with your
origin servers so that those files can be fetched from the origin
servers as quickly as possible. Finally, Amazon CloudFront uses
additional optimizations – e.g. wider TCP initial congestion window –
to provide higher performance while delivering your content to
viewers.
http://aws.amazon.com/cloudfront/faqs/
The nice thing is, you can set this up and test it out in very little time and for very little money. Obviously this won't solve all performance problems, especially if your app is performance-bound at the database, but it is a good way of taking care of the low-hanging fruit when trying to speed up your website for diverse locations around the world.
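For reference, a basic distribution in front of an existing origin can be created from the console in a few clicks, or with a boto3 sketch roughly like the following (the origin domain and cache settings are placeholders, and newer setups would attach a managed cache policy instead of the legacy ForwardedValues block):

```python
import time

import boto3

cf = boto3.client("cloudfront")

origin_domain = "myapp-sg.example.com"  # hypothetical: the Singapore ELB or EC2 DNS name

resp = cf.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),  # any unique string
    "Comment": "Edge cache in front of the Singapore origin",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "sg-origin",
            "DomainName": origin_domain,
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "https-only",
            },
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "sg-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "MinTTL": 0,
    },
})
print(resp["Distribution"]["DomainName"])  # e.g. dxxxxxxxxxxxx.cloudfront.net
```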

need some guidance on usage of Amazon AWS

Every once in a while I read or hear about AWS, and now I have tried reading the docs.
But the docs seem to be written for people who already know which AWS service they need and are only looking for how to use it.
So, to understand AWS better, I will sketch a hypothetical web application and ask a few questions about it.
The app's purpose is to modify content like videos or images. A user has some kind of web interface where he can upload his files and choose some settings, and a server grabs the file and modifies it (e.g. re-encoding). The service also extracts the audio track of a video and tries to index the spoken words so the customer can search within his videos. (Well, it's just hypothetical.)
So my questions:
Given my own domain 'oneofmydomains.com', is it possible to host the complete web interface on AWS? I thought about using GWT to create the interface and just delivering the JS/images via AWS, but via which service, Simple Storage? What about some kind of index.html; is an EC2 instance needed to host a web server which has to run 24/7, causing costs?
Now the user has the interface with a login form. Is it possible to manage logins with an AWS service? Here I am also thinking about an EC2 instance hosting a database, but that would also cause costs and I'm not sure whether there is a better way.
The user has logged in and uploads a file. Which storage solution could be used to save the customer's original and modified content?
Now the user wants to browse the status of his uploads. This means I need some kind of ACL, so that the customer only sees his own files. Do I need to use a database (e.g. on EC2) for this, or does Amazon provide some kind of ACL, so that the GWT web interface can be secured without any EC2?
The customer's files are re-encoded and the audio track is indexed. Now he wants to search for a video. Which service could be used to create and maintain the index for each customer?
I hope someone can give a few answers so I better understand how one could use AWS.
Thanks!
Amazon AWS offers a whole ecosystem of services which should cover all aspects of a given architecture, from hosting to data storage, or messaging, etc. Whether they're the best fit for purpose will have to be decided on a case by case basis. Seeing as your question is quite broad I'll just cover some of the basics of what AWS has to offer and what the different types of services are for:
EC2 (Elastic Compute Cloud)
Amazon's cloud computing solution, which is basically the same as older virtual machine technology, but the 'cloud' offers extras such as automated provisioning, scaling, billing, etc.
You pay for what you use (by the hour); the basic size (single CPU, 1.7 GB RAM) would probably cost you just under $3 a day if you run it 24/7 (on a Windows instance, that is).
There are a number of different operating systems to choose from, including Linux and Windows; Linux instances are cheaper to run because there is no license cost associated with Windows.
Once you've set up the server the way you want it, including any updates/patches, you can create your own AMI (Amazon Machine Image) which you can then use to bring up another identical instance.
However, if all your HTML is baked into the image, updates become difficult, so the normal approach is to include a service (a Windows service, for instance) which pulls the latest deployment package from a storage service (see S3 below) and updates the site at startup and at intervals.
There's the Elastic Load Balancer (which has its own cost, but only one is needed in most cases) which you can put in front of all your web servers.
There's also the CloudWatch service (again, extra cost) which you can enable on a per-instance basis to help you monitor the CPU, network in/out, etc. of your running instances.
You can set up AutoScalers, which automatically bring up or terminate instances based on some metric, e.g. terminate 1 instance at a time if average CPU utilization is below 50% for 5 minutes, bring up 1 instance at a time if average CPU goes above 70% for 5 minutes (see the sketch after this section).
You can use the instances as web servers, use them to run a DB or a memcached cluster, etc.; the choice is yours.
Typically I wouldn't recommend having Amazon instances talk to a DB outside of Amazon, because the round trip is much longer; the usual approach is to use SimpleDB (see below) as the database.
The Amazon SDK contains enough classes to help you write a custom monitoring/scaling service if you ever need to, but the AWS console allows you to do most of the configuration anyway.
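As a sketch of the CPU-based scaling rule mentioned above (the group and policy names are hypothetical), the simpler modern equivalent is a target-tracking policy on an Auto Scaling group that keeps average CPU near a target by adding or removing instances:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU around 60% across the (hypothetical) group "web-asg".
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-around-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,
    },
)
```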
SimpleDB
Amazon's non-relational key-value data store. Compared to a traditional database you tend to pay a penalty on per-query performance, but you get high scalability without having to do any extra work.
You pay for usage, i.e. how much work it takes to execute your query.
Extremely scalable by default: Amazon scales up SimpleDB based on traffic without you having to do anything (or having any control over it, for that matter).
Data is partitioned into 'domains' (the equivalent of a table in a normal SQL DB).
Data is non-relational; if you need a relational model then check out Amazon RDS. I don't have any experience with it, so I'm not the best person to comment on it.
You can still execute SQL-like queries against the database, usually through some plugin or tool; Amazon doesn't provide a front end for this at the moment.
Be aware of 'eventual consistency': data is duplicated on multiple instances after Amazon scales up your database, and synchronization is not guaranteed when you do an update, so it's possible (though highly unlikely) to update some data, read it back straight away and get the old data back.
There are 'consistent read' and 'conditional update' mechanisms available to guard against the eventual-consistency problem. If you're developing in .NET, I suggest using the SimpleSavant client to talk to SimpleDB.
S3 (Simple Storage Service)
Amazon's storage service, again, extremely scalable, and safe too - when you save a file on S3 it's replicated across multiple nodes so you get some DR ability straight away.
You pay for the storage used and for data transfer.
Files are stored against a key.
You create 'buckets' to hold your files, and each bucket has a unique URL (unique across all of Amazon, and therefore across all S3 accounts).
CloudBerry S3 Explorer is the best UI client I've used on Windows.
Using the Amazon SDK you can write your own repository layer which uses S3.
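One common pattern with the SDK, sketched here with boto3 (the bucket and key are made up), is to hand the browser a time-limited presigned URL so the user's upload goes straight to S3 instead of passing through your own web server:

```python
import boto3

s3 = boto3.client("s3")

# The web interface does an HTTP PUT of the file body to this URL.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "user-uploads", "Key": "customer-7/raw/holiday.mp4"},  # hypothetical
    ExpiresIn=900,  # valid for 15 minutes
)
print(upload_url)
```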
Sorry if this is a bit long-winded, but those are the three most popular services that Amazon provides, and they should cover all the requirements you've mentioned. We've been using AWS for some time now, and while there are still some kinks and bugs, it's generally moving forward and pretty stable.
One downside to using something like AWS is vendor lock-in: whilst you could run your services outside of Amazon in your own datacenter, or move your files out of S3 (at a cost, though), getting out of SimpleDB is likely to represent the bulk of the work during a migration.