Run application on multiple locations and Pay as You Go [closed] - amazon-web-services

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
Hey guys I need advice in order to make the right architectural decision.
I need to be able to run a console application (or a docker container in the future) in different locations (countries/cities) without paying for hundreds of always-running virtual machines.
In other words, I need to press a button and run the application for a couple of hours on a server in New York; on the next press, the same application should run in Istanbul.
The straightforward approach is to buy hundreds of virtual machines, but there are two problems with it:
It's too expensive.
Probably only a couple of them will be used but I'll have to pay for all of them.
What can you recommend?
Does Azure support it? Or maybe AWS?

First thing: cloud service providers work on the basis of regions rather than cities like the New York you mentioned, but you can always choose the region nearest to the country/city in which you want to run your application. You can also try cloudping to find the nearest AWS region.
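As a rough illustration of picking a region by latency, here is a hedged Python sketch. It probes the regional EC2 endpoints (which follow the standard `ec2.<region>.amazonaws.com` pattern) with a timed TCP handshake; the region list in the usage comment is just an example.

```python
import socket
import time

def measure_latency(region, port=443, timeout=2.0):
    """Rough latency probe: time a TCP handshake to the regional EC2 endpoint."""
    host = f"ec2.{region}.amazonaws.com"
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.monotonic() - start

def pick_nearest_region(latencies):
    """Given {region: seconds}, return the region with the lowest latency."""
    return min(latencies, key=latencies.get)

# Usage (requires network access):
# results = {r: measure_latency(r) for r in ["us-east-1", "eu-central-1"]}
# print(pick_nearest_region(results))
```

This is only a coarse network-level probe; a service like cloudping aggregates many such measurements and is a better guide for a real decision.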
In other words, I need to press the button and run the application for
a couple of hours on a server in New York, next press, and the same
application will be run in Istanbul.
So I will recommend docker containers, since you want to run the same application in different regions; instead of maintaining AMIs, it is better to go with containers.
AWS Fargate is designed for exactly this pay-as-you-go purpose, with zero server maintenance: you just specify the docker image and run your application, and AWS takes care of the underlying resources.
AWS Fargate is a serverless compute engine for containers that works
with both Amazon Elastic Container Service (ECS) and Amazon Elastic
Kubernetes Service (EKS). Fargate makes it easy for you to focus on
building your applications. Fargate removes the need to provision and
manage servers, lets you specify and pay for resources per
application, and improves security through application isolation by
design.
As you mentioned:
without paying for hundreds always running virtual machines.
So you do not need to pay for idle capacity; you will only pay for the compute hours used by your application while the container is running.
With AWS Fargate, there are no upfront payments and you only pay for
the resources that you use. You pay for the amount of vCPU and memory
resources consumed by your containerized applications.
AWS Fargate pricing
For deployment purposes, I recommend Terraform: you only need to define the resources once, and you can parameterize the region so the same configuration works for every other region.
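A minimal sketch of that press-a-button flow with boto3, assuming an ECS cluster and a Fargate-compatible task definition already exist in each target region; the cluster name, task definition, subnet IDs, and the choice of eu-central-1 as the region nearest Istanbul are all placeholders/assumptions:

```python
def fargate_run_task_params(cluster, task_definition, subnets):
    """Build the kwargs for ecs.run_task(): one Fargate task, billed only while it runs."""
    return {
        "cluster": cluster,
        "taskDefinition": task_definition,
        "launchType": "FARGATE",
        "count": 1,
        "networkConfiguration": {
            "awsvpcConfiguration": {"subnets": subnets, "assignPublicIp": "ENABLED"}
        },
    }

def run_in_region(region, cluster, task_definition, subnets):
    """'Press the button': start the same task definition in any target region."""
    import boto3  # pip install boto3
    ecs = boto3.client("ecs", region_name=region)
    return ecs.run_task(**fargate_run_task_params(cluster, task_definition, subnets))

# run_in_region("us-east-1", "demo-cluster", "my-app:1", ["subnet-0abc"])    # near New York
# run_in_region("eu-central-1", "demo-cluster", "my-app:1", ["subnet-0def"]) # near Istanbul
```

The task stops when your application exits, and Fargate billing stops with it.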

Related

What services of AWS to use for a microservice architecture? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 days ago.
I have a small mobile application which is based on microservice architecture. Two of the microservices are in java, one is in node. My application uses a MySQL database.
Presently I am using a VPS to host my services; all software (MySQL, Tomcat and PM2) is installed on the same VPS.
Now I am planning to move to AWS and (as I have no prior AWS experience) I am overwhelmed by the services provided by AWS.
Can anyone please help me decide on this?
Since the usage is going to be very low at this point, I have to get this setup with the least monthly costs to be incurred.
For that I am thinking to get 1 EC2 instance and install all software in different docker containers (including the DB). Will this approach work? Or will I have to get another RDS instance? Is docker required? Or can I directly install all the software?
Such questions are opinion-based and the community discourages them, but there are a few things that are clear enough to explain.
Will I have to get another RDS instance
I do not recommend running the DB in a container: it will be hard to scale and to maintain backups, and there is a risk of losing data if proper volume mounts are not set in the container configuration.
I recommend going with the RDS free tier, which is free for one year (terms and conditions apply).
It will be easy in future to upgrade, scale and maintain backup with RDS.
AWS Free Tier with Amazon RDS
750 hours of Amazon RDS Single-AZ db.t2.micro Instance usage running
MySQL, MariaDB, PostgreSQL, Oracle BYOL or SQL Server (running SQL
Server Express Edition) – enough hours to run a DB Instance
continuously each month
rds-free-tier
I am thinking to get 1 EC2 instance and install all software in
different docker containers (including the DB),
At the initial level, it's fine to go with one instance. Here is the flow:
Create ECS cluster
Create ECR registry and push your image to ECR
Create a Task definition for each docker image
Create service for each task definition
As mentioned in the comment you can explore EKS, but on AWS I would prefer ECS.
To start, you can explore gentle-introduction-to-how-aws-ecs-works-with-example.
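Assuming the image has already been pushed to ECR (step 2), the cluster/task-definition/service steps can be sketched with boto3; every name, the image URI, the subnet IDs, and the execution role ARN below are placeholders:

```python
def task_definition_skeleton(family, image, execution_role_arn, cpu="256", memory="512"):
    """Minimal Fargate-compatible task definition for one container image."""
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",
        "cpu": cpu,
        "memory": memory,
        "executionRoleArn": execution_role_arn,  # needed to pull the image from private ECR
        "containerDefinitions": [{"name": family, "image": image, "essential": True}],
    }

def deploy(region, family, image, cluster_name, subnets, execution_role_arn):
    """Cluster -> task definition -> service, mirroring steps 1, 3 and 4 above."""
    import boto3  # pip install boto3
    ecs = boto3.client("ecs", region_name=region)
    ecs.create_cluster(clusterName=cluster_name)
    ecs.register_task_definition(**task_definition_skeleton(family, image, execution_role_arn))
    ecs.create_service(
        cluster=cluster_name,
        serviceName=family,
        taskDefinition=family,
        desiredCount=1,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {"subnets": subnets, "assignPublicIp": "ENABLED"}
        },
    )
```

On a single EC2 instance (rather than Fargate) the same task definitions work with `launchType="EC2"`, minus the network configuration requirements.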
It's been a while since I asked this question - and though I agree this is more of an opinion-based question - I still think this answer is something that would be a good start into cloud deployments.
Application hosting
For startups and small projects like these, the best approach would be to go with serverless Lambda functions. Though it adds the overhead of Lambda functions in code, it's worth the effort, as it keeps the cost at almost zero until you start to get some tangible traffic.
Another approach for the application microservices could be Docker, but containers are mainly about containerized deployment, making sure code runs the same in the prod environment as it does in dev. Rather than that, one should go with a small EC2 instance and deploy the microservices as separate processes (PM2 for Node.js). Though differential scaling would be tough at this point, it doesn't matter: as soon as you start seeing CPU metrics touch the roof, you can start decoupling the most-used microservices onto another machine and put a load balancer in front of it.
K8s is overkill at this point: with one worker node there is just no point, even though the control plane is free to use, until you have a sizable number of worker nodes.
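To give a feel for the Lambda-function overhead mentioned above, here is a minimal handler sketch; it assumes an API Gateway proxy integration, and the payload shape is purely illustrative:

```python
import json

def handler(event, context):
    """Minimal Lambda handler for an API Gateway proxy integration."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

Since the handler is a plain function, each microservice endpoint becomes one such function plus a routing entry, and you pay nothing while no requests arrive.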
Database Deployment
Stateful deployments are comparatively trickier, as there is a chance of data loss. Easier would be to go with managed DB hosting at this point, such as AWS Aurora/RDS, or MongoDB Atlas if you plan to use NoSQL. Managing a DB along with backups yourself would be a painful task, especially when you are saving every penny in infra costs.

Why do people use AWS over Docker Containers? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
AWS provides services like ElastiCache, Redis, and databases, all charged on an hourly basis. But these services are also available in the form of docker containers on Docker Hub. All the AWS services listed above use an instance, meaning an independent instance for databases and so on. But what if one starts an EC2 instance and downloads the images for all the database dependencies? That would save a lot of money, right?
I have used docker before and it has almost all the images for the services aws provides.
EC2 is not free. You can run, for example, MySQL on an EC2 instance. It will be cheaper than using RDS, but you still need to pay for the compute and storage resources it consumes. Even if you run a database on a larger shared EC2 instance you need to account for its storage and CPU cycles, and you might need more or larger instances to run more tasks there.
(As of right now, in the us-east-1 region, a MySQL db.m5.large instance is US$0.171 per hour or US$895 per year paid up front, plus US$0.115 per GB of capacity per month; the same m5.large EC2 instance is US$0.096 per hour or US$501 per year, and storage is US$0.10 per GB per month. [Assuming 1-year, all-up-front, non-convertible reserved instances.])
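As a back-of-envelope comparison, the rates above can be plugged into a small calculation. This sketch uses the quoted on-demand hourly rates (so the totals differ from the all-up-front reserved prices in parentheses), and the 100 GB storage size is an assumption:

```python
HOURS_PER_YEAR = 8760

# Hourly/monthly rates quoted above (us-east-1, at the time of writing):
rds_db_m5_large = 0.171   # USD/hour, MySQL db.m5.large on RDS
ec2_m5_large = 0.096      # USD/hour, m5.large EC2 instance
rds_storage_gb = 0.115    # USD per GB-month, RDS storage
ebs_storage_gb = 0.10     # USD per GB-month, EBS storage

def yearly_cost(hourly_rate, storage_gb, storage_rate):
    """On-demand yearly cost: a year of compute hours plus 12 months of storage."""
    return hourly_rate * HOURS_PER_YEAR + storage_gb * storage_rate * 12

rds = yearly_cost(rds_db_m5_large, 100, rds_storage_gb)
ec2 = yearly_cost(ec2_m5_large, 100, ebs_storage_gb)
print(f"RDS: ${rds:,.2f}/yr  EC2: ${ec2:,.2f}/yr  premium: ${rds - ec2:,.2f}")
```

The difference is roughly the price you pay for the managed-service features discussed below.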
There are good reasons to run databases not-in-Docker. Particularly in a microservice environment, application Docker containers are stateless, replicated, update their images routinely, can be freely deleted, and can be moved across hosts (by deleting and recreating them somewhere else). (In Kubernetes/EKS, look at how a Deployment object works.) None of these are true of databases, which are all about keeping state, cannot be deleted, cannot be moved (the data has to come with), and must be backed up.
RDS has some useful features. You can change the size of your database instance with some downtime, but no data loss. AWS will keep managed snapshots for you, and it's straightforward (if slow) to create a new database from a snapshot of your existing database. Patch updates to the database are automatically applied for you. You can pay Amazon for these features, in effect, or pay your own DBA to do the same tasks for a database running on an EC2 instance.
None of this is to say you have to use RDS; you do in fact save on AWS by running the same software on EC2, and it may or may not be in Docker. RDS is a reasonable choice in an otherwise all-Docker world, though. The same basic tradeoffs apply to other services like ElastiCache (for Redis).

Data Engineering - infrastructure/services for efficient extraction of data (AWS) [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 4 years ago.
Let's assume the standard data engineering problem:
every day at 3.00 AM connect to an API
download data
store them in a data lake
Let's say there is a python script that does the API hit and storage, but that is not that important.
Ideally I would like to have some service that comes alive, runs this script and kills itself... So far, I thought about those possibilities (using AWS services):
(AWS) Lambda - FaaS, an ideal match for the use case. But there is a problem: bandwidth of the function (limited RAM/CPU) and a timeout of 5 mins.
(AWS) Lambda + Step Functions + range requests: fire multiple Lambdas in parallel, each downloading a part of the file. Coordination via Step Functions. It solves the issue of 1) but it feels very complicated.
(AWS EC2) Static VM: classic approach: I have a VM, I have a python interpreter, I have a cron -> every night I run the script. Or every night, I can trigger a build of new EC2 machine using CloudFormation, run the script and then kill it. Problems: feels very old-school - like there has to be a better way to do it.
(AWS ECS) Docker: I have very little experience with docker. Probably similar to the VM case, but it feels more versatile/controllable. I don't know if there is a good orchestrator for this kind of job, or how easy it is (firing up a container and killing it).
How I see it:
Exactly what I would like to have, but it is not good for downloading big data because of the resource constraints.
Complicated workaround for 1)
Feels very oldschool, additional devops expenses
Don't know a lot about this topic, feels like the current state-of-art
My question is: what is the current state-of-art for this kind of job? What services are useful and what are the experiences with them?
A variation on #3... Launch a Linux Amazon EC2 instance with a User Data script, with Shutdown Behavior set to Terminate.
The User Data script performs the download and copies the data to Amazon S3. It then executes sudo shutdown -h now to turn off the instance. (Or, if the script is complex, the User Data script can download a program from an S3 bucket, then execute it.)
Linux EC2 instances are now charged per-second, so think of it like a larger version of Lambda that has more disk space and does not have a 5-minute limit.
There is no need to use CloudFormation to launch the instance because then you'd just need to delete the CloudFormation stack. Instead, just launch the instance directly with the necessary parameters. You could even create a Launch Template with the parameters and then simply launch an instance using the Launch Template.
You could even add a few smarts to the process and launch the instance using Spot Pricing (set the bid price to normal On-Demand pricing, since worst case you'll just pay the normal price). If the Spot Instance won't launch due to insufficient spare capacity, then launch an On-Demand instance instead.
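The launch-and-self-terminate pattern above can be sketched with boto3; the download URL, bucket name, and AMI ID are placeholders, and the instance's IAM instance profile is assumed to allow the S3 upload:

```python
# Placeholder user-data script: download, copy to S3, then power off.
# A shutdown from inside an instance whose shutdown behavior is "terminate"
# terminates the instance, and billing stops with it.
USER_DATA = """#!/bin/bash
curl -fsSL https://example.com/daily-export.json -o /tmp/data.json
aws s3 cp /tmp/data.json s3://my-data-lake/raw/data.json
shutdown -h now
"""

def launch_job(region, ami_id, instance_type="t3.micro"):
    """Launch one self-terminating worker instance for the nightly job."""
    import boto3  # pip install boto3
    ec2 = boto3.client("ec2", region_name=region)
    return ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,
        InstanceInitiatedShutdownBehavior="terminate",
    )
```

A scheduled trigger (for example a CloudWatch Events cron rule firing a small Lambda that calls launch_job) covers the "every day at 3.00 AM" part.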

website with fluctuating traffic [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I have a web application with very fluctuating traffic: I'm talking about 30 to 40 users daily, up to thousands of people simultaneously. It's a ticketing app, so this kind of behavior is here to stay, and I want to make a strategic choice. I don't want to buy a host with a high configuration because it's just going to sit around most of the time. We're running a Node.js server, so we usually run low on RAM. My question is this: what are my options, and how difficult is it to go from a normal VPS to something like Microsoft Azure, Google Cloud, or AWS?
It's difficult to be specific without knowing more about your application architecture but both AWS Lambda and Google App Engine offer 'serverless architecture' and support Node.js. Serverless architectures allow you to host code directly rather than running servers and associated infrastructure. Scaling is given to you by the services, costs are based on consumption and you can configure constraints and alerts to prevent racking up huge unexpected bills. In both instances you would need to front the services with additional Google or AWS services to make them accessible to customers, but these offer a great way to scale and pay only for what you need.
A first step is to offload static content to Amazon S3 (or similar service). Those services will handle any load and will lessen the load on your web server.
If the load goes up/down gradually (eg over the course of 30 minutes), you can use Auto Scaling to add/remove Amazon EC2 servers based upon load metrics. For example, you probably don't need many servers at night.
However, for handling of spiky traffic, rewriting an application as Serverless will make it highly resilient, highly scalable and most likely a lot cheaper too!
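For the Auto Scaling option, a target-tracking policy is the usual starting point. Here is a hedged boto3 sketch; the Auto Scaling group name is a placeholder, and the group itself is assumed to already exist:

```python
def cpu_target_tracking_policy(target_cpu=50.0):
    """Target-tracking config: keep the group's average CPU near target_cpu percent."""
    return {
        "TargetValue": target_cpu,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
    }

def attach_policy(asg_name, region):
    """Attach the policy: the group then adds/removes instances automatically."""
    import boto3  # pip install boto3
    autoscaling = boto3.client("autoscaling", region_name=region)
    return autoscaling.put_scaling_policy(
        AutoScalingGroupName=asg_name,
        PolicyName="keep-cpu-at-target",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration=cpu_target_tracking_policy(),
    )
```

This handles gradual swings well; for spikes measured in seconds rather than minutes, the serverless route above reacts faster than new EC2 instances can boot.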

Is Amazon Web Services reasonably priced for a personal server? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I currently have a Linux, Apache, MySQL, PHP, Postfix web server that I setup on a spare computer at home that I am exploring transferring to Amazon Web Services. It's about as simple of a personal web server as it gets, I mainly use it for personal experimentation for PHP development, I have a blog, it hosts my e-mail, plus I do some C++ development on the server and run some small executable and networked personal applications.
The only traffic the server really sees is me (on a daily basis), plus some web crawlers, and the occasional hit from a Google search.
Is it reasonable to transfer my server to Amazon Web Services? Or is Amazon Web Services specifically targeted at larger-scale servers? Roughly what is the cheapest I can expect to pay for this hosting?
I tried using the AWS Simple Monthly Calculator but had a hard time estimating the numbers. Perhaps someone is doing something similar to my plans, and can inform me of what they are paying.
One of the reasons I am interested in AWS, is I am contemplating using my website as cloud storage for a mobile application I am working on, and if that application takes off quickly, I would like to be able to quickly scale to the traffic.
If you need a simple setup, it is sufficient to use a t1.micro instance. The monthly price for such an instance (depending on the location of the server) is about 15 US$. If you plan to run your server for a longer time, consider using reserved instances. You pay a one-time fee and get reduced hourly prices afterwards. If you run your server all the time, you should use a "High Utilization" instance. I think you won't get a lot of traffic and EBS requests, so I would focus on the main part regarding costs which is the EC2 instance hours.
Here is a basic example calculation with the above setup as a start. This calculation does not include the one-year free tier that Amazon offers.
If you need to scale, then you have a lot of options available. You can launch bigger instances if you need it. Have a look at the instance types page to get an overview (also includes details on the Micro instance). If scaling and possible upgrades are a main factor in your decision, then you should consider AWS.
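For a rough sanity check on the ~15 US$ figure above, here is a small calculation; the $0.02/hour t1.micro rate and the 8 GB EBS volume are assumptions for the era of this question (us-east-1, on-demand Linux):

```python
HOURS_PER_MONTH = 730  # AWS's billing approximation for one month

t1_micro_hourly = 0.02  # USD/hour, assumed on-demand Linux rate
ebs_gb_month = 0.10     # USD per GB-month of EBS storage
root_volume_gb = 8      # a small root volume

compute = t1_micro_hourly * HOURS_PER_MONTH
storage = root_volume_gb * ebs_gb_month
total = compute + storage
print(f"~${total:.2f}/month ({compute:.2f} compute + {storage:.2f} storage)")
```

Data transfer and EBS I/O requests would add a little on top, but for single-user traffic they are negligible next to the instance hours.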
Is it reasonable to transfer my server to Amazon Web Services
I think yes. Amazon has a list of Linux images you can run on a free-tier server at no cost. Bear in mind that, for example, on the free-tier server you can't connect to your DB from an external IP (i.e. from an external DB tool), but port redirection will work.
Usually I use Amazon for demo versions (< 10k users), and it works great.