How to get a budget estimate for AWS Copilot?

When I select "Scheduled Job" while initializing resources, how is the process handled internally?
Can I verify the container in ECS? I guess it will use batch jobs for this option.
# copilot init
Note: It's best to run this command in the root of your Git repository.
Welcome to the Copilot CLI! We're going to walk you through some questions
to help you get set up with an application on ECS. An application is a collection of
containerized services that operate together.
Which workload type best represents your architecture? [Use arrows to move, type to filter, ? for more help]
> Load Balanced Web Service
Backend Service
Scheduled Job
What will be the charges if I select backend service or scheduled job?

Copilot uses Fargate containers under the hood; therefore, your charges for a backend service are based on the number of containers you have running and the CPU/memory size of those containers. The minimum container size is 0.25 vCPU and 0.5 GB (512 MB) of reserved memory.
For other service types, your pricing depends on a few more things.
Load Balanced Web Service
- Fargate containers, based on size and number (~$9/month for the smallest possible container; see the rough calculation below)
- Application Load Balancer (about $20/month, depending on traffic)
Backend Service
- Fargate containers, based on size and number (~$9/month for the smallest possible container)
Scheduled Job
- Fargate containers, based on size, number, and invocation frequency and duration (i.e., you only pay for the minutes you use)
- State Machine transitions: the first 4,000 transitions in a month are free, which corresponds to an invocation frequency of about once every 21 minutes, assuming there are no retry transitions. Transitions beyond that limit are billed at a low rate.
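As a rough sanity check on that ~$9 figure, here is the arithmetic for the smallest possible Fargate task running around the clock. The per-vCPU-hour and per-GB-hour rates are my assumption based on us-east-1 on-demand Fargate pricing at the time of writing; they differ by region and change over time, so check the current price list.
# Smallest Fargate task: 0.25 vCPU and 0.5 GB memory, running 24/7 (~730 h/month).
# Assumed us-east-1 rates: ~$0.04048 per vCPU-hour and ~$0.004445 per GB-hour.
echo "0.25*0.04048*730 + 0.5*0.004445*730" | bc -l
# => ~9.01 USD per month, in line with the ~$9/month estimate above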
Other notes
All Copilot-deployed resources are grouped with a number of resource tags. You can use those tags to understand billing activity, and even add your own tags via the --resource-tags flag in copilot svc deploy or copilot job deploy.
The tags we use to logically group resources are the following:
Tag Name              Value
copilot-application   name of the application this resource belongs to
copilot-environment   name of the environment this resource belongs to
copilot-service       name of the service or job this resource belongs to
The copilot-service tag is used for both jobs and services for legacy reasons.
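To actually see spend per application, one approach (my suggestion, not something prescribed by the Copilot docs) is to activate copilot-application as a cost allocation tag in the Billing console and then filter resources and costs by that tag with the AWS CLI. The application name and dates below are placeholders.
# List all AWS resources tagged with a given Copilot application
aws resourcegroupstaggingapi get-resources \
  --tag-filters Key=copilot-application,Values=my-app

# Break down last month's cost per Copilot application (requires the tag to be
# activated as a cost allocation tag first)
aws ce get-cost-and-usage \
  --time-period Start=2024-01-01,End=2024-02-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=TAG,Key=copilot-application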

Copilot refers to these entities as common cloud architectures [1].
I could not find an official document that outlines in detail how these architectures are composed. I guess it is treated as an implementation detail from the creators' perspective, judging by point one of the AWS Copilot CLI charter [2]:
Users think in terms of architecture, not of infrastructure. Developers creating a new microservice shouldn't have to specify VPCs, load balancer settings, or complex pipeline configuration. They may not know anything about other AWS services. They should be able to specify what "kind" of application it is and how it fits into their overall architecture; the infrastructure should be generated from that.
I have to agree that more sophisticated users always ask themselves what the costs of a specific architecture will look like, and I completely endorse the idea of a dedicated Copilot command such as copilot estimate costs service-xy that could be executed before creating the service.
There is some high-level documentation on the architecture types Load Balanced Web Service and Backend Service. [3]
It mentions the command copilot svc show in conjunction with the --resources flag [4]:
You can also provide an optional --resources flag to see all AWS resources associated with your service.
I think this gives you the ability to estimate costs right after the services are up and running.
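For example (the service name is a placeholder; the exact output varies between Copilot versions):
# List the AWS resources (ALB, ECS service, security groups, ...) behind a service
copilot svc show --name api --resources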
A somewhat more involved approach, which I frequently apply to understand complex constructs in the AWS CDK, is to look at the source code. For example, you could open the corresponding Go file for the Load Balanced Web Service architecture [5]. Digging into the code, you'll notice that they make it pretty clear that they are using Fargate containers instead of EC2 instances.
That is also what they tell us in the high-level service docs [4]:
You can select a Load Balanced Web Service and Copilot will provision an application load balancer, security groups, an ECS Service and run your service on Fargate.
More on Fargate: [6]
Btw, there is a really interesting comment in the issue section which outlines why they decided against supporting EC2 in the first place [7]:
What features have the team currently explicitly decided against?
EC2 comes to mind. I think we could have built a really nice experience ontop of EC2 instances - but I think there's a difference between building an "abstraction" around ECS / Fargate and building an "illusion" around ECS / EC2. What I mean by that is that if we created the illusion of a fully hands off EC2 experience, customers might be surprised that they are expected to be in charge of patching and security maintenance of those instances. This isn't something that Copilot, a CLI can really automate for people (realistically). We're still trying to figure out a good way to expose EC2 to folks - but we definitely see Fargate as the future.

Related

Increase resources on a Compute Engine VM without shutting services down

To automatically manage Cloud resources and meet the needs of my infrastructure, I need to increase VM resources. But this is only possible while the machine's status is TERMINATED.
The problem is that I have applications on the VMs that must not stop running. Do you have any suggestions on how I could proceed, i.e., increase my machines' resources without interrupting their services (database, web, etc.)?
The purpose of this is to automate my whole infrastructure and ensure its quality of service even when I'm not monitoring it myself.
I suggest taking a look at Managed Instance Groups. They offer some of the characteristics you need: high availability, scalability, and the ability to add the instance group to a load balancer.
According to the official Google documentation on MIGs:
Make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating.
Regarding your need to automate the services you run, I generally suggest using fully managed services. You can check the summary of GCP services and always verify whether they fit your requirements.
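A minimal sketch with the gcloud CLI, assuming an instance template already exists (group and template names, zone, and thresholds are placeholders):
# Create a managed instance group of two VMs from an existing instance template
gcloud compute instance-groups managed create web-mig \
  --template=web-template --size=2 --zone=us-central1-a

# Let the MIG autoscale on CPU utilization between 2 and 10 instances
gcloud compute instance-groups managed set-autoscaling web-mig \
  --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=10 \
  --target-cpu-utilization=0.6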

Does AWS Elastic Beanstalk deployment duration depend on the instance type?

I cannot seem to find any evidence that supports or refutes this. I want to know if AWS Elastic Beanstalk deployments are faster with more expensive instance types. Any thoughts are welcome! My assumption is that it shouldn't matter, but my deploys are incredibly slow and I want to see how this can be improved.
The underlying EC2 instance definitely matters when it comes to deployment speed. Elastic Beanstalk is just an API service that receives and processes API calls and provisions the underlying infrastructure. However, all the installation and configuration is done on the instances at the end of the day. Beanstalk runs a bunch of wrapper scripts in addition to user data on the instance, and this is where the compute power comes into play. Moreover, some instances spin up faster than others.
Having said that, this is definitely not the only factor. Your source bundle size, the region you are deploying to, and a few other factors also play a role.

When to use AWS Lambda and when to use Kubernetes (EKS)?

We are trying to evaluate the best ways to scale our J2EE web application and use hosting services with AWS. Are there reasons why we would use the Lambda service over Kubernetes (EKS)? Although it seems that Lambda can scale functional units, I'm not clear why anyone would use that as a substitute for Kubernetes, given Kubernetes can replicate containers based on performance metrics.
They serve different purposes. If you want horizontal scalability at the "EC2/pod/container" level and want to handle availability yourself (through k8s, of course), go for Kubernetes.
If you have a straightforward function doing one particular thing and you don't want to bother with the operational cost of managing a cluster or packaging it, you can let Lambda administer it for you (at the time of writing, you would pay about 20 US cents per million requests). It is just another layer of abstraction on top of a system that is probably similar to Kubernetes, scaling your function as needed.
The goal of these technologies is to remove as much overhead as possible between you and the code, since managing infrastructure can be painful. To summarize: serverless is to Kubernetes what Kubernetes is to containers.
To make a conscious decision, take the following into account:
- Does your budget cover the operation and maintenance of infrastructure?
- Do you have the expertise in Kubernetes?
- How much would it cost to redesign your J2EE app into serverless-ready code?
- Your timeline (of course...)
- Based on the AWS resources you will use, how much do you save (or not) by implementing a k8s cluster (database service?, EBS, EC2s, etc.)?

Launch and shutting down instances suited for AWS ECS or Kubernetes?

I am trying to create a certain kind of networking infrastructure, and have been looking at Amazon ECS and Kubernetes. However I am not quite sure if these systems do what I am actually seeking, or if I am contorting them to something else. If I could describe my task at hand, could someone please verify if Amazon ECS or Kubernetes actually will aid me in this effort, and this is the right way to think about it?
What I am trying to do is on-demand, single-task processing on an AWS instance. What I mean by this is: I have a resource-heavy application that I want to run in the cloud and have it process a chunk of data submitted by a user. I want to submit this data to the application, have an EC2 instance spin up, process the data, upload the results to S3, and then shut down the EC2 instance.
I have already put together a functioning solution for this using Simple Queue Service, EC2 and Lambda. But I am wondering: would ECS or Kubernetes make this simpler? I have been going through the ECS documentation and it seems like it is not very concerned with starting up and shutting down instances. It seems like it wants an instance that is constantly running, with Docker images fed to it as tasks to run. Can Amazon ECS be configured so that if there are no tasks running it automatically shuts down all instances?
Also, I do not quite understand how exactly I would submit a specific chunk of data to be processed. It seems like "Tasks" as defined in Amazon ECS really correspond to a single Docker container, not so much to what kind of data that Docker container will process. Is that correct? So would I still need to feed the data-to-be-processed into the instances via Simple Queue Service, or something else, and then use Lambda to poll those queues to see if they should submit tasks to ECS?
This is my naive understanding of this right now, if anyone could help me understand the things I've described better, or point me to better ways of thinking about this it would be appreciated.
This is a complex subject and many details for a good answer depend on the exact requirements of your domain / system. So the following information is based on the very high level description you gave.
A lot of the features of ECS, kubernetes, etc. are geared towards allowing a distributed application that acts as a single service and is horizontally scalable, upgradeable and maintainable. This means it helps with unifying service interfacing, load balancing, service reliability, zero-downtime maintenance, scaling the number of worker nodes up/down based on demand (or other metrics), etc.
The following describes a high level idea for a solution for your use case with kubernetes (which is a bit more versatile than AWS ECS).
So for your use case you could set up a kubernetes cluster that runs a distributed event queue, for example an Apache Pulsar cluster, as well as an application cluster that is being sent queue events for processing. Your application cluster size could scale automatically with the number of unprocessed events in the queue (custom pod autoscaler). The cluster infrastructure would be configured to scale automatically based on the number of scheduled pods (pods reserve capacity on the infrastructure).
You would have to make sure your application can run in a stateless form in a container.
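As a very rough sketch of the application-level scaling piece (image and names are placeholders; scaling on queue depth as described above would need a custom or external metrics adapter rather than the plain CPU-based autoscaler shown here):
# Run the stateless worker as a Deployment...
kubectl create deployment worker --image=registry.example.com/worker:latest
# ...and let a HorizontalPodAutoscaler adjust the number of worker pods;
# the cluster autoscaler then adds or removes nodes to fit the scheduled pods.
kubectl autoscale deployment worker --min=1 --max=20 --cpu-percent=70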
The main benefit I see over your current solution would be cloud-provider independence, as well as some general benefits of running a containerized system: 1. not having to worry about the exact setup of your EC2 instances in terms of the operating system dependencies of your workload; 2. being able to address the processing application as a single service; 3. potentially increased reliability, for example in case of errors.
Regarding your exact questions:
Can Amazon ECS be configured so that if there are no tasks running it automatically shuts down all instances?
The keyword here is autoscaling. Note that there are two levels of scaling: 1. infrastructure scaling (the number of EC2 instances) and 2. application service scaling (the number of application containers/tasks deployed). ECS infrastructure scaling works based on EC2 Auto Scaling groups; for more info see this link. For application service scaling and serverless ECS (Fargate), see this link.
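For the application-service level, a minimal sketch with the AWS CLI could look like this (cluster and service names are placeholders; on Fargate, a desired count of zero means no running tasks and therefore no task charges):
# Register the ECS service as a scalable target that is allowed to scale down to 0 tasks
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --min-capacity 0 --max-capacity 10
# A scaling policy (aws application-autoscaling put-scaling-policy) then decides
# when to move the desired count up or down, e.g. based on a queue-depth metric.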
Also, I do not quite understand how exactly I would submit a specific chunk of data to be processed. It seems like "Tasks" as defined in Amazon ECS really correspond to a single Docker container, not so much to what kind of data that Docker container will process. Is that correct?
A "Task Definition" in ECS is describing how one or multiple docker containers can be deployed for a purpose and what its environment / limits should be. A task is a single instance that is run in a "Service" which itself can deploy a single or multiple tasks. Similar concepts are Pod and Service/Deployment in kubernetes.
So would I still need to feed the data-to-be-processed into the instances via Simple Queue Service, or something else, and then use Lambda to poll those queues to see if they should submit tasks to ECS?
A queue is always helpful for decoupling service requests from processing and for making sure you don't lose requests. It is not required if your application service cluster can offer a service interface and process incoming requests directly in a reliable fashion, but if your application cluster has to scale up/down frequently, that may impact its ability to process reliably.
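If you do keep a queue in front, the decoupling itself is simple: producers put work onto the queue, and a consumer (a Lambda function or a long-polling worker) pulls from it and launches the processing task. The queue URL and payload below are placeholders.
# Producer side: enqueue one unit of work
aws sqs send-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/work-queue \
  --message-body '{"input":"s3://my-bucket/chunk-42.json"}'

# Consumer side (what the Lambda or worker does): long-poll for work,
# then start a task as sketched above and delete the message once it succeeds
aws sqs receive-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/work-queue \
  --wait-time-seconds 20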

AWS: How to deploy docker instance near instantly on ec2 closest to my customer?

I have a basic Docker image containing a Python script that comes in under 100 MB. I'm not sure which distro I'm going to use yet, but preferably one that results in as small an image as possible.
The goal is to deploy a docker image on t2.nano ec2 instance but it must meet the following conditions:
from the time a customer requests access via URL, it should respond as quickly as possible, preferably under a few seconds.
the latency between the customer and the newly deployed Docker EC2 instance should be as small as possible, meaning an EC2 instance in the closest AWS region.
Is this possible?
It's not possible to deploy an EC2 instance in under a few seconds, especially not t2.nano type instances. EC2 instances follow the same rules as physical compute resources, thus larger instance types boot faster, and t2.nano is currently the smallest/least powerful/slowest instance size. That said, even the fastest instances would take a minute or so to be provisioned and fully booted.
It sounds like you should look into using AWS Lambda. It's their proprietary, managed, containerized compute resource service, and designed for the kind of workflow you are describing. You don't manage the containers themselves, but deploy the code (of which Python is a supported language) and its dependent libraries to the service, and it handles launching it in a container on demand, with overhead in the sub-second range.
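A minimal sketch of that flow with the AWS CLI; the function name, role ARN, runtime version, and file names are placeholders, and the execution role must already exist and be assumable by Lambda.
# Package a single-file Python handler and create the function
zip function.zip handler.py
aws lambda create-function \
  --function-name process-request \
  --runtime python3.12 \
  --handler handler.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-exec-role

# Invoke it on demand; startup overhead is typically well under a second
aws lambda invoke --function-name process-request response.json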
Note that it is not designed for hosting "websites" directly, if that's your intention. Lambda functions are invoked by other AWS services, one of which is API Gateway, which would be the best route in providing a public interface to your Lambda functions. This could be used in conjunction with something like S3 static website hosting to provide the building blocks for a "serverless" web application.
As for your second question, Route 53 does support latency based routing, but I don't believe it supports API Gateway endpoints as targets yet. So if global latency is a big concern, your best bet may be to deploy a few full-time EC2 instances around the world and use latency based routing. If it's mainly static assets you're worried about, CloudFront can cache these at edge locations, as an alternative.