Deploying CodeStar application to Spot Instance - amazon-web-services

Is there an easy way to automatically deploy a CodeStar application to a persistent spot instance every time the request is fulfilled? The pipeline only runs when the code is changed and requires that the codedeploy agent already be installed. I've searched online and can't seem to find anything regarding using CodeStar with spot instances.

This seems entirely possible; see Deploying AWS CodeDeploy Applications to Auto Scaling Groups:
AWS CodeDeploy supports Auto Scaling, an AWS service that can launch Amazon EC2 instances automatically according to conditions you define. These conditions can include limits exceeded in a specified time interval for CPU utilization, disk reads or writes, or inbound or outbound network traffic. Auto Scaling terminates the instances when they are no longer needed. For more information, see What Is Auto Scaling?.
When new Amazon EC2 instances are launched as part of an Auto Scaling group, AWS CodeDeploy can deploy your revisions to the new instances automatically. You can also coordinate deployments in AWS CodeDeploy with Amazon EC2 instances registered with Elastic Load Balancing load balancers. For more information, see Integrating AWS CodeDeploy with Elastic Load Balancing and Set Up a Classic Load Balancer in Elastic Load Balancing for AWS CodeDeploy Deployments.
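One wrinkle in the question is the requirement that the CodeDeploy agent already be installed. That can be handled at launch time via user data, so every newly fulfilled (spot) instance comes up deployment-ready. A rough Terraform sketch, with the AMI, subnet, and resource names as placeholders (the S3 bucket hosting the agent installer is region-specific):

```hcl
# Sketch only: AMI ID, subnet, and names are placeholders.
resource "aws_launch_template" "app" {
  name_prefix   = "codestar-app-"
  image_id      = "ami-0123456789abcdef0" # placeholder Amazon Linux AMI
  instance_type = "t3.micro"

  # Request spot capacity for instances launched from this template.
  instance_market_options {
    market_type = "spot"
  }

  # Install the CodeDeploy agent on first boot (Amazon Linux);
  # the installer bucket name varies by region.
  user_data = base64encode(<<-EOF
    #!/bin/bash
    yum install -y ruby wget
    cd /home/ec2-user
    wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
    chmod +x ./install
    ./install auto
  EOF
  )
}

resource "aws_autoscaling_group" "app" {
  name_prefix         = "codestar-app-"
  min_size            = 1
  max_size            = 1
  desired_capacity    = 1
  vpc_zone_identifier = ["subnet-placeholder"]

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}
```

If you then attach this Auto Scaling group as the deployment target in your CodeDeploy deployment group, CodeDeploy should push the latest successful revision to each replacement instance automatically, with no pipeline re-run needed.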
Good luck

Related

GPU support for task currently within AWS-Fargate cluster

My main objective is to use a GPU for one of our existing tasks currently deployed through Fargate.
We have existing load balancers for our staging and production environments.
Currently we have two ECS Fargate clusters which deploy Fargate serverless tasks.
We want to deploy one of our existing Fargate tasks with a GPU, but because Fargate doesn't support GPUs, we need to configure an EC2 task instead.
To do this, I believe we need to create EC2 auto-scaling groups associated with both the staging and production environments that allow for deploying EC2 instances with GPUs through ECS.
I'm unsure whether or not we need to create a new cluster to house the EC2 task, or if we can put the EC2 task in our existing clusters (can you mix Fargate and EC2 like this?).
We're using Terraform for Infrastructure as code.
Any AWS documentation or relevant Terraform docs would be appreciated.
You can absolutely mix Fargate and EC2 tasks in the same cluster. I recommend checking out Capacity Providers for this: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-capacity-providers.html
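Since you're already on Terraform, here's a hedged sketch of wiring an EC2 capacity provider into an existing cluster alongside Fargate. It assumes you've already created an Auto Scaling group of GPU instances running the ECS agent; the resource names are placeholders:

```hcl
# Sketch only: assumes an existing cluster and an ASG of GPU instances
# (aws_autoscaling_group.gpu, aws_ecs_cluster.staging are placeholders).
resource "aws_ecs_capacity_provider" "gpu" {
  name = "gpu-ec2"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.gpu.arn

    # Let ECS scale the ASG up/down to match pending tasks.
    managed_scaling {
      status          = "ENABLED"
      target_capacity = 100
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "staging" {
  cluster_name = aws_ecs_cluster.staging.name

  # Fargate and the EC2 capacity provider coexist in one cluster.
  capacity_providers = ["FARGATE", aws_ecs_capacity_provider.gpu.name]
}
```

Each service then chooses its provider via its `capacity_provider_strategy`, so your existing Fargate services can stay untouched while the GPU task targets the EC2 capacity provider.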

devops aws django website scalability : how is auto scaling done in elastic beanstalk and elastic container service ecs

I'm developing a Django website.
On the devops side, I'm considering using AWS with autoscaling. I'm still hesitant to containerize my setup, so I would use either Beanstalk (without containers) or the container service (with Docker). The database will be on Aurora on a separate server.
I am new to AWS, and the expert help they provide online is not free, so here is my question:
When I compare with other hosting providers, their prices depend on the hardware configuration of the server.
I guess (because I don't yet have access to Cost Explorer) that it is the same with EC2 instances on Amazon: you pay more for more powerful servers (CPU, RAM, and/or storage).
So I'm wondering how Elastic Beanstalk or Elastic Container Service instantiates new EC2 servers: do they provision more powerful hardware configurations (scaling up) based on the demand on my website, or does that depend on my manual configuration? Or do they only replicate EC2 instances (scaling out) with the same config I manually set at the start?
Can I manually change the CPU, RAM, and storage of an EC2 instance in Beanstalk or ECS without re-configuring it all?
Can I fine-tune scaling out and scaling up, and which kind of scaling is better and cheaper (the best choice)?
thanks a lot!
Auto Scaling groups scale out horizontally, meaning they spawn new instances as defined in the launch template/launch configuration. An Auto Scaling group cannot scale vertically. You can, however, change the launch configuration and edit the instance type and size, which will replace the instances in the Auto Scaling group.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-launch-template.html
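In other words, the instance size lives in the launch template, not in the scaling logic: you "scale up" by editing the template and letting the group replace its instances. A minimal Terraform sketch (all values are placeholders):

```hcl
# Sketch only: AMI, subnet, and sizes are placeholders.
resource "aws_launch_template" "web" {
  name_prefix   = "django-web-"
  image_id      = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.small"              # change to e.g. t3.large to "scale up"
}

resource "aws_autoscaling_group" "web" {
  name_prefix         = "django-web-"
  min_size            = 1
  max_size            = 4
  vpc_zone_identifier = ["subnet-placeholder"]

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }

  # Roll instances onto the new template version after an edit.
  instance_refresh {
    strategy = "Rolling"
  }
}
```

Scaling out (min_size/max_size plus a scaling policy) is what happens automatically on demand; the instance_type change above is always a manual, deliberate step.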
With ECS, you have two options: Fargate or ECS on EC2. With Fargate (serverless), you can easily define how much RAM/CPU you want to allocate to the "task". With ECS on EC2, you first need to create the ECS cluster (allocating EC2 instances to run it), then create a separate task definition and allocate RAM and CPU to it.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/task_definition_parameters.html
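For the Fargate side, the RAM/CPU allocation mentioned above is set directly on the task definition. A sketch in Terraform (image and names are placeholders; Fargate only accepts certain CPU/memory pairings):

```hcl
# Sketch only: image and family name are placeholders.
resource "aws_ecs_task_definition" "app" {
  family                   = "django-app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "512"  # 0.5 vCPU
  memory                   = "1024" # 1 GB; must be a valid Fargate pairing for the CPU value

  container_definitions = jsonencode([{
    name         = "web"
    image        = "myaccount/django-app:latest" # placeholder
    essential    = true
    portMappings = [{ containerPort = 8000 }]
  }])
}
```

Resizing here is just editing `cpu`/`memory` and registering a new task definition revision, which is considerably less disruptive than replacing EC2 instances.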
Using Beanstalk, you can easily define how much RAM/CPU you want to use in the configuration (easier than running plain Auto Scaling groups with a load balancer yourself). It has a very simple interface for playing around and adjusting resources.

Differences between EC2 Auto Scaling Group (EC2 ASG) and Elastic Container Service (ECS)

From what I've read so far:
EC2 ASG is a simple solution for scaling your server by running more copies of it, with a load balancer in front of the EC2 instance pool.
ECS is more like Kubernetes: it is used when you need to deploy multiple services in Docker containers that work with each other internally to form a service, and auto scaling is a feature of ECS itself.
Are there any differences I'm missing here? Because ECS is almost always a superior choice to go with if they work as I understand.
You are right. In a very simple sense, an EC2 Auto Scaling group is a way to add/remove (register/deregister) EC2 instances behind a Classic Load Balancer or target groups (ALB/NLB).
ECS has two types of scaling, as does any container orchestration platform:
Cluster autoscaling: add/remove EC2 instances in a cluster when tasks are pending to run
Service autoscaling: add/remove tasks in a service based on demand; uses the Application Auto Scaling service behind the scenes
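Because service autoscaling is backed by Application Auto Scaling, it maps onto that service's scalable-target and scaling-policy concepts. A hedged Terraform sketch (cluster and service names are placeholders):

```hcl
# Sketch only: "my-cluster" and "my-service" are placeholders.
resource "aws_appautoscaling_target" "svc" {
  service_namespace  = "ecs"
  resource_id        = "service/my-cluster/my-service"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 1
  max_capacity       = 10
}

resource "aws_appautoscaling_policy" "cpu" {
  name               = "cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.svc.service_namespace
  resource_id        = aws_appautoscaling_target.svc.resource_id
  scalable_dimension = aws_appautoscaling_target.svc.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 60 # aim for ~60% average CPU across tasks
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
```

This only adjusts the task count; pairing it with cluster autoscaling (e.g. a capacity provider with managed scaling) is what keeps enough EC2 capacity around to place those tasks.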

Why do you want to use AWS ECS vs. ElasticBeanstalk for Docker?

I'm planning to use Docker, and associate 1 EC2 instance with 1 Microservice.
Why do I want to deploy Docker in AWS ECS vs. ElasticBeanstalk?
It is said that AWS ECS has native support for Docker. Is that it?
It would be great if you could elaborate on the pros and cons of running Docker on AWS ECS vs. Elastic Beanstalk.
Elastic Beanstalk (multi-container) is an abstraction layer on top of ECS (Elastic Container Service) with some bootstrapped features and some limitations:
Automatically interacts with ECS and ELB
Cluster health and metrics are readily available and displayed without any extra effort
Load balancer must terminate HTTPS and all backend connections are HTTP
Easily adjustable autoscaling and instance sizing
Container logs are all collected in one place, but still segmented by instance – so in a cluster environment finding which instance served a request that logged some important data is a challenge.
Can only set hard memory limits in container definitions
All cluster instances must run the same set of containers
ECS itself is Amazon's answer to container orchestration. It's a bit rough around the edges and definitely a leap from Elastic Beanstalk, but it does have the advantage of significantly more flexibility, including the ability to define a custom scheduler.
All of the limitations imposed by Elastic Beanstalk are lifted.
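To illustrate one of those lifted limitations, the hard/soft memory distinction the answer refers to looks like this in a raw ECS task definition, where you can set a soft reservation alongside (or instead of) the hard limit. A Terraform sketch with placeholder names:

```hcl
# Sketch only: image and family name are placeholders.
resource "aws_ecs_task_definition" "svc" {
  family = "example-svc"

  container_definitions = jsonencode([{
    name              = "app"
    image             = "myorg/app:latest" # placeholder
    essential         = true
    memory            = 512 # hard limit (MiB): the container is killed above this
    memoryReservation = 256 # soft limit (MiB): scheduler reservation only
  }])
}
```

The soft limit lets the scheduler pack containers more densely while still allowing bursts up to the hard limit.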
Refer to these for more info:
Elastic Beanstalk vs. ECS vs. Kubernetes
Amazon EC2 Container Service
Amazon Elastic Beanstalk

AWS EC2: Why does ".NET Beanstalk HostManager v1.0.1.1" keep coming back

I keep killing the default instance and it keeps coming back. Why?
This answer is based on the assumption that you are facing a specific issue I've seen several users stumble over, but your question is a bit short on detail, so I might be misinterpreting your problem.
Background
The AWS Toolkit for Visual Studio allows you to deploy applications to AWS Elastic Beanstalk, which is a Platform as a Service (PaaS) offering allowing you to quickly deploy and manage applications in the AWS cloud:
You simply upload your application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
You deploy an application to Elastic Beanstalk into an Environment comprised of an Elastic Load Balancer and resp. Auto Scaling policies, which together ensure your application will keep running even if the EC2 instance is having trouble servicing the requests for whatever reason (see Architectural Overview for an explanation and illustration how these components work together).
That is, your Amazon EC2 instances are managed by default, so you don't need to administer the infrastructure yourself, but the specific characteristic of this AWS PaaS variation is that you still can do that:
At the same time, with Elastic Beanstalk, you retain full control over
the AWS resources powering your application and can access the
underlying resources at any time.
Now, that's exactly what you unintentionally did by terminating the EC2 instance through a mechanism outside of the Elastic Beanstalk service: the environment detects the loss and, driven by those Auto Scaling policies, triggers the creation of a replacement instance.
Solution
Long story short, you need to terminate the Elastic Beanstalk environment instead, as illustrated in section Step 6: Clean Up within the AWS Elastic Beanstalk Walkthrough (there is a dedicated section for the Elastic Beanstalk service within the AWS Management Console).
You can also do this via Visual Studio, as explained in step 11 at the bottom of How to Deploy the PetBoard Application Using AWS Elastic Beanstalk:
To delete the deployment, expand the Elastic Beanstalk node in AWS Explorer, and then right-click the subnode for the deployment. Click Delete. AWS Elastic Beanstalk will begin the deletion process, which may take a few minutes. If you specified a notification email address for the deployment, AWS Elastic Beanstalk will send status notifications for the delete process to this address.