Reducing VPC Endpoint costs when deploying an image to Amazon ECS with CodePipeline

I have a basic understanding of AWS architecture; however, I need a way to reduce my current costs. My current pipeline is as follows:
Source the React app code from GitHub
Use CodeBuild to build a Docker container and push it to ECR
Deploy the container into an ECS Fargate cluster
For security reasons, I do not want my ECS service to auto-assign a public IP. Instead I have been using VPC endpoints within the same subnets that the cluster operates in, for the following services:
com.amazonaws.eu-west-2.ecr.dkr (Interface)
com.amazonaws.eu-west-2.ecr.api (Interface)
com.amazonaws.eu-west-2.logs (Interface)
com.amazonaws.eu-west-2.secretsmanager (Interface)
com.amazonaws.eu-west-2.s3 (Gateway)
The downside is that the majority of my AWS bill is now taken up by keeping these VPC endpoints stood up. The two options I have thought of are:
Put a CloudFormation step in CodePipeline to stand up the VPC endpoints before the ECS deployment, then delete the stack manually after deployment
Create a Lambda function step in CodePipeline to stand up the VPC endpoints before deployment, and another Lambda step to delete them afterwards
Are either of these "best practice" or is there another way I could automatically create/delete these endpoints when required?
If any further info is required, let me know.
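The Lambda approach (the second option) can be sketched with boto3. Note that only the four interface endpoints carry an hourly charge; the S3 gateway endpoint is free and can stay up permanently. The environment/parameter names below are illustrative, not part of the original setup:

```python
# Sketch of a Lambda step that creates the interface endpoints before an ECS
# deployment; a mirror-image step would call delete_vpc_endpoints afterwards.
# VPC, subnet and security group IDs are assumed to arrive as parameters.

def interface_endpoint_services(region):
    """Pure helper: the interface endpoint service names this pipeline needs."""
    return [
        f"com.amazonaws.{region}.{svc}"
        for svc in ("ecr.dkr", "ecr.api", "logs", "secretsmanager")
    ]

def create_endpoints(region, vpc_id, subnet_ids, sg_id):
    import boto3  # imported lazily so the helper above stays testable offline
    ec2 = boto3.client("ec2", region_name=region)
    ids = []
    for service in interface_endpoint_services(region):
        resp = ec2.create_vpc_endpoint(
            VpcEndpointType="Interface",
            VpcId=vpc_id,
            ServiceName=service,
            SubnetIds=subnet_ids,
            SecurityGroupIds=[sg_id],
            PrivateDnsEnabled=True,
        )
        ids.append(resp["VpcEndpoint"]["VpcEndpointId"])
    # Persist these IDs (e.g. in SSM Parameter Store) so the teardown
    # Lambda can pass them to ec2.delete_vpc_endpoints(VpcEndpointIds=ids).
    return ids
```

One caveat with either option: interface endpoints can take a few minutes to become available, so the pipeline needs a wait between endpoint creation and the ECS deploy action.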

Related

How to migrate AWS ECS from one account to another (in a different Region/AZ)?

Docker containers are hosted with AWS ECS inside a VPC in a particular region. How do I migrate them to a different VPC in a different region?
Unfortunately, there isn't a straightforward method to migrate a service from one region to another. To accomplish this, you'll need to ensure that you have a VPC and ECS cluster set up in the target region. Then, you can create the service within that cluster and VPC.
If you're using CloudFormation or Terraform for configuration as code, simply update the region and relevant definitions, then redeploy. Otherwise, you can use the AWS CLI to extract the full definition of your cluster and service, and then recreate it in the target region. For more information, see the AWS CLI ECS reference: https://docs.aws.amazon.com/cli/latest/reference/ecs/index.html
Also, make sure that any Docker images stored in a private registry are accessible in the target region. Best of luck with your migration!
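The CLI/SDK route can be sketched with boto3. The fiddly part is field filtering: describe_services returns read-only fields (serviceArn, events, deployments, ...) that create_service rejects, so you have to whitelist the accepted keys. The whitelist below is illustrative, not exhaustive; check the ECS API reference before relying on it:

```python
# Sketch: copy an ECS service definition from one region to another.
# Assumes the task definition already exists in the target region.

CREATE_SERVICE_FIELDS = {
    "serviceName", "taskDefinition", "desiredCount", "launchType",
    "loadBalancers", "networkConfiguration", "deploymentConfiguration",
    "placementConstraints", "placementStrategy",
}

def to_create_service_args(described):
    """Keep only the keys create_service accepts, dropping read-only
    fields such as serviceArn, events and deployments."""
    return {k: v for k, v in described.items() if k in CREATE_SERVICE_FIELDS}

def copy_service(src_region, dst_region, cluster, service, dst_cluster):
    import boto3  # lazy import keeps the pure helper testable offline
    src = boto3.client("ecs", region_name=src_region)
    dst = boto3.client("ecs", region_name=dst_region)
    desc = src.describe_services(cluster=cluster, services=[service])
    args = to_create_service_args(desc["services"][0])
    return dst.create_service(cluster=dst_cluster, **args)
```

Subnet and security group IDs inside networkConfiguration are region-specific, so in practice those need to be rewritten for the target VPC as well.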

Retrieve existing resource data using AWS Cloudformation

I need to retrieve existing data/properties of a given resource using an AWS CloudFormation template. Is that possible? If so, how can I do it?
Example 1:
Output: the ID of a Security Group which allows traffic on port 22
Example 2:
Output: the ID of an instance which uses the default VPC
AWS CloudFormation is used to deploy infrastructure from a template in a repeatable manner. It cannot provide information on any resources created by any methods outside of CloudFormation.
Your requirements seem more relevant to AWS Config:
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
An AWS resource is an entity you can work with in AWS, such as an Amazon Elastic Compute Cloud (EC2) instance, an Amazon Elastic Block Store (EBS) volume, a security group, or an Amazon Virtual Private Cloud (VPC).
Using your examples, AWS Config can list EC2 instances and any resources that are connected to the instances, such as Security Groups and VPCs. You can easily click through these relationships and view the configurations. It is also possible to view how these configurations have changed over time, such as:
When an EC2 instance changed state (e.g. stopped, running)
When rules changed on a Security Group
Alternatively, you can simply make API calls to AWS services to obtain the current configuration of resources, such as calling DescribeInstances to obtain a list of Amazon EC2 instances and their configurations.
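As a sketch of the API-call approach with boto3, here is how you could answer the second example (instances in a given security group). The only non-obvious part is that DescribeInstances groups instances inside reservations:

```python
# Sketch: query current EC2 state directly via the API, as an
# alternative to AWS Config for point-in-time questions.

def flatten_instances(response):
    """DescribeInstances nests instances inside reservations;
    flatten them into a single list."""
    return [
        inst
        for reservation in response.get("Reservations", [])
        for inst in reservation.get("Instances", [])
    ]

def instances_with_security_group(region, group_id):
    import boto3  # lazy import so flatten_instances stays testable offline
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance.group-id", "Values": [group_id]}]
    )
    return [i["InstanceId"] for i in flatten_instances(resp)]
```

The difference from AWS Config is that API calls only give you the current state; Config also records how that state changed over time.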

Use AWS CodeBuild to test private components of a system deployed with CodePipeline

I have a system that is automatically deployed by AWS CodePipeline. The deployment creates all resources from scratch, including the VPC.
This system has some components in private subnets, which expose an API and have no public access. They are accessible from other internal components only.
Once the deployment completes, I would like CodePipeline to start some API tests against these components.
I was planning to do it with AWS CodeBuild test action, but I have hit a roadblock there. The issue is that, to configure CodeBuild to run in a VPC, you need the VPC to exist. But my VPC does not exist yet at setup time of the pipeline.
Does anyone have a simple solution for that?
Note that I don't consider creating the VPC in advance, separately from the rest of the system, to be a solution. I really want my deployment to be atomic.

AWS Cloudformation - reverse engineer an existing resource

A while back I created a CloudFormation template to create multiple services on a given cluster and set up an Aurora RDS instance, Redis, and a load balancer.
The template was broken, so I had to make various manual changes to get it working:
Redis was created in the default VPC, so I had to set up VPC peering manually
I added HTTPS port forwarding on my ALB
I added CORS and various inline roles with ARNs for the S3 bucket
plus potentially a dozen or so other changes
I am now in the process of rewriting the CloudFormation stack, so my question is:
Is there a way, using the AWS CLI, to reverse engineer my current ALB, RDS, and S3 resources into a CloudFormation template for each of them?
Then I would be able to compare that template with the new one and adjust it.
Or is there a way to compare the current CloudFormation stack with the current state of the resources and reverse engineer it that way?
It seems Former2 would be the best solution for your use case:
Generate CloudFormation / Terraform / Troposphere templates from your existing AWS resources

Create AWS cache clusters in VPC with CloudFormation

I am creating an AWS stack inside a VPC using CloudFormation and need to create ElastiCache clusters on it. I have investigated and there is no support in CloudFormation to create cache clusters in VPCs.
Our "workaround" was to create the cache cluster when some "fixed" instance (a bastion, for example) bootstraps, using CloudInit and the Amazon ElastiCache CLI tools (elasticache-create-cache-subnet-group, elasticache-create-cache-cluster). Then, when the front end machines bootstrap (we are using autoscaling), they use elasticache-describe-cache-clusters to get the cache cluster nodes and update their configuration.
I would like to know if you have different solutions to this problem.
VPC support has now been added for ElastiCache in CloudFormation templates.
To launch an AWS::ElastiCache::CacheCluster in your VPC, create an AWS::ElastiCache::SubnetGroup that defines which subnets in your VPC ElastiCache should use, and assign it to the CacheSubnetGroupName property of the AWS::ElastiCache::CacheCluster.
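A minimal template fragment showing the two resources wired together; the subnet IDs, engine, and sizing are placeholder values:

```yaml
Resources:
  CacheSubnetGroup:
    Type: AWS::ElastiCache::SubnetGroup
    Properties:
      Description: Subnets for the cache cluster
      SubnetIds:            # placeholder subnet IDs in your VPC
        - subnet-aaaa1111
        - subnet-bbbb2222
  CacheCluster:
    Type: AWS::ElastiCache::CacheCluster
    Properties:
      Engine: redis
      CacheNodeType: cache.t3.micro
      NumCacheNodes: 1
      CacheSubnetGroupName: !Ref CacheSubnetGroup
```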
Your workaround is a reasonable one (and shows that you seem to be in control of your AWS operations already).
You could eventually improve on your custom solution by means of the dedicated CustomResource type: custom resources are special AWS CloudFormation resources that provide a way for a template developer to include resources in an AWS CloudFormation stack that are provided by a source other than Amazon Web Services. The AWS CloudFormation Custom Resource Walkthrough provides a good overview of what this is all about, how it works, and what's required to implement your own.
The benefit of using this facade for a custom resource (i.e. the Amazon ElastiCache cluster in your case) is that its entire lifecycle (create/update/delete) can be handled in a similar and controlled fashion, just like any officially supported CloudFormation resource type; e.g. resource creation failures would be handled transparently from the perspective of the entire stack.
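The shape of the Lambda side of a custom resource can be sketched as below, assuming the Lambda-backed flavor (the walkthrough also covers SNS-backed resources). CloudFormation sends Create/Update/Delete events and waits for a response to be PUT to a pre-signed ResponseURL; the ElastiCache calls themselves are elided here:

```python
# Sketch of a Lambda-backed custom resource handler.
import json
import urllib.request

def build_response(event, status, physical_id, reason="", data=None):
    """Pure helper: the JSON body CloudFormation expects back."""
    return {
        "Status": status,                    # "SUCCESS" or "FAILED"
        "Reason": reason,                    # required when status is FAILED
        "PhysicalResourceId": physical_id,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    }

def handler(event, context):
    try:
        if event["RequestType"] == "Create":
            physical_id = "my-cache-cluster"           # create the cluster here
        else:  # Update or Delete
            physical_id = event["PhysicalResourceId"]  # modify/delete it here
        body = build_response(event, "SUCCESS", physical_id)
    except Exception as exc:
        body = build_response(event, "FAILED",
                              event.get("PhysicalResourceId", "none"),
                              reason=str(exc))
    req = urllib.request.Request(
        event["ResponseURL"], data=json.dumps(body).encode(), method="PUT"
    )
    urllib.request.urlopen(req)
```

Always sending *some* response, even on failure, is what keeps the stack from hanging until the CloudFormation timeout.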
However, for the use case at hand you might actually just want to wait for official support becoming available:
AWS has announced VPC support for ElastiCache in the context of the recent major Amazon EC2 Update - Virtual Private Clouds for Everyone!, which boils down to Default VPCs for (Almost) Everyone.
We want every EC2 user to be able to benefit from the advanced networking and other features of Amazon VPC that I outlined above. To enable this, starting soon, instances for new AWS customers (and existing customers launching in new Regions) will be launched into the "EC2-VPC" platform. [...]
You don’t need to create a VPC beforehand - simply launch EC2
instances or provision Elastic Load Balancers, RDS databases, or
ElastiCache clusters like you would in EC2-Classic and we’ll create a
VPC for you at no extra charge. We’ll launch your resources into that
VPC [...] [emphasis mine]
This update sort of implies that any new services will likely be also available in VPC right away going forward (else the new EC2-VPC platform wouldn't work automatically for new customers as envisioned).
Accordingly I'd expect the CloudFormation team to follow suit and complete/amend their support for deployment to VPC going forward as well.
My solution for this has been to have a controller process that polls a message queue subscribed to the SNS topic to which I publish CloudFormation events (click Advanced in the console when you create a CloudFormation stack to send notifications to an SNS topic).
I pass the required parameters as tags to AWS::EC2::Subnet and have the controller pick them up, when the subnet is created. I execute the set up when a AWS::CloudFormation::WaitConditionHandle is created, and use the PhysicalResourceId to cURL with PUT to satisfy a AWS::CloudFormation::WaitCondition.
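The PUT to the wait condition handle has a small required JSON body; a sketch of that signal, with the handle URL supplied by the stack:

```python
# Sketch: signal a CloudFormation WaitCondition by PUT-ing JSON to the
# pre-signed handle URL (the PhysicalResourceId of the WaitConditionHandle).
import json
import urllib.request

def wait_condition_signal(status, unique_id, data="", reason="Setup complete"):
    """Pure helper: the body CloudFormation expects on the handle URL."""
    return {
        "Status": status,        # "SUCCESS" or "FAILURE"
        "UniqueId": unique_id,   # must be unique per signal
        "Data": data,
        "Reason": reason,
    }

def send_signal(handle_url, status="SUCCESS", unique_id="setup-1"):
    body = json.dumps(wait_condition_signal(status, unique_id)).encode()
    req = urllib.request.Request(
        handle_url, data=body, method="PUT",
        # The pre-signed URL's signature requires an empty Content-Type.
        headers={"Content-Type": ""},
    )
    urllib.request.urlopen(req)
```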
It works somewhat, but doesn't handle resource deletion in ElastiCache, because there is no AWS::CloudFormation::WaitCondition analogue for stack deletion. That remains a manual operational procedure with my approach.
The CustomResource approach looks more polished, but requires an endpoint, which I don't have. If you can put together an endpoint, that looks like the way to go.