Automate AWS deployment for new customers

As I'm following a multi-instance deployment strategy as opposed to a multi-tenant one, I'm deploying my entire infrastructure again for every new customer. This results in a lot of work, as I have to:
Deploy a new API instance on Elastic Beanstalk + env variables
Deploy a new webapp instance via S3
Deploy a new file storage bucket via S3
Deploy a new backup file storage bucket via S3
Set up a new data pipeline backing up the file storage to the backup bucket
Map the API and web app instances to new customer-specific URLs (e.g. mycustomer.api.mycompany.com and mycustomer.app.mycompany.com) via Route 53 + CloudFront
...
Is there a way to automate all of this deployment? I've looked into CodeDeploy by AWS but that doesn't seem to fit my needs.

The AWS tool that you can use to build infrastructure again and again is CloudFormation. This technique is called Infrastructure as Code (IaC). You can also use Terraform if you don't want to use an AWS-specific tool.
You can use either YAML or JSON to define the template for your infrastructure.
And you'll be using Git for template change management.
Watch this re:Invent video to get the whole picture.
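For example, a minimal per-customer CloudFormation template could be parameterized on the customer name and deployed once per customer. This is only a sketch covering the two S3 buckets; the resource and bucket names are illustrative, and the Beanstalk, Route 53, and CloudFront pieces would be added as further resources:

AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  CustomerName:
    Type: String            # e.g. "mycustomer"
Resources:
  FileStorageBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${CustomerName}-file-storage'
  BackupBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${CustomerName}-file-storage-backup'

Onboarding a new customer then becomes a single call, e.g. aws cloudformation create-stack --stack-name mycustomer --template-body file://customer.yaml --parameters ParameterKey=CustomerName,ParameterValue=mycustomer.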

Related

Is it possible to mount an (S3) file into an AWS::ECS::TaskDefinition ContainerDefinition using CloudFormation?

I have this ECS cluster that is running task definitions with a single container inside each group. I'm trying to add some fancy observability to my application by introducing OpenTelemetry. Following AWS's docs I found https://github.com/aws-observability/aws-otel-collector which is the AWS version of the OTEL collector. This collector needs a config file (https://github.com/aws-observability/aws-otel-collector/blob/main/config/ecs/ecs-default-config.yaml) that specifies stuff like receivers, exporters, etc. I need to be able to create my own config file with a 3rd-party exporter (I also need to add my secret API key somewhere in there - maybe it can go to Secrets Manager and get mounted as an env var :shrug:).
I'm wondering if this is doable without having to build my own image with the config baked in somewhere, purely using CloudFormation (which is what I use to deploy my app) and other Amazon services.
The plan is to add this container beside each app container (inside the task definition) [and yeah, I know this is overkill, but for now simple > perfect].
Building an additional image would require some cardinal changes to the CI/CD, so if I can go without those it will be awesome.
You can't mount an S3 bucket in ECS. S3 isn't a file system; it is object storage. You would need to either switch to EFS, which can be mounted by ECS, or add something to the startup script of your Docker image to download the file from S3.
I would recommend checking the docs for AWS ADOT. You will find that it supports the config environment variable AOT_CONFIG_CONTENT (doc). So you don't need a config file, only a config env variable. That plays very well with the AWS ecosystem, because you can use AWS Systems Manager Parameter Store and/or AWS Secrets Manager to store the otel collector configuration (doc).
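As a sketch of how that wiring might look in a CloudFormation container definition (the parameter name here is a placeholder, and the task execution role would need ssm:GetParameters on it):

ContainerDefinitions:
  - Name: aws-otel-collector
    Image: public.ecr.aws/aws-observability/aws-otel-collector:latest
    Secrets:
      - Name: AOT_CONFIG_CONTENT   # the collector reads its whole config from this env var
        ValueFrom: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/otel-collector-config'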

GitLab & AWS Parameter Store

We want to save all our AWS account credentials in AWS Parameter Store for better security.
Now the question is:
How can we use the credentials stored in AWS Parameter Store in GitLab for deployment?
In your project, you can configure .gitlab-ci.yml to do many things, one of them being deploying your application. There are many ways to do that; one of them is to:
Build a Docker image of your project
Push the image to ECR
Create a new ECS task definition revision with the new version of your Docker image
Update the ECS service to the new revision of the task definition
To do so, you effectively need the AWS credentials that you have configured in your GitLab repository.
Beyond that, there are many ways to deploy from GitLab to AWS; it depends on your company and what tools you are using.
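A rough sketch of the deploy step in .gitlab-ci.yml (the cluster, service, and file names are placeholders, and the image build-and-push to ECR would be a separate, earlier job):

deploy:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  script:
    # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION are
    # expected as GitLab CI/CD variables (populated from wherever you keep
    # them, e.g. Parameter Store)
    - aws ecs register-task-definition --cli-input-json file://taskdef.json
    - aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task-family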

AWS: how to handle programmatic credentials when building a Docker container

I have a .NET Core app in which I'm using services such as S3, RDS and Dynamo. Initially, every client instance was initialized using the Access_KEY and Secret_Access_Key directly, so basically these two were stored in a configuration file. Recently we've started a process to automate the AWS infrastructure creation using Terraform; we are trying to migrate from managed containers (Fargate and Amplify) to ECS, and we've also migrated from using plain secrets to using profiles.
On Windows I've installed the AWS CLI to configure a profile, and under my
Users/{myUser}/.aws
the following two files were created: config and credentials.
But I don't exactly know how to configure a profile when using Docker on Linux, or what steps I should follow when creating a CI/CD pipeline where, after a commit and a successful build of a Docker image, a new container should pop into existence replacing the old one. Should I configure the AWS profile within the Docker container running the app? Should I generate a new set of keys every time a new container is built and replaces the old one? The way this approach sounds, I don't believe this is the way to do it, but I have no idea how to actually do it.
You shouldn't be using profiles when running inside AWS. Profiles are great for running the code locally, but when your code is deployed on ECS it should be utilizing a task IAM role.
You would manage that in Terraform by creating the IAM role, and then assigning the role to the task in the ECS task definition.
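In Terraform that would be an aws_iam_role wired into the task_role_arn of your aws_ecs_task_definition; the same idea sketched in CloudFormation YAML (with a hypothetical S3 permission and placeholder bucket) looks roughly like this:

TaskRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal: { Service: ecs-tasks.amazonaws.com }
          Action: sts:AssumeRole
    Policies:
      - PolicyName: app-access
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: ['s3:GetObject', 's3:PutObject']
              Resource: 'arn:aws:s3:::my-app-bucket/*'   # placeholder bucket

TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    TaskRoleArn: !GetAtt TaskRole.Arn
    # ... container definitions etc.

The AWS SDK's default credentials chain picks the role up automatically inside the container, so no keys or profiles need to be shipped with the image.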

AWS CloudFormation: How to automate EC2 instance cloning / snapshots

Automating "cloning" / "snapshotting" of an already existing AWS EC2 instance.
I am able to create an AWS EC2 instance manually through CloudFormation within the console. Alternatively, I was able to perform the same operation from Jenkins.
Clone / snapshot: manually, through the "Snapshot" / "Create Image" options, I was able to spin up a new instance from the existing one. My question is: can this be automated through Jenkins, a script, etc.? The solution should be able to use either the snapshot, create image, or any other available option and create a new instance from an existing one.
If the process can be automated, please guide me / provide steps / scripts / documents that can help me achieve this.
Absolutely everything on AWS can be automated in multiple ways, including:
AWS Command-Line Interface (CLI)
SDKs and Programming Toolkits for AWS for multiple languages
Through IT management tools like Chef, Jenkins, Ansible, etc. (which use SDKs to call AWS services on your behalf)
Please note that AWS CloudFormation is a service for deploying services, such as networking, compute and database in an automatic and reproducible manner. It is not typically used for operational activities like taking snapshots.
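For instance, using the AWS CLI from a CI job (shown here as GitLab-CI-style YAML, but the same calls work from a Jenkins shell step; SOURCE_INSTANCE_ID is a variable you would define yourself):

clone-instance:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  script:
    # Create an AMI from the existing instance without rebooting it
    - AMI_ID=$(aws ec2 create-image --instance-id "$SOURCE_INSTANCE_ID" --name "clone-$CI_PIPELINE_ID" --no-reboot --query ImageId --output text)
    # Wait until the AMI is ready to use
    - aws ec2 wait image-available --image-ids "$AMI_ID"
    # Launch a new instance from the AMI
    - aws ec2 run-instances --image-id "$AMI_ID" --instance-type t3.micro --count 1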

Imitate S3 and DynamoDB for development environments

I'm looking to set up my staging server (many instances) to be able to spin up new instances at the press of a button. Ideally I'd just bring up a new Docker instance whenever I need it; however, each instance needs its own S3 and DynamoDB instance. If I have to, I'll bring up real S3 and DynamoDB instances through the AWS API or similar, but I'd prefer to have containers that mimic S3 and DynamoDB. Any suggestions would be appreciated.
You can run LocalStack in a Docker container. The image can be found here.
LocalStack - A fully functional local AWS cloud stack
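A minimal docker-compose sketch for that (the service list and port mapping are just one possible setup):

services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"            # single edge port for all emulated services
    environment:
      - SERVICES=s3,dynamodb   # start only the services you need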
Then you need to override the AWS endpoint URL in the AWS SDK client so that it points to this container.
In Java it would look like this:
// assumes software.amazon.awssdk:dynamodb on the classpath
DynamoDbClient dynamoDbClient = DynamoDbClient.builder()
        .endpointOverride(URI.create("http://localhost:4566")) // LocalStack's default edge port
        .region(Region.US_EAST_1)                              // any region works against LocalStack
        .build();