How to clone AWS service configuration

My current project depends heavily on AWS, using several services: SQS, Lambda, DynamoDB, S3, API Gateway ... and they interact with each other to perform a specific task (for example, an SQS queue triggers a Lambda function that stores processed data in DynamoDB and S3). After deploying and testing, everything is working together very well, and I am planning to clone the current working environment for a new project.
The obvious way I can think of is creating the new services one by one (I have taken notes of all the configurations for each service so far).
My question: is there any good, automated way to clone a working AWS environment configuration?
Any suggestion is appreciated.

You can use a CloudFormation template to design your architecture and then reuse it as needed.
Since you already have your architecture running, instead of writing the template manually you can run CloudFormer, which will let you create a CloudFormation template out of your existing infrastructure. You can pick which parts to include in the template via a web interface, and once done, you can launch another copy of the architecture from that newly created CloudFormation template.
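Once you have the generated template, launching a clone is a single API call. A minimal sketch using boto3 (the stack and file names here are made up for illustration):

```python
import boto3

cfn = boto3.client("cloudformation")

# Launch a copy of the architecture from the exported template.
with open("exported-template.json") as f:
    cfn.create_stack(
        StackName="cloned-environment",
        TemplateBody=f.read(),
        Capabilities=["CAPABILITY_NAMED_IAM"],  # required if the template defines IAM resources
    )

# Block until every resource in the stack has been provisioned.
cfn.get_waiter("stack_create_complete").wait(StackName="cloned-environment")
```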

Related

Automated creation of a new environment in AWS

I could not find a definite 'yes' or 'no' anywhere, so I thought I would ask here. Is it possible to run a custom script which would automatically create a new environment on AWS with all the settings (network, capacity, security, etc.)? I need to create a lot of new environments as I am switching from individual load balancers to shared ones, and all the settings are the same (apart from the environment and application name), so it involves a lot of manual work.
From What is AWS CloudFormation? - AWS CloudFormation:
AWS CloudFormation is a service that helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; CloudFormation handles that.
If you want to create the CloudFormation template programmatically, you can use the AWS Cloud Development Kit (AWS CDK):
The AWS CDK lets you build reliable, scalable, cost-effective applications in the cloud with the considerable expressive power of a programming language.
The AWS CDK supports TypeScript, JavaScript, Python, Java, C#/.Net, and Go. Developers can use one of these supported programming languages to define reusable cloud components known as Constructs. You compose these together into Stacks and Apps.
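For a feel of what that looks like, here is a minimal sketch of a CDK app in Python: one Stack containing one Construct (an S3 bucket). The names are illustrative, not from the question:

```python
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class StorageStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Each construct synthesizes to one or more resources in the
        # resulting CloudFormation template.
        s3.Bucket(self, "DataBucket", versioned=True)

app = App()
StorageStack(app, "StorageStack")
app.synth()  # writes the CloudFormation template to cdk.out/
```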
Or, you can simply write your own script in a programming language that calls an AWS SDK to create resources in AWS individually. Everything in AWS can be done via API calls.
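A rough sketch of that script-it-yourself approach with boto3, the AWS SDK for Python (resource names are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.client("dynamodb")

# Each resource is one individual API call, created in whatever order you choose.
queue = sqs.create_queue(QueueName="new-project-queue")
table = dynamodb.create_table(
    TableName="new-project-table",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

print(queue["QueueUrl"])
print(table["TableDescription"]["TableArn"])
```

The trade-off is that you now own the dependency ordering, error handling, and teardown logic that CloudFormation would otherwise handle for you.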

Best practices to Initialize and populate a serverless PostgreSQL RDS instance by a CloudFormation stack deployment

We are successfully spinning up an AWS CloudFormation stack that includes a serverless RDS PostgreSQL instance. Once the PostgreSQL instance is in place, we're automatically restoring a PostgreSQL database dump (in binary format, created using pg_dump on a local development machine) onto the PostgreSQL instance just created by CloudFormation.
We're using a Lambda function (instantiated by the CloudFormation process) that includes a build of the pg_restore executable within a Lambda layer and we've also packaged our database dump file within the Lambda.
The above approach seems complicated for something that has presumably been solved many times... but Google searches have revealed almost nothing that corresponds to our scenario. We may be thinking about our situation in the wrong way, so please feel free to offer a different approach (e.g., is there a CodePipeline/CodeBuild approach that would automate everything?). Our preference would be to stick with the AWS toolset as much as possible.
This process will be run every time we deploy to a new environment (e.g., development, test, stage, pre-production, demonstration, alpha, beta, production, troubleshooting) potentially per release as part of our CI/CD practices.
Does anyone have advice or a blog post that illustrates another way to achieve our goal?
If you have provisioned everything via IaC (Infrastructure as Code), most of the time-saving is already done: you should be able to replicate your infrastructure to other accounts/stacks via different roles in your AWS credentials and config files by passing the --profile some-profile flag. I would recommend AWS SAM (Serverless Application Model) over CloudFormation, though, as I find I only need to write about half the code (roles & policies are mostly created for you) and I get much better and faster feedback via the console. I would also recommend sam sync (it is in beta currently but worth using) so you don't need to create a change set on code updates; only the code is updated, so deploys take 3-4 seconds. Some AWS SAM examples here, check both videos and patterns: https://serverlessland.com (official AWS site)
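The same profile idea works from code as well as from the SAM/AWS CLI. A small sketch with boto3, assuming hypothetical profile names in your AWS config file:

```python
import boto3

# Deploy the same template into several accounts/environments by switching
# named profiles from ~/.aws/credentials and ~/.aws/config.
for profile in ("dev", "staging", "prod"):
    session = boto3.Session(profile_name=profile)
    cfn = session.client("cloudformation")
    with open("template.yaml") as f:
        cfn.create_stack(StackName="my-app", TemplateBody=f.read())
```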
In terms of the restore of RDS, I'd probably create a base RDS database, take a snapshot of that, and restore all other RDS instances from snapshots rather than manually creating it all each time. You are able to copy snapshots, and in fact automate backups to snapshots cross-account (and cross-region if required), which should be a little cleaner.
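A hedged sketch of that snapshot approach with boto3, assuming an Aurora Serverless (cluster-based) PostgreSQL database and illustrative identifiers:

```python
import boto3

rds = boto3.client("rds")

# One-time: snapshot the populated base cluster.
rds.create_db_cluster_snapshot(
    DBClusterSnapshotIdentifier="base-postgres-snapshot",
    DBClusterIdentifier="base-postgres",
)

# Per environment: restore a fresh cluster from that snapshot instead of
# re-running pg_restore every time.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="test-env-postgres",
    SnapshotIdentifier="base-postgres-snapshot",
    Engine="aurora-postgresql",
)
```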
There is also a clean solution for replicating data instantaneously across AWS accounts using AWS DMS (Database Migration Service) and CDC (Change Data Capture). The process is: you have a source DB, a target DB, and a 'replication instance' (eg a micro EC2-class instance) that monitors your source DB and replicates changes out to different instances, so you are always in sync on data. For example, if you have several devs working on separate 'stacks' so as not to pollute each other's logs, you can seamlessly replicate data and changes out from the one DB. This works when the source and destination DBs are both already in AWS, but you can of course also use AWS DMS to migrate a local DB over to AWS; some more info here: https://docs.aws.amazon.com/dms/latest/sbs/chap-manageddatabases.html
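For a rough idea of the DMS/CDC setup in boto3 (the endpoint and instance ARNs below are placeholders; creating those endpoints and the replication instance is a separate step not shown here):

```python
import json
import boto3

dms = boto3.client("dms")

# "full-load-and-cdc" copies the existing data first, then keeps applying
# ongoing changes from the source DB to the target DB.
dms.create_replication_task(
    ReplicationTaskIdentifier="replicate-dev-data",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "all-tables",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```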

Best practice deploying using CDK, AWS, and GitHub private repos?

I am not quite clear on the best practices related to using CDK to deploy private GitHub repos to AWS. I understand that a pipeline should be created by CDK and the pipeline should invoke CodeDeploy to deploy the assets, but beyond that the details are murky.
I also want to understand whether, for this use case, it would make sense to have a separate CDK repo responsible for the infrastructure of the entire backend of my project, or whether it would make more sense to include CDK code in each individual component repo. As I will be utilizing a microservice/cell-based approach for building out components, the overhead required in adding CDK configuration for each component might be substantial.
You can think of CDK as a compiler that takes a given language and 'compiles' it down to CloudFormation templates. Those templates are then uploaded to CloudFormation by the CDK framework and run for you when you execute the deploy command.
So, to answer your question more directly: if you can figure out how to do it in CloudFormation, you can do it. That may involve spinning up a CodeBuild project and running a script that executes some API calls, or prepares a package for an EC2 server that then uses that package in the next step, or any number of things.
But remember that CDK synths its CloudFormation template all at once, and is only creating the template. It does not run any scripts that may be part of your CodeBuild projects, and it does not 'wait' for certain things to complete, because it isn't doing anything like that. If you have a sequence of events that need to occur, you want to use CodePipeline to orchestrate those events for you; and you can certainly set up your CodePipeline with CDK!
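As a hedged sketch of what that can look like with the aws_cdk.pipelines module (the repo name is a placeholder; by default CodePipelineSource.git_hub reads a GitHub personal access token from a Secrets Manager secret named "github-token"):

```python
from aws_cdk import Stack, pipelines
from constructs import Construct

class PipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        pipelines.CodePipeline(
            self, "Pipeline",
            synth=pipelines.ShellStep(
                "Synth",
                # Pulls from a private GitHub repo, authenticating with the
                # token stored in Secrets Manager.
                input=pipelines.CodePipelineSource.git_hub("owner/repo", "main"),
                commands=[
                    "npm install -g aws-cdk",
                    "pip install -r requirements.txt",
                    "cdk synth",
                ],
            ),
        )
```

Deployment stages for each environment are then added to the pipeline, and the pipeline keeps itself up to date from the repo on every push.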
As for overhead: maybe at first. But trust me when I say it becomes very quick and easy to generate a CDK stack for a given microservice with experience, and it's super handy. Being able to spin up an ad-hoc, on-demand testing environment is super useful. Being able to deploy individual stacks on demand and make quick changes with just a line of code is handy as all get out. Having a single source of code for both your production and development environments, where a change made in one is automatically reflected in each on the next deployment, is super handy.
CDK is a very powerful tool, but it is a very low-level one. It creates the template that will create your resources. That's it. If your resources need to do something after being created for something else to happen, you have to make use of other services to orchestrate that (CodePipeline, Step Functions, CloudWatch Events, etc.).

How to deploy Infrastructure as Code on AWS

Had a question regarding infrastructure as code on AWS.
Wondering how to do this (the process of deploying), and also why this is an efficient method for architecture? Also, are there other methods that should be looked at over this?
I am looking to deploy this for a startup I am working for and need assistance in getting this going. Any help is appreciated.
Thank you.
From What is AWS CloudFormation?:
AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; AWS CloudFormation handles all of that.
So, instead of manually creating each bit of architecture (network, instances, queues, storage, etc), you can define them in a template and CloudFormation will deploy them. It is smart enough to mostly know the correct order of creation (eg creating the network before creating an Amazon EC2 instance within the network) and it can also remove resources when the 'stack' is no longer required.
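As a minimal illustration, here is a hypothetical two-resource template expressed as a Python dict and deployed with boto3; in practice you would usually keep the template as a YAML/JSON file in version control:

```python
import json
import boto3

# Declare what should exist; CloudFormation figures out how to create it.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WorkQueue": {"Type": "AWS::SQS::Queue"},
        "DataBucket": {"Type": "AWS::S3::Bucket"},
    },
}

boto3.client("cloudformation").create_stack(
    StackName="example-stack",
    TemplateBody=json.dumps(template),
)
```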
Other benefits:
The template effectively documents the infrastructure
Infrastructure can be checked into a source code repository, and versioned
Infrastructure can be repeatedly and consistently deployed (eg Test environment matches Production environment)
Changes can be made to the template and CloudFormation can update the 'stack' by just deploying the changes
There are tools (eg https://former2.com/) that can generate the template from existing infrastructure, or just create it from code snippets taken from the documentation.
Here's an overview: Simplify Your Infrastructure Management Using AWS CloudFormation - YouTube
There are also tools like Terraform that can deploy across multiple cloud services.

AWS CLI vs Console and CloudFormation stacks

Is there any known downside to creating resources on AWS through the CLI? Is it more reliable/easier/error-prone/largely accepted/recommended to use one method over the other? While setting up recurring scripts, is there a reason why I would want to use CloudFormation or the AWS Console over the AWS CLI to run commands directly?
For example, if I were to create an ECS Fargate task definition, is there any reason why I might want to use AWS CloudFormation or the Console over the AWS CLI? CLI syntax is straightforward and easy to use, and there are a few things (like setting up event rules/targets for a Fargate task specifically) that are not supported via CloudFormation yet.
The AWS CLI and AWS CloudFormation are two different tools that can be used to create infrastructure on AWS. The CLI is more powerful and has finer-grained control than CloudFormation. CloudFormation makes it very easy to use YAML or JSON text files that can describe an entire enterprise in the cloud.
One of the strong benefits of CloudFormation is the automatic support for rolling back changes if anything fails while deploying a stack. The CLI, in comparison, would require you to figure out the details of what went wrong and how to get back to your previous state. Updating infrastructure is another benefit of CloudFormation: make the change in the template and update the stack.
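That update flow is short in practice. A hedged sketch with boto3 (stack and file names assumed): edit the template, then ask CloudFormation to apply the difference; if the update fails part-way, the stack rolls back automatically:

```python
import boto3

cfn = boto3.client("cloudformation")

# Apply only the delta between the current stack and the edited template.
with open("template.json") as f:
    cfn.update_stack(StackName="example-stack", TemplateBody=f.read())

cfn.get_waiter("stack_update_complete").wait(StackName="example-stack")
```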
For small setups, using the CLI is fine. However, once you get past launching an EC2 instance and start building VPCs, instances, key pairs, security groups, RDS databases, etc., you will find that the CLI has some real limitations: it is mostly too manual a process, not easily repeatable, and difficult to put into version control.
If you are constantly building, testing and deleting complex setups, CloudFormation is absolutely one of the best tools from AWS. Note that there are a number of third-party solutions with huge followings, such as Bamboo, Octopus, Jenkins, Chef, etc.
If your job is SysOps or DevOps then you absolutely want to master the CLI and CloudFormation. These are amazing tools for working with AWS. Also master Elastic Beanstalk, maybe OpsWorks, and one of the third-party tools like Jenkins.