Convert an existing application AWS stack over to CloudFormation

I have a web site hosted in AWS that makes use of a number of AWS services. The environment was created manually using a combination of the web console and the AWS CLI. I'd like to start managing it using CloudFormation. I've used the CloudFormer tool to create a template of the stack, but I can't find a way to use it to manage the existing environment. It will allow me to create a duplicate environment without too many problems, but I don't really want to delete the entire production environment just so I can recreate it using CloudFormation.
Is there a way to create a template of an existing environment and start updating it with CloudFormation?

@Sailor is right... unfortunately, I can't quote a credible source either - it's just a combination of working with CloudFormation for an extended period of time and knowing enough about it. (Maybe I'm the credible source)
But what you could do is use your CloudFormer stack and roll your existing production infrastructure into it.
For example, if you've got some EC2 instances and an Auto Scaling group, roll those out from the template and then start terminating the manually created ones. How you'd do it would depend on your environment, but if it's architected for the cloud, it shouldn't be too difficult.
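A rough sketch of that cut-over with boto3 (not from the original answer; the Auto Scaling group name and instance IDs are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Scale up the Auto Scaling group that the CloudFormation stack now manages
# (placeholder group name).
autoscaling.set_desired_capacity(
    AutoScalingGroupName="my-cfn-managed-asg",
    DesiredCapacity=4,
)

# Once the new instances are healthy, terminate the manually created ones
# (placeholder instance IDs).
ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0", "i-0fedcba9876543210"])
```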

Currently there is no way to do that.

Related

Best practices to Initialize and populate a serverless PostgreSQL RDS instance by a CloudFormation stack deployment

We are successfully spinning up an AWS CloudFormation stack that includes a serverless RDS PostgreSQL instance. Once the PostgreSQL instance is in place, we're automatically restoring a PostgreSQL database dump (in binary format) that was created using pg_dump on a local development machine onto the PostgreSQL instance just created by CloudFormation.
We're using a Lambda function (instantiated by the CloudFormation process) that includes a build of the pg_restore executable within a Lambda layer and we've also packaged our database dump file within the Lambda.
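A minimal sketch of the kind of handler described above, assuming the layer unpacks pg_restore under /opt/bin, the binary dump ships inside the function package, and connection details arrive as environment variables (all of these paths and names are placeholders):

```python
import os
import subprocess

# Assumptions (placeholders): the layer unpacks pg_restore to /opt/bin, the
# binary dump ships in the function package as dump.pgdump, and connection
# details are injected as environment variables by the CloudFormation stack.
PG_RESTORE = "/opt/bin/pg_restore"
DUMP_FILE = os.path.join(os.path.dirname(__file__), "dump.pgdump")


def handler(event, context):
    env = dict(os.environ, PGPASSWORD=os.environ["DB_PASSWORD"])
    subprocess.run(
        [
            PG_RESTORE,
            "--host", os.environ["DB_HOST"],
            "--username", os.environ["DB_USER"],
            "--dbname", os.environ["DB_NAME"],
            "--no-owner",
            "--clean",
            DUMP_FILE,
        ],
        check=True,  # raise if pg_restore exits non-zero
        env=env,
    )
    return {"status": "restore complete"}
```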
The above approach seems complicated for something that has presumably been solved many times, but Google searches have revealed almost nothing that corresponds to our scenario. We may be thinking about our situation in the wrong way, so please feel free to offer a different approach (e.g., is there a CodePipeline/CodeBuild approach that would automate everything?). Our preference would be to stick with the AWS toolset as much as possible.
This process will be run every time we deploy to a new environment (e.g., development, test, stage, pre-production, demonstration, alpha, beta, production, troubleshooting), potentially per release, as part of our CI/CD practices.
Does anyone have advice or a blog post that illustrates another way to achieve our goal?
If you have provisioned everything via IaC (Infrastructure as Code), most of the time-saving is already done and you should be able to replicate your infrastructure to other accounts/stacks via different roles or profiles in your AWS credentials and config files by passing the --profile some-profile flag. I would recommend AWS SAM (Serverless Application Model) over CloudFormation, though, as I find I only need to write roughly half the code (roles and policies are mostly created for you) and get much better and faster feedback via the console. I would also recommend sam sync (it is currently in beta but worth using) so you don't need to create a change set on code updates; only the code is updated, so deploys take 3-4 seconds. Some AWS SAM examples here, check both videos and patterns: https://serverlessland.com (official AWS site)
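The same profile mechanism is available from Python via boto3 if your tooling lives there; a small sketch with a placeholder profile name:

```python
import boto3

# "some-profile" is a named profile in ~/.aws/credentials / ~/.aws/config,
# the same one you would pass to the CLI as --profile some-profile.
session = boto3.Session(profile_name="some-profile")
cloudformation = session.client("cloudformation")

# For example, list the stacks visible to that account/role.
for stack in cloudformation.describe_stacks()["Stacks"]:
    print(stack["StackName"], stack["StackStatus"])
```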
In terms of the restore of RDS, I'd probably create a base RDS instance, take a snapshot of it, and restore all other RDS instances from snapshots rather than manually creating them every time. You are able to copy snapshots, and in fact automate backups to snapshots cross-account (and cross-region if required), which should be a little cleaner.
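A sketch of that snapshot route with boto3 for an Aurora Serverless (cluster-level) setup; all identifiers are placeholders, and for a non-cluster instance the *_db_instance_* equivalents apply:

```python
import boto3

rds = boto3.client("rds")

# Snapshot the base cluster (placeholder identifiers).
rds.create_db_cluster_snapshot(
    DBClusterSnapshotIdentifier="base-postgres-snapshot",
    DBClusterIdentifier="base-postgres-cluster",
)

# ...wait for the snapshot to reach the 'available' state, then restore a
# fresh cluster for the new environment from it.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="dev1-postgres-cluster",
    SnapshotIdentifier="base-postgres-snapshot",
    Engine="aurora-postgresql",
)
```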
There is also a clean solution for replicating data across AWS accounts in near real time using AWS DMS (Database Migration Service) and CDC (Change Data Capture). The process is: you have a source DB, a target DB, and a 'replication instance' (e.g. an EC2 micro) that monitors your source DB and replicates it out to other instances, so you always stay in sync on data. For example, if you have several devs working on separate 'stacks' so they don't pollute each other's logs, you can seamlessly replicate data and changes out to each of them from one DB. This works with the source DB and destination DB both already in AWS, but you can of course also use AWS DMS to migrate a local DB over to AWS. Some more info here: https://docs.aws.amazon.com/dms/latest/sbs/chap-manageddatabases.html
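A hedged sketch of the DMS piece with boto3, assuming the source/target endpoints and the replication instance already exist; the ARNs and the table-selection rule are placeholders:

```python
import boto3

dms = boto3.client("dms")

# Placeholder ARNs; the source/target endpoints and the replication instance
# are assumed to exist already. The mapping below selects every table in the
# public schema.
TABLE_MAPPINGS = """{
  "rules": [{
    "rule-type": "selection",
    "rule-id": "1",
    "rule-name": "include-public-schema",
    "object-locator": {"schema-name": "public", "table-name": "%"},
    "rule-action": "include"
  }]
}"""

dms.create_replication_task(
    ReplicationTaskIdentifier="dev1-cdc-replication",
    SourceEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # initial copy, then ongoing change data capture
    TableMappings=TABLE_MAPPINGS,
)
```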

How to deploy Infrastructure as Code on AWS

I have a question regarding infrastructure as code on AWS.
I'm wondering how to do this (the process of deploying) and also why it is an efficient method for managing architecture. Are there other methods that should be considered instead?
I am looking to deploy this for a startup I am working for and need assistance in getting this going. Any help is appreciated.
Thank you.
From What is AWS CloudFormation?:
AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; AWS CloudFormation handles all of that.
So, instead of manually creating each bit of architecture (network, instances, queues, storage, etc), you can define them in a template and CloudFormation will deploy them. It is smart enough to mostly know the correct order of creation (eg creating the network before creating an Amazon EC2 instance within the network) and it can also remove resources when the 'stack' is no longer required.
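To make that concrete, here is a minimal illustration (not part of the original answer) of a single-resource template deployed from Python with boto3; the stack and bucket names are placeholders:

```python
import boto3

# A deliberately tiny template: one S3 bucket. Real templates typically
# describe networks, instances, queues, databases and their dependencies.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="example-stack",   # placeholder stack name
    TemplateBody=TEMPLATE,
)
cloudformation.get_waiter("stack_create_complete").wait(StackName="example-stack")
```

Deleting the stack later removes the resources it created, which is the clean-up behaviour described above.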
Other benefits:
The template effectively documents the infrastructure
Infrastructure can be checked into a source code repository, and versioned
Infrastructure can be repeatedly and consistently deployed (eg Test environment matches Production environment)
Changes can be made to the template and CloudFormation can update the 'stack' by just deploying the changes
There are tools (eg https://former2.com/) that can generate the template from existing infrastructure, or just create it from code snippets taken from the documentation.
Here's an overview: Simplify Your Infrastructure Management Using AWS CloudFormation - YouTube
There are also tools like Terraform that can deploy across multiple cloud services.

AWS CLI vs Console and CloudFormation stacks

Is there any known downside to creating resources on AWS through the CLI? Is one method more reliable, easier, less error-prone, more widely accepted, or otherwise recommended over the other? While setting up recurring scripts, is there a reason why I would want to use CloudFormation or the AWS Console over the AWS CLI to run commands directly?
For example, if I were to create an ECS Fargate Task Definition, is there any reason why I might want to use AWS CloudFormation or the Console over the AWS CLI? CLI syntax is straightforward and easy to use, and there are a few things (like setting up event rules/targets for a Fargate task specifically) that are not supported via CloudFormation yet.
The AWS CLI and AWS CloudFormation are two different tools that can be used to create infrastructure on AWS. The CLI is more powerful and offers finer-grained control than CloudFormation. CloudFormation makes it very easy to use YAML or JSON text files that can describe an entire enterprise in the cloud.
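As a rough illustration of the direct-call side (a sketch, not from the original answer; every name, ARN and image below is a placeholder), registering a Fargate task definition through the API with boto3 mirrors what the equivalent CLI command does:

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder family, role ARN and image; these mirror the flags you would
# pass to `aws ecs register-task-definition` on the CLI.
ecs.register_task_definition(
    family="example-fargate-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
)
```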
One of the strong benefits of CloudFormation is the automatic support for rolling back changes if anything fails while deploying a stack. The CLI in comparison would require you to figure out the details of what went wrong and how to get back to where your state was. Updating infrastructure using CloudFormation is another benefit. Make the change in the template and update the stack.
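A sketch of that update workflow with boto3 (stack name, template path and capabilities are placeholders); if the update fails partway through, CloudFormation rolls the stack back to its previous state on its own:

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Read the edited template and push it as a stack update (placeholder names).
with open("infrastructure.yaml") as f:
    template_body = f.read()

cloudformation.update_stack(
    StackName="production-stack",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # only needed if the template touches IAM
)

# A failed update is rolled back to the previous working state automatically.
cloudformation.get_waiter("stack_update_complete").wait(StackName="production-stack")
```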
For small setups, using the CLI is fine. However, once you get past launching an EC2 instance and start building VPCs, instances, key pairs, security groups, RDS, etc., you will find that the CLI has some real limitations: it is too manual a process, not easily repeatable, difficult to put into version control, and so on.
If you are constantly building, testing and deleting complex setups, CloudFormation is absolutely one of the best tools from AWS. Note that there are a number of third party solutions that have a huge number of followers such as Bamboo, Octopus, Jenkins, Chef, etc.
If your job is SysOps or DevOps then you absolutely want to master the CLI and CloudFormation. These are amazing tools for working with AWS. Also master Beanstalk, maybe OpsWorks and one of the third party tools like Jenkins.

Docker for AWS vs pure Docker deployment on EC2

The purpose is production-level deployment of an 8-container application, using Swarm.
It seems (ECS aside) we are faced with two options:
Use the so-called docker-for-aws that does (Swarm) provisioning via a CloudFormation template.
Set up our VPC as usual, install docker engines, bootstrap the swarm (via init/join etc) and deploy our application in normal EC2 instances.
Is the only difference between these two approaches the swarm bootstrap performed by docker-for-aws?
Any other benefits of docker-for-aws compared to a normal AWS VPC provisioning?
Thx
If you need portability across different cloud providers, go with the AWS CloudFormation template provided by the Docker team. If you only need to run on AWS, ECS should be fine, but you will need to spend a bit of time figuring out how service discovery works there. The benefit of Swarm is that they made it fairly simple: you access your services via their service names, as if they were DNS names, with built-in load balancing.
It's fairly easy to automate new environment creation with it, and if you need to go to, say, Azure or Google Cloud later, you simply use the template for that provider to get your Docker cluster ready.
The Docker team has put quite a few things into that template, and you really don't want to re-create them yourself unless you have to. For instance, if you don't use static IPs for your infrastructure (a fairly typical scenario) and one of the managers dies, you can't just restart it; you have to manually re-join it to the cluster. Docker for AWS handles that by syncing IPs via DynamoDB and uses other provider-specific techniques to make failover/recovery work smoothly. Another example is logging: it pushes your logs automatically into CloudWatch, which is very handy.
A few tips on automating your environment provisioning if you go with Swarm template:
Use some infrastructure automation tool to create a VPC per environment, and use a template provided by that tool so you don't write too much yourself. Using a separate VPC keeps each environment well isolated and easier to work with, with less chance of screwing something up. You're also likely to add more elements into those environments later, such as RDS; if you control your VPC creation it's easier to do that and keep all related resources under the same VPC (say, the DEV1 environment's DB lives in the DEV1 VPC).
Then run the AWS CloudFormation template provided by Docker to provision a Swarm cluster within this VPC (they have a separate template for deploying into an existing VPC); a rough boto3 sketch follows below.
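A very rough boto3 sketch of step 2 (the template URL and every parameter key below are hypothetical placeholders; take the real ones from the Docker-provided "existing VPC" template and from the VPC created in step 1):

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder URL and hypothetical parameter keys: take the real ones from the
# Docker-provided template and from the VPC created in step 1.
cloudformation.create_stack(
    StackName="dev1-swarm",
    TemplateURL="https://example.com/docker-for-aws-existing-vpc.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "Vpc", "ParameterValue": "vpc-0123456789abcdef0"},  # hypothetical key
        {"ParameterKey": "KeyName", "ParameterValue": "dev1-keypair"},       # hypothetical key
        {"ParameterKey": "ManagerSize", "ParameterValue": "3"},              # hypothetical key
    ],
    Capabilities=["CAPABILITY_IAM"],
)
```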
My preference for automation is Terraform. It lets me describe the desired state of the infrastructure rather than how to achieve it.
I would say no, there are basically no other benefits.
However, if you want to achieve all or several of the things that the docker-for-aws template provides, I believe your second option should contain a bit more.
E.g.
Logging to CloudWatch
Setting up EFS for persistence/sharing
Creating subnets and route tables
Creating and configuring elastic load balancers
Basic auto scaling for your nodes
and probably more that I do not recall right now.
The template also ingests a bunch of information about resources related to your EC2 instances and makes it readily available to all Docker services.
I have been using the docker-for-aws template at work and have grown to appreciate a lot of what it automates. And what I do not appreciate I change, with the official template as a base.
I would go with ECS over a roll-your-own solution. Unless your organization has the effort available to re-engineer the services and integrations AWS offers as part of its managed offerings, you would be artificially painting yourself into a corner for future changes. "Do not re-invent the wheel" comes to mind here.
Basically what @Jonatan states. Building the solutions to integrate what is already available is... a trial of pain when you could be working on other parts of your business / application.

Best practice for reconfiguring and redeploying on AWS autoscalegroup

I am new to AWS (Amazon Web Services) as well as to our own custom boto-based Python deployment scripts, but wanted to ask for advice or best practices for a simple configuration management task. We have a simple web application with configuration data for several different backend environments, controlled by a Java system property defined with a -D command-line flag.
The current procedure requires python scripts to completely destroy and rebuild all the virtual infrastructure (load balancers, auto scale groups, etc.) to redeploy the application with a change to the command line parameter. On a traditional server infrastructure, we would log in to the management console of the container, change the variable, bounce the container, and we're done.
Is there a best practice for this operation on AWS environments, or is the complete destruction and rebuilding of all the pieces the only way to accomplish this task in an AWS environment?
It depends on which resources you have to change. AWS is evolving every day at a fast pace. I would suggest you take a look at the AWS API for the resources you need to deal with and check whether you can change a resource without destroying it.
For example: today you cannot change a launch configuration once it is created; you must delete it and create it again with the new configuration. And if you have an Auto Scaling group attached to that launch configuration, you will have to update it to point at the new one, and so on.
IMHO I see no problems with your approach, but as I believe there is always room for improvement, I think you can refactor it with the help of the AWS API documentation.
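To make that concrete, a sketch of the replace-rather-than-modify pattern with boto3 (not from the original answer; all names, the AMI ID and the user data value are placeholders):

```python
import base64
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configurations are immutable: create a new one carrying the changed
# user data (all values are placeholders), then point the group at it.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-app-v2",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.small",
    # The Auto Scaling API expects user data Base64-encoded.
    UserData=base64.b64encode(b"-Dbackend.env=staging").decode(),
)
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    LaunchConfigurationName="web-app-v2",
)
# The old launch configuration can be deleted once nothing references it.
autoscaling.delete_launch_configuration(LaunchConfigurationName="web-app-v1")
```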
HTH
I think I found the answer to my own question. I know the interface to AWS is constantly changing, and I don't think this functionality is available yet in the Python boto library, but the ability I was looking for is best described as "Modifying Attributes of a Stopped Instance", with --user-data being the attribute in question. Documentation for performing this action using HTTP requests and the command-line interface to AWS can be found here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_ChangingAttributesWhileInstanceStopped.html
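With the current boto3 SDK this maps onto modify_instance_attribute; a sketch, with the instance ID and user data value as placeholders (the instance has to be stopped first):

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

# The instance must be stopped before its user data can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Swap in the new user data (placeholder value).
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    UserData={"Value": b"-Dbackend.env=staging"},
)

ec2.start_instances(InstanceIds=[instance_id])
```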