Is it good practice for CloudFormation deployments to be done via CI/CD? I am currently considering the safety and performance aspects.
If someone accidentally removed a DB, for example, CloudFormation will just remove it ... There could be code reviews to prevent this ... but I'm just wondering if it's a good practice.
With a serverless application there may be no choice? Otherwise it's too manual to deploy everything.
Another observation is performance: CloudFormation is rarely changed, but it will need to run anyway if it's part of the CI/CD process. Is there any way to speed this up?
Definitely.
You cannot achieve CI/CD in the true sense until you do that.
Consider a scenario where, for a particular release, you added a messaging queue (AWS SQS). If you haven't integrated your CloudFormation with your CI/CD, then the code that reads from and writes to SQS gets deployed to your environment but fails to do either operation, for the simple reason that the queue does not exist: the CloudFormation change that would have created it never executed. You eventually end up with a half-baked environment.
To avoid this pitfall, it is highly recommended that you execute your CloudFormation as part of your CI/CD.
Regarding your concern "If someone accidentally removed a DB for example, CloudFormation will just remove it": this can happen even with application code. For example, a developer puts in some test code to clean up the database, forgets to remove it, and that code gets executed in the production environment. Ideally this would not happen, because of the guard rails of manual testing, automated testing and unit tests. In the same spirit, treat CloudFormation as any other code (in fact, CloudFormation is best described as Infrastructure as Code) which should be tested thoroughly. For details on unit testing CloudFormation, see "Is there a way to unit test AWS Cloudformation template".
Yes, absolutely. If you treat Infrastructure as Software (IaS), then you should be able to apply modern CI/CD software practices like syntax checking, unit testing, functional testing, verification, automated testing and deployment, etc. to your CloudFormation templates as well.
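As a small example of the "syntax checking" part, a pipeline step could call the CloudFormation ValidateTemplate API before any deployment is attempted. A minimal sketch in TypeScript, assuming the template file is named `template.yaml` and the region is arbitrary:

```typescript
import { readFileSync } from "fs";
import {
  CloudFormationClient,
  ValidateTemplateCommand,
} from "@aws-sdk/client-cloudformation";

// Read the template the pipeline is about to deploy.
const templateBody = readFileSync("template.yaml", "utf8");

const cfn = new CloudFormationClient({ region: "us-east-1" });

async function validate(): Promise<void> {
  // ValidateTemplate fails fast on malformed templates,
  // so the pipeline can stop before attempting a deployment.
  const result = await cfn.send(
    new ValidateTemplateCommand({ TemplateBody: templateBody })
  );
  console.log("Declared parameters:", result.Parameters ?? []);
}

validate().catch((err) => {
  console.error("Template validation failed:", err);
  process.exit(1);
});
```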
AWS provides a best practices solution here:
https://aws.amazon.com/answers/devops/aws-cloudformation-validation-pipeline/
The solution provides this introduction:
> "Many Amazon Web Services (AWS) customers use AWS CloudFormation to manage their infrastructure as code and to help deploy AWS resources in a controlled and predictable way. DevOps teams are commonly tasked with validating AWS CloudFormation templates before launch to ensure they follow industry best practices and satisfy company-specific business and governance requirements. These teams often leverage AWS Developer Tools, which is a set of services designed to help DevOps professionals follow continuous integration and continuous delivery (CI/CD) practices and create their own pipelines to automatically build, validate, and deploy code."
Related
I am not quite clear on the best practices related to using CDK to deploy private Github repos to AWS. I understand that a pipeline should be created by CDK and the pipeline should invoke CodeDeploy to deploy the assets, but beyond that the details are murky.
I also want to understand if for this use case it would make sense to have a separate CDK repo which is responsible for the infrastructure for the entire backend of my project, or if it would make more sense to have CDK code included in each individual component repo. As I will be utilizing a microservice/cell based approach for building out components, the overhead required in adding CDK configuration for each component might be substantial.
You can think of CDK as a compiler that takes a given language and 'compiles' it down to CloudFormation templates. Those templates are then uploaded to CloudFormation by the CDK framework and run for you when you execute the deploy command.
So, to answer your question more directly: if you can figure out how to do it in CloudFormation, you can do it. That may involve spinning up a CodeBuild project and running a script that executes some API calls, or prepares a package for an EC2 server that then uses that package in the next step, or any number of things.
But remember that CDK synths its CloudFormation template all at once, and is only creating the template. It does not run any scripts that may be part of your CodeBuild projects, and it does not 'wait' for certain things to complete, because it isn't doing anything like that. If you have a sequence of events that needs to occur, you want to use CodePipeline to orchestrate those events for you - and you can certainly set up your CodePipeline with CDK!
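On that last point, here is a minimal sketch of defining the pipeline itself with CDK's `pipelines` module (TypeScript; the repository, branch, and build commands are placeholders, and a GitHub connection or token is assumed to exist):

```typescript
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { CodePipeline, CodePipelineSource, ShellStep } from "aws-cdk-lib/pipelines";

export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The pipeline itself is defined as code and deployed through CloudFormation.
    const pipeline = new CodePipeline(this, "Pipeline", {
      synth: new ShellStep("Synth", {
        // Placeholder repository and branch.
        input: CodePipelineSource.gitHub("my-org/my-repo", "main"),
        commands: ["npm ci", "npm run build", "npx cdk synth"],
      }),
    });

    // Application stages (dev, prod, ...) would be added with pipeline.addStage(...).
  }
}
```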
As for overhead: maybe at first. But trust me when I say that, with experience, it becomes very quick and easy to generate a CDK stack for a given microservice, and it's super handy. Being able to spin up an ad-hoc, on-demand testing environment is super useful. Being able to deploy individual stacks on demand and make quick changes with just a line of code is handy as all get out. Having a single source of code for both your prod and development environments, where a change made once is automatically reflected in each on the next deployment, is super handy.
CDK is a very powerful tool - but it is a very low-level one. It creates the template that will create your resources. That's it. If your resources need to do something after being created in order for something else to happen, you have to make use of other services to orchestrate that (CodePipeline, Step Functions, CloudWatch Events, etc.).
Other devs and I are currently testing/building Lambda functions for cleaning data that flows from S3 -> SQS -> a data router Lambda (Python), a DynamoDB rules engine, and then a text processor in Lambda. We're currently working on the AWS platform, but I'm trying to test this part of the data pipeline locally.
Ideally I'd simulate S3 and SQS, dump the zip files, and run them through the Lambda function. Currently toying with the SAM CLI and Visual Studio, but nothing's stuck yet. Any tips?
There are several ways you can approach (local) testing of your AWS application:
1. Use unit tests for the different parts of your "pipeline", mocking the other parts like DynamoDB, SQS, etc. (see the sketch after this list).
2. Use something like LocalStack.
3. Every developer has their own "developer environment" in AWS. You could for example prefix every resource with the name of the developer (john_processing_lambda). You deploy to AWS and run integration tests from your local machine. You can achieve something like this with tools like Terraform, which allow you to "dynamically" name resources and, for example, add prefixes with the developer's name.
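As an illustration of option 1, a handler can be unit-tested with its AWS calls mocked out. A minimal sketch in TypeScript using Jest and the `aws-sdk-client-mock` package; the handler module, its event shape, and the assertion are hypothetical:

```typescript
import { mockClient } from "aws-sdk-client-mock";
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";
import { handler } from "../src/dataRouter"; // hypothetical Lambda handler under test

// Intercept every DynamoDB call made through the SDK v3 client.
const ddbMock = mockClient(DynamoDBClient);

beforeEach(() => ddbMock.reset());

test("routes an SQS record into DynamoDB", async () => {
  ddbMock.on(PutItemCommand).resolves({});

  // Shape of an SQS-triggered Lambda event, trimmed to what the handler needs.
  await handler({ Records: [{ body: JSON.stringify({ id: "123" }) }] });

  // The handler should have written exactly one item.
  expect(ddbMock.commandCalls(PutItemCommand)).toHaveLength(1);
});
```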
Personally, I don't find running "AWS on your local machine" via Docker containers or tools like LocalStack really satisfying. We had the best results with a combination of option 1 and option 3. Both have the upside that you can use the same tests in your CI/CD pipeline.
Furthermore, not running in the actual cloud (AWS) always bears the risk of "forgetting" something. Most notably IAM permissions. So everything runs fine on your local machine, but then it does not work on AWS.
Deploying a separate environment for every developer, so that they can play around with the actual resources and run tests directly in AWS, would be my recommendation. This paired with solid unit tests should yield the best results.
The downside of developer environments in AWS is that a developer has to deploy their code to AWS every time they want to test something. So making deployments fast is important. I found that with sufficient experience, you don't need to deploy that often anymore and this becomes less of an issue. Nevertheless, developer satisfaction in your team is important, so make sure to make this as smooth as possible.
I recently started working with AWS and IaC. I'm using CloudFormation to provision my AWS resources, but I discovered that AWS provides both an SDK and a CDK to let you provision resources programmatically instead of in plain JSON/YAML.
But based on the documentation I did not really understand how they differ. Can someone explain how they differ and for which use cases you should use which?
CDK: A framework to model and provision your infrastructure or stack. A stack can consist of, for example, a DynamoDB table, an S3 bucket, a Lambda function, an API Gateway, etc. It lets you write code to create infrastructure in AWS; this is also called Infrastructure as Code.
Check here
SDK: These are the code libraries provided by Amazon in various languages, like Java, Python, PHP, JavaScript, TypeScript, etc. These libraries help you interact with AWS services (like creating data in DynamoDB) that you create either through CDK or the console. SDKs simplify using AWS services in your application with an API.
Check here
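To make that concrete, here is a small hypothetical example of using the SDK (TypeScript, SDK v3) to write an item into a DynamoDB table that was provisioned elsewhere (via CDK or the console); the table name and item are placeholders:

```typescript
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "eu-west-1" });

async function saveUser(): Promise<void> {
  // The SDK interacts with an existing table; it does not create infrastructure.
  await client.send(
    new PutItemCommand({
      TableName: "Users", // placeholder table created elsewhere (CDK, console, ...)
      Item: {
        userId: { S: "42" },
        name: { S: "Alice" },
      },
    })
  );
}

saveUser().catch(console.error);
```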
The AWS SDK is a library primarily meant to ease access to AWS services by handling data (de)serialization, credentials management, failure handling, etc. for you. For specific scenarios you could perhaps use the AWS SDK as an infrastructure-as-code tool, but it would be cumbersome, as that is not the intended usage of the library.
According to https://docs.aws.amazon.com/whitepapers/latest/develop-deploy-dotnet-apps-on-aws/infrastructure-as-code.html, the dedicated tools for IaC are AWS CloudFormation and AWS CDK.
AWS CDK is an abstraction on top of CloudFormation. CDK code is in fact transformed into CloudFormation definitions when it is synthesized.
The difference is best described with an example: imagine that for each Lambda function in your stack you want to create an error CloudWatch alarm and connect it to an SNS topic.
With CloudFormation you would either a) write a largely similar bunch of YAML/JSON definitions for each Lambda function to ensure the monitoring, b) use nested stack templates, or c) use CloudFormation modules.
With CDK you can write a generic code construct - a class or method - which creates the alarm for a given Lambda function and creates the SNS alarm action for a given topic.
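A rough sketch of such a construct in TypeScript (the helper name, threshold and period are arbitrary choices, not anything CDK prescribes):

```typescript
import { Duration } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as cloudwatch from "aws-cdk-lib/aws-cloudwatch";
import * as cw_actions from "aws-cdk-lib/aws-cloudwatch-actions";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as sns from "aws-cdk-lib/aws-sns";

// Reusable piece of IaC: call this once per Lambda function in the stack.
export function addErrorAlarm(
  scope: Construct,
  id: string,
  fn: lambda.IFunction,
  topic: sns.ITopic
): cloudwatch.Alarm {
  const alarm = new cloudwatch.Alarm(scope, id, {
    metric: fn.metricErrors({ period: Duration.minutes(5) }),
    threshold: 1,
    evaluationPeriods: 1,
    treatMissingData: cloudwatch.TreatMissingData.NOT_BREACHING,
  });
  // Notify the shared SNS topic whenever the alarm fires.
  alarm.addAlarmAction(new cw_actions.SnsAction(topic));
  return alarm;
}
```

Calling this helper once per function then synthesizes the repetitive CloudFormation definitions for you.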
In other words, CDK helps you generalize and re-use your IaC in a way that is very familiar from how you develop your business code. The code is shorter and more readable than the CloudFormation definitions.
The difference is even more remarkable when you need to set up similar resources in different AWS regions and when you have a different AWS account per environment. You can manage all AWS accounts and regions with a single CDK codebase.
Some background first: CloudFormation is Amazon's solution for an "Infrastructure as Code" approach to managing the definition, provisioning and deployment of a bunch of resources across accounts/regions. This is done by using their declarative YAML/JSON-based template language to define it all, and then executing the templates through various means (console, CLI, APIs...). More info:
white paper: https://docs.aws.amazon.com/whitepapers/latest/develop-deploy-dotnet-apps-on-aws/infrastructure-as-code.html
faq: https://aws.amazon.com/cloudformation/faqs/
There are other popular IaC solutions or tools to help achieve it more easily out there, such as Terraform and Kubernetes (container orchestration that also uses declarative templates to define desired states).
Potential benefits of IaC: At a high level, you can better track & audit your infra, reuse definitions/processes, make all your changes in a more consistent manner, faster thanks to all the automation and assurances you can get with an infra-as-code approach. You may be familiar with these as mentioned in previous answers and more, such as:
version controlling your infrastructure definitions,
more efficient and logically complex ways of constructing templates,
ability to write tests against them,
do diffs (see "change sets") before making real infra changes with the templates (a small sketch follows this list),
detect when live infra differs from your definitions,
automate rollbacks,
and lots of other state management assistance through a framework like CF that might be needed when performing regular ops duties.
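To make the "change sets" item above concrete, a change set can be created and inspected programmatically before anything is executed. A minimal sketch in TypeScript using the SDK's CloudFormation client; the stack and change-set names are placeholders:

```typescript
import {
  CloudFormationClient,
  CreateChangeSetCommand,
  DescribeChangeSetCommand,
  waitUntilChangeSetCreateComplete,
} from "@aws-sdk/client-cloudformation";

const cfn = new CloudFormationClient({ region: "us-east-1" });

// Preview what a template update would change, without touching the live stack.
async function previewChanges(templateBody: string): Promise<void> {
  const stackName = "my-app";          // placeholder stack name
  const changeSetName = "preview-123"; // placeholder change-set name

  await cfn.send(
    new CreateChangeSetCommand({
      StackName: stackName,
      ChangeSetName: changeSetName,
      TemplateBody: templateBody,
    })
  );

  // Wait for the change set to finish being computed before reading it.
  await waitUntilChangeSetCreateComplete(
    { client: cfn, maxWaitTime: 300 },
    { StackName: stackName, ChangeSetName: changeSetName }
  );

  const result = await cfn.send(
    new DescribeChangeSetCommand({ StackName: stackName, ChangeSetName: changeSetName })
  );
  // Each entry describes an Add / Modify / Remove against a live resource.
  console.log(result.Changes);
}
```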
CDK:
This is for helping to automate CloudFormation as part of an IaC approach to provisioning and deploying resources. It lets you use various popular programming languages to help with the creation, testing, and management of your CF setup. Some of AWS's motivations: "YAML is an excellent format for describing the desired state of your cluster, but it does not have primitives for expressing logic and reusable abstractions." "AWS CDK uses the familiarity and expressive power of programming languages for modeling your applications."
More info: https://docs.aws.amazon.com/cdk/v2/guide/home.html
However, Amazon knows about other solutions, and happily points them out on the main CDK page now, downplaying its original connection to CF. You don't need to use CloudFormation if you don't want to; specifically, they mention you can use the same CDK constructs with the help of:
cdktf for Terraform, maintained by Terraform's creators, HashiCorp
cdk8s for Kubernetes by AWS. re: “We realized this was exactly the same problem our customers had faced when defining their applications through CloudFormation templates, a problem solved by the AWS Cloud Development Kit (AWS CDK), and that we could apply the same design concepts from the AWS CDK to help all Kubernetes users.”
SDK:
AWS has an API for all of their services, and the various SDKs give you access to them. For example, I can use AWS’s Java SDK to manage an API Gateway. If I wanted to script some custom deployment process, I could do so with the SDK, managing all the state, etc. myself. You could probably even re-implement the CloudFormation service with the various underlying APIs... The APIs have varying levels of documentation though. E.g. CloudFormation Java APIs are only mentioned in the raw API reference, not the friendlier Developer Guide.
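For example (shown in TypeScript rather than Java, to keep the snippets here in one language), a custom deployment script could call the API Gateway API directly; the REST API id and stage name are placeholders:

```typescript
import {
  APIGatewayClient,
  CreateDeploymentCommand,
} from "@aws-sdk/client-api-gateway";

const apigw = new APIGatewayClient({ region: "us-east-1" });

async function deployStage(): Promise<void> {
  // Push the current API configuration to a stage; all state handling is on you.
  await apigw.send(
    new CreateDeploymentCommand({
      restApiId: "abc123def4", // placeholder REST API id
      stageName: "prod",
    })
  );
}

deployStage().catch(console.error);
```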
I find that, for me, the difference is that CDK codifies the CloudFormation JSON/YAML. The first reaction is "great, okay, it's in code", but the real benefit on the code side of things is that you can write unit tests against that code. You therefore get to build a sense of security - an insurance policy - around the services provisioned by the CDK.
There are other ways to test CloudFormation; however, with a dev background, this feels more comfortable.
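For instance, a tiny sketch of such a unit test using the CDK assertions module (TypeScript with Jest; the stack class and the asserted queue property are hypothetical):

```typescript
import { App } from "aws-cdk-lib";
import { Template } from "aws-cdk-lib/assertions";
import { MyServiceStack } from "../lib/my-service-stack"; // hypothetical stack under test

test("stack defines an encrypted queue", () => {
  const app = new App();
  const stack = new MyServiceStack(app, "TestStack");

  // Synthesize the stack to its CloudFormation template and assert on it.
  const template = Template.fromStack(stack);
  template.hasResourceProperties("AWS::SQS::Queue", {
    SqsManagedSseEnabled: true, // hypothetical property the stack is expected to set
  });
});
```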
It's more of an open question and I'm just hoping for any opinions and suggestions. I have AWS in mind but it probably can relate also to other cloud providers.
I'd like to provision an IaC solution that will be easily maintainable and cover all the requirements of a modern serverless architecture. Terraform is a great tool for defining the infrastructure, has many official resources and stable support from the community. I really like its syntax and the whole concept of modules. However, it's quite bad for working with Lambdas. It also raises another question: should a code change be deployed using the same flow as an infrastructure change? Where do you draw the line between code and infrastructure?
On the other hand, the Serverless Framework allows for super easy development and deployment of Lambdas. It's strongly opinionated when it comes to the usage of resources, but it comes with so many out-of-the-box features that it's worth it. It shouldn't really be used for defining the whole infrastructure.
My current approach is to define any shared resources using Terraform and any domain-related resources using Serverless. Here I have another issue that is related to my previous questions: deployment dependency. The simple scenario: Lambda.1 adds users to Cognito (shared resource) which has Lambda.2 as a trigger. I have to create a custom solution for managing the deployment order (Lambda.2 has to be deployed first, etc.). It's possible to hook up the Serverless Framework deployment into Terraform but then again: should the code deployment be mixed with infrastructure deployment?
It is totally possible to mix the two and I have had to do so a few times. How this looks actually ends up being simpler than it seems.
First off, if you think about whatever you do with the Serverless Framework as developing microservices (without the associated infrastructure management burden), that takes it one step in the right direction. Then, what you can do is decide that everything required to make that microservice work internally is defined within that microservice as part of the service's configuration in the serverless.yml, whether that be DynamoDB tables, Auth0 integrations, Kinesis streams, SQS, SNS, IAM permissions allocated to functions, etc. Keep all of that defined as a part of that microservice. Terraform not required.
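As an illustration (not a prescription), a trimmed-down service definition using the Serverless Framework's TypeScript config format, `serverless.ts`, might look like the following; the service, handler, and table names are all hypothetical:

```typescript
import type { AWS } from "@serverless/typescript";

const serverlessConfiguration: AWS = {
  service: "users-service", // hypothetical microservice name
  frameworkVersion: "3",
  provider: {
    name: "aws",
    runtime: "nodejs18.x",
    // Function-level IAM permissions stay inside the service definition.
    iam: {
      role: {
        statements: [
          {
            Effect: "Allow",
            Action: ["dynamodb:PutItem"],
            Resource: ["arn:aws:dynamodb:*:*:table/UsersTable"], // placeholder ARN
          },
        ],
      },
    },
  },
  functions: {
    addUser: {
      handler: "src/addUser.handler",
      events: [{ httpApi: { path: "/users", method: "post" } }],
    },
  },
  // Internal resources (here a DynamoDB table) live with the service, not in Terraform.
  resources: {
    Resources: {
      UsersTable: {
        Type: "AWS::DynamoDB::Table",
        Properties: {
          BillingMode: "PAY_PER_REQUEST",
          AttributeDefinitions: [{ AttributeName: "userId", AttributeType: "S" }],
          KeySchema: [{ AttributeName: "userId", KeyType: "HASH" }],
        },
      },
    },
  },
};

module.exports = serverlessConfiguration;
```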
Now think about what that and other microservices might need to interact with more broadly. These things aren't critical for the service's internal operation but are critical for integration into the rest of the organisation's infrastructure. This includes things like deployment IAM roles used by the Serverless Framework services to deploy into CloudFormation, relational databases that have to be shared amongst multiple services and resources, networking elements (VPCs, security groups, etc.), monolithic clusters like Elasticsearch and Redis ... all of these elements are great candidates for definition outside of the Serverless Framework and work really well with Terraform.
Any service would then be able to connect to these Terraform-defined resources as needed, unlike hard associations such as Lambda functions triggered off of an API Gateway endpoint.
Hope that helps
We have a react mobile frontend and an AWS appsync backend (DynamoDB, step functions, lambdas, graphql, auth)
Is there an easy way to do this? We have an application and backend in production, and now want to make some changes to our GraphQL components (e.g. schema). Ideally, I would like to have an offline environment which mimics that deployed on AWS.
I found this: "Is there a way to test AppSync code locally and/or in CI/CD?", which didn't really have any clear answers. It seems setting up a duplicate environment on AWS (pretty much a staging environment) may be the way to go.
We don't use CloudFormation today (maybe we should?). For Lambdas, we have played around with Serverless a little, but had issues testing locally with authentication and, I think, DynamoDB. Ultimately, we just ended up using the AWS console to create components, and then something like Cloud9's IDE to build and debug before deploying to production. I don't like the fragmented dev experience. Lambdas weren't too bad because of Cloud9, but GraphQL doesn't seem to have an equivalent.
Eager to learn what the best practices are, and how best (and easy) it is to setup a good dev environment.
Thanks
If you already have a working Production schema and are looking to simulate a Dev environment, you would have to replicate it manually for now.
We recently launched the Amplify Console to specifically address best practices around CI/CD and managing your API across stages. A recommended practice would be to use the Amplify CLI, which internally uses CloudFormation nested stacks to simplify the process of creating and maintaining your AWS AppSync APIs. In addition, the Amplify CLI also gives you out-of-the-box scaffolding for your request/response mapping templates in CloudFormation with just a simple annotated schema.
You could use some of these tools as a recommended practice for maintaining cloud resources. We are also actively working towards enhancing the Developer Experience for some of these workflows.