I'm looking to use NestJS on my next project, but I am slightly put off by the lack of documentation regarding deployment practices and continuous deployment cycles. Ideally I would like to use something like cloud compute to automatically compile my project and deploy it as updates are pushed to a release branch. Anyone have advice regarding that?
This is a very broad question, as there are many ways to implement CI, deployment pipelines, and deployment strategies.
I would suggest taking a look at the developer tools in AWS, such as CodePipeline for pipeline creation and CodeBuild or Jenkins as build services. Look into Docker containers, and at deployment services such as Elastic Beanstalk (single- or multi-container), ECS, or plain CodeDeploy.
I would also suggest reading the AWS Blue/Green Deployments whitepaper, as it reviews the different deployment strategies.
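To make that concrete, here is a minimal AWS CDK (TypeScript) sketch of the kind of pipeline described above: CodePipeline watches a release branch, and CodeBuild builds the Docker image and pushes it to ECR. The repository owner/name, branch, and connection ARN are placeholders, and the deploy stage is left out for brevity; treat it as a starting point rather than a finished pipeline.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as actions from 'aws-cdk-lib/aws-codepipeline-actions';
import * as ecr from 'aws-cdk-lib/aws-ecr';

export class ReleasePipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // ECR repository that will hold the built images.
    const repo = new ecr.Repository(this, 'AppRepo');

    // CodeBuild project that builds and pushes the Docker image.
    const build = new codebuild.PipelineProject(this, 'DockerBuild', {
      environment: {
        buildImage: codebuild.LinuxBuildImage.STANDARD_5_0,
        privileged: true, // required for docker build inside CodeBuild
      },
      environmentVariables: {
        REPO_URI: { value: repo.repositoryUri },
      },
      buildSpec: codebuild.BuildSpec.fromObject({
        version: '0.2',
        phases: {
          pre_build: {
            commands: [
              // Log in to the ECR registry host (strip the repo path from the URI).
              'aws ecr get-login-password | docker login --username AWS --password-stdin ${REPO_URI%%/*}',
            ],
          },
          build: {
            commands: [
              'docker build -t $REPO_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION .',
              'docker push $REPO_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION',
            ],
          },
        },
      }),
    });
    repo.grantPullPush(build);

    // Pipeline: source (release branch) -> build.
    const sourceOutput = new codepipeline.Artifact();
    new codepipeline.Pipeline(this, 'ReleasePipeline', {
      stages: [
        {
          stageName: 'Source',
          actions: [
            new actions.CodeStarConnectionsSourceAction({
              actionName: 'GitHub',
              owner: 'my-org',       // placeholder
              repo: 'my-nest-app',   // placeholder
              branch: 'release',     // the release branch that triggers deployments
              connectionArn: 'arn:aws:codestar-connections:REGION:ACCOUNT:connection/ID', // placeholder
              output: sourceOutput,
            }),
          ],
        },
        {
          stageName: 'Build',
          actions: [
            new actions.CodeBuildAction({
              actionName: 'DockerBuild',
              project: build,
              input: sourceOutput,
            }),
          ],
        },
      ],
    });
  }
}
```

From there, a Deploy stage (CodeDeploy or an ECS deploy action, for example) can pick up the built image, which is what the Blue/Green whitepaper covers at the strategy level.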
Related
I am trying to automate the deployment of AWS ECS and couldn't find much information on how to do that, so I would like to see if there is any advice on what I can explore. Currently, we have an Azure DevOps pipeline that pushes the containerized image to ECR, and we then manually create the task definition in ECS and update the service afterwards. Is there any way I can automate this with an Azure DevOps release?
A bit open ended for a Stack Overflow style question, but the short answer is that there are a lot of AWS-native alternatives for this. This is an example that implements the blue/green pattern (it can be simplified with a more generic rolling-update deployment). If you are new to ECS you probably want to consider using Copilot. This is an entry-level blog that hints at how to deploy an application and build a pipeline for it.
There are so many options:
Docker Compose with the ECS CLI looks like the easiest solution
Terraform
CloudFormation (looks complex!)
Ansible
I am only interested in setting up a basic ECS Docker setup with an ELB and being able to easily update the Docker image version.
We all love technology here, but we're not all super geniuses when it comes to tech, so I'm looking to keep my setup as simple as possible. We run Jenkins, 2 NodeJS applications, and 2 Java applications in ECS, and I know it involves IAM, Security Groups, EBS, ELB, ECS Services/Tasks, and ECS Task Definitions, but that already gets complex quickly in CloudFormation.
What are good technologies that will let us use Docker, keep things simple, and not require us to be very intelligent to understand our own code?
I would suggest you start by trying to set up your pipeline using Terraform. Learning it will give you experience with infrastructure as code that is not tied to a single vendor.
Another possibility is to avoid writing CloudFormation directly and instead use the AWS CDK (https://docs.aws.amazon.com/cdk/latest/guide/home.html) as your IaC tool.
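As a rough illustration of how compact the CDK can be compared with hand-written CloudFormation, here is a minimal TypeScript sketch that stands up a Fargate service behind an Application Load Balancer; the image name, port, and sizing are placeholder assumptions.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

export class SimpleEcsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // VPC and ECS cluster (subnets, route tables, etc. are generated for you).
    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

    // Fargate service behind an Application Load Balancer.
    // The task definition, target group, listener, security groups,
    // and IAM roles are all created by the pattern.
    new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Service', {
      cluster,
      cpu: 256,              // placeholder sizing
      memoryLimitMiB: 512,
      desiredCount: 2,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry('my-org/my-node-app'), // placeholder image
        containerPort: 3000,
      },
    });
  }
}
```

Updating the image version then comes down to changing the tag and running cdk deploy, rather than editing task definitions by hand.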
Best regards
I've read about two approaches (there are probably more) for implementing continuous delivery pipelines in GCP:
Skaffold
Spinnaker + Container Builder
I've worked with both a little bit in Qwiklabs. If someone has real experience with both, could you please share the pros and cons of each compared to the other? Why did you choose one over the other?
Pipeline using Skaffold (from the docs https://skaffold.dev/docs/pipeline-stages/):
Detect source code changes
Build artifacts
Test artifacts
Tag artifacts
Render manifests
Deploy manifests
Tail logs & Forward ports
Cleanup images and resources
Pipeline using Spinnaker + Container Builder:
Developer:
Change code
Create a git tag and push to repo
Container Builder:
Detect new git tag
Build Docker image
Run unit tests
Push Docker image
Spinnaker (from the docs https://www.spinnaker.io/concepts/):
Detect new image
Deploy Canary
Cutover manual approval
Deploy PROD (blue/green)
Tear down Canary
Destroy old PROD
I have worked with both, and in my experience Skaffold is good only for local development and testing; if you want to scale to pre-production and production use cases, it is better to use a Spinnaker pipeline. Spinnaker provides advantages over Skaffold such as:
Sophisticated/complex deployment strategies: you can define strategies such as deploying service 1 before service 2, etc.
Multi-cluster deployments: UI-based deployments can easily be configured for multiple clusters.
Visualization: it provides a rich UI that shows the status of any deployment or pod across clusters, regions, namespaces, and cloud providers.
I'm not a real power user of either, but my understanding is that:
Skaffold is great for the dev environment, for developers (the build, test, deploy, debug loop).
Spinnaker is more oriented toward continuous delivery on automated platforms (CI/CD), which is why you can perform canary and blue/green deployments and the like; that is of little use during the development phase.
Skaffold is also oriented toward Kubernetes environments, whereas Spinnaker is more agnostic and can deploy elsewhere.
Skaffold is for fast local Kubernetes development: it handles the workflow for building, pushing, and deploying your application.
This makes it different from Spinnaker, which is more oriented towards CI/CD with full production environments.
I have a scenario and am looking for feedback and best approaches. We create and build our Docker images using Azure DevOps (VSTS) and push those images to our AWS repository. I can deploy those images just fine manually, but I would like to automate the process in a continuous deployment model. Is there an approach that uses CodePipeline with a build step to just create and zip the imagedefinitions.json file before it goes to the deploy step?
Or is there a better alternative that I am overlooking?
Thanks!
You can definitely use a build step (e.g. CodeBuild) to automate generating your imagedefinitions.json file; there's an example here.
You might also want to look at the recently announced CodeDeploy ECS deployment option. It works a little differently from the ECS deployment action but allows blue/green deployments via CodeDeploy. There's more information in the announcement and blog post.
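For the simpler ECS deploy action route, here is a rough AWS CDK (TypeScript) sketch of what that pipeline can look like: a CodeBuild action writes imagedefinitions.json, and the downstream EcsDeployAction consumes it. The container name "web", the IMAGE_URI variable, and the way the pipeline is sourced are assumptions to adapt to your setup, not a drop-in solution.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as actions from 'aws-cdk-lib/aws-codepipeline-actions';
import * as ecs from 'aws-cdk-lib/aws-ecs';

export interface EcsDeployPipelineProps extends StackProps {
  service: ecs.FargateService;          // the existing ECS service to update (assumption)
  sourceAction: codepipeline.IAction;   // however you choose to source the pipeline
  sourceOutput: codepipeline.Artifact;
}

export class EcsDeployPipelineStack extends Stack {
  constructor(scope: Construct, id: string, props: EcsDeployPipelineProps) {
    super(scope, id, props);

    const buildOutput = new codepipeline.Artifact();

    // CodeBuild step whose only job is to emit imagedefinitions.json
    // pointing at the image tag that was already pushed to ECR.
    const writeImageDefs = new codebuild.PipelineProject(this, 'WriteImageDefs', {
      buildSpec: codebuild.BuildSpec.fromObject({
        version: '0.2',
        phases: {
          build: {
            commands: [
              // "web" is a placeholder container name; IMAGE_URI is assumed to be
              // supplied as an environment variable on the project.
              'printf \'[{"name":"web","imageUri":"%s"}]\' "$IMAGE_URI" > imagedefinitions.json',
            ],
          },
        },
        artifacts: { files: ['imagedefinitions.json'] },
      }),
    });

    new codepipeline.Pipeline(this, 'DeployPipeline', {
      stages: [
        { stageName: 'Source', actions: [props.sourceAction] },
        {
          stageName: 'Build',
          actions: [
            new actions.CodeBuildAction({
              actionName: 'WriteImageDefinitions',
              project: writeImageDefs,
              input: props.sourceOutput,
              outputs: [buildOutput],
            }),
          ],
        },
        {
          stageName: 'Deploy',
          actions: [
            new actions.EcsDeployAction({
              actionName: 'DeployToEcs',
              service: props.service,   // registers a new task definition revision and updates the service
              input: buildOutput,       // must contain imagedefinitions.json
            }),
          ],
        },
      ],
    });
  }
}
```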
We are in the process of setting up a new release process in AWS. We are using Terraform with Elastic Beanstalk to spin up the hardware to deploy to (although the actual tools are irrelevant).
As Elastic Beanstalk does not support immutable deployments in Windows environments, we are debating whether to have a separate pipeline to deploy our infrastructure or to run Terraform on every code deployment.
The two things are likely to have different rates of churn, which feels like a good reason to separate them. It would also reduce risk, as there is less to deploy. But it means code could be deployed to snowflake servers, and QA and live hardware could get out of sync, so we would not be testing like for like.
Does anyone have experience of the two approaches and care to share which has worked better and why?
Well, we have both approaches in place. The last step of the initial AWS provisioning is a null resource that runs Ansible, which does the initial code deployment.
Subsequent code deployments are done with standalone Jenkins + Ansible jobs.