I've read about two approaches (there are probably more) for implementing continuous delivery pipelines in GCP:
Skaffold
Spinnaker + Container Builder
I've worked with both a little bit in Qwiklabs. If someone has real experience with both, could you please share their pros and cons compared to each other? Why did you choose one over the other?
Pipeline using Skaffold (from the docs https://skaffold.dev/docs/pipeline-stages/; a minimal skaffold.yaml sketch follows this list):
Detect source code changes
Build artifacts
Test artifacts
Tag artifacts
Render manifests
Deploy manifests
Tail logs & Forward ports
Cleanup images and resources
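For illustration, here is a minimal skaffold.yaml sketch covering the build, test, and deploy stages listed above; the image name, test file, and manifest paths are hypothetical, not taken from any particular project:

```yaml
# Minimal skaffold.yaml sketch (hypothetical image name and paths)
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: my-app                      # hypothetical image name
  tagPolicy:
    gitCommit: {}                        # tag artifacts with the current git commit
test:
  - image: my-app
    structureTests:
      - ./test/structure-test.yaml       # hypothetical container structure test
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                       # manifests rendered and applied by Skaffold
```

Running `skaffold dev` then drives the whole loop (detect changes, build, test, deploy, tail logs, clean up), which is exactly the developer inner loop the stages above describe.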
Pipeline using Spinnaker + Container Builder (a cloudbuild.yaml sketch follows this list):
Developer:
Change code
Create a git tag and push to repo
Container Builder:
Detect new git tag
Build Docker image
Run unit tests
Push Docker image
Spinnaker (from the docs https://www.spinnaker.io/concepts/):
Detect new image
Deploy Canary
Cutover manual approval
Deploy PROD (blue/green)
Tear down Canary
Destroy old PROD
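As a rough illustration of the Container Builder portion above, here is a hedged cloudbuild.yaml sketch; the trigger on new git tags is configured separately in the build triggers, and the image name and test command are assumptions:

```yaml
# Hypothetical cloudbuild.yaml for the Container Builder steps above.
steps:
  # Build the Docker image, tagged with the git tag that triggered the build
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$TAG_NAME', '.']
  # Run unit tests inside the freshly built image ("make test" is a placeholder)
  - name: 'gcr.io/cloud-builders/docker'
    args: ['run', '--rm', 'gcr.io/$PROJECT_ID/my-app:$TAG_NAME', 'make', 'test']
# Push the image to the registry so Spinnaker can detect it and start its pipeline
images:
  - 'gcr.io/$PROJECT_ID/my-app:$TAG_NAME'
```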
I have worked with both, and in my experience Skaffold is good only for local development and testing; if you want to scale to pre-production and production use cases, it is better to use a Spinnaker pipeline. Spinnaker provides advantages over Skaffold such as:
Sophisticated/complex deployment strategies: you can define deployment strategies such as deploying service 1 before service 2, etc.
Multi-cluster deployments: deployments to multiple clusters can easily be configured through the UI.
Visualization: it provides a rich UI that shows the status of any deployment or pod across clusters, regions, namespaces and cloud providers.
I'm not a real power user of either, but my understanding is that:
Skaffold is great for the dev environment, for developers (build, test, deploy, debug, loop).
Spinnaker is more oriented toward continuous delivery on automated platforms (CI/CD); that's why you can perform canary and blue/green deployments and the like, which are not needed during the development phase.
Skaffold is also oriented toward Kubernetes environments, compared to Spinnaker, which is more agnostic and can deploy elsewhere.
Skaffold is for fast local Kubernetes development. Skaffold handles the workflow for building, pushing and deploying your application.
This makes it different from Spinnaker, which is more oriented towards CI/CD with full production environments.
Related
They both seem to be recommended CI/CD tools within Google Cloud, but with similar functionality. Would I use one over the other? Maybe together?
Cloud Build seems to be the de facto tool, while Cloud Deploy says that it can do "pipeline and promotion management."
Both of them are designed as serverless: you don't have to manage the underlying infrastructure of your builds, and you define delivery pipelines in a YAML configuration file. However, Cloud Deploy also needs a Skaffold configuration, which it uses to perform its render and deploy operations.
And according to this documentation,
Google Cloud Deploy is a service that automates delivery of your applications to a series of target environments in a defined sequence.
Cloud Deploy is an opinionated, continuous delivery system currently supporting Kubernetes clusters and Anthos. It picks up after the CI process has completed (i.e. the artifact/images are built) and is responsible for delivering the software to production via a progression sequence defined in a delivery pipeline.
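To make that concrete, here is a hedged sketch of a Cloud Deploy delivery pipeline with a staging and a prod target; the pipeline name, project, and cluster paths are assumptions:

```yaml
# Hypothetical clouddeploy.yaml: a DeliveryPipeline that promotes through two targets.
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: my-app-pipeline            # hypothetical pipeline name
description: main application pipeline
serialPipeline:
  stages:
    - targetId: staging
    - targetId: prod
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: staging
gke:
  cluster: projects/my-project/locations/us-central1/clusters/staging   # hypothetical cluster
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: prod
requireApproval: true              # manual promotion gate before production
gke:
  cluster: projects/my-project/locations/us-central1/clusters/prod      # hypothetical cluster
```

With requireApproval set on the prod target, the manual promotion step is handled by Cloud Deploy itself rather than by the CI system.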
Google Cloud Build, on the other hand, is a service that executes your builds on Google Cloud.
Cloud Build (GCB) is Google's cloud Continuous Integration/Continuous Delivery (CI/CD) solution. It takes user code stored in Cloud Source Repositories, GitHub, Bitbucket, or other solutions; builds it; runs tests; and saves the results to an artifact repository like Google Container Registry, Artifactory, or a Google Cloud Storage bucket. It also supports complex builds with multiple steps, for example testing and deployments. If you want to extend your CI pipeline, it's as easy as adding an additional step to it: take your artifacts, either built or stored locally or at your destination, and deploy them to many services with a deployment strategy of your choice.
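For example, the hand-off from Cloud Build (CI) to Cloud Deploy (CD) can be a single extra build step. A hedged cloudbuild.yaml sketch, with the pipeline name, region, and image name assumed, and assuming a skaffold.yaml is present in the build workspace:

```yaml
# Hypothetical cloudbuild.yaml: build the image, then create a Cloud Deploy release
# so the delivery pipeline takes over from here.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - deploy
      - releases
      - create
      - rel-$SHORT_SHA
      - --delivery-pipeline=my-app-pipeline    # hypothetical pipeline name
      - --region=us-central1
      - --images=my-app=gcr.io/$PROJECT_ID/my-app:$SHORT_SHA
images:
  - 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA'
```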
You would need to provide more details in order to choose between the two services, and it will still depend on your use case. However, their stated objectives might make the choice easier:
Cloud Build's mission is to help GCP users build better software faster, more securely by providing a CI/CD workflow automation product for developer teams and other GCP services.
Cloud Deploy's mission is to make it easier to set up and run continuous software delivery to a Google Kubernetes Engine environment.
In addition, refer to the documentation for pricing information: Cloud Build pricing and Cloud Deploy pricing.
We are trying to migrate several of our Java/Maven projects to AWS CodePipeline and we could not find a good and reasonable migration approach (our current architecture is to use AWS for production). Specifically, we are interested in several things:
How to cache Maven dependencies so that build tasks do not download the same packages all over again.
There are several approaches possible, for example:
a) Use CodeArtifact, but then the Maven projects will be tied to a specific AWS account.
b) Use S3 buckets, but then 3PP modules (Maven Wagons) will need to be used.
c) Use EC2 instance for building.
d) Use Docker container created specifically for build purposes.
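Regarding (a)–(d): if the builds run on CodeBuild, its built-in cache (S3 or local custom cache, enabled on the build project) can also keep the local Maven repository between runs without extra wagons or a dedicated build host. A hedged buildspec.yml sketch, assuming the default root user and repository path:

```yaml
# Hypothetical buildspec.yml: cache the local Maven repository between CodeBuild runs.
# The cache type (S3 or local custom cache) is configured on the CodeBuild project itself.
version: 0.2
phases:
  build:
    commands:
      - mvn -B -Dmaven.repo.local=/root/.m2/repository clean package
cache:
  paths:
    - '/root/.m2/**/*'
```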
It is not really clear whether Jenkins or CodePipeline is recommended as the CI/CD product in AWS. We have seen some examples where CodePipeline is used together with Jenkins. What is the purpose of such a setup?
Thank you,
I'm looking to use NestJS on my next project, but I am slightly put off by the lack of documentation regarding deployment practices and continuous deployment cycles. Ideally I would like to use something like cloud compute to automatically compile my project and deploy it as updates are pushed to a release branch. Anyone have advice regarding that?
This is a very broad question, as there are many ways to implement CI, a deployment pipeline, or deployment strategies.
I would suggest you take a look at the developer tools in AWS, such as CodePipeline for pipeline creation and CodeBuild/Jenkins as build services. Take a look at Docker containers, and look at deployment services like Elastic Beanstalk for single/multi-container setups, ECS, or just CodeDeploy.
I would also suggest you take a look at the AWS Blue/Green Deployments whitepaper, as it also reviews the different deployment strategies.
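If you go the CodePipeline/CodeBuild route for a NestJS project, the build stage could look like the following hedged buildspec sketch; the Node version, npm scripts, and output directory (dist/) are assumptions based on a default NestJS setup:

```yaml
# Hypothetical buildspec.yml for building and testing a NestJS app on CodeBuild.
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18            # assumed Node.js version
    commands:
      - npm ci
  build:
    commands:
      - npm run build       # compiles TypeScript into dist/ in a default NestJS project
      - npm test
artifacts:
  files:
    - 'dist/**/*'
    - package.json
    - package-lock.json
```

The resulting artifact can then be handed to a deploy stage (Elastic Beanstalk, ECS, or CodeDeploy) triggered by pushes to your release branch.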
I have been investigating a combination of Spinnaker, Spring Boot/Cloud and Amazon Web services for implementing a continuous delivery of microservices for my new employer.
My biggest issue is the separation of the different environments in AWS, and determination of that environment by the Spring Boot/Cloud microservice as it goes through the pipeline from code check-in to production.
What are the best practices for separating the different environments in AWS? I have seen separate sub-accounts and the use of VPCs to separate the environments.
The next step involves determination of the environment by the microservice when it starts up. We are planning on using Spring Cloud Config Server to provide runtime configuration to the microservices. The environment needs to be determined when contacting the config server so that the 'label' can be set when asking for configuration (a minimal sketch follows the assumptions below).
Assumptions I am making:
1. Spinnaker, or some other pipeline-capable tool, is being used to push the artifact into the various environments.
2. The original artifact is a self-contained JAR that would ideally be incorporated into an AWS AMI and then pushed without modification into the subsequent deployment environments.
3. The AMI can be built with whatever scripting is necessary to determine the runtime environment and provide that info to the JAR when it is started.
4. The instance is being started by an auto-scaler, and will be starting the instance (AMI) without providing external info at startup.
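On assumption 3, one hedged approach: have the AMI's startup script resolve the environment (for example from an EC2 tag or instance user data) and export it as an environment variable before launching the JAR, then let Spring Cloud Config pick it up as the label. A minimal bootstrap.yml sketch, with the service name, config server address, and variable name (DEPLOY_ENV) assumed:

```yaml
# Hypothetical bootstrap.yml: DEPLOY_ENV is assumed to be exported by the AMI's
# startup script (e.g. derived from an EC2 tag or instance user data) before launch.
spring:
  application:
    name: my-service                            # hypothetical service name
  cloud:
    config:
      uri: http://config-server.internal:8888   # hypothetical config server address
      label: ${DEPLOY_ENV:dev}                  # branch/label in the config repo, per environment
```

This keeps the JAR and AMI identical across environments; only the exported variable changes as the artifact is promoted.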
I think that is enough for now. This could probably be split into multiple questions, but I wanted to maintain a cohesive whole for the process.
Thanks for the assistance.
I've been looking into Mesos, Marathon and Chronos combo to host a large number of websites. In my head I should be able to type a few commands into my laptop, and wait about 30 minutes for the thing to build and deploy.
My only issue is that my resources are scattered across multiple data centers, numerous cloud accounts, and about 6 on-premises locations. I see no reason why I can't control them all from my laptop -- (I have serious power and control issues when it comes to my hardware!)
I'm thinking that my best approach is to build the brains in the cloud (ZooKeeper and at least one master), and then add on the separate data centers, but I have yet to see any examples of a distributed cluster where not all the nodes can talk to each other.
Can anyone recommend a way of doing this?
I've got a setup like this that I'd like to recommend:
Source code, deployment scripts and Dockerfiles in Git
Each webservice has its own directory and comes together with a Dockerfile to containerize it
A build script (a shell script running docker builds) builds all the Docker containers, and all the images are pushed to a Docker image repository
An Ansible deploy deploys all the containers remotely to a set of VPSes (use your own deployment procedure that fits Mesos/Marathon; a minimal playbook sketch follows this answer)
As part of the process, an ActiveMQ broker is deployed to the cloud (yep, in a container). While deploying, it supplies each node with the URL of the broker it needs to connect to. In your setup you could instead use ZooKeeper or etcd, for example.
I am also using Jenkins to do automatic rebuilds and to run deploys whenever there have been Git commits, but they can also be done manually.
Rebuilds are lightning fast, and deploys don't take much time either. I can replicate everything I have in my repository endlessly and have zero configuration.
To be able to do a new deploy, all I need is a set of VPSes with Docker daemons, and some datastores for persistence. I'm not sure if this is something that you can replace with Mesos, but Ansible will definitely be able to install a Mesos cloud for you onto your hardware.
All logging is done with Logstash, to a central logging server.
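A minimal playbook sketch for the Ansible deploy step, assuming the community.docker collection is installed and using a hypothetical registry and image name:

```yaml
# Hypothetical Ansible playbook: pull and (re)start one containerized webservice
# on a group of VPSes. Registry, image name and ports are assumptions.
- hosts: webservers
  become: true
  tasks:
    - name: Pull the latest image from the registry
      community.docker.docker_image:
        name: registry.example.com/my-webservice:latest
        source: pull

    - name: Run the webservice container
      community.docker.docker_container:
        name: my-webservice
        image: registry.example.com/my-webservice:latest
        state: started
        restart_policy: always
        published_ports:
          - "8080:8080"
```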
I have set up a 3-master, 5-slave, 1-gateway Mesos/Marathon/Docker cluster and documented it here:
https://github.com/debianmaster/Notes/wiki/Mesos-marathon-Docker-cluster-setup-on-RHEL-7-with-three-master
This may help you in understanding the load balancing / scaling across different machines in your data center.
1) Masters can also be used as slaves.
2) The Mesos HAProxy bridge script can be used for service discovery of newly created services in the cluster.
3) The gateway HAProxy is updated every minute with new services that are created.
The documentation covers:
1) master/slave setup
2) setting up HAProxy so that it automatically reloads
3) setting up Docker
4) an example service program
You should use Terraform to orchestrate your infrastructure as code.
Terraform has a lot of providers that allow you to manage different resources across multiple cloud services and/or bare-metal resources such as vSphere.
You can start with the Getting Started Guide.