I want to set up a hybrid cloud, with OpenStack as the private cloud and AWS and Rackspace (OpenStack-based) as the public clouds. Which tools should I choose for automatic provisioning and configuration of cloud resources?
There are several tools that can help you with provisioning, management, deployment, and post-deployment support, such as RightScale, Scalr, and Xervmon. Xervmon is one you could give a try: it supports multi-cloud provisioning, management, deployment, and monitoring of a wide array of assets. XOPS, the Xervmon Operations Center, is an integrated tool that leverages Puppet classes to provision and automate routine tasks across multiple cloud providers.
Xervmon takes an integrated approach to the whole gamut of cloud management and monitoring across hybrid infrastructure.
They both seem to be recommended CI/CD tools within Google Cloud, but with similar functionality. Would I use one over the other? Maybe together?
Cloud Build seems to be the de facto tool, while Cloud Deploy says it can do "pipeline and promotion management."
Both are designed as serverless, meaning you don't have to manage the underlying infrastructure of your builds, and you define delivery pipelines in a YAML configuration file. However, Cloud Deploy also needs a Skaffold configuration, which it uses to perform its render and deploy operations.
And according to this documentation,
Google Cloud Deploy is a service that automates delivery of your applications to a series of target environments in a defined sequence.
Cloud Deploy is an opinionated, continuous delivery system currently supporting Kubernetes clusters and Anthos. It picks up after the CI process has completed (i.e. the artifact/images are built) and is responsible for delivering the software to production via a progression sequence defined in a delivery pipeline.
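If it helps to see the shape of the API, here is a minimal sketch that lists the delivery pipelines in a region using the google-cloud-deploy Python client. The project and region are placeholders, and this assumes the pipelines themselves were already defined (normally in YAML and applied with gcloud):

```python
from google.cloud import deploy_v1

# Client for the Cloud Deploy API (assumes the google-cloud-deploy
# package is installed and default credentials are configured).
client = deploy_v1.CloudDeployClient()

# Placeholder project and region.
parent = "projects/my-project/locations/us-central1"

# Each delivery pipeline defines the progression sequence of targets
# (e.g. dev -> staging -> prod) that releases are promoted through.
for pipeline in client.list_delivery_pipelines(parent=parent):
    print(pipeline.name)
```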
While Google Cloud Build is a service that executes your builds on Google Cloud.
Cloud Build (GCB) is Google's cloud Continuous Integration/Continuous Delivery (CI/CD) solution. It takes user code stored in Cloud Source Repositories, GitHub, Bitbucket, or other solutions; builds it; runs tests; and saves the results to an artifact repository like Google Container Registry, Artifactory, or a Cloud Storage bucket. It also supports complex builds with multiple steps, for example testing and deployment. If you want to extend your CI pipeline, it's as easy as adding an additional step to it. You can take your artifacts, either built or stored locally or at your destination, and easily deploy them to many services with a deployment strategy of your choice.
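As a sketch of what a build definition looks like through the API: the same steps you would put in a cloudbuild.yaml file can be submitted with the google-cloud-build Python client. The project ID and the single echo step below are placeholders:

```python
from google.cloud.devtools import cloudbuild_v1

# Client for the Cloud Build API (assumes the google-cloud-build
# package is installed and default credentials are configured).
client = cloudbuild_v1.CloudBuildClient()

# A one-step build; each step runs in its own builder container.
build = cloudbuild_v1.Build(
    steps=[
        cloudbuild_v1.BuildStep(
            name="ubuntu",  # builder image to run this step in
            args=["echo", "hello from Cloud Build"],
        )
    ]
)

# create_build returns a long-running operation; result() blocks
# until the build finishes.
operation = client.create_build(project_id="my-project", build=build)
print(operation.result().status)
```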
You would need to provide more details to choose between the two services, and it will still depend on your use case. However, their stated objectives might make the decision easier:
Cloud Build's mission is to help GCP users build better software faster, more securely by providing a CI/CD workflow automation product for developer teams and other GCP services.

Cloud Deploy's mission is to make it easier to set up and run continuous software delivery to a Google Kubernetes Engine environment.
In addition, refer to this documentation for pricing information: Cloud Build pricing and Cloud Deploy pricing.
Cloud Workflows doesn't come with a scheduling feature. Apart from that, what are the differences between these two services in terms of features? In which use cases should we prefer Workflows over Composer, or vice versa?
There are some key differences to consider when choosing between the two solutions:
A Composer instance needs to be in a running state to trigger DAGs, and you'll also need to size it based on your usage. You don't need to do this with Cloud Workflows, as it is a serverless service: you pay only when a workflow is triggered.
Another key difference is that Cloud Composer is really convenient for writing and orchestrating data pipelines, because of its internal scheduler and the provided operators, which let you interact with data services across GCP (see the DAG sketch after this list).
However, Cloud Workflows interacts with Cloud Functions, which is a task that Composer cannot do really well.
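To illustrate the Composer side, here is a minimal sketch of an Airflow 2-style DAG of the kind you would upload to a Composer environment. The DAG ID, schedule, and tasks are hypothetical placeholders; a real pipeline would typically use the provided operators (BigQuery, Dataflow, etc.) instead of the echo step:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def transform():
    # Placeholder for real transformation logic.
    print("transforming extracted rows")


with DAG(
    dag_id="daily_elt",              # hypothetical pipeline name
    schedule_interval="@daily",      # Composer's internal scheduler handles this
    start_date=datetime(2023, 1, 1),
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract >> transform_task  # run transform after extract
```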
Both Composer and Workflows support orchestrating multiple services and can handle long running workflows. Despite there being some overlap in the capabilities of these products, each has differentiators that make them well suited to particular use cases.
Composer is most commonly used for orchestrating the transformation of data as part of ELT or data engineering. Workflows, in contrast, is focused on the orchestration of HTTP-based services built with Cloud Functions, Cloud Run, or external APIs.
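To make that concrete, here is a minimal sketch of the kind of HTTP step Workflows typically orchestrates: a Cloud Function written with the functions-framework library. The function name and payload shape are hypothetical:

```python
import functions_framework


@functions_framework.http
def reserve_inventory(request):
    # Hypothetical HTTP step that a Workflows definition could call.
    payload = request.get_json(silent=True) or {}
    item = payload.get("item", "unknown")
    # A real service would talk to a database or another API here.
    return {"item": item, "status": "reserved"}
```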
Composer is designed for orchestrating batch workloads that can tolerate a delay of a few seconds between task executions. It wouldn't be suitable if low latency were required between tasks, whereas Workflows is designed for latency-sensitive use cases.
While you don’t have to worry about maintaining Airflow deployments in Composer, you do need to specify how many workers you need for a given Composer environment. Workflows is completely serverless; there is no infrastructure to manage or scale.
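As a sketch of that serverless model: once a workflow is deployed, triggering it is a single API call, and you are billed per execution. This assumes the google-cloud-workflows client library; the project, region, and workflow names are placeholders:

```python
from google.cloud.workflows import executions_v1

client = executions_v1.ExecutionsClient()

# Placeholder project, region, and workflow names.
parent = client.workflow_path("my-project", "us-central1", "daily-orchestration")

# Starts a new execution of the deployed workflow; no infrastructure
# needs to be running beforehand.
execution = client.create_execution(request={"parent": parent})
print(f"Started execution: {execution.name}")
```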
For further information, refer to this Google blog article and this one.
I am starting to learn PCF. Please help me understand whether PCF falls under the concept of containerization or virtualization.
PCF (a.k.a. PAS, a.k.a. TAS) apps are deployed on containers, typically using Garden as the container runtime and Diego as the container orchestration engine. The components of the PCF runtime may be deployed as virtual machines, managed by BOSH, or as containers.
Pivotal Cloud Foundry (PCF) is a Platform as a Service (PaaS). It helps developers write modern microservice-based applications and consume services from the marketplace. Typically, PCF is deployed on cloud platforms such as AWS and Azure, and the deployment is a big process: it requires 20+ VMs and should be highly available.
Now, coming to your question: PCF doesn't fall specifically under either containerization or virtualization. PCF provides a PaaS, like Elastic Beanstalk in AWS. Of course, we can use Docker container technology for the application runtime on PCF.
What is PCF: Pivotal Cloud Foundry is a commercial version of Cloud Foundry produced by Pivotal, with commercial features added over and above what is available in the open-source version of Cloud Foundry. It's a PaaS platform, i.e. a platform upon which developers can build and deploy applications; it provides a runtime for your applications. You give PCF an application, and the platform does the rest: everything from understanding application dependencies to container building, scaling, and wiring up networking and routing.
The beauty of PCF is that you don't need to worry about the underlying infrastructure, and it can be deployed on premises and on many cloud providers to give enterprises a hybrid and multi-cloud platform. It gives you flexibility and offers a lot of options to develop and run cloud-native apps on any cloud platform.
Category: PCF is one example of an "application" PaaS, also called the Cloud Foundry Application Runtime, whereas Kubernetes is a "container" PaaS (sometimes called CaaS). PCF is a higher-level abstraction and Kubernetes a lower-level one in the PaaS world. In simple terms, Cloud Foundry can be classified as a tool in the "Platform as a Service" category.
Applications running on PCF are deployed, scaled, and maintained by BOSH (PCF's infrastructure management component), which deploys versioned software and the VMs for it to run on, then monitors the application after deployment. So PCF can't be seen as purely containerization or purely virtualization.
Learning: Pivotal used to provide PWS (Pivotal Web Services), a version of the platform available over the internet that you could have explored for free, but PWS took its final bow and left the stage back in January 2021. You may want to look at one of the certified providers instead: https://www.cloudfoundry.org/certified-platforms/
I have gone through the Cloudbreak documentation, and I am still not sure what the exact purpose of this component is.
Is it actually useful only for deploying clusters on cloud services, and if so, can we customize the components that need to be installed in the cluster?
If it is only for managing the deployment of a cluster, is there any cost involved in using Cloudbreak?
Cloudbreak's main purpose is HDP or HDF cluster management. It provides a UI and an API to access, create, and edit clusters, and it also provides access control management for them. Yes, you can customize which components are installed via an Ambari blueprint (see the sketch below).
An additional benefit comes from its Periscope component, which provides autoscaling based on Ambari alerts.
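For a sense of what that customization looks like, here is a minimal sketch of an Ambari blueprint. The blueprint name, host group layout, and component list are hypothetical, and real blueprints are usually much larger:

```python
import json

# A minimal Ambari blueprint expressed as a Python dict and serialized
# to the JSON that Ambari (and thus Cloudbreak) consumes.
blueprint = {
    "Blueprints": {
        "blueprint_name": "minimal-hdp",
        "stack_name": "HDP",
        "stack_version": "3.1",
    },
    "host_groups": [
        {
            "name": "master",
            "cardinality": "1",
            "components": [{"name": "NAMENODE"}, {"name": "RESOURCEMANAGER"}],
        },
        {
            "name": "worker",
            "cardinality": "3",
            "components": [{"name": "DATANODE"}, {"name": "NODEMANAGER"}],
        },
    ],
}

with open("minimal-hdp-blueprint.json", "w") as f:
    json.dump(blueprint, f, indent=2)
```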
Why do we need Spring Cloud? What is the difference between AWS and Spring Cloud?
It might help to think of a division between infrastructure, applications and platform. Think of infrastructure as hardware - servers, disk, compute, network routing etc. that you can use. Let's call 'application' the executables that you build from your code in order to implement business logic and satisfy your end users. Then platform is a connecting layer - tools and standards to help your applications make use of infrastructure.
AWS is most famous for providing cloud infrastructure but it also provides a lot of services that could fall under platform. For example, it provides an API gateway service and container orchestration services with ECS or EKS (kubernetes) - these are more platform-level services as they are services that help your applications to scale and to talk to each other in the cloud.
Spring Cloud is a set of tools that help you address common problems faced by cloud applications: how to get applications to talk to each other reliably in the cloud (Eureka, Hystrix, and Ribbon for HTTP; Streams for messaging), how to provide a single entry point for consumers to access a set of microservices (Zuul and Spring Cloud Gateway), and how to manage configuration across microservices (Spring Cloud Config). Mostly I would put these concerns under 'platform', but there are grey areas. You normally add the Spring Cloud libraries to components you build, which is a bit more 'application'-like. Some of the same concerns are addressed by certain services available in AWS (especially ECS and EKS/Kubernetes).
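As one small illustration of the 'platform' role: a Spring Cloud Config server exposes configuration over plain HTTP at /{application}/{profile}, so any component can fetch it. A minimal sketch, assuming a config server on the default port 8888 and a hypothetical application named "orders":

```python
import requests

# Fetch configuration for the (hypothetical) "orders" application,
# "default" profile, from a locally running config server.
resp = requests.get("http://localhost:8888/orders/default", timeout=5)
resp.raise_for_status()
config = resp.json()

# The server returns property sources in order of precedence.
for source in config.get("propertySources", []):
    print(source["name"])
    for key, value in source["source"].items():
        print(f"  {key} = {value}")
```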
So the core areas of concern are very different for AWS (primarily infrastructure) and Spring Cloud (platform, or a platform-application bridge). But there can be some overlap at the platform level, because AWS and Spring Cloud both offer so many options. It is tricky to find direct comparisons for the same reason, but if you focus on EKS (Kubernetes) in particular, a good article comparing it with Spring Cloud is https://dzone.com/articles/deploying-microservices-spring-cloud-vs-kubernetes
Spring Cloud is a programming API used to develop applications that follow a microservices design.
Spring Cloud is about development; cloud platforms like AWS are where you deploy the application.
Spring Cloud is just a set of tools (software) commonly used in the cloud, while AWS is one of many cloud options, a place where you can deploy your apps.