Cloud Foundry Development Workflow - cloud-foundry

I'm trying to figure out how to use Micro Cloud Foundry for development, as described in statements like the following from the Cloud Foundry blog:
"Rather than installing a web server (Tomcat, etc.), runtimes (Java, Ruby, etc.), and services (Postgres, MongoDB, etc.), you can do a single download of Micro Cloud Foundry, boot it up, and deploy your applications using ‘vmc push’."
When I'm developing (Node, Grails or Java web apps), I'm used to just refreshing and seeing my changes (well, always for client-side code, sometimes for server-side); it makes for very rapid and efficient development.
Constantly invoking 'vmc push' during development is pretty much a non-starter for me. It's far too slow a feedback cycle to be practical. Is there a better way? Does anyone actually do this?
What does your Cloud Foundry development workflow look like and where does Micro Cloud Foundry fit in?

Setting aside the delays involved in pushing an application to Cloud Foundry, I often use Micro Cloud Foundry to provision services (MySQL, MongoDB, Redis, etc.) and then connect to them locally via the vmc tunnel command.
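As a sketch of that workflow using the old v1 vmc CLI (the service name here is made up), this provisions a MySQL service on the Micro Cloud Foundry VM and then tunnels to it:

```shell
# Provision a MySQL service on the Micro Cloud Foundry instance
# ("my-dev-db" is an illustrative name)
vmc create-service mysql my-dev-db

# Open a local tunnel to the service; vmc prints local
# connection details (host, port, credentials) that you can
# point your locally-running app or a mysql client at
vmc tunnel my-dev-db
```

This lets you run your app on your own machine, with a fast refresh cycle, while still using the services provisioned in Micro Cloud Foundry.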

Related

Microservices same as cloud services or webservices?

Firstly, I apologize for the rather basic question. I am just beginning to learn about Microservices Architecture and would like to get my basics right.
I was wondering whether topics such as AWS cloud services/web services imply a microservices architecture. For instance, if someone is working on an AWS project, does that mean they are using a microservice architecture? I do understand that AWS, Docker, etc. are more platforms. Are they exclusively for microservices?
I would really appreciate a short clarification
Microservices, cloud infrastructure like Amazon Web Services, and container infrastructure like Docker are three separate things; you can use any of these independently of the others.
"Microservices" refers to a style of building a large application out of independently-deployable parts that communicate over the network. A well-designed microservice architecture shouldn't depend on sharing files between components, and could reasonably run distributed across several systems. Individual services could run on bare-metal hosts and outside containers. This is often in contrast to a "monolithic" application, a single large deployable where all parts have to be deployed together, but where components can communicate with ordinary function calls.
Docker provides a way of packaging and running applications that are isolated from their host system. If you have an application that depends on a specific version of Python with specific C library dependencies, those can be bundled into a Docker image, and you can just run it without needing to separately install them on the host.
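As a minimal sketch of that idea (the interpreter version, library, and file names are illustrative), a Dockerfile bundles the exact dependencies so the host needs nothing but Docker:

```dockerfile
# Pin the exact interpreter version the app depends on
FROM python:3.9-slim

# Install the specific C library dependency inside the image,
# not on the host (libpq5 is an illustrative example)
RUN apt-get update \
    && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

CMD ["python", "app.py"]
```

Anyone with Docker installed can then `docker build` and `docker run` this image without installing Python or the C library themselves.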
Public-cloud services like AWS fundamentally let you rent someone else's computer by the hour. An AWS Elastic Compute Cloud (EC2) instance literally is just a computer that you can ssh into and run things. AWS, like most other public-cloud providers, offers a couple of tiers of services on top of this: a cloud-specific networking and security layer, various pre-packaged open-source tools as services (you can rent a MySQL or PostgreSQL database by the hour using AWS RDS, for example), and then various proprietary cloud-specific offerings (Amazon's DynamoDB database, analytics and machine-learning services). This usually gives you "somewhere to run it" more than any particular design features, unless you're opting to use a cloud's proprietary offerings.
Now, these things can go together neatly:
You design your application to run as microservices; you build and unit-test them locally, without any cloud or container infrastructure.
You package each microservice to run in a Docker container, and do local integration testing using Docker Compose, without any cloud infrastructure.
You further set up your combined application to deploy in Kubernetes, using Docker Desktop or Minikube to test it locally, again without any cloud infrastructure.
You get a public-cloud Kubernetes cluster (AWS EKS, Google GKE, Azure AKS, ...) and deploy the same application there, using the cloud's DNS and load balancing capabilities.
Again, all of these steps are basically independent of each other. You could deploy a monolithic application in containers; you could deploy microservices directly on cloud compute instances; you could run containers in an on-premises environment or directly on cloud instances, instead of using a container orchestrator.
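The local integration-testing step above might be sketched in Docker Compose like this (service names, ports, and the database image are illustrative assumptions, not a prescription):

```yaml
# docker-compose.yml: two hypothetical microservices plus a
# shared database, for local integration testing only
services:
  orders:
    build: ./orders          # hypothetical microservice
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db
  payments:
    build: ./payments        # hypothetical microservice
    environment:
      DB_HOST: db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

`docker compose up` then starts the whole set on one machine; nothing about this requires (or implies) a cloud provider.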
No, using a cloud provider does not imply using a microservice architecture.
AWS can be (and often is) used to spin up a monolithic service, e.g. just a single EC2 server using a single RDS database.
Utilizing Docker and a container orchestrator like ECS or EKS also does not mean on its own that one has a microservice architecture. If you split your backend and frontend into two Docker containers that get run on ECS, that's really not a microservice architecture. Even if you horizontally scaled them, so that you had multiple identical containers running for both the backend and the frontend service, they still wouldn't be thought of as microservices.

Pivotal Cloud Foundry is based on container or VM

I am starting to learn PCF. Please help me understand whether PCF falls under the concept of containerization or virtualization.
PCF (a.k.a. PAS, a.k.a. TAS) apps are deployed on containers, typically using Garden as the container runtime and Diego as the container orchestration engine. The components of the PCF runtime may be deployed as virtual machines, managed by BOSH, or as containers.
Pivotal Cloud Foundry (PCF) is a Platform as a Service (PaaS). It helps developers write modern microservice-based applications and consume services from the marketplace. Typically, PCF is deployed and installed on cloud platforms such as AWS and Azure. The deployment is a large undertaking: it requires 20+ VMs and should be highly available.
Now, coming to your question: PCF doesn't fall specifically under containerization or virtualization. PCF provides a PaaS, like Elastic Beanstalk on AWS. Of course, you can use Docker container technology for the application runtime on PCF.
What is PCF: Pivotal Cloud Foundry is a commercial version of Cloud Foundry produced by Pivotal. It has commercial features added over and above what is available in the open source version of Cloud Foundry. It's a PaaS, i.e. a platform upon which developers can build and deploy applications, and it provides a runtime for your applications. You give PCF an application, and the platform does the rest: everything from understanding application dependencies to container building, scaling, and wiring up networking and routing.
The beauty of PCF is that you don't need to worry about the underlying infrastructure, and it can be deployed on-premises or on many cloud providers to give enterprises a hybrid and multi-cloud platform. It gives you flexibility and offers a lot of options to develop and run cloud-native apps on any cloud platform.
Category: PCF is one example of an “application” PaaS, also called the Cloud Foundry Application Runtime, while Kubernetes is a “container” PaaS (sometimes called CaaS). PCF is a higher-level abstraction and Kubernetes a lower-level one in the PaaS world. In simple terms, Cloud Foundry can be classified as a tool in the "Platform as a Service" category.
Applications run on PCF are deployed, scaled, and maintained by BOSH (PCF’s infrastructure management component). It deploys versioned software and the VMs for it to run on, and then monitors the application after deployment. So PCF can't be seen purely under containerization or virtualization.
Learning: Pivotal used to provide PWS (Pivotal Web Services), a platform available over the internet that you could have explored to learn for free, but PWS took its final bow and left the stage back in Jan '21. Maybe look at one of the certified providers: https://www.cloudfoundry.org/certified-platforms/

Deploying a multi-service app to a cloud provider

There are several tutorials on how to deploy a containerized service to the cloud: AWS, Google Cloud Platform, Heroku, and many others all have nice tutorials on how to do this.
However, most real-world apps are made of two or more services (for example a database + a web server), rather than just one service.
Is it bad practice to deploy the various services of a multi-service app to different clusters (e.g. deploy the database to one GKE cluster, and the web server to another GKE cluster)? I'm asking this because I am finding it very difficult to deploy a simple web app to a single cluster. I was expecting that once I set up my Dockerfiles and docker-compose.yml, everything would work out-of-the-box (as advertised by the documentation of Docker Compose and Kubernetes), and I would be able to have a small cluster with one container for my database and one container for my web server.
So my questions are:
Is it bad practice to deploy the various services of a multi-service app to different clusters?
What is, in general, the de-facto standard way to deploy a web app with a database and a web server to the cloud? What are the easiest tools to achieve this?
Practically, what is the simplest way I can deploy a React + Express + MongoDB app to any cloud provider with a free-tier account?
Deploying multiple services (AKA applications) that share some logic between them on the same cluster/namespace is actually the best practice. I am not sure why you find it difficult, but you could take a container orchestration platform, such as Kubernetes, and deploy as many applications as you want in the same project on the same cluster.
I would recommend getting into a cloud platform that offers a container orchestrator, such as Google Container Engine on Google Cloud Platform (or any other cloud platform you want), and starting to explore. You can also read about containers in general, or about Kubernetes.
So, practically speaking, I would probably create MongoDB and the Express app inside the same namespace (and every other service or application related to the project in another container within the same namespace).
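For the React + Express + MongoDB case in the question, a minimal Docker Compose sketch might look like the following (ports, paths, and the Mongo image tag are assumptions; the Express server is assumed to serve the built React app):

```yaml
# docker-compose.yml: one web service plus MongoDB
services:
  web:                        # Express API serving the React build
    build: .
    ports:
      - "3000:3000"
    environment:
      MONGO_URL: mongodb://mongo:27017/app
    depends_on:
      - mongo
  mongo:
    image: mongo:6
    volumes:
      - mongo-data:/data/db   # persist data across restarts
volumes:
  mongo-data:
```

The same two-service shape translates fairly directly into a Kubernetes Deployment + Service pair per component once local testing works.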

Is Google Cloud Platform capable of providing these things?

I have a system made up of the following that I'd like to host on Google Cloud Platform:
web service (apache cxf)
web server (apache tomcat)
database (mysql)
hosted web pages
I'd like to be able to install/set up Tomcat and MySql myself. I do not want to use someone's canned, prepackaged components.
If it has built-in tools for load testing, that would be a great nice-to-have, but it's not required.
What is required is that it essentially runs itself and requires little hands-on intervention from me on a day-to-day basis.
Yes, GCP can do all of that.
Just set up a virtual machine in Compute Engine, and it will be very easy to maintain and even scale your applications.
My company is doing just that right now.
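As a rough sketch of that setup (the instance name, zone, machine type, and Debian package names are placeholders; adjust to your project), you create a VM and then install Tomcat and MySQL yourself over SSH:

```shell
# Create a VM in Compute Engine
# (name, zone, and machine type are illustrative)
gcloud compute instances create my-app-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium

# SSH into the instance
gcloud compute ssh my-app-vm --zone=us-central1-a

# On the VM: install Tomcat and MySQL yourself,
# with no prepackaged components
sudo apt-get update
sudo apt-get install -y tomcat9 mysql-server
```

Since you manage the VM yourself, you also own patching and backups; Compute Engine only keeps the machine running.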
Yes, you can use Google Compute Engine; it's Infrastructure as a Service, just like Amazon EC2 or other servers.
Refer to https://cloud.google.com/compute/

Difference between Cloud Foundry & Pivotal Web Services

I read on Wikipedia that the Cloud Foundry open source software is available to anyone, whereas Pivotal Web Services is a commercial product from Pivotal.
I searched a lot on the internet but did not find any example of a Cloud Foundry open source software implementation. Everything is for the Pivotal product, which provides a two-month free trial.
So can anyone tell me what the Cloud Foundry open source software is?
And what exactly is the difference between cloud foundry OSS & Pivotal CF?
Cloud Foundry is open source software, but if you are looking to tinker with it for the first time, using the OSS version is a bit involved. You will need a provisioned cloud environment, you will install it yourself using MicroBosh, and everything will be done through the command line.
Pivotal Cloud Foundry is a commercial implementation that makes it easier to get up and running as you are learning the project. It provides a hosted environment in Pivotal Web Services so that you don't have to install it yourself, a web interface that makes managing the environment easier, and a number of pre-provisioned services including relational databases and messaging queues. This is the best starting point if you are just learning the technology.
To add to the above answer, Pivotal Cloud Foundry offers a public cloud offering called Pivotal Web Services where you can signup and deploy your apps on the cloud which is hosted by Pivotal.
On the other hand, they also allow enterprises to host a private cloud environment by installing the cloud infrastructure components on VMware vSphere, AWS, or OpenStack. Check out this link: http://docs.pivotal.io/pivotalcf/installing/pcf-docs.html