Cloudfoundry Multi VM - cloud-foundry

I am pretty new to Cloud Foundry and am still trying to understand how exactly it works.
Say I have three VMs. VM 1 is running on server A.
VMs 2 and 3 are running on server B.
If I wanted to run a single Cloud Foundry instance across those three, would it work?
And if not, how could I use Cloud Foundry on multiple servers or at least multiple VMs? I know I can use BOSH to set them up, but do I still have to manage each instance separately?
Thank you,
Jannis

BOSH will deploy VMs for you; you typically don't deploy Cloud Foundry onto existing VMs. BOSH supports deploying to several infrastructures. The core supported infrastructures include AWS, vSphere, OpenStack, and vCloud Air/vCloud Director, and there are also community-provided "Cloud Provider Interfaces" for IBM SoftLayer, Azure, Google Compute Engine, and more.
Cloud Foundry is meant to be run as a distributed service, i.e. on multiple VMs. Typically those VMs will be spread across different hosts, hardware racks, servers, or datacenters. BOSH is designed to facilitate deploying and managing distributed services like Cloud Foundry, so no, you do not need to manage individual VMs separately.
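To make that concrete, here is a heavily trimmed sketch of what a BOSH deployment manifest can look like (a real Cloud Foundry manifest is far larger, and the instance group names, counts and AZs below are illustrative only, not the actual layout):

    name: cf

    instance_groups:
    - name: router
      instances: 2          # BOSH creates and manages these VMs for you
      azs: [z1, z2]         # spread instances across availability zones / hosts
      vm_type: default
      stemcell: default
      networks:
      - name: default
      jobs:
      - name: gorouter
        release: routing

    - name: diego-cell
      instances: 3          # scale by changing this number and redeploying
      azs: [z1, z2]
      vm_type: default
      stemcell: default
      networks:
      - name: default
      jobs:
      - name: rep
        release: diego

You scale or update the whole deployment by editing the manifest and running bosh deploy again, rather than logging into individual VMs.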
You can read more about BOSH and Deploying Cloud Foundry.

Related

Private service to service communication for Google Cloud Run

I'd like to have my Google Cloud Run services privately communicate with one another over non-HTTP and/or without having to add bearer authentication in my code.
I'm aware of this documentation from Google which describes how you can do authenticated access between services, although it's obviously only for HTTP.
I think I have a general idea of what's necessary:
Create a custom VPC for my project
Enable the Serverless VPC Connector
What I'm not totally clear on is:
Is any of this necessary? Can Cloud Run services within the same project already see each other?
How do services address one another after this?
Do I gain the ability to use simpler by-convention DNS names? For example, could I have each service in Cloud Run manifest on my VPC as a single first level DNS name like apione and apitwo rather than a larger DNS name that I'd then have to hint in through my deployments?
If not, is there any kind of mechanism for services to discover names?
If I put my managed Cloud SQL postgres database on this network, can I control its DNS name?
Finally, are there any other gotchas I might want to be aware of? You can assume my use case is very simple: two or more long-lived services on Cloud Run doing non-HTTP TCP/UDP communication.
I also found a potentially related Google Cloud Run feature request that is worth upvoting if this isn't currently possible.
Cloud Run services are only reachable through HTTP requests; you can't use other network protocols (for example SSH to log into instances, or raw TCP/UDP communication).
However, Cloud Run can initiate this kind of connection to external services (for instance, Compute Engine instances deployed in your VPC, thanks to the Serverless VPC connector).
The Serverless VPC connector lets you bridge the Google Cloud managed environment (where the Cloud Run, Cloud Functions and App Engine instances live) and your project's VPC, where your own resources run (Compute Engine instances, GKE node pools, ...).
Thus you can have a Cloud Run service that reaches a Kubernetes pod on GKE through a TCP connection, if that is your requirement.
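For example, once you have created a connector, you attach it to a service in the service's YAML (the service, image and connector names below are hypothetical); the same thing can be done with the --vpc-connector flag of gcloud run deploy:

    # Sketch only; service, image and connector names are hypothetical.
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: api-one
    spec:
      template:
        metadata:
          annotations:
            # route this service's outbound traffic through the VPC connector
            run.googleapis.com/vpc-access-connector: my-connector
        spec:
          containers:
          - image: gcr.io/my-project/api-one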
As for service discovery, it isn't there yet; Google is actively working on it, and Ahmet (Google Cloud Developer Advocate for Cloud Run) recently released a tool for it, but nothing is really built in.

Swisscom Cloud Foundry: Is there a way to request app instances to run in separate regions?

This is a question regarding the Swisscom CloudFoundry PaaS.
We use manifests to configure and provision our applications running in CloudFoundry. You can specify there how many app instances you want to have running for an app (or you can scale the app via cf scale APP_NAME -i 2).
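For illustration, a minimal manifest.yml looks roughly like this (the app name is just an example):

    applications:
    - name: my-app
      memory: 256M
      instances: 2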
What should I expect in terms of "regions" as to where these instances will be run?
Is there any way for me to tell CloudFoundry to run instances in separate datacenters / regions?
Will CloudFoundry itself try to start up app instances in different locations?
Will CloudFoundry most likely start them very close to each other?
Is there simply no way of knowing / telling in which datacenter the App instances will run?
I would like to know what the recommended way is to run a CloudFoundry app on Swisscom's PaaS to maximise its availability.
Taking the questions in order:
There is no way to tell Cloud Foundry in which datacenter app instances should run.
Yes, Cloud Foundry itself distributes app instances across datacenters.
No, Cloud Foundry spreads them out rather than starting them close together.
Correct, there is simply no way of knowing or choosing where the instances will run.

Kubernetes and vSphere, AWS

I am a bit late to the party and am just delving into containers now. At work we use vSphere as our virtualization platform, but are likely to move to "the cloud" (AWS, GCP, Heroku, etc.) at some point in the somewhat-near future.
Ideally, I'd like to build up our app containers such that I could easily port them from running on vSphere nodes to AWS EC2 instances.
So I ask:
Are all Docker containers created equal? Could I port a Docker container of our own creation to AWS Container Service with zero config?
I believe Kubernetes helps map containers to the virtualization resources they need. Any chance this runs on AWS as well, or does AWS ECS take care of this for me?
Kubernetes is designed to run on multiple cloud platforms (as well as bare metal). See Getting started on AWS for AWS specific instructions.
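As a sketch of why that helps portability, a Deployment like the one below (the image name is hypothetical) can be applied unchanged to a Kubernetes cluster running on AWS, GCE, vSphere or bare metal, because it only describes containers and replicas, not the underlying provider:

    # Illustrative only; the image name is hypothetical.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: registry.example.com/my-app:1.0
            ports:
            - containerPort: 8080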

cloudfoundry on openstack iaas - understanding the stack

I am evaluating Cloud Foundry (private cloud option) with OpenStack as an IaaS candidate.
I have the following setup in mind, but it looks like I am missing some connections:
I will have OpenStack installed
On one VM on OpenStack (Ubuntu 10.04 image), I will install the Cloud Foundry cloud_controller
On multiple other VMs on OpenStack, I will install Cloud Foundry DEAs
This, I understand, is called a multi-host installation of Cloud Foundry
Now when I push an application to Cloud Foundry using VMC (requesting 5 instances), one of the Cloud Foundry DEAs will spawn 5 VMs on itself and deploy/run the app on all 5 Cloud Foundry VMs
That means I have 5 instances of my app running
I can access the app through a single URL, and the Cloud Foundry controller/router will route each request to one of the running instances of my app
Now for scaling the infrastructure, I can reconfigure my OpenStack instances and restart them with a new configuration (i.e. more volume, more RAM, etc.)
And for scaling the application, I can simply request more instances in the Cloud Foundry vmc push command
Sorry for the writeup, but please suggest whether this is a valid understanding (and also whether you have better options; basically we are looking at a scalable application and infrastructure for developers)
Thanks Much,
Vcap OSS questions are best directed to the vcap-dev site, and I would suggest you start there.

What is the CloudFoundry infrastructure?

Does anyone have any idea whether CloudFoundry is based on IaaS and datacenters from VMware, or whether it is based on 3rd-party IaaS providers such as AWS EC2?
Thanks,
Cloud Foundry is an open source Platform as a Service. It's written entirely in Ruby, and the components are very loosely coupled. You can download the Cloud Foundry source code from https://github.com/cloudfoundry/vcap. Cloud Foundry is a PaaS that sits on top of the IaaS layer, and that IaaS layer can be anything (e.g. VMware vSphere, Amazon EC2, CloudStack, etc.).
cloudfoundry.com is a Cloud Foundry PaaS environment hosted by VMware; since that service is run by VMware, the IaaS is VMware as well. It provides 2 GB of free storage to any user who registers. After registration, users can deploy their apps, which are then served from a subdomain of cloudfoundry.com (e.g. myCompanyName.cloudfoundry.com). The service is currently in beta.
You can find more information on the following websites:
http://www.cloudfoundry.com
http://www.cloudfoundry.org
http://docs.cloudfoundry.com
http://support.cloudfoundry.com
CloudFoundry.com runs on VMware's own vSphere infrastructure and servers. CF, though, is open source, and other providers offer their own services.