Cloud Foundry versions

Getting the exact version of the various parts of a Cloud Foundry installation is important in order to refer to the right documentation pages, but it seems a bit tricky.
Here is what I got so far:
CLI: cf -v
Cloud Foundry API: cf api
Buildpacks: cf buildpacks lists the installed buildpacks; the version appears in the file name
Here is what I do not know how to get:
The Cloud Foundry platform version, such as 2.4. According to "How to check PCF version" there is a (painful) way to derive it from the API version, then the CAPI version, etc.
Service brokers: cf marketplace lists the available service brokers but shows no version info; the same goes for cf marketplace -s postgresql
For the desperate, the release notes such as https://docs.pivotal.io/pivotalcf/2-4/pcf-release-notes/runtime-rn.html can help.
How can I get the missing versions (preferably from command line or HTTP) as a regular user?

For Pivotal Cloud Foundry, all of your version information can be found in Ops Manager. There is a handy diagnostic report you can export which gives you a JSON listing of all the versions of things you have installed.
It's under your user name in the upper right corner: click Settings, then Advanced.
https://docs.pivotal.io/pivotalcf/2-4/customizing/pcf-interface.html#settings
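If you prefer to script it, the same report is also exposed through the Ops Manager API (a sketch; the host is a placeholder and obtaining a UAA access token, e.g. via uaac, is omitted for brevity):
$ curl -k "https://OPSMAN_HOST/api/v0/diagnostic_report" \
    -H "Authorization: Bearer YOUR_UAA_TOKEN"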
For PCF or CF, you can also get detailed version information from BOSH. Running bosh deployments will show you all of the BOSH releases that are part of your current deployment. Each BOSH release has a fixed set of software that it will install. If you care to go deeper, you can look at the individual BOSH release to get versions or more often git commit hashes for the software included in that release.
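For example (a sketch using BOSH CLI v2 syntax; older v1 CLIs spell these commands slightly differently, and the release name/version shown is hypothetical):
$ bosh deployments                 # deployments and the releases they run
$ bosh releases                    # uploaded release names and versions
$ bosh inspect-release cf/1.2.3    # jobs, packages and their git commit hashes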
As an unprivileged user you can locate most of this information by running cf curl /v2/info.
Ex:
$ cf curl /v2/info
{
   "name": "Pivotal Application Service",
   "build": "2.4.2-build.33",
   "support": "https://support.pivotal.io",
   "version": 0,
   "description": "https://docs.pivotal.io/pivotalcf/2-3/pcf-release-notes/runtime-rn.html",
   "authorization_endpoint": "https://login.run.pcfone.io",
   "token_endpoint": "https://uaa.run.pcfone.io",
   "min_cli_version": "6.23.0",
   "min_recommended_cli_version": "6.23.0",
   "app_ssh_endpoint": "ssh.run.pcfone.io:2222",
   "app_ssh_host_key_fingerprint": "62:b2:73:9c:c1:c7:4f:c9:79:0c:62:ec:a1:9a:f9:b0",
   "app_ssh_oauth_client": "ssh-proxy",
   "doppler_logging_endpoint": "wss://doppler.run.pcfone.io:443",
   "api_version": "2.125.0",
   "osbapi_version": "2.14",
   "routing_endpoint": "https://api.run.pcfone.io/routing"
}
build gives you the PCF version.
api_version gives you the Cloud Controller (CAPI) version.
osbapi_version gives you the Open Service Broker API version (not the version of individual brokers).
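For scripting, the interesting fields can be extracted in one go (a sketch; assumes the jq utility is installed):
$ cf curl /v2/info | jq '{build, api_version, osbapi_version}'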
Obtaining the version of individual services is going to be the trickiest part, as it will depend on what information each service broker exposes. The output in the marketplace is provided by the individual service broker, so if that broker were to include version information, it would show up there. Similarly, there may be APIs and dashboards exposed by individual service brokers that tell you more details, like their version. You would need to consult each individual broker to see how you can get more details about the version of it that has been deployed.
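One place to look is the broker-supplied service metadata in the Cloud Controller (a sketch; the extra field is free-form and varies per broker, and jq is assumed):
$ cf curl /v2/services | jq '.resources[].entity | {label, description, extra}'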

Related

Invalid arguments when creating new datalab instance

I am following the quickstart tutorial for Datalab here, within the GCP console. When I try to run
datalab beta create-gpu datalab-instance-name
in step 3, I receive the following error:
write() argument must be str, not bytes
Can anyone help explain why this is the case and how to fix it?
Thanks
Referring to the official documentation, before running a Datalab instance, the corresponding APIs should be enabled: the Google Compute Engine and Cloud Source Repositories APIs. To do so, visit Products -> APIs and Services -> Library and search for those APIs. Additionally, make sure that billing is enabled for your Google Cloud project.
You can also enable the APIs by typing the following command, which will give you a prompt to enable them:
datalab list
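Alternatively, the two APIs can be enabled directly from the command line (a sketch; the service names are the ones listed in the GCP API Library for Compute Engine and Cloud Source Repositories):
gcloud services enable compute.googleapis.com sourcerepo.googleapis.com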
I did some research and found that the same issue has been reported on the project's GitHub page. If enabling the APIs doesn't work, the best option would be to contribute (add a comment) to the mentioned GitHub topic to make it more visible to the Datalab engineering team.

Do I need to deploy function in gcloud in order to have OCR?

This GCloud tutorial has a "Deploying the function" step, such as
gcloud functions deploy ocr-extract --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point
but Quickstart: Using Client Libraries does not mention it at all; all it needs is
npm install --save @google-cloud/storage
and then a few lines of code will work.
So I'm confused: do I need the "deploy" in order to have OCR? In other words, what do/don't I get from "deploy"?
The command
npm install --save @google-cloud/storage
is an example of installing the Google Cloud client library for Node.js in your development environment, in this case for the Cloud Storage API. This example is part of the Setting Up a Node.js Development Environment tutorial.
Once you have coded, tested and set all the configurations for the app as described in the tutorial, the next step is deployment, in this example as a Cloud Function:
gcloud functions deploy ocr-extract --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point
So, note that these commands are two different steps to run OCR with Cloud Functions, Cloud Storage and other Cloud Platform components in the tutorial example using the Node.js environment.
While Cloud Functions are easy to understand, this specifically answers my own question of what the "deploy" actually does:
To have the code work for you, it must be deployed/uploaded to Google Cloud. For people like me who have never used Cloud Functions, this is new. My understanding was that all I needed to supply was credentials, satisfying whatever server/backend (sorry, cloud) settings exist when my local app calls the remote web API. That's where I got stuck. The key I missed is that the sample app itself is a set of server/backend event-handler trigger functions, and therefore Google requires them to be "deployed", just like when we deploy something during a staging or production release in a traditional corporate environment. So it's a real deploy. If you still don't get it, go to your GC admin page, open the menu, choose Cloud Functions, and on the "Overview" tab you will see them. Hence the next point:
The three gcloud deploy commands used in "Deploying the function" take ocr-extract, ocr-save and ocr-translate; these are not switches but function names, and you can name them anything. Now, still in the admin page, click on any of the three, then "Source". Bang, they are there, deployed (uploaded).
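For example (a hypothetical sketch: the deployed name my-ocr-extract is an arbitrary label, while --entry-point must match a function actually exported by your source, so processImage is made up too):
gcloud functions deploy my-ocr-extract --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point processImage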
Google, as this is a tutorial and no one has dug into the command reference book yet, I recommend adding a note telling readers that those three ocr-* names can be anything you want.

Docker image registry/repository on private cloudfoundry

I installed Cloud Foundry (approximately version v220) on OpenStack and I want to work with private Docker images on Cloud Foundry.
I would like to run a Docker registry/repository (Doc|Github) server on Cloud Foundry.
I found tutorials on how to install it directly on a machine/VM (1|2|3).
Is there something to be said against running it on Cloudfoundry?
How do I install it?
Is Diego or something like that already providing the registry/repository service?
I thought Diego was part of Cloud Foundry, but reading the CF release notes it looks like I have to install Diego separately. Is that correct (see "Recommended Diego Version")?
It is possible to run private Docker images on Cloudfoundry and there is a CF-specific registry you can use. In order to do that, there are a number of extra steps that you will need to undertake.
To answer your last question first, we must tease apart what exactly is meant by "Diego is a part of Cloud Foundry". Cloud Foundry is deployed using BOSH, which among other things has a concept of a release. A release is in essence a versioned collection of source code, configuration, dependencies, etc. that your system needs to run. I would recommend reading the BOSH docs to gain more of an understanding as to exactly what BOSH is.
Historically, Cloud Foundry has been made up of a single BOSH release, cf-release, but that is no longer the case. Diego itself is deployed as a separate release, diego-release, and that is what is being referred to in the cf-release release notes. To ensure compatibility, each release of cf-release publishes which release of diego-release is being run alongside.
Diego does support an internal Docker registry that can serve private Docker images, but in order to use it, you must deploy another BOSH release and configure it correctly. That BOSH release is the diego-docker-cache-release; its README should hopefully help in getting you started. This cf-dev post by the current Diego PM might also be helpful in setting it up. If you run into any problems or issues, I would recommend posting to the cf-dev mailing list, as the CF community and developers maintain a closer watch on that communication channel.
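Deploying it follows the usual BOSH workflow (a sketch in BOSH v1 CLI syntax, which matches the CF v220 era; the tarball path and manifest name are placeholders):
bosh upload release /path/to/diego-docker-cache-release.tgz
bosh deployment docker-cache.yml   # a deployment manifest you write for this release
bosh deploy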

Customizing micro cloud foundry?

I am trying to create a stand-alone companion to a customized Cloud Foundry deployment that has some additional services enabled, in the same way that Micro Cloud Foundry is a companion to cloudfoundry.com. I've blogged a longer description of my work to date for context, but the short question is this:
Is there a micro-cf-release available which can be extended and used to create a customized Micro Cloud Foundry? With the release train happening now, this must exist somewhere, along with a process and tooling for creating the VM. Is this in the open source somewhere?
The Capistrano script that builds the releases is:
https://github.com/cloudfoundry/micro/blob/master/build/build.cap
This workflow is experimental, but it should be possible to use a subset of the build tasks in the script and customize cf-release before building from it.
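To explore it (a hypothetical sketch; the actual task names are defined in build/build.cap, so list them before running anything):
git clone https://github.com/cloudfoundry/micro.git
cd micro
cap -f build/build.cap -T   # list the tasks the build script defines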

How to add a system service to Cloud Foundry step by step

I want to add a new system service to my micro cloud, and I am following the steps specified in the document "How to add a system service to Cloud Foundry step by step" for adding the echo service.
But I don't see the specified folder structure on the system where I have my micro cloud.
Thanks
Saidesh
The docs are in the source tree on CloudFoundry.org. For doing development work, that's where the best information is. Here's the doc that I used: https://github.com/cloudfoundry/oss-docs/tree/master/vcap/adding_a_system_service
One other thought, though: if you want to add a "service", then I'd suggest not using Micro Cloud Foundry, but instead setting up an Ubuntu virtual machine and installing the code base from CloudFoundry.org. Instructions for doing so can be found here: https://github.com/cloudfoundry/oss-docs/tree/master/vcap/single_and_multi_node_deployments_with_dev_setup
Hope that helps,
John