I want to add a new system service to Micro Cloud Foundry, and I am following the steps in the document "How to add a system service to Cloud Foundry step by step" to add the echo service.
But I don't see the specified folder structure on the system where my Micro Cloud Foundry is installed.
Thanks
Saidesh
The docs are in the source tree on CloudFoundry.org; for development work, that's where the best information is. Here's the doc that I used: https://github.com/cloudfoundry/oss-docs/tree/master/vcap/adding_a_system_service
One other thought, though: if you want to add a "service", I'd suggest not using Micro Cloud Foundry, but instead setting up an Ubuntu virtual machine and installing the code base from CloudFoundry.org. Instructions for doing so can be found here: https://github.com/cloudfoundry/oss-docs/tree/master/vcap/single_and_multi_node_deployments_with_dev_setup
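For reference, a single-node dev setup on Ubuntu ends up looking roughly like the sketch below. The script path is an assumption based on the layout of the vcap repo; check the linked instructions for the current steps.
sudo apt-get update && sudo apt-get install -y git curl    # prerequisites (assumes a fresh Ubuntu box)
git clone https://github.com/cloudfoundry/vcap.git         # the open source code base
cd vcap
./dev_setup/bin/vcap_dev_setup                             # assumed location of the dev setup script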
Hope that helps,
John
I am following the Datalab quickstart tutorial here, within the GCP console. When I try to run
datalab beta create-gpu datalab-instance-name
in step 3, I receive the following error:
write() argument must be str, not bytes
Can anyone help explain why this is the case and how to fix it?
Thanks
According to the official documentation, before running a Datalab instance, the corresponding APIs should be enabled: the Google Compute Engine API and the Cloud Source Repositories API. To do so, visit Products -> APIs and Services -> Library and search for those APIs. Additionally, make sure that billing is enabled for your Google Cloud project.
You can also enable the APIs by typing the following command, which will give you a prompt to enable them:
datalab list
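Alternatively, you can enable the required APIs directly from the Cloud SDK. This assumes gcloud is installed and already pointed at the project you are using for Datalab:
gcloud services enable compute.googleapis.com      # Google Compute Engine API
gcloud services enable sourcerepo.googleapis.com   # Cloud Source Repositories API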
I did some research and found that the same issue has been reported on the GitHub page. If enabling the APIs doesn't resolve it, the best option would be to add a comment to that GitHub issue to make it more visible to the Datalab engineering team.
I am completely new to HPC and Google Cloud (I just signed up for a trial account).
My idea is to perform an RNA-seq analysis (9 paired samples, 18 FASTQ files). Mainly I want to run FastQC and do the mapping, trying different alignments, then download the BAM files and continue at home on my own computer.
First, I created an instance with 8 vCPUs and the maximum memory allowed, and I chose Ubuntu 18.04.
Then I went to the Genomics API, and the first error came up:
API solution not found with service name: genomics
How can I progress? Is it possible to do what I want in the trial period?
Regards,
Fer
Per the Google Cloud Genomics Quickstart Guide, you will need to enable billing for the account, and then you can enable the Genomics API for your project.
You can use these products with trial credits, but a billing account must still be created before the trial credits can be used.
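Once billing is linked, one way to enable the API is from Cloud Shell or the Cloud SDK; this is a minimal sketch, assuming your project is already selected in gcloud:
gcloud services enable genomics.googleapis.com   # enable the Genomics API for the current project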
Is there a way to review content/files on a flapping app instance?
I had this problem today with a Go application, and unfortunately, since the container didn't start, I couldn't check what files were there. So the only way to debug the problem (which, by the way, was related to a wrong filename) was the log stream.
Thank you,
Leszek
PS.
I am using HPE Stackato, but I assume the approach will be similar to the one in CF and PCF...
With Pivotal Cloud Foundry, you can use cf ssh to SSH into the container, or to set up port forwarding so that you can use plain ssh, or even scp and sftp, to access the container or view its file system (see the example commands below). You can read some more about it in:
The diego-ssh repository's README
The documentation on Accessing Apps with SSH
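For example, assuming an app named my-app (flag support may vary slightly with your cf CLI version):
cf ssh my-app                          # open an interactive shell in the app container
cf ssh my-app -i 1                     # target a specific app instance
cf ssh my-app -L 9000:localhost:8080   # forward local port 9000 to port 8080 in the container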
I highly doubt this functionality exists with HPE Stackato, because it uses an older Cloud Foundry architecture. Both PCF and HPE Stackato are based on open source Cloud Foundry, but PCF is on the newer Diego architecture, while HPE Stackato is still on the DEA architecture, according to the nodes listed in the Stackato Cluster Setup docs.
With the DEA architecture, you should be able to use the cf files command, which has the following usage:
NAME:
files - Print out a list of files in a directory or the contents of a specific file
USAGE:
cf files APP_NAME [PATH] [-i INSTANCE]
ALIAS:
f
OPTIONS:
-i Instance
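For example, to browse the application directory and print the stdout log of the first instance (assuming an app named my-app; the logs/ path shown is the one typically present in DEA containers):
cf files my-app app/ -i 0               # list the contents of the app directory
cf files my-app logs/stdout.log -i 0    # print the instance's stdout log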
To deal with a container that is failing to start, there is currently no out-of-the-box solution with Diego, but it has been discussed. This blog post discusses some options, including:
For the app in question, explicitly specify a start command by appending ";sleep 1d". The push command would look like this: cf push <app_name> -c "<original_command> ;sleep 1d". This will keep the container around for a day after the process within the container has exited.
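As a concrete illustration, for a Go app whose original start command is ./my-go-app (both names here are placeholders), that workaround would be:
cf push my-go-app -c "./my-go-app; sleep 1d"   # keep the container alive for a day after the app exits
With the container still running, you can then cf ssh into it and inspect the files that were actually pushed.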
I am trying to create a stand-alone companion to a customized Cloud Foundry deployment that has some additional services enabled in it, in the same way that Micro Cloud Foundry is a companion to cloudfoundry.com. I've blogged a longer description of my work to date for context, but the short question is this:
Is there a micro-cf-release available that can be extended and used to create a customized Micro Cloud Foundry? With the release train happening now, this must exist somewhere, along with a process and tooling for creating the VM. Is this in the open source somewhere?
The Capistrano script that builds the releases is:
https://github.com/cloudfoundry/micro/blob/master/build/build.cap
This workflow is experimental, but it should be possible to use a subset of the build tasks in the script and to customize cf-release before building from it.
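A rough sketch of how you might start exploring that (the clone path comes from the link above; the cap invocation just lists the available tasks rather than assuming any particular task name):
git clone https://github.com/cloudfoundry/micro.git
cd micro/build
cap -f build.cap -T   # list the build tasks defined in the Capistrano script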
Let's say that I set up my own cloud using the open source Cloud Foundry implementation provided on cloudfoundry.org. Will each app that I deploy be run as a separate user? Or is there any of VMware's virtualization technology in use here? E.g., would each app run in a separate virtual machine or anything like that? How can I configure the memory, CPU, and disk resource limits for each app?
I asked this on the mailing list. Here's the response I got:
If your DEA is configured to run in secure mode, then each app runs as its own user, and process isolation is used to protect them. We are moving toward a model of using Linux cgroups (http://en.wikipedia.org/wiki/Cgroups) when on Linux, using the warden cgroup wrappers that are already in our source tree.
VM-based isolation for a single app is pretty heavyweight, but we have long-term plans to provide this for apps that need/desire it (as opposed to the warden/cgroup work, which is a near-term project).
Since this is related to the open source Cloud Foundry code, you can try asking your question on https://groups.google.com/a/cloudfoundry.org/group/vcap-dev
You should get a quick response there!