Bind a service to two different app spaces in cloudfoundry - cloud-foundry

Is it possible to bind a service (e.g. MariaDB) to apps in different spaces? How can I achieve this if I want to use the same database in two different spaces?

Currently we don't support service instance sharing. We have already made the necessary code changes and tested them (Service Broker), but we haven't rolled them out to production because the feature is still in beta.
Sharing a service instance between spaces allows apps in different
spaces to share databases, messaging queues, and other types of
services. This eliminates the need for development teams to use
service keys and user-provided services to bind their apps to the same
service instance that was provisioned using the cf create-service
command. Sharing service instances improves security and auditing, and
provides a more intuitive user experience.
See this discussion for more info on when this feature will be generally available upstream.
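Until sharing is generally available, the service-key plus user-provided-service workaround mentioned in the quote above can be sketched roughly as follows. This is a hedged sketch: the space, service, and app names are placeholders, and the exact credentials JSON returned by the broker will differ per service.

```shell
# In space A (where the MariaDB instance lives): create a service key
# to obtain standalone credentials for the instance.
cf target -s space-a
cf create-service-key my-mariadb shared-key
cf service-key my-mariadb shared-key      # prints the credentials JSON

# In space B: wrap those credentials in a user-provided service and
# bind it to the app there (the URI below is an illustrative placeholder).
cf target -s space-b
cf create-user-provided-service my-mariadb-shared \
  -p '{"uri":"mysql://user:pass@host:3306/db"}'
cf bind-service other-app my-mariadb-shared
cf restage other-app
```

The downside of this approach, as the quote notes, is that the copied credentials bypass the broker's binding lifecycle, which is exactly what first-class instance sharing is meant to fix.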

I tried the solution from https://docs-cloudfoundry-staging.cfapps.io/devguide/services/sharing-instances.html.
If I run the first command I get the following error:
$ cf enable-feature-flag service_instance_sharing
Server error, status code: 403, error code: 10003, message: You are not authorized to perform the requested action
The second command works, and now I can see the service in space B on the dashboard.
$ cf share-service SERVICE-INSTANCE -s OTHER-SPACE [-o OTHER-ORG]
Note: if I click on the service on the dashboard it says: "This is a shared service. It is accessible only in the space it was shared from." The service is also shown greyed out.

You can use the same service instance in two different spaces/orgs.
Follow:
1) https://docs.pivotal.io/pivotalcf/2-3/services/enable-sharing.html
2) https://docs.pivotal.io/pivotalcf/2-3/devguide/services/sharing-instances.html

Related

What trace-token option for gcloud is used for?

The help description is not clear to me:
Token used to route traces of service requests for investigation of
issues.
Could you provide simple example how to use it?
I tried:
gcloud compute instances create vm3 --trace-token xyz123
I can find the string "vm3" in the logs, but not my token xyz123.
The only use of it seems to be in grep:
history | grep xyz123
The --trace-token flag is intended to be used by support agents when there is an error that is difficult to track down from the logs. The Google Cloud Platform Support agent provides a time-bound token, which expires after a specified time, and asks the user to run the command for the specific product in which the user is facing the issue. It then becomes easier for the support agent to trace the error using that --trace-token.
For example :
A user faced an error while creating a Compute Engine instance and contacted the Google Cloud Platform Support team. The support agent inspected the logs and other resources but could not find the root cause of the issue. The agent then provided a --trace-token and asked the user to run the command below with it.
Provided --trace-token: abcdefgh
Command: gcloud compute instances create my-vm --trace-token abcdefgh
After the user runs the above command, the support agent can find the error by analysing the trace in depth with the help of the --trace-token.
Please note that when the --trace-token flag is used, the content of the trace may include sensitive information such as auth tokens and the contents of any accessed files. Hence it should only be used for manual testing and not in production environments.

Difficulty with gcloud run service - migrating to espV2

I've been following the documentation (https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-run#configure_esp).
I am able to load an image (the Hello example), create endpoints, and run it as a service. However, whenever I follow the process to migrate the working services to ESPv2, I get the following:
{"message":"upstream request timeout","code":504}
I've tried this on two different services. Any thoughts/ideas?
Thanks,
Steve
I have solved this issue. I was not aware that ESPv2 is only a proxy, which needs to co-exist with, and point to, the original backend service.
As soon as I created a separate service, everything worked.
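In other words, the backend and the ESPv2 proxy end up as two separate Cloud Run services. A hedged sketch of what that looks like, where the service names, project ID, and image tag are all placeholders (the linked guide's gcloud_build_image helper produces the actual ESPv2 image tag):

```shell
# The backend stays its own Cloud Run service:
gcloud run deploy hello-backend --image gcr.io/PROJECT_ID/hello

# ESPv2 is deployed as a SECOND, separate service; its image is built
# with the Endpoints service config baked in, and it proxies incoming
# requests through to hello-backend.
gcloud run deploy hello-gateway \
  --image gcr.io/PROJECT_ID/endpoints-runtime-serverless:2.x.x-CONFIG_ID \
  --allow-unauthenticated
```

Pointing clients at the gateway service rather than at the backend is what makes the Endpoints features (and the 504 above go away) work.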

Error - functions: failed to create function dialogflowFirebaseFulfillment

When I'm trying to deploy a Firebase function from my local machine, I'm getting this error:
functions: failed to create function dialogflowFirebaseFulfillment
HTTP Error: 400, Default service account 'project-id#appspot.gserviceaccount.com' doesn't exist. Please recreate this account (for example by disabling and enabling the Cloud Functions API), or specify a different account.
and the project that I'm trying to deploy is https://github.com/actions-on-google/codelabs-nodejs/tree/master/level1-complete
It seems your service account was removed. You may want to check whether your Firebase & Actions on Google projects were removed or not.
If they were not, check the service accounts on console.cloud.google.com and make sure the accounts for all the products you are deploying with (Firebase, Dialogflow, App Engine, etc.) are the same ones you are trying to deploy with. Also, disabling and enabling the Cloud Functions API may help, as mentioned in the error.
I notice that your error has 'project-id#appspot.gserviceaccount.com'.
Shouldn't the project-id be your {project-id} from the Google action that you created, and not the literal word project-id?

How do I pass AWS credentials using a process variable containing an AWS service endpoint ID

I want to use two variables, $(Aws.Endpoint) and $(Aws.Region), in my AWS-related release tasks, and provide values for those as process variables.
Aws.Endpoint is the ID of an AWS service endpoint in VSTS. When I do this, I get
Endpoint auth data not present: ...
Has anyone who ran into this seemingly trivial issue found a solution? Otherwise I need to define the AWS endpoint directly in the task, which feels wrong, because I eventually want the release tasks to be part of a task group shared by all the environments making up the pipeline (dev, stage, prod).
Note: I see there is no Stack Overflow tag for AWS Tools for Visual Studio Team Services, and I don't have the reputation to create a new tag. If someone with enough reputation could create something like aws-tools-for-vsts (homepage), that would be grand.
No, you can’t do that with the tasks in the AWS Tools for Microsoft Visual Studio Team Services extension. You can create a custom build/release task to meet your requirement through a VSTS extension.
If you want to get the AWS endpoint in your custom build/release task, you can retrieve it through Get-VstsEndpoint or task.getEndpointAuthorization, using the GUID of the service (which you can get from the build/release log: ##[debug]awsCredentials=9a3009d2-35f3-4954-a8fa-34c3313c34f6)
For example:
$awsEndpoint = Get-VstsEndpoint -Name [GUID of service] -Require
Write-Host $awsEndpoint
foreach ($p in $awsEndpoint.Auth.Parameters) {
    Write-Host $p
}

Set up Auto Scaling Apps

Is it possible to setup auto-scaling capabilities for an app depending on the workload?
I haven't found anything useful in either the Developer Console or the docs. Is there perhaps a hidden possibility via the CLI?
Just wondering if this is possible as I'm doing a basic evaluation on Swisscom Application Cloud.
There are several open-source autoscaling projects in various states of readiness for production use, such as:
https://github.com/cloudfoundry-incubator/app-autoscaler
https://github.com/cloudfoundry-samples/cf-autoscaler
Pivotal Cloud Foundry supports auto-scaling of the applications out of the box (http://docs.pivotal.io/pivotalcf/1-8/appsman-services/autoscaler/autoscale-configuration.html)
This capability is not present at the moment, and it is not part of the (open source) Cloud Foundry platform either. Some platforms provide it, but this has not been released to the community yet!
There are various ways to do that.
As described by Anatoly, you can obviously use the "Auto Scaler" service, if it is deployed by your respective provider.
(You can figure that out by calling the feature-flags API check: https://apidocs.cloudfoundry.org/253/feature_flags/get_the_app_scaling_feature_flag.html)
Another option is writing your own small auto-scaler based on custom-defined scaling behaviours that fit your application (DIY ;)).
Get load:
First you need information about the current load of your app (i.e. memory usage, CPU usage, etc.). You can easily pull that data from the v2/apps/:guid/stats API. See details here:
https://apidocs.cloudfoundry.org/253/apps/get_detailed_stats_for_a_started_app.html
Write some magic:
Now you need to write some logic around that to verify whether the app is under heavy load. It could be CPU, memory, or other bottlenecks you extract from the stats API.
Scale up/down:
With the PUT v2/apps/:guid API you can now easily change the number of instances of your app by setting the parameter "instances" accordingly.
https://apidocs.cloudfoundry.org/253/apps/updating_an_app.html
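The three DIY steps above could be wired together roughly like this. This is a sketch, not production code: the 80%/20% thresholds are assumptions, APP_GUID is a placeholder, and the CPU value is assumed to already be a whole-number percentage parsed out of the stats response.

```shell
# Decide on a new instance count from current CPU usage (percent)
# and the current number of instances. Pure function, easy to test.
desired_instances() {
  cpu=$1; instances=$2
  if [ "$cpu" -gt 80 ]; then
    echo $((instances + 1))        # scale up under heavy load
  elif [ "$cpu" -lt 20 ] && [ "$instances" -gt 1 ]; then
    echo $((instances - 1))        # scale down when mostly idle
  else
    echo "$instances"              # leave the app alone
  fi
}

# Get load: pull stats for the app, e.g.
#   cf curl /v2/apps/$APP_GUID/stats
# parse the CPU usage out of the JSON, feed it to desired_instances,
# then scale up/down with:
#   cf curl /v2/apps/$APP_GUID -X PUT -d "{\"instances\": $new}"
```

Running this in a loop (with a cooldown between scaling actions so the app doesn't flap) is essentially what the autoscaler projects linked above do for you.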
For PCF you can take a look at this https://github.com/Pivotal-Field-Engineering/autoscaling-cli-plugin. It should give you what you are looking for.
You will need to install it via
cf install-plugin https://github.com/pivotal-cf-experimental/autoscaling-cli-plugin
and configure it using steps similar to those below:
1) Get the details of the autoscaler from your marketplace:
cf m | grep app-autoscaler
2) Create the autoscaler service instance using the service & plan from above:
cf create-service <service> <plan> myAutoScaler
3) Bind the service to your app (or you can do this via your deployment manifest):
cf bind-service myApp myAutoScaler
4) Configure your scaling parameters:
cf configure-autoscaling --min-threshold ## --max-threshold ## --max-instances # --min-instances # myApp myAutoScaler