I want to schedule a restart for an app. Is there any way in PCF to restart applications automatically after a specific time interval?
I am not sure there is anything within PCF itself that can execute cf commands. My suggestion is to configure a CI/CD job (a Jenkins job, for example) that executes cf restart <app_name> at scheduled intervals.
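For example, here is a minimal sketch using cron on any box with the cf CLI installed (the API endpoint, credentials, org, space, and app name are all placeholders):
0 3 * * * cf login -a https://api.example.com -u deploy-bot -p "$CF_PASSWORD" -o my-org -s my-space && cf restart my-app
A Jenkins job with a cron trigger would run the same commands; the point is that anything able to run the cf CLI on a schedule can do this.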
I've been working on a scheduler service that you can register in Cloud Foundry; its service plans include restarting apps. There are also other service plans, such as triggering an arbitrary HTTP endpoint. I'd be happy if you tried it out and gave me feedback. Check it out on GitHub: https://github.com/grimmpp/cloud-foundry-resource-scheduler
I've also started to describe what it provides and how it can be installed and configured. To use it, you just need to create service instances in the Cloud Foundry marketplace and specify some parameters, e.g. how often or when it should be called. ...
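If you want to try it, creating an instance would look roughly like this; the service name, plan, and parameter JSON below are illustrative placeholders, since the actual catalog and options come from the project's README:
cf create-service resource-scheduler restart-app my-restart-schedule -c '{"frequency": "24h"}'
Passing -c with a JSON string is the standard cf mechanism for handing arbitrary parameters to a broker.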
I am running Cloud Foundry on a Kubernetes cluster on the Digital Ocean platform. I am able to deploy apps successfully via cf push APP_NAME without a database. Now I would like to run my Django app with a PostgreSQL database. When I run cf marketplace from the terminal, it does not show me the list of offerings/services available in the marketplace.
cf marketplace
Output
Getting services from marketplace in org abc-cforg / space abc-cfspace as admin...
OK
No service offerings found
Output from cf version
cf version 6.53.0+8e2b70a4a.2020-10-01
I have tried with cf CLI version 7 as well, but no luck.
I am quoting from this doc -
No problem. The Cloud Foundry marketplace is a collection of services that can be
provisioned on demand. Your marketplace may differ depending on the Cloud Foundry
distribution you are using.
What should I be doing now to get the list of service offerings in the marketplace? I googled for quite some time but could not find a fix.
I have a Pivotal account as well, but that is already deprecated per this link.
By default, there will not be any services in the marketplace. As a platform operator, you'll need to add the services that you want to expose to your Cloud Foundry users.
If you look at a public Cloud Foundry offering, you can see that this has been done for you: when you run cf m, you get the list of services that the public provider and their operations team set up for you.
When you run your own CF, that's on you to set up.
There are a couple of things you can do:
The easy option is to use user-provided services. These are not set up through the marketplace, so you simply ignore that command altogether.
You would instead procure your service somewhere else. You mentioned using Digital Ocean, so you could procure one of their managed databases. Once you have your database credentials, you would run cf cups my-service -p username,password,host (these are free-form field names; enter whatever makes sense for your service) and, when prompted, enter the info. This creates a user-provided service, which can be bound to your apps and works just like a service you'd acquire through the marketplace.
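For example, assuming a Digital Ocean managed PostgreSQL database and a Django app named my-django-app (both placeholders), the flow would be roughly:
cf cups my-postgres -p "username,password,host"
cf bind-service my-django-app my-postgres
cf restage my-django-app
cf cups prompts for each named field interactively; after binding and restaging, the credentials show up in the app's VCAP_SERVICES environment variable.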
The more involved option requires deploying more infrastructure to run a service broker. The service broker talks to Cloud Controller and provides a catalog of available services. Those services are what Cloud Controller displays when you run cf m.
There are some community-provided brokers and commercial ones as well. I think a lot of these brokers also assume a BOSH deployment rather than Kubernetes, so read the instructions carefully to see if that's a requirement.
From a quick scan, here are a few that seem like they should work (see the registration example after the list):
https://github.com/cloudfoundry-community/cf-containers-broker
https://github.com/cloudfoundry-community/s3-broker
https://github.com/cloudfoundry-community/rds-broker
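Once you have a broker deployed and reachable, registering it is a standard pair of admin commands; the broker name, credentials, URL, and service offering below are placeholders:
cf create-service-broker my-broker broker-user broker-pass https://my-broker.example.com
cf enable-service-access postgresql
After that, cf marketplace should list whatever the broker's catalog advertises.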
I'm working on a Cloud Run docker application that handles a few long-running data integration processes.
I'm struggling to come up with a way to locally run/test my submissions to Cloud Tasks before actually deploying the container to Cloud Run.
Is there any way to do this?
A local emulator for Cloud Tasks is not available yet; in some cases you can substitute Pub/Sub for Cloud Tasks.
Also, consider using non-Google solutions such as Cloud-Tasks-In-Process-Emulator, gcloud-tasks-emulator 0.5.1, or Cloud tasks emulator.
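As a sketch with the gcloud-tasks-emulator package (a community project, so check its README for the current flags):
pip install gcloud-tasks-emulator
gcloud-tasks-emulator start --port=9090
Your code would then point its Cloud Tasks client at localhost:9090 instead of the real endpoint.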
As I understand it, you want to test Cloud Tasks locally. Yes, it is possible using ngrok: ngrok exposes your local application at a public URL, and Cloud Tasks needs a public URL for the task handler.
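A typical flow is to run your handler locally and expose it; the port here is just an example:
ngrok http 8080
ngrok prints a public https URL; use that as the handler URL when creating tasks, and Cloud Tasks will deliver to your local process.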
I am building a Python app in Google Cloud. It involves delayed execution of tasks.
It seems Cloud Tasks is limited to App Engine.
Can we use Cloud Tasks from GCE VMs or from containers running in GCP or other clouds' VMs?
Even the Google docs only cover push queues with App Engine.
Does Cloud Tasks support pull queues?
[EDIT]
I tried looking at their cloud discovery files. v2beta1 has pull references but v2 does not. I believe GCP doesn't want to support this in the future :-(.
Cloud Tasks does not support pull queues, but a beta feature for HTTP targets just launched, which allows Cloud Tasks to push tasks to any HTTP endpoint. There's even functionality for Cloud Tasks to include an authentication token based on an associated service account: https://cloud.google.com/tasks/docs/creating-http-target-tasks
This would allow you to push to GCE, or really any service that can operate as a webhook. If you were to use the new Cloud Run Beta product, verifying these tokens is handled for you.
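As a rough sketch from the CLI (the queue, URL, and service account are placeholders, and the command lived under gcloud beta while the feature was in Beta):
gcloud beta tasks create-http-task --queue=my-queue --url=https://my-vm.example.com/handler --oidc-service-account-email=tasks-invoker@my-project.iam.gserviceaccount.com
The receiving endpoint can then verify the OIDC token in the Authorization header before doing the work.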
Cloud Pub/Sub provides support for pull-based processing.
I wrote a small plugin for Apache Airflow which runs fine on my local deployment. However, when I use Google Composer, the user interface hangs and becomes unresponsive. Is there any way to restart the webserver in Google Composer?
(Note: This answer is currently more suggestive than finalized.)
As far as restarting the webserver goes...
What doesn't work:
I reviewed Airflow Web Interface in the docs, which describes using the webserver but not accessing it from a CLI or restarting it.
While you can also run Airflow CLI commands on Composer, I don't see a command for restarting the webserver in the Airflow CLI today.
I checked the gcloud CLI in the Google Cloud SDK but didn't find a restart related command.
Here are a few ideas that may work for restarting the Airflow webserver on Composer:
In the gcloud CLI, there's an update command to change environment properties. I would assume it restarts the scheduler and webserver (in new containers) after you change one of these, in order to apply the new setting. You could set an arbitrary environment variable to trigger this, though just running the update command with no changes may work.
gcloud beta composer environments update ...
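Concretely, setting (or bumping) a throwaway environment variable should force new containers; the environment name, location, and variable here are arbitrary placeholders:
gcloud beta composer environments update my-env --location us-central1 --update-env-variables=RESTART_NONCE=1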
Alternatively, you can update environment properties excluding environment variables in the GCP Console.
I think re-running the plugins import command would cause a scheduler/webserver restart as well.
gcloud beta composer environments storage plugins import ...
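A concrete invocation (the environment name, location, and plugin path are placeholders) would be:
gcloud beta composer environments storage plugins import --environment my-env --location us-central1 --source ./plugins/my_plugin.py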
In a more advanced setup, Composer supports deploying a self-managed Airflow webserver. Following the linked guide, you can connect to your Composer instance's GKE cluster, create Kubernetes deployment and service configuration files for the webserver, and deploy both with kubectl create. Then you could run kubectl replace or kubectl delete on the pod to trigger a fresh start.
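The restart itself would then be a standard pod deletion, so the Deployment recreates it; the label selector below assumes you labeled your webserver pods that way:
kubectl delete pod -l app=airflow-webserver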
This all feels like a bit much, so hopefully documentation or a simpler way to restart the webserver emerges to supersede these workarounds.
I am evaluating Stackdriver from GCP for logging across multiple microservices.
Some of these services are deployed on premises and some on AWS/GCP.
Our services are either .NET- or Node.js-based apps, and we are invested in winston for Node.js and NLog in .NET.
I was looking at integrating our on-premises Node.js application with Stackdriver Logging. Looking at the documentation at https://cloud.google.com/logging/docs/setup/nodejs, it seems we need to install the agent on any machine other than Google Compute instances. Is this correct?
If we need to install the agent, is there any way I can test the logging during development? The development environment is either Windows 10 or Mac.
There's a new option for ingesting logs (and metrics) with Stackdriver, as most of the agents for non-Google environments look like they are being deprecated: https://cloud.google.com/stackdriver/docs/deprecations/third-party-apps
A Google post on logging on-prem resources with Stackdriver and Blue Medora:
https://cloud.google.com/solutions/logging-on-premises-resources-with-stackdriver-and-blue-medora
For logs, you still need to install an agent on each box to collect them; it's a BindPlane agent, not a Google agent.
For Node.js, you can use the @google-cloud/logging-winston and @google-cloud/logging-bunyan modules from anywhere (on-prem, AWS, GCP, etc.). You will need to provide projectId and auth credentials manually if not running on GCP. Instructions on how to set these up are available in the linked pages.
When running on GCP, we figure out the exact environment (App Engine, Compute Engine, etc.) automatically, and the logs should show up under those resources in the Logging UI. If you are going to use the modules from your development machines, we will report the logs against the 'global' resource by default. You can customize this by passing a specific resource descriptor yourself.
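As a minimal setup sketch for a non-GCP machine (the key path is a placeholder), install the module and point it at a service account key via Application Default Credentials:
npm install @google-cloud/logging-winston
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
With that in place, the winston transport can pick up credentials and the projectId from the environment, or you can pass them explicitly in its constructor options.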
Let us know if you run into any trouble.
I tried setting this up on my local k8s cluster by following this: https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/
But I couldn't get it to work; the fluentd-gcp-v2.0-qhqzt pod keeps crashing.
Also, the page mentions that there are multiple issues with Stackdriver logging if you DON'T use it on Google GKE. See the screenshot.
I think Google is trying to lock you into GKE.