How to force terminate a Cloud Foundry app - cloud-foundry

I have an app running in Cloud Foundry which has been working fine for months but has suddenly stopped responding. The errors in the log all relate to connecting to a Postgres database service. I don't really know how to administer this sort of thing in CF, so I decided to just remove the app and service and redeploy from scratch.
However I can't remove the app or service - all requests are blocked due to an in progress operation between the app and service.
For example:
Job (ac7753ee-19e8-4b7a-9f39-85284167fb7d) failed: The service broker rejected the request due to an operation being in progress for the service binding.
So I can't delete the app because it is bound to the service, and I can't unbind the app and service because there is an operation in progress.
What can I do?

For now, to get unstuck, you could try cf purge-service-instance instead; this removes the service instance without making a call to the broker.
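A minimal sketch of the full recovery sequence might look like this. The app, service, and plan names below are placeholders, and note that purging only removes Cloud Foundry's record of the instance; whatever state the broker holds is left untouched:

```shell
# Remove the stuck service instance without contacting the broker.
# (Placeholder name: replace my-postgres with your instance name.)
cf purge-service-instance my-postgres

# With the instance purged, the binding no longer blocks deletion,
# so the app can be removed and redeployed from scratch.
cf delete my-app -f

# Re-create the service and push the app again.
# (The service offering and plan names here are assumptions;
# check "cf marketplace" for the ones your platform actually offers.)
cf create-service postgres standard my-postgres
cf push my-app
cf bind-service my-app my-postgres
cf restage my-app
```

Since the broker was never told about the purge, any orphaned database left on the broker's side may need to be cleaned up by your platform operator.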

Related

Can I check health of a Cloud Run Services without triggering its cold start?

We have a WebSocket connection to a Cloud Run service that is preceded by a health check against that same service.
The issue is that every health check either keeps the Cloud Run service alive or cold-starts it. We only want to connect if the service is already up and running.
What we need, therefore, is a way to check whether the Cloud Run service is up without waking it. A plain request does not help, since it will always wake the service.
Any ideas how to make this happen? We cannot use the Cloud API for this case, since we cannot ship credentials in the clients.

AWS ECS service monitoring (using multiple endpoints)

We currently deploy multiple instances (a front-end, a back-end, a database, etc.). All are deployed and configured with a CloudFormation script so we can stand up our solution quickly. One of our central servers has connections to several other services, and for some of these we expose very simple REST endpoints that reply with 200 or 500 depending on whether the server can reach another service or the database (a GET on /dbConnectionStatus, for example).
We would like to call these endpoints periodically and have a consolidated view of the results: a bit like an ECS health check, but without restarting the instance when something fails, and with the ability to check multiple endpoints per service.
Is there an AWS service that can achieve that? If not what alternative do you suggest?
AWS CloudWatch Synthetic Monitoring can do what you want. By default it simply performs checks against your endpoints and logs the success or failure, without triggering a redeployment or restart the way a load balancer health check would.
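If you want something lighter than a Synthetics canary, the same idea can be sketched with plain shell publishing a custom CloudWatch metric you can alarm and dashboard on. The host name, endpoint paths, and metric namespace below are placeholders:

```shell
#!/bin/sh
# Poll each status endpoint and publish the result as a custom
# CloudWatch metric (1 = healthy, 0 = unhealthy). The host,
# endpoint list, and namespace are placeholders for this sketch.
for endpoint in /dbConnectionStatus /cacheConnectionStatus; do
  status=$(curl -s -o /dev/null -w '%{http_code}' \
    "https://central.example.com$endpoint")
  if [ "$status" = "200" ]; then value=1; else value=0; fi
  aws cloudwatch put-metric-data \
    --namespace "Custom/ServiceHealth" \
    --metric-name "EndpointUp" \
    --dimensions "Endpoint=$endpoint" \
    --value "$value"
done
```

Run it from cron or a scheduled ECS task, then create one CloudWatch alarm per Endpoint dimension; failing endpoints show up in the metrics without any instance being restarted.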

How Cloud Run behaves with things that are running in my application during the deploy of a new service revision?

I'm migrating a PHP web application that currently runs on Compute Engine to Cloud Run. Currently, this platform schedules the execution of some PHP scripts in the form of cron jobs.
Let's say I plan to use Cloud Scheduler to trigger some of these PHP scripts via requests after migrating to Cloud Run. My question is about how Cloud Run behaves if one of these scripts happens to be running when a new service revision is deployed: would the deploy kill the script execution (triggered by the Cloud Scheduler request) in progress?
Also, I would like to know how Cloud Run behaves with (any) requests in progress during a new service revision deploy. Maybe both of my questions are related/connected.
(Maybe I am wrong when I think that the deploy of a new revision will immediately kill everything running and every request in progress to the service.)
When you deploy a new revision, new requests are routed to the new revision. Requests that are already running continue on the existing instances of the previous revision. When an instance of the old revision has no active requests left, it is deleted after a while (about 15 minutes today).
So yes, your two questions are related. One remark, though: if you run a PHP script with Cloud Scheduler, the HTTP request you perform must stay active until the end of the script. If your PHP script sends its response before processing finishes, firstly the CPU will be throttled and your script will be very, very slow; and secondly, Cloud Run will consider the instance inactive (not serving an active request) and may delete it at any time.
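The point about keeping the request open pairs with Cloud Scheduler's attempt deadline, which bounds how long Scheduler waits for the script's response. A hedged sketch, where the job name, schedule, and service URI are placeholders for your own:

```shell
# Hypothetical Scheduler job: adjust name, schedule, and URI
# to match your Cloud Run service and script path.
gcloud scheduler jobs create http nightly-report \
  --schedule="0 3 * * *" \
  --uri="https://my-service-abc123-uc.a.run.app/cron/report.php" \
  --http-method=GET \
  --attempt-deadline=600s
```

Make sure the Cloud Run request timeout (set via --timeout when deploying the service) is at least as long as the script's worst-case runtime, since the script must finish before the response is sent.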

Stop Server Side GTM

I am trying to stop server-side GTM. I set it up as a test to understand the process, but I am still being billed. What are the steps to stop this?
So far I have:
Removed the transport URL from the GA tag
Paused the GA tag in the client side GTM
Removed the 4 A and 4 AAAA records from my DNS
Deleted the mapping from the Cloud account under App Engine > Settings
Disabled the application as well
You can find here how to stop it from serving and incurring billing charges related to serving your app:
https://cloud.google.com/appengine/docs/managing-costs#understanding_billing
Note that you may still incur charges from other Google Cloud products.
Google Tag Manager has a dependency on App Engine and it requires the creation of a Google Cloud Platform project.
To stop charges from accruing to an App Engine application you can either disable the application (although some fees related to Cloud Logging, Cloud Storage, or Cloud Datastore may continue to be charged), disable billing, or, my recommendation, completely shut down the project associated with your tagging server. Bear in mind that after you shut down a project, all resources associated with it are fully deleted after around 30 days and cannot be recovered.
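The shutdown route can be done from the CLI. The project ID below is a placeholder; deletion is scheduled rather than immediate, which is where the roughly 30-day window comes from:

```shell
# Placeholder project ID: replace with your tagging-server project.
# This schedules the project for deletion; billing stops accruing
# for its resources once shutdown completes.
gcloud projects delete my-gtm-server-project

# Within the grace period (about 30 days) the deletion
# can still be reverted if you change your mind:
gcloud projects undelete my-gtm-server-project
```

Once the grace period ends, the project and everything in it (the App Engine app included) are gone for good, so double-check the project ID before running the delete.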

Retrieving Queued Messages on Remote Federation Upstream in Rabbitmq

I've recently been working on setting up RabbitMQ clusters on Google Computer Engine and AWS connected via federation. So far I've been able to get that working fine although I've encountered an issue that I can't figure out how to solve.
At a certain point, I wanted to see what would happen if I deleted all the VMs in the GCE cluster and then re-created them. I was able to bring the cluster back up, but the previously federated exchange on the AWS cluster continued to hold the queued messages, even after a new federation link was created from GCE to AWS. New messages on the AWS cluster were retrieved over the federation link, but the messages queued before the rebuild were not.
How could I get these old messages to also be sent onto the new federation link?
If the messages are already queued on the remote server, then you probably want to use the Shovel plugin to move them: https://www.rabbitmq.com/shovel.html
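As a sketch, a dynamic shovel declared on the AWS cluster can drain the stranded queue over to the rebuilt GCE cluster. The URIs, credentials, and queue names below are placeholders, and the plugin must be enabled first:

```shell
# Enable the shovel plugin (on the node where the shovel will run).
rabbitmq-plugins enable rabbitmq_shovel

# Declare a dynamic shovel that moves messages from the old queue
# on the AWS cluster to the same-named queue on the GCE cluster.
# Hosts, credentials, and queue names are placeholders.
rabbitmqctl set_parameter shovel drain-old-messages \
  '{"src-protocol": "amqp091",
    "src-uri": "amqp://user:pass@aws-cluster-host",
    "src-queue": "stuck-queue",
    "dest-protocol": "amqp091",
    "dest-uri": "amqp://user:pass@gce-cluster-host",
    "dest-queue": "stuck-queue"}'

# Watch progress, then remove the shovel once the queue is drained.
rabbitmqctl shovel_status
rabbitmqctl clear_parameter shovel drain-old-messages
```

Unlike federation, a shovel simply consumes from the source queue and republishes at the destination, which is why it picks up the old backlog that the federation link left behind.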