Execute REST call on all Pivotal Cloud Foundry instances - cloud-foundry

We deploy our Spring Boot microservices to PCF. Depending on the load, we might have multiple instances running. We expose a custom REST endpoint that invalidates the custom cache we use. Is there any way for this REST call to invalidate the cache in all the instances? We might have one to three instances of the application.
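One commonly cited approach (not discussed in this thread, so treat it as an assumption) is to call the invalidation endpoint once per instance, using the Gorouter's X-CF-APP-INSTANCE header to route a request to a specific instance index. A minimal Python sketch, where the route, app GUID, endpoint path, and instance count are placeholders:

```python
import requests

# Placeholders: replace with your app's route, GUID, and instance count.
APP_ROUTE = "https://my-app.example.com"
APP_GUID = "00000000-0000-0000-0000-000000000000"  # e.g. from `cf app my-app --guid`
INSTANCE_COUNT = 3  # 1 to 3 in this scenario

def invalidate_all_instances():
    """POST the cache-invalidation endpoint once per app instance.

    The X-CF-APP-INSTANCE header tells the CF Gorouter to route the
    request to a specific instance index instead of load-balancing it.
    """
    for index in range(INSTANCE_COUNT):
        resp = requests.post(
            f"{APP_ROUTE}/cache/invalidate",  # hypothetical endpoint path
            headers={"X-CF-APP-INSTANCE": f"{APP_GUID}:{index}"},
            timeout=10,
        )
        resp.raise_for_status()

if __name__ == "__main__":
    invalidate_all_instances()
```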

Related

AWS ECS service monitoring (using multiple endpoints)

We are currently deploying multiple instances (a front-end, a back-end, a database, etc.). All are deployed and configured using a CloudFormation script so we can deploy our solution quickly. One of our central servers has multiple connections to other services and, for some, we expose very simple REST endpoints that reply with 200 or 500 depending on whether the server can connect to another service or the database (a GET on /dbConnectionStatus, for example).
We would like to call those endpoints periodically and have a view of the results. A little like a health check, but without restarting the instance in case of trouble, and possibly with multiple endpoints to check per service.
Is there an AWS service that can achieve that? If not, what alternative do you suggest?
AWS CloudWatch Synthetic Monitoring can do what you want. By default it will just perform checks against your endpoints and log the success or failure, without triggering a redeployment or a restart the way a load balancer health check would.
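For illustration, the check logic a canary runs can be as small as the sketch below; the endpoint URLs are placeholders, and a real Synthetics canary would be packaged and deployed with the Synthetics runtime rather than run standalone.

```python
import logging
import requests

logging.basicConfig(level=logging.INFO)

# Hypothetical status endpoints exposed by the central server.
ENDPOINTS = [
    "https://central.example.com/dbConnectionStatus",
    "https://central.example.com/serviceAConnectionStatus",
]

def check_endpoints():
    """Hit each status endpoint and record success or failure,
    without restarting or redeploying anything."""
    failures = []
    for url in ENDPOINTS:
        try:
            resp = requests.get(url, timeout=5)
            logging.info("%s -> %s", url, resp.status_code)
            if resp.status_code != 200:
                failures.append(url)
        except requests.RequestException as exc:
            logging.error("%s -> %s", url, exc)
            failures.append(url)
    # Raising marks the canary run as failed, which CloudWatch records.
    if failures:
        raise RuntimeError(f"Unhealthy endpoints: {failures}")

if __name__ == "__main__":
    check_endpoints()
```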

GCP components to orchestrate crons running in GCE (Google Workflows?)

I need to run a data transformation pipeline composed of several scripts that live in distinct projects (separate Python repos).
I am thinking of using Compute Engine to run these scripts in VMs when needed, as I can manage the resources required.
I need to be able to orchestrate these scripts, in the sense that I want to run steps sequentially and sometimes asynchronously.
I see that GCP provides a Workflows component which seems to suit this case.
I am thinking of creating a specific project to orchestrate the executions of scripts.
However, I cannot see how I can trigger the execution of my scripts, which will not be in the same repo as the orchestrator project. From what I understand of GCE, VMs are only created when the scripts are executed and provide no persistent HTTP endpoint that could be called to trigger the execution from elsewhere.
To illustrate, let's say I have two projects, step_1 and step_2, which contain separate steps of my data transformation pipeline.
I would also have an orchestrator project whose only purpose is to trigger step_1 and step_2 sequentially in VMs with GCE. This project would not have access to the code repos of the two other projects.
What would be the best practice in this case? Should I use components other than GCE and Workflows for this, or is there a way to trigger scripts in GCE from an independent orchestration project?
One possible solution would be not to use GCE (Google Compute Engine) but instead to package your task steps as Docker containers and register them with Cloud Run. Cloud Run spins up containers on demand and charges you only for the time spent processing a request; when the request ends, you are no longer charged, so you consume resources optimally. Various events can trigger a request in Cloud Run, but the most common is a REST call. With this in mind, assume your Python code is packaged in a container fronted by a REST server (e.g. Flask). Effectively you have created "microservices". These services can then be orchestrated by Cloud Workflows. The invocation of these microservices is through REST endpoints, which can be Internet addresses with authorization in place. This allows the microservices (tasks/steps) to live in separate GCP projects while the orchestrator sees them as distinct callable endpoints.
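As a rough sketch of what one such containerized step could look like (the run_step function and the /run endpoint are illustrative placeholders, not part of the original answer):

```python
import os
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_step(params: dict) -> dict:
    """Placeholder for the actual data-transformation logic of step_1."""
    return {"rows_processed": 0, "params": params}

@app.route("/run", methods=["POST"])
def run():
    # Cloud Workflows (or any orchestrator) triggers the step with a REST call.
    result = run_step(request.get_json(silent=True) or {})
    return jsonify(result), 200

if __name__ == "__main__":
    # Cloud Run injects the port to listen on via the PORT env var.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```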
Other potential solutions to look at would be GKE (Kubernetes) and Cloud Composer (Apache Airflow).
If you DO wish to stay with Compute Engine, you can still do that using a Shared VPC. Shared VPC allows distinct projects to have network connectivity with each other, and you could use Private Catalog to have the GCE instances advertise themselves to each other. You could then have a GCE instance do the choreography or, again, orchestrate through Cloud Workflows. We would have to check whether Cloud Workflows supports parallel steps; as of the time of this post, I do not believe it does.
This is a common request: to organize automation into its own project. You can set up a service account that spans multiple projects.
See a tutorial here: https://gtseres.medium.com/using-service-accounts-across-projects-in-gcp-cf9473fef8f0
On top of that, you can also consider having Workflows in both the orchestrator and the sub-level projects. This way the orchestrator workflow can call another workflow, so each job can easily be run and encapsulated under the project that holds the code plus the workflow body, and only the triggering comes from the other project.
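As an illustration of the cross-project trigger (project, location, and workflow names are placeholders; this uses the google-cloud-workflows client library, which is one way, not the only way, to start an execution):

```python
from google.cloud.workflows import executions_v1

# Placeholders: the sub-level project, region, and workflow name.
PROJECT = "step-1-project"
LOCATION = "europe-west1"
WORKFLOW = "step-1-workflow"

def trigger_step_workflow(argument_json: str = "{}") -> str:
    """Start an execution of a workflow that lives in another project.

    The service account running this code needs the Workflows Invoker
    role on the target project for the call to be authorized.
    """
    client = executions_v1.ExecutionsClient()
    parent = client.workflow_path(PROJECT, LOCATION, WORKFLOW)
    execution = client.create_execution(
        parent=parent,
        execution=executions_v1.Execution(argument=argument_json),
    )
    return execution.name  # fully qualified execution resource name

if __name__ == "__main__":
    print(trigger_step_workflow('{"step": 1}'))
```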

APIG/Lambda for hosting multiple micro services

What is the AWS recommendation for hosting multiple microservices? Can we host all of them on the same APIG/Lambda? With that approach, I see that when we want to update the API of one service we end up deploying the APIs of all services, which means our regression testing has to cover access to all services. On the other hand, creating a separate APIG/Lambda per service means we end up with multiple resources (and multiple accounts) to manage, which can become an operational burden later on.
A microservice is autonomously developed and should be built, tested, deployed, scaled, etc. independently. There are many ways to do it and many ways to split your product into multiple services. One example pattern is having the API Gateway as the front door to all services behind it, so it would be its own service.
A Lambda usually performs one task, and a service can be composed of multiple Lambdas. I can't really see how multiple services could be executed from the same Lambda.
It can be a burden, especially if there is no proper tooling and process to manage all systems in an automated and scalable way. There are pros and cons to any architecture, but the complexity is definitely reduced for serverless applications, since all compute is managed by AWS.
AWS actually has a whitepaper that talks about microservices on AWS.
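To make the trade-off concrete, a single-purpose Lambda behind its own API Gateway route can stay as small as the sketch below (the handler, path parameter, and payload shape are illustrative, assuming the API Gateway proxy integration):

```python
import json

def handler(event, context):
    """One Lambda = one task of one microservice.

    With the API Gateway proxy integration, the event carries the HTTP
    details and the response must provide statusCode and body.
    """
    order_id = (event.get("pathParameters") or {}).get("orderId")
    if not order_id:
        return {"statusCode": 400, "body": json.dumps({"error": "orderId required"})}

    # Placeholder for the service's real logic (e.g. a DynamoDB lookup).
    order = {"orderId": order_id, "status": "SHIPPED"}
    return {"statusCode": 200, "body": json.dumps(order)}
```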

Google Cloud Platform design for a stateful application

Use case: our requirement is to run a service continuously, every few minutes. This service reads a value from Datastore and hits a public URL using that value (stateful). The service has no front end, and nobody would access it publicly. A new value is stored in Datastore as a result of the response from the URL. Exactly one server instance is required to run.
We are in need to decide one of the below for our use case.
Compute Engine (IaaS -> we don't want to maintain the infra for this simple stateful application)
Kubernetes Engine (still feels like overkill)
App Engine: PaaS -> App Engine is usually used for mobile apps, gaming, and websites, and it provides a public web URL. Is it the right choice for our use case? If we choose App Engine, is it possible to disable the public App Engine URL? Also, as one instance would be running continuously in App Engine, which is more cost effective: standard or flexible?
Cloud Functions -> event-driven (looks unsuitable for our application)
Google Cloud Scheduler -> we thought we could use Cloud Scheduler + Cloud Functions, but during an outage jobs are queued up. In our case, after an outage, only one server/instance/job should be up and running.
Thanks!
after outage, only one server/instance/job could be up and running
Is limiting Cloud Functions concurrency enough? If so, you can do this:
gcloud functions deploy FUNCTION_NAME --max-instances 1 FLAGS...
https://cloud.google.com/functions/docs/max-instances
I also recommend taking a look at Google Cloud Run. It is a serverless Docker platform, and it can be limited to a maximum of 1 instance responding to a maximum of 1 concurrent request. It would also require Cloud Scheduler making regular HTTP requests to it.
With both services configured with a max concurrency of 1, only one server/instance/job will be up and running, but after an outage jobs may be scheduled again as soon as another finishes. If this is problematic, add a lastRun datetime field to the Datastore job row and skip the run if it is too recent (see the sketch after this answer), or disable retries in Cloud Scheduler, as explained here:
Google Cloud Tasks HTTP trigger - how to disable retry
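A minimal sketch of that lastRun guard as an HTTP Cloud Function (the entity kind, key name, and five-minute threshold are assumptions):

```python
import datetime
from google.cloud import datastore

client = datastore.Client()
MIN_INTERVAL = datetime.timedelta(minutes=5)  # assumed scheduling interval

def run_job(request):
    """HTTP Cloud Function triggered by Cloud Scheduler.

    Skips the run if the job already ran too recently, so that queued
    retries after an outage do not execute the work twice.
    """
    key = client.key("Job", "url-poller")  # hypothetical kind/name
    entity = client.get(key) or datastore.Entity(key=key)

    now = datetime.datetime.now(datetime.timezone.utc)
    last_run = entity.get("lastRun")
    if last_run and now - last_run < MIN_INTERVAL:
        return ("Skipped: ran too recently", 200)

    # ... read the value from Datastore, hit the public URL, store the response ...

    entity["lastRun"] = now
    client.put(entity)
    return ("Done", 200)
```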

Is it possible to run Postgres (or any DB) with Google Cloud Run?

1. Summarize the problem
Google Cloud Run advertises that it runs "stateless containers". Is there a way to run anything at all and have it save state somewhere?
I want to run Postgres in a container, but only have it up on demand: spin up the PG container when a request is made.
The same question goes for a container that holds a REST API (web server) which connects to the PG container.
So when the web app (hosted on Firebase) makes a request to the REST API (container), the API would spin up, and then the PG instance queried by the REST API would spin up (or I could simply put both the DB and the REST API in one container).
For a dev instance, I don't want something up 24x7x365 doing mostly nothing; I just want something that spins up during development hours. I have a number of these environments, I am the only ops guy, and I want to automate this for the developers (including myself) and minimize billing.
Any advice on the best approach here would be appreciated.
2. Provide background including what you've already tried
I have created Docker containers and deployed to Cloud Run
3. Show some code
yum install buildah podman -y
4. Describe expected and actual results including any error messages
I am looking for a solution that minimizes billing for a dev environment that will include hosting and a database/REST API (the database has to be Postgres).
I'm looking for a stateful Cloud Run that will maintain the state of a database.
Cloud Run is not suitable for hosting a database. Server instances allocated for incoming requests to Cloud Run can come and go, and not all requests will go to the same instance, which means that not all clients will see the same data. That's the problem with "stateless containers".
If you want to use Cloud Run to provide database access, it would be best as a proxy to some other cloud-hosted database service. You might use it to host a REST API endpoint that accesses another database service (for example Cloud Firestore or Cloud SQL). But it doesn't make sense to host the database itself in your Docker image, since those server instances can come and go unpredictably, destroying any database state stored in each instance.
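As a sketch of that proxy pattern (connection name, database, credentials, and table are placeholders; this assumes the Cloud Run service is deployed with the Cloud SQL instance attached, which exposes a Unix socket under /cloudsql):

```python
import os
import psycopg2
from flask import Flask, jsonify

app = Flask(__name__)

def get_connection():
    """Connect to Cloud SQL for PostgreSQL over the Unix socket that
    Cloud Run mounts when the instance is attached to the service."""
    return psycopg2.connect(
        host=f"/cloudsql/{os.environ['INSTANCE_CONNECTION_NAME']}",  # project:region:instance
        dbname=os.environ.get("DB_NAME", "appdb"),
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
    )

@app.route("/items")
def list_items():
    # The REST layer stays stateless; all state lives in Cloud SQL.
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute("SELECT id, name FROM items ORDER BY id LIMIT 100")
        rows = [{"id": r[0], "name": r[1]} for r in cur.fetchall()]
    return jsonify(rows)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```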