I have a Redis instance running in GCP Memorystore, and I have enabled notify-keyspace-events on this instance. My ultimate goal is to publish messages from my Redis instance when certain keys expire, and on these events, make a call to a service I have on Cloud Run with the data of the key as input.
How should I go about building this? The only way I can think of is to have a thread always running in my Cloud Run instance, checking for new messages on the Redis Pub/Sub channels. I'm afraid this won't work, though, as Cloud Run does not allow background tasks.
What I really want is a way to generate a POST request to my Cloud Run service whenever the Redis message is published, but I could not find a way to do this yet.
What I know can be integrated together so far is Cloud Pub/Sub with Cloud Run, as described in these guides here and here.
What I don't know for sure is whether you can publish events from your GCP Memorystore instance to a Pub/Sub topic directly. But if you are able to read in real time which Redis keys expire, you could manually publish these events as messages to your Pub/Sub topic, and then have your Cloud Run service subscribe to the same topic to receive the messages from it.
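For illustration, a small bridge process along these lines could do that. This is only a sketch, assuming the redis and google-cloud-pubsub Python packages, a placeholder Memorystore IP, and an already-created topic; note that it needs to run somewhere that allows a long-lived process (e.g. a GCE VM or a GKE pod), not on Cloud Run itself:

import redis
from google.cloud import pubsub_v1

# hypothetical names: replace with your own project, topic and Memorystore IP
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "redis-key-expirations")

r = redis.Redis(host="10.0.0.3", port=6379)
p = r.pubsub()
# fires once for every key that expires in database 0
p.psubscribe("__keyevent@0__:expired")

for message in p.listen():
    if message["type"] == "pmessage":
        # only the key name survives expiry; the value is already gone, so any
        # payload you need downstream must be stored under a separate key
        publisher.publish(topic_path, message["data"])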
Another thing you could consider is using background Cloud Functions (i.e. Cloud Functions triggered by Pub/Sub events rather than HTTP).
As for sending a direct POST request to your Cloud Run service, the following documentation could be useful for you.
Related
I'd like to call a Cloud Run app from inside a Cloud Function multiple times, given some logic. I've googled this quite a lot and haven't found good solutions. Is this supported?
I've seen the Workflows tutorials, but AFAIK they are meant to pass messages in series between different GCP services. My Cloud Function runs on a schedule every minute, and it would only need to call the Cloud Run app a few times per day, given some event. I've thought about having the entire app run in Cloud Run instead of the Cloud Function, but I think having it all in Cloud Run would be more expensive than running the Cloud Function.
I went through your question, and I have an alternative in mind that may work for you: you can use Cloud Scheduler to securely trigger a Cloud Run service asynchronously on a schedule.
First, create a service account to associate with Cloud Scheduler, and give that service account permission to invoke your Cloud Run service, i.e. grant it the Cloud Run Invoker role. (You can use an existing service account to represent Cloud Scheduler, or you can create a new one for this purpose.)
Next, create a Cloud Scheduler job that invokes your service at specified times. Specify the frequency, or job interval, at which the job is to run, using a cron-style configuration string, and the fully qualified URL of your Cloud Run service, for example https://myservice-abcdef-uc.a.run.app. The job will send requests to this URL.
Next, specify the HTTP method; it must match what your previously deployed Cloud Run service is expecting. When you deploy the service that Cloud Scheduler will invoke, make sure you do not allow unauthenticated invocations. Please go through this documentation for the details and try to implement the steps.
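If you prefer to script the job instead of clicking through the console, a sketch using the google-cloud-scheduler Python client could look like this (project, region, schedule and service account are placeholders):

from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = client.common_location_path("my-project", "us-central1")

job = scheduler_v1.Job(
    name=f"{parent}/jobs/invoke-my-service",
    schedule="*/10 * * * *",  # the cron-style frequency string
    http_target=scheduler_v1.HttpTarget(
        uri="https://myservice-abcdef-uc.a.run.app",
        http_method=scheduler_v1.HttpMethod.POST,
        # an OIDC token minted for the service account that holds Cloud Run Invoker
        oidc_token=scheduler_v1.OidcToken(
            service_account_email="scheduler-sa@my-project.iam.gserviceaccount.com"
        ),
    ),
)
client.create_job(parent=parent, job=job)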
Back to your question: yes, it's possible to call your Cloud Run service from inside a Cloud Function. Here, your Cloud Run service is called from another backend service, i.e. the Cloud Function, directly (synchronously) over HTTP, using its endpoint URL. For this use case, you should make sure that each service is only able to make requests to the specific services it needs.
Go through this documentation suggested by @John Hanley, as it provides the steps you need to follow.
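For reference, a minimal sketch of that synchronous call from a Python Cloud Function, using an identity token fetched for the Cloud Run URL (the URL is the example one from above; the function's service account needs the Cloud Run Invoker role):

import google.auth.transport.requests
import google.oauth2.id_token
import requests

RUN_URL = "https://myservice-abcdef-uc.a.run.app"

def my_function(request):
    auth_req = google.auth.transport.requests.Request()
    # mint an ID token whose audience is the Cloud Run service URL
    token = google.oauth2.id_token.fetch_id_token(auth_req, RUN_URL)
    resp = requests.post(
        RUN_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"triggered_by": "cloud-function"},  # hypothetical payload
    )
    return resp.text, resp.status_code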
I want to create a function that receives an HTTP request with text data and sends back voice data in response.
Specifically, I want to run the TTS model tacotron2, from the following URL, in the cloud and receive the resulting voice:
https://github.com/NVIDIA/tacotron2
Is it possible to run a machine learning model using Google Cloud Run and receive binary audio data back?
Cloud Run (fully managed) doesn't support GPUs, so I would say no, unless the model can run (slowly) in a CPU-only environment.
The alternative is to use Cloud Run for Anthos, on your own GKE cluster. In this case, you can choose whatever node pool configuration you prefer, including GPUs. But it's not serverless: you have to manage the cluster yourself and pay for it full time (it doesn't scale to 0 like fully managed Cloud Run).
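If the model does turn out to run on CPU, returning binary audio from Cloud Run is straightforward. A minimal sketch, where synthesize() is a hypothetical stand-in for the tacotron2 (plus vocoder) inference call:

from flask import Flask, Response, request

app = Flask(__name__)

def synthesize(text: str) -> bytes:
    # hypothetical: run tacotron2 + a vocoder on CPU and return WAV bytes
    raise NotImplementedError

@app.route("/tts", methods=["POST"])
def tts():
    text = request.get_json()["text"]
    return Response(synthesize(text), mimetype="audio/wav")

if __name__ == "__main__":
    # Cloud Run sends traffic to the port in $PORT (8080 by default)
    app.run(host="0.0.0.0", port=8080)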
Is there a way to get a post-deployment mail in Kubernetes on GCP/AWS?
It has become harder to maintain deployments on Kubernetes as the deployment team has grown. A post-deployment mail service would ease the process, especially since it would also say who applied the deployment.
You could try watching deployment events using https://github.com/bitnami-labs/kubewatch together with a webhook handler.
Another option is implementing a customized solution with the Kubernetes API, for instance in Python (https://github.com/kubernetes-client/python), and running it as a separate notification pod in your cluster; see the sketch after this list.
A third option is to manage deployments in a CI/CD pipeline where the actual deployment execution step is of the "approval" type; you can then see which user approved it, and the next step in the pipeline after approval could be the email notification.
Approval in CircleCI: https://circleci.com/docs/2.0/workflows/#holding-a-workflow-for-a-manual-approval
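As a sketch of the second option, assuming the official kubernetes Python client, in-cluster credentials, and a placeholder SMTP relay (note that watch events don't tell you who applied the change; for that, see the audit-based answer below):

import smtplib
from email.message import EmailMessage

from kubernetes import client, config, watch

config.load_incluster_config()  # use config.load_kube_config() outside the cluster
apps = client.AppsV1Api()

w = watch.Watch()
for event in w.stream(apps.list_deployment_for_all_namespaces):
    dep = event["object"]
    msg = EmailMessage()
    msg["Subject"] = f"Deployment {event['type']}: {dep.metadata.namespace}/{dep.metadata.name}"
    msg["From"] = "k8s-notify@example.com"  # placeholder addresses
    msg["To"] = "team@example.com"
    msg.set_content(
        f"Images: {[c.image for c in dep.spec.template.spec.containers]}"
    )
    with smtplib.SMTP("smtp.example.com") as s:  # placeholder relay
        s.send_message(msg)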
I don't think such a feature is built into Kubernetes.
There is a watch mechanism, though, which you could use. Run the following GET query:
https://<api-server-url>/apis/apps/v1/namespaces/<namespace>/deployments?watch=true
The connection will not close, and you'll get a "notification" about each deployment change; check the status fields. Then you can send the mail or do something else.
You'll need to pass an authorization token to gain access to the API server. If you have kubectl set up, you can run a local proxy instead, which doesn't need the token: kubectl proxy.
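For example, through a local kubectl proxy (default port 8001) the watch stream can be consumed like this (a sketch; the namespace and printed fields are placeholders):

import json
import requests

url = "http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments"
with requests.get(url, params={"watch": "true"}, stream=True) as resp:
    # the API server keeps the connection open and sends one JSON event per line
    for line in resp.iter_lines():
        if not line:
            continue
        event = json.loads(line)
        dep = event["object"]
        print(event["type"], dep["metadata"]["name"], dep["status"].get("conditions"))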
You can attach handlers to container lifecycle events. Kubernetes supports preStop and postStart hooks; Kubernetes sends the postStart event immediately after the container is started. Here is a snippet of the pod template from a deployment manifest:
spec:
  containers:
  - name: <******>
    image: <******>
    lifecycle:
      postStart:
        exec:
          command: [********]
Considering GCP, one option is to create a filter in Stackdriver Logging that captures the entries about your deployment finishing, and then use the CREATE METRIC option, also in Stackdriver Logging, to turn that filter into a logs-based metric.
With the metric created, use Stackdriver Monitoring to create an alert that sends e-mails. More details in the official documentation.
It looks like no one has mentioned the "native tool" Kubernetes provides for this yet.
Please note that there is a concept of audit in Kubernetes.
It provides a security-relevant, chronological set of records documenting the sequence of activities that have affected the system, whether initiated by individual users, administrators, or other components of the system.
Each request, at each stage of its execution, generates an event, which is then pre-processed according to a certain policy and processed by a certain backend.
That allows the cluster administrator to answer the following questions:
what happened?
when did it happen?
who initiated it?
on what did it happen?
where was it observed?
from where was it initiated?
to where was it going?
The administrator can specify which events should be recorded and what data they should include with the help of audit policies.
There are a few backends that persist audit events to external storage:
Log backend, which writes events to disk
Webhook backend, which sends events to an external API
Dynamic backend, which configures webhook backends through an AuditSink API object.
If you use the log backend, it is possible to collect the data with tools such as fluentd. With that data you can achieve much more than just a post-deployment mail in Kubernetes.
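For instance, if you point the webhook backend at a small HTTP service, you can turn Deployment writes into mail, including who made them. A sketch, assuming Flask, an API server started with --audit-webhook-config-file pointing at this endpoint, and a hypothetical send_mail() helper:

from flask import Flask, request

app = Flask(__name__)

def send_mail(subject: str, body: str) -> None:
    # hypothetical helper: wire up SMTP or the mail API of your choice
    pass

@app.route("/audit", methods=["POST"])
def audit():
    # the API server POSTs an EventList of audit events
    for event in request.get_json().get("items", []):
        ref = event.get("objectRef", {})
        if ref.get("resource") == "deployments" and event.get("verb") in ("create", "update", "patch"):
            user = event.get("user", {}).get("username", "unknown")
            send_mail(
                subject=f"Deployment {event['verb']} by {user}",
                body=f"{ref.get('namespace')}/{ref.get('name')} at {event.get('stageTimestamp')}",
            )
    return "", 200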
Hope that helps!
I'm moving stuff from Azure to AWS, and the only thing I'm really going to miss is WebJobs, where I can schedule command-line jobs.
I know I can achieve somewhat the same with Task Scheduler or Windows services, but I also like the way WebJobs shows logs and all of that...
Does anybody know a tool like that which can run Windows command-line apps on AWS?
Check out AWS Lambda. It is a new service from AWS.
AWS Lambda is a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.
Lambda vs WebJobs
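If your command-line job fits in a function, a scheduled Lambda is the closest WebJobs analogue. A minimal Python handler might look like this (the schedule itself lives in a separate CloudWatch Events rule, e.g. rate(5 minutes)):

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # the work you would have put in a WebJob goes here;
    # logs end up in CloudWatch Logs, similar to the WebJobs log view
    logger.info("Scheduled run triggered by %s", event.get("source"))
    return {"status": "ok"}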
I've recently been working on setting up RabbitMQ clusters on Google Compute Engine and AWS, connected via federation. So far I've been able to get that working fine, although I've encountered an issue that I can't figure out how to solve.
At a certain point, I wanted to see what would happen if I deleted all the VMs in the GCE cluster and then re-created them. I was able to bring the cluster back up, but the AWS cluster's exchange that was previously federated continued to hold the queued messages, even after a new federation link was created from GCE to AWS. All new messages on the AWS cluster were being retrieved via the federation link, but the old queued messages were not being sent as well.
How could I get these old messages to also be sent onto the new federation link?
If the messages are already queued on the remote server, then you probably want to use the Shovel plugin to solve this problem: https://www.rabbitmq.com/shovel.html
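A dynamic shovel can be created without restarting anything, for example through the management HTTP API. A sketch with placeholder hosts, credentials and queue name (requires the rabbitmq_shovel and rabbitmq_shovel_management plugins):

import requests

shovel = {
    "value": {
        "src-uri": "amqp://user:pass@aws-broker",   # where the old messages sit
        "src-queue": "stuck-queue",
        "dest-uri": "amqp://user:pass@gce-broker",  # where they should go
        "dest-queue": "stuck-queue",
        "src-delete-after": "queue-length",         # stop once the backlog is drained
    }
}
resp = requests.put(
    "http://aws-broker:15672/api/parameters/shovel/%2F/drain-old-messages",
    json=shovel,
    auth=("user", "pass"),
)
resp.raise_for_status()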