How to monitor resource changes in Google Cloud: detecting GCP resource changes, automating alerts based on those changes, and invoking an action.
There is a complete Operations Suite (formerly known as Stackdriver) on Google Cloud Platform that provides the features mentioned above.
Official documentation: https://cloud.google.com/products/operations
Cloud Monitoring and Cloud Alerting can be used to send alerts based on different events. Alerts can be published to different channels such as email, Slack, etc.
Related
We use alert policies in order to monitor issues with some of our services. For a small subset of our alert policies, we'd like to react in an automated way to incidents raised by one of them. For this purpose, we created a Pub/Sub notification channel, topic, and push subscription that sends alert notifications to a Cloud Run service. The Cloud Run service helps us reduce manual operations effort. In some cases, we'd like the Cloud Run service to also acknowledge, mute, and close incidents by using an API.
While I found public APIs for managing alert policies and notification channels, I cannot find an API for managing incidents. Is there a public one?
I have created an alert policy in Google Cloud Platform, and I am receiving email notifications based on it.
Now I want to configure an external custom monitoring system for this alert. I want to know which REST APIs this monitoring system can poll every 10-20 seconds to get the status of the alert.
Please help
If you want to use only Cloud Monitoring and alerting policies, you won't achieve this.
It is not possible to configure alert policies to notify repeatedly while the policy's conditions are met. Alert policies created through the Google Cloud Console send a single notification when the condition is met; you can optionally receive another notification when the condition stops being met.
Additional information can be found in the Notifications per incident documentation.
In Cloud Monitoring API v3 - Alerting policies, you can find confirmation that a notification is sent only when an incident is created.
An alerting policy is a configuration resource that describes the criteria for generating incidents and how to notify you when those incidents are created.
In general, if you want Notification Channels to send notifications outside Cloud Monitoring, you can use Webhooks or Pub/Sub.
Note
Webhooks only support public endpoints. If you need notifications sent to an endpoint that isn't public, then create a Pub/Sub notification channel and configure a subscription to the Pub/Sub topic. For more information, see Webhook notifications fail when configured for a private endpoint.
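Handling such a Pub/Sub notification is mostly a matter of decoding the push envelope. Below is a minimal sketch in Python, assuming the Monitoring payload carries an "incident" object with fields like incident_id, state, and policy_name; verify these field names against your actual notifications before relying on them:

```python
import base64
import json

def parse_alert(envelope: dict) -> dict:
    """Extract the Monitoring incident from a Pub/Sub push envelope.

    Pub/Sub push delivers {"message": {"data": "<base64 JSON>"}}; for
    Monitoring notification channels the decoded JSON is assumed to carry
    an "incident" object (verify these field names against your payloads).
    """
    data = base64.b64decode(envelope["message"]["data"]).decode("utf-8")
    incident = json.loads(data)["incident"]
    return {
        "id": incident.get("incident_id"),
        "state": incident.get("state"),        # e.g. "open" or "closed"
        "policy": incident.get("policy_name"),
    }

# Example: a fake push envelope, shaped like what Pub/Sub would POST
payload = {"incident": {"incident_id": "abc123", "state": "open",
                        "policy_name": "high-latency"}}
envelope = {"message": {"data": base64.b64encode(
    json.dumps(payload).encode()).decode()}}
print(parse_alert(envelope))
```

In a real subscriber you would acknowledge the push by returning a 2xx response after parsing succeeds.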
As you didn't provide more information, it's hard to say whether you could use built-in features of third-party software to integrate with GCP Cloud Monitoring. One example is Grafana:
Grafana ships with built-in support for Google Cloud Monitoring. Add it as a data source to build dashboards for your Google Cloud Monitoring metrics.
GCP can also work with Prometheus features, which might give you something similar to what you want.
Prometheus is a monitoring tool often used with Kubernetes. If you configure Cloud Operations for GKE and include Prometheus support, then the metrics that are generated by services using the Prometheus exposition format can be exported from the cluster and made visible as external metrics in Cloud Monitoring.
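As an aside, the Prometheus text exposition format mentioned above is simple enough to generate by hand. A simplified sketch, ignoring label-value escaping and the HELP/TYPE metadata lines a full exporter would emit:

```python
from typing import Optional

def format_metric(name: str, value: float, labels: Optional[dict] = None) -> str:
    """Render one sample in the Prometheus text exposition format,
    e.g. http_requests_total{method="get"} 42
    (simplified: no label-value escaping, no HELP/TYPE lines)."""
    label_str = ""
    if labels:
        pairs = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + pairs + "}"
    return f"{name}{label_str} {value}"

print(format_metric("http_requests_total", 42, {"method": "get"}))
```

A service exposing lines like this on a /metrics endpoint can be scraped by Prometheus and, via the integration described above, surfaced in Cloud Monitoring.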
There are some workarounds; however, they won't fully achieve what you want.
It is possible to create multiple conditions that identify the same issue. Every time a condition is met, a notification will be received.
It is possible to notify users when a condition is NOT met; however, this might flood them with messages.
The last thing I want to mention is that there is already a Feature Request to send repeated notifications until the condition clears. More details in FR: Repeat Notifications until condition is gone.
Additional Documentation:
Monitoring Alerts in GCP by integrating Cloud Operations with Notification Channels
Conclusion
Alert policies created through the Google Cloud Console send a notification only when the condition is met. You can also enable a notification for when the incident is resolved.
There is a Feature Request to add repeated notifications - here
To send notifications to other apps/resources, you can use Webhooks or Pub/Sub.
Hi dear StackOverflow community,
The following Amazon concepts are confusing to me; I can't establish the key differences among them at once:
Amazon Inspector vs Trusted Advisor vs CloudWatch vs Personal Health Dashboard vs AWS CloudTrail.
Could you help me get clarity on the key differences among them?
Thank you very much in advance
Trusted Advisor
Trusted Advisor offers recommendations to lower cost and improve security, performance, and fault tolerance. Some checks are provided for free, while the full set of recommendations is available only to AWS Support subscribers.
Personal Health Dashboard
AWS Personal Health Dashboard shows issues and outages that might affect your usage of AWS services.
Amazon CloudWatch
Amazon CloudWatch stores metrics and allows Alarms to be configured based on those metrics. Many AWS services send metrics to CloudWatch, such as Amazon EC2 providing CPU metrics and Amazon S3 providing storage metrics. It also has CloudWatch Logs, which can store log files and respond to log messages, and CloudWatch Events, which can trigger actions in response to certain events.
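As an illustration, an alarm on the EC2 CPU metric could be set up with boto3 roughly as follows. The dictionary keys are the parameters of CloudWatch's put_metric_alarm call, while the alarm name, threshold, and instance ID are made up for the example:

```python
def cpu_alarm_params(instance_id: str, threshold: float = 80.0) -> dict:
    """Assemble arguments for CloudWatch put_metric_alarm: alarm when
    average CPU over two 5-minute periods exceeds the threshold."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",      # illustrative name
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

params = cpu_alarm_params("i-0123456789abcdef0")
print(params["AlarmName"])

# With AWS credentials configured, the alarm would be created with:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**params)
```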
AWS CloudTrail
AWS CloudTrail is an audit trail of API calls made to AWS. It tracks details of all requests, such as the user, source IP, timestamp, request parameters and the success of the API call. Just like a security company keeps track of every time you use a swipe-card, CloudTrail keeps track of every time a request is made to an AWS service.
Amazon Inspector
Amazon Inspector runs on Amazon EC2 instances and scans the computer for known vulnerabilities in the operating system and applications.
I have created a set of Cloud Functions that ingest data into Google Cloud Storage. The functions are invoked via HTTP GET requests and are configured to accept internal traffic only.
However, when I use Cloud Scheduler to invoke the functions, I continually get permission errors, even after specifying a service account with the proper permissions for each function. I have set each of the functions to the us-central1 region and have researched the docs and Stack Overflow with no success so far. Can I get some assistance with this?
Cloud Scheduler is a serverless product. This means it doesn't belong to your project and doesn't send requests to your Cloud Function through your VPC. In addition, Cloud Scheduler isn't yet supported in VPC Service Controls.
Thus, you can't do this directly. The workaround is to allow all ingress traffic on the Cloud Function and to disable unauthenticated access. Your function is then callable from anywhere on the internet, BUT valid authentication is required to invoke it.
Use your service account and add it to Cloud Scheduler for invoking your function, granting it a role sufficient to do so (for example, Cloud Functions Invoker).
Alternative
However, if you prefer not to deploy your function publicly accessible on the internet (keeping the allow-internal-traffic-only ingress mode), there is an alternative.
Change your Cloud Scheduler job to publish a Pub/Sub message instead of calling your function directly. Then deploy your function linked to the Pub/Sub topic instead of in HTTP target mode.
You might have some updates to perform in your code, especially if you have parameters to handle (initially in the query or the body; now everything is in the Pub/Sub message published by Cloud Scheduler). But your function is then callable only by your Pub/Sub topic and in no other way.
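That parameter handover can be sketched as follows: Cloud Scheduler's message body arrives base64-encoded in the event's data field, so parameters formerly read from the HTTP query or body are decoded from there instead. The two-argument entry point matches the Python background-function style; adjust for your runtime, and note the bucket/prefix parameters are invented for the example:

```python
import base64
import json

def extract_params(event: dict) -> dict:
    """Read job parameters from a Pub/Sub-triggered function's event.
    Cloud Scheduler puts the message body into event["data"], base64-encoded."""
    if "data" not in event:
        return {}
    return json.loads(base64.b64decode(event["data"]).decode("utf-8"))

def ingest(event, context=None):
    """Entry point sketch for the Pub/Sub-triggered Cloud Function."""
    params = extract_params(event)
    print(f"ingesting bucket={params.get('bucket')} prefix={params.get('prefix')}")

# Simulate what Cloud Scheduler would publish as the message body:
body = json.dumps({"bucket": "my-data", "prefix": "daily/"}).encode()
ingest({"data": base64.b64encode(body).decode()})
```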
According to the documentation, in order to trigger a Cloud Function from Cloud Scheduler you have to use Pub/Sub. These are the steps:
Create the Cloud Function and make it trigger by a Pub/Sub topic.
Create the Pub/Sub topic.
Create the Cloud Scheduler job that will invoke the Pub/Sub trigger.
Once you do that you will be able to test-run the Cloud Scheduler job and verify whether it's working now. The final schema is something like this:
Cloud Scheduler job => Pub/Sub topic => Cloud Function
Once it's working remember to revert the roles granted to the Cloud Scheduler service account, as this method doesn't require them.
Here I found a blog post that does the same but with a more practical approach that you can follow from a CLI.
Does Google Cloud have analogous functionality to AWS Lambda?
In particular, I would like compute resources to be spun up and jobs scheduled via HTTPS events.
I'm also interested in any other cloud hosting providers which have similar functionality.
I just found out that there is something that looks interesting in the latest documentation for the SDK's command-line tool, gcloud.
https://cloud.google.com/sdk/gcloud/reference/alpha/functions/
This sounds exciting.
UPDATE: Google has just released official documentation for an alpha version of Cloud Functions. For now, functions can be written in JavaScript using Node.js, and triggered by Pub/Sub, Cloud Storage, direct HTTP requests, or manually for debugging purposes.
Google Cloud Storage has Object Change Notification. Only webhooks are currently supported.
A client application can send a request to watch for a bucket's change notification events in order to be notified about changes to a bucket's objects. After a notification channel is initiated, Google Cloud Storage notifies the application any time an object is added, updated, or removed from the bucket.
For example, when you add a new picture to a bucket, an application could be notified to create a thumbnail.
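A sketch of such a handler in Python, assuming the notification payload includes the object's bucket and name (as the Cloud Storage object resource does); the thumbnail naming scheme is purely illustrative:

```python
import os

def thumbnail_name(object_name: str) -> str:
    """Derive a destination path for a thumbnail, e.g.
    photos/cat.jpg -> thumbs/photos/cat_small.jpg (naming is illustrative)."""
    root, ext = os.path.splitext(object_name)
    return f"thumbs/{root}_small{ext}"

def on_object_change(notification: dict):
    """Sketch of a change-notification handler: the payload is assumed
    to carry the bucket and name of the added or updated object."""
    src = notification["name"]
    print(f"would render {src} -> {thumbnail_name(src)} "
          f"in bucket {notification['bucket']}")

on_object_change({"bucket": "my-photos", "name": "photos/cat.jpg"})
```

The real handler would additionally inspect the notification's event type, since removals also trigger notifications.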
More info can be found at: https://cloud.google.com/storage/docs/object-change-notification
Regarding other providers that have similar functionality, check out IronWorker. You can kick off IronWorker tasks via https endpoints using the webhook endpoint and you can run jobs on multiple clouds. Here's a comparison of Lambda vs IronWorker.
And yes, I work for Iron.io.
Google recently announced an alpha release of Google Cloud Functions, which supports an HTTP interface.
There are Google Cloud Functions and Microsoft Azure Functions; both are fairly new (Microsoft announced Azure Functions on March 31, 2016).
If you need Lambda-like functionality with an HTTP interface, then look at Nano Lambda.
They can deploy to any cloud and on premise.