Is there a way to update the scheduled query notification email to something custom?
By default it is the email of the creator; however, this is often a service account with no real email recipient.
E.g. with Terraform provisioning, we would have a service account. We would want to change the email notification target from the SA to a support email so failures are routed correctly.
I've checked the documentation and this does not seem to be an option, unless I've missed something via the CLI?
Thanks!
Since scheduled query job IDs always start with "scheduled_query_[runsID]", you can route failure notifications yourself with log-based alerting:
1. Search for scheduled_query_ using an advanced logs query (a sample filter follows this list).
2. Create a logs-based metric based on your BigQuery scheduled query log entries.
3. Create an alerting policy using the logs-based metric created in step 2.
4. While creating the alerting policy in step 3, select email as a notification channel.
5. In the notification channel, add the email address that should receive the notifications.
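For reference, a filter along these lines can be used for step 1; the field paths below come from the BigQuery audit logs and are an assumption that may need adjusting for your project:

    resource.type="bigquery_resource"
    severity>=ERROR
    protoPayload.serviceData.jobCompletedEvent.job.jobName.jobId:"scheduled_query_"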
I am looking for ideas on how to set the recipient of PagerDuty alerts.
To give some context: I have an AWS Config rule that publishes a new event to an SNS topic, via EventBridge, each time the rule is non-compliant. PagerDuty is subscribed to the SNS topic, successfully receives the alerts, and forwards them to the alert recipients; no issues there.
My question is this: is it possible to set the recipient of the PagerDuty alert based on the event that triggers the alert?
I am thinking about using a Lambda to query CloudTrail and extract the email address of the user who initiated the event that made the AWS Config rule non-compliant, but I am not sure how to set that email address as the recipient of the PagerDuty notification.
Is this even possible? Or is there a better way to approach it?
Thanks in advance
Some options for thought:
Depending on the size of your instance, you could build a specific service for each of the possible recipients, either using the Lambda you mentioned to control which service the alert is routed to, or using a PagerDuty global ruleset (or Event Orchestration) to route the alert based on its contents (a sketch of the Lambda variant follows below).
This doesn't need much setup initially, but the tradeoff is that it quickly becomes unwieldy at scale.
https://support.pagerduty.com/docs/event-orchestration#global-orchestrations
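As a rough sketch of the Lambda variant, in Python: the environment variable names and the rule-to-team mapping are hypothetical, while the Events API v2 endpoint and payload shape are PagerDuty's standard ones.

    # Hypothetical Lambda: pick a PagerDuty Events API v2 routing key per team,
    # based on which Config rule fired, and send the alert to that service.
    import json
    import os
    import urllib.request

    # Assumption: one routing key per PagerDuty service, supplied via env vars.
    ROUTING_KEYS = {
        "team-a": os.environ["PD_KEY_TEAM_A"],
        "default": os.environ["PD_KEY_DEFAULT"],
    }
    # Hypothetical mapping from Config rule name to owning team.
    RULE_OWNERS = {"s3-bucket-public-read-prohibited": "team-a"}

    def handler(event, context):
        detail = event.get("detail", {})
        team = RULE_OWNERS.get(detail.get("configRuleName"), "default")
        payload = {
            "routing_key": ROUTING_KEYS[team],
            "event_action": "trigger",
            "payload": {
                "summary": f"Non-compliant: {detail.get('configRuleName')}",
                "source": detail.get("resourceId", "aws-config"),
                "severity": "error",
            },
        }
        req = urllib.request.Request(
            "https://events.pagerduty.com/v2/enqueue",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status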
I've also seen solutions that assign an escalation policy without a specific target to a service, such as a user account with no contact info. When an alert and incident are opened, a webhook is sent to, for example, RunDeck, and that tool takes action in PagerDuty: the correct recipient is assigned to the incident and requested to acknowledge (see the sketch after these links).
The tradeoffs here are losing visibility into who is on-call for a service, and the lift to stand up RunDeck, a Lambda, or some other listener to process the webhook event.
https://support.pagerduty.com/docs/event-orchestration#webhooks
https://www.pagerduty.com/integrations/rundeck-runbook-automation/
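For the listener piece, a minimal sketch of reassigning an incident through the PagerDuty REST API; the token and From address come from placeholder environment variables, and it is assumed the webhook handler has already resolved the incident and target user IDs:

    # Minimal sketch: reassign a PagerDuty incident to the correct user.
    import json
    import os
    import urllib.request

    def reassign(incident_id, user_id):
        body = {
            "incidents": [{
                "id": incident_id,
                "type": "incident_reference",
                "assignments": [
                    {"assignee": {"id": user_id, "type": "user_reference"}}
                ],
            }]
        }
        req = urllib.request.Request(
            "https://api.pagerduty.com/incidents",
            data=json.dumps(body).encode(),
            method="PUT",
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Token token={os.environ['PD_API_TOKEN']}",
                # The incidents endpoint requires a valid PagerDuty user email.
                "From": os.environ["PD_FROM_EMAIL"],
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)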
I am setting up alerting for GCP VMs. It works fine for email, but I'm trying to use the webhook option. It shows the incident was caught and the webhook triggered, but I don't see an alert on the receiving end. I don't know how to debug this, since GCP webhooks seem like a black box. Does anybody know where I can see the log for the actual webhook call? I'm not sure the receiving end is getting an alert ID from the webhook call.
I'm using this document:
https://cloud.google.com/monitoring/alerts/using-channels-api#api-create-channels
Thanks!
Gary
You configure a webhook notification channel and expect to be notified when incidents occur, but you might not receive any notifications for the following reasons:
1. Private endpoint
You can't use webhooks for notifications unless the endpoint is public.
To resolve this situation, use Pub/Sub notifications combined with a pull subscription to that notification topic.
When you configure a Pub/Sub notification channel, incident notifications are sent to a Pub/Sub queue that has Identity and Access Management controls. Any service that can query for, or listen to, a Pub/Sub topic can consume these notifications. For example, applications running on App Engine, Cloud Run, or Compute Engine virtual machines can consume these notifications.
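As a minimal sketch of that pull-subscription approach with the google-cloud-pubsub Python client (project and subscription names are placeholders):

    # Consume Cloud Monitoring notifications from a Pub/Sub pull subscription.
    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(
        "my-project", "monitoring-alerts-sub"
    )

    def callback(message):
        # The notification body is a JSON document describing the incident.
        print(message.data.decode("utf-8"))
        message.ack()

    future = subscriber.subscribe(subscription_path, callback=callback)
    try:
        future.result()  # block and process messages as they arrive
    except KeyboardInterrupt:
        future.cancel()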
2. Public endpoint
To identify why the delivery failed, examine your Cloud Logging log entries for failure information.
For example, you can search for log entries for the notification channel resource by using the Logs Explorer, with a filter like the following:
resource.type="stackdriver_notification_channel"
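If you prefer the CLI, the same search can be run with gcloud (assuming the Cloud SDK is installed and authenticated):

    gcloud logging read 'resource.type="stackdriver_notification_channel"' --limit=20 --format=json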
NOTE: Also check whether you are included in the recipient list; if you are not, you will not see the alert on the receiving end.
Refer to Troubleshooting Alerting policies for more information.
I am new to alert policy creation in Google Cloud.
I have set up a GKE cluster and enabled upgrade notifications to publish a message to a Pub/Sub topic whenever the cluster gets upgraded. The Pub/Sub subscription uses the pull model. Whenever a message is published to the topic, I need an alerting policy to pull the message and send an email containing the message content to a distribution list. Can I achieve this through an alerting policy alone, without writing a Cloud Function?
Can anyone please suggest how to achieve this? Thank you
Alerting policies can't read Pub/Sub messages. The product listens to the logs, and when an entry matches a policy rule, an action (an alert) is generated.
If you need to send an email based on the Pub/Sub message content, you MUST read the message (with Cloud Functions, Cloud Run, App Engine or whatever) and:
Either send the email with the message content directly,
Or, if you want to use Cloud Alerting, publish a special log format (put a specific keyword in the log entry that you write along with the message content) to let Cloud Alerting detect the log entries and send an email alert with the log trace, including your message content; a sketch of this second option follows below.
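A minimal sketch of that second option, assuming a Python Cloud Function (1st gen) triggered by the Pub/Sub topic; the log name and the "GKE_UPGRADE_ALERT" keyword are arbitrary placeholders:

    import base64
    from google.cloud import logging

    def on_upgrade_message(event, context):
        # Pub/Sub background functions receive the message base64-encoded.
        text = base64.b64decode(event["data"]).decode("utf-8")
        client = logging.Client()
        logger = client.logger("gke-upgrade-notifications")  # placeholder name
        # Write a structured entry containing a keyword that a log-based
        # alerting policy can match on.
        logger.log_struct(
            {"marker": "GKE_UPGRADE_ALERT", "message": text},
            severity="NOTICE",
        )

A log-based alerting policy can then match jsonPayload.marker="GKE_UPGRADE_ALERT" and use an email notification channel to reach your distribution list.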
I have been exploring how to send the jsonPayload message field from the Logs Viewer (these are syslogs of a service) in GCP to a Slack channel, but I have not been able to find any predefined services (like alerting policies that forward the payload) available in Stackdriver. I have been able to create counter or distribution user-defined metrics for logs, but these only provide an int64 value instead of a string value or the actual message body. Is there a way in GCP to actually send a log payload to Slack or an email?
We had a similar issue where we wanted to send certain events to Slack and, for fatal issues, trigger an incident with our ops team via VictorOps.
We couldn't find anything out there to fit our needs, so we created our own Slack / VictorOps Cloud Function.
https://github.com/patiently/gcloud-slack-logger
In GCP, you can export logs to Pub/Sub, Cloud Storage, or BigQuery. There is no other way within GCP to export logs at the moment.
As of 2022, I found this can be done as follows:
In the GCP Logs Explorer (not the legacy version), choose the Create alert button. One of the options here is a GCP notification channel, which supports Slack. Some points here:
The Slack channel can't be private, as far as I can tell.
The Slack channel must be in the correct Slack workspace. If your org has multiple Slack workspaces, make sure GCP is trying to connect to the correct one.
Put in the log query criteria you want (a placeholder example follows below). Then go into Monitoring and you will see this under the Alerting dropdown.
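For example, the query criteria might look like this (the resource type and message text are placeholders for your own service):

    resource.type="gce_instance"
    jsonPayload.message:"ERROR"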
I am trying to edit an AWS CloudWatch alert that sends an email to my team such that custom content is sent in the email. Currently, all email alerts contain only auto-generated content. The email content contains the reason for the alert, a link to the alarm in the AWS console, and sections for Alarm Details, Threshold, and Monitored Metric. However, I want to add custom content listing likely causes of the alert and procedures to execute when receiving the alert. Does anybody know how custom content can be added to a CloudWatch alert email?
I have read existing AWS CloudWatch Alarm documentation such as How to Create/Edit a CloudWatch Alarm, and How to create a CPU Usage Alarm that Sends an Email. I have also tried various Google searches and searches for existing questions here on SO but to no avail. Any help/advice would be greatly appreciated.
You can set up a Lambda trigger on the alarm and send an email using AWS SES SMTP credentials, creating formatted email content from the alarm trigger event data.
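A rough sketch of that approach in Python; note it uses the SES API via boto3 rather than SMTP credentials, and the addresses, causes, and runbook URL are placeholders you would replace with your own content:

    # Lambda subscribed to the alarm's SNS topic: build custom email content
    # (likely causes, runbook link) and send it with SES.
    import json
    import boto3

    ses = boto3.client("ses")

    def handler(event, context):
        # CloudWatch alarm notifications arrive as a JSON string in the SNS body.
        alarm = json.loads(event["Records"][0]["Sns"]["Message"])
        body = (
            f"Alarm: {alarm['AlarmName']}\n"
            f"Reason: {alarm['NewStateReason']}\n\n"
            "Likely causes: ...\n"
            "Runbook: https://wiki.example.com/runbooks/...\n"
        )
        ses.send_email(
            Source="alerts@example.com",  # must be an SES-verified sender
            Destination={"ToAddresses": ["team@example.com"]},
            Message={
                "Subject": {"Data": f"ALARM: {alarm['AlarmName']}"},
                "Body": {"Text": {"Data": body}},
            },
        )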
You can follow the document you mentioned above: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_AlarmAtThresholdEC2.html
Choose an existing SNS topic or create a new one, then enter the Slack email address; any alarms will be sent to the Slack channel where you integrated the Email app.
You can get an address like xxxxxxxxxx#x.slack.com by adding the Email app in Slack.