Google Cloud Platform: creating an alert policy - how to specify a message variable in alerting documentation Markdown?

So I've created a logging alert policy on Google Cloud that monitors the project's logs and sends an alert if it finds a log entry that matches a certain query. This is all good and fine, but whenever it does send an email alert, it's barebones. I am unable to include anything useful in the email, such as the actual log message; the user must instead click "View incident" and go to the specified timeframe of when the alert happened.
Is there no way to include the message? As far as I can tell from the GCP "Using Markdown and variables in documentation templates" doc, there isn't.
I'm only really able to use ${resource.label.x}, which isn't very useful because the alert already includes most of that information by default.
Could I have something like ${jsonPayload.message}? It didn't work when I tried it.

Probably (!) not.
To be clear, alerting policies track metrics (not logs), and you've created a log-based metric that you're using as the basis for an alert.
There's information loss between the underlying log entry (which contains e.g. jsonPayload) and the metric that's produced from it (which probably does not). You can create log-based metric labels using expressions that include the underlying log entry fields.
However, per the example in Google's docs, you'd want to use a limited (enum-like) type for these values (e.g. HTTP status, though even that may be too broad) rather than a potentially unbounded jsonPayload.
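As an illustration, here is a minimal sketch of defining such a log-based metric with a label using the google-cloud-logging Python client; the metric name, filter, and extracted field are placeholders, and this assumes v3 of the library:
from google.cloud.logging_v2.services.metrics_service_v2 import MetricsServiceV2Client
from google.cloud.logging_v2.types import LogMetric

client = MetricsServiceV2Client()

metric = LogMetric(
    name="error_count_by_status",  # placeholder metric name
    filter='resource.type="gae_app" AND severity>=ERROR',  # placeholder filter
    label_extractors={
        # Prefer a bounded value (an HTTP status) over free-form message text.
        "status": "EXTRACT(httpRequest.status)",
    },
    metric_descriptor={
        "metric_kind": "DELTA",
        "value_type": "INT64",
        "labels": [{"key": "status", "value_type": "STRING"}],
    },
)

client.create_log_metric(parent="projects/<project_id>", metric=metric)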

It is possible. Suppose you need to pass jsonPayload.message from your GCP log to the documentation section of your policy. You need to use the label_extractors feature to extract your log message.
I will share a policy-creation JSON template wherein you can reference jsonPayload.message in the documentation section of your policy:
policy_json = {
    "display_name": "<policy_name>",
    "documentation": {
        "content": "I have extracted the log message: ${log.extracted_label.msg}",
        "mime_type": "text/markdown"
    },
    "user_labels": {},
    "conditions": [
        {
            "display_name": "<condition_name>",
            "condition_matched_log": {
                "filter": "<filter_condition>",
                "label_extractors": {
                    "msg": "EXTRACT(jsonPayload.message)"
                }
            }
        }
    ],
    "alert_strategy": {
        "notification_rate_limit": {
            "period": "300s"
        },
        "auto_close": "604800s"
    },
    "combiner": "OR",
    "enabled": True,
    "notification_channels": [
        "<notification_channel>"
    ]
}
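For completeness, here is a minimal sketch of creating such a policy with the google-cloud-monitoring Python client; the display names, filter, and notification channel are placeholders, and the LogMatch condition assumes a recent version of the library:
import datetime
from google.cloud import monitoring_v3

project_id = "<project_id>"  # placeholder

client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    display_name="log-message-alert",
    documentation=monitoring_v3.AlertPolicy.Documentation(
        content="Extracted log message: ${log.extracted_label.msg}",
        mime_type="text/markdown",
    ),
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="matching-log-entries",
            condition_matched_log=monitoring_v3.AlertPolicy.Condition.LogMatch(
                filter="severity>=ERROR",  # placeholder filter
                label_extractors={"msg": "EXTRACT(jsonPayload.message)"},
            ),
        )
    ],
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    alert_strategy=monitoring_v3.AlertPolicy.AlertStrategy(
        notification_rate_limit=monitoring_v3.AlertPolicy.AlertStrategy.NotificationRateLimit(
            period=datetime.timedelta(seconds=300),
        ),
        auto_close=datetime.timedelta(days=7),
    ),
    notification_channels=[f"projects/{project_id}/notificationChannels/<id>"],
)

created = client.create_alert_policy(
    name=f"projects/{project_id}",
    alert_policy=policy,
)
print(created.name)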

Related

AWS Eventbridge: scheduling a CodeBuild job with environment variable overrides

When I launch an AWS CodeBuild project from the web interface, I can choose "Start Build" to start the build project with its normal configuration. Alternatively I can choose "Start build with overrides", which lets me specify, amongst others, custom environment variables for the build job.
From AWS EventBridge (Events -> Rules -> Create rule), I can create a scheduled event to trigger the CodeBuild job, and this works. How though, in EventBridge, do I specify environment variable overrides for a scheduled CodeBuild job?
I presume it's possible somehow by using "additional settings" -> "Configure target input", which allows specification and templating of event JSON. I'm not sure though how to work out, beyond blind trial and error, what this JSON should look like (to override environment variables in my case). In other words, where do I find the JSON spec for events sent to CodeBuild?
There are a number of similar questions here, e.g. AWS EventBridge scheduled events with custom details? and AWS Cloudwatch (EventBridge) Event Rule for AWS Batch with Environment Variables, but I can't find the specifics for CodeBuild jobs. I've tried the CDK docs at e.g. https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_events_targets.CodeBuildProjectProps.html, but am little the wiser. I've also tried capturing the events output by EventBridge, to see what the event WITHOUT overrides looks like, but have not managed to. Submitting the below (and a few variations, e.g. nested under "detail") as an "input constant" triggers the job, but the environment variables do not take effect:
{
    "ContainerOverrides": {
        "Environment": [{
            "Name": "SOME_VAR",
            "Value": "override value"
        }]
    }
}
There is also CodeBuild API reference at https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html#API_StartBuild_RequestSyntax. EDIT: this seems to be the correct reference (as per my answer below).
The rule target's event input template should match the structure of the CodeBuild API StartBuild action input. In the StartBuild action, environment variable overrides have a key of "environmentVariablesOverride" and a value of an array of EnvironmentVariable objects.
Here is a sample target input transformer with one constant env var and another whose value is taken from the event payload's detail-type:
Input path:
{ "detail-type": "$.detail-type" }
Input template:
{"environmentVariablesOverride": [
{"name":"MY_VAR","type":"PLAINTEXT","value":"foo"},
{"name":"MY_DYNAMIC_VAR","type":"PLAINTEXT","value":<detail-type>}]
}
I got this to work using an "input constant" like this:
{
    "environmentVariablesOverride": [{
        "name": "SOME_VAR",
        "type": "PLAINTEXT",
        "value": "override value"
    }]
}
In other words, you can ignore the fields in the sample events in EventBridge, and the overrides do not need to be specified in a "detail" field.
I used the CodeBuild "StartBuild" API docs at https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html#API_StartBuild_RequestSyntax to find this format. I would presume (but have not tested) that other fields shown there would work similarly (and that the API reference for other services would work similarly when using EventBridge: can anyone confirm?).
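If you are configuring the rule programmatically rather than in the console, here is a sketch of attaching that same input constant with boto3; the rule name, project ARN, and role ARN are placeholders, and the rule and IAM role are assumed to already exist:
import json
import boto3

events = boto3.client("events")

# "Input" carries the same constant JSON you would paste into the console's
# "input constant" box; it must match the StartBuild request syntax.
events.put_targets(
    Rule="nightly-build-schedule",  # hypothetical rule name
    Targets=[{
        "Id": "codebuild-target",
        "Arn": "arn:aws:codebuild:eu-west-1:123456789012:project/my-project",
        "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-codebuild-role",
        "Input": json.dumps({
            "environmentVariablesOverride": [
                {"name": "SOME_VAR", "type": "PLAINTEXT", "value": "override value"}
            ]
        }),
    }],
)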

GCP: events are not consistent across resource types - "operation" is missing for buckets

According to the following GCP link, the standard event structure in JSON should have operation details. But I found that for storage buckets the operation entry is missing from the log, so it cannot be used to identify the last action that occurred:
"operation": {
object (LogEntryOperation)
}
For another resource.type (a firewall rule), the operation is present:
"operation": {
"id": "operation-xxxxxxxxxxxxxxxxxxxxxx",
"producer": "compute.googleapis.com",
"last": true
}
How can I get the operation details as a mandatory object in events?
If GCP doesn't support operation: {} in events consistently, any evidence of that would be helpful.
"last" is an optional field and may not be populated. This cannot be enforced.
You can try enabling bucket logging to gather more information about events regarding your storage, or create a Feature Request at the Public Issue Tracker to have this option added in the future.
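As an illustration, here is a minimal sketch of enabling usage logging on a bucket with the google-cloud-storage Python client; the bucket names are placeholders, and the log bucket must already exist with the appropriate permissions:
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")  # bucket to monitor (placeholder)

# Write usage/storage logs into a separate bucket under the given prefix.
bucket.enable_logging("my-log-bucket", object_prefix="my-bucket-logs")
bucket.patch()  # persist the logging configuration
print(bucket.logging)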

How to pass query parameters in API Gateway with an S3 JSON file backend

I am new to AWS and I have followed this tutorial: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html. From the TEST console, I am now able to read my object stored on S3, which is the following .json file:
[
    {
        "important": "yes",
        "name": "john",
        "type": "male"
    },
    {
        "important": "yes",
        "name": "sarah",
        "type": "female"
    },
    {
        "important": "no",
        "name": "maxim",
        "type": "male"
    }
]
Now, what I am trying to achieve is to pass query parameters. I have added type in the Method Request and added a URL Query String Parameter named type with the method.request.querystring.type mapping in the Integration Request.
When I test, typing type=male is not taken into account; I still get the 3 elements instead of the 2 male elements.
Any reason you think this is happening?
For information, the resource tree is the following (and I am using the AWS Service integration type to create the GET method, as explained in the AWS tutorial):
/
/{folder}
/{item}
GET
In case anyone is interested in the answer, I have been able to solve my problem.
The full detailed solution would require a tutorial, but here are the main steps. The difficulty lies in the many moving parts, so it is important to test each of them independently to make progress (quite basic, you will tell me):
1. Make sure your SQL query against your S3 data is correct; for this you can go to your S3 bucket, click on your file, and select "Query with S3 Select" from the Actions menu.
2. Make sure that your Lambda function works, i.e. check that you build and pass the correct SQL query from the test event (see the sketch below).
3. Set up the API query strings in the Method Request panel and set up the Mapping Template in the Integration Request panel (for me it looked like this: "TypeL1": "$input.params('typeL1')"), using the application/json content type.
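For step 2, a rough sketch of such a Lambda with boto3 and S3 Select might look like this (bucket, key, and the parameter name are placeholders; a real handler should validate the parameter instead of interpolating it into SQL):
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Assumes the mapping template passes the query string through as "type".
    wanted = event.get("type", "")

    response = s3.select_object_content(
        Bucket="my-bucket",  # placeholder
        Key="people.json",   # placeholder
        ExpressionType="SQL",
        # String interpolation is for illustration only; sanitize real input.
        Expression=f"SELECT * FROM S3Object[*] s WHERE s.type = '{wanted}'",
        InputSerialization={"JSON": {"Type": "DOCUMENT"}},
        OutputSerialization={"JSON": {}},
    )

    # The result arrives as an event stream; collect the Records payloads.
    chunks = [
        ev["Records"]["Payload"].decode("utf-8")
        for ev in response["Payload"]
        if "Records" in ev
    ]
    return {"statusCode": 200, "body": "".join(chunks)}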
Good luck!

Is it possible to query whether a single RingCentral user has Automatic Call Recording (ACR) enabled?

Are there any APIs we can query to determine whether the current extension has been added to the auto-recording list?
We have a web phone using the stop/start recording Call Control API, but it doesn't work when auto recording is enabled, so in our app we need to disable the recording button. We need a way to identify whether a user has ACR enabled without retrieving all the users.
We can get the extension list with the following account-wide API, but it doesn't take any query parameters to filter the results. If we check this way, we need to load all extensions, which takes too long.
https://developers.ringcentral.com/api-reference/Rule-Management/listCallRecordingExtensions
GET /restapi/v1.0/account/{accountId}/call-recording/extensions
Information on whether an individual user has Automatic Call Recording (ACR) enabled is available in the extension endpoint and is separately controlled for inbound and outbound calls.
GET /restapi/v1.0/account/{accountId}/extension/{extensionId}
The extension response includes a serviceFeatures property, which is an array of features. Filter this property for a feature matching the following:
featureName IN (AutomaticInboundCallRecording, AutomaticOutboundCallRecording)
enabled = true
Here's an example response showing only the serviceFeatures property value for those two features:
{
    "serviceFeatures": [
        {
            "featureName": "AutomaticInboundCallRecording",
            "enabled": true
        },
        {
            "featureName": "AutomaticOutboundCallRecording",
            "enabled": false,
            "reason": "ExtensionLimitation"
        }
    ]
}
See more here:
https://developers.ringcentral.com/api-reference/User-Settings/readExtension
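Here is a small sketch of that check in Python with plain requests; the server URL, token, and the UI hook in the comment are placeholders, and the same call could be made through a RingCentral SDK:
import requests

ACR_FEATURES = {"AutomaticInboundCallRecording", "AutomaticOutboundCallRecording"}

def has_acr_enabled(server, token, account_id="~", extension_id="~"):
    """Return True if the extension has inbound or outbound ACR enabled."""
    url = f"{server}/restapi/v1.0/account/{account_id}/extension/{extension_id}"
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    features = resp.json().get("serviceFeatures", [])
    return any(
        f.get("featureName") in ACR_FEATURES and f.get("enabled")
        for f in features
    )

# Example: disable the in-app recording button when ACR is on, e.g.
# if has_acr_enabled("https://platform.ringcentral.com", access_token):
#     hide_recording_button()  # hypothetical UI hook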

How to add custom metered usage items in IBM Marketplace (AppDirect)

I am trying to do a full integration of a solution into IBM Marketplace (the one using AppDirect). There are many metering items available (Users, MBs, ...), but I can use none of them. Let's say, for example, we use "Places". I have checked the option "Allow custom metered usage", but that won't allow me to add this "Places" metering item to my pricing option. How can I achieve this?
Note: IBM has discontinued its Marketplace. This question is probably of no use anymore, but I decided not to delete it as we never know if they will enable it back. Also, before the discontinuation announcement, I managed to get a reply from IBM stating that they don't allow custom unit types, and I was invited to use the generic "Item".
If you are billing a custom usage unit, the request looks like:
{
    "account": {
        "accountIdentifier": "{UUID}"
    },
    "items": [{
        "quantity": 5,
        "customUnit": "Places",
        "price": 2.99,
        "description": "some cool places"
    }]
}
Custom units use a different field name ("customUnit") than the predefined units. I'm not sure which error you were getting back when attempting to bill usage, but that might explain it if you were getting back a dump of expected unit values.
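As a sketch, posting that request from Python might look like the following; the marketplace host and endpoint path are assumptions modeled on AppDirect's billing usage API, and authentication is stubbed out (AppDirect marketplaces typically require OAuth-signed requests):
import requests

payload = {
    "account": {"accountIdentifier": "{UUID}"},
    "items": [{
        "quantity": 5,
        "customUnit": "Places",  # note customUnit, not the predefined "unit"
        "price": 2.99,
        "description": "some cool places",
    }],
}

# Hypothetical host and path; adjust to your marketplace's base URL.
resp = requests.post(
    "https://marketplace.example.com/api/integration/v1/billing/usage",
    json=payload,
    headers={"Authorization": "Bearer <token>"},  # replace with real signing
)
resp.raise_for_status()
print(resp.json())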