How to use logger to log an error into stackdriver? - google-cloud-platform

I have a micro VM instance running on Google Compute Engine and I want to log an error message to Stackdriver. This page https://cloud.google.com/logging/docs/agent/installation shows this example:
logger "Some test message"
which works great for normal messages, but I want Stackdriver to recognize some messages as errors so that they show up at https://console.cloud.google.com/errors, which would allow me to get email notifications.
I'm aware that the gcloud tool has a beta logging solution, but I'm hoping to avoid installing the extra components it requires.

You'll want to read over the docs about formatting at https://cloud.google.com/error-reporting/docs/formatting-error-messages
Something like:
{
  "message": "Some test message",
  "context": {
    "reportLocation": {
      "functionName": "my_function"
    }
  },
  "serviceContext": {
    "service": "my service"
  }
}
You'll need the message to be the jsonPayload of the log entry, not the textPayload. I believe the agent will automatically recognize JSON messages, but if there are also non-JSON messages it may fall back to using text in all cases. In that case, using a dedicated log for the errors should help.
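If it helps, here is a minimal sketch (in Python, using the standard syslog module) of emitting such a record; the syslog tag and service name are made-up placeholders, and whether the agent turns the line into a jsonPayload still depends on your agent configuration as described above.

# Minimal sketch: emit the Error Reporting JSON as a single syslog line so the
# Logging agent can pick it up. The tag and service name below are placeholders.
import json
import syslog

payload = {
    "message": "Some test message",
    "context": {"reportLocation": {"functionName": "my_function"}},
    "serviceContext": {"service": "my-service"},
}

syslog.openlog(ident="my-errors-log")  # hypothetical tag for a dedicated log
syslog.syslog(syslog.LOG_ERR, json.dumps(payload))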
You may also be interested in the docs on how messages are grouped together: https://cloud.google.com/error-reporting/docs/grouping

Related

Amazon Connect error using Lex as customer input

I am trying to create a demo call centre using Amazon Connect.
Part of my contact flow makes use of the "Get Customer Input" block, as I want to use an Amazon Lex bot I have created to divert callers to a specific working queue. For example, if the user says "sales" they should be directed to the sales queue.
I have tested the Lex bot within the Lex console and it works as intended.
However, when testing the Lex integration within Amazon Connect, it always follows the "error" path of the block after the user says something on the phone.
Here is the CloudWatch log showing the Error result of the module.
{
  "Results": "Error",
  "ContactFlowName": "Inbound Flow",
  "ContactFlowModuleType": "GetUserInput",
  "Timestamp": "2022-02-12T18:06:10.940Z"
}
Here is the contact flow:
Here are the settings for the "Get Customer Input" block:
Here is a test of the Lex bot in the Lex dashboard:
Any help would be greatly appreciated.
It turns out the solution is: if you're using Lex V2, make sure you also set the proper language attribute. The easiest way is to use the Set Voice block in your contact flow; at the very bottom of the block you can enable "set language attribute".

GCP Stackdriver for OnPrem

I want to send notifications from Stackdriver to my Centreon monitoring (based on Nagios) for workflow reasons. Do you have any idea how to do so?
Thank you
Stackdriver alerting allows webhook notifications, so you can run a server to forward the notifications anywhere you need to (including Centreon), and point the Stackdriver alerting notification channel to that server.
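As a rough sketch only (the listening port, the Centreon endpoint, and the forwarded fields below are placeholders, not a tested integration), such a forwarding server could look like this in Python:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests  # third-party: pip install requests

CENTREON_ENDPOINT = "https://centreon.example.com/your-integration"  # placeholder

class AlertForwarder(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body that Stackdriver POSTs to the webhook channel.
        length = int(self.headers.get("Content-Length", 0))
        incident = json.loads(self.rfile.read(length)).get("incident", {})
        # Forward whichever fields your Centreon workflow needs.
        requests.post(CENTREON_ENDPOINT, json={
            "host": incident.get("resource_name"),
            "state": incident.get("state"),
            "summary": incident.get("summary"),
        })
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), AlertForwarder).serve_forever()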
There are two ways to send external information into the Centreon queue without the traditional passive-agent mode.
First, you can use the Centreon DSM (Dynamic Services Management) addon.
It is interesting because you don't have to register a dedicated and already known service in your configuration to match the notification.
With Centreon DSM, Centreon can receive events such as SNMP traps resulting from the detection of a problem and assign the event dynamically to a slot defined in Centreon, like a tray event.
A resource has a set number of “slots” to which alerts are assigned (stored). Until an event has been acknowledged by human action, it remains visible in the Centreon web frontend; once the event is acknowledged, the slot becomes available for new events.
The event must be transmitted to the server via an SNMP Trap.
All the configuration is made through Centreon web interface after the module installation.
Complete explanations, screenshots, and tips are described on the online documentation: https://documentation.centreon.com/docs/centreon-dsm/en/latest/user.html
Secondly, Centreon developers added a Centreon REST API you can use to submit information to the monitoring engine.
This approach is easier to use than the SNMP trap route.
In that case, you have to create the host and service objects before using the API.
To send a status, use the following URL with the POST method:
api.domain.tld/centreon/api/index.php?action=submit&object=centreon_submit_results
Headers:
Content-Type: application/json
centreon-auth-token: the authToken value you received in the authentication response
Example of a service submit body (JSON with the parameters described above, formatted as below):
{
  "results": [
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "Memory",
      "status": "2",
      "output": "The service is in CRITICAL state",
      "perfdata": "perf=20"
    },
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "fake-service",
      "status": "1",
      "output": "The service is in WARNING state",
      "perfdata": "perf=10"
    }
  ]
}
Example of a response body (JSON with the HTTP return code and a message for each submitted result):
{
  "results": [
    {
      "code": 202,
      "message": "The status send to the engine"
    },
    {
      "code": 404,
      "message": "The service is not present."
    }
  ]
}
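Putting the URL, headers, and body together, a submit request could look roughly like this in Python (the hostname and token are placeholders; the token comes from a prior call to the authentication endpoint):

import requests  # third-party: pip install requests

URL = ("https://api.domain.tld/centreon/api/index.php"
       "?action=submit&object=centreon_submit_results")

body = {
    "results": [
        {
            "updatetime": "1528884076",
            "host": "Centreon-Central",
            "service": "Memory",
            "status": "2",
            "output": "The service is in CRITICAL state",
            "perfdata": "perf=20",
        }
    ]
}

# The json= argument sets the Content-Type: application/json header for us.
resp = requests.post(URL, json=body,
                     headers={"centreon-auth-token": "YOUR_AUTH_TOKEN"})  # placeholder token
print(resp.status_code, resp.json())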
More information is available in the online documentation: https://documentation.centreon.com/docs/centreon/en/19.04/api/api_rest/index.html
The Centreon REST API also allows you to get real-time status for hosts and services, and to manage object configuration.

Cognito User Migration Trigger - Exception during user migration - Exception Location

We're using a Lambda function to respond to the 'User Migration' trigger in AWS Cognito. When something like a syntax error occurs, you can see it in the CloudWatch logs. However, the "Exception during user migration" errors seen on the login page are nowhere to be found in the CloudWatch logs.
Where are we supposed to look for these? I can't find anything in the documentation and assumed they would go to CloudWatch.
I can't test it in the Lambda console because one of the parameters passed into the function has a function nested within the object, and I can't create a test JSON setup that includes that. There's also no pre-built test trigger for user migration.
Any ideas as to why I can't see this in CloudWatch, or where the exceptions are shown, would be greatly appreciated.
Unfortunately Cognito doesn't expose any logs (or metrics, for that matter!).
The closest you can get is to view the Lambda's logs in CloudWatch. If you log your response and watch your Lambda's error metric, you should mostly be able to debug issues internal to the Lambda.
This does leave a few edge cases:
You won't see anything if the lambda can't be invoked (this would only happen under heavy concurrent loads either on that single lambda, or on all lambdas across your account)
If you return a bad response, the Lambda will succeed but the trigger action will fail, and Cognito will give you a fairly generic message. At this point you're at the mercy of AWS's documentation to work out what's wrong (which can be a bit hit and miss, although Stack Overflow always helps!).
You can find an example payload for the lambda in the trigger documentation:
{
  "userName": "THE USERNAME",
  "request": {
    "password": "THE PASSWORD"
  },
  "response": {
    // it is your responsibility to fill this bit in and return the completed object back:
    "userAttributes": {
      "string": "string",
      ...
    },
    "finalUserStatus": "string",
    "messageAction": "string",
    "desiredDeliveryMediums": [ "string", ... ],
    "forceAliasCreation": boolean
  }
}
N.B. As an aside, which you might already know: Lambda payloads always have to be JSON, and JSON cannot store functions, so you should always be able to derive a test payload to use in the console.
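To make the "log your response" suggestion above concrete, here is a rough sketch of a user migration handler in Python that logs what it receives and returns; the attribute mapping and the legacy-store check are placeholders for your own logic.

import json

def handler(event, context):
    # Log the event without the request block so the password is not written to CloudWatch.
    print("UserMigration event:",
          json.dumps({k: v for k, v in event.items() if k != "request"}))

    if event["triggerSource"] == "UserMigration_Authentication":
        # ... validate event["request"]["password"] against your legacy user store here ...
        event["response"]["userAttributes"] = {
            "email": event["userName"],  # placeholder mapping
            "email_verified": "true",
        }
        event["response"]["finalUserStatus"] = "CONFIRMED"
        event["response"]["messageAction"] = "SUPPRESS"

    # Logging the response makes bad responses visible in the Lambda's log stream.
    print("UserMigration response:", json.dumps(event["response"]))
    return event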

How can I customize the entire email notification in Stackdriver Alerting?

Currently, the message specified in the Document field while creating an alerting policy appears in the Document field of the Stackdriver alert email.
I would like to overwrite the entire email message body with my custom content.
How can I overwrite the message body of Stackdriver Alert email with my custom message?
Is there any other workaround to do this?
You should be able to send the notification to a webhook, and this could directly be an HTTP-triggered Cloud Function.
This Cloud Function would receive all the information from the alert, and you can follow this tutorial to use SendGrid to send your alerts.
This is a lot more complex than just setting up email notifications, but it also gives you amazing flexibility regarding alerts: not only can you write the message however you want, you can also process the data in any way you want:
You have low-priority alerts? Then store them and just send a digest once in a while instead of spamming.
Want to change who is sent the alert depending on a calendar rotation? Use the function to look up who should be notified.
And those are just some random quick ideas I got while writing this message.
The information provided in the POST body is this one (that's just a sample):
{
  "incident": {
    "incident_id": "f2e08c333dc64cb09f75eaab355393bz",
    "resource_id": "i-4a266a2d",
    "resource_name": "webserver-85",
    "state": "open",
    "started_at": 1385085727,
    "ended_at": null,
    "policy_name": "Webserver Health",
    "condition_name": "CPU usage",
    "url": "https://app.google.stackdriver.com/incidents/f333dc64z",
    "summary": "CPU for webserver-85 is above the threshold of 1% with a value of 28.5%"
  },
  "version": 1.1
}
You can create a single webhook that handles all the alerts, or you can create a webhook on a per-policy basis to handle things separately.
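For illustration, here is a rough sketch of such an HTTP-triggered Cloud Function in Python (the entry-point name, email addresses, and environment variable are placeholders; the SendGrid call targets its v3 mail-send REST endpoint, so check the tutorial mentioned above for the exact setup):

import os

import requests  # third-party: pip install requests

def alert_to_email(request):
    """HTTP Cloud Function entry point (the name is illustrative)."""
    incident = request.get_json()["incident"]

    # Build the whole message body yourself instead of the stock email template.
    subject = "[{}] {}".format(incident["state"].upper(), incident["policy_name"])
    body = "{}\n\nCondition: {}\nIncident: {}".format(
        incident["summary"], incident["condition_name"], incident["url"])

    resp = requests.post(
        "https://api.sendgrid.com/v3/mail/send",
        headers={"Authorization": "Bearer " + os.environ["SENDGRID_API_KEY"]},
        json={
            "personalizations": [{"to": [{"email": "oncall@example.com"}]}],
            "from": {"email": "alerts@example.com"},
            "subject": subject,
            "content": [{"type": "text/plain", "value": body}],
        },
    )
    return ("", resp.status_code)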

Lambda function not working upon Alexa Skill invocation

I've just created my first (custom) skill. I've set the function up in Lambda by uploading a zip file containing my index.js and all the necessary code required, including node_modules and the base Alexa skill that mine is a child of (as per the tutorials). I made sure I zipped up the files and sub-folders, not the folder itself (as I can see this is a common cause of similar errors), but when I create the skill and test in the web harness with a sample utterance I get:
remote endpoint could not be called, or the response it returned was
invalid.
I'm not sure how to debug this as there's nothing logged in CloudWatch.
I can see in the Lambda request that my slot value is translated/parsed successfully and the intent name is correct.
In AWS Lambda I can invoke the function successfully both with a LaunchRequest and another named intent. From the developer console though, I get nothing.
I've tried copying the JSON from the lambda test (that works) to the developer portal and I get the same error. Here is a sample of the JSON I'm putting in the dev portal (that works in Lambda)
{
  "session": {
    "new": true,
    "sessionId": "session1234",
    "attributes": {},
    "user": {
      "userId": null
    },
    "application": {
      "applicationId": "amzn1.echo-sdk-ams.app.149e75a3-9a64-4224-8bcq-30666e8fd464"
    }
  },
  "version": "1.0",
  "request": {
    "type": "LaunchRequest",
    "requestId": "request5678"
  }
}
The first step in pursuing this problem is probably to test your lambda separate from your skill configuration.
When looking at your Lambda function in the AWS console, note the 'Test' button at the top; next to it there is a drop-down with an option to configure a test event. If you select that option, you will find there are preset test events for Alexa. Choose 'Alexa Start Session' and then choose the 'Save and test' button.
This will give you more detailed feedback about the execution of your lambda.
If your Lambda works fine here, then the problem probably lies in your skill configuration, so I would go back through whatever tutorial and documentation you were using to configure your skill and make sure you did it right.
When you write that the lambda request looks fine I assume you are talking about the service simulator, so that's a good start, but there could still be a problem on the configuration tab.
We built a tool for local skill development and testing.
BST Tools
Requests and responses from Alexa will be sent directly to your local server, so that you can quickly code and debug without having to do any deployments. I have found this to be very useful for our own development.
Let me know if you have any questions.
It's open source: https://github.com/bespoken/bst