I have set up a rule in CloudWatch to monitor Glue ETL jobs. On a state change, it sends a notification to SNS. I have modified the input transformer to get a custom email body, but I can't figure out how to change the subject line of the email. It still shows the default "AWS Notification Message".
My input transformer:
Input path: {"state":"$.detail.state"}
Input template: "The JOB has changed state to <state>."
Use a Lambda function—rather than the Amazon SNS topic—as a target for the CloudWatch Events rule. Then, configure the Lambda function to publish a custom message to the Amazon SNS topic when triggered by the CloudWatch Events rule.
Documented here: https://aws.amazon.com/premiumsupport/knowledge-center/change-sns-email-for-cloudwatch-events/
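A minimal sketch of that Lambda-in-the-middle approach, assuming a Python runtime and a TOPIC_ARN environment variable on the function (the helper name and the jobName handling are illustrative, not from the original answer):

```python
import os

def build_notification(event):
    """Derive an email subject and body from a Glue job state-change event."""
    detail = event.get("detail", {})
    state = detail.get("state", "UNKNOWN")
    job = detail.get("jobName", "unknown-job")
    subject = f"Glue job {job} changed state to {state}"
    message = f"The JOB has changed state to {state}."
    return subject, message

def lambda_handler(event, context):
    subject, message = build_notification(event)
    # Subject is a top-level parameter of sns:Publish, which is why an
    # input transformer (which only shapes the body) can never set it.
    import boto3  # bundled with the AWS Lambda Python runtime
    sns = boto3.client("sns")
    sns.publish(TopicArn=os.environ["TOPIC_ARN"], Subject=subject, Message=message)
    return {"subject": subject}
```

Point the CloudWatch Events rule at this function instead of the topic, and the subject becomes whatever you build from the event.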
Transformer (no way Jose)
As far as I can tell, there is currently no way to control the email subject with a transformer. Typically you control the notification body for a rule through the transformer, which modifies the input JSON message (e.g., in the case of a build: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-build-notifications.html#sample-build-notifications-ref ). Based on the documentation, this only modifies the part of the body embedded between the header and the footer of the email payload.
JSON (also not possible)
1. Since all notifications are generated by an API call with a JSON payload, you can experiment with the configuration. Using the CLI you can specify a JSON format with the --message-structure attribute. However, the subject is not part of the JSON payload itself; it is sent as a separate "--subject" parameter (see example below), so you won't be able to configure it unless AWS modifies either the UI or the JSON payload.
2. In order to exercise greater control over your output, you might have to use JSON (select "Constant (JSON text)"), which is documented for mobile messaging https://docs.aws.amazon.com/sns/latest/dg/mobile-push-send-custommessage.html but not very well for HTTP https://docs.aws.amazon.com/sns/latest/dg/sns-message-and-json-formats.html, though decently for the CLI https://docs.aws.amazon.com/cli/latest/reference/sns/publish.html
3. You can go to the console https://console.aws.amazon.com/sns/v2/ and click "Publish a Message", which allows you to specify a subject. Notice that there is a "JSON message generator", but that's only for the body.
Coding Workaround (possible ...kinda)
If you feel really determined you can explore a workaround: look at the API and figure out which call lets you include a subject. Create a Lambda function that executes that call, then invoke the Lambda from the rule :-) and you are done. Where there's a will, there's a way...
Notes:
aws sns publish --topic-arn arn:aws:sns:us-east-1:652499160872:DP-Build --message-structure json --subject "Test Build subject" --message "{ \"default\":\"Foo\", \"email\":\"Bar\"}"
According to the docs there is a "Subject" key you can pass as a parameter:
Subject
The Subject parameter specified when the notification was published to the topic. Note that this is an optional parameter. If no Subject was specified, then this name/value pair does not appear in this JSON document.
set "detail-type":"Glue ETL State-change Notification"
You might need to look at https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-Input-Transformer-Tutorial.html
Related
I want to send custom attribute values along with the message in the SQS queue. I've come across MessageAttributeValue and MessageSystemAttributeValue. Both have almost the same definition in the AWS documentation.
What's the difference between them?
MessageAttributeValue documentation
MessageSystemAttributeValue documentation
The difference is in how they are used:
Message attribute: You can use message attributes to attach custom metadata to Amazon SQS messages for your applications.
Message system attribute: You can use message system attributes to store metadata for other AWS services (currently, the only supported message system attribute is AWSTraceHeader, and its value must be a correctly formatted AWS X-Ray trace header string).
MessageAttributes are normal attributes of a message. MessageSystemAttributes are special attributes, and there is only one of those:
Currently, the only supported message system attribute is AWSTraceHeader. Its type must be String and its value must be a correctly formatted AWS X-Ray trace header string.
Which is what you get when you look at the actual usage of the data type within the documentation instead of just looking at the raw data type itself, e.g.: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html
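To make the distinction concrete, here is a hedged boto3-style sketch (queue URL, attribute name, and helper name are all illustrative): custom metadata goes in MessageAttributes, while MessageSystemAttributes accepts only AWSTraceHeader.

```python
def build_send_message_kwargs(queue_url, body, author, trace_header=None):
    """Assemble SendMessage arguments: custom metadata vs. the one system attribute."""
    kwargs = {
        "QueueUrl": queue_url,
        "MessageBody": body,
        # Custom, application-defined metadata goes here.
        "MessageAttributes": {
            "Author": {"DataType": "String", "StringValue": author},
        },
    }
    if trace_header:
        # The ONLY supported system attribute: an X-Ray trace header.
        kwargs["MessageSystemAttributes"] = {
            "AWSTraceHeader": {"DataType": "String", "StringValue": trace_header},
        }
    return kwargs

# Sending would then be (requires boto3 and AWS credentials):
# import boto3
# boto3.client("sqs").send_message(**build_send_message_kwargs(
#     "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
#     "hello", "alice",
#     trace_header="Root=1-5759e988-bd862e3fe1be46a994272793;Sampled=1"))
```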
I am using AWS lambda to process a csv file and send out a message in chime to my team on a daily basis. I need to make a part of the message a hyperlink.
Current output: ID: https://www.example.com/ID=123456
Required output: ID: 123456
and when one clicks on the ID, it should take the user to a link like "https://www.google.com/ID=123456"
I am using urllib3 to send the output from AWS Lambda to the Chime group. I believe Chime only offers Markdown or code-type formatting. I would like to know if it is possible to implement a solution in the AWS Lambda function itself.
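One approach inside the Lambda itself, assuming the Chime incoming-webhook integration: Chime renders Markdown when the message body starts with "/md", so you can send the ID as a Markdown link (the base URL, helper names, and webhook variable are placeholders):

```python
def format_id_link(item_id, base_url="https://www.example.com/ID="):
    """Markdown hyperlink whose visible text is just the ID."""
    return f"[{item_id}]({base_url}{item_id})"

def build_chime_payload(item_id):
    # Chime incoming webhooks render Markdown when the message starts
    # with "/md" (assumption: verify against your room's webhook).
    return {"Content": f"/md ID: {format_id_link(item_id)}"}

# Posting from Lambda with urllib3 (WEBHOOK_URL is a placeholder):
# import json, urllib3
# urllib3.PoolManager().request(
#     "POST", WEBHOOK_URL,
#     body=json.dumps(build_chime_payload("123456")),
#     headers={"Content-Type": "application/json"})
```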
Need some help with Lambda invocation and authentication. I have an AWS Lambda function that is invoked from the AWS IoT MQTT feed based on a specific topic. The invocation happens when an authenticated IoT Thing publishes to MQTT on that topic. My question is: how do I see who invoked it? I need this information so I know which user to store the published information under in the database. I'm guessing there should be some environment variables that carry this information, but I haven't found them. Maybe I've been looking in all the wrong places :/
Many thanks,
Marcus
You should be able to modify the Lambda trigger in your IoT configuration to include the client ID by using something like the following SQL statement:
select clientId() as clientId, *
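With that SQL statement, the client ID arrives as a field of the Lambda event payload itself, not an environment variable; a minimal handler sketch (field names other than clientId are illustrative):

```python
def lambda_handler(event, context):
    """IoT rule `SELECT clientId() AS clientId, *` merges the publisher's
    client ID into the event dict alongside the original MQTT payload."""
    client_id = event.get("clientId", "unknown")
    # Everything except the injected clientId is the original message.
    payload = {k: v for k, v in event.items() if k != "clientId"}
    # ...store `payload` under `client_id` in your database here...
    return {"clientId": client_id, "payload": payload}
```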
How are you?
You could send the user in the topic message. Wouldn't that be easier? I'm not sure how to get it from an environment variable.
I'm trying to use Go to send objects in a S3 bucket to Textract and collect the response.
I'm using the aws go sdk package and able to connect to my S3 bucket and list all the objects contained within. So far so good. I now need to be able to send one of those objects (a .pdf file) to Textract and collect the response(s).
The AWS Go SDK content for interacting with Textract seems to be quite extensive, but I cannot find a good example of how to do this.
I would be very grateful for a sample or advice on how to do this.
To start a job, you invoke StartDocumentTextDetection, using a DocumentLocation to specify the file, and you specify an SNS topic where Textract will publish a notification when it has finished processing your job.
You have now two possibilities:
Subscribe to the SNS topic and, when you receive a message, retrieve the result
Create a Lambda function triggered by the SNS topic, which retrieves the result.
The second option is IMO better because it uses less computation time (nothing runs until the job has finished).
To retrieve the job results, you use GetDocumentTextDetection
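The overall flow looks like this; it is sketched with boto3 for brevity, and the Go SDK's StartDocumentTextDetection / GetDocumentTextDetection take the same request shapes (bucket, key, topic ARN, and role ARN are placeholders):

```python
def s3_document_location(bucket, name):
    """The DocumentLocation shape Textract expects for an object in S3."""
    return {"S3Object": {"Bucket": bucket, "Name": name}}

# The asynchronous flow (requires boto3 and AWS credentials):
# import boto3
# textract = boto3.client("textract")
# job = textract.start_document_text_detection(
#     DocumentLocation=s3_document_location("my-bucket", "my-file.pdf"),
#     NotificationChannel={"SNSTopicArn": TOPIC_ARN, "RoleArn": ROLE_ARN})
# # Later, after the SNS notification says the job is done:
# result = textract.get_document_text_detection(JobId=job["JobId"])
```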
If anyone else reaches this page searching for an answer:
I understood the documentation as saying I could just call the StartDocumentAnalysis function through the Textract SDK, but what was missing is that you need to create a new Session first and make the calls based on that session:
https://docs.aws.amazon.com/sdk-for-go/api/service/textract/#New
I have created a rule to send the incoming IoT messages to a S3 bucket.
The problem is that every time IoT receives a message, it is sent and stored in S3 as a new file with the same name, overwriting the previous one.
I want this S3 file to keep all the earlier data rather than being truncated each time a new message is stored.
How can I do that?
When you set up an IoT S3 rule action, you need to specify a bucket and a key. The key is what we might think of as a "path and file name". As the docs say, we can specify the key string by using a substitution template, which is just a fancy way of saying "build a path out of these pieces of information". When you are building your substitution template, you can reference fields inside the message as well as use a number of other functions.
Especially look at the topic and timestamp functions, as well as some of the string-manipulation functions.
Let's say your topic names are something like things/thing-id-xyz/location and you just want to store each incoming JSON message in a "folder" for the thing-id it came in from. You might specify a key like:
${topic(2)}/${timestamp()}.json
It would evaluate to something like:
thing-id-xyz/1481825251155.json
where the timestamp part is the time the message came in. That will be different for each message, and then the messages would not overwrite each other.
You can also specify parts of the message itself. Let's imagine our incoming messages look something like this:
{
  "time": "2022-01-13T10:04:03Z",
  "latitude": 40.803274,
  "longitude": -74.237926,
  "note": "Great view!"
}
Let's say you want to use the nice ISO date value you have in your data instead of the timestamp of the file. You could reference the time field no problem, like:
${topic(2)}/${time}.json
Now the file would be written as the key:
thing-id-xyz/2022-01-13T10:04:03Z.json
You should be able to find some combination of values that works for your needs, and that most importantly, is UNIQUE for each message so they don't overwrite each other in S3.
You can do it using AWS IoT SQL variable expressions. For example, use ${newuuid()} as the key. This will create a new S3 object for each message received.
See more about SQL Functions https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-functions.html
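For reference, the S3 action inside the topic rule would look something like this (bucket name and role ARN are placeholders); the ${newuuid()} in the key is evaluated per message, so each one lands in its own object:

```json
{
  "s3": {
    "bucketName": "my-iot-bucket",
    "key": "${newuuid()}.json",
    "roleArn": "arn:aws:iam::123456789012:role/iot-s3-role"
  }
}
```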
You can't do this with the S3 IoT rule action. You can get similar results using AWS Firehose, which will batch up several messages and write them to one file. You will still end up with multiple files, though.