I want to send custom attribute values along with a message in an SQS queue. I've come across MessageAttributeValue and MessageSystemAttributeValue, which have almost the same definition in the AWS documentation.
What's the difference between them?
MessageAttributeValue documentation
MessageSystemAttributeValue documentation
The difference is in how they are used:
Message attribute: You can use message attributes to attach custom metadata to Amazon SQS messages for your applications.
Message system attribute: You can use message system attributes to store metadata for other AWS services. Currently, the only supported message system attribute is AWSTraceHeader, and its value must be a correctly formatted AWS X-Ray trace header string.
MessageAttributes are normal attributes of a message. MessageSystemAttributes are special attributes, and there is only one of those:
Currently, the only supported message system attribute is AWSTraceHeader. Its type must be String and its value must be a correctly formatted AWS X-Ray trace header string.
This is what you find when you look at the actual usage of the data type within the documentation instead of just looking at the raw data type itself, e.g.: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html
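To make the difference concrete, here is a minimal sketch using the AWS SDK for Java v2 (the queue URL and attribute name are hypothetical) that sends a message carrying one custom message attribute plus the AWSTraceHeader system attribute:

import java.util.Map;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.MessageAttributeValue;
import software.amazon.awssdk.services.sqs.model.MessageSystemAttributeNameForSends;
import software.amazon.awssdk.services.sqs.model.MessageSystemAttributeValue;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class SendWithAttributes {
  public static void main(String[] args) {
    try (SqsClient sqs = SqsClient.create()) {
      sqs.sendMessage(SendMessageRequest.builder()
          .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue") // hypothetical
          .messageBody("hello")
          // Custom metadata for your own application.
          .messageAttributes(Map.of("environment", MessageAttributeValue.builder()
              .dataType("String")
              .stringValue("staging")
              .build()))
          // Metadata for other AWS services; AWSTraceHeader is currently the
          // only supported system attribute and must be a valid X-Ray header.
          .messageSystemAttributes(Map.of(
              MessageSystemAttributeNameForSends.AWS_TRACE_HEADER,
              MessageSystemAttributeValue.builder()
                  .dataType("String")
                  .stringValue("Root=1-5759e988-bd862e3fe1be46a994272793;Sampled=1")
                  .build()))
          .build());
    }
  }
}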
I want to leverage pull filtering similar to what the GCP CLI offers for Pub/Sub subscriptions:
gcloud pubsub subscriptions pull --filter
I'm looking to do the same with the Java client libraries.
Is there a way to do this?
Thank you.
If you are looking for a Java client library that works with Pub/Sub, see the docs below. If you need something more specific, please update your question accordingly.
https://cloud.google.com/pubsub/docs/quickstart-client-libraries#pubsub-client-libraries-java
The --filter option in gcloud is not something inherent to Pub/Sub or the service, but rather a utility provided by the gcloud command infrastructure itself. The filtering is done entirely on the client side. Also note that it only affects the display of the list of messages, not which messages are actually returned. If you run gcloud topic filters, you can see more details on this functionality:
Most gcloud commands return a list of resources on success. By default they are pretty-printed on the standard output. The --format=NAME[ATTRIBUTES] and --filter=EXPRESSION flags along with projections can be used to format and change the default output to a more meaningful result.
Therefore, if you want to perform this action in Java, you will need to write the code to apply the filter upon receiving messages. Based on the Java asynchronous pull sample, you'd need to change the message receiver to something like:
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.pubsub.v1.PubsubMessage;

private boolean shouldProcessMessage(PubsubMessage message) {
  // Change to perform whatever filtering you want on messages
  // to determine if they should be processed.
  return true;
}

private void processMessage(PubsubMessage message) {
  // Put logic here to handle the message.
}

...

MessageReceiver receiver =
    (PubsubMessage message, AckReplyConsumer consumer) -> {
      if (shouldProcessMessage(message)) {
        processMessage(message);
      }
      consumer.ack();
    };
This is assuming you don't want messages that do not match your filter to be delivered again. If you do want them to be delivered again, you'd want to call consumer.nack() on those messages instead of consumer.ack().
If all of the filtering you want to do is on the message attributes, then you can take advantage of Pub/Sub's built-in filtering. This feature allows you to check for the existence of attributes, check for equality of an attribute's value, and check for a prefix of an attribute's value. This type of filter is declared as part of subscription creation, so you wouldn't have any Java code associated with it unless you are creating your subscriptions programmatically. If you use this type of filtering, messages that do not match the filter are not delivered to your subscriber, so your MessageReceiver does not need to check whether it should process such messages; it can assume that the only messages it receives are ones that match the filter.
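For reference, here is a minimal sketch of creating such a filtered subscription with the Java client library (project, topic, subscription, and attribute names are all hypothetical):

import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.Subscription;
import com.google.pubsub.v1.TopicName;

public class CreateFilteredSubscription {
  public static void main(String[] args) throws Exception {
    try (SubscriptionAdminClient client = SubscriptionAdminClient.create()) {
      client.createSubscription(
          Subscription.newBuilder()
              .setName(ProjectSubscriptionName.of("my-project", "my-filtered-sub").toString())
              .setTopic(TopicName.of("my-project", "my-topic").toString())
              // Deliver only messages whose "severity" attribute equals "error".
              .setFilter("attributes.severity = \"error\"")
              .build());
    }
  }
}

Note that a subscription's filter can only be set at creation time; you cannot change it on an existing subscription.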
I have created a rule to send the incoming IoT messages to an S3 bucket.
The problem is that any time IoT receives a message, it is sent to S3 and stored in a new file (with the same name), overwriting the previous one.
I want this S3 file to keep all the data from before and not be truncated each time a new message is stored.
How can I do that?
When you set up an IoT S3 rule action, you need to specify a bucket and a key. The key is what we might think of as a "path and file name". As the docs say, you can specify the key string by using a substitution template, which is just a fancy way of saying "build a path out of these pieces of information". When you are building your substitution template, you can reference fields inside the message as well as use a number of other functions.
In particular, look at the topic and timestamp functions, as well as some of the string manipulation functions.
Let's say your topic names are something like things/thing-id-xyz/location and you just want to store each incoming JSON message in a "folder" for the thing-id it came in from. You might specify a key like:
${topic(2)}/${timestamp()}.json
It would evaluate to something like:
thing-id-xyz/1481825251155.json
where the timestamp part is the time the message came in. That will be different for each message, and then the messages would not overwrite each other.
You can also specify parts of the message itself. Let's imagine our incoming messages look something like this:
{
  "time": "2022-01-13T10:04:03Z",
  "latitude": 40.803274,
  "longitude": -74.237926,
  "note": "Great view!"
}
Let's say you want to use the nice ISO date value you have in your data instead of the timestamp of the file. You could reference the time field no problem, like:
${topic(2)}/${time}.json
Now the file would be written as the key:
thing-id-xyz/2022-01-13T10:04:03Z.json
You should be able to find some combination of values that works for your needs and that, most importantly, is UNIQUE for each message, so the messages don't overwrite each other in S3.
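If you create your rules programmatically rather than in the console, here is a minimal sketch with the AWS SDK for Java v2 (the rule name, bucket, and role ARN are hypothetical) of wiring such a key template into the S3 rule action:

import software.amazon.awssdk.services.iot.IotClient;
import software.amazon.awssdk.services.iot.model.Action;
import software.amazon.awssdk.services.iot.model.CreateTopicRuleRequest;
import software.amazon.awssdk.services.iot.model.S3Action;
import software.amazon.awssdk.services.iot.model.TopicRulePayload;

public class CreateS3Rule {
  public static void main(String[] args) {
    try (IotClient iot = IotClient.create()) {
      iot.createTopicRule(CreateTopicRuleRequest.builder()
          .ruleName("store_location_messages") // hypothetical
          .topicRulePayload(TopicRulePayload.builder()
              .sql("SELECT * FROM 'things/+/location'")
              .actions(Action.builder()
                  .s3(S3Action.builder()
                      .bucketName("my-iot-bucket") // hypothetical
                      // Unique key per message: thing id from the topic plus timestamp.
                      .key("${topic(2)}/${timestamp()}.json")
                      .roleArn("arn:aws:iam::123456789012:role/iot-s3-role") // hypothetical
                      .build())
                  .build())
              .build())
          .build());
    }
  }
}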
You can do it using AWS IoT SQL variable expressions. For example, use ${newuuid()} as part of the key; this will create a new S3 object for each message received.
See more about SQL functions: https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-functions.html
You can't do this with the S3 IoT Rule Action. You can get similar results using AWS Firehose, which will batch up several messages and write them to one file. You will still end up with multiple files, though.
I have set up a rule in CloudWatch to monitor Glue ETL state changes. On a state change I send a notification to SNS. I have modified the input transformer to get a custom body for the email, but I cannot figure out how to change the subject line of the email. It still shows the default "AWS Notification Message".
My input transformer:
{"state":"$.detail.state"}
"The JOB has changed state to <state>."
Use a Lambda function—rather than the Amazon SNS topic—as a target for the CloudWatch Events rule. Then, configure the Lambda function to publish a custom message to the Amazon SNS topic when triggered by the CloudWatch Events rule.
Documented here: https://aws.amazon.com/premiumsupport/knowledge-center/change-sns-email-for-cloudwatch-events/
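As a minimal sketch, assuming the rule targets a Lambda written in Java (the topic ARN is hypothetical; the AWS SDK for Java v2 and aws-lambda-java-core are assumed as dependencies), the function could publish with a custom subject like this:

import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;

public class GlueStateNotifier implements RequestHandler<Map<String, Object>, Void> {
  private final SnsClient sns = SnsClient.create();

  @Override
  public Void handleRequest(Map<String, Object> event, Context context) {
    // The CloudWatch Events rule delivers the Glue state change as JSON;
    // the "detail" object holds fields such as "state".
    @SuppressWarnings("unchecked")
    Map<String, Object> detail = (Map<String, Object>) event.get("detail");
    String state = String.valueOf(detail.get("state"));
    sns.publish(PublishRequest.builder()
        .topicArn("arn:aws:sns:us-east-1:123456789012:glue-notifications") // hypothetical
        .subject("Glue job state change: " + state) // the custom subject line
        .message("The JOB has changed state to " + state + ".")
        .build());
    return null;
  }
}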
Transformer (no way Jose)
As far as I can tell, there is currently no way to control the email subject with a transformer. Typically you control the notification body for a rule through the transformer, which modifies the input JSON message (e.g. in the case of a build: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-build-notifications.html#sample-build-notifications-ref ). Based on what I see in the documentation, this only modifies part of the body embedded between the header and the footer of the email payload.
JSON (also not possible)
1. Since all notifications are generated with an API call carrying a JSON payload, you can experiment and configure. Using the CLI you can specify a JSON format with the --message-structure attribute. However, the subject is not part of the JSON payload itself and is sent as a separate parameter, --subject (see the example below under Notes), so you won't be able to configure it unless AWS modifies either the UI or the JSON payload.
2. In order to exercise greater control over your output you might have to use JSON (select "Constant (JSON text)"), which is documented for mobile messaging https://docs.aws.amazon.com/sns/latest/dg/mobile-push-send-custommessage.html but not very well for HTTP https://docs.aws.amazon.com/sns/latest/dg/sns-message-and-json-formats.html and decently for the CLI https://docs.aws.amazon.com/cli/latest/reference/sns/publish.html
3. You can go to the console https://console.aws.amazon.com/sns/v2/ and click on "Publish a Message", which allows you to specify a subject. Notice that there is a "JSON message generator", but that's only for the body.
Coding Workaround (possible ...kinda)
If you feel really determined you can explore a workaround: look at the API and figure out which call sends a notification that includes a subject. Create a Lambda function that executes that call. From the rule, invoke the Lambda :-) and you are done. Where there's a will, there's a way...
Notes:
aws sns publish --topic-arn arn:aws:sns:us-east-1:652499160872:DP-Build --message-structure json --subject "Test Build subject" --message "{ \"default\":\"Foo\", \"email\":\"Bar\"}"
According to the docs there is a "Subject" key you can pass as a parameter:
Subject
The Subject parameter specified when the notification was published to the topic. Note that this is an optional parameter. If no Subject was specified, then this name/value pair does not appear in this JSON document.
set "detail-type":"Glue ETL State-change Notification"
You might need to look at https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-Input-Transformer-Tutorial.html
Suppose I have a task of updating a user via a third-party API call. Is it okay to put the actual user data inside the message (if it fits)? Or should I only provide an ID in the message so the worker can retrieve the updated record from my local database?
You need to check what level of compliance is required for your infrastructure, to see what kind of data you want to put in the queue.
If there aren't any compliance restrictions, you are free to put any kind of data in your own infrastructure on AWS.
I tried to enable notifications on an S3 bucket, but I get long JSON-formatted data in my registered email. I want to filter on the notification's attributes, such as "object deleted" and "date-time" only. Is that possible?
If you want to either limit the fields returned or filter the events that get generated, you are going to have to do that yourself.
The easiest way would probably be to have the S3 event notifications sent to a custom Lambda function (that you write) that can filter and/or reformat the raw S3 event notification and then have Lambda send it on to your downstream consumer, e.g. via email if you want. There is nothing built into AWS to do the filtering/reformatting for you.
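As a minimal sketch, assuming a Java Lambda that receives the bucket's event notifications and an SNS topic (the TOPIC_ARN environment variable is hypothetical) whose email subscribers are your consumers, the filtering and reformatting could look like:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;

public class S3EventFilter implements RequestHandler<S3Event, Void> {

  // Hypothetical environment variable holding the SNS topic ARN.
  private static final String TOPIC_ARN = System.getenv("TOPIC_ARN");
  private final SnsClient sns = SnsClient.create();

  @Override
  public Void handleRequest(S3Event event, Context context) {
    event.getRecords().forEach(record -> {
      // Keep only delete events; silently drop everything else.
      if (record.getEventName().startsWith("ObjectRemoved")) {
        String summary = String.format("Object deleted: %s at %s",
            record.getS3().getObject().getKey(), record.getEventTime());
        sns.publish(PublishRequest.builder()
            .topicArn(TOPIC_ARN)
            .subject("S3 object deleted")
            .message(summary)
            .build());
      }
    });
    return null;
  }
}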