Is there any possibility of combining the "prefix" and "wildcard" subscription options?
I want to subscribe to a channel with the following pattern:
channel..messages.*
In my application there is a logging component that should log for all channels the values of the different messages types, like this:
channel.<string:ChannelId>.messages.<string:MessageType>
Is there any chance of subscribing to such a topic with Crossbar / WAMP?
Or is there a better solution for such a use case?
I wanted to leverage something similar to the pull filtering available in the GCP CLI for Pub/Sub subscriptions:
gcloud pubsub subscriptions pull --filter
I'm looking to leverage the same in Java client libraries.
Is there a way to do this?
Thank you.
If you are looking for a Java client library that does Pub/Sub work, see the docs below. If you need something specific, please update your question.
https://cloud.google.com/pubsub/docs/quickstart-client-libraries#pubsub-client-libraries-java
The --filter option in gcloud is not something inherent to Pub/Sub or the service, but rather a utility provided by the gcloud command infrastructure itself. The filtering is done entirely on the client side. Also note that this only affects the display of the list of messages, not which messages are actually returned. If you run gcloud topic filters, you can see more details on this functionality:
Most gcloud commands return a list of resources on success. By default
they are pretty-printed on the standard output. The
--format=NAME[ATTRIBUTES](PROJECTIONS) and --filter=EXPRESSION flags
along with projections can be used to format and change the default
output to a more meaningful result.
Therefore, if you want to perform this action in Java, you will need to write the code to apply the filter upon receiving messages. Based on the Java asynchronous pull sample, you'd need to change the message receiver to something like:
private boolean shouldProcessMessage(PubsubMessage message) {
  // Change to perform whatever filtering you want on messages
  // to determine if they should be processed.
  return true;
}

private void processMessage(PubsubMessage message) {
  // Put logic here to handle the message.
}

...

MessageReceiver receiver =
    (PubsubMessage message, AckReplyConsumer consumer) -> {
      if (shouldProcessMessage(message)) {
        processMessage(message);
      }
      consumer.ack();
    };
This is assuming you don't want messages that do not match your filter to be delivered again. If you do want them to be delivered again, you'd want to call consumer.nack() on those messages instead of consumer.ack().
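For instance, a hedged sketch of an attribute-based shouldProcessMessage (the attribute name event_type is an invented example; PubsubMessage.getAttributesMap() returns a Map<String, String>, which is what the helper below filters on):

```java
import java.util.Map;

public class AttributeFilter {
    // Invented example: only process messages whose "event_type"
    // attribute equals "order". In a real receiver you would call
    // shouldProcessAttributes(message.getAttributesMap()).
    static boolean shouldProcessAttributes(Map<String, String> attributes) {
        return "order".equals(attributes.get("event_type"));
    }

    public static void main(String[] args) {
        System.out.println(shouldProcessAttributes(Map.of("event_type", "order")));  // true
        System.out.println(shouldProcessAttributes(Map.of("event_type", "refund"))); // false
    }
}
```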
If all of the filtering you want to do is on the message attributes, then you can take advantage of Pub/Sub's built-in filtering. This feature allows you to check for the existence of attributes, check for equality in the value of an attribute, and check for a prefix of the value of an attribute.

This type of filter is declared as part of subscription creation, so you wouldn't have any Java code associated with it unless you are creating your subscriptions programmatically. If you use this type of filtering, messages that do not match the filter are not delivered to your subscriber, so your MessageReceiver does not need to check whether it should process such messages; it can assume that the only messages it receives are ones that match the filter.
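For reference, a few examples of Pub/Sub's subscription filter syntax over attributes (the attribute name domain is invented; see the Pub/Sub filtering docs for the full grammar):

```text
attributes:domain                     -- the attribute "domain" exists
attributes.domain = "com"             -- its value equals "com"
hasPrefix(attributes.domain, "co")    -- its value starts with "co"
attributes.domain = "com" AND NOT attributes:"tag"
```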
I have two result topics where messages arrive from different sources almost at the same time.
Topic: sensor1/result --receiving payload--> { "output_from_sensor1": {"result":"OK"} }
Topic: sensor2/result --receiving payload--> { "output_from_sensor2": {"result":"OK"} }
I would like to create an AWS IoT rule that scans these two topics "simultaneously in one query" and takes an action.
I am not sure whether AWS IoT SQL supports "scanning multiple topics" in one query; I found no such references in the AWS docs.
Along the way, I have tried these IoT queries (based on my knowledge of SQL syntax), but no luck so far :(
SELECT output_from_sensor1.result AS final_output.result FROM 'sensor1/result' WHERE (SELECT output_from_sensor2.result FROM 'sensor2/result') = 'OK'
(SELECT output_from_sensor1.result FROM 'sensor1/result') UNION (SELECT output_from_sensor2.result FROM 'sensor2/result')
Thanks much!
AWS IoT rules are triggered by a single MQTT message, and the rule actions only process the message that triggered the rule. So while the + and # wildcards can be used to select from multiple topics, each invocation of the rule handles only one message.
Your assumption that it is possible to "scan multiple topics" in one query implies that multiple messages are involved (one to each topic).
Depending on the problem you are trying to solve, it may make sense to buffer the messages in a queue (e.g. SQS). The processing can then check if multiple messages appear in a given time window to perform a single action on both messages.
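A minimal sketch of that time-window pairing, independent of SQS (the class and method names are invented for illustration): buffer each sensor's result under a shared correlation key and act only when both sensors have reported within the window.

```java
import java.util.HashMap;
import java.util.Map;

public class ResultPairer {
    private final Map<String, Long> sensor1Seen = new HashMap<>();
    private final Map<String, Long> sensor2Seen = new HashMap<>();
    private final long windowMillis;

    public ResultPairer(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Returns true when both sensors have reported for the same key within
    // the time window; the caller would then trigger the single action.
    public boolean record(String sensor, String key, long timestampMillis) {
        Map<String, Long> mine  = sensor.equals("sensor1") ? sensor1Seen : sensor2Seen;
        Map<String, Long> other = sensor.equals("sensor1") ? sensor2Seen : sensor1Seen;
        mine.put(key, timestampMillis);
        Long otherTs = other.get(key);
        return otherTs != null && Math.abs(timestampMillis - otherTs) <= windowMillis;
    }

    public static void main(String[] args) {
        ResultPairer pairer = new ResultPairer(5000);
        System.out.println(pairer.record("sensor1", "run-42", 1000)); // false: only one sensor so far
        System.out.println(pairer.record("sensor2", "run-42", 3000)); // true: both reported within 5 s
    }
}
```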
I am not sure whether AWS IoT SQL supports "scanning multiple topics" in one query; I found no such references in the AWS docs.
I haven't found a definitive statement in the documentation that rules this out. But the wording is consistent with a rule being triggered by one message.
e.g., from the rules tutorial:
The rule is triggered when an MQTT message that matches the topic filter is received on a topic.
The FROM Clause subscribes the rule to a topic or topic filter using the MQTT + and # wildcards.
There are operators like AND and OR but these are not used in the FROM clause. The operators documentation states:
The following operators can be used in SELECT and WHERE clauses.
I want to publish a notification using SNS, and I want subscribers to be able to filter on multiple message attributes. One such message attribute will be a String.Array. For example, the notification can have two attributes, fruit_found and all_fruits_found:
"fruit_found": ["Apple"],"all_fruits_found":["Mango","Apple","Banana"]
There can be use cases where a subscriber might need to know whether both Mango and Apple were found, and only then consume the notification, else drop it. Is it possible to do so in SNS?
So I had to talk to the SNS customer support team and found out that they don't support an AND operation within a String.Array message attribute.
A workaround that I found was to replicate the same message attribute once per filter you want to provide. The message in the question would then have a structure like:
"fruit_found": ["Apple"],
"all_fruits_found_filter_1":["Mango","Apple","Banana"],
"all_fruits_found_filter_2":["Mango","Apple","Banana"]
The filter policy defined for when both Mango and Apple are found would be:
"all_fruits_found_filter_1": ["Mango"] //and
"all_fruits_found_filter_2": ["Apple"]
However, there is a limit of at most 10 message attributes per SNS message. If you are within that boundary, the above solution works fine; otherwise, refer to the answer from Ali.
You cannot achieve this using SNS alone; you might require a Lambda function to receive the SNS message, separate it based on the string, and publish again to a topic.
You might need to create three SNS topics:
For lambda processing
For Mango subscribers
For Apple subscribers
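The AND check such a Lambda would perform on the String.Array attribute values can be sketched as plain logic (names are invented for illustration; the actual receive and republish would use the AWS SDK):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FruitRouter {
    // Returns true when every required fruit appears in the message's
    // "all_fruits_found" attribute values -- the AND semantics that an SNS
    // filter policy cannot express within a single String.Array attribute.
    static boolean matchesAll(List<String> allFruitsFound, Set<String> required) {
        return new HashSet<>(allFruitsFound).containsAll(required);
    }

    public static void main(String[] args) {
        List<String> found = List.of("Mango", "Apple", "Banana");
        System.out.println(matchesAll(found, Set.of("Mango", "Apple")));  // true
        System.out.println(matchesAll(found, Set.of("Mango", "Cherry"))); // false
    }
}
```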
I am looking to set up some CloudFormation stuff that is able to find any email addresses in CloudWatch logs and let us know that one slipped through the cracks. I thought this would be a simple process of using a regex pattern that catches all the possible variations an email address can have and using that as a filter. Having discovered that CloudWatch filtering does not support regex, I've become a bit stumped as to how to write a filter that can be relied upon to catch any email address.
Has anyone done something similar to this, or know where a good place to start would be?
Amazon has launched a service called CloudWatch Logs Insights, which allows you to filter log messages. At the previous link you have examples of queries.
You need to select the CloudWatch log group and the period of time in which to search.
Example:
fields @message
| sort @timestamp desc
| filter @message like /.*47768.*/
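Applied to the email-address use case from the question, a hedged Insights query might look like this (the regex is a common approximation, not a complete email-address matcher):

```text
fields @timestamp, @message
| filter @message like /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/
| sort @timestamp desc
```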
If you're exporting the logs somewhere (like Sumo Logic, Datadog, etc.), that's a better place to do that alerting.
If not, and you're exporting them into S3, then a triggered Lambda function that runs the check might do the trick. It could be expensive in the long term, though.
The solution that we landed upon was to pass strings through a regex pattern that recognises email addresses before they are logged to AWS, replacing any matches with [REDACTED]. This is simple enough to do in a Lambda.
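A minimal sketch of that redaction step (the pattern is a common approximation for email addresses, not a full RFC 5322 matcher; adapt it to your Lambda runtime):

```java
public class EmailRedactor {
    // Common approximation of an email-address pattern.
    private static final String EMAIL_PATTERN =
            "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}";

    // Replace every match with a placeholder before the string is logged.
    static String redact(String input) {
        return input.replaceAll(EMAIL_PATTERN, "[REDACTED]");
    }

    public static void main(String[] args) {
        System.out.println(redact("contact jane.doe@example.com for details"));
        // -> contact [REDACTED] for details
    }
}
```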
I have a system in which I get a lot of messages. Each message has a unique ID, but it can also receive updates during its lifetime. As the time between message sending and handling can be very long (weeks), they are stored in S3. For each message only the last version is needed. My problem is that occasionally two messages with the same ID arrive together, but with two versions (older and newer).
Is there a way for S3 to have a conditional PutObject request where I can declare "put this object unless I have a newer version in S3"?
I need an atomic operation here.
That's not the use case for S3, which is eventually consistent. Some ideas:
You could try to partition your messages - all messages that start with A-L go to one box, M-Z go to another box. Then each box locally checks that there are no duplicates.
Your best bet is probably some kind of database. Depending on your use case, you could use a regular SQL database, or maybe a simple RAM-only database like Redis. Write to multiple Redis DBs at once to avoid a SPOF.
There is SWF, which can create a unique processing queue for each item, but that would probably mean more HTTP requests than just checking in S3.
David's idea about turning on versioning is interesting. You could have a daemon that periodically trims off the old versions. When reading, you would have to do "read repair" where you search the versions looking for the newest object.
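As an illustration of the comparison involved, here is a sketch assuming each message carries a monotonically increasing version number (all names invented; in practice this logic would live in your reader or daemon, since S3 itself does not perform it atomically):

```java
import java.util.HashMap;
import java.util.Map;

public class LatestVersionStore {
    record Versioned(long version, String payload) {}

    private final Map<String, Versioned> store = new HashMap<>();

    // "Put unless a newer version exists": returns true if the write won.
    // This is the atomic compare-and-put that plain S3 PutObject cannot express.
    synchronized boolean putIfNewest(String id, long version, String payload) {
        Versioned current = store.get(id);
        if (current != null && current.version() >= version) {
            return false; // an equal-or-newer version is already stored
        }
        store.put(id, new Versioned(version, payload));
        return true;
    }

    String read(String id) {
        Versioned v = store.get(id);
        return v == null ? null : v.payload();
    }

    public static void main(String[] args) {
        LatestVersionStore s = new LatestVersionStore();
        System.out.println(s.putIfNewest("msg-1", 2, "new")); // true
        System.out.println(s.putIfNewest("msg-1", 1, "old")); // false: older version loses
        System.out.println(s.read("msg-1"));                  // new
    }
}
```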
Couldn't this be solved by using tags, and using a Condition on that when using PutObject? See "Example 3: Allow a user to add object tags that include a specific tag key and value" here: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-tagging.html#tagging-and-policies