AWS SNS topic initial value

I am looking for something I have dubbed 'state' topics.
Once a subscriber has successfully subscribed, it receives the current value of the topic: the current/actual state.
After that, the subscriber receives updates to the topic as usual.
Ideas?
A two-phase approach might work:
Topic A transports the actual state change from an instance to farm level. A Lambda could subscribe to this topic and persist the state.
Another topic B transports from farm level to other interested parties.
The question remains: can I make a single 'subscription accepted' call and provide the current state?
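As a minimal sketch of the persistence half of that two-phase idea, assuming topic A feeds a Lambda that writes the latest state into a hypothetical DynamoDB table named farm_state (the message shape is also an assumption):

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("farm_state")  # hypothetical table keyed on state_key

def handler(event, context):
    """Lambda subscribed to topic A: persist the latest state at farm level."""
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        # Assumed message shape: {"key": ..., "state": ...}
        table.put_item(Item={"state_key": message["key"], "state": message["state"]})
```

New subscribers to topic B could then fetch the current state from the table once, before relying on the live updates.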


AWS SQS Selective Polling Pattern

I have a system where I publish updates to a shared topic meant for specific consumers.
I noticed messages getting stuck in the queue because SQS consumers cannot listen selectively, so messages are being hijacked.
Example:
Given: Message{destination: A, payload: 1234}
Given: ConsumerA and ConsumerB
I expect the message to be processed by ConsumerA. However, it is continuously hijacked by ConsumerB: it receives the message, then refuses to process it since the destination field doesn't match, so the visibility timeout expires and the message is put back on the queue. But due to the nature of SQS, ConsumerB has an equal chance of picking up the message again.
My question is, what patterns are used to solve this type of issue?
I'm considering creating a queue per consumer, but that has drawbacks specific to the system I'm working on.
If I could only listen for messages with matching attributes, problem solved, but that's seemingly not the case.
Is there any other way?
Sharing a single Amazon SQS queue is not an appropriate architecture for your use-case.
If you want your consumers to be able to 'request' a message from a particular subset, you should either use separate SQS queues or use a database. You could even store objects in Amazon S3 as a form of noSQL database.
Having consumers grab messages and then 'send them back' to the queue is not compatible with the design of the Amazon SQS service.
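For illustration, a minimal queue-per-consumer sketch with boto3 (the queue names and the destination-based routing are assumptions, not a prescribed design):

```python
import boto3

sqs = boto3.client("sqs")

# One queue per consumer (hypothetical names) instead of one shared queue.
queue_urls = {
    "A": sqs.create_queue(QueueName="consumer-a-queue")["QueueUrl"],
    "B": sqs.create_queue(QueueName="consumer-b-queue")["QueueUrl"],
}

def publish(destination: str, payload: str) -> None:
    # The producer routes on the destination field, so a consumer never
    # receives a message meant for someone else.
    sqs.send_message(QueueUrl=queue_urls[destination], MessageBody=payload)

def consume(destination: str) -> None:
    # Long-poll the consumer's own queue; no inspect-and-reject cycle needed.
    response = sqs.receive_message(
        QueueUrl=queue_urls[destination], MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for message in response.get("Messages", []):
        print("processing", message["Body"])
        sqs.delete_message(
            QueueUrl=queue_urls[destination], ReceiptHandle=message["ReceiptHandle"]
        )
```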

Is it possible to kick off two different Cloud Builds based on subscriptions to the same topic?

Currently I have a Cloud Build application that is kicked off by a Pub/Sub trigger subscribing to, e.g., topic1.
I would like to know if I can kick off another Cloud Build application by subscribing to the same topic. Is there a way to configure the message (or the trigger) so that if message1 is published to topic1, then cloudbuild1 is kicked off, and if message2 is published to topic1, then cloudbuild2 is kicked off?
Kind regards
marco
When you create a subscription on a topic, all the messages published to the topic are replicated to each subscription.
Therefore, if you have TOPIC with Sub1 and Sub2 and you publish 1 message to TOPIC, you will have this message in both Sub1 and Sub2.
However, you can set a filter on messages when you create a subscription. You can set this filter only at creation time and can't update it later; you need to delete and recreate the subscription if you want to change the filter.
In addition, you can filter only on message attributes, not on the message body content.
Therefore, when using filters, design your filter wisely from the beginning, and when you publish a message to TOPIC, add attributes that allow you to route the messages to the correct subscription.
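As a hedged sketch, assuming a project my-project and builds distinguished by a hypothetical build_target attribute, creating a filtered subscription with the Python client could look like this:

```python
from google.cloud import pubsub_v1

project_id = "my-project"  # hypothetical project

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()
topic_path = publisher.topic_path(project_id, "topic1")

# Subscription that only receives messages labelled for cloudbuild1;
# remember the filter can only be set at creation time.
with subscriber:
    subscriber.create_subscription(
        request={
            "name": subscriber.subscription_path(project_id, "cloudbuild1-sub"),
            "topic": topic_path,
            "filter": 'attributes.build_target = "cloudbuild1"',
        }
    )

# Publishers then set the routing attribute on each message.
publisher.publish(topic_path, data=b"trigger build", build_target="cloudbuild1").result()
```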

How to route dead letter messages back to the original topic?

I have a Google Cloud Pub/Sub subscription that is using a dead-letter topic. I recently had an outage that prevented a number of messages from being processed, and they ended up in the dead-letter topic.
The outage has been resolved, and I'd like to easily send the contents of the dead-letter subscription back to the original subscription. They're all still present in the queue (I have nothing consuming the dead-letter sub), so I just need to route them somewhere.
This is an admin task so I'd like it to be manually initiated, if that makes any difference. Ideally via the UI but I can't see anything there.
You have a few options:
Use a Dataflow pipeline to move messages from the dead-letter topic to your topic.
Update your existing pipeline to read from both the original topic and the dead-letter topic based on configuration.
Create a new system that, when enabled, moves messages from one topic to another.
The right answer might depend on your system design and requirements.
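For the third option, a minimal replay script with the Python client could pull from a subscription attached to the dead-letter topic and republish to the original topic (the project, subscription, and topic names here are hypothetical):

```python
from google.cloud import pubsub_v1

project_id = "my-project"
dead_letter_sub = "my-dead-letter-sub"  # subscription on the dead-letter topic
original_topic = "my-topic"

subscriber = pubsub_v1.SubscriberClient()
publisher = pubsub_v1.PublisherClient()
sub_path = subscriber.subscription_path(project_id, dead_letter_sub)
topic_path = publisher.topic_path(project_id, original_topic)

while True:
    # Pull a batch of dead-lettered messages; an empty batch means the
    # backlog is drained.
    response = subscriber.pull(request={"subscription": sub_path, "max_messages": 100})
    if not response.received_messages:
        break
    ack_ids = []
    for received in response.received_messages:
        msg = received.message
        # Republish to the original topic, preserving attributes.
        publisher.publish(topic_path, data=msg.data, **dict(msg.attributes)).result()
        ack_ids.append(received.ack_id)
    # Only acknowledge after the republish has succeeded.
    subscriber.acknowledge(request={"subscription": sub_path, "ack_ids": ack_ids})
```

Being manually initiated, this could run from a workstation or a one-off Cloud Function, since, as you note, there is nothing for this in the UI.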
In case your use case for dead letter topic always includes moving the messages back to the primary topic after a delay, you might want to use configurable exponential backoff in Cloud Pub/Sub. This feature will have general availability towards the end of Q2 2020.

Pub/Sub: how can I use Pub/Sub to check for messages in an email account?

I am new to Pub/Sub on GCP and have some difficulty understanding some concepts. If I want to be notified every time I have a new message in my mailbox, can I use Pub/Sub for that? How does the push notification work in that case? I understand the subscriber concepts, but I have some difficulty with the publisher concepts. Can anyone help?
Although I am not familiar with the Gmail API (I specialize mainly in GCP), a quick read over the documentation provides some really useful insights on this topic. Also, as per your question, I think your doubts are more related to Pub/Sub itself than to the Gmail API, so let me try to clarify some things for you.
I can see in the Gmail API documentation that you can configure Gmail to send push notifications using Cloud Pub/Sub topics, such that Gmail sends publish requests to a Pub/Sub topic whenever a mailbox update matches the configuration you established. Although I cannot go into much detail about this part of the scenario, from the documentation I understand that the way to configure Gmail push notifications is to make a watch() request with the configuration you want, pointing to a Pub/Sub topic that you have previously created. Once this is set (and permissions are correctly configured), Gmail will keep publishing mailbox message updates for a period of 7 days (after a week, you have to call watch() again).
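As a rough sketch of that watch() call with the Google API Python client (the label, topic name, and token.json path are assumptions):

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# OAuth 2.0 credentials for the Gmail account, obtained beforehand.
creds = Credentials.from_authorized_user_file("token.json")
gmail = build("gmail", "v1", credentials=creds)

response = gmail.users().watch(
    userId="me",
    body={
        "labelIds": ["INBOX"],  # watch only the inbox (assumption)
        "topicName": "projects/my-project/topics/gmail_topic",
    },
).execute()
print(response)  # contains the current historyId and the watch expiration
```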
To receive notifications, you can now forget completely about the Gmail API and focus on Pub/Sub. You should create a Pub/Sub subscription (using either a pull or push configuration, depending on your requirements) so that your client (wherever and whatever it is) receives the Pub/Sub messages that serve as notifications. You will also have to acknowledge the messages so that they are not redelivered.
As a side note, given that you mentioned that the Pub/Sub subscriber concepts are more or less clear to you, and you would like to know more about publishing, let me share with you some links that may come in handy for a better understanding of the environment:
Pub/Sub main concepts.
Typical Pub/Sub flow.
General guide for Pub/Sub publishers.
In the scenario you are presenting (Gmail notifications using Pub/Sub), you would have to create a topic (with the name you want; let's call it gmail_topic), and the Gmail API would be your publisher. What the watch() method does behind the scenes is call the publish() method to send messages (containing information about mailbox updates) to your topic gmail_topic. Messages are passed to the Pub/Sub subscriptions (which you can create and bind to gmail_topic), and they are retained in each subscription for up to 7 days (the maximum retention period) until you consume and acknowledge them.
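To complete the picture on the subscriber side, a minimal streaming-pull client (the subscription name gmail_topic-sub is hypothetical) could look like this:

```python
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
# Hypothetical subscription previously created on gmail_topic.
sub_path = subscriber.subscription_path("my-project", "gmail_topic-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # For Gmail notifications, message.data is a JSON payload containing
    # the emailAddress and the historyId of the mailbox update.
    print("Mailbox update:", message.data)
    message.ack()  # acknowledge so the message is not redelivered

streaming_pull_future = subscriber.subscribe(sub_path, callback=callback)
with subscriber:
    try:
        streaming_pull_future.result(timeout=60)  # listen for one minute
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()
```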

Event Driven MessageBus architecture with AWS SNS: one or many message buses / Lambda action functions

I am implementing a process in my AWS-based hosting business with an event-driven architecture on AWS SNS. This is largely a learning experience with a new architecture, programming and hosting paradigm for me.
I have considered AWS Step Functions, but have decided to implement a message bus with AWS SNS topic(s), because I want to understand the underlying event-driven programming model.
Nearly all actions are performed by Lambda functions, and steps are coupled via SNS and/or SQS.
I am undecided whether to implement the process with one or many SNS topics, and whether I should subscribe the core logic to the message bus(es) with one or many Lambda functions.
One or many message buses
My core process currently consists of 9 events, of which 2 sets of 2 can run in parallel; the remaining 4 are sequential. Subscribing them all to the same message bus is easier to set up, but requires each Lambda function to check whether a message is relevant to it, which seems like a waste of resources.
On the other hand, I could have 6 message buses and be sure that a notified resource has something to do with the message.
One or many lambda functions
If all Lambda functions are subscribed to the same message bus, it may be easier to package them up with a dispatcher function in a single Lambda function. It would also reduce the amount of code to upload to Lambda, although I don't have to pay for that.
On the other hand, I would lose the ability to control the timeout for each Lambda function, and any change to the order of events would now depend on the dispatcher code.
I would still have the ability to scale each part of the process, as any parts that contain repeating elements are separated by SQS queues.
You should always emit each type of message to its own topic, as this allows other services to consume these events without tightly coupling the two services.
Likewise, each worker that wants to consume messages should have its own queue with its own subscription to the topic.
Doing this allows you to add new consumers for a given event without having to modify the upstream service. Furthermore, responsibility over each component is clear: the service producing messages to a topic owns that topic (and the message format), whereas the consumer owns its queue and event-handling semantics.
Your consumer can specify a message filter when subscribing to a topic, so it can only receive messages it cares about (documentation).
For example, a process that sends a customer survey after the customer has received their order would subscribe its queue to the Order Status Changed event with a filter set to only receive events where the new_status field equals shipment-received.
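As a hedged boto3 sketch of that subscription (the ARNs are placeholders):

```python
import boto3

sns = boto3.client("sns")

# Placeholder ARNs for the hypothetical topic and queue.
topic_arn = "arn:aws:sns:us-east-1:123456789012:order-status-changed"
queue_arn = "arn:aws:sqs:us-east-1:123456789012:customer-survey-queue"

sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint=queue_arn,
    Attributes={
        # Deliver only events whose new_status attribute is shipment-received.
        "FilterPolicy": '{"new_status": ["shipment-received"]}'
    },
)
```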
The above reflects principles of service-oriented architecture, and there's plenty of good material out there elaborating on these points.