Is there a way to get notified when a connection is made from a consumer to an Azure Relay connection? - azure-relay

Here is my use case:
Configure Azure Relay with one connection
Consumer connects to relay with this connection
We push information to the consumer via Azure Relay
Consumer connection drops and we detect the error on our side
We stop pushing the events
Some time later consumer connects again
At this point, I would like to start pushing the events again. Is there a way I can be notified by Azure that a new connection has been made to my Azure Relay resource, so that I can start pushing events to that connection?
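For context, a rough sketch of the listener side with the azure-relay Java SDK is below; the idea is that acceptConnectionAsync() completing is effectively the "new connection" notification I am after. The connection string and entity name are placeholders, and the exact method names should be checked against the current SDK.

```java
import com.microsoft.azure.relay.HybridConnectionChannel;
import com.microsoft.azure.relay.HybridConnectionListener;
import com.microsoft.azure.relay.RelayConnectionStringBuilder;
import com.microsoft.azure.relay.TokenProvider;

import java.net.URI;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RelayListenerSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string: namespace endpoint plus the hybrid connection name.
        RelayConnectionStringBuilder params = new RelayConnectionStringBuilder(
                "Endpoint=sb://<your-namespace>.servicebus.windows.net/;"
                + "SharedAccessKeyName=RootManageSharedAccessKey;"
                + "SharedAccessKey=<key>;EntityPath=<hybrid-connection-name>");
        TokenProvider tokenProvider = TokenProvider.createSharedAccessSignatureTokenProvider(
                params.getSharedAccessKeyName(), params.getSharedAccessKey());

        HybridConnectionListener listener = new HybridConnectionListener(
                new URI(params.getEndpoint().toString() + params.getEntityPath()), tokenProvider);
        listener.openAsync().join();

        while (true) {
            // acceptConnectionAsync() completes each time a sender (consumer) connects --
            // this is the point where pushing events can resume.
            HybridConnectionChannel channel = listener.acceptConnectionAsync().join();
            System.out.println("Consumer connected, start pushing events");

            try {
                // Push events; a failed write signals that the consumer connection dropped.
                channel.writeAsync(ByteBuffer.wrap(
                        "event payload".getBytes(StandardCharsets.UTF_8))).join();
            } catch (Exception e) {
                System.out.println("Consumer connection dropped, stop pushing events");
            }
        }
    }
}
```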

Related

Is it possible to migrate Artemis multicast temporary queue to Amazon SNS / SQS solutions?

Currently we are using Artemis for our publish-subscribe pattern, and in one specific case we use temporary queues to subscribe to a topic. They are non-shared and non-durable, and they receive messages only as long as a consumer is listening on them. As a simple example, application instances use a local cache for a configuration; when that configuration changes, an event is published, each instance receives the same message, and each evicts its local cache. Each instance connects with a temporary queue (with a broker-generated UUID name) at startup, and instances may be restarted because of a deployment or rescheduling on Kubernetes (as they run on spot instances). Is it possible to migrate this usage to AWS services using SNS and SQS?
So far, virtual queues are the closest thing I could find, but as far as I understand, different virtual queues (of one standard queue) do not receive the same message. If I had to use a standard queue per instance for the subscription, I would need unique queue names per instance; since instances also scale up and down, the application would need to detect queues that no longer have consumers and remove them (so they stop receiving messages from the topic).
I have made some trials with virtual queues where I created two consumer threads (receiving messages with AmazonSQSVirtualQueuesClient) and sent a message to the host queue (with AmazonSQSClient). The messages did not end up on the virtual queues; in fact, they are still sitting on the host queue. I also tried sending the message with AmazonSQSVirtualQueuesClient, but then I get the warning WARNING: Orphaned message sent to ... . I believe virtual queues only fit the request-responder pattern, where the publisher needs to know the exact destination.
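A rough sketch of what such a migration could look like, assuming each instance creates its own uniquely named SQS queue at startup, subscribes it to a shared SNS topic, and deletes it on shutdown (AWS SDK for Java V2; the topic ARN is a placeholder, and the queue access policy that lets SNS deliver to the queue is omitted for brevity):

```java
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

import java.util.UUID;

public class InstanceQueueLifecycle {
    private static final String TOPIC_ARN =
            "arn:aws:sns:eu-west-1:123456789012:config-changed"; // placeholder

    public static void main(String[] args) {
        SqsClient sqs = SqsClient.create();
        SnsClient sns = SnsClient.create();

        // Create a per-instance queue, similar to a broker-generated temporary queue.
        String queueName = "config-cache-" + UUID.randomUUID();
        String queueUrl = sqs.createQueue(b -> b.queueName(queueName)).queueUrl();
        String queueArn = sqs.getQueueAttributes(b -> b.queueUrl(queueUrl)
                        .attributeNames(QueueAttributeName.QUEUE_ARN))
                .attributes().get(QueueAttributeName.QUEUE_ARN);

        // Subscribe the queue to the topic so every instance gets its own copy of each event.
        // (A queue policy allowing the topic to send to this queue is also required; omitted here.)
        String subscriptionArn = sns.subscribe(b -> b.topicArn(TOPIC_ARN)
                .protocol("sqs")
                .endpoint(queueArn)).subscriptionArn();

        // On shutdown, unsubscribe and delete the queue so it does not linger without a consumer.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            sns.unsubscribe(b -> b.subscriptionArn(subscriptionArn));
            sqs.deleteQueue(b -> b.queueUrl(queueUrl));
        }));
    }
}
```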

How to deliver a message from Amazon SQS to all running instances of a service

I have two services, one producer (Service A) and one consumer (Service B). Service A produces a message which is published to Amazon SQS and then delivered to Service B, which is subscribed to the queue. This works fine as long as I have a single instance of Service B.
But when I start another instance of Service B, so that there are two instances subscribing to the same queue (as it is the same service), I observe that messages from SQS are delivered in a round-robin fashion: at a given time, only one instance of Service B receives the message published by Service A. I want every instance of Service B to receive each message published to this queue.
How can we do this? I have developed these services as Spring Boot applications with Spring Cloud dependencies.
Please see the diagram below for reference.
If you are interested in building functionality like this, use SNS, not SQS. We have a Spring Boot example that shows how to build a web app that lets users sign up for email subscriptions; when a message is published, all subscribed email addresses get the message.
The purpose of this example is to get you up and running with a Spring Boot app that uses the Amazon Simple Notification Service. That is, you can build this app with Spring Boot and the official AWS Java V2 API:
Creating a Publish/Subscription Spring Boot Application
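The core SNS calls behind that example are subscribe and publish; a minimal sketch with the AWS SDK for Java V2 (the topic ARN and email address are placeholders) looks roughly like this:

```java
import software.amazon.awssdk.services.sns.SnsClient;

public class SnsSubscribePublishSketch {
    public static void main(String[] args) {
        SnsClient sns = SnsClient.create();
        String topicArn = "arn:aws:sns:eu-west-1:123456789012:notifications"; // placeholder

        // Register an email subscriber; SNS sends a confirmation mail the user must accept.
        sns.subscribe(b -> b.topicArn(topicArn)
                .protocol("email")
                .endpoint("user@example.com")); // placeholder address

        // One publish call fans the message out to every confirmed subscription on the topic.
        sns.publish(b -> b.topicArn(topicArn)
                .subject("New announcement")
                .message("Hello, subscribers!"));
    }
}
```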
While your messages may appear to be read in a round-robin fashion, they are not actually consumed in a round-robin manner. SQS works by making all messages available to any consumer (that has the appropriate IAM permissions) and hiding a message, as soon as one consumer fetches it, for a configurable amount of time, effectively "locking" that message. The fact that all of your consumers seem to be operating in a round-robin way is most likely coincidental.
As others have mentioned, you could use SNS instead of SQS to fan out messages to multiple consumers at once, but that's not as simple a setup as it may sound. If your Service B is load balanced, the HTTP endpoint subscriber will point to the Load Balancer's DNS name, and thus only one instance will get each message. Assuming your instances have a public IP, you could modify your app so that it self-registers as an HTTP subscriber to the topic when the application starts up. The downsides here are that you're not only bypassing your Load Balancer, you're also losing the durability guarantees that come with SQS, since an SNS topic will try to send the message X times but will simply drop it after that.
An alternative solution would be to change the visibility timeout on the SQS queue to 0; that way a message is never locked and every consumer will be able to read it. That also means you'll need to modify your application to a) not process messages twice, as the same message will likely be read more than once before processing finishes, and b) handle failure gracefully when one instance deletes the message from the queue and the other instances later try to delete it as well.
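For illustration, setting the queue's visibility timeout to 0 with the AWS SDK for Java V2 would look roughly like this (the queue URL is a placeholder), with the caveats above still applying:

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

import java.util.Map;

public class ZeroVisibilityTimeout {
    public static void main(String[] args) {
        SqsClient sqs = SqsClient.create();
        String queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/service-b-events"; // placeholder

        // A visibility timeout of 0 means a received message is never hidden from other consumers,
        // so every instance can read it -- at the cost of having to de-duplicate yourself.
        sqs.setQueueAttributes(b -> b.queueUrl(queueUrl)
                .attributes(Map.of(QueueAttributeName.VISIBILITY_TIMEOUT, "0")));
    }
}
```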
Alternatively, you might want to use some sort of service mesh or service discovery mechanism so that instances can communicate with each other peer-to-peer: one instance pulls the message from the SQS queue and propagates it to the other instances of the service.
You could also use a distributed store like Redis or DynamoDB to persist the messages and their current status so that every instance can read them, but only one instance will ever insert a new row.
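As a sketch of that last idea with DynamoDB and the AWS SDK for Java V2 (the table name and key attribute are made up for this example), a conditional put succeeds for exactly one instance per message:

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;

import java.util.Map;

public class MessageClaimSketch {
    private final DynamoDbClient dynamoDb = DynamoDbClient.create();

    /** Returns true if this instance is the first one to claim the message. */
    boolean claim(String messageId) {
        try {
            dynamoDb.putItem(b -> b.tableName("processed-messages") // hypothetical table
                    .item(Map.of("messageId", AttributeValue.builder().s(messageId).build()))
                    // The write only succeeds if no other instance has inserted this id yet.
                    .conditionExpression("attribute_not_exists(messageId)"));
            return true;
        } catch (ConditionalCheckFailedException alreadyClaimed) {
            return false;
        }
    }
}
```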
Ultimately there are a few solutions out there for this, but without understanding the use case properly it's hard to make a firm recommendation.
Implement message fanout using Amazon Simple Notification Service (SNS) and Amazon Simple Queue Service (SQS). There is a hands-on Getting Started example of this.
Here's how it works: in the fanout model, service A publishes a message to an SNS topic. Each instance of service B has an associated SQS queue which is subscribed to that SNS topic. The published message is delivered to each subscribed queue and hence to each instance of service B.
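One wiring detail that is easy to miss: each subscribed queue needs an access policy that allows the SNS topic to send to it. A rough sketch of setting that policy with the AWS SDK for Java V2 (the ARNs and queue URL are placeholders):

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

import java.util.Map;

public class AllowSnsToSqs {
    public static void main(String[] args) {
        SqsClient sqs = SqsClient.create();
        String queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/service-b-instance-1"; // placeholder
        String queueArn = "arn:aws:sqs:eu-west-1:123456789012:service-b-instance-1";               // placeholder
        String topicArn = "arn:aws:sns:eu-west-1:123456789012:service-a-events";                   // placeholder

        // Access policy that lets the SNS topic deliver messages to this queue.
        String policy = "{"
                + "\"Version\":\"2012-10-17\","
                + "\"Statement\":[{"
                + "\"Effect\":\"Allow\","
                + "\"Principal\":{\"Service\":\"sns.amazonaws.com\"},"
                + "\"Action\":\"sqs:SendMessage\","
                + "\"Resource\":\"" + queueArn + "\","
                + "\"Condition\":{\"ArnEquals\":{\"aws:SourceArn\":\"" + topicArn + "\"}}"
                + "}]}";

        sqs.setQueueAttributes(b -> b.queueUrl(queueUrl)
                .attributes(Map.of(QueueAttributeName.POLICY, policy)));
    }
}
```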

Is there a way to invoke an AWS Step Function or Lambda in response to a websocket message?

TLDR: Is there a way to trigger an AWS lambda or step function based on an external system's websocket message?
I'm building a synchronization service which connects to a system which supports websockets. I can use timers in step functions to wake periodically and call lambda functions to perform the sync, but I would prefer to subscribe to the websocket and perform the sync only when a message is received.
There are plenty of ways to expose websockets in AWS, but I haven't found a way to consume them short of something like an EC2 instance with a custom service running on it. I'm trying to stay in the serverless ecosystem.
It seems like consuming a websocket is a fairly common requirement; have I overlooked something?
Lambdas are ephemeral. They can't be sitting there waiting for a websocket message.
However, I think what you can do is use an Activity task. Once the step function reaches that state it will wait. The activity worker runs on an EC2 instance and subscribes to the websocket. When a message is received, it polls the state machine for an activity token and calls SendTaskSuccess. The state machine then continues execution and calls the lambda that performs the sync.
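A rough sketch of that worker with the AWS SDK for Java V2 (the activity ARN is a placeholder and the websocket subscription is stubbed out):

```java
import software.amazon.awssdk.services.sfn.SfnClient;
import software.amazon.awssdk.services.sfn.model.GetActivityTaskResponse;

public class SyncActivityWorker {
    private static final String ACTIVITY_ARN =
            "arn:aws:states:eu-west-1:123456789012:activity:sync-trigger"; // placeholder

    public static void main(String[] args) throws Exception {
        SfnClient sfn = SfnClient.create();

        while (true) {
            // Stand-in for the websocket subscription: block until the external system sends a message.
            waitForWebSocketMessage();

            // Ask Step Functions for a task token; this returns once the state machine
            // is waiting in the activity state.
            GetActivityTaskResponse task = sfn.getActivityTask(b -> b.activityArn(ACTIVITY_ARN));
            if (task.taskToken() != null && !task.taskToken().isEmpty()) {
                // Report success so the state machine continues and invokes the sync Lambda.
                sfn.sendTaskSuccess(b -> b.taskToken(task.taskToken())
                        .output("{\"trigger\":\"websocket\"}"));
            }
        }
    }

    private static void waitForWebSocketMessage() throws InterruptedException {
        Thread.sleep(1000); // placeholder for a real websocket client callback
    }
}
```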
You can use Amazon API Gateway together with Lambda. API Gateway supports WebSocket APIs and can trigger a Lambda function when a message arrives.

What is the difference between Jobs and Messages in AWS IoT?

Jobs and Messages are both just transactions of text between the AWS IoT service and devices.
Why should I use Jobs rather than Messages, or the other way around?
They are both transactions, but they have their differences:
Messages - The AWS IoT message broker is a publish/subscribe broker service that enables the sending and receiving of messages to and from AWS IoT. The act of sending the message is referred to as publishing. The act of registering to receive messages for a topic filter is referred to as subscribing.
Example - When communicating with AWS IoT, a client sends a message addressed to a topic like Sensor/temp/room1. The message broker, in turn, sends the message to all clients that have registered to receive messages for that topic.
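For example, an application publishing to that topic through the AWS SDK for Java V2 data-plane client might look roughly like this (devices themselves would normally publish over MQTT with an IoT device SDK; the payload is made up):

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.iotdataplane.IotDataPlaneClient;

public class PublishSensorReading {
    public static void main(String[] args) {
        // In practice you may need to configure the client with your account's IoT data endpoint.
        IotDataPlaneClient iotData = IotDataPlaneClient.create();

        // Publish a reading; the broker forwards it to every subscriber of Sensor/temp/room1.
        iotData.publish(b -> b.topic("Sensor/temp/room1")
                .payload(SdkBytes.fromUtf8String("{\"temperature\": 21.5}")));
    }
}
```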
Jobs - AWS IoT jobs can be used to define a set of remote operations that are sent to and executed on one or more devices connected to AWS IoT.
Example - you can define a job that instructs a set of devices to download and install application or firmware updates, reboot, rotate certificates, or perform remote troubleshooting operations.
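And creating such a job with the AWS SDK for Java V2 control-plane client might look roughly like this (the job id, thing-group ARN, and job document are placeholders):

```java
import software.amazon.awssdk.services.iot.IotClient;

public class CreateFirmwareUpdateJob {
    public static void main(String[] args) {
        IotClient iot = IotClient.create();

        // The job document tells each targeted device what remote operation to perform.
        String jobDocument =
                "{\"operation\":\"firmwareUpdate\",\"url\":\"https://example.com/fw-1.2.3.bin\"}";

        iot.createJob(b -> b.jobId("firmware-update-1-2-3") // placeholder id
                .targets("arn:aws:iot:eu-west-1:123456789012:thinggroup/room1-sensors") // placeholder group
                .document(jobDocument));
    }
}
```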
Whether to use Jobs or Messages is up to your requirements. If you want to update a set of devices, Jobs seems to do the work; if it's just one device, a message will do.

How to effectively manage queue transactions listened to by two EC2 instances

I have two EC2 instances listening to one particular AWS SQS queue, and both instances are connected via a load balancer. The problem is: as both instances are listening on the same queue, isn't the message going to be received by both instances at the same time? And if it is received on both instances at the same time, how do I prevent duplication of the transactions?
Your architecture does not describe a normal usage pattern for Amazon SQS.
Amazon Simple Queue Service (SQS) is a queueing service where you can create a queue and send messages to a queue. Amazon SQS retains the message for up to 14 days.
Applications can then request to receive a message from the queue. This makes the message invisible while the application is processing it. Once the application finishes processing the message, it tells SQS to delete the message from the queue. If the application fails while processing a message, Amazon SQS makes the message visible again after a period of time so that it can be processed by another application server.
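That receive/process/delete cycle, sketched with the AWS SDK for Java V2 (the queue URL is a placeholder), looks roughly like this:

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.Message;

import java.util.List;

public class QueueWorkerSketch {
    public static void main(String[] args) {
        SqsClient sqs = SqsClient.create();
        String queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/transactions"; // placeholder

        while (true) {
            // The application asks SQS for messages; nothing is "pushed" to the instance.
            List<Message> messages = sqs.receiveMessage(b -> b.queueUrl(queueUrl)
                    .maxNumberOfMessages(10)
                    .waitTimeSeconds(20)) // long polling
                    .messages();

            for (Message message : messages) {
                process(message.body());
                // Deleting the message tells SQS that processing succeeded; if the worker crashes
                // before this call, the message becomes visible again after the visibility timeout.
                sqs.deleteMessage(b -> b.queueUrl(queueUrl).receiptHandle(message.receiptHandle()));
            }
        }
    }

    private static void process(String body) {
        System.out.println("Processing: " + body);
    }
}
```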
Things to note:
SQS does not send messages. Rather, applications request to receive a message from the queue. Think of the apps as "retrieving" a message rather than SQS transmitting a message.
The instances retrieving the messages should not be sitting behind a Load Balancer. There is nothing sent to a Load Balancer in this process. Instead, the instances themselves connect to SQS and request a message.
If you wish to send a message to multiple systems at the same time, you can use Amazon Simple Notification Service (SNS).