Is it possible to reply to a message once you have received data from the Publisher?
It must be a direct reply, sent as soon as the Publisher publishes a message.
I'm using the Google Pub/Sub service.
https://cloud.google.com/pubsub/docs/pull
Publisher/Sender (PHP):
use Google\Cloud\PubSub\PubSubClient;

$sendToOps = [];
$sendToOps['MESSAGE'] = "my message";

$topicName = env('GOOGLE_CLOUD_TO_OPS_TOPIC');
$pubSub = new PubSubClient();
$topic = $pubSub->topic($topicName);
$ret = $topic->publish([
    'attributes' => $sendToOps
]);

//**********The word "Apple" must output here**********
echo $ret;
//*****************************************************
Subscriber/Receiver (JavaScript):
'use strict';

// Get .env file data
require('dotenv').config({path: '/usr/share/nginx/html/myProject/.env'});

var request = require('request');
var port = process.env.PORT_GATEWAY;
var host = process.env.IP_PUSH;
var test = process.env.TEST_FLAG;
var pubsubSubscription = process.env.GOOGLE_CLOUD_TO_OPS_SUBSCRIPTION;

const keyFilePath = 'public/key/key.json';

// Imports the Google Cloud client library
const {PubSub} = require('@google-cloud/pubsub');

// Creates a client; cache this for further use
const pubSubClient = new PubSub({
    keyFilename: keyFilePath
});

function listenForMessages() {
    // References an existing subscription
    const subscription = pubSubClient.subscription(pubsubSubscription);

    // Create an event handler to handle messages
    const messageHandler = message => {
        console.log(message.attributes);
        //*****************************************************
        // I want to reply to the Sender with the word "Apple" here
        //*****************************************************
        message.ack();
    };

    subscription.on('message', messageHandler);
}

listenForMessages();
Is it possible to reply to a message once you have received data from the Publisher?
Depends on what you mean by "reply". The publisher of a message posts a message on a Pub/Sub topic. Subscribers receive messages from a Pub/Sub subscription. There is no two-way communication channel here; Pub/Sub has no "reply back" method.
A subscriber could publish a message to a different topic that the publisher reads as a subscriber. Both sides would then be both a publisher and a subscriber, but on different topics.
Alternatively, once a message is received, the subscriber could call an API on the publisher directly.
However, the intent of Publish/Subscribe is to decouple senders from receivers, not to lock them together.
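If the second-topic route fits your case, the shape is roughly as follows. This is only a sketch, written in Scala against the Pub/Sub Java client; the project ID, the reply topic ops-replies, the subscription to-ops-subscription, and the inReplyTo attribute are all made-up placeholders.
import com.google.cloud.pubsub.v1.{AckReplyConsumer, MessageReceiver, Publisher, Subscriber}
import com.google.pubsub.v1.{ProjectSubscriptionName, PubsubMessage, TopicName}

object ReplyViaSecondTopic {
  def main(args: Array[String]): Unit = {
    val projectId = "my-project"  // placeholder

    // Publisher for the assumed reply topic; the original sender would subscribe to this topic.
    val replyPublisher = Publisher.newBuilder(TopicName.of(projectId, "ops-replies")).build()

    // Handler for the original subscription: log the attributes, publish "Apple" as the reply, then ack.
    val receiver: MessageReceiver = (message: PubsubMessage, consumer: AckReplyConsumer) => {
      println(message.getAttributesMap)
      val reply = PubsubMessage.newBuilder()
        .putAttributes("MESSAGE", "Apple")
        .putAttributes("inReplyTo", message.getMessageId)  // lets the sender correlate request and reply
        .build()
      replyPublisher.publish(reply)
      consumer.ack()
    }

    val subscriber = Subscriber.newBuilder(
      ProjectSubscriptionName.of(projectId, "to-ops-subscription"), receiver).build()
    subscriber.startAsync().awaitRunning()
  }
}
The sender side would then pull (or receive a push) from its own subscription on the reply topic and correlate replies via the inReplyTo attribute; note this is still asynchronous messaging, not a synchronous reply to the publish() call.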
Related
I am facing a strange issue in SQS. Let me simplify my use case: I have 7 messages in a FIFO queue, and my standalone app should keep polling the messages in the same sequence, infinitely, for my business case. For instance, my app reads message1, and after some business processing the app deletes it and reposts the same message into the same queue (at the tail of the queue); these steps continue for the next messages endlessly. My expectation is that the app keeps polling and operating on the messages in the same sequence, but that's where the problem arises. When the message is read from the queue for the very first time, deleted, and then reposted into the same queue, the reposted message is not present in the queue even though sendMessageResult reports success.
I have included the code below to simulate the issue. Briefly, a Test_Queue.fifo queue is created with Test_Queue_DLQ.fifo configured via the RedrivePolicy. Right after creating the queue, the message "Test_Message" is posted to Test_Queue.fifo (I get a MessageId in the response). The queue is long-polled to read the message, and after iterating over ReceiveMessageResult#getMessages the message is deleted (again getting a MessageId in the response). After the successful deletion, the same message is reposted to the tail of the same queue (and again a MessageId is returned). But the reposted message is not present in the queue. When I checked the AWS admin console, the message count is 0 in both the "Messages available" and "Messages in flight" sections, and the reposted message is not present in the Test_Queue_DLQ.fifo queue either. As per the SQS docs, a deleted message is removed even if it is in flight, so reposting the same message should not be an issue. I suspect that on the SQS side some equality comparison is performed and the same message is skipped during the deduplication interval to avoid duplicates in a distributed environment, but I couldn't find a clear explanation.
Code snippet to simulate the above issue
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.CreateQueueResult;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;
import com.amazonaws.services.sqs.model.GetQueueAttributesResult;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.ReceiveMessageResult;
import com.amazonaws.services.sqs.model.SendMessageRequest;
import com.amazonaws.services.sqs.model.SendMessageResult;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

public class SQSIssue {

    @Test
    void sqsMessageAbsenceIssueTest() {
        AmazonSQS amazonSQS = AmazonSQSClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://sqs.us-east-2.amazonaws.com", "us-east-2"))
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(
                        "<accessKey>", "<secretKey>")))
                .build();

        // create queue
        String queueUrl = createQueues(amazonSQS);

        String message = "Test_Message";
        String groupId = "Group1";

        // Sending message -> "Test_Message"
        sendMessage(amazonSQS, queueUrl, message, groupId);

        // Reading the message and deleting it using message.getReceiptHandle()
        readAndDeleteMessage(amazonSQS, queueUrl);

        // Reposting the same message into the queue -> "Test_Message"
        sendMessage(amazonSQS, queueUrl, message, groupId);

        ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest()
                .withQueueUrl(queueUrl)
                .withWaitTimeSeconds(5)
                .withMessageAttributeNames("All")
                .withVisibilityTimeout(30)
                .withMaxNumberOfMessages(10);
        ReceiveMessageResult receiveMessageResult = amazonSQS.receiveMessage(receiveMessageRequest);

        // Here I expect the message to be present in the queue, since I just reposted
        // the same message into the same queue after deleting it
        Assertions.assertFalse(receiveMessageResult.getMessages().isEmpty());
    }

    private void readAndDeleteMessage(AmazonSQS amazonSQS, String queueUrl) {
        ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest()
                .withQueueUrl(queueUrl)
                .withWaitTimeSeconds(5)
                .withMessageAttributeNames("All")
                .withVisibilityTimeout(30)
                .withMaxNumberOfMessages(10);
        ReceiveMessageResult receiveMessageResult = amazonSQS.receiveMessage(receiveMessageRequest);
        receiveMessageResult.getMessages()
                .forEach(message -> amazonSQS.deleteMessage(queueUrl, message.getReceiptHandle()));
    }

    private String createQueues(AmazonSQS amazonSQS) {
        String queueName = "Test_Queue.fifo";
        String deadLetterQueueName = "Test_Queue_DLQ.fifo";

        // Creating DeadLetterQueue
        CreateQueueRequest createDeadLetterQueueRequest = new CreateQueueRequest()
                .addAttributesEntry("FifoQueue", "true")
                .addAttributesEntry("ContentBasedDeduplication", "true")
                .addAttributesEntry("VisibilityTimeout", "600")
                .addAttributesEntry("MessageRetentionPeriod", "262144");
        createDeadLetterQueueRequest.withQueueName(deadLetterQueueName);
        CreateQueueResult createDeadLetterQueueResult = amazonSQS.createQueue(createDeadLetterQueueRequest);

        GetQueueAttributesResult getQueueAttributesResult = amazonSQS.getQueueAttributes(
                new GetQueueAttributesRequest(createDeadLetterQueueResult.getQueueUrl())
                        .withAttributeNames("QueueArn"));
        String deadLetterQueueArn = getQueueAttributesResult.getAttributes().get("QueueArn");

        // Creating the actual queue with the DeadLetterQueue configured
        CreateQueueRequest createQueueRequest = new CreateQueueRequest()
                .addAttributesEntry("FifoQueue", "true")
                .addAttributesEntry("ContentBasedDeduplication", "true")
                .addAttributesEntry("VisibilityTimeout", "600")
                .addAttributesEntry("MessageRetentionPeriod", "262144");
        createQueueRequest.withQueueName(queueName);

        String reDrivePolicy = "{\"maxReceiveCount\":\"5\", \"deadLetterTargetArn\":\""
                + deadLetterQueueArn + "\"}";
        createQueueRequest.addAttributesEntry("RedrivePolicy", reDrivePolicy);

        CreateQueueResult createQueueResult = amazonSQS.createQueue(createQueueRequest);
        return createQueueResult.getQueueUrl();
    }

    private void sendMessage(AmazonSQS amazonSQS, String queueUrl, String message, String groupId) {
        SendMessageRequest sendMessageRequest = new SendMessageRequest()
                .withQueueUrl(queueUrl)
                .withMessageBody(message)
                .withMessageGroupId(groupId);
        SendMessageResult sendMessageResult = amazonSQS.sendMessage(sendMessageRequest);
        Assertions.assertNotNull(sendMessageResult.getMessageId());
    }
}
From Using the Amazon SQS message deduplication ID:
The message deduplication ID is the token used for deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren't delivered during the 5-minute deduplication interval.
Therefore, you should supply a different Deduplication ID each time the message is placed back onto the queue.
The answer at https://stackoverflow.com/a/65844632/3303074 is fitting: I should have added SendMessageRequest#withMessageDeduplicationId. I would like to add a few more points. The technical reason the message disappears is that I enabled ContentBasedDeduplication for the queue. When MessageDeduplicationId is not set explicitly on send, Amazon SQS generates one by taking an SHA-256 hash of the message body (but not of the message attributes). With ContentBasedDeduplication in effect, messages with identical content sent within the deduplication interval are treated as duplicates and only one copy is delivered. So reposting the same message with different attributes still does not work as expected. Adding an explicit MessageDeduplicationId solves the issue because, even when the queue has ContentBasedDeduplication enabled, the explicit MessageDeduplicationId overrides the generated one.
Code Snippet
SendMessageRequest sendMessageRequest = new SendMessageRequest()
        .withQueueUrl(queueUrl)
        .withMessageBody(message)
        .withMessageGroupId(groupId)
        // Adding an explicit MessageDeduplicationId
        .withMessageDeduplicationId(UUID.randomUUID().toString());
SendMessageResult sendMessageResult = amazonSQS.sendMessage(sendMessageRequest);
I am working on sending OTP messages for user login, leveraging Amazon SNS. I am able to send a text message as suggested here. I would like to use a similar approach for email notifications as well. But it looks like, for email notifications, a topic has to be created in SNS and a subscriber has to be added for each email address registered in the application.
Is it not possible to send an email to an address dynamically, as is done for text messages, without creating topics and subscribers? If not, please suggest a way to set the email address dynamically based on the logged-in user.
Code for Text Messaging:
public static void main(String[] args) {
    AmazonSNSClient snsClient = new AmazonSNSClient();
    String message = "My SMS message";
    String phoneNumber = "+1XXX5550100";
    Map<String, MessageAttributeValue> smsAttributes =
            new HashMap<String, MessageAttributeValue>();
    //<set SMS attributes>
    sendSMSMessage(snsClient, message, phoneNumber, smsAttributes);
}

public static void sendSMSMessage(AmazonSNSClient snsClient, String message,
        String phoneNumber, Map<String, MessageAttributeValue> smsAttributes) {
    PublishResult result = snsClient.publish(new PublishRequest()
            .withMessage(message)
            .withPhoneNumber(phoneNumber)
            .withMessageAttributes(smsAttributes));
    System.out.println(result); // Prints the message ID.
}
Correct.
Amazon SNS normally uses a Publish/Subscribe model for messages.
The one exception is the ability to send an SMS message to a specific recipient.
If you wish to send an email to a single recipient, you will need to use your own SMTP server, or use Amazon Simple Email Service (Amazon SES).
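For illustration, a minimal sketch of the SES route, written in Scala against the AWS SDK for Java (v1) SES client: the addresses below are placeholders, the source address must be a verified SES identity, and while your account is in the SES sandbox the recipient must be verified as well. No topic or subscriber is needed; you set the destination per logged-in user.
import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder
import com.amazonaws.services.simpleemail.model.{Body, Content, Destination, Message, SendEmailRequest}

object OtpEmailSender {
  def main(args: Array[String]): Unit = {
    // Uses the default credential/region provider chain
    val ses = AmazonSimpleEmailServiceClientBuilder.defaultClient()

    val request = new SendEmailRequest()
      .withSource("no-reply@example.com")                                      // must be verified in SES
      .withDestination(new Destination().withToAddresses("user@example.com")) // set per logged-in user
      .withMessage(new Message()
        .withSubject(new Content().withData("Your login OTP"))
        .withBody(new Body().withText(new Content().withData("Your OTP is 123456"))))

    val result = ses.sendEmail(request)
    println(result.getMessageId)
  }
}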
The following code works to send a message, but when it arrives it displays the text 'VERIFY' as the sender ID. How do I specify a sender ID? I think it's done with the message attributes, but I cannot figure out the syntax.
import boto3
import pprint

session = boto3.session.Session(profile_name='Credentials', region_name='us-east-1')

theMessage = 'Now is the time for all good people to come to the aid of their party'
senderID = 'Godzilla'

snsclient = session.client('sns')
response = snsclient.publish(PhoneNumber='+84932575571', Message=theMessage)

pp = pprint.PrettyPrinter(indent=4)
pp.pprint(response)
Add a third parameter MessageAttributes to the publish method.
snsclient.publish(
    PhoneNumber='+84932575571',
    Message=theMessage,
    MessageAttributes={
        'AWS.SNS.SMS.SenderID': {
            'DataType': 'String',
            'StringValue': 'Godzilla'
        }
    })
Note that the sender ID is not supported in many countries; see AWS SNS SMS SenderId Supported Countries.
I want to send notifications to clients via WebSockets. These notifications are generated by actors, so I'm trying to create a stream of actor messages at server startup and subscribe WebSocket connections to this stream (sending only the notifications emitted after the subscription).
With Source.actorRef we can create a Source of actor messages.
val ref = Source.actorRef[Weather](Int.MaxValue, OverflowStrategy.fail)
  .filter(!_.raining)
  .to(Sink.foreach(println))
  .run()

ref ! Weather("02139", 32.0, true)
But how can I subscribe (Akka HTTP*) WebSocket connections to this source if it has already been materialized?
*WebSocket connections in Akka HTTP require a Flow[Message, Message, Any]
What I'm trying to do is something like
// at server startup
val notifications: Source[Notification, ActorRef] =
  Source.actorRef[Notification](5, OverflowStrategy.fail)

val ref = notifications.to(Sink.foreach(println(_))).run()

val notificationActor = system.actorOf(NotificationActor.props(ref))

// on ws connection
val notificationsWS = path("notificationsWS") {
  parameter('name) { name ⇒
    get {
      onComplete(flow(name)) {
        case Success(f) => handleWebSocketMessages(f)
        case Failure(e) => throw e
      }
    }
  }
}

def flow(name: String) = {
  val messages = notifications filter { n => n.name equals name } map { n => TextMessage.Strict(n.data) }
  Flow.fromSinkAndSource(Sink.ignore, messages)
}
This doesn't work because the notifications source is not the one that is materialized, hence it doesn't emit any element.
Note: I was using Source.actorPublisher and it was working, but ktoso discourages its usage, and I was also getting this error:
java.lang.IllegalStateException: onNext is not allowed when the stream has not requested elements, totalDemand was 0.
You could expose the materialised actorRef to some external router actor using mapMaterializedValue.
Flow.fromSinkAndSourceMat(Sink.ignore, notifications)(Keep.right)
  .mapMaterializedValue(srcRef => router ! srcRef)
The router can keep track of your sources' ActorRefs (deathwatch can help tidy things up) and forward messages to them; a rough sketch is included below.
NB: you're probably already aware, but note that by using Source.actorRef to feed your flow, your flow will not be backpressure aware (with the strategy you chose it will just crash under load).
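A rough sketch of such a router actor, assuming the Notification case class simply mirrors the name/data fields used in the question; everything else here is illustrative, not a finished implementation:
import akka.actor.{Actor, ActorRef, Props, Terminated}

final case class Notification(name: String, data: String)

// Keeps the materialized Source.actorRef of every live WebSocket connection and
// fans incoming notifications out to them; deathwatch removes completed streams.
class NotificationRouter extends Actor {
  private var sources = Set.empty[ActorRef]

  def receive: Receive = {
    case ref: ActorRef =>       // registered from mapMaterializedValue on each WS connect
      context.watch(ref)
      sources += ref
    case Terminated(ref) =>     // the stream completed or failed: stop forwarding to it
      sources -= ref
    case n: Notification =>     // produced by your domain actors
      sources.foreach(_ ! n)
  }
}

object NotificationRouter {
  def props: Props = Props(new NotificationRouter)
}
Per-connection filtering (the filter { n => n.name equals name } step) would then live in each connection's own flow, between its materialized Source.actorRef and the TextMessage mapping.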
I am trying to assemble a simple AppFabric Topic whereby messages are sent and received using the SessionId. The code does not abort, but brokeredMessage is always null. Here is the code:
// BTW, the topic already exists
var messagingFactory = MessagingFactory.Create(uri, credentials);
var topicClient = messagingFactory.CreateTopicClient(topicName);
var sender = topicClient.CreateSender();
var message = BrokeredMessage.CreateMessage("Top of the day!");
message.SessionId = "1";
sender.Send(message);
var subscription = topic.AddSubscription("1", new SubscriptionDescription { RequiresSession = true});
var mikeSubscriptionClient = messagingFactory.CreateSubscriptionClient(subscription);
var receiver = mikeSubscriptionClient.AcceptSessionReceiver("1");
BrokeredMessage brokeredMessage;
receiver.TryReceive(TimeSpan.FromMinutes(1), out brokeredMessage); // brokeredMessage always null
You have two problems in your code:
1. You create the subscription AFTER you send the message. You need to create the subscription before sending, because a subscription is what tells the topic to, in a sense, copy the message into several different "buckets".
2. You are using TryReceive but are not checking its result. It returns true if a message was received and false if not (e.g. a timeout occurred).
I am writing a sample application and will post it on our blog today; I will post the link here as well. Until then, move the subscription logic to before the message is sent and the receive logic after it, and you will start seeing results.
Update:
As promised, here is the link to my blog post on getting started with AppFabric Queues, Topics, Subscriptions.