I have a sample WebJob I wrote which uses Service Bus. My data ingestion layer is all Service Bus queues, which then publish data onto topics. I can see all of my queue functions getting invoked and logging through the provided TextWriter, but not my topic-based functions. I can see the topic functions getting invoked in the log, and their output can be obtained using Console.WriteLine. Is there something else I need to do for topics?
Topic function:
public static void HandleContactAssigned(
[ServiceBusTrigger(ContactAssingedTopic, "sub_contact_assigned")] NotifyAssigmentInfo message,
TextWriter writer
)
{
writer.WriteLine("HandleContactAssinged Called");
Console.WriteLine("called HandleContactAssinged");
}
WebJobs console output:
[04/16/2015 16:28:03 > e1254f: INFO] called HandleContactAssinged
[04/16/2015 16:28:03 > e1254f: INFO] Executing: 'MessageFunctions.HandleContactAssigned' because New service bus message detected on 'int_contact_assgined/Subscriptions/sub_contact_assigned'.
[04/16/2015 16:28:03 > e1254f: INFO] called HandleContactAssinged
As you can see, the function gets called, but none of the normal monitoring/logging works.
Hmm... That should work. Make sure you have set the AzureWebJobsDashboard connection string in your WebJob or in the Azure portal.
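For reference, a minimal App.config sketch for a WebJob (the account names and keys are placeholders; the dashboard connection string is what routes TextWriter output to the WebJobs dashboard):

```xml
<configuration>
  <connectionStrings>
    <!-- Required for dashboard logging: TextWriter output shows up here -->
    <add name="AzureWebJobsDashboard"
         connectionString="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY" />
    <!-- Storage account used by the runtime itself -->
    <add name="AzureWebJobsStorage"
         connectionString="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY" />
    <!-- Service Bus namespace used by ServiceBusTrigger functions -->
    <add name="AzureWebJobsServiceBus"
         connectionString="Endpoint=sb://YOUR_NAMESPACE.servicebus.windows.net/;SharedAccessKeyName=YOUR_KEY_NAME;SharedAccessKey=YOUR_KEY" />
  </connectionStrings>
</configuration>
```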
You can also look at the following sample:
https://github.com/Azure/azure-webjobs-sdk-samples/blob/master/BasicSamples/ServiceBus
Related
Is there an option to send events in a batch to Event Hubs from C#?
The SendBatch API mentioned doesn't exist in the new EventHubClient.
There is no longer a SendBatch call, but there is a SendAsync method with an IEnumerable overload that is effectively the same thing.
https://learn.microsoft.com/dotnet/api/microsoft.azure.eventhubs.eventhubclient#Microsoft_Azure_EventHubs_EventHubClient_SendAsync_Microsoft_Azure_EventHubs_EventData_
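As a sketch (the connection string and event payloads are placeholders, and this needs a live Event Hub to actually run), batching with the IEnumerable overload in Microsoft.Azure.EventHubs looks roughly like this:

```csharp
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

class BatchSender
{
    static async Task SendBatchAsync(string connectionString)
    {
        var client = EventHubClient.CreateFromConnectionString(connectionString);

        // Build the batch as a collection of EventData and send it in one call.
        var batch = new List<EventData>
        {
            new EventData(Encoding.UTF8.GetBytes("event 1")),
            new EventData(Encoding.UTF8.GetBytes("event 2"))
        };

        await client.SendAsync(batch);   // IEnumerable<EventData> overload
        await client.CloseAsync();
    }
}
```

Note that, unlike the old SendBatch, this overload throws if the combined batch exceeds the size limit, so very large batches still need to be chunked by the caller.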
I implemented a service that polls for eye-tracking events and pushes the received data using a FlowableProcessor (like PublishSubject in RxJava 1.x). The whole polling runs in a separate thread that can be started and stopped via the service class's methods.
I wrote a test for the service, but I am not quite sure if I am doing the right thing.
My idea:
Subscribe to the stream provided by the stream service
Start the service
Wait for the onComplete event (published automatically when the service is supplied with a non-infinite data source mock)
Verify the results
This is my testing code:
StreamService service = new StreamService(mockSource);
Flowable<ETData> stream = service.getStream();

// Subscribe before starting the service so no early events are missed.
TestSubscriber<ETData> testSub = new TestSubscriber<>();
stream.subscribe(testSub);

service.start();

// Bound the wait so a missing onComplete fails the test instead of hanging it.
testSub.await(5, TimeUnit.SECONDS);
testSub.assertNoErrors();
testSub.assertComplete();

List<ETData> received = testSub.values();
assertEquals(2, received.size());
assertEquals(mockData0, received.get(0));
assertEquals(mockData1, received.get(1));
Is this okay, or am I missing something?
I noticed that multiple instances of my Web job are receiving the same message and end up acting on it. This is not the desired behavior. I would like multiple messages to be processed concurrently, however, I do not want the same message being processed by multiple instances of the web job.
My web job is of the continuous running type.
I use a QueueTrigger to receive the message and invoke the function
My function runs for several hours.
I have looked into the JobHostConfiguration.Queues.BatchSize and MaxDequeueCount properties, and I am not sure about them. I simply want a single instance to process a given message, and that processing could take several hours to complete.
This is what I see in the web job logs, indicating the message is received twice:
[01/24/2017 16:17:30 > 7e0338: INFO] Executing: 'Functions.RunExperiment' - Reason: 'New queue message detected on 'runexperiment'.'
[01/24/2017 16:17:30 > 7e0338: INFO] Executing: 'Functions.RunExperiment' - Reason: 'New queue message detected on 'runexperiment'.'
According to the official documentation, if we use Azure Queue storage in a WebJob running on multiple instances, we do not need to write code to prevent multiple instances from processing the same queue message:
The WebJobs SDK queue trigger automatically prevents a function from processing a queue message multiple times; functions do not have to be written to be idempotent.
I deployed a WebJob to a web app scaled to 2 instances, and it works correctly: it runs on both instances, and no queue message is executed twice.
So it is very odd that your queue message is executed twice. Please try to debug whether there are two queue messages with the same content being triggered.
The following is my debug code. It writes a message with the execution time and instance ID into another queue.
public static void ProcessQueueMessage(
    [QueueTrigger("queue")] string message,
    [Queue("logqueue")] out string newMessage,
    TextWriter log)
{
    string instance = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
    string newMsg = $"WEBSITE_INSTANCE_ID:{instance}, timestamp:{DateTime.Now}, Message:{message}";
    log.WriteLine(newMsg);
    Console.WriteLine(newMsg);
    newMessage = newMsg;
}
I had the same issue of a single message being processed multiple times at the same time. The issue disappeared as soon as I set the MaxPollingInterval property...
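For reference, these knobs all live on JobHostConfiguration.Queues in the WebJobs SDK v1. A sketch of the host setup (the values are illustrative, not recommendations, and running it requires the usual storage connection strings):

```csharp
using System;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        // Illustrative values: pull one message at a time per host instance,
        // move a message to the poison queue after 5 failed dequeues,
        // and poll the queue at most every 10 seconds.
        config.Queues.BatchSize = 1;
        config.Queues.MaxDequeueCount = 5;
        config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(10);

        var host = new JobHost(config);
        host.RunAndBlock();
    }
}
```

BatchSize limits concurrency within one host instance; it does not stop a second scaled-out instance from pulling other messages, which is why the instance ID logging above is useful when diagnosing duplicates.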
I'm using Azure Webjobs to process messages from a queue.
I saw that the WebJobs SDK processes any failed message again after 10 minutes, and if it fails 5 times it moves it to the poison queue (1).
Also I can see the nextVisibleTime of the message in the queue, that is 10 minutes after the insertionTime (2).
I want to use the WebJobs SDK's error handling of the messages, but I cannot wait 10 minutes for the message to be processed again.
Is there any way I can set this nextVisibleTime to a few seconds?
Create a .NET WebJob in Azure App Service
If the method fails before completing, the queue message is not deleted; after a 10-minute lease expires, the message is released to be picked up again and processed.
How to use Azure queue storage with the WebJobs SDK
public static void WriteLog([QueueTrigger("logqueue")] string logMessage,
    DateTimeOffset expirationTime,
    DateTimeOffset insertionTime,
    DateTimeOffset nextVisibleTime)
{
    // ...
}
Note: There are similar questions here on Stack Overflow, but with no answer:
QueueTrigger Attribute Visibility Timeout
Azure WebJob QueueTrigger Retry Policy
In the latest v1.1.0 release, you can now control the visibility timeout by registering your own custom QueueProcessor instances via JobHostConfiguration.Queues.QueueProcessorFactory. This allows you to control advanced message processing behavior globally or per queue/function.
For example, to set the visibility for failed messages, you can override ReleaseMessageAsync as follows:
protected override async Task ReleaseMessageAsync(CloudQueueMessage message, FunctionResult result, TimeSpan visibilityTimeout, CancellationToken cancellationToken)
{
    // Demonstrates how the visibility timeout for failed messages can be customized;
    // the logic here could implement exponential backoff, etc.
    visibilityTimeout = TimeSpan.FromSeconds(message.DequeueCount);
    await base.ReleaseMessageAsync(message, result, visibilityTimeout, cancellationToken);
}
More details can be found in the release notes here.
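Wiring the custom processor in is a matter of registering a factory on the host configuration. A sketch, assuming the override above lives in a class named CustomQueueProcessor (both class names here are placeholders):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host.Queues;

// Hands out the custom processor for every queue the host listens on.
class CustomQueueProcessorFactory : IQueueProcessorFactory
{
    public QueueProcessor Create(QueueProcessorFactoryContext context)
    {
        // context identifies the queue, so per-queue behavior is possible here.
        return new CustomQueueProcessor(context);
    }
}

class CustomQueueProcessor : QueueProcessor
{
    public CustomQueueProcessor(QueueProcessorFactoryContext context)
        : base(context)
    {
    }

    // Override ReleaseMessageAsync here, as in the snippet above.
}

// Registration at startup:
// var config = new JobHostConfiguration();
// config.Queues.QueueProcessorFactory = new CustomQueueProcessorFactory();
// new JobHost(config).RunAndBlock();
```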
If there is an exception while processing your function, the SDK puts the message back in the queue instantly and the message is reprocessed. Are you not seeing this behavior?
I am new to WSO2 CEP.
I have created the entire flow to read the JMS message and split it using the text formatter. The problem is that when I try to push messages into the queue, they are not able to reach the output event adaptor. I have a MySQL event adaptor and configured it in my event formatter, but I keep getting the message below in my log:
[2014-02-13 21:20:06,347] ERROR - {ReceiverGroup} No receiver is reachable at reconnection, can't publish the events
[2014-02-13 21:20:06,352] ERROR - {AsyncDataPublisher} Reconnection failed for for tcp://localhost:7661
Can someone help me understand what this tcp://localhost:7661 is all about?
Regards
Subbu
tcp://localhost:7661 is the default port to which Thrift events (WSO2 events) are published. It seems a default event formatter has been created and is trying to publish events to that port.
Can you check your list of event formatters and ensure that no event formatters of type WSO2Event are created. This event formatter might be automatically created if you set an exported stream to be 'pass-through' when creating the execution plan.
You can enable event tracing [1] and monitor it to determine exactly up to which point the event gets in your configured flow.
[1] http://docs.wso2.org/display/CEP300/CEP+Event+Tracer
HTH,
Lasantha