I have this error in AmazonDynamoDBLockClient. I'm using org.springframework.cloud:spring-cloud-stream-binder-kinesis:2.0.1.RELEASE. These are my application.properties configs:
#Spring Cloud Stream Kinesis Binder properties
spring.cloud.stream.bindings.cdcInput.group=listener
spring.cloud.stream.bindings.cdcInput.destination=my_stream
spring.cloud.stream.bindings.cdcInput.content-type=application/json
spring.cloud.stream.kinesis.binder.auto-create-stream=false
spring.cloud.stream.kinesis.binder.locks.table=LockRegistry
spring.cloud.stream.kinesis.binder.checkpoint.table=MetadataStore
spring.cloud.stream.kinesis.binder.locks.leaseDuration=10
spring.cloud.stream.kinesis.binder.locks.heartbeat-period=3
and here is the error with its stack trace:
com.amazonaws.services.dynamodbv2.AmazonDynamoDBLockClient - Heartbeat thread recieved interrupt, exiting run() (possibly exiting thread)
java.lang.InterruptedException: sleep interrupted
java.lang.Thread.sleep(Native Method)
com.amazonaws.services.dynamodbv2.AmazonDynamoDBLockClient.run(AmazonDynamoDBLockClient.java:1184)
java.lang.Thread.run(Thread.java:748)
[SpringContextShutdownHook] INFO org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter - stopped KinesisMessageDrivenChannelAdapter{shardOffsets=[KinesisShardOffset{iteratorType=TRIM_HORIZON, sequenceNumber='null', timestamp=null, stream='binlog_updates', shard='shardId-000000000000', reset=false}], consumerGroup='cdc-listener'}
[-kinesis-shard-locks-1] ERROR org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter - Error during unlocking: DynamoDbLock [lockKey=cdc-listener:rds_binlog_updates:shardId-000000000000,lockedAt=2021-01-21#15:54:36.735, lockItem=null]
org.springframework.dao.DataAccessResourceFailureException: Failed to release lock at cdc-listener:binlog_updates:shardId-000000000000; nested exception is java.util.concurrent.RejectedExecutionException: Task org.springframework.integration.aws.lock.DynamoDbLockRegistry$DynamoDbLock$ rejected from java.util.concurrent.ThreadPoolExecutor#[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]
org.springframework.integration.aws.lock.DynamoDbLockRegistry$DynamoDbLock.unlock(DynamoDbLockRegistry.java:526)
org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumerManager.run(KinesisMessageDrivenChannelAdapter.java:1294)
We are getting this error even if the stream is empty and no data is read by the consumer. The service starts the application context normally, without any errors, but after 1-2 minutes this error appears and the application goes down.
My application is designed in such a way that whenever someone tries to start it, the application will close other instances of itself (note that this is working as expected and should not be changed).
I am using Akka and one of my Actors is a PersistentActor that uses leveldb.
When the application starts, it locks <path-to-lock>/LOCK; when it starts again, leveldb cannot lock the file, so the PersistentActor cannot start.
After some inspection I found that the leveldb Actor class is LeveldbJournal and it starts under the system guardian with path akka://actor-system/system/akka.persistence.journal.leveldb.
I would like leveldb to restart itself until the file can be locked or a max-retries limit has been reached.
Logs:
[ERROR] [02/04/2019 15:39:25.731] [operator-actor-system-akka.actor.default-dispatcher-5] [akka://operator-actor-system/system/akka.persistence.journal.leveldb] Unable to acquire lock on '<path-to-lock>/LOCK'
akka.actor.ActorInitializationException: akka://operator-actor-system/system/akka.persistence.journal.leveldb: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:180)
at akka.actor.ActorCell.create(ActorCell.scala:607)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
at akka.dispatch.Mailbox.run(Mailbox.scala:223)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Unable to acquire lock on '<path-to-lock>/LOCK'
at org.iq80.leveldb.impl.DbLock.<init>(DbLock.java:55)
at org.iq80.leveldb.impl.DbImpl.<init>(DbImpl.java:167)
at org.iq80.leveldb.impl.Iq80DBFactory.open(Iq80DBFactory.java:59)
at akka.persistence.journal.leveldb.LeveldbStore$class.preStart(LeveldbStore.scala:178)
at akka.persistence.journal.leveldb.LeveldbJournal.preStart(LeveldbJournal.scala:23)
at akka.actor.Actor$class.aroundPreStart(Actor.scala:510)
at akka.persistence.journal.leveldb.LeveldbJournal.aroundPreStart(LeveldbJournal.scala:23)
at akka.actor.ActorCell.create(ActorCell.scala:590)
... 7 more
When the PersistentActor restarts:
[ERROR] [02/04/2019 15:39:57.168] [operator-actor-system-akka.actor.default-dispatcher-16] [akka://operator-actor-system/user/controller/view-manager/records-service-api-supervisor/records-service-api] Persistence failure when replaying events for persistenceId [record-service-persistence-actor]. Last known sequence number [0] (akka.persistence.RecoveryTimedOut)
Thanks,
Ido Sorozon
P.S. Versions:
Scala: 2.11.8
Akka: 2.4.19
To debug a locked-file problem, we're calling Sysinternals' Handle64.exe 4.11 from a .NET process (via Process.Start with asynchronous output redirection). The calling process hangs on Process.WaitForExit because the Handle64 process doesn't exit (for more than two hours).
We took a dump of the corresponding Handle64 process and checked it in the Visual Studio 2017 debugger. It shows two threads ("Main Thread" and "ntdll.dll!TppWorkerThread").
Main thread's call stack:
ntdll.dll!NtWaitForSingleObject () Unknown
ntdll.dll!LdrpDrainWorkQueue() Unknown
ntdll.dll!RtlExitUserProcess() Unknown
kernel32.dll!ExitProcessImplementation () Unknown
handle64.exe!000000014000664c() Unknown
handle64.exe!00000001400082a5() Unknown
kernel32.dll!BaseThreadInitThunk () Unknown
ntdll.dll!RtlUserThreadStart () Unknown
Worker thread's call stack:
ntdll.dll!NtWaitForSingleObject() Unknown
ntdll.dll!LdrpDrainWorkQueue() Unknown
ntdll.dll!LdrpInitializeThread() Unknown
ntdll.dll!_LdrpInitialize() Unknown
ntdll.dll!LdrInitializeThunk() Unknown
My question is: Why would a process hang in LdrpDrainWorkQueue? From https://stackoverflow.com/a/42789684/62838, I gather that this is the Windows 10 parallel loader at work, but why would it get stuck while exiting the process? Can this be caused by how we invoke Handle64 from another process? I.e., are we doing something wrong or is this rather a bug in Handle64?
How long did you wait?
According to this analysis,
The worker thread idle timeout is set to 30 seconds. Programs which
execute in less than 30 seconds will appear to hang due to
ntdll!TppWorkerThread waiting for the idle timeout before the process
terminates.
I would recommend trying to set the registry key specified in that article to disable the parallel loader and seeing if this resolves the issue.
Parent Key: HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\handle64.exe
Value Name: MaxLoaderThreads
Type: DWORD
Value: 1 to disable
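For example, the value can be created from an elevated command prompt (adjust the executable name if you are running the 32-bit handle.exe instead):
reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\handle64.exe" /v MaxLoaderThreads /t REG_DWORD /d 1 /f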
I have a CEP flow with a high throughput of 100+ messages per second.
I am publishing the result of my processing through a JMS publisher with the following configuration:
Output Event Adapter Type* : JMS
JNDI Initial Context Factory Class: org.apache.activemq.jndi.ActiveMQInitialContextFactory
JNDI Provider URL *: tcp://localhost:61616
Connection Factory JNDI Name *: QueueConnectionFactory
Destination Type *: Queue
Destination *: myqueue
Also, to check whether the problem was a lack of concurrency, I added:
Concurrent Publishers: Allow
to the JMS publisher, and I am getting the following error:
ERROR {org.wso2.carbon.event.output.adapter.jms.JMSEventAdapter} - Event dropped at Output Adapter 'kpis' for tenant id '-1234', Job queue is full, Task java.util.concurrent.FutureTask#5651dd6c rejected from java.util.concurrent.ThreadPoolExecutor#3c8c7b29[Running, pool size = 1, active threads = 1, queued tasks = 10000, completed tasks = 176986]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask#5651dd6c rejected from java.util.concurrent.ThreadPoolExecutor#3c8c7b29[Running, pool size = 1, active threads = 1, queued tasks = 10000, completed tasks = 176986]
Is there any limitation on the throughput to a JMS ActiveMQ broker?
Also, so far there is no consumer on the queue I am writing all the messages to. Can that have a negative impact on the WSO2 CEP publisher, causing this error and degrading performance?
From reading some information online, it looks like this might be a direct problem with the pool size.
Is it possible to increase the JMSEventAdapter pool size? If yes, then how?
FULL STACK TRACE:
ERROR {org.wso2.carbon.event.output.adapter.jms.JMSEventAdapter} - Event dropped at Output Adapter 'kpis' for tenant id '-1234', Job queue is full, Task java.util.concurrent.FutureTask#745cb718 rejected from java.util.concurrent.ThreadPoolExecutor#3a7d9bcf[Running, pool size = 100, active threads = 100, queued tasks = 10000, completed tasks = 5151]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask#745cb718 rejected from java.util.concurrent.ThreadPoolExecutor#3a7d9bcf[Running, pool size = 100, active threads = 100, queued tasks = 10000, completed tasks = 5151]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at org.wso2.carbon.event.output.adapter.jms.JMSEventAdapter.publish(JMSEventAdapter.java:142)
at org.wso2.carbon.event.output.adapter.core.internal.OutputAdapterRuntime.publish(OutputAdapterRuntime.java:62)
at org.wso2.carbon.event.output.adapter.core.internal.CarbonOutputEventAdapterService.publish(CarbonOutputEventAdapterService.java:143)
at org.wso2.carbon.event.publisher.core.internal.EventPublisher.process(EventPublisher.java:414)
at org.wso2.carbon.event.publisher.core.internal.EventPublisher.sendEvent(EventPublisher.java:226)
at org.wso2.carbon.event.publisher.core.internal.EventPublisher.onEvent(EventPublisher.java:294)
at org.wso2.carbon.event.stream.core.internal.EventJunction.sendEvents(EventJunction.java:194)
at org.wso2.carbon.event.processor.core.internal.listener.SiddhiOutputStreamListener.receive(SiddhiOutputStreamListener.java:100)
at org.wso2.siddhi.core.stream.output.StreamCallback.receiveEvents(StreamCallback.java:98)
at org.wso2.siddhi.core.stream.output.StreamCallback.receive(StreamCallback.java:69)
at org.wso2.siddhi.core.stream.StreamJunction.sendEvent(StreamJunction.java:126)
at org.wso2.siddhi.core.stream.StreamJunction$Publisher.send(StreamJunction.java:323)
at org.wso2.siddhi.core.query.output.callback.InsertIntoStreamCallback.send(InsertIntoStreamCallback.java:46)
at org.wso2.siddhi.core.query.output.ratelimit.OutputRateLimiter.sendToCallBacks(OutputRateLimiter.java:78)
at org.wso2.siddhi.core.query.output.ratelimit.PassThroughOutputRateLimiter.process(PassThroughOutputRateLimiter.java:40)
at org.wso2.siddhi.core.query.selector.QuerySelector.processNoGroupBy(QuerySelector.java:123)
at org.wso2.siddhi.core.query.selector.QuerySelector.process(QuerySelector.java:86)
at org.wso2.siddhi.core.query.processor.filter.FilterProcessor.process(FilterProcessor.java:56)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.processAndClear(ProcessStreamReceiver.java:154)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.process(ProcessStreamReceiver.java:80)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.receive(ProcessStreamReceiver.java:150)
at org.wso2.siddhi.core.stream.StreamJunction.sendData(StreamJunction.java:214)
at org.wso2.siddhi.core.stream.StreamJunction.access$200(StreamJunction.java:46)
at org.wso2.siddhi.core.stream.StreamJunction$Publisher.send(StreamJunction.java:343)
at org.wso2.siddhi.core.stream.input.InputDistributor.send(InputDistributor.java:49)
at org.wso2.siddhi.core.stream.input.InputEntryValve.send(InputEntryValve.java:59)
at org.wso2.siddhi.core.stream.input.InputHandler.send(InputHandler.java:51)
at org.wso2.carbon.event.processor.core.internal.listener.SiddhiInputEventDispatcher.sendEvent(SiddhiInputEventDispatcher.java:39)
at org.wso2.carbon.event.processor.core.internal.listener.AbstractSiddhiInputEventDispatcher.consumeEvent(AbstractSiddhiInputEventDispatcher.java:104)
at org.wso2.carbon.event.stream.core.internal.EventJunction.sendEvents(EventJunction.java:183)
at org.wso2.carbon.event.processor.core.internal.listener.SiddhiOutputStreamListener.receive(SiddhiOutputStreamListener.java:100)
at org.wso2.siddhi.core.stream.output.StreamCallback.receiveEvents(StreamCallback.java:98)
at org.wso2.siddhi.core.stream.output.StreamCallback.receive(StreamCallback.java:69)
at org.wso2.siddhi.core.stream.StreamJunction.sendEvent(StreamJunction.java:126)
at org.wso2.siddhi.core.stream.StreamJunction$Publisher.send(StreamJunction.java:323)
at org.wso2.siddhi.core.query.output.callback.InsertIntoStreamCallback.send(InsertIntoStreamCallback.java:46)
at org.wso2.siddhi.core.query.output.ratelimit.OutputRateLimiter.sendToCallBacks(OutputRateLimiter.java:78)
at org.wso2.siddhi.core.query.output.ratelimit.PassThroughOutputRateLimiter.process(PassThroughOutputRateLimiter.java:40)
at org.wso2.siddhi.core.query.selector.QuerySelector.processNoGroupBy(QuerySelector.java:123)
at org.wso2.siddhi.core.query.selector.QuerySelector.process(QuerySelector.java:86)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.processAndClear(ProcessStreamReceiver.java:154)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.process(ProcessStreamReceiver.java:80)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.receive(ProcessStreamReceiver.java:102)
at org.wso2.siddhi.core.stream.StreamJunction.sendEvent(StreamJunction.java:126)
at org.wso2.siddhi.core.stream.StreamJunction$Publisher.send(StreamJunction.java:323)
at org.wso2.siddhi.core.query.output.callback.InsertIntoStreamCallback.send(InsertIntoStreamCallback.java:46)
at org.wso2.siddhi.core.query.output.ratelimit.OutputRateLimiter.sendToCallBacks(OutputRateLimiter.java:78)
at org.wso2.siddhi.core.query.output.ratelimit.PassThroughOutputRateLimiter.process(PassThroughOutputRateLimiter.java:40)
at org.wso2.siddhi.core.query.selector.QuerySelector.processNoGroupBy(QuerySelector.java:123)
at org.wso2.siddhi.core.query.selector.QuerySelector.process(QuerySelector.java:86)
at org.wso2.siddhi.core.query.input.stream.join.JoinProcessor.process(JoinProcessor.java:110)
at org.wso2.siddhi.core.query.processor.stream.window.LengthWindowProcessor.process(LengthWindowProcessor.java:86)
at org.wso2.siddhi.core.query.processor.stream.window.WindowProcessor.processEventChunk(WindowProcessor.java:57)
at org.wso2.siddhi.core.query.processor.stream.AbstractStreamProcessor.process(AbstractStreamProcessor.java:101)
at org.wso2.siddhi.core.query.input.stream.join.JoinProcessor.process(JoinProcessor.java:118)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.processAndClear(ProcessStreamReceiver.java:154)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.process(ProcessStreamReceiver.java:80)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.receive(ProcessStreamReceiver.java:102)
at org.wso2.siddhi.core.stream.StreamJunction.sendEvent(StreamJunction.java:126)
at org.wso2.siddhi.core.stream.StreamJunction$Publisher.send(StreamJunction.java:323)
at org.wso2.siddhi.core.query.output.callback.InsertIntoStreamCallback.send(InsertIntoStreamCallback.java:46)
at org.wso2.siddhi.core.query.output.ratelimit.OutputRateLimiter.sendToCallBacks(OutputRateLimiter.java:78)
at org.wso2.siddhi.core.query.output.ratelimit.PassThroughOutputRateLimiter.process(PassThroughOutputRateLimiter.java:40)
at org.wso2.siddhi.core.query.selector.QuerySelector.processNoGroupBy(QuerySelector.java:123)
at org.wso2.siddhi.core.query.selector.QuerySelector.process(QuerySelector.java:86)
at org.wso2.siddhi.core.query.processor.filter.FilterProcessor.process(FilterProcessor.java:56)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.processAndClear(ProcessStreamReceiver.java:154)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.process(ProcessStreamReceiver.java:80)
at org.wso2.siddhi.core.query.input.ProcessStreamReceiver.receive(ProcessStreamReceiver.java:150)
at org.wso2.siddhi.core.stream.StreamJunction.sendData(StreamJunction.java:214)
at org.wso2.siddhi.core.stream.StreamJunction.access$200(StreamJunction.java:46)
at org.wso2.siddhi.core.stream.StreamJunction$Publisher.send(StreamJunction.java:343)
at org.wso2.siddhi.core.stream.input.InputDistributor.send(InputDistributor.java:49)
at org.wso2.siddhi.core.stream.input.InputEntryValve.send(InputEntryValve.java:59)
at org.wso2.siddhi.core.stream.input.InputHandler.send(InputHandler.java:51)
at org.wso2.carbon.event.processor.core.internal.listener.SiddhiInputEventDispatcher.sendEvent(SiddhiInputEventDispatcher.java:39)
at org.wso2.carbon.event.processor.core.internal.listener.AbstractSiddhiInputEventDispatcher.consumeEvent(AbstractSiddhiInputEventDispatcher.java:104)
at org.wso2.carbon.event.stream.core.internal.EventJunction.sendEvent(EventJunction.java:146)
at org.wso2.carbon.event.receiver.core.internal.management.InputEventDispatcher.onEvent(InputEventDispatcher.java:27)
at org.wso2.carbon.event.receiver.core.internal.EventReceiver.sendEvent(EventReceiver.java:298)
at org.wso2.carbon.event.receiver.core.internal.EventReceiver.processMappedEvent(EventReceiver.java:222)
at org.wso2.carbon.event.receiver.core.internal.EventReceiver$MappedEventSubscription.onEvent(EventReceiver.java:355)
at org.wso2.carbon.event.input.adapter.core.internal.InputAdapterRuntime.onEvent(InputAdapterRuntime.java:110)
at org.wso2.carbon.event.input.adapter.jms.internal.util.JMSMessageListener.onMessage(JMSMessageListener.java:61)
at org.wso2.carbon.event.input.adapter.jms.internal.util.JMSTaskManager$MessageListenerTask.handleMessage(JMSTaskManager.java:643)
at org.wso2.carbon.event.input.adapter.jms.internal.util.JMSTaskManager$MessageListenerTask.run(JMSTaskManager.java:542)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Due to the high throughput of the execution plans towards the publishers and the async mechanism that closes JMS connections, the ActiveMQ JMS connection pool inside the WSO2 CEP engine is unable to keep up with the opening and closing of those connections.
This rapidly exhausts all the available connections, independently of the maximum number configured.
The solution in my case was to reduce the number of messages sent per unit of time and accumulate results.
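As for increasing the JMSEventAdapter pool size itself: if I remember correctly, the output adapter thread pool and job queue are configured in <CEP_HOME>/repository/conf/output-event-adapters.xml, and the defaults there match the numbers in the stack trace (100 threads, job queue of 10000). Treat the following as a sketch only; the file location and property keys may differ between CEP versions, so verify them against your installation:
<adapterConfig type="jms">
    <!-- illustrative values; raise maxThread/jobQueueSize to give the publisher more headroom -->
    <property key="minThread">8</property>
    <property key="maxThread">100</property>
    <property key="keepAliveTimeInMillis">20000</property>
    <property key="jobQueueSize">10000</property>
</adapterConfig>
Even with a larger pool and queue, if nothing consumes myqueue and the publish rate stays above what the connection can absorb, the job queue will eventually fill up again.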
I am building a system where a Producer sends a list of tasks to be queued, which will be consumed by a number of Consumers.
Assume I have a list of tasks and they can be categorised into Black, Orange and Yellow. All the Black tasks are sent to Queue_0, Orange to Queue_1 and Yellow to Queue_2, and I will assign a worker to each queue (i.e. Consumer_0 to Queue_0, Consumer_1 to Queue_1 and Consumer_2 to Queue_2). If the Black list gets larger, I want to add an extra Consumer (i.e. Consumer_3) to Queue_0 to aid Consumer_0.
I went through the RabbitMQ tutorials on Worker Queues and Routing. I thought Routing would solve my problem. I launched three terminals: a producer and two consumers which will receive Black tasks. When the producer sends a few Black tasks (Black_Task_1, Black_Task_2), both consumers receive both messages (i.e. Consumer_0 receives Black_Task_1 and Black_Task_2, and Consumer_3 also receives Black_Task_1 and Black_Task_2). I want my consumers to share the tasks, not do the same task. For example, Consumer_0 does Black_Task_1 while Consumer_3 does Black_Task_2. With what configuration can I achieve that?
=============================
Update
This is sample code taken from the RabbitMQ Routing tutorial, which I modified a little. Note that this code doesn't send to Black, Orange or Yellow queues, but the concept is the same.
emit_log_direct.py
#!/usr/bin/env python
import pika
import sys
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='direct_logs',
                         type='direct')

severity = sys.argv[1] if len(sys.argv) > 1 else 'info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'

channel.basic_publish(exchange='direct_logs',
                      routing_key=severity,
                      body=message)
print " [x] Sent %r:%r" % (severity, message)
connection.close()
receive_logs_direct.py
#!/usr/bin/env python
import pika
import sys
import time
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='direct_logs',
                         type='direct')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:]
if not severities:
    print >> sys.stderr, "Usage: %s [info] [warning] [error]" % (sys.argv[0],)
    sys.exit(1)

for severity in severities:
    channel.queue_bind(exchange='direct_logs',
                       queue=queue_name,
                       routing_key=severity)

print ' [*] Waiting for logs. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] %r:%r" % (method.routing_key, body,)
    time.sleep(1)
    print " [x] Done"
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback,
                      queue=queue_name)
channel.start_consuming()
Producer
nuttynibbles$ ./4_emit_log_direct.py info "run run info"
[x] Sent 'info':'run run info'
Consumer_0
nuttynibbles$ ./4_receive_logs_direct_customize.py info
[*] Waiting for logs. To exit press CTRL+C
[x] 'info':'run run info'
[x] Done
Consumer_3
nuttynibbles$ ./4_receive_logs_direct_customize.py info
[*] Waiting for logs. To exit press CTRL+C
[x] 'info':'run run info'
[x] Done
I think your basic issue is with this:
If the Black list gets larger, I want to add an extra Consumer (i.e. Consumer_3) to Queue_0 to aid Consumer_0.
As soon as you add another consumer to the queue, it will pick up the next available message.
If the first consumer does not acknowledge the message, then multiple workers will be able to work on the same message, as it will remain on the queue.
So make sure you are correctly acknowledging the messages:
By default, RabbitMQ will send each message to the next consumer, in
sequence. On average every consumer will get the same number of
messages. This way of distributing messages is called round-robin.
[...]
There aren't any message timeouts; RabbitMQ will redeliver the message
only when the worker connection dies. It's fine even if processing a
message takes a very, very long time.
Depending on the nature of the task, you may be able to split the work between multiple processes by creating a priority queue, which is used by C1 (a consumer) to get additional resources. In this case you'll have to have workers ready and listening on the separate priority queue, thus creating a sub-queue where T1 (a task) is being processed.
However, in order to do this, the initial C1 has to make sure the task is no longer available by acknowledging its receipt.
I think that your problem is that you are creating a new Queue for each consumer. When you call
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
in your consumer, this declares a new queue, tells RabbitMQ to generate a unique name for it, and marks it for exclusive use by the channel in the consumer that is calling it. That means that each consumer will have its own queue.
You then bind each new Queue to the exchange using the severity as a routing key. When a message comes into a direct Exchange, RabbitMQ will route a copy of it to every Queue that is bound with a matching routing key. There is no round-robin across the Queues. Each consumer will get a copy of the message, which is what you are observing.
I believe what you want to do is have each consumer use the same name for the queue, specify that name in queue_declare, and not make it exclusive. Then all the consumers will be listening on the same queue, and the messages will be delivered to one of them, basically in round-robin fashion.
The producer (the emit_log_direct.py program) doesn't declare or bind the queue; it doesn't have to, but if the binding isn't established before the message is sent, the message will be discarded. If you are using a fixed queue, you can have the producer set it up as well; just be sure to use the same parameters (e.g. queue_name) as the consumer.
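For illustration, here is a sketch of the consumer with those changes applied (same old-style pika API as the tutorial code above; the fixed queue name black_tasks and the routing key black are made up for the example):
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs',
                         type='direct')

# A fixed, shared, non-exclusive queue: every worker that declares the same
# name consumes from the same queue, so tasks are spread round-robin.
channel.queue_declare(queue='black_tasks')
channel.queue_bind(exchange='direct_logs',
                   queue='black_tasks',
                   routing_key='black')

def callback(ch, method, properties, body):
    print " [x] %r:%r" % (method.routing_key, body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Give each worker only one unacknowledged message at a time.
channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='black_tasks')
channel.start_consuming()
Start two copies of this script and each published Black task will be delivered to only one of them.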
Our system processes messages delivered from a messaging system. If no message is received after 10 seconds, an error should be raised (inactivity timeout).
I was thinking of using a ScheduledExecutorService (with 1 Thread). Each time a message is received, I cancel the previous timeout task and submit a new one:
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
Runnable timeoutTask = () -> { /* raise the inactivity error */ };
ScheduledFuture<?> timeout = scheduler.schedule(timeoutTask, 10, TimeUnit.SECONDS);
...
synchronized (this) {
    timeout.cancel(false);  // drop the previous timeout task
    timeout = scheduler.schedule(timeoutTask, 10, TimeUnit.SECONDS);
}
In the normal case, we process ~1000 messages/sec. Would this approach scale?
If you share the thread pool and keep the running time of timeoutTask low, it will most probably be fine. If you had one thread pool per task at ~1000 tasks/sec, that would not work.
If you are still worried, you can have a look at HashedWheelTimer from the Netty project (link). It is very efficient at scheduling timeouts. Note that you MUST share the HashedWheelTimer instance as well.
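To give an idea of how that would look, here is a minimal sketch (the class and field names are mine, not from the original system):
import io.netty.util.HashedWheelTimer;
import io.netty.util.Timeout;
import java.util.concurrent.TimeUnit;

public class InactivityWatchdog {

    // One shared timer instance for the whole application, as noted above.
    private static final HashedWheelTimer TIMER = new HashedWheelTimer();

    private final long timeoutSeconds;
    private final Runnable onTimeout;
    private Timeout pending;

    public InactivityWatchdog(long timeoutSeconds, Runnable onTimeout) {
        this.timeoutSeconds = timeoutSeconds;
        this.onTimeout = onTimeout;
    }

    // Call on every received message: cancel the previous timeout and arm a new one.
    public synchronized void messageReceived() {
        if (pending != null) {
            pending.cancel();
        }
        pending = TIMER.newTimeout(t -> onTimeout.run(), timeoutSeconds, TimeUnit.SECONDS);
    }
}
Cancelled timeouts are cheap to discard in the wheel, so rescheduling on every message at ~1000 msg/sec should not be a problem.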