I want to delete all the queues present in RabbitMQ server using AMQP-CPP library.
I could not find any methods in AMQP-CPP library that gives the list of queues / deletes all the queues present (if we are not specifying the queue name).
Could you please let me know if there are any possible ways to do this?
The AMQP protocol doesn't have a method to list resources in a broker.
With RabbitMQ, you can use the REST API provided by the management plugin:
To list all queues across all vhosts:
GET /api/queues
To delete a queue in a given vhost:
DELETE /api/queues/$vhost/$name
The deletion step can also be done over AMQP (the queue.delete method), once you know the queue names.
See the complete list of REST endpoints for more information.
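For illustration, a minimal AMQP-CPP sketch of the deletion step, assuming the queue names were already fetched from GET /api/queues with some HTTP client, and assuming a libuv event loop (removeQueue() is AMQP-CPP's wrapper around queue.delete; double-check the callback signature against your library version):

#include <amqpcpp.h>
#include <amqpcpp/libuv.h>
#include <uv.h>
#include <string>
#include <vector>

int main()
{
    // Event loop and connection plumbing required by AMQP-CPP
    uv_loop_t *loop = uv_default_loop();
    AMQP::LibUvHandler handler(loop);
    AMQP::TcpConnection connection(&handler, AMQP::Address("amqp://guest:guest@localhost/"));
    AMQP::TcpChannel channel(&connection);

    // Hypothetical result of a prior GET /api/queues call
    std::vector<std::string> queues = { "queue-1", "queue-2" };

    for (const std::string &name : queues)
    {
        // Issues a queue.delete frame for each named queue
        channel.removeQueue(name).onSuccess([name](uint32_t messageCount)
        {
            // The queue is gone; messageCount messages were dropped with it
        });
    }

    uv_run(loop, UV_RUN_DEFAULT);
    return 0;
}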
From the recent documentation, it seems that Boost.Log v2 has been extended with a text IPC message queue sink backend:
https://www.boost.org/doc/libs/1_80_0/libs/log/doc/html/log/detailed/sink_backends.html
but I haven't found any description of how to configure it via a settings (.ini) file:
https://www.boost.org/doc/libs/1_80_0/libs/log/doc/html/log/detailed/utilities.html#log.detailed.utilities.setup.settings_file
Can anybody tell me where these settings are documented?
Regards
There is no built-in factory for creating text IPC message queue sinks from settings, mainly because the queue setup protocol, including which process should create the queue and with what parameters, is application logic rather than a configurable setting.
You can register a custom sink factory as described here (you don't need a custom sink as you can use the sink backend provided by Boost.Log). The sink factory should process the settings and create and configure the sink and the associated IPC queue accordingly.
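A rough sketch of such a factory, assuming Boost 1.80 (the "QueueName" parameter name and the capacity/block size values are my own choices, and the reliable_message_queue::create() overload should be double-checked against your Boost version):

#include <string>
#include <boost/optional/optional.hpp>
#include <boost/smart_ptr/shared_ptr.hpp>
#include <boost/smart_ptr/make_shared_object.hpp>
#include <boost/log/sinks.hpp>
#include <boost/log/utility/ipc/object_name.hpp>
#include <boost/log/utility/ipc/reliable_message_queue.hpp>
#include <boost/log/utility/setup/from_settings.hpp>
#include <boost/log/utility/setup/settings.hpp>

namespace logging = boost::log;
namespace sinks = boost::log::sinks;

typedef logging::ipc::reliable_message_queue queue_t;
typedef sinks::text_ipc_message_queue_backend< queue_t > backend_t;
typedef sinks::synchronous_sink< backend_t > sink_t;

class ipc_queue_sink_factory : public logging::sink_factory< char >
{
public:
    boost::shared_ptr< sinks::sink > create_sink(settings_section const& settings)
    {
        // "QueueName" is a parameter name I made up for the sink section
        std::string queue_name = "my_app_log_queue";
        if (boost::optional< std::string > param = settings["QueueName"])
            queue_name = param.get();

        boost::shared_ptr< sink_t > sink = boost::make_shared< sink_t >();

        // The queue setup (who creates it, capacity, block size) is exactly
        // the application logic the library leaves to you
        sink->locked_backend()->message_queue().create(
            logging::ipc::object_name(logging::ipc::object_name::user, queue_name),
            256u,    // capacity, in messages
            1024u);  // block size, in bytes

        return sink;
    }
};

// Call this before init_from_file()/init_from_stream() so that
// "Destination=TextIPCQueue" becomes valid in a [Sinks.*] section
void register_ipc_queue_factory()
{
    logging::register_sink_factory("TextIPCQueue",
        boost::make_shared< ipc_queue_sink_factory >());
}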
According to the Google docs, a service running in the flexible environment can be the target of a push task:
Outside of the standard environment, you can't add tasks to push queues, but a service running in the flexible environment can be the target of a push task. You can specify this using the target parameter when adding a task to a queue or by specifying the default target for the queue in queue.yaml.
However, when I tried to do it I get 404 errors in the flexible service.
That's expected, because the endpoint required for task queues (/_ah/queue/deferred) is not defined in the flexible service.
How do I make a flexible service a valid target for task queues?
Do I have to define that endpoint in my code in some way?
Usually, you'll need to write a handler in your worker service to do the processing after receiving a task. In the case of push tasks, the service will send HTTP requests to whatever URL you specify. If no URL is specified, the default URL /_ah/queue/[QUEUE_NAME] will be used.
Now, from the endpoint you mention, it seems you are using deferred tasks, which are a somewhat special kind. Please see this thread for a workaround that adds the needed URL entry. It mentions Managed VMs, but it should still work.
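For illustration, a minimal sketch of such a handler in a Python flexible service (Flask is an assumption, as is the (function, args, kwargs) pickle layout the deferred library uses for its payloads):

import pickle
from flask import Flask, request

app = Flask(__name__)

# The route must match the URL the tasks target; /_ah/queue/deferred is the
# default for deferred tasks.
@app.route('/_ah/queue/deferred', methods=['POST'])
def run_deferred_task():
    # Outside the standard environment there is no deferred library to unpack
    # the payload, so unpickle and dispatch it yourself.
    func, args, kwargs = pickle.loads(request.get_data())
    func(*args, **kwargs)
    return '', 200  # any 2xx tells the queue the task succeeded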
Following is my use case:
A bunch of applications enqueue messages in Kafka under different topics.
Have a consumer of each topic distribute the work to a worker in a cluster. The work can be classified as long-running, memory-intensive, simple, etc., and the worker is chosen accordingly.
This has me exploring Akka cluster for work distribution, routing and scaling. I can use Akka "Supervisor" as a Kafka consumer and assign incoming work to the appropriate worker based on its classification.
But what I am still trying to understand is the correct way to implement resilient communication between the supervisor and workers in the Akka cluster, because as soon as the supervisor consumes a message from Kafka, the Kafka offset is committed. If an error happens in processing after the offset commit, is the following an acceptable way to recover and start from where it last left off?
Make the supervisor a persistent actor by using a durable mailbox backed by Kafka. The supervisor enqueues work in Kafka, and each worker gets its work from Kafka and commits its offset only after completing the work.
As said by Jaakko, it really depends on the third-party library you are using.
As far as I'm concerned, I have successfully used Akka Streams Kafka, although I did enable offset auto-commit.
However, this library may meet your needs since it allows you to customize offset commit (see sections External Offset Storage and Offset Storage in Kafka).
The documentation says:
The Consumer.committableSource makes it possible to commit offset positions to Kafka. Compared to auto-commit this gives exact control of when a message is considered consumed.
In order to disable auto-commit, you have to extend your application.conf file by adding an akka.kafka.consumer section:
akka.kafka.consumer {
  # Properties defined by org.apache.kafka.clients.consumer.ConsumerConfig
  # can be defined in this configuration section.
  kafka-clients {
    # Disable auto-commit by default
    enable.auto.commit = false
  }
}
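With that in place, consuming with explicit commits looks roughly like this (a sketch against the 0.16-era API; the topic, group id, and processWork function are placeholders):

// Sketch: at-least-once processing, committing the offset only after the work
// is done, so a crash mid-processing replays the message on restart.
import akka.actor.ActorSystem
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.StringDeserializer
import scala.concurrent.Future

object Worker extends App {
  implicit val system = ActorSystem("worker")
  implicit val materializer = ActorMaterializer()
  import system.dispatcher

  val settings = ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("work-distributors")

  def processWork(payload: String): Future[Unit] = Future(println(payload))

  Consumer.committableSource(settings, Subscriptions.topics("work"))
    .mapAsync(parallelism = 1) { msg =>
      processWork(msg.record.value())                         // do the work first
        .flatMap(_ => msg.committableOffset.commitScaladsl()) // then commit
    }
    .runWith(Sink.ignore)
}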
The latest version of akka-stream-kafka_2.11 (0.16) is compatible with Akka 2.5.x, but you have to override the akka-stream_2.11 dependency with the one from the Akka toolkit. Currently, I am using this library with Akka 2.5.3 and it works really well.
Hope you will find what you are looking for :)
I have a distributed CEP setup with a JMS broker as the primary input.
Now, if we tell our client application to send events to Topic X, the events will be distributed to each node in the CEP cluster, as each one will be listening on the same Topic X.
Will this lead to duplication of results? Let's say I am counting a certain data field; since each node receives the same data, will my count be double the actual value in a 2-node cluster?
Can the CEP work off a JMS Queue instead of a Topic? That way, whichever node gets the event data first consumes the message off the Queue. Does WSO2 CEP support JMS Queues?
No, currently CEP (2.0.1) does not have support to receive events from a JMS queue.
But if this is your requirement, then you can write your own CEP adaptor (broker) to receive events from a queue and push them to CEP.
To create a custom broker:
Create an appropriate Broker Type by extending org.wso2.carbon.broker.core.BrokerType, and an appropriate Broker Type Factory by extending org.wso2.carbon.broker.core.BrokerTypeFactory, both from the jar org.wso2.carbon.broker.core-4.0.5.jar.
Then, to configure that broker with the CEP, create a file called "broker.xml" at wso2cep-2.0.1/repository/conf and add the following XML:
<brokerTypes xmlns="http://wso2.org/carbon/broker">
  <brokerType class="<<class reference>>" />
  ...
</brokerTypes>
Find detailed documentation on creating a custom broker at http://suhothayan.blogspot.com/2013/02/writing-custom-broker-for-wso2-cep.html
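For orientation only, a hypothetical skeleton of the two classes (the package name is made up, and the abstract methods to override are not shown here; take them from the interfaces in org.wso2.carbon.broker.core-4.0.5.jar or the blog post above):

package com.example.cep.jmsqueue;  // hypothetical package

import org.wso2.carbon.broker.core.BrokerType;
import org.wso2.carbon.broker.core.BrokerTypeFactory;

// Receives events from the JMS queue and pushes them into CEP.
public class JmsQueueBrokerType extends BrokerType {
    // TODO: override the subscribe/publish hooks declared by BrokerType
}

// Referenced from broker.xml via the <brokerType class="..."/> entry.
class JmsQueueBrokerTypeFactory extends BrokerTypeFactory {
    // TODO: override the factory method so it returns a JmsQueueBrokerType
}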
One of the characteristics I love most about Google's Task Queue is its simplicity. More specifically, I love that it takes a URL and some parameters and then posts to that URL when the task queue is ready to execute the task.
This structure means that the tasks are always executing the most current version of the code. Conversely, my gearman workers all run code within my django project -- so when I push a new version live, I have to kill off the old worker and run a new one so that it uses the current version of the code.
My goal is to have the task queue be independent from the code base so that I can push a new live version without restarting any workers. So, I got to thinking: why not make tasks executable by url just like the google app engine task queue?
The process would work like this:
A user request comes in and triggers a few tasks that shouldn't be blocking.
Each task has a unique URL, so I enqueue a Gearman task to POST to the specified URL.
The Gearman server finds a worker and passes the URL and POST data to it.
The worker simply POSTs to the URL with the data, thus executing the task (a sketch of such a worker follows this list).
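A sketch of such a worker using the python-gearman 2.x API (the task name 'post_task' and the JSON payload shape are assumptions, and the request signing is omitted):

import json

import gearman
import requests

def post_to_url(worker, job):
    # job.data is assumed to be JSON like {"url": ..., "data": {...}}
    payload = json.loads(job.data)
    resp = requests.post(payload['url'], data=payload['data'], timeout=10)
    resp.raise_for_status()  # surface failures instead of silently succeeding
    return resp.text

worker = gearman.GearmanWorker(['localhost:4730'])
worker.register_task('post_task', post_to_url)
worker.work()  # blocks, processing jobs forever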
Assume the following:
Each request from a Gearman worker is signed somehow so that we know it's coming from a Gearman server and not a malicious request.
Tasks are limited to run in less than 10 seconds (there would be no long tasks that could time out).
What are the potential pitfalls of such an approach? Here's one that worries me:
The server can potentially get hammered with many requests all at once, triggered by a previous request, so one user request might entail 10 concurrent HTTP requests. I suppose I could have a single worker with a sleep before every request to rate-limit.
Any thoughts?
As a user of both Django and Google AppEngine, I can certainly appreciate what you're getting at. At work I'm currently working on the exact same scenario using some pretty cool open source tools.
Take a look at Celery. It's a distributed task queue built with Python that exposes three concepts: a queue, a set of workers, and a result store. It's pluggable with different tools for each part.
The queue should be battle-hardened, and fast. Check out RabbitMQ for a great queue implementation in Erlang, using the AMQP protocol.
The workers ultimately can be Python functions. You can trigger workers using either queue messages or, perhaps more pertinent to what you're describing, webhooks.
Check out the Celery webhook documentation. Using all these tools you can build a production-ready distributed task queue that implements your requirements above.
I should also mention that, in regards to your first pitfall, Celery implements rate-limiting of tasks using a token bucket algorithm.
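For example, a minimal sketch using a modern Celery API (the 10/m limit and the task body are illustrative, not from the question):

from celery import Celery

import requests

app = Celery('tasks', broker='amqp://guest@localhost//')

# rate_limit uses a token bucket: at most 10 executions per minute per worker
@app.task(rate_limit='10/m')
def hit_webhook(url, data):
    requests.post(url, data=data, timeout=10)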