While running through the Qpid C++ API tutorial I ran into "Session ended by peer with amqp:internal-error", which I assume is because of different versions of the Qpid API and my broker (RabbitMQ).
I have configured RabbitMQ to use AMQP 1.0, but it looks like Qpid defaults to 0-10. I have found a bunch of articles saying I should move up to AMQP 1.0, but I have not been able to find out how. Does anyone know how to do this?
Figured out the solution: when creating the connection you can set the protocol there, though I think you may need Qpid Proton installed as well.
Connection connection("rabbitmq-serv:5672","{protocol: 'amqp1.0'}");
It still gets some failures, but RabbitMQ now seems to acknowledge that the connection exists.
Also, Qpid Proton seems to connect to RabbitMQ with no problem, using AMQP 1.0 by default.
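For completeness, here is a minimal sketch of the whole flow using the Qpid Messaging API (the broker address and queue name are placeholders, and the protocol option requires the Proton-based AMQP 1.0 driver to be installed):

#include <qpid/messaging/Connection.h>
#include <qpid/messaging/Session.h>
#include <qpid/messaging/Sender.h>
#include <qpid/messaging/Message.h>
#include <iostream>

int main() {
    using namespace qpid::messaging;
    // Select the AMQP 1.0 driver instead of the default 0-10.
    Connection connection("rabbitmq-serv:5672", "{protocol: 'amqp1.0'}");
    try {
        connection.open();
        Session session = connection.createSession();
        Sender sender = session.createSender("test.queue"); // placeholder queue name
        sender.send(Message("Hello from Qpid over AMQP 1.0"));
        session.close();
        connection.close();
    } catch (const std::exception& error) {
        std::cerr << error.what() << std::endl;
        connection.close();
        return 1;
    }
    return 0;
}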
We are trying to implement an ActiveMQ C client on an AIX server and are having a lot of compatibility issues compiling the CMS client on AIX.
Can anyone suggest other possibilities for using an ActiveMQ client? Step-by-step guidance would help us solve our issue.
Server: AIX 6.1
Compiler: XLC
Thanks in advance.
There is no supported ActiveMQ C or C++ client for AIX. In fact, the ActiveMQ-CPP client is not being actively maintained, so I would suggest looking into something simpler such as a STOMP client, or giving one of the Qpid project clients a try to see if that would work, since ActiveMQ 5 has support for AMQP 1.0 as well.
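To give a sense of how small a STOMP client can be, here is a rough sketch that sends one message over STOMP 1.0 using plain POSIX sockets, with no client library to port at all (the broker address, credentials, and queue name are placeholders):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstring>
#include <string>
#include <iostream>

static void sendFrame(int fd, const std::string& frame) {
    // STOMP frames are terminated by a NUL byte; c_str() provides it.
    ::send(fd, frame.c_str(), frame.size() + 1, 0);
}

int main() {
    int fd = ::socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(61613);                 // default ActiveMQ STOMP port
    addr.sin_addr.s_addr = inet_addr("10.0.0.5"); // placeholder broker address
    if (::connect(fd, (sockaddr*)&addr, sizeof(addr)) != 0) {
        std::cerr << "connect failed" << std::endl;
        return 1;
    }
    sendFrame(fd, "CONNECT\nlogin:guest\npasscode:guest\n\n");
    char buf[512];
    ::recv(fd, buf, sizeof(buf), 0);              // expect a CONNECTED frame back
    sendFrame(fd, "SEND\ndestination:/queue/test\n\nhello from AIX\n");
    sendFrame(fd, "DISCONNECT\n\n");
    ::close(fd);
    return 0;
}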
I have been trying to connect two machines, both virtual machines: one is Xubuntu and the other is Ubuntu. I'm also very new to OpenDDS, but the best way to do it - or so it seems - is to use the .ini files.
However, when I try to connect, I seem to fail at changing the discovery server, since the default is localhost:12345. Can somebody help me configure the file properly?
I have tried using the dds_udp_conf.ini and the TCP one, but it doesn't seem to work. I also tried using unicast, but that failed as well.
The ini file:
[common]
DCPSDebugLevel=0
DCPSInfoRepo=corbaloc::localhost::12345/DCPSInfoRepo
DCPSGlobalTransportConfig=config1
[config/config1]
transports=udp1
[transport/udp1]
transport_type=udp
And I use the syntax:
./publisher -DCPSConfigFile conf.ini
Well, the publisher and subscriber are supposed to connect, but the publisher prints some error messages and nothing happens on the other VM.
I seem to fail because I can't change the discovery configuration away from localhost.
When I try to run the server with a parameter other than localhost:12345, it also prints error messages.
It's unclear to me where you're running the InfoRepo if both the publisher and subscriber are told the InfoRepo is running at localhost. Regardless, I would recommend using RTPS discovery and transport instead. It's easy to set up because the participants can find each other through multicast on the network, without an InfoRepo. This config is the simplest way to use RTPS with OpenDDS:
[common]
DCPSDefaultDiscovery=DEFAULT_RTPS
DCPSGlobalTransportConfig=$file
[transport/the_rtps_transport]
transport_type=rtps_udp
Just give this to both programs and they should find each other. If not, that probably means something is wrong with how the networking is set up on your VMs.
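Assuming you save the config above as rtps.ini (the file name is just a placeholder), both sides are started the same way you were already doing it:

./publisher -DCPSConfigFile rtps.ini
./subscriber -DCPSConfigFile rtps.ini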
I am a little new to ActiveMQ, so please bear with me.
I am trying to take advantage of the ActiveMQ priority backup feature for some of my Java and CPP applications. I have two brokers on two different servers (local and remote), and I want the following behavior for my apps:
Always connect to the local broker on startup
If the local broker goes down, connect to the remote one
While connected to remote, if local comes back up, reconnect to local
I have had success testing it on the Java apps by simply adding priorityBackup to my URI options, i.e.
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false&priorityBackup=true
However, things aren't going as smoothly on the CPP side.
The following works fine in the CPP apps (with basic failover functionality working - i.e. jumping to remote when local goes down):
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false
But updating the URI options with priorityBackup seems to break the failover functionality completely (my apps never fail over to the remote broker; they just stay in some kind of broker-less/limbo state when their local broker goes down):
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false&priorityBackup=true
Is there anything I am missing here? Are there extra URI options that I should have included?
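For reference, this is roughly how the connection is set up on the CPP side (a trimmed sketch; the broker hostnames are the same placeholders as above):

#include <activemq/library/ActiveMQCPP.h>
#include <activemq/core/ActiveMQConnectionFactory.h>
#include <cms/Connection.h>
#include <memory>

int main() {
    activemq::library::ActiveMQCPP::initializeLibrary();
    {
        // Same failover URI that works for the Java apps.
        activemq::core::ActiveMQConnectionFactory factory(
            "failover:(tcp://local:61616,tcp://remote:61616)"
            "?randomize=false&priorityBackup=true");
        std::auto_ptr<cms::Connection> connection(factory.createConnection());
        connection->start();
        // ... create session, producers and consumers as usual ...
        connection->close();
    }
    activemq::library::ActiveMQCPP::shutdownLibrary();
    return 0;
}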
UPDATE: Transport connector info
<transportConnectors>
<transportConnector name="ClientOpenwire" uri="tcp://0.0.0.0:61616?wireFormat.maxInactivityDuration=7000"/>
<transportConnector name="Broker2BrokerOpenwire" uri="tcp://0.0.0.0:62627?wireFormat.maxInactivityDuration=5000"/>
<transportConnector name="stompConnector" uri="stomp://0.0.0.0:62623"/>
</transportConnectors>
The backup and priorityBackup parameters are handled in completely different ways in the Java and C++ implementations of the library.
The Java implementation works well, but unfortunately the C++ implementation is broken. There are no extra options that can fix this issue; serious changes to the library are required to resolve it.
I was testing this issue using activemq-cpp-library 3.8.3 and brokers in various versions (5.10.0, 5.11.1). The issue is not fixed in the 3.8.4 release.
Does the XBMC web service have a call to instruct the service to update its library?
I can't seem to find it in the documentation but this would seem like a pretty basic thing to include.
Yes! In the current Dharma release, there is the HTTP API (deprecated) and the JSON-RPC API.
In a few weeks, with the next release (you can also download the nightly build, but beware of bugs), there is an updated JSON-RPC API:
http://wiki.xbmc.org/index.php?title=JSON-RPC_API/v3#VideoLibrary.Scan
Sending this:
{"id":1,"method":"VideoLibrary.Scan","params":[],"jsonrpc":"2.0"}
should do the trick for the next release. (TCP port 8080 in my case.)
{"id":1,"method":"VideoLibrary.ScanForContent","params":[],"jsonrpc":"2.0"}
For the current Dharma-Release.
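If you are calling it through the built-in web server rather than the raw TCP interface, the request would look roughly like this (a sketch; the host name and port 8080 are assumptions based on the default web server settings, and on Dharma you would substitute the ScanForContent method from above):

POST /jsonrpc HTTP/1.1
Host: xbmc-host:8080
Content-Type: application/json

{"id":1,"method":"VideoLibrary.Scan","params":[],"jsonrpc":"2.0"}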
Here's a small program that does that using Python: https://github.com/asafge/TV_Mover
I'm trying to implement a solution using HornetQ. Since I need to access it through a C++ application, that raises a problem. I'm compiling the built-in activemq-cpp example and changing it to work with STOMP instead of OpenWire (HornetQ doesn't understand OpenWire). The application refuses to produce messages on the intended queue. It seems that a lot of people are having the same issue, but no one has the answer. (Someone said it's a bug in the CMS API.)
Does anyone have a practical example of HornetQ working with a C++ app?
PS: Obviously the activemq-cpp example works with an ActiveMQ server using OpenWire.
HornetQ is probably mapping destination names differently than the ActiveMQ C++ STOMP client expects. For instance, in ActiveMQ a topic destination is prefixed with /topic/ and a queue with /queue/. I believe this is different in HornetQ, but I'm not really sure. You may want to look in their docs for what they use; if it's configurable, you could alter it to match what the CMS client is sending. You could also modify your local copy of CMS to send the destination name using the HornetQ prefix.
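To make the mismatch concrete, here is what a raw STOMP SEND frame looks like under each naming convention (^@ stands for the NUL byte that terminates every STOMP frame; the HornetQ form is a guess based on its jms.queue.* address naming, so check the HornetQ docs before relying on it):

SEND
destination:/queue/exampleQueue

hello world
^@

SEND
destination:jms.queue.exampleQueue

hello world
^@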
Regards, Tim (www.fusesource.com)
The only solution I have seen is a HornetQ-to-ActiveMQ bridge written in Java, with the C++ app then working against ActiveMQ. You might also be able to do something with JNI to handle marshaling messages into your app.