I'm trying to use ActiveMQ-CPP with HornetQ. I'm using the ActiveMQ-CPP bundled example, but I'm having a hard time with it.
The producer works like a charm, but the consumer gives me the following message:
* BEGIN SERVER-SIDE STACK TRACE
Message: Queue /queue/exampleQueue does not exist
Exception Class
END SERVER-SIDE STACK TRACE *
FILE: activemq/core/ActiveMQConnection.cpp, LINE: 768
FILE: activemq/core/ActiveMQConnection.cpp, LINE: 774
FILE: activemq/core/ActiveMQSession.cpp, LINE: 350
FILE: activemq/core/ActiveMQSession.cpp, LINE: 281
Time to completion = 0.161 seconds.
The problem is that the queue exists. The code works all right with ActiveMQ+Openwire, but I'm not having the same luck with HornetQ+STOMP.
Any ideas?
Try setting the queue's address, as you defined it in HornetQ, as the destination.
Your queue is probably defined in HornetQ like this:
<queue name="exampleQueue">
<address>jms.queue.exampleQueue</address>
</queue>
So, try to connect to this address via STOMP.
See the following frames according to the protocol:
Subscribing to the queue
SUBSCRIBE
destination:jms.queue.exampleQueue
^#
Sending a message
SEND
destination:jms.queue.exampleQueue
it works
^#
As soon as the message is sent, you'll receive it on the session that subscribed to the queue:
MESSAGE
timestamp:1311355464983
redelivered:false
expires:0
subscription:subscription/jms.queue.exampleQueue
priority:0
message-id:523
destination:jms.queue.exampleQueue
it works
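In ActiveMQ-CPP terms, this means passing the full HornetQ address when you create the destination. A minimal consumer sketch, assuming a local broker with the STOMP acceptor on port 61613 (the host, port and general setup here are placeholders in the style of the bundled example, not values from your configuration):

#include <activemq/library/ActiveMQCPP.h>
#include <cms/ConnectionFactory.h>
#include <cms/Connection.h>
#include <cms/Session.h>
#include <cms/Queue.h>
#include <cms/MessageConsumer.h>
#include <cms/Message.h>
#include <memory>

int main() {
    activemq::library::ActiveMQCPP::initializeLibrary();
    {
        // STOMP transport pointed at the HornetQ acceptor (placeholder host/port).
        std::auto_ptr<cms::ConnectionFactory> factory(
            cms::ConnectionFactory::createCMSConnectionFactory(
                "tcp://localhost:61613?wireFormat=stomp"));

        std::auto_ptr<cms::Connection> connection(factory->createConnection());
        connection->start();

        std::auto_ptr<cms::Session> session(
            connection->createSession(cms::Session::AUTO_ACKNOWLEDGE));

        // Use the full HornetQ address, not just "exampleQueue".
        std::auto_ptr<cms::Queue> queue(session->createQueue("jms.queue.exampleQueue"));
        std::auto_ptr<cms::MessageConsumer> consumer(session->createConsumer(queue.get()));

        std::auto_ptr<cms::Message> message(consumer->receive());

        connection->close();
    }
    activemq::library::ActiveMQCPP::shutdownLibrary();
    return 0;
}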
-- EDIT
There's one point left I would like to add...
HornetQ doesn't conform to STOMP's naming standards (see http://community.jboss.org/message/594176 ), so there's a possibility that activemq-cpp follows the behavior of activemq-nms, which "normalizes" queue names to the STOMP standard "/queue/YourQueue" (and causes naming issues).
So, if that's the case, even if you change your destination name to 'jms.queue.exampleQueue', activemq-cpp could normalize it to '/queue/jms.queue.exampleQueue'.
In NMS+HornetQ, there's no out-of-the-box way of avoiding this. The only option is to edit NMS's source code and remove the part that normalizes queue names. The same workaround may apply to activemq-cpp.
HornetQ doesn't like the "/queue/" prefix for a SUBSCRIBE. I took that out of the ToStomp method in StompHelper and everything worked.
Related
I got an ER_NET_PACKETS_OUT_OF_ORDER error when running a multithreaded C++ app using Poco::Data::MySQL and Poco::Data::SessionPool. The error message looks like this:
MySQL: [MySQL]: [Comment]: mysql_stmt_prepare error [mysql_stmt_error]: Got packets out of order [mysql_stmt_errno]: 1156 [mysql_stmt_sqlstate]: 08S01 [statemnt]: ...
The app is making queries from multiple threads every 100ms. The connections are provided by a common SessionPool.
I got around this problem by adding reset=true to the connection string. However, as stated in the official docs, adding this option may result in problems with encoding.
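For reference, this is roughly how the option ends up being passed when the pool is created; a minimal sketch, with host, credentials and database name as placeholders:

#include <Poco/Data/MySQL/Connector.h>
#include <Poco/Data/SessionPool.h>
#include <Poco/Data/Session.h>

int main() {
    Poco::Data::MySQL::Connector::registerConnector();

    // Per the workaround above, "reset=true" in the connection string avoids the
    // out-of-order packets, at the possible cost of encoding issues (see the docs).
    Poco::Data::SessionPool pool(
        "MySQL",
        "host=localhost;port=3306;db=mydb;user=appuser;password=secret;reset=true");

    Poco::Data::Session session(pool.get());
    // ... run statements on 'session' from the worker threads ...
    return 0;
}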
I have an AMQP Source and an AMQP Sink with declarations:
List<Declaration> declarations = new ArrayList<Declaration>() {{
add(QueueDeclaration.create(sourceExchangeName));
add(BindingDeclaration.create(sourceExchangeName, sourceExchangeName).withRoutingKey(sourceRoutingKey));
}};
amqpSource = AmqpSource
.committableSource(
NamedQueueSourceSettings.create(connectionProvider, sourceExchangeName)
.withDeclarations(declarations),
bufferSize);
AmqpWriteSettings amqpWriteSettings = AmqpWriteSettings.create(connectionProvider)
.withExchange("DEST_XCHANGE")
.withRoutingKey("ROUTE123")
.withDeclaration(ExchangeDeclaration.create(destinationExchangeName,
BuiltinExchangeType.DIRECT.getType()));
amqpSink = AmqpSink.create(amqpWriteSettings);
And then I have a flow:
amqpSource.map(doSomething).async().map(doSomethingElse).async().to(amqpSink)
Now, after I started the app, the messages sent to the source queue were not getting consumed. I later found out that this was due to errors that occurred during the declarations (i.e., it worked fine when I removed the .withDeclarations(..) from the Source and Sink settings).
So my questions:
How to detect if the AMQP Source and Sink are up and running fine?
How to ignore declaration exceptions?
If any exception occurs, how can I know and make the system fail?
To answer 1 and 3: the AmqpSink materializes a CompletionStage<Done> that you have to keep and handle (register callback functions on) to observe failure and completion of the stream. In the docs sample we block on that completion stage, which is not good in production code (https://doc.akka.io/docs/alpakka/current/amqp.html#with-sink); that's probably because the sample is included in one of the Alpakka tests. Prefer the usual CompletionStage callback/transformation methods instead (see for example this introduction).
The CompletionStage will fail when an error happens, either while the stream is being materialized/started up or during the processing of elements; alternatively, it will complete once the source reaches its end and every element has gone through your flow into the sink. That means that, for starting up the stream, if it does not fail fairly quickly, it is running.
For question 2, I'm not sure it is possible to ignore the declaration exceptions; it could be that those always fail the connection.
What's the main difference between SenderFaultCode and ReceiverFaultCode? With WCF's FaultExceptions, we can create a FaultCode with two methods:
CreateSenderFaultCode.
CreateReceiverFaultCode.
In which case should we use one or the other? Thanks!
SenderFaultCode: Represents the [SOAP version 1.2] Sender fault code indicating a client call was not formatted correctly or did not contain the appropriate information.
ReceiverFaultCode: Represents the [SOAP version 1.2] Receiver fault code indicating an error occurred during the processing of a client call on the server due to a problem with the recipient.
reference
I'm trying to stream MutationGroups into Cloud Spanner with SpannerIO.
The goal is to write new MutationGroups every 10 seconds, as we will use Spanner to query near-real-time KPIs.
When I don't use any windows, I get the following error:
Exception in thread "main" java.lang.IllegalStateException: GroupByKey cannot be applied to non-bounded PCollection in the GlobalWindow without a trigger. Use a Window.into or Window.triggering transform prior to GroupByKey.
at org.apache.beam.sdk.transforms.GroupByKey.applicableTo(GroupByKey.java:173)
at org.apache.beam.sdk.transforms.GroupByKey.expand(GroupByKey.java:204)
at org.apache.beam.sdk.transforms.GroupByKey.expand(GroupByKey.java:120)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:472)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:286)
at org.apache.beam.sdk.transforms.Combine$PerKey.expand(Combine.java:1585)
at org.apache.beam.sdk.transforms.Combine$PerKey.expand(Combine.java:1470)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:491)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:299)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteGrouped.expand(SpannerIO.java:868)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteGrouped.expand(SpannerIO.java:823)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:472)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:286)
at quantum.base.transform.entity.spanner.SpannerProtoWrite.expand(SpannerProtoWrite.java:52)
at quantum.base.transform.entity.spanner.SpannerProtoWrite.expand(SpannerProtoWrite.java:20)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:491)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:299)
at quantum.entitybuilder.pipeline.EntityBuilderPipeline$Write$SpannerWrite.expand(EntityBuilderPipeline.java:388)
at quantum.entitybuilder.pipeline.EntityBuilderPipeline$Write$SpannerWrite.expand(EntityBuilderPipeline.java:372)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:491)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:299)
at quantum.entitybuilder.pipeline.EntityBuilderPipeline.main(EntityBuilderPipeline.java:122)
:entityBuilder FAILED
Because of the error above I assume the input collection needs to be windowed and triggered, as SpannerIO uses a GroupByKey (this is also what I need for my use case):
...
.apply("1-minute windows", Window.<MutationGroup>into(FixedWindows.of(Duration.standardMinutes(1)))
.triggering(Repeatedly.forever(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(10))
).orFinally(AfterWatermark.pastEndOfWindow()))
.discardingFiredPanes()
.withAllowedLateness(Duration.ZERO))
.apply(SpannerIO.write()
.withProjectId(entityConfig.getSpannerProject())
.withInstanceId(entityConfig.getSpannerInstance())
.withDatabaseId(entityConfig.getSpannerDb())
.grouped());
When I do this, I get the following exceptions during runtime:
java.lang.IllegalArgumentException: Attempted to get side input window for GlobalWindow from non-global WindowFn
org.apache.beam.sdk.transforms.windowing.PartitioningWindowFn$1.getSideInputWindow(PartitioningWindowFn.java:49)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$StepContext.issueSideInputFetch(StreamingModeExecutionContext.java:631)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$UserStepContext.issueSideInputFetch(StreamingModeExecutionContext.java:683)
com.google.cloud.dataflow.worker.StreamingSideInputFetcher.storeIfBlocked(StreamingSideInputFetcher.java:182)
com.google.cloud.dataflow.worker.StreamingSideInputDoFnRunner.processElement(StreamingSideInputDoFnRunner.java:71)
com.google.cloud.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:323)
com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
com.google.cloud.dataflow.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:271)
org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:219)
org.apache.beam.runners.core.SimpleDoFnRunner.access$700(SimpleDoFnRunner.java:69)
org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:517)
org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:505)
org.apache.beam.sdk.values.ValueWithRecordId$StripIdsDoFn.processElement(ValueWithRecordId.java:145)
After investigating further it appears to be due to the .apply(Wait.on(input)) in SpannerIO: It has a global side input which does not seem to work with my fixed windows, as the docs of Wait.java state:
If signal is globally windowed, main input must also be. This typically would be useful only in a batch pipeline, because the global window of an infinite PCollection never closes, so the wait signal will never be ready.
As a temporary workaround I tried the following:
Add a GlobalWindow with triggers instead of fixed windows:
.apply("globalwindow", Window.<MutationGroup>into(new GlobalWindows())
.triggering(Repeatedly.forever(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(10))
).orFinally(AfterWatermark.pastEndOfWindow()))
.discardingFiredPanes()
.withAllowedLateness(Duration.ZERO))
This results in writes to Spanner only when I drain my pipeline. I have the impression the Wait.on() signal is only triggered when the global window closes, and doesn't work with triggers.
Disable the .apply(Wait.on(input)) in SpannerIO:
This results in the pipeline getting stuck on the view creation, which is described in this SO post: SpannerIO Dataflow 2.3.0 stuck in CreateDataflowView.
When I check the worker logs for clues, I do get the following warnings:
logger: "org.apache.beam.sdk.coders.SerializableCoder"
message: "Can't verify serialized elements of type SpannerSchema have well defined equals method. This may produce incorrect results on some PipelineRunner
logger: "org.apache.beam.sdk.coders.SerializableCoder"
message: "Can't verify serialized elements of type BoundedSource have well defined equals method. This may produce incorrect results on some PipelineRunner"
Note that everything works with the DirectRunner and that I'm trying to use the DataflowRunner.
Does anyone have any other suggestions for things I can try to get this running? I can hardly imagine that I'm the only one trying to stream MutationGroups into Spanner.
Thanks in advance!
Currently, the SpannerIO connector is not supported with Beam streaming. Please follow this pull request, which adds streaming support for the SpannerIO connector.
We are building an RTB (real-time bidding) platform, using nginx as the HTTP server, a bidder written in Lua, Google Protocol Buffers for serializing data, and zlog for logging. After test runs, we got three error messages in the nginx error log:
"[libprotobuf Error, google/protobuf/wire_format.cc:1053]
String field contains invalid UTF-8 data when parsing a protocol buffer.
Use the 'bytes' type if you intend to send raw bytes."
So we went back to check the source code of Protocol Buffers, and found that this check is controlled by a macro (-DNDEBUG: it means NOT debug mode?, according to the comment). And -DNDEBUG disables GOOGLE_PROTOBUF_UTF8_VALIDATION (I think?). So we enabled this macro (-DNDEBUG) in the configuration. However, after testing, we still got the same error message. We then changed all the "string" fields to "bytes" in XXX.proto. After testing, the same error message showed up.
worker process 53574 exited on signal 11 (core dumped), then the process died.
lua entry thread aborted: runtime error:/home/bilin/rtb/src/lua/shared/log.lua:34: 'short' is not callable"
Hope somebody can help us solve these problems.
Thank you.