Poco::Data::MySQL 'Got packets out of order' error - C++

I got an ER_NET_PACKETS_OUT_OF_ORDER error when running a multithreaded C++ app using Poco::Data::MySQL and Poco::Data::SessionPool. The error message looks like this:
MySQL: [MySQL]: [Comment]: mysql_stmt_prepare error [mysql_stmt_error]: Got packets out of order [mysql_stmt_errno]: 1156 [mysql_stmt_sqlstate]: 08S01 [statemnt]: ...
The app is making queries from multiple threads every 100ms. The connections are provided by a common SessionPool.

I got around this problem by adding reset=true to the connection string. However, as stated in the official docs, adding this option may result in problems with encoding.

Related

Twisted ssh - session execCommand implementation

Good day. I apologize for asking about obvious things: I write in PHP and know Python at the level of "I started learning it yesterday". I've already spent a few days on this, but to no avail.
I downloaded the Twisted SSH server example for version 20.3 from here: https://docs.twistedmatrix.com/en/twisted-20.3.0/conch/examples/. Line 162 has an execCommand method that I need to implement to make it work. Then I noticed a comment in this method: "We don't support command execution sessions". Hence the question: does this comment apply only to the example, or to the Twisted library entirely? That is, is it possible to implement this method so that the example server works the way I need?
More information (I don't think it is required to answer the questions above).
Why do I need this? I'm trying to put together an environment for writing functional (!) tests (there would be no such problems with unit tests, I guess). Our API uses an SSH client (phpseclib/SSH2) in 30%+ of its endpoints. Whatever I did, I got only three kinds of results, depending on how I implemented this method: (result: success, response: "" - empty; result: success, response: "1"; result: failed, response: "Unable to fulfill channel request at… SSH2.php:3853"). Those were for the SSH2 client. If the error occurs (3rd case), the server logs the following in the terminal:
[SSHServerTransport, 0,127.0.0.1] Got remote error, code 11 reason: ""
[SSHServerTransport, 0,127.0.0.1] connection lost
I just found that this works:
def execCommand(self, protocol, cmd):
    protocol.write('Some text to return')
    protocol.session.conn.sendEOF(protocol.session)
If I don't send EOF the client throws a timeout error.
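Building on that snippet, here is a slightly fuller sketch of what the session class in the example could look like. This is a sketch under assumptions, not the official example: the class name ExampleSession comes from the downloaded example, protocol is a twisted.conch.ssh.session.SSHSessionProcessProtocol, and the exit-status request plus sendClose are my guesses about what a client such as phpseclib expects before it reports success.

import struct

class ExampleSession:
    # ... getPty, openShell, eofReceived, closed, etc. as in the example ...

    def execCommand(self, protocol, cmd):
        # Send the command's "output" back over the channel (bytes on Python 3).
        protocol.write(b'Some text to return\n')
        conn = protocol.session.conn
        # Assumption: report a zero exit status so the client sees success ...
        conn.sendRequest(protocol.session, b'exit-status', struct.pack('>L', 0))
        # ... then signal EOF and close the channel so the client does not
        # sit there until it times out.
        conn.sendEOF(protocol.session)
        conn.sendClose(protocol.session)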

Streaming MutationGroups into Spanner

I'm trying to stream MutationGroups into Spanner with SpannerIO.
The goal is to write new MutationGroups every 10 seconds, as we will use Spanner to query near-real-time KPIs.
When I don't use any windows, I get the following error:
Exception in thread "main" java.lang.IllegalStateException: GroupByKey cannot be applied to non-bounded PCollection in the GlobalWindow without a trigger. Use a Window.into or Window.triggering transform prior to GroupByKey.
at org.apache.beam.sdk.transforms.GroupByKey.applicableTo(GroupByKey.java:173)
at org.apache.beam.sdk.transforms.GroupByKey.expand(GroupByKey.java:204)
at org.apache.beam.sdk.transforms.GroupByKey.expand(GroupByKey.java:120)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:472)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:286)
at org.apache.beam.sdk.transforms.Combine$PerKey.expand(Combine.java:1585)
at org.apache.beam.sdk.transforms.Combine$PerKey.expand(Combine.java:1470)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:491)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:299)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteGrouped.expand(SpannerIO.java:868)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteGrouped.expand(SpannerIO.java:823)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:472)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:286)
at quantum.base.transform.entity.spanner.SpannerProtoWrite.expand(SpannerProtoWrite.java:52)
at quantum.base.transform.entity.spanner.SpannerProtoWrite.expand(SpannerProtoWrite.java:20)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:491)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:299)
at quantum.entitybuilder.pipeline.EntityBuilderPipeline$Write$SpannerWrite.expand(EntityBuilderPipeline.java:388)
at quantum.entitybuilder.pipeline.EntityBuilderPipeline$Write$SpannerWrite.expand(EntityBuilderPipeline.java:372)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:491)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:299)
at quantum.entitybuilder.pipeline.EntityBuilderPipeline.main(EntityBuilderPipeline.java:122)
:entityBuilder FAILED
Because of the error above I assume the input collection needs to be windowed and triggered, as SpannerIO uses a GroupByKey (this is also what I need for my use case):
...
.apply("1-minute windows", Window.<MutationGroup>into(FixedWindows.of(Duration.standardMinutes(1)))
.triggering(Repeatedly.forever(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(10))
).orFinally(AfterWatermark.pastEndOfWindow()))
.discardingFiredPanes()
.withAllowedLateness(Duration.ZERO))
.apply(SpannerIO.write()
.withProjectId(entityConfig.getSpannerProject())
.withInstanceId(entityConfig.getSpannerInstance())
.withDatabaseId(entityConfig.getSpannerDb())
.grouped());
When I do this, I get the following exceptions during runtime:
java.lang.IllegalArgumentException: Attempted to get side input window for GlobalWindow from non-global WindowFn
org.apache.beam.sdk.transforms.windowing.PartitioningWindowFn$1.getSideInputWindow(PartitioningWindowFn.java:49)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$StepContext.issueSideInputFetch(StreamingModeExecutionContext.java:631)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$UserStepContext.issueSideInputFetch(StreamingModeExecutionContext.java:683)
com.google.cloud.dataflow.worker.StreamingSideInputFetcher.storeIfBlocked(StreamingSideInputFetcher.java:182)
com.google.cloud.dataflow.worker.StreamingSideInputDoFnRunner.processElement(StreamingSideInputDoFnRunner.java:71)
com.google.cloud.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:323)
com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
com.google.cloud.dataflow.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:271)
org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:219)
org.apache.beam.runners.core.SimpleDoFnRunner.access$700(SimpleDoFnRunner.java:69)
org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:517)
org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:505)
org.apache.beam.sdk.values.ValueWithRecordId$StripIdsDoFn.processElement(ValueWithRecordId.java:145)
After investigating further, it appears to be due to the .apply(Wait.on(input)) in SpannerIO: it has a global side input, which does not seem to work with my fixed windows, as the docs of Wait.java state:
If signal is globally windowed, main input must also be. This typically would be useful only in a batch pipeline, because the global window of an infinite PCollection never closes, so the wait signal will never be ready.
As a temporary workaround I tried the following:
1. Add a GlobalWindow with triggers instead of fixed windows:
.apply("globalwindow", Window.<MutationGroup>into(new GlobalWindows())
.triggering(Repeatedly.forever(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(10))
).orFinally(AfterWatermark.pastEndOfWindow()))
.discardingFiredPanes()
.withAllowedLateness(Duration.ZERO))
This results in writes to Spanner only when I drain my pipeline. I have the impression that the Wait.on() signal is only triggered when the global window closes, and doesn't work with triggers.
2. Disable the .apply(Wait.on(input)) in SpannerIO:
This results in the pipeline getting stuck on the view creation, which is described in this SO post: SpannerIO Dataflow 2.3.0 stuck in CreateDataflowView.
When I check the worker logs for clues, I do get the following warnings:
logger: "org.apache.beam.sdk.coders.SerializableCoder"
message: "Can't verify serialized elements of type SpannerSchema have well defined equals method. This may produce incorrect results on some PipelineRunner
logger: "org.apache.beam.sdk.coders.SerializableCoder"
message: "Can't verify serialized elements of type BoundedSource have well defined equals method. This may produce incorrect results on some PipelineRunner"
Note that everything works with the DirectRunner and that I'm trying to use the DataflowRunner.
Does anyone have any other suggestions for things I can try to get this running? I can hardly imagine that I'm the only one trying to stream MutationGroups into Spanner.
Thanks in advance!
Currently, the SpannerIO connector is not supported with Beam streaming. Please follow this pull request, which adds streaming support for the Spanner IO connector.

Poloniex & websockets

===SIMPLE & SHORT===
Does anybody have a working application that talks to Poloniex through WAMP these days (January 2018)?
===MORE SPECIFIC===
I used several sources of information to make it work with the combination autobahn-cpp & C++ on Windows 10.
I was able to connect to wss://api.poloniex.com, realm1. I was also able to subscribe and get a subscription ID. But I never got any events, even though everything was established.
===RESEARCH===
While researching on the web I found a lot of contradictory information:
1. Claims that wss://api2.poloniex.com should be used and that channel names are actually numbers - How to connect to poloniex.com websocket api using a python library
2. This answer gave me the base code, but I am not getting anything more than just a connection, even when following this answer - wss://api.poloniex.com is the correct address - Connecting to Poloniex Push-API
3. I saw a post (sorry, lost the link) with comments saying that the websockets implementation on Poloniex is basically broken. They were posted 6 months ago.
===SPECS===
1. Windows 10
2. Autobahn-Cpp
3. wss://api.poloniex.com:443 ; realm1
4. Different subscriptions: ticker, BTC_ETH, 148, 1002, etc.
5. Source code I got from here
===WILL HELP AS WELL===
Is there any way to get all valid subscriptions or, perhaps, those that have more than 0 subscribers? I mean, does WAMP have a way to do that?
Are there any known issues with the Autobahn-Cpp and Poloniex combination?
Is there any simpler way to test WAMP elsewhere to make sure Autobahn isn't the problem? For example, any other well documented & supported online projects that accept WAMP websocket communication?
I can receive the correct tick/order book data from wss://api2.poloniex.com using python3, but sometimes channel 1002 may stop sending new tick info.
wss://api.poloniex.com:443 ; realm1
This may be the issue, as I've been using api2. Here is the code that works and has been working for the past 2 quarters non-stop. It's in Python, but it should be easy enough to port to C++.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import websocket
import json

def on_error(ws, error):
    print(error)

def on_close(ws):
    print("### closed ###")

def on_open(ws):
    # Subscribe to the BTC_ETH channel as soon as the connection is open.
    print("ONOPEN")
    ws.send(json.dumps({'command': 'subscribe', 'channel': 'BTC_ETH'}))

def on_message(ws, message):
    # Each message arrives as a JSON string; decode and print it.
    message = json.loads(message)
    print(message)

websocket.enableTrace(True)
ws = websocket.WebSocketApp("wss://api2.poloniex.com/",
                            on_message=on_message,
                            on_error=on_error,
                            on_close=on_close)
ws.on_open = on_open
ws.run_forever()
The code is pretty much self-explanatory (you can check all channels/pairs on the Poloniex API website); just save it and run it in a terminal:
python3 fileName.py
It should provide you with the raw BTC_ETH stream of orders and trades on console output.
Playing with the message/subscriptions, you can then do with it as you please.
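For example, to try one of the numeric channels mentioned above, only the on_open subscription changes. A minimal sketch, assuming (as the other answer suggests) that 1002 is the ticker channel on api2; the rest of the script stays the same:

def on_open(ws):
    # Subscribe to a numeric channel (assumed: 1002 = ticker) instead of a
    # currency-pair channel like 'BTC_ETH'.
    print("ONOPEN")
    ws.send(json.dumps({'command': 'subscribe', 'channel': 1002}))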
It seems that the websockets at Poloniex are unstable. Therefore I can stop my attempts to make Autobahn-Cpp work with it, at least for now, and move on.

Some questions about protobuf

We are building an RTB (real-time bidding) platform, using nginx as the HTTP server; the bidder is written in Lua, with Google Protocol Buffers for serializing data and zlog for logging. After test runs, we got three error messages in the nginx error log:
"[libprotobuf Error, google/protobuf/wire_format.cc:1053]
String field contains invalid UTF-8 data when parsing a protocol buffer.
Use the 'bytes' type if you intend to send raw bytes."
So we went back to check the protocol buffer source code and found that this check is controlled by a macro (-DNDEBUG: it means NOT debug mode, according to the comment?). And -DNDEBUG disables GOOGLE_PROTOBUF_UTF8_VALIDATION (I think?). So we enabled this macro (-DNDEBUG) in the configuration. However, after testing, we still got the same error message. Then we changed all the "string" types to "bytes" in XXX.proto. After testing, the same error message still showed up.
The second error: worker process 53574 exited on signal 11 (core dumped), then the process died.
The third: lua entry thread aborted: runtime error: /home/bilin/rtb/src/lua/shared/log.lua:34: 'short' is not callable
Hope somebody can help us solve these problems.
Thank you.

How to debug "could not receive data from client: Connection reset by peer"

I'm running a django-celery application on Ubuntu-12.04.
When I run a celery task from my web interface, I get the following error, taken from the postgresql-9.3 logfile (maximum log level):
2013-11-12 13:57:01 GMT tss_usr 8113 LOG: could not receive data from client: Connection reset by peer
tss_usr is the postgresql user of the django application database, and (in this example) 8113 is, I guess, the pid of the process that killed the connection.
Have you got any idea why this happens, or at least how to debug this issue?
To make things work again I need to restart postgresql, which is extremely inconvenient.
I know this is an older post, but I just found it because I had the same error today in my postgres logs. I narrowed it down to a PDO select statement. I'm using Zend Framework 1.10.3 on Ubuntu Precise.
The following PDO statement generated an error if $opinion was a long text string. The column opinion is of type text in my postgres table. The query succeeds if $opinion is under a certain number of characters: 1000 characters works fine; 2000 characters fails with "could not receive data from client: Connection reset by peer".
$select = $this->db->select()
    ->from('datauserstopics')
    ->where("opinion = ?", trim($opinion))
    ->where("datatopicsid = ?", trim($tid))
    ->where("datausersid = ?", $datausersid);
$stmt = $this->db->query($select);
I circumvented the problem by using:
->where("substr(opinion,1,100) = ?",trim(substr($opinion,1,100)))
This is not a perfect solution, but for my purposes, the select statement using substr() suffices.
Note that I have no problem inserting long strings into the same table/column. The disconnect problem only appears for me on the PDO select with relatively long text strings.
I'm getting it in 2017 with 9.4; I have no text fields and don't know what PDO is. My select statement is about 50 bytes long, and I'm trying to fetch an int4 and a double precision. I suspect the error message can mean multiple things.
I've since found https://dba.stackexchange.com/questions/142350/postgres-could-not-receive-data-from-client-connection-reset-by-peer, which indicates it could be a problem with the client configuration. My client is libpq, and PQconnectdb() is giving me a CONNECTION_OK return. It works at least partly.
For me, restarting the hypervisor where both Postgres and the application using it were running helped. I've seen stack traces in dmesg before, though.