Log Jdbi interactions with database

I've reviewed a couple of questions/answers here on the subject, and I've also read the SqlLogger section in the official documentation, but I still can't find a straightforward way to log/visualize what Jdbi (version 3.x) "is doing" when it interacts with the database.
I'm aware Jdbi works with almost raw SQL, but it's always nice to be able to see what the framework/library is actually doing, for debugging purposes, etc.
I've tried pretty much every namespace under org.jdbi (within a logback.xml file), all the way down to TRACE level, but I just see something like this:
03-01-2021 19:52:26,656 |- TRACE in org.jdbi.v3.core.Jdbi:315 [reactor-http-epoll-2] - Jdbi [org.jdbi.v3.core.Jdbi@7a76fb45] obtain handle [org.jdbi.v3.core.Handle@725d5aec] in 0ms
03-01-2021 19:52:26,697 |- TRACE in org.jdbi.v3.core.Handle:187 [reactor-http-epoll-2] - Handle [org.jdbi.v3.core.Handle@725d5aec] released
Is there a way to do this these days?
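Based on the SqlLogger section of the docs, I'd expect something along these lines to be the intended hook (a minimal sketch; dataSource stands in for my actual DataSource):
import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.core.statement.Slf4JSqlLogger;
import org.jdbi.v3.core.statement.SqlLogger;
import org.jdbi.v3.core.statement.StatementContext;

Jdbi jdbi = Jdbi.create(dataSource);

// option 1: the built-in SLF4J-backed logger (logs at DEBUG)
jdbi.setSqlLogger(new Slf4JSqlLogger());

// option 2: a custom SqlLogger that prints the rendered SQL
jdbi.setSqlLogger(new SqlLogger() {
    @Override
    public void logAfterExecution(StatementContext context) {
        System.out.println(context.getRenderedSql());
    }
});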

Not a Jdbi-specific answer, but a more generic way to see the raw SQL is to use a JDBC proxy such as P6Spy or datasource-proxy.
P6Spy can intercept either by decorating the DataSource or via a stub JDBC driver (the latter requires no code changes), and prints logs in this format:
p6spy: #1617156635 | took 0ms | statement | connection 3|SELECT NOW()
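With the decorator route, you wrap the DataSource you already have (a minimal sketch; rawDataSource is a placeholder):
import com.p6spy.engine.spy.P6DataSource;
import javax.sql.DataSource;

// every statement that goes through the wrapper is logged by P6Spy
DataSource logged = new P6DataSource(rawDataSource);
Jdbi jdbi = Jdbi.create(logged);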
datasource-proxy only supports decorating the DataSource, and prints:
n.t.d.l.l.SLF4JQueryLoggingListener:
Name:, Time:0, Success:True
Type:Statement, Batch:False, QuerySize:1, BatchSize:0
Query:["SELECT NOW()"]
Params:[]
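With datasource-proxy, the decoration goes through its builder (again a sketch; the name and rawDataSource are illustrative):
import net.ttddyy.dsproxy.support.ProxyDataSourceBuilder;
import javax.sql.DataSource;

DataSource proxy = ProxyDataSourceBuilder
        .create(rawDataSource)
        .name("jdbi-ds")
        .logQueryBySlf4j()  // produces the SLF4JQueryLoggingListener output shown above
        .build();
Jdbi jdbi = Jdbi.create(proxy);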

How to configure OpenTelemetry agent for an Akka application

I am trying to export metrics and traces from my Akka app (written in Scala) using the OpenTelemetry agent, with the goal of consuming the data in OpenSearch.
Technology stack for my application:
Akka - 2.6.*
RabbitMQ (amqp client 5.12.*)
PostgreSQL (jdbc 42.2.*)
I've added the OpenTelemetry instrumentation runtime dependency to build.sbt:
val runtimeDependencies: Seq[ModuleID] = Seq(
"io.opentelemetry.instrumentation" % "opentelemetry-instrumentation-api" % otelInstrumentationVersion % "runtime"
)
...
libraryDependencies ++= compileDependencies ++ testDependencies ++ runtimeDependencies,
I am attaching the agent and pointing it at an OpenTelemetry configuration properties file via JAVA_OPTS:
export JAVA_OPTS="... \
-javaagent:lib/opentelemetry/opentelemetry-javaagent-all-v1.6.0.jar \
-Dotel.javaagent.configuration-file=lib/opentelemetry/otel.properties"
The only other related piece in my code is the properties file:
otel.service.name=my-app
otel.traces.exporter=jaeger
otel.propagators=jaeger
I do receive some traces in OpenSearch, but they are disparate and unrelated, whereas I would expect them to be linked. For example, a message is received on a RabbitMQ topic and makes its way into an actor, which eventually issues a SQL query; I would expect to see, for each execution, how much time each step took.
This is an approximate view that I get in OpenSearch:
I would love to be able to just follow the documentation, but I find OpenTelemetry's configuration guide scarce at this point.
Update:
Not sure whether this is relevant, but I get a warning from Data Prepper:
2021-09-29T16:50:50,861 [raw-pipeline-prepper-worker-5-thread-1] WARN com.amazon.dataprepper.plugins.prepper.oteltrace.OTelTraceRawPrepper - Missing trace group for SpanId: 922097e31cf96c72
OK, so I got around this by running across this issue and then reading about how to suppress specific instrumentations.
So, to reduce clutter in the tracing dashboard, one would add something like the following to the properties file (or the equivalent via environment variables, shown after the snippet):
otel.instrumentation.rabbitmq.enabled=false
otel.instrumentation.grpc.enabled=false
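Following the agent's standard property-to-environment-variable mapping (uppercase, with dots replaced by underscores), the equivalent environment variables would be:
export OTEL_INSTRUMENTATION_RABBITMQ_ENABLED=false
export OTEL_INSTRUMENTATION_GRPC_ENABLED=false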
Note that I disabled the two cluttering instrumentation libraries particular to my use case; for another application one would choose other libraries from link #2 above. This way, the spans that you declare as the application developer become roots.

DCMTK Understand the "DIMSE No valid Presentation Context ID" error

I'm currently developing a simple application for querying/retrieving data on a PACS. I use DCMTK for this purpose, and a DCM4CHEE PACS as test server.
My goal is to implement simple C-FIND queries, and a C-MOVE retrieving system (coupled with a custom SCP to actually download the data).
To do so, I've created a CustomSCU class that inherits from the DCMTK DcmSCU class.
I first implemented a C-ECHO message, that worked great.
Then, I tried to implement C-FIND requesting, but I got the error "DIMSE No valid Presentation Context ID" (more on that below) from my application, and no other log from DCM4CHEE. I then used the command-line tool findscu (from DCMTK) to see if there was some configuration issue, but that tool worked just fine. So, in order to implement my C-FIND request, I read the source of findscu (here) and adapted it in my code (meaning I'm not using DcmSCU::sendCFindRequest but the class DcmFindSCU).
But now, I'm facing the same problem with the C-MOVE request. My code is pretty straightforward:
//transfer syntaxes
OFList<OFString> ts;
ts.push_back(UID_LittleEndianExplicitTransferSyntax);
ts.push_back(UID_BigEndianExplicitTransferSyntax);
ts.push_back(UID_LittleEndianImplicitTransferSyntax);
//sop class
OFString pc = UID_MOVEPatientRootQueryRetrieveInformationModel;
addPresentationContext(pc, ts);
DcmDataset query;
query.putAndInsertOFStringArray(DCM_QueryRetrieveLevel, "PATIENT");
query.putAndInsertOFStringArray(DCM_PatientID, <ThePatientId>);
OFCondition condition = sendMOVERequest(findPresentationContextID(pc, ""), getAETitle(), &query, nullptr);
return condition.good();
I've also tried using UID_MOVEStudyRootQueryRetrieveInformationModel instead of UID_MOVEPatientRootQueryRetrieveInformationModel, with the same result: my application shows the error
DIMSE No valid Presentation Context ID
As I understand it, a presentation context is the combination of one SOP class (abstract syntax) and one or more transfer syntaxes. I read that the problem could come from the PACS not accepting my presentation contexts. To make sure, I used the movescu tool (from DCMTK). It worked, and I saw this in the DCM4CHEE server logs:
received AAssociateRQ
pc-1 : as=<numbers>/Patient Root Q/R InfoModel = FIND
ts=<numbers>/Explicit VR Little Endian
ts=<numbers>/Explicit VR Big Endian
ts=<numbers>/Implicit VR Little Endian
Does that mean that the movescu tool proposes a FIND presentation context before attempting the actual move?
Therefore, I changed my presentation context creation to:
OFList<OFString> ts;
ts.push_back(UID_LittleEndianExplicitTransferSyntax);
ts.push_back(UID_BigEndianExplicitTransferSyntax);
ts.push_back(UID_LittleEndianImplicitTransferSyntax);
OFString pc1 = UID_FINDPatientRootQueryRetrieveInformationModel;
OFString pc = UID_MOVEPatientRootQueryRetrieveInformationModel;
addPresentationContext(pc1, ts);
addPresentationContext(pc, ts);
(also tried study root)
But this didn't do the trick.
The problem seems to lie on the client side, as findPresentationContextID(pc, "") always returns 0, no matter what.
I don't feel it's feasible to adapt the code of the movescu tool, as it appears to be very complex and not adequate for simple retrieve operations.
I don't know what to try. I hope someone can help me understand what's going on. That's the last part of my application, as the storage SCP already works.
It looks like you are not negotiating the association with the PACS.
After adding the presentation contexts and before sending any command, the SCU must connect to the PACS and negotiate the presentation contexts by calling DcmSCU::initNetwork and then DcmSCU::negotiateAssociation.
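Inside a DcmSCU subclass like the CustomSCU above, that sequence looks roughly like this (a minimal sketch; the host, port and AE titles are placeholders):
// identify both sides of the association
setAETitle("MY_SCU");
setPeerAETitle("DCM4CHEE");
setPeerHostName("pacs.example.org");
setPeerPort(11112);

// propose the presentation contexts as before
addPresentationContext(pc, ts);

// open the network layer, then actually negotiate the association
OFCondition cond = initNetwork();
if (cond.bad()) return false;
cond = negotiateAssociation();
if (cond.bad()) return false;

// only after negotiation will this return a non-zero ID
T_ASC_PresentationContextID presID = findPresentationContextID(pc, "");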

C++ Poco ODBC Transactions - AutoCommit mode

I am currently attempting to use transactions in my C++ app, but I have a problem with ODBC's auto-commit mode.
I am using the POCO libraries to create a connection to a PostgreSQL database on the same machine. Currently, I can send data to this database as single statements, but I cannot get my head around how to use Poco's transaction facilities to send this data more quickly.
As I have several thousand records to insert, continuing to use single insert statements is extremely slow and impractical, so I am trying to use Poco's transactions to speed this up (by a fair bit).
The error I am encountering is theoretically a simple one; Poco is throwing the following:
'Invalid access: Session is in auto commit mode.'
I understand that, as a result, I should somehow set "auto commit" to false, since auto-commit only lets me commit data to the database line by line rather than as a single transaction.
The problem is how to set this.
Currently, I have a session created from Session.h that looks a lot like this:
session = new Poco::Data::Session(
"ODBC",
connection_data.str()
);
where connection_data is a simple stringstream with the login information, password, database, server and "Driver={PostgreSQL ANSI};" to tell ODBC to use PostgreSQL's driver.
I have tried just setting an "autocommit" property to false through the session's setFeature or setProperty methods; this, of course, was to no avail (it was more of a last-ditch attempt at that point):
session->setFeature("AUTOCOMMIT", false);
Looking around, I saw a possible alternative: creating an ODBC SessionImpl directly from ODBC/session/SessionImpl.h instead of using the generic method above, and then creating a new session object from it.
The benefit is that ODBC's SessionImpl has references to auto-commit mode in its header, which suggests it can handle this:
void autoCommit(const std::string&, bool val);
/// Sets autocommit property for the session.
However, having not used SessionImpl before, I cannot guarantee that this will work, or that I can get it to work with the limited documentation available.
I am using C++03 (not C++11), with Visual Studio 2015
Poco 1.7.5
Boost (where needed)
Would anyone know the correct way of setting this feature (above), or an alternative method of achieving this?
edit: Looking at the source of poco, at:
https://github.com/pocoproject/poco/blob/develop/Data/ODBC/src/SessionImpl.cpp#L153
The property seems to be named autoCommit, and looking at
https://github.com/pocoproject/poco/blob/develop/Data/include/Poco/Data/AbstractSessionImpl.h#L120
the case of the property names seems to matter. So, does it help if you use session->setFeature("autoCommit", false);?
Can't you just call session->begin(); and session->commit(); on the corresponding Session object?
What is returned by session->canTransact()?
According to the docs, begin() will start a new transaction; the docs do not mention any property that needs to be set before or after.
See: https://pocoproject.org/docs/Poco.Data.Session.html
I also faced a similar issue.
First of all, before begin() you need:
m_ses.setFeature("autoCommit", false);
m_ses.begin();
The second issue is that the "autoCommit" feature stays false for all subsequent sessions, so for the next session don't forget to call
session.setFeature("autoCommit", true);
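Putting the two answers together, a bulk insert would then look something like this (a minimal sketch kept C++03-friendly; the table, column and container names are made up, and connection_data is the stringstream from the question):
#include <cstddef>
#include <string>
#include <vector>
#include "Poco/Data/Session.h"
#include "Poco/Data/Keywords.h"

using namespace Poco::Data::Keywords;

Poco::Data::Session session("ODBC", connection_data.str());

// disable driver-level auto commit so begin()/commit() delimit one transaction
session.setFeature("autoCommit", false);

std::vector<std::string> records;  // ... filled with the several thousand values
session.begin();
for (std::size_t i = 0; i < records.size(); ++i)
{
    // "?" is Poco's placeholder; each insert joins the open transaction
    session << "INSERT INTO my_table (name) VALUES (?)", use(records[i]), now;
}
session.commit();  // everything above reaches the server as a single transaction

// the feature sticks to this session afterwards, so restore it if needed
session.setFeature("autoCommit", true);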

Can Amazon Simple Workflow (SWF) be made to work with jRuby?

For uninteresting reasons, I have to use jRuby on a particular project where we also want to use Amazon Simple Workflow (SWF). I don't have a choice in the jRuby department, so please don't say "use MRI".
The first problem I ran into is that jRuby doesn't support forking, and SWF activity workers love to fork. After hacking through the SWF Ruby libraries, I figured out how to attach a logger and how to prevent forking, which was tremendously helpful:
AWS::Flow::ActivityWorker.new(
swf.client, domain,"my_tasklist", MyActivities
) do |options|
options.logger= Logger.new("logs/swf_logger.log")
options.use_forking = false
end
This prevented forking, but now I'm hitting more exceptions deep in the SWF source code having to do with Fibers and the context not existing:
Error in the poller, exception:
AWS::Flow::Core::NoContextException: AWS::Flow::Core::NoContextException stacktrace:
"aws-flow-2.4.0/lib/aws/flow/implementation.rb:38:in 'task'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:292:in 'respond_activity_task_failed'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:204:in 'respond_activity_task_failed_with_retry'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:335:in 'process_single_task'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:388:in 'poll_and_process_single_task'",
"aws-flow-2.4.0/lib/aws/decider/worker.rb:447:in 'run_once'",
"aws-flow-2.4.0/lib/aws/decider/worker.rb:419:in 'start'",
"org/jruby/RubyKernel.java:1501:in `loop'",
"aws-flow-2.4.0/lib/aws/decider/worker.rb:417:in 'start'",
"/Users/trcull/dev/etl/flow/etl_runner.rb:28:in 'start_workers'"
This is the SWF code at that line:
# @param [Future] future
#   Unused; defaults to nil.
#
# @param block
#   The block of code to be executed when the task is run.
#
# @raise [NoContextException]
#   If the current fiber does not respond to `Fiber.__context__`.
#
# @return [Future]
#   The task's result, which is a {Future}.
#
def task(future = nil, &block)
fiber = ::Fiber.current
raise NoContextException unless fiber.respond_to? :__context__
context = fiber.__context__
t = Task.new(nil, &block)
task_context = TaskContext.new(:parent => context.get_closest_containing_scope, :task => t)
context << t
t.result
end
I fear this is another flavor of the same forking problem and also fear that I'm facing a long road of slogging through SWF source code and working around problems until I finally hit a wall I can't work around.
So, my question is, has anyone actually gotten jRuby and SWF to work together? If so, is there a list of steps and workarounds somewhere I can be pointed to? Googling for "SWF and jRuby" hasn't turned up anything so far and I'm already 1 1/2 days into this task.
I think the issue might be that aws-flow-ruby doesn't support Ruby 2.0. I found this PDF dated Jan 22, 2015.
1.2.1 Tested Ruby Runtimes
The AWS Flow Framework for Ruby has been tested with the official Ruby 1.9 runtime, also known as YARV. Other versions of the Ruby runtime may work, but are unsupported.
I have a partial answer to my own question. The answer to "Can SWF be made to work on jRuby" is "Yes...ish."
I was, indeed, able to get a workflow working end-to-end (and even make calls to a database via JDBC, the original reason I had to do this). So, that's the "yes" part of the answer. Yes, SWF can be made to work on jRuby.
Here's the "ish" part of the answer.
The stack trace I posted above is the result of SWF trying to raise an ActivityTaskFailedException due to a problem in some of my activity code. That part is my fault. What's not my fault is that the superclass of ActivityTaskFailedException has this code in it:
def initialize(reason = "Something went wrong in Flow",
               details = "But this indicates that it got corrupted getting out")
  super(reason)
  @reason = reason
  @details = details
  details = details.message if details.is_a? Exception
  self.set_backtrace(details)
end
When your activity throws an exception, the "details" variable you see above is filled with a String. MRI is perfectly happy to take a String as an argument to set_backtrace(), but jRuby is not, and jRuby throws an exception saying that "details" must be an Array of Strings. This exception blows through all the nice error catching logic of the SWF library and into this code that's trying to do incompatible things with the Fiber library. That code then throws a follow-on exception and kills the activity worker thread entirely.
So, you can run SWF on jRuby as long as your activity and workflow code never, ever throws exceptions because otherwise those exceptions will kill your worker threads (which is not the intended behavior of SWF workers). What they are designed to do instead is communicate the exception back to SWF in a nice, trackable, recoverable fashion. But, the SWF code that does the communicating back to SWF has, itself, code that's incompatible with jRuby.
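The incompatibility is easy to see in isolation (a tiny repro matching the behavior described above for the JRuby builds of that era):
e = RuntimeError.new("boom")
e.set_backtrace(["line 1"]) # accepted by both MRI and JRuby
e.set_backtrace("line 1")   # accepted by MRI; raises a TypeError on JRuby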
To get past this problem, I monkey-patched AWS::Flow::FlowException like so:
def initialize(reason = "Something went wrong in Flow",
               details = "But this indicates that it got corrupted getting out")
  super(reason)
  @reason = reason
  @details = details
  details = details.message if details.is_a? Exception
  details = [details] if details.is_a? String
  self.set_backtrace(details)
end
Hope that helps someone in the same situation as me.
I'm using JFlow; it lets you start SWF flow activity workers with JRuby.

How to enable DEBUG level logging with Jetty embedded?

I'm trying to set the logging level to DEBUG in an embedded Jetty instance.
The documentation at http://docs.codehaus.org/display/JETTY/Debugging says to
call SystemProperty.set("DEBUG", "true") before calling new org.mortbay.jetty.Server().
I'm not sure what the SystemProperty class is; it doesn't seem to be documented anywhere. I tried System.setProperty(), but that didn't do the trick.
My question was answered on the Jetty mailing list by Joakim Erdfelt:
You are looking at the old Jetty 6.x docs at docs.codehaus.org.
DEBUG logging is just a logging level determined by the logging
implementation you choose to use.
If you use slf4j, then use slf4j's docs for configuring logging level. http://slf4j.org/manual.html
If you use java.util.logging, use the JVM docs. http://docs.oracle.com/javase/6/docs/technotes/guides/logging/overview.html
If you use the built-in StdErrLog, then there is a pattern to follow.
-D{classref}.LEVEL={level}
where {classref} is the class reference you want to set the level on (and all sub-class refs), and {level} is one of the values ALL, DEBUG, INFO, WARN.
Example:
-Dorg.eclipse.jetty.LEVEL=INFO - this will enable INFO level logging for all jetty packages / classes.
-Dorg.eclipse.jetty.io.LEVEL=DEBUG - this will enable DEBUG level logging for IO classes only
-Dorg.eclipse.jetty.servlet.LEVEL=ALL - this will enable ALL logging (trace events, internally ignored exceptions, etc.) for servlet packages.
-Dorg.eclipse.jetty.util.thread.QueuedThreadPool.LEVEL=ALL - this will enable level ALL+ on the specific class only.
Add this
-Dorg.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.StdErrLog
-Dorg.eclipse.jetty.LEVEL=DEBUG
In case you just want to quickly get log messages to stderr, add something like this to the java command line:
-Dorg.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.StdErrLog -D{classref}.LEVEL=DEBUG
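Since the question is about embedded Jetty, the same switches can also be set programmatically, as long as this happens before the Server is created (a sketch using the properties from the answers above; the port is arbitrary):
import org.eclipse.jetty.server.Server;

public class DebugJettyMain {
    public static void main(String[] args) throws Exception {
        // route Jetty logging to StdErrLog and raise the level to DEBUG
        System.setProperty("org.eclipse.jetty.util.log.class",
                "org.eclipse.jetty.util.log.StdErrLog");
        System.setProperty("org.eclipse.jetty.LEVEL", "DEBUG");

        Server server = new Server(8080);
        server.start();
        server.join();
    }
}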
You can use this snippet to enable logging:
import org.eclipse.jetty.util.log.Log;
import org.eclipse.jetty.util.log.StdErrLog;
// ...
StdErrLog logger = new StdErrLog();
logger.setDebugEnabled(true);
Log.setLog(logger);