How to log SQL values sent to my DB using EclipseLink? - jpa-2.0

I use EclipseLink as my JPA 2 persistence layer, and I would like to see the values sent to the DB in the logs.
I already see the SQL queries (using <property name="eclipselink.logging.level" value="ALL" /> in my persistence.xml), but, for example in an SQL INSERT, I do not see the values, only the ? placeholders.
So, how can I see what values are sent?

You'll need to use a JDBC proxy driver like p6spy or log4jdbc to get the SQL statements logged with their values instead of the placeholders. This approach works well if EclipseLink manages a connection pool whose URL comes from persistence.xml (where you can specify a JDBC URL recognized by the proxy driver instead of the actual one), but it may not be as useful in a Java EE environment (at least for log4jdbc), unless you can get the JNDI data sources to use the proxy driver.
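For example, with p6spy the persistence unit only needs to point at the proxy driver and prefix the JDBC URL. A minimal sketch for persistence.xml, assuming a MySQL backend (the URL is a placeholder; the log output itself is configured in p6spy's spy.properties):

<property name="javax.persistence.jdbc.driver" value="com.p6spy.engine.spy.P6SpyDriver" />
<property name="javax.persistence.jdbc.url" value="jdbc:p6spy:mysql://localhost:3306/mydb" />

p6spy then intercepts every statement and logs it with the bound values filled in, while EclipseLink keeps talking to what it believes is a normal JDBC driver.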

Related

Unable to connect to AWS Athena Workgroup using JDBC connection?

I am using JDBC to connect to Athena for a specific workgroup, but by default it redirects to the primary workgroup.
Below is the code snippet:
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

Properties info = new Properties();
info.put("user", "access-key");
info.put("password", "secret-access-key");
info.put("WorkGroup", "test");
info.put("schema", "testschema");
info.put("s3_staging_dir", "s3://bucket/athena/temp");
info.put("aws_credentials_provider_class", "com.amazonaws.auth.DefaultAWSCredentialsProviderChain");
Class.forName("com.simba.athena.jdbc.Driver");
Connection connection = DriverManager.getConnection("jdbc:awsathena://athena.<region>.amazonaws.com:443/", info);
As you can see, I am using "WorkGroup" as the key in the properties. I also tried "workgroup", "work-group", and "Workgroup". It never redirects to the specified workgroup; it always goes to the default one, i.e. the primary workgroup.
Kindly help. Thanks.
If you look at the release notes of the Athena JDBC driver, workgroup support arrived in v2.0.7.
If your jar is below this version, it will not work. Try upgrading the library to 2.0.7 or above.
You also need to enable the "Override client-side settings" option in the workgroup, then rerun the query via JDBC.
Check this doc for more information.
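With a 2.0.7+ driver jar on the classpath, the connection can then be built as below. A sketch, assuming the Simba 2.0.x driver, where per its documentation the connection property is spelled "Workgroup"; the region, credentials, and bucket are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

Properties info = new Properties();
info.put("user", "access-key");
info.put("password", "secret-access-key");
info.put("Workgroup", "test"); // supported from driver v2.0.7 onwards
info.put("schema", "testschema");
info.put("s3_staging_dir", "s3://bucket/athena/temp");
Class.forName("com.simba.athena.jdbc.Driver");
Connection connection = DriverManager.getConnection(
        "jdbc:awsathena://athena.us-east-1.amazonaws.com:443/", info);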

WSO2 ESB: poll files, read data, transform data, publish record message

My technical requirement is the following:
Poll CSV files
Read the data line by line
Transform the data to the desired format
Convert to JSON/XML
Publish the data through REST/JMS
Deploy on WSO2 EI 6.1.1
How is this possible with WSO2 Developer Studio (DS) Tooling 3.8.0?
I know inbound endpoints, mediators, sequences, proxy services, etc. can be used, but I can't find a single document/article that helps with doing this.
Where do I start? How do I execute these steps in sequence? The artifacts are created independently, but I don't know how to wire them together into an integration flow.
I'd appreciate it if someone could shed some light.
Solution:
Create an empty ESB Solution Project.
Create a proxy service.
Use a Smooks config for the transformation of flat CSV data to XML format (see the sketch after this list).
Create an endpoint for producing JMS messages to a JMS queue on ActiveMQ.
Use the Data Mapper mediator if further transformation is required.
Use the Log mediator for logging.
Use the Property mediator for setting endpoint-related properties.
Configure axis2.xml / axis2Client.xml to enable the transport settings on EI 6.1.1.
Export to a CAR file and deploy via the EI 6.1.1 Management Console.
Happy testing!
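As a concrete starting point, the CSV polling and the Smooks transformation can be wired into a single proxy service over the VFS transport. A minimal sketch, assuming the VFS transport is enabled in axis2.xml; the proxy name, file path, Smooks registry key, and JMS endpoint key below are placeholder assumptions:

<!-- Polls a directory for CSV files, converts each one to XML via Smooks,
     and publishes the result to a JMS endpoint. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="CsvFileProxy" transports="vfs">
  <parameter name="transport.vfs.FileURI">file:///data/csv-in</parameter>
  <parameter name="transport.vfs.ContentType">text/plain</parameter>
  <parameter name="transport.vfs.FileNamePattern">.*\.csv</parameter>
  <parameter name="transport.PollInterval">15</parameter>
  <target>
    <inSequence>
      <!-- smooks-csv-config is a registry resource holding the CSV-to-XML mapping -->
      <smooks config-key="smooks-csv-config">
        <input type="text"/>
        <output type="xml"/>
      </smooks>
      <log level="full"/>
      <send>
        <endpoint key="JmsProducerEndpoint"/>
      </send>
    </inSequence>
  </target>
</proxy>

The VFS transport handles the polling, the Smooks mediator does the flat-file-to-XML conversion, and the Send mediator hands the message to the JMS endpoint; a Data Mapper step can slot in before the send if further reshaping is needed.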

How to call on Web Service API and route data into Azure SQL Database?

Having configured an Azure SQL Database, I would like to feed some tables with data from an HTTP REST GET call.
I have tried Microsoft Flow (whose HTTP Request action is utterly botched), and I am now exploring Azure Data Factory, to no avail.
The only way I can currently think of is provisioning an Azure VM and installing Postman with Newman. But then I would still need to create a Web Service interface to the Azure SQL Database.
Does Microsoft offer no HTTP call service to hook up to an Azure SQL Database?
I had the same situation a couple of weeks ago, and I ended up building the API call management using Azure Functions. It's no problem to use the Azure SDKs to upload the result to e.g. Blob storage or Data Lake, and you can add whatever assembly you need to perform the HTTP operation.
From there you can easily pull it into an Azure SQL DB with Data Factory.
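As a sketch of that approach (shown in Java to keep one language across these examples; the original could equally be C#), a timer-triggered Function calls the API and lands the payload in Blob storage for Data Factory to pick up. The URL, container name, blob name, and schedule are placeholder assumptions, and the classic azure-storage SDK is assumed for the upload:

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.TimerTrigger;
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlockBlob;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class ApiToBlobFunction {
    @FunctionName("ApiToBlob")
    public void run(
            @TimerTrigger(name = "timer", schedule = "0 0 * * * *") String timerInfo,
            final ExecutionContext context) throws Exception {
        // Call the REST API (hypothetical URL).
        HttpURLConnection conn =
                (HttpURLConnection) new URL("https://api.example.com/data").openConnection();
        conn.setRequestMethod("GET");
        String body;
        try (InputStream in = conn.getInputStream();
             Scanner s = new Scanner(in, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
            body = s.hasNext() ? s.next() : "";
        }
        // Land the result in Blob storage; a Data Factory copy activity
        // can then move it into the Azure SQL Database.
        CloudStorageAccount account =
                CloudStorageAccount.parse(System.getenv("AzureWebJobsStorage"));
        CloudBlockBlob blob = account.createCloudBlobClient()
                .getContainerReference("api-staging")
                .getBlockBlobReference("payload.json");
        blob.uploadText(body);
        context.getLogger().info("Uploaded " + body.length() + " chars to Blob storage.");
    }
}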
I would suggest you write yourself an Azure Data Factory custom activity to achieve this. I've done this for a recent project.
Add a C# class library to your ADF solution and create a class that inherits from IDotNetActivity. Then, in its Execute method (which returns an IDictionary), make the HTTP web request to get the data. Land the downloaded file in blob storage first, then have a downstream activity load the data into SQL DB.
public class GetLogEntries : IDotNetActivity
{
    public IDictionary<string, string> Execute(
        IEnumerable<LinkedService> linkedServices,
        IEnumerable<Dataset> datasets,
        Activity activity,
        IActivityLogger logger)
    {
        // etc...
        HttpWebResponse myHttpWebResponse = (HttpWebResponse)httpWebRequest.GetResponse();
You can use the ADF linked services to authenticate against the storage account and define which container and file name you want as the output, etc.
This is an example I used for Data Lake, but there is an almost identical class for blob storage.
Dataset outputDataset = datasets.Single(dataset => dataset.Name == activity.Outputs.Single().Name);

AzureDataLakeStoreLinkedService outputLinkedService;
outputLinkedService = linkedServices.First(
        linkedService =>
            linkedService.Name ==
            outputDataset.Properties.LinkedServiceName).Properties.TypeProperties
    as AzureDataLakeStoreLinkedService;
Don't bother with an input for the activity.
You will need an Azure Batch Service as well to handle the compute for the compiled classes. Check out my blog post on doing this.
https://www.purplefrogsystems.com/paul/2016/11/creating-azure-data-factory-custom-activities/
Hope this helps.

Is there any way to pass Properties to an Oracle datasource in ColdFusion?

So I have an issue trying to get N-datatypes (NVarchar, NClob) to work with ColdFusion ORM using a database which has a default character set of US7ASCII and a national (NLS) character set of AL16UTF16.
Essentially, this is solved by using setFormOfUse() on the Connection, by setting a JVM system property (-Doracle.jdbc.defaultNChar=true), or by passing that property along when creating the Connection.
ColdFusion has a spot for adding connection string attributes, which would work with MySQL, MSSQL, etc.; however, the Oracle JDBC driver ignores that string and only accepts a Java Properties object (see: javadoc).
Is there any way to pass a Properties object to ColdFusion's datasources?
(Unfortunately, setting the global System Property isn't an option, and I'm not sure how to make ColdFusion ORM/Hibernate account for setFormOfUse(), assuming that's even possible.)
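For reference, the plain-JDBC version of the last approach (passing the property when creating the Connection) is sketched below; the URL and credentials are placeholders, and oracle.jdbc.defaultNChar is the property documented for the Oracle thin driver. The open question is how to get ColdFusion to do the equivalent:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

Properties props = new Properties();
props.setProperty("user", "scott");
props.setProperty("password", "tiger");
// Bind String parameters as NCHAR/NVARCHAR2 instead of the default character set.
props.setProperty("oracle.jdbc.defaultNChar", "true");
Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@//localhost:1521/ORCL", props);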

Hibernate running on a separate JVM fails to read

I am implementing a WebService with Hibernate to write/read data to/from a database (MySQL). One big issue I had: when I successfully insert data (e.g., into the USER table) via one JVM (for example a JUnit test, or directly from a DB UI suite), my WebService's Hibernate, running on a separate JVM, cannot find the new data. They all point to the same DB server. Only after I destroyed the WebService's Hibernate SessionFactory and recreated it could the WebService's Hibernate layer read the newly inserted data. In contrast, the same JUnit test or a direct query from the DB UI suite can find the inserted data.
Any assistance is appreciated.
I resolved this issue today with the following:
I changed our Hibernate config file (hibernate.cfg.xml) to set the transaction isolation level to at least 2 (READ COMMITTED), as shown in the snippet after the links below. This immediately resolved the issue. To understand more about this isolation-level setting, please refer to these:
Hibernate reading function shows old data
Transaction isolation levels relation with locks on table
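For completeness, the relevant line in hibernate.cfg.xml is the standard connection-isolation property, where 2 is the value of java.sql.Connection.TRANSACTION_READ_COMMITTED:

<property name="hibernate.connection.isolation">2</property>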
I ensured I did not use 2nd-level caching by setting the CacheMode to IGNORE on each of my Session objects:
Session session = getSessionFactory().openSession();
session.setCacheMode(CacheMode.IGNORE);
For reference only: some folks did the following in hibernate.cfg.xml to disable 2nd-level caching in their apps (but I didn't need to):
<property name="cache.provider_class">org.hibernate.cache.internal.NoCacheProvider</property>
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>