We have a query with a common structure that we use in different circumstances (only the where clause differs).
So we have something like this:
val baseQuery: SelectQuery<Record> = dsl
.select(someFields)
.from(someTable)
.join(otherTable).on(joinClause)
.query
In other places we then extend this query as needed.
baseQuery.apply {
addConditions(conditionsA)
}.fetch()
baseQuery.apply {
addConditions(conditionsB)
}.fetch()
So far so good. But now it would be cool if we could somehow use this base in combination with a CTE. I'm not sure how to do that, though.
val someCTE: CommonTableExpression<*> = DSL.....
// dsl
// .with(someCTE)
// .selectQuery(baseQuery.apply {}) ¯\_(ツ)_/¯
baseQuery.apply {
// addWith(someCTE) ¯\_(ツ)_/¯
addSelect(someCTE.field(cteField))
addJoin(someCTE, joinClause)
addConditions(conditionsC)
}
Is there a way to do this? Or are there other suggestions for how to reuse the base query when using a CTE?
Edit: Solution
With the help of Lukas' answer I settled on this approach:
fun dynamicQuery(
context: SelectSelectStep<*> = dsl.select(),
selects: List<Field<*>> = listOf(),
joins: List<Pair<Table<*>, Condition>> = listOf(),
conditions: List<Condition> = listOf()
): SelectQuery<Record>
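A minimal sketch of what the body of that function can look like, reusing someFields, someTable, otherTable and joinClause from the base query above (the exact implementation may differ):
fun dynamicQuery(
    context: SelectSelectStep<*> = dsl.select(),
    selects: List<Field<*>> = listOf(),
    joins: List<Pair<Table<*>, Condition>> = listOf(),
    conditions: List<Condition> = listOf()
): SelectQuery<Record> {
    // Build the common base on top of whatever context was passed in
    // (a plain dsl.select() or dsl.with(cte).select()), then switch to
    // the model API via the query property for the dynamic parts.
    val query = context
        .select(someFields)
        .from(someTable)
        .join(otherTable).on(joinClause)
        .query
    query.addSelect(selects)
    joins.forEach { (table, condition) -> query.addJoin(table, condition) }
    query.addConditions(conditions)
    return query
}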
So for normal customizations I can do:
dynamicQuery(
conditions = conditionsA
).fetch()
dynamicQuery(
conditions = conditionsB
).fetch()
And it can be combined with a CTE:
val someCTE: CommonTableExpression<*> = DSL.....
dynamicQuery(
context = dsl.with(someCTE).select(),
selects = listOf(someCTE.field(cteField)),
joins = listOf(someCTE to joinClause),
conditions = conditionsC
).fetch()
TL;DR: You can't use CTEs with jOOQ 3.15's model API
Some background on the jOOQ model API vs DSL API distinction
The very old jOOQ 1.0 only had what is now called the "model API", a mutable, procedural API with setters (no getters), where you can manipulate dynamic SQL.
jOOQ 2.0 introduced the DSL API, which is what most people are using today. The fact that the DSL API mimics the SQL language helps users discover the jOOQ API much more easily. Everything is named exactly as expected. With the exception of a few quirks in the areas of CTEs and derived tables, you can write jOOQ-SQL almost just like actual SQL.
The model API was not deprecated, but wrapped by the DSL API, and kept around:
for backwards compatibility reasons
because some people seemed to like the procedural approach
You can't do anything with the model API that you couldn't do with the DSL API as well, though a more functional programming style may be helpful when doing this with the DSL API. See: https://blog.jooq.org/2017/01/16/a-functional-programming-approach-to-dynamic-sql-with-jooq
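For illustration, that functional style applied to the base query from your question could look something like this (a sketch; fetchWhere is a hypothetical helper, and someFields, someTable, otherTable, joinClause are the names from your question):
fun fetchWhere(condition: Condition) = dsl
    .select(someFields)
    .from(someTable)
    .join(otherTable).on(joinClause)
    .where(condition)
    .fetch()

// The dynamic part is passed in as a value instead of mutating a shared query:
// fetchWhere(someCondition)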
The future of jOOQ
While the model API is still getting support for some new clauses of the SELECT, INSERT, UPDATE, DELETE statements, there are some statements that are not available in model API form. These include MERGE, TRUNCATE, all DDL statements, all procedural statements. And the WITH clause.
The strategy is to eventually deprecate the model API, because the redundancy creates a lot of extra work that is better invested elsewhere. There are also subtle bugs when people call model API methods in unexpected order, i.e. an order that is not possible through the DSL API.
In a first step, pretty soon, jOOQ will invert the relationship between the APIs: https://github.com/jOOQ/jOOQ/issues/11241. The model API will be the auxiliary wrapper of the DSL API for those who rely on it for backwards compatibility. It isn't unlikely that the model API will even be extracted into a separate compatibility module, to discourage its use in new code.
In a next step, with the dependencies inverted, the DSL API can finally become consistently immutable, which is what many users expect and, to their surprise, find lacking: https://github.com/jOOQ/jOOQ/issues/9047
Eventually, the model API will be deprecated, and then dropped.
You can still use it today, and the deprecation and removal will be done over a long period of time, so there's no hurry in getting off this API (as always with jOOQ). But in the context of your question, it's good to see that jOOQ will not invest in adding too many features to it, anymore. CTE support won't be added to the model API.
Workarounds
You can, of course, work around this limitation, because internally, the model API is CTE capable:
You could use reflection to add new CTEs to the SelectQuery internal representation. I won't document how this works, here, because it's never a good idea to document these things :)
You could start creating a query using the DSL API, and then extract the internal SelectQuery representation using SelectFinalStep.getQuery(), and continue working from there.
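A minimal sketch of that second workaround, using the names from your question:
// Start with the DSL API so the WITH clause can be expressed, then
// extract the internal SelectQuery (getQuery()) for further dynamic manipulation.
val query: SelectQuery<Record> = dsl
    .with(someCTE)
    .select(someFields)
    .from(someTable)
    .join(otherTable).on(joinClause)
    .query
query.addConditions(conditionsC)
query.fetch()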
Related
I am attempting to write a Restful API to control a CMS. There are Objects (e.g., an instance of an Article) and each object has multiple Drafts. To work on an object's Drafts, I built a URL like this:
/cms/objects/{objectId}/drafts/{draftId}
draftIds are not unique unless you provide an objectId.
Hopefully I haven't already messed this up. But now I have multiple verbs I want to perform on each Draft. Here's a list of verbs:
CLONE, UNPUBLISH, PUBLISH, UNLOCK, LOCK
The way I approached this is to create an endpoint that has an operation query param. And here's an example of how you use it:
POST /cms/objects/3232/drafts/1?operation=CLONE
POST /cms/objects/3232/drafts/1?operation=LOCK
Is there a better way to do this? I'm actually upgrading an already existing API, and the way you used to do something like this is like so:
POST cms/objects/3232/duplicate-draft?source-draft-id=1
POST cms/objects/3232/lock?draft-id=1
I prefer my way, because in my eyes it's more consistent (though I am biased). Is there a better way to do this?
By the way, I am aware of this question that is very similar. The reason I'm creating this new one is that mine is laser focused on the "multiple verb" aspect and his isn't. His is more like, "how do I do multiple verbs and can you teach me lots of other stuff about Restful APIs, too?"
Also, I feel that the accepted answer on that question does not sufficiently address this (see his point 2). For example, my Drafts do not have a lock=true in their body, so I can't easily map the advice from his Change Password section. My Drafts have two fields to represent whether the Draft is locked: the user name of the locker and the user id of the locker. It would be inconvenient to have the client pass both of these in, so my intent above is to abstract that work away from the client.
I have a project.
project = Project.objects.get(id=1)
and now I want to select the data from related tables of the project. It can be done in two ways; let me know which one is better, and why.
attachments = project.attachments_set.all()
samples = project.projectsamples_set.all()
OR
attachments = Attachments.objects.filter(project=ctx['project'])
samples = ProjectSamples.objects.filter(project=ctx['project'])
I would like to know from a technical perspective.
These queries are exactly equivalent, as you can see if you examine the generated SQL. I would say that the first is preferable as it is more compact and readable, but that is very much subjective so it is up to you which you use.
(Note that if you don't actually have the project object to start with, and don't need it, then it's more efficient to query Attachments and Samples via project_id than to get the project and use the related accessors. However, that doesn't appear to be the case in your example.)
This should be a simple scenario - I have a data model with a parent/child relationship. For example's sake, let's say it's Orders and OrderDetails - 1 Order -> many OrderDetails.
I'd like to expose the model via oData using a standard DataService, but with a few limitations.
First, I should only see my Orders. That's simple enough using EntitySetRights.ReadSingle and a QueryInterceptor to make sure the order is in fact mine.
So far, so good! But how can the associated OrderDetail records be exposed in the oData feed in a way where I can read OrderDetails for a specific (read single) Order without giving access to the entire OrderDetails table?
In other words, I want to allow reading my details
myUrl.com/OrderService.svc/Orders(5)/OrderDetails <-- Good! My order is #5
but not everyone's details
myUrl.com/OrderService.svc/OrderDetails <-- Danger, Scary, Keep Out!
Thanks for the help!
This is so-called "containment" - your sample is exactly what is described here: http://data.uservoice.com/forums/72027-wcf-data-services-feature-suggestions/suggestions/1012615-support-containment-hierarchical-models-in-odata?ref=title
WCF Data Services doesn't support this out of the box yet.
It is theoretically possible to implement such a restriction with a custom LINQ provider. In your LINQ implementation you could detect the expansion (not that hard) and in that case allow it. But you could prevent queries to the entity set itself (also rather easy to recognize). For more details of how the LINQ expressions look, please refer to this series: http://blogs.msdn.com/b/vitek/archive/2010/02/25/data-services-expressions-part-1-intro.aspx
It depends on what provider you wanted to use originally. If you had a custom provider already this is not that hard. If you had a reflection based provider, it is possible to layer this on top. If you had EF, this might be rather tricky (not sure if it's even possible).
We currently have an application that retrieves data from the server through a web service and populates a DataSet. Then the users of the API manipulate it through the objects which in turn change the dataset. The changes are then serialized, compressed and sent back to the server to get updated.
However, I have begun using NHibernate within projects and I really like the disconnected nature of the POCO objects. The problem we have now is that our objects are so tied to the internal DataSet that they cannot be used in many situations, and we end up making duplicate POCO objects to pass back and forth.
Batch.GetBatch() -> calls to web server and populates an internal dataset
Batch.SaveBatch() -> send changes to web server from dataset
Is there a way to achieve a similar model that we are using which all database access occurs through a web service but use NHibernate?
Edit 1
I have a partial solution that is working and persisting through a web service, but it has two problems:
I have to serialize and send my whole collection, not just the changed items.
If I try to repopulate the collection upon return, any references I had to my objects are lost.
Here is my example solution.
Client Side
public IList<Job> GetAll()
{
return coreWebService
.GetJobs()
.BinaryDeserialize<IList<Job>>();
}
public IList<Job> Save(IList<Job> Jobs)
{
return coreWebService
.Save(Jobs.BinarySerialize())
.BinaryDeserialize<IList<Job>>();
}
Server Side
[WebMethod]
public byte[] GetJobs()
{
using (ISession session = NHibernateHelper.OpenSession())
{
return (from j in session.Linq<Job>()
select j).ToList().BinarySerialize();
}
}
[WebMethod]
public byte[] Save(byte[] JobBytes)
{
var Jobs = JobBytes.BinaryDeserialize<IList<Job>>();
using (ISession session = NHibernateHelper.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
foreach (var job in Jobs)
{
session.SaveOrUpdate(job);
}
transaction.Commit();
}
return Jobs.BinarySerialize();
}
As you can see I am sending the whole collection to the server each time and then returning the whole collection. But I'm getting a replaced collection instead of a merged/updated collection. Not to mention the fact that it seems highly inefficient to send all the data back and forth when only part of it could be changed.
Edit 2
I have seen several references on the web to an almost transparent persistence mechanism. I'm not exactly sure whether these will work, and most of them look highly experimental.
ADO.NET Data Services w/NHibernate (Ayende)
ADO.NET Data Services w/NHibernate (Wildermuth)
Custom Lazy-loadable Business Collections with NHibernate
NHibernate and WCF is Not a Perfect Match
Spring.NET, NHibernate, WCF Services and Lazy Initialization
How to use NHibernate Lazy Initializing Proxies with Web Services or WCF
I'm having a hard time finding a replacement for the DataSet model we are using today. The reason I want to get away from that model is because it takes a lot of work to tie every property of every class to a row/cell of a dataset. Then it also tightly couples all of my classes together.
I've only taken a cursory look at your question, so forgive me if my response is shortsighted but here goes:
I don't think you can logically get away from doing a mapping from domain object to DTO.
By using the domain objects over the wire you are tightly coupling your client and service; part of the reason to have a service in the first place is to promote loose coupling. So that's an immediate issue.
On top of that you're going to end up with a brittle domain logic interface where you can't make changes on the service side without breaking your client.
I suspect your best bet would be to implement a loosely coupled service which exposes a REST or some other loosely coupled interface. You could use a product such as AutoMapper to make the conversions simpler and easier, and also flatten data structures as necessary.
At this point I don't know of any way to really cut down the verbosity involved in doing the interface layers, but having worked on large projects that didn't make the effort, I can honestly tell you the savings weren't worth it.
I think your problem revolves around this issue:
http://thatextramile.be/blog/2010/05/why-you-shouldnt-expose-your-entities-through-your-services/
Are you or are you not going to send ORM-Entities over the wire?
Since you have a service-oriented architecture, I (like the author) do not recommend this practice.
I use NHibernate. I call those ORM-Entities. They are THE POCO model. But they have "virtual" properties that allow for lazy-loading.
However, I also have some DTO objects. These are also POCOs. These do not have lazy-loading-friendly properties.
So I do a lot of "converting". I hydrate ORM-Entities (with NHibernate) and then I end up converting them to Domain-DTO-Objects. Yes, it stinks in the beginning.
The server sends out the Domain-DTO-Objects. There is NO lazy loading. I have to populate them with the "Goldilocks" "just right" model. Aka, if I need Parent(s) with one level of children, I have to know that up front and send the Domain-DTO-Objects over that way, with just the right amount of hydration.
When I send Domain-DTO-Objects back (from the client to the server), I have to reverse the process. I convert the Domain-DTO-Objects into ORM-Entities and allow NHibernate to work with the ORM-Entities.
Because the architecture is "disconnected", I do a lot of (NHibernate) ".Merge()" calls.
// ormItem is any NHibernate POCO
using (ISession session = ISessionCreator.OpenSession())
{
    using (ITransaction transaction = session.BeginTransaction())
    {
        ParkingAreaNHEntity mergedItem = session.Merge(ormItem);
        transaction.Commit();
    }
}
.Merge is a wonderful thing. Entity Framework does not have it. Boo.
Is this a lot of setup? Yes.
Do I think it is perfect? No.
However, because I send very basic DTOs (POCOs) that are not "flavored" to the ORM, I have the ability to switch ORMs without killing my contracts to the outside world.
My datalayer can be ADO.NET, EF, NHibernate, or anything. I have to write the "Converters" if I switch, and the ORM code, but everything else is isolated.
Many people argue with me. They say I'm doing too much, and that the ORM-Entities are fine.
Again, I like to not allow any appearance of lazy loading. And I prefer to have my data layer isolated. My clients should not know or care about my data-layer/ORM of choice.
There are just enough subtle differences (or some not so subtle ones) between EF and NHibernate to screwball the game plan.
Do my Domain-DTO-Objects's look 95% like my ORM-Entities? Yep. But its the 5% that can screwball you.
Moving from DataSets, especially if they are populated from stored procedures with a lot of biz-logic in the TSQL, isn't trivial. But now that I use an object model, and NEVER write a stored procedure that isn't a simple CRUD function, I'd never go back.
And I hate maintenance projects with voodoo TSQL in the stored procedures. It ain't 1999 anymore. Well, most places.
Good luck.
PS Without .Merge (in EF), here is what you have to do in a disconnected world (boo, Microsoft):
http://www.entityframeworktutorial.net/EntityFramework4.3/update-many-to-many-entity-using-dbcontext.aspx
What's the best and/or fastest method of doing multijoin queries in Django using the ORM and QuerySet API?
If you are trying to join across tables linked by ForeignKeys or ManyToManyField relationships then you can use the double underscore syntax. For example if you have the following models:
class Foo(models.Model):
name = models.CharField(max_length=255)
class FizzBuzz(models.Model):
bleh = models.CharField(max_length=255)
class Bar(models.Model):
foo = models.ForeignKey(Foo)
fizzbuzz = models.ForeignKey(FizzBuzz)
You can do something like:
FizzBuzz.objects.filter(bar__foo__name="Adrian")
Don't use the API ;-) Seriously, if your JOINs are complex, you should see significant performance increases by dropping down into SQL rather than by using the API. And this doesn't mean you need to get dirty dirty SQL all over your beautiful Python code; just make a custom manager to handle the JOINs and then have the rest of your code use it rather than direct SQL.
Also, I was just at DjangoCon where they had a seminar on high-performance Django, and one of the key things I took away from it was that if performance is a real concern (and you plan to have significant traffic someday), you really shouldn't be doing JOINs in the first place, because they make scaling your app while maintaining decent performance virtually impossible.
Here's a video Google made of the talk:
http://www.youtube.com/watch?v=D-4UN4MkSyI&feature=PlayList&p=D415FAF806EC47A1&index=20
Of course, if you know that your application is never going to have to deal with that kind of scaling concern, JOIN away :-) And if you're also not worried about the performance hit of using the API, then you really don't need to worry about the (AFAIK) minuscule, if any, performance difference between using one API method over another.
Just use:
http://docs.djangoproject.com/en/dev/topics/db/queries/#lookups-that-span-relationships
Hope that helps (and if it doesn't, hopefully some true Django hacker can jump in and explain why method X actually does have some noticeable performance difference).
Use the queryset.query.join method, but only if the other method described here (using double underscores) isn't adequate.
Caktus blog has an answer to this: http://www.caktusgroup.com/blog/2009/09/28/custom-joins-with-djangos-queryjoin/
Basically there is a hidden QuerySet.query.join method that allows adding custom joins.