Flattening data with same performance for recursive $query->with - laravel-5.5

I am using Laravel 5.5.13.
Thanks to some awesome help from a member of SO, I currently get nested (and repeated) data by doing this:
public function show(Entity $entity)
{
    return $entity->with([
        'comments' => function ($query) {
            $query->with([
                'displayname',
                'helpfuls' => function ($query) {
                    $query->with('displayname');
                }
            ]);
        },
        'thumbs' => function ($query) {
            $query->with('displayname');
        }
    ])->firstOrFail();
}
This gives me example data like this: https://gist.githubusercontent.com/blagoh/ee5e70dfe35aa5c68b2d445c63887aaa/raw/a0612fb770a27eaacfbb1e87987aa4fd8902a8a3/nested.json
However, I want to flatten it to this: https://gist.github.com/blagoh/7076be06c400d04941a0593267e11e81 - looking at the version diff we can see the changes:
https://gist.github.com/blagoh/7076be06c400d04941a0593267e11e81/revisions#diff-cb567797700e4d4b63b106653162c671R15
Line 15 is now "helpful_ids": [] and holds just an array of ids, and all the displaynames and helpfuls were moved to the top of the array on lines 45 and 78.
Is it possible to flatten this data, while keeping same query performance (or better)?

Related

RethinkDB Multiple emits in Map

I've been trying out RethinkDB for a while and I still don't know how to do something like this MongoDB example:
https://docs.mongodb.com/manual/tutorial/map-reduce-examples/#calculate-order-and-total-quantity-with-average-quantity-per-item
In Mongo, in the map function, I could iterate over an array field of one document and emit multiple values.
I don't know how to set the key to emit in map, or how to return more than one value per document in the map function.
For example, I would like to get from this:
{
    'num': 1,
    'lets': ['a', 'b', 'c']
}
to
[
    {'num': 1, 'let': 'a'},
    {'num': 1, 'let': 'b'},
    {'num': 1, 'let': 'c'}
]
I'm not sure whether I should be thinking about this differently in RethinkDB, or using something other than map-reduce.
Thanks.
I'm not familiar with Mongo; addressing your transformation example directly:
r.expr({
    'num': 1,
    'lets': ['a', 'b', 'c']
})
.do(function(row) {
    return row('lets').map(function(l) {
        return r.object(
            'num', row('num'),
            'let', l
        )
    })
})
You can, of course, use map() instead of do() in case you're dealing with a sequence rather than a single object.
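For example, here is a minimal sketch of the sequence case, assuming a table named 'docs' whose documents have the shape above; concatMap (a flattening variant of map) is used so the per-document arrays come back as a single stream:
// Hedged sketch: the same transformation applied to a whole table.
// 'docs' is an assumed table name; each document is expected to have 'num' and 'lets'.
r.table('docs').concatMap(function(row) {
    return row('lets').map(function(l) {
        return r.object('num', row('num'), 'let', l);
    });
})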

Grouping the output of a CouchDB View

I have a map reduce view:
.....
emit( diffYears, doc.xyz );
reduced with _sum.
xyz is a number which is summed per integer value of diffYears.
The output looks roughly like this:
4 1204.9
5 796.19
6 1124.8
7 1112.6
8 1993.62
9 159.26
10 395.41
11 456.05
12 457.97
13 39.80
14 483.68
15 269.469
etc..
What I would like to do is group the results as follows:
Grouping   Total per group
0-4        1959.2   (i.e. add up the xyz's for years 0,1,2,3,4)
5-9        3998.5   (same for 5,6,7,8,9, etc.)
10-14      3566.3
I saw a suggestion where a list was used on a view output here: Using a CouchDB view, can I count groups and filter by key range at the same time?
but have been unable to adapt it to get any kind of result.
The code given is:
{
    _id: "_design/authors",
    views: {
        authors_by_date: {
            map: function(doc) {
                emit(doc.date, doc.author);
            }
        }
    },
    lists: {
        count_occurrences: function(head, req) {
            start({ headers: { "Content-Type": "application/json" }});
            var result = {};
            var row;
            while(row = getRow()) {
                var val = row.value;
                if(result[val]) result[val]++;
                else result[val] = 1;
            }
            return result;
        }
    }
}
I substituted var val = row.key in this section:
while(row = getRow()) {
    var val = row.value;
    if(result[val]) result[val]++;
    else result[val] = 1;
}
(although in this case the result is a count.)
This seems to be the way to do it.
(It is like having a startkey and endkey for each grouping, which I can do manually, naturally, but not inside a single process. Or is there a way of passing multiple start- and endkeys into one GET command?)
This must be a fairly normal thing to do, especially for researchers using statistical analysis.
I assume therefore that it does get done, but I cannot locate any examples as far as CouchDB is concerned.
I would appreciate some help with this, or a pointer in the right direction.
Many thanks.
EDIT:
Perhaps the answer lies in a process in 'reduce' to group the output?
You can accomplish what you want using a complex key. The limitation is that the group size is static and needs to be defined in the view.
You'll need a simple step function to create your groups within map like:
var size = 5;
var group = ( doc.diffYears - (doc.diffYears % size)) / size;
emit( [group, doc.diffYears], doc.xyz);
The reduce function can remain _sum.
Now when you query the view use group_level to control the grouping. At group_level=0, everything will be summed and one value will be returned. At group_level=1 you'll receive your desired sums of 0-4, 5-9 etc. At group_level=2 you'll get your original output.
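As a hedged illustration (the database, design document, and view names below are hypothetical), the grouped sums can then be fetched over HTTP, e.g. from JavaScript:
// Hypothetical names: database 'mydb', design doc 'stats', view 'xyz_by_group'.
// group_level=1 groups on the first element of the [group, diffYears] key,
// i.e. one summed row per 5-year bucket; group_level=2 reproduces the per-year sums.
fetch('http://localhost:5984/mydb/_design/stats/_view/xyz_by_group?group_level=1')
    .then(function(res) { return res.json(); })
    .then(function(body) { console.log(body.rows); });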

slick 3 two leftJoin query result to classes mapping

This question is related to my previous one, but now I need to read the data from the database instead of inserting it.
I have the following three case classes:
case class A(id: Long, bList: List[B])
case class B(id: Long, aId: Long, cList: List[C])
case class C(id: Long, bId: Long)
And a query with two joinLeft calls, using incomingAId to filter the aTable results:
val query = (for {
    ((aResult, bResult), cResult) <- aTable.filter(_.id === incomingAId)
        .joinLeft(bTable).on(_.id === _.aId)
        .joinLeft(cTable).on(_._2.map(_.id) === _.bId)
} yield ((aResult, bResult), cResult)).result.transactionally
The query works and the result looks valid, but it isn't easy to map it to the case classes. Also, executionResult has type Seq[Nothing], and the mapping process requires something like this:
database.run(query).map { executionResult =>
    executionResult.map { vectorElement: ((A, Option[B]), Option[C]) =>
        ...
    }
}
Is there any proper way to prevent Seq[Nothing] (by changing the query)?
Or, if the query result type is fine, could you please share a solution for mapping it to the case classes above?
Right now I'm using the following solution, but I suppose some parts of the code can be optimized (e.g. replacing groupBy with something else).
execute(query).mapTo[Vector[((A, Option[B]), Option[C])]]
    .flatMap { insideFuture =>
        insideFuture.groupBy(_._1._1).mapValues { values =>
            // Scala's groupBy loses ordering
            val orderedB = values.groupBy(_._1._2).toSeq.sortBy(_._1.map(_.id)).toMap
            orderedB.mapValues(_.map(_._2))
        }.headOption match {
            case Some(data) => Future.successful {
                data._1.copy(bList = data._2.map {
                    case (Some(bElement), optionCElements) =>
                        bElement.copy(cList = optionCElements.toList.flatten)
                    case _ =>
                        throw new Exception("Invalid query result. Unable to find B elements")
                }.toList)
            }
            case None => Future.failed(new Exception("Unable to find A with id " + incomingAId))
        }
    }

Why is 'array' being passed to the filter function?

JSBin Example
In the following code, 'array' is simply an array of integers, 'items' is a list of objects, and 'coprop' is a computed filter of 'items' that uses 'array' (which may change) to decide which elements of 'items' belong in 'coprop'.
The desired result is just those elements of 'items' that have a 'value' in 'array'.
The actual result is a JavaScript error, because the elements of 'array' are (for some reason) passed to the filtering function in addition to the elements of 'items'. That triggers a 'TypeError: i.get is not a function' message.
array: [4, 5, 6],
items: function() {
    return this.get('store').peekAll('item');
}.property(),
coprop: Ember.computed.filter('items', function(i, idx, ary) {
    console.log('i = ' + i);
    return this.get('array').isAny(i.get('value'));
}).property('items', 'array')
Please note: this is a simplified example to demonstrate the problem. The filtering function is more complex than shown here, but this does show the issue quite clearly.
Actual Output
"i = <App.Thing:ember409:1>"
"i = <App.Thing:ember410:2>"
"i = <App.Thing:ember411:3>"
"i = <App.Thing:ember412:4>"
"i = 4"
"error"
"TypeError: i.get is not a function
The Questions
Why is 'array' being passed to the filter function as elements to filter?
How do I ensure that 'coprop' will be updated if 'array' changes?
It's because of the .property('items', 'array') suffix on it. Maybe you could filter it like this instead:
coprop: Ember.computed('array.[]', 'items.[]', function() {
    let arr = this.get('array');
    return this.get('items').filter(i => arr.contains(i.get('value')));
})
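As a hedged follow-up on the second question (this sketch assumes it runs on the same component or controller): because 'array.[]' is a dependent key, membership changes made through Ember's KVO-aware array methods, as well as replacing the array, will recompute coprop:
// pushObject is Ember's observable push; the '.[]' key watches membership changes
this.get('array').pushObject(7);   // coprop recomputes and can now match value 7
this.set('array', [1, 2, 3]);      // replacing the array entirely also recomputes coprop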

Way of fast iterating through list

I have a question about iterating through lists.
Let's say I have a list of maps in the following format:
def listOfMaps = [ ["date":"2013/05/23", "id":"1"],
                   ["date":"2013.05.23", "id":"2"],
                   ["date":"2013-05-23", "id":"3"],
                   ["date":"23/05/2013", "id":"4"] ]
Now I have a list of two patterns (in reality I have a lot more :D)
def patterns = [
    /\d{4}\/\d{2}\/\d{2}/, //'yyyy/MM/dd'
    /\d{4}\-\d{2}\-\d{2}/  //'yyyy-MM-dd'
]
I want to println only the dates in the "yyyy/MM/dd" and "yyyy-MM-dd" formats, so I have to go through both lists:
for (int i = 0; i < patterns.size(); i++) {
    def findDates = listOfMaps.findAll { it.get("date") ==~ patterns[i] ?
        dateList << it : "Nothing found" }
}
but I have a problem with this approach. What if "listOfMaps" is going to be huge? It will take a lot of time to find all the patterns, because this code has to go through the whole list of patterns and, for each pattern, through the whole list of maps, which in the case of huge lists might take a long while :). I tried a forEach inside the findAll closure, but it does not work.
So my question is: is there any way to go through the list of patterns inside the findAll closure? For instance, something like this in pseudocode:
def findDates = listOfMaps.findAll { it.get("date") ==~ for(){patterns[i]} ? : }
In that case it goes only once through listOfMaps and iterates through patterns (which is always way smaller than listOfMaps).
I might have an idea to create a function that returns a list instance, but I'm struggling to implement it :).
Thanks in advance for any response.
You could do:
def listOfMaps = [ [date:"2013/05/23", id:"1"],
                   [date:"2013.05.23", id:"2"],
                   [date:"2013-05-23", id:"3"],
                   [date:"23/05/2013", id:"4"] ]
def patterns = [
    /\d{4}\/\d{2}\/\d{2}/, //'yyyy/MM/dd'
    /\d{4}\-\d{2}\-\d{2}/  //'yyyy-MM-dd'
]
def foundRecords = listOfMaps.findAll { m ->
    // the inner find returns the first pattern that matches (or null),
    // so each map is kept as soon as one pattern matches and the maps are traversed only once
    patterns.find { p ->
        m.date ==~ p
    }
}