I have a tree model representing a k-ary tree (i.e. nodes have a list of children). The model subclasses QAbstractItemModel and reimplements its QSize span(const QModelIndex &index) method.
Given the following k-ary tree:

          and
         /   \
        /     \
       /       \
      or        or
     /  \     / | \
    I1  I2  I3  I4 I5
My intention is to visualize the tree in a QTableView from left to right as follows:
___________
| | | Item 1
| |v|_______
| | | Item 2
| |_|_______
|^| | Item 3
| | |_______
| |v| Item 4
| | |_______
| | | Item 5
|_|_|_______
Such that each node spans over all its child nodes.
I've implemented the QAbstractItemModel::span method, but it is not taken into account by the QTableView class.
When (i.e. reacting to which signal) and how should I rebuild/redraw the QTableView?
Notes:
The model will be a read/write model. The user will have the option to rearrange the expression tree with drag&drop.
Qt Version 4.8, upgrading to 5 is not an option.
Thanks in advance.
I have an import query (table a) and an imported Excel file (table b) with records I am trying to match up.
I am looking for a method to replicate this type of SQL in M:
SELECT a.loc_id, a.other_data, b.stk
FROM a INNER JOIN b on a.loc_id BETWEEN b.from_loc AND b.to_loc
Table A
| loc_id | other data |
-------------------------
| 34A032B1 | ... |
| 34A3Z011 | ... |
| 3DD23A41 | ... |
Table B
| stk | from_loc | to_loc |
--------------------------------
| STKA01 | 34A01 | 34A30ZZZ |
| STKA02 | 34A31 | 34A50ZZZ |
| ... | ... | ... |
Goal
| loc_id | other data | stk |
----------------------------------
| 34A032B1 | ... | STKA01 |
| 34A3Z011 | ... | STKA02 |
| 3DD23A41 | ... | STKD01 |
All of the other queries I can find along these lines use numbers, dates, or times in the BETWEEN clause, and seem to work by exploding the (from, to) range into all possible values and then filtering out the extra rows. However, I need to use string comparisons, and exploding those into all possible values would be infeasible.
Between all the various solutions I could find, the closest I've come is to add a custom column on table a:
Table.SelectRows(
table_b,
(a) => Value.Compare([loc_id], table_b[from_loc]) = 1
and Value.Compare([loc_id], table_b[to_loc]) = -1
)
This does return all the columns from table_b; however, when expanding the column, the values are all null.
Your question is not very specific ("After 34A01 could be any string...") when it comes to figuring out how your series progresses.
But maybe you can just test for how a value "sorts" using the native sorting function in PQ.
Add a custom column with Table.SelectRows:
= try Table.SelectRows(TableB, (t)=> t[from_loc]<=[loc_id] and t[to_loc] >= [loc_id])[stk]{0} otherwise null
To reproduce with your examples:
let
TableB=Table.FromColumns(
{{"STKA01","STKA02"},
{"34A01","34A31"},
{"34A30ZZZ","34A50ZZZ"}},
type table[stk=text,from_loc=text,to_loc=text]),
TableA=Table.FromColumns(
{{"34A032B1","34A3Z011","3DD23A41"},
{"...","...","..."}},
type table[loc_id=text, #"other data"=text]),
//determine where it sorts and return the stk
#"Added Custom" = Table.AddColumn(#"TableA", "stk", each
try Table.SelectRows(TableB, (t)=> t[from_loc]<=[loc_id] and t[to_loc] >= [loc_id])[stk]{0} otherwise null)
in
#"Added Custom"
Note: if the above algorithm is too slow, there may be faster methods of obtaining these results
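The range-lookup idea above, i.e. relying on plain lexicographic string comparison instead of exploding the ranges, can be sketched in Python for intuition (table contents taken from the question; lookup_stk is my own name, not part of Power Query):

```python
# Rows of Table B from the question: (stk, from_loc, to_loc).
table_b = [
    ("STKA01", "34A01", "34A30ZZZ"),
    ("STKA02", "34A31", "34A50ZZZ"),
]

def lookup_stk(loc_id):
    """Return the stk whose [from_loc, to_loc] range contains loc_id."""
    for stk, from_loc, to_loc in table_b:
        # Python's <= on strings is lexicographic, like Value.Compare in M
        if from_loc <= loc_id <= to_loc:
            return stk
    return None  # mirrors the `otherwise null` in the M code
```

With the sample data, "34A032B1" sorts into the first range and "34A3Z011" into the second, while "3DD23A41" falls outside both (its stk STKD01 is not in the sample Table B).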
I have a table like this:
+------+------+------+
| Lvl1 | Lvl2 | Lvl3 |
+------+------+------+
| A1 | B1 | C1 |
| A1 | B1 | C2 |
| A1 | B2 | C3 |
| A2 | B3 | C4 |
| A2 | B3 | C5 |
| A2 | B4 | C6 |
| A3 | B5 | C7 |
+------+------+------+
It represents something like a hierarchy.
When the user selects A1, they actually select the first 3 rows; B1 selects the first 2 rows; and C1 selects only the first row.
That is, A is the highest level and C is the lowest. Note that ids from different levels are unique, since they have a special prefix: A, B, C.
The problem is when filtering in more than one level, I may have empty result set.
e.g. filtering on Lvl1=A1 & Lvl2=B3 (no intersection) will return nothing. What I need is to get the first 5 rows (Lvl1=A1 or Lvl2=B3):
const lvl1Filter: IBasicFilter = {
$schema: "http://powerbi.com/product/schema#basic",
target: {
table: "Hierarchy",
column: "Lvl1"
},
operator: "In",
values: ['A1'],
filterType: FilterType.BasicFilter
}
const lvl2Filter: IBasicFilter = {
$schema: "http://powerbi.com/product/schema#basic",
target: {
table: "Hierarchy",
column: "Lvl2"
},
operator: "In",
values: ['B3'],
filterType: FilterType.BasicFilter
}
report.setFilters([lvl1Filter, lvl2Filter]);
The problem is that the filters are independent from each other, and they will both be applied, that is with AND operation between them.
So, is there a way to send the filters with OR operation between them, or is there a way to simulate it?
PS: I tried to put all the data in a single column (like the following table), and it worked, but the data was very large (millions of records), so it was very, very slow. I need something more efficient.
All data in single column:
+--------------+
| AllHierarchy |
+--------------+
| A1 |
| A2 |
| A3 |
| B1 |
| B2 |
| B3 |
| B4 |
| B5 |
| C1 |
| C2 |
| C3 |
| C4 |
| C5 |
| C6 |
| C7 |
+--------------+
Set Filter:
const allHierarchyFilter: IBasicFilter = {
$schema: "http://powerbi.com/product/schema#basic",
target: {
table: "Hierarchy",
column: "AllHierarchy"
},
operator: "In",
values: ['A1', 'B3'],
filterType: FilterType.BasicFilter
}
report.setFilters([allHierarchyFilter]);
It isn't directly possible to make an "or" filter between multiple columns in Power BI, so you were right to try to combine all values in a single column. But instead of appending all possible values by unioning them into one column, which gives you a long list, you can combine their values "per row": concatenate all values in the current row, with some unique separator (it depends on your actual values, which are not shown). If all columns will have values, keep it simple - create a new DAX column (not a measure!):
All Levels = 'Table'[Lvl1] & "-" & 'Table'[Lvl2] & "-" & 'Table'[Lvl3]
If it is possible for some of the levels to be blank, you can handle that:
All Levels = 'Table'[Lvl1] &
IF('Table'[Lvl2] = BLANK(); ""; "-" & 'Table'[Lvl2]) &
IF('Table'[Lvl3] = BLANK(); ""; "-" & 'Table'[Lvl3])
Note that depending on your regional settings, you may have to replace semicolons in the above code with commas.
This will give you a new column, which will contain all values from the current row, e.g. A1-B2-C3. Now you can make a filter "All Levels contains A1 or All Levels contains B3", which is a filter on a single column, where we can easily use or.
When embedding, your JavaScript code should create an advanced filter, like this:
const allLevelsFilter: IAdvancedFilter = {
$schema: "http://powerbi.com/product/schema#advanced",
target: {
table: "Hierarchy",
column: "All Levels"
},
logicalOperator: "Or",
conditions: [
{
operator: "Contains",
value: "A1"
},
{
operator: "Contains",
value: "B3"
}
],
filterType: FilterType.AdvancedFilter
}
report.setFilters([allLevelsFilter]);
If you need exact match (e.g. the above code will also return rows with A11 or B35), then add the separator at the start and the end of the column too (i.e. to get -A1-B2-C3-) and in your JavaScript code append it before and after your search string (i.e. search for -A1- and -B3-).
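The separator trick can be illustrated with a quick sketch (plain Python, not DAX; the sample rows here are my own, chosen to show false substring matches):

```python
# Concatenated "All Levels" values; rows 2 and 3 contain A11 / B35,
# which a naive substring test would wrongly match against A1 / B3.
rows = ["A1-B2-C3", "A11-B4-C9", "A2-B35-C7"]

# naive "contains" matches the false positives too
naive = [r for r in rows if "A1" in r or "B3" in r]

# wrapping each row AND each search term in separators gives exact matches
wrapped = ["-" + r + "-" for r in rows]
exact = [r for r in wrapped if "-A1-" in r or "-B3-" in r]
```

Here `naive` keeps all three rows, while `exact` keeps only the wrapped first row.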
Hope this helps!
I just wanted to know how deque is implemented, and how basic operations like push_front and the random access operator are provided in that implementation.
I just wanted to know how deque is implemented
It's always good to have an excuse for doing ASCII art:
+-------------------------------------------------------------+
| std::deque<int> |
| |
| subarrays: |
| +---------------------------------------------------------+ |
| | | | | | | |
| | int(*)[8] | int(*)[8] | int(*)[8] |int(*)[8]|int(*)[8] | |
| | | | | | | |
| +---------------------------------------------------------+ |
| / \ |
| / \ |
| / \ |
| / \ |
| / \ |
| / \ |
| / \ |
| / \ |
| - - |
| +------------------------------+ |
| | ?, ?, 42, 43, 50, ?, ?, ?, ? | |
| +------------------------------+ |
| |
| additional state: |
| |
| - pointer to begin of the subarrays |
| - current capacity and size |
| - pointer to current begin and end |
+-------------------------------------------------------------+
how are basic operations like push_front and the random access operator provided in that implementation?
First, std::deque::push_front, from libcxx:
template <class _Tp, class _Allocator>
void
deque<_Tp, _Allocator>::push_front(const value_type& __v)
{
allocator_type& __a = __base::__alloc();
if (__front_spare() == 0)
__add_front_capacity();
__alloc_traits::construct(__a, _VSTD::addressof(*--__base::begin()), __v);
--__base::__start_;
++__base::size();
}
This obviously checks whether the memory already allocated at the front can hold an additional element. If not, it allocates. Then, the main work is shifted to the iterator: _VSTD::addressof(*--__base::begin()) goes one location before the current front element of the container, and this address is passed to the allocator to construct a new element in place by copying v (the default allocator will definitely do a placement-new).
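The front-spare logic can be modelled in plain Python (a toy sketch, not libcxx; the names and the block size of 8 are my own):

```python
BLOCK = 8  # size of one subarray (libcxx's __block_size is larger in practice)

class MiniDeque:
    """Toy block-based deque modelling only push_front."""
    def __init__(self):
        self.blocks = [[None] * BLOCK]  # the "map" of subarrays
        self.start = BLOCK              # logical offset of the first element
        self.size = 0

    def push_front(self, v):
        if self.start == 0:                        # no spare room at the front
            self.blocks.insert(0, [None] * BLOCK)  # allocate a new front block
            self.start = BLOCK
        self.start -= 1                            # like --__start_
        self.blocks[self.start // BLOCK][self.start % BLOCK] = v
        self.size += 1
```

A new block is only allocated when the front spare is exhausted, so most push_front calls are just an index decrement and a store.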
Now random access. Again from libcxx, std::deque::operator[] (the non-const version) is
template <class _Tp, class _Allocator>
inline
typename deque<_Tp, _Allocator>::reference
deque<_Tp, _Allocator>::operator[](size_type __i) _NOEXCEPT
{
size_type __p = __base::__start_ + __i;
return *(*(__base::__map_.begin() + __p / __base::__block_size) + __p % __base::__block_size);
}
This pretty much computes an index relative to some start index, and then determines the subarray and the index relative to the start of the subarray. __base::__block_size should be the size of one subarray here.
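The same index arithmetic can be written out in a few lines of Python (my own names; assuming a block size of 8):

```python
BLOCK_SIZE = 8  # stand-in for __block_size

def locate(start, i, block_size=BLOCK_SIZE):
    """Map logical index i to (subarray index, offset within subarray),
    where `start` is the offset of the first element (like __start_)."""
    p = start + i
    return p // block_size, p % block_size
```

So with start=3, element 0 lives at offset 3 of the first subarray, and element 5 wraps into the second subarray.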
Not sure why I'm having a difficult time with this - it seems so simple, considering it's fairly easy to do in R or pandas. I want to avoid pandas, though, since I'm dealing with a lot of data, and I believe toPandas() loads all the data into the driver's memory in PySpark.
I have 2 dataframes: df1 and df2. I want to filter df1 (remove all rows) where df1.userid = df2.userid AND df1.group = df2.group. I wasn't sure if I should use filter(), join(), or SQL. For example:
df1:
+------+----------+--------------------+
|userid| group | all_picks |
+------+----------+--------------------+
| 348| 2|[225, 2235, 2225] |
| 567| 1|[1110, 1150] |
| 595| 1|[1150, 1150, 1150] |
| 580| 2|[2240, 2225] |
| 448| 1|[1130] |
+------+----------+--------------------+
df2:
+------+----------+---------+
|userid| group | pick |
+------+----------+---------+
| 348| 2| 2270|
| 595| 1| 2125|
+------+----------+---------+
Result I want:
+------+----------+--------------------+
|userid| group | all_picks |
+------+----------+--------------------+
| 567| 1|[1110, 1150] |
| 580| 2|[2240, 2225] |
| 448| 1|[1130] |
+------+----------+--------------------+
EDIT:
I've tried many join() and filter() functions, I believe the closest I got was:
cond = [df1.userid == df2.userid, df2.group == df2.group]
df1.join(df2, cond, 'left_outer').select(df1.userid, df1.group, df1.all_picks) # Result has 7 rows
I tried a bunch of different join types, and I also tried different cond values:
cond = ((df1.userid == df2.userid) & (df2.group == df2.group)) # result has 7 rows
cond = ((df1.userid != df2.userid) & (df2.group != df2.group)) # result has 2 rows
However, it seems like the joins are adding additional rows, rather than deleting.
I'm using Python 2.7 and Spark 2.1.0.
Left anti join is what you're looking for:
df1.join(df2, ["userid", "group"], "leftanti")
but the same thing can be done with left outer join:
(df1
.join(df2, ["userid", "group"], "leftouter")
.where(df2["pick"].isNull())
.drop(df2["pick"]))
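For intuition, the anti-join semantics on the question's sample data can be mimicked in plain Python (illustration only, not PySpark):

```python
# df1 rows and the (userid, group) keys present in df2, from the question
df1 = [
    {"userid": 348, "group": 2, "all_picks": [225, 2235, 2225]},
    {"userid": 567, "group": 1, "all_picks": [1110, 1150]},
    {"userid": 595, "group": 1, "all_picks": [1150, 1150, 1150]},
    {"userid": 580, "group": 2, "all_picks": [2240, 2225]},
    {"userid": 448, "group": 1, "all_picks": [1130]},
]
df2_keys = {(348, 2), (595, 1)}

# left anti join: keep only df1 rows whose key has NO match in df2
result = [r for r in df1 if (r["userid"], r["group"]) not in df2_keys]
```

This keeps exactly the three rows from the desired output (567, 580, 448); joins, by contrast, multiply rows per match, which is why your left_outer attempts returned 7 rows.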
I need a (computation-time) optimized tree data structure (for g++) to duplicate/multiply a tree.
My tree will be a k-ary tree, but not necessarily filled.
The main operation is to replicate the existing tree (up to k times) and add the copies as subtrees to a new node. Then the leaf node level will be erased to keep the fixed-level rule.
Does anybody know of a data structure offering this?
An example for the multiplication: Suppose we have a binary tree
     A
     |
    / \
   /   \
  B     C
  |    / \
  |   /   \
  D  E     F
and we want to add a new node / multiply like
    R
   / \
  /   \
 ..   ..
So the result will look like
           R
          / \
         /   \
        /     \
       /       \
      /         \
     A           A
     |           |
    / \         / \
   /   \       /   \
  B     C     B     C
  |    / \    |    / \
  |   /   \   |   /   \
  D  E     F  D  E     F
I tried to organize this in a std::vector in a heap-like structure, but multiplying the tree is still kind of slow, because I have to copy the tree level by level rather than copying the whole tree at once.
When you add R, it is trivial to give it 2 pointers to A, rather than copying the entire subtree starting at A.
    R
   / \
   | |
   \ /
    A
    |
   / \
  /   \
 B     C
 |    / \
 |   /   \
 D  E     F
This is both very fast and very easy to code.
Now, the hitch in this comes in if you later want to update one side of the tree, but not the other. For example, perhaps you want to change the "right" F to a G. At that point you can use a copy-on-write strategy on only certain of the nodes, in this case leading to
         R
        / \
       /   \
      A     A   <-- copied, left side points to B
      |    / \
     / \  *   \
    /   \      \
   B     C      C   <-- copied, left side points to E
   |    / \    / \
   |   /   \  *   \
   D  E     F      G
Basically, you only need to copy the path from the point of the change (F/G) up to either the root (easiest to implement) or up to the highest node that is shared (A in this example).
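The sharing-plus-path-copying idea can be sketched in Python (a minimal model; the class and function names are my own, not a library API):

```python
class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def multiply(root, k):
    """New root whose k children all reference the SAME subtree object."""
    return Node("R", [root] * k)

def update_path(node, path, new_value):
    """Copy only the nodes along `path` (a list of child indices);
    every subtree off the path stays shared with the original."""
    if not path:
        return Node(new_value, node.children)
    i = path[0]
    children = list(node.children)  # shallow copy of the child list
    children[i] = update_path(children[i], path[1:], new_value)
    return Node(node.value, children)
```

multiply is O(k) regardless of tree size, and update_path copies only O(depth) nodes, which is exactly the copy-on-write picture above.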
Maybe take a look at Android's code for the T9 dictionary. AFAIR it looks flat, but basically what they do is build a tree of letters, so that traversing the tree from top to bottom makes words. And I think they used relative offsets to jump from one node to the next (like a linked list).
So you should be able to copy the whole tree in one run.
I don't remember the exact layout though, and I think it didn't do ugly padding as I do here, but to continue with your example it would look something(!) like this:
# your tree
__________
/// _ \ _
/// /// \ \ /// \
A007021B007000D000000C007014E000000F000000
\\\_/ \\\_____/
# copying it, "under" R:
__________ __________
_ /// _ \ _ /// _ \ _
/// \ /// /// \ \ /// \ /// /// \ \ /// \
R007049A007021B007000D000000C007014E000000F000000A007021B007000D000000C007014E000000F000000
\\\ \\\_/ \\\_____/ / \\\_/ \\\_____/
\\\______________________________________/
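The payoff of such a flat layout can be modelled roughly in Python (my own encoding, not Android's actual format): because each node stores offsets relative to its own position, a contiguous copy of the buffer is immediately a valid tree, and only the links joining the copies need patching.

```python
# Each node: (label, first_child_offset, next_sibling_offset),
# both relative to the node's own index; 0 means "none".
tree = [
    ("A", 1, 0),  # child B
    ("B", 1, 2),  # child D, sibling C
    ("D", 0, 0),
    ("C", 1, 0),  # child E
    ("E", 0, 1),  # sibling F
    ("F", 0, 0),
]

def first_child(buf, i):
    off = buf[i][1]
    return i + off if off else None

def next_sibling(buf, i):
    off = buf[i][2]
    return i + off if off else None

# Duplicating under a new root R: one list copy per subtree; only the
# first A's sibling offset is patched to point at the second copy.
n = len(tree)
doubled = [("R", 1, 0)] + [("A", 1, n)] + tree[1:] + tree
```

Traversal still works in the copied buffer without touching any interior node, which is the "copy the whole tree in one run" property.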