I'm having a lot of trouble figuring out how to get RANKX to behave at different levels of a hierarchy.
I have a hierarchy structure as follows:
Region
Manager
Supervisor
Agent
For simplicity's sake, let's rank each level on the total number of Requests Handled. I want to rank each level of the hierarchy against everyone at that level. For example, at the Agent level, each agent's total Requests Handled should be ranked against every other agent, regardless of which supervisor, manager, or region they are in.
I can get the agent level to work just fine with the following, but I can't figure out how to get the upper levels to rank against each other. The same RANKX statement works at any single level, but only when that one level is shown in the visual; adding any additional level breaks it.
Requests Handled Measure =
SUMX ( 'Work Done', [Requests Handled] )

Rank Measure =
IF (
    ISINSCOPE ( Roster[Associate Name] ) && NOT ( ISBLANK ( [Requests Handled Measure] ) ),
    RANKX (
        ALL ( Roster[Associate Name], Roster[Supervisor Name], Roster[Manager Name], Roster[Region] ),
        [Requests Handled Measure], , DESC, Dense
    )
)
The ideal result would be something like:
- Region 1: 240 Requests, Rank 2
  - Manager A: 122 Requests, Rank 2
    - Supervisor A: 65 Requests, Rank 3
      - Agent A: 30 Requests, Rank 9
      - Agent B: 35 Requests, Rank 5
    - Supervisor B: 57 Requests, Rank 4
      - Agent C: 29 Requests, Rank 10
      - Agent D: 28 Requests, Rank 11
  - Manager B: 118 Requests, Rank 3
    - Supervisor C: 65 Requests, Rank 3
      - Agent E: 33 Requests, Rank 6
      - Agent F: 32 Requests, Rank 7
    - Supervisor D: 53 Requests, Rank 6
      - Agent G: 26 Requests, Rank 13
      - Agent H: 27 Requests, Rank 12
- Region 2: 250 Requests, Rank 1
  - Manager C: 99 Requests, Rank 4
    - Supervisor E: 56 Requests, Rank 5
      - Agent I: 25 Requests, Rank 14
      - Agent J: 31 Requests, Rank 8
    - Supervisor F: 43 Requests, Rank 7
      - Agent K: 20 Requests, Rank 16
      - Agent L: 23 Requests, Rank 15
  - Manager D: 151 Requests, Rank 1
    - Supervisor G: 78 Requests, Rank 1
      - Agent M: 40 Requests, Rank 1
      - Agent N: 38 Requests, Rank 2
    - Supervisor H: 73 Requests, Rank 2
      - Agent O: 36 Requests, Rank 4
      - Agent P: 37 Requests, Rank 3
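One pattern worth trying, extending the agent-level formula you already have: test each level with ISINSCOPE from the most granular level up, and in each branch remove the filters on that level's column together with all of its ancestor columns. A sketch against your Roster column names (untested against your actual model):

Rank Measure =
VAR vRank =
    SWITCH (
        TRUE (),
        // most granular level must be tested first
        ISINSCOPE ( Roster[Associate Name] ),
            RANKX (
                ALL ( Roster[Associate Name], Roster[Supervisor Name], Roster[Manager Name], Roster[Region] ),
                [Requests Handled Measure], , DESC, Dense
            ),
        ISINSCOPE ( Roster[Supervisor Name] ),
            RANKX (
                ALL ( Roster[Supervisor Name], Roster[Manager Name], Roster[Region] ),
                [Requests Handled Measure], , DESC, Dense
            ),
        ISINSCOPE ( Roster[Manager Name] ),
            RANKX (
                ALL ( Roster[Manager Name], Roster[Region] ),
                [Requests Handled Measure], , DESC, Dense
            ),
        ISINSCOPE ( Roster[Region] ),
            RANKX (
                ALL ( Roster[Region] ),
                [Requests Handled Measure], , DESC, Dense
            )
    )
RETURN
    IF ( NOT ISBLANK ( [Requests Handled Measure] ), vRank )

The branch order matters: ISINSCOPE ( Roster[Region] ) is also true at the manager, supervisor, and agent levels, so the deepest level has to be tested first.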
Table: production

code  part  qty  process_id
1     21    10   10
1     22    12   10
2     22    15   10
1     21    10   12
1     22    12   12
I have to extract data based on process: every process has multiple parts, but I can't take every part's data, so I have to take distinct rows on code to get a process-wise summation of qty.
How can I get data like this in PostgreSQL or in Django?
process_id  qty
10          27
12          12
I tried it this way:
Production.objects.values('process').distinct('code').annotate(total_qty=Sum('quantity'))
The following query gets your desired result, but from your snippet I'm not sure this is the logic you had in mind. If you add more detail I can refine the answer.
SELECT process_id, SUM(qty) qty
FROM production
WHERE part=22
GROUP BY process_id
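If the rule is really "keep one row per (process_id, code) pair, then sum qty per process", PostgreSQL's DISTINCT ON states that directly. A sketch, assuming the row to keep per pair is the one with the highest qty (your sample doesn't make the tie-break rule explicit):

SELECT process_id, SUM(qty) AS qty
FROM (
    SELECT DISTINCT ON (process_id, code)
           process_id, code, qty
    FROM production
    ORDER BY process_id, code, qty DESC  -- keeps the highest qty per (process_id, code)
) AS deduped
GROUP BY process_id
ORDER BY process_id;

Note that Django refuses to combine distinct('code') with annotate() in a single queryset, so your ORM attempt cannot work as written; running SQL like the above as a raw query is the simplest route.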
I am beginning to use Power BI, and I don't know how to group rows.
I have this kind of data:
api  user  01/07/21  02/07/21  03/07/21  ...
a    25    null      3         4
b    25    1         null      2
c    25    1         4         5
a    30    4         3         5
b    30    3         2         2
c    30    1         1         3
I would like to have the sum of the values per user, rather than per api and user:
user  01/07/21  02/07/21  03/07/21  ...
25    2         7         11
30    8         6         10
Do you know how to do this, please?
I created a table with your sample data (make sure your values are treated as numbers). Then create a Matrix visual with "user" in Rows and your desired columns in the Values section; the Matrix sums each value field per user by default.
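If you would rather materialize the grouped result as its own table instead of relying on the Matrix's implicit Sum, here is a calculated-table sketch (assuming your source table is named 'Data' and the date columns are named exactly as shown):

User Summary =
GROUPBY (
    'Data',
    'Data'[user],
    "01/07/21", SUMX ( CURRENTGROUP (), 'Data'[01/07/21] ),
    "02/07/21", SUMX ( CURRENTGROUP (), 'Data'[02/07/21] ),
    "03/07/21", SUMX ( CURRENTGROUP (), 'Data'[03/07/21] )
)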
The table below shows a minimal example of my raw data:
Product  Order Day  Customer  Units Ordered  Units Delivered
P        Apr 1      X         4              3
P        Apr 2      X         4              3
P        Apr 1      Y         3              1
P        Apr 1      Z         3              1
Q        Apr 1      Z         3              1
Q        Apr 2      W         3              2
R        Apr 3      X         1              0
R        Apr 4      Y         2              0
R        Apr 5      Z         8              8
R        Apr 6      Z         6              6
Based on this I am able to create the following table as a PBI report, giving a product summary:
Product  # diff. Customers Ordered  Total Ordered  Total Delivered  Service Rate  Product-level Service Test
P        3                          14             8                0.57          1
Q        2                          6              3                0.5           0
R        3                          17             14               0.82          1
The final column in the above summary checks whether more than 50% is being delivered. This report can be filtered on Order Day (a time filter) as well as on Customer.
Now, in a similar fashion, I would like to create a customer summary report:
Customer  # diff. Products Ordered  # Products Passing Service Test
X         2                         1
Y         2                         0
Z         3                         1
W         1                         1
It basically summarizes the product report after filtering for specific customers. My problem is the final column, "# Products Passing Service Test": I am not able to define an appropriate measure that gets the right numbers displayed in this column. I tried some other approaches, but then it does not work well with the time filter on Customer Orders.
Can anyone help? Thank you very much!
JD
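One measure pattern that might fit, sketched on the assumption that the raw data sits in a single table called 'Orders' with the column names shown above: count the products visible in the current filter context whose service rate, recomputed under that context, exceeds 50%.

# Products Passing Service Test =
COUNTROWS (
    FILTER (
        VALUES ( Orders[Product] ),  // products the current customer actually ordered
        CALCULATE ( SUM ( Orders[Units Delivered] ) )
            > 0.5 * CALCULATE ( SUM ( Orders[Units Ordered] ) )
    )
)

Because VALUES and the inner CALCULATE both respect the current Customer and Order Day filters, the test is re-evaluated per customer slice, which matches your example: R passes at the product level but fails for customer X, whose single R order was never delivered.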
We have a social app, and we started using the AWS Elasticsearch Service in production, but we have started to have a problem with ES. The ES version is 2.3.
The cluster configuration is:
Data nodes: 2
Data node type: m3.medium.elasticsearch
Dedicated master instance count: 3
Dedicated master instance type: t2.small.elasticsearch
Capacity of each data node: 50 GB
The problem is that in less than thirty minutes the free storage of one of the nodes went from 9 GB to 0 GB, and we did not know how this happened.
We have 4 types of documents, one of which is a dynamic type; let's call it the Group type. That is because every Group document can have N fields representing the friends of a Group.
Something like
{
  "13": [1, 2, 3, 4],
  "5": [1, 3, 4],
  "user_ids": [1, 2, 3, 4, 6, 7],
  "id": 1
}
This means that the users with ID 13 and 5 are friends with some of the users of the Group with ID 1.
So this document can grow with the number of users.
If anyone has had (or has) the same problem, or fully understands the Elasticsearch architecture, their help would be awesome.
Indices info:
curl -XGET 'http://host/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana-4 1 1 5 0 1.9mb 1017.3kb
green open X 1 1 2259502 29575 57.5gb 28.7gb
green open Y 1 1 113156 0 21.7mb 10.8mb
curl -XGET 'http://host/_cat/nodes?v&h=host,id,ip,rp,hp,d,cpu,v,r,m,n'
host id ip rp hp d cpu v r m n
x.x.x.x tIgm x.x.x.x 95 5 5.7gb 0 2.3.2 - m Shatter
x.x.x.x puUF x.x.x.x 95 6 5.7gb 0 2.3.2 - m Justice
x.x.x.x 1qZi x.x.x.x 97 54 17.7gb 7 2.3.2 d - Allatou
x.x.x.x lcty x.x.x.x 97 60 17.7gb 8 2.3.2 d - Amergin
x.x.x.x Nq1H x.x.x.x 5 15 5.7gb 0 2.3.2 - * Arkus
Thanks a lot!
I have managed to resolve the problem.
My problem is known as Mapping Explosion.
Having variable keys in the mapping, as I had in the Group document type, results in an ever-growing index.
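For anyone hitting the same thing: the usual fix is to stop using user IDs as field names and move them into values under a field with a fixed mapping. A sketch of how the Group document above could be restructured (the "friendships" field name is my own suggestion):

{
  "id": 1,
  "user_ids": [1, 2, 3, 4, 6, 7],
  "friendships": [
    { "user_id": 13, "friend_ids": [1, 2, 3, 4] },
    { "user_id": 5,  "friend_ids": [1, 3, 4] }
  ]
}

With this shape the mapping stays the same size no matter how many users appear, instead of growing (and being replicated across the cluster state) with every new user ID.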
I am facing a huge memory leak on a server serving a Django (1.8) app with Apache or Nginx (the issue happens with both).
When I go to certain pages (say, the specific view below) the server's RAM climbs to 16 GB within a few seconds (with only one request) and the server freezes.
from datetime import timedelta

from django.shortcuts import render
from django.utils import timezone

from .models import Records  # adjust to your project layout


def records(request):
    """Return the page listing the records of the last 14 days."""
    time = timezone.now() - timedelta(days=14)
    record = Records.objects.filter(time__gte=time)
    return render(request,
                  'record_app/records_newests.html',
                  {
                      'active_nav_tab': ["active", "", "", ""],
                      'record': record,
                  })
When I git checkout an older version, from back when there was no such problem, the problem survives and I have the same issue.
I did a memory check with Guppy on the faulty request; here is the result:
>>> hp.heap()
Partition of a set of 7042 objects. Total size = 8588675016 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 1107 16 8587374512 100 8587374512 100 unicode
1 1014 14 258256 0 8587632768 100 django.utils.safestring.SafeText
2 45 1 150840 0 8587783608 100 dict of 0x390f0c0
3 281 4 78680 0 8587862288 100 dict of django.db.models.base.ModelState
4 326 5 75824 0 8587938112 100 list
5 47 1 49256 0 8587987368 100 dict of 0x38caad0
6 47 1 49256 0 8588036624 100 dict of 0x39ae590
7 46 1 48208 0 8588084832 100 dict of 0x3858ab0
8 46 1 48208 0 8588133040 100 dict of 0x38b8450
9 46 1 48208 0 8588181248 100 dict of 0x3973fe0
<164 more rows. Type e.g. '_.more' to view.>
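For reference, a snapshot like the one above comes from a minimal Guppy session along these lines (placement is up to you; the point is to reset the baseline before triggering the faulty view):

from guppy import hpy

hp = hpy()
hp.setrelheap()    # count only objects allocated after this point
# ... trigger the faulty request here ...
print(hp.heap())   # partition of live objects by type, largest first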
After a day of searching I found my answer.
While investigating I checked statistics on my DB and saw that one table was 800 MB but had only 900 rows. This table contains a TextField without a max length. Somehow a huge amount of data got inserted into one text field, and this row was slowing everything down on every page using this model.
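Two follow-ups that may help anyone in the same spot, sketched with a hypothetical field name 'body' standing in for the unbounded TextField: find the oversized row, and keep the big column out of list pages.

from django.db.models.functions import Length

# Locate the rows whose text field is unexpectedly huge
largest = (
    Records.objects
    .annotate(body_len=Length('body'))
    .order_by('-body_len')
    .values('pk', 'body_len')[:5]
)

# Keep the large column out of list views so templates can't drag it in
record = Records.objects.filter(time__gte=time).defer('body')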