I don't want to see the number of days left before something is Due.
ID Age   Due Description Urg
 1 33min 9d  Do Stuff    8.33
I want to see the actual date that it's due, like below.
ID Age   Due        Description Urg
 1 33min 2022-04-27 Do Stuff    8.33
Is there an easy way to modify ~/.taskrc?
Found a solution. Use:
$ task list
Result:
ID Age   Project Tags Due        Description Urg
 1 33min              2022-04-27 Do Stuff    8.33
Reference: https://taskwarrior.org/docs/report.html#modifiable
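If you would rather keep the default report instead of switching to task list, the linked page covers overriding report definitions in ~/.taskrc. A hedged sketch, assuming the stock column list for the next report (check yours first with task show report.next.columns); the only change is swapping due.relative for due, which prints the full date using your dateformat setting:
report.next.columns=id,start.age,entry.age,depends,priority,project,tags,recur,scheduled.countdown,due,until.remaining,description,urgency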
table production

code  part  qty  process_id
1     21    10   10
1     22    12   10
2     22    15   10
1     21    10   12
1     22    12   12
I have to extract data per process: every process has multiple parts, but I can't take every part's data, so I have to use distinct on code to get a process-wise summation of qty.
How can I get data like this in PostgreSQL or in Django?
process_id  qty
10          27
12          12
I tried it this way:
Production.objects.values('process').distinct('code').annotate(total_qty=Sum('quantity'))
The following query gets your desired result, but from your snippet I'm not sure this is the logic you had in mind. If you add more detail I can refine the answer.
SELECT process_id, SUM(qty) qty
FROM production
WHERE part=22
GROUP BY process_id
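If you want the same aggregation through the Django ORM instead of raw SQL, here is a hedged sketch along the same lines (the field names part, process_id and qty mirror the table above; your model seems to use process and quantity, so rename accordingly):
from django.db.models import Sum

totals = (
    Production.objects
    .filter(part=22)                  # same filter as WHERE part = 22
    .values('process_id')             # GROUP BY process_id
    .annotate(total_qty=Sum('qty'))   # SUM(qty) per process
)
# -> [{'process_id': 10, 'total_qty': 27}, {'process_id': 12, 'total_qty': 12}]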
I am relatively new to PowerBI/PowerQuery/DAX and have become stuck on the following problem. I am unsure which road to go down to get the best outcome and would appreciate any help.
My data table is connected to a time tracking application. A User enters a time entry every time they complete a task. The task can be either a Project task or an Admin task. When selecting either of these, there are multiple sub-categories beneath each, each with its own ID. This translates to my table as the following:
User ProjectID AdminID Hours Date
John 1 2 01/01/22
John 11 1 01/01/22
John 4 1 01/01/22
John 12 3 01/01/22
John 13 1 01/01/22
Pete 7 1 01/01/22
Pete 2 4 01/01/22
Pete 3 2 01/01/22
Mike 1 6 01/01/22
Mike 9 1 01/01/22
Mike 10 1 01/01/22
My objective is, for each Date in the table, to calculate the total hours spent doing either Project tasks or Admin tasks. I am not concerned about the specific breakdown (i.e. the sum per unique ID), only the overall total. The above example covers just one day; in reality my data covers multiple years. My expected output will look like this:
User  TotalProject  TotalAdmin  Date
John  3             5           01/01/22
John  3             4           01/02/22
John  5             2           01/03/22
Pete  5             1           01/01/22
Pete  1             8           01/02/22
Pete  6             2           01/03/22
Mike  6             2           01/01/22
Mike  6             1           01/02/22
Mike  7             2           01/03/22
I am unsure of the best method to achieve this: some kind of column added to the table through PowerQuery, or a calculated column using DAX? And if so, what would the SUM syntax look like?
Very willing to learn, so any tips would be greatly appreciated!
For your sample input, just create 2 measures.
Total Admin = CALCULATE( SUM('Table'[Hours]), NOT(ISBLANK('Table'[AdminID])))
Total Project = CALCULATE( SUM('Table'[Hours]), NOT(ISBLANK('Table'[ProjectID])))
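Drop both measures into a matrix (or table) visual with User and Date on the rows and you should get the layout of your expected output. If you really need a physical table rather than measures, a hedged DAX calculated-table sketch using the same ISBLANK logic ('Table' stands in for whatever your time-tracking table is actually called) would be:
Totals =
SUMMARIZE(
    'Table',
    'Table'[User],
    'Table'[Date],
    "TotalProject", CALCULATE( SUM('Table'[Hours]), NOT( ISBLANK('Table'[ProjectID]) ) ),
    "TotalAdmin", CALCULATE( SUM('Table'[Hours]), NOT( ISBLANK('Table'[AdminID]) ) )
)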
I gathered some NBA players' data of their triple-double games, and would like to find out who got the most explosive data on average.
The source is "Basketball Reference - Player Game Finder - Triple Doubles". (Sorry that I can't post the direct URL because of my lack of reputation.)
So I generated a table summarizing descriptive statistics (e.g. count and mean) for several variables (pts trb ast stl blk) using:
tabstat pts trb ast stl blk, statistics(count mean) format(%9.1f) by(player)
What I get is the following table:
tabstat result:
How can I tell Stata to filter the players by count >= 10 (those who have recorded 10 or more triple-doubles), then sort the table by pts and get:
Ideal result:
Going by the above, I would say Michael Jordan and James Harden are the top 2 most explosive triple-double players and Darrell Walker is the most economical one.
Do study https://stackoverflow.com/help/mcve on how to present an example other people can work with straight away. Also, avoiding sports-specific jargon that won't be universally comprehensible and focusing more on the general programming problem would help. Fortunately, what you want seems clear nevertheless.
To do this you need to create a variable defining the order desired in advance of your tabstat call. To get it (value) labelled as you wish, use labmask (search labmask then download from the Stata Journal location given).
Here is some technique.
sysuse auto, clear
egen mean = mean(weight), by(rep78)
egen count = count(weight), by(rep78)
egen group = group(mean rep78) if count >= 5
replace group = -group
labmask group, values(rep78)
label var group "`: var label rep78'"
tabstat mpg weight , by(group) s(count mean) format(%1.0f)
Summary statistics: N, mean
by categories of: group (Repair Record 1978)
group | mpg weight
-------+--------------------
2 | 8 8
| 19 3354
-------+--------------------
3 | 30 30
| 19 3299
-------+--------------------
4 | 18 18
| 22 2870
-------+--------------------
5 | 11 11
| 27 2323
-------+--------------------
Total | 67 67
| 21 3030
----------------------------
Key details:
The grouping variable is based not only on the means you want to sort on but also on the original grouping variable, just in case there are ties on the means.
To get ordering from highest mean downwards, the grouping variable must be negated.
tabstat doesn't show variable labels in the body of the table. (Usually there wouldn't be enough space for them.)
I've tried to Google and read around this problem, but I can't seem to find an adequate solution. I'm hoping someone here can help me. I'm sorry if it's too simple but I would appreciate any advice or help.
I'm working with a longitudinal dataset and I would like to assign an encounter number to each person (ID) who may have had one or more interactions with our laboratory (accession). The dataset looks something like this, and I would like to create a new variable (encounter) that numbers each unique encounter for each individual sequentially.
ID accession encounter
----------------------------------
1 1234 1
1 1234 1
1 1235 2
1 1236 3
1 1236 3
2 1000 1
2 1001 2
2 1001 2
3 1111 1
3 1112 2
4 1001 1
4 1001 1
I've tried using first.variable statements such as:
data new; set old;
by id accession;
if first.id & first.accession then encounter=1;
else encounter+1;
run;
I haven't been successful because it won't retain the same encounter number if both the id and accession number remain the same.
Thank you in advance for helping to point me in the right direction.
You're close. At the first record of each ID you want to reset the counter to 0, and at the first record of each accession you want to increment it.
data new; set old;
by id accession;
retain encounter;
if first.id then encounter = 0;
if first.accession then encounter + 1;
run;
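As a side note on the design: the sum statement encounter + 1 implicitly retains encounter (and initialises it to 0), so the explicit RETAIN is optional. This hedged variant behaves the same way:
data new; set old;
by id accession;
if first.id then encounter = 0;        /* reset for each new ID */
if first.accession then encounter + 1; /* bump at each new accession */
run;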
I am updating my RRD file with some counts...
For example:
time   value
12:00  120
12:05  135
12:10  154
12:20  144
12:25  0
12:30  23
13:35  36
Here my RRD updates with the following logic:
((current value) - (previous value)) / ((current time) - (previous time))
e.g. (135 - 120) / 5 = 3
But my problem is that when a 0 comes in, the reading becomes negative:
(0 - 144) / 5
Here a "0" value only occurs on a system failure (at the source the data is fetched from). This reading must not show up in the graph.
How can I configure it so that when a 0 comes in, it does not update the RRD graph (skip the (0 - 144)/5 reading), and the next reading is taken as (23 - 0)/5 rather than (23 - 144)/10?
When specifying the data sources at RRD creation time, you can specify which range of values is acceptable.
DS:data_source:GAUGE:10:1:U will only accept values of 1 or above.
So if you get a 0 during an update, RRD will store it as unknown, and I assume it can then be left out of the graph.
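For reference, those min/max fields live in the DS line passed to rrdtool create, so a minimal hedged sketch (the filename, step, heartbeat and RRA here are placeholders, not taken from your setup) would be:
rrdtool create example.rrd \
    --step 300 \
    DS:data_source:GAUGE:600:1:U \
    RRA:AVERAGE:0.5:1:288
Values outside the 1:U range are then stored as unknown for that interval.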