Redmine: time spent report including subtasks

Is there a Redmine plugin (or maybe core functionality which I don't know how to use) which allows me to get a time entries report which includes the subtask durations?
E.g., there are tasks in a hierarchy like the ASCII art below:
#1 - new module (time spent 1h)
 |
 \-> #2 - prepare dev server (time spent 2h)
 |
 \-> #3 - prepare images (time spent 0.5h)
The time_entries report should result in something like this:
#1 - new module | 1:00
#2 - prepare dev server | 2:00
#3 - prepare images | 0:30
If I have dozens of tasks, the entries above don't tell me anything about them. E.g., I don't know that task #1 as a whole took 3.5h, and I can't tell where #2 and #3 belong (because the report doesn't link them to #1).
Is there some ready-made code I could use that gives a hierarchical report, with the total time spent for each parent entry?
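One way to get the roll-up without a plugin is to script it against Redmine's REST API (assuming the API is enabled in the administration settings and you have an API key). A minimal sketch in Python; the server URL and key are placeholders, and pagination is left out for brevity:

import json
import urllib.request
from collections import defaultdict

BASE = "https://redmine.example.com"  # placeholder server URL
KEY = "your-api-key"                  # placeholder REST API key

def get(path):
    req = urllib.request.Request(BASE + path,
                                 headers={"X-Redmine-API-Key": KEY})
    return json.load(urllib.request.urlopen(req))

# Map each issue to its parent; subtasks carry a "parent" field.
parent = {}
for issue in get("/issues.json?status_id=*&limit=100")["issues"]:
    if "parent" in issue:
        parent[issue["id"]] = issue["parent"]["id"]

# Sum the logged hours per issue.
hours = defaultdict(float)
for entry in get("/time_entries.json?limit=100")["time_entries"]:
    hours[entry["issue"]["id"]] += entry["hours"]

# Roll every issue's hours up through its ancestors.
totals = defaultdict(float)
for issue_id, spent in hours.items():
    node = issue_id
    while node is not None:
        totals[node] += spent
        node = parent.get(node)

for issue_id in sorted(totals):
    print(f"#{issue_id}: {totals[issue_id]:.2f}h")

With the example hierarchy above, this would print 3.50h for #1, since #2 and #3 roll up into it.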


ray.tune.run stuck before starting training indefinitely

I use ray.tune.run to train a customized gym model with RLlib, after calling ray.init(num_cpus=2).
I don't need to search hyperparameters, so I give the parameters in the model config specific values. But tune.run still seems to be stuck in parameter searching indefinitely, repeatedly printing the following status to the terminal:
== Status ==
Current time: 2021-12-15 10:14:37 (running for 00:22:40.96)
Memory usage on this node: 17.1/62.7 GiB
Using FIFO scheduling algorithm.
Resources requested: 1.0/2 CPUs, 1.0/1 GPUs, 0.0/36.44 GiB heap, 0.0/18.22 GiB objects
Result logdir: /
Number of trials: 1/1 (1 RUNNING)
+------------------------+----------+-----------------------+
| Trial name             | status   | loc                   |
|------------------------+----------+-----------------------|
| PPO_sim-v0_a3b49_00000 | RUNNING  | 192.168.11.86:1869965 |
+------------------------+----------+-----------------------+
I tried plain ray.init() instead, and training starts, but it returns a nan reward for all iterations.
Then, without changing anything, I called ray.tune.run again, and it was once more stuck in searching, with the status output above.
I have no idea why this happens, even though I don't specify any hyperparameter search.
Can anyone help me with this problem?
I'd appreciate any help.
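For reference, here is a minimal sketch of the setup being described: RLlib's PPO trainer on a custom env registered as "sim-v0" (both taken from the trial name in the status output), with fixed config values rather than a search space. The stop condition and worker count are assumptions:

import ray
from ray import tune

ray.init(num_cpus=2)
tune.run(
    "PPO",                            # RLlib PPO, as in the trial name
    stop={"training_iteration": 10},  # assumed stop condition
    config={
        "env": "sim-v0",   # custom gym env, assumed registered elsewhere
        "num_workers": 0,  # assumption; matches the 1.0/2 CPUs requested
        "num_gpus": 1,     # matches the 1.0/1 GPUs requested
        "lr": 1e-4,        # a fixed value, not a search space, so no
                           # hyperparameter search should take place
    },
)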

Splunk - searching for concurrent runs of processes

I want to check whether there are multiple instances of a job/process running.
For example, my Splunk search:
index=abc <jobname> | stats earliest(_time) AS earliest_time, latest(_time) AS latest_time count by source | convert ctime(earliest_time), ctime(latest_time) | sort - count
returns :
source  earliest_time        latest_time          count
logA    06/06/2020 15:24:09  06/06/2020 15:24:59  1
logB    06/06/2020 15:24:24  06/06/2020 15:25:12  2
In the above, logB starts before logA's completion time, which indicates a concurrent run of the process. I would like to generate a list of all such jobs, if possible; any help is appreciated.
Thank you.
There is a built-in Splunk command for this: concurrency. It requires an event start time and a duration, which we can calculate here as the difference between the earliest and latest times. The command creates a new field called concurrency, which is a measurement representing the total number of events in progress at the time each particular event started, including the event itself.
index=abc <jobname> | stats earliest(_time) as et latest(_time) as lt count by source | eval duration=lt-et | concurrency start=et duration=duration | where concurrency>1
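To illustrate what concurrency computes, here is the same calculation in plain Python over the two example jobs (start offsets in seconds relative to 15:24:00, with durations taken from the earliest/latest times above):

# (source, start offset in seconds, duration in seconds)
events = [("logA", 9, 50), ("logB", 24, 48)]
for name, start, dur in events:
    # count events (including this one) already running at this event's start
    overlap = sum(1 for _, s, d in events if s <= start < s + d)
    print(name, "concurrency =", overlap)
# prints: logA concurrency = 1, logB concurrency = 2 (logB overlaps logA)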
Docs for concurrency can be found at https://docs.splunk.com/Documentation/Splunk/8.0.4/SearchReference/Concurrency

Splice Machine: TIMESTAMPADD returns value that is 1 hour off 10-15% of the time

Running into a weird bug when executing TIMESTAMPADD queries where the result is not always accurate.
Example #1 (incorrect):
TIMESTAMPADD(SQL_TSI_SECOND, 1214870399, TIMESTAMP('1970-01-01 00:00:00.000Z'))
Returns: 2008-07-01 00:59:59.0
It should be: 2008-06-30 23:59:59.0
Example #2 (correct):
TIMESTAMPADD(SQL_TSI_SECOND, 1167609600, TIMESTAMP('1970-01-01 00:00:00.000Z'))
Returns: 2007-01-01 00:00:00.0 which is correct.
It happens with roughly 10-15% of my queries (I do a lot of unixtime-to-timestamp conversion when querying my tables), and the result is always off by the same 1 hour.
Thanks
Edit with additional information:
Other example unixtimes that show up incorrectly if I try to convert:
1270508410 to 2010-04-06 00:00:10.0 which should be 2010-04-05 23:00:10.0
1304722810 to 2011-05-07 00:00:10.0 which should be 2011-05-06 23:00:10.0
1340221507 to 2012-06-20 20:45:07.0 which should be 2012-06-20 19:45:07.0
This last one is just to show that it's not only timestamps near midnight that are affected.
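One pattern worth noting before the resolution below: every incorrect value is exactly the given unixtime rendered in a daylight-saving local time, while the correct winter example carries no offset. A quick Python check, using Europe/London purely as an example DST-observing zone (the server's actual timezone is unknown, and the authoritative explanation is whatever the ticket below tracks):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

london = ZoneInfo("Europe/London")  # example DST zone, an assumption
for ts in (1214870399, 1167609600, 1270508410):
    utc = datetime.fromtimestamp(ts, tz=timezone.utc)
    local = datetime.fromtimestamp(ts, tz=london)
    print(ts, "UTC:", utc, "| local:", local)
# 1214870399 -> UTC 2008-06-30 23:59:59, local 2008-07-01 00:59:59 (BST, +1h)
# 1167609600 -> UTC 2007-01-01 00:00:00, local 2007-01-01 00:00:00 (GMT, +0h)
# 1270508410 -> UTC 2010-04-05 23:00:10, local 2010-04-06 00:00:10 (BST, +1h)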
It turns out Splice Machine has an open issue about this problem.
For reference when reaching out to Splice Machine support: ticket number DB-4937.

Log Slow Pages Taking Longer Than [n] Seconds In ColdFusion with Details

(ACF9)
Unless there's an option I'm missing, the "Log Slow Pages Taking Longer Than [n] Seconds" setting isn't useful for front-controller based sites (e.g., Model-Glue, FW/1, Fusebox, Mach-II, etc.).
For instance, in a Mura/Framework-One site, I just end up with:
"Warning","jrpp-186","04/25/13","15:26:36",,"Thread: jrpp-186, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 11 seconds, exceeding the 10 second warning limit"
"Warning","jrpp-196","04/25/13","15:27:11",,"Thread: jrpp-196, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 59 seconds, exceeding the 10 second warning limit"
"Warning","jrpp-214","04/25/13","15:28:56",,"Thread: jrpp-214, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 32 seconds, exceeding the 10 second warning limit"
"Warning","jrpp-134","04/25/13","15:31:53",,"Thread: jrpp-134, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 11 seconds, exceeding the 10 second warning limit"
Is there some way to get query string or post details in there, or is there another way to get what I'm after?
You can easily add some logging to your application for any requests that take longer than 10 seconds.
In onRequestStart():
request.startTime = getTickCount();  // record the request start time in milliseconds
In onRequestEnd():
request.endTime = getTickCount();
// log the query string for any request that ran longer than 10 seconds
if (request.endTime - request.startTime > 10000) {
    writeLog(cgi.QUERY_STRING);
}
If you're writing a Mach-II, FW/1, or ColdBox application, it's trivial to write a "plugin" that runs on every request, captures the URL or FORM variables passed in, and stores them in a simple database table or log file. (You can even capture session.userID, the IP address, or whatever else you may need.) If you're capturing to a database table, you'll probably want to skip indexes so the inserts stay fast, and you'll need to rotate the table so you're not doing high-speed inserts into a table with tens of millions of rows.
In Mach-II, you'd write a plugin.
In FW/1, you'd put a call to a controller which handles this into setupRequest() in your application.cfc.
In ColdBox, you'd write an interceptor.
The idea is that the log just tells you which pages are consistently slow so you can do your own performance tuning.
For a start, turn on debugging for further details.

Increasing SQLite SELECT performance

I have a program that does some math in an SQL query. There are hundreds of thousands of rows (device measurements) in an SQLite table. Using this query, the application breaks the measurements into groups of, for example, 10,000 records, and calculates the average for each group, returning one average value per group.
The query looks like this:
SELECT strftime('%s', MIN(Stamp)) AS DateTimeStamp,
       AVG(P) AS MeasuredValue,
       ((100 * (strftime('%s', [Stamp]) - 1334580095)) /
        (1336504574 - 1334580095)) AS SubIntervalNumber
FROM LogValues
WHERE ((DeviceID = 1) AND (Stamp >= datetime(1334580095, 'unixepoch')) AND
       (Stamp <= datetime(1336504574, 'unixepoch')))
GROUP BY ((100 * (strftime('%s', [Stamp]) - 1334580095)) /
          (1336504574 - 1334580095))
ORDER BY MIN(Stamp)
The numbers in this query are substituted with actual values by my application.
I don't know if I can optimize this query any further (if anyone could help me do so, I'd really appreciate it).
The query can be executed using the SQLite command-line shell (sqlite3.exe). On my Intel Core i5 machine it takes 4 seconds to complete (there are 100,000 records in the database being processed).
Now, if I write a C program using the sqlite.h C interface, I wait 14 seconds for exactly the same query to complete. The program "waits" those 14 seconds on the first sqlite3_step() call (all subsequent sqlite3_step() calls execute immediately).
From the SQLite download page I downloaded the command-line shell's source code and built it with Visual Studio 2008. I ran it and executed the query: again, 14 seconds.
So why does the prebuilt command-line tool downloaded from the SQLite website take only 4 seconds, while the same tool built by me takes 4 times as long?
I am running 64-bit Windows. The prebuilt tool is an x86 process. It also does not seem to be multicore-optimized: in Task Manager during query execution I see only one busy core, for both my build and the prebuilt SQLite shell.
Is there any way I could make my C program execute this query as fast as the prebuilt command-line tool does?
Thanks!
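Since the comparison above hinges on timing the identical query in differently built binaries, one more easy baseline is Python's bundled (precompiled) sqlite3 module: if it is also fast, that points toward build settings in the Visual Studio project (for example, a Debug rather than a Release build) instead of the query itself. This is only a diagnostic sketch, and the database file name is a placeholder:

import sqlite3
import time

QUERY = """
SELECT strftime('%s', MIN(Stamp)) AS DateTimeStamp,
       AVG(P) AS MeasuredValue,
       ((100 * (strftime('%s', Stamp) - 1334580095)) /
        (1336504574 - 1334580095)) AS SubIntervalNumber
FROM LogValues
WHERE DeviceID = 1
  AND Stamp >= datetime(1334580095, 'unixepoch')
  AND Stamp <= datetime(1336504574, 'unixepoch')
GROUP BY ((100 * (strftime('%s', Stamp) - 1334580095)) /
          (1336504574 - 1334580095))
ORDER BY MIN(Stamp)
"""

conn = sqlite3.connect("logvalues.db")  # placeholder database file name
t0 = time.perf_counter()
rows = conn.execute(QUERY).fetchall()
print(f"{len(rows)} groups in {time.perf_counter() - t0:.2f} s")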