I'm looking for a visual with the following specifications:
Job usage timeline (bar-graph-like)
Jobs can be part of a group
When two or more jobs of the same group overlap in time (by timestamp), the overlapping job should be displayed on the next line
Timestamps can be precise to days, hours, minutes, and seconds
Tooltips (hover-over quick info)
Here are the ones I've tried so far and why they didn't work:
R-Script: hc_vistime - (Couldn't be displayed)
R-Script: gg_vistime - (No tooltips)
Gantt - (Timestamps were only precise to the day)
Craydec Timelines - (Overlaps would not start a new line)
Any suggestions? Thanks in advance!
I have an AWS-managed OpenSearch cluster, and I'm trying to implement anomaly detection. I'm pretty new to all these tools. Today I stumbled upon this:
Inside the "Live anomalies" chart I can see a red bar indicating some sort of anomaly at that time. But below, inside the "Anomaly overview" box, I see zero anomalies.
The first chart is for the last 600 minutes. The second chart is for the last 7 days. I don't understand why the "Anomaly overview" chart doesn't show the same anomaly that the "Live anomalies" chart does.
Can I configure many different interval expressions for the “retry time cycle”? For example,
something like this: “R6/PT10S, R2/PT30M” - 6 retries every 10 seconds and then 2 retries every 30 minutes.
Thanks in Advance,
Wladi
The job executor section of the Camunda user guide only shows an example of comma-separated intervals without repeats.
Looking at the code, it seems a repeat is only recognized if there is a single interval configured:
https://github.com/camunda/camunda-bpm-platform/blob/7.13.0/engine/src/main/java/org/camunda/bpm/engine/impl/util/ParseUtil.java#L88
The lookup of which interval to apply also does not consider any repeats:
https://github.com/camunda/camunda-bpm-platform/blob/7.13.0/engine/src/main/java/org/camunda/bpm/engine/impl/cmd/DefaultJobRetryCmd.java#L113
This does sound like a useful feature, though; you might want to open an issue in the Camunda issue tracker.
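For illustration, here is a rough sketch of that parsing rule in Python (hypothetical; the real logic is the Java code linked above):
import re

def parse_retry_intervals(retry_time_cycle):
    # Split the configured value on commas, as the engine does.
    parts = [p.strip() for p in retry_time_cycle.split(",")]
    if len(parts) == 1:
        # Only a single configured interval may carry a repeat prefix "Rn/".
        m = re.match(r"^R(\d+)/(.+)$", parts[0])
        if m:
            return int(m.group(1)), [m.group(2)]  # e.g. "R6/PT10S" -> 6 retries
    # With multiple intervals, repeats are not recognized: each part is
    # treated as a plain interval, one retry per interval.
    return len(parts), parts

print(parse_retry_intervals("R6/PT10S"))            # (6, ['PT10S'])
print(parse_retry_intervals("PT10S,PT30M,PT1H"))    # (3, ['PT10S', 'PT30M', 'PT1H'])
print(parse_retry_intervals("R6/PT10S, R2/PT30M"))  # repeats silently ignored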
While training my predictor I came across this error, and I'm stuck on how to fix it.
I have two data series: a "Target time-series data" with 9234 rows and a single "item_id", and a "Related time-series data" with the same number of rows, as I only have a single ID.
I'm setting the data with a forecast window of 180 days, which is exactly the difference between the second and the first number that appears in the error: 9414 - 9234 = 180.
We were unable to train your predictor.
Please ensure there are no missing values for any items in the related time series, All items need data until 2020-03-15 00:00:00.0. For example, following items have missing data: item: brl only has 9234/9414 required datapoints starting 1994-06-07 00:00:00.0, please refer to documentation for additional details.
Since my data has no missing values and is on a daily basis, why is it returning this error?
My data starts on 1994-06-07 and ends on 2019-09-17. Why should I have 9414 data points rather than 9234?
Should I remove 180 days from my "Target time-series data"?
The future values of the related time-series data must be known.
Example of a good related time series: you know the past and future days on which marketing has sent or will send email newsletters promoting the product you're forecasting. You can use this data as a related time series.
Example of a bad related time series: you notice that Google searches for your brand correlate with sales of your product, so you want to use them as a related time series. Since you don't know how many searches will occur in the future, you can't use this as a related time series.
In your case, you have TARGET_TIME_SERIES data for 9234 days and you want to predict demand for the next 180 days. That means your RELATED_TIME_SERIES data should cover 9234 + 180 = 9414 days.
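A quick sanity check of the arithmetic (a Python sketch using only the dates from the question):
from datetime import date, timedelta

start = date(1994, 6, 7)   # first data point in the question
end = date(2019, 9, 17)    # last data point in the question
horizon = 180              # forecast window in days

target_points = (end - start).days + 1  # inclusive of both endpoints
print(target_points)                    # 9234, as in the error message
print(target_points + horizon)          # 9414 required related datapoints
print(end + timedelta(days=horizon))    # 2020-03-15, the date the error asks for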
Edit: I have not tested this with Amazon's forecasting product. I'm basing my answer on working with Facebook Prophet (which is one of the models Amazon Forecast uses). Please let me know if my solution worked.
I am having real problems getting the AWS IoT Analytics Delta Window (docs) to work.
I am trying to set it up so that a query runs on a schedule and fetches only the last 1 hour of data. According to the docs, the schedule feature can be used to run the query using a cron expression (in my case every hour), and the delta window should restrict my query to only include records that fall in the specified time window (in my case the last hour).
The SQL query I am running is simply SELECT * FROM dev_iot_analytics_datastore, and if I don't include any delta window I get the records as expected. Unfortunately, when I include a delta expression I get nothing (ever). I have left the data accumulating for about 10 days now, so there are a couple of million records in the datastore. Since I was unsure what the optimal format would be, I have included the following temporal fields in the entries:
datetime : 2019-05-15T01:29:26.509
(A string formatted using ISO Local Date Time)
timestamp_sec : 1557883766
(A unix epoch expressed in seconds)
timestamp_milli : 1557883766509
(A unix epoch expressed in milliseconds)
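For reference, all three fields encode the same instant; a quick Python check (assuming the values are in UTC):
from datetime import datetime, timezone

dt = datetime(2019, 5, 15, 1, 29, 26, 509000, tzinfo=timezone.utc)
print(dt.isoformat())              # 2019-05-15T01:29:26.509000+00:00
print(int(dt.timestamp()))         # 1557883766     (timestamp_sec)
print(int(dt.timestamp() * 1000))  # 1557883766509  (timestamp_milli)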
There is also a value automatically added by AWS called __dt, which uses the same format as my datetime except that it seems to be accurate only to within 1 day, i.e. all values entered within a given day have the same value (e.g. 2019-05-15 00:00:00.00).
I have tried a range of expressions (including the suggested AWS expression) from both standard SQL and Presto, as I'm not sure which one is being used for this query. I know they use a subset of Presto for the analytics, so it makes sense that they would use it for the delta window too, but the docs simply say '... any valid SQL expression'.
Expressions I have tried so far with no luck:
from_unixtime(timestamp_sec)
from_unixtime(timestamp_milli)
cast(from_unixtime(unixtime_sec) as date)
cast(from_unixtime(unixtime_milli) as date)
date_format(from_unixtime(timestamp_sec), '%Y-%m-%dT%h:%i:%s')
date_format(from_unixtime(timestamp_milli), '%Y-%m-%dT%h:%i:%s')
from_iso8601_timestamp(datetime)
What are the offset and time expression parameters that you are using?
Since delta windows are effectively filters inserted into your SQL, you can troubleshoot them by manually inserting the filter expression into your data set's query.
Namely, applying a delta window filter with a -3 minute (negative) offset and a 'from_unixtime(my_timestamp)' time expression to a 'SELECT my_field FROM my_datastore' query translates to the equivalent query:
SELECT my_field FROM
(SELECT * FROM "my_datastore" WHERE
(__dt between date_trunc('day', iota_latest_succeeded_schedule_time() - interval '1' day)
and date_trunc('day', iota_current_schedule_time() + interval '1' day)) AND
iota_latest_succeeded_schedule_time() - interval '3' minute < from_unixtime(my_timestamp) AND
from_unixtime(my_timestamp) <= iota_current_schedule_time() - interval '3' minute)
Try a similar query (with no delta time filter) with the correct values for offset and time expression and see what you get. The (__dt between ...) clause is just an optimization for limiting the scanned partitions; you can remove it for the purposes of troubleshooting.
Please try the following:
Set query to SELECT * FROM dev_iot_analytics_datastore
Data selection filter:
Data selection window: Delta time
Offset: -1 Hours
Timestamp expression: from_unixtime(timestamp_sec)
Wait for dataset content to run for a bit, say 15 minutes or more.
Check contents
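If you create the data set through the API rather than the console, the same settings map onto the CreateDataset call. A minimal boto3 sketch (the dataset name, action name, and hourly cron schedule are assumptions for illustration):
import boto3

client = boto3.client("iotanalytics")

client.create_dataset(
    datasetName="dev_iot_analytics_dataset",  # hypothetical name
    actions=[{
        "actionName": "hourly_query",         # hypothetical name
        "queryAction": {
            "sqlQuery": "SELECT * FROM dev_iot_analytics_datastore",
            "filters": [{
                "deltaTime": {
                    "offsetSeconds": -3600,   # Offset: -1 hours
                    "timeExpression": "from_unixtime(timestamp_sec)",
                },
            }],
        },
    }],
    triggers=[{"schedule": {"expression": "cron(0 * * * ? *)"}}],  # every hour
)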
After several weeks of testing and trying all the suggestions in this post, along with many more, it appears that the extremely technical answer was to 'switch it off and back on'. I deleted the whole analytics stack, rebuilt everything with different names, and it now seems to be working!
It's important to note that even though I have flagged this as the correct answer, since it was the actual resolution, both the answers provided by #Populus and #Roger would have been correct had my deployment been functioning as expected.
I found by chance that changing SELECT * FROM datastore to SELECT id1, id2, ... FROM datastore solved the problem.
After hours of searching the web (including SO), I am requesting advice from the community. RRD seems to be the right tool for this, but I could not get a straight answer until now.
My question is: is it possible to get RRD to output a graph for the day that averages data from the past year?
In other words, I want the "view span" to be one day long, but the "data span" to extend over the last 12 months, so that the value at 6 pm is computed as the average of ALL traffic measured at 6 pm over the last 12 months.
Any hints or instructions welcome!
There is no direct way to create such a graph. In theory it would be possible to build such a chart using multiple DEF lines together with the SHIFT operation ... you would have to use a program to generate the necessary command line, though.
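As a rough sketch of such a generator (Python; the RRD file name 'traffic.rrd' and data source name 'ds0' are assumptions, and fetching 365 one-day windows will be slow):
import subprocess
import time

RRD_FILE = "traffic.rrd"  # hypothetical file
DS_NAME = "ds0"           # hypothetical data source
DAYS = 365                # how far back to average
DAY = 86400

now = int(time.time())
view_start, view_end = now - DAY, now  # one-day view span

args = ["rrdtool", "graph", "daily_average.png",
        "--start", str(view_start), "--end", str(view_end)]

vnames = []
for i in range(1, DAYS + 1):
    shift = i * DAY
    # Fetch the same 24h window from i days ago...
    args.append("DEF:d%d=%s:%s:AVERAGE:start=%d:end=%d"
                % (i, RRD_FILE, DS_NAME, view_start - shift, view_end - shift))
    # ...and SHIFT it forward so it lines up with the current day.
    args.append("SHIFT:d%d:%d" % (i, shift))
    vnames.append("d%d" % i)

# Average all shifted series; the RPN AVG operator ignores unknown values.
args.append("CDEF:avg=%s,%d,AVG" % (",".join(vnames), DAYS))
args.append("LINE1:avg#0000FF:12-month average")

subprocess.run(args, check=True)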