I am updating my RRD file with some counts...
For example:
time value
12:00 120
12:05 135
12:10 154
12:20 144
12:25 0
12:30 23
12:35 36
Here is the logic my RRD update follows:
((current value) - (previous value)) / ((current time) - (previous time))
e.g. (135 - 120) / 5 = 3
But my problem is that when a 0 arrives, the computed reading goes negative:
(0 - 144) / 5
The 0 value only occurs on a system failure (at the source the data is fetched from), so this reading must not be displayed on the graph.
How can I configure things so that when a 0 arrives it is not drawn on the RRD graph (skip the (0 - 144)/5 reading), and the next reading is computed as (23 - 0)/5 rather than (23 - 144)/10?
When specifying the data sources at RRD creation time, you can declare which range of values is acceptable.
DS:data_source:GAUGE:10:1:U will only accept values of 1 or above: the two fields after the heartbeat are the minimum and maximum.
So if a 0 comes in during an update, rrdtool replaces it with UNKNOWN, and the graph can then be made to skip the unknown value.
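To illustrate the desired behaviour outside rrdtool, here is a minimal plain-Python sketch (an illustration of the logic from the question, not the rrdtool API): readings below a minimum are treated as failures and not plotted, but they still become the baseline for the next step, giving (23 - 0)/5 rather than (23 - 144)/10.

```python
# Plain-Python illustration of the behaviour the question asks for.
# Readings below MIN_VALID come from a system failure, so no rate is
# plotted for that step, but the reading still becomes the baseline
# for the next step.
MIN_VALID = 1
samples = [(0, 120), (5, 135), (10, 154), (20, 144), (25, 0), (30, 23)]

rates = []
prev_t, prev_v = samples[0]
for t, v in samples[1:]:
    if v < MIN_VALID:
        rates.append(None)                     # failed reading: skip plotting
    else:
        rates.append((v - prev_v) / (t - prev_t))
    prev_t, prev_v = t, v                      # still update the baseline
```

Note this mirrors the (23 - 0)/5 behaviour asked for in the question; with a DS minimum, rrdtool itself would store the failed reading as UNKNOWN instead.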
I'm setting up an RRD database to store sensor data for 3 days in 12-hour intervals (43200 s), i.e. 6 rows in the RRA.
rrdtool create test.rrd --step 43200 --start 1562429286 DS:temp:GAUGE:86400:U:U RRA:AVERAGE:0:1:6
The database's starting time is 1562429286 (06.07.2019 - 18:08:06).
When I dump the database:
rrdtool dump test.rrd
it says (output trimmed for clarity):
2019-07-04 02:00:00 CEST / 1562198400 NaN
2019-07-04 14:00:00 CEST / 1562241600 NaN
2019-07-05 02:00:00 CEST / 1562284800 NaN
2019-07-05 14:00:00 CEST / 1562328000 NaN
2019-07-06 02:00:00 CEST / 1562371200 NaN
2019-07-06 14:00:00 CEST / 1562414400 NaN
I expected rrdtool to use the next nearest timestamp (06.07.2019 18:00) as the last entry ("starting point") instead. So why is it at 14:00?
At first this explanation (How to create a rrd file with a specific time?) made perfect sense to me for a small interval of 5 m, but in my case I cannot follow the logic when the interval is bigger (12 h).
This is because the RRA buckets are always normalised to align with the UTC timezone. This is not visible if you are using a CDP (consolidated data point) width of an hour or less, but in your case the CDPs are 12 hours wide. Your timezone is offset by 2 hours from UTC, resulting in apparent boundaries of 02:00 and 14:00 local time (if you were in London you'd see 00:00 and 12:00 as expected).
This effect is much more noticeable when you are using 1-day rollups and are located somewhere like New Zealand, where you'll see the CDP boundary appear at noon rather than at midnight.
It is not currently possible to specify a different timezone as the base for the RRA buckets (this would make the data non-portable), though I believe it has been on the RRDtool feature request list for a number of years.
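The alignment is easy to reproduce: bucket boundaries are whole multiples of the step counted from the UTC epoch. A small Python sketch using the numbers from the question:

```python
# RRA bucket boundaries are whole multiples of the step, counted from the
# UTC epoch -- not from the database start time and not from local midnight.
step = 43200                       # 12 h in seconds
start = 1562429286                 # 06.07.2019 18:08:06 CEST

# Round the start down to the nearest step boundary:
last_boundary = start // step * step
# 1562414400 = 2019-07-06 12:00 UTC = 14:00 CEST, the last row in the dump.

# The six bucket boundaries shown in the dump, oldest first:
boundaries = [last_boundary - step * i for i in range(5, -1, -1)]
```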
I am currently trying to create a report that shows how customers behave over time, but instead of doing this by date, I am doing it by customer age (number of months since they first became a customer). So using a date field isn't really an option, considering one customer may have started in Dec 2016 and another starts in Jun 2017.
What I'm trying to find is the month-over-month change in units purchased. If I was using a date field, I know that I could use
[Previous Month Total] = CALCULATE(SUM([Total Units]), PREVIOUSMONTH([FiscalDate]))
I also thought about using EARLIER(), but I don't think it would work in this case, as it requires a row context that I'm not sure I could create. Below is a simplified version of the table I'll be using.
ID Date Age Units
219 6/1/2017 0 10
219 7/1/2017 1 5
219 8/1/2017 2 4
219 9/1/2017 3 12
342 12/1/2016 0 500
342 1/1/2017 1 280
342 2/1/2017 2 325
342 3/1/2017 3 200
342 4/1/2017 4 250
342 5/1/2017 5 255
How about something like this?
PrevTotal =
VAR CurrAge = SELECTEDVALUE(Table3[Age])
RETURN CALCULATE(SUM(Table3[Units]), ALL(Table3[Date]), Table3[Age] = CurrAge - 1)
The CurrAge variable gives the Age evaluated in the current filter context. You then plug that into a filter in the CALCULATE line.
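If it helps to sanity-check the logic outside Power BI, here is a rough pandas sketch of the same previous-age lookup (an illustration using the sample table's column names, not DAX, and it assumes one row per ID and Age as in the sample):

```python
import pandas as pd

# Recreation of the sample table from the question.
df = pd.DataFrame({
    "ID":    [219, 219, 219, 219, 342, 342],
    "Age":   [0, 1, 2, 3, 0, 1],
    "Units": [10, 5, 4, 12, 500, 280],
})

# For each row, fetch the Units value at Age - 1 for the same ID --
# the same effect as filtering CALCULATE down to Age = CurrAge - 1.
df = df.sort_values(["ID", "Age"])
df["PrevTotal"] = df.groupby("ID")["Units"].shift(1)
```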
I have been trying to create a chatbot using Program AB. I created a simple AIML file and tried it, but it is not working. This is what I get:
Name = super Path = /aiml/bots/super
c:/ab
/aiml/bots
/aiml/bots/super
/aiml/bots/super/aiml
/aiml/bots/super/aimlif
/aiml/bots/super/config
/aiml/bots/super/logs
/aiml/bots/super/sets
/aiml/bots/super/maps
Preprocessor: 0 norms 0 persons 0 person2
Get Properties: /aiml/bots/super/config/properties.txt
addAIMLSets: /aiml/bots/super/sets does not exist.
addCategories: /aiml/bots/super/aiml does not exist.
AIML modified Thu Jan 01 05:30:00 IST 1970 AIMLIF modified Thu Jan 01 05:30:00 IST 1970
No deleted.aiml.csv file found
No deleted.aiml.csv file found
addCategories: /aiml/bots/super/aimlif does not exist.
Loaded 0 categories in 0.002 sec
No AIMLIF Files found. Looking for AIML
addCategories: /aiml/bots/super/aiml does not exist.
Loaded 0 categories in 0.001 sec
--> Bot super 0 completed 0 deleted 0 unfinished
Setting predicate topic to unknown
normalized = HELLO
No match.
writeCertainIFCaegories learnf.aiml size= 0
I have no answer for that.
Why is the file not loaded? I have also included the simple AIML file below. The super folder has all the built-in AIML files I downloaded with Program AB.
The AIML files were not loaded properly, and so there are no answers with which to reply to any of the questions.
Preprocessor: 0 norms 0 persons 0 person2
This means no files were processed and added.
The .aiml files were most likely not found or not loaded. Perhaps a naming issue?
The "Loaded 0 categories" lines tell you that no categories were found in your .aiml files.
Likewise, "--> Bot super 0 completed 0 deleted 0 unfinished" tells you that 0 categories were completed for your bot.
It may be that you missed a step when setting up your .aiml.csv files.
Hope this helps.
Let's say I have a Pandas DataFrame with two columns: 1) user_id, 2) steps (which contains the number of steps on the given date). Now I want to calculate the difference between the number of steps and the number of steps in the preceding measurement (measurements are guaranteed to be in order within my DataFrame).
So basically this comes down to appending an extra column to my DataFrame where the row values of this data frame match the value of the column 'steps' within this same row, minus the value of the 'steps' column in the row above (or 0 if this is the first row). To complicate things further, I want to calculate these differences per user_id, so I want to make sure that I do not subtract the steps values of two rows with different user_id's.
Does anyone have an idea how to get this done with Python 2.7 and Pandas?
So an example to illustrate this.
Example input:
user_id steps
1015 48
1015 23
1015 79
1016 10
1016 20
Desired output:
user_id steps d_steps
1015 48 0
1015 23 -25
1015 79 56
2023 10 0
2023 20 10
Your desired output shows user ids that are not in your original data, but the following does what you want; you will have to replace/fill the NaN values with 0:
In [16]:
df['d_steps'] = df.groupby('user_id').transform('diff')
df.fillna(0, inplace=True)
df
Out[16]:
user_id steps d_steps
0 1015 48 0
1 1015 23 -25
2 1015 79 56
3 1016 10 0
4 1016 20 10
Here we generate the desired column by calling transform on the groupby object and passing a string that maps to the diff method, which subtracts the previous row's value. transform applies a function and returns a series whose index is aligned to the df.
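For reference, a self-contained sketch of the same idea using the question's sample data; calling diff() on the grouped column is equivalent to the transform('diff') call above:

```python
import pandas as pd

# The question's sample data.
df = pd.DataFrame({
    "user_id": [1015, 1015, 1015, 1016, 1016],
    "steps":   [48, 23, 79, 10, 20],
})

# diff() within each user_id group subtracts the previous row's steps;
# the first row of each user has no predecessor, so fill it with 0.
df["d_steps"] = df.groupby("user_id")["steps"].diff().fillna(0)
```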
Let me preface this by saying that I am new to pandas, so I'm sorry if this question is basic or has been answered before; I looked online and couldn't find what I needed.
I have a dataframe that contains a baseball team's schedule. Some of the games have already been played, and as a result the game results have been entered in the dataframe. However, for games yet to happen there is only the time at which they are to be played (e.g. 1:35 pm).
So, I would like to convert all of the values for the games yet to happen into NaNs.
Thank you
As requested here is what the results dataframe for the Arizona Diamondbacks contains
print MLB['ARI']
0 0
1 0
2 0
3 1
4 0
5 0
6 0
7 0
8 1
9 0
10 1
...
151 3:40 pm
152 8:40 pm
153 8:10 pm
154 4:10 pm
155 4:10 pm
156 8:10 pm
157 8:10 pm
158 1:10 pm
159 9:40 pm
160 8:10 pm
161 4:10 pm
Name: ARI, Length: 162, dtype: object
Couldn't figure out a direct solution, only an iterative one:
for i in xrange(len(MLB)):
    if 'pm' in MLB['ARI'].iat[i] or 'am' in MLB['ARI'].iat[i]:
        MLB['ARI'].iat[i] = np.nan
This should work if your actual values (the 1s and 0s) are also strings. If they are numbers, try:
for i in xrange(len(MLB)):
    if type(MLB['ARI'].iat[i]) != type(1):
        MLB['ARI'].iat[i] = np.nan
The more idiomatic way to do this would be with the vectorised string methods.
http://pandas.pydata.org/pandas-docs/stable/basics.html#vectorized-string-methods
mask = MLB['ARI'].str.contains('am|pm', na=False) # boolean array; na=False keeps non-string rows out of the mask
MLB['ARI'][mask] = np.nan # the column name goes first
Create the boolean array, then use it to select the data you want to change.
Make sure that the column name goes before the masking array, otherwise you'll be acting on a copy of the data and your original dataframe won't get updated.
MLB['ARI'][mask] # returns a view on the MLB dataframe, will be updated
MLB[mask]['ARI'] # returns a copy of MLB, won't be updated
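For what it's worth, with current pandas the same masking is usually written with .loc, which sidesteps the view-versus-copy question entirely. A sketch with made-up stand-in data:

```python
import numpy as np
import pandas as pd

# Made-up stand-in for the MLB['ARI'] column: played games are 0/1
# results, unplayed games are time strings.
MLB = pd.DataFrame({"ARI": [0, 1, 0, "3:40 pm", "8:10 pm"]})

# na=False keeps the mask boolean even though the column mixes numbers
# and strings; .loc updates the original frame directly.
mask = MLB["ARI"].str.contains("am|pm", na=False)
MLB.loc[mask, "ARI"] = np.nan
```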