WMI EventLog Time interval - wmi

Hi all,
I'm trying to get event log entries using WMI and WQL.
I can get the right log with the right source name and so on, but I can't make a SELECT query that only returns results for the past 5 or 10 minutes.
Here is my query:

Here are a few snippets from a script of mine:
Dim dtmStart, dtmEnd, sDate, ...
I actually had an array of dates and I was looking for logon/off/unlock events for the entire day. So I built my complete start and end dates from that.
I won't go into building the day, month and year here, but you could just define it, e.g. sDate = "20100608".
dtmStart = sDate + "000000.000000-420" '0hr on the date in question.
dtmEnd = sDate + "235900.000000-420" ' 23:59 on the date in question
(Note that the UTC offset is in minutes here; -420 corresponds to daylight saving time in my North American time zone, i.e. UTC-7.)
Set colEvents = oWMIService.ExecQuery _
("SELECT * FROM Win32_NTLogEvent WHERE Logfile = 'Security' AND " _
& "TimeWritten >= '" & dtmStart & "' AND TimeWritten < '" _
& dtmEnd & "' AND " _
& "(EventCode = '528' OR EventCode = '540' OR EventCode = '538')")
' Query for events during the time range we're looking for.

Mike,
Show me your query. Usually the time format is something like this:
20100608100000.000000-300
See this for more details about DateTime formatting in WQL.
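For the original question (only the last 5 or 10 minutes), the idea is the same: build a CIM_DATETIME string for "now minus N minutes" and compare TimeWritten against it. Here is a rough sketch in Python that only constructs the timestamp and the WQL text; the -420 offset is an assumption, substitute your own local UTC offset in minutes:

from datetime import datetime, timedelta

def cim_timestamp(dt, utc_offset_minutes):
    # Format a datetime as a WMI CIM_DATETIME string, e.g. 20100608100000.000000-300
    sign = "+" if utc_offset_minutes >= 0 else "-"
    return dt.strftime("%Y%m%d%H%M%S") + ".000000" + sign + "%03d" % abs(utc_offset_minutes)

# Events written in the last 10 minutes, assuming a local offset of -420 minutes (UTC-7).
cutoff = cim_timestamp(datetime.now() - timedelta(minutes=10), -420)
wql = ("SELECT * FROM Win32_NTLogEvent WHERE Logfile = 'Security' "
       "AND TimeWritten >= '" + cutoff + "'")
print(wql)  # pass this string to ExecQuery, just like the VBScript above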

Related

Convert a number column into a time format in Power BI

I'm looking for a way to convert a decimal number into a valid HH:mm:ss format.
I'm importing data from an SQL database.
One of the columns in my database is labelled Actual Start Time.
The values in my database are stored in the following decimal format:
73758 // which translates to 07:37:58
114436 // which translates to 11:44:36
I cannot simply convert this Actual Start Time column into a Time format in my Power BI import as it returns errors for some values, saying it doesn't recognise 73758 as a valid 'time'. It needs to have a leading zero for cases such as 73758.
To combat this, I created a new Text column with the following code to append a leading zero:
Column = FORMAT([Actual Start Time], "000000")
This returns the following results:
073758
114436
-- which is perfect. Exactly what I needed.
I now want to convert these values into a Time.
Simply changing the data type field to Time doesn't do anything, returning:
Cannot convert value '073758' of type Text to type Date.
So I created another column with the following code:
Column 2 = FORMAT(TIME(LEFT([Column], 2), MID([Column], 3, 2), RIGHT([Column], 2)), "HH:mm:ss")
To pass the values 07, 37 and 58 into a TIME format.
This returns the following:
| Actual Start Time | Column | Column 2 |
|-------------------|--------|----------|
| 73758             | 073758 | 07:37:58 |
| 114436            | 114436 | 11:44:36 |
Which is what I wanted but is there any other way of doing this? I want to ideally do it in one step without creating additional columns.
You could use a variable, as suggested by Aldert, or you can replace [Column] by using the FORMAT function inline:
Time Format = FORMAT(
    TIME(
        LEFT(FORMAT([Actual Start Time], "000000"), 2),
        MID(FORMAT([Actual Start Time], "000000"), 3, 2),
        RIGHT([Actual Start Time], 2)),
    "hh:mm:ss")
Edit:
If you want to do this in Power Query, you can create a custom column with the following calculation:
Time.FromText(
    if Text.Length([Actual Start Time]) = 5 then Text.PadStart([Actual Start Time], 6, "0")
    else [Actual Start Time])
Once this column is created you can drop the old column, so that you only have one time column in the data. Hope this helps.
I'm deliberately showing you the concept of variables so you can use it in the future with more complex queries.
TimeC =
    var timeStr = FORMAT([Actual Start Time], "000000")
    return FORMAT(TIME(LEFT(timeStr, 2), MID(timeStr, 3, 2), RIGHT(timeStr, 2)), "HH:mm:ss")
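If you want to sanity-check the padding/slicing logic outside Power BI, here is a small Python sketch of the same idea (pad to six digits, then slice into hours, minutes and seconds); the function name is only illustrative:

def decimal_to_hhmmss(value):
    # Convert an integer like 73758 into '07:37:58' by zero-padding to 6 digits.
    s = str(value).zfill(6)            # 73758 -> '073758'
    return s[0:2] + ":" + s[2:4] + ":" + s[4:6]

print(decimal_to_hhmmss(73758))   # 07:37:58
print(decimal_to_hhmmss(114436))  # 11:44:36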

AWS Glue pushdown predicate not working properly

I'm trying to optimize my Glue/PySpark job by using push down predicates.
start = date(2019, 2, 13)
end = date(2019, 2, 27)
print(">>> Generate data frame for ", start, " to ", end, "... ")
relaventDatesDf = spark.createDataFrame([
    Row(start=start, stop=end)
])
relaventDatesDf.createOrReplaceTempView("relaventDates")
relaventDatesDf = spark.sql("SELECT explode(generate_date_series(start, stop)) AS querydatetime FROM relaventDates")
relaventDatesDf.createOrReplaceTempView("relaventDates")
print("===LOG:Dates===")
relaventDatesDf.show()
flightsGDF = glueContext.create_dynamic_frame.from_catalog(database = "xxx", table_name = "flights", transformation_ctx="flights", push_down_predicate="""
querydatetime BETWEEN '%s' AND '%s'
AND querydestinationplace IN (%s)
""" % (start.strftime("%Y-%m-%d"), today.strftime("%Y-%m-%d"), ",".join(map(lambda s: str(s), arr))))
However, it appears that Glue still attempts to read data outside the specified date range:
INFO S3NativeFileSystem: Opening 's3://.../flights/querydestinationplace=12191/querydatetime=2019-03-01/part-00045-6cdebbb1-562c-43fa-915d-93b125aeee61.c000.snappy.parquet' for reading
INFO FileScanRDD: Reading File path: s3://.../flights/querydestinationplace=12191/querydatetime=2019-03-10/part-00021-34a13146-8fb2-43de-9df2-d8925cbe472d.c000.snappy.parquet, range: 0-11797922, partition values: [12191,17965]
WARN S3AbortableInputStream: Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
INFO S3NativeFileSystem: Opening 's3://.../flights/querydestinationplace=12191/querydatetime=2019-03-10/part-00021-34a13146-8fb2-43de-9df2-d8925cbe472d.c000.snappy.parquet' for reading
WARN S3AbortableInputStream: Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
Notice that querydatetime=2019-03-01 and querydatetime=2019-03-10 are outside the specified range of 2019-02-13 to 2019-02-27. Is that why the next line says "aborting HTTP connection"? It goes on to say "This is likely an error and may result in sub-optimal behavior", so is something wrong?
I wonder if the problem is that BETWEEN or IN is not supported inside the predicate?
The table's CREATE DDL:
CREATE EXTERNAL TABLE `flights`(
`id` string,
`querytaskid` string,
`queryoriginplace` string,
`queryoutbounddate` string,
`queryinbounddate` string,
`querycabinclass` string,
`querycurrency` string,
`agent` string,
`quoteageinminutes` string,
`price` string,
`outboundlegid` string,
`inboundlegid` string,
`outdeparture` string,
`outarrival` string,
`outduration` string,
`outjourneymode` string,
`outstops` string,
`outcarriers` string,
`outoperatingcarriers` string,
`numberoutstops` string,
`numberoutcarriers` string,
`numberoutoperatingcarriers` string,
`indeparture` string,
`inarrival` string,
`induration` string,
`injourneymode` string,
`instops` string,
`incarriers` string,
`inoperatingcarriers` string,
`numberinstops` string,
`numberincarriers` string,
`numberinoperatingcarriers` string)
PARTITIONED BY (
`querydestinationplace` string,
`querydatetime` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
's3://pinfare-glue/flights/'
TBLPROPERTIES (
'CrawlerSchemaDeserializerVersion'='1.0',
'CrawlerSchemaSerializerVersion'='1.0',
'UPDATED_BY_CRAWLER'='pinfare-parquet',
'averageRecordSize'='19',
'classification'='parquet',
'compressionType'='none',
'objectCount'='623609',
'recordCount'='4368434222',
'sizeKey'='86509997099',
'typeOfData'='file')
One of the issues I can see with the code is that you are using "today" instead of "end" in the BETWEEN clause. Though I don't see the today variable declared anywhere in your code, I am assuming it has been initialized with today's date.
In that case the range will be different, and the partitions being read by Glue/Spark are correct.
In order to push down your condition, you need to change the order of columns in the PARTITIONED BY clause of your table definition.
A condition with an IN predicate on the first partition column cannot be pushed down the way you are expecting.
Let me know if it helps.
Pushdown predicates in a Glue DynamicFrame work fine with BETWEEN as well as IN clauses, as long as you have the correct sequence of partition columns defined in the table definition and in the query.
I have a table with three levels of partitions:
s3://bucket/flights/year=2018/month=01/day=01 -> 50 records
s3://bucket/flights/year=2018/month=02/day=02 -> 40 records
s3://bucket/flights/year=2018/month=03/day=03 -> 30 records
Read the data into a DynamicFrame:
ds = glueContext.create_dynamic_frame.from_catalog(
    database = "abc", table_name = "pqr", transformation_ctx = "flights",
    push_down_predicate = "(year == '2018' and month between '02' and '03' and day in ('03'))"
)
ds.count()
Output:
30 records
So you will get the correct results if the sequence of columns is correctly specified. Also note that you need to quote each value in the IN clause, i.e. IN ('%s').
Partition columns in table:
querydestinationplace string,
querydatetime string
Data read in DynamicFrame:
flightsGDF = glueContext.create_dynamic_frame.from_catalog(
    database = "xxx", table_name = "flights", transformation_ctx = "flights",
    push_down_predicate =
        """querydestinationplace IN ('%s') AND
           querydatetime BETWEEN '%s' AND '%s'
        """ % (",".join(map(lambda s: str(s), arr)),
               start.strftime("%Y-%m-%d"), today.strftime("%Y-%m-%d")))
Try defining the start and end dates as strings, like this:
start = str(date(2019, 2, 13))
end = str(date(2019, 2, 27))
# Set your push_down_predicate variable
pd_predicate = "querydatetime >= '" + start + "' and querydatetime < '" + end + "'"
#pd_predicate = "querydatetime between '" + start + "' AND '" + end + "'" # Or this one?
flightsGDF = glueContext.create_dynamic_frame.from_catalog(
    database = "xxx"
    , table_name = "flights"
    , transformation_ctx = "flights"
    , push_down_predicate = pd_predicate)
The pd_predicate will be a string that will work as a push_down_predicate.
Here is a nice read about it if you like.
https://aws.amazon.com/blogs/big-data/work-with-partitioned-data-in-aws-glue/
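To tie the two answers together, here is a rough Python sketch of building the predicate string with the partition columns in the same order as the table DDL (querydestinationplace first, then querydatetime); the place IDs and dates are placeholder values, not the original data:

from datetime import date

# Hypothetical inputs; substitute your own partition values.
destination_places = [12191, 12192]
start = date(2019, 2, 13)
end = date(2019, 2, 27)

places_csv = ",".join("'%s'" % p for p in destination_places)   # quote each IN value
pd_predicate = (
    "querydestinationplace IN (%s) AND querydatetime BETWEEN '%s' AND '%s'"
    % (places_csv, start.isoformat(), end.isoformat())
)
print(pd_predicate)
# querydestinationplace IN ('12191','12192') AND querydatetime BETWEEN '2019-02-13' AND '2019-02-27'

# flightsGDF = glueContext.create_dynamic_frame.from_catalog(
#     database="xxx", table_name="flights",
#     transformation_ctx="flights", push_down_predicate=pd_predicate)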

Is there a native way to convert to UTC time in UniVerse 11.2.4+?

The release notes for UniVerse version 11.2.4 mention local time zone configuration, but it is in the context of auditing. This is the quote:
Local time zone configuration
Prior to UniVerse 11.2.4, the date and time data stored in the audit log records was based on UTC only.
Beginning at UniVerse 11.2.4, UniVerse adds the date and time data
based on local timezone to audit log records. The data is stored in
location 19 for each record. The dictionary name for this data field
is TZINFO. For more information, see UniVerse Security Features.
Since UniVerse seems capable of working with time zones natively, does this mean there might be a way to easily generate UTC-formatted date/time stamps from my EST/EDT values?
I am sending data to a system that wants dates formatted in the ISO-8601 date/time format yyyy-MM-ddTHH:mm:ssZ, like 2015-06-02T15:55:22Z, with the time zone and Daylight Saving Time offsets accounted for.
I dug through the Security Features guide, and found this:
UniVerse also adds a globally cataloged program to help users to
obtain date and time information from the audit log (which is called
by the above two I-descriptor fields):
SUBROUTINE GETLOCALTIME
(
RESULT ;* OUT: output
TZOFF ;* IN: time zone offset
DATE ;* IN: UTC date
TIME ;* IN: UTC time
OP ;* IN: operation
;* 1: get local date
;* 2: get local time
;* 3: get local timezone id
;* 4: get local timezone daylight saving flag
)
(Since I'm not using the auditing capabilities of UniVerse, I don't think I can do much with this, nor could I locate the subroutine.)
I have also played with the popular(?) DATE.UTILITY program from PickWiki, but its calculation of Daylight Saving Time start/end dates seems off. I will save those issues for another question.
This is getting long-winded but I'm hoping someone can point me in the right direction if there's a new OCONV() parameter or something I could use.
Just in case it matters, I'm running on Windows Server 2008 R2.
Thanks!
Time is a complicated thing. Socially we have accepted that it is not only acceptable to alter it twice a year, we have mandated it! This is all well and good for us meat machines, who only want to understand time when it is convenient for us, but it does cause us to get grumpy when our reporting "looks funny".
The solution to your problem is not exceptionally easy, especially if you are working with already-recorded dates. Dates and times in UniVerse are generally recorded based on local system time. If this is something you are trying to do going forward, you have to note what the offset is at the time of the transaction, or simply stamp things with SYSTEM(99), which complicates pretty much all other reporting you will need to do. Either way, this is a complicated matter and it is still likely to be somewhat imperfect.
Here is a little something that might help you if you are the one in charge of recording dates, going forward.
SECONDS.SINCE.GMT.01.01.1970 = SYSTEM(99)
CRT SECONDS.SINCE.GMT.01.01.1970:" Seconds since GMT Epoch Began"
NUMBER.OF.DAYS.SINCE.01.01.1970 = DATE() - 732
;* Day 0 in Pick is 12/31/1967 (because Dick Pick), so we subtract 732 from the Pick date
SECONDS.SINCE.MIDNIGHT.LOCAL= TIME()
SECS.PER.DAY = 24 * 60 * 60
LOCAL.SECONDS.SINCE.GMT.01.01.1970 = NUMBER.OF.DAYS.SINCE.01.01.1970 * SECS.PER.DAY + FIELD(SECONDS.SINCE.MIDNIGHT.LOCAL,".",1)
;*I drop the precision
CRT LOCAL.SECONDS.SINCE.GMT.01.01.1970: " Seconds since 01/01/1970 in local time"
OFFSET = (LOCAL.SECONDS.SINCE.GMT.01.01.1970 - SECONDS.SINCE.GMT.01.01.1970)
CRT "CURRENT.OFFSET IS ":INT((OFFSET / 60 )/ 60)
END
Which outputs the following on my system, which is currently PDT (even though OCONV(DATE(),'DZ') reports it as PST):
1434472817 Seconds since GMT Epoch Began
1434447617 Seconds since 01/01/1970 in local time
CURRENT.OFFSET IS -7
Hopefully you have found this helpful.
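The same offset calculation can be sanity-checked outside UniVerse. This is a small Python sketch of the idea (compare "local seconds since the epoch" with "UTC seconds since the epoch"), not UniVerse code:

import calendar
import time

# Compare "local seconds since 1970-01-01" with "UTC seconds since 1970-01-01",
# just like the BASIC program above compares its two second counters.
offset_seconds = calendar.timegm(time.localtime()) - calendar.timegm(time.gmtime())
print("CURRENT.OFFSET IS", round(offset_seconds / 3600))   # e.g. -7 for PDT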
Thanks for the clues. Here's my implementation:
SUBROUTINE FORMAT.ISO.8601 (IDATE, ITIME, RESULT, ERR.TEXT)
* Don't step on the caller's variables.
IN.DATE = IDATE
IN.TIME = ITIME
* Initialize the outbound variable.
RESULT = ''
IF NOT(NUM(IN.DATE)) THEN
ERR.TEXT = 'Non-numeric internal date ' : DQUOTE(IN.DATE) : ' when numeric required.'
RETURN
END
IF NOT(NUM(IN.TIME)) THEN
ERR.TEXT = 'Non-numeric internal time ' : DQUOTE(IN.TIME) : ' when numeric required.'
RETURN
END
* SYSTEM(99) is based on 1/1/1970.
SECONDS.SINCE.GMT.01.01.1970 = SYSTEM(99)
* Day 0 in Pick is 12/31/1967
* Subtract 732 to equalize the starting dates.
NUMBER.OF.DAYS.SINCE.01.01.1970 = DATE() - 732
SECONDS.SINCE.MIDNIGHT.LOCAL= TIME()
SECS.PER.DAY = 24 * 60 * 60
LOCAL.SECONDS.SINCE.GMT.01.01.1970 = NUMBER.OF.DAYS.SINCE.01.01.1970 * SECS.PER.DAY + FIELD(SECONDS.SINCE.MIDNIGHT.LOCAL,".",1)
OFFSET = LOCAL.SECONDS.SINCE.GMT.01.01.1970 - SECONDS.SINCE.GMT.01.01.1970
OFFSET = INT((OFFSET / 60 )/ 60)
OTIME = OCONV(IN.TIME, 'MTS')
IF OTIME = '' THEN
ERR.TEXT = 'Bad internal time ' : DQUOTE(IN.TIME) : '.'
RETURN
END
HOURS = FIELD(OTIME, ':', 1)
MINUTES = FIELD(OTIME, ':', 2)
SECONDS = FIELD(OTIME, ':', 3)
HOURS -= OFFSET
IF HOURS >= 24 THEN
IN.DATE += 1
HOURS = HOURS - 24
END
HOURS = HOURS 'R%2'
ODATE = OCONV(IN.DATE, 'D4/')
IF ODATE = '' THEN
ERR.TEXT = 'Bad internal date ' : DQUOTE(IN.DATE) : '.'
RETURN
END
DMONTH = FIELD(ODATE, '/', 1)
DDAY = FIELD(ODATE, '/',2)
DYEAR = FIELD(ODATE, '/',3)
RESULT = DYEAR : '-' : DMONTH : '-' : DDAY : 'T' : HOURS : ':' : MINUTES : ':' : SECONDS : 'Z'
RETURN
END
Here's my test harness:
CRT 'Testing right now.'
IDATE = DATE()
ITIME = TIME()
CALL FORMAT.ISO.8601 (IDATE, ITIME, RESULT, ERR.TEXT)
IF ERR.TEXT THEN
CRT 'ERR.TEXT: ' : ERR.TEXT
END ELSE
CRT 'RESULT: ' : RESULT
END
CRT
CRT 'Testing an hour ago.'
IDATE = DATE()
ITIME = TIME()
ITIME = ITIME - (60*60)
IF ITIME < 0 THEN
ITIME += (24*60*60)
IDATE -= 1
END
CALL FORMAT.ISO.8601 (IDATE, ITIME, RESULT, ERR.TEXT)
IF ERR.TEXT THEN
CRT 'ERR.TEXT: ' : ERR.TEXT
END ELSE
CRT 'RESULT: ' : RESULT
END
CRT
CRT 'Testing an hour from now.'
IDATE = DATE()
ITIME = TIME()
ITIME = ITIME + (60*60)
IF ITIME > (24*60*60) THEN
ITIME -= (24*60*60)
IDATE += 1
END
CALL FORMAT.ISO.8601 (IDATE, ITIME, RESULT, ERR.TEXT)
IF ERR.TEXT THEN
CRT 'ERR.TEXT: ' : ERR.TEXT
END ELSE
CRT 'RESULT: ' : RESULT
END
END
Here's my test run:
>T$FORMAT.ISO.8601
Testing right now.
RESULT: 2017-03-29T00:47:22Z
Testing an hour ago.
RESULT: 2017-03-28T23:47:22Z
Testing an hour from now.
RESULT: 2017-03-29T01:47:22Z
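For comparison, here is a rough Python equivalent of the same conversion (local date/time to an ISO-8601 "Z" timestamp); it leans on the standard library's time zone handling rather than a hand-computed offset:

from datetime import datetime, timezone

def format_iso_8601(local_dt):
    # Convert a naive local datetime to an ISO-8601 UTC string like 2017-03-29T00:47:22Z.
    utc_dt = local_dt.astimezone(timezone.utc)   # naive datetimes are treated as local time
    return utc_dt.strftime("%Y-%m-%dT%H:%M:%SZ")

print(format_iso_8601(datetime.now()))           # "Testing right now."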

Convert result of aggregation to string

I am using aggregate and Sum to determine the number of hours I have worked in each month. I have it working, but the "hours" variable always has extra content in it!
I should add that I am a newbie at Django and I got most of this code from here (Django beginner: How to query in django ORM to calculate fields based on dates).
My code:
hours = ""
work_data = ""
month_data = ""
for month in range(1,13):
entries_per_month = Mydata.objects.filter(myTimePeriod__month=month).filter(myResource="James")
hours = str(entries_per_month.aggregate(value=Sum('myHoursLogged')))
month_data = month_data + "'" + str(month) + "',"
work_data = work_data + hours + ","
I look at the results of work_data:
work_data
This gives me a result of {'value': Decimal('136.80')},{'value': Decimal('146.40')},
I need it in the format: 136.80, 146.40 (this is the format required by the charting library). I have tried using str() to convert it but it doesn't seem to work in this case.
str is not useful here, because your data is a list of dictionaries; you just need to process it and pull out the values:
hours = ','.join(str(v['value']) for v in entries_per_month)
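Alternatively, assuming the model and field names from the question, you could pull the number straight out of the aggregate dictionary inside the loop, so work_data only ever holds plain values:

from decimal import Decimal
from django.db.models import Sum

work_values = []
for month in range(1, 13):
    entries_per_month = Mydata.objects.filter(myTimePeriod__month=month, myResource="James")
    # aggregate() returns a dict like {'value': Decimal('136.80')}; take just the number.
    total = entries_per_month.aggregate(value=Sum('myHoursLogged'))['value'] or Decimal('0')
    work_values.append(str(total))

work_data = ",".join(work_values)   # e.g. "136.80,146.40,..."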

What is the best way to populate a load file for a date lookup dimension table?

Informix 11.70.TC4:
I have an SQL dimension table which is used for looking up a date (pk_date) and returning another date (plus1, plus2 or plus3_months) to the client, depending on whether the user selects a "1","2" or a "3".
The table schema is as follows:
TABLE date_lookup
(
pk_date DATE,
plus1_months DATE,
plus2_months DATE,
plus3_months DATE
);
UNIQUE INDEX on date_lookup(pk_date);
I have a load file (pipe delimited) containing dates from 01-28-2012 to 03-31-2014.
The following is an example of the load file:
01-28-2012|02-28-2012|03-28-2012|04-28-2012|
01-29-2012|02-29-2012|03-29-2012|04-29-2012|
01-30-2012|02-29-2012|03-30-2012|04-30-2012|
01-31-2012|02-29-2012|03-31-2012|04-30-2012|
...
03-31-2014|04-30-2014|05-31-2014|06-30-2014|
........................................................................................
EDIT: Sir Jonathan's SQL statement using DATE(pk_date + n UNITS MONTH) on 11.70.TC5 worked!
I generated a load file with pk_date's from 01-28-2012 to 12-31-2020, and plus1, plus2 & plus3_months NULL. Loaded this into date_lookup table, then executed the update statement below:
UPDATE date_lookup
SET plus1_months = DATE(pk_date + 1 UNITS MONTH),
plus2_months = DATE(pk_date + 2 UNITS MONTH),
plus3_months = DATE(pk_date + 3 UNITS MONTH);
Apparently, DATE() was able to convert pk_date to DATETIME, do the math with TC5's new algorithm, and return the result in DATE format!
.........................................................................................
The rules for this dimension table are:
If pk_date has 31 days in its month and plus1, plus2 or plus3_months only have 28, 29, or 30 days, then let plus1, plus2 or plus3 equal the last day of that month.
If pk_date has 30 days in its month and plus1, plus2 or plus3 has 28 or 29 days in its month, let them equal the last valid date of those months, and so on.
All other dates fall on the same day of the following month.
My question is: what is the best way to automatically generate pk_dates past 03-31-2014 following the above rules? Can I accomplish this with an SQL script, sed, or a C program?
EDIT: I mentioned sed because I already have more than two years' worth of data and could perhaps model the rest after this data, or perhaps a tool like awk is better?
The best technique would be to upgrade to 11.70.TC5 (on 32-bit Windows; generally to 11.70.xC5 or later) and use an expression such as:
SELECT DATE(given_date + n UNITS MONTH)
FROM Wherever
...
The DATETIME code was modified between 11.70.xC4 and 11.70.xC5 to generate dates according to the rules you outline when the dates are as described and you use the + n UNITS MONTH or equivalent notation.
This obviates the need for a table at all. Clearly, though, all your clients would also have to be on 11.70.xC5 too.
Maybe you can update your development machine to 11.70.xC5 and then use this property to generate the data for the table on your development machine, and distribute the data to your clients.
If upgrading at least someone to 11.70.xC5 is not an option, then consider the Perl script suggestion.
Can it be done with SQL? Probably, but it would be excruciating. Ditto for C, and I think 'no' is the answer for sed.
However, a couple of dozen lines of Perl seem to produce what you need:
#!/usr/bin/perl
use strict;
use warnings;
use DateTime;

my @dates;

# parse arguments
while (my $datep = shift){
    my ($m, $d, $y) = split('-', $datep);
    push(@dates, DateTime->new(year => $y, month => $m, day => $d))
        || die "Cannot parse date $!\n";
}

open(STDOUT, ">", "output.unl") || die "Unable to create output file.";

my ($date, $end) = @dates;

while( $date < $end ){
    my @row = ($date->mdy('-'));    # start with pk_date
    for my $mth ( qw[ 1 2 3 ] ){
        my $fut_d = $date->clone->add(months => $mth);
        until (
            ($fut_d->month == $date->month + $mth
                && $fut_d->year == $date->year) ||
            ($fut_d->month == $date->month + $mth - 12
                && $fut_d->year > $date->year)
        ){
            $fut_d->subtract(days => 1);    # step back until criteria met
        }
        push(@row, $fut_d->mdy('-'));
    }
    print STDOUT join("|", @row, "\n");
    $date->add(days => 1);
}
Save that as futuredates.pl, chmod +x it and execute like this:
$ futuredates.pl 04-01-2014 12-31-2020
That seems to do the trick for me.
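If Perl isn't to hand, the same month-end clamping rule is easy to express in Python; this is just a sketch of the rule itself (add n months, clamp to the last valid day), with an illustrative date rather than your full range:

import calendar
from datetime import date

def add_months_clamped(d, n):
    # Add n months to d, clamping to the last day of the target month
    # (e.g. 01-31-2012 + 1 month -> 02-29-2012).
    year = d.year + (d.month - 1 + n) // 12
    month = (d.month - 1 + n) % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

d = date(2012, 1, 31)
print("|".join(add_months_clamped(d, n).strftime("%m-%d-%Y") for n in range(0, 4)))
# 01-31-2012|02-29-2012|03-31-2012|04-30-2012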