I'm trying to write a program in C++ that produces a report on usage by time, breaking the time into quarter-hour blocks:
00:00-00:14, 00:15-00:29, …, 23:45-23:59.
I need to report the number of incidents in each time block. This is my code so far. I'd appreciate it if anyone could come up with a solution.
string time = word;
size_t found2 = word.find(":");
string tmpH,tmpM;
tmpH = word.substr(0,found2);
tmpM = word.substr((found2+1),word.length());
cout<<" word= "<<word<<" tmpH= "<<tmpH<<" tmpM= "<<tmpM<<endl;
int h = atoi(tmpH.c_str());
int m = atoi(tmpM.c_str());
////
Input:
aa784 pts/30 Fri Mar 28 03:25 still logged in 101.175.22.198
aa784 sshd Fri Mar 28 03:25 still logged in 101.175.22.198
aa784 pts/30 Fri Mar 28 03:25 - 03:25 (00:00) 101.175.22.198
aa784 sshd Fri Mar 28 03:25 - 03:25 (00:00) 101.175.22.198
hmb183 sshd Fri Mar 28 03:24 still logged in c110-20-244-248.mirnd4.nsw.optusnet.com.au
bkg988 sshd Fri Mar 28 03:24 - 03:24 (00:00) 139.218.157.100
hmb183 sshd Fri Mar 28 03:21 - 03:22 (00:01) c110-20-244-248.mirnd4.nsw.optusnet.com.au
fmm290 pts/43 Fri Mar 28 03:11 still logged in 1002-wan-001.rhw.com.au
fmm290 sshd Fri Mar 28 03:11 still logged in 1002-wan-001.rhw.com.au
bkg988 sshd Fri Mar 28 03:09 - 03:09 (00:00) 139.218.157.100
pm554 pts/14 Fri Mar 28 02:22 still logged in ppp239-204.static.internode.on.net
pm554 sshd Fri Mar 28 02:22 still logged in ppp239-204.static.internode.on.net
bkg988 sshd Fri Mar 28 02:17 - 02:17 (00:00) 139.218.157.100
bkg988 sshd Fri Mar 28 02:12 - 02:12 (00:00) 139.218.157.100
bkg988 sshd Fri Mar 28 02:10 - 02:10 (00:00) 139.218.157.100
bx972 pts/12 Fri Mar 28 02:09 still logged in cpe-121-218-195-236.lnse4.cht.bigpond.net.au
bkg988 sshd Fri Mar 28 02:07 - 02:07 (00:00) 139.218.157.100
hmb183 sshd Fri Mar 28 02:05 - 02:06 (00:01) c110-20-244-248.mirnd4.nsw.optusnet.com.au
bkg988 sshd Fri Mar 28 02:04 - 02:04 (00:00) 139.218.157.100
output:
00:00-00:14 10 users logged in
00:15-00:29 15 users logged in
....
23:45-23:59 3 users logged in
Therefore I have 4 blocks per hour, which comes to 96 time-block conditions in total?
First, you can convert each block's hour and minute into minutes; for example, 23:45 equals 1425 minutes. Store all of these blocks in a list and sort them by starting time.
For each event, convert the event time into minutes and use binary search (or linear search) to find the block with the largest starting time that is less than or equal to the event time; that is the block the event belongs to.
The time complexity of the sort is O(1), since there are only a few blocks, and handling all queries is O(n), where n is the number of queries (the binary search can be considered constant time here).
Edit: As you have added another constraint, you need to sort all the events by date and time, and then, for each date, you can use the approach described above.
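A minimal, self-contained sketch of that idea (the 96 quarter-hour block starts are kept in a sorted vector and std::upper_bound finds the enclosing block; the hard-coded events stand in for whatever your parser extracts):

#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
    // Block starts in minutes: 0, 15, 30, ..., 1425 (96 blocks, already sorted).
    std::vector<int> block_start;
    for (int t = 0; t < 24 * 60; t += 15)
        block_start.push_back(t);

    int counts[96] = {0};

    // Example events (hour, minute) -- in the real program these come from the log lines.
    int events[][2] = { {3, 25}, {3, 24}, {2, 17}, {23, 59} };

    for (auto& e : events)
    {
        int t = e[0] * 60 + e[1];   // event time in minutes
        // Largest block start <= t: upper_bound yields the first start > t, so step back one.
        auto it = std::upper_bound(block_start.begin(), block_start.end(), t);
        int block = int(it - block_start.begin()) - 1;
        ++counts[block];
    }

    for (int b = 0; b < 96; ++b)
        std::printf("%02d:%02d-%02d:%02d %d users logged in\n",
                    block_start[b] / 60, block_start[b] % 60,
                    (block_start[b] + 14) / 60, (block_start[b] + 14) % 60,
                    counts[b]);
}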
Given lines like:
bkg988 sshd Fri Mar 28 02:17 - 02:17 (00:00) 139.218.157.100
You can do this:
std::string to_month_number(const std::string& name)
{
    return name == "Jan" ? "01/" :
           name == "Feb" ? "02/" :
           ...;
}

typedef std::pair<std::string, int> When;   // (month/day string, quarter-hour index)
typedef std::map<When, int> Num_Logins;
Num_Logins num_logins;

std::string user, term, dow, month, dom, rest;
int hour, min;
char c;
while (std::cin >> user >> term >> dow >> month >> dom >> hour >> c >> min && c == ':')
{
    std::getline(std::cin, rest); // discard the remainder of the line
    if (dom.length() == 1) dom = ' ' + dom; // standardise width for sorting...
    When when = std::make_pair(to_month_number(month) + ' ' + dom, (hour * 60 + min) / 15);
    ++num_logins[when];
}
I suspect the actual input will be a bit more complex, with the date being formatted differently when the process started last year or intraday, so you'll need to tune the fields parsed out. To recreate the time when iterating over num_logins to print out results, given an entry's key (a When), just:
int hour = key.second / 4;
int min  = (key.second % 4) * 15; // 00, 15, 30 or 45
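For instance, a print loop over the map might look like this (just a sketch; it assumes the num_logins map from the snippet above and needs <iostream>, <iomanip> and <map>):

for (std::map<When, int>::const_iterator it = num_logins.begin();
     it != num_logins.end(); ++it)
{
    const When& key = it->first;        // (month/day string, quarter-hour index)
    int hour = key.second / 4;
    int min  = (key.second % 4) * 15;   // 00, 15, 30 or 45
    std::cout << key.first << ' '
              << std::setw(2) << std::setfill('0') << hour << ':'
              << std::setw(2) << std::setfill('0') << min
              << ' ' << it->second << " logins\n";
}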
There are many methods for creating a measure that shows a percentage in a table column,
but I cannot find a method to always show the ratio of a SPECIFIC group as a percentage of the two categories combined.
data sample:
YEAR MONTH TYPE AMOUNT
2020 Jan A 100
2020 Feb A 250
2020 Mar A 230
2020 Jan B 158
2020 Feb B 23
2020 Mar B 46
2019 Jan A 499
2019 Feb A 65
2019 Mar A 289
2019 Jan B 465
2019 Feb B 49
2019 Mar B 446
2018 Jan A 13
2018 Feb A 97
2018 Mar A 26
2018 Jan B 216
2018 Feb B 264
2018 Mar B 29
2018 Jan A 314
2018 Feb A 659
2018 Mar A 226
2018 Jan B 469
2018 Feb B 564
2018 Mar B 164
My goal is to always show the percentage of A compared with the total amount.
YEAR and MONTH are used to synchronize with slicers.
e.g. if I select YEAR = 2020, MONTH = Jan:
100/258 = 38%
(manually entered in a text box)
First, create the following 3 measures in your table:
1.
amount_A =
CALCULATE(
SUM(pie_chart_data[AMOUNT]),
FILTER(
ALLSELECTED(pie_chart_data),
pie_chart_data[TYPE] = "A"
)
)
2.
amount_overall =
CALCULATE(
SUM(pie_chart_data[AMOUNT]),
ALLSELECTED(pie_chart_data)
)
3.
amount_A_percentage = [amount_A]/[amount_overall]
Now, add both measures, amount_A and amount_overall, to your donut chart's Values field. Then place the amount_A_percentage measure in a Card visual and put the card in the center of the donut chart, so the percentage appears in the middle of the chart.
I want to schedule a CloudWatch event to run every other Monday and have started with this expression:
0 14 ? * 2 *
Currently, with the above expression, I get a weekly schedule of Monday executions:
Mon, 27 Jul 2020 14:00:00 GMT
Mon, 03 Aug 2020 14:00:00 GMT
Mon, 10 Aug 2020 14:00:00 GMT
Mon, 17 Aug 2020 14:00:00 GMT
Mon, 24 Aug 2020 14:00:00 GMT
Mon, 31 Aug 2020 14:00:00 GMT
Mon, 07 Sep 2020 14:00:00 GMT
Mon, 14 Sep 2020 14:00:00 GMT
Mon, 21 Sep 2020 14:00:00 GMT
Mon, 28 Sep 2020 14:00:00 GMT
However, I would like the schedule to be set to every other Monday, e.g.
Mon, 27 Jul 2020 14:00:00 GMT
Mon, 10 Aug 2020 14:00:00 GMT
Mon, 24 Aug 2020 14:00:00 GMT
Mon, 07 Sep 2020 14:00:00 GMT
Mon, 21 Sep 2020 14:00:00 GMT
I have seen examples with exp and # being used, but I don't think AWS CloudWatch Events accepts these sorts of parameters.
Chris' answer is correct. Currently, there is no way that I could think of to express this as part of CloudWatch Scheduled Events.
However, a workaround could be to set it to every Monday (0 14 ? * 2 *) and trigger a Lambda function that checks whether it's in the on-week or the off-week before triggering the actual target.
Even though this adds some complexity, it would be a viable solution.
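For the on-week/off-week test itself, one option is to count whole weeks elapsed since a fixed reference Monday and check the parity. Here is a sketch of just that check, using C++20 <chrono>; the reference date 2020-07-27 is an assumed starting "on" Monday (it must lie in the past), and the Lambda wiring around it is omitted:

#include <chrono>
#include <cstdio>

// True on "on" weeks: counts whole weeks elapsed since a reference Monday.
bool is_on_week(std::chrono::sys_days today)
{
    using namespace std::chrono;
    constexpr sys_days reference = year{2020}/July/27;   // assumed reference "on" Monday
    auto weeks = (today - reference).count() / 7;        // whole weeks since the reference
    return weeks % 2 == 0;
}

int main()
{
    using namespace std::chrono;
    sys_days today = floor<days>(system_clock::now());
    if (!is_on_week(today))
        return 0;                      // off-week: exit without triggering the target
    std::puts("on-week: trigger the actual target");
}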
You won't be able to do any of the fancier commands (especially those using variables from the command line).
You could do this very basically, but it would require 2 separate events in order to carry it out:
0 14 ? * 2#1 * - Run on the first Monday of the month.
0 14 ? * 2#3 * - Run on the third Monday of the month.
Unfortunately, there is no compatible syntax for schedule expressions that allows the concept of every other week, so the above commands could occasionally lead to a 3-week gap.
If you don't care about the Monday, you could of course use 0 14 1,15 * ? * to run on the 1st and 15th of each month (roughly every 2 weeks).
The final option would be to run every Monday but have the script exit if it is not the right week; the expression would then just be 0 14 ? * 2 *.
More information about the syntax is available on the Cron Expressions section of the Scheduled Events page.
So here's my problem: I have big log files and want a script that greps certain periods of time and saves them to a file (sorted); basically
bash script.sh Jul 4 Sep 30
will return for example
Sep 30 user0 logged in
Sep 15 user1 logged in
Aug 6 user0 logged in
Aug 3 user1 logged in
Jul 28 user2 logged in
Jul 27 user2 logged in
Jul 4 user0 logged in
My first attempt was that every month and date gets its own variable, like
bash script.sh Jul 4 Sep 3 0
so I can use $1 for the start month (July), $2 for the start date (4), and so on in grep, like
for logs in logs*
do
grep -Ee "^$1 [$2-9]\s" "$logs" >> result.txt
done
to get all logs from July 4 to 9, but I don't know how to get logs from the entire time period when the dates aren't in the same month, nor in a range like 1-9 or 10-19, and so on.
Any help greatly appreciated!
EDIT:
As some people have asked, here's what my log files look like (just much bigger and not sorted):
Sep 30 user0 logged in
Jul 27 user2 logged in
Aug 6 user0 logged in
Aug 31 user1 logged in
Jul 8 user2 logged in
Sep 5 user1 logged in
Jul 27 user2 logged in
Jul 14 user0 logged in
[...]
Here's my take:
#!/bin/bash
year="$(date +"%Y")"
# start of the range, and end of the range plus one day, as unix timestamps
start="$(date -d"$1 $2, $year" +'%s')"
end="$(($(date -d"$3 $4, $year" +'%s')+86400))"
for log in logs*; do
  while IFS= read -r line; do
    # parse the "Mon DD" prefix of the line into a unix timestamp
    d="$(date -d"$(cut -d' ' -f1,2 <<< "$line"), $year" +'%s')"
    if (( start <= d && d < end )); then
      echo "$line"
    fi
  done < "$log"
done
You run it like this: ./script.sh Jul 04 Sep 03. Since no year is included in the logs, it assumes that all dates (including the ones on the command line) are in the current year. It's probably not the most optimal solution, but it works. It relies on date, which it calls repeatedly to parse dates into Unix timestamps; Unix timestamps are nice because they are just numbers and can therefore be used in numeric comparisons.
$ range="Jul 4 Sep 30"
$ awk -v range="$range" '
BEGIN {
numMths = split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec",m)
for (i in m) {
mths[m[i]] = i
}
split(range,r)
beg = sprintf("%02d%02d", mths[r[1]], r[2])
end = sprintf("%02d%02d", mths[r[3]], r[4])
}
{ cur = sprintf("%02d%02d", mths[$1], $2) }
(cur >= beg) && (cur <= end) { vals[$1,$2] = $0 }
END {
for (mthNr=numMths; mthNr>0; mthNr--) {
for (dayNr=31; dayNr>0; dayNr--) {
date = m[mthNr] SUBSEP dayNr
if (date in vals) {
print vals[date]
}
}
}
}
' file
Sep 30 user0 logged in
Sep 5 user1 logged in
Aug 31 user1 logged in
Aug 6 user0 logged in
Jul 27 user2 logged in
Jul 14 user0 logged in
Jul 8 user2 logged in
I'm writing code where I use only Boost libraries as prerequisites.
I need a class to handle datetime values and operations (adding and subtracting years, months, hours, etc.), so I picked the Gregorian date as an option.
But when I handle days in leap years, some surprises appear. Here is a piece of example code:
#include <boost/date_time/gregorian/gregorian.hpp>
#include <iostream>

int main()
{
    boost::gregorian::date d1(2000,1,1);
    boost::gregorian::days ds(118);
    boost::gregorian::date d2 = d1 + ds;
    std::cout << boost::gregorian::to_iso_extended_string(d1) << std::endl;
    std::cout << boost::gregorian::to_iso_extended_string(d2) << std::endl;
    return 0;
}
Output:
2000-01-01
2000-04-28 (should be 2000-04-27)
Is there an option to deal with this issue? On the manual page, Boost warns that some operations can "lead to unexpected results...".
I think it's correct as it is:
for a in {1..118}; do echo -n "+$a days: "; date --rfc-2822 -d"2000-01-01 +$a days"; done
prints the following, and I see no anomalies around the leap day:
+1 days: Sun, 02 Jan 2000 00:00:00 +0100
+2 days: Mon, 03 Jan 2000 00:00:00 +0100
...
+57 days: Sun, 27 Feb 2000 00:00:00 +0100
+58 days: Mon, 28 Feb 2000 00:00:00 +0100
+59 days: Tue, 29 Feb 2000 00:00:00 +0100
+60 days: Wed, 01 Mar 2000 00:00:00 +0100
...
+116 days: Wed, 26 Apr 2000 00:00:00 +0200
+117 days: Thu, 27 Apr 2000 00:00:00 +0200
+118 days: Fri, 28 Apr 2000 00:00:00 +0200
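Counting it by hand gives the same answer: the 30 remaining days of January, plus 29 days of February (2000 is a leap year), plus 31 days of March account for 90 of the 118 days, and the remaining 28 land on April 28. If you want a second opinion from code without Boost, the C++20 calendar agrees (a small sketch, assuming a C++20 compiler):

#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    // Same computation as the Boost snippet, done with the C++20 calendar types.
    sys_days d1 = year{2000}/January/1;
    year_month_day d2 = d1 + days{118};
    std::cout << d2 << '\n';   // prints 2000-04-28, agreeing with Boost
    return 0;
}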
Consider this log file
SN PID Date Status
1 P01 Fri Feb 14 19:32:36 IST 2014 Alive
2 P02 Fri Feb 14 19:32:36 IST 2014 Alive
3 P03 Fri Feb 14 19:32:36 IST 2014 Alive
4 P04 Fri Feb 14 19:32:36 IST 2014 Alive
5 P05 Fri Feb 14 19:32:36 IST 2014 Alive
6 P06 Fri Feb 14 19:32:36 IST 2014 Alive
7 P07 Fri Feb 14 19:32:36 IST 2014 Alive
8 P08 Fri Feb 14 19:32:36 IST 2014 Alive
9 P09 Fri Feb 14 19:32:36 IST 2014 Alive
10 P010 Fri Feb 14 19:32:36 IST 2014 Alive
When I do => grep "P01" File
the output is (as expected):
1 P01 Fri Feb 14 19:32:36 IST 2014 Alive
10 P010 Fri Feb 14 19:32:36 IST 2014 Alive
But when I do => grep " P01 " File (notice the space before and after P01),
I do not get any output!
Question: grep matches a pattern in a line, so " P01 " (with spaces around it) should match the first PID, P01, as it has spaces around it... but it seems that this logic is wrong. What obvious thing am I missing here?
If the log uses tabs, not spaces, your grep pattern won't match. I would add word boundaries around the word you want to find:
grep '\<P01\>' file
If you really want to use whitespace in your pattern, use one of:
grep '[[:blank:]]P01[[:blank:]]' file # horizontal whitespace, tabs and spaces
grep -P '\sP01\s' file # using Perl regex