How to get a total number of occurrences in an IF statement

After looking at a bunch of the other questions similar to mine, I found one that partially works but not exactly as I need it to. I am trying to get a total for the number of times an occurrence is found in my IF statement. The code I have tried is below:
for i in data['resources']:
    url = server + "/api/3/assets/" + str(i)
    response = requests.request("GET", url, headers=headers, verify=False)
    data = response.json()
    Total = 0
    if data['red'] and data['blue'] > 0:
        Total += 1
        print('Total:', Total)
Rather than giving me an output like:
Total: 3
I get this:
Total: 1
Total: 1
Total: 1

Total = 0 is inside the for loop, so it gets reset on every iteration. Put Total = 0 before the for loop instead. Also, if you want it to print only once, put the print statement outside and after the end of the for loop.
I am sure this should solve your problem.
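A minimal sketch of the corrected loop (the JSON response is bound to a new name here so it doesn't overwrite the data dict being iterated; that rename is my addition, not part of the original answer):

Total = 0  # initialise once, before the loop
for i in data['resources']:
    url = server + "/api/3/assets/" + str(i)
    response = requests.request("GET", url, headers=headers, verify=False)
    asset = response.json()  # renamed from 'data' to avoid clobbering the dict being iterated
    if asset['red'] and asset['blue'] > 0:
        Total += 1
print('Total:', Total)  # printed once, after the loop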


For Loop and If Statement not performing as expected

Here's the code:
import re
from datetime import date

# Scrape table data
alltable = driver.find_elements_by_id("song-table")
date = date.today()
simple_year_list = []
complex_year_list = []
dateformat1 = re.compile(r"\d\d\d\d")
dateformat2 = re.compile(r"\d\d\d\d-\d\d-\d\d")
for term in alltable:
    simple_year = dateformat1.findall(term.text)
    for year in simple_year:
        if 1800 < int(year) < date.year:  # Year can't be above the current year or below 1800;
            simple_year_list.append(simple_year)  # might have to be changed if you have a song from before 1800
        else:
            continue
    complex_year = dateformat2.findall(term.text)
    complex_year_list.append(complex_year)
The code uses regular expressions to find four consecutive digits. Since there are multiple 4-digit numbers, I want to narrow it down to between 1800 and 2021, since that's a reasonable time frame. simple_year_list, however, ends up containing numbers that don't follow the conditions.
You aren't saving the right value here:
simple_year_list.append(simple_year)
You should be saving the year:
simple_year_list.append(year)
I would need more information to help further though. Maybe give us a sample of the data you're working through, and the output you're seeing?
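For illustration, a minimal sketch of the loop with that one-line fix applied (same names as the question's code):

for term in alltable:
    for year in dateformat1.findall(term.text):
        if 1800 < int(year) < date.year:
            simple_year_list.append(year)  # append the matched year string, not the whole findall list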
You can do it all in regex.
Add start ^ and end $ anchors, and range restriction via pattern:
dateformat1 = re.compile(r"^(1[89]\d\d|20([01]\d|2[01]))$")
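For example, a quick sanity check of that pattern against a few candidate strings (the sample values here are mine):

>>> import re
>>> dateformat1 = re.compile(r"^(1[89]\d\d|20([01]\d|2[01]))$")
>>> [y for y in ['1799', '1800', '1985', '2021', '2022'] if dateformat1.match(y)]
['1800', '1985', '2021']

Note that with the ^ and $ anchors the pattern matches a whole string, so it suits validating individual candidate years rather than findall over a longer text.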

Python: Match a special character with a regular expression

Hi everyone, I'm using the re.match function to extract pieces of a string from each row of a file.
My code is as follows:
## fp_tmp => pointer of file
for x in fp_tmp:
    try:
        cpuOverall = re.match(r"(Overall CPU load average)\s+(\S+)(%)", x)
        cpuUsed = re.match(r"(Total)\s+(\d+)(%)", x)
        ramUsed = re.match(r"(RAM Utilization)\s+(\d+\%)", x)
        #### Not Work ####
        if cpuUsed is not None: cpuused_new = cpuUsed.group(2)
        if ramUsed is not None: ramused_new = ramUsed.group(2)
        if cpuOverall is not None: cpuoverall_new = cpuOverall.group(2)
    except:
        searchbox_result = None
Each field is extracted from the following corresponding line:
ramUsed => RAM Utilization 2%
cpuUsed => Total 4%
cpuOverall => Overall CPU load average 12%
ramUsed, cpuUsed, cpuOverall are the variables where I want to write the results!
The actual lines are (each preceded by an undefined amount of whitespace):
(space undefined) RAM Utilization 2%
(space undefined) Total 4%
(space undefined) Overall CPU load average 12%
When I execute the script, all the variables return None.
With other variables the script works correctly.
Why doesn't the code work in this case? I'm using Python 3.
I think the problem is the % character not being matched.
Do you have any suggestions?
PROBLEM 2:
## fp_tmp => pointer of file
for x in fp_tmp:
    try:
        emailReceived = re.match(r".*(Messages Received)\s+\S+\s+\S+\s+(\S+)", x)
        #### Not Work ####
        if emailReceived is not None: emailreceived_new = emailReceived.group(2)
    except:
        searchbox_result = None
The field appears in two corresponding places in the file:
[....]
Counters: Reset Uptime Lifetime
Receiving
Messages Received 3,406 1,558 3,406
[....]
Rates (Events Per Hour): 1-Minute 5-Minutes 15-Minutes
Receiving
Messages Received 0 0 0
Recipients Received 0 0 0
[....]
I want to extract only the second occurrence, that is:
Rates (Events Per Hour): 1-Minute 5-Minutes 15-Minutes
Receiving
Messages Received 0 0 0 <-this
Do you have any suggestions?
cpuOverall line: you forgot that there is more information at the start of the line. Change to
'.*(Overall CPU load average)\s+(\S+%)'
cpuUsed line: you forgot that there is more information at the start of the line. Change to
'.*(Total)\s+(\d+%)'
ramUsed line: you forgot that there is more information at the start of the line... Change to
'.*(RAM Utilization)\s+(\d+%)'
Remember that re.match only looks for a match starting at the beginning of the string:
If zero or more characters at the beginning of string match the regular expression pattern, return a corresponding match object. [..]
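Putting the corrected patterns back into the loop, a minimal sketch (same structure as the question's code, assuming fp_tmp iterates over the report's lines):

for x in fp_tmp:
    cpuOverall = re.match(r".*(Overall CPU load average)\s+(\S+%)", x)
    cpuUsed = re.match(r".*(Total)\s+(\d+%)", x)
    ramUsed = re.match(r".*(RAM Utilization)\s+(\d+%)", x)
    if cpuOverall is not None: cpuoverall_new = cpuOverall.group(2)  # e.g. '12%'
    if cpuUsed is not None: cpuused_new = cpuUsed.group(2)           # e.g. '4%'
    if ramUsed is not None: ramused_new = ramUsed.group(2)           # e.g. '2%'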
With these changes, your three variables are set to the percentages:
>>> print (cpuused_new,ramused_new,cpuoverall_new)
4% 2% 12%

DynamoDB Client.Scan() is not returning the LastEvaluatedKey parameter

I have a table with 10k rows.
I'm trying to go through them to change a small thing inside an attribute of each row with Python, so I'm using client.scan(), taking batches of 10 rows and passing the "LastEvaluatedKey" parameter to the next .scan().
The problem is that after 40 rows scan() stops returning LastEvaluatedKey, as if the table were only 40 rows long.
I've noticed that running the same script against another table three times bigger, the stop happens at 120 rows (also three times bigger).
The table has On-Demand capacity.
Any idea about this?
client = boto3.client('dynamodb')
resource = boto3.resource('dynamodb')
table = resource.Table(table_name)

remaining = 3961
iteration = 0
limit = 10

while remaining > 0:
    # retrieve Limit
    if iteration == 0:
        response = client.scan(
            TableName=table_name,
            Limit=limit,
            Select='ALL_ATTRIBUTES',
            ReturnConsumedCapacity='TOTAL',
            TotalSegments=123,
            Segment=122,
        )
        key = response["LastEvaluatedKey"]
    else:
        response = client.scan(
            TableName=table_name,
            Limit=limit,
            Select='ALL_ATTRIBUTES',
            ExclusiveStartKey=key,
            ReturnConsumedCapacity='TOTAL',
            TotalSegments=123,
            Segment=122,
        )
        key = response["LastEvaluatedKey"]
    iteration += 1
    for el in response["Items"]:
        print(el)
I think there are two problems:
you seem to be scanning with a limit: try removing that
you are running a parallel scan and always scanning the last segment:
TotalSegments=123
Segment=122
I'm not sure how big your tables are, but 123 segments is quite a lot, and I don't see you scanning any of the other segments, from 0 to 121.
Try this:
iteration = 0
response = client.scan(
    TableName=table_name,
    Select='ALL_ATTRIBUTES',
    ReturnConsumedCapacity='TOTAL'
)
while True:
    iteration += 1
    for el in response["Items"]:
        print(el)
    last_key = response.get("LastEvaluatedKey")  # absent on the final page
    if not last_key:
        break
    response = client.scan(
        TableName=table_name,
        Select='ALL_ATTRIBUTES',
        ExclusiveStartKey=last_key,
        ReturnConsumedCapacity='TOTAL'
    )
I expect the above should work to retrieve all items in your table. Then, if you still would like to run a parallel scan, you can do so, but you'll have to handle the splitting into segments yourself, and for it to be efficient you'll have to run those segments concurrently (more complicated than a sequential scan).
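For completeness, a rough sketch of such a parallel scan (my illustration, not part of the original answer; the segment count is arbitrary and table_name is assumed to be defined as above):

import boto3
from concurrent.futures import ThreadPoolExecutor

client = boto3.client('dynamodb')
total_segments = 4  # illustrative; tune to table size

def scan_segment(segment):
    # Scan one segment to completion, following LastEvaluatedKey.
    items = []
    kwargs = dict(TableName=table_name, Select='ALL_ATTRIBUTES',
                  TotalSegments=total_segments, Segment=segment)
    while True:
        response = client.scan(**kwargs)
        items.extend(response["Items"])
        last_key = response.get("LastEvaluatedKey")
        if not last_key:
            return items
        kwargs["ExclusiveStartKey"] = last_key

with ThreadPoolExecutor(max_workers=total_segments) as pool:
    all_items = [item for seg in pool.map(scan_segment, range(total_segments)) for item in seg]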

Python sorting timestamp

I am struggling with something that should be relatively straightforward, but I am getting nowhere.
I have a bunch of data with a timestamp in the format hh:mm:ss. The data covers all 24 hours of the day, from 00:00:00 through 23:59:59.
I do not know how to go about pulling out the hh part of the data, so that I can just look at data between specific hours of the day.
I read the data in from a CSV file using:
import csv

with open(filename) as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        time = row['Time']
This gives me time in the hh:mm:ss format, but now I do not know how to do what I want, which is to look at the data from 6 AM until 6 PM (06:00:00 to 18:00:00).
With the times in 24 hour format, this is actually very simple:
'06:00:00' <= row['Time'] <= '18:00:00'
Assuming that you only have valid timestamps, this is true for all times between 6 AM and 6 PM inclusive.
If you want to get a list of all rows that meet this, you can put this into a list comprehension:
relevant_rows = [row for row in reader if '06:00:00' <= row['Time'] <= '18:00:00']
Update:
For handling times with no leading zero (0:00:00, 3:00:00, 15:00:00, etc), use split to get just the part before the first colon:
>>> row_time = '0:00:00'
>>> row_time.split(':')
['0', '00', '00']
>>> int(row_time.split(':')[0])
0
You can then check if the value is at least 6 and less than 18. If you want to include entries that are at 6 PM, then you have to check the minutes and seconds to make sure it is not after 6 PM.
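For example, a minimal version of that check (it treats 6 PM itself as excluded):

hour = int(row_time.split(':')[0])
in_window = 6 <= hour < 18  # True from 06:00:00 up to, but not including, 18:00:00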
However, you don't even really need to try anything like regex or even a simple split. You have two cases to deal with - either the hour is one digit, or it is two digits. If it is one digit, it needs to be at least six. If it is two digits, it needs to be less than 18. In code:
if row_time[1] == ':':  # 1-digit hour
    if row_time > '6':  # 6 AM or later
        pass  # This is an entry you want
else:
    if row_time < '18:00:00':  # Use <= if you want 6 PM to be included
        pass  # This is an entry you want
or, compacted to a single line:
if (row_time[1] == ':' and row_time > '6') or row_time < '18:00:00':
    pass  # Parentheses are not actually needed, but help make it clearer
as a list comprehension:
relevant_rows = [row for row in reader if (row['Time'][1] == ':' and row['Time'] > '6') or row['Time'] < '18:00:00']
You can use Python's slicing syntax to pull characters from the string.
For example:
time = '06:05:22'
timestamp_hour = time[0:2]  # characters at indexes 0 and 1 (the slice end is exclusive)
print(timestamp_hour)  # '06'
should produce the first two digits: '06'. Then you can call int() to convert it to an integer:
hour = int(timestamp_hour)
print(hour)  # 6
Now you have an integer variable that can be checked to see if it is between, say, 6 and 18.
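Putting that together with the CSV reading from the question, a minimal sketch (assuming zero-padded hh timestamps in the 'Time' column, as in the question):

import csv

with open(filename) as csvfile:
    reader = csv.DictReader(csvfile)
    # keep rows whose hour is between 6 AM (inclusive) and 6 PM (exclusive)
    relevant_rows = [row for row in reader if 6 <= int(row['Time'][0:2]) < 18]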

Speedy test on R data frame to see if row values in one column are inside another column in the data frame

I have a data frame of marketing data with 22k records and 6 columns, 2 of which are of interest.
Variable
FO.variable
Here's a link with the dput output of a sample of the dataframe: http://dpaste.com/2SJ6DPX
Please let me know if there's a better way of sharing this data.
All I want to do is create an additional binary keep column which should be:
1 if FO.variable is inside Variable
0 if FO.variable is not inside Variable
Seems like a simple thing... in Excel I would just add another column with an "if" formula and then paste the formula down. I've spent the past hours trying to do this in R and failing.
Here's what I've tried:
Using grepl for pattern matching. I've used grepl before, but this time I'm trying to pass a column instead of a string. My early attempts failed because I tried to force grepl into ifelse, resulting in grepl using only the first value in the column instead of the entire thing.
My next attempt was to use transform and grep, based off another post on SO. I didn't think this would give me my exact answer, but I figured it would get me close enough to figure it out from there... the code ran for a while, then errored because of an invalid subscript.
transform(dd, Keep = FO.variable[sapply(variable, grep, FO.variable)])
My next attempt was to use str_detect, but I don't think this is the right approach, because I want the row-level value, and I think 'any' will literally use any value in the vector?
kk <- sapply(dd$variable, function(x) any(sapply(dd$FO.variable, str_detect, string = x)))
EDIT: Just tried a for loop. I would prefer a vectorized approach, but I'm pretty desperate at this point. I haven't used for loops before, as I've avoided them and stuck to other solutions. It doesn't seem to be working quite right; not sure if I screwed up the syntax:
for(i in 1:nrow(dd)){
  if(dd[i,4] %in% dd[i,2])
    dd$test[i] <- 1
}
As I mentioned, my ideal output is an additional column with 1 or 0 if FO.variable was inside variable. For example, the first three records in the sample data would be 1 and the 4th record would be zero since "Direct/Unknown" is not within "Organic Search, System Email".
A bonus would be if a solution could run fast. The apply options were taking a long, long time perhaps because they were looping over every iteration across both columns?
This turned out to not be nearly as simple as I would have thought. Or maybe it is and I'm just a dunce. Either way, I appreciate any help on how best to approach this.
I read the data
df = dget("http://dpaste.com/2SJ6DPX.txt")
then split the 'variable' column into its parts and figured out the lengths of each entry
v = strsplit(as.character(df$variable), ",", fixed=TRUE)
len = lengths(v) ## sapply(v, length) in R-3.1.3
Then I unlisted v and created an index that maps each element of the unlisted v back to the row it came from
uv = unlist(v)
idx = rep(seq_along(v), len)
Finally, I found the indexes for which uv was equal to its corresponding entry in FO.variable
test = (uv == as.character(df$FO.variable)[idx])
df$Keep = FALSE
df$Keep[ idx[test] ] = TRUE
Or combined (it seems more useful to return the logical vector than the modified data.frame, which one could obtain with dd$Keep = f0(dd))
f0 = function(dd) {
    v = strsplit(as.character(dd$variable), ",", fixed=TRUE)
    len = lengths(v)
    uv = unlist(v)
    idx = rep(seq_along(v), len)
    keep = logical(nrow(dd))
    keep[ idx[uv == as.character(dd$FO.variable)[idx]] ] = TRUE
    keep
}
(This could be made faster using the fact that the columns are factors, but maybe that's not intentional?) Compared with (the admittedly simpler and easier to understand)
f1 = function(dd)
    mapply(grepl, dd$FO.variable, dd$variable, fixed=TRUE)
f1a = function(dd)
    mapply(grepl, as.character(dd$FO.variable),
           as.character(dd$variable), fixed=TRUE)
f2 = function(dd)
    apply(dd, 1, function(x) grepl(x[4], x[2], fixed=TRUE))
with
> library(microbenchmark)
> identical(f0(df), f1(df))
[1] TRUE
> identical(f0(df), unname(f2(df)))
[1] TRUE
> microbenchmark(f0(df), f1(df), f1a(df), f2(df))
Unit: microseconds
    expr     min       lq      mean   median       uq     max neval
  f0(df)  57.559  64.6940  70.26804  69.4455  74.1035  98.322   100
  f1(df) 573.302 603.4635 625.32744 624.8670 637.1810 766.183   100
 f1a(df) 138.527 148.5280 156.47055 153.7455 160.3925 246.115   100
  f2(df) 494.447 518.7110 543.41201 539.1655 561.4490 677.704   100
Two subtle but important additions during the development of the timings were to use fixed=TRUE in the regular expression, and to coerce the factors to character.
I would go with a simple mapply in your case; as you correctly said, by-row operations will be very slow. Also (as suggested by Martin), setting fixed = TRUE and converting to character beforehand will significantly improve performance.
transform(dd, Keep = mapply(grepl,
                            as.character(FO.variable),
                            as.character(variable),
                            fixed = TRUE))
#    VisitorIDTrue                        variable value      FO.variable FO.value  Keep
# 22      44888657 Direct / Unknown,Organic Search     1 Direct / Unknown        1  TRUE
# 2       44888657   Direct / Unknown,System Email     1 Direct / Unknown        1  TRUE
# 6       44888657             Direct / Unknown,TV     1 Direct / Unknown        1  TRUE
# 10      44888657     Organic Search,System Email     1 Direct / Unknown        1 FALSE
# 18      44888657               Organic Search,TV     1 Direct / Unknown        1 FALSE
# 14      44888657                 System Email,TV     1 Direct / Unknown        1 FALSE
# 24      44888657 Direct / Unknown,Organic Search     1   Organic Search        1  TRUE
# 4       44888657   Direct / Unknown,System Email     1   Organic Search        1 FALSE
...
Here is a data.table approach that I think is very similar in spirit to Martin's:
require(data.table)
dt <- data.table(df)
dt[,`:=`(
  fch = as.character(FO.variable),
  rn = 1:.N
)]
dt[,keep:=FALSE]
dtvars <- dt[,strsplit(as.character(variable),',',fixed=TRUE),by=rn]
setkey(dt,rn,fch)
dt[dtvars,keep:=TRUE]
dt[,c("fch","rn"):=NULL]
The idea is to
identify all pairs of rn & variable (saved in dtvars) and
see which of these pairs match with rn & FO.variable pairs (in the original table, dt).