I have two lists with different items, as follows:
numbers = ['1','2','3','4','5','6','7',]
days = ['mon','tue','wed','thu','fri','sat','sun',]
I want to print from both to look like this:
result = 1
mon
2
tue
3
wed
4
thu.....etc
Is there code that does this?
Regards
You can use zip() to combine the two lists; it is probably what you want here. You can print the output like this:
for n, m in zip(numbers, days):
    print(n, m)
Output -
1 mon
2 tue
3 wed
4 thu
5 fri
6 sat
7 sun
Hope it helps.
Update - the zip() function combines two (or more) iterables element-wise and produces tuples; if the lengths differ, it stops at the shortest input.
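One detail worth checking against your data: zip() silently stops at the shortest input, so nothing errors out if the lists differ in length; itertools.zip_longest pads instead:

```python
from itertools import zip_longest

numbers = ['1', '2', '3']
days = ['mon', 'tue', 'wed', 'thu']

# zip() drops 'thu' because numbers is shorter.
print(list(zip(numbers, days)))
# → [('1', 'mon'), ('2', 'tue'), ('3', 'wed')]

# zip_longest() pads the shorter list with fillvalue instead.
print(list(zip_longest(numbers, days, fillvalue='-')))
# → [('1', 'mon'), ('2', 'tue'), ('3', 'wed'), ('-', 'thu')]
```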
In PHP, this can solve your problem:
<?php
// array 1
$numbers = ['1','2','3','4','5','6','7'];
// array 2
$days = ['mon','tue','wed','thu','fri','sat','sun'];
// loop over both arrays in parallel (count() avoids the hardcoded 7)
for ($i = 0; $i < count($numbers); $i++) {
    echo $numbers[$i].' '.$days[$i].'<br>';
}
?>
Output -
1 mon
2 tue
3 wed
4 thu
5 fri
6 sat
7 sun
I have a data type:
data Days = Mon Int | Tue Int | Wed Int | Thu Int | Fri Int
The problem is I have no idea how to deconstruct it.
For example, I need a function that returns a list of the elements whose argument is not 0, e.g. for
[(Mon 1), (Tue 0), (Wed 2), (Wed 0), (Thu 0)] it should return: [(Mon 1), (Wed 2)]
I think you need to write a "deconstruction function":
numberOfDay :: Days -> Int
numberOfDay (Mon v) = v
numberOfDay (Tue v) = v
numberOfDay (Wed v) = v
numberOfDay (Thu v) = v
numberOfDay (Fri v) = v
days = [(Mon 1), (Tue 0), (Wed 2), (Wed 0), (Thu 0)]
filtered = filter ((/= 0) . numberOfDay) days
After all, someone could come along at any time and add another constructor that takes, for example, no arguments; any mechanism that generically deconstructs a type on the assumption that all constructors have the same argument list would fail at that point.
Another solution would be to structure your code slightly differently:
data Days = Mon | Tue | Wed | Thu | Fri
data DayVal = DayVal Days Int
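As a quick sketch (in Python rather than Haskell) of why this second structure is easier to work with: once the value lives in one known place, the filter needs no per-constructor cases.

```python
# (day, value) pairs play the role of DayVal Days Int.
days = [("Mon", 1), ("Tue", 0), ("Wed", 2), ("Wed", 0), ("Thu", 0)]

# Keep only the entries whose value is non-zero.
filtered = [d for d in days if d[1] != 0]
print(filtered)  # → [('Mon', 1), ('Wed', 2)]
```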
I'm having an issue trying to sort a vector of files by their last write time. Sorting seems to work as intended most of the time; however, some entries land in the wrong position, as if a later date had a lower time_t.
Example Output:
Wed Aug 19 01:51:07 2020 || 1597819867
Wed Aug 19 05:17:20 2020 || 1597832240
Tue Aug 18 18:54:26 2020 || 1597794866
Tue Aug 18 18:43:20 2020 || 1597794200
Tue Aug 18 18:42:38 2020 || 1597794158
Wed Aug 19 22:52:44 2020 || 1597895564 <-Wrong
Thu Aug 13 18:25:32 2020 || 1597361132 <-Wrong
Wed Aug 12 22:36:51 2020 || 1597289811 <-Wrong
Mon Aug 17 21:49:45 2020 || 1597718985
My Code:
for (int i = 0; i < 200; i++) {
    auto diff = GetFileWriteTime(pth) - GetFileWriteTime(dates[i]);
    if (diff > 0.0) {
        itPos = dates.begin();
        if (i > 0) { itPos = dates.begin() + i - 1; }
        dates.insert(itPos, pth);
        dates.pop_back();
        break;
    }
}
}
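A simpler and less error-prone approach than maintaining a sorted vector by hand with insert/pop is to collect the paths first and run a single sort keyed on the write time. A minimal sketch, in Python, with os.path.getmtime standing in for GetFileWriteTime:

```python
import os

def sort_by_mtime(paths):
    # Newest first: one sort call replaces the manual insert-and-pop loop,
    # so there is no off-by-one insert position to get wrong.
    return sorted(paths, key=os.path.getmtime, reverse=True)
```

In C++ the equivalent is a single std::sort with a comparator on the two write times, rather than inserting relative to i - 1.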
I have a pandas data frame:
data = pd.read_csv(path)
I'm looking for a good way to remove outlier rows that have an extreme value in any of the features (I have 400 features in the data frame) before I run some prediction algorithms.
I tried a few ways, but they don't seem to solve the issue:
data[data.apply(lambda x: np.abs(x - x.mean()) / x.std() < 3).all(axis=1)]
using Standard Scaler
I think you can verify your output by comparing both indexes with Index.difference, because your solution actually works nicely:
import pandas as pd
import numpy as np
np.random.seed(1234)
df = pd.DataFrame(np.random.randn(100, 3), columns=list('ABC'))
print (df)
A B C
0 0.471435 -1.190976 1.432707
1 -0.312652 -0.720589 0.887163
2 0.859588 -0.636524 0.015696
3 -2.242685 1.150036 0.991946
4 0.953324 -2.021255 -0.334077
5 0.002118 0.405453 0.289092
6 1.321158 -1.546906 -0.202646
7 -0.655969 0.193421 0.553439
8 1.318152 -0.469305 0.675554
9 -1.817027 -0.183109 1.058969
10 -0.397840 0.337438 1.047579
11 1.045938 0.863717 -0.122092
12 0.124713 -0.322795 0.841675
13 2.390961 0.076200 -0.566446
14 0.036142 -2.074978 0.247792
15 -0.897157 -0.136795 0.018289
16 0.755414 0.215269 0.841009
17 -1.445810 -1.401973 -0.100918
18 -0.548242 -0.144620 0.354020
19 -0.035513 0.565738 1.545659
20 -0.974236 -0.070345 0.307969
21 -0.208499 1.033801 -2.400454
22 2.030604 -1.142631 0.211883
23 0.704721 -0.785435 0.462060
24 0.704228 0.523508 -0.926254
25 2.007843 0.226963 -1.152659
26 0.631979 0.039513 0.464392
27 -3.563517 1.321106 0.152631
28 0.164530 -0.430096 0.767369
29 0.984920 0.270836 1.391986
df1 = df[df.apply(lambda x: np.abs(x - x.mean()) / x.std() < 3).all(axis=1)]
print (df1)
A B C
0 0.471435 -1.190976 1.432707
1 -0.312652 -0.720589 0.887163
2 0.859588 -0.636524 0.015696
3 -2.242685 1.150036 0.991946
4 0.953324 -2.021255 -0.334077
5 0.002118 0.405453 0.289092
6 1.321158 -1.546906 -0.202646
7 -0.655969 0.193421 0.553439
8 1.318152 -0.469305 0.675554
9 -1.817027 -0.183109 1.058969
10 -0.397840 0.337438 1.047579
11 1.045938 0.863717 -0.122092
12 0.124713 -0.322795 0.841675
13 2.390961 0.076200 -0.566446
14 0.036142 -2.074978 0.247792
15 -0.897157 -0.136795 0.018289
16 0.755414 0.215269 0.841009
17 -1.445810 -1.401973 -0.100918
18 -0.548242 -0.144620 0.354020
19 -0.035513 0.565738 1.545659
20 -0.974236 -0.070345 0.307969
22 2.030604 -1.142631 0.211883
23 0.704721 -0.785435 0.462060
24 0.704228 0.523508 -0.926254
25 2.007843 0.226963 -1.152659
26 0.631979 0.039513 0.464392
28 0.164530 -0.430096 0.767369
29 0.984920 0.270836 1.391986
30 0.079842 -0.399965 -1.027851
31 -0.584718 0.816594 -0.081947
idx = df.index.difference(df1.index)
print (idx)
Int64Index([21, 27], dtype='int64')
print (df.loc[idx])
A B C
21 -0.208499 1.033801 -2.400454
27 -3.563517 1.321106 0.152631
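To see both the filter and Index.difference at work on something small, here is a toy frame with one obvious outlier. (One caveat: with very few rows a single outlier can never exceed 3 sample standard deviations, because the maximum z-score is bounded by (n-1)/sqrt(n), so the toy frame uses 20 rows.)

```python
import numpy as np
import pandas as pd

# 19 ordinary rows and one extreme row at index 19.
df = pd.DataFrame({"A": [2.0] * 19 + [100.0]})

# The same 3-sigma filter as above.
df1 = df[df.apply(lambda x: np.abs(x - x.mean()) / x.std() < 3).all(axis=1)]

# Index.difference reveals exactly which rows were dropped.
dropped = df.index.difference(df1.index)
print(list(dropped))  # → [19]
```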
Currently I'm printing post archive dates like so with https://play.golang.org/p/P1-sAo5Qy8:
2009 Nov 10»Something happened in 2009
2005 Nov 10»Something happened 10 years ago
2009 Jun 10»Summer of 2009
Though I think it's nicer to print by year:
2009
2009 Nov 10»Something happened in 2009
2009 Jun 10»Summer of 2009
2005
2005 Nov 10»Something happened 10 years ago
How would I range reverse-chronologically over the Posts' PostDate to print the grouping that I want? Can it all be done in the template?
Implement the sort.Interface on your Posts struct, then sort it in reverse order.
type Posts struct {
    Posts []Post
}

func (p Posts) Len() int {
    return len(p.Posts)
}

func (p Posts) Less(i, j int) bool {
    return p.Posts[i].PostDate.Before(p.Posts[j].PostDate)
}

func (p Posts) Swap(i, j int) {
    p.Posts[i], p.Posts[j] = p.Posts[j], p.Posts[i]
}
and
posts := Posts{p}
sort.Sort(sort.Reverse(posts))
That will give you the posts in the sequence you want them.
Next you'll have to implement a func using a closure, so you can check whether the current post's year is the same as the previous one's, to get the grouping by year. If it is the same, output just the post; otherwise output a header with the year, followed by the post.
currentYear := "1900"
funcMap := template.FuncMap{
    "newYear": func(t string) bool {
        if t == currentYear {
            return false
        } else {
            currentYear = t
            return true
        }
    },
}
and to use it:
{{ range . }}{{ if newYear (.PostDate.Format "2006") }}<li><h1>{{ .PostDate.Format "2006" }}</h1></li>{{ end }}
See a working example on the Playground.
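The two steps above (reverse sort, then a year-change check) can be sketched outside Go as well; here is a minimal Python version using itertools.groupby, with hypothetical (date, title) pairs standing in for the Post struct:

```python
from datetime import date
from itertools import groupby

# Hypothetical (date, title) pairs standing in for the Post struct.
posts = [
    (date(2009, 11, 10), "Something happened in 2009"),
    (date(2005, 11, 10), "Something happened 10 years ago"),
    (date(2009, 6, 10), "Summer of 2009"),
]

# Step 1: sort reverse-chronologically (the sort.Reverse step).
posts.sort(key=lambda p: p[0], reverse=True)

# Step 2: print a year header whenever the year changes (the newYear closure).
for year, group in groupby(posts, key=lambda p: p[0].year):
    print(year)
    for d, title in group:
        print(f"  {d:%Y %b %d}»{title}")
```

groupby only merges adjacent runs, which is why the sort must come first, exactly as in the Go answer.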
So far I have been able to merge two files and get the following data frame (df1):
ID someLength someLongerSeq someSeq someMOD someValue
A 16 XCVBNMHGFDSTHJGF NMH T3(P) 7
A 16 XCVBNMHGFDSTHJGF NmH M3(O); S4(P); S6(P) 1
B 24 HDFGKJSDHFGKJSDFHGKLSJDF HFGKJSDFH S9(P) 5
C 22 QIOWEURQOIWERERQWEFFFF RQoIWERER Q16(D); S19(P) 7
D 19 HSEKDFGSFDKELJGFZZX KELJ S7(P); C9(C); S10(P) 1
I am looking for a way to do a regex match based on the "someSeq" column: look for that substring in the "someLongerSeq" column, get the start location of the match, and then add that offset to the numbers attached to the characters, such as T3(P).
Example:
For the second row, ID A, "someSeq" "NmH": the match starts at location 4 of someLongerSeq (after upper-casing NmH). So I want to add 4 to the someMOD fields M3(O);S4(P);S6(P) to get M7(O);S8(P);S10(P), and then overwrite the someMOD column with the new value.
And do that for each row; the regex match is per row.
Any help is really appreciated. Thanks.
First of all, I should mention that your data is hard to read as posted. I modified it slightly (I removed the spaces from the someMOD column) in order to read it in; this doesn't matter, since you already have your data in a data.frame. So I read the data like this:
dat <- read.table(text='ID someLength someLongerSeq someSeq someMOD someValue
A 16 XCVBNMHGFDSTHJGF NMH T3(P) 7
A 16 XCVBNMHGFDSTHJGF NmH M3(O);S4(P);S6(P) 1
B 24 HDFGKJSDHFGKJSDFHGKLSJDF HFGKJSDFH S9(P) 5
C 22 QIOWEURQOIWERERQWEFFFF RQoIWERER Q16(D);S19(P) 7
D 19 HSEKDFGSFDKELJGFZZX KELJ S7(P);C9(C);S10(P) 1',header=TRUE)
Then the idea is:
process row by row using apply
use gregexpr to get the index of someSeq within someLongerSeq
use gsubfn to add that index to each number in someMOD
Here is the whole solution:
library(gsubfn)
res <- t(apply(dat,1,function(x){
idx <- gregexpr(x['someSeq'],x['someLongerSeq'],
ignore.case = TRUE)[[1]][1]
x[['someMOD']] <- gsubfn("[[:digit:]]+",
function(x) as.numeric(x)+idx,
x[['someMOD']])
x
}))
as.data.frame(res)
ID someLength someLongerSeq someSeq someMOD someValue
1 A 16 XCVBNMHGFDSTHJGF NMH T8(P) 7
2 A 16 XCVBNMHGFDSTHJGF NmH M8(O);S9(P);S11(P) 1
3 B 24 HDFGKJSDHFGKJSDFHGKLSJDF HFGKJSDFH S18(P) 5
4 C 22 QIOWEURQOIWERERQWEFFFF RQoIWERER Q23(D);S26(P) 7
5 D 19 HSEKDFGSFDKELJGFZZX KELJ S18(P);C20(C);S21(P) 1
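For reference, the same per-row shift can be sketched in Python. One detail to be aware of: re.search().start() is 0-based (offset 4 for the NmH row, which yields the M7 the question asks for), while gregexpr above reports 1-based positions (which is why the R output shows M8):

```python
import re

def shift_mods(some_seq, longer_seq, some_mod):
    # Case-insensitive search; start() is the 0-based match offset.
    m = re.search(re.escape(some_seq), longer_seq, flags=re.IGNORECASE)
    if m is None:
        return some_mod  # no match: leave someMOD untouched
    offset = m.start()
    # Add the offset to every number embedded in someMOD.
    return re.sub(r"\d+", lambda g: str(int(g.group()) + offset), some_mod)

print(shift_mods("NmH", "XCVBNMHGFDSTHJGF", "M3(O); S4(P); S6(P)"))
# → M7(O); S8(P); S10(P)
```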