I am attempting to create a loop to save me having to type out the code many times. Essentially, I have 60 csv files that I need to alter and save. My code looks as follows:
forvalues i = 0203 0206 : 1112 {
cd "C:\Users\User\Desktop\Data\"
import delimited `i'.csv, varnames(1)
gen time=`i'
keep rssd9017 rssd9010 bhck4074 bhck4079 bhck4093 bhck2170 time
save `i'.dta, replace
}
However, I am getting the error "203.csv" does not exist. It seems to be dropping the leading zero; is there any way to fix this?
You are asking for a numlist, but in this context 0203, with nothing else said, just looks to Stata like a quirky but acceptable way to write 203: hence your problem.
But do you really have a numlist that is 0203 0206 : 1112?
Try it:
numlist "0203 0206 : 1112"
ret li
The list starts 203 206 209 212 215 218 221 224 227 230 233 236 ...
My wild guess is that you have files, one for each quarter over a period, labelled 0203 for March 2002 through to 1112 for December 2011. In fact you do say that you have 60 files, even though my guess implies 40 files, not 60. Either way, you won't have a file labelled 0215, so this is the wrong way to think in any case.
Here is a better approach. First take the cd out of the loop: you need only do that once!
cd "C:\Users\User\Desktop\Data"
Now find the files that are ????.csv. You need only install fs once.
ssc inst fs
fs ????.csv
foreach f in `r(files)' {
    import delimited `f', varnames(1) clear
    gen time = substr("`f'", 1, 4)
    keep rssd9017 rssd9010 bhck4074 bhck4079 bhck4093 bhck2170 time
    local stub = substr("`f'", 1, 4)
    save `stub'.dta, replace
}
On my guess, you still need to fix the time to something civilised and you would be better off appending the files, but one problem at a time.
Note that insisting on leading zeros, which you think is the problem here but is probably a red herring, is written up here.
I have three csv files containing different data for a common object. These represent data about distinct collections of items at work. These objects have unique codes. The number of files is not important so I will set this problem up with two. I have a handy recipe for joining these files using join -- but the cleaning part is killing me.
File A snippet - contains unique data. Also the cataloging error E B.
B 547
J 65
EB 289
E B 1
CO 8900
ZX 7
File B snippet - unique data about a different dimension of the objects.
B 5
ZX 67
SD 4
CO 76
J 54
EB 10
Note that file B contains a code not in common with file A.
Now I submit to you the "official" canon of codes designated for this set of objects:
B
CO
ZX
J
EB
Note that File B contains a non-canonical code with data. It needs to be captured and documented. Same with bad code in file A.
End goal: run trend and stats on the collections using the various fields from the multiple reports. They mostly match the canon but there are oddballs due to cataloging errors and codes that are no longer in use.
End goal result after merge/join:
B 547 5
J 65 54
EB 289 10
CO 8900 76
ZX 7 67
So my first idea was to use grep -F -f for this, using the canonical codes as a search list, then merge with join. The problem is that with one-letter codes it's too inclusive. It would seem like a job for awk, where it can work with tab delimiters and regex the oddball codes. I'm not sure, though, how to get awk to use a list to sift other files. Will join alone handle all this? Maybe I merge with join or paste, then sift out the weirdos? Which method is the least brittle and most likely to handle edge cases like the drunk cataloger?
If you're thinking, "Dude, this is better done with Perl or Python ...etc.". I'm all ears. No rules, I just need to deliver!
Your question says the data is csv, but based on your samples I'm assuming it's tsv. I'm also assuming E B should end up in the outlier output and that NA values should be filled with 0.
Given those assumptions, the following may be sufficient:
sort -t $'\t' -k 1b,1 fileA > fileA.sorted && sort -t $'\t' -k 1b,1 fileB > fileB.sorted
join -t $'\t' -a1 -a2 -e0 -o auto fileA.sorted fileB.sorted > out
grep -f codes out > out-canon
grep -vf codes out > out-oddball
The content of file codes:
^B\s
^CO\s
^ZX\s
^J\s
^EB\s
Result:
$ cat out-canon
B 547 5
CO 8900 76
EB 289 10
J 65 54
ZX 7 67
$ cat out-oddball
E B 1 0
SD 0 4
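Since you say Python is also acceptable: here is a minimal sketch of the same join-and-sift idea in Python, assuming two-column tab-separated files named fileA and fileB, the canon hard-coded, and missing values filled with 0 as above.
import sys

canon = {"B", "CO", "ZX", "J", "EB"}

def load(path):
    # Read a two-column tab-separated file into a dict: code -> value.
    data = {}
    with open(path) as f:
        for line in f:
            code, value = line.rstrip("\n").split("\t")
            data[code] = value
    return data

a, b = load("fileA"), load("fileB")

with open("out-canon", "w") as good, open("out-oddball", "w") as odd:
    for code in sorted(set(a) | set(b)):
        row = "{}\t{}\t{}\n".format(code, a.get(code, 0), b.get(code, 0))
        # canonical codes go to out-canon, everything else to out-oddball
        (good if code in canon else odd).write(row)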
Try this (GNU awk):
awk 'BEGIN { FS = OFS = "\t" }
     ARGIND == 1 { c[$1]++ }           # first file: build the set of canonical codes
     ARGIND == 2 { b[$1] = $2 }        # second file: fileB values keyed by code
     ARGIND == 3 {                     # third file: fileA
         if (c[$1]) {                  # canonical code: print the joined row
             print $1, $2, b[$1] + 0
             delete b[$1]
         } else if (tolower($1) ~ "[a-z]+ +[a-z]+") {
             print > "error.fileA"     # code containing a space, e.g. "E B"
         } else {
             print > "oddball.fileA"   # other non-canonical codes
         }
     }
     END {
         for (i in b) {                # fileB codes never matched in fileA
             print i, 0, b[i] " (? maybe?)"
             print i, b[i] > "oddball.fileB"
         }
     }' codes fileB fileA
It will create error.fileA and oddball.fileA (if such lines exist) and oddball.fileB.
The normal output is not written to a file; you can redirect it with > yourself once the results look OK:
B 547 5
J 65 54
EB 289 10
CO 8900 76
ZX 7 67
SD 0 4 (? maybe?)
I had a hard time reading your description, so I'm not sure if this is what you want.
Anyway, it's easy to improve this awk code.
You can change ARGIND==1 to FILENAME=="file1", or FILENAME==ARGV[1], if ARGIND is not working (it is a GNU awk extension).
I am referring to a question I posted a few days ago. I haven't got any replies yet, and I suspect the situation was not properly described, so I have made a simpler setup that should be easier to understand and will hopefully get more attention from experienced programmers!
I forgot to mention that I am running Python 2 on Jupyter.
import pandas as pd
from pandas import Series, DataFrame
g_input_df = pd.read_csv('SetsLoc.csv')
URL=g_input_df.iloc[0,0]
c_input_df = pd.read_csv(URL)
c_input_df = c_input_df.set_index("Parameter")
root_path = c_input_df.loc["root_1"]
input_rel_path = c_input_df.loc["root_2"]
input_file_name = c_input_df.loc["file_name"]
This section reads a list of paths from a .csv, one at a time; each path points to another .csv file that contains the input for a simulation to be set up using Python.
The results from the above code can be tested here:
c_input_df
                   Value
Parameter
root_1    C:/SimpleTest/
root_2            Input/
file_name     Prop_1.csv
URL
'C:/SimpleTest/Sets/Set_1.csv'
root_path+input_rel_path+input_file_name
Value C:/SimpleTest/Input/Prop_1.csv
dtype: object
Property_1 = pd.read_csv('C:/SimpleTest/Input/Prop_1.csv')
Property_1
   height  weight
0     100      50
1     110      44
2      98      42
...on the other hand, if I try to use variables to describe the file's path and name, I get an error:
Property_1 = pd.read_csv(root_path+input_rel_path+input_file_name)
Property_1
I get the following error:
ValueErrorTraceback (most recent call last)
<ipython-input-3-1d5306b6bdb5> in <module>()
----> 1 Property_1 = pd.read_csv(root_path+input_rel_path+input_file_name)
2 Property_1
C:\ProgramData\Anaconda2\lib\site-packages\pandas\io\parsers.pyc in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
653 skip_blank_lines=skip_blank_lines)
654
--> 655 return _read(filepath_or_buffer, kwds)
656
657 parser_f.__name__ = name
C:\ProgramData\Anaconda2\lib\site-packages\pandas\io\parsers.pyc in _read(filepath_or_buffer, kwds)
390 compression = _infer_compression(filepath_or_buffer, compression)
391 filepath_or_buffer, _, compression = get_filepath_or_buffer(
--> 392 filepath_or_buffer, encoding, compression)
393 kwds['compression'] = compression
394
C:\ProgramData\Anaconda2\lib\site-packages\pandas\io\common.pyc in get_filepath_or_buffer(filepath_or_buffer, encoding, compression)
208 if not is_file_like(filepath_or_buffer):
209 msg = "Invalid file path or buffer object type: {_type}"
--> 210 raise ValueError(msg.format(_type=type(filepath_or_buffer)))
211
212 return filepath_or_buffer, None, compression
ValueError: Invalid file path or buffer object type: <class 'pandas.core.series.Series'>
I believe the problem resides in the way the parameters that make up the path and filename are read from the dataframe. Is there any way to specify that those parameters are paths, or something similar that would avoid this problem?
Any help is highly appreciated!
I posted the solution in the other question related to this post, in case someone wants to take a look:
Problems opening a path in Python
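In case the link goes stale, here is a minimal sketch of the likely fix based on my reading of the error (not necessarily the posted solution): .loc["root_1"] returns a one-element Series, and pd.read_csv() needs a plain string, so pull the scalar values out first. This assumes the column is called Value, as in the printed output above.
# Extract scalar strings instead of one-element Series before concatenating.
root_path = c_input_df.loc["root_1", "Value"]
input_rel_path = c_input_df.loc["root_2", "Value"]
input_file_name = c_input_df.loc["file_name", "Value"]

Property_1 = pd.read_csv(root_path + input_rel_path + input_file_name)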
I've got a regex that is responsible for matching the pattern A:B in lines where you might have multiple matches (e.g. "A:B A: B A : B A:B", etc.). The problem lies in the complexity of what A represents.
I'm using the regex:
\b[\w|\(|\)+]+\s*:(?:(?![\w+]+\s*:).)*
to match items in:
Data_1: Tutor Elementary: 10 a F Test: 7.87 sips
Turning 1 Data (A Run), Data: 0.0 10.0 10.0 17.3 0.0
Turning 2 Data (A Run), Data2: 0.0 6.8 0.0 6.8 6.8
Data_1: Tutor Pool: Data2: A B C
Turning 2 (A Run), ABSOLUTE: 368 337 428 0 2 147
Data_4 : 4AZE Localization : 33.14 lat -86 long
Time: 0.75 Data Scenario: 3121.2
The question is this: if you examine this setup (I use https://regex101.com/), lines 2, 3, and 5 don't return exactly what I'm looking for. Where the match is the first in the line, I want it to grab everything from the beginning of the line to the first ':'. Is this type of conditional regex possible? I've tried every way I could imagine, but I haven't been successful yet.
Thanks in advance!
A little complex, but try this here:
^(.*?:.*?)(\b\w+\b\s*:.*?)\b\w+\b:.*$|^(.*?:.*?)\b\w+\b\s*:(.*?)$|^(.*)$
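If it helps to step outside regex101, here is a small Python sketch of one way to read the requirement: a key is either everything from the start of the line up to the first ':' or a single word, and a value runs until the next key. The pattern below is my own attempt under those assumptions, not a drop-in replacement for yours.
import re

# key = start-of-line prefix up to the first ':' OR a single word/parenthesised token;
# value = everything up to the next "word:" key or the end of the line
pair_re = re.compile(r'(^[^:]+?|\b[\w()]+)\s*:\s*((?:(?!\b[\w()]+\s*:).)*)')

sample = [
    "Turning 1 Data (A Run), Data: 0.0 10.0 10.0 17.3 0.0",
    "Data_1: Tutor Elementary: 10 a F Test: 7.87 sips",
]

for line in sample:
    print([(k.strip(), v.strip()) for k, v in pair_re.findall(line)])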
I am working on a relatively new challenge in CodeEval called 'Football.' The description is listed in the following link:
https://www.codeeval.com/open_challenges/230/
Inputs are lines of a file read by Python, and within each line there are lists separated by '|', with each list representing a country: the first being country "1", second being country "2", and so on.
1 2 3 4 | 3 1 | 4 1
19 11 | 19 21 23 | 31 39 29
Outputs are also lines in response to each line read from the file.
1:1,2,3; 2:1; 3:1,2; 4:1,3;
11:1; 19:1,2; 21:2; 23:2; 29:3; 31:3; 39:3;
so team 1 is supported by countries 1, 2, and 3, as shown in the first entry of the output: 1:1,2,3.
Below is my solution. Since I have no clue why it only works for the two sample cases listed in the description link, I'd like to ask for comments and hints on how to correct my code. Thank you very much in advance for your time and assistance.
import sys

def football(string):
    countries = map(str.split, string.split('|'))
    teams = sorted(list(set([i[j] for i in countries for j in range(len(i))])))
    results = []
    for i in range(len(teams)):
        results.append([teams[i]+':'])
        for j in range(len(countries)):
            if teams[i] in countries[j]:
                results[i].append(str(j+1))
    for i in range(len(results)):
        results[i] = results[i][0]+','.join(results[i][1:])
    return '; '.join(results) + '; '

if __name__ == '__main__':
    lines = [line.rstrip() for line in open(sys.argv[1])]
    for line in lines:
        print football(line)
After deliberately failing an attempt in order to check out the complete test input and my output, I found the problem. The line:
teams = sorted(list(set([i[j] for i in countries for j in range(len(i))])))
will make the output problematic in terms of sorting. For example here's a sample input:
10 20 | 43 23 | 27 | 25 | 11 1 12 43 | 33 18 3 43 41 | 31 3 45 4 36 | 25 29 | 1 19 39 | 39 12 16 28 30 37 | 32 | 11 10 7
and it produces the output:
1:5,9; 10:1,12; 11:5,12; 12:5,10; 16:10; 18:6; 19:9; 20:1; 23:2; 25:4,8; 27:3; 28:10; 29:8; 3:6,7; 30:10; 31:7; 32:11; 33:6; 36:7; 37:10; 39:9,10; 4:7; 41:6; 43:2,5,6; 45:7; 7:12;
But the challenge expects the output teams to be sorted in ascending numeric order, which the code above does not achieve because the team numbers are strings, not integers. The solution is simply to add a key so the teams list is sorted by integer value:
teams = sorted(list(set([i[j] for i in countries for j in range(len(i))])), key=lambda x:int(x))
With this small change, the code passes the tests. A sample output looks like:
1:5,9; 3:6,7; 4:7; 7:12; 10:1,12; 11:5,12; 12:5,10; 16:10; 18:6; 19:9; 20:1; 23:2; 25:4,8; 27:3; 28:10; 29:8; 30:10; 31:7; 32:11; 33:6; 36:7; 37:10; 39:9,10; 41:6; 43:2,5,6; 45:7;
Please let me know if you have a better and more efficient solution to the challenge. I'd love to read better code or suggestions for improving my programming skills.
Here's how I solved it:
import sys

with open(sys.argv[1]) as test_cases:
    for test in test_cases:
        if test:
            team_supporters = {}
            for nation, nation_teams in enumerate(test.strip().split("|"), start=1):
                for team in map(int, nation_teams.split()):
                    team_supporters.setdefault(team, []).append(nation)
            print(*("{}:{};".format(team, ",".join(map(str, sorted(nations))))
                    for team, nations in sorted(team_supporters.items())))
The problem is not very complicated. We're given a mapping from nation (implicitly numbered by their order in the input) to a list of teams. We need to reverse that to create an output that maps from a team to a list of nations.
It seems natural to use a dictionary that maps in the same way as the desired output. We can use enumerate to give numbers to the nations as we iterate over them. The setdefault method of the dict adds empty lists to the dictionary as they are needed (using a collections.defaultdict instead of a regular dictionary would be another way to deal with this). We don't need to care about the order of the input, nor the order things are stored in the dictionary's inner lists.
We build the output using str.format calls and the default space separator of the print function. If the final semicolon weren't desired, I'd have used print("; ".join("{}:{}".format(...))) instead. Since the output needs to be sorted by team at the top level, and by nation in the inner lists, we make some sorted calls where necessary.
Sorting the inner lists is probably not even necessary, since the nations were processed in order, with their numbers derived from the order they had in the input line. Fortunately, Python's Timsort algorithm is very fast on already-sorted input, so even with a bit of unnecessary sorting, our code is still fast enough.
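For reference, the collections.defaultdict variant mentioned above would look roughly like this (a sketch along the same lines, not re-run against the CodeEval tests); the inner sort is dropped because nations are appended in increasing order anyway.
import sys
from collections import defaultdict

with open(sys.argv[1]) as test_cases:
    for test in test_cases:
        if test:
            # defaultdict creates the empty list on first access,
            # so no setdefault call is needed
            team_supporters = defaultdict(list)
            for nation, nation_teams in enumerate(test.strip().split("|"), start=1):
                for team in map(int, nation_teams.split()):
                    team_supporters[team].append(nation)
            print(*("{}:{};".format(team, ",".join(map(str, nations)))
                    for team, nations in sorted(team_supporters.items())))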
I'm taking my first steps writing code to do linguistic analysis of texts. I use Python and the NLTK library. The problem is that the actual counting of words takes up close to 100% of my CPU (Core i5, 8 GB RAM, MacBook Air 2014) and had run for 14 hours before I shut the process down. How can I speed up the looping and counting?
I have created a corpus in NLTK out of three Swedish UTF-8 formatted, tab-separated files Swe_Newspapers.txt, Swe_Blogs.txt, Swe_Twitter.txt. It works fine:
import nltk
my_corpus = nltk.corpus.CategorizedPlaintextCorpusReader(".", r"Swe_.*", cat_pattern=r"Swe_(\w+)\.txt")
Then I've loaded a text-file with one word per line into NLTK. That also works fine.
my_wordlist = nltk.corpus.WordListCorpusReader("/Users/mos/Documents/", "wordlist.txt")
The text-file I want to analyse (Swe_Blogs.txt) has this structure, and works fine to parse:
Wordpress.com 2010/12/08 3 1,4,11 osv osv osv …
bloggagratis.se 2010/02/02 3 0 Jag är utled på plogade vägar, matte är lika utled hon.
wordpress.com 2010/03/10 3 0 1 kruka Sallad, riven
EDIT: The suggestion to build the counter as below does not work, but it can be fixed:
counter = collections.Counter(word for word in my_corpus.words(categories=["Blogs"]) if word in my_wordlist)
This produces the error:
IOError Traceback (most recent call last)
<ipython-input-41-1868952ba9b1> in <module>()
----> 1 counter = collections.Counter(word for word in my_corpus.words("Blogs") if word in my_wordlist)
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/corpus/reader/plaintext.pyc in words(self, fileids, categories)
182 def words(self, fileids=None, categories=None):
183 return PlaintextCorpusReader.words(
--> 184 self, self._resolve(fileids, categories))
185 def sents(self, fileids=None, categories=None):
186 return PlaintextCorpusReader.sents(
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/corpus/reader/plaintext.pyc in words(self, fileids, sourced)
89 encoding=enc)
90 for (path, enc, fileid)
---> 91 in self.abspaths(fileids, True, True)])
92
93
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/corpus/reader/api.pyc in abspaths(self, fileids, include_encoding, include_fileid)
165 fileids = [fileids]
166
--> 167 paths = [self._root.join(f) for f in fileids]
168
169 if include_encoding and include_fileid:
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/data.pyc in join(self, fileid)
174 def join(self, fileid):
175 path = os.path.join(self._path, *fileid.split('/'))
--> 176 return FileSystemPathPointer(path)
177
178 def __repr__(self):
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/data.pyc in __init__(self, path)
152 path = os.path.abspath(path)
153 if not os.path.exists(path):
--> 154 raise IOError('No such file or directory: %r' % path)
155 self._path = path
IOError: No such file or directory: '/Users/mos/Documents/Blogs'
A fix is to assign my_corpus.words(categories=["Blogs"]) to a variable:
blogs_text = my_corpus.words(categories=["Blogs"])
It's when I try to count all occurrences of each word (about 20K words) in the wordlist within the blogs in the corpus (115.7 MB) that my computer gets a little tired. How can I speed up the following code? It seems to work, with no error messages, but it takes more than 14 hours to execute.
import collections

counter = collections.Counter()
for word in my_corpus.words(categories="Blogs"):
    for token in my_wordlist.words():
        if token == word:
            counter[token] += 1
        else:
            continue
Any help to improve my coding skills is much appreciated!
It seems like your double loop could be improved:
for word in mycorp.words(categories="Blogs"):
    for token in my_wordlist.words():
        if token == word:
            counter[token] += 1
This would be much faster as:
words = set(my_wordlist.words())  # call once, make a set for fast membership checks
for word in mycorp.words(categories="Blogs"):
    if word in words:
        counter[word] += 1
This takes you from doing len(my_wordlist.words()) * len(mycorp.words(...)) operations to closer to len(my_wordlist.words()) + len(mycorp.words(...)) operations, as building the set is O(n) and checking whether a word is in the set is O(1) on average.
You can also build the Counter direct from an iterable, as Two-Bit Alchemist points out:
counter = collections.Counter(word for word in mycorp.words(categories="Blogs")
                              if word in words)
You already got good answers on how to count words properly with Python. The problem is that it will still be quite slow. If you are just exploring the corpora, a chain of UNIX tools gives you a much quicker result. Assuming that your text is tokenized, something like this gives you the 100 most frequent tokens in descending order:
cat Swe_Blogs.txt | cut --delimiter=$'\t' --fields=5 | tr ' ' '\n' | sort | uniq -c | sort -nr | head -n 100
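If you would rather stay in Python, here is a rough equivalent of that pipeline (my sketch, assuming the text sits in the fifth tab-separated field, as in the cut call above):
import collections

counter = collections.Counter()
with open("Swe_Blogs.txt") as f:
    for line in f:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 5:
            counter.update(fields[4].split())  # whitespace-separated tokens in field 5

# print the 100 most frequent tokens, count first, like uniq -c | sort -nr
for token, count in counter.most_common(100):
    print("{} {}".format(count, token))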