Rocket UniVerse & UniData: What is a "D" pointer?

This is just for clarification: I know exactly what a Q-pointer is, but in a meeting today the concept of a D-pointer was raised. Does anyone know what a "D" pointer refers to? I have never heard this term before.

This is a nice question because it helped me put together a couple of pieces I had rolling around in my head, so thanks for that!
D items are dictionary entries that refer to a logical location in the data array, and you have probably seen them a million times in the DICT of any given file.
A D item in the VOC serves the same purpose and is valid with any query. Lots of shops have generics (F1, F2, F3, F4, F5, F6, etc.) set up so you don't have to remember the dictionary name if you know which field you want. I think the precedence for dictionary items is DICT of the file -> VOC, but I could be wrong on that.
As an example to illustrate this, I went into HS.SALES, took one of the DICT items in the CUSTOMER file, and wrote it to the VOC after removing the conversion in field 3. I chose BUY_DATE because it has a conversion:
SORT CUSTOMER BUY_DATE   06:51:04am  10 Oct 2017  PAGE 1
CUSTOMER..   Date Purchased
1            01/07/91
10           01/28/91
             01/29/91
             01/30/91
Remove the conversion and save the item into the VOC:
>ED DICT CUSTOMER BUY_DATE
10 lines long.
0001: D Date of purchase
0002: 14
0003: D2/
0004: Date Purchased
0005: 8R
0006: M
0007: ORDERS
0008: INTEGER
0009:
0010:
----: 3
0003: D2/
----: R
0003:
----: SAVE VOC F14NOCON
"F14NOCON" filed in file "VOC".
----: Q
Now sort with the new D-type item. Without the D2/ conversion in field 3, the query displays the raw internal values: Pick-style dates are stored as days since 31 December 1967, so values from before mid-1995 were still 4 digits! (For reference, the fields of a D-type record are: 1 = type, 2 = field/attribute number, 3 = conversion code, 4 = column heading, 5 = format, 6 = S or M for single- or multivalued.)
SORT CUSTOMER F14NOCON   06:45:25am  10 Oct 2017  PAGE 1
CUSTOMER..   Date Purchased
1            8408
10           8429
             8430
             8431
Good Luck!


Dynamic Google Sheets Column + Row formula

I have a Google Sheet where I want to grab a header containing a date-time stamp, match it against another sheet, find the entries with that date, suburb, and type, and compute an average cost.
My formula is =AVERAGEIFS(Sheet1!C:C,Sheet1!A:A, B11:B, Sheet1!F:F, C10), which gives me the average, but I've hard-coded the header date (C10).
What I want to do is dynamically pull the date-time from the header row above instead of manually adding it to the formula, something like this:
=AVERAGEIFS(Sheet1!C:C,Sheet1!A:A, B11:B, Sheet1!F:F, =CHAR(COLUMN()+64) & 10)
Which would automatically grab the column + row 10, e.g. C10, D10, E10.
If I put =CHAR(COLUMN()+64) & 10 in its own cell it works, but when I add it to the AVERAGEIFS condition it gives me a parsing error.
I'm expecting C10, D10, E10 from =CHAR(COLUMN()+64) & 10, which should let me dynamically filter data on the date in the header above it.
try:
=AVERAGEIFS(Sheet1!C:C, Sheet1!A:A, B11:B, Sheet1!F:F, INDIRECT(CHAR(COLUMN()+64)&10))
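The reason the original attempt fails is that CHAR(COLUMN()+64) & 10 only produces the text string "C10" (and the embedded = is itself a syntax error inside an argument list); AVERAGEIFS needs an actual value or range, and INDIRECT is what converts that text into a cell reference. One caveat, as an aside: the CHAR trick only works up to column Z (column 27 gives CHAR(91), which is "["), so something like =AVERAGEIFS(Sheet1!C:C, Sheet1!A:A, B11:B, Sheet1!F:F, INDIRECT(ADDRESS(10, COLUMN()))) should be sturdier past that.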

Map values in a list to a new value with PySpark

I'm trying to recode a list of values using PySpark to create a new column. I've set my mapping up with nested dictionaries, but can't get the mapping syntax figured out. The original data has several string values that need to be recoded to a new value, and then I want to give the column a new name. The original column values will get grouped several different ways to create different new columns.
The df will have several thousand columns, so I need the code to be as efficient as possible.
I have a different scenario with a 1-1 mapping where I was able to create my expression (using create_map and lit from pyspark.sql.functions and chain from itertools) with:
expr = [create_map([lit(x) for x in chain(*values.items())])[orig_df[key]]
            .cast(IntegerType()).alias('new_name')
        for key, values in my_dict.items() if key in orig_df.columns]
I just can't figure out the syntax for the many-to-one mapping.
Here's what I've tried:
grouping_dict = {'orig_col_n': {'new_col_n_a': {'20': ['011','012','013'], '30': ['014','015','016']},
                                'new_col_n_b': {'25': ['011','013','015'], '35': ['012','014','016']}}}
expr = [f.when(f.col(key) == f.lit(old_val), f.lit(new_value))
            .cast(IntegerType())
            .alias(new_var_name)
        for key, new_var_names_dict in grouping_dict.items()
        for new_var_name, mapping_dict in new_var_names_dict.items()
        for new_value, old_value_list in mapping_dict.items()
        for old_val in old_value_list
        if key in original_df.columns]
new_df = original_df.select(*expr)
This expression isn't quite right; it creates multiple columns with the same name as it loops through the values that need to be mapped.
Any suggestions for restructuring my dictionary or how to fix my syntax would be greatly appreciated!
Desired output:
orig_col_n   new_col_n_a   new_col_n_b
011          20            25
012          20            35
013          20            25
014          30            35
015          30            25
016          30            35
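One way to restructure this (a minimal sketch, assuming original_df and grouping_dict as above) is to fold each {new_value: [old_values]} dict into a single chained when()/isin() expression, so there is exactly one expression per new column:

import pyspark.sql.functions as f
from pyspark.sql.types import IntegerType

def recode_expr(source_col, mapping, new_name):
    # mapping is {new_value: [old_values]}; values not listed anywhere become null
    expr = None
    for new_value, old_values in mapping.items():
        cond = f.col(source_col).isin(old_values)
        expr = f.when(cond, f.lit(int(new_value))) if expr is None \
            else expr.when(cond, f.lit(int(new_value)))
    return expr.cast(IntegerType()).alias(new_name)

exprs = [recode_expr(key, mapping, new_var_name)
         for key, new_var_names_dict in grouping_dict.items()
         for new_var_name, mapping in new_var_names_dict.items()
         if key in original_df.columns]
new_df = original_df.select('*', *exprs)   # keep the original columns alongside

Because each new column is built as one expression, every alias appears exactly once and the duplicate-column problem disappears.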

How to load specific columns with varying location from a text file in Python?

I'm trying to read the discharge data of 346 US rivers stored online in text files. The files are more or less in this format:
Measurement_number Date Gage_height Discharge_value
1 2017-01-01 10 1000
2 2017-01-20 15 2000
# etc.
I only want to read the gage height and discharge value columns.
The problem is that in most files additional metadata columns are added in front of the 'Gage_height' column, so I cannot simply read the 3rd and 4th columns, because their index varies.
I'm trying to find a way to say "read the columns named 'Gage_height' and 'Discharge_value'", but I haven't succeeded yet.
I hope someone can help. I'm currently loading the text files with numpy.genfromtxt, so it would be great to find a solution with that package, but other solutions are also more than welcome.
This is my code so far
data_url = urllib2.urlopen(url)  # url of this specific site
data = np.genfromtxt(data_url, skip_header=1, comments='#', usecols=[2, 3])
You can use the names=True option to genfromtxt, and then use the column names to select which columns you want to read with usecols.
For example, to read 'Gage_height' and 'Discharge_value' from your data file:
data = np.genfromtxt(filename, names=True, usecols=['Gage_height', 'Discharge_value'])
Note that you don't need to set skip_header=1 if you use names=True.
You can then access the columns using their names:
gage_height = data['Gage_height'] # == array([ 10., 15.])
discharge_value = data['Discharge_value'] # == array([ 1000., 2000.])
See the numpy.genfromtxt docs for more information.
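As a self-contained sketch using the sample rows from the question (here StringIO merely stands in for the file object returned by urlopen):

import numpy as np
from io import StringIO

sample = StringIO(
    "Measurement_number Date Gage_height Discharge_value\n"
    "1 2017-01-01 10 1000\n"
    "2 2017-01-20 15 2000\n"
)
# names=True reads the header row; usecols then selects columns by name
data = np.genfromtxt(sample, names=True, usecols=['Gage_height', 'Discharge_value'])
print(data['Gage_height'])       # [10. 15.]
print(data['Discharge_value'])   # [1000. 2000.]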

Python Time Series

I am working on a real estate cash-flow simulation.
What I want in the end is a time series where, for each day, I report whether the property is vacant or leased and whether I collected rent.
In my present code, I first create a profit array with values of "Leased", "Vacant", or "Today you collected rent of $1000", and I use it to create my time series:
rng = pd.date_range('6/1/2016', periods=len(profit), freq='D')
ts = pd.Series(profit, index=rng)
To simplify, I assumed I collected rent every 30 days. Now I want to be more specific and collect it every 5th day of the month (for example) and be flexible on the day the next tenant will move in.
Do you know commands or a good source where I can learn how to iterate from month to month?
Any help would be appreciated.
You can build a sequence of dates using date_range (freq='M' gives month-end frequencies) and then .shift() it by a number of days using pd.datetools.day, like so:
date_sequence = pd.date_range(start, end, freq='M').shift(num_of_days, freq=pd.datetools.day)
and then use this sequence to select dates from the DateTimeIndex using
df.loc[date_sequence, 'column_name'] = value
Alternatively, you can use pd.DateOffset() like so:
from datetime import date  # needed for date(2015, 6, 1)
ts = pd.date_range(start=date(2015, 6, 1), end=date(2015, 12, 1), freq='MS')
DatetimeIndex(['2015-06-01', '2015-07-01', '2015-08-01', '2015-09-01',
'2015-10-01', '2015-11-01', '2015-12-01'],
dtype='datetime64[ns]', freq='MS')
Now add 5 days:
ts + pd.DateOffset(days=5)
to get:
DatetimeIndex(['2015-06-06', '2015-07-06', '2015-08-06', '2015-09-06',
'2015-10-06', '2015-11-06', '2015-12-06'],
dtype='datetime64[ns]', freq=None)
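One note, as an aside: a month start plus pd.DateOffset(days=5) lands on the 6th (as the output above shows), so if rent is due on the 5th you want days=4. A minimal sketch tying this back to the daily series (the dates and rent message are illustrative):

import pandas as pd

# Daily status series for the simulation horizon
days = pd.date_range('2015-06-01', '2015-12-31', freq='D')
ts = pd.Series('Leased', index=days)

# Month starts shifted to the 5th of each month
rent_days = pd.date_range('2015-06-01', '2015-12-31', freq='MS') + pd.DateOffset(days=4)
ts[rent_days] = 'Today you collected rent of $1000'

print(ts['2015-07-01':'2015-07-06'])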

Help: Extracting data tuples from text... Regex or Machine learning?

I would really appreciate your thoughts on the best approach to the following problem. I am using a car classified-listing example, which is similar in nature, to give an idea.
Problem: Extract a data tuple from the given text.
Here are some characteristics of the data.
The vocabulary (words) in the text is limited to a specific domain. Let's assume 100-200 words at most.
The text that needs to be parsed is a headline, like the car-ad data shown below. So each record corresponds to one tuple (row).
In some cases some of the attributes may be missing. For example, in raw-data row #5 below the year is missing.
Some words go together (bigrams), like "low miles".
Historical data available = 10,000 records
Incoming New Data volume = 1000-1500 records / week
The expected output should be in the form (Year, Make, Model, Feature), so the output should look like:
1 -> (2009, Ford, Fusion, SE)
2 -> (1997, Ford, Taurus, Wagon)
3 -> (2000, Mitsubishi, Mirage, DE)
4 -> (2007, Ford, Expedition, EL Limited)
5 -> ( , Honda, Accord, EX)
....
....
Raw Headline Data:
1 -> 2009 Ford Fusion SE - $7000
2 -> 1997 Ford Taurus Wagon - $800 (san jose east)
3 -> '00 Mitsubishi Mirage DE - $2499 (saratoga) pic
4 -> 2007 Ford Expedition EL Limited - $7800 (x)
5 -> Honda Accord ex low miles - $2800 (dublin / pleasanton / livermore) pic
6 -> 2004 HONDA ODASSEY LX 68K MILES - $10800 (danville / san ramon)
7 -> 93 LINCOLN MARK - $2000 (oakland east) pic
8 -> #######2006 LEXUS GS 430 BLACK ON BLACK 114KMI ####### - $19700 (san rafael) pic
9 -> 2004 Audi A4 1.8T FWD - $8900 (Sacramento) pic
10 -> #######2003 GMC C2500 HD EX-CAB 6.0 V8 EFI WHITE 4X4 ####### - $10575 (san rafael) pic
11 -> 1990 Toyota Corolla RUNS GOOD! GAS SAVER! 5SPEED CLEAN! REG 2011 O.B.O - $1600 (hayward / castro valley) pic img
12 -> HONDA ACCORD EX 2000 - $4900 (dublin / pleasanton / livermore) pic
13 -> 2009 Chevy Silverado LT Crew Cab - $23900 (dublin / pleasanton / livermore) pic
14 -> 2010 Acura TSX - V6 - TECH - $29900 (dublin / pleasanton / livermore) pic
15 -> 2003 Nissan Altima - $1830 (SF) pic
Possible choices:
A machine learning Text Classifier (Naive Bayes etc)
Regex
What I am trying to figure out is whether regex is too complicated for the job and whether a text classifier is overkill.
If the choice is to go with a text classifier, then what would you consider the easiest to implement?
Thanks in advance for your kind help.
This is a well-studied problem called information extraction. It is not straightforward to do what you want, and it is not as simple as you make it sound (i.e., machine learning is not overkill). There are several techniques; you should read an overview of the research area.
Check this IE library for writing extraction rules; I think it will work best for your problem.
There is also an example of how to create fast dictionary matching.
I think that the ARX or Phoebus systems may suit your needs if you already have annotated data and a list of words associated with each field. Their approach is a mix of information extraction and information integration.
There are a few good entity-recognition libraries. Have you taken a look at Apache OpenNLP?
For a user looking for a specific model of car, the task is easier: I'm pretty sure I could classify, say, most Ford Rangers, since I know what to look for with a regexp.
I think your best bet is to write a function for each car model with type String -> Maybe Tuple, then run all of them on each input and throw away the inputs that yield zero or too many tuples.
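A minimal sketch of that per-model idea in Python (the pattern is illustrative and deliberately narrow; returning None plays the role of Maybe):

import re
from typing import Optional, Tuple

# One hand-written pattern per known model; Ford Ranger shown as an example.
RANGER = re.compile(
    r"(?:(?P<year>19\d{2}|20\d{2}|'?\d{2})\s+)?"   # optional 2- or 4-digit year
    r"ford\s+ranger\s*(?P<feature>[\w .\-]*?)\s*-\s*\$",
    re.IGNORECASE,
)

def parse_ranger(headline: str) -> Optional[Tuple[str, str, str, str]]:
    """Return (year, make, model, feature) or None if this isn't a Ranger ad."""
    m = RANGER.search(headline)
    if m is None:
        return None
    year = (m.group('year') or '').lstrip("'")
    return (year, 'Ford', 'Ranger', m.group('feature').strip())

print(parse_ranger("2004 Ford Ranger XLT - $4800 (san jose east) pic"))
# -> ('2004', 'Ford', 'Ranger', 'XLT')

Running one such function per model over each headline and keeping only the unambiguous hits gives you the zero-or-one-tuple filter described above.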
You could use a tool like Amazon Mechanical Turk for this: human microtasking. Another alternative is to use a data-entry freelancer; Upwork is a great place to look. You can get excellent-quality results, and the cost per record is very reasonable.