I have a formatting issue with text that gets passed into a document. A sample companyList is below; its contents vary each time the script is run:
companyList = ["Apple - Seattle Washington (800) 555-5555", "Microsoft - Tampa Florida (800) 555-1234", "Samsung - Tokyo Japan (01) 555 123-1234"]
Right now the line of code to format this text is:
companyInfo = "\n\n".join(companyList)
and companyInfo outputs like this:
Apple - Seattle Washington (800) 555-5555 Microsoft - Tampa Florida (800) 555-1234 Samsung - Tokyo Japan (01) 555 123-1234
How can I rewrite this so it formats like the following (note that each entry starts on a new line, tabbed in one level)?
Apple - Seattle Washington (800) 555-5555
Microsoft - Tampa Florida (800) 555-1234
Samsung - Tokyo Japan (01) 555 123-1234
Many thanks in advance
You can do it like this:
companyInfo = "\n\n".join("\t%s" % x for x in companyList)
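For what it's worth, the same thing can be written with an f-string (Python 3.6+). This is just a sketch using the sample companyList from the question; it prefixes each entry with a tab and separates entries with a blank line:

companyList = ["Apple - Seattle Washington (800) 555-5555",
               "Microsoft - Tampa Florida (800) 555-1234",
               "Samsung - Tokyo Japan (01) 555 123-1234"]

# Prefix each entry with a tab, then join entries with a blank line between them
companyInfo = "\n\n".join(f"\t{x}" for x in companyList)
print(companyInfo)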
I have data consisting of route details for the customers and also their store scores.
Raw data with the overall ranking for all the customers:
Dist_Code|Dist_Name|State|Store_name|Route_code|Store_score|Rank
5371 ABC Chicago CG 1200 5 1
2098 HGT Kansas KK 6500 4.8 2
7680 POE Arizona QW 3300 4.2 3
3476 POE Arizona CV 3300 4 4
6272 KUN Florida ANF 7800 3.9 5
3220 ABC Chicago AF 1200 3.6 6
7266 IOR Califor LU 4500 3.2 7
3789 POE Arizona TR 3300 3 8
9383 KAR Newyork IO 5600 3 9
1583 KUN Florida BOT 7800 2.8 10
8219 ABC Chicago Bb 1200 2.5 11
3734 ABC Chicago AA 1200 2 12
6900 POE Arizona HAL 3300 1.8 13
8454 KUN Florida UYO 7800 1.5 14
Filters
Distname ALL
State ALL
Routecode ALL
This is the overall ranking for all the customers without selecting any filters. When I select a filter (Dist name, route code, store score), I want it to show the rank according to the selected filter, e.g.:
Dist_Code|Dist_Name|State|Store_name|Route_code|Store_score|Rank
7680 POE Arizona QW 3300 4.2 1
3476 POE Arizona CV 3300 4 2
3789 POE Arizona TR 3300 3 3
6900 POE Arizona HAL 3300 1.8 4
Filter
Distname POE
State Arizona
Routecode 3300
The store score is based on a parameter which I calculated in a model using Python.
Currently it is a string column in Power BI. I tried some DAX but it was not successful.
Right now, I have a view with a mess of common, conditional string replacements and substitutions for an open text field - in this example, regional classification.
(Please ignore the accuracy of geography, I'm just working with historical standard assignments. Also, I know I could speed things up with REPLACE or even just cleaning the RegEx statements for lookback - I'm just asking about the variable/nesting here.)
CREATE OR REPLACE FUNCTION public.region_cleanup(record_region text)
RETURNS text
LANGUAGE sql
STRICT
AS $function$
SELECT REGEXP_REPLACE(
         REGEXP_REPLACE(
           REGEXP_REPLACE(
             REGEXP_REPLACE(
               REGEXP_REPLACE(
                 REGEXP_REPLACE(record_region, '(NORTH AMERICA\s\-\sUSA\s\-\sUSA)', 'USA')
               , 'Rest\sof\sthe\sWorld\s\-\s', '')
             , 'NORTH\sAMERICA\s\-\sCANADA', 'NORTH AMERICA - Canada')
           , '\&\;', '&')
         , 'Georgia\s\-\sGeorgia', 'MIDDLE EAST - Georgia')
       , 'EUROPE - Turkey', 'MIDDLE EAST - Turkey');
$function$;
A sample output using this function would look like this in my dataset, pulling out records impacted (some are already in the correct format):
record_region_input | record_region_output
NORTH AMERICA - USA - USA - NORTHEAST - Massachusetts - Boston Metro | USA - NORTHEAST - Massachusetts - Boston Metro
NORTH AMERICA - USA - USA - MIDATLANTIC - Virginia | USA - MIDATLANTIC - Virginia
Rest of the World - ASIA - Thailand | ASIA - Thailand
Rest of the World - EUROPE - Portugal | EUROPE - Portugal
Rest of the World - ASIA - China - Shanghai Metro | ASIA - China - Shanghai Metro
Georgia - Georgia | MIDDLE EAST - Georgia
This is... fine. Regex is needed since there's tons of variability on what may come before or after these strings, and I have a proper validation list elsewhere. This is just a bulk scrub of common historical naming issues.
The problem is when I get into hundreds of these kinds of "known substitutions" (100+) for things like company naming or cross-department standards. Having dozens and dozens of nested REGEXP_REPLACE( statements makes editing/adding/dropping anything a maddening game of counting.
I'm trying to clean data within Postgres exclusively, since my current pipeline doesn't always allow for standardization prior to upload. I know how I'd tackle this cleanly outside of pure SQL, but in a 'vanilla' PostgreSQL instance (v12+) is there a better method for transforming strings for a view?
Updated with a sample input/output table using the example function.
If you split the string of data into its individual regions, then replacing regions may become easier for you. For example:
with tb as (
select 1 as id, 'NORTH AMERICA - USA - USA - NORTHEAST - Massachusetts - Boston Metro' as record_region_input
union all
select 2 as id, 'NORTH AMERICA - USA - USA - MIDATLANTIC - Virginia'
union all
select 3 as id, 'Rest of the World - ASIA - China - Shanghai Metro'
)
select * from (
select distinct tb.id, unnest(string_to_array(record_region_input, ' - ')) as region from tb
order by tb.id
) a1 where a1.region not in ('NORTH AMERICA', 'Rest of the World');
-- Result:
1 Boston Metro
1 Massachusetts
1 NORTHEAST
1 USA
2 MIDATLANTIC
2 USA
2 Virginia
3 ASIA
3 China
3 Shanghai Metro
Then, for example, you can use DISTINCT for duplicated regions, NOT IN for unnecessary regions, and LIKE '%ASIA%' to get all regions which contain ASIA, and so on. After all that processing, you can merge the corrected string back together. Example:
with tb as (
select 1 as id, 'NORTH AMERICA - USA - USA - NORTHEAST - Massachusetts - Boston Metro' as record_region_input
union all
select 2 as id, 'NORTH AMERICA - USA - USA - MIDATLANTIC - Virginia'
union all
select 3 as id, 'Rest of the World - ASIA - China - Shanghai Metro'
)
select a1.id, string_agg(a1.region, ' - ') from (
select distinct tb.id, unnest(string_to_array(record_region_input, ' - ')) as region from tb
order by tb.id
) a1 where a1.region not in ('NORTH AMERICA', 'Rest of the World')
group by a1.id
-- Return:
1 Boston Metro - Massachusetts - NORTHEAST - USA
2 MIDATLANTIC - USA - Virginia
3 ASIA - China - Shanghai Metro
This is a simple idea; maybe it helps you with replacing the regions.
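If the main pain point is the nesting itself, another option - only a sketch, with a hypothetical region_substitutions table and function name - is to keep the known substitutions as rows and apply them in order inside a small PL/pgSQL function, so adding or dropping a rule becomes an INSERT or DELETE instead of re-counting parentheses:

-- Hypothetical lookup table of known substitutions (pattern -> replacement)
CREATE TABLE region_substitutions (
    ord         int  PRIMARY KEY,  -- order in which the rules are applied
    pattern     text NOT NULL,
    replacement text NOT NULL
);

CREATE OR REPLACE FUNCTION public.region_cleanup_v2(record_region text)
RETURNS text
LANGUAGE plpgsql
STRICT
AS $function$
DECLARE
    sub    region_substitutions%ROWTYPE;
    result text := record_region;
BEGIN
    -- Apply every stored rule in turn, in a fixed order
    FOR sub IN SELECT * FROM region_substitutions ORDER BY ord LOOP
        result := regexp_replace(result, sub.pattern, sub.replacement);
    END LOOP;
    RETURN result;
END;
$function$;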
I need some help with reshaping some data into groups. The variables are country1 and country2, and samegroup, which indicates if the countries are in the same group (continent). The original data I have is something like this:
country1    country2    samegroup
China       Vietnam     1
France      Italy       1
Brazil      Argentina   1
Argentina   Brazil      1
Australia   US          0
US          Australia   0
Vietnam     China       1
Vietnam     Thailand    1
Thailand    Vietnam     1
Italy       France      1
And I would like the output to be this:
country     group
China       1
Vietnam     1
Thailand    1
Italy       2
France      2
Brazil      3
Argentina   3
Australia   4
US          5
My first instinct would be to sort the initial data by "samegroup", then reshape (long to wide). But that doesn't quite solve the issue and I'm not sure how to continue from there. Any help would be greatly appreciated!
Unless you have a non-standard definition of continent, it is much easier to use kountry (which you will probably have to install) than reshape or repeated merges:
clear
input str12 country1 str12 country2 byte samegroup
China Vietnam 1
France Italy 1
Brazil Argentina 1
Argentina Brazil 1
Australia US 0
US Australia 0
Vietnam China 1
Vietnam Thailand 1
Thailand Vietnam 1
Italy France 1
end
capture net install dm0038_1
kountry country1, from(other) geo(marc) marker
rename (country1 GEO) (country group)
sort group country
capture ssc install sencode
sencode group, replace // or use recode here
keep country group
duplicates drop
list, clean noobs
label list group
This will produce
. list, clean noobs
country group
China Asia
Thailand Asia
Vietnam Asia
Australia Australasia
France Europe
Italy Europe
US North America
Argentina South America
Brazil South America
. label list group
group:
1 Asia
2 Australasia
3 Europe
4 North America
5 South America
I have data like this, along with other columns, in a pandas df.
Apologies, I haven't figured out how to present the question with code for the dataframe - this is my first post.
Location:
- Tokyo, Japan
- Sacramento, USA
- Mexico City, Mexico
- Mexico City, Mexico
- Colorado Springs, USA
- New York, USA
- Chicago, USA
Does anyone know how I could isolate the country name from the location and create a new column with just the Country Name?
Try this:
In [29]: pd.DataFrame(df.Location.str.split(',',1).tolist(), columns = ['City','Country'])
Out[29]:
City Country
0 Tokyo Japan
1 Sacramento USA
2 Mexico City Mexico
3 Mexico City Mexico
4 Colorado Springs USA
5 New York USA
6 Chicago USA
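If you would rather attach the new columns to the original frame (and keep its index), str.split with expand=True does the same job. A small sketch, assuming the column is called Location as in the question:

import pandas as pd

df = pd.DataFrame({'Location': ['Tokyo, Japan', 'Sacramento, USA', 'Mexico City, Mexico',
                                'Mexico City, Mexico', 'Colorado Springs, USA',
                                'New York, USA', 'Chicago, USA']})

# Split on the first ", " only; expand=True returns two columns aligned to df's index
df[['City', 'Country']] = df['Location'].str.split(', ', n=1, expand=True)
print(df[['City', 'Country']])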
You can do this without any regular expressions - you can use String.indexOf(", ") to find the position of the separator in the string, and then use String.substring to cut the string down to just the section you want.
However, a regular expression can also do this easily, though it would likely be slower.
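In Python, str.find and slicing play the same roles as indexOf and substring. A minimal sketch for a single string:

location = "Colorado Springs, USA"

# Locate the separator and keep everything after it; no regular expressions needed
sep = location.find(", ")
country = location[sep + 2:] if sep != -1 else location
print(country)  # USA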
I have been trying to calculate the amount of turnover happening in executive boards between 2006 and 2009 in the financial sector.
For this I have data looking like the following:
Year Bank Director DirectorID (ISIN, RoA, Size etc)
2005 Bank1 John Smith 120
2005 Bank1 Barry Pooter 160
2005 Bank1 Jack Sparrow 2070
2006 Bank1 John Smith 120
2006 Bank1 Barry Pooter 160
2006 Bank1 Jack Sparrow 2070
2007 Bank1 John Smith 120
2007 Bank1 Barry Pooter 160
2007 Bank1 Jack Sparrow 2070
2008 Bank1 John Smith 120
2008 Bank1 Carla Jansen 250
2008 Bank1 Jack Sparrow 2070
2009 Bank1 John Smith 160
2009 Bank1 Carla Jansen 250
2009 Bank1 Mike Stata 875
And this data repeats for each bank from 2005 - 2015.
Now I have already made a turnover dummy variable with 0 = no change and 1 = change by using:
collapse (sum) DirectorID, by(ISIN Year Bank)
gen interest = inrange(Year, 2006,2009)
bysort ID interest (DirectorID) : gen temp = DirectorID[1] != DirectorID[_N]
replace temp = . if interest==0
bysort ID : egen changed = max(temp)
However, I would like to make turnover an actual variable counting how many changes were made, i.e. assume Bank2 made no changes (Turnover = 0), Bank3 made 6 changes (6 new managers came in, Turnover = 6), and Bank4 made 4 changes (4 new managers came in, Turnover = 4):
Bank Turnover (ISIN, RoA, Size, etc)
Bank1 2
Bank2 0
Bank3 6
Bank4 4
Is this possible with Stata (or SPSS if that happens to be the case)?
ISIN codes are my ID variable as they are linked to each specific bank.
Two new people entered the board of Bank1. For now it would show as Turnover = 2, as only 2 new people entered the organization's board. Had three people joined in the previous example, Turnover would be 3, as each change made to the board counts as +1 turnover regardless of the people leaving. Only people that join (whether they replace someone or are just an addition to the board) are of interest in my thesis.
However, this could also be calculated differently if that makes it easier; it depends on how I write my methodology. It would be fine if the turnover variable says how many changes were made per year, i.e. Turnover2005: 2005 - 2006, Turnover2006: 2006 - 2007, Turnover2007: 2007 - 2008 and Turnover2008: 2008 - 2009.
Finally, it's possible that TMTs grow, i.e. in 2005 Bank1 has 14 managers on the board and in 2006 they hire 3 new managers but only let 1 go. Now the board has 16 managers and made 3 changes (3 new managers).
This might help. The following code builds a panel dataset with four banks and five years. The xtset command lets you use time-series operators, which are well documented here (https://www.youtube.com/watch?v=ik8r4WvrPkc). (Note: for the sake of clear exposition, in this example Bank 1 had no changes, Bank 2 had two changes, Bank 3 had three, etc.)
// Clear the session and other memory.
set more off
clear all
// Input reproducible data.
input year bank_num ceo_num
2005 1 200
2006 1 200
2007 1 200
2008 1 200
2009 1 200
2005 2 222
2006 2 222
2007 2 222
2008 2 333
2009 2 444
2005 3 300
2006 3 301
2007 3 302
2008 3 302
2009 3 303
2005 4 999
2006 4 888
2007 4 777
2008 4 666
2009 4 555
end
// Declare the panel structure.
xtset bank_num year
// Gen variable indicating if ceo_num stayed same.
// Resulting variable is 0 when there was no change.
gen no_turn = (ceo_num - f1.ceo_num)
// Gen dummy to indicate if ceo_num changed.
gen is_turn = (no_turn != 0 & no_turn < .)
// Gen a variable that counts changes.
egen turn_nums = sum(is_turn), by(bank_num)
// List data to inspect results.
list
Edit: Re-characterized comment for no_turn variable.
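Since the question's data has several directors per bank and year (rather than a single ceo_num), one way to count joiners directly - only a sketch, reusing the variable names from the question (Year, Bank, DirectorID) - is to flag the first year each DirectorID appears at a bank and total those flags over the window of interest:

* Flag the first year each director appears at a given bank; a first
* appearance after the baseline year (2005) is treated as a join.
bysort Bank DirectorID (Year): gen byte joined = (_n == 1 & Year > 2005)

* Total the joins per bank over the 2006-2009 window.
bysort Bank: egen Turnover = total(cond(inrange(Year, 2006, 2009), joined, 0))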