We are working on a project where we are planning to generate multi-line CSV files and load them into Informatica via PowerExchange.
Our CSV file will look like the example below.
Field1, Field2
"Row 1 Cell 1 line 1
Row 1 Cell 1 line 2", "Row 1 Cell 2 Line 1
Row 1 Cell 2 line 2"
"Row 2 Cell 1 line 1
Row 2 Cell 1 line 2", "Row 2 Cell 2 Line 1
Row 2 Cell 2 Line 2"
Just wanted to check: will PowerExchange understand this type of CSV file?
Update: We tried loading a sample file into PowerExchange and it converted the new lines to spaces :-( Is there a way to retain them as new lines and load them into the data center as new lines?
You can use the Informatica custom property "MatchQuotesPastEndOfLine = Yes" in the session settings.
CSV file: the CSV file should contain comma-separated values; multi-line separation will not work.
PowerExchange: when you create PowerExchange datamaps for the CSV file, you can preview the data. How does the data look there? Try playing with the PowerExchange settings while creating the datamaps and see if you can configure some properties.
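On the generation side, one way to make sure the embedded newlines end up quoted correctly is to let a CSV library do the quoting. A minimal Python sketch, assuming the output file is named multiline.csv (the csv module quotes fields that contain newlines automatically):

import csv

rows = [
    ["Row 1 Cell 1 line 1\nRow 1 Cell 1 line 2", "Row 1 Cell 2 Line 1\nRow 1 Cell 2 line 2"],
    ["Row 2 Cell 1 line 1\nRow 2 Cell 1 line 2", "Row 2 Cell 2 Line 1\nRow 2 Cell 2 Line 2"],
]

with open('multiline.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(["Field1", "Field2"])  # header row
    writer.writerows(rows)                 # fields containing newlines are quoted automatically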
I have multiple sheets in one Excel file, like Sheet1, Sheet2, Sheet3, etc. Now I have to list one particular column from all of them in one CSV file. All the sheets have one common column, "Attribute", and only those records should be listed in the CSV file line by line (the first sheet's 'Attribute' values should be on the 1st line, the 2nd sheet's 'Attribute' values on the 2nd line, and so on).
For instance,
Sheet1:
Attribute,Order
P,1
Emp_ID,2
DOJ,3
Name,4
Sheet2:
Attribute,Order
C,1
Emp_ID,2
Exp,3
LWD,4
Expected result: (In some .csv file)
P,Emp_ID,DOJ,name
C,Emp_ID,Exp,LWD
Note: the line starting with P should be the first line, the line starting with C the second line, and so on.
Below is my code:
import pandas as pd
excel = 'E:\Python Utility\Inbound.xlsx'
K = 'E:\Python Utility\Headers_Files\All_Header.csv'
df = pd.read_excel(excel,sheet_name = None)
data = pd.DataFrame(df,columns=['Attribute']).T
print data
M = data.to_csv(K, encoding='utf-8',index=False,header=False)
print 'done'
The output shows as below:
Empty DataFrame Columns: [] Index: [Attribute] done
If I use sheet_name = 'sheet1' then the DataFrame works fine and the data is loaded into the CSV file as expected.
Thanks in advance
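One possible approach, sketched on the assumption that every sheet has an 'Attribute' column: with sheet_name=None, read_excel returns a dict of DataFrames, so each sheet's 'Attribute' values can be written out as one CSV row (the paths below are the ones from the question):

import csv
import pandas as pd

excel = r'E:\Python Utility\Inbound.xlsx'
out_path = r'E:\Python Utility\Headers_Files\All_Header.csv'

# sheet_name=None returns a dict of {sheet name: DataFrame}, one entry per sheet
sheets = pd.read_excel(excel, sheet_name=None)

with open(out_path, 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    for name, df in sheets.items():
        # one output line per sheet, holding that sheet's 'Attribute' values
        writer.writerow(df['Attribute'].tolist())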
I have example data in a CSV file
Some text
Text 2
Text 3
I want to output it with line breaks into a text area, but I get an error when trying to put it in as-is.
It works only when I remove the line breaks.
iimPlayCode('ADD !EXTRACT {{!COL1}}\n');
var description = iimGetExtract(1);
Edit your CSV file by adding double quotes:
"Some text
Text 2
Text 3"
And run the code:
iimPlayCode('SET !DATASOURCE yourfile.txt\nADD !EXTRACT {{!COL1}}');
var description = iimGetExtract(1);
I'm kinda new to Python, so thx for your help!
I want to tell Python to take a big .csv list and split it up into many small lists of only two columns each.
Take this .csv file.
Always use the column "year", which is the first column.
Then always take the next column (for loop?), starting with column 2, which is "Object1", then column 3, which is "Object2", and so on...
Save each list as .csv - now only containing two columns - and name it after the second column (e.g. "Object1").
So far I am up to this:
import csv

object = 0
f = open("/home/Data/data.csv")
csv_f = csv.reader(f, delimiter=';', quotechar='|')
writer = csv.writer(csv_f)
for row in csv_f:
    writer("[0],[object]")
    object += 1
f.close()
Your code is trying to open the same file for reading and writing, which may have unexpected results.
Think about your problem as a series of steps; one way to approach the problem is:
Open the big file
Read the first line of the file, which contains the column titles.
Go through the column titles (the first line of your big csv file), skipping the first one, then:
For each column title, create a new csv file, where the filename is the name of the column.
Take the value of the first column, plus the value of the column you are currently reading, and write it to the file.
Repeat till all column titles are read
Close the file
Close the big file.
Here is the same approach as above, taking advantage of Python's csv reading capabilities:
import csv

with open('big-file.csv') as f:
    reader = csv.reader(f, delimiter=';', quotechar='|')
    titles = next(reader)
    for index, column_name in enumerate(titles[1:]):
        with open('{}.csv'.format(column_name), 'w') as i:
            writer = csv.writer(i, delimiter=';', quotechar='|')
            for row in reader:
                writer.writerow((row[0], row[index + 1]))
        f.seek(0)     # start from the top of the big file again
        next(reader)  # skip the header row
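For example, with a hypothetical big-file.csv like this (';'-delimited, as in the code above):

year;Object1;Object2
2001;10;20
2002;11;21

the script writes Object1.csv (containing 2001;10 and 2002;11) and Object2.csv (containing 2001;20 and 2002;21), each file named after its second column.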
I want to write a Python script which should read an xlsx file and, based on the value of column X, write/append a file with the value of column Z.
Sample data:
Column A Column X Column Y Column Z
123 abc test value 1
124 xyz test value 2
125 xyz test value 3
126 abc test value 4
If the value in Column X = abc, then it should create a file (if one does not already exist) in some path with the name abc.txt and insert the value of Column Z into abc.txt; likewise, if Column X = xyz, it should create a file in the same path named xyz.txt and insert the value of Column Z into xyz.txt.
from openpyxl import load_workbook
wb = load_workbook('filename.xlsm')
ws = wb.active
for cell in ws.columns[9]:  # column 9 is the one I am testing, i.e. Column X of my example
    if cell.value == "abc":
        print ws.cell(column=12).value  # this is not working and I don't know how to read the corresponding value of another column
Please suggest what could be done.
Thank you.
Change
print ws.cell(column=12).value
with:
print ws.columns[col][row].value
in your case:
print ws.columns[12-1][cell.row-1].value
Note that with this indexing method, columns and rows start at index 0. That is why I'm using cell.row-1; take it into account when you address your column, so if your 12 starts counting from 1 you'll have to use 11.
Alternatively, you can access your information cell like this: ws.cell(row = cell.row, column = 12).value. Note that in this case columns and rows start at 1.
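Putting the pieces together, a minimal sketch of the whole create/append flow might look like this (not the original poster's code; it assumes the sample layout above, with Column X as the 2nd column and Column Z as the 4th, so adjust the indices for the real workbook):

from openpyxl import load_workbook

wb = load_workbook('filename.xlsm')
ws = wb.active

for row in ws.iter_rows(min_row=2):  # skip the header row
    key = row[1].value    # Column X (2nd column in the sample data)
    value = row[3].value  # Column Z (4th column in the sample data)
    if key is None:
        continue
    # append the Column Z value to a file named after the Column X value, e.g. abc.txt
    with open('{}.txt'.format(key), 'a') as f:
        f.write('{}\n'.format(value))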
Suppose I have a tabular column as below. Now I want to extract the data column-wise. I tried extracting the data by creating a list, but while it extracts the first row correctly, from the second row onwards there is a space, i.e. under CEN/4, so my code treats 5.0001e-1 as the zeroth column from the second row and starts reading from there. How do I extract the data correctly column-wise? The output is scrambled.
0 1 25 CEN/4 -5.000000E-01 -3.607026E+04 -5.747796E+03 -8.912796E+02 -88.3178
5.000000E-01 3.607026E+04 5.747796E+03 8.912796E+02 1.6822
27 -5.000000E-01 -3.641444E+04 -5.783247E+03 -8.912796E+02 -88.3347
5.000000E-01 3.641444E+04 5.783247E+03 8.912796E+02 1.6653
28 -5.000000E-01 -3.641444E+04 -5.712346E+03 -8.912796E+02 -88.3386
5.000000E-01 3.641444E+04 5.712346E+03 8.912796E+02
My code is:
f1 = open('newdata1.txt', 'w')
L = []
for index, line in enumerate(open('Trial_1.txt', 'r')):
    #print index
    if index < 0:  #skip first 5 lines
        continue
    else:
        line = line.split()
        L.append('%s\t%s\t %s\n' % (line[0], line[1], line[2]))
f1.writelines(L)
f1.close()
My output looks like this:
0 1 CEN/4 -5.000000E-01 -5.120107E+04
5.000000E-01 5.120107E+04 1.028093E+04 5.979930E+03 8.1461
I want columnar data as it is in the file. How do I do that? I am a beginner.
It's hard to tell from the way the input data is presented in your question, but I'm guessing your file uses tabs to separate columns. In any case, consider using the Python csv module with the relevant delimiter, like:
import csv

with open('input.csv') as f_in, open('newdata1', 'w') as f_out:
    reader = csv.reader(f_in, delimiter='\t')
    writer = csv.writer(f_out, delimiter='\t')
    for row in reader:
        writer.writerow(row)
See the Python csv module documentation for further details.
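If the delimiter is unclear, csv.Sniffer can sometimes detect it automatically; a quick check, assuming the Trial_1.txt file name from the question:

import csv

with open('Trial_1.txt') as f:
    sample = f.read(2048)  # a small sample is enough for sniffing
try:
    dialect = csv.Sniffer().sniff(sample, delimiters='\t ;,')
    print('Detected delimiter: %r' % dialect.delimiter)
except csv.Error:
    print('Could not detect a delimiter automatically')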