IICS Informatica - Not creating Flat File

My source is a query:
select col1 from tab1 where exe_dt > sysdate;
The above query returns no rows.
My mapping:
Source (Oracle 12c Table) -> (Passthrough) Expression -> Target (Flat File (pipe delimited))
Even though there is no data, I need to create the output flat file with nothing in it.
I am creating the target file with a dynamic file name at run time, with the date concatenated - ABC_20210311_1a.csv. The date part of the file name changes every day.
Any help is appreciated.
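For the dynamic file name, a minimal sketch of the expression, assuming an output field (here hypothetically named o_FileName) mapped to the flat file target's FileName field; only the 'ABC_' / '_1a.csv' pattern comes from the question:

```
-- hypothetical port o_FileName, mapped to the flat file target's FileName field
'ABC_' || TO_CHAR(SYSDATE, 'YYYYMMDD') || '_1a.csv'
```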

Related

When storing an Impala table as a textfile, is it possible to tell it to save column names in the textfile?

I have created an Impala table as
create table my_schema.my_table stored as textfile as select ...
As per the definition, the table has its data stored in text files somewhere in HDFS. But when I run an HDFS command such as:
hadoop fs -cat path_to_file | head
I do not see any column names. I suppose Impala stores the column names somewhere else, but since I would like to work with these text files outside of Impala as well, it would be great if the files included the headers.
Is there some option I can set when creating the table to add the headers to the text files? Or do I need to figure out the names by parsing the results of show create table?
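Impala itself does not write column headers into a text table's data files; the column names live only in the metastore. As a sketch of one workaround (assuming impala-shell is available; the table name is the one from the question, the output file name is made up), you can re-export the data with a header row:

```shell
# -B produces delimited text, --print_header adds the column names as the first line
impala-shell -B --print_header --output_delimiter=',' \
  -q "SELECT * FROM my_schema.my_table" \
  -o my_table_with_header.csv
```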

Ordering the columns in the output of mapping task in Informatica cloud

I'm creating a mapping task in Informatica Cloud to union and join 5 flat files and apply some transformation logic on top of them. I'm producing the output in .txt / .csv format for downstream processing, and it is loaded into a data warehouse in a specific column order.
I have to generate the output file at runtime, because a Liaison connection automatically cuts the output file I drop and pastes it into the data warehouse. (So I cannot use metadata and field mapping.)
Is there any tool in the design pane which I can use to set the column order of the output (e.g., column A should be the first column, column C the second, column B the third)?
If there is no tool / object readily available inside the design pane of the mapping task, is there any workaround to do the same?

Power BI - Use slicer to fetch file from folder

I have files in a folder which all have the same structure. The only difference between them is that a new file is created for each day, named with that day's date.
So if a file is created on 11th September 2019, its name would be 11092019.xlsx.
I have created a slicer which fetches the names of all files present in this folder.
Keeping in mind that the file format is the same and the only differences are the file names and the data values in them: is there any way that, when I select a value from the slicer, the respective file's data is displayed in a table visual?
It is not possible to load a file interactively based on the slicer value.
You can achieve this as follows:
1. Load all the files in the folder.
2. Combine them into a single table, adding a "File Name" column.
3. Use the slicer to show the records coming from the selected file.
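The steps above can be sketched in Power Query M; the folder path, the Excel parsing and the step names are assumptions, not from the question:

```m
let
    // 1. Load all files in the folder (placeholder path)
    Source = Folder.Files("C:\DailyFiles"),
    // 2a. Keep the file name so the slicer can filter on it later
    Named = Table.AddColumn(Source, "File Name", each [Name]),
    // 2b. Parse the first sheet of each workbook
    Parsed = Table.AddColumn(Named, "Data",
        each Table.PromoteHeaders(Excel.Workbook([Content]){0}[Data])),
    // 2c. Combine everything into a single table
    Combined = Table.ExpandTableColumn(Parsed, "Data",
        Table.ColumnNames(Parsed{0}[Data]))
in
    Combined
```

A slicer bound to the "File Name" column of the combined table then filters the table visual down to one day's file.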

How to load string from flat file into date in target table(Teradata) using informatica?

I am trying to load a string from a flat file into a date column in a target Teradata table using Informatica.
The workflow succeeds, but no data is loaded into the table. When I run the debugger, data passes through the Source Qualifier and the Expression. When I debug the target instance, I get 'no data available' for the date field.
Could anyone help me understand how to load a string from a flat file into a date column in a Teradata target table?
Date format used: MM/DD/YYYY
The source data type is string(10) and
the target data type is date in the format MM/DD/YYYY.
--
Thanks,
SP
While debugging, add an Expression transformation and check how the string is being converted to a date.
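As a sketch of the usual fix (the input port name in_date_str is hypothetical; the format mask is the one from the question), convert the string in an Expression transformation before it reaches the Teradata target:

```
-- guard against unparseable rows, then convert MM/DD/YYYY strings to dates
IIF(IS_DATE(in_date_str, 'MM/DD/YYYY'),
    TO_DATE(in_date_str, 'MM/DD/YYYY'),
    NULL)
```

If IS_DATE returns false for every row, the string does not actually match the mask (e.g. stray spaces or a different separator), which would explain the empty target.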

Pentaho DI (Kettle) best way to select flow based on csv file header?

I'm using Pentaho DI (Kettle) and I'm not sure of the best way to do the following:
From a downloaded CSV file, check whether a column exists, and based on that select the right next step.
There are 3 possible options.
Thanks,
Isaac
You did not mention the possible options, so I'll just provide a sketch showing how to check whether a column exists in a file.
For this you will need a CSV file input step and a Metadata structure of stream step, which reads the metadata of the incoming stream.
For a sample CSV file with 3 columns named col1, col2 and col3, the Metadata step gives you every column in a separate row, with its name as the value in the Fieldname column.
Then, depending on your needs, you could use for example a Filter Rows or a Switch / Case step for further processing.