Load files from a folder with a custom query in Power BI

I am trying to load CSV files from a folder, but I need to apply several custom steps to each file, including dropping the default PromoteHeaders behavior.
I have a custom query that can load a single file successfully. How do I turn it into a query that loads all the files in a folder?
By default, the folder connector's automatic "promote headers" step messes up my data because of a missing column name (which my custom query fixes).

The easiest way to create a function that reads a specific kind of file is to start from a working query. Create the M that reads one file, then right-click the query and turn it into a function.
After that it is really simple to adapt your M so that it uses parameters.
You can create a blank query and, in the Advanced Editor, replace the code with the one below as an example; customize it with more steps to deal with your file's requirements.
(myFile) =>
let
    Source = Csv.Document(myFile, [Delimiter=",", Columns=33, Encoding=1252, QuoteStyle=QuoteStyle.None])
in
    Source
Then use Invoke Custom Function for each file in the folder query, with the file's Content column as the parameter; a sketch of the whole pattern follows.
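As a minimal sketch of that pattern (assuming the function above has been saved as a query named LoadFile, a hypothetical name, and with a placeholder folder path), a query along these lines lists the folder, applies the function to each file's binary content, and appends the results:
let
    // List every file in the folder (placeholder path)
    Source = Folder.Files("C:\Data\MyFolder"),
    // Keep only the CSV files
    CsvOnly = Table.SelectRows(Source, each Text.Lower([Extension]) = ".csv"),
    // Apply the custom function to each file's Content (binary) column
    Loaded = Table.AddColumn(CsvOnly, "Data", each LoadFile([Content])),
    // Append all the per-file tables into one
    Combined = Table.Combine(Loaded[Data])
in
    Combined
Because the function controls every step, headers are only promoted if you add that step yourself.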

Related

Is it possible to save reports and data transformation steps in Power BI?

I have prepared some reports based on files I prepared. I am wondering: is it possible to save this report (measures and visualizations) together with the steps I made while transforming the data? I want to be able to load new files (with the same structure as the ones I used when creating the report) and have the data transformation and the report applied automatically to the updated data.
Is it possible?
You can save it as a template, with the file extension .pbit. A template saves only the structure of the file, without the actual data. When you open the template it refreshes the data, and if there are parameters in the report (e.g. a folder/file path or a server address) it refreshes using the input values you supply.
You can read more here:
https://learn.microsoft.com/en-us/power-bi/create-reports/desktop-templates
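For instance (a sketch; FolderPath is a hypothetical text parameter defined under Manage Parameters), a source query can reference the parameter so that opening the template prompts for a new value:
let
    // FolderPath is a text parameter defined in Manage Parameters (hypothetical name)
    Source = Folder.Files(FolderPath)
in
    Source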
You can simply save your file as a template.

How to change the data of a sample .pbix?

I want to change the data of a sample .pbix, but when I update the data it cannot find the path of the source Excel file, since I don't have it. Is there a way to do this without having that file?
You can check the existing source from Transform data --> Data source settings and find the path of the existing Excel file; if you cannot find it, you can add another data source to replace it.

How to load from a folder without combining files

I'm trying to load several CSV files from a folder that aren't related to each other.
It's not letting me load them without combining them.
Get Data > File > Folder
When they load, Power BI automatically tries to combine them, which creates nulls in the rows coming from one of the files.
When you get data from a folder, Power BI combines all the files into one table. To load each file into its own table, get data from a Text/CSV file instead and import each of the files you want separately; each import produces a query like the sketch below.
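For reference (a sketch; the path is a placeholder, and the delimiter/encoding are assumptions to adjust for your files), each per-file Text/CSV import generates M along these lines:
let
    // Read one CSV file on its own (placeholder path)
    Source = Csv.Document(File.Contents("C:\Data\file1.csv"), [Delimiter=",", Encoding=1252, QuoteStyle=QuoteStyle.None]),
    // Promote the first row to column headers
    Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars=true])
in
    Promoted
Repeating this per file gives each CSV its own table, so nothing is combined and no nulls are introduced.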

Is it possible to write back to data file in postman?

While working with Postman, data.someVariable returns data from within a CSV file; it can also be used as {{someVariable}} in the URI/JSON.
This gives us the data for that variable from that row/iteration.
Is there a mechanism to write back to the data file by doing something like postman.setData('responseCode') = responseCode?
This would be really helpful for storing the response code in the data file and for recording call-wise details in the same format as the input CSV.
The only solution I figured out is:
to populate JSON objects in the environment with information about the data file name and the structure/values of the information to be added;
to create a separate web service (maybe in Node.js) that exposes an HTTP call to write to a file, takes as a parameter a JSON input like the one created in the environment above, and writes it to the original data file (or a copy of it) in the desired format;
to call the above-mentioned web service at the end of each run, or after each desired REST call, to generate a step-wise information/debug report.
There is no way to write back to the data file in Postman as of now.
However, you can populate the value in your environment at run time using
pm.environment.set("varname", value)
Name varname in such a way that you know it is the variable you wanted to write back into the data file.

Use tFileUnarchive on Amazon S3

I have a Talend job which is as simple as this:
tS3Connection -> tS3Get -> tFileInputDelimited -> tMap -> tAmazonMysqlOutput.
Now the scenario here is that sometimes I get the file in .txt format and sometimes I get it as a zip file.
So I want to use tFileUnarchive to unzip the file if it is zipped, or to bypass the tFileUnarchive component if the file arrives unzipped, i.e. already in .txt format.
Any help on this is greatly appreciated.
The trick here is to break the file retrieval and potential unzipping into one subjob, and then the processing of the files into another subjob afterwards.
Here's a simple example job.
As normal, you connect to S3 and then you might list all the relevant objects in the bucket using tS3List and pass them to tS3Get. Alternatively, you might have another way of passing the relevant object key that you want to download to tS3Get.
In this example I set tS3Get up to fetch every object that tS3List iterates on by setting the key as:
((String)globalMap.get("tS3List_1_CURRENT_KEY"))
and then downloading it to:
"C:/Talend/5.6.1/studio/workspace/S3_downloads/" + ((String)globalMap.get("tS3List_1_CURRENT_KEY"))
The extra bit I've added starts with a Run If conditional link from tS3Get to tFileUnarchive, with the condition:
((String)globalMap.get("tS3List_1_CURRENT_KEY")).endsWith(".zip")
This checks whether the file being downloaded from S3 is a .zip file.
The tFileUnarchive component then just needs to be told what to unzip, which will be the file we've just downloaded:
"C:/Talend/5.6.1/studio/workspace/S3_downloads/" + ((String)globalMap.get("tS3List_1_CURRENT_KEY"))
and where to extract it to:
"C:/Talend/5.6.1/studio/workspace/S3_downloads"
This then puts any extracted files in the same place as the ones that didn't need extracting.
From here we can iterate through the downloads folder with a tFileList component, looking for the file types we want, by setting the directory to "C:/Talend/5.6.1/studio/workspace/S3_downloads" and the file mask to "*.csv" (in my case, because I wanted to read in only the CSV files, including the ones that arrived zipped, that I had in S3).
Finally, we then read the delimited files by setting the file to be read by the tFileInputDelimited component as:
((String)globalMap.get("tFileList_1_CURRENT_FILEPATH"))
In my case I simply printed this to the console, but obviously you would then want to perform some transformation before uploading to your AWS RDS instance.