I am trying to find a way to load an external data file from a relative path, so that when someone else opens my PBIX it will still work on their computer.
Many thanks.
Relative paths are *not* currently supported by Power BI.
To ease the pain, you can create a variable that contains the path where the files are located, and use that variable to determine the path of each table. That way, you only have to change a single place (that variable) and all the tables will automatically point to the new location.
Create a Blank Query, give it a name (e.g. dataFolderPath) and type in the path where your files are (e.g. C:\Users\augustoproiete\Desktop)
With the variable created, edit each of your tables in the Advanced Editor and concatenate your variable with the name of the file.
e.g. instead of "C:\Users\augustoproiete\Desktop\data.xlsx", change it to dataFolderPath & "\data.xlsx"
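For example, assuming an Excel source, the two queries might look like this (a sketch; the folder and file names are illustrative):

// Blank query named dataFolderPath, returning the folder as a text value
"C:\Users\augustoproiete\Desktop"

// A table's source step, concatenating the variable with the file name
let
    Source = Excel.Workbook(File.Contents(dataFolderPath & "\data.xlsx"), null, true)
in
    Source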
You can also vote/watch this feature request to be notified when it gets implemented:
Support relative path to excel/csv sources
You can also use the built-in "Parameters" feature.
1. Create a new Parameter like "PathExcelFiles"
2. Edit your "Source" entry to use the parameter
Done!
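For reference, the edited Source step might then look like this (a sketch, assuming an Excel source and an illustrative file name):

Source = Excel.Workbook(File.Contents(PathExcelFiles & "\data.xlsx"), null, true)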
I don't think this is possible yet.
Please add your support for this idea so the Microsoft Power BI team will be more likely to add this as a new feature.
I couldn't bear the fact that there is no way to use relative paths, but finally I had to accept it...
So I tried to find a half-decent, acceptable workaround.
Using a Python script, it is at least possible to get access to the user's %HOME% directory.
let
    // Run a tiny Python script that returns the user's home directory as a one-cell dataframe
    PySource = Python.Execute("from pathlib import Path#(lf)import pandas as pd#(lf)dataset = pd.DataFrame([[str(Path.home())]], columns = [1])"),
    // Pull out the single cell and strip any whitespace
    homeDir = Text.Trim(Lines.ToText(PySource{[Name="dataset"]}[Value][1])),
...
The same should be possible with an R script, but I didn't try it.
Does anybody know a better way to get the %HOME% directory inside "Power" Query? I would be glad to have one.
Then I created two scripts inside my working directory. install.bat:
@ECHO OFF
if exist "%HOME%\.pbiTemplatePath\filepath.txt" GOTO :ERROR
:: These are the key commands: create the marker folder and write the current directory into it
mkdir "%HOME%\.pbiTemplatePath"
echo|set /p="%cd%" > "%HOME%\.pbiTemplatePath\filepath.txt"
GOTO :END
:: Just a little message box
:ERROR
SET msgboxTitle=There is already another working directory installed.
SET /p msgboxBody=<"%HOME%\.pbiTemplatePath\filepath.txt"
SET tmpmsgbox=%temp%\~tmpmsgbox.vbs
IF EXIST "%tmpmsgbox%" DEL /F /Q "%tmpmsgbox%"
ECHO msgbox "%msgboxBody%",0,"%msgboxTitle%">"%tmpmsgbox%"
WSCRIPT "%tmpmsgbox%"
:END
and uninstall_all.bat:
@ECHO OFF
if exist "%HOME%\.pbiTemplatePath\filepath.txt" RMDIR /S /Q "%HOME%\.pbiTemplatePath\"
So in "Power" BI I did this:
let
    // Same Python trick as above to get the home directory
    PySource = Python.Execute("from pathlib import Path#(lf)import pandas as pd#(lf)dataset = pd.DataFrame([[str(Path.home())]], columns = [1])"),
    homeDir = Text.Trim(Lines.ToText(PySource{[Name="dataset"]}[Value][1])),
    // Path of the pointer file written by install.bat
    workingDirFile = Text.Combine({homeDir, ".pbiTemplatePath\filepath.txt"}, "\"),
    // Read the working directory stored in the pointer file
    workingDir = Text.Trim(Lines.ToText(Csv.Document(File.Contents(workingDirFile), [Delimiter=";", Columns=1, QuoteStyle=QuoteStyle.None])[Column1])),
...
Now my git repository contains a "Power" BI template file, some config files telling the template where to load the data from, and the install/uninstall scripts. install.bat has to be executed once, and nobody has to copy and paste any path.
I'd be glad about any suggestions for improvement. It's not the solution Gotham deserves... Gotham deserves a better one.
As mentioned by a few people, you can use a dataset parameter and reference that in your script. What I haven't seen mentioned is that you can change these values using an API call:
https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/update-parameters
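For example, a minimal sketch in Python (assuming you already have an Azure AD access token with the right scopes; the dataset ID, parameter name, and new value below are placeholders):

import requests

dataset_id = "<dataset-id>"        # placeholder
token = "<azure-ad-access-token>"  # placeholder

# Update the PathExcelFiles parameter on the dataset
resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/Default.UpdateParameters",
    headers={"Authorization": f"Bearer {token}"},
    json={"updateDetails": [{"name": "PathExcelFiles", "newValue": "D:\\Data"}]},
)
resp.raise_for_status()

Note that the dataset typically needs a refresh afterwards for the new value to take effect.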
When I execute
sp.preview(expression, viewer="file", filename="expression.png")
SymPy automatically saves the preview in C:\Users\my username.
How do I change the save path?
I'm not sure if there is a canonical way to do this, but you can get the SYMPY_PATH as
from sympy.testing.tests.test_code_quality import SYMPY_PATH
and at least add that as a prefix to your filename (joined however your system likes paths joined). If that works and puts the file in the SymPy directory, then you can substitute any path where you would like the file to go.
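For example (a sketch; the expression is arbitrary, os.path.join handles the separator, and a working LaTeX toolchain is assumed for preview):

import os
import sympy as sp
from sympy.testing.tests.test_code_quality import SYMPY_PATH

x = sp.symbols("x")
expression = sp.sqrt(x)

# Prefix the target directory onto the filename
sp.preview(expression, viewer="file",
           filename=os.path.join(SYMPY_PATH, "expression.png"))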
I am trying to append multiple Excel files into a large database by executing the following code:
cls
set more off
clear all
global route = "C:\Users\NICOLE\Desktop\CAR"
cd "$route"
tempfile buildDB
save `buildDB', emptyok
local filenames : dir "$route" files "*.xlsx"
display `filenames'
foreach f of local filenames {
    import excel using `"`f'"', firstrow allstring clear
    gen source = `"`f'"'
    append using `buildDB'
    save `"`buildDB'"', replace
}
save "C:\Users\NICOLE\Desktop\CAR\DB_EG-RAC.dta" ,replace
Stata manages to append all of the files, but it also displays the following error message:
file C:\Users\NICOLE.xlsx not found r(601);
And I do not know how to solve it, because it keeps my code from running as it should. Thanks!
We have a deadlock here. On the face of it, the filename in question is not one you write in your code; it could only be part of the result of
local filenames : dir "$route" files "*.xlsx"
But the file named in the error isn't even in the directory you name there. Moreover, you are adamant that the file doesn't exist, and according to your error report Stata can't find it.
The question still remains: how does Stata get asked to open a file that supposedly doesn't exist?
My only guesses are feeble:
Code you are not showing is responsible.
You are running slightly different versions of this script in different places and getting confused. Can you reproduce the error you got, from scratch? Have you searched everywhere remotely possible on the C: drive for this file NICOLE.xlsx?
It is crucial to realise that we can test nothing here. The problem has not been presented reproducibly.
I have a suite with 50 test cases. When I execute the suite, all the failed screenshots are listed in the project's folder. I want to point and store those screenshots in a different directory, named after the test case. I want it to be a one-time setup rather than doing it explicitly for every test case.
There are quite a few ways to change the default screenshot directory.
One way is to set the screenshot_root_directory argument when importing Selenium2Library. See the importing section of Selenium2Library's documentation, and importing libraries in the user guide.
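For example (a sketch; the directory value is illustrative):

*** Settings ***
Library    Selenium2Library    screenshot_root_directory=${OUTPUT DIR}${/}screenshots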
Another way is to use the Set Screenshot Directory keyword, which does pretty much the same thing as specifying a path when importing the library. With this keyword, though, you can change the path to a new one whenever you like. For example, you could give each test case its own screenshot directory. Based on your question, this may be the best solution.
And finally, you may also post-process screenshots using an external tool, or even a listener, to move all screenshots to another directory. The previously mentioned solutions are in most cases much better, but you may still want this in some cases where, say, the directory the screenshots should end up in is only created after the tests have finished executing.
I suggest you do the following:
To set a new directory, put the following immediately after you open the browser:
Open Browser    ${URL}    chrome
Set Screenshot Directory    ${OUTPUT FILE}${/}..${/}${TEST_NAME}${/}
To replace the default screenshot name with your own, create the following keyword:
sc
    Capture Page Screenshot    filename=${SUITE_NAME}-{index}.png
Then, in the test case's Setup, register it as the keyword to run on failure:
Register Keyword To Run On Failure    sc
In the above example, I create a new folder named after the test case, and on failure a screenshot is saved there, named after the suite (instead of 'selenium-screenshot-1.png').
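Putting it all together, a minimal sketch (the setup keyword name and ${URL} are illustrative):

*** Settings ***
Library       Selenium2Library
Test Setup    Prepare Failure Screenshots

*** Keywords ***
Prepare Failure Screenshots
    Open Browser    ${URL}    chrome
    Set Screenshot Directory    ${OUTPUT FILE}${/}..${/}${TEST_NAME}${/}
    Register Keyword To Run On Failure    sc

sc
    Capture Page Screenshot    filename=${SUITE_NAME}-{index}.png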
I'm using Knime 3.1.2 on OSX and Linux for OPENMS analysis (Mass Spectrometry).
Currently, it uses static filename.mzML files manually put in a directory. It usually pulls in more than one file at a time (the 'Input Files' node, not the 'Input File' node) using a ZipLoopStart.
I want these files to be downloaded dynamically and then fed into the workflow...but I'm not sure of the best way to do that.
Currently, I have a Python script that downloads .gz files (from AWS S3) and then unzips them. I already have variations that can unzip the files into memory using StringIO (and maybe pass them into the workflow from there as data??).
It can also download them to a directory...which maybe can then be used as the source? But I don't know how to tell the ZipLoop to wait and check the directory after the Python script is run.
I could also have the Python script run as a separate entity (outside of KNIME) and then, once the directory is populated, call KNIME...HOWEVER there will always be a different number of files (maybe one, maybe three)...and I don't know how to make the 'Input Files' KNIME node handle an unknown number of input files.
I hope this makes sense.
Thanks!
Thanks to Gábor for getting me on the right track, although I ended up taking a slightly different route after much experimentation.
===
Being new to KNIME, I don't know if this is an efficient use of KNIME or a complete kludge...but it does work.
So, part of the problem is some of the KNIME-specific objects, one of which is called URIDataValue.
A Python Pandas DataFrame is, apparently, interchangeable with KNIME tables. However, I don't know if there's a way to import one of these URIDataValue objects into Python. So here's what I did...
1. I wrote a Python script that creates a Pandas DataFrame and populates it with one column. Everything is a string, including the column header:
from pandas import DataFrame

# Create a one-column table of file URIs (plain strings)
T = DataFrame(
    [
        ['file:///Users/.../copy/lfq_spikein_dilution_1.mzML'],
        ['file:///Users/.../copy/lfq_spikein_dilution_2.mzML'],
    ],
)
T.columns = ['URIDataValue']

#print T
output_table = T
That creates a single-column dataframe of path strings.
Note: The column name and values are just strings. But it is (apparently) important that the column header be 'URIDataValue'...even though here it's just text. If the column name is not 'URIDataValue', the next node doesn't know what to do.
NEXT, the 'output_table' from the 'Python Source' node is patched to a 'String to URI' node, which (apparently and magically) knows to change the entire column's string values to URIDataValues (presumably based on the name of the first column...I don't know that for sure).
Finally, the NEW table, with the correct data objects, goes to a 'URI to PORT' node...since apparently 'Port' objects and 'URI' objects are different.
This then matches the needed input of the ZipLoop...which is normally the output from a static (hard-coded) 'Input Files' node.
Now, to actually solve the question above, I just have to add the code to my 'Python Source' to download and unzip the S3 files, then annotate the dataframe with their locations, and go.
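For that last step, something along these lines might work (a rough sketch, not what I actually ran; the bucket name, prefix, and destination folder are placeholders, and it assumes boto3 and Python 3):

import gzip
import os

import boto3
from pandas import DataFrame

bucket = "my-bucket"  # placeholder
prefix = "mzml/"      # placeholder
dest = "/tmp/mzml"
os.makedirs(dest, exist_ok=True)

s3 = boto3.client("s3")
uris = []
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", []):
    key = obj["Key"]
    if not key.endswith(".gz"):
        continue
    gz_path = os.path.join(dest, os.path.basename(key))
    s3.download_file(bucket, key, gz_path)
    # Unzip next to the download, dropping the .gz suffix
    mzml_path = gz_path[:-3]
    with gzip.open(gz_path, "rb") as fin, open(mzml_path, "wb") as fout:
        fout.write(fin.read())
    uris.append(["file://" + mzml_path])

# Same one-column 'URIDataValue' table as before, now built from the downloads
T = DataFrame(uris)
T.columns = ["URIDataValue"]
output_table = T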
I have no idea what I'm doing, but it worked.
There are multiple options to make this work:
Convert the files in-memory to Binary Object cells using Python; you can then use those in KNIME. (I am not sure this is supported, but as I remember it was demoed at one of the recent KNIME gatherings.)
Save the files to a temporary folder (Create Temp Dir) using Python and connect the Python node, using a flow variable connection, to a file reader node in KNIME (which should work in a loop: List Files, check the Iterate List of Files metanode).
Maybe there is already S3 Remote File Handling support in KNIME, so you can do the downloading, unzipping within KNIME. (Not that I know of, but it would be nice.)
I would go with option 2, but I am not so familiar with Python, so for you option 1 is probably best. (In case option 3 is supported, that is the best in my opinion.)
Running some random code I found on the internet a few weeks ago has changed the pagesize and linesize defaults of my SAS output window. I don't remember what code it was though unfortunately. The current default pagesize is 15, which is generally way too small.
Does anyone know how to change the default?
I can change this using "options pagesize=80" or something but that only lasts for the current session. I can also change it in the GUI from Tools>Options>Output>Display but any changes won't save to my next session.
Any tips would be much appreciated! This is kind of excruciating. Thanks!
Your editor preferences are stored in a SAS catalog. Only one SAS session can open/write to this catalog at a time. You can find out the location of the catalog your SAS session is using by running this code:
proc options;run;
... And then search for SASUSER in the log.
If you launch SAS and it tries to use a SASUSER catalog that is already in use by another session, it will give you the message:
WARNING: Unable to copy SASUSER registry to WORK registry. Because of this,
WARNING: you will not see registry customizations during this session.
Are you seeing this message when you launch SAS? If so, it means that you have another instance of SAS open on your machine that has that catalog open. You have 2 options:
Close all instances of sas.exe on your machine (via Task Manager; be sure to check the process names, not just the Applications tab), then try making the change again.
Set up another shortcut to launch sas.exe. On this shortcut, specify a different SASUSER location (a folder) like so:
sas.exe -SASUSER "d:\sas\profile2"
Also, I'm assuming you have the 'Save settings on exit' option checked. If that isn't the case, you can save your current settings by typing the command save into the command bar.
EDIT:
Some additional places to check that may override any profile settings:
Your sasv9.cfg file. Again, run proc options;run; and search for sasv9.cfg. It will give you the location of this file. If the file simply contains a list of other filenames, be sure to open those 'included' files and check them too.
Your autoexec file. If your SAS environment specifies an autoexec file to load at launch, make sure it's not adjusting the options there. Also, if it is using an autoexec file, make sure you have all the logging options turned on as the first thing that happens when SAS loads: options mprint notes source source2;.
Try right-clicking on SAS and choosing 'Run as administrator'. If your profile is in a read-only location due to privileges, perhaps your settings aren't being saved.
Look in your Windows event log to see if SAS is logging any errors there.
According to the SAS for Windows documentation, pagesize is controlled in part by the default printer. 15 is the minimum value, so it's possible that there is something wrong with your default printer and/or SAS is doing something odd (such as not finding one). If 'some random code' changed your default printer, you could simply try changing it back (see your SYSPRINT option).
I believe you can override this in your sasv9.cfg, commonly located in a path like C:\Program Files\SAS\SAS Foundation\9.4\nls\en\sasv9.cfg (varying based on your SAS language version, release, and installation details), by simply adding -pagesize=80 or whatever you wish the default to be. You can also add options pagesize=80; to your autoexec.sas (or a new autoexec.sas if you don't have one already); see this paper or the documentation for more details on that.
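For example (a sketch; pick whatever defaults you prefer). In sasv9.cfg, add a line:

-pagesize=80

Or in autoexec.sas, which runs at every startup:

/* set default output sizes */
options pagesize=80 linesize=120;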