Power Query: How to handle "DataSource.NotFound: File or Folder: We couldn't find the folder" error

I'm trying to get the folder contents using the query below, which should also handle the error. But I'm still getting the text with a warning icon: "DataSource.NotFound: File or Folder: We couldn't find the folder".
See the M code below:
let
    Source = Folder.Files("\\serverpath\Desktop\"),
    AlternativeOutput = #table(type table [Name = text, Extension = text, Availability = text], {{"Error", "Error", "Folder not available"}}),
    TestForError = try Source,
    Output =
        if TestForError[HasError] then AlternativeOutput
        else Source
in
    Output

This is likely a bug, which explains why try...otherwise also doesn't work. The HasError field is returning false even though we get a DataSource.NotFound error. I've filed an item on our end to track this issue.

You need to force evaluation of Folder.Files at each try/otherwise attempt, using Table.Buffer:
= try Table.Buffer(Folder.Files(path1)) otherwise Folder.Files(path2)
You can then evaluate multiple paths, such as the same OneDrive account used on different computers with different login names, i.e. Username on one PC but User.name on another (e.g. a personal computer and a work computer):
= let
    SourceA = try Folder.Files("C:\Users\Username\OneDrive\Documents\"),
    SourceB = try Folder.Files("C:\Users\User.name\OneDrive\Documents\"),
    DynamicSource = try Table.Buffer(SourceA[Value]) otherwise Table.Buffer(SourceB[Value])
in
    DynamicSource
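The same trick fixes the original query: buffering inside the try forces the folder access to actually happen, so HasError reflects the DataSource.NotFound failure. A minimal sketch, reusing the asker's path and fallback table:
let
    AlternativeOutput = #table(type table [Name = text, Extension = text, Availability = text], {{"Error", "Error", "Folder not available"}}),
    TestForError = try Table.Buffer(Folder.Files("\\serverpath\Desktop\")),
    Output = if TestForError[HasError] then AlternativeOutput else TestForError[Value]
in
    Output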

Related

Google Cloud platform terraform/terragrunt googleapi: Error 409: Requested entity already exist

I am having a strange issue when trying to push code out to our GCP repo. It fails with the following error: "googleapi: Error 409: Requested entity already exists, alreadyExists", and it refers to a project that already exists. This only occurs after I either remove another project that's no longer needed or add .bck to the terragrunt.hcl files. These projects have no dependencies on each other whatsoever.
terraform {
  source = "../../../modules//project/"
}
include {
  path = find_in_parent_folders("org.hcl")
}
dependency "folder" {
  config_path = "../"
  # Configure mock outputs for the terraform commands that are returned when there are no
  # outputs available (e.g. the module hasn't been applied yet).
  mock_outputs_allowed_terraform_commands = ["plan", "validate"]
  mock_outputs = {
    folder_id = "folder-not-created-yet"
  }
}
inputs = {
  project_name = "<pimsstest-project>"
  folder_id = dependency.folder.outputs.folder_created # Test folder id
  is_service_project = true
}
The code push fails when the structure in VS Code is like this [screenshot], but it succeeds when like this [screenshot].
Some background to add: pimsstest used to exist in a production folder under the org, and I moved it to test via VS Code with a simple cut and paste and a re-push of the code. I then removed the project from the console, as it still existed in production. I cannot work out why the removal of another project flags up this already-exists error on pimsstest. It doesn't make any sense to me.
Across GCP a project ID can exist only once. Upon removal, it might not be instantly available again (it will keep the status "scheduled for removal", and you should receive an email with the details of the scheduled operation). What the error message is actually trying to tell you may be:
Error 409: Requested entity already STILL exists.
In theory (even if it's unlikely when the name is unique enough), any other customer could snatch the name in the meantime, in which case the error message could be understood literally.
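If the gcloud CLI is available, you can check whether the old project is still in its deletion window; a lifecycle state of DELETE_REQUESTED means the ID is still reserved. A sketch, where pimsstest-project stands in for the real project ID:
# Prints ACTIVE for a live project, DELETE_REQUESTED for one that is
# scheduled for removal and still holding on to its project ID.
gcloud projects describe pimsstest-project --format="value(lifecycleState)"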

Error: This operation is not allowed when there are no records displayed. Please execute a query that returns

I have added the following code in the WebApplet_Load event of the Service Request applet. It gives me the above error once I try to open the SR screen from the application.
try
{
    var currBC = this.BusComp();
    with (currBC)
    {
        ActivateField("Restrict_drop_down");
        ClearToQuery();
        //BC.SetViewMode(3);
        TheApplication().SetProfileAttr("SR Type", GetFieldValue("Restrict_drop_down"));
        ExecuteQuery(ForwardBackward);
    }
}
catch (e)
{
    TheApplication().RaiseErrorText(e.errText);
}
Any idea on how to solve the issue?
You cannot call GetFieldValue when the BC is in query mode. You have just done ClearToQuery, so you have to execute the query first, check FirstRecord(), and only then call GetFieldValue(). See the sketch below.
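A minimal reordering of your handler along those lines (eScript sketch, untested; the field and profile attribute names are taken from your code):
var currBC = this.BusComp();
with (currBC)
{
    ActivateField("Restrict_drop_down");
    ClearToQuery();
    ExecuteQuery(ForwardBackward);
    // Only read the field once the query has actually returned a record.
    if (FirstRecord())
    {
        TheApplication().SetProfileAttr("SR Type", GetFieldValue("Restrict_drop_down"));
    }
}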
Also, during WebApplet_Load the first BC query has not finished running, so it might not be the best place for this code. Please check with a Siebel expert on your team; this kind of code needs to be placed carefully.

Shiny downloadButton() and downloadHandler() 500 Error

I have developed a Shiny dashboard with several data frames that get imported through reactive file readers, etc. I have also added a "Generate PDF" button, using downloadButton() in my ui.R code; my server.R code implements downloadHandler() to handle that request.
On my Windows desktop this all works perfectly. I want this to run on a Linux server I have set up. I had to modify some paths, of course, and Shiny Server runs as root on this box. When I click the "Generate PDF" button on the site running on the Linux server, I get an HTTP 500 error almost instantly. I have manually rendered the pdfReport.Rmd file on the Linux server myself and it runs just fine.
I am guessing one of two things:
1. Somehow the data isn't getting passed the same way on the Linux box as it is on the Windows desktop. This is probably not likely, but it is a possibility.
2. Something is wrong with my paths, so when the temp files get written to start generating the PDF, the system lacks permission or the path doesn't exist. Possibly my downloadHandler() code is malformed in some way. I think this is more likely than #1.
Here is my code for the downloadHandler():
output$pdfReport <- downloadHandler(
  # For PDF output, change this to "report.pdf"
  filename = reactive({paste0("/srv/shiny-server/itpod/", "ITPOD-", Sys.Date(), ".pdf")}),
  content = function(file) {
    # Copy the report file to a temporary directory before processing it, in
    # case we don't have write permissions to the current working dir (which
    # can happen when deployed).
    tempReport <- file.path("/srv/shiny-server/itpod", "pdfReport.Rmd")
    file.copy("report.Rmd", tempReport, overwrite = TRUE)
    params <- list(ilp = updateILP(), ico = updateICO(), sec = updateSecurity(),
                   ppwc = updateWorkPreviousPeriodCompleted(),
                   pow = updateOngoingWorkCABApproved(),
                   pwcr = updatePlannedWorkCABRequested(),
                   epca = updateEmergencyChangesPendingCABApproval(),
                   fac = updateFacilities(), drs = updateDRStatus(),
                   ov = updateOperationalEvents(), sl = updateStaffLocations(),
                   w = updateWeather())
    # Knit the document, passing in the `params` list, and eval it in a
    # child of the global environment (this isolates the code in the document
    # from the code in this app).
    rmarkdown::render(tempReport, output_file = file, params = params,
                      envir = new.env(parent = globalenv()))
  }
)
I thought maybe that the path just wasn't writable, so I tried changing it to /tmp, but that didn't work either. Poking around, I discovered that when I hover over the "Generate PDF" button, I get a long URL with a "session":
http://my.url.com:3838/itpod/session/d661a858f5679aba26692bc9b4442872/download/pdfReport?w=
I'm starting to wonder whether the issue is that I'm not writing to a path of the current session or something? This is a new area of Shiny for me. Like I said, on my desktop it works fine, but once I deploy it to the Linux server, it doesn't work correctly. Any help would be much appreciated. Thanks in advance!
Ok - after much troubleshooting, I figured out that some of the files in the Shiny webroot that were dependencies of the main pdfReport.Rmd file weren't being seen, because the code copied the report to a temp directory.
Because I didn't want to copy all of the files from my webroot over to the temp directory, I decided to render the report within the webroot itself. For me this isn't a big deal, since my Shiny app is running as root anyway.
Now that I have it working, I will clean this up by doing the following:
Make the service run as a normal user.
Rather than copying the files the report depends on, statically reference them in the report code.
I apologize to all of those who may have read this and were working on it. My fix to the code above was the following:
output$pdfReport <- downloadHandler(
  # For PDF output, change this to "report.pdf"
  filename = reactive({paste0("/srv/shiny-server/itpod/", "ITPOD-", Sys.Date(), ".pdf")}),
  content = function(file) {
    # Render the report from the webroot itself rather than copying it to a
    # temporary directory, so the files it depends on are still found.
    report <- file.path(getwd(), "pdfReport.Rmd")
    #tempReport <- file.path(tempdir(), "pdfReport.Rmd")
    #file.copy("pdfReport.Rmd", tempReport, overwrite = TRUE)
    params <- list(ilp = updateILP(), ico = updateICO(), sec = updateSecurity(),
                   ppwc = updateWorkPreviousPeriodCompleted(),
                   pow = updateOngoingWorkCABApproved(),
                   pwcr = updatePlannedWorkCABRequested(),
                   epca = updateEmergencyChangesPendingCABApproval(),
                   fac = updateFacilities(), drs = updateDRStatus(),
                   ov = updateOperationalEvents(), sl = updateStaffLocations(),
                   w = updateWeather())
    # Knit the document, passing in the `params` list, and eval it in a
    # child of the global environment (this isolates the code in the document
    # from the code in this app).
    rmarkdown::render(report, output_file = file, params = params,
                      envir = new.env(parent = globalenv()))
  }
)
Notice that instead of copying the file to a temp directory, I just reference the file in the current working directory.
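For anyone who does need the temp-directory approach (e.g. when the app directory really isn't writable), the copy just has to include the report's dependencies as well. A sketch of the content function, where logo.png and styles.css are hypothetical stand-ins for whatever pdfReport.Rmd actually includes:
content = function(file) {
  # Copy the report and its supporting files to the session temp directory;
  # the dependency file names here are hypothetical.
  file.copy(c("pdfReport.Rmd", "logo.png", "styles.css"), tempdir(), overwrite = TRUE)
  tempReport <- file.path(tempdir(), "pdfReport.Rmd")
  # Build `params` exactly as in the handler above, then render.
  rmarkdown::render(tempReport, output_file = file, params = params,
                    envir = new.env(parent = globalenv()))
}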

DropNet returning metadata for root folder not folder requested

Problem: GetMetaData for the folder that I need returns the root folder metadata.
Background:
I'm trying to write a small app to download a folder that is too large (many thousands of files and multiple GB) to fetch through the Dropbox web interface. It recurses through the subdirectories of the given directory, downloading all the files.
What actually happens is an endless loop: the app (incorrectly) gets the root folder metadata, iterates through the directories until it hits the directory I need, and then starts working through the root directory again, as that is the metadata set it receives.
The directory name "/Apps" works fine but the one I need doesn't. The folder name has an underscore and a mix of upper and lower case letters (no other characters) similar to "/XYX_DataFolder".
My app has "Full Dropbox" permission and I authorized with the account that the api key was acquired under.
Changing the directory name is not an option for me.
I'm using VS2012, and DropNet was added through NuGet.
Any input on this issue would be welcome. Thanks!
Edit:
Runtime Version v4.0.30319
Version 1.10.23.0
As reported in the Visual Studio properties page for the reference.
I authorize, which works fine, and then use the code below. Some directories work fine, but when I call GetMetaData on the folder mentioned above, I get the metadata for the root folder.
private void DownloadDirectory( string serverDirectory, string clientDirectory ) {
    var meta = m_client.GetMetaData( serverDirectory, false, false );
    foreach ( var item in meta.Contents ) {
        var destinationPath = Path.Combine( clientDirectory, item.Name );
        if ( item.Is_Dir && item.Path == m_serverRootDirectory ) {
            DownloadDirectory( item.Path, destinationPath );
        }
        else {
            //var fileBytes = m_client.GetFile( item.Path );
            //File.WriteAllBytes( destinationPath, fileBytes );
            //textBox1.Text += Environment.NewLine + destinationPath;
        }
    }
}
Ok, so I downloaded the source and found my problem right away. I was missing a null for the hash in the GetMetaData call, so it was using the wrong overload. Sorry to waste your time... Thanks for the response!
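For anyone hitting the same overload trap, the working call would look roughly like this (a sketch based on the description above; I haven't verified the exact parameter list of the GetMetaData overloads in DropNet 1.10):
// Passing null for the hash argument selects the overload that returns
// metadata for the requested path rather than the root folder.
var meta = m_client.GetMetaData( serverDirectory, null, false, false );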

magento Not valid template file /page/1column.phtml

I had my site running fine on the development server. After I migrated the app to my production server, everything worked until I added an extension and enabled it. The site still works, but the product view page doesn't show up. Every time I click on the product view page, this error is appended to my log file...
CRIT (2): Not valid template file:frontend/base/default/template/page/1column.phtml
I have checked the file and it is alright, just the same as the one working on the development server. I've tried disabling the only plugin (custom menu) that I have, and still the problem persists. I've tried increasing memory_limit, but it doesn't help either.
Please help, I am stuck.
A common cause of this error is the use of symlinks without enabling them in the admin area:
System > Configuration > Developer > Template Settings
The error gets triggered in app/code/core/Mage/Core/Block/Template.php around line 243 (see here), so if it's not an issue with symlinks then this would be a good place to start debugging.
If you are not using xDebug, then where the exception gets caught around line 250, you should either log or var_dump the values of:
$includeFilePath
and
$this->_viewDir
Then make sure they both exist (paying attention to case).
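For example, a quick probe might look like this (a sketch against the Magento 1.x codebase; adjust it to wherever the exception is actually caught in your Template.php):
// Temporary debugging inside the catch block discussed above.
Mage::log('includeFilePath: ' . $includeFilePath, null, 'template_debug.log');
Mage::log('_viewDir: ' . $this->_viewDir, null, 'template_debug.log');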
Failing that you might want to look at permissions.
Alternatively, you can enable symlinks directly in the database:
UPDATE core_config_data SET value = '1' WHERE path = 'dev/template/allow_symlink';
or
INSERT INTO core_config_data (scope, scope_id, path, value) VALUES ('default', 0, 'dev/template/allow_symlink', '1');