I'm using Power Query to query Kusto, and the query times out after 5 minutes even though I've set the timeout to 21 minutes, like this:
[Timeout = #duration(0,0,21,0), ClientRequestProperties = [#"query_language" = "csl"]])
The query in question typically takes about 7-10 minutes when run directly in Kusto.
A similar question asked here had an answer that suggested going to "Data source settings" and clicking on "Change Source...", but that button is grayed out for me. Besides, the query-specific setting above should override a global setting, right?
Assuming that you're using the AzureDataExplorer.Contents() or Kusto.Contents() functions, there was a regression in the connector's Timeout implementation. This was fixed on June 7, 2021, and should be included in version 3.0.52 of the connector (it should already be publicly available; make sure you have the latest version of Power BI Desktop).
If you're still facing an issue, contact me directly at itsagui(at)microsoft.com
I usually work with Power BI and everything goes well on my computer. Yesterday I installed it on a virtual machine running Windows Server 2019, which I connect to over Remote Desktop, to create a dashboard to visualize some data.
The problem I'm having is that it has been stuck for ages (over 10 hours by now) on the "Creating connections in model" step.
The screenshot is in French, but it's the usual loading dialog.
A few details:
I have already optimized the data and can't reshape the tables any further.
The only big table (>100k rows) has around 800k rows.
I have internet access and can ping whatever I want.
Any idea where this could be coming from? Thanks for your help!
OK, I found my problem this morning while poking around in the virtual machine's settings.
The VM had only one virtual processor, which was constantly in use by other applications, so Power BI couldn't get nearly as much processing power as it needed.
I changed that and it loaded in seconds...
Conclusion: Power BI's speed depends on your processing power, so when working in it, close as many windows as possible on your computer.
Same for me. Import of a local Excel sheet (1 tab, 2 cols, 30 rows) did not complete after 30 minutes using only one processor.
Upgrading the VM to more than one processor solved the issue.
I'm trying to use New Relic to monitor whether my Lambda has been executed within the last 25 hours. I want to alert if it hasn't.
I have the following NRQL which gives me the graph I want to see:
SELECT sum(`provider.invocations.Sum`) FROM ServerlessSample WHERE provider.resource = 'my_lambda_name'
I then just want to alert if it dips below 1 for 1500 minutes (25 hours), but New Relic only allows me to set a condition window of up to 120 minutes. Any tips on how to get around this?
Interesting question. From what I've seen on the New Relic discussion forum, the Explorers Hub, there might be a solution for your task.
Please take a look at this link:
https://discuss.newrelic.com/t/relic-solution-extending-the-functionality-of-nrql-alert-conditions-beyond-a-single-minute/75441
If you think about this for a moment, you might see how NRQL queries using percentile or stddev are a lot less useful than they seem, when used in an alert condition. After all, if you calculate the standard deviation over an hour (or 24 hours), that can be meaningful. But stddev(duration), or percentile(duration,95) calculated over only 60 seconds is less meaningful.
I think that limit is 24 hours, but I haven't tested it yet.
Hope this helps; I'll try to give it a go as well to see whether it works.
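If even that approach can't cover a 25-hour window, another option (not from the linked post; just a sketch that assumes you have an Insights query key and can run a small scheduled job) is to evaluate the NRQL yourself against the query API and alert from your own scheduler:

```python
import requests

# Hypothetical account ID and Insights query key - substitute your own.
ACCOUNT_ID = "1234567"
QUERY_KEY = "your-insights-query-key"

# Same query as above, but evaluated over the full 25-hour window.
NRQL = (
    "SELECT sum(`provider.invocations.Sum`) FROM ServerlessSample "
    "WHERE provider.resource = 'my_lambda_name' SINCE 25 hours ago"
)

def lambda_ran_in_last_25_hours() -> bool:
    """Return True if the Lambda was invoked at least once in the last 25 hours."""
    resp = requests.get(
        f"https://insights-api.newrelic.com/v1/accounts/{ACCOUNT_ID}/query",
        headers={"X-Query-Key": QUERY_KEY},
        params={"nrql": NRQL},
        timeout=30,
    )
    resp.raise_for_status()
    # For a single aggregate the response looks like {"results": [{"sum": <value>}], ...}
    return resp.json()["results"][0]["sum"] >= 1

if __name__ == "__main__":
    if not lambda_ran_in_last_25_hours():
        print("ALERT: no invocations in the last 25 hours")  # wire this into your notification channel
```

You'd run that from cron (or another Lambda) once an hour or so and send the failure path to whatever notification channel you already use.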
I am running tests that take many hours to complete on ADW and the amount of SQL involved rolls off the 10,000 row limit of sys.dm_pdw_exec_requests (as documented at https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-service-capacity-limits ) in less than 30 minutes.
Is my only option to create a process that captures the data from sys.dm_pdw_exec_requests into a table in my database every N minutes (where N << 30)?
I'm not sure what your use case is, but perhaps you can get the same useful information out of the audit logs?
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-auditing-overview
You might be able to use something that was already built for that purpose, instead of reinventing the wheel:
https://github.com/andrealibero/Azure_SQL_DWH_Perf_Stats
The PowerShell script can collect the output of DMVs (configured in an XML file) in a loop or for a specified number of iterations.
Given how quickly the DMV rows roll off for you, this might help in your scenario.
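If you do end up rolling your own capture process like the one described in the question, a minimal sketch might look like this (pyodbc, the connection string, and the dbo.dmv_exec_requests_history table are all assumptions; the target table needs columns matching the DMV plus a capture timestamp):

```python
import time
import pyodbc

# Hypothetical connection string - point it at your data warehouse.
CONN_STR = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=yourserver.database.windows.net;Database=yourdw;"
    "Uid=youruser;Pwd=yourpassword"
)

# Poll well inside the ~30-minute window in which the 10,000 rows roll off.
CAPTURE_INTERVAL_SECONDS = 10 * 60

# Snapshot the whole DMV with a capture timestamp; duplicates across snapshots
# can be removed downstream by keying on request_id and keeping the latest row.
INSERT_SQL = """
INSERT INTO dbo.dmv_exec_requests_history
SELECT SYSUTCDATETIME() AS captured_at, r.*
FROM sys.dm_pdw_exec_requests AS r
"""

def capture_loop() -> None:
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    try:
        while True:
            conn.execute(INSERT_SQL)  # Connection.execute creates and runs a cursor
            time.sleep(CAPTURE_INTERVAL_SECONDS)
    finally:
        conn.close()

if __name__ == "__main__":
    capture_loop()
```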
I recently came across an issue on OpenCart version 2.1.0.1 where, if you have quite a lot of product options set and you update the product, some of the options end up being removed. In my case, 50 out of 133 option values were removed for no apparent reason.
The issue was later identified as being caused by a low max_input_vars value in php.ini. My default setup had 1000, which I changed to 2000, and the issue is no longer present :)
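For anyone else hitting this, the change is a single line in php.ini (2000 is just the value that worked for my catalogue; size it to the number of option fields your products post back):

```ini
; php.ini
; Default is 1000; every product option field in the admin form counts toward this limit.
max_input_vars = 2000
```

Restart your web server / PHP-FPM afterwards and confirm the new value with phpinfo().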
I'm trying to build a tool that collects a few data points from a user usage report with
https://www.googleapis.com/admin/reports/v1/usage/{user}/all/dates/{yyyy-mm-dd}
Since the data is delayed, how do I get the most recent report? If I were to query today's date (2013-11-22), I would get something like:
Data for dates later than 2013-11-19 is not yet available. Please check back later
Is there a set number of days/hours before reports become available, or do I have to work backwards by trial and error until I get a successful response?
I believe there is a delay of about 48 hours for the reports as of right now. However, if Google is able to improve on that, you'll want your app to be able to take advantage of those improvements without any changes needed.
I suggest you make a first attempt using today's date. When that fails, parse the error response to grab the last date for which report data is available and use that value. This way you're always making at most two attempts, and if Google improves the delay to 24 hours or even less, your app can take immediate advantage of that change.
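A minimal sketch of that two-attempt approach (the URL pattern is the one from the question; the user key, OAuth token handling, and error-message parsing are assumptions based on the wording shown above):

```python
import datetime
import re
import requests

# URL pattern from the question; the access token below is a placeholder.
REPORT_URL = "https://www.googleapis.com/admin/reports/v1/usage/{user}/all/dates/{date}"
ACCESS_TOKEN = "your-oauth2-access-token"

def fetch_latest_usage_report(user: str) -> dict:
    """Try today's report first; if it isn't ready yet, retry once with the
    latest available date parsed out of the error message."""
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

    def get(date: str) -> requests.Response:
        return requests.get(REPORT_URL.format(user=user, date=date),
                            headers=headers, timeout=30)

    resp = get(datetime.date.today().isoformat())
    if resp.ok:
        return resp.json()

    # Expected wording: "Data for dates later than 2013-11-19 is not yet available..."
    match = re.search(r"later than (\d{4}-\d{2}-\d{2})", resp.text)
    if not match:
        resp.raise_for_status()

    resp = get(match.group(1))
    resp.raise_for_status()
    return resp.json()
```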