I am new to QuickSight and trying to do a PoC with it.
While developing a geographical map visualization, I noticed that it limits the display to the top 5,000 data points.
From the link below I understand this is a display limit in QuickSight.
https://docs.aws.amazon.com/quicksight/latest/user/working-with-visual-types.html#customizing-number-of-data-points?icmpid=docs-quicksight-whatsnew
Is 5,000 the maximum number of data points for a geographical map? (I couldn't find the maximum limit for maps in the documentation.)
Is there a way to increase this limit? We have a fairly large dataset and may not be able to filter the data down to 5,000 points.
Please advise.
I checked and can confirm that there is no statement about the geo map limit in the official documentation. Judging from my work with QuickSight so far, the limit is most likely 5,000, which is also the limit for line charts, bar charts, combo charts, and some others.
In our use case with non-geo-map charts, we aggregate the data to a higher level to reduce the number of data points, for example from daily data to weekly data. This is not a solution I enjoy, but it is the compromise we had to make.
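To give an idea, here is a minimal sketch of that kind of pre-aggregation with pandas (the `date` and `value` column names are just placeholders for this example, not our real schema):

```python
import pandas as pd

# Hypothetical daily data; in practice this would be the dataset feeding QuickSight.
daily = pd.DataFrame({
    "date": pd.date_range("2021-01-01", periods=365, freq="D"),
    "value": range(365),
})

# Roll the daily rows up to weekly totals so far fewer data points reach the visual.
weekly = (
    daily.set_index("date")
         .resample("W")   # one row per calendar week
         .sum()
         .reset_index()
)

print(len(daily), "daily rows ->", len(weekly), "weekly rows")
```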
I am currently working with BigQuery and understand that there is a limit of 4,000 partitions per partitioned table.
Does anyone know if this limit applies only to the active storage tier, or to both the active and long-term storage tiers?
The reason I ask is that I have a table partitioned by hour which has been in use for more than 6 months, and we don't get any error about exceeding the 4,000-partition limit when we insert new data.
I did a count of the number of partitions; see the attached image below:
As we can see, the total number of partitions is 6,401 and we are still able to insert new data.
At the same time, we also created a new partitioned table and tried moving data into it, but we encountered an error saying we had exceeded the limit of 4,000.
In addition, I also tried to insert data incrementally, but I still get an error as follows:
Steps to reproduce error:
Create a partitioned table (partitioned by hour)
Start moving data by month from another table
My finding:
The mentioned partition limit is only applicable to the active storage tier.
Can anyone help confirm this?
As I understand the limitation, you can't modify more than 4,000 partitions in one job. The jobs you describe first are presumably working because they only modify a few partitions.
When you try to move more than 4,000 partitions in one go, you will hit the limitation as you described.
I noticed I was hitting this limitation on both Active Storage and Long Term Storage. This is a BigQuery-wide limitation.
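To make that concrete, here is a rough sketch of moving the data one month at a time with the google-cloud-bigquery client (the table names and the `event_ts` timestamp column are placeholders): a month of hourly partitions is at most around 744, so each job stays well under the 4,000 partitions it may modify.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder source and destination tables, both partitioned by hour.
SOURCE = "my_project.my_dataset.events_old"
DEST = "my_project.my_dataset.events_new"

# One INSERT job per month: each job then touches at most ~744 hourly
# partitions on the destination, well under the 4,000-per-job limit.
for month in ["2021-01", "2021-02", "2021-03"]:
    sql = f"""
        INSERT INTO `{DEST}`
        SELECT *
        FROM `{SOURCE}`
        WHERE FORMAT_TIMESTAMP('%Y-%m', event_ts) = '{month}'
    """
    job = client.query(sql)
    job.result()  # wait for this month's job before starting the next
    print(f"Copied {month}")
```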
Looking at this page: Power BI features comparison, I see that a dataset can be 10 GB and storage is limited to 100 TB. Can I take this to mean there is a limit of 10,000 10 GB datasets?
Also, is there a limit on the number of users? The statement "Licensed by dedicated cloud compute and storage resources" implies there isn't, but I wanted to be sure.
I assume I am paying for compute, so the real limits are based on what compute resources I purchase? Are there any limits on this?
Thanks.
Yes, you can have 10,000 10 GB datasets to use up the total volume of 100 TB; however, storage is also consumed by Excel workbooks, dataflow storage, Excel ranges pinned to a dashboard, and other uploaded images.
There is no limit on the total number of users; however, there is a limit based on 'peak renders per hour', which measures how often users interact with reports. Power BI Premium expects you to have a mix of frequent and infrequent users, so for Premium P1 nodes the peak renders per hour is 1 to 2,400. Above that you may experience performance degradation on that node, for example if you had 3,500 renders of a report in an hour, though it will depend on the type of report, the queries, etc. You can scale up to quite a number of nodes if you need to, and Power BI Premium Gen 2 allows autoscale.
I want to create a dashboard/chart in Google Cloud Monitoring where I can see the total number of rows of my BigQuery table at all times.
With the resource type "bigquery_dataset" and the metric "uploaded_row_count" I only see the number of new rows per second when using the "rate" aligner.
If I choose "sum" as the aligner, it only shows the number of new rows added during the chosen alignment period.
I'm probably missing something but how do I see the total number of rows of a table?
PubSub subscriptions have this option with metric "num_undelivered_messages" and also Dataflow jobs with "element_count".
Thanks in advance.
There's an ongoing feature request for BigQuery table attributes in GCP's Cloud Monitoring metrics, but there's no ETA on when this feature will be rolled out. Please star and comment on it if you want the feature to be implemented in the future.
Cloud Monitoring only charts and monitors the (numeric) metric data that your Google Cloud project collects; in this case, the system metrics generated for BigQuery. Looking at the documentation, only the metric for uploaded rows is available, which has the behaviour you're seeing in the chart. The total number of rows, however, is currently not available.
Therefore, as of this writing, what you want is unfortunately not possible due to Cloud Monitoring's limitations for BigQuery; there are only workarounds you can try.
For other readers who are OK with Mikhail Berlyant's comment, here is a thread on querying metadata, including row counts.
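As a rough sketch of that metadata approach (the project, dataset, and table names below are placeholders), the dataset's `__TABLES__` view exposes a `row_count` per table:

```python
from google.cloud import bigquery

client = bigquery.Client()

# __TABLES__ exposes per-table metadata, including row_count, for a dataset.
# "my_project.my_dataset" and "my_table" are placeholders.
sql = """
    SELECT table_id, row_count
    FROM `my_project.my_dataset.__TABLES__`
    WHERE table_id = 'my_table'
"""

for row in client.query(sql).result():
    print(f"{row.table_id}: {row.row_count} rows")
```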
I have a CSV with approx. 108,000 rows, each of which is a unique long/lat combination. The file is uploaded into S3 and visualised in QuickSight.
My problem is that QuickSight is only showing the first 10,000 points. The points that it shows are in the correct place and the map works perfectly; it's just missing 90%+ of the points I wish to show. I don't know if it makes a difference, but I am using an admin-enabled role for both S3 and QuickSight as this is a dev environment.
Is there a way to increase this limit so that I can show all of my data points?
I have looked in the visualisation settings (the drop-down in the viz) and explored the tab on the left as much as I can. I am quite new to AWS, so this may be a really easy one.
Thanks in advance!
You could consider combining lat/lng points that are near each other, based on some rule you come up with when preparing your data.
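As a rough sketch of that idea (assuming the CSV has `lat`, `lng`, and `value` columns; the names and the grid size are placeholders), you could snap points to a coarse grid with pandas before uploading to S3:

```python
import pandas as pd

# Hypothetical input: one row per unique lat/lng point.
points = pd.read_csv("points.csv")  # columns assumed: lat, lng, value

# Snap each point to a ~0.01 degree grid (roughly 1 km) and aggregate,
# so nearby points collapse into a single row and the total can stay
# under the visual's display limit.
points["lat_bin"] = points["lat"].round(2)
points["lng_bin"] = points["lng"].round(2)

aggregated = (
    points.groupby(["lat_bin", "lng_bin"], as_index=False)["value"]
          .sum()
)

aggregated.to_csv("points_aggregated.csv", index=False)
print(len(points), "points ->", len(aggregated), "grid cells")
```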
There also appear to be limits on how many rows and columns you can serve to QuickSight:
https://docs.aws.amazon.com/quicksight/latest/user/data-source-limits.html
The Power BI plan comparison and limits table at the URL below states maximum total data size limits (1 GB free or 10 GB paid) and maximum streaming throughput limits (10k rows per hour free or 1 million rows per hour paid).
https://powerbi.com/dashboards/pricing/
Specific questions are:
(1) How are the data size limits measured? Is this the size of the raw data or the size of the compressed tabular model? The page isn't specific about what the size limit applies to.
(2) Do the throughput limits apply ONLY when using the Azure Stream Analytics preview connector or do they also apply when using the REST API? e.g. if using the free Power BI tier (and assuming I don't go over the 1GB total size limit), is the maximum number of rows I can submit per hour limited to 10k (e.g. 2 calls within an hour of 5k rows each or 4 calls of 2.5k rows each, etc)?
Good questions.
The data limit is based on the size of the data sent to the Power BI service. If you send us a workbook, the size of the workbook is counted against your quota. If you send us data rows, the size of the uncompressed data rows is counted against your quota. Our service is in preview right now, so there might be tweaks to the above as we move forward. You can keep up to date on the latest guidelines by referring to this page: https://www.powerbi.com/dashboards/pricing/
The limits apply to any caller of the Power BI API. The details on the limits are listed at the bottom of this article: https://msdn.microsoft.com/en-US/library/dn950053.aspx. The usage is additive, in that if you posted 5K rows, you'd be able to post an additional 5K rows within the hour.
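To make that concrete, here is a minimal, hypothetical sketch of pushing rows to a push dataset's table through the REST API (the token, dataset ID, table name, and columns are placeholders; keep the total rows posted per hour within your tier's limit):

```python
import requests

# Placeholders: a real access token, push dataset ID, and table name are required.
ACCESS_TOKEN = "<access-token>"
DATASET_ID = "<dataset-id>"
TABLE = "Sales"

url = (
    f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}"
    f"/tables/{TABLE}/rows"
)
headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}

# Example rows; on the free tier the combined total posted per hour
# should stay within the 10k-row limit (e.g. two calls of 5k rows each).
rows = [{"Product": "Widget", "Amount": 10.0},
        {"Product": "Gadget", "Amount": 12.5}]

resp = requests.post(url, headers=headers, json={"rows": rows})
resp.raise_for_status()
print("Posted", len(rows), "rows")
```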
Appreciate your using Power BI.
Lukasz P.
Power BI Team, Microsoft