Increase row limit of Apache Superset - apache-superset

I tried increasing the row limit of Apache Superset from 50000 to 500000 in the config.py file, but even after changing the limit I can only see 100000 rows in a table chart.
Is there any way to raise the limit above 100000 and display more than 100000 rows in a table chart?
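The change I made in config.py was along these lines (a sketch based on the default Superset config; I am assuming SQL_MAX_ROW, which defaults to 100000, is the other key that could be capping the chart):
# config.py / superset_config.py - sketch of the row-limit keys
ROW_LIMIT = 500000      # default row limit for charts (changed from 50000)
SQL_MAX_ROW = 500000    # assumption: its 100000 default may be what still caps results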
Thanks

Related

BQ google update columns - Exceeded rate limits

The scenario is to update column descriptions in tables (about 1,500 columns across 50 tables). Due to multiple restrictions, I have been asked to use the bq query command through the Cloud CLI to execute the ALTER TABLE SQL that updates the column descriptions. Query:
bq query --nouse_legacy_sql \
  'ALTER TABLE `<Table>` ALTER COLUMN <columnname> SET OPTIONS(DESCRIPTION="<Updated Description>")'
The issue is that if I bunch the bq queries together for 1,500 columns, that is 1,500 SQL statements.
This causes the standard Exceeded rate limits: too many table update operations for this table error.
Any suggestions on how to execute this better?
You are hitting the rate limit:
Maximum rate of table metadata update operations per table: 5 operations per 10 seconds
You will need to stagger the updates so that they happen in batches of at most 5 operations per 10 seconds. You could also try to alter all the columns of a single table with a single statement to reduce the number of calls required.
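For example, a small shell script along these lines keeps the updates within that quota (a sketch; columns.txt is a hypothetical file with one "column|description" pair per line, and <Table> is your table):
#!/bin/bash
# Sketch: stagger ALTER COLUMN statements so there are at most
# 5 table metadata updates per 10 seconds.
i=0
while IFS='|' read -r col desc; do
  bq query --nouse_legacy_sql \
    "ALTER TABLE \`<Table>\` ALTER COLUMN ${col} SET OPTIONS(DESCRIPTION=\"${desc}\")"
  i=$((i + 1))
  if [ $((i % 5)) -eq 0 ]; then
    sleep 10   # back off before the next batch of 5 updates
  fi
done < columns.txt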

Google Cloud clustered table not working?

On Google Cloud, I've created an ingestion-time partitioned table clustered on the columns Hour, Minute and Second. From my understanding of clustered tables, this means my rows are distributed in clusters organized by hour, each hour cluster contains minute clusters, and each minute cluster contains second clusters.
So I would expect that when I query data from 13:10:00 to 13:10:30, the query should touch only rows inside the cluster for hour 13, minute 10 and seconds 0 to 30. Am I wrong?
I'm asking because clustering does not seem to be working on my project: I have a test table of 140 MB, but when I add a WHERE condition on my clustered columns, BigQuery still says the query will scan the whole table. I would expect that using the clustered columns in the WHERE condition would reduce the amount of data queried. Any help? Thank you.
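For reference, the setup described looks roughly like this (a sketch; the dataset and table names are illustrative, the column names come from the question):
-- Ingestion-time partitioned table, clustered on Hour, Minute, Second.
CREATE TABLE mydataset.test_table (
  Hour INT64,
  Minute INT64,
  Second INT64,
  payload STRING
)
PARTITION BY _PARTITIONDATE
CLUSTER BY Hour, Minute, Second;

-- Query for 13:10:00 to 13:10:30; the expectation is that filtering on
-- the clustered columns reduces the amount of data scanned.
SELECT *
FROM mydataset.test_table
WHERE Hour = 13
  AND Minute = 10
  AND Second BETWEEN 0 AND 30;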

Drupal dynamic cache page increase

I have a Drupal 8 site and a problem with the page cache, which is constantly growing. How can I limit the creation of cache entries, which here keep getting bigger and bigger and exceed my DB quota?
Screenshot of DB: https://i.imgur.com/ZnGcIYm.png
Here is the website: rjluxefurniture.be
Thanks,
Yannik
You can limit the number of table rows in each cache table using the following setting in your settings.php configuration file:
$settings['database_cache_max_rows']['default'] = 500;
The default setting is 5000 rows per cache table.
For more information see the Drupal change record: https://www.drupal.org/node/2891281
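If most of the growth is in specific cache bins (for example cache_dynamic_page_cache or cache_render), the same change record also documents per-bin overrides; a sketch (the bin names are assumptions about which tables are growing on your site):
// settings.php - per-bin row limits for the bins that are growing.
$settings['database_cache_max_rows']['default'] = 500;
$settings['database_cache_max_rows']['bins']['dynamic_page_cache'] = 500;
$settings['database_cache_max_rows']['bins']['render'] = 500;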

AWS Redshift maximums

I have recently started exploring the Amazon Redshift database. However, I am not able to find the following database maximums anywhere in the documentation.
Parameters:
Columns - maximum per table or view
Names - maximum length of database and column names
Characters - maximum number of characters in a char/varchar field
Connections - maximum connections to the server
Concurrency - maximum number of concurrent users
Row size - maximum row size
DISTKEY - maximum per table
SORTKEY - maximum per table (compound/interleaved)
Cluster size - maximum cluster size (in terms of compressed data size)
It would be of great help if anyone could provide this information.
Connection limits, concurrency limits and naming constraints are detailed here:
http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html
Currently there is a maximum of 500 connections and a concurrency level of 50 per cluster.
You can have one DISTKEY per table, more details here:
http://docs.aws.amazon.com/redshift/latest/dg/c_choosing_dist_sort.html
Interleaved sort keys are limited to eight columns:
http://docs.aws.amazon.com/redshift/latest/dg/t_Sorting_data.html
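To illustrate the DISTKEY/SORTKEY points above, a table definition looks roughly like this (a sketch with made-up table and column names):
-- One DISTKEY per table; an interleaved sort key can reference
-- up to eight columns.
CREATE TABLE sales (
  sale_id     BIGINT,
  customer_id BIGINT,
  sale_date   DATE,
  region      VARCHAR(32),
  amount      DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
INTERLEAVED SORTKEY (sale_date, region);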
Maximum length of character data types:
http://docs.aws.amazon.com/redshift/latest/dg/c_Supported_data_types.html
Cluster size is determined by the number and type of nodes in the cluster. In terms of storage the current maximum possible is 128 x ds2.8xlarge nodes, for a max storage of 2 Petabytes:
http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html
With the more recent RA3 node type (ra3.16xlarge) you can go up to 8 PB.
Note that 500 is the maximum for the Query Editor, not the real maximum number of DB connections.

Cassandra get more than 10k rows

I am stuck with a Cassandra all() query.
I am using the Django platform. My query needs to fetch all rows from a Cassandra table, but CQL limits results to 10k rows at a time.
Previously I had fewer than 10k rows in the table, but the count has now grown to about 12k.
How do I get the all() query to return all 12k rows?
CQL has a default limit of 10k rows, which means there is an implicit LIMIT of 10000 on any SELECT. You can override it by specifying a new LIMIT value, e.g.:
SELECT * FROM mytable LIMIT 500000;
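If the table is mapped with cqlengine models (for example via django-cassandra-engine), the same override can be applied from the Django side; a sketch assuming a hypothetical model named MyModel:
# Sketch: raise the implicit 10k limit from application code,
# assuming a cqlengine model mapped to the Cassandra table.
from myapp.models import MyModel  # hypothetical model

rows = MyModel.objects.all().limit(500000)  # override the default limit
for row in rows:
    print(row)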