Drupal dynamic page cache increase - drupal-8

I have a Drupal 8 site and I have a problem with the page cache tables constantly growing. How can I limit the growth of the cache, which keeps getting bigger and now exceeds my database quota?
Screenshot of DB: https://i.imgur.com/ZnGcIYm.png
Here is the website: rjluxefurniture.be
Thanks,
Yannik

You can limit the number of rows in each cache table with the following setting in your settings.php configuration file:
$settings['database_cache_max_rows']['default'] = 500;
The default setting is 5000 rows per cache table.
For more information see the Drupal change record: https://www.drupal.org/node/2891281
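The same change record also describes per-bin limits, which may help if a single bin such as Dynamic Page Cache is the one filling up. A minimal sketch for settings.php, assuming the standard Drupal 8 bin name:
$settings['database_cache_max_rows']['bins']['dynamic_page_cache'] = 500;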

Related

Timeout value in Power BI service

I have a dataset that I have not been able to refresh on the Power BI service for 3 days, although on Desktop it normally refreshes in about 30 minutes. The dataset is fed from a SQL Server database via a data gateway. The data gateway is up to date, incremental refresh is enabled, and the dataset only retrieves the last 3 days of data on each refresh.
Here is the generated error message:
Data source error: The XML for Analysis request timed out before it was completed. Timeout value: 17998 sec.
Cluster URI: WABI-WEST-EUROPE-B-PRIMARY-redirect.analysis.windows.net
Activity ID: 680ec1d7-edea-4d7c-b87e-859ad2dee192
Application ID: fcde3b0f-874b-9321-6ee4-e506b0782dac
Time: 2020-12-24 19:03:30Z
What is the solution to this problem, please?
Thank you
What license are you using?
Without Premium capacity, the maximum dataset size is 1 GB. Maybe your dataset has crossed this mark? If you are using shared capacity, you can check the storage used by the workspace by clicking the ellipsis at the top right corner, then clicking Storage to see how much is used for that workspace.
Also note that in shared capacity there is a 10 GB uncompressed dataset limit at the gateway (but this should not be an issue since you only refresh 3 days of data).
Also check whether your Power Query query is foldable (on the final step you should be able to see the 'View Native Query' option). If it is not, incremental refresh does not work and the entire dataset is queried on each refresh.
Also, note that 17998 seconds is roughly 5 hours. What is your internet speed?

Django's cache framework - auto deletion?

I have encountered the following problem and I have no clue why.
I use Django's cache framework to cache part of my site.
I have set the expiry time to 15 minutes.
Sometimes when I check the database, there are no records in the cache table. At first, I suspected that Django removes expired cache entries from the database.
But later, I found that some expired cache entries still exist in the table.
I want to ask how Django handles the cache in the database.
Does Django automatically remove all the expired cache entries from the table?
Thanks!
How and when the cache is purged depends on which cache backend you are using. Generally the cache will only be purged periodically, when the number of items in it exceeds a specified limit (as opposed to when they have expired - Django does not check this until and unless you try to fetch an item from the cache).
From the documentation on the cache configuration:
Cache backends that implement their own culling strategy (i.e., the locmem, filesystem and database backends) will honor the following options:
MAX_ENTRIES: The maximum number of entries allowed in the cache before old values are deleted. This argument defaults to 300.
CULL_FREQUENCY: The fraction of entries that are culled when MAX_ENTRIES is reached. The actual ratio is 1 / CULL_FREQUENCY, so set CULL_FREQUENCY to 2 to cull half the entries when MAX_ENTRIES is reached. This argument should be an integer and defaults to 3.
So when and how your cache is cleared depends on these parameters. By default the entire cache is not cleared - only a fraction of entries are removed.
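For illustration, here is a minimal sketch of a database cache configuration in settings.py with these options set explicitly; the table name is just an example and would be created with manage.py createcachetable:

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.db.DatabaseCache",
        "LOCATION": "my_cache_table",  # example table name, created with manage.py createcachetable
        "TIMEOUT": 900,                # entries are considered expired after 15 minutes
        "OPTIONS": {
            "MAX_ENTRIES": 1000,       # cull once the table holds more than 1000 entries
            "CULL_FREQUENCY": 3,       # remove roughly 1/3 of the entries when culling
        },
    }
}

With settings like these, expired rows can sit in the table until a write pushes the count past MAX_ENTRIES and triggers a cull, which matches the behaviour you are seeing.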

How to load django query faster?

I have a database with around 7000 entries. When I load the page displaying all 7000 entries, the template takes around 7 seconds to load. How can I lower the load time? What are my options, other than caching?
See below the screenshot from the Network tab in Google Chrome.
You can implement lazy loading/pagination, i.e., initially display the first n (say 100) entries, then, on reaching the last entry, dynamically load the next n entries using JavaScript and Ajax. Otherwise you can use Django's pagination, as in the sketch below.
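As a minimal sketch of the Django pagination approach (the Entry model and template name here are hypothetical stand-ins for your own):

from django.core.paginator import Paginator
from django.shortcuts import render

from .models import Entry  # hypothetical model holding the 7000 entries

def entry_list(request):
    entries = Entry.objects.all().order_by("id")   # stable ordering keeps pages consistent
    paginator = Paginator(entries, 100)            # 100 entries per page
    page_obj = paginator.get_page(request.GET.get("page", 1))
    return render(request, "entries/list.html", {"page_obj": page_obj})

Because querysets are lazy, the Paginator only fetches the rows needed for the requested page instead of all 7000, which is where the time saving comes from.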

AppFabric Cache Database configuration has 1GB size

I use AppFabric cache with SQL Server-Based Cluster Configuration.
The problem is that the configuration database has grown to 1 GB.
The table that takes up the most space in the DB is 'ConfigAudit'.
It is full of entries whose 'Operation' column contains the values UpdateNew and UpdateOld,
with UpdatedTimeStamp values for every minute.
I cannot find any information about AppFabric's cluster configuration database, nor about any auditing of cache operations.
The cache works fine apart from this problem.
Is there a way to turn this audit off,
or another solution to make this database much smaller and stop it from growing again?
Kind regards,
Charles.
The dbo.ConfigAudit table is used to track changes to the dbo.Config table. It is mostly for diagnostics, and it cannot be configured via PowerShell commands.
To turn off the tracking, you can disable all the triggers on the dbo.Config table, for example by running DISABLE TRIGGER ALL ON dbo.Config; against the cluster configuration database.

Django App on Heroku

I've been struggling with an issue where I believe my account has been shut down due to having too large a table? Correct me if I'm wrong.
=== HEROKU_POSTGRESQL_OL (DATABASE_URL)
Plan: Dev
Status: available
Connections: 0
PG Version: 9.1.8
Created: 2013-01-06 18:23 UTC
Data Size: 11.8 MB
Tables: 15
Rows: 24814/10000 (Write access revoked)
Fork/Follow: Unsupported
I tried running
heroku pg:psql HEROKU_POSTGRESQL_OL
to look at the tables, but how do I determine, inside psql, which table has too many rows and is flooding my database?
Once I do determine which table this is, can I just run heroku run python manage.py shell and call Model_with_too_many_rows.objects.all().delete(), and my account will no longer be shut down? Are there other steps that must be taken for the smaller DB to register with Heroku so that my write access will be returned?
Sorry if these questions are trivial, but my understanding of SQL is limited.
EDIT: I also believe that there was a time when my database was flooded with entries, but I have since deleted them. Is there any command I can run so that the database acknowledges that the number of rows has been reduced? Or does Heroku do this automatically?
There may be a smarter way to check row count by table, but I use the pg-extras plugin and run pg:index_usage.
You will regain write access to your database within ~5 minutes of getting back down below the 10k row limit – Heroku will check this and restore access automatically.
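If you would rather find the offending table from Django than from psql, a rough sketch run inside heroku run python manage.py shell (counts are per model, so tables without a model are not listed):

from django.apps import apps

for model in apps.get_models():
    # _meta.db_table is the underlying Postgres table name
    print(model._meta.db_table, model.objects.count())

Whichever table dominates the total is the one to prune before waiting for Heroku to recheck the row limit.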