Django App on Heroku - django

I've been struggling with an issue where I believe my account has been shut down due to having too large a table. Correct me if I'm wrong.
=== HEROKU_POSTGRESQL_OL (DATABASE_URL)
Plan: Dev
Status: available
Connections: 0
PG Version: 9.1.8
Created: 2013-01-06 18:23 UTC
Data Size: 11.8 MB
Tables: 15
Rows: 24814/10000 (Write access revoked)
Fork/Follow: Unsupported
I tried running
heroku pg:psql HEROKU_POSTGRESQL_OL
to look at the tables, but how do I determine which table has too many rows and is flooding my database inside psql?
Once I do determine which table this is, can I just go to heroku run manage.py shell and call Model_with_too_many_rows.delete.all(), and my account will no longer be shut down? Are there other steps that must be taken for the smaller db to register with Heroku so that my write access is returned?
Sorry if these questions are trivial, but my understanding of SQL is limited.
EDIT: I also believe that there was a time when my database was flooded with entries, but I have since deleted them. Is there any command I can run to resize the database to acknowledge that the number of rows has been reduced? Or does Heroku do this automatically?

There may be a smarter way to check row count by table, but I use the pg-extras plugin and run pg:index_usage.
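If you would rather stay on the Django side, a rough sketch of the same check, assuming a reasonably recent Django (1.7+ exposes django.apps; older versions use django.db.models.get_models instead), run from heroku run python manage.py shell:

# Rough sketch: print a row count per model from the Django shell.
# Assumes Django 1.7+; adjust the import for older versions.
from django.apps import apps

for model in apps.get_models():
    print("%s: %s" % (model._meta.db_table, model.objects.count()))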
You will regain write access to your database within ~5 minutes of getting back down below the 10k row limit – Heroku will check this and update the limit automatically.
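As a side note, the call in the question is not valid ORM syntax; deleting all rows of the offending model would look roughly like this (Model_with_too_many_rows is the placeholder name from the question, and myapp is likewise hypothetical):

# Sketch: clear out the offending model's rows from the Django shell.
# The app and model names below are placeholders.
from myapp.models import Model_with_too_many_rows

Model_with_too_many_rows.objects.all().delete()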

Related

cfthread with heavy DB activity

When we enter a ticket in the system, there is an insert into a group of 5-6 tables (based on some rules). That works alright when a single ticket is added manually.
Now we have a requirement where we will get the ticket feed from an external source (most probably an XML or maybe a txt file). It can have any number of tickets, up to 500.
If I have to leverage cfthread with, say, 10 threads of 50 tickets each, how can I prevent it from causing issues on the DB? Ultimately, every one of those tickets will be inserting data into the same DB tables.
As each thread will be completely independent, wouldn't it create a queue on the DB (maybe a deadlock too)?
Environment: CF2016, SQL Server 2014
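There is no ColdFusion-specific answer here, but the underlying concern (many independent workers inserting into the same tables) is usually handled by funnelling the inserts through a small number of writers. A language-agnostic illustration of that pattern, sketched in Python with made-up names rather than cfthread:

# Illustration only (not ColdFusion): producers parse ticket chunks in
# parallel, while a single writer drains the queue and inserts in batches,
# so the database never sees competing writers on the same tables.
import queue
import threading

work = queue.Queue()

def insert_tickets(batch):
    # Placeholder for the real multi-row INSERT into the ticket tables.
    print("inserting %d tickets" % len(batch))

def parse_feed_chunk(chunk):
    # Parsing can happen concurrently across threads.
    for ticket in chunk:
        work.put(ticket)

def db_writer():
    # Inserts are serialized and batched by one writer; None signals shutdown.
    batch = []
    while True:
        ticket = work.get()
        if ticket is None:
            break
        batch.append(ticket)
        if len(batch) >= 50:
            insert_tickets(batch)
            batch = []
    if batch:
        insert_tickets(batch)

writer = threading.Thread(target=db_writer)
writer.start()
parse_feed_chunk([{"id": i} for i in range(120)])  # pretend feed of 120 tickets
work.put(None)
writer.join()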

Is there a way to force Sitecore to sync MongoDB data with its SQL database?

I am setting up Sitecore xDB and am trying to test exactly what info gets through the system for authenticated and non-authenticated users. I would like to be able to make a change and see the results quickly in Sitecore. I found the setting to lower session lifetime to 1 minute rather than 20. I have not found a way to force Sitecore to sync with Mongo on demand, or at least within 1-5 minutes rather than what currently appears to be about 20 minutes. Does it exist, or is "rebuilding" the database explained here the only existing process?
See this blog post by Martina Welander for this and more good info about xDB sessions: https://mhwelander.net/2016/08/24/whats-in-a-session-what-exactly-happens-during-a-session-and-how-does-the-xdb-know-who-you-are/
You just need a utility page that calls System.Web.HttpContext.Current.Session.Abandon(). You may also want to redirect the user to a page that doesn't exist.
Update to address comment
My understanding is that once an xDB session has expired, processing should take place quickly. In the Sitecore.Analytics.Processing.Services.config file, the BackgroundService agent is set to run on an interval of 15 seconds by default.
You may just be seeing cached reporting data. Try clearing the cache using the /sitecore/admin/cache.aspx page. You could also decrease the defaultCacheExpiration setting for the reporting cacheProvider in the Sitecore.Analytics.Reporting.config file. The default is 10 minutes.

Improve the response time of AWS DynamoDB

We have been using Couchbase for about two years, but we finally decided to switch to the Amazon DynamoDB service for many reasons.
Now I have started the migration of data to DynamoDB. At first everything was alright and going as expected, but after some time the response time from DynamoDB got higher and higher, and the migration process kept getting slower.
I tried to change my strategies but with no luck.
What can I do to improve the response time?
Basically I am scanning an SQL table, getting 100 items per query, then asking Couchbase to retrieve the data I want about these 100 items. At first I was getting high response times (as shown in the image below).
The following info might help:
I am running the migration code on an EC2 micro instance running Ubuntu 14.04 with Node v4.4.1.
After looking at the graphs I started measuring the time for each DynamoDB request (so I don't know what the average was at first); the average response time is 800 ms for about 150,000 requests (get & put only, no batch commands or queries).
I am storing the items in two tables: one with an integer hash key and the other with integer hash and sort keys.
The second table is a huge one (about 4.4 million items and counting).
Thanks to @Vor for mentioning "hot partition keys". I did some further reading, and in reference to the Guidelines for Working with Tables I followed their recommendations; distributing my requests into batch requests also made the difference.
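For reference, the batching change described above looks roughly like the following in Python with boto3 (the migration itself ran on Node.js, and the table name and item shape here are made up):

# Rough illustration of batched writes with boto3; the original code was
# Node.js, and the table/attribute names are placeholders.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("migrated_items")

def write_chunk(rows):
    # batch_writer() groups puts into BatchWriteItem calls (up to 25 items
    # per request) and retries unprocessed items, instead of issuing one
    # PutItem call per row.
    with table.batch_writer() as batch:
        for row in rows:
            batch.put_item(Item=row)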
Thank you all for your help.

API Gateway generating 11 SQL queries per second on REG_LOG

We have sysdig running on our WSO2 API gateway machine, and we notice that it fires a large number of SQL queries to the database for a minute, then waits a minute and repeats.
Every minute it goes wild, waits for a minute, and then goes wild again with a request of the following format:
SELECT REG_PATH, REG_USER_ID, REG_LOGGED_TIME, REG_ACTION, REG_ACTION_DATA
FROM REG_LOG
WHERE REG_LOGGED_TIME>'2016-02-29 09:57:54'
AND REG_LOGGED_TIME<'2016-03-02 11:43:59.959' AND REG_TENANT_ID=-1234
There is no load on the server. What is causing this? What can we do to avoid this?
[Screenshot: sysdig output for the API gateway process]
This particular query is the result of the registry indexing task that runs in the background. The REG_LOG table is being queried periodically to retrieve the latest registry actions. The indexing task cannot be stopped. However, one can configure the frequency of the indexing task through the following parameter in registry.xml. See [1] for more information.
indexingFrequencyInSeconds
If this table is filled up, one can clean the data using a simple SQL query. However, when deleting the records, one must be careful not to delete all the data. The latest records of each resource path should be left in the REG_LOG table since reindexing of data requires at least one reference of each resource path.
Also, if required, before clearing up the REG_LOG table, you can take a dump of the data in case you do not want to lose old records. Hope this answer provides the information you require.
[1] - https://docs.wso2.com/display/Governance510/Configuration+for+Indexing
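As an illustration only, the kind of cleanup described above might look like the following sketch. The connection details are placeholders, the SQL assumes a MySQL-backed registry with a REG_LOG_ID surrogate key, and you should verify it against your own schema and take a backup before running anything like it:

# Illustration only: trim REG_LOG while keeping the newest record per
# tenant and resource path. Connection details are placeholders; the SQL
# assumes a MySQL-backed registry with a REG_LOG_ID column, so verify it
# against your schema and back up the table first.
import pymysql

conn = pymysql.connect(host="db-host", user="wso2", password="secret", database="regdb")
try:
    with conn.cursor() as cur:
        cur.execute("""
            DELETE FROM REG_LOG
            WHERE REG_LOG_ID NOT IN (
                SELECT keep_id FROM (
                    SELECT MAX(REG_LOG_ID) AS keep_id
                    FROM REG_LOG
                    GROUP BY REG_TENANT_ID, REG_PATH
                ) AS latest
            )
        """)
    conn.commit()
finally:
    conn.close()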

How can I limit the number of records to be synced using Microsoft Sync Framework

This is regarding Microsoft Sync Framework, where we are syncing iPad data with a SQL Server database.
It is working fine.
But here, I want to limit the records to be synced to only 20 records at a time. Right now, all the records are getting synced.
Is there an out-of-the-box feature in Sync Framework that will enable us to do so?
If not, how can I write custom code to achieve this?
You should be able to set batching, but it's in terms of size, not number of rows. You can simply get your row size and multiply by the number of rows you have in mind.
Look up SetDownloadBatchSize and SetBatchSpoolDirectory in the documentation.
e.g., config.SetDownloadBatchSize = some value in KB