Item Table takes forever to load in Dynamics NAV 2013

Test Environment:
CPU: Intel Core i5 7th Gen (2.71GHz - 4 Logical Processors)
RAM: 12 GB (Free 6 GB)
HDD: 1 TB (Free 700 GB for SQL and NAV)
NAV: 7.0.33781 (2013)
SQL: 2008 SP1
OS: Windows 10 Pro
Note: SQL Server and NAV are running together on this system for test purposes. The same issue also occurs in the live server environment.
Problem:
Inside the NAV RTC, every page related to the Item table (with no customization) takes too long to load its records. This includes drill-down pages, AssistEdit pages, and list pages. It does not happen when navigating through single records (i.e. the Next and Previous actions) on the Item Card page.
Observation:
While the slow pages described above are loading, the CPU performance monitor shows the MSSQLSERVER process consuming about 90% CPU.
Experiments:
- Exported the Item table (Table ID 27) to a fresh database (the NAV demo DB), where it works like a charm, presumably because there is far less data.
- Imported the Item table and its related pages from the fresh demo DB into the problematic database, but that did not help.
- A plain SQL SELECT * FROM query with no filters and all columns returns within about 0.1 seconds of execution time.
Extra Specifications:
The Item master has 18,000 records in total, with no customized or newly created fields/FlowFields, and there is no code in the pages' OnOpen triggers or in the table code.
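(Not from the original post.) One way to narrow this down on the SQL Server side is to check which statements the NAV service tier is actually issuing while a slow page loads. A minimal sketch using the standard SQL Server DMVs; the LIKE filter on 'Item' is an assumption and only there to narrow the output:

-- Top statements by CPU time since the last plan-cache reset.
-- Run against the SQL Server instance hosting the NAV database.
SELECT TOP (20)
    qs.total_worker_time / 1000  AS total_cpu_ms,
    qs.execution_count,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    SUBSTRING(st.text, 1, 400)   AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%Item%'      -- assumption: focus on statements touching Item-related tables
ORDER BY qs.total_worker_time DESC;

If the expensive statements turn out to be FlowField/SIFT aggregations (for example Inventory calculated against the Item Ledger Entry table) rather than reads of the Item table itself, the usual suspects are the supporting SIFT/VSIFT indexes rather than the Item table.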

Related

.NET Core 6 session stops working after 5k items in DynamoDB

I moved the session state of my .NET 6 application to DynamoDB and everything worked well, but it suddenly stopped working. The table configuration is given below.
Capacity mode: On-demand
Item count: 5,028
Table size: 3 megabytes
Average item size: 606.08 bytes
I am not including any code because the code has not changed; I assume this is more related to the DynamoDB configuration than to the code.

Timeout value in Power BI service

For three days I have not been able to refresh a dataset in the Power BI service, even though on Desktop it normally refreshes in about 30 minutes. The dataset is fed from a SQL Server database via a data gateway. The gateway is up to date. Incremental refresh is enabled, and the dataset retrieves only the last 3 days of data on each refresh.
Here is the generated error message:
Data source error: The XML for Analysis request timed out before it was completed. Timeout value: 17998 sec.
Cluster URI: WABI-WEST-EUROPE-B-PRIMARY-redirect.analysis.windows.net
Activity ID: 680ec1d7-edea-4d7c-b87e-859ad2dee192
Application ID: fcde3b0f-874b-9321-6ee4-e506b0782dac
Time: 2020-12-24 19:03:30Z
What is the solution to this problem, please? Thank you.
What license are you using?
Without premium capacity, the max dataset size can be 1GB. Maybe your dataset size has crossed this mark? If you are using the shared capacity, then you can check the workspace utilized storage size by clicking on ellipses at top right corner. Then click on storage to see how much is utilized for that workspace.
Also note that in shared capacity there is a 10 GB uncompressed dataset limit at the gateway (but this should not be an issue, as you are only refreshing 3 days of data).
Also check whether your Power Query query is foldable (on the final step you should be able to see the 'show native query' option). If it is not, incremental refresh does not work and the refresh ends up querying the entire data.
Also, note that 17998 sec means 5 hours. What is your internet speed?
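(Not from the original thread.) One way to confirm folding from the database side is to watch what actually arrives at the source SQL Server while a refresh is running. A rough sketch using standard SQL Server DMVs; run it on the source server during the refresh:

-- Statements currently executing on the source SQL Server.
-- With a folded incremental refresh you should see a WHERE clause on the
-- RangeStart/RangeEnd date column instead of a full-table SELECT.
SELECT
    r.session_id,
    r.status,
    r.total_elapsed_time / 1000 AS elapsed_seconds,
    t.text                      AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;   -- exclude this monitoring query itself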

Long running prepare statements on Azure SQL Data Warehouse

I am running a functional test of a 3rd party application on an Azure SQL Data Warehouse database set at DWU 1000. In reviewing the current activity via:
sys.dm_pdw_exec_requests
I see:
prepare statements taking 30+ seconds,
NULL statements taking up to 25 seconds,
statement compilation taking up to 60 seconds,
explain statements taking 60+ seconds, and
select count(1) queries on empty tables taking 60+ seconds.
How does one identify the bottleneck involved?
The test has been running for a few hours and the Azure portal shows little DWU consumed on average, so I doubt that modifying the DWU will make any difference.
The third-party application has a workload management feature, so I've specified a limit of 30 connections to the ADW database (understanding that only 32 sessions can be active on the database itself).
There are approximately ~1,800 tables and ~350 views in the database across 29 schemas (per information_schema.tables).
I am in a functional testing mode, so many of the tables involved in the queries have not yet been loaded, but statistics have been created on every column on every table in the scope of the test.
One userID is being used in the test. It is in smallrc.
Have a look at the tables involved in the query. Make sure all columns used in joins, GROUP BY, and ORDER BY clauses have up-to-date statistics.
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics
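(Not part of the original answer.) To make that concrete, a minimal sketch; the table, schema, and column names in part 2 are placeholders, not objects from the question:

-- 1. See whether requests are queued/waiting rather than executing,
--    which can also explain long 'prepare' and NULL entries.
SELECT r.request_id, r.status, r.command, w.type AS wait_type, w.state AS wait_state
FROM sys.dm_pdw_exec_requests AS r
LEFT JOIN sys.dm_pdw_waits AS w
    ON r.request_id = w.request_id
WHERE r.status NOT IN ('Completed', 'Failed', 'Cancelled');

-- 2. Refresh (or create) statistics on columns used in joins, GROUP BY, and ORDER BY.
--    dbo.FactSales and DateKey are placeholder names.
UPDATE STATISTICS dbo.FactSales;
CREATE STATISTICS st_FactSales_DateKey ON dbo.FactSales (DateKey);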

Sitecore EventQueue Table growing out of control

We are having an issue with the EventQueue table growing very fast at times, up to 3k records a second, and never clearing records (30 million as of right now). Our environment has the following set up:
Sitecore 7.2
4 CD servers and 1 CM server
All four CD servers are load balanced.
CD1 and CD2 are pointed to the DB1 server; CD3 and CD4 are pointed to the DB2 server
There are 2 publishing targets (one for each DB)
Merge replication is set up for the Core DB across all servers (CM and CDs)
EventQueue is enabled
I have a few questions so I will break them down into separate line items.
When a publish is issued for all CD servers, is the updated content sent directly from the CM database to the CD databases (all of the correct tables), or is it sent to the EventQueue table in the CD database, with the CD server running a job/task that reads that table and applies the updates as needed?
Depending on the answer to the first question: if there are 2 CD servers pointing to the same DB, how do they know whether they should process the EventQueue table (won't they each process the table and duplicate the effort)?
Why isn't the EventQueue table cleared? How is it cleared, and when is it cleared?
On CM publish, the publish request is sent to the EventQueue table on the CD db where it is processed as per the instance's publishing schedule.
The InstanceName column in the EventQueue table stores the unique name of each Sitecore instance (by default this is Machine Name + IIS Instance Name, but can be set in web.config). This enables events to be picked up by an individual CD instance in a load balanced environment.
The EventQueue table is cleared by a Sitecore task defined in the <scheduling> element in the web.config, although I've seen this misbehave in the past. By default, it is set as follows:
<agent type="Sitecore.Tasks.CleanupEventQueue, Sitecore.Kernel" method="Run" interval="04:00:00">
  <DaysToKeep>1</DaysToKeep>
</agent>
I've previously run into high loads on the EventQueue and PublishQueue tables and would recommend trying the following (some of which were suggested from Sitecore support):
Reduce the interval of the CleanupEventQueue agent (above)
Reduce the DaysToKeep setting on the CleanupEventQueue (also CleanupPublishQueue wouldn't hurt)
Create a scheduled SQL job to run the clean-up script outlined in the CMS Tuning Guide (page 10: http://sdn.sitecore.net/upload/sitecore7/70/cms_tuning_guide_sc70-usletter.pdf); a sketch of such a job is shown below.
Finally, from Sitecore support:
Sitecore recommends that the number of rows (entries) in the History, PublishQueue, and EventQueue tables be kept below 1,000.
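(Not from the tuning guide itself.) A rough sketch of the kind of scheduled SQL clean-up job mentioned above: a batched delete of old EventQueue rows, run against each affected database. [Created] is the standard timestamp column on the Sitecore 7.x EventQueue table, but verify the schema and the retention window against your own environment before scheduling anything like this:

-- Delete EventQueue entries older than one day, in batches to limit locking.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM [EventQueue]
    WHERE [Created] < DATEADD(DAY, -1, GETUTCDATE());
    SET @rows = @@ROWCOUNT;
END;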

Django App on Heroku

I've been struggling with an issue where I believe my account has been shut down because one of my tables is too large. Correct me if I'm wrong.
=== HEROKU_POSTGRESQL_OL (DATABASE_URL)
Plan: Dev
Status: available
Connections: 0
PG Version: 9.1.8
Created: 2013-01-06 18:23 UTC
Data Size: 11.8 MB
Tables: 15
Rows: 24814/10000 (Write access revoked)
Fork/Follow: Unsupported
I tried running
heroku pg:psql HEROKU_POSTGRESQL_OL
to look at the tables, but how do I determine which table has too many rows and is flooding my database inside psql?
Once I determine which table it is, can I just run heroku run python manage.py shell, call Model_with_too_many_rows.objects.all().delete(), and my account will no longer be shut down? Are there other steps that must be taken for the smaller database to register with Heroku so that my write access is restored?
Sorry, if these questions are trivial, but my understanding of SQL is limited.
EDIT: I also believe that there was a time when my database was flooded with entries, but I have since deleted them. Is there any command I can run to resize the database so it acknowledges that the number of rows has been reduced? Or does Heroku do this automatically?
There may be a smarter way to check row count by table, but I use the pg-extras plugin and run pg:index_usage.
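Alternatively, from inside psql, something like the following lists approximate live row counts per table from the standard Postgres statistics view (a sketch, not Heroku's exact counting method):

-- Approximate live row counts per table, largest first.
SELECT relname    AS table_name,
       n_live_tup AS approx_rows
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;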
You will regain write access to your database within roughly 5 minutes of getting back below the 10k row limit; Heroku checks this and lifts the restriction automatically.