AppFabric cache cluster configuration database has grown to 1 GB

I use AppFabric Cache with a SQL Server-based cluster configuration.
The problem is that the configuration database has grown to 1 GB.
The table that takes up most of the space in the database is 'ConfigAudit'.
It is full of entries whose 'Operation' column contains the values UpdateNew and UpdateOld, with UpdatedTimeStamp values one minute apart.
I cannot find any information about AppFabric's cluster configuration database, nor about any auditing of cache operations.
The cache works fine apart from this problem.
Is there a way to turn this audit off, or another way to make this database much smaller and keep it from growing again?
Kind regards,
Charles.

The dbo.ConfigAudit table is used to track changes to the dbo.Config table. It is mostly there for diagnostics, and it cannot be changed via the PowerShell commands.
To turn off the tracking, you can disable all the triggers on the dbo.Config table. For example:
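A minimal sketch, assuming the configuration database is reachable via pyodbc; the server and database names in the connection string are placeholders, and the DELETE is an optional extra step if you also want to empty the audit rows that are already there:

# Sketch only: disable the audit triggers on dbo.Config and optionally clear dbo.ConfigAudit.
# The connection string is a placeholder; point it at your AppFabric cluster configuration database.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=MYSERVER;DATABASE=AppFabricConfigDB;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Turn off the auditing by disabling every trigger defined on dbo.Config.
cur.execute("DISABLE TRIGGER ALL ON dbo.Config;")

# Optional: remove the existing audit rows so the database can be made smaller.
cur.execute("DELETE FROM dbo.ConfigAudit;")

conn.commit()
conn.close()

After that you can shrink the database files to reclaim the disk space.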

Related

Why is DynamoDB sometimes extremely slow?

I am developing an application using DynamoDB. This application is not yet open to the public, so only certain employees can access it.
Generally, the application is very fast and there are no performance issues. Sometimes, however, the application is extremely slow.
At first I suspected that the problem came from the React JS application or from the API, but the problem is from DynamoDB.
How can I confirm this?
I tested by stopping Node JS (so the API was offline)
I tested directly in the AWS console in the "Explore table items" screen and in the "PartiQL editor" screen
DynamoDB was very, very slow and I got this error:
The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded.
Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API
I cannot understand this, because no application was running.
So why did DynamoDB become slow?
---> Maybe there is a bug in the API. Engineers are working on that.
But why did DynamoDB stay slow while the API was offline?
How can I "restart" and/or "stop" the DynamoDB service?
Best regards
Update: 2022-09-05 17h42 (Japan Time)
I created two videos to illustrate what I am describing (sorry for the delay; to record the videos I had to wait for the problem to happen again):
Normal Case: DynamoDB is very very fast
https://youtu.be/ayeccV0zk0E
Issue Case: DynamoDB is very very slow
https://youtu.be/1u201N2HV8o
---> In my example I have only 52 users, so this is a bug and not normal.
Regards
The error message is giving you a potential cause for your perceived slowness.
I suspect that what you perceive as slowness is because the throughput of the Global Secondary Index your app is reading from is exhausted, and the app (or the AWS SDK) is performing exponential backoff to retry the API call.
The one dimension you scale DynamoDB with aside from the key schema is throughput. You decide how many requests per second (it's a bit more complicated than that) DynamoDB can handle, and AWS ensures that this load can be served. If you go beyond that, AWS throttles the API calls and you receive errors like the one above.
GSIs have their own throughput that you can manage. I suggest you take a look at the provided metrics to identify where your throughput bottleneck is and adjust the throughput accordingly. If you don't want to deal with throughput at all, switch the table to On-Demand Capacity (Pay per request) and AWS handles that for you at a small premium.
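Both remedies go through the UpdateTable API that the error message mentions. A minimal boto3 sketch, assuming a table called MyTable and an index called MyIndex (both names are placeholders):

# Sketch only: raise the provisioned throughput of an under-provisioned GSI,
# or switch the whole table to on-demand capacity. Table and index names are placeholders.
import boto3

dynamodb = boto3.client("dynamodb")

# Option 1: give the GSI more read/write capacity (provisioned mode).
dynamodb.update_table(
    TableName="MyTable",
    GlobalSecondaryIndexUpdates=[
        {
            "Update": {
                "IndexName": "MyIndex",
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 50,
                    "WriteCapacityUnits": 10,
                },
            }
        }
    ],
)

# Option 2: stop managing throughput yourself and pay per request instead.
# (Run this as a separate call; a table is either provisioned or on-demand.)
# dynamodb.update_table(TableName="MyTable", BillingMode="PAY_PER_REQUEST")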
The error message mentions provisioned throughput of a GSI, so it is quite likely that this is your problem:
The DynamoDB GSI documentation https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html#GSI.ThroughputConsiderations explains that
When you create a global secondary index on a provisioned mode table, you must specify read and write capacity units for the expected workload on that index. The provisioned throughput settings of a global secondary index are separate from those of its base table. A Query operation on a global secondary index consumes read capacity units from the index, not the base table. When you put, update or delete items in a table, the global secondary indexes on that table are also updated. These index updates consume write capacity units from the index, not from the base table.
For example, if you accidentally set a GSI's read provisioning to 1, then you can only do on average one read per second from this GSI. If you do a scan that needs to return 10 items, it may take around 10 seconds to complete. Even if no other application is using the table.
Please read the aforementioned link for the full story on how to provision secondary indexes in DynamoDB.
If this is not your problem, please update your question with details on the provisioned throughput settings of your base table and its GSI.
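A quick way to collect those details is to read them back from the DescribeTable API; a minimal boto3 sketch (the table name is a placeholder):

# Sketch only: print the provisioned throughput of the base table and each GSI.
import boto3

dynamodb = boto3.client("dynamodb")
table = dynamodb.describe_table(TableName="MyTable")["Table"]

print("Table:", table["ProvisionedThroughput"])
for gsi in table.get("GlobalSecondaryIndexes", []):
    print(gsi["IndexName"], gsi["ProvisionedThroughput"])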

Informatica - Greenplum database target - increase throughput (additional setting required in the odbc.ini file)

For an Informatica job where we needed to load data from SQL Server (source) into a Greenplum (PostgreSQL) database (target), we noticed that the throughput was 35 rows/second.
After one additional change in the odbc.ini file, the throughput jumped to 1,500 rows/second.
Add BatchMechanism=2 to the driver entry as shown below; that solved the issue:
Driver=******/ODBC7.1/lib/DWgplm27.so
Description=DataDirect 7.1 PostgreSQL Wire Protocol
AlternateServers=
ApplicationUsingThreads=1
BatchMechanism=2
Hope it will help others too.

Why does WSO2AM execute a count query against the MB_METADATA table over and over again?

We have enabled advanced throttling for WSO2 AM 2.6.0. Once this was enabled and the execution plans were appropriately created, we noticed that over 35 million SELECT COUNT queries per hour were being executed against the MB_METADATA table.
Also, the MB_METADATA and MB_CONTENT tables are constantly growing, and the row count never goes down.
I have disabled all statistics as well as tracing. We have 4 WSO2 servers, each one running independently with the gateway, key manager, and traffic manager on the same box. The DB is Oracle.
We are seeing this query run 35 million times per hour:
SELECT COUNT(MESSAGE_ID) AS count
FROM MB_METADATA
WHERE QUEUE_ID=:1
AND MESSAGE_ID BETWEEN :2 AND :3
AND DLC_QUEUE_ID=-1
I would expect the table sizes to be manageable and this query not to run at such a high rate.
Any suggestions on what might be going on? Maybe there is a configuration setting that I need to disable?
Sharing the MB database is not correct. Each traffic manager node should have its own MB database, and it can be the default H2 one.
Quoted from docs:
Do not share the WSO2_MB_STORE_DB database among the nodes in an Active-Active set-up
or Traffic Manager HA scenario, because each node should have its own local WSO2_MB_STORE_DB
database to act as separate Traffic Managers.
The latter mentioned DBs can be either H2 DBs or any RDBMS such as MySQL.
If the database gets corrupted then you need to replace the database with a fresh database
that is available in the product distribution.
Ref: https://docs.wso2.com/display/AM260/Installing+and+Configuring+the+Databases

Impact of On-Demand mode on Audit table data for Amazon DynamoDB

I am working on an Amazon DynamoDB audit table.
The read/write capacity mode was set to "Provisioned". Now the mode has been changed to "On-Demand". I have an "Audit Table" (which captures audit information such as the date and time of the operation, user details, etc.) associated with DynamoDB.
My questions on this are:
1) How does this impact the data that gets created in the "Audit Table"?
2) Will the data be deleted automatically on a timely basis?
3) If not, what is the maximum amount of data that a table (the audit table in this case) can persist?
Please let me know if you need any more information from my side.
Waiting for your answers on my questions.
Thanks and regards,
Mahesh Bongale
1) Provisioned mode just means the table runs with whatever read/write capacity you set; On-Demand mode is similar to an auto-scaling mode where DynamoDB always delivers the throughput your application needs. Switching between the two modes does not change or delete the data in the table. More info: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
2) No, absolutely not, unless you specifically add code that deletes old data OR set a TTL (time to live) attribute on your items (see the sketch below). More info: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
3) There is no specific limit on the number of items a table can hold; it can be as much as you want. There are limits on a few other things, though; some can be lifted if you ask AWS and some cannot: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html
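A minimal sketch of using TTL for automatic cleanup, assuming the audit table is named AuditTable and you are happy to add a numeric attribute called expires_at (both names are just for illustration):

# Sketch only: enable TTL on the table and write items with an expiry timestamp.
# DynamoDB deletes expired items automatically (typically within a few days of expiry).
import time
import boto3

dynamodb = boto3.client("dynamodb")

# One-time setup: tell DynamoDB which attribute holds the expiry time (epoch seconds).
dynamodb.update_time_to_live(
    TableName="AuditTable",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# When writing an audit record, set expires_at, e.g. 90 days from now.
dynamodb.put_item(
    TableName="AuditTable",
    Item={
        "AuditId": {"S": "example-id-123"},
        "Operation": {"S": "UPDATE"},
        "expires_at": {"N": str(int(time.time()) + 90 * 24 * 3600)},
    },
)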

Migrating a relational DB into AWS services

I have a terabyte-sized SQL Server table which has only two columns:
Id,
HTML Content
There are a few applications that call this table to retrieve the HTML content by providing the Id of the row.
The DB resides on-premises, and its maintenance cost and size keep getting higher. I am thinking of moving this DB into AWS DynamoDB. The reasons I have chosen DynamoDB are the cost and the performance I have read about.
Are there any concerns I should know about before choosing DynamoDB?
Are there any other services in AWS that I could possibly use instead of DynamoDB?
I understand that SQL Server is a relational DB, while DynamoDB is NoSQL, and it seems a NoSQL DB could be a potential solution for this scenario. I have no joins or transactions against that table. All I am doing with the table is Insert and Select.
Are there any concerns I should know about before choosing DynamoDB?
As with many NoSQL big-data DBs, DynamoDB reads are eventually consistent by default, so if your application writes and then immediately reads the same record, you should expect occasional stale results (inconsistencies).
I'm not familiar with your on-premises setup, but assuming you mean that you're working with your own private servers, I feel obligated to provide the following warning: working in the cloud is very different from working with your own servers. Requests fail more often, the latency pattern is different, and you should architect your software to handle these sorts of issues. If you're planning on moving to the cloud, I'd start by migrating your application and leave the DB for last.
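For the access pattern you describe (insert and select by Id), DynamoDB is essentially a key-value lookup. A minimal sketch, assuming a table named HtmlContent with Id as the partition key (names are placeholders), including a strongly consistent read to work around the read-after-write staleness mentioned above:

# Sketch only: key-value style insert and lookup of HTML content by Id.
# Table and attribute names are placeholders.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("HtmlContent")

# Insert (note: a single DynamoDB item is limited to 400 KB, so very large
# HTML documents would need to live in S3 with only a pointer stored here).
table.put_item(Item={"Id": "12345", "Html": "<html>...</html>"})

# Select by Id; ConsistentRead=True avoids stale read-after-write results,
# at the cost of extra read capacity.
response = table.get_item(Key={"Id": "12345"}, ConsistentRead=True)
html = response.get("Item", {}).get("Html")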
If you really need real-time updates of your data, you should reconsider moving to DynamoDB. DynamoDB is mainly useful when you need a dynamic number of columns for each row, so apart from the cost, I don't see any benefits here.
If you don't need real-time updates, you can look into AWS Redshift or Google BigQuery; these will be cheaper solutions compared to DynamoDB.
Since, as you mentioned, you have just two columns, also take a look at Redis. A plain key-value structure will help with performance. But since Redis stores everything in memory, the cost will be high and you will still need permanent storage / a DB such as SQL Server or MySQL. So in terms of performance you will see a huge difference, but you will pay more than your current cost.
How about AWS Aurora? AWS claims roughly 1/10th of the cost compared to other SQL/MySQL instances, and it is MySQL- and PostgreSQL-compatible as well.