Seems in QuestDB data is not available in real time before the commit log is committed - questdb

It seems that in QuestDB the data is not available right away:
before the commit log is committed, the data cannot be selected in real time.
Is that correct?
Can I also make it select the not-yet-committed data from memory?

I think I found the answer:
https://questdb.io/docs/troubleshooting/faq#why-is-ilp-data-not-immediately-available
Well, this is not what I expected; I had hoped I could select from the uncommitted commit log too.
Is there an alternative?

Related

A record is entered into a Redshift table, and a Databricks notebook should then be triggered [duplicate]

I have a trigger in Oracle. Can anyone please help me with how it can be replicated in Redshift? DynamoDB managed-stream-like functionality would also work.
Redshift does not support triggers because it is a data warehousing system designed to import large amounts of data in a limited time. If every row insert could fire a trigger, the performance of batch inserts would suffer. This is probably why the Redshift developers chose not to support them, and I agree with that decision. Trigger-like behavior should be part of the business application logic that runs in the OLTP environment, not of the data warehousing logic. If you want to run some code in the DW after inserting or updating data, you have to do it as another step of your data pipeline.

Update DynamoDB table for Provisioned Throughput

In the DynamoDB API there is a way to increase/decrease a table's provisioned throughput, but the table has to return to the Active state after the update. What if two scripts are running against the same table at the same time, one reading and the other updating the table's throughput? What happens to the one that is reading? Is it going to fail?
I thought that maybe before every read I could check whether the table is in the Active state and, if not, wait until it is, but then I would need to make this check every time I query/scan the database. Maybe it's not necessary.
Does anyone know about this?
It's not necessary; you can still read from the table while it is being updated.
EDIT:
from http://aws.amazon.com/dynamodb/faqs/
Q: Does Amazon DynamoDB remain available when I ask it to scale up or down by changing the provisioned throughput?
Yes. Amazon DynamoDB is designed to scale its provisioned throughput up or down while still remaining available.
DynamoDB reads are "eventually consistent" by default, so the query/scan may not see the updated rows, but the request will not fail. You can request consistent reads if you need them (they consume slightly more read capacity).
See the docs for more information.

Reducing history in CiviCRM

I have a CiviCRM site with 30,000 contacts. I am noticing a number of places where history is logged, and the database is getting larger over time. Does anybody have any thoughts on removing history? Has anybody created scripts to clean up old history data?
I am not sure what history you want to delete, but here are a couple of things you can do.
All the logging and history data are important, so think twice before deleting them.
1) If you have "Logging" enabled under Misc., you will get a log table for every table in the CiviCRM database.
2) Every contact has a Changelog; I assume by history you mean this one.
3) Remove deleted records permanently; this eliminates the possibility of checking revision records in some places.
4) As an extreme measure, you can even delete activities, but you will not want to do that.
At the end of the day, it is a CRM, deleting any of the records is a loss of data.
If you are referring to the detailed logging option (as set up by #popcm), then you can set this detailed logging to write to a separate database - it's a setting in the civicrm.settings.php file.
Then you could occasionally dump all the data from this database and store it offline, emptying the online database on each occasion.
If you are referring simply to the changelog history or other aspects of the CiviCRM data, then as #popcm indicates, you really don't want to delete this as you'll only regret it later.
If keeping lots of data online is a concern, look to strengthen your security.

AppFabric Cache Database configuration has 1GB size

I use AppFabric cache with SQL Server-Based Cluster Configuration.
The problem is that the configuration database has grown to 1 GB in size.
The problematic table, which takes up most of the space in the database, is 'ConfigAudit'.
It is full of entries whose 'Operation' column contains the values UpdateNew and UpdateOld,
with UpdatedTimeStamps for every minute.
I cannot find any information about AppFabric's cluster configuration database, nor about any auditing of cache operations.
The cache works fine apart from this problem.
Is there a way to turn this auditing off,
or another solution to make this database much smaller and stop it growing again?
Kind regards,
Charles.
The dbo.ConfigAudit table is used to track changes to the dbo.Config table. It is mostly for diagnostics, and it cannot be changed via PowerShell commands.
To turn off the tracking, you can disable all the triggers on the dbo.Config table. For example:
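This is only a sketch, assuming the audit rows really are written by triggers on dbo.Config in the cluster configuration database; the database name below is a placeholder for whatever your installation uses.

USE CacheClusterConfigDb   -- placeholder: substitute your configuration database name
GO
-- DISABLE TRIGGER ALL turns off every trigger defined on dbo.Config,
-- which should stop new rows from being written to dbo.ConfigAudit.
DISABLE TRIGGER ALL ON dbo.Config
GO
-- Optionally reclaim the space already used by the audit rows
-- (take a backup first if you might still need them).
TRUNCATE TABLE dbo.ConfigAudit
GO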

How to monitor database updates from application?

I work with a SQL Server database using ODBC and C++. I want to detect modifications in some tables of the database: another application inserts or updates rows and I have to detect all these modifications. It does not have to be an immediate trigger; it is acceptable to use polling to periodically check the database tables for modifications.
Below is the way I think this can be done; I need your opinions on whether this is the standard/right way of doing it, or whether better approaches exist.
What I've thought of is this: I add triggers in SQL Server which, on any modification, will insert the identifiers of modified/added rows into a special table, which I will check periodically from my application. Suppose there are 3 tables: Customers, Products, Services. I will make three additional tables: Change_Customers, Change_Products, Change_Services, and will insert the identifiers of modified rows of the respective tables. Then I will read these Change_* tables from my application periodically and delete the processed records.
Now, if you agree that the above solution is right, I have another question: is it better to have separate Change_* tables for each table I wish to monitor, or to have one fat Changes table which will contain the changes from all tables?
Query Notifications is the technology designed to do exactly what you're describing. You can leverage Query Notifications from managed clients via the well known SqlDependency class, but there are native Ole DB and ODBC ways too. See Working with Query Notifications, the paragraphs about SSPROP_QP_NOTIFICATION_MSGTEXT (OleDB) and SQL_SOPT_SS_QUERYNOTIFICATION_MSGTEXT (ODBC). See The Mysterious Notification for an explanation how Query Notifications work.
This is the only polling-free solution that works with any kind of update. Triggers and polling for changes have severe scalability and performance issues. Change Data Capture and Change Tracking really cover a different topic (synchronizing data sets for occasionally connected devices, e.g. Sync Framework).
Change Data Capture (CDC) -- http://msdn.microsoft.com/en-us/library/cc645937.aspx
First you will need to enable CDC in the database:
USE db_name
GO
EXEC sys.sp_cdc_enable_db
GO
Then enable CDC on the table with sys.sp_cdc_enable_table, after which you can query the changes.
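A rough sketch of those two steps, assuming a dbo.Customers source table; the table name, capture instance, and filter option are placeholders to adjust for your own schema:

-- Enable CDC on a specific table (run in the CDC-enabled database).
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Customers',   -- placeholder table name
    @role_name     = NULL            -- NULL = no gating role
GO

-- Query the captured changes for that table.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn(N'dbo_Customers');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

-- 'all' returns one row per insert/update/delete; the __$operation column
-- indicates the kind of change (2 = insert, 4 = update, 1 = delete).
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Customers(@from_lsn, @to_lsn, N'all');
GO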
If your version of SQL Server is 2005, you may use Notification Services.
If your SQL Server is 2008+, the most preferable way is to use triggers that log changes to log tables, and to periodically poll these tables from the application to see the changes.
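For what it's worth, a minimal sketch of that trigger-plus-polling approach, along the lines of the Change_Customers table described in the question; all table and column names here are assumptions, not an actual schema:

-- A small change-log table holding the keys of modified Customers rows.
CREATE TABLE dbo.Change_Customers (
    ChangeId    int IDENTITY(1,1) PRIMARY KEY,
    CustomerId  int       NOT NULL,
    ChangeType  char(1)   NOT NULL,              -- 'I', 'U' or 'D'
    ChangedAt   datetime  NOT NULL DEFAULT GETDATE()
)
GO

-- Trigger that records every insert/update/delete on dbo.Customers.
CREATE TRIGGER trg_Customers_Log
ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows only in "inserted"            -> inserts
    -- Rows in both "inserted"/"deleted"  -> updates
    -- Rows only in "deleted"             -> deletes
    INSERT INTO dbo.Change_Customers (CustomerId, ChangeType)
    SELECT i.CustomerId, CASE WHEN d.CustomerId IS NULL THEN 'I' ELSE 'U' END
    FROM inserted i
    LEFT JOIN deleted d ON d.CustomerId = i.CustomerId
    UNION ALL
    SELECT d.CustomerId, 'D'
    FROM deleted d
    LEFT JOIN inserted i ON i.CustomerId = d.CustomerId
    WHERE i.CustomerId IS NULL;
END
GO

-- The application then polls and deletes what it has processed, e.g.:
-- SELECT * FROM dbo.Change_Customers ORDER BY ChangeId;
-- DELETE FROM dbo.Change_Customers WHERE ChangeId <= @lastProcessedId;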