SnappyData per-row TTL

Is it possible to set a TTL per row, meaning the row is automatically deleted once the TTL has passed? The TTL can be different for every row in the same table.
Thanks!

Yes. You can use the 'Expire' clause when you create the table. See here.
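A minimal sketch of what that can look like (the table and column names and the 3600-second value are placeholders; SnappyData's EXPIRE option for row tables takes a time-to-live in seconds):

CREATE TABLE events (
    id INT,
    payload STRING
) USING row
OPTIONS (EXPIRE '3600');  -- rows expire 3600 seconds after they are inserted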

Related

Update BigQuery table soon after inserting records into the table

I have a requirement to load data and then, a few minutes later, update the records. How can I achieve that? I am getting: google.api_core.exceptions.BadRequest: 400 UPDATE or DELETE statement over table dataset.tablename would affect rows in the streaming buffer, which is not supported
Is there any way to flush the data from the streaming buffer to permanent storage?
I tried the option below, but that query gets the same error.
UPDATE dataset.tablename
SET _PARTITIONTIME = CURRENT_TIMESTAMP()
WHERE _PARTITIONTIME IS NULL
Streamed data is not immediately available for operations other than analysis (SELECT) for up to 90 minutes (typically much less). You can use streamingBuffer.oldestEntryTime in the tables.get response to see the age of the oldest row in the streaming buffer.
https://cloud.google.com/bigquery/docs/streaming-data-into-bigquery#dataavailability
As a potential workaround, you could create an independent table with desired changes and join it in a query/view with the table you're streaming to, to see newer values in your query results. Eventually, you could use the "change" table to merge changes into the original table.
https://cloud.google.com/bigquery/docs/reference/standard-sql/dml-syntax#merge_statement
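A hedged sketch of that last step, to be run once the streamed rows have left the buffer; the change table, the id key, and the status column are placeholders:

MERGE dataset.tablename t
USING dataset.changes c
ON t.id = c.id
WHEN MATCHED THEN
  UPDATE SET t.status = c.status        -- apply the change to existing rows
WHEN NOT MATCHED THEN
  INSERT (id, status) VALUES (c.id, c.status);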

BigQuery: clustering on a column with cardinality in the millions

I have a BigQuery table, partitioned by date (one partition for each day).
I would like to add various columns (sometimes populated, sometimes missing) and a column for a unique id.
The data needs to be searchable by this unique id. The other use case is to aggregate per column.
The unique id will have a cardinality of millions per day.
I would like to use the unique id for clustering.
Is there any limitation on this? Anyone has tried it?
It's a valid use case to enable clustering on an id column; the number of distinct values shouldn't cause any limitations.
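For reference, a minimal sketch of the DDL (the dataset, table, and column names are placeholders):

CREATE TABLE mydataset.events (
  unique_id   STRING,
  event_date  DATE,
  payload     STRING
)
PARTITION BY event_date        -- one partition per day
CLUSTER BY unique_id;          -- cluster on the high-cardinality id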

How to sum values in a Lambda function with DynamoDB on AWS

I have some tables in DynamoDB, and I simply want to take the cost attribute of a service and write a function that adds up all the values for one id (like SUM(column)) and returns the result. How can I do it?
Summing up values from a DynamoDB table requires a full table scan by design.
It's because you need to gather all values from the column you are trying to sum up.
Your question is similar to Find Average and Total sum in DynamoDB?
You can use a query with a projection expression to read all values for the attribute you wish to sum into an array, then sum the values in the array client side.
A query avoids the need to do a full table scan.
For this to work, the "id" you reference in "all from one id" must be a partition key.
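A minimal sketch of that approach in Python with boto3; the table name "services", the partition key "id", and the numeric attribute "cost" are all placeholders:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("services")  # hypothetical table name

def total_cost(service_id):
    # Query by partition key and project only the attribute we need
    kwargs = {
        "KeyConditionExpression": Key("id").eq(service_id),
        "ProjectionExpression": "#c",
        "ExpressionAttributeNames": {"#c": "cost"},
    }
    total = 0
    while True:
        response = table.query(**kwargs)
        # DynamoDB returns numbers as Decimal, which sum cleanly client side
        total += sum(item["cost"] for item in response["Items"])
        # Follow pagination until all matching items have been read
        if "LastEvaluatedKey" not in response:
            break
        kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]
    return total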

NULL TO NOT NULL ALTER TABLE IN SAS

I'm having difficulties running a SAS Data Integration job.
One column needs to be removed from the target's structure,
but cannot be removed because of the NULL constraint.
Do I need to remove the constraint first?
How do I do that?
Thank you in advance,
Gal.
Does the physical table exist without the column? If so, then the constraint is only in the metadata. Recreate the metadata and you should be fine.
If the physical table exists with the column, then you need to recreate that table without the column (a sketch follows below). You will still need to refresh the table metadata for DI Studio to pick it up.
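A minimal sketch of recreating the physical table in Base SAS; the library, table, and column names are placeholders:

/* Rebuild the table without the unwanted column */
data work.target_new;
    set work.target(drop=unwanted_column);
run;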

How to update all records in a table at the same time (without updating records one by one) using a stored procedure

I have a table named Emp with columns (Empname, Details). There are 4 records in the table. I want to update all records with a single UPDATE statement, without updating records one by one, using a stored procedure.
UPDATE [tableName] SET [columnName] = [value] WHERE [condition]
Omit the WHERE clause and the single statement updates every row in the table.
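Since the question asks for a stored procedure, here is a minimal sketch assuming SQL Server; the procedure name, the parameter, and the choice of the Details column are placeholders:

CREATE PROCEDURE dbo.UpdateAllEmpDetails
    @NewDetails VARCHAR(200)
AS
BEGIN
    -- No WHERE clause, so one statement updates all rows in Emp
    UPDATE Emp
    SET Details = @NewDetails;
END;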