What is the TRUNCATE TABLE command in QuestDB? [duplicate] - questdb

This question already has answers here:
Mysql: Which to use when: drop table, truncate table, delete from table
(5 answers)
Closed 4 months ago.
In the documentation it says
TRUNCATE TABLE is used to permanently delete the contents of a table without deleting the table itself.
Can anyone please give an example of it?

Unlike DROP TABLE, TRUNCATE TABLE keeps the table and deletes only the rows currently stored in it. This means you don't need to re-create the table if you plan to insert new rows into it once it's truncated.
This can be handy for local experiments (though not only there) where you occasionally regenerate the data and run some queries over it.
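For example, a minimal sketch (the table and column names below are made up for illustration):
CREATE TABLE trades (ts TIMESTAMP, symbol SYMBOL, price DOUBLE) timestamp(ts);
INSERT INTO trades VALUES (now(), 'BTC-USD', 42000.5);
TRUNCATE TABLE trades;        -- removes every row; the table and its schema stay in place
SELECT count() FROM trades;   -- returns 0
INSERT INTO trades VALUES (now(), 'BTC-USD', 43010.0);   -- the table is immediately ready for new rows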

Related

Is it safe to truncate ACT_RU_METER_LOG table in Camunda BPM?

The ACT_RU_METER_LOG table contains 10 million rows. I want to upgrade Camunda from 7.10.0 to 7.17, and as part of the upgrade there are a few ALTER TABLE statements on the mentioned table. As expected, these ALTER TABLE statements take a huge amount of time, hence I am wondering whether I can truncate the table. I am aware that the metrics can be disabled, but the existing data still has to be cleaned up explicitly.
Thanks in advance.

QuickSight, Automatically Add Large Number of Columns to Table

Is there a way to automatically add a large number of columns to a QuickSight table without the manual procedure of dragging and dropping them?
For instance, in the picture below, I would like to add all the numbered columns to the table.
QuickSight support said that this is not possible for now. They have received this question multiple times, and it will hopefully be possible in the future.

How to import all columns of a csv as Strings in Bigquery [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I am using Airflow to unarchive CSV files (e.g. FILE__YYYYMMDD.csv) from a GCS bucket to BigQuery. Since the file evolved over the months, its schema changed (more columns were added), so I used the autodetect option to set the table schema in BQ. Unfortunately, some key columns are autodetected wrongly (hex hashes are detected as floats for some reason), so I want to import every column as a String and then cast it within the query that is supposed to analyze the tables...
Do you recommend this approach?
How do I tell BigQuery "autodetect the column (names), but set their types as String"?
If the schema can change at any time, the safest way is to create a workflow:
Import the new file in a temporary table
Create a MERGE query to merge the data from the temporary table into the final one. In that MERGE query, you can cast the fields into the format you want before writing them to the final table.
(The temporary table will be deleted automatically)
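A rough sketch of such a MERGE (the dataset, table, and column names are made up for illustration; tmp_import stands for the temporary table in which every CSV column was loaded as STRING):
MERGE my_dataset.final_table AS t
USING my_dataset.tmp_import AS s
ON t.hash_id = s.hash_id                  -- the hash key stays a STRING, no cast needed
WHEN MATCHED THEN
  UPDATE SET t.amount = SAFE_CAST(s.amount AS NUMERIC),
             t.event_date = SAFE_CAST(s.event_date AS DATE)
WHEN NOT MATCHED THEN
  INSERT (hash_id, amount, event_date)
  VALUES (s.hash_id, SAFE_CAST(s.amount AS NUMERIC), SAFE_CAST(s.event_date AS DATE));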
EDIT 1
Following the comment discussion, your use case isn't achievable out of the box on BigQuery. You must do some extra work before the integration.
The idea that I have is the following:
When a file comes in, get the header line
Get the schema of the target table
If the header has more fields than the target table, update the table schema, adding the new columns with the STRING type.
Load the file into BigQuery with the schema that you deduced from reading the header, and the allow_jagged_rows parameter so that rows with fewer columns than the final schema can still be loaded. Load the file from Cloud Storage, not from your code. A sketch of these last two steps follows below.
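A rough sketch in SQL, assuming the dataset, table, column, and bucket names below are placeholders and that the LOAD DATA statement is available in your project; the header comparison itself would live in your Airflow code:
-- add any new header field to the target table as a STRING column
ALTER TABLE my_dataset.final_table ADD COLUMN IF NOT EXISTS new_field STRING;
-- append the file from Cloud Storage; allow_jagged_rows lets older files
-- with fewer trailing columns than the final schema load without errors
LOAD DATA INTO my_dataset.final_table
FROM FILES (
  format = 'CSV',
  uris = ['gs://my-bucket/FILE__YYYYMMDD.csv'],
  skip_leading_rows = 1,
  allow_jagged_rows = true
);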

How to get current table name inside sas data step? [duplicate]

This question already has answers here:
SAS: concatenate different datasets while keeping the individual data table names
(2 answers)
Closed 1 year ago.
I need to join some tables and create a new column with some mark to indicate the table that data came from.
I have tbl_202101, tbl_202102 ... tbl_202109. I need to join all, adding a new column table_origin, for example, indicating the respective table.
DATA FINAL_TABLE;
SET TBL_202101 - TBL_202109;
/* Here I don't know how to identify the current table */
table_origin = CASE
WHEN *CURRENT TABLE* = TBL_202101 THEN 202101
WHEN *CURRENT TABLE* = TBL_202102 THEN 202102
AND GO ON...
RUN;
How could I do it?
SET statement option:
INDSNAME=variable
creates and names a variable that stores the name of the SAS data set from which the current observation is read. The stored name can be a data set name or a physical name. The physical name is the name by which the operating environment recognizes the file.
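A minimal sketch of how this can be used for the tables in the question (the variable name source_ds and the numeric form of table_origin are my own choices):
DATA FINAL_TABLE;
    SET TBL_202101 - TBL_202109 INDSNAME=source_ds;
    /* source_ds holds e.g. WORK.TBL_202103 for the current observation */
    table_origin = input(scan(source_ds, -1, '_'), 8.);   /* keep only the YYYYMM suffix */
RUN;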

Which tables, triggers, views are affected by a drop column cascade in PostgreSQL [duplicate]

This question already has answers here:
Find dependent objects for a table or view
(5 answers)
Closed 3 years ago.
Django created a migration for dropping a field from a table:
ALTER TABLE "my_table" DROP COLUMN "my_deprecated_field" CASCADE;
COMMIT;
I would like to know which consequences the CASCADE has, i.e. which other columns, tables, triggers, etc. are going to be affected by it.
Since there is no EXPLAIN ALTER, which other means do I have to find out?
I think it will also remove all the objects which depend on or reference (views, foreign key constraints, triggers, etc.) the dropped object.
Assume Table A has a non-nullable foreign key to Table B. If someone drops Table B, what happens to Table A? Its rows can't reference null, as the column is non-nullable. CASCADE comes into the picture here: dropping Table B with CASCADE also drops the dependent objects, such as the foreign key constraint on Table A (the rows of Table A themselves are not deleted, only the objects that depend on Table B).
You can see an example here: http://www.postgresqltutorial.com/postgresql-drop-column/
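One practical way to see exactly what the CASCADE would remove (a small sketch, reusing the table and column names from the question) is to run the statement inside a transaction and roll it back; PostgreSQL prints a NOTICE such as "drop cascades to view ..." for every dependent object it drops:
BEGIN;
ALTER TABLE "my_table" DROP COLUMN "my_deprecated_field" CASCADE;
-- read the NOTICE messages listing the cascaded drops
ROLLBACK;   -- nothing is actually removed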