How Django drops a column in SQLite

Don't be confused: I am not asking how to drop a column in Django. My question is what Django actually does to drop a column from an SQLite database.
Recently I came across an article saying that you can't drop columns in an SQLite database, so you have to drop the table and recreate it. It's quite strange: if SQLite doesn't support that, how is Django doing it?
Is it doing this?

To drop a column in an SQLite database, Django follows the procedure described here: https://www.sqlite.org/lang_altertable.html#caution
In simple words: create a new table, copy the data from the old table, delete the old table, and then rename the new table.
From the source code [GitHub] we can see that the SQLite schema editor's remove_field method, which is what is used to drop a column, calls self._remake_table(model, delete_field=field). The _remake_table method has the following docstring, which describes exactly how the process is performed:
Shortcut to transform a model from old_model into new_model. This follows the correct procedure to perform non-rename or column addition operations based on SQLite's documentation (https://www.sqlite.org/lang_altertable.html#caution). The essential steps are:
Create a table with the updated definition called "new__app_model"
Copy the data from the existing "app_model" table to the new table
Drop the "app_model" table
Rename the "new__app_model" table to "app_model"
Restore any index of the previous "app_model" table.
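Concretely, the emitted SQL looks roughly like the following. This is only a minimal sketch: the table app_model, its surviving columns id and name, and the dropped column are hypothetical, since the real statements are generated from the model definition.
CREATE TABLE "new__app_model" (
    "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
    "name" varchar(100) NOT NULL
);
-- copy everything except the dropped column
INSERT INTO "new__app_model" ("id", "name")
SELECT "id", "name" FROM "app_model";
DROP TABLE "app_model";
ALTER TABLE "new__app_model" RENAME TO "app_model";
-- finally, any indexes of the old table are recreated on "app_model"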

Related

How to create table based on minimum date from other table in DAX?

I want to create a second table from the first table, using filters on dates and other variables, as follows. How can I create this?
The expected table and the original table are as follows:
Go to Edit Queries. Let's say our base table is named RawData. Add a blank query and use this expression to copy your RawData table:
=RawData
The new table will be RawDataGrouped. Now select the new table, go to Home > Group By, and use the following settings:
The result will be the following table. Note that I didn't use exactly the values you used, to keep this sample at a minimum effort:
You can also now create a relationship between these two tables (on the Index column) to use cross-filtering between them.
You could show the grouped data and use the relationship to display the RawData in a subreport (or custom tooltip), for example.
I assume you are looking for a calculated table. Below is a workaround for the same.
In the Query Editor you can create a duplicate of the existing (original) table and select the Date Filters -> Is Earliest option by clicking the right corner of the Date column in the new duplicate table. Your table should then contain only the rows that have the minimum date for that column.
Note: this table is dynamic and will give updated results based on data changes in the original table, but you have to refresh both tables.
Original Table:
Desired Table:
When I added a new column to it and refreshed the dataset, I got the result below (this implies it is recalculating based on each data change in the original source).
New data entry:
Output:

Relationships break when adding column using query

I have a dataset with multiple tables and relationships between those tables, which were auto-detected when I connected to my PostgreSQL server.
But when I add a column using a query, those relationships are no longer effective in the report view and all my graphs show 'blank' labels.
One thing I noticed is that in the Data view, the UUIDs which are used to make the relationships (my foreign keys in PostgreSQL) appear with brackets, and those brackets disappear after I add columns in the query.
Before:
After:
I don't know if this helps.
I have tried adding columns with custom queries or simply duplicating an existing column.
I don't have any issue when adding columns using DAX.
Thanks,
Change the type of the column yourself as one of the first steps in the query editor. You can go either with the brackets or without; just make sure it's the same for every key/foreign key.
If you lost your relationships, you can edit them yourself. After your IDs are properly formatted, go to Model (the last icon on the left pane) and drag and drop your relationship from one table to the other. This way it should work again.

Alter sort and distribution for dependent tables

This is the query to change the sort and distribution keys of a Redshift table.
CREATE TABLE new_dummy
DISTKEY (id)
SORTKEY (account_id,created_at)
AS (SELECT * FROM dummy);
ALTER TABLE dummy RENAME TO old_dummy;
ALTER TABLE new_dummy RENAME TO dummy;
DROP TABLE old_dummy;
It throws the below error:
ERROR: cannot drop table old_dummy because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too.
So is it not possible to change the keys for dependent tables?
It appears that you have VIEWs that are referencing the original (dummy) table.
When a table is renamed, the VIEW continues to point to the original table, regardless of what it is named. Therefore, trying to delete the table results in an error.
You will need to drop the view before dropping the table. You can then recreate the view to point to the new dummy table.
So, the flow would be:
Create new_dummy and load data
Drop view
Drop dummy
Rename new_dummy to dummy
Create view
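In SQL, that flow might look like the sketch below. The view name my_view and its column list are assumptions, since the original view definition isn't shown:
CREATE TABLE new_dummy
DISTKEY (id)
SORTKEY (account_id, created_at)
AS (SELECT * FROM dummy);

DROP VIEW my_view;
DROP TABLE dummy;
ALTER TABLE new_dummy RENAME TO dummy;

-- recreate the view against the new table
CREATE VIEW my_view AS
SELECT id, account_id, created_at
FROM dummy;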
You might think that this is bad, but it's actually a good feature, because renaming a table will not break any views. The view automatically stays with the correct table.
UPDATE:
Based on Joe's comment below, the flow would be:
CREATE VIEW ... WITH NO SCHEMA BINDING
Then, for each reload:
Create new_dummy and load data
Drop dummy
Rename new_dummy to dummy
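Sketched out with the same hypothetical view: the view is created once as a late-binding view (Redshift requires a schema-qualified table reference with WITH NO SCHEMA BINDING, hence public.dummy), and only the table is swapped on each reload.
-- one-time setup: late-binding view survives drops/renames of dummy
CREATE VIEW my_view AS
SELECT id, account_id, created_at
FROM public.dummy
WITH NO SCHEMA BINDING;

-- each reload:
CREATE TABLE new_dummy
DISTKEY (id)
SORTKEY (account_id, created_at)
AS (SELECT * FROM dummy);
DROP TABLE dummy;
ALTER TABLE new_dummy RENAME TO dummy;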
This answer is based upon the fact that you have foreign key references within the table definition, which are not compatible with the process of renaming and dropping tables.
Given this situation, I would recommend that you load data as follows:
Start a transaction
DELETE FROM the table
Load data with INSERT INTO
End the transaction
This means you are totally reloading the contents of the table. Wrapping it in a transaction means that there is no period where the table will appear 'empty'.
However, this leaves the table in a bit of a messy state, requiring a VACUUM to delete the old data.
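A minimal sketch of that reload, assuming a hypothetical staging table staging_dummy as the data source:
BEGIN;
DELETE FROM dummy;                              -- rows are only marked deleted
INSERT INTO dummy SELECT * FROM staging_dummy;
COMMIT;
VACUUM dummy;                                   -- reclaim the space afterwards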
Alternatively, you could:
TRUNCATE the table
Load data with INSERT INTO
TRUNCATE does not require a cleanup since it clears all data associated with the table (not just marking it for deletion). However, TRUNCATE immediately commits the transaction, so there will be a gap where the table will be empty.
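The TRUNCATE variant, again with the hypothetical staging_dummy:
TRUNCATE dummy;                                 -- commits immediately, so the table is briefly empty
INSERT INTO dummy SELECT * FROM staging_dummy;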

Insert CSV file values into a table in APEX

I am trying to insert bulk values into a table from an Excel .csv file.
I have created a file browse item on the page; now, in the process, I have to write insert code to load the Excel values into the table.
I have created the following table: NON_DYNAMIC_USER_GROUPS
Columns: ID, NAME, GROUP, GROUP_TYPE.
I need to create the insert process code for this.
I prefer the Excel2Collection plugin for converting any form of Excel document into rows in an Oracle table.
http://www.apex-plugin.com/oracle-apex-plugins/process-type-plugin/excel2collections_271.html
The PL/SQL is already written and packaged as an APEX plugin, making it easy to use.
It is possible to uncompress the code and convert it to use your own table instead of apex_collections, which are limited to 50 columns/fields.
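For reference, a minimal sketch of the insert step once the plugin has loaded the file into a collection. The collection name EXCEL_DATA and the c001..c004 column mapping are assumptions you configure in the plugin, and note that GROUP is an Oracle reserved word, so that column must be a quoted identifier:
BEGIN
  -- assumes the plugin loaded the CSV rows into collection 'EXCEL_DATA'
  INSERT INTO non_dynamic_user_groups (id, name, "GROUP", group_type)
  SELECT c001, c002, c003, c004
  FROM   apex_collections
  WHERE  collection_name = 'EXCEL_DATA';
END;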

Select for Teradata Primary/Foreign key relationship

I'm in the process of learning to properly pull appropriate metadata from a Teradata database, and a large part of what I need is to pull all existing primary/foreign keys within a database. I am still very much a beginner with Teradata, as well as big data in general, so a simplified explanation would be nice.
A simplified version of a select statement would also be incredibly helpful. Thanks in advance.
Foreign Keys: dbc.All_RI_ParentsV[X]
PK/Unique: dbc.IndicesV[X]. Unique indexes have UniqueFlag = 'Y'; if the index was defined as a PK in the CREATE TABLE, IndexType will be 'P'. Multi-column indexes have one row per column, all sharing the same IndexNumber; IndexNumber 1 is always the PI.
But as Teradata is a DWH, you might have tables without a defined PK, and you will hardly find any defined FKs.
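As a starting point, two example queries against those dictionary views; the database name your_database is a placeholder:
-- foreign keys: one row per parent/child column pair
SELECT ChildDB, ChildTable, ChildKeyColumn,
       ParentDB, ParentTable, ParentKeyColumn
FROM   dbc.All_RI_ParentsV
WHERE  ChildDB = 'your_database';

-- PK/unique indexes: one row per column of each index
SELECT DatabaseName, TableName, IndexNumber, IndexType,
       UniqueFlag, ColumnName, ColumnPosition
FROM   dbc.IndicesV
WHERE  DatabaseName = 'your_database'
AND    UniqueFlag = 'Y'
ORDER  BY TableName, IndexNumber, ColumnPosition;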