Updating Records Via RowVersion, using SQL WHERE to filter for a MAX Value

I'm trying to update a table based on a RowVersion value in an existing table. My data lake updates once a week, with new data stored as a .json file, which holds any new RowVersions.
I need to:
1) Query the existing table in my data warehouse to find the most up-to-date RowVersion (i.e. the max).
2) Use that value to filter/select only the incoming records (from the S3 bucket) whose RowVersion is greater than the one I just identified.
3) Update my table to include the new rows.
My question is about the SQL below: I am not sure how to select the max RowVersion in the current table and then use it to filter what is returned when querying my S3 bucket:
create or replace temporary table UPDATE_CAR_SALES AS
SELECT
VALUE:CAR::string AS CARS,
VALUE:RowVersion::INT AS ROW_VERSION
having row_version > max(row_version)
from '#s3_bucket',
lateral flatten( input => $1:value);

It's not clear to me how you store the data. Is the CARS column unique? Do you need to find the maximum row version for each car or for all cars/rows? Anyway, you can use a sub-query to filter the rows whose row version is higher than the max value:
create or replace temporary table UPDATE_CAR_SALES AS
SELECT
VALUE:CAR::string AS CARS,
VALUE:RowVersion::INT AS ROW_VERSION
FROM #s3_bucket, lateral flatten( input => $1 )
where ROW_VERSION > (SELECT MAX(RowVersion)
from MAIN_TABLE);
If you need to filter the rows based on the row version of each car (in the existing table):
create or replace temporary table UPDATE_CAR_SALES AS
SELECT * FROM (SELECT
VALUE:CAR::string AS CARS,
VALUE:RowVersion::INT AS ROW_VERSION
FROM #s3_bucket, lateral flatten( input => $1 )) temp_table
where temp_table.ROW_VERSION > (SELECT MAX(RowVersion)
from MAIN_TABLE where cars = temp_table.CARS );
I needed to put the main query in brackets to be able to use the alias. Hope it helps.
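Neither query above covers step 3 (adding the new rows to the warehouse table). A minimal sketch, assuming the target is the MAIN_TABLE referenced above and that its columns are CARS and ROWVERSION, could be:
INSERT INTO MAIN_TABLE (CARS, ROWVERSION)
SELECT CARS, ROW_VERSION
FROM UPDATE_CAR_SALES;
If the same car can reappear with a higher row version and should be overwritten rather than appended, a MERGE on CARS would be the alternative.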

Related

Referencing single value of another column in Power Query Editor

I would like to get a single value from "table2.MappedValue" for every record in table1 in the Power Query Editor.
I have two tables that have a many-to-one relationship; table2 is just a mapping table:
table1: ID | Values
table2: ID | MappedValue
When I try Table.Column(#"table2","MappedValue"), I get a list and not a single value.
I can do that from Table tools -> New Column, but I was wondering if that is possible in the Power Query Editor.
You can do this by merging queries. In the Query Editor, go to the Home tab, select table1, click Merge Queries, and merge with table2. The next step is to expand the new column by clicking the double arrow in its header and selecting the column you want.
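If you prefer to write it yourself in the Advanced Editor, a rough M sketch of the same merge-and-expand (assuming the queries are named table1 and table2 and both have an ID column) would be:
let
    Merged = Table.NestedJoin(table1, {"ID"}, table2, {"ID"}, "table2", JoinKind.LeftOuter),
    Expanded = Table.ExpandTableColumn(Merged, "table2", {"MappedValue"}, {"MappedValue"})
in
    Expanded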

Add column with hardcoded values in PowerBI

It's probably extremely simple but I can't find an answer. I have created a new column and I would like to use the DAX syntax to fill the column with hardcoded values.
I can write this: Column = 10 and I will get a column of 10s, but let's say my table has 3 rows and I would like to insert a column with [10, 17, 155]. How can I do that?
Try using the DATATABLE function:
Table = DATATABLE("Column Name",INTEGER,{{10},{17},{155}})
You can also add more columns with their own data if you want; check this:
https://learn.microsoft.com/en-us/dax/datatable-function
Assuming your table has a primary key column, say, ID, you could create a new table with just the column you want to manually input.
ID Value
---------
1 10
2 17
3 155
You can create this table either through the Enter Data button or create it using the DAX DATATABLE function as #Deltapimol suggests.
Once you have this table, you can create a relationship to your existing table in the data model, at which point you can either use this new table in your report to get the values you need or, if you really need them in the existing table for some reason, pull them over using the RELATED function in a calculated column.
Table1 = GENERATESERIES(1, 3)
Table2 = DATATABLE(
"ID", INTEGER,
"Value" INTEGER,
{{1, 10},{2, 17},{3, 155}}
)
Now you can create a relationship from Table1 to Table2[ID] and then define a calculated column on Table1 as follows:
ValueFromTable2 = RELATED(Table2[Value])
If you don't want to create a relationship, then you could use the LOOKUPVALUE function instead in a calculated column on Table1.
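A sketch of that, using the Table1/Table2 definitions above (GENERATESERIES names its output column Value, which serves as the ID here):
ValueFromTable2 = LOOKUPVALUE(Table2[Value], Table2[ID], Table1[Value])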

Calculated Column to get average from values in another table in PowerBI

This might be very basic but I am new to PowerBI.
How do I get the average of values for each unique ID into another table?
For example, my Table 1 has multiple rows per ID. I have created another table of unique IDs which I am planning to use to join the other table.
I want a calculated column in Table 2 which will give me the average value of the respective ID from Table 1.
How do I get a calculated column like the one shown below?
Instead of creating a new table with the averages per ID and then joining on that, you could also do it directly with a calculated column using the following DAX expression:
Average by ID = CALCULATE(AVERAGE('Table 1'[Values]),ALLEXCEPT('Table 1','Table 1'[ID]))
Not exactly what you asked for, but maybe it's useful anyway.
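If you do want the column on the unique-ID table itself, a sketch, assuming a many-to-one relationship already exists from 'Table 1'[ID] to the unique-ID table, would be a calculated column on that table:
Average by ID = AVERAGEX(RELATEDTABLE('Table 1'), 'Table 1'[Values])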
How's it going?
The quickest way I can think of doing this would be to use
SUMMARIZECOLUMNS
You can accomplish this by creating another table based on your initial fact table like so:
Table 2 =
SUMMARIZECOLUMNS ( 'Table 1'[ID], "Avg", AVERAGE ( 'Table 1'[Values] ) )
Once this table has been created, you can create a relationship.
This will work in either SSAS or in PowerBI directly.
Hope this helps!! Have a good one!!

Is it possible to remove a column from a partitioned table in Google BigQuery?

I'm trying to remove a column from a partitioned table in BigQuery using this command
bq query --destination_table [DATASET].[TABLE_NAME] --replace --use_legacy_sql=false 'SELECT * EXCEPT(column) FROM [DATASET].[TABLE_NAME]'
As a result, the unwanted column is removed and the schema is changed, but the data is no longer partitioned.
Any suggestions on how to keep the data partitioned after the column is removed? The docs are clear only for non-partitioned tables.
There are two workarounds you can use:
Use a column-partitioned table, which means it's partitioned on the value of a regular column in the table. You can create a new column-partitioned table and copy the data, deleting the column:
bq mk --time_partitioning_field=pt --schema=... [DATASET].[TABLE_NAME2]
bq query --destination_table=[DATASET].[TABLE_NAME2] "SELECT _PARTITIONTIME as pt, * EXCEPT(column) from [DATASET].[TABLE_NAME]"
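If standard SQL DDL is an option, the same copy can be expressed as a partitioned CREATE TABLE ... AS SELECT. A sketch (table names are placeholders; pt becomes the partitioning column):
CREATE TABLE mydataset.mytable_new
PARTITION BY DATE(pt)
AS
SELECT _PARTITIONTIME AS pt, * EXCEPT(column)
FROM mydataset.mytable;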
You can also still use day-partitioned tables, but copy the data using DML. You can set or copy the _PARTITIONTIME column inside the DML INSERT statement, which is not possible with a regular SELECT. Here is an example:
INSERT INTO dataset1.table1 (_partitiontime, a, b)
SELECT
  TIMESTAMP(DATE "2008-12-25") AS _partitiontime,
  "a" AS a,
  "b" AS b
This requires DML over partitioned tables, which is currently in alpha: https://issuetracker.google.com/issues/36383555
BigQuery now supports DROP COLUMN in partitioned tables:
ALTER TABLE mydataset.mytable
DROP COLUMN column
It's in beta at the time of writing, but it worked for me.

Alter column data type in Amazon Redshift

How do I alter a column's data type in an Amazon Redshift database?
I am not able to alter the column data type in Redshift; is there any way to modify the data type in Amazon Redshift?
As noted in the ALTER TABLE documentation, you can change the length of VARCHAR columns using
ALTER TABLE table_name
{
ALTER COLUMN column_name TYPE new_data_type
}
For other column types, all I can think of is to add a new column with the correct data type, copy all the data from the old column to the new one, and finally drop the old column.
Use code similar to this:
ALTER TABLE t1 ADD COLUMN new_column ___correct_column_type___;
UPDATE t1 SET new_column = column;
ALTER TABLE t1 DROP COLUMN column;
ALTER TABLE t1 RENAME COLUMN new_column TO column;
There will be a schema change: the newly added column will be last in the table (that may be a problem with the COPY statement; keep in mind that you can define the column order in COPY).
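For example (a sketch with hypothetical column names, bucket path, and IAM role), listing the columns explicitly in COPY makes the load independent of the table's physical column order:
COPY t1 (id, amount, created_at)
FROM 's3://my-bucket/data/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV;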
To avoid the schema change mentioned by Tomasz:
BEGIN TRANSACTION;
ALTER TABLE <TABLE_NAME> RENAME TO <TABLE_NAME>_OLD;
CREATE TABLE <TABLE_NAME> ( <NEW_COLUMN_DEFINITION> );
INSERT INTO <TABLE_NAME> (<NEW_COLUMN_DEFINITION>)
SELECT <COLUMNS>
FROM <TABLE_NAME>_OLD;
DROP TABLE <TABLE_NAME>_OLD;
END TRANSACTION;
(Recent update) It's possible to alter the type for varchar columns in Redshift.
ALTER COLUMN column_name TYPE new_data_type
Example:
CREATE TABLE t1 (c1 varchar(100))
ALTER TABLE t1 ALTER COLUMN c1 TYPE varchar(200)
See the ALTER TABLE documentation for details.
If you don't want to change the column order, an option is to create a temp table, drop and re-create the original one with the desired size, and then bulk load the data again.
CREATE TEMP TABLE temp_table AS SELECT * FROM original_table;
DROP TABLE original_table;
CREATE TABLE original_table ...
INSERT INTO original_table SELECT * FROM temp_table;
The only problem with recreating the table is that you will need to grant permissions again, and if the table is very big it will take some time.
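For example, a hypothetical re-grant after the rebuild (the group name is made up):
GRANT SELECT, INSERT, UPDATE, DELETE ON original_table TO GROUP reporting_users;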
ALTER TABLE publisher_catalogs ADD COLUMN new_version integer;
update publisher_catalogs set new_version = CAST(version AS integer);
ALTER TABLE publisher_catalogs DROP COLUMN version RESTRICT;
ALTER TABLE publisher_catalogs RENAME COLUMN new_version TO version;
Redshift, being a columnar database, doesn't allow you to modify the data type directly; however, below is one approach, and it will change the column order.
Steps -
1. Alter the table to add newcolumn.
2. Update newcolumn with the oldcolumn value.
3. Alter the table to drop oldcolumn.
4. Alter the table to rename newcolumn to oldcolumn.
If you don't want to alter the order of the columns, then the solution would be to:
1. Create a temp table with the new column definition.
2. Copy the data from the old table to the new table.
3. Drop the old table.
4. Rename the new table to the old table's name.
One important thing: create the new table using the LIKE command instead of a simple CREATE.
This method works for converting an int (or bigint) column into a varchar:
-- Create a backup of the original table
create table original_table_backup as select * from original_table;
-- Drop the original table, and then recreate with new desired data types
drop table original_table;
create table original_table (
col1 bigint,
col2 varchar(20) -- changed from bigint
);
-- insert original entries back into the new table
insert into original_table select * from original_table_backup;
-- cleanup
drop table original_table_backup;
You can use the statements below:
ALTER TABLE <table_name>                        -- e.g. etl_proj_atm.dim_card_type
ALTER COLUMN <column_name> TYPE varchar(30)     -- e.g. card_type
UNLOAD and COPY with a table-rename strategy should be the most efficient way to do this operation if retaining the table structure (row order) is important.
Here is an example adding to this answer.
BEGIN TRANSACTION;
ALTER TABLE <TABLE_NAME> RENAME TO <TABLE_NAME>_OLD;
CREATE TABLE <TABLE_NAME> ( <NEW_COLUMN_DEFINITION> );
UNLOAD ('select * from <TABLE_NAME>_OLD') TO 's3://bucket/key/unload_' IAM_ROLE '<IAM_ROLE_ARN>' manifest;
COPY <TABLE_NAME> FROM 's3://bucket/key/unload_manifest' IAM_ROLE '<IAM_ROLE_ARN>' manifest;
END TRANSACTION;
For updating values in the same column in Redshift, this will work fine:
UPDATE table_name
SET column_name = 'new_value' WHERE column_name = 'old_value';
You can have multiple clauses in the WHERE by combining them with AND, to remove any ambiguity for SQL.
Cheers!!