Import range with 4 conditions - IF statement

I created a dashboard with two drop-down lists (Region and Business Line). Each drop-down menu has an "All" option to select all options.
Menu 1 - Region : Asia, DACH, North America, All
Menu 2 - Business line: CROF, CROC, Communication, All
Menu 3 - Date filter
- Condition 1: When Menu 1 is "All", import the range for all regions and one of the Menu 2 options
- Condition 2: When Menu 2 is "All", import the range for all business lines and one of the Menu 1 options
- Condition 3: When Menu 1 and Menu 2 are both "All", import the range based only on the date filter
- Condition 4: When Menu 1 and Menu 2 are both set to individual options
I tried this code, but it doesn't work; it only shows me the first row of data:
=IFS(B3="'All'", QUERY(IMPORTRANGE("14dZCHHnIWNgnE7aufM6Uv7XZkYjViKiVN2J2NLAsIjg", "'Main Calendar'!A:A1000"),"Select Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col19, Col32 where Col2>=date'"&TEXT(B5,"yyyy-mm-dd")&"' AND Col3<=date'"&TEXT(B6,"yyyy-mm-dd")&"' AND Col10='"&B4&"'"),
B4="'All'", QUERY(IMPORTRANGE("14dZCHHnIWNgnE7aufM6Uv7XZkYjViKiVN2J2NLAsIjg", "'Main Calendar'!A:AG"),"Select Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col19, Col32 where Col2>=date'"&TEXT(B5,"yyyy-mm-dd")&"' AND Col3<=date'"&TEXT(B6,"yyyy-mm-dd")&"' AND Col32='"&B3&"'"),
AND(B3="All",B4="All"),QUERY(IMPORTRANGE("14dZCHHnIWNgnE7aufM6Uv7XZkYjViKiVN2J2NLAsIjg", "'Main Calendar'!A:AG"),"Select Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col19, Col32 where Col2>=date'"&TEXT(B5,"yyyy-mm-dd")&"' AND Col3<=date'"&TEXT(B6,"yyyy-mm-dd")&"'"),
TRUE,
QUERY(IMPORTRANGE("14dZCHHnIWNgnE7aufM6Uv7XZkYjViKiVN2J2NLAsIjg", "'Main Calendar'!A:AG"),"Select Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col19, Col32 where Col2>=date'"&TEXT(B5,"yyyy-mm-dd")&"' AND Col3<=date'"&TEXT(B6,"yyyy-mm-dd")&"' AND Col32='"&B3&"' AND Col10='"&B4&"'"))
This is the dashboard that I expected:
Your help is valuable, thanks a lot!
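A note on the likely culprit: IFS returns the result for the first condition that is TRUE, checking top-down, so when both menus are "All", the B3="All" branch fires and the AND(B3="All", B4="All") branch is never reached. That branch needs to come first. A small Python sketch of the same top-down logic (branch names are illustrative):

```python
# Python sketch of IFS semantics: conditions are evaluated top-down and the
# first match wins, so an earlier broad branch shadows a later specific one.
def pick_branch(region: str, business_line: str) -> str:
    if region == "All":                                  # Condition 1
        return "all-regions"
    if business_line == "All":                           # Condition 2
        return "all-business-lines"
    if region == "All" and business_line == "All":       # Condition 3: never reached
        return "date-filter-only"
    return "individual"                                  # Condition 4 (TRUE fallback)

# With both menus set to "All", the combined branch is never reached:
print(pick_branch("All", "All"))  # prints "all-regions", not "date-filter-only"
```

Two smaller issues in the formula are also worth fixing: the comparisons B3="'All'" and B4="'All'" test against the literal text 'All' including the quotes, and the first branch imports the single-column range 'Main Calendar'!A:A1000, which leaves Col2 through Col32 with nothing to select.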


Google sheet query with importrange and different IF statements

In order to gather data from different sheets, I would like to create graphics.
But to do that, I want a QUERY statement that pulls from other sheets depending on two things.
Here is the formula I tried, but it didn't work:
=IF($D$13="name_1";
{QUERY(
IMPORTRANGE("url_1";"sheet_name!range");
"SELECT Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13 WHERE Col1 ='" & $D$10& "'");
"ERROR"});
IF($D$13="name_2";
{QUERY(
IMPORTRANGE("url_2";"sheet_name!range");
"SELECT Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13 WHERE Col1 ='" & $D$10& "'");
"ERROR")};
But this is not working.
I want that, depending on the name in cell D13, the query below displays the corresponding data.
So if "name_1" is in cell D13, it shows the data from that name's sheet.
Cell D13: Name_1
Cell D10: Datas
That will display the data in a table:

Month    | Data 1 | Data 2 | ... | Data 12
---------+--------+--------+-----+--------
January  |     12 |     15 | ... |     321
February |     14 |    112 | ... |      45
If I choose another value in cell D10 and/or in cell D13, it will display other data from another sheet:

Month    | Data 1 | Data 2 | ... | Data 12
---------+--------+--------+-----+--------
January  |      2 |     47 | ... |      36
February |     54 |     85 | ... |       7
I tried with the IFS statement as well, but that didn't work either.
Thanks for your help!
You can add another WHERE condition with AND, for example like this:
=QUERY({IMPORTRANGE("url_2"; "sheet_name!range")};
"select Col2,Col3,Col4,Col5,Col6,Col7,Col8,Col9,Col10,Col11,Col12,Col13
where Col1 ='"&$D$10&"'
and Col2 ='"&$D$13&"'")

Django ORM make Union Query with column not in common in both tables, set value of not in common column as null

Hi, I want to make a query in the Django ORM
like this:
Select Col1, Col2, Col3, Col4, Col5 from Table1
Union
Select Col1, Col2, Col3, Null as Col4, Null as Col5 from Table2
As you can see, Col4 and Col5 are not common to both tables, but Table2 should return NULL for them instead.
Table1_qs = Table1.objects.all()
Table2_qs = Table2.objects.all()
Table1_qs.values('Col1', 'Col2','Col3','Col4','Col5').union(Table2_qs.values('Col1', 'Col2','Col3','Null as Col4','Null as Col5'))
How can I make this query in Django?
The solution is made possible by Value and annotate. Here is how.
Let's say Col4 is an IntegerField and Col5 is a CharField:
from django.db.models import Value, IntegerField, CharField

Table1_qs = Table1.objects.all()
Table2_qs = Table2.objects.all()

Table1_qs = Table1_qs.values('Col1', 'Col2', 'Col3', 'Col4', 'Col5')
Table2_qs = Table2_qs.values('Col1', 'Col2', 'Col3').annotate(
    Col4=Value(None, output_field=IntegerField()),
    Col5=Value(None, output_field=CharField()),
)

unioned_query = Table1_qs.union(Table2_qs)
Please note:
1. The column types must match between the two querysets.
2. The columns must be in the same order as well.
A problem arises with foreign keys: only their id (primary key) is returned when you use values() on a queryset.
I hope Django adds a way to get them back as full objects too.
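The SQL this union is meant to produce can be sanity-checked outside the ORM; here is a minimal sqlite3 sketch (table and column names follow the question and are otherwise illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Table1 (Col1 INT, Col2 TEXT, Col3 TEXT, Col4 INT, Col5 TEXT)")
cur.execute("CREATE TABLE Table2 (Col1 INT, Col2 TEXT, Col3 TEXT)")
cur.execute("INSERT INTO Table1 VALUES (1, 'a', 'b', 10, 'x')")
cur.execute("INSERT INTO Table2 VALUES (2, 'c', 'd')")

# Pad Table2 with NULL literals so both sides of the UNION have the
# same number (and order) of columns.
rows = cur.execute("""
    SELECT Col1, Col2, Col3, Col4, Col5 FROM Table1
    UNION
    SELECT Col1, Col2, Col3, NULL AS Col4, NULL AS Col5 FROM Table2
    ORDER BY Col1
""").fetchall()
print(rows)  # -> [(1, 'a', 'b', 10, 'x'), (2, 'c', 'd', None, None)]
```

The annotate(Value(...)) trick above is the ORM's way of producing exactly those NULL AS Col4 / NULL AS Col5 literals.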

How to do an importrange query with a time duration condition (all rows under 1 minute)

I'm trying to make a QUERY(IMPORTRANGE(...)) of this google-sheet file.
I want to filter my data based on 3 conditions:
Col5='GC' OR
Col5='CL', AND (this is the problem I cannot solve)
in Col4 the time must be under 60 seconds.
I've tried different approaches (time, seconds, timevalue), but none of them works.
I tried this, but it's WITHOUT the last, crucial condition:
=query(IMPORTRANGE("18OOzibH9rmuzNxOPo_EbZ1rhF32qESuvPa4x4pB1BmA/edit#gid=0",
"data!A1:Q"),
"select Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13, Col14, Col15, Col16, Col17, WHERE (Col5='GC') OR (Col5='CL')"
)
The result I'm expecting is to have only the rows with GC or CL in Col5 and a duration under 60 seconds.
=QUERY(IMPORTRANGE("18OOzibH9rmuzNxOPo_EbZ1rhF32qESuvPa4x4pB1BmA", "data!A1:Q"),
"where Col5 matches 'CL|GC'
and minute(Col4) < 1", 1)
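The minute(Col4) < 1 test works because any duration under 60 seconds has a minute component of 0. A rough Python analogue of the combined filter, assuming durations arrive as "m:ss" text (an assumption; the real sheet may store true duration values):

```python
# Rough analogue of the QUERY filter: keep GC/CL rows whose duration
# is under 60 seconds. Assumes "m:ss" strings, which is an assumption.
rows = [
    ("GC", "0:45"),
    ("CL", "1:10"),
    ("CL", "0:59"),
    ("XX", "0:30"),
]

def duration_seconds(m_ss: str) -> int:
    minutes, seconds = m_ss.split(":")
    return int(minutes) * 60 + int(seconds)

filtered = [
    (code, dur) for code, dur in rows
    if code in ("GC", "CL") and duration_seconds(dur) < 60
]
print(filtered)  # -> [('GC', '0:45'), ('CL', '0:59')]
```

Note how both conditions are combined with a single AND, mirroring the matches 'CL|GC' plus minute(Col4) < 1 pair in the formula.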

How the pricing is calculated If I query a Bigquery View?

Say I have a BigQuery View
MyView:
select col1, col2, col3, col4, col5, col6, col7 from mytable;
Now if I query my view:
select col1 from MyView;
In this case, will the pricing be calculated for all columns or only for col1?
Will the pricing be calculated for all columns or only for col1?
Only for col1!
You can easily check this in the UI by comparing the estimate of how many bytes will be processed for
select col1 from MyView
vs
select * from MyView
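This is because on-demand BigQuery pricing is columnar: you are billed for the bytes read from the columns a query actually references, and a view is an inlined query rather than a materialized copy of the table. A back-of-the-envelope sketch with made-up column sizes:

```python
# Hypothetical per-column sizes (in bytes) for "mytable"; the figures are
# made up. On-demand BigQuery billing counts only the bytes in the
# columns a query references, whether or not it goes through a view.
column_bytes = {"col1": 4_000_000, "col2": 9_000_000, "col3": 2_000_000,
                "col4": 1_000_000, "col5": 5_000_000, "col6": 3_000_000,
                "col7": 6_000_000}

def bytes_billed(referenced_columns) -> int:
    return sum(column_bytes[c] for c in referenced_columns)

# "select col1 from MyView" only touches col1 of the underlying table:
print(bytes_billed(["col1"]))      # 4,000,000 bytes
# "select * from MyView" reads every column the view exposes:
print(bytes_billed(column_bytes))  # 30,000,000 bytes
```

In the real console, the bytes-processed estimate shown before you run the query reflects exactly this difference.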

how to find size of database, schema, table in redshift

Team,
my Redshift version is:
PostgreSQL 8.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3), Redshift 1.0.735
How do I find out the database size, tablespace, schema size & table size?
The queries below are not working in Redshift (for the above version):
SELECT pg_database_size('db_name');
SELECT pg_size_pretty( pg_relation_size('table_name') );
Is there an alternative, like Oracle's DBA_SEGMENTS?
For table size, I have the query below, but I'm not sure about the exact meaning of MBYTES. For the 3rd row, MBYTES = 372. Does that mean 372 MB?
select trim(pgdb.datname) as Database, trim(pgn.nspname) as Schema,
trim(a.name) as Table, b.mbytes, a.rows
from ( select db_id, id, name, sum(rows) as rows from stv_tbl_perm a group by db_id, id, name ) as a
join pg_class as pgc on pgc.oid = a.id
join pg_namespace as pgn on pgn.oid = pgc.relnamespace
join pg_database as pgdb on pgdb.oid = a.db_id
join (select tbl, count(*) as mbytes
from stv_blocklist group by tbl) b on a.id=b.tbl
order by a.db_id, a.name;
database | schema | table | mbytes | rows
---------------+--------------+------------------+--------+----------
postgres | public | company | 8 | 1
postgres | public | table_data1_1 | 7 | 1
postgres | proj_schema1 | table_data1 | 372 | 33867540
postgres | public | table_data1_2 | 40 | 2000001
(4 rows)
The above answers don't always give correct figures for the table space used. AWS support has provided this query to use:
SELECT TRIM(pgdb.datname) AS Database,
TRIM(a.name) AS Table,
((b.mbytes/part.total::decimal)*100)::decimal(5,2) AS pct_of_total,
b.mbytes,
b.unsorted_mbytes
FROM stv_tbl_perm a
JOIN pg_database AS pgdb
ON pgdb.oid = a.db_id
JOIN ( SELECT tbl,
SUM( DECODE(unsorted, 1, 1, 0)) AS unsorted_mbytes,
COUNT(*) AS mbytes
FROM stv_blocklist
GROUP BY tbl ) AS b
ON a.id = b.tbl
JOIN ( SELECT SUM(capacity) AS total
FROM stv_partitions
WHERE part_begin = 0 ) AS part
ON 1 = 1
WHERE a.slice = 0
ORDER BY 4 desc, db_id, name;
Yes, mbytes in your example is 372 MB. Here's what I've been using:
select
cast(use2.usename as varchar(50)) as owner,
pgc.oid,
trim(pgdb.datname) as Database,
trim(pgn.nspname) as Schema,
trim(a.name) as Table,
b.mbytes,
a.rows
from
(select db_id, id, name, sum(rows) as rows
from stv_tbl_perm a
group by db_id, id, name
) as a
join pg_class as pgc on pgc.oid = a.id
left join pg_user use2 on (pgc.relowner = use2.usesysid)
join pg_namespace as pgn on pgn.oid = pgc.relnamespace
and pgn.nspowner > 1
join pg_database as pgdb on pgdb.oid = a.db_id
join
(select tbl, count(*) as mbytes
from stv_blocklist
group by tbl
) b on a.id = b.tbl
order by mbytes desc, a.db_id, a.name;
I'm not sure about grouping by database and schema, but here's a short way to get usage by table:
SELECT b.tbl, p.name, b.size_mb
FROM
(
    SELECT tbl, COUNT(*) AS size_mb
    FROM stv_blocklist
    GROUP BY tbl
) AS b
LEFT JOIN
(SELECT DISTINCT id, name FROM stv_tbl_perm) AS p
ON p.id = b.tbl
ORDER BY b.size_mb DESC
LIMIT 10;
You can check out this repository; I'm sure you'll find useful stuff there:
https://github.com/awslabs/amazon-redshift-utils
To answer your question, you can use this view:
https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_space_used_per_tbl.sql
and then query it as you like,
e.g.: select * from admin.v_space_used_per_tbl;
Modified versions of one of the other answers. These include database name, schema name, table name, total row count, size on disk, and unsorted size:
-- sort by row count
select trim(pgdb.datname) as Database, trim(pgns.nspname) as Schema, trim(a.name) as Table,
c.rows, ((b.mbytes/part.total::decimal)*100)::decimal(5,3) as pct_of_total, b.mbytes, b.unsorted_mbytes
from stv_tbl_perm a
join pg_class as pgtbl on pgtbl.oid = a.id
join pg_namespace as pgns on pgns.oid = pgtbl.relnamespace
join pg_database as pgdb on pgdb.oid = a.db_id
join (select tbl, sum(decode(unsorted, 1, 1, 0)) as unsorted_mbytes, count(*) as mbytes from stv_blocklist group by tbl) b on a.id=b.tbl
join (select id, sum(rows) as rows from stv_tbl_perm group by id) c on a.id=c.id
join (select sum(capacity) as total from stv_partitions where part_begin=0) as part on 1=1
where a.slice=0
order by 4 desc, db_id, name;
-- sort by space used
select trim(pgdb.datname) as Database, trim(pgns.nspname) as Schema, trim(a.name) as Table,
c.rows, ((b.mbytes/part.total::decimal)*100)::decimal(5,3) as pct_of_total, b.mbytes, b.unsorted_mbytes
from stv_tbl_perm a
join pg_class as pgtbl on pgtbl.oid = a.id
join pg_namespace as pgns on pgns.oid = pgtbl.relnamespace
join pg_database as pgdb on pgdb.oid = a.db_id
join (select tbl, sum(decode(unsorted, 1, 1, 0)) as unsorted_mbytes, count(*) as mbytes from stv_blocklist group by tbl) b on a.id=b.tbl
join (select id, sum(rows) as rows from stv_tbl_perm group by id) c on a.id=c.id
join (select sum(capacity) as total from stv_partitions where part_begin=0) as part on 1=1
where a.slice=0
order by 6 desc, db_id, name;
This query is much easier:
-- List the Top 30 largest tables on your cluster
SELECT
"schema"
,"table" AS table_name
,ROUND((size/1024.0),2) AS "Size in Gigabytes"
,pct_used AS "Physical Disk Used by This Table"
FROM svv_table_info
ORDER BY pct_used DESC
LIMIT 30;
SVV_TABLE_INFO is a Redshift system table that shows information about user-defined tables (not other system tables) in a Redshift database. The table is only visible to superusers.
To get the size of each table, run the following command on your Redshift cluster:
SELECT "table", size, tbl_rows
FROM SVV_TABLE_INFO
The table column is the table name.
The size column is the size of the table in MB.
The tbl_rows column is the total number of rows in the table, including rows that have been marked for deletion but not yet vacuumed.
Source
Look at SVV_TABLE_INFO Redshift documentation for other interesting columns to retrieve from this system table.
This is what I am using (please change the database name from 'mydb' to your database name):
SELECT CAST(use2.usename AS VARCHAR(50)) AS OWNER
,TRIM(pgdb.datname) AS DATABASE
,TRIM(pgn.nspname) AS SCHEMA
,TRIM(a.NAME) AS TABLE
,(b.mbytes) / 1024 AS Gigabytes
,a.ROWS
FROM (
SELECT db_id
,id
,NAME
,SUM(ROWS) AS ROWS
FROM stv_tbl_perm a
GROUP BY db_id
,id
,NAME
) AS a
JOIN pg_class AS pgc ON pgc.oid = a.id
LEFT JOIN pg_user use2 ON (pgc.relowner = use2.usesysid)
JOIN pg_namespace AS pgn ON pgn.oid = pgc.relnamespace
AND pgn.nspowner > 1
JOIN pg_database AS pgdb ON pgdb.oid = a.db_id
JOIN (
SELECT tbl
,COUNT(*) AS mbytes
FROM stv_blocklist
GROUP BY tbl
) b ON a.id = b.tbl
WHERE pgdb.datname = 'mydb'
ORDER BY mbytes DESC
,a.db_id
,a.NAME;
src: https://aboutdatabases.wordpress.com/2015/01/24/amazon-redshift-how-to-get-the-sizes-of-all-tables/