Big query "SELECT * EXCEPT" - google-cloud-platform

I am learning how to query BigQuery (especially STRUCTs and ARRAYs).
I have a table structured as in the reference below:
Table Name: Addresses
Name (STRING), Age (INT64), Address (RECORD, REPEATED)
The columns within Address are: address1, address2, city, zipcode
Question:
How do I select all columns except zipcode?
I tried querying as follows
SELECT
* EXCEPT(zipcode)
FROM Addresses, UNNEST(address)
The above query retrieves the address record column twice.
Also, the subsequent SELECT that runs is as follows:
SELECT
Name,
Age,
Address
from temp
The address should have all columns except zipcode.

If you explode your address array, you need to rebuild it. I'm no expert in SQL, but this code works:
select name,
age,
ARRAY_AGG(STRUCT(a.address1 as address1, a.address2 as address2, a.city as city)) as address
from Addresses, unnest(address) a
group by name, age
You can't use * except here because you need to group by your keys to rebuild the array with ARRAY_AGG.

Consider the approach below - it leaves all columns as is, except address, where zipcode is removed:
select * except(address),
array(select as struct * except(zipcode) from t.address) address
from Addresses t
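For intuition, the rebuild logic from the second answer can be mirrored on plain Python data. This is only an illustration: the row shape and field names follow the question's Addresses table, and the sample values are invented.

```python
# Plain-Python mirror of the "rebuild the array without zipcode" idea:
# keep every top-level field, but rebuild the repeated address record
# with zipcode dropped. Sample data is invented.
rows = [
    {"name": "Ann", "age": 34, "address": [
        {"address1": "1 Main St", "address2": "", "city": "Perth", "zipcode": "6000"},
        {"address1": "9 High St", "address2": "", "city": "Perth", "zipcode": "6001"},
    ]},
]

def drop_zipcode(row):
    # Equivalent of: select * except(address),
    #                array(select as struct * except(zipcode) from t.address)
    out = {k: v for k, v in row.items() if k != "address"}
    out["address"] = [{k: v for k, v in a.items() if k != "zipcode"}
                      for a in row["address"]]
    return out
```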

Related

How to validate Location Column in Sharepoint list?

I need a conditional formula to validate data that is put in the location column to only accept valid street addresses. For instance, to prevent someone from putting an email address in the location column. The location column so far accepts the email address as a street address.
You can use the following column validation to reject values containing "#" (use your column's actual name in place of Location):
=ISERROR(FIND("#",[Location]))

How can I create an IR in Oracle APEX based on different tables and columns?

IR based on a PL/SQL Function Body returning a SQL Query.
How can I create Interactive Reports based on multiple tables with different column names?
Example:
A select list item returns one of three values:
1 or 2 or 3
and the function returns a query based on the select list value.
When the value equals 1:
Select name, state, country_id from cities
When the value equals 2, return:
Select country, id from country
When the value equals 3, return:
Select ocean, oc_id from oceans
The three queries return different column names and values.
OK, firstly, your question is poorly written. But from what I gather, you want an SQL query that returns different things based on an input.
I don't think you even need a PL/SQL function body for this.
Simply do something like this:
SELECT * FROM
(SELECT name as name,
state as state,
country_id as id,
1 as value
FROM cities
UNION ALL
SELECT country as name,
NULL as state,
id as id,
2 as value
FROM country
UNION ALL
SELECT ocean as name,
NULL as state,
oc_id as id,
3 as value
FROM oceans)
WHERE value = :input_parameter_value;
Because if you are trying to display a variable number of columns, constantly changing their names and so on, you are going to have a bad time. It can be done, as can everything, but as far as I know it's not exactly simple.
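The UNION ALL discriminator pattern above can be tried outside APEX. Below is a minimal sketch using Python's sqlite3 module (table and column names follow the question; the rows are invented, and the bind-variable syntax is ? instead of APEX's :input_parameter_value):

```python
import sqlite3

# Mock-up of the UNION ALL approach: three tables with different columns,
# padded with NULL so all legs share one column list, filtered by a
# discriminator that plays the role of the select-list value.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cities  (name TEXT, state TEXT, country_id INTEGER);
CREATE TABLE country (country TEXT, id INTEGER);
CREATE TABLE oceans  (ocean TEXT, oc_id INTEGER);
INSERT INTO cities  VALUES ('Rome', 'Lazio', 1);
INSERT INTO country VALUES ('Italy', 1);
INSERT INTO oceans  VALUES ('Atlantic', 7);
""")

QUERY = """
SELECT name, state, id FROM (
  SELECT name, state, country_id AS id, 1 AS value FROM cities
  UNION ALL
  SELECT country AS name, NULL AS state, id, 2 AS value FROM country
  UNION ALL
  SELECT ocean AS name, NULL AS state, oc_id AS id, 3 AS value FROM oceans
) WHERE value = ?
"""

def report_rows(choice):
    # choice plays the role of the select-list item (1, 2 or 3)
    return conn.execute(QUERY, (choice,)).fetchall()
```

Each branch pads its missing columns with NULL so the three legs line up, and the outer filter keeps only the leg matching the chosen value.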
No objections to what @TineO said in their answer; I'd probably do it that way.
Though, yet another option: if your Apex version allows it, you can create three Interactive Report regions on the same page, each selecting values from its own table, keeping its own column labels.
Then create a server-side condition for each region; its type would be "Function that returns a Boolean" and it would look like
return :P1_LIST_ITEM = 1;
for the 1st region; = 2 for the 2nd and = 3 for the 3rd.
When you run the page, nothing is displayed, as P1_LIST_ITEM has no value. Once you set it, one of the conditions is met and the appropriate region is displayed.

Caching a table in SQL server database

I have a data warehouse where a lot of values are stored as coded values. Coded columns store a numeric value that relates to a row on the CODE_VALUE table. The row on the CODE_VALUE table contains descriptive information for the code. For example, the ADDRESS table has an ADDRESS_TYPE_CD column. Address type can be a home/office/postal address etc. The output from selecting these columns is a list of numbers such as 121234.0/2323234.0/2321344.0, so we need to query the CODE_VALUE table to get their descriptions.
We have created a function which hits the CODE_VALUE table and gets the description for these codes. But when I use the function to change codes to their descriptions, a query that otherwise takes a few seconds takes almost 15 minutes. So I was thinking of loading the table permanently into cache. Any suggestions on how this can be dealt with?
A solution being used by another system is described below
I have been using Cerner to query the database, which uses User access routines to convert these code_values and are very quick. Generally they are written in C++. The routine is using the global code cache to look up the display value for the code_value that is passed to it. That UAR never hits Oracle directly. The code cache does pull the values from the Code_Value table and load them into memory. So the code cache system is hitting Oracle and doing memory swapping to cache the code values, but the UAR is hitting that cached data instead of querying the Code_Value table.
EXAMPLE :
Person table
person_id(PK)
person_type_cd
birth_dt_cd
deceased_cd
race_cd
name
Visit table
visit_id(PK)
person_id(FK)
visit_type_cd
hospital_cd
visit_dt_tm
disch_dt_tm
reason_visit_cd
Address table
address_id(PK)
person_id(FK)
address_type_cd
street
suburb_cd
state_cd
country_cd
code_value table
code_value
code_set
description
DATA :
code_value table
code_value code_set description
visit_type :
121212 12 admitted
122233 12 emergency
121233 12 outpatient
address_type :
1234434 233 home
23234 233 office
343434 233 postal
ALTER function [dbo].[getDescByCv](@cv int)
returns varchar(80)
as begin
-- Returns the code value description
declare @ret varchar(80)
select @ret = cv.DESCRIPTION
from CODE_VALUE cv
where cv.code_value = @cv
and cv.active_ind = 1
return isnull(@ret, 0)
end;
Final query :
SELECT
v.PERSON_ID as PersonID
, v.ENCNTR_ID as EncntrID
, [EMR_DWH].dbo.[getDescByCv](v.hospital_cd) as Hospital
, [EMR_DWH].dbo.[getDescByCv](v.visit_type_cd) as VisitType
from visit v
SELECT
v.PERSON_ID as PersonID
, v.ENCNTR_ID as EncntrID
, [EMR_DWH].dbo.[getDescByCv](v.hospital_cd) as Hospital
, [EMR_DWH].dbo.[getDescByCv](v.visit_type_cd) as VisitType
, [EMR_DWH].dbo.[getDescByCv](p.person_type_cd) as PersonType
, [EMR_DWH].dbo.[getDescByCv](p.deceased_cd) as Deceased
, [EMR_DWH].dbo.[getDescByCv](a.address_type_cd) as AddressType
, [EMR_DWH].dbo.[getDescByCv](a.country_cd) as Country
from visit v
,person p
,address a
where v.visit_id = 102288.0
and v.person_id = p.person_id
and p.person_id = a.person_id
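The in-memory cache the UAR relies on can be sketched in a few lines. This is a hypothetical Python/SQLite mock-up of the idea, not Cerner's API: load CODE_VALUE into memory once, then resolve every code from the dict instead of calling the per-row UDF.

```python
import sqlite3

# Mock-up of the "code cache" idea. Schema and sample rows follow the
# question's example data; everything else is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE code_value (code_value INTEGER, code_set INTEGER,
                         description TEXT, active_ind INTEGER);
INSERT INTO code_value VALUES (121212, 12, 'admitted', 1);
INSERT INTO code_value VALUES (122233, 12, 'emergency', 1);
INSERT INTO code_value VALUES (1234434, 233, 'home', 1);
""")

# One query loads the whole lookup table into memory.
code_cache = dict(conn.execute(
    "SELECT code_value, description FROM code_value WHERE active_ind = 1"))

def get_desc_by_cv(cv):
    # Pure memory lookup; returns '0' for unknown codes,
    # mirroring the isnull(@ret, 0) fallback in the UDF.
    return code_cache.get(cv, "0")
```

In SQL Server itself the usual fix has the same shape: replace the scalar UDF with a LEFT JOIN to CODE_VALUE, so the engine reads the lookup table once instead of once per row.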

Kettle database lookup case insensitive

I have a table "City" with more than 100k records.
The field "name" contains strings like "Roma", "La Valletta".
I receive a file with the city name, all in upper case as in "ROMA".
I need to get the id of the record that contains "Roma" when I search for "ROMA".
In SQL, I must do something like:
select id from city where upper(name) = upper(%name%)
How can I do this in kettle?
Note: if the city is not found, I use an Insert/update field to create it, so I must avoid duplicates generated by case-sensitive names.
You can make use of the String Operations step in Pentaho Kettle. Set the Lower/Upper option to Y.
Pass the city name from the City table to the String Operations step, which will upper-case your data stream (i.e. the city name). Join/look up against the received file and get the required id.
More on the String Operations step in the Pentaho wiki.
You can use a 'Database join' step. Here you can write the sql:
select id from city where upper(name) = upper(?)
and specify the city field name from the text file as parameter. With 'Number of rows to return' and 'Outer join?' you can control the join behaviour.
This solution doesn't work well with a large number of rows, as it will execute one query per row. In those cases Rishu's solution is better.
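For reference, the upper(name) = upper(?) comparison that the Database join step issues can be reproduced outside Kettle; a minimal Python/SQLite sketch with the sample cities from the question:

```python
import sqlite3

# Case-insensitive id lookup: the same query shape the Database join
# step would run, with ? as the parameter from the input file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO city VALUES (1, 'Roma'), (2, 'La Valletta')")

def city_id(name):
    # Returns the matching id, or None when the city is not found
    # (the case where the Insert/update step would create it).
    row = conn.execute(
        "SELECT id FROM city WHERE upper(name) = upper(?)", (name,)).fetchone()
    return row[0] if row else None
```

Note that with 100k+ rows an expression index on upper(name) avoids a full scan per lookup.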
This is how I did it:
First, a "Modified JavaScript value" step to create the query:
var queryDest = "select coalesce( (select id as idcity from city where upper(name) = upper('" + replace(mycity, "'", "''") + "') and upper(cap) = upper('" + mycap + "') ), 0) as idcitydest";
Then I use this string as a query in a Dynamic SQL row step.
After that,
IF idcitydest == 0 then
insert new city;
else
use the found record
This approach runs one query per input row, but it uses very little memory for caching.

how to extract column parameters from sqlite create string?

In SQLite it is possible to get the string by which a table was created:
select sql from sqlite_master where type='table' and tbl_name='MyTable'
This could give:
CREATE TABLE "MyTable" (`id` PRIMARY KEY NOT NULL, [col1] NOT NULL,
"another_col" UNIQUE, '`and`,''another'',"one"' INTEGER, and_so_on);
Now I need to extract from this string any additional parameters that a given column name has been set with.
But this is very difficult, since the column name could be enclosed in special characters or left plain, and the column name itself may contain the characters used for quoting, etc.
I don't know how to approach it. Given a column name, the function should return anything after that name and before the ',', so given id it should return PRIMARY KEY NOT NULL.
Use the pragma table_info:
http://www.sqlite.org/pragma.html#pragma_table_info
sqlite> pragma table_info(MyTable);
cid|name|type|notnull|dflt_value|pk
0|id||1||1
1|col1||1||0
2|another_col||0||0
3|`and`,'another',"one"|INTEGER|0||0
4|and_so_on||0||0
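From application code the same pragma is available without any string parsing; a short Python sketch using the standard sqlite3 module and the CREATE TABLE statement from the question:

```python
import sqlite3

# PRAGMA table_info returns one row per column:
# (cid, name, type, notnull, dflt_value, pk).
# Column names come back unquoted, however oddly they were declared.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE "MyTable" (`id` PRIMARY KEY NOT NULL, [col1] NOT NULL,
"another_col" UNIQUE, '`and`,''another'',"one"' INTEGER, and_so_on)""")

info = conn.execute('PRAGMA table_info("MyTable")').fetchall()
columns = {row[1]: {"type": row[2], "notnull": row[3], "pk": row[5]}
           for row in info}
```

One caveat: constraints like the UNIQUE on another_col are not part of this pragma's output; PRAGMA index_list and index_info cover those.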