For example: in MySQL I have column1 and column2.
How do I write a SQL query that returns all records containing every search word in either column1 or column2?
So far I have been using:
SELECT * FROM table1 WHERE (column1 LIKE '%search from input box%' OR column2 LIKE '%search from input box%')
If your table is MyISAM:
SELECT *
FROM table1
WHERE MATCH(column1) AGAINST ('+search +from +input +box' IN BOOLEAN MODE)
UNION
SELECT *
FROM table1
WHERE MATCH(column2) AGAINST ('+search +from +input +box' IN BOOLEAN MODE)
Create a FULLTEXT INDEX on column1 and on column2 (two separate indexes) for this to run fast, and adjust ft_min_word_len so that words as short as you need are indexed.
If it's not MyISAM:
SELECT *
FROM table
WHERE (
column1 RLIKE '[[:<:]]search[[:>:]]'
AND column1 RLIKE '[[:<:]]from[[:>:]]'
AND column1 RLIKE '[[:<:]]input[[:>:]]'
AND column1 RLIKE '[[:<:]]box[[:>:]]'
)
OR
(
column2 RLIKE '[[:<:]]search[[:>:]]'
AND column2 RLIKE '[[:<:]]from[[:>:]]'
AND column2 RLIKE '[[:<:]]input[[:>:]]'
AND column2 RLIKE '[[:<:]]box[[:>:]]'
)
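To make the matching rule concrete, here is a small Python sketch of the same logic: every word must appear as a whole word (AND) within at least one single column (OR across columns). `row_matches` is an illustrative name, not anything MySQL provides, and Python's `\b` stands in for MySQL's `[[:<:]]` / `[[:>:]]` word boundaries.

```python
import re

def row_matches(search, *columns):
    # Mirrors the SQL shape above: AND over the search words,
    # OR across the columns; each word must be a whole word.
    words = search.split()
    return any(
        all(re.search(r"\b" + re.escape(w) + r"\b", col, re.IGNORECASE)
            for w in words)
        for col in columns
    )

print(row_matches("search from input box",
                  "a search made from the input box", "other"))  # True
# False: no single column contains all four words
print(row_matches("search from input box",
                  "search from input", "box only here"))
```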
I am executing a Pro*C (.pc) file in my project, with the steps below:
Step-1:
I have a comma-separated string:
string string1 = {"Delhi","Bombay","Pune"};
Step-2:
I am creating a varchar:
varchar str[100];
Step-3:
Copy string1 to str using strcpy/strncpy.
Step-4:
Execute the query:
select column1, column2 from table where column1 in (str.arr);
The query returns 0 rows, but if I hard-code the values as below:
select column1, column2 from table where column1 in ('Delhi','Bombay','Pune');
I get 5 rows as output.
I don't see how to effectively convert the string to a varchar/multiset that can be used in the SELECT statement.
Are there any steps for converting the string into a varchar or multiset?
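The root problem is that a single host variable binds as one scalar value, so the database compares column1 against the entire literal 'Delhi,Bombay,Pune' rather than three separate values. A minimal sketch of both the failure and the usual dynamic-SQL workaround (one placeholder per value), using Python's sqlite3 purely for illustration; the table `t` and sample data are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (column1 TEXT, column2 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("Delhi", "a"), ("Bombay", "b"), ("Pune", "c"), ("Goa", "d")])

cities = ["Delhi", "Bombay", "Pune"]

# Binding the whole comma-separated string as ONE value matches nothing:
# column1 is compared against the literal 'Delhi,Bombay,Pune'.
rows = conn.execute("SELECT column1 FROM t WHERE column1 IN (?)",
                    (",".join(cities),)).fetchall()
print(len(rows))  # 0

# The usual fix is dynamic SQL: build one placeholder per value.
placeholders = ",".join("?" * len(cities))
rows = conn.execute("SELECT column1 FROM t WHERE column1 IN (%s)" % placeholders,
                    cities).fetchall()
print(sorted(r[0] for r in rows))  # ['Bombay', 'Delhi', 'Pune']
```

In Pro*C the equivalent is to build the IN list into the statement text and run it with dynamic SQL, rather than binding the joined string as a single host variable.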
I am trying to grab a substring from a queryText column. The queryText column is a SQL query statement. And my goal is to parse and extract specific patterns into a new column called TableName.
parse kind=regex queryText with "[Ff][Rr][Oo][Mm]" TableName
Above is my current regex statement. It returns all characters after "FROM" or "from". I would like to grab only the characters after "FROM" and before the first whitespace or newline. Any idea what I have to add to the regex to do this?
You could use the extract() function.
For example (using the i flag for case-insensitivity):
datatable(input:string)
[
"select * FROM MyTable\n where X > 1",
"SELECT A,B,C from MyTable",
"select COUNT(*) from MyTable GROUP BY X",
"select * FROM MyTable",
"select * from [a].[b]",
]
| extend output = extract(#"(?i)from\s+([^\s]+)\s*", 1, input)
input                                     output
-----                                     ------
select * from [a].[b]                     [a].[b]
select * FROM MyTable  where X > 1        MyTable
SELECT A,B,C from MyTable                 MyTable
select COUNT(*) from MyTable GROUP BY X   MyTable
select * FROM MyTable                     MyTable
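If you want to sanity-check the pattern outside Kusto, the same idea works in any regex engine; here is a small sketch using Python's `re` rather than Kusto (the pattern behaves the same on these inputs): capture the first non-whitespace run after "from", case-insensitively.

```python
import re

# Same idea as the Kusto extract() above.
pattern = re.compile(r"(?i)from\s+(\S+)")

queries = [
    "select * FROM MyTable\n where X > 1",
    "SELECT A,B,C from MyTable",
    "select COUNT(*) from MyTable GROUP BY X",
    "select * from [a].[b]",
]
for q in queries:
    m = pattern.search(q)
    print(m.group(1) if m else None)
# MyTable, MyTable, MyTable, [a].[b]
```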
I am trying to copy data from one nested table into another nested table without flattening the data.
input_table : column1 STRING NULLABLE
column2 RECORD REPEATED
{
sub1 STRING NULLABLE
}
copy_table : column1 STRING NULLABLE
column2 RECORD REPEATED
{
sub STRING REPEATED
}
As you can see, the sub1 column definition has been modified. In this case, how do I copy the exact data without using UNNEST? If I use UNNEST, the row count increases because it flattens the record row by row.
My query:
insert into copy_table()
select column1,
[struct(array[column2.sub])]
from input_table,
unnest(column2) as column2;
Based on your question, input_table and copy_table would look like below:
CREATE TEMP TABLE input_table AS
SELECT '111' column1, [STRUCT('aaa' AS sub1), STRUCT('bbb'), STRUCT('ccc')] AS column2;
CREATE TEMP TABLE copy_table (
column1 STRING,
column2 ARRAY<STRUCT<sub ARRAY<STRING>>>
);
Instead of UNNESTing in the FROM clause, you can transform column2 in the SELECT list using a subquery, and insert the result into your copy_table:
INSERT INTO copy_table
SELECT column1, ARRAY(SELECT AS STRUCT [sub1] AS sub FROM UNNEST(column2)) AS column2
FROM input_table;
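The reshaping this query performs can be mirrored on plain Python data; a hedged sketch, assuming each row is represented as a dict:

```python
# One input row, with column2 as an array of structs (dicts here).
input_row = {
    "column1": "111",
    "column2": [{"sub1": "aaa"}, {"sub1": "bbb"}, {"sub1": "ccc"}],
}

# Mirror of: ARRAY(SELECT AS STRUCT [sub1] AS sub FROM UNNEST(column2)).
# Each struct's scalar sub1 is wrapped into a one-element array named sub,
# so the outer array keeps one entry per original struct (no flattening).
copy_row = {
    "column1": input_row["column1"],
    "column2": [{"sub": [s["sub1"]]} for s in input_row["column2"]],
}
print(copy_row["column2"])
# [{'sub': ['aaa']}, {'sub': ['bbb']}, {'sub': ['ccc']}]
```

The key point is that the UNNEST happens inside a correlated subquery per row, so the outer row count never changes.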
So I have a list of keywords:
['xxxxl','xxxl','xxl','xl','xxxxt','xxxt','xxt','xt']
In BigQuery, I want to write a regex inside the following SQL:
SELECT my_column
FROM table
WHERE REGEXP_CONTAINS(LOWER(my_column), regex)
so that my output table contains only the values that don't match any of the items in the keyword list.
Thanks
Below is for BigQuery Standard SQL
#standardSQL
WITH `project.dataset.lookup_table` AS (
SELECT ['xxxxl','xxxl','xxl','xl','xxxxt','xxxt','xxt','xt'] keywords
)
SELECT my_column
FROM `project.dataset.table`,
(SELECT STRING_AGG(LOWER(keyword), '|') exclude_pattern
FROM `project.dataset.lookup_table`,
UNNEST(keywords) keyword)
WHERE NOT REGEXP_CONTAINS(LOWER(my_column), exclude_pattern)
You can test / play with the above using the simplified example below:
#standardSQL
WITH `project.dataset.lookup_table` AS (
SELECT ['xxxxl','xxxl','xxl','xl','xxxxt','xxxt','xxt','xt'] keywords
), `project.dataset.table` AS (
SELECT 'xxxxl' my_column UNION ALL
SELECT 'abc'
)
SELECT my_column
FROM `project.dataset.table`,
(SELECT STRING_AGG(LOWER(keyword), '|') exclude_pattern
FROM `project.dataset.lookup_table`,
UNNEST(keywords) keyword)
WHERE NOT REGEXP_CONTAINS(LOWER(my_column), exclude_pattern)
with output:
Row  my_column
1    abc
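The pipe-join trick is easy to verify outside BigQuery; here is a minimal Python sketch of the same exclusion logic (REGEXP_CONTAINS checks for a substring match, which `re.search` mirrors; the sample rows are made up):

```python
import re

keywords = ['xxxxl', 'xxxl', 'xxl', 'xl', 'xxxxt', 'xxxt', 'xxt', 'xt']

# Same as STRING_AGG(LOWER(keyword), '|'): one alternation pattern.
exclude_pattern = "|".join(re.escape(k.lower()) for k in keywords)

# Keep only values that match no keyword anywhere in the string.
rows = ["xxxxl", "abc", "Size XL tee", "plain"]
kept = [r for r in rows if not re.search(exclude_pattern, r.lower())]
print(kept)  # ['abc', 'plain']
```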
Is there a way to improve the following? I need to count all rows with NULL value(s) in a specific column.
SELECT
SUM(IF(column1 IS NULL, 1, 0)) AS column1,
SUM(IF(column2 IS NULL, 1, 0)) AS column2
FROM
`dataset.table`;
One of the options:
#standardSQL
SELECT
COUNTIF(column1 IS NULL) AS column1,
COUNTIF(column2 IS NULL) AS column2
FROM `project.dataset.table`
Or (just to give you a few options):
#standardSQL
SELECT
COUNT(1) - COUNT(column1) AS column1,
COUNT(1) - COUNT(column2) AS column2
FROM `project.dataset.table`
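A quick way to convince yourself that the COUNT(*) - COUNT(column) variant works: COUNT(column) skips NULLs in standard SQL, so the difference is exactly the NULL count. A small sketch using Python's sqlite3 (which lacks COUNTIF but handles plain COUNT the same way; the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (column1 TEXT, column2 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", None), (None, None), ("c", "x")])

# COUNT(column) skips NULLs, so COUNT(*) - COUNT(column) counts them.
row = conn.execute(
    "SELECT COUNT(*) - COUNT(column1), COUNT(*) - COUNT(column2) FROM t"
).fetchone()
print(row)  # (1, 2)
```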