I am trying to design a Doctrine query. I am new to Doctrine, but with the help of my other post I came up with a query that works when I run it in MySQL. Now I want to convert that query to Doctrine (2.3). Can someone help me with this?
MySQL Query:
SELECT * FROM user WHERE
(`user_name` like '%TOM%' OR `user_name` like '%AN%' and `login_datetime` BETWEEN '2013-01-01 00:00:00' and '2013-02-31 23:59:59') OR
NOT ( -- NOR
(`user_name` like '%PHP%' OR `user_name` like '%BA%' and `login_datetime` BETWEEN '2013-02-01 00:00:00' and '2013-03-31 23:59:59') OR
(`user_name` like '%SUN%' OR `user_name` like '%MOON%' and `login_datetime` BETWEEN '2013-03-01 00:00:00' and '2013-04-31 23:59:59')
) OR
NOT ( -- NAND
(`user_name` like '%RAJ%' OR `user_name` like '%MUTH%' and `login_datetime` BETWEEN '2013-04-01 00:00:00' and '2013-06-31 23:59:59') AND
(`user_name` like '%BAG%' OR `user_name` like '%LAP%' and `login_datetime` BETWEEN '2013-05-01 00:00:00' and '2013-07-31 23:59:59')
)
Link reference for the above MySQL query:
My try with Doctrine (reference link):
It is very difficult to understand the Doctrine query because of the parentheses it automatically creates between the sub-expressions, so it gives me wrong results all the time. Kindly help me.
When you use an expr, it typically wraps the expression in parentheses; I think that's where you are running into confusion. Something similar to the following should work (this isn't tested, so you may need to adjust a bit):
$qry = $this->manager()->createQueryBuilder()
->from($this->entity, 'e')
->select('e');
// (`user_name` like '%TOM%' OR `user_name` like '%AN%' and `login_datetime` BETWEEN '2013-01-01 00:00:00' and '2013-02-31 23:59:59')
// use literal() so string values are quoted in the generated DQL
$expr1 = $qry->expr()->andX(
    $qry->expr()->orX(
        $qry->expr()->like('e.user_name', $qry->expr()->literal('%TOM%')),
        $qry->expr()->like('e.user_name', $qry->expr()->literal('%AN%'))
    ),
    $qry->expr()->between('e.login_datetime', $qry->expr()->literal('2013-01-01 00:00:00'), $qry->expr()->literal('2013-02-31 23:59:59'))
);
//(`user_name` like '%PHP%' OR `user_name` like '%BA%' and `login_datetime` BETWEEN '2013-02-01 00:00:00' and '2013-03-31 23:59:59')
$expr2a = $qry->expr()->andX(
    $qry->expr()->orX(
        $qry->expr()->like('e.user_name', $qry->expr()->literal('%PHP%')),
        $qry->expr()->like('e.user_name', $qry->expr()->literal('%BA%'))
    ),
    $qry->expr()->between('e.login_datetime', $qry->expr()->literal('2013-02-01 00:00:00'), $qry->expr()->literal('2013-03-31 23:59:59'))
);
// (`user_name` like '%SUN%' OR `user_name` like '%MOON%' and `login_datetime` BETWEEN '2013-03-01 00:00:00' and '2013-04-31 23:59:59')
$expr2b = $qry->expr()->andX(
    $qry->expr()->orX(
        $qry->expr()->like('e.user_name', $qry->expr()->literal('%SUN%')),
        $qry->expr()->like('e.user_name', $qry->expr()->literal('%MOON%'))
    ),
    $qry->expr()->between('e.login_datetime', $qry->expr()->literal('2013-03-01 00:00:00'), $qry->expr()->literal('2013-04-31 23:59:59'))
);
// combine expr2a and expr2b with OR as $expr2
$expr2 = $qry->expr()->orX($expr2a, $expr2b);
// (`user_name` like '%RAJ%' OR `user_name` like '%MUTH%' and `login_datetime` BETWEEN '2013-04-01 00:00:00' and '2013-06-31 23:59:59')
$expr3a = $qry->expr()->andX(
    $qry->expr()->orX(
        $qry->expr()->like('e.user_name', $qry->expr()->literal('%RAJ%')),
        $qry->expr()->like('e.user_name', $qry->expr()->literal('%MUTH%'))
    ),
    $qry->expr()->between('e.login_datetime', $qry->expr()->literal('2013-04-01 00:00:00'), $qry->expr()->literal('2013-06-31 23:59:59'))
);
// (`user_name` like '%BAG%' OR `user_name` like '%LAP%' and `login_datetime` BETWEEN '2013-05-01 00:00:00' and '2013-07-31 23:59:59')
$expr3b = $qry->expr()->andX(
    $qry->expr()->orX(
        $qry->expr()->like('e.user_name', $qry->expr()->literal('%BAG%')),
        $qry->expr()->like('e.user_name', $qry->expr()->literal('%LAP%'))
    ),
    $qry->expr()->between('e.login_datetime', $qry->expr()->literal('2013-05-01 00:00:00'), $qry->expr()->literal('2013-07-31 23:59:59'))
);
// combine expr3a and expr3b with AND as $expr3
$expr3 = $qry->expr()->andX($expr3a, $expr3b);
// final query essentially WHERE expr1 OR NOT(expr2) OR NOT(expr3)
$qry->where($expr1)
    ->orWhere($qry->expr()->not($expr2))
    ->orWhere($qry->expr()->not($expr3));
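One thing worth double-checking against the original MySQL before relying on this: SQL gives AND higher precedence than OR, so the unbracketed `A OR B AND C` in each line of the original WHERE clause is parsed as `A OR (B AND C)`, while the andX/orX nesting above expresses `(A OR B) AND C`. A quick illustration of the precedence rule (Python follows the same convention):

```python
# AND binds tighter than OR, in SQL as in Python:
# "A OR B AND C" is parsed as "A OR (B AND C)", not "(A OR B) AND C".
A, B, C = True, False, False

implicit = A or B and C              # how MySQL parses the unbracketed clause
assert implicit == (A or (B and C))  # same result: True
assert implicit != ((A or B) and C)  # explicit grouping gives False here
print("AND binds tighter than OR")
```

If MySQL's parse is what was intended, the BETWEEN belongs inside an andX with only the second LIKE.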
This Redshift query fails -
DELETE FROM TBL_1 stg
WHERE EXISTS (
WITH CCDA as (
SELECT
row_number() OVER (PARTITION BY emp_id,customer_id ORDER BY seq_num desc) rn
, *
FROM TBL_2
WHERE end_dt > (SELECT max(end_dt) FROM TBL_3)
)
SELECT emp_id,customer_id FROM CCDA WHERE rn = 1
AND stg.emp_id = CCDA.emp_id
AND stg.customer_id = CCDA.customer_id
);
Error: Invalid operation: syntax error at or near "stg"
However, the below query runs fine -
SELECT * FROM TBL_1 stg
WHERE EXISTS (
WITH CCDA as (
SELECT
row_number() OVER (PARTITION BY emp_id,customer_id ORDER BY seq_num desc) rn
, *
FROM TBL_2
WHERE end_dt > (SELECT max(end_dt) FROM TBL_3)
)
SELECT emp_id,customer_id FROM CCDA WHERE rn = 1
AND stg.emp_id = CCDA.emp_id
AND stg.customer_id = CCDA.customer_id
);
Am I missing something?
You cannot use an alias for the target table of a DELETE statement in Redshift, so "stg" is rejected here; that is why you are getting the syntax error.
Also, to reference other tables in a DELETE statement you need to use the USING clause.
See: https://docs.aws.amazon.com/redshift/latest/dg/r_DELETE.html
A quick stab at what this would look like (untested):
WITH CCDA as (
SELECT
row_number() OVER (PARTITION BY emp_id,customer_id ORDER BY seq_num desc) rn
, *
FROM TBL_2
WHERE end_dt > (SELECT max(end_dt) FROM TBL_3)
)
DELETE FROM TBL_1
USING CCDA
WHERE CCDA.rn = 1
AND TBL_1.emp_id = CCDA.emp_id
AND TBL_1.customer_id = CCDA.customer_id
;
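The two rules above -- no alias on the DELETE target, and correlating through the table name itself -- are not Redshift-specific. A minimal sketch in SQLite via Python, with made-up table contents:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl_1 (emp_id INT, customer_id INT);
CREATE TABLE tbl_2 (emp_id INT, customer_id INT, seq_num INT, end_dt TEXT);
INSERT INTO tbl_1 VALUES (1, 10), (2, 20);
INSERT INTO tbl_2 VALUES (1, 10, 1, '2023-01-01'), (1, 10, 2, '2023-06-01');
""")

# No alias on the DELETE target; correlate through the table name itself.
con.execute("""
DELETE FROM tbl_1
WHERE EXISTS (
    SELECT 1 FROM tbl_2
    WHERE tbl_2.emp_id = tbl_1.emp_id
      AND tbl_2.customer_id = tbl_1.customer_id
)
""")

remaining = con.execute("SELECT emp_id, customer_id FROM tbl_1").fetchall()
print(remaining)  # only the (2, 20) row survives
```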
I am trying to convert this Redshift query to Athena.
select
a.customerid,
a.country,
a.stockcode,
a.description,
a.invoicedate,
a.sales_amt,
(b.nbr_months_active) as nbr_months_active
from
ecommerce_sales_data a
inner join (
select
customerid,
count(
distinct(
DATE_PART(y, cast(invoicedate as date)) || '-' || LPAD(
DATE_PART(mon, cast(invoicedate as date)),
2,
'00'
)
)
) as nbr_months_active
from
ecommerce_sales_data
group by
1
) b on a.customerid = b.customerid
This is what I have tried. It returns results, but I am not sure whether they will match the Redshift query in all cases.
WITH students_results(InvoiceNo,StockCode,Description,Quantity,InvoiceDate,UnitPrice,CustomerID,Country) AS (VALUES
('536365','85123A','WHITE HANGING HEART T-LIGHT HOLDER','6','12/1/2010 8:26','2.55','17850','United Kingdom'),
('536365','71053','WHITE METAL LANTERN','6','12/1/2010 8:26','3.39','17850','United Kingdom'),
('536365','84406B','CREAM CUPID HEARTS COAT HANGER','8','12/1/2010 8:26','2.75','17850','United Kingdom')
)
select
a.customerid,
a.country,
a.stockcode,
a.description,
a.invoicedate,
cast(a.quantity as decimal(11,2)) * cast(a.unitprice as decimal(11,2)) as sales_amt,
(b.nbr_months_active) as nbr_months_active
from
students_results a
inner join (
select
customerid,
count(
distinct(
date_format(date_parse(invoicedate,'%m/%d/%Y %k:%i'), '%Y-%m')
)) as nbr_months_active
FROM students_results group by customerid) as b
on a.customerid = b.customerid
The source of the Redshift query is here:
https://aws.amazon.com/blogs/machine-learning/build-multi-class-classification-models-with-amazon-redshift-ml/
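A quick way to gain some confidence in the conversion: both expressions reduce each invoice date to a zero-padded "YYYY-MM" month key, so the distinct counts should agree as long as the keys agree. A small sanity check of that formatting logic, with Python's strptime/strftime standing in for Athena's date_parse/date_format (sample dates are made up):

```python
from datetime import datetime

def redshift_month_key(s):
    # DATE_PART(y, ...) || '-' || LPAD(DATE_PART(mon, ...), 2, '0')
    d = datetime.strptime(s, "%m/%d/%Y %H:%M")
    return f"{d.year}-{str(d.month).zfill(2)}"

def athena_month_key(s):
    # date_format(date_parse(invoicedate, '%m/%d/%Y %k:%i'), '%Y-%m')
    return datetime.strptime(s, "%m/%d/%Y %H:%M").strftime("%Y-%m")

for s in ["12/1/2010 8:26", "1/15/2011 23:59", "7/4/2011 0:05"]:
    assert redshift_month_key(s) == athena_month_key(s)
print("month keys agree")
```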
I have a problem converting the T-SQL query below into DAX.
Overview: there are two sample tables, Table1 and Table2, with the schema below.
Table1 (ID varchar(20),Name varchar(30))
Table2 (CapID varchar(20),CAPName varchar(30), CapID_Final varchar(20))
Please note: there is a one-to-many relationship between the above tables: [ID] in Table1 maps to [CapID] in Table2.
I am trying to derive the CapID_Final column in Table2 based on the conditions in my T-SQL query below, which works perfectly fine -
SELECT CASE
WHEN [CapID] like 'CA%' and [CAPName]='x12345-Sample'
and [CapID] not in(select [ID] from Table1 where Name='x12345-Sample')
THEN 'Undefined_Cap_1'
WHEN [CapID] like 'CA%' and [CAPName]='z12345-Sample'
and [CapID] not in(select [ID] from Table1 where Name='z12345-Sample')
THEN 'Undefined_Cap_2'
WHEN [CapID] like 'CA%' and [CAPName]='a123-Sample'
and [CapID] not in(select [ID] from Table1 where Name='a123-Sample')
THEN 'Undefined'
ELSE [CapID]
END AS [CapID_Final] from Table2
However, I want the same derivation of the CapID_Final column as a calculated column in Power BI using DAX.
So far I have tried the code below, but it returns "Undefined" even for matched conditions -
CapID_Final =
IF(LEFT(Table2[CapID],2)="CA" && Table2[CAPName]="z12345-Sample" &&
NOT
(COUNTROWS (
FILTER (
Table1,CONTAINS(Table1,Table1[ID],Table2[CapID])
)
) > 0),"Undefined_Cap_1","Undefined"
)
I am not familiar with DAX; I tried but couldn't figure it out.
Could you please let me know how to convert my SQL query to equivalent DAX in Power BI?
A SWITCH is basically the equivalent of a CASE clause here:
CapID_Final =
SWITCH (
    TRUE (),
    LEFT ( Table2[CapID], 2 ) = "CA"
        && Table2[CAPName] = "x12345-Sample"
        && NOT (
            Table2[CapID]
                IN CALCULATETABLE ( VALUES ( Table1[ID] ), Table1[Name] = "x12345-Sample" )
        ), "Undefined_Cap_1",
    LEFT ( Table2[CapID], 2 ) = "CA"
        && Table2[CAPName] = "z12345-Sample"
        && NOT (
            Table2[CapID]
                IN CALCULATETABLE ( VALUES ( Table1[ID] ), Table1[Name] = "z12345-Sample" )
        ), "Undefined_Cap_2",
    LEFT ( Table2[CapID], 2 ) = "CA"
        && Table2[CAPName] = "a123-Sample"
        && NOT (
            Table2[CapID]
                IN CALCULATETABLE ( VALUES ( Table1[ID] ), Table1[Name] = "a123-Sample" )
        ), "Undefined",
    Table2[CapID]
)
You might even be able to refactor it a bit to be more concise. Assuming I didn't make any logic mistakes:
CapID_Final =
VAR IDs =
    CALCULATETABLE ( VALUES ( Table1[ID] ), Table1[Name] = Table2[CAPName] )
RETURN
    IF (
        LEFT ( Table2[CapID], 2 ) = "CA"
            && NOT ( Table2[CapID] IN IDs ),
        SWITCH (
            Table2[CAPName],
            "x12345-Sample", "Undefined_Cap_1",
            "z12345-Sample", "Undefined_Cap_2",
            "a123-Sample", "Undefined"
        ),
        Table2[CapID]
    )
As a best practice, never use calculated columns: used extensively, they slow down your model refresh and significantly increase your model size (they are not compressed). Instead, calculate the column in your back-end database or with an M query.
Having said this, the solution to your question is very simple using the SWITCH function:
SWITCH ( <Expression>, <Value>, <Result> [, <Value>, <Result> [, … ] ] [, <Else>] )
In your case would be as follow:
CapIDFinal :=
SWITCH (
    TRUE (),
    AND ( CONDITION_1_1, CONDITION_1_2 ), "Value if condition 1 is true",
    AND ( CONDITION_2_1, CONDITION_2_2 ), "Value if condition 2 is true",
    "Value if none of the above conditions is true"
)
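SWITCH ( TRUE (), ... ) evaluates the condition/result pairs in order and returns the result paired with the first true condition, falling back to the final else value. In other words, it behaves like a chained if/elif. A rough Python analogue of the template above (the strings are placeholders):

```python
def cap_id_final(cond1, cond2):
    # First matching pair wins, mirroring SWITCH(TRUE(), ...).
    if cond1:
        return "Value if condition 1 is true"
    if cond2:
        return "Value if condition 2 is true"
    return "Value if none of the above conditions is true"

print(cap_id_final(False, True))  # Value if condition 2 is true
```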
I have a table MyTable with columns QuoteID, ControlNo, Premium, EffectiveDate, ExpirationDate.
I need to create a measure that sums Premium where EffectiveDate <= TODAY() and the last ExpirationDate (ordered by QuoteID DESC) is >= TODAY().
In SQL, I would do it this way:
select sum(Premium) as Premium
from MyTable t
where EffectiveDate <= GETDATE() and
(select top 1 t2.ExpirationDate
from MyTable t2
where t2.ControlNo = t.controlno
order by t2.quoteid desc) >= GETDATE()
How can I write it in DAX?
I've tried this, but it's not working properly:
Premium =
CALCULATE (
SUM ( fact_Premium[Premium] ),
FILTER (
fact_Premium,
fact_Premium[EffectiveDate] <= TODAY () &&
TOPN ( 1, ALL ( fact_Premium[ExpirationDate] ),
fact_Premium[QuoteID], ASC ) >= TODAY ()
)
)
UPDATE:
I tried creating a calculated table from the fact_Premium dataset, but I'm still not sure how to filter it:
In_Force Premium =
FILTER(
ADDCOLUMNS(
SUMMARIZE(
//Grouping necessary columns
fact_Premium,
fact_Premium[QuoteID],
fact_Premium[Division],
fact_Premium[Office],
dim_Company[CompanyGUID],
fact_Premium[LineGUID],
fact_Premium[ProducerGUID],
fact_Premium[StateID],
fact_Premium[ExpirationDate]
),
"Premium", CALCULATE(
SUM(fact_Premium[Premium])
),
"ControlNo", CALCULATE(
DISTINCTCOUNT(fact_Premium[ControlNo])
)
), // Here I need to make sure TODAY() falls between fact_Premium[EffectiveDate] and (SELECT TOP 1 fact_Premium[ExpirationDate] ORDER BY QuoteID DESC)
)
There are a couple of problems with the DAX here.
First, when you use TOPN, the ordering expression (3rd argument) can only reference columns of the table you are operating on (2nd argument), so ordering the bare column table ALL ( fact_Premium[ExpirationDate] ) by [QuoteID] won't work. You probably want fact_Premium instead of ALL ( fact_Premium[ExpirationDate] ).
Second, the TOPN function returns a table rather than a single value, so you need to extract the column you want. One option is an iterator like SUMX or MAXX:
MAXX( TOPN(...), fact_Premium[ExpirationDate] )
You could also use SELECTCOLUMNS, which coerces a single-row, single-column table to a scalar value:
SELECTCOLUMNS( TOPN(...), "ExpirationDate", fact_Premium[ExpirationDate] )
I can't guarantee that this will work perfectly, but it should get you closer to your goal:
Premium =
CALCULATE (
SUM ( fact_Premium[Premium] ),
FILTER (
fact_Premium,
fact_Premium[EffectiveDate] <= TODAY () &&
SUMX( TOPN ( 1, fact_Premium, fact_Premium[QuoteID], DESC ),
fact_Premium[ExpirationDate] )
>= TODAY ()
)
)
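For intuition, the row-by-row logic the measure is after -- count a row's Premium when its EffectiveDate is on or before today and the ExpirationDate of the highest-QuoteID row for the same ControlNo is on or after today -- can be sketched outside DAX. Column names come from the question; the sample data and helper name are made up:

```python
from datetime import date

rows = [
    {"QuoteID": 1, "ControlNo": "A", "Premium": 100.0,
     "EffectiveDate": date(2000, 1, 1), "ExpirationDate": date(2000, 12, 31)},
    {"QuoteID": 2, "ControlNo": "A", "Premium": 50.0,
     "EffectiveDate": date(2000, 6, 1), "ExpirationDate": date(9999, 12, 31)},
    {"QuoteID": 3, "ControlNo": "B", "Premium": 75.0,
     "EffectiveDate": date(9999, 1, 1), "ExpirationDate": date(9999, 12, 31)},
]

def in_force_premium(rows, today):
    total = 0.0
    for r in rows:
        # Expiration of the latest quote (highest QuoteID) for the same ControlNo.
        latest = max((x for x in rows if x["ControlNo"] == r["ControlNo"]),
                     key=lambda x: x["QuoteID"])
        if r["EffectiveDate"] <= today and latest["ExpirationDate"] >= today:
            total += r["Premium"]
    return total

print(in_force_premium(rows, date(2024, 1, 1)))  # 150.0
```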
When I do a query with a join in my statement, I get the error below. Environment and version details:
jdk-1.7.0_79
Phoenix-4.7.0
Hbase-1.1.2 with 7 region servers.
Caused by: java.sql.SQLException: Encountered exception in sub plan [0] execution.
at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:193)
at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:129)
... 11 more
Caused by: java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:266)
at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:84)
at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:381)
at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:162)
at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:158)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
... 3 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(FastByteComparisons.java:245)
at org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(FastByteComparisons.java:132)
at org.apache.hadoop.io.FastByteComparisons.compareTo(FastByteComparisons.java:46)
at org.apache.hadoop.io.WritableComparator.compareBytes(WritableComparator.java:188)
at org.apache.phoenix.util.ScanUtil$2.compare(ScanUtil.java:484)
at org.apache.phoenix.query.KeyRange.compareUpperToLowerBound(KeyRange.java:277)
at org.apache.phoenix.query.KeyRange.compareUpperToLowerBound(KeyRange.java:222)
at org.apache.phoenix.util.ScanUtil.searchClosestKeyRangeWithUpperHigherThanPtr(ScanUtil.java:506)
at org.apache.phoenix.filter.SkipScanFilter.intersect(SkipScanFilter.java:220)
at org.apache.phoenix.filter.SkipScanFilter.hasIntersect(SkipScanFilter.java:182)
at org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:331)
at org.apache.phoenix.compile.ScanRanges.intersects(ScanRanges.java:421)
at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:175)
... 10 more
Below is the SQL:
select a.MINITORDATE as MINITORDATE ,TEMPVAL, HUMVAL,PM25VAL ,NCPM25VAL from (
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as PM25VAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '00' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') a
INNER JOIN
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as TEMPVAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '02' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') b on b.MINITORDATE = a.MINITORDATE
INNER JOIN
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as HUMVAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '03' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') c on c.MINITORDATE = b.MINITORDATE
INNER JOIN
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as NCPM25VAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '02' AND SUBSTR(ROW,1,6) = '023120' AND SUBSTR(ROW,10,2) = '00' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') d on d.MINITORDATE = c.MINITORDATE
)
The table AQM.AQMDATA_ALL was created by Phoenix with SALT_BUCKETS = 28.
The number of rows in 'AQM.AQMDATA_ALL' is around 6 million.
Without SALT_BUCKETS, or using the SQL below, the query works fine:
select a.MINITORDATE as MINITORDATE ,TEMPVAL, HUMVAL,PM25VAL ,NCPM25VAL from (
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as PM25VAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '00' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') a
INNER JOIN
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as TEMPVAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '02' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') b on b.MINITORDATE = a.MINITORDATE
INNER JOIN
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as HUMVAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '03' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') c on c.MINITORDATE = b.MINITORDATE )
The only difference between the two statements above is that the last "inner join" is removed.
I hit the same problem both when running the query in the SQuirreL SQL client and when running a MapReduce job with phoenix-client.
Please help me with this.
Regards!
At Splice Machine (open source) we ran a TPC-H 1 GB benchmark against the latest version of Phoenix and saw many of these sub-plan execution exceptions. We did not have salted tables, however.
I would file a bug directly via JIRA with just your schema. It looks like the query plan has a parsing problem.
Be careful running a lot of joins in Phoenix; it does not scale well (joins are performed on the client).
See slide 20 of this presentation from a member of the Phoenix team:
http://www.slideshare.net/enissoz/apache-phoenix-past-present-and-future-of-sql-over-hbase
Good luck.