Apache Phoenix join issue - MapReduce

When I run a query containing a JOIN, I get an error. Environment, version details, and the error are below.
JDK 1.7.0_79
Phoenix 4.7.0
HBase 1.1.2 with 7 region servers
Caused by: java.sql.SQLException: Encountered exception in sub plan [0] execution.
at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:193)
at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:129)
... 11 more
Caused by: java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:266)
at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:84)
at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:381)
at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:162)
at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:158)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
... 3 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(FastByteComparisons.java:245)
at org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(FastByteComparisons.java:132)
at org.apache.hadoop.io.FastByteComparisons.compareTo(FastByteComparisons.java:46)
at org.apache.hadoop.io.WritableComparator.compareBytes(WritableComparator.java:188)
at org.apache.phoenix.util.ScanUtil$2.compare(ScanUtil.java:484)
at org.apache.phoenix.query.KeyRange.compareUpperToLowerBound(KeyRange.java:277)
at org.apache.phoenix.query.KeyRange.compareUpperToLowerBound(KeyRange.java:222)
at org.apache.phoenix.util.ScanUtil.searchClosestKeyRangeWithUpperHigherThanPtr(ScanUtil.java:506)
at org.apache.phoenix.filter.SkipScanFilter.intersect(SkipScanFilter.java:220)
at org.apache.phoenix.filter.SkipScanFilter.hasIntersect(SkipScanFilter.java:182)
at org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:331)
at org.apache.phoenix.compile.ScanRanges.intersects(ScanRanges.java:421)
at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:175)
... 10 more
Below is the SQL:
select a.MINITORDATE as MINITORDATE ,TEMPVAL, HUMVAL,PM25VAL ,NCPM25VAL from (
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as PM25VAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '00' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') a
INNER JOIN
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as TEMPVAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '02' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') b on b.MINITORDATE = a.MINITORDATE
INNER JOIN
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as HUMVAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '03' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') c on c.MINITORDATE = b.MINITORDATE
INNER JOIN
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as NCPM25VAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '02' AND SUBSTR(ROW,1,6) = '023120' AND SUBSTR(ROW,10,2) = '00' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') d on d.MINITORDATE = c.MINITORDATE
)
The table 'AQM.AQMDATA_ALL' was created by Phoenix with SALT_BUCKETS = 28.
The number of rows in 'AQM.AQMDATA_ALL' is around 6 million.
Without SALT_BUCKETS, or with the SQL below, the query works fine!
select a.MINITORDATE as MINITORDATE ,TEMPVAL, HUMVAL,PM25VAL ,NCPM25VAL from (
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as PM25VAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '00' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') a
INNER JOIN
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as TEMPVAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '02' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') b on b.MINITORDATE = a.MINITORDATE
INNER JOIN
( select "Data_minitorDate" as MINITORDATE ,"Data_minitorVal" as HUMVAL from AQM.AQMDATA_ALL where 1=1 AND SUBSTR(ROW,8,2) = '00' AND SUBSTR(ROW,1,6) = '099812' AND SUBSTR(ROW,10,2) = '03' AND SUBSTR(ROW,12,2) = '00'
AND "Data_minitorDate" between '2016-08-22 00:00:00' and '2016-08-23 23:59:59') c on c.MINITORDATE = b.MINITORDATE )
The only difference between the two statements above is that the second one drops one INNER JOIN.
I hit the same problem not only when querying from the SQuirreL SQL client, but also when running a MapReduce job with phoenix-client.
Please help me on this.
Regards!

At Splice Machine (open source) we ran a TPC-H 1 GB benchmark with the latest version of Phoenix and saw a lot of these sub-plan execution exceptions. We did not have salted tables, however.
I would file a bug directly via JIRA with just your schema. It looks like the query plan has a parse problem.
Be careful running a lot of joins in Phoenix; it does not scale well (the join is performed on the client).
See slide 20 of this presentation from a member of the Phoenix team.
http://www.slideshare.net/enissoz/apache-phoenix-past-present-and-future-of-sql-over-hbase
Good luck.

Related

Problem using QuantLib to get price/yield/interest of a CPIBond instrument

I am trying to analyze a TIPS security using QuantLib. I have not been able to find much documentation, but I managed to find an example from 2015 that supposedly worked, since it was posted as a solution. The example no longer works, and the problem lies with the construction of the zeroSwapHelpers.
Here is my modified code:
import QuantLib as ql
import datetime as dt
calendar = ql.UnitedStates(ql.UnitedStates.GovernmentBond)
observationInterpolation = ql.CPI.Flat
calendar = ql.UnitedKingdom()
dayCounter = ql.ActualActual(ql.ActualActual.Bond)
convention = ql.ModifiedFollowing
lag = 3
today = ql.Date(5,3,2008)
evaluationDate = calendar.adjust(today)
issue_date = calendar.advance(evaluationDate,-1, ql.Years)
maturity_date = ql.Date(2,9,2052)
fixing_date = calendar.advance(evaluationDate,-lag, ql.Months)
ql.Settings.instance().setEvaluationDate(evaluationDate)
observationInterpolation = ql.CPI.Flat
yTS = ql.YieldTermStructureHandle(ql.FlatForward(evaluationDate, 0.05, dayCounter))
tenor = ql.Period(1, ql.Months)
from_date = ql.Date(20, ql.July, 2007);
to_date = ql.Date(20, ql.November, 2009);
rpiSchedule = ql.Schedule(from_date, to_date, tenor, calendar,
convention, convention,
ql.DateGeneration.Backward, False)
# this is going to hold the inflation curve
cpiTS = ql.RelinkableZeroInflationTermStructureHandle()
inflationIndex = ql.UKRPI(False, cpiTS)
fixData = [206.1, 207.3, 208.0, 208.9, 209.7, 210.9,
209.8, 211.4, 212.1, 214.0, 215.1, 216.8,
216.5, 217.2, 218.4, 217.7, 216,
212.9, 210.1, 211.4, 211.3, 211.5,
212.8, 213.4, 213.4, 213.4, 214.4]
dte_fixings=[dtes for dtes in rpiSchedule]
print(len(fixData))
print(len(dte_fixings[:len(fixData)]))
#must be the same length
#inflationIndex.addFixings(dte_fixings[:len(fixData)], fixData)
#Current CPI level
#last observed rate
fixing_rate = 214.4
inflationIndex.addFixing(fixing_date, fixing_rate)
observationLag = ql.Period(lag, ql.Months)
zciisData =[( ql.Date(25, ql.November, 2010), 3.0495 ),
( ql.Date(25, ql.November, 2011), 2.93 ),
( ql.Date(26, ql.November, 2012), 2.9795 ),
( ql.Date(25, ql.November, 2013), 3.029 ),
( ql.Date(25, ql.November, 2014), 3.1425 ),
( ql.Date(25, ql.November, 2015), 3.211 ),
( ql.Date(25, ql.November, 2016), 3.2675 ),
( ql.Date(25, ql.November, 2017), 3.3625 ),
( ql.Date(25, ql.November, 2018), 3.405 ),
( ql.Date(25, ql.November, 2019), 3.48 ),
( ql.Date(25, ql.November, 2021), 3.576 ),
( ql.Date(25, ql.November, 2024), 3.649 ),
( ql.Date(26, ql.November, 2029), 3.751 ),
( ql.Date(27, ql.November, 2034), 3.77225),
( ql.Date(25, ql.November, 2039), 3.77 ),
( ql.Date(25, ql.November, 2049), 3.734 ),
( ql.Date(25, ql.November, 2059), 3.714 )]
#lRates=[rtes/100.0 for rtes in zip(*zciisData)[1]]
#baseZeroRate = lRates[0]
zeroSwapHelpers = [ql.ZeroCouponInflationSwapHelper(rate/100,observationLag,
date, calendar, convention, dayCounter, inflationIndex,observationInterpolation,yTS) for date,rate in zciisData]
# the derived inflation curve
jj=ql.PiecewiseZeroInflation(
evaluationDate, calendar, dayCounter, observationLag,
inflationIndex.frequency(), inflationIndex.interpolated(),
zciisData[0][1],#baseZeroRate,
yTS, zeroSwapHelpers, 1.0e-12, ql.Linear())
cpiTS.linkTo(jj)
notional = 1000000
fixedRates = [0.1]
fixedDayCounter = ql.Actual365Fixed()
fixedPaymentConvention = ql.ModifiedFollowing
fixedPaymentCalendar = ql.UnitedKingdom()
contractObservationLag = ql.Period(3, ql.Months)
observationInterpolation = ql.CPI.Flat
settlementDays = 3
growthOnly = False
baseCPI = 206.1
fixedSchedule = ql.Schedule(issue_date,
maturity_date,
ql.Period(ql.Semiannual),
fixedPaymentCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False)
bond = ql.CPIBond(settlementDays,
notional,
growthOnly,
baseCPI,
contractObservationLag,
inflationIndex,
observationInterpolation,
fixedSchedule,
fixedRates,
fixedDayCounter,
fixedPaymentConvention)
#bond2= ql.QuantLib.C
bondEngine=ql.DiscountingBondEngine(yTS)
bond.setPricingEngine(bondEngine)
print(bond.NPV() )
print(bond.cleanPrice())
compounding = ql.Compounded
yield_rate = bond.bondYield(fixedDayCounter,compounding,ql.Semiannual)
y_curve = ql.InterestRate(yield_rate,fixedDayCounter,compounding,ql.Semiannual)
##Collate results
print( "Clean Price:", bond.cleanPrice())
print( "Dirty Price:", bond.dirtyPrice())
print( "Notional:", bond.notional())
print( "Yield:", yield_rate)
print( "Accrued Amount:", bond.accruedAmount())
print( "Settlement Value:", bond.settlementValue())
#suspect there's more to this for TIPS
print( "Duration:", ql.BondFunctions.duration(bond,y_curve))
print( "Convexity:", ql.BondFunctions.convexity(bond,y_curve))
print( "Bps:", ql.BondFunctions.bps(bond,y_curve))
print( "Basis Point Value:", ql.BondFunctions.basisPointValue(bond,y_curve))
print( "Yield Value Basis Point:", ql.BondFunctions.yieldValueBasisPoint(bond,y_curve))
print( "NPV:", bond.NPV())
# get the cash flows:
#cf_list=[(cf.amount(),cf.date()) for cf in bond.cashflows()]
def to_datetime(d):
    return dt.datetime(d.year(), d.month(), d.dayOfMonth())
for cf in bond.cashflows():
    try:
        amt = cf.amount()
        rte = jj.zeroRate(cf.date())
        zc = yTS.zeroRate(cf.date(), fixedDayCounter, compounding, ql.Semiannual).rate()
    except:
        amt = 0
        rte = 0
        zc = 0
    print(to_datetime(cf.date()), amt, rte, zc)
#################################################
Error:
27
27
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-52-c046731c2284> in <module>
72
73 zeroSwapHelpers = [ql.ZeroCouponInflationSwapHelper(rate/100,observationLag,
---> 74 date, calendar, convention, dayCounter, inflationIndex,observationInterpolation,yTS) for date,rate in zciisData]
75
76
1 frames
/usr/local/lib/python3.7/dist-packages/QuantLib/QuantLib.py in __init__(self, quote, lag, maturity, calendar, bcd, dayCounter, index, observationInterpolation, nominalTS)
17159
17160 def __init__(self, quote, lag, maturity, calendar, bcd, dayCounter, index, observationInterpolation, nominalTS):
> 17161 _QuantLib.ZeroCouponInflationSwapHelper_swiginit(self, _QuantLib.new_ZeroCouponInflationSwapHelper(quote, lag, maturity, calendar, bcd, dayCounter, index, observationInterpolation, nominalTS))
17162 __swig_destroy__ = _QuantLib.delete_ZeroCouponInflationSwapHelper
17163
TypeError: in method 'new_ZeroCouponInflationSwapHelper', argument 1 of type 'Handle< Quote > const &'
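The TypeError says argument 1 of ZeroCouponInflationSwapHelper must be a `Handle< Quote > const &`, i.e. newer QuantLib builds expect a quote handle rather than a bare float for the rate. A hedged sketch of the likely fix (untested here; everything except the wrapped rate is unchanged from the code above):

```python
import QuantLib as ql

# Wrap each bare float in a SimpleQuote inside a QuoteHandle so the
# first argument matches the expected Handle<Quote>. zciisData,
# observationLag, calendar, convention, dayCounter, inflationIndex,
# observationInterpolation and yTS are the objects defined above.
zeroSwapHelpers = [
    ql.ZeroCouponInflationSwapHelper(
        ql.QuoteHandle(ql.SimpleQuote(rate / 100.0)),
        observationLag, date, calendar, convention, dayCounter,
        inflationIndex, observationInterpolation, yTS)
    for date, rate in zciisData]
```

If this still fails on your QuantLib version, check the wrapper's `__init__` signature shown in the traceback: the argument order there is authoritative for your installed build.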

OpenCart MySQL responds very slowly

I am using OpenCart.
The product table has about 10 million rows.
SQL responds very slowly.
My server info:
48 CPU
64GB Memory
PHP 7.3
Mariadb 10.3
Nginx + PHP-FPM
SELECT p.product_id,
    (SELECT AVG(rating) AS total
     FROM oc_review r1
     WHERE r1.product_id = p.product_id AND r1.status = '1'
     GROUP BY r1.product_id) AS rating,
    (SELECT price
     FROM oc_product_discount pd2
     WHERE pd2.product_id = p.product_id
       AND pd2.customer_group_id = '1'
       AND pd2.quantity = '1'
       AND ((pd2.date_start = '0000-00-00' OR pd2.date_start < NOW())
        AND (pd2.date_end = '0000-00-00' OR pd2.date_end > NOW()))
     ORDER BY pd2.priority ASC, pd2.price ASC
     LIMIT 1) AS discount,
    (SELECT price
     FROM oc_product_special ps
     WHERE ps.product_id = p.product_id
       AND ps.customer_group_id = '1'
       AND ((ps.date_start = '0000-00-00' OR ps.date_start < NOW())
        AND (ps.date_end = '0000-00-00' OR ps.date_end > NOW()))
     ORDER BY ps.priority ASC, ps.price ASC
     LIMIT 1) AS special
FROM oc_product_to_category p2c
LEFT JOIN oc_product p ON (p2c.product_id = p.product_id)
LEFT JOIN oc_product_description pd ON (p.product_id = pd.product_id)
LEFT JOIN oc_product_to_store p2s ON (p.product_id = p2s.product_id)
WHERE pd.language_id = '1'
  AND p.status = '1'
  AND p.date_available <= NOW()
  AND p2s.store_id = '0'
  AND p2c.category_id = '18'
GROUP BY p.product_id
ORDER BY p.sort_order ASC, LCASE(pd.name) ASC
LIMIT 0,48

Extraneous Django DB Queries on user.has_perms(perms)

Looking at the SQL queries run when user.has_perms(perms) is called, I see:
SELECT "auth_permission"."id",
"auth_permission"."name",
"auth_permission"."content_type_id",
"auth_permission"."codename",
"django_content_type"."id",
"django_content_type"."name",
"django_content_type"."app_label",
"django_content_type"."model"
FROM "auth_permission"
inner join "auth_user_user_permissions"
ON ( "auth_permission"."id" =
"auth_user_user_permissions"."permission_id" )
inner join "django_content_type"
ON ( "auth_permission"."content_type_id" =
"django_content_type"."id" )
WHERE "auth_user_user_permissions"."user_id" = %s
ORDER BY "django_content_type"."app_label" ASC,
"django_content_type"."model" ASC,
"auth_permission"."codename" ASC
and:
SELECT "django_content_type"."app_label",
"auth_permission"."codename"
FROM "auth_permission"
inner join "auth_group_permissions"
ON ( "auth_permission"."id" =
"auth_group_permissions"."permission_id" )
inner join "auth_group"
ON ( "auth_group_permissions"."group_id" = "auth_group"."id" )
inner join "auth_user_groups"
ON ( "auth_group"."id" = "auth_user_groups"."group_id" )
left outer join "django_content_type"
ON ( "auth_permission"."content_type_id" =
"django_content_type"."id" )
WHERE "auth_user_groups"."user_id" = %s
My questions are:
What exactly are these queries doing?
Why are they run on every request?
Is there some way to cache these results?
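Reading the SQL itself answers the first question: the top query loads the permissions assigned directly to the user (via auth_user_user_permissions), and the second loads the permissions the user inherits through groups. Django's ModelBackend memoizes the combined set on the user object, so within one request the queries run at most once, but nothing persists that cache across requests by default. A common workaround is to cache the permission set yourself, e.g. in the session. A framework-independent sketch of the idea (the plain dict stands in for request.session, and load_perms_from_db is a hypothetical loader representing the two queries above):

```python
# Session-level caching of permission lookups: the expensive DB work
# runs once per session instead of once per request.

def get_perms(session, user_id, load_perms_from_db):
    """Return the user's permissions, querying the DB once per session."""
    key = "perm_cache:%d" % user_id
    if key not in session:
        # This is where the two JOIN queries would actually run.
        session[key] = load_perms_from_db(user_id)
    return session[key]

calls = []

def fake_loader(uid):
    calls.append(uid)  # counts how often the expensive queries run
    return {"auth.change_user", "auth.view_user"}

session = {}
first = get_perms(session, 42, fake_loader)
second = get_perms(session, 42, fake_loader)  # served from the cache
assert first == second and len(calls) == 1
```

In a real Django setup you would also need to invalidate the cached set whenever the user's groups or permissions change, which is the main trade-off of this approach.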

Multiple queries in Doctrine with NAND, NOR, NOT, AND operators

I am trying to design a Doctrine query. I am new to Doctrine, but with the help of my other post I came up with a query that works when I run it in MySQL. Now I want to convert that query to Doctrine (2.3); can someone help me with this?
MySQL Query:
SELECT * FROM user WHERE
(`user_name` like '%TOM%' OR `user_name` like '%AN%' and `login_datetime` BETWEEN '2013-01-01 00:00:00' and '2013-02-31 23:59:59') OR
NOT ( --NOR
(`user_name` like '%PHP%' OR `user_name` like '%BA%' and `login_datetime` BETWEEN '2013-02-01 00:00:00' and '2013-03-31 23:59:59') OR
(`user_name` like '%SUN%' OR `user_name` like '%MOON%' and `login_datetime` BETWEEN '2013-03-01 00:00:00' and '2013-04-31 23:59:59')
) OR
NOT ( --NAND
(`user_name` like '%RAJ%' OR `user_name` like '%MUTH%' and `login_datetime` BETWEEN '2013-04-01 00:00:00' and '2013-06-31 23:59:59') AND
(`user_name` like '%BAG%' OR `user_name` like '%LAP%' and `login_datetime` BETWEEN '2013-05-01 00:00:00' and '2013-07-31 23:59:59')
)
-- Link reference for the above MySQL query.
My try with Doctrine (reference link):
It is very difficult to understand the Doctrine query because of the parentheses it automatically creates between sub-expressions, so it gives me wrong results all the time. Kindly help me.
It is very difficult to understand the Doctrine query because of the parentheses it automatically creates between sub-expressions, so it gives me wrong results all the time.
When you use an expr it typically wraps the expression in parentheses. I think that's where you are running into confusion. Something similar to the following should work (this isn't tested, so you may need to adjust a bit):
$qry = $this->manager()->createQueryBuilder()
->from($this->entity, 'e')
->select('e');
// (`user_name` like '%TOM%' OR `user_name` like '%AN%' and `login_datetime` BETWEEN '2013-01-01 00:00:00' and '2013-02-31 23:59:59')
$expr1 = $qry->expr()->andX(
$qry->expr()->orX(
$qry->expr()->like('e.user_name', '%TOM%'),
$qry->expr()->like('e.user_name', '%AN%')
),
$qry->expr()->between('e.login_datetime', '2013-01-01 00:00:00', '2013-02-31 23:59:59')
);
//(`user_name` like '%PHP%' OR `user_name` like '%BA%' and `login_datetime` BETWEEN '2013-02-01 00:00:00' and '2013-03-31 23:59:59')
$expr2a = $qry->expr()->andX(
$qry->expr()->orX(
$qry->expr()->like('e.user_name', '%PHP%'),
$qry->expr()->like('e.user_name', '%BA%')
),
$qry->expr()->between('e.login_datetime', '2013-02-01 00:00:00', '2013-03-31 23:59:59')
);
// (`user_name` like '%SUN%' OR `user_name` like '%MOON%' and `login_datetime` BETWEEN '2013-03-01 00:00:00' and '2013-04-31 23:59:59')
$expr2b = $qry->expr()->andX(
$qry->expr()->orX(
$qry->expr()->like('e.user_name', '%SUN%'),
$qry->expr()->like('e.user_name', '%MOON%')
),
$qry->expr()->between('e.login_datetime', '2013-03-01 00:00:00', '2013-04-31 23:59:59')
);
// combine expr2a and expr2b with OR as $expr2
$expr2 = $qry->expr()->orX($expr2a, $expr2b);
// (`user_name` like '%RAJ%' OR `user_name` like '%MUTH%' and `login_datetime` BETWEEN '2013-04-01 00:00:00' and '2013-06-31 23:59:59')
$expr3a = $qry->expr()->andX(
$qry->expr()->orX(
$qry->expr()->like('e.user_name', '%RAJ%'),
$qry->expr()->like('e.user_name', '%MUTH%')
),
$qry->expr()->between('e.login_datetime', '2013-04-01 00:00:00', '2013-06-31 23:59:59')
);
// (`user_name` like '%BAG%' OR `user_name` like '%LAP%' and `login_datetime` BETWEEN '2013-05-01 00:00:00' and '2013-07-31 23:59:59')
$expr3b = $qry->expr()->andX(
$qry->expr()->orX(
$qry->expr()->like('e.user_name', '%BAG%'),
$qry->expr()->like('e.user_name', '%LAP%')
),
$qry->expr()->between('e.login_datetime', '2013-05-01 00:00:00', '2013-07-31 23:59:59')
);
// combine expr3a and expr3b with AND as $expr3 (the NOT for the NAND is applied below)
$expr3 = $qry->expr()->andX($expr3a, $expr3b);
// final query essentially WHERE expr1 OR NOT(expr2) OR NOT(expr3)
$qry->where($expr1)
->orWhere($qry->expr()->not($expr2))
->orWhere($qry->expr()->not($expr3));

SQL SP works correctly in SSMS but fails in VC++ application

I have an app that runs on multiple computers and must synchronize data between its internal database and a database on SQL Server.
I use temporary tables to upload the internal database's data and then call an SP to synchronize it: the SP processes the data row by row and either updates rows in the SQL database, inserts new rows, or deletes dropped rows. Since I have to support customers on SQL Server 2000, I need a solution other than MERGE.
The problem is that my SP works very well in SSMS but suddenly fails when called from my application. I use native C++ code with ODBC and SQL Native Client to connect to SQL Server.
Here are my database and SP definitions:
IF (NOT EXISTS(SELECT * FROM master.dbo.sysdatabases WHERE name='TestDB1'))
CREATE DATABASE TestDB1;
GO
USE TestDB1;
GO
IF (NOT EXISTS( SELECT * FROM dbo.sysobjects WHERE name='Servers'))
BEGIN
CREATE TABLE Servers(
[ID] uniqueidentifier NOT NULL PRIMARY KEY,
[Name] nvarchar(50)
-- Other fields omitted
);
END;
GO
IF (NOT EXISTS( SELECT * FROM dbo.sysobjects WHERE name='P'))
BEGIN
CREATE TABLE [dbo].[P](
[ID] bigint NOT NULL,
[ServerID] uniqueidentifier NOT NULL
CONSTRAINT KK_P_Servers FOREIGN KEY REFERENCES [Servers],
[PName] nvarchar(255) NOT NULL,
-- Other fields omitted
CONSTRAINT PK_P PRIMARY KEY CLUSTERED ([ID], [ServerID])
);
END;
GO
IF (NOT EXISTS( SELECT * FROM dbo.sysobjects WHERE name='C1'))
BEGIN
CREATE TABLE [dbo].[C1](
[ID] bigint NOT NULL,
[ServerID] uniqueidentifier NOT NULL
CONSTRAINT FK_C1_Servers FOREIGN KEY REFERENCES [Servers],
[PID] bigint NOT NULL,
[Type] nvarchar(50) NOT NULL
-- Other fields omitted
CONSTRAINT PK_C1 PRIMARY KEY CLUSTERED ([ID], [ServerID]),
CONSTRAINT FK_C1_P
FOREIGN KEY ([PID], [ServerID]) REFERENCES [P]
);
END;
GO
IF (NOT EXISTS( SELECT * FROM dbo.sysobjects WHERE name='C2'))
BEGIN
CREATE TABLE [dbo].[C2](
[ID] bigint NOT NULL,
[ServerID] uniqueidentifier NOT NULL
CONSTRAINT FK_C2_Servers FOREIGN KEY REFERENCES [Servers],
[PID] bigint NULL,
[Name] nvarchar(255) NOT NULL
-- Other fields omitted
CONSTRAINT PK_C2 PRIMARY KEY CLUSTERED ([ID], [ServerID]),
CONSTRAINT FK_C2_P FOREIGN KEY ([PID], [ServerID]) REFERENCES [P]
);
END;
GO
IF (NOT EXISTS( SELECT * FROM dbo.sysobjects WHERE name='debug'))
BEGIN
CREATE TABLE debug (
[id] int identity(1, 1),
[msg] nvarchar(255) NOT NULL,
[cnt] int
);
END;
GO
CREATE TABLE #C1(
[ID] bigint NOT NULL PRIMARY KEY,
[PID] bigint NOT NULL,
[Type] nvarchar(50) NOT NULL
);
GO
CREATE TABLE #C2(
[ID] bigint NOT NULL PRIMARY KEY,
[PID] bigint NOT NULL,
[Name] nvarchar(255) NOT NULL
);
GO
CREATE TABLE #P(
[ID] bigint NOT NULL PRIMARY KEY,
[PName] nvarchar(255) NOT NULL UNIQUE
-- Table have other fields that is not important here
);
GO
CREATE PROCEDURE #RegisterServer
@ServerId uniqueidentifier,
@ServerName nvarchar(128)
AS
BEGIN
BEGIN TRANSACTION
UPDATE [Servers]
SET [Name]=@ServerName
WHERE [ID]=@ServerId;
IF @@ROWCOUNT = 0
INSERT INTO [Servers](
[ID], [Name]
) VALUES (
@ServerId, @ServerName
);
COMMIT TRANSACTION
END
GO
CREATE PROCEDURE #DropP
@ServerID uniqueidentifier,
@PId bigint
AS
BEGIN
DELETE FROM C1
WHERE PID=@PId AND ServerID=@ServerID;
UPDATE C2 SET PID=NULL
WHERE PID=@PId AND ServerID=@ServerID;
DELETE FROM P
WHERE ID=@PId AND ServerID=@ServerID;
END
GO
CREATE PROCEDURE #SynchronizeP
@ServerID uniqueidentifier
AS
BEGIN
DECLARE @rc int, @e int;
DECLARE @AllP TABLE (
[num] bigint IDENTITY(1, 1) PRIMARY KEY,
[ID] bigint NOT NULL,
[PName] nvarchar(255) NOT NULL
);
DECLARE @AllC1 TABLE (
[num] bigint IDENTITY(1, 1) PRIMARY KEY,
[ID] bigint NOT NULL,
[PID] bigint NOT NULL,
[Type] nvarchar(50) NOT NULL
);
DECLARE @AllC2 TABLE (
[num] bigint IDENTITY(1, 1) PRIMARY KEY,
[ID] bigint NOT NULL,
[PID] bigint NOT NULL,
[Name] nvarchar(255) NOT NULL
);
DELETE FROM debug;
INSERT INTO @AllP( [ID], [PName] )
SELECT [ID], [PName]
FROM #P;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'CREATE @AllP', @rc );
INSERT INTO @AllC1( [ID], [PID], [Type] )
SELECT [ID], [PID], [Type]
FROM #C1;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'CREATE @AllC1', @rc );
INSERT INTO @AllC2( [ID], [PID], [Name] )
SELECT [ID], [PID], [Name]
FROM #C2;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'CREATE @AllC2', @rc );
DECLARE @PCount int
SELECT @PCount = COUNT(*) FROM @AllP
INSERT INTO debug VALUES( 'Read count of @AllP', @PCount );
BEGIN TRANSACTION;
DECLARE @PId bigint, @PName nvarchar(255);
-- find dropped c1 and delete them
DELETE FROM [C1]
WHERE [ServerID]=@ServerID AND ([ID] NOT IN (SELECT a.[ID] FROM @AllC1 a));
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Delete invalid c1', @rc );
-- find dropped c2 and abandon them
UPDATE [C2] SET [PID]=NULL
WHERE [ServerID]=@ServerID AND ([ID] NOT IN (SELECT a.[ID] FROM @AllC2 a));
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Abandon invalid c2', @rc );
-- find dropped p and delete them
DELETE FROM [P]
WHERE [ServerID]=@ServerID AND ([ID] NOT IN (SELECT a.[ID] FROM @AllP a));
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Delete invalid p', @rc );
-- insert or update server p into database
DECLARE @p int
SET @p = 1
WHILE @p <= @PCount
BEGIN
SELECT @PId=[ID], @PName=[PName]
FROM @AllP
WHERE [num] = @p;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Select a p ' +
CASE @PId WHEN NULL THEN 'NULL' ELSE CONVERT(nvarchar(5), @PId) END + '|' +
CASE @PName WHEN NULL THEN 'NULL' ELSE @PName END, @rc );
-- update or add this processor
UPDATE dbo.[P]
SET [PName]=@PName
WHERE [ServerID]=@ServerID AND [ID]=@PId;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Update p', @rc );
IF @rc = 0
BEGIN
INSERT INTO dbo.[P](
[ID], [ServerID], [PName]
) VALUES(
@PId, @ServerID, @PName
);
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Insert p', @rc );
END;
-- Now update list of c1 that belong to this processor
DECLARE @TmpC1 TABLE (
[num] bigint identity(1, 1) primary key,
[ID] bigint NOT NULL,
[Type] nvarchar(50) NOT NULL
);
INSERT INTO @TmpC1( [ID], [Type] )
SELECT [ID], [Type]
FROM @AllC1
WHERE [PID] = @PId;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Create @TmpC1', @rc );
DECLARE @Test nvarchar(4000);
SELECT @Test = '';
SELECT @Test = @Test + CONVERT(nvarchar(5), [ID]) + ', '
FROM @TmpC1;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( '@TmpC1: ' + @Test, @rc );
DECLARE @C1Count int, @C1 int;
SELECT @C1Count = COUNT(*) FROM @TmpC1;
INSERT INTO debug VALUES( '@TmpC1.Count', @C1Count );
SET @C1 = 1
WHILE @C1 <= @C1Count
BEGIN
DECLARE @C1Id bigint, @C1Type nvarchar(50);
SELECT @C1Id=[ID], @C1Type=[Type]
FROM @TmpC1
WHERE [num] = @C1;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Read c1: ' +
CASE @C1Id WHEN NULL THEN 'NULL' ELSE CONVERT(nvarchar(5), @C1Id) END + '|' +
CASE @C1Type WHEN NULL THEN 'NULL' ELSE @C1Type END, @rc );
UPDATE C1
SET [PID]=@PId, [Type]=@C1Type
WHERE [ID]=@C1Id AND [ServerID]=@ServerID;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Update c1', @rc );
IF @rc = 0
BEGIN
INSERT INTO C1(
[ID], [ServerID], [PID], [Type]
) VALUES (
@C1Id, @ServerID, @PId, @C1Type
);
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Insert c1', @rc );
END;
SET @C1 = @C1 + 1;
END;
DELETE FROM @TmpC1;
-- And at last insert or update c2 of this processor
DECLARE @TmpC2 TABLE (
[num] bigint identity(1, 1) primary key,
[ID] bigint NOT NULL,
[Name] nvarchar(255) NOT NULL
);
INSERT INTO @TmpC2( [ID], [Name] )
SELECT [ID], [Name]
FROM @AllC2
WHERE [PID] = @PId;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Create @TmpC2', @rc );
SELECT @Test = '';
SELECT @Test = @Test + CONVERT(nvarchar(5), [ID]) + ', '
FROM @TmpC2;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( '@TmpC2: ' + @Test, @rc );
DECLARE @C2Count int, @C2 int;
SELECT @C2Count = COUNT(*) FROM @TmpC2;
INSERT INTO debug VALUES( '@TmpC2.Count', @C2Count );
SET @C2 = 1
WHILE @C2 <= @C2Count
BEGIN
DECLARE @C2Id bigint, @C2Name nvarchar(255);
SELECT @C2Id=[ID], @C2Name=[Name]
FROM @TmpC2
WHERE [num] = @C2;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Read c2: ' +
CASE @C2Id WHEN NULL THEN 'NULL' ELSE CONVERT(nvarchar(5), @C2Id) END + '|' +
CASE @C2Name WHEN NULL THEN 'NULL' ELSE @C2Name END, @rc );
UPDATE [C2]
SET [PID]=@PId, [Name]=@C2Name
WHERE [ID]=@C2Id AND ServerID=@ServerID;
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Update c2', @rc );
IF @rc = 0
BEGIN
INSERT INTO debug VALUES( 'Inserting channel: ' +
CONVERT(nvarchar(5), @C2Id) + '|' +
CONVERT(nvarchar(50), @ServerID) + '|' +
CONVERT(nvarchar(5), @PId), 0 );
INSERT INTO [C2] (
[ID], [ServerID], [PID], [Name]
) VALUES (
@C2Id, @ServerID, @PId, @C2Name
);
SELECT @rc = @@ROWCOUNT;
INSERT INTO debug VALUES( 'Insert c2', @rc );
END;
INSERT INTO debug VALUES( 'To next c2', @C2 );
SET @C2 = @C2 + 1;
INSERT INTO debug VALUES( 'Next c2', @C2 );
END;
DELETE FROM @TmpC2;
SET @p = @p + 1;
END;
COMMIT TRANSACTION;
END
GO
Each time I execute #SynchronizeP from the C++ app I get a sudden error somewhere in the middle of the SP and the transaction fails, but executing the same code in SSMS works perfectly.
I have tried everything but cannot come up with an answer!
Here is the sample data I work with:
INSERT INTO #P( [ID], [PName] ) VALUES
( 1, 'p1' ),
( 2, 'p2' ),
( 3, 'p3' );
GO
INSERT INTO #C1( [ID], [PID], [Type] ) VALUES
( 1, 1, 'T1' ),
( 2, 1, 'T2' ),
( 3, 2, 'T3' ),
( 4, 2, 'T4' ),
( 5, 3, 'T5' ),
( 6, 3, 'T6' );
GO
INSERT INTO #C2( [ID], [PID], [Name] ) VALUES
( 1, 1, 'C2_01' ),
( 2, 1, 'C2_02' ),
( 3, 1, 'C2_03' ),
( 4, 1, 'C2_04' ),
( 5, 1, 'C2_05' ),
( 6, 1, 'C2_06' ),
( 7, 1, 'C2_07' ),
( 8, 1, 'C2_08' ),
( 9, 1, 'C2_09' ),
(10, 1, 'C2_10' ),
(11, 1, 'C2_11' ),
(12, 1, 'C2_12' ),
(13, 1, 'C2_13' ),
(14, 1, 'C2_14' ),
(15, 1, 'C2_15' ),
(16, 1, 'C2_16' ),
(17, 1, 'C2_17' ),
(18, 1, 'C2_18' ),
(19, 1, 'C2_19' ),
(20, 1, 'C2_20' ),
(21, 1, 'C2_21' ),
(22, 1, 'C2_22' ),
(23, 1, 'C2_23' ),
(24, 1, 'C2_24' ),
(25, 1, 'C2_25' ),
(26, 1, 'C2_26' ),
(27, 1, 'C2_27' ),
(28, 1, 'C2_28' ),
(29, 1, 'C2_29' ),
(30, 1, 'C2_30' ),
(31, 2, 'C2_31' ),
(32, 2, 'C2_32' ),
(33, 2, 'C2_33' ),
(34, 2, 'C2_34' ),
(35, 2, 'C2_35' ),
(36, 2, 'C2_36' ),
(37, 2, 'C2_37' ),
(38, 2, 'C2_38' ),
(39, 2, 'C2_39' ),
(40, 2, 'C2_40' ),
(41, 2, 'C2_41' ),
(42, 2, 'C2_42' ),
(43, 2, 'C2_43' ),
(44, 2, 'C2_44' ),
(45, 2, 'C2_45' ),
(46, 2, 'C2_46' ),
(47, 2, 'C2_47' ),
(48, 2, 'C2_48' ),
(49, 2, 'C2_49' ),
(50, 2, 'C2_50' ),
(51, 2, 'C2_51' ),
(52, 2, 'C2_52' ),
(53, 2, 'C2_53' ),
(54, 2, 'C2_54' ),
(55, 2, 'C2_55' ),
(56, 2, 'C2_56' ),
(57, 2, 'C2_57' ),
(58, 2, 'C2_58' ),
(59, 2, 'C2_59' ),
(60, 2, 'C2_60' ),
(61, 3, 'C2_61' ),
(62, 3, 'C2_62' ),
(63, 3, 'C2_63' ),
(64, 3, 'C2_64' ),
(65, 3, 'C2_65' ),
(66, 3, 'C2_66' ),
(67, 3, 'C2_67' ),
(68, 3, 'C2_68' ),
(69, 3, 'C2_69' ),
(70, 3, 'C2_70' ),
(71, 3, 'C2_71' ),
(72, 3, 'C2_72' ),
(73, 3, 'C2_73' ),
(74, 3, 'C2_74' ),
(75, 3, 'C2_75' ),
(76, 3, 'C2_76' ),
(77, 3, 'C2_77' ),
(78, 3, 'C2_78' ),
(79, 3, 'C2_79' ),
(80, 3, 'C2_80' ),
(81, 3, 'C2_81' ),
(82, 3, 'C2_82' ),
(83, 3, 'C2_83' ),
(84, 3, 'C2_84' ),
(85, 3, 'C2_85' ),
(86, 3, 'C2_86' ),
(87, 3, 'C2_87' ),
(88, 3, 'C2_88' ),
(89, 3, 'C2_89' ),
(90, 3, 'C2_90' );
GO
EXEC #SynchronizeP
GO
Edit: Oh my God, I can't believe it! I added SET NOCOUNT ON at the start of my SP and everything works as expected! Does anyone know why? Why would a message indicating the count of affected rows break execution of my SP?
I know that in most cases it is a good idea to add SET NOCOUNT ON at the start of an SP (for performance), but why does forgetting to add it break my SP?
By prefixing your SP with a #, you have made it temporary, so it probably doesn't exist when you call it from a different session in your C++ program.
I think the answer is that ODBC closes or cancels the command when it receives the first result from SQL Server. So if I forget to use SET NOCOUNT ON and SQL Server sends row-count notifications, ODBC cancels the command. Maybe there is some technique to enable multiple result sets for a SQL command in ODBC, but I don't know of such a technique.
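For reference, ODBC does expose such a technique: a batch or procedure can produce several results (row sets and, without SET NOCOUNT ON, row-count-only results), and the client is expected to walk through them with SQLMoreResults instead of stopping after the first one. A hedged C sketch (handle allocation, connection setup, and parameter binding are omitted; exec_and_drain is a made-up helper name):

```c
#include <sql.h>
#include <sqlext.h>

/* Consume every result a stored-procedure call produces -- both real
 * row sets and the row-count-only results that arrive when NOCOUNT is
 * off -- by looping on SQLMoreResults. hstmt is assumed to be a valid
 * statement handle on an open connection. */
SQLRETURN exec_and_drain(SQLHSTMT hstmt)
{
    SQLRETURN rc = SQLExecDirect(hstmt,
        (SQLCHAR *)"EXEC #SynchronizeP", SQL_NTS);  /* args omitted */
    while (SQL_SUCCEEDED(rc)) {
        SQLSMALLINT cols = 0;
        SQLNumResultCols(hstmt, &cols);
        if (cols > 0) {
            /* a real row set: fetch (and here simply discard) its rows */
            while (SQL_SUCCEEDED(SQLFetch(hstmt)))
                ;
        } else {
            SQLLEN rows = 0;
            SQLRowCount(hstmt, &rows);  /* a row-count-only result */
        }
        rc = SQLMoreResults(hstmt);     /* advance to the next result */
        if (rc == SQL_NO_DATA)
            return SQL_SUCCESS;         /* every result consumed */
    }
    return rc;
}
```

That said, SET NOCOUNT ON remains the simpler fix, since it stops the extra row-count results from being generated in the first place.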