I'm using WSO2 Analytics, and when I checked my database I found the following command being executed many times.
I'm using Oracle Database version 11.2.0.4.
WSO2 API Manager Analytics
Version: 2.1.0
Command:
MERGE INTO API_REQ_USER_BROW_SUMMARY dest
USING (SELECT :1 api, :2 version, :3 apiPublisher, :4 tenantDomain,
              :5 total_request_count, :6 year, :7 month, :8 day,
              :9 requestTime, :10 os, :11 browser
       FROM dual) src
ON (dest.api=src.api AND dest.version=src.version AND
    dest.apiPublisher=src.apiPublisher AND dest.year=src.year AND
    dest.month=src.month AND dest.day=src.day AND dest.os=src.os AND
    dest.browser=src.browser AND dest.tenantDomain=src.tenantDomain)
WHEN NOT MATCHED THEN
  INSERT (api, version, apiPublisher, tenantDomain, total_request_count,
          year, month, day, requestTime, os, browser)
  VALUES (src.api, src.version, src.apiPublisher, src.tenantDomain,
          src.total_request_count, src.year, src.month, src.day,
          src.requestTime, src.os, src.browser)
WHEN MATCHED THEN
  UPDATE SET dest.total_request_count=src.total_request_count,
             dest.requestTime=src.requestTime
This is the expected behavior. DAS works with the DB through a Data Access Layer (DAL). The DAL uses merge queries to perform data inserts. When events do not arrive in batches, this query can run on the DB once per event, which leads to the behavior above.
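The effect of one upsert per event can be sketched with Python's sqlite3 module, using SQLite's INSERT ... ON CONFLICT (available in SQLite 3.24+) as a rough stand-in for the Oracle MERGE; the table and column names here are simplified and hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE summary ("
    " api TEXT, day INTEGER, total_request_count INTEGER,"
    " UNIQUE(api, day))"
)

# Three incoming events; two touch the same (api, day) key.
events = [("pets", 1, 3), ("pets", 1, 5), ("maps", 2, 1)]

# One upsert statement per event, mirroring the DAL issuing one MERGE
# per event when events are not batched.
for api, day, count in events:
    con.execute(
        "INSERT INTO summary (api, day, total_request_count)"
        " VALUES (?, ?, ?)"
        " ON CONFLICT(api, day) DO UPDATE"
        " SET total_request_count = excluded.total_request_count",
        (api, day, count),
    )

rows = sorted(con.execute("SELECT api, day, total_request_count FROM summary"))
# rows -> [('maps', 2, 1), ('pets', 1, 5)]
```

Three events produce three statements but only two rows, which is why the same MERGE shows up so often in the Oracle session when traffic is steady but unbatched.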
Related
I created a Studio notebook in Kinesis Data Analytics, and I can see data coming in via MQTT in the SQL Legacy Analytics application, so I am receiving data:
When I go to "Open in Apache Zeppelin", I create the table:
%flink.ssql
CREATE TABLE `ppgsignal0903` (
  `timestamp` BIGINT,
  `[Heart Rate Measurement]` DOUBLE,
  `[Energy Expended]` DOUBLE,
  `RR-Interval` DOUBLE,
  `iso_time` AS TO_TIMESTAMP(FROM_UNIXTIME(`timestamp`))
)
WITH (
  'connector' = 'kinesis',
  'stream' = 'PPG_PW',
  'aws.region' = 'eu-central-1',
  'scan.stream.initpos' = 'LATEST',
  'format' = 'json'
)
Data is coming in all the time, but when I go to see my table:
%flink.ssql(type=update)
SELECT * FROM ppgsignal0903;
I get the following error:
Fail to run sql command: SELECT * FROM ppgsignal0903
Unable to create a source for reading table
'hive.ppgdatabase.ppgsignal0903'.
Table options are:
'aws.region'='eu-central-1'
'connector'='kinesis'
'format'='json'
'scan.stream.initpos'='LATEST'
'stream'='PPG_PW'
Does anyone have a tip?
I need to do some analytics and manipulate the data in real-time charts (for example, heartbeats per second, or the time between diastolic and systolic blood pressure in the last 10 minutes), so I need different paragraphs that I can run separately against real-time data.
First, I have a table in SQLite3 with two fields: car (TEXT NOT NULL) and checkout (TEXT NOT NULL).
car   checkout
red   27/09/2021 (dates are stored as text in DD/MM/YYYY format)
Second, I wrote a script which, when run, should delete all entries whose checkout date is equal to or earlier than the current date.
Third, in the same script, a SELECT should exclude from my available cars every car that is in the list with a checkout date later than the current date.
The code snippet that performs the first step is the following:
try:
    con = lite.connect(DB)
    with con:
        paper = []
        cur = con.cursor()
        cur.execute("DELETE FROM CHECK_TABLE WHERE DATE(substr(checkout,7,4)||substr(checkout,4,2)||substr(checkout,1,2))<=DATE(strftime('%Y%m%d',date('now')))")
        con.commit()
        print('Entries with old dates deleted.')
except lite.Error as e:
    print('Error connection: ', e)
The problem is that it is not deleting anything. The first strange behaviour is that the SQL query works in DB Browser,
Image: Proof DB Browser in Windows 10 - Python 2.7 - SQLite3
the second strange behaviour is that no error is raised, and the third is that I tested it two days ago and it worked normally! I really need your thoughts.
The same logic is in the following code snippet, which is the third step that I described above, with the SELECT command.
def ReadDateAndCar(car):
    try:
        con = lite.connect(DB)
        with con:
            paper = []
            cur = con.cursor()
            cur.execute("SELECT DISTINCT car FROM CHECK_TABLE WHERE car='"+car+"' AND DATE(substr(checkout,7,4)||substr(checkout,4,2)||substr(checkout,1,2))<=DATE(strftime('%Y%m%d',date('now')))")
            free_cars = cur.fetchall()
            return free_cars
    except lite.Error as e:
        print('Error connection: ', e)
        return 0
Exactly the same problems: the SQL query works fine, no Python error is raised, and it worked a few days ago. Can someone enlighten me?
Both your queries are wrong, and they don't work in DB Browser either.
What you should do is store the dates in the ISO format YYYY-MM-DD, because this is the only text date format that is compatible with SQLite's datetime functions like date() and strftime() and that is directly comparable.
If you use any other format, the result of these functions is NULL, and this is what happens in your case.
The expressions substr(checkout,7,4)||substr(checkout,4,2)||substr(checkout,1,2) and strftime('%Y%m%d',date('now')) return dates in the format YYYYMMDD, and if you use them inside date() or strftime() the result is NULL.
Since both sides of the inequality are dates in the format YYYYMMDD, they are directly comparable as text, and you should not use the function date().
The condition should be:
substr(checkout, -4) || substr(checkout, 4, 2) || substr(checkout, 1, 2) <= strftime('%Y%m%d', 'now')
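A quick sketch with Python's sqlite3 module (using a hypothetical two-row table) shows that the rearranged YYYYMMDD strings compare correctly as plain text, without wrapping either side in date():

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CHECK_TABLE (car TEXT NOT NULL, checkout TEXT NOT NULL)")
con.executemany(
    "INSERT INTO CHECK_TABLE VALUES (?, ?)",
    [("red", "27/09/2021"), ("blue", "27/09/2099")],
)

# DD/MM/YYYY rearranged into YYYYMMDD compares correctly as text;
# wrapping either side in date() would yield NULL, because YYYYMMDD
# is not an ISO-8601 string.
deleted = con.execute(
    "DELETE FROM CHECK_TABLE"
    " WHERE substr(checkout, -4) || substr(checkout, 4, 2) || substr(checkout, 1, 2)"
    " <= strftime('%Y%m%d', 'now')"
).rowcount

remaining = [r[0] for r in con.execute("SELECT car FROM CHECK_TABLE")]
# deleted -> 1 (the 2021 checkout), remaining -> ['blue']
```

Only the past checkout date is deleted; the 2099 one survives, which is exactly the behaviour the original script was after.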
Interactive Grid 5.1: I would like to suppress the initial search, meaning "no search without a filter". How?
For testing purposes, I created an interactive grid (IG) based on Scott's EMP table. By default, once the page is run, it retrieves all employees because there's no WHERE condition in the IG's query.
Then I switched DEBUG mode on, entered "king" into the search field and hit the Enter key. Of course, King's data showed on the screen, but that's not the interesting part - the debug info is.
If you look at it, you'll see that Apex rewrote your "original" query and added some extra info. This is what you're looking for:
Rewrite SQL to: select a.* from (select q.*,count(*) over () "APEX$TOTAL_ROW_COUNT"
from(select q.*
from(select "ROWID" "APEX$ROWID","EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
from(select EMPNO,
ENAME,
JOB,
MGR,
HIREDATE,
SAL,
COMM,
DEPTNO
from EMP
)q
)q
where (1=2 or upper(q."ENAME") like :apex$1 or upper(q."JOB") like :apex$2)
)q
)a
where ROWNUM <= :p$_max_rows
Pay attention to the WHERE clause which says:
where (1=2 or upper(q."ENAME") like :apex$1 or upper(q."JOB") like :apex$2)
Aha! As ENAME and JOB are the only VARCHAR2 columns, it seems that Apex expects one of those values in order to filter data. OK then, I rewrote my own IG's SELECT statement to
select EMPNO,
ENAME,
JOB,
MGR,
HIREDATE,
SAL,
COMM,
DEPTNO
from EMP
where :apex$1 is not null and :apex$2 is not null --> added this
and ran the page again. Guess what? The IG is empty!
So, if that's what you are looking for, give it a try: run the page in Debug mode, review its results, and modify your query according to what you see.
I am analysing the Siebel log and I see that every query runs twice in the log. Could anyone please tell me why this happens?
For example, the query below is one of the many queries that I found executed twice in the log:
SELECT /*+ ALL_ROWS */
T2.CONFLICT_ID,
T2.LAST_UPD,
T2.CREATED,
T2.LAST_UPD_BY,
T2.CREATED_BY,
T2.MODIFICATION_NUM,
T2.ROW_ID,
T1.BU_ID,
T2.MULTI_LINGUAL_FLG,
:1
FROM
SIEBEL.S_LST_OF_VAL_BU T1,
SIEBEL.S_LST_OF_VAL T2
WHERE
T2.ROW_ID = T1.LST_OF_VAL_ID (+) AND
(T2.TYPE = :2 AND T2.NAME = :3)
ORDER BY
T2.TYPE, T2.ORDER_BY, T2.VAL
The query should NOT run twice unless the logged-in user has repeated an operation and the Business Component is not cached. You will see the SQLs for LOV values repeated in the log, but the value of the bind variable ":2" will be different each time. You can see these values just under the SQL, e.g.:
Bind variable 2: TIME_ZONE_DST_ORDINAL
Bind variable 2: DAY_NAME
Is there any other SQL that is repeated and is not for the S_LST_OF_VAL tables?
Ubuntu version: 12.10
MySQL server version: 5.5.29-0
Python version: 2.7
I am trying to use MySQLdb to insert data into my localhost MySQL server. I don't get any errors when I run the script, but the data isn't entered into my table. I view the tables with phpMyAdmin.
I tried going back to basics and following a tutorial, but got the same result. The weird thing is that I can create and delete tables, but not insert data.
The code from the tutorial even reports that 4 rows were inserted. What is preventing the data from being entered into the table when the script reports everything is fine?
cursor = conn.cursor ()
cursor.execute ("DROP TABLE IF EXISTS animal")
cursor.execute ("""
CREATE TABLE animal
(
name CHAR(40),
category CHAR(40)
)
""")
cursor.execute ("""
INSERT INTO animal (name, category)
VALUES
('snake', 'reptile'),
('frog', 'amphibian'),
('tuna', 'fish'),
('racoon', 'mammal')
""")
print "%d rows were inserted" % cursor.rowcount
Add:
conn.commit()
at the bottom of your script.
On a side note, have a look at the following: http://mysql-python.sourceforge.net/MySQLdb.html
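The same effect can be reproduced with the standard-library sqlite3 module, used here as a stand-in for MySQLdb (both follow the Python DB-API, where DML runs inside an implicit transaction that is discarded unless committed):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# First run: insert but never commit. Closing the connection discards
# the pending transaction, so the row is lost.
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE animal (name TEXT, category TEXT)")
conn.commit()  # commit the DDL so the table itself survives
conn.execute("INSERT INTO animal VALUES ('snake', 'reptile')")
conn.close()   # no commit -> the insert is rolled back

conn = sqlite3.connect(path)
lost = conn.execute("SELECT count(*) FROM animal").fetchone()[0]  # 0

# Second run: the same insert followed by commit(); now it persists.
conn.execute("INSERT INTO animal VALUES ('frog', 'amphibian')")
conn.commit()
conn.close()

conn = sqlite3.connect(path)
kept = conn.execute("SELECT count(*) FROM animal").fetchone()[0]  # 1
```

This also explains why cursor.rowcount reported 4 rows in the original script: the inserts did happen inside the transaction; they were simply rolled back when the script ended without a commit.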