Using the Ignite C++ API, I'm trying to find a way to perform an SqlFieldsQuery to select a specific field, but I would like to do this for a set of keys.
One way to do this is to issue the SqlFieldsQuery like this,
SqlFieldsQuery("select field from Table where _key in (" + keys_string + ")")
where keys_string is the list of keys as a comma-separated string.
Unfortunately, this takes a very long time compared to just doing cache.GetAll(keys) for the same set of keys.
Is there an alternative, faster way of getting a specific field for a set of keys from an ignite cache?
EDIT:
After reading the answers, I tried changing the query to:
auto query = SqlFieldsQuery("select field from Table t join table(_key bigint = ?) i on t._key = i._key")
I then add the arguments from my set of keys like this:
for(const auto& key: keys) query.AddArgument(key);
but when running the query, I get the error:
Failed to bind parameter [idx=2, obj=159957, stmt=prep0: select field from Table t join table(_key bigint = ?) i on t._key = i._key {1: 159956}]
Clearly, this doesn't work because there is only one '?'.
So I then tried to pass a std::vector<int64_t> of the keys, but I got an error which basically says that std::vector<int64_t> does not specialize the Ignite BinaryType. So I did this as defined here. When calling e.g.
writer.WriteInt64Array("data", data.data(), data.size())
I gave the field an arbitrary name, "data". This then results in the error:
Failed to run map query remotely.
Unfortunately, the C++ API is neither well documented nor complete, so I'm wondering whether I'm missing something or whether the API simply does not allow passing an array as an argument to the SqlFieldsQuery.
A query that uses an IN clause doesn't always use indexes properly. The workaround for this is described here: https://apacheignite.readme.io/docs/sql-performance-and-debugging#sql-performance-and-usability-considerations
Also, if you have the option to do GetAll instead and look up by key directly, then you should use it. It will likely be more effective anyway.
A query with the "IN" operator will not always use indexes. As a workaround, you can rewrite the query in the following way:
select field from Table t join table(id bigint = ?) i on t.id = i.id
and then invoke it like:
new SqlFieldsQuery(
    "select field from Table t join table(id bigint = ?) i on t.id = i.id")
    .setArgs(new Object[]{ new Integer[] {2, 3, 4} });
I have a C++Builder SQL Statement with a parameter like
UnicodeString SQLStatement = "INSERT INTO TABLENAME (DATETIME) VALUES (:dateTime)"
Can I add the parameter without quotes?
Usually I'd use
TADOQuery *query = new TADOQuery(NULL);
query->Parameters->CreateParameter("dateTime", ftString, pdInput, 255, DateTimeToStr(Now()));
which will eventually produce the SQL String
INSERT INTO TABLENAME (DATETIME) VALUES ('2022-01-14 14:33:00.000')
but because this is a legacy project (of course, it always is) and I have to maintain different database technologies, I need to be able to inject database-specific date-time conversion methods, so that the end result would look like
INSERT INTO TABLENAME (DATETIME) VALUES (to_date('2022-01-14 14:33:00.000', 'dd.mm.yyyy hh24:mi:ss'))
If I try injecting this via my 'usual' method (because I don't think I can inject a second parameter into this one) it'd look like:
TADOQuery *query = new TADOQuery(NULL);
query->Parameters->CreateParameter("dateTime", ftInteger, pdInput, 255, "to_date('" + DateTimeToStr(Now()) + "', 'dd.mm.yyyy hh24:mi:ss')");
but of course the result would look like:
INSERT INTO TABLENAME (DATETIME) VALUES ('to_date('2022-01-14 14:33:00.000', 'dd.mm.yyyy hh24:mi:ss')'))
and therefore be invalid
Or is there another way to do this more cleanly and elegantly? Although I'd settle for 'working'.
I can work around this by preparing two SQL statements and switching between them when another database technology is used, but I just wanted to check if there is another way.
Why are you defining the parameter's DataType as ftInteger when your input value is clearly NOT an integer? You should be defining the DataType as ftDateTime instead, and then assigning Now() as-is to the parameter's Value. Let the database engine decide how it wants to format the date/time value in the final SQL per its own rules.
query->Parameters->CreateParameter("dateTime", ftDateTime, pdInput, 0, Now());
I am trying to insert into a database (Oracle) in Python with cx_Oracle. I need to select from one table and insert into another table.
insert_select_string = "INSERT INTO wf_measure_details(PARENT_JOB_ID, STAGE_JOB_ID, MEASURE_VALS, STEP_LEVEL, OOZIE_JOB_ID, CREATE_TIME_TS) \
select PARENT_JOB_ID, STAGE_JOB_ID, MEASURE_VALS, STEP_LEVEL, OOZIE_JOB_ID, CREATE_TIME_TS from wf_measure_details_stag where oozie_job_id = '{0}'".format(self.DAG_id)
conn.executemany(insert_select_string)
conn.commit()
insert_count = conn.rowcount
But I am getting the error below. I do not have a parameters argument to pass, since the data comes from the select query.
Required argument 'parameters' (pos 2) not found
Please suggest how to solve this
As mentioned by Chris in the comments to your question, you want to use cursor.execute() instead of cursor.executemany(). You also want to use bind variables instead of interpolated parameters in order to improve performance and reduce security risks. Take a look at the documentation. In your case you would want something like this (untested):
cursor.execute("""
INSERT INTO wf_measure_details(PARENT_JOB_ID, STAGE_JOB_ID,
MEASURE_VALS, STEP_LEVEL, OOZIE_JOB_ID, CREATE_TIME_TS)
select PARENT_JOB_ID, STAGE_JOB_ID, MEASURE_VALS, STEP_LEVEL,
OOZIE_JOB_ID, CREATE_TIME_TS
from wf_measure_details_stag
where oozie_job_id = :id""",
id=self.DAG_id)
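If you also want the row count from your original code, note that it lives on the cursor, and the insert still needs a commit; a small follow-up sketch (assuming connection is the cx_Oracle connection the cursor came from):
insert_count = cursor.rowcount   # rows inserted by the INSERT ... SELECT
connection.commit()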
I generate a list of ID numbers. I want to execute an insert statement that grabs all records from one table where the ID value is in my list and insert those records into another table.
Instead of running through multiple execute statements (as I know is possible), I found this cx_Oracle function, which supposedly can execute everything with a single statement and a list parameter. (It also avoids the clunky formatting of the SQL statement before passing in the parameters.) But I think I need to alter my list before passing it in as a parameter. I'm just not sure how.
I referenced this web page:
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html
ids = getIDs()
print(ids)
[('12345',),('24567',),('78945',),('65423',)]
sql = """insert into scheme.newtable
select id, data1, data2, data3
from scheme.oldtable
where id in (%s)"""
cursor.prepare(sql)
cursor.executemany(None, ids)
I expected the SQL statement to execute as follows:
Insert into scheme.newtable
select id, data1, data2, data3 from scheme.oldtable where id in ('12345','24567','78945','65423')
Instead I get the following error:
ORA-01036: illegal variable name/number
Edit:
I found this Stack Overflow question: How can I do a batch insert into an Oracle database using Python?
I updated my code to prepare the statement beforehand and changed the list items to tuples, and I'm still getting the same error.
You use executemany() for batch DML, e.g. when you want to insert a large number of values into a table as an efficient equivalent of running multiple insert statements. There are cx_Oracle examples discussed in https://blogs.oracle.com/opal/efficient-and-scalable-batch-statement-execution-in-python-cx_oracle
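For contrast, here is a minimal sketch of the kind of batch DML that executemany() is meant for (the connection details and demo_table are placeholders, not taken from your code):
import cx_Oracle

connection = cx_Oracle.connect("user", "password", "host/service")  # placeholder credentials
cursor = connection.cursor()

# One INSERT statement, executed once for the whole list of rows.
rows = [("12345", "a"), ("24567", "b"), ("78945", "c")]
cursor.executemany("insert into demo_table (id, data1) values (:1, :2)", rows)
connection.commit()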
However what you are doing with
insert into scheme.newtable
select id, data1, data2, data3
from scheme.oldtable
where id in (%s)
is a different thing - you are trying to execute one INSERT statement using multiple values in an IN clause. You would use a normal execute() for this.
Since Oracle keeps bind data distinct from SQL, you can't pass multiple values to a single bind parameter, because the data is treated as a single SQL entity, not a list of values. You could use the %s string substitution syntax you have, but that is open to SQL injection attacks.
There are various generic techniques that are common to Oracle language interfaces, see https://oracle.github.io/node-oracledb/doc/api.html#sqlwherein for solutions that you can rewrite to Python syntax.
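As a rough Python sketch of one of those techniques (assuming an open cursor and connection, and that id is a string column as in your sample data), you can generate one named bind variable per value in the list:
ids = ['12345', '24567', '78945', '65423']

# Build ":id0, :id1, ..." so each value gets its own bind variable.
bind_names = ", ".join(":id%d" % i for i in range(len(ids)))
sql = ("insert into scheme.newtable "
       "select id, data1, data2, data3 from scheme.oldtable "
       "where id in (%s)" % bind_names)
cursor.execute(sql, {"id%d" % i: v for i, v in enumerate(ids)})
connection.commit()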
Using a temporary table to save the ids (batch insert):
cursor.prepare('insert into temp_table values (:1)')
cursor.executemany(None, ids)   # ids is already a list of one-element tuples
Then insert the selected rows into the new table:
sql = ("insert into scheme.newtable "
       "select o.id, o.data1, o.data2, o.data3 "
       "from scheme.oldtable o inner join temp_table t on o.id = t.id")
cursor.execute(sql)
The script to create the temporary table in Oracle:
CREATE GLOBAL TEMPORARY TABLE temp_table
(
    ID number
);
commit;
I hope this is useful.
I have Tags as the partition key in my table, and when I try to query it I get an AttributeError.
Below is my code:
kb_table = boto3.resource('dynamodb').Table('table_name')
result = kb_table.query(
KeyConditionExpression=Key('Tags').contains('search term')
)
return result['Items']
Error:
"errorMessage": "'Key' object has no attribute 'contains'"
Basically, I want to search through the table for items where that field contains the search term. I have achieved this using scan, but I have read everywhere that we should not use that.
result = kb_table.scan(
FilterExpression="contains (Tags, :titleVal)",
ExpressionAttributeValues={ ":titleVal": "search term" }
)
So I changed my partition key to Tags, along with a sort key, so that I can achieve this using query, but now I am getting this error.
Any idea how to get this working?
In order to use Query you must specify one partition to access; you cannot wildcard a partition or specify multiple keys.
KeyConditionExpression: The condition that specifies the key value(s) for items to be retrieved by the Query action. The condition must perform an equality test on a single partition key value.
Assuming you want to search the whole table for tags, a scan is the most appropriate approach.
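If you do stay with a scan, keep in mind that it returns one page of results at a time; below is a minimal sketch (using the table and attribute names from your code, and the resource-level condition helpers rather than the string form) that follows LastEvaluatedKey so the whole table is covered:
import boto3
from boto3.dynamodb.conditions import Attr

kb_table = boto3.resource('dynamodb').Table('table_name')

# Scan the whole table, following LastEvaluatedKey until every page has been read.
items = []
scan_kwargs = {'FilterExpression': Attr('Tags').contains('search term')}
while True:
    response = kb_table.scan(**scan_kwargs)
    items.extend(response['Items'])
    if 'LastEvaluatedKey' not in response:
        break
    scan_kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']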
EDIT: You can use Query with the exact search term, but I'm guessing that is not what you want.
kb_table = boto3.resource('dynamodb').Table('table_name')
result = kb_table.query(
KeyConditionExpression=Key('Tags').eq('search term')
)
return result['Items']
Without saving the SHA1 digest string in the table directly, is it possible to format the column in the select statement?
For example (hope you know what I mean):
#item = Item.where(Digest::SHA1.hexdigest id.to_s:'356a192b7913b04c54574d18c28d46e6395428ab')
No, not the way you want it. The hexdigest method you're using won't be available at the database level. You could use database-specific functions though.
For example:
Item.where("LOWER(name) = ?", entered_name.downcase)
The LOWER() function will be available to the database so it can pass the name column to it.
For your case, I can suggest two solutions:
1. Obviously, store the encrypted field in the table, and then match:
key = '356a192b7913b04c54574d18c28d46e6395428ab'
Item.where(encrypted_id: key)
2. Iterate over all column values (ID, in your case) and find the one that matches:
all_item_ids = Item.pluck("CAST(id AS TEXT)")
item_id = all_item_ids.find{ |val| Digest::SHA1.hexdigest(val) == key }
Then you could use Item.find(item_id) to get the item or Item.where(id: item_id) to get an ActiveRecord::Relation object.