Can I add a string type parameter to a SQL statement without quotes? - c++

I have a C++Builder SQL Statement with a parameter like
UnicodeString SQLStatement = "INSERT INTO TABLENAME (DATETIME) VALUES (:dateTime)";
Can I add the parameter without quotes?
Usually I'd use
TADOQuery *query = new TADOQuery(NULL);
query->Parameters->CreateParameter("dateTime", ftString, pdInput, 255, DateTimeToStr(Now()));
which will eventually produce the SQL string
INSERT INTO TABLENAME (DATETIME) VALUES ('2022-01-14 14:33:00.000')
but because this is a legacy project (of course, it always is) and I have to maintain different database technologies, I need to be able to inject database-specific date/time conversion methods, so that the end result would look like
INSERT INTO TABLENAME (DATETIME) VALUES (to_date('2022-01-14 14:33:00.000', 'dd.mm.yyyy hh24:mi:ss'))
If I try injecting this via my 'usual' method (because I don't think I can inject a second parameter into this one) it'd look like:
TADOQuery *query = new TADOQuery(NULL);
query->Parameters->CreateParameter("dateTime", ftInteger, pdInput, 255, "to_date('" + DateTimeToStr(Now()) + "', 'dd.mm.yyyy hh24:mi:ss')");
but of course the result would look like:
INSERT INTO TABLENAME (DATETIME) VALUES ('to_date('2022-01-14 14:33:00.000', 'dd.mm.yyyy hh24:mi:ss')')
and therefore be invalid
Or is there another way to do this more cleanly and elegantly? Although I'd settle with 'working'.
I can work around this by preparing two SQL statements and switching the statement when another database technology is used, but I just wanted to check whether there is another way.

Why are you defining the parameter's DataType as ftInteger when your input value is clearly NOT an integer? You should be defining the DataType as ftDateTime instead, and then assigning Now() as-is to the parameter's Value. Let the database engine decide how it wants to format the date/time value in the final SQL per its own rules.
query->Parameters->CreateParameter("dateTime", ftDateTime, pdInput, 0, Now());
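For completeness, here is a minimal sketch of how that might look end to end. The connection object is a placeholder; the table and column names are the ones from the question:
#include <ADODB.hpp>
#include <DB.hpp>
#include <SysUtils.hpp>

// connection is assumed to be an already configured TADOConnection*.
TADOQuery *query = new TADOQuery(NULL);
query->Connection = connection;
query->SQL->Text = "INSERT INTO TABLENAME (DATETIME) VALUES (:dateTime)";
// Pass the TDateTime value itself and let the provider format the literal.
query->Parameters->CreateParameter("dateTime", ftDateTime, pdInput, 0, Now());
query->ExecSQL();
delete query;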

Related

libpqxx array data with prepared statements

In PostgreSQL we can store array data in columns. Let's say I have a PostgreSQL table like this:
CREATE TABLE sal_emp (
    name text,
    pay_by_quarter integer[],
    schedule text[][]
);
Now I am using libpqxx to insert values into this table through my C++ application. This library supports prepared statements that can take parameters. The query text can contain $1, $2 etc. as placeholders for parameter values that you can provide when you invoke the prepared statement. Here is the documentation for prepared statements.
So let's say I prepared a statement to insert values into this table like so:
pqxx::connection con(dbConnectionString);
con.prepare("sal_empInsert", "INSERT INTO sal_emp VALUES($1, $2, $3)");
For simple parameters like the name, I am pretty sure I can pass a string value as the parameter. But I am confused about what to pass for the 2nd and 3rd parameters. Does it need to be a simple array (a pointer to the first element), will it work if it is a C++ vector, or does this library have a class for this that I need to initialize with my array first and then pass in as the parameter?
A std::vector can be used to insert an array through prepared statements in PostgreSQL. So in the given example you can insert the pay_by_quarter and schedule values in libpqxx as follows:
con.prepare("insert", "INSERT INTO sal_emp VALUES ($1, $2, $3)");
pqxx::work txn(con);
// For example...
std::string name = "John";
std::vector<int> pay_by_quarter = {1000, 2000, 1500};
std::vector<std::vector<std::string>> schedule = {{"ABC", "123"}, {"PQR", "678"}};
txn.exec_prepared("insert", name, pay_by_quarter, schedule);
txn.commit();

How to build SQL from RelBuilder without schema info?

I want to generate SQL using Calcite, like this:
org.apache.calcite.rel.rel2sql.RelToSqlConverterTest#testAntiJoin
final FrameworkConfig frameworkConfig = Frameworks.newConfigBuilder()
    .parserConfig(SqlParser.Config.DEFAULT)
    // .defaultSchema(schema)
    .build();
final RelBuilder builder = RelBuilder.create(frameworkConfig);
final RelNode root = builder
    .scan("DEPT")
    .scan("EMP")
    .join(
        JoinRelType.ANTI, builder.equals(
            builder.field(2, 1, "DEPTNO"),
            builder.field(2, 0, "DEPTNO")))
    .project(builder.field("DEPTNO"))
    .build();
But if I don't set the schema, a "table not found" exception will be thrown.
Is there any way to generate SQL without schema info?
The aim is to generate SQL. Just generate SQL.
Reply to the first answer (posted here because of the comment character length limit):
My scenario is business intelligence. There can be many data sources, such as Hive, ClickHouse, and so on, and there are many tables. I also need to dynamically add or remove data sources, so I don't think it's appropriate for Calcite to be aware of all the data sources. I have two more questions:
How do I create the 'free-standing' table objects you mentioned?
Can SqlNode be used to do this? For example:
SqlIdentifier from = new SqlIdentifier("testTable", SqlParserPos.QUOTED_ZERO);
SqlNode[] nodes = new SqlNode[2];
nodes[0] = new SqlIdentifier("a", SqlParserPos.QUOTED_ZERO);
nodes[1] = SqlLiteral.createExactNumeric("1", SqlParserPos.QUOTED_ZERO);
SqlNode where = new SqlBasicCall(SqlStdOperatorTable.EQUALS, nodes, SqlParserPos.QUOTED_ZERO);
SqlIdentifier selectNode = new SqlIdentifier("a", SqlParserPos.QUOTED_ZERO);
SqlSelect select = new SqlSelect(SqlParserPos.QUOTED_ZERO, SqlNodeList.EMPTY,
    new SqlNodeList(Arrays.asList(selectNode), SqlParserPos.QUOTED_ZERO),
    from, where,
    null, null, null, null, null, null, null);
SqlString sqlString = select.toSqlString(CalciteSqlDialect.DEFAULT);
System.out.println(sqlString.getSql());
Only one method in RelBuilder uses a RelOptSchema: scan(String...) (and its variant scan(Iterable<String>)). That makes sense when you consider that the purpose of RelOptSchema is to act as a directory service, converting a table name (or table path, consisting of a table name qualified with catalog and/or schema names) into a RelOptTable object.
If you have 'free-standing' table objects that are not accessed via a namespace, then you can create TableScan relational expressions directly and then call RelBuilder.push(RelNode) to add them to the stack. Since you never call RelBuilder.scan, you can create the RelBuilder with a null RelOptSchema.
But in your case, it looks as if you don't have free-standing table objects. That's a problem for Calcite, because it needs to know that your "EMP" table has a field called "DEPTNO" and it has type INTEGER.
So I suggest that you create a 'virtual' schema that contains type information but is not necessarily backed by real tables. The MockCatalogReader class, used in several of Calcite's tests, is a good example to follow.

Ignite SqlFieldsQuery specific keys

Using the Ignite C++ API, I'm trying to find a way to perform an SqlFieldsQuery to select a specific field, but I would like to do this for a set of keys.
One way to do this, is to do the SqlFieldsQuery like this,
SqlFieldsQuery("select field from Table where _key in (" + keys_string + ")")
where keys_string is the list of keys as a comma-separated string.
Unfortunately, this takes a very long time compared to just doing cache.GetAll(keys) for the same set of keys.
Is there an alternative, faster way of getting a specific field for a set of keys from an ignite cache?
EDIT:
After reading the answers, I tried changing the query to:
auto query = SqlFieldsQuery("select field from Table t join table(_key bigint = ?) i on t._key = i._key")
I then add the arguments from my set of keys like this:
for(const auto& key: keys) query.AddArgument(key);
but when running the query, I get the error:
Failed to bind parameter [idx=2, obj=159957, stmt=prep0: select field from Table t join table(_key bigint = ?) i on t._key = i._key {1: 159956}]
Clearly, this doesn't work because there is only one '?'.
So I then tried to pass a vector<int64_t> of the keys, but I got an error which basically says that std::vector<int64_t> does not specialize the Ignite BinaryType. So I did this as defined here. When calling e.g.
writer.WriteInt64Array("data", data.data(), data.size())
I gave the field an arbitrary name, "data". This then results in the error:
Failed to run map query remotely.
Unfortunately, the C++ API is neither well documented nor complete, so I'm wondering whether I'm missing something or whether the API does not allow passing an array as an argument to the SqlFieldsQuery.
A query that uses an IN clause doesn't always use indexes properly. The workaround for this is described here: https://apacheignite.readme.io/docs/sql-performance-and-debugging#sql-performance-and-usability-considerations
Also, if you have the option to do GetAll instead and look up by key directly, then you should use it. It will likely be more efficient anyway.
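For reference, a rough sketch of what the key-based lookup might look like in the C++ API. The cache name and value type here are placeholders, and the value class is assumed to already have a BinaryType specialization as described in the question:
#include <set>
#include <map>
#include <ignite/ignite.h>
#include <ignite/cache/cache.h>

// Hypothetical value type for the rows of "Table"; assumed to have a
// BinaryType specialization registered already.
struct TableRow
{
    int64_t field;
};

void FetchByKeys(ignite::Ignite& node, const std::set<int64_t>& keys)
{
    // "Table" is a placeholder cache name.
    ignite::cache::Cache<int64_t, TableRow> cache =
        node.GetCache<int64_t, TableRow>("Table");

    // Direct key lookup; this bypasses the SQL engine entirely.
    std::map<int64_t, TableRow> rows = cache.GetAll(keys);

    for (const auto& entry : rows)
    {
        // entry.second.field is the column previously selected via SQL.
    }
}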
A query with the "IN" operator will not always use indexes. As a workaround, you can rewrite the query in the following way:
select field from Table t join table(id bigint = ?) i on t.id = i.id
and then invoke it like:
new SqlFieldsQuery(
    "select field from Table t join table(id bigint = ?) i on t.id = i.id")
    .setArgs(new Object[]{ new Integer[] {2, 3, 4} });

array in prepared statement inside where in clause

prep_stmt = con->prepareStatement("SELECT * FROM table WHERE customers in ( ? ) and alive = ?");
prep_stmt->setString(1,customer_string);
prep_stmt->setInt(2,1);
res = prep_stmt->executeQuery();
Here customer_string is "12,1,34,67,45,14".
When I pass it as a string, it always returns a single row; it takes only the first value, 12.
The SQL statement that gets prepared is:
SELECT * FROM table WHERE customers in ( "12,1,34,67,45,14" ) and alive = 1
but I want the SQL statement to be prepared as:
SELECT * FROM table WHERE customers in (12,1,34,67,45,14) and alive = 1
What is the easiest way to achieve the same in C++?
I am assuming you are using MySQL Connector/C++. Unfortunately, it seems that it is not possible to pass an array as a parameter of a prepared statement using this API:
Connector/C++ does not support the following JDBC standard data types: ARRAY, BLOB, CLOB, DISTINCT, FLOAT, OTHER, REF, STRUCT.
https://dev.mysql.com/doc/connector-cpp/en/connector-cpp-usage-notes.html
You can place the values into the query directly by concatenating strings. Be VERY careful not to introduce an SQL injection vulnerability. Alternatively, use some other API.
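If you want to stay with a prepared statement, one common pattern is to build the placeholder list from the number of values and bind each value individually, so only placeholders (never user data) end up in the SQL text. A rough sketch, assuming the JDBC-style Connector/C++ API and numeric customer IDs:
#include <cppconn/connection.h>
#include <cppconn/prepared_statement.h>
#include <cppconn/resultset.h>
#include <memory>
#include <string>
#include <vector>

// Assumes customers is non-empty; "table" is the table name from the question.
std::unique_ptr<sql::ResultSet> findCustomers(sql::Connection& con,
                                              const std::vector<int>& customers)
{
    // Build "?, ?, ?" with one placeholder per value.
    std::string placeholders;
    for (std::size_t i = 0; i < customers.size(); ++i)
        placeholders += (i == 0 ? "?" : ", ?");

    std::string sql = "SELECT * FROM table WHERE customers IN ("
                      + placeholders + ") AND alive = ?";

    std::unique_ptr<sql::PreparedStatement> stmt(con.prepareStatement(sql));

    // Bind each customer ID, then the alive flag as the last parameter.
    int idx = 1;
    for (int customer : customers)
        stmt->setInt(idx++, customer);
    stmt->setInt(idx, 1);

    return std::unique_ptr<sql::ResultSet>(stmt->executeQuery());
}
The statement text changes with the number of values, so it is re-prepared for each distinct list size, but the values themselves are never concatenated into the SQL.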

CRM 2015 Microsoft.Xrm.Sdk: Unexpected results in second CreateQuery call

Microsoft.Xrm.Sdk, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
var ctx = new ServiceContext(...);
var result1 = (from f in ctx.CreateQuery<aEntity>()
               where f.field1 == "x"
               select new { f.Id, f.field2 }).ToList();
var result2 = (from f in ctx.CreateQuery<aEntity>()
               where f.field1 == "x"
               select f.field1).First();
result2 returns null! After adding f.field1 to the select clause in the first query, result2 returns "x". It looks like an internal column set is created and reused in the context of the second call. Looking at the SQL Server trace of both calls, we see the expected select-from queries based on the C# code, but the second result that is returned is not expected. Can someone explain this behaviour?
To me it looks like caching functionality on the CRM side, because as you mentioned the SQL queries were correct. I had the same issue in my applications when I tried to make two consecutive queries for the same entity record but selected two different fields; the second request always returned NULL. Here are the workarounds that I use when working with the ServiceContext:
The simple one: always retrieve the entity with all fields (without a select statement), whether I want them all or not.
Or create a service context with caching disabled.
Right now I try to use the ServiceContext as little as possible, replacing it with QueryBase expressions (even though I love using LINQ).
Keep in mind that the CRM LINQ provider implements only a subset of SQL.
Could you try something like this?
var result1 = (from f in ctx.CreateQuery<aEntity>()
               where f.field1 == "x"
               select new CustomClass {
                   Id = f.aEntityId,
                   Field2 = f.field2
               }).ToList();
You can have complex queries if you want, but you need to know what can be done and what can't be done.
The Id property is not always returned by the driver, but the entity's primary key is, which is normally the entity logical name + "Id".