How to create a StreamableTable which is also a ScannableTable (Apache Calcite)?

I am looking to implement an org.apache.calcite.schema.Table which can be used as a stream as well as a table.
I was going through the Calcite streaming documentation, which mentions an example of an Orders table that is both a stream and a table. It also states that both of the following queries are valid against Orders:
SELECT STREAM * FROM Orders;
and
SELECT * FROM Orders;
I am trying to implement a class whose instances are such tables. I implemented both the StreamableTable and the ScannableTable interface, but I am still not able to get it to work both ways. When I try to execute a non-stream query (like SELECT * FROM TEST_TABLE), I get the following error:
Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, column 15 to line 1, column 38: Cannot convert stream 'TEST_TABLE' to relation
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:883)
at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:868)
at org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5043)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateModality(SqlValidatorImpl.java:3739)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateModality(SqlValidatorImpl.java:3664)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1048)
at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:1016)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:724)
at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:567)
at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:242)
at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:208)
at org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:642)
at org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:508)
at org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:478)
at org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:231)
at org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:556)
at org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:675)
at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
... 3 more
Queries like SELECT STREAM * FROM TEST_TABLE work as expected.
Can someone help me create such a table?
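For reference, here is a minimal sketch of the kind of class I am trying to write (the class name, column names, and sample rows are placeholders of my own):

import org.apache.calcite.DataContext;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.linq4j.Linq4j;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.ScannableTable;
import org.apache.calcite.schema.StreamableTable;
import org.apache.calcite.schema.Table;
import org.apache.calcite.schema.impl.AbstractTable;
import org.apache.calcite.sql.type.SqlTypeName;

/** A table intended to be queryable both as a relation and as a stream. */
public class TestTable extends AbstractTable
        implements ScannableTable, StreamableTable {

    private final Object[][] rows = {
        {1, "paint", 10},
        {2, "paper", 5},
    };

    @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
        return typeFactory.builder()
            .add("ID", SqlTypeName.INTEGER)
            .add("PRODUCT", SqlTypeName.VARCHAR)
            .add("UNITS", SqlTypeName.INTEGER)
            .build();
    }

    // ScannableTable: backs SELECT * FROM TEST_TABLE
    @Override public Enumerable<Object[]> scan(DataContext root) {
        return Linq4j.asEnumerable(rows);
    }

    // StreamableTable: backs SELECT STREAM * FROM TEST_TABLE
    @Override public Table stream() {
        return this;
    }
}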

The modality of SELECT * FROM Orders is RELATION, but the table is an instance of StreamableTable, and that is why the above exception is thrown. I changed RelOptTableImpl#supportsModality a bit, as follows:
@Override public boolean supportsModality(SqlModality modality) {
    switch (modality) {
    case STREAM:
        return table instanceof StreamableTable;
    default:
        // A scannable table supports the RELATION modality even if it
        // is also streamable.
        if (table instanceof ScannableTable) {
            return true;
        }
        return !(table instanceof StreamableTable);
    }
}
With this change, the plans generated for the above SQL were as expected:
Logical:
LogicalProject(ROWTIME=[$0], ID=[$1], PRODUCT=[$2], UNITS=[$3])
  LogicalTableScan(table=[[STREAMS, ORDERS]])
Physical:
EnumerableTableScan(table=[[STREAMS, ORDERS]])
with a result set starting with:
ROWTIME=2015-02-15 10:15:00; ID=1; PRODUCT=paint; UNITS=10
ROWTIME=2015-02-15 10:24:15; ID=2; PRODUCT=paper; UNITS=5
You can create a test case for this in StreamTest with the above changes.
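Alternatively, here is a minimal JDBC-level sketch (independent of StreamTest's fixtures) that exercises both modalities. It assumes a table class like the TestTable sketched in the question, and would live inside a test method that declares throws SQLException:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.apache.calcite.jdbc.CalciteConnection;

try (Connection connection = DriverManager.getConnection("jdbc:calcite:")) {
    CalciteConnection calciteConnection =
        connection.unwrap(CalciteConnection.class);
    // Register the table in the root schema under the name TEST_TABLE.
    calciteConnection.getRootSchema().add("TEST_TABLE", new TestTable());
    try (Statement statement = calciteConnection.createStatement()) {
        // Both modalities should now validate and execute.
        statement.executeQuery("SELECT STREAM * FROM TEST_TABLE").close();
        statement.executeQuery("SELECT * FROM TEST_TABLE").close();
    }
}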

Related

"Operation must use an updateable query" in MS Access when the Updated Table is the same as the source

The challenge is to update a table by scanning that same table for information. In this case, I want to find how many entries I received in an Upload dataset that have the same key (effectively duplicate instructions).
I tried the obvious code:
UPDATE Base AS TAR
INNER JOIN (select cdKey, count(*) as ct FROM Base GROUP BY cdKey) AS CHK
ON TAR.cdKey = CHK.cdKey
SET ctReferences = CHK.ct
This resulted in a non-updateable complaint. Some workarounds talked about adding DISTINCTROW, but that made no difference.
I tried creating a view (a "query" in MS Access parlance); same failure.
Then I projected the set (SELECT cdKey, count(*) INTO TEMP FROM Base GROUP BY cdKey), and substituted TEMP for the INNER JOIN which worked.
Conclusion: reflexive updates are also non-updateable.
An initial thought was to embed a sub-select in the update, for example:
UPDATE Base TAR SET TAR.ctReferences = (select count(*) from Base CHK where CHK.cd = TAR.cd)
This also failed.
As this is part of a job I am calling, this SQL (like the other statements) is a string executed via CurrentDb.Execute. I thought maybe I could make this a DLookup, but since cd is a string, I ended up with a gaggle of double- and triple-quoted elements that was too messy to read (and maintain).
The best solution was to write a function so I could avoid any sort of string manipulation. Hence, in a module there is a function:
Public Function PassHelperCtOccurs(ByRef cdX As String) As Long
PassHelperCtOccurs = DLookup("count(*)", "Base", "cd='" & cdX & "'")
End Function
And the call is:
CurrentDb().Execute ("UPDATE Base SET ctOccursCd = PassHelperCtOccurs(cd)")

Spanner setAllowPartialRead(true) usage and purpose

From the official code snippet example of the Spanner Java client:
https://github.com/GoogleCloudPlatform/java-docs-samples/blob/HEAD/spanner/spring-data/src/main/java/com/example/spanner/SpannerTemplateSample.java
I can see the usage of
new SpannerQueryOptions().setAllowPartialRead(true):
@Component
public class SpannerTemplateSample {

    @Autowired
    SpannerTemplate spannerTemplate;

    public void runTemplateExample(Singer singer) {
        // Delete all of the rows in the Singer table.
        this.spannerTemplate.delete(Singer.class, KeySet.all());
        // Insert a singer into the Singers table.
        this.spannerTemplate.insert(singer);
        // Read all of the singers in the Singers table.
        List<Singer> allSingers = this.spannerTemplate
            .query(Singer.class, Statement.of("SELECT * FROM Singers"),
                new SpannerQueryOptions().setAllowPartialRead(true));
    }
}
I didn't find any explanation of it. Can anyone help?
Quoting from the documentation:
Partial read is only possible when using queries. In case the rows returned by a query have fewer columns than the entity they will be mapped to, Spring Data will map the returned columns and leave the rest of the columns as they are.
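In other words, the option matters when a query returns fewer columns than the entity has mapped properties. A minimal sketch (the column names SingerId and FirstName are assumptions about the sample's Singer entity):

// Without setAllowPartialRead(true) this mapping would be rejected,
// because the result rows are missing some of Singer's columns; with
// it, the unmapped properties are simply left untouched.
List<Singer> partial = this.spannerTemplate.query(
    Singer.class,
    Statement.of("SELECT SingerId, FirstName FROM Singers"),
    new SpannerQueryOptions().setAllowPartialRead(true));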

Big Query insertAll method in Java is not reflecting the changes in Table

I am trying to insert data into a Big Query table using the method insertAll.
This is what my insert method code looks like:
public void insertIntoTable(String datasetName, String tableName) {
    TableId tableId = TableId.of(datasetName, tableName);
    String fieldName = "testField";
    Map<String, Object> rowContent = new HashMap<>();
    rowContent.put(fieldName, "testVal");
    InsertAllResponse response = bigquery.insertAll(
        InsertAllRequest.newBuilder(tableId).addRow("rowId", rowContent).build());
    if (response.hasErrors()) {
        for (Map.Entry<Long, List<BigQueryError>> entry : response.getInsertErrors().entrySet()) {
            System.out.println(entry.getValue().toString());
        }
    }
}
Although it does not throw any error, the data is still not getting inserted into my table.
As per their access-control document:
https://cloud.google.com/bigquery/docs/access-control
To use the insertAll method, the user requires the bigquery.tables.updateData permission, which comes with the bigquery.dataEditor role, and I already have that role.
There is no issue with permissions, because another of my methods creates a table, and that table is created successfully in BigQuery.
The tableName and datasetName are also correct; I verified them in debug mode.
Another issue I can think of is a type mismatch, but that's not the case either, because I checked and the type is String. Schema details are attached below.
Can there be any other issue that I may be missing here?
=============================================
Edit:
It has been pointed out to me that streamed data does not show up in the UI; to view the data, you have to query the table.
Now I am facing a new issue.
I executed the above function 6 times, each time with a different value. Basically I was changing only this line -
rowContent.put(fieldName, "testVal");
The first time I executed the method, the inserted value was testVal.
For the other five executions, I modified this line of code -
Execution 1: rowContent.put(fieldName, "testVal1");
Execution 2: rowContent.put(fieldName, "testVal2");
Execution 3: rowContent.put(fieldName, "testVal3");
Execution 4: rowContent.put(fieldName, "testVal4");
Execution 5: rowContent.put(fieldName, "testVal5");
So ideally in my table there should be 6 rows with the values -
testVal
testVal1
testVal2
testVal3
testVal4
testVal5
But I am able to see only two rows when I query my table.
Why is it showing only 2 rows instead of 6?
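One likely cause, offered as a guess from the code above rather than a confirmed diagnosis: addRow("rowId", rowContent) passes the literal string "rowId" as the insert ID on every call, and BigQuery uses insert IDs for best-effort streaming deduplication, so rows inserted close together with the same ID can be silently dropped. A sketch of the same insert with a unique ID per row:

import java.util.UUID;

// Give each row a unique insert ID so BigQuery's best-effort
// deduplication cannot collapse rows from separate executions.
InsertAllResponse response = bigquery.insertAll(
    InsertAllRequest.newBuilder(tableId)
        .addRow(UUID.randomUUID().toString(), rowContent)
        .build());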

Siebel NextRecord method is not moving to the next record

I have found a very weird behaviour in our Siebel 7.8 application. This is part of a business service:
var bo : BusObject;
var bc : BusComp;
try {
    bo = TheApplication().GetBusObject("Service Request");
    bc = bo.GetBusComp("Action");
    bc.InvokeMethod("SetAdminMode", "TRUE");
    bc.SetViewMode(AllView);
    bc.ClearToQuery();
    bc.SetSearchSpec("Status", "='Unscheduled' OR ='Scheduled' OR ='02'");
    bc.ExecuteQuery(ForwardOnly);
    var isRecord = bc.FirstRecord();
    while (isRecord) {
        log("Processing activity '" + bc.GetFieldValue("Id") + "'");
        bc.SetFieldValue("Status", "03");
        bc.WriteRecord();
        isRecord = bc.NextRecord();
    }
} catch (e) {
    log("Exception: " + e.message);
} finally {
    bc = null;
    bo = null;
}
In the log file, we get something like this:
Processing activity '1-23456'
Processing activity '1-56789'
Processing activity '1-ABCDE'
Processing activity '1-ABCDE'
Exception: The selected record has been modified by another user since it was retrieved.
Please continue. (SBL-DAT-00523)
So, basically, it processes a few records from the BC and then, apparently at random, it "gets stuck". It's like the NextRecord call isn't executed, and instead it processes the same record again.
If I remove the SetFieldValue and WriteRecord to avoid the SBL-DAT-00523 error, it still shows some activities twice (only twice) in the log file.
What could be causing this behaviour?
It looks like business component "Action" has one or more joins that can return multiple records for one base record, and you are querying the BC in ForwardOnly mode.
Assume, for example, that table S_EVT_ACT has one record with a custom column X_PHONE_NUMBER = '12345678', and table S_CONTACT has two records whose MAIN_PH_NUM column equals the same value '12345678'. When you join these two tables with SQL like this:
SELECT T1.* FROM SIEBEL.S_EVT_ACT T1, SIEBEL.S_CONTACT T2
WHERE T1.X_PHONE_NUMBER = T2.MAIN_PH_NUM
you will get two records with the same T1.ROW_ID.
Exactly the same thing happens when you use the ForwardOnly cursor mode in eScript: Siebel simply fetches everything the database returns. That is why it is a big mistake to iterate over a business component queried in ForwardOnly mode. You should use ForwardBackward mode instead, because then Siebel excludes duplicate records (this also holds for normal UI queries, which are likewise executed in ForwardBackward mode).
Actually, this is the most important and least known difference between the ForwardOnly and ForwardBackward cursor modes.
Try changing the query mode from
bc.ExecuteQuery(ForwardOnly);
to
bc.ExecuteQuery(ForwardBackward);

Qt/SQL - Get column type and name from table without record

Using Qt, I have to connect to a database and list the column types and names from a table. I have two constraints:
1. The database type must not be a problem (this has to work on PostgreSQL, SQL Server, MySQL, ...).
2. The solutions I found on the internet work only if there are one or more records in the table; I have to get the column types and names with or without records in the table.
I searched a lot on the internet, but I didn't find any solution.
I am looking for an answer in Qt/C++ or a query that can do this.
Thanks for your help!
QSqlDriver::record() takes a table name and returns a QSqlRecord, from which you can fetch the fields using QSqlRecord::field().
So, given a QSqlDatabase db:
fetch the driver with db.driver(),
fetch the list of tables with db.tables(),
fetch a QSqlRecord for each table from driver->record(tableName), and
fetch the number of fields with record.count() and the name and type with record.field(x).
Following the previous answers, I made the implementation below. It works well; I hope it helps you.
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE", "demo_conn"); // create a db connection
    QString strDBPath = "db_path";
    db.setDatabaseName(strDBPath); // set the db file
    db.open(); // the connection must be open before the schema can be read
    QSqlRecord record = db.record("table_name"); // get the record of the given table
    int n = record.count();
    for (int i = 0; i < n; i++)
    {
        QString strField = record.fieldName(i); // column name; record.field(i).type() gives its type
    }
}
QSqlDatabase::removeDatabase("demo_conn"); // remove the db connection (after db has gone out of scope)
Getting column names and types is a database-specific operation. But you can have a single C++ function that uses the correct SQL query according to the QSqlDriver you currently use:
QStringList getColumnNames(QSqlDatabase& db)
{
    QString sql;
    if (db.driverName().contains("QOCI", Qt::CaseInsensitive))
    {
        sql = ...
    }
    else if (db.driverName().contains("QPSQL", Qt::CaseInsensitive))
    {
        sql = ...
    }
    else
    {
        qCritical() << "unsupported db";
        return QStringList();
    }
    QSqlQuery res = db.exec(sql);
    ...
    // getting names from db-specific sql query results
}
I don't know of any existing mechanism in Qt that allows this (though one might exist, maybe via QSqlTableModel). If no one else knows of such a thing, I would do the following:
Create data classes to store the information you require, e.g. a class TableInfo which stores a list of ColumnInfo objects that have a name and a type.
Create an interface, e.g. ITableInfoReader, which has a pure virtual TableInfo* retrieveTableInfo( const QString& tableName ) method.
Create one subclass of ITableInfoReader for every database you want to support. This allows queries which are only supported on one or a subset of all databases.
Create a TableInfoReaderFactory class which allows creation of the appropriate ITableInfoReader subclass depending on the database used.
This allows you to keep your main code independent of the database, by using only the ITableInfoReader interface.
Example:
Input:
database: the QSqlDatabase used for executing queries
tableName: the name of the table to retrieve information about
ITableInfoReader* tableInfoReader =
    _tableInfoReaderFactory.createTableReader( database );
TableInfo* tableInfo = tableInfoReader->retrieveTableInfo( tableName );
foreach( ColumnInfo* columnInfo, tableInfo->columnInfos() )
{
    qDebug() << columnInfo->name() << columnInfo->type();
}
I found the solution. You just have to call the record() function of QSqlDatabase. You get an empty record, but you can still read the column types and names.