I am new to Apache Calcite and am able to fetch data from the DB using relational algebra, but I am not able to perform INSERT, UPDATE, DELETE, or DROP operations. Sample code would be very helpful.
As far as I am aware, RelBuilder cannot build a RelNode for INSERT, UPDATE, DELETE, or DROP operations.
For DML (INSERT, UPDATE, DELETE, MERGE), the equivalent relational algebra uses TableModify, so you can call LogicalTableModify.create to build a TableModify node, using RelBuilder to build the RelNode that serves as its input, as follows:
// Build the input: scan the table and project the columns involved.
RelNode input = builder.scan("envliven")
    .project(builder.field("Name"))
    .build();
// table is a RelOptTable, catalogReader a Prepare.CatalogReader.
TableModify modifyNode = LogicalTableModify.create(table,
    catalogReader, input,
    TableModify.Operation.UPDATE, updateColumnList,
    sourceExpressionList, flattened);
For DDL (DROP, CREATE, ALTER), there is no corresponding relational algebra, but you can execute the SqlNode directly, as CalcitePrepareImpl.executeDdl does.
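As a sketch, parsing a DDL string into a SqlNode might look like this (this assumes the calcite-server module, which provides SqlDdlParserImpl; the parser config API differs slightly between Calcite versions):
SqlParser.Config config = SqlParser.config()
    .withParserFactory(SqlDdlParserImpl.FACTORY);
SqlParser parser = SqlParser.create("DROP TABLE nation", config);
SqlNode ddl = parser.parseStmt();
// ddl is now a SqlDrop; execute it the way CalcitePrepareImpl.executeDdl
// dispatches DDL nodes to a DdlExecutor.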
Back to DML. For example, given this UPDATE statement:
update nation set n_nationkey = 1 where n_nationkey = 2;
the corresponding RelNode tree is:
LogicalTableModify(table=[[test, nation]], operation=[UPDATE], updateColumnList=[[n_nationkey]], sourceExpressionList=[[1]], flattened=[false])
  LogicalProject(n_nationkey=[$0], n_name=[$1], n_regionkey=[$2], n_comment=[$3], EXPR$0=[1])
    LogicalFilter(condition=[=($0,2)])
      LogicalTableScan(table=[[test, nation]])
For UPDATE, updateColumnList holds the columns being updated, and sourceExpressionList holds the corresponding new-value expressions.
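A minimal sketch of building those two lists for "set n_nationkey = 1" (assuming you have a RexBuilder in scope; ImmutableList is the Guava class bundled with Calcite):
List<String> updateColumnList = ImmutableList.of("n_nationkey");
List<RexNode> sourceExpressionList =
    ImmutableList.of(rexBuilder.makeExactLiteral(BigDecimal.ONE));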
For INSERT:
insert into nation(n_nationkey, n_name) values(1, 'test');
the corresponding RelNode tree is:
LogicalTableModify(table=[[test, nation]], operation=[INSERT], flattened=[false])
  LogicalProject(n_nationkey=[$0], n_name=[$1], n_regionkey=[null], n_comment=[null])
    LogicalValues(tuples=[[{ 1, _UTF-16'test ' }]])
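A hedged sketch of building that INSERT tree yourself (table and catalogReader are assumed to come from your catalog; for INSERT the update and source lists are simply null, and a project would pad the missing columns with nulls as in the plan above):
RelNode input = builder
    .values(new String[] {"n_nationkey", "n_name"}, 1, "test")
    .build();
TableModify insertNode = LogicalTableModify.create(table,
    catalogReader, input,
    TableModify.Operation.INSERT, null, null, false);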
I want to generate SQL using Calcite, like this:
org.apache.calcite.rel.rel2sql.RelToSqlConverterTest#testAntiJoin
final FrameworkConfig frameworkConfig = Frameworks.newConfigBuilder()
    .parserConfig(SqlParser.Config.DEFAULT)
    // .defaultSchema(schema)
    .build();
// (in the Calcite test suite you would use relBuilder() instead)
final RelBuilder builder = RelBuilder.create(frameworkConfig);
final RelNode root = builder
    .scan("DEPT")
    .scan("EMP")
    .join(JoinRelType.ANTI,
        builder.equals(
            builder.field(2, 1, "DEPTNO"),
            builder.field(2, 0, "DEPTNO")))
    .project(builder.field("DEPTNO"))
    .build();
But if I don't set the schema, a "table not found" exception is thrown.
Is there any way to generate SQL without schema info? The aim is just to generate SQL, nothing more.
Replying to the first answer here, because of the comment character length limit.
My scenario is Business Intelligence. There can be many data sources, such as Hive, ClickHouse, and so on, and there are many tables. I also need to dynamically add or remove data sources, so I don't think it's appropriate for Calcite to be aware of all the data sources. I have two more questions:
1. How do I create the 'free-standing' table objects you mentioned?
2. Can SqlNode be used to do this? For example:
// Builds: SELECT a FROM testTable WHERE a = 1
SqlIdentifier from = new SqlIdentifier("testTable", SqlParserPos.QUOTED_ZERO);
SqlNode[] nodes = new SqlNode[2];
nodes[0] = new SqlIdentifier("a", SqlParserPos.QUOTED_ZERO);
nodes[1] = SqlLiteral.createExactNumeric("1", SqlParserPos.QUOTED_ZERO);
SqlNode where = new SqlBasicCall(SqlStdOperatorTable.EQUALS, nodes,
    SqlParserPos.QUOTED_ZERO);
SqlIdentifier selectNode = new SqlIdentifier("a", SqlParserPos.QUOTED_ZERO);
SqlSelect select = new SqlSelect(SqlParserPos.QUOTED_ZERO,
    SqlNodeList.EMPTY, // keyword list
    new SqlNodeList(Arrays.asList(selectNode), SqlParserPos.QUOTED_ZERO),
    from,
    where,
    null,  // groupBy
    null,  // having
    null,  // windowDecls
    null,  // orderBy
    null,  // offset
    null,  // fetch
    null); // hints
SqlString sqlString = select.toSqlString(CalciteSqlDialect.DEFAULT);
System.out.println(sqlString.getSql());
Only one method in RelBuilder uses a RelOptSchema: scan(String...) (and its variant scan(Iterable<String>)). That makes sense when you consider that the purpose of RelOptSchema is to act as a directory service, converting a table name (or a table path, consisting of a table name qualified with catalog and/or schema names) into a RelOptTable object.
If you have 'free-standing' table objects that are not accessed via a namespace then you can create TableScan relational expressions directly and then call RelBuilder.push(RelNode) to add them to the stack. Since you never call RelBuilder.scan you can create RelBuilder with a null RelOptSchema.
But in your case, it looks as if you don't have free-standing table objects. That's a problem for Calcite, because it needs to know that your "EMP" table has a field called "DEPTNO" and it has type INTEGER.
So I suggest that you create a 'virtual' schema that contains type information but is not necessarily backed by real tables. The MockCatalogReader class, used in several of Calcite's tests, is a good example to follow.
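A hedged sketch of that suggestion (all names here are illustrative, and RelToSqlConverter's entry point varies by version: recent Calcite has visitRoot, older versions use visitChild): register a type-only table in an in-memory schema, build the RelNode against it, and emit SQL without any real backend.
SchemaPlus rootSchema = Frameworks.createRootSchema(true);
// A 'virtual' table: type information only, nothing behind it.
rootSchema.add("EMP", new AbstractTable() {
  @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
    return typeFactory.builder()
        .add("DEPTNO", SqlTypeName.INTEGER)
        .add("ENAME", SqlTypeName.VARCHAR)
        .build();
  }
});
FrameworkConfig config = Frameworks.newConfigBuilder()
    .defaultSchema(rootSchema)
    .build();
RelBuilder builder = RelBuilder.create(config);
RelNode rel = builder.scan("EMP")
    .project(builder.field("DEPTNO"))
    .build();
RelToSqlConverter converter = new RelToSqlConverter(CalciteSqlDialect.DEFAULT);
SqlNode sqlNode = converter.visitRoot(rel).asStatement();
System.out.println(sqlNode.toSqlString(CalciteSqlDialect.DEFAULT).getSql());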
Using the Ignite C++ API, I'm trying to find a way to perform a SqlFieldsQuery to select a specific field, but for a whole set of keys.
One way to do this is with a SqlFieldsQuery like this:
SqlFieldsQuery("select field from Table where _key in (" + keys_string + ")")
where keys_string is the list of keys as a comma-separated string.
Unfortunately, this takes a very long time compared to just doing cache.GetAll(keys) for the same set of keys.
Is there an alternative, faster way of getting a specific field for a set of keys from an Ignite cache?
EDIT:
After reading the answers, I tried changing the query to:
auto query = SqlFieldsQuery("select field from Table t join table(_key bigint = ?) i on t._key = i._key")
I then add the arguments from my set of keys like this:
for(const auto& key: keys) query.AddArgument(key);
but when running the query, I get the error:
Failed to bind parameter [idx=2, obj=159957, stmt=prep0: select field from Table t join table(_key bigint = ?) i on t._key = i._key {1: 159956}]
Clearly, this doesn't work because there is only one '?'.
So I then tried to pass a vector<int64_t> of the keys, but I got an error which basically says that std::vector<int64_t> does not specialize the Ignite BinaryType template, so I specialized it as described here. When calling e.g.
writer.WriteInt64Array("data", data.data(), data.size())
I gave the field the arbitrary name "data". This then results in the error:
Failed to run map query remotely.
Unfortunately, the C++ API is neither well documented nor complete, so I'm wondering if I'm missing something or if the API simply does not allow passing an array as an argument to a SqlFieldsQuery.
A query that uses an IN clause doesn't always use indexes properly. The workaround for this is described here: https://apacheignite.readme.io/docs/sql-performance-and-debugging#sql-performance-and-usability-considerations
Also, if you have the option to do GetAll instead and look up by key directly, you should use it. It will likely be more efficient anyway.
A query with the IN operator will not always use indexes. As a workaround, you can rewrite the query in the following way:
select field from Table t join table(id bigint = ?) i on t.id = i.id
and then invoke it like:
new SqlFieldsQuery(
    "select field from Table t join table(id bigint = ?) i on t.id = i.id")
    .setArgs(new Object[] {new Integer[] {2, 3, 4}});
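A hedged usage sketch around that query in Java (the cache name "Table" and the use of Long keys are illustrative assumptions):
IgniteCache<Long, Object> cache = ignite.cache("Table");
SqlFieldsQuery qry = new SqlFieldsQuery(
    "select field from Table t join table(id bigint = ?) i on t.id = i.id")
    .setArgs(new Object[] {new Long[] {2L, 3L, 4L}});
// The cursor is AutoCloseable; each row is a list of selected columns.
try (FieldsQueryCursor<List<?>> cursor = cache.query(qry)) {
  for (List<?> row : cursor)
    System.out.println(row.get(0)); // the selected field
}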
How do I create an insert query in Doctrine that will perform the same function as the following SQL query:
INSERT INTO target (tgt_col1, tgt_col2)
SELECT 'flag' as marker, src_col2 FROM source
WHERE src_col1='mycriteria'
Doctrine documentation says:
If you want to execute DELETE, UPDATE or INSERT statements the Native SQL API cannot be used and will probably throw errors. Use EntityManager#getConnection() to access the native database connection and call the executeUpdate() method for these queries.
Examples
// Get entity manager from your context.
$em = $this->getEntityManager();
/**
* 1. Raw query
*/
$query1 = "
INSERT INTO target (tgt_col1, tgt_col2)
SELECT 'flag' as marker, src_col2 FROM source
WHERE src_col1='mycriteria'
";
$affectedRows1 = $em->getConnection()->executeUpdate($query1);
/**
* 2. Query using class metadata.
*/
$metadata = $em->getClassMetadata(Your\NameSpace\Entity\Target::class);
$tableName = $metadata->getTableName();
$niceTitle = $metadata->getColumnName('niceTitle');
$bigDescription = $metadata->getColumnName('bigDescription');
$metadata2 = $em->getClassMetadata(Your\NameSpace\Entity\Source::class);
$table2Name = $metadata2->getTableName();
$smallDescription = $metadata2->getColumnName('smallDescription');
$query2 = "
INSERT INTO $tableName ($niceTitle, $bigDescription)
SELECT 'hardcoded title', $smallDescription FROM $table2Name
WHERE $niceTitle = 'mycriteria'
";
$affectedRows2 = $em->getConnection()->executeUpdate($query2);
I'm still not convinced you're taking the right approach, but if you really need to run a raw SQL query for whatever reason, you can do that in Doctrine with the $entityManager->createNativeQuery() function:
http://doctrine-orm.readthedocs.org/en/latest/reference/native-sql.html
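For completeness, a minimal sketch of that function for a SELECT (the entity and column names are illustrative; as quoted above, INSERT/UPDATE/DELETE still have to go through the connection):
$rsm = new \Doctrine\ORM\Query\ResultSetMapping();
$rsm->addEntityResult(\Your\NameSpace\Entity\Source::class, 's');
$rsm->addFieldResult('s', 'id', 'id');
$rsm->addFieldResult('s', 'src_col2', 'srcCol2');

$query = $em->createNativeQuery(
    'SELECT id, src_col2 FROM source WHERE src_col1 = ?', $rsm);
$query->setParameter(1, 'mycriteria');
$sourceEntities = $query->getResult();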
Doctrine isn't a tool for query manipulation. The whole idea is to work at the entity level, not the SQL level (tables, etc.). Doctrine 2's QueryBuilder doesn't even support INSERT operations via DQL.
A small snippet of pseudocode to illustrate how it can be done "Doctrine's way":
$qb = $entityManager->createQueryBuilder();
$qb->select('s')
   ->from('\Foo\Source\Entity', 's')
   ->where('s.col1 = :col1')
   ->setParameter('col1', 'mycriteria');
$sourceEntities = $qb->getQuery()->getResult();

foreach ($sourceEntities as $sourceEntity) {
    $targetEntity = new \Foo\Target\Entity();
    $targetEntity->col1 = $sourceEntity->col1;
    $targetEntity->col2 = $sourceEntity->col2;
    $entityManager->persist($targetEntity);
}
$entityManager->flush();
Is it possible to do a bulk insert with Sitecore Rocks? Something along the lines of SQL's
INSERT INTO TABLE1 SELECT COL1, COL2 FROM TABLE2
If so, what is the syntax? I'd like to add an item under any other item of a given template type.
I've tried using this syntax:
insert into (
    ##itemname,
    ##templateitem,
    ##path,
    [etc.]
)
select
    'Bulk-Add-Item',
    //*[##id='{B2477E15-F54E-4DA1-B09D-825FF4D13F1D}'],
    Path + '/Item',
    [etc.]
To this, Query Analyzer responds:
"values" expected at position 440.
Please note that I have not found a working concatenation operator. For example,
Select ##item + '/value' from //sitecore/content/home/*
just returns '/value'. I've also tried ||, &&, and CONCATENATE without success.
There is apparently a way of doing bulk updates with CSV, but being able to do bulk operations directly from the Sitecore Query Analyzer would be very useful.
Currently you cannot do bulk inserts, but it is a really nice idea. I'll see what I can do.
Regarding the concatenation operator, the following works in the Query Analyzer:
select #Text + "/Value" from /sitecore/content/Home
This returns "Welcome to Sitecore/Value".
The ##item just returns empty, because it is not a valid system attribute.
I am building an app in Symfony2, using Doctrine2 with MySQL. I would like to use a fulltext search. I can't find much on how to implement this - right now I'm stuck on how to set the table engine to MyISAM.
It seems that it's not possible to set the table type using annotations. Also, if I did it manually by running an ALTER TABLE query, I'm not sure whether Doctrine2 would continue to work properly - does it depend on the InnoDB foreign keys?
Is there a better place to ask these questions?
INTRODUCTION
Doctrine2 uses InnoDB, which supports the foreign keys used in Doctrine associations. As MyISAM does not support foreign keys yet, you cannot use MyISAM to manage Doctrine entities.
On the other hand, MySQL v5.6, currently in development, will bring support for InnoDB FTS and so will enable full-text search on InnoDB tables.
SOLUTIONS
So there are two solutions:
Using MySQL v5.6 at your own risk and hacking Doctrine a bit to implement a MATCH AGAINST method: link in French... (I could translate if needed, but there are still bugs and I would not recommend this solution)
As described by quickshifti, creating a MyISAM table with a fulltext index just to perform the search on. Doctrine2 allows native SQL queries and lets you map such a query to an entity (details here), as the example below shows.
EXAMPLE FOR THE 2nd SOLUTION
Consider the following tables:
table 'user' : InnoDB [id, name, email]
table 'search_user' : MyISAM [user_id, name -> FULLTEXT]
Then you just have to write a search query with a JOIN and the mapping (in a repository):
<?php
public function searchUser($string) {
    // 1. Mapping
    $rsm = new ResultSetMapping();
    $rsm->addEntityResult('Acme\DefaultBundle\Entity\User', 'u');
    $rsm->addFieldResult('u', 'id', 'id');
    $rsm->addFieldResult('u', 'name', 'name');
    $rsm->addFieldResult('u', 'email', 'email');
    // 2. Native SQL (bind the search string as a parameter; every mapped
    //    column, including email, must appear in the SELECT list)
    $sql = 'SELECT u.id, u.name, u.email FROM search_user AS s '
         . 'JOIN user AS u ON s.user_id = u.id '
         . 'WHERE MATCH(s.name) AGAINST(:search IN BOOLEAN MODE) > 0';
    // 3. Run the query
    $query = $this->_em->createNativeQuery($sql, $rsm);
    $query->setParameter('search', $string);
    // 4. Get the results as entities!
    $results = $query->getResult();
    return $results;
}
?>
But the FULLTEXT index needs to stay up to date. Instead of using a cron task, you can add triggers (INSERT, UPDATE and DELETE) like this:
CREATE TRIGGER trigger_insert_search_user
AFTER INSERT ON user
FOR EACH ROW
INSERT INTO search_user SET user_id=NEW.id, name=NEW.name;

CREATE TRIGGER trigger_update_search_user
AFTER UPDATE ON user
FOR EACH ROW
UPDATE search_user SET name=NEW.name WHERE user_id=OLD.id;

CREATE TRIGGER trigger_delete_search_user
AFTER DELETE ON user
FOR EACH ROW
DELETE FROM search_user WHERE user_id=OLD.id;
So that your search_user table will always get the last changes.
Of course, this is just an example; I wanted to keep it simple, and I know this particular query could be done with a LIKE.
Doctrine dropped the fulltext Searchable feature from v1 in the move to Doctrine2, so you will likely have to roll your own support for fulltext search in Doctrine2.
I'm considering using migrations to generate the tables themselves, running the search queries with the native SQL query option to get sets of ids that refer to tables managed by Doctrine, then using those sets of ids to hydrate records normally through Doctrine.
I will probably cron something periodic to update the fulltext tables.