Currently my VDB DDL file is getting quite big. I want to split it into different files using the following:
IMPORT FROM REPOSITORY "DDL-FILE"
INTO test OPTIONS ("ddl-file" '/path/to/schema1.ddl')
However, this does not seem to work.
Can the DDL file path be relative, and if so, how?
Can the schema test be VIRTUAL?
Does "DDL-FILE" refer to "ddl-file"?
What should I put in my main VDB DDL and what should I put in my extra DDLs? Should the extra DDLs contain server configuration details, or should they be defined as a VDB?
I would like to see a working example on how to use this.
This will be used in a Teiid Spring Boot project, where you can only load one main VDB file. It is not workable to have one very large DDL file.
I have tried multiple approaches, but none seem to work; they either give me a null pointer exception with no error codes, or error codes that tell me nothing.
Also the syntax in Teiid 9.3 seems different:
IMPORT FOREIGN SCHEMA public
FROM REPOSITORY DDL-FILE
INTO test OPTIONS ("ddl-file" '/path/to/schema.ddl')
This feature is currently not implemented in Teiid Spring Boot. This issue is captured in https://issues.redhat.com/browse/TEIIDSB-219
Update: I added the needed code to master; it should be available with the 1.7 release. Meanwhile, you can build the master branch and test it out.
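For reference, a rough sketch of how the split could look once the feature is available. The database and schema names, the file names, and the surrounding CREATE/USE DATABASE statements are only illustrative; the import line follows the syntax from the question.

Main VDB DDL (the single file the Spring Boot project loads), e.g. customer-vdb.ddl:

CREATE DATABASE customer VERSION '1';
USE DATABASE customer VERSION '1';

CREATE VIRTUAL SCHEMA test;

IMPORT FROM REPOSITORY "DDL-FILE"
INTO test OPTIONS ("ddl-file" '/path/to/schema1.ddl');

The extra file schema1.ddl would then presumably contain only the schema objects themselves (views, tables), not server configuration or another VDB definition:

CREATE VIEW sample_view AS SELECT 1 AS id, 'dummy' AS name;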
I have a service running on my Linux machine that reads data stored in a .json file when the machine is booting. The service then validates the incoming JSON data and modifies specific system configurations according to the data. The service is written in C++, and for the validation I'm using https://github.com/pboettch/json-schema-validator.
In development it was easy to modify the JSON schema and just adapt the data manually. I've started to use semantic versioning for my JSON schema and included it in the following way:
JSON schema:
{
"$id": "https://my-company.org/schemas/config/0.1.0/config.schema.json",
"$schema": "http://json-schema.org/draft-07/schema#",
// Start of Schema definition
}
JSON data:
{
"$schema": "https://my-comapny.org/schemas/config/0.1.0/config.schema.json",
// Rest of JSON data
}
With the addition of the version, I am able to check if a version mismatch exists before validating.
What I am looking for is a way to automatically migrate the JSON data to match the newer schema version, if a version mismatch is identified. Is there any way to automatically achieve this, or is the only way to manually edit the JSON data to match the schema?
Since I plan on releasing this as open source, I would really like to include some form of automatic migration, so that when a version mismatch is identified I can ask the user whether they want to migrate to the newest schema version instead of just throwing an error.
What you're asking for is something that will need to make assumptions in order to work.
This is an age-old problem, and a similar one exists for databases. Schema migrations can be generated automatically for many simple changes, but this is not viable if you also want existing data translated automatically.
Let's look at a basic example. You rename a field.
How would a tool know you've renamed a field vs. removed an old one and added a new one? It essentially cannot.
So, you need to write your migrations by hand.
You could use JSON transformation tools like jq or fx to create migration scripts without writing it in code, which may or may not be preferable. (jq has a steeper learning curve but it's also very powerful.)
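For instance, a hand-written migration for that rename case can be a small jq filter; the field names and the 0.2.0 target version below are made up for illustration:

# rename-field.jq: copy oldName into newName, drop oldName, bump the schema reference
.newName = .oldName
| del(.oldName)
| ."$schema" = "https://my-company.org/schemas/config/0.2.0/config.schema.json"

Running jq -f rename-field.jq config.json > config.migrated.json then produces data conforming to the newer schema.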
Every time I write a query using the bq CLI tool I have to specify --use_legacy_sql=false. I need to configure this as a default. Google's documentation on this states it must be edited in the file $HOME/.bigqueryrc, but there is no such file. In fact, I couldn't find a file named .bigqueryrc anywhere on the server. Please help me save a few seconds every time I write a query. Thanks.
Following the comment conversation above: the file under discussion, .bigqueryrc, can be created manually and populated with default flag values, as described in the Setting default values for command-line flags section of the BigQuery documentation.
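For example, a minimal $HOME/.bigqueryrc that makes standard SQL the default for bq query looks like this (command-specific flags go under a section named after the command, global flags go at the top of the file):

[query]
--use_legacy_sql=false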
I'm using Siddhi to create an app that also interacts with a PostgreSQL DB. Although I'm not sure, I believe there is a bug when making multiple updates on the same PG table within a single event (i.e. upon receiving an event, update a record in the table and then create another one in the same table); it seems the batch updates are causing some problems. So, I just want to give it a try after disabling batchUpdate (it is enabled by default). I just don't know how to configure this using the siddhi-sdk (via the IntelliJ plugin). There are two related tickets:
https://github.com/wso2-extensions/siddhi-store-rdbms/issues/43
https://github.com/wso2/product-sp/issues/472
Until these are documented, I'd like to get a quick response on how to set these fields.
Best regards...
When batchEnable is set to true, insert/update operations are performed on a batch of events instead of on each and every single event. Simply put, this has been introduced to improve performance.
The default value of this parameter is currently set to "true".
However, batchEnable configuration is done through a system parameter called "{{RDBMS-Name}}.batchEnable", which has to be configured in the WSO2 Stream Processor's deployment.yaml.
If you want to override this property in Product-SP, please find the steps below.
Open the deployment.yaml file located in {Product-SP-Home}/conf/editor/
Insert the following lines in the file.
siddhi:
  extensions:
    extension:
      name: store
      namespace: rdbms
      properties:
        PostgreSQL.batchEnable: true
But currently there is no way to overwrite those system configurations at the Siddhi app level. Since you are using the SDK, what you can do is change the default value of the above parameter to "false".
Please find the steps below to do it.
Find the siddhi-store-rdbms-4.x.xx.jar file in the siddhi sdk. This is located in {siddhi-sdk-home}/lib/.
Open the jar file using an archive manager and open the rdbms-table-config.xml file located inside it with a text editor.
Set false in the <batchEnable>true</batchEnable> attribute under the <database name="PostgreSQL"> tag and save it.
Thanks Raveen. With a simple dash (-) before "extension" I was able to set the config:
siddhi:
  extensions:
    - extension:
        name: store
        namespace: rdbms
        properties:
          PostgreSQL.batchEnable: false
I am very new to Appcelerator. I've got my head around using Alloy to lay out the content of my apps, and have got to grips with using the Firefox extension to create an SQLite database. I'm stuck at putting the two together, though. I've tried Ti.UI.Database.Install, but I'm not 100% sure which JS file to add that code to, or where to copy the DB file to. I've followed a few threads and tutorials, and tried putting the .db file into the resources folder, lib folder, etc., but keep coming up with errors. If someone could just talk me through the basic steps, that would be great.
This is about using a predefined sqlite database in your app, meaning you want to install a db with preloaded records in its tables.
app/assets is a good place for your_database.sql;
then in app/alloy.js
Ti.Database.install('/your_database.sql', 'your_database')
finally, configure the adapter attribute in your Alloy models with:
type: "sql",
db_file: "your_database.sql",
db_name: "your_database",
collection_name: "your_table_name"
Anyway, if you do not need to preload a database, you only have to define your models (here, for example, app/models/foobars.js) and configure their adapter with
type: "sql",
collection_name: "foobars"
This way, Alloy will take care of creating and installing the database (including a foobars table) for you.
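Put together, a minimal app/models/foobars.js for the no-preload case could look like this; the column names are made up for illustration:

exports.definition = {
    config: {
        // Alloy creates the foobars table from these column definitions
        columns: {
            name: "text",
            quantity: "integer"
        },
        adapter: {
            type: "sql",
            collection_name: "foobars"
        }
    }
};

For the preloaded-database case you would keep the same structure and simply swap in the adapter shown earlier (type, db_file, db_name, collection_name).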
I'm curious what the point of the structure.sql file is. It seems to be created and updated every time Rails migrations are run, so it seems to be a representation of our database schema. What else can it be used for?
When one runs structure:load, what does it do? What does it mean to load a structure file into a database? Why would you need to do that?
Should one be committing the structure.sql file?
It seems like your Rails app is configured to use the SQL schema format:
#/config/application.rb
...
config.active_record.schema_format = :sql
...
The structure.sql is in place of a schema.rb.
Running db:structure:load (or db:schema:load) will load your entire database structure. You only need to do this when bringing up a new app instance from scratch. After a while, your migration files will become quite lengthy, and it will be better to do a load first and then migrate when bringing up a new app instance.
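A typical flow when bringing up a fresh instance looks like this (task names vary slightly between Rails versions):

bin/rails db:create
bin/rails db:structure:load   # on newer Rails, db:schema:load does the same when schema_format is :sql
bin/rails db:migrate          # apply any migrations newer than the dumped structure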