What is the structure.sql file used for? (Ruby on Rails 4)

I'm curious what the point of the structure.sql file is. It seems to be created and updated every time Rails migrations are run, so it appears to be a plain-text representation of our database schema. What else can it be used for?
When one runs db:structure:load, what does it do? What does it mean to load a structure file into a database? Why would you need to do that?
Should one be committing the structure.sql file?

It seems your Rails app is configured to use the SQL schema format:
# config/application.rb
...
config.active_record.schema_format = :sql
...
With this setting, structure.sql takes the place of db/schema.rb.
Running db:structure:load (or db:schema:load when using the default Ruby format) loads your entire database schema in one step. You only need to do this when bringing up a new app instance from scratch. After a while your migration files become quite lengthy, and it is better to do a load first, then run any remaining migrations, when bringing up a new app instance. And yes, you should commit structure.sql, for the same reason you would commit schema.rb: it is the authoritative snapshot of your schema.
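For example, bringing up a fresh instance then typically looks like this (rake task names as they exist in Rails 4; the exact invocation may vary with your setup):
rake db:create
rake db:structure:load   # loads db/structure.sql in one shot
rake db:migrate          # applies any migrations added after the dump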

Related

TEIID: importing DDL into VDB DDL

Currently my VDB DDL file is getting quite big. I want to split it into different files using the following:
IMPORT FROM REPOSITORY "DDL-FILE"
INTO test OPTIONS ("ddl-file" '/path/to/schema1.ddl')
However, this does not seem to work.
Can the DDL file path be relative, and if so, how?
Can the schema test be VIRTUAL?
Does "DDL-FILE" refer to "ddl-file"?
What should I put in my main VDB DDL and what should I put in the extra DDLs? Should the extra DDLs contain server configuration details, or should they be defined as a VDB?
I would like to see a working example of how to use this.
This will be used in a Teiid Spring Boot project, where you can only load one main VDB file. It is not workable to have one very large DDL file.
I tried multiple approaches, but it does not seem to work, either giving me a null pointer exception with no error codes or error codes that tell me nothing.
Also the syntax in Teiid 9.3 seems different:
IMPORT FOREIGN SCHEMA public
FROM REPOSITORY DDL-FILE
INTO test OPTIONS ("ddl-file" '/path/to/schema.ddl')
This feature is currently not implemented in Teiid Spring Boot. The issue is captured in https://issues.redhat.com/browse/TEIIDSB-219
Update: I added the needed code to master; it should be available with the 1.7 release. Meanwhile, you can build the master branch and test it out.

Getting started with Alloy and SQLite

I am very new to Appcelerator. I've got my head around using Alloy to lay out the content of my apps, and have got to grips with using the Firefox extension to create an SQLite database. I'm stuck at putting the two together, though. I've tried Ti.Database.install, but I'm not 100% sure which JS file to add that code to, or where to copy the DB file to. I've followed a few threads and tutorials, and tried putting the .db file into the resources folder, the lib folder, etc., but keep coming up with errors. If someone could just talk me through the basic steps, that would be great.
This is about using a predefined SQLite database in your app, i.e. installing a DB with preloaded records in its tables.
app/assets is a good place for your_database.sql; then, in app/alloy.js:
Ti.Database.install('/your_database.sql', 'your_database');
Finally, configure the adapter attribute in your Alloy models with:
type: "sql",
db_file: "your_database.sql",
db_name: "your_database",
collection_name: "your_table_name"
Anyway, if you do not need to preload a database, you only have to define your models (for example, app/models/foobars.js) and configure their adapter with:
type: "sql",
collection_name: "foobars"
This way Alloy will take care of creating and installing the database (including a foobars table) for you.
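For reference, a complete model file for the preloaded case might look like this (a minimal sketch; your_model.js, your_database, and your_table_name are placeholder names):
// app/models/your_model.js -- all names below are placeholders
exports.definition = {
    config: {
        adapter: {
            type: "sql",
            db_file: "your_database.sql",      // the file shipped in app/assets
            db_name: "your_database",
            collection_name: "your_table_name" // the preloaded table to bind to
        }
    }
};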

How do you use inspectdb in Django?

I am just starting with Django, and I would like to make an app that uses my existing SQLite DB.
I read the docs and found that you can create models from a DB using inspectdb, although I can't find an example of how you use that command on an existing DB.
I copied the DB file into my project directory, ran the command, and I see that a sqlite3 file is created in my project directory.
However, that file has nothing to do with the database that I made. I tried to pass the DB name to the inspectdb command, but it says it doesn't accept parameters.
So how can I actually tell the command to use my DB to create the models for my app?
There must be some obvious step that I am missing... this is what I did:
- created the project
- created the app
- copied my DB inside the project folder
- ran inspectdb
But I see the models file empty, and a new DB called db.sqlite3 created.
Found the answer: there is a setting that has to be changed to define which database the application will use: the NAME entry under DATABASES in settings.py. The default is set to "db.sqlite3", which explains the behavior I was getting.
Once you point it at the database you already made, the command runs without issues.
Not sure if it is just me getting stumped, but this info about the name that has to be changed was not mentioned anywhere...
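Concretely (a sketch; my_existing.sqlite3 and myapp are placeholder names), the change goes in settings.py, and the inspectdb output is redirected into the app's models file:
# settings.py (assumes the usual 'import os' and BASE_DIR at the top of the file)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        # point NAME at your existing file instead of the default 'db.sqlite3'
        'NAME': os.path.join(BASE_DIR, 'my_existing.sqlite3'),
    }
}
Then generate the models from the existing tables: python manage.py inspectdb > myapp/models.py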
Thanks

Migrate ColdFusion scheduled tasks using neo-cron.xml

We currently have two ColdFusion 10 dedicated servers which we are migrating to a single VPS server. We have many scheduled tasks on each. I took each of the neo-cron.xml files and copied the var XML elements from within the struct type='coldfusion.server.ConfigMap' XML element, and pasted them within that element in the neo-cron.xml file on the new server. Afterward I restarted the ColdFusion service, logged into CF Admin, and the tasks all show as expected.
My problem is, when I try to update any of the tasks I get the following error when saving:
An error occured scheduling the task. Unable to store Job :
'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)',
because one already exists with this identification
Also, when I try to delete a task, it tells me a task with that name does not exist. So it seems to me that the task information must also be stored elsewhere. When I try to update a task, the record doesn't exist in that secondary location, so it tries to add it as new to the neo-cron.xml file, which causes an error because it already exists. And when trying to delete, it doesn't exist in the secondary location, so it says a task with that name does not exist. That is just a guess, though.
Any ideas how I can get this to work without manually re-creating dozens of tasks? From what I've read this should work, but I need to be able to edit the tasks.
Thank you.
After a lot of hair-pulling I was able to figure out the problem. It all boiled down to having parentheses in the scheduled task names. This was causing both the "Unable to store Job : 'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)', because one already exists with this identification" error and also causing me to be unable to delete jobs. I believe it has something to do with encoding the parentheses because the actual neo-cron.xml name attribute of the var element encodes the name like so:
serverscheduletask#$%^default#$%^MAKE CATALOGS (SITE CONTROL)
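Pieced together from the above, the relevant part of neo-cron.xml looks roughly like this (only the struct type and the name encoding are taken from what is shown here; the contents of the var element are elided):
<struct type='coldfusion.server.ConfigMap'>
  <var name='serverscheduletask#$%^default#$%^MAKE CATALOGS (SITE CONTROL)'>
    <!-- task settings -->
  </var>
</struct>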
Note that this anomaly did not exist on ColdFusion 10, Update 10, but does exist on Update 13. I'm not sure which update broke it, but there you go.
You will have to copy the neo-cron.xml from C:\ColdFusion10\lib of one server to the other. After that, restart the server to make the changes effective, then log in to the CF Admin and check the functionality.
This should work.
Note: please take a backup of the existing neo-cron.xml before making the changes.

How to detect and respond to a database change (INSERT) from a django project?

I am setting up our project to integrate with a shipping platform called Endicia, which has the ability to insert new rows into our database when a package is shipped.
How can I detect from Python when a new row has been inserted?
My solution for now would be to query the DB every 30 seconds or so for new rows... is there another way to send a signal from Postgres to Python?
You'd set up a custom command that is run via manage.py.
You'd put it in the `yourapp/management/commands/` folder. Make sure to add an __init__.py file to both the management and commands folders, or the command won't work. Then you create the code for the custom command.
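A minimal skeleton for such a command might look like this (yourapp and check_new_rows are placeholder names; the row-processing logic is up to you):
# yourapp/management/commands/check_new_rows.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Process rows newly inserted by the shipping platform"

    def handle(self, *args, **options):
        # query for unprocessed rows and handle them here
        pass
It can then be invoked with: python manage.py check_new_rows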
Then see this related question about running a shell script when changes are made to a Postgres database. The answer there was to use PL/sh. You'll need to figure that part out on your own, but basically, however you do it, the end result is that the script should call something like /path/to/app/manage.py command_name.
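As a rough sketch of the database side (assuming the PL/sh extension is installed, a shipments table, and the command name from the skeleton above; all names are illustrative):
-- run once in Postgres
CREATE FUNCTION notify_django() RETURNS trigger AS $$
#!/bin/sh
/path/to/app/manage.py check_new_rows
$$ LANGUAGE plsh;

CREATE TRIGGER shipment_inserted
    AFTER INSERT ON shipments
    FOR EACH ROW EXECUTE PROCEDURE notify_django();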