How to make SybaseIQ case insensitive?

I created my database (SybaseIQ 16.x) with the CASE IGNORE option, but all my SELECT statements are failing because of case sensitivity. I tried the ALTER DATABASE command, but it doesn't offer any such option.
How can I revert my database to the CASE IGNORE setting, and how can I check my database's current configuration?

You can't change it. You need to create a brand-new database with the iqinit utility, using the -c switch to control case sensitivity. Then dump the schema of your original database with an external tool and use the IQ "extract" command to get the data. Once you have both, use dbisql to create the new schema in the new database and import the data.

Related

Automatically migrate JSON data to newest version of JSON schema

I have a service running on my Linux machine that reads data stored in a .json file when the machine boots. The service then validates the incoming JSON data and modifies specific system configurations according to that data. The service is written in C++, and for the validation I'm using https://github.com/pboettch/json-schema-validator.
In development it was easy to modify the JSON schema and just adapt the data manually. I've started to use semantic versioning for my JSON schema and include it the following way:
JSON schema:
{
"$id": "https://my-company.org/schemas/config/0.1.0/config.schema.json",
"$schema": "http://json-schema.org/draft-07/schema#",
// Start of Schema definition
}
JSON data:
{
"$schema": "https://my-comapny.org/schemas/config/0.1.0/config.schema.json",
// Rest of JSON data
}
With the addition of the version, I am able to check if a version mismatch exists before validating.
What I am looking for is a way to automatically migrate the JSON data to match the newer schema version if a version mismatch is identified. Is there any way to achieve this automatically, or is the only way to edit the JSON data manually to match the schema?
Since I plan on releasing this as open source, I would really like to include some form of automatic migration, so that when a version mismatch is identified I can simply ask the user whether they want to migrate to the newest schema version instead of throwing an error.
What you're asking for is something that would need to make assumptions in order to work.
This is an age-old problem, and it is similar for databases: you can generate schema migrations for many simple changes, but that is not viable if you also want to translate existing data automatically.
Let's look at a basic example: you rename a field.
How would a tool know you renamed a field rather than removed an old one and added a new one? It essentially cannot.
So you need to write your migrations by hand.
You could use JSON transformation tools like jq or fx to create migration scripts without writing them in code, which may or may not be preferable (jq has a steeper learning curve, but it's also very powerful). A hand-written migration sketch is shown below.
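As a rough illustration of what a hand-written migration could look like in Python (the renamed field and the 0.2.0 version are invented for this example; only the 0.1.0 schema URI comes from the question):

# Hypothetical hand-written migration: 0.1.0 -> 0.2.0 renames "hostname" to "host".
# The field names and the 0.2.0 version are invented for illustration only.
import json

def migrate_0_1_0_to_0_2_0(data: dict) -> dict:
    """Rename 'hostname' to 'host' and point the data at the 0.2.0 schema."""
    if "hostname" in data:
        data["host"] = data.pop("hostname")
    data["$schema"] = "https://my-company.org/schemas/config/0.2.0/config.schema.json"
    return data

# Chain of migrations, keyed by the version they upgrade from.
MIGRATIONS = {
    "0.1.0": migrate_0_1_0_to_0_2_0,
}

def schema_version(data: dict) -> str:
    # Extract the version from the $schema URI (".../config/<version>/config.schema.json").
    return data["$schema"].split("/config/")[1].split("/")[0]

def migrate_to_latest(data: dict, latest: str = "0.2.0") -> dict:
    version = schema_version(data)
    while version != latest:
        if version not in MIGRATIONS:
            raise ValueError(f"No migration defined for schema version {version}")
        data = MIGRATIONS[version](data)
        version = schema_version(data)
    return data

if __name__ == "__main__":
    with open("config.json") as f:
        config = json.load(f)
    print(json.dumps(migrate_to_latest(config), indent=2))

The point of the sketch is that each version step is an explicit, hand-written function; the tooling only decides which functions to chain.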

Copying multiple tables (or entire schema) from one cluster to another

I understand that AWS doesn't support a direct copy of a given table from one cluster to another. We need to UNLOAD from one and then COPY to the other. However, this applies to a table. Does it apply to a schema as well?
Say I have schemas that look like
some_schema
|
-- table1
-- table2
-- table3
another_schema
|
-- table4
-- table5
and I want to copy some_schema to another cluster, but don't need another_schema. Taking a snapshot doesn't make sense if there are many schemas like another_schema (say, another_schema2, another_schema3, another_schema4, etc., each with multiple tables in it).
I know I can do UNLOAD some_schema.table1 and then COPY some_schema.table1, but what can I do if I just want to copy the entire some_schema?
I believe unloading a whole schema is not available, but you have a couple of options depending on the size of your cluster and the number of tables you would like to copy to the new cluster.
1. Create a script to generate the UNLOAD and COPY commands for the schemas you would like to copy.
2. Create a snapshot and restore tables selectively: https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html
3. If the number of tables to be excluded from the copy is not big, you can CTAS them with the BACKUP NO option, so they will not be included when you create a snapshot.
To me, option 1 looks the easiest; let me know if you need any help with that.
UPDATE:
Here is the SQL to generate the UNLOAD statements:
select 'unload (''select * from '||n.nspname||'.'||c.relname||''') to ''s3_location''
access_key_id ''accesskey''
secret_access_key ''secret_key''
delimiter ''your_delimiter''
PARALLEL ON
GZIP ;' as sql
from pg_class c
left join pg_namespace n on c.relnamespace=n.oid
where n.nspname in ('schema1','schema2');
If you would like to add an additional filter for tables, use the c.relname column. A Python sketch of such a script (option 1) is shown below.
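For option 1, here is a minimal Python sketch of such a script, assuming psycopg2 and an IAM role for S3 authorization; the connection details, bucket, and role ARN are placeholders. It lists the tables of the schema from the catalog, runs the UNLOAD statements on the source cluster, and prints the matching COPY statements to run on the target cluster.

# Hypothetical sketch: unload every table of a schema to S3 and print the
# matching COPY statements. Connection details, bucket and IAM role are placeholders.
import psycopg2

SCHEMA = "some_schema"
S3_PREFIX = "s3://my-bucket/redshift-export"          # placeholder
IAM_ROLE = "arn:aws:iam::123456789012:role/my-role"   # placeholder

src = psycopg2.connect(host="source-cluster-host", port=5439,
                       dbname="dev", user="admin", password="...")
src.autocommit = True
cur = src.cursor()

# List the tables of the schema from the catalog.
cur.execute("""
    select c.relname
    from pg_class c
    join pg_namespace n on c.relnamespace = n.oid
    where n.nspname = %s and c.relkind = 'r'
""", (SCHEMA,))
tables = [row[0] for row in cur.fetchall()]

copy_statements = []
for table in tables:
    location = f"{S3_PREFIX}/{SCHEMA}/{table}/"
    # UNLOAD on the source cluster.
    cur.execute(f"""
        unload ('select * from {SCHEMA}.{table}')
        to '{location}'
        iam_role '{IAM_ROLE}'
        gzip parallel on
    """)
    # Matching COPY statement to run later on the target cluster.
    copy_statements.append(
        f"copy {SCHEMA}.{table} from '{location}' iam_role '{IAM_ROLE}' gzip;"
    )

print("\n".join(copy_statements))

The target cluster still needs the table DDL in place before the COPY statements are run.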
I agree with the solution provided by @mdem7. I would like to offer a slightly different solution that I feel may be helpful to others.
There are two problems:
1. Copying the schema and table definitions (meaning DDL)
2. Copying the data
Here is my proposed solution.
Copying the schema and table definitions (meaning DDL)
I think the pg_dump command suits best here; it will export the full schema definition into a SQL file that can be imported directly into another cluster:
pg_dump --schema-only -h your-host -U redshift-user -d redshift-database -p port > your-schema-file.sql
Then import the same file into the other cluster:
psql -h your-other-cluster-host -U other-cluster-username -d your-other-cluster-database-name -a -f your-schema-file.sql
Copying the data
As suggested in the other answer, UNLOAD to S3 and COPY from S3 suit best.
Hope it helps.
You really only have two options:
1. What mdem7 suggested: using UNLOAD/COPY. I don't recommend using pg_dump to get the schema, as it will miss Redshift-specific table settings like DIST/SORT keys and column ENCODING. Check out this view instead: Generate Table DDL.
2. The alternative is what you mentioned: restoring from a snapshot (manual or automated). However, the moment the new cluster comes online (while it's still restoring), log in and drop (with cascade) all the schemas you do not want; this stops the restore for the dropped schemas/tables. The only downside to this approach is that the new cluster needs to be the same size as the original, which may or may not matter. If the restored cluster is going to be relatively long-lived and it makes sense, you can resize it downwards after the restore has completed. A sketch of the cleanup step is shown below.
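As a rough illustration of that cleanup step in Python (the schema names and connection details are placeholders):

# Hypothetical sketch: drop unwanted schemas on the restored cluster so their
# tables are not restored. Schema names and connection details are placeholders.
import psycopg2

UNWANTED_SCHEMAS = ["another_schema", "another_schema2", "another_schema3"]

conn = psycopg2.connect(host="restored-cluster-host", port=5439,
                        dbname="dev", user="admin", password="...")
conn.autocommit = True
cur = conn.cursor()

for schema in UNWANTED_SCHEMAS:
    # CASCADE also drops the tables inside the schema, which stops their restore.
    cur.execute(f'drop schema if exists "{schema}" cascade;')
    print(f"dropped {schema}")

cur.close()
conn.close()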

Liquibase - How to skip changesets that have been executed

I have a Dropwizard application backed by a MySQL database. I am using the Liquibase wrapper for database migrations.
To start with, I used the 'db dump' command to auto-generate the migrations.xml file for me.
Now I am working on refactoring the database, and I want to be able to update specific column names and table names.
I use preconditions to skip the 'createTable' commands for the already generated tables:
<preConditions onFail="MARK_RAN">
    <not>
        <tableExists tableName="myTable" />
    </not>
</preConditions>
How do I now skip the execution of the primary key and foreign key constraints? Is there a precondition at the changelog level that I can use to indicate "skip already executed changesets"? Or do I just create a new migrations.xml? Will that delete the existing data?
Check this for preconditions: http://www.liquibase.org/documentation/preconditions.html
You can't tell Liquibase to skip all changesets that have already been executed, but you can simply exclude that file (save migration.xml in some other folder, e.g. an archive, because someday you may need to create the DB structure from scratch and will need those changesets).
That means you can just create a new .xml file with the changesets you want to be executed. Liquibase won't delete any data; it only works with the XML files you give it.

How to detect and respond to a database change (INSERT) from a django project?

I am setting up our project to integrate with a shipping platform called Endicia, which has the ability to insert new rows into our database when a package is shipped.
How can I detect from Python when a new row has been inserted?
My solution for now would be to query the DB every 30 seconds or so for new rows... is there another solution, such as sending a signal from Postgres to Python?
You'd set up a custom command that is run by the manage.py file.
You'd put it in the yourapp/management/commands/ folder. Make sure to add an __init__.py file to both the management and commands folders, or the command won't work. Then you create the code for the custom command; a minimal skeleton is shown below.
Then, see this related question about running a shell script when changes are made to a Postgres database. The answer there was to use PL/sh. You'll need to figure that part out on your own, but basically, however you do it, the end result is that the script should call something like /path/to/app/manage.py command_name.
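As a rough sketch of such a custom command (the app name, command name, Shipment model, and "processed" flag are placeholders; the real handler would do whatever processing your shipment rows need):

# yourapp/management/commands/process_new_shipments.py  (hypothetical path and name)
# Minimal skeleton of a custom management command, invoked as:
#     /path/to/app/manage.py process_new_shipments
from django.core.management.base import BaseCommand

from yourapp.models import Shipment  # placeholder model


class Command(BaseCommand):
    help = "Process shipment rows inserted by the shipping platform."

    def handle(self, *args, **options):
        # "processed" is a hypothetical flag column used to find unseen rows.
        new_rows = list(Shipment.objects.filter(processed=False))
        for shipment in new_rows:
            # ... react to the new row here ...
            shipment.processed = True
            shipment.save()
        self.stdout.write(f"Handled {len(new_rows)} new shipment(s).")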

Possible to add a new column to an Amazon SimpleDB domain with a default value?

You can dynamically put a new attribute on a single record in a domain, but that attribute remains null for all other records. Is there an "update * set newattribute='defaultval'" style statement that I can execute to add the new attribute to all the other records? I have a lot of records and would prefer not to loop over them all and do it programmatically.
I don't think there is any such option. We had a similar problem and had to resort to a workaround: we added Attribute_Name_Default as a separate attribute. We then wrote a wrapper for the AWS SimpleDB client which checks the default attribute for each attribute and assigns its value to the original attribute before returning to the calling code. Using dependency injection, we did not have to change any code. If dependency injection is not an option, just check out the AWS client from GitHub, make the change, and use that jar as a dependency. A rough sketch of such a wrapper is shown below.
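A minimal sketch of that wrapper idea, written here in Python with boto3 rather than the Java client; the "<name>_Default" naming convention follows the answer, while the domain, item, and region are placeholders:

# Hypothetical sketch of a SimpleDB client wrapper that falls back to a
# "<name>_Default" attribute when an item is missing "<name>".
import boto3

class DefaultingSimpleDB:
    def __init__(self, region_name="us-east-1"):
        self.sdb = boto3.client("sdb", region_name=region_name)

    def get_attributes(self, domain, item):
        response = self.sdb.get_attributes(DomainName=domain, ItemName=item,
                                           ConsistentRead=True)
        # Multi-valued attributes are collapsed to a single value in this sketch.
        attrs = {a["Name"]: a["Value"] for a in response.get("Attributes", [])}
        # For every "<name>_Default" attribute, fill in "<name>" if it is absent.
        for name, value in list(attrs.items()):
            if name.endswith("_Default"):
                real_name = name[: -len("_Default")]
                attrs.setdefault(real_name, value)
        return attrs

# Usage (domain and item names are placeholders):
# client = DefaultingSimpleDB()
# print(client.get_attributes("my_domain", "item_123"))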