Restricting database in PostgreSQL - Django

I am using Django 1.9 with PostgreSQL. This is my data model for the model "Feature":
class Feature(models.Model):
    image_component = models.ForeignKey('Image_Component', on_delete=models.CASCADE,)
    feature_value = HStoreField()

    def save(self, *args, **kwargs):
        if Feature.objects.filter(feature_value__has_keys=['size', 'quality', 'format']):
            super(Feature, self).save(*args, **kwargs)
        else:
            print("Incorrect key entered")
I am imposing a restriction on feature_value so that the only keys allowed in the HStore are size, format and quality. This works when I update the database through Django admin, but the restriction is not enforced when I update the database directly with pgAdmin3; i.e., I want to impose the same restriction at the database level. How can I do that? Any suggestions?

You need to ALTER your Feature table and add a constraint on the feature_value field with a query like this:
ALTER TABLE your_feature_table
ADD CONSTRAINT restricted_keys
CHECK (
    -- Check that 'feature_value' contains all specified keys
    feature_value::hstore ?& ARRAY['size', 'format', 'quality']
    -- and that the number of keys is exactly three
    AND array_length(akeys(feature_value), 1) = 3
);
This ensures that every value in feature_value contains exactly three keys: size, format and quality; it also rejects empty data.
Note that before applying this query, you need to remove all invalid data from the table, or you will receive an error:
ERROR: check constraint "restricted_keys" is violated by some row
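Before adding the constraint, you may want to inspect what would violate it. A hedged sketch, assuming the same table and column names as above (run the SELECT first, then decide whether to fix or delete the offending rows):
-- Review rows that would violate the constraint
SELECT * FROM your_feature_table
WHERE NOT (
    feature_value::hstore ?& ARRAY['size', 'format', 'quality']
    AND array_length(akeys(feature_value), 1) = 3
);
-- Then delete (or repair) them
DELETE FROM your_feature_table
WHERE NOT (
    feature_value::hstore ?& ARRAY['size', 'format', 'quality']
    AND array_length(akeys(feature_value), 1) = 3
);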
You could execute the ALTER query above in a DB console, or, since you're using Django, it would be more appropriate to create a migration and apply it using RunSQL: create an empty migration, pass the query above to migrations.RunSQL, and pass the following query as the reverse_sql parameter so that the constraint is removed when the migration is unapplied:
ALTER TABLE your_feature_table
DROP CONSTRAINT restricted_keys;
After applying:
sql> INSERT INTO your_feature_table (feature_value) VALUES ('size => 124, quality => great, format => A4')
1 row affected in 18ms
sql> INSERT INTO your_feature_table (feature_value) VALUES ('format => A4')
ERROR: new row for relation "your_feature_table" violates check constraint "restricted_keys"
Detail: Failing row contains ("format"=>"A4").
sql> INSERT INTO your_feature_table (feature_value) VALUES ('')
ERROR: new row for relation "your_feature_table" violates check constraint "restricted_keys"
Detail: Failing row contains ().
sql> INSERT INTO your_feature_table (feature_value) VALUES ('a => 124, b => great, c => A4')
ERROR: new row for relation "your_feature_table" violates check constraint "restricted_keys"
Detail: Failing row contains ("a"=>"124", "b"=>"great", "c"=>"A4").
sql> INSERT INTO your_feature_table (feature_value) VALUES ('size => 124, quality => great, format => A4, incorrect_key => error')
ERROR: new row for relation "your_feature_table" violates check constraint "restricted_keys"
Detail: Failing row contains ("size"=>"124", "format"=>"A4", "quality"=>"great", "incorrect_ke...).

How to insert/update data in SQL database using Azure Databricks notebook JDBC

I found lots of examples for appending/overwriting a SQL table from an Azure Databricks notebook, but no way to directly update or insert data using a query or otherwise.
For example, I want to update all rows where (identity column) ID = 1143, so the steps I need to take care of are:
val srMaster = "(SELECT ID, userid, statusid, bloburl, changedby FROM SRMaster WHERE ID = 1143) srMaster"
val srMasterTable = spark.read.jdbc(url = jdbcUrl, table = srMaster,
  properties = connectionProperties)
srMasterTable.createOrReplaceTempView("srMasterTable")
val srMasterTableUpdated = spark.sql("SELECT userid, statusid, bloburl, 140 AS changedby FROM srMasterTable")

import org.apache.spark.sql.SaveMode
srMasterTableUpdated.write.mode(SaveMode.Overwrite)
  .jdbc(jdbcUrl, "[dbo].[SRMaster]", connectionProperties)
Is there any other sufficient way to achieve this?
Note: the above code also does not work, failing with SQLServerException: Could not drop object 'dbo.SRMaster' because it is referenced by a FOREIGN KEY constraint. So it looks like it drops and recreates the table, which is not a solution at all.
You can use an INSERT ... SELECT statement.
Example: insert values selected from another table where a column matches:
INSERT INTO srMaster (userid, statusid, bloburl, changedby)
SELECT userid, statusid, bloburl, 140 FROM srMasterTable WHERE ID = 1143;
or update existing rows where one of the existing column values matches:
UPDATE srMaster SET userid = 1, statusid = 2, bloburl = 'https://url', changedby = 'user' WHERE ID = 1143;
or just insert multiple values:
INSERT INTO srMaster VALUES
(1, 10, 'https://url1', 'user1'),
(2, 11, 'https://url2', 'user2');
In SQL Server, you cannot drop a table if it is referenced by a FOREIGN KEY constraint. You have to either drop the child tables before removing the parent table, or remove foreign key constraints.
For a parent table, you can use the below query to get foreign key constraint names and the referencing table names:
SELECT name AS 'Foreign Key Constraint Name',
OBJECT_SCHEMA_NAME(parent_object_id) + '.' + OBJECT_NAME(parent_object_id) AS 'Child Table'
FROM sys.foreign_keys
WHERE OBJECT_SCHEMA_NAME(referenced_object_id) = 'dbo' AND
OBJECT_NAME(referenced_object_id) = 'PARENT_TABLE'
Then you can alter the child table and drop the constraint by its name using the below statement:
ALTER TABLE dbo.childtable DROP CONSTRAINT FK_NAME;
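If you temporarily drop the constraint to let the Databricks overwrite succeed, you will presumably want to restore it afterwards. A hedged sketch; the child table, the referencing column SRMasterID, and the constraint name are assumptions to adapt to your schema:
ALTER TABLE dbo.childtable
ADD CONSTRAINT FK_NAME FOREIGN KEY (SRMasterID) REFERENCES dbo.SRMaster (ID);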

Adding NOT NULL constraint to Cloud Spanner table

If a Cloud Spanner table is created with nullable columns, is it possible to add a NOT NULL constraint on a column without recreating the table?
You can add a NOT NULL constraint to a non-key column. You must first ensure that all rows actually do have values for the column; Spanner will scan the data to verify before fully applying the NOT NULL constraint. More information about altering tables is in the Cloud Spanner schema-update documentation.
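For illustration, a minimal sketch with hypothetical table and column names (Spanner DDL requires restating the column type when altering it):
ALTER TABLE Albums ALTER COLUMN MarketingBudget INT64 NOT NULL;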
However, you cannot add such a constraint to a key column. That kind of change would require rewriting all the data in the table, because the nullness of the key affects how the data is encoded. The only option for making that change is to create a new table that's set up the way you want, make code changes to support using both tables temporarily, gradually move the data from the old table to the new one, and eventually change the code to use only the new table and drop the old one. If you then wanted the original table name back, you'd have to do the whole thing again.
Unfortunately there is no way to add a NOT NULL column directly.
The way to do it:
1. Add a nullable column:
ALTER TABLE table1 ADD COLUMN column1 STRING(255)
2. Backfill the column with non-null values (if the table is not empty):
UPDATE table1 SET column1 = "<GENERATED DATA>" WHERE TRUE
3. Add the constraint:
ALTER TABLE table1 ALTER COLUMN column1 STRING(255) NOT NULL
Thanks.
Creating a non-nullable column in Spanner on an existing table is typically a three-step process:
# add new column to table
ALTER TABLE <table_name> ADD COLUMN <column_name> <value_type>;
# create default values
UPDATE <table_name> SET <column_name>=<default_value> WHERE TRUE;
# add constraint
ALTER TABLE <table_name> ALTER COLUMN <column_name> <value_type> NOT NULL;

Storing unique values in Django HStoreField with PostgreSQL

Is it possible to add a PostgreSQL HStoreField (Django >= 1.8) to a model where the values in the hstore are unique?
Keys are obviously unique, but can values be unique as well? I suppose custom validators could be added to the model, but I am curious to know whether this can be done at the database level.
A single hstore value can contain multiple key => value pairs, making a solution based on a unique index impossible. Additionally, your new hstore value can also have multiple key => value pairs. The only viable alternative is then a BEFORE INSERT OR UPDATE trigger on the table:
CREATE FUNCTION trf_uniq_hstore_values() RETURNS trigger AS $$
DECLARE
    dups text;
BEGIN
    -- Collect any values of the incoming row that already exist in the table
    SELECT string_agg(x, ',') INTO dups
    FROM (SELECT svals(hstorefield) AS x FROM my_table) sub
    JOIN (SELECT svals(NEW.hstorefield) AS x) vals USING (x);
    IF dups IS NOT NULL THEN
        RAISE NOTICE 'Value(s) % violate(s) uniqueness constraint. Operation aborted.', dups;
        RETURN NULL;  -- skip the INSERT/UPDATE
    ELSE
        RETURN NEW;
    END IF;
END; $$ LANGUAGE plpgsql;

CREATE TRIGGER tr_uniq_hstore_values
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE trf_uniq_hstore_values();
Note that this will not trap existing duplicates in the table.
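To check for pre-existing duplicates before installing the trigger, a query along these lines should work (same hypothetical table and column names as the trigger above):
SELECT x AS duplicated_value, count(*) AS occurrences
FROM (SELECT svals(hstorefield) AS x FROM my_table) sub
GROUP BY x
HAVING count(*) > 1;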

SQL column with multiple values (query implementation in a cpp file)

I have connected my cpp file in Eclipse to my database, which has 3 tables: two simple tables, Person and Item, and a third one, PersonItem, that connects them. In the third table I use a simple primary key and two foreign keys, like this:
CREATE TABLE PersonsItems(
    PersonsItemsId int not null auto_increment primary key,
    Person_Id int not null,
    Item_id int not null,
    constraint fk_Person_id foreign key (Person_Id) references Person(PersonId),
    constraint fk_Item_id foreign key (Item_id) references Items(ItemId)
);
So, with embedded SQL in C, I want a Person to have multiple Items.
My code:
mysql_query(connection,
    "INSERT INTO PersonsItems(PersonsItemsId, Person_Id, Item_id) VALUES (1,1,5), (1,1,8);");
printf("%ld PersonsItems Row(s) Updated!\n", (long) mysql_affected_rows(connection));

// SELECT newly inserted record.
mysql_query(connection,
    "SELECT Order_id FROM PersonsItems");

// Resource struct with rows of returned data.
resource = mysql_use_result(connection);

// Fetch multiple results
while ((result = mysql_fetch_row(resource))) {
    printf("%s %s\n", result[0], result[1]);
}
My result is
-1 PersonsItems Row(s) Updated!
5
but with VALUES (1,1,5), (1,1,8);
I would like that to be
-1 PersonsItems Row(s) Updated!
5 8
Can someone tell me why this is not happening?
Kind regards.
I suspect this is because your first insert is failing with the following error:
Duplicate entry '1' for key 'PRIMARY'
Because you are trying to insert 1 twice into PersonsItemsId, which is the primary key and so has to be unique (it is also auto_increment, so there is no need to specify a value at all).
This is why rows affected is -1, and why in this line:
printf("%s %s\n",result[0], result[1]);
you are only seeing 5 because the first statement failed after the values (1,1,5) had already been inserted, so there is still one row of data in the table.
I think to get the behaviour you are expecting you need to use the ON DUPLICATE KEY UPDATE syntax:
INSERT INTO PersonsItems(PersonsItemsId, Person_Id, order_id)
VALUES (1,1,5), (1,1,8)
ON DUPLICATE KEY UPDATE Person_id = VALUES(person_Id), Order_ID = VALUES(Order_ID);
Or do not specify the value for PersonsItemsId and let auto_increment do its thing:
INSERT INTO PersonsItems( Person_Id, order_id)
VALUES (1,5), (1,8);
I think you have a typo or mistake in your two queries.
You are inserting "PersonsItemsId, Person_Id, Item_id"
INSERT INTO PersonsItems(PersonsItemsId, Person_Id, Item_id) VALUES (1,1,5), (1,1,8)
and then your select statement selects "Order_id".
SELECT Order_id FROM PersonsItems
In order to achieve 5, 8 as you request, your second query needs to be:
SELECT Item_id FROM PersonsItems
Edit to add:
Your primary key is autoincrement so you don't need to pass it to your insert statement (in fact it will error as you pass 1 twice).
You only need to insert your other columns:
INSERT INTO PersonsItems(Person_Id, Item_id) VALUES (1,5), (1,8)

Trying to add a foreign key in MySQL with HeidiSQL

I've been trying to add a foreign key to my table using HeidiSQL, and I keep getting error 1452.
After reading around, I made sure all my tables were running on InnoDB and checked that the columns had the same datatype. Still, the only way I can add my key is if I drop all my data, which I don't intend to do since I have spent quite a few hours on this.
Here is my table create code:
CREATE TABLE `data` (
    `ID` INT(10) NOT NULL AUTO_INCREMENT,
    #bunch of random other columns stripped out
    `Ability_1` SMALLINT(5) UNSIGNED NOT NULL DEFAULT '0',
    #more stripped columns
    `Extra_Info` SET('1','2','3','Final','Legendary') NOT NULL DEFAULT '1' COLLATE 'utf8_unicode_ci',
    PRIMARY KEY (`ID`),
    UNIQUE INDEX `ID` (`ID`)
)
COLLATE='utf8_unicode_ci'
ENGINE=InnoDB
AUTO_INCREMENT=650;
Here is table 2:
CREATE TABLE `ability` (
    `ability_ID` SMALLINT(5) UNSIGNED NOT NULL AUTO_INCREMENT,
    #stripped columns
    `Name_English` VARCHAR(12) NOT NULL COLLATE 'utf8_unicode_ci',
    PRIMARY KEY (`ability_ID`),
    UNIQUE INDEX `ability_ID` (`ability_ID`)
)
COLLATE='utf8_unicode_ci'
ENGINE=InnoDB
AUTO_INCREMENT=165;
Finally, here is the ALTER statement along with the error:
ALTER TABLE `data`
ADD CONSTRAINT `Ability_1` FOREIGN KEY (`Ability_1`) REFERENCES `ability` (`ability_ID`) ON UPDATE CASCADE ON DELETE CASCADE;
/* SQL Error (1452): Cannot add or update a child row: a foreign key constraint fails (`check`.`#sql-ec0_2`, CONSTRAINT `Ability_1` FOREIGN KEY (`Ability_1`) REFERENCES `ability` (`ability_ID`) ON DELETE CASCADE ON UPDATE CASCADE) */
If there is anything else I can provide, please let me know; this is really bothering me. I'm using 5.5.27 - MySQL Community Server (GPL), which came with the XAMPP installer.
If you are using HeidiSQL, it is pretty easy: in the table editor, open the Foreign keys tab and click +Add to add foreign keys.
I prefer the GUI way of creating tables and their attributes because it saves time and reduces errors.
I found it. Sorry everyone. The problem was that my columns had 0 as a default value, while the referenced ability table had no row with ID 0.
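In SQL terms, that means the offending rows and the column default both need to point at an ability_ID that actually exists before the key can be added. A hedged sketch (the replacement ID 1 is purely hypothetical):
-- Repoint rows that reference the nonexistent ability 0
UPDATE `data` SET `Ability_1` = 1 WHERE `Ability_1` = 0;
-- And change the default so new rows don't reintroduce the problem
ALTER TABLE `data` ALTER COLUMN `Ability_1` SET DEFAULT 1;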
Here is how you can do it:
1. Create your primary keys. For me this was straightforward, so I won't post how to do that here.
2. To create your FOREIGN KEYS, you need to change the table/engine type for each table from MyISAM to InnoDB. To do this, select the table on the right-hand side, then select the Options tab and change the engine from MyISAM to InnoDB for every table. The equivalent SQL is shown below.
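The SQL equivalent, if you would rather not click through each table (converting the engine rebuilds the table, so it can take a while on large tables):
ALTER TABLE `data` ENGINE=InnoDB;
ALTER TABLE `ability` ENGINE=InnoDB;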