How to catch a unique constraint error in PostgreSQL - C++

If I create a table with a unique constraint, for example:
CREATE TABLE distributors (
did integer,
name varchar(40) UNIQUE
);
What happens if I try to insert an entry with a name that already exists? I tried to do so, and it just quit without displaying any error message. Is there a way to check whether a new entry was actually inserted?

If the insert failed, then there should be an error code set somewhere, readable by some method of the interface you're using - more details are in the documentation for your access library/module.
Alternatively you can change your insert to:
INSERT INTO distributors (did, name) values ( ... ) RETURNING did;
If it doesn't return anything, there was an error.
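For example, with the libpq C library from C++ you can inspect the result status and the SQLSTATE of a failed INSERT; SQLSTATE 23505 is unique_violation. This is only a minimal sketch - the connection string and the distributors table are taken from the question above, so adapt them to your setup:
#include <libpq-fe.h>
#include <cstring>
#include <iostream>

int main() {
    PGconn *conn = PQconnectdb("dbname=test");   // placeholder connection string
    if (PQstatus(conn) != CONNECTION_OK) {
        std::cerr << PQerrorMessage(conn);
        PQfinish(conn);
        return 1;
    }
    PGresult *res = PQexec(conn,
        "INSERT INTO distributors (did, name) VALUES (1, 'aaa') RETURNING did;");
    if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        // A failed INSERT ... RETURNING reports an error status; check the SQLSTATE.
        const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
        if (sqlstate && std::strcmp(sqlstate, "23505") == 0)
            std::cerr << "Duplicate name - row was not inserted\n";
        else
            std::cerr << "Insert failed: " << PQerrorMessage(conn);
    } else {
        // RETURNING produced a row, so the insert succeeded.
        std::cout << "Inserted did = " << PQgetvalue(res, 0, 0) << "\n";
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}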

If you try to insert a record with a name that already exists, you'll receive an error message like this:
ERROR: duplicate key value violates unique constraint "distributors_name_key"
DETAIL: Key (name)=(aaa) already exists.
and the record will not be inserted.
If you do it from the application level, an exception will be thrown with a similar message. It's up to the programmer how to handle this exception.
If your ID field is auto-generated (SERIAL or BIGSERIAL) and you insert just the name, a failed insert of a name that already exists will still advance the ID sequence by 1, even though no record was inserted.
To avoid that, you can run a SELECT query before the INSERT to check whether the record already exists. It is possible to do it all in one transaction, in pseudo-code (a C++ sketch follows the pseudo-code):
BEGIN TRANSACTION;
int records = SELECT name FROM table WHERE name = 'aaa' FOR UPDATE; // FOR UPDATE locks a matching row against concurrent writers until the transaction finishes.
if (records == 0)
INSERT INTO table VALUES (1, 'aaa');
else
MessageBox.Show("Record already exists");
COMMIT TRANSACTION;
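In C++ with libpq, that pseudo-code could look roughly like the sketch below. It is only an illustration: it reuses the distributors table and values from this question, assumes conn is an open PGconn* (as in the earlier sketch), and note that FOR UPDATE only locks rows that already exist - the unique constraint remains the final guard against two concurrent inserts of the same name.
PGresult *res = PQexec(conn, "BEGIN");
PQclear(res);
// Look for an existing row and lock it if present.
res = PQexec(conn, "SELECT name FROM distributors WHERE name = 'aaa' FOR UPDATE");
bool exists = (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0);
PQclear(res);
if (!exists) {
    res = PQexec(conn, "INSERT INTO distributors (did, name) VALUES (1, 'aaa')");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        std::cerr << "Insert failed: " << PQerrorMessage(conn);
    PQclear(res);
} else {
    std::cerr << "Record already exists\n";
}
res = PQexec(conn, "COMMIT");
PQclear(res);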

Related

How to show an alert if a user tries to save values that are already present in my table, for my Oracle APEX form-based application

I have a form in my Oracle APEX based application. I want a validation on the submit button so that if the combination of two specific entries is already present in the SQL table/view, an alert is shown, like "The entry for this combination of values of A and B already exists, please enter correct values."
If those two specific entries are represented by two form items (e.g. :P1_ONE and :P1_TWO), then the validation could be a PL/SQL function that returns error text, such as
declare
  l_cnt  number;
  retval varchar2(200);
begin
  select count(*)
    into l_cnt
    from your_table t
   where t.column_one = :P1_ONE
     and t.column_two = :P1_TWO;

  if l_cnt > 0 then
    retval := 'The entry for this combination already exists';
  end if;

  return retval;
end;
The query itself might need to be modified, depending on what exactly you meant in your description of the problem; this is how I understood it.
You should also have a unique constraint on the table and let that validate incoming data.
Any violation of this constraint will raise an exception, which can be turned into a friendly message within the APEX error handling procedure.

SQL column with multiple values (query implementation in a cpp file)

I am using this link.
I have connected my cpp file (in Eclipse) to my database, which has 3 tables: two simple tables, Person and Item, and a third one, PersonsItems, that connects them. In the third table I use one simple primary key and then two foreign keys like this:
CREATE TABLE PersonsItems(PersonsItemsId int not null auto_increment primary key,
Person_Id int not null,
Item_id int not null,
constraint fk_Person_id foreign key (Person_Id) references Person(PersonId),
constraint fk_Item_id foreign key (Item_id) references Items(ItemId));
So, with embedded SQL in C, I want a Person to have multiple items.
My code:
mysql_query(connection, \
"INSERT INTO PersonsItems(PersonsItemsId, Person_Id, Item_id) VALUES (1,1,5), (1,1,8);");
printf("%ld PersonsItems Row(s) Updated!\n", (long) mysql_affected_rows(connection));
//SELECT newly inserted record.
mysql_query(connection, \
"SELECT Order_id FROM PersonsItems");
//Resource struct with rows of returned data.
resource = mysql_use_result(connection);
// Fetch multiple results
while((result = mysql_fetch_row(resource))) {
printf("%s %s\n",result[0], result[1]);
}
My result is
-1 PersonsItems Row(s) Updated!
5
but with VALUES (1,1,5), (1,1,8);
I would like that to be
-1 PersonsItems Row(s) Updated!
5 8
Can someone tell me why this is not happening?
Kind regards.
I suspect this is because your first insert is failing with the following error:
Duplicate entry '1' for key 'PRIMARY'
You are trying to insert 1 twice into PersonsItemsId, which is the primary key and so has to be unique (it is also auto_increment, so there is no need to specify a value at all).
This is why the affected row count is -1, and why in this line:
printf("%s %s\n",result[0], result[1]);
you only see 5: the statement failed after the first set of values (1,1,5) had already been inserted, so there is still one row of data in the table.
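As a side note, a quick way to surface that error from the MySQL C API is to check the return value of mysql_query and print mysql_error; this is just a sketch against the connection variable from the question (error number 1062 is ER_DUP_ENTRY, the duplicate-key error):
if (mysql_query(connection,
        "INSERT INTO PersonsItems(PersonsItemsId, Person_Id, Item_id) VALUES (1,1,5), (1,1,8);") != 0) {
    // mysql_query returns non-zero on failure; mysql_errno would be 1062 (ER_DUP_ENTRY) here.
    fprintf(stderr, "INSERT failed (%u): %s\n",
            mysql_errno(connection), mysql_error(connection));
} else {
    printf("%ld PersonsItems Row(s) Updated!\n", (long) mysql_affected_rows(connection));
}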
I think to get the behaviour you are expecting you need to use the ON DUPLICATE KEY UPDATE syntax:
INSERT INTO PersonsItems(PersonsItemsId, Person_Id, order_id)
VALUES (1,1,5), (1,1,8)
ON DUPLICATE KEY UPDATE Person_id = VALUES(person_Id), Order_ID = VALUES(Order_ID);
Example on SQL Fiddle
Or do not specify the value for personsItemsID and let auto_increment do its thing:
INSERT INTO PersonsItems( Person_Id, order_id)
VALUES (1,5), (1,8);
Example on SQL Fiddle
I think you have a typo or mistake in your two queries.
You are inserting "PersonsItemsId, Person_Id, Item_id"
INSERT INTO PersonsItems(PersonsItemsId, Person_Id, Item_id) VALUES (1,1,5), (1,1,8)
and then your select statement selects "Order_id".
SELECT Order_id FROM PersonsItems
In order to achieve 5, 8 as you request, your second query needs to be:
SELECT Item_id FROM PersonsItems
Edit to add:
Your primary key is auto_increment, so you don't need to pass it to your insert statement (in fact it will error because you pass 1 twice).
You only need to insert your other columns:
INSERT INTO PersonsItems(Person_Id, Item_id) VALUES (1,5), (1,8)

Validation to detect text in numeric field

I'm trying to prevent users from crashing the create-new-product APEX page. On the create page I have a text field, product_name, and a numeric field, product_quantity.
Currently when they enter text in the product_quantity field and click 'Save' they get the following error:
Error processing validation.
ORA-01722: invalid number
I have investigated the error; however, I thought that in APEX, if you selected a numeric field, it would detect whether the user entered text or numeric characters?
Is there a method to display a validation message if the user has entered text, and otherwise let the user save the new entry?
UPDATE
I know why it's happening but don't know how to solve it.
I recreated my page and it worked. I then added two pieces of validation in my page processing, and when I try it I get the error from my initial post. If I disable them it works again. The validations use NOT EXISTS to find out whether the entered value already exists in the table before it is added.
If only the validation kicked in after checking whether a numerical value has been entered. I stopped the validation looking at an associated item and turned off the 'when button pressed' condition, but still no joy.
select 1 from MY_TABLE where column_name = :P6_TEXT_FIELD
Is there a way to run the text box validation (checking whether text was entered) before the validation I created in the page processing?
That's the thing with validations: they all get executed and do not short-circuit. You can actually clearly see this happening when you debug the page.
In the case of a number field, you'd see it does not pass the number validation. But that does not stop the other validations: your second validation will still run, and since it uses the submitted value it will obviously fail when text was entered, for instance.
There are some work-arounds for that.
For example, you could change your NOT EXISTS validation to a PL/SQL function returning an error text and execute something like this (example):
DECLARE
  v_test_nbr     NUMBER;
  v_check_exists NUMBER;
BEGIN
  BEGIN
    v_test_nbr := to_number(:P6_TEXT_FIELD);
  EXCEPTION
    WHEN OTHERS THEN
      -- or catch 1722 (invalid number) and 6502 (char to number conversion error)
      v_test_nbr := NULL;
  END;

  IF v_test_nbr IS NOT NULL
  THEN
    -- if v_test_nbr is not null then the field should be numerically valid
    -- if it isn't then this code would be skipped and this validation
    -- will not throw an error.
    -- However, the previous validation will still fail when text is entered,
    -- so this shouldn't matter.
    BEGIN
      SELECT 1
        INTO v_check_exists
        FROM my_table
       WHERE column_name = :P6_TEXT_FIELD;
    EXCEPTION
      WHEN no_data_found THEN
        v_check_exists := 0;
    END;

    IF v_check_exists = 1
    THEN
      RETURN 'A record with this key already exists';
    END IF;
  END IF;

  RETURN NULL;
END;
HOWEVER - in this case, where you want to check for duplicate entries, a better option may exist if you are on at least version 4.1. If your table has the correct constraints defined, then you would have a unique key on the field you are performing the not-exists check on. This means that if you did not have this validation on the field, you would get an ORA-00001 (DUP_VAL_ON_INDEX) error.
You could then use the error processing provided by APEX to catch this error and produce a user-friendly message.
You can find an example of how to use and implement this on the blog of Patrick Wolf of the APEX development team:
apex-4-1-error-handling-improvements-part-1/
apex-4-1-error-handling-improvements-part-2/

SQLite INSERT command returns error "column number is not unique"

I have a text file with rows (lines). Each row is a record in a database table. I read this file and fill the database.
Table creation command:
CREATE TABLE gosts(number TEXT PRIMARY KEY, userNumber TEXT, status TEXT, date TEXT, title TEXT, engTitle TEXT, description TEXT, mainCategory INTEGER, category INTEGER, subCategory INTEGER);
Insert query:
INSERT INTO gosts VALUES ("30331.8-95", "ÃÎÑÒ 30331.8-95", "Äåéñòâóþùèé", "01.07.1996", "Ýëåêòðîóñòàíîâêè çäàíèé. ×àñòü 4. Òðåáîâàíèÿ ïî îáåñïå÷åíèþ áåçîïàñíîñòè. Îáùèå òðåáîâàíèÿ ïî ïðèìåíåíèþ ìåð çàùèòû äëÿ îáåñïå÷åíèÿ áåçîïàñíîñòè. Òðåáîâàíèÿ ïî ïðèìåíåíèþ ìåð çàùèòû îò ïîðàæåíèÿ ýëåêòðè÷åñêèì òîêîì", "Electrical installations of buildings. Part 4. Protection for safety. Applisation of protective measues for safety. Measures of protection against electric shock", "Íàñòîÿùèé ñòàíäàðò óñòàíàâëèâàåò îáùèå òðåáîâàíèÿ ïî ïðèìåíåíèþ ìåð çàùèòû äëÿ îáåñïå÷åíèÿ áåçîïàñíîñòè è òðåáîâàíèÿ ïî ïðèìåíåíèþ ìåð çàùèòû îò ïîðàæåíèÿ ýëåêòðè÷åñêèì òîêîì ïðè ýêñïëóàòàöèè ýëåêòðîóñòàíîâîê çäàíèé", 37, 333, 628);
Please ignore the encoding problems. The source file is cp1251-encoded, but the insert sample above was taken from the console. I tried to use UTF-8 but had the same problem.
The SQLite code that runs the query above:
if(sqlite3_prepare_v2(database, query, -1, &statement, 0) == SQLITE_OK) {
...
}
The function call doesn't return SQLITE_OK, and I get the error message with:
string error = sqlite3_errmsg(database);
if(error != "not an error") cout << query << " " << error << endl;
Strangely, some records are inserted without error, and I can't find any difference between the good and the bad records.
I can provide more information if needed.
I would bet that the difference between the good and bad rows is whether or not the value in the 'number' column was already in the table.
This is one of the reasons that table designs usually do not use TEXT-valued columns as PRIMARY KEYs.
If it is possible to re-create your table, I would create an ID column responsible for being the PRIMARY KEY of your table and make it auto-incrementing (in SQLite, an INTEGER PRIMARY KEY column does this).
That should prevent insertion failures caused by duplicate values in the 'number' column.
If values in the 'number' column must still be unique, then you should add a UNIQUE constraint on that column.
NOTE: the UNIQUE constraint will yield the same error you are currently receiving, as it appears you are trying to add multiple rows with the same value in the 'number' column.
Review the SQLite CREATE TABLE documentation for more details.
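As a hedged sketch of how the failure can be detected from the C/C++ code in the question: the constraint is checked when the statement is executed, so test the return value of sqlite3_step for SQLITE_CONSTRAINT (the database and query variables below are the ones from your snippet):
sqlite3_stmt *stmt = nullptr;
if (sqlite3_prepare_v2(database, query, -1, &stmt, 0) == SQLITE_OK) {
    int rc = sqlite3_step(stmt);          // the PRIMARY KEY/UNIQUE check happens here
    if (rc == SQLITE_DONE) {
        // row inserted
    } else if (rc == SQLITE_CONSTRAINT) {
        // duplicate value in the 'number' column - skip or report it
        cout << "Skipping duplicate record: " << sqlite3_errmsg(database) << endl;
    } else {
        cout << "Insert failed: " << sqlite3_errmsg(database) << endl;
    }
} else {
    cout << "Prepare failed: " << sqlite3_errmsg(database) << endl;
}
sqlite3_finalize(stmt);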

What's the right pattern for unique data in columns?

I have a table [File] that has the following schema:
CREATE TABLE [dbo].[File]
(
[FileID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](256) NOT NULL,
CONSTRAINT [PK_File] PRIMARY KEY CLUSTERED
(
[FileID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
The idea is that the FileID is used as the key for the table and the Name is the fully qualified path that represents a file.
What I've been trying to do is create a stored procedure that checks whether the Name is already in use; if so, it uses that record, otherwise it creates a new record.
But when I stress-test the code with many threads executing the stored procedure at once, I get different errors.
This version of the code will create a deadlock and throw a deadlock exception on the client.
CREATE PROCEDURE [dbo].[File_Create]
@Name varchar(256)
AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION xact_File_Create
SET XACT_ABORT ON
SET NOCOUNT ON
DECLARE @FileID int
SELECT @FileID = [FileID] FROM [dbo].[File] WHERE [Name] = @Name
IF @@ROWCOUNT=0
BEGIN
INSERT INTO [dbo].[File]([Name])
VALUES (@Name)
SELECT @FileID = [FileID] FROM [dbo].[File] WHERE [Name] = @Name
END
SELECT * FROM [dbo].[File]
WHERE [FileID] = @FileID
COMMIT TRANSACTION xact_File_Create
GO
With this version of the code I end up getting rows with the same data in the Name column.
CREATE PROCEDURE [dbo].[File_Create]
@Name varchar(256)
AS
BEGIN TRANSACTION xact_File_Create
SET NOCOUNT ON
DECLARE @FileID int
SELECT @FileID = [FileID] FROM [dbo].[File] WHERE [Name] = @Name
IF @@ROWCOUNT=0
BEGIN
INSERT INTO [dbo].[File]([Name])
VALUES (@Name)
SELECT @FileID = [FileID] FROM [dbo].[File] WHERE [Name] = @Name
END
SELECT * FROM [dbo].[File]
WHERE [FileID] = @FileID
COMMIT TRANSACTION xact_File_Create
GO
I'm wondering what the right way to do this type of operation is. In general, this is a pattern I'd like to use where the data is unique in either a single column or multiple columns and another column is used as the key.
Thanks
If you are searching heavily on the Name field, you will probably want it indexed (as unique, and maybe even clustered if this is the primary search field). As you don't use the @FileID from the first select, I would just select count(*) from file where Name = @Name and see if it is greater than zero (this will prevent SQL from retaining any locks on the table from the search phase, as no columns are selected).
You are on the right course with the SERIALIZABLE level, as your action will impact subsequent queries' success or failure with the Name being present. The reason the version without that setting causes duplicates is that two selects ran concurrently and both found there was no record, so both went ahead with their inserts (which creates the duplicate).
The deadlock with the prior version is most likely due to the lack of an index making the search process take a long time. When you load the server down with a SERIALIZABLE transaction, everything else has to wait for the operation to complete. The index should make the operation fast, but only testing will indicate if it is fast enough. Note that you can respond to the failed transaction by resubmitting: in real-world situations, hopefully the load will be transient.
EDIT: By making your table indexed, but not using SERIALIZABLE, you end up with three cases:
Name is found, ID is captured and used. Common
Name is not found, inserts as expected. Common
Name is not found, insert fails because another exact match was posted within milliseconds of the first. Very Rare
I would expect this last case to be truly exceptional, so using an exception to capture this very rare case would be preferable to engaging SERIALIZABLE, which has serious performance consequences.
If you do really have an expectation that it will be common to have posts within milliseconds of one another of the same new name, then use a SERIALIZABLE transaction in conjunction with the index. It will be slower in the general case, but faster when these posts are found.
First, create a unique index on the Name column. Then, from your client code, check whether the Name exists by selecting the FileID with the Name in the WHERE clause; if it does, use that FileID. If not, insert a new one.
Using EXISTS might clean things up a little:
IF EXISTS (SELECT * FROM table_name WHERE column_name = @param)
BEGIN
-- use existing file name
END
ELSE
BEGIN
-- use new file name
END