Update Database MFC C++ ODBC CRecordset

I'm developing an MFC application (SDI) to update, add and delete rows in a database table called security.
The problem is that after updating one row in the table, the row is updated (I verified this), but when I perform another action (updating another row or deleting a row) the first update is cancelled. I don't know whether the problem is with the CRecordset or with the database itself.
// m_pSet is an instance of a class derived from CRecordset
m_pSet->Open();
m_pSet->Edit();                                   // put the current record into edit mode
m_pSet->m_Security_Id = sec->SecurityId;
m_pSet->m_Security_Name = sec->SecurityName;
m_pSet->m_Security_Type_Id = sec->SecurityTypeStringToInt();
if (!m_pSet->Update())                            // write the edit buffer back to the data source
{
    AfxMessageBox(_T("Record not updated; no field values were set."));
}

In my experience with Oracle and SQL Server there is a difference in the way commits happen. The behavior you are seeing implies that the Update is not being implicitly committed.
In Oracle, commits are explicit statements and need to be issued after you have carried out a transaction.
In SQL Server, commits are implicit by default (autocommit) and do not need to be issued after each statement.
That being said, another Stack Overflow question and answer describes two ways of making commits explicit in SQL Server, meaning that without a COMMIT you may lose your transaction.
The first is using BEGIN TRANSACTION, which makes the database wait for a COMMIT statement. From what you have posted, that does not seem to be the case here.
The other way to make commit statements explicit in SQL Server is by changing settings on the connection or database itself. Based on your line of thought, I would check the settings referred to in that post and make sure you have not turned on implicit-transactions mode (which would require an explicit COMMIT).
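For reference, a minimal sketch of the difference (SQL Server syntax; the table and column names are only illustrative, not necessarily the ones in your schema):
-- With implicit transactions ON, every DML statement opens a transaction
-- that stays open until you COMMIT; otherwise the change can be rolled back.
SET IMPLICIT_TRANSACTIONS ON;
UPDATE security SET Security_Name = 'New name' WHERE Security_Id = 1;
COMMIT;   -- required in this mode, or the update is eventually lost

-- With the default autocommit behavior, each statement commits on its own.
SET IMPLICIT_TRANSACTIONS OFF;
UPDATE security SET Security_Name = 'New name' WHERE Security_Id = 1;   -- committed immediately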

Related

Is it possible to run queries in parallel in Redshift?

I wanted to do an insert and an update at the same time in Redshift. For this I insert the data into a temporary table, remove the updated entries from the original table, and then insert all the new and updated entries. Since these operations run concurrently, entries are sometimes duplicated because the delete starts before the insert has finished. With a very long sleep between each operation this does not happen, but then the script is very slow. Is it possible to run queries in parallel in Redshift?
Hope someone can help me, thanks in advance!
You should read up on MVCC (multi-version concurrency control) and transactions. Redshift can only run one query at a time per session, but that is not the issue. You want to COMMIT both changes at the same time (COMMIT is the action that makes changes visible to others). You do this by wrapping your SQL statements in a transaction (BEGIN ... COMMIT) executed in the same session (it is not clear whether you are using multiple sessions). All changes made within the transaction are only visible to the session making them UNTIL COMMIT, at which point ALL the changes made by the transaction become visible to everyone at the same moment.
A few things to watch out for: if your connection is in AUTOCOMMIT mode then you may break out of your transaction early and COMMIT partial results. Also, while you are working inside a transaction your view of the source tables is unchanging (so you see consistent data during your transaction), and that snapshot is not allowed to change under you. This means that if you have multiple sessions changing table data, you need to be careful about the order in which they COMMIT so that each one sees the right version of the data.
begin transaction;
<run the queries in parallel>
end transaction;
In this specific case do this:
create temp table stage (like target);        -- staging table with the same structure as target
insert into stage
select * from source
where source.filter = 'filter_expression';
begin transaction;
delete from target                            -- remove the rows that are about to be replaced
using stage
where target.primarykey = stage.primarykey;
insert into target                            -- add the new and updated rows
select * from stage;
end transaction;                              -- both changes become visible at the same moment
drop table stage;
See:
https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-upsert.html
https://docs.aws.amazon.com/redshift/latest/dg/t_updating-inserting-using-staging-tables-.html

Oracle FK constraints are not enforced

I'm using an Oracle 12c database and want to test out one problem.
When a web service request is carried out, it returns an underlying ORA-02292 error on a constraint (YYY.FK_L_TILSYNSOBJEKT_BEGRENSNING).
Here is the SQL of the constraint on the table:
CONSTRAINT "FK_L_TILSYNSOBJEKT_BEGRENSNING" FOREIGN KEY ("BEGRENSNING")
REFERENCES "XXX"."BEGRENSNING" ("IDSTRING") DEFERRABLE INITIALLY DEFERRED ENABLE NOVALIDATE
The problem is that when I manually delete a row with a valid IDSTRING (present in both tables) from the parent table, it succeeds.
What causes it to behave this way? Is there any other info I should provide?
Not sure if this helps anyone, since it was a fairly simple mistake, but I'll try to make it useful since people asked for an answer.
The keywords DEFERRABLE INITIALLY DEFERRED mean that the constraint is enforced at commit time, not when the individual statement runs. INITIALLY IMMEDIATE, by contrast, does the check right after you issue each statement; that makes bulk updates a bit slower (every statement in the transaction has to be checked against the constraint), whereas with initial deferral the whole batch is checked once at commit and rolled back if there is a violation, so no additional unnecessary checks are issued and something can still be done about it. That is why INITIALLY IMMEDIATE is used less often than initial deferral.
The error ORA-02292, however, is raised only for DELETE statements; knowing that makes your statements a bit easier to debug.
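To illustrate, here is a minimal sketch of how a deferred foreign key behaves (the table and column names are made up for the example, not the ones from the question):
-- Parent/child pair with a deferred foreign key
CREATE TABLE parent (idstring VARCHAR2(20) PRIMARY KEY);
CREATE TABLE child (
    id          NUMBER PRIMARY KEY,
    begrensning VARCHAR2(20),
    CONSTRAINT fk_child_parent FOREIGN KEY (begrensning)
        REFERENCES parent (idstring)
        DEFERRABLE INITIALLY DEFERRED
);

INSERT INTO parent VALUES ('A');
INSERT INTO child VALUES (1, 'A');

-- The delete succeeds immediately because the check is deferred...
DELETE FROM parent WHERE idstring = 'A';

-- ...and the violation only surfaces at commit time, where the
-- transaction is rolled back with ORA-02292 in the error stack.
COMMIT;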

What's more efficient? Read and Write If... or always write to db?

I have a database table with a column that is updated relatively frequently.
The question is:
Is it more efficient to avoid always writing to the database by reading the object first (SELECT ... WHERE) and comparing the values to determine whether an update is even necessary,
or to always just issue an update (UPDATE ... WHERE) without checking the current state?
I think the first approach would be more hassle, as it consists of two DB operations instead of just one, but it would also avoid an unnecessary write.
I also wonder whether we should even think about this, as our DB will most likely not reach 100k records in this table anytime soon, so even if the update were more costly it wouldn't be an issue, but please correct me if I'm wrong.
The database is PostgreSQL 9.6
It will reduce the I/O load on the database if you only perform the updates that are actually necessary.
You can include the test in the UPDATE itself, like this:
UPDATE mytable
SET mycol = 'avalue'
WHERE id = 42
AND mycol <> 'avalue';
The only downside is that triggers will not be fired unless the value really changes, since no row is updated in that case.
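One detail to be aware of: if mycol is nullable, the <> comparison never matches rows where mycol is NULL, so those rows would not be updated. IS DISTINCT FROM handles that case (same table and values as in the example above):
UPDATE mytable
SET mycol = 'avalue'
WHERE id = 42
AND mycol IS DISTINCT FROM 'avalue';   -- also matches rows where mycol is currently NULL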

Can data be changed in my tables that will break batch updating while it modifies multiple rows?

I have an update query that is based on the result of a select, typically returning more than 1000 rows.
If some of these rows are updated by other queries before this update can touch them, could that cause a problem with the records? For example, could they get out of sync with the original query?
If so would it be better to select and update individual rows rather than in batch?
If it makes a difference, the query is being run on Microsoft SQL Server 2008 R2.
Thanks.
No.
A table cannot be updated while something else is in the process of updating it.
Databases use concurrency control and have ACID properties to prevent exactly this type of problem.
I would recommend reading up on isolation levels. The default in SQL Server is READ COMMITTED, which means that other transactions cannot read data that has been updated but not committed by a given transaction.
This means that data returned by your select/update statement will be an accurate reflection of the database at a moment in time.
If you were to change your database to READ UNCOMMITTED, then you could get into a situation where the data from your select/update is out of sync.
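If you want to be explicit about it, the isolation level can be set per session; a quick sketch (T-SQL):
-- READ COMMITTED is already the SQL Server default;
-- READ UNCOMMITTED would allow the dirty reads described above.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;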
If you're selecting first and then updating, you can use a transaction:
BEGIN TRAN
-- your select WITHOUT LOCKING HINT
-- your update based upon select
COMMIT TRAN
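To make that pattern safe, one approach (sketched below with purely illustrative table, key and filter names) is to take update locks while reading, so the rows cannot change between the SELECT and the UPDATE:
BEGIN TRAN

-- UPDLOCK/HOLDLOCK keep the selected rows locked until the transaction ends,
-- so no other session can modify them between the SELECT and the UPDATE.
SELECT id, value
FROM mytable WITH (UPDLOCK, HOLDLOCK)
WHERE value < 100;

UPDATE mytable
SET value = value + 1
WHERE value < 100;

COMMIT TRAN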
However, if you're updating directly from a select, there's no need to worry about it: the single statement runs in its own implicit transaction.
UPDATE mytable
SET value = mot.value
FROM myOtherTable mot
WHERE mytable.id = mot.id   -- join on whatever key relates the two tables (illustrative)
BUT... do NOT do the following; reading the source table WITH (NOLOCK) can feed uncommitted (dirty) data into your update:
UPDATE mytable
SET value = mot.value
FROM myOtherTable mot WITH (NOLOCK)
WHERE mytable.id = mot.id

Postgresql locks deadlock

I am developing a system using Django + PostgreSQL. It's my first time with PostgreSQL, but I chose it because I needed the transaction and foreign key features.
In a certain view I have to lock my tables with AccessExclusiveLock to prevent any read or write during that view. That's because I do some checks on the whole data set before I save/update my entities.
I noticed an intermittent error that happens from time to time. It is caused by a select statement that runs directly after the lock statement and requires an AccessShareLock. I read on the PostgreSQL website that AccessShareLock conflicts with AccessExclusiveLock.
What I can't understand is why this is happening in the first place. Why would PostgreSQL ask for an implicit lock if it already holds an explicit lock that covers the implicit one? The second thing I can't understand is why this view is running on two different PostgreSQL processes. Aren't they supposed to be collected into a single transaction?
Thanks in advance.
In PostgreSQL, instead of acquiring ACCESS EXCLUSIVE locks, I would recommend setting the appropriate transaction isolation level on your session. So, before running your "update", send the following commands to your database:
begin;
set transaction isolation level repeatable read;
-- your SQL commands here
commit;
According to your description, you need the REPEATABLE READ isolation level.
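One caveat with that approach (the table and column below are only illustrative): under REPEATABLE READ, if another session commits a change to a row your transaction subsequently tries to update, PostgreSQL raises a serialization error, and the application is expected to retry the whole transaction:
begin;
set transaction isolation level repeatable read;
-- if a concurrent session already committed a change to this row,
-- this statement fails with
-- "ERROR: could not serialize access due to concurrent update"
-- and the whole transaction has to be rolled back and retried
update mytable set checked = true where id = 42;
commit;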