I have a strange problem with Doctrine 2.5 while trying to update my UserProfile table, where the BusinessActivity column is a foreign key.
CASE 1) USING getReference()
the update works, but the BusinessActivity column is not changed.
$myid = 6;
$businessActivity = $entityManager->getReference('BusinessActivity', $myid);
// $businessActivity proxy object was created correctly with id 6
$userDetails->setBusinessActivity($businessActivity);
$entityManager->merge($userDetails);
// FLUSH AND COMMIT
CASE 2) CREATING OBJECT FROM DB WITH REPOSITORY WORKS
$rep = $entityManager->getRepository('BusinessActivity');
$businessActivity = $rep->findOneBy(array('idActivity' => 6));
$userDetails->setBusinessActivity($businessActivity);
//FLUSH AND COMMIT
Naturally, I already have the id, and I didn't want to execute a query with findOneBy.
Why does this happen?
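For reference, a minimal sketch of how this pattern is often written, assuming $userDetails is a detached entity: merge() does not make the object you pass in managed; it returns a managed copy, and only changes made to that returned instance are tracked on flush.
$managed = $entityManager->merge($userDetails);
// Apply the association on the managed copy returned by merge().
$managed->setBusinessActivity(
    $entityManager->getReference('BusinessActivity', $myid)
);
$entityManager->flush();
If $userDetails is already managed, merge() is unnecessary and flush() alone persists the change.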
I have an API which reads from two main tables, Table A and Table B.
Table A has a column which acts as a foreign key to Table B entries.
Now, inside the API flow, I have a method which runs the logic below.
Raw SQL -> joining Table A with some other tables and fetching entries which have an active status in Table A.
From the result of the previous query, we take the values from the Table A column and fetch the related rows from Table B using Django models.
It looks like this:
query = "Select * from A where status = 1" #Very simplified query just for example
cursor = db.connection.cursor()
cursor.execute(query)
results = cursor.fetchAll()
list_of_values = get_values_for_table_B(results)
b_records = list(B.objects.filter(values__in=list_of_values))
Now there is a background process which will insert or update data in Table A and Table B. That process does everything using models, utilizing
with transaction.atomic():
    do_update_entries()
However, the update is not just updating the old row. It deletes the old row and its related rows in Table B, and then adds new rows to both tables.
Now the problem is that if I run the API and the background job separately, everything is fine, but when both run simultaneously, for many API calls the second query on Table B fails to get any data, because the transactions execute in the following manner:
The Table A raw SQL transaction executes and reads the old data.
The background job runs in a single txn, deletes the old data, and inserts new data with different foreign key values relating it to Table B.
The Table B models read query executes, referring to values already deleted by the previous txn, hence no records.
So, to read everything in a single txn, I have tried the options below:
with transaction.atomic():
    # Raw SQL query for Table A
    # Models query for Table B
This didn't work, and I am still getting the same issue.
So I tried another approach:
transaction.set_autocommit(False)
# Raw SQL query for Table A
# Models query for Table B
transaction.commit()
transaction.set_autocommit(True)
But this didn't work either. How can I run both queries in a single transaction so that the background job's updates do not affect this read process?
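For reference, a minimal sketch of one possible direction, assuming PostgreSQL (where atomic() runs at the default READ COMMITTED level, so each statement sees whatever was committed before it) and reusing get_values_for_table_B from above. Raising the isolation level to REPEATABLE READ pins a snapshot at the first statement, so both reads see the same consistent state:
from django.db import connection, transaction

with transaction.atomic():
    with connection.cursor() as cursor:
        # Must be the first statement in the transaction on PostgreSQL.
        cursor.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
        cursor.execute("SELECT * FROM A WHERE status = 1")
        results = cursor.fetchall()
    list_of_values = get_values_for_table_B(results)
    b_records = list(B.objects.filter(values__in=list_of_values))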
I have found lots of examples of appending to or overwriting a SQL table from an Azure Databricks notebook, but no way to directly update or insert data using a query or otherwise.
For example, I want to update all rows where the (identity column) ID = 1143, so the steps I need to take are:
val srMaster = "(SELECT ID, userid,statusid,bloburl,changedby FROM SRMaster WHERE ID = 1143) srMaster"
val srMasterTable = spark.read.jdbc(url=jdbcUrl, table=srMaster,
  properties=connectionProperties)
srMasterTable.createOrReplaceTempView("srMasterTable")
val srMasterTableUpdated = spark.sql("SELECT userid,statusid,bloburl,140 AS changedby FROM srMasterTable")
import org.apache.spark.sql.SaveMode
srMasterTableUpdated.write.mode(SaveMode.Overwrite)
.jdbc(jdbcUrl, "[dbo].[SRMaster]", connectionProperties)
Is there any other suitable way to achieve the same?
Note: the above code also does not work, failing with SQLServerException: Could not drop object 'dbo.SRMaster' because it is referenced by a FOREIGN KEY constraint., so it looks like it drops the table and recreates it, which is not at all a solution.
You can use an INSERT with a SELECT ... FROM statement.
Example: insert values selected from another table into this table where a column matches:
INSERT INTO srMaster
SELECT userid, statusid, bloburl, 140 FROM srMasterTable WHERE ID = 1143;
or
set new values on rows where one of the existing column values matches:
UPDATE srMaster SET userid = 1, statusid = 2, bloburl = 'https://url', changedby = 'user' WHERE ID = 1143;
or just insert multiple values:
INSERT INTO srMaster VALUES
(1, 10, 'https://url1','user1'),
(2, 11, 'https://url2','user2');
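If the statement needs to run from the notebook itself, here is a minimal sketch using plain JDBC, reusing jdbcUrl and connectionProperties from the question (Spark's DataFrame writer can only append or overwrite whole tables, so a row-level UPDATE has to go through a direct connection; the statement text just mirrors the question's example):
import java.sql.DriverManager

val conn = DriverManager.getConnection(jdbcUrl, connectionProperties)
try {
  // Row-level UPDATE that the DataFrame writer cannot express.
  val stmt = conn.prepareStatement(
    "UPDATE dbo.SRMaster SET changedby = ? WHERE ID = ?")
  stmt.setInt(1, 140)
  stmt.setInt(2, 1143)
  stmt.executeUpdate()
  stmt.close()
} finally {
  conn.close()
}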
In SQL Server, you cannot drop a table if it is referenced by a FOREIGN KEY constraint. You have to either drop the child tables before removing the parent table, or remove the foreign key constraints.
For a parent table, you can use the query below to get the foreign key constraint names and the referencing child table names:
SELECT name AS 'Foreign Key Constraint Name',
OBJECT_SCHEMA_NAME(parent_object_id) + '.' + OBJECT_NAME(parent_object_id) AS 'Child Table'
FROM sys.foreign_keys
WHERE OBJECT_SCHEMA_NAME(referenced_object_id) = 'dbo' AND
OBJECT_NAME(referenced_object_id) = 'PARENT_TABLE'
Then you can alter the child table and drop the constraint by its name using the below statement:
ALTER TABLE dbo.childtable DROP CONSTRAINT FK_NAME;
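For instance, if the query above returned the constraint name FK_SRDetail_SRMaster with child table dbo.SRDetail (hypothetical names, for illustration only), the statement would be:
ALTER TABLE dbo.SRDetail DROP CONSTRAINT FK_SRDetail_SRMaster;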
I have a SQL file where I write statements to run on each release; this file contains statements like:
-- =======================2019-02-01=======================
UPDATE rating set stars = 3 where id = 6;
UPDATE users SET status = 'A' where last_login >= '2019-01-01';
INSERT INTO....
-- =======================2019-02-15=======================
UPDATE rating set stars = 3 where id = 6;
UPDATE users SET status = 'A' where last_login >= '2019-01-01';
INSERT INTO....
I run specific statements on each release date, but I believe that is bad practice and not a scalable method.
I'm trying to change this approach to Knex seeds or migrations. What would be the best practice for doing that?
Seeds are a problem because Knex executes them every time I run the command knex seed:run, and that produces some errors.
Knex stores the filenames and signatures of what it has executed so that it does not need to run them again.
https://knexjs.org/#Installation-migrations
Programmatically you can execute migrations like this:
knex({..config..}).migrate.latest({
  directory: 'migrations', // where the files are stored
  tableName: 'knex_migrations' // where knex saves its records
});
Example migration file
exports.up = function(knex) {
  return knex.raw(`
    UPDATE rating set stars = 3 where id = 6;
    UPDATE users SET status = 'A' where last_login >= '2019-01-01';
    INSERT INTO....
  `);
};

// Data fixes like these are rarely reversible, so down can be a no-op.
exports.down = function(knex) {
  return Promise.resolve();
};
The files will be executed alphabetically/sorted, and will not be re-executed against the same database.
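For the release workflow in the question, each release date would get its own migration file, created with the standard Knex CLI (the file name below is illustrative):
knex migrate:make release-2019-02-01
knex migrate:latest
The generated timestamp prefix keeps the files sorted in release order, and the knex_migrations table records which ones have already run.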
I am using Cassandra 3.9 and the DataStax C++ driver 2.6. I have created a table that has only a primary key and static columns. I am able to insert data into the table, but I am not able to update it, and I don't know why. As an example, I created the table t that is defined here:
Cassandra Table with primary key and static column
Then I successfully inserted data into the table with the following CQL insert command:
"insert into t (k, s, i) VALUES('George', 'Hello', 2);"
Then, "select * from t;" results in the following:
k | i | s
-------+---+-------
George | 2 | Hello
However, if I then try to update the table using the following command:
"UPDATE t set s = "World" where k = "George";"
I get the following error:
SyntaxException: line 1:26 no viable alternative at input 'where' (UPDATE t set s = ["Worl]d" where...)
Does anyone know how to update a table with only static columns and a primary key (i.e. partition key + clustering key)?
Enclose strings in single quotes; in CQL, double quotes are reserved for case-sensitive identifiers, not string literals, which is why the parser rejects "World".
Example:
UPDATE t set s = 'World' where k = 'George';
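Since the question uses the DataStax C++ driver, here is a minimal sketch of the same update through the driver, assuming an already-connected CassSession* named session; binding the values sidesteps the quoting problem entirely:
CassStatement* statement =
    cass_statement_new("UPDATE t SET s = ? WHERE k = ?", 2);
cass_statement_bind_string(statement, 0, "World");
cass_statement_bind_string(statement, 1, "George");

CassFuture* future = cass_session_execute(session, statement);
cass_future_wait(future);
/* A real program should also check cass_future_error_code(future). */
cass_future_free(future);
cass_statement_free(statement);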
Hi all, I have the following code:
tx = session.beginTransaction();
Query query = session.createQuery("UPDATE com.nisid.entities.Payment set amount=:amount,paymentMethod=:method,paymentExpire=:expireDate"
+ "Where paymentId=:payid,actionId=:actionid");
query.setParameter("amount", amount);
query.setParameter("method", method);
query.setParameter("expireDate", expireDate);
query.setParameter("payid", projectId);
query.setParameter("actionid", actionId);
int result = query.executeUpdate();
I am trying to do an update using HQL, but I am getting the error: IllegalArgumentException: node to traverse cannot be null!
My table in the DB is called Payment and it has a COMPOSITE KEY (projectId, actionId).
Could you please help me further?
The concept is that I have a JSP page which retrieves and displays results from the DB, pulling info from the Project table, Payment table, and Action table. Project has a many-to-many relationship with Action, and I am using the Payment table as the intermediary table which holds the two foreign keys of the other tables.
You missed a space before Where, and you need to replace the comma with and after it:
Query query = session.createQuery("UPDATE com.nisid.entities.Payment set amount=:amount,paymentMethod=:method,paymentExpire=:expireDate"
+ " Where paymentId=:payid and actionId=:actionid");