We're using FluentMigrator in one project. Let's say I have code like the one below.
Every time we run a new migration, all the previous data is deleted. Is there a way to avoid this and keep the data safe in the places that are not changing?
public class Migration1 : Migration
{
    public override void Up()
    {
        Create.Table("Project")
            .WithColumn("id").AsInt64().PrimaryKey().Identity()
            .WithColumn("name").AsString(30).Nullable()
            .WithColumn("author").AsString(30).Nullable()
            .WithColumn("date").AsDate().Nullable()
            .WithColumn("description").AsString(1000).Nullable();
        Create.Table("Data")
            .WithColumn("id").AsInt64().PrimaryKey().Identity()
            .WithColumn("project_id").AsInt64().ForeignKey("Project", "id")
            .WithColumn("a").AsInt32().Nullable()
            .WithColumn("b").AsInt32().Nullable()
            .WithColumn("c").AsInt32().Nullable()
            .WithColumn("d").AsInt32().Nullable();
    }
    public override void Down()
    {
        // Drop the child table first so its foreign key does not block the drop,
        // and match the casing of the names used in Up().
        Delete.Table("Data");
        Delete.Table("Project");
    }
}
As part of your Down method you could create backup tables which are identical to the tables you are deleting, but postfixed with a timestamp. E.g.:
Project_201407091059
Data_201407091059
You could then copy all the data from the tables being deleted into these backup tables.
I am working on a project where we store the log details for each run.
We are planning to use FlatBuffers for this.
This is my FlatBuffers schema:
table logData
{
    Id:int;
    attemptId:int;
    line:string;
}
table log
{
    maxLimit:int;
    counter:int;
    job:[logData]; // vector of tables
}
Now for the first run we just add data to the table using the helper functions provided by the auto-generated files:
flatbuffers::FlatBufferBuilder fbb;
auto data = fbb.CreateVector(some_vector); // nested vectors must be created before the table builder starts
logBuilder builder(fbb);
builder.add_maxLimit(10);
builder.add_job(data);
Now for the second run we have new data. Is there any way to append more data to the vector job while keeping the old data intact?
Using WSO2 BPS 3.6.0 - is there a (standard) way to update an instance variable in an already running instance?
The reason behind this is that the client passes incorrect data at process initialization; the client may fix its data later, but the process instance remembers the wrong values.
I believe I could still update the data in the database directly, but I wouldn't like to see process admins messing with the database.
Edit:
I am working with the BPEL engine and my idea is to update a variable not from a process design, but as a corrective action (admin console? api?)
Thank you for all ideas.
You are setting the instance variables during process initialization based on the client's request.
For your requirement, the variables need to be retrieved from the request at execution time. You can do this by using the execution entity to read the data instead of the instance variables that were set during process initialization.
Refer to the example below:
public class SampleTask implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        // getVariable returns Object, so cast to the expected type
        String userId = (String) execution.getVariable("userId");
        // perform your logic here
    }
}
If you want to keep using the instance variables, I suggest you change the instance variable during process execution:
public class SampleTask implements JavaDelegate {
    private String userId;
    public void execute(DelegateExecution execution) throws Exception {
        String newUserId = (String) execution.getVariable("userId");
        setUserId(newUserId);
        // perform your logic here
    }
    public void setUserId(String userId) {
        this.userId = userId;
    }
    public String getUserId() {
        return userId;
    }
}
I am trying to delete a row from QSqlQueryModel as follows:
void MainWindow::deleteRecord()
{
    int row_index = ui->tableView->currentIndex().row();
    model->removeRow(row_index);
}
But it is not working.
I tried the following as well:
void MainWindow::deleteRecord()
{
    int row_index = ui->tableView->currentIndex().row();
    if (!db_manager->delete_record(QString::number(row_index))) {
        ui->appStatus->setText("Error: data deletion ...");
    } else {
        ui->appStatus->setText("Record deleted ...");
    }
}
where in db_manager the function delete_record(QString row_index) is:
bool DatabaseManager::delete_record(QString row_index)
{
    QSqlQuery query;
    query.prepare("DELETE FROM personal_Info WHERE ref_no = (:ref_no)");
    query.bindValue(":ref_no", row_index);
    if (!query.exec())
    {
        qDebug() << "Error" << query.lastError().text();
        return false;
    }
    return true;
}
But this is not working either. In both attempts the application doesn't crash and there are no SQLite errors.
What am I doing wrong and how can I fix it?
The first approach is failing because QSqlQueryModel does not implement removeRows. You're not checking its return value (bad! bad! bad!), which is false, meaning failure.
And how could it possibly implement a row removal function? Your SQL query can be literally anything, including result sets for which it does not make any sense to remove rows.
Instead, consider using a QSqlTableModel -- it may or may not apply to your case, but given the form of your DELETE statement, I would say it does. (A QSqlTableModel only shows the contents of one table / view.)
The second approach may instead already be working. The fact that you don't see your UI updated does not imply anything at all -- you should check the actual database contents to see whether the DELETE statement actually worked and deleted something.
Now note that there's nothing coming from the database telling Qt to update its views. You need to set up that infrastructure yourself. Modern databases support triggers and signalling systems (wrapped in Qt by QSqlDriver::notification), which can be used for this purpose. Otherwise you must manually trigger a refresh of your SQL model, for instance by calling QSqlTableModel::select().
I am importing data into a new Symfony2 project using Doctrine2 ORM.
All new records should have an auto-generated primary key. However, for my import, I would like to preserve the existing primary keys.
I am using this as my Entity configuration:
type: entity
id:
  id:
    type: integer
    generator: { strategy: AUTO }
I have also created a setter for the id field in my entity class.
However, when I persist and flush this entity to the database, the key I manually set is not preserved.
What is the best workaround or solution for this?
The following answer is not mine but OP's, which was posted in the question. I've moved it into this community wiki answer.
I stored a reference to the Connection object and used that to manually insert rows and update relations. This avoids the persister and identity generators altogether. It is also possible to use the Connection to wrap all of this work in a transaction.
Once you have executed the insert statements, you may then update the relations.
This is a good solution because it avoids any potential problems you may experience when swapping out your configuration on a live server.
In your init function:
// Get the Connection
$this->connection = $this->getContainer()->get('doctrine')->getEntityManager()->getConnection();
In your main body:
// Loop over my array of old data, adding records
$this->connection->beginTransaction();
foreach (array_slice($records, 1) as $record)
{
    $this->addRecord($records[0], $record);
}
try
{
    $this->connection->commit();
}
catch (Exception $e)
{
    $output->writeln($e->getMessage());
    $this->connection->rollBack();
    exit(1);
}
Create this function:
// Add a record to the database using Connection
protected function addRecord($columns, $oldRecord)
{
    // Insert data into the Record table
    $record = array();
    foreach ($columns as $key => $column)
    {
        $record[$column] = $oldRecord[$key];
    }
    $record['id'] = $record['rkey'];
    // Insert the data
    $this->connection->insert('Record', $record);
}
You've likely already considered this, but my approach would be to set the generator strategy to 'none' for the import so you can manually import the existing id's in your client code. Then once the import is complete, change the generator strategy back to 'auto' to let the RDBMS take over from there. A conditional can determine whether the id setter is invoked. Good luck - let us know what you end up deciding to use.
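If you follow this approach, the import-time mapping could look like the sketch below, assuming the same YAML mapping structure shown in the question. The NONE strategy tells Doctrine to use whatever id your code assigns instead of generating one:

```yaml
type: entity
id:
  id:
    type: integer
    # During the import, let the application assign ids;
    # switch this back to { strategy: AUTO } afterwards.
    generator: { strategy: NONE }
```

After the import completes, revert the strategy to AUTO so the RDBMS resumes generating keys for new records.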
I am developing a large-scale application in C++. The application also maintains a database (currently I am using MySQL); I use OTL for database connectivity. What I want to do now is provide support for databases from multiple vendors, e.g. user A uses MySQL and user B uses Postgres. I thought about how to implement this in C++ but didn't come up with a possible solution, due to lack of experience.
What I want to achieve, is something like that:
There would be a separate VC Project that deals with database and suppose it contains following files:
DataAccessLayer.cpp // This will be the main entry point of the project
Product.cpp         // This deals with the Product table
Customer.cpp        // This deals with the Customer table
Orders.cpp          // This deals with the Orders table
... and many more   // I want to have one cpp file per database table
And we will use the above project in our code like this
DataAccessLayer oDataAccessLayer;
oDataAccessLayer.Connect(); // Connects to the specified database; this might be an abstract class with a concrete class for each supported DB
oDataAccessLayer.Products.Search(/* some parameters here, e.g. the product id to search for */); // I don't want to write the search query again for each database; this function runs the query on the specific database
oDataAccessLayer.Customers.Add(/* parameters */); // Same here: I don't want to write the Add query for each supported database
oDataAccessLayer.Disconnect();
I don't want the whole code I just need some sample code or related article to study.
Your question is a bit ambiguous. However, I will base my answer on the perspective that you want your code to access one physical database, which is configurable to MySQL, PostGres, Oracle, etc.
Generate generic SQL statements.
Instead of accessing the MySQL library for inserting records, build a SQL statement and let the connector execute the statement. This reduces the interface to one small point: the connector. I have my Field and Record classes set up to use the Visitor pattern. I create Visitors to build the SQL statements. This enables my database component to be more generic.
Abstract the DB Connector.
Create your own Connector object as a Facade around the DB manufacturer's connector. Make the methods generic, such as passing a string containing SQL text. Next, change your components to require passing of this facade during construction or when accessing the database. Finally, create an instance of this facade to communicate with specific database applications before using any of your components.
Suggestions
I have found that, by having the Record class contain the table name, I could eliminate the need for a Table class (which models the database table). Appending and loading are handled by my database Manager class. More recently, I implemented functionality for prepared statements and BLOB fields.
Suppositions:
Suppose I have the following two tables in my database:
Project
Docs
And currently I am supporting only the following two databases:
1) PostGres
2) Oracle
Implementation:
I implement this logic in the following three phases:
1) Database Adapter: Only this class is exposed to the outside world. It contains a pointer to each QueryBuilder; these pointers are initialized in the constructor of this class on the basis of the DB type (which is passed as an argument to the constructor).
2) DB Connector: This class is responsible for DB handling (establish connection, disconnect, etc.).
3) Query Builder: This section contains a base class for each table in the DB, and also a concrete class for each specific database.
Sample Code (Synopsis Only)
DBAdapter.h
DBAdapter(enum DBType, string ConnectionString); // constructor
InitializeDBConnectorAndQueryBuilder();
Project *pProject;
Docs *pDocs;
//----------------------------------------------------------------------------------------
DBAdapter.cpp
DBAdapter(enum DBType, string ConnectionString) // constructor
{
    switch (DBType)
    {
    case enPostGres:
        pProject = new Project_PostGres();
        pDocs = new Docs_PostGres();
        break;
    };
}
DB Connector Section:
DBConnector.h
virtual Connect() = 0;     // pure virtual
virtual IsConnected() = 0; // pure virtual
virtual Disconnect() = 0;  // pure virtual
//------------------------------------------------------------------------------------------
DBConnector_PostGres.cpp : DBConnector
Connect()
{ // Implementation specific to PostGres goes here }
IsConnected()
{ // Implementation specific to PostGres goes here }
Disconnect()
{ // Implementation specific to PostGres goes here }
//------------------------------------------------------------------------------------------
DBConnector_Oracle.cpp : DBConnector
Connect()
{ // Implementation specific to Oracle goes here }
IsConnected()
{ // Implementation specific to Oracle goes here }
Disconnect()
{ // Implementation specific to Oracle goes here }
//------------------------------------------------------------------------------------------
Query Builder Section:
Project.h
virtual int AddProject();
virtual int DeleteProject();
virtual int IsProjectTableExist();
DBConnector *pDBConnector; // This pointer will be initialized in all concrete classes
//------------------------------------------------------------------------------------------
Project.cpp
int AddProject()
{
    // Query for adding a Project is the same for all databases
}
int DeleteProject()
{
    // Query for deleting a Project is the same for all databases
}
//------------------------------------------------------------------------------------------
Project_PostGres.cpp : Project
Project_PostGres() // constructor
{
    pDBConnector = new DBConnector_PostGres();
}
int IsProjectTableExist()
{
    // Query specific to PostGres goes here
}
//------------------------------------------------------------------------------------------
Project_Oracle.cpp : Project
int IsProjectTableExist()
{
    // Query specific to Oracle goes here
}
// ... and so on for other databases
//------------------------------------------------------------------------------------------
// The same set of files, with the same logic, will also be created for the Docs table
------------------------------------------------------------------------------------------