I need your kind support.
I have a big project in my Redmine installation with a lot of subprojects in it.
More than 300 issues were moved from this project to other subprojects by mistake, and I have no way to restore them by hand directly from Redmine. However, I have a database dump that was taken before the accident.
So my question is: can I compare the "issues" table from the correct database with the damaged database and move the issues back? Or are there any tools or methods to move the issues back to the right project?
Redmine version is 2.0.4. Database: PostgreSQL.
Thank you in advance.
Plan A:
You can try to analyze the issues table and find all issues that were moved wrongly.
You know the new project_id and you know the approximate timestamp of the change, so you can write an SQL query (or use the Rails console) to undo the action.
For example (code NOT tested!):
new_project_id = Project.find(ID).id # note: ID here is the project identifier, not the numeric record id!
timestamp = DateTime.parse('2013-10-30 12:20:45')
issues = Issue.where(project_id: new_project_id).where('updated_at > ? AND updated_at < ?', timestamp - 1.minute, timestamp + 1.minute)
# double-check that every selected issue really should be updated!
issues.update_all(project_id: old_project_id) # note: old_project_id is the correct numeric id (integer value) of the project record in the DB
Plan B:
You can find all issue ids that have the correct project_id in the good DB, and then run an SQL query on the corrupted DB to set project_id back to the correct value for all issues where id IN (issue_ids).
# load the correct database and start a Rails console
project = Project.find(OLD_ID) # note: OLD_ID here is the project identifier, not the numeric record id!
issue_ids = project.issue_ids
# save issue_ids somewhere
# load the corrupted database and start a Rails console
issue_ids = [saved_array_of_ids_from_previous_step]
Issue.where(id: issue_ids).update_all(project_id: correct_project_id) # note: correct_project_id is the correct numeric id (integer value) of the project record in the DB
What is the best way to fix this kind of database error without having to delete my DB and migration files and start entering data from scratch?
django.db.utils.IntegrityError: The row in table 'store_product_category' with primary key '1' has an invalid foreign key: store_product_category.category_id contains a value '1' that does not have a corresponding value in store_category.id.
While inspecting the SQLite DB I observed that there is a mismatch between the IDs in store_product_category.category_id and store_category.id.
Is there any way I can modify the ID directly in the DB? I don't want to start deleting database files and migrations.
If I've understood right:
The model StoreProductCategory has an FK, category, linking to the model StoreCategory.
You have an SPC record with category == 1 but no record in StoreCategory with this ID?
If so, the fix is reasonably simple.
1. Enter the DB shell using python manage.py dbshell and run an SQL INSERT command to add the appropriate record (an ORM alternative is sketched after this list).
2. Change your model StoreProductCategory and set on_delete for that FK. I would suggest PROTECT might be appropriate here, but it's up to you; just make sure it's something that will keep things consistent.
3. If (2) is already done, I do question how this happened in the first place; that would suggest somebody has modified the DB directly. You may want to investigate who has access to it and what gets done there.
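For step 1, the missing row can also be created from the Django shell instead of raw SQL. A minimal sketch, assuming the app is called store and StoreCategory has a name field (both are assumptions, adjust to your actual schema):
# python manage.py shell
from store.models import StoreCategory  # app label 'store' is an assumption

# Recreate the missing parent row with the primary key the FK points to.
# The name value is a placeholder; use whatever fits your data.
StoreCategory.objects.create(id=1, name='placeholder category')
For step 2, the FK would then look something like category = models.ForeignKey(StoreCategory, on_delete=models.PROTECT).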
I'm attempting to set up a system in Django where I specify the database connection to use at runtime. I suspect I may need to go as low-level as possible, but I want to work within the idioms of Django where I can, perhaps stretching them as far as they will go.
The general premise is that I have a centralised database that stores meta information about datasets, but the actual datasets are created as dynamic models at runtime, in the database in question. I need to be able to specify which database to connect to at runtime in order to extract the data back out...
I have roughly the following idea:
db = {}
db['ENGINE'] = 'django.db.backends.postgresql'
db['OPTIONS'] = {'autocommit': True}
db['NAME'] = my_model_db['database']
db['PASSWORD'] = my_model_db['password']
db['USER'] = my_model_db['user']
db['HOST'] = my_model_db['host']
logger.info("Connecting to database {db} on {host}".format(db=db['NAME'], host=db['HOST']))
connections.databases['my_model_dynamic_db'] = db
DynamicObj.objects.using('my_model_dynamic_db').all()
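In slightly fuller form, this is what I imagine, wrapped in a helper. This is only a sketch under my assumptions: the alias name is arbitrary, DynamicObj is the dynamic model mentioned above, and the cleanup at the end is simply my guess at good hygiene:
from django.db import connections

def query_dynamic_db(my_model_db):
    alias = 'my_model_dynamic_db'  # arbitrary alias name
    connections.databases[alias] = {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': my_model_db['database'],
        'USER': my_model_db['user'],
        'PASSWORD': my_model_db['password'],
        'HOST': my_model_db['host'],
    }
    try:
        # settings not given here are filled in with defaults the first
        # time the alias is actually used
        return list(DynamicObj.objects.using(alias).all())
    finally:
        connections[alias].close()        # close the underlying connection
        del connections.databases[alias]  # and forget the alias again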
Has anyone achieved this? And how?
We are working to upgrade our application to a more current version of Ruby & Rails. Our app integrates with a legacy database (SQL Server 2008 R2) that has a table with a column of the image data type (we are unable to change this column to varbinary(max)). Previously we were able to save a binary into the image column; however, now we are getting conversion errors.
We are working to upgrade to the following (among others):
Rails 4.2.1
ActiveRecord_SQLServer_Adapter (4.2.4)
tiny_tds (0.6.3.rc1)
FreeTDS (v0.91.112)
When we now attempt to save into the image column, we get errors similar to:
TinyTds::Error: Unclosed quotation mark after the character string
Researching various issues within tiny_tds & activerecord_sqlserver_adapter, we decided to create a second table that matched the first but changed the data type from image to varbinary(max). We can save a binary into that column without any problem.
The code causing the challenge is in a background job where we grab images from S3, store them locally, and then push the image into the database. Again, we don't control the legacy database and thus can't change the data type (or confront the issue of why we are storing the image in the DB in the first place).
...
@d = Doc.new
...
# download the image from S3 and cache it locally (uses open-uri)
open("#{Rails.root}/cache/pictures/image.png", "wb") do |file|
  file << open(r.image.url).read
end
@d.document = File.binread("#{Rails.root}/cache/pictures/image.png")
@d.save!
Given that the upgrade has broken saving images, we are trying to figure out how best to fix it. We could obviously roll back until we find a version that works, but we would prefer to find a proper fix. Does anyone have any ideas?
Update:
We added the following configuration as we had triggers on the table being inserted: ActiveRecord::ConnectionAdapters::SQLServerAdapter.use_output_inserted = true
When we remove this configuration we get the following error:
TinyTds::Error: The target table 'doc' of the DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause.
Note: We are unable to make any modifications to the triggers.
Per feedback on the ActiveRecord_SQLServer_Adapter site, we rolled back to 4.1.11 and we are now able to save into the image column.
We also had to add this snippet to overcome the issue with the triggers.
I have a problem with a Django application. Queries on the model Scope are extremely slow and after some debugging I still have no clue where the problem lies.
When I query the DB like scope = Scope.objects.get(pk='Esoterik I'), it takes 5 to 10 seconds. The database has fewer than 10 entries and an index on the primary key, so it is way too slow. When executing an equivalent query directly on the DB, like SELECT * FROM scope WHERE title='Esoterik I';, everything is OK and it takes only about 50ms.
The same problem happens if I query a set of results like scope_list = Scope.objects.filter(members=some_user) and then call print(scope_list) or iterate over the list elements. The query itself only takes a few ms, but printing or iterating over the elements again takes 5 to 10 seconds, even though the set has only two entries.
The database backend is PostgreSQL. The same problem occurs on the local development server and under Apache.
Here is the code of the model:
class Scope(models.Model):
    title = models.CharField(primary_key=True, max_length=30)
    ## the semester the scope is linked with
    assoc_semester = models.ForeignKey(Semester, null=True)
    ## the grade of the scope. can be null if the scope is not a class
    assoc_grade = models.ForeignKey(Grade, null=True)
    ## the timetable of the scope. can be null if the scope is not directly associated with a class
    assoc_timetable = models.ForeignKey(Timetable, null=True)
    ## the associated subject of the scope
    assoc_subject = models.ForeignKey(Subject)
    ## the calendar of the scope
    assoc_calendar = models.ForeignKey(Calendar)
    ## the usergroup of the scope
    assoc_usergroup = models.ForeignKey(Group)
    members = models.ManyToManyField(User)
    unread_count = None
Update:
Here is the output of the Python profiler. It seems that query.py was called 1.6 million times, which is a little too much.
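(For reference, this is roughly how such a profile can be captured from the Django shell; the cProfile call is an illustration added here rather than the exact command used, and the app label is assumed:)
# inside manage.py shell
import cProfile
from django.contrib.auth.models import User
from myapp.models import Scope  # app label assumed

some_user = User.objects.all()[0]
qs = Scope.objects.filter(members=some_user)

# profile forcing the queryset and printing the results
cProfile.runctx('print(list(qs))', globals(), locals())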
You should first try to isolate the problem. Run manage.py shell and execute the following:
scope = Scope.objects.get(pk='Esoterik I')
print scope
Django querysets are lazy: they are not executed until they absolutely have to be. That is to say, if you're experiencing slowness after the first line, the problem is somewhere in the creation of the query, which would suggest problems with the object manager. The next step would be to try to execute raw SQL through Django and make sure the problem is really with the manager and not a bug in Django in general.
If you're experiencing slowness with the second line, the problem is either with the actual execution of the query or with the display/printing of the data. You can force-execute the query without printing it (check the documentation; a sketch is shown below) to find out which one it is.
That's as far as I understand it, but I think the best way to solve this is to break the process down into different parts and find out which part is causing the slowness.
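For example, a minimal sketch of that separation (the timing calls are mine, the app label is assumed, and some_user stands for any User instance from the question):
# run inside manage.py shell
import time
from django.contrib.auth.models import User
from myapp.models import Scope  # app label assumed

some_user = User.objects.all()[0]
qs = Scope.objects.filter(members=some_user)  # building the queryset: no SQL yet

start = time.time()
results = list(qs)  # forces the query to execute, nothing is printed
print('query execution: %.3fs' % (time.time() - start))

start = time.time()
for scope in results:
    str(scope)  # exercises the model's string conversion without console output
print('string conversion: %.3fs' % (time.time() - start))
If the first block is slow, the problem is in the query itself; if the second one is, it is in how the objects are turned into strings (e.g. a __unicode__/__str__ method doing extra work).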
To be sure about the database execution time, it is better to test the queries generated by Django, since a Django-generated query may not be a simple SELECT * from blah blah.
To see the Django-generated query:
somedata = Scope.objects.filter(pk='Esoterik I') # you must use filter here
print somedata.query
This will display the complete query generated by Django. Then copy it, open a PostgreSQL console, and use PostgreSQL's analysis tools:
EXPLAIN ANALYZE <your django query here>;
like:
EXPLAIN ANALYZE SELECT * FROM someapp_scope WHERE title = 'Esoterik I';
EXPLAIN shows the planner's estimated execution plan, while adding ANALYZE actually runs the query and includes the real execution times in the output.
You can also see in those results whether PostgreSQL uses any index during query execution.
I've been looking for a way to define database tables and alter them via a Django API.
For example, I'd like to be able to write some code which directly manipulates table DDL, allowing me to define tables or add columns to a table on demand programmatically (without running a syncdb). I realize that django-south and django-evolution may come to mind, but I don't really think of those as tools meant to be integrated into an application and used by an end user... rather, they are utilities for upgrading your database tables. I'm looking for something where I can do something like:
class MyModel(models.Model): # wouldn't run syncdb.. instead do something like below
    a = models.CharField()
    b = models.CharField()
model = MyModel()
model.create() # this runs the create table (instead of a syncdb)
model.add_column(c = models.CharField()) # this would set a column to be added
model.alter() # and this would apply the alter statement
model.del_column('a') # this would set column 'a' for removal
model.alter() # and this would apply the removal
This is just a toy example of how such an API could work, but the point is that I'd be very interested in finding out whether there is a way to programmatically create and change tables like this. This might be useful for things such as content management systems, where one might want to dynamically create a new table. Another example would be a site that stores datasets of arbitrary width, for which tables need to be generated dynamically by the interface or by data imports. Does anyone know any good ways to dynamically create and alter tables like this?
(Granted, I know one can do direct SQL statements against the database, but that solution lacks the ability to treat the databases as objects)
Just curious as to if people have any suggestions or approaches to this...
You can try to interface with Django's code that manages changes in the database. It is a bit limited (no ALTER, for example, as far as I can see), but you may be able to extend it. Here's a snippet from django.core.management.commands.syncdb:
for app in models.get_apps():
    app_name = app.__name__.split('.')[-2]
    model_list = models.get_models(app)
    for model in model_list:
        # Create the model's database table, if it doesn't already exist.
        if verbosity >= 2:
            print "Processing %s.%s model" % (app_name, model._meta.object_name)
        if connection.introspection.table_name_converter(model._meta.db_table) in tables:
            continue
        sql, references = connection.creation.sql_create_model(model, self.style, seen_models)
        seen_models.add(model)
        created_models.add(model)
        for refto, refs in references.items():
            pending_references.setdefault(refto, []).extend(refs)
            if refto in seen_models:
                sql.extend(connection.creation.sql_for_pending_references(refto, self.style, pending_references))
        sql.extend(connection.creation.sql_for_pending_references(model, self.style, pending_references))
        if verbosity >= 1 and sql:
            print "Creating table %s" % model._meta.db_table
        for statement in sql:
            cursor.execute(statement)
        tables.append(connection.introspection.table_name_converter(model._meta.db_table))
Take a look at connection.creation.sql_create_model. The creation object is created in the database backend relevant to the database you are using in your settings.py. All of them are under django.db.backends.
If you must have ALTER TABLE, I think you can create your own custom backend that extends an existing one and adds this functionality. Then you can interface with it directly through an ExtendedModelManager you create.
Quickly off the top of my head..
Create a Custom Manager with the Create/Alter methods.
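As a side note going beyond the snippet above (my addition, not part of the original answers): newer Django versions (1.7+) expose this kind of DDL through the schema editor, which also covers ALTER-style changes. A minimal sketch, assuming a MyModel similar to the toy example in the question:
from django.db import connection, models

with connection.schema_editor() as editor:
    editor.create_model(MyModel)  # emits CREATE TABLE for the model

    # add a new column 'c' to the existing table (ALTER TABLE ... ADD COLUMN)
    c = models.CharField(max_length=100, default='')
    c.set_attributes_from_name('c')
    editor.add_field(MyModel, c)
remove_field and alter_field exist alongside these for dropping or changing columns.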