The situation I have is that our normal Rails DB user has full ownership so that it can run migrations.
However, we use a shared DB for development, so we can't allow "destructive" DB tasks to be run against the development DB, such as rake db:drop, db:reset, etc.
My thought is to create 2 DB users:
rails-service
rails-migrator
The service user is the "normal" web app user that connects to the DB when the app is live. This DB user would only have standard CRUD privileges but no dropping rights.
The migrator user is the "admin" user that is only used for running migrations. This DB user would have normal "full" access to the DB such that it "could" drop the DB if that command were executed.
Question: Is there a clean way to tell Rails migrations to run as the rails-migrator user? I'm not sure how I would accomplish this aside from somehow altering the connection strings for every rails migration file, which seems like a bad idea.
In tandem with the above, I'm going to "delete" the destructive rake tasks so that a developer can't even run them.
# lib/tasks/db.rake
# See: https://coderwall.com/p/jt4e1q/disable-destructive-rake-tasks-by-environment
tasks = Rake.application.instance_variable_get '@tasks'
tasks.delete 'db:reset'
tasks.delete 'db:drop'

namespace :db do
  desc 'db:reset not available in this environment'
  task :reset do
    puts 'db:reset has been disabled'
  end

  desc 'db:drop not available in this environment'
  task :drop do
    puts 'db:drop has been disabled'
  end
end
I refer you to the answer of Matthew Rudy Jacobs from 2007 (!): https://www.ruby-forum.com/topic/123618
Luckily enough, it still works now :)
I just changed defined? and the rest to ENV['AS_DB_ADMIN'] and used it to separate migration access out to another user.
On migration I used
set :default_env, { as_db_admin: true }
I'm developing a C++ web application and I'm using PostgreSQL with libpqxx, on Ubuntu 16.04. The issue I'm having is that I need a portable (across Linux systems) way to run my unit tests.
What I have now is a class that controls database calls. I'd like to test how this class acts under my unit tests. For that, for every time I run a unit test, I'd like to:
Create a temp user
Create a dummy DB
Run my tests on it
Delete the DB
Delete the user
Now, doing these steps is fine with Google Test. I can create a fixture that will do them all reproducibly. BUT...
How can I create a user with a password in one call (without being prompted for the password), so that I can create that user on the go and run my tests?
The manual doesn't seem to offer a way to supply the password as an argument. I was hoping to do something like this:
system("createuser " + username + " -password abcdefg");
where system() runs a terminal command. Then I can connect to the DB server with that username and password to do my unit tests.
Another failed attempt was to pass an SQL query that would create the user through the terminal:
system("psql -c \"CREATE ROLE joe PASSWORD 'aabbccdd';\"")
When I do this, I get the error:
psql: FATAL: database "user" does not exist
where user is my unix username.
Remember that I cannot connect to the server with libpqxx because I don't have credentials yet (as the temporary user). Right? Please correct me if I'm wrong.
Is there any way to do what I'm planning here and make unit-tests runnable without user intervention?
The problem with the system("psql -c \"CREATE ROLE joe PASSWORD 'aabbccdd';\"") call is that it doesn't specify a database to connect to, so psql defaults to a database with the same name as the user logging in. Try adding -d postgres and see if it works.
Please note that this only creates the user, it doesn't create any database or give privileges to anything.
In the Django admin I have locked myself out by mistakenly attempting the wrong password too many times.
I later deleted the user and created another one using manage.py createsuperuser. However, it still says that I'm locked out. How do I unlock myself?
It gives the following error when I try to log in using Django admin..
Account locked: too many login attempts. Contact an admin to unlock your account.
Given your error message and three-strike policy, I assume you have django-axes in your project. You may have it configured to block by IP, regardless of user. That would explain why creating a new user did not work.
The django-axes documentation gives you an outline of how to clear lockouts.
manage.py axes_reset will reset all lockouts and access records.
If you are currently in production and do not want to risk resetting any valid lockouts, you could try resetting just your IP:
manage.py axes_reset ip will clear lockouts/records for that IP.
So, for example, if you are logged in on the same computer your server is on, you can use localhost:
manage.py axes_reset ip 127.0.0.1
If for some reason that doesn't work, you still have the option of manually deleting your AccessAttempt from your database. This assumes, of course, that you have access to your database, that your user has delete privileges, that you are comfortable with SQL, and that you have not changed the default table name from django-axes.
delete from axes_accessattempt where username = 'your_username'; where 'your_username' is the account you wish to unlock.
This can also be done by IP:
delete from axes_accessattempt where ip_address = 'your_ip'; where 'your_ip' is the IP address of the computer you are using.
Resetting attempts from command line:
python manage.py axes_reset
will reset all lockouts and access records.
python manage.py axes_reset_ip [ip ...]
will clear lockouts and records for the given IP addresses.
python manage.py axes_reset_username [username ...]
will clear lockouts and records for the given usernames.
python manage.py axes_reset_logs (age)
will reset (i.e. delete) AccessLog records that are older than the given age where the default is 30 days.
Yep, here's how I did it. Go to the shell using python manage.py shell.
There, enter the following commands:
from axes.models import AccessAttempt
AccessAttempt.objects.all().delete()
If, however, the data is needed, you can instead delete only the objects matching your username:
for obj in AccessAttempt.objects.all():
    if obj.username == your_username:
        obj.delete()
I am using the elasticsearch-rails gem. For my site I need to create custom callbacks: https://github.com/elastic/elasticsearch-rails/tree/master/elasticsearch-model#custom-callbacks
But I'm really confused by one thing: what does if self.published? mean in this code?
I tried to use this for my models:
after_commit on: [:update] do
  place.__elasticsearch__.update_document if self.published?
end
But for my model in the console I see self.published? => false, and I don't know what that means.
From the documentation of elasticsearch-rails:
For ActiveRecord-based models, use the after_commit callback to protect your data against inconsistencies caused by transaction rollbacks:
I think it is used to make sure everything has been committed to the database successfully before we sync to the Elasticsearch server.
I've started working on a Django/Postgres site. Sometimes I work in manage.py shell, and accidentally do some DB action that results in an error. Then I am unable to do any database action at all, because for any database action I try to do, I get the error:
current transaction is aborted, commands ignored until end of transaction block
My current workaround is to restart the shell, but I should find a way to fix this without abandoning my shell session.
(I've read this and this, but they don't give actionable instructions on what to do from the shell.)
You can try this:
from django.db import connection
connection._rollback()
A more detailed discussion of this issue can be found here.
This happens to me sometimes; often it's a missing
manage.py migrate
or
manage.py syncdb
as also mentioned here.
It can also happen the other way around, if you have a schema migration pending from your models.py. With South you need to update the schema with:
manage.py schemamigration mymodel --auto
The quick answer is usually to turn on database-level autocommit by adding:
'OPTIONS': {'autocommit': True,}
to the database settings.
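For context, a minimal sketch of where that option would sit in settings.py, assuming an older (pre-1.6) Django and the postgresql_psycopg2 backend; the database name and credentials below are just placeholders:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',        # placeholder
        'USER': 'myuser',      # placeholder
        'PASSWORD': 'secret',  # placeholder
        'HOST': 'localhost',
        'PORT': '5432',
        # Database-level autocommit, as described above.
        'OPTIONS': {'autocommit': True},
    }
}
On Django 1.6 and later, autocommit is already the default behaviour, so this option is mainly relevant for older versions.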
I had this error after restoring a backup to a totally empty DB. It went away after running:
./manage.py syncdb
Maybe there were some internal models missing from the dump...
WARNING: the patch below can possibly cause transactions being left in an open state on the db (at least with postgres). Not 100% sure about that (and how to fix), but I highly suggest not doing the patch below on production databases.
As the accepted answer does not solve my problems - as soon as I get any DB error, I cannot do any new DB actions, even with a manual rollback - I came up with my own solution.
When I'm running the Django-shell, I patch Django to close the DB connection as soon as any errors occur. That way I don't ever have to think about rolling back transactions or handling the connection.
This is the code I'm loading at the beginning of my Django-shell-session:
import logging

from django import db
from django.db.backends.util import CursorDebugWrapper

logger = logging.getLogger(__name__)

old_execute = CursorDebugWrapper.execute
old_execute_many = CursorDebugWrapper.executemany

def execute_wrapper(*args, **kwargs):
    try:
        return old_execute(*args, **kwargs)
    except Exception as ex:
        logger.error("Database error:\n%s" % ex)
        db.close_connection()

def execute_many_wrapper(*args, **kwargs):
    try:
        return old_execute_many(*args, **kwargs)
    except Exception as ex:
        logger.error("Database error:\n%s" % ex)
        db.close_connection()

# Monkey-patch the debug cursor so any failing query closes the connection.
CursorDebugWrapper.execute = execute_wrapper
CursorDebugWrapper.executemany = execute_many_wrapper
For me it was a test database without migrations. I was using --keepdb for testing. Running it once without it fixed the error.
There are a lot of useful answers on this topic, but it can still be a challenge to figure out the root of the issue. Because of this, I will try to give a little more context on how I was able to figure out the solution for my issue.
For Django specifically, you want to turn on logging for DB queries; then, just before the error is raised, you can find the failing query in the console. Run that query directly on the DB and you will see what is wrong (a sketch of the logging setup is shown below).
In my case, one column was missing in the DB, so after running the migration everything worked correctly.
I hope this will be helpful.
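For reference, a minimal sketch of such a query-logging setup in settings.py, assuming the standard LOGGING dictConfig and DEBUG = True (Django only emits SQL log messages when DEBUG is on); the handler name is just an example:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # Log every SQL statement Django sends to the database.
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}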
If you happen to get such an error when running migrate (South), it can be because you have lots of changes in the database schema and want to handle them all at once. Postgres is a bit nasty about that. What always works is to break one big migration into smaller steps. Most likely, you're using a version control system.
Your current version
Commit n1
Commit n2
Commit n3
Commit n4 # db changes
Commit n5
Commit n6
Commit n7 # db changes
Commit n8
Commit n9 # db changes
Commit n10
So, having the situation described above, do as follows:
Checkout repository to "n4", then syncdb and migrate.
Checkout repository to "n7", then syncdb and migrate.
Checkout repository to "n10", then syncdb and migrate.
And you're done. :)
It should run flawlessly.
If you are using a Django version before 1.6, then you should use Christophe's excellent xact module.
xact is a recipe for handling transactions sensibly in Django applications on PostgreSQL.
Note: As of Django 1.6, the functionality of xact will be merged into the Django core as the atomic decorator. Code that uses xact should be able to be migrated to atomic with just a search-and-replace. atomic works on databases other than PostgreSQL, is thread-safe, and has other nice features; switch to it when you can!
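For newer Django versions, a minimal sketch of the same idea using the built-in atomic; the function name and the IntegrityError handling below are just illustrative:
from django.contrib.auth.models import User
from django.db import IntegrityError, transaction

def create_user_safely(username):
    # Wrap only the statement that might fail: if the insert errors out,
    # only this block is rolled back and the connection stays usable,
    # instead of every later query failing with "current transaction is aborted".
    try:
        with transaction.atomic():
            User.objects.create(username=username)
    except IntegrityError:
        pass  # handle the failed insert here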
I add the following to my settings file, because I like the autocommit feature when I'm "playing around", but I don't want it active when my site is running otherwise.
So, to get autocommit just in the shell, I do this little hack:
import sys
if 'shell' in sys.argv or sys.argv[0].endswith('pydevconsole.py'):
    DATABASES['default']['OPTIONS']['autocommit'] = True
NOTE: that second part is just because I work in PyCharm, which doesn't directly run manage.py.
I got this error in Django 1.7. When I read in the documentation that
This problem cannot occur in Django's default mode and atomic() handles it automatically.
I got a bit suspicious. The errors happened when I tried running migrations. It turned out that some of my models had my_field = MyField(default=some_function). Having a function as the default for a field worked alright with sqlite and mysql (I had some import errors, but I managed to make it work), but it seems not to work for postgresql, and it broke the migrations to the point that I didn't even get a helpful error message, just the one from the question's title.
I want to develop an application which uses Django as the frontend and Celery to do the background work.
Now, sometimes Celery workers on different machines need database access to my Django frontend machine (two different servers).
They need to know some realtime stuff, and to run the Django app with
python manage.py celeryd
they need access to a database with all models available.
Do I have to access my MySQL database through a direct connection? That is, do I have to allow the user "my-django-app" access not only from localhost on my frontend machine but also from my other worker servers' IPs?
Is this the "right" way, or am I missing something? It just doesn't seem really safe (without SSL), but maybe that's just the way it has to be.
Thanks for your responses!
They will need access to the database. That access will be through a database backend, which can be one that ships with Django or one from a third party.
One thing I've done in my Django site's settings.py is load database access info from a file in /etc. This way the access setup (database host, port, username, password) can be different for each machine, and sensitive info like the password isn't in my project's repository. You might want to restrict access to the workers in a similar manner, by making them connect with a different username.
You could also pass in the database connection information, or even just a key or path to a configuration file, via environment variables, and handle it in settings.py.
For example, here's how I pull in my database configuration file:
import os

g = {}
dbSetup = {}
execfile(os.environ['DB_CONFIG'], g, dbSetup)
if 'databases' in dbSetup:
    DATABASES = dbSetup['databases']
else:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            # ...
        }
    }
Needless to say, you need to make sure that the file in DB_CONFIG is not accessible to any user besides the db admins and Django itself. The default case should refer Django to a developer's own test database. There may also be a better solution using the ast module instead of execfile, but I haven't researched it yet.
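For what it's worth, a rough sketch of what that ast-based alternative might look like, assuming the file pointed to by DB_CONFIG contains nothing but a literal Python dict (so it is read as data rather than executed):
import ast
import os

with open(os.environ['DB_CONFIG']) as f:
    # literal_eval only accepts literals, so arbitrary code in the
    # config file can no longer be executed.
    dbSetup = ast.literal_eval(f.read())

if 'databases' in dbSetup:
    DATABASES = dbSetup['databases']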
Another thing I do is use separate users for DB admin tasks vs. everything else. In my manage.py, I added the following preamble:
# Find a database configuration, if there is one, and set it in the environment.
adminDBConfFile = '/etc/django/db_admin.py'
dbConfFile = '/etc/django/db_regular.py'
import sys
import os

def goodFile(path):
    return os.path.isfile(path) and os.access(path, os.R_OK)

if len(sys.argv) >= 2 and sys.argv[1] in ["syncdb", "dbshell", "migrate"] \
        and goodFile(adminDBConfFile):
    os.environ['DB_CONFIG'] = adminDBConfFile
elif goodFile(dbConfFile):
    os.environ['DB_CONFIG'] = dbConfFile
Where the config in /etc/django/db_regular.py is for a user with access to only the Django database with SELECT, INSERT, UPDATE, and DELETE, and /etc/django/db_admin.py is for a user with these permissions plus CREATE, DROP, INDEX, ALTER, and LOCK TABLES. (The migrate command is from South.) This gives me some protection from Django code messing with my schema at runtime, and it limits the damage an SQL injection attack can cause (though you should still check and filter all user input).
This isn't a solution to your exact problem, but it might give you some ideas for ways to smarten up Django's database access setup for your purposes.