I am building an app using Miguel's Flasky code as the basic framework to start with. I'm following his structure, but with a separate Blueprint for each major discrete part of my own additional functionality.
During any initial deployment, I need the app to initialise the database with a system user, various system-related defaults, etc.
Where in the Flasky app should I put the code to do this? It obviously has to run before any user interaction, but it will need to reference app, db and the SQLAlchemy models defined in several of the Blueprints, so it must run after these have all been imported.
I'm thinking it should be a function in the app's __init__.py which I then call at the end of create_app(), after all the blueprint imports. It would obviously test whether initialisation has already happened (i.e. is this a first run, rather than just a restart of the app?).
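Roughly what I have in mind (just a sketch; insert_initial_data and the User model stand in for my real code):

# app/__init__.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def insert_initial_data(app):
    # Assumes the tables already exist (i.e. migrations have run).
    from .models import User  # imported late so all models are registered
    with app.app_context():
        if User.query.filter_by(username='system').first() is None:
            db.session.add(User(username='system'))
            db.session.commit()

def create_app(config_name):
    app = Flask(__name__)
    # ... load config, db.init_app(app), register blueprints ...
    insert_initial_data(app)
    return app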
Is that where people usually put first-time initialisation code?
I am working on a legacy app with hundreds of related admin models, controllers, services, jobs, etc. We decided to move all of these files into a folder called admin at the top level of the app directory. So our ideal file structure would be:
app/admin/models/admin.rb
app/admin/controllers/admin_controller.rb
app/admin/services/admin_service_of_some_kind.rb
app/admin/models/audit.rb
etc
We want the call sites in our code to be:
Admin::Admin.create...
Admin::AdminService.retrieve_all_audit_logs...
Admin::Audit.scope_by_admin...
etc
The problem is that after reading these links:
http://urbanautomaton.com/blog/2013/08/27/rails-autoloading-hell/
http://guides.rubyonrails.org/autoloading_and_reloading_constants.html
I understand that Rails infers file paths from constant names. So if I want to call Admin::AdminService.some_task, Rails will expect the Admin constant and the nested AdminService constant to live in a file at app/admin/admin_service.rb (assuming that app/admin is autoloaded, which I believe it is). That is not true: they live in app/admin/services/admin_service.rb.
How can I make this happen given that I want the call site to be Admin::AdminService.some_task?
Given the folder structure app/admin/services/admin_service.rb, the call site would have to be Services::AdminService.some_task (again assuming that app/admin is autoloaded), right? But that is not what I want.
Why not just move them into
app/models/admin/*.rb
app/services/admin/*.rb
Since app/models and app/services are already autoload paths, the admin subdirectory maps to the Admin namespace, giving you exactly the Admin::Audit and Admin::AdminService call sites you want.
Should a component be its own application? We have separated our apps for that reason.
Now, reusability matters in Django. It is trivial to make our apps reusable when no module in one app depends on another app.
However, it is common to refer to a model in another app by adding ForeignKey('appname.MyModel'). This creates a hard dependency of one Django app on another.
The same thing happens with imports from another app (i.e. from appname import MyModel): they too create dependencies between apps.
If an app contains such dependencies on other apps, it does not seem viable to share it (i.e. it is not reusable).
What do I have to do to keep these dependencies loose, so that I can share my apps without hardcoding the names of other apps inside them?
It's worth noting that a reusable app doesn't really need to depend on your specific apps. It can depend instead on having something that satisfies the same interfaces your apps expose.
This is the 'Pythonic' way to do things (sometimes referred to as duck typing as 'if it walks like a duck and quacks like a duck... it must be a duck').
You've already been shown in the comments how to solve the ForeignKey problem.
To summarise, you can just add the value in settings.py:
MY_FK_MODEL = 'someapp.SomeModel'
and then use it in your models.py like so:
from django.conf import settings
from django.db import models

class ReusableAppModel(models.Model):
    some_model = models.ForeignKey(settings.MY_FK_MODEL, on_delete=models.CASCADE)
So far, so easy; now to solve the import.
We actually already have an example of this pattern from Django itself: the get_user_model() function.
We could make something like that by adding the following in settings.py:
MY_APP_DEPENDENCY = 'myapp.my_module.MyClass'
along with a helper function similar to get_user_model() somewhere in your reusable app. Let's say reusable_app/helpers.py for the sake of argument:
from django.conf import settings
from pydoc import locate

def get_my_app_dependency():
    dependency = locate(settings.MY_APP_DEPENDENCY)
    # locate() returns None if the class is not found,
    # so you could return a default class instead if you wished.
    return dependency
Then you can get that class wherever you need it by calling the method:
from reusable_app.helpers import get_my_app_dependency
MyAppDependency = get_my_app_dependency()
app_dep_instance = MyAppDependency()
The summary here is that you can allow users to specify a class/method/whatever as a string in settings.py and then use that to refer to your dependency.
This lets users 'inject' a dependency into your app.
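One possible refinement (a sketch; the default path here is made up): have the helper fall back to a class bundled with the reusable app when the setting is absent:

from django.conf import settings
from pydoc import locate

def get_my_app_dependency():
    # Use the configured dependency if present, otherwise fall back
    # to a default shipped with the reusable app (hypothetical path).
    path = getattr(settings, 'MY_APP_DEPENDENCY',
                   'reusable_app.defaults.DefaultDependency')
    return locate(path)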
One final note:
Whenever you have an app/module that has lots of dependencies on others, it's worth double checking to see if they really should be separate. You want to avoid creating one giant module satisfying lots of disparate responsibilities, but likewise you want to avoid artificially breaking code up when it doesn't make sense. It's a careful balance.
I am working on a Django project that, along with having a database for its models and relations, writes to a log directory called activity_logs outside of the project directory to keep track of formatted user activities, one file for each user. This is an alternative, file-structure-based solution to having a database table carry this information, because it offloads some storage from the DB and makes it relatively easy to format and express such activities. Perhaps some of you would recommend storing this kind of data in the database instead, which is fine, but there is still a question arising from all of this that I need help answering.
This Django project has multiple apps, each with an extensive test suite. Additionally, there is a logging.py file that encapsulates the logging functionality (writing/reading activities to/from log files), and both the test cases in those suites and the view functions (plus various other utility functions) use these logging functions to store user activities and retrieve them, based on model relationships, to emulate a user notification system. Since the logging module takes care of this logging, it needs to know where to write, so we have a directory called activity_logs to which it writes user log files, creating one for each new user and deleting the file of any user removed from the database.
One of the newest changes we would like to make in this project is to create a separate logging directory for testing this logging functionality, something like test_activity_logs, so that writes for test users can never be confused with the regular activity log directory for real users.
My problem is this: at runtime, how can I tell the system, from whichever starting point of execution (a view function call through the Django test Client object, a test case, an actual HTTP request made via a URL, etc.), when to look inside activity_logs versus test_activity_logs? It depends solely on whether I am generating new information for a real user or a test user, but a User is a User in our system, and I'm having trouble telling the functions that call the logging functions to write to the test log directory rather than the regular one. For example, one approach I am trying is to pass a keyword argument (kwarg) to the logging functions so that they know which directory to read from and write to, like so:
self.assertTrue(activity_has_been_logged(ACTIVITY_ACCOUNT_CREATED, user.get_profile(), use_test_activity_log_directory=True))
The kwarg use_test_activity_log_directory=True tells the logging function activity_has_been_logged to read from the test activity log directory. Unfortunately, apart from being a little inflexible (but tolerable), this doesn't solve the situation where the Django test client sends a GET or POST request via a URL to a view function that writes activities to log files:
response = client.post(propose_match_url, post) #Can't write to test_log_directory if by default it writes to regular directory!
How do I let the client pass this kwarg on to those view functions? I think it should be entirely possible, but I'm not sure whether fiddling with these kwargs is the best way; maybe a global variable in the project settings file would work, but that might cause trouble with race conditions on a shared mutable variable.
Your help would be great. Thanks in advance!
So I just solved this problem. The logging file hosting all logging functionality is really the only place that needs to know where to look (either test_activity_logs or activity_logs), since all other components invoke functions from the logging module to write to and read from these directories. I gave the UserProfile model class an additional boolean field called is_test, which determines whether to look in test_activity_logs (is_test=True) or activity_logs (is_test=False). That way, the logging module only needs to check its UserProfile input parameter and this new field to determine where to perform its logging. Problem solved!
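A minimal sketch of the arrangement (the log_dir_for helper name is mine):

# profiles/models.py (the existing UserProfile model)
from django.db import models

class UserProfile(models.Model):
    # ... existing profile fields ...
    is_test = models.BooleanField(default=False)

# logging.py
ACTIVITY_LOG_DIR = 'activity_logs'
TEST_ACTIVITY_LOG_DIR = 'test_activity_logs'

def log_dir_for(profile):
    # Route test users to test_activity_logs and real users to
    # activity_logs, based solely on the profile flag.
    return TEST_ACTIVITY_LOG_DIR if profile.is_test else ACTIVITY_LOG_DIR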
Check out daemontools if you're on a *nix box or launchd on OS X. Both can make sure your Django instance stays running in whatever mode you prefer (daemontools has a few more options for that) and can isolate a directory for logging stdout/stderr.
You can set environment variables for each instance so that other log files and temporary files know where to be created; you then read them from os.environ, or simply use the current working directory as a base if using daemontools.
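For example (a sketch; the variable name is mine):

import os

# Fall back to the current working directory when the supervisor
# hasn't set the variable.
log_base = os.environ.get('ACTIVITY_LOG_DIR', os.getcwd())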
The directory is automatically created for you using daemontools.
When using sinatra-r18n to handle internationalisation, the r18n lib exposes a variable t for use within your helpers, routes and templates, as per these instructions.
I have written a simple unit test using rack-unit to confirm that some of my pluralisations work, but the test throws an error claiming t is nil.
I've tried referencing it via app.t, MySillyApp.t (where MySillyApp is the name of my Sinatra app), MySillyApp.settings.t etc and none of them give me access to the t I need.
What I am trying to achieve is a confirmation that my translation files include all the keys I need corresponding to plurals of various metric units my app needs to understand. Perhaps there is a more direct way of testing this without going via the Sinatra app itself. I'd welcome any insight here.
I had a similar task: checking localised strings in my Cucumber scenarios.
I've made a working example.
Here you can find how the strings get translated.
This file helps to show how to add R18n support to your testing framework:
require 'r18n-core'
...

class SinCucR18nWorld
  ...
  include R18n::Helpers
end
As you can see, instead of rack-unit I'm using RSpec/Cucumber, sorry.
I am working on a Django app and I have a class which reads the contents of a file and returns a Django model instance. My question is: where in the file system do I store this class? All it does is read the file, populate a Django model, and return it.
Thanks
There is nothing special about a Django application: it's just a Python package. Technically, you can put the class anywhere it can be imported from.
With that being said, it's best to keep related code bundled together. It sounds like a good place for this particular class is in the file that declares the Model it returns.
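For instance (a sketch with made-up names), the loader could sit in the same models.py as the model it builds:

# myapp/models.py
from django.db import models

class Record(models.Model):
    name = models.CharField(max_length=100)

class RecordLoader:
    """Reads a file and returns an unsaved Record instance."""
    def load(self, path):
        with open(path) as f:
            return Record(name=f.read().strip())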
On the other hand it might be logical to throw it into the application's __init__.py file.
You could also make a utils, etc, admin, or scripts folder/package for utility classes and scripts, if they're meant to be used for administration and site maintenance.
In the end it's more about how you want to organize your project, but technically it can live just about anywhere.