In Django, how can I tell my project to read/write in distinct log directories at runtime?

I am working on a Django project that, along with having a database for its models and relations, writes to a log directory called activity_logs outside of the project directory to keep track of formatted user activities, one file for each user. This is a file-based alternative to having a database table carry this information, because it offloads some storage from the DB and makes such activities easy to format and express. Perhaps some of you would recommend storing this kind of data in the database, which is fine, but I still have a question from all of this that I need help answering.
This Django project has multiple apps, each with an extensive test suite. Additionally, there is a logging.py file that encapsulates the logging functionality (writing/reading activities to/from log files), and so the test cases within the test suites as well as the view functions (and various other utility functions) all use these logging functions to store user activities and retrieve them based on model relationships, emulating a user notification system. Since the logging module takes care of this logging, it needs to know where to write, and so we have a directory called activity_logs to which it writes user log files, creating one for each new user and deleting one for each user removed from the database.
One of the newest changes we would like to make in this project is to create a separate logging directory for testing this logging functionality, something like test_activity_logs, so that writes for test users never get confused with the regular activity log directory for real users.
My problem is this: at runtime, how can I tell the system, from whichever entry point of execution (whether a view function call through the Django test Client object, a test case, an actual HTTP request made via a URL, etc.), when to look inside the activity_logs or the test_activity_logs directory? It depends solely on whether I am generating new information for a real user or a test user, but a User is a User in our system, and I'm having trouble telling the functions that call the logging functions to write to the test log directory rather than the regular one. For example, one approach I am trying is to pass a keyword argument (kwarg) to the logging functions so that they know which directory to read from or write to, like so:
self.assertTrue(activity_has_been_logged(ACTIVITY_ACCOUNT_CREATED, user.get_profile(), use_test_activity_log_directory=True))
The kwarg use_test_activity_log_directory=True tells the logging function activity_has_been_logged to read from the test activity log directory. Unfortunately, apart from being a little inflexible (but tolerable), this doesn't solve the situation where the Django test client sends a GET or POST request via a URL to a view function that writes activities to log files:
response = client.post(propose_match_url, post)  # Can't write to test_activity_logs if by default it writes to the regular directory!
How do I let the client pass this kwarg on to those view functions? I think it should be possible, but I'm not sure fiddling with kwargs is the best way. Alternatively, I could create a global variable in the project settings file, but that might cause trouble with race conditions on a shared mutable variable.
Your help would be great. Thanks in advance!

So I just solved this problem. The logging file hosting all logging functionality is really the only place that needs to know where to look (either test_activity_logs or activity_logs), since all other components invoke functions from the logging module to write/read to/from these directories. I added a boolean field called is_test to the UserProfile model class, which determines whether to look in test_activity_logs (is_test=True) or activity_logs (is_test=False). That way, the logging module only needs to check the is_test field on its UserProfile input parameter to determine where to perform its logging. Problem solved!
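For illustration, here is a minimal sketch of that approach; the helper names and file layout are assumptions for the example, not the project's actual code:

import os

from django.contrib.auth.models import User
from django.db import models

ACTIVITY_LOG_DIR = "activity_logs"
TEST_ACTIVITY_LOG_DIR = "test_activity_logs"

class UserProfile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    # True for users created by the test suite, False for real users
    is_test = models.BooleanField(default=False)

# In the logging module: derive the directory from the profile itself,
# so no caller has to thread a kwarg through views or the test client.
def get_log_dir(profile):
    return TEST_ACTIVITY_LOG_DIR if profile.is_test else ACTIVITY_LOG_DIR

def log_activity(profile, formatted_activity):
    path = os.path.join(get_log_dir(profile), "%s.log" % profile.user.username)
    with open(path, "a") as log_file:
        log_file.write(formatted_activity + "\n")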

Check out daemontools if you're on a *nix box, or launchd on OS X. Both can make sure your Django instance stays running in whatever mode you prefer (daemontools has a few more options for that) and can isolate a directory for logging stdout/stderr.
You can set environment variables for each instance to tell other log files and temporary files where to be created; you then read them from os.environ, or simply use the current working directory as a base if using daemontools.
daemontools creates the directory for you automatically.
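For example, the logging module could read such a variable with a safe default; LOG_DIR here is an assumed variable name, not something daemontools defines:

import os

# Assumption: the test runner or the daemontools env directory sets
# LOG_DIR=test_activity_logs; production instances leave it unset.
LOG_DIR = os.environ.get("LOG_DIR", "activity_logs")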

Related

How to set short month names using ember-moment

I need to change the short month names in moment, but I can't do it.
I have tried to set
localeOutputPath: 'assets/moment-locales'
And call
Ember.$.getScript('/assets/moment-locales/ru.js');
In this case I get an ember-cli-mirage error:
Your Ember app tried to GET '/assets/moment-locales/ru.js?_=1490191145335',
but there was no route defined to handle this request. Define a route that
matches this path in your mirage/config.js file
Is there a simple way to set short month names for moment?
I assume you are using the ember-moment addon and have already configured config/environment.js with
moment: {
  // This will output _all_ locale scripts to assets/moment-locales
  localeOutputPath: 'assets/moment-locales'
},
as you have mentioned.
Ember.$.getScript('/assets/moment-locales/ru.js');
provides a way to dynamically load the moment ru locale on the fly when needed. This means that instead of including the related locale in your application's JavaScript file, you prefer to load the relevant locale upon some user request in your application. Generally it is best to perform such a loading operation within a route's hook methods, such as beforeModel or model.
In order to get short month names from moment, you first need to import the ES6 moment module via
import moment from 'moment';
and access the short month names with
moment.monthsShort()
As far as I can see, there is a problem with the way you are requesting the locale, which is why you are getting the error you mentioned. I believe working code is a better explanation than pure text, hence I have created the following git repository to illustrate how you can change the locale dynamically in a route and how you can display the short names retrieved from moment. Please take a look at it by cloning and running it on your localhost.
In the application in that repository, application.hbs contains links to 5 sub-routes, each displaying the short names of the months in a different language. The code that does the trick of dynamically loading the relevant locale is in the model hook of routes/locale-route.js. If the locale is already loaded (note that English is included by default with moment), it simply returns the short names of the months after switching to the target locale (moment.locale(localeToLoad);). Otherwise, it performs a remote call to the server and waits for the response (using a promise) before returning the names of the months. All routes for the 5 different languages extend from this base route. Once a locale is loaded from the server, you no longer need to load it again, and locale-route already handles that as I explained. I hope that helps.
After reading your comment, I updated the source code to include ember-cli-mirage. Mirage is a client-side mock server used to develop, test and prototype your application. Once you include it as a dependency, it starts intercepting your remote call requests; hence, in your case Mirage intercepts the requests for the locale files. What you need to do is have Mirage pass the locale requests through. To do that, add the following
this.passthrough('/assets/moment-locales/**');
to mirage/config.js so that Mirage will not interfere with loading moment-locales at runtime. Please see the related file in the git repository I mentioned. This will solve your problem for sure.

Moving admin-related files out of their models and controllers folders and into a separate admin folder in Rails. How to organize this code?

I am working on a legacy app with hundreds of related admin models, controllers, services, jobs, etc. We decided to move all of these files into a folder at the top level of the app directory called admin. So our ideal file structure would be:
app/admin/models/admin.rb
app/admin/controllers/admin_controller.rb
app/admin/services/admin_service_of_some_kind.rb
app/admin/models/audit.rb
etc
We want the call sites in our code to be:
Admin::Admin.create...
Admin::AdminService.retrieve_all_audit_logs...
Admin::Audit.scope_by_admin...
etc
The problem is that after reading these links:
http://urbanautomaton.com/blog/2013/08/27/rails-autoloading-hell/
http://guides.rubyonrails.org/autoloading_and_reloading_constants.html
I understand that Rails infers file paths from constant names. So if I want to call Admin::AdminService.some_task..., Rails will believe the Admin constant and the nested AdminService constant exist in a file at app/admin/admin_service.rb (assuming that app/admin is autoloaded... which I believe it is), which is not true... they exist in app/admin/services/admin_service.rb.
How can I make this happen given that I want the call site to be Admin::AdminService.some_task?
Given the folder structure app/admin/services/admin_service.rb, the call site would have to be Services::AdminService.some_task (again assuming app/admin is autoloaded), right? However, this is not what I want.
Why not just move them into
app/models/admin/*.rb
app/services/admin/*.rb

How to create a separate Python script for uploading data into ndb

Can anyone guide me in the right direction as to where I should place a script solely for loading data into ndb? I wish to upload all the data into the GAE ndb so that the application can perform queries on it.
Right now, the loading of data is in my application. I wish to place it separately from the main application.
Should it be edited in the yaml file?
EDITED
This is a snippet of the entity and the handler that uploads the data into the GAE ndb.
I wish to place this chunk of code separately from my main application .py. The reason is that the uploading of this data won't be done frequently, and I want to keep the code in the main application cleaner.
import webapp2
from google.appengine.ext import ndb

class TagTrend_refine(ndb.Model):
    tag = ndb.StringProperty()
    trendData = ndb.BlobProperty(compressed=True)

class MigrateData(webapp2.RequestHandler):
    def get(self):
        listOfEntities = []
        f = open("tagTrend_refine.txt")
        lines = f.readlines()
        f.close()
        for line in lines:
            temp = line.strip().split("\t")
            data = TagTrend_refine(
                tag=temp[0],
                trendData=temp[1]
            )
            listOfEntities.append(data)
        ndb.put_multi(listOfEntities)
For example, if I place the above code in a file called dataLoader.py, how should I invoke it?
In app.yaml, alongside my main application (knowledgeGraph.application)?
- url: /.*
  script: knowledgeGraph.application
You don't show us the application object (no doubt a WSGI app) in your knowledgeGraph.py module, so I can't know what URL you want to serve with the MigrateData handler -- I'll just guess it's something like /migratedata.
So the TagTrend_refine class should be in a separate file (usually called models.py) so that both your dataloader.py and your knowledgeGraph.py can import models to access it (and models.py will need its own import of ndb, of course). (Then, of course, access to the entity class will be as models.TagTrend_refine -- very basic Python.)
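For concreteness, a minimal models.py along those lines, containing just the entity definition lifted out of the question:

# models.py -- shared by dataloader.py and the main application
from google.appengine.ext import ndb

class TagTrend_refine(ndb.Model):
    tag = ndb.StringProperty()
    trendData = ndb.BlobProperty(compressed=True)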
Next, you'll complete dataloader.py by defining a WSGI app, e.g., at the end of the file,
app = webapp2.WSGIApplication(routes=[('/migratedata', MigrateData)])
(of course this means this module will need to import webapp2 as well -- can I take for granted a knowledge of super-elementary Python?).
In app.yaml, as the first URL, before that /.*, you'll have:
- url: /migratedata
  script: dataloader.app
Given all this, when you visit /migratedata, your handler will read the tagTrend_refine.txt file that you uploaded together with your .py, .yaml, and other files in your overall GAE application, and unconditionally create one entity per line of that file. (In your code as originally posted the indentation was mangled -- presumably you used both tabs and spaces, which show up OK in your editor but not here on SO; I recommend you use strictly spaces, never tabs, in Python code.)
However, this does seem to be a peculiar task. If /migratedata gets visited twice, it will create duplicates of all entities. If you change tagTrend_refine.txt and deploy a changed variation, then visit /migratedata... all the old entities will stick around and all the new entities will join them. And so forth.
Moreover -- /migratedata is NOT idempotent (if visited more than once it does not produce the same state as running it just once) so it shouldn't be a GET (and now we're on to super-elementary HTTP for a change!-) -- it should be a POST.
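In webapp2 terms that just means routing the work through post instead of get; a sketch, reusing the question's handler name:

class MigrateData(webapp2.RequestHandler):
    def post(self):
        # same body as the get() shown in the question; serving it only on
        # POST keeps casual GETs (browsers, crawlers, link prefetchers) from
        # re-running the import and duplicating entities
        pass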
In fact, I suspect (though I'm really flying blind here, since you give such tiny amounts of information) that what you actually want is to upload a .txt file to a POST handler and do the updates that way (perhaps avoiding duplicates...?). However, I'm no mind reader, so this is about as far as I can go.
I believe I have fully answered the question you posted (though perhaps not the one you meant but didn't express:-), and by SO's etiquette it would be nice to upvote and accept this answer, then, if needed, post another question expressing MUCH more clearly and completely what you're trying to achieve: your current .py and .yaml (ideally with correct indentation), what they actually do, and why you'd like to do something different. For POST vs GET in particular, just study When should I use GET or POST method? What's the difference between them? ...
Alex's solution will work, as long as all your data can be loaded in under 1 minute, since that's the timeout for an App Engine request.
For larger data, consider calling the Datastore API directly from your own computer, where you have the source data. It's a bit of a hassle because it's a different API; it's not ndb. But it's still a pretty simple API. Here's some code that calls the API:
https://github.com/GoogleCloudPlatform/getting-started-python/blob/master/2-structured-data/bookshelf/model_datastore.py
Again, this code can run anywhere. It doesn't need to be uploaded to App Engine to run.
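As a rough illustration of that approach, here is a sketch using the google-cloud-datastore client rather than ndb (the project id is a placeholder; the entity kind and file name are taken from the question):

# loader.py -- runs on your own machine, not on App Engine
from google.cloud import datastore  # pip install google-cloud-datastore

client = datastore.Client(project="your-project-id")

entities = []
with open("tagTrend_refine.txt") as f:
    for line in f:
        tag, trend = line.strip().split("\t")
        entity = datastore.Entity(client.key("TagTrend_refine"))
        entity.update({"tag": tag, "trendData": trend})
        entities.append(entity)

# Datastore commits are capped at 500 entities, so write in chunks
for i in range(0, len(entities), 500):
    client.put_multi(entities[i:i + 500])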

Google App Engine - Add & Reflect pattern, working around eventual consistency

I'm building a web application on app engine.
In my case, that's built on django-nonrel, but the key point is that it's using Google's datastore.
I love the fact that I don't need to deal with replication, sharding, backups and such, but one thing that keeps tripping me up is eventual consistency, which seems to get in the way of implementing a common web app pattern I'm calling "Add & Reflect".
Let's say I have a project management app. The Project is its central model.
Now there's a web page where I see a list of all projects, can add a project, and am then shown the list of all projects again, which should include the project I just added (assuming no errors).
So the pattern goes like this:
Get and display list of existing projects
User adds new project (using a form on that page)
New project is created
As a response, get and display list of existing projects (now includes the new project)
Now the thing is, that due to eventual consistency, there is no guarantee whatsoever that I will get that new project when I get a list of all projects right after adding a new project.
Now that would be fine if this momentary inconsistency happened when another request (e.g. another user, user B) asked for the list of projects one second after the project was added by the first user (user A), but it's a real problem when user A performs an operation and does not see the result of his own action, and therefore gets no feedback.
I have gotten used to doing something like this to work around this problem:
def create_project(request):
    response_context = {}
    new_project = Project(name=request.POST['name'])
    new_project.save()
    response_context['projects'] = Project.get_serialized_projects()
    # on GAE, eventual consistency means we are not guaranteed to see the
    # new project while querying for all projects, therefore we might need
    # to add it manually...
    if new_project.serialize() not in response_context['projects']:
        response_context['projects'].append(new_project.serialize())
    return render('projects.html', response_context)
The problem is that this happens in many places in my code, so I'm thinking maybe I'm missing something there, since this pattern is such a basic web app pattern.
Any suggestions for other ways to handle this?
Yes, it's a common issue. No, there's no magic fix.
From the client side, once you know the commit succeeded you can save the item locally (globals or storage) and then merge your saved data into the results when querying the datastore. Put an expiration on it so it's temporary. It's not trivial to make it work in all cases (say you added an item and then removed/renamed it, so you also have to update the cache, etc.).
From the server side, it's common to cache recent saves in memcache and likewise merge them with your queries.
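A rough sketch of that server-side merge; the helper names and cache key are assumptions, while the memcache calls are the standard GAE API:

from google.appengine.api import memcache

RECENT_KEY = 'recent_projects:%s'  # hypothetical per-user cache key
RECENT_TTL = 60  # keep it short; this only papers over the consistency window

def remember_recent(user_id, serialized_project):
    recent = memcache.get(RECENT_KEY % user_id) or []
    recent.append(serialized_project)
    memcache.set(RECENT_KEY % user_id, recent, time=RECENT_TTL)

def list_projects(user_id):
    # eventually consistent query; may miss very recent writes
    projects = Project.get_serialized_projects()
    for p in memcache.get(RECENT_KEY % user_id) or []:
        if p not in projects:
            projects.append(p)  # merge in saves the query didn't see yet
    return projects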

CFInclude vs Custom Tag vs CFC for Presentation and Security

I'm just starting out with ColdFusion OOP, and I want to make a DIV which shows different links to users depending on what page they are on and what login rights (role) they have; basically a 'context' menu.
Should I put this toolbar/navigation DIV in a .cfm or .cfc file?
To reiterate: the .cfm or .cfc file needs to know what page the user is on and will also check what role they have. Depending on these two pieces of information, it will display a set of links to the user. The role information comes from the database and is stored in a SESSION variable, and to find out what page they are on I guess it could use #GetFileFromPath(GetBaseTemplatePath())#.
My first thought was to have a normal .cfm file, put all the presentation and logic in that file (the HTML and lots of <cfif> statements) to ensure the correct information is displayed in the DIV, and then use <cfinclude> to display it on the page. Then I started thinking maybe I should make a Custom Tag, have the calling page pass in the user's credentials and #GetFileFromPath(GetBaseTemplatePath())# as arguments, and then have the Custom Tag return all the presentational data.
Finally, I guess a CFC could do the above as well, but then I'd be breaking the 'rule' against putting presentation logic in a CFC.
Any suggestions on the best practice to achieve what I'm trying to do? It will eventually serve thousands of customers so I need to make sure my solution is easy to scale.
Anything that outputs HTML to the screen should be in a .cfm file.
That being said, depending on your need, you could have methods in a CFC that generate HTML, but the method simply returns the HTML as a string.
In programming, there are very few absolutes, but here is one: you should NEVER directly output anything inside of a function or method by using output="true". Instead, whatever content is generated should be returned from the method.
If you need to use this display element more than once, a custom tag might be a better way to go than an include.
I see security as being a combination of which menu items a user can see and which pages can be run.
The main security function lives inside the main session object.
On the menus, I call a function like this:
if (session.objState.checkSecurity(Section, Item) == 1) {
    // show the menu item
}
For page security
function setupRequest() {
    ...
    if (session.objState.checkSecurity(getSection(), getItem()) == 0) {
        location("#request.self#?message=LoginExpired", "no");
        return;
    }
    ...
}
The particulars of what checkSecurity does vary from application to application, but it is tied into how FW/1 works. The following security variations exist:
session.objState.checkSecurity(getSection())
session.objState.checkSecurity(getSection(), getItem())
session.objState.checkSecurity(getSection(), getItem(), Identifier)
None of the presentation files know anything about security.
Rules by which I live :) :
No CF business logic in .cfm files. Just use a service which serves the template and provides the needed data:
navService = new com.foobar.services.Navigation(form, url);
and later output #navService.GetNavContent()#
No direct output from CFC files; functions should always return content. For example, make one function which builds a single link based on some logic, and a second which wraps that and returns it to the .cfm template.
Also, one more hint: avoid using the application and session scopes inside your services; it makes refactoring, testing and debugging too difficult.
For session, you can make session.currentUser, a CurrentUser.cfc which provides everything you need, e.g. session.currentUser.isAuthorized("backend/administration"), and if true, show the link to backend/administration.
Same for application: if you need a locale, an application-wide setting or some singleton, make application.applicationSettings, an ApplicationSettings.cfc, and use that to retrieve all the info you need in your CFCs.
These rules will make your application easier to test and debug, and really easy to migrate tomorrow to some JavaScript-based UI like Angular or Backbone.js, since all the data you need is already in CFCs; theoretically you just need to mark the CFCs as remote, or make some remote facade in the middle, and you're done.