Has anyone been able to get Yeoman to work with Django?
I've tried to set it up, and even if I change my Gruntfile to the correct paths it still uses the defaults.
I've searched online, but it doesn't seem that anyone is using such a file structure.
So that was a really stupid comment I made above. :-)
Here's a proper response! Yeoman is simply a scaffolding tool that lets us quickly generate CSS, JS and HTML files. I am using it in a completely decoupled way, cleanly separated from Django.
Here's the tree structure of the frontend site.
/Users/calvin/work/yeoman-test/
|~app/
| |~scripts/
| | |~controllers/
| | | `-main.js
| | |~vendor/
| | | |-angular.js
| | | |-angular.min.js
| | | |-es5-shim.min.js
| | | `-json3.min.js
| | `-app.js
| |~styles/
| | |-bootstrap.css
| | `-main.css
| |+views/
| |-.buildignore
| |-.htaccess
| |-404.html
| |-favicon.ico
| |-index.html
| `-robots.txt
|~test/
| |+spec/
| `+vendor/
|-.gitattributes
|-.npmignore
|-Gruntfile.js
|-package.json
`-testacular.conf.js
And here's the tree structure for the Django application acting as a pure JSON web service, using django-tastypie.
/Users/calvin/work/yeomandjango/
|~deploy/
| |-crontab
| |-gunicorn.conf.py
| |-live_settings.py*
| |-nginx.conf
| `-supervisor.conf
|~requirements/
| `-project.txt
|+static/
|-.gitignore
|-.hgignore
|-__init__.py
|-__init__.pyc
|-dev.db
|-fabfile.py
|-local_settings.py
|-local_settings.pyc
|-manage.py*
|-settings.py
|-settings.pyc
|-urls.py
`-urls.pyc
We run the Django web service on its own domain, with URLs such as http://service.mysite.com/api/v1/, and have our frontend Yeoman-generated "static" site at http://mysite.com call these API URLs as needed.
The Yeoman-generated AngularJS app simply POSTs/GETs/PUTs/DELETEs against the API resources/URLs exposed by our django-tastypie APIs.
This is a loosely coupled configuration you can consider.
However, do note that this setup performs "cross-domain API requests". This means that our "server-side" Django application needs to handle CORS.
Here's an example middleware snippet that needs to be implemented on the Django server side for this to work.
from django import http

try:
    # Allow the defaults to be overridden in settings.py
    import settings
    XS_SHARING_ALLOWED_ORIGINS = settings.XS_SHARING_ALLOWED_ORIGINS
    XS_SHARING_ALLOWED_METHODS = settings.XS_SHARING_ALLOWED_METHODS
except (ImportError, AttributeError):
    XS_SHARING_ALLOWED_ORIGINS = '*'
    XS_SHARING_ALLOWED_METHODS = ['POST', 'GET', 'OPTIONS', 'PUT', 'DELETE']


class XsSharing(object):
    """
    Middleware that adds CORS headers so cross-domain XHR is allowed, e.g.:

        Access-Control-Allow-Origin: http://foo.example
        Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE
    """
    def process_request(self, request):
        # Answer CORS preflight (OPTIONS) requests directly.
        if 'HTTP_ACCESS_CONTROL_REQUEST_METHOD' in request.META:
            response = http.HttpResponse()
            response['Access-Control-Allow-Origin'] = XS_SHARING_ALLOWED_ORIGINS
            response['Access-Control-Allow-Methods'] = ", ".join(XS_SHARING_ALLOWED_METHODS)
            return response
        return None

    def process_response(self, request, response):
        # Avoid unnecessary work if the header is already set.
        if response.has_header('Access-Control-Allow-Origin'):
            return response
        response['Access-Control-Allow-Origin'] = XS_SHARING_ALLOWED_ORIGINS
        response['Access-Control-Allow-Methods'] = ", ".join(XS_SHARING_ALLOWED_METHODS)
        return response
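To activate it, add the class to the middleware list in settings.py. Here's a sketch; the module path 'myproject.middleware' is hypothetical and the allowed origin should be your frontend domain:

# settings.py (sketch) -- 'myproject.middleware' is a hypothetical module path
MIDDLEWARE_CLASSES = (
    'myproject.middleware.XsSharing',
    # ... the rest of your middleware ...
)

XS_SHARING_ALLOWED_ORIGINS = 'http://mysite.com'
XS_SHARING_ALLOWED_METHODS = ['POST', 'GET', 'OPTIONS', 'PUT', 'DELETE']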
Related
I built a Python 2.7 application with the below directory structure for my relevant files. How do I call methods located in different folder locations?
Data-Wrangling-OpenStreetMap-data
|
+---- process_data
| |
| +---- __init__.py
| |
| +---- data_cleaner.py
|
+---- main_code.py
|
+---- audit _data
| |
| +---- __init__.py
| |
| +---- audit_file.py
I have succeeded in doing it correctly for one class referenced from main_code.py via the use of:
from process_data.data_cleaner import DataCleaner
However, if I attempt a similar pattern for another class located in a separate folder, referenced from main_code.py via the import statement
from audit_data.audit_file import AuditFile
I get the error:
ImportError: No module named audit_data.audit_file
Any ideas as to what I may be overlooking and/or guidance on what further details I need to post to help find the solution to my problem?
from process_data.data_cleaner import data_cleaner
Here the first data_cleaner is the file name (data_cleaner.py) and the second data_cleaner is a class defined in it.
The cause of my problem was a silly one:
the folder containing the class I was importing was actually named audit _data (with an unintended space), while the import in my code referred to audit_data.
What didn't work (while the folder name still contained the space):
from audit_data.audit_file import AuditFile
What worked was renaming the folder to audit_data (removing the stray space), after which that same import succeeded.
For those reading this: watch out for unintended spaces in your folder names.
Consider our current architecture:
+---------------+
|    Clients    |
|     (API)     |
+-------+-------+
        ∧
        ∨
+-------+-------+      +-----------------------+
| Load Balancer |<---->|         Nginx         |
|  (AWS - ELB)  |      |   (Service Routing)   |
+---------------+      +-----------------------+
        ∧
        ∨
+-------+---------------+
|         Nginx         |
|    (Backend layer)    |
+-------+---------------+
        ∧
        ∨
+-------+-------+      +-----------------------+
|   Gunicorn    |<---->|      File Storage     |
|   (Django)    |      |       (AWS - S3)      |
+---------------+      +-----------------------+
When a client, mobile or web, tries to upload a large file (more than a GB) to our servers, it often faces idle connection timeouts, either from its client library (on iOS, for example) or from our load balancer.
While the file is actually being uploaded by the client, no timeouts occur because the connection isn't "idle": bytes are being transferred. But I think that once the file has been transferred to the Nginx backend layer and Django starts uploading it to S3, the connection between the client and our server becomes idle until the upload is completed.
Is there a way to prevent this from happening, and at which layer should I tackle this issue?
I faced the same issue and fixed it by using django-queued-storage on top of django-storages. What django-queued-storage does is that when a file is received, it creates a Celery task to upload it to the remote storage (such as S3); in the meantime, if the file is accessed by anyone and it is not yet available on S3, it is served from the local file system. This way you don't have to wait for the file to be uploaded to S3 before sending a response back to the client.
Since your application sits behind a load balancer, you might want to use a shared file system such as Amazon EFS in order to use the above approach.
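For reference, here is roughly what the wiring looks like, based on the django-queued-storage documentation; the model and field names are illustrative and it assumes django-storages with the boto S3 backend:

# models.py (sketch): files land on local storage first, a Celery task then
# pushes them to S3; reads fall back to local storage until the transfer is done.
from django.db import models
from queued_storage.backends import QueuedStorage

queued_s3storage = QueuedStorage(
    'django.core.files.storage.FileSystemStorage',  # local (fast) storage
    'storages.backends.s3boto.S3BotoStorage')       # remote storage (S3)

class Upload(models.Model):
    file = models.FileField(upload_to='uploads', storage=queued_s3storage)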
You can create an upload handler to upload the file directly to S3. This way you shouldn't encounter connection timeouts.
https://docs.djangoproject.com/en/1.10/ref/files/uploads/#writing-custom-upload-handlers
I did some tests and it works perfectly in my case.
You have to start a new multipart upload (with boto, for example) and send the chunks progressively.
Don't forget to validate the chunk size: 5 MB is the minimum if your file contains more than one part (an S3 limitation).
I think this is the best alternative to django-queued-storage if you really want to upload directly to S3 and avoid connection timeouts.
You'll probably also need to create your own FileField to manage the file correctly and avoid sending it a second time.
The following example is with S3BotoStorage (from django-storages) set as the default storage.
# Imports assumed for this snippet (Python 2, boto-based S3BotoStorage):
import sys
import uuid
from StringIO import StringIO

from django.core.files.storage import default_storage
from django.core.files.uploadhandler import FileUploadHandler
from storages.utils import setting  # setting() helper from django-storages

S3_MINIMUM_PART_SIZE = 5242880  # 5 MB, the smallest part size S3 accepts


class S3FileUploadHandler(FileUploadHandler):
    chunk_size = setting('S3_FILE_UPLOAD_HANDLER_BUFFER_SIZE', S3_MINIMUM_PART_SIZE)

    def __init__(self, request=None):
        super(S3FileUploadHandler, self).__init__(request)
        self.file = None
        self.part_num = 1
        self.last_chunk = None
        self.multipart_upload = None

    def new_file(self, field_name, file_name, content_type, content_length, charset=None, content_type_extra=None):
        super(S3FileUploadHandler, self).new_file(field_name, file_name, content_type, content_length, charset, content_type_extra)
        # Prefix the key with a UUID so concurrent uploads of the same file name don't collide.
        self.file_name = "{}_{}".format(uuid.uuid4(), file_name)
        default_storage.bucket.new_key(self.file_name)
        self.multipart_upload = default_storage.bucket.initiate_multipart_upload(self.file_name)

    def receive_data_chunk(self, raw_data, start):
        # Note: sys.getsizeof() includes Python object overhead; len(raw_data)
        # would measure the payload exactly.
        buffer_size = sys.getsizeof(raw_data)
        if self.last_chunk:
            file_part = self.last_chunk
            if buffer_size < S3_MINIMUM_PART_SIZE:
                # Merge small chunks so every part we send meets the S3 minimum size.
                file_part += raw_data
                self.last_chunk = None
            else:
                self.last_chunk = raw_data
            self.upload_part(part=file_part)
        else:
            self.last_chunk = raw_data

    def upload_part(self, part):
        self.multipart_upload.upload_part_from_file(
            fp=StringIO(part),
            part_num=self.part_num,
            size=sys.getsizeof(part)
        )
        self.part_num += 1

    def file_complete(self, file_size):
        # Flush the last buffered chunk, then finalize the multipart upload.
        if self.last_chunk:
            self.upload_part(part=self.last_chunk)
        self.multipart_upload.complete_upload()
        self.file = default_storage.open(self.file_name)
        self.file.original_filename = self.original_filename
        return self.file
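To actually use such a handler, it has to be installed before Django reads the request body. A minimal sketch following Django's documented pattern for swapping upload handlers per view (the view and form field names are illustrative):

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt, csrf_protect

@csrf_exempt
def upload_view(request):
    # Swap in the S3 handler before request.POST/FILES are touched,
    # so chunks stream to S3 instead of a local temporary file.
    request.upload_handlers = [S3FileUploadHandler(request)]
    return _process_upload(request)

@csrf_protect
def _process_upload(request):
    if request.method == 'POST':
        uploaded_file = request.FILES['file']  # 'file' is a hypothetical form field name
        return HttpResponse(uploaded_file.original_filename)
    return HttpResponse(status=405)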
You can try to skip uploading the file to your server altogether and upload it to S3 directly, then only pass a URL back to your application.
There is an app for that: django-s3direct. You can give it a try.
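If you go that route, the integration is roughly a model field plus an S3DIRECT_DESTINATIONS entry in settings. Here's a sketch where the model, field and destination names are illustrative (the exact settings format depends on the django-s3direct version):

# models.py (sketch): S3DirectField renders a widget that uploads from the
# browser straight to S3, so the file never passes through Django/Gunicorn.
from django.db import models
from s3direct.fields import S3DirectField

class Video(models.Model):
    title = models.CharField(max_length=100)
    source = S3DirectField(dest='videos')  # 'videos' must match a key in S3DIRECT_DESTINATIONS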
I'm running Django 1.10.1 against Postgres 9.4. My staging server and dev environments have psql servers at version 9.4.9 and production is an RDS instance at 9.4.7.
It seems my SearchVectorField is not storing the given search configuration in production, though it is in staging and dev. It seems to be either a version issue (unlikely, given how small the version difference is and that it also worked on 9.3 in staging/dev) or the fact that production is on RDS instead of local on the server.
I'm using a custom configuration for full-text search called unaccent, which looks like this:
Token | Dictionaries
-----------------+-----------------------
asciihword | english_stem
asciiword | english_stem
email | simple
file | simple
float | simple
host | simple
hword | unaccent,english_stem
hword_asciipart | english_stem
hword_numpart | simple
hword_part | unaccent,english_stem
int | simple
numhword | simple
numword | simple
sfloat | simple
uint | simple
url | simple
url_path | simple
version | simple
word | unaccent,english_stem
Unaccent is installed in both environments, and works in both environments.
I'm storing the search data in a django.contrib.postgres.search.SearchVectorField on my Writer model:
class Writer(models.Model):
#...
search = SearchVectorField(blank=True)
That column is updated with the following search vector:
writer_search_vector = (SearchVector('first_name', 'last_name', 'display_name',
config='unaccent', weight='A') +
SearchVector('raw_search_data', config='unaccent', weight='B'))
by the following statement, which runs periodically:
Writer.objects.update(search=search_utils.writer_search_vector)
And, for some reason, the configuration is storing successfully on my staging server and in dev, but not in production. E.g., this code returns the same results in all environments:
In [3]: Writer.objects.annotate(searchy=SearchVector('last_name')).filter(searchy='kostenberger')
Out[3]: <QuerySet []>
In [4]: Writer.objects.annotate(searchy=SearchVector('last_name', config='unaccent')).filter(searchy='kostenberger')
Out[4]: <QuerySet [<Writer: Andreas J. Köstenberger>, <Writer: Margaret Elizabeth Köstenberger>]>
But in staging, I get the following correct result if I use the stored vector:
In [5]: Writer.objects.filter(search='kostenberger')
Out[5]: <QuerySet [<Writer: Andreas J. Köstenberger>, <Writer: Margaret Elizabeth Köstenberger>]>
while in production, against the RDS instance, I get the following, incorrect result:
In [5]: Writer.objects.filter(search='kostenberger')
Out[5]: <QuerySet []>
And yet, in production, the unaccent part works but english_stem does not: it will match the stemmed version of the text (below), but not the original version (above):
In [6]: Writer.objects.filter(search='kostenberg')
Out[6]: <QuerySet [<Writer: Margaret Elizabeth Köstenberger>, <Writer: Andreas J. Köstenberger>]>
Note that the database tables for Writer in the two environments are identical for this test.
Any ideas why the stored vector isn't working in production with the correct config, while if I create the vector on the fly it will work?
On RDS Postgres, you aren't allowed to change the default_text_search_config parameter. So, you have to configure the text search with each query:
from django.contrib.postgres.search import SearchRank, SearchQuery
…
search_query = SearchQuery(value='kostenberger', config='unaccent')
Writer.objects.filter(search=search_query)
Since Django 1.4 (I think), Django creates a folder for my project when I start a project. Django adds a folder for any application I create (with python manage.py startapp) at the same level as my project folder.
Project_name
|---project_name_dir/
|---application_dir/
`---manage.py
I really like the following folder structure:
Project_name
|---project_name_dir/
| |---application_dir/
| | |-- __init__.py
| | |-- models.py
| | |-- tests.py
| | `-- views.py
| |-- __init__.py
| |-- settings.py
| |-- urls.py
| |-- wsgi.py
| |---templates/
| | `---application_dir/
| `---static/
| |---css/
| |---font/
| |---img/
| `---js/
|---deployment/
|---documentation/
|---config/
`---manage.py
That way I have a folder with all my Django files (project_name_dir/) and other directories for non-Django files.
So why does Django put applications at the same level as my project folder?
In Django, the position of the application directory is not considered; Django only uses the name of the application.
Thus, the position of an application is basically a matter of the programmer's convenience.
This is also the reason why two apps should not have the same name: even if they are listed in INSTALLED_APPS as
('app.app1', 'app1')
Django is only concerned with the last part after the dot, i.e. app1.
So, in the end, you can use whatever directory structure you want, as long as the apps' names don't collide and you point to each app in INSTALLED_APPS. Because of this, if there isn't any special reason, you should put them at the project's root, like Django does.
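For example, with the nested layout above, the app could be registered in settings.py by its dotted path. A sketch, assuming the directory names from the tree:

# settings.py (sketch): the app lives inside project_name_dir/ and is
# referenced by its full dotted import path.
INSTALLED_APPS = (
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'django.contrib.staticfiles',
    'project_name_dir.application_dir',  # nested app; the final segment must stay unique
)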
I tried several things to get my app working on Heroku, but now I'm out of ideas. I can push my project to Heroku's repo, but I get a 500 error. My application works very well using virtualenv on my machine, after following the steps described in Heroku's documentation for Django.
When I do "git push heroku master" and open it in the browser, I get the following error:
2013-07-07T15:39:11.170514+00:00 app[web.1]: ImportError: No module named apps.base
2013-07-07T15:39:11.170059+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/dateformat.py", line 35, in format
2013-07-07T15:39:11.170202+00:00 app[web.1]: app = import_module(appname)
2013-07-07T15:39:11.170202+00:00 app[web.1]: default_translation = _fetch(settings.LANGUAGE_CODE)
2013-07-07T15:39:11.170202+00:00 app[web.1]: _default = translation(settings.LANGUAGE_CODE)
I thought it was caused by a directory structure that wasn't supported on Heroku, so I adjusted it from the default one created by the django startproject command.
Here is my new file structure. I adjusted the import references everywhere and, as I said, it works perfectly locally:
manage.py
Procfile
requirements.txt
vielfaltig
|____apps
| |____base
| | |____models.py
| | |____templates
| | |____tests.py
| | |____urls.py
| | |____views.py
| |____projects
| | |____admin.py
| | |____models.py
| | |____templates
| | |____templatetags
| | |____tests.py
| | |____translation.py
| | |____urls.py
| | |____views.py
|____locale
|____media
|____settings.py
|____static
|____urls.py
|____vielfaltig.db
|____wsgi.py
As you can see, I have 2 apps (base and projects). In the code, I import them using "vielfaltig.apps.base", for example. I changed this everywhere. I had this error before and changed the directory structure according to what I read when I googled the error. I also tried putting everything in the root directory (along with requirements.txt and the Procfile). I don't know why it keeps raising an ImportError for "apps.base" while I reference the app as "vielfaltig.apps.base" everywhere.
Does anyone have an idea? I will paste my settings.py if needed; for now I think it would just take up a lot of space.
Thank you very much for any help!
I guess it will work with vielfaltig.apps.base instead of apps.base. It can be fixed if you add the vielfaltig path to the Python system path in your settings.py:
import sys
sys.path.append('vielfaltig')
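If the relative path doesn't resolve on Heroku (the dyno's working directory can differ from your local one), an absolute-path variant could look like this; PROJECT_ROOT is an illustrative name:

# settings.py (sketch): build an absolute path to the vielfaltig package directory
# so the 'apps' package is importable no matter where the process is started from.
import os
import sys

PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))  # .../vielfaltig
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)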