I am looking to write a Django app that is a utility.
I would like the user to be presented with a form where they can upload 4 or 5 files.
They would then hit a process button, and the files would be processed, combined or zipped, and downloaded as one file (result_file).
After this, the uploaded files would be deleted, as would the result_file.
I would prefer to avoid having a database in this app, as it never really stores any information.
Could all this be done in a view? And how would you approach it?
This is what I would try:
Create the script that does the file processing work.
Place the script in a new folder inside the Django app folder and import it in your views.py.
Add an empty __init__.py file to that new folder to make it a package, so it can be imported in views.py.
In the same views.py, another function could handle the download.
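The processing-and-download step can indeed stay database-free. A minimal sketch of the core helper, working entirely in memory so there is nothing to delete afterwards (the Django wiring in the comments uses hypothetical names like result_file.zip):

```python
import io
import zipfile

def zip_uploaded_files(files):
    """Combine uploaded files into a single in-memory zip.

    `files` maps filename -> bytes. Building the archive in a BytesIO
    buffer means nothing is written to disk and no database is needed.
    """
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for name, data in files.items():
            archive.writestr(name, data)
    return buffer.getvalue()

# In the view (hypothetical Django wiring):
#   payload = zip_uploaded_files({f.name: f.read() for f in request.FILES.getlist("files")})
#   response = HttpResponse(payload, content_type="application/zip")
#   response["Content-Disposition"] = 'attachment; filename="result_file.zip"'
#   return response
```

Since the response is built from the in-memory buffer, the uploaded files (held in memory or in Django's temporary upload storage) are discarded automatically once the request ends.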
I have a project made in Django. I have only added social auth for login purposes, and I want only selected emails to be able to log in to the website. I used the social-auth-app-django library for social auth and added a variable SOCIAL_AUTH_GOOGLE_OAUTH2_WHITELISTED_EMAILS to the settings.py file, containing a list of all the emails permitted to log in.
My project directory looks something like this:
project_parent/
----project/
--------settings.py
--------wsgi.py
--------asgi.py
--------urls.py
----app/
--------models.py
--------views.py
--------urls.py
--------admin.py
----db.sqlite3
----manage.py
----config.json
Here is the config file:
{
"OAUTH2": {
"WHITELISTED_EMAILS": [
"xyz#gmail.com",
"abc#gmail.com",
"efg#gmail.com",
"lmn#gmail.com"
]
}
}
In settings.py file I have loaded the config file like this:
with open('./config.json') as config:
    data = json.load(config)
SOCIAL_AUTH_GOOGLE_OAUTH2_WHITELISTED_EMAILS = data['OAUTH2']['WHITELISTED_EMAILS']
I have made a webpage that takes a new mail id (to be added to the config file) and appends that mail id (or a list of mail ids) to config.json. But now the problem arises: the newly added mails are not reflected in the variable defined in settings.py. I need to restart the server to pick up the latest config file, so every time I add a mail id, a restart is required.
I thought of creating a database table in my app folder and loading that table in settings.py by importing the model from the app folder. But on importing, the terminal raises an error saying the app is not loaded yet, so I can't use its models.
Is there a way to load the database table directly without importing models.py from the app folder? Or, if possible, to load config.json in real time so that I don't have to restart the server?
settings.py in Django is loaded once at startup, and the docs suggest you should not change its values at runtime. There are, however, ways to make configuration editable at runtime: you can use the django-constance library and then make a form that updates the value at runtime by editing the database entry.
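A minimal sketch of what that setup could look like (the key name WHITELISTED_EMAILS is an example carried over from the question, not something the library requires):

```python
# settings.py -- sketch of a django-constance setup
INSTALLED_APPS = [
    # ... your existing apps ...
    "constance",
]

# Store editable values in the database so they survive restarts
CONSTANCE_BACKEND = "constance.backends.database.DatabaseBackend"

CONSTANCE_CONFIG = {
    # name: (default value, help text)
    "WHITELISTED_EMAILS": ("", "Comma-separated list of allowed emails"),
}
```

Code that needs the value then reads it at request time with `from constance import config` and `config.WHITELISTED_EMAILS`, so edits made through the admin (or your own form) take effect without a restart.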
I downloaded the zip file from tiny.cloud and added it to my static folder. Everything is working fine,
except now I'm
getting this notification every time I want to create a post.
I already have an account, but where they suggest adding the key in the tinymce.js file, the content in mine is totally different: I'm not using a single js file but a bunch of files, so I don't know where I should put my API key to stop the notification.
Script file I'm using in the head:
post_create.html, where I added the script:
To run TinyMCE 5 from the cloud, use the following code in the head of your HTML file, replacing no-api-key with your own API key:
<script src="https://cdn.tiny.cloud/1/no-api-key/tinymce/5/tinymce.min.js" referrerpolicy="origin"></script>
There's more information about getting started with TinyMCE 5 on the cloud in the docs: https://www.tiny.cloud/docs/quick-start/
I have created an API in Django. It is supposed to take a request and pass the argument to AllenNLP code to compute a response. I want to know how to run my Django app in the AllenNLP environment, and I want all the AllenNLP source code to be in a folder in my Django project. Is this possible, and how can I do it?
What you're looking for is running AllenNLP inside Django.
You can add the AllenNLP libraries to your requirements.txt. The .py files can be stored anywhere in your source code hierarchy.
In your views.py, where you receive the request and extract parameters, you can call the .py file that contains the AllenNLP code.
I'm not sure which AllenNLP files you're talking about: if they are code files, they can go in your regular source code folder; if they are static files, like images or CSVs, they need to go in the static folder.
Please clarify your requirement if this answer doesn't address your question.
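One practical detail when calling model code from views.py: loading an AllenNLP model is expensive, so it should happen once per process rather than once per request. A sketch of a lazy-loading pattern (the archive path and the Predictor call are assumptions about your setup):

```python
import functools

def lazy_singleton(loader):
    """Wrap a zero-argument loader so it runs once and the
    result is cached for every later call."""
    return functools.lru_cache(maxsize=1)(loader)

# Hypothetical AllenNLP wiring: the model loads on the first request only.
@lazy_singleton
def get_predictor():
    from allennlp.predictors.predictor import Predictor  # heavy import, deferred
    return Predictor.from_path("model.tar.gz")  # assumed archive path

# In views.py you would then call get_predictor().predict(...) with the
# parameters extracted from the request.
```

Deferring the allennlp import into the function also keeps Django management commands fast when the model isn't needed.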
I am working on a project where urls are put into a Django model called UrlItems. The models.py file containing UrlItems is located in the home app. I typed scrapy startproject scraper in the same directory as the models.py file. Please see this image to better understand my Django project structure.
I understand how to create new UrlItems from my scraper but what if my goal is to get and iterate over my Django project's existing UrlItems inside my spider's def start_requests(self) function?
What I have tried:
1) I followed the marked solution in this question to try and see if my created DjangoItem already had the UrlItems loaded. I tried to use UrlItemDjangoItem.objects.all() in my spider's start_requests function and realized that I would not be able to retrieve my Django project's UrlItems this way.
2) In my spider I tried to import my UrlItems like this from ...models import UrlItem and I received this error ValueError: attempted relative import beyond top-level package.
Update
After some consideration I may end up having the Scrapy spider query my Django application's API to receive a list of the existing Django objects in JSON.
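The API approach in the update keeps Scrapy and Django nicely decoupled. The spider-side parsing can be as small as this (the endpoint path and the "url" field name are assumptions about the API's JSON shape):

```python
import json

def start_urls_from_api(payload):
    """Turn the JSON body returned by a hypothetical /api/url-items/
    endpoint into a plain list of urls for start_requests()."""
    return [item["url"] for item in json.loads(payload)]

# Inside the spider (hypothetical wiring):
#   def start_requests(self):
#       body = urllib.request.urlopen("http://localhost:8000/api/url-items/").read()
#       for url in start_urls_from_api(body):
#           yield scrapy.Request(url, callback=self.parse)
```

This also sidesteps the relative-import error entirely, since the spider never imports the Django models.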
We let users upload their own custom css/less to our django app.
All our css/less files are compressed by django_compressor "on-the-fly".
When the app is initially deployed all css files are moved to the collect-static directory.
When users upload custom css styles they replace one of the less files in the collect-static directory.
The problem is that the changes only appear after Apache is reloaded, which is when django-compressor generates a new css file.
Is there a way to force django-compressor to regenerate its compiled and cached files? I would not feel comfortable triggering a sudo service apache2 reload at the Django application level.
I can come up with two possible solutions; I don't like either of them very much.
You can call compress (doc) from within incron or cron:
python manage.py compress
Or you can set a very low COMPRESS_REBUILD_TIMEOUT. (doc)
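For the cron route, the entry could look something like this (all paths here are assumptions about your deployment):

```shell
# Recompress every 10 minutes; adjust project path, virtualenv and log file.
*/10 * * * * cd /srv/myproject && /srv/venv/bin/python manage.py compress >> /var/log/compress.log 2>&1
```

incron works the same way, except the trigger is a filesystem event on the uploaded less file instead of a schedule.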
BTW, you have the user scripts as a separate bundle, right?
I used a different approach. I am now using offline compression, which is faster and better for multi server deployments anyways.
I give the user an interface to change certain css and less values. I save those css/less values in a database table, so that the user can edit the stuff easily.
To make the new css/less values available to the frontend (the compiled css files), I write the values the user entered to a less file on disk and re-run the python manage.py compress command.
This way the compiled compressor files are generated and if the user entered invalid less code, which would lead to compile errors, the compressor stops and keeps the old css files.
Here's my save() method:
def save(self, *args, **kwargs):
    # write the less file to the file system
    # every time the model is saved
    try:
        file_location = os.path.join(settings.STATIC_ROOT, 'styles', 'less', 'custom_styles.less')
        with open(file_location, 'w') as f:
            f.write(render_to_string('custom_styles/custom_stylesheet_tmpl.txt', {'platform_customizations': self}))
    except IOError:
        # TODO: show error message to user in admin backend and via messaging system
        raise
    # re-run offline compress command in prod mode
    management.call_command('compress')
    super(PlatformCustomizations, self).save(*args, **kwargs)