I have a Django application that needs to upload files generated when a model object is updated. More concretely, once a database entry is modified, a function has to fire to generate a tiny CSV file and upload it to an (S)FTP server. My question is how to make the uploads reliable and performant (possibly a couple of thousand transactions a day). That is, how to make sure the files are always uploaded, with as few duplicates as possible, without overloading the Django instance.
One option I explored was to send these tiny CSV files to a cloud queue (e.g. AWS SNS/SQS) and have a separate consumer (e.g. an AWS Lambda function or a cron job) poll the queue and upload the files.
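Not a definitive design, just a minimal sketch of that queue option, assuming boto3 with credentials configured; `MyModel`, `build_csv`, and the queue URL are hypothetical:

```python
import json

import boto3
from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import MyModel  # hypothetical model

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/csv-uploads"  # hypothetical


def build_csv(instance):
    # hypothetical: serialize the changed row as a one-line CSV
    return f"{instance.pk},{instance}\n"


@receiver(post_save, sender=MyModel)
def enqueue_csv_upload(sender, instance, **kwargs):
    """On every save, push the CSV payload to SQS; the consumer
    (e.g. a Lambda) polls the queue and performs the (S)FTP upload."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"pk": instance.pk, "csv": build_csv(instance)}),
        # For "few duplicates", either use a FIFO queue with a
        # MessageDeduplicationId, or make the consumer idempotent
        # (keyed on the object's pk/version).
    )
```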
Any ideas for the architecture?
Following this post and this one, here's my situation:
Users upload images to my backend, set up like so: LB -> Nginx Ingress Controller -> Django (uWSGI). The image will eventually be uploaded to Object Storage. Therefore, Django temporarily writes the image to disk, then delegates the upload to an async service (Django Q), since the upload to Object Storage can be time-consuming. Here's the catch: since my Django replicas and Django Q replicas are all separate pods, the file is not available in the Django Q pod. As usual, the task queue is managed by a Redis broker, and any random Django Q pod may consume the task.
I need a way to share the disk file created by Django with Django Q.
The above-mentioned posts basically suggest two solutions:
- Solution 1: use NFS to mount the disk on all pods. This seems like overkill, since the shared volume only stores the file for a few seconds, until the upload to Object Storage completes.
- Solution 2: have the Django service make the file available via an API, which Django Q would use to access the file from the other pod. This seems nice, but I have no idea how to proceed... should I create a second Django/uWSGI app as a sidecar container, listening on another port, that returns the file in an HttpResponse? Can the file be streamed?
Third option: don't move the file data through your app at all. Have the user upload it directly to object storage. This usually means an API that returns a pre-signed upload URL valid for a few minutes; the user uploads the file, then makes another call to let you know the upload is finished. Then your async task can download it and do whatever it needs.
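A rough sketch of that pre-signed-URL endpoint, assuming boto3 and an S3-compatible object store; the bucket name and route are made up:

```python
import uuid

import boto3
from django.http import JsonResponse

s3 = boto3.client("s3")


def presign_upload(request):
    key = f"uploads/{uuid.uuid4()}.jpg"
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "my-media-bucket", "Key": key},
        ExpiresIn=300,  # valid for a few minutes, as described above
    )
    # The client PUTs the file straight to this URL, then calls a second
    # endpoint (not shown) to confirm the upload and hand the key to the
    # async task.
    return JsonResponse({"url": url, "key": key})
```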
Otherwise, you have the two options right. For option 2, an internal MinIO server is pretty common, since Django is very slow at serving large file blobs.
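And to answer "can the file be streamed?" for the plain-Django variant of option 2: yes. A bare-bones sketch of an internal view that streams a temp file between pods; the directory convention is an assumption, and you'd want some auth on it:

```python
import os

from django.http import FileResponse, Http404

TMP_DIR = "/tmp/uploads"  # hypothetical shared naming convention


def serve_upload(request, name):
    # basename() guards against path traversal in the requested name
    path = os.path.join(TMP_DIR, os.path.basename(name))
    if not os.path.exists(path):
        raise Http404
    # FileResponse streams the file in chunks rather than loading
    # the whole blob into memory
    return FileResponse(open(path, "rb"))
```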
I ran into the problem that Heroku doesn't update my GitHub repository (or, say, the static filesystem) when a blog post (including pictures) is created from the website.
Other images survive, whilst the ones saved to my filesystem while the server is running on Heroku disappear.
I found this in their documentation:
The Heroku filesystem is ephemeral - that means that any changes to the filesystem whilst the dyno is running only last until that dyno is shut down or restarted.
I'm still confused about why not all the pictures disappear, only the ones added later.
Is AWS S3 a solution for this? If it is, how can I represent my filesystem using buckets?
Say, for Blog Post 1 I have 2 picture resolutions, which means storing the files in different folders corresponding to those resolutions:

1920x1920/
    picture.jpg
800x800/
    picture.jpg
Does that mean I have to create 2 buckets named 1920x1920 and 800x800 or is there a better way of handling them?
Is AWS S3 a solution for this?
S3 is the recommended solution for this, and the configuration is documented in the Heroku Dev Center, with specific instructions for uploading from Python.
Note these Python instructions use the Direct Upload approach: have the Flask app generate a pre-signed URL, which is passed back to the client-side JavaScript so that the user's browser can upload the file to S3 directly. The resulting S3 URL of the image is then put into a hidden element in the form, which your app receives on form submit.
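That signing endpoint might reduce to something like this sketch, assuming Flask and boto3; the bucket name and route are made up:

```python
import uuid

import boto3
from flask import Flask, jsonify

app = Flask(__name__)
s3 = boto3.client("s3")


@app.route("/sign-s3")
def sign_s3():
    key = f"uploads/{uuid.uuid4()}.jpg"
    post = s3.generate_presigned_post(
        Bucket="my-blog-media",
        Key=key,
        ExpiresIn=300,
    )
    # The browser POSTs the file to post["url"] together with
    # post["fields"], then writes the final S3 URL into the hidden
    # form element mentioned above.
    return jsonify(post)
```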
The fact that you have separate image sizes suggests your app does some processing (maybe with PIL) to get these thumbnails. In that case it may be easier to use the Pass-Through approach, where your app implements its own upload mechanism, does the processing, and then uploads the thumbnails to S3 (the upload-to-S3 part is well documented, such as in this SO thread).
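One possible shape of that Pass-Through step, resizing with PIL and uploading each thumbnail via boto3; the sizes and bucket name are assumptions:

```python
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")


def upload_thumbnails(file_obj, name, bucket="my-blog-media"):
    """Resize the uploaded image to each target size and push to S3."""
    for size in (1920, 800):
        img = Image.open(file_obj)
        img.thumbnail((size, size))  # shrinks in place, keeps aspect ratio
        buf = io.BytesIO()
        img.save(buf, format="JPEG")
        buf.seek(0)
        s3.upload_fileobj(buf, bucket, f"{size}x{size}/{name}")
        file_obj.seek(0)  # rewind the source for the next size
```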
The Pass-Through method carries the warning that it may block a single-threaded worker. If your site gets a volume of requests that makes this an issue, you may need to increase the number of gunicorn workers, or change to a worker type that supports concurrency (this GitHub post has some useful commands/info on concurrent worker types).
The best way to implement this whole thing (although the requirement for a Redis instance and a worker dyno may push you into the paid tier) may be with Background Tasks using rq. You use the Direct-Upload approach above to upload the original image, then have a background job download it, do the resizing, and put the resulting thumbnails back onto S3.
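The rq hand-off might look roughly like this: the web process enqueues the key of the just-uploaded original, and a worker dyno runs the resize job (something like the `upload_thumbnails` sketch above). The function path and key are hypothetical:

```python
from redis import Redis
from rq import Queue

q = Queue(connection=Redis())


def on_upload_complete(s3_key):
    # returns immediately; the rq worker downloads the original from
    # S3, resizes it, and puts the thumbnails back
    q.enqueue("tasks.make_thumbnails", s3_key)
```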
Does that mean I have to create 2 buckets named 1920x1920 and 800x800 or is there a better way of handling them?
Have one bucket for the entire app, and just include forward slashes in the object keys to mimic a subdirectory structure.
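For instance, listing with a prefix then behaves like browsing a folder; a tiny sketch with a made-up bucket name:

```python
import boto3

s3 = boto3.client("s3")

# keys like "1920x1920/picture.jpg" and "800x800/picture.jpg" live in
# one bucket; the prefix acts as the "folder"
resp = s3.list_objects_v2(Bucket="my-blog-media", Prefix="800x800/")
for obj in resp.get("Contents", []):
    print(obj["Key"])  # e.g. 800x800/picture.jpg
```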
I am trying to understand how data from multiple sources and systems gets into HDFS. I want to push web server log files from 30+ systems. These logs are sitting on 18 different servers.
You can create a MapReduce job. The input for your mapper would be a file sitting on a server, and your reducer would decide which path to put the file at in HDFS. You can either aggregate all of your files in your reducer, or simply write each file as-is at the given path.
You can use Oozie to schedule the job, or you can run it on demand by submitting the MapReduce job on the server which hosts the JobTracker service.
You could also create a Java application that uses the HDFS API. The FileSystem object can be used to do standard file system operations, like writing a file to a given path.
Either way, you need to request the creation through the HDFS API, because the NameNode is responsible for splitting the file into blocks and writing them across the distributed servers.
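The Java FileSystem API is one route; the same write-through-the-NameNode idea can be sketched over WebHDFS with the Python `hdfs` package instead (not what the answer names, just an equivalent; the host, port, user, and paths are assumptions):

```python
from hdfs import InsecureClient  # pip install hdfs

# WebHDFS endpoint on the NameNode (port 9870 on Hadoop 3; older
# clusters often use 50070)
client = InsecureClient("http://namenode:9870", user="hadoop")

# Upload one server's log; HDFS handles the block splitting and
# placement across DataNodes.
client.upload("/logs/server01/access.log", "/var/log/nginx/access.log")
```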
On occasion I need to send emails with attachments to users of my site. I am using SendGrid and python-sendgrid 0.1.4 to do the send. Email sending is queued through Redis.
Here's the issue -- where do I put the attachment, which is currently generated as part of the web process? I tried putting it in /tmp, which didn't work -- presumably because the file was deleted when the web process shut down, and so was no longer available when the worker process came by? I tried /app/media, which also didn't work -- I think because /app/media is read-only (though, oddly, I did not get any errors when attempting to write to this directory)?
I think the answer may be that I have to refactor my code to generate the attachment in the same process as the email is sent, but as that is a pretty significant refactor, I thought I'd ask the community first. Thanks!
Heroku's /tmp directories are unique to each dyno. So your Web Dyno saves a file in its /tmp directory, then your worker looks in its /tmp directory and cannot find it.
The best option is likely refactoring your code (that way you aren't clogging up your Web Dyno's resources creating and writing files to disk). However, if you really want to avoid it, you could store your files temporarily on S3 [tutorial] or some other external storage mechanism.
You always need to use external storage, for example S3, to store files that need to be available to every server instance/dyno.
Also good to know: if you don't want to store those attachments forever, you can attach a lifecycle rule to your S3 bucket that will automatically delete a file once it's older than x days.
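Such a rule can be set once via boto3; a sketch where the bucket, prefix, and the 7-day window are all made up:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-attachments",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-attachments",
            "Filter": {"Prefix": "attachments/"},
            "Status": "Enabled",
            "Expiration": {"Days": 7},  # delete anything older than a week
        }]
    },
)
```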
I maintain a couple of low-traffic sites that have a reasonable amount of user-uploaded media files and semi-big databases. My goal is to back up, in a central place, all the data that is not under version control.
My current approach
At the moment I use a nightly cronjob that calls dumpdata to dump all the DB content into JSON files in a subdirectory of the project. The media uploads are already in the project directory (in media).
After the DB is dumped, the files are copied with rdiff-backup (which makes an incremental backup) to another location. I then download the rdiff-backup directory on a regular basis with rsync to store a local copy.
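For reference, the dump step above can also be driven from Python via Django's management API instead of shelling out; a sketch where the paths are examples:

```python
import datetime

import django
from django.core.management import call_command

django.setup()  # assumes DJANGO_SETTINGS_MODULE is set in the environment

# equivalent to: manage.py dumpdata --indent 2 --output backups/db-<date>.json
stamp = datetime.date.today().isoformat()
call_command("dumpdata", indent=2, output=f"backups/db-{stamp}.json")
```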
Your Ideas?
What do you use to back up your data? Please post your backup solution - whether you only get a few hits per day on your site or you maintain a high-traffic one with sharded databases and multiple file servers :)
Thanks for your input.
Recently I found a solution called Django-Backup, which has worked for me. You can even combine the task of backing up the databases or media files with a cronjob.
My backup solution works the following way:
1. Every night, dump the data to a separate directory. I prefer to keep the data dump directory distinct from the project directory (one reason being that the project directory changes with every code deployment).
2. Run a job to upload the data to my Amazon S3 account and to another location using rsync.
3. Send me an email with the log.
To restore a backup locally, I use a script that downloads the data from S3 and loads it locally.
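The restore script might reduce to something like this, assuming boto3 and a dumpdata-style JSON fixture; the bucket and key names are examples:

```python
import boto3
from django.core.management import call_command

s3 = boto3.client("s3")

# fetch the latest dump, then load it into the local database
s3.download_file("my-backup-bucket", "nightly/db.json", "/tmp/db.json")
call_command("loaddata", "/tmp/db.json")
```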