I am making a web app that auto-deletes an uploaded file a certain number of hours after it was uploaded. My question is: what would be the best backend way of implementing this?
Should I poll a folder for files older than X hours and then call a script to delete those files? Since it is going to be a web app, is there some server-side language I could use for it?
A simple backend shell script scheduled as a cron job every X minutes will do the job if the files are stored directly on the file system. If file references, locations, etc. are stored in a database with a timestamp, those can be cleaned up by a cron job as well.
This is how it was done in one of my previous projects.
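For illustration, a minimal sketch of such a cleanup job (shown in Python rather than shell, but the idea is the same; the upload directory and retention window are assumptions):

```python
# cleanup.py - delete uploads older than MAX_AGE_HOURS; run it from cron.
# The directory and age limit below are placeholders, adjust to your setup.
import os
import time

UPLOAD_DIR = "/var/www/uploads"   # hypothetical upload folder
MAX_AGE_HOURS = 24

cutoff = time.time() - MAX_AGE_HOURS * 3600

for name in os.listdir(UPLOAD_DIR):
    path = os.path.join(UPLOAD_DIR, name)
    # Compare the file's last-modified time against the cutoff.
    if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
        os.remove(path)
```

A crontab entry such as `*/15 * * * * python /path/to/cleanup.py` would then run it every 15 minutes.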
I have a rather large JSON database that I'm maintaining with Python. It basically scrapes data from a website on an hourly basis, and I'm running daily restarts on the system (Linux Mint) via crontab. My issue is that if the system happens to restart during the database update process, I get corrupted JSON files.
My question is whether there is any way to delay the system restart in my script to ensure the system shuts down at a safe time. I could issue the restart command inside the script itself, but if I decide to run multiple similar scripts in the future I'll obviously have a problem.
Any help here would be greatly appreciated. Thanks
Edit: Just to clarify, I'm not using the Python jsondb package. I am doing all the file handling myself.
My solution to this was quite simple (just protect data integrity):
Before write - backup the file
On successful write - delete the backup (Avoids doubling the size of the DB)
Wherever a corrupted file is encountered - revert to the backup
The idea is that if the system closes the script during the backup, it doesn't matter: we still have the original. And if the system closes the script while writing to the original file, the backup never gets deleted and we can just use that instead. All in all it was just an extra 6 lines of code and appears to have solved the issue.
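As a rough sketch of that pattern, assuming a single JSON file handled with the standard json module (the file names here are made up):

```python
import json
import os
import shutil

DB_PATH = "db.json"
BACKUP_PATH = DB_PATH + ".bak"

def save_db(data):
    # 1. Before write: back up the current file.
    if os.path.exists(DB_PATH):
        shutil.copy2(DB_PATH, BACKUP_PATH)
    # 2. Write the new contents.
    with open(DB_PATH, "w") as f:
        json.dump(data, f)
    # 3. On successful write: delete the backup.
    if os.path.exists(BACKUP_PATH):
        os.remove(BACKUP_PATH)

def load_db():
    try:
        with open(DB_PATH) as f:
            return json.load(f)
    except (IOError, ValueError):
        # Corrupted (or missing) primary file: revert to the backup.
        with open(BACKUP_PATH) as f:
            return json.load(f)
```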
On occasion I need to send emails with attachments to users of my site. I am using SendGrid and python-sendgrid 0.1.4 to do the send. Email sending is queued through Redis.
Here's the issue: where do I put the attachment, which is currently generated as part of the web process? I tried putting it in /tmp, which didn't work, presumably because the file was deleted when the web process shut down and was no longer available when the worker process came by? I tried /app/media, which also didn't work, I think because /app/media is read-only (though, oddly, I did not get any errors attempting to write to this directory)?
I think the answer may be that I have to refactor my code to generate the attachment in the same process as the email is sent, but as that is a pretty significant refactor, I thought I'd ask the community first. Thanks!
Heroku's /tmp directories are unique to each dyno. So your web dyno saves a file in its /tmp directory, then your worker dyno looks in its own /tmp directory and cannot find it.
The best option is likely refactoring your code (that way you aren't clogging up your web dyno's resources creating and writing files to disk). However, if you really want to avoid that, you could store your files temporarily on S3 [tutorial] or some other external storage mechanism.
You always need to use external storage, for example S3, to store files that need to be available to every server instance/dyno.
Also good to know: if you don't want to keep those attachments forever, you can attach a lifecycle rule to your S3 bucket that will automatically delete a file once it's older than X days.
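For example, with boto3 (an assumption; the bucket name, prefix, and seven-day window below are placeholders), such a lifecycle rule can be created once like this:

```python
import boto3

s3 = boto3.client("s3")

# Expire everything under the "attachments/" prefix after 7 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-attachments-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-attachments",
                "Filter": {"Prefix": "attachments/"},
                "Status": "Enabled",
                "Expiration": {"Days": 7},
            }
        ]
    },
)
```

After that, S3 handles the cleanup itself; no cron job or worker code is needed.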
Having spent a couple of hours coding an event gateway solution, I discovered that event gateways are not supported by CF Standard Edition. Buggerit! So back to the drawing board.
I can see how I can check the folder's dateLastModified attribute using cfdirectory, and so I can run a scheduled task to see when a new file has been uploaded, but what's the best way of storing/comparing the file list so as to get a list of just the ones added since the last check?
General hints/links appreciated
Assuming that, for whatever reason, you can't use a gateway, the simplest solution that springs to mind is to move files you've processed to a separate directory. Then, your scheduled task can just deal with the files in the FTP directory itself.
they are not supported by CF standard edition
Are you still using CF7? Event gateways have been supported by CF Standard Edition since CF8.
As @Henry pointed out, you can use an Event Gateway.
If you decide not to use that approach, I'd suggest a ColdFusion scheduled task. The most foolproof algorithm for that task is storing the results of the last <cfdirectory/> call, either in a persistent scope (application or server) or by writing it out to a database or file (e.g. WDDX). The reason to hold on to all this information, rather than just a timestamp, is to handle situations where newly added or changed files do not take on the correct timestamp for whatever reason (a system clock being off comes to mind).
If you use a database to capture the data, you could use an EXCEPT/MINUS query in SQL Server or Oracle, respectively, to determine what's new. Otherwise you'll need to perform some nested looping in ColdFusion over the old and new queries to generate the list of new files.
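As a language-agnostic illustration of the "keep the last listing and diff it" idea, here is a sketch in Python rather than CFML (the paths are made up):

```python
import json
import os

WATCH_DIR = "/ftp/incoming"            # hypothetical FTP drop folder
STATE_FILE = "/ftp/.last_listing.json" # where the previous listing is kept

# Current listing: file name -> last-modified timestamp.
current = {
    name: os.path.getmtime(os.path.join(WATCH_DIR, name))
    for name in os.listdir(WATCH_DIR)
}

# Previous listing from the last scheduled run (empty on the first run).
try:
    with open(STATE_FILE) as f:
        previous = json.load(f)
except (IOError, ValueError):
    previous = {}

# New files are absent from the last run; changed files have a newer timestamp.
new_or_changed = [
    name for name, mtime in current.items()
    if name not in previous or mtime > previous[name]
]

# Persist the current listing for the next run.
with open(STATE_FILE, "w") as f:
    json.dump(current, f)
```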
We have an extensive existing codebase, and we've now added load-balanced servers with a single master server to the equation. There are various apps that contain models with uploaded files and images, which all work fine... However, this raises the obvious problem of the rsync delay. Rsync is in the crontab and set to run every minute, but this still means there's a potential 59-second wait between content being created and it actually existing on the web servers.
What I would like is to be able to register some kind of 'post file changed' handler that triggers rsync whenever a new file is uploaded. I can't find anything of the sort, though! Django has file upload handlers, but these appear to only deal with the actual upload stream, not the file as it is saved to the filesystem thereafter.
The best approach I can see is to create simple extensions to FileField, FieldFile, ImageField and ImageFieldFile as part of my project and hook into the save and delete methods in the FileField. Essentially, to create custom File and Image fields with this behaviour added. This isn't massively complicated to do, but it doesn't seem like the most elegant solution to me. I'll need to teach South about my new fields, update every model that is affected, and then create hordes of South migrations (which I'm pretty sure will clash with some code we have pending).
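Roughly what I have in mind (an untested sketch; the class names and the rsync/ssh destinations are placeholders):

```python
import subprocess

from django.db.models import FileField
from django.db.models.fields.files import FieldFile

class SyncedFieldFile(FieldFile):
    def save(self, name, content, save=True):
        super(SyncedFieldFile, self).save(name, content, save)
        # Push the newly written file to the other web servers right away.
        subprocess.call(["rsync", "-az", self.path, "web1:/srv/media/"])

    def delete(self, save=True):
        path = self.path
        super(SyncedFieldFile, self).delete(save)
        # Mirror the deletion as well (hypothetical destination).
        subprocess.call(["ssh", "web1", "rm", "-f", path])

class SyncedFileField(FileField):
    attr_class = SyncedFieldFile
```

I'd then have to swap FileField for SyncedFileField on every affected model, which is exactly the migration churn I'd like to avoid.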
I'm also looking into creating a custom Storage class for the project, but I'm nervous about this having far-reaching effects on other pieces of code.
I can't believe no one has come across this issue before; is there a canonical approach?
Thanks very much!
If you want to tackle this problem from the server-side (eg. similar solution to rsync) and you're running Linux, you might want to check out lsyncd:
http://code.google.com/p/lsyncd/
lsyncd uses inotify in the Linux kernel to watch directories and invoke an rsync as soon as files are modified. Fairly simple to drop in.
I went to upload a new file to my web server only to get a message in return saying that my disk quota was full... I wasn't using up my allotted space but rather my allotted FILE QUANTITY. My host caps my total number of files at about 260,000.
Checking through my folders I believe I found the culprit...
I have a small DVD database application (Video dB By split Brain) that I have installed and hidden away on my web site for my own personal use. It apparently caches data from IMDB, and over the years has secretly amassed what is probably close to a MIRROR of IMDB at this point. I don't know for certain but I did have a 2nd (inactive) copy of the program on the host that I created a few years back that I was using for testing when I was modifying portions of it. The cache folder in this inactive copy had 40,000 files totalling 2.3GB in size. I was able to delete this folder over FTP but it took over an hour. Thankfully it also gave me some much needed breathing room.
...But now, as you can imagine, the cache folder for the active copy of this web-app likely has closer to 150,000 files totalling about 7GB worth of data.
This is where my problem comes in... I use Flash FXP for my FTP client, and whenever I try to delete the cache folder, or even just view its contents, it will sit and try to load a file list for a good 5 minutes and then lose connection to the server...
My host has a web-based file browser, and it crashes when trying to do this... as do free online FTP clients like net2ftp.com. I don't have SSH access on this server, so I can't log in directly to delete them either.
Anyone have any idea how I can delete these files? Is there a different FTP program I can download that would have better success... or perhaps a small script I could run that would be able to take care of it?
Any help would be greatly appreciated.
Anyone have any idea how I can delete these files?
Submit a support request asking for them to delete it for you?
It sounds like it might be time for a command-line FTP utility. One ships with just about every operating system. With that many files, I would write a script for my command-line FTP client that goes to the folder in question and performs a directory listing, redirecting the output to a file. Then, use magic (or Perl, or whatever) to process that file into a new FTP script that runs a delete command against all of the files. Yes, it will take a long time to run.
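If you go the scripting route, a rough sketch with Python's ftplib might look like this (the host, credentials, and cache path are placeholders, and expect it to take a long while for 150,000 files):

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")          # hypothetical host
ftp.login("username", "password")     # hypothetical credentials
ftp.cwd("/htdocs/videodb/cache")      # hypothetical cache directory

# Delete every file in the directory, one DELE command at a time.
for name in ftp.nlst():
    if name in (".", ".."):
        continue
    try:
        ftp.delete(name)
    except Exception:
        # Probably a subdirectory; skip it (or recurse into it if needed).
        pass

ftp.quit()
```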
If the server supports wildcards, do that instead and just delete *.*.
If that all seems like too much work, open a support ticket with your hosting provider and ask them to clean it up on the server directly.
Having said all that, this isn't really a programming question and should probably be closed.
We had a question a while back where I ran an experiment to show that Firefox can browse a directory with 10,000 files no problem, via FTP. Presumably 150,000 will also be ok. Firefox won't help you delete, but it might be helpful in capturing the names of the files you need to delete.
But first I would just try the command-line client ncftp. It is well engineered and I have had good luck with it in the past. You can delete a large number of files at once using shell patterns. And it is available for Windows, MacOS, Linux, and many other platforms.
If that doesn't work: you sound like a long-term customer, so could you beg your ISP for the privilege of a shell account for a week, so you can log in remotely with PuTTY or ssh and blow away the entire directory with a single rm -r command?
If your ISP provides ssh access, you can use one rm command to remove the files.
If there is no command-line access, you can try a more powerful FTP client like CrossFTP. It works on Windows, Mac, and Linux. When you select the huge number of files on your server for deletion, it queues the delete operations, so you don't need to reload the folder again. When you restart CrossFTP, the queue can also be restored and continued.