My company uses a customer management system that is sort of terrible. We have to upload tons of files to it, but it has no FTP server for us to use and only allows one file upload at a time through its uploader. Is there any way to write a program to automate something like this? Thanks.
I want to view files such as Excel or zip files in the browser without having them downloaded.
I am able to display image and PDF files in the browser, but I am unable to view any other formats, such as zip or xls.
I am storing my files in S3.
What should I do?
Web browsers are not able to natively display most file types. They can render HTML and can display certain types of images (eg JPG, PNG), but only after these files are actually downloaded to your computer.
The same goes for PDFs -- they are downloaded, then a browser plug-in renders the content.
When viewing files (eg Excel spreadsheets and PDF files) within services like Gmail and Google Drive, the files are typically converted into images on the server side and those images are sent to your computer. Amazon S3 is purely a storage service and does not offer a conversion service like this.
Zip files are a method of compressing files and also storing multiple files within a single archive file. Some web services might offer the ability to list files within a Zip, but again Amazon S3 is purely a storage service and does not offer this capability.
To answer your "What should I do?" question, some options are:
Download the files to your computer to view them (see the sketch after this list), or
Use a storage service that offers these capabilities (many of which store the actual files in Amazon S3, but add additional services to convert the files for viewing online)
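If the bucket is private, the usual way to let someone download an object straight from S3 is a time-limited presigned URL. A minimal sketch with boto3, where the bucket and key names are placeholders:

    import boto3

    # Generate a time-limited download link for a private S3 object.
    # "my-bucket" and "reports/example.xlsx" are placeholder names.
    s3 = boto3.client("s3")

    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bucket", "Key": "reports/example.xlsx"},
        ExpiresIn=3600,  # the link is valid for one hour
    )
    print(url)  # the browser will download the file; it will not render it

The browser still downloads the file; the presigned URL just controls who can fetch it and for how long.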
I might be a bit too late but did you try Filestash? (I made it)
Here's what it looks like when you open an xls document on S3.
I aim to support all the common formats, and the list of formats that are already supported is rather large.
I am trying to stream a video with HLSv4. I am using AWS Elastic Transcoder and S3 to convert the original file (eg. *.avi or *.mp4) to HLSv4.
Transcoding is successful, producing several *.ts and *.aac media files (with an accompanying *.m3u8 playlist for each) and a master *.m3u8 playlist linking to the media-specific playlists. I feel fairly confident everything is in order here.
Now the trouble: this is a membership site and I would like to avoid making every video file public. The typical way to do this with S3 is to generate temporary keys server-side and append them to the URL. Trouble is, that changes the URLs to the media files and their playlists, so the existing *.m3u8 playlists (which reference the other playlists and media files) do not contain these keys.
One option that occurred to me is to generate these playlists on the fly, since they are just text files. The obvious problems are the overhead and the hackiness, and these posts were discouraging: https://forums.aws.amazon.com/message.jspa?messageID=529189, https://forums.aws.amazon.com/message.jspa?messageID=508365
After spending some time on this, I feel like I'm going around in circles and there doesn't seem to be a super clear explanation anywhere for how to do this.
So as of September 2015, what is the best way to use AWS Elastic Transcoder and S3 to stream HLSv4 without making your content public? Any help is greatly appreciated!
EDIT: Reposting my comment below with formatting...
Thank you for your reply, it's very helpful.
The plan that's forming in my head is to keep the converted *.ts and *.aac files on S3, but generate the 6-8 *.m3u8 files plus the master playlist and serve them directly from the app server. So the user hits the "Play" page and jwplayer gets the master playlist from the app server (eg "/play/12/"). Server side, this loads the *.m3u8 files from S3 into memory and search-and-replaces the media-specific *.m3u8 links so they point to S3 with a freshly generated URL token.
So: user --> jwplayer --> local master m3u8 (auth verified server side) --> local media m3u8s (auth verified server side) --> S3 media files (accessed with signed URLs and temporary tokens)
Do you see any issues with this approach? Such as "you can't reference external media from a playlist" or something similarly catch 22-ish?
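For what it's worth, here is a rough sketch of the rewrite step in Python, assuming boto3; the bucket name, key layout and URL expiry are made up:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-video-bucket"  # placeholder bucket name

    def presign(key, expires=3600):
        # Signed URL for a single segment or playlist object
        return s3.generate_presigned_url(
            "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=expires
        )

    def rewrite_playlist(playlist_key):
        # Fetch an .m3u8 from S3 and replace each media URI with a signed URL
        body = s3.get_object(Bucket=BUCKET, Key=playlist_key)["Body"].read().decode("utf-8")
        prefix = playlist_key.rsplit("/", 1)[0]
        lines = []
        for line in body.splitlines():
            if line and not line.startswith("#"):
                # Non-comment lines are segment (.ts) or sub-playlist (.m3u8) references
                lines.append(presign(prefix + "/" + line))
            else:
                lines.append(line)
        return "\n".join(lines)

    # Serve the result from the authenticated "/play/<id>/" view with
    # Content-Type: application/vnd.apple.mpegurl

In the master playlist you would instead rewrite the sub-playlist lines to point back at your own app (eg "/play/12/variant-1.m3u8") so each request re-checks auth, and only the media playlists get signed S3 URLs.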
Dynamically generated playlists are one way to go. I actually implemented something like this as an Nginx module and it works very fast, though it's written in C and compiled, not PHP.
The person in your first link is more likely to have issues because of their 1-second chunk duration. This adds a lot of requests and overhead; the value recommended by Apple is 10 seconds.
There are solutions like HLS encrypted with AES-128 (supported by Elastic Transcoder), which also adds overhead if you do it on the fly, and HLS with DRM such as PHLS/Primetime, which will most likely get you into a lot of trouble on the client side.
There seems to be a way to do it with Amazon CloudFront. Please note that I haven't tried it personally and you need to check if it works on Android/iOS.
The idea is to use Signed Cookies instead of Signed URLs. They were apparently introduced in March 2015. The linked blog entry even uses HLS as an example.
Instead of dynamic URLs, you send a Set-Cookie header after you authenticate the user. The cookie (hopefully) gets passed along with every request (playlist and segments), and CloudFront decides whether to allow access to your S3 bucket or not.
You can find the documentation here:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
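To make the signed-cookie idea concrete, here is a hedged sketch of building the three cookies CloudFront expects for a custom policy, using the cryptography package; the key-pair ID, private key and resource pattern are placeholders:

    import base64
    import json
    import time

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def _cf_safe_b64(data):
        # CloudFront's URL-safe base64 variant: '+' -> '-', '=' -> '_', '/' -> '~'
        return base64.b64encode(data).decode().translate(str.maketrans("+=/", "-_~"))

    def signed_cookies(resource, key_pair_id, private_key_pem, expires_in=3600):
        # Build the CloudFront-Policy / -Signature / -Key-Pair-Id cookie values
        policy = json.dumps({
            "Statement": [{
                "Resource": resource,  # eg "https://dxxxxxxxx.cloudfront.net/videos/12/*"
                "Condition": {"DateLessThan": {"AWS:EpochTime": int(time.time()) + expires_in}},
            }]
        }, separators=(",", ":"))
        key = serialization.load_pem_private_key(private_key_pem, password=None)
        signature = key.sign(policy.encode(), padding.PKCS1v15(), hashes.SHA1())
        return {
            "CloudFront-Policy": _cf_safe_b64(policy.encode()),
            "CloudFront-Signature": _cf_safe_b64(signature),
            "CloudFront-Key-Pair-Id": key_pair_id,
        }

You would set each returned name/value as a cookie on the authenticated response, scoped to the CloudFront distribution's domain. The caveat above still applies: the player on the client has to actually send those cookies with every playlist and segment request.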
I need to setup a process to update a database table with user supplied CSV-data (running Coldfusion 8/MySQL 5.0.88).
I'm not sure about the best way to do this.
Should I give users FTP access to my system, generate a directory for every user, and upload files from there? Or should I pick files up from external locations, so the user has to set up an FTP folder my system can access? I'm sort of leaning towards the second way and wanted to set this up using cfschedule and cfftp, but I'm not sure this is the best way to go forward. Security-wise, I'm more inclined to have users specify an FTP location from which I pull, rather than handing out and maintaining FTP folders for every user.
Question:
Which approach is better both in terms of security and automation?
Thanks for your input!
I wouldn't use either approach. I would give the users a web page to upload their csv files. The cf page that accepts the files would place them into a specific folder and make sure they have unique filenames. The cffile tag will help you with that.
The scheduled job would start with a cfdirectory tag on the target folder. This creates a query object. Loop through it and do what you have to do with each file.
Remember to check for the correct file extension. Then look at the first line of the file to ensure it matches the expected format.
Once you have finished processing the file, do something with it so that you don't process it again on the next scheduled job.
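The same flow, sketched in Python rather than CFML purely to illustrate the steps (the folder names and the expected header line are made up):

    import csv
    import shutil
    from pathlib import Path

    INCOMING = Path("/data/csv_uploads")    # placeholder: where the upload page saves files
    PROCESSED = Path("/data/csv_processed") # placeholder: files are moved here when done
    EXPECTED_HEADER = ["customer_id", "name", "email"]  # placeholder column layout

    def run_scheduled_job():
        for path in INCOMING.glob("*.csv"):  # only pick up the expected extension
            with path.open(newline="") as f:
                reader = csv.reader(f)
                header = next(reader, [])
                if header != EXPECTED_HEADER:  # first line must match the agreed format
                    continue                   # skip (or quarantine) malformed files
                for row in reader:
                    pass                       # insert/update the database row here
            shutil.move(str(path), str(PROCESSED / path.name))  # don't process it twice

In ColdFusion the same steps map onto cffile (save with a unique name), cfdirectory (list the folder from the scheduled task), and a loop over the resulting query.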
Setting up a custom FTP server is certainly a possibility, since you are able to create users and give them privileges automatically. It is also secure.
But I don't know the best place to start if you don't have any experience with setting up an FTP server.
Try https://www.dropbox.com/
a.) Create a Dropbox account and send invites to your users/clients.
b.) You can upload files/folders into Dropbox, and your clients/users can access them from their Dropbox account or the Dropbox desktop app.
c.) Your users/clients can upload files/folders, and you can access them from your Dropbox website account or desktop app.
Dropbox is a widely used service and does well on security and automation.
Other solutions:
The best solution may be Google Drive (5 GB free): create a new Gmail account, give your users the ID and password, and ask them to open Google Drive and import/export files. Or try SkyDrive (25 GB free).
http://www.syncplicity.com/
https://www.cubby.com/
http://www.huddle.com/?source=cj&aff=4003003
http://www.egnyte.com/
http://www.sharefile.com/
When you upload a file with Django, the response isn't returned until the upload has completed. If the uploaded file is large, that takes a long time, during which the user can't do anything but wait. Is there any way to handle file uploads asynchronously, so that while a file is uploading in the background the user can keep working on the current page without interrupting the upload?
I am aware that it has been more than 5 years since this question was asked, but I have a similar problem and there are no "simple answers" on SO.
For your problem I would suggest using a progress bar (if you are using Django forms). Uploading a file asynchronously might not be possible in Django alone.
In my case the browser element is not crucial, so I am considering moving the file upload from the browser to some sort of FTP / AWS S3 file storage, and I am working on this.
Seems like https://github.com/jeanphix/django-resumable is designed for that (never tried it though). There is also a version for admin site - https://github.com/jonatron/django-admin-resumable-js
UPD: django-resumable is now abandoned, so I ended up creating my own fork with support for S3 and Inline Admin Views. You can try it here; feedback is very much welcome – https://github.com/DataGreed/django-admin-async-upload
You need task management. Celery is what you are after.
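To be clear, Celery handles the slow processing after the file has arrived; the page itself stays responsive if the browser sends the file with an XHR/AJAX request instead of a full-page form post. A minimal sketch, assuming Celery is already wired into the Django project (the app, task and folder names are placeholders):

    # tasks.py -- placeholder for the slow work (scanning, transcoding, importing, ...)
    from celery import shared_task

    @shared_task
    def process_upload(path):
        ...  # do the heavy lifting in a worker, outside the request cycle

    # views.py -- save the file quickly, then hand off to the worker
    from django.core.files.storage import default_storage
    from django.http import JsonResponse
    from myapp.tasks import process_upload  # "myapp" is a placeholder app name

    def upload(request):
        f = request.FILES["file"]
        path = default_storage.save("uploads/" + f.name, f)
        process_upload.delay(path)  # returns immediately; a worker picks up the task
        return JsonResponse({"status": "queued", "path": path})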
I maintain a couple of low-traffic sites that have a reasonable amount of user-uploaded media files and semi-large databases. My goal is to back up all the data that is not under version control in a central place.
My current approach
At the moment I use a nightly cronjob that uses dumpdata to dump all the DB content into JSON files in a subdirectory of the project. The media uploads are already in the project directory (in media).
After the DB is dumped, the files are copied with rdiff-backup (makes an incremental backup) into another location. I then download the rdiff-backup directory on a regular basis with rsync to store a local copy.
Your Ideas?
What do you use to back up your data? Please post your backup solution, whether you only have a few hits per day on your site or you maintain a high-traffic one with sharded databases and multiple fileservers :)
Thanks for your input.
Recently I found this solution called Django-Backup, and it has worked for me. You can even combine the task of backing up the databases or media files with a cronjob.
Regards,
My backup solution works the following way:
Every night, dump the data to a separate directory. I prefer to keep the data dump directory distinct from the project directory (one reason being that the project directory changes with every code deployment).
Run a job to upload the data to my Amazon S3 account and another location using rsync.
Send me an email with the log.
To restore a backup locally, I use a script to download the data from S3 and load it locally.
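A rough sketch of steps 1 and 2 as one Python script; the bucket name and paths are placeholders, and the dump itself just shells out to Django's dumpdata command:

    import subprocess
    from datetime import date
    from pathlib import Path

    import boto3

    DUMP_DIR = Path("/backups/dumps")  # placeholder: kept outside the project directory
    BUCKET = "my-backup-bucket"        # placeholder bucket

    def nightly_backup():
        DUMP_DIR.mkdir(parents=True, exist_ok=True)
        dump_file = DUMP_DIR / "db-{:%Y-%m-%d}.json".format(date.today())
        # Step 1: dump all app data with Django's dumpdata
        with dump_file.open("w") as out:
            subprocess.run(["python", "manage.py", "dumpdata", "--indent", "2"],
                           stdout=out, check=True)
        # Step 2: copy the dump to S3 (media/ could be synced the same way)
        boto3.client("s3").upload_file(str(dump_file), BUCKET, "dumps/" + dump_file.name)

    def restore(key, target="restore.json"):
        # Download a dump from S3, then load it with: python manage.py loaddata restore.json
        boto3.client("s3").download_file(BUCKET, key, target)

The email-with-log step is left out; a cron wrapper (MAILTO) or a logging handler covers that part.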