I created a small GitHub Pages site at http://s-a.github.io/sample-db/ which contains an audio player pointing to wav files in a subfolder. Every request for a wav file returns a 404. Are there any limitations on hosting files on github.io?
File size is not the problem; the files are under the 100 MB limit (see the GitHub documentation).
Your problem comes from the fact that the GitHub Pages server is case sensitive.
In your channel.json file, the naming scheme is bpnet_something, but your files are named with the scheme bpNet_something.
Change bpnet to bpNet in the file names in your channel.json file, commit, push, and listen to the music.
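If you want to catch this kind of mismatch before pushing, a small sketch like the following could compare the names referenced in channel.json against the files actually on disk (the JSON layout and the wav subfolder name are assumptions, not taken from your repository):

import json
import os

# Assumed layout: channel.json contains a list of entries with a "file" field.
with open("channel.json") as fh:
    entries = json.load(fh)

referenced = {entry["file"] for entry in entries}
on_disk = set(os.listdir("wav"))  # assumed name of the subfolder holding the wav files

# GitHub Pages is case sensitive, so the names must match exactly.
for name in sorted(referenced - on_disk):
    print("listed in channel.json but not found on disk (check the case):", name)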
I have searched but haven't found a satisfactory solution.
Minio/S3 does not have directories, only keys (with prefixes). So far so good.
Now I need to change those prefixes, not for a single file but for a whole bunch (a lot) of files, which can be really large (there is effectively no size limit).
Unfortunately, these storage servers seem to have no concept of (and do not support):
rename file
move file
What has to be done instead, for each file, is the following (a short sketch follows the list):
copy the file to the new target location
delete the file from the old source location
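For reference, a minimal sketch of that copy-then-delete step with the Minio Python SDK (the question uses the dotnet library, which has equivalent calls; endpoint, credentials, bucket, and key names are placeholders):

from minio import Minio
from minio.commonconfig import CopySource

client = Minio("minio.example.com", access_key="...", secret_key="...")  # placeholder connection

def move_object(bucket, old_key, new_key):
    # There is no rename: server-side copy to the new key, then delete the old key.
    # Note: a single server-side copy is limited to 5 GiB on S3-compatible stores;
    # larger objects need a multipart copy (compose_object in the Python SDK).
    client.copy_object(bucket, new_key, CopySource(bucket, old_key))
    client.remove_object(bucket, old_key)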
My existing design looks like this:
users upload files to bucketname/uploads/filename.ext
a background process takes the uploaded files, generates some more files and uploads them to bucketname/temp/filename.ext
when all processing is done, the uploaded file and the processed files are moved to bucketname/processed/jobid/new-filenames...
The path prefix is used when handling the object-created notification to differentiate whether it is an upload (start processing), temp (check whether all files have been uploaded), or processed/jobid (holding the files until the user deletes them).
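As an illustration, that dispatch on the object-created notification could be as simple as this (the handler functions are hypothetical names, not part of my actual code):

def handle_object_created(object_key):
    # Route by path prefix, matching the design above.
    if object_key.startswith("uploads/"):
        start_processing(object_key)           # hypothetical: kick off the background job
    elif object_key.startswith("temp/"):
        check_all_files_uploaded(object_key)   # hypothetical: see if the job is complete
    elif object_key.startswith("processed/"):
        pass                                   # kept until the user deletes them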
Imagine a task where 1000 files have to get to a new location (within the same bucket): copying and deleting them one by one leaves a lot of room for errors, such as running out of storage space during the copy operation or connection errors, with no chance of a rollback. It doesn't get easier if the locations are in different buckets.
So, given this existing design and no chance to rename/move a file:
Is there any chance to copy the files without creating new physical files (without duplicating the used storage space)?
Could any experienced cloud developer please give me a hint on how to do this bulk copy with rollbacks in error cases?
Has anyone implemented something like that with a working rollback mechanism for when, e.g., file 517 of 1000 fails? Copying them and then deleting them back does not seem to be the way to go.
Currently I am using the Minio server and the Minio dotnet library, but since they are compatible with Amazon S3, this scenario could also happen on Amazon S3.
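One hedged sketch of how such a bulk move could be structured, copying everything first and only deleting the sources once every copy has succeeded (Python SDK again; a rollback removes the partial copies):

from minio.commonconfig import CopySource

def bulk_move(client, bucket, pairs):
    # pairs is a list of (old_key, new_key) tuples.
    copied = []
    try:
        for old_key, new_key in pairs:
            client.copy_object(bucket, new_key, CopySource(bucket, old_key))
            copied.append(new_key)
    except Exception:
        # Rollback: remove the copies made so far; the sources were never touched.
        for new_key in copied:
            client.remove_object(bucket, new_key)
        raise
    # Every copy now exists, so deleting the sources cannot lose data; a failure
    # here only leaves duplicates, which a retry can clean up.
    for old_key, _ in pairs:
        client.remove_object(bucket, old_key)

Note that this does not avoid the temporary double use of storage space; as far as I know, neither Minio nor S3 exposes a server-side rename that would.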
I am building an application in C++ that should sync a directory to a server. Using the code given in Obtaining Directory Change Notifications, I wait for any change in the directory. Once a notification arrives, I look for the last modified file and then upload it to the server.
I can get the code for finding the last modified file from similar questions. The problem with this approach is that if the user copies 5 files of 500 bytes each, all of them will be copied in less than a second, so they will all have the same modification time.
I need to distinguish between them. How can I do that?
I have an App Folder in my Dropbox, and in that folder, let's say there is a folder called 'database'.
How would I go about downloading the whole folder's contents and putting them into a local folder, or reading each file's contents one by one?
(The latter is preferable as I assume that would use the least amount of processing power)
Thank you!
To sum up: I need to get a whole folder's file contents, and save them locally.
I've found an answer, though it doesn't use the Dropbox API; it uses urllib.

import urllib  # Python 2; in Python 3 this lives in urllib.request

URL = "dropbox folder link"  # the shared link to the folder
globalData = urllib.urlretrieve(URL, "file-destination")  # saves the download to the given path
Simple :)
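If you would rather stay with the Dropbox API itself, a rough sketch using the official Python SDK could look like this (the access token, folder path, and local directory are placeholders):

import os
import dropbox

dbx = dropbox.Dropbox("ACCESS_TOKEN")   # placeholder token for the app
FOLDER = "/database"                    # path inside the App Folder
LOCAL_DIR = "database"                  # where to save the files locally

os.makedirs(LOCAL_DIR, exist_ok=True)
result = dbx.files_list_folder(FOLDER)
while True:
    for entry in result.entries:
        # Skip sub-folders; download each file under its original name.
        if isinstance(entry, dropbox.files.FileMetadata):
            dbx.files_download_to_file(os.path.join(LOCAL_DIR, entry.name), entry.path_lower)
    if not result.has_more:
        break
    result = dbx.files_list_folder_continue(result.cursor)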
I understand that Django's file upload handlers by default keep files smaller than 2.5 MB in memory and write those above that size to a temporary folder on disk.
In my models, where I have a file field, I have specified the upload_to folder where I expect the files to be written.
But when I try to read these files from that folder, I get an error implying that the files do not yet exist there.
How can I force Django to write the files to the folder specified in upload_to before another procedure starts reading from it?
I know I can read the files directly from memory via request.FILES['file'].name, but I would rather force the files to be written from memory to the folder before I read them.
Any insights will be highly appreciated.
The FILE_UPLOAD_MAX_MEMORY_SIZE setting tells Django the maximum size of a file to keep in memory. Set it to 0 and uploads will always be written to disk.
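In other words, a one-line settings change (a minimal sketch; the rest of settings.py is unchanged):

# settings.py
# With the threshold at 0, uploads are always streamed to a temporary
# file on disk instead of being held in memory.
FILE_UPLOAD_MAX_MEMORY_SIZE = 0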
I am looking for a very general answer to the feasibility of the idea, not a specific implementation.
If you want to serve small variations of the same media file to different people (say, an ePub or a music file), is it possible to serve most of the file to everybody but a small individualized portion to each recipient for watermarking, using something like Amazon Web Services?
If yes, would it be possible to create a Dropbox-like file hosting service with these individualized media files, where all users "see" mostly the same physically stored file but with tiny parts of it served individually? If, say, 1000 users had the same 10 MB mp3 file with different watermarks on a server, that would amount to 10 GB. But if the same 1000 users were served the same file except for a tiny 10 kB individually watermarked portion each, it would only amount to about 20 MB in total (the shared 10 MB plus 1000 × 10 kB).
An EPUB is a single file and must be served/downloaded as such, not in pieces. Why don't you implement simple server-side logic to customize the necessary components, build the EPUB from the common assets and the customized ones, and then let users download that?
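For the EPUB case, that server-side assembly step could be a rough sketch like this (Python, with made-up paths and a hypothetical dict of per-user files; the one format-specific detail is that the mimetype entry must come first and be stored uncompressed):

import os
import zipfile

def build_personalized_epub(common_dir, personalized_files, out_path):
    # common_dir holds the shared, unchanging EPUB contents (assumed not to
    # already contain a mimetype entry); personalized_files maps archive
    # paths to per-user bytes (hypothetical layout).
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as epub:
        # EPUB requires 'mimetype' as the first entry, stored without compression.
        epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
        for root, _, files in os.walk(common_dir):
            for name in files:
                full = os.path.join(root, name)
                epub.write(full, os.path.relpath(full, common_dir))
        for arcname, data in personalized_files.items():
            epub.writestr(arcname, data)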
The answer is, of course, yes, it can be done, using an EC2 instance -- or any other machine that can run a web server, for that matter. The problem is that each type of media file comes with a different level of complexity when it comes to customizing it... from the simplest, where the file contains a string of bytes at a known position that can simply be overwritten with your watermark data, to a more complex format that would have to be fully or partially disassembled and repackaged every time a download is requested.
The bottom line is that for any format I can think of, the server would spend some amount of CPU resources -- possibly a significant amount -- crunching the data and preparing/reassembling the file for download. The ultimate solution would be very format-specific and, as a side note, really has nothing to do with AWS specifically, other than the fact that you can host web servers on EC2.
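For the simplest case mentioned above (a fixed field at a known byte offset), here is a hedged sketch of the per-request patching, which keeps only one physical copy of the file on disk and splices in the per-user bytes while streaming (the offset and watermark are hypothetical):

def stream_with_watermark(shared_path, user_mark, offset, chunk_size=64 * 1024):
    # Yields the shared file unchanged except for a small per-user watermark
    # (user_mark, a bytes object) spliced in at the given byte offset.
    pos = 0
    with open(shared_path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            start, end = pos, pos + len(chunk)
            # If the watermark field overlaps this chunk, overwrite that slice.
            if start < offset + len(user_mark) and end > offset:
                buf = bytearray(chunk)
                for i, byte in enumerate(user_mark):
                    p = offset + i
                    if start <= p < end:
                        buf[p - start] = byte
                chunk = bytes(buf)
            yield chunk
            pos = end

A web framework could pass this generator straight to a streaming response, so the per-user variant never needs to be written to disk.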