We have been using this extension (https://marketplace.visualstudio.com/items?itemName=GlobalFreightSolutionsLtd.copy-files-to-google-buckets) to copy files from Azure to a Google Cloud bucket. After almost a year of use everything worked perfectly, until we got this error:
UnhandledPromiseRejectionWarning: ResumableUploadError: A resumable upload could not be performed. The directory, C:\Users\VssAdministrator.config, is not writable. You may try another upload, this time setting options.resumable to false.
Has anyone had a similar problem and can help solve it? Or should we contact the product owner to resolve the issue? Any other options/suggestions for uploading files are also welcome. Thanks.
Judging by the error message, it's coming from the Node.js client library used to connect to GCP buckets; the extension is apparently built on this client library. The suggestion mentioned on GitHub would need to be implemented by the developer of the extension. Concretely, it looks like it may be using the createWriteStream method, which according to the docs:
Resumable uploads require write access to the $HOME directory. Through config-store, some metadata is stored. By default, if the directory is not writable, we will fall back to a simple upload. However, if you explicitly request a resumable upload, and we cannot write to the config directory, we will return a ResumableUploadError.
I would suggest contacting the extension publisher, or trying to disable resumable uploads within the extension options (if any).
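The extension itself may not expose anything to configure, but for reference, this is roughly what disabling resumable uploads looks like with the Node.js @google-cloud/storage client the extension appears to be built on. A hedged sketch only: the bucket and file names are placeholders, and the extension's actual code may differ.

```typescript
import { Storage } from "@google-cloud/storage";
import { createReadStream } from "fs";

const storage = new Storage();
const bucket = storage.bucket("my-destination-bucket"); // placeholder bucket name

function uploadSimple(localPath: string, destName: string): Promise<void> {
  return new Promise((resolve, reject) => {
    createReadStream(localPath)
      .pipe(
        bucket.file(destName).createWriteStream({
          // Skip the resumable flow, so no writable $HOME/.config is needed.
          resumable: false,
        })
      )
      .on("error", reject)
      .on("finish", () => resolve());
  });
}

// Alternatively: bucket.upload(localPath, { destination: destName, resumable: false })
```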
I was able to upload the SCORM package zip and unzip it in an S3 bucket using Drupal 8.
While trying to read the SCORM files in the extracted data folder, we got an error message like:
"ERROR – unable to acquire LMS API, content may not play properly and results may not be recorded. Please contact technical support"
I checked the access settings and everything is public.
Can anyone tell me what I missed?
That content sounds like it's set up to look for API or API_1484_11 (the SCORM runtime APIs for 1.2 and 2004) and pop up an alert when neither is found.
With a runtime API present, that alert would go away. Your next question - "How do I expose a runtime API?" - the answer is normally that you hand-roll one, or look for an existing runtime API, paid or otherwise.
Something like https://github.com/cybercussion/SCOBot/blob/master/QUnit-Tests/js/scorm/SCOBot_API_1484_11.js might get you started if you're looking for something free.
If you plan to build an LMS you may want to look into paid options.
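If you do decide to hand-roll one just to stop that alert, the bare minimum is an object named API_1484_11 (or API for SCORM 1.2) that the content can find on a parent window. A minimal, non-persisting sketch in TypeScript - it keeps the content from complaining but records nothing, so treat it as a starting point rather than a real runtime:

```typescript
// Minimal SCORM 2004 runtime stub: just enough for content that checks
// whether window.API_1484_11 exists. Nothing is stored or reported.
const scormStub = {
  Initialize: (_: string): string => "true",
  Terminate: (_: string): string => "true",
  GetValue: (_element: string): string => "",
  SetValue: (_element: string, _value: string): string => "true",
  Commit: (_: string): string => "true",
  GetLastError: (): string => "0",
  GetErrorString: (_code: string): string => "",
  GetDiagnostic: (_code: string): string => "",
};

// SCORM content walks up window.parent / window.opener looking for
// API_1484_11 (SCORM 2004) or API (SCORM 1.2), so expose the stub where
// the launched SCO can reach it.
(window as any).API_1484_11 = scormStub;
```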
I started using AWS S3 to provide a fast way for my users to download the installation files of my Win32 apps. Each install file is about 60MB and downloads are very fast.
However, when I upload a new version of the app, S3 keeps serving the old file instead! I just rename the old file and upload the new version with the same name as the old one. After I upload, when I try to download, the old version is downloaded instead.
I searched for some solutions and here is what I tried:
Edited all TTL values on CloudFront to 0
Edited the metadata 'Cache-Control' with the value 'max-age=0' for each file in the bucket
None of these fixed the issue; AWS keeps serving the old file instead of the new one!
I will often upload new versions, so I need downloads to never come from a cached copy at all.
Please help.
I think this behavior might be because S3 uses an eventually consistent model, meaning that updates and deletes will propagate eventually, but it is not guaranteed that this happens immediately, or even within a specific amount of time (see here for the specifics of their consistency approach). Specifically, they say "Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all Regions", and I think the case you're describing would be an overwrite PUT. There appears to be a good answer on a similar issue here: How long does it take for AWS S3 to save and load an item?, which touches on the consistency issue and how to get around it. Hopefully that's helpful.
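One way to tell whether you're seeing S3's consistency behaviour or a cache in front of it is to ask S3 directly for the object's metadata right after the upload, bypassing CloudFront. A rough sketch with the AWS SDK for JavaScript (bucket, key and region are placeholders):

```typescript
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

async function checkLatest(bucket: string, key: string): Promise<void> {
  // HEAD the object straight from S3 and compare ETag/LastModified with the
  // file you just uploaded. If S3 already reports the new version but the
  // public download URL still serves the old one, the stale copy is coming
  // from CloudFront (or another cache), not from S3's consistency model.
  const head = await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: key }));
  console.log("ETag:", head.ETag, "LastModified:", head.LastModified);
}

checkLatest("my-bucket", "installers/MyApp-Setup.exe").catch(console.error);
```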
I'm building a launcher (in C#) that downloads a full game or app. The app can be very large (e.g. 5GB) and I need to download it with the correct folder hierarchy, so the same launcher can check whether the user has the correct app or whether it needs to be repaired or updated.
I'm trying to do that with Amazon S3 and CloudFront, but it seems that I can only get individual objects and not the full folder of the app.
I have also stored the folder on an EC2 instance, and that works fine, but it seems that EC2 is not designed for that, so downloads are extremely slow.
Is there any Amazon service to do that?
Have you considered zipping the files first? It solves a lot of issues (e.g. folder structure and compression) and works great from S3 and CloudFront. It's a common solution for this use case.
You can do this in your application with the DownloadDirectory method in the TransferUtility class in the .NET SDK.
You can read more about the DownloadDirectory method here. By default I believe it only downloads objects in the root path, so don’t forget to do it recursively for sub-folders if necessary.
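If you're not on .NET, the same idea (list everything under the prefix, then fetch each key while recreating the local folder structure) can be sketched with the AWS SDK for JavaScript. This is only an illustration of the shape of DownloadDirectory, with placeholder bucket and prefix names, not the .NET method itself:

```typescript
import { S3Client, ListObjectsV2Command, GetObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream } from "fs";
import { mkdir } from "fs/promises";
import { pipeline } from "stream/promises";
import { dirname, join } from "path";
import type { Readable } from "stream";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Download every object under a prefix, recreating the folder hierarchy
// locally. ListObjectsV2 returns keys from all "sub-folders" when no
// delimiter is set, so recursion is handled by paging through results.
async function downloadPrefix(bucket: string, prefix: string, dest: string): Promise<void> {
  let token: string | undefined;
  do {
    const page = await s3.send(new ListObjectsV2Command({
      Bucket: bucket,
      Prefix: prefix,
      ContinuationToken: token,
    }));
    for (const obj of page.Contents ?? []) {
      if (!obj.Key || obj.Key.endsWith("/")) continue; // skip folder marker objects
      const local = join(dest, obj.Key.slice(prefix.length));
      await mkdir(dirname(local), { recursive: true });
      const res = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: obj.Key }));
      await pipeline(res.Body as Readable, createWriteStream(local));
    }
    token = page.NextContinuationToken;
  } while (token);
}

downloadPrefix("my-game-bucket", "releases/1.2.3/", "./download").catch(console.error);
```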
I have over 30 Leaflet maps hosted in my Google Cloud Platform bucket (for example) and it has always been an easy process to upload my folder (which includes an HTML file with sub-folders of .js and .css files) and share the map publicly.
I tried uploading another map today, but no files are showing within the folder and I get the following message: "There are no live objects in this folder. If you have object versioning enabled, this folder may contain archived versions of objects, which aren't visible in the console. You can list archived object versions using gsutil or the APIs."
Does anyone know what is going on here?
We have also seen this problem, and it seems that the issue is limited to buckets that have spaces in the name.
It's also not reproducible through the gcloud web console, but if you use gsutil to upload a file to a bucket with a space in the name then it won't be visible on the web UI.
I can see from your screenshot that your bucket also has spaces (%20 in the url).
If you need a workaround asap, you could rename your bucket...
But google should fix this soon, I hope.
There is currently an open issue on GCS/Console integration.
If file names contain any symbols that need URL encoding, they are not visible in the console, but they are accessible via gsutil or the API (which is currently the recommended workaround).
The issue has been resolved as of 8 May 2018, 10:00 UTC.
This can happen if the file doesn't have an extension: the UI treats it as a folder and lets you navigate into it, showing a blank folder instead of the file contents.
We had the same symptom (files show up in the API but are invisible on the web and via the CLI).
The issue turned out to be that we were saving files to "./uploads", which Google interprets as "create a directory literally called '.' and then a subdirectory called uploads."
The fix was to upload to "uploads/" instead of "./uploads". We also just ran a mass copy operation via the API for everything under "./uploads". All visible now!
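For anyone needing to do the same clean-up, the mass copy can be a few lines with the Node.js @google-cloud/storage client. A sketch only: the bucket name is a placeholder, and move() is a copy-then-delete, so test on non-critical data first.

```typescript
import { Storage } from "@google-cloud/storage";

const storage = new Storage();
const bucket = storage.bucket("my-bucket"); // placeholder bucket name

async function fixPrefix(): Promise<void> {
  // List everything that was written under the literal "./uploads" prefix.
  const [files] = await bucket.getFiles({ prefix: "./uploads/" });
  for (const file of files) {
    const oldName = file.name;                    // e.g. "./uploads/map.html"
    const newName = oldName.replace(/^\.\//, ""); // -> "uploads/map.html"
    // move() copies the object to the new name and deletes the original.
    await file.move(newName);
    console.log(`Moved ${oldName} -> ${newName}`);
  }
}

fixPrefix().catch(console.error);
```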
I also had spaces in my url and it was not working properly yesterday. Checked this morning and everything is working as expected. I still have the spaces in my URL btw.
I am trying to stream a video with HLSv4. I am using AWS Elastic Transcoder and S3 to convert the original file (eg. *.avi or *.mp4) to HLSv4.
Transcoding is successful, with several *.ts and *.aac (with accompanying *.m3u8 playlist files for each media file) and a master *.m3u8 playlist file linking to the media-file specific playlist files. I feel fairly comfortable everything is in order here.
Now the trouble: This is a membership site and I would like to avoid making every video file public. The way to do this typically with S3 is to generate temporary keys server-side which you can append to the URL. Trouble is, that changes the URLs to the media files and their playlists, so the existing *.m3u8 playlists (which provide references to the other playlists and media) do not contain these keys.
One option which occurred to me would be to generate these playlists on the fly as they are just text files. The obvious trouble is overhead, it seems hacky, and these posts were discouraging: https://forums.aws.amazon.com/message.jspa?messageID=529189, https://forums.aws.amazon.com/message.jspa?messageID=508365
After spending some time on this, I feel like I'm going around in circles and there doesn't seem to be a super clear explanation anywhere for how to do this.
So as of September 2015, what is the best way to use AWS Elastic Transcoder and S3 to stream HLSv4 without making your content public? Any help is greatly appreciated!
EDIT: Reposting my comment below with formatting...
Thank you for your reply, it's very helpful
The plan that's forming in my head is to keep the converted ts and aac files on S3, but generate the 6-8 m3u8 files plus the master playlist and serve them directly from the app server. So the user hits the "Play" page and jwplayer gets the master playlist from the app server (e.g. "/play/12/"). Server-side, this loads the m3u8 files from S3 into memory and search-and-replaces the media-specific m3u8 links so they point to S3 with a freshly generated URL token.
So user-->jwplayer-->local master m3u8 (verify auth server side)-->local media m3u8s (verify auth server side)-->s3 media files (accessed with signed URLs and temporary tokens)
Do you see any issues with this approach? Such as "you can't reference external media from a playlist" or something similarly catch-22-ish?
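Roughly, the rewrite step I have in mind looks like this (a sketch in TypeScript with the current AWS SDK's S3 presigner; the bucket, key, expiry and paths are placeholders):

```typescript
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Fetch a media playlist from S3 and replace every segment reference with a
// pre-signed URL, leaving tag/comment lines (#EXT...) untouched.
async function signedPlaylist(bucket: string, playlistKey: string): Promise<string> {
  const res = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: playlistKey }));
  const text = await res.Body!.transformToString();
  const dir = playlistKey.slice(0, playlistKey.lastIndexOf("/") + 1);

  const lines = await Promise.all(text.split("\n").map(async (line) => {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) return line; // keep tags and comments as-is
    // Segment (or nested playlist) reference: sign it with a short expiry.
    return getSignedUrl(
      s3,
      new GetObjectCommand({ Bucket: bucket, Key: dir + trimmed }),
      { expiresIn: 300 } // seconds
    );
  }));
  return lines.join("\n");
}
```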
Dynamically generated playlists are one way to go. I actually implemented something like this as an Nginx module and it works very fast, though it's written in C and compiled, not PHP.
The person in your first link is more likely to have issues because of their 1s chunk duration. This adds a lot of requests and overhead; the value recommended by Apple is 10s.
There are solutions like HLS encrypted with AES-128 (supported by Elastic Transcoder), which also adds overhead if you do it on the fly, and HLS with DRM like PHLS/Primetime, which will most likely get you into a lot of trouble on the client side.
There seems to be a way to do it with Amazon CloudFront. Please note that I haven't tried it personally and you need to check if it works on Android/iOS.
The idea is to use Signed Cookies instead of Signed URLs. They were apparently introduced in March 2015. The linked blog entry even uses HLS as an example.
Instead of dynamic URLs, you send a Set-Cookie header after you authenticate the user. The cookie (hopefully) gets passed along with every request (playlist and segments) and CloudFront decides whether or not to allow access to your S3 bucket.
You can find the documentation here:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
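For completeness, with the current AWS SDK for JavaScript the cookie signing itself is small. A hedged sketch using the @aws-sdk/cloudfront-signer helper (the distribution domain, key pair ID, private key and cookie domain are placeholders; a custom policy is used so one set of cookies covers the playlists and all of their segments):

```typescript
import { getSignedCookies } from "@aws-sdk/cloudfront-signer";

// Custom policy: one wildcard resource covers the master playlist, the media
// playlists and every segment under the same path.
const policy = JSON.stringify({
  Statement: [
    {
      Resource: "https://d111111abcdef8.cloudfront.net/videos/12/*", // placeholder distribution/path
      Condition: {
        DateLessThan: { "AWS:EpochTime": Math.floor(Date.now() / 1000) + 3600 }, // valid for 1 hour
      },
    },
  ],
});

const cookies = getSignedCookies({
  policy,
  keyPairId: "KXXXXXXXXXXXXX",                  // placeholder CloudFront key pair ID
  privateKey: process.env.CF_PRIVATE_KEY ?? "", // PEM private key for that key pair
});

// After authenticating the user, emit one Set-Cookie header per entry; the
// player's subsequent playlist and segment requests then carry the cookies
// and CloudFront decides whether to allow access.
const setCookieHeaders = Object.entries(cookies).map(
  ([name, value]) => `${name}=${value}; Domain=.example.com; Path=/; Secure; HttpOnly`
);
console.log(setCookieHeaders);
```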