It seems like the way to go is the Upload class in the package @aws-sdk/lib-storage. It is the successor to what used to be called ManagedUpload and supports multipart uploads. However, when trying to import @aws-sdk/lib-storage, I get the error below, so apparently the package only works on Node, not in the browser.
For the browser, what is the best alternative for someone who wants to implement multipart uploads to S3?
Related
We're working on a Flutter app where we want to upload single large files (100-150 MB) directly to S3. The upload is too slow for our use case even though we're sending contentType: 'multipart/form-data'. We've looked through the AWS docs and found that we should be using multipart upload.
Now the question is: doesn't contentType: 'multipart/form-data' ensure that the file will be uploaded in chunks? If it doesn't, how can we break the file into parts on the frontend using Flutter and upload all of them to S3? (We're using Dio as our HTTP client.)
Please help us to solve this problem.
Try the AWS Amplify SDK:
https://docs.amplify.aws/start/q/integration/flutter/
You should be able to use the standard Amplify Storage library:
https://docs.amplify.aws/lib/storage/getting-started/q/platform/flutter/
Multipart uploads should be handled by the Storage library.
You might simply use one of these packages to upload files to an S3 bucket:
Simple file uploading -> amazon_s3_cognito: https://pub.dev/packages/amazon_s3_cognito
Some extra features -> minio: https://pub.dev/packages/minio
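To clear up the contentType confusion: sending contentType: 'multipart/form-data' does not make S3 do a multipart upload; multipart upload is a separate S3 API sequence (create the upload, upload the parts, complete it), which higher-level SDKs such as the Amplify Storage library mentioned above typically drive for you. As a rough illustration of what that sequence looks like, here is a hedged sketch in Python with boto3; the bucket name, key, file path, and part size are placeholders.

# Hedged sketch of the S3 multipart upload API sequence using boto3.
# Bucket, key, file path, and part size below are illustrative placeholders.
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "videos/large-file.mp4"
part_size = 8 * 1024 * 1024  # 8 MB per part (S3 requires >= 5 MB except for the last part)

upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
with open("large-file.mp4", "rb") as f:
    part_number = 1
    while True:
        chunk = f.read(part_size)
        if not chunk:
            break
        resp = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=upload["UploadId"],
            PartNumber=part_number, Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload["UploadId"],
    MultipartUpload={"Parts": parts},
)

The parts can also be uploaded in parallel, which is where the speedup for large files comes from.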
Is it possible to copy a file from SharePoint to S3? Preferably coding it from the AWS side.
I've searched but I'm not seeing much out there. There's a question with a similar title, but it doesn't answer this one:
upload files from sharepoint online to aws s3 bucket
It is certainly possible. SharePoint Online has a REST API; I use a Python package called office365, which implements a SharePoint client that handles most of the day-to-day operations you will need.
The repo is: https://github.com/O365/python-o365
Some tips about things I struggled with the first time:
The ClientContext object requires the base site URL of the SharePoint site you want to authenticate against. For example, for a document library at:
https://mysharepoint.mydomain.com/sites/mysite/shareddocuments/
The URL you must pass to the ClientContext will be: https://mysharepoint.mydomain.com/sites/mysite
The UserCredential method requires your user in the following format: user@mydomain
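Putting that together, here is a rough sketch of copying a single file from SharePoint Online to S3, assuming the ClientContext/UserCredential style client described above plus boto3 on the AWS side; the site URL, credentials, file path, and bucket name are all placeholders.

# Hedged sketch: download a file from SharePoint Online and upload it to S3.
# All URLs, paths, credentials, and bucket names below are placeholders.
import io

import boto3
from office365.runtime.auth.user_credential import UserCredential
from office365.sharepoint.client_context import ClientContext

site_url = "https://mysharepoint.mydomain.com/sites/mysite"
ctx = ClientContext(site_url).with_credentials(
    UserCredential("user@mydomain", "password")
)

# Server-relative path of the file inside the document library (placeholder).
file_url = "/sites/mysite/Shared Documents/report.pdf"

buffer = io.BytesIO()
ctx.web.get_file_by_server_relative_url(file_url).download(buffer).execute_query()
buffer.seek(0)

s3 = boto3.client("s3")
s3.upload_fileobj(buffer, "my-bucket", "sharepoint/report.pdf")

If this needs to run "from the AWS side", the same script can be packaged as a Lambda function or a scheduled job, provided it has network access to SharePoint and credentials for both services.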
I have a question regarding the possibility of downloading an artifact from Artifactory through Django.
Is it possible to use a GET request with requests, like:
import requests
r = requests.get("http://localhost:8081/artifactory/libs-release-local/ch/qos/logback/logback-classic/0.9.9/logback-classic-0.9.9.jar?skipUpdateStats=true")
Or is there another way to download the artifact in Python?
If curl can download the artifact from that URL, then the requests library has the functionality needed to reproduce that curl request. I would recommend referring to this StackOverflow post for more info.
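For completeness, a minimal sketch of downloading that artifact with requests and streaming it to disk; the URL is the one from the question, and the credentials are placeholders for whatever authentication your Artifactory instance requires (anonymous access needs no auth argument at all).

# Hedged sketch: download an Artifactory artifact with requests.
# The credentials below are placeholders; adjust auth to your Artifactory setup.
import requests

url = (
    "http://localhost:8081/artifactory/libs-release-local/"
    "ch/qos/logback/logback-classic/0.9.9/logback-classic-0.9.9.jar"
)

with requests.get(url, auth=("user", "password_or_api_key"), stream=True) as r:
    r.raise_for_status()
    with open("logback-classic-0.9.9.jar", "wb") as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)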
I have a Django server running in an elastic beanstalk environment. I would like to have it render HTML templates pulled from a separate AWS S3 Bucket.
I am using the Django-storages library, which lets me use static and media files from the bucket, but I can't figure out how to get it to render templates.
The reasoning for doing it like this is that once my site is running, I would like to be able to add these HTML templates without having to redeploy the entire site.
Thank you
To the best of my knowledge, django-storages is responsible for managing static assets and media files; it doesn't mount the S3 bucket to the file system. What you might be looking for is something like S3Fuse, which will mount the bucket on the file system and let you update the templates and have them sync. This might not be the best solution, though: even if you got the sync to work, Django might not pick up those changes and could keep serving the templates from memory.
I believe what you're really looking for is a continuous delivery pipeline; that way you won't have to worry about hosting.
Good Question though.
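If the goal is specifically to render templates straight from the bucket without redeploying, another option (not something django-storages does for you) is a custom template loader. Below is a hedged sketch using boto3; the class name, module path, and bucket name are all placeholders, and the caching caveat above still applies if the cached loader is enabled.

# Hedged sketch: a custom Django template loader that reads templates from S3.
# Class name, bucket name, and settings wiring are illustrative placeholders.
import boto3
from botocore.exceptions import ClientError
from django.template import Origin, TemplateDoesNotExist
from django.template.loaders.base import Loader as BaseLoader


class S3TemplateLoader(BaseLoader):
    def __init__(self, engine, bucket_name):
        super().__init__(engine)
        self.bucket_name = bucket_name
        self.client = boto3.client("s3")

    def get_template_sources(self, template_name):
        # Treat the template name as the S3 object key.
        yield Origin(name=template_name, template_name=template_name, loader=self)

    def get_contents(self, origin):
        try:
            obj = self.client.get_object(Bucket=self.bucket_name, Key=origin.name)
        except ClientError:
            raise TemplateDoesNotExist(origin)
        return obj["Body"].read().decode("utf-8")


# settings.py (sketch): register the loader and pass the bucket name as an argument.
# TEMPLATES[0]["OPTIONS"]["loaders"] = [
#     ("myapp.template_loaders.S3TemplateLoader", "my-template-bucket"),
#     "django.template.loaders.app_directories.Loader",
# ]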
I have a requirement to upload and download image files to Google Drive via a web application built using Django. I have explored using the Django Google Drive Storage API; it seems to work when saving files, but I have no clue where the files are getting saved or how to read them back. If anyone has experience using the Django Google Drive API, or a recommendation for storing files in Google Drive, that would be highly helpful at this moment.
Thank you.
If you are using models to upload the image files, the model looks something like this:
class Imageupload(models.Model):
    # gd_storage is a Google Drive storage backend instance (see the sketch after this answer)
    imagefile = models.ImageField(upload_to='img', blank=True, storage=gd_storage)
The views.py file looks like:
lst = Imageupload.objects.all()
url_for_first_imagefile = lst[0].imagefile.url  # used to display or download the image in a template
Then you can directly access the files using the imagefile.url attribute.
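For completeness, a rough sketch of where gd_storage would come from, assuming it is the GoogleDriveStorage backend from the django-googledrive-storage package (the key-file path is a placeholder):

# Hedged sketch, assuming the django-googledrive-storage package.
# settings.py: point the backend at a Google service-account JSON key (placeholder path).
#   GOOGLE_DRIVE_STORAGE_JSON_KEY_FILE = os.path.join(BASE_DIR, "gd-service-account.json")

# models.py: create the storage instance used by the ImageField above.
from gdstorage.storage import GoogleDriveStorage

gd_storage = GoogleDriveStorage()

With this backend, uploaded files should end up in the Drive of the configured service account, under the folder given by upload_to ('img' in the example above).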