Load and cache images from AWS S3 in Flutter

I want to fetch and cache user profile pictures from S3 in my Flutter app.
First, when a user uploads a picture, my Flask backend generates a random file name, stores the file in an S3 bucket (using boto3) and the name in the database.
To retrieve the picture I use presigned URLs:
import boto3
from botocore.client import Config

s3client = boto3.client('s3', config=Config(signature_version='s3v4', region_name='eu-west-2'))
s3client.generate_presigned_url(
    'get_object',
    Params={'Bucket': BUCKET, 'Key': file_name_retrieved_from_db_for_user},
    ExpiresIn=120,
)
In Flutter I have a Future which calls the API and gets the image's generated presigned URL (i.e. https://xx.s3.amazonaws.com/FILENAME.jpg?signature).
Then, using a FutureBuilder, I do the following:
FutureBuilder(
  future: get_picture_url(user_id),
  builder: (context, snapshot) {
    if (snapshot.connectionState == ConnectionState.done) {
      if (snapshot.data == 0) {
        return Icon(Icons.account_circle, size: 110.0);
      }
      print(user_id);
      print('this is the data fetched');
      print(snapshot.data);
      return CachedNetworkImage(
        imageUrl: snapshot.data,
        imageBuilder: (context, imageProvider) => Container(
          width: 180.0,
          height: 180.0,
          decoration: BoxDecoration(
            shape: BoxShape.circle,
            image: DecorationImage(
              image: imageProvider, fit: BoxFit.cover),
          ),
        ),
        placeholder: (context, url) => ProfPicPlaceHolder(),
        errorWidget: (context, url, error) => Icon(Icons.error),
      );
    } else {
      return ProfPicPlaceHolder();
    }
  },
),
The problem is that each time the FutureBuilder calls the API, the returned URL is different (the signature appended to the filename changes), so CachedNetworkImage treats it as a new image and downloads and caches the same picture over and over.
How can I serve an image stored in S3 from Flask using boto3 and pass that URL to CachedNetworkImage in Flutter?
Is there any other way to cache an image from AWS S3 in Flutter?

I implemented a solution.
For those dealing with the same thing: use the cacheKey property of CachedNetworkImage().
I used the filename (stored in the user's table) as the cacheKey, so the cache entry stays the same even though the presigned URL changes on every request. I also cache the filename and file URL in a Hive box which is refreshed regularly.
My view reads its data from the Hive box.
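A rough sketch of that approach (illustrative only: profilePicture and rememberUrl are made-up helper names, and it assumes the cached_network_image and hive_flutter packages with Hive already initialised):

import 'package:cached_network_image/cached_network_image.dart';
import 'package:flutter/material.dart';
import 'package:hive_flutter/hive_flutter.dart';

// presignedUrl changes on every request because the signature differs,
// but fileName is the stable name stored in the user's table.
Widget profilePicture(String presignedUrl, String fileName) {
  return CachedNetworkImage(
    imageUrl: presignedUrl,
    cacheKey: fileName, // cache entry keyed by the stable file name
    placeholder: (context, url) => const CircularProgressIndicator(),
    errorWidget: (context, url, error) => const Icon(Icons.error),
  );
}

// Remember the latest presigned URL for a file name in a Hive box,
// so the view can read it without hitting the API on every build.
Future<void> rememberUrl(String fileName, String presignedUrl) async {
  final box = await Hive.openBox('picture_urls');
  await box.put(fileName, presignedUrl);
}

Because the cache key no longer contains the signature, the image is downloaded once and then served from the cache even though the signed URL keeps changing.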

Related

Url Image from S3 not displaying the image

I am trying to upload an image to S3 through GraphQL using the apollo-upload-client library, which gives the ability to send images through a GraphQL query.
The image is being stored in the S3 bucket, but when I try to read the Location URL it doesn't seem to work. When I load the URL in an <img src="img_url" /> the image doesn't render.
And when I try to open the link manually, it just automatically downloads a strange text file with a lot of weird symbols.
This is what the upload looks like:
export async function uploadImageResolver(
  _parent,
  { file }: MutationUploadImageArgs,
  context: Context,
): Promise<string> {
  // identify(context);
  const { createReadStream, filename, mimetype } = await file;

  const response = await s3
    .upload({
      ACL: 'public-read',
      Bucket: environment.bucketName,
      Body: createReadStream(),
      Key: uuid(),
      ContentType: mimetype,
    })
    .promise();

  return response.Location;
}
An example of the File object looks like this:
{
  filename: 'Screenshot 2021-06-15 at 13.18.10.png',
  mimetype: 'image/png',
  encoding: '7bit',
  createReadStream: [Function: createReadStream]
}
What am I doing wrong? It returns an actual S3 link, but the link itself doesn't display any image. I also tried uploading the same image to S3 manually and it works just fine. Thanks in advance for any advice!
After some deeper research, it seems that the problem is with the Serverless Framework, specifically with serverless-offline: it doesn't allow transport of binary data.
So I tried to convert the createReadStream to a base64 string, but that didn't work either.
This is the attempt:
export async function uploadImageResolver(
  _parent,
  { file }: MutationUploadImageArgs,
  context: Context,
): Promise<string> {
  const { createReadStream, filename, mimetype } = await file;

  const response = await s3
    .upload({
      ACL: 'public-read',
      Bucket: environment.bucketName,
      Body: (await stream2buffer(createReadStream())).toString('base64'),
      Key: `${uuid()}${extname(filename)}`,
      ContentEncoding: 'base64',
      ContentType: mimetype, // image/jpg, image/png, ...
    })
    .promise();

  return response.Location;
}
async function stream2buffer(stream: Stream): Promise<Buffer> {
  return new Promise<Buffer>((resolve, reject) => {
    let _buf = Array<any>();
    stream.on('data', (chunk) => _buf.push(chunk));
    stream.on('end', () => resolve(Buffer.concat(_buf)));
    stream.on('error', (err) => reject(`error converting stream - ${err}`));
  });
}
I also tried to install the serverless-apigw-binary plugin, but that didn't work either.
plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-apigw-binary
It is still uploading the same corrupted image to S3.
These are some posts with the same problem, but none of them got a solution:
https://stackoverflow.com/questions/61050997/file-uploaded-successfully-to-s3-using-serverless-api-but-it-doesnt-opencorrup
Uploading image to s3 from AWS Lambda with NodeJS results in corrupted file
UPDATE: It is definitely not a problem with my s3.upload function or with S3 itself. It seems that the issue is getting the image to the server; I am pretty sure it has something to do with serverless.
I've created a small function that just receives the image and writes it to a local folder, and the saved image is also corrupted:
export async function uploadImageResolver(
  _parent,
  { file }: MutationUploadImageArgs,
  context: Context,
): Promise<string> {
  // identify(context);
  const { createReadStream, filename } = await file;

  createReadStream().pipe(
    createWriteStream(__dirname + `/../../../images/${filename}`),
  );

  return '';
}
UPDATE 2: I figured out that it works when deployed, so it has to be something with serverless-offline.

S3 - Video, uploaded with getSignedUrl link, does not play and is downloaded in wrong format

I am using the AWS SDK server-side with Node.js and having an issue with uploading files as FormData from the client side.
On the server side I have a simple route which creates an upload link; the video will later be uploaded directly from the client side.
I am using the S3 getSignedUrl method with putObject to generate that link, which creates a PUT request for the client, but it causes a very strange issue with FormData.
A video uploaded as FormData does not behave correctly: instead of playing, the S3 URL downloads the video, and the file is also broken.
Here is how I configure that method on the server side:
this.s3.getSignedUrl(
  'putObject',
  {
    Bucket: '<BUCKET_NAME>',
    ContentType: `${contentType}`, // as a rule, video/mp4
    Key: key,
  },
  (err, url) => {
    if (err) {
      reject(err)
    } else {
      resolve(url)
    }
  },
)
An axios PUT request with the raw blob actually works, but not with FormData.
axios.put(url, file, {
  headers: {
    'Content-Type': file.type,
  },
  onUploadProgress: ({ total, loaded }) => {
    setProgress((loaded / total) * 100)
  },
})
This is the working version, but when I try to add the file to FormData instead, it is uploaded to S3 yet the video downloads instead of playing.
I don't have much experience with AWS, so if somebody knows how to handle that issue, I will be thankful.

Protect Strapi uploads folder via S3 SignedUrl

Uploading files from Strapi to S3 works fine.
I am trying to secure the files by using signed URLs:
var params = { Bucket: process.env.AWS_BUCKET, Key: `${path}${file.hash}${file.ext}`, Expires: 3000 };
var secretUrl = ''

S3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url);
  secretUrl = url
});

S3.upload(
  {
    Key: `${path}${file.hash}${file.ext}`,
    Body: Buffer.from(file.buffer, 'binary'),
    //ACL: 'public-read',
    ContentType: file.mime,
    ...customParams,
  },
  (err, data) => {
    if (err) {
      return reject(err);
    }
    // set the bucket file url
    //file.url = data.Location;
    file.url = secretUrl;
    console.log('File URL: ' + file.url);
    resolve();
  }
);
file.url (secretUrl) contains the correct URL, which I can use in the browser to retrieve the file.
But when reading the file from the Strapi admin panel, no file or thumbnail is shown.
I figured out that Strapi appends a parameter to the file URL, e.g. ?2304.4005, which breaks the GET request to AWS. Where and how do I change that behaviour?
Help is appreciated.
Here is my solution to create a signed URL to secure your assets. The URL will be valid for a certain amount of time.
1. Create a collection type with a media field which you want to secure. In my example the collection type is called invoice and the media field is called document.
2. Create an S3 bucket.
3. Install and configure strapi-provider-upload-aws-s3 and the AWS SDK for JavaScript.
4. Customize the Strapi controller for your invoice endpoint (in this example I use the core controller findOne):
const { sanitizeEntity } = require('strapi-utils');
var S3 = require('aws-sdk/clients/s3');

module.exports = {
  async findOne(ctx) {
    const { id } = ctx.params;
    const entity = await strapi.services.invoice.findOne({ id });

    // key is hashed name + file extension of your entity
    const key = entity.document.hash + entity.document.ext;

    // create signed url
    const s3 = new S3({
      endpoint: 's3.eu-central-1.amazonaws.com', // s3.region.amazonaws.com
      accessKeyId: '...', // your accessKeyId
      secretAccessKey: '...', // your secretAccessKey
      Bucket: '...', // your bucket name
      signatureVersion: 'v4',
      region: 'eu-central-1' // your region
    });

    var params = {
      Bucket: '', // your bucket name
      Key: key,
      Expires: 20 // expires in 20 seconds
    };

    var url = s3.getSignedUrl('getObject', params);
    entity.document.url = url // overwrite the url with signed url

    return sanitizeEntity(entity, { model: strapi.models.invoice });
  },
};
It seems that, even after overriding the controllers and lifecycles of the collection models and strapi-plugin-content-manager to take the S3 signed URLs into account, one of the Strapi UI components appends a strange ?123.123 query string to the URL received from the backend. This results in the AWS error "There were headers present in the request which were not signed" when trying to view images from the CMS UI.
After digging into the code and node_modules used by Strapi, you will find the following within strapi-plugin-upload/admin/src/components/CardPreview/index.js:
return (
  <Wrapper>
    {isVideo ? (
      <VideoPreview src={url} previewUrl={previewUrl} hasIcon={hasIcon} />
    ) : (
      // Adding performance.now forces the browser not to cache the img
      // https://stackoverflow.com/questions/126772/how-to-force-a-web-browser-not-to-cache-images
      <Image src={`${url}${withFileCaching ? `?${cacheRef.current}` : ''}`} />
    )}
  </Wrapper>
);
};

CardPreview.defaultProps = {
  extension: null,
  hasError: false,
  hasIcon: false,
  previewUrl: null,
  url: null,
  type: '',
  withFileCaching: true,
};
The default for withFileCaching is true, so the component appends the cacheRef = useRef(performance.now()) value as a query param to the URL to avoid browser caching.
Setting it to false, or leaving just <Image src={url} />, should solve the issue of the extra query param and allow S3 signed URL previews to work from the Strapi UI as well.
In practice this means following the docs at https://strapi.io/documentation/developer-docs/latest/development/plugin-customization.html to customize the strapi-plugin-upload module in your /extensions/strapi-plugin-upload/... folder.

Loading GPX files from AmazonS3 with Google-Maps-APIv3 only works locally, but not on a deployed site (Heroku)

I'm trying to use the Google Maps API (v3) on my site, which is deployed on Heroku.
The map is populated from a given GPX file, loaded via JS/Ajax.
The GPX file is stored on Amazon S3.
(I don't think it matters, but note that the site is built with Django, and the GPX file is a FileField of the relevant model.)
It works very well locally (local IP), but the map is not loaded on the deployed site.
I couldn't find any related error in the server logs, e.g. a wrong key, etc.
Following is the relevant code snippet:
<div id="map" style="width: 50%; height: 50%;"></div>
<script>
function loadGPXFileIntoGoogleMap(map, filename) {
$.ajax({url: filename,
dataType: "xml",
success: function(data) {
var parser = new GPXParser(data, map);
parser.setTrackColour("#ff0000"); // Set the track line colour
parser.setTrackWidth(5); // Set the track line width
parser.setMinTrackPointDelta(0.001); // Set the minimum distance between track points
parser.centerAndZoom(data);
parser.addTrackpointsToMap(); // Add the trackpoints
parser.addRoutepointsToMap(); // Add the routepoints
parser.addWaypointsToMap(); // Add the waypoints
}
});
}
function initMap() {
var mapOptions = {
zoom: 8,
mapTypeId: google.maps.MapTypeId.ROADMAP
};
var map = new google.maps.Map(document.getElementById("map"), mapOptions);
loadGPXFileIntoGoogleMap(map, "{{ object.gpx_file.url }}");
}
</script>
<script src="https://maps.googleapis.com/maps/api/js?key=...&callback=initMap"
async defer></script>
[Screenshots and the Chrome console log from the deployed site were attached but are not reproduced here.]
Can you please assist me with that?
Is it a Google Maps API issue? An Amazon S3 issue? Something else?
How can I make Google Maps work on the deployed site?
Thanks ahead,
Shahar

How to get the local file when uploading it to amazon aws s3?

This is the solution I found online: https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-example-photo-album.html
function addPhoto(albumName) {
  var files = document.getElementById('photoupload').files;
  if (!files.length) {
    return console.log('Please choose a file to upload first.');
  }
  var file = files[0];
  var fileName = file.name;
  var albumPhotosKey = encodeURIComponent(albumName) + '//';
  var photoKey = albumPhotosKey + fileName;
  s3.upload({
    Key: photoKey,
    Body: file,
    ACL: 'public-read'
  }, function(err, data) {
    if (err) {
      return console.log('There was an error uploading your photo: ', err.message);
    }
    console.log('Successfully uploaded photo.');
    viewAlbum(albumName);
  });
}
However, in my current environment there is no such concept as "document". I don't really know how "document" works. Can I include "document" in my current environment? Or can I use something else to get the local file (an image)? Thanks a lot!
s3.upload: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property
You should specify what your environment is. The document object makes sense only in HTML, in a web page running in the browser. If you're not running in a browser but standalone, you are probably using Node.js.
As the documentation says, the Body parameter can be a Buffer, Typed Array, Blob, String or ReadableStream.
So a simple upload of a local file in Node.js could look like this:
var fs = require('fs');
var stream = fs.createReadStream('/my/file');

s3.upload({
  Bucket: 'mybucket',
  Key: 'myfile',
  Body: stream
}, function(err, data) {
  if (err) return console.log('Error by uploading.', err.message);
  console.log('Successfully uploaded.');
});
The Document Object Model (DOM) is a cross-platform and language-independent application programming interface that treats an HTML, XHTML, or XML document as a tree structure wherein each node is an object representing a part of the document.
The DOM represents a document with a logical tree. Each branch of the tree ends in a node, and each node contains objects. DOM methods allow programmatic access to the tree; with them one can change the structure, style or content of a document. Nodes can have event handlers attached to them. Once an event is triggered, the event handlers get executed.
SOURCE: https://en.wikipedia.org/wiki/Document_Object_Model
This is just the root context used to access the DOM once the page has been loaded into a browser.
An example of listening for the DOM content loaded event:
document.addEventListener("DOMContentLoaded", function(event) {
  // Code to execute when all DOM content is loaded.
  alert("LOADED!");
});