Save image picked from ReactNative/Expo ImagePicker to Baqend

I'm having a hard time saving an image that is being picked from Expo (React Native).
https://docs.expo.io/versions/latest/sdk/imagepicker.html
It seems that React Native does not support uploading the selected image as a blob, but it does have a base64 option.
The code:
_pickImage = async () => {
  let pickerResult = await ImagePicker.launchImageLibraryAsync({
    allowsEditing: true,
    base64: true,
    aspect: [4, 4],
  });
  this._handleImagePicked(pickerResult);
};
_handleImagePicked(pickerResult) {
  const uri = pickerResult.base64;
  const img = new db.File({ name: 'test.jpg', data: uri, type: 'base64', mimeType: 'image/jpg' });
  db.UserData.load(this.state.UserDataID).then(UserData => {
    img.upload({ force: true }).then((file) => {
      UserData.photo = "https://remarkable-apple-95.app.baqend.com/v1" + file.id;
      alert(file.id);
      return UserData.update();
    },
    (error) => { alert(error); }
    );
  });
}
When I console.log(pickerResult.base64) I get a super long string that looks like base64, but when this runs, img.upload throws an error: "PersistentError: An unexpected persistent error occurred."

You're right. React Native has no support for binary data. Unfortunately, Baqend does not support base64 file uploads yet.
As a workaround you have two options:
Use the React Native Fetch Blob library, which works around React Native's lack of binary support by uploading and downloading files directly via native code and handing back a reference to them. Your code could look similar to this:
ImagePicker.showImagePicker(options, async (response) => {
  const upload = new db.message.UploadFile('files', 'uploadFetchBlob.jpg');
  const body = 'RNFetchBlob-' + response.uri;
  RNFetchBlob.fetch('PUT', 'https://{YOUR-APP-NAME}.app.baqend.com/v1' + upload.request.path, upload.request.headers, body).then((res) => {
    return new db.File({ parent: 'files', name: 'uploadFetchBlob.jpg' }).url;
  });
});
Unfortunately this won't work with the Expo client right now; you'd have to eject your project and use native code.
The second option would be not to use the Baqend file endpoint directly, but to upload your base64 string to a Baqend module instead. There you can parse the base64 string and upload it to your files from within your backend module. You can find an example for this in our guide: https://www.baqend.com/guide/topics/baqend-code/#handling-binary-data
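A minimal sketch of what such a backend module could look like, assuming the exports.call module convention from the Baqend code guide (the module name, target folder, and payload field names are placeholders):

// Backend module, e.g. saved as "uploadBase64Image" (the name is an assumption).
exports.call = function(db, data, req) {
  // data is the JSON body sent by the client; the field names are made up for this sketch.
  const file = new db.File({
    parent: 'files',                        // assumed target folder
    name: data.name || 'upload.jpg',
    data: data.base64,                      // the raw base64 string from ImagePicker
    type: 'base64',
    mimeType: data.mimeType || 'image/jpeg',
  });
  return file.upload({ force: true }).then(function(uploaded) {
    return { id: uploaded.id };             // hand the file id back to the app
  });
};

From the app you would then call the module with something like db.modules.post('uploadBase64Image', { base64: pickerResult.base64, name: 'test.jpg' }) instead of uploading the file directly (again, the names are placeholders).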
Hope this helps

Related

Uppy Companion doesn't work for > 5GB files with Multipart S3 uploads

Our app allows our clients to upload large files. Files are stored on AWS S3; we use Uppy for the upload and dockerize it to run under a Kubernetes deployment where we can scale up the number of instances.
It works well, but we noticed that all uploads larger than 5 GB fail. I know Uppy has a plugin for AWS multipart uploads, but even when it is installed during the container image creation, the result is the same.
Here's our Dockerfile. Has anyone ever succeeded in uploading files larger than 5 GB to S3 via Uppy? Is there anything we're missing?
FROM node:alpine AS companion
RUN yarn global add @uppy/companion@3.0.1
RUN yarn global add @uppy/aws-s3-multipart
ARG UPPY_COMPANION_DOMAIN=[...redacted..]
ARG UPPY_AWS_BUCKET=[...redacted..]
ENV COMPANION_SECRET=[...redacted..]
ENV COMPANION_PREAUTH_SECRET=[...redacted..]
ENV COMPANION_DOMAIN=${UPPY_COMPANION_DOMAIN}
ENV COMPANION_PROTOCOL="https"
ENV COMPANION_DATADIR="COMPANION_DATA"
# ENV COMPANION_HIDE_WELCOME="true"
# ENV COMPANION_HIDE_METRICS="true"
ENV COMPANION_CLIENT_ORIGINS=[...redacted..]
ENV COMPANION_AWS_KEY=[...redacted..]
ENV COMPANION_AWS_SECRET=[...redacted..]
ENV COMPANION_AWS_BUCKET=${UPPY_AWS_BUCKET}
ENV COMPANION_AWS_REGION="us-east-2"
ENV COMPANION_AWS_USE_ACCELERATE_ENDPOINT="true"
ENV COMPANION_AWS_EXPIRES="3600"
ENV COMPANION_AWS_ACL="public-read"
# We don't need to store data for just S3 uploads, but Uppy throws unless this dir exists.
RUN mkdir COMPANION_DATA
CMD ["companion"]
EXPOSE 3020
EDIT:
I made sure I had:
uppy.use(AwsS3Multipart, {
  limit: 5,
  companionUrl: '<our uppy url>',
})
And it still doesn't work. I see all the chunks of the 9 GB file being sent in the network tab, but as soon as it hits 100%, Uppy throws a "cannot post" error (to our S3 URL) and that's it: failure.
Has anyone ever encountered this? The upload goes fine until 100%, then the last chunk gets an HTTP 413 error, making the entire upload fail.
Thanks!
Here are some code samples from my repository that should help you understand the flow of using the Busboy package to stream data to an S3 bucket. I'm also adding reference links for the packages I'm using.
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/index.html
https://www.npmjs.com/package/busboy
import Busboy from 'busboy';
import { Request, Response } from 'express';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

// S3 client used by s3FileUpload below (construction details assumed).
const s3 = new S3Client({ region: process.env.AWS_REGION });
export const uploadStreamFile = async (req: Request, res: Response) => {
  const busboy = new Busboy({ headers: req.headers });
  const streamResponse = await busboyStream(busboy, req);
  const uploadResponse = await s3FileUpload(streamResponse.data.buffer);
  return res.send(uploadResponse);
};
// Assumed shape of the responses returned below.
interface ResponseDto {
  status: boolean;
  message: string;
  data?: any;
  error?: string;
}

const busboyStream = async (busboy: any, req: Request): Promise<any> => {
  return new Promise((resolve, reject) => {
    try {
      const fileData: any[] = [];
      let fileBuffer: Buffer;
      let metaData: any;
      busboy.on('file', (fieldName: any, file: any, fileName: any, encoding: any, mimetype: any) => {
        // ! File is missing in the request
        if (!fileName) return reject("File not found!");
        // Keep the file's metadata so it can be returned alongside the buffer.
        metaData = { fieldName, fileName, encoding, mimetype };
        let totalBytes: number = 0;
        file.on('data', (chunk: any) => {
          fileData.push(chunk);
          // ! the following is only for logging purposes
          // TODO will remove once project is live
          totalBytes += chunk.length;
          console.log('File [' + fieldName + '] got ' + chunk.length + ' bytes');
        });
        file.on('error', (err: any) => {
          reject(err);
        });
        file.on('end', () => {
          fileBuffer = Buffer.concat(fileData);
        });
      });
      // ? File parsing went well
      busboy.on('finish', () => {
        const responseData: ResponseDto = {
          status: true, message: "File parsing done", data: {
            buffer: fileBuffer,
            metaData
          }
        };
        resolve(responseData);
        console.log('Done parsing data! -> File uploaded');
      });
      req.pipe(busboy);
    } catch (error) {
      reject(error);
    }
  });
};
const s3FileUpload = async (fileData: any): Promise<ResponseDto> => {
  try {
    const params: any = {
      Bucket: <BUCKET_NAME>,
      Key: <path>,
      Body: fileData,
      ContentType: <content_type>,
      ServerSideEncryption: "AES256",
    };
    const command = new PutObjectCommand(params);
    const uploadResponse: any = await s3.send(command);
    return { status: true, message: "File uploaded successfully", data: uploadResponse };
  } catch (error: any) {
    const responseData = { status: false, message: "File upload failed, please contact tech support!", error: error.message };
    return responseData;
  }
};
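For completeness, a handler like this could be wired into an Express app roughly as follows (the route path and port are just placeholders):

import express from 'express';

const app = express();

// Busboy parses the multipart body inside uploadStreamFile itself,
// so no extra body-parsing middleware is needed on this route.
app.post('/upload', uploadStreamFile);

app.listen(3000, () => console.log('Upload endpoint listening on port 3000'));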
In S3, a single PUT operation can upload an object of at most 5 GB.
To upload files larger than 5 GB to S3 you need to use the S3 multipart upload API, and on the Uppy side the AwsS3Multipart plugin.
Check your upload code to confirm you are using AwsS3Multipart correctly and setting the limit properly; in this case a limit between 5 and 15 is recommended.
import AwsS3Multipart from '@uppy/aws-s3-multipart'

uppy.use(AwsS3Multipart, {
  limit: 5,
  companionUrl: 'https://uppy-companion.myapp.net/',
})
Also, check this issue on GitHub: Uploading a large >5GB file to S3 errors out #1945
If you're getting Error: request entity too large in your Companion server logs: I fixed this in my Companion Express server by increasing the body-parser limit:
app.use(bodyparser.json({ limit: '21GB', type: 'application/json' }))
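For context, here is a rough sketch of where that line sits in a self-hosted Companion Express server (the Companion options are omitted, and the integration details can differ between Companion versions, so treat this as an outline rather than a drop-in config):

const express = require('express');
const bodyparser = require('body-parser');
const companion = require('@uppy/companion');

const app = express();

// The request that completes a multipart upload carries the full list of parts,
// which for very large files can exceed body-parser's default JSON limit.
app.use(bodyparser.json({ limit: '21GB', type: 'application/json' }));

// Companion options (secret, S3 credentials, server host, etc.) go here;
// check the @uppy/companion docs for the exact keys your version expects.
const companionOptions = { /* ... */ };

app.use(companion.app(companionOptions));

app.listen(3020);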
This is a good working example of Uppy S3 MultiPart uploads (without this limit increased): https://github.com/jhanitesh10/uppy
I'm able to upload files up to a (self-imposed) limit of 20GB using this code.

iOS can't play uploaded audio: JS MediaRecorder -> Blob -> Django Server -> AWS S3 -> JS decodeAudioData -> "EncodingError: Decoding Failed"

Answer: don't set the content/MIME type browser-side with JS; use the browser's native mimeType, then convert server-side (I used PyDub).
Question:
I am using JavaScript MediaRecorder, Django, AWS S3 and the JavaScript Web Audio API to record audio files for users to share voice notes with one another. I've seen scattered answers online about how to record and upload audio data and the issues with Safari/iOS, but thought this could be a thread to bring it together and confront some of these issues.
Javascript:
mediaRecorder = new MediaRecorder(stream);
mediaRecorder.onstop = async function (e) {
  var blob = new Blob(
    chunks,
    {
      type: "audio/mp3",
    }
  );
  var formdata = new FormData();
  formdata.append('recording', blob);
  var resp = await fetch(url, { // Your POST endpoint
    method: 'POST',
    mode: 'same-origin',
    headers: {
      'Accept': 'application/json',
      'X-Requested-With': 'XMLHttpRequest',
      'X-CSRFToken': csrf_token,
    },
    body: formdata,
  });
};
Django:
for k, file in request.FILES.items():
    sub_path = "recordings/audio.mp3"
    meta_data = {"ContentType": "audio/mp3"}
    s3.upload_fileobj(file, S3_BUCKET_NAME, sub_path, ExtraArgs=meta_data)
    # then some code to save the s3 URL to my database for future retrieval
Javascript:
var audio_context = new AudioContext();
document.querySelector("#play-audio").addEventListener("click", function (e) {
  var url = "https://docplat-bucket.s3.eu-west-3.amazonaws.com/recordings/audio.mp3";
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  request.onload = function () {
    audio_context.decodeAudioData(request.response, function (buffer) {
      playSound(buffer);
    });
  };
  request.send();
});
Results in:
"EncodingError: Decoding Failed"
Note, however, that the W3Schools demo mp3, uploaded to the same bucket, does play:
https://docplat-bucket.s3.eu-west-3.amazonaws.com/recordings/t-rex-roar.mp3
Specs:
PC (used to upload recording): Windows 11, Chrome Version 98.0.4758.81 (Official Build) (64-bit)
Django: Version: 3.1.7
Mobile (used to play recording): iPhone X, iOS (Version 14.7.1)
Problematic url: https://docplat-bucket.s3.eu-west-3.amazonaws.com/recordings/audio.mp3
Working url: https://docplat-bucket.s3.eu-west-3.amazonaws.com/recordings/t-rex-roar.mp3
(This is my first post so please forgive me if I haven't asked this question in the ideal way :) )
When you upload the recorded Blob, you set the type to 'audio/mp3'. But unless you use a custom library that patches the MediaRecorder, the mimeType of the recording will be whatever the browser likes best.
As of now that's 'audio/opus' in Firefox and 'audio/webm' in Chrome.
If you define your Blob like this, it should work:
var blob = new Blob(
  chunks,
  {
    type: mediaRecorder.mimeType
  }
);
You would also have to change your server side code to not use 'audio/mp3' anymore.
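If you want a bit more control on the client, you can also pick a container the browser supports up front and send the actual mimeType along with the upload, so the server knows what it received before converting; a small sketch (the extra form field name is just an example):

// Prefer webm/opus where supported; otherwise let the browser pick its default.
var preferredType = MediaRecorder.isTypeSupported('audio/webm;codecs=opus')
  ? 'audio/webm;codecs=opus'
  : '';
var mediaRecorder = new MediaRecorder(stream, preferredType ? { mimeType: preferredType } : {});

mediaRecorder.onstop = async function () {
  var blob = new Blob(chunks, { type: mediaRecorder.mimeType });
  var formdata = new FormData();
  formdata.append('recording', blob);
  // Tell the server the real container so it can convert (e.g. with PyDub)
  // instead of assuming mp3.
  formdata.append('mimetype', mediaRecorder.mimeType);
  // ...POST formdata exactly as in the question's code...
};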

Url Image from S3 not displaying the image

I am trying to upload an image to S3 through GraphQL using the apollo-upload-client library, which just gives the ability to send images through a GraphQL query.
The image is stored in the S3 bucket, but when I try to use the Location URL it doesn't seem to work. When I load the URL in an <img src="img_url" /> it just shows a broken image.
And when I try to open the link manually, it just automatically downloads a strange text file with a lot of weird symbols.
This is what the upload looks like:
export async function uploadImageResolver(
  _parent,
  { file }: MutationUploadImageArgs,
  context: Context,
): Promise<string> {
  // identify(context);
  const { createReadStream, filename, mimetype } = await file;
  const response = await s3
    .upload({
      ACL: 'public-read',
      Bucket: environment.bucketName,
      Body: createReadStream(),
      Key: uuid(),
      ContentType: mimetype,
    })
    .promise();
  return response.Location;
}
An example of the File object looks like this:
{
  filename: 'Screenshot 2021-06-15 at 13.18.10.png',
  mimetype: 'image/png',
  encoding: '7bit',
  createReadStream: [Function: createReadStream]
}
What am I doing wrong? It returns an actual S3 link, but the link itself isn't displaying any image. I tried uploading the same image to S3 manually and it works just fine. Thanks in advance for any advice!
So after deeper research, it seems that the problem is with the Serverless Framework, specifically with serverless-offline: it doesn't allow the transport of binary data.
So I tried converting the createReadStream to a base64 string, but that didn't work either.
This is the attempt:
export async function uploadImageResolver(
  _parent,
  { file }: MutationUploadImageArgs,
  context: Context,
): Promise<string> {
  const { createReadStream, filename, mimetype } = await file;
  const response = await s3
    .upload({
      ACL: 'public-read',
      Bucket: environment.bucketName,
      Body: (await stream2buffer(createReadStream())).toString('base64'),
      Key: `${uuid()}${extname(filename)}`,
      ContentEncoding: 'base64',
      ContentType: mimetype, // image/jpg, image/png, ...
    })
    .promise();
  return response.Location;
}

async function stream2buffer(stream: Stream): Promise<Buffer> {
  return new Promise<Buffer>((resolve, reject) => {
    let _buf = Array<any>();
    stream.on('data', (chunk) => _buf.push(chunk));
    stream.on('end', () => resolve(Buffer.concat(_buf)));
    stream.on('error', (err) => reject(`error converting stream - ${err}`));
  });
}
I also tried installing the serverless-apigw-binary plugin, but that didn't work either.
plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-apigw-binary
It is uploading the same corrupted image to s3.
These are some posts with the same problem, but none of them got a solution:
https://stackoverflow.com/questions/61050997/file-uploaded-successfully-to-s3-using-serverless-api-but-it-doesnt-opencorrup
Uploading image to s3 from AWS Lambda with NodeJS results in corrupted file
UPDATE: So it is definitely not a problem with my s3.upload function or with S3 itself. It seems the issue is in getting the image to the server. I am pretty sure it has something to do with Serverless.
I've created a small function that just receives the image and writes it to a local folder, and the image still comes out corrupted:
export async function uploadImageResolver(
  _parent,
  { file }: MutationUploadImageArgs,
  context: Context,
): Promise<string> {
  // identify(context);
  const { createReadStream, filename } = await file;
  createReadStream().pipe(
    createWriteStream(__dirname + `/../../../images/${filename}`),
  );
  return '';
}
UPDATE 2: I figured out that it works when deploying. So it has to be something with serverless-offline.

S3 - Video, uploaded with getSignedUrl link, does not play and is downloaded in wrong format

I am using the AWS SDK on the server side with Node.js and having an issue with uploading files as FormData from the client side.
On the server side I have a simple route that creates an upload link, to which the video will later be uploaded directly from the client side.
I am using the S3 getSignedUrl method with putObject to generate that link, which creates a PUT request for the client, but this causes a very strange issue with FormData.
A video uploaded as FormData does not behave correctly: instead of playing, the uploaded S3 URL downloads the video, and the file is also broken.
Here is how I configure that method on the server side:
this.s3.getSignedUrl(
  'putObject',
  {
    Bucket: '<BUCKET_NAME>',
    ContentType: `${contentType}`, // video/mp4 as a rule
    Key: key,
  },
  (err, url) => {
    if (err) {
      reject(err)
    } else {
      resolve(url)
    }
  },
)
The axios PUT request with a blob actually works, but not with FormData.
axios.put(url, file, {
  headers: {
    'Content-Type': file.type,
  },
  onUploadProgress: ({ total, loaded }) => {
    setProgress((loaded / total) * 100)
  },
})
This is the working version, but when I try to add the file to FormData, it is uploaded to S3, yet the video downloads instead of playing.
I don't have much experience with AWS, so if somebody knows how to handle this issue, I would be thankful.

Cannot upload file from React Native: "Network request failed"?

When trying to upload a selected image from my React Native project I get a nondescript error message:
Network request failed
It seems to be a common issue, but most people are just forgetting their file types or are on Android and have an issue with Flipper. Nothing that has worked for other people with the same symptoms has worked for me.
Code:
const localUri = result.uri;
const filename = localUri.split("/").pop();
const type = mime.lookup(localUri) || "image";
const formData = new FormData();
formData.append("file", { uri: localUri, name: filename, type });
try {
  const file = await fetch(`${SERVER_URL}/api/upload`, {
    method: "POST",
    body: formData,
  }).then((res) => {
    console.log(res);
    return res.status === 200 ? res.text() : res.json();
  });
} catch (e) {
  console.log(e);
}
Considerations:
Using a physical iOS device (iPhone).
Using Expo 40.0.0 with corresponding RN SDK, not ejected.
Using expo-image-picker to get image.
Using NGROK to get requests through to my localhost server from my phone.
All other requests to my server from React Native work fine; it's only when I try to upload a file.
Image renders fine from supplied URI, so it's getting the right source.
Form Data source from above:
{ "name": "CAPS-FILE-NAME.jpg", "type": "image/jpeg", "uri": "file:///var/mobile/Containers/Data/Application/CAPS-PATHING/Library/Caches/ExponentExperienceData/project-src-pathing/ImagePicker/CAPS-FILE-NAME.jpg", }
Things tried:
Using Content-Type header: "multipart/form-data"
Using /private instead of file://
Using Postman to hit my server through NGROK, which works
Changing my Expo/RN to 38.0.0
Getting base64 -> blob -> formData, same result
Many other things I've forgotten now. If it's on Google results, I've tried it.
For anyone else who gets stuck on this: I switched to using XMLHttpRequest instead of fetch, and it miraculously works now. I'm not sure why fetch is broken in RN, but at least there's a solution.
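For reference, the XMLHttpRequest version of the upload looks roughly like this (same formData and SERVER_URL as in the question):

const xhr = new XMLHttpRequest();
xhr.open("POST", `${SERVER_URL}/api/upload`);

xhr.onload = () => {
  // A 2xx status means the upload went through; responseText holds the server's reply.
  console.log(xhr.status, xhr.responseText);
};
xhr.onerror = () => console.log("Upload failed");

// Don't set the Content-Type header manually; XHR adds the multipart boundary itself.
xhr.send(formData);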