NestJS: Image Upload & Serve API

I tried to create an API for uploading & retrieving images with NestJS. Images should be stored on S3.
What I currently have:
Controller
@Post()
@UseInterceptors(FileFieldsInterceptor([
  { name: 'photos', maxCount: 10 },
]))
async uploadPhoto(@UploadedFiles() files): Promise<void> {
  await this.s3Service.savePhotos(files.photos)
}

@Get('/:id')
@Header('content-type', 'image/jpeg')
async getPhoto(@Param() params, @Res() res) {
  const photoId = PhotoId.of(params.id)
  const photoObject = await this.s3Service.getPhoto(photoId)
  res.send(photoObject)
}
S3Service
async savePhotos(photos: FileUploadEntity[]): Promise<any> {
  return Promise.all(photos.map(photo => {
    const filePath = `${moment().format('YYYYMMDD-hhmmss')}${Math.floor(Math.random() * 1000)}.jpg`
    const params = {
      Body: photo.buffer,
      Bucket: Constants.BUCKET_NAME,
      Key: filePath,
    }
    return new Promise((resolve) => {
      this.client.putObject(params, (err: any, data: any) => {
        if (err) {
          logger.error(`Photo upload failed [err=${err}]`)
          ExceptionHelper.throw(ErrorCodes.SERVER_ERROR_UNCAUGHT_EXCEPTION)
        }
        logger.info(`Photo upload succeeded [filePath=${filePath}]`)
        return resolve()
      })
    })
  }))
}
async getPhoto(photoId: PhotoId): Promise<AWS.S3.Body> {
  const object: S3.GetObjectOutput = await this.getObject(S3FileKey.of(`${Constants.S3_PHOTO_PATH}/${photoId.value}`))
    .catch(() => ExceptionHelper.throw(ErrorCodes.RESOURCE_NOT_FOUND_PHOTO)) as S3.GetObjectOutput
  logger.info(JSON.stringify(object.Body))
  return object.Body
}

async getObject(s3FilePath: S3FileKey): Promise<S3.GetObjectOutput> {
  logger.info(`Retrieving object from S3 [s3FilePath=${s3FilePath.value}]`)
  return this.client.getObject({
    Bucket: Constants.BUCKET_NAME,
    Key: s3FilePath.value
  }).promise()
    .catch(err => {
      logger.error(`Could not retrieve object from S3 [err=${err}]`)
      ExceptionHelper.throw(ErrorCodes.SERVER_ERROR_UNCAUGHT_EXCEPTION)
    }) as S3.GetObjectOutput
}
The photo object actually ends up in S3, but when I download it I can't open it.
Same for the GET: the image can't be displayed.
What general mistake(s) am I making here?

I'm not sure what values you're returning to your consumer, or which values they use to fetch the image again. Could you post what the actual request and response look like, and verify that the FQDN & path match?
It also seems you forgot about the ACL, i.e. the resources you upload this way are not public-read by default.
BTW, you could use the AWS SDK there:
import { Injectable } from '@nestjs/common'
import * as AWS from 'aws-sdk'
import { InjectConfig } from 'nestjs-config'
import { AwsConfig } from '../../config/aws.config'
import UploadedFile from '../interfaces/uploaded-file'

export const UPLOAD_WITH_ACL = 'public-read'

@Injectable()
export class ImageUploadService {
  s3: AWS.S3
  bucketName
  cdnUrl

  constructor(@InjectConfig() private readonly config) {
    const awsConfig = (this.config.get('aws') || { bucket: '', secretKey: '', accessKey: '', cdnUrl: '' }) as AwsConfig // read from envs
    this.bucketName = awsConfig.bucket
    this.cdnUrl = awsConfig.cdnUrl
    AWS.config.update({
      accessKeyId: awsConfig.accessKey,
      secretAccessKey: awsConfig.secretKey,
    })
    this.s3 = new AWS.S3()
  }

  upload(file: UploadedFile): Promise<string> {
    return new Promise((resolve, reject) => {
      const params: AWS.S3.Types.PutObjectRequest = {
        Bucket: this.bucketName,
        Key: `${Date.now().toString()}_${file.originalname}`,
        Body: file.buffer,
        ACL: UPLOAD_WITH_ACL,
      }
      this.s3.upload(params, (err, data: AWS.S3.ManagedUpload.SendData) => {
        if (err) {
          return reject(err)
        }
        resolve(`${this.cdnUrl}/${data.Key}`)
      })
    })
  }
}
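For completeness, a minimal sketch of how this service might be consumed from a controller. The controller and route names here are hypothetical, not part of the original answer:

import { Controller, Post, UploadedFile, UseInterceptors } from '@nestjs/common'
import { FileInterceptor } from '@nestjs/platform-express'
import { ImageUploadService } from './image-upload.service'

@Controller('images')
export class ImageUploadController {
  constructor(private readonly imageUploadService: ImageUploadService) {}

  @Post()
  @UseInterceptors(FileInterceptor('image'))
  async upload(@UploadedFile() file): Promise<{ url: string }> {
    // upload() resolves with the CDN URL of the stored object
    const url = await this.imageUploadService.upload(file)
    return { url }
  }
}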

For anyone having the same troubles, I finally figured it out:
I enabled binary support on API Gateway (<your-gateway> Settings -> Binary Media Types -> */*) and then returned all responses from the Lambda base64-encoded. API Gateway decodes them automatically before returning the response to the client.
With serverless-express you can enable the automatic base64 encoding easily at server creation:
const BINARY_MIME_TYPES = [
  'application/javascript',
  'application/json',
  'application/octet-stream',
  'application/xml',
  'font/eot',
  'font/opentype',
  'font/otf',
  'image/jpeg',
  'image/png',
  'image/svg+xml',
  'text/comma-separated-values',
  'text/css',
  'text/html',
  'text/javascript',
  'text/plain',
  'text/text',
  'text/xml',
]

async function bootstrap() {
  const expressServer = express()
  const nestApp = await NestFactory.create(AppModule, new ExpressAdapter(expressServer))
  await nestApp.init()
  return serverlessExpress.createServer(expressServer, null, BINARY_MIME_TYPES)
}
In the Controller, you're now able to just return the S3 response body:
@Get('/:id')
async getPhoto(@Param() params, @Res() res) {
  const photoId = PhotoId.of(params.id)
  const photoObject: S3.GetObjectOutput = await this.s3Service.getPhoto(photoId)
  res
    .set('Content-Type', 'image/jpeg')
    .send(photoObject.Body)
}
Hope this helps somebody!

Related

Handling multipart/form-data in AWS Lambda

I'm trying to send a request with an image to a Lambda function through API Gateway.
I'm using this piece of code to parse the form-data object received by my Lambda function. I then upload the image to S3, but when downloading and opening the image from S3, I see that it's corrupt.
I have tried the following npm packages:
parse-multipart
parse-multipart-data
Neither works: I get an empty parts array. The piece of code I use does get results in the array, but the buffers seem to be corrupt.
The problem seems to be in this line of code:
Buffer.from(item.slice(item.search(/Content-Type:\s.+/g) + item.match(/Content-Type:\s.+/g)[0].length + 4, -4), 'binary')
Does anyone have a solution for me?
You can use busboy to parse the multipart form data before uploading to S3 as shown below:
// This code uses ECMAScript modules (ESM), not CommonJS syntax,
// so make sure you add "type": "module" to your package.json.
import {S3Client, PutObjectCommand} from '@aws-sdk/client-s3';
import Busboy from 'busboy';

// Initialize the clients outside the function handler to take advantage of execution environment reuse.
const s3Client = new S3Client({region: process.env.AWS_REGION});

// Function handler.
export async function handler(event) {
  const {
    ContentType: contentType = '',
    File: file = '',
  } = await FORM.parse(event['body'], event['headers']);

  try {
    // Adds an object to a bucket. We must have `WRITE` permissions on a bucket to add an object to it.
    await s3Client.send(new PutObjectCommand({
      Body: file,
      Bucket: 'BUCKET_NAME',
      ContentType: contentType,
      Key: 'SOME_KEY',
    }));

    return {
      isBase64Encoded: false,
      statusCode: 200,
      body: JSON.stringify({
        message: 'Everything is gonna be alright.',
      }),
    }
  } catch (e) {
    return {
      isBase64Encoded: false,
      statusCode: 404,
      body: JSON.stringify(e),
    }
  }
}

const FORM = {
  parse(body, headers) {
    return new Promise((resolve, reject) => {
      const data = {};
      const buffer = Buffer.from(body, 'base64');

      const bb = Busboy({
        headers: Object.keys(headers).reduce((newHeaders, key) => {
          // busboy expects lower-case headers.
          newHeaders[key.toLowerCase()] = headers[key];
          return newHeaders;
        }, {}),
        limits: {
          fileSize: 10485760, // Set as desired.
          files: 1,
        },
      });

      bb.on('file', (name, stream, info) => {
        const chunks = [];
        stream.on('data', (chunk) => {
          if (name === 'File') {
            chunks.push(chunk);
          }
        }).on('limit', () => {
          reject(new Error('File size limit has been reached.'));
        }).on('close', () => {
          if (name === 'File') {
            data[name] = Buffer.concat(chunks);
            data['ContentType'] = info.mimeType;
          }
        });
      });

      bb.on('error', (err) => {
        reject(err);
      });
      bb.on('close', () => {
        resolve(data);
      });
      bb.end(buffer);
    });
  }
};
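If you want to sanity-check the parser without deploying, you can feed the handler a hand-built event. A hypothetical local test, assuming the handler above is exported from handler.js; the boundary and payload are made up, but the form field must be named File to match the parser:

import { handler } from './handler.js';

const boundary = 'test-boundary';
const body = [
  `--${boundary}`,
  'Content-Disposition: form-data; name="File"; filename="photo.jpg"',
  'Content-Type: image/jpeg',
  '',
  'not-really-jpeg-bytes',
  `--${boundary}--`,
  '',
].join('\r\n');

const event = {
  headers: {'Content-Type': `multipart/form-data; boundary=${boundary}`},
  // API Gateway with binary support delivers the body base64-encoded.
  body: Buffer.from(body).toString('base64'),
};

// Reaching S3 still requires credentials and a real bucket name.
handler(event).then(console.log).catch(console.error);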

Image Upload with AWS using ID returned from JWT

Update:
OK, so I am having an authentication issue with passport/JWT when trying to grab the header after it got set at login. JWT should return an ID, and I am trying to grab that ID and use it to update a user profile with a collection image upload. Here is where it gets weird. I get this error in the console:
you are not valid
node:internal/errors:464
ErrorCaptureStackTrace(err);
^
Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
at new NodeError (node:internal/errors:371:5)
at ServerResponse.setHeader (node:_http_outgoing:576:11)
at ServerResponse.header (C:\Users\tquig\OneDrive\Documents\GitHub\Team2\backend\node_modules\express\lib\response.js:776:10)
at ServerResponse.send (C:\Users\tquig\OneDrive\Documents\GitHub\Team2\backend\node_modules\express\lib\response.js:170:12)
at ServerResponse.json (C:\Users\tquig\OneDrive\Documents\GitHub\Team2\backend\node_modules\express\lib\response.js:267:15)
at C:\Users\tquig\OneDrive\Documents\GitHub\Team2\backend\routes\users.js:239:29
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
code: 'ERR_HTTP_HEADERS_SENT'
}
Now Insomnia gives me back this error:
"AwsError: MissingRequiredParameter: Missing required key 'Key' in params"
I am not sure whether AWS just hates the JWT token, or whether my JWT ID isn't actually being returned properly and I am not doing it right in general.
Here is the AWS middleware:
const S3 = require('aws-sdk/clients/s3')
const fs = require('fs')

const bucketName = process.env.bucketName
const region = process.env.bucketRegion
const accessKeyId = process.env.AWSAccessKeyId
const secretAccessKey = process.env.AWSSecretKey

const s3 = new S3({
  region,
  accessKeyId,
  secretAccessKey
})

function uploadFile(file) {
  const fileStream = fs.createReadStream(file.path)
  const uploadParams = {
    Bucket: bucketName,
    Body: fileStream,
    key: file.filename
  }
  return s3.upload(uploadParams).promise()
}
exports.uploadFile = uploadFile

// downloads a file from s3
// not tested yet
/*
function getFileStream(fileKey) {
  const downloadParams = {
    key: fileKey,
    bucket: bucketName
  }
  return s3.getObject(downloadParams).createReadStream()
}
exports.getFileStream = getFileStream
*/
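Note that the SDK expects the S3 params keys capitalized: the lowercase key in uploadParams is exactly what produces the MissingRequiredParameter: Missing required key 'Key' in params error quoted above. A corrected sketch of uploadFile:

function uploadFile(file) {
  const fileStream = fs.createReadStream(file.path)
  const uploadParams = {
    Bucket: bucketName,
    Body: fileStream,
    Key: file.filename // capital K; lowercase `key` is rejected by the SDK
  }
  return s3.upload(uploadParams).promise()
}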
Here is the authentication code:
require('dotenv').config();
const jwt = require('jsonwebtoken');
const mongoose = require('mongoose');

// middleware functionality to check logged in user
module.exports = async (req, res, next) => {
  try {
    const token = req.cookies.jwt;
    if (!token) return res.status(401).json({errorMessage: "Unauthorized"});

    const verified = jwt.verify(token, process.env.secretKey);
    req.user = verified.id;
    next();
  } catch (err) {
    console.error(err);
    res.status(401).json({errorMessage: "Unauthorized"});
  }
}
and here is the route:
const storage = multer.diskStorage({
  destination: function (req, file, cb) {
    cb(null, "uploads");
  },
  filename: function (req, file, cb) {
    cb(
      null,
      file.fieldname + "-" + Date.now() + path.extname(file.originalname)
    );
  },
});

const upload = multer({
  storage: storage,
  fileFilter: (req, file, cb) => {
    if (file.mimetype == "image/png" || file.mimetype == "image/jpg" || file.mimetype == "image/jpeg") {
      cb(null, true);
    } else {
      cb(null, false);
      return cb(new Error('Only .png, .jpg and .jpeg format allowed!'));
    }
  }
});

router.post("/collections", requireLogin, upload.single("myImage"), async (req, res) => {
  const obj = {
    img: {
      data: req.file.filename,
      contentType: req.file.contentType
    }
  }
  const newCollection = new collections({
    imgName: req.file.filename,
    image: obj.img
  });
  const findBool = user.findById(req.body.id)
    .then(() => res.json('found user'))
    .catch(err => res.status(400).json('UserIdError: ' + err));

  // need another check before uploading to aws to prevent unauthorized uploads
  if (findBool) {
    // upload to aws
    await uploadFile(req.file).then(() => res.json('AWS upload Complete'))
      .catch(err => res.status(400).json('AwsError: ' + err))

    // mongodb upload
    try {
      await user.findbyId(req.body.id).insertOne(newCollection)
    } catch (error) {
      res.status(400).json('updateError: ' + error)
    }

    // delete file from local storage
    unlinkFile(file.path)
  }
});
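The ERR_HTTP_HEADERS_SENT stack trace fits this route: the .then on findById sends 'found user', and the later res.json calls then try to respond a second time. Also, findBool holds a pending Promise, so if (findBool) is always truthy. A minimal single-response sketch, keeping the original helpers (user, uploadFile, unlinkFile) and using req.user, which the middleware sets from the verified token:

router.post("/collections", requireLogin, upload.single("myImage"), async (req, res) => {
  try {
    // await the lookup instead of replying inside .then
    const foundUser = await user.findById(req.user);
    if (!foundUser) {
      return res.status(400).json('UserIdError: user not found');
    }

    // upload to aws
    await uploadFile(req.file);

    // ...mongodb update as in the original route...

    // delete file from local storage
    await unlinkFile(req.file.path);

    // exactly one response per request
    return res.json('AWS upload Complete');
  } catch (err) {
    return res.status(400).json('Error: ' + err);
  }
});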

Migrate GetObject (v2) to GetObjectCommand (v3)

I'm trying to migrate an Express endpoint from v2 to v3 of the aws-sdk for JavaScript. The endpoint is a file downloader for AWS S3.
In version 2, I passed the result of GetObject back to the browser in a readable stream. In version 3 that same technique fails with the error:
TypeError: data.Body.createReadStream is not a function
How do I work with the data returned from the new GetObjectCommand? Is it a blob? I'm struggling to find anything useful in the v3 SDK docs.
Here are the two versions of the endpoint:
import AWS from 'aws-sdk'
import dotenv from 'dotenv'
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3'
dotenv.config()

// VERSION 3 DOWNLOADER - FAILS
const getFileFromS3v3 = async (req, res) => {
  const client = new S3Client({ region: 'us-west-2' })
  const params = {
    Bucket: process.env.AWS_BUCKET,
    Key: 'Tired.pdf',
  }
  const command = new GetObjectCommand(params)
  try {
    const data = await client.send(command)
    console.log(data)
    data.Body.createReadStream().pipe(res)
  } catch (error) {
    console.log(error)
  }
}

// VERSION 2 DOWNLOADER - WORKS
const getFileFromS3 = async (req, res) => {
  const filename = req.query.filename
  var s3 = new AWS.S3()
  var s3Params = {
    Bucket: process.env.AWS_BUCKET,
    Key: 'Tired.pdf',
  }
  // if the file header exists, stream the file to the response
  s3.headObject(s3Params, (err) => {
    if (err && err.code === 'NotFound') {
      console.log('File not found: ' + filename)
    } else {
      s3.getObject(s3Params).createReadStream().pipe(res)
    }
  })
}

export { getFileFromS3, getFileFromS3v3 }
This version 3 code works. Thanks to a major assist, the trick was to pipe data.Body and not use any of the fileStream methods.
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3'
import dotenv from 'dotenv'
dotenv.config()

const getFileFromS3 = async (req, res) => {
  const key = req.query.filename
  const client = new S3Client({ region: process.env.AWS_REGION })
  const params = {
    Bucket: process.env.AWS_BUCKET,
    Key: key,
  }
  const command = new GetObjectCommand(params)
  try {
    const data = await client.send(command)
    data.Body.pipe(res)
  } catch (error) {
    console.log(error)
  }
}

export { getFileFromS3 }
When called from this frontend function the code above returns the file from S3 to the browser.
const downloadFile = async (filename) => {
  const options = {
    url: `/api/documents/?filename=${filename}`,
    method: 'get',
    responseType: 'blob',
  }
  try {
    const res = await axios(options)
    fileDownload(res.data, filename)
  } catch (error) {
    console.log(error)
  }
}
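In v3, data.Body is a Node.js Readable stream, which is why piping it works directly. If the browser should also receive the right headers, the metadata on the GetObjectCommand output can be forwarded before piping. A small sketch along the same lines as the handler above (same assumptions: Express req/res, AWS_REGION and AWS_BUCKET in the environment):

import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3'

const client = new S3Client({ region: process.env.AWS_REGION })

const getFileFromS3WithHeaders = async (req, res) => {
  const command = new GetObjectCommand({
    Bucket: process.env.AWS_BUCKET,
    Key: req.query.filename,
  })
  try {
    const data = await client.send(command)
    // GetObjectCommandOutput carries the object's metadata alongside the stream,
    // so the client can be told the type and size before the bytes arrive.
    if (data.ContentType) res.set('Content-Type', data.ContentType)
    if (data.ContentLength) res.set('Content-Length', String(data.ContentLength))
    data.Body.pipe(res)
  } catch (error) {
    console.log(error)
    res.status(404).end()
  }
}

export { getFileFromS3WithHeaders }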

Generate S3 URL in "virtual-hosted style" format

I'm using the JavaScript SDK on AWS Lambda to generate a signed URL, but getSignedUrl is only returning a path-style URL. I have tried setting s3ForcePathStyle to false.
const params = {
  Bucket: 'bucket_name',
  Key: 'key_name'
};

const options = {
  signatureVersion: 'v4',
  useAccelerateEndpoint: false,
  // endpoint: new AWS.Endpoint('https://bucket_name.s3.amazonaws.com'),
  s3ForcePathStyle: false
};

const client = new AWS.S3(options);

exports.handler = async (event) => {
  const signedURL = await (new Promise((resolve, reject) => {
    client.getSignedUrl('putObject', params, (err, data) => {
      if (err) {
        reject(err);
      } else {
        resolve(data);
      }
    });
  }));
  return signedURL;
};
If I uncomment endpoint: new AWS.Endpoint('https://bucket_name.s3.amazonaws.com') I'm getting an absurd URL which combines both:
https://bucket_name.s3.amazonaws.com/bucket_name/key_name......
tl;dr
https://s3.ap-south-1.amazonaws.com/bucket_name/key_name..... # Code returns path-style
https://bucket_name.s3.amazonaws.com/key_name..... # I want virtual-hosted-style
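One thing worth checking, assuming the real bucket name resembles the bucket_name placeholder: SDK v2 falls back to path-style whenever the bucket name is not DNS-compatible (underscores and uppercase letters both disqualify it), regardless of s3ForcePathStyle. With a DNS-compliant name, the same call should produce a virtual-hosted-style URL. A minimal sketch under that assumption:

// DNS-compliant bucket name (lowercase letters, digits, hyphens) lets the
// SDK build a virtual-hosted-style host instead of falling back to path-style.
const client = new AWS.S3({ signatureVersion: 'v4', s3ForcePathStyle: false });
const url = client.getSignedUrl('putObject', { Bucket: 'my-bucket-name', Key: 'key_name' });
// expected shape: https://my-bucket-name.s3.<region>.amazonaws.com/key_name?X-Amz-Algorithm=...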

How to save multipart form data to a file in AWS Lambda

I am writing a serverless API using Node.js and TypeScript. I want to send a .xlsx file from the client side and save it in an S3 bucket.
I tried using the npm package busboy, but the file saved in the S3 bucket cannot be read (corrupted or not fully written). It looks like I haven't written the file correctly.
Here's my code
import * as busboy from 'busboy';
import { APIGatewayEvent, Callback, Context, Handler } from 'aws-lambda';
import AWS = require('aws-sdk');
import { S3 } from 'aws-sdk';

const s3 = new AWS.S3();

export const saveExcel: Handler = (event: APIGatewayEvent, context: Context, cb: Callback) => {
  const contentType = event.headers['Content-Type'] || event.headers['content-type'];
  const bb: busboy.Busboy = new busboy({ headers: { 'content-type': contentType } });

  bb.on('file', function (fieldname, file, filename, encoding, mimetype) {
    file
      .on('data', data => {
        const params: S3.PutObjectRequest = {
          Bucket: 'bucket-name',
          Key: filename,
          Body: data
        };
        s3.upload(params, function (err, data) {
          if (err) {
            console.log(err);
          } else {
            console.log(data);
          }
        });
      })
      .on('end', () => {
        console.log("done");
      });
  });

  bb.end(event.body);
}
What am I doing wrong? Or do I have to try another library?
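A likely cause, following the same chunk-accumulation pattern as the busboy answer earlier: each 'data' event carries only one chunk of the file, so calling s3.upload inside 'data' uploads partial objects under the same key, and whichever chunk finishes last wins. A minimal sketch that collects the chunks and uploads once (same bucket placeholder as above):

bb.on('file', function (fieldname, file, filename, encoding, mimetype) {
  const chunks: Buffer[] = [];
  file
    .on('data', data => chunks.push(data))
    .on('end', () => {
      const params: S3.PutObjectRequest = {
        Bucket: 'bucket-name',
        Key: filename,
        Body: Buffer.concat(chunks), // the whole file, not a single chunk
        ContentType: mimetype
      };
      s3.upload(params, (err, data) => err ? console.log(err) : console.log(data));
    });
});

If the function sits behind API Gateway with binary support enabled (see the first answer above), the body also arrives base64-encoded, so it would need bb.end(Buffer.from(event.body, 'base64')) instead of bb.end(event.body).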