Elastic Transcoder is not writing metadata in S3 (AWS)

I am using a Lambda function to transcode the video files I upload. Here is the code I am using in the Lambda function:
var AWS = require('aws-sdk');
var elastictranscoder = new AWS.ElasticTranscoder();

var params = {
    PipelineId: pipelineId,
    Input: {
        Key: inputKey
    },
    Outputs: [{
        Key: outputKey,
        PresetId: transcoderPresetID
    }],
    UserMetadata: { jid: 'test', vid: 'v001' }
};

// Submit the transcoding job.
elastictranscoder.createJob(params, function (err, data) {
    if (err) console.log(err, err.stack);
});
But when I check the metadata on the S3 object that was written by Elastic Transcoder, all I can see is "content-type": "video/mp4".
My log files are not showing any errors. Am I missing something? Please let me know. Thank you.

UserMetadata is not applied to the object that Elastic Transcoder writes to S3. It is sent as part of the job status notification, as documented here:
https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/notifications.html
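For reference, a minimal sketch of reading that metadata in a Lambda subscribed to the pipeline's SNS notification topic might look like this (field names follow the notification format documented at the link above):
// Minimal sketch: a Lambda subscribed to the pipeline's "Complete" SNS topic
// receives the job status notification, which carries the UserMetadata.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const notification = JSON.parse(record.Sns.Message);
    if (notification.state === 'COMPLETED') {
      console.log('job', notification.jobId, 'metadata', notification.userMetadata);
      // e.g. notification.userMetadata.jid === 'test'
    }
  }
};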
If you wish to add custom metadata to the S3 object after transcoding, you could copy the object onto itself with the new metadata. For example:
$s3Client->copyObject(array(
    'Bucket'            => $bucket,
    'Key'               => $key,
    'CopySource'        => "{$bucket}/{$key}", // copy the object onto itself
    'Metadata'          => array(
        'jid' => 'test',
        'vid' => 'v001',
    ),
    'MetadataDirective' => 'REPLACE', // required so the copy picks up the new metadata
));
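Since the question's Lambda is in Node.js, a rough equivalent with the JavaScript SDK (v2) is sketched below; the bucket and key are placeholders, and MetadataDirective: 'REPLACE' is what makes the copy pick up the new metadata:
// Rough Node.js equivalent of the PHP copy above (aws-sdk v2).
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Copy the transcoded object onto itself, replacing its metadata.
function addMetadata(bucket, key) {
  return s3.copyObject({
    Bucket: bucket,
    Key: key,
    CopySource: `${bucket}/${key}`,
    Metadata: { jid: 'test', vid: 'v001' },
    MetadataDirective: 'REPLACE', // without this the copy keeps the original metadata
  }).promise();
}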

Related

Uppy - How do you upload to S3 via multipart? Using Companion?

https://uppy.io/docs/aws-s3-multipart/
Uppy's multipart plugin sounds like exactly what I need, but I can't see how to do the backend part of things. The impression I get is that I need to set up a Companion to route the upload to S3, but I can't find any details on setting up Companion for this.
I can see lots of references to using Companion to fetch external content, but none about multipart S3 uploading.
I also don't see anywhere inside Uppy to provide AWS credentials, which makes me think of Companion even more.
But there are 4 steps to complete a multipart upload, and I can't see how providing one Companion URL will help Uppy.
Thanks in advance to anyone who can help or jog me in the right direction.
Providing Uppy with a companion URL makes it fire off a series of requests to the-passed-url.com/s3/multipart. You then need to configure your server to handle these requests; your server is where your AWS credentials are handled.
In short, when you click the upload button in Uppy, this is what happens (a minimal server-side sketch follows these steps):
1. Uppy sends a POST request to /s3/multipart to create/initiate the multipart upload.
2. Using the data returned from the previous request, Uppy sends a GET request to /s3/multipart/{uploadId} to generate AWS S3 pre-signed URLs to use for uploading the parts.
3. Uppy then uploads the parts using the pre-signed URLs from the previous request.
4. Finally, Uppy sends a POST request to /s3/multipart/{uploadId}/complete to complete the multipart upload.
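A hedged sketch of a DIY backend for those four steps, using Express and aws-sdk v2, is below. The bucket name, region, and exact route/query-parameter shapes are assumptions; check the AwsS3Multipart docs for the precise requests your Uppy version sends (e.g. per-part signing vs. batch signing).
const express = require('express')
const AWS = require('aws-sdk')

const s3 = new AWS.S3({ region: 'eu-west-1' }) // credentials come from the environment
const Bucket = 'YOUR_BUCKET_NAME'
const app = express()
app.use(express.json())

// Step 1: create/initiate the multipart upload.
app.post('/s3/multipart', async (req, res) => {
  const { filename, type } = req.body
  const { UploadId, Key } = await s3.createMultipartUpload({
    Bucket,
    Key: `uploads/${filename}`,
    ContentType: type,
  }).promise()
  res.json({ uploadId: UploadId, key: Key })
})

// Step 2: pre-sign an upload URL for a single part.
app.get('/s3/multipart/:uploadId/:partNumber', async (req, res) => {
  const url = await s3.getSignedUrlPromise('uploadPart', {
    Bucket,
    Key: req.query.key,
    UploadId: req.params.uploadId,
    PartNumber: Number(req.params.partNumber),
    Expires: 600,
  })
  res.json({ url })
})

// Step 4: complete the upload with the ETags Uppy collected while PUTting the parts (step 3).
app.post('/s3/multipart/:uploadId/complete', async (req, res) => {
  const result = await s3.completeMultipartUpload({
    Bucket,
    Key: req.query.key,
    UploadId: req.params.uploadId,
    MultipartUpload: { Parts: req.body.parts }, // [{ ETag, PartNumber }, ...]
  }).promise()
  res.json({ location: result.Location })
})

app.listen(3020)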
I was able to accomplish this using Laravel/Vue. I don't know what your environment is, but I've posted my solution, which should help, especially if your server is using PHP.
Configuring Uppy to Use Multipart Uploads with Laravel/Vue
I am sharing code snippets for AWS S3 Multipart [github]
If you add Companion to the mix, your users will be able to select files from remote sources such as Instagram, Google Drive, and Dropbox, bypassing the client (so a 5 GB video isn't eating into your users' data plans), and have them uploaded to the final destination. Files are removed from Companion after an upload is complete, or after a reasonable timeout. Access tokens also don't stick around for long, for security reasons.
Set up the Companion server:
1: Set up the S3 configuration.
Uppy automatically generates the upload URL and puts the file in the uploads directory.
s3: {
    getKey: (req, filename) => {
        return `uploads/${filename}`;
    },
    key: 'AWS KEY',
    secret: 'AWS SECRET',
    bucket: 'AWS BUCKET NAME',
},
2: Support uploads from a remote resource.
Uppy handles everything for us. We just need to provide a key and secret for the different remote providers, such as Instagram, Google Drive, etc.
Example: Google Drive upload
Generate a Google key and secret from Google and add them to the code.
Add the redirect URL for authentication.
3: Run the Node server locally.
const fs = require('fs')
const path = require('path')
const rimraf = require('rimraf')
const companion = require('@uppy/companion')
const app = require('express')()

const DATA_DIR = path.join(__dirname, 'tmp')

app.use(require('cors')({
  origin: true,
  credentials: true,
}))
app.use(require('cookie-parser')())
app.use(require('body-parser').json())
app.use(require('express-session')({
  secret: 'hello planet',
}))

const options = {
  providerOptions: {
    drive: {
      key: 'YOUR GOOGLE DRIVE KEY',
      secret: 'YOUR GOOGLE DRIVE SECRET'
    },
    s3: {
      getKey: (req, filename) => {
        return `uploads/${filename}`;
      },
      key: 'AWS KEY',
      secret: 'AWS SECRET',
      bucket: 'AWS BUCKET NAME',
    },
  },
  server: { host: 'localhost:3020' },
  filePath: DATA_DIR,
  secret: 'blah blah',
  debug: true,
}

// Make sure the temporary data directory exists, and clean it up on exit.
try {
  fs.accessSync(DATA_DIR)
} catch (err) {
  fs.mkdirSync(DATA_DIR)
}
process.on('exit', () => {
  rimraf.sync(DATA_DIR)
})

app.use(companion.app(options))

// handle server errors
const server = app.listen(3020, () => {
  console.log('listening on port 3020')
})
companion.socket(server, options)
Set up the client:
1: Client HTML code
This code will allow uploads from Google Drive, the webcam, local files, etc. You can customize it to support more remote providers.
Set the companion URL to the URL where the Node server above is running (http://localhost:3020).
<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Uppy</title>
    <link href="https://releases.transloadit.com/uppy/v1.29.1/uppy.min.css" rel="stylesheet">
  </head>
  <body>
    <div id="drag-drop-area"></div>
    <script src="https://releases.transloadit.com/uppy/v1.29.1/uppy.min.js"></script>
    <script>
      Uppy.Core({
        debug: false,
        autoProceed: false,
        restrictions: {
          maxNumberOfFiles: 5,
        }
      })
        .use(Uppy.AwsS3Multipart, {
          limit: 4,
          companionUrl: 'http://localhost:3020'
        })
        .use(Uppy.Dashboard, {
          inline: true,
          showProgressDetails: true,
          showLinkToFileUploadResult: false,
          proudlyDisplayPoweredByUppy: false,
          target: '#drag-drop-area',
        })
        .use(Uppy.GoogleDrive, { target: Uppy.Dashboard, companionUrl: 'http://localhost:3020' })
        .use(Uppy.Url, { target: Uppy.Dashboard, companionUrl: 'http://localhost:3020' })
        .use(Uppy.Webcam, { target: Uppy.Dashboard, companionUrl: 'http://localhost:3020' });
    </script>
  </body>
</html>

S3 - Video uploaded with a getSignedUrl link does not play and is downloaded in the wrong format

I am using the AWS SDK on the server side with Node.js and having an issue with uploading files as FormData from the client side.
On the server side I have a simple route, which creates an upload link where the video will later be uploaded directly from the client side.
I am using the S3 getSignedUrl method with putObject to generate that link, which creates a PUT request for the client, but this causes a very strange issue with FormData.
A video uploaded as FormData does not behave correctly: instead of playing, the S3 URL downloads the video, and the file is also broken.
Here is how I configure that method on the server side:
this.s3.getSignedUrl(
  'putObject',
  {
    Bucket: '<BUCKET_NAME>',
    ContentType: contentType, // 'video/mp4' as a rule
    Key: key,
  },
  (err, url) => {
    if (err) {
      reject(err)
    } else {
      resolve(url)
    }
  },
)
The axios PUT request with a Blob actually works, but not with FormData.
axios.put(url, file, {
  headers: {
    'Content-Type': file.type,
  },
  onUploadProgress: ({ total, loaded }) => {
    setProgress((loaded / total) * 100)
  },
})
This is the working version, but when I try to add the file to FormData, it is uploaded to S3, yet the video downloads instead of playing (a reconstruction of that variant is sketched below).
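For illustration, the failing variant was along these lines (reconstructed); because a presigned PUT stores the request body verbatim, the multipart/form-data envelope ends up saved as the object instead of the raw video bytes:
// Reconstructed failing variant: the file is wrapped in FormData, so S3 stores
// the multipart-encoded body (boundary lines included) rather than the bare mp4.
const formData = new FormData()
formData.append('file', file)

axios.put(url, formData, {
  headers: {
    'Content-Type': file.type,
  },
})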
I do not have much experience with AWS, so if somebody knows how to handle that issue, I will be thankful.

How to upload files to an S3 bucket from a Docker container?

I have containerized my project that uploads files to S3.
Everything was working fine when I was uploading the files from my local file system.
All I did was mount my container to my local file system, and then uploading stopped.
The following is the function I use for uploading files to the S3 bucket:
// AWS configuration
AWS.config.update({ region: 'ap-northeast-1' });
let s3 = new AWS.S3({ apiVersion: '2006-03-01' });
.
.
.
function s3uploader(uploadingVideo) {
  let uploadParams = { Bucket: "my-bucket", Key: '', Body: '' };
  let file = uploadingVideo;
  console.log(file);

  // Configure the file stream and obtain the upload parameters
  let fileStream = fs.createReadStream(file);
  fileStream.on('error', function (err) {
    console.log('File Error', err);
  });
  uploadParams.Body = fileStream;
  uploadParams.Key = path.basename(file);

  // Call S3 to upload the file to the specified bucket
  s3.upload(uploadParams, function (err, data) {
    console.log("Hello World!")
    if (err) {
      console.log("Error", err);
    }
    if (data) {
      console.log("Upload Success", data.Location);
    }
  });
}
At the moment, nothing happens when running the container. There is no error, not even the "Hello World!" part, so I think s3 is not being called in the first place.
I have found a similar question here, but it wasn't helpful in my case.
I also thought of installing the AWS CLI from the Dockerfile, but didn't succeed with that either.
What exactly is missing here, and how can I fix it?

AWS S3 presigned URL with metadata

I am trying to create a presigned URL using boto3, as below:
s3 = boto3.client(
    's3',
    aws_access_key_id=settings.AWS_ACCESS_KEY,
    aws_secret_access_key=settings.AWS_ACCESS_SECRET,
    region_name=settings.AWS_SES_REGION_NAME,
    config=Config(signature_version='s3v4')
)

metadata = {
    'test': 'testing'
}

presigned_url = s3.generate_presigned_url(
    ClientMethod='put_object',
    Params={
        'Bucket': settings.AWS_S3_BUCKET_NAME,
        'Key': str(new_file.uuid),
        'ContentDisposition': 'inline',
        'Metadata': metadata
    })
So, after the URL is generated, when I try to upload to S3 using Ajax, it gives 403 Forbidden. If I remove Metadata and ContentDisposition while creating the URL, the upload succeeds.
Boto3 version: 1.9.33
Below is the doc that I am referring to:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.generate_presigned_url
Yes, I got it working.
Basically, after the signed URL is generated, I need to send all the metadata and the Content-Disposition as headers along with the request to the signed URL.
For example: if my metadata dictionary is {'test':'test'}, then I need to send that metadata as a header, i.e. x-amz-meta-test with its value, together with Content-Disposition, to AWS.
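A sketch of that upload request, matching the parameters signed in the question above (presignedUrl and file are assumed to already be in scope):
// Every Metadata entry becomes an x-amz-meta-* header, and Content-Disposition
// must be sent exactly as it was signed, otherwise S3 rejects the PUT with 403.
await fetch(presignedUrl, {
  method: 'PUT',
  headers: {
    'Content-Disposition': 'inline',
    'x-amz-meta-test': 'testing',
  },
  body: file,
});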
I was using createPresignedPost and got this working by adding the metadata I wanted to the Fields param, like so:
const params = {
  Expires: 60,
  Bucket: process.env.fileStorageName,
  Conditions: [['content-length-range', 1, 1000000000]], // 1GB
  Fields: {
    'Content-Type': 'application/pdf',
    key: strippedName,
    'x-amz-meta-pdf-type': pdfType,
    'x-amz-meta-pdf-id': pdfId,
  },
};
As long as you pass the data you want in the file's metadata to the lambda that you're using to create the presigned POST response, the above will work. Hopefully this will help someone else...
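For the browser side of that createPresignedPost flow, a sketch could look like the following; getPresignedPost is a hypothetical helper that calls the lambda above and returns its { url, fields } response:
// Hypothetical helper getPresignedPost() returns the { url, fields } object
// produced by s3.createPresignedPost(params) on the server.
const { url, fields } = await getPresignedPost();

const form = new FormData();
Object.entries(fields).forEach(([name, value]) => form.append(name, value)); // includes the x-amz-meta-* fields
form.append('file', pdfFile); // the file must be the last field appended

await fetch(url, { method: 'POST', body: form });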
I found that the metadata object needs to be key/value pairs, with the values as strings (the example is a Node.js lambda):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const { key, type, metadata } = JSON.parse(event.body);
  // example
  /* metadata: {
       foo: 'bar',
       x: '123',
       y: '22.4213213'
     } */
  return await s3.getSignedUrlPromise('putObject', {
    Bucket: 'the-product-uploads',
    Key: key,
    Expires: 300,
    ContentType: type,
    Metadata: metadata
  });
};
Then in your request headers you need to add each k/v explicitly:
await fetch(signedUrl, {
  method: "PUT",
  headers: {
    "Content-Type": fileData.type,
    "x-amz-meta-foo": "bar",
    "x-amz-meta-x": x.toString(),
    "x-amz-meta-y": y.toString()
  },
  body: fileBuffer
});
In boto3, you should provide the Metadata parameter, passing a dict of your key/value metadata. You don't need to prefix the keys with x-amz-meta-; apparently boto does that for you now.
Also, I didn't have to pass the metadata again when uploading to the pre-signed URL:
params = {
    'Bucket': bucket_name,
    'Key': object_key,
    'Metadata': {'test-key': value}
}

response = s3_client.generate_presigned_url('put_object',
                                            Params=params,
                                            ExpiresIn=3600)
I'm using similar code in a Lambda function behind an API.

Read data from an AWS S3 bucket using JavaScript

I'm trying to read data from an AWS S3 bucket using JavaScript, but I'm getting this error:
"Error: Missing credentials in config"
AWS.config.update({
  "region": "eu-west-1"
});

var params = { Bucket: <BucketName>, Key: "data.json" };
new AWS.S3().getObject(params, function (err, json_data) {
  if (!err) {
    var json = JSON.parse(Buffer.from(json_data.Body).toString("utf8"));
    console.log(json);
  } else {
    console.log(err);
  }
});
Even when I try without AWS.config.update, I get this error.
Any idea?
AFAIK, if you wish to access a bucket that is not public, you will need to supply your AWS credentials along with the request. Here's the SDK page for building an AWS.Credentials object that you put into your options when creating the AWS.S3 object:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Credentials.html
No example because I am not a JS dev and can't write it out of memory, sorry!
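For reference, a minimal sketch of what that might look like with the v2 JavaScript SDK (the key values are placeholders; in practice you would usually rely on environment credentials, an IAM role, or Cognito rather than hard-coded keys):
const AWS = require('aws-sdk');

// Placeholder credentials -- load these from a secure source, never hard-code them.
const s3 = new AWS.S3({
  region: 'eu-west-1',
  credentials: new AWS.Credentials({
    accessKeyId: 'YOUR_ACCESS_KEY_ID',
    secretAccessKey: 'YOUR_SECRET_ACCESS_KEY',
  }),
});

s3.getObject({ Bucket: 'your-bucket-name', Key: 'data.json' }, function (err, data) {
  if (err) return console.log(err);
  console.log(JSON.parse(data.Body.toString('utf8')));
});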