I'm trying to record audio and save it to AWS S3. I'm able to record and play the newly recorded audio in the app, but when I try to save the following file (using https://github.com/benjreinhart/react-native-aws3), it turns up empty in S3:
const file = { uri: sound, name: `${name}.caf`, type: "audio/x-caf" };
Has anyone found an audio type that works with .caf?
Right now I have a setup where multiple MP3 files exist in an AWS S3 bucket. The bucket is public and I am able to download individual MP3 files from the URL created by AWS S3.
I want to make a radio streaming service that will traverse those MP3 files continuously. Is there any way, maybe using CloudFront, to do this?
e.g. song1.mp3, song2.mp3, song3.mp3
These three should play in sequence with a single call.
I realized that setting the content type when uploading files via the ExtraArgs parameter allows the file to be streamed rather than downloaded.
import mimetypes

import boto3

s3 = boto3.client("s3")

# Guess the file's MIME type from its extension
mimetype, _ = mimetypes.guess_type(fname)
if mimetype is None:
    raise Exception("Failed to guess mimetype")
else:
    print("\nMimetype: ", mimetype)

# Upload with the correct Content-Type so the browser streams the file
s3.upload_file(fname, S3_TO_BUCKET_NAME, key,
               Callback=ProgressPercentage(fname),
               ExtraArgs={'ContentType': mimetype})
I have a 121MB MP3 file I am trying to upload to my AWS S3 so I can process it via Amazon Transcribe.
The MP3 file comes from an MP4 file I stripped the audio from using FFmpeg.
When I try to upload the MP3, using the S3 object upload UI in the AWS console, I receive the below error:
InvalidPart
One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.
The error refers to the MP3 being a multipart file and the "next" part being missing, but it's not a multipart file.
I have re-run the MP4 file through FFmpeg 3 times in case the 1st file was corrupt, but that has not fixed anything.
I have searched a lot on Stack Overflow and have not found a similar case where anyone uploading a single 5 MB+ file received the error I'm getting.
I've also ruled out FFmpeg as the issue by saving the audio as an MP3 with VLC, but I receive the exact same error.
What is the issue?
121 MB is below the 160 GB S3 console single-object upload limit, the 5 GB single PUT limit of the REST API / AWS SDKs, and the 5 TB limit for multipart uploads, so I really can't see the issue.
Assuming the file exists and you have a stable internet connection (so no corrupted uploads), you may have incomplete multipart upload parts in your bucket that are somehow conflicting with the upload, so either follow this guide to remove them and try again, or try creating a new folder/bucket and re-uploading.
You may also have a browser caching issue or an extension conflict, so try incognito mode (with extensions disabled) or another browser if re-uploading to another bucket/folder doesn't work.
Alternatively, try the AWS CLI s3 cp command or a quick "S3 file upload" application in a supported SDK language to make sure that it's not a console UI issue.
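As a minimal sketch of that last suggestion (the bucket, key, and file names below are placeholders), you could use boto3 to abort any lingering incomplete multipart uploads and then retry the upload outside the console:

import boto3

BUCKET = "my-audio-bucket"   # placeholder bucket name
KEY = "audio/episode.mp3"    # placeholder object key
LOCAL_FILE = "episode.mp3"   # placeholder local file

s3 = boto3.client("s3")

# Abort any incomplete multipart uploads left over in the bucket
for upload in s3.list_multipart_uploads(Bucket=BUCKET).get("Uploads", []):
    s3.abort_multipart_upload(Bucket=BUCKET, Key=upload["Key"],
                              UploadId=upload["UploadId"])

# Retry the upload through the SDK instead of the console UI
s3.upload_file(LOCAL_FILE, BUCKET, KEY,
               ExtraArgs={"ContentType": "audio/mpeg"})

If this upload succeeds, the problem is almost certainly with the console UI or the browser rather than with the file itself.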
I use Microsoft Video Indexer to run video intelligence and extract useful features/content from user video inputs. I've been using the web interface, but now I need to automate this using the REST API.
There are two approaches to specifying the video file to upload. One is ingesting raw bytes and the other is providing the URL of a video file already in the cloud (recommended). See the API doc.
The problem is that I'm having issues specifying a file in a Google Cloud Storage bucket. I understand that you have to specify the URL of a publicly available file, which I'm doing by providing the signed URL of the video file. But it keeps giving me the error below, which doesn't happen when I specify the address of the same video file uploaded to Cloudinary.
{
  "ErrorType": "INVALID_INPUT",
  "Message": "Url content type 'application/xml' is not supported. Only audio and video files are supported. You can find the supported types here: https://learn.microsoft.com/ro-ro/azure/media-services/media-services-media-encoder-standard-formats. Trace id: '7e9e2626-6dd3-4e9b-8b06-5991cfdb896c'"
}
For reference, here are the 2 addresses:
Cloudinary: https://res.cloudinary.com/account-id/video/upload/v1612773239/VideoAI/Elen_Hyundai_high.mp4
GCS: https://storage.googleapis.com/project-id.appspot.com/video-files/filename?GoogleAccessId=firebase-adminsdk-some-service-account-key.com&Expires=1612857836&Signature=Rog3NWpzqXINxPWQl9sLLP8eDASNEglUehL6YkMB90YXRIWGk8PZOYGpUB9MVgTXKtFe4IjbR0tRuDUVhCeIhFTfL3kR2YXCZ3mqbOYOQlzfasq4YE4FtGiN40gNicjiLbJbq8vq4pMIwRfSirSOV92t9ev5ydPcW0BgICfd5n6QOhCvLx%2FpPgonCuGtK82Zyu21M%2BFRxuqmDTfCsZwP0fxfzwoZblusEFxIxpZFiXtow27EBYy3Dqv062UWhPuLhSyBKnFIHReaSaRcfwRVMF4Sw849eeMLGYdqHSy9LOVsw%2FcYTJenjM4bqoj1jqSduh8A%2FmGkN3rFRok3aT4qjw%3D%3D
I can provide the actual URL if you need to test it.
OK, I solved this by encoding the query-string portion of the signed URL from GCS before adding it to the videoUrl parameter sent to Microsoft Video Indexer (MVI):
// Split off the query string and percent-encode it (including the leading '?')
const temp = signedURL.split('?')
const videoUrl = temp[0] + encodeURIComponent(`?${temp[1]}`)
This is because MVI runs decodeURIComponent on the videoUrl, so the goal is to make the decoded result equal to the original URL generated by GCS:
const bucketFile = bucket.file(filePath)
const [signedURL] = await bucketFile.getSignedUrl({
  action: 'read',
  expires: Date.now() + 3600 * 1000, // 1 hr from now, or your own expiration
})
I'm using a Lambda function to receive a byte array of audio data, save it as MP3, store it in S3, and then use the S3 object to start a Transcribe job.
Everything's been processed correctly. I can see the .mp3 file in S3. I've also downloaded it to my local machine and played it, and it plays correctly as an MP3.
However, when I start the transcription job I get back an error:
The media format that you specified doesn't match the detected media format. Check the media format and try your request again.
This is my call to start the AWS Transcribe job:
import boto3

transcribe = boto3.client('transcribe')

transcribe.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={'MediaFileUri': job_uri},
    MediaFormat='mp3',
    LanguageCode='en-US'
)
Any idea what may be causing this?
Cheers!
MP3 requires compression; if you just save the byte array, it's not in .mp3 format. You can use soxi to validate audio files: http://sox.sourceforge.net/soxi.html
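For example, if the byte array is raw PCM, you could encode it with ffmpeg before uploading. Here is a minimal sketch that assumes 16-bit little-endian mono samples at 16 kHz and an ffmpeg binary available to the Lambda (adjust -f/-ar/-ac to whatever your recorder actually produces):

import subprocess

# Assumed input: raw 16-bit little-endian PCM, mono, 16 kHz, saved as audio.raw
subprocess.run(
    ["ffmpeg",
     "-f", "s16le",      # raw sample format (assumption)
     "-ar", "16000",     # sample rate (assumption)
     "-ac", "1",         # channel count (assumption)
     "-i", "audio.raw",  # the saved byte array
     "audio.mp3"],       # properly encoded MP3 for S3 / Transcribe
    check=True,
)

# Optional: validate the result with soxi (part of SoX)
subprocess.run(["soxi", "audio.mp3"], check=True)

If soxi reports the expected sample rate and duration, Transcribe should accept the file with MediaFormat='mp3'.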
In our Rails app, we are uploading MP3/MP4 files directly to S3. We need to find the duration of the video/audio and store it in a table. I tried the mp4info gem:
require 'taglib'
require 'mp4info'
file = "livetouch-test.mp4"
info = MP4Info.open(file)
p info.SECS
This provides the video length, but it expects the video to be stored locally. In our Rails app, videos/audios are available only on S3. Does anyone know how to get the duration of an S3-uploaded file in RoR?
Why do you want to store the video duration and other related info in the DB? When you upload the files, get the duration with the library you mentioned and add it as a metadata tag when uploading the file to S3.
That way, each time you access that file, you also have access to all the meta information; you can add other metadata tags such as category, etc.
For more info, read the following AWS doc:
https://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/S3Object.html
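A minimal sketch of that idea with the aws-sdk-s3 gem (the bucket name, key, region, and metadata key below are placeholders/examples, not part of the original question):

require 'aws-sdk-s3'
require 'mp4info'

file = "livetouch-test.mp4"
duration = MP4Info.open(file).SECS

s3 = Aws::S3::Resource.new(region: 'us-east-1') # placeholder region
obj = s3.bucket('my-bucket').object("videos/#{File.basename(file)}")

# Store the duration as user-defined object metadata (x-amz-meta-duration)
obj.upload_file(file, metadata: { 'duration' => duration.to_s })

# Later, read it back without downloading the file
puts obj.metadata['duration']

The metadata travels with the object, so any later HEAD request can read the duration without downloading the file.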