I’m working on the oneVPL samples from this GitHub repository (https://github.com/oneapi-src/oneAPI-samples) and I’m trying to build the hello-vpp sample. After running the program with the command in the readme.md file, I wanted to increase the video size to 1280x720. To play the raw output file, I used the command below:
ffplay -video_size 1280x720 -f rawvideo out.raw
My raw output file appeared to be damaged: a garbled video got played. How do I change the width and height of the output file? Any suggestions?
Add the scale filter. Example assuming video.raw is 640x360:
ffplay -f rawvideo -video_size 640x360 -pixel_format rgba -vf scale=1280:720 video.raw
Try the below command:
ffplay -video_size 1280x720 -pixel_format bgra -f rawvideo out.raw
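These ffplay commands only affect playback. If you want the sample itself to write 1280x720 output, the VPP output resolution has to be changed in the sample's source before the VPP session is initialized. Below is a minimal sketch of the relevant fields, assuming the standard oneVPL C API; the variable name VPPParams, the frame rate, and the BGRA output format are assumptions about how the sample is typically structured, not quoted from it:

// Hypothetical excerpt: set the VPP output to 1280x720 before MFXVideoVPP_Init().
// Assumes the sample's existing oneVPL includes (vpl/mfxvideo.h) are in place.
#define ALIGN16(x) (((x) + 15) & ~15)                  // oneVPL surfaces need 16-aligned dimensions

mfxVideoParam VPPParams = {};
// ... the vpp.In fields stay as the sample sets them for the input file ...
VPPParams.vpp.Out.FourCC        = MFX_FOURCC_RGB4;     // assumption: BGRA raw output
VPPParams.vpp.Out.ChromaFormat  = MFX_CHROMAFORMAT_YUV444;
VPPParams.vpp.Out.CropW         = 1280;                // visible output size
VPPParams.vpp.Out.CropH         = 720;
VPPParams.vpp.Out.Width         = ALIGN16(1280);       // allocated surface size
VPPParams.vpp.Out.Height        = ALIGN16(720);
VPPParams.vpp.Out.FrameRateExtN = 30;
VPPParams.vpp.Out.FrameRateExtD = 1;
VPPParams.vpp.Out.PicStruct     = MFX_PICSTRUCT_PROGRESSIVE;

The raw file will then contain 1280x720 frames, and the 1280x720 ffplay command above should display them correctly.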
I'm writing an application which needs to capture the screen. I've looked for a solution and the internet says that FFmpeg can do it. But I can't find a way to do that IN CODE. The FFmpeg documentation seems to be very poor.
Can anybody please tell me how do I access framebuffer raw data with FFMPEG?
FFmpeg supports input of raw frames through stdin.
With the arguments -f rawvideo and -i -, ffmpeg will expect frames coming on stdin:
ffmpeg -r 60 -f rawvideo -pix_fmt uyvy422 -s 1280x720 -i - -threads 0 -preset fast -y -pix_fmt yuv420p output.mp4
You can check this link, it has useful information.
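If you are driving this from your own C or C++ code rather than a shell, one simple approach is to start ffmpeg as a child process and write each captured frame to its stdin. Here is a minimal sketch assuming a POSIX popen pipe; the screen grab itself is stubbed out with a dummy fill, since that part depends on your platform and capture API:

#include <cstdio>
#include <cstdint>
#include <cstring>
#include <vector>

int main() {
    const int width = 1280, height = 720;
    std::vector<uint8_t> frame(width * height * 2);    // UYVY 4:2:2 = 2 bytes per pixel

    // Same idea as the command above: raw frames arrive on stdin ("-i -").
    FILE* ff = popen(
        "ffmpeg -r 60 -f rawvideo -pix_fmt uyvy422 -s 1280x720 -i - "
        "-threads 0 -preset fast -y -pix_fmt yuv420p output.mp4", "w");
    if (!ff) return 1;

    for (int i = 0; i < 600; ++i) {                    // e.g. 10 seconds at 60 fps
        // Replace this memset with the pixels from your real screen capture.
        std::memset(frame.data(), i % 255, frame.size());
        fwrite(frame.data(), 1, frame.size(), ff);
    }
    pclose(ff);                                        // EOF lets ffmpeg finalize the MP4
    return 0;
}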
In Qt, you would run ffmpeg in a QProcess with -f rawvideo and write to its stdin with the write() method.
This is roughly how to accomplish it:
#include <QProcess>

QProcess* process = new QProcess;
process->setProcessChannelMode(QProcess::ForwardedChannels);   // forward ffmpeg's log to the console
process->start("ffmpeg.exe", args, QProcess::Unbuffered | QProcess::ReadWrite);   // args holds the rawvideo options shown above
process->waitForStarted();
// ... then, for every captured frame:
void* buffer = nullptr;
videoFrame->GetBytes(&buffer);                                  // e.g. a capture-SDK frame object
process->write(static_cast<const char*>(buffer), frameSizeInBytes);   // frame size = width * height * bytes per pixel
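When the capture loop ends, close ffmpeg's stdin and wait for it to exit so the output file is finalized properly (standard QProcess calls):

process->closeWriteChannel();   // sends EOF to ffmpeg's stdin
process->waitForFinished(-1);   // let ffmpeg flush and write the MP4 trailer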
I have a C++ program that converts a series of JPGs into an .mp4 video. The command I use is the following:
std::system("ffmpeg -threads auto -y -framerate 1.74659 -i /mnt/ev_ramdsk/1/%05d-capture.jpg -vcodec libx264 -preset ultrafast /mnt/ev_ramdsk/1/video.mp4");
which produces an .mp4 video file like it's supposed to, except it can't be played anywhere (tested on 2 computers and with HTML5 video).
But if, from the same computer where the program runs, I do:
ffmpeg -threads auto -y -framerate 2 -i %05d-capture.jpg -vcodec libx264 -preset ultrafast video.mp4
from the command line, the output video plays wonderfully (except in VLC; for VLC I have to use -vcodec mpeg4).
What could possibly cause this behaviour?
Could the cp command corrupt the file? (It is run after ffmpeg to move the video out of the ramfs.)
EDIT:
As requested, I ran the whole set of commands one by one in the console, exactly as the program does (the program logs every single command it runs; I just repeated them).
The commands are:
cp -r /var/cache/zoneminder/events/1/16/05/18/23/30/00/ /mnt/ev_ramdsk/1/
ffmpeg -threads auto -y -framerate 1.76729 -i /mnt/ev_ramdsk/1/%5d-capture.jpg -preset ultrafast /mnt/ev_ramdsk/1/video.mp4
cp /mnt/ev_ramdsk/1/video.mp4 /var/cache/evmanager/videos/1/2016_05_18_23_30_00_.mp4
The resulting .mp4 file can be played without any trouble. It is also the only one with a preview image in the file explorer.
Thank you very much!
Solved it!
This was the winning answer. I finally got it to work using:
std::system("ffmpeg -threads auto -y -r 1.74659 -i /mnt/ev_ramdsk/1/%05d-capture.jpg -pix_fmt yuv420p -preset ultrafast -r 10 /mnt/ev_ramdsk/1/video.mp4");
Thank you very much!
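It may also be worth checking the return value of std::system, so a failed encode is noticed before the broken file is copied out of the ramfs. A small sketch using the same command (encodeEventVideo is just an illustrative name, not from the original program):

#include <cstdlib>

bool encodeEventVideo() {
    int rc = std::system(
        "ffmpeg -threads auto -y -r 1.74659 -i /mnt/ev_ramdsk/1/%05d-capture.jpg "
        "-pix_fmt yuv420p -preset ultrafast -r 10 /mnt/ev_ramdsk/1/video.mp4");
    return rc == 0;   // only run the cp step out of the ramfs when this returns true
}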
Below is the command I am running. It should convert the download to MP3, but it still exports as a video in FLV. What am I doing wrong?
$cmd = '/usr/local/bin/youtube-dl -o "%(title)s.%(ext)s" -x --audio-format mp3 -- '.escapeshellarg($url).'';
youtube-dl will download the video before converting it. Most likely, you don't have ffprobe or ffmpeg installed. Make sure both programs are available (i.e. you get a sensible output for ffprobe --help and ffmpeg --help).
You can download the audio file directly from the YouTube site.
For example, in an Ubuntu terminal: youtube-dl youtube.com/watch?v=qn6CMz18lkQ -f 141. Format 141 is most probably the audio format code with the better quality.
Original Question
I want to be able to generate a new (fully valid) MP3 file from an existing MP3 file to be used as a preview -- try-before-you-buy style. The new file should only contain the first n seconds of the track.
Now, I know I could just "chop the stream" at n seconds (calculating from the bitrate and header size) when delivering the file, but this is a bit dirty and a real PITA on a VBR track. I'd like to be able to generate a proper MP3 file.
Anyone any ideas?
Answers
Both mp3splt and ffmpeg are good solutions. I chose ffmpeg, as it is commonly installed on Linux servers and is also easily available for Windows. Here are some more useful command-line parameters for generating previews with ffmpeg:
-t <seconds> chop after specified number of seconds
-y force file overwrite
-ab <bitrate> set bitrate e.g. -ab 96k
-ar <rate Hz> set sampling rate e.g. -ar 22050 for 22.05kHz
-map_meta_data <outfile>:<infile> copy track metadata from infile to outfile
Instead of setting -ab and -ar, you can copy the original track settings, as Tim Farley suggests, with:
-acodec copy
I also recommend ffmpeg, but the command line suggested by John Boker has an unintended side effect: it re-encodes the file to the default bitrate (which is 64 kb/s in the version I have here at least). This might give your customers a false impression of the quality of your sound files, and it also takes longer to do.
Here's a command line that will slice to 30 seconds without transcoding:
ffmpeg -t 30 -i inputfile.mp3 -acodec copy outputfile.mp3
The -acodec switch tells ffmpeg to use the special "copy" codec which does not transcode. It is lightning fast.
NOTE: the command was updated based on a comment from Oben Sonne.
If you wish to REMOVE the first 30 seconds (and keep the remainder) then use this:
ffmpeg -ss 30 -i inputfile.mp3 -acodec copy outputfile.mp3
try:
ffmpeg -t 30 -i inputfile.mp3 outputfile.mp3
This command also works perfectly.
I cropped my music files from 20 to 40 seconds.
-y : overwrite the output file without asking.
ffmpeg -i test.mp3 -ss 00:00:20 -to 00:00:40 -c copy -y temp.mp3
You can use cutmp3:
cutmp3 -i foo.mp3 -O 30s.mp3 -a 0:00.0 -b 0:30.0
It's in ubuntu repo, so just: sudo apt-get install cutmp3.
You might want to try Mp3Splt.
I've used it before in a C# service that simply wrapped the mp3splt.exe win32 process. I assume something similar could be done in your Linux/PHP scenario.
I got an error while doing the same:
Invalid audio stream. Exactly one MP3 audio stream is required.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Stream mapping:
The fix for me was:
ffmpeg -ss 00:02:43.00 -t 00:00:10 -i input.mp3 -codec:a libmp3lame out.mp3
My package medipack is a very simple command-line wrapper over ffmpeg.
You can trim your file using either of these commands:
medipack trim input.mp3 -s 00:00 -e 00:30 -o output.mp3
medipack trim input.mp3 -s 00:00 -t 00:30 -o output.mp3
You can view the options of the trim subcommand with:
srb@srb-pc:$ medipack trim -h
usage: medipack trim [-h] [-s START] [-e END | -t TIME] [-o OUTPUT] [inp]
positional arguments:
inp input video file ex: input.mp4
optional arguments:
-h, --help show this help message and exit
-s START, --start START
start time for cutting in format hh:mm:ss or mm:ss
-e END, --end END end time for cutting in format hh:mm:ss or mm:ss
-t TIME, --time TIME clip duration in format hh:mm:ss or mm:ss
-o OUTPUT, --output OUTPUT
You could also explore the other options using medipack --help:
srb@srb-pc:$ medipack --help
usage: medipack.py [-h] [-v] {trim,crop,resize,extract} ...
positional arguments:
{trim,crop,resize,extract}
optional arguments:
-h, --help show this help message and exit
-v, --version Display version number
You may visit my repo https://github.com/srbcheema1/medipack and check out the examples in the README.