Send an H.264 encoded stream through RTMP using FFmpeg - C++

I followed this to encode a sequence of images into an H.264 video.
Here is the output part of my code:
int srcstride = outwidth * 4;  // RGBA input: 4 bytes per pixel
sws_scale(convertCtx, src_data, &srcstride, 0, outheight, pic_in.img.plane, pic_in.img.i_stride);
x264_nal_t* nals;
int i_nals;
int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
if (frame_size > 0) {  // negative return values indicate an error
    // x264 lays out all NALs of one frame contiguously, so a single
    // write of frame_size bytes from the first payload covers the frame
    fwrite(nals[0].p_payload, frame_size, 1, fp);
}
This runs in a loop that processes frames and writes them to a file.
Now I'm trying to stream these encoded frames over RTMP. As far as I know, the container format for RTMP is FLV, so as a trial I used this command line:
ffmpeg -i test.h264 -vcodec copy -f flv rtmp://localhost:1935/hls/test
This works well for streaming an H.264 encoded video file.
But how can I implement this in C++ and stream the frames as they are generated, just like what I did to stream my FaceTime camera:
ffmpeg -f avfoundation -pix_fmt uyvy422 -video_size 1280x720 -framerate 30 -i "1:0" -pix_fmt yuv420p -vcodec libx264 -preset veryfast -acodec libvo_aacenc -f flv -framerate 30 rtmp://localhost:1935/hls/test
This may be a common and practical topic, but I've been stuck here for days and really need some relevant experience. Thank you!
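One possible approach (a sketch, not a verified solution): mux the x264 output with libavformat instead of writing it to a file with fwrite(). All error checking is omitted; frame_index is an assumed frame counter and 30 fps is assumed, so adjust both to your setup.

extern "C" {
#include <libavformat/avformat.h>
}
#include <cstring>

// One-time setup, after the x264 encoder is created
AVFormatContext* octx = nullptr;
avformat_alloc_output_context2(&octx, nullptr, "flv", "rtmp://localhost:1935/hls/test");

AVStream* st = avformat_new_stream(octx, nullptr);
st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
st->codecpar->codec_id   = AV_CODEC_ID_H264;
st->codecpar->width      = outwidth;
st->codecpar->height     = outheight;

// FLV needs the SPS/PPS up front; x264_encoder_headers() returns them
// as contiguous Annex-B NALs and the muxer converts them internally
x264_nal_t* hdr; int n_hdr;
int hdr_size = x264_encoder_headers(encoder, &hdr, &n_hdr);
st->codecpar->extradata = (uint8_t*)av_mallocz(hdr_size + AV_INPUT_BUFFER_PADDING_SIZE);
memcpy(st->codecpar->extradata, hdr[0].p_payload, hdr_size);
st->codecpar->extradata_size = hdr_size;

avio_open(&octx->pb, "rtmp://localhost:1935/hls/test", AVIO_FLAG_WRITE);
avformat_write_header(octx, nullptr);

// In the encode loop, replace the fwrite() with a muxed packet
AVPacket* pkt = av_packet_alloc();
pkt->data = nals[0].p_payload;   // NALs of one frame are contiguous
pkt->size = frame_size;
pkt->stream_index = st->index;
pkt->pts = pkt->dts = av_rescale_q(frame_index, AVRational{1, 30}, st->time_base);
if (pic_out.b_keyframe)
    pkt->flags |= AV_PKT_FLAG_KEY;
av_interleaved_write_frame(octx, pkt);
av_packet_free(&pkt);

// When the stream ends
av_write_trailer(octx);
avio_closep(&octx->pb);
avformat_free_context(octx);

av_interleaved_write_frame() pushes each packet to the RTMP server as it is muxed, so the stream goes out while the frames are still being generated.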

Related

fwrite optimisation for video streaming

I have created an FFmpeg pipeline to stream video frames to an RTSP server. I initiate my pipeline as follows:
FILE* openPipeLine = _popen("ffmpeg -f rawvideo -r 10 -video_size 1280x720 -pixel_format bgr24 -i pipe: -vcodec libx264 -crf 25 -pix_fmt yuv420p -f rtsp rtsp://localHost:8554/mystream", "wb");
I then have a loop where I process the frames and end up with a Mat variable that contains the 720x1280 processed image. I then use the fwrite function to write the frame to the server, as shown below:
string filename = "file.mp4";
VideoCapture capture(filename);
Mat frame;
for ( ; ; )
{
    capture >> frame;
    if (frame.empty())
        break;
    fwrite(frame.data, 1, 1280 * 720 * 3, openPipeLine);
}
Everything runs perfectly, but each fwrite call takes approximately 0.1 seconds. I need it to be far more efficient. Is there a way to store the frames and batch-write a group of them instead of calling fwrite on each iteration? Perhaps I can change the number 1 in the second argument to fwrite?
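Swapping the size and count arguments of fwrite would not change how much data is copied per call. What can help is batching: accumulate several frames in one buffer and flush them with a single fwrite. A sketch under those assumptions (it relies on frame.data being continuous, which frames from VideoCapture normally are):

#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>
using namespace cv;

// openPipeLine and capture opened as in the question
const size_t frameBytes = 1280 * 720 * 3;  // bgr24: 3 bytes per pixel
const int batchSize = 10;                  // one second of video at -r 10
std::vector<unsigned char> batch;
batch.reserve(frameBytes * batchSize);

Mat frame;
for ( ; ; )
{
    capture >> frame;
    if (frame.empty())
        break;
    batch.insert(batch.end(), frame.data, frame.data + frameBytes);
    if (batch.size() >= frameBytes * batchSize)
    {
        fwrite(batch.data(), 1, batch.size(), openPipeLine);  // one large write
        batch.clear();
    }
}
if (!batch.empty())
    fwrite(batch.data(), 1, batch.size(), openPipeLine);  // flush the tail

Alternatively, enlarging the stdio buffer with setvbuf(openPipeLine, nullptr, _IOFBF, frameBytes * batchSize) right after _popen achieves a similar effect without managing a buffer by hand.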

Convert raw RGB32 file to JPEG or PNG using FFmpeg

Context
I used a C++ program to write raw bytes to a file (image.raw) in RGB32 format:
R G B A R G B A R G B A ...
and I want to be able to view it in some way. I have the dimensions of the image.
My tools are limited to command line commands (e.g. ffmpeg). I have visited the ffmpeg website for instructions, but it deals more with converting videos to images.
Questions
Is it possible to turn this file into a viewable file type (e.g. .jpeg, .png) using ffmpeg? If so, how would I do it?
If it's not possible, is there a way I can use another command?
If that's still not viable, is there any way I can manipulate the RGB32 bytes inside a C++ program to make it more suitable without the use of external libraries? I also don't want to encode .jpeg myself like this.
Use the rawvideo demuxer:
ffmpeg -f rawvideo -pixel_format rgba -video_size 320x240 -i input.raw output.png
Since there is no header specifying the video parameters, you must specify them, as shown above, in order to decode the data correctly.
See ffmpeg -pix_fmts for a list of supported input pixel formats which may help you choose the appropriate -pixel_format.
get a single frame from raw RGBA data
ffmpeg -y -f rawvideo -pix_fmt rgba -ss 00:01 -r 1 -s 320x240 -i input.raw -frames:v 1 output.png
-y overwrite output
-r input framerate (placed before -i)
-ss skip to this time
-frames:v number of frames to output
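As for the last question, manipulating the bytes in C++ without external libraries: one option is to prepend a PPM (P6) header, which many image viewers open directly. A minimal sketch, assuming the 320x240 dimensions from the commands above and omitting error checks:

#include <cstdio>
#include <vector>

int main()
{
    const int width = 320, height = 240;  // replace with the real dimensions
    std::vector<unsigned char> rgba(width * height * 4);
    FILE* in = std::fopen("image.raw", "rb");
    std::fread(rgba.data(), 1, rgba.size(), in);
    std::fclose(in);

    // P6 PPM is raw RGB after a tiny text header, so dropping the
    // alpha byte of each RGBA pixel is the only conversion needed
    FILE* out = std::fopen("image.ppm", "wb");
    std::fprintf(out, "P6\n%d %d\n255\n", width, height);
    for (size_t i = 0; i < rgba.size(); i += 4)
        std::fwrite(&rgba[i], 1, 3, out);
    std::fclose(out);
}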

How can I make all frames in my video become i-frames?

Now I use ffmpeg to encode my video in C++. I need to be able to decode an H.264 frame without any other frames, so I need to make every frame in my video an I-frame. But I don't know which parameters to set to achieve this. How can I make all of my video's frames I-frames?
ffmpeg -i yourfile -c:v libx264 -x264opts keyint=1 out.mp4
-x264opts keyint=1 sets the keyframe interval to 1 (I believe you can also use -g 1). You probably want to set other rate control parameters too, e.g. -crf 10 (for quality) and -preset veryslow (for speed); see this page.
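Since the question mentions encoding from C++, the equivalent when configuring libavcodec directly would look roughly like this (a sketch, assuming an FFmpeg build with libx264 and omitting error checks):

extern "C" {
#include <libavcodec/avcodec.h>
}

const AVCodec* codec = avcodec_find_encoder_by_name("libx264");
AVCodecContext* ctx = avcodec_alloc_context3(codec);
ctx->width        = 1280;               // assumed dimensions and frame rate
ctx->height       = 720;
ctx->time_base    = AVRational{1, 30};
ctx->pix_fmt      = AV_PIX_FMT_YUV420P;
ctx->gop_size     = 1;                  // keyframe interval of 1, like keyint=1
ctx->max_b_frames = 0;                  // no B-frames, so every frame is an I-frame
avcodec_open2(ctx, codec, nullptr);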

FFmpeg: writing audio and video to a stream in an MPEG format

I had the commands for exporting a video stream to an MPEG file working correctly with the following code:
ffmpeg -r 24 -pix_fmt rgba -s 1280x720 -f rawvideo -y -i - -vf vflip -vcodec mpeg1video -qscale 4 -bufsize 500KB -maxrate 5000KB OUTPUT_FILE
Now, I wanted to add the commands so that audio can be used as well since there's no option for that right now.
I've edited the previous command to the next one:
ffmpeg -r 24 -pix_fmt rgba -s 1280x720 -f rawvideo -y -i - -f s16le -ac 1 -ar 44100 -i - -acodec pcm_s16le -ac 1 -b:a 320k -ar 44100 -vf vflip -vcodec mpeg1video -qscale 4 -bufsize 500KB -maxrate 5000KB OUTPUT_FILE
So, as you can see, I added a new input with the settings for the audio I'm going to be feeding in (I'm going to test this with the values of a sine wave).
I'm writing the data to the file like this:
// Write a frame to the ffmpeg stream
fwrite(frame, sizeof(unsigned char*) * frameWidth * frameHeight, 1, ffmpeg);
// Write multiple sound samples per written frame
for (int t = 0; t < 44100/24; ++t)
    fwrite(&audio, sizeof(short int), 1, ffmpeg);
The first line is the one that writes only the video (where the frame object is a render-to-texture from the video I'm inputting).
After that I try to add the audio. I use a for-loop so I can write multiple samples per video frame (because otherwise there would only be 24 audio samples per second).
This does render, with a couple of issues:
The rendered video shows green flashes.
The video slides across the screen: for example, if it slides 200 pixels to the right, those pixels get rendered on the other side. A bit of the frame that should be at the bottom is also rendered at the top (so the frame slides down as well, but this offset is constant; it doesn't move over time).
I can't figure out where my mistake is. I've tried multiple codecs and different orderings of the options, but it stays the same or gets worse.
Thanks in advance
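A detail worth checking in the snippet above (an editor's observation, not part of the original post): sizeof(unsigned char*) is the size of a pointer, typically 8 bytes, while an RGBA pixel is only 4 bytes, so each call writes twice as much data as one frame contains. Misaligned frame boundaries like that are consistent with the sliding and color artifacts described. The write for 4-byte RGBA pixels should be:

// 4 bytes per RGBA pixel, not sizeof(unsigned char*), which is 8 on 64-bit systems
fwrite(frame, 4 * frameWidth * frameHeight, 1, ffmpeg);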

How to convert YUV frames to a video?

How do I convert a set of YUV frames to a video, and later convert the video back to YUV frames, without any loss, using C? (I don't want to convert to RGB in between.)
If you have a raw YUV file, you need to tell ffmpeg which pixel format and subsampling are used. YUV has no header, so you also need to specify the width and height of the data.
The following ffmpeg command encodes 1080p YUV 4:2:0 to H.264 using the x264 encoder and places the result in an MP4 container. This operation is, however, not lossless.
ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1920x1080 -r 25 -i input.yuv \
-c:v libx264 output.mp4
To get the YUV-frames back again, do
ffmpeg -i output.mp4 frames.yuv
If you are looking for a lossless encoder, try HuffYUV or FFV1.
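For example, a lossless round trip with FFV1 (a sketch following the same pattern as the commands above; since FFV1 is lossless, the recovered frames should be bit-identical to the input):
ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1920x1080 -r 25 -i input.yuv -c:v ffv1 output.mkv
ffmpeg -i output.mkv frames.yuv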