I created an FFmpeg pipeline to stream video frames to an RTSP server. For testing, I created a synthetic video where each frame is a green number on a dark background. The video plays on my screen, but it does not stream to the server; I get the error "Unable to find a suitable output format for 'rtsp://10.0.0.6:8554/mystream'".
My code is below; it is adapted from the answer to "How to stream frames from OpenCV C++ code to Video4Linux or ffmpeg?".
#include <cstdio>
#include <string>
#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

int main()
{
    int width = 720;
    int height = 1280;
    int fps = 30;

    FILE* pipeout = _popen("ffmpeg -f rawvideo -r 30 -video_size 720x1280 -pixel_format bgr24 -i pipe: -vcodec libx264 -crf 24 -pix_fmt yuv420p rtsp://10.0.0.6:8554/mystream", "w");

    for (int i = 0; i < 100; i++)
    {
        Mat frame = Mat(height, width, CV_8UC3);
        frame = Scalar(60, 60, 60); // Fill background with dark gray
        putText(frame, to_string(i + 1), Point(width / 2 - 50 * (int)(to_string(i + 1).length()), height / 2 + 50), FONT_HERSHEY_DUPLEX, 5, Scalar(30, 255, 30), 10); // Draw a green number
        imshow("frame", frame);
        waitKey(1);

        fwrite(frame.data, 1, width * height * 3, pipeout);
    }

    // Flush and close the output pipe
    fflush(pipeout);
    _pclose(pipeout); // Windows
    return 0;
}
When I change -f rawvideo to -f rtsp in the FFmpeg command, I no longer get the error, but the program just displays the first frame on the screen and seems to get stuck. Is there a wrong parameter in the pipeline? When I change the RTSP URL to a file name such as output.mkv, it works perfectly and saves the video to the file.
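For what it's worth, a plausible fix (an untested sketch): the error appears because FFmpeg cannot infer an output muxer from an rtsp:// URL, so the format must be named explicitly with a second -f option placed just before the output URL. The input's -f rawvideo has to stay (replacing it makes FFmpeg try to parse the raw pipe as RTSP, which would explain the hang). -rtsp_transport tcp is an optional robustness tweak, and an RTSP server must already be listening at the target address:

FILE* pipeout = _popen(
    "ffmpeg -f rawvideo -r 30 -video_size 720x1280 -pixel_format bgr24 -i pipe: "
    "-vcodec libx264 -crf 24 -pix_fmt yuv420p "
    "-f rtsp -rtsp_transport tcp rtsp://10.0.0.6:8554/mystream", // -f rtsp selects the RTSP muxer explicitly
    "wb"); // binary mode, since raw frames are piped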
I use this example to decode an MPEG-1 video.
When decoding starts, I get this log output (every 3 to 10 frames):
[mpeg1video @ 0x5626caf74e40] ac-tex damaged at 39 15
[mpeg1video @ 0x5626caf74e40] Warning MVs not available
[mpeg1video @ 0x5626caf74e40] concealing 405 DC, 405 AC, 405 MV errors in P frame
and the decoded frames come out visibly corrupted.
I tried to make RGB from YUV using OpenCV, but the RGB result is the same:
cv::Size actual_size(frame->width, frame->height);
cv::Size half_size(frame->width/2, frame->height/2);
cv::Mat y(actual_size, CV_8UC1, frame->data[0]);
cv::Mat u(half_size, CV_8UC1, frame->data[1]);
cv::Mat v(half_size, CV_8UC1, frame->data[2]);
cv::Mat u_resized, v_resized;
cv::resize(u, u_resized, actual_size, 0, 0, cv::INTER_NEAREST); //repeat u values 4 times
cv::resize(v, v_resized, actual_size, 0, 0, cv::INTER_NEAREST); //repeat v values 4 times
cv::Mat yuv;
std::vector<cv::Mat> yuv_channels = { y, u_resized, v_resized };
cv::merge(yuv_channels, yuv);
cv::Mat bgr;
cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR);
cv::imshow("x",bgr);
cv::waitKey(1000/25);
The problem was solved by using an MPEG-2 video and AV_CODEC_ID_MPEG2VIDEO:
codec = avcodec_find_decoder(AV_CODEC_ID_MPEG2VIDEO);
I need to store multiple encoded frames in memory.
I am using cv::imencode(".jpg", ...) to compress them, storing the encoded images in std::list<std::vector<u_char>> compressed_images - a list of compressed images.
I want to create a video from compressed_images, but currently I must use cv::imdecode() to decode all the images back to cv::Mat and then use cv::VideoWriter to save them to an MJPEG video.
Can I skip cv::imdecode(), or is there another solution that avoids encoding twice?
You may pipe the encoded images to FFmpeg.
According to the following post, you can "simply mux the JPEG images to make a video".
In case the frames are in memory, you can write the encoded images to an input pipe of FFmpeg.
Instead of -f image2, use the -f image2pipe format flag.
Implementing the solution in C++ is too difficult for me.
I have implemented a Python code sample.
The code sample:
Builds a list of 100 encoded frames (a green frame counter).
Pipes the encoded frames to an ffmpeg sub-process.
Writes the encoded images to the stdin input stream of the sub-process.
Here is the Python code sample:
import numpy as np
import cv2
import subprocess as sp

# Generate 100 synthetic JPEG encoded images in memory:
###############################################################################
# List of JPEG encoded frames.
jpeg_frames = []

width, height, n_frames = 640, 480, 100  # 100 frames, resolution 640x480

for i in range(n_frames):
    img = np.full((height, width, 3), 60, np.uint8)
    cv2.putText(img, str(i+1), (width//2-100*len(str(i+1)), height//2+100), cv2.FONT_HERSHEY_DUPLEX, 10, (30, 255, 30), 20)  # Green number

    # JPEG encode img into jpeg_img
    _, jpeg_img = cv2.imencode('.JPEG', img)

    # Append encoded image to list.
    jpeg_frames.append(jpeg_img)
###############################################################################

# FFmpeg input PIPE: JPEG encoded images
# FFmpeg output AVI file encoded with MJPEG codec.
# https://video.stackexchange.com/questions/7903/how-to-losslessly-encode-a-jpg-image-sequence-to-a-video-in-ffmpeg
process = sp.Popen('ffmpeg -y -f image2pipe -r 10 -i pipe: -codec copy out.avi', stdin=sp.PIPE)

# Iterate the list of encoded frames and write them to process.stdin
for jpeg_img in jpeg_frames:
    process.stdin.write(jpeg_img)

# Close and flush stdin
process.stdin.close()

# Wait one more second and terminate the sub-process
try:
    process.wait(1)
except sp.TimeoutExpired:
    process.kill()
Update: C++ implementation:
#include "opencv2/opencv.hpp"
#include "opencv2/highgui.hpp"
#ifdef _MSC_VER
#include <Windows.h> //For Sleep(1000)
#endif
#include <stdio.h>
int main()
{
int width = 640;
int height = 480;
int n_frames = 100;
//Generate 100 synthetic JPEG encoded images in memory:
//////////////////////////////////////////////////////////////////////////
std::list<std::vector<uchar>> jpeg_frames;
for (int i = 0; i < n_frames; i++)
{
cv::Mat img = cv::Mat(height, width, CV_8UC3);
img = cv::Scalar(60, 60, 60);
cv::putText(img, std::to_string(i + 1), cv::Point(width / 2 - 100 * (int)(std::to_string(i + 1).length()), height / 2 + 100), cv::FONT_HERSHEY_DUPLEX, 10, cv::Scalar(30, 255, 30), 20); // Green number
//cv::imshow("img", img);cv::waitKey(1);
std::vector<uchar> jpeg_img;
cv::imencode(".JPEG", img, jpeg_img);
jpeg_frames.push_back(jpeg_img);
}
//////////////////////////////////////////////////////////////////////////
//In Windows (using Visual Studio) we need to use _popen and in Linux popen
#ifdef _MSC_VER
//ffmpeg.exe must be in the system path (or in the working directory)
FILE *pipeout = _popen("ffmpeg -y -f image2pipe -r 10 -i pipe: -codec copy out.avi", "wb"); //For Windows use "wb"
#else
//https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-video-files-in-c-using-ffmpeg-part-2-video/
FILE *pipeout = popen("ffmpeg -y -f image2pipe -r 10 -i pipe: -codec copy out.avi", "w"); //For Linux use "w"
//In case ffmpeg is not in the execution path, you may use full path:
//popen("/usr/bin/ffmpeg -y -f image2pipe -r 10 -i pipe: -codec copy out.avi", "w");
#endif
std::list<std::vector<uchar>>::iterator it;
//Iterate list of encoded frames and write the encoded frames to pipeout
for (it = jpeg_frames.begin(); it != jpeg_frames.end(); ++it)
{
std::vector<uchar> jpeg_img = *it;
// Write this frame to the output pipe
fwrite(jpeg_img.data(), 1, jpeg_img.size(), pipeout);
}
// Flush and close input and output pipes
fflush(pipeout);
#ifdef _MSC_VER
_pclose(pipeout);
#else
pclose(pipeout);
#endif
//It looks like we need to wait one more second at the end.
Sleep(1000);
return 0;
}
I am trying to pipe OpenCV frames to ffmpeg using the rawvideo format, which should accept the input as BGRBGRBGR... Encoding the frames before piping is not an option.
cv::Mat frame;
cv::VideoCapture cap("cap.avi");

while (1)
{
    cap >> frame;
    if (!frame.data) break;

    // some processing

    cv::Mat array = frame.reshape(0, 1); // to make continuous
    std::string output((char*) array.data, array.total() * array.elemSize());
    std::cout << output;
}
with the command
cap.exe | ffplay -f rawvideo -pixel_format bgr24 -video_size 1920x1080 -framerate 10 -i -
gives a distorted, garbled result.
I think the problem is not related to ffmpeg/ffplay; something is wrong with my frame-to-raw conversion.
How do I convert a Mat with 3 channels to the rawvideo bgr24 format?
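A plausible fix, sketched under the assumption that the distortion comes from stdout being in text mode on Windows: text mode silently rewrites every 0x0A byte as 0x0D 0x0A, which corrupts raw BGR data. Switching stdout to binary mode and writing with an explicit byte count avoids both that and any formatting done by operator<<:

#include <cstdio>
#ifdef _WIN32
#include <io.h>
#include <fcntl.h>
#endif
#include "opencv2/opencv.hpp"

int main()
{
#ifdef _WIN32
    _setmode(_fileno(stdout), _O_BINARY); // stop "\n" -> "\r\n" translation of the raw bytes
#endif
    cv::VideoCapture cap("cap.avi"); // file name carried over from the question
    cv::Mat frame;
    while (cap.read(frame))
    {
        if (!frame.isContinuous())
            frame = frame.clone(); // guarantee one contiguous BGRBGR... block

        fwrite(frame.data, 1, frame.total() * frame.elemSize(), stdout); // raw bgr24 bytes
    }
    fflush(stdout);
    return 0;
}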
I am trying to scale a decoded YUV420p frame (1018x700) to RGBA via sws_scale. I am saving the data to a raw video file and then playing the raw video using ffplay to see the result.
Here is my code:
sws_ctx = sws_getContext(video_dec_ctx->width, video_dec_ctx->height, AV_PIX_FMT_YUV420P,
                         video_dec_ctx->width, video_dec_ctx->height, AV_PIX_FMT_BGR32,
                         SWS_LANCZOS | SWS_ACCURATE_RND, 0, 0, 0);

ret = avcodec_decode_video2(video_dec_ctx, yuvframe, got_frame, &pkt);
if (ret < 0) {
    std::cout << "Error in decoding" << std::endl;
    return ret;
} else {
    //the source and destination heights and widths are the same
    int sourceX = video_dec_ctx->width;
    int sourceY = video_dec_ctx->height;
    int destX = video_dec_ctx->width;
    int destY = video_dec_ctx->height;

    //declare the destination frame
    AVFrame avFrameRGB;
    avFrameRGB.linesize[0] = destX * 4;
    avFrameRGB.data[0] = (uint8_t*)malloc(avFrameRGB.linesize[0] * destY);

    //scale the frame to avFrameRGB
    sws_scale(sws_ctx, yuvframe->data, yuvframe->linesize, 0, yuvframe->height, avFrameRGB.data, avFrameRGB.linesize);

    //write to file
    fwrite(avFrameRGB.data[0], 1, video_dst_bufsize, video_dst_file);
}
Here is the result without scaling (i.e. in YUV420p format).
Here is the result after scaling, while playing using ffplay (i.e. in RGBA format).
I run ffplay using the following command ('video' is the raw video file):
ffplay -f rawvideo -pix_fmt bgr32 -video_size 1018x700 video
What should I fix to make the scaling to RGB32 work correctly?
I found the solution: the problem was that I was not using the correct buffer size when writing to the file.
fwrite(avFrameRGB.data[0], 1, video_dst_bufsize, video_dst_file);
The variable video_dst_bufsize was taken from the return value of the allocation of the YUV frame:
video_dst_bufsize = av_image_alloc(yuvframe.data, yuvframe.linesize, destX, destY, AV_PIX_FMT_YUV420P, 1);
The solution is to allocate the RGB frame the same way, take the return value, and use it in the fwrite statement:
video_dst_bufsize_RGB = av_image_alloc(avFrameRGB.data, avFrameRGB.linesize, destX, destY, AV_PIX_FMT_BGR32, 1);
fwrite(avFrameRGB.data[0], 1, video_dst_bufsize_RGB, video_dst_file);
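For reference, a minimal sketch of the corrected path, assuming sws_ctx, yuvframe, video_dec_ctx, and video_dst_file are set up as in the question; av_image_alloc fills data and linesize with properly aligned planes and returns the exact number of bytes to write:

AVFrame *rgbFrame = av_frame_alloc();
int video_dst_bufsize_RGB = av_image_alloc(rgbFrame->data, rgbFrame->linesize,
                                           video_dec_ctx->width, video_dec_ctx->height,
                                           AV_PIX_FMT_BGR32, 1);

sws_scale(sws_ctx, yuvframe->data, yuvframe->linesize, 0, video_dec_ctx->height,
          rgbFrame->data, rgbFrame->linesize);

// write exactly the number of bytes the allocation reported
fwrite(rgbFrame->data[0], 1, video_dst_bufsize_RGB, video_dst_file);

// cleanup when done
av_freep(&rgbFrame->data[0]);
av_frame_free(&rgbFrame);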
I'm trying to modify and write some video using OpenCV 2.4.6.1 with the following code:
cv::VideoCapture capture( video_filename );

// Check if the capture object successfully initialized
if ( !capture.isOpened() )
{
    printf( "Failed to load video, exiting.\n" );
    return -1;
}

cv::Mat frame, cropped_img;
cv::Rect ROI( OFFSET_X, OFFSET_Y, WIDTH, HEIGHT );

int fourcc = static_cast<int>(capture.get(CV_CAP_PROP_FOURCC));
double fps = 30;
cv::Size frame_size( RADIUS, (int) 2*PI*RADIUS );

video_filename = "test.avi";
cv::VideoWriter writer( video_filename, fourcc, fps, frame_size );

if ( !writer.isOpened() && save )
{
    printf("Failed to initialize video writer, unable to save video!\n");
}

while(true)
{
    if ( !capture.read(frame) )
    {
        printf("Failed to read next frame, exiting.\n");
        break;
    }

    // select the region of interest in the frame
    cropped_img = frame( ROI );

    // display the image and wait
    imshow("cropped", cropped_img);

    // if we are saving video, write the unwrapped image
    if (save)
    {
        writer.write( cropped_img );
    }

    char key = cv::waitKey(30);
}
When I try to run the output video 'test.avi' with VLC I get the following error: avidemux error: no key frame set for track 0. I'm using Ubuntu 13.04, and I've tried using videos encoded with MPEG-4 and libx264. I think the fix should be straightforward but can't find any guidance. The actual code is available at https://github.com/benselby/robot_nav/tree/master/video_unwrap. Thanks in advance!
[PYTHON] Apart from a resolution mismatch, there can also be a frames-per-second mismatch. In my case the resolution was set correctly, but the problem was with the fps. The VideoCapture object reported that it was reading at 30.0 frames per second, yet when I set the fps of the VideoWriter object to 30.0, VLC threw the same error. Setting it to the integer 30 instead avoids the error.
P.S. You can check the resolution and the fps at which you are recording by using cap.get(3) for the width, cap.get(4) for the height, and cap.get(5) for the fps inside the capturing while/for loop.
The full code is as follows:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

# Define the codec and create a VideoWriter object
fourcc = cv2.VideoWriter_fourcc('X', 'V', 'I', 'D')

# 30.0 in the line below doesn't work, while 30 does work.
out = cv2.VideoWriter('output.mp4', fourcc, 30, (640, 480))

while True:
    ret, frame = cap.read()
    colored_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
    print('Width = ', cap.get(3), ' Height = ', cap.get(4), ' fps = ', cap.get(5))
    out.write(colored_frame)
    cv2.imshow('frame', colored_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
The full documentation (C++) of all the properties that can be checked is available under the propId parameter in the OpenCV documentation.
This appears to be an issue of size mismatch between the frames written and the VideoWriter object opened. I was running into this issue when trying to write a series of resized images from my webcam into a video output. When I removed the resizing step and just grabbed the size from an initial test frame, everything worked perfectly.
To fix my resizing code, I essentially ran a single test frame through my processing and then pulled its size when creating the VideoWriter object:
#include <cassert>
#include <iostream>
#include <time.h>
#include "opencv2/opencv.hpp"

using namespace cv;

int main()
{
    VideoCapture cap(0);
    assert(cap.isOpened());

    Mat testFrame;
    cap >> testFrame;

    Mat testDown;
    resize(testFrame, testDown, Size(), 0.5, 0.5, INTER_NEAREST);

    bool ret = imwrite("test.png", testDown);
    assert(ret);

    Size outSize = Size(testDown.cols, testDown.rows);
    VideoWriter outVid("test.avi", CV_FOURCC('M','P','4','2'), 1, outSize, true);
    assert(outVid.isOpened());

    for (int i = 0; i < 10; ++i) {
        Mat frame;
        cap >> frame;
        std::cout << "Grabbed frame" << std::endl;

        Mat down;
        resize(frame, down, Size(), 0.5, 0.5, INTER_NEAREST);
        //bool ret = imwrite("test.png", down);
        //assert(ret);

        outVid << down;
        std::cout << "Wrote frame" << std::endl;

        // sleep for one second between frames
        struct timespec tim, tim2;
        tim.tv_sec = 1;
        tim.tv_nsec = 0;
        nanosleep(&tim, &tim2);
    }
}
My guess is that your problem is in the size calculation:
cv::Size frame_size( RADIUS, (int) 2*PI*RADIUS );
I'm not sure where your frames are coming from (i.e. how the capture is set up), but the size likely gets messed up in rounding or somewhere else. I would suggest doing something similar to my solution above; a sketch of that idea applied to your code follows.
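This is a hypothetical adaptation, not tested against the question's full program: read one test frame, crop it with the same ROI, and take the writer's frame size from the actual cropped image instead of computing it from RADIUS.

cv::Mat testFrame;
capture.read(testFrame);
cv::Mat testCrop = testFrame( ROI ); // same region that will be written later

// take the size from a real processed frame, not from a formula
cv::Size frame_size( testCrop.cols, testCrop.rows );
cv::VideoWriter writer( "test.avi", fourcc, fps, frame_size );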