Pipe opencv frames to ffmpeg - c++

I am trying to pipe OpenCV frames to ffmpeg in rawvideo format, which should accept the input as BGRBGRBGR... Encoding the frame before piping is not an option.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

cv::Mat frame;
cv::VideoCapture cap("cap.avi");

while (1)
{
    cap >> frame;
    if (!frame.data) break;

    // some processing

    cv::Mat array = frame.reshape(0, 1); // flatten to a single row (requires a continuous Mat)
    std::string output((char*) array.data, array.total() * array.elemSize());
    std::cout << output; // write the raw BGR bytes to stdout
}
with the command
cap.exe | ffplay -f rawvideo -pixel_format bgr24 -video_size 1920x1080 -framerate 10 -i -
gives a distorted result.
I think the problem is not related to ffmpeg/ffplay; something must be wrong with my frame-to-raw conversion.
How do I convert a Mat with 3 channels to rawvideo bgr24 format?
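One likely culprit, and a minimal sketch of the usual fix: if this runs on Windows, std::cout writes in text mode and silently expands every 0x0A byte to 0x0D 0x0A, which corrupts raw pixel data. Switching stdout to binary mode and writing with fwrite avoids this (the _setmode call is Windows-only; on Linux stdout is already binary). Also check that cap.avi really is 1920x1080; a mismatch with the -video_size passed to ffplay produces exactly this kind of diagonal smearing.

#include <opencv2/opencv.hpp>
#include <cstdio>
#ifdef _MSC_VER
#include <io.h>
#include <fcntl.h>
#endif

int main()
{
#ifdef _MSC_VER
    // Stop the Windows CRT from translating 0x0A bytes inside the pixel data
    _setmode(_fileno(stdout), _O_BINARY);
#endif
    cv::VideoCapture cap("cap.avi");
    cv::Mat frame;

    while (cap.read(frame))
    {
        // Frames returned by VideoCapture are continuous, so the whole buffer
        // can be written in one call; no reshape is needed.
        fwrite(frame.data, 1, frame.total() * frame.elemSize(), stdout);
    }
    return 0;
}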

Related

FFMPEG Pipeline

I created an FFMPEG pipeline so that I can stream video frames to an RTSP server. I created a synthetic video for testing where each frame is a green number on a black background. The video plays on my screen but it does not stream to the server, because I get the error "Unable to find a suitable output format for 'rtsp://10.0.0.6:8554/mystream'".
My code is below. The source code is taken from the answer: How to stream frames from OpenCV C++ code to Video4Linux or ffmpeg?
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <string>

using namespace cv;
using namespace std;

int main() {
    int width = 720;
    int height = 1280;
    int fps = 30;

    FILE* pipeout = _popen("ffmpeg -f rawvideo -r 30 -video_size 720x1280 -pixel_format bgr24 -i pipe: -vcodec libx264 -crf 24 -pix_fmt yuv420p rtsp://10.0.0.6:8554/mystream", "w");

    for (int i = 0; i < 100; i++)
    {
        Mat frame = Mat(height, width, CV_8UC3);
        frame = Scalar(60, 60, 60); // Fill background with dark gray
        putText(frame, to_string(i + 1), Point(width / 2 - 50 * (int)(to_string(i + 1).length()), height / 2 + 50), FONT_HERSHEY_DUPLEX, 5, Scalar(30, 255, 30), 10); // Draw a green number

        imshow("frame", frame);
        waitKey(1);

        fwrite(frame.data, 1, width * height * 3, pipeout); // write the raw BGR frame to ffmpeg's stdin
    }

    // Flush and close input and output pipes
    fflush(pipeout);
    _pclose(pipeout); // Windows

    return 0;
}
When I change -f rawvideo to -f rtsp in the FFMPEG command, I no longer get the error, but the program just displays the first frame on the screen and seems to get stuck. Is there a wrong parameter in the pipeline? When I change the RTSP URL to a file name such as output.mkv, it works perfectly and saves the video to the file.
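The original error happens because ffmpeg cannot infer a muxer from an rtsp:// URL, so the output format has to be named explicitly; the input's -f rawvideo must stay as it is, since replacing it tells ffmpeg to parse the piped frames as RTSP, which is why the program appears to hang. A likely fix is the same command with -f rtsp added before the output URL (the URL is the question's own; an RTSP server must already be listening at that address):

FILE* pipeout = _popen(
    "ffmpeg -f rawvideo -r 30 -video_size 720x1280 -pixel_format bgr24 -i pipe: "
    "-vcodec libx264 -crf 24 -pix_fmt yuv420p -f rtsp rtsp://10.0.0.6:8554/mystream", "w");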

Creating MJPEG video from multiple JPEG encoded images without using cv::imdecode()

I need to store multiple encoded frames in memory.
I am using cv::imencode(".jpg", ...) to compress the images and store them in std::list<std::vector<u_char>> compressed_images - a list of compressed images.
I want to create a video from compressed_images, but currently I must use cv::imdecode() to decode all the images back to cv::Mat, then use cv::VideoWriter to save them to an MJPEG video.
Can I skip cv::imdecode(), or is there another way to avoid encoding twice?
You may PIPE the encoded images to FFmpeg.
According to the following post, you can "simply mux the JPEG images to make a video".
In case the frames are in memory, you can write the encoded images to an input PIPE of FFmpeg.
Instead of -f image2, use the -f image2pipe format flag.
Implementing the solution in C++ is too difficult for me, so I have implemented a Python code sample.
The code sample:
- Builds a list of 100 encoded frames (green frame counter).
- Pipes the encoded frames to an ffmpeg sub-process.
- Writes the encoded images to the stdin input stream of the sub-process.
Here is the Python code sample:
import numpy as np
import cv2
import subprocess as sp

# Generate 100 synthetic JPEG encoded images in memory:
###############################################################################
# List of JPEG encoded frames.
jpeg_frames = []

width, height, n_frames = 640, 480, 100  # 100 frames, resolution 640x480

for i in range(n_frames):
    img = np.full((height, width, 3), 60, np.uint8)
    cv2.putText(img, str(i+1), (width//2-100*len(str(i+1)), height//2+100),
                cv2.FONT_HERSHEY_DUPLEX, 10, (30, 255, 30), 20)  # Green number

    # JPEG encode img into jpeg_img
    _, jpeg_img = cv2.imencode('.JPEG', img)

    # Append encoded image to list.
    jpeg_frames.append(jpeg_img)

###############################################################################

# FFmpeg input PIPE: JPEG encoded images
# FFmpeg output: AVI file encoded with MJPEG codec.
# https://video.stackexchange.com/questions/7903/how-to-losslessly-encode-a-jpg-image-sequence-to-a-video-in-ffmpeg
# Split the command into an argument list so it also runs on Linux (a plain string only works on Windows).
process = sp.Popen('ffmpeg -y -f image2pipe -r 10 -i pipe: -codec copy out.avi'.split(), stdin=sp.PIPE)

# Iterate the list of encoded frames and write each one to process.stdin
for jpeg_img in jpeg_frames:
    process.stdin.write(jpeg_img)

# Close and flush stdin
process.stdin.close()

# Wait up to one more second, then kill the sub-process if it has not exited
try:
    process.wait(1)
except sp.TimeoutExpired:
    process.kill()
Update: C++ implementation:
#include "opencv2/opencv.hpp"
#include "opencv2/highgui.hpp"
#ifdef _MSC_VER
#include <Windows.h> //For Sleep(1000)
#endif
#include <stdio.h>
int main()
{
int width = 640;
int height = 480;
int n_frames = 100;
//Generate 100 synthetic JPEG encoded images in memory:
//////////////////////////////////////////////////////////////////////////
std::list<std::vector<uchar>> jpeg_frames;
for (int i = 0; i < n_frames; i++)
{
cv::Mat img = cv::Mat(height, width, CV_8UC3);
img = cv::Scalar(60, 60, 60);
cv::putText(img, std::to_string(i + 1), cv::Point(width / 2 - 100 * (int)(std::to_string(i + 1).length()), height / 2 + 100), cv::FONT_HERSHEY_DUPLEX, 10, cv::Scalar(30, 255, 30), 20); // Green number
//cv::imshow("img", img);cv::waitKey(1);
std::vector<uchar> jpeg_img;
cv::imencode(".JPEG", img, jpeg_img);
jpeg_frames.push_back(jpeg_img);
}
//////////////////////////////////////////////////////////////////////////
//In Windows (using Visual Studio) we need to use _popen and in Linux popen
#ifdef _MSC_VER
//ffmpeg.exe must be in the system path (or in the working directory)
FILE *pipeout = _popen("ffmpeg -y -f image2pipe -r 10 -i pipe: -codec copy out.avi", "wb"); //For Windows use "wb"
#else
//https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-video-files-in-c-using-ffmpeg-part-2-video/
FILE *pipeout = popen("ffmpeg -y -f image2pipe -r 10 -i pipe: -codec copy out.avi", "w"); //For Linux use "w"
//In case ffmpeg is not in the execution path, you may use full path:
//popen("/usr/bin/ffmpeg -y -f image2pipe -r 10 -i pipe: -codec copy out.avi", "w");
#endif
std::list<std::vector<uchar>>::iterator it;
//Iterate list of encoded frames and write the encoded frames to pipeout
for (it = jpeg_frames.begin(); it != jpeg_frames.end(); ++it)
{
std::vector<uchar> jpeg_img = *it;
// Write this frame to the output pipe
fwrite(jpeg_img.data(), 1, jpeg_img.size(), pipeout);
}
// Flush and close input and output pipes
fflush(pipeout);
#ifdef _MSC_VER
_pclose(pipeout);
#else
pclose(pipeout);
#endif
//It looks like we need to wait one more second at the end.
Sleep(1000);
return 0;
}
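A note on the design: -codec copy only remuxes the already-encoded JPEG frames into the AVI container, so no second encode takes place, which is exactly what the question was trying to avoid. The result plays as MJPEG because an MJPEG stream is essentially a sequence of JPEG images in a container.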

Extracting KLV data from mp2 stream using C++ and ffmpeg

I have an mp2 stream that contains KLV metadata. I stored the KLV in a file using the ffmpeg command line:
ffmpeg -i input.mpg -map data-re -codec copy -f data output.klv
I now want to do this in C++. So I have the usual
FFMPEG setup …..
Then the main loop:
// Read frames
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream)
    {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        // Did we get a video frame?
        if (frameFinished)
        {
            // Convert the image from its native format to RGB
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                      pFrame->linesize, 0, pCodecCtx->height,
                      pFrameRGB->data, pFrameRGB->linesize);

            QImage myImage(pFrameRGB->data[0], pCodecCtx->width, pCodecCtx->height, QImage::Format_RGB888);
            QPixmap img(QPixmap::fromImage(myImage.scaled(ui->label->width(), ui->label->height(), Qt::KeepAspectRatio)));
            ui->label->setPixmap(img);
            QCoreApplication::processEvents();
        }
    }
    else // klv stream
    {
        // Dump the klv payload byte by byte
        qDebug() << packet.buf->size;
        for (int i = 0; i < packet.buf->size; i++)
        {
            qDebug() << packet.buf->data[i];
        }
    }
}
The resulting KLV output is different, so I must be doing something wrong when processing the packet. The frames are good and I'm viewing them in a Qt label, so my ffmpeg setup is working for images but not for the KLV data.
My bad: this code is working. I was comparing the int output to the ffmpeg output viewed in Notepad. When I opened the ffmpeg output in Notepad++ instead, I could make sense of it and it does correlate. :)
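For anyone comparing against the command-line output, a byte-for-byte check is less error-prone than eyeballing qDebug() integers. A minimal sketch, assuming the same pFormatCtx, packet, and videoStream setup as above, that appends every data packet to a file which can then be diffed against output.klv:

// Dump all non-video packets to a binary file for comparison with ffmpeg's output.klv
FILE *klvFile = fopen("output_cpp.klv", "wb");
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
    if (packet.stream_index != videoStream)
        fwrite(packet.data, 1, packet.size, klvFile); // raw KLV payload, as -codec copy writes it
    av_packet_unref(&packet);
}
fclose(klvFile);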

Open USB camera with OpenCV and stream to rtsp server

I have a Logitech C920 camera connected via USB to an NVIDIA TX1. I am trying to stream the camera feed over RTSP to a server while doing some computer vision in OpenCV. I managed to read H264 video from the USB camera in OpenCV:
#include <iostream>
#include "opencv/cv.h"
#include <opencv2/opencv.hpp>
#include "opencv/highgui.h"

using namespace cv;
using namespace std;

int main()
{
    Mat img;
    VideoCapture cap;
    int heightCamera = 720;
    int widthCamera = 1280;

    // Start video capture from port 0
    cap.open(0);

    // Check if we succeeded
    if (!cap.isOpened())
    {
        cout << "Unable to open camera" << endl;
        return -1;
    }

    // Set frame width and height
    cap.set(CV_CAP_PROP_FRAME_WIDTH, widthCamera);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, heightCamera);
    cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('X','2','6','4'));

    // Set camera FPS
    cap.set(CV_CAP_PROP_FPS, 30);

    while (true)
    {
        // Copy the current frame to an image
        cap >> img;

        // Show the video stream; exit on ESC
        imshow("Video stream", img);
        if (waitKey(1) == 27) break;
    }

    // Release video stream
    cap.release();

    return 0;
}
I have also streamed the USB camera to an RTSP server using ffmpeg:
ffmpeg -f v4l2 -input_format h264 -timestamps abs -video_size hd720 -i /dev/video0 -c:v copy -c:a none -f rtsp rtsp://10.52.9.104:45002/cameraTx1
I tried to google how to combine these two functions, i.e. open the USB camera in OpenCV and use OpenCV to stream H264 video over RTSP. However, all I can find is people trying to open an RTSP stream in OpenCV.
Has anyone successfully streamed H264 RTSP video using OpenCV with ffmpeg?
Best regards,
Sondre
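One workable pattern, sketched under the assumption that re-encoding on the TX1 is acceptable (OpenCV decodes the camera's H264 to BGR frames, so they have to be encoded again): capture frames in OpenCV, run the vision processing, then pipe the raw frames to an ffmpeg child process that encodes them and pushes to the RTSP server. The URL is the one from the question; an RTSP server must already be listening there.

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
    cap.set(CV_CAP_PROP_FPS, 30);

    // ffmpeg reads raw BGR frames from stdin, encodes to H264, and pushes to the RTSP server
    FILE *pipeout = popen("ffmpeg -f rawvideo -pixel_format bgr24 -video_size 1280x720 "
                          "-framerate 30 -i pipe: -c:v libx264 -preset ultrafast -pix_fmt yuv420p "
                          "-f rtsp rtsp://10.52.9.104:45002/cameraTx1", "w");
    if (!pipeout) return -1;

    cv::Mat img;
    while (cap.read(img))
    {
        // ... computer vision processing on img goes here ...
        fwrite(img.data, 1, img.total() * img.elemSize(), pipeout);
    }

    pclose(pipeout);
    return 0;
}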

Stereo camera OpenCV: read depth image correctly, 2 pixels encoded in 1

I have a stereo camera that gives an image in YUYV format with a resolution of 320 x 480, where each 16-bit pixel encodes two 8-bit pixels. I'm using OpenCV to get the image, but when I try to recover the real resolution of the image I don't get good results. I guess I'm missing how to properly split the 16 bits in two.
Using this and this I'm able to reconstruct an image, but it is still not the real one.
Mat frame;

unsigned int width  = cap.get(CV_CAP_PROP_FRAME_WIDTH);
unsigned int height = cap.get(CV_CAP_PROP_FRAME_HEIGHT);

// Buffer for the raw 2-bytes-per-pixel frame data
m_pDepthImgBuf = (unsigned char*)calloc(width*height*2, sizeof(unsigned char));

...

cap >> frame; // get a new frame from camera
imshow("YUYV 320x480", frame);

memcpy((void*)m_pDepthImgBuf, (void*)frame.data, width*height*2 * sizeof(unsigned char));

cv::Mat depth(height, width*2, CV_8UC1, (void*)m_pDepthImgBuf);
camera properties:
SDL information:
Video driver: x11
Device information:
Device path: /dev/video2
Stream settings:
Frame format: YUYV (MJPG is not supported by device)
Frame size: 320x480
Frame rate: 30 fps
v4l2-ctl -d /dev/video1 --list-formats
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'YUYV'
Name : YUV 4:2:2 (YUYV)
In the following image you can see, in green, the initial 320x480 image and, in grayscale, the depth image that I'm trying to extract.
The expected result should be:
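A sketch of one way to get at the raw bytes, assuming the camera really delivers two independent 8-bit values per 16-bit YUYV "pixel": disable OpenCV's automatic YUYV-to-BGR conversion so that the frame holds the untouched 2-bytes-per-pixel buffer, then reinterpret that buffer as a 480 x 640 single-channel image. Whether the V4L2 backend honours CV_CAP_PROP_CONVERT_RGB depends on the OpenCV build, so treat this as a sketch rather than a guaranteed fix.

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(2); // /dev/video2, per the device information above
    if (!cap.isOpened()) return -1;

    // Ask OpenCV for the untouched YUYV buffer instead of a BGR conversion
    cap.set(CV_CAP_PROP_CONVERT_RGB, false);

    cv::Mat frame;
    cap >> frame; // raw buffer: 320x480 at 2 bytes per pixel

    // Reinterpret each 16-bit pixel as two independent 8-bit pixels:
    // 320 * 2 bytes per row become 640 grayscale columns
    cv::Mat depth(480, 640, CV_8UC1, frame.data);

    cv::imshow("depth 640x480", depth);
    cv::waitKey(0);
    return 0;
}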