Using libVLC to read RTSP video stream + convert data to OpenCV Mat - c++

I'm attempting to read an RTSP stream from a Sony FCB-EV7520 camera through an IP interface using the libVLC library and convert the data to the format used by OpenCV, namely the Mat type. I've been trying to find a good example of this for a couple of days, but the only results I've found so far are this and this. I followed the code in the first example, adapting it to the RTSP use case, but I have yet to retrieve any data into the Mat. Judging from the terminal output, however, I do seem to get a connection to the camera. Do you see any obvious flaws in the code? Are there other (easier) ways of achieving my goal? Any other libraries that could be used? Any help would be appreciated! The code I'm running is:
#include <stdio.h>
#include <vlc/vlc.h>
#include <stdint.h>
#include <stdlib.h> // malloc/free
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace std;
using namespace cv;
struct VideoDataStruct {
int param;
};
int done = 0;
libvlc_media_player_t *mp;
unsigned int videoBufferSize = 0;
uint8_t *videoBuffer = 0;
void cbVideoPrerender(void *p_video_data, uint8_t **pp_pixel_buffer, int size) {
// Locking a mutex here once I get the code to work.
if (size > videoBufferSize || !videoBuffer){
printf("Reallocate raw video buffer\n");
free(videoBuffer);
videoBuffer = (uint8_t *) malloc(size);
videoBufferSize = size;
}
*pp_pixel_buffer = videoBuffer;
}
void cbVideoPostrender(void *p_video_data, uint8_t *p_pixel_buffer, int width, int height, int pixel_pitch, int size, int64_t pts) {
// Unlocking the mutex here once I get the code to work.
}
static void handleEvent(const libvlc_event_t* pEvt, void* pUserData) {
libvlc_time_t time;
switch(pEvt->type){
case libvlc_MediaPlayerTimeChanged:
time = libvlc_media_player_get_time(mp);
printf("MediaPlayerTimeChanged %lld ms\n", (long long)time);
break;
case libvlc_MediaPlayerEndReached:
printf ("MediaPlayerEndReached\n");
done = 1;
break;
default:
printf("%s\n", libvlc_event_type_name(pEvt->type));
}
}
int main(int argc, char* argv[]) {
// VLC pointers:
libvlc_instance_t *inst;
libvlc_media_t *m;
void *pUserData = 0;
VideoDataStruct dataStruct;
// VLC options:
char smem_options[1000];
// RV24:
sprintf(smem_options
, "#transcode{vcodec=h264}:smem{"
"video-prerender-callback=%lld,"
"video-postrender-callback=%lld,"
"video-data=%lld,"
"no-time-sync},"
, (long long int)(intptr_t)(void*)&cbVideoPrerender
, (long long int)(intptr_t)(void*)&cbVideoPostrender
, (long long int)(intptr_t)(void*)&dataStruct
);
const char * const vlc_args[] = {
"-I", "dummy", // Don't use any interface
"--ignore-config", // Don't use VLC's config
"--extraintf=logger", // Log anything
"--verbose=2", // Be verbose
"--sout", smem_options // Stream to memory
};
// We launch VLC
inst = libvlc_new(sizeof(vlc_args) / sizeof(vlc_args[0]), vlc_args);
/* Create a new item */
m = libvlc_media_new_location(inst, "rtsp://*****:*******#IP/videoinput_1/h264_1/media.stm");
/* Create a media player playing environement */
mp = libvlc_media_player_new_from_media (m);
libvlc_event_manager_t* eventManager = libvlc_media_player_event_manager(mp);
libvlc_event_attach(eventManager, libvlc_MediaPlayerTimeChanged, handleEvent, pUserData);
libvlc_event_attach(eventManager, libvlc_MediaPlayerEndReached, handleEvent, pUserData);
libvlc_event_attach(eventManager, libvlc_MediaPlayerPositionChanged, handleEvent, pUserData);
libvlc_video_set_format(mp, "h264", 1920, 1080, 1920* 3 );
/* play the media_player */
libvlc_media_player_play (mp);
while(1){
if(videoBuffer){ // Check for invalid input
Mat img = Mat(Size(1920, 1080), CV_8UC3, videoBuffer); // CV_8UC3 = 8 bits, 3 channels
namedWindow("Display window", WINDOW_AUTOSIZE); // Create a window for display.
imshow("Display window", img); // Show our image inside it.
}
}
libvlc_release (inst);
}
I am running the code on Ubuntu 16.04 using OpenCV 3.2 and libVLC 2.2.5.1. If you need any additional information, just ask.
PS: I know that the stream is working, as I can open it through the streaming interface of the VLC player. I also know that libVLC can decode the stream, as I have successfully opened recorded MP4s of the stream.

Not a complete answer, but too long for a comment:
In cbVideoPrerender, videoBuffer is allocated as needed by VLC.
However, in main you have the loop while(1){ if (videoBuffer) { ... } }, where videoBuffer is non-null from the first time cbVideoPrerender has been called. From then on the loop is infinite and non-blocking: there is no synchronization between the arrival of video data and your processing, and if you're just trying to get the first image, you will read it too early.
Your first link suggests using cbVideoPostrender as a synchronisation point, so you know the frame has been fully written and can be read and converted to the needed format. There it is done in the callback itself, but you could use some mechanism with a condition variable (queue, event, ...) to process the frame in another thread and pass the image to OpenCV.
By the way: setting a variable in one thread and reading it in another without a threading mechanism (mutex, atomic) is usually a bad idea.
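A minimal sketch of that condition-variable idea (C++11; the names frameMutex, frameReady, latestFrame and displayLoop are mine, it reuses the question's done flag, and it assumes the chain actually hands the callbacks raw BGR pixels — with vcodec=h264 the smem callbacks receive encoded data, not pixels, so something like vcodec=RV24 would be needed before wrapping the buffer in a Mat):
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <vector>
std::mutex frameMutex;
std::condition_variable frameReady;
std::vector<uint8_t> latestFrame;
bool frameAvailable = false;
// In cbVideoPostrender the frame is fully written, so copy it out under
// the lock and wake the consumer.
void cbVideoPostrender(void *p_video_data, uint8_t *p_pixel_buffer, int width, int height, int pixel_pitch, int size, int64_t pts) {
    std::lock_guard<std::mutex> lock(frameMutex);
    latestFrame.assign(p_pixel_buffer, p_pixel_buffer + size);
    frameAvailable = true;
    frameReady.notify_one();
}
// Replaces the busy while(1) loop in main: block until a frame arrives
// instead of spinning.
void displayLoop() {
    while (!done) {
        cv::Mat img;
        {
            std::unique_lock<std::mutex> lock(frameMutex);
            frameReady.wait(lock, [] { return frameAvailable; });
            // Clone while holding the lock so the callback cannot overwrite
            // the buffer we are reading from.
            img = cv::Mat(cv::Size(1920, 1080), CV_8UC3, latestFrame.data()).clone();
            frameAvailable = false;
        }
        cv::imshow("Display window", img);
        cv::waitKey(1); // imshow only draws when the GUI event loop runs
    }
}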

Related

How to incorporate VideoWriter code into image_publisher code of ros?

I am setting up a ROS system to publish images with ROS, C++, and OpenCV 2 for my drone. The code below publishes raw images. While publishing, I want to write grayscale images frame by frame at a resolution of 1280 x 720 to record a video. I have found readily available video-writing code with OpenCV; however, I could not incorporate it into the image_publisher code. Here is the image_publisher code:
#include <ros/ros.h>
#include <camera_info_manager/camera_info_manager.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/CameraInfo.h>
#include <opencv2/opencv.hpp>
int main(int argc, char **argv)
{
ROS_INFO("Starting image_pub ROS node...\n");
ros::init(argc, argv, "image_pub");
ros::NodeHandle nh("~");
std::string camera_topic;
std::string camera_info_topic;
std::string camera_info_url;
std::string img_path;
std::string frame_id;
float pub_rate;
int start_sec;
bool repeat;
nh.param<std::string>("camera_topic", camera_topic, "/camera/image_raw");
nh.param<std::string>("camera_info_topic", camera_info_topic, "/camera/camera_info");
nh.param<std::string>("camera_info_url", camera_info_url, "");
nh.param<std::string>("img_path", img_path, "");
nh.param<std::string>("frame_id", frame_id, "");
nh.param("pub_rate", pub_rate, 30.0f);
nh.param("start_sec", start_sec, 0);
nh.param("repeat", repeat, false);
ROS_INFO("CTopic : %s", camera_topic.c_str());
ROS_INFO("ITopic : %s", camera_info_topic.c_str());
ROS_INFO("CI URL : %s", camera_info_url.c_str());
ROS_INFO("Source : %s", img_path.c_str());
ROS_INFO("Rate : %.1f", pub_rate);
ROS_INFO("Start : %d", start_sec);
ROS_INFO("Repeat : %s", repeat ? "yes" : "no");
ROS_INFO("FrameID: %s", frame_id.c_str());
camera_info_manager::CameraInfoManager camera_info_manager(nh);
if (camera_info_manager.validateURL(camera_info_url))
camera_info_manager.loadCameraInfo(camera_info_url);
ros::Publisher img_pub = nh.advertise<sensor_msgs::Image>(camera_topic, 1);
ros::Publisher info_pub = nh.advertise<sensor_msgs::CameraInfo>(camera_info_topic, 1);
cv::VideoCapture vid_cap(img_path.c_str());
if (start_sec > 0)
vid_cap.set(CV_CAP_PROP_POS_MSEC, 1000.0 * start_sec);
ros::Rate rate(pub_rate);
while (ros::ok())
{
cv::Mat img;
if (!vid_cap.read(img))
{
if (repeat)
{
vid_cap.open(img_path.c_str());
if (start_sec > 0)
vid_cap.set(CV_CAP_PROP_POS_MSEC, 1000.0 * start_sec);
continue;
}
ROS_ERROR("Failed to capture frame.");
ros::shutdown();
}
else
{
//ROS_DEBUG("Image: %dx%dx%d, %zu, %d", img.rows, img.cols, img.channels(), img.elemSize(), img.type() == CV_8UC3);
if (img.type() != CV_8UC3)
img.convertTo(img, CV_8UC3);
// Convert image from BGR format used by OpenCV to RGB.
cv::cvtColor(img, img, CV_BGR2RGB);
auto img_msg = boost::make_shared<sensor_msgs::Image>();
img_msg->header.stamp = ros::Time::now();
img_msg->header.frame_id = frame_id;
img_msg->encoding = "rgb8";
img_msg->width = img.cols;
img_msg->height = img.rows;
img_msg->step = img_msg->width * img.channels();
auto ptr = img.ptr<unsigned char>(0);
img_msg->data = std::vector<unsigned char>(ptr, ptr + img_msg->step * img_msg->height);
img_pub.publish(img_msg);
if (camera_info_manager.isCalibrated())
{
auto info = boost::make_shared<sensor_msgs::CameraInfo>(camera_info_manager.getCameraInfo());
info->header = img_msg->header;
info_pub.publish(info);
}
}
ros::spinOnce();
rate.sleep();
}
return 0;
}
The video-writing code (taken from this link):
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main(){
// Create a VideoCapture object and use camera to capture the video
VideoCapture cap(0);
// Check if camera opened successfully
if(!cap.isOpened())
{
cout << "Error opening video stream" << endl;
return -1;
}
// The default resolution of the frame is obtained. The default resolution is system dependent.
int frame_width = cap.get(CV_CAP_PROP_FRAME_WIDTH);
int frame_height = cap.get(CV_CAP_PROP_FRAME_HEIGHT);
// Define the codec and create a VideoWriter object. The output is stored in the 'outcpp.avi' file.
VideoWriter video("outcpp.avi",CV_FOURCC('M','J','P','G'),10,
Size(frame_width,frame_height));
while(1)
{
Mat frame;
// Capture frame-by-frame
cap >> frame;
// If the frame is empty, break immediately
if (frame.empty())
break;
// Write the frame into the file 'outcpp.avi'
video.write(frame);
// Display the resulting frame
imshow( "Frame", frame );
// Press ESC on keyboard to exit
char c = (char)waitKey(1);
if( c == 27 )
break;
}
// When everything is done, release the video capture and writer objects
cap.release();
video.release();
// Closes all the windows
destroyAllWindows();
return 0;
}
Firstly, I tried incorporating the video-writing code (without grayscale) into the publisher code, but it failed to run. All in all, the image_publisher code should produce a video after completing its task. What is the right way of doing this?
You cannot publish an OpenCV Mat directly on a ROS topic, since the Image topic uses a different encoding.
You need to use cv_bridge to convert images from OpenCV to the ROS image format.
There are some details and examples here.
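For illustration, a minimal sketch of the cv_bridge route (the helper name publishFrame is mine; it assumes the img, img_pub and frame_id variables from the question's loop, and that img is in OpenCV's native BGR order):
#include <cv_bridge/cv_bridge.h>
#include <ros/ros.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/opencv.hpp>
// Publish one frame; cv_bridge fills in width, height, step and data.
void publishFrame(ros::Publisher &img_pub, const cv::Mat &img, const std::string &frame_id)
{
    std_msgs::Header header;
    header.stamp = ros::Time::now();
    header.frame_id = frame_id;
    img_pub.publish(cv_bridge::CvImage(header, sensor_msgs::image_encodings::BGR8, img).toImageMsg());
}
The VideoWriter from the second snippet can then be fed the same Mat inside the publishing loop (convert it to grayscale with cvtColor first and construct the writer with isColor=false if a grayscale recording is wanted).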

Output image from OpenImageIO ImageBuf to OpenCV IplImage gets corrupted

I'm writing a converter from OpenImageIO ImageBufs to OpenCV IplImages. Running the following code causes the output images to get corrupted (pictures below). I am working on a pull request for OpenImageIO, but if someone from the OpenCV community has some insights on the OpenCV side, that would be super helpful. OpenImageIO Pull Request
Code
IplImage *
ImageBufAlgo::to_IplImage (const ImageBuf &src)
{
#ifdef USE_OPENCV
const ImageSpec &spec = src.spec();
const ImageBuf *tmp = &src;
ImageBuf localcopy;
int channels = std::min(spec.nchannels, 4);
// If the image has 3 or more channels, reduce to at most 4 channels and
// use BGR(A) order (OpenCV's channel ordering).
if (channels >= 3) {
// OpenCV images support up to 4 channels (BGRA)
int channelorder[channels];
// Set the channel order
for (int i = 0; i < channels; ++i) {
channelorder[i] = i;
}
channelorder[0] = 2; // B
channelorder[1] = 1; // G
channelorder[2] = 0; // R
if (!ImageBufAlgo::channels(localcopy, src, channels, &channelorder[0])) {
DASSERT (0 && "Could not swap channels.");
return NULL;
}
tmp = &localcopy;
}
// Create an IplImage to write to.
CvSize size = cvSize(spec.width, spec.height);
IplImage *dst;
// Get the pixel data from the ImageBuf and send it to the IplImage.
switch (spec.format.basetype) {
case TypeDesc::UINT8: {
dst = cvCreateImageHeader(size, IPL_DEPTH_8U, channels);
if (!dst) {
DASSERT (0 && "Could not create dst IplImage.");
return NULL;
}
if (tmp->storage() != ImageBuf::IMAGECACHE) {
// Type is right and the IB is not backed by an ImageCache, so
// directly read from the local data of the IB.
cvSetData(dst, (void *)tmp->localpixels(), dst->widthStep);
} else {
// Either we are backed by an ImageCache, or we somehow still
// need a data format conversion, so make a local copy with
// get_pixels.
std::vector<unsigned char> data;
data.resize (spec.width * spec.height * spec.depth * channels);
tmp->get_pixels (ROI::All(), TypeDesc::INT8, &data[0]);
cvSetData(dst, &data[0], dst->widthStep);
}
break;
}
default:
DASSERT (0 && "unknown TypeDesc");
return NULL;
}
return dst;
#else
return NULL;
#endif
}
Test code
#include <ostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core_c.h>
#include <opencv2/highgui/highgui_c.h>
#include <OpenImageIO/imagebuf.h>
#include <OpenImageIO/imagebufalgo.h>
namespace oiio = OpenImageIO;
int main(int argc, char const *argv[]) {
oiio::ImageBuf src("/home/scott/Projects/oiio-test/images/input.jpg");
IplImage *dst = oiio::ImageBufAlgo::to_IplImage(src);
cvSaveImage("/home/scott/Projects/oiio-test/images/output.jpg", dst);
return 0;
}
Input images
Image 1
Image 2
Output images
Image 1
Image 2
My current theories
There's a difference in how the data is laid out. As far as I understand, OpenCV expects a 3-channel image to be laid out BGRBGRBGR...; I think OpenImageIO's default layout is RGBRGBRGB..., but that does get swapped above. I'm not adding any padding, so either there is already padding in the data or I am missing something.
I am passing the wrong step into cvSetData (step calculation: width * type size * channels). I tried manually setting some values, but I either get worse results or OpenCV raises an exception and segfaults.
Is there something else that I could be doing wrong?
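As an editorial sketch of theory 2 (hypothetical dimensions, not a confirmed diagnosis): cvCreateImageHeader rounds widthStep up to a multiple of 4 bytes by default, while ImageBuf::localpixels() is tightly packed, so whenever width * channels is not a multiple of 4 the two strides disagree and each row is read at a growing offset:
#include <cstdio>
int main() {
    int width = 1001, channels = 3;           // hypothetical dimensions
    int tight_step  = width * channels;       // OIIO scanline size, no padding
    int padded_step = (tight_step + 3) & ~3;  // IplImage default 4-byte row alignment
    std::printf("tight = %d, padded = %d, drift per row = %d bytes\n",
                tight_step, padded_step, padded_step - tight_step);
    return 0;
}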

OpenCV: processing multiple images in a C++ vector using pthreads

I have a large number of images in a folder that I need to perform various processing operations on. Here is what I am trying to do:
1) Read the images from the folder and put them in a C++ vector named imageQueue (a mutable array)
2) Create a number of threads
3) Each thread grabs an image from imageQueue, and then erases that image from the vector
4) Each thread then goes ahead and processes that image
5) When finished processing, each thread grabs the next image from the vector
6) This entire process runs until there are no more images in the imageQueue, at which point the program ends. (Currently I have 4 photos in the folder that I am using for tests, which is why my loops run from i = 0 to i < 4; when I complete this, I will have many more photos.)
I have named each of the images in the folder 00.jpg, 01.jpg, 02.jpg...
For testing purposes, right now I am simply trying to have each thread display the image it grabbed. However, when I run this, only pure white windows pop up instead of the actual images. Any help on why this is happening and how to correct it?
Thanks!
Here is my code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <pthread.h>
#include <vector>
#include <unistd.h> // sleep()
#define NUM_THREADS 2
using namespace std;
using namespace cv;
/* Function Declarations */
void* startProcessing(void* args);
/* Global Variables */
vector <Mat> imageQueue;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
int main(int argc, char** argv)
{
VideoCapture cap;
cap.open("/Users/nlauer/Documents/ImageSequence/%02d.jpg");
for(int i = 0; i < 4; i++)
{
Mat frame;
vector<Mat>::iterator it;
cap >> frame;
it = imageQueue.end();
it = imageQueue.insert(it, frame);
}
/* Create the threads */
pthread_t tids[NUM_THREADS];
for(int i = 0; i < NUM_THREADS; i++)
{
pthread_create(&tids[i], NULL, startProcessing, NULL);
}
/* Reap the threads */
for(int i = 0; i < NUM_THREADS; i++)
{
pthread_join(tids[i], NULL);
}
imageQueue.clear();
return 0;
}
void* startProcessing(void* args)
{
/* Each thread grabs an image from imageQueue, removes it from the
queue, and then processes it. The grabbing and removing are done
under a lock */
Mat image;
Mat emptyImage;
/* Get the first image for each thread */
pthread_mutex_lock(&mutex);
if(!imageQueue.empty()) {
image = imageQueue[0];
vector<Mat>::iterator it;
it = imageQueue.begin();
it = imageQueue.erase(it);
}
pthread_mutex_unlock(&mutex);
while(!image.empty())
{
/* Process the image - right now I just want to display it */
namedWindow("window", CV_WINDOW_AUTOSIZE);
imshow("window", image);
sleep(10);
/* Obtain the next image in the queue */
pthread_mutex_lock(&mutex);
if(!imageQueue.empty()) {
image = imageQueue[0];
vector<Mat>::iterator it;
it = imageQueue.begin();
it = imageQueue.erase(it);
} else {
image = emptyImage;
}
pthread_mutex_unlock(&mutex);
}
return NULL;
}
What you're trying to achieve in items 3)-5) is exactly what Intel's TBB library is designed for. Take a look at tbb::parallel_for.
All you need is a class (or lambda) with an operator()(Mat) that processes a single image; the TBB library will take care of the thread handling. A minimal sketch follows below.
You can go even further if you use tbb::concurrent_queue instead of your vector to hold the images. Then processing can start even while reading has not finished. You might use tbb::parallel_do instead of tbb::parallel_for in that approach.
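A minimal sketch of the parallel_for variant (processImage is a hypothetical stand-in for the real per-image work; displaying is deliberately left out, since showing images from the worker threads is what misbehaves here):
#include <opencv2/imgproc/imgproc.hpp>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <vector>
// Stand-in for the real per-image work.
static void processImage(cv::Mat &img) {
    cv::GaussianBlur(img, img, cv::Size(5, 5), 0);
}
// Process every image in the vector; TBB handles thread creation,
// work splitting and joining.
void processAll(std::vector<cv::Mat> &imageQueue) {
    tbb::parallel_for(tbb::blocked_range<size_t>(0, imageQueue.size()),
        [&](const tbb::blocked_range<size_t> &r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                processImage(imageQueue[i]);
        });
}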
The issue has been resolved: I simply went ahead and continued with the processing as is. Something strange was happening when I tried to show the image in each thread, but as long as you let each thread finish its routine and show the images afterwards, everything works fine.

OpenCV does not write video

I am trying to write a very basic video-capturing program using OpenCV, but despite all my efforts nothing gets written. I am fairly sure that I am following all the tutorials one can find on the subject.
Here is the code I am using:
#include <opencv2\core\core.hpp>
#include <opencv2\imgproc\imgproc.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <videoInput.h>
#include <memory>  // std::auto_ptr
#include <cstdlib> // exit
static const cv::Size _SIZE(640, 480);
static const int FPS = 20;
int webcam = 0;
std::auto_ptr<videoInput> VI(NULL);
std::auto_ptr<cv::VideoWriter> outputVideo(NULL);
cv::Mat frame(_SIZE.height, _SIZE.width, CV_8UC3);
int main()
{
// get device list and create videoInput
videoInput::listDevices();
VI.reset(new videoInput());
// choose first device
VI->setupDevice(webcam, _SIZE.width, _SIZE.height);
// always check
if(!VI->isDeviceSetup(webcam))
return -1; //-->
// set device frame rate
VI->setIdealFramerate(webcam, FPS);
// create named window and image
cv::namedWindow("CAM");
do
if (VI->isFrameNew(webcam))
{
VI->getPixels(webcam, (unsigned char*)frame.data, false, true); // fetch the pixels in BGR order
if (outputVideo.get())
{
(*outputVideo).write( frame);
}
char c = cv::waitKey(5);
if (!IsWindowVisible((HWND)cvGetWindowHandle("CAM")) || c==27)
{
exit(0);
}
cv::imshow("CAM", frame);
} while(1);
return 0;
}
I have tried various extensions and various FourCC values. Usually, with any extension except avi, the writer is created but does nothing; avi writers, on the contrary, simply fail to be created.
I have read that codecs are probably missing, but what does that mean? What exactly do I need to put where for them to be found by OpenCV?
All in all, I am very confused. This is tutorial code; it should work.

WebP encoding - Segmentation Fault

So I'm trying to use the WebP API to encode images. Right now I'm using OpenCV to open and manipulate the images, and then I want to save them off as WebP. Here's the source I'm using:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>
#include <webp/encode.h>
int main(int argc, char *argv[])
{
IplImage* img = 0;
int height,width,step,channels;
uchar *data;
int i,j,k;
if (argc<2) {
printf("Usage:main <image-file-name>\n\7");
exit(0);
}
// load an image
img=cvLoadImage(argv[1]);
if(!img){
printf("could not load image file: %s\n",argv[1]);
exit(0);
}
// get the image data
height = img->height;
width = img->width;
step = img->widthStep;
channels = img->nChannels;
data = (uchar *)img->imageData;
printf("processing a %dx%d image with %d channels \n", width, height, channels);
// create a window
cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
cvMoveWindow("mainWin",100,100);
// invert the image
for (i=0;i<height;i++) {
for (j=0;j<width;j++) {
for (k=0;k<channels;k++) {
data[i*step+j*channels+k] = 255-data[i*step+j*channels+k];
}
}
}
// show the image
cvShowImage("mainWin", img);
// wait for a key
cvWaitKey(0);
// release the image
cvReleaseImage(&img);
float qualityFactor = .9;
uint8_t** output;
FILE *opFile;
size_t datasize;
printf("encoding image\n");
datasize = WebPEncodeRGB((uint8_t*)data,width,height,step,qualityFactor,output);
printf("writing file out\n");
opFile=fopen("output.webp","w");
fwrite(output,1,(int)datasize,opFile);
}
When I execute this, I get this:
nato#ubuntu:~/webp/webp_test$ ./helloWorld ~/Pictures/mars_sunrise.jpg
processing a 2486x1914 image with 3 channels
encoding image
Segmentation fault
It displays the image just fine but segfaults on the encoding. My initial guess was that it's because I'm releasing the img before I try to write out the data, but it doesn't seem to matter whether I release it before or after I try the encoding. Is there something else I'm missing that might cause this problem? Do I have to make a copy of the image data or something?
The WebP API docs are... sparse. Here's what the README says about WebPEncodeRGB:
The main encoding functions are available in the header src/webp/encode.h
The ready-to-use ones are:
size_t WebPEncodeRGB(const uint8_t* rgb, int width, int height,
int stride, float quality_factor, uint8_t** output);
The docs specifically do not say what the 'stride' is, but I'm assuming it's the same as the 'step' from OpenCV. Is that reasonable?
Thanks in advance!
First, don't release the image if you use it later. Second, your output argument points to an uninitialized address. This is how to pass the address of a valid pointer for the output:
uint8_t* output;
datasize = WebPEncodeRGB((uint8_t*)data, width, height, step, qualityFactor, &output);
You release the image with cvReleaseImage before you use the pointer to the image data for the encoding. That release function frees the image buffer, so your data pointer no longer points to valid memory.
This might be the reason for your segfault.
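Putting both answers together, a minimal sketch of the corrected tail of main (it assumes the libwebp convention of that era that the caller releases the output buffer with free(); note also that encode.h documents quality_factor on a 0..100 scale, not 0..1):
// Encode first, while img (and hence data) still points to valid pixels.
uint8_t *output = NULL;
float qualityFactor = 90.0f; // 0..100 scale
size_t datasize = WebPEncodeRGB((uint8_t *)data, width, height, step, qualityFactor, &output);
if (datasize > 0) {
    FILE *opFile = fopen("output.webp", "wb");
    fwrite(output, 1, datasize, opFile);
    fclose(opFile);
}
free(output);          // caller owns the encoder's output buffer
cvReleaseImage(&img);  // release only after encoding is done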
So it looks like the problem was here:
// load an image
img=cvLoadImage(argv[1]);
The function cvLoadImage takes an extra parameter:
cvLoadImage(const char* filename, int iscolor=CV_LOAD_IMAGE_COLOR)
and when I changed it to
img=cvLoadImage(argv[1],1);
the segfault went away.