I'm on Ubuntu 14.04 and I'm trying to write a program that streams my desktop, using the answer to libvlc stream part of screen as an example. However, I don't have another computer readily available to verify that the stream is working, so how can I view that stream on my own computer?
libvlc_vlm_add_broadcast(inst, "mybroad", "screen://",
"#transcode{vcodec=h264,vb=800,scale=1,acodec=mpga,ab=128,channels=2,samplerate=44100}:http{mux=ts,dst=:8080/stream}",
5, params, 1, 0)
My program throws no errors and writes this:
[0x7f0118000e18] x264 encoder: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[0x7f0118000e18] x264 encoder: profile High, level 3.0
[0x7f0118000e18] x264 encoder: final ratefactor: 25.54
[0x7f0118000e18] x264 encoder: using SAR=1/1
[0x7f0118000e18] x264 encoder: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[0x7f0118000e18] x264 encoder: profile High, level 2.2
So to me everything seems OK. However, I don't know how to view that stream from my computer: if I open VLC and try to open a network stream using http://#:7777, I get an "invalid host" error in its log. This is probably a silly mistake or error on my part, but any help would be greatly appreciated!
If anyone needs it, this is my entire code (I'm using Qt 4.8.6):
#include <QCoreApplication>
#include <iostream>
#include <vlc/vlc.h>
#include <X11/Xlib.h>
// #include <QDebug>
using namespace std;
bool ended;
void playerEnded(const libvlc_event_t* event, void *ptr);
libvlc_media_list_t * subitems;
libvlc_instance_t * inst;
libvlc_media_player_t *mp;
libvlc_media_t *media;
libvlc_media_t * stream;
int main(int argc, char *argv[])
{
    XInitThreads();
    QCoreApplication::setAttribute(Qt::AA_X11InitThreads);
    ended = false;
    QCoreApplication a(argc, argv);

    // the array with parameters
    const char* params[] = {"screen-top=0",
                            "screen-left=0",
                            "screen-width=640",
                            "screen-height=480",
                            "screen-fps=10"};

    /* Load the VLC engine */
    inst = libvlc_new(0, NULL);
    if(!inst)
        std::cout << "Can't load video player plugins" << std::endl;

    cout << "add broadcast: " <<
        libvlc_vlm_add_broadcast(inst, "mybroad",
            "screen://",
            "#transcode{vcodec=h264,vb=800,scale=1,acodec=mpga,ab=128,channels=2,samplerate=44100}:http{mux=ts,dst=:8080/stream}",
            5, params, // 5 == number of entries in params
            1, 0) << '\n';
    cout << "broadcast start: " << libvlc_vlm_play_media(inst, "mybroad") << '\n';

    media = libvlc_media_new_location(inst, "http://#:8080/stream");
    // Create a media player playing environment
    mp = libvlc_media_player_new(inst);
    libvlc_media_player_play(mp);

    cout << "szatan!!!" << endl;
    int e;
    cin >> e;

    /* Stop playing */
    libvlc_media_player_stop(mp);
    /* Free the media_player */
    libvlc_media_player_release(mp);
    libvlc_release(inst);
    return a.exec();
}
So, I have found the answer. Stack Overflow won't let me post an answer because I'm new here, so it's in the comments: I should have used my IP address when creating the media: media = libvlc_media_new_location(inst, "http://192.168.1.56:8080"); works great!
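For anyone following along, here is a minimal sketch of how that corrected MRL can be previewed locally. The address 192.168.1.56 is just the example from the comment above, I've appended the /stream path from the dst= option in the broadcast string, and binding the media with libvlc_media_player_new_from_media is my own addition, since the posted code never attaches the media to the player:
/* Sketch only: preview the running broadcast in a local window.
   Assumes inst is the libvlc_instance_t created above and that the
   machine's LAN address is 192.168.1.56. */
libvlc_media_t *preview =
    libvlc_media_new_location(inst, "http://192.168.1.56:8080/stream");
libvlc_media_player_t *preview_player =
    libvlc_media_player_new_from_media(preview);
libvlc_media_release(preview);            /* the player keeps its own reference */
libvlc_media_player_play(preview_player); /* opens a local window with the stream */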
I am trying to bridge ROS images to OpenCV, so I started using cv_bridge and followed this tutorial: http://wiki.ros.org/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages
I think my CMakeLists.txt is correct: catkin_make builds the executable and the code runs, but nothing really happens besides the node showing up as a leaf in rqt_graph.
However, I ran into some issues with the step of the tutorial that says: "Run a camera or play a bag file to generate the image stream. Now you can run this node, remapping 'in' to the actual image stream topic."
I am using a Kinect as the image source and have installed the OpenNI drivers. I can confirm it is working correctly, because when I run rviz or rtabmap the point cloud images are shown.
I'm guessing the issue is that I am not wiring up the publishers and subscribers correctly: when I try to use image_view to check whether the camera data is coming through, it shows a blank window. On the command line I type: rosrun image_view image_view image:=/camera/rgb/image_color However, I receive this error: GLib-GObject-CRITICAL **: 15:13:13.357: g_object_unref: assertion 'G_IS_OBJECT (object)' failed, which I assume has to do with how I am remapping the topics.
When running rqt_graph with both the OpenNI node and the tutorial node, it looks like this:
https://imgur.com/fLd69WG
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
static const std::string OPENCV_WINDOW = "Image window";
class ImageConverter
{
  ros::NodeHandle nh_;
  image_transport::ImageTransport it_;
  image_transport::Subscriber image_sub_;
  image_transport::Publisher image_pub_;

public:
  ImageConverter()
    : it_(nh_)
  {
    // Subscribe to input video feed and publish output video feed
    // I'm guessing this is where my errors are
    image_sub_ = it_.subscribe("/camera/image_raw", 1,
      &ImageConverter::imageCb, this);
    image_pub_ = it_.advertise("/image_converter/output_video", 1);

    cv::namedWindow(OPENCV_WINDOW);
  }

  ~ImageConverter()
  {
    cv::destroyWindow(OPENCV_WINDOW);
  }

  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
      cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
      ROS_ERROR("cv_bridge exception: %s", e.what());
      return;
    }

    // Draw an example circle on the video stream
    if (cv_ptr->image.rows > 60 && cv_ptr->image.cols > 60)
      cv::circle(cv_ptr->image, cv::Point(50, 50), 10, CV_RGB(255,0,0));

    // Update GUI Window
    cv::imshow(OPENCV_WINDOW, cv_ptr->image);
    cv::waitKey(3);

    // Output modified video stream
    image_pub_.publish(cv_ptr->toImageMsg());
  }
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_converter");
  ROS_INFO_STREAM("test to see if node is running");
  ImageConverter ic;
  ros::spin();
  return 0;
}
I have fixed the issue by using rostopic hz [topic] and checking which topics were actually receiving camera data. From there, I changed the subscriber to image_sub_ = it_.subscribe("camera/rgb/image_color", 1, &ImageConverter::imageCb, this); and it worked.
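For context, this is roughly what the constructor looks like with that change in place. This is just a sketch: it assumes the OpenNI driver publishes the Kinect's color image on camera/rgb/image_color (the topic found with rostopic hz), and the rest of the class is unchanged:
  ImageConverter()
    : it_(nh_)
  {
    // Subscribe to the topic the Kinect/OpenNI driver actually publishes on
    image_sub_ = it_.subscribe("camera/rgb/image_color", 1,
                               &ImageConverter::imageCb, this);
    image_pub_ = it_.advertise("/image_converter/output_video", 1);

    cv::namedWindow(OPENCV_WINDOW);
  }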
I am working on a task in which I have to access the live stream of an IP camera (Edimax IC-3110P) using OpenCV 3. My host system is Windows 10 and I use VirtualBox to run Ubuntu 16.04 (Xenial) 64-bit. I am using C++ and Code::Blocks (IDE).
Finally, I was able to access the live stream through Microsoft Visual Studio (on Windows 10) with the following program.
#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <iostream>
int main(int, char**) {
    cv::VideoCapture vcap;
    cv::Mat image;

    // This works on a D-Link DCS-932L
    const std::string videoStreamAddress =
        "http://admin:1234@192.168.2.3/mjpg/video.mjpg";

    // open the video stream and make sure it's opened
    if(!vcap.open(videoStreamAddress)) {
        std::cout << "Error opening video stream or file" << std::endl;
        return -1;
    }

    for(;;) {
        if(!vcap.read(image)) {
            std::cout << "No frame" << std::endl;
            cv::waitKey();
        }
        cv::imshow("Output Window", image);
        if(cv::waitKey(1) >= 0) break;
    }
    return 0;
}
However, in Ubuntu the same program in Code::Blocks prints "Error opening video stream or file."
The camera doesn't officially support Linux, but I can access the live stream through a browser's address bar in Ubuntu, just not from my program.
Does anyone have any idea how to solve this?
Thank you.
So I have played around in OpenCV a fair bit before and never run into this problem. I am implementing a mean-shift algorithm and want to run it on video devices, images, and video files. Devices and images work; however, no matter what I try, when I run VideoCapture on my filename (whether passing it to the constructor or using the VideoCapture::open() method, and whether as a local or a full path), I always end up in my error check.
Thoughts? Ideas? Code below; running in Visual Studio 2012.
#include "opencv2\highgui\highgui.hpp"
#include "opencv2\core\core.hpp"
#include "opencv2\opencv.hpp"
#include "opencv2\video\video.hpp"
#include <string>
using cv::Mat;
using std::string;
enum Filetype{Image, Video};
int main(int argc, char* argv[])
{
string filename = "short_front.avi";// "C:\\Users\\Jonathan\\Videos\\short_front.mp4"; //"hallways.jpg";
Mat cv_image; //convert to unsigned char * with data
Mat filtImage_;
Mat segmImage_;
Mat whiteImage_;
cv::VideoCapture vid;
vid.open("C:/Users/Jonathan/Desktop/TestMeanShift/TestMeanShift/short_front.avi");
cv::waitKey(1000);
if ( !vid.isOpened() ){
throw "Error when reading vid";
cv::waitKey(0);
return -1;
}
// cv_image = cv::imread(filename);//, CV_LOAD_IMAGE_COLOR);
// if(! cv_image.data){
// std::cerr << "Image Failure: " << std::endl;
// system("pause");
// return -1;
// }
//Mat cv_image_gray;
//cv::cvtColor(cv_image,cv_image_gray,CV_RGB2GRAY);
for (;;)
{
vid >> cv_image;
if ( !cv_image.data)
continue;
cv::imshow("Input",cv_image); //add a normal window here to resizable
}
EDIT: This is a distinct problem from the one listed here because it deals with a specific corner case: VideoCapture and image capture both work; only VideoCapture with a file does not. When it doesn't work, the code runs properly, except that the "video" it creates is incomplete, since the file never opened properly. Therefore, as the code above does not crash at compile time or run time, the only indicator is the bad output (a 6 KB video output file). If your issue is not the corner case I am describing but a more general problem with the above OpenCV functions, the aforementioned link could help you.
Dear all,
I need to decode an animated GIF picture into bitmap files in MFC (Visual Studio 2010). Is there any library to decode a GIF picture? I cannot use GDI+ because the program has to run on Windows XP. I would appreciate it if someone could provide me with a library, ActiveX control, DLL, or anything similar.
Many Thanks,
Shervin Zargham
It's pretty simple using ImageMagick's C++ API (Magick++):
/* list of Image to store the GIF's frames */
std::vector<Magick::Image> imageList;

/* read all the frames of the animated GIF */
Magick::readImages( &imageList, "animated.gif" );

/* optionally coalesce the frame sequence depending on the expected result */
Magick::coalesceImages( &imageList, imageList.begin(), imageList.end() );

/* store each frame in a separate BMP file */
for(unsigned int i = 0; i < imageList.size(); ++i) {
    std::stringstream ss;
    ss << "frame" << i << ".bmp";
    imageList[i].write(ss.str());
}
WIC (included in Vista, available for XP) offers CLSID_WICGifDecoder, a COM component.
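For what it's worth, here is a rough sketch of how the WIC decoding path is usually driven. It goes through IWICImagingFactory::CreateDecoderFromFilename (which selects the GIF decoder for you) rather than CoCreating CLSID_WICGifDecoder directly, all error handling is omitted, "animated.gif" is a placeholder, and on XP it assumes the WIC redistributable is installed:
#include <windows.h>
#include <wincodec.h>
#pragma comment(lib, "windowscodecs.lib")
#pragma comment(lib, "ole32.lib")

// Sketch only: enumerate the frames of an animated GIF with WIC.
int main()
{
    CoInitialize(NULL);

    IWICImagingFactory *factory = NULL;
    CoCreateInstance(CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER,
                     IID_PPV_ARGS(&factory));

    IWICBitmapDecoder *decoder = NULL;
    factory->CreateDecoderFromFilename(L"animated.gif", NULL, GENERIC_READ,
                                       WICDecodeMetadataCacheOnDemand, &decoder);

    UINT frameCount = 0;
    decoder->GetFrameCount(&frameCount);
    for (UINT i = 0; i < frameCount; ++i)
    {
        IWICBitmapFrameDecode *frame = NULL;
        decoder->GetFrame(i, &frame);
        // ...copy the frame's pixels or hand it to a BMP encoder here...
        frame->Release();
    }

    decoder->Release();
    factory->Release();
    CoUninitialize();
    return 0;
}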
Try this using ImageMagick's C++ API (Magick++), tested on VS2010:
#include <Magick++.h>
#include <string>
#include <iostream>
#include <list>
using namespace std;
using namespace Magick;
void kk(char * nombre, char *ext)
{
    /* list of Image to store the GIF's frames */
    std::list<Magick::Image> imageList;

    /* read all the frames of the animated GIF */
    Magick::readImages( &imageList, nombre );

    /* coalesce the differences to obtain the actual frames */
    Magick::coalesceImages( &imageList, imageList.begin( ), imageList.end( ) );

    /* store each frame in a separate file */
    list<Magick::Image>::iterator it;
    int i = 1;
    for ( it = imageList.begin( ); it != imageList.end( ); it++, i++ )
    {
        std::string name = "frame" + to_string((_Longlong)(i)) + ext;
        it->write(name);
    }
}

int main( int /*argc*/, char ** argv)
{
    // Initialize ImageMagick install location for Windows
    InitializeMagick(*argv);
    try {
        kk("luni0.gif", ".png"); // using ".bmp", ".jpg", ".png", OK
        return 0;
    }
    catch( exception &error_ )
    {
        cout << "Caught exception: " << error_.what() << endl;
        return 1;
    }
}
It's been a long time, but I recall once using OleLoadPicture to open GIF and PNG files on old versions of Windows, though the documentation seems to suggest that it's only for BMP, ICO, and WMF.
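For what it's worth, here is a rough sketch of the OleLoadPicture call as I remember it; "picture.gif" is a placeholder, error handling is omitted, and whether a given Windows version actually accepts GIF through this path would need to be verified, as noted above:
#include <windows.h>
#include <olectl.h>
#include <ocidl.h>
#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "oleaut32.lib")

// Sketch only: load a picture file into an IPicture via OleLoadPicture.
int main()
{
    HANDLE file = CreateFileA("picture.gif", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, 0, NULL);
    DWORD size = GetFileSize(file, NULL);
    HGLOBAL mem = GlobalAlloc(GMEM_MOVEABLE, size);
    void *buf = GlobalLock(mem);
    DWORD bytesRead = 0;
    ReadFile(file, buf, size, &bytesRead, NULL);
    GlobalUnlock(mem);
    CloseHandle(file);

    IStream *stream = NULL;
    CreateStreamOnHGlobal(mem, TRUE, &stream);  // TRUE: HGLOBAL freed with the stream

    IPicture *picture = NULL;
    OleLoadPicture(stream, (LONG)size, FALSE, IID_IPicture, (void **)&picture);

    // ...use picture (e.g. picture->Render(...)), then release the COM objects...
    if (picture) picture->Release();
    stream->Release();
    return 0;
}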
I am developing an application that requires multiple webcams. In order to make sure that the correct webcam is used for each part of the application, I created some udev rules that SYMLINK the webcam to a specific name, depending on the serial number.
This works great, and I can access the camera by that name using VLC and a variety of other applications.
But when I try to access the camera by that name (or by the non-symlinked name given by Linux) using OpenCV and Python, I can't read a frame from the camera and my program hangs, even though the camera is opened successfully. I've created a sample application in C++ to test whether it was a Python/OpenCV-related bug, but the same thing happens in C++ too.
Here is my C++ test application that doesn't work:
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
int main (int argc, const char * argv[])
{
    VideoCapture cap("/dev/my_custom_name");
    if (!cap.isOpened())
        return -1;
    cout << "Opened..." << endl;

    Mat img;
    namedWindow("video capture", CV_WINDOW_AUTOSIZE);
    while (true)
    {
        cout << "Trying..." << endl;
        cap >> img;
        cout << "Got" << endl;
        imshow("video capture", img);
        if (waitKey(10) >= 0)
            break;
    }
    return 0;
}
I get the Opened... and Trying... messages, but not the Got message.
Any ideas on how to resolve this issue?
(This is all on Linux, by the way.)
Thanks
I figured this out. When I opened the capture in VLC, I noticed that it prefixed the filename with v4l2://. When I did the same in my application, it worked!
So, referring to the code above, "/dev/my_custom_name" should become "v4l2:///dev/my_custom_name".
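For illustration, the only change needed in the test program above is the capture line below; whether the v4l2:// prefix is interpreted may depend on the capture backend OpenCV was built with, so treat this as a sketch of the fix described above rather than a guaranteed recipe:
    // Open the udev-symlinked device through the v4l2 URI scheme
    VideoCapture cap("v4l2:///dev/my_custom_name");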