I'm trying to read the resolutions supported by a camera using GStreamer and the camerabin2 plugin. The problem is that I'm getting NULL.
#include <gst/gst.h>
#include <stdio.h>

#define gstRef(element) { gst_object_ref(GST_OBJECT(element)); gst_object_sink(GST_OBJECT(element)); }

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GstElement *m_camerabin = gst_element_factory_make("camerabin2", "camerabin2");
    gstRef(m_camerabin);

    GstCaps *supportedCaps = 0;
    g_object_get(G_OBJECT(m_camerabin), "image-capture-supported-caps",
                 &supportedCaps, NULL);

    char *c = gst_caps_to_string(supportedCaps);
    printf("%s\n", c);
    return 0;
}
Is there a better way to get the supported resolutions? Should I use a different plugin?
Thanks.
I haven't used this element, but in GStreamer the supported resolutions normally aren't available to your code until the element has been placed in a pipeline and the pipeline is set to PLAYING. Only then are the elements activated and linked, which makes that information available.
Hate to link and run, but you may want to start here:
https://gitorious.org/gstreamer-camerabin2/gst-plugins-bad/source/28540988b25f493274762d394c55a4beded5e428:tests/examples/camerabin2
I haven't used camerabin2, but I strongly suggest using GstDeviceMonitor. With a GstDeviceMonitor you can enumerate every device connected to the PC: not only microphones and speakers, but also cameras. Furthermore, you can access all the information a camera device exposes, such as resolutions, supported formats, and frame rates.
You will use:
GList* devices = gst_device_monitor_get_devices(mMonitor);
Then you need to extract the information from the GList*. I cannot share the whole code because of company policy; I can only give you the clue.
Suggested reference for the GstDeviceMonitor API:
https://gstreamer.freedesktop.org/documentation/gstreamer/gstdevicemonitor.html?gi-language=c
Related
I need to convert video-stream data to cv::cuda::GpuMat. Initially I tried copying into a cv::Mat and then using upload() to load it into a GpuMat, but this process is very slow (20 ms for a 640x480 frame).
I need a method to convert from an OpenNI video stream to GpuMat directly. I tried the following code, but it gives a runtime error.
I am using OpenCV 3.1, CUDA 8.0, and a GTX Titan X on Ubuntu 16.04.
#include "opencv2/opencv_modules.hpp"
#include <opencv2/core.hpp>
#include <opencv2/cudacodec.hpp>
#include <opencv2/highgui.hpp>
int main(int argc, const char* argv[])
{
const std::string fname = argv[1];
cv::cuda::GpuMat d_frame;
cv::Ptr<cv::cudacodec::VideoReader> d_reader = cv::cudacodec::createVideoReader(fname);
for (;;)
{
if (!d_reader->nextFrame(d_frame))
break;
cv::Mat frame;
d_frame.download(frame);
cv::imshow("GPU", frame);
if (cv::waitKey(3) > 0)
break;
}
return 0;
}
OpenCV Error: The function/feature is not implemented (The called functionality is disabled for current build or platform) in throw_no_cuda, file /home/krr/softwares/opencv-3.1.0/modules/core/include/opencv2/core/private.cuda.hpp, line 101
terminate called after throwing an instance of 'cv::Exception'
what(): /home/krr/softwares/opencv-3.1.0/modules/core/include/opencv2/core/private.cuda.hpp:101: error: (-213) The called functionality is disabled for current build or platform in function throw_no_cuda
Take a look at the source code. The framework called throw_no_cuda() (the line numbers differ - a different version?). The error also seems to be a duplicate of this one on GitHub.
alalek:
https://developer.nvidia.com/nvidia-video-codec-sdk:
Note: For Video Codec SDK 7.0 and later, NVCUVID has been renamed to NVDECODE API.
OpenCV has no support for new API and there are no plans to add this.
The latest CUDA version with NVCUVID is ~ CUDA 6.5.
Consider using ffmpeg with CUDA features enabled (via a normal cv::VideoCapture - but that cannot work with CUDA's cv::GpuMat).
And further:
dapicard:
I found a way to define the codec used by the FFMpeg backend :
export OPENCV_FFMPEG_CAPTURE_OPTIONS="video_codec|h264_cuvid"
More generally, it is possible to define these parameters using the syntax parameter_name|value;parameter_name2|value2
That is, use the hardware capabilities to decode videos (which is what you tried). Side note: ffmpeg also offers options to transcode videos directly on the GPU (i.e., without moving frames out of GPU memory).
Frankly, the proposed method will not deliver the matrix into your GPU memory directly; it only resolves the error. I don't think it is possible to grab the memory directly from ffmpeg, so you are stuck with moving it.
I am writing an image processing application to align a set of images, and I would like there to be functionality to write those images into a video. The image processing part is done in OpenCV 3.2.0 (C++), and currently outputs still images that aren't stitched together.
I have successfully used the VideoWriter with one of the codecs available on my machine to write the output images to an .avi, but to my knowledge there is no guarantee that any given codec will be available on a different platform. As I would like to share this application, this is a problem.
If it matters, the GUI is built in wxWidgets 3.1.0, so if there is something that can help me there that I didn't find, I would love to know.
My assumption is that there is no way of guaranteeing a successful video without somehow shipping a codec with the app, but is there a way of browsing available codecs at run time?
I know that on some platforms the following brings up a dialog of available codecs, which would be perfect if I could interpret the result automatically:

    cv::Size outputSize = myImage.size();
    cv::VideoWriter writer("output.avi", -1, 30, outputSize);

But this doesn't work on every platform either. So is there any way of enumerating the available codecs on the machine at run time, or do I have to ship a codec somehow in order to write videos cross-platform?
There is no function in OpenCV to list the available codecs. However, if you have ffmpeg or LibAV on your machine - as you should have from building/installing OpenCV - then you can use ffmpeg/LibAV to list all the available codecs. The following code does that:
#include <iostream>

extern "C" {
#include <libavcodec/avcodec.h>
}

int main(int argc, char **argv)
{
    /* initialize libavcodec, and register all codecs and formats */
    avcodec_register_all();

    // pointer used to walk the list of codecs
    AVCodec *current_codec = NULL;

    // initialize the AVCodec* object with the first codec
    current_codec = av_codec_next(current_codec);

    std::cout << "List of codecs:" << std::endl;

    // loop over all codecs
    while (current_codec != NULL)
    {
        if (av_codec_is_encoder(current_codec) || av_codec_is_decoder(current_codec))
        {
            std::cout << current_codec->name << std::endl;
        }
        current_codec = av_codec_next(current_codec);
    }
}
Compile with:
g++ listcodecs.cpp -o listcodecs `pkg-config libavcodec --cflags --libs`
I'm trying to beep, but I simply can't. I've already tried:
#include <iostream>
using namespace std;
int main(int argc, char **argv)
{
cout << '\a' << flush;
return 0;
}
I have also tried using this: http://www.johnath.com/beep/
But it simply doesn't beep.
(If I run $ speaker-test -t sine -f 500 -l 2 2>&1 on the terminal, it beeps, but I would like to beep with c++ to study low-level sound programming)
And I would like to be able to control frequency and duration.
Unless you're logged in at the console, cout will not refer to the system console. You need to open /dev/console (which typically requires root privileges) and send the \a there.
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int main()
{
    int s = open("/dev/console", O_WRONLY);
    if (s < 0)
        perror("unable to open console");
    else
    {
        if (write(s, "\a", 1) != 1)
            perror("unable to beep");
        close(s);
    }
}
This depends on which terminal emulator you are using. KDE's Konsole, for example, doesn't support beeping through the buzzer at all, as far as I can tell. First check that echo -e \\a works in your shell; if it doesn't, your C++ code won't work either. You can use xterm - it does support this.
But even in xterm it may not work if you don't have the pcspkr (or snd_pcsp) kernel module loaded. This is often the case, since distros frequently blacklist it by default. In that case your best bet is to look for a terminal that uses your sound card to emit beeps rather than the PC speaker, a.k.a. the buzzer.
You're asking about "low-level sound generation". Typically, the lowest level of sound generation involves constructing a waveform and passing it to the audio device in an appropriate format. Of course, it then comes out of your sound card rather than the PC speaker. The best advice I can give is to read up on the APIs for PulseAudio, ALSA, or even the kernel sound drivers.
The last time I played with this (~1996), it basically meant allocating an array of samples, computing values that approximate a sine wave of the appropriate frequency and amplitude, and writing that buffer to the output device. There may also have been some ioctl calls to set the device parameters (sample rate, stereo vs. mono, bit depth, etc.).
If your audio device supports MIDI commands, it may be easier to send those in some form closer to "play these notes for this long using this instrument".
You may find these articles helpful:
https://jan.newmarch.name/LinuxSound/
http://www.linuxjournal.com/article/6735
Have you tried a system() call in your application?
system("echo -e '\a'");
Is there any way to capture frames from as many camera types as DirectShow does on the Windows platform using Libav? I need to capture a camera's output without using DirectShow filters, and I want my application to work with many types of camera devices.
I have searched the Internet for this capability of libav and found that it can be done using the special input format "vfwcap". Something like this (I'm not sure the code is correct - I wrote it myself):
AVFormatParameters formatParams;
AVInputFormat *pInfmt = NULL;
AVFormatContext *pInFormatCtx = NULL;

av_register_all();

memset(&formatParams, 0, sizeof(formatParams));
//formatParams.device = NULL; //this was probably deprecated and then removed
formatParams.channel = 0;
formatParams.standard = "ntsc"; //deprecated too but still available
formatParams.width = 640;
formatParams.height = 480;
formatParams.time_base.num = 1000;
formatParams.time_base.den = 30000; //so we want 30000/1000 = 30 frames per second
formatParams.prealloced_context = 0;

pInfmt = av_find_input_format("vfwcap");
if (!pInfmt)
{
    fprintf(stderr, "Unknown input format\n");
    return -1;
}

// Open the capture device; for vfwcap the "filename" is the driver index
// (formatParams can probably be NULL for autodetection)
if (av_open_input_file(&pInFormatCtx, "0", pInfmt, 0, &formatParams) < 0)
    return -1; // Couldn't open device
/* Same as video4linux code*/
So another question is: how many devices does Libav support? All I have found about capturing camera output with libav on Windows is the advice to use DirectShow instead, because libav supports too few devices. Maybe the situation has changed and it now supports enough devices for production applications?
If this isn't possible... well, I hope my question won't be useless, and that this piece of code, composed from different sources, will help someone interested in this topic, because there is really very little information about it on the whole Internet.
FFmpeg cannot capture video on Windows. I once had to implement this myself, using DirectShow capturing.
I have limited exposure to the Mac OS X operating system; I have now started using Xcode and am studying I/O Kit. I need to create a command-line tool in Xcode that lists all USB devices connected to a Mac. Those with previous experience in this area, please help me. If anyone could provide sample code it would be of great use, as I am looking for a starting point.
You can adapt USBPrivateDataSample to your needs: the sample sets up a notifier, lists the currently attached devices, then waits for device attach/detach events. If you do, you will want to remove the usbVendor and usbProduct matching dictionaries so that all USB devices are matched.
Alternately, you can use IOServiceGetMatchingServices to get an iterator for all current matching services, using a dictionary created by IOServiceMatching(kIOUSBDeviceClassName).
Here's a short sample (which I've never run):
#include <IOKit/IOKitLib.h>
#include <IOKit/usb/IOUSBLib.h>

int main(int argc, const char *argv[])
{
    CFMutableDictionaryRef matchingDict;
    io_iterator_t iter;
    kern_return_t kr;
    io_service_t device;

    /* Set up a matching dictionary for the class. */
    matchingDict = IOServiceMatching(kIOUSBDeviceClassName);
    if (matchingDict == NULL)
    {
        return -1; // fail
    }

    /* Now that we have a dictionary, get an iterator. */
    kr = IOServiceGetMatchingServices(kIOMasterPortDefault, matchingDict, &iter);
    if (kr != KERN_SUCCESS)
    {
        return -1;
    }

    /* Iterate. */
    while ((device = IOIteratorNext(iter)))
    {
        /* Do something with the device, e.g. check its properties. */
        /* ... */
        /* Then free the reference taken before continuing to the next item. */
        IOObjectRelease(device);
    }

    /* Done; release the iterator. */
    IOObjectRelease(iter);
    return 0;
}
You just need to access the I/O Kit registry. You may well be able to use the ioreg tool to do this (e.g., run it via system() or popen()). If not, you can at least use it to verify your code:
Info on ioreg tool:
$ man ioreg
Get list of USB devices:
$ ioreg -Src IOUSBDevice
If you run system_profiler SPUSBDataType, it will list all the USB devices connected to the system. You can then work with that data either by dumping it to a text file or by reading the command's output directly into your application.