I'm working on an Olimex A13 board with just eglfs, i.e. no windowing system. Because of this, Qt Multimedia's video and camera features don't work, since Qt uses GStreamer, which in turn needs X. So I'm using the QtGStreamer library instead.
I've followed the examples and created a media player, which works as expected. Now I want to implement a camera using camerabin2, which comes from gst-plugins-bad.
This is my code:
//init QGst
QGst::init(&argc, &argv);
//create video surface
QGst::Quick::VideoSurface* videoSurface = new QGst::Quick::VideoSurface(&engine);
CameraPlayer player;
player.setVideoSink(videoSurface->videoSink());
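For context, the surface is exposed to the QML scene the same way as in the qmlplayer2 example that ships with QtGStreamer (a sketch; the context property name is mine):

//expose the video surface to QML; the QML side binds it to a
//VideoItem (import QtGStreamer 1.0), as in the qmlplayer2 example
engine.rootContext()->setContextProperty("videoSurface1", videoSurface);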
//cameraplayer.cpp
void CameraPlayer::open()
{
    if (!m_pipeline) {
        m_pipeline = QGst::ElementFactory::make("camerabin").dynamicCast<QGst::Pipeline>();
        if (m_pipeline) {
            m_pipeline->setProperty("video-sink", m_videoSink);
            //watch the bus for messages
            QGst::BusPtr bus = m_pipeline->bus();
            bus->addSignalWatch();
            QGlib::connect(bus, "message", this, &CameraPlayer::onBusMessage);
            //QGlib::connect(bus, "image-done", this, &CameraPlayer::onImageDone);
        } else {
            qCritical() << "Failed to create the pipeline";
        }
    }
}
//-----------------------------------
void CameraPlayer::setVideoSink(const QGst::ElementPtr & sink)
{
    m_videoSink = sink;
}
//-----------------------------------
void CameraPlayer::start()
{
    m_pipeline->setState(QGst::StateReady);
    m_pipeline->setState(QGst::StatePlaying);
}
I then call cameraPlayer.start(), but it isn't working, i.e. there's no video. Am I missing something here? Has anyone used QtGStreamer to stream a webcam? Thanks in advance.
I realised some plugins (multifilesink) were missing. I started my Qt application with the --gst-debug-level=4 argument, and GStreamer then reported the missing plugins.
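Nothing special is needed in code for this: QGst::init() forwards argc/argv to GStreamer, which consumes its own options before Qt sees them. A minimal sketch of the entry point (file name and structure are mine):

//main.cpp -- launch as: ./myapp --gst-debug-level=4
#include <QGst/Init>

int main(int argc, char *argv[])
{
    //GStreamer strips the --gst-* options from argv here
    QGst::init(&argc, &argv);
    //... create the engine, video surface and CameraPlayer as above ...
    return 0;
}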
Related
I'm building a simple application in C++ (32-bit) that uses OpenCV to grab frames from an RTSP camera.
The function that grabs the camera frames runs in a separate thread from the main program.
I've tested this application with an mp4 video, and it works fine: I was able to grab frames and process them.
However, when I use the RTSP link, although I am able to open a connection to the camera, whenever I try to read, both grab() and read() return false.
At first I thought it was an issue with the RTSP link, but I made a simple Python application to test it, and that worked. So it was not the link.
This is the code that I'm using to grab the frames:
#ifndef _IMAGE_BUFFER_H_
#define _IMAGE_BUFFER_H_
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
...
VideoCapture capture_;
string address_;
atomic<bool> keep_alive_;
thread thread_;
int fps_;
mutex mutex_;
list<FramePair> frames_;
int Run()
{
    capture_.open(address_, cv::CAP_ANY);
    if (!capture_.isOpened()) {
        printf("Could not open Camera feed! \n");
        return -1;
    }
    // target frame period in ms, shortened by 5% to allow for
    // the time spent grabbing and encoding
    uint64_t period = uint64_t((float(1) / float(fps_)) * float(1000));
    period = period - (period / 20);
    uint64_t t0 = getCurrentTimeInMilli();
    while (keep_alive_) {
        uint64_t difftime = getCurrentTimeInMilli() - t0;
        if (difftime < period) {
            uint64_t sleep_time = period - difftime;
            if (sleep_time < period) {
                std::this_thread::sleep_for(std::chrono::milliseconds(sleep_time));
            }
        }
        t0 = getCurrentTimeInMilli();
        CaptureFrame();
    }
    return 0;
}
void CaptureFrame()
{
    Mat frame;
    bool ret = capture_.read(frame);
    if (!ret) {
        printf("Error in frame reading! \n");
        return; // don't try to encode an empty frame
    }
    // JPEG-encode the frame and queue it, dropping the oldest
    // entry once the queue is full
    vector<uint8_t> jpeg;
    cv::imencode(".jpg", frame, jpeg, vector<int>());
    mutex_.lock();
    frames_.push_back(FramePair(getCurrentTimeInMilli(), jpeg));
    if (frames_.size() > FRAME_QUEUE_SIZE)
        frames_.pop_front();
    mutex_.unlock();
}
The OpenCV version that I'm using is 3.4.5.
The link: rtsp://<user>:<pass>@<ip>:<port>/media
I appreciate any help on this matter.
Edit1:
What I have tried:
I've tried this, but it didn't work.
I also tried a pre-built OpenCV 3.4.0 version for 64 bits, and it behaved the same.
Sorry for answering my own question.
But after a lot of trial and error, and after reading various SO threads about using cv::VideoCapture on Windows, I found the issue.
I was trying to statically link OpenCV into my application, and due to license issues ffmpeg cannot be compiled statically along with the application.
I solved it by copying opencv_ffmpegxxx.dll, which can be found in /bin/, and pasting it into my .exe folder, as suggested here.
There might be some workaround to embed the ffmpeg dll in the application, as suggested here, but I haven't tried it yet. I hope someone else can benefit from this issue.
Thanks for the help.
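A quick way to confirm which video backends your OpenCV build actually has is to print its build information and look at the "Video I/O" section (a sketch, not part of the original fix):

#include <cstdio>
#include <opencv2/core.hpp>

int main()
{
    // the "Video I/O" section lists whether FFMPEG support
    // was compiled in
    std::printf("%s\n", cv::getBuildInformation().c_str());
    return 0;
}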
I am trying to take snapshots from an IP camera using libVLC. But whenever I run the following code, a window opens showing the video stream, and I don't want the media player window to appear. I am taking the video input from the camera via an RTSP link. Is there a way to achieve this while keeping the media player window hidden?
Here is the code I have so far:
// create the libVLC instance, media and player
libvlc_instance_t *inst = libvlc_new(0, NULL);
libvlc_media_t *m = libvlc_media_new_location(inst, "IP/camera/rtsp/link");
libvlc_media_player_t *mp = libvlc_media_player_new_from_media(m);
libvlc_media_player_play(mp);

while (1) { // note: this loop never exits, so the cleanup below is unreachable
    Sleep(500);
    const char* image_path = "E:\\frames\\image.jpg";
    int result = libvlc_video_take_snapshot(mp, 0, image_path, 0, 0);
}

libvlc_media_player_stop(mp);
libvlc_media_player_release(mp);
libvlc_release(inst);
Thanks for your question. Add
const char* const vlc_args[] = {
"--intf", "dummy",
"--vout", "dummy",
};
when creating the new libVLC instance, and pass them as arguments.
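Concretely, that means creating the instance like this (a sketch; the rest of your code stays the same):

const char* const vlc_args[] = {
    "--intf", "dummy", // no interface
    "--vout", "dummy", // no video output window
};
// pass the argument count and vector to libvlc_new()
libvlc_instance_t *inst = libvlc_new(sizeof(vlc_args) / sizeof(vlc_args[0]), vlc_args);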
I am trying to create a video player inside a GTKmm app, and for this I am using mpv. The documentation says that I can embed the video player using an OpenGL view. However, I am having difficulties embedding the player in a GTKmm app.
I have a GLWindow that contains a GLArea, which should then contain the video player. The problem is that when I try to initialize the mpv render context, I get an error telling me that OpenGL was not initialized.
The following is the constructor for my main window:
GLWindow::GLWindow(): GLArea_{}
{
    set_title("GL Area");
    set_default_size(400, 600);
    setlocale(LC_NUMERIC, "C");

    VBox_.property_margin() = 12;
    VBox_.set_spacing(6);
    add(VBox_);

    GLArea_.set_hexpand(true);
    GLArea_.set_vexpand(true);
    GLArea_.set_auto_render(true);
    GLArea_.set_required_version(4, 0);
    VBox_.add(GLArea_);

    mpv = mpv_create();
    if (!mpv)
        throw std::runtime_error("Unable to create mpv context");

    mpv_set_option_string(mpv, "terminal", "yes");
    mpv_set_option_string(mpv, "msg-level", "all=v");
    if (mpv_initialize(mpv) < 0)
        throw std::runtime_error("could not initialize mpv context");

    mpv_render_param params[] = {
        {MPV_RENDER_PARAM_API_TYPE, const_cast<char*>(MPV_RENDER_API_TYPE_OPENGL)},
        {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, static_cast<void*>(new (mpv_opengl_init_params){
            .get_proc_address = get_proc_address,
        })},
        {MPV_RENDER_PARAM_INVALID}
    };

    if (mpv_render_context_create(&mpv_gl, mpv, params) < 0)
        throw std::runtime_error("Failed to create render context");

    mpv_render_context_set_update_callback(mpv_gl, GLWindow::onUpdate, this);
}
As far as I know, this should just initialize the video player view, but the problem arises when I try to create the render context with mpv_render_context_create. I get the following error on that line:
[libmpv_render] glGetString(GL_VERSION) returned NULL.
[libmpv_render] OpenGL not initialized.
The app then terminates with SIGSEGV.
The problem may be in my get_proc_address function; currently I have only implemented it for Linux, and it looks like this:
static void *get_proc_address(void *ctx, const char *name) {
return (void *)glXGetProcAddress(reinterpret_cast<const GLubyte *>(name));
}
To be honest, I am at a loss as to why the OpenGL context is not being created. How do I have to adjust my GTKmm app so that the mpv video player can initialize correctly?
As the error suggests, the problem was that there was no OpenGL context yet. The GLArea is not created instantly; signal_realize fires on the GLArea once the OpenGL view has been created. I had to listen for that event and initialize the mpv variables there, after calling GLArea.make_current() to make the GLArea's context the one we connect to mpv.
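In code, that looks roughly like this (a sketch; onRealize is my name for the handler, and the mpv setup is the same as in the question's constructor):

// in the GLWindow constructor: defer all mpv setup until the
// GLArea actually has an OpenGL context
GLArea_.signal_realize().connect(sigc::mem_fun(*this, &GLWindow::onRealize));

void GLWindow::onRealize()
{
    // make the GLArea's context current before mpv resolves GL
    // functions through get_proc_address
    GLArea_.make_current();

    mpv = mpv_create();
    if (!mpv)
        throw std::runtime_error("Unable to create mpv context");
    if (mpv_initialize(mpv) < 0)
        throw std::runtime_error("could not initialize mpv context");

    // ... create the render context with mpv_render_context_create
    // exactly as in the constructor above ...
}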
I am making a game using SDL, and my SoundHandler class is not working; I cannot figure out why. The file paths are definitely correct, and I have SDL_mixer set up properly, as I have had sound working before. I also get no errors or warnings; the game runs fine, there is just no music.
SoundHandler.h:
enum Sounds
{
    BACKGROUND_MUSIC, STICK_COLLECT
};

class SoundHandler
{
public:
    SoundHandler();
    void PlaySound(Sounds sound);

private:
    Mix_Music *backMusic;
    Mix_Music *stickCollect;
};
SoundHandler.cpp:
SoundHandler::SoundHandler()
{
    Mix_OpenAudio(22050, MIX_DEFAULT_FORMAT, 2, 4096);
    this->backMusic = Mix_LoadMUS("Data//Music//Background.mp3");
    this->stickCollect = Mix_LoadMUS("Data//Sounds//StickCollect.mp3");
    Mix_VolumeMusic(128);
}

void SoundHandler::PlaySound(Sounds sound)
{
    if(sound == BACKGROUND_MUSIC)
    {
        Mix_PlayMusic(this->backMusic, -1);
    }
    if(sound == STICK_COLLECT)
    {
        Mix_PlayMusic(this->stickCollect, 1);
    }
}
Relevant Lines in main.cpp:
// Initialise Sound
SoundHandler soundHandler;
// Play Background Music
soundHandler.PlaySound(BACKGROUND_MUSIC);
// Play Sound
soundHandler.PlaySound(STICK_COLLECT);
I think the problem is the double slashes in the file paths; try using single slashes.
You are in for a long time of debugging without error checking.
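For example (a sketch of the kind of error checking meant here, using SDL_mixer's Mix_GetError):

// check every SDL_mixer call and print why it failed
if (Mix_OpenAudio(22050, MIX_DEFAULT_FORMAT, 2, 4096) < 0) {
    printf("Mix_OpenAudio failed: %s\n", Mix_GetError());
}
this->backMusic = Mix_LoadMUS("Data/Music/Background.mp3");
if (!this->backMusic) {
    printf("Mix_LoadMUS failed: %s\n", Mix_GetError());
}
if (Mix_PlayMusic(this->backMusic, -1) < 0) {
    printf("Mix_PlayMusic failed: %s\n", Mix_GetError());
}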
Has anybody successfully implemented an instrument using MoMu STK on iOS? I am a bit stuck with the initialization of a stream for the instrument.
I am using the tutorial code, and it looks like something is missing:
RtAudio dac;
// Figure out how many bytes in an StkFloat and setup the RtAudio stream.
RtAudio::StreamParameters parameters;
parameters.deviceId = dac.getDefaultOutputDevice();
parameters.nChannels = 1;
RtAudioFormat format = ( sizeof(StkFloat) == 8 ) ? RTAUDIO_FLOAT64 : RTAUDIO_FLOAT32;
unsigned int bufferFrames = RT_BUFFER_SIZE;
dac.openStream( & parameters, NULL, format, (unsigned int)Stk::sampleRate(), &bufferFrames, &tick, (void *)&data );
The error description says that the output parameters for the output device are invalid, but when I skip assigning the device id, it doesn't work either.
Any ideas would be great.
RtAudio is only for desktop apps; there is no need to open a stream when implementing this on iOS.
example:
Header file:
#import "Simple.h"
// make struct to hold
struct TickData {
Simple *synth;
};
// Make instance of the struct in #interface=
TickData data;
Implementation file:
// init the synth:
data.synth = new Simple();
data.synth->keyOff();

// to trigger note on/off:
data.synth->noteOn(frequency, velocity);
data.synth->noteOff(velocity);

// audio callback method:
for (int i = 0; i < FRAMESIZE; i++) {
    buffer[i] = data.synth->tick();
}
Yep, I have a couple of apps in the store with STK classes running on them. Bear in mind that the setup required to run STK on iOS is different from the one required to run it on your desktop.
Here's a tutorial on how to use STK classes inside an iOS app:
https://arielelkin.github.io/articles/mandolin