In this program I want to capture frames from my webcam on a worker thread and then send the frames to another thread named MainThread, which shows the webcam feed in a PictureBox.
So I want to pass the captured frame (_frame1) from capture_frame_1_Thread to MainThread.
Any ideas how to do it?
Here is the code:
VideoCapture cap1(0);
Mat _frame1;

void capture_frame_1() {
    for (;;) {
        cap1 >> _frame1;
        if (waitKey(1) == 27) {
            break;
        }
    }
}

void invoke_capture_frame_1() {
    Invoke(gcnew System::Action(this, &MyForm::capture_frame_1));
}

void start_picture_Boxes() {
    for (;;) {
        mat2picture bimapconvert;
        this->pictureBox1->Image = bimapconvert.Mat2Bimap(_frame1);
        pictureBox1->Refresh();
        if (waitKey(1) == 27) {
            break;
        }
    }
}

void picture_Boxes() {
    Invoke(gcnew System::Action(this, &MyForm::start_picture_Boxes));
}

private: System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
{
    ThreadStart^ ThreadMethod1 = gcnew ThreadStart(this, &MyForm::invoke_capture_frame_1);
    Thread^ capture_frame_1_Thread = gcnew Thread(ThreadMethod1);
    ThreadStart^ ThreadMethod3 = gcnew ThreadStart(this, &MyForm::picture_Boxes);
    Thread^ MainThread = gcnew Thread(ThreadMethod3);
    capture_frame_1_Thread->Start();
    MainThread->Start();
}
};
}
You could create a class named webcam and put those methods in it. Then you can have some method that starts the threads: webcam.startThreads();.
Then you can create a private variable called frame to share between the threads, but don't forget to use a lock to eliminate the race condition.
Info about lock: https://www.cplusplus.com/reference/mutex/mutex/lock/
I hope it helps; I used this structure when making a network filter.
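A minimal sketch of that idea, assuming a hypothetical webcam class whose "frame" is just an int standing in for a cv::Mat (all names here are illustrative, not from the question's code):

```cpp
#include <atomic>
#include <mutex>
#include <thread>

// Hypothetical webcam class: a capture thread writes the shared frame,
// a display thread reads it; a mutex guards every access to the frame.
class webcam {
public:
    void startThreads() {
        capture = std::thread(&webcam::captureLoop, this);
        display = std::thread(&webcam::displayLoop, this);
    }
    void stopAndJoin() {
        capture.join();
        display.join();
    }
    int lastShown() const { return shown; }

private:
    void captureLoop() {
        for (int i = 1; i <= 1000 && running; ++i) {
            std::lock_guard<std::mutex> lock(m);  // protect the shared frame
            frame = i;                            // stand-in for cap >> _frame1
        }
        running = false;                          // tell the display loop to stop
    }
    void displayLoop() {
        while (running) {                         // spins, mirroring the for(;;) loops above
            int local;
            {
                std::lock_guard<std::mutex> lock(m);  // same lock on the reader side
                local = frame;
            }
            shown = local;  // stand-in for drawing into the PictureBox
        }
    }
    std::mutex m;
    int frame = 0;                    // the private variable shared between threads
    std::atomic<bool> running{true};
    std::atomic<int> shown{0};
    std::thread capture, display;
};
```

Without the lock, the reader could observe a half-written frame (for a real cv::Mat, a torn image); with it, each read sees a complete frame.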
I'm a junior programmer. Recently I implemented image grabbing using the Halcon library.
When I press the live button, a timer starts grabbing images. It works, but the main screen freezes in step with the timer cycle.
So I am trying to improve the grabbing performance by using a thread.
First I implemented the thread like this:
[ImageUpdateWorker.h]
class ImageUpdateWorker : public QObject
{
    Q_OBJECT
public:
    explicit ImageUpdateWorker(QObject* parent = 0, QString strThreadName = "ImageUpdateWorker");
    ~ImageUpdateWorker();

signals:
    void finished();
    void grab();

public slots:
    void run();

private:
    bool m_bStop{ false };
};
[ImageUpdateWorker.cpp]
ImageUpdateWorker::ImageUpdateWorker(QObject* parent, QString strThreadName)
    : QObject(parent)
{
    setObjectName(strThreadName);
}

ImageUpdateWorker::~ImageUpdateWorker()
{
}

void ImageUpdateWorker::run()
{
    while (m_bStop == false)
    {
        emit grab();
    }
    emit finished();
}
Second, I implemented an inherited-QWidget UI widget with an output screen like this:
m_pThread = new QThread();
m_pUpdateWorker = new ImageUpdateWorker(nullptr, strName);
m_pUpdateWorker->moveToThread(m_pThread); // UpdateWorker move to Thread
connect(m_pThread, SIGNAL(started()), m_pUpdateWorker, SLOT(run()));
connect(m_pThread, SIGNAL(finished()), m_pThread, SLOT(deleteLater()));
connect(m_pUpdateWorker, SIGNAL(finished()), m_pThread, SLOT(quit()));
connect(m_pUpdateWorker, SIGNAL(finished()), m_pUpdateWorker, SLOT(deleteLater()));
connect(m_pUpdateWorker, SIGNAL(grab()), this, SLOT(onGrab()));
When I call m_pThread->start();, the screen starts blocking :(
If you have any advice or information, I would appreciate it. Thank you for reading.
Use m_pImageUpdateThread->moveToThread(m_pThread);
I don't know Qt.
I'm sending you the code I used in C#.
The main point is that you must use delegates if you don't want to freeze the GUI.
hDisplay is the HalconDotNet:HWindowControlWPF object.
camera is a class where I define the camera parameters.
Inside camera.Grab there is this code:
HOperatorSet.GrabImage(out ho_Image, _AcqHandle);
h_Image = new HImage(ho_Image);
At initialization there is this code:
// Initialise the delegate
updateLiveDelegate = new UpdateLiveDelegate(this.UpdateLive);
HImage ho_Image = new HImage();
Here is the code I use:
// ==================
// LIVE
// ==================
bool stopLive = true;

// Declare the thread
private Thread liveThread = null;

// Declare a delegate used to communicate with the UI thread
private delegate void UpdateLiveDelegate();
private UpdateLiveDelegate updateLiveDelegate = null;

private void btnLive_Click(object sender, RoutedEventArgs e)
{
    try
    {
        stopLive = !stopLive;
        // if stopLive = false, live camera is activated
        if (!stopLive)
        {
            // Launch the thread
            liveThread = new Thread(new ThreadStart(Live));
            liveThread.Start();
        }
    }
    catch (Exception ex)
    {
        // Error
    }
}

private void Live()
{
    try
    {
        while (stopLive == false)
        {
            if (camera.Grab(out ho_Image))
            {
                // Show progress
                Dispatcher.Invoke(this.updateLiveDelegate);
            }
            else
            {
                // No grab
                stopLive = true;
            }
        }
        // here stopLive is true
    }
    catch (Exception ex)
    {
        // Error
    }
}

private void UpdateLive()
{
    try
    {
        int imageHeight;
        int imageWidth;
        string imageType;
        ho_Image.GetImagePointer1(out imageType, out imageWidth, out imageHeight);
        hDisplay.HalconWindow.SetPart(0, 0, imageHeight - 1, imageWidth - 1);
        // display
        hDisplay.HalconWindow.DispImage(ho_Image);
    }
    catch (Exception ex)
    {
        // Error
    }
}
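The same shape (a worker loop producing frames, with the display work marshaled to one designated "UI" thread) can be sketched in portable C++. A tiny queue stands in for the WPF dispatcher here; note it is a fire-and-forget post rather than the synchronous Dispatcher.Invoke, which is enough to show the structure. All names are illustrative:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal stand-in for a UI dispatcher: the "UI" thread drains a queue of
// callbacks that worker threads post to it.
class Dispatcher {
public:
    void post(std::function<void()> fn) {
        std::lock_guard<std::mutex> lock(m);
        q.push(std::move(fn));
        cv.notify_one();
    }
    void stop() {
        std::lock_guard<std::mutex> lock(m);
        done = true;
        cv.notify_one();
    }
    // Run on the "UI" thread until stop() was called and the queue is drained.
    void run() {
        for (;;) {
            std::function<void()> fn;
            {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [this] { return done || !q.empty(); });
                if (q.empty()) return;  // done and fully drained
                fn = std::move(q.front());
                q.pop();
            }
            fn();  // the "display" work runs on this thread only
        }
    }
private:
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> q;
    bool done = false;
};

// Demo: a worker "grabs" three frames and marshals the display step to the
// dispatcher thread; returns the last frame number that was "displayed".
int runDemo() {
    Dispatcher ui;
    int shown = 0;
    std::thread worker([&] {
        for (int frame = 1; frame <= 3; ++frame)
            ui.post([&shown, frame] { shown = frame; });  // like Dispatcher.Invoke
        ui.stop();
    });
    ui.run();       // plays the role of the GUI event loop
    worker.join();
    return shown;   // posts execute in FIFO order, so the last one wins
}
```

Because only the dispatcher thread ever touches `shown`, no extra lock around the display state is needed, which is exactly why GUI frameworks insist on this marshaling.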
Just implemented an SDL_Renderer in my engine:
state_t init_rend(window_t *context, flag_t flags) {
    rend.renderer = NULL;
    rend.renderer = SDL_CreateRenderer(context, -1, flags);
    rend.index = -1;
    if (rend.renderer != NULL) {
        return TRUE;
    } else {
        return FALSE;
    }
}
In my client/test app:
// Init Base2D game
init_env(VIDEO|AUDIO);
// Init Display display
init_disp(640,480,"Display",RESIZABLE|VISIBLE,make_color(255,255,255,255));
// Init Renderer renderer
init_rend(display->window,SOFTWARE);
// Game Loop
state_t status;
while (TRUE) {
    update();
    status = listen();
    if (!status) {
        break;
    }
    /* User Event Handles */
}
And I could handle window resizing successfully with:
void resize_window() {
    printf("I was here!\n");
    SDL_FreeSurface(display->container);
    printf("Now I am here\n");
    display->container = SDL_GetWindowSurface(display->window);
    SDL_FillRect(
        display->container,
        NULL,
        SDL_MapRGBA(
            display->container->format,
            get_red(),
            get_green(),
            get_blue(),
            get_alpha()
        )
    );
}
However, since I implemented the renderer, whenever I attempt to resize my display it segfaults on the call to SDL_FreeSurface(display->container).
As I have mentioned, the resizing worked fine until I implemented a renderer.
Why is this happening?
Following the link provided by user keltar, it seems the way to go in SDL2 is to use a renderer for drawing to the window instead of the old SDL1 method of using a surface.
I did just that: I removed the surface code and used a renderer only, and the code now works without problems.
Thank you.
EDIT: The first answer solved my problem. Apart from that, I had to set the ASI_BANDWIDTH_OVERLOAD value to 0.
I am programming a Linux application in C++/Qt 5.7 to track stars with my telescope. I use a camera (ZWO ASI 120MM with the accompanying SDK v0.3) and grab its frames in a while loop in a separate thread. These are then emitted to a QOpenGLWidget to be displayed. I have the following problem: when the mouse is inside the QOpenGLWidget area, the displayed frames get corrupted, especially when the mouse is moved. The problem is worst when I use an exposure time of 50 ms and disappears for lower exposure times. When I feed the pipeline with alternating images from disk, the problem disappears. I assume that this is some sort of thread-synchronization problem between the camera thread and the main thread, but I couldn't solve it. The same problem appears in the openastro software. Here are parts of the code:
MainWindow:
MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent){
    mutex = new QMutex;
    camThread = new QThread(this);
    camera = new Camera(nullptr, mutex);
    display = new GLViewer(this, mutex);
    setCentralWidget(display);
    cameraHandle = camera->getHandle();
    connect(camThread, SIGNAL(started()), camera, SLOT(connect()));
    connect(camera, SIGNAL(exposureCompleted(const QImage)), display, SLOT(showImage(const QImage)), Qt::BlockingQueuedConnection );
    camera->moveToThread(camThread);
    camThread->start();
}
The routine that grabs the frames:
void Camera::captureFrame(){
    while( cameraIsReady && capturing ){
        mutex->lock();
        error = ASIGetVideoData(camID, buffer, bufferSize, int(exposure*2*1e-3)+500);
        if(error == ASI_SUCCESS){
            frame = QImage(buffer,width,height,QImage::Format_Indexed8).convertToFormat(QImage::Format_RGB32); //Indexed8 is for 8bit
            mutex->unlock();
            emit exposureCompleted(frame);
        }
        else {
            cameraStream << "timeout" << endl;
            mutex->unlock();
        }
    }
}
The slot that receives the image:
bool GLViewer::showImage(const QImage image)
{
    mutex->lock();
    mOrigImage = image;
    mRenderQtImg = mOrigImage;
    recalculatePosition();
    updateScene();
    mutex->unlock();
    return true;
}
And the GL function that sets the image:
void GLViewer::renderImage()
{
    makeCurrent();
    glClear(GL_COLOR_BUFFER_BIT);
    if (!mRenderQtImg.isNull())
    {
        glLoadIdentity();
        glPushMatrix();
        {
            if (mResizedImg.width() <= 0)
            {
                if (mRenderWidth == mRenderQtImg.width() && mRenderHeight == mRenderQtImg.height())
                    mResizedImg = mRenderQtImg;
                else
                    mResizedImg = mRenderQtImg.scaled(QSize(mRenderWidth, mRenderHeight),
                                                      Qt::IgnoreAspectRatio,
                                                      Qt::SmoothTransformation);
            }
            glRasterPos2i(mRenderPosX, mRenderPosY);
            glPixelZoom(1, -1);
            glDrawPixels(mResizedImg.width(), mResizedImg.height(), GL_RGBA, GL_UNSIGNED_BYTE, mResizedImg.bits());
        }
        glPopMatrix();
        glFlush();
    }
}
I stole this code from here: https://github.com/Myzhar/QtOpenCVViewerGl
And lastly, here is how my problem looks:
This looks awful.
The image producer should produce new images and emit them through a signal. Since QImage is implicitly shared, it will automatically recycle frames to avoid new allocations. Only when the producer thread out-runs the display thread will image copies be made.
Instead of using an explicit loop in the Camera object, you can run the capture using a zero-duration timer, and having the event loop invoke it. That way the camera object can process events, e.g. timers, cross-thread slot invocations, etc.
There's no need for explicit mutexes, nor for a blocking connection. Qt's event loop provides cross-thread synchronization. Finally, the QtOpenCVViewerGl project performs image scaling on the CPU and is really an example of how not to do it. You can get image scaling for free by drawing the image on a quad, even though that's also an outdated technique from the fixed pipeline days - but it works just fine.
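QImage's implicit sharing (the reason emitting it by value across threads is cheap) can be illustrated with a tiny copy-on-write wrapper in plain C++. This is only a conceptual stand-in, not Qt code, and unlike Qt's atomically reference-counted version it is not itself thread-safe:

```cpp
#include <memory>
#include <vector>

// Conceptual stand-in for an implicitly shared image: copies share the
// pixel buffer until someone writes, at which point the writer detaches.
class SharedImage {
public:
    explicit SharedImage(std::size_t n) : d(std::make_shared<std::vector<int>>(n)) {}
    SharedImage(const SharedImage&) = default;          // cheap: shares the buffer
    long useCount() const { return d.use_count(); }     // how many copies share it
    int at(std::size_t i) const { return (*d)[i]; }
    void set(std::size_t i, int v) {
        if (d.use_count() > 1)                          // someone else holds it:
            d = std::make_shared<std::vector<int>>(*d); // detach (copy-on-write)
        (*d)[i] = v;                                    // now safe to modify
    }
private:
    std::shared_ptr<std::vector<int>> d;                // shared pixel buffer
};
```

So when the producer outruns the consumer and writes while the consumer still holds a copy, only then does a real buffer copy happen; otherwise frames are recycled for free.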
The ASICamera class would look roughly as follows:
// https://github.com/KubaO/stackoverflown/tree/master/questions/asi-astro-cam-39968889
#include <QtOpenGL>
#include <QOpenGLFunctions_2_0>
#include "ASICamera2.h"
class ASICamera : public QObject {
    Q_OBJECT
    ASI_ERROR_CODE m_error;
    ASI_CAMERA_INFO m_info;
    QImage m_frame{640, 480, QImage::Format_RGB888};
    QTimer m_timer{this};
    int m_exposure_ms = 0;
    inline int id() const { return m_info.CameraID; }
    void capture() {
        m_error = ASIGetVideoData(id(), m_frame.bits(), m_frame.byteCount(),
                                  m_exposure_ms*2 + 500);
        if (m_error == ASI_SUCCESS)
            emit newFrame(m_frame);
        else
            qDebug() << "capture error" << m_error;
    }
public:
    explicit ASICamera(QObject * parent = nullptr) : QObject{parent} {
        connect(&m_timer, &QTimer::timeout, this, &ASICamera::capture);
    }
    ASI_ERROR_CODE error() const { return m_error; }
    bool open(int index) {
        m_error = ASIGetCameraProperty(&m_info, index);
        if (m_error != ASI_SUCCESS)
            return false;
        m_error = ASIOpenCamera(id());
        if (m_error != ASI_SUCCESS)
            return false;
        m_error = ASIInitCamera(id());
        if (m_error != ASI_SUCCESS)
            return false;
        m_error = ASISetROIFormat(id(), m_frame.width(), m_frame.height(), 1, ASI_IMG_RGB24);
        if (m_error != ASI_SUCCESS)
            return false;
        return true;
    }
    bool close() {
        m_error = ASICloseCamera(id());
        return m_error == ASI_SUCCESS;
    }
    Q_SIGNAL void newFrame(const QImage &);
    QImage frame() const { return m_frame; }
    Q_SLOT bool start() {
        m_error = ASIStartVideoCapture(id());
        if (m_error == ASI_SUCCESS)
            m_timer.start(0);
        return m_error == ASI_SUCCESS;
    }
    Q_SLOT bool stop() {
        m_timer.stop();
        m_error = ASIStopVideoCapture(id());
        return m_error == ASI_SUCCESS;
    }
    ~ASICamera() {
        stop();
        close();
    }
};
Since I'm using a dummy ASI API implementation, the above is sufficient. Code for a real ASI camera would need to set appropriate controls, such as exposure.
The OpenGL viewer is also fairly simple:
class GLViewer : public QOpenGLWidget, protected QOpenGLFunctions_2_0 {
    Q_OBJECT
    QImage m_image;
    void ck() {
        for (GLenum err; (err = glGetError()) != GL_NO_ERROR;) qDebug() << "gl error" << err;
    }
    void initializeGL() override {
        initializeOpenGLFunctions();
        glClearColor(0.2f, 0.2f, 0.25f, 1.f);
    }
    void resizeGL(int width, int height) override {
        glViewport(0, 0, width, height);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, width, height, 0, 0, 1);
        glMatrixMode(GL_MODELVIEW);
        update();
    }
    // From http://stackoverflow.com/a/8774580/1329652
    void paintGL() override {
        auto scaled = m_image.size().scaled(this->size(), Qt::KeepAspectRatio);
        GLuint texID;
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glGenTextures(1, &texID);
        glEnable(GL_TEXTURE_RECTANGLE);
        glBindTexture(GL_TEXTURE_RECTANGLE, texID);
        glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGB, m_image.width(), m_image.height(), 0,
                     GL_RGB, GL_UNSIGNED_BYTE, m_image.constBits());
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0);
        glVertex2f(0, 0);
        glTexCoord2f(m_image.width(), 0);
        glVertex2f(scaled.width(), 0);
        glTexCoord2f(m_image.width(), m_image.height());
        glVertex2f(scaled.width(), scaled.height());
        glTexCoord2f(0, m_image.height());
        glVertex2f(0, scaled.height());
        glEnd();
        glDisable(GL_TEXTURE_RECTANGLE);
        glDeleteTextures(1, &texID);
        ck();
    }
public:
    GLViewer(QWidget * parent = nullptr) : QOpenGLWidget{parent} {}
    void setImage(const QImage & image) {
        Q_ASSERT(image.format() == QImage::Format_RGB888);
        m_image = image;
        update();
    }
};
Finally, we hook the camera and the viewer together. Since the camera initialization may take some time, we perform it in the camera's thread.
The UI should emit signals that control the camera, e.g. to open it, start/stop acquisition, etc., and have slots that provide feedback from the camera (e.g. state changes). A free-standing function would take the two objects and hook them together, using functors as appropriate to adapt the UI to a particular camera. If adapter code would be extensive, you'd use a helper QObject for that, but usually a function should suffice (as it does below).
class Thread : public QThread { public: ~Thread() { quit(); wait(); } };

// See http://stackoverflow.com/q/21646467/1329652
template <typename F>
static void postToThread(F && fun, QObject * obj = qApp) {
    QObject src;
    QObject::connect(&src, &QObject::destroyed, obj, std::forward<F>(fun),
                     Qt::QueuedConnection);
}

int main(int argc, char ** argv) {
    QApplication app{argc, argv};
    GLViewer viewer;
    viewer.setMinimumSize(200, 200);
    ASICamera camera;
    Thread thread;
    QObject::connect(&camera, &ASICamera::newFrame, &viewer, &GLViewer::setImage);
    QObject::connect(&thread, &QThread::destroyed, [&]{ camera.moveToThread(app.thread()); });
    camera.moveToThread(&thread);
    thread.start();
    postToThread([&]{
        camera.open(0);
        camera.start();
    }, &camera);
    viewer.show();
    return app.exec();
}
#include "main.moc"
The GitHub project includes a very basic ASI camera API test harness and is complete: you can run it and see the test video rendered in real time.
I'm writing a simple Qt application to test multi-threading (something I am also completely new to). I made a QApplication to manage the GUI, then I wrote a class VisionApp that contains the class MainWindow, which is a subclass of QMainWindow.
In the MainWindow class I wrote a function void MainWindow::getfromfilevd(), which is connected to a button using this:
QObject::connect(ui->FileVdButton,SIGNAL(clicked()),this,SLOT(getfromfilevd()));
Then I want to read an image from a file using QFileDialog::getOpenFileName; my code is here:
void MainWindow::getfromfilevd()
{
    //mutex.lock();
    from_imgorvd = true;
    QString fileName = QFileDialog::getOpenFileName(this, tr("Open Image"), "", tr("Image Files (*.png *.jpg *.bmp *.xpm)"));
    if(fileName.isEmpty()) {
        cv::Mat image;
        image = cv::imread(fileName.toUtf8().constData(), CV_LOAD_IMAGE_COLOR);
        mutex.lock();
        Mat_Img = image.clone();
        mutex.unlock();
    }
}
However, every time I click the button, the QFileDialog window opens but is blank, and then my program exits unexpectedly.
When I use this code instead:
void MainWindow::getfromfilevd()
{
    from_imgorvd = true;
    cv::Mat image;
    image = cv::imread("/home/somnus/Picture/mouse.jpg", CV_LOAD_IMAGE_COLOR);
    if(! image.data) {
        std::cout << "Could not open or find the image" << std::endl ;
    }
    else {
        mutex.lock();
        Mat_Img = image.clone();
        mutex.unlock();
    }
}
It works well.
I really wonder what mistake I am making...
Hoping for your help.
It should be !fileName.isEmpty() instead of fileName.isEmpty(), because you need to load the image when the file name is not empty, not the opposite.
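The corrected guard runs the load step only when a name was actually chosen; a minimal stand-alone sketch of that logic (plain C++ with a hypothetical shouldLoad helper standing in for the Qt slot):

```cpp
#include <string>

// Hypothetical stand-in for the slot's guard: QFileDialog::getOpenFileName
// returns an empty string when the user cancels the dialog.
bool shouldLoad(const std::string& fileName) {
    // Corrected check: only attempt cv::imread when the name is NOT empty.
    return !fileName.empty();
}
```

With the original (inverted) check, cv::imread was called with an empty path exactly when the user cancelled, and never when a file was picked.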