I am relatively new to OpenCV. My program will have a fixed camera that tracks insects moving past it. I figured this means I can remove the background from the video. I have attempted the method from a tutorial (http://docs.opencv.org/3.1.0/d1/dc5/tutorial_background_subtraction.html#gsc.tab=0):
pMOG2 = cv::createBackgroundSubtractorMOG2();
..
pMOG2->apply(frame, background);
However, how does this determine the background?
I have tried another way, which I thought might work: capture the background once when the program first starts and then call cv::absdiff() (or cv::subtract()) on the background and the current frame. Unfortunately, this results in a strange image with parts of the static background displayed over the video, which messes up the tracking.
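Roughly, that attempt looks like this (a simplified sketch; cap is my cv::VideoCapture and the threshold value is arbitrary):

cv::Mat background, frame, diff, mask;
cap >> background; // grabbed once when the program starts
while (cap.read(frame)) {
    cv::absdiff(frame, background, diff);
    cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);
    cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY); // threshold picked by eye
    // mask should mark the insects, but parts of the static background bleed through
}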
I am a bit confused as to what would be the best way to do things. Is it possible to remove a specific background from each frame?
Thanks!
I am using Wowza Media Streams for live streaming in my project, with an overlay on the video. My question: I want to hide the overlay after an interval of time. Please guide me; is there any way to do this? My code for displaying the overlay is:
wowzaImage = new OverlayImage(basePath+"logo_14.png",100);
mainImage.addOverlayImage(wowzaImage,srcWidth-wowzaImage.GetWidth(1.0),0);
This displays the overlay. To hide it after a fixed time, I tried this:
mainImage.addOverlayImage(null,srcWidth-wowzaImage.GetWidth(1.0),0);
But this didn't work. I also tried:
wowzaImage = new OverlayImage(basePath+"logo_14.png",0);
mainImage.addOverlayImage(wowzaImage,srcWidth-wowzaImage.GetWidth(1.0),0);
But it still shows the overlay.
Please help, Thanks
If you are looking at the example here, you can find a couple of ways to accomplish this (depending on how you've set it up). You can leverage the addFadingStep() convenience function as follows:
mainImage.addFadingStep([start-value],[end-value],[number-of-frames]);
Otherwise you can fade the image itself away on each frame (see the onBeforeScaleFrame event handler), similar to the way you were suggesting in your question:
OverlayImage img = new OverlayImage([resource], opacity);
mainImage.addOverlayImage(img,xpos, ypos);
For the latter you will want to be sure the image is referenced correctly and that the positioning holds true.
Thanks,
Matt
I am working on a project that presents live data acquired in real-time using the QCustomPlot plug-in for Qt. The display has a black background color, and the multiple channels of data are colored differently. When taking a screenshot, we would like to make it printer-friendly, so the background is white and all data is black. I am thinking of a solution like this:
Change all colors the way I want by manipulating the pointers for the graphical objects
Grab the screenshot using QWidget::grab() to get a QPixmap
Change all the colors back to normal
This did not work at first, because the system could not change the colors in time for the screenshot to be taken. So I used a QApplication::processEvents(), and it all worked on my Mac.
However, it does not work on a Windows 7 (which is required). Any ideas what to do?
Code:
QString fileLocation = "...";
toggleColors(false); //function to toggle the colors
QApplication::processEvents();
QPixmap shot = grab();
toggleColors(true);
shot.save(fileLocation, "png");
Again: it works on Mac, but not on Windows.
Update 1. The content of toggleColors() includes:
if (enable)
    ui->plot->setBackground(QBrush(Qt::black));
else
    ui->plot->setBackground(QBrush(Qt::white));
ui->plot->repaint();
I also tried with ui->plot->update() instead.
I am not sure what the problem is on Windows specifically, but I recommend calling QWidget::update() on the given widget. That schedules a repaint, so the widget re-renders itself on the next pass through the event loop.
On the other hand, I'm not sure why toggleColors() didn't somehow cause that to happen.
Also, ensure that updates weren't disabled via QWidget::setUpdatesEnabled(false).
It appears the problem lies with QCustomPlot. It was solved by performing a ui->plot->replot(), which is specific to QCustomPlot and not part of QWidget.
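In other words, a minimal sketch of the fix (the enclosing class name is assumed; the body otherwise matches the snippet from Update 1):

void Widget::toggleColors(bool enable)
{
    if (enable)
        ui->plot->setBackground(QBrush(Qt::black));
    else
        ui->plot->setBackground(QBrush(Qt::white));
    ui->plot->replot(); // replot() instead of repaint(): QCustomPlot re-renders immediately
}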
I am currently using the 16-bit libnds example (with devkitPro) as a basis and am trying to display text and a PNG background image on the same screen (in this example, the top screen). I am having a similar issue to this post.
I get garbage at the top of the screen (only if consoleInit(...) is called), similar to the first problem in that thread. The only difference is that I am displaying the background image with a different method, so the fixes made in that thread do not apply here.
All I am looking for is a way to fix the garbage at the top of the screen. If there is a more efficient or better way to display the image, I am willing to accept it; I just haven't found a detailed enough tutorial on how to load an image as a background without using this method. Any help would be appreciated, and I will answer any questions about what is not working.
You can find the project attached here.
Sorry for the long delay, but there are a few issues with your code. The first is that in Mode 4 the only background that can be set up as a 16-bit bitmap is layer 3. http://answers.drunkencoders.com/what-graphics-modes-does-the-ds-support/
Next, the layers all share a single chunk of background memory, and your garbage comes from overwriting part of the bitmap in video memory with the characters for the font and the map for the console background. A simple solution is to move the bitmap by setting its map base to 1. This offsets it in graphics memory by 16KB, which leaves 16KB of room for your text layer. (This only works because we can't display the entire 256x256 image on screen at once due to the resolution of the DS; 256x256x2 bytes fills up all of memory bank A. To be more correct we should assign another memory bank to the main background, but since we can't see the bottom 70 or so lines of pixels of our image anyway, it's okay that they didn't quite make it into video memory.)
libnds also has a macro to make finding the memory for your background a bit simpler, called bgGetGfxPtr(id), which gets a pointer to your background gfx in video memory after you set it up, so you don't have to calculate it via an offset from BG_GFX.
In all, the changes to your code should look like this (I added a version of this to the libnds code FAQ at http://answers.drunkencoders.com/wp-admin/post.php?post=289&action=edit&message=1):
#include <nds.h>
#include <stdio.h>
#include "drunkenlogo.h" // generated header for the bitmap; name assumed from drunkenlogoBitmap

int main(void) {
    // Top screen pic init
    videoSetMode(MODE_4_2D);
    vramSetBankA(VRAM_A_MAIN_BG);

    // Map base 1 pushes the bitmap 16KB into VRAM, leaving the first 16KB
    // free for the console's font tiles and map
    int bg = bgInit(3, BgType_Bmp16, BgSize_B16_256x256, 1, 0);
    decompress(drunkenlogoBitmap, bgGetGfxPtr(bg), LZ77Vram); // decompress the top image straight into VRAM

    // Console on layer 0; its tiles and map now fit below the bitmap
    consoleInit(0, 0, BgType_Text4bpp, BgSize_T_256x256, 4, 0, true, true);

    iprintf("\x1b[1;1HThe garbage is up here ^^^^^.");
    iprintf("\x1b[21;1HTesting the text function...");

    while(1) {
        swiWaitForVBlank();
        scanKeys();
        if (keysDown() & KEY_START) break;
    }
    return 0;
}
I'm trying to display video from a webcam. I capture the images from the webcam using OpenCV and then try to display them on a GtkImage.
This is my code, which runs in a separate thread.
gpointer View::updateView(gpointer v)
{
    IplImage *image;
    CvCapture *camera;
    GMutex *mutex;
    View *view;

    view = (View*)v;
    camera = view->camera;
    mutex = view->cameraMutex;

    while(1)
    {
        g_mutex_lock(view->cameraMutex);
        image = cvQueryFrame(camera);
        g_mutex_unlock(view->cameraMutex);
        if(image == NULL) continue;
        cvCvtColor(image, image, CV_BGR2RGB);

        GDK_THREADS_ENTER();
        g_object_unref(view->pixbuf);
        view->pixbuf = gdk_pixbuf_new_from_data((guchar*)image->imageData,
            GDK_COLORSPACE_RGB, FALSE, image->depth, image->width,
            image->height, image->widthStep, NULL, NULL);
        gtk_image_set_from_pixbuf(GTK_IMAGE(view->image), view->pixbuf);
        gtk_widget_queue_draw(view->image);
        GDK_THREADS_LEAVE();

        usleep(10000);
    }
}
What happens is that one image is taken from the webcam and displayed and then the GtkImage stops updating.
In addition, when I try to use cvReleaseImage, I get a segfault saying free() has been passed an invalid pointer.
GTK is an event-driven toolkit, like many others. What you're doing is queuing the new images to draw in an infinite loop, but you never give GTK a chance to draw them. This is not how a message pump works. You need to hand control back to GTK so it can draw the updated image. The way to do that is explained in the gtk_events_pending documentation.
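As a rough sketch of that idiom (following the pattern the GTK documentation describes), the capture loop would flush pending events on each iteration:

/* at the end of each capture-loop iteration, let GTK process its queue */
while (gtk_events_pending())
    gtk_main_iteration();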
Moreover, allocating/drawing/deallocating a GdkPixbuf for each image is sub-optimal. Just allocate the buffer once outside your loop, draw into it inside your loop (overwriting the previous content), and display it. You only need to allocate a new buffer if your image size changes.
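A sketch of that reuse (variable names are mine; dimensions come from the first captured frame):

/* once, before the loop */
GdkPixbuf *pixbuf = gdk_pixbuf_new(GDK_COLORSPACE_RGB, FALSE, 8,
                                   image->width, image->height);

/* each frame: copy into the existing buffer, honoring the pixbuf's rowstride */
guchar *dst = gdk_pixbuf_get_pixels(pixbuf);
int stride = gdk_pixbuf_get_rowstride(pixbuf);
for (int y = 0; y < image->height; y++)
    memcpy(dst + y * stride,
           image->imageData + y * image->widthStep,
           image->width * 3);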
I don't know how to work with GtkImage, but my guess is that you are not passing the newer images to the window. You need something like the native cvShowImage executing inside the loop. If it isn't that, I don't know.
Also, you shouldn't release the image used for capture. OpenCV allocates and deallocates it itself.
EDIT: Try using the OpenCV functions for viewing images and see if the problem is still there.
I want to control a button using hand motions. For example, in a video frame I create a circle-shaped button. When I move my hand to that circle, an mp3 file should start playing, and when I move my hand to another circle, the mp3 should stop. How can I do this?
I am working on Windows 7 and use Microsoft Visual Studio 2008.
You have countless options to do that. Probably the easiest is background segmentation, followed by a check for anything that is not background overlapping the button area. It would respond to any part of your body, not only your hands, but that might not be an issue.
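A minimal sketch of that first idea (the button position, radius, and foreground-pixel threshold are illustrative assumptions, not values from your setup):

cv::Ptr<cv::BackgroundSubtractor> subtractor = cv::createBackgroundSubtractorMOG2();
cv::Point center(100, 100); // assumed button position
int radius = 30;            // assumed button radius
cv::Mat frame, fgMask, roiMask;
while (cap.read(frame)) {
    subtractor->apply(frame, fgMask);               // foreground = anything not background
    roiMask = cv::Mat::zeros(fgMask.size(), CV_8U);
    cv::circle(roiMask, center, radius, cv::Scalar(255), -1); // filled button region
    int hits = cv::countNonZero(fgMask & roiMask);
    if (hits > 200) {  // tune this threshold
        // something is over the button: toggle the mp3 here
    }
}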
Another option would be to detect and track your hands based on skin color. For this you need to obtain a histogram of the skin color and then use it with the CamShift tracker. A nice way to obtain the skin color at runtime would be running a face detector (Haar cascade) and taking the color from the detected region.
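A sketch of the tracking step of that approach (skinHist, window, and frame are assumed to come from the face-detection and capture steps described above):

cv::Mat hsv, backproj;
cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
float hueRange[] = {0, 180};
const float* ranges[] = {hueRange};
int channels[] = {0};
cv::calcBackProject(&hsv, 1, channels, skinHist, backproj, ranges);
// CamShift updates 'window' (a cv::Rect) in place to follow the skin-colored blob
cv::RotatedRect track = cv::CamShift(backproj, window,
    cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1));
// then test whether 'window' overlaps the button circle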
I'm sure there are hundreds of additional ways to do it.
Also, if you can get your hands on a Kinect camera it could help a lot. Check OpenNI and the MS Kinect SDK to see what they enable you to do.
The first thing you will have to do is train a Haar cascade on human hands and create the XML classifier file.