Since the presentation by Eero Bragge at the Amsterdam DevDays about Qt / Qt Creator, I've been looking for an excuse to try my hand at mobile development. Now this excuse has arrived in the form of the Nokia N900, my new phone!
My other hobby is computer vision, so my first ideas for applications to try and build lie in that direction. My questions now are:
Has anyone tried Qt Creator + OpenCV + Maemo 5? I see there is a year-old port of OpenCV for Maemo Diablo (4.1); has anyone tried that one on Maemo 5?
I see that improvements to the OpenCV port were among the Maemo Google Summer of Code 2009 ideas that didn't make the cut. Is there work being done there?
How easy is it to acquire images from the phone's camera and convert them to something OpenCV understands?
Does anyone have any useful links to share?
"I see that improvements to the OpenCV port were among the Maemo Google Summer of Code 2009 ideas that didn't make the cut. Is there work being done there?"
The project was not selected, and AFAIK the people involved didn't carry it forward.
OpenCV seems to work under Maemo 5 according to the discussion here: http://n2.nabble.com/OpenCV-for-Maemo5-td4172275.html#a4172275
You can use the GStreamer framework to acquire images from both cameras; gst-camerabin should work.
see: http://gstreamer.freedesktop.org/wiki/CameraBin
Related
I would like to recognize objects in Windows applications, mainly computer games. I would like to accomplish this by opening the window in OpenCV, applying all kinds of effects to the game application while it is running, and recognizing objects such as UI elements, messages and even characters that are on the screen.
Since OpenCV only allows video and webcam as input, is there a way to open a running application as a source for OpenCV?
There may be some testing applications used in game development that use similar methods, but I couldn't find any.
I also don't know the right terms that are used when discussing the recognition of virtual objects in a computer program or computer game, but that is exactly what I would like to do.
I tried to look up forums, articles or anything written about this, but found nothing. Any help would be appreciated.
I used OpenCV for a year and I am not sure if you can pass the running application to it.
As an alternative, you might use a function that gives you the current screenshot of the desktop, which you can then pass to OpenCV.
That way you can take screenshots periodically and run your recognition on them.
If you are working under Windows, you might find this discussion useful.
Hope this will somehow help you.
A few months ago I tried using the desktop as a source in OpenCV as an experimental project, and it worked, but there was a trail inside the window, which may depend on the speed of the processor. It uses your own function to capture the desktop as a source, not one from the OpenCV libraries. From there I continued customizing things and got stuck on a memory-related bug, as it uses a lot of memory. I did post about it on Stack Overflow. Hope this information helps.
I cannot find any suitable link or reply about PlayHaven. There is only a sample project on GitHub, which did not help.
If there is a nice tutorial or video link explaining step by step how to do it, please provide the link or help me integrate them.
I'm Zachary Drake, Director of Sales Engineering at PlayHaven. Officially, PlayHaven doesn't currently support Cocos2D, so we don't have instructions on how to integrate PlayHaven into a Cocos2D game. PlayHaven may very well work with Cocos2D by simply following the iOS instructions, but we don't do QA and other testing on Cocos2D the way we do for officially supported platforms (iOS, Android, Unity iOS, Unity Android, Adobe AIR).
This person on the Cocos2d Forums seems to have successfully integrated PlayHaven (after ranting about it), but I have not actually performed the integration steps described and cannot vouch for them:
http://www.cocos2d-iphone.org/forum/topic/96170
The iOS integration documentation is not as good as we'd like it to be, and we are in the process of revising it now. If you email support@playhaven.com with specific iOS integration problems, we can assist you. But if a Cocos2D compatibility issue breaks the integration, unfortunately there won't be anything we can do to get around that.
Thanks for your interest in PlayHaven.
My working PlayHaven sample with Cocos2d: https://www.box.com/s/lc2p8jhngn5o8hxeuw3y
Here is a complete description of the SDK integration:
https://github.com/playhaven/sdk-ios
I'm about to start my final year project which requires me to develop the Kinect Fusion Algorithm. I was told to code in C++ and use the OpenNI API.
Problem:
I read up online but I am still confused as to how to start. I installed Microsoft Visual Studio 2012 Express as well as OpenNI, but how should I start? (I was told to practice coding first before starting to work on the project)
If I want to practice and understand how the code works and how the Kinect responds to it, any advice on how I should start? I am really lost at the moment and hitting a dead end, not knowing what to do next with all the information online, much of which I do not really understand.
First of all, if you're planning to use OpenNI with the Kinect, I advise you not to use version 2.0, which is available at the official website. The reason is simply that there is currently no driver to support the Microsoft Kinect (the company behind OpenNI, PrimeSense, only supports a driver for their own sensor, which is different from the Kinect, and the community hasn't gotten round to writing a Kinect driver yet).
Instead, grab the package from the simple-openni project's downloads page; it contains everything to get you going: libraries from the 1.5.x line.
OpenNI is the barebone framework - it only contains the architecture for natural interface data processing.
NITE is a proprietary (freeware) library by PrimeSense that provides code to process the raw depth images into meaningful data - hand tracking, skeleton tracking etc.
SensorKinect is the community-maintained driver for making the Kinect interact with OpenNI.
Mind you that these drivers don't provide a way to control the Kinect's tilt motor and the LED light. You may need to use libfreenect for that.
As for getting started, both the OpenNI and NITE packages contain source code samples for simple demos of the technology. It's a good idea to start with one and modify it to suit your needs. That's what I've done to get my own project - controlling Google Chrome with Kinect - working.
As for learning C++, there are tons of materials out there. I recommend the book "Thinking in C++" by Bruce Eckel, if you're a technical person.
There are multiple examples written for OpenNI, available at the GitHub repository: https://github.com/OpenNI/OpenNI
Your best place to start is to review the Resources Page at OpenNI.org, followed by the Reference Guide. Then tackle several of the examples -- run them, step through them and modify them to understand how they are working.
Can OpenCV 2.2 provide camera device names for me?
Why I ask: normally (on Mac and Linux) a simple hand-written camera index lister (like this one here, based on this research) works for 2.2.0, but on Windows it only works for OpenCV 2.1 and earlier versions. When you get to 2.2.0 you start experiencing problems, for example:
With ALL virtual cameras
With lots of non-virtual cameras.
And I tell you, it sucks when code that worked across all platforms suddenly breaks.
They say OpenCV can be recompiled manually with the HAVE_VIDEOINPUT and HAVE_DSHOW options... but as a C++ beginner I would really like to see either a tutorial (step by step, on how to recompile OpenCV with such parameters, for example from my VS08) or, better, a way to get the names of capturable cameras from OpenCV, so I could say to the user something like:
We use OpenCV for camera capture and it has not found ANY CAPTURABLE DEVICES ON YOUR MACHINE AT ALL (while we both know you have at least 2). Go buzz the OpenCV community.
The second option is rude, and I am deeply sorry, but please, OpenCV guys, help us all save the world: provide a camera name lister =)
I am a great fan of the video editing sites animoto (and stupiflex). I enjoy computer graphics and also have it as my major subject at university.
My question is: I have been trying to work out what one would need in order to build such an application.
Do any such open-source / free tools exist that can give the kind of quality offered by these professionally built products?
--I've stumbled upon the open-source multimedia framework GStreamer, but I have no idea if it can deliver here.
--And I have been using OpenCV for academic purposes; would that be a better library?
UPDATED:
1. My problems are only with the server-side video processing.
2. Since this is more of a hobby/academic project, I'm looking for free & open-source Linux-based tools/SDKs for my task.
Please share your ideas,
thanks.
Have you checked out http://www.openmovieeditor.org/?