Problem
I want to use KinFu with my mini ToF camera (model: PMD CamBoard nano). I have set everything up, and KinFu works with the Kinect.
Solution!?
Tweak the *openni_launch* package to somehow start my camera, or remap my camera's output topics to the ones OpenNI publishes, so that I can use them with the KinFu algorithm.
If you need more information, just ask instead of simply voting down.
Post ideas, partial solutions, anything that could be useful; the question is not trivial.
UPDATE
Any useful information gets rewarded.
You might also want to post your question on cayim.com, the CamBoard nano developer community.
I'm sorry I can't provide further help.
LowKnee
ROS node for PMD camera
Multiple solutions from Cayim for those who can enter the forum
These are suggestions only, not full solutions.
Updated!!!
Related
I am looking to understand the Sunshine project by Loki, which works with Moonlight to stream a desktop with very low latency, based on NVIDIA's GameStream.
I looked at the source code on GitHub, but couldn't figure out how the screen is captured and the stream is converted to an RTSP packet.
I think they use ffmpeg to capture the screen.
I did find the RTSP packet definition here
Could someone explain how this works?
I am trying to understand this at a low level, since I want to implement a similar program as part of a project later (probably in C++ as well).
I don't have much experience here, but am looking to get my hands dirty! :)
Hope my question is clear and any help to understand this or other resources I could take a look at would be greatly appreciated.
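To make the RTSP side of the question concrete: RTSP (RFC 2326) is a text-based, HTTP-like control protocol. The client sends requests such as OPTIONS, DESCRIBE, and SETUP; the server's DESCRIBE reply carries an SDP description of the stream, and the actual video then travels separately over RTP. A minimal sketch of what a request looks like on the wire (the URL here is made up for illustration):

```python
# Hedged sketch of the RTSP request format (RFC 2326); the URL below is
# a made-up example, not anything Sunshine actually serves.

def rtsp_request(method, url, cseq, extra_headers=None):
    """Build the text of an RTSP request, which is deliberately HTTP-like."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (extra_headers or {}).items():
        lines.append(f"{name}: {value}")
    # Headers end with a blank line, exactly as in HTTP.
    return "\r\n".join(lines) + "\r\n\r\n"

req = rtsp_request("DESCRIBE", "rtsp://example.local/desktop", 2,
                   {"Accept": "application/sdp"})
print(req)
```

Roughly speaking, a server like Sunshine answers these control requests and then pushes the encoder's output as RTP packets; the capture-and-encode step is separate from this handshake.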
I am learning Unity 2D games through video tutorials. The tutorial I am following uses an older version of "CN Controls Joystick" (https://www.assetstore.unity3d.com/en/#!/content/15233).
The current version of "CN Controls Joystick" is completely different from the older version used in the tutorial.
So, does anyone have the older version of "CN Controls Joystick"? If you do, please share it with me so I can practice.
Thanks
The older version is not necessary; you only have to know the axis names of each joystick, and I think it is easier now. I recommend opening the example scene and looking at the scripts attached to the player. If you need more help, just tell me what you want to do and I'll show you how to do it.
I would like to recognize objects in Windows applications, mainly computer games. I would like to accomplish this by opening the window in OpenCV, applying all kinds of effects to the running game, and recognizing objects such as UI elements, messages, and even the characters on screen.
Since OpenCV only accepts video files and webcams as input, is there a way to open a running application as a source for OpenCV?
There may be some testing tools used in game development that rely on similar methods, but I couldn't find any.
I also don't know the right terms that are used when discussing the recognition of virtual objects in a computer program or game, but that is exactly what I would like to do.
I tried to look up forums, articles or anything written about this, but found nothing. Any help would be appreciated.
I used OpenCV for a year, and I am not sure you can pass a running application to it directly.
As an alternative, you could use a function that captures the current screenshot of the desktop, and pass that image to OpenCV.
That way you can take screenshots periodically and run your recognition on them.
If you are working under Windows, you might find this discussion useful.
Hope this will somehow help you.
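To sketch that screenshot-and-recognize loop: the capture call below is only a stand-in for a real grab (on Windows you might use something like Pillow's ImageGrab.grab(); OpenCV also provides cv2.matchTemplate for real matching). The matcher here is a plain sum-of-squared-differences search over grayscale NumPy arrays, the simplest form of template matching:

```python
import numpy as np

def fake_screenshot():
    """Stand-in for a real desktop capture; returns a grayscale image
    with one bright 8x8 'UI element' planted at row 20, column 30."""
    img = np.zeros((60, 80), dtype=np.float64)
    img[20:28, 30:38] = 1.0
    return img

def find_template(image, template):
    """Return (row, col) where the template best matches, by scanning
    every placement and minimizing the sum of squared differences."""
    th, tw = template.shape
    best, best_pos = None, None
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            ssd = np.sum((image[r:r + th, c:c + tw] - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

screen = fake_screenshot()          # in practice: a periodic desktop grab
template = np.ones((8, 8))          # the UI element you are looking for
print(find_template(screen, template))   # -> (20, 30)
```

For real use you would replace the brute-force loop with cv2.matchTemplate, which does the same search far faster.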
I tried several methods of using the desktop as a source in OpenCV a few months ago as an experiment project, and I succeeded, although the captured window showed a trailing/ghosting artifact, which may depend on processor speed. It works by using your own capture function to grab the desktop, not OpenCV's own libraries. From there I continued customizing things and got stuck on a memory-related bug, as the approach uses a lot of memory. I posted about it on Stack Overflow. Hope this information helps.
Newbie here. I just wrote an app (Glassware) for Glass using the GDK, and I want to show it to others by making a video vignette (an overlay of video from the camera and the Glass display). I know picture vignettes are possible:
https://support.google.com/glass/answer/3405215?hl=en
But there's no mention of vignettes for videos. Does anybody know if this feature will be implemented soon, or whether it's possible to implement it myself? Sorry, this is not a direct programming question, but I don't know where else to ask. Thanks!
Cliff
You can't do this at the moment, but you could take two video feeds and overlay them manually in an editor to get the same effect. For questions like this, you should really use the Google Glass Explorers Community forum, located here:
https://www.glass-community.com/
I am trying to build an iPhone app that connects to an IP camera. The IP camera is Windows-based, so I need to create a server using C++ and then stream the video to the iPhone app.
Can anyone tell me the best way to go about this task? I am new to programming, so a dummies-style guide would help.
Thanks
Inam
Go to the iTunes Store and download the free app from Avigilon. You won't be able to see any video unless you connect to a system, but it will tell you what ports and user information are needed. There are gateways and streaming methods involved as well. This is not a situation where a new developer will have a lot of success.
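To make one of those streaming methods concrete: a very simple approach for learning (not what commercial IP-camera systems use) is MJPEG over HTTP, where the server pushes JPEG frames in one long multipart/x-mixed-replace response that many clients can display. This Python sketch shows only the wire format, with fake bytes standing in for real JPEG frames; a production server would be written in C++ against the camera's SDK:

```python
import socket
import threading

def serve_one_client(server_sock, frames):
    """Accept one connection and push each frame as a multipart part."""
    conn, _ = server_sock.accept()
    conn.recv(1024)  # read (and ignore) the client's HTTP request
    conn.sendall(b"HTTP/1.0 200 OK\r\n"
                 b"Content-Type: multipart/x-mixed-replace; boundary=frame\r\n\r\n")
    for jpeg in frames:
        conn.sendall(b"--frame\r\n"
                     b"Content-Type: image/jpeg\r\n"
                     b"Content-Length: %d\r\n\r\n" % len(jpeg) +
                     jpeg + b"\r\n")
    conn.close()

# Demo: serve two fake "frames" to a local client and read them back.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
t = threading.Thread(target=serve_one_client,
                     args=(server, [b"frame-one", b"frame-two"]))
t.start()

client = socket.create_connection(server.getsockname())
client.sendall(b"GET /stream HTTP/1.0\r\n\r\n")
data = b""
while True:
    chunk = client.recv(4096)
    if not chunk:
        break
    data += chunk
t.join()
server.close()
print(b"multipart/x-mixed-replace" in data, b"frame-two" in data)
```

Each frame is a self-contained part with its own Content-Length, so the client can display parts as they arrive; that is the whole trick behind MJPEG streaming.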
Your question is rather too broad to fit into a comment box. It seems, and correct me if I'm wrong, that you're basically asking for someone to write the application for you.
Instead, if you're a complete beginner, you'll want to first learn how to program for the platform.
The Stack Overflow question "Howto articles for iPhone development, Objective C" will help you get started with programming for the iPhone.
Once you have the basics down, you might then ask more specific questions.