I'm trying to use SwisTrack software.
http://en.wikibooks.org/wiki/SwisTrack
I've installed it using the Windows installer on two computers.
On both computers I have the same problem.
I also tried different USB cameras.
When I choose input from a USB camera, the camera is detected and its light turns on (so the software can see the camera and connect to it), but the frames I get are totally black.
Can anyone help? Is anyone here familiar with SwisTrack?
Thanks
It works for me; only the image output is flipped.
Did you select a source for the output window? (Click on the yellow caption in the window header.)
When I open Ubuntu 16.04 in VMware on Desktops.exe's second screen, I hit an error: the VMware window stays black. I found a lot of suggested solutions, but none of them resolved the problem.
In short, opening VMware on the second screen while using Desktops.exe always gives a black screen.
In fact, I found that on Windows 7, when I open VMware on any screen other than the first with Desktops.exe, VMware cannot show a normal image; yet when it hangs, it can show a normal image.
In short, VMware must be opened on the first screen.
I am using multiple monitors on my PC. One of them is a TV. When I launch my application on my regular monitors, the application gets scaled properly.
However, when I run the application in fullscreen on my TV, the resolution will be too large and the output shows partially on another screen (see the blue colored output in my screenshot).
The TV is connected via HDMI and uses the same resolution as the other screens (1920x1080). It seems to be a software issue, because the output is partially visible on another screen.
I am using the following code to toggle fullscreen mode:
SDL_SetWindowFullscreen( m_Window, SDL_WINDOW_FULLSCREEN );
Any ideas on how to solve this issue?
UPDATE
When I make the TV my main display in Windows, it seems to fit properly on the TV. However, it still shows partially on the other screen (but this time it shows twice) and the mouse positioning is incorrect on the TV. Maybe the resolution is changed differently?
UPDATE 2
Windows 10 allows display scaling (font size) to be set per monitor. This is why my resolution detection in SDL2 reported a different resolution for my TV. Now I need to find a way to work around this.
I have successfully installed all the extension & support apps from Sony and 2-3 example apps, for example HelloSensor. I can select the apps to get cards from, and I can see the HelloSensor output.
But whenever I press the camera button and go to VR mode in the emulator, the device shows the camera app and not the cards of the selected HelloSensor app.
Does anyone have any comments on that?
Regards
I'm not sure what you want to achieve. Pressing the camera button starts the built-in camera application, so the behaviour you are describing is expected.
Maybe it is confusing that the camera feed is not shown in the background until the camera API is in use. That doesn't mean the glasses are not in AR mode: they are always in AR, as the eyewear display is transparent.
I have a graph that displays the preview of a capture card (AVerMedia HD DVR PCI) to a window that is connected to a ps3 via components. I would like to know if there is a simple way of detecting when the resolution of the source changes.
For example, on the PS3 the menu is displayed at 1920x1080 (1080i), and when you enter a game it changes to 1280x720 (720p). I would like to set the resolution using IAMStreamConfig and AM_MEDIA_TYPE, but I need to know when to switch the resolution. If it were left at 1080i, the image would only fill a quarter of the rectangle, which makes for a bad experience.
Would a solution be creating a filter and reading the bytes of the image to detect if there is data there?
Thanks in advance.
I want to control a button using hand motions. For example, in a video frame I create a circle-shaped button. Then when I move my hand to that circle I want to play an mp3 file, and when I move my hand to another circle the mp3 song stops playing. How can I do this?
I am working on Windows 7 and I use Microsoft Visual Studio 2008.
You have infinite options to do that. Probably the easiest is trying to do background segmentation and then check if there's anything which is not background that overlaps with the button area. It would work with any part of your body, not only your hands, but that might not be an issue.
Another option would be to try to detect and track your hands based on skin color. For this you need to obtain a histogram of the skin color and then use it with the CamShift tracker. A nice way to obtain the skin color at runtime would be to run a face detector (Haar cascade) and get the color from the detected region.
I'm sure there are hundreds of additional ways to do it.
Also, if you can get your hands on a Kinect camera it could help a lot. Check OpenNI and the MS Kinect SDK to see what they enable you to do.
The first thing you will have to do is create a Haar cascade XML file and train it on human hands.