Limiting data transferred through a socket in C++

I am working on USB redirection software that redirects a USB device over the network by adding a virtual USB device on the client machine. Everything works fine, but the client complains that when he connects a webcam at 640×480 resolution, his 100 Mbps network chokes. I have tested the webcam on a 1 Gbps adapter and it uses around 16% (160 Mbps) of the bandwidth. Should a webcam take this much bandwidth? In any case, he wants network usage to stay under 50 Mbps.
I have tried compressing the data I get from DeviceIoControl and then decompressing it on the client side before passing it to DeviceIoControl. This works fine for file transfer, but video stops working even though bandwidth drops to around 50 Mbps. I have also tried adding short delays before sending data, but that results in a black screen. Now I am thinking of somehow forcing the camera resolution down to 320×240. I am not sure if there is any other way of decreasing the amount of data produced by DeviceIoControl.
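For context, the compress-before-send attempt was presumably of this shape (a hypothetical sketch using zlib's standard compress()/compressBound() calls; the buffer handling around DeviceIoControl is illustrative, not the actual redirection code). Lossless compressors gain very little on raw video frames while adding CPU cost and latency, which is consistent with file transfer surviving the change but video breaking:

    #include <zlib.h>
    #include <vector>

    // Hypothetical illustration: compress a device I/O buffer before it
    // goes out on the socket.
    std::vector<unsigned char> CompressBuffer(const unsigned char* data, size_t len)
    {
        uLongf destLen = compressBound(static_cast<uLong>(len));
        std::vector<unsigned char> out(destLen);
        if (compress(out.data(), &destLen, data, static_cast<uLong>(len)) != Z_OK)
            return {};                 // caller falls back to sending uncompressed
        out.resize(destLen);
        return out;
    }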
I would really appreciate it if you could share your thoughts and point me in the right direction. Thanks in advance.
Edit:
It's a YUV2-format webcam.
Is there any open-source library I can use to decrease the frame rate or resolution of a webcam on the Windows platform?

If the data is uncompressed: 640 px/line × 480 lines/frame × 30 frames/s × 24 bit/px = 221,184,000 bit/s ≈ 221 Mbps (≈ 211 Mbps if 1 Mbit is counted as 1024² bits). For a 16 bit/px YUV 4:2:2 format such as YUY2, the same calculation gives ≈ 147 Mbps, which is consistent with the ~160 Mbps you measured once USB and protocol overhead are added.
You can check the webcam's documentation to see whether it supports some sort of on-camera compression (e.g. MJPEG) or frame-rate control.
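If changing the capture application is an option, OpenCV (open source, works on Windows) can request a lower mode from the camera driver. A minimal sketch; note this acts at the application level rather than inside a redirection driver, and whether CAP_PROP_FPS is honored depends on the camera's driver:

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Ask the driver for 320x240 @ 15 fps; the camera rounds to the
        // nearest mode it actually supports.
        cv::VideoCapture cap(0, cv::CAP_DSHOW);   // DirectShow backend on Windows
        cap.set(cv::CAP_PROP_FRAME_WIDTH,  320);
        cap.set(cv::CAP_PROP_FRAME_HEIGHT, 240);
        cap.set(cv::CAP_PROP_FPS,          15);

        cv::Mat frame;
        while (cap.read(frame))
        {
            cv::imshow("preview", frame);
            if (cv::waitKey(1) == 27)             // Esc to quit
                break;
        }
        return 0;
    }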

Related

Output image data from an HDMI port using C/C++/OpenCV on a Windows 10 PC

I am working with a DMD (digital micromirror device) provided by TI (DLP3010EVM-LC). It is basically a projector that can be controlled through USB and HDMI. The HDMI input is used to feed external image data to the projector, and thus to the DMD. I am planning to control the pixels of the DMD by sending an image of 1-bit data. My target frame rate is about 1000 fps. It seems this is possible with 1-bit pixel data, even though the HDMI port has a maximum of 60 fps. I am looking for a way to send image data out of my Windows 10 PC's HDMI port using C/C++/OpenCV. I have looked at many sites and forums but couldn't find a concrete way, or documentation, on how to program the HDMI output (in fact, I just need image output with no sound data). Some people in the Raspberry Pi community simply use imshow() from OpenCV for this, but it doesn't seem to work on a Windows PC.
Any help on this matter is appreciated.
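For what it's worth, the Raspberry Pi trick mentioned in the question is usually just a borderless full-screen imshow() window moved onto the HDMI display, and the same OpenCV calls do exist on Windows. A sketch under two assumptions: the secondary display starts at x = 1920 in the virtual desktop, and the display refresh (not imshow) still caps you at ~60 Hz, so reaching ~1000 fps would rely on the projector unpacking multiple 1-bit planes per frame:

    #include <opencv2/opencv.hpp>

    int main()
    {
        // A 1920x1080 binary pattern, one byte per pixel (0 or 255).
        cv::Mat pattern(1080, 1920, CV_8UC1, cv::Scalar(0));

        // Create a resizable window, move it onto the second monitor
        // (assumed to start at x = 1920), then make it full screen.
        cv::namedWindow("dmd", cv::WINDOW_NORMAL);
        cv::moveWindow("dmd", 1920, 0);
        cv::setWindowProperty("dmd", cv::WND_PROP_FULLSCREEN, cv::WINDOW_FULLSCREEN);

        for (;;)
        {
            // ... update 'pattern' here ...
            cv::imshow("dmd", pattern);
            if (cv::waitKey(1) == 27)   // Esc to quit
                break;
        }
        return 0;
    }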

How do I use libusb to make my monitor compatible with a USB-HDMI cable if it isn't already?

The main problem I'm facing is that I have no idea how to prepare the "frame buffer", or how to properly send it at some refresh rate (I'm guessing 60 Hz).
The theory is that, with knowledge of the HDMI protocol, it should be possible to write software that takes video memory from the Windows PC and transfers it through a USB medium to the PC monitor.
Edit #2: It seems that by simply creating a 2D array 1920 pixels wide by 1080 pixels tall, it should be easy to use helper functions to send data to the monitor via a function that prepares the "packet" to be sent (a sketch of this idea follows the question).
Unfortunately I don't have an HDMI-to-USB Type-A cable to test this with my monitor.
But essentially the idea is to write a start-up program that accomplishes this task using WinAPI + libusb (ideally running even before any user has logged in).
The reason for this is that my micro-USB port looks like it is about to wear out due to bent pins. I'd like to avoid downloading any software if possible.
Edit: it doesn't have to be HDMI; VGA or even RGB/composite would also be acceptable.
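As a reality check on the frame-buffer idea: raw 1920 × 1080 × 24 bit × 60 Hz is roughly 3 Gbit/s, far beyond USB 2.0's 480 Mbit/s, which is why real USB display adapters (DisplayLink and the like) speak proprietary, compressed protocols. Purely as a hypothetical sketch of the 2D-array idea from edit #2, with an invented OUT endpoint address and no real display protocol behind it:

    #include <libusb.h>
    #include <cstdint>
    #include <vector>

    // Hypothetical sketch only: the OUT endpoint 0x01 and the idea of
    // pushing raw pixels are assumptions, not a real monitor protocol.
    bool send_frame(libusb_device_handle* dev, const std::vector<uint32_t>& fb)
    {
        int transferred = 0;
        unsigned char* bytes = reinterpret_cast<unsigned char*>(
            const_cast<uint32_t*>(fb.data()));
        int length = static_cast<int>(fb.size() * sizeof(uint32_t));
        int rc = libusb_bulk_transfer(dev, 0x01 /* device-specific endpoint */,
                                      bytes, length, &transferred, 1000 /* ms */);
        return rc == 0 && transferred == length;
    }

    int main()
    {
        libusb_context* ctx = nullptr;
        libusb_init(&ctx);

        // The "frame buffer": 1920x1080 pixels, 0x00RRGGBB each.
        std::vector<uint32_t> framebuffer(1920 * 1080, 0);

        // ... open the device with libusb_open_device_with_vid_pid(),
        //     claim the interface, then call send_frame() every ~16.7 ms
        //     for a 60 Hz refresh ...

        libusb_exit(ctx);
        return 0;
    }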

How to play audio stream over UDP?

I'm writing a Windows application that receives audio data from an Android app. I use UDP to transfer the data over the LAN, and RtAudio to play the audio stream.
Each UDP packet payload is an array of audio samples in 32 kHz/16-bit PCM format.
When the data size is 576 bytes (288 samples, in other words), everything is OK and we hear a clear voice.
But when the data size is 192 bytes (96 samples, in other words), the sound is not clear.
Has anyone seen this problem?
It is a balancing act to determine the optimum size of each buffer packet: too large and you progressively move away from real-time response, yet too small and the code spends proportionately too much time negotiating the boilerplate plumbing of simply transferring the data. It looks like you have hit this lower boundary when, as you say, 192 bytes starts acting up.
This is true independent of the transport mechanism. Also keep in mind that the wall-clock duration of a few hundred bytes of audio is tiny (at your 32 kHz rate, 96 samples is only 3 ms; CD-quality mono is 44,100 samples per second), so you will not lose much real-time responsiveness by giving yourself more headroom above that lower bound.
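One common fix is to decouple the network packet size from the playback block size with a FIFO: accept the small UDP packets as they arrive, but hand the audio device fixed, larger blocks. A sketch against the RtAudio callback API, assuming a mono 32 kHz/16-bit stream (the channel count is a guess):

    #include "RtAudio.h"
    #include <cstdint>
    #include <deque>
    #include <mutex>

    // FIFO shared between the UDP receive thread and the audio callback.
    std::deque<int16_t> g_fifo;
    std::mutex g_mtx;

    // Call this from the UDP thread for every packet, whatever its size.
    void onUdpPacket(const int16_t* samples, size_t count)
    {
        std::lock_guard<std::mutex> lk(g_mtx);
        g_fifo.insert(g_fifo.end(), samples, samples + count);
    }

    // RtAudio pulls fixed-size blocks; pad with silence if the FIFO runs dry.
    int playCallback(void* out, void*, unsigned int nFrames,
                     double, RtAudioStreamStatus, void*)
    {
        int16_t* dst = static_cast<int16_t*>(out);
        std::lock_guard<std::mutex> lk(g_mtx);
        for (unsigned int i = 0; i < nFrames; ++i)
        {
            if (g_fifo.empty()) { dst[i] = 0; continue; }
            dst[i] = g_fifo.front();
            g_fifo.pop_front();
        }
        return 0;
    }

    int main()
    {
        RtAudio dac;
        RtAudio::StreamParameters params;
        params.deviceId  = dac.getDefaultOutputDevice();
        params.nChannels = 1;                 // assuming mono
        unsigned int bufferFrames = 288;      // playback block, independent of packet size
        dac.openStream(&params, nullptr, RTAUDIO_SINT16, 32000,
                       &bufferFrames, &playCallback);
        dac.startStream();
        // ... run the UDP receive loop, feeding onUdpPacket() ...
        return 0;
    }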

Bandwidth too high when capturing multiple STILL images from multiple webcams with OpenCV

I'm working on a project in which many webcams each capture a single still image using OpenCV in C++.
As in other questions, multiple HD webcams may use too much bandwidth and exceed the USB limit.
Unlike the others, all I need is a still image (a single frame) from each webcam. Let's say I have 15 webcams connected to the PC, and every 10 seconds I would like to get the still images (one per webcam, 15 images total) within a 5-second window. The images are then analysed and a result is sent to an Arduino.
Approach 1: Open all webcams all the time and capture images every 10 seconds.
Problem: The bandwidth of USB is not enough.
Approach 2: Open only one webcam at a time, grab from it, then close it and open the next one.
Problem: Switching from one webcam to the next takes at least 5 seconds per switch.
What I need is only a single image frame from each webcam, not a video stream.
Are there any suggestions for this problem, besides load-balancing the USB bus and adding USB PCI cards?
Thank you.
In OpenCV you deal with the webcam as a stream, which means it runs as video. However, I think this kind of problem should be solved through the webcam's own API if one is available: there should be a way to take a still image and return it to your program as data. You may search for this on the camera vendor's website.
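If the vendor exposes no still-image API, a middle ground between the two approaches is to keep only one camera open at a time but minimize the cost of each switch. A sketch; the claim that cv::CAP_DSHOW opens faster than the default backend on Windows is anecdotal, so measure it on your hardware:

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Mat> captureAll(int numCams)
    {
        std::vector<cv::Mat> shots;
        for (int i = 0; i < numCams; ++i)
        {
            cv::VideoCapture cap(i, cv::CAP_DSHOW);  // open one camera at a time
            cv::Mat frame;
            for (int warm = 0; warm < 3; ++warm)     // let auto-exposure settle
                cap.grab();
            if (cap.read(frame))
                shots.push_back(frame.clone());
            cap.release();                           // free the USB bandwidth
        }
        return shots;  // analyse these, then report the result to the Arduino
    }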

Encoding a camera and an audio source in real time with WMAsfWriter - jitter problem

I have built a DirectShow graph consisting of my video capture filter (grabbing the screen) and the default audio input filter, both connected through a splitter to the WM ASF Writer output filter and to a VMR9 renderer. This means I want real-time audio/video encoding to disk together with a preview. The problem is that no matter which WM profile I choose (even a very low-resolution one), the output video file always "jitters": every few frames there is a delay. The audio is OK; there is no jitter in the audio. CPU usage is low (< 10%), so I believe this is not a lack of CPU resources. I think I am time-stamping my frames correctly.
What could be the reason?
Below is a link to a recorded video demonstrating the problem:
http://www.youtube.com/watch?v=b71iK-wG0zU
Thanks
Dominik Tomczak
I have had this problem in the past. Your problem is the volume of data being written to disk; writing to a faster drive is a simple and effective solution. The other thing I've done is place a video compressor into the graph. You also need to make sure both input streams are using the same reference clock (see the sketch below). I have had a lot of problems keeping a good preview with this compressor scheme: my preview's frame rate dies even if I use an Infinite Pin Tee rather than a Smart Tee, although the result written to disk is fine. It's also worth noting that the beefier the machine I ran this on, the less of an issue it was, so the compressor may not buy you much over putting a faster hard disk in the machine if you need both encoding and preview.
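For the reference-clock point above: forcing every filter in the graph onto one clock is a single call through IMediaFilter. A minimal sketch, assuming pGraph is your already-built graph; pass the audio renderer's IReferenceClock, or NULL for no clock at all, depending on what you are diagnosing:

    #include <dshow.h>

    // Make every filter in the graph use the same reference clock.
    // pClock = NULL makes the graph run without a clock, which is a quick
    // way to test whether a clock mismatch causes the jitter.
    HRESULT UseSingleClock(IGraphBuilder* pGraph, IReferenceClock* pClock)
    {
        IMediaFilter* pMediaFilter = nullptr;
        HRESULT hr = pGraph->QueryInterface(IID_IMediaFilter,
                                            reinterpret_cast<void**>(&pMediaFilter));
        if (FAILED(hr))
            return hr;
        hr = pMediaFilter->SetSyncSource(pClock);  // all filters now share pClock
        pMediaFilter->Release();
        return hr;
    }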
I don't think that is the issue: the volume of data written is less than 1 MB/s (at the average compression ratio during encoding). I found the reason. When I build the graph without audio input (the WM ASF Writer has only a video input pin) and my video capture pin is connected through a Smart Tee to the preview pin and to the WM ASF Writer's video input pin, there is no glitch in the output movie. I reckon the problem is audio-to-video synchronization in my graph. The same happens when I build the graph in GraphEdit: without audio, no glitch; with audio, there is a glitch every 1 s. I wonder whether I am time-stamping my frames wrongly, but I think I'm doing it correctly. What is the general solution for audio-to-video synchronization in DirectShow graphs?