I’m writing file-browsing software and I want it to work correctly with all portable devices, such as cameras, smartphones, and so on. My program shows thumbnails, so I need to read the content of each file.
Now I’m facing some problems:
With both of my photo cameras I can open only one IStream from the device. For every additional stream I get an ERROR_BUSY error. This is inconvenient, as I fetch thumbnails in several background threads.
I can open multiple streams from my smartphone, but I cannot seek those streams! As a workaround I have to copy the entire stream to a temporary file-system location and process it there.
I wonder what this depends on. The device's file system? The driver implementation? Something else?
Those seem like very reasonable restrictions on file access to a peripheral with very limited memory (limited fast volatile memory and code EEPROM are more of a concern than the size of the flash card).
It's not the file system (which is almost universally FAT or FAT32 for these kinds of devices), or even limitations in the Windows driver (although the limits are probably enforced there to avoid confusing the device), but the limited number of file descriptors in the device's embedded file-access code.
As a result, you'll probably have to have workarounds for these and other unsupported driver features.
On a related note, multiple threads usually aren't the right way to do background I/O operations. If your devices support OVERLAPPED operation, then you can use that along with events and MsgWaitForMultipleObjects (which replaces PeekMessage or GetMessage in the classic GetMessage/TranslateMessage/DispatchMessage main event loop). By keeping everything on one thread you avoid synchronization issues and most race conditions, and you prevent the following problem (a sketch of the single-threaded pattern follows the example scenario below):
Your customer wants to select and use one of the files on her device, but oh no, the only IStream is being used on a thread reading thumbnails. Too bad, she has to wait for that thread to finish its current file.
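Here's a minimal sketch of that pattern, assuming the device driver supports overlapped reads (the file name is a placeholder and error handling is trimmed):

```cpp
// Single-threaded background I/O: one overlapped read plus a message loop
// driven by MsgWaitForMultipleObjects.
#include <windows.h>

int WINAPI wWinMain(HINSTANCE, HINSTANCE, PWSTR, int) {
    HANDLE file = CreateFileW(L"device-photo.jpg", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
    OVERLAPPED ov = {};
    ov.hEvent = CreateEventW(nullptr, TRUE, FALSE, nullptr);  // manual-reset event
    static char buf[64 * 1024];
    ReadFile(file, buf, sizeof buf, nullptr, &ov);  // pends with ERROR_IO_PENDING

    for (;;) {
        DWORD r = MsgWaitForMultipleObjects(1, &ov.hEvent, FALSE,
                                            INFINITE, QS_ALLINPUT);
        if (r == WAIT_OBJECT_0) {                   // the read completed
            DWORD got = 0;
            GetOverlappedResult(file, &ov, &got, FALSE);
            ResetEvent(ov.hEvent);
            // ...decode thumbnail bytes here, then advance ov.Offset and
            // issue the next ReadFile...
        } else {                                    // window messages are pending
            MSG msg;
            while (PeekMessageW(&msg, nullptr, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT) return 0;
                TranslateMessage(&msg);
                DispatchMessageW(&msg);
            }
        }
    }
}
```

The GUI stays responsive the whole time, and only one stream per file is ever open, which also sidesteps the ERROR_BUSY limitation.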
I use C++ (Visual Studio 2015) and OpenCV (ver. 3.2.0) to process data sent from a Kinect v1. My C++ program has no problem when it starts debugging for the first time. After I stop debugging and restart it, however, it gets very slow.
I suspect that the program closes without releasing some memory (i.e., a memory leak). I am aware that I would need to use delete to release memory allocated with new, but I didn't use new in this program (nor malloc(), its C equivalent).
For OpenCV, I call the destroyAllWindows function at the end of the program. For Kinect v1, I also call NuiShutdown(), Release(), and CloseHandle() at the end of the program.
Is there anything else I need to do to release memory (e.g., releasing memory associated with Mat in OpenCV)? Or is something else causing the decrease in processing speed?
I'd appreciate your help. Thanks.
After the first run, disconnect the Kinect, then reconnect it and try a second run.
If all goes well now, then the problem is most likely a stuck thread. Device access is usually handled by separate threads, and especially with USB they can get stuck (in case of an error or a sync problem between access from the host and what the device expects) until you disconnect the device. (I'm not sure which Kinect driver you are using, but the JUNGO version that NuiShutdown() implies has this problem.) You can also check the task manager before disconnecting to see whether some stuck processes were left behind after the first run.
To remedy this you need to find out what you are doing wrong during access. It could be one of the following:
wrong USB port
Use the back-side ports, not the front slots (see [Edit1] below).
invalid USB transfer request
The device may be waiting for a specific set of commands or a stream, and it blocks everything else until it receives them. Using unsupported commands, or reading at the wrong times or with the wrong packet sizes, can cause this.
USB communication is out of sync
The PC host can time out if you do not have enough CPU power while a critical operation is being processed (or if too many apps are open in the background).
This can also be caused by a faulty graphics driver, since I suspect your app does rendering. Intel HD Graphics can easily cause such problems, especially on notebooks. Try disabling any rendering in your app, or at least limiting it to OpenGL 1.0, to see whether the speed is the same between runs. If this is the cause, the whole desktop usually flickers or fails to repaint parts of apps, and animations are sometimes sluggish.
Another problem might be the debugger itself. If everything is fine without it, then the debugger is the problem and you cannot solve it. Debugging while accessing I/O can cause sync and timeout problems, especially with USB.
To check for memory leaks, simply note how much free memory you have before the 1st run and compare it to the values after the 1st, 2nd, 3rd ... runs. If the value keeps dropping, something is stuck somewhere. After an app closes, all the memory belonging to it is freed by the OS, so even a forgotten delete does not matter unless some thread is still running ...
Some libUSB-based drivers I have encountered also leak handles. But that behaves differently: everything runs fine until there are no free handles left, after which the OS becomes non-functional and you cannot open any window or app until something is closed.
[Edit1] Front USB slots
Front slots are usually connected to the motherboard with a relatively long cable (usually flat and not very well shielded), so they are more susceptible to noise. Also, since the cable is usually routed near the HDD and above the high-frequency parts of the motherboard, noise is induced into the USB feed. All this degrades the USB signal quality, causing a much higher rejection rate, which lowers sync reliability and the overall usable bandwidth.
Back-side USB ports, by comparison, have no cables; they are connected directly on the PCB with short, well-shielded traces, so the connection quality is much better.
So if your device demands high bandwidth or tight synchronization, the front ports are a bad choice.
I am asking about emulating a computer on the local network, without disrupting the local network itself.
When a computer connects to Wi-Fi on your local network, it can be seen in the "Network" section of Explorer (on Windows) and can be addressed via the Internet Protocol.
My Goal: I would like to create a program such that I could receive streams of data from a remote computer via radio waves. The communication would be full duplex; there would be a transmitter and a receiver on both ends.
My Idea: I would like to use Qt to create an application which would receive and demodulate the radio signal as sound (through OpenAL buffers), acting somewhat like a driver program for the emulated computer connection, without destroying my connection to the local network. In this way, I believe some truly novel things could be done; I could, for instance, use PuTTY on the emulated connected computer remotely while browsing my local network and the internet from my "base" computer.
Extended explanation: I want to perform a weather-balloon project, sending a small computer (likely a Raspberry Pi 3) via a weather balloon to the far reaches of our atmosphere. One of my worries is being able to communicate with the device, such as receiving location (telemetry) data in real time and being able to (potentially) retrieve arbitrary data in real time.
I understand that I may very well be approaching this question in the wrong way. There are probably existing systems out there that provide telemetry data and some means to transfer arbitrary file data, of which I am unaware. But from what I have seen, I also cannot find a device that takes this approach (packet-radio emulation of a computer on the network). I have a personal curiosity about this approach, and thus will accept the answer that follows it most closely.
P.S: Video which inspired this idea:
https://www.youtube.com/watch?v=Ueb5JG0dCL8
What you're trying to do is called software-defined radio, and is rather popular. Modern computers, even little ones, are more than powerful enough to do the modulation/demodulation entirely in software.
There's very little left for you to do other than designing the RF channel, purchasing the (often open-source) hardware, and using one of the existing open-source SDR implementations out there. The input/output of your Qt program would be either a QIODevice-like data stream that you'd couple to the SDR library's scrambler/descrambler, or a packetized data stream that you could run some higher-level protocol on.
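To illustrate the QIODevice-style coupling, here is a rough sketch; feed() and the idea of a demodulator callback are my assumptions, not any particular SDR library's API:

```cpp
// Hypothetical sketch: exposing demodulated SDR bytes as a QIODevice so that
// higher-level protocol code can read the link like any other Qt stream.
// Remember to open(QIODevice::ReadWrite) before use.
#include <QIODevice>
#include <QByteArray>
#include <QMutex>
#include <cstring>

class SdrStream : public QIODevice {
public:
    // Called from the SDR library's demodulation callback (assumed to exist).
    void feed(const char *data, qint64 len) {
        QMutexLocker lock(&m_mutex);
        m_buffer.append(data, int(len));
        emit readyRead();                       // wake readers of the stream
    }
protected:
    qint64 readData(char *dst, qint64 maxLen) override {
        QMutexLocker lock(&m_mutex);
        const qint64 n = qMin<qint64>(maxLen, m_buffer.size());
        std::memcpy(dst, m_buffer.constData(), size_t(n));
        m_buffer.remove(0, int(n));
        return n;
    }
    qint64 writeData(const char *src, qint64 len) override {
        // Hand bytes to the modulator here; omitted in this sketch.
        Q_UNUSED(src);
        return len;
    }
private:
    QByteArray m_buffer;
    QMutex m_mutex;
};
```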
Do note that unless you limit yourself to a license-free industrial (ISM) band, you are likely to need an FCC license to operate the transmitter, and FAA authorization to launch the balloon.
Your question is essentially off-topic here. It probably belongs on the Amateur Radio Stack Exchange under the [sdr] tag.
If you're thinking of implementing a complete WiFi stack using SDR, I'd like to discourage you: it's patent-encumbered up the wazoo, so no open-source implementations exist, and the sheer amount of standardese you'd have to wade through to do a compliant implementation is staggering. We're talking on the order of 5,000 pages of standardese, where almost every other sentence is important so if you ignore it, you're not compliant.
I'm trying to make software that backs up my entire hard drive.
I've managed to write code that reads the raw data from hard-disk sectors. However, I want to have incremental backups. For that I need to know the changes made: OS settings, file changes, everything.
My question is:
Using FileSystemWatcher and inotify, will I be able to detect every change made to every sector of the hard drive (OS settings, etc.)?
I'm coding it in C++ for Linux and Windows.
(I saw this question on Stack Overflow, which gave me some ideas.)
Inotify detects changes while your program is running; I'm guessing that FileSystemWatcher is similar.
One way to solve this is to keep a checksum for each sector (or group of sectors); when making a backup, you compare the checksums against the list you have and only back up blocks that have changed.
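A sketch of that idea, assuming a fixed block size and a toy FNV-1a hash (the device path is a Linux example; a real tool would persist the checksum list with each backup and probably use a stronger hash):

```cpp
// Block-level change detection: hash each block and compare to the
// checksums recorded by the previous backup.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

constexpr std::size_t kBlock = 64 * 1024;   // assumed block size (multiple of sector)

std::uint64_t fnv1a(const char *p, std::size_t n) {
    std::uint64_t h = 1469598103934665603ull;
    for (std::size_t i = 0; i < n; ++i) { h ^= std::uint8_t(p[i]); h *= 1099511628211ull; }
    return h;
}

int main() {
    std::ifstream dev("/dev/sda", std::ios::binary);   // example raw device
    std::vector<std::uint64_t> previous = /* load from last backup */ {};
    std::vector<char> buf(kBlock);
    for (std::size_t i = 0; ; ++i) {
        dev.read(buf.data(), std::streamsize(kBlock));
        std::streamsize got = dev.gcount();
        if (got <= 0) break;
        std::uint64_t sum = fnv1a(buf.data(), std::size_t(got));
        if (i >= previous.size() || previous[i] != sum)
            std::cout << "block " << i << " changed, back it up\n";
        if (!dev) break;                                // final partial block done
    }
}
```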
The MS Windows FileSystemWatcher mechanism is more limited than Linux's inotify, but both will probably do what you need. The Linux mechanism can (optionally) also report file reads, the operations that update the "access timestamp".
However, the weakness from your application's perspective is that file modifications made from system boot until your program is loaded (and from unload until shutdown) will not be monitored. Your application might need to look through the modification timestamps of many files to identify changed files, depending on the level of monitoring you are targeting.
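For completeness, a minimal inotify sketch on the Linux side (the watched path is an example; a backup tool would add watches recursively and handle queue overflow):

```cpp
// Minimal inotify usage: report changes in one directory while running.
#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = inotify_init();
    inotify_add_watch(fd, "/home/user/data",
                      IN_MODIFY | IN_CREATE | IN_DELETE | IN_ACCESS);
    alignas(inotify_event) char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);   // blocks until events arrive
        for (char *p = buf; p < buf + len;) {
            auto *ev = reinterpret_cast<inotify_event *>(p);
            if (ev->len)                           // name present for directory watches
                std::printf("event 0x%x on %s\n", ev->mask, ev->name);
            p += sizeof(inotify_event) + ev->len;
        }
    }
    // close(fd) on shutdown in a real tool
}
```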
Both platforms maintain a per-file timestamp tracking when the file was last accessed. If updating it is supposed to trigger a backup notification, then Windows' lack of such a notification will cause mismatched behavior between the platforms. Windows' mechanism can also drop notifications due to buffer-size limitations. Here is a real gem from the documentation:
Note that a FileSystemWatcher does not raise an Error event when an event is missed or when the buffer size is exceeded, due to dependencies with the Windows operating system. To keep from missing events, follow these guidelines:
Increasing the buffer size with the InternalBufferSize property can prevent missing file system change events.
Avoid watching files with long file names. Consider renaming using shorter names.
Keep your event handling code as short as possible.
At least you can control two out of three of these....
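Since you are in C++ rather than .NET: FileSystemWatcher is built on ReadDirectoryChangesW, where you control the buffer size directly. A minimal synchronous sketch (the path and buffer size are examples, and error handling is trimmed):

```cpp
// Native counterpart of FileSystemWatcher: ReadDirectoryChangesW with an
// explicitly sized buffer (the knob InternalBufferSize exposes in .NET).
#include <windows.h>
#include <cstdio>

int main() {
    HANDLE dir = CreateFileW(L"C:\\data", FILE_LIST_DIRECTORY,
                             FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                             nullptr, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    alignas(DWORD) char buf[64 * 1024];   // bigger buffer -> fewer dropped events
    DWORD got = 0;
    while (ReadDirectoryChangesW(dir, buf, sizeof buf, TRUE,
                                 FILE_NOTIFY_CHANGE_LAST_WRITE | FILE_NOTIFY_CHANGE_FILE_NAME,
                                 &got, nullptr, nullptr)) {
        if (got == 0) continue;           // 0 bytes signals overflow: rescan everything
        for (auto *p = (FILE_NOTIFY_INFORMATION *)buf; ;
             p = (FILE_NOTIFY_INFORMATION *)((char *)p + p->NextEntryOffset)) {
            wprintf(L"change: %.*s\n",
                    int(p->FileNameLength / sizeof(WCHAR)), p->FileName);
            if (!p->NextEntryOffset) break;
        }
    }
}
```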
I wanted to get some ideas on how some of you would approach this problem.
I've got a robot that runs Linux and uses a webcam (with a v4l2 driver) as one of its sensors. I've written a control panel with gtkmm. Both the server and the client are written in C++: the server is the robot, the client is the "control panel". The image analysis happens on the robot, and I'd like to stream the video from the camera back to the control panel for two reasons:
A) for fun
B) to overlay image analysis results
So my question is: what are some good ways to stream video from the webcam to the control panel while giving priority to the robot code that processes it? I'm not interested in writing my own video-compression scheme and pushing it through the existing network port; a new network port (dedicated to video data) would be best, I think. The second part of the problem is how to display video in gtkmm. The video data arrives asynchronously, and I don't control main() in gtkmm, so I think that would be tricky.
I'm open to using things like VLC, GStreamer, or any other general compression libraries I don't know about.
thanks!
EDIT:
The robot has a 1 GHz processor and runs a desktop-like version of Linux, but no X11.
GStreamer solves nearly all of this for you, with very little effort, and it also integrates nicely with the GLib event system. GStreamer includes V4L source plugins, GTK+ output widgets, various filters to resize/encode/decode the video, and, best of all, network sinks and sources to move the data between machines.
For a prototype, you can use the gst-launch tool to assemble video pipelines and test them; then it's fairly simple to create the same pipelines programmatically in your code. Search for 'GStreamer network streaming' to see examples of people doing this with webcams and the like.
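As an example, a minimal programmatic sender might look like this (the element chain, the JPEG/RTP choice, and the destination host/port are assumptions to adapt; error handling is trimmed):

```cpp
// Build a gst-launch-style pipeline in code: webcam -> JPEG -> RTP -> UDP.
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src ! videoconvert ! jpegenc ! rtpjpegpay "
        "! udpsink host=192.168.0.10 port=5000",   // control panel's address (example)
        &err);
    if (!pipeline) {
        g_printerr("pipeline error: %s\n", err->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(nullptr, FALSE));  // stream until killed
    return 0;
}
```

The control panel would run the mirror pipeline (udpsrc ! rtpjpegdepay ! jpegdec ! ...) ending in whatever video sink fits the gtkmm side.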
I'm not sure about the actual technologies used, but this can end up being a huge synchronization ***** if you want to avoid dropped frames. I was streaming a video to a file and the network at the same time. What I eventually ended up doing was using a big circular buffer with three pointers: one write and two read. There were three control threads (plus some additional encoding threads): one writing to the buffer, which would pause if it reached a point in the buffer not yet read by both of the others, and two reader threads that would read from the buffer and write to the file/network (pausing if they got ahead of the producer). Since everything was written and read as whole frames, sync overhead could be kept to a minimum.
My producer was a transcoder (reading from another file source), but in your case you may want the camera to produce whole frames in whatever format it normally does, and only do the transcoding (with something like ffmpeg) for the server while the robot processes the image.
Your problem is a bit more complex, though, since the robot needs real-time feedback and so can't pause and wait for the streaming server to catch up. So you might want to get frames to the control system as fast as possible and buffer some of them in a separate circular buffer for streaming to the "control panel". Certain codecs handle dropped frames better than others, so if the network falls behind you can start overwriting frames at the end of the buffer (taking care that they're not being read).
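A rough sketch of the ring described above, with one producer and two independent readers tracked by monotonically increasing indices (Frame and the capacity are placeholders):

```cpp
// One-writer, two-reader frame ring: the producer blocks when it would lap
// the slowest reader; each reader blocks when it has consumed everything.
#include <algorithm>
#include <array>
#include <condition_variable>
#include <mutex>
#include <vector>

struct Frame { std::vector<unsigned char> data; };

class FrameRing {
    static constexpr std::size_t N = 64;      // assumed capacity in frames
    std::array<Frame, N> ring;
    std::size_t wr = 0;                       // monotonic write index
    std::size_t rd[2] = {0, 0};               // monotonic read indices
    std::mutex m;
    std::condition_variable cv;
    std::size_t slowest() const { return std::min(rd[0], rd[1]); }
public:
    void push(Frame f) {                      // producer
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return wr - slowest() < N; });
        ring[wr % N] = std::move(f);
        ++wr;
        cv.notify_all();
    }
    Frame pop(int reader) {                   // reader 0 = file, 1 = network
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return rd[reader] < wr; });
        Frame f = ring[rd[reader] % N];       // copy: the slot may be reused
        ++rd[reader];
        cv.notify_all();
        return f;
    }
};
```

For the robot's real-time path you would drop the blocking wait in push() and overwrite instead, as described above.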
When you say 'a new video port' and then start talking about vlc/gstreamer, I find it hard to work out what you want. Obviously these software packages will assist in streaming and compressing via a number of protocols, but clearly you'll need a 'network port', not a 'video port', to send the stream.
If what you really mean is sending display output via a wireless video/TV feed, that's another matter; however, you'll need advice from hardware experts rather than software experts on that.
Moving on. I've done plenty of streaming over MMS/UDP protocols and VLC handles it very well (as both server and client). However, it's designed for desktops and may not be as lightweight as you want. Something like gstreamer, mencoder, or ffmpeg, on the other hand, is going to be better, I think. What kind of CPU does the robot have? You'll need a bit of grunt if you're planning real-time compression.
On the client side I think you'll find a number of widgets to handle video in GTK. I would look into that before worrying about interface details.
I'm developing a kind of filesystem driver. All read requests that Windows makes to my filesystem go through the driver implementation.
I would like to distinguish between "normal" read requests and those that only want the metadata from the file (Windows reads the first 4 KB of the file and then stops reading).
Does Windows mark these metadata reads in some way? It would be very useful to be able to treat the two kinds of operations differently.
In a typical CreateFile call we have the AccessMode, ShareMode, CreationDisposition, and FlagsAndAttributes parameters (DWORDs); I'm not sure whether it's possible to extract some clue about the requested operation from them.
Thanks for reading :)
I'd advise you to get the Sysinternals file-monitoring tool (Process Monitor). It captures stack traces for each call and, since it understands PDBs, can even show you function names. That should allow you to figure out many details of this particular call.
On rereading, it appears that the question is looking in the wrong place for an optimization. Why not treat every request for the first 4 KB as a request for metadata? There is very little harm in that assumption.
An assumption the other way around would be harmful if you ended up doing 100 MB of real I/O when you really only needed 4 KB. But if you need 100 MB, a small optimization for the first 4 KB causes at most a one-time small hiccup in an inherently lengthy operation.
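In code, the heuristic could be as simple as the following (the handler signature, names, and the 4 KB threshold are placeholders, not a real driver API):

```cpp
// Sketch of the "first 4 KB == metadata probe" heuristic inside a
// hypothetical read handler.
#include <cstdint>

bool looksLikeMetadataProbe(std::uint64_t offset, std::uint32_t length) {
    // Explorer-style metadata scans read only the head of the file.
    return offset == 0 && length <= 4096;
}

void onReadRequest(std::uint64_t offset, std::uint32_t length /*, context */) {
    if (looksLikeMetadataProbe(offset, length)) {
        // Serve from a cheap cached header instead of doing full I/O.
    } else {
        // Normal read path.
    }
}
```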
It's not Windows itself, but Windows Explorer, that scans files to extract metadata. Moreover, you will also face reads issued to create thumbnails.
Reporting the drive to Windows as a remote/network drive will make Explorer read less information and reduce the load on the file system, but unfortunately there seems to be no way to block such reading completely.