TWAIN: How do I get image scan progress?

I have to acquire images from a scanner through the TWAIN 1.x interface.
Some old scanners take a long time to scan, so I am wondering how to notify the user about scanning progress.
There is a built-in popup window with a progress bar and a "cancel" button, but that is exactly what I want to override.
Unlike TWAIN, the WIA API sends me pieces of the scan along with a progress percentage, so I can solve this task with WIA. But what about TWAIN?
I tried this nice TWAIN demo: http://www.codeproject.com/KB/audio-video/twaintest.aspx.
It uses a message loop for scanning. I guessed that scan progress would be represented as a set of messages sent to the message loop, but I was wrong: there are only some initialization and finalization messages.
Is there a way to be notified about scan progress with the TWAIN 1.x API?
Thank you in advance!

For future reference...
There is no way to get scan progress through the TWAIN API, and no standard way to get it by 'cheating' either. TWAIN does give you a way to ask the TWAIN driver to display (or not) a progress dialog, as mentioned by the OP, but the dialog box is designed and operated by the individual TWAIN driver.
If you don't like a given driver's progress box, you have a few choices:
Hack the driver's progress box at run time (low-level Win32 programming, hook procedures, etc.)
Use TWAIN's Memory Transfer Mode (see the TWAIN Spec), which delivers each image as a sequence of buffers. You can display progress by comparing the total pixels or rows transferred against the expected total; a sketch of this approach follows below. However, be aware that either the scanner or the driver may scan, post-process the image, and only then transfer the data, putting your progress display seriously out of sync.
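Here is a minimal sketch of that approach, assuming an open data source in the transfer-ready state, TW_IDENTITY structs for the application and source, and a DSMENTRYPROC pointer obtained from the Data Source Manager; UpdateProgressBar is a hypothetical UI hook and error handling is trimmed:

    #include "twain.h"
    #include <vector>

    void UpdateProgressBar(int percent);  // hypothetical UI hook

    void ScanWithProgress(DSMENTRYPROC pDSM, TW_IDENTITY& appId, TW_IDENTITY& srcId)
    {
        // Expected image height in rows (may be -1 if the driver doesn't know it).
        TW_IMAGEINFO info = {};
        pDSM(&appId, &srcId, DG_IMAGE, DAT_IMAGEINFO, MSG_GET, (TW_MEMREF)&info);

        // Ask the driver for its preferred buffer size.
        TW_SETUPMEMXFER setup = {};
        pDSM(&appId, &srcId, DG_CONTROL, DAT_SETUPMEMXFER, MSG_GET, (TW_MEMREF)&setup);

        std::vector<char> buffer(setup.Preferred);
        TW_IMAGEMEMXFER xfer = {};
        xfer.Memory.Flags  = TWMF_APPOWNS | TWMF_POINTER;
        xfer.Memory.Length = setup.Preferred;
        xfer.Memory.TheMem = buffer.data();

        TW_UINT32 rowsDone = 0;
        TW_UINT16 rc;
        do {
            rc = pDSM(&appId, &srcId, DG_IMAGE, DAT_IMAGEMEMXFER, MSG_GET, (TW_MEMREF)&xfer);
            if (rc == TWRC_SUCCESS || rc == TWRC_XFERDONE) {
                rowsDone += xfer.Rows;  // rows delivered in this buffer
                if (info.ImageLength > 0)
                    UpdateProgressBar((int)(100.0 * rowsDone / info.ImageLength));
            }
        } while (rc == TWRC_SUCCESS);  // TWRC_XFERDONE ends the image
    }

Keep in mind the caveat above: some drivers buffer the whole scan internally before transferring, in which case this progress tracks the transfer, not the physical scan.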
My advice is to spend your effort on some other aspect of the system, where you have control.

You can get image scan progress if you acquire the image from the scanner using Memory transfer mode, as described above. Native and File transfer modes do not let you track scan progress. The progress granularity depends on the TWAIN driver.


How to get the next frame presentation time in Vulkan

Is there a way to get an estimated (or exact) timestamp when the submitted frame will be presented on screen?
I'm interested in WSI windowed presentation as well as fullscreen on Windows and Linux.
UPD: One possible way on Windows is IDCompositionDevice::GetFrameStatistics (msdn), which is used for DirectComposition and DirectManipulation, but I'm not sure whether it is applicable to Vulkan WSI presentation.
The VK_GOOGLE_display_timing extension exposes the timings of past presents and allows supplying a timing hint for a subsequent present. But the extension is supported only on some Android devices.
VK_EXT_display_control provides a vsync counter and a fence signal when vblank starts. But it only works with a VkDisplayKHR-type swapchain, and it has only limited support on Linux.
The corresponding issue has been raised as Vulkan-Docs#370. Unfortunately, it is taking its time to be resolved.
I don't think you can get the exact presentation time (which would be tricky in any case, since monitors have some internal latency), but I think you can get close: the docs for vkAcquireNextImageKHR say you can pass a fence that gets signaled when the driver is done with the image, which should be close to the time it gets sent off to the display. If you're using VK_PRESENT_MODE_FIFO_KHR, you can then use the refresh rate to work out when later images in the queue get presented.
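A rough sketch of that idea, assuming an initialized VkDevice and VkSwapchainKHR in FIFO present mode and a fixed 60 Hz refresh (both assumptions); error checks are omitted:

    #include <vulkan/vulkan.h>
    #include <chrono>

    // Estimates when the image that is k places back in the FIFO queue
    // should reach the display.
    std::chrono::steady_clock::time_point
    EstimatePresentTime(VkDevice device, VkSwapchainKHR swapchain, int k)
    {
        VkFenceCreateInfo fenceInfo = { VK_STRUCTURE_TYPE_FENCE_CREATE_INFO };
        VkFence fence = VK_NULL_HANDLE;
        vkCreateFence(device, &fenceInfo, nullptr, &fence);

        uint32_t imageIndex = 0;
        vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                              VK_NULL_HANDLE, fence, &imageIndex);

        // The fence signals when the presentation engine is done with the
        // image, which should be close to the moment it left for the display.
        vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
        auto t0 = std::chrono::steady_clock::now();
        vkDestroyFence(device, fence, nullptr);

        // Under VK_PRESENT_MODE_FIFO_KHR each queued image goes out one
        // refresh later; 60 Hz is assumed here rather than queried.
        auto refreshPeriod = std::chrono::nanoseconds(16666667);
        return t0 + k * refreshPeriod;
    }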

Is there a simple and direct way of using audio as an output for a program?

I want to try some C and C++ programming with audio processing, such as synthesizers, chorus, delay, etc., but so far I have only worked with a console as output. Instead of a console application, I would like a window capable of sending an audio signal to the speakers, running my code in the background and working with it in a way similar to printf: every time I called the "output function", it would send a sample value to the speakers (or sound card), indicating the current oscillator position. This output operation could be executed every time it is requested, or at the end of a built-in loop. Doing all this at a high sample rate would be just great.
I think I could do all this using AudioWorker in the Web Audio API, plus a flexible GUI on an HTML5 canvas, but I'm new to this API and I'm not sure whether its resulting sound quality is good enough.
Thanks in advance.
Edit: I use Windows 8.1, but answers for other platforms are welcome.
Edit 2: Suggestions for programming languages other than C, C++ or JavaScript are also welcome.
I do lots of sound synthesis with the Web Audio API and I think it sounds great. JavaScript is really all you need. Well, if you want to use audio files, you need a web server to serve them, but the audio synthesis itself all happens in JavaScript.
It doesn't matter much which OS you use, but different browsers have different levels of support for the Web Audio API. Chrome tends to have the best support, and Internet Explorer definitely has the worst.

Some questions about the Canon EDSDK 2.14 API with C++

I am new to C++ programming with the EDSDK 2.14. I am using a Canon EOS 5D Mark II and I have some questions (I am initializing the API, opening the camera session, registering handlers, setting capacity; my program takes photos and sets the correct camera parameters, and I am using Windows messages to handle some events):
1) I need to save the photos on the host PC. This works, but the camera only allows about 8 photos in its internal buffer, and I need to test several combinations of parameters (Av, Tv and ISO speed). I made a loop to take 10 photos when I press 's' (via a Windows message callback), but only 8 photos were taken; the rest failed with a busy error, so I guess the internal buffer was full. How can I take more than 8 photos, changing the parameters correctly, from a single Windows event?
PS: I tried reopening the session (closing and opening the camera session again), but that was not a good idea, because the transfer (image download) event handler is only invoked when the object is released.
2) I tried to take one photo and download it immediately, but that was not possible: when I press 's', the program waits until all 8 photos are taken, and only after that does the camera fire the event callback that downloads all the images. I want to press 's', have the program take one photo, download it, and then take the others. If that is possible, how can I do it?
3) If I write a method that sets the Av, Tv and ISO speed parameters, will they reach the camera in time for the next photo, or do I need to wait for something like a callback from the camera? If so, which event do I need to use?
PS: my program is entirely asynchronous; I am not using threads, only callbacks and Windows events.
4) I searched the internet for how to set the correct focus, but some people say this is only possible in live view, which I can't use in my application. Is it possible to change focus without live view?
PS: I need a good photo, and the autofocus driven by my program does not produce the same image quality as the EOS Utility, so I am wondering whether they do some post-processing on the captured image.
If I have more questions, or once I resolve these, I will report back to the community, because many people are using this API and it is not trivial. Sorry about my English; I am not a native speaker, but I am trying my best.
ad 1) you need to download each image before the camera's internal buffer overflows, like you try in 2
ad 2) make sure your program, after sending the first shot command, somehow comes back to the 'global' event loop. This gives the EDSDK a chance to process camera events and send "download available" events to your callbacks; take it from there (a sketch follows below)
ad 3) there are no guarantees about when these settings are applied; you'd rather attach to a property change event (kEdsPropertyEvent_PropertyChanged) or poll some time afterwards
ad 4) you can use live view and lens-based AF. For the latter, explore kEdsCameraCommand_ShutterButton_Halfway
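As an illustration of ad 1 and ad 2, here is a minimal sketch of an object-event callback that downloads each image as soon as the camera announces it, freeing that slot in the camera's internal buffer. It assumes an open session with saving directed to the host (kEdsPropID_SaveTo set to kEdsSaveTo_Host, plus EdsSetCapacity as you already do) and that the handler was registered with EdsSetObjectEventHandler; error handling is abbreviated:

    #include "EDSDK.h"

    EdsError EDSCALLBACK OnObjectEvent(EdsObjectEvent event,
                                       EdsBaseRef object,
                                       EdsVoid* context)
    {
        if (event == kEdsObjectEvent_DirItemRequestTransfer) {
            EdsDirectoryItemInfo info = {};
            EdsGetDirectoryItemInfo(object, &info);

            EdsStreamRef stream = NULL;
            EdsCreateFileStream(info.szFileName,
                                kEdsFileCreateDisposition_CreateAlways,
                                kEdsAccess_ReadWrite, &stream);

            // Pull the image off the camera, then tell it the transfer is
            // complete so it can free that slot in its internal buffer.
            EdsDownload(object, info.size, stream);
            EdsDownloadComplete(object);

            EdsRelease(stream);
        }
        if (object) EdsRelease(object);
        return EDS_ERR_OK;
    }

Registered once after opening the session, e.g. EdsSetObjectEventHandler(camera, kEdsObjectEvent_All, OnObjectEvent, NULL), this runs as each shot completes, provided your program keeps returning to the Windows message loop between shots.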
Care to share the goal of your project?

Intercepting and postprocessing all audio streams on Windows

I would like to know whether there is any way to create an application which can intercept all the audio that is being played back on the computer, so that I can process the audio (apply some effect) and then pass it on to the Windows audio subsystem.
I just had a glimpse at the Vista/7 WASAPI; there is this sAPO:
http://www.microsoft.com/whdc/device/audio/sysfx.mspx
but it seems that I cannot create my own sAPO and install it wherever I like; I would need WHQL-signed drivers for this.
Is there any universal way to do that?
I have experience with DirectSound, but I haven't seen any useful info about intercepting audio streams.
If you're loading a custom sAPO, you're globally affecting the sound for the system. This is going to require signing. From this article:
The audio engine does not load unsigned sAPOs into the audio processing graph. So while you are testing your sAPO, you must disable the protected process for Audiodg.exe. To disable the protected process, set the value of the DisableProtectedAudioDG registry key to '1'.
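For development, flipping that value programmatically might look like the sketch below. The key path used here (HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Audio) is the one commonly cited for Vista/7, but treat it as an assumption to verify against the whitepaper; administrator rights are required, and audiodg.exe must be restarted for the change to take effect:

    #include <windows.h>

    // Sets DisableProtectedAudioDG = 1 so unsigned sAPOs can load during
    // development. The key path is an assumption; verify before relying on it.
    bool DisableProtectedAudioDG()
    {
        HKEY key = NULL;
        if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                          L"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Audio",
                          0, KEY_SET_VALUE, &key) != ERROR_SUCCESS)
            return false;

        DWORD one = 1;
        LONG rc = RegSetValueExW(key, L"DisableProtectedAudioDG", 0, REG_DWORD,
                                 reinterpret_cast<const BYTE*>(&one), sizeof(one));
        RegCloseKey(key);
        return rc == ERROR_SUCCESS;  // restart audiodg.exe afterwards
    }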

streaming video to and from multiple sources

I wanted to get some ideas on how some of you would approach this problem.
I've got a robot that runs Linux and uses a webcam (with a v4l2 driver) as one of its sensors. I've written a control panel with gtkmm. Both the server and the client are written in C++; the server is the robot, the client is the "control panel". The image analysis happens on the robot, and I'd like to stream the video from the camera back to the control panel for two reasons:
A) for fun
B) to overlay image analysis results
So my question is: what are some good ways to stream video from the webcam to the control panel while still giving the robot code priority to process it? I'm not interested in writing my own video compression scheme and pushing it through the existing network port; a new network port (dedicated to video data) would be best, I think. The second part of the problem is how to display video in gtkmm. The video data arrives asynchronously, and I don't have control over main() in gtkmm, so I think that could be tricky.
I'm open to using things like vlc, gstreamer or any other general compression libraries I don't know about.
thanks!
EDIT:
The robot has a 1 GHz processor and runs a desktop-like version of Linux, but no X11.
GStreamer solves nearly all of this for you with very little effort, and also integrates nicely with the GLib event system. GStreamer includes V4L source plugins, GTK+ output widgets, various filters to resize / encode / decode the video, and best of all, network sinks and sources to move the data between machines.
For prototyping, you can use the 'gst-launch' tool to assemble video pipelines and test them; then it's fairly simple to create the pipelines programmatically in your code. Search for 'GStreamer network streaming' to see examples of people doing this with webcams and the like.
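As a hedged sketch, the robot-side pipeline could be built programmatically with gst_parse_launch. The element names below are GStreamer 1.0 names and the host/port are placeholders, so adapt both to your camera and network:

    #include <gst/gst.h>

    int main(int argc, char* argv[])
    {
        gst_init(&argc, &argv);

        // v4l2 source -> JPEG encode -> RTP payload -> UDP out to the panel.
        GError* err = NULL;
        GstElement* pipeline = gst_parse_launch(
            "v4l2src ! videoconvert ! jpegenc ! rtpjpegpay "
            "! udpsink host=192.168.1.10 port=5000",  // placeholder address
            &err);
        if (!pipeline) {
            g_printerr("Pipeline error: %s\n", err->message);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        GMainLoop* loop = g_main_loop_new(NULL, FALSE);
        g_main_loop_run(loop);  // blocks; quit from a bus watch in real code

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }

On the control-panel side, a matching udpsrc ! rtpjpegdepay ! jpegdec pipeline can feed a GTK+ video sink inside the gtkmm application.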
I'm not sure about the actual technologies used, but this can end up being a huge synchronization ***** if you want to avoid dropped frames. I was streaming video to a file and the network at the same time. What I eventually ended up doing was using a big circular buffer with three pointers: one write and two read (a sketch follows below). There were three control threads (and some additional encoding threads): one writing to the buffer, which would pause if it reached a point in the buffer not yet read by both of the others, and two reader threads that would read from the buffer and write to the file/network (and pause if they got ahead of the producer). Since everything was written and read as frames, synchronization overhead could be kept to a minimum.
My producer was a transcoder (fed from another file source), but in your case you may want the camera to produce whole frames in whatever format it normally does, and only do the transcoding (with something like ffmpeg) for the server, while the robot processes the image.
Your problem is a bit more complex, though, since the robot needs real-time feedback and so can't pause and wait for the streaming server to catch up. So you might want to get frames to the control system as fast as possible and separately buffer some of them in a circular buffer for streaming to the "control panel". Certain codecs handle dropped frames better than others, so if the network falls behind you can start overwriting frames at the end of the buffer (taking care that they're not being read).
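A minimal sketch of the one-writer/two-reader ring buffer described above, simplified to a single mutex and condition variable (a production version would likely use lock-free indices; Frame is a placeholder type):

    #include <algorithm>
    #include <array>
    #include <condition_variable>
    #include <mutex>
    #include <vector>

    struct Frame { std::vector<unsigned char> data; };

    class FrameRing {
        static const size_t N = 64;             // buffer capacity in frames
        std::array<Frame, N> slots;
        size_t writePos = 0;
        size_t readPos[2] = {0, 0};             // e.g. file writer, net sender
        std::mutex m;
        std::condition_variable cv;

        size_t slowestReader() const { return std::min(readPos[0], readPos[1]); }

    public:
        void push(Frame f) {
            std::unique_lock<std::mutex> lk(m);
            // The writer pauses when it would lap the slowest reader.
            cv.wait(lk, [&] { return writePos - slowestReader() < N; });
            slots[writePos % N] = std::move(f);
            ++writePos;
            cv.notify_all();
        }

        Frame pop(int reader) {                 // reader is 0 or 1
            std::unique_lock<std::mutex> lk(m);
            // Each reader pauses when it catches up with the producer.
            cv.wait(lk, [&] { return readPos[reader] < writePos; });
            Frame f = slots[readPos[reader] % N];
            ++readPos[reader];
            cv.notify_all();
            return f;
        }
    };

Because whole frames are the unit of exchange, the locking cost stays small relative to the per-frame work of encoding and sending.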
When you say 'a new video port' and then start talking about vlc/gstreamer, I find it hard to work out what you want. Obviously these software packages will assist in streaming and compressing via a number of protocols, but clearly you'll need a 'network port', not a 'video port', to send the stream.
If what you really mean is sending display output via a wireless video/TV feed, that's another matter; however, you'll need advice from hardware experts rather than software experts on that.
Moving on: I've done plenty of streaming over MMS/UDP protocols and vlc handles it very well (as both server and client). However, it's designed for desktops and may not be as lightweight as you want. Something like gstreamer, mencoder or ffmpeg, on the other hand, is going to be better, I think. What kind of CPU does the robot have? You'll need a bit of grunt if you're planning real-time compression.
On the client side I think you'll find a number of widgets to handle video in GTK. I would look into that before worrying about interface details.