How could I handle mouse input while streaming MJPEG video to browser? - c++

Let's suppose I'm streaming a video to a browser, for example using one of the solutions reported here:
Streaming openCV C++ video to the browser
Let's suppose I would like to select a region of the streamed video, for example with a rectangle drawn with the mouse, and then send the coordinates of the rectangle's vertices to the server.
I could connect the server and the client with websockets, for example, but I do not know how to reconcile two things: the server runs a streaming loop that cannot be interrupted, while at the same time the client has to watch for mouse input and send the mouse position to the server, which could then modify the streamed output depending on it (for example, I would like to apply a specific image filter to the selected rectangle).
How could I achieve this?
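For what it's worth, here is a minimal sketch of how the server side could look, assuming OpenCV and a websocket library that invokes a callback on its own thread (the names g_roi, onRoiMessage, and streamLoop are all made up). The key idea: the streaming loop is never interrupted; the websocket handler just updates shared state that the loop reads once per frame.

    #include <mutex>
    #include <opencv2/opencv.hpp>

    // Shared state: the rectangle last drawn in the browser.
    std::mutex g_roiMutex;
    cv::Rect g_roi;
    bool g_hasRoi = false;

    // Called from the websocket library's thread whenever the client
    // sends the rectangle's coordinates (e.g. parsed from JSON).
    void onRoiMessage(int x, int y, int w, int h)
    {
        std::lock_guard<std::mutex> lock(g_roiMutex);
        g_roi = cv::Rect(x, y, w, h);
        g_hasRoi = true;
    }

    // The existing MJPEG loop: it never blocks waiting for the client,
    // it just reads the latest rectangle once per frame.
    void streamLoop(cv::VideoCapture &cap)
    {
        cv::Mat frame;
        while (cap.read(frame)) {
            cv::Rect roi;
            {
                std::lock_guard<std::mutex> lock(g_roiMutex);
                if (g_hasRoi)   // clip to the frame, in case sizes differ
                    roi = g_roi & cv::Rect(0, 0, frame.cols, frame.rows);
            }
            if (roi.area() > 0)
                cv::GaussianBlur(frame(roi), frame(roi), cv::Size(15, 15), 0);

            // ... JPEG-encode `frame` and write it to the MJPEG response ...
        }
    }

The client-side half is ordinary JavaScript (a canvas overlay to draw the rectangle, plus a websocket send) and is omitted here.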

Related

QT widget How to get RGB Buffer from QCamera

I am porting a video streamer application to Qt (from HTML). I just got the webcam working in Qt, and now I want to know how I can get the RGB video buffer from a Qt camera. All the samples I can find capture an image to a file.
I am using Qt Widgets rather than QML, since it's a desktop application.
What I am trying to do is get the camera's frame buffer, compress it, and send it over the network.
I also want to trigger this manually: I want to request the next frame only when all the compression and sending is done, to prevent timing issues.
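In Qt 5, one way to do this is QVideoProbe, which taps the camera's frames without rendering them anywhere (Qt 6 dropped QVideoProbe in favour of QVideoSink::videoFrameChanged). A rough sketch, assuming your camera backend supports probing; the frame-handling lambda is illustrative:

    #include <QCamera>
    #include <QVideoProbe>
    #include <QVideoFrame>
    #include <QAbstractVideoBuffer>
    #include <QImage>

    void startCapture()
    {
        QCamera *camera = new QCamera;
        QVideoProbe *probe = new QVideoProbe;

        // Not every backend supports probing; setSource() reports that.
        if (probe->setSource(camera)) {
            QObject::connect(probe, &QVideoProbe::videoFrameAvailable,
                             [](const QVideoFrame &input) {
                QVideoFrame frame(input);   // shallow copy, so we can map it
                if (!frame.map(QAbstractVideoBuffer::ReadOnly))
                    return;
                QImage::Format fmt =
                    QVideoFrame::imageFormatFromPixelFormat(frame.pixelFormat());
                if (fmt != QImage::Format_Invalid) {
                    QImage img(frame.bits(), frame.width(), frame.height(),
                               frame.bytesPerLine(), fmt);
                    QImage rgb = img.convertToFormat(QImage::Format_RGB888);
                    // ... compress `rgb` and send it over the network; for
                    // manual pacing, drop frames here until the previous
                    // send has completed ...
                }
                frame.unmap();
            });
        }
        camera->start();
    }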

The best way to display video from camera in Qt WebKit bridge

I'm developing an application that loads frames from an Ethernet camera and displays these frames in an element within a Qt QWebView.
So I would like to ask: what is the best or most efficient way to display images in sequence from the camera, so that they appear as live video to the user?
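One approach (a sketch, not a benchmark-proven answer) is to use the WebKit bridge itself: expose a QObject to the page with addToJavaScriptWindowObject and push each frame as a base64 data: URL that the page's JavaScript assigns to an img element. The FrameBridge class below is hypothetical; overlaying a native widget on the view would likely be more efficient, but this keeps everything inside the page:

    #include <QObject>
    #include <QBuffer>
    #include <QImage>
    #include <QWebView>
    #include <QWebFrame>

    class FrameBridge : public QObject
    {
        Q_OBJECT
    signals:
        // Emitted once per camera frame with a data: URL the page can show.
        void frameReady(const QString &dataUrl);

    public:
        void pushFrame(const QImage &img)
        {
            QByteArray bytes;
            QBuffer buffer(&bytes);
            buffer.open(QIODevice::WriteOnly);
            img.save(&buffer, "JPEG", 70);   // quality vs. size trade-off
            emit frameReady(QStringLiteral("data:image/jpeg;base64,")
                            + QString::fromLatin1(bytes.toBase64()));
        }
    };

    // Registration, typically redone on javaScriptWindowObjectCleared:
    //   view->page()->mainFrame()->addToJavaScriptWindowObject("bridge", bridge);
    //
    // In the page:
    //   bridge.frameReady.connect(function (url) {
    //       document.getElementById("cam").src = url;
    //   });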

[C++|winapi] Can you access the video output of an application before it is displayed?

I want to capture the video output of an application using C++ and winapi, and stream it over the network. At the moment, I am capturing this output using a DirectShow filter: the application displays its video output on the screen, and I just capture whatever is there. I want to optimize this process.
My question is: Is there a way to capture the video/audio output of an application before it is displayed on the screen?
Thanks.
Capture video before it is shown?
It depends on how the application provides the video for you.
Real-time rendering - you can't access what doesn't exist yet. Video games, or any dynamically rendered content, only display the current state and may know nothing about future frames.
There is also a related anomaly called screen tearing, which appears when frames are presented out of sync with the screen's refresh.
Static displaying - all the data is already available. For example, if it's a video player application playing a file on your local machine, your only task is to read the data and capture it at the appropriate position in time.
Last but not least, every piece of hardware has a reaction time, a small delay to process data.
Also, there is a similar question: Fastest method of screen capturing on Windows
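For reference, this is roughly what the conventional after-the-fact capture looks like with plain GDI; note the comment on BitBlt - it only copies what the window has already rendered, which is the answer's point (a sketch; error handling omitted):

    #include <windows.h>
    #include <vector>

    // Grab one frame of a window's client area as 32-bit BGRA pixels.
    std::vector<BYTE> captureWindow(HWND hwnd, int &width, int &height)
    {
        RECT rc;
        GetClientRect(hwnd, &rc);
        width  = rc.right - rc.left;
        height = rc.bottom - rc.top;

        HDC hdcWindow = GetDC(hwnd);
        HDC hdcMem    = CreateCompatibleDC(hdcWindow);
        HBITMAP hbm   = CreateCompatibleBitmap(hdcWindow, width, height);
        HGDIOBJ old   = SelectObject(hdcMem, hbm);

        // Copies whatever the window has already rendered; nothing here
        // sees a frame before the application has drawn it.
        BitBlt(hdcMem, 0, 0, width, height, hdcWindow, 0, 0, SRCCOPY);
        SelectObject(hdcMem, old);   // deselect before GetDIBits

        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = width;
        bmi.bmiHeader.biHeight      = -height;   // negative = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        std::vector<BYTE> pixels(static_cast<size_t>(width) * height * 4);
        GetDIBits(hdcWindow, hbm, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS);

        DeleteObject(hbm);
        DeleteDC(hdcMem);
        ReleaseDC(hwnd, hdcWindow);
        return pixels;
    }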

Can flash detect a screenshot (or GDI bitcopy) being taken?

Can pixels be read covertly from a browser window containing flash + HTML?
(Is it possible for flash or the browser to detect a screenshot being taken?)
What about for other methods of capturing pixels? (Like the one described here: How to read the screen pixels?)
EDIT: (background info)
A C++ application is going to read pixels from a browser window (which happens to contain Flash plus some regular HTML and JavaScript). The browser window would like, if possible, to detect the fact that its pixels have been read. The pixels could be captured by any method (short of taking a photo of the screen itself).
For sure, you cannot detect someone using a framegrabber card to take a screenshot: it happens downstream of the graphics card's output, so there is no way you can be aware of it happening, and no way to detect it.
Otherwise, it's also pretty simple: an application can hook your browser and prevent any event from arriving, so the user can press PrintScreen as much as he wants and your browser (let alone your Flash runtime) never gets notified. Your app is limited to the browser, while a desktop app has lots of means to hook in and do things without notifying the browser.
(Think also about stuff like Linux/X-Windows, which will send the pixels over the wire, or RDP.)
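To illustrate the hooking point from the other side, here is a minimal sketch of a desktop app swallowing PrintScreen system-wide with a low-level keyboard hook, so neither the browser nor Flash ever sees the key. This runs entirely outside the browser, which is exactly why the browser cannot reliably detect or prevent it:

    #include <windows.h>

    static HHOOK g_hook = nullptr;

    LRESULT CALLBACK keyboardProc(int code, WPARAM wParam, LPARAM lParam)
    {
        if (code == HC_ACTION) {
            const KBDLLHOOKSTRUCT *kb =
                reinterpret_cast<const KBDLLHOOKSTRUCT *>(lParam);
            if (kb->vkCode == VK_SNAPSHOT)
                return 1;   // eat the keystroke; no window ever sees it
        }
        return CallNextHookEx(g_hook, code, wParam, lParam);
    }

    int main()
    {
        g_hook = SetWindowsHookExW(WH_KEYBOARD_LL, keyboardProc,
                                   GetModuleHandleW(nullptr), 0);
        MSG msg;   // a low-level hook needs a message loop on its thread
        while (GetMessageW(&msg, nullptr, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        UnhookWindowsHookEx(g_hook);
        return 0;
    }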

Playing transparent video over screen with custom user input handling

I need to play animated characters over the screen on Windows. Basically, it will be a character video with transparency, and only the non-transparent parts should accept user input (e.g. mouse clicks); all other events should pass through to the underlying window.
I've made a simple transparent DirectX window with video in it, but I don't know how to make parts of this window "transparent" to user input. If I click on the character, my application should receive the click; if I click on a transparent part of the video, the click should be handled by the underlying window. How can I do that?
Thanks in advance.
I assume you mean DirectShow rather than DirectX?
You can do it using the Video Mixing Renderer (VMR). As with anything DirectShow, it's not necessarily easy.
First, connect the video to the VMR filter.
Second, for the animating characters, all you need to do is build a simple DirectShow push source filter (it's explained really well in the DirectShow samples) that supplies the animation frames.
Third, implement the IVMRImageCompositor interface. You can then use DirectX to composite the images.
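For the input half of the question (which the steps above don't cover), one known Win32 technique is a per-pixel-alpha layered window: Windows hit-tests layered windows by pixel, so clicks on fully transparent pixels automatically fall through to whatever window is underneath. A sketch, assuming you can render each composited video frame into a 32-bit premultiplied-alpha memory DC (presentFrame and hdcFrame are hypothetical):

    #include <windows.h>

    // Push one 32-bit premultiplied-alpha video frame into a layered window.
    // hwnd must have been created with WS_EX_LAYERED, and hdcFrame is a
    // memory DC holding the current frame.
    void presentFrame(HWND hwnd, HDC hdcFrame, SIZE size)
    {
        POINT srcPos = { 0, 0 };
        BLENDFUNCTION blend = {};
        blend.BlendOp             = AC_SRC_OVER;
        blend.SourceConstantAlpha = 255;           // per-pixel alpha only
        blend.AlphaFormat         = AC_SRC_ALPHA;  // premultiplied BGRA

        // Clicks on pixels whose alpha is zero now pass straight through
        // to the window underneath; opaque pixels hit this window.
        UpdateLayeredWindow(hwnd, nullptr, nullptr, &size,
                            hdcFrame, &srcPos, 0, &blend, ULW_ALPHA);
    }

Manual WM_NCHITTEST / HTTRANSPARENT hit-testing also exists, but it only forwards input between windows of the same thread, so the layered-window route is usually the practical one for clicking through to other applications.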