Capture image (from background) when screen is locked - expo

I'm trying to implement functionality that captures an image through the front camera when someone tries and fails to unlock my device a number of times. I believe this is possible on Android, and some existing Android apps already do it. I'm not sure whether it's possible on iOS, however. Does anyone know whether a capture can be made while the screen is off and locked, using Expo or RNCamera? I can't even get the camera to capture while the app is away from the foreground on Android.
Any thoughts appreciated.

Related

What's the best approach to design this simple React Native AR app?

I'm trying to write a simple AR app in React Native. It should detect 4 predefined markers and draw a rectangle as a boundary on the live preview of the camera. The thing is, I'm trying to do the processing in C++ using OpenCV, so that the logic of the app lives in one place accessible to both Android and iOS.
Here's what I've been thinking:
1. Write the OS-dependent code to open the camera and get permissions in Java/ObjC, plus the C++ part that does the processing on each frame.
2. Call the C++ code (from within the native code) on each frame; it should return, let's say, coordinates for the markers.
3. Draw the rectangle on the preview in native code if 4 markers are found (no idea how to achieve this so far, but I think it will be native code).
4. Expose that preview (the live preview with the drawn rectangle) to React Native (not sure about this part or how to achieve it).
I've looked at the React Native camera component, but it doesn't provide access to frames. Even if that were possible, I'm not sure it would be a good idea to send frames over the bridge between JS and Java/ObjC.
The problem is that I'm not sure about the performance, or whether this is even possible.
If you know of any React Native library for this, that would be great.
Your steps seem sound. After processing the frame in C++, you will need to set the application properties RCTRootView.appProperties on iOS, and emit an event using RCTDeviceEventEmitter on Android. So you will need an Objective-C wrapper for your C++ code on iOS and a Java wrapper on Android. In either case, you should be able to use the same React Native code for actually drawing the rectangle on top of the camera preview. You're right that the React Native camera component does not have an API for getting individual frames from the camera, so you'll need to write that code natively for each platform.
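As a rough illustration of the shared C++ step, here is a minimal sketch assuming OpenCV's aruco module (from opencv_contrib, pre-4.7 API) stands in for your predefined markers; the function name, dictionary choice, and corner-picking logic are all placeholders:

// Hedged sketch of the per-frame C++ step. Each platform wrapper passes the
// camera frame in and ships the returned corners back over the bridge.
#include <opencv2/aruco.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Returns the rectangle's 4 vertices, or an empty vector if not enough markers.
std::vector<cv::Point2f> DetectBoundary(const cv::Mat &frameBGR) {
    static cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners;
    cv::aruco::detectMarkers(frameBGR, dict, corners, ids);

    std::vector<cv::Point2f> boundary;
    if (ids.size() >= 4) {
        // Take the first corner of each of the first four detected markers;
        // real code would match specific marker ids and order the points.
        for (int i = 0; i < 4; ++i)
            boundary.push_back(corners[i][0]);
    }
    return boundary;
}

Keeping the detection behind a single function like this keeps the Java and Objective-C wrappers thin: they only convert the frame to a cv::Mat and carry the four points back over the bridge.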

OpenGL - Display a video stream of the desktop on Windows

So I am trying to figure out how to get a video feed (or a screenshot feed, if I must) of the desktop using OpenGL on Windows and display it in a 3D environment. I plan to integrate this with ARToolKit to make essentially a virtual screen. The only issue is that I have tried manually getting the pixels in OpenGL, but I have been unable to display them properly in a 3D environment.
I apologize in advance that I do not have minimal runnable code, but with all the dependencies involved, getting ARToolKit code running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolKit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.
There are 2 different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation, you basically need a texture that gets updated on each frame. I recall that ARToolKit for Unity has an example that plays video.
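For the texture-update half on Windows, here is a minimal sketch of the classic GDI route, assuming the GL texture was already allocated elsewhere (e.g. with glTexImage2D) and that a BGRA upload is acceptable; the function and parameter names are mine:

// Hedged sketch: grab the desktop with GDI and push it into an existing
// OpenGL texture once per frame.
#include <windows.h>
#include <GL/gl.h>

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1 // from EXT_bgra; not in the stock gl.h
#endif

void CaptureDesktopToTexture(GLuint texture, int width, int height) {
    HDC screenDC = GetDC(nullptr);            // DC for the whole screen
    HDC memDC = CreateCompatibleDC(screenDC);

    // Top-down 32-bit DIB so the rows come out in a GL-friendly order.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height;          // negative height = top-down
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void *pixels = nullptr;
    HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &pixels, nullptr, 0);
    HGDIOBJ old = SelectObject(memDC, dib);

    // CAPTUREBLT also picks up layered windows.
    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY | CAPTUREBLT);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);

    SelectObject(memDC, old);
    DeleteObject(dib);
    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
}

As far as I know, GDI only sees the active desktop, so grabbing the other Windows 10 virtual desktops would need a different approach.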
Streaming the desktop to another device, however, is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me that what you want to do is make a VLC viewer and put it into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers.
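If you do go the VLC route, libVLC can decode a stream straight into a memory buffer that could then feed the augmentation texture. A hedged sketch against the libVLC 3.x C API; the URL, resolution, and buffer handling are placeholders:

// Hedged sketch: libVLC decoding a network stream into a CPU-side RGBA
// buffer instead of a window; each render pass uploads the buffer as a texture.
#include <vlc/vlc.h>
#include <mutex>
#include <vector>

static std::mutex g_mutex;
static std::vector<unsigned char> g_frame(1280 * 720 * 4); // RV32 frame buffer

static void *lock_cb(void *opaque, void **planes) {
    g_mutex.lock();
    *planes = g_frame.data(); // decode directly into our buffer
    return nullptr;           // picture identifier, unused here
}

static void unlock_cb(void *opaque, void *picture, void *const *planes) {
    g_mutex.unlock(); // g_frame now holds a complete frame
}

int main() {
    libvlc_instance_t *vlc = libvlc_new(0, nullptr);
    libvlc_media_t *media =
        libvlc_media_new_location(vlc, "rtsp://example.local/desktop"); // placeholder URL
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    // Render into memory instead of a window.
    libvlc_video_set_callbacks(player, lock_cb, unlock_cb, nullptr, nullptr);
    libvlc_video_set_format(player, "RV32", 1280, 720, 1280 * 4);

    libvlc_media_player_play(player);
    // ... per frame: lock g_mutex, upload g_frame to the GL texture ...
}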

[C++|winapi] Can you access the video output of an application before it is displayed?

I want to capture the video output of an application using C++ and winapi, and stream it over the network. At the moment I am capturing this output using a DirectShow filter: the application displays its video output on the screen, and I just capture whatever is there. I want to optimize this process.
My question is: Is there a way to capture the video/audio output of an application before it is displayed on the screen?
Thanks.
Capture video before it is shown?
It depends on how the application provides the video to you.
Real-time rendering - You can't access what doesn't exist yet. Video games, or any dynamic rendering, only display the current state and may know nothing about future frames.
There's also an anomaly called screen tearing, which appears when frame updates are not synchronized with the screen's refresh.
Static displaying - All the data is available already. For example, if it's a video player application playing a video from your local machine, your only task is to get the data and capture it at the right position in time.
Last but not least, every piece of hardware has a reaction time, a small delay to process data.
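That said, the closest winapi gets to per-application capture without scraping the composed desktop is asking the window to render into your own DC. A hedged sketch using PrintWindow (PW_RENDERFULLCONTENT needs Windows 8.1+, and finding the HWND, e.g. via FindWindow, is assumed to happen elsewhere):

// Hedged sketch: capture one application's window without touching the screen.
#include <windows.h>

#ifndef PW_RENDERFULLCONTENT
#define PW_RENDERFULLCONTENT 0x00000002 // missing from older SDK headers
#endif

HBITMAP CaptureWindowBitmap(HWND hwnd) {
    RECT rc;
    GetClientRect(hwnd, &rc);
    int width = rc.right - rc.left;
    int height = rc.bottom - rc.top;

    HDC windowDC = GetDC(hwnd);
    HDC memDC = CreateCompatibleDC(windowDC);
    HBITMAP bitmap = CreateCompatibleBitmap(windowDC, width, height);
    HGDIOBJ old = SelectObject(memDC, bitmap);

    // Ask the window to draw itself into our DC, bypassing the visible screen.
    PrintWindow(hwnd, memDC, PW_RENDERFULLCONTENT);

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(hwnd, windowDC);
    return bitmap; // caller owns the HBITMAP
}

This still captures what the application has already rendered, so it does not get around the real-time limitation above; it just avoids reading the composed desktop.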
Also, there is a similar question: Fastest method of screen capturing on Windows

Can Flash detect a screenshot (or GDI bitcopy) being taken?

Can pixels be read covertly from a browser window containing Flash + HTML?
(Is it possible for flash or the browser to detect a screenshot being taken?)
What about for other methods of capturing pixels? (Like the one described here: How to read the screen pixels?)
EDIT: (background info)
A C++ application is going to read pixels from a browser window (which happens to contain Flash plus some regular HTML and JavaScript). The browser window would like, if possible, to detect that its pixels have been read. The method of getting the pixels could be anything (short of taking a photo of the screen itself).
For sure, you cannot detect someone using a frame-grabber card to take a screenshot. There is no way to be aware of this happening, as it takes place beyond the graphics card's output. So for that route: no, there is no way you can detect it.
Otherwise, it's also pretty simple: some application can hook your browser and prevent any event from arriving, so the user can press PrintScreen as much as they want and your browser (let alone your Flash runtime) never gets notified. Your browser app is confined to the browser, while a desktop app has lots of means to hook into things without the browser ever being notified.
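To illustrate the asymmetry, here is a minimal sketch of that kind of desktop-side interception: a WH_KEYBOARD_LL hook that swallows PrintScreen before any window sees it. Nothing comparable is available from inside a browser page:

// Hedged sketch: a desktop app eating the PrintScreen key system-wide.
#include <windows.h>

static LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam) {
    if (code == HC_ACTION) {
        const KBDLLHOOKSTRUCT *kb = reinterpret_cast<KBDLLHOOKSTRUCT *>(lParam);
        if (kb->vkCode == VK_SNAPSHOT)
            return 1; // swallow the keystroke: no app ever sees PrintScreen
    }
    return CallNextHookEx(nullptr, code, wParam, lParam);
}

int main() {
    HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, KeyboardProc,
                                   GetModuleHandleW(nullptr), 0);

    // Low-level hooks are only called while this thread pumps messages.
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(hook);
    return 0;
}

Of course, blocking PrintScreen only stops the lazy path; BitBlt-style reads of the screen never generate a key event to intercept in the first place.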
(Think also about stuff like Linux/X-Windows, which will send the pixels over the wire, or RDP.)

How to hook webcam capture?

I'm working on software whose current version includes a custom device driver for a webcam. We use this driver with our software to change the captured image before displaying it, very similar to YouCam.
Basically, when any application that uses the webcam starts, our driver runs processing on each frame before showing it.
The problem is that there are always "2" webcams installed: the real one and our custom driver.
I noticed that YouCam does what we need, which is to hook some method into any installed webcam so that each frame is processed before it is shown.
Does anyone know how to do this?
We use VC++.
Thanks
As bkritzer said, OpenCV easily does what you want.
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main ()
{
    IplImage *image = 0;    // OpenCV image type
    CvCapture *capture = 0; // OpenCV capture handle
    // Create capture from the first webcam
    capture = cvCaptureFromCAM (0);
    if (!capture)
    {
        fprintf (stderr, "Can't connect webcam\n");
        return 1;
    }
    int stillCapturing = 1;
    int refreshTime = 30; // ms between frames
    // Capture images
    while (stillCapturing)
    {
        // Grab image
        cvGrabFrame (capture);
        // Retrieve image (the buffer is owned by the capture; don't free it)
        image = cvRetrieveFrame (capture);
        // You can configure the refresh time
        if (image) cvWaitKey (refreshTime);
        // Process your image here
        //...
    }
    cvReleaseCapture (&capture);
    return 0;
}
You can encapsulate these OpenCV calls into a C++ class and dedicate a specific thread to them -- this will be your "driver".
I think that YouCam uses a DirectShow transform filter. Is that what you need?
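For reference, here is a hedged skeleton of what such a transform filter looks like, built on the DirectShow base classes (streams.h / strmbase.lib); the CLSID is a placeholder and the per-frame processing is left as a stub:

// Hedged skeleton: an in-place DirectShow transform filter that gets to edit
// every frame between the capture source and the renderer.
#include <streams.h>

// Placeholder GUID -- generate your own with guidgen.
static const GUID CLSID_FrameFX =
    { 0x0, 0x0, 0x0, { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1 } };

class CFrameFX : public CTransInPlaceFilter {
public:
    CFrameFX(LPUNKNOWN pUnk, HRESULT *phr)
        : CTransInPlaceFilter(NAME("FrameFX"), pUnk, CLSID_FrameFX, phr) {}

    // Accept only uncompressed RGB24 so Transform can touch raw pixels.
    HRESULT CheckInputType(const CMediaType *mtIn) override {
        return (*mtIn->Type() == MEDIATYPE_Video &&
                *mtIn->Subtype() == MEDIASUBTYPE_RGB24)
                   ? S_OK : VFW_E_TYPE_NOT_ACCEPTED;
    }

    // Called once per frame, before the renderer displays it.
    HRESULT Transform(IMediaSample *pSample) override {
        BYTE *pixels = nullptr;
        pSample->GetPointer(&pixels);
        // ... process pixels in place (overlay, color effect, etc.) ...
        return S_OK;
    }
};

Note that to affect other applications' capture, a filter like this still has to end up in their graphs, typically by registering it as part of a virtual camera source wrapping the real device, which relates to the distinction the final answer below draws between hooking another application and writing your own.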
Check out the OpenCV libraries. They come with a bunch of tutorial examples that do exactly what you're asking for. It's a bit tough to install, but I've gotten it to work before.
Well, I think there are 2 key concepts in this question that have been misunderstood:
1) How to hook webcam capture
2) ...any application that uses the webcam...
If I understood right, OpenCV is useful for writing your own complete application, "complete" meaning that it opens the camera and processes the images itself. So it wouldn't satisfy point 2), which I understand as referring to some other application (not yours!) opening the camera, with your application processing the images.
Point 1) seems to confirm this, because "hook" usually means intercepting some other process that is not part of your own application.
So I doubt whether this question has really been answered. I am also interested in it.